Moses’ Last Exodus

Wilmington, Del., Nov. 30, 1860

The knock came after dark. Hastening to answer it, the old Quaker found a familiar figure in the doorway: a tiny, dark-skinned woman, barely five feet tall, with a kerchief wrapped around her head. Someone who didn’t know her might have taken her for an ordinary poor black woman begging alms – were it not for her eyes. Wide-set, deep-socketed and commanding, they were the eyes not of a pauper or slave, but of an Old Testament hero, a nemesis of pharaohs and kings.

Harriet Tubman, circa 1860s.

Five others followed her: a man and woman, two little girls and, cradled in a basket, the swaddled form of a tiny infant, uncannily silent and still. They had braved many dangers and hardships together to reach this place of safety, trusting their lives to the woman known as “the Moses of her people.”

As politicians throughout the country debated secession and young men drilled for war, Harriet Tubman had been plotting a mission into the heart of slave territory. She did not know that it would be her last. Over the past 10 years, she had undertaken about a dozen clandestine journeys to the lower Eastern Shore of Maryland, the place from which she herself had escaped in 1849. She had managed to bring some six dozen people – most of them family and friends – across the Mason-Dixon Line into freedom, then across the Canadian border to safety. But Tubman had never managed to liberate several of her closest relatives: her younger sister Rachel and Rachel’s two children, Ben and Angerine. In the autumn of 1860, she decided to rescue them.

Slave ads from a newspaper on the Eastern Shore of Maryland, 1859.

Although it lay on the border between North and South and had few large plantations, the part of Maryland east of the Chesapeake Bay was an especially hazardous place to be a slave. Soil depletion and economic stagnation had left many local planters with more field hands than they needed – as well as chronically short of cash. By the mid-19th century, the Eastern Shore had become known as one of the nation’s principal “breeder” regions, where slaves were frequently sold to slave traders, speculators who sent them south to the burgeoning cotton and sugar plantations of the Gulf Coast. As a child, Tubman had seen two of her own sisters sold away, and heard her parents’ anguished tales of others taken before her birth. Four of her remaining siblings had escaped, three of them helped by their sister Harriet. Only Rachel had remained.

By this time, Tubman was well connected to the nationwide abolitionist movement, and before departing, she raised money for the trip (and for possible bribes along the way) from Wendell Phillips and other activists. She set out from her home in Auburn, N.Y., and by mid-November she was in Maryland.

Tubman arrived to learn that her sister would never know freedom: Rachel had died a short time earlier. There were still the two children, her niece and nephew, to rescue. Here too, Tubman failed. She set a rendezvous point in the woods near the plantation where the two were held, but they failed to appear at the appointed time. Tubman waited all through that night and the following one, crouching behind a tree for shelter from the wind and driving snow. At last she gave up. Ben and Angerine’s fate is unknown.

Ad for a runaway slave, in Macon (Georgia) Daily Telegraph, Nov. 30, 1860.

Tubman had, however, found another family that was ready to seek freedom: Stephen and Maria Ennals and their children, six-year-old Harriet, four-year-old Amanda and a three-month-old infant. (One or two other men may have joined them as well.) The fugitives made their way up the peninsula, traveling mostly by night. Once, they were pursued by slave patrollers alerted to their presence. The escapees hid on an island in the middle of a swamp, covering the baby in a basket. Eventually a lone white man appeared, strolling casually along the edge of the marsh, seemingly talking to himself. They realized he was an agent of the Underground Railroad, telling them how to reach a barn where they could take shelter.

As they continued on their journey, Tubman would go out each day in search of food while the Ennalses hid in the woods, their baby drugged with an opiate to keep it from crying. Returning at the end of the day, Tubman would softly sing a hymn until they heard her and reemerged:

Hail, oh hail, ye happy spirits,
Death no more shall make you fear,
Grief nor sorrow, pain nor anguish,
Shall no more distress you dere.

Even as the group approached Wilmington, it was not yet out of danger: Delaware was still officially a slave state. In fact, due to the Fugitive Slave Act of 1850, the escapees could have been recaptured anywhere in the North and returned to bondage. Tubman herself could have been re-enslaved, or – as an abettor of fugitives – sentenced to spend the rest of her life in a Maryland prison. But at last, on the night of Nov. 30, she reached the house of the elderly Quaker, Thomas Garrett, a leading Underground Railroad “conductor” who would smuggle the Ennals family to relative safety in Philadelphia.

Although the Underground Railroad had already become famous – and, for many Americans, infamous – only a tiny percentage of slaves managed to escape to the North: estimates have put the number at just a thousand or so each year out of a total enslaved population of some four million. Still, these fugitives were a major bone of contention for disgruntled Southerners. An adult field hand could cost as much as $2,000, the equivalent of a substantial house. To Southerners, then, anyone who helped a man or woman escape bondage was simply a thief. But more infuriating than the monetary loss it occasioned, the Underground Railroad was an affront to the slaveholders’ pride – and a rebuke to those who insisted that black men and women were comfortable and contented in bondage.

In an 1860 speech, Senator Robert Toombs of Georgia thundered against Republicans “engaged in stealing our property” and thus “daily committing offences against the people and property of these … States, which, by the laws of nations, are good and sufficient causes of war.” As secession loomed, some Northerners attempted to soothe such fears. A New York Times editorial suggested not only that stronger efforts be made to enforce the Fugitive Slave Act, but that the federal government compensate slaveholders for their escaped “property.”

Tubman was back in Auburn by Christmas Day, 1860, having conveyed the Ennals family safely to Canada. (Abolitionists often noted the irony of Americans fleeing the “land of liberty” to seek freedom under Queen Victoria’s sheltering scepter.) Her secret missions ended with the approach of war.

But one night in the midst of the secession crisis, while staying at the house of another black leader, a vision came to Tubman in a dream that all of America’s slaves were soon to be liberated – a vision so powerful that she rose from bed singing. Her host tried in vain to quiet her; perhaps their grandchildren would live to see the day of jubilee, he said, but they themselves surely would not. “I tell you, sir, you’ll see it, and you’ll see it soon,” she retorted, and sang again: “My people are free! My people are free.”

Sources: Kate Clifford Larson, “Bound for the Promised Land: Harriet Tubman, Portrait of an American Hero”; William Still, “The Underground Rail Road”; Sarah H. Bradford, “Harriet, the Moses of Her People”; Catherine Clinton, “Harriet Tubman: The Road to Freedom”; Fergus Bordewich, “Bound for Canaan: The Underground Railroad and the War for the Soul of America”; James A. McGowan, “Station Master on the Underground Railroad: The Life and Letters of Thomas Garrett”; “Speech of Robert Toombs, of Ga., Delivered in the Senate of the U.S. January 24, 1860”; New York Times, Dec. 10, 1860.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/29/moses-last-exodus/

Cleopatra’s Guide to Good Governance

LET’S say you can’t readily lay your hands on “Leadership Secrets of Attila the Hun” or those of Winnie the Pooh. And let’s say the political mood around you is bleak; gridlock is the order of the day. Why not turn to a different management guru, a woman who left some 2,000-year-old teachable moments, each of them enduring and essential?

At 18, Cleopatra VII inherited the most lucrative enterprise in existence, the envy of her world. Everyone for miles around worked for her. Anything they grew or manufactured enriched her coffers. She had the administrative apparatus and the miles of paperwork to prove it.

From the moment she woke she wrangled with military and managerial decisions. The crush of state business consumed her day. Partisan interests threatened to trip her up at every turn; she observed enough court intrigue to make a Medici blush. To complicate matters, she was highly vulnerable to a hostile takeover. Oh, and she looked very little like the other statesmen with whom she did business.

Herewith her leadership secrets, a papyrus primer for modern-day Washington:

Obliterate your rivals. Co-opting the competition is good. Eliminating it is better. Cleopatra made quick work of her siblings, which sounds uncouth. As Plutarch noted, however, such behavior was axiomatic among sovereigns. It happened in the best of families.

The royal rules for dispensing with blood relatives were as inflexible as those of geometry. Cleopatra lost one brother in her civil war against him; allegedly poisoned a second; arranged the murder of her surviving sister. She thereafter reigned supreme.

Does this suggest by extension that a family business is a bad idea? It does.

Don’t confuse business with pleasure. The two have a chronic tendency to invade each other’s territory. But what were John Edwards, Mark Hurd, Mark Sanford and Eliot Spitzer thinking?

If you’re going to seduce someone, set your sights high. Cleopatra fell in with the most celebrated military commanders of her day, sequentially allying herself and producing children with her white knights, Julius Caesar and Mark Antony. As she demonstrated, the idea is to kiss your way up the ladder. Along the same lines, there was an ancient world equivalent of the hire-an-assistant-of-whom-your-spouse-can’t-be-jealous wisdom. Cleopatra surrounded herself with eunuchs. They got into less trouble than did other aides, or at least different kinds of trouble.

Appearances count. As President Obama has learned and unlearned, theater works wonders. You may campaign in poetry, but you are wise to govern in pageantry. Deliver carnivals rather than tutorials; a little vulgarity goes a long way. Just wear the flag pin already.

Leadership is a trick of perception, a bit of wisdom Shakespeare lent Henry IV, to pass along to Prince Hal. And if you intend to command, look the part. Work boots with a suit are always a nice touch when you’re the head of the Coalition Provisional Authority in an occupied Middle Eastern country, for example. Make something of a spectacle of yourself. Yes, you can do that in jeans and a black turtleneck. In a televised world as in a pre-print era, it’s the stage management that counts. Literally or not, the idea is to create and star in your own reality show.

Go big or go home. Cleopatra appeared before Antony at an age when, according to Plutarch, “women have most brilliant beauty and are at the acme of intellectual power,” a moment every woman knows to be several years behind her. But no matter. Cleopatra took with her extravagant gifts, chests of money, rich textiles. She left behind the boxed sets of DVDs and scale models of Marine One. She traveled on a gilded barge with purple sails, amid a cloud of incense. She laid out carpets of roses. To Antony’s officers she handed around gem-studded vessels, couches, sideboards, tapestries, horses, torch-bearing Ethiopian slaves. It was not surprising that the most astute of Antony’s generals should several years later vouch for her military genius.


Never get involved in a land war in Asia. Millenniums before Wallace Shawn delivered up that pearl of wisdom in “The Princess Bride,” Cleopatra seems to have intuited as much. She nonetheless financed Antony’s military expedition to the restive area east of the Tigris, a multiethnic, multicultural region of shifting alliances, one that had resisted 30 years of Roman efforts at organization. The Roman general who had last ventured that way had not returned. His severed head wound up as a prop in a royal production of Euripides. His legions were slaughtered. Antony fared only marginally better. Asian allies double-crossed him. Guerrilla tactics and treacherous geography undid him. At the conclusion of a demoralizing campaign and a disastrous retreat he had lost some 24,000 men. Cleopatra bailed him out.

Underpromise and overdeliver. Cleopatra comported herself flamboyantly and delivered on drama. But occasionally — despite a huge staff that included pages and scribes, masseurs and tasters, lamplighters and pearl-setters — something slipped through the cracks.

Alas, such was the case in her dealings with Cicero, who left only damning lines about the Egyptian queen, whom he would not deign even to mention by name. He had little reason to be inclined toward a rich and foreign female sovereign. But the animus derived from something else. Cleopatra had promised Cicero a manuscript — it may have been one from her library in Alexandria — on which she failed to deliver. The oversight sealed her fate for posterity. No one has ever paid so lasting a price for a forgotten library book.

It pays to sweat the details, as Newt Gingrich reminded us when he shut down the federal government in 1995, after he was assigned a lousy seat on Air Force One.

If you can’t pay your debts, debase your currency. Egypt’s economic affairs were dismal when Cleopatra ascended to the throne. She devalued the currency by a third. She issued no gold and critically lowered the value of her kingdom’s silver. And she ushered in a great innovation: she introduced coins of various denominations. In an early prefiguring of paper currency, the markings rather than the metal content determined their value. A coin might feel light in the hand, but if Cleopatra said it was worth 80 drachmae, it was worth 80 drachmae. The arrangement both enriched her and encouraged an export-driven economy.


A friend of a friend may well be an enemy. Cleopatra’s charm was said to be irresistible, her presence spellbinding. But one person on whom she failed to work her magic was Herod.

Well before religion clouded the picture, the Queen of Egypt and the King of Judaea were rivals for Rome’s friendship. Cleopatra did everything in her power to frustrate Herod. She kept him as far from Antony as possible and claimed proceeds from Judaea’s most lucrative natural resources. At one point she incited a war between Herod and his Arab neighbors the Nabateans, ordering her commander in the region to prolong the contest as long as possible. She counted on them to destroy each other, which they did not. Cleopatra did supply Herod with further reason to malign her in Rome, however.

Good neighbors make good fences. Shortly after the war between Herod and the Nabateans, Julius Caesar’s adopted son Octavian soundly defeated Cleopatra at the battle of Actium. She retreated to Alexandria, from which she attempted several escapes. In one particularly bold maneuver, she dragged her Mediterranean fleet 40 miles overland in order to relaunch it, via the Gulf of Suez, into the Red Sea. Both the bravado and the engineering were staggering. Cleopatra essentially anticipated the Suez Canal.

The tribe on the far side of the Gulf was unfortunately the Nabateans, newly recovered from their costly war with Herod. They set fire to each of Cleopatra’s ships as it reached their shore.

Unsurprisingly, Herod was happy to escort the conquering Octavian directly to the Egyptian border. He saw to it that the Romans lacked nothing for the desert march ahead. Several weeks later Cleopatra was dead.


Control the narrative. Cleopatra understood well that the storytelling mattered as much as the decision-making, and that the best narrative is the easy-to-follow narrative.

She discovered early on that it helps to have a god on your side — or to claim to speak for one. She remained at all times on-message, truthfully and not. She cruised the Nile with Julius Caesar, a splendid advertisement of Egyptian abundance to her Roman visitor and of Roman military might to her people. After her defeat at Actium, she sailed back to Alexandria with head high, passing off a mission entirely botched as one expertly accomplished.

She astutely manipulated the nomenclature; as mission statements go, you can’t do better than the title she adopted at 32: “Queen Cleopatra, the Goddess, the Younger, Father-Loving and Fatherland-Loving.”

The problems came later. Her enemies wrote her history, reducing her shrewd politics and managerial competence to sexual manipulation. As one contemporary noted, “How much more attention people pay to their fears than to their memories!” It’s rarely about the library book, but so much easier to claim it is. And you never know who’s going to end up addressing posterity.

It could be Newt Gingrich.

Stacy Schiff is the author of “Cleopatra: A Life.”

___________

Full article and photo: http://www.nytimes.com/2010/12/05/opinion/05schiff.html

Famine in Kansas

Atchison, Kansas Territory, Dec. 9, 1860

Street in Atchison, Kansas.

They converged from far and wide on the dusty border town: grim-faced men and women driving teams of staggering oxen; children whose bare and filthy feet were blistered by the hard-baked earth. Not long before, these same trails and same oxen had brought the settlers westward into new lives, new lands, the promise of plenty. Now misery and starvation drove them back, exiles retracing their steps east – fleeing, in the words of a New York Times writer, “as if Death were in the rear.”

A Chicago Tribune correspondent, freshly arrived in Atchison that day, found dozens lined up with their wagons along the Missouri River levees, awaiting handouts of free foodstuffs. “Such a scene!” he wrote. “Great, stalwart men, gaunt, lean, hungry, looking weary, sad, tired, and dispirited; poorly clad, and in all respects filling one with the conviction of suffering patiently borne and long repressed – men, some of whom I recognized, and all of whom bore the unmistakable character of sturdy industry and independence common to our western pioneers.”

An appeal from the Chicago Tribune.

Spotting the stranger’s notebook, the Kansans crowded around to share their stories. A settler from Butler County, G.T. Donaldson, told of crops devoured by grasshoppers, cattle felled by disease, and relentless drought that sealed the overall ruin. A “small, keen eyed” farmer named A.V. Saunders had driven his ox teams more than 200 miles to fetch provisions for his beleaguered rural community; after a week of waiting, he had finally been issued just 12 sacks of meal and eight sacks of potatoes for the 400 inhabitants. Another “forlorn looking man” made a particular impression on the curious journalist:

He was literally clothed in rags. Such a tatterdemalion one can scarcely conceive of. His garments, originally home-spun, had been patched with so many different materials, mostly varieties of bed-ticking and sacking, that the feeble threads would no longer hold together, and the shreds were flopping about him as he walked. His face was haggard and hunger-worn; cheek-bones protruded; flesh had shrunk away, and his eyes were hollow and eager, and had the terrible starved look in them which I saw once in a famine-stricken party of Irish, in ’47, and which I shall never forget. I will tell his story, as near as may be, in his own words:

‘My name is Abraham Huck. I’m from Ilenoy. Came to Kansas last March, and hired a place on Deer Creek, Anderson County…. I’ve got a wife and eight children. Left home last Sunday (six days before). Wife and one of the children’s with me. Left seven at home, with some turnips and a peck of meal.’

‘What,’ I said, ‘a peck of meal for seven?’

‘That’s all, Sir, and we’ve had nothing to eat on the road for three days, except the little I’ve begged. … I planted fifty-five acres [this year], and harvested five bushels of wormy corn.’

The reporter added, by way of comparison, that a peck of meal represented a week’s rations for a single slave in the cotton South.

Kansas seemed cursed by both nature and man. Beginning in 1854, the nation had watched in horror as struggles between pro-slavery and free-soil pioneers devolved into a nightmare of torched and looted towns, murdered civilians and anarchy cloaked in the false trappings of justice. The nation’s leaders, cynically viewing the territory as nothing but a square in the political chess match between North and South, had conspired in its ruin. “The game must be played boldly,” urged the town of Atchison’s namesake, a senator from neighboring Missouri. “If we win we carry slavery to the Pacific Ocean.” In another letter, to his Senate colleague Jefferson Davis, he wrote: “We will be compelled to shoot, burn & hang, but the thing will soon be over.”

There had indeed been shooting, burning and hanging – but the thing had not ended as quickly as Senator David Rice Atchison had anticipated. Nor had the pro-slavery forces won the game. In October 1859, a popular referendum finally declared the ravaged territory to be free soil, and a bill for statehood began making its way through Congress. At last peace came – along with settlers by the thousands.

And then the elements themselves conspired against the land. Like a modern-day version of plague-ridden Egypt, “Bleeding Kansas” became Starving Kansas. Rains ceased; from the spring through the autumn of 1860, barely enough fell to dampen the surface of the soil. Temperatures reached 105 degrees in the shade. “The hot wind sweeps over the land blinding one with the dust or blistering the skin,” one settler wrote. “The poor squatter looks to his withered crops and sits down in despair.” Another Kansan described the conditions as “only fit for a Hottentot, accustomed to the ardors of the Sahara.” As many as a third of the territory’s 100,000 white inhabitants packed up their scanty belongings and trudged back toward the eastern states whence they had come.

Some blamed politicians for these latest calamities, scarcely less than for the bloodshed of years past. “The ills which Kansas endures are very largely derived from the misgovernment of [James Buchanan’s] administration,” declared an editorial in The Times. “Drought is not a visitation of Presidents but of Providence; but the poverty which preceded the bad harvest, and which renders the people wholly unable to support the deficiency of breadstuffs, is well known to have originated chiefly in the … savage and vindictive mismanagement [that] the affairs of the Territory have been deliberately subjected to.” Other critics charged that Republicans shared the guilt: party leaders had initially downplayed the emergency’s severity, allegedly because they did not want to undercut political fundraising while the presidential election hung in the balance. (Campaigning in Lawrence at the end of September, William H. Seward declared sanguinely that he had “carefully examined the condition of … the river bottoms and the prairies” and concluded that “there will be no famine in Kansas.”)

In the weeks after Abraham Lincoln’s election, as the chill of impending winter began gripping the heartland, Americans had finally begun paying attention to the disaster in the Midwest. (Savvy Kansans whipped up interest by sounding the alarm that pro-slavery raiders – known as “pukes” – were once again preparing to invade.) From New York, Chicago and other cities, donations poured into Atchison, the western terminus of the railway and designated base of the relief efforts. On Dec. 12, at the urging of such leading antislavery Republicans as Horace Greeley and William Cullen Bryant, citizens held a rally at the Cooper Union in Manhattan and raised the respectable sum of $1,200 toward the cause. Even President Buchanan closed his annual message by turning his attention from the secession crisis and asking Congress to aid the Kansas sufferers, “if any constitutional measure for their relief can be devised.”

The grim headlines from Atchison, side-by-side with those from secession-mad Charleston, fueled Americans’ forebodings that their nation had entered its end times – perhaps even that God was meting out a terrible judgment for their sins, just as he had done to Pharaoh and the slaveholding Egyptians. The image of free American citizens emaciated and in rags – apparently fed and clothed even worse than Southern slaves – was terrible to contemplate.

“The men whom I see waiting here for their scanty supplies, are the bone and sinew of the West,” wrote the Tribune correspondent. “They are men who are blazing the way of the American people across this Continent, and are laying broad and deep the foundations of free institutions. It is a question for Americans to consider whether these men shall be sustained in this their hour of dire misfortune.” If they could not be, what hope was there for those free institutions themselves?

Sources: Chicago Tribune, Dec. 13, 1860; New York Times, Oct. 3, Nov. 1, Nov. 19, Dec. 10 and Dec. 13, 1860; Craig Miner, “Kansas: The History of the Sunflower State, 1854-2000”; Joseph G. Gambone, “Starving Kansas: The Great Drought and Famine of 1859-60” (American West, July 1971); James McPherson, “Battle Cry of Freedom: The Civil War Era”; Lynda Lasswell Crist, ed., “The Papers of Jefferson Davis, 1853-1855”; Jay Monaghan, “Civil War on the Western Border, 1854-1865”; Sheffield Ingalls, “History of Atchison County, Kansas”; George W. Glick, “The Drought of 1860” (Transactions of the Kansas State Historical Society, 1905-1906); Thaddeus Hyatt, “The Prayer of Thaddeus Hyatt to James Buchanan, President of the United States, in Behalf of Kansas”; William H. Seward, speech at Lawrence, Kans., Sept. 26, 1860; James Buchanan, Annual Message to Congress, Dec. 3, 1860.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/12/08/famine-in-kansas/

On Mrs. Kennedy’s Detail

IT was with great trepidation that I approached 3704 N Street in Washington on Nov. 10, 1960. I had just been given the assignment of providing protection for the wife of the newly elected president of the United States, and I was about to meet her for the first time.

I soon realized I had little to worry about. Jacqueline Bouvier Kennedy, just 31 years old at the time, was a gracious woman who put me immediately at ease. She was the first lady, but she was also a caring mother; her daughter, Caroline, was nearly 3 years old, and she was pregnant with her second child. Three weeks later, she went into early labor with John Jr., and I followed her through the entire process. It would be the first of many experiences we would have together.

Being on the first lady’s detail was a lot different from being on the president’s. It was just the two of us, traveling the world together. Mrs. Kennedy was active and energetic — she loved to play tennis, water-ski and ride horses. She had a great sense of humor, and we grew to trust and confide in each other, as close friends do.

In early 1963, Mrs. Kennedy shared with me the happy news that she was pregnant again. She had curtailed her physical activities and had settled into a routine at the Kennedy compound in Hyannis Port, Mass., for the last few months of her pregnancy. I was on a rare day off when I got the call that she had gone into early labor. I raced to the hospital at Otis Air Force Base, arriving shortly after she did.

The president, who had been in Washington, arrived soon after she delivered their new baby boy, whom they named Patrick Bouvier Kennedy.

When Patrick died two days later, Mrs. Kennedy was devastated. I felt as if my own son had died, and we grieved together.

The following weeks were difficult as I watched her fall into a deep depression. Eventually, it was suggested that she needed to get away. In October 1963 I traveled with her to the Mediterranean, where we stayed aboard Aristotle Onassis’ yacht, the Christina. The trip to Greece, Turkey and Yugoslavia, along with a short stop in Morocco, seemed to be good therapy, and by the time we returned to Washington the light had returned to her eyes.

I was surprised, however, when not long after our return Mrs. Kennedy decided to join her husband on his trip to Texas. It was so soon after the loss of her son, and she hadn’t accompanied the president on any domestic political trips since his election.

Nevertheless, when we left the White House on Thursday, Nov. 21, I could tell that Mrs. Kennedy was truly excited. I remember thinking this would be a real test of her recovery, and that if she enjoyed the campaigning it would probably be a regular occurrence as soon as the 1964 race got into full swing.

The first day of the trip was exhausting. We had motorcades in San Antonio, Houston and finally Fort Worth, where we arrived around midnight. It had been a long day for everyone, and Mrs. Kennedy was drained.

On the morning of Nov. 22, I went to her room at the Hotel Texas to bring her down to the breakfast where President John F. Kennedy was speaking. She was refreshed and eager to head to Dallas. She had chosen a pink suit with a matching hat to wear at their many appearances that day, and she looked exquisite.

The motorcade began like any of the many that I had been a part of as an agent — with the adrenaline flowing, the members of the detail on alert. I was riding on the running board of the car just behind the president’s.

We were traveling through Dallas en route to the Trade Mart, where the president was to give a lunchtime speech, when I heard an explosive noise from my right rear. As I turned toward the sound, I scanned the presidential limousine and saw the president grab at his throat and lurch to the left.

I jumped off the running board and ran toward his car. I was so focused on getting to the president and Mrs. Kennedy to provide them cover that I didn’t hear the second shot.

I was just feet away when I heard and felt the effects of a third shot. It hit the president in the upper right rear of his head, and blood was everywhere. Once in the back seat, I threw myself on top of the president and first lady so that if another shot came, it would hit me instead.

The detail went into action. We didn’t stop to think about what happened; our every move and thought went into rushing the president and Mrs. Kennedy to the nearest hospital.

I stayed by Mrs. Kennedy’s side for the next four days. The woman who just a few days before had been so happy and exuberant about this trip to Texas was in deep shock. Her eyes reflected the sorrow of the nation and the world — a sorrow we still feel today.

Clint Hill, a former assistant director of the Secret Service, served under five presidents.

__________
Full article and photo: http://www.nytimes.com/2010/11/22/opinion/22hill.html

Drama in Milledgeville

Nov. 16 – 22, 1860

With ardent secessionist activity in South Carolina having reached a heated peak a week ago, a pregnant pause has followed. Until the secession convention comes to order in December, the focus of the disunion crisis has shifted elsewhere.

In Georgia, men of probity and wisdom tried to decide what to do about secession.

In Washington, men of probity and wisdom tried to decide what to do about secession.

And in Springfield, Ill., a man of probity and wisdom reached a firm decision. By all accounts, the beard is coming in nicely.

Lodged between the deep South’s slave-rich Atlantic coast states and the just-developing Mississippi Valley states, rich, large Georgia is key to most of the secessionists’ plans. But with two regions that are relatively slave-free — the pine barrens in the southeast and the mountains in the north near Tennessee — Georgia’s appetite for secession is not everywhere so keen.

Knowing that it can’t treat this issue as South Carolina has, Georgia’s state legislature decided that before deliberating on the question of secession, it would hear the views of its brightest minds, or at least the brightest minds that don’t happen to belong to state legislators. And so last week, two dozen men traveled to the state capital in Milledgeville to offer their views.

Almost immediately two main schools of thought emerged, the Separatist and the Cooperationist. The Separatists support the idea that Georgia can and should leave the union on its own, regardless of what any other state does. The Cooperationists have mixed views about secession, but are united in their opposition to unilateral action; whatever Georgia does, they say, Georgia should do in concert with the other Southern states.

Some Cooperationists favor secession outright; others support it only as a last resort, pending the outcome of negotiations with the North; still others would support it if and only if the North responds militarily to the South’s demands or to a southern state’s departure. The Separatists, too, have internal divisions. Most are urging the departing states to combine into a new nation, but some support secession as a mere tactic: they believe the South should rejoin the union once the North offers concessions on slavery, as they are confident it will.

The presentations took place over five evenings, and the flickering candelabras heightened the feelings of drama in the chamber. Right at the outset, the separatists boldly seized the rhetorical heights of the debate and in truth, never relinquished them. Disunion or dishonor — that’s how their first speaker, the legal scholar Thomas R.R. Cobb, starkly defined the legislature’s choice.

Momentarily modulating his emotions, Cobb argued that wisdom, not passion, should guide the legislators’ decisions, but then called upon them to think — wisely, mind you, not passionately — of their families. Remember the parting moment when you left your firesides to come to the capital. Remember the trembling hand of your beloved wife as she whispered her fears of the incendiary and the assassin. Recall the look of indefinable dread from your little daughter. “My friends, I have no fear of servile insurrection . . . Our slaves are the most happy and contented of workers.” But the “unscrupulous emissaries of Northern Abolitionists” may turn the disgruntled few. “You cannot say whether your home or your family may be the first to greet your returning footsteps in ashes or in death.”

Robert Toombs

This sanguineous theme connected the comments of other Separatist speakers. Senator Robert Toombs noted that the slave population has quintupled from 800,000 in 1790 to four million at present, a rate that would result in 11 million slaves by 1900. What would we do with them? he asked. If we can’t expand our borders, extermination will be required.

The lawyer Henry Benning also had population growth on his mind. He pointed to the North and to rates of immigration, and argued that free states would soon outnumber slave states and abolitionist forces would dominate Congress. And what will happen then? Soon there will be a constitutional amendment that would require southerners “to emancipate your slaves, and to hang you if you resist.” This will be followed by a war in which emancipated slaves will “exterminate or expel” all southern white men. “As for the women, they will call upon the mountains to fall upon them.”

Alexander Stephens

In opposition to these dire visions were a few voices of skeptical calm, most notably that of Alexander Stephens, the 48-year-old former Whig congressman, whose corpus consists of a mere 98 pounds of ashen flesh that rheumatoid arthritis, colitis, cervical disc disease, bladder stones, angina, migraines, pruritus and chronic melancholy have not wasted away.

Wrapped in scarves and shawls, the cadaverous, mummified Stephens accepted the thankless task of trying to staunch the hyperbole. Lincoln is no dictator, Stephens argued. Constitutional checks hobble him. Democrats have majorities in both the House and the Senate. Lincoln cannot appoint any federal officers without the consent of the Senate. There are but two Republicans on the Supreme Court. “The president has been constitutionally chosen. If he violates the Constitution, then will come our time to act. Do not let us break the Constitution because he may.”

Of course, Stephens agreed, slaveholders have genuine grievances, and the North has to acknowledge them. Yes, there is a federal fugitive slave law, but too many northern states have personal liberty laws that prohibit state officials from apprehending runaway slaves. A slave can just walk off the farm in Virginia or Maryland or Kentucky, and no sheriff or constable in Pennsylvania or Ohio will lift a finger to apprehend him. Stephens argued that as a condition for remaining in the Union, northern states had to repeal those laws.

It was a canny and reasonable argument, the basis of a compromise many northerners might well accept. But with Separatists conjuring the image of that Black Republican Abraham Lincoln unleashing troops of militant Wide Awakes to invade the South and liberate hordes of slaves who will rampage throughout the cotton belt like Mongol barbarians, poor Stephens might as well have brought a watering can to quench an inferno. As sturdy a rope as Stephens’s proposal may be, it stands little chance of restraining the headstrong Separatists; it may, however, be the line they will try to grasp to save themselves if later they realize they have plunged into disaster.

James Buchanan

In Washington, meanwhile, the lame-duck Buchanan administration is responding to the threat of crisis with a combination of weariness and irresolution. Never a particularly dynamic leader — with more insight than he perhaps intended, Buchanan once referred to himself as an “old public functionary” — the president has always preferred to make policy by reaching consensus with a cabinet he balanced so carefully by region that he seemed like a teamster packing a mule.

But the Solons of his cabinet are failing him. Interior Secretary Jacob Thompson of Mississippi and Treasury Secretary Howell Cobb of Georgia (yes, brother of the wise, dispassionate Thomas cited above) believe secession is a fait accompli and are eyeing opportunities with the new government. Secretary of War John Floyd of Virginia is torn between his southern sympathies and pro-union convictions. Michigan’s Lewis Cass, the 78-year-old secretary of state, is showing signs of mental feebleness; Connecticut’s Isaac Toucey, the secretary of the Navy, has never demonstrated much mental capacity to enfeeble.

Buchanan planned to respond to the secessionists with an ingenious proposal: to call a convention of the states, as permitted under Article V of the Constitution, to discuss an amendment that would permit secession. It was a shrewd idea: the hotspurs in South Carolina have already dispensed with talking, but the serious men of the South would have looked unreasonable if they refused an open-handed invitation to discuss their problems. And yet a national convention might well provide a place where pro-unionists of every stripe could come together and exhibit their considerable strength.

The Cabinet offered Buchanan scant support. Thompson and Cobb, participating in a government they no longer believed in, inveighed against the idea as too little, too late. Floyd, as is his custom, was non-committal. The others, unable to plan ahead to coffee until they’ve had their pie, objected to the scheme because it might offer legitimacy to the possibility of secession.

Faced with these nattering advisers, a stronger leader might have sacked the lot and pressed on with his proposal. But Buchanan is spent. Exhausted and fearful, he settled for a watered-down version of a statement against secession written by Attorney General Jeremiah Black. Black had argued in Cabinet meetings in favor of the government’s duty to defend itself against disunionists — “meet,” “repel” and “subdue” were the words Black used — but the timorous Buchanan scrapped Black’s vigorous language and issued a mild condemnation of secession that declined to so much as wag a disapproving finger at the ultras of the South. In two weeks the president is scheduled to present his annual message to Congress; perhaps that will still be enough time for him to look in the White House attic to see if Andy Jackson left behind some backbone he could use.

With the outgoing president marking time, many are looking for the incoming chief executive to show some leadership. Apparently they will have to wait until Mr. Lincoln is actually on the federal payroll and starts collecting the $25,000 a year he earns for the job.

Lincoln has made no comment about slavery or disunion since before the election, maintaining that his positions are already crystal clear — he’s against expansion, and regardless of his personal opinion, he is Constitutionally incapable of affecting slavery where it already exists. Repeating these positions could only give fodder to those who would twist his views, and he’s powerless to do anything for another three months anyway. As the editor of The Chicago Tribune, Joseph Medill, put it, “He must keep his feet out of all such wolf traps,” and Lincoln surely agrees.

Still, insiders paid particular attention last week to the address delivered in Springfield by Senator Lyman Trumbull at the Great Republican Jubilee celebrating Lincoln’s election. Despite the fact that Trumbull snatched his Senate seat from Lincoln’s grasp five years ago, an act that earned both Trumbull and his wife the eternal enmity of Mary Lincoln, the two men are great friends.

Indeed, they are such great friends that it sometimes seems that they speak with one voice. Thus, when Trumbull told the crowd that under Lincoln, all the states will be left in complete control of their own affairs, including the protection of property, those in the know believed they were hearing the words of the president-elect. And when Trumbull said that secession is not only impractical, it is a constitutional impossibility, it was like hearing from Lincoln himself. What good it will do is another matter. The New York Herald cheerfully predicted that “The speech will go a great ways in clearing the Southern sky of the clouds of disunion.” But whoever wrote that probably hadn’t heard any of the speeches in Milledgeville this week.

Meanwhile, the president-elect continues to prepare for his presidency. Springfield has proven to be a magnet for eager office-seekers, most of whom depart in disappointment. Perhaps the saddest of those who have departed Springfield is not an office-seeker but an artist, Jesse Atwood of Philadelphia, who painted Lincoln just before the election. The portrait, described as “perfect in feature and delineation,” was generously praised when exhibited in the capitol in Springfield.

Unfortunately for Atwood, Lincoln decided that he would look more presidential with a beard, and after a day or two, Atwood’s portrait was out of date. Atwood, who had left Springfield, raced back and filled in some whiskers, but he wasn’t working from life, and he surmised the wrong style, and now has a picture that resembles Lincoln neither then nor now. But apart from Atwood, most people like the beard.

To read more about this period, see “The Road to Disunion Volume II: Secessionists Triumphant,” by William W. Freehling, Oxford University Press, 2007; “Days of Defiance,” by Maury Klein, Alfred A. Knopf, 1997; “Lincoln: President-Elect,” by Harold Holzer, Simon & Schuster, 2008.

Jamie Malanowski has been an editor at Time, Esquire and Spy, and is the author of the novel “The Coup.”

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/21/drama-in-milledgeville

‘Fort Madness:’ Britain’s Bizarre Sea Defense Against the Germans

Three of the seven forts linked together in the Thames Estuary, photographed on Sept. 29, 1945. The towers were originally designed as first-line invasion defenses, and each was armed with a 3.7-inch artillery gun. The completed forts were towed out to sea, sunk onto sandbanks and then linked together by catwalks.

Clusters of steel huts and manned triumphal arches: From bizarre fortresses off the coast, the British military fought German mine layers in World War II. The huge forts were not only a thorn in the side of Hitler’s air force; they also drove their British crews insane.

Perhaps the captain of the Baalbeck did see, through the dense fog, the bizarre shape jutting out of the water in the Thames River estuary, but it was already too late to stop the engines. Traveling at full speed, the Swedish freighter slammed into a group of strange steel hulks. The accident happened about six kilometers off the east coast of England in the late afternoon of March 1, 1953. The steel structures were boxes the size of two-story apartment buildings, each of them perched on massive concrete piers and connected by walkways. Guns were mounted on the roofs.

When the fog lifted the next day, the scope of the catastrophe was clearly visible from the shore. The outline of the British Army’s damaged Nore Fort was visible on the horizon, but now it was missing two of its seven towers.

The accident dealt a serious blow to Great Britain’s plans to defend its air space, striking at the heart of a project that the country’s army and navy had forcefully pursued since the beginning of the Cold War in the 1950s: the construction of at least a dozen sea fortresses around the island to protect strategically important ports and shipping channels. The British wanted to be prepared. The Admiralty had no doubts as to the military effectiveness of the unusual air defense system. After all, the massive steel towers had already served as a frontline defense against Hitler’s Wehrmacht, though not from the very beginning of the war.

In the Stranglehold of the German Navy

Soon after World War II began, the German navy had already managed to strike the island kingdom in its most vulnerable place: shipping. About 2,500 freighters were sailing at any given time to bring goods to Great Britain from around the world. The British also used ships to handle the bulk of their domestic flow of goods. The busiest route ran along the east coast of the island, a lifeline that the enemy was already threatening to cut off only a few weeks into the war.

German destroyers relentlessly laid mines off the east coast and the Thames estuary. More than 100 ships sank in the first few months of the war, and their cargos, so vital to the British and their war effort, sank along with them. The situation became even more menacing when the Germans began dropping mines from the air in mid-November 1939. British commercial shipping was brought to a virtual standstill.

Mine sweepers were constantly in operation, but the results remained unsatisfactory. Many losses were simply inexplicable. It appeared as if the Germans had mines that the British were unable to detect. To address the problem, the head of the Admiralty, Winston Churchill, ordered the navy to obtain a sample of Germany’s new weapons — whatever the cost.

A Deadly Competition

As luck would have it, British searchlights illuminated a German Heinkel bomber on the night of Nov. 21, 1939, just as it was dropping an unknown object attached to a parachute off Britain’s east coast. Experts salvaged and examined the object, which turned out to be a magnetic mine.

Although the British were now able to adjust their mine sweepers to search for the devices, the risk to shipping would continue, as a competition erupted between mine layers and mine searchers, as well as between scientists and engineers on both sides as they continued to develop new mines — and new countermeasures. The Admiralty became convinced that its only hope to free itself of the deadly flotsam was to shoot down or at least deter the mine layers.

At this time, ministry officials remembered a civil engineer named Guy Maunsell who had apparently been thinking about an imminent invasion for some time. Maunsell had originally presented the Admiralty with his designs for unmanned diving capsules. According to his plan, these submersible stations would be permanently anchored in positions surrounding the British coast, and from there would be able to observe all enemy movements above and below the water surface. No one knows whether any of these bottle-shaped devices were ever built, but the original design apparently convinced the Admiralty to work with Maunsell.

A Manned Triumphal Arch

To fend off the mine layers in the Thames estuary, Maunsell initially proposed the building of offshore structures that resembled the Arc de Triomphe in Paris, both in shape and size. The Admiralty approved the design after a few changes were made. Under the modified design, two hollow reinforced concrete towers, each of them seven meters in diameter, would be mounted on a floating pontoon. Each of the seven-story towers would provide enough space for a crew of about 120 men, including equipment and food. Two 3.7-inch anti-aircraft guns and two 40 mm Bofors guns were to be mounted on a platform at the top.

Between February and June 1942, four of the structures, which measured 33 meters (108 feet) tall and weighed 4,500 tons each, were finally towed out to the sites, some 6 to 12 nautical miles off the coast. Once the pontoons had been flooded, the structures settled on the ocean floor and the crews were able to begin their work.

In early 1941, while these so-called naval sea forts were still being built, Maunsell was asked to design an anti-aircraft defense for the Mersey estuary near Liverpool. Because of the difficult ocean floor conditions there, Maunsell chose a different model. He placed four hollow reinforced concrete legs, each with a diameter of 90 centimeters (about three feet), on a reinforced concrete foundation in the shape of a picture frame. Together, the four legs were to support a two-story steel structure with a footprint of 11 by 11 meters.

Seven of these 750-ton towers, spaced 30 meters apart and connected by walkways made of steel pipes, formed a fort. The arrangement of the towers was patterned on land-based anti-aircraft batteries: a radar-equipped control tower at the center, surrounded by four towers with 3.7-inch guns and one tower with two Bofors guns; slightly apart from this core stood one tower with searchlights. In 1943, the army ordered three structures similar to those in the Mersey estuary for the Thames, including the Nore Army Fort.

Madness Under Water

Living conditions on the artificial islands were extreme, with each of the seven-tower fortresses housing up to 265 men at a time. The isolation and close quarters were hard to bear, especially in the concrete legs of the naval sea forts. While the officers’ sleeping quarters were in the upper part of the cylinders, where there was adequate light and oil heating, conditions were intolerable for the crews, who spent their nights below the surface of the water.

To distract themselves when there was nothing to do, the men were encouraged to take up hobbies. Psychologists recommended painting, knitting or building models. The men remained on board for six weeks at a time, spending 10 days on land in between deployments. Many required psychiatric treatment, and the soldiers soon came up with their own name for the manmade platforms: “Fort Madness.”

At the end of the war, the crews had chalked up an impressive list of successes. Some 22 aircraft and 30 V-1 flying bombs were shot down from the Thames forts, and one fort was involved in the sinking of a German speedboat. But the use of the forts in the Mersey estuary had proved to be difficult. Because of their location on a constantly shifting sandbar, the structures on stilts repeatedly sank into the ocean floor. In 1948, the Admiralty had them dismantled because they posed a danger to shipping.

A Secret Expansion Project

But there were different plans for the forts in the Thames estuary. In July 1948, a delegation from the War Office paid a visit to the Nore Army Fort to look into possible uses for the existing forts. A year later, a secret meeting was held at the War Office to discuss the need for additional Maunsell army forts.

The group agreed that another 11 forts were needed to secure all key points and shipping lanes around the perimeter of the United Kingdom. After several changes had been made, the sites had been determined, and so had the costs. The entire project would cost an estimated £2.8 million, an enormous sum in the postwar period. The “Baalbeck” accident at the Nore Army Fort in early March 1953 didn’t exactly enhance the popularity of the project, and in July 1953 the decision was made to suspend it.

There was another accident at the Nore Army Fort in late 1954, which prompted the War Office to discontinue all maintenance work on the forts in 1956. The sea forts attracted attention once again in the 1960s, when British pirate radio stations occupied the platforms. Today private initiatives are underway to preserve the four remaining Maunsell sea forts as curious relics of World War II.

__________

Full article and photo: http://www.spiegel.de/international/zeitgeist/0,1518,728754,00.html

Travels of a Teenage Prince

English Channel, Nov. 14, 1860

 Handkerchiefs waved as the prince graced New York with his royal presence.

Amid heavy fog, a red signal rocket flashed across the night sky, and the captain of the HMS Himalaya breathed a sigh of relief. The queen would rest easier now.

Victoria, waiting at Windsor Castle for word of her eldest son, had felt her anxiety turning slowly into panic. His little squadron, crossing the storm-wracked North Atlantic from Portland, Me., was more than a week overdue. Several days earlier, she had asked the Admiralty to send out search vessels; the first one had returned without success. But now all was well: at breakfast, the queen received news of the rocket sighting. By then the ship carrying her beloved Bertie was safe inside the breakwater at Plymouth.

In an English cartoon from Punch, meanwhile, young Bertie is shown transformed into a typical American boy, much to the consternation of his father, Prince Albert.

The doughy-faced teenager, known more formally as Prince Albert Edward, had just become the first British royal to visit the United States since the Revolution. (In 1782, his great-uncle, Prince William Henry – later King William IV – had been stationed as a Royal Navy midshipman in New York, where he eluded a plot by George Washington to kidnap him as a hostage.) At an endless round of balls and receptions – in Detroit, St. Louis, Harrisburg, Albany and other unlikely locations – the once-defiant colonials had fallen over themselves to bow and curtsy at this rather nondescript twig on the world’s most famous family tree.

Even Harriet Beecher Stowe, hardly a royalist, gushed about him as “an embodiment, in boy’s form, of a glorious related nation” – going on to mention Milton, Spenser, Bacon and Shakespeare, all in the same breath, as if these luminaries were stuffed into Bertie’s vest pockets. His tour eclipsed even that of the Japanese envoys earlier in the year, and pushed news of the presidential contest into the back pages of the major papers. (Abraham Lincoln, then still a candidate, declined to meet the prince when the royal train passed through Springfield, Ill., feeling it would be presumptuous.)

There had been a few glitches, to be sure. In Richmond, Va., paying his respects to a statue of Washington, Bertie was greeted with jeers of “He socked it to you in the Revolution!” and “He gave you English squirts the colic!” In New York, the 69th Regiment of state militia – soon to win fame in the Civil War as part of the “Irish Brigade” – refused to turn out for a parade in his honor.

On the return crossing, headwinds and heavy seas left the royal entourage wallowing in the mid-Atlantic troughs. The dignitaries passed the time as best they could; Viscount Hinchingbrooke later fondly recalled dancing in the evenings “with the midshipmen for partners.” On Nov. 9, the prince turned 19, an occasion marked with double rations of grog and a festive dinner – but dampened, literally, when a large wave drenched the birthday boy in ice-cold seawater.

Among the souvenirs that Bertie was bringing home from the New World were two gray squirrels and a mud turtle, gifts for his animal-loving mother. All of them survived the journey safe and sound – like the prince himself, who would live to succeed Victoria more than 40 years later, and reign as King Edward VII.

Sources: Ian Radforth, “Royal Spectacle: The 1860 Visit of the Prince of Wales to Canada and the United States”; Stanley Weintraub, “Edward the Caresser: The Playboy Prince Who Became Edward VII”; The Independent, Oct. 18, 1860; New York Times, Oct. 8, 1860 and Nov. 17, 1860.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/13/travels-of-a-teenage-prince/

Tea-Partying Like It’s 1860

On Nov. 8, 1860, the secessionists who published The Charleston Mercury greeted the news of Abraham Lincoln’s election as president with righteous defiance: “The tea has been thrown overboard, the revolution of 1860 has been initiated.”

Sound familiar? It turns out that tea-party revivalism is nothing new; it has been part of the public parlance for a long time. But not forever: in the decades after the Revolutionary War, public figures aggressively avoided the “tea party” analogy, considering the event an act of collective passion beneath the civility of the young republic. It took the clash over slavery and states’ rights to return the “tea party” to respectability and breathe lasting life into one of our country’s most potent political analogies.

From the start, politicians have invoked the words and deeds of the Revolutionary era for their own purposes. When William Jefferson Clinton made his way to Washington as president-elect in 1993, he stopped by Monticello and paid tribute to the man who supposedly inspired his middle name. (Detractors snickered that this couldn’t be true; for a white boy born in Arkansas in 1946, they said, the more likely namesake was Jefferson Davis.)

It was only natural that Northerners and Southerners would try to manipulate the iconography of patriotism as the United States lurched toward its constitutional crisis over the meaning of freedom. Both abolitionists and slaveholders wanted to portray themselves as the real descendants of the Founding Fathers and the proper inheritors of their legacy.

The Boston Tea Party, however, presented a challenge. Benjamin Carp, a historian at Tufts, points to the conundrum in his excellent new book, “Defiance of the Patriots: The Boston Tea Party and the Making of America.” “The American Revolution was in many respects a rite of passage for the new nation,” he writes. “Seen in this light, the Boston Tea Party was a national moment of adolescent rebellion.” In other words, George Washington and his peers were trustworthy grown-ups, but the tea partiers were a bunch of teenage misfits who couldn’t be trusted with the buggy whips.

For half a century afterward, Mr. Carp reports, a code of silence gripped Boston: nobody wanted to confess that they had tossed tea into the harbor in 1773. Part of their motivation sprang from a desire to escape justice. “For a long period apprehensions are said to have been entertained,” wrote the novelist James Fenimore Cooper in 1839, “by some engaged – men of wealth – that they might yet be made the subjects of a prosecution for damages, by the East India Company.”

No less important was a sense of shame, a belief that the Boston Tea Party was an act of hooliganism. In 1823, William Tudor, the co-founder of the North American Review, warned of how the tea party flirted with mob rule: “Their irregular action was salutary and indispensable at the time, but the habit of interfering in this manner with public affairs was a dangerous one, and it proves the virtue of the people that it did not produce permanent evils.” In his view, it was a good thing the tea party was a single episode of attention-grabbing mischief rather than a continuing movement devoted to violent mayhem.

To the ongoing consternation of historians, most of the original tea partiers took their secrets to the grave; Mr. Carp likens their behavior to a kind of gangland omertà. After they were gone, Edward Everett Hale – a relation of the man who declaimed at Gettysburg for two hours just before Lincoln spoke his 272 words – recalled the environment: “If, within the last seventy-five years, any old gentleman has said that he was of the Boston Tea Party, it is perfectly sure that he was not one of the party of men who really did throw the tea into the harbor. If, on the other hand, any nice old gentleman, asked by his grandchildren if he were of the Tea Party, smiled and put off the subject and began talking about General Washington, or General Gage, it is well-nigh certain that he was one of that confederation.” (Hale’s comment recalls the controversies surrounding two of this year’s senatorial candidates, Democrat Richard Blumenthal of Connecticut and Republican Mark Kirk of Illinois, who were accused of embellishing their military service records. Neither man was tied to the tea-party movement; both won on Election Day.)

South Carolinians were among the first to see the tea party in a different light. When South Carolina had its first fling with secession during the nullification crisis of the early 1830s, its governor, James Hamilton, compared his state’s actions to those of the “Boston Tea Affair.” Anti-slavery crusader William Lloyd Garrison denounced the comparison – not for besmirching the hallowed memory of American patriots, but because he feared it was entirely too accurate. The tea party, he said, invoked “the demon of civil discord.”

Yet abolitionists eventually had second thoughts about this comparison. As they fought fugitive slave laws in the 1850s, they came to see the tea party as a model of enlightened civil disobedience. When a group of Boston vigilantes freed a runaway slave from federal authorities in 1851, the minister Theodore Parker – a man whose words are embroidered into President Obama’s new Oval Office rug – celebrated. “I think it the most noble deed done in Boston since the destruction of the tea in 1773,” he wrote.

The temperance movement also got in on the act. In 1854, several women in DeWitt County, Ill., were arrested for trashing a saloon. Their prairie lawyer, who took the case on a moment’s notice, argued that they were simply acting in the spirit of the Boston Tea Party. The jury found the women guilty, but the judge decided to let them off with fines of $2 each. Local legend says that they weren’t even made to pay, so the ladies were probably satisfied with their legal representation – provided by one Abraham Lincoln.

By the time Lincoln was elected president, the tea-party trope had become an acceptable part of mainstream rhetoric, a statement of civic-minded frustration and protest. The 21st-century Tea Party is simply an extension of that development. Its detractors are likewise trying to return to an older habit and marginalize the movement as crude and dangerous – a position some modern-day Tea Partyers have inadvertently helped reinforce. Last year, Texas Gov. Rick Perry, a Republican, was unwise to hint that his state might become so aggravated by federal overreach that it would consider secession. And in post-election recriminations, many conservatives have criticized Tea Party activists in Colorado, Delaware and Nevada for nominating weak, unpracticed Senate candidates in races that were otherwise winnable for Republicans.

Yet none of these blunders is serious enough to warrant an earlier century’s sense of embarrassment. The Tea Party of 2010 hasn’t engaged in the crime of property destruction, and its greatest provocateurs have limited their incitements to asking pointed questions at the town-hall meetings of congressional incumbents. On the contrary, by tapping into a storied political analogy, the movement shows a more sophisticated grasp of American history than its critics give it credit for possessing.

Indeed, when they haven’t been trying to popularize vulgarisms like the insult “teabagger,” those critics have at times displayed their own lack of historical acumen. After Sarah Palin warned her listeners at an October rally that it wasn’t yet time to “party like it’s 1773,” several commentators accused her of getting her dates wrong – when in fact Mrs. Palin meant the date of the Boston Tea Party, not the Declaration of Independence. “She’s so smart,” sneered Daily Kos founder Markos Moulitsas.

By misunderstanding the reference, the movement’s critics advertised their unfamiliarity not only with one of America’s great political events but also with one of its age-old traditions – and proved that there’s something to be said for silence.

John J. Miller writes for National Review. He is the author of “The First Assassin,” a historical novel set in 1861.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/17/tea-partying-like-its-1860

Lincoln’s Mailbag

The election of Nov. 6 was big news, to put it mildly. In the days following, newspaper headlines screamed it from one city to another, across the not very united states. As Adam Goodheart wrote earlier, the telegraph allowed the results to be known nearly as quickly as they would be today. Now, thanks to another marvel of technology — the Internet — we can see the private telegrams and letters that Lincoln himself was seeing, as Americans exhaled and realized, to their amazement, that he had pulled it off. In the days following the verdict, an enormous range of Americans, from all walks of life, wrote to their president-elect to express their feelings about where the country was headed. These letters present a remarkable documentary portrait of a nation at a crossroads.

Most of Lincoln’s correspondence is housed in the Library of Congress, just off the East Portico of the Capitol, where he gave his two great inaugural addresses. (They are there, too.) The Library is a national treasure, both for its holdings and for its robust commitment to making these priceless artifacts available to all. That means putting them online, for free, which the Library has been doing since February 2000, with scholarly support from the Lincoln Studies Center at Knox College.

By visiting the Library of Congress Web site, you can now read Lincoln’s mail more or less as he did. What a story these pieces of paper tell! They recreate the drama of election night, from anxiety over the election (will he win?), to joy at the result (he did!), to a new kind of anxiety (now what?). As Americans from all backgrounds wrote to Lincoln, you realize just how much depended on this one man. They bared their emotions to him, sometimes in surprising ways.

William F. Smith, a proud citizen from Germantown, Pa., wrote in to say that his wife had given birth as the results were being announced, and that their son would henceforth be known as “young Abe.” A neighbor of Lincoln’s in Springfield, Henry Fawcett, wasted no time in asking for a job as his personal servant in the White House, the first of a torrent of similar letters to come.

An anonymous person, who identified himself only as “one of those who are glad today,” wrote, “God has honored you this day, in the sight of all the people. Will you honor Him at the White House?”

More disturbingly, “a citizen” in Pensacola, Fla., sent a telegram to say “you were last night hung in effigy in this city.” Undoubtedly many regretted the words “in effigy.”

This online collection is all the more astonishing for the fact that its contents have often been severely restricted from view. Robert Todd Lincoln supervised the removal of his father’s papers immediately after the assassination, and asked a trusted judge in Chicago, David Davis, to take care of them. Judge Davis stored them in a bank vault in Bloomington, Ill., but they were moved several times after that, by order of Robert T. Lincoln: to Washington, to Chicago, back to Washington, to Manchester, Vt. (Robert Lincoln’s summer home). In 1919, he finally placed them in the Library of Congress, but on condition that they not be revealed to be there.

In the confused aftermath of the attack on Pearl Harbor in 1941, Lincoln’s documents were removed again for safekeeping. Most went to the University of Virginia, but several documents were deemed so central to American history, and therefore to national security, that they were sent to Fort Knox in Kentucky (these included the two inaugural addresses and the Gettysburg Address). All of the papers were returned in 1944, and the entire Lincoln collection opened to the public for the first time in 1947.

But still they were not as open as they could be. The people granted access to these precious papers, generally, were the small number of specialists in the highest stratosphere of Lincoln scholarship. It is only in the last decade that they have been truly open, in the sense that the Internet provides, allowing every American — indeed, every person on Earth — to access them from home. The collection contains both soaring oratory and the ordinary dross of everyday governance. But its unfiltered availability to all is itself a tribute to Lincoln’s insistence that government of the people, by the people, for the people shall not perish from the Earth.

Source: The Abraham Lincoln Papers at the Library of Congress.

Ted Widmer is director and librarian of the John Carter Brown Library at Brown University. He was a speechwriter for President Bill Clinton and the editor of the Library of America’s two-volume “American Speeches.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/12/lincolns-mailbag/

Jim Crow on West Broadway

New York, Nov. 17, 1860

Streetcars on Park Row, circa 1860. The large building in the background is the headquarters of The New York Times.

The young man saw the horse-drawn streetcar coming from up the block. It didn’t have the necessary sign in the window – “Colored People Allowed in This Car” – but he was in a hurry that Saturday morning, so he hopped aboard anyhow. The conductor, according to a brief report in The New York Times, “told him that he must either get off or ride on the front platform. He said that he would do neither, but he would stay where he was.” A scuffle ensued, a constable hurried to the scene, and 23-year-old Charles Sanders was hauled off to face charges of assault and battery.

Records reveal little about Sanders: it is unclear whether he was an early civil rights activist – a 19th-century Rosa Parks – or just an impatient and hassled New York commuter. In any case, he was no mere Bowery ruffian. Sanders resided on one of the most elegant blocks in Manhattan, in the home of a wealthy dry-goods merchant for whose family he worked as a domestic servant. (The house, 55 West 9th Street, still stands; it is currently on the market for $10 million.) Next door lived an army officer named Irvin McDowell, who would soon gain notoriety along an obscure Virginia stream called Bull Run; just up the block stood the mansion of Henry J. Raymond, founder and publisher of the Times.

Such exalted connections availed him little. As far as the streetcar company and the police were concerned, Charles Sanders was just another “colored” man who needed to be shown his place. But in the years before the Civil War, New York was already the battleground of a civil rights struggle that has been nearly forgotten: a hard-fought conflict foreshadowing events in the Deep South a century later.

Segregation was an old story in New York. Although the state had abolished slavery in 1827, most public transportation, schools, theaters, restaurants and churches still enforced a strict color line, as they did throughout the free states. In the 1830s, omnibus drivers sometimes used their whips to keep African-Americans from boarding. As the renowned Southern historian C. Vann Woodward would write, “one of the strangest things about the career of Jim Crow was that the system was born in the North and reached an advanced age before moving South in force.”

Separate and unequal facilities were hardly the only injustice confronting New York’s African-Americans. State law restricted blacks’ voting rights to men owning at least $250 in real estate, a qualification met by only a tiny percentage of the total population.

Segregated streetcars, however, remained powerful, ubiquitous symbols of everyday racism in the city. In the 1850s, after decades of intermittent civil disobedience, a bold cohort of black New Yorkers made a concerted effort to integrate them. On July 16, 1854, a young African-American schoolteacher named Elizabeth Jennings was violently ejected from a trolley at the corner of Pearl and Chatham Streets. The black community rallied to raise money for a lawsuit and hired a young attorney named Chester A. Arthur – the future president – to represent her. Remarkably, the judge decided in favor of Jennings, awarded her damages of $250 and decreed that transit companies were “bound to carry all respectable persons” regardless of race.

When most trolley operators simply ignored this ruling, black leaders redoubled their efforts, forming a group called the Legal Rights Association (which included a special “female branch”) to continue the fight. A well-known minister, the Rev. James W.C. Pennington, deliberately got himself arrested for boarding a whites-only car. A few prominent whites lent support; in September 1860 Horace Greeley, The New York Tribune’s famous editor, asked his readers: “Can anyone doubt that Jesus of Nazareth, if now on earth and in New York, would reject more indignantly and rebuke more sharply our negro-cars [and] negro-pews in churches that evoke his name?” By that point – thanks less to messianic intervention than to the activists’ tenacity – almost all the streetcar lines had accepted integration.

The timing of Charles Sanders’s act of defiance, just 10 days after Abraham Lincoln’s election, may not have been coincidental. Across the nation, indignant whites were reporting that blacks seemed suddenly, frighteningly rebellious. One unsettling story told of a Georgia slave who refused to chop wood for his master and mistress, telling them that “Lincoln was elected now, and he was free.” The black man, according to a newspaper, “after being sent to the whipping-post, gained new light on the subject of Lincoln and Slavery, and returned to his duty.” Yet the portents of revolution continued.

The final outcome of Sanders’s prosecution went unreported in the newspapers and is, for the time being, lost to history. (It may await discovery in New York’s vast criminal court records.) Not until after the Civil War, in 1873, did the state legislature pass a bill sweeping away the last vestiges of segregated public transit – three years after abolishing the property requirement for voting.

Sources: New York Times, Nov. 19, 1860 and Nov. 13, 2005; 1860 Census; “Trow’s New York City Directory, for the Year Ending May 1, 1859”; Leslie M. Harris, “In the Shadow of Slavery: African-Americans in New York City, 1626-1863”; C. Vann Woodward, “The Strange Career of Jim Crow”; David N. Gellman and David Quigley, eds., “Jim Crow New York: A Documentary History of Race and Citizenship, 1777-1877”; John Hewitt, “The Search for Elizabeth Jennings, Heroine of a Sunday Afternoon in New York City” (New York History, October 1990); Edward Spann, “Gotham at War: New York City, 1860-1865”; Judith Ann Giesberg, “Army at Home: Women and the Civil War on the Northern Home Front”; Leon F. Litwack, “North of Slavery: The Negro in the Free States, 1790-1860”; New York Tribune, Feb. 25, 1858 and Feb. 20, 1861.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/16/jim-crow-on-west-broadway/

Female Partisans

New Orleans, Nov. 16, 1860

On fine afternoons that week, throngs of strollers promenaded on Canal Street. The thoroughfare, one newspaper reported, “was crowded with an unusually large and brilliant array of the beauty of our city – the stately matrons and lovely damsels of the South. What gave peculiar interest to this grand display of beauty, grace, and elegance, was the exhibition of blue [secessionist] cockades worn on the shoulders of nearly all the ladies who appeared in public. All our ladies are for the South, and for resistance to the aggressions, outrage, and insult of an Abolition dynasty. No man will merit their favor who is not ready to sacrifice everything for that cause.”

Much had changed in recent months in the Crescent City. At the corner of Canal and St. Charles, the promenading citizens passed an imposing statue of Henry Clay – the Great Compromiser, who many said had single-handedly held the Union together – that had been installed, amid the usual patriotic fanfares, just that past April 12, Clay’s birthday. (The following spring, the first shot fired at Fort Sumter would give the date a new and very different significance.)

All across the South – as in New Orleans – “stately matrons and lovely damsels” seemed more eager than men to split up the nation that Americans had so long struggled to preserve. On an Alabama steamboat, a passenger took an informal poll: in the gentlemen’s cabin, there were still five votes for acquiescing to Lincoln’s election, while the ladies’ cabin was unanimous for disunion. “Secession was born in the hearts of Carolina women,” one Charlestonian went so far as to write in her diary.

On the cover of an 1861 issue of Harper’s Weekly, a “Southern belle” parades through Baltimore in a dress sewn with a Confederate flag.

This outburst of female political fervor took husbands, fathers, and sons by surprise. Below the Mason-Dixon Line, even more than in the rest of the country, women had long been expected to refrain from involving themselves in, or even commenting on, public affairs. But no longer. One young North Carolinian, Catherine Edmondston, described arguing with her Unionist parents and sisters when they expressed their attachment to the American flag. “Who cares for the old striped rag now that the principle it represented is gone?” she wrote bitterly. “It is but an emblem of a past glory.” On the day of Abraham Lincoln’s election, four sisters in Florida wrote a letter to the local newspaper calling for resistance to the “Abolition Emissaries of the North.” Women, they said, could not remain “idle spectators of the passing scenes and excitement,” but should “reserve their crinolines to present to our Southern Politicians who have compromised away the rights of the South.”

Some were yet more militant – even military – in declaring their sympathies. At one all-female high school in Columbia, S.C., students emulated the secessionist militia units called “Minute Men” by forming a company of “Minute Girls.” They turned out en masse for a nighttime rally with the letters “M.G.” emblazoned on the fronts of their dresses.

Even more widespread tokens of secession were the cockades – usually made with blue ribbons – that women stitched together and wore on their clothing or hats: “a token of resistance to abolitionist rule,” as one observer noted. Many, too, pinned them to the coats of husbands and sweethearts who had already begun drilling for battle. A Charlestonian named Mary Walsingham Crean even wrote a song entitled “The Blue Cockade”:

There’s many a gallant laddie who wears a blue cockade,
Will show them what it is to dare the blood of Southern braves!
And God be with the banner of those gallant Southern braves,
They may nobly die as freemen – they can never die as slaves.

Any incongruity in the last line of Crean’s song was apparently unintentional.

Not all secessionist cockades were blue. This red one from North Carolina may represent the color of a local militia regiment’s flag.

Not everyone in the South, let alone the North, was delighted by the sudden incursion of women into American political life. “Woman has no business with such matters,” sniffed Ada Bacot of South Carolina. The Nov. 16 issue of the Philadelphia Inquirer, meanwhile, featured an article headlined “Female Partisans.” “It is with much regret that we have seen, in various papers, notices of the prominent part taken by Southern ladies, as agitators in this (to all true Americans) sad state of affairs,” wrote the anonymous male author. “The Greeks thought it so improper for women to interest themselves in contests and contentions, that they forbade them, under pain of death, to be present at the Olympic Games …. Our ladies, who have so many accomplishments, should distinguish themselves as tender mothers and faithful wives, rather than as furious partisans.”

American women – both Northern and Southern – would end up sacrificing more for “the cause” than anyone could have predicted on that bright afternoon in New Orleans. In the process, however, many would also find new horizons opening up. Even before the first shots of the Civil War were fired, some heard amid the war cries a different call to arms. In another newspaper article that ran on Nov. 16, 1860 – this one in the San Francisco Bulletin – a California woman addressed herself “solely to ladies,” writing: “The opposing questions of Northern and Southern rights naturally suggests to me OUR RIGHTS.”

Sources: Baton Rouge Daily Advocate, Nov. 15, 1860; Leonard Victor Huber, “New Orleans: A Pictorial History”; The Constitution (Washington, D.C.), Nov. 14 and 24, 1860; Anya Jabour, “Scarlett’s Sisters: Young Women in the Old South”; Catherine Edmondston Diary, North Carolina Office of Archives and History; Tracy J. Revels, “Grander in Her Daughters: Florida’s Women in the Civil War”; Richmond Dispatch, Dec. 27, 1860; Frank Moore, “The Rebellion Record: A Diary of American Events”; Drew Gilpin Faust, “Mothers of Invention: Women of the Slaveholding South in the American Civil War”; Philadelphia Inquirer, Nov. 13 and 16, 1860; Daily Evening Bulletin (San Francisco), Nov. 16, 1860.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/15/female-partisans/

Never Mind Neutrality, Our Friends Are in Trouble

David van Epps, in a duffel coat, with members of the 894 Royal Naval Air Squadron. He and other Americans chose to fight for Britain before the U.S. entered World War II.

Between the outbreak of World War II in Europe in September 1939 and Hitler’s declaration of war against the U.S. in December 1941, 22 American citizens ignored the country’s Neutrality Act and joined the Royal Navy to fight for Great Britain. They flouted the law despite the draconian sanctions threatened by the act, including imprisonment, heavy fines and the loss of citizenship. The motives of these volunteers varied—some were gung-ho Anglophiles, others wanted to battle Nazi tyranny and still others joined up simply for the adventure.

The volunteers were given dangerous duty in the Atlantic and in the Murmansk convoys that skirted the Arctic Circle to supply the Soviet Union. Two of the Americans were eventually put in command of British vessels. Other equally brave American volunteers served with the Royal Air Force; the story of Eagle Squadron is relatively well known. Their compatriots at sea are now the subject of “Passport Not Required,” a slim but occasionally deeply moving book by Eric Dietrich-Berryman, Charlotte Hammond and R.E. White.

The book’s title refers to the fact that the Royal Navy, in its struggle against the German U-boat menace, was not about to quibble over paperwork when it came to enlisting any Americans who stepped forward. To help the volunteers avoid trouble over citizenship, the navy waived the requirement to swear allegiance to King George VI, and other British legal considerations were organized for the Americans’ convenience.

The U.S. Justice Department, after all, was unbending: In July 1941, a U.S. citizen, Philip Stegerer, who had volunteered for the Royal Canadian Air Force at a time when taking an oath to the king was still required, was refused re-admission to the U.S. His family, Stegerer told the press, had lived in America since before the Revolutionary War. He went to Canada, he said, “to fight for democracy and I wind up a guy without a country, without a job and without a dime.” The authors say that Stegerer’s later fate is unknown.

Eventually the U.S. allowed Americans who had volunteered for its allies to come back and fight for their homeland. But the U.S. military did not appear to place much value on their wartime experience. Boston-born William Homans, an idealist who is described in the book as “a crusader who left simply to fight a colossal evil,” found his hard-earned Royal Navy experience entirely set aside in March 1943 when he joined the U.S. Navy. Homans had served two years in the Royal Navy and been promoted to lieutenant; the U.S. Navy made him an ensign, as if he were a landlubber going to war for the first time. He ended his wartime career as the garbage-supervision officer on the Boston docks, then went on to graduate from Harvard Law School in 1948 and pursue a career in criminal law.

As the authors note, Americans have a long history of volunteering for British fights. Adm. Nelson’s command in 1805 at the Battle of Trafalgar included 23 Americans; a U.S. citizen won the Victoria Cross in the Royal Navy in Japan in 1864; and 20 Harvard men died on the battlefields of World War I before America entered the fray. But never was the British need for a few good Americans more urgent than in the Battle of the Atlantic from 1939 to 1943.

Winston Churchill said after the war that the struggle against German U-boats—and the threat of national starvation posed by their attacks on shipping—worried him more than any other front. By war’s end the German submarines had sunk 175 Allied warships and 2,603 merchant ships, killing 30,246 Allied merchant seamen. But the cost to Germany was also staggering: More than 700 U-boats were destroyed, out of a total of 1,162 commissioned; of 39,000 German submariners, 27,491 died.

The 22 Americans who forsook their country’s isolationism and braved its legal sanctions came from a wide variety of backgrounds. Remarkably, few of them had maritime experience and only three—Draper Kauffman, William Taylor and Henry Ripley—had professional military backgrounds. Otherwise the volunteers represented the American Everyman. In addition to the idealistic William Homans, there was Derek Lee, whose family owned a textile company; Charles Porter, who was in real estate; John Stilwell, in theater advertising; Carl Konow, a New York yacht broker and mail pilot; and John Parker, a Boston sales executive and former Navy enlistee.

The volunteers included three bankers: Edmund Kittredge from Cincinnati, Alex Cherry (a fierce Anglophile) and Edward Ferris, both from New York City. Oswald Dieter and Francis Hayes were doctors; Edwin Russell, a journalist; Gurdan Buck, a Maryland farmer; George Hoague, a naval architect; John Hampson, a California rancher. Peter Morison built warships at Bath Iron Works. Three of the volunteers—David Gibson, John Leggat and David van Epps—were too young even to have careers. “They were romantics prospecting for adventure,” the authors write.

The one thing these men had in common was a willingness to step forward and risk their lives for a good cause without being asked. As Prince Michael of Kent, honorary rear-admiral of the Royal Navy Reserve, writes in his introduction to this fascinating book: “No man can do more for another country than to volunteer to fight for it.”

In October 1941, a German U-boat wolf pack attacked a 50-ship convoy in the North Atlantic. One of the German subs fired a torpedo at HMS Broadwater, a destroyer protecting the supply ships. “The detonation blew away the upper bridge works and bow,” the authors write. Forty-five officers and crew members died aboard the Broadwater, including Lt. John Parker, the Boston sales executive. He was serving bridge watch when the torpedo struck and was killed instantly.

At age 51, Parker was among the oldest lieutenants in the Royal Navy, although his superiors didn’t know it. He had lied about his age, and the British were not inclined to probe his story: They needed good officer material, and Parker had served in the U.S. Navy during World War I. Of the 22 American volunteers for the Royal Navy, Parker was the only fatality. “He died as he would have easily chosen to die,” wrote his school friend and cousin Charles Curtis in the Groton School Quarterly, “killed in action, and among the first because he was among the most gallant.”

Lt. John Parker was old enough to have two adult sons, both of whom also fought in World War II. The eldest, Frank, followed his father’s example and signed up to fight Hitler before America did, enlisting in the Canadian army in 1940. He was taken prisoner during the misbegotten Dieppe Raid in France in 1942; he later escaped from his German POW camp and made his way to England. Parker’s other son, also named John, enlisted in the U.S. Navy and was killed in 1945 during a mine-clearing operation on Okinawa. These Parker men, the authors note, pursued different routes into the war but shared the same desire: “to fight a great menace.”

Mr. Roberts is the author of “Masters and Commanders: How Four Titans Won the War in the West, 1941-1945.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704141104575589072950609654.html

Would the South Really Leave?

Nov. 11, 1860

With nearly half a year to prepare for the possibility of a Lincoln election, the editorial writers of the South had ample time to sharpen their rhetoric, and the arias of wrath and venom unleashed after last Tuesday’s decision proved that those months were not idly spent.

“If we submit now to Lincoln’s election,” said the Fayetteville North Carolinian, “your homes will be visited by one of the most fearful and horrible butcheries that has cursed the face of the globe.” Said the Richmond Semi-Weekly Examiner, “Here [is] a present, living, mischievous fact. The Government of the Union is in the hands of the avowed enemies of one entire section. It is to be directed in hostility to the property of that section.” Added The Atlanta Confederacy, even more emphatically, “Let the consequences be what they may — whether the Potomac is crimsoned in human gore, and Pennsylvania Avenue is paved ten fathoms deep with mangled bodies, or whether the last vestige of human liberty is swept from the face of the American continent, the South will never submit to such humiliation and degradation as the inauguration of Abraham Lincoln.” Concluded a pithier Augusta Constitutionalist: “The South should arm at once.”

Hot words, those, but in South Carolina, there were even hotter deeds: the day after the election, fire-eaters lowered the Stars and Stripes flying above the state capitol, and raised the Palmetto flag. Three days later, the legislature voted to convene in December to decide whether to secede.

Southerners, of course, have called this tune before. They threatened to bolt in 1820, floated the divisive theory of nullification in the 1830s, and angrily convened in Nashville in 1850. (The governor of South Carolina, William Gist, even has a brother whose name is States Rights — yes, his actual name is States Rights Gist — who was born during the nullification crisis; Father Gist was evidently a fervent Calhoun man.)

Whatever the time and whatever the provocation, the story has always been the same: threats, indignation and outrage, followed in the end by placations from the North and reconciliations that left the South wealthier and the institution of slavery more entrenched. Most assume that past will be prologue. “The South seceded last year when the Republicans elected William Pennington as Speaker of the House,” jibed pro-Lincoln newspaperman Carl Schurz earlier this year. “The South seceded from Congress, went out, took a drink, and came back. When Old Abe gets elected, they’ll go out, and this time they’ll take two drinks before they come back.”

And yet, this time they might really mean it.

Is it all due to Lincoln? Certainly, but that his mere election would incite secession is not so obvious. Though an opponent of slavery, he is measurably more moderate than Senator Seward or Senator Chase, rivals for the nomination whom the Republicans, for all their abolitionist ardor, plainly did not prefer. Nearly a month has passed since Lincoln spoke in public about the issue of slavery, and all he did was repeat that he was constitutionally powerless to interfere with the institution of slavery in any state where it existed. “What is it I could say which would quiet alarm?” said Lincoln, his exasperation evident. “Is it that no interference by the government with slaves or slavery within the states is intended? I have said this so often already that a repetition of it is but mockery.”

But to the South, Lincoln is but the tip of the spear. “He rides a wave he cannot control or guide,” observes a perceptive editorialist for The Augusta Daily Constitutionalist, who predicts that Lincoln’s “very restraint will give new strength to its pent up fury, and it will carry into the same office, four years hence, a man of more revolutionary ideas.”

Republicans come to Washington not just with an eye to stopping the expansion of slavery. Their program also includes higher tariffs, which will increase the power of Northern manufacturers; support for the railroads, which will lead to the settlement of the West and to the creation of who knows how many anti-slavery states between the Mississippi and the Pacific; and unrestrained immigration. Eighty percent of new arrivals settle in the North, swelling its power with their labor and their votes. The Constitution may prevent the Republicans from abolishing slavery now, but Southerners are concerned that the great unsettled Dakota prairies will be carved into a dozen states that will become full of Republican-loving Italians and Poles and Irishmen and escapees from the revolutions of 1848. See what happens then.

These developments might sit differently if the South felt weak, but in fact, it feels stronger than ever. Cotton production is at an all-time high; perhaps two billion pounds will be produced this year, enough to account for nearly 60 percent of the country’s exports. Almost half the crop will go to England, where a fifth of the population of the world’s greatest power works in the textile industry. Two years ago Senator James Hammond of South Carolina proclaimed, “The slaveholding South is now the controlling power of the world.” With an increasingly abundant cotton crop earning ever-rising prices, no one down south feels obliged to argue, unless it is with the abolitionist who wishes to cast moral aspersions upon him and deny him the labor force that is the underpinning of this ever-increasing wealth.

And so, inevitably, the South thinks of secession — and expansion. The South has long believed that unless slavery keeps expanding, it will die, and take the slave-holding elite with it. As Senator Jefferson Davis of Mississippi recently said, “We of the South are an agricultural people, and we require an extended territory. Slave labor is a wasteful labor, and it therefore requires a still more extended territory than would the same pursuits if they could be prosecuted by the more economical labor of white men.” Limiting slave territory, Davis says, would “crowd upon our soil an overgrown black population, until there would not be room in the country for whites and blacks to subsist in, and in this way. . . reduce the whites to the degraded position of the African race.” Oddly, Senator Charles Sumner, the ardent abolitionist from Massachusetts, has in a rather different way reached the same conclusion: limiting slavery will kill slavery.

And so the slaveholders seek to expand, although whether they can go further north and west is more than a political question; there is much doubt whether the climate and crops of western America would sustain slavery. But all doubts vanish when they turn their backs to the north, and see rimming the Gulf of Mexico verdant lands that could, and have, enriched slaveholding planters. “To the Southern republic bounded on the north by the Mason and Dixon line and on the south by the Isthmus of Tehuantepec, including Cuba and all the other lands on our southern shore,” toasted one Texan at a convention in 1856, and that sentiment burns at the heart of many of the fire-eaters now crying secession.

Don’t forget that not very long ago, such sentiments burned brightly in Washington as well. The Polk and Pierce administrations tried to buy Cuba. Just six years ago, the current president, James Buchanan, who was then Minister to Great Britain, was one of the three authors of the Ostend Manifesto, which maintained that if Spain wouldn’t sell us Cuba, we would be justified in seizing it. Accompanying these official efforts were unofficially encouraged forays by slaveholder-supported filibusteros to invade Cuba, foment a rebellion and grab the island on behalf of expansionist-minded southerners.

Expansionists north and south initially supported William Walker’s campaigns to seize control of Nicaragua, but it was the southern expansionists who were his true constituency. The south’s moral and financial support sustained Walker when he seized Nicaragua’s presidency in 1856, and though he governed only briefly, he managed to re-establish the legality of slavery before a coalition of Central American powers defeated his cholera-ravaged army and sent him scampering. Walker made further attempts to conquer Nicaragua, the last of which ended last September in front of a firing squad in Honduras. But southerners backed every one.

A mere freebooter, Walker nearly succeeded. The ultras dream of what could be accomplished in Nicaragua, and Cuba and northern Mexico and the West Indies if a cotton-rich American government should seek its destiny in commanding a tropical empire that would dominate the world’s supply of not only cotton but the staple of sugar as well.

So here, then, is the South’s choice. Does it select a future in which the southern slavocracy is less powerful; more isolated; consistently subjected to moral castigation by northerners for an economic system that profits not just planters but innumerable northern shippers and insurers and mill owners? Or does the South choose to establish a new nation that will sit at the center of a rich and powerful slaveholding empire that will dominate the hemisphere?

There are plenty of people in the south who oppose disunion and wish to move slowly or not at all. But most of the South’s leadership — its money and its political establishment and its opinion-makers — know that the South is at a crossroads, and they mean for it to choose independence.

(To read more about this period, see “Battle Cry of Freedom: The Civil War Era,” by James M. McPherson, Oxford University Press, 1988; “Days of Defiance,” by Maury Klein, Alfred A. Knopf, 1997)

Jamie Malanowski has been an editor at Time, Esquire and Spy, and is the author of the novel “The Coup.”

__________
Full article: http://opinionator.blogs.nytimes.com/2010/11/10/would-the-south-really-leave/

Enthusiasts for Hard Power

On the day that Harry S. Truman left the presidency in January 1953, he and senior members of his administration enjoyed a long, bittersweet luncheon at the Georgetown house of his secretary of state, Dean Acheson. Truman then took the train back to Independence, Mo., wrote his memoirs, established a presidential library and spoke out sharply on the politics of the Ike Age. Acheson stayed in Washington, however, pursued a lucrative career as an attorney, published his own recollections and thrived as an elder statesman. Yet of all the associates who had gathered that January day, he remained closest to Truman.

Surface appearances would have predicted him to be among the most distant. Truman was the son of a failed Missouri farmer and livestock trader, Acheson the offspring of a prominent Episcopal cleric. Truman’s formal education had begun and ended with the Independence public schools; Acheson attended Groton, Yale and Harvard Law. Truman was a marginal middle-class striver who failed in one business venture after another before turning to politics. Acheson moved from Harvard to a clerkship with Supreme Court Justice Louis Brandeis, joined a leading Washington law firm and made his way into government through the good offices of Harvard professor and future Supreme Court Justice Felix Frankfurter. Truman enjoyed friendships with bourbon-drinking pols; Acheson, a martini man, hobnobbed with scholars and intellectuals.

Yet each man courts the other’s good opinion in this extensive correspondence. The letters reveal that both men, for all their differences in experience, shared common values and possessed similar temperaments. Both were ideologically liberal Democrats, suspicious of big business and devoted to civil liberties. Both were Cold Warriors, convinced that their great achievement was the North Atlantic alliance and the postwar structure of containment that had blocked Soviet expansion into Western Europe. While detesting demagogic anti-communism of the McCarthyite variety, they had no patience with the squishy negotiate-without-conditions faction of their own Democratic Party. (Acheson to his dying day believed that the proper response to Soviet missiles in Cuba would have been to unleash Gen. Curtis LeMay’s bombers.)

After office, Acheson and Truman criticized both parties.

Both also put great stock in personal loyalty and practiced it to the point of impetuosity. Acheson needlessly did grave damage to himself and his policies by ostentatiously declaring, after the conviction of former State Department official (and covert Soviet agent) Alger Hiss for perjury in early 1950: “I do not intend to turn my back on Alger Hiss.” He offered his resignation to the president, who promptly declined it.

Truman recalled that he had been harshly criticized during his short tenure as vice president when he had commandeered an Air Force bomber to attend the funeral of “a friendless old man just out of the penitentiary”—his unsavory political patron, “Boss Tom” Pendergast of Kansas City. Truman’s enemies never let him forget the alleged misdemeanor; he remained proud of it. Acheson, never persuaded of Hiss’s guilt, was equally adamant.

Both shared the belief that to govern was to make firm decisions after due deliberation and to act. Neither would have been impressed by today’s oft-bruited concept that nations can advance their interests through the deployment of “soft power.” When Truman was informed by Acheson of the North Korean invasion of South Korea in June 1950, he recalled his immediate reaction as: “Dean, we’ve got to stop the sons of bitches no matter what.” (Neither he nor Acheson ever fully confronted the probability that the attack was prompted by Acheson’s public exclusion of South Korea from America’s Asian “defense perimeter” six months earlier and facilitated by Truman’s skimping on military spending.)

Truman often told Acheson he was the greatest secretary of state in U.S. history; Acheson dedicated his imposing memoir, “Present at the Creation” (1969), to Truman, “the captain with the mighty heart.” Their mutual regard permeates these exchanges, surviving even Truman’s request for a critical reading of the final draft of his memoirs.

Acheson was a thorough and firm editor, concerned with both substance and the proper use of the English language. He frequently began his comments with such words as “Here, Mr. President, I shall try your patience and good nature.” At one point he declares: “The reader feels stuck on the fly paper for thirty pages.” Elsewhere he chides Truman for leaving “the impression of a two-gun man in the White House shooting with both hands in all directions at the same time.” Truman responded almost as if he were a dutiful doctoral student thanking a mentor for his “generosity.” Like many a doctoral student, however, he did not make every change suggested. Acheson would not ask Truman to critique his memoir.

Truman and Acheson left office distinctly out of favor with the public but confident in their policies. Inevitably they comment disparagingly on their successors, both Republican and Democratic. Eisenhower and Dulles can do no right. Acheson despairs of Adlai Stevenson—”this paunchy quipster is no Marcus Aurelius.” Truman calls John F. Kennedy immature; Acheson sees him as an “Indian snake charmer.” Dean Rusk is unable to give the State Department a sense of direction. Even Lyndon Johnson, whom both had admired, disappoints. “He could be so much better than he is,” Acheson tells Truman. “He creates distrust by being too smart. He is never quite candid. He is both mean and generous, but the meanness too often predominates.”

The critiques had merit, but the constant emphasis on the negative leaves an impression of petulance. Perhaps this can be forgiven, coming as it did from two men who, whatever their own mistakes and weaknesses, created a world order that gave breathing room to the forces of liberalism and democracy. There were giants in those days.

Mr. Hamby, a distinguished professor emeritus of history at Ohio University, is the author of “For the Survival of Democracy: Franklin Roosevelt and the World Crisis of the 1930s.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703514904575602643826420312.html

A Senator Secedes – Reluctantly

Charleston, S.C., Nov. 12, 1860

Almost everyone in Charleston, it seemed, had gone wild for secession. Flags with the state symbol, the palmetto tree, flew on every street, and even from ships in the harbor. Abraham Lincoln was burned in effigy. News agents throughout the city vowed never again to sell Harper’s Weekly – the most widely circulated magazine in America – when they saw that its post-election issue featured a large woodcut of the president-elect.

Mass meeting at Institute Hall, Nov. 12, 1860.

That night, several thousand people packed the floor and galleries of Institute Hall on Meeting Street; one witness wrote, “every part of the building was crowded to suffocation.” Many noted with satisfaction that members of the state’s social and financial elite – previously somewhat resistant to the swell of revolutionary fervor around them – were present tonight. Presiding over the assemblage was Judge Andrew Gordon Magrath, who, on the day after Lincoln’s election, had walked out of his courtroom to become a secessionist hero, the first of numerous federal officials in the state to resign his office in protest. “The Temple of Justice, raised under the Constitution of the United States, is now closed,” he had theatrically informed the jury before slipping off his robe and departing.

Now, in Institute Hall, spectators “rose to their feet, threw up their hats, and cheered until hoarse,” a newspaper reported. Their cheers grew even louder when Magrath announced that Senator James Henry Hammond – the very embodiment of South Carolina’s political establishment – had just cast his lot with the rebellion, resigning his seat to join the secessionists.

Senator Hammond was not present in the hall that night. In fact, he had little stomach for celebration. He had given up his office very reluctantly, ultimately doing so only from a politician’s instinctive fear of seeming out of touch with popular feeling and out of step with his colleagues. (The state’s other senator, James Chesnut, had resigned the day before.)

The imperious, aristocratic senator was no bleeding heart, to say the least. Master of more than 300 slaves, he did not hesitate to flog them when they transgressed, wielding the whip with his own hand. Nor did he hesitate to take sexual advantage of the women under his power, fathering several children with them. (In one instance, he did so with a household servant, and then with her teenage daughter.) In politics, he had popularized the phrase “cotton is king,” and gave a notorious speech in 1858 arguing that every society, even a republic, needed an inferior “mud-sill” class to “do the menial duties, to perform the drudgery of life.” Few men had been fiercer than Hammond in championing slavery and states’ rights.

Senator James Henry Hammond

Now, however, faced with the reality of the nation he knew – and perhaps even the society he knew – coming apart at the seams, the senator hesitated. “The scenes of the French Revolution are being enacted already,” he fretted as he watched the clamor in Charleston’s streets. The crisis might bring a new class of demagogues to power in the South; unless this was averted, “we shall soon have the guillotine at work upon good men.” In his private diary, Hammond went so far as to confess that if given a choice between saving the Union and saving slavery, he would choose the Union. But in any case, he did not think it needed to come to this. The South was wealthy and powerful enough to protect its interests without seceding. The election of Lincoln represented little more than a slight to its honor.

Yet in the end, Hammond chose to surrender to what seemed the tide of history rather than resist it. Trying to justify his decision in a letter to a close friend, the best he could come up with was this sardonic explanation: “You know the Japanese have an ancient custom, which therefore must have its uses, of ripping up their own bowels to revenge an insult.”

Hammond was far from the only eminent man to feel inward qualms – but almost no one else dared speak them openly. Among the few who did so was Magrath’s fellow judge, the elderly James L. Petigru, whose life coincided with the Union’s: he had been born just days after President Washington’s inauguration in 1789. “South Carolina is too small for a republic,” Judge Petigru said, “and too large for a lunatic-asylum.”

Sources: New York Herald, Nov. 13, 1860; Philadelphia Inquirer, Nov. 13, 1860; Baltimore Sun, Nov. 15, 1860; Charleston Courier, Nov. 8, 1860; Charleston Mercury, Nov. 8, 1860; Harper’s Weekly, Jan. 19, 1861; William W. Freehling, “The Road to Disunion, Vol. 2: Secessionists Triumphant”; Robert N. Rosen, “Confederate Charleston: An Illustrated History of the City and the People During the Civil War”; Drew Gilpin Faust, “James Henry Hammond and the Old South: A Design for Mastery”; Jon L. Wakelyn, “The Changing Loyalties of James Henry Hammond: A Reconsideration” (South Carolina Historical Magazine, Jan. 1974); Abner Doubleday, “Reminiscences of Forts Sumter and Moultrie.”

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/11/a-senator-secedes-reluctantly/

A Slaveholder’s Diary

Nov. 12, 1860

On Friday morning, Nov. 9, 1860, Keziah Goodwyn Hopkins Brevard, a 57-year-old widowed plantation mistress who lived some 10 miles east of Columbia, S.C., wrote in her diary, “Oh My God!!! This morning heard that Lincoln was elected.” In the breathless entry that followed, she recorded her thoughts and fears:

I had prayed that God would thwart his election in some way & I prayed for my Country — Lord we know not what is to be the result of this — but I do pray if there is to be a crisis — that we all lay down our lives sooner than free our slaves in our midst — no soul on this earth is more willing for justice than I am, but the idea of being mixed up with free blacks is horrid!! I must trust in God that he will not forget us an unworthy as we are — Lord save us — I would give my life to save my country. I have never been opposed to giveing up slavery if we could send them out of our country — I have often wished I had been born in just such a country — with all our religious previleges & liberties with none of them in our midst — if the North had let us alone — the Master & the servant were happy with out advantages — but we had had vile wretches ever making the restless worse than they would have been & from my experience my own negroes are as happy as I am: — happier — I never am cross to my servants without cause & they give me impudence if I find the least fault, this is of the women, the men are not half as impudent as the women are. I have left a serious & what has been an all absorbing theme to a common one but the die is cast — “Caesar has past the Rubicon.” We now have to act. God be with us is my prayer & let us all be willing to die rather than free our slaves in their present uncivilized state.

There is much to unpack here. Southerners viewed the Republicans as an abolitionist party, and, coming just a year after John Brown’s raid, Lincoln’s election struck them as intolerable. Lincoln won by carrying every Northern state except New Jersey, which he split with Stephen Douglas; he was on the ballot in only five slave states.

Clearly, Brevard was deeply religious. Like many Americans, she would come to understand the war as God’s direct judgment on the nation. She sought in her life, she writes elsewhere in the diary, to “practice truth & love to God.” She often addressed God directly and hoped that by striving against sin and serving faithfully “thou wilt save us.”

On this date, her country was still the United States, but she knew what Lincoln’s election portended. The next day, the South Carolina legislature called for a convention to consider secession, and on Dec. 20 the state seceded. “I wish Lincoln & Hamlin could have died before this & saved our country disolution,” Brevard confessed. She would support secession in order to defend her way of life.

Brevard owned more than 200 slaves, and her entry illuminates the tensions in proslavery ideology. She was terrified of liberating her slaves because she believed they were uncivilized and could not possibly live side by side with whites in freedom. She said she would give up slavery, as long as blacks could be removed from the country — and, indeed, various schemes of colonization had been promulgated for decades; until 1863, Lincoln avidly supported colonizing blacks and even requested funds from Congress to do so.

Brevard believed that under the firm but benevolent tutelage of the master (or mistress), slavery transformed uncivilized blacks into contented servants. It was outsiders — Northern abolitionists — who made the slaves “restless.” In her entry she began to distinguish between male and female slaves, finding the females more “impudent,” by which she meant not only saucy but sexually promiscuous (at another point she refers to females “meddling with the husbands of others”) — but then she dropped the subject. It was taboo to speak of such things. She concluded again with her fear of the slaves being emancipated.

But at other places in her diary, she recognized that the slaves were not childlike, and that they were not happy in slavery. She erupted in anger “when I find out their feelings to me — with all I have done for them . . . I am every now & then awakened by the fact that they hate me.” She at times wished she could “cast them off without scruples of conscience,” but she believed she could not do so “without a rebuke from my Heavenly father.” And yet she knew that all her slaves, if given a chance, “would aim at freedom — ‘tis natural they should & they will try for it.”

Brevard concluded that “my Southern Sisters and brothers who think their slaves would be on our side in a civil war, will, I fear, find they have been artfully taken in.” The slaves feigned contentment to endure enslavement, but they dreamed of freedom. Many slaveholders also feigned contentment with the institution, but knew not what to do and so carried on.

Over the next four years, not only slaveholders but also non-slaveholders, in the North as well as the South, would be challenged to reconsider their assumptions about the institution. Brevard expresses the feelings of one individual, a member of the planter elite. In 1860, one quarter of Southern families owned slaves, and more than half of those who did possessed fewer than five. Less than 1 percent of slaveholders owned as many slaves as Brevard, though this elite owned approximately one-fourth of all the slaves and wielded disproportionate political power.

It is difficult to say how typical her experiences were — there probably were not many widowed plantation mistresses responsible for 200 slaves spread over several properties on the eve of the Civil War. There were even fewer who wrote with such candor and verve. But it is safe to say that many slaveholders, male as well as female, shared Brevard’s political and religious beliefs, and, over the course of the war, would also express growing ambivalence, and even bewilderment, about slaves and slavery.

Later in the day on Nov. 9, a cloudy, damp, drizzly afternoon, Brevard returned to her diary and concluded: “Nature seems to be weeping o’er our cause.”

The diary is published in John Hammond Moore, editor, “A Plantation Mistress on the Eve of the Civil War: The Diary of Keziah Goodwyn Hopkins Brevard, 1860-1861” (Columbia: University of South Carolina Press, 1993).

Louis P. Masur chairs the American Studies Program at Trinity College (CT) and is the author of “The Civil War: A Concise History” (forthcoming from Oxford University Press).

__________

Full article: http://opinionator.blogs.nytimes.com/2010/11/11/a-slaveholders-diary/

The Abolitionist’s Epiphany

Boston, Nov. 7, 1860

Throughout most of the nation’s history, it had taken weeks for votes to be counted and for Americans to find out who their new president was. But by 1860, telegraph lines – more than 50,000 miles of them – had spread so far and wide across the country that the results were in the morning editions of the next day’s papers.

Wendell Phillips

In Boston that night, Wendell Phillips strode onstage to address a large audience of abolitionists in the Tremont Theatre, just off the Common. Phillips, one of the nation’s most prominent antislavery leaders, had been skeptical of Abraham Lincoln from the beginning. To him, the unknown Midwesterner – born in Kentucky to Virginian parents, he must have noted with alarm – was going to be just one more mediocre politician to warm the presidential chair for another four years, while black Americans continued to languish in bondage. Addressing an anti-slavery meeting that summer, just after the Republicans announced their nominee, Phillips had sneered: “Who is this huckster in politics? Who is this county court advocate? … What is his recommendation? It is that nobody knows anything good or bad of him…. His recommendation is that his past is a blank.” A month later, in an article for The Liberator, the leading abolitionist newspaper, Phillips went further still: he turned in a manuscript headlined “ABRAHAM LINCOLN, THE SLAVE-HOUND OF ILLINOIS.”

From “Bowen’s Picture of Boston, 1838”

But by November, his feelings had changed. It wasn’t anything the candidate had said – for he had said almost nothing. Rather, it was how Americans had rallied around Lincoln with an outpouring of antislavery feeling. A few weeks earlier, Phillips had watched Republicans parade through Boston carrying banners reading “No More Slave Territory” and “The Pilgrims Did Not Found an Empire for Slavery.” But the most welcome sight of all was the company of “West Boston Wide Awakes”: two hundred black men marching proudly in uniform, keeping stride in perfect tempo with their white comrades, under a banner that said “God Never Made a Tyrant or a Slave.”

So now, less than 24 hours after Lincoln’s election, it was a chastened Phillips who addressed the crowd at the Tremont Theatre. “Ladies and gentlemen,” he intoned as the hall fell momentarily quiet, “if the telegraph speaks truth, for the first time in our history, the slave has chosen a President of the United States.”

Sources: Ralph Korngold, “Two Friends of Man: The Story of William Lloyd Garrison and Wendell Phillips and Their Relationship with Abraham Lincoln”; Henry Mayer, “All On Fire: William Lloyd Garrison and the Abolition of Slavery”; Boston Evening Transcript, Nov. 8, 1860.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/06/the-abolitionists-epiphany/

A Superabundance of Velocity

Nov. 9-15, 1860

The day after Lincoln’s election, revolutionary fever breaks out in South Carolina. Nearly all of the state’s federal officials resign, and the state legislature speedily passes a bill authorizing a state convention to meet on Dec. 17 to consider secession and, if it desires, to authorize it.

“The greater number is generally composed of men of sluggish tempers, slow to act . . . and so disposed to peace that they are unwilling to take early and vigorous measures for their defense, and they are almost always caught unprepared. . . . A smaller number, more expedite, awakened, active, vigorous and courageous, make amends for what they want in weight by their superabundance of velocity.” — Edmund Burke

In the Deep South states of South Carolina, Georgia, Florida, Alabama, Louisiana, Mississippi and Texas, where the idea of disunion is taken most seriously, three main groups of secessionists can be identified. There are those who are talking about talking; those who are talking about walking; and those who have already stopped talking and started walking.

The first group includes men like former congressman Alexander Stephens of Georgia. He wants to express the South’s grievances to the North, and give the new Lincoln government a chance to respond. In the second group are men like Senator Jefferson Davis of Mississippi. He’s been sounding out his fellow southern senators about participating in a collective leave-taking some time after the new year. Davis seems to envision an almost ceremonious exodus from the union, a solemn departure embarked upon more in sorrow than in anger, the better to encourage among northerners a reaction itself more sad than belligerent.

South Carolina, however, is home to the ultras of the third group, men like Robert Barnwell Rhett and his Alabama ally William Yancey. For two decades Yancey and Rhett have shouted secession whenever so much as an ominous raincloud drifted down from the North. Lately, however, they have been joined by men of a different sort, prominent men of wealth and influence, grandees who heretofore have disdained agitation. This past week, these men succeeded in inflaming passions that might well have been safely jawed to death.

The first to act was Robert Gourdin, a wealthy 48-year-old cotton broker who resides with his brother, business partner and fellow bachelor Harry in one of Charleston’s more magnificent mansions. Gourdin is one of the leaders (chairman of the executive committee, officially) of the 1860 Association, a group of Charleston’s most prominent citizens who have taken it upon themselves to promote secession among their fellow gentlemen of the South.

On the morning after Lincoln’s election, Gourdin was in the U.S. District Court in Charleston, where he was undertaking the prosaic task of serving as the foreman of a grand jury. When Judge Andrew Magrath asked him for the grand jury’s presentments, Gourdin, who with his white hair and white beard is reminiscent of Clement Clarke Moore’s St. Nick, shockingly declined. Your honor, we cannot proceed, he said. The results of yesterday’s balloting have brought to an end federal jurisdiction in South Carolina.

Of course, it did no such thing, and Judge Magrath was not about to tolerate this usurpation. It is not appropriate for the members of a grand jury to shut down a federal court, Magrath sternly responded. Such a decision must come from legitimate authorities. The judge paused, then rose to his feet: Given the probable action of the state, we must prepare to act on its wishes. This Temple of Justice, raised under the Constitution of the United States, shall not be desecrated by a mob. Instead, it will be closed by a duly authorized federal officer — me. Whereupon he declared the court closed, removed his robe, folded it over his chair, and announced that he had just administered the law of the United States for the final time.

A dramatic moment, shocking to be sure, but one perhaps better suited to an amateur theatrical than to the great stages of London or New York, let alone the pages of history. Andrew Magrath, after all, is not just a federal judge. He is also the legal adviser to the 1860 Association, and he was Robert Gourdin’s classmate in South Carolina College’s thinly populated Class of 1831.

Still, an impressed Charleston Mercury gave the performance a rave review: “There were few dry eyes among the spectators and auditors as Judge Magrath divested himself of his judicial robe.” Within hours, and with less fanfare, the U.S. District Attorney, the U.S. Marshal and the U.S. Collector of Customs Duties also resigned. Throngs celebrated in the streets. “The tea has been thrown overboard,” pronounced The Mercury.

But even a revolution requires its formalities. To authorize secession, a state convention must be held, and to do that, the state legislature has to vote to call one to assemble. Rhett and Yancey and the other hotspurs pressed legislators to act quickly; as Rhett has been saying, “Successful revolutions leave no time for reaction on the part of the people.” They have been pressing the ultras in Georgia and Mississippi and Alabama to drive the slow-moving legislators in those states to also call conventions, but they have needed South Carolina to go first.

This posed a problem: in every controversy with the federal government since 1830, South Carolina always started by going first, and always ended having gone alone. Learning from their headstrong mistakes, South Carolina’s cautious legislators in the capital in Columbia demanded convincing reassurance that at least one other state would follow South Carolina’s lead. Until then, the bill calling for a state convention would be scheduled for the customary trio of readings. Already one could feel the fervor of rebellion cooling in the torpor of the legislative process.

Reenter Robert Gourdin. The prominent businessman has long been one of the prime proponents of the just-completed rail line between Charleston and Savannah, and was one of several dozen pillars of Charleston who went to Savannah a couple of days before election day for the festive grand opening.

At the welcoming dinner, Georgians left and right encouraged South Carolina’s secessionist inclinations, although Savannah’s Francis Bartow, the dinner’s keynote speaker, was circumspect. Handsome, Yale-educated, a leading member of the bar, son-in-law of a U.S. Senator, captain of the Oglethorpe Light Infantry, Bartow was on record as opposing separate state secessions. Like any good lawyer, however, he left a loophole. If you think the time has come for disunion, we differ, he said. But if you choose to break up the union without consulting us, you have the power of precipitating us into any kind of revolution that you choose.

Not exactly a ringing endorsement, but it looked to Gourdin like a blank check. As it happened, on Friday, two days after the resignation of Magrath et al., Bartow and his fellows from Savannah were arriving on a reciprocal visit to Charleston. Gourdin laid on a spread, treating 77 Georgians and 123 South Carolinians, august and important men every one, to a banquet consisting of turtle soup, turkey, mutton, capon, ham, tongue, lamb chops, duck, shrimp, oysters, turtle steak, pies, pastries, ice cream, figs, coffee, sherry, bourbon, scotch, wine, champagne, claret, port, brandy and Madeira.

After dinner, Bartow, full of delicacies and fervent fellow feeling, went further in his remarks than he had just days earlier in Savannah. I am a Union man, he said, eloquently enumerating the virtues of the republic. But I am tired of this endless controversy. Since the storm is to come, be its fury ever so great, I court it now, in my day of vigor and strength. Put it not off until tomorrow, for we shall not be stronger by waiting. With escalating fervor, Bartow’s neighbors rose and endorsed his sentiments.

“A wild storm seemed suddenly to sweep over the minds of men,” said The Mercury. “Every man recognized that he stood in the presence of the Genius of Revolution.” Guests stormed the telegraph office to send messages urging the legislature to act, and a deputation from the dinner saddled up and headed for Columbia with news of the Georgians’ staunch devotion. On Saturday evening, little more than 24 hours after Gourdin’s waiters started ladling the turtle soup, and pretty near as swiftly as two houses of a legislature can move, a bill was passed that scheduled a convention on secession for Dec. 17, with delegates to be elected on Dec. 6.

In Charleston and Columbia, caution has been routed, and in a profound way, the dynamic of the situation has been altered. In all the other states throughout the South, the question is no longer whether we should leave. It is whether we should join.

To read more about this period, see “The Road to Disunion Volume II: Secessionists Triumphant,” by William W. Freehling, Oxford University Press, 2007; “Days of Defiance,” by Maury Klein, Alfred A. Knopf, 1997.

Jamie Malanowski has been an editor at Time, Esquire and Spy, and is the author of the novel “The Coup.”

__________

Full article: http://opinionator.blogs.nytimes.com/2010/11/14/a-superabundance-of-velocity/

Silence Before the Storm

The words of a president can convey — or conceal — a great deal of meaning. As the United States approached the central crisis in its history, there was no reason to expect great eloquence from the man whose election had precipitated that conflict. Abraham Lincoln was often described as an uncouth barbarian, and had received the least education of any presidential nominee in American history, save Andrew Jackson. Yet there he was in the fall of 1860, standing before the American people, the embodiment of their hopes and fears. In this politically charged atmosphere, the words that flowed from his pen, and his equally expressive silences, did a great deal to define the conflict coming into view. This occasional feature in the Disunion series will probe Lincoln’s language, looking at his speeches and public pronouncements during the long transition between his election eve on Nov. 5, 1860, and his First Inaugural Address on March 4, 1861.

Nov. 5, 1860

The history books are relatively quiet about how Abraham Lincoln spent the night before the great election. But we can be relatively certain that he was quiet, too. There were no speeches given in Springfield that election eve, in keeping with the self-imposed exile from the public realm that Lincoln had maintained for many months. Public silence was a new approach for him; he had already given many hundreds of speeches in his career: according to his private secretary, John Nicolay, he gave some 100 in 1858, when he failed in his quest to become a senator, and more than 50 in 1856, when he stumped for John Frémont, the Republican candidate.

A Lincoln campaign photograph taken on May 20, 1860, by William Marsh.

That he was a candidate at all stemmed from an eloquence that never failed to surprise people, for the simple reason that he looked so unlikely to give a good speech. His appearance was awkward and ungainly, his voice undistinguished and full of Western twang. We are used to hearing it, in our mind’s ear, as a basso profundo (so Hollywood always interprets it, from Raymond Massey to Disneyland’s animatronic Abe to “Bill and Ted’s Excellent Adventure”). But his law partner, William Herndon, called it “sharp — shrill piping and squeaky.” An audience member at his famous 1860 speech at Cooper Union wrote, “the first utterance of the voice was not pleasant to the ear, the tone being harsh and the key too high.”

Yet it was his words that counted: this prairie lawyer got to the nub of an issue like no one else. In 1858, when so many Americans were willing to accept compromise after compromise to maintain the union, he solemnly predicted that the United States would have to choose between one version of itself and another. At Cooper Union he dazzled a well-heeled New York audience with an exhaustive examination of the limits the founders had placed upon slavery. These oratorical triumphs had brought the great prize of the presidency within his grasp — first, the nomination at Chicago in May, and now, in November, the final days of a campaign on the verge of victory. Nicolay wrote that he quietly felt “a considerable confidence that [Lincoln] would be elected.”

It was a remarkable ascent for someone who had hungered for glory as long as his close friends could remember, but who had always mocked his own chances of achieving it. Only two years earlier, the candidate had confessed to a journalist that his wife hoped he might someday be president. Then: “Those last words he followed with a roar of laughter, with his arms around his knees, and shaking all over with mirth at his wife’s ambition. ‘Just think,’ he exclaimed, ‘of such a sucker as me as president!’”

But as Lincoln approached the prize, a curtain of silence descended around him. There were no speeches at all, save utterly perfunctory ones, when he could not avoid them. He was barely involved in his nomination, except to verify that a rail dragged in by his cousin might have been split by him, decades earlier. The New York Times, nonplussed by this non-entity who had seized the prize from local hero William H. Seward, published the news of his nomination with a gigantic misspelling, presenting “Abram” Lincoln as the Republican standard-bearer.

To an extent, this self-imposed exile descended from tradition. George Washington’s silences were often louder than his words, and Lincoln was vividly aware of his precedent. By an old rule, candidates did not campaign for the office they had spent their entire lives seeking; instead, the work was done by proxies. But Lincoln had more reason than most to remain mute. On all sides, Americans strained to misrepresent him, to declare him weak, or tyrannical, or unwilling to abolish slavery, or unwilling not to. In a highly electric atmosphere, it was essential to speak precisely, and if necessary, not to speak at all.

The candidate understood this perfectly; according to his secretary, “his self-control was simply wonderful” as he achieved “an enforced idleness” completely out of character. Day after day Americans clamored to learn more about this cipher, yet no news flowed from his office. The torrent of mail that arrived was answered with the most perfunctory replies. Autograph requests were acceded to, but requests for speeches were declined, with the lame excuse, “I am not a professional lecturer — have never got up but one lecture; and that, I think, rather a poor one.”

Lincoln enforced this discipline on his closest advisers as well. To John Nicolay, charged with a difficult interview to sound out a possible supporter, he wrote a short note that ended “commit me to nothing.” Other fragments from that frantic year reveal an almost maniacal commitment to secrecy, such as a May 17th note that read “make no contracts that will bind me” or a May 30th note that said, more to the point, “burn this, not that there is any thing wrong in it; but because it is best not to be known that I write at all.” All summer and fall, as the noise around him grew, the silence within him deepened.

But that did not mean he wasn’t paying attention. On the contrary, he monitored all communications in and out with the greatest attention to detail, like a spider feeling the most minute tugs on an intricate web. To a New York printer reproducing a copy of his Cooper Union speech, on May 31, he wrote exacting instructions, explaining why tiny words such as “quite,” “as” and “or” were essential to his argument. In short, everything had to be perfect for this most discriminating of editors: “I do not wish the sense changed, or modified, to a hair’s breadth.” It was one of the longest letters he wrote all year; a rare departure from silence, for the higher purpose of enforcing exactitude.

And yet silence did not convey uncertainty, as so many accused. The candidate was already amassing ammunition for the struggle to come, one that would be waged with written and spoken words, along with blunter instruments of persuasion. His secretary could not be certain, but left open the tantalizing possibility that the candidate was already at work on his inaugural address, well before the election confirmed that he would deliver one. As the final results came clicking across the telegraph to the Illinois State House on Nov. 6, he may already have been writing out the first sentences of the speech that would emphatically break the silence.

Sources: John Nicolay, “Lincoln in the Campaign of 1860”; George Haven Putnam, “The Speech that Won the East for Lincoln”; Henry Villard, “Lincoln on the Eve of ’61”; Harold Holzer, “Lincoln: President-Elect”; David Herbert Donald and Harold Holzer, “Lincoln in the Times”; “Complete Works of Abraham Lincoln.”

Ted Widmer is director and librarian of the John Carter Brown Library at Brown University. He was a speechwriter for President Bill Clinton and the editor of the Library of America’s two-volume “American Speeches.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/04/silence-before-the-storm/

An All-Time Great Marine

After it emerged in the 1990s that Madeleine Albright, Wesley Clark and Christopher Hitchens—notable goyim all—had discovered the existence of Jewish ancestors, I formulated Boot’s Law of Genealogy: Everyone is Jewish; some people just don’t know it yet. Further confirmation, if any were needed, comes courtesy of this new biography of Lt. Gen. Victor “Brute” Krulak, who died last year at 95.

Before reading “Brute,” I had no idea that the famous Marine was a hebe like me. Krulak was born in Denver in 1913. His father, Morris (originally Moschku), had emigrated from Russia in 1890. His mother, Bessie Zalinsky, had arrived two years earlier. Yet by the time Krulak entered the Naval Academy in 1930, he was telling everyone, Robert Coram reports, “that his great-grandfather had served in the Confederate army, that his grandfather had moved from Louisiana to Colorado to homestead 640 acres, and that his father had been born in the Colorado capital.” He claimed to be an Episcopalian, associating himself with the most socially prestigious religious denomination. His children were raised as Episcopalians; two even became ministers. (Another son, Charles, became Marine Commandant in the 1990s.)

Krulak was so determined to put his past behind him that when he married the daughter of a Navy officer from “an old, genteel East Coast family,” he did not invite a single one of his relatives to the wedding, for fear that his Jewishness would be discovered. Nor did he tell anyone that he had been married once before. At 16, he had eloped with his girlfriend. The marriage was annulled after just nine days but, if discovered, it would have kept Krulak from entering the Academy, which barred students who had ever been married.

The members of the 1933 Naval Academy rowing team tower over their coxswain, Brute Krulak.

Krulak figured, no doubt rightly, that in the starchy, snobbish officer corps of his day, a Jew with a failed marriage would not have gotten far. There was nothing he could do to hide his other handicap—his tiny size. When he entered the Academy he was 5’4” and 116 pounds. On his first day, Mr. Coram writes, “a towering midshipman looked down at him, smirked and said: ‘Well, Brute.’ ” Thus was born the nickname that Krulak loved.

He was so small that he did not meet the Marine Corps’ minimum size requirements. To get his commission, he made use of high-level connections. At Annapolis he had cultivated Holland Smith, who would go on to become a famous World War II general nicknamed “Howlin’ Mad.” Smith and future commandant Lemuel Shepherd would turbo-charge Krulak’s ascent.

Brute rewarded their trust by becoming one “squared-away” Marine. You do not have to be convinced by Mr. Coram’s overblown claim that Krulak was “the most important officer in the history of the United States Marine Corps” to recognize his signal contributions.

In 1937, while stationed in Shanghai, Krulak observed Japan’s use of landing craft with “large, flat bows” that opened on a beach, “allowing the boats to disgorge vehicles and personnel on dry land.” At the time the U.S. had nothing comparable. Krulak was a prime mover in getting the Marine Corps to adopt similar boats made by an obscure shipyard (Higgins Industries of New Orleans). The Higgins boat would make possible all of the American amphibious assaults of World War II, from Normandy to Iwo Jima.

Having a major role in the development of the landing craft would, by itself, have been enough to secure Krulak’s place as a military innovator. But he further burnished his reputation when, immediately after World War II, he pushed the Marine Corps to adopt helicopters ahead of the other services. He realized their potential not only to evacuate wounded and move supplies but also to outflank the enemy in battle. Krulak, still only a colonel, also played a key behind-the-scenes role in rallying Congress to defeat President Truman’s efforts to severely trim the Marine Corps’ size and mission. This led to Truman’s famous complaint that the Marine Corps has a “propaganda machine that is almost equal to Stalin’s.”

Krulak’s combat exploits, while distinguished, were far too brief to put him in the company of such Marine legends as Lewis “Chesty” Puller or Dan Daly. He commanded a battalion sent in 1943 to raid the Pacific island of Choiseul to distract the Japanese from the invasion of Bougainville. He won the Navy Cross, his service’s second-highest decoration, but the raid was a minor affair that lasted just seven days. It is remembered primarily because one of the PT boats that evacuated Krulak’s men was commanded by a young officer named John F. Kennedy. When Kennedy became a senator, Krulak claimed to have gotten chummy with him during the war. This was just another of Krulak’s tall tales; the two never met in the Pacific. (The Marine contingent that Kennedy evacuated was led by Krulak’s second-in-command.)

It would be easy to condemn Krulak for his dissembling were it not for the fact that he wound up sacrificing his career by telling a painful truth. In the 1960s he was commander of Fleet Marine Force-Pacific, which oversaw the Marines fighting in Vietnam. The most successful Marine program, known as the Combined Action Platoons (CAP), sent squads to protect villages alongside South Vietnamese militia. This was a more effective counterinsurgency approach than the big-unit sweeps favored by Gen. William Westmoreland. After initially claiming that conventional tactics were a big success (a part of his history that Mr. Coram glosses over), Krulak became an ardent convert to counterinsurgency and a big booster of CAP.

In 1967 he told President Johnson that if the U.S. approach did not change, “he would lose the war and . . . the next election.” It wasn’t what LBJ wanted to hear, and it probably cost Krulak a chance to get four stars and become commandant. He was forced to retire the next year. He could take solace, however, in having displayed more moral courage than his seniors who went along with the administration’s failed strategy.

Mr. Coram, a reporter turned biographer, does a good job of telling Krulak’s story in clear, simple prose. His account is marred only by relentless Marine boosterism. The Battle of Belleau Wood was a notable Marine victory in World War I, but contra Mr. Coram, it does not belong alongside Cannae, Gaugamela and Agincourt—three of the most significant battles in history. Mr. Coram also claims that the roots of the new counterinsurgency doctrine produced by Gen. David Petraeus “could be found in the Marine Corps during the Vietnam War.” The Marine experience was significant, but other wars where the Marines didn’t fight (e.g., Algeria and Malaya) were more influential—as was Gen. Petraeus’s own experience in Iraq.

These are the sort of exaggerations you expect of a retired gunny. Mr. Coram, however, isn’t a “devil dog” himself. He writes that he was often asked by Marines: “How can you write about the Marine Corps when you were not a Marine?” His answer, apparently, is to adopt a Mariner-than-thou tone. That annoying tic aside, he has produced a valuable work that significantly revises our understanding of—but does not diminish our respect for—one of the all-time great Marines.

Mr. Boot, a senior fellow at the Council on Foreign Relations, won the Marine Corps Heritage Foundation’s General Wallace M. Greene Jr. Award for “The Savage Wars of Peace: Small Wars and the Rise of American Power” (2002).

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703856504575600843375911252.html

Return of the Samurai

Tokyo Bay, Nov. 10, 1860

A contingent of some 60 Japanese ambassadors and their staff returned to Tokyo on Nov. 10, 1860, after a long trip to the United States.

Strange music, discordant to local ears, echoed across the harbor: a brass band playing “Home, Sweet Home” and “Auld Lang Syne.” Dozens of small craft, bright banners fluttering at their sterns, clustered around the great black hull of the foreign frigate. On deck, ranks of blue-coated Marines stood at attention; high overhead, cheering sailors lined the yardarms and clung to the rigging. One by one, the boats began drawing away, each carrying a little group of silk-robed men who fluttered paper fans in farewell. Tears shone on the faces of the Japanese. American eyes – to judge from the newspaper accounts – were dry.

“Amerikakoku”: print by a Japanese artist, c. 1865. Clearly the balloon ascension in Philadelphia had left a strong impression on the travelers.

Almost 10 months after departing from their homeland, the shogun’s six dozen envoys had returned from visiting the United States. They had eaten ice cream in San Francisco, gone on a shopping spree through New York, watched a balloon ascension in Philadelphia and been feted at the White House by President James Buchanan. Their enjoyment of the trip had been dampened somewhat by the fact that their “translators” spoke only broken English and that not a single American citizen, as yet, spoke Japanese. (Since Dutch traders had been coming to Japan for centuries, a number of educated Japanese spoke that language – so communications with English-speakers usually required two interpreters: one of them Japanese-to-Dutch, the other Dutch-to-English.)

Still, the travelers had been impressed by how frequently Americans combed their hair and by the ingeniousness of Western bathroom facilities – though the envoys caused a near-scandal at their Washington hotel when several were found naked together in the same bathtub: a Japanese, though apparently not American, custom. (Some of the envoys, for their part, were shocked when they visited a Washington brothel and found multiple couples having sex in the same room – apparently an American, though not Japanese, custom.) Several kept diaries of their journey; it is clear from these that despite linguistic and cultural differences, they quickly grasped certain peculiarities of local politics. One diarist noted perplexedly that in the United States, “anyone of good character except a Negro may be elected president.”

Americans’ fascination with the island nation on the other side of the world – the “double-bolted land,” as Herman Melville called it – which had been growing ever since Commodore Matthew C. Perry forced it open to Western trade in 1854, now reached a crescendo. Japanese words even entered English slang; Abraham Lincoln’s secretaries, John Hay and John Nicolay, used one of them as a nickname for their boss behind his back: “The Tycoon.” Taikun was a title of the chief shogun, and it suggested – at least to the minds of Hay and Nicolay – not just a wise and powerful ruler but one of deep Asiatic inscrutability.

A sketch of Washington, D.C., by a member of the Japanese delegation, 1860. The Capitol is visible (the artist has finished its incomplete dome), along with the base of the Washington Monument and the Long Bridge across the Potomac to Virginia.

Not all Americans were enthralled with the diplomatic visit, though. One editorial in the Times complained about the Japanese being given information on the latest military technology. “Our Government has expended nearly two millions of dollars in this attempt to cultivate the good will of the Japanese,” the article noted sourly. “We shall be agreeably disappointed if it does not cost us, some day, ten times that sum to avert the results of our excessive civility.”

“Shosha – Amerikajin” (“True Picture of Americans”), c. 1861.

Meanwhile, Japanese were becoming fascinated with Americans as well. Artists created woodblock prints depicting life in the far-off land, basing them on the envoys’ accounts and sketches, observations of Western visitors and a healthy dose of imagination. The results – some of which can be seen on this page – were fanciful, but no more so than Americans’ impressions of Japan.

The steam frigate USS Niagara departed New York on June 30, its cargo holds packed with the envoys’ souvenirs and with official gifts – including, for the shogun himself, a gold medal from Tiffany’s bearing President Buchanan’s likeness. The ship traveled eastward via the Cape of Good Hope, Djakarta, and Hong Kong. Though far from home and cut off from news, its American officers were not unmindful of current events. On Nov. 6, amid a gale in the East China Sea, they celebrated Election Day aboard ship by circulating a ballot box; some even posted placards with anti-secession slogans or racist slurs against Republicans.

Election results from that far-off precinct crossed the Pacific via the Sandwich Islands (Hawaii) and traveled over the Great Plains by Pony Express to reach The New York Times almost three months later. In early 1861, the newspaper dutifully reported the tally. The Constitutional Union ticket of John Bell and Edward Everett took first, with 14 votes; Abraham Lincoln and Hannibal Hamlin, winners on land, received just three votes at sea.

Sources: New York Times, Oct. 6, 1860; Jan. 28 and 29, Feb. 12, 1861 (news report and editorial); Harper’s Weekly, Feb. 9, 1861; Masao Miyoshi, “As We Saw Them: The First Japanese Embassy to the United States (1860)”; Masayiko Kanesaboro Yanagawa, “The First Japanese Mission to the United States”; Dana B. Young, “The Voyage of the Kanrin Maru: To San Francisco, 1860” (California History, Winter 1983); Dallas Finn, “Guests of the Nation: The Japanese Delegation to the Buchanan White House” (White House History, 12).

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/10/return-of-the-samurai/?ref=opinion

Premonition at Vicksburg

Vicksburg, Miss., Nov. 3, 1860

Antebellum Vicksburg

During the last days of the campaign, while Lincoln stayed close to home and held his tongue, another man who would soon be president was somewhat less coy.

For six full weeks, Senator Jefferson Davis had been barnstorming through Mississippi on behalf of the Southern Democrats. The state was ablaze with excitement, even though — or perhaps because — most knew that the party’s candidate was bound for defeat. Amid torchlight marches, barbecues and fireworks shows, orators were preaching less about what would happen on election day itself than about what might follow it. At Vicksburg on Nov. 3, Davis told a crowd:

If Mississippi in her sovereign capacity decides to submit to the rule of an arrogant and sectional North, then I will sit me down as one upon whose brow the brand of degradation and infamy has been written, and bear my portion of the bitter trial. But if, on the other hand, Mississippi decides to resist the hands that would tarnish the bright star which represents her on the National Flag, then I will come at your bidding, whether by day or by night, and pluck that star from the galaxy and place it upon a banner of its own. I will plant it upon the crest of battle, and gathering around me the nucleus of Mississippi’s best and bravest, will welcome the invader to the harvest of death; and future generations will point to a small hillock upon our border, which will tell the reception with which the invader met upon our soil.

Not all of his state’s “best and bravest” shared Davis’s apparent eagerness to welcome federal troops to “the harvest of death.” The Vicksburg Whig’s editor denounced the senator’s oration as showing “how inordinate vanity, operating upon a moderate intellect, flattered by past successes, may influence its possessor to the most inflated of self-laudation.”

But death would indeed reap its ample harvest at Vicksburg, less than three years later.

Sources: William J. Cooper, “Jefferson Davis, American”; Percy Lee Rainwater, “Mississippi: Storm Center of Secession, 1856-1861”; Vicksburg Whig, Nov. 7, 1860.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/02/premonition-at-vicksburg/

Lincoln Wins. Now What?

Nov. 7, 1860

A cartoon from the campaign.

Yesterday, the start of the most exciting day in the history of Springfield, Ill., could not wait for the sun. At 3 a.m., somebody got Election Day started with volleys of cannon fire, and after that there were incessant and spontaneous eruptions of cheering and singing all day long. A moment of delirium erupted in mid-afternoon, when the city’s favorite citizen emerged from his law office and went to vote, taking care to slice his name off the top of the ballot so as to prevent accusations that he had voted for himself.

Abraham Lincoln’s campaign button.

After the sun went down, he joined other Republican stalwarts in the Capitol building, where they eagerly received the early returns that were trotted over from the telegraph office.

There were no surprises: the long-settled Yankees in Maine and New Hampshire and pioneering Germans of Michigan and Wisconsin delivered the expected victories. And then came news from Illinois: “We have stood fine. Victory has come.” And then from Indiana: “Indiana over twenty thousand for honest Old Abe.”

The throngs in the streets cheered every report, every step towards the electoral college number, but news from the big Eastern states was coming painfully slowly, and finally the candidate and his closest associates decamped from the Capitol and invaded the narrow offices of the Illinois and Mississippi Telegraph Company. The advisers paced the floorboards, jumping at every eruption of the rapid clacking of Morse’s machine, while the nominee parked on the couch, seemingly at ease with either outcome awaiting him.

It wasn’t until after 10 that reports of victory in Pennsylvania arrived in the form of a telegram from the canny vote-counter Simon Cameron, the political boss of the Keystone State, who tucked within his state’s tallies joyfully positive news about New York: “Hon. Abe Lincoln, Penna seventy thousand for you. New York safe. Glory enough.”

Not until 2 a.m. did official results from New York arrive, and the expected close contest in the make-or-break state never appeared: the one-time rail-splitter won by 50,000 votes. His men cheered, and broke out into an impromptu rendition of “Ain’t You Glad You Joined the Republicans?” Outside, pandemonium had been unleashed, but Abraham Lincoln partook of none of it, and instead put on his hat and walked home to bed.

“The Republican pulse continues to beat high,” exulted a correspondent for The New York Times. “Chanticleer is perched on the back of the American Eagle, and with flapping wings and a sonorous note proclaims his joy at the victory. The return of the first Napoleon from Elba did not create a greater excitement than the returns for the present election.”

Well should he sing, for the days of song will end soon enough. Mr. Lincoln is indeed the president-elect, but barely by a whisker, and what exactly one means by “the United States” any more is apt to become a topic of some heated discussion. Lincoln won his parlay, taking 16 of the 17 Northern states that he set his sights upon, including the hard-fought New York, and most by a solid majority.

But there were states where he was more lucky than popular, like California, where all four candidates polled significant numbers. Lincoln won only 32.3 percent of the ballots, but managed to eke out a victory and capture the state’s four electoral votes by the wafer-thin margin of 734 votes. A similar, if slightly less dramatic, story played out in Oregon, where Lincoln’s victory margin was fewer than 1,200 votes. In his home state of Illinois, facing Mr. Douglas, Mr. Lincoln won by fewer than 12,000 out of 350,000 votes cast, a clear win but hardly a romp.

The Lincoln and Hamlin election ticket from 1860.

The South, of course, presents a vastly different picture. In the states of Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, Tennessee and Texas, Mr. Lincoln received a combined total of no votes. None. True, his name wasn’t even listed on the ballot, but that seems to be a mere technical oversight that would have had no great consequence. After all, in Virginia, the largest and wealthiest southern state, Mr. Lincoln was on the ballot, and there he tallied a total of 1,887 votes, or just 1.1 percent of the total cast. The results were even worse in Kentucky, his place of birth. One might have thought that sheer native pride should have earned him more than 1,364 of the 146,216 votes cast, but perhaps Kentuckians resented that he deserted them at such a tender age.

All told, Mr. Lincoln will assume the presidency in March on the strength of his muscular 180 electoral votes, and despite the puny 39.8 percent of the popular vote he accumulated.

The narrowness of this fragile mandate (if that word can even be used) naturally invites speculation about what might have been. The year began with Mr. Douglas standing, like Franklin Pierce and James Buchanan before him, as an electable Northerner who could be depended on to maintain southern prerogatives. But from the moment last April when fire-eating Southern Democrats made it clear that they would rather punish Mr. Douglas for his vote against the proslavery Lecompton constitution two years ago than win the White House in the fall, it was ordained that the Little Giant, so long touted as a certain president-to-be, was steering a doomed vessel.

Yet there were times when his campaign picked up speed, and at such moments Mr. Douglas seemed very close to capturing enough support to thwart Mr. Lincoln’s northern sweep and deny him his electoral college majority. Had that happened, Mr. Douglas would be sitting solidly in second place. He would have demonstrated support both north and south, and he would offer the South preservation of the status quo. That might well have been enough to pacify the reckless Southern Democrats who shunned him in the spring, and to win their support in the House of Representatives.

But for every Douglas surge there was a Douglas blunder. Final tallies show that wherever Mr. Douglas actually campaigned in New York, he won more votes than President Buchanan took when he captured the state four years ago. But instead of investing his time in the Empire State, Mr. Douglas headed into the inhospitable South, where he did the seemingly impossible — he managed to make southern voters dislike him even more than they already did. Appearing before a crowd in Virginia, he was asked if the election of Mr. Lincoln would justify secession. A politician of Mr. Douglas’s experience should have known how to handle this kind of question with finesse, but instead he offered the one answer certain to damage him. No, he told the crowd.

He might have stopped at that, but perhaps figuring that, having jumped the fence, he may as well have a picnic, he told the crowd, It is the duty of the president of the United States to enforce the laws of the United States, and if Mr. Lincoln is the winner, I will do all in my power to help the government do so. With that answer, Mr. Douglas dismissed the purported right to secede that the South so cherishes, and surrendered his claim as the only man who could be counted on to keep the union together.

Now that task falls to a president who received fewer than four votes in 10; a president who is purely the creature of only one section of the country; a president who, apart from one undistinguished term in the House of Representatives a decade ago (and a period in the state legislature), has no experience in public office; a president who comes from a Republican party that has been stitched together from various interests, who will be asked to work with a Congress whose two houses are controlled by Democrats.

The fire eaters in South Carolina have already announced that they will immediately introduce a bill of secession. But that has been something they have been itching to do for years; as any doctor or fireman will tell you, sometimes the best way to end a fever or a blaze is to just let the thing burn out. Not everyone in the South is a slave owner, and not every slave owner is a disunionist. If any of the firebrands would take the time to listen to what Mr. Lincoln has actually said, they would see that he is no raving abolitionist like Sen. William Seward and his ilk. (Indeed, anti-slavery activist Wendell Phillips sneeringly calls Mr. Lincoln a “huckster” and William Lloyd Garrison says he has “not one drop of anti-slavery blood in his veins.”)

Mr. Lincoln has made his position clear: while he is against slavery and calls it evil, he would not do anything — more to the point, he is powerless under the Constitution to do anything — to end slavery where the Constitution already permits it. The line that he has drawn is against an expansion of slavery in the territories, but look at a map: there are no more territories held by the United States in North America that are in dispute. On every other matter relating to slavery he has been silent. And ultimately, they ought to realize that Mr. Lincoln may not be an experienced politician, or have strong political support, but that by training and avocation, he is a lawyer, and a good one. And almost every lawyer will tell you that it is cheaper to settle a matter quietly than to fight it out in court.

(To read more about these events, see “Lincoln for President,” by Bruce Chadwick, published by Sourcebooks, Inc., 2009; “Lincoln: An Illustrated Biography,” by Philip B. Kunhardt Jr., Philip B. Kunhardt III, and Peter W. Kunhardt, published by Alfred A. Knopf, 1993; and “The New York Times Complete Civil War 1861-1865,” edited by Harold Holzer and Craig L. Symonds, published by Black Dog & Leventhal, 2010.)

Note: An earlier version of this piece failed to mention that Lincoln had served in the Illinois State Legislature.

Jamie Malanowski has been an editor at Time, Esquire and Spy, and is the author of the novel “The Coup.”

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/07/lincoln-wins-now-what/

Against Rebellion

Why did some colonials remain loyal to the king?

Thomas B. Allen begins “Tories” with an anecdote that the author apparently considers a useful way of illustrating his theme. A column of American rebel soldiers was marching through a Virginia town in 1777, he tells us, when a shoemaker rushed out of his shop and shouted: “Hurrah for King George!” None of the soldiers paid attention to him. When the troops stopped to rest in a woods, the shoemaker pursued them, hurrahing for King George. Once more the men ignored him. When the loyalist shouted his defiance virtually in the ear of the commanding general, the general ordered him taken to a nearby river and ducked. When that ordeal failed to silence the shoemaker, the general ordered him tarred and feathered. His weeping wife and four daughters pleaded with him to be quiet. He was then drummed out of town.

The reader may wonder: Was the shoemaker drunk? Suicidal? Insane? Mr. Allen is less curious—he sees simply “an act of casual cruelty upon a stubborn Tory,” adding: “How many other Tories were taunted, tortured or lynched we will never know.”

It is unfortunate that Mr. Allen frames “Tories” as the tale of early victims of American political rage, because there is something to be said for focusing on the loyalists of the Revolutionary era. It is a fascinating and relatively neglected subject, one that the author tackles with verve as he spins a narrative starting with a nascent antitax insurrection in Boston in 1768. When British soldiers arrived to clamp down on restive Bostonians, Mr. Allen says, “Loyalists welcomed the Redcoats as protectors; Patriots and their supporters in the streets saw the soldiers as an occupation force, sent by Britain to tame or even punish dissent.”

By 1775, Boston was a garrisoned city where Loyalists courted trouble by fraternizing with the Redcoats. Mr. Allen relates the story of merchant Thomas Amory, who invited a few British officers to his house one night in nearby Milton, Mass. “Word reached the Patriots,” and soon “a brick-throwing mob attacked the house.” One of the bricks, Mr. Allen writes, “smashed a windowpane in his young daughter’s room and landed on her bed.”

The officers scooted out the back door while Amory tried to calm the crowd. He would later move to Watertown, about eight miles west of the city. Countless other loyalists also “fled real or imagined mobs.” How many Tories resided in the colonies at the outbreak of war? Using the latest research, Mr. Allen reports that the old estimate—a third of Americans—is outdated; under scrutiny, the number has dwindled to about 20%, or roughly half a million people. But they were a combative minority: When war came, loyalists formed more than 50 military units that often fought well beside their British allies.

“Tories” ably evokes the sense of fear felt by the loyalists, but Mr. Allen neglects to look at why the rebels took an increasingly angry view of those who sided with the British. The rebels knew they were risking their lives and property to defy King George, and they were enraged by loyalists’ eagerness for the sort of awful vengeance that the crown had previously unleashed on Scottish and Irish rebels. After the Continental Army narrowly averted total collapse in 1776, gloating loyalists—noting that the three sevens in 1777 looked like gibbets—began calling it “the year of the hangman.” The rebels, they hoped, would soon be swinging from British rope.

Why did some colonials remain loyal to the king while most did not? Mr. Allen does not dwell on the subject—he is more interested in what happened than why. But others have considered the loyalists’ motivations. Historian Leonard Labaree, in a pioneering study in 1948, found seven psychological reasons, including the belief that resistance to the legitimate government was morally wrong and a fear of anarchy if the lower classes were encouraged to run wild. Another important factor: Unlike the rebels, who tended to come from families that had lived in America for several generations, many loyalists were born in England. These first-generation immigrants brought with them a sense of British liberty, steeped in obeisance to the king and his aristocrats, while in the colonies a longing for a “more equal liberty”—John Adams’s declared goal for the rebels—had already taken hold.

One of the book’s themes is that the conflict between the loyalists and rebels amounted to “America’s first civil war.” But not until the later pages, when the fighting with the British shifts to the South, does a semblance of civil war become evident. The Scotch-Irish Presbyterians of the Southern backcountry had a history of feuding with wealthy coastal planters, who supported the insurrection. The ingrained antipathy for the planters, more than any fondness for King George, prompted the backcountry boys to ally themselves with the British—leading to vicious seesaw fighting.

A climax to this war within the larger war came in late 1780 with the battle of Kings Mountain in South Carolina, a purely American versus American, loyalist versus rebel fight. The rebels won a total victory, and in the process quashed British dreams of creating a native-grown loyalist army that might provide a decisive advantage.

The best section of “Tories” deals with black loyalists, the thousands of runaway slaves who responded to a British offer of freedom in return for military service. The British used these men largely as laborers, not fighters. In making peace at war’s end, the politicians agreed to return the runaways. But Gen. Guy Carleton, the last British commander in America, refused to do so. About 3,000 blacks were among the 80,000 loyalists who retreated to Canada and the West Indies when hostilities ended.

A thousand-strong contingent of these former slaves in Halifax, Nova Scotia, became disenchanted with their treatment there, Mr. Allen notes. The black loyalists sailed for West Africa—present-day Sierra Leone—where in 1792 they established a settlement they called Freetown. For these Americans, the yearning for a more equal liberty did not end with the treaty of peace.

Mr. Fleming’s books on the American Revolution include “Now We Are Enemies,” recently republished in a 50th anniversary edition by American History Press. He is the senior scholar at the American Revolution Center in Philadelphia.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703506904575592613344540820.html

How (and Where) Lincoln Won

The Civil War was the largest, bloodiest conflict fought on American soil, but today its geography — from elections and secession to the back-and-forth of military struggle — is often obscure. This article is the first in a series in which Susan Schulten, a historian at the University of Denver, uses maps to illuminate the secession crisis and the war it produced.

Abraham Lincoln won a decisive victory on Nov. 6, 1860, with more than double the Electoral College votes of John C. Breckinridge, the runner-up. The election also sparked a crisis in which 11 Southern states left the Union, formed a new country and fell into a disastrous war with the North, all within six months of Lincoln’s win. In other words, to understand the origins of the war, we have to understand not just why Lincoln won, but how and where.

Most importantly, Lincoln’s victory came entirely from the states of the North, Midwest and far West. He failed to win a single slave state, and 10 of the 15 even refused to place him on the ballot. Such a sectional win, following decades of growing sectional tensions, seemed proof to many that the country was irredeemably divided.

Normally a candidate with such geographically limited appeal wouldn’t stand a chance. But Lincoln had a few advantages. For one, he benefited from a deep divide among the Democrats and between the Upper and Lower South. Many Democrats in the South refused to support northerner Stephen Douglas as the party’s nominee; following a convention in Charleston, these Southern Democrats formed a separate party and chose Breckinridge as their candidate. Meanwhile, members of the defunct Whig Party in the South — unable to support a Democrat or a Republican and hoping to defuse the crisis — formed the Constitutional Union Party and nominated John Bell as their candidate.

The result was essentially two separate races, with Lincoln and Douglas vying for supremacy in the North while Breckinridge and Bell split the Southern vote. Lincoln trounced Douglas, conceding only Missouri and three of New Jersey’s seven electors to his opponent. In the South, the Southern Democrats won every state except Virginia, Kentucky and Tennessee, which supported the Constitutional Union Party.

The Democratic split certainly widened Lincoln’s margin of victory, but what really mattered was sectional division and the national population distribution. Few Northern Democrats voted for Breckinridge, certainly not enough to have put Douglas over the top in any of Lincoln’s states. Nor would it have mattered if the Democrats had united behind Breckinridge and thus delivered him Missouri and the three New Jersey votes. Even if Breckinridge had captured all of Bell’s votes as well, Lincoln’s victory in the populous states of New York, Ohio and Pennsylvania made his election all but certain.

Given the divided electorate, few observers were surprised when South Carolina began the secession process just weeks later, and announced its break from the Union just before Christmas. By Feb. 1, Mississippi, Florida, Alabama, Georgia, Louisiana and Texas had done the same. Each of these seven states had voted Southern Democrat in 1860, outraged by the prospect of a president opposed to the extension of slavery.

Yet voting patterns were more complicated than the map indicates. In Georgia and Louisiana, Unionists provided a powerful counterweight to the Southern Democrats. Similarly, a vote for Southern Democrats did not always predict secession. While a majority in Delaware and Maryland voted Southern Democrat, those states remained loyal. Conversely, in Tennessee Bell actually defeated Breckinridge, even though that state seceded in early June. Kentucky and North Carolina were split between the two parties, and while the former remained in the Union, the latter did not. The winner-take-all model of the Electoral College obscures this complexity.

Equally intriguing is the relationship between secession and slavery. The pace of secession roughly correlates with the proportion of the slave population in each state. The first six states to secede enslaved well over 40 percent of their respective populations. The seventh—Texas—is an interesting outlier: though slaves made up only 30 percent of the population, the institution was rapidly growing, and fostered a secessionist spirit that overwhelmed Gov. Sam Houston’s Unionist sentiment.

After Texas voted to secede on Feb. 1, months passed without another state joining the Confederacy. The strong showing of the Constitutional Union Party in the Upper South further convinced Lincoln that secessionists were a distinct minority. But the crisis at Fort Sumter ended hopes of a rebellion limited to the Lower South, and soon Americans found themselves in one of history’s bloodiest wars.

Sources: Two works that do particular justice to the complexity of the sectional crisis are David Potter and Don Fehrenbacher, “The Impending Crisis, 1848-1861”; and Daniel Crofts, “Reluctant Confederates: Upper South Unionists in the Secession Crisis.”

Susan Schulten is a history professor at the University of Denver and the author of “The Geographical Imagination in America, 1880-1950.” She is writing a book about the rise of thematic mapping in the United States.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/10/how-and-where-lincoln-won/

Hearing the Returns with Mr. Lincoln

In 1860, a cub reporter named Samuel R. Weed scored the assignment of a lifetime when his St. Louis newspaper sent him to spend Election Day with the man who might become America’s president. Surprisingly, no one else had thought of it, and Weed arrived to find a relaxed Abraham Lincoln, greeting him “as calmly and as amiably as if he had started on a picnic.”

He dutifully recorded the ordinary ways Lincoln spent this extraordinary day: sitting in a chair tipped backwards, endlessly dispensing witticisms as if from a secret rivulet inside him, avoiding crowds at times, and perhaps avoiding Samuel R. Weed as well (he checks out for lunch at one point).

The result is a riveting, human portrait. Here is Lincoln getting the news that New York, the swing state, has swung; there he is, smothered with kisses by a bevy of young ladies, curiously unsupervised. The reporting is crisp; we hear shouting, and church bells, and the occasional cannon. That is not the only premonition of war — Lincoln says no fewer than six times that his troubles have just begun. Quite an election night.

But for all this good reporting, the piece was not written until 1882, and not published until 1932, when it appeared in The New York Times on Valentine’s Day (was it the kisses?). It also appeared in a 1945 compilation of Lincoln memories, with no explanation for the long delay in bringing an essential Lincoln story before the public.

Sources: The New York Times, Feb. 14, 1932; Rufus Rockwell Wilson, ed., “Intimate Memories of Lincoln.”

Ted Widmer is director and librarian of the John Carter Brown Library at Brown University. He was a speechwriter for President Bill Clinton and the editor of the Library of America’s two-volume “American Speeches.”

Note: The 1932 article has not been reproduced (see the blog).

__________

Full article: http://opinionator.blogs.nytimes.com/2010/11/05/hearing-the-returns-with-mr-lincoln/

Head-Stompers, Wrench-Swingers and Wide Awakes

New York, Nov. 2, 1860

Young Republicans with axes! New York firemen run amok!

Welcome to election week, 1860.

Hurled brickbats, smashed glass and howled curses were the soundtrack of American electoral politics a century and a half ago. The oratorical eloquence that most people today associate with the 19th century — those resonant fanfares of prose carved upon monuments, enshrined in history textbooks, hammered into the brains of 10th graders — often provided little more than the faintest melodic line, drowned out amid the percussive din. Last week’s notorious “head-stomping” incident outside a Senate debate in Kentucky, footage of which has drawn nationwide condemnation and half a million views on YouTube, seems almost gentle in comparison.

On the last Friday night before the 1860 election, Senator William H. Seward delivered a rousing Republican campaign address to a large outdoor gathering on 14th Street in Manhattan. Afterward, crowds of pro-Lincoln “Wide Awakes” fanned out through the surrounding area. Wide Awakes, members of an organization with strong paramilitary overtones, could be a menacing sight: they wore military-style caps and shrouded themselves in long black capes made of a shiny fabric that reflected the flames of the torches they carried. Some strapped axes to their backs, in tribute to their rail-splitting hero.

New York Wide Awakes marching, autumn 1860.

According to the next day’s Times and other papers, things began to spin out of control when supporters of a rival presidential contender, John Bell, charged toward the Lincoln men, “calling them ‘negro stealers,’ ‘sons of b____s,’ &c.” At the corner of 12th Street and Fourth Avenue, several dozen volunteer firemen — members of Engine Company 23 — joined the fray, swinging roundhouse blows with clubs and heavy iron wrenches that the Wide Awakes tried to parry with their torches. But the tide of battle turned when the young Republicans brought their Lincoln axes into play. They chased the enemy back into the company firehouse and promptly began smashing down its barricaded doors, as other idealistic marchers flung bricks and cobblestones. (News reports are vague about what finally ended the fracas.)

Similar disturbances happened almost daily in various East Coast cities. In Baltimore the previous night, Republican marchers had been pelted with stones and rotten eggs. (That city was justly known as “Mobtown”; dozens sometimes died in a single campaign season there.) In Washington on Election Day itself, pro-slavery forces stormed a Wide Awake clubhouse a block or two from the Capitol. The attackers practically demolished the building and were only narrowly prevented from burning the ruin — along with several Wide Awakes trapped on the third floor — by the timely arrival of police.

There was little talk of bipartisan civility during that particular election cycle.

Sources:

New York Times, Nov. 3, 1860; New York Tribune, Nov. 3, 1860; New York Herald, Nov. 5, 1860; Baltimore Sun, Nov. 2, 1860; Public Ledger (Philadelphia), Nov. 3, 1860; Jon Grinspan, “‘Young Men for War’: The Wide Awakes and Lincoln’s 1860 Presidential Campaign” (Journal of American History, September 2009); David Grimsted, “American Mobbing, 1828-1861: Toward Civil War.”

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/01/head-stompers-wrench-swingers-and-wide-awakes/

Trying to Show the Unknowable

The ordeals, strategies, problems and triumphs of Holocaust literature.

“A novel about Auschwitz,” Elie Wiesel once wrote, “is not a novel, or it is not about Auschwitz.” The testimony of Holocaust survivors, he seemed to imply, is inherently true, while literary representations of the Holocaust are, at some level, inherently false.

Of course, it is not a simple matter. As Ruth Franklin argues in “A Thousand Darknesses,” her superb study of Holocaust literature, every canonical work, including “Night” (1958), Mr. Wiesel’s famous book about his imprisonment in Auschwitz, blurs the distinction between fiction and reality. If the best works do so self-consciously, as she contends, there is always the danger that certain works will cross the line into bad faith, inviting charges of distortion or fraud.

And indeed, in recent years, a number of Holocaust stories have been exposed as hoaxes. The case of Binjamin Wilkomirski stands out. After being lauded by survivors for faithfully conveying their ordeal, his purported memoir, “Fragments: Memories of a Wartime Childhood” (1996), became a scandal when it was discovered to be, like the author’s name and personal history, a fabrication. Ms. Franklin does not discuss “Fragments” in detail, but it offers a touchstone for her investigation, showing the tension between Holocaust testimony and the fiction derived from it—in this case, fiction posing as lived experience.

Despite the hazards of trying to represent events often said to be “unknowable,” Ms. Franklin insists on the moral authority of the imagination and shows the power of literature to uncover the truths that are latent in documentary material. There is the case, for instance, of the postwar German novelist Wolfgang Koeppen, who rewrote an obscure Holocaust memoir by one Jakob Littner and turned it into a superior work of art. For “Schindler’s Ark” (1982), the novelist Thomas Keneally based his narrative on careful research of Oskar Schindler’s life, almost to the point of making the book (as one critic said) a “workaday piece of reportage” rather than a textured work of fiction. For the film “Schindler’s List” (1993), as Ms. Franklin observes, Steven Spielberg more freely manipulated the factual history to create, for his audience, the potent illusion of “witnessing” the Holocaust.

Ms. Franklin is especially drawn to difficult cases. Tadeusz Borowski, a non-Jewish Pole, was a prisoner at Auschwitz and served in the camp’s Sonderkommando—the squad that processed the dead and their belongings—if only for a day. Although fellow survivors reported that he acted heroically in the camp, he suffered, Ms. Franklin concludes, a “psychological wound.” In the stories collected in “This Way for the Gas, Ladies and Gentlemen,” published soon after the war, he adopted the voice of a cynical narrator who alternately mocks Jewish victims and recoils in disgust at their suffering. By implicating himself in the workings of the camp, Ms. Franklin says, Borowski found a powerful way to explore the tangled roles of victim and perpetrator.

In other cases, fiction’s autobiographical core is even more perplexing. Imre Kertész draws on his own experience in Auschwitz for his novel “Fatelessness” (1975), but the naiveté of his narrative voice denies us the consolation of straightforward testimony. “We can never be certain,” Ms. Franklin says, “of an episode’s truth-value.” In his quasi-autobiographical novel “Blood From the Sky” (1961) the Ukrainian-born French writer Piotr Rawicz presents two capricious storytellers who deliberately obscure facts and recount brutality in language at once florid and sardonic. Together they create a form of “anti-witness”—not false witness but witness whose immersion in evil has made mental and moral clarity impossible.

Nonfiction writers may seem to be more trustworthy, but we must not always take their words at face value, Ms. Franklin warns. Primo Levi, whose profession as a chemist helped him survive Auschwitz, presented his own experience—in “If This Is a Man” (1947)—in language of scientific clarity. But he also took many liberties in telling the stories of his comrades. In W.G. Sebald’s mesmerizing blend of fiction, encyclopedic detail and travelogue in “Austerlitz” (2001) and “The Emigrants” (1993)—both grounded in the experiences of Jewish children in the Holocaust—Ms. Franklin finds a painstaking strategy for restoring people and places to life. “Restitution,” Sebald called it.

Questions of authenticity became acute once therapists and cultural theorists asserted that trauma was transmissible, permitting readers (and filmgoers) to “bear witness” to events they had not experienced. The archetypal test case is Jerzy Kosinski’s 1965 novel, “The Painted Bird.” Because Kosinski cagily led readers to believe that his story of an unnamed boy wandering through a violent Eastern European landscape was based on his own childhood, Elie Wiesel and Arthur Miller, among others, hailed it as a Holocaust masterpiece. Once it became known that the story’s incidents were invented—and that Kosinski’s family had hidden safely from the Nazis during the war—the book was condemned as a sadomasochistic fairy tale. By exploring the gray zone between witness and voyeurism, however, Kosinski had suggested that the lies of literature could provide surprising access to horrific events.

If the documents on which historians depend can prove unreliable, the best of Holocaust literature, Ms. Franklin emphasizes, has the advantage of being “self-conscious about its own unreliability.” True enough. But since the events of the Holocaust, not to mention its vast historiography, play very little role in her book, an important dimension of the problem is left out of account. (A more practical drawback is that she provides no endnotes or bibliography.) Still, by scrupulously defending the integrity of literature, Ms. Franklin has offered her own eloquent testimony.

Mr. Sundquist is a professor of English at Johns Hopkins University and the author of “Strangers in the Land: Blacks, Jews, Post-Holocaust America.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703514904575602652942567516.html

Georgia to U.S.: ‘Don’t Tread on Me’

Nov. 9, 1860

Across the country, the day’s headlines blazed with reports of Southerners’ response to Lincoln’s election. Perhaps most disturbing to many Americans, though thrilling to others, was news of a mass meeting in Savannah, Ga., the previous afternoon. Thousands of citizens – the largest gathering that the city had ever seen, newspapers said – had filled Johnson Square at the heart of downtown, thronging around a monument to Revolutionary War general Nathanael Greene to launch a revolution of their own. The crowd cheered wildly as a speaker declared that “the election of Abraham Lincoln and Hannibal Hamlin to the Presidency and Vice Presidency of the United States, ought not and will not be submitted to.” The shouts and whoops redoubled as a flag was unfurled across the white marble obelisk: a banner with a coiled rattlesnake and the words “SOUTHERN RIGHTS. EQUALITY OF THE STATES. DON’T TREAD ON ME.”

The first flag of Southern independence, raised in Savannah, Ga., on November 8, 1860. (Library of Congress)

Probably no one mentioned the ironic fact that this Southern banner, one of the very first flags of secession, was raised atop a monument to a Northerner: General Greene had been born and raised in Rhode Island. Like many Americans of the founding generation, he had harbored mixed feelings about slavery – to say the least. “On the subject of slavery, nothing can be said in its defence,” he wrote to a Quaker acquaintance in 1783, while he was in the process of moving to Georgia to take possession of a large plantation and its hundreds of enslaved African Americans, a gift from the state of Georgia. Greene justified this acquisition by claiming that he planned to treat his new chattels kindly. Two years later, just before his early death, he still harbored vague plans to free his slaves and keep them in a system resembling medieval feudalism.

Cleveland Plain Dealer headline, Nov. 9, 1860

In 1860, however, Georgia’s leaders felt no such ambivalence about human bondage. As the secessionists gathered in Savannah, Governor Joseph E. Brown issued a proclamation vindicating Georgia’s right to withdraw from the Union rather than submit to “proud and haughty Northern Abolitionists.” Brown, who came from a family of hardscrabble farmers in northern Georgia, struck a populist tone, as he often did, reminding the South’s poor whites how much better off they were than Northern factory workers:

Here the poor white laborer is respected as an equal. His family are treated with kindness, consideration and respect. He does not belong to the menial class. The negro is in no sense of the term his equal. He feels and knows this. He belongs to the only true aristocracy, the race of white men. …

These [laborers] know that in the event of the abolition of Slavery, they would be greater sufferers than the rich, who would be able to protect themselves. They will, therefore, never permit the slaves of the South to be set free among them, come in competition with their labor, associate with them and their children as equals – be allowed to testify in our Courts against them – sit on juries with them, march to the ballot-box by their sides, and participate in the choice of their rulers – claim social equality with them – and ask the hands of their children in marriage. …[T]he ultimate design of the Black Republican Party is to bring about this state of things in the Southern States.

But the crowd in Savannah on Nov. 8 probably needed no reminder about the current state of race relations. One of the largest slave pens in Georgia – a business establishment where hundreds of people at a time were often imprisoned, awaiting sale – faced the Greene Monument across Johnson Square.

Sources: New York Times, Nov. 9 and Nov. 12, 1860; Cleveland Plain Dealer, Nov. 9, 1860; Macon Daily Telegraph, Nov. 12, 1860; Terry Golway, “Washington’s General: Nathanael Greene and the Triumph of the American Revolution”; Gerald M. Carbone, “Nathanael Greene: A Biography of the American Revolution”; George Washington Greene, “The Life of Nathanael Greene, Major-General in the Army of the Revolution”; William W. Freehling and Craig M. Simpson, “Secession Debated: Georgia’s Showdown in 1860”; Walter J. Fraser, “Savannah in the Old South”; Malcolm Bell Jr., “Major Butler’s Legacy: Five Generations of a Slaveholding Family.”

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/08/georgia-to-u-s-dont-tread-on-me/

The Last Ordinary Day

Nov. 1, 1860

Seven score and 10 years ago, a little Pennsylvania town drowsed in the waning light of an Indian summer. Almost nothing had happened lately that the two local newspapers found worthy of more than a cursory mention. The fall harvest was in; grain prices held steady. A new ice cream parlor had opened in the Eagle Hotel on Chambersburg Street. Eight citizens had recently been married; eight others had died. It was an ordinary day in Gettysburg.

It was an ordinary day in America: one of the last such days for a very long time to come.

In dusty San Antonio, Colonel Robert E. Lee of the U.S. Army had just submitted a long report to Washington about recent skirmishes against marauding Comanches and Mexican banditti. In Louisiana, William Tecumseh Sherman was in the midst of a tedious week interviewing teenage applicants to the military academy where he served as superintendent. In Galena, Ill., passers-by might have seen a man in a shabby military greatcoat and slouch hat trudging to work that Thursday morning, as he did every weekday. He was Ulysses Grant, a middle-aged shop clerk in his family’s leather-goods store.

Even the most talked-about man in America was, in a certain sense, almost invisible — or at least inaudible.

On Nov. 1, less than a week before Election Day, citizens of Springfield, Ill., were invited to view a new portrait of Abraham Lincoln, just completed by a visiting artist and hung in the statehouse’s senate chamber. The likeness was said to be uncanny, but it was easy enough for viewers to reach their own conclusions, since the sitter could also be inspected in person in his office just across the hall. Politically, however, Lincoln was almost as inscrutable as the painted canvas. In keeping with longstanding tradition, he did not campaign at all that autumn; did not so much as deliver a single speech or grant a single interview to the press.

An ad for Lincoln & Herndon, attorneys at law.

Instead, Lincoln held court each day in his borrowed statehouse office, behind a desk piled high with gifts and souvenirs that supporters had sent him — including countless wooden knickknacks carved from bits and pieces of fence rails he had supposedly split in his youth. He shook hands with visitors, told funny stories, and answered mail. Only one modest public statement from him appeared in the Illinois State Journal that morning: a small front-page ad, sandwiched between those for a dentist and a saddle-maker, offering the services of Lincoln & Herndon, attorneys at law.

Article in The New York Herald.

The future is always a tough thing to predict — and perhaps it was especially so on the first day of that eventful month. Take the oil painting of Lincoln, for example: it would be obsolete within weeks when its subject unexpectedly grew a beard. (The distraught portraitist tried to daub in whiskers after the fact, succeeding only in wrecking his masterpiece.) Or, on a grander scale, an article in the morning’s New York Herald, using recent census data to project the country’s growth over the next hundred years. By the late 20th century, it stated confidently, America’s population would grow to 300 million (pretty close to accurate), including 50 million slaves (a bit off). But, asked the author, could a nation comprising so many different people and their opinions remain intact for that long? Impossible.

Writing about the past can be almost as tricky. Particularly so when the subject is the Civil War, that famously unfinished conflict, with each week bringing fresh reports of skirmishes between the ideological rear guards of the Union and Confederate armies, still going at it with gusto.

In many senses, though, the Civil War is a writer’s — and reader’s — dream. The 1860s were an unprecedented moment for documentation: for gathering and preserving the details of passing events and the texture of ordinary life. Starting just a few years before the war, America was photographed, lithographed, bound between the covers of mass-circulation magazines, and reported by the very first generation of professional journalists.

Half a century ago, as the nation commemorated the war’s centennial, a scruffy young man from Minnesota walked into the New York Public Library and began scrolling through reels of old microfilm, reading newspapers published all over the country between 1855 and 1865. As Bob Dylan would recount in his memoir, “Chronicles: Volume 1,” he didn’t know what he was looking for, much less what he would find. He just immersed himself in that time: the fiery oratory, the political cartoons, the “weird mind philosophies turned on their heads,” the “epic, bearded characters.” But much later, he swore that this journey deep into the Civil War past became “the all-encompassing template behind everything I would write.”

In the months ahead, this part of the Disunion series will delve like Dylan into the sedimentary muck of history, into that age of unparalleled American splendor and squalor. Several times each week — aided in my research by two of my students at Washington College, Jim Schelberg and Kathy Thornton — I will write about something that happened precisely 150 years earlier. My subject may be as large as a national election or as small as a newspaper ad. I won’t be trying to draw a grand saga of the national conflict (much less searching for any all-encompassing templates). Instead, I’ll try to bring the reader, for a brief present moment, into a vanished moment of the past — and into a country both familiar and strange.

Sources:

The Compiler (Gettysburg, Pa.), Oct. 29, 1860; Adams Sentinel and General Advertiser (Gettysburg, Pa.), Oct. 31, 1860; Robert E. Lee to the Department of War, Oct. 30, 1860; William T. Sherman to Ellen Ewing Sherman, November 3, 1860; Jean Edward Smith, “Grant”; Brooks D. Simpson, “Ulysses S. Grant: Triumph Over Adversity, 1822-1865”; Illinois State Journal, Nov. 1, 1860; Harold Holzer, “Lincoln President-Elect: Abraham Lincoln and the Great Secession Winter 1860-1861”; Michael Burlingame, “Abraham Lincoln: A Life”; New York Herald, Nov. 1, 1860; Bob Dylan, “Chronicles: Volume 1.”

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/10/31/the-last-ordinary-day/

A Slave Ship in New York

New York, Nov. 4, 1860

If you had risen early on that Sunday morning, you probably would have ventured out to marvel at the wreckage left by the past night’s storm. Trees had toppled; shop signs lay smashed on the cobblestones. All along the wharves of lower Manhattan, ships had lost spars and rigging.

And on the harbor’s restless water, a three-masted merchant vessel tossed and bucked at her mooring lines. If you drew close, you might still have caught a whiff of the distinctive stench that every well-traveled mariner in that day and age knew: the reek of close-packed bodies, of human misery, of captivity and death.

She was the slaver Erie, and she had recently come to New York as a captive herself. A U.S. naval vessel, patrolling for ships engaged in the illicit trade, had seized her off the mouth of the Congo River. Flinging open the hatches to the cargo hold, the officers saw a dim tangle of bodies moving in the darkness, packed so tightly that they seemed almost a single tormented soul. Nearly 900 Africans — half of them children — had been stripped naked and forced below decks at the height of equatorial summer, aboard a vessel barely more than 100 feet long. Just a few days into their weeks-long voyage, a witness later recalled, “their sufferings were really agonizing, and . . . the stench arising from their unchecked filthiness was absolutely startling.” Even after their rescue, dozens died in a matter of days.

It might seem odd today that the American government was freeing slaves across the Atlantic while zealously protecting the “property rights” of slaveholders closer to home. Not long after Congress abolished slave importation in 1808, however, U.S. and British naval vessels had begun policing the African coasts and the waters of the Caribbean, occasionally even bringing the captains and crews back to stand trial under federal law. (The freed captives, no matter where in Africa they had come from, were set ashore in Liberia, often to be set to work there in conditions little better than slavery.) It was one of many such hypocrisies, born of political compromise, that most Americans in 1860 took for granted.

Newly freed Africans aboard a slaver, 1860. (Harper’s Weekly)

Like the majority of slavers at the time, the Erie had been bound for Cuba, where importation was still legal. Her human “cargo” might have fetched somewhere between half a million and a million dollars there — depending, of course, on how many captives perished during the crossing. A mortality rate of one in five or so was taken for granted in the trade, but the Erie’s record on past voyages had been even worse than this horrific average. Still, enormous profits were to be made. The slaver’s New England-born captain, Nathaniel Gordon, had purchased the Africans with kegs of whiskey. He was now a prisoner in the Eldridge Street jail.

The Erie was no stranger to New York. It was, indeed, her home port, as it was of many such vessels. Nearly 100 clandestine — or barely clandestine — slaving voyages had set out from the city over the past 18 months alone. Notorious traders in human flesh hung out their shingles in front of offices on Pearl and Beaver Streets downtown, scarcely bothering to camouflage themselves as legitimate shipping merchants.

Slavery was in the lifeblood of the metropolis. An editorial in that same Sunday’s New York Herald warned local citizens against electing a candidate like Lincoln who might interfere with the institution in the American South. Slave-grown cotton was one of the greatest sources of the city’s wealth, the paper pointed out. Rashly frightening the slave states out of the Union would be “like killing the goose that laid the golden eggs.”

The next day, in U.S. district court, the Erie was officially confiscated by the government and ordered to be sold. The slave ship went up for auction a few weeks later at the Atlantic Dock in Brooklyn, sold for $7,550, and was lost to history.

Her captain’s fate would take much longer for the courts — and the incoming president — to decide.

Sources:
New York World, Nov. 5, 1860; Ron Soodalter, “Hanging Captain Gordon: The Life and Death of an American Slave Trader”; Karen Fisher Younger, “Liberia and the Slave Ships” (Civil War History, December 2008); “Trow’s New York City Directory, for the Year Ending May 1, 1859”; New York Herald, Nov. 1, Nov. 4 and Dec. 7, 1860; New York Tribune, Nov. 6, 1860.

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/03/a-slave-ship-in-new-york/

A Lincoln Photograph – and a Mystery

Washington, Nov. 6, 1860

There is no photograph of Lincoln from the day he was elected president – nor any of voters lining up to cast their ballots, nor of citizens hearing the results of a contest that would change their country forever. Newspapers did not run pictures in those days, and what we think of as photo reportage was still in its infancy, difficult to achieve with fragile, cumbersome, long-exposure cameras.

In fact, the respected Lincoln scholar Harold Holzer – author of the books “The Lincoln Image” and “Lincoln President-Elect,” among others – recently told me that he knew of not a single photograph of any kind taken on one of the most momentous days in American history, Nov. 6, 1860.

But I think I’ve found two – and they’re of Lincoln. Well, almost.

A few months ago, I was poking around in the vast picture collections at the Library of Congress, researching images to use in my forthcoming book, when I came across one I’d never seen reproduced in any history of Lincoln or the Civil War. It shows a group of men in front of the Capitol, about to raise an enormous stone column into place. And it’s dated, right on the photograph itself:

The photo comes from an album kept by Benjamin Brown French, commissioner of public buildings in Washington during the 1850s and ’60s. (The original is now in the collection of the Architect of the Capitol.) In the album there’s another intriguing note accompanying this photograph: “The ‘Lincoln column,’ first monolith raised, Nov. 1860, Pres’l election, being S. column of connecting corridor.”

Like the nation itself, the Capitol was a work-in-progress as the Civil War began. Several years earlier, a forward-thinking Southern statesman had overseen the start of an ambitious expansion project, raising the dome and spreading the marble wings across the hilltop, ready to encompass all the delegations and committees, offices and bureaus, that the rapidly growing Union might require. That statesman was Secretary of War Jefferson Davis, who of course would soon set his hand to slicing up the Union rather than enlarging it. A further irony: slave laborers almost certainly worked on the building known as America’s “Temple of Liberty,” as they had when it was first constructed a half-century earlier. (The Library of Congress’s catalog entry suggests that some of the workmen in the Lincoln Column photo are African-American, but I don’t see it; I suspect it’s just that they’re blurry and the picture is dark.)

Construction was clearly in full swing on Election Day. Someone present at the column-raising on Nov. 6 – probably French himself, a staunch Republican – apparently decided to name it in commemoration of Lincoln’s victory that day. But the name didn’t stick. William C. Allen, chief historian in the Architect of the Capitol’s office for the past 28 years and author of the definitive book on the building’s history, told me in an e-mail that he’d never heard it before.

I began to wonder: is the long-forgotten Lincoln Column still there?

Yesterday morning, accompanied by my friend Abbie Kowalewski, a historian in the Office of History and Preservation of the House of Representatives, I went over to the Capitol to take a look. We quickly found what seemed to be the very spot the 1860 photo was taken. It was on the East Front of the Capitol, the recessed part of the facade toward the left-hand side of this picture:

With help from Abbie, I took a picture from an angle as similar to the 1860 one as I could get. I think they’re pretty close. A Capitol Police officer saw us shooting frame after frame of the same nondescript spot and came over to ask, rather menacingly, what we were up to. (Um, sorry – just taking exterior photos of the most famous public building in America. Clearly we must be terrorists.)

The Benjamin French album says that the Lincoln Column was the southernmost on the connecting corridor – that is, the far left-hand one. I was curious to know whether there might be any other period photographs showing the column after it was installed. So I went back across the street to the Library of Congress and quickly turned up another from the album. It, too, was inscribed “Nov. 6, 1860.” So now I had not one but two pictures taken on Election Day, 1860. This one must have been taken a few minutes after the first, and strongly suggests that the column was being lifted into the left-hand spot:

Success! I was certain I’d pinpointed it. I imagined Capitol tour guides for generations to come sharing the results of my research with curious visitors: “And that’s the famous ‘Lincoln Column,’ installed on the very day that Honest Abe was elected president.”

And how cool was it that Lincoln was sworn in on the same side of the building just a few months later, a stone’s throw from his eponymous column? I wondered if anyone had pointed it out to him.

Then I received an e-mail from the Architect of the Capitol’s office. Back in the Eisenhower era, Mr. Allen reminded me, the building had been enlarged again. All the 1860s columns were taken down and put in storage. They were reinstalled a few years later – but not in their original order. Nobody had recorded which of the hundred columns had gone where. The Lincoln Column is still part of the Capitol, but no one knows where.

Well, at least we have the photographs.

Sources:
Harold Holzer, “Lincoln President-Elect: Abraham Lincoln and the Great Secession Winter 1860-1861”; Harold Holzer, Gabor Boritt and Mark E. Neely Jr., “The Lincoln Image: Abraham Lincoln and the Popular Print”; Lloyd Ostendorf, “Lincoln’s Photographs: A Complete Album”; William C. Allen, “History of the United States Capitol: A Chronicle of Design, Construction, and Politics”; Benjamin Brown French, “Witness to the Young Republic: A Yankee’s Journal, 1828-1870.”

Adam Goodheart is the author of the forthcoming book “1861: The Civil War Awakening.” He lives in Washington, D.C., and on the Eastern Shore of Maryland, where he is the Hodson Trust-Griswold Director of Washington College’s C.V. Starr Center for the Study of the American Experience.

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/11/05/a-lincoln-photograph-and-a-mystery

Behind closed doors

Early Renaissance Italy

Odd goings-on in Godless nunneries

Nuns Behaving Badly: Tales of Music, Magic, Art, & Arson in the Convents of Italy. By Craig Monson. University of Chicago Press; 264 pages

CRAIG MONSON says his book, five tales about unruly nuns, might while away a plane flight. He is too modest. “Nuns Behaving Badly” wears its learning with a smile, but it throws a sharp light into dark Roman Catholic corners.

Convents in 16th- and 17th-century Italy were largely dumping-grounds for spare women: widows, discarded mistresses, converted prostitutes and, above all, the unmarried daughters of the nobility. Aristocratic families were loth to stump up dowries for more than one daughter. The rest were walled away. In Milan in the 1600s, three-quarters of the female nobility were cloistered. At the same time the church was cracking down on lax discipline, in nunneries as much as anywhere.

The result was a headache for the (male) authorities. With few genuinely spiritual nuns, convents were full of women finding ways round the rules through scheming and backbiting, through art or music or lesbian love and once, even, through torching their convent and escaping en masse. All this meant extra paperwork: complaints to the Vatican, petitions, investigations, and interrogations.

Verbatim transcripts offer a brawling, reality-television world: “Sister Domitella, with her hands on her hips, said to her, ‘If I don’t get some respect, for the love of God, I’d like to strangle you… because you blab to the Curia about everything that goes on here in the convent’.” Or again: “Sister Vinciguerra found my gift [of embroidery to the convent chapel] intolerable…she ripped it off, tore it apart and finally burned it.” Mr Monson unravels these mysteries, tracing their origins and following their repercussions beyond the convent walls.

The saddest stories are about the suppression of music. Nuns sang unseen beyond the altar and people flocked to hear their hidden choirs. But the church pronounced female music the devil’s work, allowing only the plainest chant. Two nuns stand out: Laura Bovia, who transported audiences “so high that, here on Earth, they seem to taste heavenly harmony”; and Christina Cavazza, a singer who crept out in disguise to attend the opera. Both were silenced.

__________

Full article and photo: http://www.economist.com/node/17460578

Newly Published Memoir Recalls Horror of Western Front

The WWI Diary of Ernst Jünger

A photo of Ernst Jünger, a lieutenant in the German infantry, taken shortly before the Battle of the Somme in northeastern France in 1916. It shows him wearing his Iron Cross. Jünger’s war diary has just been published for the first time, shortly ahead of the 92nd anniversary of the armistice that ended World War I on November 11.

One of the most graphic accounts of World War I, the diary of German author Ernst Jünger, has been published for the first time. Its dispassionate description of life and death on the Western Front is a cold indictment of war — even though Jünger embraced the conflict throughout as a glorious test of manhood.

August 26, 1916, Guillemont, Somme region, northeastern France:

“In front of my hole lies an Englishman who fell there yesterday. He is fat and bloated and has his full pack on and is covered in thousands of steel blue flies.”

July 1, 1916, Monchy, near Arras:

“In the morning I went to the village church where the dead were kept. Today there were 39 simple wooden boxes and large pools of blood had seeped from almost every one of them, it was a horrifying sight in the emptied church.”

March 22, 1918, Vraucourt:

“… there was a bang and he fell covered in blood with a shot to the head. He collapsed into his corner of the trench and remained there with his head against the wall of the trench, in a crouching position. His snoring death rattle came at lengthening intervals until it stopped altogether. During the final twitches he passed water. I crouched next to him and registered these events impassively.”

Shortly ahead of the 92nd anniversary of the Nov. 11, 1918 armistice that ended World War I, one of the most graphic and comprehensive descriptions of the conflict has been published for the first time — the war diaries of the author Ernst Jünger, a lieutenant in the German infantry who fought from shortly after the outbreak in August 1914 until August 1918, three months before its end, when he was shot through the lung. It was his seventh wound. Jünger died only in 1998, at the age of 102.

The diary, written in 15 notebooks — Jünger wrote later that he couldn’t remember if the stains on them were blood or red wine — was the basis for his book “In Stahlgewittern,” or “Storm of Steel,” a deeply controversial reminiscence first published in 1920 that glorified the war as a purifying test of individual and national strength. He became an icon for conservative nationalists after the war, and the Nazis celebrated him as a hero. But he kept them at a distance and declined to join the party.

Diary Covered 3 Years and 8 Months of Fighting

Jünger kept on refining “Storm of Steel” with increasingly poetic passages to satisfy his literary pretensions. But the newly published diary is a raw, factual record of events written down hours or days at most after they occurred. It offers a clear view of life and death in the trenches seen by a soldier who was in the thick of it for most of the war that shaped the 20th century. Jünger’s widow, Liselotte Lohrer, only gave permission last year for it to be published. She died this year.

“I am not aware of any comparable diary, either in German, French or English, that describes the war in such detail and over such a long period,” Jünger’s biographer Helmuth Kiesel, who arranged its transcription and publication, told SPIEGEL ONLINE. “All other diaries are usually far shorter and span just a few weeks or months.”

That is because the average infantryman had a slim chance of surviving even one year intact, let alone the entire conflict. And most weren’t inclined to relive the misery of life in the trenches by writing it down, day after day, in anatomical detail.

Jünger, 19 at the outbreak of the war, joined up immediately, like millions of young men across Europe who thought it would be a quick adventure. But while the dreadful reality of the fighting quickly cooled most men’s ardour, Jünger appears to have been gripped throughout by a glowing fascination for war as trial by combat.

“And still, the heroic, grand impression given by this endless passage of death uplifts and strengthens us survivors. As strange as it may sound, here you become reacquainted with ideals, the total devotion to an ideal right up to the gruesome death in battle,” he wrote on July 3, 1916.

On Sept. 3 of that year, after he had been wounded the second time, by shrapnel in his left leg, he wrote: “I have witnessed much in this greatest war but the goal of my war experience, the storming attack and the clash of infantry, has been denied me so far (…) Let this wound heal and let me get back out, my nerves haven’t had enough yet!”

On June 19, 1917, after leading a dangerous patrol into no man’s land, he wrote: “To be a leader with a clear head in such moments is to resemble God. Few are chosen.”

Miraculous Luck

Most people would deem his attitude hopelessly conceited and even insane. Kiesel said Jünger may have written those words because he wanted to be remembered as a hero if he was killed — but also because he was bent on retaining a sense of individual worth in a mechanized war of massive artillery bombardments in which the individual counted for nothing.

Jünger, who won the Iron Cross and Prussia’s highest military award, the “Pour le Merite,” took part in the battle of the Somme in 1916 and fought at Ypres in 1917, as well as at Artois and in Champagne. He spent almost three years in the trenches in the bloody triangle between the north-eastern French towns of Arras, Albert and Cambrai on the Western Front. He was promoted to lieutenant in November 1915.

“It’s the war experience of a hard-as-nails storm troop leader,” historian Gerd Krumeich, one of the authors of the book “National Socialism and the First World War,” told SPIEGEL ONLINE.

Jünger was as lucky as he was courageous. It is almost miraculous that he wasn’t killed or crippled. The diary is filled with near misses such as dud shells landing next to him, live ones exploding where he had stood just a minute before and shell splinters whizzing between his legs or around his ears.

He was hit by 14 projectiles, only three of which came from indiscriminate artillery fire. The other eleven — rifle bullets and hand grenade splinters — were directed at him personally by British or French troops. His company of the 73rd Hanoverian infantry regiment was almost completely wiped out on two occasions. When he was wounded the last time at Cambrai, two men who tried to carry him to safety on their backs were killed with shots to the head.

Gung-Ho Approach Contrasts With Most WWI Authors

Jünger’s approach is in stark contrast with most other prominent portrayals of World War I. German writer Erich Maria Remarque and British authors such as Siegfried Sassoon, Wilfred Owen, Robert Graves or Edmund Blunden depicted it as an unmitigated disaster for the men who fought in it and for the whole of mankind. Blunden described the battle of Passchendaele as “murder, not only to the troops but to their singing faiths and hopes.”

Jünger’s account is matched, at least in length of war service, by the British private Francis Philip Woodruff, whose memoir “Old Soldiers Never Die” written under the pseudonym Frank Richards was first published in 1933. Woodruff served the entire war as an infantryman on the Western front, from August 1914 until the Armistice, without serious injury. He said he had pulled off a “twenty thousand to one chance.”

Paradoxically, Jünger’s enthusiasm makes his diary dramatically effective as an anti-war book because he describes death and destruction with relentless precision. He documented events with the same scientific dedication he showed in his other passion of collecting beetles. In his determination to leave nothing out, he also conveys how soldiers became numb to the death around them and to the suffering of others in this first modern war of constant artillery barrages, gas attacks, machine guns and tanks.

“This afternoon I found two fingers still attached to the metacarpal bone near the latrine of the Altenburg fortress,” wrote Jünger on Oct. 17, 1915. “I picked them up and had the tasteful idea of having them worked into a cigarette holder. But there was still greenish-white decomposed flesh between the joints (…) so I decided not to.”

The diaries are filled with references to wild drinking parties and accounts of weeks spent in wet, cold trenches under constant shellfire. He describes the most horrific wounds with a dispassionate eye, and shows how death in the trenches from a sniper’s bullet or a stray shell becomes so commonplace that soldiers don’t let it interrupt their card game for long.

“A Clean Head Shot”

After one battle a trench “looked like a butcher’s bench even though the dead had been removed. There was blood, brains and scraps of flesh everywhere and flies were gathering on them.” 

Jünger writes with unmasked pride about killing a British soldier with “a clean head shot.” He records how a medic, Kenzira by name, who is still conscious after being struck by two shell splinters, one in the lower right side and one in the back, says: “The shot is fatal, I can feel it quite clearly.”

He gives a breathless description of the German spring offensive of 1918 in which he led a storm troop attacking British machine gun positions. “In a mixture of feelings brought on by excitement, bloodthirstiness, anger and alcohol consumption we advanced in step towards the enemy lines,” he wrote on March 21, 1918.

In another passage from that day, he wrote: “I was incredibly hot. I tore off my coat, some people helped me buckle up again. I still remember calling out several times very energetically: ‘Now Lieutenant Jünger is taking off his coat’ and people laughing at that.”

Jünger’s war hero status and his books made him an idol of the German right in the 1920s and 30s. “The Nazis worshipped him but he didn’t want anything to do with that rabble,” said Krumeich, the historian. “He represented the idea that, happen what may to my body, I am indestructible, Germany is indestructible. That was a powerful argument among young people at the time.”

Some 10 million soldiers and civilians died, and 18 million were seriously wounded in the war, according to conservative estimates. A generation was decimated — 35 percent of German men born in Jünger’s birth year of 1895 were killed. But the war has faded from Germany’s public memory in recent decades as the country was preoccupied with confronting the Nazi period and the Holocaust that followed it. The death in 2008 of the last German veteran known to have fought in World War I, Erich Kästner, went largely unnoticed.

Can Germany Have War Heroes?

Jünger’s book could help open up a new chapter of remembering the conflict in Germany, and historical interest is bound to increase with the coming of the 100th anniversary of its outbreak in 2014, said Kiesel.

“None of the victorious nations shunned calling their soldiers heroes. But it has always been problematic to describe Jünger as a hero; there was always an outcry against it. The time may have come to approach that difficult debate again to restore a certain equality, even if these soldiers were involved in a war for which Germany bears the main guilt.”

It may also be interesting to explore why Jünger didn’t noticeably suffer from post-traumatic stress disorder, an affliction that has hit large numbers of soldiers who have fought in Iraq and Afghanistan. Kiesel said keeping a diary to write down the events in detail shortly after they happened may have helped.

But most importantly, Jünger’s crystal-clear descriptions unwittingly offer a fresh reminder of the devastation and terror caused by all wars.

An entry on August 28, 1916, written during the Somme battle, reads: “This area was meadows and forests and cornfields just a short time ago. There’s nothing left of it, nothing at all. Literally not a blade of grass, not a tiny blade. Every millimeter of earth has been churned up and churned again, the trees uprooted and torn apart and ground to sludge. The houses shot to pieces, the bricks crushed into powder. The railway tracks turned into spirals, hills flattened, everything turned to desert. And everything full of corpses who have been turned over a hundred times. Whole lines of soldiers are lying in front of the positions, our passages are filled with corpses lying over each other in layers.”

__________

Full article and photo: http://www.spiegel.de/international/europe/0,1518,726672,00.html

Would the South Really Leave?

Nov. 11, 1860

With nearly half a year to prepare for the possibility of a Lincoln election, the editorial writers of the South had ample time to sharpen their rhetoric, and the arias of wrath and venom unleashed after last Tuesday’s decision proved that those months were not idly spent.

“If we submit now to Lincoln’s election,” said the Fayetteville North Carolinian, “your homes will be visited by one of the most fearful and horrible butcheries that has cursed the face of the globe.” Said the Richmond Semi-Weekly Examiner, “Here [is] a present, living, mischievous fact. The Government of the Union is in the hands of the avowed enemies of one entire section. It is to be directed in hostility to the property of that section.” Added The Atlanta Confederacy, even more emphatically, “Let the consequences be what they may — whether the Potomac is crimsoned in human gore, and Pennsylvania Avenue is paved ten fathoms deep with mangled bodies, or whether the last vestige of human liberty is swept from the face of the American continent, the South will never submit to such humiliation and degradation as the inauguration of Abraham Lincoln.” Concluded a pithier Augusta Constitutionalist: “The South should arm at once.”

Hot words, those, but in South Carolina, there were even hotter deeds: the day after the election, fire-eaters lowered the Stars and Stripes flying above the state capitol, and raised the Palmetto flag. Three days later, the legislature voted to convene in December to decide whether to secede.

Southerners, of course, have sung this tune before. They threatened to bolt in 1820, floated the divisive theory of nullification in the 1830s, and angrily convened in Nashville in 1850. (The governor of South Carolina, William Gist, even has a cousin whose name is States Rights — yes, his actual name is States Rights Gist — who was born during the nullification crisis; Father Gist was evidently a fervent Calhoun man.)

Whatever the time and whatever the provocation, the story has always been the same: threats, indignation and outrage, followed in the end by placations from the North and reconciliations that left the South wealthier and the institution of slavery more entrenched. Most assume that past will be prologue. “The South seceded last year when the Republicans elected William Pennington as Speaker of the House,” jibed pro-Lincoln newspaperman Carl Schurz earlier this year. “The South seceded from Congress, went out, took a drink, and came back. When Old Abe gets elected, they’ll go out, and this time they’ll take two drinks before they come back.”

And yet, this time they might really mean it.

Is it all due to Lincoln? Certainly, but that his mere election would incite secession is not so obvious. Though an opponent of slavery, he is measurably more moderate than Senator Seward or Senator Chase, rivals for the nomination whom the Republicans, for all their abolitionist ardor, plainly did not prefer. Nearly a month has passed since Lincoln spoke in public about the issue of slavery, and all he did was repeat that he was constitutionally powerless to interfere with the institution of slavery in any state where it existed. “What is it I could say which would quiet alarm?” said Lincoln, his exasperation evident. “Is it that no interference by the government with slaves or slavery within the states is intended? I have said this so often already that a repetition of it is but mockery.”

But to the South, Lincoln is but the tip of the spear. “He rides a wave he cannot control or guide,” observes a perceptive editorialist for The Atlanta Daily Constitutionalist, who predicts that Lincoln’s “very restraint will give new strength to its pent up fury, and it will carry into the same office, four years hence, a man of more revolutionary ideas.”

Republicans come to Washington not just with an eye to stopping the expansion of slavery. Their program also includes higher tariffs, which will increase the power of Northern manufacturers; support for the railroads, which will lead to the settlement of the West and to the creation of who knows how many anti-slavery states between the Mississippi and the Pacific; and unrestrained immigration. Eighty percent of new arrivals settle in the North, swelling its power with their labor and their votes. The Constitution may prevent the Republicans from abolishing slavery now, but Southerners are concerned that the great unsettled Dakota prairies will be carved into a dozen states that will become full of Republican-loving Italians and Poles and Irishmen and escapees from the revolutions of 1848. See what happens then.

These developments might sit differently if the South felt weak, but in fact, it feels stronger than ever. Cotton production is at an all-time high; perhaps two billion pounds will be produced this year, enough to account for nearly 60 percent of the country’s exports. Almost half the crop will go to England, where a fifth of the population of the world’s greatest power works in the textile industry. Two years ago Senator James Hammond of South Carolina proclaimed, “The slaveholding South is now the controlling power of the world.” With an increasingly abundant cotton crop earning ever-rising prices, no one down south feels obliged to argue, unless it is with the abolitionist who wishes to cast moral aspersions upon him and deny him the labor force that is the underpinning of this ever-increasing wealth.

And so, inevitably, the South thinks of secession — and expansion. The South has long believed that unless slavery keeps expanding, it will die, and take the slave-holding elite with it. As Senator Jefferson Davis of Mississippi recently said, “We of the South are an agricultural people, and we require an extended territory. Slave labor is a wasteful labor, and it therefore requires a still more extended territory than would the same pursuits if they could be prosecuted by the more economical labor of white men.” Limiting slave territory, Davis says, would “crowd upon our soil an overgrown black population, until there would not be room in the country for whites and blacks to subsist in, and in this way … reduce the whites to the degraded position of the African race.” Oddly, Senator Charles Sumner, the ardent abolitionist from Massachusetts, has in a rather different way reached the same conclusion: limiting slavery will kill slavery.

And so the slaveholders seek to expand, although whether they can go further north and west is more than a political question; there is much doubt whether the climate and crops of western America would sustain slavery. But all doubts vanish when they turn their backs to the north, and see rimming the Gulf of Mexico verdant lands that could, and have, enriched slaveholding planters. “To the Southern republic bounded on the north by the Mason and Dixon line and on the south by the Isthmus of Tehuantepec, including Cuba and all the other lands on our southern shore,” toasted one Texan at a convention in 1856, and that sentiment burns at the heart of many of the fire-eaters now crying secession.

Don’t forget that not very long ago, such sentiments burned brightly in Washington as well. The Polk and Pierce administrations tried to buy Cuba. Just six years ago, the current president, James Buchanan, who was then Minister to Great Britain, was one of the three authors of the Ostend Manifesto, which maintained that if Spain wouldn’t sell us Cuba, we would be justified in seizing it. Accompanying these official efforts were unofficially encouraged forays by slaveholder-supported filibusteros to invade Cuba, foment a rebellion and grab the island on behalf of expansionist-minded southerners.

Expansionists north and south initially supported William Walker’s campaigns to seize control of Nicaragua, but it was the southern expansionists who were his true constituency. The south’s moral and financial support sustained Walker when he seized Nicaragua’s presidency in 1856, and though he governed only briefly, he managed to re-establish the legality of slavery before a coalition of Central American powers defeated his cholera-ravaged army and sent him scampering. Walker made further attempts to conquer Nicaragua, the last of which ended last September in front of a firing squad in Honduras. But southerners backed every one.

A mere freebooter, Walker nearly succeeded. The ultras dream of what could be accomplished in Nicaragua, and Cuba and northern Mexico and the West Indies if a cotton-rich American government should seek its destiny in commanding a tropical empire that would dominate the world’s supply of not only cotton but the staple of sugar as well.

So here, then, is the South’s choice. Does it select a future in which the southern slavocracy is less powerful; more isolated; consistently subjected to moral castigation by northerners for an economic system that profits not just planters but innumerable northern shippers and insurers and mill owners? Or does the South choose to establish a new nation that will sit at the center of a rich and powerful slaveholding empire that will dominate the hemisphere?

There are plenty of people in the south who oppose disunion and wish to move slowly or not at all. But most of the South’s leadership — its money and its political establishment and its opinion-makers — know that the South is at a crossroads, and they mean for it to choose independence.

Note: An earlier version of this story misstated the Republican proposal on tariffs; their plan called for them to be higher, not lower.

(To read more about this period, see “Battle Cry of Freedom: The Civil War Era,” by James M. McPherson, Oxford University Press, 1988; “Days of Defiance,” by Maury Klein, Alfred A. Knopf, 1997)

Jamie Malanowski has been an editor at Time, Esquire and Spy, and is the author of the novel “The Coup.”

___________

Full article: http://opinionator.blogs.nytimes.com/2010/11/10/would-the-south-really-leave

More than rhetoric

What makes a great presidential speech? Ten top moments from Ted Sorensen and John F. Kennedy.

Ted Sorensen, who died early last week, was legendary among all of us in the speechwriting fraternity for the extraordinary body of work he crafted with President John F. Kennedy. Even the Republicans pilfered from him, and inside the Clinton White House, he represented a gold standard that we constantly strove to reach, with imperfect results (it’s harder than it looks).

There is a great deal to be said about why those speeches were so good. Obviously, much of the credit belongs to the person delivering the speech, and Ted Sorensen was blessed with a partner of rare ability. But he brought his own great abilities, which dovetailed perfectly with those of President Kennedy. He was lean in every sense; not a single word was wasted in those taut, muscular orations. Famously, Sorensen consulted the great speeches of American history before writing the inaugural address, and discovered that Lincoln’s Gettysburg Address had very few polysyllabic words. The result was those thrilling two syllables, “ask not!” (the “ask” stretched dramatically into Bostonese), more insisting than asking. They were essential to set up the rest of the famous sentence. Not just the “ask not,” but the important pause that came after, with an index finger jabbing the frosty air. That was political theater of the highest order.

Sorensen was gifted in many other ways: his phenomenal work ethic, his lightning speed, his mordant wit. He was mischievous, and with his perfect crop of hair, retained an air of Kennedy-esque boyishness well into senescence. All of those qualities gave spice to the speeches — unlike so much of Washington oratory, every utterance contained the possibility of a surprise; an unusual allusion; a bracing witticism; and always, a summons to action.

In Sorensen’s memoir, “Counselor,” he wrote, “I approached each speech draft as if it might someday appear under Kennedy’s name in a collection of the world’s great speeches.” That is setting the bar pretty high — but consider the results. This top ten list of speeches by John F. Kennedy and Ted Sorensen does not include what may have been the greatest contribution Sorensen made to history — he drafted the letter to Nikita Khrushchev that helped to resolve the Cuban Missile Crisis. It also does not include some memorable speeches drafted by other pens within Kennedy’s inner circle — for example, the exquisite address given at Amherst College in 1963, praising poetry, drafted by Arthur Schlesinger Jr. But what a list, all the same.

1. Address at American University, June 10, 1963. This remarkable speech completely recast the Cold War. Coming seven months after the Cuban Missile Crisis, it proclaimed that the United States and the Soviet Union could find common ground; indeed, that they must. It has less flash than many Kennedy speeches, and more hard-won realism, seeing the world “as it is,” a line used by President Obama in his 2009 Nobel address. Declaring peace a human right, it offered a new conciliatory approach to the Soviet Union, whose leader, Nikita Khrushchev, responded by calling it “the greatest speech by any American president since Roosevelt.” A nuclear test ban treaty followed shortly. Three short sentences were especially moving, and as it proved, all too prophetic: “We all breathe the same air. We all cherish our children’s future. And we are all mortal.”

2. Inaugural Address, January 20, 1961. This speech, of course, was the template for all that followed. It pulsed with energy and determination; it contained a real agenda for the future; and it energized a generation that had been quiescent, mainly because it had never been asked to do anything. It contained both power and poetry, including rhymes (“Let every nation know…that we shall oppose any foe”). Despite a rare Sorensen clunker (the mixed metaphor, “if a beach-head of cooperation may push back the jungle of suspicion…”), this was a speech for all time.

3. Televised Address on Civil Rights, June 11, 1963. With a lawyer’s clarity, Sorensen’s draft went to the heart of America’s most entrenched problem. A single sentence perfectly cast the tone, claiming that civil rights was a moral issue “as old as the Scriptures and as clear as the Constitution.” (Perhaps if the Constitution had been more clear, we might have eliminated slavery earlier than we did, but that is a historian’s question, not a speechwriter’s.) Martin Luther King Jr., watching at home, said, “Can you believe that white man not only stepped up to the plate, he hit it over the fence!”

4. Berlin Speech, June 26, 1963. No American president has ever looked more attractive to the rest of the world than President Kennedy did on the day he went into West Berlin, encircled and nearly walled off by communism, and delivered this short, exciting, and utterly winning address. It contained memorable soundbites (“Ich bin ein Berliner”), surprising flashes of humor (JFK thanked his interpreter for translating his German into German), and the drama of a president perfectly matched to his time.

5. Speech to the Greater Houston Ministerial Association, September 12, 1960. The nomination of a Roman Catholic touched off tensions that went to the heart of American history. And Sorensen and Kennedy went to American history to resolve them. The most effective line, tugging at Texan heartstrings, came when JFK reminded his listeners that no one knew who was Catholic among the defenders of the Alamo, “for there was no religious test at the Alamo.”

6. Address at Rice University, September 12, 1962. Two years to the day after another Houston speech, Kennedy affirmed the role of science in driving the nation’s progress forward, and called specifically for lunar exploration. Again, it was couched in history (William Bradford got a surprising shout-out), and classically punchy Sorensen sentences about exertion and excellence. (“We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard….”)

7. Speech on Algeria, July 2, 1957. This speech, given by then-Senator Kennedy, proclaimed his independence from party orthodoxies on the right and left, and his willingness to think anew about the Cold War and the role of the developing world within it. It infuriated Democratic Cold Warriors like Dean Acheson, but with the advantage of hindsight, we can see that it was not only visionary, but correct. Sorensen called it “one of the most carefully researched speeches he ever gave,” and it had to be, for it questioned nearly all of the assumptions guiding US foreign policy.

8. Address at University of Washington, November 16, 1961. This speech is not as well remembered, but it contains a single line which is often quoted because of its relevance to globalization and the shrinking dominance of the United States, especially in the wake of Iraq. Kennedy said, “We must face the fact that the United States is neither omnipotent nor omniscient — that we are only 6 percent of the world’s population — that we cannot impose our will upon the other 94 percent of mankind — that we cannot right every wrong or reverse each adversity — and that therefore there cannot be an American solution to every world problem.”

9. Commencement Address, Yale University, June 11, 1962. After a long and witty introduction, poking fun at the fact that so many of JFK’s critics were Yale men, the president went into the heart of his speech, defending the role of government to improve lives and promote fairness. It bears re-reading in a week of Tea Party insurgency, and it also reflects on the difficulty of getting right with historical figures whom we magnify or vilify out of proportion to the real human beings they were: “For the great enemy of truth is very often not the lie — deliberate, contrived, and dishonest — but the myth — persistent, persuasive, and unrealistic. Too often we hold fast to the cliches of our forebears. We subject all facts to a prefabricated set of interpretations. We enjoy the comfort of opinion without the discomfort of thought.”

10. Farewell to Massachusetts, Boston, January 9, 1961. This call to integrity was delivered inside a legislative chamber (the General Court) that has not always lived up to those standards, but the speech itself has stood the test of time. Inspired by President Lincoln’s farewell to Springfield, the speech insisted that it was not a farewell, but like Lincoln’s, it was all the same. It went deeply into the original errand that brought settlers to the Bay Colony, citing John Winthrop’s City on a Hill passage, correctly for once (“We shall be as a city on a hill — the eyes of all people are upon us”). That phrase has been borrowed by many others, notably Ronald Reagan (who added the un-Kennedyesque adjective “shining”), but never more effectively. Along with Daniel Webster’s Reply to Hayne and his Bunker Hill Address, it is the finest speech ever given about this state by an elected official.

Ted Widmer served in the Clinton White House from 1997 to 2001, first as a speechwriter, then as a senior adviser. He directs the John Carter Brown Library at Brown University and is a senior research fellow with the New America Foundation.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/11/07/more_than_rhetoric/

When History Rides the Waves

A stormy home to explorers, traders and pirates. A prison to Napoleon.

We humans, as Shakespeare noted, are as ephemeral as dreams. So to us the oceans seem eternal. But they are not, a fact that geologists only learned in the mid-20th century. One hundred and ninety million years ago the Atlantic Ocean was born as the supercontinent Pangaea began to split apart. For now, it continues to widen at the rate of about an inch a year. But perhaps 180 million years hence it will have disappeared as the planet’s ever-restless tectonic plates once more coalesce into a new supercontinent, Pangaea Ultima. Roughly at this midpoint in the ocean’s existence, Simon Winchester, in “Atlantic,” tells us the story so far.

The Atlantic, Mr. Winchester notes, has had a relatively brief life as an important geographic feature of the globe. For most of European history, the Atlantic was simply “the great outer sea,” as opposed to the inner sea, the Mediterranean, and thought to encircle the world. It was, therefore, practically as alien and unknown as the back side of the moon and of little more worldly importance.

Two events at the end of the Middle Ages changed that decisively. In the 15th century, western Europeans developed the full-rigged ship, which was more capable than earlier vessels of dealing with the far greater distances and tougher conditions of the Atlantic. And in 1453, the Turks finally took Constantinople, closing off the old trade routes to the East, the source of spices, silk and other luxury goods.

With the development of new trade routes around the southern tip of Africa by the Portuguese and the discovery of the New World by Columbus, the center of the Western world moved decisively from the Mediterranean to the Atlantic. Venice and Genoa declined into insignificance. England, France and Spain, battening on the burgeoning trade and treasure of the New World, fought for mastery of the Atlantic. For the next 500 years, the Atlantic Ocean would be the cockpit of history.

The last naval battle in the Mediterranean of great strategic significance, the Battle of Lepanto, was fought in 1571. The first great naval battle thereafter not fought in the Atlantic was the Battle of Tsushima in 1905, between Russia and Japan, which announced Japan’s arrival as a great power. Control of the Atlantic sea lanes determined the outcome of both world wars in Europe.

But the story of the Atlantic Ocean involves far more than just battles, however sanguinary and significant. It is equally the story of geology on a grand scale, of discovery and exploration, of fishing and the incomparable piscine riches of the Grand Banks. It is the story of trade in gold, wheat, furs, sugar and slaves. It is the story of piracy and of technology, as wood gave way to steel and undersea cables knitted the Old and New Worlds together. It is the story of titanic storms and the Titanic shipwreck. And it is the story of Napoleon, whose navy once patrolled the ocean while he ruled a continent and whose last days were spent in exile, his prison walls the endless waters of the Atlantic.

Mr. Winchester—a trained geologist and inveterate globetrotter—is well suited to tell the story. And he tells it with the sort of panache that he has brought to previous books, such as “Krakatoa,” about the volcanic disaster of 1883, and “The Professor and the Madman,” about the creation of the Oxford English Dictionary.

He begins his tale by looking out from the windswept heights of the Faroe Islands, a Danish possession in the far North Atlantic. He ends it on the Skeleton Coast of Namibia, on the southwest coast of Africa, 10,000 miles away. With much history, technology, economics and even poetry in between, “Atlantic” perhaps inevitably reads a bit like one of Edna Ferber’s sprawling, unruly novels. But it is no less readable for that.

There is much to relish here. I had no idea, for instance, that NASA’s five space shuttles were named for 18th- and 19th-century ships of exploration, two American and three British. (That’s why the space shuttle Endeavour is spelled in the British fashion: It is named for Capt. Cook’s ship.) I had never heard of William Marsden (1754-1836), who as secretary of the British Admiralty devised the ocean-sectioning “Marsden squares” that Lloyd’s has used ever since to chart the location of shipwrecks. (He also left a vast coin collection to the British Museum, wrote a definitive dictionary of Malay and in 1805 woke the First Lord of the Admiralty to tell him the happy outcome of the Battle of Trafalgar.)

The story of how the first transoceanic telegraph cable was laid in 1858 takes up only a few pages but is, typically for “Atlantic,” riveting. The 2,500 miles of cable, “about as thick as a man’s index finger” and weighing 1,500 tons, broke repeatedly; storms lashed the project and scoffers proliferated. Thoreau wanted to know why anyone would need to communicate from continent to continent—surely, he said, the news would consist of trivia on the order of “King of Prussia too ill to visit Queen Victoria.” The cable worked, sort of, for 15 days, then went dead. Eight years and two more attempts would be needed before a long-lasting success was achieved.

Inevitably, however, in painting so vast a canvas, Mr. Winchester commits the occasional miscue. For instance: Matthew Fontaine Maury, the father of the science of oceanography, was indeed appointed the first head of the U.S. Naval Observatory in 1844, but he didn’t serve in that position “for the next thirty years.” A Virginian by birth, Maury resigned his commission in 1861 and sided with the Confederacy, going to England to help acquire ships for its navy. He ended his days as a professor at the Virginia Military Institute.

Noting that more than 400,000 commercial flights cross the Atlantic annually, a wistful Mr. Winchester writes: “The casual public acceptance of transoceanic air travel has dulled us to the wonders and beauties and the preciousness of the sea below.” His lively, lyrical telling of the ocean’s story does much to sharpen our appreciation.

Mr. Gordon is the author of “An Empire of Wealth: The Epic History of American Economic Power.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052702303738504575568671386415824.html

Why Did Germans Embrace Him?

Nazis are never far from the news here, but “Hitler and the Germans: Nation and Crimes” is Hitler’s biggest coup in Berlin since Mel Brooks’s “The Producers” lit up the German capital last year. The exhibition at the German Historical Museum has attracted a mountain of domestic and international media attention and brisk business. During its opening weekend alone, 10,000 visitors lined up to learn how Germans embraced this man who led them down a road of madness and mass murder. It is a story told through everyday objects, photographs, videos and a lot of text.

There is a satisfying irony to the fact that the Zeughaus, which currently houses the museum, was the scene of a botched attempt to assassinate Hitler in 1943. The show makes a case that the German people invested Hitler with their hopes and dreams and explores the fascination that the dictator continues to exert on the German public. “With many of the Germans who come here, a grandfather might look around and say, ‘We also played this Nazi board game’ or ‘I too was in the Hitler Youth,'” explained Simone Erpel, one of the exhibit’s three curators. “It’s not like we’re done with this chapter because 65 years have passed. The new generation is asking new questions about this history.” The show addresses the question of how Hitler was possible, and how violence on this scale was condoned.

Women working on small busts of Adolf Hitler in 1937.

The eight exhibit rooms, covering nearly 11,000 square feet, contain the sort of Nazi paraphernalia that, in any other context, would be illegal in Germany. Yet the curators have excluded items that might have special fetish value. The museum decided, for instance, not to present Hitler’s dinner jacket, currently displayed at a military museum in Moscow. “The history changes depending whether you see it in Berlin or Moscow,” said Ms. Erpel. “If we were to bring Hitler’s coat back to Berlin, it would be some bizarre sort of triumph, like Hitler returning to Berlin.”

Instead, you find a collection of letters and brightly colored postcards to Hitler on his 43rd birthday. Nearby is a set of Nazi toy soldiers with a miniature Hitler orating wildly from a podium. A computerized display lets you click through an eighth-grader’s assignment titled “Culture Theory,” a meticulously presented school report on the glories of Nazism. This 80-page notebook is one of the show’s most effective items; in our thinking about Hitler, we rarely consider how young people were indoctrinated with evil.

Hitler’s biography itself does not play much of a role in the show, which concentrates on the years 1933 to 1945. Aside from a picture of Hitler at age 10, one of the only bits of background is a collection of busts of “Führer figures,” which includes Siegfried, Frederick the Great, Bismarck, Hindenburg and Mussolini.

By stressing the popular support that Hitler enjoyed, at least until 1943, “Hitler and the Germans” is intent on shattering the myth that the Nazis seized power. It describes various Nazi strategies to win the loyalty of ordinary Germans, while stressing how symbiotic the relationship was. Some Nazi methods, such as the occupation of public space through the building of monuments and renaming of streets, are presented with compelling displays.

None of the ideas advanced by the exhibit are especially new or revelatory, admitted the museum’s press officer, Rudolf Trabold. “We are dealing with classic themes, but we’ve made an exhibit for the general public,” he said.

It is one thing to understand Hitler’s appeal to German society, and quite another to hazard an explanation of the crimes the Nazi regime committed in the name of the German people. The exhibit is more concerned with the former than with answering the question of how German society went along with genocidal plans.

At most, the exhibit seems to agree with the British historian Ian Kershaw that “the road to Auschwitz was built by hate, but paved with indifference.” At one point, the show claims that the murder of the Jews was “condoned with a mixture of partial approval, moral indifference and growing fears of terrorist measures.” Another typical sign tells us that public acts of violence met with “the approval or at least acceptance of the population.”

The exhibit also stresses how the Nazis embraced technology, which enhanced their appeal and could also work against them. The radio was a powerful disseminator of propaganda, while lightweight and inexpensive 35mm cameras meant that the regime’s crimes were documented with damning thoroughness.

The final rooms of the exhibit shift focus to the fascination that Hitler continues to exert. Small though it is, this is the most distinctive section of an exhibit that often feels like a repackaging of the museum’s permanent collection. A welcome, if unexpected, touch is the inclusion of the “Downfall” parodies that have become a YouTube phenomenon. The opposite wall is taken up with the 46 covers of Der Spiegel on which Hitler has appeared between 1946 and 2009. A bit further down, one finds a case full of confiscated Nazi memorabilia. (Incidentally, this stuff often winds up at flea markets in Berlin, the Nazi insignias hidden beneath little stickers.)

Ms. Erpel feels that this exhibit is fruitful for Holocaust education and coming to terms with the past. “The goal of this exhibit is to make a contribution to the de-demonizing of Hitler,” she said, suggesting that we can no longer view Hitler as a ranting lunatic who seduced the German people. She added that the public’s fascination with Hitler clearly indicates the central role he still occupies in our thinking about the Nazi period.

“To confront this question seriously, as we have, is an important step on the path of normalizing the shadows of history that still exist today,” she said. “My wish as a curator is that such exhibits will no longer be necessary, and that 10 years from now nobody will be interested in seeing a show about Hitler.”

Mr. Goldmann writes about arts and culture from Berlin and New York.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052702303362404575580281932712418.html

In All Her Infinite Variety

A shrewd ruler, not a wastrel, though she worked her bed as no one before or since.

Cleopatra was the last of the Ptolemies, a Greek dynasty that ruled Egypt from 305 B.C. to 30 B.C. Had her power lasted, she might have become the greatest female sovereign of all time. She was fated instead to be remembered mainly for two melodramatic liaisons with powerful men, Julius Caesar and Mark Antony.

Cleopatra was barely out of her teens when she became a contestant for serious power, in 48 B.C. She had been deposed from precarious rule by her younger brother, but now she gambled boldly by making her way back into the Egyptian capital, Alexandria, where she had herself smuggled to the recently arrived Julius Caesar. She managed to ingratiate herself intimately with the Roman general, who helped secure her independent reign. But he did not act entirely from personal motives.

Rome at the time was involved in a cataclysmic civil war. The main contestants had been Caesar and Pompey; Cleopatra’s rivals overreached in their efforts to win Caesar’s favor: They murdered Pompey the moment he arrived in Egypt in 48 B.C. as a refugee after a major defeat. The killing was too crude a tactic. Though Egypt was not yet a Roman province, it was weak and unstable enough that its rulers had to calculate carefully in their dealings with powerful Romans.

Cleopatra seems to have understood this. She and Caesar became lovers, and—more important—allies until Caesar’s assassination in 44 B.C., after which she found a new ally in Mark Antony.

Antony was Caesar’s lieutenant and heir apparent, but Octavian, Caesar’s grand-nephew, turned out to be Caesar’s legal heir, and a new round of civil war was inevitable. Antony and Cleopatra met when she came to him at Tarsus (in modern Turkey) in 41 B.C. She would lavishly court him, win his affections and form a tight military alliance with him. Their end, so memorably depicted by Shakespeare, came about in 30 B.C., some months after they withdrew from a naval engagement off Actium (in western Greece) and fled to Alexandria, where Octavian’s forces could corner them. They both committed suicide.

It is unlikely that Cleopatra would have made herself available either to Caesar or Antony if not for hard-nosed political calculations and her insistence on being an ally, not a vassal, of Rome. She worked her bed as none has been worked before or since, but contrary to Shakespeare, her victimhood (if any) was not that of a captivated (and captivating) woman but rather of a strategist overcome by unpredictable events. Had not the teenage Octavian emerged as a ruthless rival of the veteran Antony, the fate of Cleopatra, Egypt and even the Roman Empire might have been quite different.

Cleopatra, it should be said, was not the beauty represented by Elizabeth Taylor in the infamous 1963 movie. Contemporary coin portraits show a jutting nose and chin. But her charm was probably as enthralling as reported. Intelligent and highly educated, she was fearless and practical, too.

With “Cleopatra: A Life,” Stacy Schiff draws a portrait worthy of her subject’s own wit and learning. There are many modern accounts of the queen, but this one alone takes up the Modernist project, started by Virginia Woolf, of penetrating the “silence” of women in history and literature. One of Ms. Schiff’s previous books, a biography of Véra Nabokov, the novelist’s brilliant but retiring wife and collaborator, undertook just such a reclamation.

A woman like Cleopatra, however, conspicuous and yet unable to speak for herself, is a special challenge for a biographer. The layers of self-interested, alien observation will be numerous. And in this case they carry great authority as well. In the years that followed Cleopatra’s death, men such as Horace, Propertius and Virgil were on hand in Rome to celebrate the conquerors—especially Octavian—and deplore the conquered, especially Egypt’s late ruler. Cleopatra the witch and seducer is near the foundations of Western literature. Professional classicists (like me) naturally succumb to the classical authors’ venom, if only through our slowness to ask questions.

Ms. Schiff’s biography is an excellent antivenom. Drawing on a range of ancient written sources and archaeology, she plausibly gets into Cleopatra’s head—by picturing Alexandria through the eyes of the Ptolemies, for example, and giving an in-depth description of the Greek curriculum she would have studied. It’s an alluring way to reread the queen’s story, emphasizing her most likely inward experiences instead of the maneuvering around her.

Her genius is most evident in her rule of Egypt. She inherited tricky ethnic divisions—with Greeks and Egyptians sharing little but rowdiness—and a huge economy. Roman interest in Egypt had a great deal to do with Egypt’s agricultural surplus, which could feed a restive populace in Italy, and with its royal treasures, which could pay for the personal army of a civil-war leader.

Cleopatra won over the neglected indigenous Egyptians with a campaign of rituals and images, presenting herself as Isis, the greatest local goddess. But there would have been no way to stabilize the economy and government (which she clearly did) except through attentive work. Cleopatra appears to have been a hands-on administrator in a dynasty that had often left contentious detail to flunkies and let corruption run wild. Roman tales of Cleopatra as louche and languid, a bedizened wastrel, are probably crudely slanted.

Ms. Schiff manages to tell Cleopatra’s story with a balance of the tragic and the hilarious. With exquisite timing, for instance, she unfolds the source of one of the most famous brandings in history: the title “Augustus”—meaning something like “font of all prosperity”—which Octavian adopted and later emperors would inherit. The title, as it happens, was the brainchild of a Roman deserter from Antony and Cleopatra’s faction who had shown up at one of Cleopatra’s dinner parties nude and painted blue, in the role of a sea-god. It is as if the guy best known for his antics on a photocopier at a White House party became the new president’s communications director. Through this story, among many others, Ms. Schiff does a rare thing: She gives us a book we’d miss if it didn’t exist.

Ms. Ruden is the author of “Paul Among the People: The Apostle Reinterpreted and Reimagined in His Own Time” (2010).

__________

Full article: http://online.wsj.com/article/SB10001424052702304741404575565321810001664.html

Will Lincoln Prevail?

The story of the Civil War will be told in this series as a weekly roundup and analysis, by Jamie Malanowski, of events making news during the corresponding week 150 years ago. Written as if in real time, this dispatch will, after this week, appear every Monday. Additional essays and observations by other contributors, along with maps, images, diaries and so forth, will be published several times a week. For another perspective on the war, see this op-ed by Tony Horwitz. — The Editors

Oct. 31, 1860

Seven days to go until election day, and the campaigns are reaching a rousing climax. In Manhattan, the at-long-last-united Tammany and Mozart Democrats mass in the evenings under torch lights and stomp up and down Broadway bellowing for their man Stephen Douglas, while in cities and towns upstate, young Republican Wide Awakes holler and whistle for their tiger Lincoln. Apple farmers fear for their crops, as there is hardly a basket that has not already been overturned and had a surrogate speaker installed on top. The outcome of one of the bitterest presidential elections in the history of the republic — or perhaps only the beginning of the outcome — is falling squarely onto the shoulders of New York.

A campaign banner for Abraham Lincoln and his running mate.

Earlier this year, when Abraham Lincoln, the Illinois lawyer, looked at the electoral map, he made a stunning discovery: if he could win 16 of the 17 Northern states plus the Western states of California and Oregon, he’d have enough votes in the electoral college to win the presidency.

Forget the tempestuous South; even if every Southern state fell into line behind one of his rivals — Senator Stephen Douglas of Illinois, Senator John Bell of Tennessee, Vice President John Breckinridge of Kentucky, the squabbling standard bearers of the factions of the suicidally splintered Democratic party — Lincoln could win with just the votes of the increasingly populated, firmly Free State North. And after the results from the state elections earlier this month — Pennsylvania had been suspect and Indiana iffy, but both tilted decisively towards the Republicans — the erstwhile rail-splitter looked like he just might pull off his audacious gamble.

A procession of Lincoln supporters, known as the Wide Awakes, marching in New York on Oct. 3, 1860.

But one of Lincoln’s anti-slavery 16 always needed to be New York, with its muscular 35 electoral votes, more than a fifth of the 152 needed to win.

And the race in New York, once thought to be a breeze, has tightened. In the past week, Thurlow Weed, the Republican Party political boss known for his unsavory methods and infallible acumen, has taken on a decidedly dyspeptic expression. The perpetually feuding Democratic factions in New York City finally coalesced behind Douglas, and the Democratic money spigots began to gush.

Simultaneously, chagrined Republicans across the state began to report that they had spent far too lavishly during the easy-going summer, and now had little or nothing left for the final push. “We are gaining so rapidly it is impossible to foretell the result,” Douglas’s man George Sanders has been telling associates.

Impossible, indeed. Even if Douglas were able to capture the Empire State, the Little Giant has no chance of winning an electoral college victory straight up. The Southern Democrats who walked out on him at their convention in the spring will surely snub him once again. They will split their votes between Breckinridge and Bell. Should Douglas snatch New York from Lincoln, no candidate will be able to claim a majority.

The choice next week, then, is not only between Lincoln and Douglas. It is, to put it another way, a choice between having someone clearly entitled to call himself president-elect (if Lincoln prevails), and muddy irresolution that will yield a second phase of electioneering to be held according to the secret and arcane processes of the House of Representatives. Our previous experiences with this process have shown it to be rife with pitfalls. In 1800, a deadlocked House nearly elected Aaron Burr, a man who had gone into the election hoping at most to become vice president. In 1824, the House picked John Quincy Adams, instead of Andrew Jackson, the man who had actually received the most electoral votes (but just a plurality, not a majority).

Should Douglas take New York, the Adams-Jackson scenario is almost certain to recur. Lincoln’s popular and electoral college pluralities will not factor. In the House, each state delegation gets one vote; to win the presidency, a candidate needs the votes of at least 17 of the 33 states. The Republicans control 15 delegations, the Democrats 14, and the American Party (heir of the anti-immigrant Know-Nothings) controls one. Maryland, Kentucky and North Carolina are split, but Lincoln has no support among the voters of those states.

It’s conceivable that Lincoln could wrest away his home state, Illinois; the Democrats outnumber Republicans in the delegation 5 to 4, but Lincoln is enormously popular, and might take it. That would still leave him one frustratingly elusive vote short, with virtually no chance of finding it among the staunchly pro-slavery delegations that remain. In a real sense, then, next week we will witness not one election among four men, but two elections between two pairs: Lincoln vs. Douglas in New York, to see whether Honest Abe will be able to fill his inside straight; and should he fail, a secondary contest, Breckinridge vs. Bell, to see which of the pro-slavery candidates will enter the House proceedings as the favorite to exit as president.

But let’s not get ahead of events. Thurlow Weed never likes losing elections, and he will be especially loath to lose the presidency in his own backyard. Already his appeals to Lincoln headquarters in Springfield have resulted in more campaign funds. Moreover, a veritable regiment of Republican big shots has been burning up the rails from Buffalo to Long Island and all points between, praising the Lincoln-Hamlin ticket. “New York is the Democrats’ forlorn hope,” James Gordon Bennett, the editor of the New York Herald, wrote last week. Exactly — and Thurlow Weed aims to crush it.

(To read more about this period, see “Lincoln for President,” by Bruce Chadwick, published by Sourcebooks, Inc., 2009.)

Jamie Malanowski has been an editor at Time, Esquire and Spy, and is the author of the novel “The Coup.”

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/10/30/will-lincoln-prevail/

The Führer in the Making

All Quiet: Adolf Hitler (front left) in a 1915 group portrait of dispatch runners from the List Regiment

When Nazi Germany took over Austria in March 1938, there was an outburst of not just anti-Semitism but outright sadism against the Jews. They were, among much else, made to scrub the slogans of the previous regime off walls and pavements. Then the expropriations started. An elderly Jewish couple who lost their shop appealed to Hitler in Berlin. Did His Excellency the Chancellor, they wrote, perhaps remember that as a young painter before the war, selling his paintings on the corner of the Siebensterngasse, he would drop in at a certain shop when it rained and be given a cup of tea? Could he now see his way to helping the people who had treated him with such kindness? Hitler directed that the letter be ignored, and the old couple surely went to a death camp.

We owe our knowledge of this fact to a remarkable 1999 book: “Hitler’s Vienna” by Brigitte Hamann. Her extensive research revealed that Hitler was not really an anti-Semite until after World War I. What had happened in those crucial wartime years is the question that Thomas Weber now answers in “Hitler’s First War.” Like Ms. Hamann, he has searched out original documents and found new material. Like her, he fundamentally alters our understanding of one of the most studied figures of the 20th century.

Hitler wrote about his war experiences in “Mein Kampf” (1925), and biographers have generally relied on his account. He put himself across as a soldier-hero: a “runner” carrying messages back and forth through machine-gun fire and artillery, twice decorated with the Iron Cross for bravery, wounded and then, toward the end of the war, blinded by poison gas. He learned of the end of the war at a military hospital in Pasewalk, not far from Berlin, and he wept.

In Hitler’s version, the weeping soon gave way to vindictiveness against the soft-brained academics, Jews and members of the left who, he alleged, had caused Germany to lose the war. Remaining in the army, he was sent to Bavaria to fight against left-wing revolutionaries. (And yet Mr. Weber has discovered that, briefly at the turn of 1918-19, and unmentioned in “Mein Kampf,” Hitler wore a red brassard and supported the short-lived Bavarian Soviet Republic.) Demobilized, he became an informer for the army’s propaganda unit—though whether he volunteered or was coerced because of his short-lived involvement with the Bavarian Soviet Republic, Mr. Weber admits we cannot know—and was sent to monitor a meeting of the obscure German Workers’ Party, soon to be renamed the National Socialist German Workers’ Party. Hitler was deeply impressed by the party’s hypernationalism and anti-Semitism and joined within a week of attending his first meeting. He also found that he was a tremendously effective public speaker. The speeches do not translate: What sounds superb in one language can sound plain comic in another. But desperate Germans were soon paying to hear Hitler speak, and, as the party’s chief source of revenue, he took over the leadership.

How did the young Hitler—diffident, gauche, without solid political convictions—turn into the fascist demagogue of 1922? There is no simple answer to this question, but “Hitler’s First War” debunks some of the standard responses. Biographers have long assumed that the war marked a turning point: the comradeship of the trenches, the common soldier’s hatred of the profiteers in the rear and the sense of betrayal with the peace made in 1918. Yet there was the nagging question of why the brave, decorated soldier of “Mein Kampf” was not promoted. Hitler served more or less for the whole of the war and never rose above the rank of corporal, which, given that he undoubtedly had leadership qualities, comes as a considerable surprise.

With some luck and a lot of diligence, Mr. Weber has discovered the missing documents of Hitler’s war service, and it is fair to say that very little of Hitler’s own account survives the discovery. There were indeed two Iron Crosses, but his regimental runner’s job was not necessarily dangerous, and he lived in relative comfort at the regimental headquarters away from the front lines. Ordinary soldiers referred to such men as Etappenschweine (“rear pigs”)—all armies have such a word: “cushy number” and “base wallah” are British examples. Officers had to dish out a quota of medals, and if you did not offend them they would just put your name on the list. Hitler was not, it appears, particularly courageous. He was just there. And, as it happens, a Jewish superior officer, Hugo Gutmann, recommended Hitler for his first Iron Cross. He was not thanked for this act in later life—though his fate, emigration to the United States, was greatly preferable to that of the old couple in Vienna.

There also wasn’t much comradeship. When Hitler broke surface in politics, he asked his old comrades in the regiment for support and discovered that on the whole they had not liked him one bit. Men who had fought at the front in World War I were, moreover, not at all keen on staging a second war, and extraordinarily few of Hitler’s old comrades went along with Nazism. Most supported the Weimar Republic. Mr. Weber’s research shows that it’s not really possible to connect the brutalization of men in the trenches to the birth of National Socialism.

It is very much to Mr. Weber’s credit that he has managed to dig out the details, and we can place his book together with Ms. Hamann’s as a triumph of original research in a very stony field. The conclusion that might be drawn is that Hitler was far more of an opportunist than is generally supposed. He made things up as he went along, including his own past. If we still haven’t answered the question of what turned Hitler into an anti-Semitic ideologue, at least attention has been shifted to the Bavarian years of 1919-22. Ms. Hamann and Mr. Weber point the way forward for the next scholar’s diligent researches.

Mr. Stone is a professor of modern history at Bilkent University in Ankara, Turkey.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703673604575550730579575058.html

The 150-Year War

MY attic office is walled with books on Lincoln and Lee, slavery and secession. John Brown glares from a daguerreotype on my desk. The Civil War is my sanctum — except when my 7-year-old races in to get at the costume box. Invariably, he tosses aside the kepi and wooden sword to reach for a wizard cloak or Star Wars light saber.

I was born in a different era, the late 1950s, when the last Union drummer boy had only just died and plastic blue-and-gray soldiers were popular toys. In the 1960s, the Civil War centennial recalled great battles as protesters marched for civil rights and the Rev. Dr. Martin Luther King Jr. declared from the steps of the Lincoln Memorial, “One hundred years later, the Negro still is not free.”

Today the Civil War echoes at a different register, usually in fights over remembrance. Though Southern leaders in the 1860s called slavery the cornerstone of their cause, some of their successors are intent on scrubbing that legacy from memory. Earlier this year in Virginia, Gov. Robert F. McDonnell proclaimed April to be Confederate History Month without mentioning slavery, while the state’s Department of Education issued a textbook peddling the fiction that thousands of blacks had fought for the South. Skirmishes erupt at regular intervals over flags and other emblems, like “Colonel Reb,” whom Ole Miss recently surrendered as its mascot. The 1860s also have a particular resonance at election time, as the country splits along political and cultural lines that still separate white Southern voters from balloters in blue Union states.

But as we approach the 150th anniversary of Abraham Lincoln’s election, on Nov. 6, and the long conflict that followed, it’s worth recalling other reasons that era endures. The Civil War isn’t just an adjunct to current events. It’s a national reserve of words, images and landscapes, a storehouse we can tap in lean times like these, when many Americans feel diminished, divided and starved for discourse more nourishing than cable rants and Twitter feeds.

“The dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise with the occasion. As our case is new, so we must think anew, and act anew. We must disenthrall ourselves, and then we shall save our country.” Those famous lines come from President Lincoln, delivered not in the Gettysburg Address, but on a routine occasion: his second annual message to Congress. Can you recall a single line from any of the teleprompted State of the Union messages in your own lifetime?

The Civil War abounded in eloquence, from the likes of Frederick Douglass, Walt Whitman, the Southern diarist Mary Chesnut and warriors who spoke the way they fought. Consider the Southern cavalryman J. E. B. Stuart, with panache, saying of his father-in-law’s loyalty to the Union: “He will regret it but once, and that will be continually.” Or Gen. William Tecumseh Sherman, brutal and terse, warning besieged Atlantans: “You cannot qualify war in harsher terms than I will. War is cruelty, and you cannot refine it.”

These and other words from the war convey a bracing candor and individuality, traits Americans reflexively extol while rarely exhibiting. Today’s lusterless brass would never declare, as Sherman did, “I can make this march, and make Georgia howl!” or say of a superior, as Sherman did of Gen. Ulysses S. Grant, “He stood by me when I was crazy, and I stood by him when he was drunk.”

You can hear the same, bold voice in the writing of common soldiers, their letters unmuzzled by military censors and their dialect not yet homogenized by television and Interstates. “Got to see the elephant at last,” an Indianan wrote of his first, inglorious combat. “I don’t care about seeing him very often any more, for if there was any fun in such work I couldn’t see it … It is not the thing it is bragged up to be.” Another soldier called the Gettysburg campaign “nothing but fighting, starving, marching and cussing.” Cowards were known as “skedaddlers,” “tree dodgers,” “skulkers” and “croakers.”

There’s character even in muster rolls and other records, which constantly confound the stereotype of a war between brotherly white farm boys North and South. You find Rebel Choctaws and Union Kickapoos; Confederate rabbis and Arab camel-drivers; Californians in gray and Alabamans in blue; and in wondrous Louisiana, units called the Corps d’Afrique, the Creole Rebels, the Slavonian Rifles and the European Brigade. By war’s end, black troops constituted over 10 percent of the Union Army and Navy. The roster of black sailors included men born in Zanzibar and Borneo.

Then there are the individuals who defy classification, like this one from a Pennsylvania muster roll: “Sgt. Frank Mayne; deserted Aug. 24, 1862; subsequently killed in battle in another regiment, and discovered to be a woman; real name, Frances Day.”

If the words of the 1860s speak to the era’s particularity, the bleakly riveting data of the Civil War communicates its scale and horror — a portent of the industrial slaughter to come in the 20th century. Roughly 75 percent of eligible Southern men and more than 60 percent of eligible Northerners served, compared with a tiny fraction today, and more than one million were killed or wounded. Fighting in close formation, some regiments lost 80 percent of their men in a single battle. Three days at Gettysburg killed and wounded more Americans than nine years of war in Afghanistan and Iraq have. Nearly one in three Confederate soldiers died — a statistic that helps to explain the deep sense of loss that lasted in the South for over a century. In all, the death rate from combat and disease was so high that a comparable war today would claim six million American lives.

As horrific as these numbers are, they’re made graphic by the pioneering photography of the Civil War. It’s hard for us to conjure the Minutemen of 1775, but we can look into the eyes of Union and Confederate recruits, study their poses, see emotion in their faces. They look lean (and they were: on average, Civil War soldiers were 40 pounds lighter than young men today), but their faces are strikingly modern and jaunty.

Then we see them again, strewn promiscuously across fields, limbs bloated, mouths frozen in ghastly O’s. When Mathew Brady first exhibited photographs of battlefield dead in 1862, The Times likened viewing them to seeing “a few dripping bodies, fresh from the field, laid along the pavement.” Oliver Wendell Holmes Sr. wrote that photographs forced civilians to confront the true face of battle — “a repulsive, brutal, sickening, hideous thing.” We’re spared this discomfort today, with the American dead from two ground wars carefully airbrushed from public view.

There’s another great difference between the Civil War and every other war in our history: the ground itself, a vast and accessible Yosemite of memory that stretches across the South and to points beyond, from Gettysburg in Pennsylvania to New Mexico’s Glorieta Pass. True, much of the Civil War’s landscape has been interred beneath big-box malls and subdivisions named for the history they’ve obliterated. But at national parks like Shiloh and Antietam you can still catch a whisper of a human-scaled America, where soldiers took cover in high corn and sunken roads, and Lincoln’s earthy imagery spoke to the lives of his countrymen.

In an electronics-saturated age, battlefield parks also force us to exercise our atrophied imaginations. There’s no Sensurround or 3D technology, just snake-rail fences, marble men and silent cannons aimed at nothing. You have to read, listen, let your mind go. If you do, you may experience what Civil War re-enactors call a “period rush” — the momentary high of leaving your own time zone for the 1860s.

You wouldn’t want to stay there; at least I wouldn’t. Nor is battle the only way into the Civil War. There are countless other portals, and scholars are opening them to reveal lesser-known aspects of Civil War society and memory. Know about the 11-year-old girl who convinced Lincoln to grow a beard? The Richmond women who armed themselves and looted stores, crying, “Bread or blood”? The “Mammy Monument” that almost went up in Washington a year after the Lincoln Memorial?

It’s a bottomless treasure, this Civil War, much of it encrusted in myth or still unexplored. Which is why, a century and a half later, it still claims our attention and remembrance.

Tony Horwitz is the author of “Confederates in the Attic” and the forthcoming “Midnight Rising: John Brown’s Raid and the Start of the Civil War.”

__________

Full article and photo: http://www.nytimes.com/2010/10/31/opinion/31Horwitz.html

In the name of godlessness

Atheism and the Enlightenment

An 18th-century Paris salon where philosophers met to eat and drink and deny the existence of God and the soul

A Wicked Company: The Forgotten Radicalism of the European Enlightenment. By Philipp Blom. Basic Books. To be published in Britain in March by Weidenfeld & Nicolson as “Wicked Company: Freethinkers and Friendship in Pre-Revolutionary Paris”.

ATHEISM is a hot topic. In recent years writers from Richard Dawkins and Daniel Dennett to Christopher Hitchens and Sam Harris have penned popular tracts advancing the cause of godlessness. But, as the Bible reminds us, there is nothing new under the sun. Philipp Blom’s latest book tells the story of a set of remarkable individuals on the radical fringes of the 18th-century European Enlightenment, whose determinedly atheistic and materialist philosophies denied the existence of God or the soul. Echoing ancient thinkers such as Democritus and Lucretius, they held ideas that were to prove too revolutionary even for a revolutionary age.

It is the story of the scandalous Paris salon run by Baron Paul Thierry d’Holbach, a philosophical playground for many of the greatest thinkers of the age. Its members included Denis Diderot (most famous as the editor of the original encyclopedia, but, Mr Blom argues, an important thinker in his own right), Jean-Jacques Rousseau, the father of romanticism, and the baron himself; even David Hume, a famous Scottish empiricist, paid the occasional visit.

A philosophy grew up around the baron’s generously stocked table that denied religious revelation and shunned Christian morality, embracing instead the primal passions (the fundamental motives, said the philosophes, for human behaviour) and cool reason (which could direct the passions, but never stand against them). They dreamt of a Utopia built on pleasure-seeking, rationality and empathy. Their ideal nation would leave no room for what they saw as the twisted ethical code of Christianity, which they argued prized suffering and destructive self-repression.

Not only was their thinking radical, but expressing it was dangerous. Diderot was imprisoned for his writings, an experience, Mr Blom argues, that left him too scared to lay out his philosophy plainly, instead disguising it within numerous plays, novels and letters. Baron d’Holbach published most of his works under pseudonyms, which helped to keep him safe but also condemned him to centuries of philosophical obscurity (except in the officially godless Soviet Union). Even when the French revolution finally came, its self-appointed guardians had no place for the philosophy of the true radicals. For Maximilien Robespierre, chief architect of the reign of terror that followed the revolution, God and religion were far too useful in keeping the population in line.

Mr Blom’s book is part biography and part polemic. He sketches the early lives of Diderot, Holbach, Rousseau and other players in the drama, and describes the philosophy they hammered out. It is also an iconoclastic rebuttal of what he describes as the “official” history of the Enlightenment, the sort of history that he finds “cut in stone” on a visit to the Paris Panthéon. There the bodies of Voltaire and Rousseau were laid to rest with the blessing of the French state. Neither deserved it, suggests Mr Blom.

Voltaire, he insists, was a milquetoast careerist, too concerned with his own reputation and his comfortable life to say anything truly unsettling. Rousseau he finds even worse. By denigrating reason, celebrating impulse and advocating repression and tyranny in the name of a loosely defined “general will”, Rousseau’s thinking, argues Mr Blom, was actively maleficent (and, unsurprisingly, venerated by Robespierre). It is a tragedy of history, the author concludes, that Voltaire and Rousseau won the battle of ideas, whereas Diderot was reduced to the rank of editor of the encyclopedia, and Holbach was forgotten utterly.

Even today, and even in secular western Europe, the bald and confident atheism and materialism of Diderot and Holbach seems mildly shocking. We still cling stubbornly to the idea of an animating soul, a spiritual ghost in the biological machine. For Mr Blom, the modern, supposedly secular world has merely dressed up the “perverse” morality of Christianity in new and better camouflaged ways. We still hate our bodies, he says, still venerate suffering and distrust pleasure.

This is the message of Mr Blom’s book, hinted at but left unstated until the closing chapters. He believes the Enlightenment is incomplete, betrayed by its self-appointed guardians. Despite all the scientific advances of the past two centuries, magical thinking and the cultural inheritance of Christianity remain endemic.

__________

Full article and photo: http://www.economist.com/node/17358838

Study Highlights German Foreign Ministry’s Role in Holocaust

Historians Deliver Damning Verdict

A cameraman films the Foreign Ministry building in Berlin. A panel of historians is due to present a study of the ministry’s history during and after the Nazi era.

Historians have found that the German Foreign Ministry was far more deeply involved in the Holocaust than had been thought. A new study commissioned by former minister Joschka Fischer in 2005 is due to present its findings this week, and concludes that diplomats went on covering up the past for decades.

As far as book launches go, this will be an unusual one. Three German foreign ministers past and present will be marking the publication on Thursday of a history about the ministry’s role during the Nazi era.

The 880-page work compiled by a panel of historians was commissioned in 2005 by Joschka Fischer shortly before the end of his tenure as Germany’s top diplomat. It will be formally handed over to the present incumbent, Guido Westerwelle, on Thursday afternoon.

That evening, Fischer and Frank-Walter Steinmeier, who was foreign minister from the end of 2005 until last year, will be attending an event hosted by the publishing company Blessing Verlag.

All three ministers will have to talk about the Holocaust, about war crimes, about diplomatic failure, about perfidious behavior and about rare incidents of heroism, all in the context of the German Foreign Ministry during the Third Reich.

The book will be presented by a commission that includes the historians Eckart Conze and Norbert Frei of Germany, Peter Hayes of the United States and Moshe Zimmermann of Israel. Their book deals with the history of this most distinguished of German ministries during this dark chapter, and with how the ministry handled its past after the war.

Diplomats ‘Actively Involved’ in Holocaust

The experts’ verdict is damning. “The diplomats were aware of the Jewish policy throughout,” they write, “and actively involved in it.” Cooperating in mass murder was “an area of activity” of ministry staff “everywhere in Europe.”

Fischer had commissioned the study in 2005 to settle a heated dispute in his ministry about the extent of its historical guilt. The results are unlikely to calm the controversy. Fischer was shocked by the findings. “It makes me feel sick,” he said.

The head of the commission, Eckart Conze, even described the Foreign Ministry as a “criminal organization” in an interview with SPIEGEL (to be published in English later this week). That was the term used at the Nuremberg Trials to describe the SS. Conze’s assessment amounts to a condemnation of Germany’s upper classes during the Nazi era. No other institution had so many members from illustrious families on its staff — the Weizsäckers, the Bismarcks, the Mackensens.

The historians’ findings about the ministry in the post-war West German era are also explosive. Chancellor Konrad Adenauer, who had the job of foreign minister from 1951 until 1955 during his tenure as West German leader, allowed former Nazis to remain on the ministry’s staff even though he was well aware of the roles they had played under Hitler. Diplomats with Nazi pasts were posted in Arab countries and Latin America where they were unlikely to encounter public criticism.

Former Nazis in West German Foreign Service

The situation didn’t improve much when the center-left Social Democratic Party came to power in 1966. Willy Brandt, who resisted the Nazis and emigrated during the 1930s, became foreign minister and then chancellor. But he continued to work with Ernst Achenbach, a foreign policy expert for the Free Democratic Party in the 1960s and 1970s, who — according to the commission — was involved in the deportation of Jews from occupied France during the war when he was a high-ranking member of the German embassy in Paris. Right up until 1974, Achenbach blocked an agreement between West Germany and France to permit the prosecution of Nazis who had committed crimes in France.

Well into the 1980s, during the tenure of Foreign Minister Hans-Dietrich Genscher, historians ran into a wall of silence when they wanted to dig for incriminating documents in the ministry’s archives in order to refute the official version of events — that it had been a “haven of resistance.”

Former Foreign Minister Steinmeier says the study’s passages about the post-war years are among the most depressing. He said it was “incredible” that it had taken 60 years to conduct systematic research into the history of the ministry. The study was only launched because Fischer got into an argument with his staff.

Fischer says the trigger was a “ridiculous obituary” circulated among staff in 2003 about Franz Nüsslein, who had been a diplomat in the West German Foreign Ministry. The text failed to mention that Nüsslein had been a senior prosecutor in Prague during the war and had been partly responsible for hundreds of executions there. Fischer, who was foreign minister at the time, ordered that the ministry should refrain in future from honoring former Nazi party members.

This ban was applied for the first time a year later after the death of Franz Krapf, West Germany’s ambassador to NATO under Genscher. He had been a member of the Nazi party and the SS.

Former diplomats rebelled against the ban and many active members of the diplomatic service joined the protest. They argued that it was unfair to condemn staff who had been members of the Nazi party, and 128 former diplomats put a large death notice in the respected Frankfurter Allgemeine Zeitung newspaper in defense of Krapf’s honor.

Surprised by the reaction, Fischer responded by hiring the commission. He feels that the findings have confirmed his stance. “That’s the obituary these gentlemen deserve,” he said.

Study Lacks Balance

But Fischer’s victory isn’t that clear-cut. The study shows that membership in the Nazi party in itself says nothing about the extent of involvement in crimes. But above all, it isn’t as balanced as the studies that usually put debates such as this to rest. 

It contains repeated references to “the” diplomats even though they didn’t all commit crimes, as the book itself emphasizes in another passage. In addition, it assumes that diplomats had demonstrated their support for the “Final Solution” — the term the Nazis used for the Holocaust — just by reading the reports filed by the murderous death squads and signing them as read.

The study also creates the impression that several diplomats were involved in murders, but then fails to provide proof.

For example, Krapf was stationed at the German embassy in Tokyo during the war. The historians write: “Little is known about Krapf’s activities (editor’s note — in Japan), but it’s clear that German diplomats dealt with the ‘Final Solution’ of the Jewish question even in the Far East.” That is supposed to mean: Krapf took part in the genocide somehow.

Former diplomats won’t be the only ones to scrutinize such passages. The historians are also likely to face criticism from younger diplomats because the study accuses staff members of having failed to question the official line right up to the 1990s. One high-ranking ministry official said that wasn’t true. He pointed to research conducted long ago by the historian Hans-Jürgen Döscher about the crimes committed by diplomats. Staff members had read that research, the official said.

Contrary to the commission’s claims, the ministry has already adopted a nuanced view of its own past, the official added. An official brochure published in 1995 says the ministry had contained “several fanatical supporters” and a “considerable number” of people who went along with the Nazis and were indifferent to their crimes.

In a sign of how sensitive the study’s findings are, Westerwelle cancelled a joint book presentation with Steinmeier and Fischer after the publishing firm said it planned a panel discussion between the three ministers and the historians.

Westerwelle seems to have had a feeling that he couldn’t win in a clash with the eloquent Fischer, for whom confronting Germany’s Nazi past has been a lifelong theme and who always relishes taking a swipe at Westerwelle.

New Approach to Dealing With Past

But Westerwelle too has praised the book as “a weighty piece of work” which would help reaffirm the ministry’s sense of self. He wants to incorporate the book in the training course for young diplomats and to change the way the ministry observes its traditions.

The ministry also plans to revise any brochures that fail to mention the roles former staff members played during the Nazi era. In addition, it will take a closer look at the portraits of diplomats hanging on the walls of the ministry and of embassies.

It may well be that embassies follow the example of the London embassy, which mentions the Nazi past of Konstantin von Neurath, the foreign minister from 1932 to 1938, beneath a portrait of him. It may be that in future, only portraits of post-war ambassadors will be shown.

The study in itself represents a break with the past in one important respect: The Foreign Ministry has put itself at the forefront of historical research into its past. The other ministries largely ignore their Nazi history to this day.

__________

Full article and photo: http://www.spiegel.de/international/germany/0,1518,725248,00.html

What He Saw at the Revolution

A firebrand as opposed to a strong national government as he was to British tyranny.

‘I know not what course others may take, but as for me,” Patrick Henry famously declared at a revolutionary convention of his fellow Virginians on March 23, 1775, “give me liberty, or give me death!” The war for independence was inevitable, he said—and in fact Lexington and “the shot heard around the world” were less than a month away. Even after Lexington, there were moderates, like John Dickinson of Pennsylvania, who still hoped for reconciliation with the mother country. But Henry, who had begun publicly flirting with treason a dozen years earlier, was definitely not among them.

Henry’s radical advocacy of independence is not his only legacy, as Harlow Giles Unger observes in “Lion of Liberty,” his vivid biography of the Virginia firebrand. A foe of a strong national government who fought against ratification of the federal Constitution, Henry helped bring about the addition of the Bill of Rights. And his championing of states’ rights had less fortunate reverberations down through the decades.

Our knowledge of Henry’s words and deeds at some crucial points is less than certain. The known text of his “liberty or death” speech, for instance, is a reconstruction made 40 years after the event. Mr. Unger at times brings his subject into a sharper focus than a strict adherence to what is surely known would permit. But it is illuminating to see Patrick Henry thus, part legend though the “lion” may be.

A self-taught back-country lawyer and spellbinding orator, Henry in 1763 at his first major trial denounced Britain’s putative tyranny in annulling an act by Virginia’s House of Burgesses. The measure let landowning parishioners pay their taxes to the Anglican Church in cash rather than the usual tobacco (which drought had made unusually precious). The act was needed for the people’s economic survival, Henry said; a king who overruled such an act “had degenerated into a tyrant and forfeited all right to his subjects’ obedience to his order of annulment.” Despite the cries of “treason,” the 27-year-old Henry effectively won the damages case brought by a clergyman and was carried from the courthouse in triumph.

Two years later, newly elected to the House of Burgesses, Henry raged against the supposed tyranny of Britain’s Stamp Act, which required the purchase of revenue stamps on legal documents and other items. The anti-Act resolutions that Henry put forward “represented the first colonial opposition to British law,” Mr. Unger writes.

The Stamp Act was the first direct tax imposed by Parliament on the colonies, the author notes, but the tax would have had only a trivial impact on the average American. Heavily in debt after the French and Indian War, and with its empire suddenly enlarged by the acquisition of Canada from the French, Britain not unreasonably thought that Americans should pay for imperial protection against Indian attacks. The stamp tax had been in effect in England for decades. And because the franchise was so restricted, most taxpayers there—despite Henry’s claim to the contrary—had no more representation in Parliament than the Americans did. Even so, Parliament’s extension of the tax was ill-timed, Mr. Unger says: “Increased duties were already strangling the American economy.”

By asserting that only Virginia’s General Assembly had the right to impose taxes on Virginians, and by warning of what might happen if George III persisted in tyranny, Henry once again provoked cries of “treason”—but Virginia adopted his resolutions against the Stamp Act, and other colonies soon followed suit. “Mr. Henry gave the first impulse to the ball of the revolution,” Thomas Jefferson said. Over the next 10 years, as that ball received more such impulses, Henry seems never to have factored into his “liberty or death” calculations the risk that an armed struggle for independence would also turn into a civil war, as of course it did. In Henry’s apparent view, averting civil war was not up to intransigent radicals like himself.

Elected governor in 1776 and then twice re-elected, Henry became an effectual wartime executive (unlike his successor, Jefferson), even taking on what Mr. Unger calls “dictatorial powers” in a “political turnabout [that] was nothing more than a statesman’s adaptation to changing realities.” After the war, the champion of small farmers in Virginia’s Piedmont hills served two more terms as governor before retiring to private life.

Out of office in 1787, Henry refused to attend the Constitutional Convention, later saying that he had “smelt a rat.” As opposed to a strong national government as he had been to British “tyranny,” Henry claimed that a coup d’état was in progress and said that the proposed constitution “squints towards monarchy.” He objected not only to the absence of a bill of rights but also to the federal government’s powers to tax the people without their state legislature’s consent and to send troops into any state to enforce federal laws.

Without Henry’s and others’ active opposition, there would have been no Bill of Rights. But he was not satisfied with the outcome, declaring: “This Constitution cannot last.” And just as Henry had predicted, Mr. Unger notes, “tyranny” ensued. “Congress imposed a national whiskey tax without the consent of state legislatures—much as the British had done with the stamp tax—and President Washington sent troops to crush tax protests in western Pennsylvania, much as the British had in Boston.” With the Alien and Sedition Acts of 1798, President John Adams and Congress suppressed free speech and freedom of the press. Despite such infringements of liberty, Henry did not urge taking up arms against the government.

A few months before his death in June 1799—he was the father of 18 children by then and a wealthy man thanks to his law practice and land speculation—Henry advised: “We should use all peaceable remedies first before we resort to the last argument of the oppressed—revolution—and avoid as long as we can the unspeakable horrors of civil war.” But tragically, Mr. Unger writes, Henry’s passionate struggle for states’ rights had “sowed the seeds of secession in the South” for the Civil War to come.

Mr. Landers is the author of “An Honest Writer: The Life and Times of James T. Farrell.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052702304741404575564484261874188.html

Still Under Cleopatra’s Spell

The Romans were the first, but hardly the last, to be unnerved by female ambition, authority and allure

How is it possible that Cleopatra continues to enchant, 2,000 years after her sensational death? It helps that, with her suicide in 30 B.C., she brought down two worlds; with her went both the 400-year-old Roman Republic and the Hellenistic age. Egypt would not recover its autonomy until the 20th century.

Shakespeare and G.B. Shaw lent a hand in her immortality, of course, as did Cleopatra’s eloquent Roman critics. She endures for reasons beyond the fame and talent of her chroniclers, however; the issues that she raised continue to fluster and fascinate. Nothing enthralls us so much as excessive good fortune and devastating catastrophe. As ever, we lurch uneasily between indulgence and restraint. Sex and power still combust in spectacular ways.

And we remain unnerved by female ambition, accomplishment and authority. The wise woman mutes her voice in order to maintain her political or corporate constituency. She is often cast all the same as a scheming harridan or a threatening seductress. Her clothing budget attracts uncommon scrutiny, by definition either too large or too small. If she is not overly sexual, she is suspiciously sexless.

For reasons that remain murky, Julius Caesar invited Cleopatra to Rome in 46 B.C. Though her fortune had dwindled from that of her forebears—she was the last of the Ptolemies, the Greek dynasty that ruled in Egypt after the death of Alexander the Great—she remained the richest person in the Mediterranean world. A decade earlier, her father had traveled about Rome on the shoulders of eight men and with an escort of 100 swordsmen. He distributed lavish gifts left and right. There is little reason to believe that Cleopatra did things differently. The pageantry unsettled, as will a convoy of Maybachs in Paris today.

ELIZABETH TAYLOR retouches her makeup on the set of ‘Cleopatra’ in 1962.

In the late republic, that outsized wealth impugned her morals. To wax eloquent about someone’s embossed silver, sumptuous carpets or marble statuary was to indict him. In the Roman view, Cleopatra quite literally possessed an embarrassment of riches. This meant that every evil in the profligacy family attached itself to her. Well before she became the sorceress of legend—a reckless, careless destroyer of men—Cleopatra was suspect as a reckless, careless destroyer of wealth. Even if she never melted a pearl in vinegar, as legend has it, she could well afford to do so.

Cleopatra’s fortune derived from Egypt’s inexhaustible natural resources. Her kingdom was miraculously, effortlessly fecund, the most productive agricultural land in the Mediterranean. Its crops appeared to plant and water themselves. Those harvests—and Egypt’s absolutist government—accounted for the Ptolemaic fortune. Very little grew in or left Egypt without in some way enriching the royal coffers. And Cleopatra controlled the greatest grain supply in the ancient world. Rome stood at her mercy. She could single-handedly feed that city. She could equally well starve it if she cared to.

Wealth and culture also happened to share an address in Cleopatra’s lifetime. Compared to Alexandria, Rome qualified as a provincial backwater. It was still the kind of place where a stray dog might deposit a human hand under the breakfast table, where an ox could burst into the dining room. Alexandria remained the fashion capital, the center of learning, the seat of culture. If you wanted a secretary, a tutor or a doctor, you wanted one trained in Egypt. And if you wanted a bookstore, you dearly hoped to find yourself in Alexandria.

By contrast, it was difficult to get a decent copy of anything in Rome, which nursed a healthy inferiority complex as a result. Gulping down his envy with a chaser of contempt, a Roman found himself less awed than offended by Egypt. He wrote off extravagance as detrimental to body and mind, sounding like no one so much as Mark Twain, resisting the siren call of Europe many centuries later. Staring an advanced civilization straight in the face, the Roman dismissed it as either barbarism or decadence.

Egypt confounded as well for its exoticism. Nothing so much proved the point as the perceived femininity of the East, that beguiling, voluptuous realm of languor and luxury. There was something subversive about a land that exported a female goddess—the Isis temples in Rome were notorious spots for assignations—and a female pharaoh.

In Egypt, on the other hand, competence regularly trumped gender. Cleopatra followed to the throne a sister who had briefly succeeded in deposing their father. She could look to any number of female forebears who had built temples, raised fleets, waged military campaigns. And she came of age in a country that entertained a singular definition of women’s roles. They inherited equally and held property independently. They enjoyed the right to divorce and to be supported after a divorce. Romans marveled that in Egypt female children were not left to die. A Roman was obligated to raise only his first-born daughter. Egyptian women loaned money and operated barges, initiated lawsuits and hired flute players. They enjoyed rights women would not again enjoy for another 2,000 years.

Not only was a Roman woman without political or legal rights, she was often without a personal name. Caesar had two sisters, both named Julia. A good woman was an inconspicuous woman, something that rather defied Cleopatra’s training. As ever, what kept a woman pure was the drudge’s life, of which Juvenal supplied the traditional formula: “Hard work, short sleep, hands chafed and hardened” from housework. For the Romans, a world ruled by a woman was a world turned upside down; like the north-flowing Nile itself, it reversed the course of nature. Female authority was in Rome a meaningless concept. This posed a problem for an Egyptian sovereign.

Cleopatra spoke many languages, flattery perhaps most fluently. Though famed for her charm and her powers of persuasion, she did not always temper her style. She was an autocrat who very much sounded the part. Few resented her tone as deeply as her Judaean neighbor, Herod the Great; the relationship between the two sovereigns proceeded by mutual betrayals. Complicating their dealings was each ruler’s friendship with Rome, the western superpower intent on maintaining peace between them. (Herod owed his crown in part to Roman fears of Cleopatra; he balanced power in a volatile corner of the world.) Cleopatra conspired to separate Herod from their mutual Roman friends. In turn, he proposed her assassination. All would be so much simpler, argued the Judaean king, if his henchmen simply eliminated the pesky Egyptian queen.

What else to do with a clever woman who could not be subjugated by the usual means? Cleopatra’s relationship with Mark Antony was the longest of her life, but that with Octavian, the future Caesar Augustus, would prove the more enduring. She allowed him to recycle the oldest trope: The allergy to the powerful woman was even sturdier than that to monarchy or to the impure, inferior East. Octavian delivered up the tabloid version of an Egyptian queen, insatiable, treacherous and decadent. To prepare the ground for Actium, the battle that would decide the future of Rome and at which Octavian would defeat Antony and Cleopatra, he needed a worthy opponent. He wisely oversold the enemy.

In Octavian’s version, Cleopatra assumed the role of the “wild queen,” lusting after Rome and plotting its destruction. For his one-time ally Antony to have succumbed to something other than a fellow Roman, she had to be a disarming seductress. Her powers had to be exaggerated because—for one man’s political purposes—she needed to have reduced another to abject slavery. And as ever, the easiest way to disarm a capable woman was to sexualize her. Herod did the same, expounding in the course of Cleopatra’s Jerusalem visit on her shameless behavior. Blushingly, he swore that she had forced herself upon him. As everyone knew, such was her wont. (She was at the time hugely pregnant with Mark Antony’s child.)

The divide between the civilized, virtuous West and the tyrannical, dissolute East began in part with Rome and its Egyptian problem. Cleopatra emerged as stand-in for her occult, alchemical land, the intoxicating address of sex and excess. She wielded power shrewdly and easily, making her that rarest of things: a woman who—working from an original script—discomfited the very male precincts of traditional authority. Two thousand years later, those tensions and anxieties have not relaxed their hold.

Stacy Schiff is the author of “Cleopatra: A Life,” which will be published next month. She won the Pulitzer Prize in 2000 for her biography of Vera Nabokov.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052702304510704575562194068357552.html

Eisenhower’s Pit Bull

There have been countless biographies of the generals of World War II, and many are excellent. This biography of Walter Bedell Smith, Eisenhower’s chief of staff, is one of the best. Smith has never received the attention and the credit that he deserves. A chief of staff is perhaps bound to be an unsung hero, but “Beetle” Smith was far more than just a tough and able administrator. In the words of a fellow officer, he possessed “all the charm of a rattlesnake.” Yet the bad-cop routine—one he used almost entirely with fellow Americans and not with Allies—was forced upon him because Eisenhower, his supreme commander, desperately wanted to be liked by everybody.

A GENERALS’ GATHERING: Bernard Montgomery explains his plan for taking the Sicilian city of Messina to Walter Bedell Smith, George Patton and Harold Alexander, July 25, 1943.

Like almost every key U.S. Army officer in World War II, Smith (1895–1961) was spotted by George C. Marshall. After serving as Marshall’s right-hand man in Washington, Smith moved to Europe as Eisenhower’s chief of staff in 1942. His first operational task, while based in London, was to coordinate the North African landings codenamed Operation Torch. Although the invasion was a success, problems mounted rapidly. The most serious was the supply chain, which Smith tried to reorganize radically; Eisenhower, however, was reluctant to take hard decisions.

In addition to all his operational duties, Smith was also left to handle the press, and political and diplomatic relations, acting as Eisenhower’s “primary shock-absorber.” The politically naïve Eisenhower had suddenly discovered the pitfalls of supreme command, especially when it involved the latent civil war of French politics. His decision to make use of Admiral François Darlan, the head of the Vichy French navy, to defuse opposition to the Allied landings in North Africa produced a storm of condemnation in the U.S. and Britain, especially as Vichy’s anti-Jewish laws were left in place. Eisenhower complained to an old friend of his role as supreme commander: “I am a cross between a one-time soldier, a pseudo-statesman, a jack-legged politician and a crooked diplomat.” These first trials, and especially the failures in the advance on Tunisia, did not constitute Eisenhower’s finest hour. He was close to a breakdown by January 1943, and his weak performance briefing the Combined Chiefs of Staff at the Casablanca conference—Roosevelt thought him “jittery”—nearly led to his resignation. He confided to Patton that he thought “his thread [was] about to be cut.” But the British did not insist on his removal, and with Smith’s steady advice Eisenhower weathered the storm.

Eisenhower and Smith were both caught up in the great strategic debate within the Allied camp. Marshall wanted the invasion of France to have every priority and remained deeply suspicious of British attempts to postpone it by diverting efforts to the Mediterranean theater because of their material and manpower shortages. As in the Napoleonic wars, British strategy was to avoid a major continental engagement until, making use of the Royal Navy, the enemy had been worn down at the periphery. American doctrine was the very opposite: using industrial supremacy to fight a battle of equipment (Materialschlacht) and confronting the enemy in a head-on land engagement. Mr. Crosswell quotes the boast of one U.S. general: “The American Army does not solve its problems, it overwhelms them.”

But Marshall’s plans for an early invasion of Northwest Europe were thwarted by Churchill, who went directly to Roosevelt. As things turned out, Churchill proved to be right to postpone D-Day, albeit for the wrong reasons. He longed to attack the “soft underbelly of Europe” through Italy and into central Europe to forestall a Soviet occupation after the war. (Roosevelt, Marshall and Eisenhower all failed to foresee Stalin’s ambitions.) Marshall, on the other hand, was wrong because any attempt to mount a cross-Channel invasion in 1942 or even 1943 would have ended in disaster. The U.S. Army was simply not ready, the shipping and landing craft were not available and the Allies lacked air supremacy.

The stress of Smith’s job, especially dealing with the rival egos of Eisenhower’s army group and army commanders—to say nothing of the constant political interference from Churchill—contributed to his irascibility and ulcers. His infrequent escapes from his desk revolved around needlepoint, fishing and collecting objets d’art. Smith was, in Mr. Crosswell’s words, both “a loner and an inveterate collector all his life.”

Eisenhower has always received the credit for the close Allied cooperation, but in “Beetle” we find that Smith achieved much of it working behind the scenes. Eisenhower knew this and wrote to Marshall about the necessity of promoting him. “Smith seems to have a better understanding of the British and is more successful in producing smooth teamwork among the various elements of the staff than any other subordinate I have.” Yet Eisenhower’s feelings about Beetle seem to have been ambivalent, even though he depended on his abilities to an extraordinary degree. They were never close friends, and Eisenhower failed to give Smith the credit he deserved. Smith’s ability to get on well with the British also often led to accusations that he was prejudiced in their favor. Yet he was brilliant in containing inter-Allied explosions, especially those provoked by the prima donna Bernard Montgomery. Major turf wars were avoided by Smith’s skilled handling of the insufferable British general. When Montgomery came to Eisenhower’s headquarters in Algiers in 1943, he said to Smith: “I expect I am a bit unpopular up here.” Smith replied: “General, to serve under you would be a great privilege for anyone, to serve alongside you wouldn’t be too bad. But, say, General, to serve over you is hell.”

Montgomery, however, was not the only senior commander to exploit Eisenhower’s failure to establish firm control and his attempts to compromise. American generals like Omar Bradley and George Patton also played games and threw tantrums, which Smith had to resolve. “The trouble with Ike,” Smith observed, “is that instead of giving direct and clear orders, [he] dresses them up in polite language; and that is why our senior American commanders take advantage.” Eisenhower’s reliance on charm and manipulation all too often failed to work. Patton likened him to a politician running for office rather than a real commander.

This book, which manages to be both brutally honest and fair, does little to bolster the Ike myth, but clearly shows his moment of glory during the Ardennes offensive in December 1944, when he really did at last take a grip. But Eisenhower quickly lost it again during the rest of that terrible winter. And perhaps predictably, it was Smith who had to fire a semi-deranged Patton in September 1945 after his outrageous remarks attacking denazification.

Smith was disappointed not to get Eisenhower’s job after the end of the war. But his talents for tough negotiation were not ignored. He was appointed to Moscow as ambassador, and Eisenhower said that it would “serve those bastards right.” Although in bad health, Smith was called upon again, in 1950, to reorganize the fledgling CIA. He was appalled by the gifted amateurs in covert operations, who clearly were out of their league up against the ruthless KGB. On becoming president, Eisenhower again called on Smith—to serve under John Foster Dulles at the State Department—and Smith dutifully obeyed. His main role was dealing with the collapse of French Indochina and the Geneva conference in 1954. Struggling against ill health, partly due to a diet of cigarettes, “bourbon and Dexedrine,” Smith died in 1961.

Mr. Crosswell’s account both of Smith’s life and of supreme command in Europe is expert and written in good clean prose. Almost a third of it is devoted to logistical problems, which have never received the attention they deserve, especially for the war in Northwest Europe. Although strangely structured, with Smith’s postwar career at the beginning, the book provides a vital addition to our understanding of the politics and problems of allied warfare.

Mr. Beevor is the author of “D-Day: The Battle for Normandy” (Penguin).

__________

Full article and photo: http://online.wsj.com/article/SB10001424052702304510704575562073415363844.html

Norwegians Find Perfectly Preserved Stone Age Site

‘Mini-Pompeii’

Like a dinosaur skeleton: Excavations of the ‘mini-Pompeii’ under way in Norway.

A Norwegian camping ground is the site of what may become one of Europe’s most significant archeological discoveries. Archeologists have found an almost perfectly preserved Stone Age settlement which may have been buried by a sandstorm over 5,000 years ago.

In Norway archeologists have found what is being described as a kind of “mini-Pompeii.” The well-preserved site is by the sea shore at Hamresanden in southern Norway and was discovered when excavators began digging there, prior to the construction of retirement homes.

The “sealed” Stone Age settlement, near the city of Kristiansand’s airport, is thought to have been covered by a sandstorm, possibly in the course of a few hours. Beneath about a meter (three feet) of sand, excavators uncovered an almost perfectly preserved example of a settlement from what is known as the “Funnel Beaker Culture,” so called because of the distinctive funnel-rimmed clay beakers used by the first Stone Age farmers. This was the major culture in north-central Europe from around 4000 BC to 2700 BC, and archeologists estimate that the Hamresanden settlement was buried by sand around 3500 BC — that is, around 5,500 years ago. At the time, Norway’s climate was much more arid, and geological formations show that sandstorms were not uncommon.

Archeological Sensation

The sudden prehistoric sandstorm preserved walls, arrowheads, complete wooden artifacts and vessels from the era, in much the same way that volcanic ash preserved the doomed town of Pompeii in Italy around 2,000 years ago. Up until now, archeologists in Norway had found only pieces of broken clay pots from the Stone Age. But at the Hamresanden site, which lies at the edge of a camping site, one complete vessel has already been pulled from the ground. Other large shards of pottery already found will enable archeologists to recreate several more of the large vessels.

“This is the first time we’ve made a find like this in Norway,” Håkon Glørstad of the University of Oslo told Norwegian daily Aftenposten. He said that the dig would be “carefully and finally stripped of the last of the earth, in about the same way that one uncovers a dinosaur skeleton. This is an archaeological sensation,” he concluded.

The location of the dwellings may also provide the researchers with information about the ways in which the southern Norwegian shoreline has changed over time. When the settlement was inhabited the ground was nine meters (29 feet) lower than it is today. As a result, archeologists believe they may find the remains of even older settlements nearby, under water.

__________

Full article and photo: http://www.spiegel.de/international/europe/0,1518,723712,00.html

Paleolithic Humans Had Bread Along With Their Meat

Starch grains found on 30,000-year-old grinding stones suggest that prehistoric humans may have dined on an early form of flatbread, contrary to their popular image as primarily meat eaters.

The findings, published on Monday in The Proceedings of the National Academy of Sciences, indicate that Paleolithic Europeans ground down plant roots similar to potatoes to make flour, which was later whisked into dough.

“It’s like a flatbread, like a pancake with just water and flour,” said Laura Longo, a researcher on the team, from the Italian Institute of Prehistory and Early History.

“You make a kind of pita and cook it on the hot stone,” she said, describing how the team replicated the cooking process. The end product was “crispy like a cracker but not very tasty,” she added.

The grinding stones, each of which fits comfortably into an adult’s palm, were discovered at archaeological sites in Italy, Russia and the Czech Republic.

The researchers said their findings push humankind’s first known use of flour back some 10,000 years; the oldest previous evidence had been found in Israel, on 20,000-year-old grinding stones.

The findings may also upset fans of the so-called Paleolithic diet, which follows earlier research that assumes early humans ate a meat-centered diet.

Also known as the “cave man diet,” the regime frowns on carbohydrate-laden foods like bread and cereal, and modern-day adherents eat only lean meat, vegetables and fruit.

It was first popularized by the gastroenterologist Walter L. Voegtlin, whose 1975 book lauded the benefits of the hunter-gatherer diet.

__________

Full article: http://www.nytimes.com/2010/10/19/science/19bread.html

Savagery in the East

How Stalin and then Hitler turned the borderlands of Eastern Europe into killing fields

The story of World War II, like that of most wars, usually gets told by the victors. Diplomatic and military accounts are set largely in the West and star the morally upright Allies—the U.S., Britain and Soviet Union—in battles against fascism. The Holocaust gets its own separate history, as a case apart in its genocidal intent and human tragedy.

Timothy Snyder’s “Bloodlands: Europe Between Hitler and Stalin” forces a dramatic shift in these perceptions. First, there is the setting: the flat and marshy eastern borderlands—inhabited by Jews, Poles, Ukrainians, Belarusians and others—that Stalin and then Hitler turned into what Mr. Snyder calls the “bloodlands.” No GIs fought on or liberated this soil, so the fate of its people never entered the collective Western imagination. Yet this was the true heart of the European conflict. By Mr. Snyder’s “conservative” reckoning, 14 million people were shot, deliberately starved or gassed while Hitler and Stalin were in power. All these dead were noncombatants. Mr. Snyder puts a third of the total on Stalin’s account.

Both Hitler and Stalin dreamed of a new European order, one in the name of a master race, the other of a master class. Their visions met in the borderlands. In his use of political mass murder to achieve it, Stalin was the trailblazer, an elder statesman of terror. The Soviet-made famine of 1932-33, which killed more than three million Ukrainians, launched an era of horror that ended only with the end of the war.

Among his other goals in “Bloodlands,” Mr. Snyder attempts to put the Holocaust in context—to restore it, in a sense, to the history of the wider European conflict. This is a task that no historian can attempt without risking controversy. Yet far from minimizing Jewish suffering, “Bloodlands” gives a fuller picture of the Nazi killing machine. Auschwitz, which wasn’t purely a “death camp,” lives on in our memory due in large part to those who lived to tell the tale. Through his access to Eastern European sources, Mr. Snyder also takes the reader to places like Babi Yar, Treblinka and Belzec. These were Nazi mass-murder sites that left virtually no survivors.

Yet Mr. Snyder’s book does make it clear that Hitler’s “Final Solution,” the purge of European Jewry, was not a fully original idea. A decade before, Stalin had set out to annihilate the Ukrainian peasant class, whose “national” sentiments he perceived as a threat to his Soviet utopia. The collectivization of agriculture was the weapon of choice. Implemented savagely, collectivization brought famine. In the spring of 1933 people in Ukraine were dying at a rate of 10,000 per day.

Stalin then turned on other target groups in the Soviet Union, starting with the kulaks—supposedly richer farmers, whom Stalin said needed to be “liquidated as a class”—and various ethnic minorities. In the late 1930s, Mr. Snyder argues, “the most persecuted” national group in Europe wasn’t—as many of us would assume—Jews in Nazi Germany, a relatively small community of 400,000 whose numbers declined after the imposition of race laws forced many into emigration at a time when this was still possible. According to Mr. Snyder, the hardest hit at that time were the 600,000 or so Poles living within the Soviet Union.

Convinced that this group represented a fifth column, Stalin ordered the NKVD, a precursor to the KGB, to “keep on digging out and cleaning out this Polish filth.” Mr. Snyder writes that before World War II started, 111,091 Soviet Poles were executed. This grim period is little known in Poland itself, but its detailed recounting here shows how a determined totalitarian machine could decimate a national group. Apologists for Stalin, in the West and elsewhere, have insisted that his Great Terror was needed to prepare the Soviets for a coming showdown with Hitler. Mr. Snyder destroys this argument.

Barbarism reached new lows after the Wehrmacht and the Red Army invaded Poland in 1939. The Ribbentrop-Molotov pact, signed in August a week before the blitzkrieg, had split sovereign Poland between the Nazi and Soviet allies. The invading Germans obeyed orders not to spare the civilian population. But the Soviets were by then more experienced at brutality. In the spring of 1940, Stalin ordered the murder of 21,768 Polish officers in what came to be known as the Katyn massacres. Hundreds of thousands of other people from “enemy” classes and nationalities were deported to the east, where many died.

Plans for the Holocaust fell into place after Hitler’s surprise attack on the Soviet Union in 1941 failed to produce the quick victory that the Nazis expected. The killing began east of the Ribbentrop-Molotov line. Most of the victims were shot over pits. Nearly half of the millions of Jews killed by the Germans died in lands taken from the Soviets. In territory that the Nazis occupied in 1939, the extermination started later. The innovation was the gas chamber in the main “death factories” at Treblinka, Chelmno, Belzec, Majdanek and Sobibor, which took in Jews only to kill them. By the time the sixth death camp came on line at Birkenau, near Auschwitz, in early 1943, more than three-quarters of the Jews killed in the Holocaust, and most Soviet and Polish Jews, were already dead.

In the grim postscript to World War II, millions of Poles, Ukrainians, Balts and Germans were ethnically cleansed from lands they had inhabited for generations. Churchill and Roosevelt let Stalin redraw Europe’s borders, and all the bloodlands fell into his hands. Unlike Hitler, Stalin realized his dreams of a global empire. His last murderous act was to launch another anti-Semitic purge, in late 1952, before he himself died in early 1953.

“Bloodlands” manages to clarify as well as darken our view of this era. “To dismiss the Nazis or the Soviets as beyond . . . historical understanding is to fall into their moral trap,” Mr. Snyder writes. “The safer route is to realize that their motives for mass killing, however revolting to us, made sense to them.”

Mr. Kaminski is a member of the Journal’s editorial board.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703794104575546611651621270.html

The road that built us

How the Post Road wrote New England’s history

Without knowing it, you’ve almost surely walked it or driven it, maybe on your way to the grocery store in Wayland, or a restaurant in the South End.

Since it became America’s first mail route back in 1673, the Boston Post Road has connected Boston to New York City, delivering messages, guiding travelers, and tying the Northeast together. In that time, some legs of the route have shifted, and most of it is now known by other names — Washington Street, Route 20, Main Street, or Mass. Route 9. But if you know how to follow the thread, you can still trace the Post Road beneath our modern streets and highways. A few stretches are still called Boston Post Road, as residents of Marlborough and Sudbury, among others, know.

The road in its Colonial form began in downtown Boston, at the Old State House, and followed modern Washington Street over what was once a thin neck of land into Roxbury. It split into two branches at the “parting stone” near what’s now Roxbury Crossing; the northern branch linked Boston and Springfield (with a spur up through Cambridge) before hooking toward Hartford, while the southern branch ran through Providence to New Haven. There the branches unified en route to New York.

To trace the Post Road through its history is to witness how important one connective thread can be to a growing region — and how it can still determine the shape of the city and state hundreds of years later.

Eric Jaffe is the author of “The King’s Best Highway: The Lost History of the Boston Post Road, the Route That Made America,” recently published by Scribner.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/10/17/the_road_that_built_us/

An Age of Creative Destruction

‘Gentlemen: You have undertaken to cheat me. I won’t sue you, for law takes too long. I will ruin you.” Thus Cornelius Vanderbilt writing to business partners who had exploited his absence to gain control of one of his companies. He was as good as his word.

The nature of both ruin and success is the subject of “American Colossus,” H.W. Brands’s account of, as the subtitle has it, “The Triumph of Capitalism” during the period 1865-1900. Mr. Brands paints a vivid portrait of both this understudied age and those industrialists still introduced by high-school teachers as “robber barons”—Vanderbilt, Andrew Carnegie, John D. Rockefeller and J.P. Morgan. Together these men of the 19th century laid the foundations that would allow the use of innovations that we think of as modern, such as trains and automobiles, on a massive scale in the 20th century.

“Colossus” also reminds us of something more subtle: the terrifying difficulty of remaining at the top once one has arrived. Vanderbilt, for example, seemed doomed to be sidelined during his lifetime. He was a “water man” and remained devoted to the steamship even as railroads threatened to relegate river transport to the status of the fax. His hostility to trains was so great he referred to them simply as “them things that go on land.” But the Commodore eventually admitted to himself the looming obsolescence of the river highway—just in time to corner the stock of the New York & Harlem Railroad in the 1860s. Thus did he postpone—albeit only for a few decades—the decline of the great Vanderbilt empire.

Rails to riches: An 1870 cartoon depicting James Fisk’s attempt to stop Cornelius Vanderbilt from gaining control of the Erie Railroad Company

As Mr. Brands relates the tycoons’ stories, he drops some anecdotes wonderfully relevant today. Many Americans these days are buying their first gold shares—but with a certain ambivalence, all too aware that the metal’s price can move suddenly. Mr. Brands reminds us just how suddenly with a description of gold’s gyrations on Friday, Sept. 24, 1869, the day the Treasury signaled the Grant administration’s intention to combat rising gold prices by putting a supply worth $4 million on the market. That day, before Treasury’s move, gold shot to $162 an ounce from $143. Then the government’s gold came online. “As the bells of Trinity pealed forth the hour of noon,” reported the Herald Tribune, “the gold on the indicator stood at 160. Just a moment later, and before the echoes died away, gold fell to 138.”

“Colossus” also reminds us just how colossally wrong bets can be. When New York’s first apartment building, on East 18th Street, was planned in 1869, the reception it received was as cold as a February wind off the Hudson. New Yorkers reckoned that “cohabitation,” as apartment life was called, would fail and that gentlemen would never live “on shelves.”

For all the pleasure that “Colossus” offers in the way of anecdote, two flaws undermine its attractions. First, Mr. Brands frames the book—and indeed all of American history—as a contest between capitalism and democracy. Democracy depends on equality, the author claims, while capitalism needs inequality to function. “In accomplishing its revolution, capitalism threatened to eclipse American democracy,” he writes.

The author’s attachment to a sweeping theme like the democracy-capitalism clash is understandable: It’s the sort of duel that Will and Ariel Durant and other producers of pageant-style history have featured to unify their multivolume works.

Still, this “wasn’t it grand?” mode of writing is imprecise. Mr. Brands laments that capitalism’s triumph in the late 19th century created a disparity between the “wealthy class” and the common man that dwarfs any difference of income in our modern distribution tables. But this pitting of capitalism against democracy will not hold. When the word “class” crops up in economic discussions, watch out: it implies a perception of society held in thrall to a static economy of rigid social tiers. Capitalism might indeed preclude democracy if capitalism meant that rich people really were a permanent class, always able to keep the money they amass and collect an ever greater share. But Americans are an unruly bunch and do not stay in their classes. The lesson of the late 19th century is that genuine capitalism is a force of creative destruction, just as Joseph Schumpeter later recognized. Snapshots of rich versus poor cannot capture the more important dynamic, which occurs over time.

One capitalist idea (the railroad, say) brutally supplants another (the shipping canal). Within a few generations—and in thoroughly democratic fashion—this supplanting knocks some families out of the top tier and elevates others to it. Some poor families vault to the middle class, others drop out. If Mr. Brands were right, and the “triumph of capitalism” had deadened democracy and created a permanent overclass, Forbes’s 2010 list of billionaires would today be populated by Rockefellers, Morgans and Carnegies. The main legacy of titans, former or current, is that the innovations they support will produce social benefits, from steel-making to the Internet.

The second failing of “Colossus” is its perpetuation of the robber-baron myth. Years ago, historian Burton Folsom noted the difference between what he labeled political entrepreneurs and market entrepreneurs. The political entrepreneur tends to compete over finite assets—or even to steal them—and therefore deserves the “robber baron” moniker. An example that Mr. Folsom provided: the ferry magnate Robert Fulton, who operated successfully on the Hudson thanks to a 30-year exclusive concession from the New York state legislature. Russia’s petrocrats nowadays enjoy similar protections. Neither Fulton nor the petrocrats qualify as true capitalists.

Market entrepreneurs, by contrast, vanquish the competition by overtaking it. On some days Cornelius Vanderbilt was a political entrepreneur—when he ruined those traitorous partners, for instance. But most days Vanderbilt typified the market entrepreneur, ruining Fulton’s monopoly in the 1820s with lower fares, the innovative and cost-saving tubular boiler and a splendid advertising logo: “New Jersey Must Be Free.” With market entrepreneurship, a third party also wins: the consumer. Market entrepreneurs are not true robbers, for their ruining serves the common good.

Mr. Brands appreciates the distinction between political entrepreneurs and market entrepreneurs, but he chooses not to highlight it. Thus he misses an opportunity to emphasize a truth about the late 19th century that rings down to our own rocky times: The best growth is spurred by the right kind of ruin.

Miss Shlaes, a senior fellow at the Council on Foreign Relations, is writing a biography of Calvin Coolidge.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704380504575530332139071998.html

Another Quiet American

On May 23, 1950, the FBI arrested Harry Gold, a 39-year-old chemist who lived with his father in Philadelphia. The FBI accused him of being a Soviet espionage agent—the man who, as a courier, literally gave the Russians the secrets of the atomic bomb. Though other Soviet spies from that era—Alger Hiss, Julius and Ethel Rosenberg—remain notorious to this day, Gold has faded from the story. Allen M. Hornblum offers a welcome corrective with the biography “The Invisible Harry Gold.”

Gold was the son of Russian Jewish immigrants—he was born in Switzerland in 1910 before his parents made their way to America. While studying chemical engineering at Drexel University in the mid-1930s, he was recruited as a Soviet spy. Thus began a 15-year espionage career, one that started with the theft of industrial secrets—Gold later said that he just wanted to make life easier for the Soviet people—and ended with his passing information from the Manhattan Project, provided by physicist Klaus Fuchs, to the Soviet Union. Gold confessed to his crimes, and his testimony would help convict the Rosenbergs.

Off the Cuffs: Harry Gold, 1950

Gold has long been a riddle. To his supporters, he was a shy, decent man whose total being seemed at odds with his secret life. To his critics—not a few of them on the left—he was a liar and a psychopath who sought fame as a government witness. Mr. Hornblum calls him “one of the most denounced, slandered, and demonized figures in twentieth-century America,” which may be excessive, but Harry Gold was certainly loathed by many.

Mr. Hornblum presents us with a balanced portrait, tracing Gold’s hardscrabble young life, his slow entanglement with the Soviet espionage network and the many unhappy years he spent working on Moscow’s behalf. Gold was never much of an ideologue but was grateful for a job that a communist friend helped him to land. He was naïve enough to believe that the Soviet Union was actually fighting anti-Semitism, and he was easily bullied into continuing to work with the Soviets whenever he tried to return to a normal life. During World War II he could even convince himself that he was sharing secrets with America’s wartime partner—and thus not undermining his own country’s security.

Gold didn’t confess until the FBI tracked him down, but shortly after his arrest he began cooperating fully. Sentenced to 30 years’ imprisonment (a longer sentence than the government had requested), he was paroled in 1965. He went to work at a hospital, where, according to Mr. Hornblum, he was a beloved employee with many friends. When Gold died in 1972, he had kept such a low profile that a year would pass before newspapers noticed.

“I have never intended any harm to the United States,” Gold wrote from prison. “For I have always steadfastly considered that first and finally I am an American citizen. This is my country and I love it.” If Gold had not cooperated with authorities, Mr. Hornblum writes, it is unlikely that he could have been convicted: Fuchs, the physicist, had confessed, too, but he was being held by the British and would probably not have been allowed to testify. If Gold had not implicated other spies, the author notes, they might well have escaped.

“How did such a gentle, apolitical person,” Mr. Hornblum asks, “get caught up in the ‘crime of the century’?” This finely crafted biography gives us the most complete answer we are ever likely to have.

Mr. Ybarra writes about art, literature and extreme sports for the Journal.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703794104575545990360049672.html

A Visit to Germany’s First-Ever Hitler Exhibition

Führer Show

The German History Museum in Berlin will open an extensive exhibition on Adolf Hitler on Friday, the first of its kind in postwar Germany.

On Friday, the German History Museum is opening postwar Germany’s first-ever comprehensive exhibition on Adolf Hitler. Curators went out of their way to avoid creating an homage — yet they are still concerned about attracting cheering neo-Nazis and angry protesters.

The portrait is rather pretentious — oil on canvas, and a whopping 156 by 120 centimeters (5 feet 1 inch by 3 feet 11 inches). It shows Adolf Hitler the way his Nazis and favorite artists most liked to see him — posing as the visionary ruler against a backdrop of an imagined German landscape.

“We likely could have had it,” says Hans-Ulrich Thamer. The US Army, which has held the painting since the end of the war, would certainly have been open to lending it. But Thamer, a curator for the German History Museum (DHM), didn’t want it. The rationale, it appears, was that German eyes shouldn’t be dazzled a second time by this sort of monumental state-sponsored art from the Third Reich. Thamer prefers, instead, to display smaller reproductions of the Führer. “That detracts from his impact,” the curator says.

An extensive exhibition about the dictator opens this Friday at the German History Museum in Berlin, the first in Germany’s postwar history to focus exclusively on Hitler’s life. The exhibition is not without risks: Organizers fear that the show could attract unwanted praise from right-wing extremists — and bitter protest from the rest of the country. Indeed, out of distrust of its own visitors, the show omits anything that might glorify Hitler as a hero. “We cannot provide any opportunity to identify with him,” has been the watchword for Thamer, who developed the show.

To be sure, there have been countless exhibitions in Germany addressing the Third Reich, covering topics including the Holocaust, crimes committed by the German military, the Nazi justice system, medicine in the Third Reich, forced labor, concentration camps and other horrors. Until now, though, museum directors and politicians responsible for cultural affairs have shied away from dealing directly with the man who presided over those horrors.

A Nervous Wreck and a Laughingstock

The exhibition follows years in which the public image of Hitler has changed drastically. Recently, he has been portrayed as a nervous wreck living out his last days in his Berlin bunker in the film “Downfall,” then as a laughingstock played by German comedian Helge Schneider in “Mein Führer.” Hitler even appears in wax at Madame Tussauds in Berlin.

Each new realistic portrayal of Hitler has been accompanied by a debate as to the wisdom, for example, of installing a wax Hitler as a tourist attraction next to Heidi Klum in a wax museum. But the discussion has tended to subside quickly (apart from the incident in which a protester decapitated the wax Hitler; the museum has since repaired the figure and now displays it behind bulletproof glass).

Nevertheless, there are plenty of taboos the museum curators didn’t dare touch, starting with the exhibition’s title. The idea of calling the show simply “Hitler” was quickly nixed by an expert committee of prominent historians convened in 2004, when the museum first raised the idea for the show. For both conservative historian Michael Stürmer and more leftist colleagues such as Reinhard Rürup, such a name was not permissible.

Above all, the organizers wanted to avoid breathing new life into the old excuses from the postwar period that painted Hitler as the evil seducer of an unknowing and innocent general population. In the end, the exhibition on this sensitive topic was given the title “Hitler and the Germans: Nation and Crime.”

Great Caution

Today, of course, it is no longer a matter of dispute that Hitler represented the view of the vast majority of Germans, of the Volksgemeinschaft, at least until 1941 — that Germans saw themselves in their “Führer.” Thamer, who teaches history at Münster University, says “our picture of German society grows bleaker and bleaker,” in reference to the current state of research.

Still, an exhibition held in the center of the former Nazi capital follows different rules than the ones that govern academic discourse, and the curators exercised great caution in choosing items for the show. Many of Hitler’s uniforms and other personal items have been preserved, in storage in Moscow, but there are no plans to display them. “Using relics like those would cross the line into making this an homage to a hero,” Thamer says. The intention is always to create a critical distance between the viewer and the 20th century’s greatest criminal.

Visitors to the exhibition are first greeted with three photographic portraits — Hitler as a party agitator, as a statesman and — in a photomontage — as a death’s head. Behind these images, which are projected onto a transparent screen, other photographs light up — of unemployed people, of cheering supporters and of soldiers marching past a burning house. The dictator is never shown alone — he is always embedded in the social, political and military context in which he acted.

As an additional precaution, an accompanying catalog provides an enormous amount of context, with historians presenting their views on “the Nazi Party’s breakthrough,” “the iconography of the nation,” “women in the wartime community” and various other subjects related to Hitler.

‘Germany’s Good Fortune!’

Typifying the entire exhibition is an essay by Ian Kershaw, the British biographer of Hitler, who describes Hitler supporters’ quasi-religious relationship to their messiah. “It’s a miracle of our times that you have found me,” the dictator declared to 140,000 excited supporters in Nuremberg in 1936, “that you have found me among so many millions! And that I have found you, that is Germany’s good fortune!”

Kershaw takes a sentence uttered by a Nazi state secretary — that every German should “work toward the Führer to fulfill his goals” — as a good explanation of the inner logic of the Nazi dictatorship and of the crimes committed by a population that sometimes acted on its own initiative.

The German History Museum exhibition includes evidence supporting this thesis. There is a tapestry, for example, embroidered by members of two women’s groups in the town of Rotenburg an der Fulda. It shows Hitler Youth, SA and League of German Girls formations arranged in the shape of a cross, marching toward a church. The embroiderers further embellished the work with the text of the Lord’s Prayer in half cross-stitch. Thamer, who was born in Rotenburg, came across the tapestry by chance during a visit.

The exhibition’s items of Hitler memorabilia are rather hidden in a small display case. There’s a photo book called “The Hitler No One Knows,” by Hitler’s favorite photographer Heinrich Hoffmann, and a deck of Führer playing cards that also includes resplendent pictures of Rudolf Hess and other Nazi leaders. These objects are treated almost like pornography, obscene material meant to lose its potential to excite in this solemn museum setting. A chest of drawers from Hitler’s New Reich Chancellery is hung crooked, instead of simply being placed on the floor, and an oil painting glamorizing the nation at war likewise hangs askew.

The central insight and most important message for visitors to the exhibition is that “we won’t be finished with Hitler for a long time yet,” says Simone Erpel, who prepared the show together with Thamer. “Every generation has to find its own answers.”

The Museum in History

The show’s last room presents 46 SPIEGEL covers on Hitler and National Socialism — along with the fake Hitler diaries published by Stern magazine. From the earliest title (“Anatomy of a Dictator” from 1964) to the most recent (“The Accomplices” in 2009), these articles also reflect changing perceptions of history.

The one aspect that remains oddly underrepresented in all this is information about the museum’s own location in the historical Zeughaus, a former arsenal on Unter den Linden in Berlin. Diagonally opposite the building, on a square next to the State Opera, is the site of the infamous May 1933 book burning. The nearby Lustgarten was decked out with swastika flags during ceremonial occasions.

“Hitler and the Germans” makes no mention of these connections, not even of the day when the Zeughaus itself made history. Hitler had come to view an exhibition of captured Soviet weapons in the building on “Heroes’ Commemoration Day,” a Nazi holiday, on March 21, 1943. An officer named Rudolf-Christoph von Gersdorff attended as well, carrying two explosive devices and intending to assassinate Hitler.

Gersdorff had already triggered the time-delayed fuses when Hitler sped through the exhibition and left the building earlier than expected. Escaping notice but dripping with sweat, the officer ducked into a bathroom and was just able to defuse the devices before they detonated.

__________

Full article and photo: http://www.spiegel.de/international/germany/0,1518,722612,00.html

How Middle Eastern Milk Drinkers Conquered Europe

Neolithic Immigration

An excavation of a Linear Pottery village in Bavaria

New research has revealed that agriculture came to Europe amid a wave of immigration from the Middle East during the Neolithic period. The newcomers won out over the locals because of their sophisticated culture, mastery of agriculture — and their miracle food, milk.

Wedged in between dump trucks and excavators, archeologist Birgit Srock is drawing the outline of a 7,200-year-old posthole. A concrete mixing plant is visible on the horizon. She is here because, during the construction of a high-speed rail line between the German cities of Nuremberg and Berlin, workers happened upon a large Neolithic settlement in the Upper Franconia region of northern Bavaria.

Expansion of crop cultivation and dairy farming during the Neolithic period

The remains of more than 40 houses were unearthed, as well as skeletons, a spindle whorl, bulbous clay vessels, cows’ teeth and broken sieves for cheese production — a typical settlement of the so-called Linear Pottery culture (named after the patterns on their pottery).

This ancient culture provided us with the blessing of bread baking. At around 5300 BC, everyone in Central Europe was suddenly farming and raising livestock. The members of the Linear Pottery culture kept cows in wooden pens, used rubbing stones and harvested grain. Within less than 300 years, the sedentary lifestyle had spread to the Paris basin.

The reasons behind the rapid shift have long been a mystery. Was it an idea that spread through Central Europe at the time, or an entire people?

Peaceful Cooperation or Invasion?

Many academics felt that the latter was inconceivable. Agriculture was invented in the Middle East, but researchers found it hard to believe that people from that part of the world would have embarked on an endless march across the Bosporus and into the north.

Jens Lüning, a German archaeologist who specializes in the prehistoric period, was influential in establishing the conventional wisdom on the developments, namely that a small group of immigrants inducted the established inhabitants of Central Europe into sowing and milking with “missionary zeal.” The new knowledge was then quickly passed on to others. This process continued at a swift pace, in a spirit of “peaceful cooperation,” according to Lüning.

But now doubts are being raised about that explanation. New excavations in Turkey, as well as genetic analyses of domestic animals and Stone Age skeletons, paint a completely different picture:

  • At around 7000 BC, a mass migration of farmers began from the Middle East to Europe.
  • These ancient farmers brought along domesticated cattle and pigs.
  • There was no interbreeding between the intruders and the original population.

Mutated for Milk

The new settlers also had something of a miracle food at their disposal. They produced fresh milk, which, as a result of a genetic mutation, they were soon able to drink in large quantities. The result was that the population of farmers grew and grew.

These striking insights come from biologists and chemists. In a barrage of articles in professional journals like Nature and BMC Evolutionary Biology, they have turned many of the prevailing views upside down over the course of the last three years.

The most important group is working on the “Leche” project (the name is inspired by the Spanish word for milk), an association of 13 research institutes in seven European Union countries. The goal of the project is to genetically probe the beginnings of butter, milk and cheese.

An unusual circumstance has made this research possible in the first place. Homo sapiens was originally unable to digest raw milk. Generally, the human body only produces an enzyme that can break down lactose in the small intestine during the first few years of life. Indeed, most adults in Asia and Africa react to cow’s milk with nausea, flatulence and diarrhea.

But the situation is different in Europe, where many people carry a minute modification of chromosome 2 that enables them to digest lactose throughout their life without experiencing intestinal problems. The percentage of people with this modification is the highest among Britons and Scandinavians.

It has long been known that these differences are based on Europeans’ primeval origins. But where did the first milk drinker live? Which early man was the first to feast on cow’s milk without suffering the consequences?

Groups Did Not Intermingle

In a bid to solve the mystery, molecular biologists have sawed into and analyzed countless Neolithic bones. The breakthrough came last year, when scientists discovered that the first milk drinkers lived in the territory of present-day Austria, Hungary and Slovakia.

But that was also where the nucleus of the Linear Pottery culture was located. “The trait of lactose tolerance quickly became established in the population,” explains Joachim Burger, an anthropologist from the University of Mainz in southwestern Germany who is a member of the Leche team.

Deep-frozen thigh bones are stacked in Burger’s laboratory, where assistants wearing masks saw open skulls. Others examine bits of genetic material from the Stone Age under a blue light.

The group will hold a working meeting in Uppsala, Sweden in November. But even at this stage it is already clear that large numbers of people from the Middle East once descended upon Central Europe.

There are also signs of conflict. The intruders differed from the continent’s Ice Age inhabitants “through completely different genetic lines,” Burger explains. In other words, the two groups did not intermingle.

Tension Between Locals and Incomers

This isn’t exactly surprising. The old hunter-gatherers on the continent had long been accustomed to hunting and fishing. Their ancestors had entered Europe 46,000 years ago — early enough to have encountered the Neanderthals.

The early farmers moving into Central Europe were sophisticated compared with these children of nature. The farmers wore different clothing, prayed to other idols and spoke a different language.

It was these differences that probably led to tensions. Researchers have discovered that arsonists set the villages of the Linear Pottery culture on fire. Soon the farmers built tall palisades to protect their villages. Their advance was blocked for a long time by the Rhine River, however.

There are signs that bartering and trade existed, but the two groups did not intermingle sexually. Burger suspects that there was probably a “strict ban on intermarriage.”

The farmers even protected their livestock from outside influences, determined to prevent the wild oxen known as aurochs from breeding with their Middle Eastern cows. They feared that such hybrids would only introduce a new wild element into the domesticated breeds.

Their breeding precautions were completely understandable. The revolutionary idea that man could subjugate plants and animals went hand in hand with enormous efforts, patience and ingenuity. The process took thousands of years.

Getting Animals Under Control

The beginnings can now be delineated relatively well. About 12,000 years ago, the zone between the Zagros Mountains in present-day Iran, Palestine and Turkey was transformed into a giant field experiment.

The first farmers learned to cultivate wild emmer and einkorn wheat. Then they went on to domesticate animals. Goats had been successfully domesticated in Iran by about 9000 BC. Sheep and pigs were domesticated in southern Anatolia.

Enormous settlements soon sprang up in the region known as the Fertile Crescent. Çatalhöyük, known as “man’s first metropolis,” had about 5,000 inhabitants, who lived in mud huts packed tightly together. They worshipped an obese mother goddess, depicted in statues as a figure sitting on a throne decorated with the heads of carnivores.

One of the most difficult challenges was the breeding and domestication of Middle Eastern wild cattle. The male specimens of the species weighed up to 1,000 kilograms (2,200 pounds) and had curved horns. People eventually drummed up the courage to approach the beasts somewhere in the central Euphrates Valley.

They found different ways of getting the cattle under control. One Neolithic sculpture depicts a steer with a hole punched through its nasal septum. Removing the testicles was also quickly recognized as a way of improving the animals’ temperament. Once the wild cattle had been castrated, they could finally be yoked.

The clever farmers realized that if they gave calves from other mothers to the cows, their udders would always be full of milk.

No Taste for Milk

Oddly enough, the Mesopotamian farmers didn’t touch fresh milk. A few weeks ago, Joachim Burger returned from Turkey with a sack full of Neolithic bones from newly discovered cemeteries where the ancient farmers were buried.

When the bones were analyzed, there were no signs of lactose tolerance. “If these people had drunk milk, they would have felt sick,” says Burger. This means that at first the farmers only consumed fermented milk products like kefir, yogurt and cheese, which contain very little lactose.

Even more astonishing, as recent excavations in Anatolia show, is the fact that the ancient farmers did not leave their core region for almost 2,000 years. They had put together the complete “Neolithic cultural package,” from the rubbing stone to seeds, “without advancing into other areas,” says archeologist Mehmet Özdogan.

The coastal zones were long avoided. The people who lived there were probably fishermen who defended themselves against the new way of life with harpoons.

Renegade Settlers

The crossing of the Bosporus did not occur until sometime between 7000 and 6500 BC. The farmers met with little resistance from the hunter-gatherer cultures, whose coastal settlements were being inundated by devastating floods at the time. Melting glaciers had triggered a rise in the sea level of over 100 meters (330 feet).

Nevertheless, the advance across the Balkans was not a triumph. The colonists’ dwellings there seem small and shabby. At the 47th parallel north, near Lake Balaton in modern-day Hungary, the advance came to a standstill for 500 years.

The Linear Pottery culture, which was the first to shift to the northern shore of Lake Balaton, gave the movement new life. Lüning talks about “renegade” settlers who had created a “new way of life” and a “reform project” on the other side of the lake.

With military determination, the advancing pioneers constantly established new settlements. The villages often consisted of three to six windowless longhouses, strictly aligned to the northwest, next to livestock pens and masterfully constructed wells. Their tools, picks and bowls (which were basically hemispheric vessels) were almost identical throughout Central Europe, from Ukraine to the Rhine.

Migration and Mass Murder

The settlers, wielding their sickles, kept moving farther and farther north, right into the territory of backward peoples. The newcomers were industrious and used to working hard in the fields. Clay statues show that the men were already wearing trousers and shaving. The women dyed their hair red and decorated it with snail shells. Both sexes wore caps, and the men also wore triangular hats.

By comparison, the more primitive existing inhabitants of the continent wore animal hides and lived in spartan huts. They looked on in bewilderment as the newcomers deforested their hunting grounds, tilled the soil and planted seeds. This apparently upset them and motivated them to resist the intruders.

In the Bible, Cain, the crop farmer, slays Abel the shepherd. In the Europe of the Neolithic Age, conditions may have been just as violent. One of the most gruesome discoveries is a mass grave that has been dubbed the “Talheim Death Pit” in the German town of that name. The pit is filled with the remains of 34 bodies. The members of an entire clan were apparently surprised in their sleep and beaten to death with clubs and hatchets. So far, archeologists haven’t been able to figure out whether the incomers killed the existing inhabitants, or vice versa.

Drinking Milk by the Bucketful

It is clear, however, that the dairy farmers won out in the end. During their migration, they encountered increasingly lush pastures, a paradise for their cows. An added benefit of migrating farther to the north was that raw milk lasted longer in the cooler climate.

This probably explains why people soon began drinking the abundant new beverage by the bucketful. Some had genetic mutations that enabled them to drink milk without getting sick. They were the true progenitors of the movement.

As a result of “accelerated evolution,” says Burger, lactose tolerance was selected for on a large scale within the population in the space of about 100 generations. Europe became the land of the eternal infant as people began drinking milk their whole lives.
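
How fast is “accelerated”? A back-of-the-envelope sketch in Python, using the standard textbook model of selection on a dominant allele, makes the arithmetic concrete. The starting frequency (1 percent) and the selection coefficient (8 percent) are hypothetical values chosen for illustration — the article quotes no figures.

# A minimal sketch of allele-frequency change under positive selection.
# Assumed values only: the milk-tolerance allele starts at 1% frequency and
# carriers have a hypothetical 8% reproductive edge; neither number is from
# the article.

def next_frequency(p: float, s: float) -> float:
    """One generation of selection on a dominant allele A.
    Genotype fitnesses: AA and Aa get 1 + s, aa gets 1."""
    mean_fitness = 1 + s * (p * p + 2 * p * (1 - p))
    return p * (1 + s) / mean_fitness  # simplifies from the genotype sums

p, s = 0.01, 0.08
for generation in range(101):
    if generation % 20 == 0:
        print(f"generation {generation:3d}: allele frequency {p:.2f}")
    p = next_frequency(p, s)

Under these assumed numbers, the variant spreads from 1 percent of the population to roughly two-thirds within 100 generations — no exotic mechanism required, only a modest edge compounding generation after generation.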

The new food was especially beneficial for children. In the Neolithic Age, many small children died after being weaned in their fourth year of life. “As a result of consuming healthy milk, this could be greatly reduced,” Hamburg biologist Fritz Höffeler speculates. All of this led to population growth and, as a result, further geographical expansion.

‘White Revolution’

Does this explain why the inventors of the sickle and the plow conquered Europe so quickly, leading to the demise of the old hunter-gatherers?

Imagine, if you will, a village of the Linear Pottery culture in the middle of winter. As smoke emerges from the top of a wooden hut, the table inside is surrounded by rosy-cheeked children drinking hot milk with honey, which their mother has just prepared for them. It’s an image that could help explain why people adopted a sedentary way of life.

Burger, at any rate, is convinced that milk played a major part in shaping history, just as gunpowder did much later. “There was once a white revolution,” he says.

__________

Full article and photos: http://www.spiegel.de/international/zeitgeist/0,1518,723310,00.html

A Country as Seen Through Its Crowns

David Starkey’s new book takes on the most cherished of British institutions.

In our modern age, nothing and no one has done as much to maintain the mystique of the British royal family as television. Near the end of his spicily compelling history “Crown & Country,” David Starkey describes the surge of excitement he felt as a youngster watching Queen Elizabeth II’s coronation.

“On 2 June 1953, I, then a boy of eight in my Sunday best, gathered along with countless millions more to watch the coronation on a neighbour’s television which had been bought especially for the occasion. It was the first time that I had seen television or a monarch. And I have never forgotten it.”

For Mr. Starkey it was the beginning of a life-long obsession with the British monarchy and a special fascination with the court intrigues of the Tudor dynasty—a second volume of his entertaining history about Henry VIII, titled “Henry: Model of a Tyrant,” will be published next year.

With “Crown & Country”—which brings together and updates two previously published volumes, “The Monarchy of England: The Beginnings” and “Monarchy: From the Middle Ages to Modernity”—Mr. Starkey re-examines the role of an institution, the British/English monarchy, that has been in existence for nearly 2,000 years.

Starting with the first Roman invasions of Britain by Julius Caesar in 55 and 54 BC and ending with a chapter calling into question whether a future archbishop of Canterbury would be prepared to crown and consecrate Prince Charles and his wife Camilla Parker-Bowles, Mr. Starkey never ceases to wield a historian’s greatest weapon: utter self-confidence.

Rather than simply provide a straight, by-the-numbers biography of Britain’s kings and queens down the years, Mr. Starkey has done something trickier and more ambitious by digging into the ideas underpinning monarchy and the philosophies of those who formulated them.

Ever the populist, Mr. Starkey describes these kingly advisers and publicists as “the shock troops of monarchy.”

“When they were talented and imaginative, monarchy flourished: when they were not, the crown lost its sheen and the throne tottered.”

So this is as much a history about intelligent and ambitious priest-ministers like Thomas Wolsey (whom Mr. Starkey describes as “spin doctor in chief”) or Roger of Salisbury, as it is about the kings under whom they served—Henry VIII in Wolsey’s case and Henry I in Roger of Salisbury’s.

But with Mr. Starkey controversy is never very far away, especially when he states that the Saxon Conquest, as opposed to the Norman Conquest, was the turning point in England’s history. School teachers up and down the land will no doubt be pulling out their red pens in protest.

“They [the Anglo-Saxons] would invent a new politics which depended on participation and consent, rather than the top down autocracy of Rome,” Mr. Starkey argues by way of justification, before hailing the House of Wessex for producing a king such as Alfred “The Great.”

“Like all Anglo-Saxon kings,” Mr. Starkey writes, “Alfred was a man of action and a warrior. But he was also, uniquely for his own age and for long after, a true philosopher-king.”

Indeed, it is a lack of intellectual rigor that Mr. Starkey finds most disappointing among British monarchs, though there are notable exceptions such as Edward I, Henry VIII and especially his daughter Elizabeth I.

In the English press Mr. Starkey has been particularly scathing about Elizabeth II, whose coronation so enthralled him as a youngster. In “Crown & Country” he is rather more circumspect, though no less dismissive: “Her [the queen’s] education, at the hands of a devoted governess, was modest and undemanding . . . riding, along with dogs became a lifelong passion. Books, on the other hand, remained alien: reading was for state papers.”

Of course, the burning question is whether Mr. Starkey believes the monarchy has outlived its usefulness. Though he never comes right out and says it, this is certainly one very disillusioned English historian.

For Mr. Starkey the rot set in at the beginning of the 18th century with the Hanoverians, the second wave of German-born kings to arrive in England after the Saxons.

Never one to mince his words, Mr. Starkey describes the Hanoverians, of which the current House of Windsor is really an extension, as “the least able and attractive house to sit on the British throne.” He goes even further by declaring that “[it] was the awkward, unattractive personalities of the first two Hanoverian kings [George I and George II] which accelerated” the advent of the British premiership and “made it irreversible.”

By the end of “Crown & Country” Mr. Starkey snaps that a civil inauguration for Charles might even be for the best. “It would be a recognition that the United Kingdom has become—as it has indeed—the Royal Republic of Britain.” And while on the subject of sacred cows, Mr. Starkey muses that it is “America today which best embodies the ideas of freedom, power and empire which inspired that great denizen of Stowe, William Pitt, in the reign of George II.”

Does such provocative rhetoric merely mean Mr. Starkey is planning his next book tour? Or is the writing really on the wall for the British monarchy? Only time will tell.

Mr. Grey is a writer based in Paris.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703673604575550550771283386.html

Out for Blood

A portrait of the prosecutor charged with investigating the causes of the 1929 crash.

As night follows day, so inquests follow crashes. What followed the 1929 crash were the sensational stock-market hearings of 1933 and 1934. Michael Perino’s “The Hellhound of Wall Street” is the story of the chief inquisitor.

Contrary to the book’s overpromising subtitle, the Senate Banking and Currency Committee investigation did not, in fact, “forever change American finance.” Dramatic it was, and shocking, too. But if Ferdinand Pecora, the committee’s chief counsel, were gazing down on Wall Street today, he might be struck not by how much has changed but how little. Regulations we have in profusion, and regulators, too. Yet fallible human beings persist in buying high and selling low, rather than the other way around.

Mr. Perino roots hard for his protagonist, who had spunk enough for three. Ferdinand Pecora was born in Sicily in 1882 and brought to New York City at age 4. He grew up in a cold-water basement flat that was part residence, part shoe-repair shop. When his father, Luigi, the cobbler, was incapacitated in an industrial accident, 14-year-old Ferdinand became the family’s principal breadwinner.

The striving young immigrant had energy left over to attend law school at night. He passed the bar exam and cast his political lot with the city’s Democratic Party machine. In 1918, now 35, he became a deputy assistant district attorney. A decade later he made a run for the DA’s office. Defeated, he left the city payroll for private practice on Dec. 31, 1929. The stock market had already crashed, of course, but the Depression was just beginning.

The inquisition that would make Pecora a household name in Depression-era America was set in motion by President Hoover in 1932. Led by Sen. Peter Norbeck of South Dakota, the Republican chairman of the Senate Banking and Currency Committee, the panel was charged with unmasking the short sellers who, according to urban legend, were undermining share prices. Hoover wasn’t out to regulate American finance, Mr. Perino relates, but rather to finger the scoundrels he suspected of wrecking the market and ruining his presidency.

Norbeck’s investigation went nowhere. The villains he hoped to expose—notably, Richard Whitney, the imperious head of the New York Stock Exchange—walked all over a succession of ineffectual chief counsels. With Franklin Roosevelt’s election, Norbeck would soon lose his committee chairmanship. How to reinvigorate his moribund investigation? The Republican senator hired the eager, obscure, Democratic ex-assistant DA.

What little Pecora knew about banking and finance he had gleaned from prosecuting low-level frauds. He was, however, a master cross-examiner, a quick study and a tireless worker. He wanted blood, too. In a dinner speech to the Elks Club of New York, he assailed the “men of might” on Wall Street who had taken “millions and millions of the hard-earned pennies of the people.” As for what his investigation might achieve, Pecora ventured: “When the nation again comes to days of plenty and prosperity, let us seek to make it impossible for water and hot air to be sold to men and women for gold taken from their life savings.”

For 10 days in March 1933, Pecora’s investigatory target was Charles E. Mitchell, chief executive of National City Bank, later to become Citigroup. “Sunshine Charley,” as Mitchell was mockingly known after his fall from grace, came pre-convicted, but his bank was a pillar of strength. Today, in the wake of the serial bailouts of 2008-09, Mitchell’s managerial achievement seems almost mythical. From the 1929 peak to the 1933 depths, nominal GDP fell by 45.6%—the American economy was virtually sawed in half. By contrast, during our late, Great Recession, nominal GDP dropped by only 3.1%. Yet this comparatively minor perturbation sent Citigroup into the arms of the federal government to the tune of $45 billion in TARP funds and wholesale FDIC guarantees of the bank’s tattered mortgage portfolio.

National City did accept a $50 million federal investment in 1934, after Mitchell resigned. However—and herein lies the difference—the bank’s solvency didn’t hinge on that cash infusion. Many banks did fail in the Depression, of course. But from today’s perspective the wonder is that so many didn’t.

To Pecora, though—and to Mr. Perino, too—the health of the National City balance sheet was not the question. Mitchell was a whipping boy from central casting. If, Pecora reasoned, he could expose Mitchell as a tax evader, ridicule his boom-time predictions and reveal his high-pressure sales tactics, an enraged public would demand that the Roosevelt administration put capitalism in its place. “Pecora gave them proof,” Mr. Perino writes, “proof that the honesty and integrity of the financial establishment were inadequate—proof that laissez-faire didn’t work. If Wall Street could not or would not regulate itself, Washington would have to regulate for them.”

Thanks in good part to Pecora’s work, investors today have prospectuses to read and an SEC to complain to. Bank depositors have federal deposit insurance to protect them. And banks like Citi operate in the certainty that they won’t be allowed to fail, however much they deserve to.

Pecora went on to become an SEC commissioner, a judge on New York state’s Supreme Court and a crusader for progressive political causes. Mitchell, who resigned in disgrace from National City and lost his houses to foreclosure, refused to file for personal bankruptcy. Rather, he honorably worked to pay every last dollar of debt. Later he built Blyth & Co. into a thriving investment bank.

Mitchell or Pecora—who’s your hero?

Mr. Grant is the editor of Grant’s Interest Rate Observer and the author, most recently, of “Mr. Market Miscalculates.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704380504575530392819077162.html

In Search Of True Believers

The voters may have liked their president, but they didn’t want him picking their senator.

In 1936, Franklin Delano Roosevelt trounced Republican Alf Landon by 24 percentage points in the popular vote and won the biggest electoral landslide in American history. Equally impressive were the lopsided congressional victories that year: a 76-16 majority over the feeble Republicans in the Senate, a 334-88 majority in the House.

With such a mandate, Roosevelt set out to expand the New Deal and to give himself the power to make it work. He pushed bills to establish a minimum wage and streamline his control over the executive branch. To fend off a Supreme Court that had struck down key aspects of the New Deal, he tried adding another six justices to the court. Yet the popular president soon found that all his political capital wasn’t worth much in Congress.

“Just nine months after Roosevelt’s landslide election, opposition in his own party had grown assertive, militant, and confident—and the New Deal had come to a standstill,” writes Susan Dunn in “Roosevelt’s Purge.” Ms. Dunn, a professor at Williams College, delves into a fascinating and overlooked aspect of the FDR presidency: Roosevelt’s brazen effort to assert control over his own party in the summer of 1938.

Ms. Dunn has written an engaging story of bare-knuckled political treachery that pits a president at the peak of his popularity against entrenched congressional leaders who didn’t like where he was taking the country and their party. FDR tried to use the power of the White House, and his personality, to run his opponents out of the Democratic Party. He failed miserably.

When Roosevelt’s second-term agenda hit a brick wall of Democratic opposition, he first tried a charm offensive. In June 1937, he invited every Democrat in the House and Senate to be his guest for a weekend getaway at the Jefferson Islands Club on the Chesapeake Bay. (Well, not quite every Democrat—the six women in Congress were not on the list.) The president treated them to a weekend of skeet shooting, fishing, poker and skinny dipping. The New York Times reported he had done himself “a world of good,” easing tension with congressional Democrats.

Not really. When the skinny dipping and skeet shooting were over, his agenda was still stalled. Four weeks later, 70 senators again voted to block his court-packing bill. One of the few to support the president was Sen. Hattie Caraway of Arkansas, the only woman in the Senate and the only Democratic senator not invited to the president’s weekend retreat.

It was time to play hardball. As Treasury Secretary Henry Morgenthau put it: “There has got to be a fight and there has got to be a purge.” Roosevelt made a decision. He would drive the conservatives out of the party, beginning with those who faced competitive primaries in 1938. He had reason to believe that he could call the shots. He had won the South in 1936 by the kind of margins that would make a Soviet leader blush: 87% of the vote in Georgia, 96% in Mississippi, 98.6% in South Carolina.

One of FDR’s first targets was Georgia Sen. Walter George. The senator had opposed parts of FDR’s agenda but eagerly sought his support in his Democratic primary, even writing him a letter apologizing for his political transgressions. “I have never meant to be offensive to you,” he wrote, adding that he had never “at any time felt anything but deep affection for you.”

With much fanfare, FDR traveled to Barnesville, Ga., in August 1938 to dedicate a rural electrification project. Before a large crowd of enthusiastic FDR supporters and with George sitting a few feet behind him, Roosevelt went for the kill against “my old friend, the senior senator from this state.”

“On most public questions,” Roosevelt said of George, “he and I don’t speak the same language.” After lambasting the senator for standing in the way of progress, he told the crowd that if he could vote in the upcoming primary, he would “most assuredly” cast his ballot for George’s opponent, Lawrence Camp. To reinforce FDR’s popularity in Georgia, Ms. Dunn writes, “federal money rained down on Georgia, including $53 million in WPA funds for building projects in Georgia that promised to create thirty-five thousand jobs.”

FDR did the same in state after state, endorsing liberal primary challengers against incumbent Democratic senators. The conservatives fought back hard. “Their attempt to pack the Court failed,” one opponent said of Roosevelt and his team, “and their attempt to pack the Senate will fail.” In Maryland, Sen. Millard Tydings turned FDR’s support for his primary opponent into a central campaign issue, condemning the president’s “invasion” of Maryland and declaring: “The Maryland free state shall remain free.”

Tydings was perhaps the most anti-New Deal Democrat in Congress and the one Roosevelt wanted defeated above all others. He instructed Harold Ickes to “take Tydings’ hide off and rub salt in it.” But it was FDR who would be rubbed in salt. Tydings trounced his FDR-backed opponent in a 20-point landslide. A bitter Roosevelt refused to congratulate him.

And it wasn’t just Tydings. All of the Democratic senators targeted by FDR coasted to victory in their Democratic primaries. The voters may have liked their president, but they didn’t want him picking their senator. In the general election, Roosevelt didn’t fare any better. Republicans picked up eight Senate seats and nearly doubled their numbers in the House.

For FDR, it may have been a blessing in disguise. As the focus of his presidency quickly changed to containing Nazi Germany, Roosevelt’s closest allies would be the very conservatives he opposed in 1938. He would never again attempt to intervene in a party primary. He had learned a lesson that needs re-learning from time to time: Political purges are more effectively done by the voters, not by the power brokers in Washington.

Mr. Karl is senior political correspondent for ABC News.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704631504575532371905049074.html

One Nation, Indivisible

The Pledge of Allegiance was drafted in two hours on a sweltering August night in 1892.

On Oct. 12, 1892, schoolchildren inaugurated an American tradition that continues more than a century later: They recited the Pledge of Allegiance. The occasion was the 400th anniversary of Christopher Columbus’s discovery of America, and the promoter of the Pledge was a popular national magazine called the Youth’s Companion.

At the end of the 19th century, the Civil War was still a recent memory, industrialization was proceeding apace and immigrants were streaming into the country. It was a time of national upheaval, and the Companion’s owner and editors decided to champion a cause worth reviving today. They believed that part of their magazine’s mission was to promote national pride and encourage the egalitarian ideals that were at the heart of American democracy.

This was also the job of the public schools, they believed. The schools’ “great task,” the magazine editorialized, was to make each child a “thorough going American.” The flag flying over your local school today comes courtesy of the Youth’s Companion too, which had earlier launched a successful program to encourage public schools to display the Stars and Stripes.

All this and other compelling details about the origin of the Pledge of Allegiance are recounted in “The Pledge,” a concise and often entertaining history by Peter Meyer and the late Jeffrey Owen Jones. As the authors show, the Pledge’s popularity hasn’t faded after more than a century of use. It is a powerful unifying ritual that brings together Americans in an affirmation of shared patriotism. At the same time, it has been “a lightning rod for bitter controversy,” including three decisions by the U.S. Supreme Court.

The Pledge was the brainchild of Francis Bellamy, a 36-year-old Baptist minister who worked at the Youth’s Companion. It was included in the magazine’s suggested school program for the first Columbus Day, along with a reading of President Benjamin Harrison’s proclamation of the holiday, the singing of “America,” the reciting of a prayer and a patriotic oration. At the time, no one thought that the Pledge—then known as the “salute to the flag”—would last beyond the 1892 event.

The Pledge wasn’t the only flag salute in use at the time, and one of the pleasures of this book is to become acquainted with now-forgotten expressions of patriotism from a time when citizens were more open and direct about declaring their love of country. The Balch Salute, written in 1885 by a New York City teacher, had a brief run of popularity. It read: “I give my heart and my hand to my country—one country, one language, one flag.”

Bellamy found the Balch Salute “juvenile” and wanted something more dignified, with greater historical meaning. He composed the Pledge in two hours on a sweltering August evening. The result, Messrs. Jones and Meyer say, was a “clean, easy-flowing, and pleasantly cadenced piece of writing.” This kind of “compact prose” is “deceptively difficult” to write, they note. In the hands of a lesser craftsman, the Pledge would have disappeared into oblivion.

The original Pledge contained 22 words, compared with 31 today, and Messrs. Jones and Meyer trace the development of each iteration. The first change occurred when Bellamy, dissatisfied with how the Pledge sounded when he heard it recited, aimed to improve the rhythm by adding the word “to” in front of “the Republic.” In 1923, some Americans worried that the Pledge didn’t name the U.S. and worked to change the original “I pledge allegiance to my flag” to “I pledge allegiance to the flag of the United States.” In 1924, the words “of America” were added to the same phrase. Also in 1924, the Pledge began to be accompanied by a raised-arm salute, a practice that was abandoned as Hitler rose to power.

The fourth change came in 1954, when the Knights of Columbus lobbied for the addition of the words “under God.” Congress passed a resolution authorizing the change, and it was signed into law by President Eisenhower. In the 1970s there was a failed proposal to change the ending of the Pledge to “with liberty, justice and responsibilities for all.” Nice idea—citizenship carries duties—but it sounded lousy.

Legal challenges started in the first years of the 20th century, when states passed laws mandating that the Pledge be recited in the public schools. In an era less given to litigation than ours, many of the early objections were settled privately. Students whose religious or political convictions precluded them from reciting the Pledge were usually suspended for a few days and then quietly encouraged to arrive at school after the Pledge had been said.

Courts that heard early Pledge cases usually ruled against those who wanted to opt out. In 1918, Mennonites sued and lost, earning a stern lecture from a judge who warned that the refusal to say the Pledge was the “forerunner of disloyalty and treason.” But the most vigorous dissenters were Jehovah’s Witnesses, who brought dozens of lawsuits in the 1930s. In 1940, the Supreme Court ruled against the Witnesses, only to reverse itself two years later when it ruled that every American has the First Amendment right to refuse to say the Pledge. Along the way, passions ran so high that thousands of Witness children were expelled from school, and there were so many attacks on Witnesses and their houses of worship that Eleanor Roosevelt made a public plea for nonviolence.

More recently, the Supreme Court ruled in 2004 against California atheist Michael Newdow, who argued that the inclusion of the words “under God” was unconstitutional. That his daughter had the right to stand quietly while her classmates recited the Pledge did not satisfy him. He claimed that it was a violation of her First Amendment rights even to have to listen. Messrs. Jones and Meyer don’t say so, but it’s a measure of the Pledge’s continuing popularity that most Americans thought Mr. Newdow was nuts.

Ms. Kirkpatrick is a former deputy editor of the Journal’s editorial page.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703735804575536041452086002.html

Berlin Researchers Crack the Ptolemy Code

Mapping Ancient Germania

A 2nd century map of Germania by the scholar Ptolemy has always stumped scholars, who were unable to relate the places depicted to known settlements. Now a team of researchers has cracked the code, revealing that half of Germany’s cities are 1,000 years older than previously thought.

The founding of Rome has been pinpointed to the year 753 BC. For the city of St. Petersburg, records even indicate the precise day the first foundation stone was laid.

Historians don’t have access to this kind of precision when it comes to German cities like Hanover, Kiel or Bad Driburg. The early histories of nearly all the German cities east of the Rhine are obscure, and the places themselves are not mentioned in documents until the Middle Ages. So far, no one has been able to date the founding of these cities.

Our ancestors’ lack of education is to blame for this dearth of knowledge. Germanic tribes certainly didn’t run land survey offices — they couldn’t even write. Inhabitants on this side of the Rhine — the side the Romans never managed to occupy permanently — used only a clumsy system of runes.

According to the Roman historian Tacitus, people here lived in thatched huts and dugout houses, subsisting on barley soup and indulging excessively in dice games. Not much more is known, as there are next to no written records of life within the barbarians’ lands.

Astonishing New Map

That may now be changing. A group of classical philologists, mathematical historians and surveying experts at Berlin Technical University’s Department for Geodesy and Geoinformation Science has produced an astonishing map of central Europe as it was 2,000 years ago.

The researchers appear, for example, to have accurately located three particularly important Germanic sites, known to Ptolemy as “Eburodunum,” “Amisia” and “Luppia.” The new calculations put these sites at the present-day cities of Brno, Fritzlar and Bernburg (Saale), all places already possessing unusually distinguished recorded histories.

An 18th century depiction of Ptolemy

The map shows that both the North and Baltic Seas were known as the “Germanic Ocean” and the Franconian Forest in northern Bavaria was “Sudeti Montes.” The map indicates three “Saxons’ islands” off the Frisian coast in northwestern Germany — known today as Amrum, Föhr and Sylt.

It also shows a large number of cities. The eastern German city that is now called Jena, for example, was called “Bicurgium,” while Essen was “Navalia.” Even the town of Fürstenwalde in eastern Germany appears to have existed 2,000 years ago. Its name then was “Susudata,” a word derived from the Germanic term “susutin,” or “sow’s wallow” — suggesting that the city’s skyline was perhaps less than imposing.

This unusual map draws on information from the mathematician and astronomer Ptolemy, who, in 150 AD, embarked on a project to depict the entire known world. Living in Alexandria, in the shadow of its monumental lighthouse, the ancient scholar drew 26 maps in colored ink on dried animal skins — a Google Earth of the ancient world, if you will.

Rainy Realm of Barbarians

One of these drawings depicts “Germania Magna,” the rainy realm inhabited, according to Roman sources, by rough barbarians whose reproductive drive, they said, was giving rise to an alarming number of tribes.

Ptolemy demonstrated extensive knowledge of this remote area, indicating the locations of mountains, rivers and islands. An index lists 94 “poleis,” or cities, noting their latitude and longitude to within a few minutes of arc.

The map shows settlements as far afield as the Vistula River in present-day Poland, where Burgundians, Goths and Vandals once lived, and mentions the Saxons for the first time. It appears Ptolemy was even familiar with the Swina River, which flows from the Szczecin Lagoon into the Baltic Sea, near the present-day German-Polish border.

It seems surprising that an academic living along the Nile had such detailed knowledge of northern Europe — and it’s certain that Ptolemy never took his own measurements in the Germanic lands. Instead, researchers believe he drew on Roman traders’ travel itineraries, analyzed seafarers’ notes and consulted maps used by Roman legions operating to the north.

Yet the data the ancient geographer used was distorted. Errors of scale crept in as he projected the Earth’s sphere onto the flat plane of a map. Ptolemy believed the northern lands to be narrower and more elongated than they are, and he bent Jutland in Denmark and Schleswig-Holstein in Germany too far to the east.

‘Enchanted Castle’

Ptolemy also failed to accurately connect the different parts of his map. Mistakes worked their way in despite his attempts to locate calibration points to tie together his patchwork of geographical information. The inevitable result was confusion.

Linguists and historians have tried repeatedly to decode the yellowed document — in vain. Among researchers, it came to be known as an “enchanted castle,” a mystery no one could crack. Access to Germany’s prehistory was believed closed off forever.

Now the ancient map appears to be revealing its secrets at last. For the first time, a high-caliber team of experts in the field of surveying and mapping came together in a bid to solve the map’s perplexing puzzle. The Berlin-based team pored over the recalcitrant data for six years, working together to develop a so-called “geodetic deformation analysis” that would help to correct the map’s mistakes.
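
The article doesn’t spell out what the “geodetic deformation analysis” actually involved. The broad idea behind such corrections, though, can be sketched: take calibration points whose Ptolemaic and modern coordinates are both known, fit a best-fit transformation between the two coordinate systems by least squares, and then apply that transformation to unidentified sites. The short Python sketch below illustrates only this general principle; the coordinate pairs and the choice of a simple similarity transform are hypothetical stand-ins, not the Berlin team’s actual method.

    import numpy as np

    # Hypothetical calibration pairs: Ptolemaic (lon, lat) -> modern (lon, lat).
    # A real deformation analysis would use many points and richer, piecewise models.
    ptolemaic = np.array([[31.5, 54.3], [33.0, 52.5], [28.8, 51.2], [35.1, 55.0]])
    modern = np.array([[9.7, 52.4], [12.4, 51.3], [6.8, 51.0], [13.4, 54.1]])

    def fit_similarity(src, dst):
        """Least-squares similarity transform (scale, rotation, shift)."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        a, b = src - src_c, dst - dst_c
        u, s, vt = np.linalg.svd(a.T @ b)  # Kabsch/Umeyama-style fit
        rot = vt.T @ u.T
        if np.linalg.det(rot) < 0:  # guard against an accidental reflection
            vt[-1] *= -1
            rot = vt.T @ u.T
        scale = s.sum() / (a ** 2).sum()
        shift = dst_c - scale * rot @ src_c
        return scale, rot, shift

    scale, rot, shift = fit_similarity(ptolemaic, modern)

    # Re-project an unidentified Ptolemaic site into modern coordinates.
    unknown = np.array([30.2, 53.1])
    print(scale * rot @ unknown + shift)

In practice, a fit like this would be checked against calibration points held out of the fitting step, much as the decoded coordinates described below were checked against known archaeological sites.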

The result is an index that pinpoints the hometowns of the legendary figures Siegfried and Arminius to within 10 to 20 kilometers (6 to 12 miles). A new book, “Germania und die Insel Thule” (“Germania and the Island of Thule”), has just been published about the project. The publisher, Darmstadt-based WBG, calls it a “sensation.”

The Istanbul Connection

The essential question is whether the new data is accurate. Ptolemy’s “Geography” survives only in later copies. The copy so far considered the most authentic is an edition produced around the year 1300 and kept by the Vatican.

But the team of experts in Berlin had the great fortune to be able to refer to a parchment tracked down at Topkapi Palace in Istanbul, Turkey, the former residence of the Ottoman sultans. The document, consisting of unbound sheepskin pages with writing in Roman capital letters, is the oldest edition of Ptolemy’s work ever discovered. A reproduction of this version is due to be published next year.

Using the parchment as a reference and drawing on their own geographical expertise, the academics from Berlin seem to have finally managed to bridge the gap back to the realm of Odin and Valhalla.

‘Lost Places in Our Past’

The new map suggests that minor German towns such as Salzkotten or Lalendorf have existed for at least 2,000 years. “Treva,” located at the confluence of the Elbe and Alster Rivers, was the precursor to Hamburg; Leipzig was known as “Aregelia.” 

All this offers up rather exciting prospects, since it makes half the cities in Germany suddenly 1,000 years older than previously believed. “Our atlas is a treasure map,” team member Andreas Kleineberg says proudly, “and the coordinates lead to lost places in our past.”

Archaeological interest in the map will likely be correspondingly large. Archaeologists’ opinions on the Germanic tribes have varied over the years. In the 19th century, Germany’s early inhabitants were considered brave, wild-bearded savages. The Nazis then transformed them into great heroes, and in the process of coming to terms with its Nazi past, postwar Germany quickly demoted the early Germanic peoples to proto-fascist hicks. The Romans, it was said, had to put up a border wall between themselves and the nuisance Germans before they could finally get some peace.

Bribes and Assassinations

More recent research proves this view to be complete invention. New excavations show that the Germanic groups were anything but isolated — quite the contrary. Veritable hordes of Roman traders crossed the border to deal in amber, pomade, smoked fish and leather with their neighbors. Caesar mentioned that his people traded with the “Suebi,” the Swabians of southwestern Germany. As far back as the first century AD, a Roman knight traveled from Carnuntum, a legion camp near Vienna, to the Baltic Sea coast to trade in amber.

Roman diplomats were also eager to intervene in their neighbors’ affairs, bribing tribal princes, organizing assassinations and supporting their favorites all the way to the throne. Excavations in the state of Lower Saxony in August 2008 even uncovered a battlefield containing the remains of 3rd century weapons. Closer inspection revealed that a Roman legion equipped with catapults had advanced as far as the Harz region in central Germany in a lightning campaign probably intended to punish insubordinate tribes.

These soldiers didn’t have to struggle through wastelands and swamps to get there. “We were able to locate 11 settlements along the highway that started at Moers on the Rhine and reached as far as the Sambia peninsula in the present-day Kaliningrad region,” Kleineberg explains.

Most Germanic sites appear to have been situated along rivers and at road junctions, indicated by the word “furd” included in many place names. “Lupfurdum,” the predecessor to Dresden, for example, was located at a shallow, fordable spot along the Elbe River. Hanover, then “Tulifurdum,” was a place where the Leine River could be crossed.

Researchers believe Ptolemy’s map now allows them to trace the path followed by amber traders from the Vienna area up to Gdansk Bay as well.

Military Work

It was primarily surveyors with the Roman army, which appears to have advanced as far as the Vistula River, who collected information on the barbarians’ lands. Dieter Lelgemann, a geodesist in Berlin, is firmly convinced that “Ptolemy was drawing on work done by military engineers.”

The ancient astronomer indicated cities’ exact locations down to minutes of arc (a minute of latitude corresponds to a little less than two kilometers). These coordinates, once decoded, indeed often turn out to line up precisely with sites where archaeologists have previously found Gothic or Teutonic houses and grand tombs erected for tribal princes.

The evidence suggests that the researchers in Berlin have truly cracked the code. The group appears, for example, to have accurately located three particularly important Germanic sites, known to Ptolemy as “Eburodunum,” “Amisia” and “Luppia.” The new calculations put these sites at the present-day cities of Brno, Fritzlar and Bernburg (Saale), all places already possessing unusually distinguished recorded histories:

  • Waldau, now a part of Bernburg in eastern Germany, was mentioned in a monastic chronicle for the first time in 806, at which point the town was also a military center.
  • Brno in the Czech Republic has offered up a wealth of splendid Germanic archaeological finds and was likely a stop along the amber trading route.
  • Legend has it that Fritzlar in central Germany is the site where the missionary St. Boniface felled the Donar Oak, a sacred symbol to the Germanic Chatti tribe, in 723.

Astonishing Finds

The next question is what these metropolises of early northern Europe looked like. Old maps mark them with massive defensive towers, but this makes little sense, since the Germanic tribes didn’t have stone structures, only wood and clay mortar.

But this doesn’t mean the villages this side of the Alps were unimpressive. On this point too, experts are adjusting their views. A town on the Elbe River called Hitzacker, for example, has yielded up astonishing archaeological finds over the years, such as magnificent tombs filled with silver dishes. This year, archaeologists added houses, a large farmstead and ironworking ovens to their finds here. The area under investigation extends across more than 10 hectares (25 acres).

This settlement too can be found in the new atlas. In Ptolemy’s day, it was called “Leufana,” a center of the Germanic Lombards.

__________

Full article and photos: http://www.spiegel.de/international/zeitgeist/0,1518,720513,00.html

O Captain, Our Captain

George Washington was a genius and a titan, but it was politics, not war, at which he excelled

It was said of the Prussian statesman Otto von Bismarck that he was the subtle son of his feline mother posing all his life as his heavy, portentous father. Similarly, the George Washington who emerges from this truly magnificent life is an acute, consummate politician who posed all his life—with next to no justification—as a bluff but successful soldier. The pose came off because Washington himself so desperately wanted it to be true, but Ron Chernow wrenches back the curtain to reveal the real Washington, a general almost bereft of tactical ability yet a politician full of penetrating strategic insight. In this (English, anti-Revolutionary) reviewer’s estimation, Washington emerges a far greater man.

‘Parson Weems’ Fable’ by Grant Wood (1939)

Six feet tall, immensely strong, with the muscular thighs of a superb horseman—he was an almost obsessive rider to hounds—Washington was “made like a hero,” as Mr. Chernow puts it, despite having a small head in proportion to his frame. Nor did his “weak, breathy voice” and lack of oratorical ability detract from his image as a man of action.

A formidable but unloving mother and the death of his father when Washington was 11 instilled in him a ravening ambition. There was, believes Mr. Chernow, a “constant struggle between his dignified reserve and his underlying feelings,” especially a tempestuous temper. Thomas Jefferson recalled him being “most tremendous in wrath.”

The adjective that best describes Washington’s personality and instincts, ironically enough, is “English.” He had a fair complexion that sunburnt easily; he bought his clothes and most other goods from London merchants; he never affirmed the divinity of Jesus Christ but actively supported his local Anglican churches; he rebuilt Mount Vernon (named after an English admiral) on classically English architectural principles; he was phlegmatic and disliked overfamiliarity; he even played cricket during the dark days of Valley Forge. Mr. Chernow ascribes his break with Britain to the moronic refusal of the British authorities to grant Washington a regular army commission. “His hostility to the mother country,” Mr. Chernow writes, “was a case of thwarted love.” (Though he hints that it might also have involved greed, since Britain was threatening to curtail the distinctly dodgy Ohio land speculation that was enriching Washington in the mid-1770s.)

“The young Washington could be alternatively fawning and assertive, appealingly modest and distressingly pushy,” states Mr. Chernow, due to “the unstoppable force of his ambition.” Certainly everything in his early life conspired perfectly to win him the job of commander in chief of the Continental Army in 1775—excepting actual military ability. He had contracted smallpox early and so was immune from the disease that killed more soldiers than bayonets; he hailed from the most populous colony, Virginia; he had fought enthusiastically in the earliest part of the French and Indian War; he was physically tough after his extensive travels in the West, where he had often come close to death; and he was eerily comfortable with the sound of bullets whistling past his ears. Perhaps best of all, he was a well-connected member of the Virginia House of Burgesses, even though his sharp practices at two of his elections hardly reflect honorably on “the Father of the Country.”

‘Washington Crossing the Delaware’ (1851) by Emanuel Leutze

Much of his social standing in Virginia stemmed from the wealth of his wife, Martha. Although he undoubtedly married for money—he was in love with Sally Fairfax, the wife of a friend—it proved a successful union. Martha spent half of the War of Independence, 53 of its 105 months, alongside her husband. He might not have seen his beloved Mount Vernon once in over six years of campaigning, but he saw plenty of his wife.

Mr. Chernow presents Washington’s battlefield decisions as lackluster at best. The rookie commander in chief took six months to make Gen. William Howe evacuate Boston. Washington was defeated at the Battle of Brooklyn and then shouldn’t have tried to hold New York and Fort Washington, whose 2,837 defenders were sent to prison hulks, after which “he temporarily lost the internal fortitude to obey his own instincts.” He then lost the Battle of White Plains. The famed crossing of the Delaware and subsequent Battle of Trenton was simply a raid in force; only 22 Hessians were killed and 84 wounded, and Washington returned to Pennsylvania immediately afterward. Similarly, he had to retreat straight after the Battle of Princeton. This was nonetheless a genuine victory, unlike the “shattering defeat” at the much larger Battle of Brandywine. Washington understandably ceded Philadelphia to Howe at a time when 1,000 of his men were marching barefoot.

__________

The Life of the Lives

In even the most impressive biographies, a curious bifurcation can appear when the author’s source notes are compared with his acknowledgments. In “Washington: A Life,” Ron Chernow emphasizes his reliance on primary sources. According to his notes, however, other biographers—notably Douglas Southall Freeman and James Thomas Flexner, authors of justifiably renowned multivolume lives of Washington—provided a good deal of the narrative fuel. It does Mr. Chernow no disservice to regard his biography as a culmination of a long biographical tradition that has divested Washington of his marmoreal armor.

The Washington myth begins with Parson Weems’s 1800 tale of the boy who could not tell a lie. How many cherry trees owe their longevity to the parson’s prevarication God only knows, but each subsequent biographer has aimed to reveal the man behind the myth. Freeman sought to portray not the marble man but a self-seeking, hot-tempered youth who only gradually put the concerns of his country first—and who had the personal ambition and political acumen to establish himself as first among the Founders. Indeed, Freeman’s seven volumes (1948-57) debunked the Washington myth so well that Flexner, in writing his four-volume life (1965-72), complained that he had to contend with considerable hostility toward his subject from those who seemed to take an almost perverse delight in detecting the great man’s flaws. But biography swung back to lauding Washington (while conceding his failings)—as in Joseph Ellis’s “His Excellency” (2004) or Richard Brookhiser’s elegant “Founding Father” (1996), which the author calls a “moral biography” in the tradition of Plutarch. Ellis’s remains the finest short life (320 pages) of Washington, while Brookhiser’s study supplies the kind of character analysis difficult to achieve in chronological narratives.

Washington dominated the national scene far longer than Abraham Lincoln or even FDR, and scholars have been loath to take on the whole man within the covers of a single volume. And so we have admirable, if truncated, studies such as Edward Lengel’s “General George Washington” (2005) and Peter R. Henriques’s thematic “Realistic Visionary” (2006).

Flexner’s 1974 redaction has been the standard, but it cannot compete with the vivacity of Ron Chernow’s new narrative, even if the two arrive at many of the same conclusions. For those who want their Washington in even greater detail than Mr. Chernow supplies, Flexner’s multivolume work remains the most readable and authoritative source.

More than a mastery of history and of the facts of Washington’s life is at stake when a writer commits himself to a one-volume life. The author has to write with supreme confidence, which means, for all his research and respect for the historical record, he owes an even greater obligation to the craft of biography itself. In Mr. Chernow’s case, this means daring to dress Washington in a contemporary costume so as to keep 18th-century trappings from distracting the reader from truly seeing the man. In his author’s note, Mr. Chernow announces that he has changed his subject’s grammar, fixing commas to smooth older texts. This is no minor matter, but it works well in a biography that wants most of all to create a living George Washington.

If Mr. Chernow’s source notes reveal how deeply he has immersed himself in previous accounts of Washington, his narrative structure demonstrates how profoundly conscious he is of the way his own decisions shape the image of Washington that emerges in this subtly self-aware biography.

—Carl Rollyson

—Mr. Rollyson is the author of seven biographies and of “Biography: A User’s Guide” (2008).

__________

At Germantown, Washington’s next defeat, there were 150 Americans killed, 520 wounded and 400 captured against British losses of 70 killed, 450 wounded and 15 captured. Yet Washington reported both battles to Congress as something so close to victories that it had a medal struck in his honor. The Battle of Monmouth Court House was at best a draw in terms of numbers killed, and the British effected an unharassed retreat during the night.

The next and last of Washington’s battles was Yorktown. He arrived there after his mistaken attempt to retake New York (he later claimed that the effort was a feint to deceive the enemy, which Mr. Chernow terms a “lie”) and long after Adm. de Grasse’s 28 ships-of-the-line and the French infantry and artillery had bottled up Cornwallis’s 9,000-strong army near the end of the peninsula.

Mr. Chernow notes “that the Yorktown victory had depended upon the French skill at sieges, backed up by French naval superiority.” Small wonder that Cornwallis ordered his sword to be delivered up to the French when he surrendered on Oct. 19, 1781, rather than to Washington. Meanwhile, Washington allowed the captured Loyalists to sail back to New York, whereas the recaptured slaves—including two of his own 300 or so—were sent back to their plantations.

Although Washington is acknowledged as a master of espionage and disinformation, anyone hoping to find him lauded by Mr. Chernow as the figure whose presiding genius won the War of Independence will be disappointed. “With a mind neither quick nor nimble,” he writes, “Washington lacked the gift of spontaneity and found it hard to improvise on the spot.” At best he had “keen powers of judgment rather than originality.”

Yet, crucially for America’s as well as his own future, Washington was endowed with preternatural leadership qualities—primarily the ability to seem confident when privately he felt, as Mr. Chernow puts it, “gloomy, scathing, hot-blooded and pessimistic.” It was “perhaps less his military skills than his character which eclipsed all competitors,” he writes. “Washington was dignified, circumspect and upright, whereas his enemies seemed petty and skulking.” This was true not just of his overt British enemies but also of his many covert detractors inside the Continental Army and in Congress.

During the war, Washington needed to keep the army—which Mr. Chernow describes as “a bizarre mongrel corps that flouted the rules of contemporary warfare”—as a fighting force in the field, no matter how many towns or battles were lost. In this he succeeded triumphantly, despite venereal disease among the troops, a dearth of gunpowder, mass desertions, treachery (even from some of his own bodyguards), and truly monstrous winters. “Whatever his failings as a general,” writes Mr. Chernow, “Washington’s moral force held the shaky army together.”

It is worth considering whether the “windswept plateau” of Valley Forge, where 2,000 of Washington’s men died of diseases compounded by malnutrition, was really the best place to spend the winter of 1777–78, but it seems that the only reason there was no mutiny in that “scene of harrowing misery” was Washington’s sheer force of personality. (And perhaps the fact that any man caught stealing food was given 600 lashes.)

A 19th-century etching of the ragged Continental Army marching to Valley Forge

In the end, nothing can detract from the untarnishable glory of Washington’s having been the commander in chief throughout the war, in which the largest expeditionary force of the 18th century came to total grief.

American Revolutionary politics was a contact sport, and in the course of this 817-page adventure story one stands astonished that Washington never had to fight a duel. Libels were constant. Thomas Jefferson, John Adams, Benjamin Rush, Charles Lee and many more—even occasionally his generally supportive aides Alexander Hamilton and Joseph Reed—all decried him, though most came to regret it.

They all took the bluff soldier pose at face value and missed the brilliant politician lurking beneath. By the end of this well-researched, well-written and absolutely definitive biography, readers will conclude that George Washington was indeed a genius and a titan, but for very different reasons than the world thought at the time.

Mr. Roberts is the author of “Masters and Commanders: How Four Titans Won the War in the West” (HarperCollins).

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703882404575520061512222160.html

A Gangster Goes to War

In New York right after the turn of the 20th century, the baddest man in the whole downtown was a thug named Monk Eastman, who controlled a gang of 2,000 Jewish hoodlums on Manhattan’s Lower East Side.

His was among the most scandalous criminal enterprises in American history, according to biographer Neil Hanson in “Monk Eastman,” the story of, as the subtitle has it, “The Gangster Who Became a War Hero.” Mr. Hanson fashions Lower Manhattan into a mirror of hell that would make even Damon Runyon recoil. The city’s teeming tenements were “anthills of humanity,” the author says, and rightly so: They packed in 290,000 people to the square mile. By contrast, “the comparable figure for the worst of the London slums was 175,000.”

“Whole buildings,” we are told, “seemed to sweat as condensation formed on every wall, and the stench—always terrible—even in the depths of winter frosts—reached new heights of toxicity, flowing up from the sewers, privies, and yards, and filling the halls, stairways and airshafts like a rising tide.” Another 20 pages of this and it becomes hard to resist the impulse to go and wash one’s hands. But oh, the crime!

The area was divided into territories, just like in “West Side Story,” except that these were not hubcap-stealing Puerto Ricans in Spanish Harlem settling their differences with the occasional knife-fight; no siree, the downtown guys were seriously into guns and shoot-outs that consumed entire city blocks, slaughtering the innocent and the guilty with equal magnanimity.

The western part of Lower Manhattan was run by the Irish, while the Lower East Side was divided between the Italians and the Jews. The gangs had names like the Five Pointers, Yakey Yakes, Gophers, Whyos—but the most fearsome gang of all was a mostly Jewish mob called the Eastmans, after its leader, Monk, who had climbed the ladder of thuggish respectability by a combination of low cunning and epic brutishness.

By his own admission, Monk—who was not Jewish and seems to have been of English extraction—liked “to beat up a guy every once in a while. It keeps my hand in.” His given name was Edward; “Monk” came from his ability to “climb like a monkey” while pulling off second-story jobs.

Killings were frequent in the gang’s activities but generally incidental. Assault was more intentional: “Monk and his henchmen put so many people in the hospital that ambulance drivers started calling the accident ward at Bellevue ‘The Monk Eastman Pavilion.’”

An especially noxious feature of the gangs, including Eastman’s, was their symbiotic relationship with New York’s octopodian political machine. Tammany stayed in power through the political payoff; Monk Eastman and the other gangsters stayed in power by earning protection from the long arm of the law. Eastman was arrested more than 30 times at just one police precinct, but the cases were routinely dismissed by corrupt Tammany judges or police officials.

In exchange for the get-out-of-jail help, gang ruffians provided muscle at the polls. “Repeaters” was the name given to hoodlums who voted multiple times, and “sluggers” were toughs who stuffed or stole ballot boxes—or intimidated, or even assaulted, voters at the polls. It was said that Monk Eastman alone was good for 10,000 votes.

Eastman’s luck finally ran out in 1904, when an honest jury convicted him of assault. He had attempted to mug a well-dressed, drunken young man—who, it turned out, was being followed by Pinkerton detectives at the request of his worried parents. A disgraceful running gunfight ensued—coming to an end only when an alert policeman applied a nightstick to Eastman’s head. His increasing notoriety had cooled Tammany’s interest in protecting him from prosecution.

The five years that Eastman spent in Sing Sing did not discourage his criminal inclinations; in 1915 he was arrested for stealing a car and sentenced to a two-year stretch. The incarceration ended in September 1917, just as America’s involvement in World War I was intensifying. Out of prison for 10 days, with his criminal enterprises no longer in operation or promising to revive, Monk presented himself at the Army recruiting office in Brooklyn and offered his services. He also lied about his age, saying he was 39 when he was really 43.

Pvt. Monk Eastman, according to the bare-bones Army records, turned into a proper soldier and shipped out with the 106th New York Regiment as part of the Army’s 27th Infantry Division. His comrades called him “Pop.”

The division, 28,000-strong, landed in France on May 25, 1918, to confront the beaten but still dangerous German army on the Western Front. Two months earlier the Germans had launched and lost their huge last-ditch offensive, designed to defeat the Allies before the Americans arrived in force. Now the Allied commander smelled blood and decided the time was ripe to bring the war to a successful conclusion.

Thus, after a few scant weeks of training, Monk Eastman and the New Yorkers were abruptly pitchforked into the great final push of the conflict—an attack on Germany’s Hindenburg Line. Here we see the awful brutality of the war, the bone-by-bone straining of thousands of men for yards, even feet, in the horrid sewer that was the Flanders Front along the Franco-Belgian border.

It was said that you could smell the battlefield miles before you could see it—the ghastly smell of death; earth churned into slime by millions of artillery shells; the lingering rotten-egg stench of poison gases; the decaying carcasses of horses, mules and, yes, men; the blended odors of the cooking and the excrement of a million soldiers packed into a battle area just a few miles wide. Veterans later tried to describe the smell but no one could, and the place made even the aroma of New York slums seem respectable.

Eastman’s battalion, according to one of its members, “jumped off on time” when the battle began but then “fairly melted away” amid the machine gun and artillery fire. “We were up against the [Hindenburg] Line itself,” said the battalion major, “and a lousy, dirty, dangerous place it was.” After two weeks of nonstop fighting, they had the Germans on the run, but at a terrible price: Nearly 80% of the regiment were casualties.

Some idea of the ferocity and tenacity of the battle can be taken from these instructions to a machine-gun company: “(1) This position will be held and the section will remain here until relieved. (2) The enemy cannot be allowed to interfere with this program. (3) If the gun team cannot remain here alive, it will remain here dead, but in any case, it will remain here. (4) Should any man, through shell shock or any other cause, attempt to surrender, he will remain here—dead. (5) Should the gun be put out of action, the team will use rifles, revolvers, Mills grenades, or other novelties. (6) Finally, the position, as stated, will be held.”

Then the war was suddenly over, and the men were shipped home and mustered out of the service. The survivors mostly went back to their former professions, including Monk Eastman, who had performed admirably for his country as a soldier.

At first he appeared to go straight and secured a petition from his army outfit testifying to his “exceptional record in the army overseas” and “utmost courage and devotion to duty.” His rights of citizenship were restored by New York Gov. Al Smith, and Monk found work as an automobile mechanic, though how he obtained the skills we are not told. Maybe stealing cars also entailed learning how to make them run. But word soon had it that he was again up to his old tricks, which had their consequences.

Since most career criminals don’t write their memoirs, it is hard to get a full picture of the man, save from spare police and military records, a few anecdotal tales told by contemporaries, and newspaper stories of the tabloid type, which tend to be suspect.

The book’s claim that Eastman was a war hero might be a bit of a stretch, since he was neither decorated nor promoted, despite the high casualties in his unit. A case can be made, though, that anybody, just by going to the Western Front during the war—and staying there—was by definition a hero.

Thus while the story of Monk Eastman is exquisitely rich with the gang life of New York and the perils of World War I, we’re left with the impression that, in the end, the man was a puzzle, even to himself.

Mr. Groom’s books include “A Storm in Flanders: The Ypres Salient, 1914-1918” and the novel “Forrest Gump.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704358904575477632752858848.html

The Experiments in Guatemala

A medical historian’s discovery that American researchers in the 1940s deliberately infected hundreds of people in Guatemala with syphilis or gonorrhea has provoked outrage in both countries. President Obama and Secretary of State Hillary Rodham Clinton rightly apologized to President Álvaro Colom of Guatemala. More will be needed to make amends, beginning with a planned investigation of this appalling breach of medical ethics.

The experiments were brought to light by Susan Reverby, a professor at Wellesley College, who found unpublished records in the archives at the University of Pittsburgh. The studies were led by Dr. John C. Cutler, an internationally known expert on sexually transmitted diseases and a former assistant surgeon general.

From 1946 to 1948, American public health doctors under his command infected nearly 700 Guatemalans — prisoners, mental patients and soldiers — without their permission or knowledge. Anyone who became infected was given penicillin and presumed to be cured, although records suggest that many were not adequately treated.

The aim of the research was to test whether penicillin could prevent the transmission of syphilis, whether better blood tests for the disease could be developed, and what dosages could cure syphilis. That cannot justify experimenting on human beings without their consent.

Although the American government, which financed the research, bears the chief responsibility, the studies were carried out in collaboration with Guatemala’s top venereal disease expert and several Guatemalan ministries and institutions.

Top health officials insist that current rules governing federally financed research would prohibit such experiments. They require that subjects be fully told of the risks and give their informed consent. Institutional review boards must approve the research.

The Obama administration has said it will ask the Institute of Medicine to investigate the experiments; a presidential bioethics commission will suggest methods to ensure that all human research around the globe meets rigorous ethical standards. The Guatemalan government plans to conduct its own investigation. The United States should also pay reparations to any survivors who can be found and compensate Guatemala by paying for ethical health projects there.

Editorial, New York Times

__________

Full article: http://www.nytimes.com/2010/10/08/opinion/08fri3.html

A Wealth Of Ideas

A mind that ranged over politics, law and ethics—and produced the definitive defense of free markets.

Having dined with Adam Smith on a number of occasions, Samuel Johnson once described him “as dull a dog as he had ever met with.” Smith’s biographers might be inclined to agree. The most celebrated political economist in history led a remarkably quiet life. Born in the sleepy Scottish port of Kirkcaldy in 1723, he was raised by his widowed mother and lived with her for much of his life. He studied at the University of Glasgow (which he loved) and at Oxford (which he loathed). Only once in his life did he travel outside of Britain. He wrote few letters and burned his personal papers shortly before his death in 1790. Even his appearance is a mystery. The only contemporary likenesses of him are two small, carved medallions. We know Adam Smith as we know the ancients, in colorless stone.

It is a measure of Nicholas Phillipson’s gifts as a writer that he has, from this unpromising material, produced a fascinating book. Mr. Phillipson is the world’s leading historian of the Scottish Enlightenment. His “Adam Smith: An Enlightened Life” animates Smith’s prosaic personal history with an account of the eventful times through which he lived and the revolutionary ideas that inspired him. Adam Smith finally has the biography that he deserves, and it could not be more timely.

Smith’s fame, of course, was made by the “Wealth of Nations.” The book appeared in 1776, a good year in the annals of human liberty. Its teachings are so fundamental to modern economics that familiarity often dulls our appreciation of its brilliance.

Smith constructed his masterpiece on a few ingenious insights into the workings of a commercial economy. Where his contemporaries calculated national wealth in terms of gold or agricultural output, Smith measured “opulence” by the flow of consumable goods. The division of labor would accelerate the production of goods, he argued, and render manufacture ever more efficient. The division of labor itself was best determined by markets of self-interested individuals. Markets, in turn, operated best when freed of regulation and interference, thus allowing the value and price of both commodities and labor to align themselves.

Each conclusion led inexorably to the next. Smith relentlessly vindicated the value of free markets and of the individual economic freedom that made markets work. As a manifesto against protectionism, economic planning and grasping rentier behavior, the “Wealth of Nations” has never been bettered. Still, the book is often read in arid isolation, as merely a prophetic anticipation of more modern economic theory. Mr. Phillipson, by contrast, vividly describes the historical circumstances that shaped the “Wealth of Nations.”

Smith’s favorable account of luxury and consumption spoke for Britain’s increasingly affluent middle class and its delight in the dawning age of manufactured gadgets. His attack on monopolies directly targeted the era’s crony capitalists, notably the oligarchic tobacco kings of Glasgow. His rejection of protectionism was partly an assault on the British Empire itself, which was struggling to keep its burgeoning American colonies pinned under the imperial thumb.

Mr. Phillipson also provides a lucid account of Smith’s broader philosophical ambitions, which were much more expansive than the “Wealth of Nations” alone might suggest. Smith’s famed lectures and his other great book, “The Theory of Moral Sentiments” of 1759, ranged widely over politics, law, ethics and aesthetics. Inspired by his friend, the skeptic David Hume, Smith swept aside all timeless or divine notions of moral and political order. In his view, society emerged from the historical experience of a needy species driven to create conditions in which property, affection and opinions alike could be stably exchanged. Manners and morals, like goods, thus had an “economy.” The material and moral economies were, indeed, linked, in that a rising material prosperity helped to encourage civility and taste. Mr. Phillipson reconstructs Smith’s intricate system with erudition and imagination, often from student notes of Smith’s long-lost lectures, which he had delivered in both Glasgow and Edinburgh.

Modern conservatives are fond of claiming Smith as an intellectual forebear, but they are only partly right to do so. Like Hume, Smith was a religious skeptic. Morality, to him, was not natural or divine law but a set of mere conventions, a law we give to ourselves. Unlike Edmund Burke, who knew him, Smith scorned the European aristocracy and had little time for English constitutional traditions.

Nevertheless, Smith’s conservative side does emerge in Mr. Phillipson’s biography. As a reformer, Smith valued prudence and gradualism. Statesmen, he wrote, should establish not “the best system of laws” but the “best that the people can bear.” He found the French taste for revolutionary cataclysm repellent. And if Smith’s economic ideas affronted the paternalism of the traditional Tory party, they were eventually taken up by William Pitt the Younger, the late-18th-century prime minister who is now seen as one of the fathers of free-market conservatism.

Smith’s was a complex legacy, and in reading about it one is struck by its uncanny relevance. When the “Wealth of Nations” appeared, Britain staggered under massive war spending and a colossal national debt. Bad loans blighted banks across the country. Several had collapsed, leaving their investors ruined. Gold bugs abounded. In the face of international competition, well-connected manufacturing interests clamored for protective tariffs. The times called for Adam Smith, and his theories worked to stabilize and liberate the British economy as it entered the industrial age. If we need a reminder of his achievements, and of late it appears that we may, Mr. Phillipson has given us a superlative one.

Mr. Collins, a professor of history at Queen’s University in Kingston, Ontario, is currently a visiting fellow at Cambridge University.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704654004575518151286879946.html

New England’s hidden history

More than we like to think, the North was built on slavery.

In the year 1755, a black slave named Mark Codman plotted to kill his abusive master. A God-fearing man, Codman had resolved to use poison, reasoning that if he could kill without shedding blood, it would be no sin. Arsenic in hand, he and two female slaves poisoned the tea and porridge of John Codman repeatedly. The plan worked — but like so many stories of slave rebellion, this one ended in brutal death for the slaves as well. After a trial by jury, Mark Codman was hanged, tarred, and then suspended in a metal gibbet on the main road to town, where his body remained for more than 20 years.

It sounds like a classic account of Southern slavery. But Codman’s body didn’t hang in Savannah, Ga.; it hung in present-day Somerville, Mass. And the reason we know just how long Mark the slave was left on view is that Paul Revere passed it on his midnight ride. In a fleeting mention from Revere’s account, the horseman described galloping past “Charlestown Neck, and got nearly opposite where Mark was hung in chains.”

When it comes to slavery, the story that New England has long told itself goes like this: Slavery happened in the South, and it ended thanks to the North. Maybe we had a little slavery, early on. But it wasn’t real slavery. We never had many slaves, and the ones we did have were practically family. We let them marry, we taught them to read, and soon enough, we freed them. New England is the home of abolitionists and underground railroads. In the story of slavery — and by extension, the story of race and racism in modern-day America — we’re the heroes. Aren’t we?

As the nation prepares to mark the 150th anniversary of the American Civil War in 2011, with commemorations that reinforce the North/South divide, researchers are offering uncomfortable answers to that question, unearthing more and more of the hidden stories of New England slavery — its brutality, its staying power, and its silent presence in the very places that have become synonymous with freedom. With the markers of slavery forgotten even as they lurk beneath our feet — from graveyards to historic homes, from Lexington and Concord to the halls of Harvard University — historians say it is time to radically rewrite America’s slavery story to include its buried history in New England.

“The story of slavery in New England is like a landscape that you learn to see,” said Anne Farrow, who co-wrote “Complicity: How the North Promoted, Prolonged, and Profited From Slavery” and who is researching a new book about slavery and memory. “Once you begin to see these great seaports and these great historic houses, everywhere you look, you can follow it back to the agricultural trade of the West Indies, to the trade of bodies in Africa, to the unpaid labor of black people.”

It was the 1991 discovery of an African burial ground in New York City that first revived the study of Northern slavery. Since then, fueled by educators, preservationists, and others, momentum has been building to recognize histories hidden in plain sight. Last year, Connecticut became the first New England state to formally apologize for slavery. In classrooms across the country, popularity has soared for educational programs on New England slavery designed at Brown University. In February, Emory University will hold a major conference on the role slavery’s profits played in establishing American colleges and universities, including in New England. And in Brookline, Mass., a program called Hidden Brookline is designing a virtual walking tour to illuminate its little-known slavery history: At one time, nearly half the town’s land was held by slave owners.

“What people need to understand is that, here in the North, while there were not the large plantations of the South or the Caribbean islands, there were families who owned slaves,” said Stephen Bressler, director of Brookline’s Human Relations-Youth Resources Commission. “There were businesses actively involved in the slave trade, either directly in the importation or selling of slaves on our shores, or in the shipbuilding, insurance, manufacturing of shackles, processing of sugar into rum, and so on. Slavery was a major stimulus to the Northern economy.”

Turning over the stones to find those histories isn’t just a matter of correcting the record, he and others say. It’s crucial to our understanding of the New England we live in now.

“The absolute amnesia about slavery here on the one hand, and the gradualness of slavery ending on the other, work together to make race a very distinctive thing in New England,” said Joanne Pope Melish, who teaches history at the University of Kentucky and wrote the book “Disowning Slavery: Gradual Emancipation and ‘Race’ in New England, 1780-1860.” “If you have obliterated the historical memory of actual slavery — because we’re the free states, right? — that makes it possible to turn around and look at a population that is disproportionately poor and say, it must be their own inferiority. That is where New England’s particular brand of racism comes from.”

Dismantling the myths of slavery doesn’t mean ignoring New England’s role in ending it. In the 1830s and ’40s, an entire network of white Connecticut abolitionists emerged to house, feed, clothe, and aid in the legal defense of Africans from the slave ship Amistad, a legendary case that went all the way to the US Supreme Court and helped mobilize the fight against slavery. Perhaps nowhere were abolition leaders more diehard than in Massachusetts: Pacifist William Lloyd Garrison and writer Henry David Thoreau were engines of the antislavery movement. Thoreau famously refused to pay his taxes in protest of slavery, part of a philosophy of civil disobedience that would later influence Martin Luther King Jr. But Thoreau was tame compared to Garrison, a flame-thrower known for shocking audiences. Founder of the New England Anti-Slavery Society and the newspaper The Liberator, Garrison once burned a copy of the US Constitution at a July Fourth rally, calling it “a covenant with death.” His cry for total, immediate emancipation made him a target of death threats and kept the slavery question at a perpetual boil, fueling the moral argument that, in time, would come to frame the Civil War.

But to focus on crusaders like Garrison is to ignore ugly truths about how unwillingly New England as a whole turned the page on slavery. Across the region, scholars have found, slavery here died a painfully gradual death, with emancipation laws and judicial rulings that either were unclear, poorly enforced, or written with provisions that kept slaves and the children born to them in bondage for years.

Meanwhile, whites who had trained slaves to do skilled work refused to hire the same blacks who were now free, driving an emerging class of skilled workers back to the lowest rungs of unskilled labor. Many whites, driven by reward money and racial hatred, continued to capture and return runaway Southern slaves; some even sent free New England blacks south, knowing no questions about identity would be asked at the other end. And as surely as there was abolition, there was “bobalition” — the mocking name given to graphic, racist broadsides printed through the 1830s, ridiculing free blacks with characters like Cezar Blubberlip and Mungo Mufflechops. Plastered around Boston, the posters had a subtext that seemed to boil down to this: Who do these people think they are? Citizens?

“Is Garrison important? Yes. Is it dangerous to be an abolitionist at that time? Absolutely,” said Melish. “What is conveniently forgotten is the number of people making a living snagging free black people in a dark alley and shipping them south.”

Growing up in Lincoln, Mass., historian Elise Lemire vividly remembers learning of the horrors of a slaveocracy far, far away. “You knew, for example, that families were split up, that people were broken psychologically and kept compliant by the fear of your husband or wife being sold away, or your children being sold away,” said Lemire, author of the 2009 book “Black Walden,” who became fascinated with former slaves banished to squatter communities in Walden Woods.

As she peeled back the layers, Lemire discovered a history rarely seen by the generations of tourists and schoolchildren who have learned to see Concord as a hotbed of antislavery activism. “Slaves [here] were split up in the same way,” she said. “You didn’t have any rights over your children. Slave children were given away all the time, sometimes when they were very young.”

In Lemire’s Concord, slave owners once filled half of town government seats, and in one episode town residents rose up to chase down a runaway slave. Some women remained enslaved into the 1820s, more than 30 years after census figures recorded no existing slaves in Massachusetts. According to one account, a former slave named Brister Freeman, for whom Brister’s Hill in Walden Woods is named, was locked inside a slaughterhouse shed with an enraged bull as his white tormentors laughed outside the door. And in Concord, Lemire argues, black families were not so much liberated as they were abandoned to their freedom, released by masters increasingly fearful their slaves would side with the British enemy. With freedom, she said, came immediate poverty: Blacks were forced to squat on small plots of the town’s least arable land, and eventually pushed out of Concord altogether — a precursor to the geographic segregation that continues to divide black and white in New England.

“This may be the birthplace of a certain kind of liberty,” Lemire said, “but Concord was a slave town. That’s what it was.”

If Concord was a slave town, historians say, Connecticut was a slave state. It didn’t abolish slavery until 1848, a little more than a decade before the Civil War. (A judge’s ruling ended legal slavery in Massachusetts in 1783, though the date is still hotly debated by historians.) It’s a history Connecticut author and former Hartford Courant journalist Anne Farrow knew nothing about — until she got drawn into an assignment to find the untold story of one local slave.

Once she started pulling the thread, Farrow said, countless histories unfurled: accounts of thousand-acre slave plantations and a livestock industry that bred the horses that turned the giant turnstiles of West Indian sugar mills. Each discovery punctured another slavery myth. “A mentor of mine has said New England really democratized slavery,” said Farrow. “Where in the South a few people owned so many slaves, here in the North, many people owned a few. There was a widespread ownership of black people.”

Perhaps no New England colony or state profited more from the unpaid labor of blacks than Rhode Island: Following the Revolution, scholars estimate, slave traders in the tiny Ocean State controlled between two-thirds and 90 percent of America’s trade in enslaved Africans. On the rolling farms of Narragansett, nearly one-third of the population was black — a proportion not much different from Southern plantations. In 2003, the push to reckon with that legacy hit a turning point when Brown University, led by its first African-American president, launched a highly controversial effort to account for its ties to Rhode Island’s slave trade. Today, that ongoing effort includes the CHOICES program, an education initiative whose curriculum on New England slavery is now taught in over 2,000 classrooms.

As Brown’s decision made national headlines, Katrina Browne, a Boston filmmaker, was on a more private journey through New England slavery, tracing her bloodlines back to her Rhode Island forebears, the DeWolf family. As it turned out, the DeWolfs were the biggest slave-trading family in the nation’s biggest slave-trading state. Browne’s journey, which she chronicled in the acclaimed documentary “Traces of the Trade: A Story from the Deep North,” led her to a trove of records of the family’s business at every point in slavery’s triangle trade. Interspersed among the canceled checks and ship logs, Browne said, she caught glimpses into everyday life under slavery, like the diary entry by an overseer in Cuba that began, “I hit my first Negro today for laughing at prayers.” Today, Browne runs the Tracing Center, a nonprofit to foster education about the North’s complicity in slavery.

“I recently picked up a middle school textbook at an independent school in Philadelphia, and it had sub-chapter headings for the Colonial period that said ‘New England,’ and then ‘The South and Slavery,’ ” said Browne, who has trained park rangers to talk about Northern complicity in tours of sites like Philadelphia’s Liberty Bell. “Since learning about my family and the whole North’s role in slavery, I now consider these things to be my problem in a way that I didn’t before.”

If New England’s amnesia has been pervasive, it has also been willful, argues C.S. Manegold, author of the new book “Ten Hills Farm: The Forgotten History of Slavery in the North.” That’s because many of slavery’s markers aren’t hidden or buried. In New England, one need look no further than a symbol that graces welcome mats, door knockers, bedposts, and all manner of household decor: the pineapple. That exotic fruit, said Manegold, is as intertwined with slavery as the Confederate flag: When New England ships came to port, captains would impale pineapples on a fence post, a sign to everyone that they were home and open for business, bearing the bounty of slave labor and sometimes slaves themselves.

“It’s a symbol everyone knows the benign version of — the happy story that pineapples signify hospitality and welcome,” said Manegold, whose book centers on five generations of slaveholders tied to one Colonial era estate, the Royall House and Slave Quarters in Medford, Mass., now a museum. The house features two carved pineapples at its gateposts.

By Manegold’s account, pineapples were just the beginning at this particular Massachusetts farm: Generation after generation, history at the Royall House collides with myths of freedom in New England — starting with one of the most mythical figures of all, John Winthrop. Author of the celebrated “City Upon a Hill” sermon and first governor of the Massachusetts Bay Colony, Winthrop not only owned slaves at Ten Hills Farm, but in 1641, he helped pass one of the first laws making chattel slavery legal in North America.

When the house passed to the Royalls, Manegold said, it entered a family line whose massive fortune came from slave plantations in Antigua. Members of the Royall family would eventually give land and money that helped establish Harvard Law School. To this day, the law school bears a seal borrowed from the Royall family crest, and for years the Royall Professorship of Law remained the school’s most prestigious faculty post, almost always occupied by the law school dean. It wasn’t until 2003 that an incoming dean — now Supreme Court Justice Elena Kagan — quietly turned the title down.

Kagan didn’t publicly explain her decision. But her actions speak to something Manegold and others say could happen more broadly: not just inserting footnotes to New England heritage tours and history books, but truly recasting that heritage in all its painful complexity.

“In Concord,” Lemire said, “the Minutemen clashed with the British at the Old North Bridge within sight of a man enslaved in the local minister’s house. The fact that there was slavery in the town that helped birth American liberty doesn’t mean we shouldn’t celebrate the sacrifices made by the Minutemen. But it does mean New England has to catch up with the rest of the country, in much of which residents have already wrestled with their dual legacies of freedom and slavery.”

Francie Latour is an associate editor at Wellesley magazine and a former Globe reporter.

____________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/09/26/new_englands_hidden_history/

The Best of Enemies

Leaked documents, dirty tricks, nasty rumors: Richard Nixon and Jack Anderson deserved each other.

The Cliffs Notes version of The Fall of Richard Nixon is straightforward enough: The most corrupt president in American history was brought down by courageous young newspaper muckrakers who rescued the republic. But there is a supple revisionist narrative that adds more than a few layers of complexity to the established account and makes the tale much more interesting.

In “Poisoning the Press,” Mark Feldstein tells the story of the long, feral struggle between two poor, driven boys born 30 miles apart in the West who grew up to be Richard Nixon and Jack Anderson. A protégé of the columnist Drew Pearson and a devout Mormon, Anderson tormented Nixon, a fighting (non-pacifist) Quaker, throughout his 30-year political career and, Mr. Feldstein says, taught Nixon some of the dirty tricks that would later destroy his presidency.

In Mr. Feldstein’s telling, it’s hard to decide whether Nixon or Anderson was the greater rogue. For every one of Nixon’s well-known crimes against the republic, there turns out to be an equal and opposite crime against journalism by Anderson.

Dirty reporting tricks were Anderson’s M.O. He bugged the hotel room of the notorious Bernard Goldfine, a Boston textile manufacturer who had bestowed an Oriental rug and a vicuna overcoat on President Eisenhower’s starchy chief of staff, Sherman Adams. Anderson even rooted through J. Edgar Hoover’s garbage searching for evidence that the G-man and his handsome deputy, Clyde Tolson, were lovers but found nothing more provocative than empty bottles of the antacid Gelusil. Anderson splashed stolen documents in his syndicated column, “Washington Merry-Go-Round,” routinely made false accusations of homosexuality and drunkenness—and took payoffs from a mob-connected fixer.

Starting as a legman for the patrician, ruthless Pearson and then on his own, Anderson drew first blood on most of the scandals that tainted Nixon almost from the start. There was the infamous $205,000 “loan”—$1.6 million today—that was passed to Nixon through his squirrelly brother Donald by Howard Hughes right after Nixon was re-elected as Eisenhower’s vice president in 1956. In just the first three months of 1972, Anderson broke the stories of the Nixon administration’s secret support for Pakistan on the eve of the India-Pakistan war; an additional $100,000 payoff from Hughes; the fixing of an antitrust case against the conglomerate ITT in return for a $400,000 pledge to underwrite the 1972 Republican National Convention; and the CIA plot against Salvador Allende, the Marxist president of Chile. Stolen or leaked secret documents fueled each Anderson scoop.

Nixon absorbed the lessons and fought back. He tried to insinuate a spy into Anderson’s staff, arranged for counterfeit secret documents to be slipped to the columnist, even had the CIA dog him in an episode straight from the Keystone Kops. Nothing worked, and the president became so enraged by Anderson’s relentless snooping that he uttered his own version of Henry II’s famous death sentence for Thomas à Becket: “Will nobody rid me of this turbulent priest?”

After a hideaway chat with Nixon, his consigliere, Chuck Colson, concluded that it was imperative “to stop Anderson at all costs.” Soon plumbers E. Howard Hunt and G. Gordon Liddy were meeting with a CIA poison expert to explore slipping LSD to Anderson so that he would trip out while driving and die in a car crash. According to Mr. Feldstein, Liddy even volunteered to stab Anderson to death or break his neck in what would look like a street mugging before the hit was finally shelved as impractical.

Cataloging Nixon’s villainies, Mr. Feldstein, a TV newsman turned academic, mines fresh treasures from the president’s trove of secret Oval Office tapes. Nixon is even more foul-mouthed than we remember—and weirder.

At one point, the president orders his men to find out whether Anderson is the gay lover of a Navy yeoman who leaked to Anderson the secret documents proving the U.S. tilt to Pakistan. After all, Nixon says, Whittaker Chambers and Alger Hiss were romantically entangled: “They were both—that way.” Earlier he lectures his aides on how “homosexuality destroyed” Greece and Rome. “Aristotle was a homo. We all know that,” he explains. “So was Socrates. You know what happened to the Romans? The last six emperors were fags.”

Nixon’s path to Watergate was predictable, Mr. Feldstein suggests, given his character and his conviction that he had to defeat his implacable political and press enemies by any means. During the 1968 campaign, Pearson and Anderson prophesied that a President Nixon would “revert to type,” create “dossiers on all potential rivals” and direct “personal goons” to do his dirty work.

They were right, but Jack Anderson, the crack sleuth, blew the biggest Nixon scandal of all. He had a tip about a Republican espionage operation against the Democratic National Committee offices in the Watergate complex. The columnist even ran into one of the burglars—a Cuban he knew—at the Washington airport a few hours before the break-in. But Anderson didn’t work the tip hard and didn’t pursue the Cuban, even though the man, before dashing off, blurted that he was on “top secret” business.

So a couple of young reporters named Woodward and Bernstein cultivated “Deep Throat” and carried Anderson’s crusade to Nixon’s doom. Seconding W. Joseph Campbell’s recent book, “Getting It Wrong,” Mr. Feldstein astutely notes: “All mythmaking to the contrary, Watergate journalism was largely derivative, reporting on investigations that were already under way before news outlets began covering them.”

He nails the baleful Nixon-Anderson legacy, too. “The ghosts of Richard Nixon and Jack Anderson continue to haunt Washington long after their departure,” he concludes. “The poisoning of politics and the press that marked their careers has tainted governance and public discourse ever since.”

Mr. Kosner is the author of “It’s News to Me,” a memoir of his career as the editor of Newsweek, New York magazine, Esquire and the New York Daily News.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704198004575311053737866396.html

Don’t care much about history

A QUICK-ON-THE-DRAW David Brooks replies in this morning’s Times to Arthur Brooks and Paul Ryan’s op-ed in yesterday’s Wall Street Journal. As Mr Brooks has it, the trouble with Messrs Brooks and Ryan’s line of thinking is that it’s bad history:

[T]he story Republicans are telling each other, which Ryan and Brooks have reinforced, is an oversimplified version of American history, with dangerous implications.

The fact is, the American story is not just the story of limited governments; it is the story of limited but energetic governments that used aggressive federal power to promote growth and social mobility. George Washington used industrial policy, trade policy and federal research dollars to build a manufacturing economy alongside the agricultural one. The Whig Party used federal dollars to promote a development project called the American System.

Abraham Lincoln supported state-sponsored banks to encourage development, lavish infrastructure projects, increased spending on public education. Franklin Roosevelt provided basic security so people were freer to move and dare. The Republican sponsors of welfare reform increased regulations and government spending — demanding work in exchange for dollars.

I know I’m simultaneously pushing a boulder uphill and spitting into the wind here, but who cares? Well, everybody cares. But why should everybody care? Mr Brooks of the Times is correct that American history does not correspond to fusionist conservative mytho-history. Pointing this out is indeed a useful corrective to the rhetoric of “restoration” deployed by the likes of Glenn Beck. However, if “limited but energetic” government is a good idea, it’s a good idea and can be defended on its own terms. Leave dead presidents out of it! Slavery doesn’t look better because George Washington was into it. Suspending habeas corpus isn’t a crackerjack move because Honest Abe made it. Attempting to crush judicial branch checks on executive power isn’t smart because FDR gave it a go. So what does it matter that the Whigs or the Pilgrims or the ancient Iroquois nation had their own boondoggles? It doesn’t!

Yes, as David Brooks tells us at least once a month, we human beings are not deracinated logic chimps. But shouldn’t we be a little more deracinated, a little more logical? Shouldn’t we try to combat our degraded argumentum ad verecundiam culture and elevate the quality of public deliberation? This is Logic 101 stuff. Isn’t it sad that we expect more of our college sophomores than of our think-tank presidents and prestige columnists? Ingsoc’s “Who controls the past controls the future” should not be treated as a political pro tip. It is a regrettable half-truth grounded in mental barbarism which civilised people aspire to falsify. 

__________

Full article and photo: http://www.economist.com/blogs/democracyinamerica/2010/09/dead_president_fallacy

People of the book

The true history of the Koran in America

Nine years later, we are still haunted by Sept. 11, and in some ways it’s getting worse. All summer, a shrill debate over whether to build a mosque near the Ground Zero site was fueled by pundits on the right, who drummed up a chorus of invective that made it impossible to focus on the modest facts of the case. Then in the days leading up to the 11th, a church in Gainesville, Fla., sparked a firestorm — almost literally — by inviting Christians to come by on the anniversary for a ceremonial burning of the Koran. The Dove World Outreach Center — a misnomer if ever there was one — has made a cottage industry of its Islam-bashing, promoting its old-fashioned hate crusade with the most modern weapons — YouTube, podcasts, Facebook, and blogs (“Top Ten Reasons to Burn a Koran”).

Obviously, this was an act of naked self-promotion as much as a coherent statement about religion. Its instigator, the church’s pastor, Terry Jones, based his crusade on a series of mind-bending assumptions, including his belief that Muslims are always in bad moods (he asks, on camera, “Have you ever really seen a really happy Muslim?”). But for all of its cartoonish quality, and despite his cancellation under pressure Thursday, the timing of this media circus has been a disaster for US foreign policy and the troops we ask to support it. At the exact moment that we want to act as the careful steward of peace in the Middle East, minds around the world have been filled with the image of Korans in America being tossed onto pyres.

For better or worse, there is not much anybody can do about religious extremists who offend decency, yet stay within the letter of the law. The same Constitution that confirms the right to worship freely protects the right to worship badly. But September is also the anniversary of the 1787 document that framed our government, and in this season of displaced Tea Party anger, it is worth getting right with our history. There is nothing wrong with the desire to go back to the founding principles that made this nation great — but we should take the time to discover what those principles actually were.

For most Americans, the Koran remains a deeply foreign book, full of strange invocations. Few non-Muslims read it, and most of us carry assumptions about a work of scripture that we assume to be hostile, though it affirms many of the earlier traditions of Christianity and Judaism. Like all works of scripture, it is complex and sometimes contradictory, full of soothing as well as frightening passages. But for those willing to make a genuine effort, there are important areas of overlap, waiting to be found.

As usual, the Founders were way ahead of us. They thought hard about how to build a country of many different faiths. And to advance that vision to the fullest, they read the Koran, and studied Islam with a calm intelligence that today’s over-hyped Americans can only begin to imagine. They knew something that we do not. To a remarkable degree, the Koran is not alien to American history — but inside it.

No book states the case more plainly than a single volume, tucked away deep within the citadel of Copley Square — the Boston Public Library. The book known as Adams 281.1 is a copy of the Koran, from the personal collection of John Adams. There is nothing particularly ornate about this humble book, one of a collection of 2,400 that belonged to the second president. But it tells an important story, and reminds us how worldly the Founders were, and how impervious to the fanaticisms that spring up like dandelions whenever religion and politics are mixed. They, like us, lived in a complicated and often hostile global environment, dominated by religious strife, terror, and the bloodsport of competing empires. Yet better than we, they saw the world as it is, and refused the temptation to enlarge our enemies into Satanic monsters, or simply pretend they didn’t exist.

Reports of Korans in American libraries go back at least to 1683, when an early settler of Germantown, Pa., brought a German version to these shores. Despite its foreign air, Adams’s Koran had a strong New England pedigree. The first Koran published in the United States, it was printed in Springfield in 1806.

Why would John Adams and a cluster of farmers in the Connecticut valley have bought copies of the Koran in 1806? Surprisingly, there was a long tradition of New Englanders reading in the Islamic scripture. The legendary bluenose Cotton Mather had his faults, but a lack of curiosity about the world was not one of them. Mather paid scrupulous attention to the Ottoman Empire in his voracious reading, and cited the Koran often in passing. True, much of it was in his pinched voice — as far back as the 17th century, New England sailors were being kidnapped by North African pirates, a source of never-ending vexation, and Mather denounced the pirates as “Mahometan Turks, and Moors and Devils.” But he admired Arab and Ottoman learning, and when Turks in Constantinople and Smyrna succeeded in inoculating patients against smallpox, he led a public campaign to do the same in Boston (a campaign for which he was much vilified by those who called inoculation the “work of the Devil,” merely because of its Islamic origin). It was one of his finer moments.

Other early Americans denounced Islam — surprisingly, Roger Williams, whom we generally hold up as a model of tolerance, expressed the hope that “the Pope and Mahomet” would be “flung in to the Lake that burns with Fire and Brimstone.” But Rhode Island, and ultimately all of New England, proved hospitable to the strangers who came in the wake of the Puritans — notably, the small Jewish congregation that settled in Newport and built Touro Synagogue, America’s oldest. And in theory — if not often in practice (simply because there were so few) — that toleration extended to Muslims as well.

This theory was eloquently expressed around the time the Constitution was written. One of its models was the 1780 Massachusetts Constitution, which John Adams had helped to create, and which, in the words of one of its drafters, Theophilus Parsons, was designed to ensure “the most ample liberty of conscience” for “Deists, Mahometans, Jews and Christians.”

As the Founders deliberated over what types of people would ultimately populate the strange new country they were creating, they cited Muslims as an extreme of foreign-ness whom it would be important to protect in the future. Perhaps, they daydreamed, a Muslim or a Catholic might even be president someday? As with everything, they debated it. Some disapproved, but Richard Henry Lee insisted that “true freedom embraces the Mahometan and Gentoo [Hindu] as well as the Christian religion.” George Washington went out of his way to praise Muslims on several occasions, and suggested that he would welcome them at Mount Vernon if they were willing to work. Benjamin Franklin argued that Muslims should be able to preach to Christians if we insisted on the right to preach to them. Near the end of his life, he impersonated a Muslim essayist, to mock American hypocrisy over slavery.

Thomas Jefferson, especially, had a familiarity with Islam that borders on the astonishing. Like Adams, he owned a Koran, a 1764 English edition that he bought while studying law as a young man in Williamsburg, Va. Only two years ago, that Koran became the center of a controversy, when the first Muslim ever elected to Congress, Keith Ellison, a Democrat from Minnesota, asked if he could place his hand on it while taking his oath of office — a request that elicited tremendous screeches from the talk radio extremists. Jefferson even tried to learn Arabic, and wrote his Bill for Establishing Religious Freedom to protect “the Jew and the Gentile, the Christian and the Mahometan, the Hindoo and infidel of every denomination.”

Jefferson and Adams led many of our early negotiations with the Islamic powers as the United States lurched into existence. A favorable treaty was signed with Morocco, simply because the Moroccans considered the Americans ahl-al-kitab, or “people of the book,” similar to Muslims, who likewise eschewed the idolatry of Europe’s ornate state religions. When Adams was president, a treaty with Tripoli (Libya) insisted that the United States was “not in any sense founded upon the Christian religion” and therefore had “no character of enmity against the laws, religion and tranquility of Mussulmen.”

There was another important group of Americans who read the Koran, not as a legal sourcebook, or a work of exoticism, but as something very different — a reminder of home. While evidence is fragmentary, as many as 20 percent of African-American slaves may have come from Islamic backgrounds. They kept their knowledge of the Koran alive through memory, or chanted suras, or, in rare cases, smuggled copies of the book itself. In the 1930s, when WPA workers were interviewing elderly African-Americans in Georgia’s Sea Islands, they were told of an ancestor named Bilali who spoke Arabic and owned a copy of the Koran — a remarkable fact when we remember that it was a crime for slaves to read. In the War of 1812, Bilali and his fellow Muslims helped to defend America from a British attack, inverting nearly all of our stereotypes in the process.

In 1790, as the last of the original 13 states embraced the Constitution, and the United States finally lived up to its name, George Washington visited that state — unruly Rhode Island — and its Jewish congregation at Newport. The letter he wrote to them afterwards struck the perfect note, and drained much of the antiforeign invective that was already poisoning the political atmosphere, only a year into his presidency. Addressing himself to “the children of the Stock of Abraham” (who, in theory, include Muslims as well as Jews), the president of the United States offered an expansive vision indeed:

“May the children of the Stock of Abraham, who dwell in this land, continue to merit and enjoy the good will of the other Inhabitants; while every one shall sit in safety under his own vine and figtree, and there shall be none to make him afraid.”

For democracy to survive, it required consent: a willingness to surrender some bits of cultural identity to preserve the higher goal of a working community. Washington’s letter still offers a tantalizing prospect, especially as his successor turns from the distracting noise of Gainesville to the essential work of building peace in the Middle East, for all of the children of the Stock of Abraham.

Ted Widmer is the Beatrice and Julio Mario Santo Domingo director and librarian of the John Carter Brown Library at Brown University.

__________

Full article: http://www.boston.com/bostonglobe/ideas/articles/2010/09/12/the_true_history_of_the_koran_in_america/

The Father of American Politics

James Madison’s role in drafting the Constitution is well-known. His role as a media-savvy party activist is not.

James Madison is known as the Father of the Constitution, reflecting his role in planning, writing and ratifying the nation’s fundamental law. This should be his month: The Constitutional Convention, where he starred, finished the document in September 1787. And Congress sent the amendments that became the Bill of Rights—which Madison also played a major role in shaping—to the states in September 1789.

But Madison has another claim on our attention. He is the father of American politics as we know it.

Madison helped establish America’s first political party, the Republicans. In 1791, as a representative from Virginia, he joined Secretary of State Thomas Jefferson on a trip through upstate New York and New England, supposedly collecting biological specimens for the American Philosophical Society but actually collecting political allies for themselves. The politician they wished to combat, Treasury Secretary Alexander Hamilton, already wielded great power through his office, and hence he was somewhat slower to organize a party; when he did, it took the name Federalists.

James Madison, the fourth U.S. president

Madison and Jefferson built better than Hamilton: the Federalists disappeared as a national party in 1816, while the old Republicans march on today as the Democrats. (The modern GOP is an unrelated organization established in 1854.)

Madison helped found the first party newspaper, the National Gazette. (The Nation, The New Republic and National Review are latter-day reincarnations.) He recruited the paper’s first editor, Philip Freneau, a versifier and college chum. Jefferson gave Freneau a nominal job as a translator in the State Department and in his free time Freneau smacked Hamilton in prose.

Madison’s interest in newspapers flowed from his interest in the power of public opinion. “Whatever facilitates a general intercourse of sentiments,” he wrote in a December 1791 National Gazette essay, “. . . a circulation of newspapers throughout the entire body of the people . . . is favorable to liberty.” Then “every good citizen will be . . . a sentinel over the rights of the people.”

Drowning in both media and poll data today, we understand the importance of regularly measuring public opinion. But in the early republic consulting public opinion was a new concept.

The Federalists had little use for it. They thought the people should rule at the polls, then let the victors do their best until the next election. Madison foresaw, and applauded, our world of 24/7 news, comment and pulse-taking before it existed.

Madison belonged to an early form of the political machine, the dynasty. America had revolted against George III and the House of Hanover, but the dynastic temptation lingered on. Federalist John Adams, our second president, saw his eldest son, John Quincy Adams, become the sixth president. But the Adamses were unpopular one-termers. Between them stretched the Virginia Dynasty—two terms of Jefferson, two terms of Madison, two terms of James Monroe—24 years of government by friends and neighbors.

The Adamses—and the Kennedys, Bushes and Clintons in our day—had dynasties of blood and marriage. Jefferson, Madison and Monroe made a dynasty of ideological brotherhood.

Not that Madison ignored the political importance of marriage. After an unhappy courtship in his early 30s, he left romance alone until he was 43, when he married a pretty widow, Dolley Payne Todd. When Madison took office as Secretary of State in 1801 and as president in 1809, Dolley Madison became more than a hostess. She was a political wife, America’s first: half a campaign tag-team, and often the better half. Gregarious and outgoing, she completed her husband’s personality, which was shy and stiff except with intimates.

Martha Washington, the first First Lady, was beloved but domestic; Abigail Adams, the second, was political but abrasive. Thomas Jefferson, the third president, was a widower. As one U.S. senator put it, only Madison had “a wife to aid in his pretensions.”

Madison succeeded as a political innovator because he was good at politics. He did what came naturally to him: agenda-setting, committee work, parliamentary maneuvering. He grew up in a family as large as an oyster bed—six siblings who survived childhood, numerous nieces, nephews and cousins—good training for a future legislator.

He worked at what didn’t come naturally: public speaking and campaigning. His voice was weak; time and again, note-takers at debates he participated in (such as in Virginia’s convention to ratify the U.S. Constitution) left blanks in his remarks or simply gave up, because Mr. Madison “could not be distinctly heard.” Yet when circumstances required it, he took on the flamboyant Patrick Henry and once tangled with his friend Monroe in the open air of a snowstorm so bitter he got frostbite on his nose. He won both debates.

Madison played well with others. He worked with George Washington, profiting from his charisma and judgment, and, before they fell out, with Hamilton, profiting from his exuberance. (Hamilton tapped Madison to contribute to the Federalist Papers, which was initially Hamilton’s project; Madison wrote 29 of the 85 essays.) As president, he learned something about money and the world from his Treasury secretary, Albert Gallatin. He was a great man who was not afraid of assisting or deferring to other great men (another legacy of his tight family life). He also worked with the less-than-great: hatchetmen, gossips, wire-pullers. They do the work of politics too. They are part of the game.

James Madison helped build a republic. He was also an ambitious party activist who counted votes, stumped, spoke, scratched backs and (when necessary) stabbed them. He would not be afraid of the contrast, for his deepest thinking told him that the architects of liberty had to understand and sometimes use the ordinary political materials of ambition and self-advancement to ensure that this republic would endure.

Mr. Brookhiser is the author, most recently, of “Right Time, Right Place: Coming of Age with William F. Buckley Jr. and the Conservative Movement” (Basic Books, 2009).

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704644404575481993440363322.html

Russian Submarine Hunts Clues to Century-Old Mystery

The Czar’s Lost Gold

Legend has it that almost a century ago a series of railway wagons stuffed with gold sank into the depths of a lake in Siberia. This week, researchers, exploring the depths by submarine, may have found the Russian royals’ lost gold.

As Bair Tsyrenov slowly guided his Mir submersible up an underwater slope, a shimmer of gold was caught in the vehicle’s headlights, 400 meters (1,300 feet) below the surface of Lake Baikal. First the ship’s three-man crew discovered “steel girders that looked like railway bridges.” Then they struck upon the “bars with a particular golden radiance,” Tsyrenov, a researcher from the Lake Baikal Protection Fund, reports.

The find, made by researchers at the beginning of this week, was a spectacular one. For the last two years, the two Mir submersible research vehicles, usually at work in the Atlantic Ocean and the Indian Ocean, have been operating in Siberia’s Lake Baikal, the world’s largest body of fresh water by volume. These are the same two mini-submarines that brought the world the first underwater images of the Titanic.

The Mir expedition to Lake Baikal was actually supposed to be finishing up around now. But the vessels are currently hot on the trail of a legend: the last czar’s hoard of gold, which has been missing for 90 years and which, so the story goes, lies in the depths of the Siberian lake.

Moving Millions Worth of Gold

Russian experts and journalists believe that the recent finds might be part of the gold taken by Admiral Aleksandr Kolchak, which has been missing since the chaos of Russia’s civil war. During a major offensive in 1919, Kolchak led the “White Guards” under his command over the Ural Mountains. Kolchak and his forces drove the Bolsheviks out of Kazan, a city east of Moscow, and took control of a major part of Russia’s gold reserves.

Fearing that German troops might get their hands on it during World War I, Czar Nicholas II had had 500 tons of gold transported from St. Petersburg to Kazan. The gold, worth about 650 million rubles, reportedly filled 5,000 crates and 1,700 sacks; the “Whites” required 40 railway cars for the journey.

The victors’ luck, however, did not last. Although Kolchak, an officer in the czar’s navy with close-cropped hair, called on others to pursue “victory over the Bolsheviks,” he himself set up an authoritarian military dictatorship in the area under his control. After the “Reds” launched a counteroffensive, he fell into Soviet hands and was executed by a firing squad.

Gold Sank into Lake?

The “Czechoslovakian corps,” which had been fighting the Bolsheviks alongside Kolchak, handed 410 million rubles’ worth of the gold over to the government in Moscow in return for safe passage home. But what happened to the rest? The last traces of the gold disappeared in the wide-open spaces of Siberia.

According to legend, members of the “White Guards” tried to cross Lake Baikal with the railway cars while it was frozen over with winter ice. But the weight of the cars caused them to crash through the ice and the gold sank into the depths. In fact, the frozen lake is still used as a route for traffic in the winter. During the Russo-Japanese War (1904-1905), railway tracks were even laid across the meter-thick ice.

And now many interested observers are hoping that the sensational find is a reality. Last year one of the Mir submersibles came upon fragments of a railway car at the bottom of the lake during a dive, as well as some crates full of ammunition dating from the period of the civil war. Still, historians expressed their doubts that this was the czar’s gold. It was much more likely that the gold never sank, they guessed. Instead the “White Guards” might have smuggled it out of the country and deposited it into bank accounts in Great Britain and Japan. Another explanation: The withdrawing Czechs had taken it with them and it had brought about a period of unexpected prosperity in that country during the 1920s.

“For the time being, it is hard to say whether the gold is really Kolchak’s,” Tsyrenov told SPIEGEL ONLINE. “Unfortunately we did not succeed in recovering any of the bars,” his colleague Roman Afonin says. The bars were stuck too firmly in the lake’s muck.

But Bair Tsyrenov and his colleagues, who are preparing to make additional dives, are now determined to solve the riddle of the czar’s gold. Whatever happens though, Afonin concludes, “the legend will live on.”

__________

Full article: http://www.spiegel.de/international/world/0,1518,715373,00.html

When the Killing Stopped

How the British tried to start again after the carnage of World War I.

“They shall not grow old, as we that are left grow old”—we are all familiar with Laurence Binyon’s lament for the fallen of World War I. “The Great Silence” is the less-known story of the aftermath of that war: of those who were left and who did grow old. It complements Juliet Nicolson’s earlier account, in “The Perfect Summer,” of the golden period prefacing the outbreak of hostilities, an interlude of prosperity that only served to throw the horror of the conflict and the social disintegration that followed into sharper relief.

Of the five million British servicemen who went out to fight in the European trenches, 1.5 million came back with permanent injuries and disfigurements; others were traumatized in less immediately obvious ways. Taking stock, the Illustrated London News wrote at the time that the war had “destroyed millions of men, broken millions of lives, ruined great cities and hamlets”; it had left “a belt of earth ravaged, crowded the world with maimed men, blind, mad, sick men, flinging empires into anarchy.” Those who did return, anticipating the “land fit for heroes” promised by the British Prime Minister Lloyd George, found that neither glory nor reward was forthcoming. The economy had collapsed, jobs were scarce and housing was in short supply. Once the euphoria following the Armistice had run its course, the silence that descended when the guns finally stopped was largely one of stunned bewilderment.

Ms. Nicolson focuses on how the British tried, in the two years after the war ended, to reorient themselves and to start again. Her approach is anecdotal and eclectic, drawing freely on contemporary diaries, letters and memoirs to create an impressionistic picture of the lull preceding the Roaring ’20s. The civil work force was ill-equipped to accommodate the more than 40,000 men who had lost one or more limbs during the war. There was little understanding of the phenomenon of shell shock, which was treated with a punitive regime of electrical “therapy,” codeine tablets, rectal injections and chastisement.

As disconcerting to the public were the otherwise able-bodied survivors who had sustained severe facial damage. A hospital department (nicknamed the Tin Noses Shop) was set up in Wandsworth to manufacture galvanized copper masks for those “who no longer had noses, eyes, jawbones, cheekbones, chins, ears or much of a face at all.” But the masks were both uncomfortable and eerily immobile.

The medical establishment was swiftly challenged, through this mass confrontation with illness and debility, to develop modern methods. The New Zealand-born surgeon Harold Gillies established the first specialized plastic-surgery unit at the Queen’s Hospital in Sidcup, where he carried out more than 11,000 maxillofacial reconstructions after 1917. Groundbreaking psychoanalytic work was done with mentally afflicted veterans under the auspices of Dr. William Rivers at Craiglockhart War Hospital near Edinburgh (the poets Siegfried Sassoon and Wilfred Owen were both patients).

Society was also forced to readjust its attitudes to other groups: Women were unwilling to surrender the freedoms they had gained while employed in the war effort, and the working classes were equally reluctant to resume their old position of voiceless subservience after fighting side by side with their “betters” for four years. In recognition of this, millions were enfranchised for the first time in 1918.

All of this is ably if sketchily rendered in “The Great Silence.” The problem with Ms. Nicolson’s magpie overview of the period is that it is too often sidetracked by specious glitter. The author is the granddaughter of Vita Sackville-West and the diplomat Harold Nicolson, and while this lineage gives her behind-the-scenes insights into the political machinations of the period—her grandfather had a ringside seat when the Treaty of Versailles was brokered in 1919—her interest is largely in her own set. In a book that doesn’t go into any real detail about the pacifist or suffragette movements, we are told far too much about Lady Diana Cooper’s parties, the laborious demolition of the Duke of Devonshire’s conservatory, Lady Ottoline Morrell’s love life and the fortunes of the Savoy hotel.

Yet these anecdotes have an authenticity that can be awkwardly lacking in Ms. Nicolson’s attempts at entering the lives of “downstairs” folk. The years 1918-20 were ones of enormous social unrest at home and abroad. The reverberations from the Bolshevik Revolution in Russia were continuing, and 1920 saw the founding of the Communist Party in Britain, where the electorate had trebled from a privileged 7.7 million to a restless 21.4 million desperate for tangible change.

But you would never guess it from Ms. Nicolson’s account. In 1919 the Savoy, we discover, was serving “bear from Finland, snails from France, caviar from Russia and Scottish plovers’ eggs.” Fortunately, Ms. Nicolson contends, “for those who lived lives remote from the extravagant surroundings of the Savoy hotel, the simple gathering of relations reunited round a table set for tea, with jam tarts and a huge currant cake in the centre, was enough.” Let them eat cake, indeed.

Ms. Nicolson is at her most effective when describing the nation’s search for a fitting public expression of its abiding sense of grief. Something was needed that would be accessible to everyone and that would transcend the British curse of class. In the end, the eloquence of the solution lay in its simplicity. The two-minute Great Silence, or act of remembrance, was first suggested by an Australian journalist and eventually secured the endorsement of the king. At 11 a.m. on Nov. 11, 1919, Britain stopped, as it has every year since on that date, to remember its war dead.

Ms. Nicolson notes that there were some in the crowd gathered around the Cenotaph in Whitehall that day for whom silence no longer held any particular meaning. These were the men whom the roar of the trenches had robbed of the ability to hear any sound at all. For them, as Ms. Nicolson observes with poignant understatement, silence would be a permanent state.

Ms. Lowry is the author of “The Bellini Madonna: A Novel.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703447004575449684115658478.html

How Winston Churchill Stopped the Nazis

The Man Who Saved Europe

In July 1945, when the victorious Churchill toured the ruins of Berlin, he asked to be taken to the bunker where Hitler ended his life. He was also shown the spot in the courtyard of the Reich Chancellery where the dictator’s body was incinerated. Churchill’s visit was not announced ahead of time. Nevertheless, a large number of people gathered in front of the Chancellery, and when Churchill walked through the crowd, he was astonished to hear the Germans celebrate him as a hero.

Some 70 years ago, Hitler’s Wehrmacht was chalking up one victory after another, but then Winston Churchill stood up to the dictator. Their duel decided World War II. The former British prime minister has been viewed as one of the shining lights of the 20th century ever since. Is the reputation justified?

Adolf Hitler and Winston Churchill never met, and who knows how it might have changed the course of history in the 20th century if the Nazi had made a different decision in the spring of 1932. 

He was already standing in the lobby of the Grand Hotel Continental in Max Joseph Strasse in Munich, unshaven, exhausted from his election campaign, wearing a shabby trench coat. In another room, Churchill was dining with his family and members of his entourage, waiting for Hitler. 

The short, stout Briton, the scion of one of England’s most important families, was already famous. He was a successful journalist and author of bestsellers, and before World War I he had already served as home secretary, president of the board of trade and first lord of the admiralty (head of the navy). During World War I, he was appointed minister of munitions, then secretary of state for war and secretary of state for air. After the war, he became secretary of state for the colonies and, finally, served as chancellor of the Exchequer from 1924 to 1929. The British Isles had not seen someone with such an illustrious career in a long time. 

Hitler Showed Little Interest 

Of course, Churchill was a member of the opposition at the time. He had come to Munich to conduct research for a new book, and while he was there, he wanted to use the opportunity to meet the notorious Hitler, whose supporters were in the process of destroying the Weimar Republic. Churchill’s son and Hitler’s foreign press agent Ernst “Putzi” Hanfstaengl arranged for the two men to meet over dinner at the Continental, although Hanfstaengl neglected to tell the Churchills that the Fuehrer had shown little interest and had left it open as to whether he would attend. 

The evening progressed without Hitler. After the dessert, Hanfstaengl excused himself and hurried to the hotel telephone booth to call the Fuehrer and find out whether he still intended to show up. Suddenly he saw Hitler standing in the lobby. The Nazi had coincidentally met with a benefactor at the Continental. 

Hanfstaengl took the Nazi party leader aside and told him that if Churchill saw him now, his failure to appear would be seen as an insult. And then he said: “Mr. Hitler, you should come. It’s truly important.” But the party leader remained obstinate, and said: “Hanfstaengl, you know perfectly well that I have a lot to do at the moment and that we plan to get an early start tomorrow. So — good night.” 

Churchill put a good face on the rejection. Later on, Hanfstaengl sat down at the piano in the hotel’s music room, and they sang Scottish songs together. But even in his memoirs, Churchill writes with regret that Hitler “lost his only chance of meeting me.”

If Hitler had met Churchill in Munich, would he have realized that he was facing a man who was every bit his match? A man who actually enjoyed the war? And who would eventually force Hitler to his knees? 

A Man Who Loved Danger and Sought Out Adventure 

Churchill had killed people in battle as a young man, but he was not particularly troubled by the experience. “Nothing in history was ever settled except by wars,” the bellicose Churchill believed. He loved danger and sought out adventure. Even when he was in his sixties, as prime minister, he would stand on the roof of a government building in London during German air raids to observe the murderous spectacle from above, while his cabinet ministers fled into the bomb shelters.

Adolf Hitler and Winston Churchill. It was a rivalry that pitted a member of the petit bourgeoisie against a son of the aristocracy, an ascetic against a hedonist, an ideologue against a pragmatist, a murderer against an adventurer, a racist revolutionary against an imperial political realist.

Eight years after Hitler’s failure to turn up at that dinner in Munich, the duel between these two men was to shape the fate of the world. 

Britain Defies the Dictator 

It was the summer of 1940, and Hitler, who was the Chancellor of the German Reich by then, was closer to winning the war than he would ever be again. The Germans had overrun Poland, occupied Norway and defeated and humiliated France, a major power. It seemed only a matter of time before Hitler would dominate all of Europe. He was aligned with the Soviets, and the Americans were still neutral and biding their time.

Ironically, it was Great Britain, the only country Hitler truly admired and respected, that defied the dictator. Churchill declared that he had only one goal: “Victory — victory at all costs, victory in spite of all terror, victory however long and hard the road may be.” 

Great Britain persevered for a good year, from France’s capitulation until Hitler’s attack on the Soviet Union on June 22, 1941, despite German air raids on London and Coventry, despite German victories in Africa, the Balkans and Scandinavia, and despite the threat of national bankruptcy. 

And Churchill deserved the credit for this perseverance. 

The prime minister, with his trademark Cuban cigars, polka dot bow tie and conspicuous hats, became the world’s most important symbolic figure of resistance against Nazi Germany. Whenever he appeared in public, the crowds would raise their hands and part their index and middle fingers to form the victory symbol, just as he had done. 

Hitler berated his rival as a “lunatic,” “paralytic” and “world arsonist.” Churchill shot back, calling Hitler a “wicked man,” the “monstrous product of former wrongs and shame” and said that “Europe will not yield itself to Hitler’s gospel of hatred.” It soon became clear that the loser in this duel would pay with his life. 

The perseverance of the British was of great and probably decisive importance in shaping the course of World War II. How else could the United States have launched an invasion of the European continent if the British Isles hadn’t been available to it as a giant aircraft carrier? 

And what would have happened if Hitler could have shifted the divisions and bombers to the Eastern front that were tied up in the war against England? It wouldn’t have taken Hitler much more to defeat Stalin. 

Of course, the Red Army and, with significant casualties, American GIs achieved final victory, but the fact that Churchill had stood his ground in 1940 played an important role in their success. 

Britain’s ‘Finest Hour’ 

It’s been 70 years since Britain experienced its “finest hour,” as Churchill called it, but the fascination remains unbroken. There are few wars that can be described without qualification as just wars, and as wars in which the right side prevailed. 

Nowadays, hundreds of thousands of visitors stream through the award-winning Churchill Museum and Cabinet War Rooms in London, in the basement of the Treasury building on St. James’s Park, where the cabinet met during the war. They stroll through the Baroque rooms and gardens of Blenheim Palace, one of England’s most magnificent palatial complexes, where Churchill was born in 1874. Or they enjoy the view from Churchill’s estate, Chartwell, across the meadows of the region known as the Weald of Kent.

Churchill has long been one of the icons of the 20th century, admired by statesmen in all countries and political parties, from former German Chancellors Helmut Kohl and Helmut Schmidt to former US Presidents Bill Clinton and George W. Bush. Bush even borrowed a bust of the prime minister from the British government art collection and placed it in the Oval Office, because he saw Churchill as a visionary with whom he hoped to be compared. The Briton, Bush said, “charged ahead … wasn’t afraid of opinion polls … and the world is better for it.” 

A Mythical Component to Churchill’s Achievements 

But as with all great historic figures who embark on the path to immortality, there is also a mythical component to the wartime prime minister’s achievements, a component to which Churchill himself repeatedly contributed. During his lifetime, he found it amusing that the history books would judge him kindly — because he intended to write them himself.

And that’s what the author of various historical works did. His six-volume work “The Second World War” became a bestseller and was part of the reason he was awarded the 1953 Nobel Prize in Literature. Readers loved his sparkling style and, of course, the many anecdotes that the sharp-tongued aristocrat told. One concerned his vicious war of words with Lady Nancy Astor, the first female member of parliament, who once hissed: “Winston, if I were your wife I’d put poison in your coffee.” Churchill replied: “Nancy, if I were married to you I’d drink it.”

Since then, of course, countless historians, journalists and political commentators have studied the battle plans of the day, analyzed decision-making sequences and evaluated secret documents. Surprising as it may be, some documents are still classified today, but the existing material is enough to allow us to form our own opinions about this duel, which brought together two men whose paths, until then, could not have been more different. 

Both were mediocre students and, like all young men, believed that they were destined for greatness. Hitler hoped to be a successful artist, while Churchill, more than 14 years his senior, did poorly in school and eventually embarked on a military career.

Trench Warfare Cooled Churchill’s Romance for War  

At the time, the British Empire was still what German historian Peter Alter calls an “enormous playground and source of adventure for young Britons,” and Churchill, too, felt the pull of the battlefield. He was an excellent writer and sought out assignments as a combat reporter: in tropical Cuba, in the Indian jungles and the deserts of Sudan. 

After being captured by rebel Boers in South Africa, he escaped from their prison camp and made his way through the desert to what is now Mozambique. His spectacular escape and trek turned him into a national hero, one who could delight an audience of millions with his articles, lectures and books. 

Both men were outsiders. But there was a difference between being descended from a family named Schicklgruber in a poor, forested region in Lower Austria and being part of a family that resided in Blenheim Palace and counted the Duke of Marlborough, one of the most famous military leaders in British history, among its ancestors. 

In 1898, while Hitler was still going to school, the 23-year-old Churchill went to the Conservative Party headquarters in London to explore whether “a constituency could be found for me.” 

It could, of course. 

To understand the Churchill phenomenon, it is important to consider the historic impact of his origins. He was mainly interested in seeing his portrait appear alongside those of his ancestors in the family gallery and, of course, in the British Empire, to which his ancestors had already felt committed. He said it best himself: “The British Empire is everything to me. What is good for the Empire is also good for me, and what is bad for the Empire is bad for me.”

The restless, resourceful, eloquent politician would soon become a member of the government, and while Hitler, who had been expelled from school, lived a Bohemian life, Churchill was already conferring with world leaders. One was the German Kaiser, who invited the young deputy secretary of state for the colonies to attend fall maneuvers in Silesia as an observer in 1906. 

‘Germany Must Feel She Is Beaten’ 

Before World War I, Churchill was not among the agitators in London, but when the country began losing large numbers of soldiers in 1914, he became an unrelenting supporter of the war, insisting that “Germany must feel she is beaten.” 

Once again, Churchill and Hitler were at opposite ends of their respective chains of command. Hitler was never promoted past the rank of lance corporal, while Churchill, now First Lord of the Admiralty, commanded the world’s largest war fleet. 

The war did bring the two later rivals into close physical proximity with each other. Churchill assumed responsibility for the catastrophic failure of the British landing operation at the Dardanelles, where the Ottoman army prevailed, and resigned in 1915. To make amends, he signed up to serve at the front and, as a lieutenant colonel, was assigned to a section of Flanders where Hitler was serving in the German army. Only 13 kilometers (8 miles) separated the two men. 

The carnage of trench warfare cooled Churchill’s romantic enthusiasm for war. After a few months, he resigned and was soon appointed to another cabinet minister position. Hitler, on the other hand, had to fight to the end. 

Hitler Admired Victorious British 

It was an unusually turbulent time. Europe’s great empires were breaking apart, and so was the world of Lance Corporal Hitler. The 29-year-old social Darwinist believed that he had found the purpose of human existence in the bone mills of Verdun (“Every generation should take part in a war at least once”). After the German defeat, he decided to “become a politician.” 

Hitler admired the victorious British and, as historian Hermann Graml wrote derisively, his early writings suggest that the Nazi would have preferred to become Fuehrer in the British Isles than in Germany. Hitler saw the Empire as a model for his racist empire, because he assumed that it was based on rigidity and a sense of racial superiority. 

He soon dreamed of an “Aryan world order,” in which the Germans would control Eurasia and the British, as their junior partner, would dominate the world’s oceans. He envisioned death or slavery for a large share of mankind. 

Churchill also proved to be susceptible to the prevailing sentiments of the day. As a monarchist, he was horror-stricken by the Russian Revolution, which he initially blamed on a Jewish-Bolshevik global conspiracy, and for a time he even sympathized with the Italian fascist Benito Mussolini, whom he called “a Roman genius.” But Churchill abandoned positions as quickly as he adopted them. He was not interested in party agendas or ideological designs, and he once told his mother that he considered his quickness to pass judgment a “mental flaw.” He changed parties twice, once from the Conservatives to the Liberals and then back to the Conservatives.

In the early 1930s, his checkered career seemed to have come to an end. Churchill fell out with his own party because it wanted to grant more rights to the Indians, which Churchill, an imperialist, saw as a threat to the Empire. 

The sullen Churchill retired to Chartwell, where gardeners, cooks and servants attended to the needs of a hedonist who suffered from depression, began drinking in the morning and lived well beyond his means. As a politician, he had written history books to make money (they were a huge success), and now he returned to his writing.

To the public, the 56-year-old Churchill was outdated, a man of the 19th century who refused to accept change. In one respect, however, his predictions for the future were downright prophetic. When Hitler won almost 20 percent of the vote in the 1930 parliamentary election, Churchill told a Berlin diplomat that the Nazi Party leader “would seize the first available opportunity to resort to armed force.” 

It was a view he continued to hold. 

Churchill Advocates a Massive Military Buildup 

Hitler had hardly risen to power before Churchill began advocating a massive military buildup in Great Britain. At this point, he even believed that an alliance with the hated Soviet Union was the right thing for Britain. 

Why? 

He had read parts of Hitler’s “Mein Kampf,” and he despised the dictator’s methods, but this wasn’t his greatest concern. In 1937, in remarks directed at Hitler, he said: “We cannot say that we admire your treatment of the Jews or of the Protestants and Catholics of Germany … But, after all, these matters, as long as they are confined inside Germany, are not our business.” 

Indeed, Churchill was motivated by the maxims of the traditional British balance-of-power approach, in which the major powers were to balance each other out on land, while “Rule, Britannia!” applied on the high seas. 

In a letter to a friend, he wrote that Britain had never yielded to the strongest power on the continent, not to Philip II of Spain (in the 16th century), not to the French Sun King Louis XIV (in the 18th century), not to Napoleon (in the 19th century) and not to Kaiser Wilhelm II (in the 20th century). London had always aligned itself with the second-strongest power. The acceptance of German hegemony, Churchill wrote, “would be contrary to the whole of our history.”

‘Stop It! Stop It! Stop It Now!!!’ 

It was pure realpolitik, and it was the same logic that prompted Churchill to turn against Stalin once again after World War II. 

Churchill can hardly be blamed for feeling committed to a special mission in this regard. It was his ancestor, the Duke of Marlborough, who, as commander-in-chief of the British army, had reined in the troops of the Sun King, and Churchill was writing the Duke’s biography in the 1930s.

He now called upon his government to obstruct the Third Reich. “Stop it! Stop it! Stop it now!!!” he said. “Hitler constitutes the greatest danger for the British Empire!”

But his warnings went unheard. His fellow conservatives, supporters of then-Prime Minister Neville Chamberlain, favored appeasement of the Germans, because they feared another world war and believed that the dictator could be kept happy. 

The Duke of Marlborough’s descendant was on his own. 

The duel had not yet begun, and ironically, the chief Nazi sought to curry favor with Churchill, because he feared that the British politician could end up playing an important role. He invited the MP with the illustrious past to Germany twice, but Churchill turned down the invitations. He did, however, receive envoys from Berlin and met with Nazi Ambassador Joachim von Ribbentrop, who sought to convince the war-minded Churchill of the benefits of appeasement.

Ribbentrop, the host, and Churchill stood together in front of an enormous map in the German Embassy in London, while the Nazi explained that the Germans needed space for a greater Germany, or Lebensraum, in the Ukraine and Belarus. He assured Churchill that the Empire would be left untouched, but that the British would have to accept Germany’s eastward expansion in return. 

Churchill, however, felt that this division of territory was unacceptable, to which Ribbentrop brusquely replied: “In that case, war is inevitable.” 

The Duel Begins 

The gauntlet had been thrown down, and the mood quickly shifted. A furious Hitler publicly berated Churchill as a “warmonger,” while Churchill increasingly ignored diplomatic etiquette. By now he was sharply criticizing the persecution of the Jews, and in a newspaper commentary in the summer of 1939, he wrote that the Third Reich represented an unprecedented “cult of malignancy.” 

When World War II began a few weeks later, it was Hitler, ironically, who paved the way for Churchill’s political comeback. The German invasion of Poland shed a new light on Churchill’s earlier predictions. He had been right, after all, and the fact that the Nazis were now railing against him, calling him a “filthy liar” and a “bloated pig,” only enhanced his popularity. 

Yielding to public pressure, Chamberlain appointed him to his cabinet, and in the spring of 1940, Churchill finally succeeded him as prime minister. 

On the evening of May 10, Churchill, now 65, was sitting in a limousine on his way to Buckingham Palace, where King George VI would ask him to form a new government. In his memoirs, Churchill writes: “I felt as if I were walking with destiny, and that all my past life had been but a preparation for this hour and for this trial.” The duel could begin. 

The Nazis behaved as if they welcomed this development. “Clear fronts! We love that,” Nazi Propaganda Minister Josef Goebbels noted in his diary. Of course, the diary also contained other entries that testified to his respect for the new British prime minister. Goebbels described Churchill as a “man with great gifts,” “completely unpredictable” and “the soul of the English attack.” 

Curiously enough, the Friday Churchill took office was also a fateful day for Hitler. 

‘Utter Dejection Was Written on Every Face’ 

In the early morning hours, he traveled to Euskirchen near Cologne on an armored special train, the “Amerika.” From there, several convoys took the dictator and his entourage to Rodert, a village near the town of Bad Münstereifel, which had been fortified with flak positions and roadblocks. Hitler moved into a Spartan combat bunker on a hill named the Eselsberg (Donkey Mountain), where he expected his guests to sit on simple wicker chairs. The ascetic fanatic was determined not to go down in history as a man who had lived in the lap of luxury.

Little had happened on the western front since the beginning of the war. France and Great Britain were unwilling to chance an advance on Germany, and Hitler had also hesitated. But now it was time to move forward. 

At 5:35 a.m., the muffled roar of artillery was heard in the distance for the first time. Hitler raised his hand, pointed to the west, and said: “Gentlemen, the offensive against the Western powers has just begun.” 

The Wehrmacht’s Coup 

On paper, the Wehrmacht was inferior to the combined armed forces of the French, British and Belgians. The Germans had fewer soldiers, fewer tanks and fewer artillery pieces.

The Wehrmacht’s coup was a success nonetheless. German troops invaded Belgium, the Netherlands and Luxembourg, where they created the impression that they would then launch their main attack from these countries, as Germany had done in World War I. At the same time, German armored units pushed their way through the hills of the Ardennes, which the French had considered a buffer against attack, and then suddenly appeared in the rear of the front. Within a few days, a substantial portion of the Allied divisions was in danger of being encircled.

Hitler was suspicious of the Wehrmacht’s success and feared that he was marching into a trap. He sought to curb his officers and “ranted and raved that they were about to spoil the entire operation,” as Franz Halder, the head of the Army General Staff, noted. 

The German advance also took Churchill by surprise. He later admitted that he had underestimated the extent of the change that had taken place since the last war, as a result of the emergence of large numbers of fast-moving, heavy armored vehicles. “Neither in France nor in Britain had there been any effective comprehension of the consequences of the new fact that armored vehicles could be made capable of withstanding artillery fire, and could advance a hundred miles a day,” Churchill wrote in his memoir. 

He flew to France several times to encourage his French allies to hang on. He promised squadrons of aircraft (which he didn’t send) and divisions (which he didn’t have). But after his first visit to the Foreign Ministry in Paris on May 15, he already noted that “utter dejection was written on every face.” 

Then the prime minister looked out of the window. As he later wrote: “Outside in the garden of the Quai d’Orsay clouds of smoke arose from large bonfires, and I saw from the window venerable officials pushing wheelbarrows of archives onto them. Already therefore the evacuation of Paris was being prepared.” 

Hitler Briefly Becomes Churchill’s Unwitting Ally 

When the British expeditionary force and parts of the French army were forced to retreat in northern France, Churchill said, maliciously: “Of course, if one side fights and the other does not, the war is apt to become somewhat unequal.” Nevertheless, he was determined to rescue the troops.

It was then that Hitler, ironically, became Churchill’s unwitting ally. 

By halting the German advance, the dictator enabled the British to stage the biggest evacuation in their military history at Dunkirk in northern France. Most military historians believe that if the evacuation had failed, London would probably have had to sue for peace. 

Hitler would later claim that he had spared the British so as to solicit “the recognition of our dominance on the continent.” Churchill, he added, had unfortunately “failed to appreciate” his “generosity and chivalry.” 

Chivalry is not a trait for which Hitler is commonly known, and in fact there is every indication that the decision to halt the German advance was a miscalculation on the part of his military leaders, one with which the dictator had concurred. He reversed course two days later, but by then it was too late to stop the British evacuation.

As a result, hundreds of thousands of Allied soldiers were waiting on the beaches in late May, protected by the piers and breakwaters of Dunkirk. Nevertheless, German artillery shells and the bombs dropped by the German dive bombers known as Stukas made their wait a living hell. When the boats finally arrived, the soldiers wading through the shallow waters had to step over and push aside the cold bodies of their dead comrades.

The Royal Navy used its own ships, but it also commandeered cutters, sailing dinghies, yachts and motorboats. In the ensuing evacuation, about 1,000 ships crisscrossed the English Channel to bring the boys home. 

The British were in luck, because low clouds made for poor visibility for Germany’s Luftwaffe. The Royal Air Force also stood up to the Luftwaffe for a few days, enough time for the evacuation to succeed. By the time the Wehrmacht captured Dunkirk on June 4, most of the Allied soldiers had escaped. 

The British were ecstatic over the rescue effort, but it had, in fact, masked a disaster elsewhere. The British and the French had stood their ground against the Germans for four years in World War I, but now, after only a few weeks, the Nazis were at the gates of Paris. 

Had the duel between Hitler and Churchill already been decided? 

Churchill’s Strongest Weapon Was the Word 

While the evacuation was underway, Mussolini offered to broker a peace with Berlin. We will probably never know exactly what was discussed in London: the cabinet minutes note only that minute-taking was to be temporarily suspended.

The appeasers, allies of former Prime Minister Chamberlain, were still in the cabinet, and for reasons of domestic policy, Churchill needed their support. Foreign Secretary Lord Halifax, an Anglo-Catholic with a passion for fox hunting (hence his nickname, the “Holy Fox”), headed the peace faction. Although he did not advocate peace at any price, he was interested in exploring London’s options.

Churchill, too, seemed to vacillate, or was it only a tactical maneuver? According to the minutes of the cabinet, he said that the government could consider making peace with Hitler, provided the German leader would settle for the return of former German colonies and would agree to limit German dominance to Central Europe. 

Churchill knew that Hitler would never agree to such conditions. 

Hitler Would Turn England into ‘Slave State’ 

On the afternoon of May 29, the time had come to reach a decision. The prime minister assembled the expanded cabinet and explained that whether the British sued for peace or “fought it out,” it would make no difference in the end, because Hitler would only seek to turn Great Britain into “a slave state.” For that reason, he argued, the British should continue the fight. 

According to Churchill, his remarks were received with great enthusiasm. Some members of the cabinet jumped up from the table, ran to his chair, shouted and slapped him on the back. Another source describes the reaction in somewhat more muted terms, as a murmur of consent from the entire table.

Either way, the situation was clear: The war would continue. 

Churchill, for his part, savored the drama of the day. He would fly more than 180,000 kilometers (about 112,000 miles) by 1945. He would inspect the troops at the fronts, and when he did, he would venture so dangerously close to the enemy lines that his commanders feared for his life. 

The war was entirely to the taste of the prime minister, a man about whom the writer H.G. Wells wrote, after World War I: “He believes quite naively that he belongs to a peculiarly gifted and privileged class of beings to whom the lives and affairs of common men are given over, the raw material of brilliant careers … Before all things he desires a dramatic world with villains — and one hero.” 

Churchill’s strongest weapon was the word. The equally eloquent John F. Kennedy, son of the then US ambassador in London and later president of the United States, once said that Churchill had sent the English language to war. He gave magnificent speeches, and even the Nazis were impressed by his eloquence. “In his crudeness, he does command a certain amount of respect,” Goebbels wrote. 

Churchill’s rhetorical performances were only slightly diminished by the fact that some of his wording was not entirely new. 

On June 4, he uttered the following famous words in the House of Commons: “We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender.” 

A similar passage appears in Rudyard Kipling’s collection of stories, “The Jungle Book.” 

‘We Will Make Germany a Desert’ 

To this day, scholars disagree over whether the prime minister was trying to turn around a defeatist mood with his passionate words, or whether he was merely echoing the sentiments of an already determined populace. Churchill addressed the people directly only a few times, but when he did, up to two-thirds of Britons sat in front of their radios, hanging on his every word. 

The prime minister would later seek to downplay the impact his words may have had, saying that “there was no need to rouse their spirit,” and that “nothing moves an Englishman so much as the threat of an invasion.” 

Churchill emphasized a total commitment to the war. While the Third Reich exploited forced laborers and ransacked the countries it occupied, Britons were expected to contribute directly to the war effort. Sebastian Haffner, a German émigré, reported that at Easter 1940, uniformed doormen were still standing in front of the luxury hotels while a million unemployed were looking for work. Only a few months later, the unemployed had disappeared from the streets and the War Office had requisitioned the hotels.

Meanwhile, Churchill was dreaming of air attacks on the German Reich. “We will make Germany a desert, yes, a desert,” he announced over lunch. 

He was probably the most powerful British prime minister in history. And never had the Empire been governed in so bizarre a fashion: by a prime minister who conducted a significant portion of government affairs from a horizontal position. Dressed in his red dressing gown, he would lie on his four-poster bed, chewing a cigar and sipping ice-cold soda water, dictating memos to his secretary that were often titled “Action This Day.”

According to the Churchill saga, the British Isles were practically defenseless against Hitler in the summer of 1940. Churchill, too, was infected by the so-called invasion fever that was taking hold all around him. Speaking to a friend, he grimly predicted: “You and I will be dead in three months’ time.” 

‘When Will that Creature Churchill Finally Capitulate?’ 

He had barricades made of sandbags erected in front of key government buildings to provide cover for soldiers, to fend off a potential attack by German paratroopers. For a time, street signs were removed in the government district to prevent attackers from getting their bearings. Churchill was also determined to join the fight himself, if necessary, practicing with his Mannlicher pistol on the firing range at Chequers, the prime minister’s country house. 

Today we know that there was no threat of a German invasion, at least not in the summer of 1940. Hitler and his senior military leaders were in agreement that the British Navy was far too powerful. 

They considered a landing operation a remote option, one to be attempted only if the Royal Air Force could first be put out of commission. An invasion across the Channel was hardly possible before the end of September. The Germans felt that they stood a better chance of succeeding in May 1941, but even then they were not enthusiastic about the idea.

According to a report by an adjutant, Hitler was “more indecisive than ever before, and doesn’t know what he wants to do and how he wants to do it.” 

The dictator had expected that after his victory in France, the appeasers would prevail in London. 

Instead, he found himself with allies he either despised (Italy) or wanted to destroy (the Soviet Union). At the same time, he was waging war against the country he would have preferred to have as his junior partner. “They want to extend the hand of the Germans to England,” a high-ranking Nazi complained in Berlin. 

An Attempt to Bomb the British to the Negotiating Table 

In the end, Hitler reluctantly decided to bomb the British to the negotiating table. He likened himself to Martin Luther, who had not wanted to oppose Rome but felt that he was left with no other choice.

At first, Hitler focused his attacks on ports, airports and armament factories. Day after day, German fighter planes and bombers appeared in the skies over southeastern England, forcing Churchill to stay away from his beloved Chartwell, which was in the Germans’ flight path. 

On the ground, many Englishmen anxiously looked on as the German Messerschmitts dueled with the British Spitfires and Hurricanes. The fast fighter planes created vapor trails that formed giant circular patterns in the sky, as metal parts and cartridges rained down on the gardens of Kent and Sussex. 

The Royal Air Force was technically superior to the Germans. Its radar, guidance and warning systems were among the most advanced in the world. In addition, British aircraft factories were producing more planes than their German counterparts. 

Most of all, Britain benefited from its island position. When British pilots were shot down, they could save themselves by parachuting into their own territory, possibly even flying new missions on the same day, while German pilots either drowned in the sea or — if they were lucky enough to land on solid ground — ended up in prisoner-of-war camps. 

The Blitzkrieg 

Although things were looking good for the Luftwaffe at first, Hitler would not win the so-called Battle of Britain. 

On Aug. 24, 1940, the Germans bombed residential neighborhoods in London for the first time, probably by mistake. After that, Churchill gave the order to attack Berlin. Although he did not expect bombing the German capital to yield significant military benefits, it was “good for the morale of all of us.” 

Although Berlin suffered little damage, the attacks prompted Hitler to vow: “If they declare that they will attack our cities on a large scale, we will eradicate their cities.” 

Any sympathy Hitler might have had for the British had disappeared. The “Blitz,” the British rendering of the German term “Blitzkrieg,” had begun. London’s air raid sirens howled night after night, and by the end of December 1940, about 14,000 people had died in the British capital — burned, suffocated or crushed by wreckage.

Buckingham Palace and the House of Commons were also damaged, and Churchill’s war cabinet moved to the cellar vaults beneath the Treasury on St. James’s Park, which are accessible to visitors today. 

The map room, with its large wall maps of the world and its candy-colored telephones, the kitchen with its cast-iron tableware — everything still looks the same. Only the rats have disappeared and, of course, the unsavory odor of cigar smoke mixed with the stench of fecal matter coming from the chemical toilets. 

Churchill and his cabinet ministers were not especially well protected in the facility, which was only adequately fortified later on. A direct hit would have put an end to the duel between Hitler and Churchill. 

The Nazis gloated over the destruction of London, but they were also puzzled. “When will that creature Churchill finally capitulate?” Goebbels asked himself. “England cannot hold out forever!”

But indeed, it could. 

Churchill made a point of trudging through the destroyed areas of London, Coventry and Birmingham. Photos depict him bent slightly forward at the shoulders, giving him an air of determination. Cartoonists drew him as a bulldog.

Get back at them, the people called out, and he did. The first major attack on a German city struck Mannheim in December 1940. 

‘I Shall Drag the United States In’ 

At the end of 1940, Hitler openly conceded that he could not force his rival to capitulate with his bombing campaign. He had already abandoned his half-hearted plans to invade Britain months earlier. 

The two men were at a stalemate. Churchill was also unable to bring down his rival, because the British Army alone was incapable of defeating the Wehrmacht. 

What next? 

Churchill did have a plausible plan, and if we are to believe his son Randolph’s account, he began putting his plan into action on May 18, 1940. On that morning, Randolph was waiting outside the bathroom door for his father, who was shaving. Suddenly Churchill stopped shaving and said, through the open door: “I think I see my way through.” “Do you mean that we can avoid defeat?” Randolph asked. “Of course I mean we can beat them,” Churchill replied, “I shall drag the United States in.” 

It was an obvious proposition for Churchill, whose mother was American and who was both fond of and familiar with the new superpower. But the US public wanted no part of the Europeans’ fight. American arms shipments were modest, and although Churchill was eloquent in his warnings and appeals to Washington, even historians with a favorable view of Churchill believe that the situation would not have changed much if Hitler’s ally, Japan, hadn’t bombed Pearl Harbor in December 1941, thereby drawing the United States into the war. 

Churchill danced for joy when he heard the news. “This certainly simplifies things,” he told US President Franklin Roosevelt, “God be with you.” 

Hitler, too, was looking for new options and was puzzled that the British were holding their ground. Finally, he developed the original theory that the reason Churchill was refusing to yield was that he was secretly counting on the Soviet Union, which was still aligned with Hitler. Armed with this nonsensical notion, Hitler seriously resolved to attack Stalin, which had been his intention all along. “Once Russia is annihilated, England’s last hope will be gone.” 

Hitler Invades Soviet Union 

In May 1941, there was a significant decline in the German air raids on Great Britain, because the bombers were needed in the East. The Wehrmacht invaded the Soviet Union six weeks later. 

What a compliment! Hitler believed that it was easier to conquer Moscow than London. Shortly before his demise in 1945, Hitler complained that Churchill was the “real father of this war.”

With the attack on the Soviet Union, the British political system had been saved once and for all. 

There was nothing more that Churchill could achieve. 

He paid a high price for this success, because the war accelerated the decline of the overstretched Empire. It is an irony of history that it was Churchill, the imperialist, who was forced to expedite this decline. But as a political realist he had only one choice: a junior partnership with either democratic America or Nazi Germany. It wasn’t difficult to recognize which of the two arrangements was more likely to further British interests. 

Now that the Soviet Union and the United States had entered the war, it was time for Churchill to give up his position as Hitler’s main rival, because the other Allies would bear the main burden of the war from then on. Although Churchill magnanimously promised his new ally Stalin all the assistance “that time, geography, and our growing resources allow,” the British shipments were marginal. In fact, Churchill had for some time been blocking an invasion of the continent by the Western powers, the “Second Front,” because he feared it would turn into a disaster.

“You British are afraid of fighting,” Stalin said derisively. “You should not think that the Germans are supermen.” 

Civilians Were Primary Victims of British Bombings 

Hoping to make a significant contribution to victory over the Nazis, the British began systematically bombing German cities in the spring of 1942. Despite his occasional doubts, Churchill was relentless. After viewing film of devastated German cities, he asked: “Are we beasts? Are we taking this too far?” 

Churchill even toyed with the idea of dropping poison gas on German cities, but his generals objected. 

While the RAF’s attacks on armament factories and rail lines did shorten the war by several months, the primary victims of its bombing of residential areas were civilians.

About 600,000 Germans died in the bombings, most of them women, old men and children. A number of cities were all but destroyed. 

When Dresden was destroyed near the end of the war, in February 1945, even Churchill admitted that the bombings were “mere acts of terror and wanton destruction.” 

By then, the duel had been decided long ago, and the only decision remaining for the Allies was to determine what to do with Hitler and the Germans once they were defeated. 

As he had done so many times before, Churchill vacillated between extremes, between a Carthaginian peace and chivalrous generosity. In the end, Stalin’s and Roosevelt’s ideas prevailed. 

Churchill’s Role in the Expulsion of Germans from Eastern Europe

Churchill did, however, contribute to the expulsion of Germans from Eastern Europe, as historian Detlef Brandes has shown. He did so by supporting (and thus legitimizing) the demands of the Polish and Czechoslovak exile governments in London. According to Churchill, the Germans were to “be given a brief amount of time to gather the bare necessities and leave.”

At first, he was referring to East Prussia and the Sudetenland, but he eventually included Pomerania and parts of Silesia in his plans. 

It was one of Churchill’s darkest hours when, at the Summit of the Big Three in Tehran in 1943, he picked up three matchsticks, which were meant to represent Germany, Poland and the Soviet Union. He had already agreed to Stalin’s demand that part of Poland was to go to the Soviet Union. Now he placed the matches together to illustrate the consequences: By pushing the Soviet match toward the West, he was also shifting the positions of the other two matches. Stalin found this depiction of Poland’s westward shift amusing. 

Of course, the Germans would have to vacate the territory that fell to Poland. As a result, several million people were ultimately rounded up, robbed and expelled, and tens of thousands died during the forced marches. 

‘A Tragedy on a Prodigious Scale’ 

Churchill later criticized the brutal behavior of the Poles and Soviets, calling it “a tragedy on a prodigious scale” — as if ethnic cleansings had ever been anything but tragic. 

And what was to happen to Hitler, who, by starting the war in the first place, was ultimately responsible for the entire calamity?

Before the Holocaust, Churchill toyed with the idea of banishing Hitler and other top Nazis to an isolated island, just as Napoleon had once been banished to Elba. Or perhaps he was simply tipsy when he voiced this idea. 

But when the Holocaust began, such bizarre ideas were quickly taken off the table. Churchill learned of the Nazis’ crimes after the British cracked the code the Germans had used to encrypt SS and police reports on the massacres of Jews in the Soviet Union in the summer of 1941. 

In 1942, the prime minister told the cabinet that he would have Hitler put to death if he were captured — without a trial and in the electric chair, like a “gangster.” 

For Churchill, Hitler was the “mainspring of evil.” 

As we know, Hitler committed suicide in his bunker a few days before Germany capitulated in 1945. As Churchill writes in his memoirs, it was a preferable end for the Nazi dictator, after all.

He had finally prevailed, and the duel had ended. 

In July 1945, when the victorious Churchill toured the ruins of Berlin, he asked to be taken to the bunker where Hitler ended his life. He was also shown the spot in the courtyard of the Reich Chancellery where the dictator’s body was incinerated. 

Of course, Churchill’s visit was not announced ahead of time. Nevertheless, a large number of people gathered in front of the Chancellery, and when Churchill walked through the crowd, he was astonished to hear the Germans celebrate him as a hero. Only one old man shook his head disapprovingly. 

And that was how it was with Churchill. 

There are those who dislike him because he was an imperialist, because a single human life meant little to him, and because he lost his sense of perspective during the bombing war and endorsed ethnic cleansing. 

In the end, however, we can only be pleased that he won the duel. 

__________

Full article and photo: http://www.spiegel.de/international/europe/0,1518,712259,00.html

Lorenz of Arabia

As a wartime strategy, Germany tried to foment a Grand Jihad in Muslim lands.

The Ottoman Empire took its time to die. Hovering around the deathbed, the Great Powers of the late 19th century—Russia, France, Germany and Britain—were eager to have a share of the spoils and fearful that others might pre-empt them. None was so eager or so greedy as the German emperor, Kaiser Wilhelm II.

A grandson of Queen Victoria, the kaiser nonetheless found the British a “hateful, lying, conscienceless people of shopkeepers.” He especially resented that they were ruling India. In the course of visiting Turkey and its Arab provinces, he fantasized that he could build an empire out of these lands, a German counterweight to British India. This foolish and neurotic fellow has much to answer for. Sean McMeekin, a professor at Bilkent University in Ankara, Turkey, now produces a charge sheet, and it is detailed and instructive.

The first step in the kaiser’s policy of expansion was to build a railway from Germany to Constantinople, eventually terminating in Baghdad, with an extension to the Persian Gulf. This great engineering feat, begun in 1903, was intended to carry German merchandise on German rails, but its military purpose was clear—to establish German hegemony in Ottoman lands. But intervening mountain ranges in eastern Turkey made for slow progress and prevented the railway’s completion in time to help fulfill the kaiser’s ambitions before war broke out in August 1914.

In Turkey itself, in the prewar years, revolution was in the air, complicating the Germans’ calculations. The Young Turks, conspirators with an army background, rebelled against the sultanate and pushed for constitutional reforms, forming the government in 1908. Still, they were uncertain how to modernize and preserve the empire. In the crisis of 1914 they were pressured into an alliance with Germany, and this alliance brought about the collapse that they had hoped to avoid.

For Germany, the Ottoman alliance was a help, but not enough in itself. Facing Russia in the east and Britain and France in the west, Germany simply did not have the manpower or the means to fight on multiple fronts. Complex strategies of subversion were devised instead. They were to pay off in one notorious case, when the Germans, in 1917, sent Lenin in a special train to launch the Bolshevik Revolution and take Russia out of the war. The Germans encouraged Zionism, too, in the belief that Germany could recruit the loyalty of persecuted Russian Jews.

The strategy of subversion that most interests Mr. McMeekin in “The Berlin-Baghdad Express” was the kaiser’s plan to foment rebellion among Muslims living under British rule. Toward this end he pushed for a Grand Jihad, the aim of which was to revive the figurehead of a Sultan Caliph, to whom all Muslims of the world would show loyalty. If Muslims in Egypt and India could be persuaded to rise and free themselves from their colonial masters, the kaiser believed, the British Empire would lose its prize possessions and the British could not win the war.

In charge of this Grand Jihad was Baron Max von Oppenheim, a rich dilettante, an Arabist and an Anglophobe who knew how to excite the kaiser with the news that all Muslims were looking to him for leadership. Urged by the Germans, Ottoman sheiks, all of them Sunni, duly issued fatwas ordering Muslims to kill infidels. Mr. McMeekin makes it plain that this gave Turks license for the mass murder of Armenians and Greeks, the infidels and enemies within reach. The impact of the fatwas was dissipated by the absurd fact that they had to exempt infidels who were allies, namely Germans, Austrians and Hungarians.

Meanwhile, Oppenheim put out a mass of printed propaganda and sent German agents fanning out to one Muslim ruler after another, urging each to pursue jihad. As Mr. McMeekin shows, these agents had experience of the Muslim world; they were usually linguists, explorers and scholars, at least as impressive and as hardy as Lawrence of Arabia on the Allied side. To their dismay, though, Oppenheim’s agents discovered that the position of Sultan Caliph was of no more interest in the broad Muslim world than the position of Holy Roman Emperor was in Christendom.

What really mattered to the Muslims, as Mr. McMeekin puts it, “was superior force in theatre, pure and simple.” The Shia Grand Mufti of Karbala gave the Germans a solitary success by signing up for jihad, but the emir of Afghanistan, the shah of Persia and the religious dynasty of Sanussi in Libya were among those waiting to see which side would ultimately win the war before committing themselves. Of course, Muslim leaders were delighted to be propositioned by German agents and in return for subsidies and armaments made the airiest promises of support, exactly as they were doing with the British, playing one side off against the other.

Sherif Hussein of Mecca, Mr. McMeekin notes, was the most skillful of all these blackmailers. Head of the Hashemite family and engaged in tribal rivalry in Arabia, he had made sure to send his sons to treat with Oppenheim while also testing what the British might give him. The price he extracted from Britain was kingdoms for himself and for two of his sons, and he was duly rewarded with them when the war ended.

In addition to bringing to life a fascinating episode in early 20th-century history, “The Berlin-Baghdad Express” contains several timely lessons and cautionary tales. Purchased loyalty is worthless. Western countries may possess superior military force, but they are outwitted time and again by diplomacy as practiced by Muslim leaders. Lastly, there is no such thing as global Islamic solidarity—jihad is an expedient, not a belief system.

Mr. Pryce-Jones is the author of, among other books, “The Closed Circle: An Interpretation of the Arabs.”

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703461504575444532238358048.html

A Renegade and His Regrets

A Confederate who refused to quit fighting but ended up a model citizen.

In August 1865 the French military garrison at Matehuala, in central Mexico, fully expected to be slaughtered by the Mexican troops besieging the outpost. Napoleon III had imposed Emperor Maximilian I on the country the year before, supplanting Mexican President Benito Juarez. Now forces loyal to Juarez were fighting back, driving for Mexico City. Only Matehuala stood in their way.

Then word arrived that the French might have an unlikely savior: Jo Shelby, a renowned Confederate cavalry general who had refused to surrender when the Civil War ended in April. Having led a contingent of Southerners into Mexico, many of them soldiers, he now offered his help to the French. He was ready to attack the Mexican army, he declared—President Juarez, after all, enjoyed the U.S. government’s support, and Shelby was not fond of the U.S. government.

As historian Anthony Arthur notes in “General Jo Shelby’s March,” Maximilian would ultimately refuse Shelby’s broad offer of support. But the beleaguered French garrison at Matehuala welcomed him. Shelby’s adjutant, Maj. John Newman Edwards, later described the cavalry attack on Juarez’s forces: “Shelby’s charge was like a thunder-cloud. Nothing could live before the storm of its revolver bullets. Lurid, canopied in smoke-wreathes, pitiless, riding right onward.” The Mexicans fled “in hopeless and helpless flight.” Shelby and his men were fêted by the grateful French for three days.

The remarkable story of how Gen. Jo Shelby (1830-97)—his full name was Joseph Orville, shortened to J.O., or Jo—came to be fighting for France in Mexico is the focus of Mr. Arthur’s work, a page-turner of a history that the author completed shortly before his death last year. Another volume or two would be needed to capture Shelby in full, but Mr. Arthur sketches in enough of Shelby’s early life and Civil War exploits to give us a vivid sense of the unbowed, renegade commander who headed south of the border.

Shelby grew up with a passion for horses and riding in Lexington, Ky., but when he turned 21 and inherited $80,000 from the estate of his father (who had died when Shelby was 5 years old), he moved to Waverly, Mo., and started a rope-manufacturing business. There, on the Missouri River in the western part of the state, the wealthy, charismatic young man built a mansion and bought steamboats. He also took part during the 1850s in the bitter fighting between advocates of the Kansas Territory’s entry into the Union as a slave-free state and “border ruffians,” many of them pro-slavery Missourians. Shelby, a gifted horseman who counted himself among the proponents of slavery, emerged from the bloody campaign (in the end, Kansas was admitted as a free state in 1861) with a reputation as a fierce and wily fighter.

Shelby’s legend only grew with the outbreak of war and his appointment as the commander of all Confederate cavalry forces between the Arkansas and Mississippi rivers. The soldiers he led directly were known as the Iron Brigade. In the fall of 1863, the brigade launched a wide-ranging raid that covered more than 1,500 miles, destroyed 10 Union forts and, perhaps most important, bolstered morale throughout the South.

A grim chapter in Shelby’s story came in April 1864 at Marks’ Mills, Ark. Shelby led a technically brilliant capture of a Union supply train consisting of 240 wagons and five pieces of artillery, guarded by 1,500 men—all killed or captured by Shelby’s raiders. But the Confederates also killed dozens of black teamsters, servants and escaped slaves. They were shot down, a Union witness said, like dogs. Mr. Arthur says that Shelby’s rampaging soldiers were beyond his control: “No orders, threats, or commands could restrain the men from vengeance on the negroes,” according to Maj. Edwards, an eyewitness. (Revenge for what? Attempting to gain their freedom?) In any case, the author doesn’t present any evidence that the general regretted the shameful episode.

At war’s end, Shelby led an embittered expedition of perhaps a thousand men to Mexico. Their number included about 200 of his former troopers, soldiers from other Confederate commands and what must have seemed like half the Confederate government, including the governors of Missouri, Louisiana, Tennessee and Texas.

Shelby and the others didn’t enter Mexico with military action in mind; they were simply determined to make a new start far from the hated Yankees. But the ex- Confederates rode into the middle of another civil war. As Mr. Arthur relates, Shelby’s ragtag group ran a gantlet of bandits, Apaches and Mexican rebel forces—including those at Matehuala—as it headed to Mexico City to offer the emperor military assistance. Maximilian received Shelby cordially but astutely judged that aligning himself with former Confederates would only inflame the U.S., which already resented France’s incursion in Mexico.

But Maximilian invited the Southerners to stay on and granted them land where they could farm. Shelby did take up farming in Mexico—briefly. When France withdrew its support of Maximilian in 1867, Shelby and the other Americans who had come to Mexico after the Civil War scrambled to escape the doomed regime.

Shelby—back in America barely two years after Appomattox—embarked on an astonishingly successful campaign to rehabilitate his reputation. He restarted his life in Missouri and renounced his support for slavery, becoming such a model citizen that in 1893 President Grover Cleveland appointed him U.S. marshal for western Missouri.

One of Shelby’s first acts as marshal was to name a black man as a deputy. Racists denounced the old rebel general, who responded with a written statement. He had “no patience with that sentiment that gropes always among the tombstones,” Shelby said, “instead of coming out into the bright light of existing life and conditions.” As Mr. Arthur concludes, Shelby was a “man who fought bravely for a doomed cause, and who ultimately reconciled himself not only to defeat, but to the fact that his cause had been fatally flawed by the greatest evil”—slavery.

Mr. Ferguson is a Rossetter House Foundation Scholar of the Florida Historical Society.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704164904575422410676917510.html

Power couple

Two presidents, two speeches — and a profound question about the American military that has yet to be answered

The two most famous presidential speeches of the last 50 years occurred within three days of each other, yet exist in different spheres of memory. Dwight D. Eisenhower’s farewell address flickers in the foggy black and white of early TV, a strange benediction from an old warrior; John F. Kennedy’s inaugural address pierces the crystal blue of a Washington January, a burst of color and energy.

“We shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe to assure the survival and the success of liberty,” Kennedy intoned.

Three days earlier, in a very different scene, his predecessor struck a note of aged wisdom, warning his countrymen to “guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex.”

Now, 50 years later, the Kennedy and Eisenhower libraries are preparing to mark the anniversary of the two famous speeches, but not highlighting any link between them. In fact, they represented the final shots in a year-long duel between the two men. Their dispute — whether boundless military spending should be seen as a symbol of national resolve, or a drain on national resources — is even more relevant today, without the Soviet Union as a rival. The arguments Eisenhower and Kennedy put forth — and the world views they presented — frame a debate that’s been revived in Washington as recently as last week, when Defense Secretary Robert Gates announced sweeping cuts to the service budgets, calling for “separating appetites from real requirements.”

Throughout his time in office, Eisenhower believed that as long as the United States had the power to destroy any foe, peace could be maintained. This required a large military budget, but not a limitless one. Kennedy, however, believed the United States should always be dominant in the world, and that Eisenhower, in holding the line on military spending, had allowed a “missile gap” to develop between the USSR and the USA. The gap didn’t exist. (It’s unclear when Kennedy, who was eventually briefed by the CIA, found out.) But so many military leaders, journalists, and defense contractors insisted it did exist that Kennedy gained significant political advantage, to Eisenhower’s undying frustration.

Eisenhower’s fear of a confluence of political and corporate forces pushing for military spending was palpable. But so too was Kennedy’s belief that visible military superiority was essential to the nation’s destiny. The two speeches sketched out different visions of the source of American strength. Eisenhower found it in the small towns of his own boyhood, in the solitary pursuit of virtue and innovation; Kennedy found it in the nation’s willingness to mobilize behind a set of ideals.

By January 1961, when both men delivered their speeches, the dispute between Eisenhower and Kennedy was ending. The questions driving it, however, would continue to be debated. How much defense spending is too much? Is there ever a limit? In politics, will the more hawkish, fear-provoking stance always prevail?

Despite the end of the Cold War, the equation of military hardware with American strength endures, as indelible as Kennedy’s fierce expression on that January afternoon. But so too does the skepticism represented by Eisenhower, the idea that while some defense spending is crucial, a lot of it is simply a tool of special interests — big corporations, opportunistic politicians, ideologues with hidden agendas — hiding under a cloak of patriotism. Eisenhower, in his waning days in office, could only wonder how he, a five-star general who was instrumental in winning World War II, could lose the trust of the people on national security, especially over a missile gap that did not, in fact, exist.

“Eisenhower’s vision played out through the Cold War, but in some respects it’s more remarkable that it persisted 20 years beyond the Cold War,” declares Christopher A. Preble, author of “John F. Kennedy and the Missile Gap,” adding, “Eisenhower was a man going into retirement who really worried about what it would take to stand up to this. He perceived that a person of his stature wasn’t going to come along again. It was a warning, and a lament.”

In retrospect, the saga of the “missile gap” is the true precursor to that of Saddam Hussein’s weapons of mass destruction. Both followed events — the test of a Soviet intercontinental missile and the 9/11 attacks — that caused many people to think in terms of worst-case scenarios. Both were fanned by selective use of intelligence, much of it brandished by ideologues.

In the case of the missile gap, however, the president was the one urging moderation. Eisenhower was the product of a prairie boyhood in Abilene, Kan. Though his Army career paralleled the unprecedented growth of the military, he never lost the sense that true American values were found near the hearths of small-town America, and that concentrating too much power in big institutions could quell that spirit.

As president, he set his own defense policy. A professional strategist, he understood the importance of having a good plan and sticking to it. With a defense based on massive nuclear deterrence, Eisenhower determined that some of the conventional forces beloved of the Joint Chiefs — increased Army troops, new Air Force jets, more Navy carriers — were unnecessary. And, with his unique credibility, he felt freer than most presidents to reject the recommendations of the uniformed commanders.

Besides, even with a strategy based on creating a standoff between the two nuclear-fueled adversaries, the United States enjoyed major advantages: Its network of bases in Europe gave it numerous launch points for an attack on the Soviet Union, while the Soviets had no similar access to the United States. Then, in 1957, the Soviets tested an intercontinental missile, and soon after launched a satellite into space. The Soviet achievements caught the American public off guard; there was sudden concern about Soviet superiority.

Almost immediately, experts with axes to grind, either ideological (Paul Nitze, whose Truman-era call for a defense buildup was sidelined by Eisenhower), or financial (Wernher von Braun, seeking cash for his rocket program), or bureaucratic (former Army commanders Matthew Ridgway and Maxwell Taylor, who deplored Eisenhower’s neglect of conventional forces), or political (Kennedy and Stuart Symington, both presidential contenders), began to argue that the Soviet missile arsenal either dwarfed, or would soon dwarf, the American one.

Their evidence was a series of cryptic statements by those on the fringes of the intelligence establishment, amplified by journalists (including Joseph Alsop, a Kennedy supporter) who took the most speculative, worst-case scenarios and declared them to be true.

The administration’s denials sounded mealy-mouthed in comparison. When Eisenhower’s defense secretary, Thomas Gates, posited that even if a missile gap were to exist, there would be no “deterrence gap,” because of American air superiority, many in the country took it as confirmation that a missile gap did, in fact, exist.

Most distressing to Eisenhower was the fact that some military commanders, hoping to beef up budgets, seemed to be among those leaking dire estimates. And then there were Kennedy’s own needling comments.

“They cannot hide the basic facts that American strength…has been slipping, and communism has been steadily advancing,” Kennedy told the American Legion.

Eisenhower’s farewell was one of few presidential addresses to emphasize moderation; it stressed the need for “balance in and among national programs” and between “the private and public economy.” The last was clearly a reference to the sudden prominence of defense contractors among leading American corporations — firms with a live-or-die stake in government spending. Most distressing to him was that Kennedy had gone into factory towns and proclaimed that Eisenhower’s stinginess on defense had cost American jobs.

Aerospace contractors, for instance, were pushing the B-70 Valkyrie bomber to replace the B-52, at a cost of untold billions. Eisenhower felt it was a costly waste in the missile age; Kennedy suggested that it was necessary both for defense and to keep the defense industry churning. Eisenhower rejected the idea that defense spending was good for the economy; unlike other types of public investment (such as his interstate highway system), unneeded defense hardware moldered in hangars and warehouses, with little usefulness.

Kennedy saw the issue through a completely different prism, believing that limiting defense spending to preserve the private economy was tantamount to declaring that America was too poor to defeat communism; throughout the campaign, he stressed the need to “bear any burden” against communism. “Only a few generations have been granted the role of defending freedom in its hour of maximum danger,” Kennedy declared in his inaugural address. “I do not shrink from this responsibility — I welcome it.”

The cause of “defending freedom” was close to Kennedy’s heart — and a key to overcoming one of his major political liabilities, his father’s support of the British appeasement of Nazi Germany. From his years at Harvard, Kennedy tried to separate his reputation from that of his father, writing a thesis that analyzed the failure of appeasement. And by the ’50s, any softness on defense, by any national figure, was perceived as an echo of Munich. Harry Truman “lost” China; Adlai Stevenson, the Democratic presidential nominee in 1952 and 1956, was too professorial to be an effective defender of freedom. Rather than kindle memories of his father’s folly, Kennedy was determined to outflank the Eisenhower administration on defense.

All these factors were noticed by Eisenhower, who foresaw an endless defense mobilization at an unnecessary cost to what he, in his farewell address, called “our toil, resources, and livelihood.”

Fifty years later, such phrases are remembered, but their context is lost. Eisenhower and Kennedy’s views on military spending don’t fit the dominant historical narratives of either man. Eisenhower endures as the hero who won the war in Europe. The ’50s, the decade of his presidency, is recalled as a honeyed interlude, a national vacation after the trauma of the ’40s; it’s a serious misreading of an era of shell-shock and paranoia. As president, Kennedy retreated somewhat from the hawkishness of his campaign. (The Valkyrie was quietly scuttled.) In the Cuban Missile Crisis, he was a voice of moderation against some of the same hawks whom Eisenhower sought to contain. And after Kennedy’s death, his brothers chose to emphasize his liberalism; the Camelot myth takes no notice of his conservative stances.

Therefore, Kennedy’s inaugural address endures as an expression of energy and optimism (“Ask not what your country can do for you…”), its sterling phrases viewed largely through the lens of domestic progress. The military-industrial complex, meanwhile, is treated like a mystery, as if Eisenhower handed down a riddle for posterity. The phrase has been adopted by the antimilitarist left, invoked whenever opposition arises to an American military action. But there is little evidence that Eisenhower worried about militarism leading to war; his concerns were for the shattering impact of fear-mongering and budgetary waste on the domestic well-being of the country. Subsequent events have proven the acuity of his vision.

Members of Congress, seeking federal largesse for their districts, routinely broker deals for weapons systems that even Defense Department planners find unnecessary. Commanders routinely follow the revolving door from the Pentagon to industry, where they help maintain funding through a fusillade of lobbying — more than $130 million worth per year — and volleys of campaign contributions — $24 million for the 2008 cycle, according to the Center for Responsive Politics.

Presidential candidates see far more political upside in supporting defense spending than in opposing it. And even with a budget of about $700 billion — six times more than any other nation, by most estimates, and more than the next 18 combined — the perception of softness on national security can doom a president who seeks to trim the defense budget.

Former President George W. Bush’s advisers deprecated some costly weapons systems, but Bush’s own spare-no-cost rhetoric made any cuts impossible. President Obama has promised a serious effort to weed out waste, and even cited Eisenhower’s desire for a balance among national programs. Gates, too, has lauded Eisenhower as a prudent critic of the Pentagon. But the Obama administration’s level of commitment remains unclear; its fiscal review team has largely exempted defense spending from its deficit-reduction planning.

Kennedy’s zeal in promoting military hardware as an expression of strength suggests there is more behind political backing for a wasteful Pentagon bureaucracy than fear and manipulation. For the United States, a massive, spare-no-expense military functions like the ornate castles built by European monarchs: Its very wastefulness projects an image of wealth and power.

But when confronted with some of the arguments that feed the need to project power, it is vital to understand that worst-case estimates, magnified in the media and political glare, can surpass all bounds of rationality. The image sought by Kennedy was grounded in a real desire to boost American power, but constructed on a foundation of untruths. Eisenhower spoke to a reality that America, five decades hence, still can’t fully accept.

Peter S. Canellos is the editorial page editor of the Globe.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/08/15/power_couple/

Chronicling the Holocaust from Inside the Ghetto

Roughly 50 men and women in the Warsaw Ghetto chose a special form of resistance. In a secret archive, they documented their path to doom for future generations, chronicling the Nazis’ crimes as they were being perpetrated.

Jewish men, women and children being marched out of the Warsaw Ghetto in May 1943.

David Graber was 19 when he hurriedly scribbled his farewell letter. “I would be overjoyed to experience the moment when this great treasure is unearthed and the world is confronted with the truth,” he wrote.

While German soldiers combed the streets outside, Graber and his friend Nahum Grzywacz buried 10 metal boxes in the basement of an elementary school on Nowolipki Street in Warsaw’s Jewish ghetto. It was Aug. 2, 1942.

The boxes were dug up more than four years later. By then, Graber and Grzywacz were long dead, murdered like almost all of their roughly 50 collaborators. Only three survived the Nazi terror. They provided the information that led to the recovery of the boxes.

The buried treasure consisted of about 35,000 pieces of paper that a group of chroniclers had collected and used to document how, during World War II, the German occupiers had deprived Warsaw’s Jews of their rights, tormented them and, finally, killed them in the death camps. “These materials tell a collective story of steady decline and unending humiliation, interspersed with many stories of quiet heroism and self-sacrifice,” writes American historian Samuel Kassow. His book “Who Will Write Our History?: Rediscovering a Hidden Archive from the Warsaw Ghetto,” which has now been published in German translation, throws a new light on the exceptional source material.

Nightmarish Body of Text

Jews also collected documents and wrote diaries elsewhere in Europe during the Holocaust, but the Warsaw archive is the most comprehensive and descriptive collection of all. The Polish capital was home to Europe’s largest Jewish community, which became a magnet for many talented scientists and writers. As one female author wrote, she hoped that her account would be “driven under the wheel of history like a wedge.” Contributions like hers would turn the clandestine archive into probably the most nightmarish body of text ever written about the Holocaust.

The group called itself Oyneg Shabes, or “Sabbath Joy,” because it usually convened on Saturday afternoons, beginning in November 1940. The chief thinker of the group, which included a large number of intellectuals, journalists and teachers, was Emanuel Ringelblum, a historian born in Galicia in 1900. He had written a doctoral dissertation at the University of Warsaw on the history of the city’s Jews prior to 1527, and he was part of the Jewish self-help organization “Aleynhilf.”

Two weeks before the outbreak of World War II, Ringelblum attended the World Zionist Congress in Geneva as an envoy of the Marxist party Poalei Zion. The other delegates told him it was too dangerous to go back to Poland and urged him to stay in Switzerland, but Ringelblum wanted to be with his wife Yehudis and their nine-year-old son Uri. He had hardly returned home before German troops invaded Poland and captured Warsaw soon afterwards.

In October 1940, the occupation authorities decreed that all Jews were to be moved to a separate residential district. Workers then built a three-meter wall around the area. The Germans also relentlessly drove Jews from the surrounding countryside into the Warsaw Ghetto. Before long, half a million people were living in an area of only four square kilometers (1.5 square miles).

Ringelblum and his fellow members of Oyneg Shabes quickly recognized the dimensions of the drama and began to document it for posterity. They collected decrees, posters, ration cards, letters, diaries and drawings — documents of horror in Yiddish, German and Polish.

One of the documents specified the average daily calorie allotment for 1941, according to which Germans were to receive 2,613 kilocalories, Poles 699 and Jews only 184. The ghetto residents had to smuggle in food to survive. The archive used wages and prices on the black market to conduct market research and prepare sample calculations for a family of four.

Questionnaires and Essay Contests

Like ethnologists studying their own surroundings, the chroniclers went about investigating their environment. They issued standardized questionnaires and conducted hundreds of interviews with refugees and people on the verge of starvation.

Between 1940 and 1942, about 100,000 people died of hunger, exposure to cold temperatures and disease. In November 1941, Ringelblum, describing the deaths around him, wrote: “The most terrible thing is to look at the freezing children…Today in the evening I heard the wailing of a little tot of three or four years. Probably tomorrow they will find his little corpse.”

The archive held an essay contest to encourage traumatized children to tell their stories. A 15-year-old girl described how her mother had died next to her: “During the night, I felt her becoming cold and stiff. But what could I have done? I lay there until the morning, still clinging to her body, until a neighbor helped me lift her out of the bed and place her on the ground.”

Outside, residents constantly ran the risk of being stopped by a German policeman and then beaten or shot. The ghetto residents even had a name for a particularly dangerous bottleneck-like street: “The Dardanelles.”

Smuggled Evidence of the Extermination Program

In 1942, the chroniclers began to receive dramatic news from other parts of the country. Refugees told of mass shootings and synagogues burned to the ground. One described how the SS had used gas to kill people in railroad cars at Chelmno, west of Warsaw.

The industrial-scale mass killing had begun, leading many to ask themselves when the “Hell of Polish Jewry,” as the title of one of the documents in the archive read, would reach Warsaw. Several German officials had promised Jewish elder Adam Czerniakow that the Warsaw Ghetto would be spared. But on July 22, 1942, SS officer Hermann Höfle announced that the “resettlement” had begun. A few days later, the archivists’ helpers buried the first of the metal boxes.

The Gestapo and the Jewish police rounded up the residents and took them to the “transshipment point,” where the transports to the Treblinka death camp began. A particularly cynical proclamation, dated July 29, lured the starving Jews with the promise that anyone who went to the point voluntarily would receive a ration of three kilos of bread and one kilo of marmalade. In an effort to deceive those who had been left behind, deportees were forced to send reassuring postcards home from the death camps.

The archivists were thus studying the Holocaust even while it was in full swing. In several instances, they managed to smuggle evidence of the extermination program abroad, including to the BBC in London. Ringelblum hoped, in vain as it turned out, that his group had “completed a meaningful historical task and perhaps saved hundreds of thousands from extermination.”

The ghetto was quickly cleared. According to a statistic in the archive, 99 percent of all children had already been deported by November 1942. There were still 60,000 people living in the residential area, most of them men who worked in the workshops. Many turned over their personal reflections to the archive, documents of great emotional power.

Abraham Lewin, a teacher, described how his wife fell into the clutches of the henchmen: “There was a solar eclipse, and it was completely dark. My Luba was apprehended at a roadblock. I still see a shimmer of hope shining in front of my eyes. Perhaps she will be spared. And if not, may God prevent it.”

Uprising Suppressed

Another teacher, Natan Smolar, mourned his “only, beloved daughter Ninkele,” whose third birthday the family had just celebrated. “There were so many toys, and there was so much noise and play, so much happiness and shouting of children. And today Ninkele is no more, and her mother is gone, and so is my sister Etl.”

Those who had remained in the ghetto were plagued by feelings of guilt. They complained “that the Jews allowed themselves to be led like sheep to the slaughter.” One man wrote: “If only we had all climbed over the ghetto wall and stormed the streets of Warsaw, armed with knives, axes or even stones — then perhaps they would have killed 10,000 or 20,000, but never 300,000!”

There are hardly any documents left on the armed resistance that eventually did erupt, in April 1943. The Germans brutally suppressed the uprising. SS brigade leader Jürgen Stroop had the buildings burned down, one after another, and the main synagogue blown up. On May 16, 1943, he reported: “The former Jewish residential area of Warsaw no longer exists.”

By that time, historian Ringelblum and his family had already fled to the non-Jewish section of Warsaw. He spent the last few months of his life together with about 40 men, women and children in a 23-by-16-foot cellar underneath a greenhouse that belonged to a Polish vegetable merchant. Day after day, Ringelblum sat at the end of a long table between the rows of bunk beds, surrounded by his books and lists.

The hiding place was discovered in March 1944, when the girlfriend of the Polish man betrayed him after they had separated. Ringelblum was taken to the notorious Pawiak Prison, where his captors tortured him in the hope that he would reveal details about Jewish resistance fighters. Then the Germans shot the chronicler of their crimes, together with his family and the other prisoners.

Only six days before his hiding place was discovered, Ringelblum wrote to a friend about his archive: “If none of us survives, at least this will remain.”

__________

Full article and photo: http://www.spiegel.de/international/europe/0,1518,707506,00.html

How America got its name

The surprising story of an obscure scholar, an adventurer’s letter, and a pun

Each July 4, as we celebrate the origins of America, we look back ritually at what happened in 1776: the war, the politics, the principles that defined our nation.

But what about the other thing that defines America: the name itself? Its story is far older and far less often told, and still offers some revealing surprises.

If you’re like most people, you’ll dimly recall from your school days that the name America has something to do with Amerigo Vespucci, a merchant and explorer from Florence. You may also recall feeling that this is more than a little odd — that if any European earned the “right” to have his name attached to the New World, surely it should have been Christopher Columbus, who crossed the Atlantic years before Vespucci did.

But Vespucci, it turns out, had no direct role in the naming of America. He probably died without ever having seen or heard the name. A closer look at how the name was coined and first put on a map, in 1507, suggests that, in fact, the person responsible was a figure almost nobody’s heard of: a young Alsatian proofreader named Matthias Ringmann.

How did a minor scholar working in the landlocked mountains of eastern France manage to beat all explorers to the punch and give the New World its name? The answer is more than just an obscure bit of history, because Ringmann deliberately invested the name America with ideas that still make up important parts of our national psyche: powerful notions of westward expansion, self-reinvention, and even manifest destiny.

And he did it, in part, as a high-minded joke.

Matthias Ringmann was born in an Alsatian village in 1482. After studying the classics at university he settled in the Strasbourg area, where he began to eke out a living by proofing texts for local printers and teaching school. It was a forgettable life, of a sort that countless others like him were leading. But sometime in early 1505, Ringmann came across a recently published pamphlet titled “Mundus Novus,” and that changed everything.

The pamphlet contained a letter purportedly sent by Amerigo Vespucci a few years earlier to his patron in Florence. Vespucci wrote that he had just completed a voyage of western discovery and had big news to report. On the other side of the Atlantic, he announced, he had found “a new world.”

The phrase would stick, of course. But it didn’t mean to Vespucci what it means to us today: a new continent. Europeans of the time often used the phrase simply to describe regions of the world they had not known about before. Another Italian merchant had used the very same phrase, for example, to describe parts of southern Africa recently explored by the Portuguese.

Like Columbus, Vespucci believed the world consisted of three parts: Europe, Africa, and Asia. He also knew that the world was round, a fact that had been common knowledge since antiquity. This meant, he realized, that if one could sail far enough to the west of Europe, one would reach the Far East.

This was exactly what Vespucci and Columbus both believed they had done. Columbus, in particular, clung doggedly until the end of his life to the idea that in crossing the Atlantic he had reached the vicinity of Japan and China. He had no idea he had expanded Europe’s geographical horizons, in other words. He thought he’d shrunk them.

The expanding horizons began with Vespucci. In his letter, he reported sailing west across the Atlantic, like Columbus. After making landfall, however, he had turned south, in an attempt to sail under China and into the Indian Ocean — and had ended up following a coastline that took him thousands of miles almost due south, well below the equator, into a region of the globe where most European geographers assumed there could only be ocean.

When Ringmann read this news, he was thrilled. As a good classicist, he knew that the poet Virgil had prophesied the existence of a vast southern land across the ocean to the west, destined to be ruled by Rome. And he drew what he felt was the obvious conclusion: Vespucci had reached this legendary place. He had discovered the fourth part of the world. At last, Europe’s Christians, the heirs of ancient Rome, could begin their long-prophesied imperial expansion to the west.

Ringmann may well have been the first European to entertain this idea, and he acted on it quickly. Soon he had teamed up with a local German mapmaker named Martin Waldseemüller, and the two men printed 1,000 copies of a giant world map designed to broadcast the news: the famous Waldseemüller map of 1507. One copy of the map still survives, and it’s recognized as one of the most important geographical documents of all time. That’s because it’s the first to depict the New World as surrounded by water; the first to suggest the existence of the Pacific Ocean; the first to portray the world’s continents and oceans roughly as we know them today; and, of course, the first to use a strange new name: America, which Ringmann and Waldseemüller printed in block letters across what today we would call Brazil.

Why America? Ringmann and Waldseemüller explained their choice in a small companion volume to the map, called “Introduction to Cosmography.” “These parts,” they wrote, referring to Europe, Asia, and Africa, “have in fact now been more widely explored, and a fourth part has been discovered by Amerigo Vespucci….Since both Asia and Africa received their names from women, I do not see why anyone should rightly prevent this from being called Amerigen — the land of Amerigo, as it were — or America, after its discoverer, Americus.”

Libraries today attribute this little book to Waldseemüller. But the work itself actually identifies no author — and Ringmann’s fingerprints, I would argue, appear all over it. The author, for example, demonstrates a familiarity with ancient Greek, a language that Ringmann knew well and that Waldseemüller did not. He also incorporates snatches of classical verse, a literary tic of Ringmann’s. The one contemporary poet quoted in the text, too, is known to have been a friend of Ringmann.

Waldseemüller the cartographer, Ringmann the writer: This division of duties makes sense, given the two men’s areas of expertise. And, indeed, they would team up in precisely this way in 1511, when Waldseemüller printed a new map of Europe. In dedicating that map, Waldseemüller noted that it came accompanied by “an explanatory summary prepared by Ringmann.”

This question of authorship is important because whoever wrote “Introduction to Cosmography” almost certainly coined the name America. Here again, I would suggest, the balance tilts in the favor of Ringmann, who regularly entertained himself by making up words, punning in different languages, and investing his writing with hidden meanings. In one 1511 essay, he even mused specifically about the naming of continents after women.

The naming-of-America passage in “Introduction to Cosmography” is rich in precisely the sort of word play Ringmann loved. The key to the passage is the curious name Amerigen, which combines the name Amerigo with the Greek word gen, or “earth,” to create the meaning “land of Amerigo.” But the name yields other meanings. Gen can also mean “born,” and the word ameros can mean “new,” suggesting, as many Renaissance observers had begun to hope, that the land of Amerigo was a place where European civilization could go to be reborn — an idea, of course, that still resonates today. The name may also contain a play on meros, a Greek word sometimes translated as “place,” in which case Amerigen would become A-meri-gen, or “No-place-land”: not a bad way to describe a previously unnamed continent whose full extent was still uncertain.

Whatever its meanings, the name America filled a need. By the middle of the 16th century it had caught on, and mapmakers were using it to define not only South but North America. But Ringmann himself didn’t live to see the day. By 1511 he was complaining of weakness and shortness of breath, and before the year’s end he was dead, probably of tuberculosis. He hadn’t yet reached 30.

Both Ringmann and Waldseemüller soon slipped into obscurity. The two would remain forgotten for centuries, but Waldseemüller’s star rose again in the 20th century, thanks to the accidental rediscovery, in 1901, of the sole surviving copy of his great map. A century later, calling it America’s birth certificate, the Library of Congress bought the map for the astonishing sum of $10 million — and in 2007, to celebrate the 500th anniversary of the naming of America, put it on public display. Waldseemüller now seems guaranteed permanent celebrity as the author of one of the most important documents ever created.

History hasn’t served poor Matthias Ringmann nearly as well. That doesn’t seem quite fair. So tonight let’s send up a few of our fireworks in honor of the man who had the audacity to declare, before anybody else, that the world had a fourth part — and to imagine that he might be the one who could give it a name.

Toby Lester is a contributing editor to The Atlantic and the author of “The Fourth Part of the World,” which comes out in paperback on Tuesday.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/07/04/where_america_really_came_from/

What George Washington Heard

As Americans prepare to celebrate the nation’s birth, it’s safe to say that the most familiar figure connected with that birth is George Washington. Although he didn’t actually sign the Declaration of Independence—he had been in New York since March, commanding the Continental troops there—his image is more completely bound up with the revolution and its success than any of America’s other patriarchs.

Whether known as the “Father of his Country,” the “Atlas of America,” the “Sage of Mount Vernon” or just the “Old Fox,” Washington has always been a figure of mythic proportions, and during his lifetime he was venerated practically as a living saint by many citizens of the new republic.

True, at 6-foot-2 he really did stand taller than most men of his time, and his repeated escapes from death in battle contributed to his somewhat divine aura. But despite the sentimental hagiography of Parson Weems, whose best-seller “The Life of Washington” (1800) fairly reduced his subject’s career to a series of Aesop-like fables, Washington was a very human being, however formally he conducted himself in public. As his biographer Joseph Ellis notes, Washington cultivated his exceptional self-control to compensate for his weaknesses, among them a fiery temper and a passionate love for Sally Fairfax, the wife of a friend.

Perhaps Washington’s most endearing and least familiar weakness, however, was for music, theater and dancing. Unlike his secretary of state, Thomas Jefferson, a talented amateur violinist, Washington wasn’t a musician himself. But he took great pleasure in musical and theatrical events—both of which were closely intertwined in 18th-century America—and from early adulthood eagerly attended performances at theaters in Fredericksburg and Williamsburg, Va.

Throughout his life, Washington attended concerts wherever he traveled. During the American Revolution, while spending a night in Bethlehem, Pa., he enjoyed a concert of chamber music, and on a subsequent visit there in 1782, we read of his being serenaded, to his great pleasure, by a Moravian trombone choir.

Meanwhile, Washington was always eager to pay ready money for good music. In 1757, Philadelphia enjoyed its first two public concerts. For the second of these, then-Lt. Col. Washington purchased a block of tickets costing 52 shillings and six pence, or £2 12s 6d, a considerable outlay at a time when a teacher might earn an annual salary of £60. On another occasion, in 1787, now-Gen. Washington gave a dinner in Philadelphia for which he engaged a nine-piece orchestra to perform, this time paying £7 10s for the pleasure—no mean sum either.

Among his friends was the Philadelphia lawyer Francis Hopkinson, a talented amateur poet and musician who also designed several issues of Continental currency and did sign the Declaration of Independence. Acknowledged as the first native-born American composer, Hopkinson dedicated a set of “Seven Songs for the Harpsichord or Forte Piano” to Washington in 1788.

Not surprisingly, musical performances in the newborn nation were often connected with major current events. Thus shortly after America’s victory, in 1781, we find Washington in the audience at the “hotel of the Minister of France” (i.e., the French Embassy) in Philadelphia to enjoy the celebratory premiere of “The Temple of Minerva, America Independent, an oratorical entertainment,” another Hopkinson opus. Similarly, in May 1787, four days after the opening of the Constitutional Convention in Philadelphia, Washington notes in his diary that he “accompanied Mrs. Morris to the benefit concert of a Mr. Juhan.” The program, divided into three “acts,” included overtures by Pierre-Alexandre Monsigny, William Shield and Padre Giovanni Battista Martini (who had given lessons to the young Mozart in Bologna). There was also a “Sonato Piano Forte” [sic] by the English-born Alexander Reinagle, one of America’s leading musicians of the day.

Reinagle, who had been a close friend of Carl Philipp Emanuel Bach in Europe, had settled in Philadelphia while it was the nation’s capital and was engaged by President Washington as music master to his step-granddaughter, Nelly Custis, providing a noteworthy, if indirect, connection between the Father of his Country and the son of Johann Sebastian Bach.

Apart from attending public performances, Washington also relished listening to music-making at home, and not only Nelly but Washington’s stepchildren and step-grandchildren were offered a musical education befitting the landed gentry. A small household collection of music books is still preserved at Washington’s beloved Mount Vernon, along with Nelly’s harpsichord, which Washington bought for her.

In addition, various eyewitness accounts of Washington on the dance floor reveal a side far removed from the starchy image on currency and canvas. Most notable is the account by his step-grandson George Washington Parke Custis of a ball held a few weeks after the conclusion of the American Revolution: “The minuet was much in vogue at that period,” writes Custis, “and was peculiarly calculated for the display of the splendid figure of the chief, and his natural grace and elegance of air and manners. . . . As the evening advanced, the commander-in-chief, yielding to the general gaiety of the scene, went down some dozen couples in the contre-dance with great spirit and satisfaction.”

Throughout his life, Washington enjoyed the balls and “assemblies” that were a popular entertainment in 18th-century America. One of his final letters, written in 1799, is a poignant reply to the managers of the Alexandria Assembly in Virginia, declining their invitation: “Mrs. Washington and myself have been honored with your polite invitation to the assemblies of Alexandria this winter, and thank you for this mark of your attention. But, alas! our dancing days are no more.”

Though at age 67 Washington was still relatively vigorous, he caught a cold while riding five hours through a snowstorm on Dec. 12 of that year. Two days later the “American Cincinnatus” died, and to the raft of musical compositions already written saluting his military and presidential accomplishments was added another repertoire of dirges, elegies, odes and marches lamenting his passing. The first, Benjamin Carr’s dignified “Dead March and Monody for General Washington,” was ready for performance in Philadelphia a mere 12 days later, with musical offerings continuing to appear up and down the Eastern seaboard throughout the winter of 1800.

The music-loving “Sword of the Revolution” probably would have enjoyed listening to them.

Mr. Scherer writes about music and the fine arts for the Journal.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703426004575338861774677920.html

The Feuding Fathers

Americans lament the partisan venom of today’s politics, but for sheer verbal savagery, the country’s founders were in a league of their own. Ron Chernow on the Revolutionary origins of divisive discourse.

In the American imagination, the founding era shimmers as the golden age of political discourse, a time when philosopher-kings strode the public stage, dispensing wisdom with gentle civility. We prefer to believe that these courtly figures, with their powdered hair and buckled shoes, showed impeccable manners in their political dealings. The appeal of this image seems obvious at a time when many Americans lament the partisan venom and character assassination that have permeated the political process.

Unfortunately, this anodyne image of the early republic can be quite misleading. However hard it may be to picture the founders resorting to rough-and-tumble tactics, there was nothing genteel about politics at the nation’s outset. For sheer verbal savagery, the founding era may have surpassed anything seen today. Despite their erudition, integrity, and philosophical genius, the founders were fiery men who expressed their beliefs with unusual vehemence. They inhabited a combative world in which the rabble-rousing Thomas Paine, an early admirer of George Washington, could denounce the first president in an open letter as “treacherous in private friendship…and a hypocrite in public life.” Paine even wondered aloud whether Washington was “an apostate or an imposter; whether you have abandoned good principles, or whether you ever had any.”

Such highly charged language shouldn’t surprise us. People who spearhead revolutions tend to be outspoken and courageous, spurred on by a keen taste for combat. After sharpening their verbal skills hurling polemics against the British Crown, the founding generation then directed those energies against each other during the tumultuous first decade of the federal government. The passions of a revolution cannot simply be turned off like a spigot.

By nature a decorous man, President Washington longed for respectful public discourse and was taken aback by the vitriolic rhetoric that accompanied his two terms in office. For various reasons, the political cleavages of the 1790s were particularly deep. Focused on winning the war for independence, Americans had postponed fundamental questions about the shape of their future society. When those questions were belatedly addressed, the resulting controversies threatened to spill out of control.

The Constitutional Convention of 1787 had defined a sturdy framework for future debate, but it didn’t try to dictate outcomes. The brevity and generality of the new charter guaranteed pitched battles when it was translated into action in 1789. If the Constitution established an independent judiciary, for instance, it didn’t specify the structure of the federal court system below the Supreme Court. It made no reference to a presidential cabinet aside from a glancing allusion that the president could solicit opinions from department heads. The huge blanks left on the political canvas provoked heated battles during Washington’s time in office. When he first appeared in the Senate to receive its advice and consent about a treaty with the Creek Indians, he was so irked by the opposition expressed that he left in a huff. “This defeats every purpose of my coming here,” he protested.

Like other founders, Washington prayed that the country would be spared the bane of political parties, which were then styled “factions.” “If I could not go to heaven but with a party,” Thomas Jefferson once stated, “I would not go there at all.” Washington knew that republics, no less than monarchies, were susceptible to party strife. Indeed, he believed that in popularly elected governments, parties would display their “greatest rankness” and emerge as the “worst enemy” to the political system. By expressing narrow interests, parties often thwarted the popular will. In Washington’s view, enlightened politicians tried to transcend those interests and uphold the commonweal. He was so opposed to anything that might savor of partisanship that he refused to endorse congressional candidates, lest he seem to be meddling.

In choosing his stellar first cabinet, President Washington applied no political litmus test and was guided purely by the candidates’ merits. With implicit faith that honorable gentlemen could debate in good faith, he named Alexander Hamilton as treasury secretary and Jefferson as secretary of state, little suspecting that they would soon become fierce political adversaries. Reviving his Revolutionary War practice, Washington canvassed the opinions of his cabinet members, mulled them over at length, then arrived at firm conclusions. As Hamilton characterized this consultative style, the president “consulted much, pondered much; resolved slowly, resolved surely.” Far from fearing dissent within his cabinet, Washington welcomed the vigorous interplay of ideas and was masterful, at least initially, at orchestrating his prima donnas. As Gouverneur Morris phrased it, Washington knew “how best to use the rays” of intellect emitted by the personalities at his command.

During eight strenuous years of war, Washington had embodied national unity and labored mightily to hold the fractious states together; hence, all his instincts as president leaned toward harmony. Unfortunately, the political conflicts that soon arose often seemed intractable: states’ rights versus federal power; an agrarian economy versus one intermixed with finance and manufacturing; partiality for France versus England when they waged war against each other. Anything even vaguely reminiscent of British precedent aroused deep anxieties in the electorate.

As two parties took shape, they coalesced around the outsize personalities of Hamilton and Jefferson, despite their joint membership in Washington’s cabinet. Extroverted and pugnacious, Hamilton embraced this role far more openly than Jefferson, who preferred to operate in the shadows. Although not parties in the modern sense, these embryonic factions—Hamiltonian Federalists and Jeffersonian Republicans—generated intense loyalty among adherents. Both sides trafficked in a conspiratorial view of politics, with Federalists accusing the Republicans of trying to import the French Revolution into America, while Republicans tarred the Federalists as plotting to restore the British monarchy. Each side saw the other as perverting the true spirit of the American Revolution.

As Jefferson recoiled from Hamilton’s ambitious financial schemes, which included a funded debt, a central bank, and an excise tax on distilled spirits, he teamed up with James Madison to mount a full-scale assault on these programs. As a result, a major critique of administration policy originated partly within the administration itself. Relations between Hamilton and Jefferson deteriorated to the point that Jefferson recalled that at cabinet meetings he descended “daily into the arena like a gladiator to suffer martyrdom in every conflict.”

The two men also traded blows in the press, with Jefferson drafting surrogates to attack Hamilton, while the latter responded with his own anonymous essays. When Hamilton published a vigorous defense of Washington’s neutrality proclamation in 1793, Jefferson urged Madison to thrash the treasury secretary in the press. “For God’s sake, my dear Sir, take up your pen, select the most striking heresies, and cut him to pieces in the face of the public.” When Madison rose to the challenge, he sneered in print that the only people who could read Hamilton’s essays with pleasure were “foreigners and degenerate citizens among us.”

Slow to grasp the deep-seated divisions within the country, Washington also found it hard to comprehend the bitterness festering between Hamilton and Jefferson. Siding more frequently with Hamilton, the president was branded a Federalist by detractors, but he tried to rise above petty dogma and clung to the ideal of nonpartisan governance.

Afraid that sparring between his two brilliant cabinet members might sink the republican experiment, Washington conferred with Jefferson at Mount Vernon in October 1792 and expressed amazement at the hostility between him and Hamilton. As the beleaguered president confided, “he had never suspected [the conflict] had gone so far in producing a personal difference, and he wished he could be the mediator to put an end to it,” as Jefferson recorded in a subsequent memo. To Hamilton, Washington likewise issued pleas for an end to “wounding suspicions and irritating charges.” Both Hamilton and Jefferson found it hard to back down from this bruising rivalry. To his credit, Washington never sought to oust Jefferson from his cabinet, despite their policy differences, and urged him to remain in the administration to avoid a monolithic uniformity of opinion.

Feeding the venom of party strife was the unrestrained press. When the new government was formed in 1789, most newspapers still functioned as neutral publications, but they soon evolved into blatant party organs. Printing little spot news, with no pretense of journalistic objectivity, they specialized in strident essays. Authors often wrote behind the mask of Roman pseudonyms, enabling them to engage in undisguised savagery without fear of retribution. With few topics deemed taboo, the press lambasted the public positions as well as private morality of leading political figures. The ubiquitous James T. Callender typified the scandalmongers. From his poison-tipped pen flowed the exposé of Hamilton’s dalliance with the young Maria Reynolds, which had prompted Hamilton, while treasury secretary, to pay hush money to her husband. Those Jeffersonians who applauded Callender’s tirades against Hamilton regretted their sponsorship several years later when he unmasked President Jefferson’s carnal relations with his slave Sally Hemings.

At the start of his presidency, Americans still viewed Washington as sacrosanct and exempt from press criticism. By the end of his first term, he had shed this immunity and reeled from vicious attacks. Opposition journalists didn’t simply denigrate Washington’s presidential record but accused him of aping royal ways to prepare for a new monarchy. The most merciless critic was Philip Freneau, editor of the National Gazette, the main voice of the Jeffersonians. Even something as innocuous as Washington’s birthday celebration Freneau mocked as a “monarchical farce” that exhibited “every species of royal pomp and parade.”

Other journalists dredged up moldy tales of his supposed missteps in the French and Indian War and derided him as an inept general during the Revolutionary War. In his later, anti-Washington incarnation, Thomas Paine gave the laurels for wartime victory against the British to Gen. Horatio Gates. “You slept away your time in the field till the finances of the country were completely exhausted,” Paine taunted Washington, “and you had but little share in the glory of the event.” Had America relied on Washington’s “cold and unmilitary conduct,” Paine insisted, the commander-in-chief “would in all probability have lost America.”

George Washington pleaded with Alexander Hamilton to end his feud with Thomas Jefferson, saying he hoped that “liberal allowances will be made for the political opinions of one another.” He continued, “Without these I do not see how the reins of government are to be managed, or how the union of the states can be much longer preserved.”

Another persistent Washington nemesis was Benjamin Franklin Bache, grandson of Benjamin Franklin, and nicknamed “Lightning Rod, Jr.” for his scurrilous pen. In his opposition newspaper, the Aurora, Bache questioned Washington’s loyalty to the country. “I ask you, sir, to point out one single act which unequivocally proves you a FRIEND TO THE INDEPENDENCE OF AMERICA.” Resurrecting wartime forgeries fabricated by the British, he raised the question of whether Washington had been bribed by the Crown or even served as a double agent.

So stung was Washington by these diatribes that Jefferson claimed he had never known anyone so hypersensitive to criticism. For all his granite self-control, the president succumbed to private outrage. At one cabinet session, Secretary of War Henry Knox showed Washington a satirical cartoon in which the latter was being guillotined in the manner of the late Louis XVI. As Jefferson recalled Washington’s titanic outburst, “The President was much inflamed; got into one of those passions when he cannot command himself,” and only regained control of his emotions with difficulty. A few years later, in a strongly worded rebuke to Jefferson, Washington reflected on the vicious partisanship that had seized the country, saying that he previously had “no conception that parties” could go to such lengths. He hotly complained of being slandered in “indecent terms as could scarcely be applied to a Nero, a notorious defaulter, or even to a common pick-pocket.” To Washington’s credit, he tolerated the press attacks and never resorted to censorship or reprisals.

As it turned out, the rabid partisanship exhibited by Hamilton and Jefferson previewed America’s future far more accurately than Washington’s noble but failed dream of nonpartisan civility. In the end, Washington seems to have realized as much. By his second term, having fathomed the full extent of Jefferson’s disloyalty, he insisted upon appointing cabinet members who stood in basic sympathy with his policies. After he left office, he opted to join in the partisan frenzy, at least in his private correspondence. He no longer shrank from identifying with Federalists or scorning Republicans, nor did he feel obliged to muzzle his blazing opinions. To nephew Bushrod Washington, he warned against “any relaxation on the part of the Federalists. We are sure there will be none on that of the Republicans, as they have very erroneously called themselves.” He even urged Bushrod and John Marshall to run as Federalists for congressional seats in Virginia.

Only a generation after Washington’s death in 1799, during the age of Andrew Jackson, presidents were to emerge as unabashed chieftains of their political parties, showing no qualms about rallying their followers. The subsequent partisan rancor has reverberated right down to the present day—with no relief in sight.

Ron Chernow is the author of “Alexander Hamilton” and “Titan: The Life of John D. Rockefeller, Sr.” His next book, “Washington: A Life,” is due out in October.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748704911704575326891123551892.html

Flowers for the Führer in Landsberg Prison

Hitler receives visitors in Landsberg Prison, including Rudolf Hess (second from right).

New historical documents show that Adolf Hitler wanted for nothing during his short incarceration at Landsberg Prison in 1924. He was able to hold court and maintain his political contacts — all with the consent of the prison management.

A personnel manager couldn’t have been more well-meaning in his description. “He was always reasonable, frugal, modest and polite to everyone, especially the officials at the facility,” prison warden Otto Leybold wrote about the inmate on Sept. 18, 1924. The prisoner, he added, didn’t smoke or drink and “submitted willingly to all restrictions.”

The prisoner to whom the warden was referring so reverentially was none other than Adolf Hitler. Still an ambitious beer-hall agitator at the time, Hitler was serving a prison sentence at Landsberg Castle for attempting to stage a coup against the Weimar Republic in November 1923, together with fellow right-wing extremists.

It was a defining period for Hitler and German history. According to his biographer Ian Kershaw, Hitler’s time in prison served as the genesis “of his later absolute preeminence in the völkisch movement and his ascendancy to supreme leadership.”

It is commonly known that the conditions of Hitler’s incarceration in Landsberg am Lech were comfortable, and that he used his time there to write “Mein Kampf.” But historic documents now offer new insights into how he was able to continue to organize his network under the eyes of the prison management.

Intake Book and Visitors’ Cards

The material, which probably stems from the former records office at Landsberg Prison, is to be sold at auction at the Behringer Auction House in the Bavarian city of Fürth on July 2. The bundle of paper includes 300 cards filled out by Hitler’s visitors, as well as extensive correspondence from the prison management.

Some of the documents were previously unknown, while others are transcripts of papers that have already been analyzed. They include a copy of the excessively mild sentence imposed by the Munich People’s Court: five years’ imprisonment at Landsberg Castle, with the possibility of parole.

One of the newly discovered documents is the prison’s “intake book,” which contains an entry that reads: “Hitler, Adolf.” Date of admission: April 1, 1924. Medical examination results: “healthy, of moderate strength.” Height: 1.75 meters (5’9″). Weight: 77 kilograms (169 lbs.) The names of the loyal followers who joined Hitler at Landsberg are listed on the same page: Friedrich Weber, Hermann Kriebel, Emil Maurice and Hitler’s later deputy, Rudolf Hess.

Hitler began to receive visitors shortly after he was admitted to the prison. Erich Ludendorff, the strategist behind the Battle of Tannenberg in World War I who, to his outrage, was acquitted of charges of involvement in the Hitler-led coup attempt, visited several times. Hitler’s other guests included “Captain Röhm, Munich,” “Councillor Dr. Frick, Munich” and “Alfred Rosenberg, certified architect and writer, Munich,” the inner circle of leaders of the young Nazi Party at the time. Röhm, Frick and Rosenberg later became head of the SA, interior minister of the German Reich and the Nazis’ chief ideologue, respectively.

‘Like Walking into a Delicatessen’

Other visitors could more readily be categorized as wealthy benefactors, like Helene Bechstein, the wife of a Berlin piano manufacturer, who shared Hitler’s love for the music of Richard Wagner. Another visitor, Hermine Hoffman from Munich’s Solln neighborhood, was nicknamed “Hitler’s Mommy.” According to the documents, the attentive visitor and widow of a school principal also had securities sent to Hitler.

As an inmate, Hitler wanted for nothing. His wing on the second floor was nicknamed “Feldherrenhügel,” or “the general’s hill.” His confidant Ernst Hanfstaengl later said, after having visited Hitler, that he felt as if he had “walked into a delicatessen. There was fruit and there were flowers, wine and other alcoholic beverages, ham, sausage, cake, boxes of chocolates and much more.” Despite having gained a significant amount of weight as a result of his lavish diet, Hitler apparently turned down Hanfstaengl’s suggestion that he get some exercise.

The details that are now emerging are unlikely to prompt a rewriting of history. But according to the Bavarian state archives and the Munich state archives, where Hitler’s estate and the incomplete Landsberg prison records are kept, the newly discovered papers provide “an impression of Hitler’s intensive contacts and his opportunities to be in contact with people.” Officials at the archives also say that there are “no questions” as to the authenticity of the material. Nevertheless, the documents have yet to be studied in detail.

Found in a Flea Market

They are being sold by the owner of a taxicab company, whose father apparently purchased them in the late 1970s, together with books from World War I, at a Nuremberg flea market.

It is quite possible that the documents were stolen when the Americans established a prison for war criminals at Landsberg in 1946, or that they were pilfered during the Third Reich, when Hitler’s followers turned his cell into a pilgrimage site and a memorial to his supposedly harsh imprisonment.

A transcript of a letter to Jakob Werlin, a Munich car dealer, also reveals Hitler’s true living conditions at Landsberg. Long before his release, on Dec. 20, 1924, due in large part to the efforts of Warden Leybold, Hitler was already thinking about what type of car to buy: a Benz 11/40, which “would meet my current requirements,” or a 16/50 with a more powerful engine. His preferred color was gray, and he wanted “wire wheels.”

Hitler asked the car dealer for preferential treatment. He wrote that he would probably have to obtain a loan for the purchase, and that the “court costs and legal fees” he owed were making his “hair stand on end.”

In the letter, Hitler asked Werlin to inquire at his main office as to “what sort of a discount you can give me.”

__________

Full article and photo: http://www.spiegel.de/international/germany/0,1518,702159,00.html

Of Storytellers and Statesmen

A 17th-century Flemish depiction of the Trojan Horse tale from “The Aeneid.”

Cicero wrote that to be ignorant of what happened before your own birth is to remain always a child. Much later, Henry Ford disagreed. He famously dismissed history as “bunk,” believing that each new generation would make its own history, by its own lights.

You would think that, in the face of a culture that seems inclined to Ford’s way of thinking, academic historians would want to keep the Ciceronian view alive, and perhaps they do. But their efforts are often distorted by abstruse theory or the formulas of social science. Charles Hill, a diplomat in residence at Yale University, has a different approach. He wants the past to inform the present by taking literature as a guide.

Mr. Hill believes that, alone among the arts, literature—by which he means history and philosophy as well as imaginative writing—stands free of strict rules or ideas about “acceptable” subject matter. It can say what needs to be said, blending experience, abstract reasoning and moral judgment. More particularly, it can be a source of instruction for governing in a dangerous world and for understanding the endless, and seemingly impossible, quest for comity among nations. “Grand Strategies” concerns statesmanship and strategy: the uses of power, the fate of alliances, war and peace. It also, happily, provides a tour through the Great Books, giving special attention to nation-states and their vexed relations.

Mr. Hill’s tour begins, not surprisingly, in ancient Greece. “The Iliad,” he says, shows us a world before nation-states, when blood feuds led to conflict and grievances (e.g., the abduction of a woman) to war. Diplomacy of a sort existed among rulers, he notes, but it functioned in an inherently unstable world that seemed desperate for a better way to avoid conflict, or manage it.

One answer, as the plays of Aeschylus suggest, was for people to transfer their social allegiance from tribes to states, ceding justice to the rule of law. Thus do the Furies, in Aeschylus’ cycle of plays called “The Oresteia,” seek a just punishment for Orestes (who has killed his mother) and not merely tribal vengeance.

To underline the momentousness of this shift, Mr. Hill reminds us of a scene from Mark Twain’s “Huckleberry Finn” in which Huck asks: “What is a feud?” He is told: “A feud is this way: a man has a quarrel with another man, and kills him; then the other man’s brother kills him; then the other brothers, on both sides, goes for one another, then the cousins chip in—and by and by everybody’s killed off.” A great deal of history records mankind’s attempts to steer away from this cycle of violence and trust a process of justice, however imperfect. The state, Mr. Hill argues, became the vehicle for this dramatically new social arrangement.

But what kind of state? And how could it be guarded against its own tendencies toward self-destruction? Thucydides’ history of the Peloponnesian wars, in the fifth century B.C., describes how conflict destroyed not only Athenian democracy but also a beneficially balanced system of Greek states. Mr. Hill cites Plato’s “Republic” as a text on statehood to be read ironically for its lesson that an intellect over-devoted to abstract ideas can bring about a repulsive outcome—in the case of Plato’s imagined republic, a dictatorship of philosopher-kings. Experience offered a better guide. Xenophon’s “Anabasis” recounts how a defeated band of Greek mercenaries made their way home through hostile territory, and how their capacity for self-government made the journey possible. Virgil’s “Aeneid,” for its part, showed the founding of a civilization that would thrive by imposing law and even peace (Pax Romana) on a vast empire.

As Mr. Hill observes, old problems recur in new forms. Thomas Hobbes, writing his treatise “Leviathan” in the mid-17th century, was compelled by the times he lived in to ponder yet again the question of how to establish order in a world of competing centers of power. The Reformation had shattered the unity of Christendom. Wars of Religion had recently divided France. Britain fought a civil war while the Thirty Years War, both a political and religious conflict, raged in Germany.

Hobbes’s answer to such anarchy (as it seemed to him) was to propose a powerful sovereign to whom citizens would give their rights, providing the sovereign, as Mr. Hill writes, “with the power required to keep them safe—safe from each other.”

The Peace of Westphalia, ending the Thirty Years War in 1648, set up a system of independent states, but it left open the question of what kind of states they would be. Jonathan Swift, in 1726, explored the possibilities with biting irony in “Gulliver’s Travels.” He sent Gulliver to an absolute monarchy, an agrarian commonwealth, a utopia of Enlightenment science and reason, and finally a hyper-rationalist aristocracy. In Swift’s hands, as one might expect, no system comes off well.

The modern era brought its own problems, which literature highlighted with typical vividness and acuity. Among much else, Charles Dickens’s “Tale of Two Cities” (1859) offers a portrait of modern state terror aimed at remaking human nature itself as another “solution” to history’s quest for order. The French Revolution stands at the center of that story, but Mr. Hill notes how that event prefigured the Bolshevik revolution and other experiments in political extremism by Communist regimes in China and Cambodia. Dickens captured the attempt to make a secular religion out of political ideas—to enforce “belief” by surveillance and cruelty while identifying virtue with violence. Dostoevsky, in “The Possessed” (1872) and Joseph Conrad, in “The Secret Agent” (1907), portrayed this terrorist mind with special intensity. As Mr. Hill notes, the mindset lives on today in the Islamist threat.

Part of the present clash between civilizations arises from the effects of what Mr. Hill calls “the imported state,” where non-Western countries adopted Western institutions without possessing the cultural matrix they required to work. Chinua Achebe’s novel “Things Fall Apart” (1958), set in Africa at the moment of colonization, portrays the effects of this mismatch, tracing the tragic decline of a tribal leader and thereby revealing the gap between a “modern” state and the society on which it is supposed to rest.

Mao Zedong, Mr. Hill notes, tried to close the gap in his own “imported state” by changing China’s culture as he remade its economy and institutions. But his cure proved worse than the disease—by destroying traditional society. Building something that can sustain itself is harder than merely tearing down.

Mr. Hill makes the point that, if the nation-state triumphed in the West, providing the conditions for its citizens’ security and prosperity, the verdict is still out elsewhere. The West’s matrix of civil liberties, political rights and liberal institutions has few analogues in the rest of the world. The resulting tensions play a key part in present discontent. Statesmen navigating through our own turbulent era might want to take a look at “Grand Strategies” for guidance, not to mention Aeschylus, Hobbes, Dickens and Dostoevsky.

Mr. Hay, a historian at Mississippi State University, is the author of “The Whig Revival, 1808-1830.”

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703302604575295533874735788.html

A wild, wild place

A master storyteller retells one of America’s greatest military adventures

The Last Stand: Custer, Sitting Bull, and the Battle of the Little Bighorn. By Nathaniel Philbrick. Viking; 466 pages; $30. The Bodley Head; £20.

REINFORCEMENTS arrived too late at the site of the bloodiest military defeat suffered by whites in their settlement of the American West. They had, they told a shocked nation celebrating its centenary, found only a single survivor on the battlefield, a veteran bleeding from the wounds of seven bullets and arrows. With the help of a sling, the bay gelding Comanche was gently conveyed to a ship for medical treatment. A general order was issued later saying that as “the only living representative” of the Battle of the Little Bighorn, Comanche’s “kind treatment and comfort should be a matter of special pride and solicitude” for the Seventh Cavalry. Henceforth he would not be ridden. Neither would he be put to any work.

Such sentimentality towards animals helped cavalrymen retain their humanity. The scenes, sounds and even smells of their brutalised lives suffuse Nathaniel Philbrick’s gory account of the campaign against the Plains Indians, commanded by General Alfred Terry, that culminated in the catastrophic end, in June 1876, of George Armstrong Custer. Mr Philbrick proves here as fine a writer on land as he is at sea, where he is famed for his history of the whaler that inspired Herman Melville’s “Moby Dick” and for arguing plausibly in his “Sea of Glory” that the ocean, not the West, was America’s first frontier.

His military biography of Custer brings balance to a life that is often distorted by the preconceptions of the historian. This also happens, as the author notes, in films where Custer, a noble hero in Raoul Walsh’s “They Died With Their Boots On” (1941), becomes a deranged maniac in Arthur Penn’s “Little Big Man” (1970). In reality he was a fine soldier, “one of the best cavalry officers, if not the best, in the Union Army” in the American civil war. Tragically for his men, his military successes there, and in later wars against the Indians, induced a hubris that led Custer to make some catastrophic decisions at Little Bighorn. While Custer divided his forces, Sitting Bull consolidated. Ever since his boyhood the Indian chief had been renowned for a methodical manner. His intimates nicknamed him “Slow”.

Custer also overestimated the stamina and hardiness of his troops. His regiment was not made up of gnarled Marlboro men; many were born abroad, 17% of them in Ireland, 12% in Germany and 4% in England. Almost all its native-born Americans came from east of the Mississippi. For these men, says Mr Philbrick, the Plains were a strange and unworldly place. They found it hard to accustom themselves to the constant eye-watering reek of horse hair and human sweat. The stench was especially bad at night when they camped near their newly dug latrines. If it was too wet to light a fire, they had to subsist on hardtack biscuits and cold sowbelly doused in vinegar. Since their boots shrank when they dried, troopers had to keep them on at night.

Their mounts were often famished, tired and weighed down by equipment, whereas the ponies of Sitting Bull’s warriors were well-watered, fresh and, in many cases, barebacked. Controversially, Mr Philbrick claims that Sitting Bull’s men were better armed, citing the findings of an archaeological sweep of the battlefield with metal detectors in the 1980s. In addition to the Springfield carbines and Colt revolvers fired by the soldiers, there were 43 weapons used by the Indians. Some were old muzzle-loaders, but up to 300 Indians possessed Henry and Winchester repeating rifles capable of firing 17 rounds without reloading. Custer’s men, with their single-shot carbines, were “overwhelmingly outgunned”.

In American military mythology Custer’s last stand ranks alongside the Battle of the Alamo, an attack by Mexican forces on a compound in Texas, which left all the defenders dead. Indian war buffs will be absorbed by Mr Philbrick’s intelligent, yet necessarily speculative, reconstruction of Custer’s reckless encounter. Others will be even more intrigued by the author’s vivid recreation of the tribulations of 19th-century cavalrymen. And of their horses, especially Comanche. His stuffed remains reside at the University of Kansas.

__________

Full article and photo: http://www.economist.com/culture/displaystory.cfm?story_id=16056383&source=hptextfeature

Mapping Ancient Civilization, in a Matter of Days

CITY LIVING A plaza in Caracol, a Maya city in Belize. Jungles surrounding it were penetrated using a new method, lidar.

For a quarter of a century, two archaeologists and their team slogged through wild tropical vegetation to investigate and map the remains of one of the largest Maya cities, in Central America. Slow, sweaty hacking with machetes seemed to be the only way to discover the breadth of an ancient urban landscape now hidden beneath a dense forest canopy.

Even the new remote-sensing technologies, so effective in recent decades at surveying other archaeological sites, were no help. Imaging radar and multispectral surveys by air and from space could not “see” through the trees.

Then, in the dry spring season a year ago, the husband-and-wife team of Arlen F. Chase and Diane Z. Chase tried a new approach using airborne laser signals that penetrate the jungle cover and are reflected from the ground below. They yielded 3-D images of the site of ancient Caracol, in Belize, one of the great cities of the Maya lowlands.

In only four days, a twin-engine aircraft equipped with an advanced version of lidar (light detection and ranging) flew back and forth over the jungle and collected data surpassing the results of two and a half decades of on-the-ground mapping, the archaeologists said. After three weeks of laboratory processing, the almost 10 hours of laser measurements showed topographic detail over an area of 80 square miles, notably settlement patterns of grand architecture and modest house mounds, roadways and agricultural terraces.

“We were blown away,” Dr. Diane Chase said recently, recalling their first examination of the images. “We believe that lidar will help transform Maya archaeology much in the same way that radiocarbon dating did in the 1950s and interpretations of Maya hieroglyphs did in the 1980s and ’90s.”

The Chases, who are professors of anthropology at the University of Central Florida in Orlando, had determined from earlier surveys that Caracol extended over a wide area in its heyday, between A.D. 550 and 900. From a ceremonial center of palaces and broad plazas, it stretched out to industrial zones and poor neighborhoods and beyond to suburbs of substantial houses, markets and terraced fields and reservoirs.

This picture of urban sprawl led the Chases to estimate the city’s population at its peak at more than 115,000. But some archaeologists doubted the evidence warranted such expansive interpretations.

“Now we have a totality of data and see the entire landscape,” Dr. Arlen Chase said of the laser findings. “We know the size of the site, its boundaries, and this confirms our population estimates, and we see all this terracing and begin to know how the people fed themselves.”

The Caracol survey was the first application of the advanced laser technology on such a large archaeological site. Several journal articles describe the use of lidar in the vicinity of Stonehenge in England and elsewhere at an Iron Age fort and American plantation sites. Only last year, Sarah H. Parcak of the University of Alabama at Birmingham predicted, “Lidar imagery will have much to offer the archaeology of the rain forest regions.”

The Chases said they had been unaware of Dr. Parcak’s assessment, in her book “Satellite Remote Sensing for Archaeology” (Routledge, 2009), when they embarked on the Caracol survey. They acted on the recommendation of a Central Florida colleague, John F. Weishampel, a biologist who had for years used airborne laser sensors to study forests and other vegetation.

Dr. Weishampel arranged for the primary financing of the project from the little-known space archaeology program of the National Aeronautics and Space Administration. The flights were conducted by the National Science Foundation’s National Center for Airborne Laser Mapping, operated by the University of Florida and the University of California, Berkeley.

Other archaeologists, who were not involved in the research but were familiar with the results, said the technology should be a boon to explorations, especially in the tropics, with their heavily overgrown vegetation, including pre-Columbian sites throughout Mexico and Central America. But they emphasized that it would not obviate the need to follow up with traditional mapping to establish “ground truth.”

Jeremy A. Sabloff, a former director of the University of Pennsylvania Museum of Archaeology and Anthropology and now president of the Santa Fe Institute in New Mexico, said he wished he had had lidar when he was working in the Maya ruins at Sayil, in Mexico.

The new laser technology, Dr. Sabloff said, “would definitely have speeded up our mapping, given us more details and would have enabled us to refine our research questions and hypotheses much earlier in our field program than was possible in the 1980s.”

At first, Payson D. Sheets, a University of Colorado archaeologist, was not impressed with lidar. A NASA aircraft tested the laser system over his research area in Costa Rica, he said, “but when I saw it recorded the water in a lake sloping at 14 degrees, I did not use it again.”

Now, after examining the imagery from Caracol, Dr. Sheets said he planned to try lidar, with its improved technology, again. “I was stunned by the crisp precision and fine-grained resolution,” he said.

“Finally, we have a nondestructive and rapid means of documenting the present ground surface through heavy vegetation cover,” Dr. Sheets said, adding, “One can easily imagine, given the Caracol success, how important this would be in Southeast Asia, with the Khmer civilization at places like Angkor Wat.”

In recent reports at meetings of Mayanists and in interviews, the Chases noted that previous remote-sensing techniques focused more on the discovery of archaeological sites than on the detailed imaging of on-ground remains. The sensors could not see through much of the forest to resolve just how big the ancient cities had been. As a consequence, archaeologists may have underestimated the scope of Maya accomplishments.

For the Caracol survey, the aircraft flew less than a half-mile above the terrain at the end of the dry season, when foliage is less dense. The Airborne Laser Terrain Mapper, as the specific advanced system is named, issued steady light pulses along 62 north-south flight lines and 60 east-west lines. The coverage reached what appeared to be the fringes of the city’s outer suburbs and most of its agricultural terraces, showing that the urban expanse encompassed at least 70 square miles.

Not all the laser pulses transmitted from the aircraft made it to the surface; some were reflected by the tops of trees. But enough reached the ground and were reflected back to the airborne instruments. These signals, precisely timed, positioned by GPS receivers and processed by computers, produced images of the surface contours, revealing distinct patterns of building ruins, causeways and other human modifications of the landscape.

The years the Chases spent on traditional explorations at Caracol laid the foundation for confirming the effectiveness of the laser technology. Details in the new images clearly matched their maps of known structures and cultural features, the archaeologists said. When the teams returned to the field, they used the laser images to find several causeways, terraced fields and many ruins they had overlooked.

The Chases said the new research demonstrates how a large, sustainable agricultural society could thrive in a tropical environment, helping to account for the robust Maya civilization of the classic period, from A.D. 250 to 900.

“This will revolutionize the way we do settlement studies of the Maya,” Dr. Arlen Chase said on returning from this spring’s research at Caracol.

Lidar is not expected to have universal application. Dr. Sheets said that, for example, it would not be useful at his pre-Columbian site at Cerén, in El Salvador. The ancient village and what were its surrounding manioc fields are buried under many feet of volcanic ash, beyond laser detection.

Other modern technologies, including radar and satellite imaging, are already proving effective in the land beyond the temples at Angkor, in Cambodia, and in surveys of the Nile delta and ancient irrigation systems in Iraq.

Laser signals breaking through jungle cover are only the newest form of remote sensing in the pursuit of knowledge of past cultures, which began in earnest about a century ago with the advent of aerial photography. Charles Lindbergh drew attention to its application in archaeology with picture-taking flights over unexplored Pueblo cliff dwellings in the American Southwest.

NASA recently stepped up its promotion of technologies developed for broad surveys of Earth and other planets to be used in archaeological research. Starting with a few preliminary tests over the years, the agency has now established a formal program for financing archaeological remote-sensing projects by air and space.

“We’re not looking for monoliths on the Moon,” joked Craig Dobson, manager of the NASA space archaeology program.

Every two years, Dr. Dobson said, NASA issues several three-year grants for the use of remote sensing at ancient sites. In addition to the Caracol tests, the program is supporting two other Maya research efforts, surveys of settlement patterns in North Africa and Mexico and reconnaissance of ancient ruins in the Mekong River Valley and around Angkor Wat.

Nothing like a latter-day Apollo project, of course, but the archaeology program is growing, Dr. Dobson said, and will soon double in size, to an annual budget of $1 million.

John Noble Wilford, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/05/11/science/11maya.html

Temperance Tantrum

How the Anti-Saloon League’s political cunning led to Prohibition—and a colorful, sordid era

A public demonstration in 1920 of the government’s resolve to enforce Prohibition.

It’s a safe conjecture that the snapshot impression most Americans have of the Prohibition era is a gauzy haze of speakeasies, Al Capone, bootleggers, flappers, bathtub gin and Harlem’s Cotton Club. For decades, the Hollywood and literary glorification of those who flouted the 18th Amendment—which went into effect on Jan. 17, 1920—has promoted the entirely accurate notion that the Prohibition story is at times outrageously picaresque. But the pop-culture view has also fostered the inaccurate belief that alcohol back then was a rare commodity, available only to the privileged, the daring and the outright criminal.

In fact, as Daniel Okrent shows in “Last Call,” his superb history of the Prohibition era, obtaining a drink with a lot more kick than a bottle of pop wasn’t at all difficult for the thirsty public. The law’s loopholes were numerous, and the judiciary, suddenly overwhelmed by Prohibition-related arrests, was extraordinarily lenient. Fortunes were made by taking advantage of exemptions for “medicinal” alcohol, for hard cider made by farmers from fermented fruit and for sacramental wine used in religious services. And that was just the legal stuff. As for the illegal booze, there was plenty of that around too, as public servants grew rich taking bribes and kickbacks in exchange for turning a blind eye. It was a rollicking and sordid period in the country’s history.

Anti-Saloon League leader Wayne Wheeler.

Mr. Okrent wisely expends much effort in carefully gathering the many threads of American thinking, beginning in the late 19th century, that eventually made the preposterous idea of outlawing alcohol seem like a thoroughly reasonable aim. He also provides evocative sketches of the men and women—on both sides of the decades-long, impassioned debate—who were once household names but are now forgotten.

Another of the book’s virtues is that it is likely to prompt the reader to reflect in a benign way on life in America today. It’s the accepted, and lazy, wisdom of the current moment that the nation suffers to an unprecedented degree from a “partisan” and “polarized” political culture. We’ve got nothing on the “dry” versus “wet” combatants. As becomes clear from the many vintage documents, speeches and newspaper articles that Mr. Okrent unearthed, the bickering today on talk radio and blogs and cable chat-shows looks like a tea party—not a Tea Party—compared with the incendiary sermonizing, bigotry and violence of the long-ago alcohol debate. For beyond-the-pale rhetoric it’s hard to beat Carry Nation, the God-fearing temperance zealot—she used a hatchet (and hammers, rocks and bricks) to attack saloons in the first decade of the 20th century—who celebrated the assassination of President William McKinley in 1901 by calling him a “whey-faced tool of Republican thieves, rummies and devils.”

“Last Call” might also make you feel a little better about contemporary government at a time when the mere suggestion of malfeasance or hypocrisy makes the news; in the 1920s, the level of corruption and bad faith, from beat cops and local pols to judges and congressmen, was jaw-dropping. Even the lone law-enforcement official from that time whose heroism was mythologized on television and in film doesn’t come off too well: Eliot Ness had almost nothing to do with Al Capone’s eventual arrest and conviction (on tax-evasion charges), and the famed leader of the “Untouchables” Prohibition-enforcement team died “a semidrunk” in 1957.

The most compelling figure in a book full of vivid characters is Wayne Wheeler, the leader of the Anti-Saloon League and the person most responsible for passage of the 18th Amendment as well as the more detailed Volstead Act of 1919. Wheeler was a nondescript lawyer but a relentless, single-minded campaigner against alcohol and a man of marvelous political cunning. When Wheeler died at age 57 in 1927, even writer H.L. Mencken—a confirmed “wet” who was vicious in his mocking of “drys”—had to admit: “In fifty years the United States has seen no more adept political manipulator” than Wheeler. “His successors, compared to him, were as pee-wees to the Matterhorn.”

Wheeler’s strategy with the Anti-Saloon League was to pick off “wet” legislators who represented districts where a temperance message would be likely to resonate. Thus he wrote off urban districts largely populated by recent immigrants, who tended to be unsympathetic to the ASL’s message. But elsewhere, in districts where elections could be decided by the 10% or 20% swing in votes that an anti-alcohol message could produce, Wheeler went all out. The ASL targeted voters with mass telegram campaigns, hundreds of speeches at local churches and countless leaflets. His success in electing candidates who supported his agenda was staggering. The ASL “effectively seized control of both the House and the Senate in the 1916 elections,” Mr. Okrent writes, and “only tightened it” in the years that followed.

Wheeler also recognized that disparate strains in the culture could be exploited to achieve his goal. He supported the women’s suffrage movement, recognizing that many women were sympathetic to the banning of alcohol, in the hope of reforming wayward husbands. One argument against prohibition contended that the country couldn’t afford it: Fully 30% of the government’s revenue in 1910 came from taxes on alcohol. Wheeler seized on the introduction of the federal income tax in 1913 as ample compensation for lost levies if alcohol sales were banned; disorganized “wet” supporters—brewers, distillers and many politicians—were caught flat-footed by this development. Another windfall for Wheeler was the wave of virulent xenophobia that came with World War I, directed primarily at German immigrants but also at Italians, Eastern European Jews and the Irish, groups that regarded alcohol consumption as part of their cultural traditions.

Even though President Woodrow Wilson was himself fond of a cocktail, he was indifferent to Wheeler’s increasing power and to the clamor for prohibition from Wheeler-aligned “dry” politicians. And why would he expend political capital trying to protect citizens’ rights to lift a glass? After all, Wilson effectively suspended the First Amendment during World War I, jailing some 1,200 dissenters. Wheeler had the president’s ear; with Wilson’s successor in 1921, Warren G. Harding, Wheeler had a virtual puppet.

“Wheeler’s grip on the short leash he allowed Harding was so firm that when he wanted something from the president, Harding would respond with the eagerness of a puppy,” Mr. Okrent says. When Wheeler learned, for instance, that Harding intended to nominate Sen. John Shields of Tennessee to the Supreme Court, the ASL leader objected: Shields had voted for the 18th Amendment but had opposed the Volstead Act—not acceptable, Wheeler said. “Harding instantly capitulated,” Mr. Okrent writes.

As the Roaring ’20s wheezed to a conclusion, Prohibition was generally regarded as a farce. Once the Depression hit, the country needed a drink more than ever. And the government needed the tax revenues that alcohol sales would bring. It was only a matter of time before the 18th Amendment was repealed. The Anti-Saloon League, which had spent $2.5 million on its cause in 1920, was able to raise only $122,000 in 1933 to fight repeal. “The most powerful pressure group the nation had ever known,” Mr. Okrent observes, “had been reduced to looking for nickels under the couch cushions.”

Franklin Roosevelt, who had promised repeal in his presidential campaign, moved quickly after his inauguration on March 4, 1933. At his urging, Congress within a few weeks redefined “intoxicating,” legalizing beer that was no more than 3.2% alcohol. The Anheuser-Busch brewery sent a team of Clydesdales pulling a beer wagon to the White House. In the first week of December, the 21st Amendment was ratified. “At the age of thirteen years, ten months, and nineteen days,” Mr. Okrent writes, “national Prohibition was dead.” It’s safe to say that the country will never order another round.

Mr. Smith is managing director of the website splicetoday.com.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748704342604575222132698695108.html

__________

See also:

‘Last Call’

Prologue

January 16, 1920

The streets of San Francisco were jammed. A frenzy of cars, trucks, wagons, and every other imaginable form of conveyance crisscrossed the town and battled its steepest hills.

Porches, staircase landings, and sidewalks were piled high with boxes and crates delivered on the last possible day before transporting their contents would become illegal. The next morning, the Chronicle reported that people whose beer, liquor, and wine had not arrived by midnight were left to stand in their doorways “with haggard faces and glittering eyes.”

Just two weeks earlier, on the last New Year’s Eve before Prohibition, frantic celebrations had convulsed the city’s hotels and private clubs, its neighborhood taverns and wharfside saloons. It was a spasm of desperate joy fueled, said the Chronicle, by great quantities of “bottled sunshine” liberated from “cellars, club lockers, bank vaults, safety deposit boxes and other hiding places.” Now, on January 16, the sunshine was surrendering to darkness.

San Franciscans could hardly have been surprised. Like the rest of the nation, they’d had a year’s warning that the moment the calendar flipped to January 17, Americans would only be able to own whatever alcoholic beverages had been in their homes the day before. In fact, Americans had had several decades’ warning, decades during which a popular movement like none the nation had ever seen—a mighty alliance of moralists and progressives, suffragists and xenophobes—had legally seized the Constitution, bending it to a new purpose.

Up in the Napa Valley to the north of San Francisco, where grape growers had been ripping out their vines and planting fruit trees, an editor wrote, “What was a few years ago deemed the impossible has happened.” To the south, Ken Lilly—president of the Stanford University student body, star of its baseball team, candidate for the U.S. Olympic track team—was driving with two classmates through the late-night streets of San Jose when his car crashed into a telephone pole. Lilly and one of his buddies were badly hurt, but they would recover. The forty-gallon barrel of wine they’d been transporting would not. Its disgorged contents turned the street red.

Across the country on that last day before the taps ran dry, Gold’s Liquor Store placed wicker baskets filled with its remaining inventory on a New York City sidewalk; a sign read “Every bottle, $1.” Down the street, Bat Masterson, a sixty-six-year-old relic of the Wild West now playing out the string as a sportswriter in New York, observed the first night of constitutional Prohibition sitting alone in his favorite bar, glumly contemplating a cup of tea. Under the headline goodbye, old pal!, the American Chicle Company ran newspaper ads featuring an illustration of a martini glass and suggesting the consolation of a Chiclet, with its “exhilarating flavor that tingles the taste.”

In Detroit that same night, federal officers shut down two illegal stills (an act that would become common in the years ahead) and reported that their operators had offered bribes (which would become even more common). In northern Maine, a paper in New Brunswick reported, “Canadian liquor in quantities from one gallon to a truckload is being hidden in the northern woods and distributed by automobile, sled and iceboat, on snowshoes and on skis.” At the Metropolitan Club in Washington, Assistant Secretary of the Navy Franklin D. Roosevelt spent the evening drinking champagne with other members of the Harvard class of 1904.

There were of course those who welcomed the day. The crusaders who had struggled for decades to place Prohibition in the Constitution celebrated with rallies and prayer sessions and ritual interments of effigies representing John Barleycorn, the symbolic proxy for alcohol’s evils. No one marked the day as fervently as evangelist Billy Sunday, who conducted a revival meeting in Norfolk, Virginia. Ten thousand grateful people jammed Sunday’s enormous tabernacle to hear him announce the death of liquor and reveal the advent of an earthly paradise. “The reign of tears is over,” Sunday proclaimed. “The slums will soon be only a memory. We will turn our prisons into factories and our jails into storehouses and corncribs. Men will walk upright now, women will smile, and the children will laugh. Hell will be forever for rent.”

A similarly grandiose note was sounded by the Anti-Saloon League, the mightiest pressure group in the nation’s history. No other organization had ever changed the Constitution through a sustained political campaign; now, on the day of its final triumph, the ASL declared that “at one minute past midnight . . . a new nation will be born.”

In a way, editorialists at the militantly anti-Prohibition New York World perceived the advent of a new nation, too. “After 12 o’clock tonight,” the World said, “the Government of the United States as established by the Constitution and maintained for nearly 131 years will cease to exist.”

Secretary of the Interior Franklin K. Lane may have provided the most accurate view of the United States of America on the edge of this new epoch. “The whole world is skew-jee, awry, distorted and altogether perverse,” Lane wrote in his diary on January 19. “. . . Einstein has declared the law of gravitation outgrown and decadent. Drink, consoling friend of a Perturbed World, is shut off; and all goes merry as a dance in hell!”

How did it happen? How did a freedom-loving people decide to give up a private right that had been freely exercised by millions upon millions since the first European colonists arrived in the New World? How did they condemn to extinction what was, at the very moment of its death, the fifth-largest industry in the nation? How did they append to their most sacred document 112 words that knew only one precedent in American history? With that single previous exception, the original Constitution and its first seventeen amendments limited the activities of government, not of citizens. Now there were two exceptions: you couldn’t own slaves, and you couldn’t buy alcohol.

Few realized that Prohibition’s birth and development were much more complicated than that. In truth, January 16, 1920, signified a series of innovations and alterations revolutionary in their impact. The alcoholic miasma enveloping much of the nation in the nineteenth century had inspired a movement of men and women who created a template for political activism that was still being followed a century later. To accomplish their ends they had also abetted the creation of a radical new system of federal taxation, lashed their domestic goals to the conduct of a foreign war, and carried universal suffrage to the brink of passage. In the years ahead, their accomplishments would take the nation through a sequence of curves and switchbacks that would force the rewriting of the fundamental contract between citizen and government, accelerate a recalibration of the social relationship between men and women, and initiate a historic realignment of political parties.

In 1920 could anyone have believed that the Eighteenth Amendment, ostensibly addressing the single subject of intoxicating beverages, would set off an avalanche of change in areas as diverse as international trade, speedboat design, tourism practices, soft-drink marketing, and the English language itself? Or that it would provoke the establishment of the first nationwide criminal syndicate, the idea of home dinner parties, the deep engagement of women in political issues other than suffrage, and the creation of Las Vegas? As interpreted by the Supreme Court and as understood by Congress, Prohibition would also lead indirectly to the eventual guarantee of the American woman’s right to abortion and simultaneously dash that same woman’s hope for an Equal Rights Amendment to the Constitution. Prohibition changed the way we live, and it fundamentally redefined the role of the federal government. How the hell did it happen?

Excerpted from Last Call by Daniel Okrent.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703338004575230383055310668.html

Ptolemaic statue and temple gate discovered at Taposiris Magna

The large granite statue discovered at Taposiris Magna. It possibly represents Ptolemy IV.

Archaeologists excavating at Taposiris Magna, a site west of Alexandria, have discovered a huge headless granite statue of a Ptolemaic king, and the original gate to a temple dedicated to the god Osiris.

In a statement issued by Egypt’s Supreme Council of Antiquities (SCA), Dr Zahi Hawass said that the monumental sculpture, a traditional figure of an ancient Egyptian pharaoh wearing collar and kilt, could represent Ptolemy IV, the pharaoh who constructed the Taposiris Magna temple. He added that the statue is very well preserved and might be one of the most beautiful statues carved in the ancient Egyptian style.

The joint Egyptian-Dominican team working at Taposiris Magna discovered the temple’s original gate on its western side. In pharaonic Egypt the temple was named Per-Usir, meaning ‘A place of Osiris’. Legend has it that when the god Seth killed Osiris he cut him into fourteen pieces and threw them all over Egypt. This is one of fourteen temples said to contain one piece of the god’s body.

The team also found limestone foundation stones, which would once have lined the entrance to the temple. One of these bears traces indicating that the entrance was lined with a series of Sphinx statues similar to those of the pharaonic era.

The team, led by Dr Kathleen Martinez, began excavations in Taposiris Magna five years ago in an attempt to locate the tomb of the well-known lovers, Queen Cleopatra VII and Mark Antony. There is some evidence that suggests that Egypt’s last Queen might not be buried inside the tomb built beside her royal palace, which is now under the eastern harbour of Alexandria.

Dr Hawass pointed out that in the past five years the mission has discovered a collection of headless royal statues, which may have been subjected to destruction during the Byzantine and Christian eras. A collection of heads featuring Queen Cleopatra was also uncovered along with 24 metal coins bearing Cleopatra’s face.

Behind the temple, a necropolis was discovered, containing many Greco-Roman style mummies. Early investigations, said Dr Hawass, show that the mummies were buried with their faces turned towards the temple, which means it is likely the temple contained the burial of a significant royal personality, possibly Cleopatra VII.

Dr Hawass has already hailed the dig as a success, whatever the outcome: “If we discover the tomb… it will be the most important discovery of the 21st century. If we do not discover the tomb… we made major discoveries here, inside the temple and outside the temple.”

Ann Wuyts, The Independent

__________

Full article and photo: http://www.independent.co.uk/news/science/archaeology/news/ptolemaic-statue-and-temple-gate-discovered-at-taposiris-magna-1961972.html

Lady Liberty’s Path to America

The cold rains of Oct. 28, 1886, did little to dampen the ardor of the tens of thousands of giddy New Yorkers who crowded onto the southern tip of Manhattan that afternoon to watch the festivities on Bedloe’s Island, a patch of land in New York Harbor. On cue, an enormous veil dropped, and the spectators gazed for the first time at the face of the massive statue that, until then, had been the subject not only of curiosity but also of skepticism.

The Statue of Liberty as a work in progress at sculptor Frédéric Auguste Bartholdi’s Parisian workshop in the early 1880s.

Whatever doubts Americans might have had about this unsolicited, and rather costly, gift from the French seemed at once to vanish. A “thunderous cacophony of salutes from steamer whistles, brass bands, and booming guns, together with clouds of smoke from the cannonade, engulfed the statue for the next half hour,” Yasmin Sabina Khan writes in “Enlightening the World,” her account of how the Statue of Liberty came to be.

The crowd roared, then various speakers held forth, welcoming the 225-ton, 151-foot-tall Lady Liberty, as she would soon be known. President Grover Cleveland, in his remarks, tried to distinguish this colossus from others of its kind throughout human history. Where the statue-symbols of other nations might depict “a fierce and warlike god, filled with wrath and vengeance,” this one exhibited only “our own peaceful deity keeping watch before the open gates of America.”

President Cleveland’s interpretation of the statue turned out to be but one of many over the years. To Ms. Khan the Statue of Liberty’s symbolic significance is not a complicated matter and never was. The statue celebrates the “friendship” of the people of France and those of the U.S.; it represents “liberty” and “liberty” alone.

But of course liberty means different things to different people, not least the French ideologues and power-seekers who, in the 20 years it took for sculptor Frédéric Auguste Bartholdi to see his immense undertaking through to completion (from 1865 to 1885), claimed to be its champions. Liberty seemed to have nothing but friends in France during this period, including Napoleon III, the socialists of the Paris Commune and everyone in between. This was a time of intense political strife in France, and Ms. Khan gives insufficient attention to the extent to which making a grand gesture—giving a symbol of liberty to America—constituted a form of propaganda devised by Édouard-René Lefebvre de Laboulaye and other French intellectuals under the Third Republic to gain U.S. support for their program of liberal reform.

Americans were skeptical of the project from the start and understandably so. French support during the Revolutionary War had been indispensable to the nation’s founding, and Gen. Lafayette’s wartime contributions were still revered. But within living memory the friendship between the two countries had suffered mightily: During the Civil War, Napoleon III, sympathetic to the Confederacy, had invaded Mexico, driven the republican Benito Juarez from office and installed the Austrian archduke Maximilian as the region’s emperor. Although President Lincoln regarded this involvement in Mexico as a violation of the Monroe Doctrine, he could do little about it, preoccupied as he was by the war at home. But Andrew Johnson, Lincoln’s successor, supplied Juarez’s government-in-exile, leading to the withdrawal of French forces, the restoration of the Juarez presidency and Maximilian’s execution. Bad feelings lingered.

Americans were also leery of the statue—Harper’s magazine called it “Frenchy and fanciful”—on financial grounds. This supposed gift, after all, had come at a high price. The massive pedestal would cost even more than the $250,000 construction of the statue itself, and Americans were expected to come up with the money. Emma Lazarus’s “The New Colossus” (“Give me your tired, your poor”) was written in 1883 for a fund-raising auction and didn’t make much of an impression at the time; two decades passed before a plaque bearing the poem was attached to the pedestal. Americans eventually did come up with the money to build the pedestal, but just barely, thanks to a last-minute fund-raising campaign by the publisher Joseph Pulitzer.

Once Miss Liberty was unveiled, the American people responded enthusiastically and, in time, grew to adore her, although on sentimental rather than aesthetic grounds. One critic at the time of the unveiling said that from Battery Park she resembled “a bag of potatoes with a stick projecting from it.” The statue was certainly an “industrial tour de force,” as American Architect and Building News noted in 1883, but as a work of art its standing has not improved much through the years.

Born of long-forgotten Parisian political struggles and at a time of American enthusiasm for the French Enlightenment, the statue came to represent different ideals to new generations. Where she had once been content to cast the light of republican virtue, setting an example for the decadent monarchies of old Europe, before long she was welcoming the Continent’s “wretched refuse.” By the outbreak of World War I, as Marvin Trachtenberg noted in his superb “The Statue of Liberty” (1976), the ships that had conveyed those immigrant masses “now sailed out under her militant gaze” on a mission to make the world safe for democracy. President Cleveland’s “peaceful deity” had become the emblem of a fearsome military power.

Mr. Crawford is the author of “Twilight at Monticello: The Final Years of Thomas Jefferson.”

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748704671904575194273198104184.html

Revealing the Young Bureaucrats Behind the Nazi Terror

New Third Reich Monument in Berlin

Index cards from the post-World War II investigation into members of the Reichssicherheitshauptamt — the amalgamation of the Gestapo and SS.

Who exactly were the men who planned and administered the Nazi crimes? The new “Topography of Terror” documentation center opened on Thursday in Berlin at the site of the former Gestapo and SS headquarters. It reveals the faces of the almost unknown perpetrators of the Holocaust.

The index cards cover an entire wall, several hundred of them in pink, beige or green, containing names, dates of birth and handwritten notes. They are the details of some of the 7,000 former employees of the Reichssicherheitshauptamt, the amalgamation of the feared SS paramilitary group and Gestapo secret police force — the men who worked at the very epicenter of the Nazi terror regime.

Sixteen of the thousands of cards, collected by investigators in Berlin in 1963, jut out from the wall, representing the only former employees of this terror headquarters who ever faced prosecution. And three of these cards are raised further — showing the trio who were eventually convicted. That is just three out of a total of 7,000.

The exhibition can be seen at the new “Topography of Terror” documentation center opened by German President Horst Köhler on Thursday, just two days before the 65th anniversary of the end of World War II. Unlike the nearby Holocaust Memorial, which is dedicated to the Nazis’ victims, this modest metallic gray building is designed to highlight the role of the perpetrators, those managers and bureaucrats who from their Berlin offices administered mass murder across Europe.

Undisturbed by the Justice System

The center also reveals the worst historical legacy of the German Federal Republic: the fact that thousands of murderers and their accomplices were able to lead quiet lives in post-war Germany, undisturbed by the criminal justice system. The site of the exhibition is probably the most historically contaminated place in Berlin. The complex on what was once Prinz Albrecht Strasse, just a stone’s throw from today’s government buildings, was the headquarters for the Third Reich’s brutal repression.

In 1933 the Gestapo made the former art school at Prinz Albrecht Strasse 8 its headquarters. The adjacent Hotel Prinz Albrecht became the SS headquarters in 1934 and that same year the SS intelligence service, the SD, took over the Prinz Albrecht palace on nearby Wilhelm Strasse. It was from this complex of buildings that Hitler’s officials administered the concentration and extermination camps, directed the deadly campaigns by the SS death squads and kept an eye on the regime’s opponents.

The “Final Solution” that was discussed at the famed Wannsee Conference on Jan. 20, 1942 was also prepared here. A group of ministerial officials and SS functionaries based here chose the venue for the conference of high-ranking Nazis, where the plan for the murder of Europe’s Jews was hatched.

Hitler was rarely at the complex. He preferred to stay away from Berlin, sometimes ruling from his Wolf’s Lair military headquarters and sometimes from his Bavarian mountain retreat. But this was where the brains behind the Nazi crimes, such as SS leader Heinrich Himmler and SD chief Reinhard Heydrich, had their offices.

Climbing the Career Ladder

And they surrounded themselves with men who didn’t necessarily fit today’s stereotype of a Nazi war criminal: neither boorish sadists nor bloodless bureaucrats. They were ambitious university-educated men, aged around 30 and more likely to be ideologues than technocrats. They alternated between serving at the Berlin headquarters and in foreign posts, like young managers at a big company making their way up the career ladder. And after the rupture of 1945 most of them simply faded away into the background.

Erich Ehrlinger, for example, was a lawyer from Giengen in southern Germany who at the age of 25 was already a staff leader at the SD main office, before becoming a commander in the German security police in Ukraine and leading Einsatzkommando 1b, a mobile death squad. A case against him in 1969 was dropped because he was deemed incapable of standing trial. Yet Ehrlinger lived for another 35 years.

Then there was the Munich businessman, Josef Spacil, who joined the SS at the age of 27. He was stationed in an occupied area of the Soviet Union as an SS economist, then came back to Prinz Albrecht Strasse to serve as a department head. He appeared as a witness in the Nuremberg Trials but was never prosecuted himself.

Photographs of a group of young lawyers, Werner Best, Ernst Kaltenbrunner and Hans Nockemann, stare down from the walls — all were born in 1903. “One easily forgets that National Socialism was a young movement,” says Andreas Nachama, the director of the new documentation center.

Nachama proudly walks through the building, showing its concrete walls and dark stone floor. “The site is our most important exhibit,” he says. “We don’t need impressive architecture, it is all there.” He points to the views. Across the way is the former Air Ministry of Hermann Göring, now the German Finance Ministry. A few meters away is the Berlin state parliament, once the seat of the Prussian parliament. Adjacent to the site is a 170-meter section of the Berlin Wall. No other place contains quite so much German history.

A Neglected Waste Land

It is hard to understand today why this crucial historical site was neglected for so long. Between 1949 and 1956 the authorities demolished the war-damaged buildings. It might have been possible then to reconstruct the site, but people preferred to forget this place of horror. And the Communist authorities in the east of the city did their part by building the Berlin Wall in 1961 right along the former Prinz Albrecht Strasse. The four hectares of waste land fell into obscurity.

The West Berlin authorities at one stage planned a freeway for the site but it was never built. The cars came anyway, with young Berliners learning to drive in the derelict space. Another part of the site was taken over by a construction materials firm that salvaged parts of old buildings here.

It was a left-wing citizens group in the 1980s that rediscovered the site, not only as a place where the perpetrators planned the Holocaust but also as a site of the German resistance. Around 15,000 people were arrested by the Gestapo and kept in the ground-floor cells of Prinz Albrecht Strasse 8: people like Pastor Martin Niemöller, the Communist Erich Honecker, who later led East Germany, and the Social Democrat Kurt Schumacher.

Excavation work uncovered the floors of the cells, as well as parts of the walls and foundations. A provisional exhibition was put on in 1987 as part of Berlin’s 750th anniversary celebrations. That original Topography of Terror, a row of information boards displaying text in the open air, stayed put for over 20 years and eventually would attract half a million visitors annually.

And it took almost that long until the documentation could be housed properly. An architecture competition from 1993 ended in chaos. The Swiss architect Peter Zumthor won with his plans for a 125-meter long, 17-meter wide construction across the site. But construction costs quickly got out of hand and, after construction had already been halted for five years, the Berlin state finally pulled the plug in 2004. The diggers moved in and tore down three stairwells and the foundations of the building. The stillborn project had cost the city €12 million.

Functional and Sober

Then architect Ursula Wilms and landscape architect Heinz Hallman stepped in with their more modest plan. It was finished in two and a half years and came in on budget, costing €20 million, shared equally between Berlin and the federal government. It is functional and sober, and suits the concept of the exhibition and the consciously unfinished aspect of the site.

So a good end to a long-drawn-out process? “It speaks for the quality of the project that it survived all that,” says Reinhard Rürup. The 75-year-old historian was responsible for building the original exhibition and in 2004 he resigned as academic director of the project because of the Zumthor debacle. In hindsight that sent the right signal, he says, as the politicians then decided on a fresh start.

However, valuable years were wasted, during which the area’s government quarters and other memorials rapidly changed. The Topography now appears as a late-comer, says Rürup. And the opportunity was wasted to bring together the various Nazi memorials in the city under the umbrella of a single foundation. That would have been in a better position to cooperate with big documentation centers abroad, as well as the Holocaust museums in Washington and London and the Mémorial de la Shoah in Paris, he argues.

Nachama, however, says that the Topography of Terror was never supposed to supply comprehensive answers. “We are pleased if visitors leave us with more questions than they started with.”

These visitors can stare in amazement at the photograph of Karl Wolff, head of Himmler’s personal staff, as he stands happily in front of his house on Starnberg Lake and waters his plants.

Or at a color photo from 2002 that shows Friedrich Engel, the former head of the SS in Genoa, in a Hamburg court. Engel, known as the “Butcher of Genoa,” was convicted in absentia by an Italian military court for the 1944 murder of hundreds of Italian captives. Germany’s highest court, the Bundesgerichtshof, then overturned his Hamburg conviction. Engel died in 2006 — a free man.

___________

Full article and photo: http://www.spiegel.de/international/zeitgeist/0,1518,693373,00.html

A New Openness to Discussing Allied War Crimes in WWII

D-Day may have been the beginning of the end of Germany’s campaign of horror during World War II. But a new book by British historian Antony Beevor makes it clear that the “greatest generation” wasn’t above committing a few war crimes of its own.

It was the first crime William E. Jones had ever committed, which was probably why he could still remember it well so many years later. He and other soldiers in the 4th Infantry Division had captured a small hill. “It was pretty rough,” Jones later wrote, describing the bloody battle.

At some point, the GIs lost all self-control. As Jones wrote: “(The Germans) were baffled and they were crazy. There were quite a few of them still in their foxholes. Then I saw quite a few of them shot right in the foxholes. We didn’t take prisoners and there was nothing to do but kill them, and we did, and I had never shot one like that. Even our lieutenant did and some of the non coms (non-commissioned officers).”

__________

Allied forces bringing in troops and equipment at Omaha Beach after it was conquered in bloody fighting on D-Day.

August 9, 1944: US infantrymen make their way past a wrecked German truck on the way to Avranches during the Allied invasion of Normandy.

US soldiers helping comrades ashore on Utah Beach, June 6.

June 1944: German prisoners being marched through Cherbourg after the liberation of the town.

July 1944: German prisoners from Cherbourg behind a barbed wire fence somewhere in England.

__________

The dead will most likely never be identified by name, but one thing is clear: The victims of this war crime were German soldiers killed in Normandy in the summer of 1944.

At daybreak on June 6, the Americans, British and their allies launched “Operation Overlord,” the biggest amphibious landing of all time. During the operation, Allied and German troops fought each other in one of the fiercest battles of World War II, first on the beaches and then in the countryside of Normandy. When it was over, more than 250,000 soldiers and civilians had been killed or wounded, and Normandy itself was ravaged.

The Only Good German Is a Dead German

There is no shortage of books on the Battle of Normandy, which also goes by the name of D-Day. And the same can be said about films, such as Steven Spielberg’s award-winning film “Saving Private Ryan,” which was a global success. Indeed, it would almost seem that everything that could be said about the battle has been said.

Still, that didn’t deter British historian and best-selling author Antony Beevor from taking another stab at the material. While conducting research for his newest book, “D-Day: The Battle for Normandy,” Beevor stumbled upon something that is currently a matter of much debate among experts. If some of these scholars are correct, Allied soldiers committed war crimes in Normandy to a much greater extent than was previously realized.

Beevor extensively quotes reports and memoirs of those who took part in the invasion, many of whom state that American, British and Canadian troops killed German POWs and wounded soldiers. They also reportedly used soldiers belonging to the German Wehrmacht or Waffen SS as human shields and forced them to walk through minefields.

For example, one account tells the tale of a private named Smith, who was fighting with the 79th US Infantry Division. Smith allegedly discovered a room full of wounded Germans in a fortification while he was drunk on Calvados, a local apple brandy. According to the official report: “Declaring to all and sundry that the only good German was a dead one, Smith made good Germans out of several of them before he could be stopped.”

In another account, Staff Sergeant Lester Zick reportedly encountered an American soldier on a white horse who was herding 11 prisoners in front of him. He called out to Zick and his men and told them that the prisoners were all Poles, except for two Germans. Then, according to Zick, the soldier took out his pistol “and shot both of them in the back of the head. And we just stood there.”

Beevor also quotes John Troy, a soldier with the 8th Infantry Division, who writes of finding the body of an American officer the Germans had tied up and killed because he had been caught carrying a captured German P-38 pistol. Troy describes his reaction in the following way: “When I saw that, I said no souvenirs for me. But, of course, we did it too when we caught (Germans) with American cigarettes on them, or American wristwatches they had on their arms.”

Rage and Violence

The issue of war crimes is an incredibly sensitive one. But, in this case, the evidence is overwhelming.

Given the high number of casualties they suffered, Allied paratroopers were particularly determined to exact bloody revenge. Near one village, Audouville-la-Hubert, they massacred 30 captured Wehrmacht soldiers in a single killing spree.

On the beaches, soldiers in an engineering brigade had to protect German prisoners from enraged paratroopers from the 101st Airborne Division, who shouted: “Turn those prisoners over to us. Turn them over to us. We know what to do to them.”

When the same LSTs (landing ship tanks) were used to evacuate both German POWs and Allied wounded, the wounded attacked the Germans, and it was only through the intervention of a pharmacist’s mate that nothing more serious happened.

A New Approach to Writing History

Beevor frequently quotes from personal memoirs of Allied soldiers that have been available to historians for years. But could it be that historians ignored them until now because they didn’t support the image of the “greatest generation,” the term that Americans have liked to use to describe their victorious soldiers of 1945? It would seem that no shadows were to be cast on the war that gave the Americans, in particular, the moral right to have a say in shaping Europe’s postwar future as well as creating the practical conditions for it to do so.

Still, that approach has recently been revised. In his 2007 book “The Day of Battle: The War in Sicily and Italy, 1943-1944,” Pulitzer Prize-winning author Rick Atkinson described various war crimes committed by the Allies. And now we have the same thing with Normandy.

Beevor primarily attributes the Allied crimes to the epic ferocity of the battles. The Germans themselves called it a “dirty bush war,” a reference to the bushes and hedgerows, ranging in height between one and three meters (three and ten feet), used to demarcate the fields in Normandy’s bocage landscape.

Indeed, Normandy’s terrain is ideally suited for ambushes and booby traps. For example, German units stretched thin steel cables across roads at head level, so that when an American Jeep came roaring down the road, its driver and passengers would be decapitated. They also attached hand grenades to the dog tags of dead GIs, so that anyone who tried to remove the dog tags was blown up. Likewise, it is an established fact that German soldiers, and particularly those in the Waffen SS, shot prisoners.

Allied Behavior Doesn’t Excuse Germany’s

The artillery fire from both sides and the Allied bombing attacks transformed Normandy into a moonscape. Beevor writes about soldiers who huddled in the craters screaming and weeping, while others walked around as if in a trance picking flowers in the midst of explosions. Indeed, American physicians reported 30,000 cases of combat neurosis among their troops alone.

In a letter to his family in Minnesota, a US infantryman wrote that he had never hated anything quite as much, adding: “And it’s not because of some blustery speech of a brass hat.”

But such “blustery speeches” did exist. According to the findings of German historian Peter Lieb, many Canadian and American units were given orders on D-Day to take no prisoners. If true, that might help explain the mystery of how only 66 of the 130 Germans the Americans took prisoner on Omaha Beach made it to collecting points for the captured on the beach.

It is also conspicuous that the Allies rarely captured members of the Waffen SS. Was it because the members of this organization — with its Totenkopf (death’s head) insignia — had sworn allegiance to Hitler until death and often fought to the last man? Or did the Allied propaganda about the SS have its desired effect on soldiers? “Many of them probably deserved to be shot in any case and know it,” a British XXX Corps report bluntly stated.

Of course, for German apologists, this new information shouldn’t be something to make them feel better about their own side’s behavior. In fact, although the extent of Allied war crimes may have been greater than previously known, it cannot be compared with the scope of German crimes against civilians. For example, shooting innocent hostages was part of the German strategy for fighting the French partisans who struck out after D-Day. Up to 16,000 French citizens — men, women and children — fell victim to the terror of the Wehrmacht and the SS.

__________

Full article and photos: http://www.spiegel.de/international/world/0,1518,692037,00.html

Court Puts Pressure on Germany to Open Adolf Eichmann Files

Guilty — Adolf Eichmann during his trial in Jerusalem in 1961.

Germany’s secret service has lost a court battle to keep secret thousands of potentially embarrassing files on Nazi criminal Adolf Eichmann. Even though it remains unclear when and how many of the files will be opened, the ruling sets a precedent that could force Germany to reveal the fu