The power of lonely

What we do better without other people around

You hear it all the time: We humans are social animals. We need to spend time together to be happy and functional, and we extract a vast array of benefits from maintaining intimate relationships and associating with groups. Collaborating on projects at work makes us smarter and more creative. Hanging out with friends makes us more emotionally mature and better able to deal with grief and stress.

Spending time alone, by contrast, can look a little suspect. In a world gone wild for wikis and interdisciplinary collaboration, those who prefer solitude and private noodling are seen as eccentric at best and defective at worst, and are often presumed to be suffering from social anxiety, boredom, and alienation.

But an emerging body of research is suggesting that spending time alone, if done right, can be good for us — that certain tasks and thought processes are best carried out without anyone else around, and that even the most socially motivated among us should regularly be taking time to ourselves if we want to have fully developed personalities, and be capable of focus and creative thinking. There is even research to suggest that blocking off enough alone time is an important component of a well-functioning social life — that if we want to get the most out of the time we spend with people, we should make sure we’re spending enough of it away from them. Just as regular exercise and healthy eating make our minds and bodies work better, solitude experts say, so can being alone.

One ongoing Harvard study indicates that people form more lasting and accurate memories if they believe they’re experiencing something alone. Another indicates that a certain amount of solitude can make a person more capable of empathy towards others. And while no one would dispute that too much isolation early in life can be unhealthy, a certain amount of solitude has been shown to help teenagers improve their moods and earn good grades in school.

“There’s so much cultural anxiety about isolation in our country that we often fail to appreciate the benefits of solitude,” said Eric Klinenberg, a sociologist at New York University whose book “Alone in America,” in which he argues for a reevaluation of solitude, will be published next year. “There is something very liberating for people about being on their own. They’re able to establish some control over the way they spend their time. They’re able to decompress at the end of a busy day in a city…and experience a feeling of freedom.”

Figuring out what solitude is and how it affects our thoughts and feelings has never been more crucial. The latest Census figures indicate there are some 31 million Americans living alone, which accounts for more than a quarter of all US households. And at the same time, the experience of being alone is being transformed dramatically, as more and more people spend their days and nights permanently connected to the outside world through cellphones and computers. In an age when no one is ever more than a text message or an e-mail away from other people, the distinction between “alone” and “together” has become hopelessly blurry, even as the potential benefits of true solitude are starting to become clearer.

Solitude has long been linked with creativity, spirituality, and intellectual might. The leaders of the world’s great religions — Jesus, Buddha, Mohammed, Moses — all had crucial revelations during periods of solitude. The poet James Russell Lowell identified solitude as “needful to the imagination”; in the 1988 book “Solitude: A Return to the Self,” the British psychiatrist Anthony Storr invoked Beethoven, Kafka, and Newton as examples of solitary genius.

But what actually happens to people’s minds when they are alone? As much as it’s been exalted, our understanding of how solitude actually works has remained rather abstract, and modern psychology — where you might expect the answers to lie — has tended to treat aloneness more as a problem than a solution. That was what Christopher Long found back in 1999, when as a graduate student at the University of Massachusetts Amherst he started working on a project to precisely define solitude and isolate ways in which it could be experienced constructively. The project’s funding came from, of all places, the US Forest Service, an agency with a deep interest in figuring out once and for all what is meant by “solitude” and how the concept could be used to promote America’s wilderness preserves.

With his graduate adviser and a researcher from the Forest Service at his side, Long identified a number of different ways a person might experience solitude and undertook a series of studies to measure how common they were and how much people valued them. A 2003 survey of 320 UMass undergraduates led Long and his coauthors to conclude that people felt good about being alone more often than they felt bad about it, and that psychology’s conventional approach to solitude — an “almost exclusive emphasis on loneliness” — represented an artificially narrow view of what being alone was all about.

“Aloneness doesn’t have to be bad,” Long said by phone recently from Ouachita Baptist University, where he is an assistant professor. “There’s all this research on solitary confinement and sensory deprivation and astronauts and people in Antarctica — and we wanted to say, look, it’s not just about loneliness!”

Today other researchers are eagerly diving into that gap. Robert Coplan of Carleton University, who studies children who play alone, is so bullish on the emergence of solitude studies that he’s hoping to collect the best contemporary research into a book. Harvard professor Daniel Gilbert, a leader in the world of positive psychology, has recently overseen an intriguing study that suggests memories are formed more effectively when people think they’re experiencing something individually.

That study, led by graduate student Bethany Burum, started with a simple experiment: Burum placed two individuals in a room and had them spend a few minutes getting to know each other. They then sat back to back, each facing a computer screen the other could not see. In some cases they were told they’d both be doing the same task, in other cases they were told they’d be doing different things. The computer screen scrolled through a set of drawings of common objects, such as a guitar, a clock, and a log. A few days later the participants returned and were asked to recall which drawings they’d been shown. Burum found that the participants who had been told the person behind them was doing a different task — namely, identifying sounds rather than looking at pictures — did a better job of remembering the pictures. In other words, they formed more solid memories when they believed they were the only ones doing the task.

The results, which Burum cautions are preliminary, are now part of a paper on “the coexperiencing mind” that was recently presented at the Society for Personality and Social Psychology conference. In the paper, Burum offers two possible theories to explain what she and Gilbert found in the study. The first invokes a well-known concept from social psychology called “social loafing,” which says that people tend not to try as hard if they think they can rely on others to pick up their slack. (If two people are pulling a rope, for example, neither will pull quite as hard as they would if they were pulling it alone.) But Burum leans toward a different explanation, which is that sharing an experience with someone is inherently distracting, because it compels us to expend energy on imagining what the other person is going through and how they’re reacting to it.

“People tend to engage quite automatically with thinking about the minds of other people,” Burum said in an interview. “We’re multitasking when we’re with other people in a way that we’re not when we just have an experience by ourselves.”

Perhaps this explains why seeing a movie alone feels so radically different from seeing it with friends: Sitting there in the theater with nobody next to you, you’re not wondering what anyone else thinks of it; you’re not anticipating the discussion that you’ll be having about it on the way home. All your mental energy can be directed at what’s happening on the screen. According to Greg Feist, an associate professor of psychology at San Jose State University who has written about the connection between creativity and solitude, some version of that principle may also be at work when we simply let our minds wander: When we let our focus shift away from the people and things around us, we are better able to engage in what’s called meta-cognition, or the process of thinking critically and reflectively about our own thoughts.

Other psychologists have looked at what happens when other people’s minds don’t just take up our bandwidth, but actually influence our judgment. It’s well known that we’re prone to absorb or mimic the opinions and body language of others in all sorts of situations, including those that might seem the most intensely individual, such as who we’re attracted to. While psychologists don’t necessarily think of that sort of influence as “clouding” one’s judgment — most would say it’s a mechanism for learning, allowing us to benefit from information other people have access to that we don’t — it’s easy to see how being surrounded by other people could hamper a person’s efforts to figure out what he or she really thinks of something.

Teenagers, especially, whose personalities have not yet fully formed, have been shown to benefit from time spent apart from others, in part because it allows for a kind of introspection — and freedom from self-consciousness — that strengthens their sense of identity. Reed Larson, a professor of human development at the University of Illinois, conducted a study in the 1990s in which adolescents outfitted with beepers were prompted at irregular intervals to write down answers to questions about who they were with, what they were doing, and how they were feeling. Perhaps not surprisingly, he found that when the teens in his sample were alone, they reported feeling a lot less self-conscious. “They want to be in their bedrooms because they want to get away from the gaze of other people,” he said.

The teenagers weren’t necessarily happier when they were alone; adolescence, after all, can be a particularly tough time to be separated from the group. But Larson found something interesting: On average, the kids in his sample felt better after they spent some time alone than they did before. Furthermore, he found that kids who spent between 25 and 45 percent of their nonclass time alone tended to have more positive emotions over the course of the weeklong study than their more socially active peers, were more successful in school and were less likely to self-report depression.

“The paradox was that being alone was not a particularly happy state,” Larson said. “But there seemed to be kind of a rebound effect. It’s kind of like a bitter medicine.”

The nice thing about medicine is it comes with instructions. Not so with solitude, which may be tremendously good for one’s health when taken in the right doses, but is about as user-friendly as an unmarked white pill. Too much solitude is unequivocally harmful and broadly debilitating, decades of research show. But one person’s “too much” might be someone else’s “just enough,” and eyeballing the difference with any precision is next to impossible.

Research is still far from offering any concrete guidelines. Insofar as there is a consensus among solitude researchers, it’s that in order to get anything positive out of spending time alone, solitude should be a choice: People must feel like they’ve actively decided to take time apart from people, rather than being forced into it against their will.

Overextended parents might not need any encouragement to see time alone as a desirable luxury; the question for them is only how to build it into their frenzied lives. But for the millions of people living by themselves, making time spent alone productive may require a different kind of effort. Sherry Turkle, director of the MIT Initiative on Technology and Self, argues in her new book, “Alone Together,” that people should be mindfully setting aside chunks of every day when they are not engaged in so-called social snacking activities like texting, g-chatting, and talking on the phone. For teenagers, it may help to understand that feeling a little lonely at times may simply be the price of forging a clearer identity.

John Cacioppo of the University of Chicago, whose 2008 book “Loneliness” with William Patrick summarized a career’s worth of research on all the negative things that happen to people who can’t establish connections with others, said recently that as long as it’s not motivated by fear or social anxiety, then spending time alone can be a crucially nourishing component of life. And it can have some counterintuitive effects: Adam Waytz in the Harvard psychology department, one of Cacioppo’s former students, recently completed a study indicating that people who are socially connected with others can have a hard time identifying with people who are more distant from them. Spending a certain amount of time alone, the study suggests, can make us less closed off from others and more capable of empathy — in other words, better social animals.

“People make this error, thinking that being alone means being lonely, and not being alone means being with other people,” Cacioppo said. “You need to be able to recharge on your own sometimes. Part of being able to connect is being available to other people, and no one can do that without a break.”

Leon Neyfakh is the staff writer for Ideas.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2011/03/06/the_power_of_lonely/

Beyond Understanding

I ought to have known better than to have lunch with a psychologist.

“Take you, for example,” he said. “You are definitely autistic.”

“What!?”

“I rest my case,” he shot back. “Q.E.D.”

His ironic point seemed to be that if I didn’t instantly grasp his point — which clearly I didn’t — then, at some level, I was exhibiting autistic tendencies.

Autism is often the subject of contentious and emotional debate, certainly because it manifests in the most vulnerable of humans — children. It is also hard to pin down; as a “spectrum disorder” it can take extreme and disheartening forms and exact a devastating toll on families. It is the “milder” or “high functioning” form, and the two main agreed-upon symptoms of sub-optimal social and communication skills, that I confine myself to here.

Simon Baron-Cohen, for example, in his book “Mindblindness,” argues that the whole raison d’être of consciousness is to be able to read other people’s minds; autism, in this context, can be defined as an inability to “get” other people, hence “mindblind.” 

A less recent but possibly related conversation took place during the viva voce exam Ludwig Wittgenstein was given by Bertrand Russell and G. E. Moore in Cambridge in 1929. Wittgenstein was formally presenting his “Tractatus Logico-Philosophicus,” an already well-known work he had written in 1921, as his doctoral thesis. Russell and Moore were respectfully suggesting that they didn’t quite understand proposition 5.4541 when they were abruptly cut off by the irritable Wittgenstein. “I don’t expect you to understand!” (I am relying on local legend here; Ray Monk’s biography of Wittgenstein has him, in a more clubbable way, slapping them on the back and bringing proceedings cheerfully to a close with the words, “Don’t worry, I know you’ll never understand it.”)

I have always thought of Wittgenstein’s line as (a) admittedly, a little tetchy (or in the Monk version condescending) but (b) expressing enviable self-confidence and (c) impressively devoid of deference (I’ve even tried to emulate it once or twice, but it never comes out quite right). But if autism can be defined, at one level, by a lack of understanding (verbal or otherwise), it is at least plausible that Wittgenstein is making (or at least implying) a broadly philosophical proposition here, rather than commenting, acerbically, on the limitations of these particular interlocutors. He could be read as saying:

Thank you, gentlemen, for raising the issue of understanding here. The fact is, I don’t expect people in general to understand what I have written. And it is not just because I have written something, in places, particularly cryptic and elliptical and therefore hard to understand, or even because it is largely a meta-discourse and therefore senseless, but rather because, in my view, it is not given to us to achieve full understanding of what another person says. Therefore I don’t expect you to understand this problem of misunderstanding either.

If Wittgenstein was making a statement along these lines, then it would provide an illuminating perspective in which to read the “Tractatus.” The persistent theme within it of “propositions which say nothing,” which we tend to package up under the heading of “the mystical,” would have to be rethought. Rather than clinging to a clear-cut divide between all these propositions — over here, the well-formed and intelligible (scientific) and over there, the hazy, dubious and mystical (aesthetic or ethical) — we might have to concede that, given the way humans interact with one another, there is always a potential mystery concealed within the most elementary statement. And it is harder than you think it is going to be to eliminate, entirely, the residue of obscurity, the possibility of misunderstanding lurking at the core of every sentence. Sometimes Wittgenstein thinks he has solved the problem, at others not (“The solution of the problem of life is seen in the vanishing of the problem,” he writes in “Tractatus.”) What do we make of those dense, elegiac and perhaps incomprehensible final lines, sometimes translated as “Whereof one cannot speak thereof one must remain silent”? Positioned as it is right at the end of the book (like “the rest is silence” at the end of “Hamlet”), proposition number 7 is apt to be associated with death or the afterlife. But translating it yet again into the sort of terms a psychologist would readily grasp, perhaps Wittgenstein is also hinting: “I am autistic” or “I am mindblind.” Or, to put it another way, autism is not some exotic anomaly but rather a constant.

I am probably misreading the text here — if I have understood it correctly, I must be misreading it. But Wittgenstein has frequently been categorized, in recent retrospective diagnoses, as autistic. Sula Wolff, for example, in “Loners: The Life Path of Unusual Children” (1995), analyzes Wittgenstein as a classic case of Asperger’s syndrome, so-called “high-functioning autism” — that is, being articulate, numerate and not visibly dysfunctional, but nevertheless awkward and unskilled in social intercourse. He is apt to get hold of the wrong end of the stick (not to mention the poker that he once waved aggressively at Karl Popper). An illustrative true story: he is dying of cancer; it is his birthday; his cheerful landlady comes in and wishes him “Many happy returns, Mr. Wittgenstein”; he snaps back, “There will be no returns.”

Wittgenstein, not unlike someone with Asperger’s, admits to having difficulty working out what people are really going on about. In “Culture and Value” (1914) he writes: “We tend to take the speech of a Chinese for inarticulate gurgling. Someone who understands Chinese will recognize language in what he hears. Similarly I often cannot recognize the humanity of another human being.” Which might also go some way towards explaining his remark (in the later “Philosophical Investigations”) that even if a lion could speak English, we would still be unable to understand him.

Wittgenstein is not alone among philosophers in being included in this category of mindblindness. Russell, for one, has also been labeled autistic. Taking this into account, it is conceivable that Wittgenstein is saying to Russell, when he tells him that he doesn’t expect him to understand, “You are autistic!” Or (assuming a handy intellectual time machine), “If I am to believe Wolff and others, we are autistic. Perhaps all philosophers are. It is why we end up studying philosophy.”

I don’t want to maintain that all philosophers are autistic in this sense. Perhaps not even that “You don’t have to be autistic, but it helps.” And yet there are certainly episodes and sentences associated with philosophers quite distinct from Wittgenstein and Russell that might lead us to think in that way.

Consider, for example, Sartre’s classic one-liner, “Hell is other people.” Wouldn’t autism, with its inherent poverty of affective contact, go some way towards accounting for that? The fear of faces and the “gaze of the other” that Sartre analyzes are classic symptoms. Sartre recognized this in himself and in others as well: he explicitly describes Flaubert as “autistic” in his great, sprawling study of the writer, “The Family Idiot,” and also asserts that “Flaubert c’est moi.” Sartre’s theory — that Flaubert starts off autistic, and that everything he writes afterwards (trying to work out what is in Madame Bovary’s mind, for example) is a form of compensation or rectification — could easily apply to his own work.

One implication of what a psychologist might say about autism goes something like this: you, a philosopher, are mindblind and liable to take up philosophy precisely because you don’t “get” what other people are saying to you. You, like Wittgenstein, have a habit of hearing and seeing propositions, but feeling that they say nothing (as if they were rendered in Chinese). In other words, philosophy would be a tendency to interpret what people say as a puzzle of some kind, a machine that may or may not work.

I think this helps to explain Wittgenstein’s otherwise slightly mysterious advice, to the effect that if you want to be a good philosopher, you should become a car mechanic (a job Wittgenstein actually held during part of the Great War). It was not just some notion of getting away from the study of previous philosophers, but also the idea that working on machines would be a good way of thinking about language. Wittgenstein, we know, came up with his preliminary model of language while studying court reports of a car accident in Paris during the war. The roots of picture theory (the model used in court to portray the event) and ostensive definition (all those little arrows and labels) are all here. But at the core of the episode are two machines and a collision. Perhaps language can be seen as a car, a vehicle of some kind, designed to get you from A to B, carrying a certain amount of information, but apt to get stuck in jams or break down or crash; and which will therefore need fixing. Wittgenstein and the art of car maintenance. This car mechanic conception of language is just the sort of thing high-functioning autistic types would come up with, my psychologist friend might say, because they understand “systems” better than they understand people. They are “(hyper-)systemizers” not “empathizers.” The point I am not exactly “driving” at but rather skidding into, and cannot seem to avoid, is this: indisputably, most car mechanics are men. 

My psychologist friend assured me that I was not alone. “Men tend to be autistic on average. More so than women.” The accepted male-to-female ratio for autism is roughly 4-to-1; for Asperger’s the ratio jumps even higher, by some accounts 10-to-1 (other statistics give higher or lower figures but retain the male prevalence). Asperger himself wrote that the autistic mind is “an extreme variant of male intelligence”; Baron-Cohen argues that “the extreme male brain” (not exclusive to men) is the product of an overdose of fetal testosterone.

If Wittgenstein in his conversation with Russell is suggesting that philosophers are typically autistic in a broad sense, this view might explain (in part) the preponderance of male philosophers. I went back over several sources to get an idea of the philosophical ratio: in Russell’s “History of Western Philosophy” it is about 100-to-1, in Critchley’s “Book of Dead Philosophers” 30-to-1, while in the realm of the living — among the contributors to The Stone, for example — it narrows to more like 4-to-1.

A psychologist might say something like: “Q.E.D., philosophy is all about systemizing (therefore male) and cold, hard logic, whereas the empathizers (largely female) seek out more humane, less mechanistic havens.” I would like to offer a slightly different take on the evidence. Plato took the view (in Book V of “The Republic”) that women were just as philosophical as men and would qualify to become the philosopher “guardians” of the ideal Greek state of the future (in return they would have to learn to run around naked at the gym). It seems likely that women were among the pre-Socratic archi-philosophers. But they were largely oracular. They tended to speak in riddles. The point of philosophy from Aristotle onwards was to resolve and abolish the riddle.

But perhaps the riddle is making a comeback. Understanding can be coercive and suffocating. Do I really have to be quite so “understanding”? Isn’t that the same as being masochistically subservient? And isn’t it just another aspect of your hegemony to claim to understand me quite so well? Simone de Beauvoir was exercising her right to what I would like to call autismo when she wrote that “one is not born a woman but becomes one.” Similarly, when she emblazons her first novel, “She Came to Stay,” with an epigraph derived from Hegel — “every consciousness seeks the death of the other” — and her philosophical avatar takes it upon herself to bump off the provincial young woman she has invited to stay in Paris: I refuse to understand, to be a mind-reader. Conversely, when Luce Irigaray, the feminist theorist and philosopher, speaks — again paradoxically — of “this sex which is not one,” she is asking us to think twice about our premature understanding of gender — what Wittgenstein might call a case of “bewitchment.”

The study of our psychopathology, via cognitive neuroscience, suggests a hypothetical history. Why does language arise? It arises because of the scope for misunderstanding. Body language, gestures, looks, winks, are not quite enough. I am not a mind-reader. I don’t understand. We need noises and written signs, speech-acts, the Word, logos. If you tell me what you want, I will tell you what I want. Language is a system that arises to compensate for an empathy deficit. But with or without language, I can still exhibit traits of autism. I can misread the signs. Perhaps it would be more exact to say that autism only arises, is only identified, at the same time as there is an expectation of understanding. But if autism is a problem, from certain points of view, autismo is also a solution: it is an assertion that understanding itself can be overvalued.

It is a point that Wittgenstein makes memorably in the introduction to the “Tractatus,” in which he writes:

I therefore believe myself to have found, on all essential points, the final solution of the problems [of philosophy]. And if I am not mistaken in this belief … it shows how little is achieved when these problems are solved.

Which is why he also suggests, at the end of the book, that anyone who has climbed up his philosophical ladder should throw it away.

Andy Martin is currently completing “Philosophy Fight Club: Sartre vs. Camus,” to be published by Simon and Schuster. He was a 2009-10 fellow at the Cullman Center for Scholars and Writers in New York, and teaches at Cambridge University.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/21/beyond-understanding

Should This Be the Last Generation?

Have you ever thought about whether to have a child? If so, what factors entered into your decision? Was it whether having children would be good for you, your partner and others close to the possible child, such as children you may already have, or perhaps your parents? For most people contemplating reproduction, those are the dominant questions. Some may also think about the desirability of adding to the strain that the nearly seven billion people already here are putting on our planet’s environment. But very few ask whether coming into existence is a good thing for the child itself. Most of those who consider that question probably do so because they have some reason to fear that the child’s life would be especially difficult — for example, if they have a family history of a devastating illness, physical or mental, that cannot yet be detected prenatally.

All this suggests that we think it is wrong to bring into the world a child whose prospects for a happy, healthy life are poor, but we don’t usually think the fact that a child is likely to have a happy, healthy life is a reason for bringing the child into existence. This has come to be known among philosophers as “the asymmetry” and it is not easy to justify. But rather than go into the explanations usually proffered — and why they fail — I want to raise a related problem. How good does life have to be, to make it reasonable to bring a child into the world? Is the standard of life experienced by most people in developed nations today good enough to make this decision unproblematic, in the absence of specific knowledge that the child will have a severe genetic disease or other problem? 

The 19th-century German philosopher Arthur Schopenhauer held that even the best life possible for humans is one in which we strive for ends that, once achieved, bring only fleeting satisfaction. New desires then lead us on to further futile struggle and the cycle repeats itself.

Schopenhauer’s pessimism has had few defenders over the past two centuries, but one has recently emerged, in the South African philosopher David Benatar, author of a fine book with an arresting title: “Better Never to Have Been: The Harm of Coming into Existence.” One of Benatar’s arguments trades on something like the asymmetry noted earlier. To bring into existence someone who will suffer is, Benatar argues, to harm that person, but to bring into existence someone who will have a good life is not to benefit him or her. Few of us would think it right to inflict severe suffering on an innocent child, even if that were the only way in which we could bring many other children into the world. Yet everyone will suffer to some extent, and if our species continues to reproduce, we can be sure that some future children will suffer severely. Hence continued reproduction will harm some children severely, and benefit none.

Benatar also argues that human lives are, in general, much less good than we think they are. We spend most of our lives with unfulfilled desires, and the occasional satisfactions that are all most of us can achieve are insufficient to outweigh these prolonged negative states. If we think that this is a tolerable state of affairs it is because we are, in Benatar’s view, victims of the illusion of pollyannaism. This illusion may have evolved because it helped our ancestors survive, but it is an illusion nonetheless. If we could see our lives objectively, we would see that they are not something we should inflict on anyone.

Here is a thought experiment to test our attitudes to this view. Most thoughtful people are extremely concerned about climate change. Some stop eating meat, or flying abroad on vacation, in order to reduce their carbon footprint. But the people who will be most severely harmed by climate change have not yet been conceived. If there were to be no future generations, there would be much less for us to feel guilty about.

So why don’t we make ourselves the last generation on earth? If we would all agree to have ourselves sterilized then no sacrifices would be required — we could party our way into extinction!

Of course, it would be impossible to get agreement on universal sterilization, but just imagine that we could. Then is there anything wrong with this scenario? Even if we take a less pessimistic view of human existence than Benatar, we could still defend it, because it makes us better off — for one thing, we can get rid of all that guilt about what we are doing to future generations — and it doesn’t make anyone worse off, because there won’t be anyone else to be worse off.

Is a world with people in it better than one without? Put aside what we do to other species — that’s a different issue. Let’s assume that the choice is between a world like ours and one with no sentient beings in it at all. And assume, too — here we have to get fictitious, as philosophers often do — that if we choose to bring about the world with no sentient beings at all, everyone will agree to do that. No one’s rights will be violated — at least, not the rights of any existing people. Can non-existent people have a right to come into existence?

I do think it would be wrong to choose the non-sentient universe. In my judgment, for most people, life is worth living. Even if that is not yet the case, I am enough of an optimist to believe that, should humans survive for another century or two, we will learn from our past mistakes and bring about a world in which there is far less suffering than there is now. But justifying that choice forces us to reconsider the deep issues with which I began. Is life worth living? Are the interests of a future child a reason for bringing that child into existence? And is the continuance of our species justifiable in the face of our knowledge that it will certainly bring suffering to innocent future human beings?

What do you think?

Readers are invited to respond to the following questions in the comment section below:

If a child is likely to have a life full of pain and suffering is that a reason against bringing the child into existence?

If a child is likely to have a happy, healthy life, is that a reason for bringing the child into existence?

Is life worth living, for most people in developed nations today?

Is a world with people in it better than a world with no sentient beings at all?

Would it be wrong for us all to agree not to have children, so that we would be the last generation on Earth?

Peter Singer is Professor of Bioethics at Princeton University and Laureate Professor at the University of Melbourne. His most recent book is “The Life You Can Save.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/06/06/should-this-be-the-last-generation/

Friendship in an Age of Economics

When I was 17 years old, I had the honor of being the youngest person in the history of New York Hospital to undergo surgery for a herniated disc. This was at a time in which operations like this kept people in the hospital for over a week. The day after my surgery, I awoke to find a friend of mine sitting in a chair across from my bed. I don’t remember much about his visit. I am sure I was too sedated to say much. But I will not forget that he visited me on that day, and sat there for I know not how long, while my humanity was in the care of a morphine drip. 

The official discourses of our relations with one another do not have much to say about the afternoon my friend spent with me. Our age, what we might call the age of economics, is in thrall to two types of relationships which reflect the lives we are encouraged to lead. There are consumer relationships, those that we participate in for the pleasure they bring us. And there are entrepreneurial relationships, those that we invest in hoping they will bring us some return. In a time in which the discourse of economics seeks to hold us in its grip, this should come as no surprise.

The encouragement toward relationships of consumption is nowhere more prominently on display than in reality television. Jon and Kate, the cast of “Real World,” the Kardashians, and their kin across the spectrum conduct their lives for our entertainment. It is available to us in turn to respond in a minor key by displaying our own relationships on YouTube. Or, barring that, we can collect friends like shoes or baseball cards on Facebook.

Entrepreneurial relationships have, in some sense, always been with us. Using people for one’s ends is not a novel practice. It has gained momentum, however, as the reduction of governmental support has diminished social solidarity and the rise of finance capitalism has stressed investment over production. The economic fruits of the latter have lately been with us, but the interpersonal ones, while more persistent, remain veiled. Where nothing is produced except personal gain, relationships come loose from their social moorings.

Aristotle thought that there were three types of friendship: those of pleasure, those of usefulness, and true friendship. In friendships of pleasure, “it is not for their character that men love ready-witted people, but because they find them pleasant.” In friendships of usefulness, “those who love each other for their utility do not love each other for themselves but in virtue of some good which they get from each other.” For him, the first is characteristic of the young, who are focused on momentary enjoyment, while the second is often the province of the old, who need assistance to cope with their frailty. What the rise of recent public rhetoric and practice has accomplished is to cast the first two in economic terms while forgetting about the third.

In our lives, however, few of us have entirely forgotten about the third — true friendship. We may not define it as Aristotle did — friendship among the already virtuous — but we live it in our own way nonetheless. Our close friendships stand as a challenge to the tenor of our times.

Conversely, our times challenge those friendships. This is why we must reflect on friendship: so that it doesn’t slip away from us under the pressure of a dominant economic discourse. We are all, and always, creatures of our time. In the case of friendship, we must push back against that time if we are to sustain what, for many of us, are among the most important elements of our lives. It is those elements that allow us to sit by the bedside of a friend: not because we know it is worth it, but because the question of worth does not even arise.

There is much that might be said about friendships. They allow us to see ourselves from the perspective of another. They open up new interests or deepen current ones. They offer us support during difficult periods in our lives. The aspect of friendship that I would like to focus on is its non-economic character. Although we benefit from our close friendships, these friendships are not a matter of calculable gain and loss. While we draw pleasure from them, they are not a matter solely of consuming pleasure. And while the time we spend with our friends and the favors we do for them are often reciprocated in an informal way, we do not spend that time or offer those favors in view of the reciprocation that might ensue.

Friendships follow a rhythm that is distinct from that of either consumer or entrepreneurial relationships. This is at once their deepest and most fragile characteristic. Consumer pleasures are transient. They engulf us for a short period and then they fade, like a drug. That is why they often need to be renewed periodically. Entrepreneurship, when successful, leads to the victory of personal gain. We cultivate a colleague in the field or a contact outside of it in the hope that it will advance our career or enhance our status. When it does, we feel a sense of personal success. In both cases, there is the enjoyment of what comes to us through the medium of other human beings.

Friendships worthy of the name are different. Their rhythm lies not in what they bring to us, but rather in what we immerse ourselves in. To be a friend is to step into the stream of another’s life. It is, while not neglecting my own life, to take pleasure in another’s pleasure, and to share their pain as partly my own. The borders of my life, while not entirely erased, become less clear than they might be. Rather than the rhythm of pleasure followed by emptiness, or that of investment and then profit, friendships follow a rhythm that is at once subtler and more persistent. This rhythm is subtler because it often (although not always) lacks the mark of a consumed pleasure or a successful investment. But even so, it remains there, part of the ground of our lives that lies both within us and without.

To be this ground, friendships have a relation to time that is foreign to an economic orientation. Consumer relationships are focused on the momentary present. It is what brings immediate pleasure that matters. Entrepreneurial relationships have more to do with the future. How I act toward others is determined by what they might do for me down the road. Friendships, although lived in the present and assumed to continue into the future, also have a deeper tie to the past than either of these. Past time is sedimented in a friendship. It accretes over the hours and days friends spend together, forming the foundation upon which the character of a relationship is built. This sedimentation need not be a happy one. Shared experience, not just common amusement or advancement, is the ground of friendship.

Of course, to have friendships like this, one must be prepared to take up the past as a ground for friendship. This ground does not come to us, ready-made. We must make it our own. And this, perhaps, is the contemporary lesson we can draw from Aristotle’s view that true friendship requires virtuous partners, that “perfect friendship is the friendship of men who are good.” If we are to have friends, then we must be willing to approach some among our relationships as offering an invitation to build something outside the scope of our own desires. We must be willing to forgo pleasure or usefulness for something that emerges not within but between one of us and another.

We might say of friendships that they are a matter not of diversion or of return but of meaning. They render us vulnerable, and in doing so they add dimensions of significance to our lives that can only arise from being, in each case, friends with this or that particular individual, a party to this or that particular life.

It is precisely this non-economic character that is threatened in a society in which each of us is thrown upon his or her resources and offered only the bywords of ownership, shopping, competition, and growth. It is threatened when we are encouraged to look upon those around us as the stuff of our current enjoyment or our future advantage. It is threatened when we are led to believe that friendships without a recognizable gain are, in the economic sense, irrational. Friendships are not without why, perhaps, but they are certainly without that particular why.

In turn, however, it is friendship that allows us to see that there is more than what the prevalent neoliberal discourse places before us as our possibilities. In a world often ruled by the dollar and what it can buy, friendship, like love, opens other vistas. The critic John Berger once said of one of his friendships, “We were not somewhere between success and failure; we were elsewhere.” To be able to sit by the bed of another, watching him sleep, waiting for nothing else, is to understand where else we might be.

Todd May is a professor of philosophy at Clemson University. He is the author of 10 books, including “The Philosophy of Foucault” and “Death,” and is at work on a book about friendship in the contemporary period.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/07/04/friendship-in-an-age-of-economics/

Two Friendships: A Response

Earlier columns in The Stone have raised the question of what philosophy is. Surely among its tasks is to think about matters that are at once urgent, personal and of general significance. When one is lucky, one finds interlocutors who are willing to share that thought, add to it in one way or another, or suggest a different direction. In the comments from readers of my earlier post, “Friendship in an Age of Economics,” I have been fortunate.

I would like to linger over two friendships described in the comments. One, offered by Echo from Santa Cruz, describes a life-long friendship with someone from whom she was physically separated for many years, and who eventually died of cancer. (I am inferring from the context of her comment that Echo, as in Greek mythology, is a woman, though she never says so explicitly.) The other is from E. Kelley Harris in Slovakia, who recounts the example of an intimidating seaman named Frank with whom, over the course of intense theological dispute, a moment of intimacy arose in an unexpected way.

The friendship described by Echo is one that many of us will find examples of in our own lives. I am still close friends with the person who sat by my bedside 38 years ago, even though we live far from each other. Regarding her friendship, Echo comments, “There was no work to that friendship. Our instincts told us what to do, in the same way as a new mother takes her child and holds it to her breast.” I am sure Echo would agree with me that a friendship without work is not something that is given; it is an achievement. Friendships take time. They must be cultivated, sometimes when one is in the mood, sometimes when one is not. That is part of their non-economic character. What Echo describes in personal language is an achieved friendship, one that likely started with a spark, but has been tended over the years and allowed the two friends to continue sharing with each other up to the end of one of their lives.

Several comments insisted that one would never become friends with someone unless there was something to be gained. This is certainly true. Close friendships are not simply exercises in altruism. Friendships that come to resemble relationships between donors and recipients begin to fray. Eventually they come to look like something other than friendships. The non-economic character of friendship does not lie in its altruism, but in its lack of accounting. We are friends not solely because you amuse me or assist me, but more deeply because we have rooted ourselves together in a soil we have both agreed to cultivate. Echo has provided an example of the fruit of that cultivation.

What E. Kelley depicts is a more unlikely friendship between someone who can best be described as a bully and another person, the author, who found himself in the unenviable position of bunk mate. Over time, passionate theological conversation developed between them, leading to a moment when the author put himself in a vulnerable position before the bully, who declined to play his expected role. As with Echo’s example, there is the accretion of shared time that is necessary for that moment to occur. It would hardly have happened the first night Frank stepped from the brig. But there is something else as well. There is the development of aspects of oneself that otherwise might have gone neglected or even unrecognized. E. Kelley displayed a kind of courage that seemed even to surprise him, and Frank lent himself to passionate discussion without having to overpower his conversational adversary. This is what I meant when I wrote in my column that in close friendships we step into the stream of another’s life.

One might say that there is, among seamen — as among military personnel and those facing collective harm generally — a motivation for a common bond that helped drive the two together. BlueGhost from Iowa says this explicitly in his discussion of his son’s decision to join the military. If so, this would be another example of the idea in the column that we are always creatures of our time and our circumstances. There were some who worried that in criticizing the consumer and entrepreneurial models of friendship, I might be suggesting that there was a previous period in which friendships were better or more pure. That would be, as the comments noted, naïve. Each age has its context, and people in that age — or in one specific aspect of it — cannot escape engaging with the themes of that context, its motifs and parameters. Consumerism and entrepreneurship are dominant themes of our age; if my column is right, they are a threat to our friendships. Other ages have had different themes and their friendships different dangers.

There is, of course, much more to be said about how consumerism and entrepreneurship endanger our friendships. I neglected to do so in the column because my goal in that short space was not so much critique as a description or a reminder of how we often still participate in relationships whose value is not the subject of most of our public discourse about them. To trace the development of consumerism and entrepreneurship in their particular character over the past 30 or 40 years, as well as their effects on our relationships, would require a much longer discussion as well as an engagement with many contemporary theorists and social scientists — basically, a book. What I counted on in the column was that there would be a resonance among readers for what was being suggested. If the comments are any indication, I was fortunate there as well.

A last note. Several comments suggested that there may be other ways to characterize friendship than by appeal to the Aristotelean distinctions I invoked. This is undoubtedly true. It is also true that there is a certain oversimplification to any categorization of friendship. There is more to Echo’s and E. Kelley’s friendships than the themes I have isolated here. What Aristotle offers us — and this over two millennia after his death — are tools that help us think about ourselves. It is not that there are three and only three types of friendships. Rather, in thinking about Aristotle’s categories of friendship in the context of our time we can begin to see ourselves and our relationships more clearly than we might otherwise. This is also true of many other philosophers, a number of whose names were invoked in the comments. It is what philosophers who stand the test of time offer us: not rigid categories to which we must conform, but instead ways of making sense of ourselves and our lives, of considering who we are, where we are, and what we might become.

Todd May is a professor of philosophy at Clemson University. He is the author of 10 books, including “The Philosophy of Foucault” and “Death,” and is at work on a book about friendship in the contemporary period.

__________

Full article: http://opinionator.blogs.nytimes.com/2010/07/13/two-friendships-a-response/

Beyond the Veil: A Response

I’m extraordinarily grateful to the many people who posted comments on my piece, “Veiled Threats?”  I note that many have come from educated and active Muslim women (in countries ranging from the U. S. to India), who have expressed a sense of “relief” at having their convictions and voices taken seriously.

I’ll begin my reply with a story.  The day my article came out, I went to a White Sox game (the one in which my dear team took over first place!).  I was there with two friends from Texas and my son-in-law, who was born in Germany and now has a green card.  So, in Chicago terms, we were already a heterogeneous lot.  Behind me was a suburban dad with shoulder-length gray hair (an educated, apparently affluent ex-hippie, like the “Bobos” of David Brooks’s book), who took pleasure in explaining the finer points of the game (like the suicide squeeze) to his daughter and two other preteen girls in fashionable sundresses.  On our right was a sedate African-American couple, the woman holding a bag that marked her as working for the “U. S. Census Religion subcommittee” of her suburban county.  In front of us were three Orthodox Jewish boys, ages around 6, 10, and 18, their tzizit (ritual fringes) showing underneath their Sox shirts, and cleverly double-hatted so that they could doff their Sox caps during the national anthem, while still retaining their kipot.  Although this meant that they had not really bared their heads for the Anthem, not one person gave them an ugly stare or said, “Take off your hat!” — or, even worse, “Here we take off our hats.”  Indeed, nobody apart from me seemed to notice them at all.

I don’t always feel patriotic, but I did then.  I would not encourage a child or relative of mine to wear tzizit or, outside of temple, a kipoh.  I’m a Reform Jew, and I view these things as totemism and fetishism.  But I would not offend strangers by pointing that out, or acquaintances unless they were friends who had asked my advice.  And that’s the way I feel about most of these things: it’s not my business.  Luckily, a long-dominant tradition in American culture and law agrees with me.  From the time when Quakers and Mennonites refused to doff their hats, and when both Mennonites and Amish adopted “pre-modern” dress, we Americans are pretty comfortable with weird clothes, and used to the idea that people’s conscientious observances frequently require them to dress in ways that seem strange or unpleasant to the majority.  To the many people who wrote about how immigrants have to learn to fit in, I ask: what would you have liked to see at that ball game?  The scene I witnessed, or three Jewish boys being ejected from the park because they allegedly failed to respect the flag?  (And note that, like most minorities, they did show respect in the way they felt they could, without violating their conscience.)

Before addressing a series of points raised in the comments, two prefatory notes:

1.  Throughout, I am speaking only about liberal democracies, not about autocratic regimes.  It’s only in such democracies that liberty of conscience is a reality anyway, so I think that examples of autocracy in Saudi Arabia are beside the point.  We’re talking about what limits liberal democracies may reasonably impose on freedom of conscience and expression while remaining consistent with their basic principles.

2.  To those who described me as in an “ivory tower,” let me point out that I have spent many years working in international development organizations and that I have particularly close ties with India, home to the second-largest Muslim population in the world (the largest being in Indonesia).  I’ve written a book about interreligious violence in India (“The Clash Within: Democracy, Religious Violence, and India’s Future,” 2007), which turns out to be largely a story of Hindu neo-fascist organizations fomenting violence against Muslims.  So in fact I am not in the ivory tower so far as these issues are concerned, and I’ve spent many years working with organizations that foster education and other opportunities for poor women.

All right, now to my argument.   Remember that my contention was that pursuit of conscientious commitments is a very important human interest, closely linked to human dignity, which can rightly be limited only by a “compelling state interest.”  I then went on to argue that none of the interests standardly brought forward against the burqa is compelling, and, moreover, that any ban on the burqa in response to these reasons would require banning other common practices that nobody objects to because of their familiarity.   As Annie rightly summarizes (126): “Hypocrisy isn’t democratic.”

1. The position of the Catholic Church. Stephen O’Brien points out helpfully that the “Catechism of the Catholic Church,” in sections dealing with religious liberty and conscience (sections 1782 and 2106) takes a position that has been used by the Catholic Church in France to oppose a ban on the burqa.   O’Brien and I once acted in a play together, during the time that both of us were undergraduates at N.Y.U., and in fact we had an intense argument about propriety in dress, which turned into a lasting collegial relationship.  So I thank him for his intervention and his urging my  study of the Catechism!

2. The special case of France.  I did not discuss France in my piece, but since some readers did, let me comment.  The French policy of laïcité does indeed lead to restrictions on a wide range of religious manifestations, all in the name of a total separation of church and state.  But if one looks closely, the restrictions are unequal and discriminatory.  The school dress code forbids the Muslim headscarf and the Jewish yarmulke, along with “large” Christian crosses.  But this is a totally unequal burden, because the first two items of clothing are religiously obligatory for observant members of those religions, and the third is not: Christians are under no religious obligation to wear any cross, much less a “large” one.   So there is discrimination inherent in the French system.

Would French secularism be acceptable if practiced in an even-handed way?  According to U.S. constitutional law, government may not favor religion over non-religion, or non-religion over religion.  For example, it was unconstitutional for the University of Virginia to announce that it would use student fees to fund all other student organizations (political, environmental, and so forth) but not the religious clubs (Rosenberger v. Rector and Visitors of the University of Virginia, 515 U. S. 819 (1995)).  I must say that I prefer this balanced policy to French laïcité; I think it is fairer to religious people.  Separation is not total, even in France: thus, a fire in a church would still be put out by the public fire department; churches still get the use of the public water supply and the public sewer system.  Still, the amount and type of separation that the French system mandates, while understandable historically, looks unfair in the light of the principles I have defended.

3. Terrorism and safety.  A number of the commenters think that the burqa creates unique risks of various sorts, particularly in the context of the legitimate interest in preventing acts of terrorism.  All I can say is that if I were a terrorist in the U. S. or Europe, and if I were not stupid, the last thing I would wear would be a burqa, since that way of dressing attracts suspicious attention.  Criminals usually want not to attract suspicious attention; if they are at all intelligent, they succeed.  I think I’d dress like Martha Nussbaum in the winter: floor-length Eddie Bauer down coat, hat down over the eyebrows, extra hood for insulation, and a bulky Indian shawl around nose and mouth.  Nonetheless, I have never been asked to remove these clothes, in a department store, a public building, or even a bank.  Bank workers do look at my ID documents, though, and I’ve already said that at this stage in our technological development I think it is a reasonable request that ID documents contain a full-face photo.  (Moreover, I’ve been informed by my correspondents that most contemporary Islamic scholars agree: a woman can and must remove her niqab for visual identification if so requested.)  In the summer, again if I were an intelligent sort of terrorist, I would wear a big floppy hat and a long, loose caftan, and I think I’d carry a capacious Louis Vuitton bag, the sort that signals conspicuous consumption.  That is what a smart terrorist would do, and the smart ones are the ones to worry about.

So, what to do about the threat that all bulky and non-revealing clothing creates?  Airline security does a lot with metal detectors, body imaging, pat-downs, etc.  (One very nice system is at work in India, where all passengers get a full manual pat-down, but in a curtained booth by a member of the same sex who is clearly trained to be courteous and respectful.)  The White Sox stadium searches all bags (though more to check for beer than for explosives, thus protecting the interests of in-stadium vendors).  Private stores or other organizations that feel that bulky clothing is a threat (whether of shoplifting or terrorism or both) could institute a nondiscriminatory rule banning, e.g., floor-length coats; they could even have a body scanner at the door.  But they don’t, presumably preferring customer friendliness to the extra margin of safety.  What I want to establish, however, is the invidious discrimination inherent in the belief that the burqa poses a unique security risk.  Reasonable security policies, applied to similar cases similarly, are perfectly fine.

4. Depersonalization and respect for persons. Several readers made the comment that the burqa is objectionable because it portrays women as non-persons.  Is this plausible?  Isn’t our poetic tradition full of the trope that eyes are the windows of the soul?  And I think this is just right: contact with another person, as individual to individual, is made primarily through eyes, not nose or mouth.  Once during a construction project that involved a lot of dust in my office, I (who am prone to allergies and vain about my singing voice and the state of my hair) had to cover everything but my eyes while talking to students for a longish number of weeks.  At first they found it quite weird, but soon they were asking me how they could get a mask and filter scarf like the ones I was using.  My personality did not feel stifled, nor did they feel that they could not access my individuality.

More generally, I think one should listen to what women who wear the burqa say they think it means before opining.  Even if one feels convinced that depersonalization is what’s going on, that might be a reason for not liking that mode of dress, but why would it be a reason for banning it?  If the burqa were uniquely depersonalizing, we could at least debate this point: but, as I pointed out, a lot of revealing clothing is plausibly seen as a way of marketing women as sex objects, and that is itself a form of depersonalization.  The feminist term is “objectification,” and it has long been plausibly maintained that a lot of advertising and other aspects of culture objectify women, treat them as sex objects rather than as full persons.  The models in porn (whether films or photos) are usually not conspicuous for their rich individuality.  (Indeed, in the light of the tremendous cultural pressure to market oneself as a sex object, one might feel that wearing a lot of covering is a way of resisting that demand or insisting on intimacy.)  In any case, what business is it of government to intervene, if there is no clear public interest in burdening liberty of conscience in this way?

At this point, I want to address the point about respect raised by Amy (115).  I agree with her that we needn’t approve of the forms of dress that others choose, or of any other religious observance.  We may judge them ridiculous, or revolting, or even hateful.  I do think that one should try to understand first before coming to such a judgment, and I think that in most cases one should not give one’s opinion about the way a person is dressed unless someone has asked for it.  But of course any religious ceremony that expresses hatred for another group (a racist religion, say) is deeply objectionable, and one can certainly protest that, as usually happens when the KKK puts on a show somewhere these days.

I do not think that the burqa is a symbol of hatred, and thus it is not something that it would be reasonable to find deeply hateful.  It is more like the boys and their tzitzit, something I may feel out of tune with, but which it is probably nosy to denounce unless a friend has asked my opinion.  Still, if Amy wants to say that it is deeply objectionable, and that she does not respect it, that does not in any way conflict with the principles I expressed in my article.  Her intervention prompts me to make a necessary clarification.  I am not saying that all religious activities ought to be respected.  Equal respect, in my view, is rightly directed at the person, and the core of human dignity in the person, which I hope Amy will agree all these people still have.  Respecting their equal human dignity and equal human rights means giving them space to carry out their conscientious observances, even if we think that those are silly or even disgusting.  Their human dignity gives them the right to be wrong, we might say.  One religion that makes me cringe is an evangelical sect that requires its members to handle poisonous snakes (the subject of long litigation).  I find that one bizarre, I would never go near it, and I tend to find the actions involved disgusting.  But that does not mean that I don’t respect the people as bearers of equal human rights and human dignity.  Because they have equal human rights and human dignity, they get to carry on their religion unless there is some compelling government interest against it.  The long litigation concerned just that question.  Since the religion kept non-consenting adults and all children far away from the snakes, it was not an easy question.  In the end, a cautious government decided to intervene (Swann v. Pack, 527 S. W. 2d 99 (Tenn. 1975)).  But that did not mean that they did not show equal respect for the snake-handlers as human beings and bearers of human dignity and human rights.

What respect for persons requires, then, is that people have equal space to exercise their conscientious commitments, not that others like or even respect what they do in that space.  Furthermore, equal respect for persons is compatible, as I said, with limiting religious freedom in the case of a “compelling state interest.”  In the snake-handler case, the interest was in public safety.  Another government intervention that was right, in my view, was the judgment that Bob Jones University should lose its tax exemption for its ban on interracial dating (Bob Jones v. U. S., 461 U. S. 574 (1983)).  Here the Supreme Court agreed that the ban was part of that sect’s religion, and thus that the loss of tax-exempt status was a “substantial burden” on the exercise of that religion, but it held that society has a compelling interest in not cooperating with racism.  Never has the government taken similar steps against the many Roman Catholic universities that restrict their presidencies to a priest, hence a male; but in my view they should all lose their tax exemptions for this reason.  (The compelling interest standard is difficult to delineate, and courts can get it wrong, which is one reason why Justice Scalia prefers the Lockean position.)

Why is the burqa different from the case of Bob Jones University?  First, of course, government was not telling Bob Jones that it could not continue with its policy; it was just refusing to give the university a huge financial reward, declining, in effect, to cooperate with the policy.  A second difference is that Bob Jones enforced a total ban on interracial dating, just as the major Catholic universities (Georgetown excepted, which now has a lay president) have imposed a total ban on female candidates for the job of president.  The burqa, by contrast, is a personal choice, so it’s more like the case of some student at Bob Jones (or any other university) who decides to date only white females or males because of familial and parental pressure.  Amy and I would probably agree in disliking such behavior.  But it does not seem like a case for government intervention.  Which brings me to my next point.

5. Social pressure and government intervention. When is social and familial pressure bad enough to justify state interference with a conscientious observance?  I have already said that all forms of physical coercion are inadmissible and should be vigorously interfered with, whether they concern children or adults.  I would even favor no-drop laws in cases of domestic violence, since we know that a woman’s withdrawal of a complaint against a violent spouse or partner is often coerced.  My judgment about Turkey in the past — that the ban on veiling was justified, in those days, by a compelling state interest — derived from the belief that women were at risk of physical violence if they went unveiled, unless the government intervened to make the veil illegal for all.  Today in Europe the situation is utterly different, and no physical violence will greet the woman who wears even scanty clothing — apart from the ever-present danger of rape, which should be dealt with by convicting violent men, not by telling women they can’t wear what they want to wear.  (And this too the law has now recognized: thus, in the case that became the basis for the excellent film “The Accused,” a woman’s sexually provocative behavior was found not to give the men who raped her any defense, given that she clearly said “no” to the rape.)

Thus, in response to Samuel (44), my point about Turkey is not one about numbers: if even a minority were at risk of physical violence, some government action would be justified.   Usually, what government will rightly do is to stop the assailants from beating up on people, rather than banning any religious practices.   For example, the Supreme Court said that Jehovah’s Witnesses have a constitutional right to say negative things about Catholics in the public street, and the sort of government intervention that would be appropriate would not be a ban on insults to Catholics, but rather a careful defense of the minority against coercive pressure both from the state and from private individuals (see Cantwell v. Connecticut, 310 U. S. 296 (1940): Connecticut’s action charging the Jehovah’s Witnesses with a breach of the peace for their slurs against Catholics violated their rights under the First and Fourteenth Amendments).  The situation in Turkey was different, because the violence toward unveiled women was thought to be so widespread and so unstoppable that only a total ban on the veil could stop it.  If the facts were correct, the decision was (temporarily) right.

When the pressure is emotional only, the case is much more difficult.  On the whole, we tend to favor few legal limits for adults: thus, if someone is in an emotionally abusive relationship, that is a case for counseling or the intervention of friends and family, not for the police. Even when we can see that what is going on is manipulative — e.g., the man says, “I won’t date you any longer if you don’t do this or that sex act” — we think that is the business of the people involved and those who care about them, not of the police.  I think that emotional coercion to wear a burqa, applied to an adult woman (threats of withdrawal of affection, for example, but not physical violence), is like this, and should be dealt with by friends and family, not by the law.

What about children?  This opens up a huge topic, since there is nothing that is more common in the modern family than various forms of coercive pressure (to get into a top college, to date people of the “right” religion or ethnicity, to wear “appropriate” clothes, to choose a remunerative career, to take a shower, “and so each and so on to no last term,” as James Joyce wrote in “Ulysses”).  So: where should government and law step in?  Not often, and only where the behavior either constitutes a gross risk to bodily health and safety (as with Jehovah’s Witness children being forbidden to have a life-saving blood transfusion), or impairs some major functioning.  Thus, I think that female genital mutilation practiced on minors should be illegal if it is a form that impairs sexual pleasure or other bodily functions. (A symbolic pin-prick is a different story.)  Male circumcision seems to me all right, however, because there is no evidence that it interferes with adult sexual functioning; indeed it is now known to reduce susceptibility to H.I.V./AIDS.  The burqa (for minors) is not in the same class as genital mutilation, since it is not irreversible and does not endanger health or impair bodily functions — not nearly so much as high-heeled shoes, as I remarked (being a proud lover of the same).  Suppose parents required their daughters to wear a Victorian corset — which did do bodily damage, compressing various organs.  There might be a case for a ban then.  But the burqa is not even in the category of the corset.  As many readers pointed out, it is sensible dress in a hot climate where skin easily becomes worn by sun and dust.

At the limit, where the state’s interest in protecting the opportunities of children is concerned, lies the denial of education at stake in the Supreme Court case Wisconsin v. Yoder (406 U. S. 205 (1972)), in which a group of Amish parents asked to withdraw their children from the last two years of legally required schooling.  They would clearly have lost if they had asked to take their children out of all schooling, but what was in question was only these two years.  They won under the accommodationist principle I described in my article, although they probably would have lost on Justice Scalia’s Lockean test, since the law mandating education until age 16 was nondiscriminatory and not drawn up in order to persecute the Amish.  The case is difficult, because the parents made a convincing case that work on the farm, at that crucial age, was a key part of their community-based religion — and yet education opens up so many exit opportunities that the denial even of those two years may unreasonably limit children’s future choices.  And of course the children were under heavy pressure to do what their parents wanted.  (Thus Justice Douglas’s claim that the Court should decide the case by interviewing the children betrayed a lack of practical understanding.)

6. How much choice is enough?  Annie (126) and several others have pointed out that we all get our values through some type of social indoctrination, religious values included.  So we can’t really assume that the choice to wear a burqa is a free choice, if we mean by that a choice that has been deliberated about with due consideration of all the alternatives and with unimpeded access to some alternatives.  But then, as Annie says, we can’t assume that about anyone’s choice of anything — career, romantic partner, politics, etc.  What we can do, I think, is to guarantee a threshold level of overall freedom: by making primary and secondary education compulsory, by opening higher education to all who want it and are qualified (through need-blind admissions), and by working on job creation so that all of our citizens have some choice in matters of employment.  Moreover, the education that children get should encourage critical thinking, expansion of the imagination, and the other humanistic ideals that I discuss in my recent book, “Not For Profit: Why Democracy Needs the Humanities” (Princeton University Press, 2010).  If a person gets an education like that (and it is not expensive; I’ve seen it done by women’s groups in India for next to nothing, just a lot of passion), then we can be more confident that a choice is a choice.

Thanks to you all for taking the time to respond!

Martha Nussbaum teaches law, philosophy, and divinity at The University of Chicago. She is the author of several books, including “Liberty of Conscience: In Defense of America’s Tradition of Religious Equality” (2008) and “Not for Profit: Why Democracy Needs the Humanities” (2010).

__________

Full article: http://opinionator.blogs.nytimes.com/2010/07/15/beyond-the-veil-a-response/

Moral Camouflage or Moral Monkeys?

After being shown proudly around the campus of a prestigious American university built in gothic style, Bertrand Russell is said to have exclaimed, “Remarkable. As near Oxford as monkeys can make.” Much earlier, Immanuel Kant had expressed a less ironic amazement, “Two things fill the mind with ever new and increasing admiration and awe … the starry heavens above and the moral law within.” Today many who look at morality through a Darwinian lens can’t help but find a charming naïveté in Kant’s thought. “Yes, remarkable. As near morality as monkeys can make.”

So the question is, just how near is that? Optimistic Darwinians believe, near enough to be morality. But skeptical Darwinians won’t buy it. The great show we humans make of respect for moral principle they see as a civilized camouflage for an underlying, evolved psychology of a quite different kind.

This skepticism is not, however, your great-grandfather’s Social Darwinism, which saw all creatures great and small as pitted against one another in a life-or-death struggle to survive and reproduce — “survival of the fittest.” We now know that such a picture seriously misrepresents both Darwin and the actual process of natural selection. Individuals come and go, but genes can persist for 1,000 generations or more. Individual plants and animals are the perishable vehicles that genetic material uses to make its way into the next generation (“A chicken is an egg’s way of making another egg”). From this perspective, relatives, who share genes, are to that extent not really in evolutionary competition; no matter which one survives, the shared genes triumph. Such “inclusive fitness” predicts the survival, not of selfish individuals, but of “selfish” genes, which tend in the normal range of environments to give rise to individuals whose behavior tends to propel those genes into the future.

A place is thus made within Darwinian thought for such familiar phenomena as family members sacrificing for one another — helping when there is no prospect of payback, or being willing to risk life and limb to protect one’s people or avenge harms done to them.

But what about unrelated individuals? “Sexual selection” occurs whenever one must attract a mate in order to reproduce. Well, what sorts of individuals are attractive partners? Henry Kissinger claimed that power is the ultimate aphrodisiac, but for animals who bear a small number of young over a lifetime, each requiring a long gestation and demanding a great deal of nurturance to thrive into maturity, potential mates who behave selfishly, uncaringly, and unreliably can lose their chance. And beyond mating, many social animals depend upon the cooperation of others for protection, foraging and hunting, or rearing the young. Here, too, power can attract partners, but so can a demonstrable tendency to behave cooperatively and share benefits and burdens fairly, even when this involves some personal sacrifice — what is sometimes called “reciprocal altruism.” Baboons are notoriously hierarchical, but Joan Silk, a professor of anthropology at UCLA, and her colleagues recently reported a long-term study of baboons in which they found that, among females, maintaining strong, equal, enduring social bonds — even when the individuals were not related — can promote individual longevity more effectively than gaining dominance rank, and can enhance the survival of progeny.

A picture thus emerges of selection for “proximal psychological mechanisms”— for example, individual dispositions like parental devotion, loyalty to family, trust and commitment among partners, generosity and gratitude among friends, courage in the face of enemies, intolerance of cheaters — that make individuals into good vehicles, from the gene’s standpoint, for promoting the “distal goal” of enhanced inclusive fitness.

Why would human evolution have selected for such messy, emotionally entangling proximal psychological mechanisms, rather than produce yet more ideally opportunistic vehicles for the transmission of genes — individuals wearing a perfect camouflage of loyalty and reciprocity, but fine-tuned underneath to turn self-sacrifice or cooperation on or off exactly as needed? Because the same evolutionary processes would also be selecting for improved capacities to detect, pre-empt and defend against such opportunistic tendencies in other individuals — just as evolution cannot produce a perfect immune system, since it is equally busy improving the effectiveness of viral invaders. Devotion, loyalty, honesty, empathy, gratitude, and a sense of fairness are credible signs of value as a partner or friend precisely because they are messy and emotionally entangling, and so cannot simply be turned on and off by the individual to capture each marginal advantage. And keep in mind the small scale of early human societies, and Abraham Lincoln’s point about our power to deceive: you can fool all of the people some of the time, and some of the people all of the time, but not all of the people all of the time.

Why, then, aren’t we better — more honest, more committed, more loyal? There will always be circumstances in which fooling some of the people some of the time is enough; for example, when society is unstable or individuals mobile. So we should expect a capacity for opportunism and betrayal to remain an important part of the mix that makes humans into monkeys worth writing novels about. 

How close does all this take us to morality? Not all the way, certainly. An individual psychology primarily disposed to consider the interests of all equally, without fear or favor, even in the teeth of social ostracism, might be morally admirable, but simply wouldn’t cut it as a vehicle for reliable replication. Such pure altruism would not be favored in natural selection over an impure altruism that conferred benefits and took on burdens and risks more selectively — for “my kind” or “our kind.” This puts us well beyond pure selfishness, but only as far as an impure us-ishness. Worse, us-ish individuals can be a greater threat than purely selfish ones, since they can gang up so effectively against those outside their group. Certainly greater atrocities have been committed in the name of “us vs. them” than “me vs. the world.”

So, are the optimistic Darwinians wrong, and impartial morality beyond the reach of those monkeys we call humans? Does thoroughly logical evolutionary thinking force us to the conclusion that our love, loyalty, commitment, empathy, and concern for justice and fairness are always at bottom a mixture of selfish opportunism and us-ish clannishness? Indeed, is it only a sign of the effectiveness of the moral camouflage that we ourselves are so often taken in by it?

Speaking of what “thoroughly logical evolutionary thinking” might “force” us to conclude provides a clue to the answer. Think for a moment about science and logic themselves. Natural selection operates on a need-to-know basis. Between two individuals — one disposed to use scarce resources and finite capacities to seek out the most urgent and useful information and the other, heedless of immediate and personal concerns and disposed instead toward pure, disinterested inquiry, following logic wherever it might lead — it is clear which natural selection would tend to favor.

And yet, Darwinian skeptics about morality believe, humans somehow have managed to redeploy and leverage their limited, partial, human-scale psychologies to develop shared inquiry, experimental procedures, technologies and norms of logic and evidence that have resulted in genuine scientific knowledge and responsiveness to the force of logic. This distinctively human “cultural evolution” was centuries in the making, and overcoming partiality and bias remains a constant struggle, but the point is that these possibilities were not foreclosed by the imperfections and partiality of the faculties we inherited. As Wittgenstein observed, crude tools can be used to make refined tools. Monkeys, it turns out, can come surprisingly near to objective science.

We can see a similar cultural evolution in human law and morality — a centuries-long process of overcoming arbitrary distinctions, developing wider communities, and seeking more inclusive shared standards, such as the Geneva Conventions and the Universal Declaration of Human Rights. Empathy might induce sympathy more readily when it is directed toward kith and kin, but we rely upon it to understand the thoughts and feelings of enemies and outsiders as well. And the human capacity for learning and following rules might have evolved to enable us to speak a native language or find our place in the social hierarchy, but it can be put into service understanding different languages and cultures, and developing more cosmopolitan or egalitarian norms that can be shared across our differences.

Within my own lifetime, I have seen dramatic changes in civil rights, women’s rights and gay rights. That’s just one generation in evolutionary terms. Or consider the way that empathy and the pressure of consistency have led to widespread recognition that our fellow animals should receive humane treatment. Human culture, not natural selection, accomplished these changes, and yet it was natural selection that gave us the capacities that helped make them possible. We still must struggle continuously to see to it that our widened empathy is not lost, our sympathies engaged, our understandings enlarged, and our moral principles followed. But the point is that we have done this with our imperfect, partial, us-ish native endowment. Kant was right to be impressed. In our best moments, we can come surprisingly close to being moral monkeys.

Peter Railton is the Perrin Professor of Philosophy at the University of Michigan, Ann Arbor. His main areas of research are moral philosophy and the philosophy of science. He is a member of the American Academy of Arts and Sciences.

__________
Full article and photo: http://opinionator.blogs.nytimes.com/2010/07/18/moral-camouflage-or-moral-monkeys/

Your Move: The Maze of Free Will

You arrive at a bakery. It’s the evening of a national holiday. You want to buy a cake with your last 10 dollars to round off the preparations you’ve already made. There’s only one thing left in the store — a 10-dollar cake.

On the steps of the store, someone is shaking an Oxfam tin. You stop, and it seems quite clear to you — it surely is quite clear to you — that it is entirely up to you what you do next. You are — it seems — truly, radically, ultimately free to choose what to do, in such a way that you will be ultimately morally responsible for whatever you do choose. Fact: you can put the money in the tin, or you can go in and buy the cake. You’re not only completely, radically free to choose in this situation. You’re not free not to choose (that’s how it feels). You’re “condemned to freedom,” in Jean-Paul Sartre’s phrase. You’re fully and explicitly conscious of what the options are and you can’t escape that consciousness. You can’t somehow slip out of it.

You may have heard of determinism, the theory that absolutely everything that happens is causally determined to happen exactly as it does by what has already gone before — right back to the beginning of the universe. You may also believe that determinism is true. (You may also know, contrary to popular opinion, that current science gives us no more reason to think that determinism is false than that determinism is true.) In that case, standing on the steps of the store, it may cross your mind that in five minutes’ time you’ll be able to look back on the situation you’re in now and say truly, of what you will by then have done, “Well, it was determined that I should do that.” But even if you do fervently believe this, it doesn’t seem to be able to touch your sense that you’re absolutely morally responsible for what you do next.

The case of the Oxfam box, which I have used before to illustrate this problem, is relatively dramatic, but choices of this type are common. They occur frequently in our everyday lives, and they seem to prove beyond a doubt that we are free and ultimately morally responsible for what we do. There is, however, an argument, which I call the Basic Argument, which appears to show that we can never be ultimately morally responsible for our actions. According to the Basic Argument, it makes no difference whether determinism is true or false. We can’t be ultimately morally responsible either way.

The argument goes like this.

(1) You do what you do — in the circumstances in which you find yourself — because of the way you then are.

(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.

(3) But you can’t be ultimately responsible for the way you are in any respect at all.

(4) So you can’t be ultimately responsible for what you do.

The key move is (3). Why can’t you be ultimately responsible for the way you are in any respect at all? In answer, consider an expanded version of the argument.

(a) It’s undeniable that the way you are initially is a result of your genetic inheritance and early experience.

(b) It’s undeniable that these are things for which you can’t be held to be in any way responsible (morally or otherwise).

(c) But you can’t at any later stage of life hope to acquire true or ultimate moral responsibility for the way you are by trying to change the way you already are as a result of genetic inheritance and previous experience.

(d) Why not? Because both the particular ways in which you try to change yourself, and the amount of success you have when trying to change yourself, will be determined by how you already are as a result of your genetic inheritance and previous experience.

(e) And any further changes that you may become able to bring about after you have brought about certain initial changes will in turn be determined, via the initial changes, by your genetic inheritance and previous experience.

There may be all sorts of other factors affecting and changing you. Determinism may be false: some changes in the way you are may come about as a result of the influence of indeterministic or random factors. But you obviously can’t be responsible for the effects of any random factors, so they can’t help you to become ultimately morally responsible for how you are.

Some people think that quantum mechanics shows that determinism is false, and so holds out a hope that we can be ultimately responsible for what we do. But even if quantum mechanics had shown that determinism is false (it hasn’t), the question would remain: how can indeterminism, objective randomness, help in any way whatever to make you responsible for your actions? The answer to this question is easy. It can’t.

And yet we still feel that we are free to act in such a way that we are absolutely responsible for what we do. So I’ll finish with a third, richer version of the Basic Argument that this is impossible.

(i) Interested in free action, we’re particularly interested in actions performed for reasons (as opposed to reflex actions or mindlessly habitual actions).

(ii) When one acts for a reason, what one does is a function of how one is, mentally speaking. (It’s also a function of one’s height, one’s strength, one’s place and time, and so on, but it’s the mental factors that are crucial when moral responsibility is in question.)

(iii) So if one is going to be truly or ultimately responsible for how one acts, one must be ultimately responsible for how one is, mentally speaking — at least in certain respects.

(iv) But to be ultimately responsible for how one is, in any mental respect, one must have brought it about that one is the way one is, in that respect. And it’s not merely that one must have caused oneself to be the way one is, in that respect. One must also have consciously and explicitly chosen to be the way one is, in that respect, and one must also have succeeded in bringing it about that one is that way.

(v) But one can’t really be said to choose, in a conscious, reasoned, fashion, to be the way one is in any respect at all, unless one already exists, mentally speaking, already equipped with some principles of choice, “P1” — preferences, values, ideals — in the light of which one chooses how to be.

(vi) But then to be ultimately responsible, on account of having chosen to be the way one is, in certain mental respects, one must be ultimately responsible for one’s having the principles of choice P1 in the light of which one chose how to be.

(vii) But for this to be so one must have chosen P1, in a reasoned, conscious, intentional fashion.

(viii) But for this to be so one must already have had some principles of choice P2, in the light of which one chose P1.

(ix) And so on. Here we are setting out on a regress that we cannot stop. Ultimate responsibility for how one is is impossible, because it requires the actual completion of an infinite series of choices of principles of choice.

(x) So ultimate, buck-stopping moral responsibility is impossible, because it requires ultimate responsibility for how one is, as noted in (iii).

Does this argument stop me feeling entirely morally responsible for what I do? It does not. Does it stop you feeling entirely morally responsible? I very much doubt it. Should it stop us? Well, it might not be a good thing if it did. But the logic seems irresistible …. And yet we continue to feel we are absolutely morally responsible for what we do, responsible in a way that we could be only if we had somehow created ourselves, only if we were “causa sui,” the cause of ourselves. It may be that we stand condemned by Nietzsche:

The causa sui is the best self-contradiction that has been conceived so far. It is a sort of rape and perversion of logic. But the extravagant pride of man has managed to entangle itself profoundly and frightfully with just this nonsense. The desire for “freedom of the will” in the superlative metaphysical sense, which still holds sway, unfortunately, in the minds of the half-educated; the desire to bear the entire and ultimate responsibility for one’s actions oneself, and to absolve God, the world, ancestors, chance, and society involves nothing less than to be precisely this causa sui and, with more than Baron Münchhausen’s audacity, to pull oneself up into existence by the hair, out of the swamps of nothingness … (“Beyond Good and Evil,” 1886).

Is there any reply? I can’t do better than the novelist Ian McEwan, who wrote to me: “I see no necessary disjunction between having no free will (those arguments seem watertight) and assuming moral responsibility for myself. The point is ownership. I own my past, my beginnings, my perceptions. And just as I will make myself responsible if my dog or child bites someone, or my car rolls backwards down a hill and causes damage, so I take on full accountability for the little ship of my being, even if I do not have control of its course. It is this sense of being the possessor of a consciousness that makes us feel responsible for it.”

Galen Strawson is professor of philosophy at Reading University and is a regular visitor at the philosophy program at the City University of New York Graduate Center. He is the author of “Selves: An Essay in Revisionary Metaphysics” (Oxford: Clarendon Press, 2009) and other books.

___________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/07/22/your-move-the-maze-of-free-will/

Islamophobia and Homophobia

As if we needed more evidence of America’s political polarization, last week Juan Williams gave the nation a Rorschach test. Williams said he gets scared when people in “Muslim garb” board a plane he’s on, and he promptly got (a) fired by NPR and (b) rewarded by Fox News with a big contract.

Suppose Williams had said something hurtful to gay people instead of to Muslims. Suppose he had said gay men give him the creeps because he fears they’ll make sexual advances. NPR might well have fired him, but would Fox News have chosen that moment to give him a $2-million pat on the back?

I don’t think so. Playing the homophobia card is costlier than playing the Islamophobia card. Or at least, the costs are more evenly spread across the political spectrum. In 2007, when Ann Coulter used a gay slur, she was denounced on the right as well as the left, and her stock dropped. Notably, her current self-promotion campaign stresses her newfound passion for gay rights.

Coulter’s comeuppance reflected sustained progress on the gay rights front. Only a few decades ago, you could tell an anti-gay joke on the Johnny Carson show — with Carson’s active participation — and no one would complain. (See postscript below for details.) The current “it gets better” campaign, designed to reassure gay teenagers that adulthood will be less oppressive than adolescence, amounts to a kind of double entendre: things get better not just over an individual’s life but over the nation’s life.

When we move from homophobia to Islamophobia, the trendline seems to be pointing in the opposite direction. This isn’t shocking, given 9/11 and the human tendency to magnify certain kinds of risk. (Note to Juan Williams: Over the past nine years about 90 million flights have taken off from American airports, and not one has been brought down by a Muslim terrorist. Even in 2001, no flights were brought down by people in “Muslim garb.”) 

Still, however “natural” this irrational fear, it’s dangerous. As Islamophobia grows, it alienates Muslims, raising the risk of homegrown terrorism — and homegrown terrorism heightens the Islamophobia, which alienates more Muslims, and so on: a vicious circle that could carry America into the abyss. So it’s worth taking a look at why homophobia is fading; maybe the underlying dynamic is transplantable to the realm of inter-ethnic prejudice.

Theories differ as to what it takes for people to build bonds across social divides, and some theories offer more hope than others.

One of the less encouraging theories grows out of the fact that both homophobia and Islamophobia draw particular strength from fundamentalist Christians. Maybe, this argument goes, part of the problem is a kind of “scriptural determinism.” If religious texts say that homosexuality is bad, or that people of other faiths are bad, then true believers will toe that line.

If scripture is indeed this powerful, we’re in trouble, because scripture is invoked by intolerant people of all Abrahamic faiths — including the Muslim terrorists who plant the seeds of Islamophobia. And, judging by the past millennium or two, God won’t be issuing a revised version of the Bible or the Koran anytime soon.

Happily, there’s a new book that casts doubt on the power of intolerant scripture: “American Grace,” by the social scientists Robert Putnam and David Campbell.

Three decades ago, according to one of the many graphs in this data-rich book, slightly less than half of America’s frequent churchgoers were fine with gay people freely expressing their views on gayness. Today that number is over 70 percent — and no biblical verse bearing on homosexuality has magically changed in the meanwhile. And these numbers actually understate the progress; over those three decades, church attendance was dropping for mainline Protestant churches and liberal Catholics, so the “frequent churchgoers” category consisted increasingly of evangelicals and conservative Catholics.

So why have conservative Christians gotten less homophobic? Putnam and Campbell favor the “bridging” model. The idea is that tolerance is largely a question of getting to know people. If, say, your work brings you in touch with gay people or Muslims — and especially if your relationship with them is collaborative — this can brighten your attitude toward the whole tribe they’re part of. And if this broader tolerance requires ignoring or reinterpreting certain scriptures, so be it; the meaning of scripture is shaped by social relations.

The bridging model explains how attitudes toward gays could have made such rapid progress. A few decades ago, people all over America knew and liked gay people — they just didn’t realize these people were gay. So by the time gays started coming out of the closet, the bridge had already been built.

And once straight Americans followed the bridge’s logic — once they, having already accepted people who turned out to be gay, accepted gayness itself — more gay people felt comfortable coming out. And the more openly gay people there were, the more straight people there were who realized they had gay friends, and so on: a virtuous circle.

So could bridging work with Islamophobia? Could getting to know Muslims have the healing effect that knowing gay people has had?

The good news is that bridging does seem to work across religious divides. Putnam and Campbell did surveys with the same pool of people over consecutive years and found, for example, that gaining evangelical friends leads to a warmer assessment of evangelicals (by seven degrees on a “feeling thermometer” per friend gained, if you must know).

And what about Muslims? Did Christians warm to Islam as they got to know Muslims — and did Muslims return the favor?

That’s the bad news. The population of Muslims is so small, and so concentrated in distinct regions, that there weren’t enough such encounters to yield statistically significant data. And, as Putnam and Campbell note, this is a recipe for prejudice. Being a small and geographically concentrated group makes it hard for many people to know you, so not much bridging naturally happens. That would explain why Buddhists and Mormons, along with Muslims, get low feeling-thermometer ratings in America.

In retrospect, the situation of gays a few decades ago was almost uniquely conducive to rapid progress. The gay population, though not huge, was finely interspersed across the country, with representatives in virtually every high school, college and sizeable workplace. And straights had gotten to know them without even seeing the border they were crossing in the process.

So the engineering challenge in building bridges between Muslims and non-Muslims will be big. Still, at least we grasp the nuts and bolts of the situation. It’s a matter of bringing people into contact with the “other” in a benign context. And it’s a matter of doing it fast, before the vicious circle takes hold, spawning appreciable homegrown terrorism and making fear of Muslims less irrational.

After 9/11, philanthropic foundations spent a lot of money arranging confabs whose participants spanned the divide between “Islam” and “the West.” Meaningful friendships did form across this border, and that’s good. It’s great that Imam Feisal Abdul Rauf, a cosmopolitan, progressive Muslim, got to know lots of equally cosmopolitan Christians and Jews.

But as we saw when he decided to build an Islamic Community Center near ground zero, this sort of high-level networking — bridging among elites whose attitudes aren’t really the problem in the first place — isn’t enough. Philanthropists need to figure out how you build lots of little bridges at the grass roots level. And they need to do it fast.

Postscript: As for the Johnny Carson episode: I don’t like to rely on my memory alone for decades-old anecdotes, but in this case I’m 99.8 percent sure that I remember the basics accurately. Carson’s guest was the drummer Buddy Rich. In a supposedly spontaneous but obviously pre-arranged exchange, Rich said something like, “People often ask me, What is Johnny Carson really like?” Carson looked at Rich warily and said, “And how do you respond to this query?” But he paused between “this” and “query,” theatrically ratcheting up the wariness by an increment or two, and then pronounced the word “query” as “queery.” Rich immediately replied, “Like that.” Obviously, there are worse anti-gay jokes than this. Still, the premise was that being gay was something to be ashamed of. That Googling doesn’t turn up any record of this episode suggests that it didn’t enter the national conversation or the national memory. I don’t think that would be the case today. And of course, anecdotes aside, there is lots of polling data showing the extraordinary progress made since the Johnny Carson era on such issues as gay marriage and on gay rights in general.

Robert Wright, New York Times

__________

Full article: http://opinionator.blogs.nytimes.com/2010/10/26/islamophobia-and-homophobia/

The Burqa and the Body Electric

In her post of July 11, “Veiled Threats?” and her subsequent response to readers, Martha Nussbaum considers the controversy over the legal status of the burqa — which continues to flare across Europe — making a case for freedom of religious expression.  In these writings, Professor Nussbaum applies the argument of her 2008 book “Liberty of Conscience,” which praises the American approach to religious liberty, of which Roger Williams, one of the founders of Rhode Island Colony, is an early champion.

Williams is an inspiring figure, indeed.  Feeling firsthand the constraint of religious conformism in England and in Massachusetts Bay, he developed a uniquely broad position on religious toleration, one encompassing not only Protestants of all stripes, but Roman Catholics, Jews and Muslims.  The state, in his view, can legitimately enforce only the second tablet of the Decalogue — those final five commandments covering murder, theft, and the like.  All matters of worship covered by the first tablet must be left to the individual conscience. 

Straightforward enough.  But in the early years of Rhode Island, Williams faced quite a relevant challenge.  One of the colonists who fled Salem with him, Joshua Verin, quickly made himself an unwelcome presence in the fledgling community.  In addition to being generally “boysterous,” Verin had forbidden his wife from attending the religious services that Williams held in his home, and became publicly violent in doing so.  The colony was forced into action: Verin was disenfranchised on the grounds that he was interfering with his wife’s religious freedom.  Taking up a defense of Verin, fellow colonist William Arnold — who would also nettle Williams in the years to follow — claimed the punishment to be a violation of Verin’s religious liberty in that it interfered with his biblically appointed husbandly dominion.  “Did you pretend to leave Massachusetts, because you would not offend God to please men,” Arnold asked, “and would you now break an ordinance and commandment of God to please women?”  In his Salem days Williams himself had affirmed such biblical hierarchy, favoring the covering of women’s heads in all public assemblies.

A colony whose founding charter was signed by more than one woman was apparently not willing to accept the kind of male violence to which the Bible is, at best, indifferent.  Some suggested that Mrs. Verin be liberated from her husband and placed in the community’s protection until a suitable mate could be found for her.  That proposal was not taken up, and it was not taken up because Mrs. Verin herself declared that she wished to remain with her husband — a declaration made while she was literally tied in ropes and being dragged back to Salem by a man who had bound her in a more than matrimonial way.

The Verin case illustrates the weakness of the principle on which Nussbaum depends.  Liberty of conscience limits human institutions so that they do not interfere with the sacrosanct relationship between the soul and God, and in its strict application allows a coerced soul to run its course into the abyss.   In especially unappealing appeals to this kind of liberty, bigots of all varieties have claimed an entitlement to their views on grounds of tender conscience.  Nussbaum recognizes this elsewhere.  Despite her casual attitude toward burqa wearers — a marginal group in an already small minority population — she responds forcefully to false claims of religious freedom when they infect policymaking, as in her important defense of the right to marry.

The burqa controversy revolves around a central question: “Does this cultural practice performed in the name of religion inherently violate the principle of equality that democracies are obliged to defend?”  The only answer to that question offered by liberty of conscience is that we have no right to ask in the first place.  This is in essence Nussbaum’s position, even though the kind of floor-to-ceiling drapery that we are considering is not at all essential to Muslim worship.  The burqa is not religious headwear; it is a physical barrier to engagement in public life adopted in a deep spirit of misogyny. 

Lockean religious toleration, a tradition of which Nussbaum is skeptical, expects religious observance more fully to conform to the aims of a democratic polity.  We might see the French response to the burqa as an expression of that tradition.  After a famously misguided initial attempt to do away with all Muslim headwear in schools and colleges, French legislators later settled down to an evaluation of the point at which headwear becomes an affront to gender equality, passing most recently a ban on the niqab, or face veil — which has also been barred from classrooms and dormitories at Cairo’s Al-Azhar University, the historical center of Muslim learning.  It seems farcical to create a scorecard of permissible and impermissible religious garb — and it is not what, as a youth reading Arthur C. Clarke, I imagined the world’s legislators would be doing with urgency in the year 2010 — but we must wonder if it is the kind of determination that we must make, and make with care, if we are to come to a just settlement of the issue.

If we take a broader view, though, we might see that Lockean tradition as part of the longstanding wish of the state to disarm religion’s subversive potential.

Controversies on religious liberty are as old as temple and palace, those two would-be foci of ultimate meaning in human life that seem perpetually to run on a collision course.  Sophocles’s Antigone subscribes to an absolute religious duty in her single-minded wish to administer funeral rites to her brother Polyneices.  King Creon forbids the burial because Polyneices is an enemy of the state, an attempt to bring observance into harmony with political authority.  Augustine of Hippo handles the tension in his critique of the Roman polymath Varro.  Though the work that he is discussing is now lost, Augustine in his thorough way gives us a rigorous account of Varro’s distinction between “natural theology,” the theology of philosophers who seek to discern the nature of the gods as they really are, and “civil theology,” which is the theology acceptable in public observance.  Augustine objects to this distinction: if natural theology is “really natural,” if it has discovered the true nature of divinity, he asks, “what is found wrong with it, to cause its exclusion from the city?”

Debates on religious liberty seem always to be separated by the gulf between Varro and Augustine.  Those who follow Varro tolerate brands of religion that do not threaten civil order or prevailing moral conventions, and accept in principle a distinction between public and private worship; those who follow Augustine tolerate only those who agree with their sense of the nature of divinity, which authorities cannot legitimately restrict.  At their worst, Varronians make flimsy claims on preserving public decorum and solidify the state’s marginalization of religious minorities — as the Swiss did in December 2009 by passing a referendum banning the construction of minarets.  Augustinians at their worst expect the state to respect primordial bigotries and tribal exceptionalism — as do many of this country’s so-called Christian conservatives.

Might there be a third way?  If, as several thinkers have suggested, we now find ourselves in a “post-secular” age, then perhaps we might look beyond traditional disputes between political and ecclesiastical authority, between religion and secularism.  Perhaps post-secularity can take justice and equality to be absolutely good with little regard for whether we come to value the good by a religious or secular path.  Our various social formations — political, religious, social, familial — find their highest calling in deepening our bonds of fellow feeling.  “Compelling state interest” has no inherent value; belief also has no inherent value.  Political and religious positions must be measured against the purity of truths, rightly conceived as those principles enabling the richest possible lives for our fellow human beings.

So let us attempt such a measure.  The kind of women’s fashion favored by the Taliban might legitimately be outlawed as an instrument of gender apartheid — though one must have strong reservations about the enforcement of such a law, which could create more divisiveness than it cures.  The standard of human harmony provides strong resistance to anti-gay prejudice, stripping it of its wonted mask of righteousness. It objects in disgust to Pope Benedict XVI when he complains about Belgian authorities seizing church records in the course of investigating sexual abuse; it also praises the Catholic Church for the humanitarian and spiritual services it provides on this country’s southern border, which set the needs of the human family above arbitrary distinctions of citizenship.  The last example shows that some belief provides a deeply humane resistance to state power run amok.  To belief of this kind there is no legitimate barrier.

Humane action is of course open to interpretation. But if we place it at the center of our aspirations, we will make decisions more salutary than those offered by the false choice between state interest and liberty of conscience. Whitman may have been the first post-secularist in seeing that political and religious institutions declaring certain bodies to be shameful denigrate all human dignity: every individual is a vibrant body electric deeply connected to all beings by an instinct of fellow feeling. Such living democracy shames the puerile self-interest of modern electoral politics, and the worn barbarisms lurking under the shroud of retrograde orthodoxy. Embracing that vitality, to return to Nussbaum’s concerns, also guides us to the most generous possible reading of the First Amendment, which restricts government so that individual consciences and groups of believers can actively advance, rather than stall, the American project of promoting justice and equality.


Feisal G. Mohamed is an associate professor of English at the University of Illinois.  His most recent book, “Milton and the Post-Secular Present,” is forthcoming from Stanford University Press.

__________
Full article: http://opinionator.blogs.nytimes.com/2010/07/28/the-burqa-and-the-body-electric/

Freedom and Reality: A Response

It has been a privilege to read the comments posted in response to my July 25th post, “The Limits of the Coded World,” and I am exceedingly grateful for the time, energy and passion so many readers put into them. While my first temptation was to answer each and every one, reality began to set in as the numbers increased, and I soon realized I would have to settle for a more general response treating as many of the objections and remarks as I could. This is what I would like to do now.

If I had to distill my entire response into one sentence, it would be this: it’s not about the monkeys! It was unfortunate that the details of the monkey experiment — in which researchers were able to use computers wired to the monkeys’ brains to predict the outcome of certain decisions — dominated so much attention, since it could have been replaced by mention of any number of similar experiments, or indeed by a fictional scenario. My intention in bringing it up at all was twofold: first, to take an example of the sort of research that has repeatedly sparked sensationalist commentary in the popular press about the end of free will; and second, to show how plausible and in some sense automatic the link between predictability and the problem of free will can be (which was borne out by the number of readers who used the experiment to start their own inquiry into free will).

As readers know, my next step was to argue that predictability no more indicates a lack of free will than unpredictability indicates its presence. Several readers noted an apparent contradiction between this claim, that “we have no reason to assume that either predictability or lack of predictability has anything to say about free will,” and one of the article’s concluding statements, that “I am free because neither science nor religion can ever tell me, with certainty, what my future will be and what I should do about it.”

Indeed, when juxtaposed without the intervening text, the statements seem clearly opposed. However, not only is there no contradiction between them, the entire weight of my arguments depends on understanding why there is none. In the first sentence I am talking about using models to more or less accurately guess at future outcomes; in the second I am talking about a model of the ultimate nature of reality as a kind of knowledge waiting to be decoded — what I called in the article “the code of codes,” a phrase I lifted from one of my mentors, the late Richard Rorty — and how the impossibility of that model is what guarantees freedom.

The reason why predictability in the first sense has no bearing on free will is, in fact, precisely because predictability in the second sense is impossible. The theoretical predictability of everything that occurs in a universe whose ultimate reality is conceived of as a kind of knowledge or code is what goes by the shorthand of determinism. In the old debate between free will and determinism, determinism has always occupied the default position, the backdrop from which free will must be wrested if we are to have any defensible concept of responsibility. From what I could tell, a great number of commentators on my article shared at least this idea with Galen Strawson: that we live in a deterministic universe, and if the concept of free will has any importance at all it is merely as a kind of necessary illusion. My position is precisely the opposite: free will does not need to be “saved” from determinism, because there was never any real threat there to begin with, either of a scientific nature or a religious one.

The reason for this is that when we assume a deterministic universe of any kind we are implicitly importing into our thinking the code of codes, a model of reality that is not only false, but also logically impossible. Let’s see why.

To make a choice that in any sense could be considered “free,” we would have to claim that it was at some point unconstrained. But, the hard determinist would argue, there can never be any point at which a choice is unconstrained, because even if we exclude any and all obvious constraints, such as hunger or coercion, the chooser is constrained by (and this is Strawson’s “basic argument”) how he or she is at the time of the choosing, a sum total of effects over which he or she could never exercise causality.

This constraint of “how he or she is,” however, is pure fiction, a treatment of tangible reality as if it were decodable knowledge, requiring a kind of God’s eye perspective capable of knowing every instance and every possible interpretation of every aspect of a person’s history, culture, genes and general chemistry, to mention only a few variables. It refers to a reality that self-proclaimed rationalists and science advocates pay lip service to in their insistence on basing all claims on hard, tangible facts, but is in fact as elusive, as metaphysical and ultimately as incompatible with anything we could call human knowledge as would be a monotheistic religion’s understanding of God.

When some readers sardonically (I assume) reduced my argument to “ignorance=freedom,” then, they were right in a way; but the rub lies in how we understand ignorance. The commonplace understanding would miss the point entirely: it is not ignorance against the backdrop of ultimate knowledge that equates to freedom; rather, it is constitutive, essential ignorance. This, again, needs expansion.

Knowledge can never be complete. This is the case not merely because there will always be something more to know; rather, it is so because completed knowledge is oxymoronic, self-defeating. AI theorists have long dreamed of what Daniel Dennett once called heterophenomenology, the idea that, with an accurate-enough understanding of the human brain, my description of another person’s experience could become indiscernible from that experience itself. My point is not merely that heterophenomenology is impossible from a technological perspective or undesirable from an ethical perspective; rather, it is impossible from a logical perspective, since the very phenomenon we are seeking to describe, in this case the conscious experience of another person, would cease to exist without the minimal opacity separating his or her consciousness from mine. Analogously, all knowledge requires this kind of minimal opacity, because knowing something involves, at a minimum, a synthesis of discrete perceptions across space or time.

The Argentine writer Jorge Luis Borges demonstrated this point with implacable rigor in a story about a man who loses the ability to forget, and with that also ceases to think, perceive, and eventually to live, because, as Borges points out, thinking necessarily involves abstraction, the forgetting of differences. Because of what we can thus call our constitutive ignorance, then, we are free — only and precisely because as beings who cannot possibly occupy all times and spatial perspectives without thereby ceasing to be what we are, we are constantly faced with choices. All these choices — to the extent that they are choices and not simply responses to stimuli or reactions to forces exerted on us — have at least some element that cannot be traced to a direct determination, but could only be blamed, for the sake of defending a deterministic thesis, on the ideal and completely fanciful determinism of “how we are” at the time of the decision to be made.

Far from a mere philosophical wish fulfillment or fuzzy, humanistic thinking, then, this kind of freedom is real, hard-nosed and practical. Indeed, courts of law and ethics panels may take specific determinations into account when casting judgment on responsibility, but most of us would agree that it would be absurd for them to waste time considering philosophical, scientific or religious theories of general determinism. The purpose of both my original piece and this response has been to show that, philosophically speaking as well, this real and practical freedom has nothing to fear from philosophical, scientific or religious pipedreams.

This last remark leads me to one more issue that many readers brought up, and which I can only touch on now in passing: religion. In a recent blog post Jerry Coyne, a professor of evolutionary biology at the University of Chicago, labels me an “accommodationist” who tries to “denigrate science” and vindicate “other ways of knowing.” Professor Coyne goes on to contrast my (alleged) position to “the scientific ‘model of the world,’” which, he adds, has “been extraordinarily successful at solving problems, while other ‘models’ haven’t done squat.” Passing over the fact that, far from denigrating them, I am a fervent and open admirer of the natural sciences (my first academic interests were physics and mathematics), I’m content to let Professor Coyne’s dismissal of every cultural, literary, philosophical or artistic achievement in history speak for itself.

What I find of interest here is the label accommodationism, because the intent behind the current deployment of the term by the new atheist bloc is to associate explicitly those so labeled with the tragic failure of the Chamberlain government to stand up to Hitler. Indeed, Richard Dawkins has called those espousing open dialogue between faith and science “the Neville Chamberlain school of evolution.” One can only be astonished by the audacity of the rhetorical game they are playing: somehow with a twist of the tongue those arguing for greater intellectual tolerance have been allied with the worst example of intolerance in history.

One of the aims of my recent work has indeed been to provide a philosophical defense of moderate religious belief. Certain ways of believing, I have argued, are extremely effective at undermining the implicit model of reality supporting the philosophical mistake I described above, a model of reality that religious fundamentalists also depend on. While fundamentalisms of all kinds are unified in their belief that the ultimate nature of reality is a code that can be read and understood, religious moderates, along with those secularists we would call agnostics, are profoundly suspicious of any claims that one can come to know reality as it is in itself. I believe that such believers and skeptics are neither less scientific nor less religious for their suspicion. They are, however, more tolerant of discord; more prone to dialog, to patient inquiry, to trial and error, and to acknowledging the potential insights of other ways of thinking and other disciplines than their own. They are less righteously assured of the certainty of their own positions and have, historically, been less inclined to be prodded to violence than those who are beholden to the code of codes. If being an accommodationist means promoting these values, then I welcome the label.

William Egginton is the Andrew W. Mellon Professor in the Humanities at the Johns Hopkins University. His next book, “In Defense of Religious Moderation,” will be published by Columbia University Press in 2011.

__________

Full article: http://opinionator.blogs.nytimes.com/2010/08/04/freedom-and-reality-a-response/

The Phenomenology of Ugly

This all started the day Luigi gave me a haircut. I was starting to look like a mad professor: specifically like Doc in “Back to the Future.” So Luigi took his scissors out and tried to fix me up. Except — and this is the point that occurred to me as I inspected the hair in the bathroom mirror the next morning — he didn’t really take quite enough off. He had enhanced the style, true, but there was a big floppy fringe that was starting to annoy me. And it was hot out. So I opened up the clipper attachment on the razor and hacked away at it for a while. When I finally emerged there was a general consensus that I looked like a particularly disreputable scarecrow. In the end I went to another barbershop (I didn’t dare show Luigi my handiwork) and had it all sheared off. Now I look like a cross between Britney Spears and Michel Foucault.

In short, it was a typical bad hair day. Everyone has them. I am going to hold back on my follicular study of the whole of Western philosophy (Nietzsche’s will-to-power-eternal-recurrence mustache; the workers-of-the-world-unite Marxian beard), but I think it has to be said that a haircut can have significant philosophical consequences. Jean-Paul Sartre, the French existentialist thinker, had a particularly traumatic tonsorial experience when he was only seven. Up to that point he had had a glittering career as a crowd-pleaser. Everybody referred to young “Poulou” as “the angel.” His mother had carefully cultivated a luxuriant halo of golden locks. Then one fine day his grandfather takes it into his head that Poulou is starting to look like a girl, so he waits till his mother has gone out, then tells the boy they are going out for a special treat. Which turns out to be the barbershop. Poulou can hardly wait to show off his new look to his mother. But when she walks through the door, she takes one look at him before running up the stairs and flinging herself on the bed, sobbing hysterically. Her carefully constructed — one might say carefully combed — universe has just been torn down, like a Hollywood set being broken and reassembled for some quite different movie, rather harsher, darker, less romantic and devoid of semi-divine beings. For, as in an inverted fairy-tale, the young Sartre has morphed from an angel into a “toad”. It is now, for the first time, that Sartre realizes that he is — as his American lover, Sally Swing, will say of him — “ugly as sin.”

[Photo: Jean-Paul Sartre and two friends in France, no doubt discussing philosophy.]

“The fact of my ugliness” becomes a barely suppressed leitmotif of his writing. He wears it like a badge of honor (Camus, watching Sartre in laborious seduction mode in a Paris bar: “Why are you going to so much trouble?” Sartre: “Have you had a proper look at this mug?”). The novelist Michel Houellebecq says somewhere that, when he met Sartre, he thought he was “practically disabled.” It is fair comment. He certainly has strabismus (with his distinctive lazy eye, so he appears to be looking in two directions at once), various parts of his body are dysfunctional and he considers his ugliness to count as a kind of disability. I can’t help wondering if ugliness is not indispensable to philosophy. Sartre seems to be suggesting that thinking — serious, sustained questioning — arises out of, or perhaps with, a consciousness of one’s own ugliness.

I don’t want to make any harsh personal remarks here but it is clear that a philosophers’ Mr. or Ms. Universe contest would be roughly on a par with the philosophers’ football match imagined by Monty Python. That is to say, it would have an ironic relationship to beauty. Philosophy as a satire on beauty.

It is no coincidence that one of our founding philosophers, Socrates, makes a big deal out of his own ugliness. It is the comic side of the great man. Socrates is (a) a thinker who asks profound and awkward questions (b) ugly. In Renaissance neo-Platonism (take, for example, Erasmus and his account of “foolosophers” in “The Praise of Folly”) Socrates, still spectacularly ugly, acquires an explicitly Christian logic: philosophy is there — like Sartre’s angelic curls — to save us from our ugliness (perhaps more moral than physical).

But I can’t help thinking that ugliness infiltrated the original propositions of philosophy in precisely this redemptive way. The implication is there in works like Plato’s “Phaedo.” If we need to die in order to attain the true, the good, and the beautiful (to kalon: neither masculine nor feminine but neuter, like Madame Sartre’s ephemeral angel, gender indeterminate), it must be because truth, goodness, and beauty elude us so comprehensively in life. You think you’re beautiful? Socrates seems to say. Well, think again! The idea of beauty, in this world, is like a mistake. An error of thought. Which should be re-thought.

Perhaps Socrates’s mission is to make the world safe for ugly people. Isn’t everyone a little ugly, one way or the other, at one time or another? Who is truly beautiful, all the time? Only the archetypes can be truly beautiful.

Fast-forwarding to Sartre and my bathroom-mirror crisis, I feel this gives us a relatively fresh way of thinking about neo-existentialism. Sartre (like Aristotle, like Socrates himself at certain odd moments) is trying to get away from the archetypes. From, in particular, a transcendent concept of beauty that continues to haunt — and sometimes cripple — us.

“It doesn’t matter if you are an ugly bastard. As an existentialist you can still score.” Sartre, so far as I know, never actually said it flat out (although he definitely described himself as a “salaud”). And yet it is nevertheless there in almost everything he ever wrote. In trying to be beautiful, we are trying to be like God (the “for-itself-in-itself” as Sartre rebarbatively put it). In other words, to become like a perfect thing, an icon of perfection, and this we can never fully attain. But it is good business for manufacturers of beauty creams, cosmetic surgeons and — yes! — even barbers.

Switching gender for a moment — going in the direction Madame Sartre would have preferred — I suspect that the day Britney Spears shaved her own hair off represented a kind of Sartrean or Socratic argument (rather than, say, a nervous breakdown). She was, in effect, by the use of appearance, shrewdly de-mythifying beauty. The hair lies on the floor, “inexplicably faded” (Sartre), and the conventional notion of femininity likewise. I see Marilyn Monroe and Brigitte Bardot in a similar light: one by dying, the other by remaining alive, were trying to deviate from and deflate their iconic status. The beautiful, to kalon, is not some far-flung transcendent abstraction, in the neo-existentialist view. Beauty is a thing (social facts are things, Durkheim said). Whereas I am no-thing. Which explains why I can never be truly beautiful. Even if it doesn’t stop me wanting to be, either. Perhaps this explains why Camus, Sartre’s more dashing sparring partner, jotted down in his notebooks, “Beauty is unbearable and drives us to despair.”

I always laugh when somebody says, “don’t be so judgmental.” Being judgmental is just what we do. Not being judgmental really would be like death. Normative behavior is normal. That original self-conscious, slightly despairing glance in the mirror (together with, “Is this it?” or “Is that all there is?”) is a great enabler because it compels us to seek improvement. The transcendent is right here right now. What we transcend is our selves. And we can (I am quoting Sartre here) transascend or transdescend. The inevitable dissatisfaction with one’s own appearance is the engine not only of philosophy but of civil society at large. Always providing you don’t end up pulling your hair out by the roots.

Andy Martin is currently completing “What It Feels Like To Be Alive: Sartre and Camus Remix” for Simon and Schuster. He was a 2009-10 fellow at the Cullman Center for Scholars and Writers in New York, and teaches at Cambridge University.

__________

Full article and photo:

Reclaiming the Imagination

Imagine being a slave in ancient Rome. Now remember being one. The second task, unlike the first, is crazy. If, as I’m guessing, you never were a slave in ancient Rome, it follows that you can’t remember being one — but you can still let your imagination rip. With a bit of effort one can even imagine the impossible, such as discovering that Dick Cheney and Madonna are really the same person. It sounds like a platitude that fiction is the realm of imagination, fact the realm of knowledge.

Why did humans evolve the capacity to imagine alternatives to reality? Was story-telling in prehistoric times like the peacock’s tail, of no direct practical use but a good way of attracting a mate? It kept Scheherazade alive through those one thousand and one nights — in the story. 

On further reflection, imagining turns out to be much more reality-directed than the stereotype implies. If a child imagines the life of a slave in ancient Rome as mainly spent watching sports on TV, with occasional household chores, they are imagining it wrong. That is not what it was like to be a slave. The imagination is not just a random idea generator. The test is how close you can come to imagining the life of a slave as it really was, not how far you can deviate from reality.

A reality-directed faculty of imagination has clear survival value. By enabling you to imagine all sorts of scenarios, it alerts you to dangers and opportunities. You come across a cave. You imagine wintering there with a warm fire — opportunity. You imagine a bear waking up inside — danger. Having imagined possibilities, you can take account of them in contingency planning. If a bear is in the cave, how do you deal with it? If you winter there, what do you do for food and drink? Answering those questions involves more imagining, which must be reality-directed. Of course, you can imagine kissing the angry bear as it emerges from the cave so that it becomes your lifelong friend and brings you all the food and drink you need. Better not to rely on such fantasies. Instead, let your imaginings develop in ways more informed by your knowledge of how things really happen.

Constraining imagination by knowledge does not make it redundant. We rarely know an explicit formula that tells us what to do in a complex situation. We have to work out what to do by thinking through the possibilities in ways that are simultaneously imaginative and realistic, and not less imaginative when more realistic. Knowledge, far from limiting imagination, enables it to serve its central function.

To go further, we can borrow a distinction from the philosophy of science, between contexts of discovery and contexts of justification. In the context of discovery, we get ideas, no matter how — dreams or drugs will do. Then, in the context of justification, we assemble objective evidence to determine whether the ideas are correct. On this picture, standards of rationality apply only to the context of justification, not to the context of discovery. Those who downplay the cognitive role of the imagination restrict it to the context of discovery, excluding it from the context of justification. But they are wrong. Imagination plays a vital role in justifying ideas as well as generating them in the first place. 

Your belief that you will not be visible from inside the cave if you crouch behind that rock may be justified because you can imagine how things would look from inside. To change the example, what would happen if all NATO forces left Afghanistan by 2011? What will happen if they don’t? Justifying answers to those questions requires imaginatively working through various scenarios in ways deeply informed by knowledge of Afghanistan and its neighbors. Without imagination, one couldn’t get from knowledge of the past and present to justified expectations about the complex future. We also need it to answer questions about the past. Were the Rosenbergs innocent? Why did Neanderthals become extinct? We must develop the consequences of competing hypotheses with disciplined imagination in order to compare them with the available evidence. In drawing out a scenario’s implications, we apply much of the same cognitive apparatus whether we are working online, with input from sense perception, or offline, with input from imagination.

Even imagining things contrary to our knowledge contributes to the growth of knowledge, for example in learning from our mistakes. Surprised at the bad outcomes of our actions, we may learn how to do better by imagining what would have happened if we had acted differently from how we know only too well we did act.

In science, the obvious role of imagination is in the context of discovery. Unimaginative scientists don’t produce radically new ideas. But even in science imagination plays a role in justification too. Experiment and calculation cannot do all its work. When mathematical models are used to test a conjecture, choosing an appropriate model may itself involve imagining how things would go if the conjecture were true. Mathematicians typically justify their fundamental axioms, in particular those of set theory, by informal appeals to the imagination.

Sometimes the only honest response to a question is “I don’t know.” In recognizing that, one may rely just as much on imagination, because one needs it to determine that several competing hypotheses are equally compatible with one’s evidence.

The lesson is not that all intellectual inquiry deals in fictions. That is just to fall back on the crude stereotype of the imagination, from which it needs reclaiming. A better lesson is that imagination is not only about fiction: it is integral to our painful progress in separating fiction from fact. Although fiction is a playful use of imagination, not all uses of imagination are playful. Like a cat’s play with a mouse, fiction may both emerge as a by-product of un-playful uses and hone one’s skills for them.

Critics of contemporary philosophy sometimes complain that in using thought experiments it loses touch with reality. They complain less about Galileo and Einstein’s thought experiments, and those of earlier philosophers. Plato explored the nature of morality by asking how you would behave if you possessed the ring of Gyges, which makes the wearer invisible. Today, if someone claims that science is by nature a human activity, we can refute them by imaginatively appreciating the possibility of extra-terrestrial scientists. Once imagining is recognized as a normal means of learning, contemporary philosophers’ use of such techniques can be seen as just extraordinarily systematic and persistent applications of our ordinary cognitive apparatus. Much remains to be understood about how imagination works as a means to knowledge — but if it didn’t work, we wouldn’t be around now to ask the question.

Timothy Williamson is the Wykeham Professor of Logic at Oxford University, a Fellow of the British Academy and a Foreign Honorary Member of the American Academy of Arts and Sciences. He has been a visiting professor at M.I.T. and Princeton. His books include “Vagueness” (1994), “Knowledge and its Limits” (2000) and “The Philosophy of Philosophy” (2007).

___________

Full article: http://opinionator.blogs.nytimes.com/2010/08/15/reclaiming-the-imagination/

When the Mind Wanders, Happiness Also Strays

A quick experiment. Before proceeding to the next paragraph, let your mind wander wherever it wants to go. Close your eyes for a few seconds, starting … now.

And now, welcome back for the hypothesis of our experiment: Wherever your mind went — the South Seas, your job, your lunch, your unpaid bills — that daydreaming is not likely to make you as happy as focusing intensely on the rest of this column will.

I’m not sure I believe this prediction, but I can assure you it is based on an enormous amount of daydreaming cataloged in the current issue of Science. Using an iPhone app called trackyourhappiness, psychologists at Harvard contacted people around the world at random intervals to ask how they were feeling, what they were doing and what they were thinking.

The least surprising finding, based on a quarter-million responses from more than 2,200 people, was that the happiest people in the world were the ones in the midst of enjoying sex. Or at least they were enjoying it until the iPhone interrupted.

The researchers are not sure how many of them stopped to pick up the phone and how many waited until afterward to respond. Nor, unfortunately, is there any way to gauge what thoughts — happy, unhappy, murderous — went through their partners’ minds when they tried to resume.

When asked to rate their feelings on a scale of 0 to 100, with 100 being “very good,” the people having sex gave an average rating of 90. That was a good 15 points higher than the next-best activity, exercising, which was followed closely by conversation, listening to music, taking a walk, eating, praying and meditating, cooking, shopping, taking care of one’s children and reading. Near the bottom of the list were personal grooming, commuting and working.

When asked their thoughts, the people in flagrante were models of concentration: only 10 percent of the time did their thoughts stray from their endeavors. But when people were doing anything else, their minds wandered at least 30 percent of the time, and as much as 65 percent of the time (recorded during moments of personal grooming, clearly a less than scintillating enterprise).

On average throughout all the quarter-million responses, minds were wandering 47 percent of the time. That figure surprised the researchers, Matthew Killingsworth and Daniel Gilbert.

“I find it kind of weird now to look down a crowded street and realize that half the people aren’t really there,” Dr. Gilbert says.

You might suppose that if people’s minds wander while they’re having fun, then those stray thoughts are liable to be about something pleasant — and that was indeed the case with those happy campers having sex. But for the other 99.5 percent of the people, there was no correlation between the joy of the activity and the pleasantness of their thoughts.

“Even if you’re doing something that’s really enjoyable,” Mr. Killingsworth says, “that doesn’t seem to protect against negative thoughts. The rate of mind-wandering is lower for more enjoyable activities, but when people wander they are just as likely to wander toward negative thoughts.”

Whatever people were doing, whether it was having sex or reading or shopping, they tended to be happier if they focused on the activity instead of thinking about something else. In fact, whether and where their minds wandered was a better predictor of happiness than what they were doing.

“If you ask people to imagine winning the lottery,” Dr. Gilbert says, “they typically talk about the things they would do — ‘I’d go to Italy, I’d buy a boat, I’d lay on the beach’ — and they rarely mention the things they would think. But our data suggest that the location of the body is much less important than the location of the mind, and that the former has surprisingly little influence on the latter. The heart goes where the head takes it, and neither cares much about the whereabouts of the feet.”

Still, even if people are less happy when their minds wander, which causes which? Could the mind-wandering be a consequence rather than a cause of unhappiness?

To investigate cause and effect, the Harvard psychologists compared each person’s moods and thoughts as the day went on. They found that if someone’s mind wandered at, say, 10 in the morning, then at 10:15 that person was likely to be less happy than at 10, perhaps because of those stray thoughts. But if people were in a bad mood at 10, they weren’t more likely to be worrying or daydreaming at 10:15.

“We see evidence for mind-wandering causing unhappiness, but no evidence for unhappiness causing mind-wandering,” Mr. Killingsworth says.
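
The shape of that comparison can be pictured with a toy calculation. Everything below is invented for illustration — the field layout and the numbers are assumptions, not data from the study — but it shows the time-lagged test being described: split each check-in by what was happening at the previous one, then compare the averages in both directions.

    # Hypothetical check-ins for one person: (minutes since midnight,
    # mind wandering?, happiness rating 0-100). All values are made up.
    samples = [
        (600, True, 55), (615, False, 48), (630, False, 62),
        (645, True, 70), (660, False, 58), (675, False, 64),
    ]

    def mean(xs):
        return sum(xs) / len(xs)

    pairs = list(zip(samples, samples[1:]))  # consecutive check-ins

    # Direction 1: happiness at t+1, split by wandering at t.
    print(mean([b[2] for a, b in pairs if a[1]]))      # after wandering
    print(mean([b[2] for a, b in pairs if not a[1]]))  # after focusing

    # Direction 2: wandering rate at t+1, split by mood at t.
    print(mean([b[1] for a, b in pairs if a[2] < 60]))   # after low mood
    print(mean([b[1] for a, b in pairs if a[2] >= 60]))  # after high mood

If the first split shows a gap but the second does not, the asymmetry favors wandering as cause rather than consequence — the pattern the researchers report.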

This result may disappoint daydreamers, but it’s in keeping with the religious and philosophical admonitions to “Be Here Now,” as the yogi Ram Dass titled his 1971 book. The phrase later became the title of a George Harrison song warning that “a mind that likes to wander ’round the corner is an unwise mind.”

What psychologists call “flow” — immersing your mind fully in activity — has long been advocated by nonpsychologists. “Life is not long,” Samuel Johnson said, “and too much of it must not pass in idle deliberation how it shall be spent.” Henry Ford was more blunt: “Idleness warps the mind.” The iPhone results jibe nicely with one of the favorite sayings of William F. Buckley Jr.: “Industry is the enemy of melancholy.”

Alternatively, you could interpret the iPhone data as support for the philosophical dictum of Bobby McFerrin: “Don’t worry, be happy.” The unhappiness produced by mind-wandering was largely a result of the episodes involving “unpleasant” topics. Such stray thoughts made people more miserable than commuting or working or any other activity.

But the people having stray thoughts on “neutral” topics ranked only a little below the overall average in happiness. And the ones daydreaming about “pleasant” topics were actually a bit above the average, although not quite as happy as the people whose minds were not wandering.

There are times, of course, when unpleasant thoughts are the most useful thoughts. “Happiness in the moment is not the only reason to do something,” says Jonathan Schooler, a psychologist at the University of California, Santa Barbara. His research has shown that mind-wandering can lead people to creative solutions of problems, which could make them happier in the long term.

Over the several months of the iPhone study, though, the more frequent mind-wanderers remained less happy than the rest, and the moral — at least for the short-term — seems to be: you stray, you pay. So if you’ve been able to stay focused to the end of this column, perhaps you’re happier than when you daydreamed at the beginning. If not, you can go back to daydreaming starting…now.

Or you could try focusing on something else that is now, at long last, scientifically guaranteed to improve your mood. Just make sure you turn the phone off.

John Tierney, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/11/16/science/16tier.html

The Third Replicator

All around us information seems to be multiplying at an ever increasing pace. New books are published, new designs for toasters and i-gadgets appear, new music is composed or synthesized and, perhaps above all, new content is uploaded into cyberspace. This is rather strange. We know that matter and energy cannot increase but apparently information can.

It is perhaps rather obvious to attribute this to the evolutionary algorithm or Darwinian process, as I will do, but I wish to emphasize one part of this process — copying. The reason information can increase like this is that, if the necessary raw materials are available, copying creates more information. Of course it is not new information, but if the copies vary (which they will if only by virtue of copying errors), and if not all variants survive to be copied again (which is inevitable given limited resources), then we have the complete three-step process of natural selection (Dennett, 1995). From here novel designs and truly new information emerge. None of this can happen without copying.
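
To see how little machinery the three-step process needs, here is a minimal sketch in the spirit of Dawkins’s well-known “weasel” demonstration — an illustration under stated assumptions, not code from this essay. The target string and mutation rate are arbitrary stand-ins for selection pressure and copying error:

    import random

    # Copy-vary-select in miniature. The fixed TARGET is a pedagogical
    # stand-in for selection pressure; real Darwinian selection has no goal.
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    TARGET = "methinks it is like a weasel"

    def copy_with_errors(s, rate=0.02):
        # Copying is imperfect: each character may be miscopied -- variation.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    def fitness(s):
        # Selection criterion: how many characters match the target.
        return sum(a == b for a, b in zip(s, TARGET))

    population = ["".join(random.choice(ALPHABET) for _ in TARGET)]
    for generation in range(5000):
        # Copy (with variation) from the survivors...
        population = [copy_with_errors(random.choice(population)) for _ in range(100)]
        # ...then let limited "resources" (ten slots) select what gets copied again.
        population = sorted(population, key=fitness, reverse=True)[:10]
        if population[0] == TARGET:
            print(generation, population[0])
            break

None of the initial copies contains the final string, yet it reliably emerges within a few hundred generations: copying plus variation plus selection is enough, which is all the argument here requires.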

I want to make three arguments here.

The first is that humans are unique because they are so good at imitation. When our ancestors began to imitate they let loose a new evolutionary process based not on genes but on a second replicator, memes. Genes and memes then coevolved, transforming us into better and better meme machines.

The second is that one kind of copying can piggy-back on another: that is, one replicator (the information that is copied) can build on the products (vehicles or interactors) of another. This multilayered evolution has produced the amazing complexity of design we see all around us.

The third is that now, in the early 21st century, we are seeing the emergence of a third replicator. I call these temes (short for technological memes, though I have considered other names). They are digital information stored, copied, varied and selected by machines. We humans like to think we are the designers, creators and controllers of this newly emerging world but really we are stepping stones from one replicator to the next.

As I try to explain this I shall make some assertions and assumptions that some readers may find outrageous, but I am deliberately putting my case in its strongest form so that we can debate the issues people find most interesting or most troublesome.

Some may entirely reject the notion of replicators, and will therefore dismiss the whole enterprise. Others will accept that genes are replicators but reject the idea of memes. For example, Eva Jablonka and Marion J. Lamb (2005) refer to “the dreaded memes” while Peter J. Richerson and Robert Boyd (2005), who have contributed so much to the study of cultural evolution, assert that “cultural variants are not replicators.” They use the phrase “selfish memes” but still firmly reject memetics (Blackmore 2006). Similarly, in a previous “On The Human” post, William Benzon explains why he does not like the term “meme,” yet he needs some term to refer to the things that evolve and so he still uses it. As John S. Wilkins points out in response, there are several more classic objections: memes are not discrete (I would say some are not discrete), they do not form lineages (some do), memetic evolution appears to be Lamarckian (but only appears so), memes are not replicated but re-created or reproduced, or are not copied with sufficient fidelity (see discussions in Aunger 2000, Sterelny 2006, Wimsatt 2010). I have tackled all these, and more, elsewhere and concluded that the notion is still valid (Blackmore 1999, 2010a).

So I will press on, using the concept of memes as originally defined by Dawkins who invented the term; that is, memes are “that which is imitated” or whatever it is that is copied when people imitate each other. Memes include songs, stories, habits, skills, technologies, scientific theories, bogus medical treatments, financial systems, organizations — everything that makes up human culture. I can now, briefly, tell the story of how I think we arrived where we are today.

First there were genes. Perhaps we should not call genes the first replicator because there may have been precursors worthy of that name and possibly RNA-like replicators before the evolution of DNA (Maynard Smith and Szathmary 1995). However, Dawkins (1976), who coined the term “replicator,” refers to genes this way and I shall do the same.

We should note here an important distinction for living things based on DNA, that the genes are the replicators while the animals and plants themselves are vehicles, interactors, or phenotypes: ephemeral creatures constructed with the aid of genetic information coded in tiny strands of DNA packaged safely inside them. Whether single-celled bacteria, great oak trees, or dogs and cats, in the gene-centered view of evolution they are all gene machines or Dawkins’s “lumbering robots.” The important point here is that the genetic information is faithfully copied down the generations, while the vehicles or interactors live and die without actually being copied. Put another way, this system copies the instructions for making a product rather than the product itself, a process that has many advantages (Blackmore 1999, 2001). This interesting distinction becomes important when we move on to higher replicators.

So what happened next? Earth might have remained a one-replicator planet but it did not. One of these gene machines, a social and bipedal ape, began to imitate. We do not know why, although shifting climate may have favored stealing skills from others rather than learning them anew (Richerson and Boyd 2005). Whatever the reason, our ancestors began to copy sounds, skills and habits from one to another. They passed on lighting fires, making stone tools, wearing clothes, decorating their bodies and all sorts of skills to do with living together as hunters and gatherers. The critical point here is, of course, that they copied these sounds, skills and habits, and this, I suggest, is what makes humans unique. No other species (as far as we know) can do this. Song birds can copy some sounds, some of the other great apes can imitate some actions, and most notably whales and dolphins can imitate, but none is capable of the widespread, generalized imitation that comes so easily to us. Imitation is not just some new minor ability. It changes everything. It enables a new kind of evolution.

This is why I have called humans “Earth’s Pandoran species.” They let loose this second replicator and began the process of memetic evolution in which memes competed to be selected by humans to be copied again. The successful memes then influenced human genes by gene-meme co-evolution (Blackmore 1999, 2001). Note that I see this process as somewhat different from gene-culture co-evolution, partly because most theorists treat culture as an adaptation (e.g. Richerson and Boyd 2005) and agree with Wilson that genes “keep culture on a leash” (Lumsden and Wilson 1981, p. 13).

Benzon, in responding to Peter Railton’s post here at The Stone, points out the limits of this metaphor and proposes the “chess board and game” instead. I prefer a simple host-parasite analogy. Once our ancestors could imitate they created lots of memes that competed to use their brains for their own propagation. This drove these hominids to become better meme machines and to carry the (potentially huge and even dangerous) burden of larger brain size and energy use, eventually becoming symbiotic. Neither memes nor genes are a dog or a dog-owner. Neither is on a leash. They are both vast competing sets of information, all selfishly getting copied whenever and however they can.

To help understand the next step we can think of this process as follows: one replicator (genes) built vehicles (plants and animals) for its own propagation. One of these then discovered a new way of copying and diverted much of its resources to doing this instead, creating a new replicator (memes) which then led to new replicating machinery (big-brained humans). Now we can ask whether the same thing could happen again and — aha — we can see that it can, and is.

A sticking point concerns the equivalent of the meme-phenotype or vehicle. This has plagued memetics ever since its beginning: some arguing that memes must be inside human heads while words, technologies and all the rest are their phenotypes, or “phemotypes”; others arguing the opposite. I disagree with both (Blackmore 1999, 2001). By definition, whatever is copied is the meme and I suggest that, until very recently, there was no meme-phemotype distinction because memes were so new and so poorly replicated that they had not yet constructed stable vehicles. Now they have.

Think about songs, recipes, ways of building houses or clothes fashions. These can be copied and stored by voice, by gesture, in brains, or on paper with no clear replicator/vehicle distinction. But now consider a car factory or a printing press. Thousands of near-identical copies of cars, books, or newspapers are churned out. Those actual cars or books are not copied again but they compete for our attention and if they prove popular then more copies are made from the same template. This is much more like a replicator-vehicle system. It is “copy the instructions” not “copy the product.”

Of course cars and books are passive lumps of metal, paper and ink. They cannot copy, let alone vary and select information themselves. So could any of our modern meme products take the step our hominid ancestors did long ago and begin a new kind of copying? Yes. They could and they are. Our computers, all linked up through the Internet, are beginning to carry out all three of the critical processes required for a new evolutionary process to take off.

Computers handle vast quantities of information with extraordinarily high-fidelity copying and storage. Most variation and selection is still done by human beings, with their biologically evolved desires for stimulation, amusement, communication, sex and food. But this is changing. Already there are examples of computer programs recombining old texts to create new essays or poems, translating texts to create new versions, and selecting between vast quantities of text, images and data. Above all there are search engines. Each request to Google, Alta Vista or Yahoo! elicits a new set of pages — a new combination of items selected by that search engine according to its own clever algorithms and depending on myriad previous searches and link structures.

This is a radically new kind of copying, varying and selecting, and means that a new evolutionary process is starting up. This copying is quite different from the way cells copy strands of DNA or humans copy memes. The information itself is also different, consisting of highly stable digital information stored and processed by machines rather than living cells. This, I submit, signals the emergence of temes and teme machines, the third replicator.

What should we expect of this dramatic step? It might make as much difference as the advent of human imitation did. Just as human meme machines spread over the planet, using up its resources and altering its ecosystems to suit their own needs, so the new teme machines will do the same, only faster. Indeed we might see our current ecological troubles not as primarily our fault, but as the inevitable consequence of earth’s transition to being a three-replicator planet. We willingly provide ever more energy to power the Internet, and there is enormous scope for teme machines to grow, evolve and create ever more extraordinary digital worlds, some aided by humans and others independent of them. We are still needed, not least to run the power stations, but as the temes proliferate, using ever more energy and resources, our own role becomes ever less significant, even though we set the whole new evolutionary process in motion in the first place.

Whether you consider this a tragedy for the planet or a marvelous, beautiful story of creation is up to you.

Susan Blackmore is a psychologist and writer researching consciousness, memes, and anomalous experiences, and a Visiting Professor at the University of Plymouth. She is the author of several books, including “The Meme Machine” (1999), “Conversations on Consciousness” (2005) and “Ten Zen Questions” (2009).

References

Aunger, R.A. (Ed.) (2000) “Darwinizing Culture: The Status of Memetics as a Science,” Oxford University Press

Benzon, W.L. (2010) “Cultural Evolution: A Vehicle for Cooperative Interaction Between the Sciences and the Humanities.” Post for On the Human.

Blackmore, S. (1999) “The Meme Machine,” Oxford and New York, Oxford University Press

Blackmore, S. (2001) “Evolution and memes: The human brain as a selective imitation device.” Cybernetics and Systems, 32, 225-255

Blackmore, S. (2006) “Memetics by another name?” Review of “Not by Genes Alone” by P.J. Richerson and R. Boyd. Bioscience, 56, 74-5

Blackmore, S. (2010a) “Memetics does provide a useful way of understanding cultural evolution.” In “Contemporary Debates in Philosophy of Biology,” Ed. Francisco Ayala and Robert Arp, Chichester, Wiley-Blackwell, 255-72

Blackmore, S. (2010b) “Dangerous Memes; or what the Pandorans let loose.” In “Cosmos and Culture: Cultural Evolution in a Cosmic Context,” Ed. Steven Dick and Mark Lupisella, NASA, 297-318

Dawkins, R. (1976) “The Selfish Gene,” Oxford, Oxford University Press (new edition with additional material, 1989)

Dennett, D. (1995) “Darwin’s Dangerous Idea.” London, Penguin

Jablonka, E. and Lamb, M.J. (2005) “Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral and Symbolic Variation in the History of Life.” Bradford Books

Lumsden, C.J. and Wilson, E.O. (1981) “Genes, Mind and Culture.” Cambridge, Mass., Harvard University Press

Maynard Smith, J. and Szathmáry, E. (1995) “The Major Transitions in Evolution.” Oxford, Freeman

Richerson, P.J. and Boyd, R. (2005) “Not by Genes Alone: How Culture Transformed Human Evolution,” Chicago, University of Chicago Press

Sterelny, K. (2006) “Memes Revisited.” British Journal for the Philosophy of Science, 57 (1)

Wimsatt, W. (2010) “Memetics does not provide a useful way of understanding cultural evolution: A developmental perspective.” In “Contemporary Debates in Philosophy of Biology,” Ed. Francisco Ayala and Robert Arp, Chichester, Wiley-Blackwell, 255-72

__________

Full article: http://opinionator.blogs.nytimes.com/2010/08/22/the-third-replicator/

Plato’s Pop Culture Problem, and Ours

This fall, the U.S. Supreme Court will rule on a case that may have the unusual result of establishing a philosophical link between Arnold Schwarzenegger and Plato.

The case in question is the 2008 decision of the Ninth Circuit Court of Appeals striking down a California law, signed by Gov. Schwarzenegger in 2005, that imposed fines on stores that sell video games featuring “sexual and heinous violence” to minors.  The issue is an old one: one side argues that video games shouldn’t receive First Amendment protection since exposure to violence in the media is likely to cause increased aggression or violence in real life.  The other side counters that the evidence shows nothing more than a correlation between the games and actual violence. In their book “Grand Theft Childhood,” the authors Lawrence Kutner and Cheryl K. Olson of Harvard Medical School argue that this causal claim is only the result of “bad or irrelevant research, muddleheaded thinking and unfounded, simplistic news reports.”

The issue, which at first glance seems so contemporary, actually predates the pixel by more than two millennia.  In fact, an earlier version of the dispute may be found in “The Republic,” in which Plato shockingly excludes Homer and the great tragic dramatists from the ideal society he describes in that work.

Could Plato, who wrote in the 4th century B.C., possibly have anything to say about today’s electronic media?  As it turns out, yes. It is characteristic of philosophy that even its most abstruse and apparently irrelevant ideas, suitably interpreted, can sometimes acquire an unexpected immediacy.  And while philosophy doesn’t always provide clear answers to our questions, it often reveals what exactly it is that we are asking.

Children in ancient Athens learned both grammar and citizenship from Homer and the tragic poets. Plato follows suit but submits their works to the sort of ruthless censorship that would surely raise the hackles of modern supporters of free speech.  But would we have reason to complain?  We, too, censor our children’s educational materials as surely, and on the same grounds, as Plato did.  Like him, many of us believe that emulation becomes “habit and second nature,” that bad heroes (we call them “role models” today) produce bad people.  We even fill our children’s books with our own clean versions of the same Greek stories that upset him, along with our bowdlerized versions of Shakespeare and the Bible.

What is really disturbing is that Plato’s adult citizens are exposed to poetry even less than their children.   Plato knows how captivating and so how influential poetry can be but, unlike us today, he considers its influence catastrophic.  To begin with, he accuses it of conflating the authentic and the fake.  Its heroes appear genuinely admirable, and so worth emulating, although they are at best flawed and at worst vicious.  In addition, characters of that sort are necessary because drama requires conflict — good characters are hardly as engaging as bad ones.  Poetry’s subjects are therefore inevitably vulgar and repulsive — sex and violence.  Finally, worst of all, by allowing us to enjoy depravity in our imagination, poetry condemns us to a depraved life. 

This very same reasoning is at the heart of today’s denunciations of mass media.  Scratch the surface of any attack on the popular arts — the early Christians against the Roman circus, the Puritans against Shakespeare, Coleridge against the novel, the various assaults on photography, film, jazz, television, pop music, the Internet, or video games — and you will find Plato’s criticisms of poetry.  For the fact is that the works of both Homer and Aeschylus, whatever else they were in classical Athens, were, first and foremost, popular entertainment.

Tens of thousands of people of all classes attended the free dramatic festivals and Homeric recitations of ancient Athens.  Noisy and rambunctious, they cheered the actors they liked and chased those they didn’t off the stage, often pelting them with the food it was customary to bring into the theater.  Drama, moreover, seemed to them inherently realistic: it is said that women miscarried when the avenging Furies rushed onstage in Aeschylus’s “Eumenides.”

To be realistic is to seem to present the world without artifice or convention, without mediation — reality pure and simple.  And popular entertainment, as long as it remains popular, always seems realistic: television cops always wear seat belts.   Only with the passage of time does artifice become visible — George Raft’s 1930’s gangsters appear dated to audiences that grew up with Robert De Niro.  But by then, what used to be entertainment is on its way to becoming art.

In 1935, Rudolf Arnheim called television “a mere instrument of transmission, which does not offer any new means for the artistic representation of reality.”  He was repeating, unawares, Plato’s ancient charge that, without a “craft” or an art of his own, Homer merely reproduces “imitations,” “images,” or “appearances” of virtue and, worse, images of vice masquerading as virtue.  Both Plato and Arnheim ignored the medium of representation, which interposes itself between the viewer and what is represented.  And so, in Achilles’ lament for Patroclus’ death, Plato sees not a fictional character acting according to epic convention but a real man behaving shamefully.  And since Homer presents Achilles as a hero whose actions are commendable,  he seduces his audience into enjoying a distorted and dismal representation that both reflects and contributes to a distorted and dismal life.

We will never know how the ancient Athenians reacted to poetry.  But what about us?  Do we, as Plato thought, move immediately from representation to reality?  If we do, we should be really worried about the effects of television or video games.  Or are we aware that many features of each medium belong to its conventions and do not represent real life?

To answer these questions, we can no longer investigate only the length of our exposure to the mass media; we must focus on its quality: are we passive consumers or active participants? Do we realize that our reaction to representations need not determine our behavior in life?  If so, the influence of the mass media will turn out to be considerably less harmful than many suppose.  If not, instead of limiting access to or reforming the content of the mass media, we should ensure that we, and especially our children, learn to interact intelligently and sensibly with them.  Here, again, philosophy, which questions the relation between representation and life, will have something to say.

Even if that is true, however, to compare the “Iliad” or “Oedipus Rex” to “Grand Theft Auto,” “CSI: NY,” or even “The Wire” may seem silly, if not absurd.  Plato, someone could argue, missed something serious about great art, but there is nothing to miss in today’s mass media.  Yet the fact is that Homer’s epics and, in particular, the 31 tragedies that have survived intact (a tiny proportion of the tens of thousands of works produced by thousands of ancient dramatists) did so because they were copied much more often than others — and that, as anyone familiar with best-selling books knows, may have little to do with perceived literary quality. For better or worse, the popular entertainment of one era often becomes the fine art of another.  And to the extent that we still admire Odysseus, Oedipus, or Medea, Plato, for one, would have found our world completely degenerate — as degenerate, in fact, as we would find a world that, perhaps two thousand years from now, had replaced them with Tony Soprano, Nurse Jackie, or the Terminator.

And so, as often in philosophy, we end with a dilemma: If Plato was wrong about epic and tragedy, might we be wrong about television and video games?  If, on the other hand, we are right, might Plato have been right about Homer and Euripides?

Alexander Nehamas is professor of philosophy and comparative literature and Edmund N. Carpenter, II, Class of 1943 Professor in the Humanities at Princeton University. He is the author of several works on Plato, Nietzsche, literary theory and aesthetics, including, most recently, “Only A Promise of Happiness: The Place of Beauty in a World of Art.”
__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/08/29/platos-pop-culture-problem-and-ours/

Copy That: A Response

In my essay for The Stone, “The Third Replicator,” I argued that new replicators can piggy-back on older ones. This happens when the product of one replicator becomes copying machinery for the next replicator, and so on. Memes appeared, I wrote, when humans (a product of genes) became capable of imitating, varying and selecting a new kind of information in the form of words, actions, technologies and ideas. The same thing is happening now, I argued, with a new replicator. Computers (a product of memes) are just beginning to be capable not only of copying but of copying, varying and selecting digital information. This means the birth of a new replicator — temes, or technological memes.

In the comments that followed, I was called a “trippy visionary” (Guillermo C. Jimenez, 68) and a “pink hair meme” spreader (Jim Gerofsky, 133, though I don’t think it did spread much), and sent to the Chinese Room, but I am grateful for the many responses from readers, which compel me to defend some of my arguments, clarify others and think hard about some of the questions raised.

Inevitably, some common misunderstandings of memetics surfaced, concerning the use of analogies, the existence of memes, the nature of selfishness and the role of imitation.

Marcelo (69) ably defends the use of analogies in scientific thinking. Yet the value of an analogy depends on how it is used, and many people seem to misunderstand this when it comes to both memes and temes. Let’s go back to 1976 and the origin of the term “meme” in Richard Dawkins’s “The Selfish Gene.” He says this: “I think we have got to start again and go right back to first principles. … The gene will enter my thesis as an analogy, nothing more” (Dawkins, 1976, p. 191). Those first principles are what he calls “Universal Darwinism” — that when anything is copied with variation and selection, evolution must occur. He looks at human culture, argues that songs, words, ideas, technologies and habits are all copied from person to person with added variations and heavy selection, and so concludes that there must be a new evolutionary process going on. He calls the replicator involved in that new process the meme.

The critical point here is that he starts from first principles, from what Dennett (1995) calls “Darwin’s Dangerous Idea.” I guess this is what 9 and 15 mean by saying I’ve “got it,” i.e., “got” what Dawkins was saying, and what is missed by so many objectors. So thanks to 9 and 15! Dawkins does not do what many seem to accuse him of, which seems to go something like this:

1. Genes are the replicator underlying biological evolution.

2. Cultural evolution looks a bit like biological evolution so by analogy let’s invent a new replicator to underlie culture and call it a meme.

They then add: 3. But memes and genes are so dissimilar that the theory of memes must be rubbish.

I have put this rather crudely but this seems to be what many people think. For example, Frank (49) says “the meme-as-a-cultural-sort-of-gene analogy can only be taken so far.” I would agree there, but he has missed the point. Analogies between genes and memes are secondary. If memes really are replicators (and this depends both on how you define a replicator and whether cultural information really is copied with variation and selection) then we should expect some interesting analogies between genes and memes. But we should not necessarily expect them to be close. Indeed they often are not. And this is not surprising since one is based on digital information encoded in DNA and the other is a wide variety of behaviors, skills, technologies and so on copied in various ways and often with low fidelity by humans.

A future science of higher replicators, if ever such a science comes about, should be able to use analogies between replicators as fruitful ways of asking new questions or investigating how replicators behave. It should not expect all these analogies to be useful or close. Some will be and some will not.

Some commentators have understood and built on this. For example, Marshall (116) points out that “Genetic replicators have been ‘working on’ the problem of accurate reproduction for rather a long time, and have evolved many mechanisms discretizing units of information, fixing or eliminating miscopies, defeating genes that game the process of random recombination, and so on.” He disagrees with me by concluding that what I call temes are really more memes (so does Mark Wei, 130, who thinks we are “still in a two replicator system, with the second replicator still in its infancy”), but he goes on to note, as I have also done, that memetic transmission is still in its early stages and so it is not surprising that the replicators are ill-defined and the process sloppy.

Similarly R. Garrett (147) argues that memes have only had about a thousand generations of humans using sufficiently complex language to spread them. So it’s unrealistic to expect memes to be as developed as genes: they could more reasonably be compared with RNA enzymes in the RNA-world. He suggests that writing, the printing press and digital information storage might all be steps towards a digital equivalent of DNA. This, I think, is the way we should be using analogies between different replicators — looking at how general processes operate in ones we understand and then seeing whether or not we can discern similar processes happening in ones we do not understand.

Some people claim that memes do not exist, or are not proven to exist. This reveals another misunderstanding (Aunger, 2000). Memes are words, stories, songs, habits, skills and so on. Surely they exist. Dennett asks, “Do words exist?” Of course they do. The interesting question is not whether memes exist but whether thinking of words, stories, skills, habits and technologies as a new replicator is of any value. I say yes, many others say no. This, unlike the existence question, is an argument worth having.

Some commentators are bothered by the notion of selfish memes or more generally of selfish replicators, and get themselves into trouble wondering about intentionality, anthropomorphism and teleology (100, 109, 119, 122, 129). When I say that memes are selfish I mean that they will get copied whenever and however they can without regard for the consequences. This is not because they have human-like self-preserving desires or emotions, but because they cannot care (they are only words, skills, habits, etc.). This is precisely the same argument as with genes — they have effects on living things but they don’t care because they cannot care. This becomes especially interesting when applied to temes. If I am right and we are on the verge of attaining a third replicator, this too will be selfish because it cannot care. Squillions of digits being copied, mixed up, selected and copied again cannot care about the consequences to us, our genes or to the planet. This is the sense in which temes are or will be selfish. Their inevitable evolution will drive the creation of ever more teme machines with ever more information passing around, with no regard for us or our planet.

This brings me onto the question of human emotions and consciousness. Many commentators berated me for ignoring human emotion or for playing down the importance of sentience, consciousness (36, 40, 58, 94, 129, 169) and free will (170, 172). As many of you will know, I think both consciousness and free will are illusions, by which I mean that they are not what they seem to be.

For example, people often think of consciousness as some kind of power or force which is able to act on their brain, but I reject this covertly dualist notion. Others think of consciousness as some kind of added principle in brain function, i.e., there is vision, learning, memory and so on, and then consciousness as well. I reject this, too (Blackmore, 2002, 2010). Consciousness, in these senses, does not exist. It is an illusion that comes about when clever brains build stories about self and other and try to understand their own actions. So in reply to some of these comments I would suggest that consciousness need play no role in creativity, selection or anything else we do. In a way this is one of the delights of memetics; one can think about everything we are and do, including the way that we construct illusions of a conscious self who is in charge of our bodies, without invoking any special stuff or property or power called “consciousness.”

What about creativity? Several mention this (61, 70, 73, 137, 143, 148), some saying that I have sneaked it in illegitimately while others argue that you cannot get novelty, creativity, or invention from “mindless copying” (137). No, you can’t. You get novelty, creativity, and invention from mindless copying with variation and selection. That’s the whole point. I would go so far as to say that this process (the evolutionary process, Darwin’s dangerous idea) is the source of all design in the universe (Blackmore, 2007). This is what creativity is. When we humans create new ideas, paintings, poems, stories, or technical achievements, it is because old ones have been copied, integrated with each other, mixed up, added to, and then the results have been ruthlessly selected, either within one brain or within the cruel worlds of bookshops, scientific peer review, cost-cutting, the fickleness of human desires and many other processes. We do not need either the concept of consciousness or the notion of some special creative capacity within humans to explain why we are such imaginative and creative creatures (Blackmore, 2007).
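
That bare loop is simple enough to watch in miniature. Below is a toy sketch in Python, offered as an illustration of the general principle rather than anything from Blackmore’s own work; the fixed “target” string is an artificial stand-in for whatever the environment happens to favor, whereas real memetic selection has no such goal built in.

    import random

    # Toy illustration of copy + vary + select on letter strings; not a
    # model of real memes. TARGET stands in for whatever the environment
    # happens to favor; actual selection has no fixed goal like this.
    TARGET = "REPLICATOR"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def fitness(s):
        # Count positions that match what the "environment" favors.
        return sum(a == b for a, b in zip(s, TARGET))

    def copy_with_variation(s, rate=0.05):
        # Copying is imperfect: each letter occasionally miscopies.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    # Start from pure noise and let the loop run.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(100)]
    for generation in range(60):
        population.sort(key=fitness, reverse=True)
        parents = population[:20]           # selection: best fifth survives
        population = [copy_with_variation(random.choice(parents))
                      for _ in range(100)]  # copying with variation

    print(max(population, key=fitness))     # typically at or near TARGET

Nothing in the loop plans ahead, yet order accumulates generation by generation; that, and no more, is what creativity from mindless copying with variation and selection asserts.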

Cube (148) gives a wonderful example of this in bypass surgery. The technology required was not invented by one person but by the efforts of many groups, lots of small steps, and lots of trial and error, all tested in the real world of patients and hospitals.

If you think that we humans have some special faculty of creativity or consciousness or sentience then you may think, as do 123 and 170, that we can somehow “escape the project.” Dawkins may have thought this too when he ended “The Selfish Gene” with the stirring words “We, alone on earth, can rebel against the tyranny of the selfish replicators” (1976, p. 201). I disagree (Blackmore, 1999). We are meme machines soon to become embedded in a three-replicator system, without any consciousness, free will, or other spooky power that might enable us to leap outside the system.

I will end with a few comments that raised interesting questions. William Benzon (41) makes many helpful comments and I have already replied to these separately (84, 112). I enjoyed 55’s thoughts about cells and unification of people into one greater organism. I have been pondering on these processes, too. As for temes, some (76, 152) worry that since all information is effectively stored forever somewhere on the Internet there can be no “survival of the fittest,” which would discount them as replicators. However, much of this information languishes, never to be copied again. As 141 points out, “copying is what keeps a meme alive.”

Cube (148) says “I’m not quite sure why the Internet, although faster and cheaper, is qualitatively different from the printing press.” Some point out that most varying and selecting is still done by us humans, even though we let our machines do so much of the copying and storage. Most of the stuff out there is there because some human put it there or because other humans like it and keep copying it.

I agree but this is what I suggest is beginning to change: it’s not the Internet per se that is so different, but the advent of machines that can carry out all of the three processes required for evolution: copying, varying and selecting. Out there among all the computers interlinked around the world are, I suggest, the beginnings of such machines. This is what will bring about, or already has brought about, the birth of the third replicator.

Susan Blackmore is a psychologist and writer researching consciousness, memes, and anomalous experiences, and a visiting professor at the University of Plymouth. She is the author of several books, including “The Meme Machine” (1999), “Conversations on Consciousness” (2005) and “Ten Zen Questions” (2009).

__________

Full article: http://opinionator.blogs.nytimes.com/2010/09/03/copy-that-a-response/

Experiments in Philosophy

Aristotle once wrote that philosophy begins in wonder, but one might equally well say that philosophy begins with inner conflict. The cases in which we are most drawn to philosophy are precisely the cases in which we feel as though there is something pulling us toward one side of a question but also something pulling us, perhaps equally powerfully, toward the other.

But how exactly can philosophy help us in cases like these? If we feel something within ourselves drawing us in one direction but also something drawing us the other way, what exactly can philosophy do to offer us illumination?

One traditional answer is that philosophy can help us out by offering us some insight into human nature. Suppose we feel a sense of puzzlement about whether God exists, or whether there are objective moral truths, or whether human beings have free will.

The traditional view was that philosophers could help us get to the bottom of this puzzlement by exploring the sources of the conflict within our own minds. If you look back to the work of some of the greatest thinkers of the 19th century — Mill, Marx, Nietzsche — you can find extraordinary intellectual achievements along these basic lines.

As noted earlier this month in The Times’s Room for Debate forum, this traditional approach is back with a vengeance.  Philosophers today are once again looking for the roots of philosophical conflicts in our human nature, and they are once again suggesting that we can make progress on philosophical questions by reaching a better understanding of our own minds.  But these days, philosophers are going after these issues using a new set of methodologies.  They are pursuing the traditional questions using all the tools of modern cognitive science.  They are teaming up with researchers in other disciplines, conducting experimental studies, publishing in some of the top journals of psychology.  Work in this new vein has come to be known as experimental philosophy.

The Room for Debate discussion of this movement brought up an important question that is worth pursuing further.  The study of human nature, whether in Nietzsche or in a contemporary psychology journal, is obviously relevant to certain purely scientific questions, but how could this sort of work ever help us to answer the distinctive questions of philosophy? It may be of some interest just to figure out how people ordinarily think, but how could facts about how people ordinarily think ever tell us which views were actually right or wrong?

Instead of just considering this question in the abstract, let’s focus in on one particular example.  Take the age-old problem of free will — a topic discussed at length here at The Stone by Galen Strawson, William Egginton and hundreds of readers. If all of our actions are determined by prior events — just one thing causing the next, which causes the next — then is it ever possible for human beings to be morally responsible for the things we do? Faced with this question, many people feel themselves pulled in competing directions — it is as though there is something compelling them to say yes, but also something that makes them want to say no.

What is it that draws us in these two conflicting directions? The philosopher Shaun Nichols and I thought that people might be drawn toward one view by their capacity for abstract, theoretical reasoning, while simultaneously being drawn in the opposite direction by their more immediate emotional reactions. It is as though their capacity for abstract reasoning tells them, “This person was completely determined and therefore cannot be held responsible,” while their capacity for immediate emotional reaction keeps screaming, “But he did such a horrible thing! Surely, he is responsible for it.”

To put this idea to the test, we conducted a simple experiment.  All participants in the study were told about a deterministic universe (which we called “Universe A”), and all participants received exactly the same information about how this universe worked. The question then was whether people would think that it was possible in such a universe to be fully morally responsible.

But now comes the trick. Some participants were asked in a way designed to trigger abstract, theoretical reasoning, while others were asked in a way designed to trigger a more immediate emotional response. Specifically, participants in one condition were given the abstract question:

In Universe A, is it possible for a person to be fully morally responsible for their actions?

Meanwhile, participants in the other condition were given a more concrete and emotionally fraught example:

In Universe A, a man named Bill has become attracted to his secretary, and he decides that the only way to be with her is to kill his wife and three children. He knows that it is impossible to escape from his house in the event of a fire. Before he leaves on a business trip, he sets up a device in his basement that burns down the house and kills his family.

Is Bill fully morally responsible for killing his wife and children?

The results showed a striking difference between conditions. Of the participants who received the abstract question, the vast majority (86 percent) said that it was not possible for anyone to be morally responsible in the deterministic universe. But then, in the more concrete case, we found exactly the opposite results. There, most participants (72 percent) said that Bill actually was responsible for what he had done.
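
For the statistically minded, the size of that reversal can be checked with a standard two-by-two chi-square test. The counts below are hypothetical, since only percentages are reported above; the calculation is written out by hand in Python so the sketch stays self-contained.

    # Hypothetical counts: the article reports only percentages, so assume
    # 50 participants per condition purely for illustration. "yes" means
    # full moral responsibility was judged possible (abstract condition)
    # or Bill was judged responsible (concrete condition).
    abstract = {"yes": 7, "no": 43}    # 14 percent yes; 86 percent "not possible"
    concrete = {"yes": 36, "no": 14}   # 72 percent yes

    def chi_square_2x2(row_a, row_b):
        # Pearson chi-square statistic for a 2x2 table of observed counts.
        rows = [row_a, row_b]
        total = sum(sum(row.values()) for row in rows)
        statistic = 0.0
        for row in rows:
            row_total = sum(row.values())
            for answer in ("yes", "no"):
                column_total = row_a[answer] + row_b[answer]
                expected = row_total * column_total / total
                statistic += (row[answer] - expected) ** 2 / expected
        return statistic

    # With one degree of freedom, anything above 3.84 is significant at
    # the .05 level; these illustrative counts give a statistic near 34.
    print(f"chi-square = {chi_square_2x2(abstract, concrete):.1f}")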

What we have in this example is just one very simple initial experiment. Needless to say, the actual body of research on this topic involves numerous different studies, and the scientific issues arising here can be quite complex.  But let us put all those issues to the side for the moment.  Instead, we can just return to our original question.  How can experiments like these possibly help us to answer the more traditional questions of philosophy?

The simple study I have been discussing here can offer at least a rough sense of how such an inquiry works.  The idea is not that we subject philosophical questions to some kind of Gallup poll. (“Well, the vote came out 65 percent to 35 percent, so I guess the answer is … human beings do have free will!”) Rather, the aim is to get a better understanding of the psychological mechanisms at the root of our sense of conflict and then to begin thinking about which of these mechanisms are worthy of our trust and which might simply be leading us astray.

So, what is the answer in the specific case of the conflict we feel about free will? Should we be putting our faith in our capacity for abstract theoretical reasoning, or should we be relying on our more immediate emotional responses?  At the moment, there is no consensus on this question within the experimental philosophy community.  What all experimental philosophers do agree on, however, is that we will be able to do a better job of addressing these fundamental philosophical questions if we can arrive at a better understanding of the way our own minds work.

Joshua Knobe is an assistant professor at Yale University, where he is appointed both in Cognitive Science and in Philosophy. He is a co-editor, with Shaun Nichols, of the volume “Experimental Philosophy.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/09/07/experimental-philosophy/

On Not Returning to Normal

Political theorists, lawyers and policy-makers sometimes assume that responses to emergency should — morally should — aim at a speedy return to a “normal” that predated the emergency. This is implicit in the metaphor of resilience often used by officials for emergency response. “Resilience” suggests that the preferred aftermath of an emergency is quickly regaining one’s former shape, bouncing back. Presumably it is possible to bounce back with a few permanent bumps or scars, but at the limit we might speak of an invisible mending ideal of emergency response: when the response is genuinely successful, the effects of the emergency entirely disappear; before and after are indistinguishable.

The first thing to be said about the invisible mending model is that it is highly ambitious, even as a model of ideal emergency response. We normally expect firemen to put out the house fire. Replacing what is charred is not their responsibility. Neither is making the house habitable. In the same way, an emergency medical response is not supposed on its own to restore people to full functioning or health. The governing norm is that of removing or reducing the threat to life and limb. The larger aim of returning to normal involves a much larger set of agents than those who confront the emergency, and a much greater length of time.  Such was the case, as we now know, in the days that followed the attacks of Sept. 11. The efforts of the immediate responders to this grave emergency eliminated dangers and saved lives, but did not, could not, effect the complete and invisible mending that the language often used to describe their efforts implied.

Invisible mending may be a bad ideal of emergency response for two further reasons. First, the status quo ante — the way things were before — may be an emergency waiting to happen. Rebuilding water-damaged housing on a flood plain only invites more water damage. Instead of restoring things to their former condition, it is at least arguable that emergency response should usher in discontinuity. Perhaps people living in highly vulnerable flood plains have to be encouraged to move; or perhaps new forms of flood-resistant construction have to be developed.

The second reason why emergency response can be geared to the wrong ideal when it aims at invisible mending is that restoration can serve to continue a morally questionable status quo ante. Even when returning  to normal after a crisis does not return people to an emergency waiting to happen, it can return them to a “normal” that is unacceptable in other ways. The peace process in Northern Ireland has made violent paramilitary activity, including bombing, largely a thing of the past. This marks a return to a normality of non-violence that has not been seen for decades.  But there are many urban areas where high walls built during the Troubles separate loyalists from nationalists. The walls can be torn down. But extreme sectarian ill-feeling will go on: that is a continuity that the peace process has not broken.

Aiming simply to return to a former normality can have an unwelcome complacency about it, sometimes a defiant complacency. A determination to go on exactly as before — just to spite an enemy or attacker or simply a critic — is a recognizable human response to attack, enmity or criticism. Perhaps it also displays a kind of resilience. But unless continuity has a significant value of its own, the determination to go on exactly as before may have little to be said for it. Emergencies may better be seen as occasions for fresh starts and rethinking. Because they take life and make death vivid for those who survive emergencies, they properly prompt people to appraise lives that are nearly cut short.

Consider the following clichéd vignette. Bloggs, a ruthless businessman, spends 20 years, seven days a week, clinching deals. He is run down, eats too much, drinks too much. This leads to a heart attack. He takes this heart attack as a wake-up call. The teenage children he has neglected, his wife, his dog — all of these figures appear in a new light. He decides to lead a new sort of life that gives them the attention and appreciation they deserve. The heart attack leads Bloggs to this decision, let’s say, and not the decision to lead his old business life on a new sort of diet. The heart attack may have been caused by the bad diet, and it is open to interpretation as a wake-up call about eating and drinking. But it is naturally seized upon as an opportunity to take stock more generally and change a way of life.

A public emergency can be seized upon in the same way, even if one is not at its sharp end. This is how it was with September 11. Through the live television coverage the whole world was there. Many viewers throughout the world identified strongly with the victims, so that their deaths reminded us of our mortality and prompted us to take stock. To this extent we took September 11 as a wake-up call. We opened our minds to questions of how we could live better.

There are philosophers such as Ted Honderich who think that the responsibility Westerners had before September 11 for not living better lives by alleviating global inequalities contributed to the overall responsibility for the September 11 attack. Honderich wrote in “After the Terror” in 2003: “[T]he atrocity at the Twin Towers did have a human necessary condition in what preceded it: our deadly treatment of those outside our circle of comfort, those with the bad lives. Without that deadly treatment by us, the atrocity of the Twin Towers would not have happened.”

But this is a puzzling thing to think. It may be true that the rich in the world should wake up to the fact that they can do much more for the poor of the world. It may be true that the earlier this is done the better. It may be true that there was an opportunity to start on this task on September 12, 2001, as opposed to September 12, 2003. It may therefore be true that people should have started on this task on September 12, 2001. What does not seem to be true is that they should have started to do this because of the reasons for the September 11 attack. There is no reason to think that the Al-Qaeda operatives who flew the airplanes, or their masters, had any agenda with respect to global inequality. And it is hard to understand an attack on the Twin Towers or Pentagon as a means of reducing inequality. This crucial and fairly obvious point is fatal for Honderich’s way of arguing.

September 11 caused many people to take stock of their lives, and many governments to reappraise their priorities in foreign policy. Not every such reappraisal has led to better lives or better policies. But there is something important about the opportunity that emergency offers for not going on in the same old way. For us to break from our past because of an emergency is not at all for us to be broken by an emergency.

Tom Sorell is John Ferguson Professor of Global Ethics and director of the Center for the Study of Global Ethics at Birmingham University. He is the author of several philosophical works, including “Moral Theory and Anomaly” (1999), and is currently working on a book on the moral and political theory of emergencies.

___________

Full article: http://opinionator.blogs.nytimes.com/2010/09/12/on-not-returning-to-normal/

Predators: A Response

There are certain responses to the arguments in my post, “The Meat Eaters,” that recur with surprising frequency throughout the comments.  The following four objections, listed in the order of the relative frequency of their appearance, are the most common.

1. If predators were to disappear from a certain geographical region, herbivore populations in that area would rapidly expand, depleting edible vegetation, and thereby ultimately producing more deaths among herbivores from starvation or disease than would otherwise have been caused by predators.  And starvation and disease normally involve more suffering than being quickly dispatched by a predator.

2. Should human beings be the first to go?

3. What about the suffering of plants?

4. What about bacteria, viruses, and insects?

My own response will focus primarily on the first of these objections, and on the ways in which the argument might continue after the objection has been noted.

In the sixth, seventh, and eighth paragraphs of my original article, I anticipated the first objection. I wrote:

Suppose that we could arrange the gradual extinction of carnivorous species, replacing them with new herbivorous ones.  Or suppose that we could intervene genetically, so that currently carnivorous species would gradually evolve into herbivorous ones, thereby fulfilling Isaiah’s prophecy.  If we could bring about the end of predation by one or the other of these means at little cost to ourselves, ought we to do it?

I concede, of course, that it would be unwise to attempt any such change given the current state of our scientific understanding.  Our ignorance of the potential ramifications of our interventions in the natural world remains profound.  Efforts to eliminate certain species and create new ones would have many unforeseeable and potentially catastrophic effects.

Perhaps one of the more benign scenarios is that action to reduce predation would create a Malthusian dystopia in the animal world, with higher birth rates among herbivores, overcrowding, and insufficient resources to sustain the larger populations.  Instead of being killed quickly by predators, the members of species that once were prey would die slowly, painfully, and in greater numbers from starvation and disease.

After presenting the objection, I referred back to it six times in the course of the remaining 1900 words of the article.  Those references typically stress that my argument takes a conditional form: that is, my argument has practical implications only if we could have a high degree of confidence that problems of the sort I identified could be avoided.  Yet this same objection is repeatedly pressed in the comments, usually with lamentations over my appalling ignorance of biology and ecology, as if I had been unaware that there was such an obvious and devastating refutation of everything I had said.  I will return to this fact about the nature of the commentary at the end of this response.

Among those commentators who actually read the article and thus were aware that I had acknowledged the objection, a few took the argument a step further.  They understood that my argument was conditional but claimed that the relevant condition could never obtain.  We will never, they contended, be able to eliminate predation without causing catastrophic ecological disruption and thus even more suffering than we might have prevented.  If they are right, my article may present an interesting thought experiment that might have prompted us to reflect on our values (though it didn’t), but it is essentially devoid of practical significance.  These readers were too polite to point out that their prediction also casts Isaiah in a pretty disappointing light in his role as prophet.

My assumption in the article, however, was that our understanding of the biological and ecological sciences may well advance beyond what we consider possible today.  That has happened repeatedly in the history of science, as when Rutherford, who first split the atom, said in 1933 that anyone who thought that the splitting of the atom could be a source of power was talking “moonshine.”  Since we can’t be certain that we’ll never be able to reduce or eliminate predation without disastrous side effects, it’s important to think in advance about how we might wisely employ a more refined and discriminating power of intervention if we were ever to acquire it.

It seems, moreover, that my argument has some relevance to choices we must make even now.  There are some species of large predatory animals, such as the Siberian tiger, that are currently on the verge of extinction.  If we do nothing to preserve it, the Siberian tiger as a species may soon become extinct.  The number of extant Siberian tigers has been low for a considerable period.  Any ecological disruption occasioned by their dwindling numbers has largely already occurred or is already occurring.  If their number in the wild declines from several hundred to zero, the impact of their disappearance on the ecology of the region will be almost negligible.  Suppose, however, that we could repopulate their former wide-ranging habitat with as many Siberian tigers as there were during the period in which they flourished in their greatest numbers, and that that population could be sustained indefinitely.  That would mean that herbivorous animals in the extensive repopulated area would again, and for the indefinite future, live in fear and that an incalculable number would die in terror and agony while being devoured by a tiger.  In a case such as this, we may actually face the kind of dilemma I called attention to in my article, in which there is a conflict between the value of preserving existing species and the value of preventing suffering and early death for an enormously large number of animals.

Many of the commentators said, in effect: “Leave nature alone; the course of events in the natural world will go better without human intervention.”  Since efforts to repopulate their original habitat with large numbers of Siberian tigers might require a massive intervention in nature, this anti-interventionist view may itself imply that we ought to allow the Siberian tiger to become extinct.  But suppose Siberian tigers would eventually restore their former numbers on their own if human beings would simply leave them alone.  Most people, I assume, would find that desirable.  But is that because our human prejudices blind us to the significance of animal suffering?  Siberian tigers are in fact not particularly aggressive toward human beings, but suppose for the sake of argument that they were.  And suppose that there were large numbers of poor people living in primitive and vulnerable conditions in the areas in which Siberian tigers might become resurgent, so that many of these people would be threatened with mutilation and death if the tigers were not to become extinct, or not banished to captivity.  Would you still say, “Leave nature alone; let the tigers repopulate their former habitats”?  What if you were one of the people in the region, so that your children or grandchildren might be among the victims?  And what would your reaction be if someone argued for the proliferation of tigers by pointing out that without tigers to keep the human population in check, you and others would breed incontinently and overcultivate the land, so that eventually your numbers would have to be controlled by famine or epidemic?  Better, they might say, to let nature do the work of culling the human herd in your region via the Siberian tiger.  Would you agree?

In fact we can’t leave nature alone.  We are a part of it, as much as any other animal.  More importantly, we can’t help but have a massive and pervasive impact on the natural world given our own numbers.  Agricultural practices necessary for our survival constitute a continuing invasion and occupation of lands previously inhabited by others.  One explicit suggestion of my article was that it would be better to try to control our impact on the natural world in a purposeful way, guided by intelligence and moral values, including the value of diminishing suffering, rather than to continue to allow our effects on the natural world, including the extinction of species, to be determined by blind inadvertence — as, for example, in the case of the many extinctions of animal species that will be caused by global climate change.

Some commentators made the interesting point that even if predators were to become extinct in a certain area without environmental catastrophe, new ones would eventually evolve there to fill the ecological niche that would have been left vacant, thereby restarting the whole dreary cycle.  But even if this were to happen, the evolution of a species can take a long time, and a lengthy interval without predation could be a significant good, just as the prevention of a war can be a great good even if it does nothing to prevent other wars in the future.  More importantly, it’s hardly plausible to suppose that we could have the ability to eliminate a predatory species from an area but would then lack the ability, even in the distant future when our scientific expertise would have advanced even further, to prevent a new predatory species from arising.

Consider the three other responses that turned up repeatedly in the comments.  Some readers suggested that my argument implies that we should aim for the extinction of the carnivorous human species, the species that causes far more suffering to other animals than any other.  Most took that to be a reductio ad absurdum of my argument, but a few seemed to think that getting rid of human beings would be a good idea.  For those who wish to pursue this issue, I recommend Peter Singer’s contribution to The Stone (“Should This Be the Last Generation?”, June 6, 2010).  My own response can be quite brief.  Human beings are not carnivores in the relevant sense, but omnivores, and in most cases can choose to live without tormenting and killing other animals, an option that is not a biological possibility for genuine carnivores.  My own view, though I won’t argue for it here, is that the extinction of human beings would be the worst event that could possibly occur.

What about the suffering of plants?  Again a brief response: plants don’t suffer, though they do respond to stimuli in ways that some have mistaken for a pain response.  What was rather shocking about the repeated invocation of suffering in plants is that it occasioned no reflections on what the moral implications would be if plants really did suffer.  The commentators’ gesture toward the alleged suffering of plants seemed no more than a rhetorical move in their attack on my argument.  But if one became convinced, as some of the commentators appear to be, that plants are conscious, feel pain, and experience suffering, that ought to prompt serious reconsideration of the permissibility of countless practices that we have always assumed to be benign.  If you really believed that plants suffer, would you continue to think that it’s perfectly acceptable to mow your grass?

Finally, my responses to the recurring challenges concerning microbes and insects parallel those I offered to people’s solicitude about plants.  Like plants, microbes don’t suffer.  I don’t think we know yet whether many types of insect do.  If, in controlled conditions, one pulls the leg or wing off a fly while it’s feeding or grooming, it will carry on with its activity as if nothing had happened.  But suppose insects really do suffer, perhaps quite intensely.  Shouldn’t that elicit serious moral reflection rather than being deployed as a mere debating point?

Earlier I noted that by far the most common objection to my article was that I ignored the likely consequences of the elimination or even the mere reduction of predation.  If you have the patience, review the first 152 comments on my article.  You will find this objection stated in 28 of them — that is, in one of every 5.4, or nearly 20 percent.  Given that I explicitly stated and addressed that objection, and later reverted to it six times, it seems clear that many, and probably most, of the readers of the article gave it only a cursory glance before pouncing on their keyboards to give me a good roasting. But at least those who replicated the objection I had stated deserve credit for saying something of substance. What’s particularly disheartening is that their comments are greatly outnumbered by those that make no reference to my arguments and never touch on a point of substance, but instead consist entirely of insults and invective.  If you take your own moral beliefs seriously, the way to respond to a challenge to them is to make sure you understand the challenge and then to try to refute the arguments for it.  If you can’t answer the challenge except by mocking the challenger, how can you retain your confidence in your own beliefs?

Jeff McMahan is professor of philosophy at Rutgers University and a visiting research collaborator at the Center for Human Values at Princeton University. He is the author of many works on ethics and political philosophy, including “The Ethics of Killing: Problems at the Margins of Life” and “Killing in War.”

__________

Full article: http://opinionator.blogs.nytimes.com/2010/09/28/predators-a-response/

The Defiant Ones

In her new book, the author of ‘Seabiscuit’ turns to the unimaginable ordeal of an Olympic athlete and WW II hero. Because of her own debilitating illness, they struck a special bond.

With a fringe of white hair poking out from under a University of Southern California baseball cap and blue eyes sharp behind bifocals, 93-year-old Louis Zamperini refuses to concede much to old age. He still works a couple of hours each day in the yard of his Hollywood Hills home, bagging leaves, climbing stairs and, on occasion, trimming trees with a chainsaw. His outlook is upbeat, even rambunctious. “I have a cheerful countenance at all times,” he says. “When you have a good attitude your immune system is fortified.” But as he plunged into “Unbroken,” Laura Hillenbrand’s 496-page story of his life, the happy trappings of his current existence fell away.

“Unbroken” will be published Nov. 16 with a first printing of 250,000 copies. Its publisher, Random House, hopes to repeat the success it enjoyed with “Seabiscuit,” Ms. Hillenbrand’s 2001 best seller, which has six million books in print and became a hit movie. “We’re positioning it as the big book for the holidays,” says a Barnes & Noble buyer.

One of the many notable aspects of “Unbroken” is that its author has never met her subject. Suffering from a debilitating case of chronic fatigue syndrome, she was unable to travel to Los Angeles from her Washington, D.C., home. She did the bulk of her research by phone and over the Internet, which enabled her to zero in on key collections at such institutions as the National Archives.

Mr. Zamperini, in his bomber jacket
 
“Unbroken” details a life that was tumultuous from the beginning. As a blue-collar kid in Southern California, Mr. Zamperini fell in and out of scrapes with the law. By age 19, he’d redirected his energies into sports, becoming a record-breaking distance runner. He competed in the 1936 Olympic Games in Berlin where he made headlines, not just on the track (Hitler sought him out for a congratulatory handshake), but by stealing a Nazi flag from the well-guarded Reich Chancellery. The heart of the story, however, is about Mr. Zamperini’s experiences while serving in the Pacific during World War II.

A bombardier on a B-24 flying out of Hawaii in May 1943, the Army Air Corps lieutenant was one of only three members of an 11-man crew to survive a crash into a trackless expanse of ocean. For 47 days, Mr. Zamperini and pilot Russell Allen Phillips (tail gunner Francis McNamara died on day 33) huddled aboard a tiny, poorly provisioned raft, subsisting on little more than rain water and the blood of hapless birds they caught and killed bare-handed. All the while sharks circled, often rubbing their backs against the bottom of the raft. The sole aircraft that sighted them was Japanese. It made two strafing runs, missing its human targets both times. After drifting some 2,000 miles west, the bullet-riddled, badly patched raft washed ashore in the Marshall Islands, where Messrs. Zamperini and Phillips were taken prisoner by the Japanese. The war still had more than two years to go.

 
Laura Hillenbrand at her home in Washington; she rarely leaves the house because of her illness.

For 25 months in such infamous Japanese POW camps as Ofuna, Omori and Naoetsu, Mr. Zamperini was physically tortured and subjected to constant psychological abuse. He was beaten. He was starved. He was denied medical care for maladies that included beriberi and chronic bloody diarrhea. His fellow prisoners—among them Mr. Phillips—were treated almost as badly. But Mr. Zamperini was singled out by a sadistic guard named Mutsuhiro Watanabe, known to prisoners as “the Bird,” a handle picked because it had no negative connotations that might bring down his irrational wrath. The Bird intended to make an example of the famous Olympian. He regularly whipped him across the face with a belt buckle and forced him to perform demeaning acts, among them push-ups atop pits of human excrement. The Bird’s goal was to force Mr. Zamperini to broadcast anti-American propaganda over the radio. Mr. Zamperini refused. Following Japan’s surrender, Mr. Watanabe was ranked seventh among its most wanted war criminals (Tojo was first). Because war-crime prosecutions were suspended in the 1950s, he was never brought to justice.

Mr. Zamperini, record-setting miler, 1939

This all came rushing back when Mr. Zamperini first sat down with a copy of “Unbroken” last month. “As I was reading,” he says, gesturing with an arm to a peaceful vista of palm trees outside his house, “I had to look out that picture window from time to time to make sure that I wasn’t still in Japan. When I got to the end I called Laura and told her she’d put me back in prison, and she said, ‘I’m sorry.’ ”

“It’s almost unimaginable what Louie went through,” says Ms. Hillenbrand from her home on a late fall afternoon. She discovered Mr. Zamperini’s story while researching “Seabiscuit,” the saga of another individual—in that case, a horse—that confronted long odds. “Louie and Seabiscuit were both Californians and both on the sports pages in the 1930s,” she says. “I was fascinated. When I learned about his World War II experiences, I thought, ‘If this guy is still alive, I want to meet him.’ ”

Following the publication of “Seabiscuit,” Ms. Hillenbrand wrote to Mr. Zamperini. Shortly thereafter they had the first of many long phone conversations. His tale of survival captivated her both on its merits and because she could relate to it personally. “I’m attracted,” she says, “to subjects who overcome tremendous suffering and learn to cope emotionally with it.”

In basic training, pre-WWII helmet, 1941

The 43-year-old Ms. Hillenbrand contracted chronic fatigue syndrome during her sophomore year at Kenyon College. The bewildering disease, thought to originate from a virus, can be enfeebling and is incurable. Ms. Hillenbrand is today essentially a prisoner in her own home. She is so consistently weak and dizzy (vertigo is a side effect) that she recently installed a chair lift to get to the second floor of her house, where she lives with her husband, G. Borden Flanagan, an assistant professor of political philosophy at American University. What to others might seem simple matters are to her subjects of grave consideration. “I skipped my shower today,” she says, “in order to have the strength to do this interview. My illness is excruciating and difficult to cope with. It takes over your entire life and causes more suffering than I can describe.”

Ms. Hillenbrand’s research was complicated by her disease. But as she likes to remind people, she came down with chronic fatigue syndrome before starting her writing career, and she has learned to work around it. “For ‘Seabiscuit,’ ” she says, “I interviewed 100 people I never met.” For “Unbroken,” Ms. Hillenbrand located not only many of Mr. Zamperini’s fellow POWs and the in-laws of Mr. Phillips, but the most friendly of his Japanese captors. She also interviewed scores of experts on the War in the Pacific (the book is extensively end-noted) and benefited from her subject’s personal files, which he shipped to Washington for her use. “A superlative pack rat,” she writes, “Louie has saved virtually every artifact of his life.”

His damaged B-24 after a mission, 1943

Mr. Zamperini with mother at homecoming, 1945

During her exploration of Mr. Zamperini’s war years, Ms. Hillenbrand was most intrigued by his capacity to endure hardship. “One of the fascinating things about Louie,” she says, “is that he never allowed himself to be a passive participant in his ordeal. It’s why he survived. When he was being tortured, he wasn’t just lying there and getting hit. He was always figuring out ways to escape emotionally or physically.”

Mr. Zamperini owes this resiliency, Ms. Hillenbrand concluded, to his rebellious nature. “Defiance defines Louie,” she says. “As a boy he was a hell-raiser. He refused to be corralled. When someone pushed him he pushed back. That made him an impossible kid but an unbreakable man.”

Although Mr. Zamperini came back to California in one piece, he was emotionally ruined. At night, his demons descended in the form of vengeful dreams about Mr. Watanabe. He drank heavily. He nearly destroyed his marriage. In 1949, at the urging of his wife, Cynthia, Mr. Zamperini attended a Billy Graham crusade in downtown Los Angeles, where he became a Christian. (The conversion of the war hero helped put the young evangelist on the map.) Ultimately Mr. Zamperini forgave his tormentors and enjoyed a successful career running a center for troubled youth. He even reached out to Mr. Watanabe. “As a result of my prisoner of war experience under your unwarranted and unreasonable punishment,” Mr. Zamperini wrote his former guard in the 1990s, “my post-war life became a nightmare … but thanks to a confrontation with God … I committed my life to Christ. Love replaced the hate I had for you.” A third party promised to deliver the letter to Mr. Watanabe. He did not reply, and it is not known whether he received it. He died in 2003.

Mr. Zamperini still has his purloined Nazi flag.

Mr. Zamperini’s internal battles and ultimate redemption point to a key difference between “Unbroken” and Ms. Hillenbrand’s previous book. “Seabiscuit’s story is one of accomplishment,” she says. “Louie’s is one of survival. Seabiscuit’s story played out before the whole world. Louie dealt with his ordeal essentially alone. His was a mental struggle.” That struggle, she adds, feels particularly resonant in 2010. “This is a time when people need to be buoyed by something, and Louie blows breath into people by making them realize that they can overcome more than they think.”

Because of Ms. Hillenbrand’s illness, there will be no author tour. In 2007 she sank deeper into chronic fatigue syndrome, and she hasn’t pulled out of it. “This is going to be hard,” she says. “I’m very afraid. I’m not functioning well. I’m going to have to be careful that I don’t slip back to the bottom.” Next week’s “Today” show interview was taped at her home.

A rambunctious youth in Torrance, Calif.

Mr. Zamperini—whose health issues don’t go beyond taking blood-thinning medication following a recent angioplasty—is raring to go. His wife died in 2001, and while he is close to his two children and a grandson, he lives alone. In short, he’s up for an adventure. He has told Random House he will promote the book in Ms. Hillenbrand’s stead. He also has signed with a San Francisco-based speakers’ agency. His goal is to become an inspirational mainstay on cruise ships. He has transformed what he learned as a POW into parables (“Hope has to have a reason. Faith has to have an object”) that he feels can reduce stress and are perfect for an anxiety-filled time.

Visiting a prison camp in Japan in 1950

There is also, not surprisingly, movie interest (the film version of “Seabiscuit” took in $150 million world-wide at the box office). The outlook, however, is uncertain. In the 1950s, Mr. Zamperini published an autobiography titled “Devil at My Heels.” Universal, envisioning a vehicle for Tony Curtis, optioned Mr. Zamperini’s life rights. The project went nowhere. In the 1990s, Universal re-optioned the rights, this time for Nicolas Cage. Again the project faltered. In 2003, Mr. Zamperini and writer David Rensin updated “Devil at My Heels.”

Running in the Olympic torch relay in Los Angeles, 1984

Andrew Rigrod, an entertainment lawyer representing Mr. Zamperini, believes the rights have now reverted to his client. A Universal spokeswoman says that this is most likely correct, but says the studio still owns the previous project and is developing it. She adds that she expects things to be resolved to everyone’s satisfaction. Mr. Zamperini’s hope, Mr. Rigrod says, is that he and Ms. Hillenbrand (who is represented by CAA) will join forces. “He wants the movie to be based on Laura’s book,” says the lawyer, “and he would cooperate and participate.” Says Mr. Zamperini: “For the work she’s done, she deserves the movie. I told her I don’t want anything.”

Over the course of the seven years Ms. Hillenbrand toiled on “Unbroken,” she and Mr. Zamperini became friends, despite never laying eyes on each other. “I call him a virtuoso of joy,” she says. “When things are going bad, I phone him.” Says Mr. Zamperini, “Every time I say good-bye to her, I tell her I love her and she tells me, ‘I love you.’ I’ve never known a girl like her.

“Laura brought my war buddies back to life,” he says. “The fact that Laura has suffered so much enabled her to put our suffering into words.”

Steve Oney is the author of “And the Dead Shall Rise: The Murder of Mary Phagan and the Lynching of Leo Frank.”

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703514904575602540345409292.html

Discovering the Virtues of a Wandering Mind

At long last, the doodling daydreamer is getting some respect.

In the past, daydreaming was often considered a failure of mental discipline, or worse. Freud labeled it infantile and neurotic. Psychology textbooks warned it could lead to psychosis. Neuroscientists complained that the rogue bursts of activity on brain scans kept interfering with their studies of more important mental functions.

But now that researchers have been analyzing those stray thoughts, they’ve found daydreaming to be remarkably common — and often quite useful. A wandering mind can protect you from immediate perils and keep you on course toward long-term goals. Sometimes daydreaming is counterproductive, but sometimes it fosters creativity and helps you solve problems.

Consider, for instance, these three words: eye, gown, basket. Can you think of another word that relates to all three? If not, don’t worry for now. By the time we get back to discussing the scientific significance of this puzzle, the answer might occur to you through the “incubation effect” as your mind wanders from the text of this article — and, yes, your mind is probably going to wander, no matter how brilliant the rest of this column is.

Mind wandering, as psychologists define it, is a subcategory of daydreaming, which is the broad term for all stray thoughts and fantasies, including those moments you deliberately set aside to imagine yourself winning the lottery or accepting the Nobel. But when you’re trying to accomplish one thing and lapse into “task-unrelated thoughts,” that’s mind wandering.

During waking hours, people’s minds seem to wander about 30 percent of the time, according to estimates by psychologists who have interrupted people throughout the day to ask what they’re thinking. If you’re driving down a straight, empty highway, your mind might be wandering three-quarters of the time, according to two of the leading researchers, Jonathan Schooler and Jonathan Smallwood of the University of California, Santa Barbara.
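
Estimates like these come from experience sampling: probe people at random moments, ask whether their thoughts were on task, and treat the fraction of “wandering” answers as a binomial proportion. A minimal sketch, with invented probe counts, shows how the estimate and its margin of error fall out of the ordinary binomial arithmetic:

    import math

    # Invented illustration: 300 random probes, 92 of which caught the
    # mind wandering ("task-unrelated thoughts").
    probes, wandering = 300, 92

    p_hat = wandering / probes                        # estimated rate
    se = math.sqrt(p_hat * (1 - p_hat) / probes)      # standard error
    low, high = p_hat - 1.96 * se, p_hat + 1.96 * se  # 95 percent interval

    print(f"wandering about {p_hat:.0%} of the time "
          f"(95% CI {low:.0%} to {high:.0%})")

More probes narrow the interval, which is why such studies rely on many interruptions per person.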

“People assume mind wandering is a bad thing, but if we couldn’t do it during a boring task, life would be horrible,” Dr. Smallwood says. “Imagine if you couldn’t escape mentally from a traffic jam.”

You’d be stuck contemplating the mass of idling cars, a mental exercise that is much less pleasant than dreaming about a beach and much less useful than mulling what to do once you get off the road. There’s an evolutionary advantage to the brain’s system of mind wandering, says Eric Klinger, a psychologist at the University of Minnesota and one of the pioneers of the field.

“While a person is occupied with one task, this system keeps the individual’s larger agenda fresher in mind,” Dr. Klinger writes in the “Handbook of Imagination and Mental Simulation.” “It thus serves as a kind of reminder mechanism, thereby increasing the likelihood that the other goal pursuits will remain intact and not get lost in the shuffle of pursuing many goals.”

Of course, it’s often hard to know which agenda is most evolutionarily adaptive at any moment. If, during a professor’s lecture, students start checking out peers of the opposite sex sitting nearby, are their brains missing out on vital knowledge or working on the more important agenda of finding a mate? Depends on the lecture.

But mind wandering clearly seems to be a dubious strategy if, for example, you’re tailgating a driver who suddenly brakes. Or, to cite activities that have actually been studied in the laboratory, when you’re sitting by yourself reading “War and Peace” or “Sense and Sensibility.”

If your mind is elsewhere while your eyes are scanning Tolstoy’s or Austen’s words, you’re wasting your own time. You’d be better off putting down the book and doing something more enjoyable or productive than “mindless reading,” as researchers call it.

Yet when people sit down in a laboratory with nothing on the agenda except to read a novel and report whenever their mind wanders, in the course of a half hour they typically report one to three episodes. And those are just the lapses they themselves notice, thanks to their wandering brains being in a state of “meta-awareness,” as it’s called by Dr. Schooler.

He and other researchers have also studied the many other occasions when readers aren’t aware of their own wandering minds, a condition known in the psychological literature as “zoning out.” (For once, a good bit of technical jargon.) When experimenters sporadically interrupted people reading to ask if their minds were on the text at that moment, about 10 percent of the time people replied that their thoughts were elsewhere — but they hadn’t been aware of the wandering until being asked about it.

“It’s daunting to think that we’re slipping in and out so frequently and we never notice that we were gone,” Dr. Schooler says. “We have this intuition that the one thing we should know is what’s going on in our minds: I think, therefore I am. It’s the last bastion of what we know, and yet we don’t even know that so well.”

The frequency of zoning out more than doubled in reading experiments involving smokers who craved a cigarette and in people who were given a vodka cocktail before taking on “War and Peace.” Besides increasing the amount of mind wandering, the people made alcohol less likely to notice when their minds wandered from Tolstoy’s text.

In another reading experiment, researchers mangled a series of consecutive sentences by switching the position of two nouns in each one — the way that “alcohol” and “people” were switched in the last sentence of the previous paragraph. In the laboratory experiment, even though the readers were told to look for sections of gibberish somewhere in the story, only half of them spotted it right away. The rest typically read right through the first mangled sentence and kept going through several more before noticing anything amiss.

To measure mind wandering more directly, Dr. Schooler and two psychologists at the University of Pittsburgh, Erik D. Reichle and Andrew Reineberg, used a machine that tracked the movements of people’s eyes while reading “Sense and Sensibility” on a computer screen. It’s probably just as well that Jane Austen is not around to see the experiment’s results, which are to appear in a forthcoming issue of Psychological Science.

By comparing the eye movements with the prose on the screen, the experimenters could tell if someone was slowing to understand complex phrases or simply scanning without comprehension. They found that when people’s minds wandered, the episode could last as long as two minutes.

Where exactly does the mind go during those moments? By observing people at rest during brain scans, neuroscientists have identified a “default network” that is active when people’s minds are especially free to wander. When people do take up a task, the brain’s executive network lights up to issue commands, and the default network is often suppressed.

But during some episodes of mind wandering, both networks are firing simultaneously, according to a study led by Kalina Christoff of the University of British Columbia. Why both networks are active is up for debate. One school theorizes that the executive network is working to control the stray thoughts and put the mind back on task.

Another school of psychologists, which includes the Santa Barbara researchers, theorizes that both networks are working on agendas beyond the immediate task. That theory could help explain why studies have found that people prone to mind wandering also score higher on tests of creativity, like the word-association puzzle mentioned earlier. Perhaps, by putting both of the brain networks to work simultaneously, these people are more likely to realize that the word that relates to eye, gown and basket is ball, as in eyeball, ball gown and basketball.

To encourage this creative process, Dr. Schooler says, it may help if you go jogging, take a walk, do some knitting or just sit around doodling, because relatively undemanding tasks seem to free your mind to wander productively. But you also want to be able to catch yourself at the Eureka moment.

“For creativity you need your mind to wander,” Dr. Schooler says, “but you also need to be able to notice that you’re mind wandering and catch the idea when you have it. If Archimedes had come up with a solution in the bathtub but didn’t notice he’d had the idea, what good would it have done him?”

John Tierney, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/06/29/science/29tier.html

When It Comes to Sex, Chimps Need Help, Too

The human ego has never been quite the same since the day in 1960 that Jane Goodall observed a chimpanzee feasting on termites near Lake Tanganyika. After carefully trimming a blade of grass, the chimpanzee poked it into a passage in the termite mound to extract his meal. No longer could humans claim to be the only tool-making species.

The deflating news was summarized by Ms. Goodall’s mentor, Louis Leakey: “Now we must redefine tool, redefine Man, or accept chimpanzees as human.”

So what have we actually done now that we’ve had a half-century to pout? In a 50th anniversary essay in the journal Science, the primatologist William C. McGrew begins by hailing the progression of chimpanzee studies from field notes to “theory-driven, hypothesis-testing ethnology.”

He tactfully waits until the third paragraph — journalists call this “burying the lead” — to deliver the most devastating blow yet to human self-esteem. After noting that chimpanzees’ “tool kits” are now known to include 20 items, Dr. McGrew casually mentions that they’re used for “various functions in daily life, including subsistence, sociality, sex, and self-maintenance.”

Sex? Chimpanzees have tools for sex? No way. If ever there was an intrinsically human behavior, it had to be the manufacture of sex toys.

Considering all that evolution had done to make sex second nature, or maybe first nature, I would have expected creatures without access to the Internet to leave well enough alone.

Only Homo sapiens seemed blessed with the idle prefrontal cortex and nimble prehensile thumbs necessary to invent erotic paraphernalia. Or perhaps Homo habilis, the famous Handy Man of two million years ago, if those ancestors got bored one day with their jobs in the rock-flaking industry:

“Flake, flake, flake.”

“There’s gotta be more to life.”

“Nobody ever died wishing he’d spent more time making sharp rocks.”

“What if you could make a tool for… something fun?”

I couldn’t imagine how chimps managed this evolutionary leap. But then, I couldn’t imagine what they were actually doing. Using blades of grass to tickle one another? Building heart-shaped beds of moss? Using stones for massages, or vines for bondage, or — well, I really had no idea, so I called Dr. McGrew, who is a professor at the University of Cambridge.

The tool for sex, he explained, is a leaf. Ideally a dead leaf, because that makes the most noise when the chimp clips it with his hand or his mouth.

“Males basically have to attract and maintain the attention of females,” Dr. McGrew said. “One way to do this is leaf clipping. It makes a rasping sound. Imagine tearing a piece of paper that’s brittle or dry. The sound is nothing spectacular, but it’s distinctive.”

O.K., a distinctive sound. Where does the sex come in?

“The male will pluck a leaf, or a set of leaves, and sit so the female can see him. He spreads his legs so the female sees the erection, and he tears the leaf bit by bit down the midvein of the leaf, dropping the pieces as he detaches them. Sometimes he’ll do half a dozen leaves until she notices.”

And then?

“Presumably she sees the erection and puts two and two together, and if she’s interested, she’ll typically approach and present her back side, and then they’ll mate.”

My first reaction, as a chauvinistic human, was to dismiss the technology as laughably primitive — too crude to even qualify as a proper sex tool. But Dr. McGrew said it met anthropologists’ definition of a tool: “He’s using a portable object to obtain a goal. In this case, the goal is not food but mating.”

Put that way, you might see this chimp as the equivalent of a human (wearing pants, one hopes) trying to attract women by driving around with a car thumping out 120-decibel music. But until researchers are able to find a woman who admits to being anything other than annoyed by guys in boom cars, these human tools must be considered evolutionary dead ends.

By contrast, the leaf-clipping chimps seem more advanced, practically debonair. But it would be fairer to compare the clipped leaf with the most popular human sex tool, which we can now identify thanks to the academic research described last year by my colleague Michael Winerip. The researchers found that the vibrator, considered taboo a few decades ago, had become one of the most common household appliances in the United States. Slightly more than half of all women, and almost half of men, reported having used one, and they weren’t giving each other platonic massages.

Leaf-clipping, meanwhile, has remained a local fetish among chimpanzees. The sexual strategy has been spotted at a colony in Tanzania but not in most other groups. There has been nothing comparable to the evolution observed in distributors of human sex tools: from XXX stores to chains of cutely named boutiques (Pleasure Chest, Good Vibrations) to mass merchants like CVS and Wal-Mart.

So let us, as Louis Leakey suggested, salvage some dignity by redefining humanity. We may not be the only tool-making species, but no one else possesses our genius for marketing. We reign supreme, indeed unrivaled, as the planet’s only tool-retailing species.

Now let’s see how long we hold on to that title.

John Tierney, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/05/04/science/04tier.html

Bring On the Fat, Bring On the Taste

Celebrity Chefs Join Burger Wars, Baste Beef Patties in Butter

Celebrity chefs have slaved in haute cuisine kitchens and mastered the world’s most complex dishes. Today, they’re dedicating their culinary brain power to another challenge: How to cash in on the burger craze.

Chefs such as French-trained Hubert Keller, all-American Bobby Flay and television star Emeril Lagasse are devoting their expertise to the once-humble hamburger. The rapidly growing pack of burger chefs is sparking fierce competition to expand, protect innovations and promote their recipes as the world’s best.

The deluxe $60 Rossini burger, with Kobe beef, sauteed foie gras, shaved truffles and Madeira sauce at Burger Bar in San Francisco.

Most of the chefs make a big deal about the kind of meat served at their restaurants. Mr. Lagasse blends ground chuck, short rib and brisket; others promote their Angus, Kobe or grass-fed beef. Some beef experts say the main secret behind tasty celebrity-chef burgers is simple: They pile on the fat, whether from beef patties with 30% fat content or from patties basted in butter. That alone may make their burgers delicious at a time when supermarket ground beef may contain as little as 8% fat.

“I crave cheeseburgers more than anything else,” says Bobby Flay, who has five Bobby’s Burger Palace locations in the Northeast and is planning five to seven more in the next 12 to 18 months. “We treat the food like a high-end restaurant,” using only fresh, unprocessed ingredients, Mr. Flay says.

__________

‘Two Kobe Beef Patties, Truffles…’

Laurent Tourondel

Burger Joint: LT Burger, Sag Harbor, N.Y.

Meat Theory: A grind of Certified Angus Beef short rib, brisket, chuck and sirloin

Priciest Burger: The Wagyu burger costs $16.

Cooking Tip: Smear the beef patty with softened butter, salt and pepper before cooking.

Richard Blais

Burger Joint: Flip Burger Boutique in Atlanta and Birmingham, Ala.

Priciest Burger: The A5 burger, made of Japanese Kobe beef, truffles and foie gras, costs $39.

Cooking Tip: Heat a cast-iron pan, add clarified butter, a garlic clove and sprigs of thyme, rosemary or sage. Add the patty and chunks of butter, and baste.

Hubert Keller

Burger Joint: Burger Bar in Las Vegas, St. Louis and San Francisco

Priciest Burger: The Rossini burger, made with black truffles, foie gras and Madeira sauce, costs $60.

Cooking Tip: Grind your own meat: Cut meat into cubes and chill in a bowl. Pulse quickly in a food processor, stopping when it is still coarse.

Bobby Flay

Burger Joint: Bobby’s Burger Palace, five locations in the Northeast

Meat Theory: Ground chuck and sirloin, 20% fat

Cooking Tip: Add a layer of potato chips between the meat and bun for extra crunch.

Marcus Samuelsson

Burger Joint: Marc Burger, Chicago and Costa Mesa, Calif.

Meat Theory: Chuck, 30% fat

Priciest Burger: Two Kobe sliders cost $8.95.

Cooking Tip: Don’t mix any salt into the meat — apply it right before cooking so the meat doesn’t ‘cure.’

Emeril Lagasse

Burger Joint: Burgers and More by Emeril, in Bethlehem, Pa.

Meat Theory: Burgers use different cuts or blends of meat.

Cooking Tip: Get a griddle very hot, about 375 degrees, and sear the patty, then reduce the heat to cook it through.

__________

The chefs are competing with several popular chains serving burgers that aren’t prepared by celebrities but are more upscale than fast food—such as restaurateur Danny Meyer’s Shake Shack, with units in New York City, Saratoga Springs, N.Y., and Miami, and Five Guys, with more than 600 units in 40 states.

Kevin Connaughton, a theatrical lighting designer, has laid out $12.60 for a customized burger at Mr. Keller’s Burger Bar in San Francisco three or four times since it opened last year. “Anywhere that doesn’t specialize in burgers, it’s hard to get it properly cooked,” he says. “It’s definitely a very good burger.”

Most of the celebrity burger joints sprinkle in some trappings of fine dining, while charging anywhere from a few dollars extra to twice as much as the average diner. Marc Burger makes its own spicy ketchup. Burger Bar serves a burger topped with foie gras and truffles for $60; the San Francisco location features a wine cellar.

Ambience varies. The chains from Mr. Flay, Mr. Samuelsson and Mr. Blais look like stylish diners, with hip touches like a loft ceiling or a wavy dining counter. Mr. Keller’s looks more like an old-fashioned bar and grill, with dark-wood paneling. Many are located inside malls, stores or casinos, where chefs can rely on high-volume foot traffic.

Few celebrity chefs spend their days flipping burgers or working the fry-o-later. Instead, they design the concept, conceive the recipes, train the staff and check in regularly to maintain quality. Mr. Blais and Mr. Flay have staffed the top positions of their burger restaurants with cooks from their fine-dining operations.

Mr. Flay says before opening his first Burger Palace, he identified a fault with the hamburger: It has little textural contrast. So Mr. Flay created a concept he calls “crunchify,” which means putting a layer of crispy potato chips between meat and bun. He trademarked the term, as well as “Crunchburger.”

Three weeks ago, Mr. Flay called the chief executive of Cheesecake Factory and asked him to remove a “Double Cheese Crunch Burger,” with a layer of potato chips, from its menu.

“I’m going to protect this with all my might, because it’s the signature of my restaurant,” Mr. Flay says. (Cheesecake Factory says it was unaware of Mr. Flay’s trademark and will change the menu in the next printing cycle.)

The most expensive celebrity burger is usually a “Kobe” burger. Most menus specify that the beef used comes from American Wagyu cattle, a breed famous for its highly marbled meat, meaning thin veins of fat run throughout the muscle, adding juiciness.

Beef experts are divided on the merit of Kobe burgers. Kobe beef contains fatty acids that give it a distinct taste and have a healthier profile than the fats in typical American beef, says Chris Kerth, professor of meat science at Texas A&M University. But the taste difference between ground Kobe and ground beef with an equally high fat content is so subtle that consumers probably can’t notice it, says Edgar Chambers IV, Kansas State University professor of food science.

Mr. Blais, who serves a Kobe burger, agrees that the unique marbling is lost in a hamburger but says Kobe beef is still a good choice for people who love a burger with abundant, tasty fat. His $39 Japanese Kobe burger is about 30% fat.

Chefs have their own special blends of beef cuts, such as short rib, sirloin or brisket.

“You’re creating a story and people love to hear stories,” says Mr. Keller, who uses ground chuck. Mr. Blais says his blend, which includes hanger steak, is the result of much research and study, including meals at rival Burger Bar, BLT Burger, Shake Shack and Five Guys.

“You get kind of tired of burgers after so much R & D,” Mr. Blais says. There’s minimal scientific research to guide chefs on the flavor differences among various meat cuts when ground.

“Grass-fed” beef shows up in celebrity burgers—and often costs a little extra. Grass-fed beef contains healthier fats than typical grain-fed beef and is trendy in food circles partly because of a reputation for being better for the environment (although that reputation is subject to scientific debate).

Mr. Tourondel says his grass-fed burger is a big hit but he personally doesn’t like it. “Too lean, too dry,” says the chef, who ordinarily smears softened butter onto his burger patties before cooking.

Katy McLaughlin, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704312504575618450888182376.html

Sweet Smell of Success

The title of Tilar J. Mazzeo’s “The Secret of Chanel No. 5” suggests that there is some hidden truth behind one of the most famous fragrances in the world. There may be, for all we know. There is certainly a lot of Chanel lore that is unfamiliar to most of us.

The perfume’s formula, for instance, was not entirely original. It was based on a fragrance made in 1914 to honor Russian royalty. More surprising is the fact that, for part of World War II, this paragon of French fragrance was produced in Hoboken, N.J. Most off-putting, though, is the news that the perfume’s creator—who would see Chanel No. 5 turned into a cultural totem in the U.S. by G.I.’s who brought it home from Paris as a fancy gift for their wives and girlfriends—spent the Occupation holed up at the Paris Ritz with a German officer as her lover.

Gabrielle “Coco” Chanel’s affinity for the Germans—she made two trips to Berlin during the war—did nothing to dent the perfume’s appeal. Since its launch in 1921, Chanel No. 5 has seemed almost invulnerable to any force that might damage its world-wide popularity. Though its cachet has gone up and down over the years, it remains hugely popular. Les Parfums Chanel doesn’t reveal sales figures, but Ms. Mazzeo says that a bottle sells somewhere in the world every 30 seconds, with annual revenue estimated at $100 million.

What accounts for the continuing fascination with a product that is nine decades old? Ms. Mazzeo explains with a combination of engaging historical detail and at times overwrought drama.

Orphaned at an early age, Coco Chanel grew up in the convent abbey called Aubazine in southwestern France. Ms. Mazzeo contends that the abbey’s scents, aesthetic minimalism and even its numerical patterns—the place teemed with five-pointed stars and pentagon shapes, the author says—made a lifelong impression on the young girl.

Chanel’s entry into business began in 1909 when she set up a millinery shop in Paris. The boutique was a success, prompting her to open a seaside store in Deauville, where she introduced a sportswear line in 1913 that bore the hallmarks of a style—simple and chic—that would turn the Chanel brand into an international fashion powerhouse.

A few years later, upset by a break-up with her wealthy British boyfriend, Arthur “Boy” Capel, and his subsequent death in an automobile accident, Chanel focused her energies on establishing a perfume line. Ms. Mazzeo says, in a typically feverish passage: “The perfume she would create had everything to do with the complicated story of her sensuality, with the heart-breaking loss of Boy in his car crash, and with everything that had come before. In crafting this scent, she would return to her emotional ground zero.”

Chanel was introduced to perfumer Ernest Beaux, who had worked in Moscow for the fragrance house A. Rallet & Co. While there, he had created a scent that was intended to celebrate Catherine the Great. But the timing, Ms. Mazzeo notes, was not right: “A perfume named after a German-born empress of Russia was doomed in 1914.”

Marilyn Monroe

Working with Chanel, Beaux used his Catherine the Great formula to capture the qualities that Chanel was looking for in her product: It would have to be seductive and expensive, she said, and “a modern work of art and an abstraction.” A perfume based on the scent of a particular flower—which at the time had the power to define its wearer as a respectable woman (rose) or a showgirl (jasmine)—would not do. “I want to give women an artificial perfume,” Chanel once said. “Yes, I do mean artificial, like a dress, something that has been made. I don’t want a rose or a lily of the valley, I want a perfume that is a composition.”

The composition she and Beaux arrived at had strong notes of rose and jasmine, balanced by what was, in the 1920s, a new fragrance technology: aldehydes. Ms. Mazzeo neatly explains that aldehydes are “molecules with a very particular kind of arrangement among their oxygen, hydrogen and carbon atoms, and they are a stage in the natural process that happens when exposure turns an alcohol to an acid.” Aldehydes provide a “clean” scent and intensify other fragrances.

Chanel’s perfume was not the first to use aldehydes, but it was the first to use them in large proportions. The innovation led to a new category of fragrance, the floral-aldehydics, which combine the scents of flowers and aldehydes.

The story of how Coco Chanel decided what to name the perfume has often been told: Beaux supposedly presented her with 10 vials of fragrance, and she chose the fifth one. But in Ms. Mazzeo’s telling, Chanel picked the fifth vial and called her perfume Chanel No. 5 because, if we believe the purple prose, the “special talisman” held all manner of significance for her. Even Boy Capel regarded five as his “magic number,” according to the author.

The fragrance was an immediate success. Changes to the formula “have been only minor and only when absolutely required,” writes Ms. Mazzeo, as when a type of chemically unstable musk used in the perfume was banned in the 1980s.

By then, Chanel No. 5 had long been unconnected to the Chanel business interests: Coco Chanel sold Les Parfums Chanel—an enterprise separate from her fashion house—in 1924 to French industrialists Paul and Pierre Wertheimer, who had a large perfume manufacturing and distribution operation. Chanel, who retained a 10% interest, was seeking a world-wide market for the perfume. That goal was attained, but Chanel came to bitterly regret the decision. At times over the following decades she tried and failed to win back the company, even resorting to disparaging the perfume. By the time of her death in 1971 at age 87 she had reached a settlement with the owners. Corporate squabbles aside, Chanel No. 5 has endured. Remarkably little about it has changed since 1921. And if its history tells us anything, little will.

Ms. Catton writes the Culture City column for the Journal’s Greater New York section.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704312504575618581112340888.html

Good Gracious

Surrounded by beautiful things from a tender age (her mother owned an antiques shop in her native New Orleans), interior designer Suzanne Rheinstein has made a career of showcasing them.

Interior designer Suzanne Rheinstein

Her Los Angeles store, Hollyhock—a destination for fine antiques and new decorative objects—and the homes she creates for clients both display her impeccable polish and a talent for mixing old and new.

On the release of her book “At Home: A Style for Today With Things From the Past,” we pick her estimable brain for ideas decorative and otherwise.

One of my decorating tricks is using fabrics on the wrong side. Certain materials seem too bright and strong on the right side but when I turn them over they’re often very subtle and interesting.

Why save your special things, like “good” china and “good” silver, for once a year? Joan Didion once said, “Every day is all there is.”

My favorite shopping street in the world is Magazine Street in New Orleans. There are lots of great antique shops and wonderful little children’s stores and when you finish there’s lunch at Lilette.

A room decorated by Ms. Rheinstein

I always have jugs of flowers or branches of berries or pretty leaves. The great thing about leaves is that you can just leave them and a month later they still look interesting.

I’m pro scented candles. This time of year I’m burning Michael Smith’s Angkor when I’m in New York and Diptyque’s Cannelle in L.A.

When I write letters I use engraved cards from The Printery in Oyster Bay.

My all-time favorite flowers are black dahlias and Rêve d’Or roses, which have kind of a pinky buff-y color.

A Rêve d’Or rose

In the winter I use black beeswax candles by Del Mar on the dinner table. Black candles were used in Regency times and I like that they don’t stick out like white ones.

The tackiest thing I love is Velveeta melted in a pan with Ro-Tel, which is diced tomatoes with hot chilis in it. I serve it in a chafing dish with Fritos.

The Printery cards

My favorite room in the world is Pauline de Rothschild’s bedroom in Paris, which features this fantastic juxtaposition of a very spare, contemporary metal canopy bed she designed with green 18th-century Chinese wallpaper.

Pauline de Rothschild’s bedroom

I’m not one for hiding the television set. I think it should be where you watch it. Most of ours are in bookshelves and surrounded by books. The one place I really don’t like it is above a mantelpiece because it ruins your enjoyment of both fire and TV.

Right now I’m reading “Artempo: Where Time Becomes Art,” by Axel Vervoordt. The man has the most amazing sense of art and style. I believe his influence will endure long after the flood of China-made Belgian-esque furniture has ruined that look for many people.

To keep people from using their phones at dinner I think there should be a sort of seventh-inning stretch where everyone has 10 minutes to use their gadgets. Really, the rules of common courtesy should prevail, but these days courtesy is uncommon.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703514904575602642316191692.html

The Meat Eaters

Viewed from a distance, the natural world often presents a vista of sublime, majestic placidity. Yet beneath the foliage and hidden from the distant eye, a vast, unceasing slaughter rages. Wherever there is animal life, predators are stalking, chasing, capturing, killing, and devouring their prey. Agonized suffering and violent death are ubiquitous and continuous. This hidden carnage provided one ground for the philosophical pessimism of Schopenhauer, who contended that “one simple test of the claim that the pleasure in the world outweighs the pain…is to compare the feelings of an animal that is devouring another with those of the animal being devoured.”

The continuous, incalculable suffering of animals is also an important though largely neglected element in the traditional theological “problem of evil” — the problem of reconciling the existence of evil with the existence of a benevolent, omnipotent god. The suffering of animals is particularly challenging because it is not amenable to the familiar palliative explanations of human suffering. Animals are assumed not to have free will and thus to be unable either to choose evil or deserve to suffer it. Neither are they assumed to have immortal souls; hence there can be no expectation that they will be compensated for their suffering in a celestial afterlife. Nor do they appear to be conspicuously elevated or ennobled by the final suffering they endure in a predator’s jaws. Theologians have had enough trouble explaining to their human flocks why a loving god permits them to suffer; but their labors will not be over even if they are finally able to justify the ways of God to man. For God must answer to animals as well.

If I had been in a position to design and create a world, I would have tried to arrange for all conscious individuals to be able to survive without tormenting and killing other conscious individuals.  I hope most other people would have done the same.  Certainly this and related ideas have been entertained since human beings began to reflect on the fearful nature of their world — for example, when the prophet Isaiah, writing in the 8th century B.C.E., sketched a few of the elements of his utopian vision.  He began with people’s abandonment of war: “They shall beat their swords into plowshares, and their spears into pruning hooks: nation shall not lift up sword against nation.”  But human beings would not be the only ones to change; animals would join us in universal veganism: “The wolf also shall dwell with the lamb, and the leopard shall lie down with the kid; and the calf and the young lion and the fatling together; and the little child shall lead them.  And the cow and the bear shall feed; their young ones shall lie down together; and the lion shall eat straw like the ox.” (Isaiah 2: 4 and 11: 6-7)

Isaiah was, of course, looking to the future rather than indulging in whimsical fantasies of doing a better job of Creation, and we should do the same.  We should start by withdrawing our own participation in the mass orgy of preying and feeding upon the weak.

Our own form of predation is of course more refined than that of other meat-eaters, who must capture their prey and tear it apart as it struggles to escape.  We instead employ professionals to breed our prey in captivity and prepare their bodies for us behind a veil of propriety, so that our sensibilities are spared the recognition that we too are predators, red in tooth if not in claw (though some of us, for reasons I have never understood, do go to the trouble of painting our vestigial claws a sanguinary hue).  The reality behind the veil is, however, far worse than that in the natural world.  Our factory farms, which supply most of the meat and eggs consumed in developed societies, inflict a lifetime of misery and torment on our prey, in contrast to the relatively brief agonies endured by the victims of predators in the wild.  From the moral perspective, there is nothing that can plausibly be said in defense of this practice.  To be entitled to regard ourselves as civilized, we must, like Isaiah’s morally reformed lion, eat straw like the ox, or at least the moral equivalent of straw.

But ought we to go further?  Suppose that we could arrange the gradual extinction of carnivorous species, replacing them with new herbivorous ones.  Or suppose that we could intervene genetically, so that currently carnivorous species would gradually evolve into herbivorous ones, thereby fulfilling Isaiah’s prophecy.  If we could bring about the end of predation by one or the other of these means at little cost to ourselves, ought we to do it?

I concede, of course, that it would be unwise to attempt any such change given the current state of our scientific understanding.  Our ignorance of the potential ramifications of our interventions in the natural world remains profound.  Efforts to eliminate certain species and create new ones would have many unforeseeable and potentially catastrophic effects.

Perhaps one of the more benign scenarios is that action to reduce predation would create a Malthusian dystopia in the animal world, with higher birth rates among herbivores, overcrowding, and insufficient resources to sustain the larger populations.  Instead of being killed quickly by predators, the members of species that once were prey would die slowly, painfully, and in greater numbers from starvation and disease.

Yet our relentless efforts to increase individual wealth and power are already causing massive, precipitate changes in the natural world.  Many thousands of animal species either have been or are being driven to extinction as a side effect of our activities.  Knowing this, we have thus far been largely unwilling even to moderate our rapacity to mitigate these effects.  If, however, we were to become more amenable to exercising restraint, it is conceivable that we could do so in a selective manner, favoring the survival of some species over others.  The question might then arise whether to modify our activities in ways that would favor the survival of herbivorous rather than carnivorous species.

At a minimum, we ought to be clear in advance about the values that should guide such choices if they ever arise, or if our scientific knowledge ever advances to a point at which we could seek to eliminate, alter, or replace certain species with a high degree of confidence in our predictions about the short- and long-term effects of our action.  Rather than continuing to collide with the natural world with reckless indifference, we should prepare ourselves now to be able to act wisely and deliberately when the range of our choices eventually expands. 

The suggestion that we consider whether and how we might exercise control over the prospects of different animal species, perhaps eventually selecting some for extinction and others for survival in accordance with our moral values, will undoubtedly strike most people as an instance of potentially tragic hubris, presumptuousness on a cosmic scale.  The accusation most likely to be heard is that we would be “playing God,” impiously usurping prerogatives that belong to the deity alone.  This has been a familiar refrain in the many instances in which devotees of one religion or another have sought to obstruct attempts to mitigate human suffering by, for example, introducing new medicines or medical practices, permitting and even facilitating suicide, legalizing a constrained practice of euthanasia, and so on.  So it would be surprising if this same claim were not brought into service in opposition to the reduction of suffering among animals as well.  Yet there are at least two good replies to it.

One is that it singles out deliberate, morally motivated action for special condemnation, while implicitly sanctioning morally neutral action that foreseeably has the same effects as long as those effects are not intended.  One plays God, for example, if one administers a lethal injection to a patient at her own request in order to end her agony, but not if one gives her a largely ineffective analgesic only to mitigate the agony, though knowing that it will kill her as a side effect.  But it is hard to believe that any self-respecting deity would be impressed by the distinction.  If the first act encroaches on divine prerogatives, the second does as well.

The second response to the accusation of playing God is simple and decisive.  It is that there is no deity whose prerogatives we might usurp.  To the extent that these matters are up to anyone, they are up to us alone.  Since it is too late to prevent human action from affecting the prospects for survival of many animal species, we ought to guide and control the effects of our action to the greatest extent we can in order to bring about the morally best, or least bad, outcomes that remain possible.

Another equally unpersuasive objection to the suggestion that we ought to eliminate carnivorism if we could do so without major ecological disruption is that this would be “against Nature.”  This slogan also has a long history of deployment in crusades to ensure that human cultures remain primitive.  And like the appeal to the sovereignty of a deity, it too presupposes an indefensible metaphysics.  Nature is not a purposive agent, much less a wise one.  There is no reason to suppose that a species has special sanctity simply because it arose in the natural process of evolution.

Many people believe that what happens among animals in the wild is not our responsibility, and indeed that what they do among themselves is none of our business.   They have their own forms of life, quite different from our own, and we have no right to intrude upon them or to impose our anthropocentric values on them. 

There is an element of truth in this view, which is that our moral reason to prevent harm for which we would not be responsible is weaker than our reason not to cause harm.  Our primary duty with respect to animals is therefore to stop tormenting and killing them as a means of satisfying our desire to taste certain flavors or to decorate our bodies in certain ways.  But if suffering is bad for animals when we cause it, it is also bad for them when other animals cause it.  That suffering is bad for those who experience it is not a human prejudice; nor is an effort to prevent wild animals from suffering a moralistic attempt to police the behavior of other animals.  Even if we are not morally required to prevent suffering among animals in the wild for which we are not responsible, we do have a moral reason to prevent it, just as we have a general moral reason to prevent suffering among human beings that is independent both of the cause of the suffering and of our relation to the victims.  The main constraint on the permissibility of acting on our reason to prevent suffering is that our action should not cause bad effects that would be worse than those we could prevent.

That is the central issue raised by the question of whether we ought to try to eliminate carnivorism.  Because the elimination of carnivorism would require the extinction of carnivorous species, or at least their radical genetic alteration, which might be tantamount to extinction, it might well be that the losses in value would outweigh any putative gains.  Not only are most or all animal species of some instrumental value, but it is also arguable that all species have intrinsic value.  As Ronald Dworkin has observed, “we tend to treat distinct animal species (though not individual animals) as sacred.  We think it very important, and worth a considerable economic expense, to protect endangered species from destruction.”  When Dworkin says that animal species are sacred, he means that their existence is good in a way that need not be good for anyone; nor is it good in the sense that it would be better if there were more species, so that we would have reason to create new ones if we could.  “Few people,” he notes, “believe the world would be worse if there had always been fewer species of birds, and few would think it important to engineer new bird species if that were possible.  What we believe important is not that there be any particular number of species but that a species that now exists not be extinguished by us.”

The intrinsic value of individual species is thus quite distinct from the value of species diversity.  It also seems to follow from Dworkin’s claims that the loss involved in the extinction of an existing species cannot be compensated for, either fully or perhaps even partially, by the coming-into-existence of a new species.

The basic issue, then, seems to be a conflict between values: prevention of suffering and preservation of animal species.  It is relatively uncontroversial that suffering is intrinsically bad for those who experience it, even if occasionally it is also instrumentally good for them, as when it has the purifying, redemptive effects that Dostoyevsky’s characters so often crave.  Nor is it controversial that the extinction of an animal species is normally instrumentally bad.  It is bad for the individual members who die and bad for other individuals and species that depended on the existence of the species for their own well-being or survival.  Yet the extinction of an animal species is not necessarily bad for its individual members.  (To indulge in science fiction, suppose that a chemical might be introduced into their food supply that would induce sterility but also extend their longevity.)  And the extinction of a carnivorous species could be instrumentally good for all those animals that would otherwise have been its prey.  That simple fact is precisely what prompts the question whether it would be good if carnivorous species were to become extinct.

The conflict, therefore, must be between preventing suffering and respecting the alleged sacredness — or, as I would phrase it, the impersonal value — of carnivorous species.  Again, the claim that suffering is bad for those who experience it and thus ought in general to be prevented when possible cannot be seriously doubted.  Yet the idea that individual animal species have value in themselves is less obvious.  What, after all, are species?  According to Darwin, they “are merely artificial combinations made for convenience.”  They are collections of individuals distinguished by biologists that shade into one another over time and sometimes blur together even among contemporaneous individuals, as in the case of ring species.  There are no universally agreed criteria for their individuation.  In practice, the most commonly invoked criterion is the capacity for interbreeding, yet this is well known to be imperfect and to entail intransitivities of classification when applied to ring species.  Nor has it ever been satisfactorily explained why a special sort of value should inhere in a collection of individuals simply by virtue of their ability to produce fertile offspring.  If it is good, as I think it is, that animal life should continue, then it is instrumentally good that some animals can breed with one another.  But I can see no reason to suppose that donkeys, as a group, have a special impersonal value that mules lack.

Even if animal species did have impersonal value, it would not follow that they were irreplaceable.  Since animals first appeared on earth, an indefinite number of species have become extinct while an indefinite number of new species have arisen.  If the appearance of new species cannot make up for the extinction of others, and if the earth could not simultaneously sustain all the species that have ever existed, it seems that it would have been better if the earliest species had never become extinct, with the consequence that the later ones would never have existed.  But few of us, with our high regard for our own species, are likely to embrace that implication.

Here, then, is where matters stand thus far.  It would be good to prevent the vast suffering and countless violent deaths caused by predation.  There is therefore one reason to think that it would be instrumentally good if  predatory animal species were to become extinct and be replaced by new herbivorous species, provided that this could occur without ecological upheaval involving more harm than would be prevented by the end of predation.  The claim that existing animal species are sacred or irreplaceable is subverted by the moral irrelevance of the criteria for individuating animal species.  I am therefore inclined to embrace the heretical conclusion that we have reason to desire the extinction of all carnivorous species, and I await the usual fate of heretics when this article is opened to comment.

Jeff McMahan is professor of philosophy at Rutgers University and a visiting research collaborator at the Center for Human Values at Princeton University. He is the author of many works on ethics and political philosophy, including “The Ethics of Killing: Problems at the Margins of Life” and “Killing in War.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/

Speech and Harm

As every public figure knows, there are certain words that cannot be uttered without causing shock or offense. These words, commonly known as “slurs,” target groups on the basis of race, nationality, religion, gender, sexual orientation, immigration status and sundry other demographics.  Many of us were reminded of the impact of such speech in August, when the radio host Dr. Laura Schlessinger repeatedly uttered a racial slur on a broadcast of her show. A public outcry followed and ultimately led to her resignation. Many such incidents of abuse and offense, often with much more serious consequences, appear in the news almost daily.

We may at times convince ourselves, as Dr. Laura may have, that there are  inoffensive ways to use slurs.  But a closer look at the matter shows us that those ways are very rare. Slurs are in fact uniquely and stubbornly resistant to attempts to neutralize their power to hurt or offend. 

To be safe, we may ask ourselves how a targeted member, perhaps overhearing a slur,  would react to it. Doing so, we will almost always find that what may have seemed suitable most definitely is not.

But why are slurs so offensive? And why are some more offensive than others?  Even different slurs for the same group vary in intensity of contempt. How can words fluctuate both in their status as slurs and in their power to offend? Members of targeted groups themselves are not always offended by slurs — consider the uses of appropriated or reclaimed slurs among African-Americans and gay people.

The consensus answer among philosophers to the first question is that slurs, as a matter of convention, signal negative attitudes towards targeted groups. Those who pursue this answer are committed to the view that slurs carry offensive content or meaning; they disagree only over the mechanisms of implementation.  An alternative proposal is that slurs are prohibited words not on account of any particular content they get across, but rather because of relevant edicts surrounding their prohibition. This latter proposal itself raises a few pertinent questions: How do words become prohibited? What’s the relationship between prohibition and a word’s power to offend? And why is it sometimes appropriate to flout such prohibitions? 

Let’s start with conventional meaning.

Does a slur associated with a racial or ethnic group mean something different from the neutral conventional name for the group, for example, African-American or Hispanic?  The Oxford English Dictionary says a slur is a “deliberate slight; an expression or suggestion of disparagement or reproof.” But this definition fails to distinguish specific slurs from one another, or even distinct slurs for the same group. Still, from this definition we may infer that slurs supplement the meanings of their neutral counterparts with something offensive about whomever they reference. This information, however meager, suffices to isolate a flaw in trying to pin the offensiveness of a slur on its predicative meaning.

Anyone who wants to disagree with what “Mary is Hispanic” ascribes to Mary can do so with a denial (“Mary is not Hispanic.”).  If the use of a slur were offensive on account of what it predicates of its subject, we should be able to reject its offense simply by denying it. But replacing “Hispanic” with a slur on a Hispanic person does not work — it is no less inflammatory in the denial than the original is.  Therefore, however slurs offend, it is not through what they predicate of their subjects.

Another fascinating aspect of slurs that challenges the view that their meaning renders them offensive pertains to their effect in indirect speech.  Normally, an utterance can be correctly reported by re-using the very expressions being reported on, as in a quote in a book or a newspaper. What better insurance for accuracy can there be in reporting another than to re-use her words? Yet any such report not only fails to capture the original offense, but interestingly, it guarantees a second offense by whoever is doing the reporting.  What’s gone wrong? We expect indirect reports to be of others, not of ourselves. This limit on reporting slurs is significant. Is the offense of another’s slurring inescapable?  Is it possible that we can recognize the offense, but not re-express it?  How odd.

Is there someplace else to look for an account of why slurs are offensive? Could it be a matter of tone? Unlike conventionalized content, tone is supposed to be subjective.  Words can be different in tone but share content.  Might tone distinguish slurs from neutral counterparts?  No one can deny that the use of a slur can arouse subjective images and feelings in us that a use of its neutral counterpart does  not, but as an account of the difference in offensive punch it can’t be the whole story.

Consider a xenophobe who uses only slurs for picking out a target group. He may harbor no negative opinions towards its members; he may use slurs only among like-minded friends when intending to express affection for Hispanics or admiration for Asians, but these uses remain pertinently offensive. The difference between a slur and its neutral counterpart cannot be a matter of subjective feel.

A major problem with any account that tries to explain the offensive nature of a slur by invoking content is how it can explain the general exhortation against even mentioning slurs. A quoted occurrence of a slur can easily cause alarm and offense. Witness the widespread preference in some media for using phrases that describe slurs rather than using or mentioning them. This is surprising since quotation is usually just about the form or shape of a word. You can see this in a statement like “‘Love’ is a four-letter word.” This suggests that it is something about the form or shape of other four-letter words that makes them unprintable.

Another challenge to the content view is raised by the offensive potential of incidental uses of slurs, as witnessed by the Washington, D.C., official who wound up resigning his job over the outcry that his use of the word “niggardly” provoked.  In 1999, the head of the Office of Public Advocate in Washington used it in a discussion with a black colleague. He was reported as saying, “I will have to be niggardly with this fund because it’s not going to be a lot of money.”  Despite a similarity in spelling, his word has no semantic or etymological tie to the slur it may invoke; mere phonetic and orthographic overlap caused as much of a stir as standard offensive language.  This is not an accidental use of an ambiguous or unknown slur, but an incidental one. Or take the practice at many newspapers (in case you haven’t noticed my own contortions in presenting these materials) whereby slurs cannot even be canonically described, as in “the offensive word that begins with a certain letter….”

What conclusions should we draw from these constraints? One suggestion is that uses of slurs (and their canonical descriptions) are offensive simply because they sometimes constitute violations of their very prohibition.  Just as whoever violates a prohibition risks offending those who respect it, perhaps the fact that slurs are prohibited explains why we cannot escape the affect, hatred and negative association tied to them, and why their occurrences in news outlets and even within quotation marks can still inflict pain. Prohibited words are usually banished wherever they occur. This explains why bystanders (even when silent) are uncomfortable, often embarrassed, when confronted by a slur. Whatever offenses these confrontations exact, the audience risks complicity, as if the offense were thrust upon them, not because of its content, but because of a responsibility we all incur in ensuring certain violations are prevented; when they are not, they must be reported and possibly punished. Their occurrences taint us all.

In short, Lenny Bruce got it right when he declared “the suppression of the word gives it the power, the violence, the viciousness.”  It is impossible to reform a slur until it has been removed from common use.

Words become prohibited for all sorts of reasons — by a directive or edict of an authoritative figure, or because of a tainted history of associations, perhaps through conjuring up past pernicious or injurious events. The history of its uses, combined with reasons of self-determination, is exactly how “colored,” once used by African-Americans self-referentially, became prohibited, and so, offensive.  A slur may become prohibited because of who introduces or uses it.  This is the sentiment of a high school student who objected to W.E.B. Du Bois’s use of “Negro” because it “is a white man’s word.”

What’s clear is that no matter what its history, no matter what it means or communicates, no matter who introduces it, regardless of past associations, once relevant individuals with sufficient authority declare a word a slur, it is one.  The condition under which this occurs is not easy to predict in advance. When the Rev. Jesse Jackson proclaimed at the 1988 Democratic National Convention that from then on “black” should not be used, his effort failed. Many African-Americans carried positive associations with the term (“Black Panthers,”  “Black Power,” “I’m black and I’m proud.”) and so Jackson’s attempt at prohibition did not stick.

In appropriation, targeted members can opt to use a slur without violating its prohibition because membership provides a defeasible escape clause; most prohibitions include such clauses.  Oil embargoes permit exportation, just not importation.  Sanctions invariably exclude medical supplies. Why shouldn’t prohibitions against slurs and their descriptions exempt certain individuals under certain conditions for appropriating a banished word?  Targeted groups can sometimes inoffensively use slurs among themselves.  The NAACP, for example, continues to use “Colored” relatively prominently (on their letterhead, on their banners, etc.).

Once appropriation is sufficiently widespread, it might come to pass that the prohibition eases, permitting — under regulated circumstances — designated outside members access to an appropriated use. (For example, I have much more freedom in discussing the linguistics of slurs inside scholarly journals than I do here.) Should this practice become sufficiently widespread, the slur might lose its intensity.  How escape clauses are fashioned and what sustains them is a complex matter — one I cannot take up here.

Ernie Lepore, a professor of philosophy and co-director of the Center for Cognitive Science at Rutgers University, writes on language and mind. More of his work, including the study, “Slurring Words,” with Luvell Anderson, can be found here.

__________

Full article: http://opinionator.blogs.nytimes.com/2010/11/07/speech-and-harm/

The Cheat: The Greens Party

There was a party of dudes from Montana sitting at a table in Vij’s in Vancouver, British Columbia, getting ready to graze. They were businessmen, in the city for a conference, and the hotel had sent them out to South Granville Street to wait for a table, for Vij’s takes no reservations and never has.

The men sat in the bar and had a few beers, and after a table opened they took it and looked at the menu and ordered, each of them asking for a variation on the same theme: some mutton kebabs to start, the beef tenderloin after; the mutton kebabs to start, the lamb Popsicles after.

Their waitress was Meeru Dhalwala, who is also the chef at Vij’s and, with her husband, Vikram Vij, an owner of the restaurant. At that point she had spent more than a decade running the kitchen at Vij’s — she arrived in 1995 — but she had never worked in the dining room, interacting with customers, dealing with American men ordering meat.

Dhalwala took the orders and paused, then asked the men if they wanted any vegetables. They said no, almost instantaneously. “But you’ve ordered meat on meat,” she said. There was a collective shrug. They were from Montana.

Dhalwala stared at them. She was in a kind of shock. Vij’s is at once an excellent restaurant and a curious one, Indian without being doctrinaire about it, utopian without being political. No men work in the kitchen at Vij’s: 54 women, with no turnover save during maternity leaves. No one, Dhalwala has said, has ever been fired. She feels a deep and important connection to both the food she makes and the business that she and her husband run. These boys from Montana were freaking her out.

From an e-mail she sent me: “I told them that as the creator of the food they were about to eat, I could not in good faith give them what they wanted and that they had to order some fiber and vitamins for their dinner. They were as dumbfounded as I was. ‘But we don’t like vegetables,’ they said. So I made a deal with them that I would choose a large plate of a side vegetable, and if they didn’t like it, I would pay for their dinner.”

The coconut kale we are cooking this weekend was that dish, and the Montana men paid for it happily. They complimented Dhalwala on their way out the door: Good vegetables! The leaves are rich and fiery, sweet and salty all at once. Important to the Montanans: they taste as if cut through with blood and fat, as if they were steak and fries combined. The grilling softens the texture of the kale without overcooking it or removing its essential structure — or the mild bitterness of the leaves — while the marinade of coconut milk, cayenne, salt and lemon juice balances out the flavors, caramelizing in the heat.

Made over a charcoal fire or even in a wickedly hot pan, it becomes a dish of uncommon flavor, the sort of thing you could eat on its own, with only a mound of basmati rice for contrast.

But you know, why would you? Here in America, after all, we will always be from Montana somehow.

At Vij’s, Dhalwala bathes grilled lamb chops — little meat Popsicles taken off the rack — in a rich, creamy fenugreek curry and serves them with turmeric-hued potatoes. This is a very, very good dish. But we are cheating here, Sunday cooks on the run. We are grilling kale, and perhaps for the first time. So let us keep things simple. To highlight the flavor of the greens, we will embrace austerity for our lamb, grilling it off under nothing but a garlic rub and showers of salt and freshly ground pepper. (You can sear the meat on the stovetop as well, in a cast-iron pan, and finish it in a hot oven.)

Then, bouncing back toward complicated flavors once more, we have a simple chickpea curry that Dhalwala cooks with star anise and chopped dates, which combine into an autumnal darkness that lingers on the tongue. Save for the business of messing around with black cardamom (and finding the black cardamom — you can always head to penzeys.com, or Amazon), it takes only a matter of moments to assemble the ingredients and cook, allowing some time at the end of the process for the flavors to meld together in the pot.

Back to Dhalwala again. She came to professional cooking late, after a career in nongovernmental organizations, the dance and heartache of third-world development. To her, cooking is a spiritual act, a simple one. You can taste this in every dish she serves. “There is an inner soul of cooking that is for nurturing and community,” she wrote to me, “and all my recipes stem from this place.”

So if you don’t like the lamb chops, that is on me.

Sam Sifton, New York Times

__________

Full article and photos: http://www.nytimes.com/2010/11/07/magazine/07food-t-000.html

Desire in the Twilight of Life

Despite the stereotypes and bad jokes, intimacy is alive and well in our aging population. And it’s time to get comfortable with it.

Elinor Carucci, Grandparents’ Kiss, 1998. From the book Closer, Chronicle Books, 2002.

A colleague of mine, a geriatric social worker, likes to tell a story about the time one of his clients, an 86-year-old woman, failed to answer the phone for their daily chat. Worried about her safety, he ran to her building and asked the superintendent to let him into the apartment. Opening the door, he found a trail of lit candles and burning incense that led to the master bathroom. His client, it turned out, had a visitor.

The intimate lives of older people make us squeamish and anxious, especially in a culture so focused on beautiful young bodies primed for physical pleasure. We prefer to think that older people are asexual, resigned to a certain loss of desire and vitality. Nor do our traditions provide much guidance. In the Bible, Sarah and Abraham were age 90 and 100, respectively, when God told them they would have a son—and Sarah, famously, laughed at the news. In the opening scene of Plato’s “Republic,” an old man approvingly cites the poet Sophocles, who declared his relief, late in life, at being free of the “frenzied and savage master” of sex.

At a time when almost every kind of physical intimacy is discussed with increasing candor, the erotic feelings of empty nesters, retirees and the residents of assisted-living centers remain a taboo subject, except in tiresome jokes about Viagra. But there is nothing unusual or deviant about romance among older people. If we have learned nothing else from the past half-century of personal freedom and experimentation, it is that we are profoundly sexual beings. How we understand ourselves over the course of a lifetime is closely tied to our bodies and how we share our bodies with others, even when we’re done reproducing. The details may change with age, but our basic physical and psychological needs do not.

A growing body of research on aging suggests that many older Americans have satisfying sexual relationships well into their later years. The largest systematic study, published in the New England Journal of Medicine in 2007, involved just over 3,000 subjects. It found that, although sexual activity does decline with age, about half of individuals between the ages of 65 and 74 remain active, as do 26% of those between 75 and 85. Among those in this second group, 54% said that they had sex at least two or three times a month, and 23% reported relations with a partner at least once a week.

Keep in mind, too, that this is no small population. According to estimates by the U.S. Census Bureau, there were some 34 million Americans in these two groups (age 65 to 84) in 2010. Twenty years from now, as the ranks of older Americans are swelled by aging baby boomers, that number is expected to grow to approximately 62 million. And as we know, this will be a generation that has grown up more preoccupied with sex than perhaps any generation before it.

The New England Journal of Medicine study found that poor health was not the main reason that older people abstain from physical intimacy. Loss of desire is also not nearly as common as our popular culture would have us believe. Ranking high among the impediments, especially for women, is the absence of a willing and/or able partner. In fact, most of the women who cited health problems as the reason for their sexual inactivity were referring not to their own problems but to those of their spouses.

Elinor Carucci, Mom touches father, 2000. From the book Closer, Chronicle Books, 2002.

Medical problems that interfere with sex, like mobility-limiting arthritis, do become more common over the years. But other problems, such as premature ejaculation in men and dyspareunia (pain during intercourse) in women, actually become less common with advancing age. Despite the picture painted by television commercials for drugs that deal with erectile dysfunction, the majority of men, even in the oldest age groups, reported having little difficulty in that department.

The deeper problem with the advertisements for Viagra, Cialis, and their competitors is the image that they create of healthy intimacy for older people. The marketing depicts a utopian version of sex, in which the best and only valid sort of activity involves a fit, extremely attractive couple who segue seamlessly from doing the dishes to their bedroom.

These idealized, and mostly unobtainable, scenarios ignore much of what we have learned about the reality of romantic activity among older partners. Many professionals who deal with these issues report high degrees of satisfaction among the people in their care, even when the relationships involve alternative approaches to intimacy.

Sometimes these more limited activities are dictated by medical conditions, like hip arthritis that makes traditional intercourse impractical. Other times, it is simply a matter of the couple’s preferences.

And who’s to say what’s “normal” for older people if it brings satisfaction to them? Here it is useful to make a comparison to another area of health. As many of us have discovered for ourselves, farsightedness is nearly universal after the age of 40. Yet no one would call those of us with glasses or contact lenses “abnormal.” These are merely age-appropriate adjustments that make it possible for us to see.

Why should sexuality be viewed any differently as we get older? We simply need to adapt our attitudes and techniques. It is a disservice to older people, as well as to those of us approaching those years, to view any form of safe sexual expression that persists into later life as anything but healthy.

Unfortunately, the unwitting conspirators in creating the taboos that surround these issues are often doctors. In the New England Journal of Medicine study, only 38% of men and 22% of women reported having discussed sex with a physician since turning age 50. Fifty? In the geriatric facilities where I work, 50 is the equivalent of the newborn nursery.

These findings are jarring not only because of the quality-of-life issues that are going unaddressed, but also because they have real health consequences. A modest proportion of new HIV infections are now occurring in people over the age of 50. In one study of single, sexually active women over the age of 50, less than half reported that their partners used condoms.

Changing our approach to the romantic lives of older Americans will not be easy. It presents a variety of new challenges, especially for professionals in my field. My own introduction to the barriers we face came many years ago, when two of my older widowed patients decided to get married. They had met at a nursing home where they were both patients, and neither had any sort of dementia that would interfere with their capacity to consent. After the wedding, they simply became roommates. Their sexual relationship was their own business.

But what about residents of nursing homes or assisted-living centers who are not married, not roommates, or have a compromised ability to consent? This poses such thorny issues that most facilities have simply discouraged residents from pursuing intimate relationships.

Fortunately, a number of facilities have started to recognize that a nursing home is a home first and a health-care facility second. The Hebrew Home for the Aged in Riverdale, N.Y., for example, actually promotes healthy romantic relationships among residents. The home’s policies specifically note that “residents have the right to seek out and engage in sexual expression, including words, gestures, movements or activities that appear motivated by the desire for sexual gratification.” Staff members are also taught to recognize when cognitive impairment might preclude such relationships and how to intervene with couplings that are not consensual.

These issues have assumed special urgency over the past several decades. A mountain of recent research has shown that not only is life expectancy rising in the U.S., but more people are spending the last years of their lives without the burden of immobilizing disabilities. These encouraging trends will continue for the foreseeable future, but we have yet to think seriously about what our extended lives mean for our personal identities, our families and our society. We have not gotten past the idea that our final decades are a problem to endure, rather than an opportunity for new experiences and personal bonds.

I’m not a Pollyanna about aging, and I’m well acquainted with the details of our physiological twilight. But we need to recognize the profound benefits of growing older in 21st-century America. Spouses in marriages that endure into late life report some of the highest levels of marital satisfaction and the lowest rates of divorce. As we age, many of the constraints that made us anxious and unhappy when we were younger—juggling work and family responsibilities, dealing with difficult bosses and colleagues, fretting about career success—slip away, and we enjoy a newfound freedom. We are liberated to cultivate ourselves and the relationships that matter most to us. And if we find ourselves alone, we can form new bonds and find new loves.

There are many erroneous stereotypes about getting older. As we age, we tend to get treated like a number rather than like individuals with a wide range of preferences and abilities. Among the worst of these tendencies is the assumption that after the age of, say, 60, we should simply forget about physical intimacy.

With luck (and with the help of our ever more sophisticated medical technologies), those of us now in the middle years of life will be the “dirty” old men and ladies of tomorrow. In the meantime, we must struggle to overcome the habits and taboos that now interfere with the happiness of so many older people. There is nothing “dirty” about the sexual feelings that attend our lives from adolescence on. The great irony of ageism—and what sets it apart from other forms of prejudice—is that you eventually become the target of your own bigotry. We must begin to approach aging with the honesty and wonder that it deserves.

Mark Lachs is director of geriatrics for the New York-Presbyterian Healthcare System and a professor of clinical medicine at the Weill Cornell Medical College.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703848204575608663772413510.html

She Talks a Lot, He Listens a Little

Marianne Ham was so excited when she recently visited Machu Picchu, the ancient Incan ruins in Peru, that she pulled out her cellphone on the top of the mountain and called her husband back home in Huntingdon Valley, Pa. She wanted to share the experience with him and described the stunning views.

When she was done talking, her husband, Bill, had one thing to say: “Do you have the number for the electrician?”

“It was so typical,” says Ms. Ham, a 72-year-old retired advertising media manager who has been married 35 years. “I was saying one thing and his mind was somewhere else.”

There really are just two kinds of people in the world: talkers and non-talkers. (You know which one you are.) Logically, all would be perfect if the talkers married only the non-talkers. But that’s not always the case, since many non-talkers are also non-listeners—they simply tune out the chatter. And frustration over communication styles can be heightened in this age of technology. Now that we have more ways than ever before to communicate, some people have never felt less heard.

I have a good friend who says I talk too much. He’s probably not wrong—I speak for far longer stretches of time than he can stand to listen. So he has a plan: He suggested, half jokingly, that we communicate by voicemail and text messages. “It’s shorter that way,” he explained.

I’ve got bad news for him: That’s not going to work for me. But his negotiation over how we can best communicate does raise an interesting point: Non-talkers control the conversation. When they’re done listening, the conversation is over.

__________

Can You Hear Me Now?

How can gabbers and non-gabbers communicate better? Here are some tips:

  • Set aside a time to talk. Taylor Keeney, a pilot, and his wife plan to speak for the first 20 minutes every day when they both get home from work. ‘The rule is that the TV or radio can be on but no computers or cellphone calls for however long it takes to re-connect,’ he says.
  • If you’re the talker, slow down. Cheryl Leone, a 66-year-old management consultant from Raleigh, N.C., breaks down her conversations into more bite-size pieces, so her boyfriend of 11 years can better absorb them. ‘I tend to throw a lot of things at him, and he could still be on one and two when I hit 10,’ she says. ‘Now I try to keep my list to about three things.’
  • Ask questions. Ask for feedback. ‘Conversation isn’t a monologue,’ says Rob Dobrenski, a psychologist in New York City. ‘Remember that you are talking “with” a person, not “at” one.’
  • Let the talker talk. Remember that if you’re the non-talker, you control and manipulate the communication by constantly curtailing conversation, which can be hurtful to the other person.
  • Really listen. Being silent is not the same as active listening. Use cues, such as head nods or statements such as ‘I hear you.’ When the talker pauses, relay back what you have heard and add your thoughts.
  • Ask for a break. ‘The non-talker should feel free to say, “Let me have a quick breather, I’m running out of listening gas. Let’s come back to this in a few minutes, or at least let’s break up this conversation with a new topic,”’ Dr. Dobrenski says.
  • Use technology as a supplement, not a substitute. Sure, it’s a great way to check in and converse in short bursts. But remember that it might not provide enough of an emotional connection for the talker in your life.
  • Call someone else. Sometimes it’s better to find an attentive audience, rather than numb one into submission.

__________

Consider the Macalusos, of Webster, N.Y. Susan likes to talk—a lot. Rob does not. “You could be interested in talking all night, but if I’m not interested, it’s not going to happen,” says Rob, 40, head of a telecommunications-equipment company.

At night in bed, while Ms. Macaluso happily chats away about her day, her husband often falls asleep, despite propping himself up with pillows and keeping the light on. On more than one occasion, after a cellphone call with him gets dropped, she has yammered away for an additional two or three minutes, unaware that he’s not there until he calls back on another line. And every once in a while, she becomes so exasperated by his silence that she pretends to speak for him, in a deeper voice, just to hear some feedback.

“He doesn’t tell me to get to the point because he knows it would be a big insult,” says Ms. Macaluso, 43, a homemaker. Says her husband: “I made the mistake of telling my wife to speed up—just once. She started over and made me sit through the whole thing again.”

Do women talk more than men? Not always, of course. Some men are big gabbers, just as some women are silent types. And yet, the stereotype that women talk more than men holds pretty true.

There are environmental reasons—many men are raised not to share their feelings. But biology plays a surprisingly strong part as well. There is evidence that women’s and men’s brains process language differently, according to Marianne Legato, a cardiologist and founder of the Partnership for Gender-Specific Medicine at New York’s Columbia University. She says that listening to, understanding and producing speech may be easier for women because they have more nerve cells in the left half of the brain, which is used to process language; a greater degree of connectivity between the two halves of the brain; and more of the neurotransmitter dopamine in the part of the brain that controls language.

Although the ability to understand and process language diminishes in both men and women as we age, it does so earlier for men (after age 35) than women (post-menopause). Women also get a boost of oxytocin, the feel-good hormone, when they speak to others, and estrogen enhances its effects. While men get this, too, testosterone blunts its effects. “This makes sense from an evolutionary point of view—men can’t defend their families if they are burdened with high levels of a hormone that compels them to make friends of all they meet,” says Dr. Legato, author of “Why Men Never Remember and Women Never Forget.” “Thus, men in their prime with high levels of testosterone are the least likely to be interested in social exchanges and bonding to others.”

Of course, we don’t need scientific studies to tell us that men and women communicate differently. Ask Taylor Keeney about his “word imbalance” theory. “Some women generally have a word quota,” says the 44-year-old pilot from Apex, N.C. “They have to say so many words to their significant other per day.”

Simply for ease of accounting, he puts that number at 1,000 words per day. Men, he says, generally are capable of hearing about 750 of these words. That is, once the woman hits 750, the man’s eyes glaze over. She then goes into “angry storage mode,” saving words for the next day. This cycle can be repeated indefinitely, with the woman storing up words until she gets a chance to say them.

“It is not pretty and can end badly for all concerned,” says Mr. Keeney. “Phrases like, ‘You never listen to me’ or ‘We never talk anymore’ are uttered. Men give the ‘I just got home and all I want to do is relax’ as a defense. Not good.”

Mr. Keeney has come up with a solution: texting. “Something as simple as a text of ‘DCA LUMU’ (‘Landed in D.C., Love You, Miss You’) or ‘GMB’ (‘Good Morning, Beautiful’) is fast but very effective at letting your partner know that you are thinking of them,” he says. “And all words count against the word quota.” His wife of three years agrees—to a point. “It has to be both quality and quantity,” says Margit Sylvester, 39, an executive assistant.

So what’s the answer? Should talkers befriend only talkers? Perhaps the best solution is to find someone who exactly complements your talking style. So, for example, if you like to talk 75% of the time, you need to find someone who is comfortable listening 75% of the time and talking only a quarter of it.

Jeff Foote says he was attracted to his partner of 17 years because he talks twice as much as he himself does. “It’s a lot less work for me,” says Mr. Foote, 53, a project manager for a San Francisco bank.

Still, he admits that he doesn’t always listen. Sometimes he has been reading and just can’t switch gears fast enough. “The conversation has already started, but I can’t keep up,” he says. “If I’m lucky, I don’t get caught.”

Sadly, he often does—because his partner gives him pop quizzes on what he has been saying. “Twenty-five percent of the time I can connect enough of the key words and guess right,” says Mr. Foote.

His partner, Cosgrove Norstadt, has seen his quizzes backfire. “I find out he has been listening the whole time, and I am embarrassed because I have been yammering on about nothing in the hopes of catching him not paying attention,” says Mr. Norstadt, 47, an actor. “So, I reinforce his decision to tune me out.”

Elizabeth Bernstein, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704658204575610921238173714.html

Mental health is the new antiabortion battleground. But the science is all wrong.

The latest war on abortion is being fought less over women’s bodies than over their minds. In the past few years, under the banner of “a woman’s right to know,” a number of states have passed laws mandating that women seeking abortions be told that going ahead with the procedure would expose them to mental health risks, including post-traumatic stress and a greater danger of suicide.

Such warnings might sound like a good idea. The decision to terminate a pregnancy can be difficult, and some women end up regretting it. It’s commendable to help women make an informed choice. But an informed choice requires accurate information. And these laws mandate that women be misled.

Rigorous U.S. scientific studies have not substantiated the claim that abortion, compared with its alternatives, causes an increased incidence of mental health problems. The same conclusion was reached in 2008 by an American Psychological Association task force, which I chaired, as well as by an independent team of scholars at Johns Hopkins University. As recently as September, Oregon State University researchers announced the results of a national study showing that teenagers who have an abortion are no more likely to become depressed or to have low self-esteem one year or five years later, compared with their peers who deliver.

Even so, the claim that abortion harms women’s mental health persists. According to research by the Guttmacher Institute, counseling on the negative psychological effects of abortion is mandatory in Mississippi, Nebraska, South Carolina, South Dakota, Texas, Utah and West Virginia. Promoting this claim is part of a political strategy aimed at dissuading women from terminating a pregnancy and at making abortions difficult, if not impossible, to obtain. It is a strategy that distorts scientific principles, even as it uses the umbrella of scientific research to advance its aims.

As part of this strategy, some antiabortion activists, such as David Reardon of the Elliot Institute, an antiabortion advocacy group, have scoured existing survey data for evidence linking abortion and a wide variety of mental health issues, such as depression, anxiety and alcohol use. They cite any correlations they find as evidence that abortion causes harm to women.

But there are at least two logical flaws at play here. The first is a confusion of correlation with causation. The most plausible explanation for the association that some studies find between abortion and mental health is that it reflects preexisting differences between women who continue a pregnancy and those who end one.

A substantial amount of research shows that women who deliver babies are, on average, more likely to have planned and wanted their pregnancies and to feel emotionally and financially capable of becoming a mother. In contrast, women who seek abortions are, on average, less likely to be married or involved in an intimate relationship, more likely to be poor, and more likely to have suffered physical or psychological abuse. All of these latter qualities are risk factors for poor mental health.

State lawmakers in Nebraska willfully ignored the difference between correlation and causation in April, when they passed a law requiring that health-care providers inform women seeking abortions if they have any characteristics – such as being poor or having low self-esteem – shown to be related to mental health problems following an abortion. If a woman experiences certain difficulties after an abortion, she can file a civil lawsuit against her physician claiming that she wasn’t screened adequately for those characteristics. It’s an option sure to discourage doctors from offering the procedure, if they aren’t already disinclined.

The law ignores the fact that the very characteristics that predispose women to emotional or mental health problems following an abortion also predispose them to postpartum depression if they deliver or to mental health problems in general, even if they do not become pregnant.

Following the logic of this purportedly protective law, women wanting to deliver a child should likewise be screened to ascertain that they are not predisposed to poor mental health afterward.

A second logical failing in the campaign to convince women that abortion harms their mental health involves what psychologists call the “availability heuristic.” Essentially, it means that vivid, first-person accounts that can be easily brought to mind, such as the personal stories of women who feel harmed by abortion, influence our estimates of the frequency of an event more than dry, statistical data do. For example, people think the probability of dying by homicide is greater than that of dying by stomach cancer, even though the rate of death by the latter is five times higher than death by the former. They err because examples of homicide are easier to recall than examples of stomach cancer.

In just this way, the emotionally evocative stories of a minority of women can lead people to overestimate the frequency of those experiences. For example, one woman who shared her story on an antiabortion Web site said that after her abortion, “I became very depressed and tried to kill myself by taking an entire bottle of pain pills, and I was unconscious for three days.” Her story drowns out the evidence that a much larger number of women feel relief following an abortion.

My research, based on clinic interviews in the 1990s with more than 400 women who obtained a first-trimester abortion, shows that women who terminate an unplanned pregnancy report a range of feelings, including sadness and loss as well as relief. Nonetheless, two years after their abortion, most women say they would make the same decision if they had it to do over again under the same circumstances. Because of the stigma attached to abortion in our society, however, most women feel they can’t talk about their abortions – unless they repent.

Women who think they made the right decision in having an abortion must be able to say so without fear of condemnation and without feeling that something is wrong with them. And women who feel sadness and regret should feel free to share their feelings as well. But their words should not be used to deceive women or to limit their choices.

Brenda Major is a professor of psychology at the University of California at Santa Barbara and a fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University.

__________

Full article: http://www.washingtonpost.com/wp-dyn/content/article/2010/11/05/AR2010110503095.html

Sick of This Text: ‘Sorry I’m Late’

I recently made plans to meet a good friend for dinner. We picked our favorite Italian place in Brooklyn and both swore we’d be there at 8.

At 8:05, as I arrived at the restaurant, my pal sent a text saying she was running late and was just leaving her office—half an hour away. She wrote again at 8:30, explaining that she had been delayed by her boss and was walking out the door “for real this time.”

Over the next hour, I received more texts: She was having trouble hailing a cab; she found a cab; the cab was stuck in traffic. She then stopped at a hardware store to buy bungee cords for her new bike rack.

At 9:47, my friend entered the restaurant, apologizing profusely but looking strikingly unstressed. Good thing I adore her.

Remember when we would make plans to meet someone and then actually show up on time? If you were more than a few minutes late, the other person would have visions of you lying on a gurney with a toe tag.

Coping Strategies

How do you cope with someone who makes you crazy by always being late? Share your feelings: Tell your friend how much their behavior hurts. And if that doesn’t work, here are some tips:

  • Explain what you mean by ‘on time.’ It means ‘when the event starts,’ not ‘while it is still going on.’
  • Lie about the time. Gene Barnes, 61, a retired government budget analyst from Dunn Loring, Va., hides the correct times of concerts and dinner reservations from his wife, subtracting 40 minutes.
  • Change the clocks. Lisa McNary, 50, a business professor from Raleigh, N.C., set the clocks in her home ahead by 12 minutes when one particularly tardy relative visited. ‘The technique worked like a charm,’ she says.
  • Pick up your chronically late pal at his or her house.
  • Show your displeasure. Penelope Malone, 56, a retired legal-department manager from Atlanta, starts her dinner parties on time—and serves stragglers cold food.
  • Make a deal. If your companion is late, he’s buying.
  • Bring something to read. Or an iPod.
  • Cut the person some slack. You’re supposed to love them, flaws and all.

Now, thanks to cellphones, BlackBerrys and other gadgets, too many of us have become blasé about being late. We have so many ways to relay a message that we’re going to be tardy that we no longer feel guilty about it.

And lateness is contagious. Once one person is tardy, others feel they can be late as well. It becomes beneficial to be the last one in a group to show up, because your wait will be the shortest.

“Cellphones let you off the hook,” says Kelly Casciotta, a 34-year-old pastoral counselor from Orange, Calif. She says she has been habitually tardy for years—late to everything from concerts to friends’ weddings—and once showed up an hour and a half late for a date. Her husband says she has “T.E.D.”—Time Estimation Disorder.

She says she feels little remorse. “If I am heading to a meeting and am running behind, I feel I am being responsible if I text five minutes before the meeting is supposed to start to say I am going to be 10 minutes late,” she says.

Don’t believe that tardiness is out of control? Ask around. Diana Miller, 65, a financial adviser from San Diego, says she broke up with a good friend who was habitually late. Melissa Gottlieb, 24, a Manhattan publicist, once asked a policeman to drive her to class in college because she was running behind. (He did it.)

Full disclosure: Last week I showed up 20 minutes late to pick up a friend for dinner. (He took it well, even though he was waiting outside in a tropical storm.) I’ve missed a flight because I arrived at the airport after the deadline to check baggage. And I was once more than an hour late to meet my bungee-cord buying friend.

It’s hard to believe I grew up with a father who is a Navy veteran fond of quoting a Marine friend of his: “If you’re early, you’re on time. If you’re on time, you’re late.”

Of course, people were tardy—even chronically so—long before smartphones. How else would Lewis Carroll have come up with the White Rabbit?

Some people were raised in cultures where tardiness is tolerated. Others learned poor time-management skills from their parents.

Far too many of us, though, try to cram too much into the day, leaving no time to get from place to place. And a few people use their tardiness to display power or control. (Think about the people who routinely show up late to meetings at your office. I bet they’re not the peons, right?)

Here’s the problem: Being late—especially over and over—can leave the other person feeling disrespected.

And yet, delays happen. The car refuses to start, the baby throws up on your tie, a co-worker stops by your desk to chat just as you’re packing it in for the day.

It’s the varying nature of these unexpected delays that actually makes it so hard for people to be on time, says Dan Ariely, a professor of psychology and behavioral economics at Duke University and author of “The Upside of Irrationality.” Because what goes wrong is different each time, people fail to plan for the delays. “They never take the average into account,” he says.

It works like this: If every day as you left work to go pick up your kids, your printer broke and took half an hour to fix, you’d soon start planning that time into your commute. But it isn’t always the printer that goes wrong. Sometimes it’s an unexpected email; other times it’s your boss. And because the cause of the delay is different each time, it feels unexpected.

Dr. Ariely has found that people are more likely to show up on time if they have made a deal with themselves to do so. In an experiment conducted last year, he asked 2,500 Americans this question: If you knew you had a colonoscopy scheduled for a particular day, would you be willing to put aside $500 that you would forfeit if you didn’t show up for the procedure on time? Sixty percent of the participants said they were willing to risk money. “They make this pre-commitment to ensure their own behavior,” Dr. Ariely says.

Dannie Raines, 52, a property manager in Pasadena, Calif., knows all too well how chronic lateness can harm relationships. Allowed to walk to school alone when she was in kindergarten, she was suspended on the first day of school for being half a day late; she spent the morning picking flowers and kicking the heads off mushrooms. (That didn’t go over well with her folks.) She was stripped of her student-council president title in junior high because of too many tardiness violations. (Ditto.) And she was an hour late for her own wedding. “My husband came very close to saying, ‘I don’t,’ ” she says.

Over the years, Ms. Raines has tried to overcome her tardiness habit through therapy, self-hypnosis and by setting the clocks in her house ahead. But recently she had a bigger wake-up call.

When she arrived 30 minutes late to meet one of her best friends to play racquetball, her friend started crying and told her: “Your constant lateness makes me feel that you disrespect me.”

Ms. Raines says she tried to explain to her friend that she shouldn’t take her behavior personally. And she admitted she was the one with the problem. “But it damaged our friendship,” Ms. Raines says.

“There were many conversations with remorse and promises,” recalls the friend, Alison Lewis, 52, a licensed social worker from La Canada, Calif. In the end, though, she says, “I didn’t feel cared for.”

Elizabeth Bernstein, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703859204575526471179963214.html

Fighting Bullying With Babies

Imagine there was a cure for meanness. Well, maybe there is.

Lately, the issue of bullying has been in the news, sparked by the suicide of Tyler Clementi, a gay college student who was a victim of cyber-bullying, and by a widely circulated New York Times article that focused on “mean girl” bullying in kindergarten. The federal government has identified bullying as a national problem. In August, it organized the first-ever “Bullying Prevention Summit,” and it is now rolling out an anti-bullying campaign aimed at 5- to 8-year-old children. This past month the Department of Education released a guidance letter telling schools, colleges and universities to take bullying seriously, or face potential legal consequences.

Stop Bullying Now Campaign: The problem of bullying has attracted federal attention. Above, an excerpt from a cartoon in the government’s bullying prevention guide for children.
 
The typical institutional response to bullying is to get tough. In the Tyler Clementi case, prosecutors are considering bringing hate-crime charges. But programs like the one I want to discuss today show the potential of augmenting our innate impulses to care for one another instead of just falling back on punishment as a deterrent. And what’s the secret formula? A baby.

We know that humans are hardwired to be aggressive and selfish. But a growing body of research is demonstrating that there is also a biological basis for human compassion. Brain scans reveal that when we contemplate violence done to others we activate the same regions in our brains that fire up when mothers gaze at their children, suggesting that caring for strangers may be instinctual. When we help others, areas of the brain associated with pleasure also light up. Research by Felix Warneken and Michael Tomasello indicates that toddlers as young as 18 months behave altruistically.

More important, we are beginning to understand how to nurture this biological potential. It seems that it’s not only possible to make people kinder, it’s possible to do it systematically at scale – at least with school children. That’s what one organization based in Toronto called Roots of Empathy has done. 

Roots of Empathy was founded in 1996 by Mary Gordon, an educator who had built Canada’s largest network of school-based parenting and family-literacy centers after having worked with neglectful and abusive parents. Gordon had found many of them to be lacking in empathy for their children. They hadn’t developed the skill because they hadn’t experienced or witnessed it sufficiently themselves. She envisioned Roots as a seriously proactive parent education program – one that would begin when the mothers- and fathers-to-be were in kindergarten.

Since then, Roots has worked with more than 12,600 classes across Canada, and in recent years, the program has expanded to the Isle of Man, the United Kingdom, New Zealand, and the United States, where it currently operates in Seattle. Researchers have found that the program increases kindness and acceptance of others and decreases negative aggression.

Here’s how it works: Roots arranges monthly class visits by a mother and her baby (who must be between two and four months old at the beginning of the school year). Each month, for nine months, a trained instructor guides a classroom using a standard curriculum that involves three 40-minute visits – a pre-visit, a baby visit, and a post-visit. The program runs from kindergarten to seventh grade. During the baby visits, the children sit around the baby and mother (sometimes it’s a father) on a green blanket (which represents new life and nature) and they try to understand the baby’s feelings. The instructor helps by labeling them. “It’s a launch pad for them to understand their own feelings and the feelings of others,” explains Gordon. “It carries over to the rest of class.”

I have visited several public schools in low-income neighborhoods in Toronto to observe Roots of Empathy’s work. What I find most fascinating is how the baby actually changes the children’s behavior. Teachers have confirmed my impressions: tough kids smile, disruptive kids focus, shy kids open up. In a seventh grade class, I found 12-year-olds unabashedly singing nursery rhymes.

The baby seems to act like a heart-softening magnet. No one fully understands why. Kimberly Schonert-Reichl, an applied developmental psychologist who is a professor at the University of British Columbia, has evaluated Roots of Empathy in four studies. “Do kids become more empathic and understanding? Do they become less aggressive and kinder to each other? The answer is yes and yes,” she explained. “The question is why.”

C. Sue Carter, a neurobiologist based at the University of Illinois at Chicago, who has conducted pioneering research into the effects of oxytocin, a hormone that has been linked with caring and trusting behavior, suspects that biology is playing a role in the program’s impact. “This may be an oxytocin story,” Carter told me. “I believe that being around the baby is somehow putting the children in a biologically different place. We don’t know what that place is because we haven’t measured it. However, if it works here as it does in other animals, we would guess that exposure to an infant would create a physiological state in which the children would be more social.”

To parent well, you must try to imagine what your baby is experiencing. So the kids do a lot of “perspective taking.” When the baby is too small to raise its own head, for example, the instructor asks the children to lay their heads on the blanket and look around from there. Perspective taking is the cognitive dimension of empathy – and like any skill it takes practice to master. (Cable news hosts, take note.)

Children learn strategies for comforting a crying baby. They learn that one must never shake a baby. They discover that everyone comes into the world with a different temperament, including themselves and their classmates. They see how hard it can be to be a parent, which helps them empathize with their own mothers and fathers. And they marvel at how capacity develops. Each month, the baby does something that it couldn’t do during its last visit: roll over, crawl, sit up, maybe even begin walking. Witnessing the baby’s triumphs – even something as small as picking up a rattle for the first time — the children will often cheer.

Ervin Staub, professor emeritus of psychology at the University of Massachusetts, has studied altruism in children and found that the best way to create a caring climate is to engage children collectively in an activity that benefits another human being. In Roots, children are enlisted in each class to do something to care for the baby, whether it is to sing a song, speak in a gentle voice, or make a “wishing tree.”

The results can be dramatic. In a study of first- to third-grade classrooms, Schonert-Reichl focused on the subset of kids who exhibited “proactive aggression” – the deliberate and cold-blooded aggression of bullies who prey on vulnerable kids. Of those who participated in the Roots program, 88 percent decreased this form of behavior over the school year, while in the control group, only 9 percent did, and many actually increased it. Schonert-Reichl has reproduced these findings with fourth to seventh grade children in a randomized controlled trial. She also found that Roots produced significant drops in “relational aggression” – things like gossiping, excluding others, and backstabbing. Research also found a sharp increase in children’s parenting knowledge.

“Empathy can’t be taught, but it can be caught,” Gordon often says – and not just by children. “Programmatically my biggest surprise was that not only did empathy increase in children, but it increased in their teachers,” she added. “And that, to me, was glorious, because teachers hold such sway over children.”

When the program was implemented on a large scale across the province of Manitoba — it’s now in 300 classrooms there — it achieved an “effect size” that Rob Santos, the scientific director of Healthy Child Manitoba, said translates to reducing the proportion of students who get into fights from 15 percent to 8 percent, close to a 50 percent reduction. “For a program that costs only hundreds of dollars per child, the cost-benefit of preventing later problems that cost thousands of dollars per child is obvious,” said Santos.

Follow-up studies have found that outcomes are maintained or enhanced three years after the program ends. “When you’ve got emotion and cognition happening at the same time, that’s deep learning,” explains Gordon. “That’s learning that will last.”

It’s hard to envision what a kinder and gentler world, or school, would truly look like. But Gordon told me a story about a seventh grade student in a tough school in Toronto that offered a glimpse. He was an effeminate boy from an immigrant background who was always the butt of jokes. “Anytime he spoke, you’d hear snickers in the background,” she recalled. Towards the end of the year, the children in Roots are asked to write a poem or a song for the baby. Kids often work in groups and come up with raps. This boy decided to sing a song he’d written himself about mothers.

“He was overweight and nerdy looking. His social skills were not very good,” Gordon recalled. “And he sang his song. The risk he took. My breath was in my fist, hoping that no one would humiliate him. And no one did. Not one youngster smirked. When he finished, they clapped. And I’m sure they all knew that they were holding back. But, oh my God, I was blown away. I couldn’t say anything.”

She added: “When they talk about protecting kids in schools, they talk about gun shields, cameras, lights, but never about the internal environment. But safe is not about the rules – it’s about how the youngsters feel inside.”

Have you seen or do you have ideas about effective ways to diminish bullying in school and elsewhere? We’ll discuss them in Saturday’s follow-up – and also look at a critical step that teachers can take to make their classrooms more peaceful.

David Bornstein is the author of “How to Change the World,” which has been published in 20 languages, and “The Price of a Dream: The Story of the Grameen Bank,” and is co-author of “Social Entrepreneurship: What Everyone Needs to Know.” He is the founder of dowser.org, a media site that reports on social innovation.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/08/fighting-bullying-with-babies

A Buffet Of Period Pieces

“I have created this book as a literary amuse-bouche,” food historian William Woys Weaver says in “Culinary Ephemera,” but his collection of period American recipes, labels, ads, menus, matchbooks and much else is more likely to sate an appetite than whet it. The 352 color plates, accompanied by informed, diverting text, cover everything from food-oriented “almanacs and calendars” to “wrappers and packaging.” The book tells us much about who we’ve been as well as what we’ve eaten . . . and drunk. There is no better evocation of a native oenophilia’s first stirrings, for instance, than the Engels & Krudwig Wine Co.’s 1940s label proclaiming “American Grape Wine”—made in sultry Sandusky, Ohio—to be “bottled romance.”

Many of the pieces here come from the 19th century, like the menu for a “corn festival” hosted in 1886 by the Ladies’ Aid Society of Wilkes-Barre, Pa., offering corn mush, corn bread, corn-starch blanc mange and—I kid you not—corn coffee. Remarkably, the menu is printed on a corn husk. How the menu-husk survived all these years, as Mr. Weaver says, is “a comment on the serendipitous nature of ephemera collecting.”

Serendipity brings us such 19th-century items as an 1887 advertising circular for the pork products of the Strawberry Hill farm in Florence, Mass. (a drawing of a pig emblazoned “No cholera here!”); an 1870s business card for the J.F. Hoffman & Co. Pie Bakers in Pittsburgh (their pies were “O.K.”); and a candy wrapper for Black Jacks molasses treats made in the 1890s in Salem, Mass.

The charm of the 20th-century pieces lies mostly in those from before 1950 or so, as with the menu for a Graf Zeppelin flight from Rio de Janeiro to New York in May 1930. (The Duck Supreme With Olives sounds tempting.) Then there is the 1940 Christmas menu for the U.S.S. Helena, moored in Pearl Harbor. Many of the crew members who dined on that meal would die a year later in the Japanese surprise attack. The ephemeral, it turns out, has the power to stir more than historical curiosity.

Mr. Bakshian writes about gastronomy for the Journal.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704462704575589643515597902.html

Mother Madness

Spend every moment with your child? Make your own baby food and use cloth diapers? Erica Jong wonders how motherhood became such a prison for modern women.

Unless you’ve been living on another planet, you know that we have endured an orgy of motherphilia for at least the last two decades. Movie stars proudly display their baby bumps, and the shiny magazines at the checkout counter never tire of describing the joys of celebrity parenthood. Bearing and rearing children has come to be seen as life’s greatest good. Never mind that there are now enough abandoned children on the planet to make breeding unnecessary. Professional narcissists like Angelina Jolie and Madonna want their own little replicas in addition to the African and Asian children that they collect to advertise their open-mindedness. Nannies are seldom photographed in these carefully arranged family scenes. We are to assume that all this baby-minding is painless, easy and cheap.

Today’s bible of child-rearing is “The Baby Book” by William and Martha Sears, which trumpets “attachment parenting.” You wear your baby, sleep with her and attune yourself totally to her needs. How you do this and also earn the money to keep her is rarely discussed. You are just assumed to be rich enough. At one point, the Searses suggest that you borrow money so that you can bend your life to the baby’s needs. If there are other caregivers, they are invisible. Mother and father are presumed to be able to do this alone—without the village it takes to raise any child. Add to this the dictates of “green” parenting—homemade baby food, cloth diapers, a cocoon of clockless, unscheduled time—and you have our new ideal. Anything less is bad for baby. Parents be damned.

We also assume that “mother” and “father” are exclusive terms, though in other cultures, these terms are applied to a variety of aunts, uncles and other adults. Kinship is not exclusively biological, after all, and you need a brood to raise a brood. Cooperative child-rearing is obviously convenient, but some anthropologists believe that it also serves another more important function: Multiple caregivers enhance the cognitive skills of babies and young children. Any family in which there are parents, grandparents, nannies and other concerned adults understands how readily children adapt to different caregivers. Surely this prepares them better for life than stressed-out biological parents alone. Some of these stressed-out parents have come to loathe Dr. Sears and his wife and consider them condescending colonialists in love with noble savagery.

Someday “attachment parenting” may be seen as quaint, but today it’s assumed that we can perfect our babies by the way we nurture them. Few of us question the idea, and American mothers and fathers run themselves ragged trying to mold exceptional children. It’s a highly competitive race. No parent wants to be told it all may be for naught, especially, say, a woman lawyer who has quit her firm to raise a child. She is assumed to be pursuing a higher goal, and hard work is supposed to pay off, whether in the office or at home. We dare not question these assumptions.

No wonder that Elisabeth Badinter’s book “Le Conflit: La Femme et La Mère” (“The Conflict: Woman and Mother”) has become a best seller in France and will soon be published around the world. Ms. Badinter dares to question attachment parenting, arguing that such supposedly benign expectations victimize women far more than men have ever done. Attachment parenting, especially when combined with environmental correctness, has encouraged female victimization. Women feel not only that they must be ever-present for their children but also that they must breast-feed, make their own baby food and eschew disposable diapers. It’s a prison for mothers, and it represents as much of a backlash against women’s freedom as the right-to-life movement.

When a celebrity mother like the supermodel Gisele Bündchen declares that all women should be required to breast-feed, she is echoing green-parenting propaganda, perhaps unknowingly. Mothers are guilty enough without more rules about mothering. I liked breast-feeding. My daughter hated it. Mothers must be free to choose. But politicians may yet find ways to impose rules on motherhood. Mandatory breast-feeding isn’t imminent, but it’s not hard to imagine that the “food police” might become something more than a punch line about overreaching government. Mothers, after all, are easy scapegoats.

Madonna with daughter Mercy James in April.

Gisele Bündchen with baby Benjamin in September.

Angelina Jolie with Zahara and Maddox in 2007.

In truth, nothing is more malleable than motherhood. We like to imagine that mothering is immutable and decreed by natural law, but in fact it has encompassed such disparate practices as baby farming, wet-nursing and infanticide. The possessive, almost proprietary motherhood that we consider natural today would have been anathema to early kibbutzniks in Israel. In our day motherhood has been glamorized, and in certain circles, children have become the ultimate accessories. But we should not fool ourselves: Treating children like expensive accessories may be the ultimate bondage for women.

Is it even possible to satisfy the needs of both parents and children? In agrarian societies, perhaps wearing your baby was the norm, but today’s corporate culture scarcely makes room for breast-feeding on the job, let alone baby-wearing. So it seems we have devised a new torture for mothers—a set of expectations that makes them feel inadequate no matter how passionately they attend to their children.

I try to imagine what it would have been like for me to follow the suggestions of attachment parenting while I was a single mother and full-time bread-winner. I would have had to take my baby on lecture tours, in and out of airports, television stations and hotels. But that was impossible. Her schedule and mine could not have diverged more. So I hired nannies, left my daughter home and felt guilty for my own imperfect attachment. I can’t imagine having done it any other way. Even if every hotel and every airport had had a beautiful baby facility—which, of course, they didn’t—the schedules of children are not so malleable. Children are naturally afraid of unfamiliar baby sitters, so parents change their lives to accommodate them. In the absence of societal adjustment to the needs of children, parents have to revise their own schedules.

We are in a period of retrenchment against progressive social policies, and the women pursuing political life today owe more to Evita Peron than to Eleanor Roosevelt. “Mama grizzlies” like Sarah Palin never acknowledge that there are any difficulties in bearing and raising children. Nor do they acknowledge any helpers as they thrust their babies into the arms of siblings or daddies. The baby has become the ultimate political tool.

__________

2,000 Years of Parenting Advice

  • Proper measures must be taken to ensure that [children] shall be tactful and courteous in their address; for nothing is so deservedly disliked as tactless characters. —“The Education of Children,” Plutarch, A.D. 110
  • I will also advise his feet to be wash’d every day in cold water, and to have his shoes so thin, that they might leak and let in water.… It is recommendable for its cleanliness; but that which I aim at in it, is health; and therefore I limit it not precisely to any time of the day. —“Some Thoughts Concerning Education,” John Locke, 1693
  • But let mothers deign to nurse their children, morals will reform themselves, nature’s sentiments will be awakened in every heart, the state will be repeopled. —“Emile: or, On Education,” Jean-Jacques Rousseau, 1762
  • Even very little children are happy when they think they are useful. “I can do some good—can’t I, mother?” is one of the first questions asked…. Let them go out with their little basket, to weed the garden, to pick peas for dinner, to feed the chickens, &c. —“The Mother’s Book,” Lydia Maria Child, 1831
  • Babies under six months old should never be played with; and the less of it at any time the better for the infant. —“The Care and Feeding of Children,” L. Emmett Holt, 1894
  • Never hug and kiss them, never let them sit in your lap. If you must, kiss them once on the forehead when they say good night. Shake hands with them in the morning. Give them a pat on the head if they have made an extraordinarily good job of a difficult task. —“Psychological Care of Infant and Child,” John B. Watson, 1928
  • The more people have studied different methods of bringing up children the more they have come to the conclusion that what good mothers and fathers instinctively feel like doing for their babies is usually best after all. Furthermore, all parents do their best job when they have a natural, easy confidence in themselves. Better to make a few mistakes from being natural than to do everything letter-perfect out of a feeling of worry. —“The Common Sense Book of Baby and Child Care,” Benjamin Spock, 1946

__________

Indeed, although attachment parenting comes with an exquisite progressive pedigree, it is a perfect tool for the political right. It certainly serves to keep mothers and fathers out of the political process. If you are busy raising children without societal help and trying to earn a living during a recession, you don’t have much time to question and change the world that you and your children inhabit. What exhausted, overworked parent has time to protest under such conditions?

The first wave of feminists, in the 19th century, dreamed of communal kitchens and nurseries. A hundred years later, the closest we have come to those amenities are fast-food franchises that make our children obese and impoverished immigrant nannies who help to raise our kids while their own kids are left at home with grandparents. Our foremothers might be appalled by how little we have transformed the world of motherhood.

None of these parenting patterns is encoded in our DNA. Mothering and fathering are different all over the world. Our cultural myth is that nurturance matters deeply. And it has led to “helicopter parenting,” the smothering surveillance of a child’s every experience and problem, often extending as far as college. It has also led to pervasive anxiety (among parents and children alike) and the deep disappointment that some parents suffer when their kids become less malleable during their teenage years.

Giving up your life for your child creates expectations that are likely to be thwarted as the child, inevitably, attempts to detach. Nor does such hyper-attentive parenting help children to become independent adults. Kids who never have to solve problems for themselves come to believe that they can’t solve problems themselves. Sometimes they fall apart in college.

Much of the demand for perfect children falls on mothers, and now we are hearing a new drumbeat: the idea that prenatal life determines post-natal life. In her much-discussed new book, “Origins: How the Nine Months Before Birth Shape the Rest of Our Lives,” Annie Murphy Paul describes the ever-expanding efforts of researchers to determine how maternal diet, weight, stress, exercise and other factors can influence fetal development. Ms. Paul is sensibly resistant to alarmism on these issues, but you cannot read her book without asking: And who is in charge of prenatal life? The mother! Does one glass of wine doom your child to fetal alcohol syndrome? No, but you could be forgiven for thinking so, judging by the hysterical reaction that often greets an expectant mother who dares to sip Chardonnay.

What is so troubling about these theories of parenting—both pre- and postnatal—is that they seem like attempts to exert control in a world that is increasingly out of control. We can’t get rid of the carcinogens in the environment, but we can make sure that our kids arrive at school each day with a reusable lunch bag full of produce from the farmers’ market. We can’t do anything about loose nukes falling into the hands of terrorists, but we can make sure that our progeny’s every waking hour is tightly scheduled with edifying activities.

Molly Jong-Fast, at about age 4, and her mother, Erica, in France around 1981.

Our obsession with parenting is an avoidance strategy. It allows us to substitute our own small world for the world as a whole. But the entire planet is a child’s home, and other adults are also mothers and fathers. We cannot separate our children from the ills that affect everyone, however hard we try. Aspiring to be perfect parents seems like a pathetic attempt to control what we can while ignoring problems that seem beyond our reach.

Some parenting gurus suggest that helicopter parenting became the rage as more mothers went to work outside the home. In other words, it was a kind of reaction formation, a way for mothers to compensate for their absence and guilt and also for the many dangerous and uncontrollable things in the modern family’s environment. This seems logical to me. As we give up on ideals of community, we focus more and more on our individual children, perhaps not realizing that the community and the child cannot be separated.

In the oscillations of feminism, theories of child-rearing have played a major part. As long as women remain the gender most responsible for children, we are the ones who have the most to lose by accepting the “noble savage” view of parenting, with its ideals of attachment and naturalness. We need to be released from guilt about our children, not further bound by it. We need someone to say: Do the best you can. There are no rules.

Erica Jong is a novelist, poet and essayist whose 20 books have been published around the world. “Fear of Flying” is her best-known novel, with 20 million copies in print.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748704462704575590603553674296.html

The Secrets Behind Edible Irony

Armed with an industrial-sized blow torch and a 10-liter canister of liquid nitrogen, Alex Stupak was ready to roast ice cream.

As a meringue-like base whirred in a standing mixer, the tattoo-covered 30-year-old poured in the liquid nitrogen, raising gales of puffy white vapor that, within minutes, created a glistening, white, frozen mousse. Carefully scooping a portion into a torpedo shape, Mr. Stupak aimed his blow torch at the quenelle and browned it. The result was delicious and witty, a bite of edible irony.

“That’ll work,” said Mr. Stupak.

Considered one of the nation’s most inventive pastry chefs, Mr. Stupak has spent his career at cutting-edge restaurants, including Boston’s Clio, Chicago’s Alinea and Manhattan’s WD-50, his current employer. Several of his innovations, including pliable chocolate ganache ribbons and flavored ice capsules containing a creamy liquid center, are often imitated by adventurous high-end restaurants.

Alex Stupak

His desserts are known for their surprising flavors and novel textures. A dessert on the current menu features four flavors—lemongrass, jackfruit, whole wheat and brown sugar—in textures including mousse, foam, ice, crunchy crust, solid fruit and puree.

Such combinations often occur to him through a kind of mental flavor association. He loves lemongrass, which he paired with jackfruit, another Southeast Asian ingredient. Lemongrass steeped in milk “tastes like the milk left over in a bowl of Froot Loops,” so he made whole wheat ice cream to conjure the taste of cereal. That made him think of cream of wheat, topped with brown sugar. “It only needs to make sense to me,” Mr. Stupak said. “Then it has to taste good.”

On a recent Tuesday, Mr. Stupak’s day off, he met with his assistant, 24-year-old Malcolm Livingston, in WD-50’s empty kitchen to work on some new techniques. The pastry team’s regular schedule—from 1 p.m. to about 1 a.m.—is filled with preparing the foams, emulsions, jellies, puffs and ice creams that are then assembled into approximately 200 desserts each night.

Today Mr. Stupak will test a handful of theories that have been percolating in his mind. Sometimes, he makes a breakthrough, then tries to figure out how to do the opposite. For example, having discovered how to make frozen capsules, his next “holy grail” was figuring out how to make a paper-thin gel filled with hot liquid. (He and Mr. Livingston are still tinkering with a pumpkin-filled version.) Other times, he’ll ponder a riff on a classic dessert.

One goal was to make olive oil solid by blending it at high speed with isomalt sugar. The oil and sugar base failed to emulsify, and Mr. Stupak poured the sweet, slippery oil onto a tray.

“I thought the sugar would microparticulate and become suspended in the oil, and it would become a sliceable gel,” Mr. Stupak said. Mr. Livingston tried a few other methods, none of which worked. Failures like this are so common that Mr. Stupak is sanguine, vowing to return to the idea later.

A dessert made from soft chocolate, beets, long pepper and ricotta ice cream

“New flavor combinations are not creativity. We’re talking about technique,” Mr. Stupak said. Once he’s perfected a technique, he then applies different flavors to it. After developing a method for coating a calcium-based mousse with a solid pectin-fortified glaze, he’s used it for chocolate mousse dipped in cherry sauce and sweet potato pie coated with brown sugar glaze.

Though he appears preternaturally contained and controlled, his best ideas occur to him gradually over the course of a few busy nights of work, he said, the result of a career in hectic kitchens.

The constant pressure to outdo both himself and other modernist chefs has undercut Mr. Stupak’s enjoyment of his work. The thrill of shocking peers and diners has worn off, and instead of striving for bizarre, never-before-seen dishes, he bases more desserts on known entities, like rainbow sherbet or meringue, so that customers won’t feel as intimidated.

“The first 10 times a tightrope walker gets across, he’s excited,” Mr. Stupak said. “After that, it’s his job.” (“He is excited, I could see it,” Mr. Livingston whispered, after the roasted ice cream worked out, while Mr. Stupak’s back was turned.)

Over time, Mr. Stupak said he has also observed chefs copying each other and stealing ideas, giving him a depressing sense that what he once saw as high art is actually a big ego contest. He has decided to leave the pastry world next year and to open a Mexican restaurant—albeit one where he said he will continue to innovate.

On a recent night, stuffed into his corner in the hot kitchen, Mr. Stupak hunched over 20 dishes of beer ice cream, molasses jelly and caramel sauce, piping ribbons of malted yogurt over the contours of long, thin caraway tuiles. This drill continued off and on for hours because the dish goes out, for free, to about 80 people a night as a palate-cleanser. Wouldn’t it be easier to design a low-labor dish?

“What we do is make things hard on ourselves,” Mr. Stupak said. “Creativity is work.”

Small Bites

• Asked why so many pastry chefs sport elaborate tattoos—his own include images of an octopus and a fallen angel with his wings torn off—Alex Stupak said, “When you spend your day around gum drops and ice cream and cupcakes, you just need to assert your masculine side somehow.”

• Mr. Stupak will take a liquid base, divide it into parts and freeze some, dehydrate some and “aerate” some by pumping air into it, looking for the most interesting texture. “It’s just manipulation of water in all its permutations,” he said.

• He reads industrial manuals like “Food Polymers in Water Soluble Applications.” Stultifying as they are, he said, these manuals are the best source of ideas for new ingredients and techniques.

• Substances like hydrocolloids and liquid nitrogen allow Mr. Stupak to avoid the lengthy delays typical in a pastry kitchen and to conduct many experiments in a short period of time.

• Most of his desserts are cold by design. Hot elements are “dynamic” and “food is harder to manipulate in a dynamic state.”

• Mr. Stupak got his first job, as a restaurant dishwasher, when he was 12 (he told the owner he was 14). He gorged on books by top chefs, translating Spanish pastry chef Albert Adrià’s book word-by-word with a dictionary.

Katy McLaughlin, Wall Street Journal

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703514904575602434202046328.html

Twinkle, Twinkle, Giant Star

Daylight-saving time ends this weekend. The clocks change. Is that the best the modern world can do for a sun-worshiping ritual?

About 10 years ago, Richard Cohen was running a British publishing house and trying to find someone to write a book he wanted to read, one about the sun. He found no takers and so he took the job himself. After eight years and reporting trips to 18 countries on six continents—his wife would tell inquirers, “Oh, he’s out chasing the sun”—he had a 574-page book with a point. The modern world has decentralized the sun, Mr. Cohen says in “Chasing the Sun”; science has reduced this glorious miracle of a star to little more than a dependable overhead light. “The wonder has been stripped away,” he writes.

At least that’s what Mr. Cohen says his point is. I’m not sure that the book backs him up. I suspect that he was just interested in the sun and one thing led to another, the way Robert Burton in the 17th century set out to describe depression and ended up writing “The Anatomy of Melancholy, What it is, With all the kinds, causes, symptomes, prognostickes, and severall cures of it. In three Partitions with their severall Sections, members, and subsections. Philosophically, Medicinally, Historically, opened and cut up.”

And so in this discursive and readable firework of a book, we learn—and are interested to do so—that the sun was studied by primitive cultures because it allowed them to time their crops; by the ancient Chinese because it gave astrologers political power with their rulers; by early Islamic worshipers because it set the direction and hour of their prayers.

We also learn how Copernicus, Kepler, Galileo, Newton and Herschel by turns built the evidence-based picture of an orderly universe with the Earth not at the center—the closest Mr. Cohen comes to explaining how science decentralized the sun. We hear about the awe that solar eclipses have always inspired, and how an eclipse in 1919 led to the acceptance of Einstein’s theory that gravity bends the light from stars visible near an eclipsed sun. We find out how the atom was discovered and then split, and how a bomb was built based on atomic fusion, which is the process that powers the sun.

But Mr. Cohen—oddly enough, in a book protesting the sun’s decentralization—says little about the sun itself: How it drew itself together from the detritus of four previous generations of stars, how it’s layered like an onion and melds gas and light, how its heat struggles from the fusing core to reach us. Mr. Cohen confesses: “The main memory I have of any scientific endeavor” in high school was of a teacher climbing through a window and frying an egg on a copper pan with a Bunsen burner. “Some readers may wish that I had ventured deeper into solar astronomy,” he says, “but this book is not a rainbow; it has to end somewhere.”

“Chasing the Sun” is less about the sun, then, than about the sun’s effect on the Earth and us earthlings. The sun’s ultraviolet light gives us fashionable tans and skin cancer; it cures seasonal affective disorder and thins the ozone layer. The sun sets our clocks and calendars and maps. Sunspots—formed by cyclical surges in the sun’s magnetic field—increase the Earth’s exposure to radiation, affecting everything from the weather to satellites to telephone service. The sun, Mr. Cohen notes, has been central to the myths of every culture, though it’s a mystery why Daedalus and his sun-struck son Icarus show up only in a footnote on page 487. Gold, mirrors and blondes all have been regarded as precious for centuries because they’re sun symbols.

Mask from the Museum of the Sun in Riga, Latvia.

The sun, of course, governs the great cycles in the air and in the oceans, and it might—or might not—even play a major role in global warming. Mr. Cohen is agnostic on the “climate change” front. “Whether we should prepare ourselves for global warming or for a new ice age indirectly caused by a hotter Earth or by some other factor is almost impossible to forecast,” he says, and then is unable to resist adding: “Until the sixteenth century, ‘weather’ and ‘whether’ were interchangeable spellings.” Then he notes: “Or, as Joyce’s Leopold Bloom so charmingly puts it, weather is ‘as uncertain as a child’s bottom.’ ”

“Chasing the Sun” is sprinkled throughout with such glittery delights. The haloes of Christian saints, he says, began as little suns. The 16th-century Danish astronomer Tycho Brahe had two cousins who went to England in 1592 on a diplomatic mission that had nothing to do with the sun or astronomy; we learn with pleasure, as Shakespeare clearly did, that their names were Frederik Rosenkrantz and Knud Gyldenstierne.

The book ends with people who still honor the sun. Mr. Cohen, on one of his many trips, goes to India in 2006 to visit Udaipur, about 250 miles south of New Delhi, for the Hindu festival of light. He meets with a wealthy local leader, a maharana, who has 14 solar-powered vehicles for hire; who says his family—which he traces back to 569—is “descended from the Sun” (Mr. Cohen capitalizes the word throughout); and whose stationery, the author notes, is “embossed with a Sun sporting a mighty, whirly mustache.” The maharana tells him that the sun is a god, a part of us, divine, and it “doesn’t so much bring us light as take away the darkness.”

At Varanasi, an Indian holy city, the author watched a Hindu ceremony along the Ganges River, attended by more than a thousand people in boats and on the shore, where priests rang bells, blew on conch shells and chanted for peace in the world, all in the dying light of the setting sun. At that moment, Mr. Cohen says, he felt the ancient connection with this star that holds life and death over us. And so it does: Without the sun, we’re just cold, starving naked mole rats who won’t last a generation.

Even if he doesn’t work much at backing his theory about the sun’s “decentralized” place in our lives, Mr. Cohen is surely right. We modern Western folk—living with air conditioning and central heat, buying food at markets, our watches and cellphones telling us the time, GPS devices telling us where we are—hardly need to think of the sun at all. The blame lies, clearly and obviously, with technology, the march of progress, the temptations of convenience and human laziness.

The blame does not lie with science. For day-in, day-out sun worship, no one is more devoted than those scientists known as astronomers. One of them routinely posts online photos of the sun accompanied by long, clear and enchanted explanations that begin with introductions on the order of “Oh man oh man, do I love this picture.”

Another astronomer-blogger says of another photo of the sun: “I could not stop looking at it.” In the photos, a granular sun erupts, swirls, blazes, and you might want to look at them. Worship is catching.

Ms. Finkbeiner, who runs the graduate program in science writing at Johns Hopkins University, is a free-lance science writer.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748704462704575590681157498028.html

Socially challenging

Psychopathy

Psychopathy seems to be caused by specific mental deficiencies

Fancy a game of cards before dinner?

WHAT makes people psychopaths is not an idle question. Prisons are packed with them. So, according to some, are boardrooms. The combination of a propensity for impulsive risk-taking with a lack of guilt and shame (the two main characteristics of psychopathy) may lead, according to circumstances, to a criminal career or a business one. That has provoked a debate about whether the phenomenon is an aberration, or whether natural selection favours it, at least when it is rare in a population. The boardroom, after all, is a desirable place to be—and before the invention of prisons, even crime might often have paid.

To shed some light on this question Elsa Ermer and Kent Kiehl of the University of New Mexico, Albuquerque, decided to probe psychopaths’ moral sensibilities and their attitude to risk a little further. Their results do not prove that psychopathy is adaptive, but they do suggest that it depends on specific mechanisms (or, rather, a specific lack of them). Such specificity is often the result of evolution.

Past work has established that psychopaths have normal levels of intelligence (they are only rarely Hannibal Lecter-like geniuses). Nor does their lack of guilt and shame seem to spring from a deficient grasp of right and wrong. Ask a psychopath what he is supposed to do in a particular situation, and he can usually give you what non-psychopaths would regard as the correct answer. It is just that he does not seem bound to act on that knowledge.

Dr Ermer and Dr Kiehl suspected the reason might be that, despite psychopaths’ ability to give the appropriate answer when confronted with a moral problem, they are not arriving at this answer by normal psychological processes. In particular, the two researchers thought that psychopaths might not possess the instinctive grasp of social contracts—the rules that govern obligations—that other people have. To examine this idea, as they report this week in Psychological Science, they used a game called the Wason card test.

Playing by the rules

Most people understand social contracts intuitively. They do not have to reason them out. The Wason test is a good way of showing this. It poses two logically identical problems, one cast in general terms and the other in terms of a social contract.

For instance, the first presentation might be of four cards, each with a number on one side and a colour on the other. The cards are placed on a table to show 3, 8, red and brown. The rule to be tested is: “If a card shows an even number on one side, then it is red on the other.” Which cards do you need to turn over to tell if the rule has been broken?

That sounds simple, but most people get it wrong. Now consider this problem. The rule to be tested is: “If you borrow the car, then you have to fill the tank with petrol.” Once again, you are shown four cards, one side of which says who did or did not borrow the car and the other whether or not that person filled the tank:

Dave did not borrow the car
Helen borrowed the car
Brianne filled up the tank with petrol
Kirk did not fill up the tank with petrol

Once again, you have to decide which cards to turn over to see if the rule has been broken.

In terms of formal logic, the problems are the same. But most people have an easier time answering the second one than the first. (In both cases it is the second and fourth cards that need to be turned.)
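
Why those two cards? A rule of the form “if P then Q” can be broken only by a card that pairs P on one side with not-Q on the other, so the only faces worth turning over are those showing P or not-Q. Here is a minimal sketch of that check in Python (an illustration of the logic only, not anything from the study; the names are invented):

    # "If P then Q" is violated only by a card with P on one side and
    # not-Q on the other, so a visible face forces a turn only if it
    # shows P (the hidden side might be not-Q) or shows not-Q (the
    # hidden side might be P).
    def must_turn(face, is_p, is_not_q):
        return is_p(face) or is_not_q(face)

    # The first puzzle: "if a card shows an even number, it is red."
    faces = [3, 8, "red", "brown"]
    is_even = lambda f: isinstance(f, int) and f % 2 == 0
    is_not_red = lambda f: isinstance(f, str) and f != "red"

    print([f for f in faces if must_turn(f, is_even, is_not_red)])
    # prints [8, 'brown']: the second and fourth cards

The car-borrowing version maps onto the same template: only Helen (who borrowed the car, P) and Kirk (who did not fill the tank, not-Q) can reveal a violation.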

Ordinary people are similarly attuned to questions of risk (“If you work with tuberculosis patients, then you must wear a mask,” for example), and the Wason test shows this, too. It is not, however, a matter of the problem being cast in natural language. Descriptive sentences that are not about social contracts or risk (“People from California are patient; John is patient”, etc) are as difficult for normal people to deal with as colours and numbers.

Dr Ermer and Dr Kiehl wanted to know how psychopaths would fare at this task. To find out, they recruited 67 prisoners and tested them for psychopathy. Ten were unambiguously psychopathic. Thirty were non-psychopaths. The rest were somewhere in between. When the two researchers probed the prisoners’ abilities on the general test, they discovered that the psychopaths did just as well—or just as poorly, if you like—as everyone else. In this case everyone, on average, got it right only about a fifth of the time. For problems cast as social contracts or as questions of risk avoidance, by contrast, non-psychopaths got it right about 70% of the time. Psychopaths scored much lower—around 40%—and those in the middle of the psychopathy scale scored midway between the two.

The Wason test suggests that analysing social contracts and analysing risk are what evolutionary psychologists call cognitive modules—bundles of mental adaptations that act like bodily organs in that they are specialised to a particular job. This new result suggests that in psychopaths these modules have been switched off.

Further research will be needed to see how the risk and social-contract modules that govern psychopathy are actually controlled. But other phenomena that look like diseases are known to be maintained by natural selection. Sickle-cell anaemia, caused by genes that protect against malaria, is the most famous example. Psychopathy may be about to join it.

__________

Full article and photo: http://www.economist.com/node/17460702

This Is Your Brain on Metaphors

Despite rumors to the contrary, there are many ways in which the human brain isn’t all that fancy. Let’s compare it to the nervous system of a fruit fly. Both are made up of cells, of course, with neurons playing particularly important roles. Now one might expect that a neuron from a human will differ dramatically from one from a fly. Maybe the human’s will have especially ornate ways of communicating with other neurons, making use of unique “neurotransmitter” messengers. Maybe compared to the lowly fly neuron, human neurons are bigger, more complex, in some way can run faster and jump higher.

But no. Look at neurons from the two species under a microscope and they look the same. They have the same electrical properties, many of the same neurotransmitters, the same protein channels that allow ions to flow in and out, as well as a remarkably high number of genes in common. Neurons are the same basic building blocks in both species.

So where’s the difference? It’s numbers — humans have roughly one million neurons for each one in a fly. And out of a human’s 100 billion neurons emerge some pretty remarkable things. With enough quantity, you generate quality.

Neuroscientists understand the structural bases of some of these qualities. Take language, that uniquely human behavior. Underlying it are structures unique to the human brain — regions like “Broca’s area,” which specializes in language production. Then there’s the brain’s “extrapyramidal system,” which is involved in fine motor control. The complexity of the human version allows us to do something that, say, a polar bear could never accomplish — sufficiently independent movement of digits to play a trill on the piano, for instance. Particularly striking is the human frontal cortex. While it occurs in all mammals, the human version is proportionately bigger and denser in its wiring. And what is the frontal cortex good for? Emotional regulation, gratification postponement, executive decision-making, long-term planning. We study hard in high school to get admitted to a top college to get into grad school to get a good job to get into the nursing home of our choice. Gophers don’t do that.

There’s another domain of unique human skills, and neuroscientists are learning a bit about how the brain pulls it off.

Consider the following from J. Ruth Gendler’s wonderful “The Book of Qualities,” a collection of “character sketches” of different qualities, emotions and attributes:

Anxiety is secretive. He does not trust anyone, not even his friends, Worry, Terror, Doubt and Panic … He likes to visit me late at night when I am alone and exhausted. I have never slept with him, but he kissed me on the forehead once, and I had a headache for two years …

Or:

Compassion speaks with a slight accent. She was a vulnerable child, miserable in school, cold, shy … In ninth grade she was befriended by Courage. Courage lent Compassion bright sweaters, explained the slang, showed her how to play volleyball.

What is Gendler going on about? We know, and feel pleasure triggered by her unlikely juxtapositions. Despair has stopped listening to music. Anger sharpens kitchen knives at the local supermarket. Beauty wears a gold shawl and sells seven kinds of honey at the flea market. Longing studies archeology.

Symbols, metaphors, analogies, parables, synecdoche, figures of speech: we understand them. We understand that a captain wants more than just hands when he orders all of them on deck. We understand that Kafka’s “Metamorphosis” isn’t really about a cockroach. If we are of a certain theological ilk, we see bread and wine intertwined with body and blood. We grasp that the right piece of cloth can represent a nation and its values, and that setting fire to such a flag is a highly charged act. We can learn that a certain combination of sounds put together by Tchaikovsky represents Napoleon getting his butt kicked just outside Moscow. And that the name “Napoleon,” in this case, represents thousands and thousands of soldiers dying cold and hungry, far from home.

And we even understand that June isn’t literally busting out all over. It would seem that doing all this would be hard enough to cause a brainstorm. So where did this facility with symbolism come from? It strikes me that the human brain has evolved a necessary shortcut for the job, one with some major implications.

Consider an animal (including a human) that has started eating some rotten, fetid, disgusting food. As a result, neurons in an area of the brain called the insula will activate. Gustatory disgust. Smell the same awful food, and the insula activates as well. Think about what might count as a disgusting food (say, taking a bite out of a struggling cockroach). Same thing.

Now read in the newspaper about a saintly old widow who had her home foreclosed by a sleazy mortgage company, her medical insurance canceled on flimsy grounds, and got a lousy, exploitative offer at the pawn shop where she tried to hock her kidney dialysis machine. You sit there thinking, those bastards, those people are scum, they’re worse than maggots, they make me want to puke … and your insula activates. Think about something shameful and rotten that you once did … same thing. Not only does the insula “do” sensory disgust; it does moral disgust as well. Because the two are so viscerally similar. When we evolved the capacity to be disgusted by moral failures, we didn’t evolve a new brain region to handle it. Instead, the insula expanded its portfolio.

Or consider pain. Somebody pokes your big left toe with a pin. Spinal reflexes cause you to instantly jerk your foot back just as they would in, say, a frog. Evolutionarily ancient regions activate in the brain as well, telling you about things like the intensity of the pain, or whether it’s a sharp localized pain or a diffuse burning one. But then there’s a fancier, more recently evolved brain region in the frontal cortex called the anterior cingulate that’s involved in the subjective, evaluative response to the pain. A piranha has just bitten you? That’s a disaster. The shoes you bought are a size too small? Well, not as much of a disaster.

Now instead, watch your beloved being poked with the pin. And your anterior cingulate will activate, as if it were you in pain. There’s a neurotransmitter called Substance P that is involved in the nuts and bolts circuitry of pain perception. Administer a drug that blocks the actions of Substance P to people who are clinically depressed, and they often feel better, feel less of the world’s agonies. When humans evolved the ability to be wrenched with feeling the pain of others, where was the brain going to process it? It got crammed into the anterior cingulate. And thus it “does” both physical and psychic pain.

Another truly interesting domain in which the brain confuses the literal and metaphorical is cleanliness. In a remarkable study, Chen-Bo Zhong of the University of Toronto and Katie Liljenquist of Northwestern University demonstrated how the brain has trouble distinguishing between being a dirty scoundrel and being in need of a bath. Volunteers were asked to recall either a moral or immoral act in their past. Afterward, as a token of appreciation, Zhong and Liljenquist offered the volunteers a choice between the gift of a pencil or of a package of antiseptic wipes. And the folks who had just wallowed in their ethical failures were more likely to go for the wipes. In the next study, volunteers were told to recall an immoral act of theirs. Afterward, subjects either did or did not have the opportunity to clean their hands. Those who were able to wash were less likely to respond to a request for help (that the experimenters had set up) that came shortly afterward. Apparently, Lady Macbeth and Pontius Pilate weren’t the only ones to metaphorically absolve their sins by washing their hands.

This potential to manipulate behavior by exploiting the brain’s literal-metaphorical confusions about hygiene and health is also shown in a study by Mark Landau and Daniel Sullivan of the University of Kansas and Jeff Greenberg of the University of Arizona. Subjects either did or didn’t read an article about the health risks of airborne bacteria. All then read a history article that used imagery of a nation as a living organism with statements like, “Following the Civil War, the United States underwent a growth spurt.” Those who read about scary bacteria before thinking about the U.S. as an organism were then more likely to express negative views about immigration.

Another example of how the brain links the literal and the metaphorical comes from a study by Lawrence Williams of the University of Colorado and John Bargh of Yale. Volunteers would meet one of the experimenters, believing that they would be starting the experiment shortly. In reality, the experiment began when the experimenter, seemingly struggling with an armful of folders, asked the volunteer to briefly hold their coffee. As the key experimental manipulation, the coffee was either hot or iced. Subjects then read a description of some individual, and those who had held the warmer cup tended to rate the individual as having a warmer personality, with no change in ratings of other attributes.

Another brilliant study by Bargh and colleagues concerned haptic sensations (I had to look the word up — haptic: related to the sense of touch). Volunteers were asked to evaluate the resumes of supposed job applicants where, as the critical variable, the resume was attached to a clipboard of one of two different weights. Subjects who evaluated the candidate while holding the heavier clipboard tended to judge candidates to be more serious, with the weight of the clipboard having no effect on how congenial the applicant was judged. After all, we say things like “weighty matter” or “gravity of a situation.”

What are we to make of the brain processing literal and metaphorical versions of a concept in the same brain region? Or that our neural circuitry doesn’t cleanly differentiate between the real and the symbolic? What are the consequences of the fact that evolution is a tinkerer and not an inventor, and has duct-taped metaphors and symbols to whichever pre-existing brain areas provided the closest fit?

Jonathan Haidt, of the University of Virginia, has shown how viscera and emotion often drive our decisionmaking, with conscious cognition mopping up afterward, trying to come up with rationalizations for that gut decision. The viscera that can influence moral decisionmaking and the brain’s confusion about the literalness of symbols can have enormous consequences. Part of the emotional contagion of the genocide of Tutsis in Rwanda arose from the fact that when militant Hutu propagandists called for the eradication of the Tutsi, they iconically referred to them as “cockroaches.” Get someone to the point where his insula activates at the mention of an entire people, and he’s primed to join the bloodletting.

But if the brain confusing reality and literalness with metaphor and symbol can have adverse consequences, the opposite can occur as well. At one juncture just before the birth of a free South Africa, Nelson Mandela entered secret negotiations with an Afrikaner general with death squad blood all over his hands, a man critical to the peace process because he led a large, well-armed Afrikaner resistance group. They met in Mandela’s house, the general anticipating tense negotiations across a conference table. Instead, Mandela led him to the warm, homey living room, sat beside him on a comfy couch, and spoke to him in Afrikaans. And the resistance melted away.

This neural confusion about the literal versus the metaphorical gives symbols enormous power, including the power to make peace. The political scientist and game theorist Robert Axelrod of the University of Michigan has emphasized this point in thinking about conflict resolution. For example, in a world of sheer rationality where the brain didn’t confuse reality with symbols, bringing peace to Israel and Palestine would revolve around things like water rights, placement of borders, and the extent of militarization allowed to Palestinian police. Instead, argues Axelrod, “mutual symbolic concessions” of no material benefit will ultimately make all the difference. He quotes a Hamas leader who says that for the process of peace to go forward, Israel must apologize for the forced Palestinian exile of 1948. And he quotes a senior Israeli official saying that for progress to be made, Palestinians need to first acknowledge Israel’s right to exist and to get their anti-Semitic garbage out of their textbooks.

Hope for true peace in the Middle East didn’t come with the news of a trade agreement being signed. It was when President Hosni Mubarak of Egypt and King Hussein of Jordan attended the funeral of the murdered Israeli prime minister Yitzhak Rabin. That same hope came to the Northern Irish, not when ex-Unionist demagogues and ex-I.R.A. gunmen served in a government together, but when those officials publicly commiserated about each other’s family misfortunes, or exchanged anniversary gifts. And famously, for South Africans, it came not with successful negotiations about land reapportionment, but when black South Africa embraced rugby and Afrikaner rugby jocks sang the A.N.C. national anthem.

Nelson Mandela was wrong when he advised, “Don’t talk to their minds; talk to their hearts.” He meant talk to their insulas and cingulate cortices and all those other confused brain regions, because that confusion could help make for a better world.

Robert Sapolsky is John A. and Cynthia Fry Gunn Professor of Biology, Neurology and Neurosurgery at Stanford University, and is a research associate at the Institute of Primate Research, National Museums of Kenya. He writes frequently on issues related to biology and behavior. His books include “Why Zebras Don’t Get Ulcers,” “A Primate’s Memoir,” and “Monkeyluv.”

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/11/14/this-is-your-brain-on-metaphors

Ain’t That a Shame?

Last year, seniors at Dartmouth and Cornell found themselves getting strong-armed to contribute to their parting “class gifts.” The student fund raisers were given lists of those who had donated—and of those who had yet to. As the Chronicle of Higher Education recently reported, the eager young rainmakers then set about using the obligatory social-networking tools to shame their peers into ponying up.

It soon got ugly. At Dartmouth, when one holdout remained, a columnist for the student newspaper denounced her—though not by name. That happened the next day on a student blog, where a writer called her a “parasite,” posted her name and picture, and sneered, “You’re not even worth the one measly dollar that you wouldn’t give.”

Tommaso Masaccio’s ‘The Expulsion of Adam and Eve From Eden.’

Welcome to the new world of shaming, in which the ancient fear of public humiliation and ostracism (once a homely, low-tech business of the stocks and pillories) has become a high-tech tool to motivate and incentivize. Where once such tactics were used to enforce a traditional moral code, today they have found new life in nudging us to conform to “prosocial” behavior, be it philanthropy or strict adherence to the expansive pieties of our reigning civic religion, environmentalism.

America has a long (and many have argued, shameful) history of shaming. Things got going in earnest in 1639 when Plymouth colonist Mary Mendame was found to have committed “the act of uncleannesse” with an Indian named Tinsin. The judge sentenced her “to be whipt at a carte tayle through the townes streete, and to weare a badge upon her left sleeve.” And if the proto-Prynne failed to wear her scarlet letter, she was “to be burned in the face with a hott iron.”

Today, the tradition lives on. Several northeastern states “name and shame”—usually by means of online lists—citizens behind on their taxes. For some officials, though, even that approach is too discreet. Last Sunday, the city of Holyoke, Mass., publicized the names of delinquent taxpayers in the local newspaper. The goal, explained City Treasurer Jon D. Lumbra, was to shame those people into paying what they owed. (Mr. Lumbra did not say whether the city will seek to have the scofflaws wear a scarlet “T.”)

Such efforts are ham-fisted. The advocates of bringing social pressure to bear on modern ne’er-do-wells generally push for more subtle forms of public pressure. Earlier this year, the District of Columbia instituted a tax on disposable shopping bags. Supermarkets in Washington now charge five cents for every plastic bag, and you have to specifically request one. But the expected revenues have not materialized because the number of bags used by shoppers has plummeted. Environmental advocates are delighted, but note that the tax alone can’t account for the dramatic drop in bag use. It’s unlikely that the average Washingtonian is deterred by the small tax incurred on a dozen bags (all of 60 cents) when paying a $300 grocery bill. They argue that shaming gets the credit. Shoppers are hesitant to expose their lack of eco-virtue to the withering stares of the good citizens behind them in the checkout line. As Councilman Tommy Wells, the District’s prime bag-tax cheerleader, recently told the Journal, “It’s more important to get in their heads than in their pocketbooks.”

When it comes to the criminal code, shaming can be a lot cheaper than incarceration. The last decade saw something of a fad for shame-based punishments, among them fitting “humility tags” to the cars of convicted drunk drivers, and making petty thieves wear signs proclaiming their offenses. The “communitarian” thinkers behind shaming argue that the practice not only encourages public decency but allows society to make unambiguous moral statements about what behaviors are beyond the pale. In practice, the behaviors that warrant such a response seem to be such things as insufficient effort to reduce one’s carbon footprint.

The Internet has done much to promote our peculiarly modern sort of shaming. Annoy the wrong person, behave in a way some blogger disdains, and you will soon find yourself locked in the digital pillory, exposed to snark and ridicule. These are supposed to be salutary incentives to civil public behavior, but I haven’t seen much evidence that a Web-armed society is a polite one.

The most odious aspect of these online humiliations is that they don’t go away. As law professor Daniel J. Solove notes in his book “The Future of Reputation,” the Internet saddles us with permanent digital baggage: “Internet shaming creates an indelible blemish on a person’s identity. Being shamed in cyberspace is akin to being marked for life.”

The old colonists eventually thought better of the practice: even as hanging remained a common punishment, shaming was deemed inhumane and abandoned.

As efforts to prod us with the threat of shame grow, it’s worth keeping in mind that the tactic only works if we go along. The Dartmouth student who chose not to donate to the school didn’t back down, publicly stating “I resent the pressure that was applied to me.” Soon it was the college that was backtracking: A Dartmouth fund-raising official said they “deeply regret” the violation of the student’s privacy and that changes would be made to the way class gifts are solicited. Cornell is also re-emphasizing respect for privacy in asking for donations.

The British philosopher-politician Edmund Burke urged “adherence to principle,” and warned against succumbing to threats of humiliation: “It is a power of resisting false shame and frivolous fear, that assert our good faith and honor, and assure to us the confidence of mankind.”

Then again, he never had to ask for a disposable plastic grocery bag.

Eric Felten, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703805704575594502531418866.html

I’m Very, Very, Very Sorry … Really?

We Apologize More to Strangers Than Family, and Why Women Ask for Forgiveness More Than Men

I’d like to tell the man whose cab I stole in the rain last week that I’m very sorry. But to my mom, whose driving I criticized recently? Not so much.

I’m in good company on this. According to new research from Canadian psychologists, people apologize about four times a week. But, on average, they offer up these apologies much more often to strangers (22% of the time) than to romantic partners (11%) or family members (7%). The only folks we apologize to more? Friends (46%).

Why is it so hard to say “I’m sorry” to someone we love? Ask Phil Peachey. He knew he was in trouble when he woke up one morning to find his wife banging utensils around the kitchen. What was wrong? “Nothing,” she said. He asked her again. She gave him the cold shoulder.

Then he came up with the answer: Pinot Grigio—a lot of it—which he’d drunk the night before. Had he really told her he didn’t trust her sense of direction and called her “stupid”?

Uh-oh. Mr. Peachey, a 47-year-old real-estate broker in Orlando, Fla., quickly offered his best apology: “Is there anyone who would like a new pair of shoes?”

“Nothing says ‘I’m sorry’ like Christian Dior,” he says.

Odds are your mother taught you that it’s important to apologize if you’ve done something wrong—and to graciously accept an apology when one is offered. The act of making amends is crucial to maintaining harmony in both our personal relationships and the world at large.

Apologies are so important that many hospitals train their staffs to say they are sorry to patients and their families following a medical mistake because they’ve found it deters malpractice lawsuits. Economists have shown that companies offering a mea culpa to disgruntled customers fare better than ones offering financial compensation.

But apologies can be complicated. They’re not always forthcoming, or even sincere. Making matters worse, there’s a gender “apology gap”: Men and women have different approaches and different expectations when it comes to acts of contrition.

Conventional wisdom says women apologize too much, and men don’t apologize often enough. Women are good at nurturing relationships, the thinking goes, while men are too egotistical to say they’re sorry or have a different take on social graces. Yet there’s no proof that women are better than men at apologizing—they just do it more often, sometimes for inconsequential offenses.

Two small studies at the University of Waterloo in Ontario, published last month by the journal Psychological Science, indicate men are just as willing as women to apologize if they think they’ve done something wrong. Men just have a different idea of what defines “something wrong.”

In the first study, 66 men and women kept daily diaries and recorded each time they committed an offense—or were on the receiving end of one. They also noted whether an apology was issued. The outcome: Women were offended more often, and they offered more apologies for their own behavior. Yet men were just as likely as women to apologize if they believed they’d done something wrong.

Umpire Jim Joyce apologized for blowing the call that cost Detroit Tigers pitcher Armando Galarraga a perfect game in June. Mr. Galarraga accepted the apology.

In the second study, 120 subjects imagined committing offenses, from being rude to a friend to inconveniencing someone they live with. The men said they would apologize less frequently. The researchers concluded the men had a higher threshold for what they found offensive. “We don’t think that women are too sensitive or that men are insensitive,” says Karina Schumann, one of the study’s authors. “We just know that women are more sensitive.”

Sandra Elmoznino, 27, a New York City teacher, says she apologizes all the time, whether for calling a friend too early in the morning or showing up two minutes late. “I want to be in everyone’s good graces,” she says. “It’s an anxiety thing.”

__________

Saying ‘I’m Sorry’

A ‘comprehensive’ apology is more likely to win forgiveness, researchers say. There are eight elements:

  • Remorse
  • Acceptance of responsibility
  • Admission of wrongdoing
  • Acknowledgment of harm
  • Promise to behave better
  • Request for forgiveness
  • Offer of repair
  • Explanation

Source: University of Waterloo

__________

Recently, though, Ms. Elmoznino has begun to feel that the constant apologizing has become a handicap. Her friends tease her about it. Men she has dated find it annoying. Her twin brother told her it makes her look unsure of herself. As a result, she’s now making a conscious effort to apologize only when she’s really done something wrong. “I don’t want to be like the boy who cried wolf,” she says.

Funny, the men I spoke with agreed that women are too sensitive, though most of them were reluctant to talk on the record. I promised anonymity, though, and they piped up:

“Apologize? What language is that?”

“Women care too much.”

“One of the first requirements of getting into relationships with women is to rehearse saying ‘I’m sorry’ as many times as possible.”

“If a husband speaks in the forest and no one hears him, is he still wrong?”

I pressed on, and asked men to explain exactly why they apologize—when they do:

“To move on.”

“To end the drama.” (Hmm. This from a man who’s apologized recently to me.)

“To be honest, men never—well, almost never—have any idea what we are apologizing for,” says Mark Stevens, 63, chief executive of MSCO, a Rye Brook, N.Y., marketing consulting firm.

Mr. Stevens says during his 35-year marriage he has sincerely apologized to his wife, Carol, just five times—but has said he’s sorry an additional 3,500 times. He calls these mea culpas “fraudulent apologies.” They go something like this: “I don’t know why you’re unhappy, but I’m sorry.”

“Ninety percent of apologies are to keep the peace,” he adds. “How can you have a sincere apology if you don’t know what you’ve done?”

He still remembers when, years ago, he and his wife agreed to buy a vacation home in Vermont and consider it their anniversary gift to each other. On the night of the anniversary, though, he found his wife slamming silverware into a drawer. (Sound familiar?) His transgression: He hadn’t bought her a gift.

“Despite the agreement we both made, I apologized because I realized she was hurting and I had overlooked something,” Mr. Stevens says. (“He has no clue,” says Ms. Stevens, 57. “Sometimes I’ll just let it go.”)

Mr. Peachey, of the Pinot Grigio episode, says he also was trying to do his best. To show remorse, he took his wife to the mall and bought her the shoes—and an iPad. “That was a $1,000 insult,” he calculated.

Yet his wife, Rochelle, the 46-year-old director of an Internet dating site, says all she really wanted was the apology: “I told him, had he just put his arms around me and said he was so sorry he screwed up and that he loved me, that would have been enough.”

Need help with your own apologies? Here are some tips:

Know what you did wrong. If you’re not sure, ask.

Show real remorse. Don’t say: “I’m sorry you are hurt,” which suggests the person is too sensitive. Say: “I am sorry I hurt you.”

Don’t be defensive. Don’t use the word “but,” as in, “I am sorry, but…”

Offer to make changes. It helps to say, sincerely, that you will try not to make the same offense again.

Don’t throw in the kitchen sink. If you’re the one who wants the apology, stick to the matter at hand. Don’t bring up past slights.

Try humor. A little self-deprecation can go a long way.

Don’t delay. Just do it. An imperfect apology is better than none at all.

Elizabeth Bernstein, Wall Street Journal

__________

Full article and photos: http://online.wsj.com/article/SB10001424052702304410504575560093884004442.html

A Way Out of Depression

Coaxing a Loved One in Denial Into Treatment Without Ruining Your Relationship

For people suffering from depression, the advice is usually the same: Seek help.

That simple-sounding directive, however, is often difficult for those with depression to follow because one common symptom of the disease is denial or lack of awareness. This can be frustrating for well-meaning family and friends—and is one of the key ways that treating mental illness is different from treating other illnesses.

Research shows that almost 15 million American adults in any given year have a major depressive disorder. And six million Americans have another mental illness, such as schizophrenia, bipolar disorder or other psychotic disorders. Yet a full 50% of people with bipolar disorder and schizophrenia don’t believe they are ill and resist seeking help. People with clinical depression resist treatment at similar rates, experts say.

You may have seen that seemingly ubiquitous TV commercial for the anti-depressant Cymbalta that repeatedly stresses that “depression hurts”—not just the person who is sick but the people who love that person as well. (Even the dog looks sad.) It’s an ad, sure, but the sentiment is correct: People who live with a depressed person often become depressed themselves. And depression can have a terrible effect on relationships. It is a mental illness that goes beyond a depressed mood or situational sadness, states in which a person is still able to enjoy life. Depression drains people of their interest in social connections. And it erases personality traits, taking away many of the very characteristics that made people love them in the first place.

“Depression makes a person see the world through gray-colored glasses,” says Xavier Amador, a clinical psychologist and author of “I Am Not Sick. I Don’t Need Help!” which was republished earlier this year in a 10th edition.

The challenge for a person with a depressed spouse, relative or close friend who refuses to get treatment is how to change that defiant person’s mind. Reality show-style interventions and tough love are rarely successful, experts say. But there are techniques that can help. The key is to avoid a debate over whether your loved one is sick and instead look for common ground.

Patricia Gallagher knows how hard this can be. Her husband, John, came home from his job as a senior financial analyst for a pharmaceutical company one day and said his boss had given him three to six months to find a new job. He was crying.

Over the next year, Ms. Gallagher, who is 59 years old and lives in Chalfont, Pa., noticed her husband became irritable, sad, and short-tempered, withdrawing from her and the kids. He lost 55 pounds, stopped sleeping and would call her numerous times each day saying he couldn’t “take it” anymore. He visited doctors dozens of times that year, getting examined for everything from a stomach ulcer to a brain tumor. Many doctors suggested he see a psychiatrist, but he didn’t.

Ms. Gallagher tried everything she could think of to help. She urged her husband to relax or take a vacation. She begged him to see a psychologist, eventually scheduling the appointments herself, and even going alone when he refused to go, to ask advice. Eventually, he was hospitalized after becoming catatonic with anxiety, then attempted suicide by jumping out the hospital room window.

A decade later, the Gallaghers are separated. “I kept thinking, ‘You’ve wrecked everything because you didn’t go to therapy,'” she says. Mr. Gallagher, 59, a sales associate for a clothing company, says: “I didn’t understand [depression] was a chemical thing. I thought it was a physical thing.”

People who are mentally ill yet refuse or are unable to admit it or seek help may feel shame. They may feel vulnerable. Or their judgment may be impaired, keeping them from seeing that they’re depressed.

“When a loved one tells them they are depressed and should see someone, they feel they are being criticized for being a complete failure,” says Dr. Amador, director of the LEAP Institute in Taconic, NY, which trains mental-health professionals and family members how to circumvent a mentally ill person’s denial of their disease.

__________

Getting Around Denial

Experts say there are ways to circumvent a loved one’s refusal to seek help:

  • BE GENTLE. Your loved one likely feels very vulnerable. “This is akin to talking to someone about his weight,” says Ken Duckworth, a psychiatrist and medical director of the National Alliance on Mental Illness, an education, support and advocacy group. Simply saying “I love you” will help.
  • SHARE YOUR OWN VULNERABILITY. If you’ve accepted help for anything—a problem at work, an illness, an emotional problem—tell your loved one about it. This will help reduce their shame, which is a contributing factor to denial.
  • STOP TRYING TO REASON. Don’t get into a debate about who is right and who is wrong. Ask questions instead. Learn what your loved one believes.
  • FOCUS ON THE PROBLEMS YOUR LOVED ONE CAN SEE. Suggest they get help for those. For example, if they acknowledge sleep loss or problems concentrating, ask if they will seek help for those issues. “Don’t hammer them with everything else,” says Dr. Duckworth. “Nobody wants to be pathologized.”
  • SUGGEST YOUR LOVED ONE SEE A GENERAL PRACTITIONER. It is often far easier to persuade them to do this than to see a psychiatrist or psychologist. And this physician can diagnose depression, prescribe medicine or refer to a mental-health professional.
  • WORK AS A TEAM. Ask if you can attend an appointment with the doctor or mental-health professional, just once, so you can share your observations and get advice on how best to help.
  • ASK FOR HELP FOR YOURSELF. See a therapist to discuss how you are doing and to get help problem solving. Or contact organizations such as the National Alliance on Mental Illness to find information on caretaking or support groups.
  • ENLIST OTHERS. Who else loves this person and can see the changes in their behavior? Perhaps a sibling, parent, adult child or religious leader can help you break through.
  • LEVERAGE YOUR LOVE. Ask the person to get help for your sake. “If your loved one will not get help, you will not win on the strength of your argument,” says Xavier Amador, a clinical psychologist and director of the LEAP Institute. “You will win on the strength of your relationship.”

__________

In addition to the psychological reasons that lead a person to deny his own mental illness, there may be a physiological one, as well. Anosognosia, an impairment of the frontal lobe of the brain, which governs self-awareness, leaves a person unable to understand that he is sick.

Dr. Amador, who pioneered research into this syndrome 20 years ago, says it appears in about 50% of people with schizophrenia and bipolar disorder. Experts believe that similar damage sometimes occurs in people with clinical depression, although they are just beginning to research this.

At the LEAP Institute, they teach mental-health professionals and family members how to build enough trust with the mentally ill person that he will follow advice even if he won’t admit to being sick. LEAP is an acronym for Listen reflectively, Empathize strategically, Agree on common ground and Partner on shared goals.

“It’s the difference between boxing and judo,” says Dr. Amador. “In boxing you throw a punch and the person blocks you. In judo, a person throws the punch and you take that punch and use their own resistance to move them where you want them.”

Sometimes loved ones are able to help. Renee Rosolino, 44, a residential appraiser in Fraser, Mich., says she is sorry she waited so long to listen to her family.

They expressed concerns about her behavior 14 years ago, when she first started showing signs of bipolar disorder. At the time, she felt judged when her husband, parents and sisters told her that her personality had changed completely in six months. She stopped eating and sleeping, cried a lot and yelled at family members, and began pulling away from everything from social activities to church.

Repeatedly, her husband tried to talk to her about her behavior, but she insisted she was fine. He even enlisted Ms. Rosolino’s sister to help. After dinner one night, they told her they were worried that she was depressed because she was sad, stressed and always on edge. Ms. Rosolino got mad and “shut down the conversation,” she says.

In addition to being angry, Ms. Rosolino says she was terrified. When she was a child, Ms. Rosolino’s father, an assistant vice president at a bank, had a mental breakdown and was taken to a psychiatric hospital in the middle of the night. “I never understood what happened to my father,” she says. “And I had it in my head that if I went to talk to someone this would happen to me, to my children. I didn’t want my kids to have those same feelings.”

Ms. Rosolino’s husband eventually broke through to her by asking her to speak to their pastor, pleading with her to do it for him and their children. “He said, ‘It’s OK. I am not going to leave you. I need you. Our kids need you,'” she says.

During her talk with the pastor, she broke down and told him about the pressures she felt as a mom—one of her children is autistic—and her irritation at feeling judged by her family. He told her that family members were worried about her and asked her to see a psychiatrist, just once, to set their minds at ease.

She agreed and started seeing the psychiatrist once a week and taking anti-depressants. Still, she has been hospitalized several times, usually, she says, when she stops taking her medication. But she has been stable for several years and says she has the people in her life to thank.

“Out of love and respect for the pastor and my family, I said I’d make the phone call,” she says. “They made me feel safe.”

 Elizabeth Bernstein, Wall Street Journal

__________

Full article: http://online.wsj.com/article/SB10001424052748703946504575470040863778372.html

The Many Powers of Maybe

Refusing to Commit Has Never Been Easier, and It Says A Lot About Us

If I asked you to have dinner with me Friday night, would you say “yes”? (Great!) “No”? (Bummer.)

Or would you break my heart and say “maybe”?

It seems it wasn’t long ago that invitations required definitive answers. We would receive a phone call or a piece of mail requesting our attendance at an event, and we were expected to call or write back—with an affirmative or negative response.

But then electronic invites came along and made it way too easy for us to wriggle out of social engagements. All we had to do was click one little button: “Maybe.” Once we saw how easy that was—no stressful decision or long explanation necessary—we started typing it into emails and texts.

Catch a movie tonight? Maybe.

Brunch this weekend? Maybe.

Join us for Thanksgiving? Maybe.

See how easy it is? No commitment. No consequences.

Or so we’d like to think. Because we’re not speaking to someone directly and so don’t have to hear that person’s disappointment (or listen to her nag), we can fool ourselves into thinking there are no hard feelings. And now that we have unlimited access to each other through our smartphones, we feel we have the luxury of waiting until the last minute to make a decision because we can always call, email or text to say we’ve made up our mind—we’re going to show up after all.

Here’s the problem with “maybe”: It means different things to different people. And something always gets lost in translation.

“I thought ‘maybe’ meant ‘maybe,’ ” says Mamta Desai, a 26-year-old private-equity investment associate in Los Angeles. She learned otherwise when she and a friend threw a party last summer. They sent out a Facebook invite to 120 people. Fifty said they would attend and did. Twenty replied “maybe”—and just two of those people showed up.

Of course, some people who say “maybe” genuinely need to check their calendars. And many see it as a nice, gentle way to say “no.” (Doesn’t everyone know by now that on a Facebook invite, “yes” means “maybe” and “maybe” means “no”? I decided to ask Facebook. “Sometimes the best bet is the hedge bet until you know who’s said yes,” a spokeswoman explained.)

But for many, “maybe” is more complicated. “It seems to be about ambivalence, but it is really about power and boundaries,” says Prudence Gourguechon, a psychiatrist in Chicago. “Person A who says, ‘Yeah, maybe,’ essentially puts recipient B on hold. B is powerless.”

Some “maybe” people are trying to stall, buy time, work up their nerve to decline the offer or see if a better one comes along. Others suspect that on the date in question they just might prefer to curl up in bed with a good book. Parents use “maybe” to soften a negative response to a child. Ditto bosses and their underlings. And don’t get me started on the commitment-phobes and control freaks.

__________

Getting Past ‘Maybe’

How do you deal with wafflers? Here are some suggestions:

  • Tell them how much you hate the word ‘maybe.’ I have been doing this, and twice recently it elicited commitments.
  • Send a proper, written invitation with an RSVP request.
  • Explain that the invitation isn’t a trick question. A simple ‘yes’ or ‘no’ will do. Then accept the answer graciously.
  • Give a deadline. Anne Eddy, a health-care administrator from Needham, Mass., tells invited guests that if she doesn’t hear from them by a certain date, she’ll assume they can’t make it.
  • Let them know you have someone else lined up to take their place. ‘Sometimes this accelerates their decision,’ says Bill Kalmar, 67, a retired director of a state quality council from Lake Orion, Mich.
  • Throw it back at them. Dottie Woods, a 59-year-old Blacksburg, Va., bookkeeper, was fed up with all the ‘maybes’ who were no-shows to her Virginia Tech tailgate parties. Now, when she gets a maybe, she replies: ‘Maybe another time would be better for you?’ ‘It always works,’ she says.

__________

“A ‘maybe’ protects us from being a promise-breaker,” says Gerald Goodman, professor emeritus of clinical psychology at the University of California, Los Angeles. He says that “maybes” sometimes are necessary to protect relationships. “Tender emotions turn broken promises into betrayal,” he says.

Alicia Gutierrez offers up maybes all the time—to requests to attend happy hours, concerts and dinner with friends. The invitations often sound great, but when the time comes, “I want to sit on my couch and be brain-dead and watch bad TV,” says the 38-year-old commercial-account manager for a large technology company.

Ms. Gutierrez, who lives in Miami, considers “maybe” to mean “no,” but recently found out that not everyone else does. When asked by an acquaintance to attend a charity event, she replied: “Sounds great, maybe.” Then she forgot about the invite.

On the night of the party, Ms. Gutierrez—at home on her couch in her pajamas—received a text from her pal, asking her where she was. She responded, “Sorry, I’m on a date.” The woman never spoke to her again. “A ‘maybe’ can be whatever you want it to be,” she says. “It has nothing to do with the person saying it—it’s really about the person who is interpreting it.”

Ah, there’s the rub. Just as “maybe” has various meanings to the people who say it, it also has different meanings to the people who hear it. ” ‘Maybe’ can be blurry to the listener,” says UCLA’s Dr. Goodman. “People who feel intolerant of ambiguity probably hate to hear ‘maybe’—it can give them an insecure feeling.”

Tell me about it. Some of my favorite people are chronic hedgers. I gave up decades ago on getting a firm response from my mom to any request. It took me two years to figure out that when my best friend says “maybe,” she unfailingly means “no.” And recently, one of my oldest friends declared me to be “high maintenance” for insisting on a firm ‘yes’ or ‘no’ to the following question: “Are we on for dinner tonight?”

As much as I am accustomed to this waffling, it still sometimes unsettles me. I want my loved ones to jump with joy at my invites, of course. And if they don’t give me a definitive answer, I’m not really free to make other plans. But I also can’t help feeling a little rejected.

I’m not alone in finding this fence-straddling annoying. ” ‘Maybe’ is a weasel word,” says Kerry Fitzpatrick, 70, a retired chief executive of a horse-racing business who lives in St. Petersburg, Fla.

“It says to me, ‘You are not that important; other people or things might come along that are really more important,'” says Lori West, 39, a nurse from Virginia Beach, Va.

“It makes me feel like my feelings have been discounted,” says Amanda Collins, 39, of Phoenix. One of her best friends answers every invite with, “I’m not sure. Maybe.” The owner of a marketing and communications firm, she has come up with a strategy: Every time her friend hedges, she calls him “Maybe Man” and demands a firm answer.

Perhaps I’ll try this on my own Maybe Man, my three-year-old nephew, Noah. On a recent visit, I asked him if he wanted to go swimming after lunch.

His answer? You guessed it.

Elizabeth Bernstein, Wall Street Journal

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748704141104575588460082408950.html

‘Stranger Danger’ and the Decline of Halloween

No child has ever been killed by a stranger’s poisoned Halloween candy. Ever.

Halloween is the day when America market-tests parental paranoia. If a new fear flies on Halloween, it’s probably going to catch on the rest of the year, too.

Take “stranger danger,” the classic Halloween horror. Even when I was a kid, back in the “Bewitched” and “Brady Bunch” costume era, parents were already worried about neighbors poisoning candy. Sure, the folks down the street might smile and wave the rest of the year, but apparently they were just biding their time before stuffing us silly with strychnine-laced Smarties.

That was a wacky idea, but we bought it. We still buy it, even though Joel Best, a sociologist at the University of Delaware, has researched the topic and spends every October telling the press that there has never been a single case of any child being killed by a stranger’s Halloween candy. (Oh, yes, he concedes, there was once a Texas boy poisoned by a Pixie Stix. But his dad did it for the insurance money. He was executed.)

Anyway, you’d think that word would get out: poisoned candy not happening. But instead, most Halloween articles to this day tell parents to feed children a big meal before they go trick-or-treating, so they won’t be tempted to eat any candy before bringing it home for inspection. As if being full has ever stopped any kid from eating free candy!

So stranger danger is still going strong, and it’s even spread beyond Halloween to the rest of the year. Now parents consider their neighbors potential killers all year round. That’s why they don’t let their kids play on the lawn, or wait alone for the school bus: “You never know!” The psycho-next-door fear went viral.

Then along came new fears. Parents are warned annually not to let their children wear costumes that are too tight—those could seriously restrict breathing! But not too loose either—kids could trip! Fall! Die!

Treating parents like idiots who couldn’t possibly notice that their kid is turning blue or falling on his face might seem like a losing proposition, but it caught on too.

Halloween taught marketers that parents are willing to be warned about anything, no matter how preposterous, and then they’re willing to be sold whatever solutions the market can come up with. Face paint so no mask will obscure a child’s vision. Purell, so no child touches a germ. And the biggest boondoggle of all: an adult-supervised party, so no child encounters anything exciting, er, “dangerous.”

Think of how Halloween used to be the one day of the year when gaggles of kids took to the streets by themselves—at night even. Big fun! Low cost! But once the party moved inside, to keep kids safe from the nonexistent poisoners, in came all the nonsense. The battery-operated caskets. The hired witch. The Costco veggie trays and plastic everything else. Halloween went from hobo holiday to $6 billion extravaganza.

And it blazed the way for adult-supervised everything else. Let kids make their own fun? Not anymore! Let’s sign our toddlers up for “movement” classes! Let’s bring on the extracurricular activities, travel soccer and manicure parties for the older kids. Once Halloween got outsourced to adults, no kids-only activity was safe. Goodbye sandlot, hello batting coach!

And now comes the latest Halloween terror: Across the country, cities and states are passing waves of laws preventing registered sex offenders from leaving their homes—or sometimes even turning on their lights—on Halloween.

The reason? Same old same old: safety. As a panel of “experts” on the “Today” show warned viewers recently: Don’t let your children trick-or-treat without you “any earlier than [age] 13, because people put on masks, they put on disguises, and there are still people who do bad things.”

Perhaps there are. But Elizabeth Letourneau, an associate professor at the Medical University of South Carolina, studied crime statistics from 30 states and found, “There is zero evidence to support the idea that Halloween is a dangerous date for children in terms of child molestation.”

In fact, she says, “We almost called this paper, ‘Halloween: The Safest Day of the Year,’ because it was just so incredibly rare to see anything happen on that day.”

Why is it so safe? Because despite our mounting fears and apoplectic media, it is still the day that many of us, of all ages, go outside. We knock on doors. We meet each other. And all that giving and taking and trick-or-treating is building the very thing that keeps us safe: community.

We can kill off Halloween, or we can accept that it isn’t dangerous and give it back to the kids. Then maybe we can start giving them back the rest of their childhoods, too.

Ms. Skenazy is the author of “Free-Range Kids” (Jossey-Bass, 2010).

___________

Full article and photo: http://online.wsj.com/article/SB10001424052702304915104575572642896563902.html

Testosterone Put to the Test

Men today—wimpy or exploited or both?

Do today’s men need to man up? Yes, absolutely, Peter McAllister says in “Manthropology,” viewing contemporary males as faint shadows of their shaggy forebears.

Modern man, Mr. McAllister declares, is “the worst man in history,” though not every reader will be convinced by the evidence presented. Certainly the guys of 2010 are not as physically tough as the men of other times and other places. Mr. McAllister, who is especially entertaining when he writes about male-centric mayhem, scoffs at what passes for grit these days. He dismisses, for instance, modern-day “blood pinning,” in which military insignia are jabbed into soldiers’ chests, as minor-league at best. Sambian boys in New Guinea have traditionally been initiated into manhood with cane splints jammed up their nostrils and vines shoved down their throats. He also roughs up modern soldiers, noting that Army recruits are asked to run only 12 miles in four hours; in China, Wu Dynasty soldiers in the sixth century B.C. were reputed to go on 80-mile runs without a break.

You might think, given all the moaning lately, that helmet-to-helmet hits in football are a sign of a violent sports culture. Don’t tell Mr. McAllister. Even the no-holds-barred brawling of Ultimate Fighting Championship, he says, is “a ridiculously safe form of combat” when compared with Olympic boxing back in the good old days—say, the fifth century B.C. That’s when a boxer named Cleomedes killed his opponent Iccus “by driving his hand into his stomach and disemboweling him.”

For Mr. McAllister one measure of manhood is the willingness to face an enemy and mete out punishment without flinching. Today our conduct in war is governed by a handbook of careful rules. Mr. McAllister, for contrast, points to the 17th-century Native American practice of not only scalping victims alive but also “heaping hot coals onto their scalped heads.” Which is nothing compared with the attentions lavished by the Romans on a Christian named Apphianus, who was racked for 24 hours and scourged so hard that “his ribs and spine showed.”

Even today’s bloodthirsty maniacs are pikers by comparison with the rampagers of yore. In the 13th century, Genghis Khan’s son Tolui killed nearly every inhabitant of Merv in Turkmenistan, then the world’s largest city. All told, Mr. McAllister writes, the Mongols killed as many as 60 million people during nearly a century of slaughter. “Al Qaeda and its affiliates,” he adds with something of a sneer, “succeeded in killing 14,602 people worldwide in 2005.” True enough, although by some readings Mr. McAllister is describing a positive development.

Male readers who slink away from “Manthropology” feeling that Mr. McAllister has driven a Cleomedesian fist into their guts may find some solace in Roy F. Baumeister’s “Is There Anything Good About Men?” Mr. Baumeister is less concerned about the wimpification of modern man than about the degree to which men have been historically “exploited.” The very cultures that men have built, he says, have considered males more “expendable” than women.

The expendability is reflected in wartime casualty rates, of course, but men also die more often in work-related accidents and die earlier, on average. Their energies are the motor for some bad things but also for a great deal of good, including the economic bustle and technological advance that we associate with progress. But men, Mr. Baumeister says, are often taken for granted and denigrated as the bane of female existence, with some gender activists insisting that women would be better off without them. In a feisty rejoinder, Mr. Baumeister says that “if women really would have been happier without men,” they would have “set up shop” on their own long ago. “The historical record is overwhelming,” he adds. “Women stick around men.”

In a passage that may strike a chord in some male readers, Mr. McAllister says that men are disadvantaged when it comes to sex. Women don’t pay for sex because “they don’t have to. Women can get sex for nothing.” When women offer themselves to male celebrities, he notes, men jump at the opportunity. When men do the same to women celebrities, they can expect a visit from the security detail. But Mr. Baumeister, a psychology professor, writes with a hopeful air, insisting that, while men and women are different, they can create partnerships based on complementary skills. No hard feelings, apparently, about those centuries of exploitation.

Both “Manthropology” and “Is There Anything Good About Men?” leave the reader wondering: Aren’t men better off these days? What’s a decline in pillaging-proficiency and a history of being a tad taken-advantage-of when, on the whole, modern man has it so good?

Mr. McAllister inadvertently answers the question at book’s end by envisioning a male Homo erectus from a million years ago, plucked off the African plain and plunked down at a Nascar event. The visitor, we’re told, gazing at the soft-bellied male race enthusiasts in the stands, would be horrified and bellow (if he could indeed speak): “My sons, my sons, why have you forsaken me?”

But there is another view. If ancient erectus were told that his “sons” had driven to the event at 70 m.p.h. in cars outfitted with satellite radios; that they lived in climate-controlled houses equipped with refrigerators full of parasite-free steaks from Argentina and beer from Holland; that they and their womenfolk took showers and were familiar with shampoo, he might shout: “My sons, you have found the Kingdom of Heaven!” And, comparatively speaking, he’d be right.

Mr. Shiflett posts his journalism and original music at Daveshiflett.com.

__________

Full article and photos: http://online.wsj.com/article/SB10001424052702304410504575561103201543976.html

Goodbye Basil, Hello Pumpkin Seeds

Ten—no, 11!—delicious, beyond-the-obvious pestos to add to your arsenal

Pesto is a gift from summer—a nutty, herby distillation of a sweet-smelling, sunshine-loving herb. But fall doesn’t have to mean giving it up altogether. The classic basil version is just one interpretation of an open-ended technique: The word “pesto” has its roots in the Italian word for “pestle,” and it refers to the technique of using a mortar and pestle (or more often nowadays, a food processor) to make a flavorful paste combining garlic, nuts and oil with vegetables or herbs. In pesto’s birthplace, ingredients like parsley, mint and olives commonly end up in the mix. Fall, especially now when spinach and broccoli are approaching their peak, is the perfect time to experiment—and to try one of these more seasonable pesto recipes from top chefs around the country. Make extra: It’ll keep in an airtight container in the fridge for a few days. Better yet, freeze it in a Ziploc bag and you can stay sauced through the winter.

Arugula + Basil + Almonds Pesto

Blanch two cups arugula and three-quarters cup basil leaves separately. Shock, squeeze dry and puree in a food processor with a garlic clove, a little parsley, slivered almonds, olive oil, salt and a lot of ground pepper. —chef Matthew Accarrino, SPQR in San Francisco

Use it: Tossed with fusilli and ricotta salata

Walnuts + Grapeseed Oil Pesto

In a food processor, blend a half-cup each of olive and grapeseed oils with a half clove of garlic until garlic is finely chopped. On medium speed, incorporate a cup of walnuts. Process on high until mixture is smooth. Season with sherry vinegar, salt and pepper. —chef Marc Vetri, Vetri in Philadelphia

 Use it: Tossed with fresh pappardelle or farro penne

Cherry Tomatoes + Almonds Pesto

Blend 2½ cups cherry tomatoes, a garlic clove, a half-cup slivered almonds, 12 basil leaves, a pinch crushed red pepper and a big pinch of salt to a fine purée. While blending, pour in a half-cup olive oil in a steady stream until pesto emulsifies into a thick purée. Season. —chef Lidia Bastianich, “Lidia’s Italy” (PBS)

Use it: Tossed with hot spaghetti

Pistachios + Breadcrumbs + Mint Pesto

Blanch a quarter-cup raw pistachios in boiling water for two minutes. Remove, cool and process with a quarter-cup breadcrumbs, three tablespoons of olive oil, two tablespoons of chopped mint, a pinch of Aleppo pepper (available in Middle Eastern markets) and a garlic clove, pulsing until well mixed and smooth. Season with salt and pepper to taste. —chef Chris Cosentino, Incanto in San Francisco

 Use it: Tossed with roasted potatoes or Brussels sprouts

Lardo + Rosemary Pesto

In a mortar and pestle, mash a quarter clove of garlic and a pinch of salt until a paste begins to form. Add a teaspoon each of chopped rosemary and black pepper and continue to crush. Add a quarter pound of lardo, and mash ingredients together until pesto is smooth. Season to taste. —chef Cesare Casella, dean of The Italian Culinary Academy in New York

 Use it: Spread on toasted slices of crusty bread

Marjoram + Parsley + Walnuts Pesto

In a mortar and pestle, pound three garlic cloves and a pinch of salt into a mash. Pound six sprigs’ worth of marjoram leaves into the mix. Do the same with parsley leaves until you have a rough paste. Cover the paste with three-quarters cup olive oil. Add a half-cup chopped walnuts. Taste for salt. —chef Russell Moore, Camino in Oakland, Calif.

 Use it: Spooned over sautéed mushrooms or grilled sea bass

Rapini + Parmesan + Porcini Pesto

Blanch one bunch of rapini for about four minutes, shock in a bowl of ice water, squeeze dry and chop finely. Purée the rapini, two garlic cloves, one cup olive oil, and a pinch of salt in a food processor until very smooth. Transfer to bowl. Stir in a half-cup grated Parmesan. Sauté a third of a pound of porcini mushrooms in butter until they are colorless and soft. Cool, purée and fold into the rapini mix. —chef Ethan Stowell, Staple & Fancy Mercantile in Seattle

 Use it: Tossed with a short twisted pasta like gemelli

Parsley Pesto

Purée a half-cup of flat leaf parsley, two garlic cloves, a cup of olive oil, a large pinch of salt and up to eight turns of the pepper mill in a blender until mixture is smooth. Taste and adjust seasoning. —chefs Frank Falcinelli and Frank Castronovo, Frankies Spuntino, Brooklyn, N.Y.

 Use it: Brushed on sliced crusty bread before toasting

Pumpkin Seeds + Basil + Parmesan Pesto

Blend five tablespoons of pumpkin seeds, two cups of basil, a clove of garlic and salt until pureed. Pour into a large mixing bowl. Add two-thirds cup grated Parmesan and a quarter-cup olive oil, stirring until the pesto is smooth and creamy. —chefs Tony Mantuano and Sarah Grueneberg, Spiaggia in Chicago

 Use it: Spooned over cheese ravioli

Pecans + Parsley + Dates Pesto

Pulse a half-cup pecans, a half-cup parsley leaves, a quarter-cup Parmesan, a half-cup pecan oil and a teaspoon of kosher salt in a food processor until combined, but not totally puréed. Transfer to bowl. Fold in four chopped dates and two teaspoons balsamic vinegar. —chef Alon Shaya, Domenica in New Orleans

Use it: Spooned over duck, pork or ricotta spread on grilled bread

Pumpkin Seeds + Spinach Pesto

Blend four cups spinach and one cup parsley in a food processor with just enough olive oil to make a semithick paste (about a half cup). Add two tablespoons toasted pumpkin seeds and blend well. Transfer to bowl. Add one crushed amaretto cookie and three tablespoons grated Parmesan. Add salt and pepper, and adjust oil to desired consistency. —chef Marc Bianchini, Osteria del Mondo, Milwaukee, Wisc.

 Use it: Spooned over scallops or stuffed inside an omelet

Pervaiz Shallwani, Wall Street Journal

__________
Full article and photos: http://online.wsj.com/article/SB10001424052702304772804575558530945273878.html

Johnny has two mommies – and four dads

As complex families proliferate, the law considers: Can a child have more than two parents?

“To an unconventional family.” That’s what Paul, the roguish restaurateur and sperm donor, raises his glass to in this summer’s movie “The Kids Are All Right.” Paul is, he has recently discovered, the biological father of two teenage children, one by each partner in a long-term lesbian couple. Contacted by the kids, he has come into their lives and begun to compete for the affections of various members of the family he unknowingly helped create. Complications — funny, then sad — ensue.

The film’s family is indeed unconventional, but it is not unique. In the age of assisted reproductive technology, the increasing acceptance of same-sex partnerships, and a steady growth in “blended” families, more parents and more children are finding that traditional notions of the nuclear family don’t accurately reflect their lives and relationships.

Still, even in a time of changing attitudes about who can be a parent, the legal and social definition of a family still has certain rules — a family can be run by a single mom or a single dad and, increasingly, by two moms or two dads, but it can’t have three parents, or four. For a long, long time — going back to when the English common law first started codifying such things — the law has set the maximum number of parents a child can have as two. Only two people, in other words, can enjoy the unique set of rights to determine a child’s life — and the unique set of responsibilities for the child’s welfare — that legal parenthood entails. That matches how most people think about parenthood: Two people, after all, are how many it usually takes to make a baby in the first place.

Now a few family-law scholars have begun to argue that there is nothing special about the number two — if three or four or five adults have a parental relationship with a child, the law should recognize them all as parents. Going beyond two, these scholars argue, would better reflect the dynamics of the modern family, and also protect the children in such families. It would ensure that, even in the event of a split or major disagreement between the adults in question, the children would not be deprived of the affection, care, and financial resources of any of the people they have grown up regarding as their mothers and fathers.

“The law needs to adapt to the reality of children’s lives, and if children are being raised by three parents, the law should not arbitrarily select two of them and say these are the legal parents, this other person is a stranger,” says Nancy Polikoff, a family-law professor at American University’s Washington College of Law.

In a few recent cases, courts seem to have agreed with the calls for multiple parents. But critics argue that tinkering with the definition of parenthood in this way threatens to dilute the sense of obligation that being a parent has always carried, and that increasing the number of legal parents only raises the likelihood that family disputes will arise and get messy and find their way into court. Not to mention that having judges routinely declare that Heather has two mommies and three daddies would represent a radical cultural shift, and one that, like gay marriage, many will find threatening.

Ultimately, the legal definition of parenthood is part of a broader philosophical question: What is a family? And what is it for? While some scholars have focused on expanding the number of parents, others argue that the law needs to do more to recognize the social context in which families exist, and the extent to which child care is actually performed by people who aren’t part of the nuclear family at all.

And as supporters of revising the definition of parenthood point out, there’s nothing tidy or biologically preordained about today’s prevailing notion of parentage, one that often has to shoehorn families jumbled and reassembled by divorce, adoption, and reproductive technology into one standard model, in ways that can prove disruptive to the families in question.

“The law determines what makes someone a legal parent, not marriage, not biology. Those things don’t determine who is a parent, the law does,” says Polikoff.

When Sharon Tanenbaum and Matty Person, a married lesbian couple in San Francisco, decided to have a child together, it wasn’t hard to figure out who they wanted the sperm donor to be. Bill Hirsh was one of Sharon’s oldest friends; they had known each other, Sharon says, “since we were born, more or less.” Their fathers had been best friends in college, and Sharon and Bill had grown up spending summers together and calling each other’s parents aunts and uncles.

Sharon, Matty, and Bill agreed that Bill would be more than just a source of genetic material — they wanted him to be a father. When Sharon had a son, Jesse, in 1994, the boy lived with Sharon and Matty, but growing up he spent one day a week with Bill and Bill’s same-sex partner, Thompson. In addition, the whole family would gather once a week for dinner.

Legally, however, Sharon and Bill were Jesse’s parents, and that put Matty in a potentially precarious position. “Let’s say I died in some terrible car crash or whatever and Matty had no legal rights, and let’s say she and Billy had a falling out or one of my parents or brother wanted to take care of Jesse,” Sharon says. In that case, Matty could have had Jesse taken away from her altogether.

At the same time, no one in the family wanted to force Bill to give up his parental status. So, when Jesse was 4, their lawyer persuaded the San Francisco Superior Court to allow Matty to do a third-parent adoption. The move, which had little precedent, gave Jesse three parents, three people who, in the event of a split, could demand custody or visitation rights and would be responsible for paying child support.

Asked why it was so important to recognize all three of them in the eyes of the law, Sharon responds, “When you look back on your life, there’s a big difference between your father and your uncle and your parents’ best friends. There are certain rights and responsibilities that also come with being a parent, and those rights and responsibilities only come with being a parent.”

Third-parent adoptions remain extremely rare, and only a handful have been done, mostly in Massachusetts and California. But some legal scholars see in them the seeds of a larger shift in how the law defines parenthood. These advocates point to a few recent court decisions that suggest a willingness to recognize more than two parents.

It would not be the first time that American law has changed the rules of parenthood. According to Polikoff, in the English common law from which American law is derived, children born out of wedlock before the 19th century had, legally speaking, no parents at all. They were filius nullius. By the 1800s, however, their status had changed — legal parentage was automatically assigned to the mother. If she was unmarried, she was the sole parent; if she was married, her husband was the father, regardless of whether he was biologically related.

In the 20th century, the most significant change in parenting law was erasing the distinction between legitimate and illegitimate children. Until the 1960s, the law regularly denied rights to children born out of wedlock: the right to collect worker’s compensation benefits or Social Security survivor benefits for a dead parent, for example, or sue for a parent’s wrongful death or inherit in the absence of a will (so-called intestate succession). With the sexual revolution, of course, popular attitudes about marriage changed, and the law changed with them. In decisions in 1968 and 1972, the Supreme Court struck down state statutes penalizing children born to unmarried mothers. The states claimed the laws encouraged marriage, but the justices focused on the fact that the penalties were largely aimed at the children.

Today’s proponents of expanding the definition of parenthood argue that restricting the number of parents to two people also disadvantages children, at least those in certain nontraditional households. If a child grows up thinking of more than two people as parents, these lawyers and legal scholars argue, then the law should protect those relationships and the emotional connection and material support that come with them. Doing so may not be necessary as long as all of the parents get along and remain equally committed to the child — or children — but if the parents have a falling-out or if the custodial parents split up, then the people the law officially recognizes as parents hold all the cards, and can shut the others out of the child’s life.

In addition, in the eyes of the law, a child doesn’t have any claim on the financial resources of parental figures beyond the legally recognized two. The relationship is not unlike that of illegitimate children and their parents before 1968. With very few exceptions, it is today impossible for children to sue for child support, collect Social Security survivor benefits, or inherit by intestate succession from self-identified third or fourth parents, since the law doesn’t recognize the relationship.

To critics of the legal status quo, all of this means that, just as with illegitimacy laws, the courts are punishing children in the interest of preserving a traditional family structure, making their lives more uncertain by depriving them of emotional and financial support.

“I’m not saying all kids should have three [parents], or that two is good so why not three,” says Melanie Jacobs, a law professor at Michigan State University and author of a 2007 law review article entitled “Why Just Two?” “The law says someone is either a parent or a legal stranger, and in some cases that’s threatening to just take this person who has been a part of the child’s life out of the child’s life.”

Jacobs points to two recent decisions in particular that suggest how she would like courts to define parenthood in such families. In January 2007, the Ontario Court of Appeal granted full parental status to both members of a lesbian couple as well as their sperm donor, ruling that it was contrary to the child’s best interests not to recognize all three. In April 2007, the Pennsylvania Superior Court was faced with a custody decision involving a child’s biological mother and her same-sex partner, who had split up, and a donor who had been a significant presence in the child’s life. The court ruled that all three should have custodial rights and that all three were responsible for child support. Additionally, in July of this year, the attorney general’s office in British Columbia proposed allowing for more than two parents in cases of sperm and egg donation.

Recognizing multiple fathers or multiple mothers, however, doesn’t necessarily mean that they all have the same rights. In the Pennsylvania case, the court did not decide that all three parents had equal custody or were responsible for the same amount of child support. Jacobs in particular has argued that expanding the number of legal parents a child has requires that courts begin to allow for degrees of legal parenthood, what she calls a scheme of “relative rights.” Whereas today the law tends to see someone as either a parent or a nonparent, she argues that it should instead recognize gradations. For example, she argues, a known sperm donor should perhaps have certain parental rights and responsibilities — visitation and the obligation to pay some child support — but not the right to demand custody.

For critics, “disaggregating” the rights and responsibilities of parenthood, as Jacobs suggests, exposes a larger problem with the idea of expanding beyond two in the first place. Traditional legal definitions of parenthood, though they may not exactly correspond with every family’s day-to-day reality, do lay out a set of hard and fast, inescapable obligations. If courts begin to experiment and innovate with what being a parent means, that may create uncertainty, and even a sense that parental obligations to children may be more negotiable than they once were.

June Carbone, a law professor at the University of Missouri-Kansas City, points to research Deirdre Bowen at Seattle University has done that suggests that in same-sex couples with a child, there’s a great deal of ignorance and miscommunication about what the legal rights and responsibilities of each parent are.

“I think it is very important that there be a shaping of expectations at the outset,” Carbone says.

Opponents of the change also worry that increasing the number of parents increases the odds of disagreements — over everything from where the child goes to school and what religion to raise him to how much time he spends with which parent — and the odds that those disagreements get litigated.

“Expanding the number of parents that would have rights to a child could, on the upside, expand the number of people who have responsibilities to that child, but it also expands the number of people who have a claim on that child, and who could come into conflict with the other parents,” says Elizabeth Marquardt of the Institute for American Values, a nonprofit dedicated to encouraging traditional two-parent households.

Whether or not multiple parentage gains wider legal and social acceptance, the fact that it’s being debated — and, in a few cases, allowed — suggests the flexibility that the concept of parenthood has taken on today, not only among scholars, but among adults doing the work of actually raising children in sometimes unorthodox situations. It’s part of a broader reexamination of what it means to have a family, a conversation that is itself only a chapter in a story that has unfolded over hundreds of years. That constant push and pull has been shaped by religion and law, custom and economics, and its inflection points are not only changes like the abolition of illegitimacy, but the revision of adoption laws, the relaxation of divorce requirements, the movement in some states to legalize same-sex marriage, and even the debate, in places as different as late 19th-century Mormon Utah and the contemporary Netherlands, over the permissibility of polygamy.

Some of those changes remain deeply controversial, of course. And yet there are other aspects of the contemporary family that, while they would strike people of an earlier era as deeply unnatural, today go all but unremarked: the fact, for example, that it’s common for grandparents to live not with their children and grandchildren but instead hundreds of miles away. The family of the future may look similarly unfamiliar to us, and in ways we’re only beginning to discern.

Drake Bennett is the former staff writer for Ideas.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/10/24/johnny_has_two_mommies__and_four_dads/

In Praise of the Mediocre Mother

Elisabeth Badinter’s bestselling book champions France’s so-so moms as the secret to high Gallic birth rates.

For all their hand-wringing over Gallic cultural decline, the French are the European champions of childbirth. With a consistently solid birth rate of two babies per woman, France is both a puzzle and a model for demographers and policy makers alarmed by aging populations in the rest of the developed world.

Feminist philosopher Elisabeth Badinter, the Left Bank’s modern-day answer to Simone de Beauvoir, thinks she can explain this paradox: French women have always allowed themselves to be “mediocre mothers.”

As she details in her bestselling book, whose title translates as “The Conflict: The Woman and the Mother,” France has a long tradition of entrusting babies to nannies and daycare staff (the daycare center, or crèche, and the pre-school, or école maternelle, both being French inventions). Helicopter parenting, and the constant demands it places on women’s bodies, identities, intellects, and careers, never really made it to France, where for centuries the children of the upper classes were handed over to wet nurses. “Maman does not owe everything—her milk, her time, her energy—to her child,” Ms. Badinter says.

But times are changing, even in France, where maternal instinct and hormones are venerated ever more strongly, a trend that’s on the rise in the rest of the developed world as well. Becoming pregnant is, in Ms. Badinter’s words, becoming akin to “entering a religious order.” This global mentality shift is now threatening to strip French mothers of what has, ironically, made them among the most fertile women in the developed world: Their willingness to be so-so moms.

The tension involves more than simply what kind of mother one should be, or how much time with one’s children is too much. In her book Ms. Badinter describes a subterranean culture war that is being waged on mothers by the new forces of “eco-political” correctness. Only a few decades ago, disposable diapers, packaged baby food, infant formula and bottles were seen as key tools of women’s emancipation. Today, mothers in the rich world are under increasing pressure to not only give themselves entirely to their children, but to do so by going back to the “natural,” and all the tedium that entails.

Ms. Badinter identifies this concept as a regressive one, even adorned as it is with the new-age tinge of environmentalism. In the ascendant mentality, the “perfect” 24/7 mother is one who stays home to prepare only organic purees for her treasures, while endlessly washing cloth nappies and breastfeeding until the child is almost ready for school.

What Ms. Badinter terms a “holy alliance of reactionaries” comprises environmentalists, pediatricians, politicians, “the ayatollahs of breastfeeding” and elements of the media. As the book’s dense cross-national research shows, the consequences of the Total Motherhood credo are becoming dire for birth rates. Panicked by the all-or-nothing definition of good motherhood, women are opting out in droves, leading to the demographic crises we see in Italy and Germany, each with a fertility rate of 1.4 children per woman in 2008 according to the World Bank.

As Ms. Badinter tells it, the thinking that has led to this baby-bust is not so new, and finds its strongest ideological roots in the 18th-century anti-progress arguments of Jean-Jacques Rousseau. And while there are important culturally specific notions of motherhood, such as the German mutter, Italian mama, and the Japanese kenbo, Ms. Badinter identifies the global trend now hitting France as part of a post-Baby Boomer backlash.

“Because of successive economic crises since the 1970s, and the feeling that our parents made a mistake with their excessive materialism, we have to turn our backs on extreme individualism and unreasonable consumption and give to our children only what is ‘natural,'” Ms. Badinter told me in a recent interview.

The reader might assume that all this means that “The Conflict” boils down to a blunt attack on the green movement, or the latest battle in the intergenerational feminist wars. It’s neither, though if we must, Ms. Badinter is best labeled as a libertarian of the left. The clearest difference between mother-of-three and grandmother Badinter, and her childless forebear de Beauvoir, is the former’s enthusiastic embrace of the will to procreate. But that embrace, she tells us, is only possible if motherhood isn’t supposed to take everything from the mother.

Ms. Badinter scoffs, for instance, at the interminable lists of banned foods and drinks for expectant mothers, noting that “thirty years ago we lived our pregnancies with insouciance and lightness” without bad consequences. While other mothers pore over the plethora of yummy-mummy websites and instruction manuals, Ms. Badinter jeers at the invention and multiplication of children’s needs in a world where the kid is king.

“The Conflict” hit German bookshelves last month after spending much of the year on French bestseller lists. Its tone can be brutal at times but it provides a fresh and apparently necessary wake-up call to advanced societies about how to stop the “womb strike” menacing graying nations from Japan to Germany.

Next year “The Conflict” will be published in English. As in France and certainly Germany, most English-speaking parents will recognize the ideology Ms. Badinter says is attacking procreation: That to attain moral elevation, mothers must throw out powdered milk and plastic bottles, disposable diapers, strollers, feeding spoons, and even submit to “natural births” sans epidural.

For all women who have wilted under these crushing prohibitions and admonitions, Elisabeth Badinter is their savior. Her acerbic dose of skepticism, even if overdrawn at times, is a welcome antidote to the fetishization of parenting.

And who knows, it might even convince some would-be mothers that, experts be damned, they can “afford” to bear children after all. At least if they do so à la française.

Ms. Symons is a writer based in Bangkok and Paris.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052702304410504575560083627559998.html

The Shadow Knows

Probably the first thing we notice when we observe an object is its shape. This is an enormously useful characteristic because it gives us an immediate impression of the spirit of the subject.

Think of the shape of an elephant. Its mass and tree-trunk-like legs suggest the slow, unstoppable movement of the animal. Contrast this with the shape of a grasshopper, whose delicate antennae and jutting-back legs suggest a more nervous, fast kind of energy. Responding to shape is the first step in our logical and intuitive search for the meaning of what we draw.

If responding to shape is a fundamental aspect of seeing an object, it also interacts with all our other perceptual responses in helping us make sense of our subjects. When one is actively observing a subject in order to draw it, the mind is ping-ponging among different visual responses, shape-to-color-to-contour-to-shadow-to-proportion, and from those purely “eyeball” calculations to all the memory and psychological associations we have about our subject.

This ability of the mind to intermingle all our different kinds of reactions enriches our response and strengthens each part of that response. Shape is made more meaningful by seeing color and volume, and particularly by our recognition of our subject’s “thingness” — what makes an elephant an elephant, for instance. Understanding the significance of each part of a shape — seeing that the bump behind an elephant’s head is where the strength of the shoulder reveals itself and is different in nature from the soft curve of the belly — helps us to draw lines that evoke various kinds of energy. This is in contrast to making a contour line that moves around a shape as though each part is equal, like a neutral diagram. What we want from each stage of the drawing is to try to answer more and more of the question, “How is this thing different from every other thing?”

I include a watercolor drawing of a tap dancer to show how the silhouette of a figure can convey a particular vitality better than the details themselves.

In the process of drawing a shoe and a chair, I will show you how you can see their shapes as part of your response to their functions, and as the beginning of a much richer mental game than contour alone. You can either draw the shoe and chair from the photos or find a shoe and chair of your own to draw, following my steps.

I begin by thinking about how I put on a shoe and how I walk in it. In the drawing I made over the shape of the shoe (at right), I emphasize the aperture that the foot uses to get into the shoe, the embracing forms of the instep, the heel and the toe, the point at which the ball of the foot hits the ground and the flexible area on top that gets wrinkled by the constant bending of the material. At this stage I am ignoring all the logos and surface designs so that I can concentrate on the fundamental issues of how the shoe is made to accommodate the foot and its function. In this way, I have enlivened the shape of the shoe in my mind so that different parts have different qualities and it is no longer like the map of a country I have never visited. This analysis (which you can make by simply thinking about the shoe and without diagramming it) will guide me in making a more detailed drawing of the shoe.

As I start, I am still thinking about the large enclosing areas that I emphasized on the silhouette shape. I first make the bottom line of the sole where the pressure of the ball of the foot is exerted — it feels to me like a basic aspect of the “walking” function of the shoe. Then I make lines that enclose the heel, toe and instep, and a looping line that begins to describe the aperture of the shoe. Even though these lines form a kind of contour, I have tried to make each of them express the particular kind of pressure and implicit volume I feel in that area. This is in contrast to a contour line that simply describes an edge by moving evenly around the shape.

These first lines are especially important because, just as in the drawing of the lily, I am choosing among the myriad details I am seeing to find the issues that seem particularly central to the function of the shoe and, in that way, I take charge of the drawing.

Now I make volumetric lines around the heel and below the instep to establish the larger forms of the shoe. I add more detail to the aperture and the laces. As much as possible, I try to use the design details to reinforce the three-dimensionality of the shoe, even trying to imagine what zoomy, wrap-the-foot feelings the designer had when he decided to make these particular shapes. As I draw, I see that I have slightly missed the chunky proportions of the sneaker, so I make correctional lines around the top lining to make that part higher. A drawing should feel like a live, open-ended experience in which you can amend your lines as you absorb more and more of your subject.

This is the finishing stage of the drawing, and I concentrate on strengthening the roundedness of the forms, adding more volumetric lines around the area at the ball of the foot, heel and toe. I add darks to the surface the shoe sits on to give it more spatial presence and to set off the white color of the material. As I draw the designs on the surface I hold back slightly on their darkness so that the graphic elements won’t overwhelm the sense of form in the whole shoe. This lack of logo enthusiasm on my part helps the drawing to maintain its unity, but it probably won’t lead to any phone calls asking me to do product illustration.

The happy guy at ease in the furry chair represents the beginning of my thinking about drawing the chair. I let the idea of sitting register strongly in my mind as I look at the chair — I can imagine what the seat and the back feel like as I sit, and remember the trust I put in chair legs to do their job stoutly and not collapse. I think of soft chairs and hard chairs and put this particular chair in the medium-hard category. The design of the chair slots into 1940s English no-nonsense with some mild Art Deco around the slats. A likable bourgeois seat from which to eat your soft-boiled eggs. This little common-sense exercise helps me to see the chair as both a chair-chair as well as a specific chair, anything but a neutral shape. The lines I have drawn over the chair silhouette emphasize these core functions of sitting and support.

I start the drawing by making the rectangle of the seat, then the line describing one side of the back and the connecting leg. Next I draw two lines delineating the near front leg. Now I have implicitly set up the position of all four legs. As you see from my red lines, the rectangle on the floor is anchored by the two legs I have drawn and echoes the rectangle of the seat. My basic sense of perspective helps me to draw the lines so that the elements recede. Part of the satisfaction of starting the drawing in this way is that it’s like the answer to a puzzle — how do I figure out in the most efficient way where the ends of the legs are?

Once the positions of the various rectangles that comprise the chair are pinned down, it becomes a matter of adding the details so that they are both where they should be and also retain the character of this specific chair. As I draw the legs, for instance, I think about the difference between the edge of a sawn wooden piece and the same part of a chair if it were made from an extruded steel pipe. They are both straight, but the wood has a certain softness in its straightness that the steel would not.

It may seem odd to think about different kinds of straightness, but a sensitivity to the materials that an object is made from is one of the things that I believe experience in drawing will lead you to. In the final stage of the drawing I use cross-hatch shading to bring out the sturdiness of the chair and the flatness of the seat — the qualities of structure and “sittingness” with which I began.

Learning to understand the structure of a shoe or a chair, and to draw it in a straightforward manner, gives you the basis to consider those objects (or any others) in a more personal and intuitive way. These two paintings by Van Gogh resonate with the memories and associations that this pair of boots and this chair had for him in his life.

Vincent van Gogh, A Pair of Shoes, 1886

Vincent van Gogh, Van Gogh’s Chair, 1888
___________

Morals Without God?

I was born in Den Bosch, the city after which Hieronymus Bosch named himself. [1] This obviously does not make me an expert on the Dutch painter, but having grown up with his statue on the market square, I have always been fond of his imagery, his symbolism, and how it relates to humanity’s place in the universe. This remains relevant today since Bosch depicts a society under a waning influence of God.

His famous triptych with naked figures frolicking around — “The Garden of Earthly Delights” — seems a tribute to paradisiacal innocence. The tableau is far too happy and relaxed to fit the interpretation of depravity and sin advanced by puritan experts. It represents humanity free from guilt and shame either before the Fall or without any Fall at all. For a primatologist, like myself, the nudity, references to sex and fertility, the plentiful birds and fruits and the moving about in groups are thoroughly familiar and hardly require a religious or moral interpretation. Bosch seems to have depicted humanity in its natural state, while reserving his moralistic outlook for the right-hand panel of the triptych in which he punishes — not the frolickers from the middle panel — but monks, nuns, gluttons, gamblers, warriors, and drunkards.

Hieronymus Bosch’s “Garden of Earthly Delights” depicts hundreds of erotic naked figures carrying or eating fruits, but is also full of references to alchemy, the forerunner of chemistry. The figures on the right are embedded in glass tubes typical of a bain-marie, while the two birds supposedly symbolize vapors.

Five centuries later, we remain embroiled in debates about the role of religion in society. As in Bosch’s days, the central theme is morality. Can we envision a world without God? Would this world be good? Don’t think for one moment that the current battle lines between biology and fundamentalist Christianity turn around evidence. One has to be pretty immune to data to doubt evolution, which is why books and documentaries aimed at convincing the skeptics are a waste of effort. They are helpful for those prepared to listen, but fail to reach their target audience. The debate is less about the truth than about how to handle it. For those who believe that morality comes straight from God the creator, acceptance of evolution would open a moral abyss.

Our Vaunted Frontal Lobe

Echoing this view, Reverend Al Sharpton opined in a recent videotaped debate: “If there is no order to the universe, and therefore some being, some force that ordered it, then who determines what is right or wrong? There is nothing immoral if there’s nothing in charge.” Similarly, I have heard people echo Dostoevsky’s Ivan Karamazov, exclaiming that “If there is no God, I am free to rape my neighbor!”

Perhaps it is just me, but I am wary of anyone whose belief system is the only thing standing between them and repulsive behavior. Why not assume that our humanity, including the self-control needed for livable societies, is built into us? Does anyone truly believe that our ancestors lacked social norms before they had religion? Did they never assist others in need, or complain about an unfair deal? Humans must have worried about the functioning of their communities well before the current religions arose, which is only a few thousand years ago. Not that religion is irrelevant — I will get to this — but it is an add-on rather than the wellspring of morality.

Deep down, creationists realize they will never win factual arguments with science. This is why they have construed their own science-like universe, known as Intelligent Design, and eagerly jump on every tidbit of information that seems to go their way. The most recent opportunity arose with the Hauser affair. A Harvard colleague, Marc Hauser, has been accused of eight counts of scientific misconduct, including making up his own data. Since Hauser studied primate behavior and wrote about morality, Christian Web sites were eager to claim that “all that people like Hauser are left with are unsubstantiated propositions that are contradicted by millennia of human experience” (Chuck Colson, Sept. 8, 2010). A major newspaper asked “Would it be such a bad thing if Hausergate resulted in some intellectual humility among the new scientists of morality?” (Eric Felten, Aug. 27, 2010). Even a linguist could not resist this occasion to reaffirm the gap between human and animal by warning against “naive evolutionary presuppositions.”

These are rearguard battles, however. Whether creationists jump on this scientific scandal or linguists and psychologists keep selling human exceptionalism does not really matter. Fraud has occurred in many fields of science, from epidemiology to physics, all of which are still around. In the field of cognition, the march towards continuity between human and animal has been inexorable — one misconduct case won’t make a difference. True, humanity never runs out of claims of what sets it apart, but it is a rare uniqueness claim that holds up for over a decade. This is why we don’t hear anymore that only humans make tools, imitate, think ahead, have culture, are self-aware, or adopt another’s point of view.

If we consider our species without letting ourselves be blinded by the technical advances of the last few millennia, we see a creature of flesh and blood with a brain that, albeit three times larger than a chimpanzee’s, doesn’t contain any new parts. Even our vaunted prefrontal cortex turns out to be of typical size: recent neuron-counting techniques classify the human brain as a linearly scaled-up monkey brain.[2] No one doubts the superiority of our intellect, but we have no basic wants or needs that are not also present in our close relatives. I interact on a daily basis with monkeys and apes, which just like us strive for power, enjoy sex, want security and affection, kill over territory, and value trust and cooperation. Yes, we use cell phones and fly airplanes, but our psychological make-up remains that of a social primate. Even the posturing and deal-making among the alpha males in Washington is nothing out of the ordinary.

The Pleasure of Giving

Charles Darwin was interested in how morality fits the human-animal continuum, proposing in “The Descent of Man”: “Any animal whatever, endowed with well-marked social instincts … would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well developed … as in man.”

Unfortunately, modern popularizers have strayed from these insights. Like Robert Wright in “The Moral Animal,” they argue that true moral tendencies cannot exist — not in humans and even less in other animals — since nature is one hundred percent selfish. Morality is just a thin veneer over a cauldron of nasty tendencies. Dubbing this position “Veneer Theory” (similar to Peter Railton’s “moral camouflage”), I have fought it ever since my 1996 book “Good Natured.” Instead of blaming atrocious behavior on our biology (“we’re acting like animals!”), while claiming our noble traits for ourselves, why not view the entire package as a product of evolution? Fortunately, there has been a resurgence of the Darwinian view that morality grew out of the social instincts. Psychologists stress the intuitive way we arrive at moral judgments while activating emotional brain areas, and economists and anthropologists have shown humanity to be far more cooperative, altruistic, and fair than predicted by self-interest models. Similarly, the latest experiments in primatology reveal that our close relatives will do each other favors even if there’s nothing in it for themselves.

Maintaining a peaceful society is one of the tendencies underlying human morality that we share with other primates, such as chimpanzees. After a fight between two adult males, one offers an open hand to his adversary. When the other accepts the invitation, both kiss and embrace.

Chimpanzees and bonobos will voluntarily open a door to offer a companion access to food, even if they lose part of it in the process. And capuchin monkeys are prepared to seek rewards for others, such as when we place two of them side by side, while one of them barters with us using differently colored tokens. One token is “selfish,” and the other “prosocial.” If the bartering monkey selects the selfish token, it receives a small piece of apple for returning it, but its partner gets nothing. The prosocial token, on the other hand, rewards both monkeys. Most monkeys develop an overwhelming preference for the prosocial token, a preference that is not due to fear of repercussions, because dominant monkeys (who have the least to fear) are the most generous.

Even though altruistic behavior evolved for the advantages it confers, this does not make it selfishly motivated. Future benefits rarely figure in the minds of animals. For example, animals engage in sex without knowing its reproductive consequences, and even humans had to develop the morning-after pill. This is because sexual motivation is unconcerned with the reason why sex exists. The same is true for the altruistic impulse, which is unconcerned with evolutionary consequences. It is this disconnect between evolution and motivation that befuddled the Veneer Theorists, and made them reduce everything to selfishness. The most quoted line of their bleak literature says it all: “Scratch an ‘altruist,’ and watch a ‘hypocrite’ bleed.”[3]

It is not only humans who are capable of genuine altruism; other animals are, too. I see it every day. An old female, Peony, spends her days outdoors with other chimpanzees at the Yerkes Primate Center’s Field Station. On bad days, when her arthritis is flaring up, she has trouble walking and climbing, but other females help her out. For example, Peony is huffing and puffing to get up into the climbing frame in which several apes have gathered for a grooming session. An unrelated younger female moves behind her, places both hands on her ample behind, and pushes her up with quite a bit of effort until Peony has joined the rest.

We have also seen Peony get up and slowly move towards the water spigot, which is at quite a distance. Younger females sometimes run ahead of her, take in some water, then return to Peony and give it to her. At first, we had no idea what was going on, since all we saw was one female placing her mouth close to Peony’s, but after a while the pattern became clear: Peony would open her mouth wide, and the younger female would spit a jet of water into it.

A juvenile chimpanzee reacts to a screaming adult male on the right, who has lost a fight, by offering a calming embrace in an apparent expression of empathy.

Such observations fit the emerging field of animal empathy, which deals not only with primates, but also with canines, elephants, even rodents. A typical example is how chimpanzees console distressed parties, hugging and kissing them — behavior so predictable that scientists have analyzed thousands of cases. Mammals are sensitive to each other’s emotions, and react to others in need. The whole reason people fill their homes with furry carnivores and not with, say, iguanas and turtles is that mammals offer something no reptile ever will. They give affection, they want affection, and they respond to our emotions the way we do to theirs.

Mammals may derive pleasure from helping others in the same way that humans feel good doing good. Nature often equips life’s essentials — sex, eating, nursing — with built-in gratification. One study found that pleasure centers in the human brain light up when we give to charity. This is of course no reason to call such behavior “selfish” as it would make the word totally meaningless. A selfish individual has no trouble walking away from another in need. Someone is drowning: let him drown. Someone cries: let her cry. These are truly selfish reactions, which are quite different from empathic ones. Yes, we experience a “warm glow,” and perhaps some other animals do as well, but since this glow reaches us via the other, and only via the other, the helping is genuinely other-oriented.

Bottom-Up Morality

A few years ago Sarah Brosnan and I demonstrated that primates will happily perform a task for cucumber slices until they see others getting grapes, which taste so much better. The cucumber-eaters become agitated, throw down their measly veggies and go on strike. A perfectly fine food has become unpalatable as a result of seeing a companion with something better.

We called it inequity aversion, a topic since investigated in other animals, including dogs. A dog will repeatedly perform a trick without rewards, but refuse as soon as another dog gets pieces of sausage for the same trick. Recently, however, Sarah reported an unexpected twist to the inequity issue. While testing pairs of chimps, she found that the one who gets the better deal also occasionally refuses. It is as if they are satisfied only if both get the same. We seem to be getting close to a sense of fairness.

Such findings have implications for human morality. According to most philosophers, we reason ourselves towards a moral position. Even if we do not invoke God, it is still a top-down process of us formulating the principles and then imposing those on human conduct. But would it be realistic to ask people to be considerate of others if we did not already have a natural inclination to be so? Would it make sense to appeal to fairness and justice in the absence of powerful reactions to their absence? Imagine the cognitive burden if every decision we took needed to be vetted against handed-down principles. Instead, I am a firm believer in the Humean position that reason is the slave of the passions. We started out with moral sentiments and intuitions, which is also where we find the greatest continuity with other primates. Rather than having developed morality from scratch, we received a huge helping hand from our background as social animals.

At the same time, however, I am reluctant to call a chimpanzee a “moral being.” This is because sentiments do not suffice. We strive for a logically coherent system, and have debates about how the death penalty fits arguments for the sanctity of life, or whether an unchosen sexual orientation can be wrong. These debates are uniquely human. We have no evidence that other animals judge the appropriateness of actions that do not affect themselves. The great pioneer of morality research, the Finn Edward Westermarck, explained what makes the moral emotions special: “Moral emotions are disconnected from one’s immediate situation: they deal with good and bad at a more abstract, disinterested level.” This is what sets human morality apart: a move towards universal standards combined with an elaborate system of justification, monitoring and punishment.

At this point, religion comes in. Think of the narrative support for compassion, such as the Parable of the Good Samaritan, or the challenge to fairness, such as the Parable of the Workers in the Vineyard, with its famous conclusion “The last will be first, and the first will be last.” Add to this an almost Skinnerian fondness for reward and punishment — from the virgins to be met in heaven to the hell fire that awaits sinners — and the exploitation of our desire to be “praiseworthy,” as Adam Smith called it. Humans are so sensitive to public opinion that we only need to see a picture of two eyes glued to the wall to respond with good behavior, which explains the image in some religions of an all-seeing eye to symbolize an omniscient God.

The Atheist Dilemma

Over the past few years, we have gotten used to a strident atheism arguing that God is not great (Christopher Hitchens) or a delusion (Richard Dawkins). The new atheists call themselves “brights,” thus hinting that believers are not so bright. They urge trust in science, and want to root ethics in a naturalistic worldview.

While I do consider religious institutions and their representatives — popes, bishops, mega-preachers, ayatollahs, and rabbis — fair game for criticism, what good could come from insulting individuals who find value in religion? And more pertinently, what alternative does science have to offer? Science is not in the business of spelling out the meaning of life and even less in telling us how to live our lives. We scientists are good at finding out why things are the way they are, or how things work, and I do believe that biology can help us understand what kind of animals we are and why our morality looks the way it does. But to go from there to offering moral guidance seems a stretch.

Even the staunchest atheist growing up in Western society cannot avoid having absorbed the basic tenets of Christian morality. Our societies are steeped in it: everything we have accomplished over the centuries, even science, developed either hand in hand with or in opposition to religion, but never separately. It is impossible to know what morality would look like without religion. It would require a visit to a human culture that is not now and never was religious. That such cultures do not exist should give us pause.

Bosch struggled with the same issue — not with being an atheist, which was not an option, but with science’s place in society. The little figures in his paintings with inverted funnels on their heads or the buildings in the form of flasks, distillation bottles, and furnaces reference chemical equipment.[4] Alchemy was gaining ground yet mixed with the occult and full of charlatans and quacks, which Bosch depicted with great humor in front of gullible audiences. Alchemy turned into science when it liberated itself from these influences and developed self-correcting procedures to deal with flawed or fabricated data. But science’s contribution to a moral society, if any, remains a question mark.

Other primates have of course none of these problems, but even they strive for a certain kind of society. For example, female chimpanzees have been seen to drag reluctant males towards each other to make up after a fight, removing weapons from their hands, and high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as yet another sign that the building blocks of morality are older than humanity, and that we do not need God to explain how we got where we are today. On the other hand, what would happen if we were able to excise religion from society? I doubt that science and the naturalistic worldview could fill the void and become an inspiration for the good. Any framework we develop to advocate a certain moral outlook is bound to produce its own list of principles, its own prophets, and attract its own devoted followers, so that it will soon look like any old religion.

NOTES

[1] Also known as ’s-Hertogenbosch, this is a 12th-century provincial capital in the Catholic south of the Netherlands. Bosch lived from circa 1450 until 1516.

[2] Herculano-Houzel, Suzana (2009). The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience 3: 1-11.

[3] Ghiselin, Michael (1974). The Economy of Nature and the Evolution of Sex. Berkeley, CA: University of California Press.

[4] Dixon, Laurinda (2003). Bosch. London: Phaidon.

Frans B. M. de Waal is a biologist interested in primate behavior. He is C. H. Candler Professor in Psychology, and Director of the Living Links Center at the Yerkes National Primate Research Center at Emory University, in Atlanta, and a member of the National Academy of Sciences and the Royal Dutch Academy of Sciences. His latest book is “The Age of Empathy.”
__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/10/17/morals-without-god/

Is Pure Altruism Possible?

Who could doubt the existence of altruism?

True, news stories of malice and greed abound. But all around us we see evidence of human beings sacrificing themselves and doing good for others. Remember Wesley Autrey? On Jan. 2, 2007, Mr. Autrey jumped from a New York City subway platform onto the tracks, as a train approached, to save a man who had suffered a seizure and fallen. A few months later the Virginia Tech professor Liviu Librescu blocked the door to his classroom so his students could escape the bullets of Seung-Hui Cho, who was on a rampage that would leave 32 students and faculty members dead. In so doing, Mr. Librescu gave his life.

Still, doubting altruism is easy, even when at first glance it seems evident. It’s undeniable that people sometimes act in a way that benefits others, but it may seem that they always get something in return — at the very least, the satisfaction of having their desire to help fulfilled. Students in introductory philosophy courses torture their professors with this reasoning. And its logic can seem inexorable.

Contemporary discussions of altruism quickly turn to evolutionary explanations. Reciprocal altruism and kin selection are the two main theories. According to reciprocal altruism, evolution favors organisms that sacrifice their good for others in order to gain a favor in return. Kin selection — the famous “selfish gene” theory popularized by Richard Dawkins — says that an individual who behaves altruistically towards others who share its genes will tend to reproduce those genes. Organisms may be altruistic; genes are selfish. The feeling that loving your children more than yourself is hard-wired lends plausibility to the theory of kin selection.

These evolutionary theories explain a puzzle: how organisms that sacrifice their own “reproductive fitness” — their ability to survive and reproduce — could possibly have evolved. But neither theory fully accounts for our ordinary understanding of altruism.

The defect of reciprocal altruism is clear. If a person acts to benefit another in the expectation that the favor will be returned, the natural response is: “That’s not altruism!”  Pure altruism, we think, requires a person to sacrifice for another without consideration of personal gain. Doing good for another person because something’s in it for the do-er is the very opposite of what we have in mind. Kin selection does better by allowing that organisms may genuinely sacrifice their interests for another, but it fails to explain why they sometimes do so for those with whom they share no genes, as Professor Librescu and Mr. Autrey did.

When we ask whether human beings are altruistic, we want to know about their motives or intentions. Biological altruism explains how unselfish behavior might have evolved but, as Frans de Waal suggested in his column in The Stone on Sunday, it implies nothing about the motives or intentions of the agent: after all, birds and bats and bees can act altruistically. This fact helps to explain why, despite these evolutionary theories, the view that people never intentionally act to benefit others except to obtain some good for themselves still holds a powerful lure for our thinking.

The lure of this view — egoism — has two sources, one psychological, the other logical. Consider first the psychological. One reason people deny that altruism exists is that, looking inward, they doubt the purity of their own motives. We know that even when we appear to act unselfishly, other reasons for our behavior often rear their heads: the prospect of a future favor, the boost to reputation, or simply the good feeling that comes from appearing to act unselfishly. As Kant and Freud observed, people’s true motives may be hidden, even (or perhaps especially) from themselves. Even if we think we’re acting solely to further another person’s good, that might not be the real reason. (There might be no single “real reason” — actions can have multiple motives.)

So the psychological lure of egoism as a theory of human action is partly explained by a certain humility or skepticism people have about their own or others’ motives. There’s also a less flattering reason: denying the possibility of pure altruism provides a convenient excuse for selfish behavior. If “everybody is like that” — if everybody must be like that — we need not feel guilty about our own self-interested behavior or try to change it.

The logical lure of egoism is different: the view seems impossible to disprove. No matter how altruistic a person appears to be, it’s possible to conceive of her motive in egoistic terms. On this view, the guilt Mr. Autrey would have suffered had he ignored the man on the tracks made risking his life worth the gamble. The doctor who gives up a comfortable life to care for AIDS patients in a remote place does what she wants to do, and therefore gets satisfaction from what only appears to be self-sacrifice. So, it seems, altruism is simply self-interest of a subtle kind.

The impossibility of disproving egoism may sound like a virtue of the theory, but, as philosophers of science know, it’s really a fatal drawback. A theory that purports to tell us something about the world, as egoism does, should be falsifiable. Not false, of course, but capable of being tested and thus proved false. If every state of affairs is compatible with egoism, then egoism doesn’t tell us anything distinctive about how things are.

A related reason for the lure of egoism, noted by Bishop Joseph Butler in the 18th century, concerns ambiguity in the concepts of desire and the satisfaction of desire. If people possess altruistic motives, then they sometimes act to benefit others without the prospect of gain to themselves. In other words, they desire the good of others for its own sake, not simply as a means to their own satisfaction. It’s obvious that Professor Librescu desired that his students not die, and acted accordingly to save their lives. He succeeded, so his desire was satisfied. But he was not satisfied — since he died in the attempt to save the students. From the fact that a person’s desire is satisfied we can draw no conclusions about effects on his mental state or well-being.

Still, when our desires are satisfied we normally experience satisfaction; we feel good when we do good. But that doesn’t mean we do good only in order to get that “warm glow” — that our true incentives are self-interested (as economists tend to claim). Indeed, as de Waal argues, if we didn’t desire the good of others for its own sake, then attaining it wouldn’t produce the warm glow.

Common sense tells us that some people are more altruistic than others. Egoism’s claim that these differences are illusory — that deep down, everybody acts only to further their own interests — contradicts our observations and deep-seated human practices of moral evaluation.

At the same time, we may notice that generous people don’t necessarily suffer more or flourish less than those who are more self-interested. Altruists may be more content or fulfilled than selfish people. Nice guys don’t always finish last.

But nor do they always finish first. The point is rather that the kind of altruism we ought to encourage, and probably the only kind with staying power, is satisfying to those who practice it. Studies of rescuers show that they don’t believe their behavior is extraordinary; they feel they must do what they do, because it’s just part of who they are. The same holds for more common, less newsworthy acts — working in soup kitchens, taking pets to people in nursing homes, helping strangers find their way, being neighborly. People who act in these ways believe that they ought to help others, but they also want to help, because doing so affirms who they are and want to be and the kind of world they want to exist. As Prof. Neera Badhwar has argued, their identity is tied up with their values, thus tying self-interest and altruism together. The correlation between doing good and feeling good is not inevitable — inevitability lands us again with that empty, unfalsifiable egoism — but it is more than incidental.

Altruists should not be confused with people who automatically sacrifice their own interests for others. We admire Paul Rusesabagina, the hotel manager who saved over 1,000 Tutsis and Hutus during the 1994 Rwandan genocide; we admire health workers who give up comfortable lives to treat sick people in hard places. But we don’t admire people who let others walk all over them; that amounts to lack of self-respect, not altruism.

Altruism is possible and altruism is real, although in healthy people it intertwines subtly with the well-being of the agent who does good. And this is crucial for seeing how to increase the amount of altruism in the world. Aristotle had it right in his “Nicomachean Ethics”: we have to raise people from their “very youth” and educate them “so as both to delight in and to be pained by the things that we ought.”

Judith Lichtenberg is professor of philosophy at Georgetown University. She is at work on a book on the idea of charity.
__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/10/19/is-pure-altruism-possible/

Socrates – a man for our times

He was condemned to death for telling the ancient Greeks things they didn’t want to hear, but his views on consumerism and trial by media are just as relevant today

The Death of Socrates, 1787, by Jacques-Louis David.

Two thousand four hundred years ago, one man tried to discover the meaning of life. His search was so radical, charismatic and counterintuitive that he became famous throughout the Mediterranean. Men – particularly young men – flocked to hear him speak. Some were inspired to imitate his ascetic habits. They wore their hair long, their feet bare, their cloaks torn. He charmed a city; soldiers, prostitutes, merchants, aristocrats – all would come to listen. As Cicero eloquently put it, “He brought philosophy down from the skies.”

For close on half a century this man was allowed to philosophise unhindered on the streets of his hometown. But then things started to turn ugly. His glittering city-state suffered horribly in foreign and civil wars. The economy crashed; year in, year out, men came home dead; the population starved; the political landscape was turned upside down. And suddenly the philosopher’s bright ideas, his eternal questions, his eccentric ways, started to jar. And so, on a spring morning in 399BC, the first democratic court in the story of mankind summoned the 70-year-old philosopher to the dock on two charges: disrespecting the city’s traditional gods and corrupting the young. The accused was found guilty. His punishment: state-sponsored suicide, courtesy of a measure of hemlock poison in his prison cell.

The man was Socrates, the philosopher from ancient Athens and arguably the true father of western thought. Not bad, given his humble origins. The son of a stonemason, born around 469BC, Socrates was famously odd. In a city that made a cult of physical beauty (an exquisite face was thought to reveal an inner nobility of spirit) the philosopher was disturbingly ugly. Socrates had a pot-belly, a weird walk, swivelling eyes and hairy hands. As he grew up in a suburb of Athens, the city seethed with creativity – he witnessed the Greek miracle at first-hand. But when poverty-stricken Socrates (he taught in the streets for free) strode through the city’s central marketplace, he would harrumph provocatively, “How many things I don’t need!”

Whereas all religion was public in Athens, Socrates seemed to enjoy a peculiar kind of private piety, relying on what he called his “daimonion”, his “inner voice”. This “demon” would come to him during strange episodes when the philosopher stood still, staring for hours. We think now he probably suffered from catalepsy, a nervous condition that causes muscular rigidity.

Putting aside his unshakable position in the global roll-call of civilisation’s great and good, why should we care about this curious, clever, condemned Greek? Quite simply because Socrates’s problems were our own. He lived in a city-state that was for the first time working out what role true democracy should play in human society. His hometown – successful, cash-rich – was in danger of being swamped by its own vigorous quest for beautiful objects, new experiences, foreign coins.

The philosopher also lived through (and fought in) debilitating wars, declared under the banner of demos-kratia – people power, democracy. The Peloponnesian conflict of the fifth century against Sparta and her allies was criticised by many contemporaries as being “without just cause”. Although some in the region willingly took up this new idea of democratic politics, others were forced by Athens to love it at the point of a sword. Socrates questioned such blind obedience to an ideology. “What is the point,” he asked, “of walls and warships and glittering statues if the men who build them are not happy?” What is the reason for living life, other than to love it?

For Socrates, the pursuit of knowledge was as essential as the air we breathe. Rather than a brainiac grey-beard, we should think of him as his contemporaries knew him: a bustling, energetic, wine-swilling, man-loving, vigorous, pug-nosed, sword-bearing war-veteran: a citizen of the world, a man of the streets.

According to his biographers Plato and Xenophon, Socrates did not just search for the meaning of life, but the meaning of our own lives. He asked fundamental questions of human existence. What makes us happy? What makes us good? What is virtue? What is love? What is fear? How should we best live our lives? Socrates saw the problems of the modern world coming; and he would certainly have something to say about how we live today.

He was anxious about the emerging power of the written word over face-to-face contact. The Athenian agora was his teaching room. Here he would jump on unsuspecting passersby, as Xenophon records. “One day Socrates met a young man on the streets of Athens. ‘Where can bread be found?’ asked the philosopher. The young man responded politely. ‘And where can wine be found?’ asked Socrates. With the same pleasant manner, the young man told Socrates where to get wine. ‘And where can the good and the noble be found?’ then asked Socrates. The young man was puzzled and unable to answer. ‘Follow me to the streets and learn,’ said the philosopher.”

Whereas immediate, personal contact helped foster a kind of honesty, Socrates argued that strings of words could be manipulated, particularly when disseminated to a mass market. “You might think words spoke as if they had intelligence, but if you question them they always say only one thing . . . every word . . . when ill-treated or unjustly reviled always needs its father to protect it,” he said.

When psychologists today talk of the danger for the next generation of too much keyboard and texting time, Socrates would have flashed one of his infuriating “I told you so” smiles. Our modern passion for fact-collection and box-ticking rather than a deep comprehension of the world around us would have horrified him too. What was the point, he said, of cataloguing the world without loving it? He went further: “Love is the one thing I understand.”

The televised election debates earlier this year would also have given him pause. Socrates was withering when it came to a polished rhetorical performance. For him a powerful, substanceless argument was a disgusting thing: rhetoric without truth was one of the greatest threats to the “good” society.

Interestingly, the TV debate experiment would have seemed old hat. Public debate and political competition (agon was the Greek word, which gives us our “agony”) were the norm in democratic Athens. Every male citizen over the age of 18 was a politician. Each could present himself in the open-air assembly up on the Pnyx to raise issues for discussion or to vote. Through a complicated system of lots, ordinary men might be made the equivalent of heads of state for a year; home secretary or foreign minister for the space of a day. Those who preferred a private to a public life were labelled idiotes (hence our word idiot).

Socrates died when Golden Age Athens – an ambitious, radical, visionary city-state – had triumphed as a leader of the world, and then over-reached herself and begun to crumble. His unusual personal piety, his guru-like attraction to the young men of the city, suddenly seemed to have a sinister tinge. And although Athens adored the notion of freedom of speech (the city even named one of its warships Parrhesia after the concept), the population had yet to resolve how far freedom of expression ratified a freedom to offend.

Socrates was, I think, a scapegoat for Athens’s disappointment. When the city was feeling strong, the quirky philosopher could be tolerated. But, overrun by its enemies, starving, and with the ideology of democracy itself in question, the Athenians took a more fundamentalist view. A confident society can ask questions of itself; when it is fragile, it fears them. Socrates’s famous aphorism “the unexamined life is not worth living” was, by the time of his trial, clearly beginning to jar.

After his death, Socrates’s ideas had a prodigious impact on both western and eastern civilisation. His influence in Islamic culture is often overlooked – in the Middle East and North Africa, from the 11th century onwards, his ideas were said to refresh and nourish, “like . . . the purest water in the midday heat”. Socrates was nominated one of the Seven Pillars of Wisdom and nicknamed “The Source”. So it seems a shame that, for many, Socrates has become a remote, lofty kind of a figure.

When Socrates finally stood up to face his charges in front of his fellow citizens in a religious court in the Athenian agora, he articulated one of the great pities of human society. “It is not my crimes that will convict me,” he said. “But instead, rumour, gossip; the fact that by whispering together you will persuade yourselves that I am guilty.” As another Greek author, Hesiod, put it, “Keep away from the gossip of people. For rumour [the Greek pheme, via fama in Latin, gives us our word fame] is an evil thing; by nature she’s a light weight to lift up, yes, but heavy to carry and hard to put down again. Rumour never disappears entirely once people have indulged her.”

Trial by media, by pheme, has always had a horrible potency. It was a slide in public opinion and the uncertainty of a traumatised age that brought Socrates to the hemlock. Rather than follow the example of his accusers, we should perhaps honour Socrates’s exhortation to “know ourselves”, to be individually honest, to do what we, not the next man, know to be right. Not to hide behind the hatred of a herd, the roar of the crowd, but to aim, hard as it might be, towards the “good” life.

The Hemlock Cup: Socrates, Athens and the Search for the Good Life, by Bettany Hughes, is published by Jonathan Cape.

___________

Full article and photo: http://www.guardian.co.uk/books/2010/oct/17/socrates-philosopher-man-for-our-times

Exit strategies

Socrates philosophised as the hemlock kicked in, Vespasian quoted verse – but of all the ideas we have inherited from the classical world, the very worst, writes Mary Beard, is the myth of a ‘good death’

The last words of the Roman emperor Vespasian were a joke. “Damn,” he is supposed to have said on his deathbed in AD79, “I think I’m turning into a god.” This was not merely a wry reflection on the peculiar Roman custom of turning dead emperors – the good ones, at least – into divinities. (And, sure enough, before the year was out, Vespasian really had become a god, with his very own temple and priests.) His words also served as a reminder to family, friends, courtiers and subjects that this famously down-to-earth, no-nonsense ruler had remained down-to-earth and no-nonsense even unto death. The old man had died with his wits, and his wit, still about him. It was a “good death”, Roman style.

There are plenty of other stories of Roman notables handling their last moments with similar aplomb. The first emperor Augustus, for example, is said to have stage-managed his own end in AD14, with memorable efficiency. He asked for a mirror, had his hair combed, then called in his friends and asked them whether he had “played his part well in the comedy of life”. And just to emphasise the point, he quoted some well-chosen last lines from a comedy by the Greek playwright Menander:

“Since the play has been so good, clap your hands / And all of you dismiss us with applause.”

Then, to complete this well-choreographed finale, he made tender inquiries about the health of a young relative, embraced his wife – and died, mid-kiss, right on cue.

The Romans, of course, didn’t invent this idea of a controlled, tidy, well-planned and witty exit. It goes back to ancient Greece and the philosopher-guru Socrates, who was put to death in Athens in 399BC for a variety of crimes, including corrupting the young and introducing new gods. His pupil Plato recreated that death in a best-selling essay, known – after one of the characters at the bedside – as the Phaedo. It inspired philosophers, intellectuals and martyrs ever after.

Here we read of Socrates drinking the hemlock (the execution method of choice in classical Athens), and continuing to philosophise – with apparently effortless calm – while his body grew gradually numb from the poison. As the numbness neared his heart, just in time, he reminded his grieving friends to do their religious duty by making an offering to the god Asclepius. They were obviously liable to forget, but not the dying philosopher, who still had his mind on the job:

“‘Crito,'” he says, to one of those present, “‘we owe a cock to Asclepius. Pay it and do not forget.’ Crito said, ‘It will be done. But see if you have anything else to say.’ To this question, he made no reply … These had been his last words.”

Some Romans were so enamoured of this particular scene of dying that they tried to replicate it – with all the theatrical showmanship that they could muster, but not always happy results. Seneca, the Roman philosopher who had once held the unenviable job of tutor to the young emperor Nero, killed himself in AD65. He aimed for a Socratic death, and had even arranged for a secretary to be on hand to take down his last, philosophical, words, for future publication – and immortality. The secretary did his job, but Seneca’s attempts to open his veins were repeatedly botched until eventually he had to fall back on some hemlock. When even that failed, he was dipped in a hot bath to encourage the blood to flow. It worked in the end, but Seneca – the Roman Socrates – had turned out to be very bad, and slow, at dying, where Socrates himself had proved the expert.

This failure of Seneca to replicate Socrates hints at a bigger problem with the ancient view of death and of the processes of dying. The ancient Greeks and Romans have given us many of the tools with which we still understand the world, from democracy to dictatorship, philosophy to pornography. There is much we can learn from them, even now. But one of the very worst ideas we have inherited from the classical world is the idea of the “good death”, in which the man (and, in these stories, it usually is a man) remains in charge of his destiny right until the very end – joking with his friends, planning his exit strategy, and never once losing control, or his distinctive, individual character.

For a start, most of those ancient stories cannot possibly be true. We cannot now check out what Vespasian or Augustus really did say on their deathbeds. But the chances are that their reported last words and actions were a combination of wishful thinking, convenient propaganda, rumour and outright invention. It was certainly in the interests of the scheming empress Livia, Augustus’s widow, that the world should picture the old emperor passing away with a kiss for his wife still lingering on his lips.

We do, however, know the effects of drinking hemlock, as Socrates did. It’s not, in fact, a nice or a dignified way to go – there is no gradual numbness creeping gently up the limbs, with all the faculties left intact until almost the last minute, when the heart stops beating. True, it depends a bit on the precise species of hemlock that you take (some varieties are nastier than others). But you could normally expect agonising cramps, vomiting and terrible convulsions – which not even the most single-minded philosopher could manage to think, or talk, his way through. The myth of the sage, philosophising his path to death, unmoved by the disintegration of the body, is just that – a myth.

But, to press this a bit further, these stories of “good” ancient deaths, true or not, have bequeathed to us a version of dying that we can never live up to in real life (or in our real dying). It may be a different matter in the movies. Remember Ali MacGraw, dying of leukaemia at the end of the film Love Story. In a way that would have delighted even Socrates’s friends, she stayed just as beautiful as she had been at the beginning, even if she got gradually paler as she reached her last breath. The same may be true for novels, where death is often reached with Roman – and utterly implausible – dignity. When Oscar Wilde quipped about Charles Dickens’s Old Curiosity Shop that “it would require a heart of stone not to laugh at the death of Little Nell”, he was not simply pouring scorn on a certain type of Victorian sentimentality. He was reminding us of something we all know already: that death never actually comes pretty; that no matter how we choose to represent it to ourselves, it can’t ever be “good”.

For the truth is that dying is nasty, undignified and uncontrolled. Or so my limited observation of it has suggested. The human body is, after all, designed to be tough. The price we pay for surviving minor accidents, for our capacity to fight routine infection and disease, and – let’s not forget – for the miraculous advances of modern pharmaceuticals, is a heavy one: that is, it takes a major, or horribly sustained, onslaught to kill us off for good. It is, in other words, very, very difficult for a human being to die. We may be lucky and get speedily dispatched by a well-aimed bullet or by a big enough heart attack. But the likelihood is that we will take a long time to leave this world, as the cancer works its way through to our most vital organs, or as the brain, the lungs and the heart gradually shut down. We might choose to see the bright side of this, and mutter gamely about “getting our affairs in order”. But we would be kidding ourselves if we thought that, in the end, our addled bodies would be able to seize the opportunity for some witty last words, as the ancient myth suggests. There will be no cleverly choreographed deathbed scene; almost certainly we will be in pain, and not in control.

Personally, I am hoping for one of those massive heart attacks (not the little sort that merely leave you an invalid). So too are most of the doctors that I know. Going out with a bang on the golf course is the physicians’ preferred exit route. Though quite why they persist in prescribing for the rest of us pills that will make such an event less likely and consign us to far less desirable forms of death is a bit of a mystery. I’m still waiting to meet a medic who greets my high blood pressure and raised cholesterol with a smile and a warm prediction of a premature but speedy end.

If I can’t go quickly, then all I can realistically hope for is a hospital regime that doesn’t ration the morphine. But on that model, I am very unlikely to “be myself” in my last days and weeks; I shall be in some medicated limbo, halfway out of the living world. Either way – whether the swift or the lingering exit – there is not going to be much of a chance of uttering those characteristic last words for the family and friends to carry away in their memories. There will be only moans and groans. Let’s be realistic.

As is the case for most people, the deaths I have observed at closest quarters – too close, perhaps – are those of my parents. My father’s death ran more or less according to my own Plan B. He was in hospital for several weeks, drugged up to the eyeballs, incapable of normal communication, and stripped of most of his dignity. It’s not pleasant to see your elderly parent lying partly covered by a single sheet, naked and catheterised. But if the choice is between pain and loss of dignity, most of us, when it comes to it (whatever we like to think in advance), would choose loss of dignity. In Dad’s case the medicated limbo lasted for some time, and it wasn’t actually clear to me when he died; it certainly wasn’t when the hospital rang one night to say that, in their terms, he was now gone.

My mother did it quite differently, and I’m sorry to say that there was more than a touch of the Socratic about it. She, too, had had cancer for some time, but was still living semi-independently in sheltered housing, while having a course of palliative radiotherapy. It was palliative only; a cure was not in prospect. One night, alone, she had a horrible haemorrhage. Getting up, she called an ambulance, which came to take her on what she must have known was her last journey to hospital. She was, in fact, dead within a couple of hours. But, bleeding as she was, she did not leave her flat without putting out a note for the milkman. “No milk today; will let you know when to start deliveries again.”

I know she did this, because I found the note. I didn’t manage to get to the hospital in time to see her alive (or even half alive). So I went round to her flat and saw it there, still sticking out of the bottle.

My first reactions went along the old Roman lines. How reassuring, I thought, that she was compos mentis almost up to the end. Even in her last hours and bleeding, she still had the domestic chores well organised. That was my mum …

But my second thoughts were, and remain, rather different. How strong was the grip on her of that ancient myth of a well-presented death, that even as she bled her way out of life, she was forced to put on this display of control. Couldn’t she have just let go and died? There was something almost pitiful in that need she must have felt to live up to the image of good and organised dying.

For the note was, after all, just a display. What on earth was the point of cancelling the milk? The milkman hadn’t even picked the message up. And would it have mattered anyway if a few full bottles had stood out on the doorstep for a while?

Or even, to go back to Socrates, would it really have mattered if that cock hadn’t been offered to Asclepius?

__________

Full article: http://www.guardian.co.uk/books/2009/mar/21/philosophy

Monterrey’s Habit

INVISIBLE paths to the United States, it seems, have always passed through Monterrey. People and their merchandise come and go via paved roads and dusty lanes, but also through the famous little walkways, somewhere between manicured and overgrown, that are hidden among the thickets of underbrush.

Increasingly, Mexico has a hidden drug problem — but it’s not entirely the kind that you’d think. And the traffic won’t stop until it’s exposed.

As early as the 1940s, the local newspapers were reporting on captured smugglers. Those going north to the United States transported humans (generally seasonal farm workers) and substances for attaining those “artificial paradises” that so fascinated the French poètes maudits of the late 19th century. The other group, those going south, could bring almost anything they fancied into the country — you could bring a building into Mexico, people joked, as long as it fit under the bridge. And they knew, though they talked about it only in hushed tones, that quite a bit of money was being made.

I grew up thinking that “sardos,” the lowest-ranking members of the army, were the only Mexicans who smoked marijuana. But by the 1960s, the hippie generation had popularized pot, and during my university years several of my classmates smoked. As for harder drugs, few of us knew anything more than what we saw in the movies; only in the 1970s did we become aware of psychotropic pills that “drove you crazy.”

Around then, popular music, always a reliable witness, began to recount the stories of people transporting drugs beyond the Rio Grande. With each decade, the songs got more and more explicit. “Camelia la Tejana,” one of the most emblematic, is about a woman whose car tires were “filled with the evil weed.” It ends with a shooting death. But soon, lyricists stopped killing off their antiheroes. Drug trafficking became an adventure story, or a comedy: in one famous song, smugglers disguised as nuns traded “white powder” they swore was just powdered milk.

Still, we didn’t think of drugs as our problem. In Monterrey over the years we sang about them, sure, we even smoked them — but we kept insisting they were only passing through, north to the Americans. We saw the construction going on in Monterrey, the new fortunes, and we knew the phrase “money laundering,” but we looked the other way.

After 9/11, the drug industry became harder to ignore. From then on, day in and day out, the news media reported on the border: on interceptions of huge marijuana and cocaine shipments, on dozens of deaths caused by warring gangs and on stories of coercion and corruption among government authorities and policemen.

And still the habit grew, among the young and not-so-young, though it was always denied, never admitted. In certain neighborhoods here, it was said, absolutely anything could be gotten.

We have come face to face with the violence associated with the business, we acknowledge it. But we don’t acknowledge our own drug problems. If those secret paths from south to north passed through some other country, some other state, perhaps Monterrey wouldn’t have the drug traffic it has today. But people here also buy and consume these paradise-inducing substances.

By ignoring this, we only put off learning the magnitude of our own addiction. There can be no solution until we come to terms with the truth. And after that, who knows?

Ricardo Elizondo Elizondo is a novelist and historian.

__________

Full article and photo: http://www.nytimes.com/2010/10/17/opinion/17elizondo.html

Ground Zero in Sinaloa

FOUR years ago Mexico invented a civil war: the government decided to confront the seven major drug cartels. The army was sent into the streets, mountains and country paths. Even the navy was on alert.

Here in Sinaloa, the western state where the modern drug trade began, poorly armed and ill-outfitted federal and state police were the first to fall. Around 50 of them, killed by the cartels. Those who survived took to the streets in protest, demanding better weapons and bulletproof vests. In Culiacán, the state capital, students are always staging protest marches; it was strange to see the police do the same. You could smell the fear and uncertainty in the air.

At first people believed that it would soon blow over. But weeks went by and the gunfire continued to claim victims. Across Mexico in 2009, an average of 23 people died in drug-related violence every day, and on many of those days Sinaloa was the prime contributor to that statistic. Military patrols and federal policemen prowled the cities looking to uncover troves of weapons. They went door to door in Culiacán. It took them five minutes to inspect my house. “It’s full of books,” the sergeant remarked, a bewildered look on his face.

I don’t know if they did the same in the neighborhoods where the drug lords actually live. The soldiers didn’t look that tough, nor did the police. But still, it was unsettling to see them close up and with such troubled looks on their faces. Ever since the student uprisings of 1968 and the resulting repression of the 1970s, soldiers have been seen as threats, even in Sinaloa, where they are trying to protect us.

The Mexican drug industry was established in the 1940s by a group of Sinaloans and Americans trafficking in heroin. It is part of our culture: we know all the legends, folk songs and movies about the drug world, including its patron saint, Jesús Malverde, a Robin Hood-like bandit who was hanged in 1909.

There are days when we feel deeply ashamed that the trade is at the heart of Sinaloa’s identity, and wish our history were different. Our ancestors were fearless and proud people, and it is their memory that gives us the will to try to control our own fear and the sobs of the widows and mothers who have lost loved ones.

It was reported that not long ago, a group of high-ranking government officials from Mexico City paid a visit to Ciudad Juárez, a city in Chihuahua State on the Texas border where people are too scared to go out at night. A troop of Niños Exploradores, akin to Boy Scouts, was trotted out to greet the dignitaries. Warm smiles abounded among the government representatives. The boys’ faces were dead serious.

When the boys were asked to perform their salute, their commander shouted, “How do the children play in Ciudad Juárez?” The boys hit the ground. When asked, “How do the children play in Tijuana?” again the scouts hit the ground. When asked about the children of the border city of Matamoros, yet again, they were on the ground. The visitors looked eager to disappear.

In Sinaloa, at least things haven’t gotten that bad. People live well and our children play other games. At night we go out for dinner, we go for evening strolls down our beaches and our roads as if to say: this is our land, we will not let go of it. But it doesn’t always work.

Sinaloa is a place with a strong work ethic: people tell me, for example, that I write like a farmer, from dawn. Our greatest worry is that, in our fear, we will lose our grip on the code of work and responsibility that guided our forefathers and helped them convert our unpromising salt flats and desert into agricultural bounty.

Élmer Mendoza is a novelist.

__________

Full article and photo: http://www.nytimes.com/2010/10/17/opinion/17Mendoza.html

Tijuana Reclaimed

THERE are two Tijuanas: that of the locals, and that of the rest. The true Tijuana belongs only to the oldest families, the grandparents and great-grandparents of Tijuana. The view from outside, on the other hand, tends to come into focus through fantasy, stereotype and cliché.

But the outside world helped create Tijuana.

In the 19th century, Tijuana resembled the set of an old Western — a few houses, some wooden corrals, mud-caked roads and a customs hut to register the passage of caravans heading to the port at Ensenada.

The city came into its own only in the 1920s, thanks to Prohibition and laws outlawing gambling in the United States. Americans exported the vices they had banned at home to the new city emerging on this side of the border, which soon became a nerve center for the production of alcohol, from brandy to Mexicali beer.

Capital from the American underworld was largely responsible. American investors like Carl Withington opened saloons and broke ground for the construction of casinos like the Foreign Club, the Montecarlo and the Agua Caliente, which was built alongside the hot springs of the same name. And American tourists paid for the prostitutes, the boxing clubs and the opium.

Of course, the particular vices changed a bit during the 20th century, but the city kept on playing the same role for its northern neighbor. That is, until the 1990s, when everything began to change. This pressure started building from the south — drugs (and the violence and law of the jungle that come with them) were heading north and Tijuana was the last stop before the border. The Arellano brothers had moved here from Sinaloa in the ’80s, and other traffickers and assassins followed. It was like a tide shifting. Instead of an influx of visitors from the north, we had these smugglers from the south. And the tourists were scared away.

It had a devastating effect on Tijuana’s economy. The murders, kidnappings and decapitations reached a peak in 2008. Americans stopped coming, and those Tijuana families who could afford it moved to California, to San Diego or Bonita, to sleep in peace. Even local politicians and officials bought or rented houses elsewhere. Stores closed. Bars were boarded up.

But now Tijuana is recovering. The violence has begun to subside, thanks to the local police and the Mexican military, as well as the capture last January of Teodoro García Simental, an infamous drug lord known as El Teo. Avenida Revolución, dead for the past three years, is showing signs of life. On Friday and Saturday nights it is packed with young people. Caesar’s, a symbolic old restaurant and hotel (where the famous salad was invented), just reopened, and one block over, rock and blues bands play at the music hall.

No, the tourists haven’t returned. It’s the locals, the people of Tijuana — who kept to themselves during the worst of the violence — reclaiming their territory.

“We have to change our image,” said Jaime Cháidez, a local journalist. “We can’t rely on tourism anymore. The city still stands, as noble as ever. It is surviving, growing, picking itself up.”

And for perhaps the first time in more than a century, the Tijuanans are driving that growth. In a sense, then, it is the very violence that plagues Mexico that has returned Tijuana to the people who live here.

A few days ago, a statue of Rubén Vizcaíno Valencia, a writer, teacher and promoter of Mexican culture, was unveiled. He is the first Tijuana native to be honored in this way, and there he stands, presiding over one of the hallways of the Centro Cultural Tijuana.

Federico Campbell is the author of the short story collection “Tijuana: Stories on the Border.”

__________

Full article and photo: http://www.nytimes.com/2010/10/17/opinion/17campbell.html

The Walls of Puebla

HOW has life in Mexico changed under the rising tide of drug violence? It’s difficult to say; it is what it is. It goes on. For long stretches of time, it is easy to forget about the violence. But then reality breaks through, and it becomes once again impossible to ignore.

All my life I have lived in Puebla, a city of more than one million inhabitants about 70 miles southeast of Mexico’s sprawling capital. Puebla has a reputation for being a moderately safe place to live (considering the general standard in the country today). Mexico City residents, called chilangos, have been moving here for years — particularly since so many were driven from the capital by the earthquake of 1985, which destroyed hundreds of buildings and killed thousands of people.

The famous have retreated here, too. At one time, Puebla was reported to be home to Mexico’s most-wanted man, the billionaire drug lord Joaquín Guzmán Loera, who has still not been apprehended. Other prominent traffickers have followed. Puebla is perceived as a place that is largely free from violence — which, surely, must be as attractive to a drug lord as it is to me — but it is known for being free from the authorities’ scrutiny as well.

There is lots of speculation about “agreements” between governors and certain cartels. The government turns a blind eye, and the cartel guarantees a level of peace. Many people believe these pacts to be the reason that states like Puebla are relatively “safe” while Mexico’s civil war rages around them.

Recently, though, the delicate balance has been threatened, as the authorities have started to crack down on traffickers. Last month, Sergio Villarreal Barragan, another important drug lord, was arrested in one of the city’s most posh residential neighborhoods. The government may have been emboldened by the results of the summer’s elections, which ended decades of rule in Puebla by the PRI (the Institutional Revolutionary Party).

But the sad thing is, nobody has much faith in the new coalition government. I met a taxi driver here whose children had moved to New York City: one of them works as a cook at a fancy restaurant in the Flatiron neighborhood and the other one cleans bathrooms near Penn Station. He hasn’t seen them in six years. We got to talking about the elections, and I asked him if he thought the new governor might change things.

“I don’t know,” he said. “I’ve always thought it was inevitable that politicians are thieves. But even so — they could still leave something behind, right? Do some good work just the same.”

It wasn’t an earthquake or drug violence that drove his children from home. Puebla expelled them because it couldn’t offer them any opportunities. Worst of all, they went to New York illegally. So now, they can never leave.

We too, in a sense, are trapped in Puebla. In my neighborhood, where the roads are still unpaved, we live behind high walls and electrified or barbed-wire fences. A friend of mine, an artist, lives in one of the city’s fanciest neighborhoods, behind an immense wall. Last weekend he was unable to enter or leave; the great drawback of the wall is that it has only one entrance. In this case, the opening had been blocked by the Naval Department for an operation of some sort. For the first time, my friend felt that he was living in a prison.

And no matter what lengths we go to in order to preserve our tranquillity, violence intrudes. Not long ago, robbers broke into the house across the street from mine. Luckily, my neighbor had a machete. He chased the intruders out, after hacking one of them in the arm.

That morning, his garage floor was still covered in blood. I asked him what they had taken.

“All sorts of things,” he said. “Tools, the television set, some things from the kitchen.”

“Do you think they’ll come back?”

“That’s the worst part of it. I can’t sleep in this house anymore, thinking that at any moment they might come back, with me and my daughters inside. Thank God nothing happened to us!”

I couldn’t help but think of something the chief of security said about the recent wave of arrests of drug traffickers in Puebla. “Puebla is a safe state to live in, and that is why they come here,” he said. We dream of happy endings, but sometimes I’m afraid that everything that could possibly happen in Puebla has already happened.

Pedro Ángel Palou is a novelist.

__________

Full article and photo: http://www.nytimes.com/2010/10/17/opinion/17palou.html

The Elusive Small-House Utopia

Every year, in conjunction with a big trade show, the magazine Builder creates something it calls its “concept home.” The house is an exhibition on a theme — the 2004 edition, for example, was called the Ultimate Family Home — but also a commercial venture. Attendees of the International Builders’ Show can walk through a model, and when the show is over, the concept is quite literally put to a test when the finished house is offered up for sale on the open market. During the last decade’s real estate boom, the annual demonstration kept up with the times: designs abounded with baronial features like colonnades, cathedral ceilings and observation towers, and they sometimes topped 6,000 square feet. But then the crash came, wiping out credit lines and shaking the industry’s confidence. For this year’s show, Boyce Thompson, the editorial director of Builder, wanted a look more attuned to curtailed appetites, so he came up with a concept that he called a Home for the New Economy.

The most salient specification of the house was its modest proportions. At around 1,700 square feet, it was the size of the average American home built in 1980. Since then, new houses have on average grown by more than 40 percent, as dens have expanded into great rooms, and tubs and sinks have multiplied. “Houses got too big, because people were chasing investment gains and there was cheap money, and the industry responded by building houses that were too large,” Thompson says. “So we really wanted to focus people’s attention on doing smaller, better homes.” He points to Census statistics that show a slight decline in the size of homes built over the past two years and to a much larger drop in the square footage of those that have just started construction, and suggests that the market may be headed toward a more austere norm.

That’s certainly debatable. If dissecting the causes of the housing market’s crash is a task for economists, predicting its future is a fuzzier matter of sociology. Will Americans re-evaluate cultural assumptions that equate ever-larger houses with success and stability? Or will they invest more in their lived environments, figuring that with the demise of the quick flip, they are now in for the long term? In the absence of much real market activity, imaginations are free to run wild. The Home for the New Economy is one such exercise in speculation, a proposal that the future lies in denser, more walkable, modestly scaled communities. Marianne Cusato, who designed the Home for the New Economy, sees it as a rebuke to the ethic of the McMansion. “We’re not going to go back to 2005,” she says. “What was built then is not going to come back, and this is not a bad thing. What we were building was so unsustainable, and it didn’t really meet our needs.”

Cusato, who is 36, started her career drawing up million-dollar mansions, but she has made a name for herself by going smaller, designing a 300-square-foot Katrina Cottage meant to be a replacement for the trailers the government set up after the 2005 hurricane. When Cusato sat down to devise the Home for the New Economy, she tried to consider how families actually use their living areas. She started with a simple, symmetrical three-bedroom plan, excising extraneous spaces — the seldom-used formal dining room, for instance — while enlarging windows wherever she could and adding a wraparound porch. The result was a house that was compact, comfortable, bright and energy-efficient.

There was just one problem. Usually the concept homes are put up in partnership with a local builder, but this year’s conference was held in Las Vegas, perhaps the nation’s most disastrous real estate market, and it was presumed no investor there would be willing to take a risk, even a small one, on the Home for the New Economy. So Builder put together an interactive Web site and displayed Cusato’s design at a booth inside the conference hall. The concept was a hit with at least one developer, LeylandAlliance, which was working on master-planned communities in New York, Connecticut, Virginia and South Carolina. Cusato began working with the company to bring her idea to life. The Home for the New Economy would be given the opportunity, after all, to meet the actual new economy.

The spirit of constraint that Cusato means to tap isn’t purely a product of the recession. It’s a cultural thread that runs from Henry David Thoreau’s 10-by-15-foot cabin next to Walden Pond all the way to the New Urbanist communities that began appearing in the 1980s, reacting to the spread of soulless suburbanization by trying to recapture a traditional small-town aesthetic. But the wider buying public has never found much appeal in the idea that it ought to make do with less. “Builders have tried quality rather than size, but they always fail,” says Witold Rybczynski, an architecture professor at the University of Pennsylvania. “The market always says: We don’t care. If you’re giving us a smaller house, we don’t want it.”

Rybczynski speaks from experience. In 1990, he and a partner came up with a design they called the Grow Home, a 14-foot-wide row house that could be constructed for as little as $35,000. His intent was to produce something along the lines of Frank Lloyd Wright’s famed Usonians, dwellings devised during the Great Depression as housing for the working man, which later became a model for the tiny tract homes of Levittown. Rybczynski built a Grow Home in Montreal and wrote an article about it for The Atlantic, in which he posited that the “abundant resources that accounted for the success of the large single-family suburban house — unlimited land, cheap transportation and plentiful energy — can no longer be taken for granted.” That was 20 years ago. While the Grow Home design proved to be a niche success, it didn’t change popular tastes, which kept inflating in defiance of all warnings about sustainability. By the market’s 2007 peak, the average American house had surpassed 2,500 square feet.

“This happened at the same time as household size declined, so it’s a little bizarre,” says John McIlwain, senior resident fellow at the Urban Land Institute. “But people seem to have liked the idea of having extra bedrooms and lots of room, big kitchens and big master bedrooms and big master baths. I think it’s just cultural, an expression of wealth.” But it’s not just that — studies have found that lower-class homes in the United States are also much larger than comparable residences in Europe.

“To me, the answer is that we subsidized it massively,” says Christopher B. Leinberger, a housing scholar at the Brookings Institution. “Over the last 30 years, we saw one of the largest social-engineering projects in the nation’s history.” The mortgage tax deduction encouraged a vast expansion of homeownership, while Fannie Mae and Freddie Mac created new pools of capital through securitization. The federal government kept building highways to serve ever-more distant suburbs, where local authorities often mandated large home and lot sizes in the belief that it would encourage the construction of affluent communities. Facing a widespread revolt against property taxes, many of the same municipalities began financing suburban infrastructure — roads, sewers and so on — through “impact fees” levied at the outset of the development process. This and other factors effectively inflated the cost of developable land. The homebuilding industry, which was in the process of consolidating into the hands of a dozen or so publicly traded corporations, passed on the added expense to consumers through higher home prices. But because they enjoyed such economies of scale when it came to construction, the major homebuilders could offer buyers an inducement in return: a lot more room.

Some in the industry argue that buyers never truly craved all that surplus space and took it only because that was the way the marketplace measured the worth of their investment. “We have to get out of the dollars-per-square-foot mind-set,” says Sarah Susanka, an architect and author of “The Not So Big House.” “I’ve been on a crusade to get people to think this way for years.” She says she was inspired to write her book — which recommends that home-buyers adjust their space expectations downward by a third and spend the money they save on creating a well-designed interior — after years of hearing the same sentiment from her clients: they were hoping to make a profit at resale, so they had to meet the market’s space expectations, not their own. Susanka says that the success of “The Not So Big House,” which has sold hundreds of thousands of copies since its publication 12 years ago, proves there is room for alternate conceptions of value.

In today’s marketplace, homebuilders are finding that smaller models are selling more reliably, and many are reassessing old marketing assumptions. “In many of our markets, there was an attitude that whatever you buy, you need to stretch, because in two years you’ll be able to sell it for double,” says Jeffrey Mezger, the chief executive of KB Home, one of the nation’s largest builders. With quick profit expectations dispelled, the average size of a KB house has fallen by almost 20 percent since the peak. The company recently introduced a line called the Open Series, homes that have flexible floor plans and low energy costs and run as small as 1,200 square feet.

The trend does not necessarily indicate, however, that Americans have suddenly decided they desire less. Homebuilders are shifting to compete with the cut-rate prices of foreclosures. Nowhere in KB’s marketing for the Open Series will you glimpse the word “small.” Even some enemies of the McMansion say it’s impossible to make a selling point of asceticism.

“Everybody hates the Calvinist sacrifice; they just don’t want to hear of it,” says the architect Andrés Duany, a founding father of the New Urbanist movement and a mentor of Marianne Cusato’s. Duany argues that the sprawling homes of the last decade actually met a need, albeit imperfectly, by reproducing internally what suburban communities lacked: an exercise room substitutes for a park, a home theater for the Main Street cinema. Buyers will only accept smaller homes, he says, if their surroundings compensate them. “The idea that you can promote things — that a developer is actually going to come out and say, ‘Marianne’s house is more virtuous,’ is ridiculous,” Duany says.

A real, live edition of the Home for the New Economy now stands near the end of a street in North Augusta, S.C., behind a white picket fence in a residential development called Hammond’s Ferry. The house owes its existence, in part, to the ideals of LeylandAlliance, which builds communities in the New Urbanist vein, but it’s also a practical concession to the distressed times. Hammond’s Ferry, which is situated along the Savannah River, hit the market in 2006, initially offering up a street of antebellum-style residences that ran as large as 4,000 square feet and cost an average of $500,000. The Home for the New Economy, priced at about half that much, represents a definite adjustment.

“If there was a tactical error we made, it was in the direction of going more toward the ‘wow,’ ” said Turner Simkins, the general manager of the project, as we stood atop a condominium building and surveyed a street of generously proportioned homes with columned two-story porches. On a color map of the Hammond’s Ferry master plan, the building was surrounded by some 450 beige rectangles, lots mainly set aside for single-family homes. So far, 88 houses have been built, 78 of them sold, and many of the lots on the map were still covered by uncleared bramble. Simkins said the homes that were finding buyers, unsurprisingly, were those priced below $250,000, but the developer had found it difficult to convince upscale Southern builders that there was money in modesty. Marianne Cusato’s house was there, as much as anything, to provide a different model.

Some skeptics have questioned whether the Home for the New Economy actually demonstrates much of a change. “I really think the whole thing is a marketing ploy,” Witold Rybczynski says. “Seventeen hundred square feet is not a small house.” Yet in North Augusta, the problem with the place appeared to be quite the opposite: people had trouble imagining how you could sell something so minuscule. One day I had lunch with Cusato and Thomas Blanchard Jr., whose firm was marketing Hammond’s Ferry. Blanchard explained that along the Georgia border, the crash had actually caused new houses to grow even larger. Crown Communities, an Atlanta-area builder, had snapped up suburban land at depressed prices and was erecting billboards that crowed, “Houses from $47 a square foot.” One of its largest models was called the Titan.

Cusato wondered aloud about the “sustainability” of such development and said that she would prefer for people to think of a house’s value not in terms of price per square foot but in its cost per month, taking into account the ongoing expenses of heating and cooling. “They’re meeting a segment of the market that needs to be served,” Blanchard responded, diplomatically. “You can’t create markets; you can only respond to them.”

One afternoon, Cusato and I made our way through Hammond’s Ferry to see the Home for the New Economy, walking down streets scaled to be bicycle friendly and passing Manuel’s Bread Café, where the French chef cooks with ingredients from an on-site organic farm. The houses, big and small, were bunched close together on lots that averaged a fifth of an acre. Building at relatively high density helps LeylandAlliance to offset the profit it forgoes by reserving a substantial portion of land for public amenities, like a waterfront park, running paths and several restored ponds. Still, the small-town feel doesn’t come cheap. For the price Cusato’s house was going for, $280,000, a North Augusta consumer could just as easily get something truly titanic, if less neighborly.

“There’s this machine we’ve created, this expectation that you have to have this huge home,” Cusato said. We were joined at the house by its builder, David Blair, who financed the construction himself because no bank would do so. We walked through the front door, into a sunlit 25-by-15-foot living and dining room, which was adjoined by an open kitchen.

Cusato had never been inside this particular version of her home — she had seen several others in upstate New York, where LeylandAlliance has another development — and so Blair took over the tour to show us a few alterations he made to the design. Cusato had intended it to be adaptable. Buyers, Blair said as we walked upstairs, are “used to the McMansions, and they say it’s too small.” At the top of the staircase was a loftlike open space, adjoined by two bedrooms and a windowless chamber that the builder said could be put to use as an office. Then, at the end of the hallway, there was a door. Blair opened it, and we entered an unfinished room, measuring 260 square feet. Blair said that it could be used for storage, but for a buyer willing to pay the asking price, he would finish it as a master bedroom. He explained that this allowed the house to be advertised as having four bedrooms and 2,000 square feet — in other words, not as a small house, but one close to medium-size. “People still have this . . . what I call a hang-up about price per square foot,” the builder said.

Blair’s stratagem had brought the Home for the New Economy’s price, according to the accepted metric, down from $160 to $140 a square foot. But that was still almost three times the rate the billboards were advertising elsewhere. Since the home’s debut in June, it had received a fair amount of public attention, but it still hadn’t sold, and Blair had already been forced to drop what he was asking. “To sell a house in this market is tough,” Blair said. “We’re going to market the lifestyle — the homes follow the lifestyle.” And indeed, at an open house later that evening, the Home for the New Economy filled up with neighbors from around Hammond’s Ferry: singles, retirees, families with kids. “I think it’s taken folks a while to get the concept of what we’re doing down here,” said Walker Posey, who lives in a three-bedroom house. “Your dollar can go a long way in South Carolina in terms of square footage. But quality of life is not about square footage.”

Of course, these were people who had already bought into that proposition — and it was unclear how many others would accept the trade-off of space for lifestyle. For more than 60 years, at least, American consumers have dreamed of one day having enough room to stretch out. It may take more than the shock of hard times to downsize that particular fantasy.

Andrew Rice, a contributing writer to the magazine, is the author of “The Teeth May Smile but the Heart Does Not Forget.”

__________

Full article and photo: http://www.nytimes.com/2010/10/17/magazine/17KeySmallHouse-t.html

Viva Chile! They Left No Man Behind.

A show of competence and determination inspires the world.

Chile! Viva Chile! If I had your flag, I would wave it today from the roof of my building, and watch my New York neighbors smile, nod and wave as they walked by. What a thing Chile has done. They say on TV, “Chile needed this.” But the world needed it. And the world knew it: That’s why they watched, a billion of them, as the men came out of the mine.

Why did the world need it? Because the saving of those men gave us something we don’t see enough, a brilliant example of human excellence—of cohesion, of united and committed action, of planning and execution, of caring. They used the human brain and spirit to save life. All we get all day, every day is scandal. But this inspired.

Viva Chile. They left no man behind. That is what our U.S. Army Rangers say, and our Marines: We leave no man behind. It has a meaning, this military motto, this way of operating. It means you are not alone, you are part of something. Your brothers are with you, here they come. Chile, in leaving no man behind, in insisting that the San José mine was a disaster area but not a tomb, showed itself to be a huge example of that little thing that is at the core of every society: a fully functioning family. A cohering unit that can make its way through the world.

“Viva Chile.” That is what they all said, one way or another, as they came out of the capsule, which was nicknamed the Phoenix. They could have nicknamed it the Lazarus, for those risen from the dead. Each one of the miners, in the 10 weeks they spent a half-mile deep in the Atacama Desert, would have known the odds. For two weeks, nobody even knew they were alive. Then this week there they were, one by one, returning to the surface. They must have thought, “Chile, you did not forget us. Chile, you could have said ‘An accident, a tragedy, the men are dead, let the men die.’ But you did not let the men die.” What a thing to know about your country.

Viva Chile. So many speak of faith but those miners, they had faith. A miner’s relative, as the men began to come up: “It is a miracle from God.” A miner got out of the capsule and got on his knees in front of the nation, saying prayers you know he promised, at the bottom of the mine, he would say, crossing himself twice, and holding up his arms in gratitude, surrender and awe. A miner, after he walked out of the capsule, described his personal experience: “I met God. I met the devil. God won.”

So many nations and leaders have grown gifted at talk. Or at least they talk a lot. News talk, politics talk, spin talk, selling talk: There are nations, and we at our worst are sometimes among them, whose biggest export seems to be chatter. But Chile this week moved the world not by talking but by doing, not by mouthing sympathy for the miners, but by saving them. The whole country—the engineers and technicians, the president, the government, the rescue workers, other miners, medics—set itself to doing something hard, specific, physical, demanding of commitment, precision and expertise. And they did it. Homer Hickam, the coal miner’s son turned rocket engineer who was the subject of the 1999 film “October Sky,” said Wednesday on MSNBC that it was “like a NASA mission.” Organized, thought through, “staying on the time line, sequential thinking.” “This is pretty marvelous,” he said. “This is Chile’s moon landing,” said an NBC News reporter.

Technology was used capably, creatively, and as a force for good. It has not everywhere been used so successfully in the recent past, another reason the world needed to see this. Last summer Americans watched professionals and the government seem helpless to stop the Gulf oil spill, a disaster every bit as predictable as a mine cave-in. For months we watched on TV the spewing of the oil into the sea. In Chile, the opposite. They showed live video of the rescue workers down in the shaft, getting the miners into the Phoenix. Our video said: Something is wrong here. Theirs said: Something is working here.

A government of a mature and complex democracy proved itself capable and competent. This was heartening and surprising. Governments are charged with doing certain vital and necessary things, but they are overburdened, distracted, so we no longer expect them to do them well. President Sebastián Piñera, in office five months when the mine caved in, saw the situation for what it was. Thirty-three men in a hole in the ground, in a mine that probably shouldn’t have been open. A disaster, a nation riveted.

What do you do? You throw yourself at the problem. You direct your government: This is the thing we do now. You say, “We will get the men.” You put your entire persona behind it, you put it all on the line, you gamble that your nation can do it. You trust your nation to do it. You do whatever possible to see your nation does it. And the day the rescues are to begin, you don’t show up and wring your hands so people can say “Ah, he knew it might not work, he was not unrealistic, he was telling us not to get our hopes up.” No, you stand there smiling with joy because you know it will work, you know your people will come through, you have utmost confidence. And so you go and radiate your joy from the first moment the rescue began and the first man came out straight through to the last man coming out. You stand. You stay.

It was the opposite of the governor of Louisiana during Katrina, projecting helplessness and loserdom, or the president flying over the storm, or the mayor holing up in a motel deciding this might be a good time for a breakdown. This was someone taking responsibility.

The event transcended class differences, social barriers, regional divides. The entire nation—rich, poor, all colors and ages—was united. Scientists and engineers gave everything to save men who’d lived rough, working-class lives. “Every one of them who came up was treated like the first one,” said a reporter on MSNBC.

What does it do to the children of a nation to see that? Everyone from Chile will be proud as they go through the world. “You saved the miners.” Chilean children will know, “We are the kind of people who get them out alive. We made up our mind to do it and we did.”

What a transformative event this is going to be for that nation.

A closing note, another contrast. President Obama this week told the New York Times, speaking of his first two years, that he realized too late “there’s no such thing as shovel-ready projects.” He’s helpless in the face of environmental impact statement law. But every law, even those, can be changed if you have the vision, will, instinct and guts to do it, if you start early, if you’re not distracted by other pursuits.

“Shovel ready.” Chile just proved, in the profoundest sense, it is exactly that. And in doing so, it moved the rough heart of the world.

Viva Chile.

Peggy Noonan, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704361504575551830474342068.html

The Last Reunion

Even in the age of Twitter, face-to-face interaction is what makes life worth living.

In 1986, some former classmates and I organized an informal 40th reunion for the graduating class of our public grammar school in Chicago. We were in our fifties back then, and many of our careers were going well. Most of us were still in good health and were enjoying being busy and watching our children grow up. The mood of the evening was one of optimism and hope. At one point a woman slipped her phone number into my pocket and told me to call her when I got to Los Angeles. She assured me of a good time.

Two weeks ago, some of us organized a similar reunion for those classmates we could locate. Although graduation was 64 years ago, most of us still live in Chicago. A few came from as far away as Denver and New York.

Our school, Nettelhorst, has gone through several phases. When we attended, it was an average middle-class neighborhood public school, and all students walked to it from their homes or apartment houses. Over time nearby residents became less enthusiastic about the place, so buses brought students in from distant neighborhoods to fill the vacancies. The school went into serious decline.

Over the past decade, though, a group of neighborhood parents developed a program to bring the school back to life—and they succeeded. The renaissance is described in a book, “How to Walk to School,” with contributions by Arne Duncan, the U.S. secretary of education who was Chicago’s superintendent of schools, and Rahm Emanuel, the former White House chief of staff who is now running for mayor of Chicago.

My classmates and I were amazed at how involved parents now are in the school. My mother gave me a dollar so she could join the Parent/Teachers Association, but she never intended to show up at the meetings. From her point of view, once her kid got inside the school building it was up to the teachers to get him educated. School had worked for generations without the meddling of parents.

Our reunion began at 4:30 on a Saturday afternoon. I was a little shocked to see some of my old friends. We’re now in our late seventies and time has left its mark. Some required walkers or canes, a few had gotten much heavier (that was expected), and several formerly clean-cut faces had gray beards. But the good-looking girls, while a bit wrinkled, brought back memories. Losers don’t come to reunions, they say, and some who had intended to come dropped out at the last minute. Once we were together, I could feel the old warmth in the room. We were glad to have survived this far and to see each other again.

We took a tour of the school (the fellow with the walker valiantly scaled the stairs). The old 1892 building was intact, but the interior had changed. The drab walls were brightly painted and our stationary desks with their inkwells—yes, inkwells—had been removed. A few murals painted during the Works Progress Administration of the 1930s had been restored. The porcelain signs above the “boys” and “girls” restrooms had been preserved, and the seemingly ancient wooden doors were still there.

Each of us had been asked to write a brief summary of our lives. Only my old girlfriend and I, it turned out, were still working. That desire to make every minute of life count may have been what brought us together 60 years ago, though neither of us knew it then. I got the feeling that while most of the others had what would be considered “a good life,” they felt somewhat unfulfilled; goals hadn’t been reached and time was getting short.

Each of us spent a few minutes talking to a videographer about the school’s impact on our lives. Most remembered their teachers more clearly than I did. Some talked about pranks and humiliations, but it was clear that their friendships had provided a safety net for the many emotional falls experienced in childhood. A genius may thrive as a loner, but most of us need a network.

Some say your personality is formed before your 10th birthday. I couldn’t tell in grammar school who was going to be successful. We all spent most of our time back then just trying to enjoy ourselves. I learned that you were unlikely to make it without the love of others, and that you had better believe in yourself and try your best to get the most out of your talent because life in the real world is going to be hard.

Toward the end of our gathering, I expressed sorrow that this would be our last reunion. We all had accepted that we only have a limited amount of time left, but we all thanked God that we had gotten together once more. In a world of cell phones, iPads and Twitter, we all seemed to know that it was face-to-face interaction that gives meaning to life.

Mr. Wien is vice chairman of Blackstone Advisory Partners LP.

__________

Full article: http://online.wsj.com/article/SB10001424052748704361504575552070347628394.html

Virtual lemmings

HUMANS are a gregarious lot. We appreciate company. And we appreciate our company appreciating us. One way to preserve this mutual appreciation is to emulate others. This gives rise to trends or, in a less charitable turn of phrase, herd mentality. We appear to be wired to find all manner of fads psychologically irresistible. Advertisers have long understood this. So have retailers, in increasingly tech-savvy ways. Some have been developing smart trolleys, which relay information on their contents to digital displays on shelves. These, in turn, would inform passing shoppers how many other customers are about to plump for the same item. And no self-respecting online venture would be complete without a constantly updated “most recommended” box (just look at this screen, to the right of this blog post).

It’s likely that such ruses work because it made evolutionary sense to copy neighbours, to avoid danger or find food and shelter. Sometimes, this atavistic tendency ends in tears, when it prompts us to act contrary to what is, on reflection, our self-interest. (Witness stock-market crashes, stampedes and Tamagotchi.) What made sense to a relatively homogeneous gaggle of several dozen nomads needn’t hold for millions of strangers.

As modern Homo sapiens migrates to the online savannah, trends have been spreading to ever greater numbers. So the wise men and women of our now-massive tribe have been tracking web versions of these ancient behaviours. However, most of the research (both on- and offline) to date has focused on either a small subset of users or the most successful herd-driven behaviours. Now Felix Reed-Tsochas of Oxford University’s Saïd Business School and Jukka-Pekka Onnela from Harvard University have broached the subject with an admirably broad brush.

As the pair report in the latest Proceedings of the National Academy of Sciences, they pored over (anonymous) data on the entire Facebook population in July and August 2007 (around 50m users at the time), and on all but a few of the 2,720 apps available for download in the same period (the 15 that didn’t make the cut were partly corrupted). This amounted to a total of some 104m app installations. At that time, a Facebook user’s apps were all visible to friends, who were also notified when any new app was downloaded (a practice Facebook has since abandoned). These notifications, along with a display of the total number of installations of each app, were the only ways apps were plugged, which permitted the researchers to control for the effects of external advertising. Any effects observed would thus be wholly attributable to social influence, not canny ad men.

Dr Reed-Tsochas and Dr Onnela duly discovered that the social networkers’ herd mentality was intact, with popular apps doing best, and the trendiest reaching stratospheric levels. A typical app was installed around 1,000 times, but the highest-ranked notched up an astonishing 12m users. What did come as something of a surprise, though, was that our inner lemming only kicked in once the app had breached a clear threshold rate of about 55 installations a day. Any fewer than that and users seemed oblivious to their friends’ preferences. Interestingly, after some serious number crunching, the researchers found that this cannot be put down purely to the network effect, ie, the idea that adopting a certain innovation only makes sense if enough other people have done so. Indeed, this effect appeared less pronounced than might have been expected.

Moreover, the data suggest that the sudden spike in installations doesn’t come about simply because a discovered threshold has been passed. This means the observed threshold rate is unlike an infectious disease’s basic reproduction number. (This is what epidemiologists call the average number of secondary cases caused by a typical infected individual in a population lacking immunity, with no efforts to control the outbreak.) In other words, it would be inaccurate to speak of an epidemic of popularity. Rather, Dr Reed-Tsochas and Dr Onnela suggest that two discrete behavioural patterns emerged. Users appeared to treat any app with more than 55 daily installations differently to those with fewer downloads. Under 55 daily installations, friend behaviour was an instrumental part of the decision to install. Over 55 daily installations, and friend behaviour didn’t matter one jot. Virtual lemmings are, it seems, discriminating in ways we still don’t quite comprehend. As is, no doubt, the offline troop.
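To see how such a two-regime pattern might look in practice, here is a minimal toy simulation in Python. It is purely illustrative: the population size, friendship structure and adoption probabilities are all invented, and only the 55-installations-a-day cutoff comes from the study itself. Below that rate, a simulated user’s chance of installing grows with the number of friends who already have the app; above it, friends’ choices are ignored and a flat, popularity-driven rate takes over.

import random

random.seed(42)

N_USERS = 2000        # toy population size (invented)
N_FRIENDS = 10        # friends per user (invented)
THRESHOLD = 55        # installs per day; the regime boundary reported in the study
BASE_P = 0.002        # baseline daily chance of installing (invented)
FRIEND_BOOST = 0.01   # extra chance per friend who already has the app (invented)
DAYS = 30

# Give each user a random list of friends (a crude stand-in for the real graph).
friends = {u: random.sample(range(N_USERS), N_FRIENDS) for u in range(N_USERS)}

installed = set()       # users who have the app
yesterday_installs = 0  # yesterday's count decides which regime applies today

for day in range(DAYS):
    new_today = []
    for user in range(N_USERS):
        if user in installed:
            continue
        if yesterday_installs > THRESHOLD:
            # Popular regime: friends' choices stop mattering; adoption runs
            # at a flat, visibility-driven rate (the figure here is invented).
            p = BASE_P + FRIEND_BOOST * 5
        else:
            # Obscure regime: each friend who already installed nudges the user.
            p = BASE_P + FRIEND_BOOST * sum(f in installed for f in friends[user])
        if random.random() < p:
            new_today.append(user)
    installed.update(new_today)
    yesterday_installs = len(new_today)
    print(f"day {day:2d}: {yesterday_installs:4d} new installs, {len(installed):5d} total")

Run for a few simulated weeks, adoption crawls while the app sits below the threshold and then accelerates sharply once it crosses it, which is roughly the shape of the two behavioural regimes described above.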

Babbage, The Economist

__________

Full article and photo: http://www.economist.com/blogs/babbage/2010/10/online_herd_instinct

Mother Nature Decoded

Mother Nature can look very chaotic. When we take a walk around a garden, every flowering bush can seem like a confusing explosion of blossoms and leaves, every tree like an impossibly complicated tangle of branches and foliage. How can we possibly draw these verdantly overflowing subjects without going blind, or crazy?

Well, the truth is that drawing or painting the actual complexity of a bouquet of flowers or a patch of forest with precision is a high-level observational and aesthetic task that, for the moment, we will leave to artists like Henri Fantin-Latour or Gustav Klimt. We can, however, take a single stem from that bouquet and choose single trees from that forest to look at and find a way to draw.

Left: “Roses,” by Henri Fantin-Latour. Right: “The Birch Wood,” by Gustav Klimt.

Many people learning to draw have an understandable anxiety about getting the proportions right. The skill of drawing proportionally comes from doing a lot of drawing, but also from combining the search for correct proportion with all the other ways that we think about our subject as we draw. In the following analysis of a flower I think you will see that responding to fundamental issues in looking at the flower helps us to draw the proportions of the plant much more easily than concentrating on each part of the plant as we come to it.

A good place to start is to acknowledge that this lily is a growing plant moving upwards to get nourishment from the sun and rain, and that its central stalk is a strong column that supports the out-springing stems, leaves and flowers.

In the first stage of the drawing I establish the direction of the stems and leaves and the centers of the petals as they bend away from their core. Two things strike me as I make these lines — one, that the curves of the stems and leaves have a rhythmic relationship with one another and two, that the petals form an almost symmetrical “fountain” as they burst from their center.

In choosing to start my drawing in this way I have decided on a priority — that establishing the basic growing direction of each of the elements gives me a more coherent foundation on which to build my drawing than starting with any particular detail. This is an enormously important insight in drawing — if you start each observation of a subject by deciding what is most important to its character, you will know where to begin your drawing and how to proceed. In the case of the pot you drew, its series of ellipses stacked on a central core was the key to its structure. In this lily you are considering something much more organic and subtle, but still with a logical structure.

In stage two of the drawing, using the first lines as a guide as to where the centers are, I choose to finish the two petals on the far side of the flower because they are the easiest to understand and give me more reference points to complete the other petals. Again, using the first lines as a guide to where the centers of the leaves are, I draw the twisting forms of the leaves, registering which is the underside and which the top side of each leaf.

In stage three of the drawing I have established enough of the architecture of the plant to draw the details — the thicknesses of the various parts of the stalk and stems, the stamens inside the flower and the unopened buds. At this point I could keep working on all the other aspects of shadow and atmosphere, but you could do that without me. I’ve walked you through the important part of the initial thinking where you might have been led astray by details.

Photo A
Photo B

Trees, with their hundreds of thousands of leaves and branches reaching every which way, are daunting subjects to draw. But just as we thought about the growing pattern of the lily to help us organize the details in our drawing, we can observe in each individual tree clues as to what makes them look like they do.

One of the most useful clues is the leaf of the tree itself, because examining it gives us a sense of the large shape of the tree and the kind of texture that the limbs and groups of leaves create. I have chosen two trees from the yard around my house to consider and to draw. They are both Japanese maples, one an unusual coral bark maple (Photo A), the second a more common split leaf red maple (Photo B). In the close-up photos of the leaves, you will see that the leaves of the coral bark are slightly pointier and thrust outwards more than the leaves of the red maple, which are softer-looking and curl downwards.

The different character of the leaves helps us to understand the overall shape and texture of each tree. The coral bark’s silhouette is spiky, with large indentations in its mass, echoing the vigorous pushing-outward energy of the leaves and the deep spaces between each of the leaves’ five segments. The feeling of the whole red maple is softer and rounder with fewer big gaps in its perimeter, just like the broader, downward-curling leaf.

Start your drawings by sketching out the large shapes, quickly giving the tree the general character of spikiness (in the case of the coral bark) or roundness and softness (in the case of the red maple). As you proceed to map out the big masses of leaves, keep using the appropriate kind of line, jagged and up-thrusting for the coral bark, and round and downward-arcing for the red maple. The coral bark’s leaves feel like they are arranged along the outer branches to form long, spear-like protrusions, whereas the red maple’s leaves feel like soft, rounded clumps.

Rather than trying to draw individual leaves, use the characteristic spiky or rounded line to evoke the whole texture of the tree. You are drawing the overall character of the tree, not a rendering for a horticultural textbook. Even when you use groups of lines to quickly develop large shadow areas, keep thinking of the kind of edges and shapes that you see in that particular tree. The red maple, for instance, has a kind of fussiness that you could reflect by using lines that jig and jag around and that have a sense of downward-pointedness. The coral bark’s shadow areas can be developed with patches of angular lines that move upward and outward.

Drawing the lily was a more precise observational exercise because we could look at a relatively simple subject up close. The tree drawings were a way to generalize the central character of a more complex and larger subject that we were observing from more of a distance.

However, with enough patience, and finding a tree you love to observe, it is quite possible to do a “portrait” of a particular tree, and I encourage you to find that special tree and to have the experience of drawing it with concentration and particularity. Mother Nature may often seem impossibly chaotic, but sometimes she can be pinned down. I include here a drawing of a tree that attracted me in its wintry starkness.

In the next column, we will draw two manufactured objects that require a somewhat different approach.

James McMullan, New York Times

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/10/14/mother-nature-decoded/

The Spoils of Happiness

In 1974, Robert Nozick, a precocious young philosopher at Harvard, scooped “The Matrix”:

Suppose there were an experience machine that would give you any experience you desired. Super-duper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life experiences? […] Of course, while in the tank you won’t know that you’re there; you’ll think that it’s all actually happening […] Would you plug in? (Anarchy, State, and Utopia, p. 3)

Nozick’s thought experiment — or the movie, for that matter — points to an interesting hypothesis: Happiness is not a state of mind.

“What is happiness?” is one of those strange questions philosophers ask, and it’s hard to answer. Philosophy, as a discipline, doesn’t agree about it. Philosophers are a contentious, disagreeable lot by nature and training. But the question’s hard because of a problematic prejudice about what kind of thing happiness might be. I’d like to diagnose the mistake and prescribe a corrective.

Nozick’s thought experiment asks us to make a decision about a possible circumstance. If things were thus-and-so, what would you do? Would you plug in? Some people dismiss the example because they think the very idea of that sort of decision, with respect to a hypothetical situation, is somehow bogus and can’t show anything. “These are all just hypothetical! Who cares? Get real!”

But the fact that a scenario is hypothetical doesn’t make it imponderable or worthless. Compare a simpler case: Suppose there were a fire in your building and you could either save your neighbors, who’d otherwise be trapped, by dragging them outside, or you could save your pencil, by holding on tight to that as you escaped, but not both. What would you do? I hope the answer’s easy. And that’s the point: We can, sometimes at least, answer this sort of question very easily. You are given a supposition and asked whether you would do this or that; you consider the hypothetical situation and give an answer. That’s what Nozick’s example is like.

So, would you plug in?

I think that for very many of us the answer is no. It’s Morpheus and Neo and their merry band of rebels who are the heroes of “The Matrix.” Cypher, who cuts the deal with the Agents, is a villain. And just as considering what we would grab in case of emergency can help us learn about what we value, considering whether to plug into the experience machine can help us learn about the sort of happiness we aspire to.

In refusing to plug in to Nozick’s machine, we express our deep-seated belief that the sort of thing we can get from a machine isn’t the most valuable thing we can get; it isn’t what we most deeply want, whatever we might think if we were plugged in. Life on the machine wouldn’t constitute achieving what we’re after when we’re pursuing a happy life. There’s an important difference between having a friend and having the experience of having a friend. There’s an important difference between writing a great novel and having the experience of writing a great novel. On the machine, we would not parent children, share our love with a partner, laugh with friends (or even smile at a stranger), dance, dunk, run a marathon, quit smoking, or lose 10 pounds in time for summer. Plugged in, we would have the sorts of experience that people who actually achieve or accomplish those things have, but they would all be, in a way, false — an intellectual mirage.

Now, of course, the difference would be lost on you if you were plugged into the machine — you wouldn’t know you weren’t really anyone’s friend. But what’s striking is that even that fact is not adequately reassuring. On the contrary, it adds to the horror of the prospect. We’d be ignorant, too — duped, to boot! We wouldn’t suffer the pain of loneliness, and that’s a good thing. But it would be better if we weren’t so benighted, if our experiences of friendship were the genuine article.

To put the point in a nutshell, watching your child play soccer for the first time is a great thing not because it produces a really enjoyable experience; on the contrary, what normally makes the experience so special is that it’s an experience of watching your child, playing soccer, for the first time. Sure it feels good — paralyzingly good. It matters, though, that the feeling is there as a response to the reality: the feeling by itself is the wrong sort of thing to make for a happy life.

Happiness is more like knowledge than like belief. There are lots of things we believe but don’t know. Knowledge is not just up to you; it requires the cooperation of the world beyond you — you might be mistaken. Still, even if you’re mistaken, you believe what you believe. Pleasure is like belief that way. But happiness isn’t just up to you. It also requires the cooperation of the world beyond you. Happiness, like knowledge, and unlike belief and pleasure, is not a state of mind.

Here’s one provocative consequence of this perspective on happiness. If happiness is not a state of mind, if happiness is a kind of tango between your feelings on one hand and events and things in the world around you on the other, then there’s the possibility of error about whether you’re happy. If you believe you’re experiencing pleasure or, perhaps especially, pain, then, presumably, you are. But the view of happiness here allows that “you may think you’re happy, but you’re not.”

One especially apt way of thinking about happiness — a way that’s found already in the thought of Aristotle — is in terms of “flourishing.” Take someone really flourishing in their new career, or really flourishing now that they’re off in college. The sense of the expression is not just that they feel good, but that they’re, for example, accomplishing some things and taking appropriate pleasure in those accomplishments. If they were simply sitting at home playing video games all day, even if this seemed to give them a great deal of pleasure, and even if they were not frustrated, we wouldn’t say they were flourishing. Such a life could not in the long term constitute a happy life. To live a happy life is to flourish.

A stark contrast is the life of the drug addict. He is often experiencing intense pleasure. But his is not a life we admire; it is often quite a pitiful existence. Well, one might think, it has its displeasures, being a user. There are withdrawal symptoms and that kind of thing—maybe he’s frustrated that he can’t kick the habit. But suppose that weren’t the case. Suppose the user never had to suffer any displeasure at all—had no interest in any other sort of life. How much better would that make it?

Better perhaps, but it would not be a happy life. It might be better than some others — lives filled with interminable and equally insignificant pain, for example. Better simple pleasure than pain, of course. But what’s wrong with the drug addict’s life is not just the despair he feels when he’s coming down. It’s that even when he’s feeling pleasure, that’s not a very significant fact about him. It’s just a feeling, the kind of pleasure we can imagine an animal’s having. Happiness is harder to get. It’s enjoyed after you’ve worked for something, or in the presence of people you love, or upon experiencing a magnificent work of art or performance — the kind of state that requires us to engage in real activities of certain sorts, to confront real objects and respond to them. And then, too, we shouldn’t ignore the modest happiness that can accompany pride in a clear-eyed engagement with even very painful circumstances.

We do hate to give up control over the most important things in our lives. And viewing happiness as subject to external influence limits our control — not just in the sense that whether you get to live happily might depend on how things go, but also in the sense that what happiness is is partly a matter of how things beyond you are. We might do everything we can to live happily — and have everything it takes on our part to be happy, all the right thoughts and feelings — and yet fall short, even unbeknownst to us. That’s a threatening idea. But we should be brave. Intellectual courage is as important as the other sort.

David Sosa is Louann and Larry Temple Centennial Professor in the Humanities at the University of Texas at Austin, where he is chair of the department of philosophy. He is editor of the journal Analytic Philosophy and author of numerous articles. He is now completing a book, “A Table of Contents,” about how we get the world in mind.

__________

Full article: http://opinionator.blogs.nytimes.com/2010/10/06/the-spoils-of-happiness/

In Defense of Naïve Reading

Remember the culture wars (or the ’80s, for that matter)? “The Closing of the American Mind,” “Cultural Literacy,” “ProfScam,” “Tenured Radicals”? Whatever happened to all that? It occasionally resurfaces, of course. There was the Alan Sokal/Social Text affair in 1996, and there are occasional flaps about winners of bad writing awards and so forth, but the national attention on universities and their mission and place in our larger culture has certainly shifted.

Those culture wars, however much more heat than light they generated, were at least a philosophical debate about values, about what an educated person should know, even about what college was for. All of that has been displaced in the last decade by another sort of discourse: stories about the staggering and growing expense of a college education; the national hysteria about getting one’s children into an “elite” school (or at least one the neighbors might have heard of); the declining impact of a college degree on one’s job prospects; rampant plagiarism; the vast multitudes of part-time or adjunct faculty, usually without health care or much of a future, now teaching our undergraduates; pronouncements on the end of the book, the end of attention spans, even the end of reading itself. But the question of what all this expense and anxiety might ultimately be about, or what the point of it all is, has not surfaced much lately.

It might now be possible to get a different sort of perspective on that discussion of 20 years ago. It is not as if anybody won. The underlying issues — especially the philosophical issues — have not been resolved. The debate, in the manner of many such public debates and old soldiers, just faded away.

While the public debates may have died down — and while there continue to be such methodological debates in sociology, anthropology and history — it is still the teaching of literature that generates the most academic, and especially non-academic, discussion. There are such debates in philosophy, too, but we tend to get a pass on this issue since debates about what philosophy is have always been one of philosophy’s main topics. 

Most students study some literature in college, and most of those are aware that they are being taught a lot of theory along with the literature. They understand that the latest theory is a broad social-science-like approach called “cultural studies,” or a particular version of it called “post-colonialism” or “new historicism.” And there are still plenty of gender-theoretical approaches that are prominent. But what often goes unremarked upon in the continuing (though less public) debate about such approaches is that, taking in the longue durée, this instability is in itself completely unremarkable.

The ’80s debaters tended to forget that the teaching of vernacular literature is quite a recent development in the long history of the university. (The same could be said about the relatively recent invention of art history or music as an academic research discipline.) So it is not surprising that, in such a short time, we have not yet settled on the right or commonly agreed upon way to go about it. The fact that the backgrounds and expectations of the student population have changed so dramatically so many times in the last 100 years has made the problem even more difficult.

In the case of vernacular literature, there was from the beginning some tension between the reader’s point of view and what “professional scholarship” required. Naturally enough, the first models were borrowed from the way “research” was done on the classical texts in Greek and Latin that made up most of a student’s exposure to literature until the end of the 19th century. Philology, with its central focus on language, was once the master model for all the sciences and it was natural for teachers to try to train students to make good texts, track down sources, learn about conflicting editions and adjudicate such controversies. Then, as a kind of natural extension of these practices, came historical criticism, national language categorization, work on tracing influences and patronage, all contributing to the worry about classifying various schools, movements or periods. Then came biographical criticism and the floodgates were soon open wide: psychoanalytic criticism, new or formal criticism, semiotics, structuralism, post-structuralism, discourse analysis, reader response criticism or “reception aesthetics,” systems theory, hermeneutics, deconstruction, feminist criticism, cultural studies. And so on.

Clearly, poems and novels and paintings were not produced as objects for future academic study; there is no a priori reason to think that they could be suitable objects of “research.” By and large they were produced for the pleasure and enlightenment of those who enjoyed them. But just as clearly, the teaching of literature in universities — especially after the 19th-century research model of Humboldt University of Berlin was widely copied — needed a justification consistent with the aims of that academic setting: that fact alone has always shaped the way vernacular literature has been taught.

The main aim was research: the creation, accumulation and transmission of knowledge. And the main model was the natural science model of collaborative research: define problems, break them down into manageable parts, create sub-disciplines and sub-sub-disciplines for the study of these, train students for such research specialties and share everything. With that model, what literature and all the arts needed was something like a general “science of meaning” that could eventually fit that sort of aspiration. Texts or art works could be analyzed as exemplifying and so helping establish such a science. Results could be published in scholarly journals, disputed by others, consensus would eventually emerge and so on. And if it proved impossible to establish anything like a pure science of exclusively literary or artistic or musical meaning, then collaboration with psychoanalysis or anthropology or linguistics would be welcomed.

Finally, complicating the situation is the fact that literature study in a university education requires some method of evaluation of whether the student has done well or poorly. Students’ papers must be graded and no faculty member wants to face the inevitable “that’s just your opinion” unarmed, as it were. Learning how to use a research methodology, providing evidence that one has understood and can apply such a method, is understandably an appealing pedagogy.

None of this is in itself wrong-headed or misguided, and the absence of any consensus at this still early stage is not surprising. But there are two main dangers created by the inevitable pressures that the research paradigm for the study of literature and the arts within a modern research university brings with it.

First, while it is important and quite natural for literary specialists to try to arrive at a theory of what they do (something that conservatives in the culture wars often refused to concede), there is no particular reason to think that every aspect of the teaching of literature or film or art or all significant writing about the subject should be either an exemplification of how such a theory works or an introduction to what needs to be known in order to become a professor of such an enterprise. This is so for two all-important reasons.

First, literature and the arts have a dimension unique in the academy, not shared by the objects studied, or “researched” by our scientific brethren. They invite or invoke, at a kind of “first level,” an aesthetic experience that is by its nature resistant to restatement in more formalized, theoretical or generalizing language. This response can certainly be enriched by knowledge of context and history, but the objects express a first-person or subjective view of human concerns that is falsified if wholly transposed to a more “sideways on” or third person view. Indeed that is in a way the whole point of having the “arts.”

Likewise — and this is a much more controversial thesis — such works also can directly deliver a kind of practical knowledge and self-understanding not available from a third person or more general formulation of such knowledge. There is no reason to think that such knowledge — exemplified in what Aristotle said about the practically wise man (the phronimos) or in what Pascal meant by the difference between l’esprit géométrique and l’esprit de finesse — is any less knowledge because it cannot be so formalized or even taught as such. Call this a plea for a place for “naïve” reading, teaching and writing — an appreciation and discussion not mediated by a theoretical research question recognizable as such by the modern academy.

This is not all that literary study should be: we certainly need a theory about how artistic works mean anything at all, why, or in what sense, reading a novel, say, is different from reading a detailed case history. But there is also no reason to dismiss the “naïve” approach as mere amateurish “belle lettrism.” Naïve reading can be very hard; it can be done well or poorly; people can get better at it. And it doesn’t have to be “formalist” or purely textual criticism. Knowing as much as possible about the social world it was written for, about the author’s other works, his or her contemporaries, and so forth, can be very helpful.

Second, the “research model” pressures described are beginning to have another poorly thought-out influence. It is quite natural (to some, anyway) to assume that eventually not just the model of the sciences, but the sciences themselves will provide the actual theory of meaning that researchers in such fields will need. One already sees the “application” of “results” from the neurosciences and evolutionary biology to questions about why characters in novels act as they do or what might be responsible for the moods characteristic of certain poets. People seem to be unusually interested in what area of the brain is active when Rilke is read to a subject. The great problem here is not so much a new sort of culture clash (or the victory of one of C.P. Snow’s “two cultures”) but that such applications are spectacular examples of bad literary criticism, not good examples of some revolutionary approach.

If one wants to explain why Dr. Sloper in Henry James’s novel, “Washington Square,” seems so protective yet so cold about his daughter Catherine’s dalliance with a suitor, one has to begin by entertaining the good evidence provided in the novel — that he enjoys the power he has over her and wants to keep it; that he fears the loneliness that would result if she leaves; that he knows the suitor is a fortune hunter; that Catherine has become a kind of surrogate wife for him and he regards her as “his” in that sense; that he hates the youth of the suitor; that he hates his daughter for being less accomplished than he would have liked; and that only some of this is available to his awareness, even though all true and playing some role. And one would only be getting started in fashioning an account of what his various actions mean, what he intended, what others understood him to be doing, all before we could even begin looking for anything like “the adaptive fitness” of “what he does.”

If being happy to remain engrossed in the richness of such interpretive possibilities is “naïve,” then so be it.

Robert B. Pippin is the Evelyn Stefansson Nef Distinguished Service Professor in the John U. Nef Committee on Social Thought, the Department of Philosophy, and the College at the University of Chicago. He is the author of several books on German idealism and on theories of modernity. His next book, on the problem of fate in American film noir, will appear in 2011.

__________

Full article and photo: http://opinionator.blogs.nytimes.com/2010/10/10/in-defense-of-naive-reading

How Handwriting Trains the Brain

Forming Letters Is Key to Learning, Memory, Ideas

Ask preschooler Zane Pike to write his name or the alphabet, then watch this 4-year-old’s stubborn side kick in. He spurns practice at school and tosses aside workbooks at home. But Angie Pike, Zane’s mom, persists, believing that handwriting is a building block to learning.

She’s right. Using advanced tools such as magnetic resonance imaging, researchers are finding that writing by hand is more than just a way to communicate. The practice helps with learning letters and shapes, can improve idea composition and expression, and may aid fine motor-skill development.

It’s not just children who benefit. Adults studying new symbols, such as Chinese characters, might enhance recognition by writing the characters by hand, researchers say. Some physicians say handwriting could be a good cognitive exercise for baby boomers working to keep their minds sharp as they age.

Studies suggest there’s real value in learning and maintaining this ancient skill, even as we increasingly communicate electronically via keyboards big and small. Indeed, technology often gets blamed for handwriting’s demise. But in an interesting twist, new software for touch-screen devices, such as the iPad, is starting to reinvigorate the practice.

Most schools still include conventional handwriting instruction in their primary-grade curriculum, but today that amounts to just over an hour a week, according to Zaner-Bloser Inc., one of the nation’s largest handwriting-curriculum publishers. Even at institutions that make it a strong priority, such as the private Brearley School in New York City, “some parents say, ‘I can’t believe you are wasting a minute on this,'” says Linda Boldt, the school’s head of learning skills.

Recent research illustrates how writing by hand engages the brain in learning. During one study at Indiana University published this year, researchers invited children to man a “spaceship,” actually an MRI machine using a specialized scan called “functional” MRI that spots neural activity in the brain. The kids were shown letters before and after receiving different letter-learning instruction. In children who had practiced printing by hand, the neural activity was far more enhanced and “adult-like” than in those who had simply looked at letters.

“It seems there is something really important about manually manipulating and drawing out two-dimensional things we see all the time,” says Karin Harman James, assistant professor of psychology and neuroscience at Indiana University who led the study.

Adults may benefit similarly when learning a new graphically different language, such as Mandarin, or symbol systems for mathematics, music and chemistry, Dr. James says. For instance, in a 2008 study in the Journal of Cognitive Neuroscience, adults were asked to distinguish between new characters and a mirror image of them after producing the characters using pen-and-paper writing and a computer keyboard. The result: For those writing by hand, there was stronger and longer-lasting recognition of the characters’ proper orientation, suggesting that the specific movements memorized when learning how to write aided the visual identification of graphic shapes.

Other research highlights the hand’s unique relationship with the brain when it comes to composing thoughts and ideas. Virginia Berninger, a professor of educational psychology at the University of Washington, says handwriting differs from typing because it requires executing sequential strokes to form a letter, whereas keyboarding involves selecting a whole letter by touching a key.

She says pictures of the brain have illustrated that sequential finger movements activated massive regions involved in thinking, language and working memory—the system for temporarily storing and managing information.

And one recent study of hers demonstrated that in grades two, four and six, children wrote more words, faster, and expressed more ideas when writing essays by hand versus with a keyboard.

Even in the digital age, people remain enthralled by handwriting for myriad reasons—the intimacy implied by a loved one’s script, or what the slant and shape of letters might reveal about personality. During actress Lindsay Lohan’s probation violation court appearance this summer, a swarm of handwriting experts proffered analysis of her blocky courtroom scribbling. “Projecting a false image” and “crossing boundaries,” concluded two on celebrity news and entertainment site hollywoodlife.com. Beyond identifying personality traits through handwriting, called graphology, some doctors treating neurological disorders say handwriting can be an early diagnostic tool.

“Some patients bring in journals from over the years, and you can see dramatic change from when they were 55 and doing fine and now at 70,” says P. Murali Doraiswamy, a neuroscientist at Duke University. “As more people lose writing skills and migrate to the computer, retraining people in handwriting skills could be a useful cognitive exercise.”

In high schools, where laptops are increasingly used, handwriting still matters. In the essay section of SAT college-entrance exams, scorers unable to read a student’s writing can assign that portion an “illegible” score of 0.

Even legible handwriting that’s messy can have its own ramifications, says Steve Graham, professor of education at Vanderbilt University. He cites several studies indicating that good handwriting can take a generic classroom test score from the 50th percentile to the 84th percentile, while bad penmanship could tank it to the 16th. “There is a reader effect that is insidious,” Dr. Graham says. “People judge the quality of your ideas based on your handwriting.”

Handwriting-curriculum creators say they’re seeing renewed interest among parents looking to hone older children’s skills—or even their own penmanship. Nan Barchowsky, who developed the Barchowsky Fluent Handwriting method to ease transition from print-script to joined cursive letters, says she’s sold more than 1,500 copies of “Fix It … Write” in the past year.

Some high-tech allies also are giving the practice an unexpected boost through hand-held gadgets like smartphones and tablets. Dan Feather, a graphic designer and computer consultant in Nashville, Tenn., says he’s “never adapted well to the keypads on little devices.” Instead, he uses a $3.99 application called “WritePad” on his iPhone. It accepts handwriting input with a finger or stylus, then converts it to text for email, documents or Twitter updates.

And apps are helping Zane Pike—the 4-year-old who refused to practice his letters. The Cabot, Ark., boy won’t put down his mom’s iPhone, where she’s downloaded a $1.99 app called “abc PocketPhonics.” The program instructs Zane to draw letters with his finger or a stylus; correct movements earn him cheering pencils.

“He thinks it’s a game,” says Angie Pike.

Similarly, kindergartners at Harford Day School in Bel Air, Md., are taught to write on paper but recently also began tracing letter shapes on the screen of an iPad using a handwriting app.

“Children will be using technology unlike I did, and it’s important for teachers to be familiar with it,” says Kay Crocker, the school’s lead kindergarten teacher. Regardless of the input method, she says, “You still need to be able to write, and someone needs to be able to read it.”

Gwendolyn Bounds, Wall Street Journal

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748704631504575531932754922518.html

Hatching the Pot

In the last column, I discussed ellipses and how drawing them involves the fluid, fairly fast movement of the hand, letting your reflexes carry out the kind of rounded shape you intend to make. Now we’ll move on to shading the pot that we previously described in simple outline, using curving lines that are like segments of the ellipse.

These are what I think of as “cat stroking” lines — curves that start gently, reach a crescendo of pressure and then fade out at the end. They enclose forms sensuously and are enormously useful in describing all kinds of bulging, rolling, bumpy subjects. In using these curved lines to shade the pot, we will not only describe the shadow but, because the lines curve around the pot, we will be accentuating its actual form. In my example of cross-hatching I show that, in order to avoid a “clotted” effect, the lines are made at different angles. I have drawn my examples in pen and ink to make the images clearer, but you might want to draw in a 2B or 4B pencil.

Now that the pot has been illuminated with a strong directional light, we can study how that light falls on the object, the angles that the shadows make and how to use lines to shade the drawing. Either using the outline drawing you did last week or drawing the pot again, follow along with these steps to delineate the shadows on the pot.

In the first stage of the drawing, I show that the light is coming from the right and slightly in front of the pot. This means that the basic pattern of shadow on the outside of the pot falls on the left, but the shadow on the inside of the pot is on the right.

Next, in order to show more of the complexity of the light and shadow, I begin to use cross-hatching, lines that go against the direction of my first lines, but that still consider the form of the pot. I notice where light catches on the shoulder of the pot, creating a little arc of illumination that “pushes” the shadow further to the left. I also show how the shadow arcs under the bottom of the pot, describing the way the shape rolls under towards the base. I also note how the light ground on which the pot sits reflects light back onto the pot, creating a “core” shadow that is darker not at the back edge, where you might expect it, but slightly in from the edge. This reflected light, and the “core” shadow it creates, is a frequent phenomenon in round subjects.

In the last stage of the drawing, I use very pale lines in the light part of the pot on the right to dramatize the very lightest part of the pot on the “shoulder.” In other words, I am saving paper white for the one dramatically lit part of the pot. I finish the drawing by shading the shadow on the ground behind the pot, accentuating the flatness of the ground by using straighter parallel lines.

In order to move on to a slightly more complicated subject that still involves ellipses, I have photographed a pitcher. Its pouring lip and handle add two elements that will expand your analytic and drawing skills. Either use the pitcher in the photo as a subject or find a similar object to observe in three dimensions and draw, delineating both its basic structure and the effect of light falling on it.

I have made a basic drawing of the pitcher that may help you as a guide in getting started. Note that the pouring lip and the handle share the same axis; in other words, they line up. Also note that in my drawing I am using several “feeling out” strokes to get to the bulging sides of the pitcher. I encourage you to do this: to internalize the feeling of roundness as you make the stroke, so as to move beyond the more neutral feeling of simply reproducing the curve.

This etching is by Giorgio Morandi, one of my favorite artists. As you can see, his style of drawing rounded shapes is not consistent with my demonstration. This dichotomy illustrates an issue pertinent to every example I use from the history of fine art, which is that the examples I give will expand the ideas of the lesson rather than simply reinforce it.

For instance, I have demonstrated how to use curved lines in making the drawing of the pot because those kinds of lines help you to feel out the pot’s roundness and because the subtlety of making those lines is a way for you to engage the sensuousness of your reflexes. And now I show you a Morandi still life where he uses straight lines to describe round forms. How confusing! The Morandi etching depends for much of its contemplative beauty on the game the artist plays between the implied depth in which the objects exist and the texture of the lines that bring the drawing back to the surface.

When you are learning to draw, it is useful to understand the most obvious methods of achieving form and proportion, but when your idea of what you want to do in your drawing is strong enough, any line, texture or implement can achieve your vision.

James McMullan, New York Times

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/09/30/hatching-the-pot/

Why So Many People Can’t Make Decisions

Some people meet, fall in love and get married right away. Others can spend hours in the sock aisle at the department store, weighing the pros and cons of buying a pair of wool argyles instead of cotton striped.

Seeing the world as black and white, in which choices seem clear, or shades of gray can affect people’s path in life, from jobs and relationships to which political candidate they vote for, researchers say. People who often have conflicting feelings about situations—the shades-of-gray thinkers—have more of what psychologists call ambivalence, while those who tend toward unequivocal views have less ambivalence.

High ambivalence may be useful in some situations, and low ambivalence in others, researchers say. And although people don’t fall neatly into one camp or the other, in general, individuals who tend toward ambivalence do so fairly consistently across different areas of their lives.

__________

Different Strokes

PEOPLE WHO SEE THE WORLD AS BLACK AND WHITE TEND TO…

  • Speak their mind or make quick decisions.
  • Be more predictable in making decisions (e.g., who they vote for).
  • Be less anxious about making wrong choices.
  • Have relationship conflicts that are less drawn out.
  • Be less likely to consider others’ points of view.

PEOPLE WHO SEE THE WORLD IN SHADES OF GRAY TEND TO…

  • Procrastinate or avoid making decisions if possible.
  • Feel more regret after making decisions.
  • Be thoughtful about making the right choice.
  • Stay longer in unhappy relationships.
  • Appreciate multiple points of view.

__________

For decades psychologists largely ignored ambivalence because they didn’t think it was meaningful. The way researchers studied attitudes—by asking participants where they fell on a scale ranging from positive to negative—also made it difficult to tease apart who held conflicting opinions from those who were neutral, according to Mark Zanna, a University of Waterloo professor who studies ambivalence. (Similarly, psychologists long believed it wasn’t necessary to examine men and women separately when studying the way people think.)

Now, researchers have been investigating how ambivalence, or lack of it, affects people’s lives, and how they might be able to make better decisions. Overall, thinking in shades of gray is a sign of maturity, enabling people to see the world as it really is. It’s a “coming to grips with the complexity of the world,” says Jeff Larsen, a psychology professor who studies ambivalence at Texas Tech University in Lubbock.

In a recent study, college students were asked to write an essay coming down on one side or another of a contentious issue, regarding a new labor law affecting young adults, while other groups of students were allowed to write about both sides of the issue. The students forced to choose a side reported feeling more uncomfortable, even physically sweating more, says Frenk van Harreveld, a social psychologist at the University of Amsterdam who studies how people deal with ambivalence.

If there isn’t an easy answer, ambivalent people, more than black-and-white thinkers, are likely to procrastinate and avoid making a choice, for instance about whether to take a new job, says Dr. van Harreveld. But if after careful consideration an individual still can’t decide, one’s gut reaction may be the way to go. Dr. van Harreveld says in these situations he flips a coin, and if his immediate reaction when the coin lands on heads is negative, then he knows what he should do.

Researchers can’t say for sure why some people tend toward greater ambivalence. Certain personality traits play a role—people with a strong need to reach a conclusion in a given situation tend toward black-and-white thinking, while ambivalent people tend to be more comfortable with uncertainty. Individuals who are raised in environments where their parents are ambivalent or unstable may grow to experience anxiety and ambivalence in future relationships, according to some developmental psychologists.

Culture may also play a role. In Western cultures, simultaneously seeing both good and bad “violates our world view, our need to put things in boxes,” says Dr. Larsen. But in Eastern philosophies, it may be less problematic because there is a recognition of dualism, that something can be one thing as well as another.

One of the most widely studied aspects of ambivalence is how it affects thinking. Because of their strongly positive or strongly negative views, black-and-white thinkers tend to be quicker at making decisions than highly ambivalent people. But if they get mired in one point of view and can’t see others, black-and-white thinking may prompt conflict with others or unhealthy thoughts or behaviors.

People with clinical depression, for instance, often get mired in a negative view of the world. They may interpret a neutral action like a friend not waving to them as meaning that their friend is mad at them, and have trouble thinking about alternative explanations.

Ambivalent people, on the other hand, tend to systematically evaluate all sides of an argument before coming to a decision. They carefully scrutinize the evidence that is presented to them, making lists of pros and cons, and rejecting overly simplified information.

Ambivalent individuals’ ability to see all sides of an argument and feel mixed emotions appears to have some benefits. They may be better able to empathize with others’ points of view, for one thing. And when people are able to feel mixed emotions, such as hope and sadness, they tend to have healthier coping strategies, such as when a spouse passes away, according to Dr. Larsen. They may also be more creative because the different emotions lead them to consider different ideas that they might otherwise have dismissed.

People waffling over a decision may benefit from paring down the number of details they are weighing and instead selecting one or a few important values on which to base their decision, says Richard Boyatzis, a professor in organizational behavior, psychology and cognitive science at Case Western Reserve University.

For example, in making a decision about whether to buy a costly piece of new medical equipment, a hospital executive may weigh the expense, expertise needed to operate it and space requirements against its effectiveness. But ultimately, Dr. Boyatzis says, in order to avoid getting mired in a prolonged debate, the executive may decide on a core value—say, how well the equipment works for taking care of patients—that can be used to help make the decision.

In the workplace, employees who are highly ambivalent about their jobs are more erratic in job performance; they may perform particularly well some days and poorly other times, says René Ziegler, a professor of social and organizational psychology at the University of Tübingen in Germany whose study of the subject is scheduled for publication in the Journal of Applied Social Psychology. Positive feedback for a highly ambivalent person, such as a pay raise, will boost their job performance more than for someone who isn’t ambivalent about the job, he says.

Every job has good and bad elements. But people who aren’t ambivalent about their job perform well if they like their work and poorly if they don’t. Dr. Ziegler suggests that black-and-white thinkers tend to focus on key aspects of their job, such as how much they are getting paid or how much they like their boss, and not the total picture in determining whether they are happy at work.

Black-and-white thinkers similarly may recognize that there are positive and negative aspects to a significant relationship. But they generally choose to focus only on some qualities that are particularly important to them.

By contrast, people who are truly ambivalent in a relationship can’t put the negative out of their mind. They may worry about being hurt or abandoned even in moments when their partner is doing something nice, says Mario Mikulincer, dean of the New School of Psychology at the Interdisciplinary Center Herzliya in Israel.

Such shades-of-gray people tend to have trouble in relationships. They stay in relationships longer, even abusive ones, and experience more fighting. They are also more likely to get divorced, says Dr. Mikulincer.

Recognizing that a partner has strengths and weaknesses is normal, says Dr. Mikulincer. “A certain degree of ambivalence is a sign of maturity,” he says.

Shirley S. Wang, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703694204575518200704692936.html

Ditch Your Laptop, Dump Your Boyfriend

Advice for freshmen from the people who actually grade their papers and lead their class discussions. 

College is your chance to see what you’ve been missing, both in the outside world and within yourself. Use this time to explore as much as you can.

Take classes in many different subjects before picking your major. Try lots of different clubs and activities. Make friends with people who grew up much poorer than you, and others much richer. Date someone of a different race or religion. (And no, hooking up at a party doesn’t count.) Spend a semester abroad or save up and go backpacking in Europe or Asia.

Somewhere in your childhood is a gaping hole. Fill this hole. Don’t know what classical music is all about? That’s bad. Don’t know who Lady Gaga is? That’s worse. If you were raised in a protected cocoon, this is the time to experience the world beyond.

College is also a chance to learn new things about yourself. Never been much of a leader? Try forming a club or a band.

The best things I did in college all involved explorations like this. I was originally a theater major, but by branching out and taking a math class I discovered I actually liked math, and I enjoyed hanging out with technical people.

By dabbling in leadership — I ran the math club and directed a musical — I learned how to formulate a vision and persuade people to join me in bringing it to life. Now I’m planning to become an entrepreneur after graduate school. It may seem crazy, but it was running a dinky club that set me on the path to seeing myself as someone who could run a business.

Try lots of things in college. You never know what’s going to stick.

— TIM NOVIKOFF, Ph.D. student in applied mathematics at Cornell

• • •

Chances are, if you are taking the time to read this advice, you already have the quality necessary to undertake the intellectual challenges of a college education — a seriousness of purpose. What I want to speak to is much more mundane, but it will make your transition into college easier: amid the thrill and vertigo of change, be kind to and patient with yourself.

Remember to take some time away from campus — from the demands of schoolwork and the trappings of the college social life. Explore the town you’re living in. Meet people who are not professors or fellow students. If you spend all of your time on school grounds, then it becomes too easy for the criticism from an occasional unkind professor or the conflict with a roommate to take on a monstrous scale. And to let that happen is to suffer from a mistake of emphasis; college should be a part of, but not the entire scope of, your existence for the next few years.

In Virginia Woolf’s novel “Mrs. Dalloway,” characters are troubled and traumatized by their inability to maintain a proper “sense of proportion”; ordinary tasks — life itself, for one of the characters — become outsized and unmanageable.

I mention this not because I think your situation will be so dire if you don’t heed my advice, but mostly because “Mrs. Dalloway” is a great read, and I highly recommend it.

— WILLIE X. LIN, student in the M.F.A. program in creative writing at Washington University in St. Louis

• • •

Universities are places where facts are made. Research is a collaborative process, so scientists need lab assistants, humanities researchers need library aides and graduate students need all the help they can get. A curious, competent undergraduate can always find work assisting a researcher.

Regardless of the field and the specific project, helping a researcher helps you. The obvious benefits are new skills and invaluable experience. But there is also something powerful in seeing how the right experimental or analytical approach can sort through a mess of observations and opinion to identify real associations between phenomena, like a gene variant and a disease, or a financial tool and the availability of credit. With a window into the world of research, you will find yourself thinking more critically, accepting fewer assertions at face value and perhaps developing an emboldened sense of what you can accomplish.

Most important: research experience shows you how knowledge is produced. There are worse ways to prepare for life in an information age.

— AMAN SINGH GILL, Ph.D. student in the ecology and evolution department at Stony Brook University

• • •

Devices have become security blankets. Take the time to wean yourself.

Start by scheduling a few Internet-free hours each day, with your phone turned off. It’s the only way you’ll be able to read anything seriously, whether it’s Plato or Derrida on Plato. (And remember, you’ll get more out of reading Derrida on Plato if you read Plato first.) This will also have the benefit of making you harder to reach, and thus more mysterious and fascinating to new friends and acquaintances.

When you leave your room for class, leave the laptop behind. In a lecture, you’ll only waste your time and your parents’ money, disrespect your professor and annoy whoever is trying to pay attention around you by spending the whole hour on Facebook.

You don’t need a computer to take notes — good note-taking is not transcribing. All that clack, clack, clacking … you’re a student, not a court reporter. And in seminar or discussion sections, get used to being around a table with a dozen other humans, a few books and your ideas. After all, you have the rest of your life to hide behind a screen during meetings.

— CHRISTINE SMALLWOOD, Ph.D. student in English and American literature at Columbia

• • •

First-years are under an unbelievable pressure not only to succeed, but to excel in college. They walk into a university already feeling guilty that they don’t know what they want to major in, or what their career path is going to be. But be comfortable with the fact that you don’t know anything. Nobody does.

During my first week in art school, I sat in a dark lecture hall as a professor asked questions I couldn’t answer and showed slides I couldn’t identify. I felt as if I was the only one in the room who didn’t have a clue. So, when my drawing teacher invited several of us students to a potluck dinner at her house, I was still worried that I was out of my league. But in this casual setting, everyone opened up, and I was able to talk about art in the most relaxed and personal way.

As we returned to the dorms in the back of our now-favorite professor’s pickup truck, I remember looking up at the night sky and the trees whizzing by and thinking, “This is what college is supposed to feel like!” Relax and enjoy the ride.

— EVAN LaLONDE, student in the M.F.A. program in contemporary art practice at Portland State University

• • •

During the first few months of college, everyone wants to make friends. But no one knows how to do it, so everyone is really friendly all the time. You are likely to find yourself feigning interest in and enthusiasm for a lot of things to ingratiate yourself with your peers. “You’re a semiprofessional mime? So cool. Where are you going out tonight?”

Eventually, mercifully, it all shakes out. Parties, activities, dorms and classes help you find people you actually like to talk to. That is, unless you’re in your room every night, on the phone with your high school sweetheart, who’s back home or at another school. Or worse, you’re leaving school every other weekend to visit your significant other. Break up.

You should break up soon because you are likely to break up over Thanksgiving, anyway. You’ll give it an earnest try, but you’ll start to resent each other for forming new attachments, for not really “getting” what it’s like at your respective schools, for being the reason you’re both missing out on important experiences, like the hectic social sorting that’s happening right now. Worse, other people will punish you for missing out: “Oh, yeah, the joke is kind of hard to explain. See, it started that weekend you were out of town.”

Going to the same college as your significant high school other will not necessarily solve the problem. This is what happened to me. My boyfriend didn’t like my new “scene”; I panicked because I felt that we were spending too much time — then too little time — together. We limped through the first two months of the first semester before we called it quits.

The college year went by, bringing a lot of new people and priorities into our separate lives. The following fall, we realized that all our growing pains had not diminished what was a very precious connection. We ended up getting back together and staying together through the rest of college. But we had to break up first.

— REBECCA ELLIOTT, Ph.D. student in the sociology department at the University of California, Berkeley

__________

Full article and photo: http://www.nytimes.com/2010/09/26/opinion/26gradstudents.html

Feasts for Foodies

Two new books showcase the world’s top chefs

It’s old news now that the Catalan chef Ferran Adrià for four years ran the “World’s Best Restaurant”—until his El Bulli lost the title last April to an equally unlikely candidate, René Redzepi’s Copenhagen nosherie called Noma. The title may be meaningless, but it does point to a genuine phenomenon, and one about which there is near unanimity: Ferran Adrià (Mr. Redzepi is one of his disciples) is doing something radically different with food, and has led the first genuine revolution in professional cooking since the French nouvelle cuisine movement of the late 1970s and early 1980s.

Fundamentally this consists of altering the structure of familiar foodstuffs while retaining, or even intensifying, their flavors. Mr. Adrià’s first metamorphosed foods were foams, for example his frothy white bean foam with sea urchin; or his smoke foam of 1997, “a lightly gelatinized froth of nothing but water flavored with woodsmoke, served in a glass with a few drops of olive oil and some strips of toast. ‘The idea,’ Ferran explained, ‘was to make you recall eating grilled bread with olive oil. It is an iconic dish, used to arouse a reaction.’” (It did. One Spanish critic who loathed it said it was “like what you get when you cross a busy street.”)

This dish is designed to provoke both our intellect and our feelings. As Mr. Adrià’s biographer Colman Andrews says, “we’re used to foam, and tend to like it when it is attached to certain beverages.” He cites the crema on our espresso, the whipped cream on hot chocolate, the head on a pint of Guinness and the mousse of champagne. But our ancestors, Mr. Andrews points out, would probably, and very sensibly, refuse to eat anything foaming, as it was “a sign that food was spoiled, or noxious.” It is the nervousness of the human race that Mr. Adrià is playing with here, our ancestral uncertainty in the face of foaming food, trumped by the delight of finding that it actually tastes deliciously of wholesome, earthy pulses and the iodine tang of sea urchin.

Another of Mr. Adrià’s party tricks is “spherification.” I was lucky enough to dine once at the kitchen table at El Bulli, when he summoned us to watch him inject some viscous orange liquid containing a calcium salt into a bath of an alginate solution. The result looked exactly like keta, salmon caviar—even to the blue top of the tin bearing the legend “IKRA” in which it was served. When I popped a bubble in my mouth, it exploded exactly like a lightly preserved fish egg, but there was the unmistakable taste of melon. So you get a liquid hit surrounded by (as it were) a skin of itself. Melon caviar is good, but the self-referential tiny peas and big olives are even better.

The only thing harder than getting a table reservation at El Bulli is being accepted as a stagiaire, working there as an apprentice for the few months of the year the place actually opens for business, but for no pay. Mr. Adrià has had not only his ersatz world-champion title, but also the real one, three stars in the Michelin guide, since 1996. Mr. Andrews thinks, with pardonable exaggeration, that his subject has changed restaurant food forever; but his title implies, less forgivably, that Mr. Adrià has altered the way people eat at home. As the two paragraphs above make clear, this is not so, and probably never will be.

Mr. Adrià has long devoted at least half the year to research, at his Barcelona “Taller.” He would be the first to acknowledge that this boils down to playing with your food—in his kitchen, imagination is the only indispensable ingredient. He says El Bulli will close for good in its present form on July 31, 2011, and will probably reopen as an educational foundation, rather than as a business.

Mr. Adrià has the distinction of having been invited to participate in one of the international art world’s blue ribbon events, the 2007 “Documenta 12” in Kassel. He is also a friend of the artist Richard Hamilton, the British father of Pop Art. Mr. Hamilton has written elsewhere (in a volume about Mr. Adrià called “Food for Thought”) a very impressive essay on the whole idea of food as art, and Mr. Adrià’s relation to it; and I wish Mr. Andrews had devoted more space to the question. He is certainly able to do so: Mr. Andrews is one of the most intelligent, erudite people who write about food, and is probably the only non-Spaniard qualified to do this biography, as he not only understands the Catalan language, but has written an entire book on “Catalan Cuisine.” The trouble is that Mr. Adrià is still only 49, has lived only half a life, and has surely had only half his career. Mr. Andrews’s book is lovingly and beautifully written, and a handsome physical object, but there is as yet only the material for a book half the length of this one.

Mr. Adrià hates it when he is said to be practicing “molecular gastronomy” (an expression coined—to set the record straight—by the late Prof. Nicholas Kurti at a meeting in the early ’80s of the Oxford Symposium on Food and Cookery). It’s not clear why he so dislikes it; after all, cooking consists almost entirely of physical and chemical processes. But his British colleague Heston Blumenthal (of the Fat Duck) also disdains the term; and so, apparently, does Mr. Redzepi, who prefers to call his cooking “new Nordic cuisine.” His book (with photographs by Ditte Isager) is not only the most beautiful I’ve seen this year, but also—caveat lector—carries the most emphatic hazard warnings I’ve ever seen in a cookery book. In any case, it is unlikely you will want to attempt any of the recipes in this book. Like the several books written by Mr. Adrià himself, this is not really a cookbook, but a vibrant visual record of dishes invented by Mr. Redzepi—and, for those lucky enough to have eaten at Noma, a sumptuous souvenir.

Mr. Levy is a writer based in Oxfordshire.

___________

Full article and photo: http://online.wsj.com/article/SB10001424052748703989304575504690152423612.html

Unpacking Imagination

In an age of childhood obesity and children tethered to electronic consoles, playgrounds have rarely been more important. In an age of constrained government budgets, playgrounds have rarely been a harder sell. Fortunately, the cost of play doesn’t have to be prohibitive. In creating the Imagination Playground in Lower Manhattan — a playground with lots of loose parts for children to create their own play spaces — we realized that many of the elements with the greatest value to children were inexpensive and portable. Although traditional playgrounds can easily cost in the millions to build, boxed imagination playgrounds can be put together for under $10,000. (Land costs not included!) The design below is one that my architecture firm has done in collaboration with the New York City Parks Department and KaBoom, a nonprofit organization. But it needn’t be the only one out there. There are a lot of ways to build a playground — and a lot of communities in need of one. Let a thousand portable playgrounds bloom.

David Rockwell, New York Times

__________

Full article and photo: http://www.nytimes.com/interactive/2010/09/25/opinion/20100925_opchart.html

How to Raise Boys Who Read

Hint: Not with gross-out books and video-game bribes.

When I was a young boy, America’s elite schools and universities were almost entirely reserved for males. That seems incredible now, in an era when headlines suggest that boys are largely unfit for the classroom. In particular, they can’t read.

According to a recent report from the Center on Education Policy, for example, substantially more boys than girls score below the proficiency level on the annual National Assessment of Educational Progress reading test. This disparity goes back to 1992, and in some states the percentage of boys proficient in reading is now more than ten points below that of girls. The male-female reading gap is found in every socio-economic and ethnic category, including the children of white, college-educated parents.

The good news is that influential people have noticed this problem. The bad news is that many of them have perfectly awful ideas for solving it.

Everyone agrees that if boys don’t read well, it’s because they don’t read enough. But why don’t they read? A considerable number of teachers and librarians believe that boys are simply bored by the “stuffy” literature they encounter in school. According to a revealing Associated Press story in July, these experts insist that we must “meet them where they are”—that is, pander to boys’ untutored tastes.

For elementary- and middle-school boys, that means “books that exploit [their] love of bodily functions and gross-out humor.” AP reported that one school librarian treats her pupils to “grossology” parties. “Just get ’em reading,” she counsels cheerily. “Worry about what they’re reading later.”

There certainly is no shortage of publishers ready to meet boys where they are. Scholastic has profitably catered to the gross-out market for years with its “Goosebumps” and “Captain Underpants” series. Its latest bestsellers are the “Butt Books,” a series that began with “The Day My Butt Went Psycho.”

The more venerable houses are just as willing to aim low. Penguin, which once used the slogan, “the library of every educated person,” has its own “Gross Out” line for boys, including such new classics as “Sir Fartsalot Hunts the Booger.”

Workman Publishing made its name telling women “What to Expect When You’re Expecting.” How many of them expected they’d be buying “Oh, Yuck! The Encyclopedia of Everything Nasty” a few years later from the same publisher? Even a self-published author like Raymond Bean—nom de plume of the fourth-grade teacher who wrote “SweetFarts”—can make it big in this genre. His flatulence-themed opus hit no. 3 in children’s humor on Amazon. The sequel debuts this fall.

Education was once understood as training for freedom. Not merely the transmission of information, education entailed the formation of manners and taste. Aristotle thought we should be raised “so as both to delight in and to be pained by the things that we ought; this is the right education.”

“Plato before him,” writes C. S. Lewis, “had said the same. The little human animal will not at first have the right responses. It must be trained to feel pleasure, liking, disgust, and hatred at those things which really are pleasant, likeable, disgusting, and hateful.”

This kind of training goes against the grain, and who has time for that? How much easier to meet children where they are.

One obvious problem with the SweetFarts philosophy of education is that it is more suited to producing a generation of barbarians and morons than to raising the sort of men who make good husbands, fathers and professionals. If you keep meeting a boy where he is, he doesn’t go very far.

The other problem is that pandering doesn’t address the real reason boys won’t read. My own experience with six sons is that even the squirmiest boy does not require lurid or vulgar material to sustain his interest in a book.

So why won’t boys read? The AP story drops a clue when it describes the efforts of one frustrated couple with their 13-year-old unlettered son: “They’ve tried bribing him with new video games.” Good grief.

The appearance of the boy-girl literacy gap happens to coincide with the proliferation of video games and other electronic forms of entertainment over the last decade or two. Boys spend far more time “plugged in” than girls do. Could the reading gap have more to do with competition for boys’ attention than with their supposed inability to focus on anything other than outhouse humor?

Dr. Robert Weis, a psychology professor at Denison University, confirmed this suspicion in a randomized controlled trial of the effect of video games on academic ability. Boys with video games at home, he found, spend more time playing them than reading, and their academic performance suffers substantially. Hard to believe, isn’t it, but Science has spoken.

The secret to raising boys who read, I submit, is pretty simple—keep electronic media, especially video games and recreational Internet, under control (that is to say, almost completely absent). Then fill your shelves with good books.

People who think that a book—even R.L. Stine’s grossest masterpiece—can compete with the powerful stimulation of an electronic screen are kidding themselves. But on the level playing field of a quiet den or bedroom, a good book like “Treasure Island” will hold a boy’s attention quite as well as “Zombie Butts from Uranus.” Who knows—a boy deprived of electronic stimulation might even become desperate enough to read Jane Austen.

Most importantly, a boy raised on great literature is more likely to grow up to think, to speak, and to write like a civilized man. Whom would you prefer to have shaped the boyhood imagination of your daughter’s husband—Raymond Bean or Robert Louis Stevenson?

I offer a final piece of evidence that is perhaps unanswerable: There is no literacy gap between home-schooled boys and girls. How many of these families, do you suppose, have thrown grossology parties?

Mr. Spence is president of Spence Publishing Company in Dallas.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704271804575405511702112290.html

Getting Back to the Phantom Skill

Drawing, for many people, is that phantom skill they remember having in elementary school, when they drew with great relish and abandon. Crayon and colored pencil drawings of fancy princesses poured out onto the sketchbooks of the girls, while planes and ships, usually aflame, battled it out in the boys’ drawings. Occasionally boys drew princesses and girls drew gunboats, but whatever the subject matter, this robust period of drawing tended to wither in most students’ lives and, by high school, drawing became the specialized province of those one or two art geeks who provided the cartoons for the yearbook and made the posters for the prom.

The first few columns of this series on drawing that I’m initiating this week will offer a primer on the basic elements of line-making, perspective, structure and proportion, which I hope will begin to rekindle the love of drawing for those readers who left it behind in the 4th grade. Achieving some confidence in drawing objects will get you started in the pleasure of this activity, and give you the basis for moving on to drawing figures.

I also hope, in later installments, to provide insight into the vitality and sensuousness of great drawing so that your next visit to the museum will be both more gratifying and a chance to amaze your companions with your new-found aestheticism.

My method for helping you to draw focuses primarily on two aspects of the skill: first, showing you how to see the structural logic of the object or figure you are drawing, and second, through focused practice, strengthening the link between your eyes and hand so that you are better able to make the drawing marks you intend.

For readers who are familiar with other kinds of drawing instruction that emphasize experimenting with materials, making images with different kinds of pencils, pens and paints, my approach may seem, at first, somewhat stripped down. However, if you try the exercises I describe, I think you will find that they give you the basic thinking and hand skills you need to move on to whatever experimenting with mediums you like.

In the historical and contemporary art I use as examples here, I hope there will be many drawing styles and different drawing and painting materials to inspire you. But, for the exercises in the early columns, I suggest you stick to using a 2B or 4B pencil so that the goal of clarifying your thinking and strengthening your hand-eye coordination doesn’t get confused by the difficulties of manipulating pen and ink or any other more complicated drawing tool.

In advising you to begin with these simple materials, pencil and a drawing pad, I am not denying the sensuousness of charcoal or pen-and-ink or paint or any of the myriad implements and colorful fluids with which, like happy children in a mud puddle, we can make images. I’m simply hoping to provide you with a period at the start of your endeavor in which you can focus on learning to see and getting in touch with your drawing hand without the distraction of style or materials.

In every column I will use examples from the history of art to show how certain functions of drawing and approaches to subject matter play out successfully in the work of specific artists. De Chirico’s surreal cityscapes will help to dramatize perspective. Edward Hopper’s paintings will illuminate how light brings out the solidity of objects and people. Picasso will show us how modeling can emphasize the substance of faces and bodies, while Matisse will be exhibit A in how artists move from realism to stylization in considering the human figure.

My overall goal, apart from helping with specific information, is to communicate the enthusiasm I feel for the immediacy of drawing. It is the activity that most engages an artist’s sense of exploration, both visually, as the artist feels out the image, newborn, on the drawing surface, and intellectually, as the drawing becomes the bridge between observation or an idea and a graphic fact. In looking at Michelangelo’s studies for the Sistine Chapel, for instance, so searching and in-the-moment is the artist’s attention to his subject that I often feel the years fall away and there I am, looking over his shoulder as he draws.

I confess that much contemporary drawing disappoints me for its lack of risk and immediacy. It often seems like the product of a too premeditated and too lengthy process of refinement. Part of this may be the influence of the computer and the surface perfection that it achieves so easily: geometrically pure shapes, even textures, clear colors.

Another source of this arid quality may be attributable to the use of photography as a drawing shortcut. Photography as a source for subject matter has opened many amazing possibilities in 20th and 21st century art, but when it is used as a tracing or projecting tool in order to circumvent the difficulties of achieving correct proportion, the resulting art is often static and lifeless.

Drawing is a process of engagement for the artist, a period of both time and struggle that pulls the artist deeply and intensely into his subject and his ideas. Projecting a photograph in order to give you a perfect drawing of your subject has robbed you of all the imperfect yet more interesting drawings you might have made. The recent exhibit of the art of William Kentridge at The Museum of Modern Art in New York was the most powerful expression of the vital possibility in drawing that I have seen for some time, and it made so much other contemporary drawing seem dry and intellectualized.

William Kentridge’s drawing from Stereoscope 1998–99

During the 12-week period of this column, I will be working on posters for Lincoln Center Theater as well as on a children’s book, and I will share with you sketches from those processes if they seem to illuminate an aspect of drawing being discussed. I hope that readers will respond to this column and help to shape and expand its content. I will be only too happy to move into the more arcane aspects of art and drawing if comments indicate interest.

Next week: “The Frisbee of Art.”

James McMullan, New York Times

__________

Full article and photos: http://opinionator.blogs.nytimes.com/2010/09/16/getting-back-to-the-phantom-skill

How animals made us human

What explains the ascendance of Homo sapiens? Start by looking at our pets.

A prehistoric painting of a bull at Lascaux in southwestern France.

Who among us is invulnerable to the puppy in the pet store window? Not everyone is a dog person, of course; some people are cat people or horse people or parakeet people or albino ferret people. But human beings are a distinctly pet-loving bunch. In no other species do adults regularly and knowingly rear the young of other species and support them into old age; in our species it is commonplace. In almost every human culture, people own pets. In the United States, there are more households with pets than with children.

On the face of it, this doesn’t make sense: Pets take up resources that we would otherwise spend on ourselves or our own progeny. Some pets, it’s true, do work for their owners, or are eventually eaten by them, but many simply live with us, eating the food we give them, interrupting our sleep, dictating our schedules, occasionally soiling the carpet, and giving nothing in return but companionship and often desultory affection.

What explains this yen to have animals in our lives?

An anthropologist named Pat Shipman believes she’s found the answer: Animals make us human. She means this not in a metaphorical way — that animals teach us about loyalty or nurturing or the fragility of life or anything like that — but that the unique ability to observe and control the behavior of other animals is what allowed one particular set of Pleistocene-era primates to evolve into modern man. The hunting of animals and the processing of their corpses drove the creation of tools, and the need to record and relate information about animals was so important that it gave rise to the creation of language and art. Our bond with nonhuman animals has shaped us at the level of our genes, giving us the ability to drink milk into adulthood and even, Shipman argues, promoting the set of finely honed relational antennae that allowed us to create the complex societies most of us live in today. Our love of pets is an artifact of that evolutionary interdependence.

“Our connection with animals had a very great deal to do with our development,” Shipman says. “Beginning with the adaptive advantage of focusing on and collecting information about what other animals are doing, from there to developing such a reliance on that kind of information that there became a serious need to document and transmit that information through the medium of language, and through the whole thing the premium on our ability to read the intentions, needs, wants, and concerns of other beings.”

Shipman’s arguments for the importance of “the animal connection,” laid out in an article in the current issue of Current Anthropology and in a book due out next year, draw on evidence from archeological digs and the fossil record, but they are also freely speculative. Some of her colleagues suggest that the story she tells may be just that, a story. Others, however, describe it as a promising new framework for looking at human evolution, one that highlights the extent to which the human story has been a collection of interspecies collaborations — between humans and dogs and horses, goats and cats and cows, and even microbes.

Shipman, a professor of biological anthropology at Pennsylvania State University, draws together the scattered strands of a growing field of research on the long and complex relationship between human and nonhuman animals, a topic that hasn’t traditionally received much scholarly attention but is now enjoying a surge of interest. The field of so-called human-animal studies is broad enough to include doctors researching why visits by dogs seem to make people in hospitals healthier, art historians looking at medieval depictions of wildlife, and anthropologists like Shipman exploring the evolution and variation of animal domestication. What they all share is an interest in understanding why we are so vulnerable to the charms of other animals — and so good at exploiting them for our own gain.

The traits that traditionally have been seen to separate human beings from the rest of the animal kingdom are activities like making tools, or the use of language, or creating art and symbolic rituals. Today, however, there is some debate over how distinctively human these qualities actually are. Chimpanzees, dolphins, and crows create and use tools, and some apes can acquire the language skills of a human toddler.

A few anthropologists are now proposing that we add the human-animal connection to that list of traits. A 2007 collection of essays, “Where the Wild Things Are Now,” looked at how domesticating animals had shaped human beings as much as the domesticated animals themselves. Barbara King, an anthropologist at the College of William & Mary, published a book earlier this year, “Being With Animals,” that explores the many ramifications of our specieswide obsession with animals, from prehistoric cave art to modern children’s books and sports mascots. King’s primary interest is in the many ways in which myths and religious parables and literature rely on animal imagery and center on encounters between humans and animals.

“[W]e think and we feel through being with animals,” King writes.

Shipman’s argument is more specific: She is trying to explain much of the story of human evolution through the animal connection. The story, as she sees it, starts with the human invention of the first chipped stone tools millions of years ago. Shipman, who specializes in studying those tools, argues that they were an advance made for the express purpose of dismembering animals that humans had killed. The problem early humans faced was that even once they had become proficient enough hunters to consistently bring down big game, they still had to get the meat off the corpse quickly. With small teeth and a relatively weak jaw, human beings couldn’t just rip off huge chunks; it took time to tear off what they needed, and it rarely took long for bigger, meaner predators to smell a corpse and chase off the humans who had brought it down.

Early chopping tools sped up the butchering process, making hunting more efficient and encouraging more of it. But this also placed early humans in an odd spot on the food chain: large predators who were nonetheless wary of the truly big predators. This gave them a strong incentive to study and master the behavioral patterns of everything above and below them on the food chain.

That added up to a lot of information, however, about a lot of different animals, all with their various distinctive behaviors and traits. To organize that growing store of knowledge, and to preserve it and pass it along to others, Shipman argues, those early humans created complex languages and intricate cave paintings.

Art in particular was animal-centered. It’s significant, Shipman points out, that the vast majority of the images on the walls of caves like Lascaux, Chauvet, and Hohle Fels are animals. There were plenty of other things that no doubt occupied the minds of prehistoric men: the weather, the physical landscape, plants, other people. And yet animals dominate.

The centrality of animals in that early artwork has long intrigued anthropologists. Some have suggested that the animals were icons in early religions, or visions from mystical trances. Shipman, however, argues that the paintings served a more straightforward function: conveying data between members of a species that was growing increasingly adept at hunting and controlling other animals. Lascaux, in this reading, was basically primitive PowerPoint. The paintings, Shipman points out, are packed with very specific information about animal appearance and behavior.

“It’s all about animals,” Shipman says. “There are very few depictions of humans and they’re generally not very realistic. The depictions of animals are amazing, you can tell this is a depiction of a prehistoric horse in its summer coat, or that this is a rhino in sexual posture.”

This storehouse of knowledge eventually allowed humans to domesticate animals. Evidence from early human settlements suggests that wolves were domesticated into dogs more than 20,000 years before people first domesticated plants. These new companion animals — not only dogs but eventually horses, camels, cows, goats, sheep, pigs, and others — in essence allowed human beings to appropriate a whole new set of abilities: to be better hunters, to kill off household pests, to haul goods, pull plows, create fertilizer, and protect homes against intruders. Not to mention the food and raw materials their bodies yielded up. Of course, the domesticated animals benefited, too: Human dependence on them ensured their survival and spread, even as some of their wild cousins were hunted to extinction.

The great value that was gained from these “living tools,” as Shipman calls them, also meant that people with a particular interest in animal behavior, and who were especially acute about observing, predicting, and controlling it, were more likely to thrive in early human societies and to have more offspring. To the extent that there was a genetic component to these skills, Shipman argues, it spread. Just as humans selected for certain traits in domestic animals, those same animals were unconsciously shaping their domesticators right back.

“Domestication was reciprocal,” Shipman writes in her Current Anthropology article. And our weakness for pets, she suggests, may be a vestige of that bilateral domestication.

Shipman readily admits that what she’s proposing is a hypothesis, and she hopes other scholars will help to flesh it out. So far, the reception has been mixed. Other researchers exploring the origins of language and art are reluctant to ascribe it to something as limited as the predators and prey early humans faced — the need to convey information about other human beings, for example, could have been just as important in spurring the development of language, if not more. Anthropologists like Manuel Dominguez-Rodrigo of Spain’s Complutense University of Madrid disagree with Shipman that early tool use arose to deal with dead animals; it’s more likely, he argues, that the first stone tools were used to process plants.

And it may be, too, that we find puppies cute not because of some innate desire to domesticate wild animals, but simply because puppies share some of the features — big eyes, clumsy movements, stubby limbs — that human babies have.

Still, for scholars of human-animal studies, the ambition and scope of Shipman’s argument are good in and of themselves, throwing into relief the ways that our own development has made us one of the world’s great symbiotic species, thriving through a set of partnerships with other animals.

Shipman’s argument “is radical to the degree that it really puts front and center the animal-human bond in a way that it hasn’t been before,” says King. “It’s not just background noise — yeah we hunted them, yeah we lived with them, yeah we ate them — it truly shapes the human evolutionary trajectory. That seems to me a really good thing to be doing.”

Drake Bennett is the staff writer for Ideas.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/09/12/what_explains_the_ascendance_of_homo_sapiens_start_by_looking_at_our_pets/

‘Delusions of Gender’ argues that faulty science is furthering sexism

DELUSIONS OF GENDER

How Our Minds, Society, and Neurosexism Create Difference

By Cordelia Fine

About halfway through this irreverent and important book, cognitive psychologist Cordelia Fine offers a fairly technical explanation of the fMRI, a common kind of brain scan. By now, everyone is familiar with these head-shaped images, with their splashes of red and orange and green and blue. But far fewer know what those colors really mean or where they come from.

It’s not as if these machines are taking color videos of the human brain in action — not even close. In fact, these high-tech scanners are gathering data several steps removed from brain activity and even further from behavior. They are measuring the magnetic quality of hemoglobin, as a proxy for the blood oxygen being consumed in particular regions of the brain. If the measurement is different from what one would expect, scientists slap some color on that region of the map: hot, vibrant shades such as red if it’s more than expected; cool, subdued tones if it’s less.

Fine calls this “blobology”: the science — or art — of creating images and then interpreting them as if they have something to do with human behavior. Her detailed explanation of brain-scanning technology is essential to her argument, as it conveys a sense of just how difficult it is to interpret such raw data. She isn’t opposed to neuroscience or brain imaging; quite the opposite. But she is ardently against making authoritative interpretations of ambiguous data. And she’s especially intolerant of any intellectual leap from analyzing iffy brain data to justifying a society stratified by gender. Hence her title, “Delusions of Gender,” which can be read as an intentional slur on the scientific minds perpetrating this deceit.

Fine gives these scientists no quarter, and her beef isn’t just with brain scanners. Consider her critique of a widely cited study of babies’ gazes, conducted when the infants were just a day and a half old. The study found that baby girls were much more likely to gaze at the experimenter’s face, while baby boys preferred to look at a mobile. The scientists took these results as evidence that girls are more empathic than boys, who are more analytic than girls — even without socialization. The problem, not to put too fine a point on it, is that it’s a lousy experiment. Fine spends several pages systematically discrediting the study, detailing flaw after flaw in its design. Again, it’s a somewhat technical, methodological discussion, but an important one, especially since this study has become a cornerstone of the argument that boys and girls have a fundamental difference in brain wiring.

By now, you should be getting a feeling for the tone and texture of this book. Fine offers no original research on the brain or gender; instead, her mission is to demolish the sloppy science being used today to justify gender stereotypes — which she labels “neurosexism.” She is no less merciless in attacking “brain scams,” her derisive term for the many popular versions of the idea that sex hormones shape the brain, which then shapes behavior and intellectual ability, from mathematics to nurturance.

Two of her favorite targets are John Gray, author of the “Men Are From Mars, Women Are From Venus” books, and Louann Brizendine, author of “The Female Brain” and “The Male Brain.” Fine’s preferred illustration of Gray’s “neurononsense” is his discussion of the brain’s inferior parietal lobe, or IPL. The left IPL is more developed in men, the right IPL in women, which for Gray illuminates a lot: He says this anatomical difference explains why men become impatient when women talk too long and why women are better able to respond to a baby crying at night. Fine dismisses such conclusions as nothing more than “sexism disguised in neuroscientific finery.”

Gray lacks scientific credentials. Brizendine has no such excuse, having been trained in science and medicine at Harvard, Berkeley and Yale. And Fine saves her big guns — and her deepest contempt — for her. For the purposes of this critique, Fine fact-checked every single citation in “The Female Brain,” examining every study that Brizendine used to document her argument that male and female brains are fundamentally different. Brizendine cited hundreds of academic articles, making the text appear authoritative to the unwary reader. Yet on closer inspection, according to Fine, the articles are either deliberately misrepresented or simply irrelevant.

“Neurosexism” is hardly new. Fine traces its roots to the mid-19th century, when the “evidence” for inequality included everything from snout elongation to “cephalic index” (ratio of head length to head breadth) to brain weight and neuron delicacy. Back then, the motives for this pseudoscience were transparently political: restricting access to higher education and, especially, the right to vote. In a 1915 New York Times commentary on women’s suffrage, neurologist Charles Dana, perhaps the most illustrious brain scientist of his time, catalogued several differences between men’s and women’s brains and nervous systems, including the upper half of the spinal cord. These differences, he claimed, proved that women lack the intellect for politics and governance.

None of this was true, of course. Not one of Dana’s brain differences withstood the rigors of scientific investigation over time. And that is really the main point that Fine wants to leave the reader pondering: The crude technologies of Victorian brain scientists may have been replaced by powerful brain scanners such as the fMRI, but time and future science may judge imaging data just as harshly. Don’t forget, she warns us, that wrapping a tape measure around the head was once considered modern and scientifically sophisticated. Those seductive blobs of color could end up on the same intellectual scrap heap.

Wray Herbert’s book “On Second Thought: Outsmarting Your Mind’s Hard-Wired Habits” has just been published.

__________

Full article: http://www.washingtonpost.com/wp-dyn/content/article/2010/09/10/AR2010091002678.html

Treehouse Hotels Bring Visitors Back to Childhood

Sleeping Among the Birds

Giant orange eyeballs, airplanes you can sleep in, and multidimensional birds’ nests. Those who want to relive their childhood fantasies of sleeping in a treehouse have their choice of rustic wooden models or the ultra-chic. SPIEGEL ONLINE takes a look at hotels for tree climbers.

Perched crookedly among the branches, more garish than sublime, the rooms all offer guests the sounds of the breeze rustling through the leaves and birds chirping — all 10 meters (32 feet) above ground. Welcome to the first treehouse hotel in Germany.

“We built it like children would have: filled with nooks and crannies, colorful, and with lots of imagination,” says the hotel’s owner Jürgen Bergmann. But it’s not only families with children who frequent the Kulturinsel Einsiedel near Görlitz, a city in eastern Germany that shares its border with Poland. A night in the treetops has also become a favorite 40th or 50th birthday present.

Bergmann is sure of the hotel’s appeal to men. “It’s a lifelong male fantasy to be in a treehouse,” he says.

__________

A night nestled in the trees. Germany’s first treehouse hotel, Kulturinsel Einsiedel, opened near Görlitz in Saxony in 2005. The wooden rooms, with names like Modelpfutzen’s Treetop Summit, are connected by narrow walkways.

Guests, both young and old, share outhouses and cold water outdoor showers. Still, the rooms are booked to capacity, says the owner, Jürgen Bergmann.

Many treehouse hotels have eccentric designs. The Free Spirit Spheres hang on Canada’s Vancouver Island. The owner, Tom Chudleigh, says the idea came to him from “the spirit realm.”

Chudleigh hopes one day to expand his offering from three to 40 round rooms, all interconnected. He dreams of a resort in the trees.

No simple plane wreck. In Costa Rica one can spend the night in an old Boeing placed high in the trees.

The terrace features rocking chairs and a view of the sea. The special suite attracts families and pilots alike.

At the new treehouse hotel in Sweden, called the Treehotel, all of the rooms are designed by different architects. This one is the mirrored cube.

A square sushi roll: “The Cabin” hangs among the birch trees near the Swedish village of Harads. In all, 24 rooms are planned.

The third completed room, called the “Bird’s Nest.” As with the other two rooms, the hotel’s owners took care that no trees were injured during its construction.

Storybook treehouse: Also in Sweden is the Woodpecker Hotel, high among the branches of an old oak tree in a park in Västerås. Not for those with a fear of heights: one can only reach the room, some 13 meters high, with a ladder. Fortunately, a toilet is inside.

Somewhat easier to reach, but higher still, is the Canopy Tree House in the middle of the Peruvian rain forest. Guests at the Inkaterra Reserva Amazonica Lodge sleep some 27 meters above ground.

Luxury treehouses: Those who want to stay at the Tsala Treetop Lounge in South Africa don’t have to worry about outhouses or cold water showers. They relax with room service and private pools.

At the Post Ranch Inn in California’s Big Sur, high among the cliffs of the Pacific coast …

… guests have well-appointed rooms surrounded by green trees.

The bed and breakfast Vertical Horizons Treehouse Paradise in the US state of Oregon doesn’t offer a pool or fireplace, but owner Phil and his wife Jodie make the guests breakfast. The couple built the three treehouses themselves, and each has its own theme.

Another treetop hotel in Oregon is the Out’n’About Treehouse Treesort, where owner and treehouse expert Michael Garnier created 16 different rooms. Guests wanting more of a thrill can fly along the hotel’s zip line.

__________

These days it’s easier than ever to fulfill one’s dream of sleeping among the branches. More and more hotels are offering the experience, not only in Germany, but also in the jungles of South America, in Africa, Australia and even in the polar regions. And lately it hasn’t only been ramshackle shacks that are hanging between the branches. Some of the treehouses offer real luxury, such as fireplaces and hot tubs. Others are treehouses in name only.

In Bergmann’s hotel, which opened in 2005, things are still on the rustic end. The toilets are truly outhouses, and the outdoor shower, which naturally only has cold water, is not for everyone. But that hasn’t scared off visitors. Bergmann says the rooms are filled to capacity.

What attracts his visitors, Bergmann says, is simply the thrill of being in a treehouse itself. “We take them on a trip back into their childhoods,” says the 54-year-old with the long white braid. “It’s a place for the soul: cozy, romantic. One feels very at home.”

Treehouse Purists and Gigantic Eyeballs

He might have crooked cottages with unusual names, but Bergmann is a purist when it comes to building treehouses. Others have different interpretations. On Vancouver Island in Canada, some four meters (13 feet) off the ground, are three orange-colored balls, about the size of small campers. With round windows on the sides, they look like giant eyeballs.

Tom Chudleigh is the founder and builder of the three Free Spirit Spheres, made out of wood and fiberglass. He confides that the idea for the unusual hotel rooms came to him from the “spirit realm.”

“Architecture can shape your surroundings,” he says. “The spheres give off, with their shapes, a feeling of being at one with the environment. They hover in the air between the trees and make this magical world accessible.”

Chudleigh wants his guests to feel a connection to nature when they spend the night with him. The balls hang on ropes from the trees and sway back and forth in the breeze. The concept has been well-received. Eve, Eryn and Melody, as he has named them, have been booked almost all year.

Interest in his hotel has gained rapidly in the last few years, Chudleigh says. He built everything from the frames down to the furniture, leaving only the plastic wrapping and the windows for others to do. His dream is to have 40 balls connected together — a whole resort in the treetops.

Many of the treehouse hoteliers have a vision, and often it is an outlandish one.

Allan Templeton just positioned a whole airplane between the trees. Along the cliffs of the Pacific coast in Costa Rica, the silver and red monstrosity juts out of the tropical forest and looks like it just made an emergency landing.

But a look inside the suite of the Hotel Costa Verde proves that it is not dangerous. Inside, the room is outfitted with local teak, and over the wings is a terrace with rocking chairs and a breathtaking view of the ocean.

Sushi in the Trees

“It’s a form of recycling,” Templeton says. He found the discarded old Boeing from the 1960s at the airport in Costa Rica’s capital, San Jose, and remembered an article he once read about airplanes that had been converted into homes. Templeton, an American and the son of a B-17 bomber pilot, decided he wanted something just like that for his hotel.

After extensive renovations and a risky maneuver involving a 90-ton crane, the Boeing finally landed among the trees. The suite has been open since the end of 2008, and the guests have included families and several pilots.

“Maybe they come here with especially romantic ideas,” Templeton says, laughing.

The newest member of the treehouse family is also no wooden shack. Hanging in the woods near the Swedish village of Harads are the new rooms of the Treehotel, opened in July by the Lindvall family. There is a mirrored cube, a multidimensional bird’s nest and a square-shaped sushi roll. All of the planned 24 rooms have been designed by Swedish architects.

Like Tom Chudleigh and his spheres, the owners of the hotel want to offer rooms that are in harmony with their environment. “Respect for nature is very important for us,” says Sofia Lindvall, who runs the hotel with her parents. “We haven’t damaged a single tree with a screw. The rooms are either hanging or are on stilts.”

The interest in the treehouses has been so strong that the Lindvall family holds tours every day for those who can’t get a free room.

“Many people only first notice once they are here how high up 10 meters is from the ground,” Lindvall says. “Many get quite scared.”

One can’t be afraid of heights and stay in a treehouse, no matter what the design.

__________

Full article and photos: http://www.spiegel.de/international/zeitgeist/0,1518,716603,00.html

Studies Show Nurture at Least as Important as Nature

How Hereditary Can Intelligence Be?

New studies have found that environment has at least as great an impact on IQ as genetics. Researchers in recent years have scaled back their estimates of the role genetics plays in intelligence differences. Psychologist Richard Nisbett says that if you take social differences into account, you would find “50 percent to be the maximum contribution of genetics.”

Researchers have long overestimated the role our genes play in determining intelligence. As it turns out, cognitive skills do not depend on ethnicity, and are far more malleable than once thought. Targeted encouragement can help children from socially challenged families make better use of their potential. 

Eric Turkheimer jokes about people who believe environmental influences alone determine a person’s character: “They soon change their tune when they have a second child,” he says. A father himself, he is speaking from experience. His eldest daughter likes being the center of attention, while her sister is shy and more reticent at school. 

Even so, Turkheimer doubts that genetics alone can provide the complete answer. As a clinical psychologist working at the University of Virginia in Charlottesville, he repeatedly came across people whose childhoods hadn’t been as carefree as those of his daughters. Many of his patients are from impoverished backgrounds. 

“I could see how poverty had literally suppressed these people’s intelligence,” 56-year-old Turkheimer says. 

Scientists typically use twins to gauge the influence of our genes on the one hand and the environment on the other. However, Turkheimer noticed that such studies rarely involve twins from broken homes. Stress, neglect and abuse can have a dramatic effect on intellectual ability. And it’s precisely this factor that many nature-vs.-nurture studies have completely failed to address.

Plugging a Gap 

Turkheimer and his colleagues are the first scientists to have plugged this gap. Their three studies, conducted in the United States, have now compared the intelligence of hundreds of twins from more privileged backgrounds with that of twins from more difficult environments. They found that the higher a child’s socioeconomic status, the greater the genetic influence on differences in intelligence. The situation is very different for children from socially disadvantaged families, where differences in intelligence were hardly inherited at all.

“The IQ of the poorest twins appeared to be almost exclusively determined by their socioeconomic status,” Turkheimer says. A person’s intelligence can only truly blossom if the environment gives the brain what it desires. 

Ulman Lindenberger, a 49-year-old psychologist at the Max Planck Institute for Human Development in Berlin, has come to the same conclusion. He says, “The proportion of genetic factors in intelligence differences depends on whether a person’s environment enables him to fulfill his genetic potential.” In other words: Seeds that are scattered on infertile soil won’t ever grow into large plants.

This is precisely what intelligence researchers have denied up to now. Dazzled by their studies of carefree middle-class and upper middle-class twins, they decided that cognitive skills are largely under genetic control, that academic talent is biologically hard-wired and can unfurl in almost any environment. 

‘Intelligence Is Highly Modifiable by the Environment’ 

In the meantime psychologists, neuroscientists, and geneticists have developed a very different perspective. They now believe that the skill we term “intelligence” is not in the least fixed, but is actually remarkably variable. “It is now clear that intelligence is highly modifiable by the environment,” says Richard Nisbett, a psychologist at the University of Michigan in Ann Arbor. 

As a result, researchers have in recent years scaled back their estimates of the role genetics plays in intelligence differences. The previous figure of 80 percent is outdated. Nisbett says that if you take social differences into account, you would find “50 percent to be the maximum contribution of genetics.” That leaves an unexpectedly large proportion of a child’s intelligence for parents, teachers and educators to shape.

The findings will undoubtedly please those parents who already send their children to good schools, drive them to violin lessons in the afternoon, and then drag them around museums at the weekend. “So you haven’t wasted your time, money and patience on your children after all,” Nisbett says. 

Time and again researchers have found that a child’s genes have far less of an effect on its brain than its surroundings — and the social environment is only one of the factors in this. Scientists in Boston, for instance, have found that children who live near roads and intersections and are thus exposed to higher levels of exhaust fumes score three IQ points lower on average than children of the same age living in areas with cleaner air. That’s simply because microscopic dust and pollutants can reach the brain and adversely affect the nerve cells’ ability to function properly.

Much as pollutants do, mental pressure, misery, worry and neglect also take their toll on children. Chronic stress alters the way neurotransmitters work, inhibits the formation of new nerve cells and causes the hippocampus to shrivel.

That can lead to identifiable differences, as researchers at Cornell University in Ithaca, New York, have shown. They found that stressed children from poor families performed up to 10 percent worse at memory tests than well looked-after children from middle-class homes. 

IQ Increases with Each Year Spent in School 

By contrast, IQ increases with every year a child spends at school. During World War II, some children in Holland started school late because of the Nazi occupation — with momentous consequences. “The average IQ for these children was seven points lower than for children who came of school age after the siege,” Nisbett says. 

Unequal educational opportunities were, and remain, particularly prevalent in the United States. American society denied black slaves an education and refused them access to books. But the races remained divided even after the abolition of slavery in 1865. For a long time, dark-skinned children attended separate schools with terrible facilities. So it’s hardly surprising that they were behind when they were finally granted access to the public schools that had previously been the sole preserve of white children.

White academics in the US repeatedly tried to claim that the resulting differences in performance were genetically determined. In the 1960s, psychologist Arthur Jensen of the University of California at Berkeley wondered why so many underachieving pupils were dark-skinned. How, Jensen argued, could anyone deny that their low intelligence was a feature of their ethnicity? He therefore concluded that there was no point in trying to encourage children from socially disadvantaged groups at an early age.

The controversial book “The Bell Curve” was published in 1994. Its authors, Richard Herrnstein and Charles Murray, warned against giving ethnic minorities easier access to universities. 

An Unplanned Experiment in Germany 

This is the line of reasoning that German central banker Thilo Sarrazin recently adopted when he provocatively suggested that the children of Turkish immigrants were genetically less well endowed than German children.

In spite of Sarrazin’s claims, an unplanned experiment that took place in Germany proved long ago that skin color has no influence on intelligence. After World War II, many American servicemen fathered children with German women. These bi-national offspring were dubbed “occupation babies.” Some of them had light-skinned American fathers, others dark-skinned ones. Unlike in the US, the difference had no bearing on their performance at school.

In 1961 Klaus Eyferth at the Hamburg University Institute of Psychology saw this as a unique opportunity to uncover the “developmental characteristics of (biracial) children” by comparing them with “white occupation babies.” Eyferth gave intelligence tests to 264 children and adolescents, 181 of them with dark skin, 83 with light skin. The children with a white father had an average IQ of 97, those with a black father 96.5, values so close to one another as to disprove the notion of “developmental characteristics.”

Hundreds, if Not Thousands, of Genes Play a Role in Cognitive Skills

Modern genetic research has also now shown that there is no such thing as a biological source of cleverness consisting of one or a few “intelligence genes.” Apparently there are hundreds — if not thousands — of genes that play a role in determining our cognitive skills. 

A person’s ability to make use of his or her genetic potential can indeed be influenced, especially if the person is assisted and permits others to help. Education researcher Anders Ericsson has also shown that master musicians, for example, aren’t born that way. Studying violinists in Berlin, he found that none who had practiced for fewer than 10,000 hours ever became virtuosi. By contrast, nearly all those who had practiced for more than 10,000 hours by the age of 20 went on to become principal violinists.

The analogy doesn’t only hold for musicians. The notions of “chess genius” or “math genius” are likewise mere metaphors that don’t have any biological basis whatsoever. 

True, time and again it has been observed that children in several Asian countries are far better at calculating than their peers in the West. But that has nothing to do with genetics — and everything to do with attitudes. In one study, students from Japan and Canada were given mathematical tasks. No matter how well they actually did, the researchers told one group of subjects they had done excellently. The others were informed they had flunked completely. The scientists then gave the subjects another set of tasks, and said they could take as long as they wanted. 

The reactions by the students showed remarkable cultural differences. The Canadians were apparently motivated by success. Those who had been told they had done well in the first test spent significantly longer doing the second task than fellow Canadians who were given to believe they had done terribly the first time round. The Japanese subjects behaved quite differently. Those who had been given a bad grade in the first test worked longer and more diligently than those who had been praised. It therefore seems that a sense of failure was motivating for them. 

Cognitive Skills a Reflection of Environment 

Numbers aren’t the only thing you can teach through repeated practice. The same is true for words. A person’s vocabulary is an expression of how much their parents and significant others spoke to them as a child. According to studies conducted in the US, the average child has heard about 30 million words by the age of three. The figure for disadvantaged children is only 20 million. This then affects their active vocabulary. The average middle-class three-year-old can use 1,100 words, whereas children from poorer families only have about 525 at their disposal. 

The new findings by the intelligence researchers all point in the same direction: Our cognitive skills are a reflection of our environment. “The low IQs expected for children born to lower-class parents can be greatly increased if their environment is sufficiently rich cognitively,” says psychologist Nisbett. 

The practical possibilities have been explored by psychologists Sharon Landesman Ramey and Craig Ramey from Georgetown University in Washington, DC. The Rameys used as their subjects the children of extremely poor and poorly educated parents. In one project children spent their days at a special day-care center in which there was one teacher for each child and where the young charges were given special encouragement from the age of six weeks. 

After three years, these children were compared to a control group. Lo and behold, the average IQ of the boys and girls was a staggering 13 points higher than that of similar children who had not been given special attention. 

__________

Full article and photo: http://www.spiegel.de/international/zeitgeist/0,1518,716614,00.html

Preppy Pitfall: All That Madras, Not Enough Effort

Did Lisa Birnbach’s original ‘Handbook’ make people lazy?

It is one of the great mysteries of publishing that, for decades after the astonishing success of the 1980 paperback, “The Official Preppy Handbook,” there was no follow-up. The gently comical tribute to the ways of the WASP not only did boffo box office itself but boosted a whole industry—making a fortune for Ralph Lauren, rejuvenating L.L. Bean and paving the way for J. Crew. Strangely, the book’s main author, Lisa Birnbach, waited 30 years to deliver a sequel, “True Prep,” which is now in bookstores.

You party. I’ll study.

The preppy moment the original Handbook inspired was hardly the first mainstream fad for the East Coast country-club aesthetic. When the Ivy League look was in vogue around 1960, even hipsters made the scene: Miles Davis sourced his tweeds and flannel suits at the Andover Shop in Harvard Square. That was a distant memory 20 years on, when Ms. Birnbach offered the masses khaki, madras and crocodile-emblazoned tennis shirts as a bracing tonic for the disco hangover.

The clothes may have been the most visible part of the preppy phenomenon, but they represented only a small part of the upper-class way of life Ms. Birnbach was championing (however cheerfully sly and subversive her advocacy may have been). Much of the book was devoted to a world-view that was casually aristocratic. Ms. Birnbach promised to make that outlook available to one and all. “In a true democracy,” she wrote, “everyone can be upper class and live in Connecticut.”

How could the suburban teenager from Peoria or Pomona all of a sudden be “upper class”? It was less a matter of pink oxford cloth and Kelly-green poplin than of adopting an aristocratic lassitude, an attitude that exuded privilege by treating effort with contempt. One simply mustn’t try too hard. A key principle of what Ms. Birnbach called the Preppy Value System was Effortlessness: “If life is a country club, then all functions should be free from strain.”

There’s no denying the seductive appeal of the old aristocratic disdain for those who strive. How much nicer (and, of course, easier) it is to adopt a blasé and boozy contempt for the grinds and geeks who put in effort than it is to compete with them. The original Handbook warned acolytes not to waste their college days studying “Professional majors” such as engineering, chemistry or mathematics, because they “all reek of practicality.” Nor, we were told, did the preppy go for intellectually demanding subjects such as philosophy or linguistics because “they smack of an equally undesirable effort.”

And there’s the rub. Unless you actually have a fat trust fund to underwrite your nonchalance, an aversion to effort is hardly a strategy for success. Which may explain some of our national woes.

Over the last couple of decades we’ve seen the contempt for effort spread far beyond the original preppy demographic. Now it’s commonplace for middle-class kids to go to college and behave as though they are scions of the gentry—abjuring studies and indulging in the bottomless kegger that a recent book dubbed “The Five Year Party.”

Or take MTV’s “Jersey Shore” (please!). What does it show us but a curiously modern sort of aristocratically privileged polloi—young adults with no obligations or ambitions beyond partying and coupling, with nothing to do other than fuss and fight over perceived social slights?

The glib and privileged attitude that was once prototypically preppy has been adopted by those without the slightest clue how to pronounce “grosgrain.” Ironically (but not merely coincidentally) the upper crusty have responded by casting off their fashionable slough and embracing a crazy-ambitious work ethic.

The savvy well-to-do have turned their kids into super-grinds. Which is why so many Ivy League hopefuls are discovering it’s no longer enough to be valedictorian of your high-school class. Now you have to have written an opera or raised a million dollars to save children in Africa or have done original research on cancer. Yes, there are still “legacy” slots to be had for the children of generous alums, but even those are going to the ambitious and accomplished.

In “True Prep,” Ms. Birnbach notes this with something approaching despair. Parents now spend gobs of money on SAT prep for their progeny. She laments that one is forced to pay for such tutors “because all your children’s classmates’ parents have hired them.” So much for embracing the gentleman’s C.

And then there is the fact that the threadbare remnants of old-money fortunes are, frankly, no longer very impressive. Ms. Birnbach looks longingly at the Croesian piles of the Google crowd and struggles to give them a preppy gloss. She writes admiringly that they share with preppies an aversion to ostentation.

But that’s where the similarity ends (and not just because the Larrys and Sergeys of the world wear T-shirts and jeans). Unlike those who followed the path of aristocratic insouciance, unlike those who turned up their noses at anything reeking of practicality, they studied mathematics.

As we look for ways out of the Great Recession, there’s something to be said for the value system that celebrates effort over effortlessness.

Eric Felten, Wall Street Journal

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704644404575481861760374480.html

Our Love Affair With the Fairs

State fairs embody our roots in agriculture, entrepreneurship and rabble-rousing.

So the summer fades and our children return to school—and this feels at once like a liberation and a reminder that we have only a few years with them. The fall leaves me melancholy, but thankfully there remains what is perhaps our most beloved family tradition: the Kansas State Fair.

I grew up assuming the fair was an indispensable part of the American landscape. But this year the state of Michigan, bowing to budgetary pressures, abandoned its 161-year-old state-fair tradition. A part of me wonders, even as we plan our day at the Kansas fair, whether my children will do this with my grandchildren.

Fairs, according to the International Association of Fairs and Expositions (IAFE), can be traced back to 500 B.C. Before UPS and TiVo, people had to get together for shopping and amusement. Fairs have existed as microcosms of society from the beginning, places where you can buy sharp knives and slabs of cooked meat, be enticed by all manner of hucksters, and survey the oddity and splendor which are your fellow man’s wardrobe choices.

Small wonder, then, with the decline of agriculture and the advent of satellite TV, that state fairs are struggling. South Dakota, Nebraska, Illinois and Colorado state fairs have recently faced financial strains, and Arizona used federal bailout funds for theirs last year. The defunct Michigan state fair, meanwhile, was the oldest in the nation.

Kansas doesn’t rank in the top 50 fairs by total attendance (Texas has held the top spot the last two years), but when you adjust for state population we look pretty darn good. We trounced Texas last year, for example, with one attendee for every 8 Kansans, versus one Texas State Fair attendee for every 13.6 Texans.

By that measure, we’re nowhere near Iowa, Minnesota or Alaska, where fair attendance routinely averages a third or more of state population. Still, it’s total numbers that catch the eye, which might explain why Oprah Winfrey wore a cowboy hat and taped a show at last year’s Texas State Fair.

While the numbers vary considerably from state to state, all told some 150 million Americans visited agricultural fairs last year, estimates IAFE president Jim Tucker. State fairs represent the America many of us praise from afar, or live within, or simply puzzle over.

Fairs embody our roots in agriculture, entrepreneurship and rabble-rousing. Where else can you, in a matter of minutes, buy a tractor, ride a camel, sample the latest in waterless car-washing technology, marvel over a 20-pound cucumber and then saunter a few hundred feet to hear Hank Williams, Jr. belt out “Family Tradition”? Let’s face it: no matter how sophisticated we become, a life-size statue of Elvis sculpted from 800 pounds of butter will always fascinate us.

And if you don’t understand this, then I’m afraid you don’t understand America. Don’t look for enlightened insights about American culture from those like Frenchman and “American Vertigo” author Bernard-Henri Lévy, who could afford no more than a “quick visit” to the Iowa State Fair, but who lingered over prisons in a manner that would make Foucault blush. If you’ve never hurled a tattered baseball at a pyramid of milk jugs, run your hand along a shiny new combine, or cheered at a pig race, then save your opinions for people who roll their eyes at Lee Greenwood.

Come to think of it, perhaps a qualification for commentators on American culture should be the ability to explain a cheese curd. The food alone can make fairs worthwhile, all of it from heaven or hell, although I’m not really sure which.

There are the funnel cakes, steak sandwiches, and roasted and buttered corn on the cob so hot you can brand cattle with it. And let’s not forget the panoply of fried delicacies. Every year brings an item that nobody before had thought—or dared—to fry and eat: pickles, Twinkies, HoHos, and—surely a sign of the apocalypse—bacon-cheeseburger doughnuts. Alongside these are all manner of skewered delights: pork chops on a stick, potato chips on a stick, cheesecake on a stick, waffles on a stick and, as ever, corn dogs and candy apples on sticks.

It seems insane to me: Not the unhealthy food, mind you, which I wholeheartedly support, but arming thousands of children with sharp wooden sticks. Perhaps that’s just the usual handwringing from a parent of four little boys who hopes to see them all through to adulthood with two eyeballs apiece.

That is always part of it, of course, both attending the fair and raising children, this fear that harm will come to them. In that sense the fair is not only microcosm but metaphor. At least it is to me as I put my ten-, eight-, and five-year-olds on a whirling, spinning, lighted metal contraption, wave goodbye, and pray to God that the carnies weren’t drinking when they assembled it—all while restraining my three-year-old, who is outraged that he can’t go with his brothers. We are always sending them away, one way or another, and hoping the way is safe.

Though it’s a metaphor, the fair is gentler than life, because within minutes they come back to us, hair tousled, cheeks aflame, eyes wide. And at the end of it all, long after the sun has set, we pack them into our minivan, where they fall asleep almost instantly. Then we drive home through the dark country night, thankful to have been part of something so exhausting, and hokey, and irrepressibly American.

Mr. Woodlief’s memoir on fatherhood and marriage, “Somewhere More Holy,” was published by Zondervan in May.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704358904575477500355733286.html

Forget What You Know About Good Study Habits

Every September, millions of parents try a kind of psychological witchcraft, to transform their summer-glazed campers into fall students, their video-bugs into bookworms. Advice is cheap and all too familiar: Clear a quiet work space. Stick to a homework schedule. Set goals. Set boundaries. Do not bribe (except in emergencies).

And check out the classroom. Does Junior’s learning style match the new teacher’s approach? Or the school’s philosophy? Maybe the child isn’t “a good fit” for the school.

Such theories have developed in part because of sketchy education research that doesn’t offer clear guidance. Student traits and teaching styles surely interact; so do personalities and at-home rules. The trouble is, no one can predict how.

Yet there are effective approaches to learning, at least for those who are motivated. In recent years, cognitive scientists have shown that a few simple techniques can reliably improve what matters most: how much a student learns from studying.

The findings can help anyone, from a fourth grader doing long division to a retiree taking on a new language. But they directly contradict much of the common wisdom about good study habits, and they have not caught on.

For instance, instead of sticking to one study location, simply alternating the room where a person studies improves retention. So does studying distinct but related skills or concepts in one sitting, rather than focusing intensely on a single thing.

“We have known these principles for some time, and it’s intriguing that schools don’t pick them up, or that people don’t learn them by trial and error,” said Robert A. Bjork, a psychologist at the University of California, Los Angeles. “Instead, we walk around with all sorts of unexamined beliefs about what works that are mistaken.”

Take the notion that children have specific learning styles, that some are “visual learners” and others are auditory; some are “left-brain” students, others “right-brain.” In a recent review of the relevant research, published in the journal Psychological Science in the Public Interest, a team of psychologists found almost zero support for such ideas. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.

Ditto for teaching styles, researchers say. Some excellent instructors caper in front of the blackboard like summer-theater Falstaffs; others are reserved to the point of shyness. “We have yet to identify the common threads between teachers who create a constructive learning atmosphere,” said Daniel T. Willingham, a psychologist at the University of Virginia and author of the book “Why Don’t Students Like School?”

But individual learning is another matter, and psychologists have discovered that some of the most hallowed advice on study habits is flat wrong. For instance, many study skills courses insist that students find a specific place, a study room or a quiet corner of the library, to take their work. The research finds just the opposite. In one classic 1978 experiment, psychologists found that college students who studied a list of 40 vocabulary words in two different rooms — one windowless and cluttered, the other modern, with a view of a courtyard — did far better on a test than students who studied the words twice, in the same room. Later studies have confirmed the finding, for a variety of topics.

The brain makes subtle associations between what it is studying and the background sensations it has at the time, the authors say, regardless of whether those perceptions are conscious. It colors the terms of the Versailles Treaty with the wasted fluorescent glow of the dorm study room, say; or the elements of the Marshall Plan with the jade-curtain shade of the willow tree in the backyard. Forcing the brain to make multiple associations with the same material may, in effect, give that information more neural scaffolding.

“What we think is happening here is that, when the outside context is varied, the information is enriched, and this slows down forgetting,” said Dr. Bjork, the senior author of the two-room experiment.

Varying the type of material studied in a single sitting — alternating, for example, among vocabulary, reading and speaking in a new language — seems to leave a deeper impression on the brain than does concentrating on just one skill at a time. Musicians have known this for years, and their practice sessions often include a mix of scales, musical pieces and rhythmic work. Many athletes, too, routinely mix their workouts with strength, speed and skill drills.

The advantages of this approach to studying can be striking, in some topic areas. In a study recently posted online by the journal Applied Cognitive Psychology, Doug Rohrer and Kelli Taylor of the University of South Florida taught a group of fourth graders four equations, each to calculate a different dimension of a prism. Half of the children learned by studying repeated examples of one equation, say, calculating the number of prism faces when given the number of sides at the base, then moving on to the next type of calculation, studying repeated examples of that. The other half studied mixed problem sets, which included examples of all four types of calculations grouped together. Both groups solved sample problems along the way, as they studied.

A day later, the researchers gave all of the students a test on the material, presenting new problems of the same type. The children who had studied mixed sets did twice as well as the others, outscoring them 77 percent to 38 percent. The researchers have found the same in experiments involving adults and younger children.

“When students see a list of problems, all of the same kind, they know the strategy to use before they even read the problem,” said Dr. Rohrer. “That’s like riding a bike with training wheels.” With mixed practice, he added, “each problem is different from the last one, which means kids must learn how to choose the appropriate procedure — just like they had to do on the test.”
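For readers who think in code, the contrast between blocked and mixed practice is easy to sketch. The Python fragment below is a toy illustration only, not material from the study: the problem labels and function names are invented, and it simply assembles the same set of practice items in the two orderings the Rohrer-Taylor experiment compared.

    import random

    # Four invented problem types, standing in for the four prism
    # calculations; the actual study items are not reproduced here.
    PROBLEMS = {
        "faces":   ["faces(3)", "faces(4)", "faces(5)", "faces(6)"],
        "edges":   ["edges(3)", "edges(4)", "edges(5)", "edges(6)"],
        "corners": ["corners(3)", "corners(4)", "corners(5)", "corners(6)"],
        "angles":  ["angles(3)", "angles(4)", "angles(5)", "angles(6)"],
    }

    def blocked_worksheet(problems):
        """Every problem of one type, then every problem of the next:
        the student always knows which procedure to apply."""
        return [p for kind in problems for p in problems[kind]]

    def mixed_worksheet(problems, seed=0):
        """The same problems shuffled across types, so the student must
        first decide which procedure each problem calls for."""
        sheet = [p for kind in problems for p in problems[kind]]
        random.Random(seed).shuffle(sheet)
        return sheet

Both worksheets contain identical problems; only the ordering differs, which is precisely the variable the experiment manipulated.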

These findings extend well beyond math, even to aesthetic intuitive learning. In an experiment published last month in the journal Psychology and Aging, researchers found that college students and adults of retirement age were better able to distinguish the painting styles of 12 unfamiliar artists after viewing mixed collections (assortments, including works from all 12) than after viewing a dozen works from one artist, all together, then moving on to the next painter.

The finding undermines the common assumption that intensive immersion is the best way to really master a particular genre, or type of creative work, said Nate Kornell, a psychologist at Williams College and the lead author of the study. “What seems to be happening in this case is that the brain is picking up deeper patterns when seeing assortments of paintings; it’s picking up what’s similar and what’s different about them,” often subconsciously.

Cognitive scientists do not deny that honest-to-goodness cramming can lead to a better grade on a given exam. But hurriedly jam-packing a brain is akin to speed-packing a cheap suitcase, as most students quickly learn — it holds its new load for a while, then most everything falls out.

“With many students, it’s not like they can’t remember the material” when they move to a more advanced class, said Henry L. Roediger III, a psychologist at Washington University in St. Louis. “It’s like they’ve never seen it before.”

When the neural suitcase is packed carefully and gradually, it holds its contents for far, far longer. An hour of study tonight, an hour on the weekend, another session a week from now: such so-called spacing improves later recall, without requiring students to put in more overall study effort or pay more attention, dozens of studies have found.

No one knows for sure why. It may be that the brain, when it revisits material at a later time, has to relearn some of what it has absorbed before adding new stuff — and that that process is itself self-reinforcing.

“The idea is that forgetting is the friend of learning,” said Dr. Kornell. “When you forget something, it allows you to relearn, and do so effectively, the next time you see it.”
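What such spacing looks like in practice can be laid out in a few lines of Python. This is a minimal sketch with arbitrary placeholder intervals; the research supports spacing in general, not these particular numbers.

    from datetime import date, timedelta

    # Placeholder gaps, in days, between successive reviews;
    # the widening intervals are the point, not the exact values.
    REVIEW_GAPS = [1, 3, 7, 14, 30]

    def review_dates(first_study, gaps=REVIEW_GAPS):
        """Return the dates on which to revisit the material,
        each review spaced further out than the one before."""
        when, schedule = first_study, []
        for gap in gaps:
            when = when + timedelta(days=gap)
            schedule.append(when)
        return schedule

    # An hour tonight, then reviews at widening intervals.
    for d in review_dates(date(2010, 9, 7)):
        print(d.isoformat())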

That’s one reason cognitive scientists see testing itself — or practice tests and quizzes — as a powerful tool of learning, rather than merely assessment. The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.

Dr. Roediger uses the analogy of the Heisenberg uncertainty principle in physics, which holds that the act of measuring a property of a particle alters that property: “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.

In one of his own experiments, Dr. Roediger and Jeffrey Karpicke, also of Washington University, had college students study science passages from a reading comprehension test, in short study periods. When students studied the same material twice, in back-to-back sessions, they did very well on a test given immediately afterward, then began to forget the material.

But if they studied the passage just once and did a practice test in the second session, they did very well on one test two days later, and another given a week later.

“Testing has such a bad connotation; people think of standardized testing or teaching to the test,” Dr. Roediger said. “Maybe we need to call it something else, but this is one of the most powerful learning tools we have.”

Of course, one reason the thought of testing tightens people’s stomachs is that tests are so often hard. Paradoxically, it is just this difficulty that makes them such effective study tools, research suggests. The harder it is to remember something, the harder it is to later forget. This effect, which researchers call “desirable difficulty,” is evident in daily life. The name of the actor who played Linc in “The Mod Squad”? Francie’s brother in “A Tree Grows in Brooklyn”? The name of the co-discoverer, with Newton, of calculus?

The more mental sweat it takes to dig it out, the more securely it will be subsequently anchored.
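The retrieval effect can be tried at home with nothing fancier than a homemade flashcard drill. The snippet below is a bare-bones sketch, not the researchers’ procedure; the deck happens to answer the three trivia questions above.

    # A toy flashcard drill: the effort of retrieving each answer
    # before checking it is what strengthens the memory.
    CARDS = {
        "The actor who played Linc in 'The Mod Squad'": "Clarence Williams III",
        "Francie's brother in 'A Tree Grows in Brooklyn'": "Neeley",
        "The co-discoverer, with Newton, of calculus": "Leibniz",
    }

    def self_quiz(cards):
        """Prompt for each answer and return the misses, to be
        retested in a later, spaced session."""
        missed = {}
        for prompt, answer in cards.items():
            guess = input(prompt + "? ").strip()
            if guess.lower() != answer.lower():
                print("  Answer:", answer)
                missed[prompt] = answer
        return missed

    if __name__ == "__main__":
        self_quiz(CARDS)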

None of which is to suggest that these techniques — alternating study environments, mixing content, spacing study sessions, self-testing or all the above — will turn a grade-A slacker into a grade-A student. Motivation matters. So do impressing friends, making the hockey team and finding the nerve to text the cute student in social studies.

“In lab experiments, you’re able to control for all factors except the one you’re studying,” said Dr. Willingham. “Not true in the classroom, in real life. All of these things are interacting at the same time.”

But at the very least, the cognitive techniques give parents and students, young and old, something many did not have before: a study plan based on evidence, not schoolyard folk wisdom, or empty theorizing.

Benedict Carey, New York Times

__________

Full article and photo: http://www.nytimes.com/2010/09/07/health/views/07mind.html

That ’70s Feeling

TODAY we celebrate the American labor force, but this year’s working-class celebrity hero made his debut almost a month ago. Steven Slater, a flight attendant for JetBlue, ended his career by cursing at his passengers over the intercom and grabbing a couple of beers before sliding down the emergency-evacuation chute — and into popular history.

The press immediately drew parallels between Mr. Slater’s outburst and two iconic moments of 1970s popular culture: Howard Beale’s “I’m mad as hell” rant from the 1976 film “Network” and Johnny Paycheck’s 1977 anthem of alienation, “Take This Job and Shove It.”

But these are more than just parallels: those late ’70s events are part of the cultural foundation of our own time. Less expressions of rebellion than frustration, they mark the final days of a time when the working class actually mattered.

The ’70s began on a remarkably hopeful — and militant — note. Working-class discontent was epidemic: 2.4 million people engaged in major strikes in 1970 alone, all struggling with what Fortune magazine called an “angry, aggressive and acquisitive” mood in the shops.

Most workers weren’t angry over wages, though, but rather over the quality of their jobs. Pundits often called it “Lordstown syndrome,” after the General Motors plant in Ohio where a young, hip and interracial group of workers held a three-week strike in 1972. The workers weren’t concerned about better pay; instead, they wanted more control over what was then the fastest assembly line in the world.

Newsweek called the strike an “industrial Woodstock,” an upheaval in employment relations akin to the cultural upheavals of the 1960s. The “blue-collar blues” were so widespread that the Senate opened an investigation into worker “alienation.”

But what felt to some like radical change in the heartland was really the beginning of the end — not just of organized labor’s influence, but of the very presence of workers in national civic life.

When the