Phonograph, CD, MP3—What’s Next?

The Beatles finally make it to iTunes.

“I am particularly glad to no longer be asked when the Beatles are coming to iTunes,” said Ringo Starr last week as the Fab Four’s record company finally agreed to have their music sold through digital downloads by Apple. This agreement could mark the beginning of the end of the digital dislocation of the music industry—the first industry to be completely disrupted by information-age technology.

Apple CEO Steve Jobs said, “It has been a long and winding road to get here,” understating the case. Members of the band and other rights-holders had long objected to Apple’s practice of selling songs separately from albums, disagreed with its pricing, and feared illegal file-sharing of the songs if they were ever available online.

All 17 of the Beatles’ albums were among the top 50 sellers on iTunes the day they were made available. Apple is even selling a virtual “box set” of the Beatles. The top single was “Here Comes the Sun,” appropriately enough, since the lyrics “it’s been a long cold lonely winter” summarize a music industry just emerging from the destruction element of creative destruction.

Music has been a test case for technology transitions before. In the 19th century, the sheet-music publishers of Tin Pan Alley dominated the industry but were disrupted by recorded sound when Thomas Edison invented the phonograph. This was in turn replaced by newer physical forms of recordings, from eight-track tapes to cassettes and CDs. In the Internet era, sales of albums—bundles of music—broke down as consumers downloaded just the songs they wanted, usually illegally.

The iTunes store, launched in 2003, popularized legal downloads. Streaming music online has also become popular. Today one quarter of recorded-music revenue comes from digital channels. This tells us that technology can reward both creators and consumers, even as traditional middlemen such as record companies get squeezed.

A Beatles song plays on an iPod.

The Beatles have been accused of being digitally backward, but last year the group targeted younger listeners by cooperating with a videogame maker on “The Beatles: Rock Band” that lets people play along.

“We’ve made the Beatles music,” Paul McCartney told London’s Observer last year. “It’s a body of work. That’s it for us—it’s done. But then what happens is that somebody will come up with a suggestion,” like a video game.

Consumers get more choice through digital products and seem happy to pay for the convenience of downloads through iTunes, despite the availability of free music. Apple can charge more for a Beatles download than Amazon can charge for a CD, even though CDs are usually higher-quality and the songs can be transferred to devices such as iPods.

Several years ago the big legal battle featured music industry companies suing some 35,000 people who illegally downloaded songs. Piracy continues, but now the industry is instead looking for new revenue streams. Sean Parker, founder of the original downloading service, Napster, has advice for music companies. “The war on piracy is a failure,” he says. “Labels must offer services that consumers are willing to pay for, focusing on convenience and accessibility.”

Some musicians still hold out against digital downloads. Country star Kid Rock explained to Billboard magazine recently why he stays off iTunes. “I have trouble with the way iTunes says everybody’s music is worth the same price. I don’t think that’s right. There’s music out there that’s not a penny. They should be giving it away, or they should be making the artist pay people to listen to it.”

Still, there are encouraging signs that creators and distributors are coming together. Artists often skip the music industry altogether by using new technology to make songs cheaply, then market them on the Web. For many musicians, the real money comes from concerts and merchandising. For bands that appeal to older audiences, such as the Beatles, CD sales remain brisk.

For music and many content-based industries, the shift to the Information Age from the Industrial Age is a shift to digital versions from older analog versions. The older forms don’t disappear altogether. Instead, traditional products find a more limited role alongside newer versions that take advantage of new technology to deliver different experiences to consumers. Sellers may lose scarcity value for their goods as digital tools make copying easy, but as iTunes has shown, convenience is also a service worth buying.

If the music industry can learn new tricks, there’s hope for all the other industries that are being transformed as technology continues to give consumers more choices. The best alternative for smart industries is to take the advice of the Beatles song “Let It Be”—make the most of technological progress, and recognize that certain things are beyond anyone’s control.

L. Gordon Crovitz, Wall Street Journal



Forget any ‘Right to Be Forgotten’

Don’t count on government to censor information about you online.

The stakes keep rising in the debate over online privacy. Last week, the Obama administration floated the idea of a privacy czar to regulate the Internet, and the European Union even concocted a new “right to be forgotten” online.

The proposed European legislation would give people the right, any time, to have all of their personal information deleted online. Regulators say that in an era of Facebook and Google, “People should have the ‘right to be forgotten’ when their data is no longer needed or they want their data to be deleted.” The proposal, which did not explain how this could be done in practice, includes potential criminal sanctions.

Privacy viewed in isolation looks more like a right than it does when seen in context. Any regulation to keep personal information confidential quickly runs up against other rights, such as free speech, and many privileges, from free Web search to free email.

There are real trade-offs between privacy and speech. Consider the case of German murderer Wolfgang Werle, who does not think his name should be used. In 1990, he and his half brother killed German actor Walter Sedlmayr. They spent 15 years in jail. German law protects criminals who have served their time, including from references to their crimes.

Last year, Werle’s lawyers sent a cease-and-desist letter to Wikipedia, citing German law, demanding the online encyclopedia remove the names of the murderers. They even asked for compensation for emotional harm, saying, “His rehabilitation and his future life outside the prison system is severely impacted by your unwillingness to anonymize any articles dealing with the murder of Mr. Sedlmayr with regard to our client’s involvement.”

Censorship requires government limits on speech, at odds with the open ethos of the Web. It’s also not clear how a right to be forgotten could be enforced. If someone writes facts about himself on Facebook that he later regrets, do we really want the government punishing those who use the information?

UCLA law Prof. Eugene Volokh has explained why speech and privacy are often at odds. “The difficulty is that the right to information privacy—the right to control other people’s communication of personally identifiable information about you—is a right to have the government stop people from speaking about you,” he wrote in a law review article in 2000.

Indeed, there’s a good argument that “a ‘right to be forgotten’ is not really a ‘privacy’ right in the first place,” says Adam Thierer, president of the Progress and Freedom Foundation. “A privacy right should only concern information that is actually private. What a ‘right to be forgotten’ does is try to take information that is, by default, public information, and pretend that it’s private.”

There are also concerns about how information is collected for advertising. A Wall Street Journal series, “What They Know,” has shown that many online companies don’t even know how much tracking software they use. Better disclosure would require better monitoring by websites. When used correctly, these systems benignly aggregate information about behavior online so that advertisers can target the right people with the right products.

Many people seem happy to make the trade-off in favor of sharing more about themselves in exchange for services and convenience. On Friday, when news broke of potential new regulations in the U.S., the Journal conducted an online poll asking, “Should the Obama administration appoint a watchdog for online privacy?” Some 85% of respondents said no.

As Brussels and Washington were busily proposing new regulations last week, two of the biggest companies were duking it out over consumer privacy, a new battlefield for competition. Google tried to stop Facebook from letting users automatically import their address and other contact details from their Gmail accounts, arguing that the social-networking site didn’t have a way for users to get the data out again.

When users tried to import their contacts to Facebook, a message from Gmail popped up saying, “Hold on a second. Are you super sure you want to import your contact information for your friends into a service that won’t let you get it out?” The warning adds, “We think this is an important thing for you to know before you import your data there. Although we strongly disagree with this data protectionism, the choice is yours. Because, after all, you should have control over your data.”

One of the virtues of competitive markets is that companies vie for customers over everything from services to privacy protections. Regulators have no reason to dictate one right answer to these balancing acts among interests that consumers are fully capable of making for themselves.

L. Gordon Crovitz, Wall Street Journal



The Power To Control

Is Internet freedom threatened more by dominant companies or by the government’s efforts to hem them in?

In the early days of the radio industry, in the 1920s, almost anyone could become a broadcaster. There were few barriers to entry, basically just some cheap equipment to acquire. The bigger broadcasters soon realized that wealth creation depended on restricting market entry and limiting competition. Before long, regulation—especially the licensing of radio frequencies—transformed the open radio landscape into a “closed” oligopoly, with few players instead of many.

In “The Master Switch,” Tim Wu, a professor at Columbia University, argues that the Internet also risks becoming a closed system unless certain steps are taken. In his telling, information industries—including radio, television and telecommunications—begin as relatively open sectors of the economy but get co-opted by private interests, often abetted by the state. What starts as a hobby or a cottage industry ends up as a monopoly or cartel.

In such an environment, success often depends on snuffing out competitors before they become formidable. In Greek mythology, Kronos—ruler of the universe—was warned by an oracle that one of his children would dethrone him. Logically, he took pre-emptive action: Each time his wife gave birth, he seized the new child and ate it. Applied to corporate strategy, the “Kronos Effect” is the attempt by a dominant company to devour its challengers in their infancy.

In the late 19th century, Western Union, the telegraph company, tried to put AT&T out of business in the infancy of the telephone—by commissioning Thomas Edison to design a better phone and then rolling out tens of thousands of telephones to consumers, rendering AT&T a “bit player.” It was a sound Kronos strategy, but AT&T survived and eventually prospered over Western Union, thanks in part to aggressive patent litigation. Later AT&T, in its turn, applied the Kronos strategy to every upstart that challenged it.

Mr. Wu notes that, for most of the 20th century, AT&T operated the “most lucrative monopoly in history.” In the early 1980s, the U.S. government broke the monopoly up, but its longevity was the result of government regulation. In 1913, AT&T entered into the “Kingsbury Commitment” with the Justice Department. The deal was meant to increase competition by forcing AT&T, among other things, to allow independent operators to connect their local exchanges with AT&T’s long-distance lines. But the agreement, by forestalling the break-up of AT&T, was really, Mr. Wu says, the “death knell” of both “openness and competition.”

In the past, then, even arrangements aimed at maximizing competition have ended up entrenching the dominant player. Some argue that the Internet will avoid this fate because it is “inherently open.” Mr. Wu isn’t so sure. In fact, he says, “with everything on one network, the potential power to control is so much greater.” He worries about major players dominating the Internet, stifling innovation and free speech.

Mr. Wu’s solution is to propose a “Separation Principle,” a form of industry self-regulation to be overseen by the Federal Communications Commission (though he concedes that there is an ever-present danger of regulatory capture—whereby the FCC or other agencies become excessively influenced by the businesses they are meant to be regulating). The key to competition in the information industry, Mr. Wu believes, is a complete independence among its three layers: content owners (e.g., a games developer); network infrastructure (e.g., a cable company or cellular-network owner); and tools of access (e.g., a mobile handset maker). Obviously vertical integration, where one company participates in more than one layer, would be prohibited. The biggest effect of such a rule would be to separate content and conduit: Comcast, the cable giant, would plainly not be allowed to complete its planned acquisition of NBC Universal, a content provider.

The process that Mr. Wu describes—of a few companies dominating the information industry and requiring regulatory intervention to tame them—plays down the disruptive effects of technology itself. In 1998, the Justice Department launched an antitrust action against Microsoft, partly to prevent it from using Windows, its operating system, to control the Web. But it was innovation by competitors that put paid to Microsoft’s potential dominance. A decade ago, AOL (when it was still called America Online) seemed poised to dominate cyberspace. Then broadband came along and AOL, a glorified dial-up service provider, quickly became an also-ran.

Similarly, mobile carriers, like AT&T Wireless, long enjoyed near-complete control over mobile applications—until Apple’s iPhone arrived. The App Store decimated that control and unleashed a wave of mobile innovation. Mr. Wu notes that Apple, which at first forbade some competing applications, was “shamed” into allowing apps like Skype and Google Voice on its phones. True enough, but surely that is evidence of market forces creating openness, not the need for more mechanisms to enforce it.

The legitimate desire to prevent basic “discrimination” (e.g., Comcast blocking Twitter) is not enough to justify the broad restrictions that Mr. Wu advocates. Besides, enforcing the new rules would itself stifle innovation, create arbitrary distinctions and protect rival incumbents. Google’s bid for wireless spectrum and its Nexus One smartphone would certainly have crossed “separation” lines—as would Apple’s combination of access devices (the iPhone) and a content-distribution business (iTunes). Mr. Wu’s proposal would blunt the competitive pressure that Google and Apple apply to each other, as well as to Verizon Wireless, Microsoft, Nokia and just about everyone else. As Mr. Wu himself shows when tracing the history of earlier technology-based industries, the effort to regulate openness can often do more harm than good.

Mr. Philips is chief executive of Photon Group, an Australia-based communications company.



Taking on Google by Learning From Ants

Fifteenth- and 16th-century European explorers helped to transform cartography during the Age of Discovery. Rather than mapping newly discovered worlds, Blaise Agüera y Arcas is out to invent new ways of viewing the old ones.

Mr. Agüera y Arcas is the architect of Bing Maps, the online mapping service that is part of Microsoft Corp.’s Bing Internet search engine. Bing Maps does all the basics, like turn-by-turn directions and satellite views that offer a peek into the neighbor’s backyard, but Mr. Agüera y Arcas has attracted attention in the tech world by pushing the service to do a lot more.

Blaise Agüera y Arcas, in Bellevue, Wash.

He helped to cook up a technology that allows people to post high-resolution photocollages that explore the interiors of buildings. For New York’s Metropolitan Museum of Art, for example, 1,317 still images dissolve into each other, giving an online visitor the sensation of touring the Greek and Roman art wing. By dragging a mouse, the viewer can circle a marble statue of Aphrodite and zoom in on the exhibit’s sign to read that the statue, Venus Genetrix, was created between the first and second centuries A.D.

For a user who wants to check out a particular street, Mr. Agüera y Arcas has devised an elegant visual transition that provides the feel of skydiving to the ground. He says that these transitions will become even better over time. “I want this all to become cinematic,” he said.

Mr. Agüera y Arcas, 35, imagines these projects from a cramped office on the 22nd floor of a new high-rise in Bellevue, Wash., facing the jagged geography of the Cascade Range. One wall is covered in chalk notes and equations. He messily applied a coat of blackboard paint to the wall himself because he dislikes the odor of whiteboard markers.

“Technically, I don’t think I was supposed to do that,” said Mr. Agüera y Arcas. With short-cropped hair and a scruffy beard, he has the appearance of a graduate student.

When he’s brainstorming, Mr. Agüera y Arcas paces his office, talking to himself out loud. He said the process “doesn’t look very good,” but the self-dialogue is essential for working out new ideas. “First you try to beat something down and show why it’s a stupid idea,” he said. “Then you branch out. What’s the broadest range of solutions I can come up with? It’s all very dialectical. You kind of argue with yourself.”

He often shares “new pieces of vision,” as he calls these early-stage concepts, in presentations and documents that are distributed to other members of the team. Mr. Agüera y Arcas, who manages about 60 people, said the most stimulating meetings he has are “jam sessions,” in which people riff on each other’s ideas. “Without all of that input, I don’t think I would be doing interesting things on my own,” he said.

Prototypes, he said, are crucial. These include everything from crude bits of functional code to storyboard sketches. Mr. Agüera y Arcas demonstrated one such prototype: a short video, done with a designer, that shows a street-level map in which typography representing street names is upright and suspended off the ground so that it’s easier to see.

“Presenting an idea in the abstract as text or as something you talk about doesn’t have anything like the galvanizing effect on people or on yourself,” he said.

His most productive moments often occur outside the office, without the distraction of meetings. After he has dinner and puts his two young children to bed, Mr. Agüera y Arcas says he and his wife, a neuroscientist at the University of Washington, often sit side-by-side working on their laptops late into the night.


Points of Interest

• Though Mr. Agüera y Arcas has assumed greater management responsibilities over the years, he still considers it vital to find time to develop projects on his own. “You see people who evolved in this way, and sometimes it looks like their brains died,” he said.

• He is a coffee connoisseur, fueling himself throughout the workday with several trips to a café downstairs in his building. Because he can’t always break away from the office, a gleaming chrome espresso maker and coffee grinder sit in the corner “for emergencies,” he said.

• He finds driving a car “deadening,” so he takes a bus to work from his home, reading or working on his laptop during the commute.

• When he was young, Mr. Agüera y Arcas dismantled things both animal and inanimate, from cameras to guinea pigs, so that he could see how they worked.


He finds unlikely sources of inspiration. Mr. Agüera y Arcas once cobbled together software that automatically clustered related images on a photo-sharing site, with the goal of creating detailed 3-D reconstructions composed of pictures from many different photographers. The software was inspired by research he had read about how ant colonies form the most efficient pathways to food sources. He used the software to build a 3-D view of Cambodia’s Angkor Wat temple.

Another time, he stumbled on a project inside Microsoft’s research group called WorldWide Telescope that offers access to telescope imagery of the universe over the Web. Now when Bing Maps users are viewing a location at street level, they can gaze up at the sky to see constellations appear overhead. (Microsoft is testing this and other features on an experimental version of its site before rolling them out to a wider audience.)

A marble statue of Aphrodite at New York’s Metropolitan Museum of Art can be viewed through an app on Bing Maps.

Mr. Agüera y Arcas draws from an eclectic set of skills and interests. The son of a Catalan father and an American mother who met on an Israeli kibbutz, he learned how to program computers during his childhood in Mexico City. As a teenager on a summer internship with a U.S. Navy research center in Bethesda, Md., he reprogrammed the guidance software for aircraft carriers to improve their stability at sea, which helped to reduce seasickness among sailors.

He studied physics, neuroscience and applied math at Princeton University but stopped short of completing his doctoral dissertation. Instead, he chose to apply his quantitative skills to his long fascination with the Early Modern period of history, devoting several years to analyzing the typography of Gutenberg Bibles from the 1450s using computers and digital cameras.

During his research—which cast doubt on Johannes Gutenberg’s role in creating a form of type-making commonly credited to him—he had to create software that was capable of displaying extremely high-resolution images of book pages on a computer screen. That technology inspired him to create a startup, Seadragon Software, that he sold to Microsoft in 2006; its technology is used in a Microsoft program that lets consumers interact with high-resolution images on Bing Maps.

Though his work has helped to build buzz for Bing Maps, Mr. Agüera y Arcas concedes that the site lags its big rival, Google Maps, in some areas. Google has photographed many more streets and roads than Microsoft has for its street-level views. He said that competition with Google is a stimulus for innovation in the maps category, but he avoids doing direct clones of new Google Map features.

“You can always be inspired, but the moment you start copying, you guarantee you will never get ahead,” he said.

Nick Wingfield, Wall Street Journal



Prize Descriptions

I visit Wikipedia every day. I study the evolving entries for Internet-specific entities like World of Warcraft, Call of Duty, Foursquare and Picasa, often savoring the lucid exposition that Wikipedia brings to technical subjects that might not be expected to inspire poetry and for which no vocabulary has yet been set.

Wikipedia is a perfectly serviceable guide to non-Internet life. But as a companion to the stuff that was born on the Internet, Wikipedia — itself an Internet artifact — will never be surpassed.

Every new symbolic order requires a taxonomist to make sense of it. When Renaissance paintings and drawings first became fashionable in the art market in the early 20th century, the primary task of critics like Bernard Berenson was to attribute them, classify them and create a taste for them. Art collectors had to be introduced to the dynamics of the paintings, the names of the painters and the differences among them. Without descriptions, attributions and analysis, Titian’s “Salomé With the Head of St. John the Baptist” is just a clump of data.

Wikipedia has become the world’s master catalogue raisonnée for new clumps of data. Its legion nameless authors are the Audubons, the Magellans, the Berensons of our time. This was made clear to me recently when I unknowingly quoted the work of Randy Dewberry, an anonymous contributor to Wikipedia, in a column on the video game Angry Birds. Dewberry’s prose hit a note rare in exposition anywhere: both efficient and impassioned. (“Players take control of a flock of birds that are attempting to retrieve their eggs from a group of evil pigs that have stolen them.”)

The passage described Angry Birds so perfectly that I assumed it came from the game’s developers. Who else could know the game so well? But as Dewberry subsequently explained to me in an e-mail, that’s not what happened. In fact, according to the entry’s history, the original description of Angry Birds was such egregious corporate shilling that Wikipedia planned to drop it. That’s when Dewberry, a Wikipedian and devoted gamer, introduced paragraphs so lively they made the pleasure of the game palpable. The entry remained.

Like many Wikipedians, Dewberry is modest to the point of self-effacement about his contributions to the site. Because entries are anonymous and collaborative, no author is tempted to showboat and, in the pursuit of literary glory, swerve from the aim of clarity and utility. “No one editor can lay absolute claim to any articles,” Dewberry told me. “While editors will acknowledge when a user puts a substantial amount of work into an article, it is not ‘their’ article.”

For more information on the house vibe around credit-claiming, Dewberry proposed I type “WP: OWN” into Wikipedia to read its policy about “ownership” of articles. My jaw dropped. The page is fascinating for anyone who has ever been part of a collaborative effort to create anything.

At the strenuously collectivist Wikipedia, it seems, “ownership” of an article — what in legacy media is called “authorship” — is strictly forbidden. But it’s more than that: even doing jerky things that Wikipedia calls “ownership behavior” — subtle ways of acting proprietary about entries — is prohibited. As an example of the kind of attitude one editor is forbidden to cop toward another, Wikipedia cites this: “I have made some small amendments to your changes. You might notice that my tweaking of your wording has, in effect, reverted the article back to what it was before, but do not feel disheartened. Please feel free to make any other changes to my article if you ever think of anything worthwhile. Toodles! :)”

The magazine business could have used some guidelines about this all-too-familiar kind of authorship jockeying decades ago.

Wikipedia is vitally important to the culture. Digital artifacts like video games are our answer to the album covers and romance novels, the saxophone solos and cigarette cases, that previously defined culture. Today an “object” that gives meaning might be an e-book. An MP3. A Flash animation. An HTML5 animation. A video, an e-mail, a text message, a blog. A Tumblr blog. A Foursquare badge. Around these artifacts we now form our identities.

Take another such artifact: the video game Halo. The entry on Wikipedia for Halo: Combat Evolved, which Wikipedia’s editors have chosen as a model for the video-game-entry form, keeps its explanations untechnical. Halo, according to the article, is firmly in the tradition of games about shooting things, “focusing on combat in a 3D environment and taking place almost entirely from a character’s eye view.” But not always: “The game switches to the third-person perspective during vehicle use for pilots and mounted gun operators; passengers maintain a first-person view.” At last, Halo: I understand you!

At first blush the work of composing these anonymous descriptions may seem servile. Hundreds of thousands of unnamed Wikipedia editors have made a hobby of perfecting the descriptions of objects whose sales don’t enrich them. But their pleasure in the always-evolving master document comes through clearly in Wikipedia itself. The nameless authors tell the digital world what its components are, and thereby create it.

With authorship disputes, Wikipedia advises, “stay calm, assume good faith and remain civil.” The revolutionary policy outlined on “Wikipedia: Ownership of Articles” — search Wikipedia or Google for it — is stunningly thorough.

For the best-written articles on video games, search Wikipedia for WP:VG/FA. These are all featured articles, and as Wikipedia notes, they have “the status which all articles should eventually achieve.”

It’s time to contribute to Wikipedia — even if you just want to make a small correction to the Calvin Coolidge, “Krapp’s Last Tape” or Bettie Serveert entries. Join the project by following links from Wikipedia’s homepage, and then read WP:YFA, Wikipedia’s page on creating your first article.

Virginia Heffernan, New York Times



True to type

YOU’RE sick of Helvetica, aren’t you? That show-off changed its birth name, Neue Haas Grotesk, had plastic surgery in the 1980s to get thinner (and fatter), and even has its own movie. Helvetica and its online type brethren Arial, Georgia, Times and Verdana appear on billions of Web pages. You’re sick of these other faces, too, even if you don’t know them by name.

No one questions the on-screen aesthetics of the fonts; Georgia and Verdana were designed specifically for computer use by 2010 MacArthur Foundation grant recipient Matthew Carter, one of the greatest modern type designers. The others have varying pedigrees, and work fine in pixels. They aren’t Brush Script and Marker Felt, for heaven’s sake. But those faces dominate the web’s fontscape purely because of licensing. Most or all of the faces are pre-installed in Mac OS X, Windows and several mobile operating systems. Their overuse produces a homogeneity that no graphic designer—or informed reader—would ever tolerate in print. Those not educated in type’s arcana can be forgiven for not caring at a conscious level, even as the lack of differentiation pricks at the back of their optic nerves.

That’s about to change. An entente has formed in a cold war lasting over a decade between type foundries that create and license typefaces for use, and browser makers that want to allow web designers the freedom of selection available for print. The testiness between the two camps arose as a result of piracy and intellectual-property protection concerns. Foundries don’t want their valuable designs easily downloaded and copied, which was possible in one iteration of web font inclusion. For a time, foundries looked to digital rights management (DRM) to encrypt and protect use. Microsoft built such a system in 1998 for Internet Explorer 4. Simon Daniels, the company’s typography supremo, says that even with its browser’s giant market share at the time, it wasn’t very widely used.

Such protection is complicated, and requires an infrastructure and agreements that often prevent use across systems. It also has precious little effect in deterring piracy. DRM may actually push potential buyers into pirates’ arms, since pirates turn to copying out of a desire for simplicity and portability rather than out of an unwillingness to pay. Apple once sold only protected music that would play in its iTunes software and on its iPods, iPhones and iPads. The music industry tried to break Apple’s hegemony over digital downloads by removing DRM, which in turn allowed song files to be played on any device. That had some effect, but probably not enough. The industry is now moving towards streaming, where a recurring monthly fee or viewing advertisements unlocks audio from central servers on demand. Fonts may follow a similar path. Foundries have accepted a compromise that removes protection in exchange for a warning label and a kind of on-demand font streaming from central depositories.

This compromise, the WOFF (Web Open Font Format), was thrashed out by an employee of Mozilla, the group behind Firefox, and members of two type houses. It’s a mercifully brief technical document that settles political and financial issues. WOFF allows designers to package fonts using either of the two major desktop formats—themselves remnants of font wars of yore—in a way approved by all major and most minor foundries. It doesn’t protect the typefaces with encryption, but with a girdle of ownership defined in clear text. Future versions of browsers from the three groups will add full WOFF support. Apple’s Safari and its underlying WebKit rendering engine, used by nearly all mobile operating systems’ browsers, will adopt WOFF, as will Google Chrome and its variants. WOFF was proposed in October 2009, presented to the World Wide Web Consortium (W3C) in April 2010 by Microsoft, the Mozilla Foundation and Opera Software, and adopted as a draft in July—remarkably quickly for such an about-face.
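For readers curious what this looks like in practice, a WOFF face is referenced from a stylesheet with the standard CSS @font-face rule. The file path and family name below are hypothetical, but the syntax is the actual mechanism browsers use; this is a sketch, not any particular foundry’s recommended incantation:

```css
/* Hypothetical example: a licensed WOFF file served from your own
   or a foundry's server. */
@font-face {
  font-family: "ExampleSerif";  /* the name you choose for the face */
  src: url("/fonts/example-serif.woff") format("woff");
}

/* Browsers with WOFF support download the file once, cache it, and
   render text in the face; older browsers fall back to Georgia. */
body {
  font-family: "ExampleSerif", Georgia, serif;
}
```

The fallback list is the point: a reader on an old browser still gets a workable page, just in one of the familiar pre-installed faces.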

At the annual meeting of the typoscenti at the Association Typographique Internationale (ATypI) last month in Dublin, all the web font talk was about WOFF and moving forward to offer more faces, services and integration, says John Berry, the president of ATypI, and part of Mr Daniels’ typography group at Microsoft. “The floodgates have opened,” says Mr Berry. “All the font foundries and many of the designers are offering their fonts or subsets of their fonts.” Several sites now offer a subscription-based combination of font licensing and simple JavaScript code to insert on web pages to ensure that a specified type loads on browsers—even older ones still in use. Online font services include TypeKit, Webtype, and Monotype’s, to name but a few. Designers don’t load the faces on their own websites, but stream them as small packages, cached by browsers, from the licence owner’s servers.

The long-term effect of the campaign for real type will be a gradual branding of sites, whether those created by talented individuals or multi-billion-dollar corporations, or based on choices in templates used in blogging and other platforms. Just as a regular reader of the print edition of this newspaper can recognise it in a flash across a room, so, too, will an online edition have the pizazz (or lack thereof) of a print publication. Mr Berry notes,

It’s most obvious in display type and headlines and things, but it’s going to make a huge difference just in reading and text, to have something besides Arial, Verdana, and Georgia. It will make real web publications possible that you want to read, as opposed to a poor substitute.

Expect an equivalent of the Cambrian explosion in typography. And Cambria—another dedicated computer font—won’t be the only new face in town.

Kant on a Kindle?

The technology of the book—sheaves of paper covered in squiggles of ink—has remained virtually unchanged since Gutenberg. This is largely a testament to the effectiveness of books as a means of transmitting and storing information. Paper is cheap, and ink endures.

In recent years, however, the act of reading has undergone a rapid transformation, as devices such as the Kindle and iPad account for a growing share of book sales. (Amazon, for instance, now sells more e-books than hardcovers.) Before long, we will do most of our reading on screens—lovely, luminous screens.

The displays are one of the main selling points of these new literary gadgets. Thanks to dramatic improvements in screen resolution, the words shimmer on the glass; every letter is precisely defined, with fully adjustable fonts. Think of it as a beautifully printed book that’s always available in perfect light. For contrast and clarity, it’s hard for Gutenberg to compete.

And these reading screens are bound to get better. One of the longstanding trends of modern technology is to make it easier and easier to perceive fine-grained content. The number of pixels in televisions has increased fivefold in the last 10 years, VHS gave way to Blu-ray, and computer monitors can display millions of vibrant colors.

I would be the last to complain about such improvements—I shudder to imagine a world without sports on HDTV—but it’s worth considering the ways in which these new reading technologies may change the nature of reading and, ultimately, the content of our books.

Let’s begin by looking at how reading happens in the brain. Stanislas Dehaene, a neuroscientist at the Collège de France in Paris, has helped to demonstrate that the literate brain contains two distinct pathways for making sense of words, each activated in different contexts. One pathway, known as the ventral route, is direct and efficient: We see a group of letters, convert those letters into a word and then directly grasp the word’s meaning. When you’re reading a straightforward sentence in a clear format, you’re almost certainly relying on this neural highway. As a result, the act of reading seems effortless. We don’t have to think about the words on the page.

But the ventral route is not the only way to read. The brain’s second reading pathway, the dorsal stream, is turned on when we have to pay conscious attention to a sentence. Perhaps we’ve encountered an obscure word or a patch of smudged ink. (In his experiments, Mr. Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Mr. Dehaene’s research demonstrates that even adults are still forced to occasionally decipher a text.

The lesson of his research is that the act of reading observes a gradient of awareness. Familiar sentences rendered on lucid e-ink screens are read quickly and effortlessly. Unusual sentences with complex clauses and odd punctuation tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra cognitive work wakes us up; we read more slowly, but we notice more. Psychologists call this the “levels-of-processing” effect, since sentences that require extra levels of analysis are more likely to get remembered.

E-readers have yet to dramatically alter the reading experience; e-ink still feels a lot like old-fashioned ink. But it seems inevitable that the same trends that have transformed our televisions will also affect our reading gadgets. And this is where the problems begin. Do we really want reading to be as effortless as possible? The neuroscience of literacy suggests that, sometimes, the best way to make sense of a difficult text is to read it in a difficult format, to force our brain to slow down and process each word. After all, reading isn’t about ease—it’s about understanding. If we’re going to read Kant on the Kindle, or Proust on the iPad, then we should at least experiment with an ugly font.

Every medium eventually influences the message that it carries. I worry that, before long, we’ll become so used to the mindless clarity of e-ink that the technology will feed back onto the content, making us less willing to endure challenging texts. We’ll forget what it’s like to flex those dorsal muscles, to consciously decipher a thorny stretch of prose. And that would be a shame, because not every sentence should be easy to read.

Jonah Lehrer is the author, most recently, of “How We Decide.”


The Pen That Never Forgets

In the spring, Cincia Dervishaj was struggling with a take-home math quiz. It was testing her knowledge of exponential notation — translating numbers like “3.87 x 10²” into a regular form. Dervishaj is a 13-year-old student at St. John’s Lutheran School in Staten Island, and like many students grappling with exponents, she got confused about where to place the decimal point. “I didn’t get them at all,” Dervishaj told me in June when I visited her math class, which was crowded with four-year-old Dell computers, plastic posters of geometry formulas and a big bowl of Lego bricks.

To refresh her memory, Dervishaj pulled out her math notebook. But her class notes were not great: she had copied several sample problems but hadn’t written a clear explanation of how exponents work.

She didn’t need to. Dervishaj’s entire grade 7 math class has been outfitted with “smart pens” made by Livescribe, a start-up based in Oakland, Calif. The pens perform an interesting trick: when Dervishaj and her classmates write in their notebooks, the pen records audio of whatever is going on around it and links the audio to the handwritten words. If her written notes are inadequate, she can tap the pen on a sentence or word, and the pen plays what the teacher was saying at that precise point.

Dervishaj showed me how it works, flipping to her page of notes on exponents and tapping a set of numbers in the middle of the page. Out of a tiny speaker in the thick, cigar-shaped pen, I could hear her teacher, Brian Licata, explaining that precise problem. “It’s like having your own little personal teacher there, with you at all times,” Dervishaj said.

Having a pen that listens, the students told me, has changed the class in curious ways. Some found the pens make class less stressful; because they don’t need to worry about missing something, they feel freer to listen to what Licata says. When they do take notes, the pen alters their writing style: instead of verbatim snippets of Licata’s instructions, they can write “key words” — essentially little handwritten tags that let them quickly locate a crucial moment in the audio stream. Licata himself uses a Livescribe pen to provide the students with extra lessons. Sitting at home, he’ll draw out a complicated math problem while describing out loud how to solve it. Then he’ll upload the result to a class Web site. There his students will see Licata’s handwriting slowly fill the page while hearing his voice explaining what’s going on. If students have trouble remembering how to tackle that type of problem, these little videos — “pencasts” — are online 24 hours a day. All the students I spoke to said they watch them.

LIKE MOST PIECES of classroom technology, the pens cause plenty of digital-age hassles. They can crash. The software for loading students’ notes onto their computers or from there onto the Web can be finicky. And the pens work only with special notepaper that enables the pen to track where it’s writing; regular paper doesn’t work. (Most students buy notepads from Livescribe, though it’s possible to print the paper on a color printer.) There are also some unusual social side-effects. The presence of so many recording devices in the classroom creates a sort of panopticon — or panaudiocon, as it were. Dervishaj has found herself whispering to her seatmate, only to realize the pen was on, “so we’re like, whoa!” — their gossip has been recorded alongside her notes. Although you can pause a recording, there’s currently no way to selectively delete a few seconds of audio from the pen, so she’s forced to make a decision: Delete all the audio for that lesson, or keep it in and hope nobody else ever hears her private chatter. She usually deletes.

Nonetheless, Licata is a convert. As the students started working quietly on review problems, their pens making tiny “boop” noises as the students began or paused their recording, Licata pulled me aside to say the pens had “transformed” his class. Compact and bristling with energy, Licata is a self-professed geek; in his 10 years of teaching, he has seen plenty of classroom gadgets come and go, from Web-based collaboration software to pricey whiteboards that let children play with geometric figures the way they’d manipulate an iPhone screen. Most of these gewgaws don’t impress him. “Two or three times a year teachers whip out some new technology and use it, but it doesn’t do anything better and it’s never seen again,” he said.

But this time, he said, was different. This is because the pen is based on an age-old classroom technique that requires no learning curve: pen-and-paper writing. Livescribe first released the pen in 2008; Licata encountered it when a colleague brought his own to work. Intrigued, he persuaded Livescribe to donate 20 pens to the school to outfit his entire class. (The pens sell for around $129.) “I’ve made more gains with this class this year than I’ve made with any class,” he told me. In his evenings, Licata is pursuing a master’s degree in education; separately, he intends to study how the smart pens might affect the way students learn, write and think. “Two years ago I would have told you that note-taking is a lost art, that handwriting was a lost art,” he said. “But now I think handwriting is crucial.”

TAKING NOTES HAS long posed a challenge in education. Decades of research have found a strong correlation between good notes and good grades: the more detailed and accurate your notes, the better you do in school. That’s partly because the act of taking notes forces you to pay closer attention. But what’s more important, according to some researchers, is that good notes provide a record: most of the benefits from notes come not from taking them but from reviewing them, because no matter how closely we pay attention, we forget things soon after we leave class. “We have feeble memories,” says Ken Kiewra, a professor of educational psychology at the University of Nebraska and one of the world’s leading researchers into note-taking.

Yet most students are very bad at taking notes. Kiewra’s research has found that students record about a third of the critical information they hear in class. Why? Because note-taking is a surprisingly complex mental activity. It heavily taxes our “working memory” — the volume of information we can consciously hold in our heads and manipulate. Note-taking requires a student to listen to a teacher, pick out the most important points and summarize and record them, while trying not to lose the overall drift of the lecture. (The very best students do even more mental work: they blend what they’re hearing with material they already know and reframe the concepts in their own words.) Given how jampacked this task is, “transcription fluency” matters: the less you have to think about the way you’re recording notes, the better. When you’re taking notes, you want to be as fast and as automatic as possible.

All note-taking methods have downsides. Handwriting is the most common and easiest, but a lecturer speaks at 150 to 200 words per minute, while even the speediest high-school students write no more than 40 words per minute. The more you struggle to keep up, the more you’re focusing on the act of writing, not the act of paying attention.

Typing can be much faster. A skilled typist can manage 60 words a minute or more. And notes typed into a computer have other advantages: they can be quickly searched (unlike regular handwritten notes) and backed up or shared online with other students. They’re also neater and thus easier to review. But they come with other problems, not least of which is that typing can’t capture the diagrammatic notes that classes in math, engineering or biology often require. What’s more, while personal computers and laptops may be common in college, that isn’t the case in cash-strapped high schools. Laptops in class also bring a host of distractions — from Facebook to Twitter — that teachers loathe. And students today are rarely taught touch typing; some note-taking studies have found that students can be even slower at typing than at handwriting.

One of the most complete ways to document what is said in class is to make an audio record: all 150-plus words a minute can be captured with no mental effort on the part of the student. Kiewra’s research has found that audio can have a powerful effect on learning. In a 1991 experiment, he had four groups of students listen to a lecture. One group was allowed to listen once, another twice, the third three times and the fourth was free to scroll back and forth through the recording at will, listening to whatever snippets the students wanted to review. Those who relistened were increasingly likely to write down crucial “secondary” ideas — concepts in a lecture that add nuance to the main points but that we tend to miss when we’re focused on writing down the core ideas. And the students who were able to move in and out of the audio stream performed as well as those who listened to the lecture three times in a row. (Students who recorded more secondary ideas also scored higher in a later quiz.) But as anyone who has tried to scroll back and forth through an audio file has discovered, reviewing audio is frustrating and clumsy. Audio may be richer in detail, but it is not, like writing and typescript, skimmable.

JIM MARGGRAFF, the 52-year-old inventor of the Livescribe pen, has a particular knack for blending audio and text. In the ’90s, appalled by Americans’ poor grasp of geography, he invented a globe that would speak the name of any city or country when you touched the location with a pen. In 1998, his firm was absorbed by Leapfrog, the educational-toy maker, where Marggraff invented toys that linked audio to paper. His first device, the LeapPad, was a book that would speak words and play other sounds whenever a child pointed a stylus at it. It quickly became Leapfrog’s biggest hit.

In 2001, Marggraff was browsing a copy of Wired magazine when he read an article about Anoto, a Swedish firm that patented a clever pen technology: it imprinted sheets of paper with tiny dots that a camera-equipped pen could use to track precisely where it was on any page. Several firms were licensing the technology to create pens that would record pen strokes, allowing users to keep digital copies of whatever they wrote on the patterned paper. But Marggraff had a different idea. If the pen recorded audio while it wrote, he figured, it would borrow the best parts from almost every style of note-taking. The audio record would help note-takers find details missing from their written notes, and the handwritten notes would serve as a guide to the audio record, letting users quickly dart to the words they wanted to rehear. Marggraff quit Leapfrog in 2005 to work on his new idea, and three years later he released the first Livescribe pen. He has sold close to 500,000 pens in the last two years, mostly to teachers, students and businesspeople.

I met Marggraff in his San Francisco office this summer. He and Andrew Van Schaack, a professor in the Peabody College of Education at Vanderbilt University and Livescribe’s science adviser, explained that the pen operated, in their view, as a supplement to your working memory. If you’re not worried about catching every last word, you can allocate more of your attention to processing what you’re hearing.

“I think people can be more confident in taking fewer notes, recognizing that they can go back if there’s something important that they need,” Van Schaack said. “As a teacher, I want to free up some cognitive ability. You know that little dial on there, your little brain tachometer? I want to drop off this one so I can use it on my thinking.” Marggraff told me Livescribe has surveyed its customers on how they use the pen. “A lot of adults say that it helps them with A.D.H.D.,” he said. “Students say: ‘It helps me improve my grades in specific classes. I can think and listen, rather than writing.’ They get more confident.”

Livescribe pens often inspire proselytizing among users. I spoke to students at several colleges and schools who insisted that the pen had improved their performance significantly; one swore it helped boost his G.P.A. to 3.9 from 3.5. Others said they had evolved highly personalized short notations — even pictograms — to make it easier to relocate important bits of audio. (Whenever his professor reeled off a long list of facts, one student would simply write “LIST” if he couldn’t keep up, then go back later to fill in the details after class.) A few students pointed to the handwriting recognition in Livescribe’s desktop software: once an individual user has transferred the contents of a pen to his or her computer, the software makes it possible to search that handwriting — so long as it’s reasonably legible — by keyword. That, students said, markedly sped up studying for tests, because they could rapidly find notes on specific topics. The pen can also load “apps”: for example, a user can draw an octave of a piano keyboard and play it (with the notes coming out of the pen’s speaker), or write a word in English and have the pen translate it into Spanish on the pen’s tiny L.E.D. display.

Still, it’s hard to know whether Marggraff’s rosiest ambitions are realistic. No one has yet published independent studies testing whether the Livescribe style of enhanced note-taking seriously improves educational performance. One of the only studies thus far is by Van Schaack himself. In the spring, he conducted an unpublished experiment in which he had 40 students watch a video of a 30-minute lecture on primatology. The students took notes with a Livescribe pen, and were also given an iPod with a recording of the lecture. Afterward, when asked to locate specific facts on both devices, the students were 2.5 times faster at retrieving the facts on the pen than on the iPod. It was, Van Schaack argues, evidence that the pen can make an audio stream genuinely accessible, potentially helping students tap into those important secondary ideas that we miss when we’re scrambling to write solely by hand.

Marggraff suspects the deeper impact of the pen may not be in taking notes when you’re listening to someone else, but when you’re alone — and thinking through a problem by yourself. For example, he said, a book can overwhelm a reader with thoughts. “You’re going to get ideas like crazy when you’re reading,” Marggraff says. “The issue is that it’s too slow to sit down and write them” — but if you don’t record them, you’ll usually forget them. So when Marggraff is reading a book at home or even on a plane, he’ll pull out his pen, hit record and start talking about what he’s thinking, while jotting down some keywords. Later on, when he listens to the notes, “it’s just astounding how relevant it is, and how much value it brings.” No matter how good his written notes are, audio includes many more flashes of insight — the difference between the 30 words per minute of his writing and the 150 words per minute of his speech, as it were.

Marggraff pulls out his laptop to show me notes he took while reading Malcolm Gladwell’s book “Outliers.” The notes are neat and legible, but the audio is even richer; when he taps on the middle of the note, I can hear his voice chattering away at high speed. When he listens to the notes, he’ll often get new ideas, so he’ll add notes, layering analysis on top of analysis.

“This is game-changing,” he says. “This is a dialogue with yourself.” He has used the technique to brainstorm patent ideas for hours at a time.

Similarly, in his class at St. John’s, Licata has found the pen is useful in capturing the students’ dialogues with themselves. For instance, he asks his students to talk to their pens while they do their take-home quizzes, recording their logic in audio. That way, if they go off the rails, Licata can click through the page to hear what, precisely, went wrong and why. “I’m actually able to follow their train of thought,” he says.

Some experts have doubts about Livescribe as a silver bullet. As Kiewra points out, plenty of technologies in the past have been hailed as salvations of education. “There’s been the radio, there’s been the phonograph, moving pictures, the VCR” — and, of course, the computer. But the average student’s note-taking ability remains as dismal as ever. Kiewra says he now believes the only way to seriously improve it is by painstakingly teaching students the core skills: how to listen for key concepts, how to review your notes and how to organize them to make meaning, teasing out interesting associations between bits of information. (As an example, he points out that students taking notes on the planets will learn lots of individual facts. But if they organize them into a chart, they’ll make discoveries on their own: sort the planets by distance from the sun and speed of rotation, and you’ll discover that the farther you go out, the more slowly they spin.) Kiewra also says that an effective way to get around the problem of incomplete and disorganized note-taking is for teachers to give out “partial” notes — handouts that summarize key concepts in the lecture but leave blanks that the students must fill in, forcing them to pay attention. Some studies have found that students using partial notes capture a majority of the main concepts in a lecture, more than doubling their usual performance.

Indeed, many modern educators say that students shouldn’t be taking notes in class at all. If it’s true that note-taking taxes their working memory, they argue, then teachers should simply hand out complete sets of notes that reflect everything in the lecture — leaving students free to listen and reflect. After all, if the Internet has done anything, it has made it trivially easy for instructors to distribute materials.

“I don’t think anyone should be writing down what the teacher’s saying in class,” is the blunt assessment of Lisa Nielsen, author of a blog, “The Innovative Educator,” who also heads up a division of the New York City Department of Education devoted to finding uses for new digital tools in classrooms. “Teachers should be pulling in YouTube videos or lectures from experts around the world, piping in great people into their classrooms, and all those things can be captured online — on Facebook, on a blog, on a wiki or Web site — for students to be looking at later,” she says. “Now, should students be making meaning of what they’re hearing or coming up with questions? Yes. But they don’t need to write down everything the teacher’s said.” There is some social-science support for the no-note-taking view. In one experiment, Kiewra took several groups of students and subjected them to different note-taking situations: some attended a lecture and reviewed their own notes; others didn’t attend but were given a set of notes from the instructor. Those who heard the lecture and took notes scored 51 percent on a subsequent test, while those who only read the instructor’s notes scored 69 percent.

Of course, if Marggraff has his way, smart pens could become so common — and so much cheaper — that bad notes, or at least incomplete ones, will become a thing of the past. Indeed, if most pen-and-paper writing could be easily copied and swapped online, the impacts on education could be intriguing and widespread. Marggraff intends to release software that lets teachers print their students’ work on dot-patterned paper; students could do their assignment, e-mail it in, then receive a graded paper e-mailed back with handwritten and spoken feedback from the teacher. Students would most likely swap notes more often; perhaps an entire class could designate one really good note-taker and let him write while everyone else listens, sharing the notes online later. Marggraff even foresees textbooks in which students could make notes in the margins and have a permanent digital record of their written and spoken thoughts beside the text. “Now we really have bridged the paper and the digital worlds,” he adds. Perhaps the future of the pen is on the screen.

Clive Thompson, a contributing writer for the magazine, writes frequently about technology and science.


A virtual counter-revolution

The internet has been a great unifier of people, companies and online networks. Powerful forces are threatening to balkanise it

A fragmenting virtual world

THE first internet boom, a decade and a half ago, resembled a religious movement. Omnipresent cyber-gurus, often framed by colourful PowerPoint presentations reminiscent of stained glass, prophesied a digital paradise in which not only would commerce be frictionless and growth exponential, but democracy would be direct and the nation-state would no longer exist. One, John Perry Barlow, even penned “A Declaration of the Independence of Cyberspace”.

Even though all this sounded Utopian when it was preached, it reflected online reality pretty accurately. The internet was a wide-open space, a new frontier. For the first time, anyone could communicate electronically with anyone else—globally and essentially free of charge. Anyone was able to create a website or an online shop, which could be reached from anywhere in the world using a simple piece of software called a browser, without asking anyone else for permission. The control of information, opinion and commerce by governments—or big companies, for that matter—indeed appeared to be a thing of the past. “You have no sovereignty where we gather,” Mr Barlow wrote.

The lofty discourse on “cyberspace” has long changed. Even the term now sounds passé. Today another overused celestial metaphor holds sway: the “cloud” is code for all kinds of digital services generated in warehouses packed with computers, called data centres, and distributed over the internet. Most of the talk, though, concerns more earthly matters: privacy, antitrust, Google’s woes in China, mobile applications, green information technology (IT). Only Apple’s latest iSomethings seem to inspire religious fervour, as they did again this week.

Again, this is a fair reflection of what is happening on the internet. Fifteen years after its first manifestation as a global, unifying network, it has entered its second phase: it appears to be balkanising, torn apart by three separate but related forces.

First, governments are increasingly reasserting their sovereignty. Recently several countries have demanded that their law-enforcement agencies have access to e-mails sent from BlackBerry smart-phones. This week India, which had threatened to cut off BlackBerry service at the end of August, granted RIM, the device’s maker, an extra two months while authorities consider the firm’s proposal to comply. However, it has also said that it is going after other communication-service providers, notably Google and Skype.

Second, big IT companies are building their own digital territories, where they set the rules and control or limit connections to other parts of the internet. Third, network owners would like to treat different types of traffic differently, in effect creating faster and slower lanes on the internet.

It is still too early to say that the internet has fragmented into “internets”, but there is a danger that it may splinter along geographical and commercial boundaries. (The picture above is a visual representation of the “nationality” of traffic on the internet, created by the University of California’s Co-operative Association for Internet Data Analysis: America is in pink, Britain in dark blue, Italy in pale blue, Sweden in green and unknown countries in white.) Just as it was not preordained that the internet would become one global network where the same rules applied to everyone, everywhere, it is not certain that it will stay that way, says Kevin Werbach, a professor at the Wharton School of the University of Pennsylvania.

To grasp why the internet might unravel, it is necessary to understand how, in the words of Mr Werbach, “it pulled itself together” in the first place. Even today, this seems like something of a miracle. In the physical world, most networks—railways, airlines, telephone systems—are collections of more or less connected islands. Before the internet and the world wide web came along, this balkanised model was also the norm online. For a long time, for instance, AOL and CompuServe would not even exchange e-mails.

Economists point to “network effects” to explain why the internet managed to supplant these proprietary services. Everybody had strong incentives to join: consumers, companies and, most important, the networks themselves (the internet is in fact a “network of networks”). The more the internet grew, the greater the benefits became. And its founding fathers created the basis for this virtuous circle by making it easy for networks to hook up and for individuals to get wired.

Yet economics alone do not explain why the internet rather than a proprietary service prevailed (as Microsoft did in software for personal computers, or PCs). One reason may be that the rapid rise of the internet, originally an obscure academic network funded by America’s Department of Defence, took everyone by surprise. “The internet was able to develop quietly and organically for years before it became widely known,” writes Jonathan Zittrain, a professor at Harvard University, in his 2008 book, “The Future of the Internet—And How To Stop It”. In other words, had telecoms firms, for instance, suspected how big it would become, they might have tried earlier to change its rules.

Whatever the cause, the open internet has been a boon for humanity. It has not only allowed companies and other organisations of all sorts to become more efficient, but enabled other forms of production, notably “open source” methods, in which groups of people, often volunteers, all over the world develop products, mostly pieces of software, collectively. Individuals have access to more information than ever, communicate more freely and form groups of like-minded people more easily.

Even more important, the internet is an open platform, rather than one built for a specific service, like the telephone network. Mr Zittrain calls it “generative”: people can tinker with it, creating new services and elbowing existing ones aside. Any young company can build a device or develop an application that connects to the internet, provided it follows certain, mostly technical conventions. In a more closed and controlled environment, an Amazon, a Facebook or a Google would probably never have blossomed as it did.

Forces of fragmentation

However, this very success has given rise to the forces that are now pulling the internet apart. The cracks are most visible along geographical boundaries. The internet is too important for governments to ignore. They are increasingly finding ways to enforce their laws in the digital realm. The most prominent is China’s “great firewall”. The Chinese authorities are using the same technology that companies use to stop employees accessing particular websites and online services. This is why Google at first decided to censor its Chinese search service: there was no other way to be widely accessible in the country.

But China is by no means the only country erecting borders in cyberspace. The Australian government plans to build a firewall to block material showing the sexual abuse of children and other criminal or offensive content. The OpenNet Initiative, an advocacy group, lists more than a dozen countries that block internet content for political, social and security reasons. They do not need especially clever technology: governments increasingly go after dominant online firms because they are easy to get hold of. In April Google published the numbers of requests it had received from official agencies to remove content or provide information about users. Brazil led both counts (see chart 1).

Not every request or barrier has a sinister motive. Australia’s firewall is a case in point, even if it is a clumsy way of enforcing the law. It would be another matter, however, if governments started tinkering with the internet’s address book, the Domain Name System (DNS). This allows the network to look up the computer on which a website lives. If a country started its own DNS, it could better control what people can see. Some fear this is precisely what China and others might do one day.
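The leverage that control of the DNS would give a government is easy to sketch. The following toy Python model (hostnames and addresses are invented, drawn from documentation ranges) shows the core point: whoever operates the resolver decides which machine a name maps to.

```python
# Toy illustration of why controlling the DNS matters: whoever runs the
# resolver decides which IP address a name maps to. The names and
# addresses below are invented for this example.
official_dns = {"news.example.com": "203.0.113.10"}
national_dns = {"news.example.com": "198.51.100.99"}  # a censor's substitute

def resolve(name, table):
    """Look up a hostname in a resolver's table, as a DNS server would."""
    return table.get(name)

print(resolve("news.example.com", official_dns))  # 203.0.113.10
print(resolve("news.example.com", national_dns))  # 198.51.100.99, a different site
```

A user pointed at the national resolver would be silently directed to whatever server the authorities chose, with no change visible in the browser's address bar.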

To confuse matters, the DNS is already splintering for a good reason. It was designed for the Latin alphabet, which was fine when most internet users came from the West. But because more and more netizens live in other parts of the world—China boasts 420m—last October the Internet Corporation for Assigned Names and Numbers, the body that oversees the DNS, allowed domain names entirely in other scripts. This makes things easier for people in, say, China, Japan or Russia, but marks another step towards the renationalisation of the internet.

Many media companies have already gone one step further. They use another part of the internet’s address system, the “IP numbers” that identify computers on the network, to block access to content if consumers are not in certain countries. Try viewing a television show on Hulu, a popular American video service, from Europe and it will tell you: “We’re sorry, currently our video library can only be streamed within the United States.” Similarly, Spotify, a popular European music-streaming service, cannot be reached from America.

Yet it is another kind of commercial attempt to carve up the internet that is causing more concern. Devotees of a unified cyberspace are worried that the online world will soon start looking as it did before the internet took over: a collection of more or less connected proprietary islands reminiscent of AOL and CompuServe. One of them could even become as dominant as Microsoft in PC software. “We’re heading into a war for control of the web,” Tim O’Reilly, an internet savant who heads O’Reilly Media, a publishing house, wrote late last year. “And in the end, it’s more than that, it’s a war against the web as an interoperable platform.”

The trend to more closed systems is undeniable. Take Facebook, the web’s biggest social network. The site is a fast-growing, semi-open platform with more than 500m registered users. Its American contingent spends on average more than six hours a month on the site and less than two on Google. Users have identities specific to Facebook and communicate mostly via internal messages. The firm has its own rules, covering, for instance, which third-party applications may run and how personal data are dealt with.

Apple is even more of a world apart. From its iPhone and iPad, people mostly get access to online services not through a conventional browser but via specialised applications available only from the company’s “App Store”. Granted, the store has lots of apps—about 250,000—but Apple nonetheless controls which ones make it onto its platform. It has used that power to keep out products it does not like, including things that can be construed as pornographic or that might interfere with its business, such as an app for Google’s telephone service. Apple’s press conference to show off its new wares on September 1st was streamed live over the internet but could be seen only on its own devices.

Even Google can be seen as a platform unto itself, if a very open one. The world’s biggest search engine now offers dozens of services, from news aggregation to word processing, all of which are tied together and run on a global network of dozens of huge data-centres. Yet Google’s most important service is its online advertising platform, which serves most text-based ads on the web. Critics say that this platform, as the company’s main source of revenue, is hardly a model of openness and transparency.

There is no conspiracy behind the emergence of these platforms. Firms are in business to make money. And such phenomena as social networks and online advertising exhibit strong network effects, meaning that a dominant market leader is likely to emerge. What is more, most users these days are not experts, but average consumers, who want secure, reliable products. To create a good experience on mobile devices, which more and more people will use to get onto the internet, hardware, software and services must be more tightly integrated than on PCs.

Net neutrality, or not?

Discussion of these proprietary platforms is only beginning. A lot of ink, however, has already been spilt on another form of balkanisation: in the plumbing of the internet. Most of this debate, particularly in America, is about “net neutrality”. This is one of the internet’s founding principles: that every packet of data, regardless of its contents, should be treated the same way, and the best effort should always be made to forward it.

Proponents of this principle want it to become law, out of concern that network owners will breach it if they can. Their nightmare is what Tim Wu, a professor at Columbia University, calls “the Tony Soprano vision of networking”, alluding to a television series about a mafia family. If operators were allowed to charge for better service, they could extort protection money from every website. Those not willing to pay for their data to be transmitted quickly would be left to crawl in the slow lane. “Allowing broadband carriers to control what people see and do online would fundamentally undermine the principles that have made the internet such a success,” said Vinton Cerf, one of the network’s founding fathers (who now works for Google), at a hearing in Congress.

Opponents of the enshrining of net neutrality in law—not just self-interested telecoms firms, but also experts like Dave Farber, another internet elder—argue that it would be counterproductive. Outlawing discrimination of any kind could discourage operators from investing to differentiate their networks. And given the rapid growth in file-sharing and video (see chart 2), operators may have good reason to manage data flows, lest other traffic be crowded out.

The issue is not as black and white as it seems. The internet has never been as neutral as some would have it. Network providers do not guarantee a certain quality of service, but merely promise to do their best. That may not matter for personal e-mails, but it does for time-sensitive data such as video. What is more, large internet firms like Amazon and Google have long redirected traffic onto private fast lanes that bypass the public internet to speed up access to their websites.

Whether such preferential treatment becomes more widespread, and even extortionary, will probably depend on the market and how it is regulated. It is telling that net neutrality has become far more politically controversial in America than it has elsewhere. This is a reflection of the relative lack of competition in America’s broadband market. In Europe and Japan, “open access” rules require network operators to lease parts of their networks to other firms on a wholesale basis, thus boosting competition. A study comparing broadband markets, published in 2009 by Harvard University’s Berkman Centre for Internet & Society, found that countries with such rules enjoy faster, cheaper broadband service than America because barriers to entry are much lower. And if any access provider starts limiting what customers can do, they will defect to another.

America’s operators have long insisted that open-access requirements would destroy their incentive to build fast, new networks: why bother if you will be forced to share them? After intense lobbying, America’s telecoms regulators bought this argument. But the lesson from elsewhere in the industrialised world is that it is not true. The result, however, is that America has a small number of powerful network operators, prompting concern that they will abuse their power unless they are compelled, by a net-neutrality law, to treat all traffic equally. Rather than trying to mandate fairness in this way—net neutrality is very hard to define or enforce—it makes more sense to address the underlying problem: the lack of competition.

It should come as no surprise that the internet is being pulled apart on every level. “While technology can gravely wound governments, it rarely kills them,” Debora Spar, president of Barnard College at Columbia University, wrote several years ago in her book, “Ruling the Waves”. “This was all inevitable,” argues Chris Anderson, the editor of Wired, under the headline “The Web is Dead” in the September issue of the magazine. “A technology is invented, it spreads, a thousand flowers bloom, and then someone finds a way to own it, locking out others.”

Yet predictions are hazardous, particularly in IT. Governments may yet realise that a freer internet is good not just for their economies, but also for their societies. Consumers may decide that it is unwise to entrust all their secrets to a single online firm such as Facebook, and decamp to less insular alternatives, such as Diaspora.

Similarly, more open technology could also still prevail in the mobile industry. Android, Google’s smart-phone platform, which is less closed than Apple’s, is growing rapidly and gained more subscribers in America than the iPhone in the first half of this year. Intel and Nokia, the world’s biggest chipmaker and the biggest manufacturer of telephone handsets, are pushing an even more open platform called MeeGo. And as mobile devices and networks improve, a standards-based browser could become the dominant access software on the wireless internet as well.

Stuck in the slow lane

If, however, the internet continues to go the other way, this would be bad news. Should the network become a collection of proprietary islands accessed by devices controlled remotely by their vendors, the internet would lose much of its “generativity”, warns Harvard’s Mr Zittrain. Innovation would slow down and the next Amazon, Google or Facebook could simply be, well, Amazon, Google or Facebook.

The danger is not that these islands become physically separated, says Andrew Odlyzko, a professor at the University of Minnesota. There is just too much value in universal connectivity, he argues. “The real question is how high the walls between these walled gardens will be.” Still, if the internet loses too much of its universality, cautions Mr Werbach of the Wharton School, it may indeed fall apart, just as world trade can collapse if there is too much protectionism. Theory demonstrates that interconnected networks such as the internet can grow quickly, he explains—but also that they can dissolve quickly. “This looks rather unlikely today, but if it happens, it will be too late to do anything about it.”



Ten Fallacies About Web Privacy

We are not used to the Internet reality that something can be known and at the same time no person knows it.

Privacy on the Web is a constant issue for public discussion—and Congress is always considering more regulations on the use of information about people’s habits, interests or preferences on the Internet. Unfortunately, these discussions lead to many misconceptions. Here are 10 of the most important:

1) Privacy is free. Many privacy advocates believe it is a free lunch—that is, consumers can obtain more privacy without giving up anything. Not so. There is a strong trade-off between privacy and information: The more privacy consumers have, the less information is available for use in the economy. Since information helps markets work better, the cost of privacy is less efficient markets.

2) If there are costs of privacy, they are borne by companies. Many who do admit that privacy regulations restricting the use of information about consumers have costs believe those costs are borne entirely by firms. Yet consumers get tremendous benefits from the use of information.

Think of all the free stuff on the Web: newspapers, search engines, stock prices, sports scores, maps and much more. Google alone lists more than 50 free services—all ultimately funded by targeted advertising based on the use of information. If revenues from advertising are reduced or if costs increase, then fewer such services will be provided.

3) If consumers have less control over information, then firms must gain and consumers must lose. When firms have better information, they can target advertising better to consumers—who thereby get better and more useful information more quickly. Likewise, when information is used for other purposes—for example, in credit rating—then the cost of credit for all consumers will decrease.

4) Information use is “all or nothing.” Many say that firms such as Google will continue to provide services even if their use of information is curtailed. This is sometimes true, but the services will be lower-quality and less valuable to consumers as information use is more restricted.

For example, search engines can better target searches if they know what searchers are looking for. (Google’s “Did you mean . . .” to correct typos is a familiar example.) Keeping a past history of searches provides exactly this information. Shorter retained search histories mean less effective targeting.
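The shape of such a correction is easy to illustrate. Google's actual system mines billions of logged queries with statistical language models; the toy Python sketch below merely finds the closest string among a handful of invented known queries, using the standard library's difflib:

```python
import difflib

# A toy stand-in for "Did you mean...": suggest the closest known query.
# Real search engines learn corrections from huge query logs; this just
# compares strings against a small invented list.
known_queries = ["beatles", "blackberry", "net neutrality"]

def did_you_mean(typo):
    """Return the best fuzzy match for a mistyped query, if any."""
    matches = difflib.get_close_matches(typo, known_queries, n=1)
    return matches[0] if matches else None

print(did_you_mean("baetles"))  # beatles
```

The point of the fallacy stands either way: the more past queries the engine may retain, the better its list of plausible corrections becomes.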

5) If consumers have less privacy, then someone will know things about them that they may want to keep secret. Most information is used anonymously. To the extent that things are “known” about consumers, they are known by computers. This notion is counterintuitive; we are not used to the concept that something can be known and at the same time no person knows it. But this is true of much online information.

6) Information can be used for price discrimination (differential pricing), which will harm consumers. For example, it might be possible to use a history of past purchases to tell which consumers might place a higher value on a particular good. The welfare implications of discriminatory pricing in general are ambiguous. But if price discrimination makes it possible for firms to provide goods and services that would otherwise not be available (which is common for virtual goods and services such as software, including cell phone apps) then consumers unambiguously benefit.

7) If consumers knew how information about them was being used, they would be irate. When something (such as tainted food) actually harms consumers, they learn about the sources of the harm. But in spite of warnings by privacy advocates, consumers don’t bother to learn about information use on the Web precisely because there is no harm from the way it is used.

8) Increasing privacy leads to greater safety and less risk. The opposite is true. Firms can use information to verify identity and reduce Internet crime and identity theft. Think of being called by a credit-card provider and asked a series of questions when using your card in an unfamiliar location, such as on a vacation. If this information is not available, then less verification can occur and risk may actually increase.

9) Restricting the use of information (such as by mandating consumer “opt-in”) will benefit consumers. In fact, since the use of information is generally benign and valuable, policies that lead to less information being used are generally harmful.

10) Targeted advertising leads people to buy stuff they don’t want or need. This belief is inconsistent with the basis of a market economy. A market economy exists because buyers and sellers both benefit from voluntary transactions. If this were not true, then a planned economy would be more efficient—and we have all seen how that works.

Mr. Rubin teaches economics at Emory University.


New Law to Stop Companies from Checking Facebook Pages in Germany

Potential bosses will no longer be allowed to look at job applicants’ Facebook pages, if a new law comes into force in Germany.

Good news for jobseekers who like to brag about their drinking exploits on Facebook: A new law in Germany will stop bosses from checking out potential hires on social networking sites. They will, however, still be allowed to google applicants.

Lying about qualifications. Alcohol and drug use. Racist comments. These are just some of the reasons why potential bosses reject job applicants after looking at their Facebook profiles.

According to a 2009 survey commissioned by the website CareerBuilder, some 45 percent of employers use social networking sites to research job candidates. And some 35 percent of those employers had rejected candidates based on what they found there, such as inappropriate photos, insulting comments about previous employers or boasts about their drug use.

But those Facebook users hoping to apply for a job in Germany should pause for a moment before they hit the “deactivate account” button. The government has drafted a new law which will prevent employers from looking at a job applicant’s pages on social networking sites during the hiring process.

According to reports in the Monday editions of the Die Welt and Süddeutsche Zeitung newspapers, Interior Minister Thomas de Maizière has drafted a new law on data privacy for employees which will radically restrict the information bosses can legally collect. The draft law, which is the result of months of negotiations between the different parties in Germany’s coalition government, is set to be approved by the German cabinet on Wednesday, according to the Süddeutsche Zeitung.

Although the new law will reportedly prevent potential bosses from checking out a candidate’s Facebook page, it will allow them to look at sites that are expressly intended to help people sell themselves to future employers, such as the business-oriented social networking site LinkedIn. Information about the candidate that is generally available on the Internet is also fair game. In other words, employers are allowed to google potential hires. Companies may not be allowed to use information if it is too old or if the candidate has no control over it, however.

Toilets to Be Off-Limits

The draft legislation also covers the issue of companies spying on employees. According to Die Welt, the law will expressly forbid firms from video surveillance of workers in “personal” locations such as bathrooms, changing rooms and break rooms. Video cameras will only be permitted in certain places where they are justified, such as entrance areas, and staff will have to be made aware of their presence.

Similarly, companies will only be able to monitor employees’ telephone calls and e-mails under certain conditions, and firms will be obliged to inform their staff about such eavesdropping.

The new law is partially a reaction to a number of recent scandals in Germany involving management spying on staff. In 2008, it was revealed that the discount retail chain Lidl had spied on employees in the toilet and had collected information on their private lives. National railway Deutsche Bahn and telecommunications giant Deutsche Telekom were also involved in cases relating to surveillance of workers.

Online data privacy is increasingly becoming a hot-button issue in Germany. The government is currently also working on legislation to deal with issues relating to Google’s Street View service, which is highly controversial in the country because of concerns it could violate individuals’ privacy.



End of the Net Neut Fetish

What the Google-Verizon deal really means for the wireless future.

Historians, if any are interested, will conclude that the unraveling of the net neutrality movement began when the iPhone appeared, instigating a tsunami of demand for mobile Web access.

They will conclude that an ancillary role was played when carriers (even some non-wireless) began talking about metered pricing to meet the deluge of Internet video.

Suddenly, those net neut advocates who live in the real world (e.g., Google) had to face where their advocacy was leading—to usage-based pricing for mobile Web users, a dagger aimed at the heart of their own business models. After all, who would click on a banner ad if it meant paying to do so?

Thus Google and other realists developed a new appreciation of the need for incentives to keep their telco and cable antagonists investing in new broadband capacity. They developed an appreciation of “network management,” though it meant discriminating between urgent and less urgent traffic.

Most of all, they realized (whisper it quietly) that they might soon want to pay out of their own pockets to speed their bits to wireless users, however offensive to the net neutrality gods.

Hence a watershed this week in the little world of the net neut obsessives, as the realists finally parted company with the fetishists. The latter are those Washington-based groups that have emerged in recent years to gobble up Google’s patronage and declaim in favor of “Internet freedom.” You can easily recognize these groups today—they’re the ones taking Google’s name in vain.

The unraveling of the net neut coalition is perhaps the one meaningful result of the new net neut “principles” enunciated this week by former partisans Google and Verizon.

While these principles address in reasonable fashion the largely hypothetical problem of carriers blocking content and services that compete with their own, Verizon and Google insist the terms aren’t meant to apply to wireless. Funny thing—because wireless is precisely what brings these ex-enemies together in the first place. They’re partners in promoting Google’s Android software as a rival platform to Apple’s iPhone.

All their diversionary huffing and puffing, in fact, is a backhanded way of acknowledging reality: The future is mobile, and anything resembling net neutrality on mobile is a nonstarter thanks to the problem of runaway demand and a shortage of spectrum capacity.

Tasteless as it may be to toot our own horn, this column noted the dilemma last year, even forecasting Google’s coming apostasy on net neutrality. Already it was clear that only two economic solutions existed to a coming mobile meltdown. Either wireless subscribers would have to face usage-based pricing, profoundly disturbing the ad-based business models of big players whose services now appear “free” to users. Or Google and its ilk would have to be “willing to subsidize delivery of their services to mobile consumers—which would turn net neut precisely on its head.”

Our point was that the net neut fetish was dead, and good riddance. All along, competition was likely to provide a more reasonable and serviceable definition of “net neutrality” than regulators could ever devise or enforce. That rough-and-ready definition would allow carriers to discriminate in ways that consumers, on balance, are willing to put up with because it enables acceptable service at an acceptable price.

Even now, Google and its CEO Eric Schmidt, in their still-conflicted positioning, argue that the wired Internet has qualities of a natural monopoly, because most homes are dependent on one cable modem supplier. This treats the phone companies’ DSL and fiber services as if they don’t exist. It also overlooks how people actually experience the Internet.

Users don’t just get the Internet at home, but at work and on their mobile devices, and they won’t stand for being denied on one device services and sites they’re used to getting on the others. That is, they won’t unless there’s a good reason related to providing optimum service on a particular device.

You don’t have to look far for an example: Apple iPhone users put up with Apple’s blocking of most Web video on the iPhone because, on the whole, the iPhone still provides a satisfying service.

This is the sensible way ahead as even Google, a business realist, now seems to recognize. The telecom mavens at Strand Consult joke that Google is a “man with deep pockets and short arms, who suddenly disappears when the waiter brings the bill.” Yes, on the wired Net, Google remains entrenched in the position that network providers must continue to bury the cost to users of Google’s services uniformly across the bills of all broadband subscribers.

That won’t work on the wireless battlefield, and Google knows it. Stay tuned as the company’s business interests trump the simple net neutrality that the fetishists believe in—and that Google used to believe in.

Holman W. Jenkins, Wall Street Journal



Spies, secrets and smart-phones

SOME sort of a deal seems to have been thrashed out over the weekend, according to reports from Saudi Arabia, under which its spooks will be able to snoop to their heart’s content on messages sent over BlackBerrys within the kingdom. All last week, as it negotiated with the Saudi, United Arab Emirates (UAE) and Indian authorities over their demands for monitoring, the smart-phones’ Canadian maker, Research In Motion (RIM), was dodging journalists’ demands for proper explanations about what exactly is negotiable about the phones’ security. The Economist asked five times in four days for an interview, and got nowhere. Other news organisations had a similar experience.

The best we could get from the company was a series of tight-lipped statements, of which the least cryptic was this one:

RIM has spent over a decade building a very strong security architecture to meet our enterprise customers’ strict security requirements around the world. It is a solution that we are very proud of, and it has helped us become the number one choice for enterprises and governments. In recent days there has been a range of commentary, speculation, and misrepresentation regarding this solution and we want to take the opportunity to set the record straight. There is only one BlackBerry enterprise solution available to our customers around the world and it remains unchanged in all of the markets we operate in. RIM cooperates with all governments with a consistent standard and the same degree of respect. Any claims that we provide, or have ever provided, something unique to the government of one country that we have not offered to the governments of all countries, are unfounded. The BlackBerry enterprise solution was designed to preclude RIM, or any third party, from reading encrypted information under any circumstances since RIM does not store or have access to the encrypted data.

RIM cannot accommodate any request for a copy of a customer’s encryption key, since at no time does RIM, or any wireless network operator or any third party, ever possess a copy of the key.  This means that customers of the BlackBerry enterprise solution can maintain confidence in the integrity of the security architecture without fear of compromise.

Seems, at first glance, pretty categorical and reassuring, doesn’t it? But hang on. First, all of the reassurances about message security seem only to apply to “enterprise” customers—large organisations that give BlackBerrys to their staff, and which route messages through a server on their own premises. RIM’s statement appears to make no promises to the millions of BlackBerry users worldwide who are contracted directly to a mobile-telecoms operator. Their messages are routed via RIM’s own servers, which are dotted around the world. Wherever RIM puts them, it has to comply with local authorities’ demands for access. It is reported that RIM has agreed to put servers inside Saudi territory, which would of course be under Saudi jurisdiction. Presumably the other governments demanding greater access to message monitoring will want something similar, since the company does say it co-operates with all governments “with a consistent standard”.

RIM’s guarantee of the impregnability of customers’ encryption keys is also less impressive than it appears. Let’s leave aside for a moment the long history of “uncrackable” codes proving crackable after all. All that RIM is saying is that while the message is encrypted it is not possible to provide a key to decrypt it. What about at either end of the encryption process? E-mails sent encrypted from a BlackBerry handset at some point have to be decrypted and sent to the recipient’s e-mail server. That is done either by the “enterprise” server, for those large BlackBerry users that have them, or in RIM’s own servers in the case of people who have their BlackBerry contract with a local telecoms firm. So at the very least, anyone who has a BlackBerry contract with a Saudi telecoms operator, or whose Saudi employer provides his BlackBerry, would now seem to have his e-mails at risk of being read if the authorities demand this.

But what the Saudis were concerned about was not so much e-mails but those “uncrackable” instant-messaging chats. When the company says it does not have, and cannot provide, a key to decrypt them as they travel from handset to handset, what this may mean, says Ross Anderson, professor of security engineering at Cambridge University in England, is that a new key is generated for each chat, and that only the paired handsets at either end have that key. If that is the case, he says, it might be rather difficult to decode those messages’ contents while they are encrypted and in transmission (though it would not be hard to detect who has sent a message to whom, and when).

The weakest link

However, as we have reported before, the handsets themselves are the weakest link in BlackBerry security. Last year the UAE’s state-controlled telecoms operator, Etisalat, sent out what it insists was a software patch to improve BlackBerrys’ performance. RIM put out an indignant statement saying that “independent sources” had concluded that the patch could “enable unauthorised access to private or confidential information stored on the user’s smartphone.” In plain language: it appeared to be spyware. RIM gave users advice on how to remove it from their handsets.

The easiest way for spooks to read all of a surveillance target’s messages (including e-mails, texts, web forms) might be to do more stealthily what Etisalat seems (if you accept RIM’s theory) to have tried so clumsily to do: push a piece of spyware out to his handset—perhaps disguised as, or hidden in, a software update. This blogger receives software patches regularly and without warning on his company BlackBerry and would have no idea if one of them were part of a dastardly MI5 plot (paranoid, moi?).

According to an Indian government document leaked to the Economic Times last week, RIM has promised to provide the “tools”, within eight months, for Indian spooks to read BlackBerry instant-messaging chats. It would be a huge blow to its reputation if it were ever found to have helped spy agencies put spyware on users’ handsets. So perhaps RIM itself would not risk that. But maybe others can provide a “solution” that can push snooping software on to handsets. America’s spies seem to think China’s spies can do this: last year Joel Brenner, then a senior counterintelligence official, told a security conference near CIA headquarters that during the Beijing Olympics “your phone or BlackBerry could have been tagged, tracked, monitored, and exploited between your disembarking the airplane and reaching the taxi stand at the airport. And when you emailed back home, some or all of the malware may have migrated to your home server. This is not hypothetical.”

Mark Rasch, former head of the computer crimes unit at the United States Department of Justice, told Reuters that the ability to tap into messages is routine for security agencies around the world, and he should know. American authorities have huge powers, under the post-9/11 Patriot Act and other laws, to demand compliance with wiretapping orders, to gag those who are complying with them and to grant them immunity against any legal consequences. So basically, it’s a licence to fib, or at least to keep schtum: if any smart-phone or telecoms provider were letting Uncle Sam take a peep at our messages, they wouldn’t be able to tell us, and even if we found out we couldn’t sue them. Is it plausible that the American authorities, after 9/11, would let people walk around with devices that send completely uncrackable messages? Surely they can read them, says Bruce Schneier, another internet-security expert: “You know they do.”

Given India’s tough line (unsurprising, given its terrorism worries), if it doesn’t get the “tools” to read messenger chats, then RIM may be shut out of a huge market; on the other hand, if BlackBerry services are not blocked in India in the coming months, this is bound to raise suspicions that its authorities have somehow gained (not necessarily from RIM itself) the means to read chats and other messages.

All this leaves RIM in a difficult situation. It doesn’t want to be, and perhaps may not be able to be, entirely open about what sort of access to messages it offers the authorities in different countries. The trouble is, as it notes in its statement, it has to a large degree built its brand on the supposed uncrackability of BlackBerry messages—more than rival brands have done. The feature that set its products apart from other smart-phones is now being thrown into doubt, and at an especially awkward time. The launch last week of the new-generation BlackBerry, the Torch, was overshadowed not just by the disputes with various governments over monitoring, but by a Nielsen survey which showed that, unlike iPhone and Android users, only a minority of BlackBerry owners are thinking of buying another BlackBerry next time. The company’s evasiveness on the security issue is hardly going to encourage them to stay loyal.

Pretending not to listen

What about all those other supposedly hack-proof means of communication, such as Skype internet telephony and Google Mail, both of which are “encrypted”? A security pundit interviewed on BBC television’s “Newsnight” a few days ago speculated that the American authorities are only pretending when they claim they still can’t tap into Skype calls. This was then put to Lord West, a former British security minister. His response was fascinating:

When I come on a programme like this I’m always very nervous, ‘cos I know so much. And also people…don’t necessarily always tell the truth. That sounds an awful thing to say but do you want anyone to know that you can get into very high-encrypted stuff? No, you can say “we don’t, we can’t do it”. 

He then went on to say how “mind-boggling” are the capabilities of America’s National Security Agency and its British counterpart, GCHQ. To this blogger, that sounded like: “Yes of course we can hack Skype calls and all the rest, but we have to pretend we can’t”. Mr Anderson notes that there are all sorts of other internet-based services that provide encrypted messaging, including various dungeons-and-dragons online games. As these proliferate, providing terrorists and crime gangs with secure cyber-meeting places, the spooks will have to keep chasing them: serving papers on the hosts where possible, seeking deals with them otherwise. This is tricky but not impossible if you are the United States. For less powerful nations like the UAE, it is harder to get co-operation, and simply blocking all such secure-message services would do great economic damage.

Not all governments may get all of the snooping powers they want (RIM seems to be trying to persuade some to make do with the “metadata” of messages—who sent a message to whom, and when—rather than their contents). Even so, whether you are an international terrorist, an investment banker, or indeed an intelligence agent, given the technical capacity and the legal powers at the disposal of the big world powers, it seems that even on “secure” and “encrypted” channels, you can never be quite sure that someone isn’t listening in:

Number Two: We want information, information, information…
The Prisoner: You won’t get it.
Number Two: By hook or by crook, we will.



The Internet Generation Prefers the Real World

They may have been dubbed the “Internet generation,” but young people are more interested in their real-world friends than Facebook. New research shows that the majority of children and teenagers are not the Web-savvy digital natives of legend. In fact, many of them don’t even know how to google properly.

Seventeen-year-old Jetlir is online every day, sometimes for many hours at a time and late into the night. The window of his instant messaging program is nearly always open on his computer screen. A jumble of friends and acquaintances chat with each other. Now and again Jetlir adds half a sentence of his own, though this is soon lost in the endless stream of comments, jokes and greetings. He has in any case moved on, and is now clicking through sports videos on YouTube.

Jetlir is a high school student from Cologne. He could easily be a character in one of the many newspaper stories about the “Internet generation” that is allegedly in grave danger of losing itself in the virtual world.

Jetlir grew up with the Internet. It’s been around for as long as he can remember. He spends half of his leisure time on Facebook and YouTube, or chatting with friends online.

In spite of this, Jetlir thinks that other things — especially basketball — are much more important to him. “My club comes first,” Jetlir says. “I’d never miss a training session.” His real life also seems to come first in other respects: “If someone wants to meet me, I turn off my computer immediately,” he says.

‘What’s the Point?’

Indeed, Jetlir does not actually expect very much from the Internet. Older generations may consider it a revolutionary medium, enthuse about the splendors of blogging and tweet obsessively on the short-messaging service Twitter. But Jetlir is content if his friends are within reach, and if people keep uploading videos to YouTube. He’d never dream of keeping a blog. Nor does he know anybody else his age who would want to. And he’s certainly never tweeted before. “What’s the point?” he asks.

The Internet plays a paradoxical role in Jetlir’s life. Although he uses it intensively, he isn’t that interested in it. It’s indispensable, but only if he has nothing else planned. “It isn’t everything,” he says.

Jetlir’s easy-going attitude towards the Internet is typical of German adolescents today, as several recent studies have shown. Odd as it may seem, the first generation that cannot imagine life without the Internet doesn’t actually consider the medium particularly important, and indeed shuns some of the latest web technologies. Only 3 percent of young people keep their own blog, and no more than 2 percent regularly contribute to Wikipedia or other comparable open source projects.

Similarly, most young people in Germany ignore social bookmarking websites like Delicious and photo-sharing portals such as Flickr and Picasa. Apparently the netizens of the future couldn’t care less about the collaborative delights of Web 2.0 — that, at least, is the finding of a major study by the Hans Bredow Institute in Germany.

The Net Generation

For years, experts have been talking about a new kind of tech-savvy youth who are mobile, networked, and chronically restless, spoilt by the glut of stimuli on the Internet. These young people were said to live in perpetual symbiosis with their computers and mobile phones, with networking technology practically imprinted in their genes. The media habitually referred to them as “digital natives,” “Generation @” or simply “the net generation.”

Two of the much-cited spokesmen of this movement are the 64-year-old American author Marc Prensky and his 62-year-old Canadian colleague, Don Tapscott. Prensky coined the expression “digital natives” to describe those lucky souls born into the digital era, instinctively acquainted with all that the Internet has to offer in terms of participation and self-promotion, and streets ahead of their elders in web-savviness. Prensky classifies everyone over the age of 25 as “digital immigrants” — people who gain access to the Internet later in life and betray themselves through their lack of mastery of the local customs, like real-world immigrants who speak their adopted country’s language with an accent.

A small group of writers, consultants and therapists thrives on repeating the same old mantra, namely that our youth is shaped through and through by the online medium in which it grew up. They claim that our schools must, therefore, offer young people completely new avenues — surely traditional education cannot reach this generation any longer, they argue.

Little Evidence

There is little evidence to back such theories up, however. Rather than conducting surveys, these would-be visionaries base their arguments on impressive individual cases of young Internet virtuosos. As other, more serious researchers have since discovered, such exceptions say very little about the generation as a whole, and those researchers are now avidly trying to correct the mistakes of the past.

Numerous studies have since revealed how young people actually use the Internet. The findings show that the image of the “net generation” is almost completely false — as is the belief in the all-changing power of technology.

A study by the Hans Bredow Institute entitled “Growing Up With the Social Web” was particularly thorough in its approach. In addition to conducting a representative survey, the researchers conducted extensive individual interviews with 28 young people. Once again it became clear that young people primarily use the Internet to interact with friends. They go on social networking sites like Facebook and the popular German website SchülerVZ, which is aimed at school students, to chat, mess around and show off — just like they do in real life.

There are a few genuine net pioneers who compose music online with friends from Amsterdam and Barcelona, organize spontaneous protests to lobby for cheaper public transport passes for schoolchildren, or use the virtual arena in other imaginative ways. But most of the respondents saw the Internet as merely a useful extension of the old world rather than as a completely new one. Their relationship to the medium is therefore far more pragmatic than initially posited. “We found no evidence whatsoever that the Internet is the dominating influence in the lives of young people,” says Ingrid Paus-Hasebrink, the Salzburg-based communication researcher who led the project.

Not Very Skilled

More surprising yet, these supposedly gifted netizens are not even particularly adept at getting the most out of the Internet. “They can play around,” says Rolf Schulmeister, an educational researcher from Hamburg who specializes in the use of digital media in the classroom. “They know how to start up programs, and they know where to get music and films. But only a minority is really good at using it.”

Schulmeister should know. He recently ploughed through the findings of more than 70 relevant studies from around the globe. He too came to the conclusion that the Internet certainly hasn’t taken over the real world. “The media continue to account for only a part of people’s leisure activities. And the Internet is only one medium among many,” he says. “Young people still prefer to meet friends or take part in sports.”

Of course that won’t prevent the term “net generation” being bandied about in the media and elsewhere. “It’s an obvious, cheap metaphor,” Schulmeister says. “So it just keeps cropping up.”

In Touch with Friends around the Clock

In purely statistical terms, it appears that ever-greater proportions of young people’s days are focused on technology. According to a recent study carried out by the Stuttgart-based media research group MPFS, 98 percent of 12- to 19-year-olds in Germany now have access to the Internet. And by their own estimates, they are online for an average of 134 minutes a day — just three minutes less than they spend in front of the television. 

However, the raw figures say little about what these supposed digital natives actually do online. As it turns out, the kids of today are very similar to previous generations of young people: They are mainly interested in communicating with their peers. Today’s young people spend almost half of their time interacting socially online. E-mail, instant messaging and social networking together account for the bulk of their Internet time.

For instance Tom, one of Jetlir’s classmates, remains in touch with 30 or 40 of his friends almost around the clock. Even so, the channels of communication vary. In the morning Tom will chat briefly on his PC, during lunch recess he’ll rattle off a few text messages, after school he’ll sit down for his daily Facebook session and make a few calls on his cell phone, and in the evening he’ll make one or two longer video calls using the free Internet telephony service Skype.

The Medium Is Not the Message

For Tom, Jetlir, and the others of their age, it doesn’t seem to matter whether they interact over the Internet or via another medium. It seems that young people are mainly interested in what the particular medium or communication device can be used for. In the case of the Internet in particular, that can be one of many things: Sometimes it acts as a telephone, sometimes as a kind of souped-up television. Tom spends an hour or two every day watching online videos, mostly on YouTube, but also entire TV programs if they’re available somehow. “Everyone knows how to find episodes of the TV series they want to watch,” says fellow pupil Pia.

The second most popular use of the Internet is for entertainment. According to a survey conducted by Leipzig University in 2008, more young people now access their music via various online broadcasting services than listen to it on the radio. As a consequence, the video-sharing portal YouTube has become the global jukebox, serving the musical needs of the world’s youth — although its rise to prominence as a resource for music on demand has gone largely unnoticed. Indeed, there are few songs that cannot be dredged up somewhere on the site.

“That’s also practical if you’re looking for something new,” Pia says. Searching for specific content is incredibly simple on YouTube. In general all you need to do is enter half a line of some lyrics you caught at a party, and YouTube supplies the corresponding music video and the song itself.

In this way the Internet is becoming a repository for the content of older media, sometimes even replacing them altogether. And youthful audiences, always on the lookout for entertainment or for something to share, are now increasingly turning to the Internet to find it. But it’s not exactly the kind of behavior that would trigger a lifestyle revolution.

Teens Still Enjoy Meeting Friends

What’s more, there’s still plenty of life beyond the many screens at their disposal. A 2009 study by MPFS found that nine out of every 10 teenagers put meeting friends right at the top of their list of favorite non-media activities. More striking still, 76 percent of young people in Germany take part in sport several times a week, although among girls that figure is only 64 percent.

In January, the authors of the “Generation M2” survey by the Kaiser Family Foundation published the remarkable finding that even the most intense media users in the US exercised just as much as others of their age.

So how can they pack all that into a single day? Simply adding together the amount of time devoted to each activity creates a very false picture. That’s because most young people are excellent media multitaskers, simultaneously making phone calls, checking out their friends on Facebook and listening to music. And it appears that they’re primarily online at times they would otherwise spend lounging around.

“I go online when I have nothing better to do,” Jetlir says. “Unfortunately that’s often when I should already be sleeping.” Thanks to cell phones and MP3 players, young people can also fill gaps in their busy schedules even when they’re away from static media sources like TVs, computers and music systems. Media use can therefore increase steadily while still leaving plenty of time for other activities.

‘Time’s Too Precious’

What’s more, many young people still aren’t the least bit interested in all the online buzz. Some 31 percent of them rarely or never visit social networking sites. Anna, who attends the same school as Jetlir, says she would “probably only miss the train timetable” if the Internet ceased to exist, while fellow student Torben thinks “time’s too precious” to waste on computers. He plays handball and soccer, and says “10 minutes a day on Facebook” is all he needs.

By contrast, Tom will occasionally get so wrapped up in Facebook and his instant messaging that he’ll forget the time altogether. “It’s a strange feeling to realize you’ve spent so much time on something and have nothing to show for it,” he admits. But he also knows that others find the temptations of the virtual world much harder to resist. “Everyone knows a few people who are online all day,” Pia says, though Jetlir suggests that’s only for want of something better to do. “None of them would turn down an offer to go out somewhere instead,” he adds.

But even the most inveterate netizens aren’t necessarily natural experts in the medium. If you want to make use of the Internet, you first have to understand how the real world works. And that’s often the sticking point. The only advantage that young people have over their elders is their lack of inhibitions with regard to computers. “They simply try things out,” says René Scheppler, a teacher at a high school in Wiesbaden. “They discover all sorts of things that way. The only thing is they don’t understand how it works.”

‘I Found It on Google’

Occasionally the teacher will ask his students big-picture questions about the medium they take for granted. Questions like: Where did the Internet come from? “I’ll get replies like, ‘What do you mean? It’s just there!'” Scheppler says. “Unless they’re prompted to do so, they never address those sorts of questions. For them it’s like a car: All that matters is that it works.”

And because teenagers are basically inexperienced, they are all the more likely to overestimate their own abilities. “They think they’re the real experts,” Scheppler says. “But when it comes down to it, they can’t even google properly.”

When Scheppler scheduled a lesson about Google to teach his pupils how to better search the Web, they thought it was hilarious. “Google?!” they gasped. “We know all about that. We do it all the time. And now Mr Scheppler wants to tell us how to use Google!”

He, therefore, set them a challenge: They were to design a poster on globalization based on the example of Indian subcontractors. Now it was the teacher’s turn to laugh. “They just typed a series of individual keywords into Google, and then they went click, click, click: ‘Don’t want that! Useless! Let’s try another one!'” Scheppler recalls. “They’re very quick to jettison things, sometimes even relevant information. They think they can tell the wheat from the chaff, but they just stumble about — very rapidly, very hectically and very superficially. And they stop the moment they get a hit that looks reasonably plausible.”

Few have any idea where the information on the Web comes from. And if their teacher asks for references, he often gets the reply, “I found it on Google.”

Learning How to Use the Internet Productively

Recent research into the way people conduct Internet searches confirms Scheppler’s observations. A major study conducted by the British Library came to the sobering conclusion that the “net generation” hardly knows what to look for, quickly scans over results, and has a hard time assessing relevance. “The information literacy of young people has not improved with the widening access to technology,” the authors wrote. 

A few schools have now realized that the time has come to act. One of them is Kaiserin Augusta School in Cologne, the high school that Jetlir, Tom, Pia, and Anna attend. “We want our pupils to learn how to use the Internet productively,” says music teacher André Spang, “not just for clicking around in.”

Spang uses Web 2.0 tools in the classroom. When teaching the music of the 20th century, for example, he got his 12th-graders to produce a blog on the subject. “They didn’t even know what that was,” he says. Now they’re writing articles on aleatoric music and musique concrète, composing simple 12-tone rows and collecting musical examples, videos, and links about the subject. Everyone can access the project online, see what the others are doing and comment on each other’s work. The fact that the material is public also helps to promote healthy competition and ambition among the participants.

Blogs are not technically challenging and are quick to set up. That’s why they are also being used to teach other subjects. Piggybacking on the enormous success of Wikipedia, the collaborative online encyclopedia produced entirely by volunteer contributors, wikis are also being employed in schools. The 10th-graders in the physics class of Spang’s colleague Thomas Vieth are currently putting together a miniature encyclopedia of electromagnetism. “In the past all we could do was give out group assignments, and people would just rattle off their presentations,” Vieth says. “Now everyone reads along, partly because all the articles are connected and have to be interlinked.”

Not Interested in Fame

One positive side-effect is that the students are also learning how to find reliable information on the Internet. And so that they understand what they find online, there are regular old-fashioned sessions on learning how to learn, including reading, comprehension and summarizing exercises. So instead of tech-savvy young netizens challenging the school, the school itself is painstakingly teaching them how to benefit from the online medium.

For most of the pupils it was the first time they had contributed their own work to the Internet’s pool of data. They’re not interested in widespread fame. Self-promoters are rare, and most young people even shun anonymous role-playing such as that found in the online world Second Life. The youth of today, it turns out, is much more obsessed with real relationships. Whatever they do or write is directed at their particular group of friends and acquaintances.

That also applies to video, the medium most tempting for people to try out for themselves. An impressive 15 percent of young people have already uploaded at least one home-made video, mostly shot on a cell phone.

Part of Their Social Life

One student, Sven, has uploaded a video he made to YouTube. It shows him and a few friends in their bathing suits first by a lake, then all running into the clearly icy water. “No, really,” Sven says, “people are interested in this. They talk about it!” There are indeed already 37 comments under the video, all from his circle of friends.

“And here,” Sven adds, pointing to the screen. “Here on Facebook someone recently posted just a dot. Even so, seven people have clicked on the ‘Like’ button so far, and 83 commented on the dot.”

Older people might consider such activity inane, but for young people it’s part of their social life and no less important than a friendly wave or affable clowning around in the offline world. The example of the dot shows how normal the Internet has become, and debunks the idea that it is a special world in which special things happen.

“Media are used by the masses if they have some relevance to everyday life,” says Rolf Schulmeister, the educational researcher. “And they are used for aims that people already had anyway.”

Turning Point

Young people have now reached this turning point. The Internet is no longer something they are willing to waste time thinking about. It seems that the excitement about cyberspace was a phenomenon peculiar to their predecessors, the technology-obsessed first generation of Web users.

For a brief transition period, the Web seemed to be tremendously new and different, a kind of revolutionary power that could do and reshape everything. Young people don’t feel that way. They hardly even use the word “Internet,” talking about “Google”, “YouTube” and “Facebook” instead. And they certainly no longer understand it when older generations speak of “going online.”

“The expression is meaningless,” Tom says. Indeed the term is a relic of a time when the Internet was still something special, evoking a separate space distinct from our real life, an independent, secretive world that you entered and then exited again.

Tom and his friends just describe themselves as being “on” or “off,” using the English terms. What they mean is: contactable or not.



The Web’s New Gold Mine: Your Secrets

A Journal investigation finds that one of the fastest-growing businesses on the Internet is the business of spying on consumers. First in a series.

Hidden inside Ashley Hayes-Beaty’s computer, a tiny file helps gather personal details about her, all to be put up for sale for a tenth of a penny.

The file consists of a single code—4c812db292272995e5416a323e79bd37—that secretly identifies her as a 26-year-old female in Nashville, Tenn.
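
The Journal does not say how such codes are minted, but the shape of the identifier (32 hexadecimal characters) is what a random 128-bit value or an MD5 digest looks like. A hypothetical sketch of both approaches in Python; neither is confirmed as Lotame's actual method, and the fingerprint string is invented:

```python
import hashlib
import uuid

def random_tracking_id():
    """Mint a fresh anonymous ID: 128 random bits, printed as 32 hex chars."""
    return uuid.uuid4().hex

def derived_tracking_id(fingerprint):
    """Alternatively, hash stable browser traits into a repeatable ID."""
    return hashlib.md5(fingerprint.encode("utf-8")).hexdigest()

# A made-up fingerprint: user agent, screen size and language.
fingerprint = "Mozilla/5.0|1280x800|en-US"
tracking_id = derived_tracking_id(fingerprint)
```

Either way the company can honestly say it holds no name, only a number; the privacy question is how far the attributes attached to that number narrow down the person behind it.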

The code knows that her favorite movies include “The Princess Bride,” “50 First Dates” and “10 Things I Hate About You.” It knows she enjoys the “Sex and the City” series. It knows she browses entertainment news and likes to take quizzes.

“Well, I like to think I have some mystery left to me, but apparently not!” Ms. Hayes-Beaty said when told what that snippet of code reveals about her. “The profile is eerily correct.”

Ms. Hayes-Beaty is being monitored by Lotame Solutions Inc., a New York company that uses sophisticated software called a “beacon” to capture what people are typing on a website—their comments on movies, say, or their interest in parenting and pregnancy. Lotame packages that data into profiles about individuals, without determining a person’s name, and sells the profiles to companies seeking customers. Ms. Hayes-Beaty’s tastes can be sold wholesale (a batch of movie lovers is $1 per thousand) or customized (26-year-old Southern fans of “50 First Dates”).

“We can segment it all the way down to one person,” says Eric Porres, Lotame’s chief marketing officer.

One of the fastest-growing businesses on the Internet, a Wall Street Journal investigation has found, is the business of spying on Internet users.

The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.

• The study found that the nation’s 50 top websites on average installed 64 pieces of tracking technology onto the computers of visitors, usually with no warning. A dozen sites each installed more than a hundred. The nonprofit Wikipedia installed none.

• Tracking technology is getting smarter and more intrusive. Monitoring used to be limited mainly to “cookie” files that record websites people visit. But the Journal found new tools that scan in real time what people are doing on a Web page, then instantly assess location, income, shopping interests and even medical conditions. Some tools surreptitiously re-spawn themselves even after users try to delete them.

• These profiles of individuals, constantly refreshed, are bought and sold on stock-market-like exchanges that have sprung up in the past 18 months.
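
The "re-spawning" behaviour described above usually works by writing the same ID into several storage areas (ordinary cookies, Flash "local shared objects", HTML5 storage) and restoring any copy the user deletes from the ones that survive. A minimal sketch of the logic, with two in-memory dicts standing in for the real storage areas:

```python
import uuid

class RespawningTracker:
    """Keep one ID in two stores; resurrect whichever copy gets deleted."""

    def __init__(self):
        self.cookie_store = {}  # stands in for ordinary browser cookies
        self.flash_store = {}   # stands in for Flash local shared objects

    def get_id(self, browser):
        # Recover the ID from any store that still has it.
        tid = self.cookie_store.get(browser) or self.flash_store.get(browser)
        if tid is None:
            tid = uuid.uuid4().hex  # genuinely new visitor
        # Re-write every store, undoing any deletion the user made.
        self.cookie_store[browser] = tid
        self.flash_store[browser] = tid
        return tid

tracker = RespawningTracker()
first_visit = tracker.get_id("alice")
del tracker.cookie_store["alice"]       # user "clears cookies"
second_visit = tracker.get_id("alice")  # ID comes back from the Flash copy
```

Clearing only one store achieves nothing: on the next page load the surviving copy re-identifies the user, which is what makes the deletion appear to fail.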

The new technologies are transforming the Internet economy. Advertisers once primarily bought ads on specific Web pages—a car ad on a car site. Now, advertisers are paying a premium to follow people around the Internet, wherever they go, with highly specific marketing messages.

In between the Internet user and the advertiser, the Journal identified more than 100 middlemen—tracking companies, data brokers and advertising networks—competing to meet the growing demand for data on individual behavior and interests.

The data on Ms. Hayes-Beaty’s film-watching habits, for instance, is being offered to advertisers on BlueKai Inc., one of the new data exchanges.

“It is a sea change in the way the industry works,” says Omar Tawakol, CEO of BlueKai. “Advertisers want to buy access to people, not Web pages.”

The Journal examined the 50 most popular U.S. websites, which account for about 40% of the Web pages viewed by Americans. (The Journal also tested its own site, It then analyzed the tracking files and programs these sites downloaded onto a test computer.

As a group, the top 50 sites placed 3,180 tracking files in total on the Journal’s test computer. Nearly a third of these were innocuous, deployed to remember the password to a favorite site or tally most-popular articles.

But over two-thirds—2,224—were installed by 131 companies, many of which are in the business of tracking Web users to create rich databases of consumer profiles that can be sold.

The top venue for such technology, the Journal found, was IAC/InterActive Corp.’s A visit to the online dictionary site resulted in 234 files or programs being downloaded onto the Journal’s test computer, 223 of which were from companies that track Web users.

The information that companies gather is anonymous, in the sense that Internet users are identified by a number assigned to their computer, not by a specific person’s name. Lotame, for instance, says it doesn’t know the name of users such as Ms. Hayes-Beaty—only their behavior and attributes, identified by code number. People who don’t want to be tracked can remove themselves from Lotame’s system.

And the industry says the data are used harmlessly. David Moore, chairman of 24/7 RealMedia Inc., an ad network owned by WPP PLC, says tracking gives Internet users better advertising.

“When an ad is targeted properly, it ceases to be an ad, it becomes important information,” he says.

Tracking isn’t new. But the technology is growing so powerful and ubiquitous that even some of America’s biggest sites say they were unaware, until informed by the Journal, that they were installing intrusive files on visitors’ computers.

The Journal found that Microsoft Corp.’s popular Web portal planted a tracking file packed with data: It had a prediction of a surfer’s age, ZIP Code and gender, plus a code containing estimates of income, marital status, presence of children and home ownership, according to the tracking company that created the file, Targus Information Corp.

Both Targus and Microsoft said they didn’t know how the file got onto the site, and added that the tool didn’t contain “personally identifiable” information.

Tracking is done by tiny files and programs known as “cookies,” “Flash cookies” and “beacons.” They are placed on a computer when a user visits a website. U.S. courts have ruled that it is legal to deploy the simplest type, cookies, just as someone using a telephone might allow a friend to listen in on a conversation. Courts haven’t ruled on the more complex trackers.

The most intrusive monitoring comes from what are known in the business as “third party” tracking files. They work like this: The first time a site is visited, it installs a tracking file, which assigns the computer a unique ID number. Later, when the user visits another site affiliated with the same tracking company, it can take note of where that user was before, and where he is now. This way, over time the company can build a robust profile.
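The mechanism above can be sketched in a few lines of code. This is a hypothetical illustration, not any real tracking company’s software: a shared cookie ID, set on the first visit to any affiliated site, lets one network stitch visits to unrelated sites into a single profile.

```python
# Hypothetical sketch of "third party" tracking: one network, one cookie
# ID per browser, one profile accumulating visits across affiliated sites.
import itertools

class TrackingNetwork:
    def __init__(self):
        self._ids = itertools.count(1)
        self.profiles = {}          # cookie ID -> list of sites visited

    def page_view(self, browser_cookies, site):
        # First visit to any affiliated site: assign a unique ID number.
        if "tracker_id" not in browser_cookies:
            browser_cookies["tracker_id"] = next(self._ids)
        uid = browser_cookies["tracker_id"]
        # Every later visit, on any affiliated site, feeds the same profile.
        self.profiles.setdefault(uid, []).append(site)
        return uid

network = TrackingNetwork()
cookies = {}                        # one user's cookie jar
for site in ["dictionary site", "news site", "health site"]:
    uid = network.page_view(cookies, site)
# Three visits to three unrelated sites now sit in a single profile,
# keyed by the ID number rather than by the user's name.
```

The profile is “anonymous” only in the narrow sense the industry uses: it is keyed to the ID number, not a name, yet it grows richer with every affiliated page viewed.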

One such ecosystem is Yahoo Inc.’s ad network, which collects fees by placing targeted advertisements on websites. Yahoo’s network knows many things about recent high-school graduate Cate Reid. One is that she is a 13- to 18-year-old female interested in weight loss. Ms. Reid was able to determine this when a reporter showed her a little-known feature on Yahoo’s website, the Ad Interest Manager, that displays some of the information Yahoo had collected about her.

Yahoo’s take on Ms. Reid, who was 17 years old at the time, hit the mark: She was, in fact, worried that she may be 15 pounds too heavy for her 5-foot, 6-inch frame. She says she often does online research about weight loss.

“Every time I go on the Internet,” she says, she sees weight-loss ads. “I’m self-conscious about my weight,” says Ms. Reid, whose father asked that her hometown not be given. “I try not to think about it…. Then [the ads] make me start thinking about it.”

Yahoo spokeswoman Amber Allman says Yahoo doesn’t knowingly target weight-loss ads at people under 18, though it does target adults.

“It’s likely this user received an untargeted ad,” Ms. Allman says. It’s also possible Ms. Reid saw ads targeted at her by other tracking companies.

Information about people’s moment-to-moment thoughts and actions, as revealed by their online activity, can change hands quickly. Within seconds of visiting eBay’s or Expedia’s sites, information detailing a Web surfer’s activity there is likely to be auctioned on the data exchange run by BlueKai, the Seattle startup.

Each day, BlueKai sells 50 million pieces of information like this about specific individuals’ browsing habits, for as little as a tenth of a cent apiece. The auctions can happen instantly, as a website is visited.

Spokespeople for eBay Inc. and Expedia Inc. both say the profiles BlueKai sells are anonymous and the people aren’t identified as visitors of their sites. BlueKai says its own website gives consumers an easy way to see what it monitors about them.

Tracking files get onto websites, and downloaded to a computer, in several ways. Often, companies simply pay sites to distribute their tracking files.

But tracking companies sometimes hide their files within free software offered to websites, or hide them within other tracking files or ads. When this happens, websites aren’t always aware that they’re installing the files on visitors’ computers.

Often staffed by “quants,” or math gurus with expertise in quantitative analysis, some tracking companies use probability algorithms to try to pair what they know about a person’s online behavior with data from offline sources about household income, geography and education, among other things.

The goal is to make sophisticated assumptions in real time—plans for a summer vacation, the likelihood of repaying a loan—and sell those conclusions.

Some financial companies are starting to use this formula to show entirely different pages to visitors, based on assumptions about their income and education levels.

Life-insurance site AccuQuote, a unit of Byron Udell & Associates Inc., last month tested a system showing visitors it determined to be suburban, college-educated baby-boomers a default policy of $2 million to $3 million, says AccuQuote executive Sean Cheyney. A rural, working-class senior citizen might see a default policy for $250,000, he says.

“We’re driving people down different lanes of the highway,” Mr. Cheyney says.

Consumer tracking is the foundation of an online advertising economy that racked up $23 billion in ad spending last year. Tracking activity is exploding. Researchers at AT&T Labs and Worcester Polytechnic Institute last fall found tracking technology on 80% of 1,000 popular sites, up from 40% of those sites in 2005.

The Journal found tracking files that collect sensitive health and financial data. On Encyclopaedia Britannica Inc.’s dictionary website, one tracking file from Healthline Networks Inc., an ad network, scans the page a user is viewing and targets ads related to what it sees there. So, for example, a person looking up depression-related words could see Healthline ads for depression treatments on that page—and on subsequent pages viewed on other sites.

Healthline says it doesn’t let advertisers track users around the Internet who have viewed sensitive topics such as HIV/AIDS, sexually transmitted diseases, eating disorders and impotence. The company does let advertisers track people with bipolar disorder, overactive bladder and anxiety, according to its marketing materials.
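The page-scanning behavior described in the last two paragraphs can be sketched as follows. The keyword lists and matching logic here are invented for illustration, not Healthline’s actual rules: the scanner matches condition keywords on the page being viewed, but keeps topics on a sensitive list off the follow-the-user targeting list.

```python
# Hypothetical sketch (invented keyword lists, not any ad network's code):
# scan page text for health keywords; sensitive topics may not be used to
# follow the user to other sites.
SENSITIVE = {"hiv", "eating disorder", "impotence"}
TARGETABLE = {"depression", "anxiety", "overactive bladder", "bipolar"}

def scan_page(text):
    """Return (topics matched on this page, topics allowed to follow the user)."""
    lowered = text.lower()
    # Naive substring matching for brevity; a real scanner would tokenize.
    matched = {t for t in SENSITIVE | TARGETABLE if t in lowered}
    follow = matched - SENSITIVE      # sensitive topics don't persist
    return matched, follow

on_page, follow = scan_page("depression: a mood disorder; see also HIV")
```

In this sketch both topics can trigger ads on the current page, but only the non-sensitive one travels with the user to subsequent sites.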

Targeted ads can get personal. Last year, Julia Preston, a 32-year-old education-software designer in Austin, Texas, researched uterine disorders online. Soon after, she started noticing fertility ads on sites she visited. She now knows she doesn’t have a disorder, but still gets the ads.

It’s “unnerving,” she says.

Tracking became possible in 1994 when the tiny text files called cookies were introduced in an early browser, Netscape Navigator. Their purpose was user convenience: remembering contents of Web shopping carts.

Back then, online advertising barely existed. The first banner ad appeared the same year. When online ads got rolling during the dot-com boom of the late 1990s, advertisers were buying ads based on proximity to content—shoe ads on fashion sites.

The dot-com bust triggered a power shift in online advertising, away from websites and toward advertisers. Advertisers began paying for ads only if someone clicked on them. Sites and ad networks began using cookies aggressively in hopes of showing ads to people most likely to click on them, thus getting paid.

Targeted ads command a premium. Last year, the average cost of a targeted ad was $4.12 per thousand viewers, compared with $1.98 per thousand viewers for an untargeted ad, according to an ad-industry-sponsored study in March.

The Journal examined three kinds of tracking technology—basic cookies as well as more powerful “Flash cookies” and bits of software code called “beacons.”

More than half of the sites examined by the Journal installed 23 or more “third party” cookies. The site that installed the most placed 159 third-party cookies.

Cookies are typically used by tracking companies to build lists of pages visited from a specific computer. A newer type of technology, beacons, can watch even more activity.

Beacons, also known as “Web bugs” and “pixels,” are small pieces of software that run on a Web page. They can track what a user is doing on the page, including what is being typed or where the mouse is moving.

The majority of sites examined by the Journal placed at least seven beacons from outside companies. The dictionary site had the most, 41, including several from companies that track health conditions and one that says it can target consumers by dozens of factors, including ZIP Code and race. The site’s president, Shravan Goli, attributed the presence of so many tracking tools to the fact that the site was working with a large number of ad networks, each of which places its own cookies and beacons. After the Journal contacted the company, it cut the number of networks it uses and beefed up its privacy policy to more fully disclose its practices.

The widespread use of Adobe Systems Inc.’s Flash software to play videos online offers another opportunity to track people. Flash cookies originally were meant to remember users’ preferences, such as volume settings for online videos.

But Flash cookies can also be used by data collectors to re-install regular cookies that a user has deleted. This can circumvent a user’s attempt to avoid being tracked online. Adobe condemns the practice.
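The re-spawning trick works because the tracker keeps a second copy of the ID outside the browser’s cookie store. A minimal sketch of that logic (illustrative only, not any vendor’s code):

```python
# Hypothetical sketch of cookie re-spawning via a Flash backup copy.
def page_load(cookies, flash_storage):
    """One tracker's logic on each page load: keep the ID in two places."""
    if "uid" not in cookies and "uid" in flash_storage:
        cookies["uid"] = flash_storage["uid"]   # re-spawn the deleted cookie
    cookies.setdefault("uid", "user-1234")      # first visit: assign an ID
    flash_storage["uid"] = cookies["uid"]       # back the ID up in Flash
    return cookies["uid"]

cookies, flash = {}, {}
first = page_load(cookies, flash)    # ID assigned and backed up
cookies.clear()                      # user deletes browser cookies
second = page_load(cookies, flash)   # the Flash copy restores the same ID
```

Deleting browser cookies clears only one of the two stores, so the next page load silently restores the old ID and the profile keeps growing.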

Most sites examined by the Journal installed no Flash cookies. Comcast’s site installed 55.

That finding surprised the company, which said it was unaware of them. Comcast Corp. subsequently determined that it had used a piece of free software from a company called Clearspring Technologies Inc. to display a slideshow of celebrity photos on its site. The Flash cookies were installed by that slideshow, according to Comcast.

Clearspring, based in McLean, Va., says the 55 Flash cookies were a mistake. The company says it no longer uses Flash cookies for tracking.

CEO Hooman Radfar says Clearspring provides software and services to websites at no charge. In exchange, Clearspring collects data on consumers. It plans eventually to sell the data it collects to advertisers, he says, so that site users can be shown “ads that don’t suck.” Comcast’s data won’t be used, Clearspring says.

Wittingly or not, people pay a price in reduced privacy for the information and services they receive online. The dictionary site, which carried the most tracking files, is a case study.

The site’s annual revenue, about $9 million in 2009 according to an SEC filing, means it is too small to support an extensive ad-sales team. So it relies on the national ad-placing networks, whose business model is built on tracking. The site’s executives say the trade-off is fair for their users, who get free access to its dictionary and thesaurus service.

“Whether it’s one or 10 cookies, it doesn’t have any impact on the customer experience, and we disclose we do it,” says spokesman Nicholas Graham. “So what’s the beef?”

The problem, say some industry veterans, is that so much consumer data is now up for sale, and there are no legal limits on how that data can be used.

Until recently, targeting consumers by health or financial status was considered off-limits by many large Internet ad companies. Now, some aim to take targeting to a new level by tapping online social networks.

Media6Degrees Inc., whose technology was found on three sites by the Journal, is pitching banks to use its data to size up consumers based on their social connections. The idea is that the creditworthy tend to hang out with the creditworthy, and deadbeats with deadbeats.

“There are applications of this technology that can be very powerful,” says Tom Phillips, CEO of Media6Degrees. “Who knows how far we’d take it?”

Julia Angwin, Wall Street Journal



To Tweet, Or Not to Tweet

How weary, stale, flat and unprofitable—at times—seem all the digital distractions of this world.

A catastrophic event unfolds. A seemingly healthy professional embarks on his daily commute, only to come to the frightening realization that his battered and beloved BlackBerry lies vulnerable and unused in a distant corner of his home. An unwholesome panic descends. No matter how far away from home he is, and no matter how needless the device may be in a practical sense, he is impelled to hightail it back to his house and reconnect with the world.

William Powers offers this beleaguered man (me), and everyone else who has faced a similar ordeal, a roadmap to contentment in “Hamlet’s BlackBerry,” a rewarding guide to finding a “quiet” and “spacious” place “where the mind can wander free.”

In this book, which grew out of his much-discussed 2006 National Journal essay, “Hamlet’s BlackBerry: Why Paper Is Eternal” (and how I wish that were true), the former Washington Post staff writer argues that the distractions of manic connectivity often lead to a lack of productivity and, if allowed to permeate too deeply, to an assault on the beauty and meaning of everyday life.

Obviously this is not a unique grievance, or a fresh one: As Mr. Powers acknowledges, concerns about the deleterious effects of a new world supplanting the old go back to Plato. But there has been an awful lot of grousing about digital distraction lately—Nicholas Carr’s “The Shallows: What the Internet Is Doing to Our Brains” came out just a few weeks ago—and it is easy to feel skeptical of worrywarts agonizing about Americans “wrestling” with too many choices and “coping” with the effects of too much Internet use.

There is simply too much good that comes of innovation for that sort of Luddite hand-wringing. The farmer a century ago who pulled himself off the straw mattress at 4 a.m. to till the earth so his family wouldn’t starve led a fairly straightforward, undistracted existence, but he was almost certainly miserable most of the time. And he probably regarded the arrival of radio as a sort of miracle. In discussions of this type I tend to rely on the wisdom of P.J. O’Rourke: “Civilization is an enormous improvement on the lack thereof.”

But even a jaded reader is likely to be won over by “Hamlet’s BlackBerry.” It convincingly argues that we’ve ceded too much of our existence to what its author calls Digital Maximalism. Less scold and more philosopher, Mr. Powers certainly bemoans the spread of technology in our lives, but he also offers a compelling discussion of our dependence on contraptions and of the ways in which we might free ourselves from them. I buy it. I need quiet time.

To accept “Hamlet’s BlackBerry” is to accept that we are super busy. “It’s staggering,” writes Mr. Powers, “how many balls we keep in the air each day and how few we drop. We’re so busy, sometimes it seems as though busyness itself is the point.” Though I don’t find all that ball-juggling as staggering as the author, and I don’t know anyone who acts as if chaos is the point of it all, it would be foolish not to concede that our lives have become far more complex than ever before.

What can be done? What should be done? Mr. Powers’s answer is, in essence: Just say no. Try to cultivate a quieter or at least more focused life. The most persuasive and entertaining parts of “Hamlet’s BlackBerry” are found in Mr. Powers’s efforts to practice what he preaches. (Most of us, it should be noted, do not have the option of moving from a dense Washington, D.C., suburb to an idyllic Cape Cod town to grapple with the demons of gadgetry addiction.) His skeptical wife and kids agree that if they’re allowed to use their laptops during the week, they will turn the computers off on the weekend. Mr. Powers discovers that friends and relatives quickly adapt to the family’s digital disconnect (they call it the “Internet Sabbath”). The family spends more time face-to-face instead of Facebooking.

Mr. Powers proposes that we take into account the “need to connect outward, as well as the opposite need for time and space apart.” It is a powerful desire, the balanced life. Most of us yearn for it. Neither technology nor connectivity is injurious unless we allow them to consume us. Mr. Powers argues that letting life turn into a blizzard of snapshots—that’s what all those screenviews amount to, after all—isn’t enough. We would be happier freeing ourselves for genuine, unfiltered experience and then reflecting on it, not tweeting about it. The busy person will pause here to nod in sympathy.

I’m not sure that many of us have found that spacious place where our minds can wander free of technological intrusions, of beeps and buttons and emails and tweets, but “Hamlet’s BlackBerry” makes the case that we can—or should—find it. Recently, while watching some hypnotically dreadful movie, I instinctively reached for my BlackBerry to fetch some worthless biographical information about a third-rate actress that would do no more than clog my brain still further.

Then I remembered something in Mr. Powers’s book—which takes its title from a scene in “Hamlet” when the prince refers to an Elizabethan technical advance: specially coated paper or parchment that could be wiped clean. A book that included heavy, blank, erasable pages made from such paper—an almanac, for example—was called a table. “Yea, from the table of my memory / I’ll wipe away all trivial fond records,” Hamlet says. Or, as Mr. Powers paraphrases: ” ‘Don’t worry,’ Hamlet’s nifty device whispered, ‘you don’t have to know everything. Just the few things that matter.’ ”

Mr. Harsanyi is a nationally syndicated columnist for the Denver Post.



An empire gives way

Blogs are growing a lot more slowly. But specialists still thrive

ONLINE archaeology can yield surprising results. When John Kelly of Morningside Analytics, a market-research firm, recently pored over data from websites in Indonesia he discovered a “vast field of dead blogs”. Numbering several thousand, they had not been updated since May 2009. Like hastily abandoned cities, they mark the arrival of the Indonesian version of Facebook, the online social network.

Such swathes of digital desert are still rare in the blogosphere. And they should certainly not be taken as evidence that it has started to die. But signs are multiplying that the rate of growth of blogs has slowed in many parts of the world. In some countries growth has even stalled.

Blogs are a confection of several things that do not necessarily have to go together: easy-to-use publishing tools, reverse-chronological ordering, a breezy writing style and the ability to comment. But for maintaining an online journal or sharing links and photos with friends, services such as Facebook and Twitter (which broadcasts short messages) are quicker and simpler.

Charting the impact of these newcomers is difficult. Solid data about the blogosphere are hard to come by. Such signs as there are, however, all point in the same direction. Earlier in the decade, rates of growth for both the numbers of blogs and those visiting them approached the vertical. Now traffic to two of the most popular blog-hosting sites, Blogger and WordPress, is stagnating, according to Nielsen, a media-research firm. By contrast, Facebook’s traffic grew by 66% last year and Twitter’s by 47%. Growth in advertisements is slowing, too. Blogads, which sells them, says media buyers’ inquiries increased nearly tenfold between 2004 and 2008, but have grown by only 17% since then. Search engines show declining interest, too.

People are not tiring of the chance to publish and communicate on the internet easily and at almost no cost. Experimentation has brought innovations, such as comment threads, and the ability to mix thoughts, pictures and links in a stream, with the most recent on top. Yet Facebook, Twitter and the like have broken the blogs’ monopoly. Even newer entrants such as Tumblr have offered sharp new competition, in particular for handling personal observations and quick exchanges. Facebook, despite its recent privacy missteps, offers better controls to keep the personal private. Twitter limits all communication to 140 characters and works nicely on a mobile phone.

A good example of the shift is Iran. Thanks to the early translation into Persian of a popular blogging tool (and crowds of journalists who lacked an outlet after their papers were shut down), Iran had tens of thousands of blogs by 2009. Many were shut down, and their authors jailed, after the crackdown that followed the election in June of that year. But another reason for the dwindling number of blogs written by dissidents is that the opposition Green Movement is now on Facebook, says Hamid Tehrani, the Brussels-based Iran editor for Global Voices, a blog news site. Mir Hossein Mousavi, one of the movement’s leaders, has 128,000 Facebook followers. Facebook, explains Mr Tehrani, is a more efficient way to reach people.

The future for blogs may be special-interest publishing. Mr Kelly’s research shows that blogs tend to be linked within languages and countries, with each language-group in turn containing smaller pockets of densely linked sites. These pockets form around public subjects: politics, law, economics and knowledge professions. Even narrower specialisations emerge around more personal topics that benefit from public advice. Germany has a cluster for children’s crafts; France, for food; Sweden, for painting your house.

Such specialist cybersilos may work for now, but are bound to evolve further. Deutsche Blogcharts says the number of links between German blogs dropped last year, with posts becoming longer. Where will that end? Perhaps in a single, hugely long blog posting about the death of blogs.



Hooked on Gadgets, and Paying a Mental Price

Brenda and Kord Campbell, with iPads, at breakfast

When one of the most important e-mail messages of his life landed in his in-box a few years ago, Kord Campbell overlooked it.

Not just for a day or two, but 12 days. He finally saw it while sifting through old messages: a big company wanted to buy his Internet start-up.

“I stood up from my desk and said, ‘Oh my God, oh my God, oh my God,’ ” Mr. Campbell said. “It’s kind of hard to miss an e-mail like that, but I did.”

The message had slipped by him amid an electronic flood: two computer screens alive with e-mail, instant messages, online chats, a Web browser and the computer code he was writing.

While he managed to salvage the $1.3 million deal after apologizing to his suitor, Mr. Campbell continues to struggle with the effects of the deluge of data. Even after he unplugs, he craves the stimulation he gets from his electronic gadgets. He forgets things like dinner plans, and he has trouble focusing on his family.

His wife, Brenda, complains, “It seems like he can no longer be fully in the moment.”

This is your brain on computers.

Scientists say juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.

These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive. In its absence, people feel bored.

The resulting distractions can have deadly consequences, as when cellphone-wielding drivers and train engineers cause wrecks. And for millions of people like Mr. Campbell, these urges can inflict nicks and cuts on creativity and deep thought, interrupting work and family life.

While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.

And scientists are discovering that even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers.

“The technology is rewiring our brains,” said Nora Volkow, director of the National Institute on Drug Abuse and one of the world’s leading brain scientists. She and other researchers compare the lure of digital stimulation less to that of drugs and alcohol than to food and sex, which are essential but counterproductive in excess.

Technology use can benefit the brain in some ways, researchers say. Imaging studies show the brains of Internet users become more efficient at finding information. And players of some video games develop better visual acuity.

More broadly, cellphones and computers have transformed life. They let people escape their cubicles and work anywhere. They shrink distances and handle countless mundane tasks, freeing up time for more exciting pursuits.

For better or worse, the consumption of media, as varied as e-mail and TV, has exploded. In 2008, people consumed three times as much information each day as they did in 1960. And they are constantly shifting their attention. Computer users at work change windows or check e-mail or other programs nearly 37 times an hour, new research shows.

The nonstop interactivity is one of the most significant shifts ever in the human environment, said Adam Gazzaley, a neuroscientist at the University of California, San Francisco.

“We are exposing our brains to an environment and asking them to do things we weren’t necessarily evolved to do,” he said. “We know already there are consequences.”

Mr. Campbell, 43, came of age with the personal computer, and he is a heavier user of technology than most. But researchers say the habits and struggles of Mr. Campbell and his family typify what many experience — and what many more will, if trends continue.

For him, the tensions feel increasingly acute, and the effects harder to shake.

The Campbells recently moved to California from Oklahoma to start a software venture. Mr. Campbell’s life revolves around computers.

He goes to sleep with a laptop or iPhone on his chest, and when he wakes, he goes online. He and Mrs. Campbell, 39, head to the tidy kitchen in their four-bedroom hillside rental in Orinda, an affluent suburb of San Francisco, where she makes breakfast and watches a TV news feed in the corner of the computer screen while he uses the rest of the monitor to check his e-mail.

Major spats have arisen because Mr. Campbell escapes into video games during tough emotional stretches. On family vacations, he has trouble putting down his devices. When he rides the subway to San Francisco, he knows he will be offline 221 seconds as the train goes through a tunnel.

Their 16-year-old son, Connor, tall and polite like his father, recently received his first C’s, which his family blames on distraction from his gadgets. Their 8-year-old daughter, Lily, like her mother, playfully tells her father that he favors technology over family.

“I would love for him to totally unplug, to be totally engaged,” says Mrs. Campbell, who adds that he becomes “crotchety until he gets his fix.” But she would not try to force a change.

“He loves it. Technology is part of the fabric of who he is,” she says. “If I hated technology, I’d be hating him, and a part of who my son is too.”

Always On

Mr. Campbell, whose given name is Thomas, had an early start with technology in Oklahoma City. When he was in third grade, his parents bought him Pong, a video game. Then came a string of game consoles and PCs, which he learned to program.

In high school, he balanced computers, basketball and a romance with Brenda, a cheerleader with a gorgeous singing voice. He studied too, with focus, uninterrupted by e-mail. “I did my homework because I needed to get it done,” he said. “I didn’t have anything else to do.”

He left college to help with a family business, then set up a lawn mowing service. At night he would read, play video games, hang out with Brenda and, as she remembers it, “talk a lot more.”

In 1996, he started a successful Internet provider. Then he built the start-up that he sold for $1.3 million in 2003 to LookSmart, a search engine.

Mr. Campbell loves the rush of modern life and keeping up with the latest information. “I want to be the first to hear when the aliens land,” he said, laughing. But other times, he fantasizes about living in pioneer days when things moved more slowly: “I can’t keep everything in my head.”

No wonder. As he came of age, so did a new era of data and communication.

At home, people consume 12 hours of media a day on average, when an hour spent with, say, the Internet and TV simultaneously counts as two hours. That compares with five hours in 1960, say researchers at the University of California, San Diego. Computer users visit an average of 40 Web sites a day, according to research by RescueTime, which offers time-management tools.

As computers have changed, so has the understanding of the human brain. Until 15 years ago, scientists thought the brain stopped developing after childhood. Now they understand that its neural networks continue to develop, influenced by things like learning skills.

So not long after Eyal Ophir arrived at Stanford in 2004, he wondered whether heavy multitasking might be leading to changes in a characteristic of the brain long thought immutable: that humans can process only a single stream of information at a time.

Going back a half-century, tests had shown that the brain could barely process two streams, and could not simultaneously make decisions about them. But Mr. Ophir, a student-turned-researcher, thought multitaskers might be rewiring themselves to handle the load.

His passion was personal. He had spent seven years in Israeli intelligence after being weeded out of the air force — partly, he felt, because he was not a good multitasker. Could his brain be retrained?

Mr. Ophir, like others around the country studying how technology bent the brain, was startled by what he discovered.

The Myth of Multitasking

The test subjects were divided into two groups: those classified as heavy multitaskers based on their answers to questions about how they used technology, and those who were not.

In a test created by Mr. Ophir and his colleagues, subjects at a computer were briefly shown an image of red rectangles. Then they saw a similar image and were asked whether any of the rectangles had moved. It was a simple task until the addition of a twist: blue rectangles were added, and the subjects were told to ignore them.

The multitaskers then did a significantly worse job than the non-multitaskers at recognizing whether red rectangles had changed position. In other words, they had trouble filtering out the blue ones — the irrelevant information.

So, too, the multitaskers took longer than non-multitaskers to switch among tasks, like differentiating vowels from consonants and then odd from even numbers. The multitaskers were shown to be less efficient at juggling problems.

Other tests at Stanford, an important center for research in this fast-growing field, showed multitaskers tended to search for new information rather than accept a reward for putting older, more valuable information to work.

Researchers say these findings point to an interesting dynamic: multitaskers seem more sensitive than non-multitaskers to incoming information.

The results also illustrate an age-old conflict in the brain, one that technology may be intensifying. A portion of the brain acts as a control tower, helping a person focus and set priorities. More primitive parts of the brain, like those that process sight and sound, demand that it pay attention to new information, bombarding the control tower when they are stimulated.

Researchers say there is an evolutionary rationale for the pressure this barrage puts on the brain. The lower-brain functions alert humans to danger, like a nearby lion, overriding goals like building a hut. In the modern world, the chime of incoming e-mail can override the goal of writing a business plan or playing catch with the children.

“Throughout evolutionary history, a big surprise would get everyone’s brain thinking,” said Clifford Nass, a communications professor at Stanford. “But we’ve got a large and growing group of people who think the slightest hint that something interesting might be going on is like catnip. They can’t ignore it.”

Mr. Nass says the Stanford studies are important because they show multitasking’s lingering effects: “The scary part for guys like Kord is, they can’t shut off their multitasking tendencies when they’re not multitasking.”

Melina Uncapher, a neurobiologist on the Stanford team, said she and other researchers were unsure whether the muddied multitaskers were simply prone to distraction and would have had trouble focusing in any era. But she added that the idea that information overload causes distraction was supported by more and more research.

A study at the University of California, Irvine, found that people interrupted by e-mail reported significantly increased stress compared with those left to focus. Stress hormones have been shown to reduce short-term memory, said Gary Small, a psychiatrist at the University of California, Los Angeles.

Preliminary research shows some people can more easily juggle multiple information streams. These “supertaskers” represent less than 3 percent of the population, according to scientists at the University of Utah.

Other research shows computer use has neurological advantages. In imaging studies, Dr. Small observed that Internet users showed greater brain activity than nonusers, suggesting they were growing their neural circuitry.

At the University of Rochester, researchers found that players of some fast-paced video games can track the movement of a third more objects on a screen than nonplayers. They say the games can improve reaction and the ability to pick out details amid clutter.

“In a sense, those games have a very strong both rehabilitative and educational power,” said the lead researcher, Daphne Bavelier, who is working with others in the field to channel these changes into real-world benefits like safer driving.

There is a vibrant debate among scientists over whether technology’s influence on behavior and the brain is good or bad, and how significant it is.

“The bottom line is, the brain is wired to adapt,” said Steven Yantis, a professor of brain sciences at Johns Hopkins University. “There’s no question that rewiring goes on all the time,” he added. But he said it was too early to say whether the changes caused by technology were materially different from others in the past.

Mr. Ophir is loath to call the cognitive changes bad or good, though the impact on analysis and creativity worries him.

He is not just worried about other people. Shortly after he came to Stanford, a professor thanked him for being the one student in class paying full attention and not using a computer or phone. But he recently began using an iPhone and noticed a change; he felt its pull, even when playing with his daughter.

“The media is changing me,” he said. “I hear this internal ping that says: check e-mail and voice mail.”

“I have to work to suppress it.”

Kord Campbell does not bother to suppress it, or no longer can.

Interrupted by a Corpse

It is a Wednesday in April, and in 10 minutes, Mr. Campbell has an online conference call that could determine the fate of his new venture, called Loggly. It makes software that helps companies understand the clicking and buying patterns of their online customers.

Mr. Campbell and his colleagues, each working from a home office, are frantically trying to set up a program that will let them share images with executives at their prospective partner.

But at the moment when Mr. Campbell most needs to focus on that urgent task, something else competes for his attention: “Man Found Dead Inside His Business.”

That is the tweet that appears on the left-most of Mr. Campbell’s array of monitors, which he has expanded to three screens, at times adding a laptop and an iPad.

On the left screen, Mr. Campbell follows the tweets of 1,100 people, along with instant messages and group chats. The middle monitor displays a dark field filled with computer code, along with Skype, a service that allows Mr. Campbell to talk to his colleagues, sometimes using video. The monitor on the right keeps e-mail, a calendar, a Web browser and a music player.

Even with the meeting fast approaching, Mr. Campbell cannot resist the tweet about the corpse. He clicks on the link in it, glances at the article and dismisses it. “It’s some article about something somewhere,” he says, annoyed by the ads for jeans popping up.

The program gets fixed, and the meeting turns out to be fruitful: the partners are ready to do business. A colleague says via instant message: “YES.”

Other times, Mr. Campbell’s information juggling has taken a more serious toll. A few weeks earlier, he once again overlooked an e-mail message from a prospective investor. Another time, Mr. Campbell signed the company up for the wrong type of business account, an error that cost $300 a month for six months before he got around to correcting it. He has burned hamburgers on the grill, forgotten to pick up the children and lingered in the bathroom playing video games on an iPhone.

Mr. Campbell can be unaware of his own habits. In a two-and-a-half hour stretch one recent morning, he switched rapidly between e-mail and several other programs, according to data from RescueTime, which monitored his computer use with his permission. But when asked later what he was doing in that period, Mr. Campbell said he had been on a long Skype call, and “may have pulled up an e-mail or two.”

The kind of disconnection Mr. Campbell experiences is not an entirely new problem, of course. As they did in earlier eras, people can become so lost in work, hobbies or TV that they fail to pay attention to family.

Mr. Campbell concedes that, even without technology, he may work or play obsessively, just as his father immersed himself in crossword puzzles. But he says this era is different because he can multitask anyplace, anytime.

“It’s a mixed blessing,” he said. “If you’re not careful, your marriage can fall apart or your kids can be ready to play and you’ll get distracted.”

The Toll on Children

Father and son sit in armchairs. Controllers in hand, they engage in a fierce video game battle, displayed on the nearby flat-panel TV, as Lily watches.

They are playing Super Smash Bros. Brawl, a cartoonish animated fight between characters that battle using anvils, explosives and other weapons.

“Kill him, Dad,” Lily screams. To no avail. Connor regularly beats his father, prompting expletives and, once, a thrown pillow. But there is bonding and mutual respect.

“He’s a lot more tactical,” says Connor. “But I’m really good at quick reflexes.”

Screens big and small are central to the Campbell family’s leisure time. Connor and his mother relax while watching TV shows like “Heroes.” Lily has an iPod Touch, a portable DVD player and her own laptop, which she uses to watch videos, listen to music and play games.

Lily, a second-grader, is allowed only an hour a day of unstructured time, which she often spends with her devices. The laptop can consume her.

“When she’s on it, you can holler her name all day and she won’t hear,” Mrs. Campbell said.

Researchers worry that constant digital stimulation like this creates attention problems for children with brains that are still developing, who already struggle to set priorities and resist impulses.

Connor’s troubles started late last year. He could not focus on homework. No wonder, perhaps. On his bedroom desk sit two monitors, one with his music collection, one with Facebook and Reddit, a social site with news links that he and his father love. His iPhone enabled relentless texting with his girlfriend.

When he studied, “a little voice would be saying, ‘Look up’ at the computer, and I’d look up,” Connor said. “Normally, I’d say I want to only read for a few minutes, but I’d search every corner of Reddit and then check Facebook.”

His Web browsing informs him. “He’s a fact hound,” Mr. Campbell brags. “Connor is, other than programming, extremely technical. He’s 100 percent Internet savvy.”

But the parents worry too. “Connor is obsessed,” his mother said. “Kord says we have to teach him balance.”

So in January, they held a family meeting. Study time now takes place in a group setting at the dinner table after everyone has finished eating. It feels, Mr. Campbell says, like togetherness.

No Vacations

For spring break, the family rented a cottage in Carmel, Calif. Mrs. Campbell hoped everyone would unplug.

But the day before they left, the iPad from Apple came out, and Mr. Campbell snapped one up. The next night, their first on vacation, “We didn’t go out to dinner,” Mrs. Campbell mourned. “We just sat there on our devices.”

She rallied the troops the next day to the aquarium. Her husband joined them for a bit but then begged off to do e-mail on his phone.

Later she found him playing video games.

The trip came as Mr. Campbell was trying to raise several million dollars for his new venture, a goal that he achieved. Mrs. Campbell said she understood that his pursuit required intensity, but she was less understanding of the accompanying surge in video game play.

His behavior brought about a discussion between them. Mrs. Campbell said he told her that he was capable of logging off, citing a trip to Hawaii several years ago that they called their second honeymoon.

“What trip are you thinking about?” she said she asked him. She recalled that he had spent two hours a day online in the hotel’s business center.

On Thursday, their fourth day in Carmel, Mr. Campbell spent the day at the beach with his family. They flew a kite and played whiffle ball.

Connor unplugged too. “It changes the mood of everything when everybody is present,” Mrs. Campbell said.

The next day, the family drove home, and Mr. Campbell disappeared into his office.

Technology use is growing for Mrs. Campbell as well. She divides her time between keeping the books of her husband’s company, homemaking and working at the school library. She checks e-mail 25 times a day, sends texts and uses Facebook.

Recently, she was baking peanut butter cookies for Teacher Appreciation Day when her phone chimed in the living room. She answered a text, then became lost in Facebook, forgot about the cookies and burned them. She started a new batch, but heard the phone again, got lost in messaging, and burned those too. Out of ingredients and shamed, she bought cookies at the store.

She feels less focused and has trouble completing projects. Some days, she promises herself she will ignore her device. “It’s like a diet — you have good intentions in the morning and then you’re like, ‘There went that,’ ” she said.

Mr. Nass at Stanford thinks the ultimate risk of heavy technology use is that it diminishes empathy by limiting how much people engage with one another, even in the same room.

“The way we become more human is by paying attention to each other,” he said. “It shows how much you care.”

That empathy, Mr. Nass said, is essential to the human condition. “We are at an inflection point,” he said. “A significant fraction of people’s experiences are now fragmented.”

Matt Richtel, New York Times



Immersed and Confused

Jaw-dropping graphics, engrossing action and . . . vapid storytelling.

Tom Bissell has purchased four Xbox 360 videogame consoles in the past five years. And he has given away three. In an attempt to kick his videogame habit, Mr. Bissell would bestow each recently acquired console on a friend or family member, only to run out and buy another one a short time later. No doubt Microsoft is gratified. We should just be glad that Mr. Bissell was able to drag himself away from playing “Grand Theft Auto” and “Fallout” long enough to write “Extra Lives,” his exploration of, as the subtitle has it, “Why Video Games Matter.”

Unusually for the videogame book genre, Mr. Bissell brings to his subject not only a handy way with a game controller but also a deft literary style and a journalist’s eye. He writes for Harper’s magazine and The New Yorker and is the author of the short-story collection “God Lives in St. Petersburg” (2005). “Extra Lives” is mostly a travelogue recounting Mr. Bissell’s journey, over the course of several years, through a series of immense, immersive videogames, such as “Far Cry 2.” It’s much less tedious than it sounds.

Mr. Bissell is so descriptively alert that his accounts of pixelated derring-do may well interest even those who are immune to the charm of videogames. Here, for instance, is his description of a scene in “Fallout 3,” a post-apocalyptic, role-playing shoot-’em-up game that mostly takes place in Washington, D.C.: “I was running up the stairs of what used to be the Dupont Circle metro station and, as I turned to bash in the brainpan of a radioactive ghoul, noticed the playful, lifelike way in which the high-noon sunlight streaked along the grain of my sledgehammer’s wooden handle.”

He’s funny, too. In a section arguing that the artistic merits of videogames can’t be judged by the worst of the breed, he writes: “Every form of art, popular or otherwise, has its ghettos”—for instance, “the crack houses along Michael Bay Avenue.”

But what makes “Extra Lives” so winning is Mr. Bissell’s sense of absurdity. He recounts a discussion with some fellow customers at a videogame store about the artistic merits of the game “Left 4 Dead.” The little colloquy continued until he realized: “I was contrasting my aesthetic sensitivity to that of some teenagers about a game that concerns itself with shooting as many zombies as possible. It is moments like this that can make it so dispiritingly difficult to care about videogames.”

Running through “Extra Lives” is a thread of seriousness. Mr. Bissell wonders why, despite their technical sophistication, videogames are so bad at telling stories. It’s a more complex question than you might think.

The best narrative art forms are necessarily authoritarian. In books, film or theater, the creator tells his story with near total control. In a certain way, the audience might as well not even exist. Videogames are participatory. And the fact of participation creates all sorts of problems for narrative authority.

Yet many videogame producers do aspire to tell meaningful stories. The game “BioShock,” for instance, attempts to explore the philosophical tensions within Ayn Rand’s Objectivism and to meditate on the costs of individual freedom—but with plenty of genetic mutants to splatter. The script for the game “Mass Effect”—that is, the on-screen characters’ dialogue, not the computer code—is 300,000 words. But to little avail. The stories just aren’t much as stories. Videogames seem “designed by geniuses and written by Ed Wood Jr.,” Mr. Bissell laments.

The most interesting person Mr. Bissell crosses paths with is Jonathan Blow, a videogame designer and a sort of philosopher of the medium. Mr. Blow has spent a good deal of his life thinking about storytelling and the “dynamical meaning” of simply getting through a game. He believes that the central problem with storytelling in videogames is that the actual mechanics of playing a game—moving your character to jump over a barrel, or eat a power pellet, or punch an enemy—are divorced from the stories that videogame makers are trying to tell.

Like all games, videogames are constructed around rules. You can shoot this. You can’t shoot that. This hamburger restores your health. That sword gives you extra power. And so on. “Games have rules, rules have meaning, and game-play is the process by which those rules are tested and explored,” Mr. Bissell explains. And as Mr. Blow notes, if those rules are fake, unimportant or arbitrary, audiences sense it. And, let’s face it, assigning power to a hamburger is a little arbitrary. No matter how impressive a game is, in its rules-ridden immersiveness it will not be able to tell a coherent, meaningful story. The very nature of the medium, Mr. Blow believes, “prevents the stories from being good.”

As if to prove the rule, Mr. Blow designed a game called “Braid.” It concerns a young scientist who discovers how to go back in time and decides to use this power to revisit the period when he lost his great love. “Braid” is a meditation on time travel, choices and consequences. A crucial aspect of playing the game is the player’s ability, at any moment, to rewind the clock to undo his mistakes. It is “dynamical meaning” in harmony with narrative ambition. And because of it, “Braid” occupies a lonely place in the pantheon of videogames as something that approaches art.

When Mr. Blow departs the scene in “Extra Lives,” the book loses some of its sharpness. And toward the end reader interest may flag even more as Mr. Bissell’s videogame addiction merges unsettlingly with his cocaine addiction. Drug stories, like dreams, are interesting only to the person who has them.

Even so, “Extra Lives” is the most fun you’ll ever have reading about videogames. It may prove even more entertaining than playing them.

Mr. Last is a senior writer at The Weekly Standard.



A Web Smaller Than a Divide

AT first glance, there’s a clear need for expanding the Web beyond the Latin alphabet, including in the Arabic-speaking world. According to the Madar Research Group, about 56 million Arabs, or 17 percent of the Arab world, use the Internet, and those numbers are expected to grow 50 percent over the next three years.

Many think that an Arabic-alphabet Web will bring millions online, helping to bridge the socio-economic divides that pervade the region.

But such hopes are overblown. Although there are still problems — encoding glitches and the lack of a standard Arabic keyboard — virtually any Arabic speaker who uses the Web has already adjusted to these challenges in his or her own way. And it’s no big deal: educated Arabs are exposed, in various degrees, to English and French in school.

The very idea of an “Arabic Web” is misleading. True, before Icann announced that Arabic characters could be used throughout domain names, U.R.L.’s had to be written at least in part in Latin script. But once one passes the Latin domain gate, the rest is all done in Arabic characters anyway.

Nowadays almost every computer can be made to write Arabic, or any other script, and there is plenty of Arabic software. Most late-model electronic devices are equipped with Arabic. I text with friends using Arabic on my iPhone. Many computer keyboards are now even made with Arabic letters printed on the keys.

And where there’s no readily available solution, Arabic Internet users have found a way to adjust. Many use the Latin script to transliterate messages in Arabic when there’s no conversion program or font set available. Phonetic spelling is common. For sounds that have no written equivalent in Latin script, they’ve gotten creative: for example, the number 3 is commonly used for the “ayn” sound and 7 stands in for the “ha,” because their shapes closely resemble the corresponding Arabic letters.
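The ad hoc convention described above lends itself to a simple character map. The sketch below is illustrative only: the mapping covers just a handful of letters, and the sample words are assumptions chosen to exercise it, not part of any standard transliteration scheme.

```python
# Toy sketch of the "Arabizi" convention: Arabic letters with no Latin
# equivalent are replaced by digits whose shapes resemble them; other
# letters get rough phonetic Latin substitutes. Partial mapping only.
ARABIZI = {
    "\u0639": "3",   # ayn  -> 3 (shape resembles the letter)
    "\u062D": "7",   # ha   -> 7
    "\u0627": "a",   # alif
    "\u0644": "l",   # lam
    "\u0628": "b",   # ba
    "\u0631": "r",   # ra
}

def to_arabizi(word: str) -> str:
    """Transliterate character by character, leaving unmapped ones as-is."""
    return "".join(ARABIZI.get(ch, ch) for ch in word)

# The triliteral root ayn-ra-ba (the root of "Arab") comes out as "3rb".
print(to_arabizi("\u0639\u0631\u0628"))
```

In practice users improvise such mappings per dialect and keyboard; there is no single agreed table, which is part of the article's point.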

So what will happen? In the short term, of course, some additional users will move to the Web, especially as they take advantage of the new range of domain names. Over time, though, this will peter out, because, as in most of the world, the digital divide still tracks closely with the material and political divide. The haves are the ones using computers, and many of them are also the ones long accustomed to working with Latin script. The have-nots are unlikely to have the luxury of jumping online. Changing the alphabet used to form domain names won’t exactly attract millions of poor Arabs to the Internet.

We should all celebrate the diversity that comes with an Internet no longer tied to a single alphabet. But we should be realistic, too. The Web may be a revolutionary technology, but an Arabic Web is not about to spur an Internet revolution.

Sinan Antoon, an assistant professor of Arabic literature at New York University, is the author of the novel “I`jaam: An Iraqi Rhapsody.”



Search Engine of the Song Dynasty

BAIDU.COM, the popular search engine often called the Chinese Google, got its name from a poem written during the Song Dynasty (960-1279). The poem is about a man searching for a woman at a busy festival, about the search for clarity amid chaos. Together, the Chinese characters bǎi and dù mean “hundreds of ways,” and come out of the last lines of the poem: “Restlessly I searched for her thousands, hundreds of ways./ Suddenly I turned, and there she was in the receding light.”

Baidu, rendered in Chinese, is rich with linguistic, aesthetic and historical meaning. But written phonetically in Latin letters (as I must do here because of the constraints of the newspaper medium and so that more American readers can understand), it is barely anchored to the two original characters; along the way, it has lost its precision and its poetry.

As Web addresses increasingly transition to non-Latin characters as a result of the changing rules for domain names, that series of Latin letters Chinese people usually see at the top of the screen when they search for something on Baidu may finally turn into intelligible words: “a hundred ways.”

Of course, this expansion of languages for domain names could lead to confusion: users seeking to visit Web sites with names in a script they don’t read could have difficulty putting in the addresses, and Web browsers may need to be reconfigured to support non-Latin characters. The previous system, with domain names composed of numbers, punctuation marks and Latin letters without accents, promoted standardization, wrangling into consistency and simplicity one small part of the Internet. But something else, something important, has been lost.
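For readers curious about the mechanics, non-Latin domain labels still travel over the Internet's ASCII-only name infrastructure: they are converted to an ASCII-compatible "Punycode" form under the IDNA standard. A minimal sketch, using the IDNA codec built into Python's standard library (the specific label below is just an example):

```python
# Encode a Chinese domain label into its ASCII-compatible IDNA form
# and decode it back. Python's built-in "idna" codec handles the
# nameprep and Punycode steps per label.
label = "百度"                      # the two characters discussed above
ascii_form = label.encode("idna")   # ASCII bytes, prefixed with "xn--"

print(ascii_form.decode("ascii"))   # the wire form, e.g. "xn--..."
print(ascii_form.decode("idna"))    # round-trips back to the original
```

So browsers and resolvers never see raw Chinese or Arabic characters; the "expansion of languages" is a presentation-layer change on top of the same ASCII plumbing.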

Part of the beauty of the Chinese language comes from a kind of divisibility not possible in a Latin-based language. Chinese is composed of approximately 20,000 single-syllable characters, 10,000 of which are in common use. These characters each mean something on their own; they are also combined with other characters to form hundreds of thousands of multisyllabic words. Nǐhǎo, for example, Chinese for “Hello,” is composed of nǐ — “you,” and hǎo — “good.” Isn’t “You good” — both as a statement and a question — a marvelous and strangely precise breakdown of what we’re really saying when we greet someone?

The Romanization of Chinese into a phonetic system called Pinyin, using the Latin alphabet and diacritics (to indicate the four distinguishing tones in Mandarin), was developed by the Chinese government in the 1950s. Pinyin makes the language easier to learn and pronounce, and it has the added benefit of making Chinese characters easy to input into a computer. Yet Pinyin, invented for ease and standards, only represents sound. In Chinese, there are multiple characters with the exact same sound. The sound “bǎi,” for example, means 100, but it can also mean cypress, or arrange. And “Baidu,” without diacritics, can mean “a failed attempt to poison” or “making a religion of gambling.” In the case of Baidu, the word, in Latin letters, has slipped away from its original context and meaning, and been turned into a brand.

Language is such a basic part of our lives, it seems ordinary and transparent. But language is strange and magical, too: it dredges up history and memory; it simultaneously bestows and destabilizes meaning. Each of the thousands of languages spoken around the world has its own system and rules, its own subversions, its own quixotic beauty. Whenever you try to standardize those languages, whether on the Internet, in schools or in literature, you lose something. What we gain in consistency costs us in precision and beauty.

When Chinese speakers Baidu (like Google, it too is a verb), we look for information on the Internet using a branded search engine. But when we see the characters for bǎi dù, we might, for one moment, engage with the poetry of our language, remember that what we are really trying to do is find what we were seeking in the receding light. Those sets of meanings, layered like a palimpsest, might appear suddenly, where we least expect them, in the address bar at the top of our browsers. And in some small way, those words, in our own languages, might help us see with clarity, and help us to make sense of the world.

Ruiyan Xu is the author of the forthcoming novel “The Lost and Forgotten Languages of Shanghai.”



Goddess English of Uttar Pradesh

Mumbai, India

A FORTNIGHT ago, in a poor village in Uttar Pradesh, in northern India, work began on a temple dedicated to Goddess English. Standing on a wooden desk was the idol of English — a bronze figure in robes, wearing a wide-brimmed hat and holding aloft a pen. About 1,000 villagers had gathered for the groundbreaking, most of them Dalits, the untouchables at the bottom of India’s caste system. A social activist promoting the study of English, dressed in a Western suit despite the hot sun and speaking as if he were imparting religious wisdom, said, “Learn A, B, C, D.” The temple is a gesture of defiance from the Dalits to the nation’s elite as well as a message to the Dalit young — English can save you.

A few days later, the Internet Corporation for Assigned Names and Numbers, a body that oversees domain names on the Web, announced a different kind of liberation: it has taken the first steps to free the online world from the Latin script, which English and most Web addresses are written in. In some parts of the world, Web addresses can already be written in non-Latin scripts, though until this change, all needed the Latin alphabet for country codes, like “.sa” for Saudi Arabia. But now that nation, along with Egypt and the United Arab Emirates, has been granted a country code in the Arabic alphabet, and Russia has gotten a Cyrillic one. Soon, others will follow.

Icann calls it a “historic” development, and that is true, but only because a great cliché has finally been defeated. The Internet as a unifier of humanity was always literary nonsense, on par with “truth will triumph.”

The universality of the Latin script online was an accident of its American invention, not a global intention. The world does not want to be unified. What is the value of belonging if you belong to all? It is a fragmented world by choice, and so it was always a fragmented Web. Now we can stop pretending — but that doesn’t mean this is a change worth celebrating.

Many have argued that the introduction of domain names and country codes in non-Latin scripts will help the Web finally reach the world’s poor. But it is really hard to believe that what separates an Egyptian or a Tamil peasant from the Internet is the requirement to type in a few foreign characters. There are far greater obstacles. It is even harder to believe that all the people who are demanding their freedom from the Latin script are doing it for humanitarian reasons. A big part of the issue here is nationalism, and the East’s imagination of the West as an adversary. This is just the latest episode in an ancient campaign.

A decade ago I met Mahatma Gandhi’s great-grandson, Tushar Gandhi, a jolly, endearing, meat-eating man. He was distraught that the Indians who were creating Web sites were choosing the dot-com domain over the more patriotic dot-in. He was trying to convince Indians to flaunt their nationality. He told me: “As long as we live in this world, there will be boundaries. And we need to be proud of what we call home.”

It is the same sentiment that is now inspiring small groups of Indians to demand top-level domain names (the suffix that follows the dot in a Web address) in their own native scripts, like Tamil. The Tamil language is spoken in the south Indian state of Tamil Nadu, where I spent the first 20 years of my life, and where I have seen fierce protests against the colonizing power of Hindi. The International Forum for Information Technology in Tamil, a tech advocacy and networking group, has petitioned Icann for top-level domain names in the Tamil script. But if it cares about increasing the opportunities available to poor Tamils, it should be promoting English, not Tamil.

There’s no denying that at the heart of India’s new prosperity is a foreign language, and that the opportunistic acceptance of English has improved the lives of millions of Indians. There are huge benefits in exploiting a stronger cultural force instead of defying it. Imagine what would have happened if the 12th-century Europeans who first encountered Hindu-Arabic numerals (0, 1, 2, 3) had rejected them as a foreign oddity and persisted with the cumbersome Roman numerals (IV, V). The extraordinary advances in mathematics made by Europeans would probably have been impossible.

But then the world is what it is. There is an expression popularized by the spread of the Internet: the global village. Though intended as a celebration of the modern world’s inclusiveness, it is really an accurate condemnation of that world. After all, a village is a petty place — filled with old grudges, comical self-importance and imagined fears.

Manu Joseph, the deputy editor of the Indian newsweekly OPEN, is the author of the forthcoming novel “Serious Men.”



Five Ways to Keep Online Criminals at Bay

THE Web is a fount of information, a busy marketplace, a thriving social scene — and a den of criminal activity.

Criminals have found abundant opportunities to undertake stealthy attacks on ordinary Web users that can be hard to stop, experts say. Hackers are lacing Web sites — often legitimate ones — with so-called malware, which can silently infiltrate visiting PCs to steal sensitive personal information and then turn the computers into “zombies” that can be used to spew spam and more malware onto the Internet.

At one time, virus attacks were obvious to users, said Alan Paller, director of research at the SANS Institute, a training organization for computer security professionals. He explained that now, the attacks were more silent. “Now it’s much, much easier infecting trusted Web sites,” he said, “and getting your zombies that way.”

And there are myriad lures aimed at conning people into installing nefarious programs, buying fake antivirus software or turning over personal information that can be used in identity fraud.

“The Web opened up a lot more opportunities for attacking” computer users and making money, said Maxim Weinstein, executive director of StopBadware, a nonprofit consumer advocacy group that receives funding from Google, PayPal, Mozilla and others.

Google says its automated scans of the Internet recently turned up malware on roughly 300,000 Web sites, double the number it recorded two years ago. Each site can contain many infected pages. Meanwhile, malware doubled last year, to 240 million unique attacks, according to Symantec, a maker of security software. And that does not count the scourge of fake antivirus software and other scams.

So it is more important than ever to protect yourself. Here are some basic tips for thwarting these attacks.

Protect the Browser

The most direct line of attack is the browser, said Vincent Weafer, vice president of Symantec Security Response. Online criminals can use programming flaws in browsers to get malware onto PCs in “drive-by” downloads without users ever noticing.

Internet Explorer and Firefox are the most targeted browsers because they are the most popular. If you use current versions, and download security updates as they become available, you can surf safely. But there can still be exposure between when a vulnerability is discovered and an update becomes available, so you will need up-to-date security software as well to try to block any attacks that may emerge, especially if you have a Windows PC.

It can help to use a more obscure browser like Chrome from Google, which also happens to be the newest browser on the market and, as such, includes some security advances that make attacks more difficult.

Get Adobe Updates

Most consumers are familiar with Adobe Reader, for PDF files, and Adobe’s Flash Player. In the last year, a virtual epidemic of attacks has exploited their flaws; almost half of all attacks now come hidden in PDF files, Mr. Weafer said. “No matter what browser you’re using,” he said, “you’re using the PDF Reader, you’re using the Adobe Flash Player.”

Part of the problem is that many computers run old, vulnerable versions. But as of April, it has become easier to get automatic updates from Adobe, if you follow certain steps.

To update Reader, open the application and then select “Help” and “Check for Updates” from the menu bar. Since April, Windows users have been able to choose to get future updates automatically without additional prompts by clicking “Edit” and “Preferences,” then choosing “Updater” from the list and selecting “Automatically install updates.” Mac users can arrange updates using a similar procedure, though Apple requires that they enter their password each time an update is installed.

Adobe said it did not make silent automatic updates available previously because many users, especially at companies, were averse to them. To get the latest version of Flash Player, visit Adobe’s Web site.

Any software can be vulnerable. Windows PC users can identify vulnerable or out-of-date software using Secunia PSI, a free tool that scans machines and alerts users to potential problems.

Beware Malicious Ads

An increasingly popular way to get attacks onto Web sites people trust is to slip them into advertisements, usually by duping small-time ad networks. Malvertising, as this practice is known, can exploit software vulnerabilities or dispatch deceptive pop-up messages.

A particularly popular swindle involves an alert that a virus was found on the computer, followed by urgent messages to buy software to remove it. Of course, there is no virus and the security software, known as scareware, is fake. It is a ploy to get credit card numbers and $40 or $50. Scareware accounts for half of all malware delivered in ads, up fivefold from a year ago, Google said.

Closing the pop-up or killing the browser will usually end the episode. But if you encounter this scam, check your PC with trusted security software or Microsoft’s free Malicious Software Removal Tool. If you have picked up something nasty, you are in good company; Microsoft cleaned scareware from 7.8 million PCs in the second half of 2009, up 47 percent from the 5.3 million in the first half, the company said.

Another tool that can defend against malvertising, among other Web threats, is K9 Web Protection, free from Blue Coat Systems. Though it is marketed as parental-control software, K9 can be configured to look only for security threats like malware, spyware and phishing attacks — and to bark each time it stops one.

Poisoned Search Results

Online criminals are also trying to manipulate search engines into placing malicious sites toward the top of results pages for popular keywords. According to a recent Google study, 60 percent of malicious sites that embed hot keywords try to distribute scareware to the computers of visitors.

Google and search engines like Microsoft’s Bing are working to detect malicious sites and remove them from their indexes. Free tools like McAfee’s SiteAdvisor and the Firefox add-on Web of Trust can also help — warning about potentially dangerous links.

Antisocial Media

Attackers also use e-mail, instant messaging, blog comments and social networks like Facebook and Twitter to induce people to visit their sites.

It’s best to accept “friend” requests only from people you know, and to guard your passwords. Phishers are trying to filch login information so they can infiltrate accounts, impersonate you to try to scam others out of money and gather personal information about you and your friends.

Also beware the Koobface worm, variants of which have been taking aim at users of Facebook and other social sites for more than a year. It typically promises a video of some kind and asks you to download a fake multimedia-player codec to view the video. If you do so, your PC is infected with malware that turns it into a zombie (making it part of a botnet, or group of computers, that can spew spam and malware across the Internet).

But most important, you need to keep your wits about you. Criminals are using increasingly sophisticated ploys, and your best defense on the Web may be a healthy level of suspicion.

Riva Richmond, New York Times



To catch a thief

Spotting video piracy

A new way to scan digital videos for copyright infringement

ONLINE video piracy is a big deal. Google’s YouTube, for example, is being sued for more than $1 billion by Viacom, a media company. But it is extremely hard to tell if a video clip is copyrighted, particularly since 24 hours of video are uploaded to YouTube every minute. Now a new industry standard promises to be able to identify pirated material with phenomenal accuracy in a matter of seconds.

The technique, developed by NEC, a Japanese technology company, and later tweaked by Mitsubishi Electric, has been adopted by the International Organisation for Standardisation (ISO) for MPEG-7, the latest standard for describing audio-visual content. The two existing methods do not do a very good job. One is digital “watermarking,” in which a bit of computer code is embedded in a file to identify it. This works only if content owners take the trouble to affix the watermark—and then it only spots duplicates, not other forms of piracy such as recording a movie at a cinema. A second approach is to extract a numeric code or “digital fingerprint” from the content file itself by comparing, say, the colours or texture of regions in a frame. But this may not work if the file is altered, such as by cropping or overlaying text.

NEC’s technology extracts a digital signature that works even if the video is altered. It does this by comparing the brightness in 380 predefined “regions of interest” in a frame of the video. This could be done for all or only some of the frames in a film. Each comparison is assigned a value: -1, 0 or +1. These values are encapsulated in a digital signature of 76 bytes per frame.

The beauty of the technique is that it encompasses both granularity and generality. The 380 regions of interest are numerous, so an image can be identified even if it is doctored. At the same time, the array of three values simplifies the complexity in the image, so even if a video is of poor quality or a different hue, the information about its relative luminance is retained. Moreover, the compact signature is computationally easy to extract and use.
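The 76-byte figure falls out of the arithmetic: 380 ternary values can be packed five to a byte (3^5 = 243, which fits in the 256 values a byte can hold), and 380 / 5 = 76. The sketch below illustrates that quantize-and-pack step. The actual MPEG-7 descriptor’s region layout and thresholds are not described in the article, so the way regions are paired and the dead-zone threshold here are assumptions:

```python
import random

N_REGIONS = 380   # predefined regions of interest per frame
THRESHOLD = 8     # hypothetical dead-zone for "roughly equal" brightness

def ternary_signature(region_luma, paired_luma):
    """Quantize per-region brightness comparisons to -1/0/+1.

    The inputs are hypothetical mean-luminance values (0-255) for each
    region and the region it is compared against; the real standard
    defines which regions are compared, which is only simulated here.
    """
    sig = []
    for a, b in zip(region_luma, paired_luma):
        diff = a - b
        if diff > THRESHOLD:
            sig.append(1)
        elif diff < -THRESHOLD:
            sig.append(-1)
        else:
            sig.append(0)
    return sig

def pack(sig):
    """Pack ternary values into bytes, 5 trits per byte (3**5 = 243 <= 256)."""
    out = bytearray()
    for i in range(0, len(sig), 5):
        val = 0
        for t in sig[i:i + 5]:
            val = val * 3 + (t + 1)   # map {-1, 0, +1} -> {0, 1, 2}
        out.append(val)
    return bytes(out)

random.seed(0)
luma_a = [random.randrange(256) for _ in range(N_REGIONS)]
luma_b = [random.randrange(256) for _ in range(N_REGIONS)]
packed = pack(ternary_signature(luma_a, luma_b))
print(len(packed))   # 380 trits at 5 per byte = 76 bytes, as the article says
```

Because the signature is a fixed-length string of coarse values, comparing two frames is just a cheap element-by-element distance, which is what makes scanning 1,000 hours of video per second plausible on ordinary hardware.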

NEC says the system could be used to automate what is currently a manual procedure of checking that video uploaded to the internet is not pirated. The technology is said to have an average detection rate of 96% and a low rate of false alarms: a mere five per million, according to tests by the ISO. It can detect if a video is pirated from clips as short as two seconds. And an ordinary PC can be used with the system to scour through 1,000 hours of video in a second. There are other potential uses too, because it provides a way to identify video content. A person could, say, use the signature in a clip to search for a full version of a movie. Piracy will still flourish—but the pirates may have to get smarter.



Tell-All Generation Learns to Keep Things Offline

“I am much more self-censoring,” said Sam Jackson, a student.

Min Liu, a 21-year-old liberal arts student at the New School in New York City, got a Facebook account at 17 and chronicled her college life in detail, from rooftop drinks with friends to dancing at a downtown club. Recently, though, she has had second thoughts.

Concerned about her career prospects, she asked a friend to take down a photograph of her drinking and wearing a tight dress. When the woman overseeing her internship asked to join her Facebook circle, Ms. Liu agreed, but limited access to her Facebook page. “I want people to take me seriously,” she said.

The conventional wisdom suggests that everyone under 30 is comfortable revealing every facet of their lives online, from their favorite pizza to most frequent sexual partners. But many members of the tell-all generation are rethinking what it means to live out loud.

While participation in social networks is still strong, a survey released last month by the University of California, Berkeley, found that more than half the young adults questioned had become more concerned about privacy than they were five years ago — mirroring the number of people their parents’ age or older with that worry.

They are more diligent than older adults, however, in trying to protect themselves. In a new study to be released this month, the Pew Internet Project has found that people in their 20s exert more control over their digital reputations than older adults, more vigorously deleting unwanted posts and limiting information about themselves. “Social networking requires vigilance, not only in what you post, but what your friends post about you,” said Mary Madden, a senior research specialist who oversaw the study by Pew, which examines online behavior. “Now you are responsible for everything.”

The erosion of privacy has become a pressing issue among active users of social networks. Last week, Facebook scrambled to fix a security breach that allowed users to see their friends’ supposedly private information, including personal chats.

Sam Jackson, a junior at Yale who started a blog when he was 15 and who has been an intern at Google, said he had learned not to trust any social network to keep his information private. “If I go back and look, there are things four years ago I would not say today,” he said. “I am much more self-censoring. I’ll try to be honest and forthright, but I am conscious now who I am talking to.”

He has learned to live out loud mostly by trial and error and has come up with his own theory: concentric layers of sharing.

His Facebook account, which he has had since 2005, is strictly personal. “I don’t want people to know what my movie rentals are,” he said. “If I am sharing something, I want to know what’s being shared with others.”

Mistrust of the intentions of social sites appears to be pervasive. In its telephone survey of 1,000 people, the Berkeley Center for Law and Technology at the University of California found that 88 percent of the 18- to 24-year-olds it surveyed last July said there should be a law that requires Web sites to delete stored information. And 62 percent said they wanted a law that gave people the right to know everything a Web site knows about them.

That mistrust is translating into action. In the Pew study, to be released shortly, researchers interviewed 2,253 adults late last summer and found that people ages 18 to 29 were more apt to monitor privacy settings than older adults are, and they more often delete comments or remove their names from photos so they cannot be identified. Younger teenagers were not included in these studies, and they may not have the same privacy concerns. But anecdotal evidence suggests that many of them have not had enough experience to understand the downside to oversharing.

Elliot Schrage, who oversees Facebook’s global communications and public policy strategy, said it was a good thing that young people are thinking about what they put online. “We are not forcing anyone to use it,” he said of Facebook. But at the same time, companies like Facebook have a financial incentive to get friends to share as much as possible. That’s because the more personal the information that Facebook collects, the more valuable the site is to advertisers, who can mine it to serve up more targeted ads.

Two weeks ago, Senator Charles E. Schumer, Democrat of New York, petitioned the Federal Trade Commission to review the privacy policies of social networks to make sure consumers are not being deliberately confused or misled. The action was sparked by a recent change to Facebook’s settings that forced its more than 400 million users to choose to “opt out” of sharing private information with third-party Web sites instead of “opt in,” a move which confounded many of them.

Mr. Schrage of Facebook said, “We try diligently to get people to understand the changes.”

But in many cases, young adults are teaching one another about privacy.

Ms. Liu is not just policing her own behavior, but her sister’s, too. Ms. Liu sent a text message to her 17-year-old sibling warning her to take down a photo of a guy sitting on her sister’s lap. Why? Her sister wants to audition for “Glee” and Ms. Liu didn’t want the show’s producers to see it. Besides, what if her sister became a celebrity? “It conjures up an image where if you became famous anyone could pull up a picture and send it to TMZ,” Ms. Liu said.

Andrew Klemperer, a 20-year-old at Georgetown University, said it was a classmate who warned him about the implications of the recent Facebook change — through a status update on (where else?) Facebook. Now he is more diligent in monitoring privacy settings and apt to warn others, too.

Helen Nissenbaum, a professor of culture, media and communication at New York University and author of “Privacy in Context,” a book about information sharing in the digital age, said teenagers were naturally protective of their privacy as they navigate the path to adulthood, and the frequency with which companies change privacy rules has taught them to be wary.

That was the experience of Kanupriya Tewari, a 19-year-old pre-med student at Tufts University. Recently she sought to limit the information a friend could see on Facebook but found the process cumbersome. “I spent like an hour trying to figure out how to limit my profile, and I couldn’t,” she said. She gave up because she had chemistry homework to do, but vowed to figure it out after finals.

“I don’t think they would look out for me,” she said. “I have to look out for me.”

Laura M. Holson, New York Times



Emperors and beggars

The rise of content farms

Can technology help make online content pay?

Our ceramic-mugs correspondent writes

THIS week the Wall Street Journal, the pride of News Corporation’s stable of newspapers, launched a 12-page daily section of local news in New York, in a direct challenge to the New York Times. The premise behind the launch is that expensive, thorough reporting will pay for itself by attracting readers and advertisers. Indeed, Rupert Murdoch, News Corp’s boss, recently proclaimed, “Content is not just king, it is the emperor of all things electronic.” However, a new brand of media firms dubbed “content farms” takes the opposite view: that online, at any rate, revenue from advertising or subscriptions will never cover the costs of conventional journalism, so journalism will have to change.

Newspaper articles are expensive to produce but usually cost nothing to read online and do not command high advertising rates, since there is almost unlimited inventory. Mr Murdoch’s answer is to charge for online content: another of his newspapers, the Times of London, will start to do so this summer (the Journal already does). Content farms like Demand Media and Associated Content, in contrast, aim to produce content at a price so low that even meagre advertising revenue can support it.

Demand Media’s approach is a “combination of science and art”, in the words of Steven Kydd, who is in charge of the firm’s content production. Clever software works out what internet users are interested in and how much advertising revenue a given topic can pull in. The results are sent to an army of 7,000 freelancers, each of whom must have a college degree, writing experience and a speciality. They artfully pen articles or produce video clips to fit headlines such as “How do I paint ceramic mugs?” and “Why am I so tired in winter?”
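The “science” half of that combination amounts to an expected-revenue calculation over candidate headlines. The sketch below is purely illustrative: Demand Media’s actual models are proprietary, and every number, field and function name here is invented.

```python
def topic_score(monthly_searches, revenue_per_1k_views, competition):
    """Toy expected-value score for a candidate headline.

    All inputs are hypothetical: estimated search volume, the ad revenue
    a thousand page views might earn, and how many rival pages already
    target the phrase (higher competition = less traffic captured).
    """
    expected_views = monthly_searches / (1 + competition)
    return expected_views * revenue_per_1k_views / 1000

# Invented figures for the two headlines quoted in the article.
candidates = {
    "How do I paint ceramic mugs?": (40_000, 12.0, 3),
    "Why am I so tired in winter?": (90_000, 6.0, 8),
}

# Commission the headline with the higher expected ad revenue.
best = max(candidates, key=lambda k: topic_score(*candidates[k]))
print(best)  # How do I paint ceramic mugs?
```

The same score, minus a margin, would also cap what the system can afford to offer a freelancer for the piece, which is how a $5 article becomes economically rational.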

Although an article may pay as little as $5, writers make on average $20-25 an hour, says Mr Kydd. The articles are copy-edited and checked for plagiarism. For the most part, they are published on the firm’s 72 websites, including eHow and answerbag. But videos are also uploaded onto YouTube, where the firm is by far the biggest contributor. Some articles end up on the websites of more conventional media, including USAToday, which runs travel tips produced by Demand Media. In March, Demand Media churned out 150,000 pieces of content in this way. The company is expected to go public later this year, if it is not acquired by a big web portal, such as Yahoo!, first.

AOL, a web portal which was recently spun off from Time Warner, a media giant, does not like to be compared to such an operation. Tim Armstrong, its boss, intends to turn it into “the largest producer of quality online content”. The firm already runs more than 80 websites covering topics from gadgets and music to fashion and local news.

In AOL’s journalistic universe there are three groups of contributors. The first two are salaried journalists and freelancers with expertise in a certain domain, who currently number more than 3,000. Then there are amateurs who contribute to individual projects, for instance when AOL recently compiled profiles of all 2,000 bands at the SXSW music festival in Texas. (Contributors were paid $50 for each profile.) All this is powered by a system like Demand Media’s that uses data and algorithms to predict what sorts of stories will appeal most to readers, what advertisers are willing to pay for them and what freelancers should therefore be offered. So far, however, the numbers are small: in the week of April 25th, 61 writers published 155 articles in this way on 33 AOL sites.

Predictably, many commentators are appalled. Demand Media has been called “demonic”. But, argues Dan Gillmor, a professor of journalism at Arizona State University, “the firm is at least interested in what people want to know—which is nothing to sneer at”. And unlike many other services that take advantage of “user generated content”, he says, Demand Media actually pays its contributors.

The problem with content farms, Mr Gillmor and others say, is that they swamp the internet with mediocre content. To earn a decent living, freelancers have to work at a breakneck pace, which has an obvious impact on quality. Moreover, content that is designed to appear high up in the results produced by search engines could lose its audience if the search engines change their rules.

In AOL’s case, the question is whether the infrastructure for the three tiers of contributors will work financially, not just journalistically and technically. Clay Shirky, a new-media expert at New York University, suggests that content produced cheaply by freelancers could serve to fund more ambitious projects. If AOL can make that work, the pundits will cheer.



When Apple Calls the Cops

The First Amendment doesn’t only belong to journalists.

Jason Chen is a newsman. Or is he?

That’s just one question raised by the raid on Mr. Chen’s home by the San Mateo County, Calif., Sheriff’s Office, which carted off some computers and other electronic equipment. The search warrant appears to be the result of an investigation into whether Mr. Chen broke the law when he bought an iPhone prototype that an Apple engineer left in a bar where he was celebrating his birthday.

Because Mr. Chen reported on the new iPhone for his website, Gizmodo, the seizure of his computers has renewed a heated debate about whether bloggers are real journalists. Traditionally, many in the mainstream press have disparaged bloggers, though in this case at least some press organizations—including the parent company that runs Mr. Chen’s blog—argue that he is a full-time journalist whose home is his newsroom. The irony is how few connect Mr. Chen’s First Amendment freedoms to those for corporations that were recently upheld in a landmark Supreme Court ruling.

The case was Citizens United v. Federal Election Commission. Citizens United is a nonprofit corporation that produced a documentary on Hillary Clinton. It sought to distribute the film via video-on-demand back when she was running in the Democratic presidential primaries. When a lower court agreed with the FEC that the McCain-Feingold restrictions applied to the Hillary film, the group appealed and won at the Supreme Court this past January.

Not long after, in his State of the Union Address, Barack Obama disparaged members of the Supreme Court sitting before him by accusing them of opening “the floodgates for special interests.” Many focused on the president’s rudeness. More troubling was his message: What President Obama was really saying is that the Wall Street Journal and ABC News and your hometown daily should be free to print or broadcast what they want during an election. But not organizations like Citizens United.

The High Court wisely rejected that logic. Writing for the majority, Justice Anthony Kennedy said that “The First Amendment protects speech and speaker, and the ideas that flow from each.” In other words, the government can’t restrict First Amendment rights based on the identity of the speaker.

Steve Simpson, a lawyer for the Institute for Justice, a libertarian public interest law firm, puts it this way: “Once the government gets in the business of deciding who can speak based on identity, it will then necessarily be involved in deciding what viewpoints get heard.”

The classic view of the First Amendment holds all Americans are entitled to its rights by virtue of citizenship. These days, alas, too many journalists and politicians assume that a free press should mean special privileges for a designated class. The further we travel in this direction, the more the government will end up deciding which Americans qualify and which do not.

It’s not just Mr. Chen. Two weeks ago in New Jersey, a state appeals court ruled that a hockey mom who blogs is not a journalist for the purposes of protecting her sources. The woman was being sued for derogatory comments she posted on a message board about a company that supplies software for the porn industry. At the federal level, meanwhile, a “shield law” protecting journalists from revealing their sources remains bogged down in Congress as legislators are forced to define who is legitimately a journalist and who is not.

Mr. Simpson points to another irony: Legislation now being pushed by Sen. Chuck Schumer (D., N.Y.) to scale back the Supreme Court’s January decision would limit political speech for government contractors, for companies that owe TARP money, and for those that pass some threshold for foreign ownership.

It’s an interesting proposition. I wonder: How many among the press who favor these chains being wrapped around corporations have thought through the implications for news organizations? The implications will be especially interesting if Congress ever does get around to approving that bailout for failing newspapers that the president says he’s at least open to.

In Mr. Chen’s case, all this may be moot if his troubles really have to do with buying property that is considered stolen under California law. In its reporting on the case, Gizmodo has already admitted paying $5,000 for the iPhone prototype. If the criminal case comes down to stolen property, whether or not he is deemed a bona fide journalist may not make much difference.

The larger point is that the best guarantee of good, independent journalism has always been the willingness of reporters and editors and publishers to run with the truth, protect their sources, and accept the consequences—even jail, if it comes to that. In short, we’ll all be better served by a First Amendment that remains a fundamental right for all rather than a class privilege for some.

William McGurn, Wall Street Journal


Full article and photo:

Mrs. Clinton, Tear Down this Cyberwall

The State Department is sitting on funds to free the flow of information in closed societies.

When a government department refuses to spend money that Congress has allocated, there’s usually a telling backstory. This is doubly so when the funds are for a purpose as uncontroversial as making the Internet freer.

So why has the State Department refused to spend $45 million in appropriations since 2008 to “expand access and information in closed societies”? The technology to circumvent national restrictions is being provided by volunteers who believe that with funding they can bring Web access to many more people, from Iran to China.

A bipartisan group in Congress intended to pay for tests aimed at expanding the use of software that brings Internet access to “large numbers of users living in closed societies that have acutely hostile Internet environments.” The most successful of these services is provided by a group called the Global Internet Freedom Consortium, whose programs include Freegate and Ultrasurf.

When Iranian demonstrators last year organized themselves through Twitter posts and brought news of the crackdown to the outside world, they got past the censors chiefly by using Freegate to get access to outside sites.

The team behind these circumvention programs understands how subversive their efforts can be. As Shiyu Zhou, deputy director of the Global Internet Freedom Consortium, told Congress last year, “The Internet censorship firewalls have become 21st-century versions of Berlin Walls that isolate and dispirit the citizens of closed-society dictatorships.”

Repressive governments rightly regard the Internet as an existential threat, giving people powerful ways to communicate and organize. These governments also use the Web as a tool of repression, monitoring emails and other traffic. Recall that Google left China in part because of hacking of human-rights activists’ Gmail accounts.

To counter government monitors and censors, these programs give online users encrypted connections to secure proxy servers around the world. A group of volunteers constantly switches the Internet Protocol addresses of the servers—up to 10,000 times an hour. The group has been active since 2000, and repressive governments haven’t figured out how to catch up. More than one million Iranians used the system last June to post videos and photos showing the government crackdown.
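The cat-and-mouse game described here can be pictured as a client that periodically picks up a fresh list of proxy endpoints and rotates among them, so that blocking any one address accomplishes little. This is a schematic illustration only: the real Freegate and Ultrasurf protocols are proprietary and encrypted, and every name and address below is invented (the IPs are from the reserved documentation ranges).

```python
import itertools

class RotatingProxyClient:
    """Toy model of a client that cycles through short-lived proxy addresses."""

    def __init__(self, seed_addresses):
        self.addresses = list(seed_addresses)
        self._cycle = itertools.cycle(self.addresses)

    def refresh(self, new_addresses):
        # In the real systems, volunteers rotate server addresses up to
        # 10,000 times an hour; clients must keep fetching fresh lists
        # faster than censors can blacklist the old ones.
        self.addresses = list(new_addresses)
        self._cycle = itertools.cycle(self.addresses)

    def next_endpoint(self):
        # Each request goes to the next server in the current pool.
        return next(self._cycle)

client = RotatingProxyClient(["203.0.113.%d:443" % n for n in range(4)])
first = client.next_endpoint()
client.refresh(["198.51.100.%d:443" % n for n in range(4)])
print(client.next_endpoint())  # an address from the refreshed pool
```

The hard part in practice is not the rotation itself but distributing the fresh address lists to users inside the firewall without the censors obtaining them too, which is where the volunteer network matters.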

Mr. Zhou tells me his group would use any additional money to add equipment and to hire full-time technical staff to support the volunteers. For $50 million, he estimates the service could accommodate 5% of Chinese Internet users and 10% in other closed societies—triple the current capacity.

So why won’t the State Department fund this group to expand its reach, or at least test how scalable the solution could be? There are a couple of explanations.

The first is that the Global Internet Freedom Consortium was founded by Chinese-American engineers who practice Falun Gong, the spiritual movement suppressed by Beijing. Perhaps not the favorites of U.S. diplomats, but what other group has volunteers engaged enough to keep such a service going? As with the Jewish refuseniks who battled the Soviet Union, sometimes it takes a persecuted minority to stand up to a totalitarian regime.

The second explanation is a split among technologists—between those who support circumvention programs built on proprietary systems and others who put their faith in open-source code. A study last year by the Berkman Center at Harvard gave more points to open-source efforts, citing “a well-established contentious debate among software developers about whether secrecy about implementation details is a robust strategy for security.” But whatever the theoretical objections, the proprietary systems work.

Another likely factor is realpolitik. Despite the tough speech Hillary Clinton gave in January supporting Internet freedom, it’s easy to imagine bureaucrats arguing that the U.S. shouldn’t undermine the censorship efforts of Tehran and Beijing. An earlier generation of bureaucrats tried to edit, as overly aggressive, Ronald Reagan’s 1987 speech in Berlin urging Mikhail Gorbachev: “Tear down this wall.”

It’s true that circumvention doesn’t solve every problem. Internet freedom researcher and advocate Rebecca MacKinnon has made the point that “circumvention is never going to be the silver bullet” in the sense that it can only give people access to the open Web. It can’t help with domestic censorship.

During the Cold War, the West expended huge effort to get books, tapes, fax machines, radio reports and other information, as well as the means to convey it, into closed societies. Circumvention is the digital-age equivalent.

If the State Department refuses to support a free Web, perhaps there’s a private solution. An anonymous poster, “chinese.zhang,” suggested on a Google message board earlier this year that the company should fund the Global Internet Freedom Consortium as part of its defense against Chinese censorship. “I think Google can easily offer more servers to help to break down the Great Firewall,” he wrote.

L. Gordon Crovitz, Wall Street Journal



Brave New World of Digital Intimacy

On Sept. 5, 2006, Mark Zuckerberg changed the way that Facebook worked, and in the process he inspired a revolt.

Zuckerberg, a doe-eyed 24-year-old C.E.O., founded Facebook in his dorm room at Harvard two years earlier, and the site quickly amassed nine million users. By 2006, students were posting heaps of personal details onto their Facebook pages, including lists of their favorite TV shows, whether they were dating (and whom), what music they had in rotation and the various ad hoc “groups” they had joined (like “Sex and the City” Lovers). All day long, they’d post “status” notes explaining their moods — “hating Monday,” “skipping class b/c i’m hung over.” After each party, they’d stagger home to the dorm and upload pictures of the soused revelry, and spend the morning after commenting on how wasted everybody looked. Facebook became the de facto public commons — the way students found out what everyone around them was like and what he or she was doing.

But Zuckerberg knew Facebook had one major problem: It required a lot of active surfing on the part of its users. Sure, every day your Facebook friends would update their profiles with some new tidbits; it might even be something particularly juicy, like changing their relationship status to “single” when they got dumped. But unless you visited each friend’s page every day, it might be days or weeks before you noticed the news, or you might miss it entirely. Browsing Facebook was like constantly poking your head into someone’s room to see how she was doing. It took work and forethought. In a sense, this gave Facebook an inherent, built-in level of privacy, simply because if you had 200 friends on the site — a fairly typical number — there weren’t enough hours in the day to keep tabs on every friend all the time.

“It was very primitive,” Zuckerberg told me when I asked him about it last month. And so he decided to modernize. He developed something he called News Feed, a built-in service that would actively broadcast changes in a user’s page to every one of his or her friends. Students would no longer need to spend their time zipping around to examine each friend’s page, checking to see if there was any new information. Instead, they would just log into Facebook, and News Feed would appear: a single page that — like a social gazette from the 18th century — delivered a long list of up-to-the-minute gossip about their friends, around the clock, all in one place. “A stream of everything that’s going on in their lives,” as Zuckerberg put it.
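The shift Zuckerberg describes, from readers polling each friend’s page to updates being pushed into one aggregated stream, is a classic pull-to-push inversion. A minimal sketch of the push model follows; all names are invented and this resembles nothing of Facebook’s real infrastructure:

```python
from collections import defaultdict, deque

class ToyNewsFeed:
    """Push model: each profile change is fanned out to every friend's feed."""

    def __init__(self):
        self.friends = defaultdict(set)
        self.feeds = defaultdict(deque)   # newest updates first

    def befriend(self, a, b):
        self.friends[a].add(b)
        self.friends[b].add(a)

    def post_update(self, user, update):
        # Instead of friends polling `user`'s page for changes, the
        # change is broadcast into each friend's single aggregated feed.
        for friend in self.friends[user]:
            self.feeds[friend].appendleft((user, update))

feed = ToyNewsFeed()
feed.befriend("tim", "lisa")
feed.befriend("tim", "persaud")
feed.post_update("tim", "is no longer in a relationship")
print(feed.feeds["lisa"][0])  # ('tim', 'is no longer in a relationship')
```

The fan-out is exactly what removed Facebook’s accidental privacy: with polling, an update cost each friend a visit to notice; with push, one write reaches two hundred feeds at once.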

When students woke up that September morning and saw News Feed, the first reaction, generally, was one of panic. Just about every little thing you changed on your page was now instantly blasted out to hundreds of friends, including potentially mortifying bits of news — Tim and Lisa broke up; Persaud is no longer friends with Matthew — and drunken photos someone snapped, then uploaded and tagged with names. Facebook had lost its vestigial bit of privacy. For students, it was now like being at a giant, open party filled with everyone you know, able to eavesdrop on what everyone else was saying, all the time.

“Everyone was freaking out,” Ben Parr, then a junior at Northwestern University, told me recently. What particularly enraged Parr was that there wasn’t any way to opt out of News Feed, to “go private” and have all your information kept quiet. He created a Facebook group demanding Zuckerberg either scrap News Feed or provide privacy options. “Facebook users really think Facebook is becoming the Big Brother of the Internet, recording every single move,” a California student told The Star-Ledger of Newark. Another chimed in, “Frankly, I don’t need to know or care that Billy broke up with Sally, and Ted has become friends with Steve.” By lunchtime of the first day, 10,000 people had joined Parr’s group, and by the next day it had 284,000.

Zuckerberg, surprised by the outcry, quickly made two decisions. The first was to add a privacy feature to News Feed, letting users decide what kind of information went out. But the second decision was to leave News Feed otherwise intact. He suspected that once people tried it and got over their shock, they’d like it.

He was right. Within days, the tide reversed. Students began e-mailing Zuckerberg to say that via News Feed they’d learned things they would never have otherwise discovered through random surfing around Facebook. The bits of trivia that News Feed delivered gave them more things to talk about — Why do you hate Kiefer Sutherland? — when they met friends face to face in class or at a party. Trends spread more quickly. When one student joined a group — proclaiming her love of Coldplay or a desire to volunteer for Greenpeace — all her friends instantly knew, and many would sign up themselves. Users’ worries about their privacy seemed to vanish within days, boiled away by their excitement at being so much more connected to their friends. (Very few people stopped using Facebook, and most people kept on publishing most of their information through News Feed.) Pundits predicted that News Feed would kill Facebook, but the opposite happened. It catalyzed a massive boom in the site’s growth. A few weeks after the News Feed imbroglio, Zuckerberg opened the site to the general public (previously, only students could join), and it grew quickly; today, it has 100 million users.

When I spoke to him, Zuckerberg argued that News Feed is central to Facebook’s success. “Facebook has always tried to push the envelope,” he said. “And at times that means stretching people and getting them to be comfortable with things they aren’t yet comfortable with. A lot of this is just social norms catching up with what technology is capable of.”

In essence, Facebook users didn’t think they wanted constant, up-to-the-minute updates on what other people are doing. Yet when they experienced this sort of omnipresent knowledge, they found it intriguing and addictive. Why?

Social scientists have a name for this sort of incessant online contact. They call it “ambient awareness.” It is, they say, very much like being physically near someone and picking up on his mood through the little things he does — body language, sighs, stray comments — out of the corner of your eye. Facebook is no longer alone in offering this sort of interaction online. In the last year, there has been a boom in tools for “microblogging”: posting frequent tiny updates on what you’re doing. The phenomenon is quite different from what we normally think of as blogging, because a blog post is usually a written piece, sometimes quite long: a statement of opinion, a story, an analysis. But these new updates are something different. They’re far shorter, far more frequent and less carefully considered. One of the most popular new tools is Twitter, a Web site and messaging service that allows its two-million-plus users to broadcast to their friends haiku-length updates — limited to 140 characters, as brief as a mobile-phone text message — on what they’re doing. There are other services for reporting where you’re traveling (Dopplr) or for quickly tossing online a stream of the pictures, videos or Web sites you’re looking at (Tumblr). And there are even tools that give your location. When the new iPhone, with built-in tracking, was introduced in July, one million people began using Loopt, a piece of software that automatically tells all your friends exactly where you are.

For many people — particularly anyone over the age of 30 — the idea of describing your blow-by-blow activities in such detail is absurd. Why would you subject your friends to your daily minutiae? And conversely, how much of their trivia can you absorb? The growth of ambient intimacy can seem like modern narcissism taken to a new, supermetabolic extreme — the ultimate expression of a generation of celebrity-addled youths who believe their every utterance is fascinating and ought to be shared with the world. Twitter, in particular, has been the subject of nearly relentless scorn since it went online. “Who really cares what I am doing, every hour of the day?” wondered Alex Beam, a Boston Globe columnist, in an essay about Twitter last month. “Even I don’t care.”

Indeed, many of the people I interviewed, who are among the most avid users of these “awareness” tools, admit that at first they couldn’t figure out why anybody would want to do this. Ben Haley, a 39-year-old documentation specialist for a software firm who lives in Seattle, told me that when he first heard about Twitter last year from an early-adopter friend who used it, his first reaction was that it seemed silly. But a few of his friends decided to give it a try, and they urged him to sign up, too.

Each day, Haley logged on to his account, and his friends’ updates would appear as a long page of one- or two-line notes. He would check and recheck the account several times a day, or even several times an hour. The updates were indeed pretty banal. One friend would post about starting to feel sick; one posted random thoughts like “I really hate it when people clip their nails on the bus”; another Twittered whenever she made a sandwich — and she made a sandwich every day. Each so-called tweet was so brief as to be virtually meaningless.

But as the days went by, something changed. Haley discovered that he was beginning to sense the rhythms of his friends’ lives in a way he never had before. When one friend got sick with a virulent fever, he could tell by her Twitter updates when she was getting worse and the instant she finally turned the corner. He could see when friends were heading into hellish days at work or when they’d scored a big success. Even the daily catalog of sandwiches became oddly mesmerizing, a sort of metronomic click that he grew accustomed to seeing pop up in the middle of each day.

This is the paradox of ambient awareness. Each little update — each individual bit of social information — is insignificant on its own, even supremely mundane. But taken together, over time, the little snippets coalesce into a surprisingly sophisticated portrait of your friends’ and family members’ lives, like thousands of dots making a pointillist painting. This was never before possible, because in the real world, no friend would bother to call you up and detail the sandwiches she was eating. The ambient information becomes like “a type of E.S.P.,” as Haley described it to me, an invisible dimension floating over everyday life.

“It’s like I can distantly read everyone’s mind,” Haley went on to say. “I love that. I feel like I’m getting to something raw about my friends. It’s like I’ve got this heads-up display for them.” It can also lead to more real-life contact, because when one member of Haley’s group decides to go out to a bar or see a band and Twitters about his plans, the others see it, and some decide to drop by — ad hoc, self-organizing socializing. And when they do socialize face to face, it feels oddly as if they’ve never actually been apart. They don’t need to ask, “So, what have you been up to?” because they already know. Instead, they’ll begin discussing something that one of the friends Twittered that afternoon, as if picking up a conversation in the middle.

Facebook and Twitter may have pushed things into overdrive, but the idea of using communication tools as a form of “co-presence” has been around for a while. The Japanese sociologist Mizuko Ito first noticed it with mobile phones: lovers who were working in different cities would send text messages back and forth all night — tiny updates like “enjoying a glass of wine now” or “watching TV while lying on the couch.” They were doing it partly because talking for hours on mobile phones isn’t very comfortable (or affordable). But they also discovered that the little Ping-Ponging messages felt even more intimate than a phone call.

“It’s an aggregate phenomenon,” Marc Davis, a chief scientist at Yahoo and former professor of information science at the University of California at Berkeley, told me. “No message is the single-most-important message. It’s sort of like when you’re sitting with someone and you look over and they smile at you. You’re sitting here reading the paper, and you’re doing your side-by-side thing, and you just sort of let people know you’re aware of them.” Yet it is also why it can be extremely hard to understand the phenomenon until you’ve experienced it. Merely looking at a stranger’s Twitter or Facebook feed isn’t interesting, because it seems like blather. Follow it for a day, though, and it begins to feel like a short story; follow it for a month, and it’s a novel.

You could also regard the growing popularity of online awareness as a reaction to social isolation, the modern American disconnectedness that Robert Putnam explored in his book “Bowling Alone.” The mobile workforce requires people to travel more frequently for work, leaving friends and family behind, and members of the growing army of the self-employed often spend their days in solitude. Ambient intimacy becomes a way to “feel less alone,” as more than one Facebook and Twitter user told me.

When I decided to try out Twitter last year, at first I didn’t have anyone to follow. None of my friends were yet using the service. But while doing some Googling one day I stumbled upon the blog of Shannon Seery, a 32-year-old recruiting consultant in Florida, and I noticed that she Twittered. Her Twitter updates were pretty charming — she would often post links to camera-phone pictures of her two children or videos of herself cooking Mexican food, or broadcast her agonized cries when a flight was delayed on a business trip. So on a whim I started “following” her — as easy on Twitter as a click of the mouse — and never took her off my account. (A Twitter account can be “private,” so that only invited friends can read one’s tweets, or it can be public, so anyone can; Seery’s was public.) When I checked in last month, I noticed that she had built up a huge number of online connections: She was now following 677 people on Twitter and another 442 on Facebook. How in God’s name, I wondered, could she follow so many people? Who precisely are they? I called Seery to find out.

“I have a rule,” she told me. “I either have to know who you are, or I have to know of you.” That means she monitors the lives of friends, family, anyone she works with, and she’ll also follow interesting people she discovers via her friends’ online lives. Like many people who live online, she has wound up following a few strangers — though after a few months they no longer feel like strangers, despite the fact that she has never physically met them.

I asked Seery how she finds the time to follow so many people online. The math seemed daunting. After all, if her 1,000 online contacts each post just a couple of notes each a day, that’s several thousand little social pings to sift through daily. What would it be like to get thousands of e-mail messages a day? But Seery made a point I heard from many others: awareness tools aren’t as cognitively demanding as an e-mail message. E-mail is something you have to stop to open and assess. It’s personal; someone is asking for 100 percent of your attention. In contrast, ambient updates are all visible on one single page in a big row, and they’re not really directed at you. This makes them skimmable, like newspaper headlines; maybe you’ll read them all, maybe you’ll skip some. Seery estimated that she needs to spend only a small part of each hour actively reading her Twitter stream.

Yet she has, she said, become far more gregarious online. “What’s really funny is that before this ‘social media’ stuff, I always said that I’m not the type of person who had a ton of friends,” she told me. “It’s so hard to make plans and have an active social life, having the type of job I have where I travel all the time and have two small kids. But it’s easy to tweet all the time, to post pictures of what I’m doing, to keep social relations up.” She paused for a second, before continuing: “Things like Twitter have actually given me a much bigger social circle. I know more about more people than ever before.”

I realized that this is becoming true of me, too. After following Seery’s Twitter stream for a year, I’m more knowledgeable about the details of her life than the lives of my two sisters in Canada, whom I talk to only once every month or so. When I called Seery, I knew that she had been struggling with a three-day migraine headache; I began the conversation by asking her how she was feeling.

Online awareness inevitably leads to a curious question: What sort of relationships are these? What does it mean to have hundreds of “friends” on Facebook? What kind of friends are they, anyway?

In 1998, the anthropologist Robin Dunbar argued that each human has a hard-wired upper limit on the number of people he or she can personally know at one time. Dunbar noticed that humans and apes both develop social bonds by engaging in some sort of grooming; apes do it by picking at and smoothing one another’s fur, and humans do it with conversation. He theorized that ape and human brains could manage only a finite number of grooming relationships: unless we spend enough time doing social grooming — chitchatting, trading gossip or, for apes, picking lice — we won’t really feel that we “know” someone well enough to call him a friend. Dunbar noticed that ape groups tended to top out at 55 members. Since human brains were proportionally bigger, Dunbar figured that our maximum number of social connections would be similarly larger: about 150 on average. Sure enough, psychological studies have confirmed that human groupings naturally tail off at around 150 people: the “Dunbar number,” as it is known. Are people who use Facebook and Twitter increasing their Dunbar number, because they can so easily keep track of so many more people?

As I interviewed some of the most aggressively social people online — people who follow hundreds or even thousands of others — it became clear that the picture was a little more complex than this question would suggest. Many maintained that their circle of true intimates, their very close friends and family, had not become bigger. Constant online contact had made those ties immeasurably richer, but it hadn’t actually increased the number of them; deep relationships are still predicated on face time, and there are only so many hours in the day for that.

But where their sociality had truly exploded was in their “weak ties” — loose acquaintances, people they knew less well. It might be someone they met at a conference, or someone from high school who recently “friended” them on Facebook, or somebody from last year’s holiday party. In their pre-Internet lives, these sorts of acquaintances would have quickly faded from their attention. But when one of these far-flung people suddenly posts a personal note to your feed, it is essentially a reminder that they exist. I have noticed this effect myself. In the last few months, dozens of old work colleagues I knew from 10 years ago in Toronto have friended me on Facebook, such that I’m now suddenly reading their stray comments and updates and falling into oblique, funny conversations with them. My overall Dunbar number is thus 301: Facebook (254) + Twitter (47), double what it would be without technology. Yet only 20 are family or people I’d consider close friends. The rest are weak ties — maintained via technology.

This rapid growth of weak ties can be a very good thing. Sociologists have long found that “weak ties” greatly expand your ability to solve problems. For example, if you’re looking for a job and ask your friends, they won’t be much help; they’re too similar to you, and thus probably won’t have any leads that you don’t already have yourself. Remote acquaintances will be much more useful, because they’re farther afield, yet still socially intimate enough to want to help you out. Many avid Twitter users — the ones who fire off witty posts hourly and wind up with thousands of intrigued followers — explicitly milk this dynamic for all it’s worth, using their large online followings as a way to quickly answer almost any question. Laura Fitton, a social-media consultant who has become a minor celebrity on Twitter — she has more than 5,300 followers — recently discovered to her horror that her accountant had made an error in filing last year’s taxes. She went to Twitter, wrote a tiny note explaining her problem, and within 10 minutes her online audience had provided leads to lawyers and better accountants. Fitton joked to me that she no longer buys anything worth more than $50 without quickly checking it with her Twitter network.

“I outsource my entire life,” she said. “I can solve any problem on Twitter in six minutes.” (She also keeps a secondary Twitter account that is private and only for a much smaller circle of close friends and family — “My little secret,” she said. It is a strategy many people told me they used: one account for their weak ties, one for their deeper relationships.)

It is also possible, though, that this profusion of weak ties can become a problem. If you’re reading daily updates from hundreds of people about whom they’re dating and whether they’re happy, it might, some critics worry, spread your emotional energy too thin, leaving less for true intimate relationships. Psychologists have long known that people can engage in “parasocial” relationships with fictional characters, like those on TV shows or in books, or with remote celebrities we read about in magazines. Parasocial relationships can use up some of the emotional space in our Dunbar number, crowding out real-life people. Danah Boyd, a fellow at Harvard’s Berkman Center for Internet and Society who has studied social media for 10 years, published a paper this spring arguing that awareness tools like News Feed might be creating a whole new class of relationships that are nearly parasocial — peripheral people in our network whose intimate details we follow closely online, even while they, like Angelina Jolie, are basically unaware we exist.

“The information we subscribe to on a feed is not the same as in a deep social relationship,” Boyd told me. She has seen this herself; she has many virtual admirers that have, in essence, a parasocial relationship with her. “I’ve been very, very sick lately and I write about it on Twitter and my blog, and I get all these people who are writing to me telling me ways to work around the health-care system, or they’re writing saying, ‘Hey, I broke my neck!’ And I’m like, ‘You’re being very nice and trying to help me, but though you feel like you know me, you don’t.’ ” Boyd sighed. “They can observe you, but it’s not the same as knowing you.”

When I spoke to Caterina Fake, a founder of Flickr (a popular photo-sharing site), she suggested an even more subtle danger: that the sheer ease of following her friends’ updates online has made her occasionally lazy about actually taking the time to visit them in person. “At one point I realized I had a friend whose child I had seen, via photos on Flickr, grow from birth to 1 year old,” she said. “I thought, I really should go meet her in person. But it was weird; I also felt that Flickr had satisfied that getting-to-know-you urge, so I didn’t feel the urgency. But then I was like, Oh, that’s not sufficient! I should go in person!” She has about 400 people she follows online but suspects many of those relationships are tissue-fragile. “These technologies allow you to be much more broadly friendly, but you just spread yourself much more thinly over many more people.”

What is it like to never lose touch with anyone? One morning this summer at my local cafe, I overheard a young woman complaining to her friend about a recent Facebook drama. Her name is Andrea Ahan, a 27-year-old restaurant entrepreneur, and she told me that she had discovered that high-school friends were uploading old photos of her to Facebook and tagging them with her name, so they automatically appeared in searches for her.

She was aghast. “I’m like, my God, these pictures are completely hideous!” Ahan complained, while her friend looked on sympathetically and sipped her coffee. “I’m wearing all these totally awful ’90s clothes. I look like crap. And I’m like, Why are you people in my life, anyway? I haven’t seen you in 10 years. I don’t know you anymore!” She began furiously detagging the pictures — removing her name, so they wouldn’t show up in a search anymore.

Worse, Ahan was also confronting a common plague of Facebook: the recent ex. She had broken up with her boyfriend not long ago, but she hadn’t “unfriended” him, because that felt too extreme. But soon he paired up with another young woman, and the new couple began having public conversations on Ahan’s ex-boyfriend’s page. One day, she noticed with alarm that the new girlfriend was quoting material Ahan had e-mailed privately to her boyfriend; she suspected he had been sharing the e-mail with his new girlfriend. It is the sort of weirdly subtle mind game that becomes possible via Facebook, and it drove Ahan nuts.

“Sometimes I think this stuff is just crazy, and everybody has got to get a life and stop obsessing over everyone’s trivia and gossiping,” she said.

Yet Ahan knows that she cannot simply walk away from her online life, because the people she knows online won’t stop talking about her, or posting unflattering photos. She needs to stay on Facebook just to monitor what’s being said about her. This is a common complaint I heard, particularly from people in their 20s who were in college when Facebook appeared and have never lived as adults without online awareness. For them, participation isn’t optional. If you don’t dive in, other people will define who you are. So you constantly stream your pictures, your thoughts, your relationship status and what you’re doing — right now! — if only to ensure the virtual version of you is accurate, or at least the one you want to present to the world.

This is the ultimate effect of the new awareness: It brings back the dynamics of small-town life, where everybody knows your business. Young people at college are the ones to experience this most viscerally, because, with more than 90 percent of their peers using Facebook, it is especially difficult for them to opt out. Zeynep Tufekci, a sociologist at the University of Maryland, Baltimore County, who has closely studied how college-age users are reacting to the world of awareness, told me that athletes used to sneak off to parties illicitly, breaking the no-drinking rule for team members. But then camera phones and Facebook came along, with students posting photos of the drunken carousing during the party; savvy coaches could see which athletes were breaking the rules. First the athletes tried to fight back by waking up early the morning after the party in a hungover daze to detag photos of themselves so they wouldn’t be searchable. But that didn’t work, because the coaches sometimes viewed the pictures live, as they went online at 2 a.m. So parties simply began banning all camera phones in a last-ditch attempt to preserve privacy.

“It’s just like living in a village, where it’s actually hard to lie because everybody knows the truth already,” Tufekci said. “The current generation is never unconnected. They’re never losing touch with their friends. So we’re going back to a more normal place, historically. If you look at human history, the idea that you would drift through life, going from new relation to new relation, that’s very new. It’s just the 20th century.”

Psychologists and sociologists spent years wondering how humanity would adjust to the anonymity of life in the city, the wrenching upheavals of mobile immigrant labor — a world of lonely people ripped from their social ties. We now have precisely the opposite problem. Indeed, our modern awareness tools reverse the original conceit of the Internet. When cyberspace came along in the early ’90s, it was celebrated as a place where you could reinvent your identity — become someone new.

“If anything, it’s identity-constraining now,” Tufekci told me. “You can’t play with your identity if your audience is always checking up on you. I had a student who posted that she was downloading some Pearl Jam, and someone wrote on her wall, ‘Oh, right, ha-ha — I know you, and you’re not into that.’ ” She laughed. “You know that old cartoon? ‘On the Internet, nobody knows you’re a dog’? On the Internet today, everybody knows you’re a dog! If you don’t want people to know you’re a dog, you’d better stay away from a keyboard.”

Or, as Leisa Reichelt, a consultant in London who writes regularly about ambient tools, put it to me: “Can you imagine a Facebook for children in kindergarten, and they never lose touch with those kids for the rest of their lives? What’s that going to do to them?” Young people today are already developing an attitude toward their privacy that is simultaneously vigilant and laissez-faire. They curate their online personas as carefully as possible, knowing that everyone is watching — but they have also learned to shrug and accept the limits of what they can control.

It is easy to become unsettled by privacy-eroding aspects of awareness tools. But there is another — quite different — result of all this incessant updating: a culture of people who know much more about themselves. Many of the avid Twitterers, Flickrers and Facebook users I interviewed described an unexpected side-effect of constant self-disclosure. The act of stopping several times a day to observe what you’re feeling or thinking can become, after weeks and weeks, a sort of philosophical act. It’s like the Greek dictum to “know thyself,” or the therapeutic concept of mindfulness. (Indeed, the question that floats eternally at the top of Twitter’s Web site — “What are you doing?” — can come to seem existentially freighted. What are you doing?) Having an audience can make the self-reflection even more acute, since, as my interviewees noted, they’re trying to describe their activities in a way that is not only accurate but also interesting to others: the status update as a literary form.

Laura Fitton, the social-media consultant, argues that her constant status updating has made her “a happier person, a calmer person” because the process of, say, describing a horrid morning at work forces her to look at it objectively. “It drags you out of your own head,” she added. In an age of awareness, perhaps the person you see most clearly is yourself.

Clive Thompson, New York Times



Tinker, Tailor, Soldier, Hacker

The Internet was designed for easy communication. Security? Not so much.

Worrying about threats to the electric grid is all the rage these days, with anxious planners troubled by electromagnetic pulse attacks or even solar superflares that could melt down the power net for months or even years, bringing civilization to a halt. But Richard Clarke and Robert Knake warn in “Cyber War” that if such a calamity occurs, the culprit behind it might not be a high-altitude nuclear burst or strange solar weather but a computer hacker in Beijing or Tehran.

Over the past few decades, American society has become steadily more wired. Devices talk to one another over the Internet, with tremendous increases in efficiency: Copy machines call their own repairmen when they break down, stores automatically replenish inventory as needed and military units stay in perpetual contact over logistical matters—often without humans in the loop at all. The benefits of this nonstop communication are obvious, but the vulnerabilities are underappreciated. The Internet was designed for ease of communication; security was (and is) largely an afterthought. We have created a hacker’s playground.

Worse yet, computer hardware, usually made in China, is sometimes laced with “logic bombs” that will allow anyone who has the correct codes—the Chinese government comes to mind—to turn our own devices against us. Messrs. Clarke and Knake are particularly concerned with risks to the electric grid. Hackers might be able not only to trick generators into turning themselves off but also to command expensive custom equipment to tear itself apart—damage that could take months or longer to fix. The result wouldn’t be a short-term blackout of the sort we’re familiar with but something more like Baghdad after the Iraq invasion. And that’s probably a best-case scenario.

Nor are electric-generating facilities, already the target of thousands of known hack attacks, the only vulnerability. Military secrets and valuable intellectual property are also at risk, Messrs. Clarke and Knake note. Yet efforts to protect against hacker-attacks have lagged behind increasingly sophisticated threats as the Pentagon concentrates on offensive, not defensive, cyberwar techniques. The emphasis may reflect the unhappy truth that, in a cyberwar, first-strike capability is an enormous advantage. The instigator can launch an attack before the targeted country has raised its defenses or disconnected vital services from the Internet altogether. The targeted country may be damaged so badly that it cannot respond in kind, and a weaker response would probably meet a well-prepared defense. The incentive to strike first, Messrs. Clarke and Knake argue, is destabilizing and dangerous—and all the more reason to bolster our preparedness.

Not that every first strike is malign; sometimes it produces a happy result. Messrs. Clarke and Knake are convinced that an Israeli air strike in 2007 against a secret North Korean-designed nuclear facility being constructed in the Syrian desert was a textbook case of cyber-aided warfare. Israeli computers “owned” Syria’s elaborate air defenses, the authors say, “ensuring that the enemy could not even raise its defenses.” How the Israelis accomplished the task isn’t known, but Messrs. Clarke and Knake speculate that a drone aircraft may have been used to commandeer Syrian radar signals, or Israeli agents may have inserted a “trapdoor” access point in the computer code of the Russian-designed defense system, or an intrepid Israeli agent deep in Syria may have spliced into a fiber-optic cable linked to the defense system and then sent commands clearing the way for the bombing run.

Stealthy online intrusion and malicious hacking have evolved from low-level intelligence-gathering tools to weapons that are, potentially, as destructive as bombs and missiles. (How many Americans would die if the electricity went out for a week? A month? Six months?) Yet many policy-makers still seem to regard the threat as a sideshow. The Pentagon plans “net-centric” warfare without addressing the vulnerability of the “net” part; diplomats who discuss arms control deal almost exclusively with traditional weaponry, without considering more modern threats. Generals are astounded to hear about digital military weaknesses that already haunt every captain and major. Presidents Clinton and George W. Bush largely ignored the problem, and President Obama shows no sign of doing any better.

In some intelligence circles the threat of cyber attacks is scoffed at, but I think that Messrs. Clarke and Knake are right to sound the alarm. (Mr. Clarke, we should recall, was the head of counterterrorism security in the Clinton and George W. Bush administrations.) As Henry Fielding remarked long ago, those who lay the foundation of their own ruin find that others are apt to build upon it. By constructing, and then relying on, vulnerable systems that are now entwined with almost every aspect of American life, we have laid just such a foundation. The time has come to fix it or at least to refine the systems to avoid catastrophic failure.

“Just-in-time” inventory systems are highly vulnerable to transportation problems; “network computing” fails when the network does; and smart grids are open invitations to smart hackers. Too much of our critical infrastructure operates with increased vulnerabilities and reduced margins for error. “The same way that a hand can reach out from cyberspace and destroy an electric transmission line or generator,” the authors note, “computer commands can derail a train or send freight cars to the wrong place, or cause a gas pipeline to burst.”

Promoters of something called “resilience engineering” suggest that planners should put more effort into designing systems that resist disruption and that degrade gracefully, rather than failing calamitously when stressed. Such an approach would reduce our vulnerability to cyberwar—and to many other kinds of trouble as well.

Mr. Reynolds, who teaches Internet law at the University of Tennessee, hosts “Instavision” at


Full article and photo:

Please do not change your password

You were right: It’s a waste of your time. A study says much computer security advice is not worth following.

To continue reading this story, enter your password now. If you do not have a password, please create one. It must contain a minimum of eight characters, including upper- and lower-case letters and one number. This is for your own good.

Nonsense, of course, but it helps illustrate a point: You will need a computer password today, maybe a half dozen or more — those secret sign-ins that serve as sentries for everything from Amazon shopping carts to work files to online bank accounts. Just when you have them all sorted out, along comes another “urgent” directive from the bank or IT department — time to reset those codes, for safety’s sake. And the latest lineup of log-ins you’ve concocted won’t last for long, either. Some might temporarily stay in your head, others are jotted on scraps of paper and stuffed in a wallet. A few might be taped to your computer monitor in plain view (or are those from last year’s batch? Who can remember?).

Now, a study has concluded what lots of us have long suspected: Many of these irritating security measures are a waste of time. The study, by a top researcher at Microsoft, found that instructions intended to spare us from costly computer attacks often exact a much steeper price in the form of user effort and time expended.

“Most security advice simply offers a poor cost-benefit trade-off to users,” wrote its author, Cormac Herley, a principal researcher for Microsoft Research.

Particularly dubious are the standard rules for creating and protecting website passwords, Herley found. For example, users are admonished to change passwords regularly, but redoing them is not an effective preventive step against online infiltration unless the cyber attacker (or evil colleague) who steals your sign-in sequence waits to employ it until after you’ve switched to a new one, Herley wrote. That’s about as likely as a crook lifting a house key and then waiting until the lock is changed before sticking it in the door.

Herley also looked at the validity of other advice for blocking security threats, including ways to recognize phishing e-mails (phony messages aimed at getting recipients to give up personal information such as credit card numbers) and how to deal with certificate errors, those impossible-to-fathom warning messages. As with passwords, the benefits of these procedures are usually outweighed by what users must do to carry them out, he said.

It’s not that Herley believes we should give up on protecting our computers from being hijacked or corrupted simply because safety measures consume time. The problem, he said, is that users are being asked to take too many steps, and more are constantly being added as new threats emerge or evolve. Security professionals have generally assumed that users can’t have too much knowledge in the battle against cyber crime. But that fails to take into account a crucial part of the equation, according to Herley: the worth of users’ time.

“A lot of advice makes sense only if we think user time has no value,” he said.

The study was first presented by Herley at a security workshop at Oxford University last fall, and began generating wider discussion last month after an essay about it appeared on TechRepublic, a popular technology website.

In the paper, Herley describes an admittedly crude economic analysis to determine the value of user time. He calculated that if the approximately 200 million US adults who go online earned twice the minimum wage, a minute of their time each day equals about $16 billion a year. Therefore, for any security measure to be justified, each minute users are asked to spend on it daily should reduce the harm they are exposed to by $16 billion annually. It’s a high hurdle to clear.
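Herley's back-of-the-envelope figure can be reproduced in a few lines. This is a sketch of the article's arithmetic, not code from the study itself; the $7.25 federal minimum wage (the 2010 rate) is an assumption the article does not state, and the result lands near, though not exactly on, the "about $16 billion" the article cites.

```python
# Rough reconstruction of Herley's value-of-user-time estimate.
# Assumed (not from the article): the 2010 US federal minimum wage of $7.25/hr.
ONLINE_ADULTS = 200_000_000     # the article's figure for US adults online
HOURLY_WAGE = 2 * 7.25          # "twice the minimum wage"
DAYS_PER_YEAR = 365

# Cost of one minute of everyone's time, every day, for a year.
annual_cost = ONLINE_ADULTS * (HOURLY_WAGE / 60) * DAYS_PER_YEAR
print(f"${annual_cost / 1e9:.1f} billion per year")
# Lands in the high teens of billions -- the same order of magnitude
# as the article's "about $16 billion" figure.
```

The exact total depends on the wage assumed, but the point survives any reasonable choice: a single daily minute of friction across the online population is a multibillion-dollar annual cost.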

Herley’s paper gives “normal users a voice,” said Michael P. Kassner, a technology writer and IT veteran who wrote the TechRepublic piece. For too long, users have been asked to follow security instructions without being told why they are worth the time investment. “I’ve been a proponent of prioritizing” security measures, Kassner said. “The whole purpose of IT is to make people’s lives easier.”

The computer security community has long puzzled over why so many users fail to snap to attention when alerted to news about the latest threats, such as viruses, worms, Trojan horses, malware, and spyware. At countless conferences and seminars, experts have consistently called for more education and outreach as the answer to user apathy or ignorance. But the research of Herley and others is causing many to realize most of the blame for noncompliance rests not with users, but with the experts themselves — the pros aren’t able to make a strong case for all their recommendations.

Some advice is excellent, of course. But instead of working to prioritize what efforts are effective, government and security industry officials have resorted to dramatic boldface statements about the horrors of poor passwords and other safety lapses, overwhelming the public. For instance, the federal government’s website for computer safety tips includes more than 50 categories under the heading of “Cyber Security Tips.” Each category leads to complex sets of instructions.

“It’s nice to see the industry starting to grapple with these issues,” said Bruce Schneier, the author of “Secrets and Lies,” a book about computer and network security. In a blog posting last year, Schneier recalled a security conference at which a speaker was baffled by the failure of workers at his company to adhere to strict computer policies. Schneier speculated that the employees knew following those policies would cut into their work time. They understood better than the IT department that the risks of not completing their assignments far outweighed any unspecified consequences of ignoring a security rule or three. “People do what makes sense and don’t do what doesn’t,” he said. To prompt them to be more rigorous about computer protection, he said, “You want actual studies, actual data.”

That poses a challenge for the security industry, Herley said. While doctors can cite statistics showing smoking causes cancer, and road-safety engineers can produce miles of numbers supporting seat belt use, computer security professionals lack such compelling evidence to give their advice clout. “Unbelievable though it might seem, we don’t have data on most of the attacks we talk about,” he said. “That’s precisely why we’re in this ‘do it all’ approach.”

His paper argues for advice that incorporates more information, and less hyperbole. Security professionals need to consider that user education costs everyone (in time), but benefits only the small percentage who are actually victimized, he wrote. Advice must be based on an estimate of the victimization rate for a particular security issue, not a worst-case scenario risk analysis. It’s a start to quantify in a rough way the value of user time, he said, but more study is required. The central question that remains to be answered: Given all the threats, what steps produce results that outweigh the price for society at large?

Costs can come in unexpected ways, he suggests. One example he studied was phishing. Banks and other investment companies often guarantee to reimburse customers if unauthorized withdrawals are made from their online accounts, so the customer does not pay a direct price. The banks face losses, but they are relatively modest — the annual cost nationwide as a result of phishing attacks is $60 million, Herley estimated. By instructing users to take measures against them (such as by scouring URLs to make sure they lead to legitimate websites), “we’re imposing a cost that is orders of magnitude greater than the problem it addresses,” he said.

For banks, the greater potential for damages comes not from a phishing attack itself, but indirect expenses. Herley used Wells Fargo as an example. He wrote that if a mere 10 percent of its 48 million customers needed the assistance of a company agent to reset their passwords — at about $10 per reset — it would cost $48 million, far surpassing Wells Fargo’s share of the $60 million in collective losses.
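The Wells Fargo illustration is simple enough to state as arithmetic. This sketch just restates the article's own numbers; none of the figures are mine.

```python
# The article's Wells Fargo example, made explicit.
customers = 48_000_000      # Wells Fargo's customer base, per the article
reset_fraction = 0.10       # "a mere 10 percent" need an agent-assisted reset
cost_per_reset = 10         # dollars per reset, per the article

indirect_cost = customers * reset_fraction * cost_per_reset
print(f"${indirect_cost / 1e6:.0f} million")  # $48 million
```

The indirect support cost for one bank's resets alone matches the article's estimate for the entire industry's direct annual phishing losses, which is Herley's point.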

No one is saying computer security threats are not a serious matter. Attacks multiply daily and are becoming more effective, having risen far beyond the sophistication level of the Nigerian prince looking to unload $12 million. Check your in-box — within the last few hours a criminal probably sent you an invitation to be victimized. Herley’s paper cites a report that said an unprotected PC will be invaded within 12 minutes of being connected to the Internet, on average. And last month, Justice Department Inspector General Glenn A. Fine warned the government isn’t keeping pace with cyber crooks in its efforts to combat the fastest-growing crime in the United States — identity theft. About 10 million Americans are affected each year.

With all that scary stuff in mind, it is easy to appreciate the sincerity of those pushing us to be more vigilant, even if their methods are muddled.

So which security measures offer a reasonable return on time and effort? Although coming up with a sensible list of security actions was not a goal of Herley’s research, he does have some suggestions based on personal experience. Start with bullet-proof passwords, he said, even if your employer requires you to periodically reinvent them or use too many (he juggles about three dozen as part of his work). Beyond that, he is big on one-time measures that offer ongoing benefits, like installing the latest software to shield against viruses and spyware (set it to automatically update). Two-thirds of computers have outdated software protection, according to a Microsoft spokesman. The company also recommends activating a firewall, which “functions like a moat around a castle.” Combined, such measures shouldn’t take more than 30 minutes, it said, and offer insulation from what is perhaps the biggest security menace of all: users.

“One of the main ways people get compromised is that they open the door to an attacker themselves,” said Herley. Someone might load software promoted as offering protection when it is actually spyware in disguise, he said, or they “open an e-mail attachment with a malicious payload….If this happens, it can be very bad. A piece of malicious keylogging software on your machine can grab all of your passwords: It makes no difference at that point whether they are strong or weak.”

After all this trash talk about security, you might wonder what Microsoft chief executive Steve Ballmer thinks about one of his key researchers challenging much of the advice the industry giant dispenses like gospel. Herley insists there has not been any blowback. Microsoft encourages its researchers to “push against fixed beliefs, even when some of the ideas can be controversial,” he said. And from outside Redmond, Wash., he added, “the reaction has been tremendous.”

“Maybe I’m just saying out loud what is rather obvious — we seem to be causing lots of unnecessary misery.”

Mark Pothier is the Globe’s senior assistant business editor.



Is Internet Civility an Oxymoron?

Unmoderated, anonymous comments on Web sites create more noise than wisdom.

For those of us tempted to hope that new technology might improve human nature, the Web has proved a disappointment. The latest online reality: comment sections so uncivilized and uninformative that it’s clear the free flow of anonymous comments has become way too much of a good thing.

The common practice is for news and other Web sites to treat all comments equally, whether made anonymously or using real names, via obscenities or reasoned debate. The hope was that people would be civil. Instead, many comment areas have become wastelands of attacks and insults.

“Too many of us like to think that we have made great progress in human relations,” wrote Doug Feaver earlier this month in the Washington Post. “Unmoderated comments provide an antidote to such ridiculous conclusions.” Mr. Feaver writes a blog called dot.comments that covers what readers are saying on the Post’s site.

Part of the problem is that people who conceal their names seem to feel free to say things they never would if their identities were known. There are obvious cases—dissidents living in authoritarian countries—where anonymity is needed. But as Miami Herald columnist Leonard Pitts Jr. wrote recently, message boards dominated by anonymous comments often become “havens for a level of crudity, bigotry, meanness and plain nastiness that shocks the tattered remnants of our propriety.”

There are remedies. Popular commentators on many sites and blogs go by their own names or at least by recognizable noms de plume, so their comments can be tracked. Sites letting readers rank the reputation of comment writers also help.

Some edgier Web sites have been leaders in taming message boards. One last year put in place a system that gives preferred placement to comments from people who earn high marks from the site’s readers and editors. Its founder, Nick Denton, explained, “It’s our party; we get to decide who comes.” At first, the number of comments dropped off, but comments then doubled over the past nine months as readers vied to become trusted commentators.

Peer News, a new site launching in Hawaii and funded by eBay founder Pierre Omidyar, will not permit comments at all. Editor John Temple said anonymity had so reduced responsibility that comments sections have been dominated by “racism, hate, ugliness” and “reflect badly on news organizations that have them.”

Other media outlets permit comments but filter them to give readers control over which ones they see. The Wall Street Journal’s Web site gives readers the option of seeing only comments from paying subscribers (which is how I first review responses to these columns). The Washington Post announced this month it soon will rank “trusted commentators” based on their complying with guidelines and using real names. Readers will be able to access comments from less trusted commentators, but only if they click further to do so.

By now, there’s an entire vocabulary to describe bad behavior, from flaming (hostile interactions between people on comment boards) to astroturfing (anonymous postings made to appear as grass-roots efforts that are actually organized political or PR efforts).

Used properly, the Web can deliver crowd-sourced useful information. As the “balloon boy” story was under way last year, National Public Radio’s online commenters posted complex mathematical equations showing that the claim about a boy floating in a helium balloon his father had built could not be true. Here’s what passes for flaming there: “Show your math!”

A Web site launched this month called Unvarnished goes so far as to make anonymous comments its business model. It invites “community-contributed, business-focused assessments of professional performance” of named individuals, with commenters kept anonymous, which means readers have no way to assess their interests or biases. The subjects of comments can’t remove them.

Michael Arrington objected to this approach, writing sarcastically, “It’s time for a centralized, well-organized place for anonymous mass defamation on the Internet.” He figures that “we’re going to be forced to adjust as a society,” to forgive indiscretions and to get smarter about ignoring comments from sources whose credibility is low.

The Web is a great liberator, giving millions of people the ability to offer opinions with the ease once reserved for, say, newspaper columnists. The downside is that comment overload and anonymity create more noise than wisdom. Since it’s now clear human nature hasn’t improved with the transition to digital media, we should cheer efforts to make it as easy for readers to decide which commenters to trust as it has become to post the comments.

Technology, for all its benefits, is no substitute for readers’ own judgments.

L. Gordon Crovitz, Wall Street Journal



The End of History (Books)

TODAY, Apple’s iPad goes on sale, and many see this as a Gutenberg moment, with digital multimedia moving one step closer toward replacing old-fashioned books.

Speaking as an author and editor of illustrated nonfiction, I agree that important change is afoot, but not in the way most people see it. In order for electronic books to live up to their billing, we have to fix a system that is broken: getting permission to use copyrighted material in new work. Either we change the way we deal with copyrights — or works of nonfiction in a multimedia world will become ever more dull and disappointing.

The hope of nonfiction is to connect readers to something outside the book: the past, a discovery, a social issue. To do this, authors need to draw on pre-existing words and images.

Unless we nonfiction writers are lucky and hit a public-domain mother lode, we have to pay for the right to use just about anything — from a single line of a song to any part of a poem; from the vast archives of the world’s art (now managed by gimlet-eyed venture capitalists) to the historical images that serve as profit centers for museums and academic libraries.

The amount we pay depends on where and how the material is used. In fact, the very first question a rights holder asks is “What are you going to do with my baby?” Which countries do you plan to sell in? What languages? Over what period of time? How large will the image be in your book?

Given that permission costs are already out of control for old-fashioned print, it’s fair to expect that they will rise even higher with e-books. After all, digital books will be in print forever (we assume); they can be downloaded, copied, shared and maybe even translated. We’ve all heard about the multimedia potential of the iPad, but how much will writers be charged for film clips and audio? Rights holders will demand a hefty premium for use in digital books — if they make their materials available in that format at all.

Seeing the clouds on the horizon, publishers painstakingly remove photos and even text extracts from print books as they are converted to e-books. So instead of providing a dazzling future, the e-world is forcing nonfiction to become drier, blander and denser.

Still, this logjam between technological potential and copyright hell could turn into a great opportunity — if it leads to a new model for how permission costs are calculated in e-books and even in print.

For e-books, the new model would look something like this: Instead of paying permission fees upfront based on estimated print runs, book creators would pay based on a periodic accounting of downloads. Right now, fees are laid out on a set schedule whose minimum rates are often higher than a modest book can support. The costs may be fine for textbooks or advertisers, but they punish individual authors. Since publishers can’t afford to fully cover permissions fees for print books, and cannot yet predict what they will earn from e-books, the writer has to choose between taking a loss on permissions fees or short-changing readers on content.

But if rights holders were compensated for actual downloads, there would be a perfect fit. The better a book did, the more the original rights holder would be paid. The challenge of this model is accurate accounting — but in the age of iTunes micropayments surely someone can figure out a way.

Before we even get to downloads, though, we need to fix the problem for print books. As a starting point, authors and publishers — perhaps through a joint committee of the Authors Guild and the Association of American Publishers — should create a grid of standard rates and images and text extracts keyed to print runs and prices.

Since authors and publishers have stakes on both sides of this issue, they ought to be able to come up with suggested fees that would allow creators to set reasonable budgets, and compel rights holders to conform to industry norms.

A good starting point might be a suggested scale based on the total number of images used in a book; an image that was one one-hundredth of a story would cost less than an image that was a tenth of it. Such a plan would encourage authors to use more art, which is precisely what we all want.

If rights remain as tightly controlled and as expensive as they are now, nonfiction will be the province of the entirely new or the overly familiar. Dazzling books with newly created art, text and multimedia will far outnumber works filled with historical materials. Only a few well-heeled companies will have the wherewithal to create gee-whiz multimedia book-like products that require permissions, and these projects will most likely focus on highly popular subjects. History’s outsiders and untold stories will be left behind.

We treat copyrights as individual possessions, jewels that exist entirely by themselves. I’m obviously sympathetic to that point of view. But source material also takes on another life when it’s repurposed. It becomes part of the flow, the narration, the interweaving of text and art in books and e-books. It’s essential that we take this into account as we re-imagine permissions in a digital age.

When we have a new model for permissions, we will have new media. Then all of us — authors, readers, new-media innovators, rights holders — will really see the stories that words and images can tell.

Marc Aronson is the author, most recently, of “If Stones Could Speak: Unlocking the Secrets of Stonehenge.”



China Convicts Itself

Beijing needs to commit to the global economy

China tactfully reminds the world every once in a while that its specialty is masquerading weakness as strength.

In convicting iron-ore salesman and naturalized Australian citizen Stern Hu of bribery and stealing commercial secrets this week, China passed a verdict sure to frighten but not a verdict that anyone in the world would actually trust. A solitary Australian consular official was permitted to witness only part of the largely secret trial; the only publicly disclosed piece of evidence appears to be a written statement by Du Shuanghua, owner of a private steel mill, saying he paid off one of Mr. Hu’s colleagues.

Rio Tinto, one of the few Western companies to earn billions out of China, was quick to write off its employee. The Australian government is having a harder time endorsing the verdict, prompting the predictable caterwaul from China.


Let’s recall, the reason for an open courtroom is not just to make sure justice is done, but to make sure a verdict will be believed and lend credibility to the government that issues it.

The reason to have a free media, and even to put up with Google, is so people can know when their government is lying to them, which in turn is conducive to people being prepared to believe their government when it’s telling them the truth.

Weakness masquerading as strength is also key to understanding the most dangerous issue in U.S.-China relations today—China’s controversial currency peg and the false prize of its $2 trillion in accumulated dollar reserves.

The problem isn’t that China ties its yuan to the dollar. The problem is that it never let the full consequences of this choice flow through to domestic prices, wages and patterns of investment and employment.

Perhaps the pithiest summary came from whoever said that the real trouble with China is that one Chinese won’t lend to another to buy a house unless he’s buying it in the U.S.

Exactly. Tens of billions of Chinese-owned dollars rolled into Fannie and Freddie to support a U.S. housing boom. Meanwhile, at home, the world’s second biggest economy has yet to develop a real banking system or debt market, or any way for consumers to leverage China’s huge savings to improve their standard of living.

Writ small, China’s ore wars are emblematic of the same lopsided development agenda. Beijing has been trying somehow to turn its rickety and overmanned steel industry into leverage over international ore prices. China has been trying for two years to defy market realities and force Rio and its major competitors to deliver supplies at a steep discount to the international price created by China’s own explosive and volatile demand.

Not the least of Rio’s offenses was that it refused to go along. Rio sold a growing share of ore at spot market prices to the all-too-willing buyers among mainland steelmakers. Whatever the truth of the bribery charges, this actually reduced the opportunity for corruption—but then maybe that was Rio’s real sin, since well-connected mainlanders apparently had been getting rich reselling their ore allocations to unapproved buyers at huge markups.

Had China opened up its economy at a pace commensurate with its exports and accumulation of dollars, a solution would have revealed itself: import more steel. Many of the world’s steelmakers use domestic ore or scrap. Unlike China’s, they aren’t captive to an internationally traded raw material controlled by three big sellers.

This week, two of the three, Brazil’s Vale and Australia’s BHP, persuaded major Japanese, South Korean and Chinese steelmakers to accept quarterly ore repricings, with price hikes of nearly 100% above last year. Even with the Stern Hu verdict in hand, Beijing can’t hope to hold back this tide.

Nor can it hold back forever those in the U.S. who want to use China’s currency policy as an excuse to start a trade war, joined by some who apparently want to blame China for the failure of their tax-and-spend nostrums to lift the U.S. economy to a sustainable recovery.

See, we can masquerade weakness as strength too. But Washington can’t make China see a light its leaders don’t want to see. How much better to adopt a policy of real strength at home, beginning with domestic U.S. reforms that do what the word actually implies: justify confidence in our own economic future.

When Moody’s threatened to downgrade the U.S. credit rating recently, it said a prime concern was a loss of faith in Washington’s ability to get spending under control and protect growth. Moody’s didn’t mention China.

Holman Jenkins, Wall Street Journal



Google’s Search Result: Hong Kong

The company had to maintain the trust of its users.

Whether its executives planned it or not, Google may one day attain the enviable rank of an old friend of China. During the gradual opening up in the 1980s and 1990s, many Western companies expelled during the Maoist period returned to privileged positions. When China opens up to the information economy the way it has for manufacturing and finance, Google could be the first technology company to have done well by both standing its ground and finding a face-saving compromise.

Google made good on its pledge to stop censoring search results in China through the elegant solution of moving its search engine to Hong Kong. In an interview with The Wall Street Journal published last week, Google co-founder Sergey Brin said that the idea was “actually relayed to us indirectly from the Chinese government.”

He didn’t give details, but this helped Google take a page from Sun Tzu’s “The Art of War,” achieving its goals while helping China save face. Google now also offers uncensored search on its Hong Kong site with the language choice of the simplified Chinese characters used on the mainland or the complex ones used in Hong Kong.

For its part, China can say that Google is complying with its laws, which define Hong Kong as a special administrative region. Hong Kong has broad free-speech protections under the “one country, two systems” formulation that London and Beijing established to allow the British handover of Hong Kong in 1997. Of course, Beijing can block access by mainland residents to politically sensitive search results on the Hong Kong site, as it does for other Web sites outside the mainland.

Recent history shows that standing your ground with Beijing can garner respect. Chris Patten, the last British governor of Hong Kong, was officially branded a “prostitute for 1,000 years” and a “criminal for 1,000 generations” for his efforts to protect freedoms by bringing modest democracy to the then-colony. China’s insult that “Google is not God” seems mild in comparison. Lord Patten has since been welcomed back as an old friend in China.

Google had reasons beyond China to take action. Mr. Brin confirms that what prompted the company to reverse its 2006 decision to censor search results in China was the hacking of the email accounts of human-rights activists in the U.S. and elsewhere. Mr. Brin, who was born in the Soviet Union, told the Journal that while China had made progress, “in some aspects of their policy, particularly with respect to censorship, with respect to surveillance of dissidents, I see the same earmarks of totalitarianism, and I find that personally quite troubling.”

David Drummond, Google’s chief legal officer, linked the hacking incidents to the company’s decision to stop censoring search results in China. “Most hacking attacks that you see are freelancers—maybe government sponsored, maybe not,” he told The Atlantic last week. “This attack, which was from China, was different. It was almost singularly focused on getting into Gmail accounts specifically of human-rights activists, inside China or outside.” This “was all part of an overall system bent on suppressing expression, whether it was by controlling Internet search results or trying to surveil activists.”

Google needed to signal to users around the world that its systems can be trusted, or at least that the company will disclose breaches. Trust is especially important in the new digital medium where privacy rights and expectations are still evolving. Google needs to assure users that it won’t abuse its unparalleled data about what we search, read, watch and write. We expect our email accounts to be sacrosanct, not accessible to foreign governments or other hackers.

So it was likely no coincidence that Google last week also issued an update for users about its email service entitled “Detecting Suspicious Account Activity.” This new feature lets people check to see where their accounts have been accessed. They can get alerts when their account is being accessed from what Google identifies as a suspicious location.

The Google-China standoff shifts the spotlight back to Washington. Even Google needs help, rightly comparing today’s hackers interfering with communications to the pirates who had to be defeated to ensure the free passage of shipping. U.S. companies are ahead of U.S. policy.

“As challenging as China may be for Google,” U.S. Trade Representative Ron Kirk said, “my first preference is always to see if we can’t build a partnership to work with China to see if we can’t get a resolution sooner rather than later.” In short, no help there. Meanwhile, China continues to block Facebook, Twitter and YouTube.

Google has done as much as a private company can, but its services remain vulnerable. Security online will only be assured when the U.S. decides that the free flow of information is a matter of national interest worth protecting.

Gordon L. Crovitz, Wall Street Journal

I, Translator

EVERYBODY has his own tale of terrible translation to tell — an incomprehensible restaurant menu in Croatia, a comically illiterate warning sign on a French beach. “Human-engineered” translation is just as inadequate in more important domains. In our courts and hospitals, in the military and security services, underpaid and overworked translators make muddles out of millions of vital interactions. Machine translation can certainly help in these cases. Its legendary bloopers are often no worse than the errors made by hard-pressed humans.

Machine translation has proved helpful in more urgent situations as well. When Haiti was devastated by an earthquake in January, aid teams poured in to the shattered island, speaking dozens of languages — but not Haitian Creole. How could a trapped survivor with a cellphone get usable information to rescuers? If he had to wait for a Chinese or Turkish or an English interpreter to turn up he might be dead before being understood. Carnegie Mellon University instantly released its Haitian Creole spoken and text data, and a network of volunteer developers produced a rough-and-ready machine translation system for Haitian Creole in little more than a long weekend. It didn’t produce prose of great beauty. But it worked.

The advantages and disadvantages of machine translation have been the subject of increasing debate among human translators lately because of the growing strides made in the last year by the newest major entrant in the field, Google Translate. But this debate actually began with the birth of machine translation itself.

The need for crude machine translation goes back to the start of the cold war. The United States decided it had to scan every scrap of Russian coming out of the Soviet Union, and there just weren’t enough translators to keep up (just as there aren’t enough now to translate all the languages that the United States wants to monitor). The cold war coincided with the invention of computers, and “cracking Russian” was one of the first tasks these machines were set.

The father of machine translation, Warren Weaver, chose to regard Russian as a “code” obscuring the real meaning of the text. His team and its successors here and in Europe proceeded in a commonsensical way: a natural language, they reckoned, is made of a lexicon (a set of words) and a grammar (a set of rules). If you could get the lexicons of two languages inside the machine (fairly easy) and also give it the whole set of rules by which humans construct meaningful combinations of words in the two languages (a more dubious proposition), then the machine would be able to translate from one “code” into another.

Academic linguists of the era, Noam Chomsky chief among them, also viewed a language as a lexicon and a grammar, able to generate infinitely many different sentences out of a finite set of rules. But as the anti-Chomsky linguists at Oxford commented at the time, there are also infinitely many motor cars that can come out of a British auto plant, each one having something different wrong with it. Over the next four decades, machine translation achieved many useful results, but, like the British auto industry, it fell far short of the hopes of the 1950s.

Now we have a beast of a different kind. Google Translate is a statistical machine translation system, which means that it doesn’t try to unpick or understand anything. Instead of taking a sentence to pieces and then rebuilding it in the “target” tongue as the older machine translators do, Google Translate looks for similar sentences in already translated texts somewhere out there on the Web. Having found the most likely existing match through an incredibly clever and speedy statistical reckoning device, Google Translate coughs it up, raw or, if necessary, lightly cooked. That’s how it simulates — but only simulates — what we suppose goes on in a translator’s head.

Google Translate, which can so far handle 52 languages, sidesteps the linguists’ theoretical question of what language is and how it works in the human brain. In practice, languages are used to say the same things over and over again. For maybe 95 percent of all utterances, Google’s electronic magpie is a fabulous tool. But there are two important limitations that users of this or any other statistical machine translation system need to understand.

The target sentence supplied by Google Translate is not and must never be mistaken for the “correct translation.” That’s not just because no such thing as a “correct translation” really exists. It’s also because Google Translate gives only an expression consisting of the most probable equivalent phrases as computed by its analysis of an astronomically large set of paired sentences trawled from the Web.

The data comes in large part from the documentation of international organizations. Thousands of human translators working for the United Nations and the European Union and so forth have spent millions of hours producing precisely those pairings that Google Translate is now able to cherry-pick. The human translations have to come first for Google Translate to have anything to work with.

The variable quality of Google Translate in the different language pairings available is due in large part to the disparity in the quantities of human-engineered translations between those languages on the Web.

But what of real writing? Google Translate can work apparent miracles because it has access to the world library of Google Books. That’s presumably why, when asked to translate a famous phrase about love from “Les Misérables” — “On n’a pas d’autre perle à trouver dans les plis ténébreux de la vie” — Google Translate comes up with a very creditable “There is no other pearl to be found in the dark folds of life,” which just happens to be identical to one of the many published translations of that great novel. It’s an impressive trick for a computer, but for a human? All you need to do is get the old paperback from your basement.

And the program is very patchy. The opening sentence of Proust’s “In Search of Lost Time” comes out as an ungrammatical “Long time I went to bed early,” and the results for most other modern classics are just as unusable.

Can Google Translate ever be of any use for the creation of new literary translations into English or another language? The first thing to say is that there really is no need for it to do that: would-be translators of foreign literature are not in short supply — they are screaming for more opportunities to publish their work.

But even if the need were there, Google Translate could not do anything useful in this domain. It is not conceived or programmed to take into account the purpose, real-world context or style of any utterance. (Any system able to do that would be a truly epochal achievement, but such a miracle is not on the agenda of even the most advanced machine translation developers.)

However, to play devil’s advocate for a moment, if you were to take a decidedly jaundiced view of some genre of contemporary foreign fiction (say, French novels of adultery and inheritance), you could surmise that since such works have nothing new to say and employ only repeated formulas, then after a sufficient number of translated novels of that kind and their originals had been scanned and put up on the Web, Google Translate should be able to do a pretty good simulation of translating other regurgitations of the same ilk.

So what? That’s not what literary translation is about. For works that are truly original — and therefore worth translating — statistical machine translation hasn’t got a hope. Google Translate can provide stupendous services in many domains, but it is not set up to interpret or make readable work that is not routine — and it is unfair to ask it to try. After all, when it comes to the real challenges of literary translation, human beings have a hard time of it, too.

David Bellos is the director of the Program in Translation and Intercultural Communication at Princeton.


When American and European Ideas of Privacy Collide

“On the Internet, the First Amendment is a local ordinance,” said Fred H. Cate, a law professor at Indiana University. He was talking about last week’s ruling from an Italian court that Google executives had violated Italian privacy law by allowing users to post a video on one of its services.

In one sense, the ruling was a nice discussion starter about how much responsibility to place on services like Google for offensive content that they passively distribute.

But in a deeper sense, it called attention to the profound European commitment to privacy, one that threatens the American conception of free expression and could restrict the flow of information on the Internet to everyone.

“Americans to this day don’t fully appreciate how Europeans regard privacy,” said Jane Kirtley, who teaches media ethics and law at the University of Minnesota. “The reality is that they consider privacy a fundamental human right.”

Google understands.

“The framework in Europe is of privacy as a human-dignity right,” said Nicole Wong, a lawyer with the company. “As enforced in the U.S., it’s a consumer-protection right.”

But Ms. Wong said Google’s policies on invasion of privacy, like its policies on hate speech, pornography and extreme violence, were best applied uniformly around the world. Trying to meet all the differing local standards “will make you tear your hair out and be paralyzed.”

The three Google executives were sentenced to six months in prison for failing to block a video showing an autistic boy being bullied by other students. The video was online for two months in 2006 and was removed promptly after Google received a formal complaint. The prison sentences were suspended.

Still, Judge Oscar Magi’s ruling, in effect, balanced privacy against free speech and ruled in favor of the former. And given the borderless quality of the Internet, that balance has the potential to affect nations that prefer to tilt toward the values protected by the First Amendment.

“For many purposes, the European Union is today the effective sovereign of global privacy law,” Jack Goldsmith and Tim Wu wrote in their book “Who Controls the Internet?” in 2006.

This may sound odd in America, where the First Amendment has pride of place in the Bill of Rights. In Europe, privacy comes first.

Article 8 of the European Convention on Human Rights says, “Everyone has the right to respect for his private and family life, his home and his correspondence.” The First Amendment’s distant cousin comes later, in Article 10.

Americans like privacy, too, but they think about it in a different way, as an aspect of liberty and a protection against government overreaching, particularly into the home. Continental privacy protections, by contrast, focus on protecting people from having their lives exposed to public view, especially in the mass media.

The title of a Yale Law Journal article by James Q. Whitman captured the tension: “The Two Western Cultures of Privacy: Dignity Versus Liberty.” And historical experience helps explain the differing priorities.

“The privacy protections we see reflected in modern European law are a response to the Gestapo and the Stasi,” Professor Cate said, referring to the reviled Nazi and East German secret police — totalitarian regimes that used informers, surveillance and blackmail to maintain their power, creating a web of anxiety and betrayal that permeated those societies. “We haven’t really lived through that in the United States,” he said.

American experience has been entirely different, said Lee Levine, a Washington lawyer who has taught media law in America and France. “So much of the revolution that created our legal system was a reaction to excesses of government in areas of press and speech,” he said.

It was not until 1890 that Samuel Warren and Louis D. Brandeis wrote “The Right to Privacy,” their groundbreaking Harvard Law Review article. Influential though it was, it came awfully late in the life of the republic.

The word privacy does not appear in the Constitution, and, outside the context of government searches, the document has almost nothing to say about the concept. This was perhaps best demonstrated by how hard the Supreme Court had to work in Griswold v. Connecticut, the 1965 ruling that established a right to marital privacy.

That right, Justice William O. Douglas wrote, was suggested by the First, Third, Fourth, Fifth and Ninth Amendments. The “specific guarantees in the Bill of Rights have penumbras, formed by emanations from those guarantees,” he wrote, in a much-mocked passage.

European courts, by contrast, have Article 8.

In 2004, the European Court of Human Rights relied on it to rule that Princess Caroline of Monaco could block German magazines from publishing pictures of her — quite tame pictures — that had been taken in public. “I believe that the courts have to some extent and under American influence made a fetish of the freedom of the press,” Judge Bostjan M. Zupancic of Slovenia wrote in a concurrence. “It is time that the pendulum swung back to a different kind of balance between what is private and secluded and what is public and unshielded.”

The differing conceptions can have profound consequences. “Europeans are likely to privilege privacy protection over both economic efficiency and speech,” Susan P. Crawford, who teaches Internet law at the University of Michigan, wrote in an e-mail message. “They’re willing to risk huge economic losses and erect trade barriers in order to protect privacy.”

The Italian prosecution would be unimaginable in America. The Communications Decency Act of 1996 leaves online companies free of liability for transmitting most kinds of unlawful material supplied by others. Prosecutions for truthful speech on matters of public interest are almost certainly barred by the First Amendment.

Still, said Marc Rotenberg, executive director of the Electronic Privacy Information Center, there may be something to learn from the Italian episode. “This video was enormously controversial, widely seen and very upsetting,” he said. “Sometimes,” he added, “there are egregious acts and there should be some responsibility.”

But Professor Crawford cautioned against thinking about the problem in categorical terms. Privacy is a broad enough concept, and Europe and America are varied enough, that it is easy to find counterexamples. Britain, for one, is only slowly moving toward the Continental model.

And what Italian prosecutors labeled a battle over principle may well have had another goal.

“Italian media is full of naked women and embarrassing revelations about both celebrities and ordinary people,” Professor Crawford wrote. “Any concern for privacy in this case is a pious cover for an (also naked) assertion of power over online companies.”

In some ways the Italian video represents the easy case. Google was merely a conduit for other people’s information, and that may well be enough to protect it in most of Europe.

The harder cases arise when Google is more active in gathering and disseminating information, as in its StreetView service, which provides ground-level panoramas gathered by cars with cameras on them. The program has generated legal challenges in Switzerland and Germany.

“Google is digitizing the world and expecting the world to conform to Google’s norms and conduct,” said Siva Vaidhyanathan, who teaches media studies and law at the University of Virginia. “That’s a terribly naïve view of privacy and responsibility.”


China’s Cyberposse

The short video made its way around China’s Web in early 2006, passed on through file sharing and recommended in chat rooms. It opens with a middle-aged Asian woman dressed in a leopard-print blouse, knee-length black skirt, stockings and silver stilettos standing next to a riverbank. She smiles, holding a small brown and white kitten in her hands. She gently places the cat on the tiled pavement and proceeds to stomp it to death with the sharp point of her high heel.

“This is not a human,” wrote BrokenGlasses, a user on Mop, a Chinese online forum. “I have no interest in spreading this video nor can I remain silent. I just hope justice can be done.” That first post elicited thousands of responses. “Find her and kick her to death like she did to the kitten,” one user wrote. Then the inquiries started to become more practical: “Is there a front-facing photo so we can see her more clearly?” The human-flesh search had begun.

Human-flesh search engines — renrou sousuo yinqing — have become a Chinese phenomenon: they are a form of online vigilante justice in which Internet users hunt down and punish people who have attracted their wrath. The goal is to get the targets of a search fired from their jobs, shamed in front of their neighbors, run out of town. It’s crowd-sourced detective work, pursued online — with offline results.

There is no portal specially designed for human-flesh searching; the practice takes place in Chinese Internet forums like Mop, where the term most likely originated. Searches are powered by users called wang min, Internet citizens, or Netizens. The word “Netizen” exists in English, but you hear its equivalent used much more frequently in China, perhaps because the public space of the Internet is one of the few places where people can in fact act like citizens. A Netizen called Beacon Bridge No Return found the first clue in the kitten-killer case. “There was credit information before the crush scene reading ‘’ ” that user wrote. Netizens traced the e-mail address associated with the site to a server in Hangzhou, a couple of hours from Shanghai. A follow-up post asked about the video’s location: “Are users from Hangzhou familiar with this place?” Locals reported that nothing in their city resembled the backdrop in the video. But Netizens kept sifting through the clues, confident they could track down one person in a nation of more than a billion. They were right.

The traditional media picked up the story, and people all across China saw the kitten killer’s photo on television and in newspapers. “I know this woman,” wrote I’m Not Desert Angel four days after the search began. “She’s not in Hangzhou. She lives in the small town I live in here in northeastern China. God, she’s a nurse! That’s all I can say.”

Only six days after the first Mop post about the video, the kitten killer’s home was revealed as the town of Luobei in Heilongjiang Province, in the far northeast, and her name — Wang Jiao — was made public, as were her phone number and her employer. Wang Jiao and the cameraman who filmed her were dismissed from what the Chinese call iron rice bowls, government jobs that usually last to retirement and pay a pension until death.

“Wang Jiao was affected a lot,” a Luobei resident known online as Longjiangbaby told me by e-mail. “She left town and went somewhere else. Li Yuejun, the cameraman, used to be core staff of the local press. He left Luobei, too.” The kitten-killer case didn’t just provide revenge; it helped turn the human-flesh search engine into a national phenomenon.

AT THE BEIJING headquarters of Mop, Ben Du, the site’s head of interactive communities, told me that the Chinese term for human-flesh search engine has been around since 2001, when it was used to describe a search that was human-powered rather than computer-driven. Mop had a forum called human-flesh search engine, where users could pose questions about entertainment trivia that other users would answer: a type of crowd-sourcing. The kitten-killer case and subsequent hunts changed all that. Some Netizens, including Du, argue that the term continues to mean a cooperative, crowd-sourced investigation. “It’s just Netizens helping each other and sharing information,” he told me. But the Chinese public’s primary understanding of the term is no longer so benign. The popular meaning is now not just a search by humans but also a search for humans, initially performed online but intended to cause real-world consequences. Searches have been directed against all kinds of people, including cheating spouses, corrupt government officials, amateur pornography makers, Chinese citizens who are perceived as unpatriotic, journalists who urge a moderate stance on Tibet and rich people who try to game the Chinese system. Human-flesh searches highlight what people are willing to fight for: the political issues, polarizing events and contested moral standards that are the fault lines of contemporary China.

Versions of the human-flesh search have taken place in other countries. In the United States in 2006, one online search singled out a woman who found a cellphone in a New York City taxi and started to use it as her own, rebuffing requests from the phone’s rightful owner to return it. In South Korea in 2005, Internet users identified and shamed a young woman who was caught on video refusing to clean up after her dog on a Seoul subway car. But China is the only place in the world with a nearly universal recognition (among Internet users) of the concept. I met a film director in China who was about to release a feature film based on a human-flesh-search story and a mystery writer who had just published a novel titled “Human-Flesh Search.”

The prevailing narrative in the West about the Chinese Internet is the story of censorship — Google’s threatened withdrawal from China being only the latest episode. But the reality is that in China, as in the United States, most Internet users are far more interested in finding jobs, dates and porn than in engaging in political discourse. “For our generation, the post-’80s generation, I don’t feel like censorship is a critical issue on the Internet,” Jin Liwen, a Chinese technology analyst who lives in America, told me. While there are some specific, highly sensitive areas where the Chinese government tries to control all information — most important, any political activity that could challenge the authority of the Communist Party — the Western media’s focus on censorship can lead to the misconception that the Chinese government utterly dominates online life. The vast majority of what people do on the Internet in China, including most human-flesh-search activity, is ignored by censors and unfettered by government regulation. There are many aspects of life on and off the Internet that the government is unwilling, unable or maybe just uninterested in trying to control.

The focus on censorship also obscures the fact that the Web is not just about free speech. As some human-flesh searches show, an uncontrolled Internet can be menacing as well as liberating.

ON A WINDY NIGHT in late December 2007, a man was headed back to work when he saw someone passed out in the small garden near the entryway to his Beijing office building. The man, who would allow only his last name, Wei, to be published, called over to the security guard for help. A woman standing next to the guard started weeping. Wei was confused.

Wei and the guard entered the yard, but the woman, Jiang Hong, was afraid to follow. As they approached the person, Wei told me, he realized it was the body of someone who had fallen from the building. Then he understood why Jiang wouldn’t come any closer: the body was that of her sister, Jiang Yan, who had jumped from her apartment’s 24th-floor balcony while Hong was in the bathroom. Two days earlier, Yan, who was 31, had tried to commit suicide with sleeping pills — she was separated from her husband, Wang Fei, who was dating another woman — but her sister and her husband had rushed her to the hospital. Now she had succeeded, hitting the ground so hard that her impact left a shallow crater still evident when I visited the site with Wei a year and a half later.

Hong soon discovered that her sister kept a private diary online in the two months leading up to her death and wanted it to be made public after she killed herself. When Hong called her sister’s friends to tell them that Yan had died, she also told them that they could find out why by looking at her blog, now unlocked for public viewing. The online diary, “Migratory Bird Going North,” was more than just a reflection on her adulterous husband and a record of her despair; it was Yan’s countdown to suicide, prompted by the discovery that her husband was cheating on her. The first entry reads: “Two months from now is the day I leave . . . for a place no one knows me, that is new to me. There I won’t need phone, computer or Internet. No one can find me.”

A person who read Yan’s blog decided to repost it, 46 short entries in all, on a popular Chinese online bulletin board called Tianya. Hong posted a reply, expressing sadness over her sister’s death and detailing the ways she thought Yan had helped her husband: supporting him through school, paying for his designer clothes and helping him land a good job. Now, she wrote, Wang wouldn’t even sign his wife’s death certificate until he could come to an agreement with her family about how much he needed to pay them in damages.

Yan’s diaries, coupled with her sister’s account of Wang’s behavior, attracted many angry Tianya users and shot to the top of the list of the most popular threads on the board. One early comment by an anonymous user, referring to Wang and his mistress, reads, “We should take revenge on that couple and drown them in our sputa.” Calls for justice, for vengeance and for a human-flesh search began to spread, not only against Wang but also against his girlfriend. “Those in Beijing, please share with others the scandal of these two,” a Netizen wrote. “Make it impossible for them to stay in this city.”

The search crossed over to other Web sites, then to the mainstream media — so far a crucial multiplier in every major human-flesh search — and Wang Fei became one of China’s most infamous and reviled husbands. Most of Wang’s private information was revealed: cellphone number, student ID, work contacts, even his brother’s license-plate number. One site posted an interactive map charting the locations of everything from Wang’s house to his mistress’s family’s laundry business. “Pay attention when you walk on the street,” wrote Hypocritical Human. “If you ever meet these two, tear their skin off.”

Wang is still in hiding and was unwilling to meet me, but his lawyer, Zhang Yanfeng, told me not long ago: “The human-flesh search has unimaginable power. First it was a lot of phone calls every day. Then people painted red characters on his parents’ front door, which said things like, ‘You caused your wife’s suicide, so you should pay.’ ”

Wang and his mistress, Dong Fang, both worked for the multinational advertising agency Saatchi & Saatchi. Soon after Netizens revealed this, Saatchi & Saatchi issued a statement reporting that Wang Fei and Dong Fang had voluntarily resigned. Wang’s lawyer says Saatchi pushed the couple out. “All the media have the wrong report,” he says. “[Wang Fei] never quit. He told me that the company fired him.” (Representatives for Saatchi & Saatchi Beijing refused to comment.) Netizens were happy with this outcome but remained vigilant. One Mop user wrote, “To all employers: Never offer Wang Fei or Dong Fang jobs, otherwise Moppers will human-flesh-search you.”

What was peculiar about the human-flesh search against Wang was that it involved almost no searching. His name was revealed in the earliest online-forum posts, and his private information was disclosed shortly after. This wasn’t cooperative detective work; it was public harassment, mass intimidation and populist revenge. Wang actually sought redress in Chinese court and was awarded very minor damages from an Internet-service provider and a Netizen who Wang claimed had besmirched his reputation. Recently passed tort-law reform may encourage more such lawsuits, but damages awarded thus far in China have been so minor that it’s hard to imagine lawsuits having much impact on the human-flesh search.

FOR A WESTERNER, what is most striking is how different Chinese Internet culture is from our own. News sites and individual blogs aren’t nearly as influential in China, and social networking hasn’t really taken off. What remain most vital are the largely anonymous online forums, where human-flesh searches begin. These forums have evolved into public spaces that are much more participatory, dynamic, populist and perhaps even democratic than anything on the English-language Internet. In the 1980s in the United States, before widespread use of the Internet, B.B.S. stood for bulletin-board system, a collection of posts and replies accessed by dial-up or hard-wired users. Though B.B.S.’s of this original form were popular in China in the early ’90s, before the Web arrived, Chinese now use “B.B.S.” to describe any kind of online forum. Chinese go to B.B.S.’s to find broad-based communities and exchange information about everything from politics to romance.

Jin Liwen, the technology analyst, came of age in China just as Internet access was becoming available and wrote her thesis at M.I.T. on Chinese B.B.S.’s. “In the United States, traditional media are still playing the key role in setting the agenda for the public,” Jin told me. “But in China, you will see that a lot of hot topics, hot news or events actually originate from online discussions.” One factor driving B.B.S. traffic is the dearth of good information in the mainstream media. Print publications and television networks are under state control and cannot cover many controversial issues. B.B.S.’s are where the juicy stories break, spreading through the mainstream media if they get big enough.

“Chinese users just use these online forums for everything,” Jin says. “They look for solutions, they want to have discussions with others and they go there for entertainment. It’s a very sticky platform.” Jin cited a 2007 survey conducted by iResearch showing that nearly 45 percent of Chinese B.B.S. users spend between three and eight hours a day on them and that more than 15 percent spend more than eight hours. While less than a third of China’s population is on the Web, this B.B.S. activity is not as peripheral to Chinese society as it may seem. Internet users tend to be from larger, richer cities and provinces or from the elite, educated class of more remote regions and thus wield influence far greater than their numbers suggest.

I found the intensity of the Wang Fei search difficult to understand. Wang Fei and Jiang Yan were separated and heading toward divorce, and what he did cannot be uncommon. How had the structure of the B.B.S. allowed mass opinion to be so effectively rallied against this one man? I tracked down Wang Lixue, a woman who goes by the online handle Chali and moderates a subforum on Baidu (China’s largest search engine, with its own B.B.S.) that is devoted entirely to discussions about Jiang Yan. Chali was careful to distance herself from the human-flesh search that found Wang Fei and Dong Fang. “That kind of thing won’t solve any problems,” she told me. “It’s not good for either side.” But she didn’t exactly apologize. “Everyone was so angry, so irrational,” Chali says. “It was a sensitive period. So I understand the people who did the human-flesh search. If a person doesn’t do anything wrong, they won’t be human-flesh-searched.”

Chali was moved by the powerful feeling that Wang shouldn’t be allowed to escape censure for his role in his wife’s suicide. “I want to know what is going to happen if I get married and have a similar experience,” Chali says. “I want to know if the law or something could protect me and give me some kind of security.” It struck me as an unusual wish — that the law could guard her from heartbreak. Chali wasn’t only angry about Jiang Yan’s suicide; she also wanted to improve things for herself and others. “The goal is to commemorate Jiang Yan and to have an objective discussion about adultery, to talk about what you want in your marriage, to find new opinions and have a better life,” Chali says. Her forum was the opposite of the vengeful populism found on some B.B.S.’s. The frenzy of the occasional human-flesh search attracts many Netizens to B.B.S.’s, but the bigger day-to-day draw, as in Chali’s case, is the desire for a community in which people can work out the problems they face in a country where life is changing more quickly than anyone could ever have imagined.

THE PLUM GARDEN Seafood Restaurant stands on a six-lane road that cuts through Shenzhen, a fishing village turned factory boomtown. It has a subterranean dining room with hundreds of orange-covered seats, an open kitchen to one side and a warren of small private rooms to the other. Late on a Friday night in October 2008, a security camera captured a scene that was soon replayed all over the Chinese Internet and sparked a human-flesh search against a government official.

In the video clip, an older man crosses the background with a little girl. Later the girl runs back through the frame and returns with her father, mother and brother. The subtitles tell us that the old man had tried to force the girl into the men’s room, presumably to molest her, and that her father is trying to find the man who did that. Then the girl’s father appears in front of the camera, arguing with that man.

There is no sound on the video, so you have to rely on the Chinese subtitles, which seem to have been posted with the video. According to those subtitles, the older man tells the father of the girl: “I did it, so what? How much money do you want? Name your price.” He gestures violently and continues: “Do you know who I am? I am from the Ministry of Transportation in Beijing. I have the same level as the mayor of your city. So what if I grabbed the neck of a small child? If you dare challenge me, just wait and see how I will deal with you.” He moves to leave but is blocked by restaurant employees and the girl’s father. The group exits frame left.

The video was first posted on a Web site called Netease, whose slogan is “The Internet can gather power from the people.” The eighth Netizen comment reads: “Have you seen how proud he was? He’s a dead man now.” Later someone chimed in, “Another official riding roughshod over the people!” The human-flesh search began. Users quickly matched a public photo of a local party official to the older man in the video and identified him as Lin Jiaxiang from the Shenzhen Maritime Administration. “Kill him,” wrote a user named Xunleixing. “Otherwise China will be destroyed by people of this kind.”

While Netizens saw this as a struggle between an arrogant official and a victimized family of common people, the staff members at Plum Garden, when I spoke to them, had a different take. First, they weren’t sure that Lin had been trying to molest the girl. Perhaps, they thought, he was just drunk. The floor director, Zhang Cai Yao, told me, “Maybe the government official just patted the girl on the head and tried to say, ‘Thank you, you’re a nice girl.’ ” Zhang saw the struggle between Lin and the family as a kind of conflict she witnessed all too often. “It was a fight between rich people and officials,” she says. “The official said something irritating to her parents, who are very rich.”

Police said they did not have sufficient evidence to prosecute Lin, but that didn’t stop the government from firing him. It was the same kind of summary dismissal as in the kitten-killer case — Lin drew attention to himself, and so it was time to go. The government had the technology and the power to make a story like this one disappear, yet it didn’t stand up to the Netizens. That is perhaps because this search took aim at a provincial-level official; there have been no publicized human-flesh searches against central-government officials in Beijing or their offspring, even though many of them are considered corrupt.

Rebecca MacKinnon, a visiting fellow at Princeton University’s Center for Information Technology Policy, argues that China’s central government may actually be happy about searches that focus on localized corruption. “The idea that you manage the local bureaucracy by siccing the masses on them is actually not a democratic tradition but a Maoist tradition,” she told me. During the Cultural Revolution, Mao encouraged citizens to rise up against local officials who were bourgeois or corrupt, and human-flesh searches have been tagged by some as Red Guard 2.0. It’s easy to denounce the tyranny of the online masses when you live in a country that has strong rule of law and institutions that address public corruption, but in China the human-flesh search engine is one of the only ways that ordinary citizens can try to go after corrupt local officials. Cases like the Lin Jiaxiang search, as imperfect as their outcomes may be, are examples of the human-flesh search as a potential mechanism for checking government excess.

The human-flesh search engine can also serve as a safety valve in a society with ever mounting pressures on the government. “You can’t stop the anger, can’t make everyone shut up, can’t stop the Internet, so you try and channel it as best you can. You try and manage it, kind of like a waterworks hydroelectric project,” MacKinnon explained. “It’s a great way to divert the qi, the anger, to places where it’s the least damaging to the central government’s legitimacy.”

THE CHINESE GOVERNMENT has proved particularly adept at harnessing, managing and, when necessary, containing the nationalist passions of its citizens, especially those people the Chinese call fen qing, or angry youth. Instead of wondering, in the run-up to the 2008 Beijing Olympics, why the world was so upset about China’s handling of Tibet, popular sentiment in China was channeled against dissenting individuals, painted as traitors. One young Chinese woman, Grace Wang, became the target of a human-flesh search after she tried to mediate between pro-Tibet and pro-China protesters at Duke University, where she is an undergraduate. Wang told me that her mother’s home in China was vandalized by human-flesh searchers. Wang’s mother was not harmed — popular uprisings are usually kept under tight control by the government when they threaten to erupt into real violence — but Wang told me she is afraid to return to China. Certain national events, like the Tibet activism before the 2008 Olympics or the large-scale loss of life from the Sichuan earthquake, often produce a flurry of human-flesh searches. Recent searches seem to be more political — taking aim at things like government corruption or a supposedly unpatriotic citizenry — and less focused on the kind of private transgressions that inspired earlier searches.

After the earthquake, in May 2008, users on the B.B.S. of Douban, a Web site devoted to books, movies and music, discussed the government’s response to the earthquake. A woman who went by the handle Diebao argued that the government was using the earthquake to rally nationalist sentiment, and that, she wrote, was an exploitation of the tragedy. Netizens challenged Diebao’s arguments, saying that it was only right for China to speak in one voice after such a catastrophe. These were heady days, and the people who disagreed with Diebao weren’t content to leave it at that. In Guangzhou, the capital of Guangdong, Feng Junhua, a 25-year-old man who on the Internet goes by the handle Hval, was getting worried. Feng spent a lot of time on Douban, and, he told me later, he saw where the disagreement with Diebao was going — the righteous massing against the dissenter. He e-mailed Diebao, who lived in Sichuan Province, to warn her of the danger and urge her to stop fighting with the other Netizens. “I found out that the other people were going to threaten her with the human-flesh search engine,” he told me. “She wrote back to me, saying she wanted to talk them out of it.”

The group started to dig through everything Diebao had written on the Internet, desperate to find more reasons to attack her. They found what they were looking for, a stream-of-consciousness blog entry Diebao posted right after the earthquake hit: “I felt really excited when the earthquake hit. I know this experience might happen once in a lifetime. When I watched the news at my aunt’s place, I found out that it caused five people to die. I feel so good, but that’s not enough. I think more people should die.” Diebao wrote this right after the earthquake struck her city, possibly while she was still in shock and before she knew the extent of the damage.

The group tried to use this post to initiate a human-flesh search against Diebao. At first it didn’t succeed — no one responded to the calls for a search. (There are hundreds, maybe thousands of attempts each week for all kinds of human-flesh searches, the vast majority of which do not amount to much.) Finally they figured out a way to make their post “sparkle,” as they say in Chinese, titling it, “She Said the Quake Was Not Strong Enough” and writing, of Diebao: “We cannot bear that an adult in such hard times didn’t feel ashamed for not being able to help but instead was saying nonsense, with little respect for other people’s lives. She should not be called a human. We think we have to give her a lesson. We hereby call for a human-flesh search on her!”

This time it took hold. A user named Little Dumpling joined the pile-on, writing: “Earthquake, someone is calling you. Please move your epicenter right below [Diebao’s] computer desk.” Juana0906 asked: “How could she be so coldblooded? Her statement did greater harm to the victims than the earthquake.” Then from Expecting Bull Market, the obligatory refrain in almost every human-flesh search, “Is she a human?”

Feng, the user who tried to warn Diebao of the impending search, became angry that so many people were going after Diebao. “I cannot stand seeing the strong beating the weak,” he told me. “I thought I should protect the right of free speech. She can say anything she wants. I think that she just didn’t think before she spoke.” But the searchers managed to rally users against Diebao. “Her school read a lot of aggressive comments on the Internet and got pressure from Netizens asking them to kick out this girl,” Feng told me. Shortly after the human-flesh search began, Diebao was expelled from her university. “The school announced that it was for her own safety, to protect her,” Feng says.

Feng decided to get revenge on the human-flesh searchers. He and a few other users started a human-flesh search of their own, patiently matching back the anonymous ID’s of the people who organized against Diebao to similar-sounding names on school bulletin boards, auction sites and help-wanted ads. Eventually he assembled a list of the real identities of Diebao’s persecutors. “When we got the information, we had to think about what we should do with it,” Feng says. “Should we use it to attack the group?”

Feng stopped and thought about what he was about to do. “When we tried to fight evil, we found ourselves becoming evil,” he says. He abandoned the human-flesh search and destroyed all the information he had uncovered.

Tom Downey is the author of “The Last Men Out: Life on the Edge at Rescue 2 Firehouse.”



The many voices of the web

The internet: New combinations of human and computer translation are making web pages available in foreign languages


THE web connects over a billion people, but it is fragmented by language. Anglophone web-users have as many pages to choose from as Chinese speakers, and there are roughly as many blogs in Japanese as there are in English. And although the Arabic blogosphere got off to a late start, it is now booming. But each of these groups of users is walled off from the others by language.

What might the web look like without such linguistic barriers? Imagine if internet users everywhere could have content automatically, smoothly and accurately translated into their own languages. A Chinese web-surfer could then visit an English newspaper website and read all the content in excellent Mandarin, before moving on to read blog entries written in Malagasy or Twitter posts in Galician.

This fantasy is still just that, but bits of it are starting to look plausible. Start with the translation part. Thanks to the internet, this is now a relatively flexible and cheap process. At the base of the translation hierarchy are free services offered by Google and others. Such services “learn” by analysing collections of documents that have been translated by humans, such as the records of the European Parliament, which are translated into 11 different languages. These collections are so big, and the machines that analyse them so powerful, that automatic translation (known in the jargon as “machine translation”) can usually convey the gist of a text, albeit in a slightly garbled manner. Google and its rivals focus on widely spoken tongues, but academics are working on machine-translation services for more obscure languages.
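The core idea behind such statistical learning can be illustrated in miniature. The sketch below is not how Google's system actually works — real systems use probabilistic alignment models trained over millions of sentence pairs — but it shows the principle: count which words co-occur across sentence-aligned translations, and the pairings that recur are likely translations. The tiny "corpus" is invented for illustration.

```python
# Toy illustration of how machine translation "learns" from parallel
# text: count word co-occurrences across sentence-aligned translation
# pairs, then pick the most frequent pairing for each source word.
from collections import Counter

# A tiny invented sentence-aligned corpus (English / French).
parallel = [
    ("the house", "la maison"),
    ("a house", "une maison"),
    ("the car", "la voiture"),
    ("the small house", "la petite maison"),
]

cooccur = Counter()
for en, fr in parallel:
    for e in en.split():
        for f in fr.split():
            cooccur[(e, f)] += 1

def best_translation(word):
    """Return the target word that most often co-occurs with `word`."""
    candidates = {f: n for (e, f), n in cooccur.items() if e == word}
    return max(candidates, key=candidates.get) if candidates else None

print(best_translation("house"))  # prints "maison"
```

With enough data the signal dominates: "maison" appears in every sentence containing "house." Real alignment models also down-weight ubiquitous function words like "la," which raw counts handle poorly — one reason genuine systems iterate probabilistic refinements of this counting idea.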

An army of volunteer translators occupies the next level up in the hierarchy. Several prominent English-language publications, including this newspaper, are regularly translated into Mandarin by groups of unpaid volunteers for the benefit of other readers. More formal projects also exist. At Global Voices, a kind of polyglot bloggers’ collective, around 200 volunteers select and translate their colleagues’ posts. Items on Meedan, a social network dedicated to the discussion of Middle East news, are translated into English or Arabic by machine and can then be tidied up by readers.

Paid human translators, unsurprisingly, still produce the best results. But even here costs are coming down, as the translation industry is shifting from project-based to piecemeal working. The methods are inspired by Mechanical Turk, an online service operated by Amazon that companies use to farm out mundane tasks to a pool of online workers. SpeakLike, which launched in late 2009, has a pool of 3,000 translators and can supply a translation of a given text within hours for $0.05-0.15 a word, depending on turnaround time. SpeakLike will even translate Twitter posts and send them to a parallel account within minutes for $0.25 a pop.

All this activity can, at least in theory, take place out of sight of the reader. One way to make this happen is to use the Worldwide Lexicon (WWL), a series of interlocking pieces of free software created by Brian McConnell, a software developer based in San Francisco. WWL gives bloggers and media companies fine control over how their content is translated. A blogger can, for example, provide a machine-translated version of a post whenever the speaker of a different language visits his site. (Web browsers like Internet Explorer and Firefox specify the user’s language when requesting pages.) WWL also provides a neat interface that, if enabled, allows readers to improve the translation of blog postings, for the benefit of subsequent visitors.
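The language-detection step mentioned in parentheses can be sketched concretely. Browsers send an Accept-Language header with every request, and a server can use it to decide which version of a post to serve. The parsing below is deliberately simplified (real servers honour the header's "q=" quality weights), and the post store and function names are hypothetical, not the actual WWL interface.

```python
# Minimal sketch of serving a translated post based on the browser's
# Accept-Language header, in the spirit of the WWL approach above.
def preferred_language(accept_language_header):
    """Return the primary language tag from an Accept-Language header."""
    first = accept_language_header.split(",")[0].strip()
    # Drop any quality suffix (";q=0.9") and region ("-CN").
    return first.split(";")[0].split("-")[0].lower()

# Hypothetical store: a post's original text plus available translations.
post = {"en": "Hello, readers!", "zh": "读者们，你们好！"}

def serve(header, original_lang="en"):
    lang = preferred_language(header)
    # Fall back to the original when no translation exists yet.
    return post.get(lang, post[original_lang])

print(serve("zh-CN,zh;q=0.9,en;q=0.8"))  # prints the Chinese version
```

A French visitor (`"fr-FR,fr;q=0.9"`) would fall through to the English original — exactly the moment at which a WWL-style system would commission a machine translation instead.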

Commercial producers of content can use the software to create an initial machine translation and then send it to SpeakLike for further work. The WWL software can also wait until the hit count on an item exceeds a certain value, indicating that it is popular, before sending the machine-translated version out to a human. This combination of human and computer work—cyborg translation, as it were—takes place entirely behind the scenes; visitors are simply presented with a more or less readable article. Mr McConnell is working to integrate his system with WordPress, one of the most widely used blogging platforms. He says WWL is being used by several publishers, including the owners of a well-known technology magazine.
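The popularity-gated workflow just described reduces to a simple rule: every article starts with a machine translation, and once its hit count crosses a threshold it is queued, once, for paid human polishing. The sketch below is an assumed shape for that logic — the function names and threshold are hypothetical, not the actual WWL API.

```python
# Sketch of "upgrade to human translation on popularity": machine
# translation by default, human polishing once an item proves popular.
HIT_THRESHOLD = 1000  # hypothetical cutoff

human_queue = []

def queue_for_human_translation(article):
    """Stand-in for handing the item to a paid service like SpeakLike."""
    human_queue.append(article["id"])

def on_page_view(article):
    article["hits"] += 1
    # Fire exactly once, when the threshold is first crossed.
    if article["hits"] == HIT_THRESHOLD and not article["sent_to_human"]:
        article["sent_to_human"] = True
        queue_for_human_translation(article)

article = {"id": "post-42", "hits": 0, "sent_to_human": False}
for _ in range(1500):
    on_page_view(article)

print(human_queue)  # ['post-42'] — queued exactly once
```

The appeal of the design is economic: human effort, the scarce resource, is spent only on the small fraction of items that readers have already voted for with their clicks.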

So how much closer is the dream of a unified web? Volunteer translators only cluster around popular sites, so the vast majority of blogs will remain untranslated, or only machine-translated. Most content producers are unable to pay for human translation, even at today’s prices. That leaves them reliant on machine translation, too. It is getting better, but it still struggles with colloquialisms and idioms. As Ethan Zuckerman, co-founder of Global Voices and a researcher at Harvard University, puts it: “If you sound like an EU parliamentarian, we can translate you quite well.” Until computers learn how to cope just as proficiently with the outbursts of self-absorbed teenage bloggers or snarky gossip columnists, machine-translated articles will struggle to attract readers. Clever technology can help lower the web’s linguistic barriers, but cannot yet eliminate them.



You Can’t Judge a Book by Its Author

Software that blurs a writer’s meaning is not progress.

We all know that you can’t tell a book by its cover, but technology is about to deliver its newest mixed blessing: Soon we won’t be able to tell a book by its author.

Last week one of the large textbook publishers, Macmillan, announced new software to let college instructors rewrite textbooks by substituting new material for what the author wrote. This will allow options such as deleting paragraphs or editing down to the level of individual sentences. The software can bring to print and e-textbooks what’s called a “mashup” in other forms like music and videos, where people alter the original with their own preferred version of the real thing.

This seems like another step in the progress of technology. Mistakes can be corrected and new views expressed with the wisdom of crowds—or at least the wisdom of professors—improving the work of a single author. Just as Wikipedia often delivers excellent group work, textbooks can get better with many people altering them.

But we have to wonder about the unintended consequences of a textbook absent an author. For example, since 1948 generations of students learned from Paul Samuelson’s “Economics,” which has sold four million copies. It had quirks and went through many editions. But it also was elegantly written and became canonical. What happens when students learn from what appears to be the same text but isn’t?

“Appalling and preposterous” is how Jaron Lanier described this idea to me last week. Mr. Lanier, the Silicon Valley computer scientist who popularized the term “virtual reality,” is now among a band of Internet pioneers who worry about its effects. Contrary to original assumptions about technology as a force for personal expression, he worries about minimizing individual creativity.

“Without Richard Feynman, where would physics be?” he asks. “Education is the ultimate form of expression. To think of it as a dry process devoid of personal creativity is to take a very antihuman approach.”

Without Feynman’s individual creativity, where would physics be?

Mr. Lanier’s new book, “You Are Not a Gadget,” rails against the Internet for promoting a “digital Maoism,” in which “a mashup is more important than the sources who were mashed.” He says anonymous groups creating content lack the accountability of an individual. “If you’re worried about history or science being politicized, a mashup will be even worse. Individual textbook authors are not perfect, but at least they have a voice with consistency and creativity.”

It is understandable that textbook publishers would embrace new technology. Their business model is under pressure from secondhand sales. Print and e-books customized by instructors for their own classes won’t be valuable in the used-book market, so innovative publishers reckon this will boost their economics.

Mr. Lanier warns this is another step in the open source, information-wants-to-be-free ideology. “Authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind,” he writes.

Blogs and social networks have boosted individual expression, but the Web paradoxically makes it harder to support professional creativity. Mr. Lanier, who is also a musician, researched the question of how new technologies have affected the ability of musicians to make a living. His conclusion is glum.

“By now, a decade and a half into the web era, when iTunes has become the biggest music store, in a period when companies like Google are the beacons of Wall Street, shouldn’t there be at least a few thousand initial pioneers of a new kind of musical career who can survive in our utopia?” Instead, “maybe after a generation or two without professional musicians, some new habitat will emerge that will bring them back.”

Mr. Lanier calls creative people the “new peasants” and likens them to “animals converging on shrinking oases of old media in a depleted desert.” In contrast, Silicon Valley views its own creativity differently and doesn’t give it away. Google and Apple don’t make the work product of their legions of engineers available for free. Venture capitalists would not agree to a mashup of their early-stage companies, with their investments reduced to divvied-up profits, if any.

In the case of textbooks there should at least be transparency when the relationship between authors and students is amended. Readers should be able to know they’ve read the book the author intended or what changes were made and why.

Technology creates opportunities, and the genie shouldn’t go back in the bottle. Still, the integrity and authenticity that a single author provides should not be lost. As Mr. Lanier reminds us, technological progress is great, but we need to be sure it doesn’t devalue our greatest growth driver, individual creativity.

Gordon Crovitz, Wall Street Journal



Computers Turn Flat Photos Into 3-D Buildings

GRAND SCALE A 3-D reconstruction of the Colosseum in Rome, built as part of the “Rome in a Day” project, which used 2,106 images and 819,242 points.

Rome wasn’t built in a day, but in cyberspace it might be.

Computer science researchers at the University of Washington and Cornell University are deploying a system that will blend teamwork and collaboration with powerful graphics algorithms to create three-dimensional renderings of buildings, neighborhoods and potentially even entire cities.

The new system, PhotoCity, grew from the original work of a Cornell computer scientist, Noah Snavely, who, while working on his Ph.D. dissertation at the University of Washington, developed a set of algorithms that generated three-dimensional models from unstructured collections of two-dimensional photos.

The original project was dubbed Photo Tourism, and it has since been commercialized as Microsoft’s Photosynth service, making it possible for users to upload collections of photos that can then be viewed in a quasi-three-dimensional montage with a Web browser.

However, Photosynth collections are generally limited to dozens or hundreds of photos. The researchers wanted to push — or “scale” — their technology to be able to handle tens of thousands or even millions of photos. They also wanted to use computer processing power to transform the photos into true three-dimensional images, or what they refer to as a “dense point cloud.”

The visualization technology is already able to quickly process large collections of digital photos of an object like a building and render ghostly and evocative three-dimensional images. To do this they use a three-stage set of algorithms that begins by creating a “sparse point cloud” with a batch of photos, renders it as a denser image, capturing much of the original surface texture of the object, and then renders it in three dimensions.
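At the heart of building any such point cloud is triangulation: once the same feature is identified in two photos taken from known camera positions, its three-dimensional location can be recovered. The toy below uses the standard linear (DLT) triangulation method with two synthetic cameras; it is only a sketch of one step — systems like Photo Tourism must first estimate the camera positions themselves from the photo collection, which this example assumes as given.

```python
# Toy triangulation: recover a 3-D point from its images in two cameras
# with known projection matrices, via the linear (DLT) method.
import numpy as np

# Two simple 3x4 camera projection matrices: an identity-pose camera,
# and a second camera translated one unit along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0, 1.0])  # homogeneous 3-D point
x1 = P1 @ X_true; x1 /= x1[2]            # its image in camera 1
x2 = P2 @ X_true; x2 /= x2[2]            # its image in camera 2

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: solve A X = 0 for the 3-D point via SVD."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]                      # back to (x, y, z, 1)

X_rec = triangulate(P1, P2, x1, x2)
print(X_rec[:3])  # ≈ [0.5, 0.2, 4.0]
```

Repeating this for every matched feature across thousands of photos is what yields first the "sparse" and ultimately the "dense" point cloud the researchers describe.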

To improve the quality of their rendering capabilities, the researchers plan to integrate their computing system with a social game that will permit competing teams to add images where they are most needed to improve the quality of the visual models.

The PhotoCity game is already being played by teams of students at the University of Washington and Cornell, and the researchers plan to open it to the public in an effort to collect three-dimensional renderings in cities like New York and San Francisco. Contestants will be able either to use an iPhone application that takes pictures with the phone’s camera or to upload collections of digital images.

In adopting what is known as a social computing or collective intelligence model, they are extending an earlier University of Washington research effort that combined computing and human skills to create a video game about protein folding.

The game, Foldit, was released in May 2008, allowing users to augment computing algorithms, solving visual problems where humans could find better solutions than computers. The game quickly gained a loyal following of amateur protein folders who became addicted to the challenges that bore a similarity to solving a Rubik’s Cube puzzle.

The emergence of such collaborative systems has great promise for harnessing the creative abilities of people in tandem with networked computers, said Peter Lee, a Defense Advanced Research Projects Agency program manager who recently organized a team-based contest to use the Internet to quickly locate a series of red balloons hidden around the United States.

“The obvious thing to do is to try to mobilize a lot of people and get them to go out and take snapshots that contribute to this 3-D reconstruction,” he said. “But maybe if enough people are involved someone will come up with a better idea of how to go about doing this.”

Indeed, it was J. C. R. Licklider, a legendary official at the Defense Advanced Research Projects Agency, who was a pioneer in proposing the idea of a “man-computer symbiosis.” While at Darpa, Dr. Licklider financed a series of research projects that led directly to the modern personal computer and today’s Internet.

To entice volunteers, the researchers have created a Web site. Anyone who wants to be a “custodian” of a particular building or place can begin by uploading pictures of the site. To maintain control, they will need to be part of the group that contributes the most photos, in a capture-the-flag-like competition.

“One of the nice things for the players is they can own the points they create, whether it’s a building or a collection of buildings,” said Kathleen Tuite, a University of Washington graduate student and a computer graphics researcher who is one of the designers of PhotoCity. She said the researchers were considering the idea of offering real-world prizes that would create incentives similar to those of Geocaching, the popular Internet GPS game.

“Eventually, the goal is to create a game without boundaries, that expands to fill the world,” Dr. Snavely said. “For now, we’re focused on the scale of a college campus, or the heart of a city.”

John Markoff, New York Times



Digital Technology and Cleaner Politics

The Web creates new opportunities for disclosure.

This is supposed to be an era of openness and full-on transparency, powered by the Internet. Disclosure is a virtue, made simple through technology. The old, top-down control over communications is over, a relic of predigital life.

All true—except when it comes to politics.

It says something about how analog Washington remains that congressmen and presidents of both parties thought they could dictate who could say what about them when they ran for election. This was part of the 2002 Bipartisan Campaign Reform Act, aka McCain-Feingold. The law prohibited corporations—companies, unions or nonprofits—from “electioneering communications” within a month of a primary election or two months of a general election.

In invalidating this provision of the law as a violation of free speech, the Supreme Court focused on how technology has made it easier to speak and be heard, making restraints on speech less defensible. The majority opinion in Citizens United v. Federal Election Commission, decided at the end of last month, deserves close attention for its recognition of how the Internet and other digital advances redefine many issues, including campaign finance.

The eyes of many readers might have glazed over because of the partisan reaction to the ruling, but through the apolitical lens of increasingly open communications in other facets of life, McCain-Feingold looked decidedly anachronistic. The opinion rejected a ban on a movie, funded by a nonprofit corporation, that was critical of Hillary Clinton during the 2008 campaign. Defenders of McCain-Feingold argued that books, too, could be banned depending on who funded them. They defended an exemption for endorsements by newspapers owned by corporations, but said that bloggers could be punished for commenting on candidates.

The justices cited “classic examples of censorship” the law would have allowed: “The Sierra Club runs an ad, within the crucial phase of 60 days before the general election, that exhorts the public to disapprove of a Congressman who favors logging in national forests; the National Rifle Association publishes a book urging the public to vote for the challenger because the incumbent U. S. Senator supports a handgun ban; and the American Civil Liberties Union creates a Web site telling the public to vote for a Presidential candidate in light of that candidate’s defense of free speech.”

The court dismissed the idea of free speech for some and not others. “With the advent of the Internet and the decline of print and broadcast media,” the majority opinion said, “the line between the media and others who wish to comment on political and social issues becomes far more blurred.” “Rapid changes in technology—and the creative dynamic inherent in the concept of free expression—counsel against upholding a law that restricts political speech in certain media or by certain speakers,” the justices ruled. Prohibitions on television advertising are unsustainable when the trend is that “Internet sources, such as blogs and social networking Web sites, will provide citizens with significant information about political candidates and issues.”

The justices suggested an alternative: requiring more transparency about who is funding what. “Disclosure is a less restrictive alternative to more comprehensive regulations of speech,” they said, noting that they have “upheld registration and disclosure requirements on lobbyists, even though Congress has no power to ban lobbying itself.”

Technology could make disclosure more effective. Searchable databases using advances such as XBRL and other semantic techniques for organizing information could be powerful tools for ensuring that voters know who is behind advocacy during elections. If campaign-finance reformers want voters to know who funded which messages, they should make disclosure more effective.

Last week, The Wall Street Journal had a page-one article breaking the news of how plaintiff law firms are huge funders of the political campaigns of state officials around the country who then hire them to litigate lucrative cases such as for public pension funds. It took several investigative journalists to uncover what a digital database could have tracked if disclosures were required in a usable format.

Similarly, an investigative Web site detailed how members of Congress were allotted tickets for yesterday’s Super Bowl, which some then sold to their backers at big markups. Why not require disclosure of these kinds of indirect donations, using databases that everyone could access?

The Supreme Court has not been known as the most digitally sophisticated branch of government—after all, justices still wear robes and jot on legal pads. But the political branches of government can learn from the technology lesson the justices have just handed them, including that if they want real campaign-finance reform, disclosure is the way to go.

L. Gordon Crovitz, Wall Street Journal


What Newspapers Can Learn From Craigslist

Craig Newmark did one simple thing: He thought about what his users truly wanted.

Last summer Wired magazine ran a cover story on Craigslist, the classified-advertising Web site, and its founder, Craig Newmark. Craigslist, the headline read, was a mess. The story said the site was underdeveloped. It refused to monetize itself. Mr. Newmark wouldn’t add new features, and his motivations were obscure.

Indeed, so poorly is Craigslist run that it’s easily one of the 20 biggest Web sites in the U.S. and, according to Wired, likely topped $100 million in revenue last year. It did this with minimal effort and with a staff an order of magnitude or two smaller than those of other sites its size. Those are two things that cannot be said about most Web sites in the U.S., and certainly not about the New York Times, which announced last month it would be shifting its Web operation to a modified pay set-up in 2011.

The details are as yet sketchy, but the paper plans to adopt a metered model. Subscribers—daily or Sunday—get everything. Nonsubscribers get to see a few articles every month; after that, they will be invited to pay some sort of fee. The move is designed to take advantage of the traffic that search engines bring to the site, but still target the moochers.

The Times has invested heavily in the Web. You can’t accuse it of not taking the medium seriously. Unlike Craigslist it highly monetizes its Web site, makes a hefty amount from it, and deserves to do whatever it wants to make money to support its extensive news operation. Indeed, the move makes terrestrial subscriptions more valuable—a smart way to maximize that revenue stream.

That said, let’s remember that the road to charging for content on the Internet is strewn with media roadkill, including the Times’s short-lived TimesSelect experiment, which launched in 2005 and folded in 2007.

With all humility toward the very smart people at the Times, I submit that they can learn a thing or two from the lowly Craigslist. Indeed, I submit that it’s hard to look at virtually any news site out there and not notice how its architecture and presentation differ from that of Craigslist, which has several times any news site’s number of users.

Craig Newmark did one simple thing: He thought about what his users wanted, and put very little on his site that wasn’t useful to them. Craigslist’s mission is merely to make it easy for people to sell an old refrigerator, or look for a roommate, or find someone to date. By just about any metric the site serves those users as well as any business on the Internet, with the arguable exception of Google.

For decades, newspapers made billions in profits by tending their local monopolies and felt quite important in the process. So it’s not surprising that the single most salient weakness of a mainstream news site is that in the end it is a corporate showcase, a striking contrast to the Craigslist model. The vocabulary of the architecture speaks plainly that the page is there to serve not readers, but to articulate the structure of the company behind it.

That architecture is depressingly similar. The typical news site doesn’t take advantage of the wide horizontal window of the browser. Instead, it is framed inside a square or vertical box. And inside that frame, the reader is presented with four or five—and sometimes even seven or eight—horizontal lines of information or navigation before the first actual story headline appears. (The Wall Street Journal site, which, it should be noted, has never given away its content for free, has more than a half-dozen lines of foofaraw before a reader is granted a story headline.)

The stories are as a rule stuffed into a cramped space in the bottom middle of the page, hemmed in by myriad other links, devices and widgets arrayed in columns to either side. Headlines, forced to fit in those tiny spaces, are often as awkward and telegrammatic as print ones.

Even after the reader clicks on a story, the site then offers up more of the same: A frame inside the browser window, unwanted navigation elements, links to any and every possible department of the site, placed above, to the left and to the right of the actual prose. As for that prose, it could be a 400-word reported piece, a lacerating editorial, or a recipe for pumpkin pie. It doesn’t matter—it will always be trapped in that small well, suffocated by the weight of the widgets, links and navigation around it.

I’m not talking about ads. It is a cranky consumer who can’t grok the reason for an ad next to a story. I think many readers, like me, would gladly swap their prized Adblock Firefox add-on for one that would keep the ads and instead eliminate all the non-content elements of the average newspaper Web page.

Ultimately, I would like about 99% fewer navigation links on the page, but will settle for 90% fewer. For that service, a newspaper site can hit me with all the ads it wants, or charge me any amount of money. But until it provides this simple and I think obvious service to readers, one can’t help suspecting that a newspaper’s approach to the Web is incomplete.

Newspaper folks, looking at their collapsed business model, have griped for a decade that Craigslist stole their classified ads—and a breath later dream about charging for content. They do everything but consider that a Craigslist model—which puts the reader first, highlights the presentation of information, and fosters community—might indeed fund newsgathering for a new millennium.

Mr. Wyman is the former arts editor of and National Public Radio.



From the Roman Codex to the iPad

How’s this for human progress? It took about 4,000 years from the invention of writing to the Roman-era codex of bound pages replacing scrolls, 1,000 years from the codex to movable type creating printed books, 500 years from the printing press to the Internet—and only 25 years to the launch of the iPad.

Even Apple enthusiasts will concede that this somewhat overstates last week’s announcement of the tablet. But like the arrival of Amazon’s Kindle, Apple’s tablet reminds us there is a digital revolution redefining the book.

At a time when other media markets from news to music are in disarray because of the new ways we consume information, the book is thriving. Books in their emerging forms make the best example so far of using print for what it does best, digital for what it does best, and both together for a dramatically new experience.

Book publishers have the business advantage that they are not dependent on advertising. This means they are in far less danger of losing revenues as they acquire more readers. University presses, for example, often can’t get distribution in larger bookstores, so they encourage sampling of books online to drive print sales.

One million books were published last year. Amazon stocks hundreds of thousands of e-books (six of every 10 sales are now for the Kindle edition when one is available), and Google is digitizing millions more, including books that are out of print. Now the iPad promises color, real-time access and multimedia for the book experience.

The result is hard to predict, but Robert Darnton may be as good a futurist as any technologist analyzing the iPad. Mr. Darnton is director of the Harvard library, for decades a leading historian of the book, and author of “The Case for Books: Past, Present and Future” (Public Affairs, 2009).

“I’ve been invited to so many conferences on the death of the book that it must be very much alive,” he told me last week. “It’s misguided to think that one medium displaces another and we have a choice of either analog or digital. The history of communication is that new technologies reinforce rather than displace the old.” Scribes continued to copy books by hand for over a century after Gutenberg.

The mix of print and digital will get more interesting this year, with the launch of many new e-readers, including the Que and the Skiff. The iPad promises to integrate audio and video seamlessly, perhaps immersing us rather than further reducing our attention spans as most new media have. Why stick to text alone when other media can be incorporated into the book experience? Once-flat textbooks could be reimagined as multimedia teaching materials, and novels could come with animation.

Apple will try to catch up to Amazon, the leading force behind the renaissance of books. Amazon’s convenient online bookstores help people discover new books through highly personalized recommendations, and the Kindle lets people carry a library of digital books wherever they go.

“Every age is an information age,” Mr. Darnton says. “It’s just that information is organized in different ways.” Tablets, e-readers and print-on-demand books “will reinforce the printed codex and not displace it.” Analog and digital media are “complementary and not confrontational,” he says, rebutting what he calls a “colossal case of false consciousness that accompanies spectacular announcements like the iPad.”

In “The Case for Books,” he described the ideal book he imagined a decade ago. He envisioned a pyramid, with the top level a text monograph with links to supplementary essays. Readers could then “continue deeper through the book, through bodies of documents, bibliography, historiography, iconography, background music, everything I can provide,” he wrote. “In the end, they will make the subject theirs, because they will find their own paths through it, reading horizontally, vertically, or diagonally, wherever the electronic links may lead.”

Technology is about to make real this sort of deep engagement with information. Mr. Darnton’s next book, “Poetry and the Police: Communication Networks in 18th Century Paris,” will be a history of street songs in the French capital. There were no newspapers and half the population was illiterate, so news was spread by song. “Parisians wrote new verses to old tunes literally every day,” he says, “tunes being a great mnemonic device for spreading the word in a semiliterate world.”

He found the original tunes in the National Library in Paris and had a cabaret singer record them for a modern audience. These recordings can be incorporated with text to create a full information experience. Combined text and audio seems like a perfect offering for the iPad.

As technology rewrites what it means to be a book, we will raise our expectations for our information experiences, with implications potentially as significant as the move to printed books from scrolls.

L. Gordon Crovitz, Wall Street Journal



The book of Jobs

It has revolutionised one industry after another. Now Apple hopes to transform three at once

APPLE is regularly voted the most innovative company in the world, but its inventiveness takes a particular form. Rather than developing entirely new product categories, it excels at taking existing, half-baked ideas and showing the rest of the world how to do them properly. Under its mercurial and visionary boss, Steve Jobs, it has already done this three times. In 1984 Apple launched the Macintosh. It was not the first graphical, mouse-driven computer, but it employed these concepts in a useful product. Then, in 2001, came the iPod. It was not the first digital-music player, but it was simple and elegant, and carried digital music into the mainstream. In 2007 Apple went on to launch the iPhone. It was not the first smart-phone, but Apple succeeded where other handset-makers had failed, making mobile internet access and software downloads a mass-market phenomenon.

As rivals rushed to copy Apple’s approach, the computer, music and telecoms industries were transformed. Now Mr Jobs hopes to pull off the same trick for a fourth time. On January 27th he unveiled his company’s latest product, the iPad—a thin, tablet-shaped device with a ten-inch touch-screen which will go on sale in late March for $499-829. Years in the making, it has been the subject of fevered online speculation in recent months, verging at times on religious hysteria: sceptics in the blogosphere jokingly call it the Jesus Tablet.

The enthusiasm of the Apple faithful may be overdone, but Mr Jobs’s record suggests that when he blesses a market, it takes off. And tablet computing promises to transform not just one industry, but three—computing, telecoms and media.

Companies in the first two businesses view the iPad’s arrival with trepidation, for Apple’s history makes it a fearsome competitor. The media industry, by contrast, welcomes it wholeheartedly. Piracy, free content and the dispersal of advertising around the web have made the internet a difficult environment for media companies. They are not much keener on the Kindle, an e-reader made by Amazon, which has driven down book prices and cannot carry advertising. They hope this new device will give them a new lease of life, by encouraging people to read digital versions of books, newspapers and magazines while on the move. True, there are worries that Apple could end up wielding a lot of power in these new markets, as it already does in digital music. But a new market opened up and dominated by Apple is better than a shrinking market, or no market at all.

Keep taking the tablets

Tablet computers aimed at business people have not worked. Microsoft has been pushing them for years, with little success. Apple itself launched a pen-based tablet computer, the Newton, in 1993, but it was a flop. The Kindle has done reasonably well, and has spawned a host of similar devices with equally silly names, including the Nook, the Skiff and the Que. Meanwhile, Apple’s pocket-sized touch-screen devices, the iPhone and iPod Touch, have taken off as music and video players and hand-held games consoles.

The iPad is, in essence, a giant iPhone on steroids. Its large screen will make it an attractive e-reader and video player, but it will also inherit a vast array of games and other software from the iPhone. Apple hopes that many people will also use it instead of a laptop. If the company is right, it could open up a new market for devices that are larger than phones, smaller than laptops, and also double as e-readers, music and video players and games consoles. Different industries are already converging on this market: mobile-phone makers are launching small laptops, known as netbooks, and computer-makers are moving into smart-phones. Newcomers such as Google, which is moving into mobile phones and laptops, and Amazon, with the Kindle, are also entering the fray: Amazon has just announced plans for an iPhone-style “app store” for the Kindle, which will enable it to be more than just an e-reader.

If the past is any guide, Apple’s entry into the field will not just unleash fierce competition among device-makers, but also prompt consumers and publishers who had previously been wary of e-books to take the plunge, accelerating the adoption of this nascent technology. Sales of e-readers are expected to reach 12m this year, up from 5m in 2009 and 1m in 2008, according to iSuppli, a market-research firm.

Hold the front pixels

Will the spread of tablets save struggling media companies? Sadly not. Some outfits—metropolitan newspapers, for instance—are probably doomed by their reliance on classified advertising, which is migrating to dedicated websites. Others are too far gone already. Tablets are expensive, and it will be some years before they are widespread enough to fulfil their promise. In theory a newspaper could ask its readers to sign up for a two-year electronic subscription, say, and subsidise the cost of a tablet. But such a subsidy would be hugely pricey, and expensive printing presses will have to be kept running for readers who want to stick with paper.

Still, even though tablets will not save weak media companies, they are likely to give strong ones a boost. Charging for content, which has proved difficult on the web, may get easier. Already, people are prepared to pay to receive newspapers and magazines (including The Economist) on the Kindle. The iPad, with its colour screen and integration with Apple’s online stores, could make downloading books, newspapers and magazines as easy and popular as downloading music. Most important, it will allow for advertising, on which American magazines, in particular, depend. Tablets could eventually lead to a wholesale switch to digital delivery, which would allow newspapers and book publishers to cut costs by closing down printing presses.

If Mr Jobs manages to pull off another amazing trick with another brilliant device, then the benefits of the digital revolution to media companies with genuinely popular products may soon start to outweigh the costs. But some media companies are dying, and a new gadget will not resurrect them. Even the Jesus Tablet cannot perform miracles.



The Times to Charge for Frequent Access to Its Web Site

Beginning in January 2011, unlimited access to will require a paper subscription or payment of a flat fee.

Taking a step that has tempted and terrified much of the newspaper industry, The New York Times announced on Wednesday that it would charge some frequent readers for access to its Web site — news that drew ample reaction from media analysts and consumers, ranging from enthusiastic to withering.

Starting in January 2011, a visitor to will be allowed to view a certain number of articles free each month; to read more, the reader must pay a flat fee for unlimited access. Subscribers to the print newspaper, even those who subscribe only to the Sunday paper, will receive full access to the site without any additional charge.

Executives of The New York Times Company said they wanted to create a system that would have little effect on the millions of occasional visitors to the site, while trying to cash in on the loyalty of more devoted readers. But fundamental features of the plan have not yet been decided, including how much the paper will charge for online subscriptions or how many articles a reader will be allowed to see without paying.

“This announcement allows us to begin the thought process that’s going to answer so many of the questions that we all care about,” Arthur Sulzberger Jr., the Times Company chairman and publisher of the newspaper, said in an interview. “We can’t get this halfway right or three-quarters of the way right. We have to get this really, really right.”

For years, publishers banked on a digital future supported entirely by advertising, dismissing online fees as little more than a formula for shrinking their audiences and ad revenue. But two years of plummeting advertising has many of them weighing anew whether they might collect more money from readers than they would lose from advertisers.

Financial analysts and writers who follow the media business had mostly qualified praise for the decision of The Times. is the most popular newspaper site in the country, with more than 17 million readers a month in the United States, according to Nielsen Online; analysts say it is the leader in advertising revenue, as well, giving The Times more to lose if the move backfires.

“You can’t continue to be The New York Times unless you find” a new source of revenue, said James McQuivey, media analyst at Forrester Research.

Mike Simonton, an analyst at Fitch Ratings, said, “We expect that The Times will be able to execute a strategy like this,” adding that other papers will try it in the near future, but few are likely to succeed.

But the response was far from universally positive. Felix Salmon, a respected writer on media for Reuters, wrote, “Successful media companies go after audience first, and then watch revenues follow; failing ones alienate their audience in an attempt to maximize short-term revenues.”

Others endorsed the idea of a pay wall generally, while criticizing the approach of The Times.

Thousands of readers sent e-mail messages to The Times or posted comments on the site Wednesday, with those saying they supported the move outnumbered by others who vowed not to pay.

Shares of the Times Company fell 39 cents, closing at $13.31.

All visitors to will have full access to the home page. In addition, readers will be able to read individual articles through search sites like Google, Yahoo and Bing without charge. After that first article, though, clicking on subsequent ones will count toward the monthly limit. Among the nation’s largest newspapers, only The Wall Street Journal and Newsday charge for access to major portions of their Web sites. A few smaller ones also do, including The Financial Times, The Arkansas Democrat-Gazette and The Albuquerque Journal, and more are expected to join their ranks this year.

The Times Company has been studying the matter for almost a year, searching for common ground between pro- and anti-pay camps. Company executives said the changes would wait another year primarily because they need to build pay-system software that works seamlessly with and the print subscriber database.

“There’s no prize for getting it quick,” said Janet L. Robinson, the company’s president and chief executive. “There’s more of a prize for getting it right.”

Within the newsroom of The Times, where there has long been strong sentiment in favor of charging, the primary criticism was about the wait until 2011.

“I think we should have done it years ago,” said David Firestone, a deputy national news editor. “As painful as it will be at the beginning, we have to get rid of the notion that high-quality news comes free.”

The Times has tried and abandoned more limited online pay models. In the 1990s it charged overseas readers, and from 2005 to 2007 the newspaper’s TimesSelect service charged for access to editorials and columns.

Company executives said the current decision was not a reaction to the ad recession but a long-term strategy to develop new revenue. “This is a bet, to a certain degree, on where we think the Web is going,” Mr. Sulzberger said. “This is not going to be something that is going to change the financial dynamics overnight.”

Most readers who go to the Times site, as with other news sites, are incidental visitors, arriving no more than once in a while through searches and links, and many of them would be unaffected by the new system. A much smaller number of committed readers account for the bulk of the site visits and page views, and the essential question is how many of them will pay.

The Times Company looked at several approaches, including a straightforward pay wall similar to The Journal’s, which makes some articles available to any visitor, and others accessible only to paying readers. It also rejected the ideas of varying the price depending on how much a consumer uses the site, and a “membership” format similar to the one used in public broadcasting.

The approach the company took was “the one that after much research and study we determined has the most upside” in both subscriptions and advertising, said Martin A. Nisenholtz, senior vice president for digital operations. “We’re trying to maximize revenue. We’re not saying we want to put this revenue stream above that revenue stream. The goal is to maximize both revenue streams in combination.”




German government warns against using MS Explorer


The warning applies to versions 6, 7 and 8 of Internet Explorer

The German government has warned web users to find an alternative browser to Internet Explorer to protect security.

The warning from the Federal Office for Information Security comes after Microsoft admitted IE was the weak link in recent attacks on Google’s systems.

Microsoft rejected the warning, saying that the risk to users was low and that the browser’s increased security setting would prevent any serious risk.

However, German authorities say that even this would not make IE fully safe.

Thomas Baumgaertner, a spokesman for Microsoft in Germany, said that while they were aware of the warning, they did not agree with it, saying that the attacks on Google were by “highly motivated people with a very specific agenda”.

“These were not attacks against general users or consumers,” said Mr Baumgaertner.

“There is no threat to the general user, consequently we do not support this warning,” he added.

Microsoft says the security hole can be shut by setting the browser’s security zone to “high”, although this limits functionality and blocks many websites.

However, Graham Cluley of anti-virus firm Sophos told BBC News that not only did the warning apply to versions 6, 7 and 8 of the browser, but that instructions on how to exploit the flaw had been posted on the internet.

“This is a vulnerability that was announced in the last couple of days. Microsoft have no patch yet and the implication is that this is the same one that was exploited in the attacks on Google earlier this week,” he said.

“The way to exploit this flaw has now appeared on the internet, so it is quite possible that everyone is now going to have a go.”

Microsoft traditionally release a security update once a month – the next scheduled patch is the 9th of February. However, a spokesman for Microsoft told BBC News that developers for the firm were trying to fix the problem.

“We are working on an update on this issue and this may well involve an out of cycle security update,” he said.

Fix development

However, this is no easy task. Not only does the firm have to fix the loophole, it has to ensure the fix does not create another one and – equally importantly – works on all computers. The challenge is compounded by the fact that Microsoft has to fix three different versions of its browser.

Microsoft said that while all versions of Internet Explorer were affected, the risk was lower with more recent releases of its browser.

The other problem facing developers is that the exploit might not be blocked by anti-virus software, even when recently updated.

“We’ve been working to analyse the malware that the Chinese are using. But new versions can always be created,” said Mr Cluley.

“We’ve been working with Microsoft to see if the damage can be mitigated and we are hoping that they will release an emergency patch.

“One thing that should be stressed is that every browser has its security issues, so switching may remove this current risk but could expose you to another.”



Flowers for a funeral

Google and China

Censorship and hacker attacks provide the epitaph for Google in China

“WE’RE in this for the long haul,” wrote a Google executive four years ago when the company launched a self-censored version of its search engine for the China market. Now Google says it might have to pull out of the country because of alleged attacks by hackers in China on its e-mail service and a tightening of China’s restrictions on free speech on the internet. Its change of heart, as the company rightly points out, could have “far-reaching consequences”.

Google’s “new approach to China”, as the company’s chief legal officer, David Drummond, called it on January 12th on the company’s official blog, will certainly infuriate China’s government. The authorities are sensitive to foreign complaints about internet controls in China. In November, during a visit by President Barack Obama, his obliquely worded criticism of Chinese online censorship was itself censored from official reports. If it does close down in China, Google would be the first big-brand foreign company to do so citing freedom of speech in many years.

Mr Drummond’s blog-posting also contained unusually direct finger-pointing by a foreign multinational at China as a source of hacker attacks. It said that in mid-December Google detected a “highly sophisticated and targeted attack” on its corporate computer systems “originating from China”. It found that at least 20 other large companies from various industries had also been attacked. A primary goal of the hacking of Google, it said, appeared to be to gain access to the e-mail of Chinese human-rights activists who use Google’s “Gmail” service. The hackers succeeded in partially penetrating two such accounts.

“Third parties” had also, wrote Mr Drummond, “routinely” gained access to the Gmail accounts of dozens of other human-rights advocates in America, Europe and China itself. Unlike the mid-December attack, these breaches appeared to involve “phishing” scams or “malware” on users’ computers rather than direct attacks on Google’s systems. All this, he said, along with attempts over the past year to impose further limits on free speech on the web, had led Google to “review the feasibility” of its Chinese business.

The company has decided to stop censoring the results of its China-based search engine. Mr Drummond said this might result in having to shut down and Google’s offices in China. In the face of much criticism from Western human-rights advocates, Google justified its decision to set up in 2006 by pointing out that China often blocked its uncensored engine. Better to offer a censored service (with warnings to users that results were filtered), the company argued, than nothing at all. China would certainly not allow an uncensored search engine to be based on its territory.

Google’s decision at the time was presumably driven in part by the lure of China’s rapidly expanding internet market. In part because of intermittent blocking of, and the slowness of access to the company’s foreign-based servers, Baidu, a Beijing-based company listed on America’s NASDAQ exchange, dwarfed Google’s share of the search-engine business in China. The launch of did little to dent Baidu’s domination.

Nor has Google’s acquiescence in self-censorship of its searches made China any less wary of its other, uncensored, services. Google’s video-sharing site, YouTube, has been blocked since March, because of footage of Chinese police beating Tibetan monks. Its photo-album site, Picasa Web Albums, suffered the same fate soon after. Access to Google’s blog service, Blogger, has long been intermittent. It is currently unavailable in Beijing.

Google’s frustrations are widely shared. In the build-up to the Beijing Olympics in August 2008, China lifted longstanding blocks on several websites, as it tried to present a more open image to foreign visitors. Since then, controls have been stepped up to unprecedented levels. Internet access in the western region of Xinjiang has been all but cut off since ethnic riots erupted there in July.

The unrest also prompted a shutdown of foreign social-networking sites such as Twitter and Facebook. The role of such sites in protests in Iran, after its stolen elections in June, had already alarmed the government. Its fear of dissent around the 60th anniversary in October of the founding of communist China prompted even greater vigilance against sensitive debate online. But there has been no sign of relaxation since then. In recent weeks the authorities have tightened restrictions on the registration of websites under the .cn domain name (only businesses may apply). A crackdown on internet pornography has led to closer scrutiny by internet service providers of non-porn websites.

In December Yeeyan, a site with translations of articles from foreign newspapers including the Guardian and the New York Times, was closed for several days. It was allowed to reopen after putting tighter controls in place on the publication of politically sensitive pieces. Ecocn, a site offering translations of articles from this newspaper, was also briefly shut down as officials trawled for pornography, but resurfaced unscathed. The volunteers who run this informal operation make translations of sensitive articles available only to users they trust.

The anti-porn drive turned up the heat on Google too. Last year Google was among several search engines in China accused by the authorities of providing links to pornographic sites. The state-controlled press gave particular prominence to Google’s alleged transgressions, which the company promised to investigate. The Chinese media have also published frequent criticisms in recent months of Google’s alleged violations of Chinese copyrights in its Google Books venture.

In Silicon Valley, its home, Google’s change of tack in China was widely applauded. But some were asking whether it was “more about business than thwarting evil,” to quote TechCrunch, a widely read website. Besides pointing to Google’s failure to eat into Baidu’s market share, cynics noted that, whereas, according to Mr Drummond, Google’s revenues in China are “truly immaterial”, its costs are not. It employs about 700 people in China, some of them royally paid engineers, who may now have to look for other jobs. Hacker attacks and censorship, critics say, are convenient excuses for something Google wanted to do anyway, without appearing to be retreating commercially. Google strongly rejects this interpretation.

In China, however, the government is clearly fearful that the company’s public stand against censorship will be celebrated by many Chinese internet-users. Chinese news accounts of the company’s decision failed to mention the reason for Google’s actions. Chinese web portals buried the story. Many internet-users in China have become adept at finding ways of circumventing China’s blocks on overseas websites, including the installation of “virtual private network” software. Numerous tributes to Google that rapidly appeared on Chinese internet discussion forums, and flowers laid outside Google’s office in Beijing, showed that the attempts at censorship had failed. Few, however, believe the company’s announcement will dissuade China from continuing to try.


Full article and photo:

World Wide Mush

In his new book, “You Are Not A Gadget,” online pioneer Jaron Lanier explains how the Internet has gone off course; a chorus of voices makes everything flat—and scary

All too many of today’s Internet buzzwords— including “Web 2.0,” “Open Culture,” “Free Software” and the “Long Tail”—are terms for a new kind of collectivism that has come to dominate the way many people participate in the online world. The idea of a world where everybody has a say and nobody goes unheard is deeply appealing. But what if all of the voices that are piling on end up drowning one another out?

There’s no escaping collectivism in our online world. If you search about most any topic online, for instance, you will likely be directed first to Wikipedia, a collective effort. Google Wave, a new communication tool that is intended to supplant email, encourages you to blur personal boundaries by editing what someone else has said in a conversation with you, and you can watch each other as you type so nobody gets a private moment to consider a thought before posting. And if you listen to music online, there’s a good chance your listening will be guided by statistical analysis of Internet crowd preferences.

Most people know me as the “father of Virtual Reality technology.” In the 1980s and 1990s, I was a young computer scientist and entrepreneur working on how to apply virtual reality to things like surgical simulation. But I was also part of a circle of friends who tried to imagine how computers would fit into people’s lives, including how people might make a living in the future. Our dream came true, in part. It turns out that millions of people are ready to contribute instead of sitting passively on the couch watching television. On the other hand, we made a huge mistake in making those contributions unpaid, and often anonymous, because those bad decisions robbed people of dignity. I am appalled that our old fantasies have become so entrenched that it’s hard to get anyone to remember that there are alternatives to a framework that isn’t working.

Here’s one problem with digital collectivism: We shouldn’t want the whole world to take on the quality of having been designed by a committee. When you have everyone collaborate on everything, you generate a dull, average outcome in all things. You don’t get innovation.

If you want to foster creativity and excellence, you have to introduce some boundaries. Teams need some privacy from one another to develop unique approaches to any kind of competition. Scientists need some time in private before publication to get their results in order. Making everything open all the time creates what I call a global mush.

There’s a dominant dogma in the online culture of the moment that collectives make the best stuff, but it hasn’t proven to be true. The most sophisticated, influential and lucrative examples of computer code—like the page-rank algorithms in the top search engines or Adobe’s Flash—always turn out to be the results of proprietary development. Indeed, the adored iPhone came out of what many regard as the most closed, tyrannically managed software-development shop on Earth.

Actually, Silicon Valley is remarkably good at not making collectivization mistakes when our own fortunes are at stake. If you suggested that, say, Google, Apple and Microsoft should be merged so that all their engineers would be aggregated into a giant wiki-like project—well you’d be laughed out of Silicon Valley so fast you wouldn’t have time to tweet about it. Same would happen if you suggested to one of the big venture-capital firms that all the start-ups they are funding should be merged into a single collective operation.

But this is exactly the kind of mistake that’s happening with some of the most influential projects in our culture, and ultimately in our economy.

Digital collectivism might seem participatory and democratic, but it’s painting us into a corner from which we will have to concoct an awkward escape. It is strange to me that this isn’t more obvious to many of my Silicon Valley colleagues.

The U.S. made a fateful decision in the late 20th century to routinely cede manufacturing and other physical-world labors to foreign competitors so that we could focus more on lucrative, comfortable intellectual activities like design, entertainment and the creation of other types of intellectual property. That formulation still works for certain products that remain within a system of proprietary control, like Apple’s iPhone.

Unfortunately, we were also making another decision at the same time: that the very idea of intellectual property impedes information flow and sharing. Over the last decade, many of us cheered as a lot of software, music and news became free, but we were shooting ourselves in the collective feet.

On the one hand we want to avoid physical work and instead benefit from intellectual property. On the other hand, we’re undermining intellectual property so that information can roam around for nothing, or more precisely as bait for advertisements. That’s a formula that leaves no way for our nation to earn a living in the long term.

The “open” paradigm rests on the assumption that the way to get ahead is to give away your brain’s work—your music, writing, computer code and so on—and earn kudos instead of money. You are then supposedly compensated because your occasional dollop of online recognition will help you get some kind of less cerebral work that can earn money. For instance, maybe you can sell custom branded T-shirts.

We’re well over a decade into this utopia of demonetized sharing and almost everyone who does the kind of work that has been collectivized online is getting poorer. There are only a tiny handful of writers or musicians who actually make a living in the new utopia, for instance. Almost everyone else is becoming more like a peasant every day.

And it’s going to get worse. Before too long—in 10 years, I’d guess—cheap home robots will be able to make custom T-shirts from free designs off the Internet. When that day comes, then a T-shirt’s design will be no more valuable than recorded music is today.

The T-shirt-making robot is only one example of a general principle. As technology gets better and better, more and more jobs will essentially become threatened, just like today’s jobs for reporters or recording musicians.

One of the bright spots in the employment picture for the U.S. is in health-care jobs, such as those related to elder care. But the Japanese are developing health-care robots to anticipate the needs of their aging population. When those robots get good and cheap, which they probably will within a couple of decades, a lot of health-care jobs in the U.S. will either go away or become much less well-paid.

This isn’t how things should be. Improving technology is supposed to create ever more comfortable and cerebral jobs for people. Some kind of intellectual-property system is the only way Americans, or people anywhere, can earn money in the long, long term, as technology gets very good.

The owners of big computer resources on the Internet, like Google, will be able to make money from the open approach for a long time, of course, by routing advertisements, but middle-class people will be increasingly asked to accept a diet of mere kudos. No one should feel insulated from this trend. Poverty has a way of trickling up. Once everyone is aggregated, what will be left to be advertised?

All too often, a youthful perspective falls prey to the fallacy of collectivism. I fell prey to it myself. In my early 20s, I lived in collective households and belonged to food co-ops, as did most of my friends. I recall these things now as harmless diversions, more of a way of extending the experience of childhood than an attempt at revolution.

Youthful fascination with collectivism is in part simply a way to address perceived “unfairness.” If everyone shares, then a young person arriving on the scene fresh will not have less than an older person who has been around for a while.

This is all harmless enough, but the pattern can be manipulated in dangerous ways. I don’t want our young people aggregated, even by a benevolent social-networking site. I want them to develop as fierce individuals, and to earn their living doing exactly that. When they work together, I hope they’ll do so in competitive, genuinely distinct teams so that they can get honest feedback and create big-time innovations that earn royalties, instead of spending all their time on crowd-pleasing gambits to seek kudos. This is not just so that they and their children will thrive, but so that they won’t become a mob, which, as history has shown us again and again, is a vulnerability of human nature.

Jaron Lanier is known as the father of virtual-reality technology and has worked on the interface between computer science and medicine, physics, and neuroscience. This essay is adapted from his book “You Are Not a Gadget,” due out next week from Knopf.



Google Gets On the Right Side of History

No more censored searches to please the Chinese government.

One night in the mid-1990s when I was working as a journalist in Beijing, I went out to dinner with some Chinese friends. I had just finished reading a book called “The File” by the British historian Timothy Garton Ash. It’s about what happened in East Berlin after the Berlin Wall came down and everybody could see the files the Stasi had been keeping all those years. People discovered who had been ratting on whom—in some cases neighbors and co-workers, but also lovers, spouses and even children. After I described the book to my Chinese dinner companions—a hip and artsy intellectual crowd—one friend declared: “Some day the same thing will happen in China, then I’ll know who my real friends are.”

The table went silent.

China today is very different from Soviet-era Eastern Europe. It’s unlikely that its current political system—or its system for blocking foreign Web sites known widely as the “great firewall”—will crumble like the Berlin Wall any time soon. Both are supported and enabled by the current geopolitical, commercial and investment climate in ways that Soviet-era Eastern Europe and the Iron Curtain never were.

I do believe, however, that in my lifetime the Chinese people may learn more about some of the conversations that have taken place over the past decade between Internet company executives and Chinese authorities. When that happens, they will know who sold them out and who was most eager to help the Chinese Communist Party in building a blinkered cocoon of disinformation around their lives—and in some cases deaths.

This censored environment makes it easier for the Chinese government to lie to its people, steal from them, turn a blind eye when they are poisoned with tainted foodstuffs, and cover up their children’s deaths due to substandard building codes. It is a constant struggle, and sometimes literally a crime, for people to share information about such matters or to use the Internet to mobilize against corruption and malfeasance.

That is the information environment that China’s business elites, many of whom have gotten rich running Internet and telecommunications companies, are responsible for helping to build and maintain. For now they are national heroes, having made great (and lucrative) efforts on behalf of China’s economic growth and global competitiveness, making China a force to be reckoned with on the global stage. But if history takes some unexpected turns—and that’s the one thing you can count on Chinese history doing—it won’t always be on their side.

By announcing it will no longer censor its Chinese search engine and will reconsider its presence in China, Google has taken a bold step onto the right side of history.

Four years ago, when Google entered the Chinese market and launched its censored Chinese search engine, Chinese bloggers called it the “neutered Google.” At the time, Google executives said the decision to bow to the Chinese government’s censorship demands had been made after heated internal debates. They said they had weighed the positives and negatives and concluded Chinese Internet users were better off with the neutered Google than with no Google. They drew a red line under search and said they would not bring any other Google products containing users’ personal information—including email and blogging—into China. They held to that line.

Over the past four years I tested Google’s Chinese search engine from time to time and compared its search results with those of the Chinese market leader, Baidu. I found that it tended to censor search results somewhat less than Baidu. This supported Google’s argument that it at least gave Chinese Internet users more information than the domestic alternatives.

Google executives also pointed out that a notice appeared at the bottom of every page of censored results, informing users that some information was being hidden from them at the behest of Chinese authorities. In this way, the logic went, they were at least being honest with the Chinese public about the fact that Google was helping their government put blinkers on them.

The company’s effort to walk a fine line between Chinese regulators and free speech critics ended up being unsustainable. Anticensorship activists still viewed its compromise as contributing to the spread of censorship around the world. On the other hand, the compromise was also unacceptable to Chinese authorities, who were unhappy that Google wasn’t censoring as heavily as Baidu. Last year Google came under a series of attacks in the state-run media for failing to censor porn adequately when users—horror of horrors—typed smutty phrases into the search box.

As Google considers exactly what it will do next now that it has refused to censor, some Chinese users are expressing support and sending flowers, others are upset, and still others are thumbing their noses and saying good riddance. Competitors are gloating. Google is in for a rough few months ahead. In the longer run, history will reveal to the Chinese people who their real friends have been.

Ms. MacKinnon is a fellow with the Open Society Institute. She is writing a book about China and the Internet.



Keep a Civil Cybertongue

Rude and abusive online behavior should not be met with silence.

In less than 20 years, the World Wide Web has irrevocably expanded the number of ways we connect and communicate with others. This radical transformation has been almost universally praised.

What hasn’t kept pace with the technical innovation is the recognition that people need to engage in civil dialogue. What we see regularly on social networking sites, blogs and other online forums is behavior that ranges from the carelessly rude to the intentionally abusive.

Flare-ups occur on social networking sites because of the ease with which thoughts can be shared through the simple press of a button. Ordinary people, celebrities, members of the media and even legal professionals have shown insufficient restraint before clicking send. There is no shortage of examples—from the recent Twitter heckling at a Web 2.0 Expo in New York, to a Facebook poll asking whether President Obama should be killed.

The comments sections of online gossip sites, as well as some national media outlets, often reflect semi-literate, vitriolic remarks that appear to serve no purpose besides disparaging their intended target. Some sites exist solely as a place for mean-spirited individuals to congregate and spew their venomous verbiage.

Online hostility targeting adults is vastly underreported. The reasons victims fail to come forward include the belief that online hostility is an unavoidable and even acceptable mode of behavior; the pervasive notion that hostile online speech is a tolerable form of free expression; the perceived social stigma of speaking out against attacks; and the absence of readily available support infrastructure to assist victims.

The problem of online hostility, in short, shows no sign of abating on its own. Establishing cybercivility will take a concerted effort. We can start by taking the following steps:

First, and most importantly, we need to create an online culture in which every person can participate in an open and rational exchange of ideas and information without fear of being the target of unwarranted abuse, harassment or lies. Everyone who is online should have a sense of accountability and responsibility.

Too frequently, we hear the argument that being online includes the right to be nasty—and that those who choose to participate on the Web should develop thicker skin. This gives transgressors an out for immoral behavior.

Just as we’ve learned what is deemed appropriate face-to-face communication, we need to learn what is appropriate behavior in an environment that frequently deals with purely written modes of communication and an inherent absence of nonverbal cues.

Second, individuals appalled at the degeneration of online civility need to speak out, to show that this type of behavior will no longer be tolerated. Targets of online hostility should also consider coming forward to show that attacks can have serious consequences. There are already several documented cases of teens taking their own lives because of cyberbullying.

A third step has to do with media literacy. People need to know how to differentiate between information that is published on legitimate sites that follow defined standards and also possibly a professional code of ethics, and information published in places like gossip sites whose only goal is to post the most outrageous headlines and stories in order to increase traffic. People can and will learn to shun and avoid such sites over time, particularly with education about why they are unethical.

Fourth, adult targets of online hostility deserve a national support network. This should be a safe place where they can congregate online to receive emotional support, practical advice on how to deal with transgressors, and information on whom to contact for legal advice when appropriate.

Finally, it’s time to re-examine the current legal system. Online hostility is cross-jurisdictional. We might need laws that directly address this challenge. There is currently no uniformity among states in the definition of cyberbullying and cyberharassment. Perhaps federal input is needed.

The Internet is bringing about a revolution in human knowledge and communication, and we have an unprecedented opportunity to make the global conversation more reasonable and productive. But we can only do so if we prevent the worst among us from silencing the best among us with hostility and incivility.

Mr. Wales is the founder of Wikipedia and sits on the board of CiviliNation, a nonprofit. Ms. Weckerle is the founder and president of CiviliNation.



Technology Predictions Are Mostly Bunk

Bill Gates, 1981: ‘No one will need more than 637 kb of memory for a personal computer.’

‘Tis the season for predictions, so “Information Age” bravely goes out on this limb: Most technology predictions for 2010 won’t come true. The more we learn about how innovation happens, the less straight the lines of advance look.

“Inventions have long since reached their limit, and I see no hope for further developments,” said the Roman engineer Sextus Julius Frontinus in 10 A.D. This end-of-progress view has been echoed many times, including by Charles Duell, commissioner for the U.S. Patent Office, who in 1899 said, “Everything that can be invented has already been invented.”

It’s worth recalling, especially in a gloomy year like the one drawing to an end, that the opposite is true: The more we invent, the more we invent. Knowledge grows on itself.

So here are the rest of my Top 10 Worst Technology Predictions, which prove that when it comes to tech, optimism pays:

“The Americans have need of the telephone, but we do not. We have plenty of messenger boys,” Sir William Preece, chief engineer at the British Post Office, 1878.

“Who the hell wants to hear actors talk?” H.M. Warner, Warner Bros., 1927.

“I think there is a world market for maybe five computers,” Thomas Watson, chairman of IBM, 1943.

“Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night,” Darryl Zanuck, 20th Century Fox, 1946.

“The world potential market for copying machines is 5,000 at most,” IBM executives to the eventual founders of Xerox, 1959.

“There is no reason anyone would want a computer in their home,” Ken Olsen, founder of mainframe-producer Digital Equipment Corp., 1977.

“No one will need more than 637 kb of memory for a personal computer—640K ought to be enough for anybody,” Bill Gates, Microsoft, 1981.

“Next Christmas the iPod will be dead, finished, gone, kaput,” Sir Alan Sugar, British entrepreneur, 2005.

Sometimes predictions about technology fail because they’re overly optimistic—for example, we don’t commute by jetpack yet—but more often predictions fail because we underestimate the ability of inventors.

Arthur C. Clarke, the science fiction writer, identified what he called the “three laws of prediction,” reflecting an optimistic view of ingenuity: 1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong; 2. The only way of discovering the limits of the possible is to venture past them into the impossible; and 3. Any sufficiently advanced technology is indistinguishable from magic.

Clarke was an exception to the rule that predicting future technology is hard. In Wireless World magazine in 1945, he proposed using a set of satellites in geostationary orbit to form a global communications network.

In “The View from Serendip,” published in 1977, Clarke predicted the Internet: “Immediate access in the home via simple computer-type keyboards, and TV displays, to all the world’s great libraries . . . And items needed for permanent reference could be printed off as soon as located on a copying machine—or filed magnetically in the home storage system.”

In the same book, he also forecast email and online news: “Facsimile services whereby letters, printed matter, etc. can be reproduced instantly. The physical delivery of mail and newspapers will thus be largely replaced by the orbital post office, and the orbital newspaper . . .”

As we go into 2010, there are entrepreneurs and technologists doing their best to confound predictions. Or as computer scientist Alan Kay said, “The best way to predict the future is to invent it.”

A year ago, it would have been hard to predict that social networking Web sites would become the new mass media, or that Google would be a mobile-phone brand. Technological advances can be frustrating when almost every industry is being dislocated by fast-moving change. On the other hand, we all benefit from these changes, not least in endless consumer choice.

Even a skeptical column on technology predictions would be incomplete without a few predictions, so here are a couple: The much-anticipated tablet computer from Apple won’t be on sale before March, and Google’s market share for search will remain strong, but not go beyond 85% at the end of 2010.

Disclosure: These predictions look like safe bets because the potential outcomes of these topics are being traded in online betting markets. These markets reflect the wisdom of crowds, which tend to make more accurate predictions than individuals.

L. Gordon Crovitz, Wall Street Journal



OMG, how obsolete am I?

Pete Lalonde, 14, displays some terms used when chatting on the internet, lol (Laugh out loud) and igtg (I’ve got to go).

The problem with trying to sound like you’re still with it is that you’re bound to make an idiot of yourself

I exchange a lot of e-mail with a certain friend of mine, whom I’ll call A. A is quite expressive, and until recently she concluded her e-mails by writing, “LOL, A.”

Eventually she discovered her mistake – but only after sending several condolence messages to a newly bereaved friend. “LOL,” she wrote tenderly, not knowing that LOL generally stands for “laughing out loud.” “I feel terrible,” she groaned. “I always thought it stood for ‘lots of love.’ ”

OMG, I felt so bad for her! But I could relate. For the longest time, I thought it stood for “lots of love,” too.

The problem with trying to sound like you’re still with it is that you’re bound to make an idiot of yourself. People already know you’re not with it, so why remind them?

Another friend of mine was invited to give a talk about her bestselling book to an audience of high-school girls. It was a memoir of her own childhood, so she thought about what part might interest them. She decided to tell the story of how she met Marilyn Monroe when she was 10. Halfway through the story, she realized the faces gazing back at her were politely blank. A teacher waded in to help her out. “Who knows who Marilyn Monroe was?” she asked brightly. Two or three hesitant hands went up. “A rock star?” ventured one.

It’s painful to realize that most of my stock of cultural knowledge is about as relevant as my parents’ stash of Kingston Trio records. So are many of the skills that I used to be so proud of. I am secretly appalled that the next generation will grow up without knowing how to write in cursive, tell time from a clock with hands, do long division, read a map, use a stick shift, hyphenate or spell. But this is just another indication of how obsolete I am. I might as well be horrified because they don’t know how to shoe a horse.

Nothing makes me feel my obsolescence quite so much as chatting to my friends’ kids as they come home for the holidays. Most of them have jobs I find hard to grasp. One is in charge of social marketing for the Olympics, an extremely important responsibility that seems to involve making friends with athletes on Facebook. She told me not to feel bad, because her bosses don’t understand what she does either. They just issue orders and fake it. Another young woman has a highly successful website optimization business. Her sideline is detecting fraudulent online restaurant reviews (e.g., ones posted by friends of the owner) on a major restaurant site. This work is so hush-hush that I’m not even supposed to mention it exists.

The variety of specialties and niches created by the new economy never ceases to amaze me. The other day I had a long chat with someone whose job is to make websites more sticky.* I know someone else who is the head of Brand Experience for a big design firm, which makes a lot of money by helping your local bank to appear more friendly to thirtysomethings. None of these bright young people worry about job security, because none of them have ever had it.

There’s always been a generation gap, of course. But it’s weird to find yourself on the other side of it. One of my friends has a daughter who is organizing her entire wedding (except for the event itself) in cyberspace. Everything else – the life story of the bride and groom, the wedding plans, the gift registry and eventually the wedding album – is online. The bride-to-be was floored when her mom suggested that it might be helpful to go down to Ashley’s and look at a bunch of china – real china – before she picked her pattern. (Although the store is only blocks away from her in Toronto, it had simply not occurred to her to go in person.) Now the bride-to-be is even planning to send out old-fashioned wedding invitations – ink on paper, in the mail, with stamps. Her mom will show her how.

I’m no Luddite. I spent decades changing typewriter ribbons, wrestling with carbon paper and mastering the use of Wite-Out. I don’t miss that labour any more than my great-grandma missed drawing water from the well when they put the plumbing in. Progress is good. Still, I wonder if my great-grandnieces will ever sit around my fake electric hearth as I spin tales of the olden days, when people got lost sometimes, and had to look things up in books to get information. Perhaps I will amaze them with ancient courtship rituals, such as the blind date.

Meantime, I am seriously thinking of getting an iPhone. My friend, A, got one, and she loves it, mostly because it keeps her from getting lost. She is my BBF! (Or is that BFF?) Anyway, my boss thinks I should have one, or at the very least a BlackBerry. She doesn’t know how I get along without it. She is 16 years younger than I am, and she has a book on her desk called Managing the Older Employee. OMG!! She says it’s just a joke. Ha, ha. Or should I say, LOL.

*This means getting people to linger on your website for longer. Stickiness is very good. If you’ve read this far, it means I am sticky too.

Margaret Wente, Globe and Mail



Answering Machine

WHY is the sky blue? Why do cats purr? And why did you get married? If you’re engaging in some year-end reflection, you’re not alone. These are among the mysteries people want explained.

We know this thanks to an “auto-suggest” feature many search engines now use. When you type even a single word into one of these search boxes, you get a list of suggested, presumably popular completions. Enter “Michelle,” for example, and you might get back Obama, Malkin, Pfeiffer.

Suggestions for the word “why” result in questions about the sky, cats and marriage (see above), along with “why do dogs eat grass” and “why do men cheat” and “why is pink the color for girls.” This labor-saving device — part fortuneteller, part shrink? — has opened a window into our collective soul. With millions of people pouring their hearts into this modern-day confessional, we get a direct, if mysterious, glimpse into the heads of our fellow Web surfers.
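Under the hood, auto-suggest reduces to ranked prefix matching: store popular queries with their frequencies, and return the most frequent ones that begin with whatever the user has typed so far. A toy sketch in Python (the query counts below are invented for illustration; real engines are vastly more elaborate):

```python
# Hypothetical logged queries and how often each was typed.
query_counts = {
    "why is the sky blue": 9500,
    "why do cats purr": 7200,
    "why do dogs eat grass": 6800,
    "why do men cheat": 5300,
    "why did you get married": 4100,
}

def suggest(prefix, k=3):
    """Return the k most frequent stored queries starting with prefix."""
    matches = [q for q in query_counts if q.startswith(prefix)]
    return sorted(matches, key=lambda q: -query_counts[q])[:k]

print(suggest("why do"))
# ['why do cats purr', 'why do dogs eat grass', 'why do men cheat']
```

Production systems index billions of logged queries with trie or n-gram structures rather than a flat scan, but the ranking idea is the same.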

A wealth of data can be confusing, but smarter technology can sort it out — whether or not the subject is serious. In the holiday spirit, the two of us looked at some seasonal queries and used software we’d designed to create pictures of the results. In these images, the size of the arrows and words reflect how many pages on the Web answer each question.

For example, take the phrase, “Is Santa Claus …” The biggest arrows indicate the most popular search results. To see how Santa stacks up against, say, the Easter Bunny, we can put two lists side by side.

Take a look at what else we found on one popular search engine. And may all your questions be answered in 2010.

Fernanda Viégas and Martin Wattenberg are research scientists at I.B.M.’s Center for Social Software.



A Deluge of Data Shapes a New Era in Computing

THINKER A collection of essays pays tribute to Jim Gray, a database software engineer who disappeared off the California coast almost three years ago.

In a speech given just a few weeks before he was lost at sea off the California coast in January 2007, Jim Gray, a database software pioneer and a Microsoft researcher, sketched out an argument that computing was fundamentally transforming the practice of science.

Dr. Gray called the shift a “fourth paradigm.” The first three paradigms were experimental, theoretical and, more recently, computational science. He explained this paradigm as an evolving era in which an “exaflood” of observational data was threatening to overwhelm scientists. The only way to cope with it, he argued, was a new generation of scientific computing tools to manage, visualize and analyze the data flood.

In essence, computational power created computational science, which produced the overwhelming flow of data, which now requires a computing change. It is a positive feedback loop in which the data stream becomes the data flood and sculpts a new computing landscape.

In computing circles, Dr. Gray’s crusade was described as, “It’s the data, stupid.” It was a point of view that caused him to break ranks with the supercomputing nobility, who for decades focused on building machines that calculated at picosecond intervals.

He argued that government should instead focus on supporting cheaper clusters of computers to manage and process all this data. This is distributed computing, in which a nation full of personal computers can crunch the pools of data involved in the search for extraterrestrial intelligence, or protein folding.

The goal, Dr. Gray insisted, was not to have the biggest, fastest single computer, but rather “to have a world in which all of the science literature is online, all of the science data is online, and they interoperate with each other.” He was instrumental in making this a reality, particularly for astronomy, for which he helped build vast databases that wove much of the world’s data into interconnected repositories that have created, in effect, a worldwide telescope.

Now, as a testimony to his passion and vision, colleagues at Microsoft Research, the company’s laboratory that is focused on science and computer science, have published a tribute to Dr. Gray’s perspective in “The Fourth Paradigm: Data-Intensive Scientific Discovery.” It is a collection of essays written by Microsoft’s scientists and outside scientists, some of whose research is being financed by the software publisher.

The essays focus on research on the earth and environment, health and well-being, scientific infrastructure and the way in which computers and networks are transforming scholarly communication. The essays also chronicle a new generation of scientific instruments that are increasingly part sensor, part computer, and which are capable of producing and capturing vast floods of data. For example, the Australian Square Kilometre Array of radio telescopes, CERN’s Large Hadron Collider and the Pan-Starrs array of telescopes are each capable of generating several petabytes of digital information each day, although their research plans call for the generation of much smaller amounts of data, for financial and technical reasons. (A petabyte of data is roughly equivalent to 799 million copies of the novel “Moby Dick.”)
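The Moby Dick comparison is easy to sanity-check. Assuming a plain-text copy of the novel runs to roughly 1.25 megabytes (an assumed figure, close to the size of common electronic editions), the arithmetic lands on the order of 800 million copies per petabyte, consistent with the number quoted:

```python
# Back-of-the-envelope check of the petabyte / Moby Dick comparison.
PETABYTE = 10**15           # bytes (decimal definition)
MOBY_DICK = 1.25 * 10**6    # bytes, assumed size of one plain-text copy

copies = PETABYTE / MOBY_DICK
print(f"{copies / 1e6:.0f} million copies")
```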

“The advent of inexpensive high-bandwidth sensors is transforming every field from data-poor to data-rich,” Edward Lazowska, a computer scientist and director of the University of Washington eScience Institute, said in an e-mail message. The resulting transformation is occurring in the social sciences, too.

“As recently as five years ago,” Dr. Lazowska said, “if you were a social scientist interested in how social groups form, evolve and dissipate, you would hire 30 college freshmen for $10 an hour and interview them in a focus group.”

“Today,” he added, “you have real-time access to the social structuring and restructuring of 100 million Facebook users.”

The shift is giving rise to a computer science perspective, referred to as “computational thinking” by Jeannette M. Wing, assistant director of the Computer and Information Science and Engineering Directorate at the National Science Foundation.

Dr. Wing has argued that ideas like recursion, parallelism and abstraction taken from computer science will redefine modern science. Implicit in the idea of a fourth paradigm is the ability, and the need, to share data. In sciences like physics and astronomy, the instruments are so expensive that data must be shared. Now the data explosion and the falling cost of computing and communications are creating pressure to share all scientific data.

“To explain the trends that you are seeing, you can’t just work on your own patch,” said Daron Green, director of external research for Microsoft Research. “I’ve got to do things I’ve never done before: I’ve got to share my data.”

That resonates well with the emerging computing trend known as “the cloud,” an approach being driven by Microsoft, Google and other companies that believe that, fueled by the Internet, the shift is toward centralization of computing facilities.

Both Microsoft and Google are hoping to entice scientists by offering cloud services tailored for scientific experimentation. Examples include Worldwide Telescope from Microsoft and Google Sky, intended to make a range of astronomical data available to all.

Similar digital instruments are emerging in other fields. In one chapter, “Toward a Computational Microscope for Neurobiology,” Eric Horvitz, an artificial intelligence researcher for Microsoft, and William Kristan, a neurobiologist at the University of California, San Diego, chart the development of a tool they say is intended to help understand the communications among neurons.

“We have access to too much data now to understand what’s going on,” Dr. Horvitz said. “My goal now is to develop a new kind of telescope or microscope.”

By imaging the ganglia of leeches being studied in Dr. Kristan’s laboratory, the researchers have been able to identify “decision” cells, responsible for summing up a variety of inputs and making an action, like crawling. Someday, Dr. Horvitz hopes to develop the tool into a three-dimensional display that makes it possible to overlay a set of inferences about brain behavior that can be dynamically tested.

The promise of the shift described in the fourth paradigm is a blossoming of science. Tony Hey, a veteran British computer scientist now at Microsoft, said it could solve a common problem: the poor use of graduate students. “In the U.K.,” Dr. Hey said, “I saw many generations of graduate students really sacrificed to doing the low-level IT.”

The way science is done is changing, but is it a shift of the magnitude that Thomas Kuhn outlined in “The Structure of Scientific Revolutions”?

In his chapter, “I Have Seen the Paradigm Shift, and It Is Us,” John Wilbanks, the director of Science Commons, a nonprofit organization promoting the sharing of scientific information, argues for a more nuanced view of data explosion.

“Data is not sweeping away the old reality,” he writes. “Data is simply placing a set of burdens on the methods and the social habits we use to deal with and communicate our empiricism and our theory.”

John Markoff, New York Times



A Hulu for print

Magazines take on Amazon

Magazines attempt to win back control of their digital editions

LET it never again be said that old-media firms are slow to deal with new technology. On December 8th Condé Nast, Hearst, Meredith, News Corporation and Time Inc invested in an as-yet-unnamed venture that will create and sell digital magazines and newspapers for the new generation of e-readers that is likely to succeed Amazon’s monochrome Kindle in the next year or so. It was as if a group of explorers had announced plans to settle a country that had not yet been discovered.

Consumers can already get hold of many publications on smart-phones and e-readers. But smart-phones have small screens, and e-readers render magazines as crudely illustrated black-and-white books. They cannot reproduce magazines’ distinctive fonts or elegant graphics. Worse, they are unsuited to advertising, on which most magazines depend. In the year to June, Meredith’s publishing arm, which produces Better Homes and Gardens among dozens of other titles, made almost twice as much from advertising as it did from newsstand sales and subscriptions.

Publishers are irked at the prospect of formatting content for multiple devices with slightly different requirements—a problem that will worsen. They are even more irked at the current market leader, Amazon, which returns as little as 30% of the sale price of a digital magazine to publishers and provides less detail about customers’ reading habits than they would like. Publishers who want to go digital currently have a choice between the open internet, which generally provides revenue from advertising (but not much) and no subscriptions, and e-readers, which provide revenue from subscriptions (but not much) and no advertising.

The consortium plans to develop software that can be used to create digital publications for a wide range of devices. It will also set up a storefront similar to iTunes, Apple’s online music outlet. This will not be restricted to the consortium’s publications, nor will it be the only way to get hold of them. Condé Nast is already working with Adobe to develop software of its own for advanced e-readers. Hearst, another member of the consortium, has a start-up called Skiff. How the new venture’s efforts will mesh with these other projects is not yet certain. Yet the destination is clear, says John Squires of Time Inc, who will manage the consortium at first. His company has produced a mock-up of an edition of Sports Illustrated, complete with video and interactive ads, which provides a compelling, if hypothetical, glimpse into the future of magazines.

In important ways the consortium resembles Hulu, an outfit Mr Squires praises as “artful”. Hulu’s website streams television programmes from three of America’s four big English-language broadcasters, as well as a few pay-television shows. It has no sneezing pandas, tedious home-made tirades or any of the other detritus with which YouTube is filled. Hulu is popular with both consumers and companies, which pay stiff rates to place advertisements in its programmes (it helps that Hulu does not yet run many ads). As with the magazine consortium, media companies own equity stakes in Hulu.

This model is spreading. On the very day the publishers agreed to set up their venture, record companies launched a Hulu of sorts for music videos in America. Vevo is partly owned by Universal and Sony and licenses other content from EMI. Although it is run in conjunction with YouTube, it is intended to be a separate, cleaner world. Such is the evolving wisdom for traditional media firms that want to engage with digital technology: put some distance between your content and the dross, and make sure you have a stake in any new outfit that appears.


A cut from CSI

Virtual autopsies

A CT scanner and gaming technology open up a body

Under the skin, virtually

PERFORMING a postmortem on a murder victim can take days, delaying any criminal investigation. Moreover, pathologists sometimes get only one chance to look for clues when dissecting a body. But Anders Persson, director of Linköping University’s centre for medical image science and visualisation in Sweden, hopes to change that. Together with his colleagues Thomas Rydell and Anders Ynnerman of the Norrköping Visualisation Centre, he has created a virtual-autopsy system.

The body to be examined is first scanned using a computed tomography (CT) machine, a process which takes about 20 seconds and creates up to 25,000 images, each one a slice through the body. Different tissues, bodily substances and foreign objects (such as bullets) absorb the scanner’s X-rays in varying amounts. The software recognises these differences and assigns each a density value. These densities are then rendered, with the aid of an NVIDIA graphics card of a type used for high-speed gaming, into a 3-D visualisation of different colours and opacities. Air pockets are shown as blue, soft tissues as beige, blood vessels as red and bone as white. A pathologist can then peel through layers of virtual skin and muscle with the click of a computer mouse.
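The density-to-colour mapping can be illustrated with a simplified sketch. The thresholds below are hypothetical stand-ins, loosely modelled on Hounsfield-style CT values; they are not the actual transfer function used by the Swedish team:

```python
# Illustrative density-to-colour classification for volume rendering.
# Threshold values are assumed for the sketch, not taken from the system.
def classify(density):
    """Map a voxel density value to a (tissue, colour) pair."""
    if density < -200:
        return ("air pocket", "blue")
    elif density < 100:
        return ("soft tissue", "beige")
    elif density < 300:
        return ("blood vessel", "red")
    else:
        return ("bone", "white")

for d in (-500, 40, 200, 1000):
    print(d, classify(d))
```

A real renderer applies such a transfer function to every voxel, assigning colour and opacity, before the graphics card composites the slices into the 3-D view.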

To make the process easier, Dr Persson and his colleagues have also created a virtual autopsy table. This is a large touch-sensitive LCD screen which stands like a table in an operating room, displaying an image of the body. Up to six people can gather around the table and, with a swipe of a finger, remove layers of muscle, zoom in and out of organs and slice through tissue with a virtual knife.

The Swedish police have already used the researchers’ virtual-autopsy technology to investigate nearly 350 cases. It has proved capable of detecting crucial but difficult-to-spot pieces of evidence, such as the angle of a bullet’s trajectory, air pockets in the wrong place in the body and bone fractures in a burns victim. The virtual autopsy can also be used to determine the cause of death in a few hours. And unlike a physical autopsy it does not alter evidence, enabling investigators to revisit a cadaver for additional clues if necessary. Television detectives everywhere will have them soon.



Google search goes real-time

• Messages from social networks to gain prominence
• Image search and translation technologies also unveiled


Google’s vice-president of engineering, Vic Gundotra, introduces the company’s latest advances.

Google has moved to head off some of the threat from young rivals such as Twitter and Facebook by announcing plans to prominently display results from social networking sites in its search pages.

The new development, which the Californian technology giant dubs “real-time search”, aims to bring users more up-to-date information as they scour the web for information. Over the next few days, anybody searching online using Google will see their traditional search results augmented by a string of constantly updating messages drawn from social networks, news sites and blogs.

The move is part of a wider push to make Google’s search index even faster and more up to date, as people increasingly use services like Twitter to transmit information about events as they happen.

Google executive Amit Singhal said that with more information being put on the web every day, it was vital that the company learned how to give users the most relevant results – and as quickly as possible.

“Information is being posted at a pace I have never seen before,” he said. “In this information environment, seconds matter.”

As well as watching for developments on news sites, Google is working closely with Twitter, Facebook and MySpace to include updates from their users – and Singhal said he would not rule out any potential source of up-to-the-second information in the future.

Though executives were keen to use the launch event – which was held near the company’s headquarters in Mountain View, California – as a display of power, it was also intended to quieten growing speculation that an inability to conduct real-time searches could become Google’s Achilles heel.

Some critics have posited that websites like Facebook and Twitter could eventually rival Google, thanks to their ability to tap into the millions of public messages being sent constantly between individuals. That threat comes on top of pressure from more traditional search engines, such as Microsoft’s, which have threatened to forge exclusive deals with some content providers as a way to claw back market share.

Instead, Google has acted to bring those services into the fold, though it would neither confirm nor deny whether there was a financial relationship behind its links with social networking sites. Not everybody thinks the move is make or break for Google, however, even if it gives users more timely information.

“There’s no doubt that it’s good to have,” said Danny Sullivan, a prominent observer of Google’s activities, writing on his SearchEngineLand website. “It’s incredibly difficult to be a leading information source and yet when there’s an earthquake, people are instead turning to Twitter for confirmation faster than traditional news sources on Google can provide.”

The company also used the event to unveil a number of other developments it described as significant technological advances.

These included an experimental program called Google Goggles that allows users to take a photograph of an object or product and ask Google what it is, getting a selection of information back just as if they had conducted a web search on the item in question.

Vic Gundotra, the company’s vice-president of engineering, said there were already more than a billion items stored in the company’s systems and that there were fierce ambitions to make this technology – which has eluded experts for generations – as widely available as possible.

“Today marks the beginning of this journey,” he said. “It’s our goal to be able to visually identify any image.”

Gundotra also showcased a forthcoming translation product which allows users to speak any phrase into a mobile phone and then translate it, almost instantly, into any one of a number of languages. The resulting phrase could then be spoken back by Google through the phone’s speaker, potentially allowing travellers to use any high-end handset as a universal translation device. The first elements of the software should be available to the public in the first quarter of 2010.

The company said such technologies were possible thanks to improvements in speed and power, but added that there were more plans coming soon – and that the ultimate goal was to make searching for information as fast as physically possible.

“It takes one 10th of a second for light to travel around the world,” said Singhal. “At Google we will not be satisfied until that is the only barrier between you and information.”


See on YouTube: Real Time Search



Optimism as Artificial Intelligence Pioneers Reunite

INTELLIGENCE John McCarthy, seated center, who ran the Stanford Artificial Intelligence Laboratory, at a reunion last month with Bruce Buchanan to his left and Vic Scheinman on the right. Standing, from left, are Ralph Gorin, Whit Diffie, Dan Swinehart, Tony Hearn, Larry Tesler and Lynn Quam.

The personal computer and the technologies that led to the Internet were largely invented in the 1960s and ’70s at three computer research laboratories next to the Stanford University campus.

One laboratory, Douglas Engelbart’s Augmentation Research Center, became known for the mouse; a second, Xerox’s Palo Alto Research Center, developed the Alto, the first modern personal computer. But the third, the Stanford Artificial Intelligence Laboratory, or SAIL, run by the computer scientist John McCarthy, gained less recognition.

That may be because SAIL tackled a much harder problem: building a working artificial intelligence system. By the mid-1980s, many scientists both inside and outside of the artificial intelligence community had come to see the effort as a failure. The outlook was more promising in 1963 when Dr. McCarthy began his effort. His initial proposal, to the Advanced Research Projects Agency of the Pentagon, envisioned that building a thinking machine would take about a decade.

Four and a half decades later, much of the original optimism is back, driven by rapid progress in artificial intelligence technologies, and that sense was tangible last month when more than 200 of the original SAIL scientists assembled at the William Gates Computer Science Building here for a two-day reunion.

During their first 10 years, SAIL researchers embarked on an extraordinarily rich set of technical and scientific challenges that are still on the frontiers of computer science, including machine vision and robotic manipulation, as well as language and navigation.

In 1966, the laboratory took up residence in the foothills of the Santa Cruz Mountains behind Stanford in an unfinished corporate research facility that had been intended for a telecommunications firm.

The atmosphere, however, was anything but button-down corporate. The antiwar movement and the counterculture were in full swing, and the lab reflected the widely disparate political views and turmoil of the time. Dr. McCarthy was a committed leftist who would gradually move to the right during the ’60s; Les Earnest, the laboratory’s deputy director, who had worked in government intelligence, would move to the left.

The graduate students soon discovered the building’s attic and took up residence there. Mr. Earnest found a clever way, known in the parlance of the A.I. community as a “hack,” to pay for a sauna in the basement of the building, and because many of the young researchers were devotees of Tolkien’s “Lord of the Rings,” they created a special font in Elvish and used it to identify offices as places from Middle Earth.

The scientists and engineers who worked at the laboratory constitute an extraordinary Who’s Who in the computing world.

Dr. McCarthy coined the term artificial intelligence in the 1950s. Before coming to SAIL he developed the LISP programming language and invented the time-sharing approach to computers. Mr. Earnest designed the first spell-checker and is rightly described as the father of social networking and blogging for his contribution of the finger command that made it possible to tell where the laboratory’s computer users were and what they were doing.

Among others, Raj Reddy and Hans Moravec went on to pioneer speech recognition and robotics at Carnegie Mellon University. Alan Kay brought his Dynabook portable computer concept first to Xerox PARC and later to Apple. Larry Tesler developed the philosophy of simplicity in computer interfaces that would come to define the look and functioning of the screens of modern Apple computers — what is called the graphical user interface, or G.U.I.

Don Knuth wrote the definitive texts on computer programming. Joel Pitts, a Stanford undergraduate, took a version of the Spacewar computer game and turned it into the first coin-operated video game — which was installed in the university’s student coffee house — months before Nolan Bushnell did the same with Atari. The Nobel Prize-winning geneticist Joshua Lederberg worked with Edward Feigenbaum, a computer scientist, on an early effort to apply artificial intelligence techniques to create software to act as a kind of medical expert.

John Chowning, a musicologist, referred to SAIL as a “Socratean abode.” He was invited to use the mainframe computer at the laboratory late at night when the demand was light, and his group went on to pioneer FM synthesis, a technique for creating sounds that transforms the quality, or timbre, of a simple waveform into a more complex sound. (The technique was discovered by Dr. Chowning at Stanford in 1973 and later licensed to Yamaha.)

The laboratory merged with the computer science department at Stanford in 1980, reopened in 2004, and is now enjoying a renaissance. Its trajectory can be seen in the progress made since 1970, when a graduate researcher programmed a robot to automatically follow a white line under controlled lighting conditions at eight-tenths of a mile per hour. Thirty-five years later, a team of artificial intelligence researchers at Stanford would equip a Volkswagen Touareg named Stanley with lasers, cameras and a cluster of powerful computers to drive autonomously for 131 miles over mountain roads in California at an average speed of 19.1 miles per hour to win $2 million in the 2005 Darpa Grand Challenge, a robotic vehicle contest.

“We are a first-class citizen right now with some of the strongest recent advances in the field,” said Sebastian Thrun, a roboticist who is the director of SAIL and was one of the leaders of the Stanley team.

The reunion also gave a hint of what is to come. During an afternoon symposium at the reunion, several of the current SAIL researchers showed a startling video called “Chaos” taken from the Stanford Autonomous Helicopter project. An exercise in machine learning, the video shows a model helicopter making a remarkable series of maneuvers that would not be possible for a human pilot. The demonstration is particularly striking because the pilot system first learned from a human pilot and then was able to extend those skills.

But an artificial intelligence? It is still an open question. In 1978, Dr. McCarthy wrote, “human-level A.I. might require 1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects.”

John Markoff, New York Times



Microsoft, Google Take Maps in New Direction

The battle between Microsoft Corp. and Google Inc. has shifted into new territory: a race to see who can make online maps that make people feel like they’re really there.

After lagging behind Google Maps, Microsoft this week unveiled an overhaul of its Bing Maps Web site that supplements the traditional bird’s eye view of cities and other locations with rich photographs on the ground. In addition to the street-level images pioneered by Google Maps that let people “move” along the roads pictured, Microsoft’s technology stitches together images uploaded by users into three-dimensional photo collages. The technology, called Photosynth, lets users post on Bing Maps interior shots of everything from restaurants to museums to hotels.

Microsoft’s new program lets users upload photos to Bing Maps, such as this picture of a gallery inside New York’s Metropolitan Museum of Art.

The Microsoft technology and similar efforts by Google are further signs that online maps are evolving from a digital version of an atlas into something more akin to a videogame. Both Microsoft, based in Redmond, Wash., and Google, Mountain View, Calif., are experimenting with a variety of tools that make hunting for locations far more immersive.

Having better maps gives Microsoft and Google more than just bragging rights. It also potentially gives the companies that use their Internet maps—such as hotels and restaurants—a new tool for attracting business and standing out from competitors.

“Bing has pushed what Google was doing a step forward,” says Greg Sterling, an analyst with Sterling Market Intelligence.

John Hanke, vice president of Google maps, said Microsoft is playing “catch up” with most of its new map features, pointing out that Google also lets people post images that show up in Google maps in the locations they were shot.

The photo collages on Bing, which Microsoft calls “synths,” go beyond ordinary panoramic images that allow people to pivot around a street scene from a single fixed point. Users can create the synths with a conventional digital camera by snapping dozens or even hundreds of shots of the interior of, say, a furniture store, from a variety of vantage points.

Consumers can then use a free program from Microsoft that stitches the images together in such a way that they can experience a crude simulation of moving around inside the store by clicking around the photo collage with their mouse. Anybody can then make the synth accessible through Bing Maps, represented by a pin icon on the spot where the images were shot.

On the new test version of Bing Maps, a search for the Metropolitan Museum of Art in New York calls up an aerial shot of the museum, from which people can swoop down to a view of the facade of the building from 5th Ave. Microsoft’s Bing street-view images, like Google’s, are taken by company-hired vehicles outfitted with an array of cameras that shoot 360-degree images as the driver cruises around a city.

The outside of the museum on Bing Maps is also festooned with green icons, which people can click to view nearly a dozen synths of the Greek and Roman section and other exhibits, allowing a viewer to examine artwork from different angles and to zoom in on details.

Bill Garrison, a Seattle real estate agent, posted a synth of a property he is selling in the city after a friend at Microsoft helped him shoot the pictures. The synth lets a viewer travel through the home, even allowing a peek at the view through a kitchen window from different angles.

Mr. Garrison says the synth is “much better” than traditional photo panoramas but says the images download too slowly for most house hunters to tolerate. “If it could just be smoothed out and speeded up,” he says.

Blaise Aguera y Arcas, chief architect of Bing Maps, said the performance of the synth feature will be improved. Although Microsoft hasn’t formally begun approaching businesses to do synths of their establishments, he predicts 3D interior shots will eventually “come to be expected” by customers who research restaurants and other places online. Microsoft believes the synth feature is easy enough for amateurs to use.

There are already companies like EveryScape Inc. that specialize in photographing the interior of hotels and other businesses to create immersive images for Web users. Rebecca MacQuarrie, director of marketing at EveryScape, said she wasn’t familiar with Bing’s new synth technology but believes most businesses will favor its professionally photographed environments.

Nick Wingfield, Wall Street Journal



How Google Can Help Newspapers

Video didn’t kill the radio star, and the Internet won’t destroy news organizations. It will foster a new, digital business model.

It’s the year 2015. The compact device in my hand delivers me the world, one news story at a time. I flip through my favorite papers and magazines, the images as crisp as in print, without a maddening wait for each page to load.

Even better, the device knows who I am, what I like, and what I have already read. So while I get all the news and comment, I also see stories tailored for my interests. I zip through a health story in The Wall Street Journal and a piece about Iraq from Egypt’s Al Gomhuria, translated automatically from Arabic to English. I tap my finger on the screen, telling the computer brains underneath that it got this suggestion right.

Some of these stories are part of a monthly subscription package. Some, where the free preview sucks me in, cost a few pennies billed to my account. Others are available at no charge, paid for by advertising. But these ads are not static pitches for products I’d never use. Like the news I am reading, the ads are tailored just for me. Advertisers are willing to shell out a lot of money for this targeting.

This is a long way from where we are today. The current technology—in this case the distinguished newspaper you are now reading—may be relatively old, but it is a model of simplicity and speed compared with the online news experience today. I can flip through pages much faster in the physical edition of the Journal than I can on the Web. And every time I return to a site, I am treated as a stranger.

So when I think about the current crisis in the print industry, this is where I begin—a traditional technology struggling to adapt to a new, disruptive world. It is a familiar story: It was the arrival of radio and television that started the decline of newspaper circulation. Afternoon newspapers were the first casualties. Then the advent of 24-hour news transformed what was in the morning papers literally into old news.

Now the Internet has broken down the entire news package, with articles read individually, reached from a blog or search engine, and abandoned if there is no good reason to hang around once the story is finished. It’s what we have come to call internally “the atomic unit of consumption.”

Painful as this is to newspapers and magazines, the pressure on their ad revenue from the Internet is causing even greater damage. The choice facing advertisers targeting consumers in San Francisco was once between an ad in the Chronicle or the Examiner. Then came Craigslist, making it possible to get local classifieds for free, followed by eBay and specialist Web sites. Now search engines like Google connect advertisers directly with consumers looking for what they sell.

With dwindling revenue and diminished resources, frustrated newspaper executives are looking for someone to blame. Much of their anger is currently directed at Google, which many executives view as getting all the benefit from the business relationship without giving much in return. The facts, I believe, suggest otherwise.

Google is a great source of promotion. We send online news publishers a billion clicks a month from Google News and more than three billion extra visits from our other services, such as Web Search and iGoogle. That is 100,000 opportunities a minute to win loyal readers and generate revenue—for free. In terms of copyright, another bone of contention, we only show a headline and a couple of lines from each story. If readers want to read on they have to click through to the newspaper’s Web site. (The exceptions are stories we host through a licensing agreement with news services.) And if they wish, publishers can remove their content from our search index, or from Google News.

The claim that we’re making big profits on the back of newspapers also misrepresents the reality. In search, we make our money primarily from advertisements for products. Someone types in “digital camera” and gets ads for digital cameras. A typical news search—for “Afghanistan,” say—may generate few if any ads. The revenue generated from the ads shown alongside news search queries is a tiny fraction of our search revenue.

It’s understandable to look for someone else to blame. But as Rupert Murdoch has said, it is complacency caused by past monopolies, not technology, that has been the real threat to the news industry.

We recognize, however, that a crisis for news-gathering is not just a crisis for the newspaper industry. The flow of accurate information, diverse views and proper analysis is critical for a functioning democracy. We also acknowledge that it has been difficult for newspapers to make money from their online content. But just as there is no single cause of the industry’s current problems, there is no single solution. We want to work with publishers to help them build bigger audiences, better engage readers, and make more money.

Meeting that challenge will mean using technology to develop new ways to reach readers and keep them engaged for longer, as well as new ways to raise revenue combining free and paid access. I believe it also requires a change of tone in the debate, a recognition that we all have to work together to fulfill the promise of journalism in the digital age.

Google is serious about playing its part. We are already testing, with more than three dozen major partners from the news industry, a service called Google Fast Flip. The theory—which seems to work in practice—is that if we make it easier to read articles, people will read more of them. Our news partners will receive the majority of the revenue generated by the display ads shown beside stories.

Nor is there a choice, as some newspapers seem to think, between charging for access to their online content or keeping links to their articles in Google News and Google Search. They can do both.

This is a start. But together we can go much further toward that fantasy news gadget I outlined at the start. The acceleration in mobile phone sophistication and ownership offers tremendous potential. As more of these phones become connected to the Internet, they are becoming reading devices, delivering stories, business reviews and ads. These phones know where you are and can provide geographically relevant information. There will be more news, more comment, more opportunities for debate in the future, not less.

The best newspapers have always held up a mirror to their communities. Now they can offer a digital place for their readers to congregate and talk. And just as we have seen different models of payment for TV as choice has increased and new providers have become involved, I believe we will see the same with news. We could easily see free access for mass-market content funded from advertising alongside the equivalent of subscription and pay-for-view for material with a niche readership.

I certainly don’t believe that the Internet will mean the death of news. Through innovation and technology, it can endure with newfound profitability and vitality. Video didn’t kill the radio star. It created a whole new additional industry.

Mr. Schmidt is chairman and CEO of Google Inc.



E-Readers: They’re Hot Now, But the Story Isn’t Over

Books are having their iPod moment this holiday season. But buyer beware: It could also turn out to be an eight-track moment.

While e-reading devices were once considered a hobby for early adopters, Justin Timberlake is now pitching one on prime-time TV commercials for Sony Corp. Meanwhile, Amazon.com Inc.’s Kindle e-reading device has become its top-selling product of any kind. Forrester Research estimates 900,000 e-readers will sell in the U.S. in November and December.

But e-reader buyers may be sinking cash into a technology that could become obsolete. While the shiny glass-and-metal reading gadgets offer some whiz-bang features like wirelessly downloading thousands of books, many also restrict the book-reading experience in ways that trusty paperbacks haven’t, such as limiting lending to a friend. E-reader technology is changing fast, and manufacturers are aiming to address the devices’ drawbacks.

“If you have the disposable income and love technology—not books—you should get a dedicated e-reader,” says Bob LiVolsi, the founder of BooksOnBoard, the largest independent e-book store. But other people might be better off repurposing an old laptop or spending $300 on a cheap laptop known as a netbook to use for reading. “It will give you a lot more functionality, and better leverages the family income,” he says.

For gadget lovers, several factors are converging to make e-reading devices alluring this holiday season. More such devices are debuting than ever to challenge Amazon’s Kindle, notably the Nook from Barnes & Noble Inc. Sony also recently launched three new versions of its Reader, which will be sold—along with devices from smaller makers like Irex Technologies BV—in dedicated e-book sections of Best Buy Co. stores. Already, these devices are beginning to sell out: Barnes & Noble says people who ordered the Nook after Nov. 20 won’t get one until the week of Jan. 4, and Sony says that it can’t guarantee delivery of its high-end wireless Reader by Christmas.

There’s also more selection of books for the devices, with most popular publishers now selling e-books. Meanwhile, library-scanning efforts by Google Inc. are producing more than a million out-of-copyright books like “The Adventures of Tom Sawyer” that people can download free. There are only a few holdouts against e-books, including “Harry Potter” author J.K. Rowling.

Prices for e-book readers are also dropping. Amazon recently cut the price of the international Kindle to $259 from $279, while Sony sells a new entry-level model for $199. A refurbished first-generation Kindle retails on Amazon for $219. Amazon, Barnes & Noble and other bookstores are also discounting prices on best-selling e-book titles to $10 to lure more readers.

Still, it’s unclear how—and on what sort of device—most people will be comfortable reading e-books. Many people seem perfectly happy reading books on their PCs: one reading Web site, which offers millions of amateur and professional works, attracts 50 million readers each month. LibreDigital Inc., a distributor of e-books for publishers, says the overwhelming majority of e-book buyers are women who read e-books on an ordinary computer screen, mostly between 4 p.m. and 11 p.m. A growing number of readers are also perusing books on cellphones.

Most of the current crop of dedicated e-reading devices try to replicate the traditional reading experience with a screen that’s about the size of a paperback novel that displays black-and-white (or, rather, dark grey and light grey) text and graphics. You turn the page by clicking on a button, or using your finger or a stylus to touch the screen. You can buy books online and transfer them to your device with a cable or, on some models, download them directly via a wireless connection. Most e-books, which cost about $10 for popular new titles, are yours at least for the life of your device, though some models let you borrow books for a short period of time from libraries or a friend.

Fans of e-readers acknowledge the devices have their flaws. Dianna Broughton, a 45-year-old stay-at-home mom in Lancaster, S.C., bought a Kindle last year and says she now “reads more, and my kids read more.”

But Ms. Broughton says she can’t recommend the Kindle to people who aren’t technically savvy and might want to purchase their books anywhere other than the Amazon store. That’s because the Kindle doesn’t read copyright-protected files from other bookstores or libraries. It also makes it tough for parents to monitor what their children are reading, if a child has a Kindle that is registered to his parent’s Amazon account.

“The parent’s entire e-book archive is accessible to that child’s Kindle–individual titles can’t be locked out,” says Ms. Broughton. “Parental controls are one of the most wished-for features.” There are technical work-arounds for some of these issues, but they require downloading unofficial software.

Indeed, many e-book readers place limits on how and where consumers can use them. Only the Nook allows people to share some of their books with a friend by wirelessly transmitting them—and even then, you can share each book just once and only for 14 days. And only Sony’s Readers make it easy to check out free books from Overdrive Inc., the e-book service used by many public libraries.

The e-book market is also caught up in a format war, with different companies limiting their devices to certain file types, such as .azw and Mobipocket on the Kindle and .epub and Adobe Digital Editions on the Sony. As a result, there’s no guarantee an e-book bought from one online store will work on devices sold by a competitor.

Sony has tried to differentiate itself in e-books by supporting an open industry standard called Epub and digital-rights-management software from Adobe. Barnes & Noble recently said it will do the same. But Amazon, which dominates the e-reader market, has so far shown no signs of changing from its own proprietary format.

Amazon says it is working on making Kindle books play on more devices, including iPhones, BlackBerrys and PCs.

“Our goal is to create the best possible reading experience for customers,” says Amazon’s vice president of Kindle, Ian Freed. “Along the way, we have figured out that it is pretty important to do that with a range of devices.”

For now, the lack of interoperability in e-books has tripped up readers like Maria Blair, a 61-year-old lab technician in Baltimore. She decided to switch from the Kindle to the Sony Reader last year, because she preferred the weight and feel of the Sony. But now, “I’m not able to read the books I bought for the Kindle on my Sony,” she says.

Future e-book readers may be a lot more interactive. Plastic Logic says it will launch a business-oriented reading device early next year that will offer the largest screen yet (8½ inches by 11 inches), along with tools to help business people manage their documents on the go. And while all of the dedicated e-book readers on the market this holiday season use black-and-white screens, color screens are coming late next year.

Next year, Apple Inc. is also expected to debut a tablet device that can be used for reading, watching movies, surfing the Web and other interactive tasks.

Geoffrey A. Fowler, Wall Street Journal



Amazon’s e-reader doesn’t exactly kindle my passion

Ian Brown cuddles up with the electronic reading device, which arrived in Canada this week, and finds it akin to having sex while wearing multiple condoms: clinical, fiddly, way less fun than a romp through a good old book

Do you ever find yourself all on your own at the end of your branch, thinking: Unless I’m losing my mind, the world is suffering from mass delusion?

I had the experience just the other day, when I finally laid my hands on a Kindle.

Kindle – the article is silent – is Amazon’s e-book reader, the electronic doodad that has every publisher in the world fearing for the future. Kindle was introduced this week in Canada, and, according to a lot of technophiles, will soon render the physical book a has-been. Fewer than half of 1 per cent of all the books sold in North America last year were e-books, but some estimates call for that share to double every year.

Instead of traipsing off to the library or the distant bookstore and lugging home stacks of heavy, expensive, tree-destroying books, Kindle loads books wirelessly in digital form for a fraction of their paperbound price.

“Although Kindle is about the size of a paperback book,” the online manual informed me, neglecting to add that it’s also twice as heavy, “it can store over a thousand digital books, newspapers, blogs and magazines, which are referred to collectively as ‘content.’”

Members of the media are given a demonstration of the new Kindle DX, unveiled at a press conference by Amazon CEO Jeff Bezos at the Michael Schimmel Center for the Arts at Pace University on May 6, 2009, in New York City. The Kindle DX, not yet available in Canada, features a larger 9.7-inch electronic paper display, a built-in PDF reader, auto-rotate capability, and storage for up to 3,500 books.

Kindle was also a joy to use, according to an electronic letter I found on my Kindle, addressed to me, personally, from Jeff Bezos, founder and CEO of Amazon.

“Our top design objective,” wrote Jeff – I feel I can call him Jeff – “was for Kindle to disappear in your hands – to get out of the way – so you can enjoy your reading. We hope you’ll quickly forget you’re reading on an advanced wireless device and instead be transported into that mental realm readers love, where the outside world dissolves, leaving only the author’s stories, words and ideas.”

Having read on Kindle for 24 hours, I’ll say this: If you can forget you’re using one, I’m the next Miss Sweden.

The first thing I did was download Nicholson Baker’s new novel, The Anthologist. I happened to have a hard copy of The Anthologist sitting on my desk, and I wanted to compare the old way of reading with the new.

I turned randomly in the book to page 10, where Baker writes “People are going to feed you all kinds of oyster crackers about iambic pentameter” – except that it wasn’t page 10 on Kindle, it was at “locations 131-38,” which is how Kindle has to number things because its typeface is customizable. You want bigger type? Okay! Fewer words per line? No problem. Unfortunately, that’s also why it’s such a pain in the grasp to find specific stuff on Kindle. A distinctly clitoral toggle switch lets you move around a page and highlight and make notes and save passages – stuff you’d use a pencil to do in a standard book – but it’s fussy and Lilliputian to boot.

In its book form, page 10 of The Anthologist is a left-hand, or verso, page, matched by a right-hand, recto counterpart. The pages sit open like a pair of wings, and the layout has a roomy feel.

On Kindle, there are no left or right pages, and nothing winged: just a cramped three-and-three-eighths-inch-by-four-and-five-eighths-inch rectangle of pale grey, non-reflective, not-hard-to-read but unbeguilingly dull e-ink text in a slim, five-by-sevenish-inch sleeve of metal and white plastic.

There was a gauge at the bottom of the Kindle screen that informed me I was 5 per cent of my way through The Anthologist; in the book I was on page 10 of 243.

Kindle text was easy to read, and I’m sure the proprietary protocol Amazon uses will get more flexible, so that one day it may be possible to buy a Kindle book such as The Anthologist and lend it to a friend, which you can’t do now unless you share your Kindle too.

But in every other way, reading on a Kindle is to reading a book as having sex while wearing (two) condoms is to having sex: It’s still technically intercourse, but doesn’t feel the same. A book feels like a thing that can be passed from hand to hand. Kindle feels convenient. Vague as that sounds, it’s a profound difference.

Design-wise, Kindle’s no triumph. Finding a book in the Kindle Store and clit-toggling up or down to select the title you want and then pressing the toggle and waiting for it to wirelessly load and display on your Kindle is like chopping garlic – fiddly, sticky and definitely not fast.

Kindle isn’t an aesthetic experience, either. Kindle books often don’t display covers, because Kindle is a black-and-white device and can’t always render them. Instead, many Kindle books begin with a title page that runs straight into copyright and half title and table of contents and chapter one without interruption. Is it so strange to like the way physical books announce themselves gradually, each opening page making another step toward the oncoming state of suspension a good book promises – what Bezos called “the mental realm readers love” in his personal letter to me? A Kindle book, on the other hand, comes on like a lap dancer 10 minutes before closing time. It barely says hello.

There are allegedly more than 90 newspapers available on the new Canadian Kindle. I could find only two, one of which was The Globe and Mail – and let me tell you, reading a broadsheet like The Globe and Mail on a napkin-sized Kindle feels like your mind is in prison, with a jailer permitting you one story at a time, one toggle after another, with no pictures, no context, no sense of relative importance, not even much variation in type size.

It was hideous. I thought: I know, I’ll use the text-to-speech feature, which lets Kindle read text aloud, to declaim the Globe’s stories. Unfortunately that was like being in prison with a droid that needed its adenoids out. Kindle was more palatable when I ordered up The Spectator, a London magazine of zesty writing that I would otherwise have to find an international newsstand to purchase.

Amazon claims more than 300,000 titles are available on Kindle. More interesting is what titles aren’t there. I couldn’t find Gabriel Garcia Marquez’s One Hundred Years of Solitude, or J. D. Salinger’s Franny and Zooey, or even something called The Boy in the Moon, by Ian Brown, to name just a few.

Don’t ask me why: Kindle doesn’t say “It’s out of stock” or “No one wants that” or “I think it’s over in gay parenting,” as even the hinkiest and most zombified Indigo clerk will if you push hard enough. Kindle just leaves you slightly fearful that the intellectual firmament you knew has disintegrated. I know Amazon isn’t the ancient library at Alexandria; I can understand that it might not have Thomas Bernhard’s Concrete ready for electronic uptake. But no Philip Roth, at all? Nothing by Raymond Chandler? Not even The Diary of Anne Frank?

Admittedly, we get jumpy every time we change our collective reading habits. It wasn’t until the ninth century that most people began to read books silently, to themselves (as opposed to out loud, in groups); experts immediately began to fret that private reading would turn everyone into a neurasthenic layabout.

So let me tell you one thing Kindle is good for. When I woke for my nightly bout of sleeplessness, I turned to Kindle, to read in the dark. Alas, it wouldn’t turn on – until I realized that it was on, that Kindle doesn’t light up.

So I padded into the spare room and turned on the bedside light, and downloaded the kind of book that is available on Kindle, the kind of book I don’t often get a chance to read – part one of Stephenie Meyer’s massively popular Twilight saga.

I whizzed through four chapters of Twilight on Kindle, inhaling screenfuls of text at a single glance. She’s a very readable writer. But that’s also the secret of Kindle: It’s brilliant for popular stuff, for the kind of genre book that delivers reliable, not-too-radical thrills you can absorb with half your brain elsewhere. Kindle is a marketing gadget that could make the consumption of certain kinds of book more convenient and efficient. Unlike its battery, a life of the mind is not included.

Ian Brown, Globe and Mail


Google allows publishers to limit free content

Google Inc. is allowing publishers of paid content to limit the number of free news articles accessed by people using its Internet search engine, a concession to an increasingly disgruntled media industry.

There has been mounting criticism of Google’s practices from media publishers – most notably News Corp. chairman and chief executive Rupert Murdoch – who argue the company is profiting from online news pages.

In an official blog posted late Tuesday, Josh Cohen, Google’s senior business product manager, said the company had updated its so-called First Click Free program so publishers can limit users to viewing no more than five articles a day without registering or subscribing.

Previously, each click from a user of Google’s search engine would be treated as free.

“If you’re a Google user, this means that you may start to see a registration page after you’ve clicked through to more than five articles on the website of a publisher using First Click Free in a day … while allowing publishers to focus on potential subscribers who are accessing a lot of their content on a regular basis,” Cohen wrote in the post.
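The quota Cohen describes amounts to a per-reader, per-publisher daily counter: five free click-throughs, then a registration page. A minimal sketch of that logic might look like the following (the class and names here are illustrative assumptions, not Google's actual implementation):

```python
# Hypothetical sketch of a First Click Free-style quota: each reader gets
# five free click-throughs per publisher per day before a registration
# page appears. Not Google's real code; just the counting logic described.
from collections import defaultdict
from datetime import date

FREE_ARTICLES_PER_DAY = 5

class FirstClickFree:
    def __init__(self):
        # (reader, publisher, day) -> number of click-throughs so far
        self._clicks = defaultdict(int)

    def allow(self, reader_id: str, publisher: str) -> bool:
        """Record one click-through; return False once the daily quota is spent."""
        key = (reader_id, publisher, date.today())
        self._clicks[key] += 1
        return self._clicks[key] <= FREE_ARTICLES_PER_DAY

gate = FirstClickFree()
# The first five clicks pass; the sixth would show a registration page.
results = [gate.allow("reader-1", "example-paper.com") for _ in range(6)]
print(results)  # → [True, True, True, True, True, False]
```

Because the counter is keyed by day, the quota resets automatically at midnight, which matches the "five articles a day" behavior described in the blog post.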

Murdoch on Tuesday told a Washington D.C. conference that media companies should charge for content and stop news aggregators like Google from “feeding off the hard-earned efforts and investments of others.”

News Corp. already charges for online access to The Wall Street Journal and it plans to expand that to other publications, including British newspapers The Sun and The Times.

A fundamental problem facing the media industry, Murdoch told the U.S. Federal Trade Commission workshop, is that “technology makes it cheap and easy to distribute news for anyone with Internet access, but producing journalism is expensive.”

“Right now there is a huge gap in costs,” he added, referring to news compilation sites like Google.

Cohen stressed that publishers and Google could coexist, with the former able to charge for their content and still make it available via Google under the revamped click program.

“The two aren’t mutually exclusive,” Cohen said on the blog.

“After all, whether you’re offering your content for free or selling it, it’s crucial that people find it,” he added. “Google can help with that.”

Cohen said that Google will also begin crawling, indexing and treating as “free” any preview pages – usually the headline and first few paragraphs of a story – from subscription Web sites.

People using Google would then see the same content that would be shown free to a user of the media site and the stories labelled as “subscription” in Google News.

“The ranking of these articles will be subject to the same criteria as all sites in Google, whether paid or free,” Cohen said. “Paid content may not do as well as free options, but that is not a decision we make based on whether or not it’s free. It’s simply based on the popularity of the content with users and other sites that link to it.”



Open Source as a Model for Business Is Elusive

In many ways, MySQL embodies the ideals of the populist software movement known as open source, in which a program’s creator releases it to the world free of charge, and legions of volunteers contribute improvements that are also freely shared.

The start-up company came out of nowhere, building a database application beloved by vibrant, young Internet companies. Logging in from homes scattered around the globe, its workers seemed more a part of a virtual commune than a corporate monolith, and they relished taking on proprietary software giants like Microsoft.

But like most open-source companies, MySQL’s sales, tied to support deals, never matched the astronomical number of downloads for its product, about 60,000 a day. In January 2008, the founders decided to sell the company for $1 billion to Sun Microsystems. And this year, Sun agreed to sell itself to Oracle, which makes database software aimed at larger companies and tougher jobs, for $7.4 billion.

Now, disagreement over the value of MySQL — both as a stand-alone entity and as part of a big company — lies at the heart of a bitter public battle between Oracle and the European Union over the Sun acquisition. The fight illuminates a larger truth about open-source companies: their societal and strategic importance far exceeds their financial value as operating businesses.

European regulators view MySQL as sort of a database of the people, a low-cost alternative to Oracle’s costly proprietary products. The regulators worry that Oracle may stop improving MySQL in favor of protecting its core traditional products, and customers will lose an important option in the database market.

Neelie Kroes, Europe’s competition commissioner, wants open-source software to be available.

“In the current economic context, all companies are looking for cost-effective I.T. solutions, and systems based on open-source software are increasingly emerging as viable alternatives to proprietary solutions,” said the European Commission’s competition chief, Neelie Kroes, in a recent statement. “The commission has to ensure that such alternatives would continue to be available.”

Oracle, meanwhile, insists that it will continue to develop MySQL and other Sun technologies. Oracle’s chief executive, Lawrence J. Ellison, contends that MySQL serves a different part of the database market than Oracle’s main products do — an assessment supported by many analysts. One main incentive for Oracle to keep improving MySQL is that the program serves as a bulwark against Microsoft’s SQL Server database, which challenges Oracle’s products on the low end.

“The commission’s statement of objections reveals a profound misunderstanding of both database competition and open source dynamics,” Oracle said in a statement.

To Ms. Kroes’s point, there is an open-source alternative, and usually a pretty good one, to just about every major commercial software product. In the last decade, these open-source wares have put tremendous pricing pressure on their proprietary rivals. Governments and corporations have welcomed this competition.

Whether open-source firms are practical as long-term businesses, however, is a much murkier question.

The best-known open-source company is Red Hat, which produces a variant of the Linux operating system for server computers. Like most of its peers, Red Hat offers a free version of its base product and relies on selling support services and extra tools for revenue. In its last fiscal year, which ended in March, the company’s revenue rose 25 percent to $653 million, and it reported net income of $79 million.

But Red Hat is a rare case. “There’s only one company making real money out of open source, and that’s Red Hat,” said Simon Crosby, the chief technology officer at Citrix Systems, which acquired the open-source software maker XenSource for $500 million in 2007. “Everyone else is in trouble.”

The enduring appeal of open-source software revolves more around its disruptive nature than blockbuster sales.

As long as there has been software, there have been some people eager to share and improve it for the common good. The rise of the Internet made such sharing easier than ever, enabling people the world over to work together on projects outside the confines of a formal corporate structure.

Open-source software has thrived and played a prominent role in the building of the Internet’s infrastructure. Many companies rely on Linux-based computers and Apache Web server software to display their Web pages. Similarly, the Mozilla Firefox Web browser has emerged as the most formidable competitor to Microsoft’s Internet Explorer.

The grass-roots nature of open source has led advocates to view the projects as a populist foil to proprietary software, where a company keeps the inner workings of its applications secret.

But in the last decade, open-source software has become more of a corporate affair than a people’s revolution.

In some cases, dominant technology companies have used open-source projects as pawns. Google, for example, has needled Microsoft by providing financial support to the nonprofit Mozilla Foundation, which oversees the development of Firefox. I.B.M. has been a major backer of Linux, helping to raise it as a competitor to Microsoft’s Windows and other proprietary operating systems.

Many of the top open-source developers are anything but volunteers tinkering in their spare time. Companies like I.B.M., Google, Oracle and Intel pay these developers top salaries to work on open-source projects and further the companies’ strategic objectives.

In the last three years, there have been five big acquisitions in which a major technology company bought an up-and-coming open-source company for many times its annual revenue. Sun, for example, bought MySQL for about 10 times its revenue, while Citrix bought XenSource for more than 150 times its revenue, according to people familiar with the companies’ sales.

Most recently, VMware, the leading maker of virtualization software, bought SpringSource for $420 million, or about 20 times its annual sales.

“A lot of these guys were getting close to an I.P.O., but they elected to go the acquisition route instead,” said Michael Olson, the chief executive of Cloudera, an open-source start-up. “A lot of open-source firms are one-product companies, and it’s hard to build a long-term, successful business that way.”

The larger technology companies have tended to buy these one-trick ponies for strategic purposes. With its core server business declining, Sun hoped it could piggy-back on MySQL’s momentum with Internet companies. In SpringSource, VMware acquired a company that had cultivated deep interest among software developers and helped VMware diversify beyond its virtualization roots.

“VMware took into consideration that which money can’t buy, which is a critical mass of adoption,” said Peter Fenton, a venture capitalist at Benchmark Capital, who has been involved in some fashion with many of the large open-source deals. “SpringSource’s main product was the equivalent of a best-selling novel.”

Citrix took perhaps the biggest risk of all, paying a huge premium for XenSource in the hopes of disrupting VMware’s position in the virtualization market.

“I don’t think Citrix would ever say it paid too much,” Mr. Crosby said. “Citrix leaped to the forefront of a whole software category. The ability to talk credibly about virtualization is worth a huge amount in its own right.”

Meanwhile, the ideal of an independent open-source giant has faded.

Mr. Fenton said that many open-source advocates had once hoped Red Hat would scoop up the top open-source start-ups, keeping these crown jewels out of the hands of proprietary software makers. But the company failed to go after other open-source companies initially and later could not afford to pay the high prices offered by larger companies.

“You could make the case there was a window of opportunity to do that three to five years ago,” Mr. Fenton said. “That opportunity has gone away. And it’s hard to put Humpty Dumpty back together again now.”

Ashlee Vance, New York Times



Feeling in the dark

Computer mice for the blind

A tactile mouse helps blind people to use the internet

COMPUTERS have become such an integral part of life, in the rich world at least, that even social networking is done online. The blind, however, are often excluded from such interactions. Now a system has been developed to make it easier for blind people to navigate the internet, use word-processing software and even trace the shapes of graphs and charts. Its inventors hope it will enable more blind people to work in offices.

The system developed by staff at Tactile World, an Israeli company, uses a device that looks similar to a conventional computer mouse. On its top, however, it has two pads, each with 16 pins arranged in a four-by-four array. Software supplied with the mouse translates text displayed on the screen into Braille.

In traditional Braille, numbers and letters are represented by raised bumps in the paper of the page being read. The pins on the mouse take the role of these bumps. As the cursor controlled by the mouse is moved across the screen, the pins rise and fall to represent the text across which they are moving. One pad represents the character under the cursor, the other gives the reader information about what is coming next, such as whether it is a letter or the end of the word. This advance information makes interpretation easier. As the user reads the text, the system also announces the presence of links to other websites. And the user can opt, if he wishes, to have the computer read the whole text out loud.
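The article does not say how Tactile World encodes a Braille cell on its 16-pin pads, but the mapping it describes can be sketched in a few lines of Python. The dot table, the grid layout (the standard 6-dot cell is assumed to sit in the top-left corner of the 4x4 array) and all names below are illustrative, not taken from the company's software:

```python
# Sketch: translating a character into pin states for a 4x4 tactile pad.
# Standard 6-dot Braille numbers its dots 1-3 down the left column and
# 4-6 down the right; we assume the cell occupies the top-left 3x2
# corner of the 16-pin array.

BRAILLE_DOTS = {  # dots raised for each letter (partial table)
    'a': {1}, 'b': {1, 2}, 'c': {1, 4}, 'd': {1, 4, 5}, 'e': {1, 5},
    'f': {1, 2, 4}, 'g': {1, 2, 4, 5}, 'h': {1, 2, 5},
}

# dot number -> (row, column) inside the cell
DOT_POS = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (0, 1), 5: (1, 1), 6: (2, 1)}

def pin_grid(ch):
    """Return a 4x4 grid of booleans: True means the pin is raised."""
    grid = [[False] * 4 for _ in range(4)]
    for dot in BRAILLE_DOTS.get(ch, set()):
        r, c = DOT_POS[dot]
        grid[r][c] = True
    return grid

def show(grid):
    """Render the pad as text: 'o' for a raised pin, '.' for a lowered one."""
    return '\n'.join(''.join('o' if p else '.' for p in row) for row in grid)

print(show(pin_grid('d')))
```

For the letter "d" (dots 1, 4 and 5) the sketch raises the two top pins and the middle-right pin of the cell, leaving the rest of the pad flat.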

The mouse’s software has an “anchor” feature, to hold onto the line of text that is being read. Alternatively, a user can click a button on the mouse and the text will scroll along and run under his fingers without him having to move the device.

When he encounters a graph, map or other such figure, the pins rise when the mouse is on a line. The number of pins raised reflects the thickness of the line. If he strays from the line, the pins fall. He is thus able to trace, say, the curve of a graph or the border of a country. More complex diagrams can also be interpreted. Dark areas of maps, for example, can be represented by raising all the pins, while light areas are places where all the pins are dropped.

Not only is the tactile mouse more advanced than existing technologies for blind people, it is also cheaper than conventional Braille readers, which plug into a computer and typically display 40 Braille characters at a time. The tactile mouse costs $695, rather than $3,500-8,000 for a Braille reader.



Web-wide war

Bing and online newspapers

Microsoft opens a new front in its battle with Google

EVEN technology pundits can sometimes be right. Jason Calacanis, an entrepreneur and noted agent provocateur, recently argued that there is a simple solution to the woes of both Microsoft and big media companies. The world’s largest software firm should pay Time Warner, News Corporation and other firms to block Google, the search giant, from indexing their content—and make it searchable exclusively through Bing, Microsoft’s new search service. Media companies would thus get badly needed cash and Bing a chance to gain market share from Google.

This week it emerged that Microsoft and News Corp are talking about just that. Although the discussions may come to naught, or prove a mere ploy in the media firm’s ongoing negotiations with Google, the news caused a stir. It is a sign not only of how far Microsoft is willing to go in order to turn Bing into a serious rival to Google, but also of how the entire internet could well evolve.

It should come as no surprise that News Corp would be the first to discuss such a deal. Rupert Murdoch, its boss, has long criticised Google for “stealing” his newspapers’ stories by pasting links to them on Google’s own site. He has also announced loudly and often that he wants to charge for more of the content that his firm puts online. What is more, he needs to renegotiate the deal that in 2006 gave Google the exclusive right to place search ads on MySpace, a social network owned by News Corp. Back then Google agreed to shell out $900m over three years for the privilege, although it may in the end pay less, as traffic on MySpace has not met the targets specified.

Google is unlikely to want to pay such a high price again, given the declining traffic and thus disappointing advertising revenues. Google also knows that Mr Murdoch will think twice before blocking the biggest source of traffic for his newspapers’ websites. More than a quarter of all visitors to the Wall Street Journal’s site, for instance, come from Google, which is in line with most other newspapers, according to Hitwise, a market-research firm.

Microsoft, for its part, cannot afford to let Google rule the search business and, by extension, a big part of the online advertising that is expected to pay for many services in the age of cloud computing. In recent years the firm has invested billions in its search capabilities. With Bing, it has at last come up with a plausible alternative, which works better than Google for some searches, such as comparing prices of consumer electronics or looking for cheap flights. To boost Bing’s market share, Microsoft in July agreed with Yahoo!, another online giant, to merge both firms’ search activities.

Yet all this may not be enough. Since its launch in June, Bing’s market share has grown by two percentage points to nearly 10% of all searches in America, but Yahoo!’s has dropped by the same figure to 18%. Exclusive content deals may just be what Microsoft needs to reach a combined 30%, which some experts see as the minimum to make a dent in Google’s business. Microsoft appears ready to spend whatever is needed: up to 10% of the company’s overall operating income over the next five years, according to Steve Ballmer, the firm’s boss. This would, all things being equal, add up to some $11 billion.

Yet what looks like good news for media firms is rather worrisome for champions of an open internet. To them, exclusive content deals are another big step away from an online world with few borders, where everybody plays according to the same rules. Already, they say, Apple dictates which applications are allowed to run on the iPhone, Facebook tries to discourage members from surfing elsewhere, and Google’s navigation software is only free for users of its own operating system for smart-phones. “We’re heading into a war for the control of the web—and against the web as an interoperable platform”, warns Tim O’Reilly, the internet guru who coined the term “web 2.0”.

Mr O’Reilly is definitely on to something. The question, however, is whether this “war 2.0” is really so unwelcome. A handful of well-funded and robust platforms locked in heated competition could be better for consumers and generate more innovation than Mr O’Reilly’s vision of an internet made of many “small pieces loosely joined”.

The Economist



Some Courts Raise Bar on Reading Employee Email

Companies Face Tougher Tests to Justify Monitoring Workers’ Personal Accounts; Rulings Hinge on ‘Expectation of Privacy’

Big Brother is watching. That is the message corporations routinely send their employees about using email.

But recent cases have shown that employees sometimes have more privacy rights than they might expect when it comes to the corporate email server. Legal experts say that courts in some instances are showing more consideration for employees who feel their employer has violated their privacy electronically.

Driving the change in how these cases are treated is a growing national concern about privacy issues in the age of the Internet, where acquiring someone else’s personal and financial information is easier than ever.

“Courts are more inclined to rule based on arguments presented to them that privacy issues need to be carefully considered,” said Katharine Parker, a lawyer at Proskauer Rose who specializes in employment issues.

In past years, courts showed sympathy for corporations that monitored personal email accounts accessed over corporate computer networks. Generally, judges treated corporate computers, and anything on them, as company property.

Now, courts are increasingly taking into account whether employers have explicitly described how email is monitored to their employees.

That was what happened in a case earlier this year in New Jersey, when an appeals court ruled that an employee of a home health-care company had a reasonable expectation that email sent on a personal account wouldn’t be read.

And last year, a federal appeals court in San Francisco came down on the side of employee privacy, ruling employers that contract with an outside business to transmit text messages can’t read them unless the worker agrees. The ruling came in a lawsuit filed by Ontario, Calif., police officers who sued after a wireless provider gave their department transcripts of an officer’s text messages in 2002. The case is on appeal to the U.S. Supreme Court.

Lawyers for corporations argue that employers are entitled to take ownership of the keystrokes that occur on work property. In addition, employers fear productivity drops when workers spend too much time crafting personal email messages.

“Employers are right to expect their employees when they are paid for their time at work are actually working,” said Jane McFetridge, a lawyer who handles employment issues for the Chicago office of Jackson Lewis.

Many workers log in to personal email accounts from the office. In a 2009 study by the Ponemon Institute, a Traverse City, Mich.-based data-security research firm, 52% of employees surveyed said they access their personal email accounts from their work computer. Of those individuals, 60% said they send work documents or spreadsheets to their personal email addresses.

Data security experts say such actions could invite viruses or security leaks.

More corporations are monitoring employees’ email traffic. In a June survey of 220 large U.S. firms commissioned by Proofpoint Inc., a provider of email security and data loss prevention services, 38% of companies said they employ staff to read or otherwise analyze the content of outgoing email, up from 29% last year. More companies also say they are worried about information leaks: Thirty-four percent of respondents said their businesses had been affected by the exposure of sensitive or embarrassing information, up from 23% in 2008.

The growing concerns about security and privacy come as expanding technology muddies the waters between personal and professional.

“Computers are becoming recognized as being so much a part of the ongoing personal as well as professional life of employees and everyone else that courts are more sympathetic all the time to granting greater recognition to privacy,” said Floyd Abrams, a First Amendment attorney at Cahill Gordon & Reindel LLP. Employees often assume their communications on personal email accounts should stay private even if they are using work-issued computers or smart phones. But in most instances when using a work device, emails of all kinds are captured on a server and can be retrieved by an employer.

Still, in some cases courts are finding that unless an employer has explicitly told employees that it will monitor email, it doesn’t have the legal right to do so, even if the email in question was a personal one sent using a work account rather than a personal address.

In a case earlier this year in New Jersey, a worker on the brink of resigning from her job at the Loving Care Agency Inc. used a personal, password-protected Yahoo account on a work laptop to email her lawyer to hash out the details of a workplace discrimination suit she was planning to file against the agency. After the employee, Marina Stengart, left her job and filed suit, her employer extracted the emails from the hard drive of her computer laptop.

A lower court found that the emails from Ms. Stengart were company property, because the company’s internal policies had put her on sufficient notice that her emails would be viewed.

But a New Jersey appellate court disagreed, ruling in her favor in June and ordering the company to turn over the emails to Ms. Stengart and delete them from its hard drives. The court’s ruling went so far as to dissect the company’s internal policies about employee communications and decided they offered “little to suggest that an employee would not retain an expectation of privacy in such [personal] emails.”

“We reject the employer’s claimed right to rummage through and retain the employee’s emails to her attorney,” the appellate court ruling said.

Loving Care, which declined to comment, has appealed the ruling. The case is pending in the New Jersey Supreme Court.

In another case this year, Bonnie Van Alstyne, a former vice president of sales and marketing at Electronic Scriptorium Ltd., a data-management company, was in the thick of a testy legal battle in Virginia state court with the company over employment issues when it came to light that her former boss had been accessing and reading her personal AOL email account. The monitoring went on for more than a year, continuing after Ms. Van Alstyne left the company. Ms. Van Alstyne sometimes used her personal email account for business purposes, and her supervisor said he was concerned that she was sharing trade secrets.

The supervisor, Edward Leonard, had accessed her account “from home and Internet cafes, and from locales as diverse as London, Paris, and Hong Kong,” according to legal filings in the case.

Ms. Van Alstyne sued Mr. Leonard and the company for accessing her email without authorization. A jury sided with her, and the case eventually settled.

Nicholas Hantzes, a lawyer for the company and Mr. Leonard, said employers could learn from the case that to avoid legal tangles they “should do everything they can to discourage employees from using personal email for business purposes.”

Dionne Searcey, Wall Street Journal



Volunteers Log Off as Wikipedia Ages

Wikipedia is the fifth-most-popular Web site in the world, with roughly 325 million monthly visitors. But unprecedented numbers of the millions of online volunteers who write, edit and police it are quitting.

That could have significant implications for the brand of democratization that Wikipedia helped to unleash over the Internet — the empowerment of the amateur.

Volunteers have been departing the project that bills itself as “the free encyclopedia that anyone can edit” faster than new ones have been joining, and the net losses have accelerated over the past year. In the first three months of 2009, the English-language Wikipedia suffered a net loss of more than 49,000 editors, compared to a net loss of 4,900 during the same period a year earlier, according to Spanish researcher Felipe Ortega, who analyzed Wikipedia’s data on the editing histories of its more than three million active contributors in 10 languages.

Eight years after Wikipedia began with a goal to provide everyone in the world free access to “the sum of all human knowledge,” the declines in participation have raised questions about the encyclopedia’s ability to continue expanding its breadth and improving its accuracy. Errors and deliberate insertions of false information by vandals have undermined its reliability.

Executives at the Wikimedia Foundation, which finances and oversees the nonprofit venture, acknowledge the declines, but believe they can continue to build a useful encyclopedia with a smaller pool of contributors. “We need sufficient people to do the work that needs to be done,” says Sue Gardner, executive director of the foundation. “But the purpose of the project is not participation.”

Indeed, Wikipedia remains enormously popular among users, with the number of Web visitors growing 20% in the 12 months ending in September, according to comScore Media Metrix.

Wikipedia contributors have been debating widely what is behind the declines in volunteers. One factor is that many topics already have been written about. Another is the plethora of rules Wikipedia has adopted to bring order to its unruly universe — particularly to reduce infighting among contributors about write-ups of controversial subjects and polarizing figures.

“Wikipedia is becoming a more hostile environment,” contends Mr. Ortega, a project manager at Libresoft, a research group at the Universidad Rey Juan Carlos in Madrid. “Many people are getting burnt out when they have to debate about the contents of certain articles again and again.”

Wikipedia’s struggles raise questions about the evolution of “crowdsourcing,” one of the Internet era’s most cherished principles. Crowdsourcing posits that there is wisdom in aggregating independent contributions from multitudes of Web users. It has been promoted as a new and better way for large numbers of individuals to collaborate on tasks, without the rules and hierarchies of traditional organizations.

But as it matures, Wikipedia, one of the world’s largest crowdsourcing initiatives, is becoming less freewheeling and more like the organizations it set out to replace. Today, its rules are spelled out across hundreds of Web pages. Increasingly, newcomers who try to edit are informed that they have unwittingly broken a rule — and find their edits deleted, according to a study by researchers at Xerox Corp.

“People generally have this idea that the wisdom of crowds is a pixie dust that you sprinkle on a system and magical things happen,” says Aniket Kittur, an assistant professor of human-computer interaction at Carnegie Mellon University who has studied Wikipedia and other large online community projects. “Yet the more people you throw at a problem, the more difficulty you are going to have with coordinating those people. It’s too many cooks in the kitchen.”

Wikipedia founder Jimmy Wales, who is chairman emeritus of the foundation, acknowledges participation has been declining. But he says it still isn’t clear to him what the “right” number of volunteer “Wikipedians” should be. “If people think Wikipedia is done,” he says, meaning that with three million articles it is hard to find new things to write about, “that’s substantial. But if the community has become more hostile to newbies, that’s a correctable problem.”

Mr. Wales says his top priority is to improve the accuracy of Wikipedia’s articles. He’s pushing a new feature that would require top editors to approve all edits before they are displayed on the site. The idea is to prevent the kind of vandalism that in January declared Sen. Edward Kennedy’s death months before his actual passing.

Jimmy Wales, founder of the online encyclopedia, which is written and edited by volunteers.

Mr. Wales, a onetime options trader in Chicago, founded Wikipedia in 2001 amid frustration that his effort to create an online encyclopedia was hampered by the slow pace of copy-editing and getting feedback from experts. He saw Wikipedia as a side project — a radical experiment with software that allows multiple people to edit the same Web page. The term “wiki” comes from the Hawaiian word for fast.

The collaborative software fostered a unique form of online governance. One of Wikipedia’s principles is that decisions should be made by consensus-building. One of the few unbreakable rules is that articles must be written from a neutral point of view. Another is that anyone should be able to edit most articles. One policy serves as a coda: “Ignore all rules.”

The Wikimedia Foundation employs a staff of 34, mostly in San Francisco, to run the site’s computers, guide its planning and serve as its public face. In its fiscal year ended in June, it reported expenses of $5.6 million. It funds its operations mostly through donations. Earlier this month, it launched a campaign to raise $7.5 million from users.

Wikipedia’s popularity has strained its consensus-building culture to the breaking point. Wikipedia is now a constant target for vandals who spray virtual graffiti throughout the site — everything from political views presented as facts to jokes about their friends — and spammers who try to insert marketing messages into articles.

In 2005, journalist John Seigenthaler Sr. wrote about his own Wikipedia write-up, which unjustly accused him of murder. The resulting bad press was a wake-up call. Wikipedians began getting more aggressive about patrolling for vandals and blocking suspicious edits, according to Andrew Lih, a professor at the University of Southern California and a regular Wikipedia contributor.

That helped transform the site into a more hierarchical society where volunteers had to negotiate a thicket of new rules. Wikipedia rolled out new antivandalism features, including “semiprotection,” which prevents newcomers from editing certain controversial articles.

“It was easier when I joined in 2004,” says Kat Walsh, a longtime contributor who serves on Wikimedia’s board of trustees. “Everything was a little less complicated…. It’s harder and harder for new people to adjust.”

In 2008, Wikipedia’s editors deleted one in four contributions from infrequent contributors, up sharply from one in 10 in 2005, according to data compiled by social-computing researcher Ed Chi of Xerox’s Palo Alto Research Center.

Nina Paley, a New York cartoonist who calls herself an “information radical,” had no luck when she tried to post her syndicated comic strips from the ’90s. She does not copyright her artwork but instead makes money on ancillary products and services, making her a natural fit for Wikipedia’s free-content culture.

It took her a few days to decipher Wikipedia’s software. “I figured out how to do it with this really weird, ugly code,” she says. “I went to bed feeling so proud of myself, and I woke up and found it had been deleted because it was ‘out of scope.’”

A Wikipedia editor had decided that Ms. Paley’s comics didn’t meet the criteria for educational art. Another editor weighed in with questions about whether she had copyright permission for the photo of herself that she uploaded. She did.

Ultimately, it was decided that Ms. Paley’s comics were suitable for the site. Samuel Klein, a veteran Wikipedian who serves on the board of trustees, intervened and restored her contributions. Mr. Klein says experiences like Ms. Paley’s happen too often, and that the Wikipedia community needs to rein in so-called deletionists: editors who shoot first and ask questions later.

“Wikipedians” from around the world gathered in August at the annual Wikimania conference in Buenos Aires

The Wikimedia Foundation says it is seeking to increase participation, but that growing the overall number of participants isn’t its main focus.

“The early days were a gold rush,” says Ms. Gardner, the foundation’s executive director. “They attracted lots and lots of people, because a new person could write about anything.” The encyclopedia isn’t finished, she says, but the “easy work” of contributing is done.

To attract new recruits to help with the remaining work, Ms. Gardner has hired an outreach team, held seminars to train editors in overlooked categories, and launched task forces to seek ways to increase participation in markets such as India. The foundation also invested $890,000 in a new design for the site, slated to go live in the next few months, that aims to make editing easier for contributors who aren’t computer-savvy.

She says increasing contributor diversity is her top goal. A survey the foundation conducted last year determined that the average age of an editor is 26.8 years, and that 87% of them are men.

Much of the task of making Wikipedia more welcoming to newcomers falls to Frank Schulenburg, the foundation’s head of public outreach. An academic, he began contributing to articles about French philosophers on the German Wikipedia in 2005.

“The community has created its own language, and that is certainly a barrier to new participants,” he says.

One of Mr. Schulenburg’s first projects, called the “bookshelf,” is an effort to gather the basic rules for contributing to Wikipedia in one place for newcomers. He hopes the new multimedia bookshelf will be the Wikipedia community’s equivalent of a high-school civics textbook.

In Germany, to recruit more academics, Mr. Schulenburg devised an educational program called Wikipedia Academy. In July, he conducted the first such program in the U.S., for scientists and administrators at the National Institutes of Health in Bethesda, Md. His goal was to entice the scientists to contribute.

Wikipedia already attracts lots of academics, but science isn’t its strength. By its own internal grading standards, the article on Louis Pasteur, one of the founders of microbiology, for example, is lower in quality than its article on James T. Kirk, the fictional “Star Trek” captain.

For the July event, Mr. Schulenburg got about 100 scientists and NIH staffers to spend the day listening to arguments about why they should bother contributing to Wikipedia, despite the fact that it doesn’t pay, won’t help them get a grant or even win them applause from their peers.

His audience was skeptical about the lack of credentials among Wikipedia editors. “One of my concerns is not knowing who the editor is,” said Lakshmi Grama, a communications official from the National Cancer Institute.

Several participants started contributing to Wikipedia right after the event. The NIH says it is considering whether to adopt formal policies to encourage its staff to contribute while at work.

Each year, Wikipedians from around the world gather at a conference they call Wikimania. At this year’s meeting in Buenos Aires in August, participants at one session debated the implications of the demographic shifts.

“The number one headline I have been seeing for five years is that Wikipedia is dying,” said Mathias Schindler, a board member of Wikimedia Germany. He argued that Wikipedia needed to focus less on the total number of articles and more on “smarter metrics” such as article quality.

He said he disagreed with dire views about the project’s future. “I don’t expect to see Wikipedia follow the rule of any curve or any projection.”

Julia Angwin and Geoffrey A. Fowler, Wall Street Journal



Swiss take Google to court over Street View

The Swiss data protection watchdog is taking Google to the country’s Federal Administrative Court over an alleged failure to protect people’s privacy on its Street View website, two months after launching the service in Switzerland.

Hanspeter Thür, the federal data protection and information commissioner, said Google had not done enough to make faces and vehicle number plates unrecognisable on the service, which provides panoramic, street-level photos.

He has filed a motion seeking to freeze any expansion of Google’s activities under a temporary injunction. This would prevent Google from taking any further photography but would not require it to shut down the service entirely.

It is the first time that Google has faced a lawsuit from a government agency over Street View. Privacy regulators in a number of countries, including Italy, Germany and Japan, have raised concerns about the service but Google has been able to negotiate measures that have reassured them.

In Japan, it agreed to lower the height of the cameras taking pictures of the streets by 40cm to ensure they did not take images of people’s private gardens, while in Germany it agreed to erase the raw, identifiable photos of people and property from its system if individuals requested it.

Google also faced a private lawsuit in the US over the service, which was ultimately dismissed.

Google met Swiss data protection authorities in the run-up to launching Street View in Switzerland in September and was initially given the green light. But Mr Thür later became critical of the service, saying Google had failed to impose promised measures to improve privacy.

He said on Friday that Google had given the authority incomplete information.

“Google announced that it would primarily be filming urban centres, but then put comprehensive images of numerous towns and cities on the internet. In outlying districts, where there are far fewer people on the streets, the simple blurring of faces is no longer sufficient to conceal identities,” said Mr Thür.

Peter Fleischer, Google’s global privacy counsel, said: “We were very disappointed that the DPA [data protection agency] has said he will take this to court. We believe this is unnecessary and Street View is completely legal. We will contest any case vigorously.”

Google said Street View was very popular in Switzerland, with more than 80 per cent of users saying they found it useful.

Fewer than one in 20,000 views had resulted in a request for additional blurring of faces or car licence plates, the company added.



I’m Innocent. Just Check My Status on Facebook.


Rodney Bradford used Facebook to provide an alibi in a robbery case.

The message on Rodney Bradford’s Facebook page, posted at 11:49 a.m. on Oct. 17, asked where his pancakes were. The words were typed from a computer in his father’s apartment in Harlem.

At the time, the sentence, written in street slang, was just another navel-gazing, cryptic Facebook status update — meaningless to anyone besides Mr. Bradford. But when Mr. Bradford, 19, was arrested the next day as a suspect in a robbery at the Farragut Houses in Brooklyn, where he lives, the words took on greater importance. They became his alibi.

His defense lawyer, Robert Reuland, told a Brooklyn assistant district attorney, Lindsay Gerdes, about the Facebook entry, which was made at the time of the robbery. The district attorney subpoenaed Facebook to verify that the words had been typed from a computer at an apartment at 71 West 118th Street in Manhattan, the home of Mr. Bradford’s father. When that was confirmed, the charges were dropped.

“This is the first case that I’m aware of in which a Facebook update has been used as alibi evidence,” said John G. Browning, a lawyer in Dallas who studies social networking and the law. “We are going to see more of that because of how prevalent social networking has become.”

With more people revealing the details of their lives online, sites like Facebook, MySpace and Twitter are providing evidence in legal battles.

Up to now, social networking activity has mostly been used as prosecutorial evidence, Mr. Browning said. He cited a burglary case in September in Martinsburg, Pa., in which the burglar used the victim’s computer to log on to Facebook and forgot to log off. The police followed the digital trail to Jonathan G. Parker, 19, who was arrested.

As part of his defense, a suspect in an Indiana murder case, Ian J. Clark, claimed he was not the kind of man who could kill his girlfriend’s child. But remarks he was found to have posted on MySpace left him vulnerable to character examination, Mr. Browning said, contributing to his conviction and a sentence of life in prison without parole.

In civil cases, too, online communications have helped strengthen evidence, especially in divorce cases, where they are often used as proof of cheating.

And postings by a probationary sheriff’s deputy, Brian Quinn, 26, of Marion County, Fla., on his MySpace page led to his firing in June 2006 for “conduct unbecoming an officer.”

Such cases are becoming more prevalent in part because Congress in 2006 mandated changes to the federal rules of civil procedure, expanding the acceptance of electronically stored information as evidence.

With the use of a Facebook update as an alibi, such communications may also be used to prove innocence, Mr. Browning said.

Mr. Bradford’s arrest was for the mugging at gunpoint of Jeremy Dunklebarger and Rolando Perez-Lorenzo at 11:50 a.m. on Oct. 17, according to Mr. Reuland, Mr. Bradford’s lawyer.

Mr. Bradford, who was facing charges in a previous robbery, contended he was in Harlem at the time of the Oct. 17 robbery — a claim supported by Mr. Bradford’s father, Rodney Bradford Sr., and his stepmother, Ernestine Bradford, Mr. Reuland said.

Mr. Reuland acknowledged that, in principle, anyone who knew Mr. Bradford’s user name and password could have typed the Facebook update, but he regards it as unlikely.

“This implies a level of criminal genius that you would not expect from a young boy like this; he is not Dr. Evil,” Mr. Reuland said, adding that the Facebook entry was just “icing on the cake,” since his client had other witnesses who provided an alibi.

Jonah Bruno, a spokesman for the Brooklyn district attorney, Charles J. Hynes, said he could not discuss details of the case because it was sealed. But he acknowledged that Facebook was crucial to the charges’ being dropped.

But Joseph A. Pollini, who teaches at the John Jay College of Criminal Justice, said prosecutors should not have been so quick to drop the charges.

“With a user name and password, anyone can input data in a Facebook page,” Mr. Pollini said.

“Some of the brightest people on the Internet are teenagers,” he said. “They know the Internet better than a lot of people. Why? Because they use it all the time.”



New Computer Simulator Helps Design Military Strategies Based On Ants’ Movements


Researchers in Spain have designed a system for moving military troops around a battlefield that follows the mechanisms ant colonies use to move. The scientists used settings from Panzer General, a commercial war video game, to develop the software.

A researcher at the University of Granada has designed a new system for the mobility of military troops within a battlefield, based on the mechanisms ant colonies use to move and built with the help of a commercial video game.

This work, developed at the department of Computer Architecture and Technology of the UGR, produced several algorithms for finding the best path (that is, the route that best satisfies certain criteria) within a particular environment.

Specifically, the project developed software that would allow army troops to plot the best path across a battlefield, given that the path will be covered by a company and must balance security (reaching the destination with the fewest casualties) against speed (reaching the destination as quickly as possible).

To that end, the scientists used the so-called ‘ant colony optimization’ (ACO) algorithm, a probabilistic technique for solving optimization problems, inspired by the way ants find trajectories from the colony to food.
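The general technique can be sketched in miniature. The following Python is a generic, simplified ACO pathfinder, not the UGR group's actual software; the function name, parameters and graph format are all illustrative. Ants repeatedly build paths through a graph, biased by artificial pheromone and by a heuristic (cheaper edges look more attractive), and shorter completed paths deposit more pheromone for the next round:

```python
import random

def aco_shortest_path(graph, source, target, n_ants=20, n_iters=50,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    """Toy ant colony optimization for a 'best path' between two nodes.

    graph: dict mapping node -> dict of neighbor -> edge cost.
    alpha weights pheromone, beta weights the heuristic (1/cost),
    rho is the evaporation rate, q scales pheromone deposits.
    """
    rng = random.Random(seed)
    # Pheromone on every directed edge starts uniform.
    tau = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}
    best_path, best_cost = None, float("inf")

    for _ in range(n_iters):
        completed = []
        for _ in range(n_ants):
            node, path, visited = source, [source], {source}
            while node != target:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: abandon this ant
                    path = None
                    break
                weights = [tau[node][v] ** alpha
                           * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                completed.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        # Evaporate everywhere, then deposit pheromone in
        # inverse proportion to each completed path's cost.
        for u in tau:
            for v in tau[u]:
                tau[u][v] *= (1.0 - rho)
        for path, cost in completed:
            for a, b in zip(path, path[1:]):
                tau[a][b] += q / cost
    return best_path, best_cost
```

On a small graph with routes of cost 2 and cost 5 between the endpoints, the colony converges on the cheaper one; in the military setting, edge costs would encode the researchers' security and speed criteria.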

A mini-simulator

This work has been carried out by Antonio Miguel Mora García, and supervised by professors Juan Julián Merelo Guervós and Pedro Ángel Castillo Valdivieso, of the department of Computer Architecture and Technology of the UGR.

The scientists of the UGR have developed a mini-simulator to define the settings (battlefields), place the unit and its enemies, run the algorithms and view the results. In addition, the software offers several tools for analyzing both the initial map and the results.

To prepare this system, Mora García started from the battlefields present in the video game Panzer General™, later defining the properties and restrictions needed to make them faithful to reality.

The research work developed at the University of Granada has also involved members of the Doctrine and Training Command of the Spanish Army (MADOC), a body belonging to the Ministry of Defense, which in the long term could incorporate some of the new simulator’s features into the design of actual military strategies.

The UGR scientists point out that, apart from this application, the simulator could also be used to solve other real-world problems, such as finding the best route for a sales agent or a transporter to visit clients while optimizing fuel consumption or time. “In addition,” they say, “it could also be useful for planning problems in the distribution of goods, trying to serve the highest possible number of customers from a central warehouse with the lowest possible number of vehicles.”

Some of the results of this research have been presented at several national and international conferences and published in journals including the International Journal of Intelligent Systems. The software designed for this research is free software and can be downloaded freely through the Internet.


Celebrating 40 years of the net

Arpanet circa 1971, Larry Roberts

By 1971 the fledgling internet had spanned the US.

It has often been said that a journey of a thousand miles begins with a single step. For the internet, that first step was more of a stumble.

At 2100, on 29 October 1969, engineers 400 miles apart at the University of California in Los Angeles (UCLA) and Stanford Research Institute (SRI) prepared to send data between the first nodes of what was then known as Arpanet.

It got the name because it was commissioned by the US Department of Defense’s Advanced Research Projects Agency (Arpa).

The fledgling network was to be tested by Charley Kline attempting to remotely log in to a Scientific Data Systems computer that resided at SRI.

Kline typed an “L” and then asked his colleague Bill Duvall at SRI via a telephone headset if the letter had arrived.

It had.

Kline typed an “O”. Duvall said that arrived too.

Kline typed a “G”. Duvall could only report that the system had crashed.

They got it working again by 2230 and everything went fine. After that first misstep, the network almost never put a foot wrong. The rest is history.

Big changes

Watching remotely in Washington 40 years ago was Dr Larry Roberts, the MIT scientist who worked out the fundamental technical specifications of the Arpanet. The engineers who built the hardware that made Arpanet work did so to his design.

But, he told BBC News, the initial reaction to setting up Arpanet was anything but positive.

“They thought it was a horrible idea,” he said.


The Interface Message Processors (IMPs) helped to shuttle data around the Arpanet

Arpa boss Bob Taylor wanted Arpanet built to end the crazy situation of every institution he funded demanding ever more computer power and duplicating research on those machines.

“At the time computers were completely incompatible and moving data was a huge chore,” he said.

The resistance came about because those institutions wanted to keep control of their computer resources. But, said Dr Roberts, they soon saw that hooking up to Arpanet meant a huge increase in the potential computer power they had at their disposal.

“They quickly learned that there was a tremendous gain for them,” said Dr Roberts. It also fulfilled Bob Taylor’s goal of cutting spending on computers.

Back in those days, long before the utility of the net was demonstrated, Dr Roberts and his colleagues had an inkling that remarkable things would happen once such a network were built.

“We knew that if we could connect all the data we were collecting that would change the face of research and development and business,” he said.

Dividing data

The Arpanet became the internet in the 1970s but the change was largely cosmetic. The fundamental technological idea that made it work, known as packet switching, was demonstrated on that October evening.

The motivation for developing packet switching also had a financial element. Computer networks were in use prior to the creation of Arpanet, but not many people used them.

“The cost was enormous because we were doing it so inefficiently,” said Dr Roberts. “We knew we needed something to share that rather than have it as a dedicated session.”


The inspiration for packet switching partly came from the Post Office

Analysis by Dr Roberts showed that only one-fifteenth of the capacity of a telephone line used for a remote connection to a mainframe was actually being used.

Far better, he reasoned, was to find a way to divide up that capacity among many computers.

Dr Roberts was not alone in building a network using these principles. Packet switching got its name thanks to late British scientist Donald Davies who was creating a network that used this technique at the National Physical Laboratory (NPL).

Not only did it make it easier, and cheaper, to use telephone lines it helped speed up the passing of data.

“If you have packets arriving in little pieces you can very quickly sort them,” said Roger Scantlebury, one of Dr Davies’ colleagues. “But if you have a huge message you have to wait for that to finish before anything else can happen.”
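The idea Scantlebury describes, chopping messages into small numbered pieces that can share one line and be quickly sorted back together, can be illustrated in a few lines of Python. This is a toy sketch, not the Arpanet's actual IMP protocol; the function names and tuple format are invented for illustration:

```python
from collections import defaultdict

def packetize(msg_id, data, size):
    """Split one message into numbered packets of at most `size` bytes."""
    return [(msg_id, seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def interleave(streams):
    """Round-robin packets from several messages onto one shared line."""
    line = []
    while any(streams):
        for s in streams:
            if s:
                line.append(s.pop(0))
    return line

def reassemble(line):
    """Group packets by message id, sort by sequence number, rejoin."""
    buf = defaultdict(dict)
    for msg_id, seq, chunk in line:
        buf[msg_id][seq] = chunk
    return {m: b"".join(parts[i] for i in sorted(parts))
            for m, parts in buf.items()}
```

Because packets from different messages alternate on the line, a short message slips out between the pieces of a long one instead of waiting for the whole thing to finish, which is exactly the gain over holding a dedicated session.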

Rather than just theorise, Dr Davies and his colleagues put their work into action.

“When we first started we were just going to build something to show it would work, but fairly quickly Donald realised that in order for it to have any impact it needed to be a proper working system, and we actually built the network which went live at the start of 1970,” he said.

He told BBC News: “When we first put the network together at NPL, we weren’t constrained by telephone wires, so we built high capacity links and everyone had 1.5 megabytes, which at the time everyone said was crazy.”

From those first two nodes, Arpanet quickly grew, reaching four nodes by December 1969 and 37 by 1972. Then began the process of connecting networks to one another, and the internet, a network of networks, came into being.

Dr Roberts has spent his professional life involved in networks and is not done yet. He is currently driving a Darpa research project to get the net ready for the next 40 years.

The work is concentrating on ways to improve security, enshrine fairness so no-one can hog capacity and guarantee quality of connection to support exquisitely time sensitive applications such as remote surgery.

There’s no doubt that the net’s first step was the start of a giant leap.



Old Trick Threatens the Newest Weapons

Despite a six-year effort to build trusted computer chips for military systems, the Pentagon now manufactures in secure facilities run by American companies only about 2 percent of the more than $3.5 billion of integrated circuits bought annually for use in military gear.

That shortfall is viewed with concern by current and former United States military and intelligence agency executives who argue that the menace of so-called Trojan horses hidden in equipment circuitry is among the most severe threats the nation faces in the event of a war in which communications and weaponry rely on computer technology.

As advanced systems like aircraft, missiles and radars have become dependent on their computing capabilities, the specter of subversion causing weapons to fail in times of crisis, or secretly corrupting crucial data, has come to haunt military planners. The problem has grown more severe as most American semiconductor manufacturing plants have moved offshore.

Only one-fifth of all computer chips are now made in the United States, and just one-quarter of the chips based on the most advanced technologies are built here, I.B.M. executives say. That has led the Pentagon and the National Security Agency to expand significantly the number of American plants authorized to manufacture chips for the Pentagon’s Trusted Foundry program.

Despite the increases, semiconductor industry executives and Pentagon officials say, the United States lacks the ability to fulfill the capacity requirements needed to manufacture computer chips for classified systems.

“The department is aware that there are risks to using commercial technology in general and that there are greater risks to using globally sourced technology,” said Robert Lentz, who before his retirement last month was in charge of the Trusted Foundry program as the deputy assistant defense secretary for cyber, identity and information assurance.

Counterfeit computer hardware, largely manufactured in Asian factories, is viewed as a significant problem by private corporations and military planners. A recent White House review noted that there had been several “unambiguous, deliberate subversions” of computer hardware.


CONCERNS Malicious software could disable missiles and other weapons.

“These are not hypothetical threats,” the report’s author, Melissa Hathaway, said in an e-mail message. “We have witnessed countless intrusions that have allowed criminals to steal hundreds of millions of dollars and allowed nation-states and others to steal intellectual property and sensitive military information.”

Ms. Hathaway declined to offer specifics.

Cyberwarfare analysts argue that while most computer security efforts have until now been focused on software, tampering with hardware circuitry may ultimately be an equally dangerous threat. That is because modern computer chips routinely comprise hundreds of millions, or even billions, of transistors. The increasing complexity means that subtle modifications in manufacturing or in the design of chips will be virtually impossible to detect.

“Compromised hardware is, almost literally, a time bomb, because the corruption occurs well before the attack,” Wesley K. Clark, a retired Army general, wrote in an article in Foreign Affairs magazine that warns of the risks the nation faces from insecure computer hardware.

“Maliciously tampered integrated circuits cannot be patched,” General Clark wrote. “They are the ultimate sleeper cell.”

Indeed, in cyberwarfare, the most ancient strategy is also the most modern.

Internet software programs known as Trojan horses have become a tool of choice for computer criminals who sneak malicious software into computers by putting it in seemingly innocuous programs. They then pilfer information and transform Internet-connected PCs into slave machines. With hardware, the strategy is an even more subtle form of sabotage, building a chip with a hidden flaw or a means for adversaries to make it crash when wanted.

Pentagon executives defend the manufacturing strategy, which is largely based on a 10-year contract with a secure I.B.M. chipmaking plant in Burlington, Vt., reported to be valued as high as $600 million, and a certification process that has been extended to 28 American chipmakers and related technology firms.

“The department has a comprehensive risk-management strategy that addresses a variety of risks in different ways,” said Mitchell Komaroff, the director of a Pentagon program intended to develop a strategy to minimize national security risks in the face of the computer industry’s globalization.

Mr. Komaroff pointed to advanced chip technologies that made it possible to buy standard hardware components that could be securely programmed after they were acquired.

But as military planners have come to view cyberspace as an impending battlefield, American intelligence agency experts said, all sides are arming themselves with the ability to create hardware Trojan horses and to hide them deep inside the circuitry of computer hardware and electronic devices to facilitate military attacks.

In the future, and possibly already hidden in existing weapons, clandestine additions to electronic circuitry could open secret back doors that would let the makers in when the users were depending on the technology to function. Hidden kill switches could be included to make it possible to disable computer-controlled military equipment from a distance. Such switches could be used by an adversary or as a safeguard if the technology fell into enemy hands.

A Trojan horse kill switch may already have been used. A 2007 Israeli Air Force attack on a suspected partly constructed Syrian nuclear reactor led to speculation about why the Syrian air defense system did not respond to the Israeli aircraft. Accounts of the event initially indicated that sophisticated jamming technology was used to blind the radars. Last December, however, a report in an American technical publication, IEEE Spectrum, cited a European industry source in raising the possibility that the Israelis might have used a built-in kill switch to shut down the radars.

Separately, an American semiconductor industry executive said in an interview that he had direct knowledge of the operation and that the technology for disabling the radars was supplied by Americans to the Israeli electronic intelligence agency, Unit 8200.

The disabling technology was given informally but with the knowledge of the American government, said the executive, who spoke on the condition of anonymity. His claim could not be independently verified, and American military, intelligence and contractors with classified clearance declined to discuss the attack.

The United States has used a variety of Trojan horses, according to various sources.

In 2004, Thomas C. Reed, an Air Force secretary in the Reagan administration, wrote that the United States had successfully inserted a software Trojan horse into computing equipment that the Soviet Union had bought from Canadian suppliers. Used to control a Trans-Siberian gas pipeline, the doctored software failed, leading to a spectacular explosion in 1982.

Crypto AG, a Swiss maker of cryptographic equipment, was the subject of intense international speculation during the 1980s when, after the Reagan administration took diplomatic actions in Iran and Libya, it was widely reported in the European press that the National Security Agency had access to a hardware back door in the company’s encryption machines that made it possible to read electronic messages transmitted by many governments.

According to a former federal prosecutor, who declined to be identified because of his involvement in the operation, during the early ’80s the Justice Department, with the assistance of an American intelligence agency, also modified the hardware of a Digital Equipment Corporation computer to ensure that the machine — being shipped through Canada to Russia — would work erratically and could be disabled remotely.

The American government began making a concerted effort to protect against hardware tampering in 2003, when Deputy Defense Secretary Paul D. Wolfowitz circulated a memorandum calling on the military to ensure the economic viability of domestic chipmakers.

In 2005, the Defense Science Advisory Board issued a report warning of the risks of foreign-made computer chips and calling on the Defense Department to create a policy intended to stem the erosion of American semiconductor manufacturing capacity.

Former Pentagon officials said the United States had not yet adequately addressed the problem.

“The more we looked at this problem the more concerned we were,” said Linton Wells II, formerly the principal deputy assistant defense secretary for networks and information integration. “Frankly, we have no systematic process for addressing these problems.”

John Markoff, New York Times



A Win for Internet Speech

The sheriff of Cook County, Ill., grabbed headlines earlier this year when he sued Craigslist, the online classified advertising forum, for allowing posts that he said promoted prostitution. A federal judge in Chicago wisely threw out the suit last week. As Congress has recognized, if an Internet proprietor had to police every posting that a third party put up, the cost would be enormous — and it would likely stifle communications.

Craigslist warns users that offers or solicitations of prostitution are prohibited. Sheriff Thomas Dart argued that its “erotic services” section still included numerous listings for paid sexual services, including some using code words. The company made voluntary changes after the suit was filed, including conducting a manual review of the listings. Late last year, before the suit was filed, it started charging for those ads in an effort to appease critics.

Even without these changes, Craigslist was operating entirely within the law. The Communications Decency Act of 1996 protects “interactive computer services” — ranging from small bloggers to giant Internet service providers — from liability, in most cases, for speech they did not help create.

The legal question before Judge John F. Grady was not a difficult one. Last year, the United States Court of Appeals for the Seventh Circuit, whose decisions are binding in Illinois, ruled in a fair-housing case that Craigslist cannot be held liable for its users’ illegal real estate listings. As Judge Grady rightly concluded, the same logic applies to adult listings.

Other law enforcement officials, including several state attorneys general, have attacked Craigslist recently for its adult listings, despite its immunity under the Communications Decency Act.

This is the wrong approach. Sheriff Dart told the court that his office had conducted sting operations using Craigslist that led to numerous arrests on prostitution and related charges. He seemed to think it was an argument against Craigslist, but it actually shows why suits like his are unnecessary.

Editorial, New York Times



Betting on bytes

The IT business rebounds

Optimism that tech firms will help kick-start economic recovery is overdone

EVERY year, many leading lights of the internet world congregate at the Web 2.0 Summit in San Francisco. The 2009 event, which took place this week, included an evening reception thrown by a venture capital company at a swanky hotel and was dubbed “Web After Dark”. And evidence is growing to suggest that the darkness that has hung over the information technology (IT) industry for many months is lifting.

Three of the sector’s heavyweights—IBM, Intel and Google—recently reported surprisingly robust profits. Even Yahoo! did less badly than expected. On Monday October 19th Apple stunned even the most bullish investors by posting its best quarterly results ever: third-quarter revenues came in at $9.9 billion—24% higher than the same period a year earlier. Then came the news that venture capital investments in America are growing again. And Windows 7, Microsoft’s new operating system, launched on Thursday, is expected to drive demand for personal computers and related wares.

The outlook for IT firms in other countries is also brighter. The OECD detected signs of a recovery as early as August, particularly in Asia. Countries such as South Korea and Taiwan, which boast many companies specialising in chips and hardware, had been hit particularly hard by the downturn, with production in some sectors dropping by as much as 40%. But now that inventories have been depleted, manufacturers there are cranking up production again.

All this is more than welcome. But the wave of good news has already restarted the hype machine, for which the IT industry is well known. Once again, the sector is being trumpeted as the saviour of the economy. Some even predict that IT will pull the economy out of recession, with investment in technology giving a swift boost to productivity and job creation.

Just how much of a boost IT can provide is a subject of some contention. Both Forrester and Gartner, the industry’s leading research firms, see the downturn bottoming out in the current quarter and predict that demand will rebound next year. But while both firms agree on the timing of a recovery, they differ on the severity of the recession in IT and, more importantly, the speed at which the industry will pull out of its slump. Forrester is both more bearish and more bullish. In late September it predicted that worldwide IT purchases will have fallen by 11.4% at the end of this year, to $1.5 trillion, but will grow by 4.9% in 2010. In a report released on Monday, Gartner put the corresponding figures at a fall of 5.2% this year, growth of 3.3% in 2010 and total spending of $3.3 trillion.

There are good reasons to be conservative. For a start, several statistical effects make the latest numbers look better than they actually are. After a steep downturn, growth numbers can seem equally dramatic. The volatile dollar muddles the picture as well. As long as the currency was relatively strong it weighed heavily on the results of American IT firms by devaluing foreign revenues. Now the dollar’s increasing weakness makes their numbers look far healthier.

In addition, excellent results at Apple, Google and even Intel reflect increased demand from consumers. Apple has benefited from the boom in smart phones, Google from users clicking on more advertisements and Intel from the popularity of netbooks, or small laptops, many of which contain its chips. But companies still account for by far the biggest chunk of technology spending. IBM, which offers the entire range of corporate IT services, from powerful computers to consulting services, is therefore a much better proxy for the overall health of the IT industry. Although its profits were better than expected, its revenues fell by nearly 7% compared with the third quarter of last year.

Yet more to the point, encouraging numbers or not, the technology sector is unlikely to lead the economy out of the recession. More likely, it is the economy, supported by cheap money and stimulus programmes, that is pushing IT. Ultimately, the IT industry will stage a real rebound—it will just take some time. Perhaps it is a result of the severity of the recession, but many are reacting to the first signs of an IT recovery as if it were the latest great thing. As with many new technologies, they overestimate the short-term impact, but underestimate what will happen in the longer run.



Millions tricked by ‘scareware’

The scam is difficult for police and other agencies to target

Online criminals are making millions of pounds by convincing computer users to download fake anti-virus software, internet security experts claim.

Symantec says more than 40 million people have fallen victim to the “scareware” scam in the past 12 months.

The download is usually harmful and criminals can sometimes use it to get the victim’s credit card details.

The firm has identified 250 versions of scareware, and criminals are thought to earn more than £750,000 each a year.

Franchised out

Scareware sellers use pop-up adverts deliberately designed to look legitimate, for example, using the same typefaces as Microsoft and other well-known software providers.

They appear, often when the user is switching between websites, and falsely warn that a computer’s security has been compromised.

If the user then clicks on the message they are directed towards another site where they can download the fake anti-virus software they supposedly need to clean up their computer – for a fee of up to £60.

Con Mallon, from Symantec, told the BBC the apparent fix could have a double impact on victims.

“Obviously, you’re losing your own hard-earned cash up front, but at the back end of that, if you’re transacting with these guys online you’re offering them credit card details, debit card details and other personal information,” he said.

“That’s obviously very valuable because these cyber criminals can try to raid those accounts themselves or they can then pass them on or sell them to others who ultimately will try to use that information to their benefit not yours.”

The findings were revealed in a report written following Symantec analysis of data collected from July 2008 to June 2009. Symantec said 43 million people fell for such scams during that period.

The scam has become so popular that the rogue software has been franchised out.

Fake reviews help build the credibility of bogus anti-virus software.

Mr Mallon said some scareware took the scam a step further.

“[They] could hold your computer to ransom where they will stop your computer working or lock up some of your personal information, your photographs or some of your Word documents.

“They will extort money from you at that point. They will ask you to pay some additional money and they will then release your machine back to you.”

The scam is hard for police or other agencies to investigate because the individual sums of money involved are very small.

Therefore, experts say users must protect themselves with common sense and legitimate security software.

‘Steal your identity’

Tony Neate, from Get Safe Online, told the BBC the threats presented by the internet had changed in recent years.

“Where we used to say protect your PC… we’ve now got to look at ourselves, making sure we’re protected against the con men who are out there,” he said.

“They want you to help them infect your machine. When they’ve infected your machine it’s possibly no longer your machine – you’ve got no control over it.

“Then what they’re looking to do is take away your identity, steal bits of your identity, or even get some financial information from you.”

He added: “They used to be 16-year-olds in their bedrooms causing damage with viruses. Now those 16-year-olds have grown up [and] they’re looking for money, they’re looking for information.”



The Book That Contains All Books

The globally available Kindle could mark as big a shift for reading as the printing press and the codex

On Monday, the Kindle 2 will become the first e-reader available globally. The only other events as important to the history of the book are the birth of print and the shift from the scroll to bound pages. The e-reader, now widely available, will likely change our thinking and our being as profoundly as the two previous pre-digital manifestations of text. The question is how. And the answer can be found in the history of earlier book forms.


 The Kindle, which will be available internationally next week.

Most literate people are familiar with at least some of the consequences of the print revolution of the 15th century, but far fewer are as aware of the much more profound change that occurred when rolls were replaced by codices—pages bound between covers—in the late Roman period. Think of the scattered, tattered remainders of the Dead Sea Scrolls—each text is isolated and vulnerable. Codices were originally mini-libraries, far more useful and convenient than storing masses of loose individual texts.

In “Christianity and the Transformation of the Book” (2007), Anthony Grafton and Megan Williams argue that the codex was one of the keys to the nascent power of Christianity in the late Roman period: “The rise of the codex, with its compact proportions, greatly intensified the physical—as well as the symbolic—concentration of cultural power that a sizable library embodied.” The Gospels became both a single object and a small library. The simple act of binding involved the bringing together of voices and interests, a move from having the Lamentations of Jeremiah and histories of the Kings of Israel and the laws of Moses to having the Bible which contains them all.

The development of the codex was a shift from thinking of literature as a unique object, like a painting, to seeing it as an institutional object. Conversely, as the codex came to dominate as a means of intellectual transmission, the scroll began to take on the status of a holy object, which is why synagogues keep the Torah in scrolls.

The introduction of the printing press brought a similarly enormous change to the nature of reading. One of the most interesting figures in that transformation is the great Benedictine scholar Trithemius. He lived in Sponheim in the 15th century and managed to amass a library fully half the size of the Vatican library, an incredible achievement. He was also the author of “In Praise of Scribes,” the foremost defense of scribal practice, in favor of writing things out and against printing them.


‘The Trial of Christ; The Death of Judas,’ from the Codex Purpureus Rossanensis, early 6th century.

He reminds me particularly of Nicholson Baker, who strongly disapproves of the Kindle 2. I mean the comparison absolutely as a compliment to Mr. Baker, who recently published a diatribe against Kindle with the subtitle “Centuries of Evolved Beauty Rinsed Away.” His argument boils down to how much he likes the feel of paper. Trithemius had stronger arguments against the newfangled technology of the press: Printed books could never match the beauty and uniqueness of a copied text; copying produced a state of contemplation which was spiritually beneficial; and copying was a way of reducing error, which indeed it was at first.

His central claim was that hand-produced books were inherently holy. His leading anecdote is the story of a scribe who died after decades of copying texts. When they disinterred him, the three fingers of his right hand, his writing hand, had not decomposed. Anyone who has held a handmade medieval missal—or even a handwritten letter—knows what Trithemius is talking about: the sense that someone is communicating something to you personally.

But “In Praise of Scribes” is a good object lesson in the impossibility of avoiding technological change. Trithemius didn’t have his book copied. Too few people could have read it that way. It went straight to the printing press (just as Nicholson Baker’s polemic against Kindle 2 is available online). Trithemius was the first in a line of would-be Luddites who couldn’t resist the power of the new.

My paper library consists of 2,000 volumes, making it both much too big and much too small. I consider a working library to have about 5,000 volumes, but even a mere 2,000 has been one of the most persistent problems of my life. Moving it around is a nightmare. A hundred boxes of books is a terrible burden in the 21st century. Yet I know that I will never get rid of them. I’m too attached now. Just as the ancients respected the scroll more after the development of the book, just as the hand-written manuscript became sacred after the invention of print, the printed book is now beginning to glow with its own obsolescence.

The Dead Sea Scrolls.

But I am immensely excited for the new phase of the book. So far the new technology has been called the “e-reader,” a term obviously picked by engineers, not poets. In literary terms it’s a transbook, by which I mean that it is the book which can contain all books. Why are so many writers so afraid of this staggeringly wonderful possibility? A book is a singular object that can contain many voices, but the transbook has the potential to be a singular object containing all voices. It is not just another kind of media; it is the dream of ultimate text.

We are still in early days, but it is obvious where the transbook is headed: It will eventually provide access to all text that is non-copyright, and to the purchase of every book in or out of “print.” Kindle 2’s boast of being able to hold 1,500 titles will eventually sound as ludicrous as those early ads for floppy disks boasting that they could hold up to 64k of data. We will want everything and we will get it. Possibly there will eventually develop a subscription service, which provides access to all books for a monthly fee. At any rate, a single object will contain the contents of all the world’s libraries. It’s just a matter of when that will happen. And who will profit.

Kindle 2 isn’t really about what we may or may not want as readers and writers. It’s about what the book wants to be. And the book wants to be itself and everything. It wants to be a vast abridgment of the universe that you can hold in your hand. It wants to be the transbook.

Stephen Marche is the pop culture columnist at Esquire magazine. His most recent book, “Shining at the Bottom of the Sea,” is a literary anthology of an invented country.


Making its bookmark

Digital publishing

Google wants to shake up the digital book market

IT WAS a fitting place to announce an experiment in bookselling. At the Frankfurt book fair on Thursday October 15th, Tom Turvey of Google revealed plans for a new online service that will allow users to download electronic copies of books from the search giant or from publishers using its technology. Called Google Editions, the service will launch in the first half of next year and will trigger a head-to-head battle between Google and another web behemoth, Amazon, the current leader in the digital book arena.

That arena is still tiny. The Association of American Publishers, an industry group, reckons that total book sales in America last year reached $24.3 billion, but e-books accounted for just $113m of that amount. Nevertheless those sales were 68% higher than the previous year and demand for digital texts is expected to rise steeply in coming years. The appetite for e-versions of books is being driven in part by a rapid expansion of the electronic-reader market, which is dominated by Amazon’s Kindle. On October 6th the company unveiled a version of its e-reader with international wireless access that can be used in markets outside America.

But Google reckons that Amazon’s proprietary content system will be its Achilles heel. Purchasers of a digital text for the Kindle can only view it on Amazon’s machine and some smart phones that have appropriate software on them, whereas Google’s digital editions will be accessible via a wide range of gizmos that boast a web browser, including smart phones and personal computers. That should make its e-books more attractive to potential purchasers. Google also intends to let publishers set the prices at which digital texts are sold through its service, whereas Amazon dictates the price of Kindle versions. That may tempt more publishers to work with Google.

The search firm, which on Thursday announced a 27% year-on-year increase in its third-quarter profit, also seems determined to outdo Amazon when it comes to the number of books that it offers. Google Editions is expected to launch with some 400,000-600,000 digital books on its virtual shelves; Amazon currently has around 330,000 books in its online library. Unsurprisingly, Amazon is deeply opposed to a mooted deal between Google and groups representing publishers and authors that would give the search firm the exclusive digital rights to millions of out-of-print books in America. An American court has given Google and its partners until November 9th to revise an initial agreement that met with stiff opposition from rivals and publishers both at home and in countries such as Germany and France.

Google’s proposed digital bookstore may eventually encourage Amazon to slash the price of its Kindle e-reader range deeply to encourage many more people to buy one before Google Editions arrives. A recent report from Forrester, a research firm, predicts that adoption of e-readers will only take off once the price of such devices falls below $99. The cheapest Kindle still costs $259, while the international model is $279.

Yet even a big cut in the cost of a Kindle will not solve the bigger challenge that it and other proprietary e-readers face. Already, many people are choosing to read books on smart phones rather than on other devices, weakening Amazon’s grip on the market. And both Microsoft and Apple are working on tablet-style computers that combine features of an e-reader with a panoply of multimedia capabilities. Once they become available, such machines will be ideal for accessing digital texts. Let the battle for readers’ eyes begin.



Sudden Change, Big Effect

The Internet is already our era’s big disrupter. Its long-term effects will be even greater

In the Middle Ages, a simple military innovation helped to create an entirely new social structure. By introducing saddle stirrups made out of flexible leather rather than rigid metal, Charlemagne enabled mounted soldiers to keep their balance while moving freely—and to fight more formidably than their earthbound compatriots. To give these “knights” an income, he granted them their own territories from which they could collect rents. Thus was born feudalism. Charlemagne, meanwhile, ascended to new heights as Holy Roman Emperor.

It is far from certain, of course, that a leather stirrup can even begin to explain what “caused” feudal society. But to Larry Downes, in “The Laws of Disruption,” it is a useful instance of a small material change having big effects. Mr. Downes, the author of “Unleashing the Killer App” (1998), says that history is pushed in surprising directions by exactly such innovations. He notes, for instance, that the steam engine, antibiotics and the atom bomb made dramatic appearances that were then followed “by even more dramatic changes to the civilizations that used them.” The Internet is our own era’s big disrupter. We already know how it has changed our habits and ways of doing things. Mr. Downes says that its long-term effects on society will be even greater.

The central thesis of “The Laws of Disruption” is that “technology changes exponentially, but social, economic and legal systems change incrementally.” When it comes to the digital revolution, Mr. Downes says, our laws have not kept pace with the changes that it has brought about. Governments levy taxes, oversee intellectual property and regulate communications as if we all lived in a prelapsarian world—with a land-line phone, a typewriter and a library card. Thus he argues against outdated regulatory distinctions—for instance, between different types of voice and data carriers. He argues against the ban on Internet gambling, too, and against the attempts to bring about “net neutrality.” He is particularly critical of the Federal Communications Commission, which he says has shown a “baffling and dogged determination to see the world exactly as it looked” at the end of AT&T’s monopoly reign 25 years ago.


Not that Mr. Downes wants regulators and judges to rush to keep up by imposing ever new laws and regulations. He counsels instead that these authorities simply use a light touch, because the rapid pace of change makes it impossible to predict the course of technology. He quotes federal judge Frank Easterbrook, who once observed that “the blind are not good trailblazers.” For the most part, Mr. Downes says, regulators should leave the Web alone and simply protect it from interference—for example, by granting immunity to Web sites that might be sued for the comments of their users.

Mr. Downes’s libertarian instincts are admirable, particularly since government intervention often fails anyway. As Fred Wilson, a leading venture investor, has written: Regulation is largely irrelevant since “entrepreneurs and market forces are . . . much more powerful than government regulation.” Still, Mr. Downes can be a bit of a free-market triumphalist. It’s obviously true, for instance, that the market for communications is far more competitive than it was in the days of the AT&T monopoly, but he exaggerates the breadth of the marketplace when he suggests that all “consumers around the world have multiple choices for how to transport their bits.”

Oddly, Mr. Downes’s free-market approach begins to look a little socialistic when it comes to intellectual property. He claims that “treating information as a kind of property just doesn’t work anymore,” since in its digital form it is a “purely non-rivalrous good.” That is: One person reading an e-book doesn’t prevent someone else from reading it at the same time. But the value of a book has always been in the words, not the paper—and those words still need to be protected, in whatever form they appear.

Mr. Downes’s ideas on intellectual property derive in part from his view that “the value of information . . . increases exponentially as new users absorb it.” This may be true in certain instances, but not universally. In fact, scarcity generally makes information more valuable. Financial data, for instance, are plainly worth more to traders the less widely they are distributed. Mr. Downes says that the “more places my brand or logo appears,” the higher its value. Again, in some instances, yes, but not usually. Firms carefully protect their brands to ensure that they are not over-exposed or given the wrong associations. Mr. Downes himself illustrates the distinction when he discusses the dispute between Apple Inc. (the computer company) and Apple Corps (the company set up by the Beatles to control their music catalog) over the use of the trademark “Apple.”

Mr. Downes concludes that “it is now virtually impossible for average consumers to avoid violating copyright law” and so the law, he says, needs to be dramatically reined in. But limiting copyright is a perverse way of reducing infringement. Mr. Downes may well overstate the case when he says that our “industrial-age legal system” will not survive, but there is no doubt that a lot more disruption lies ahead.

Mr. Philips is executive vice president of News Corp., which owns Dow Jones & Co., the publisher of The Wall Street Journal.



Why Email No Longer Rules…

And what that means for the way we communicate

Services like Twitter, Facebook and Google Wave create a constant stream of interaction among users—for better or worse.

Email has had a good run as king of communications. But its reign is over.

In its place, a new generation of services is starting to take hold—services like Twitter and Facebook and countless others vying for a piece of the new world. And just as email did more than a decade ago, this shift promises to profoundly rewrite the way we communicate—in ways we can only begin to imagine.

We all still use email, of course. But email was better suited to the way we used to use the Internet—logging off and on, checking our messages in bursts. Now, we are always connected, whether we are sitting at a desk or on a mobile phone. The always-on connection, in turn, has created a host of new ways to communicate that are much faster than email, and more fun.

Why wait for a response to an email when you get a quicker answer over instant messaging? Thanks to Facebook, some questions can be answered without asking them. You don’t need to ask a friend whether she has left work, if she has updated her public “status” on the site telling the world so. Email, stuck in the era of attachments, seems boring compared to services like Google Wave, currently in test phase, which allows users to share photos by dragging and dropping them from a desktop into a Wave, and to enter comments in near real time.

Little wonder that while email continues to grow, other types of communication services are growing far faster. In August 2009, 276.9 million people used email across the U.S., several European countries, Australia and Brazil, according to Nielsen Co., up 21% from 229.2 million in August 2008. But the number of users on social-networking and other community sites jumped 31% to 301.5 million people.

“The whole idea of this email service isn’t really quite as significant anymore when you can have many, many different types of messages and files and when you have this all on the same type of networks,” says Alex Bochannek, curator at the Computer History Museum in Mountain View, Calif.

So, how will these new tools change the way we communicate? Let’s start with the most obvious: They make our interactions that much faster.

Into the River

Years ago, we were frustrated if it took a few days for a letter to arrive. A couple of years ago, we’d complain about a half-hour delay in getting an email. Today, we gripe about it taking an extra few seconds for a text message to go through. In a few months, we may be complaining that our cellphones aren’t automatically able to send messages to friends within a certain distance, letting them know we’re nearby. (A number of services already do this.)

These new services also make communicating more frequent and informal—more like a blog comment or a throwaway aside, rather than a crafted email sent to one person. No need to spend time writing a long email to your half-dozen closest friends about how your vacation went. Now those friends, if they’re interested, can watch it unfold in real time online. Instead of sending a few emails a week to a handful of friends, you can send dozens of messages a day to hundreds of people who know you, or just barely do.

Consider Twitter. The service allows users to send 140-character messages to people who have subscribed to see them, called followers. So instead of sending an email to friends announcing that you just got a new job, you can just tweet it for all the people who have chosen to “follow” you to see. You can create links to particular users in messages by entering @ followed by their user name or send private “direct messages” through the system by typing d and the user name.

Facebook is part of the trend, too. Users post status updates that show up in their friends’ “streams.” They can also post links to content and comment on it. No in-box required.

Dozens of other companies, from AOL and Yahoo Inc. to start-ups like Yammer Inc., are building products based on the same theme.

David Liu, an executive at AOL, calls it replacing the in-box with “a river that continues to flow as you dip into it.”

But the speed and ease of communication cut both ways. While making communication more frequent, they can also make it less personal and intimate. Communicating is becoming so easy that the recipient knows how little time and thought was required of the sender. Yes, your half-dozen closest friends can read your vacation updates. But so can your 500 other “friends.” And if you know all these people are reading your updates, you might say a lot less than you would otherwise.

Too Much Information

Another obvious downside to the constant stream: It’s a constant stream.

That can make it harder to determine the importance of various messages. When people can more easily fire off all sorts of messages—from updates about their breakfast to questions about the evening’s plans—being able to figure out which messages are truly important, or even which warrant a response, can be difficult. Information overload can lead some people to tune out messages altogether.

Such noise makes us even more dependent on technology to help us communicate. Without software to help filter and organize based on factors we deem relevant, we’d drown in the deluge.

Enter filtering. In email land, consumers can often get by with a few folders, if that. But in the land of the stream, some sort of more sophisticated filtering is a must.

On Facebook, you can choose to see updates only from certain people you add to certain lists. Twitter users have adopted the trend of “tagging” their tweets by topic. So people tweeting about a company may follow their tweet with the # symbol and the company name. A number of software programs filter tweets by these tags, making it easier to follow a topic.
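The tagging convention described above is simple enough to filter mechanically. As a rough sketch—the function name and the sample tweets here are invented for illustration, not taken from any real service—a topic filter might look like this:

```python
import re

def tweets_with_tag(tweets, tag):
    """Return the tweets that mention #tag (case-insensitive).

    The word boundary (\\b) keeps "#apple" from matching "#applesauce".
    """
    pattern = re.compile(r"#" + re.escape(tag) + r"\b", re.IGNORECASE)
    return [t for t in tweets if pattern.search(t)]

tweets = [
    "Big announcement coming from #Apple today",
    "Lunch was great",
    "Anyone following the #apple event?",
]
print(tweets_with_tag(tweets, "Apple"))  # matches both tagged tweets
```

A search service following the same convention would simply apply such a filter across the whole public stream rather than a handful of messages.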

The combination of more public messages and tagging has cool search and discovery implications. In the old days, people shared photos over email. Now, they post them to Flickr and tag them with their location. That means users can, with little effort, search for an area, down to a street corner, and see photos of the place.

Tagging also is creating the potential for new social movements. Instead of trying to organize people over email, protesters can tweet their messages, tag them with the topic and have them discovered by others interested in the cause. Iranians used that technique to galvanize public opinion during their election protests earlier this year. It was a powerful example of what can happen when messages get unleashed.

Who Are You?

Perhaps the biggest change that these email successors bring is more of a public profile for users. In the email world, you are your name followed by a “dot-com.” That’s it. In the new messaging world, you have a higher profile, packed with data you want to share and possibly some you don’t.

Such a public profile has its pluses and minuses. It can draw the people communicating closer, allowing them to exchange not only text but also all sorts of personal information, even facial cues. You know a lot about the person you are talking to, even before you’ve ever exchanged a single word.

Take, for example, Facebook. Message someone over the site and, depending on your privacy settings, he may be a click away from your photos and your entire profile, including news articles you have shared and pictures of that party you were at last night. The extra details can help you cut to the chase. If you see that I am in London, you don’t need to ask me where I am. They can also make communication feel more personal, restoring some of the intimacy that social-network sites—and email, for that matter—have stripped away. If I have posted to the world that I am in a bad mood, you might try to cheer me up, or at least think twice about bothering me.

Email is trying to compete by helping users roll in more signals about themselves. Yahoo and Google Inc. have launched new profile services that connect to mail accounts. That means just by clicking on a contact, one can see whatever information she has chosen to share through her profile, from her hobbies to her high school.

But a dump of personal data can also turn off the people you are trying to communicate with. If I really just want to know what time the meeting is, I may not care that you have updated your status message to point people to photos of your kids.

Having your identity pegged to communication creates more data to manage and some blurry lines. What’s fine for one sort of recipient to know about you may not be acceptable for another. While our growing digital footprints have made it easier for anyone to find personal information about anyone online if they go search for it, new communications tools are marrying that trail of information with the message, making it easier than ever for the recipient to uncover more details.

A Question of Time

Meanwhile, one more big question remains: Will the new services save time, or eat up even more of it?

Many of the companies pitching the services insist they will free up people.

Jeff Teper, vice president of Microsoft Corp.’s SharePoint division, which makes software that businesses use to collaborate, says that in the past employees received an email every time the status of a project changed, producing hundreds of unnecessary emails a day. Now, thanks to SharePoint and other software that lets companies route those updates through centralized sites that employees can check when they need to, those unnecessary emails are out of users’ in-boxes.

“People were very dependent on email. They overused it,” he says. “Now, people can use the right tool for the right task.”

Perhaps. But there’s another way to think about all this. You can argue that because we have more ways to send more messages, we spend more time doing it. That may make us more productive, but it may not. We get lured into wasting time, telling our bosses we are looking into something, instead of just doing it, for example. And we will no doubt waste time communicating stuff that isn’t meaningful, maybe at the expense of more meaningful communication. Such as, say, talking to somebody in person.

Ms. Vascellaro is a staff reporter in The Wall Street Journal’s San Francisco bureau.



An emotional response

Using computers to analyse sentiments

Software that can tell when people are getting upset

THE difference between saying what you mean and meaning what you say is obvious to most people. To computers, however, it is trickier. Yet getting them to assess intelligently what people mean from what they say would be useful to companies seeking to identify unhappy customers and intelligence agencies seeking to identify dangerous individuals from comments they post online.

Computers are often inept at understanding the meaning of a word because that meaning depends on the context in which the word is used. For example “killing” is bad and “bacteria” are bad but “killing bacteria” is often good (unless, that is, someone is talking about the healthy bacteria present in live yogurt, in which case, it would be bad).

An attempt to enable computers to assess the emotional meaning of text is being led by Stephen Pulman of the University of Oxford and Karo Moilanen, one of his doctoral students. It uses so-called “sentiment analysis” software to assess text. The pair have developed a classification system that analyses the grammatical structure of a piece of text and assigns emotional labels to the words it contains, by looking them up in a 57,000-word “sentiment lexicon” compiled by people. These labels can be positive, negative or neutral. Words such as “never”, “failed” and “prevent” are tagged as “changing” or “reversive” words because they reverse the sentiment of the word they precede.

The analysis is then broken into steps that progressively take into account larger and larger grammatical chunks, updating the sentiment score of each entity as it goes. The grammatical rules determine the effect of one chunk of text on another. The simplest rule is that positive and negative sentiments both overwhelm neutral ones. More complex syntactic rules govern seemingly conflicting cases such as “holiday hell” or “abuse helpline” that make sense to people but can confuse computers.

By applying and analysing emotional labels, the software can construct sentiment scores for the concepts mentioned in the text, as a combination of positive, negative and neutral results. For example, in the sentence, “The region’s largest economies were still mired in recession,” the parser finds four of the words in the sentiment lexicon: largest (positive, neutral or negative); economies (positive or neutral); mired (negative); and recession (negative). It then analyses the sentence structure, starting with “economies” and progressing to “largest economies”, “region’s largest economies” and “the region’s largest economies”. At each stage, it computes the changing sentiment of the sentence. It then does the same for the second half of the sentence.

Instead of simply adding up the number of positive and negative mentions for each concept, the software applies a weighting to each one. For example, short pieces of text such as “region” are given less weight than longer ones such as “the region’s largest economies”. Once the parser has reassembled the original text (“the region’s largest economies were still mired in recession”) it can correctly identify the sentence as having a mainly negative meaning with respect to the concept of “economies”.
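The scoring procedure the article walks through can be sketched in miniature. The code below is an illustrative toy, not the Oxford system: the lexicon holds five entries instead of 57,000, and the full syntactic parse is replaced by a left-to-right scan in which a reversive word flips the sentiment of the word that follows it.

```python
# Toy compositional sentiment scorer (illustration only).
LEXICON = {"mired": -1.0, "recession": -1.0, "growth": 1.0,
           "largest": 0.0, "economies": 0.0}   # 0.0 = neutral
REVERSIVE = {"never", "failed", "prevent"}     # flip the next word's sentiment

def score(words):
    total, flip_next = 0.0, False
    for w in words:
        if w in REVERSIVE:
            flip_next = True
            continue
        s = LEXICON.get(w, 0.0)                # unknown words count as neutral
        total += -s if flip_next else s
        flip_next = False
    return total

print(score("the region's largest economies were still mired in recession".split()))
# negative overall: "mired" and "recession" dominate the neutral words
```

As in the article’s “killing bacteria” example, a reversive word turns a negative into a positive: `score("prevent recession".split())` comes out positive, because “prevent” flips the negative score of “recession”.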

The researchers say this approach is better than existing text-mining systems, which can inform companies what people think of them but cannot make complex links between statements. It may be, for example, that a carmaker’s customers love its latest sports car, but loathe its sound system.

As well as companies seeking to better understand their customers, intelligence agencies are also becoming interested in sentiment analysis. Some agencies are using tools developed at the University of Arizona’s artificial intelligence laboratory to map intense, violent emotion in online forums frequented by political radicals in order to identify surges of bad feelings and even potential terrorists. Hsinchun Chen, director of the laboratory, says that out of 8m postings, his software might be able to isolate hundreds of postings by just 20 or 30 individuals that warrant a closer look. But the software can only supplement human judgment—not least because people don’t always mean what they say.



What Matters in the Media

Obsessed with growth, executives overlook true competitive advantage.

The rise of the Web promised great upside for media companies: expanding markets, declining distribution costs, the chance to offer new products and services. Yes, it was clear that the Web would eventually wipe out some revenue sources, like print classifieds. But growth, it was assumed, would more than compensate for such losses.

“The Curse of the Mogul” argues that this upside was a mirage. Jonathan Knee and his co-authors, Bruce Greenwald and Ava Seave, observe that media conglomerates as a whole have underperformed since the advent of the Internet. The Web has eroded the barriers protecting traditional businesses without improving the competitive position of even one incumbent. For a new competitor, of course, lower barriers mean opportunity, but they will mean opportunity for still newer competitors, too, making it difficult to establish a sustainable advantage. Citing Warren Buffett, the authors say that companies should be “continuously digging the moat around their business.” But media companies have often done just the opposite, “inadvertently construct[ing] bridges for competitors when they think they are strengthening the moat.”

For the media, just as for other industries, genuine competitive advantage comes from limited sources. The authors sum them up as “scale, customer captivity, cost and government protection.” But media businesses, they contend, commonly pride themselves on “sham” attributes.

“Deep pockets,” said to enable investment in expensive, risky ventures, would only matter if media managers were particularly adept at spotting opportunities that capital markets might otherwise overlook. But they are not, the authors assert. Being a “first mover” sounds like an advantage—e.g., the first company to offer a search engine (AltaVista) or a cellphone (Motorola)—but it seldom leads to long-term leadership. Competitors soon learn how to do things better. On the entertainment side of the business, media companies often claim that their talent is a key source of advantage. But while talent may sometimes produce hits, the authors concede, it does not lead to a long-term advantage since talent—think of Steven Spielberg’s paycheck or, for that matter, Stephen King’s—eventually captures most of the profit.

Media companies are suffering, then, according to “The Curse of the Mogul,” because they have failed to grasp the nature of competitive advantage—and because, obsessed with growth, they have made acquisitions that cost too much and lack a strategic rationale. The authors would surely have grave doubts about the wisdom of the potential Comcast/NBC Universal tie-up, which appears to offer few synergies. In contrast, they praise Comcast’s 2001 acquisition of AT&T Broadband as “strategically sensible and flawlessly executed”—although still a “dud” because of the inflated $72 billion price tag.

Comcast is not singled out, though. Messrs. Knee and Greenwald and Ms. Seave are critical of almost all the major media deals in the past decade. Their examples—faulted for overpayment or weak business logic—include AOL/Time Warner, Viacom/CBS, Disney/Cap Cities and, not least, News Corp./Dow Jones. The authors are not against acquisitions in principle—in their analysis, Microsoft’s aborted attempt last year to acquire Yahoo for $45 billion would have been strategically sound and reasonably priced. But frequently, they say, media moguls imagine a synergy where none exists or are so obsessed with growth that they ignore price.

So are the media doomed? “The Curse of the Mogul” is not a gloomy book, for all its censure. The authors note that media consumption is greater than ever. The challenge, they say, is the rise of digital media—since the Web grinds down barriers to entry that are the source of value creation. But the authors seem to underestimate the potency of rising digital barriers. They themselves cite, in passing, the value of “network effects” (more Skype users make the service more valuable for everyone) and “customer captivity” (the more you invest in your profile on one social network, the greater the cost of switching to another).

It is true that some leading companies in social networking and communication (e.g., Twitter) have yet to implement a lucrative business model. But their strong positions may well yield rich profits in the future. For an example one need look no further than online classified advertising—which, the authors say, was the “first killer moneymaking application” on the Web. Leading online players around the world, charging fees, have withstood challenges from rivals offering listings free—suggesting significant competitive advantage. Craigslist, mythology aside, has been charging for job listings in its home market of San Francisco for more than a decade.

Given its close analysis of business economics, “The Curse of the Mogul” might have been a niche text, but Messrs. Knee and Greenwald and Ms. Seave have broadened its appeal by weaving into their analysis the colorful assertion that corporate blunders can be traced, in part, to the desire of media executives to build an empire and enhance their personal prestige. The authors drolly observe that the “media mogul” test to assess competitive advantage includes such things as getting “a good table at Spago” (Wolfgang Puck’s celebrity hangout in Beverly Hills) or wangling an invitation to “Sun Valley” (the annual media conference held in Idaho). They do allow for exceptions. They note that no one in the media “has created more value” than Michael Bloomberg, the founder of Bloomberg LP, while Rupert Murdoch, the chairman and chief executive of News Corp., “alone possesses all the elements of the perfect mogul.”

Moguls aside, the authors’ analysis—while at times idiosyncratic—provides a sharp reminder of the importance of focusing on competitive advantage and on the barriers that enable it. As they bluntly put it: “Strategy is exclusively about establishing or reinforcing barriers to entry.” Certainly no one in the media would deny the attraction of living behind a deep moat, even if it is filled with sharks.

Mr. Philips is executive vice president of News Corp., which owns Dow Jones & Co., the publisher of The Wall Street Journal.



New Mathematical Model Suggests How The Brain Might Stay In Balance

The human brain is made up of 100 billion neurons — live wires that must be kept in delicate balance to stabilize the world’s most magnificent computing organ. Too much excitement and the network will slip into an apoplectic, uncomprehending chaos. Too much inhibition and it will flatline. A new mathematical model describes how the trillions of interconnections among neurons could maintain a stable but dynamic relationship that leaves the brain sensitive enough to respond to stimulation without veering into a blind seizure.

Marcelo O. Magnasco, head of the Laboratory of Mathematical Physics at The Rockefeller University, and his colleagues developed the model to address how such a massively complex and responsive network such as the brain can balance the opposing forces of excitation and inhibition. His model’s key assumption: Neurons function together in localized groups to preserve stability. “The defining characteristic of our system is that the unit of behavior is not the individual neuron or a local neural circuit but rather groups of neurons that can oscillate in synchrony,” Magnasco says. “The result is that the system is much more tolerant to faults: Individual neurons may or may not fire, individual connections may or may not transmit information to the next neuron, but the system keeps going.”

Magnasco’s model differs from traditional models of neural networks, which assume that each time a neuron fires and stimulates an adjoining neuron, the strength of the connection between the two increases. This is called the Hebbian theory of synaptic plasticity and is the classical model for learning. “But our system is anti-Hebbian,” Magnasco says. “If the connections among any groups of neurons are strongly oscillating together, they are weakened because they threaten homeostasis. Instead of trying to learn, our neurons are trying to forget.” One advantage of this anti-Hebbian model is that it balances a network with a much larger number of degrees of freedom than classical models can accommodate, a flexibility that is likely required by a computer as complex as the brain.
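The sign flip between Hebbian and anti-Hebbian plasticity can be illustrated in a few lines of code. The sketch below is a toy under our own assumptions (random initial weights, tanh dynamics, an outer-product update), not Magnasco’s actual equations; its point is only that subtracting, rather than adding, the correlation of pre- and post-synaptic activity keeps the network’s activity bounded rather than letting runaway loops build up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                  # toy network size
W = rng.normal(0.0, 0.5, (n, n))        # random synaptic weights
np.fill_diagonal(W, 0)                  # no self-connections
eta = 0.01                              # plasticity rate

x = rng.normal(size=n)                  # initial activity
for step in range(2000):
    pre = x
    # noisy network dynamics: each neuron sums its inputs
    x = np.tanh(W @ pre) + 0.1 * rng.normal(size=n)
    # Anti-Hebbian rule: when pre- and post-synaptic activity correlate,
    # the connection is WEAKENED -- the opposite sign of Hebbian learning.
    W -= eta * np.outer(x, pre)
    np.fill_diagonal(W, 0)

# Activity neither blows up (seizure) nor dies out (flatline):
print(round(float(np.mean(np.abs(x))), 2))
```

With a Hebbian sign (`W += ...`) the same loop strengthens whatever oscillates together; the anti-Hebbian sign actively damps those correlations, which is the homeostatic behavior the model is after.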

In work published this summer in Physical Review Letters, Magnasco theorizes that the connections that balance excitation and inhibition are continually flirting with instability. He likens the behavior to an indefinitely large number of public address systems tweaked to that critical point at which a flick of the microphone brings on a screech of feedback that then fades to quiet with time.

This model of a balanced neural network is abstract — it does not try to recreate any specific neural function such as learning. But it requires only half of the network connections to establish the homeostatic balance of excitation and inhibition crucial to all other brain activity. The other half of the network could be used for other functions that may be compatible with more traditional models of neural networks, including Hebbian learning, Magnasco says.

Developing a systematic theory of how neurons communicate could provide a key to some of the basic questions that researchers are exploring through experiments, Magnasco hopes. “We’re trying to reverse-engineer the brain and clearly there are some concepts we’re missing,” he says. “This model could be one part of a better understanding. It has a large number of interesting properties that make it a suitable substrate for a large-scale computing device.”



On the Internet, Everyone’s a Critic But They’re Not Very Critical

Average Review Is 4.3 Out of Five Stars; Jerkface Fights Back and Gets Bounced

The Web can be a mean-spirited place. But when it comes to online reviews, the Internet is a village where the books are strong, YouTube clips are good-looking and the dog food is above average.

One of the Web’s little secrets is that when consumers write online reviews, they tend to leave positive ratings: The average grade for things online is about 4.3 stars out of five.

People like Jonas Luster aim to introduce a little negativity. A private chef, Mr. Luster recently beckoned fellow San Francisco area diners to “quit with the nicey-nicey” in a blog post titled “In Defense of Negative Reviews.” His own average rating on restaurant-review sites is 3.6. He even awarded celebrity chef Alice Waters’s Chez Panisse restaurant a 1-star rating after he felt he had been served an overdone duck.

“I am a meanie,” says the 36-year-old from Fremont, Calif. “My pet peeve is menus that say something is cooked ‘to perfection.’ Perfection is a state you never attain.”

Mr. Luster is part of a movement on the Web that’s taking aim at 4.3, a figure reported as the average by companies like Bazaarvoice Inc., which provides review software used by nearly 600 sites. Amazon.com Inc. says its average is similar.

Many companies have noticed serious grade inflation. Google Inc.’s YouTube says the videos on its site average 4.6 stars, because viewers use five-star ratings to “give props” to video makers. Buzzillions.com, which aggregates reviews from 3,000 sites, has tracked millions of reviews and has spotted particular exuberance for products such as printer paper (average: 4.4 stars), boots (4.4) and dog food (4.7).

If the rest of the Internet is filled with nasty celebrity blogs and email flame wars, what makes product reviews sites so lovey-dovey? “If you inspire passion in somebody in a good way or a bad way, that is when they want to write a review,” says Russell Dicker, the senior manager of community at Amazon.

His boss, Amazon’s Chief Executive Jeff Bezos, follows that pattern. He has posted five-star reviews for products like Tuscan brand whole milk and some “ridiculously good cookies” sold on the site. Mr. Bezos’s only non-five-star review: one star for a science-fiction movie, “The 13th Warrior.”


Elizabeth Chiang is becoming a tougher critic on Yelp.

Culture may play a role in the positivity: Ratings in the U.K. average an even higher 4.4, reports Bazaarvoice. But the largest contributor may be human nature. Marketing research firm Keller Fay Group surveys 100 consumers each week to ask them about what products they mentioned to friends in conversation. “There is an urban myth that people are far more likely to express negatives than positives,” says Ed Keller, the company’s chief executive. But on average, he finds that 65% of the word-of-mouth reviews are positive and only 8% are negative.

“It’s like gambling. Most people remember the times they win and don’t realize that in aggregate they’ve lost money,” says Andy Chen, the chief executive of Power Reviews Inc., a reviews software maker that runs Buzzillions.

That’s why Amazon reviewer Marc Schenker in Vancouver has become a Web-ratings vigilante. For the past several years, he has left nothing but one-star reviews for products. He has called men’s magazine Maxim a “bacchanalia of hedonism,” and described “The Diary of Anne Frank” as “very, very, very disappointing.”

The vast majority of reviewers on Amazon “are a bunch of brown-nosing cheerleaders,” says Mr. Schenker, who reviews under pseudonyms including Jerkface. “In an online store selling millions of items, there’s bound to be many, many awful ones,” he says.

Mr. Schenker suspects that Amazon intentionally deletes negative reviews so it can sell more products. It did kick him off the site last year and, he says, won’t even let him make purchases. The company wouldn’t comment on his removal, but a letter he says he received from Amazon describes his posts as “rude, harassing and abusive to others.”

Other critical reviewers say they get flak for their brutal honesty. Mark Nuckols, an American teaching finance in Moscow whose Amazon book ratings average a three, says he’s concerned by what he senses is a practice of “pre-emptive deletion.” When he posted a “mildly critical review” of a recent children’s book by Tom Tomorrow, it never surfaced. When he tried to post a review of another Tom Tomorrow book, it didn’t show up, either.

An Amazon spokeswoman wouldn’t comment on Mr. Nuckols’s experience, but said that the company allows negative comments if they don’t contain distasteful language.

Some suspect companies goose their ratings. This summer TripAdvisor.com, which averages just above a four, posted warnings that some of its hotel reviews may have been written by hotel managers. But review sites say the incidence of fakes is tiny, and many pay people to delete puffery.

Other sites admit they have a positivity problem and are taking novel steps to curb the enthusiasm. One way is to redefine average. Reviews of eBay’s millions of merchants were so positive that eBay made 4.3 out of five stars its minimum service standard. Beginning this month, it is switching to a system that counts just the number of one- and two-star reviews. Sellers who get more than 3% to 4% of those ratings could get kicked off eBay.
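The logic of eBay’s described approach — ignore the average, count only the share of low ratings — is simple to sketch. The function names and the exact threshold handling below are our assumptions for illustration, not eBay’s actual code:

```python
def low_star_share(ratings):
    """Fraction of ratings that are 1 or 2 stars."""
    if not ratings:
        return 0.0
    return sum(1 for r in ratings if r <= 2) / len(ratings)

def at_risk(ratings, threshold=0.04):
    """True if the seller exceeds the (assumed) ~4% low-star cutoff."""
    return low_star_share(ratings) > threshold

# A seller with glowing averages can still trip the rule:
seller = [5] * 90 + [4] * 5 + [1] * 5   # average 4.7, but 5% one-star
print(low_star_share(seller))           # 0.05
print(at_risk(seller))                  # True
```

The example shows why the switch matters: a 4.7-star average hides the 5% of buyers who had a bad experience, and the count-based rule surfaces exactly that.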

Another site, Goodrec, decided to ditch the five-star rating system altogether, replacing it with a thumbs-up and thumbs-down system. Amazon now highlights what it dubs “the most helpful critical review” at the top of its reviews page.

Jeremy Stoppelman, chief executive of Yelp, which posts reviews of local businesses in cities around the country, bragged in September that his site’s reviews were more diverse. The average review on Yelp is 3.8. Many assume online reviews are “only rants or raves, resulting in consumer Web sites composed solely of ratings on the extremes,” he blogged. “A broader range of opinions can give consumers a more complete view of a business,” he says.

Being more negative is something that comes with practice, says Elizabeth Chiang, a 26-year-old financial consultant, who posts a lot of local business reviews on Yelp. When she began writing them in 2006 she was easily impressed by the wide variety of bars and restaurants in New York. “I thought everything was awesome,” she says.

But after reflecting upon her reviews, she realized recently “it’s kind of meaningless if every one is great.” Now Ms. Chiang writes a review only after trying a restaurant at least twice, and has lowered her average to a 3.6, on about 250 write-ups. In a recent review, she said that one cocktail tasted like “listless, ennui-crippled sugar water.”

Geoffrey Fowler, Wall Street Journal



Six sins of online dating


Experts say people don’t realize that the rules of engagement apply on the Internet.

It may be easier to approach that cute single from behind the safety of your computer screen than in a crowded bar, but the worlds of online and offline dating aren’t so different. The rate of rejection is still high: Only one in three of the first messages sent by members of dating site OKCupid ever gets a response (sorry guys, the rate is only 27 per cent for you).

According to dating experts, this is because people don’t realize that the same rules of engagement apply on the Internet. The team at OKCupid recently pored over 500,000 first messages sent by the site’s members and tracked response rates. If you’re striking out, it may be because you’re committing one of these sins.

You’re gushing about looks

Flattering someone you’re interested in is a good way to go in that first message, but make sure it’s with the right compliments.

Messages sent on OKCupid with the words “sexy,” “beautiful” and “hot” had much lower response rates than those that were more personality-based, such as “awesome,” “fascinating” and “cool.”

“When you meet someone in a bar, all you have to go with has to do with physical appearance. Because of profiles, the expectation is that you should have something more than just, ‘You’re pretty,’” said Sam Yagan, co-founder of OKCupid.



U luv netspeak

Twitter, Facebook and IMs have helped push netspeak into the mainstream, but you should clean up your act for that first message on an online dating site. Ditch the slang for proper language – and make sure you proofread.

“A typo means they’re not paying attention to detail. It’s a metaphor for what they’re like in a relationship,” said Rachel Greenwald, a dating coach and author of Why He Didn’t Call You Back.

Rachel O’Neill, a 32-year-old gardener and OKCupid member from Victoria, agrees that poor grammar and spelling are a major turnoff.

“It’s not that much more work, really, to say ‘oh my God’ than ‘omg.’”

You’re moving too fast

Asking someone you think is a great match for their IM screen name or e-mail address in your first message can be tempting – but take it slow. First messages sent through OKCupid that contained the words “chat,” “e-mail,” “yahoo” and “msn” had a response rate of only 10 or 11 per cent.

“Imagine a man approaches a woman in a bar – the art is when do you ask for the phone number,” Mr. Yagan said.

You’re not putting your best face forward

The picture you choose may occupy a small piece of real estate on your profile page, but women especially should be careful about what image they pick.

“[Men] scrutinize every element of the photo,” Ms. Greenwald said.

Lilith Darling, a 40-year-old promotions manager in Halifax, has proof: While few men comment on the text on her Plenty of Fish page, she gets countless messages asking her if she’s a porn star – likely because she posted a picture of herself with porn king Ron Jeremy.

Audrey, a 30-year-old community development worker in Victoria, said photos of men in natural settings – such as restaurants or outdoors among friends – are what win her over on dating sites. She’s messaged guys in the past to tell them to delete their webcam shots – the worst type of profile picture, she says.

“They look like they’re just slouching over after a five-hour World of Warcraft raid.”

Your opening is bland

You may not put too much thought into your opening line, but those few words could be the key to a conversation – or rejection.

Mr. Yagan theorizes that users who open with generic salutations such as “hello” have much lower response rates than those who say “how’s it going?” for a simple reason: The latter salutation is a question.

“There’s something in there engaging the other person,” he said.

You’re talking too much

While short messages of “hi” or “ur hot” that do little to stimulate conversation are a no-no, what’s even worse are mini-essays. The OKCupid team crunched numbers to find that the optimum length of a message is 200 characters: just a bit longer than a tweet.

While you may want to tell someone in 850 words about how you also enjoyed scuba diving while on vacation in Borneo, Kate Bilenki, an operations manager at, says that’s “too much detail way too soon.”

Dakshana Bascaramurty, Globe and Mail



On the Web, forever has a due date

If GeoCities — once the most popular face of personal Websites — can disappear, what about YouTube, Google Docs and Facebook?

Let’s imagine, for a moment, that the year is 2019, and we have dragged ourselves into the future with a minimum of apocalypse.

Picture yourself sitting in front of your news-o-scope (my patent is pending) when up pops word that a website you were really into a decade ago is shutting down.

“Facebook!” you exclaim. “I remember Facebook! I posted 250,000 pictures to Facebook. My lost youth!”

If it sounds improbable that everything you’ve piled into Facebook might evaporate in just 10 years, then consider: One of the biggest websites of the late 1990s is about to get deleted.

GeoCities may have been kind of an amateur extravaganza where the little under-construction guy shovelled away all day and all night, but there was something glorious about that


At the end of October, Yahoo will pull the plug on GeoCities, the service that more than 1 million people used to set up web pages. On Oct. 27, the whole thing will simply cease to exist. It will, as we say in the industry, go poof.

This poofing business does not bode well. Lately, there’s been so much discussion about the permanence of information – especially the embarrassing kind – that we have overlooked the fact that it can also disappear. At a time when we’re throwing all kinds of data and memories onto free websites, it’s a blunt reminder that the future can bring unwelcome surprises.

Ten years ago, you could have called GeoCities the garish, beating heart of the Web. It was one of the first sites that threw its doors open to users and invited them to populate its pages according to their own creativity. At a time when the Web was still daunting, it encouraged laypeople to set up their own homepages free of charge.

And that’s exactly what laypeople did. GeoCities exploded, turning into a gaudy carnival of websites devoted to everything from Civil War history to ichthyology, from quilting to Quaaludes. The place was designed around an urban metaphor, divided into cutely named “neighbourhoods” according to content. Nobody seemed to police what went where, which meant you could explore without knowing what you were looking for, or what you might stumble over next.

(As a cloistered university student, I recall trawling through the American political pages and meeting my first bigot, who had written up his unpleasant views on sexual politics – all of them in lime-green type on a black background. I was fascinated. We corresponded. To this day, I fondly remember the time when there was only one bigot on the Internet, and he answered his e-mail.)

Alas, the site never excelled at the money-making thing and its ham-fisted attempts to turn a buck drove users away. In 1999, Yahoo purchased GeoCities for $3.57-billion in stock, which turned out to be $3.57-billion too much. The world moved on, and GeoCities faded into a ghost town.

And now, it’s curtains. GeoCities won’t disappear entirely. The Internet Archive – a non-profit foundation based in San Francisco dedicated to backing up the Web for posterity’s sake – is trying to salvage as much as it can before the deadline hits. At least one other independent group is trying to do the same. But this complicates things, because it puts GeoCities users’ data into the hands of an unaccountable third party.

Money-losing websites aren’t exactly novelties. Smaller sites flicker in and out of existence like those bugs that only have 18 hours to mate before they die. But it’s disconcerting to see a big site – one that, long ago, was one of the most popular on the Web – not just fade into obscurity, but come to its end game.

It brings to light some truths about data that are easily overlooked. Websites are like buildings: you can’t just abandon them indefinitely and expect them to keep working. For one thing, electronic storage isn’t free. Storing files requires media that degrade, computers that fail and power that needs paying for.

And data futures are more important than ever. In an immediate sense, think of how many photos you’ve shovelled onto Facebook lately. Or e-mails into Gmail or docs into Google or Tweets into Twitter. As I type this, I’m uploading a batch of photos to Flickr, a photo-sharing service that’s also owned by Yahoo.

It is clear that online storage is taking over more and more tasks that were the domain of personal computers. There is even a buzzword for the trend: “cloud computing.”

It seems unlikely that Facebook or Google or Yahoo or Microsoft will crumble to dust any time soon. And as we feed their gaping maws with ever more personal data – social connections, correspondence, photos and work documents that may not be backed up on hard drives – it’s easy to get lulled into thinking they’re too big to fail.

It’s not hard to save a quick copy of an old GeoCities page and walk away. But what happens to your thousands of photos and photo captions and comments on photos, should the future prove equally unkind to Facebook?

Where do your Gmail messages, YouTube clips and Google Documents go in the event that Google’s search-based advertising model – its golden egg – doesn’t survive the introduction of my news-o-scope?

Companies can promise a great many things, and I’m willing to believe most of them. But they can’t promise to be there forever. We should stop whistling on, doe-eyed, pretending like they have.

Ivor Tossel, Globe and Mail



Advantage Google

Three hundred years ago, Daniel Defoe offered a memorable image for the relationship between authors and their work: “A Book is the Author’s Property, ’tis the Child of his Inventions, the Brat of his Brain.”

The line comes from an essay Defoe wrote in support of the first-ever copyright act, the 1710 Statute of Anne. That law, one of the great inventions of human civilization, managed to do two good things at once: it gave writers ownership of their work, thus freeing them from patronage, and it limited the term of ownership to 28 years, thus giving the rest of us a public domain, a world of print we all may enter because no one owns it.

Defoe’s metaphor nicely points toward copyright’s public ends: both books and brats grow up; their relationship to those who bore them changes over time. Like a farmer’s children, books must help their author make hay until they come of age, whereupon they are free to leave home and participate in the larger community.

This history has been on my mind recently because it is about to reappear in a courtroom where Judge Denny Chin of the Federal District Court for the Southern District of New York will very likely hold a hearing later this fall on the proposed settlement of the lawsuit brought by authors and publishers against Google, after Google made digital copies of millions of in-copyright books. The settlement is currently being revised in the wake of objections raised by the Department of Justice and other parties. But whatever form it takes, before he approves it Judge Chin will have to deal with the ghost of Defoe’s parental metaphor, come now to pose a riddle Defoe never had to consider: What shall we do with the orphans?

Orphan works are all those Brats whose copyrights are still active but whose parents cannot be found. There are millions of them out there, and they are gumming up the world of publishing. Suppose a publisher wants to print an anthology of 1930s magazine fiction. Copyright now lasts so long (a century in many cases) that the publisher must assume that there are rights holders for all those stories. Suppose that half the owners can’t be found. What should the publisher do? Its lawyers will advise abandoning the anthology: statutory damages for copyright infringement now stand between $750 and $150,000 per instance.

Less hypothetically, when Carnegie Mellon University tried to digitize a collection of out-of-print books, one of every five turned out to be orphaned. When Cornell tried to post a collection of agricultural monographs online, half were orphans. The United States Holocaust Memorial Museum owns millions of pages of archival documents that it can neither publish nor digitize.

Of more than seven million works scanned by Google so far, four to five million appear to be orphaned. If Judge Chin approves the settlement in something close to its current form, the authors and publishers will let Google commercialize these works — sell them, display them online with ads, charge libraries for their use, and more. A portion of the money thus earned will go to Google outright; the rest will go to a new Book Rights Registry, where it will regularly be set aside for five years waiting for absent owners to claim it. At the end of each five-year period, all unclaimed funds will be distributed to the authors and publishers whose works the registry represents.

This is a smart way to untangle the orphan works mess, but it has some serious problems, the most obvious being that it treats orphans as if they were Brats who can be set to work for families who had no hand in their creation. Nothing in the history of copyright can possibly allow for such indenture. In an essay written late in life, James Madison explained that copyright is best viewed as “a compensation for a benefit actually gained to the community.” There were good reasons, he wrote, to give authors a “temporary monopoly” over their work, “but it ought to be temporary” because the long-term goal is to enrich public knowledge, not private persons.

Madison honors the same beneficiaries found in the Statute of Anne, the writer and the rest of us. In no case are third parties meant to profit, as the Google settlement would allow. To let them do so would be like letting an executor drain an estate whose rightful heirs cannot be found.

Surely there are better ways to dispose of orphan income. The Department of Justice in fact suggested one two weeks ago, when it issued a critique of the proposed settlement saying, among other things, that the court might do as we do with actual orphans: appoint a guardian to look out for them until they come of age. In this case, I believe, such a guardian would have to be charged with service to both the rights holders and the public good. He would have to try to find lost owners and pay them their due; should no owners be found, he would have to devise a way to release these works to the public domain. (He could simply require that users who’ve been charged for orphans get their money back, or that the fees Google charges libraries be lowered in proportion to revenue collected in error.)

The idea of a guardian obliged finally to serve public ends suggests a second way to expand Defoe’s metaphor. The Brat of the Brain has never been thought of the way that European nobility once thought of their land, as something to be handed down generation after generation. A copyright may be inherited, yes, but not in perpetuity. At this nation’s founding, “perpetuities” were understood to be one of the devices by which aristocracy maintained its power, and the founders therefore looked on forms of long-term ownership with a skeptical eye.

Jefferson especially believed that no generation had a right to bind those that followed. “The earth belongs . . . to the living,” he wrote to Madison in 1789; “the dead have neither powers nor right over it.” That being the case, “perpetual monopolies” in arts “ought expressly to be forbidden,” Jefferson’s own suggestion being that copyright run no more than 19 years.

Such time-limited ownership relocates inheritance to serve democratic rather than aristocratic ends. Where Europeans had shaped inheritance to serve powerful families, Americans would shape it so that something new under the sun — “the people” — might receive the legacy of all their forebears had created. The founders valued “civic virtue,” the honor that private citizens acquire by acting for the public good. By insisting that copyright exist only for “limited times” (as the Constitution says), they suggested a way that law itself might engender virtue, transforming the fruits of human imagination from private into common wealth by the mere passage of time.

The point here, of course, is that the parties to the Google settlement are asking the judge to let them be orphan guardians but without any necessary obligation to the public side of the copyright bargain. Quite the opposite: if Judge Chin grants them a pass to profit from orphan works, he will also be granting them a private monopoly in digital books.

Why? Because the Google case is a class-action lawsuit structured such that it will bind all rights holders unless they opted out by a deadline that passed last month. The missing owners of orphan works could not do that, of course; by definition they don’t even know this litigation concerns them. Now, included by default in the proposed settlement, their Brats are being readied for trade.

That does free the orphans from copyright limbo, but here’s the catch: They will effectively belong only to Google and the other settling parties. It will be almost impossible for any other online player to get the same right to use them. The only way a potential competitor could avoid the threat of statutory damages would be to do what Google did: scan lots of books, attract plaintiffs willing to form a class with an “opt out” feature, negotiate a settlement and get it approved by a judge. Even for those with time and money to spare, that promises to be an insurmountable barrier to entry.

Thus does the settlement portend Google’s unlimited dominion over electronic books. By aggregating the monopoly power latent in each orphan, the proposed agreement doesn’t just get the Brats to work on Google’s farm; it secures for Google a lasting monopoly in this newest of book trades. Talk about making hay!

Lewis Hyde is a professor of creative writing at Kenyon College and a fellow of the Berkman Center for Internet and Society at Harvard. His book “The Gift: Creativity and the Artist in the Modern World” was recently reissued in paperback.



The U.S. Abandons the Internet

Multilateral governance of the domain name system risks censorship and repression.

There’s a lot of concern out there right now about America’s world leadership—facing down Iran’s nuclear program, bracing NATO’s commitment in Afghanistan, maintaining free trade. Here’s something else to worry about: Has the Obama administration just given up U.S. responsibility for protecting the Internet?

What makes it possible for users to connect with all the different Web sites on the Internet is the system that allocates a unique electronic address to each site. The addresses are organized within larger entities called top-level domains—”.com,” “.edu,” “.gov” and so on. Overseeing this arrangement is a relatively obscure entity, the Internet Corporation for Assigned Names and Numbers (ICANN). Without the effective oversight of ICANN, the Internet as we know it would not exist, billions of dollars of online commerce and intellectual property would be at risk, and various forms of mass censorship could become the norm.
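The hierarchy described above — site addresses nested under top-level domains — can be made concrete with a trivial parser. This is only an illustration of the naming structure ICANN oversees, not how real DNS resolution works (and multi-part suffixes such as “.co.uk” would need a public-suffix list, which this toy ignores):

```python
def top_level_domain(hostname: str) -> str:
    """Return the label after the last dot, e.g. 'com' for 'example.com'."""
    return hostname.rstrip(".").rsplit(".", 1)[-1]

for host in ["example.com", "www.mit.edu", "whitehouse.gov"]:
    print(host, "->", top_level_domain(host))   # com, edu, gov
```

Every debate in the article — whether to create “.xxx” or “.food”, who must re-register trademarks where — is ultimately a fight over which strings are allowed in that final label and who decides.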

Since its establishment in 1998, ICANN has operated under a formal contract with the U.S. Department of Commerce, which stipulated the duties and limits that the U.S. government expected ICANN to respect. The Commerce Department did not provide much active oversight, although the need to renew this contract, called the Joint Project Agreement (JPA), helped keep ICANN policies within reasonable bounds. That’s why last spring, when the Commerce Department asked for comment on ending the JPA, the U.S. business community opposed the idea.

But the U.S. government’s role in ICANN has long been a source of complaint from foreign nations. United Nations conferences have repeatedly voiced concerns about “domination of the Internet by one power” and suggested that management of the system should be handed off to the International Telecommunication Union—a U.N. agency dominated by developing countries. The European Union has urged a different scheme in which a G-12 of advanced countries would manage the Internet.

The Obama administration has declined to endorse such alternatives. Instead it has replaced the latest JPA, which expired Sept. 30, with a vaguely worded “Affirmation of Commitments.” In it, ICANN promises to be a good manager of the Internet, and the Commerce Department promises—well, not much of anything. The U.S. will participate in a Governmental Advisory Committee along with some three dozen other nations but claims no greater authority than any other country on the committee, whose recommendations are not binding on ICANN in any case.

An ICANN cut loose from U.S. government oversight will not, for that reason, be free from political pressures. One source of pressure will come from disputes about expanding top-level domain names. For example, would a “.xxx” domain help to isolate pornographic sites in a unique (and blockable) special area, or would it encourage censorship in other domains by suggesting that offensive images only appear there? Should we have “.food” or “.toys” along with “.com” domains? If we do, as the Justice Department warned last year in a letter to Commerce, companies that have invested huge sums to protect their trademarks under “.com” will have to fight for protection of their names in the new domains. Yet strangely, there is not a word in the new plan about protecting trademark rights or other intellectual property interests that might be threatened by new ICANN policies.

Even more disturbing is the prospect that foreign countries will pressure ICANN to impose Internet controls that facilitate their own censorship schemes. Countries like China and Iran already block Web sites they regard as politically objectionable. Islamic nations insist that the proper understanding of international human-rights treaties requires suppression of “Islamophobic” content on the Internet. Will ICANN be better situated to resist such pressures now that it no longer has a formal contract with the U.S. government?

It may be that the Obama administration expects to exert a steadying hand on ICANN in indirect or covert ways. Or here too it may have calculated that winning applause from other nations now is worth taking serious risks in the long run.

Mr. Rabkin is professor of law at George Mason University. Mr. Eisenach is an adjunct law professor at George Mason and chairman of Empiris LLC, which does consulting work for Verisign, an Internet registry.



Software mimics ant behavior by swarming against cyber threats

Looking to create computer defenses that adapt well to the cat-and-mouse game played between computer users and cyber attackers, a team of researchers has turned to one of nature’s most effective militias—ants. Computer scientists at Wake Forest University in Winston-Salem, N.C., and the Pacific Northwest National Laboratory in Richland, Wash., are studying whether software written to behave like an army of “digital ants” can successfully find and flag malicious software (or malware).

“In nature, we know that ants defend against threats very successfully,” Wake Forest computer science professor Errin Fulp said in a prepared statement. “They can ramp up their defense rapidly, and then resume routine behavior quickly after an intruder has been stopped. We were trying to achieve that same framework in a computer system.”

To prove that their “swarm intelligence” model could more quickly and thoroughly scan for malware, Fulp and his colleagues developed a way to divide up the process of searching for security threats across 64 computers networked together. As the digital ants sought out potential security problems, they left a digital trail of their progress, much the same way normal ants leave behind a scent that can be picked up and followed by other ants. When the researchers unleashed a worm on the network, the digital ants were able to find it.
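The mechanism described above can be sketched roughly in code. This is a guessed-at illustration, not the researchers’ actual software; the function, parameters, and thresholds are all invented: ants wander among hosts, deposit “pheromone” where they find an infection, drift toward stronger trails, and the trails evaporate so the swarm settles back down afterward.

```python
import random

def run_ants(hosts, infected, steps=200, ants=10, evaporation=0.9):
    """Invented sketch of pheromone-guided malware scanning."""
    pheromone = {h: 0.0 for h in hosts}   # trail strength per host
    positions = [random.choice(hosts) for _ in range(ants)]
    flagged = set()
    for _ in range(steps):
        for i, host in enumerate(positions):
            if host in infected:
                pheromone[host] += 1.0    # leave a trail at the find
                flagged.add(host)
            # move preferentially toward hosts with stronger trails
            weights = [1.0 + pheromone[h] for h in hosts]
            positions[i] = random.choices(hosts, weights=weights)[0]
        for h in pheromone:
            pheromone[h] *= evaporation   # trails fade; routine behavior resumes
    return flagged

hosts = ["host%d" % n for n in range(64)]  # the 64 networked machines
print(run_ants(hosts, infected={"host13"}))
```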

This approach differs from conventional computer security software, which can for the most part be programmed to search only for known malware. Makers of this software often update it with descriptions of new viruses and worms, but this reactive model keeps computer users at least a step behind their adversaries. Fulp and his team hope that the sharing of information among the digital ants will lead to computer defense systems that can find malware written with slight variations in order to avoid detection.

Computer scientists are already studying programs that act like swarming ants to help alleviate telecommunications system bottlenecks. “The foraging of ants has led to a novel method for rerouting network traffic in busy telecommunications systems,” Eric Bonabeau and Guy Theraulaz wrote in an article, “Swarm Smarts,” in Scientific American’s 2008 special report on robots. Bonabeau is chief executive and chief scientific officer at Icosystem Corporation in Cambridge, Mass., while Theraulaz is a research director at the Research Center on Animal Cognition of the National Center for Scientific Research (CNRS) at Paul Sabatier University in Toulouse, France.

Computer-maker Hewlett-Packard and the University of the West of England together invented a network routing technique in which antlike agents deposit bits of information, or “virtual pheromones,” at telephone network nodes (or switching stations), according to Bonabeau and Theraulaz. These mark less congested areas of the network that could be used by phone companies to divert surges in traffic on the network.
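The virtual-pheromone routing idea can be sketched in a few lines. Again, this is an invented toy model, not HP’s actual technique: agents reinforce links in inverse proportion to congestion, old pheromone evaporates, and route choice follows pheromone strength, so traffic drifts toward quieter parts of the network.

```python
import random

def choose_link(pheromone):
    """Pick an outgoing link with probability proportional to pheromone."""
    links = list(pheromone)
    return random.choices(links, weights=[pheromone[l] for l in links])[0]

def update(pheromone, link, congestion, deposit=1.0, evaporation=0.95):
    """Evaporate all trails, then reinforce the traversed link;
    congested links receive less reinforcement."""
    for l in pheromone:
        pheromone[l] *= evaporation
    pheromone[link] += deposit / (1.0 + congestion[link])

pheromone = {"A": 1.0, "B": 1.0}   # two candidate links out of a node
congestion = {"A": 9.0, "B": 0.0}  # link A is busy, link B is quiet
for _ in range(500):
    update(pheromone, choose_link(pheromone), congestion)
# the quiet link usually ends up with far more pheromone,
# so new calls are diverted toward it
print(max(pheromone, key=pheromone.get))
```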

Larry Greenemeier, Scientific American



If Skype should fall

There are now plenty of able-bodied alternatives

SEVERAL years ago, your correspondent found his telephone bill was getting out of hand and vowed to halve it. The obvious answer was to sign up for a free Skype account—and get the benefit of computer-to-computer phone calls around the world for nothing plus calls to conventional landline phones for little more than two cents a minute.

At the time, he was paying his landline carrier (Verizon) five cents a minute for local calls, 11 cents for cross-country calls and an average of 16 cents a minute for international calls. Overseas calls accounted for half his monthly bill.

Before deciding to hang up his landline, he looked at a number of alternatives to Skype—including Gizmo Project (now called Gizmo5), SightSpeed, GrandCentral, TalkPlus, iSkoot, Mobivox, ooVoo, Jajah, Jangl and others with even sillier names. At the time, none came close to challenging Skype in terms of features, convenience and popularity.

One special attraction for a roving correspondent was Skype’s videoconferencing facility, which was simplicity itself to use. Also, with almost 200m users (now 480m) signed up for the service, there was a fair chance that many colleagues and acquaintances would already have Skype accounts and be readily accessible. The clincher was that Skype also ran on dozens of mobile phones, portable game consoles and other internet appliances. Today, that includes Apple’s iPhone and iPod Touch.

That was not to say Skype was without its problems. The lack of adequate security had been a concern since the day it was launched in 2003. Enthusiasts are quick to point out that Skype has some of the best encryption technology around for preventing eavesdroppers from listening in on conversations. That is true. But Skype’s ability to evade wiretapping is of little concern for most users. For business users especially, the main concern is that Skype provides an ideal vehicle for delivering malware into the inner sanctum of an organisation, as well as for sneaking corporate secrets out.

Remember, Skype was designed by the same Estonian whizz-kids who created the all but unblockable KaZaA file-sharing network that rocked the music industry a decade ago, and it uses the same proprietary “peer-to-peer” architecture capable of tunnelling through firewalls. With traffic forwarded from one computer to another via an inner circle of some 20,000 super-nodes, Skype has no central servers directing the traffic flow, logging the calls and preventing viruses, Trojan horses and spyware from piggybacking on the flow of encrypted data. That can be a serious concern for small firms and home users who lack the professional means to protect themselves.

Another worry has been the way anyone can join Skype’s network without proof of identity. In fact, users can set up numerous accounts under different fictitious names and go wholly unchallenged. That makes it a jungle where antisocial behaviour is common.

Despite such reservations, your correspondent has found Skype a handy way of staying in touch with friends and family, and having business meetings. Over the past few years it has saved him literally thousands of dollars in travel costs alone. He has had countless video conferences using his office PC or laptop while on the road. He also carries a slick little Belkin handset that can make Skype calls over Wi-Fi networks without the need for a computer. With free Wi-Fi hotspots in public places throughout California, the Skype phone gets more use than its owner’s mobile.

But now, suddenly, storm clouds are gathering over Skype. In a flurry of lawsuits, the investment group that recently agreed to acquire 65% of Skype from eBay for $1.9 billion is being sued by the original founders, Janus Friis and Niklas Zennstrom, who sold the company to eBay for $2.6 billion in 2005.

Back then, eBay was granted a licence to use the founders’ proprietary peer-to-peer software—but not the right to open it up for developers to tinker with. Among other things, eBay and Skype’s new would-be owners are being sued for breach of this licensing agreement, copyright infringement and theft of trade secrets. Meanwhile, their licence to use the so-called Global Index Software has been terminated by Joltid, the Stockholm firm the founders set up to license their proprietary software.

Whatever the motives behind these messy legal wranglings, the threat to Skype’s future is real enough to cause concern among those who use it. It was bad enough two years ago when Skype fell silent for 36 hours after the mass downloading of a Microsoft “Patch Tuesday” release, which unexpectedly brought the peer-to-peer network to its knees. The scurry by worried Skype users to get a backup plan in place crippled a number of other internet telephony services as traffic to them suddenly rocketed.

As a precaution, your correspondent has been taking a fresh look at some of the alternatives he dismissed last time round. He has opened accounts with a couple of the more desirable ones in case they are overwhelmed by another sudden rush to sign up if Skype goes offline again. Most of the alternatives are now far more competitive than they were three years ago. A few are every bit as good as (or even better than) the latest version of Skype, though none has anything like the mass appeal.

For Macintosh users, iChat is everything you would expect of Apple—slick, simple and with stunning graphics. Its voice quality is even better than Skype’s. The video chat feature lets you set up multi-person conferences on the fly. And it is less of a bandwidth hog than Skype. All you need is an internet connection and a video camera, plus an account with one of the more popular instant-messaging services, such as AIM, Google Talk, Jabber or MobileMe—and, of course, a Macintosh computer running Mac OS X.

The choice for Windows users is wider, though few of the products are as polished as iChat. SightSpeed comes close. It is delightfully simple to set up and use, and provides excellent 30 frames-a-second video with crisp audio and little delay. You can also send video e-mail and text chat with its built-in instant-messaging service. And it works on Macs as well as PCs.

One drawback: the SightSpeed software (called Logitech Vid) is free only to those using a video camera made by Logitech; otherwise, all you get is a 30-day free trial. If you are planning to buy a stand-alone video camera for your computer, the SightSpeed service is reason enough to choose a Logitech device. Those with laptops that have a video camera already built in are better off looking elsewhere.

If making “SkypeOut” calls to landline and mobile phones—as well as making free voice and video calls from computer to computer—is important to you, then look no further than Gizmo5. This is identical to Skype in most respects save one: it uses open standards for managing calls, though its compression algorithms and client software are as proprietary as Skype’s. However, by embracing the popular internet-signalling standard called Session Initiation Protocol (SIP), Gizmo5’s free software can work seamlessly with other SIP-based networks, including the phone companies’.

Depending on the carrier and the handset used, that can mean free—or, at least, much cheaper—calls to landline and mobile phones, as well as free voice and video calls between computers. For many, that is enough to make Gizmo5 an even better deal than Skype. Should Skype go silent (or even if it does not), Gizmo5 could well pick up much of the running.



Klein Drowns in the Ethical Shallows

On Friday, Post blogger Ezra Klein took a short break from Barack Obama’s unpaid policy staff to respond to my last column and dismiss the importance of bigotry and hatred on the Internet. “That doesn’t describe the Internet I know,” he claims, “but the Internet is big, and Gerson might visit parts I miss.”

Sometimes innocence, however, is merely ignorance. You don’t need to trawl the seedier portions of the Internet to be familiar with a growing literature on Internet hate. An education on this topic might include the Simon Wiesenthal Center report, “Facebook, YouTube +: How Social Media Outlets Impact Digital Terrorism and Hate.” Or the Anti-Defamation League’s recent conference, “The Internet Is Making Anti-Semitism Socially Acceptable.” Or the ADL’s report on how mainstream Web sites were flooded with anti-Semitism in the wake of the Madoff scandal. Or a variety of resources on the role of the Internet in promoting Middle Eastern anti-Semitism and the revival of racist ideology in Germany.

In preparing my Friday column, I found an interview with David Goldman by the Southern Poverty Law Center particularly interesting. After monitoring Internet hate sites for many years, Goldman has concluded that the main dangers are now found in chat rooms, comment boxes and email. “In chat rooms,” he says, “which are populated mainly by young people, you can swear and use racial epithets with a certain amount of ease, and that helps to support your own stereotypes and racial bigotry. Unlike hate sites, these chat rooms create a sense of immediacy and community.”

These are the type of sources one encounters while doing extensive research for a column. A blogged response to a column, of course, is free from such archaic, old-media constraints.

One part of Klein’s post is particularly illuminating. He finds it amusing to belittle the threat of a hypothetical someone he calls “jewhater429, the 97th entrant in a comment thread” — just a few months after an Internet-based Jew hater entered the Holocaust Museum with a gun and killed an African-American guard. Some people have the oddest sense of humor.

The real threat, according to Klein, is not from Jew haters, Holocaust deniers or white supremacists. It is from conservatives who listen to Rush Limbaugh on the radio. And why? Because Limbaugh interferes more directly with Klein’s political agenda. The seriousness of this moral argument is…undetectable. It is a case study in how an excess of ideology can affect the optic nerve — leading to complete moral blindness.

I have little patience for the hyper-partisan rants of Glenn Beck or Arianna Huffington. But they are not Nazis because I disagree with them. Interestingly, Beck, Huffington and Klein seem comfortable with this same, lazy tactic — the reductio ad Hitlerum. They are full partners in the same calumny.

This approach is both uncivil and dangerous. We do not avoid comparing our opponents to Nazis merely out of politeness. We reserve this charge for actual racists, for actual incitement to violence, for actual evil, so that the accusation is not diluted and powerless when it is most needed. Those, like Klein, who trivialize evil are actually making its advance more likely. Their cynicism and ideological manias are the allies of genuine bigotry, because they blur its distinctive shape and cover its distinctive smell.

On this topic, Klein wades into the ethical shallows and manages to drown. In the future, it might be less embarrassing to avoid the water entirely.

Michael Gerson, Washington Post



Hate in the Media


“In the course of a few years,” writes Michael Gerson, “a fringe party was able to define a national community by scapegoating internal enemies; elevate a single, messianic leader; and keep the public docile with hatred while the state committed unprecedented crimes. The adaptive use of new technology was central to this achievement.”

That party? The Nazis. That technology? Talk radio. But Gerson’s subject is not talk radio or the Nazis, but the vast expanses of the Internet. “User-driven content on the Internet often consists of bullying, conspiracy theories and racial prejudice,” writes Gerson, which is interesting, as I thought it consisted of porn and teenagers holding party cups. “The absolute freedom of the medium paradoxically encourages authoritarian impulses to intimidate and silence others,” he continues. “The least responsible contributors see their darkest tendencies legitimated and reinforced, while serious voices are driven away by the general ugliness.”

That doesn’t describe the Internet I know (unless, for some reason, you don’t think Autotune the News is a serious voice), but the Internet is big, and Gerson might visit parts I miss. “The exploitation of technology by hatred will never be eliminated,” he concludes. “But hatred must be confined to the fringes of our culture — as the hatred of other times should have been.”

What’s striking is that this doesn’t really describe the Internet. Hateful voices remain on the fringe. And they stay on the fringe. The beauty of the Internet is that it’s pretty much all fringe. Controlling a Web site or a blogspot domain is not like controlling a radio station or a television network.

Gerson’s examples, in fact, come from comment threads, which virtually disproves his thesis. But there is a major medium where the hateful voices sit firmly in control of the content, and it’s the same medium that begins Gerson’s remarks: talk radio. And, to a lesser extent, cable news. That’s where society’s most hateful conspiracy theories sit and fester, where its most explosive lies are recounted and amplified, where its least responsible elites have control of the means of production. I don’t worry about jewhater429, the 97th entrant in a comment thread. I worry about Beck and Limbaugh and Savage.

Ezra Klein, Washington Post



Life magazine opens archives


Famous photo of 3-D movie viewers that appeared in Life Magazine in 1952.

Decades of Life magazine have been scanned and posted online, giving the public the first comprehensive electronic access to the iconic publication’s archives.

Life already has made images available through the website and a partnership with Google Inc. The latest effort, also with Google, makes stories available as well, all searchable and viewable for free in their original magazine layout.

“Every day we receive requests from readers looking for these issues for research purposes, and to find photos and articles featuring family members, hometowns and other memories,” Andrew Blau, president of Life Inc., said in a statement. “Now with these full issues available online, readers will be able to browse through history as it was being recorded.”

The online archives are part of Google’s ambitious book-scanning project, which has prompted a copyright-infringement lawsuit by publishers and authors. The parties have settled, though they are renegotiating details after the U.S. Justice Department concluded that the original deal probably violates antitrust law.


In its latest bid to provide quick, easy access to the sum total of human knowledge, Google is making 10 million photographs from Life magazine’s archives available online. The collection, dating back to the Civil War, includes some well-known images but many more that were previously unpublished and undigitized.

The Life archives are not dependent on that settlement because the Time Warner Inc. magazine is agreeing to make its works available through Google.

The archives cover the magazine’s main run as a weekly, from 1936 to 1972 – more than 1,860 issues in all. After the weekly ceased publication in 1972, it was resurrected as a monthly in 1978 and ended again in 2000. From 2004 to 2007, Life appeared as a weekly newspaper supplement.



Will Google Be the Giraffe That Grows a Back Scratcher?

Or: How search engines are about to drive dictionary sites out of business.

Ostensibly, this is the story of dictionary Web sites and their impending demise. But really, this is the story of the oxpecker. I ask your patience while I get ornithological; we’ve got a metaphor to spin.

Meet the oxpecker bird, a plucky, selfless little thing. Its life amounts—as for so many of us—to nothing but consumption for the sake of others. The oxpecker is a helpful friend to the bigger game of the sub-Saharan grasslands. Giraffes, wildebeests, and cattle all welcome oxpeckers onto their hides in exchange for a master cleanse. The oxpeckers, meanwhile, feed off of the parasites, insects, and ticks that they’re picking off their gracious hosts. It’s a symbiotic relationship, one of those quirks of nature that keeps an ecosystem churning. You scratch my back, I’ll fatten yours.

Most interesting for our purposes is the dynamic between the two animals. It’s a mutually beneficial partnership: The oxpecker provides a service the animal can’t manage in-house, and the animal offers the oxpecker a parasitic cornucopia. A beautiful consequence of evolution, nature organically assigning roles to different animals.

Dictionary Web sites, as you may have surmised, are akin to the oxpeckers. The only way they can sustain themselves is by borrowing resources from far larger game—search engines, in this case. All sorts of traffic finds its way to an online dictionary through a search engine, which means that all sorts of advertising revenue depends on those clicks coming through. On the Internet, pageviews equal revenue, especially when you pack dozens of ads on a page, as the dictionary sites do. Without search engines feeding the dictionaries traffic, the reference sites probably couldn’t survive.

But search engines are smarter than giraffes. They’ve always had the ability to evolve and start providing definitions on their own. Thus, the dictionary sites have always been in a precarious spot; their hosts could grow the equivalent of a backscratcher at any moment and put the oxpecker out of a job. And now it finally appears that search engines have had enough. Without warning, the evolution has already begun; dictionary sites are more endangered than ever.

To understand the way things once were, let’s look at Google. The market leader in search hasn’t significantly changed the way it deals with definitions in years, and its presentation is a useful time capsule. Open up Google in a new tab and run a search for a single, SAT-caliber word. Let’s use loquacious as our example. Don’t know what it means? Perfect; that makes it the kind of thing you would Google. Google, you’ll notice, doesn’t give you a definition itself—it just links off to a bunch of other dictionary sites.

And here’s where things get interesting. The definition isn’t in any of the search returns’ little two-line descriptions, either. That’s because they’re purposefully trying to obscure the results. Every site on the Internet can suggest to Google what to pull as those two-line descriptions. Google doesn’t have to listen—as we’ll see later—but more often than not its algorithms don’t bother bypassing the hand-fed description. Dictionary sites know this and purposefully keep their definitions out of the suggested description. Instead they insert the requested word into a generic few sentences. Take a look at one dictionary site’s description:

Loquacious—Definition of Loquacious at a free online dictionary with pronunciation, synonyms, and translation of Loquacious. Word of the Day and Crossword Puzzles.

Keeping definitions out of search engine returns is a major business initiative. Web sites are always hesitant to release traffic figures, but Merriam-Webster’s electronic product director told me that a “majority” of their traffic comes from search engines.* Whatever the exact percentage is, it would surely drop if definitions were displayed in the search returns. Why click through to the actual site if you already have your answer before you get there? (*Correction: This story originally misspelled Merriam-Webster. It has been corrected throughout.)

This is the way that things were (and for Google, still are). But it’s not the way that things will be. Search engines are increasingly expected to be more than just a portal to what we need online, but they also provide us shortcuts to the answers that we seek, minimizing the number of clicks that it takes to find them. I want to go to the search engine that gets me to my information the fastest—and that means I shouldn’t have to click through to get a definition. Google’s chief competitor has already bought into this philosophy. Microsoft treats definitions in a wholly different, more selfish way than Google does.

Microsoft’s search engine, Bing, shamelessly borrows the dictionary content without any of the typical kickbacks. Bing has figured out a cheap and effective way to harness dictionary sites’ information that heightens the user experience. Go ahead and plug loquacious into Bing. You’ll see an in-line definition, this time pulled from Microsoft’s Encarta encyclopedia. (The same one that they’re discontinuing at the end of the year.)

But then scroll down to the next return, from one of the dictionary sites. You’ll see something different from Google’s results; the definition is right there in the search return. Bing has ignored the site’s recommended description and apparently cut straight through to the good stuff. This shifts the power dynamic of the search-engine-dictionary-site relationship. By reaping all the benefits, Bing’s ability to repay the favor is limited. Remember, people don’t need to click on a link if they already know what awaits on the other side.

And there’s even more data in the Bing framework. Hover your mouse over that search link and you’ll see a vertical bar pop up on the right side of the return. Move your mouse over that and you’ll get even more information imported from the site—pronunciation, etymology, derivatives of the word.

Where does Bing get off using others’ content in their own skin? Microsoft did not make someone available for an interview, but instead offered a boilerplate statement on some, but not all, questions asked by The Big Money. On why they include this information from third-party sites—“Instant Answers” in Bing-speak—they offered this bland statement:

We designed Bing as a decision engine; a search experience aimed at delivering results in a more organized way so you can find the information you are looking for more quickly and ultimately make better decisions. Rather than bringing back a sea of blue links to sift through, Instant Answers provide relevant direct answers to many topics.

Reading between the lines: We’re here to make the users happy. If that means depriving content sites of pageviews, so be it. Let them adjust.

The dictionary sites know that a new day is coming. Traffic is still on the rise—according to ComScore, all the dictionary sites had more traffic in July 2009 than July 2008—but that doesn’t make the business models any less precarious. Michael Guzzi, the digital products manager I spoke with at Merriam-Webster, said they haven’t seen traffic dip at all (ComScore has them up 29 percent with 6.2 million unique visitors in July 2009), but that they are aware of a change in the way the search engines are interacting with their content. “Fighting it doesn’t make sense,” he told me, “even if there were some way to block Google from grabbing info. Do we really want to be the only dictionary that shows up without the definitions?” That doesn’t make sense for their business model, either. Then the clicks definitely won’t come in.

So what to do? One hope is to work with the enemy. Along with direct access to the definitions, Bing also offers extra links back to the dictionary sites. In those boxes on the right side of the search return are a set of “popular links,” which are links to synonyms and related terms of use on the dictionary site. For reasons Merriam-Webster doesn’t yet understand, Bing isn’t playing nice with the M-W site architecture, so it’s not offering those related links. That’s a wasted opportunity for more traffic. Fine-tuning these kinds of processes will be important as the dictionary sites adjust their business models.

Even more pressing than the current threats, though, are the unknown ones that lie in the future. Bing, remember, is small bore. It routes only 10.7 percent of the country’s searches, compared with Google’s 64.6 percent. (Though it is gaining market share quickly.) The thing that would really upset the dictionary sites’ business would be a change in the way Google handles their content. What happens if Google starts acting like Bing?

What would a Google adaptation of Bing’s style look like? We already know. For years, Google has had a template in place that offers definitions in search returns. If you type in “define loquacious” or “loquacious definition” to Google you’ll get a different result than when just typing in “loquacious.” There, at the top of the page, is the definition pulled from a Princeton dictionary—no need to click through. But it’s up to the user to type in “define” or “definition.” These are what Google calls operators; they tell the search engine that you want a definition straightaway.

So why not have this happen automatically? I asked a Google spokesman why Google doesn’t automatically put definitions in single-word search queries, even without the “define” operators. It’s partly because Google doesn’t want words like “cat” coming back with definitions. Of course, but Bing doesn’t offer a definition when you just search for “cat.” Couldn’t Google know which words people want definitions for and which ones they don’t, based on where they usually go after the search page? Yes, I was assured. Those data are tracked.

The Google spokesman told me there was “at this point, nothing to announce regarding future product plans in this area.” Fine. But the thing is: Bing’s way really works. As a user, I am happier not having to click so many times, especially when the search engine obviously knows what information I’m looking for. If it’s giving me a full page of links to dictionary sites, then it seems silly for it not to give me the definition. Given the choice of Bing vs. Google, Bing has the better definition interface. It may be worse for the online dictionary business, but it’s better for the consumer.

Google is going to be forced to catch up for business reasons. Bing’s rapid growth—22.1 percent in August—has proven that there’s a market for Bing’s “decision engine” approach. (The dictionary technique is only one element of that, of course.)

And this type of change wouldn’t violate Google’s search ethos. Search for three numbers and the return already brings up an area code location automatically. A FedEx number? Automatic tracking inside the search results. Unit conversion? Yep. Math equations? Let Google be your calculator. Definition requests are the last major wall yet to fall. Tear it down.

But then what of the dictionaries? If Google goes Bing, then the dictionaries’ traffic will go poof. And thus we confront the real risk in this scenario. If the search engines borrow too much content and leech too many page views, they’ll send the dictionary sites out of business. And then where will they get their definitions? Plus, there are ad revenue concerns at play. The dictionaries are huge platforms for text and banner ads, especially those running through Google networks. The beasts will have killed off the oxpeckers at their own expense. The ecosystem will have been destroyed. Any survivors left behind will have blood on their hands and nobody to help clean it off.

Chadwick Matlin, The Big Money



Future is TV-shaped, says Intel

Intel said the TV of the future would require a lot of computing power

By 2015 more than 12 billion devices will be capable of connecting to 500 billion hours of TV and video content, says chip giant Intel.

It said its vision of TV everywhere will be more personal, social, ubiquitous and informative.

“TV is out of the box and off the wall,” Justin Rattner, Intel’s chief technology officer, told BBC News.

“TV will remain at the centre of our lives and you will be able to watch what you want where you want.”

Mr Rattner said: “We are talking about more than one TV-capable device for every man and woman on the planet.

“People are going to feel connected to the screen in ways they haven’t in the past.”

Speaking at Intel’s Developer Forum (IDF) in San Francisco, he said the success of TV was due to the growing number of ways to consume content.

Today that includes everything from the traditional box in the corner of the living room to smartphones, laptops, netbooks, desktops and mobile internet devices.

Continuing the theme, Malachy Moynihan, Cisco’s vice-president of video product strategy, told IDF attendees to expect an explosion of content for such devices.

“We are seeing an amazing move of video to IP (internet) networks,” he said. “By 2013 90% of all IP traffic will be video; 60% of all video will be consumed by consumers over IP networks.”

Infinite choice

Developers keen to tap into this growth were told by Eric Kim, Intel’s digital home group boss, to “keep it simple and easy”.

The new CE4100 will make TV the centre for entertainment, said Intel

“Don’t make my TV act like a PC. This is what we hear consistently from the consumer,” said Mr Kim. “The key challenge is how to bring the power and richness of the internet but keep it TV simple.”

Mr Kim unveiled some hardware Intel hopes developers will adopt to make more devices TV capable.

He showed off the Atom CE4100 system-on-a-chip (SoC) that can be used to bring internet content and services to digital TVs, DVD players and advanced set-top boxes.

Codenamed Sodaville, it is the first consumer electronics SoC based on Intel architecture to be manufactured on a 45 nanometre process.

IDF attendees also heard from speakers about what promises to be a new kind of TV experience as broadcast content, video content, internet content and personal content is all blended together.

Eric Huggers, the BBC’s director of Future Media and Technology, who has driven development of the iPlayer, said: “It’s about unlocking a whole raft of new capabilities and services.

“Think of TV as an opportunity to give consumers a gateway to infinite choice,” he added.

IMAX quality

Mr Rattner also took time to highlight another technology gaining ground – 3D TV.

“It seems like there is an announcement every week on 3D,” he told the audience.

Friends will be able to share the viewing experience, said Intel

He said he planned to use a high-definition TV during his presentation but changed his mind when he heard about a Silicon Valley start-up called HDI.

HDI claimed a world first with the launch of its 100in (2.5m) 3D laser set in early September.

Big manufacturers such as Sony and Panasonic have announced plans to release 3D TV sets in 2010, while Samsung and Mitsubishi have recently released their products.

Speaking in early September at the IFA consumer electronics show in Berlin, Howard Stringer, Sony’s chief executive, said: “3D is clearly on the way to the mass market. The train is on the track and Sony is ready to drive it home.”

Analyst firm Screen Digest forecasts 1.2 million 3D capable sets in American homes by the end of 2010. That figure is expected to rise to 9.7 million, or 8% of households, by 2013.

Fading fast

To drive home the point about 3D, Mr Rattner’s presentation incorporated a live 3D broadcast.

Mr Rattner’s speech involved a live 3-D broadcast

While he was inside the auditorium, Mr Rattner spoke to a 3D projected version of Howard Postley, technology boss of 3ality Digital, who was outside in the hallway.

The two men talked about a new high-speed optical technology from Intel, codenamed Light Peak, aimed at speeding up digital transfers while cutting their complexity and cost.

The conference was told that 50 copper-based cables on the set of a 3D shoot today may one day be replaced with a single optical cable that can use Light Peak technology.

Intel hopes to start shipping Light Peak in 2010.

The overall 3D market is expected to grow to an estimated $25bn (£15.6bn) by 2012, according to the research firm Piper Jaffray.

“The old TV world is fading fast and the future is here,” said Mr Rattner.



Banish the Cyber-Bigots

The transformation of Germany in the 1920s and ’30s from the nation of Goethe to the nation of Goebbels is a specter that haunts, or should haunt, every nation.

The triumph of Nazi propaganda in this period is the subject of a remarkable exhibit at the United States Holocaust Memorial Museum (where I serve on the governing board). Germany in the 1920s was a land of broad literacy and diverse politics, boasting 146 daily newspapers in Berlin alone. Yet in the course of a few years, a fringe party was able to define a national community by scapegoating internal enemies; elevate a single, messianic leader; and keep the public docile with hatred while the state committed unprecedented crimes.

The adaptive use of new technology was central to this achievement. The Nazis pioneered voice amplification at rallies, the distribution of recorded speeches and the sophisticated targeting of poster art toward groups and regions.

But it was radio that proved the most powerful tool. The Nazis worked with radio manufacturers to provide Germans with free or low-cost “people’s receivers.” This new technology was disorienting, taking the public sphere, for the first time, into private places — homes, schools and factories. “If you tuned in,” says Steve Luckert, curator of the exhibit, “you heard strangers’ voices all the time. The style had a heavy emphasis on emotion, tapping into a mass psychology. You were bombarded by information that you were unable to verify or critically evaluate. It was the Internet of its time.”

This comparison to the Internet is apt. The Nazis would have found much to admire in the adaptation of their message on neo-Nazi, white supremacist and Holocaust-denial Web sites.

But the challenge of this technology is not merely an isolated subculture of hatred. It is a disorienting atmosphere in which information is difficult to verify or critically evaluate, the rules of discourse are unclear, and emotion — often expressed in CAPITAL LETTERS — is primary. User-driven content on the Internet often consists of bullying, conspiracy theories and racial prejudice. The absolute freedom of the medium paradoxically encourages authoritarian impulses to intimidate and silence others. The least responsible contributors see their darkest tendencies legitimated and reinforced, while serious voices are driven away by the general ugliness.

Ethicist Clive Hamilton calls this a “belligerent brutopia.” “The Internet should represent a great flourishing of democratic participation,” he argues. “But it doesn’t. . . . The brutality of public debate on the Internet is due to one fact above all — the option of anonymity. The belligerence would not be tolerated if the perpetrators’ identities were known because they would be rebuffed and criticized by those who know them. Free speech without accountability breeds dogmatism and confrontation.”

This destructive disinhibition is disturbing in itself. It also allows hatred to invade respected institutional spaces on the Internet, gaining for these ideas a legitimacy denied to fringe Web sites. After the Bernard Madoff scandal broke, for example, major newspaper sites included user-generated content such as “Find a Jew who isn’t Crooked” and “Just another jew money changer thief” — sentiments that newspapers would not have printed as letters to the editor. Postings of this kind regularly attack immigrants and African Americans, recycle centuries of anti-Semitism and deny the events of the Holocaust as a massive Jewish lie.

Legally restricting such content — apart from prosecuting direct harassment and threats against individuals or incitement to violence — is impossible. In America, the First Amendment protects blanket statements of bigotry. But this does not mean that popular news sites, along with settings such as Facebook and YouTube, are constitutionally required to provide forums for bullies and bigots. As private institutions, they are perfectly free to set rules against racism and hatred. This is not censorship; it is the definition of standards.

Some online institutions, such as The New York Times and the Los Angeles Times, screen user comments before posting them. Others, such as The Post and The Wall Street Journal, rely on readers to identify objectionable content — a questionable strategy because numbness to abusiveness and hatred on the Internet is part of the challenge.

Whatever the method, no reputable institution should allow its publishing capacity, in print or online, to be used as the equivalent of the wall of a public bathroom stall.

The exploitation of technology by hatred will never be eliminated. But hatred must be confined to the fringes of our culture — as the hatred of other times should have been.

Michael Gerson, Washington Post



Project ‘Gaydar’

At MIT, an experiment identifies which students are gay, raising new questions about online privacy


It started as a simple term project for an MIT class on ethics and law on the electronic frontier.

Two students partnered up to take on the latest Internet fad: the online social networks that were exploding into the mainstream. With people signing up in droves to reconnect with classmates and old crushes from high school, and even becoming online “friends” with their family members, the two wondered what the online masses were unknowingly telling the world about themselves. The pair weren’t interested in the embarrassing photos or overripe profiles that attract so much consternation from parents and potential employers. Instead, they wondered whether the basic currency of interactions on a social network – the simple act of “friending” someone online – might reveal something a person might rather keep hidden.

Using data from the social network Facebook, they made a striking discovery: just by looking at a person’s online friends, they could predict whether the person was gay. They did this with a software program that looked at the gender and sexuality of a person’s friends and, using statistical analysis, made a prediction. The two students had no way of checking all of their predictions, but based on their own knowledge outside the Facebook world, their computer program appeared quite accurate for men, they said. People may be effectively “outing” themselves just by the virtual company they keep.

“When they first did it, it was absolutely striking – we said, ‘Oh my God – you can actually put some computation behind that,’ ” said Hal Abelson, a computer science professor at MIT who co-taught the course. “That pulls the rug out from a whole policy and technology perspective that the point is to give you control over your information – because you don’t have control over your information.”

The work has not been published in a scientific journal, but it provides a provocative warning note about privacy. Discussions of privacy often focus on how to best keep things secret, whether it is making sure online financial transactions are secure from intruders, or telling people to think twice before opening their lives too widely on blogs or online profiles. But this work shows that people may reveal information about themselves in another way, and without knowing they are making it public. Who we are can be revealed by, and even defined by, who our friends are: if all your friends are over 45, you’re probably not a teenager; if they all belong to a particular religion, it’s a decent bet that you do, too. The ability to connect with other people who have something in common is part of the power of social networks, but also a possible pitfall. If our friends reveal who we are, that challenges a conception of privacy built on the notion that there are things we tell, and things we don’t.

“Even if you don’t affirmatively post revealing information, simply publishing your friends’ list may reveal sensitive information about you, or it may lead people to make assumptions about you that are incorrect,” said Kevin Bankston, senior staff attorney for the Electronic Frontier Foundation, a nonprofit digital rights organization in San Francisco. “Certainly if most or many of your friends are of a particular religious or political or sexual category, others may conclude you are part of the same category – even if you haven’t said so yourself.”

The project, given the name “Gaydar” by the students, Carter Jernigan and Behram Mistree, is part of the fast-moving field of social network analysis, which examines what the connections between people can tell us. The applications run the gamut, from predicting who might be a terrorist to the likelihood a person is happy or fat. The idea of making assumptions about people by looking at their relationships is not new, but the sudden availability of information online means the field’s powerful tools can now be applied to just about anyone.

For example, Murat Kantarcioglu, an assistant professor of computer science at the University of Texas at Dallas, found he could make decent predictions about a person’s political affiliation. He and a student – who later went to work for Facebook – took 167,000 profiles and 3 million links between people from the Dallas-Fort Worth network. They used three methods to predict a person’s political views. One prediction model used only the details in their profiles. Another used only friendship links. And the third combined the two sets of data.

The researchers found that certain traits, such as knowing what groups people belonged to or their favorite music, were quite predictive of political affiliation. But they also found that they did better than a random guess when using only friendship connections. The best results came from combining the two approaches.

Other work, by researchers at the University of Maryland, College Park, analyzed four social networks: Facebook, the photo-sharing website Flickr, an online network for dog owners called Dogster, and BibSonomy, in which people tag bookmarks and publications. Those researchers blinded themselves to the profiles of half the people in each network, and launched a variety of “attacks” on the networks, to see what private information they could glean by simply looking at things like groups people belonged to, and their friendship links.

On each network, at least one attack worked. Researchers could predict where Flickr users lived; Facebook users’ gender; a dog’s breed on Dogster; and whether someone was likely to be a spammer on BibSonomy. The authors found that membership in a group gave away a significant amount of information, but also found that predictions using friend links weren’t as strong as they expected. “Using friends in classifying people has to be treated with care,” computer scientists Lise Getoor and Elena Zheleva wrote.

The idea behind the MIT work, done in 2007, is as old as the adage that birds of a feather flock together. For years, sociologists have known of the “homophily principle” – the tendency for similar people to group together. People of one race tend to have spouses, confidants, and friends of the same race, for example. Jernigan and Mistree downloaded data from the Facebook network, choosing as their sample people who had joined the MIT network and were in the classes 2007-2011 or graduate students. They were interested in three things people frequently fill in on their social network profile: their gender, a category called “interested in” that they took to denote sexuality, and their friend links.

Using that information, they “trained” their computer program, analyzing the friend links of 1,544 men who said they were straight, 21 who said they were bisexual, and 33 who said they were gay. Gay men had proportionally more gay friends than straight men, giving the computer program a way to infer a person’s sexuality based on their friends.

Then they did the same analysis on 947 men who did not report their sexuality. Although the researchers had no way to confirm the analysis with scientific rigor, they used their private knowledge of 10 people in the network who were gay but did not declare it on their Facebook page as a simple check. They found all 10 people were predicted to be gay by the program. The analysis seemed to work in identifying gay men, but the same technique was not as successful with bisexual men or women, or lesbians.
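The approach described here amounts to a very simple classifier: score each person by the share of their friends who disclose an attribute, learn a cutoff from the labeled profiles, and apply it to the unlabeled ones. The following sketch is only an illustrative reconstruction of that idea, not the students’ actual program; the toy network, names, and midpoint threshold rule are invented for the example.

```python
def friend_share(person, friends, disclosed):
    """Fraction of a person's friends who disclose the attribute."""
    fs = friends[person]
    if not fs:
        return 0.0
    return sum(1 for f in fs if disclosed.get(f)) / len(fs)

def train_threshold(friends, disclosed, labeled):
    """Set the cutoff halfway between the two groups' average friend-shares."""
    pos = [friend_share(p, friends, disclosed) for p in labeled if disclosed[p]]
    neg = [friend_share(p, friends, disclosed) for p in labeled if not disclosed[p]]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(person, friends, disclosed, threshold):
    """Guess that a person has the attribute if enough friends disclose it."""
    return friend_share(person, friends, disclosed) > threshold

# Toy network: a, b, c disclose the attribute; d, e, f do not; x is unlabeled.
friends = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"],
           "d": ["e", "f"], "e": ["d", "f"], "f": ["d", "e"],
           "x": ["b", "c"]}
disclosed = {"a": True, "b": True, "c": True,
             "d": False, "e": False, "f": False}

t = train_threshold(friends, disclosed, ["a", "d"])
guess = predict("x", friends, disclosed, t)  # x's friends all disclose
```

The point the researchers make survives even in this toy form: x never states the attribute, yet the homophily of the friend list gives it away.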

“It’s just one example of how information could be inadvertently shared,” said Jernigan. “It does highlight risks out there.”

The researchers treated their data anonymously, never using names except to validate their predictions during data analysis. The only copy of the data is on an encrypted DVD they gave to a professor, and they said they got the approval of an ethical review board at MIT. The students, who have since graduated, discussed the paper with the Globe, but did not provide a copy of it because they are hoping to have it published in a journal.

Facebook spokesman Simon Axten could not respond to Jernigan and Mistree’s analysis, since it is not public, but pointed out that this kind of inference happens every day.

“In general, it’s not too surprising that someone might make inferences about someone else without knowing that person based on who the person’s friends are. This isn’t specific to Facebook and is entirely possible in the real world as well,” Axten wrote in an e-mail. “For example, if I know that someone has certain political views because that person makes them known in some way (say, by putting a bumper sticker on his car), and then I see the person walking out of a movie with friends I don’t know, I might assume those friends also have those political views.”

Privacy has become a growing and evolving concern as social networks learn how to deal with the fact that they provide a resource that brings people together, but also may endanger privacy in ways they did not anticipate. Social networks like Facebook already give people power over that information, with privacy features that allow people to hide their profiles, and even make their list of friends invisible to outsiders, as well as from select friends.

Because the features and services offered on social networks are new, they also evolve in response to user demand that may not always be anticipated by the company. In 2007, for example, Facebook introduced Beacon, a feature that broadcast friends’ activities – such as buying movie tickets on a specific website – as targeted advertisements. That drew an angry response from users concerned about privacy, and prompted an apologetic blog posting from Facebook cofounder Mark Zuckerberg, along with modifications that meant people could opt out.

Computer scientists are identifying the ways in which anyone from a potential employer to an advertiser might be able to make informed guesses about a person. But there are limits to online privacy, and ultimately, say some experts, people will simply have to weigh the costs and benefits of living online.

“You can do damage to your reputation with social networking data, and other people can do damage to you. I do think that there’s been a very fast learning curve – people are quickly learning the dos and don’ts of Internet behavior,” said Jason Kaufman, a research fellow at the Berkman Center for Internet and Society at Harvard University who is studying a set of Facebook data. “Potentially everything you ever do on the Internet will live forever. I like to think we’ll all learn to give each other a little more slack for our indiscretions and idiosyncrasies.”

Carolyn Y. Johnson is a science reporter at the Globe.



Google to reincarnate digital books as paperbacks


A scanner passes over a book at the University of Michigan in Ann Arbor, Mich., March 21, 2008. Hundreds of librarians from Minnesota to England are helping Google Inc.’s Book Search create digital versions of all the estimated 50 million to 100 million books in the world and make them readily available online for free for people everywhere.

Will provide link to high-speed publishing machine that can manufacture paperback-bound book of about 300 pages in under five minutes

Google Inc. is giving two million books in its digital library a chance to be reincarnated as paperbacks.

As part of a deal announced Thursday, Google is opening up part of its index to the maker of a high-speed publishing machine that can manufacture a paperback-bound book of about 300 pages in under five minutes. The new service is an acknowledgment by the Internet search leader that not everyone wants their books served up on a computer or an electronic reader like those made by Inc. and Sony Corp.

The “Espresso Book Machine” has been around for several years already, but it figures to become a hotter commodity now that it has access to so many books scanned from some of the world’s largest libraries. And On Demand Books, the Espresso’s maker, potentially could get access to even more hard-to-find books if Google wins court approval of a class-action settlement giving it the right to sell out-of-print books.


Dane Neller, CEO of On Demand Books, holds a book produced by the company’s Espresso Book Machine at Google headquarters in Mountain View, Calif.

“This is a seminal event for us,” said Dane Neller, On Demand Books’ chief executive, as he oversaw a demonstration of the Espresso Book Machine Wednesday at Google’s Mountain View, Calif. headquarters.

In the background, some of the books that Google spent the past five years scanning into a digital format were returning to their paper origins.

“It’s like things are coming full circle,” Google spokeswoman Jennie Johnson said. “This will allow people to pick up the physical copy of a book even if there may be just one or two other copies in some library in this country, or maybe it’s not even available in this country at all.”

On Demand’s printing machines already are in more than a dozen locations in the United States, Canada, Australia, England and Egypt, mostly at campus book stores, libraries and small retailers. The Harvard Book Store will be among the first already equipped with an instant-publishing machine to have access to Google’s digital library.

The books published by the Espresso Book Machine will have a recommended sales price of $8 (U.S.) per copy, although the final decision will be left to each retailer. New York-based On Demand Books will get $1 of each sale, with another buck going to Google, which says it will donate its commission to charities and other non-profit causes.

The high-speed publishing machine itself sells for about $100,000, although On Demand Books is willing to lease the equipment to retailers instead.

For starters, Google is only allowing the Espresso Book Machine to publish from the section of its digital library that consists of two million books no longer protected by copyright.

These “public domain” books were published before 1923 — an era that includes classics like Moby Dick and Adventures of Huckleberry Finn as well as very obscure titles. The paperbacks churned out in Wednesday’s demonstration of the Espresso included Lathe Work For Beginners, Dame Curtsey’s Book of Candy Making, and Memoirs of a Cavalier, a Daniel Defoe novel that never caught on quite like his most famous work, Robinson Crusoe.

Millions more titles could be added to On Demand’s virtual inventory if Google gets federal court approval of a class-action settlement that would grant it the right to sell copyrighted books no longer being published. Google estimates it already has made digital copies of about six million out-of-print books.

The settlement terms include a provision that could authorize republishing the books with a machine like the Espresso. Some of Google’s rivals and a long list of other critics are hoping to block the settlement, mainly because they believe it will give Google a monopoly on the digital rights to out-of-print books and make it too easy to track people’s reading preferences.

The U.S. Justice Department is investigating the monopoly allegations and is scheduled to share some of its preliminary thoughts with U.S. District Judge Denny Chin in a brief due Friday.

Mr. Neller of On Demand Books is thrilled just to have the right to publish selections from Google’s digital library of public domain books. Mr. Neller thinks it could help him reach his ambition to turn the Espresso machine into the book industry’s equivalent of an automated teller machine.

“It’s more efficient for everyone involved and readers are the biggest beneficiaries of all,” Mr. Neller said.



Google turns page on news content

The BBC is the only UK media outlet that has signed up to Fast Flip

Google has unveiled a service called Fast Flip to let users consume news more quickly and to boost the flagging fortunes of the news industry.

The product is designed to mirror the way readers flick through magazines and newspapers.

Google has teamed up with more than 30 providers such as the BBC to provide what it calls a new reading experience.

The search giant was recently called a parasite for making money aggregating content it did not create.

“I don’t believe we are part of the problem. I believe we are part of the solution,” Google’s vice-president of search, Marissa Mayer, told BBC News.

“We have tried to build platforms and tools that build a healthy, rich ecosystem online that is supportive of content. This is a new way of looking at content.”

Earlier this year, Wall Street Journal chief Robert Thomson called the search company and other aggregators such as Yahoo “parasites or tech tapeworms in the intestines of the internet”.

The news industry has been struggling with how to broaden the size of its online audience and how to make money from content it has long given away free.

Last month, media mogul Rupert Murdoch said he hoped all of his major newspapers would be charging for online content by the end of June next year.


Fast Flip imitates a conventional print publication by offering screenshots of the web pages containing relevant articles.

Newspapers have struggled to make money from online content

The stories are organised according to a number of different criteria. For example, readers will be offered articles that have been popular all day, that reflect their personal preferences or that have been recommended by friends.

Users who want to dig deeper into the story can click through to the publisher’s website.

To make money, Fast Flip also serves up contextual adverts around the screenshots.

Publishers who have signed up to provide content to the service will share in that revenue; that was proof, said Ms Mayer, that Google was keen to help the industry at a time when it was clearly struggling.

“We are excited to team with publishers and look at a new possibility for how people might consume news online and how to monetise it,” said Ms Mayer.

Google admitted that there was no “magic bullet” to quickly solve the challenges the publishing industry faced but it added that “we believe encouraging readers to read more news is a necessary part of the solution”.

Ms Mayer said the science behind this was simple.

“Advertising responds well when you have engaged users.

“If you have users that stay on the site for a long time and who do a lot of page views, all of those are good measurers because you will have a better chance to engage them with the ads and learn from their behaviour what type of ads to target,” explained Ms Mayer.

Hands-on control

Ms Mayer told TechCrunch 50, a conference aimed at start-up companies, that Google co-founder Larry Page had asked why the web was not more like a magazine, allowing users to flip from screen to screen seamlessly.

Delegates were told that one reason had to do with media-rich content that took time to load – five to 10 seconds.

Fast Flip marries the best of the web and publishing, said Google

“Imagine if it took that long to flip a magazine page,” said Krishna Bharat, a distinguished engineer at Google who led the creation of the Google news service.

“We wanted to bring the advantages of print media, the speed and hands-on control you get with a newspaper or magazine, and combine that with the technical advantages of the internet. We wanted the best of both worlds,” said Mr Bharat.

Ms Mayer revealed that initially they thought a solution to the problem posed by Mr Page was “a decade away”.

She explained that Google had long been trying to harness increased speed, “shaving a millisecond here and another millisecond there”.

But Ms Mayer said that the success of Fast Flip was down to having a specific problem to solve.

“A big part of innovation is having the right goal and asking the right question,” she said.

Initially Fast Flip will concentrate on audiences in the US. The BBC is the only UK-based media outlet to have a presence on the site, due largely to its popularity in America.

Other publishers involved include Cosmopolitan, Marie Claire, Elle, Popular Mechanics, Slate, Salon, the New York Times, the Washington Post and ProPublica.



The Rise of the Professional Blogger

The blogosphere was supposed to democratize publishing and empower the little guy. Turns out, the big blogs are all run by The Man.

In a recent essay in the New York Review of Books, Michael Massing articulates a point made so often about the Web that it’s nearly catechismal. Blogs, he says, have torn down the power structure of old media. “Decentralization and democratization” are the law of the land, offering “a podium to Americans of all ages and backgrounds to contribute.” This is a notion that bloggers and web gurus have been touting for years. In his 2006 book, An Army of Davids, for example, “Instapundit” blogger Glenn Reynolds argued that “markets and technology” empowered “ordinary people to beat big media.” And this June, internet sage Clay Shirky assured an audience at a TED event that the old model, where “professionals broadcast messages to amateurs,” is “slipping away.”

But is this really true? Among some of the biggest bloggers, this notion is increasingly seen as suspect. In early July, Laura McKenna, a widely respected and longtime blogger, argued on her site, 11D, that blogging has perceptibly changed over the six years she’s been at it. Many of blogging’s heavy hitters, she observed, have ended up “absorbed into some other professional enterprise.” Meanwhile, newer or lesser-known bloggers aren’t getting the kind of links and attention they used to, which means that “good stuff” is no longer “bubbling to the top.” Her post prompted a couple of the medium’s most legendary, best-established hands to react: Matthew Yglesias (formerly of The Atlantic, now of ThinkProgress) confirmed that blogging has indeed become “institutionalized,” and Ezra Klein (formerly of The American Prospect, now of The Washington Post) concurred, “The place has professionalized.” Almost everyone weighing in agreed that blogging has become more corporate, more ossified, and increasingly indistinguishable from the mainstream media. Even Glenn Reynolds had a slight change of heart, admitting in a June interview that the David-and-Goliath dynamic is eroding as blogs have become “more big-media-ish.” All this has led Matthew Hindman, author of The Myth of Digital Democracy, to declare that “The era when political comment on the Web is dominated by solo bloggers writing for free is gone.”

He may be right. As the medium has become more popular, money has flowed in. And while no one would deny that blogging has lowered the barriers to self-publication by average citizens, the free-wheeling fraternal spirit of blogging has become increasingly subject to market disciplines. As a result, as Web critic Nicholas Carr told me, blogging has evolved to become “a lot more like a traditional mass medium.”

The data would seem to back this up. First, a clear, stable class at the top has emerged. An examination of the Technorati rankings for recent years reveals that turnover among the top 50 blogs has become increasingly rare. Even as the total number of blogs has swelled to 133 million from 27 million in 2006, the top 50 have remained relatively static. On March 15, 2006, 30 blogs out of the top 50 were new to the list, never having appeared at the top in any previous year; last month, that number was down to 18. Even the new entrants are no mom-and-pop shops: National Review, Entertainment Weekly and Politico are among the owners, and one of the few independent upstarts, Seeking Alpha, is backed by venture capital. The bulk of the list consists of familiar names, many of whom were among the first to emerge on the Web—from Andrew Sullivan, now of the Atlantic, to the Daily Kos and Boing Boing.

Of the top 50 blogs, 21 are owned by such familiar names as CNN, the New York Times, ABC, and AOL. And many blogs that began as solo operations are developing into full-fledged publications. Josh Marshall’s newsgathering war horse, the Talking Points Memo, has plans to expand its staff of 11 to a full 60. (If another quixotic Josh Marshall came along, Talking Points Memo would be among the media titans he would have to dethrone.) TechCrunch, founded by Michael Arrington in 2005, now has a staff of more than 20. There are only a handful of self-employed solo writers left among the top fifty, and these include standout talents such as Michelle Malkin, Perez Hilton, and Seth Godin.

An immense proportion of the online readership—roughly 42% of all blog traffic—flows to the top 50 blogs. Their dominance of the market is reinforced by the dynamics of the Web itself: users hunting for blogs typically end up directed by search engines to the same group of highly-linked, already popular sites. What’s more, even deliberate attempts to go off the beaten path aren’t likely to lead out of the conglomerate world: the most lucrative niche categories have attracted dominant brands, too, with AOL alone owning 27 of the top 100 blogs, in categories ranging from automobiles, to free software, to independent film and pop culture. The big brands have become so powerful that it’s little wonder that 94 percent of the blogs counted in Technorati’s 2008 State of the Blogosphere report have been shuttered and abandoned.

For the little guy, then, it’s clearly true that, in Hindman’s words, “There is a difference between speaking and being heard.” In their effort to be heard, smart new writers are trying to lash themselves to major online brands, as they would any traditional print publication. Even some of the bloggers we’ve come to admire as bootstrap-heroes are in truth products of the farm club. The Internet’s favorite Cinderella figure, Nate Silver—the statistician-outsider turned political prodigy—cut his teeth not at some hinterland WordPress blog, but at the Daily Kos. Conversely, many brands have become strong enough to outlive the loss of their marquee talents. Gawker burned through such gifted early editors as Elizabeth Spiers and Choire Sicha, while traffic continued to multiply. Today, the romantic notion that solitary, untamed bloggers are running the Web is more fantasy than fact—nearly as apocryphal as old myths about stoic Western sheriffs killing 11 outlaws with six bullets.

Institutionalization may make for a more reliable, professional blogosphere. But it’s misguided to imagine, as Massing does, that blogs can also still be hailed for offering “a podium to Americans of all ages and backgrounds to contribute.” Rather, far from leveling the playing field, blogs have simply built up challenging new pathways to success, ones that with their familiar requirements—impress the right gatekeepers, court a mentor, work one’s way up from the inside—mirror the old-media ways. Ezra Klein’s trajectory from lone blogger, to the American Prospect, to the Washington Post, is a classic illustration of the new path to internet stardom, which increasingly means working one’s way into affiliation with a prestigious, well-funded institution.

Blogging, then, seems to be an industry on the cusp of maturity. Nick Carr compares its evolution to that of ham radio in the early twentieth century. Out of the amateur hubbub emerged self-made stars, who were then hired by fledgling networks that eventually grew into CBS, NBC and ABC. In much the same way, blogging celebrities have been snatched up by old and new conglomerates, while a sudden heart attack in the old-media world has put commercial blogging enterprises into a startlingly advantageous position. To wit, in the midst of a major downturn in advertising profits across most media, revenue to Gawker’s network of eight blogs jumped 45% in the first half of this year.

Clearly, a new establishment is taking shape. It seems ever more likely that the next media kingpins will come from the proverbial – and increasingly mythical – pajama-wearing classes.

Benjamin Carlson is a digital media fellow at The Atlantic.



Tech giants offer ideas on charging readers online

IBM, Microsoft, Oracle and even Google respond to request by Newspaper Association of America for proposals on ways to easily, unobtrusively charge for news on the Web.

Some of the world’s biggest technology companies say they can help publishers successfully charge readers for news online.

If only that were the hard part.

IBM Corp., Microsoft Corp., Oracle Corp. and even Google Inc. — a company some newspapers blame for helping to dig their financial hole — responded to a request by the Newspaper Association of America for proposals on ways to easily, unobtrusively charge for news on the Web.

But while building the infrastructure for charging readers is one part of the equation, the new proposals underscore what may be the more intractable issue: getting publishers to make the leap and stop giving news out for free on the Web.

Randy Bennett, the senior vice president of business development at the newspaper association, said his group initiated the process after a meeting of publishers in May near Chicago. A report that was posted online Wednesday by the Nieman Journalism Lab at Harvard University includes 11 different responses from technology companies.

Bennett said the trade group wanted to give newspapers options, and will not recommend one proposal over the others.

Google’s proposal may be the most eyebrow-raising, if only because the company — which aggregates thousands of articles from media outlets on its news pages — is so closely associated with the freewheeling ethos of an open Internet.

“Google believes that an open Web benefits all users and publishers,” the company writes in its proposal. “However, ‘open’ need not mean free.”

Google proposed offering news organizations a version of its Google Checkout system, which is used for processing online payments. It would give readers a place to sign in to an account and then pay for media from a variety of sources without having to punch in their information over and over. And the company says it could offer publishers a variety of pay methods, from basic subscriptions to so-called “micropayments” on a per-article basis.
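The proposal's details aren't spelled out here, but the two pay methods it mentions can be sketched as code. This is a hypothetical illustration, not Google Checkout's actual API; the `Account` class and `charge_for_article` function are invented names.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the two pricing models described: a flat
# subscription per publisher, and per-article "micropayments" drawn
# from one shared reader account. Names and prices are illustrative.

@dataclass
class Account:
    balance: float
    subscriptions: set = field(default_factory=set)

def charge_for_article(account: Account, publisher: str,
                       article_price: float) -> float:
    """Return the amount actually charged to read one article."""
    if publisher in account.subscriptions:
        return 0.0                       # covered by the subscription
    if account.balance < article_price:
        raise ValueError("insufficient funds")
    account.balance -= article_price     # per-article micropayment
    return article_price

reader = Account(balance=5.00, subscriptions={"nytimes"})
assert charge_for_article(reader, "nytimes", 0.25) == 0.0
assert charge_for_article(reader, "salon", 0.25) == 0.25
```

The point of the shared account is the last two lines: the reader signs in once and can then pay different publishers without re-entering payment details each time.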

Along with the technology heavyweights offering ideas are tiny startups.

CircLabs, run by just four people and incubated at the Missouri School of Journalism, is developing an application that would feed news from different sources into a bar across the top of Web browsers. Martin Langeveld, the company’s executive vice president, said the application will offer both targeted advertising and the option of charging. (Langeveld said the company has seed money from The Associated Press. AP spokesman Paul Colford said the news cooperative does not disclose which ventures it invests in.)

The idea, Langeveld said, isn’t just to squeeze more money out of readers but to build “something that addresses the needs of consumers, publishers and advertisers.”

The number of proposals portends further competition for Journalism Online, a startup led by Court TV founder Steven Brill and former Wall Street Journal publisher Gordon Crovitz. The company has made a well-publicized effort to sign up newspapers for its own payment system.

Still, having the tools available may not persuade publishers to use them. That’s because publishers are nervous about scaring off readers. Charging for news may open a new source of revenue for struggling newspapers but also could choke off online ad dollars by driving down traffic.

“This was supposed to be the year that newspapers started charging for online content,” said Alan Mutter, a former newspaper editor who works as an industry consultant and blogger and submitted one of the 11 proposals. (Mutter said the AP also has invested in his project). “Based on what I’ve seen, I don’t get any sense that there is unanimity about charging or that they would know how to go about doing it.”

Given the risks, publishers are looking for options short of walling off everything they offer online. Some are considering charging a small fee per article or a metered approach that would allow Web surfers free access up to a certain number of page views.
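The metered approach is simple enough to sketch in a few lines. This is a minimal illustration of the idea, not any publisher's actual system; the ten-view limit is an arbitrary choice.

```python
from collections import defaultdict

# A minimal sketch of a "metered" paywall: each reader gets a fixed
# number of free page views per month before hitting the paywall.
FREE_VIEWS_PER_MONTH = 10
views = defaultdict(int)   # (reader_id, month) -> page views so far

def may_read_free(reader_id: str, month: str) -> bool:
    key = (reader_id, month)
    if views[key] < FREE_VIEWS_PER_MONTH:
        views[key] += 1
        return True
    return False           # over the meter: show the paywall

assert all(may_read_free("alice", "2009-09") for _ in range(10))
assert not may_read_free("alice", "2009-09")   # 11th view is blocked
```

The appeal for nervous publishers is visible in the logic: casual visitors (and the ad impressions they bring) pass through untouched, and only heavy readers are ever asked to pay.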

Some don’t want to charge readers at all and would rather focus on getting more money from advertising.



World’s biggest monopoly game kicks off


Monopoly City Streets with Google.

Hasbro and Google have teamed up for an online, real-world version of the classic board game Monopoly.

The world’s largest Monopoly game kicks off today, and the whole planet is up for sale.

Hasbro Inc., owner of what is perhaps the most ubiquitous board game ever made, is teaming up with Google Inc. to launch Monopoly City Streets, a massive Web-based version of the classic game that turns every street on the planet into a potential real estate opportunity.

The game uses Google’s map data as a board, allowing players to buy up everything from Pennsylvania Avenue in Washington to Vancouver’s Burrard Street.

Prospective slumlords and real estate barons can then build sheds, skyscrapers and everything in between, collecting rent and sabotaging each other’s empires in the process.

Already the subject of much Internet buzz, Monopoly City Streets is actually an elaborate marketing campaign dreamed up by Hasbro’s ad agency, Tribal DDB, to hype the latest (real-life) iteration of the classic board game, and facilitated by Google.

The partnership illustrates what both companies will likely do a lot more of in the future.

Google is looking to monetize its huge stores of maps, books and other data, and Hasbro wants to breathe new life into titles such as Monopoly, which has been around since 1935.

“We’re trying to show that board games are contemporary,” said Donetta Allen, a spokeswoman for Hasbro.

“This is a fun and unusual way to introduce the game to a new generation.”

In the virtual iteration of the game, players the world over will start with $3-million (U.S.) in digital cash, which they can use to buy just about any street recognized by Google Maps. The streets are priced based on a formula that takes into account their length and proximity to various landmarks and the centre of town. (Toronto’s Queen Street East goes for about $1-million, while New York’s Fifth Avenue is one of the world’s most expensive, carrying a $28.7-million price tag). Players can then build and collect rent from structures ranging from single-storey houses to the $100-million Monopoly Tower.
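The actual pricing formula has not been published; a formula of the shape described — rising with street length, falling with distance from landmarks and the centre of town — might look something like the following. Every weight here is a made-up placeholder.

```python
# Illustrative guess at the shape of the street-pricing formula:
# price grows with length and with proximity to the city centre
# and to landmarks. The coefficients are invented for the sketch.

def street_price(length_km: float, km_to_centre: float,
                 km_to_nearest_landmark: float) -> float:
    base = 100_000 * length_km                         # longer = pricier
    centrality = 1.0 / (1.0 + km_to_centre)            # 1.0 at the centre
    landmark_bonus = 1.0 / (1.0 + km_to_nearest_landmark)
    return base * (1 + 5 * centrality + 2 * landmark_bonus)

# A long street in the city centre outprices the same street far out.
assert street_price(2, 0, 0) > street_price(2, 10, 10)
```

Whatever the real coefficients, the observable behaviour matches: a long downtown artery like Fifth Avenue lands near the top of the price list, while a short suburban lane goes for a pittance.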

“You’re not going around a game board,” Ms. Allen said. “It’s all about how far you can expand your empire.”

Besides the trading and negotiation elements that made the traditional board game so engrossing and infuriating, the online version will also include a healthy dose of attrition. The move is a lead-up to the new board game coming this fall, in which players can sabotage one another by, for example, building a sewage plant on another player’s property, thereby driving property values down.

Tribal DDB approached Google earlier this year with the Web-based Monopoly idea, which the Web giant agreed to facilitate. In addition to providing map data, Google also made available its development team behind SketchUp, a 3-D modelling tool, to work on a contest allowing players to design their own buildings for use in the game.

The deal will likely serve as a blueprint for further endeavours – Google says it’s looking for more partners to work on similar marketing campaigns that take advantage of the Web giant’s massive collection of information, such as map data and street images.

This isn’t the first time Hasbro has opted for a “non-traditional product launch” to advertise its titles. Tribal DDB previously developed a “real-life” Monopoly game using the streets of London as a board and taxis fitted with GPS receivers as playing pieces. Hasbro has also teamed up with McDonald’s Corp. for a scratch-and-win promotion, using the board game’s iconic street names.

Rumours have been swirling for the past two years that the next reincarnation of Monopoly’s marketing machine will be even more ambitious: a major Hollywood movie based on the game. However, Ms. Allen would neither confirm nor deny those rumours.



SA pigeon ‘faster than broadband’

Winston the pigeon was allowed no “performance-enhancing seeds”

Broadband promised to unite the world with super-fast data delivery – but in South Africa it seems the web is still no faster than a humble pigeon.

A Durban IT company pitted an 11-month-old bird armed with a 4GB memory stick against the ADSL service from the country’s biggest web firm, Telkom.

Winston the pigeon took two hours to carry the data 60 miles – in the same time the ADSL had sent 4% of the data.

Telkom said it was not responsible for the firm’s slow internet speeds.

The idea for the race came when a member of staff at Unlimited IT complained about the speed of data transmission on ADSL.

He said it would be faster by carrier pigeon.

“We renown ourselves on being innovative, so we decided to test that statement,” Unlimited’s Kevin Rolfe told the Beeld newspaper.

‘No cats allowed’

Winston took off from Unlimited IT’s call centre in the town of Howick to deliver the memory stick to the firm’s office in Durban.

According to Winston’s website there were strict rules in place to ensure he had no unfair advantage.

Kevin Rolfe with Winston

They included “no cats allowed” and “birdseed must not have any performance-enhancing seeds within”.

The firm said Winston took one hour and eight minutes to fly between the offices, and the data took another hour to upload on to their system.

Mr Rolfe said the ADSL transmission of the same data size was about 4% complete in the same time.
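The back-of-the-envelope arithmetic behind the headline is worth spelling out: a 4GB stick delivered in roughly two hours is, in effect, a respectable data link.

```python
# Effective throughput of the race: a 4 GB memory stick delivered in
# about two hours (1h08m flight plus ~1h upload), versus an ADSL line
# that moved only 4% of the same data in that time.

GB = 8 * 10**9                       # bits in one (decimal) gigabyte
total_seconds = 2 * 3600

pigeon_bps = 4 * GB / total_seconds
adsl_bps = 0.04 * 4 * GB / total_seconds

print(f"pigeon: {pigeon_bps / 1e6:.1f} Mbit/s")           # ≈ 4.4 Mbit/s
print(f"ADSL:   {adsl_bps / 1e3:.0f} kbit/s")             # ≈ 178 kbit/s
print(f"pigeon advantage: {pigeon_bps / adsl_bps:.0f}x")  # 25x
```

So Winston's effective bandwidth was around 4.4 megabits per second, about 25 times the roughly 178 kbit/s the ADSL line managed over the same interval.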

Hundreds of South Africans followed the race on social networking sites Facebook and Twitter.

“Winston is over the moon,” Mr Rolfe said.

“He is happy to be back at the office and is now just chilling with his friends.”

Meanwhile Telkom said it could not be blamed for slow broadband services at the Durban-based company.

“Several recommendations have, in the past, been made to the customer but none of these have, to date, been accepted,” Telkom’s Troy Hector told South Africa’s Sapa news agency in an e-mail.

South Africa is one of the countries hoping to benefit from three new fibre optic cables being laid around the African continent to improve internet connections.



Fighting it out

Can a computer be programmed to be cunning yet fallible?

A new Turing test

IF A computer could fool a person into thinking that he was interacting with another person rather than a machine, then it could be classified as having artificial intelligence. That, at least, was the test proposed in 1950 by Alan Turing, a British mathematician. Turing envisaged a typed exchange between machine and person, so that a genuine conversation could happen without the much harder problem of voice emulation having to be addressed.

More recently, the ability of computers to play games such as chess, Go and bridge has been regarded as a form of artificial intelligence. But the latest effort to use machines to emulate the way people interact with one another focuses neither on natural languages nor on traditional board and card games. Rather, it concentrates on that icon of modernity, the shoot-’em-up computer game.

At a symposium on computational intelligence and games organised in Milan this week by America’s Institute of Electrical and Electronics Engineers, researchers are taking part in a competition called the 2K BotPrize. The aim is to trick human judges into thinking they are playing against other people in such a game. The judges will be pitted against both human players and “bots” over the course of several battles, with the winner or winners being any bot that convinces at least four of the five judges involved that they are fighting a human combatant. Last year, when the 2K BotPrize event was held for the first time, only one bot fooled any judges at all as to its true identity—and even then only two of them fell for it.

Computers can, of course, be programmed to shoot as quickly and accurately as you like. To err, however, is human, so too much accuracy does tend to give the game away. According to Chris Pelling, a student at the Australian National University in Canberra who was one of last year’s finalists and will compete again this year, a successful bot must be smart enough to navigate the three-dimensional environment of the game, avoid obstacles, recognise the enemy, choose appropriate weapons and engage its quarry. But it must also have enough flaws to make it appear human. As Jeremy Cothran, a software developer from Columbia, South Carolina, who is another veteran of last year’s competition, puts it, “it is kind of like artificial stupidity”.
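What "artificial stupidity" means in practice can be sketched simply: take a perfect aim and deliberately degrade it with human-like reaction delay and aiming error. This is an illustrative toy, not the code of any BotPrize entrant; the delay range, jitter, and miss rate are invented parameters.

```python
import random

# Illustrative "artificial stupidity": degrade a perfectly accurate
# shot with reaction delay, aim jitter and occasional outright misses,
# so the bot's marksmanship looks fallibly human. All numbers invented.

def humanized_shot(target_angle: float, rng: random.Random) -> tuple:
    reaction_delay = rng.uniform(0.15, 0.40)   # seconds; a human can't fire instantly
    aim_error = rng.gauss(0.0, 3.0)            # degrees of hand tremor
    missed = rng.random() < 0.2                # sometimes just whiff the shot
    fired_angle = target_angle + aim_error + (15.0 if missed else 0.0)
    return reaction_delay, fired_angle

rng = random.Random(42)
delay, angle = humanized_shot(90.0, rng)
assert 0.15 <= delay <= 0.40   # never inhumanly fast
```

The lower bound on the delay is the crucial part: a bot that reacts in zero milliseconds, even once, is exactly the kind of unnatural behaviour a judge is watching for.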

Mr Pelling says that one of the biggest challenges lies in programming the bots to account for sneaky tactics from the judges. It is relatively easy to manipulate the game and do unnatural things in order to elicit behavioural flaws in a badly programmed bot. And if a judge observes even a single instance of unnatural behaviour the game is, as it were, over.

Even if a bot does eventually fool the judges, though, would it really mark a significant advance in artificial intelligence? One of their number, David Fogel, the chairman of Natural Selection, a software development firm based in San Diego, California, thinks not. As he observes, once Garry Kasparov had been defeated by Deep Blue, the significance of beating the best human chess player suddenly seemed less important.

