Neutrino experiment repeat at Cern finds same result

The team which found that neutrinos may travel faster than light has carried out an improved version of their experiment – and confirmed the result.

Neutrinos travel through 730km of rock before reaching Gran Sasso’s underground laboratories

If confirmed by other experiments, the find could undermine one of the basic principles of modern physics.

Critics of the first report in September had said that the long bunches of neutrinos (tiny particles) used could introduce an error into the test.

The new work used much shorter bunches.

It has been posted to the arXiv repository and submitted to the Journal of High Energy Physics, but has not yet been reviewed by the scientific community.

The experiments have been carried out by the Opera collaboration – short for Oscillation Project with Emulsion-tRacking Apparatus.

The experiment hinges on sending bunches of neutrinos created at the Cern facility (produced in decays within a long bunch of protons made at Cern) through 730km (454 miles) of rock to a giant detector at the INFN-Gran Sasso laboratory in Italy.

The initial series of experiments, comprising 15,000 separate measurements spread out over three years, found that the neutrinos arrived 60 billionths of a second faster than light would have, travelling unimpeded over the same distance.
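The size of the claimed effect can be checked from the figures above: light takes about 2.4 milliseconds to cover 730km, so a 60-billionths-of-a-second lead corresponds to a fractional speed excess of roughly 2.5 parts in 100,000. A back-of-the-envelope check (the figures are the article's; the arithmetic is mine):

```python
# Back-of-the-envelope check of the OPERA figures quoted above.
c = 299_792_458.0    # speed of light in vacuum, m/s
distance = 730e3     # Cern -> Gran Sasso baseline, m
lead = 60e-9         # reported early arrival of the neutrinos, s

light_time = distance / c      # time light needs for the same distance
excess = lead / light_time     # fractional speed excess, (v - c)/c

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"(v - c)/c ≈ {excess:.2e}")
```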

The idea that nothing can exceed the speed of light in a vacuum forms a cornerstone in physics – first laid out by James Clerk Maxwell and later incorporated into Albert Einstein’s theory of special relativity.

Timing is everything

Initial analysis of the work by the wider scientific community argued that the relatively long-lasting bunches of neutrinos could introduce a significant error into the measurement.

Those bunches lasted 10 millionths of a second – 160 times longer than the discrepancy the team initially reported in the neutrinos’ travel time.

To address that, scientists at Cern adjusted the way in which the proton beams were produced, resulting in bunches just three billionths of a second long.
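The bunch-length figures are easy to verify: the original 10-microsecond bunches were indeed roughly 160 times longer than the 60-nanosecond anomaly, while the new three-nanosecond bunches are 20 times shorter than it. A quick check of that arithmetic:

```python
# Sanity check on the bunch-length figures quoted in the article.
anomaly = 60e-9      # reported timing discrepancy, s
old_bunch = 10e-6    # original proton-bunch duration, s
new_bunch = 3e-9     # improved bunch duration, s

old_ratio = old_bunch / anomaly    # bunch dwarfs the anomaly
new_ratio = anomaly / new_bunch    # anomaly now dwarfs the bunch

print(f"old bunch / anomaly ≈ {old_ratio:.0f}")   # "160 times longer"
print(f"anomaly / new bunch = {new_ratio:.0f}")
```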

When the Opera team ran the improved experiment 20 times, they found almost exactly the same result.

“This is reinforcing the previous finding and ruling out some possible systematic errors which could have in principle been affecting it,” said Antonio Ereditato of the Opera collaboration.

“We didn’t think they were, and now we have the proof,” he told BBC News. “This is reassuring that it’s not the end of the story.”

The first announcement of apparently faster-than-light neutrinos caused a stir worldwide; the Opera collaboration is very aware of the implications should the result eventually be proved correct.

The error in the length of the bunches, however, is just the largest among several potential sources of uncertainty in the measurement, which must all now be addressed in turn; these mostly centre on the precise departure and arrival times of the bunches.

“So far no arguments have been put forward that rule out our effect,” Dr Ereditato said.

“This additional test we made is confirming our original finding, but still we have to be very prudent, still we have to look forward to independent confirmation. But this is a positive result.”

That confirmation may be much longer in coming, as only a few facilities worldwide have the detectors needed to catch the notoriously flighty neutrinos – which interact with matter so rarely as to have earned the nickname “ghost particles”.

Next year, teams working on two other experiments at Gran Sasso – Borexino and Icarus – will begin independent cross-checks of Opera’s results.

The US Minos experiment and Japan’s T2K experiment will also test the observations. It is likely to be several months before they report back.



Dutch Scientists Drive Single-Molecule Car

Scientists in the Netherlands have introduced a molecule-sized car. Legroom might be an issue.

Its wheels consist of a few atoms each; its motor, a mere jolt of electricity. Scientists in the Netherlands have introduced the world’s smallest car — and it’s only a single molecule long.

It’s certainly no Porsche, but scientists at the University of Groningen in the Netherlands are still excited about their latest achievement: creating a “car” that’s only a billionth of a meter long.

The nanometer-sized vehicle, introduced in the British journal Nature on Wednesday, consists of a minuscule frame with four rotary units, each no wider than a few atoms. In fact, the whole construction is 60,000 times thinner than a human hair, according to the AFP news agency.

The research team was able to propel the nanocar six billionths of a meter by firing electrons at it with a scanning tunnelling microscope. The “electronic and vibrational excitation” caused by the jolts changes the way the atoms of the “wheels” interact with those on a copper surface, the report says, propelling the car forward in a single direction. The only problem, it would seem, is getting all the wheels to turn in the same direction every time.

A Small Future

It might be tough to imagine the use of such a diminutive roadster. But nanotechnology is widely considered one of the most exciting fields of the 21st century, and the researchers view their design as “a starting point for the exploration of more sophisticated molecular mechanical systems with directionally controlled motion.”

Utilizing materials at an atomic or molecular level — “nano” comes from the Greek word for “dwarf” — finds applications in everything from medicine and engineering to consumer products, such as sunscreen, ketchups and even powdered sugar.



Going round in circles


In contradiction to most cosmologists’ opinions, two scientists have found evidence that the universe may have existed for ever

WHAT happened before the beginning of time is—by definition, it might be thought—metaphysics. At least one physicist, though, thinks there is nothing meta about the question at all. Roger Penrose, of Oxford University, believes that the Big Bang in which the visible universe began was not actually the beginning of everything. It was merely the latest example of a series of such bangs that renew reality when it is getting tired out. More importantly, he thinks that the pre-Big Bang past has left an imprint on the present that can be detected and analysed, and that he and a colleague in Armenia have found it.

The imprint in question is in the cosmic microwave background (CMB). This is a bath of radiation that fills the whole universe. It was frozen in its present form some 300,000 years after the Big Bang, and thus carries information about what the early universe was like. The CMB is almost, but not quite, uniform, and the known irregularities in it are thought to mark the seeds from which galaxies—and therefore stars and planets—grew.

Dr Penrose, though, predicts another form of irregularity—great circles in the sky where the microwave background is slightly more uniform than it should be. These, if they exist, would be fossil traces of black holes from the pre-Big Bang version of reality. And in a paper just published in arXiv, an online database, he claims they do indeed exist.

Once upon a time

The Penrose version of cosmology stands in sharp distinction to received wisdom. This is that the universe popped out of nowhere about 13.7 billion years ago in a quantum fluctuation similar to the sort that constantly creates short-lived virtual particles in so-called empty space. Before this particular fluctuation could disappear again, though, it underwent a process called inflation that both stabilised it and made it 10^78 times bigger than it had previously been, in a period of 10^-32 seconds. Since then, it has expanded at a more sedate rate and will continue to do so—literally for ever.

Dr Penrose, however, sees inflation as a kludge. The main reason it was dreamed up (by Alan Guth, a cosmologist at the Massachusetts Institute of Technology) was to explain the extraordinary uniformity of the universe. A period of rapid inflation right at the beginning would impose such uniformity by stretching any initial irregularities so thin that they would become invisible.

As kludges go, inflation has been successful. Those of its predictions that have been tested have all been found true. But that does not mean it is right. Dr Penrose’s explanation of the uniformity is that, rather than having been created at the beginning of the universe, it is left over from the tail end of reality’s previous incarnation.

Dr Penrose’s version of events is that the universe did not come into existence at the Big Bang but instead passes through a continuous cycle of aeons. Each aeon starts off with the universe being of zero size and high uniformity. At first the universe becomes less uniform as it evolves and objects form within it. Once enough time has passed, however, all of the matter around will end up being sucked into black holes. As Stephen Hawking has demonstrated, black holes eventually evaporate in a burst of radiation. That process increases uniformity, eventually to the level the universe began with.

Thus far, Dr Penrose’s version of cosmology more or less matches the standard version. At this point, though, he introduces quite a large kludge of his own. This is the idea that when the universe becomes very old and rarefied, the particles within it lose their mass.

That thought is not entirely bonkers. The consensus among physicists is that particles began massless and got their mass subsequently from something known as the Higgs field—the search for which was one reason for building the Large Hadron Collider, a huge and powerful particle accelerator located near Geneva. Mass, then, is not thought an invariable property of matter. So Dr Penrose found himself speculating one day about how a universe in which all particles had lost their mass through some as-yet-undefined process might look. One peculiarity of massless particles is that they have to travel at the speed of light. That (as Einstein showed) means that from the particle’s point of view time stands still and space contracts to nothingness. If all particles in the universe were massless, then, the universe would look to them to be infinitely small. And an infinitely small universe is one that would undergo a Big Bang.

Uncommon sense

It is well known that fundamental physics is full of ideas that defy what humans are pleased to call common sense. Even by those standards, however, Dr Penrose’s ideas are regarded as a little eccentric by his fellow cosmologists. But they do have one virtue that gives them scientific credibility: they make a prediction. Collisions between black holes produce spherical ripples in the fabric of spacetime, in the form of gravitational waves. In the Penrose model of reality these ripples are not abolished by a new Big Bang. Images of black-hole collisions that happened before the new Bang may thus imprint themselves as concentric circular marks in the emerging cosmic microwave background.

The actual search for such cosmic circles has been carried out by Vahe Gurzadyan of the Yerevan Physics Institute in Armenia. Dr Gurzadyan analysed seven years’ worth of data from WMAP, an American satellite whose sole purpose is to measure the CMB, and also looked at data from another CMB observatory, the BOOMERanG balloon experiment in Antarctica. His verdict, arrived at after he scoured over 10,000 points on the microwave maps, is that Dr Penrose’s concentric circles are real. He says he has found a dozen sets of them—one of which is illustrated. (The visible rings in the picture have been drawn on subsequently to show where computer analysis has found circle-defining uniformity.)

This is, of course, but a single result—and supporters of inflation do not propose to give up without a fight. Amir Hajian, a physicist at Princeton, for example, says he is concerned about distortions in the WMAP data caused by the satellite spending more time mapping some parts of the sky than others. Then there is the little matter of how the masslessness comes about.

Dr Guth, meanwhile, claims that a handful of papers are published every year pointing to inconsistencies between the microwave background data and inflation, and that none has withstood the test of time. Moreover, even if the circles do hold up, they may have a cause different from the one proposed by Dr Penrose. Nevertheless, when a strange theory makes a strange prediction and that prediction proves correct, it behoves science to investigate carefully. For if what Dr Penrose and Dr Gurzadyan think they have found is true, then much of what people thought they knew about the universe is false.



Stone Power

Q. I’ve heard that if a penny is dropped from the Empire State Building it could kill someone. But what about hail? It’s often much larger and falls from much higher, so why do I never hear about any deaths caused by it?

A. Hail can cause human fatalities, but does not usually do so, according to the National Severe Storms Laboratory of the National Oceanic and Atmospheric Administration.

While one hail event in India in 1988 caused 246 deaths, this was truly exceptional. In the United States, most years see no hail deaths at all, and only very rarely are one or two reported. It has been suggested that one reason for this is that Americans spend less time out in the open than people who live in regions like northern India, where hail is a greater risk to human life.

Another important factor is that hail does not fall uninterrupted from the high reaches of the atmosphere, but is tossed up and down by the winds of a thunderstorm, bumping into raindrops and other hailstones, a process that slows the fall. The winds also frequently make the hailstones fall at an angle.

The friction with other precipitation deforms a hailstone from a perfect sphere, making its velocity hard to calculate when it does become heavy enough to fall to earth. One estimate is that a half-inch stone falls about 30 feet a second, while a three-inch stone falls nearly 160 feet a second.
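For scale, the terminal velocity of an idealised spherical hailstone can be estimated by balancing drag against weight, v = sqrt(2mg / (ρ_air · C_d · A)). The sketch below uses standard densities for ice and air and an assumed drag coefficient of 0.5 for a smooth sphere; real, deformed stones fall slower, which is consistent with the article's lower estimates.

```python
import math

def terminal_velocity(diameter_m,
                      rho_ice=917.0,   # density of ice, kg/m^3
                      rho_air=1.225,   # sea-level air density, kg/m^3
                      c_d=0.5,         # drag coefficient, smooth sphere (assumed)
                      g=9.81):
    """Idealised terminal velocity of a spherical hailstone, in m/s."""
    r = diameter_m / 2
    mass = rho_ice * (4 / 3) * math.pi * r**3   # weight grows as r^3
    area = math.pi * r**2                       # drag area grows as r^2
    return math.sqrt(2 * mass * g / (rho_air * c_d * area))

FT_PER_M = 3.281
for inches in (0.5, 3.0):
    v = terminal_velocity(inches * 0.0254)
    print(f"{inches}-inch sphere: ~{v * FT_PER_M:.0f} ft/s")
```

Because mass grows as the cube of the radius while drag area grows only as the square, the idealised speed scales as the square root of the diameter; the article's figures imply a steeper ratio, reflecting the irregular shapes and turbulent conditions it describes.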

C. Claiborne Ray, New York Times



Dr Hawking’s bright idea

Mimicking black holes

A long-predicted phenomenon has turned up in an unexpected place

IN 1974 Stephen Hawking had a startling theoretical insight about black holes—those voracious eaters of matter and energy from whose gravitational clutches not even light can escape. He predicted that black holes should not actually be black. Instead, because of the quirks of quantum mechanics, they should glow ever so faintly, like smouldering embers in a dying fire. The implications were huge. By emitting this so-called Hawking radiation, a black hole would gradually lose energy and mass. If it failed to replenish itself it would eventually evaporate completely, like a puddle of water on a hot summer’s day.

Unfortunately for physicists, Dr Hawking also predicted that the typical temperature at which a black hole radiates should be about a billionth of that of the background radiation left over from the Big Bang itself. Proving his theory by observing actual Hawking radiation from a black hole in outer space has therefore remained a practical impossibility.
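The temperature claim can be made concrete. The Hawking temperature of a black hole of mass M is T = ħc³/(8πGMk_B), which for a one-solar-mass hole works out to a few tens of nanokelvin, utterly swamped by the 2.7 K microwave background; the one-solar-mass example below is my own illustration of the scaling.

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c    = 299_792_458.0       # speed of light, m/s
G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.380_649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30           # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature(M_SUN)
print(f"T(1 solar mass) ≈ {T:.1e} K")
print(f"ratio to 2.7 K background ≈ {T / 2.7:.1e}")
```

Since T falls as 1/M, a ten-solar-mass hole radiates at roughly a billionth of the background temperature, in line with the figure quoted above.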

In a paper just accepted by Physical Review Letters, however, a team of researchers led by Daniele Faccio from the University of Insubria, in Italy, report that they have observed Hawking radiation in the laboratory. They managed this trick not by creating an Earth-gobbling black hole on a benchtop but by firing pulses of laser light into a block of glass. This created a region from which light could not escape (analogous to a black hole) and also its polar opposite, a region which light could not enter. When the team focused a sensitive camera on to the block, they saw the faint glow of Hawking radiation.

Black and light

If a dying star is massive enough, it can collapse to form a region of infinite density, called a singularity. The gravity of such an object is so strong that nothing, not even light itself, can break free if it strays too close. Once something has passed through the so-called event horizon that surrounds this region, it is doomed to a one-way trip.

Dr Hawking’s insight came from considering what happens in the empty space just outside the event horizon. According to quantum mechanics, empty space is anything but empty. Rather, it is a roiling, seething cauldron of evanescent particles. For brief periods of time, these particles pop into existence from pure nothingness, leaving behind holes in the nothingness—or antiparticles, as physicists label them. A short time later, particle and hole recombine, and the nothingness resumes.

If, however, the pair appears on the edge of an event horizon, either particle or hole may wander across the horizon, never to return. Deprived of its partner, the particle (or the hole) has no “choice” but to become real. These particles, the bulk of which are photons (the particles of light), make up Hawking radiation—and because photons and antiphotons are identical, the holes contribute equally. The energy that goes into these now-real photons has to come from somewhere; that somewhere is the black hole itself, which thus gradually shrinks. By linking the disparate fields of gravitational science, quantum mechanics and thermodynamics, Hawking radiation has become a crucial concept in theoretical physics over the past quarter of a century.

In 1981, that concept was extended. William Unruh of the University of British Columbia pointed out that black holes are actually extreme examples of a broader class of physical systems that can form event horizons. Consider a river approaching a waterfall. As the water nears the edge, the current moves faster and faster. In theory, it can move so fast that ripples on the surface are no longer able to escape back upstream. In effect, an event horizon has formed in the river, preventing waves from making their way out. Since then, other researchers have come up with other quotidian examples of event horizons.
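Dr Unruh's river analogy can be made quantitative: long surface waves in shallow water travel at speed sqrt(g·h), where h is the depth, so a horizon forms wherever the current outruns them. The depths and flow speeds in this toy illustration are invented for the example:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def wave_speed(depth_m):
    """Shallow-water surface-wave speed, sqrt(g*h), in m/s."""
    return math.sqrt(g * depth_m)

# Hypothetical river accelerating toward a waterfall:
# (current in m/s, depth in m) at three points downstream.
for current, depth in [(0.5, 0.5), (1.5, 0.3), (3.0, 0.2)]:
    trapped = current > wave_speed(depth)
    print(f"current {current} m/s, depth {depth} m: "
          f"waves {'cannot' if trapped else 'can'} escape upstream")
```

The crossover point, where the current first exceeds the local wave speed, plays exactly the role of the event horizon in the article's description.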

Dr Faccio and his team were able to create their version because, as the laser pulse moves through the glass block, it changes the glass’s refractive index (the speed at which light travels through a material). Light in the vicinity of the pulse is therefore slowed more and more as the pulse passes by and raises the refractive index around it.

To see how the pulse can act like a black hole, imagine that it is sent chasing after a slower, weaker pulse. It will gradually catch up with this slow pulse, reducing the speed of light in the slow pulse’s vicinity. That will slow the slow pulse down still more until eventually it is slowed so much that it is stuck. Essentially, the leading edge of the chasing pulse sucks it in, acting like the event horizon of a black hole.

Now imagine that the chasing pulse is itself being chased, but again by a much weaker pulse. As this third pulse approaches the tail of the second one it will also slow down (because the speed of light in the glass it is passing through has been reduced by the second pulse’s passage). The closer it gets, the slower it travels, and it can never quite catch up. The trailing edge of the second pulse, therefore, also acts as an event horizon. This time, though, it stops things getting in rather than stopping them getting out. It resembles the opposite of a black hole—a white hole, if you like.

In the actual experiment, there were no leading and trailing pulses. Instead, their role was played by evanescent photons continually popping into existence around the strong pulse. As the pulse passed through the glass, its event horizons should have swept some of these photons up, producing Hawking radiation from the partners they left behind.

Sure enough, when Dr Faccio and his team focused a suitable camera on the block and fired 3,600 pulses from their laser, they recorded a faint glow at precisely the range of frequencies which the theory of Hawking radiation predicts. After carefully considering and rejecting other possible sources for this light, they conclude that they have indeed observed Hawking radiation for the first time.

Because of its tabletop nature, other groups will certainly attempt to replicate and extend Dr Faccio’s experiment. Although such studies cannot prove that real black holes radiate and evaporate, they lend strong support to the ideas that went into Dr Hawking’s line of reasoning. Unless a tiny black hole turns up in the collisions of a powerful particle accelerator, that may be the best physicists can hope for. It may even be enough to convince the Royal Swedish Academy of Sciences to give Dr Hawking the Nobel prize that many think he deserves, but which a lack of experimental evidence has hitherto caused it to withhold.



Ye cannae change the laws of physics

Or can you?

RICHARD FEYNMAN, Nobel laureate and physicist extraordinaire, called it a “magic number” and its value “one of the greatest damn mysteries of physics”. The number he was referring to, which goes by the symbol alpha and the rather more long-winded name of the fine-structure constant, is magic indeed. If it were a mere 4% bigger or smaller than it is, stars would not be able to sustain the nuclear reactions that synthesise carbon and oxygen. One consequence would be that squishy, carbon-based life would not exist.

Why alpha takes on the precise value it has, so delicately fine-tuned for life, is a deep scientific mystery. A new piece of astrophysical research may, however, have uncovered a crucial piece of the puzzle. In a paper just submitted to Physical Review Letters, a team led by John Webb and Julian King from the University of New South Wales in Australia present evidence that the fine-structure constant may not actually be constant after all. Rather, it seems to vary from place to place within the universe. If their results hold up to scrutiny, and can be replicated, they will have profound implications—for they suggest that the universe stretches far beyond what telescopes can observe, and that the laws of physics vary within it. Instead of the whole universe being fine-tuned for life, then, humanity finds itself in a corner of space where, Goldilocks-like, the values of the fundamental constants happen to be just right for it.

Slightly belying its name, the fine-structure constant is actually a compound of several other physical constants, whose values can be found in any physics textbook. You start with the square of an electron’s charge, divide it by the speed of light and Planck’s constant, then multiply the whole lot by two pi. The point of this convoluted procedure is that this combination of multiplication and division produces a pure, dimensionless number. The units in which the original measurements were made cancel each other out and the result is 1/137.036, regardless of the measuring system you used in the first place.
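In SI units the same combination is written α = e²/(4πε₀ħc); the article's recipe uses Gaussian units, in which the 4πε₀ is absorbed and the factor of two pi comes from using Planck's constant h rather than ħ. Computing it from the CODATA values confirms the quoted figure:

```python
import math

e    = 1.602_176_634e-19    # elementary charge, C
eps0 = 8.854_187_8128e-12   # vacuum permittivity, F/m
hbar = 1.054_571_817e-34    # reduced Planck constant, J*s
c    = 299_792_458.0        # speed of light, m/s

# Fine-structure constant in SI form: a pure, dimensionless number.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha   ≈ {alpha:.9f}")
print(f"1/alpha ≈ {1 / alpha:.3f}")   # the familiar 137.036
```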

Despite its convoluted origin, though, alpha has a real meaning. It characterises the strength of the force between electrically charged particles. As such, it governs—among other things—the energy levels of an atom formed from negatively charged electrons and a positive nucleus. When electrons jump between these energy levels, they absorb and emit light of particular frequencies. These frequencies show up as lines (dark for absorption; bright for emission) in a spectrum. When many different energy levels are involved, as they are in the spectrum of a chemically mixed star, the result is a fine, comb-like structure—hence the constant’s name. If it were to take on a different value, the wavelengths of these lines would change. And that is what Dr Webb and Mr King think they have found.

The light in question comes not from individual stars but from quasars. These are extremely luminous (and distant) galaxies whose energy output is powered by massive black holes at their centres. As light from a quasar travels through space, it passes through clouds of gas that imprint absorption lines onto its spectrum. By measuring the wavelengths of a large collection of these absorption lines and subtracting the effects of the expansion of the universe, the team led by Dr Webb and Mr King was able to measure the value of alpha in places billions of light-years away.

Dr Webb first conducted such a study almost a decade ago, using 76 quasars observed with the Keck telescope in Hawaii. He found that, the farther out he looked, the smaller alpha seemed to be. In astronomy, of course, looking farther away means looking further back in time. The data therefore indicated that alpha was around 0.0006% smaller 9 billion years ago than it is now. That may sound trivial. But any detectable deviation from zero would mean that the laws of physics were different there (and then) from those that pertain in the neighbourhood of the Earth.

Such an important result needed independent verification using a different telescope, so in 2004 another group of researchers made their own observations with the European Southern Observatory’s Very Large Telescope (VLT) in Chile. They found no evidence for any variation of alpha. Since then, though, flaws have been discovered in that second analysis, so Dr Webb and his team set out to do their own crosscheck with a sample of 60 quasars observed by the VLT.

What they found shocked them. The further back they looked with the VLT, the larger alpha seemed to be—in seeming contradiction to the result they had obtained with the Keck. They realised, however, that there was a crucial difference between the two telescopes: because they are in different hemispheres, they are pointing in opposite directions. Alpha, therefore, is not changing with time; it is varying through space. When they analysed the data from both telescopes in this way, they found a great arc across the sky. Along this arc, the value of alpha changes smoothly, being smaller in one direction and larger in the other. The researchers calculate that there is less than a 1% chance such an effect could arise at random. Furthermore, six of the quasars were observed with both telescopes, allowing them to get an additional handle on their errors.

If the fine-structure constant really does vary through space, it may provide a way of studying the elusive “higher dimensions” that many theories of reality predict, but which are beyond the reach of particle accelerators on Earth. In these theories, the constants observed in the three-dimensional world are reflections of what happens in higher dimensions. It is natural in these theories for such constants to change their values as the universe expands and evolves.

Unfortunately, their method does not allow the team to tell which of the constants that go into alpha might be changing. But it suggests that at least one of them is. On the other hand, the small value of the change over a distance of 18 billion light-years suggests the whole universe is vastly bigger than had previously been suspected. A diameter of 18 billion light-years (9 billion in each direction) is a considerable percentage of observable reality. The universe being 13.7 billion years old, 13.7 billion light-years—duly stretched to allow for the expansion of the universe—is the maximum distance it is possible to see in any direction. If the variation Dr Webb and Mr King have found is real, and as gradual as their data suggest, you would have to go a very long way indeed to come to a bit of space where the fine-structure constant was more than 4% different from its value on Earth.
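How long a way? A naive linear extrapolation (my own arithmetic, using only the figures quoted in the article) gives a sense of scale: if alpha shifts by about 0.0006% over roughly 9 billion light-years, reaching the life-threatening 4% change at that rate would take thousands of times the radius of the observable universe.

```python
# Naive linear extrapolation of the quoted alpha gradient.
frac_change = 6e-6    # ~0.0006% variation reported by the Keck study
over_ly     = 9e9     # ...accumulated over ~9 billion light-years
target      = 0.04    # the 4% change at which carbon chemistry fails

distance_ly = target * over_ly / frac_change
print(f"distance to a 4% change: ~{distance_ly:.0e} light-years")
```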

If. Other teams of astronomers are already on the case, and Victor Flambaum, one of Dr Webb’s colleagues at the University of New South Wales, points out in a companion paper that laboratory tests involving atomic clocks only slightly better than those that exist already could provide an independent check. These would vary as the solar system moves through the universe. But if and when such confirmation comes, it will break one of physics’s greatest taboos, the assumption that physical laws are the same everywhere and everywhen. And the fine-structure constant will have shown itself to be more mysterious than even Feynman conceived.


Sun storm

Meet the northern lights

THIS PAST week, residents of several US states had a rare opportunity to see the northern lights bathe the sky in their eerie glow of pale green and red. The lights are normally only visible in far northern latitudes, but a surge of activity from the sun pushed them far enough south that there was even a chance we’d get a glimpse in Massachusetts. And fortunately for sky watchers, this may only be the beginning: Scientists say the sun is being roused into a period of high activity, which may bring more displays over the next few years.

The northern lights are a rare visible manifestation of space weather, the currents of matter and energy that roil above the earth’s lower atmosphere. The lights are caused by solar wind — protons and electrons streaming outwards from the sun — which can gust at speeds up to thousands of kilometers per second. When these winds push against the earth’s magnetic field, the result is a geomagnetic storm that appears to us as an aurora — arcs, sheets, or rippled filaments of light. In northern latitudes, this phenomenon is called the aurora borealis, after the Roman goddess of dawn and the Greek word for north wind. The aurora borealis only reaches the Boston area every few years; our last glimpse was in 2005.

The recent storm was created by an enormous blast from the sun called a coronal mass ejection. Although it made for some beautiful displays, it’s by no means the most impressive we’ve seen. In the summer of 1859, a huge solar flare lit up skies across Europe, the United States, and even Japan. A New York Times article from Sept. 2 that year said the northern lights in Boston were “so brilliant that at about one o’clock ordinary print could be read by the light.” That night, two operators of the telegraph line between Boston and Portland conversed for two hours powered solely by current induced by the aurora.

The sun’s cycles of activity last about 11 years on average, and after two quiet years — a longer rest than usual — the sun is stirring back into action. This means more opportunities for watching the dramatic storms it causes. In the meantime, you can see spectacular views of the sun’s activity at NASA’s Solar Dynamics Observatory website. A gallery of images of the recent aurora taken from around the world is on display online, where devoted sky watchers can also sign up for “aurora alerts” on their cellphones. Anyone who wants to track space weather conditions, including opportunities to view aurorae, can follow the National Weather Service’s Space Weather Prediction Center. Should another chance come, the best way to see the northern lights is to escape city light pollution, ideally by finding a remote spot in the country with unobstructed views of the northern sky.

Courtney Humphries, author of ”Superdove: How the Pigeon Took Manhattan…And the World,” is a regular contributor to Ideas.



Five Best Books on Inventions

Eureka! William Rosen hails these books about inventions

1. Longitude

By Dava Sobel
Walker, 1995

The story of the first marine chronometer, invented by the self-taught British clockmaker John Harrison, had a remarkable number of dramatic elements. Thanks to the Longitude Act of 1714, the protagonist’s goal—a £20,000 prize for a method of determining longitude to within 30 nautical miles—could not have been clearer. Even more theatrically, Harrison was a legitimate “lone genius,” who spent 19 solitary years on a single version of a clock accurate enough to compare local noon at sea with noon back home. And, in the devious Nevil Maskelyne, astronomer royal and champion of a competing method of calculating longitude, the story had a perfect villain. To this raw material Dava Sobel added a sculptor’s sense for the physicality of things—for the self-lubricating gears that Harrison carved from close-grained wood—and enough poetic imagination to describe H-1, Harrison’s first chronometer, as “a model ship, escaped from its bottle, afloat on the sea of time.”

2. The Making of the Atomic Bomb

By Richard Rhodes
Simon & Schuster, 1986

Richard Rhodes’s story of the birth of the nuclear age is an epic that, in terms of scientific discovery, unfolds in the blink of an eye—Hiroshima, after all, was destroyed just 34 years after the discovery of the atomic nucleus. His cast of characters is a virtual Who’s Who of 20th-century physics, from Albert Einstein to J. Robert Oppenheimer, but one that also gives star turns to brilliant and dogged engineers like Vannevar Bush and Gen. Leslie Groves. Rhodes pays his readers the compliment of assuming that they are familiar enough with the story to foresee critical moments. We know, for instance, before Glenn Seaborg himself, that Seaborg will name element 94 (“this speck of matter God had not welcomed at the Creation,” Rhodes writes) for the Roman god of the dead: plutonium.

3. To Conquer the Air

By James Tobin
Free Press, 2003

The story of the onetime bicycle-shop owners from Dayton, Ohio (in 1900, America’s per capita patent leader), is simultaneously a brilliant panorama of early 20th-century America and an unforgettable portrait of Wilbur Wright. Both Wilbur and his brother Orville were exemplars of grace under pressure, showing high intelligence, modesty and determination without foolhardiness—all the while competing against everyone from Alexander Graham Bell to the motorcycle-racing champion Glenn Curtiss to be the first aloft. But Wilbur is clearly the star. His decision to master airborne stability and balance before power—to create the optimal wing and let the engine take care of itself—gives James Tobin’s tale an enormously satisfying structure, as well as an entirely apt metaphor for Wilbur Wright’s life.

4. The Deltoid Pumpkin Seed

By John McPhee
Farrar, Straus & Giroux, 1973

In the late 1960s, an unlikely team of ex-Navy airship specialists, model builders, aeronautics professors and the pastor of the Fourth Presbyterian Church in Trenton, N.J., who instigated it all, set out to build a new airborne means of transporting heavy freight. What they envisioned was a helium-filled hybrid of an airplane and a rigid airship. What they created was the Aereon 26, the triangular oblate “pumpkin seed” of the title of John McPhee’s book, which succeeds as a narrative despite the ultimate failure of the craft’s design. McPhee’s genius is for understanding the eccentricity of human motivation as he depicts, with an unerring eye for the telling detail, characters toiling to revive the world of lighter-than-air flight.

5. The Soul of a New Machine

By Tracy Kidder
Little, Brown, 1981

The holy grail for Tom West and his team of computer engineers was a machine that was smaller and nimbler than a mainframe but still able to process 32 bits of information: a superminicomputer. Tracy Kidder chronicled their painstaking quest in one of the more improbable best sellers ever. (A book about writing software code?) But even now “The Soul of a New Machine” is capable of inducing in readers the same sleepless nights that the project demanded of the twentysomething geeks who designed and built the machine they dubbed the Eagle. “The real game is pinball,” West tells them. “You win one game, you get to play another; you build this machine, you get to build another.”

Mr. Rosen is the author of “The Most Powerful Idea in the World: A Story of Steam, Industry, and Invention.”



A New Clue to Explain Existence

Physicists at the Fermi National Accelerator Laboratory are reporting that they have discovered a new clue that could help unravel one of the biggest mysteries of cosmology: why the universe is composed of matter and not its evil-twin opposite, antimatter. If confirmed, the finding portends fundamental discoveries at the new Large Hadron Collider outside Geneva, as well as a possible explanation for our own existence.

In a mathematically perfect universe, we would be less than dead; we would never have existed. According to the basic precepts of Einsteinian relativity and quantum mechanics, equal amounts of matter and antimatter should have been created in the Big Bang and then immediately annihilated each other in a blaze of lethal energy, leaving a big fat goose egg with which to make stars, galaxies and us. And yet we exist, and physicists (among others) would dearly like to know why.

Sifting data from collisions of protons and antiprotons at Fermilab’s Tevatron, which until last winter was the most powerful particle accelerator in the world, the team, known as the DZero collaboration, found that the fireballs produced pairs of the particles known as muons, which are sort of fat electrons, slightly more often than they produced pairs of anti-muons. So the miniature universe inside the accelerator went from being neutral to being about 1 percent more matter than antimatter.
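An excess like the one DZero reported is usually quantified as a fractional asymmetry between the matter and antimatter counts. The sketch below illustrates the arithmetic with made-up event counts chosen to give a roughly 1 percent asymmetry; these are not the experiment's actual numbers.

```python
# Illustrative sketch of a matter-antimatter asymmetry calculation.
# The event counts below are invented for illustration; they are NOT
# DZero's actual data.

def asymmetry(n_matter: int, n_antimatter: int) -> float:
    """Fractional excess: (N_matter - N_antimatter) / (N_matter + N_antimatter)."""
    return (n_matter - n_antimatter) / (n_matter + n_antimatter)

# A ~1% excess of muon pairs over anti-muon pairs:
a = asymmetry(101_000, 99_000)
print(f"{a:.3%}")  # 1.000%
```

With perfectly balanced counts the asymmetry is zero; any persistent nonzero value, once instrumental effects are ruled out, signals a real matter-antimatter difference.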

“This result may provide an important input for explaining the matter dominance in our universe,” Guennadi Borissov, a co-leader of the study from Lancaster University, in England, said in a talk Friday at Fermilab, in Batavia, Ill. Over the weekend, word spread quickly among physicists. Maria Spiropulu of CERN and the California Institute of Technology called the results “very impressive and inexplicable.”

The results have now been posted on the Internet and submitted to the Physical Review.

It was Andrei Sakharov, the Russian dissident and physicist, who first provided a recipe for how matter could prevail over antimatter in the early universe. Among his conditions was that there be a slight difference in the properties of particles and antiparticles known technically as CP violation. In effect, when the charges and spins of particles are reversed, they should behave slightly differently. Over the years, physicists have discovered a few examples of CP violation in rare reactions between subatomic particles that tilt slightly in favor of matter over antimatter, but “not enough to explain our existence,” in the words of Gustaaf Brooijmans of Columbia, who is a member of the DZero team.

The new effect hinges on the behavior of particularly strange particles called neutral B-mesons, which are famous for not being able to make up their minds. They oscillate back and forth trillions of times a second between their regular state and their antimatter state. As it happens, the mesons, created in the proton-antiproton collisions, seem to go from their antimatter state to their matter state more rapidly than they go the other way around, leading to an eventual preponderance of matter over antimatter of about 1 percent, when they decay to muons.

Whether this is enough to explain our existence is a question that cannot be answered until the cause of the still-mysterious behavior of the B-mesons is directly observed, said Dr. Brooijmans, who called the situation “fairly encouraging.”

The observed preponderance is about 50 times what is predicted by the Standard Model, the suite of theories that has ruled particle physics for a generation, meaning that whatever is causing the B-meson to act this way is “new physics” that physicists have been yearning for almost as long.

Dr. Brooijmans said that the most likely explanations were some new particle not predicted by the Standard Model or some new kind of interaction between particles. Luckily, he said, “this is something we should be able to poke at with the Large Hadron Collider.”

Neal Weiner of New York University said, “If this holds up, the L.H.C. is going to be producing some fantastic results.”

Nevertheless, physicists will be holding their breath until the results are confirmed by other experiments.

Joe Lykken, a theorist at Fermilab, said, “So I would not say that this announcement is the equivalent of seeing the face of God, but it might turn out to be the toe of God.”

Dennis Overbye, New York Times



Is Anybody Out There?

After 50 years, astronomers haven’t found any signs of intelligent life beyond Earth. They could be looking in the wrong places

Fifty years ago this week, on April 8, 1960, a little-known astronomer named Frank Drake sat at the controls of an 85-foot radio telescope at an observatory in Green Bank, W.Va., and began to sweep the skies, looking for a signal from an alien civilization. It was the start of the most ambitious scientific experiment in history.

Barely an hour had passed when the equipment suddenly went wild. A loudspeaker hooked up to the giant antenna began booming and the pen recorder gyrated manically. The radio telescope was pointed at a nearby star called Epsilon Eridani. Mr. Drake was nonplussed. Surely his quest couldn’t be that easy? He was right. The commotion turned out to be a signal from a secret military radar.

The astronomer’s solitary vigil lasted for a few weeks; he ran out of telescope time with little to report. Nevertheless, his pioneering effort sparked the genesis of a 50-year project known as the Search for Extraterrestrial Intelligence, now an international research program with a multimillion-dollar budget. It has included renting time on some of the biggest radio telescopes in the world—such as the 1,000-foot dish at Arecibo in Puerto Rico, featured in the James Bond movie “GoldenEye.”

After five decades of patient listening, however, all the astronomers have to show for it is an eerie silence. Does that mean we are alone in the universe after all? Or might we be looking for the wrong thing in the wrong place at the wrong time?

The search for extraterrestrial intelligence, once considered a quixotic enterprise at best, has now become part of mainstream science. In the past decade or so, over 400 planets have been found orbiting nearby stars, and astronomers estimate there could be billions of Earth-like planets in the Milky Way alone. Biologists have discovered microbes living in extreme environments on Earth not unlike conditions on Mars, and have detected the molecular building blocks of life in deep space as well as in meteorites. Many scientists now maintain that the universe is teeming with life, and that some planets could harbor intelligent organisms.

Speculation about other worlds populated by sentient beings stretches back into pre-history. For millennia, the subject remained squarely in the provinces of religion and philosophy, but by the 19th century, science had become involved. Astronomical observations hinted that Mars could be a congenial abode for life, and in the 1870s the Italian astronomer Giovanni Schiaparelli fancied he could see lines on the surface of the red planet. A wealthy American writer, Percival Lowell, became fixated on the idea that Martians had built a network of canals to irrigate their parched planet, a conjecture fueled by the publication of H.G. Wells’s novel “The War of the Worlds.” Mr. Lowell built an observatory in Flagstaff, Ariz., specifically to map the canals and to look for other signs of Martian engineering.

Sadly for Mr. Lowell, there were no canals. Space probes sent to Mars in the 1960s found no sign of Martian engineering projects, and no sign of life either, just a freeze-dried desert bathed in deadly ultraviolet radiation.

In the next few decades, the search for radio messages from the stars was taken seriously enough to attract government funding. From 1970 to 1993, NASA spent about $78 million on projects that sought to refine Mr. Drake’s trail-blazing observations, starting with a feasibility study for the construction of an array of 1,000 dishes sensitive enough to pick up routine television and radio transmissions from nearby stars. In 1992, NASA officially launched a program called the High Resolution Microwave Survey—but Congress killed it the following year, ending NASA’s involvement.

Most of the funding today comes from private donations through the SETI Institute, a private nonprofit founded in 1984 in Mountain View, Calif. The jewel in its crown is the Allen Telescope Array, a $35 million dedicated network of 42 small dishes in northern California, with about $30 million of the funding contributed by Microsoft co-founder Paul Allen. The goal is to ultimately increase the network to 350 dishes. Donors on other projects have included David Packard and Bill Hewlett (co-founders of Hewlett-Packard) and Gordon Moore (co-founder of Intel).


Starry-Eyed Believers

Views of intelligent alien life through history.

Titus Lucretius Carus

The ancient Roman poet Titus Lucretius Carus covered atoms, humans and the cosmos in his epic poem “De rerum natura” (“The nature of things”). Because space is infinite and the same physical laws occur throughout the universe, he wrote, there must be intelligent beings in other worlds.

Nicholas of Cusa

In the 15th century, the German cardinal Nicholas of Cusa held advanced scientific views for his time, including that the Earth was not the center of the universe. He also speculated that life existed on other planets, writing: “It may be conjectured that in the area of the sun there exist solar beings, bright and enlightened intellectual denizens, and by nature more spiritual than such as may inhabit the moon—who are possibly lunatics.”

Johannes Kepler

The invention of the telescope led scientists to ponder alien civilization. In the early 1600s, astronomer Johannes Kepler believed that because the moon’s craters were perfectly round, they must have been made by intelligent creatures.

Rev. Thomas Dick

The Scottish Rev. Thomas Dick wrote several successful books on religion and astronomy. In his 1838 book “Celestial Scenery,” he calculated the populations of planets and other bodies in Earth’s solar system, based on the number of people per square mile in England at the time. His estimates included 4.2 billion inhabitants on the moon and 8.1 trillion on Saturn’s rings.

Guglielmo Marconi

In 1919, radio pioneer Guglielmo Marconi reported picking up strange radio signals, saying they might come from beyond Earth. Some guessed that the signals originated from Mars or Venus. Others tackled the issue of how to respond: Elmer Sperry of the Sperry Gyroscope Company proposed sending a beacon to Mars using 150 to 200 of his company’s searchlights.


Our own radio stations broadcast continuous narrow-band signals, that is, radio waves tuned to a sharply defined frequency. Searches have mostly focused on something similar coming from space. The late Carl Sagan, a charismatic champion of searching for extraterrestrial signals in the 1980s, envisaged an advanced alien civilization deliberately beaming narrow-band radio messages at Earth to attract our attention. That scenario now seems very unlikely. Even optimists like Mr. Drake, still an active researcher, suppose that the nearest alien civilization would be hundreds of light years away. Because nothing travels faster than light, these hypothetical aliens would have no idea that a radio-savvy society exists on Earth yet.

A more likely sign could be a beacon, a radio source that goes bleep on a regular basis for anyone who might be listening, sweeping the plane of the Milky Way galaxy like the beam from a lighthouse. It would show up in a radio telescope as a brief pulse that repeats periodically—perhaps every few months or years.

Astronomers do occasionally detect brief radio bursts coming from space. A famous example was the so-called “Wow!” signal, recorded on Aug. 15, 1977, by Jerry Ehman using Ohio State University’s Big Ear radio telescope. Mr. Ehman discovered it while perusing the antenna’s computer printout, and was so excited he wrote “Wow!” in the margin. Radio pulses can arise from a variety of astronomical phenomena, ranging from spinning neutron stars to black hole explosions, but the characteristics of the Wow signal don’t fit any known natural event. Nor did the pulse match a man-made disturbance. Nothing has been detected again from that part of the sky when astronomers have looked.

The logistics of building beacons have been analyzed by the astrophysicist Gregory Benford of the University of California at Irvine and his brother James Benford, an expert on high-powered beamed microwaves. The main unknown is how often a beacon would repeat, so the Benfords are urging that a systematic search be made. It would need a dedicated set of radio telescopes, oriented to stare for years on end at a fixed patch of the sky—preferably towards the center of the galaxy, where the oldest stars are found and the most advanced and best-resourced civilizations are likely to be located.

By focusing on radio signals, however, the search for intelligent life has been extremely limited. As in forensic science, the clues left by alien activity might be very subtle and require sophisticated scientific techniques. An advanced civilization might engage in large-scale astro-engineering, reconfiguring its planetary system or even modifying its host star, effects that could be observed from Earth or near space. The physicist Freeman Dyson once suggested that an energy-hungry alien community might create a shell of material around a star to trap most of its heat and light to run its industry—a solar energy program with a vengeance. Dyson spheres would betray their existence by radiating strongly in the infrared region of the spectrum. A few searches have been made using satellite data, but without success.


If a civilization endures for long enough, it might seek to migrate beyond its planetary system and colonize, or at least explore, the galaxy. The Milky Way is huge—about 100,000 light years across—and contains 400 billion stars, but given enough time, a determined civilization could spread far and wide. Our solar system is about 4.5 billion years old, but the galaxy is much older; there were stars and planets around long before Earth even existed. There has been plenty of time for at least one of those expansionary civilizations to reach our galactic neighborhood—a prospect that once led the physicist Enrico Fermi to famously utter “Where is everybody?”

How do we know they haven’t been here already?

It would be an incredible coincidence if Earth had been visited by aliens during the brief span of human history. On purely statistical grounds any visitation is likely to have been a very long time ago. To pluck a figure out of midair, imagine that an alien expedition passed our way 100 million years ago. Would any traces remain?

Not many. However, some remnants might still persist. Buried nuclear waste could be detectable even after billions of years. Large-scale mineral exploitation such as quarrying leaves distinctive scars that, in the case of Earth, would eventually become obscured by overlying strata but would still show up in geological surveys. Space probes parked in orbit round the sun might lie dormant yet intact for an immense period of time. Scientists could look for such hallmarks of alien technology on Earth and the moon, in near space, on Mars and among the asteroids.

Another physical object with enormous longevity is DNA. Our bodies contain some genes that have remained little changed in 100 million years. An alien expedition to Earth might have used biotechnology to assist with mineral processing, agriculture or environmental projects. If they modified the genomes of some terrestrial organisms for this purpose, or created their own micro-organisms from scratch, the legacy of this tampering might endure to this day, hidden in the biological record.

Which leads to an even more radical proposal. Life on Earth stores genetic information in DNA. A lot of DNA seems to be junk, however. If aliens, or their robotic surrogates, long ago wanted to leave us a message, they need not have used radio waves. They could have uploaded the data into the junk DNA of terrestrial organisms. It would be the modern equivalent of a message in a bottle, with the message being encoded digitally in nucleic acid and the bottle being a living, replicating cell. (It is possible—scientists today have successfully implanted messages of as many as 100 words into the genome of bacteria.) A systematic search for gerrymandered genomes would be relatively cheap and simple. Incredibly, a handful of (unsuccessful) computer searches have already been made for the tell-tale signs of an alien greeting.
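One simple way to see how a text message could be "encoded digitally in nucleic acid" is to map two bits of data onto each of DNA's four bases. The toy scheme below is an illustration of the general idea only, not the encoding used in the bacterial genome experiments mentioned above.

```python
# Toy scheme for storing a text message in DNA: 2 bits per nucleotide.
# This mapping (00->A, 01->C, 10->G, 11->T) is an arbitrary choice for
# illustration, not the scheme used in any actual experiment.

BASES = "ACGT"

def encode(message: str) -> str:
    """Pack an ASCII string into a DNA-letter string, 4 bases per character."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    """Recover the original ASCII string from the DNA-letter string."""
    bits = "".join(f"{BASES.index(base):02b}" for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

seq = encode("HELLO")
print(seq)          # 20 bases, 4 per character
print(decode(seq))  # HELLO
```

At two bits per base, a 100-word message fits comfortably in a few thousand bases — a tiny fraction of a typical bacterial genome, which is why the message-in-a-bottle idea is at least physically plausible.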

One of the hazards of searching for alien life is an inbuilt anthropocentric bias. There is a natural temptation to fall back on what we would do when trying to guess the motives and activities of aliens. But this is almost certainly misleading. Unless alien communities inevitably destroy themselves, they could last for tens of millions of years or more. It is impossible for us to guess what such immensely long-lived civilizations would be like or how they would affect their environment.

One thing seems clear, though. Biological intelligence is likely to be merely a brief phase in the evolution of intelligence in the universe. Even in our own young species, computers now outperform people in arithmetic and chess, and Google is smarter than any human being on the planet. Soon, most of the mental heavy lifting will be done by designed and distributed systems, and over time those systems will themselves design better systems. Given a very long period of development, information and knowledge processing, networks could merge and in principle expand to cover the entire surface of a moon or planet. If we ever do make contact with E.T., it is unlikely to be a flesh-and-blood being with a big head, but a gigantic throbbing artificial brain. Whether such an entity, inhabiting the highest reaches of the intellectual universe, would have the slightest interest in us is moot.

We have no evidence whatsoever for any life beyond Earth, let alone intelligent life. It could be that life’s origin was a stupendous fluke, and that we are alone after all. But the consequences of discovering that other intelligences exist, or have existed, are so momentous it seems worth taking a penetrating look at how we could uncover evidence for it. While astronomers painstakingly monitor the hiss and crackle of the natural universe for any hint of a signal, scientists of all disciplines should reflect on how alien technology might reveal its existence in other ways, both across the vastness of space, and in our own astronomical backyard.

For many nonscientists, the fascination with the search for extraterrestrial intelligence is its tantalizing promise of wisdom in the sky. Frank Drake has said that the search for alien intelligence is really a search for ourselves, and how we fit into the great cosmic scheme. To know that we are not the only sentient beings in a mysterious and sometimes frightening universe—that an alien community had endured for eons, overcoming multiple problems—would represent a powerful symbol of hope for mankind.

Paul Davies is author of “The Eerie Silence.” He is director of the Beyond Center for Fundamental Concepts in Science at Arizona State University.



For Nuclear Reactors, Metals That Heal Themselves

A nuclear reactor is a tough place for metals. All those neutrons bouncing around wreak havoc with the crystalline structure of steel, tungsten and other metals used in fuel rods and other parts. Over time, the metals can swell and become brittle. (They become radioactive, too, but that’s another story.)

Now researchers at Los Alamos National Laboratory in New Mexico have shown that by altering the microstructure of metals, metallurgists may be able to make reactor parts that are self-healing.

Blas P. Uberuaga, Xian-Ming Bai and colleagues conducted computer simulations of the long-term impact of neutron emissions on copper — not because much copper is used in nuclear plants, but because it is a relatively well-modeled metal. Their findings are published in Science.

When a neutron hits metal, it displaces atoms within the crystal lattice. In a metal with a largely uniform structure, these atoms move to the surface, eventually causing the metal to swell, and the vacancies they leave behind can lead to voids that further weaken the material.

But it is possible to fabricate metals that have a nonuniform structure, with very small crystal grains, or regions of different phase or orientation. When atoms are displaced in this nanocrystalline material, rather than traveling to the surface they migrate to the boundaries between the grains. In their simulations, the researchers found that these atoms can then travel back away from the boundary and, if they find a vacancy, fill it, in effect healing the defect.

Dr. Uberuaga said that the basic principle should apply to other metals and complex alloys, and that if self-healing metals were used in the cladding around nuclear fuel, the fuel might be able to be “burned” at higher rates, with less damage to the metal. But there are many technological hurdles to overcome. “It’s not going to change how we design reactors tomorrow,” he said.

Henry Fountain, New York Times



Scientists Discover Heavy New Element

A team of Russian and American scientists has discovered a new element that has long stood as a missing link among the heaviest bits of atomic matter ever produced. The element, still nameless, appears to point the way toward a brew of still more massive elements with chemical properties no one can predict.

The team produced six atoms of the element by smashing together isotopes of calcium and a radioactive element called berkelium in a particle accelerator about 75 miles north of Moscow on the Volga River, according to a paper that has been accepted for publication at the journal Physical Review Letters.

Data collected by the team seem to support what theorists have long suspected: that as newly created elements become heavier and heavier they will eventually become much more stable and longer-lived than the fleeting bits of artificially produced matter seen so far.

If the trend continues toward a theorized “island of stability” at higher masses, said Dawn A. Shaughnessy, a chemist at Lawrence Livermore National Laboratory in California who is on the team, the work could generate an array of strange new materials with as yet unimagined scientific and practical uses.

By scientific custom, if the latest discovery is confirmed elsewhere, the element will receive an official name and take its place in the periodic table of the elements, the checkerboard that begins with hydrogen, helium and lithium and hangs on the walls of science classrooms and research labs the world over.

“For a chemist, it’s so fundamentally cool” to fill a square in that table, said Dr. Shaughnessy, who was much less forthcoming about what the element might eventually be called. A name based on a laboratory or someone involved in the find is considered one of the highest honors in science. Berkelium, for example, was first synthesized at the University of California, Berkeley.

“We’ve never discussed names because it’s sort of like bad karma,” she said. “It’s like talking about a no-hitter during the no-hitter. We’ve never spoken of it aloud.”

Other researchers were equally circumspect, even when invited to suggest a whimsical temporary moniker for the element. “Naming elements is a serious question, in fact,” said Yuri Oganessian, a nuclear physicist at the Joint Institute for Nuclear Research in Dubna, Russia, and the lead author on the paper. “This takes years.”

Various aspects of the work were done at the particle accelerator in Dubna; the Livermore lab; Oak Ridge National Laboratory and Vanderbilt University in Tennessee; the University of Nevada, Las Vegas; and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia.

For the moment, the discovery will be known as ununseptium, a very unwhimsical Latinate placeholder that refers to the element’s atomic number, 117.

“I think they have an excellent convincing case for the first observation of element 117; most everything has fallen into line very well,” said Walter D. Loveland, a professor of chemistry at Oregon State University who was not involved in the work.

Elements are assigned an atomic number according to the number of protons — comparatively heavy particles with a positive electric charge — in their nuclei. Hydrogen has one proton, helium has two, and uranium has 92, the most in any atom known to occur naturally. Various numbers of charge-free neutrons add to the nuclear mass of atoms but do not affect the atomic number.

As researchers have artificially created heavier and heavier elements, those elements have had briefer and briefer lifetimes — the time it takes for unstable elements to decay by processes like spontaneous fission of the nucleus. Then, as the elements got still heavier, the lifetimes started climbing again, said Joseph Hamilton, a physicist at Vanderbilt who is on the team.

The reason may be that the elements are approaching a theorized “island of stability” at still higher masses, where the lifetimes could go from fractions of a second to days or even years, Dr. Hamilton said.

In recent years, scientists have created several new elements at the Dubna accelerator, called a cyclotron, by smacking calcium into targets containing heavier radioactive elements that are rich in neutrons — a technique developed by Dr. Oganessian.

Because calcium contains 20 protons, simple math indicates scientists would have to fire the calcium at something with 97 protons — berkelium — to produce ununseptium, element 117.
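The proton bookkeeping in that sentence can be checked directly: in this kind of fusion reaction, the atomic number of the product is the sum of the projectile's and target's proton counts. A minimal sketch (the lookup table is hand-written for this example):

```python
# Proton bookkeeping for fusion-based element synthesis: the product's
# atomic number is the sum of the projectile's and target's proton counts.
# The name table is a small hand-written lookup for this illustration.

ATOMIC_NUMBERS = {"calcium": 20, "berkelium": 97}

def product_z(projectile: str, target: str) -> int:
    """Atomic number of the compound nucleus formed by fusing the two."""
    return ATOMIC_NUMBERS[projectile] + ATOMIC_NUMBERS[target]

print(product_z("calcium", "berkelium"))  # 117, i.e. ununseptium
```

The same arithmetic explains why berkelium was the only viable target: no stable, obtainable element pairs with calcium's 20 protons to reach 117 except one with 97.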

Berkelium is mighty hard to come by, but a research nuclear reactor at Oak Ridge produced about 20 milligrams of highly purified berkelium and sent it to Russia, where the substance was bombarded for five months late last year and early this year.

An analysis of decay products from the accelerator indicated that the team had produced a scant six atoms of ununseptium. But that was enough to title the paper, “Synthesis of a new element with atomic number Z=117.”

That is about the closest thing to “Eureka!” that the dry conventions of scientific publication will allow. The new atoms and their decay products displayed the trend toward longer lifetimes seen in past discoveries of such heavy elements. The largest atomic number so far created is 118, also at the Dubna accelerator.

Five of the six new atoms contained 176 neutrons to go with their 117 protons, while one atom contained 177 neutrons, said Jim Roberto, a physicist at Oak Ridge on the project.

Atomic nuclei can be thought of as concentric shells of protons and neutrons. The most stable nuclei occur when the outermost shells are filled. Some theories predict this will happen with 184 neutrons and either 120 or 126 protons: the presumed center of the island of stability.

What happens beyond that point is anyone’s guess, said Kenton Moody, a radiochemist on the team at Livermore. “The question we’re trying to answer is, ‘Does the periodic table come to an end, and if so, where does it end?’ ” Dr. Moody said.

James Glanz, New York Times



A Second Big Bang In Geneva?

The Large Hadron Collider could unlock the secrets of genesis.

Champagne bottles were popped Tuesday in Geneva where the largest science machine ever built finally began to smash subatomic particles together. After 16 years—and an accident that crippled the machine a year and a half ago—the Large Hadron Collider successfully smashed two beams of protons at the astounding energy of 3.5 trillion electron volts apiece. This act produced temperatures not seen since the Big Bang occurred 13.7 billion years ago.

The LHC is colossal. It is a gigantic doughnut, 17 miles in circumference, in which two beams of protons will eventually create energies of 14 trillion electron volts. Yet by nature’s standards the LHC is a pea shooter. For billions of years the earth has been bathed in cosmic rays much more powerful than those created by the LHC.
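The "pea shooter" comparison can be put on a rough numerical footing. The sketch below converts the LHC's 14 TeV design collision energy to joules and compares it with the highest-energy cosmic ray ever observed, around 3×10^20 eV (the 1991 "Oh-My-God" particle is the usual reference point); the figures are assumed round numbers, and the comparison ignores the fixed-target versus collider distinction.

```python
# Rough scale comparison (assumed round figures): the LHC's 14 TeV design
# collision energy versus the most energetic cosmic ray ever recorded,
# roughly 3e20 eV.

EV_TO_JOULES = 1.602e-19  # one electron volt in joules

lhc_ev = 14e12     # 14 TeV design collision energy
cosmic_ev = 3e20   # highest-energy observed cosmic ray, approximate

print(f"LHC collision: {lhc_ev * EV_TO_JOULES:.2e} J")
print(f"Cosmic ray carries roughly {cosmic_ev / lhc_ev:.0e} times more energy")
```

Fourteen TeV is only a couple of microjoules — about the kinetic energy of a flying mosquito — which is why nature's own accelerators remain far ahead of anything built on Earth.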

Despite this great achievement, European taxpayers are asking if this 10 billion euro machine is a waste of money, particularly given the current financial crisis. These skeptics would do well to remember that the LHC could help us understand not only the instant of genesis, but will help unify the four fundamental forces that rule the universe. Each time one of these forces was deciphered it changed the course of human history.

When Sir Isaac Newton worked out the theory of the first force—gravity—in the 17th century, he created the mechanics that laid the groundwork for steam engines and the Industrial Revolution. The Machine Age unleashed humanity from the bondage of subsistence farming, lifting untold numbers from grinding poverty.

When Thomas Edison, James Clerk Maxwell and Michael Faraday helped to decipher and harness the second force—electromagnetism—it eventually gave us TV, radio, radar, computers and the Internet.

When Albert Einstein wrote down E=mc², it helped to unlock the secret of the two nuclear forces (weak and strong), which unraveled the secret of the stars and unleashed nuclear power.

Today the LHC may have the potential to explain the origin of all four fundamental forces—gravity, electromagnetism, and the strong and weak nuclear forces. Physicists believe that at the beginning of time there was a single superforce which unified these fundamental forces. Finding it could be the crowning achievement in the history of science, ending 2,000 years of speculation since the Greeks first wondered what the world is made of. It could answer some of the deepest questions facing us, such as: What happened before the Big Bang? Are there parallel universes? Is time travel possible? And are there other dimensions?

In addition to helping us unlock the mysteries of the universe, the LHC may also create a new scientific elite. These scientists will likely spearhead new industries, creating jobs and perhaps significant wealth in Europe.

It’s sobering to remember that this could have happened in the U.S. Back in the 1980s, President Ronald Reagan pushed to create a Superconducting Supercollider just outside Dallas, Texas. This machine would have been three times larger than the LHC and would have maintained U.S. leadership in advanced science for at least a generation. Congress allotted $1 billion to dig the hole for the Supercollider. Then it got cold feet and cancelled the plans in 1993, spending another $1 billion to fill up the hole. U.S. high-energy physics was set back an entire generation and has never recovered. So today the Europeans can brag about being the world’s leader in advanced physics.

Remember that because of World War II, the cream of European science, perhaps no more than a few hundred people, fled Europe for America. They unleashed the greatest explosion in science the world has ever seen. These Europeans trained new generations of American scientists, people who went on to create radar, microwaves, nuclear power, computers, the Internet, the laser and the space program. They created a scientific establishment that is the envy of the world, a source of profound wealth, and a magnet for young scientists world-wide. U.S. technological superiority and all the high-tech wonders of today can, in some sense, be traced back to this exodus. But such leadership is not a given.

I extend my congratulations to the Europeans; the LHC is their well-earned prize. I only hope that U.S. policy makers are paying close attention to Geneva.

Mr. Kaku, a professor of theoretical physics at City College of New York, is the author of “Physics of the Impossible” (Doubleday, 2008) and host of “Sci Fi Science: Physics of the Impossible,” on the Science Channel.



Ninth Rock From the Sun

It’s time for a new and improved definition of ‘planet’—one that restores Pluto to its former glory.

I’m not a Republican and I’m not a Democrat . . . for years I’ve been a Plutocrat.

—Clyde Tombaugh, discoverer of Pluto

The 2006 vote by a few hundred astronomers to strip Pluto of its planetary status was supposed to end a longstanding dispute. Instead, hundreds of other astronomers signed a petition saying they didn’t recognize the vote and would continue regarding Pluto as a planet in our solar system. Nonastronomers started peppering me with questions like, “Why did you guys do something so stupid?” An entire blog arose to restore Pluto’s planethood. And a 2009 poll found an overwhelming majority of Americans favoring the pro-Pluto side.

The passion for Pluto is understandable. Discovered 80 years ago today by a young astronomer from Kansas named Clyde Tombaugh, Pluto is so distant that sunlight, which takes just eight minutes to reach Earth, requires several hours to strike Pluto. This incredibly distant world—Pluto’s average distance from the Sun is 3.67 billion miles—orbits the Sun only once every 248 years.
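The light-travel comparison is easy to verify with a rough calculation. In the sketch below, the only figure taken from the article is Pluto’s 3.67-billion-mile average distance; the speed of light and the Earth–Sun distance are my own approximate values:

```python
# Back-of-the-envelope check of the article's light-travel times.
SPEED_OF_LIGHT_MPS = 186_282   # miles per second (approximate)
EARTH_SUN_MILES = 93e6         # roughly 1 astronomical unit
PLUTO_SUN_MILES = 3.67e9       # Pluto's average distance, per the article

earth_minutes = EARTH_SUN_MILES / SPEED_OF_LIGHT_MPS / 60
pluto_hours = PLUTO_SUN_MILES / SPEED_OF_LIGHT_MPS / 3600

print(f"Sun to Earth: {earth_minutes:.1f} minutes")  # about 8 minutes
print(f"Sun to Pluto: {pluto_hours:.1f} hours")      # about 5.5 hours
```

The result bears out the article’s figures: roughly eight minutes to Earth, several hours to Pluto.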

It’s little wonder then that Pluto has long inspired explorers. Whereas spacecraft have flown past every planet from Mercury to Neptune, no probe has ventured to Pluto. Thus, no one knows exactly what it looks like. All we know is that Pluto is a frigid world of rock and ice, accompanied by three moons and enveloped in an atmosphere of nitrogen, the same gas that makes up most of our air.

How did astronomers get into a pickle over Pluto? For many years they thought it was larger than it actually is. A 1950 observation put Pluto halfway in size between Mercury and Mars. Thus, Pluto’s diameter seemed to plant it firmly in the planetary firmament. But observations in the 1970s and ’80s revealed that Pluto is smaller. In fact, Pluto is only half the diameter of Mercury. But that’s more than twice the diameter of Ceres, the largest asteroid. So what is Pluto? Planet? Or asteroid?

Then, in 1992, astronomers started to find other objects revolving around the Sun beyond Neptune’s orbit. Most of these objects are much smaller than Pluto. But the discoveries mean Pluto belongs to a belt resembling one that Irish astronomer Kenneth Edgeworth postulated in 1943 and American astronomer Gerard Kuiper in 1951. The Edgeworth-Kuiper belt harbors more than a thousand known objects. All but one are smaller than Pluto.

The controversial 2006 vote demoted Pluto by demanding a proper planet satisfy a newfangled criterion: It must clear its orbital zone around the Sun, which means nothing substantial should cross its path. This criterion Pluto spectacularly fails. For one thing, Pluto belongs to the Edgeworth-Kuiper belt, with its myriad objects. For another, the distant world crosses Neptune’s orbit. From 1979 to 1999, Pluto was the eighth planet from the Sun, not the ninth.

But this definition of a planet runs into major problems. Even if the Earth, whose diameter is more than five times Pluto’s, belonged to the Edgeworth-Kuiper belt, it would not be considered a planet, for it would not have cleared out its orbital zone. Furthermore, astronomers have discovered planets beyond our solar system that—wait for it—cross each other’s orbits, just as Neptune and Pluto do.

Fortunately, a better definition exists: A planet of our solar system is an object that orbits the Sun and has a diameter that equals or exceeds Pluto’s. This definition is clear and simple. It preserves the planethood Pluto has enjoyed since its discovery. And because Pluto is rather large, this definition won’t confer planethood on every last iceball beyond Neptune. So the word planet will continue to connote an important object orbiting the Sun.
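The proposed definition really is simple enough to state as a one-line rule. The sketch below is illustrative only; the diameters are rough figures in miles that I have supplied, not values from the article:

```python
# Croswell's proposed rule as a toy predicate: an object orbiting the Sun
# is a planet if its diameter equals or exceeds Pluto's.
PLUTO_DIAMETER_MI = 1_430  # approximate

def is_planet(diameter_mi: float) -> bool:
    return diameter_mi >= PLUTO_DIAMETER_MI

# Approximate diameters in miles (my figures, for illustration).
bodies = {"Mercury": 3_032, "Ceres": 590, "Pluto": 1_430, "Eris": 1_445}
for name, diameter in bodies.items():
    print(name, is_planet(diameter))  # Ceres fails; the rest qualify
```

Under this rule Ceres stays an asteroid while Pluto and the slightly larger Eris both count, which is exactly the outcome the article argues for.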

Yes, you read that right. Pluto is rather large. It’s the 10th largest object that goes around the Sun. Pluto looks dim primarily because of its great distance. Thus, the sunlight striking it is weak, and this light gets further attenuated during the long trek back to Earth. But put Pluto in place of Mars and, when closest to Earth, Pluto would outshine every star at night.

This definition means the Sun has 10 known planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto and Eris. Discovered in 2005, Eris is the farthest object ever seen in our solar system. It’s currently nine billion miles from the Sun, three times farther than Pluto. And it’s slightly bigger than Pluto. Like Earth, Eris even has a moon.

In 2015, a spacecraft will fly past Pluto and resolve many of the puzzles surrounding a world that has been mysterious since its discovery. But Eris ensures that the mysteries of another distant planet will beckon, inspiring future generations just as Pluto has inspired ours.

Mr. Croswell, an astronomer, is the author of “Ten Worlds” (Boyds Mills Press, 2007) and “The Lives of Stars” (Boyds Mills Press, 2009).



Feynman and the Futurists

On Dec. 29, 1959, Richard P. Feynman gave an after-dinner talk at an annual American Physical Society meeting in Pasadena, Calif. Feynman was not the public figure he would later become—he had not yet received a Nobel Prize, unraveled the cause of the Challenger accident, written witty books of popular science, or been the subject of biographies, documentaries and even a play starring Alan Alda. But the 41-year-old was already respected by fellow physicists for his originality, his crackling intellect, and his roguish charm.

Physicist and writer Richard Feynman in 1959.

The announced title of Feynman’s lecture, “There’s Plenty of Room at the Bottom,” mystified the attendees. One later told science writer Ed Regis that the puzzled physicists in the room feared Feynman meant that “there are plenty of lousy jobs in physics.”

Feynman said that he really wanted to discuss “the problem of manipulating and controlling things on a small scale.” By this he meant not mere miniaturization but something much more extreme. “As far as I can see,” Feynman said, the principles of physics “do not speak against the possibility of maneuvering things atom by atom.” In fact, he argued, it is “a development which I think cannot be avoided.” The physicist spoke of storing all the information in all the world’s books on “the barest piece of dust that can be made out by the human eye.” He imagined shrinking computers and medical devices, and developing new techniques of manufacturing and mass production. In short, a half-century ago he anticipated what we now call nanotechnology—the manipulation of matter at the level of billionths of a meter.

Some historians depict the speech as the start of this now-burgeoning field of research. Yet Feynman didn’t use the word “nanotechnology” himself, and his lecture went for years almost entirely unmentioned in the scientific literature. Not until the 1980s did nanotechnology researchers begin regularly citing Feynman’s lecture. So why, then, does one encyclopedia call it “the impetus for nanotechnology”? Why would one of Feynman’s biographers claim that nanotechnology researchers think of Feynman “as their spiritual father”?

The story of how his talk was forgotten and then, decades later, inserted into the history of nanotechnology is worth understanding less because of what it tells us about the past than because of what it hints about the future, a future in which billions of dollars in research and development funds are at stake.

Much of the work that now goes under the rubric of nanotechnology is essentially a specialized form of materials science. In the years ahead, it is expected to result in new medical treatments and diagnostic tools, ultraefficient water-filtration systems, strong and lightweight materials for military armor, and breakthroughs in energy, computing and medicine. Meanwhile, hundreds of consumer products using (or at least claiming to use) nanomaterials or nanoparticles went on the market in the past decade, including paints and cosmetics, stain-resistant garments, and bacteria-battling washing machines and food containers.

The most prominent scientists involved in this mainstream version of nanotechnology have admitted that Feynman’s “Plenty of Room” talk had no influence on their work. Christopher Toumey, a University of South Carolina cultural anthropologist, interviewed several of nanotech’s biggest names, including Nobel laureates; they uniformly told him that Feynman’s lecture had no bearing on their research, and several said they had never even read it.

But there is another kind of nanotechnology, one associated with much more hype. First described in the 1980s by K. Eric Drexler, this vision involves building things “from the bottom up” through molecular manufacturing. It was Mr. Drexler who first brought the term “nanotechnology” to a wide audience, most prominently with his 1986 book “Engines of Creation.” And it is Mr. Drexler’s interpretation that has captured the public imagination, as witness the novels, movies and video games that name-drop nanotechnology with the same casual hopefulness that the comic books of the 1960s mentioned the mysteries of radiation.

Using the theoretical techniques Mr. Drexler outlined, personal desktop nanofactories the size of a microwave oven could one day be programmed to convert raw materials into gleamingly perfect complex objects such as laptop computers. More radically, nanoscale machines might replace or repair damaged cells in your body, staving off aging—or they could be employed in terrible new weapons. In short, if mainstream nanotechnology promises to make our lives easier, Mr. Drexler’s version aims to remake the world.

These two understandings of nanotechnology are regularly conflated in the press—a fact that vexes mainstream researchers, in part because Mr. Drexler’s more ambitious take on nanotech is cherished by several colorful futurist movements (transhumanism, cryonics, and so forth). Worse, for all the fantastical speculation that Drexlerian nanotechnology invites, it has also driven critics, like the late novelist Michael Crichton and the software entrepreneur Bill Joy, to warn of nanotech nightmares.

Hoping to dissociate their nanotechnology work from dystopian scenarios and fringe futurists, some prominent mainstream researchers have taken to belittling Mr. Drexler and his theories. And that is where Feynman re-enters the story: Mr. Drexler regularly invokes the 1959 lecture, which broadly corresponds with his own thinking. As he told Mr. Regis, the science writer: “It’s kind of useful to have a Richard Feynman to point to as someone who stated some of the core conclusions. You can say to skeptics, ‘Hey, argue with him!'” It is thanks to Mr. Drexler that we remember Feynman’s lecture as crucial to nanotechnology, since Mr. Drexler has long used Feynman’s reputation as a shield for his own.

If this dispute over nano-nomenclature only involved some sniping scientists and a few historians watching over a tiny corner of Feynman’s legacy, it would be of little consequence. But hundreds of companies and universities are teeming with nanotech researchers, and the U.S. government has been pouring billions of dollars into its multiagency National Nanotechnology Initiative.

So far, none of that federal R&D funding has gone toward the kind of nanotechnology that Mr. Drexler proposed, not even toward the basic exploratory experiments that the National Research Council called for in 2006. If Mr. Drexler’s revolutionary vision of nanotechnology is feasible, we should pursue it for its potential for good, while mindful of the dangers it may pose to human nature and society. And if his ideas are fundamentally flawed, we should find out—and establish just how much room there is at the bottom after all.

Mr. Keiper is the editor of The New Atlantis and a fellow at the Ethics and Public Policy Center.



A Dark Matter Breakthrough?

New evidence of the invisible matter that could make up 90% of the universe.

In early December, the Cryogenic Dark Matter Search (CDMS) experiment located in the deep Soudan Mine in northern Minnesota leaked a tantalizing hint that it may have discovered something remarkable. The experiment is designed to directly detect new elementary particles that might make up the dark matter known to dominate our own Milky Way galaxy, all galaxies, and indeed all mass in the universe—so news of a possible breakthrough was thrilling.

The actual result? Two pulses were detected over the course of almost a year that might have been due to dark matter, CDMS announced on Dec. 17. However, there is a 25% chance that the pulses were actually caused by background radioactivity in and around the detector.

Physicists remain fascinated by the possibility that the events at CDMS, reported on the back pages of the world’s newspapers, might nevertheless be real. If they are, they will represent the culmination of one of the most incredible detective stories in the history of science.

Beginning in the 1970s, evidence began to accumulate that there was much more mass out there than meets the eye. Scientists, mostly by observing the speed of rotation of our galaxy, estimated that there was perhaps 10 times as much dark matter as visible material.

At around the same time, independent computer calculations following the possible gravitational formation of galaxies supported this idea. The calculations suggested that only some new type of material that didn’t interact as normal matter does could account for the structures we see.

Meanwhile, in the completely separate field of elementary particle physics, my colleagues and I had concluded that in order to understand what we see, it is quite likely that a host of new elementary particles may exist at a scale beyond what accelerators at the time could detect. This is one of the reasons there is such excitement about the new Large Hadron Collider in Geneva, Switzerland. Last month, it finally began to produce collisions, and it might eventually directly produce these new particles.

Theorists who had proposed the existence of such particles realized that they could have been produced during the earliest moments of the fiery Big Bang in numbers that could account for the inferred abundance of dark matter today. Moreover, these new particles would have exactly the properties needed for such material. They would interact so weakly with normal matter that they could go through the Earth without a single interaction.

Emboldened by all of these arguments, a brave set of experimentalists began to devise techniques by which they might observe such particles. This required building detectors deep underground, far from the reach of most cosmic rays that would overwhelm any sensitive detector, and in clean rooms with no radioactivity that could produce a false signal.

So when the physics community heard rumors that one of these experiments had detected something, we all waited with eager anticipation. A convincing observation would vindicate almost half a century of carefully developed, if fragile, arguments suggesting a whole new invisible world waiting to be discovered.

For the theorist working at his desk alone at night, it seems almost unfathomable that nature might actually obey the delicate theories you develop on pieces of paper. This is especially true when the theories involve ideas from so many different areas of science and require leaps of imagination.

Alas, to celebrate would be premature: The reported results are intriguing, but less than convincing. Yet if the two pulses observed last week in Minnesota are followed by more signals as bigger detectors turn on in the coming year or two, it will provide serious vindication of the power of human imagination. Combined with rigorous logical inference and technological wizardry—all the things that make science worth celebrating—scientists’ creativity will have uncovered hidden worlds that a century ago could not have been conceived.

If, on the other hand, the events turn out to have been mere background radioactivity, physicists will not give up. It will only force us to be more clever and more energetic as we try to unravel nature’s mysteries.

Mr. Krauss is director of the Origins Institute at Arizona State University, and a theoretical physicist who has been involved in the search for dark matter for 30 years. His newest book, “Quantum Man,” will appear in 2010.



Solving a Tonal Mystery in Orbit Around Saturn

The Cassini spacecraft took this photo of Saturn’s moon Iapetus in September 2007.

Researchers have solved what may be the oldest mystery in planetary science: the two-tone surface of Saturn’s moon Iapetus.

The odd feature — the moon’s trailing side is about 10 times brighter than its leading side — has been a mystery since it was first observed by Giovanni Cassini in 1671. In two papers published online by Science, researchers have unraveled the mystery, using images and data from instruments aboard the spacecraft named for Cassini.

The studies confirm an earlier idea that dust, most likely from another of Saturn’s moons, falls on the leading side of Iapetus as it orbits the planet.

“It’s just like a motorcyclist, who only gets the flies on the leading side of the helmet rather than the trailing side,” said Tillmann Denk of the Free University of Berlin, an author with John R. Spencer of the Southwest Research Institute of one of the papers and lead author of the other.

But the pattern of the surface features — the dark area extends to the trailing side at the equator, for example — is not fully explained by the deposition of dust. Rather, the researchers say, the reason has a lot to do with the moon’s rotation on its axis, which takes 80 Earth days.

Such a slow rotation (“midday” lasts for a couple of weeks) allows the distant Sun to warm the dark dust-covered areas enough that water ice becomes vapor.

The vapor migrates elsewhere, freezing to ice again when it reaches colder areas. The areas where the ice was lost become darker, and those that gained ice become brighter.

Henry Fountain, New York Times



Quantum Leap

This biography is a gift. It is both wonderfully written (certainly not a given in the category Accessible Biographies of Mathematical Physicists) and a thought-provoking meditation on human achievement, limitations and the relations between the two. Here we find a man with an almost miraculous apprehension of the structure of the physical world, coupled with gentle incomprehension of that less logical, messier world, the world of other people.

At Cambridge University in 1930, Subrahmanyan Chandrasekhar took a class in quantum mechanics from the 28-year-old Paul Dirac. Three years later, Dirac would become the youngest theoretician to receive the Nobel Prize in Physics up to that time (50 years after that, Chandrasekhar would become one of the older ones). Chandrasekhar described Dirac as a “lean, meek, shy young ‘Fellow’ ” (i.e., of the Royal Society) “who goes slyly along the streets. He walks quite close to the walls (like a thief!), and is not at all healthy.” Dirac’s class — which Chandrasekhar took in its entirety four times, even though Dirac taught it by repeating material from his recently published textbook word for word — was “just like a piece of music you want to hear over and over again.”

Dirac is the main character of a thousand humorous tales told among physicists for his monosyllabic approach to conversation and his innocent, relentless application of logic to everything. Listening to a Dirac story is like slipping into an alternate universe: Dirac reads “Crime and Punishment” and reports it “nice” but notes that in one place the sun rises two times in a day; Dirac eats his dinner in silence until his companion asks, “Have you been to the theater or cinema this week?” and Dirac replies, “Why do you wish to know?”

His work was as sui generis as his social skills. “The great papers of the other quantum pioneers were more ragged, less perfectly formed than Dirac’s,” explained Freeman Dyson, who took Dirac’s course as a precocious 19-year-old. Dirac’s discoveries “were like exquisitely carved marble statues falling out of the sky, one after another. He seemed to be able to conjure laws of nature from pure thought.” (Most notably, Dirac predicted the existence of antimatter in 1928 because his just discovered relativistic electron equation required it.) “It was this purity that made him unique.”

In 1990, Helge Kragh wrote “Dirac: A Scientific Biography,” a useful resource comprising physics, a little history and a dessert of Dirac stories in a chapter entitled “The Purest Soul.” And indeed, what else besides quantum mechanics and amusing anecdotes did this great and single-minded physicist’s life hold?

“The purest soul” is a quotation about Dirac from Niels Bohr, as is Graham Farmelo’s title. (“Dirac is the strangest man,” Bohr said, “who ever visited my institute.”) But purity and strangeness were not the whole story. Kragh’s book offers a collage of a brilliant and peculiar man seen from the outside; Farmelo’s is a tapestry, and he provides glimpses of the inside.

A senior research fellow at the Science Museum in London, Farmelo gives us the texture of Dirac’s life, much of it spent outdoors — from long Sunday walks as a young man, looking like “the bridegroom in an Italian wedding photograph,” “dressed in the suit he wore all week, his hands joined behind his back, both feet pointing outwards as he made his way around the countryside in his metronomic stride”; to late-life canoeing trips with Leopold Halpern, a physicist even stranger than he, “through forests of sassafras and American beech trees, draped with Spanish moss. The alligators made scarcely a sound: the silence was broken only by the rhythmic sloshing of the paddles, the cry of a circling osprey, the occasional shuffling of wind passing through shoreline gaps in the forest.” (After lunch, they swam and paddled back, “scarcely exchanging a word.”)

We follow Dirac from his pinched and chilly childhood in Bristol (a few blocks away from the two-years-younger Archie Leach, a.k.a. Cary Grant); through his discovery, visiting the Bohrs in Copenhagen, of what a happy family was like; his fiercely loyal friendship with Werner Heisenberg; his joyful beach honeymoon, still in a three-piece suit; his careful fatherhood (constructing for his daughters’ cat a door wider than its whiskers); to his death in Florida — “a place where recreational walkers are regarded as perverse” — in 1984.

The science writing in “The Strangest Man” isn’t glib, but neither does it require problem-solving on the part of the reader. In most cases, Farmelo presents the technical matter clearly and efficiently, and in all cases — one of the great joys of the book — Dirac’s scientific insights are placed within the circumstances in which they were born: e.g., the “sweltering July” of 1926 when Dirac, sitting at his college desk, produced his paper on what became Fermi-Dirac statistics.

In a prologue, Farmelo describes a visit to the elderly Dirac paid by his biologist colleague Kurt Hofer. Through the eyes of Hofer, we see Dirac suddenly break out of monosyllables to talk for two hours with increasing vehemence about his monstrous father. This represents the author’s careful decision to keep the tale Dirac told about his childhood separate from — even as it overshadows — the rest of the book, and it ends with Hofer’s thoughts, not Dirac’s: “ ‘I simply could not conceive of any childhood as dreadful as Dirac’s.’ . . . Could it be that Dirac — usually as literal-minded as a computer — was exaggerating? Hofer could not help asking himself, over and again: ‘Why was Paul so bitter, so obsessed with his father?’ ”

The conflict between this prologue (which gives ample reason for Dirac to be bitter about his father) and the seemingly warm family life that emerges in the first chapter casts a tension over the rest of the book very similar to that felt when reading a mystery. And as in a mystery, the penultimate chapter sheds new light. There Farmelo delves into a sensitive exploration of the possibility that Dirac was autistic, and of the ways in which his lack of facility in reading the emotions of others affected their perceptions of him and his perceptions of them. The emphasis on Dirac’s childhood as a story — one Farmelo (along with me) believes to be true — usefully reinforces the importance of point of view.

In a memorable episode, Dirac and his wife visit their closest friends, Peter and Anna Kapitza, in Russia. In 1934, the long arm of the Soviet state had wrenched Kapitza, despite his devoted long-distance fellow-traveling, away from his lab at Cambridge under Ernest Rutherford and back into the Soviet Union. In 1937 the friends reunited at the Kapitzas’ summer house in the piney woods of Bolshevo, “with wild strawberries ripe for gathering and a fast-flowing river close by.” They arrived only “days before Stalin authorized the torture of suspected enemies of the people,” Farmelo writes. “On the roads around Bolshevo, some of the trucks marked ‘Meat’ and ‘Vegetables’ hid prisoners on their way to be shot and buried in the forests to the north of the city which Dirac admired through his binoculars.”

Farmelo handles such scenes with a refreshing, cleareyed understanding of how complicated the world actually is. Dirac did not — probably could not — know what the Soviet Union really was; he also could not know who his father really was, and his father could not really know him. These complexities and unresolvably cubist perspectives make, paradoxically, for the most satisfying and memorable biography I have read in years.

Louisa Gilder is the author of “The Age of Entanglement: When Quantum Physics Was Reborn,” which will be published in paperback in November.



Science and the Sublime

In this big two-hearted river of a book, the twin energies of scientific curiosity and poetic invention pulsate on every page. Richard Holmes, the pre-eminent biographer of the Romantic generation and the author of intensely intimate lives of Shelley and Coleridge, now turns his attention to what Coleridge called the “second scientific revolution,” when British scientists circa 1800 made electrifying discoveries to rival those of Newton and Galileo. In Holmes’s view, “wonder”-driven figures like the astronomer William Herschel, the chemist Humphry Davy and the explorer Joseph Banks brought “a new imaginative intensity and excitement to scientific work” and “produced a new vision which has rightly been called Romantic science.”

A major theme of Holmes’s intricately plotted “relay race of scientific stories” is the double-edged promise of science, the sublime “beauty and terror” of his subtitle. Both played a role in the great balloon craze that swept across Europe after 1783, when the Montgolfier brothers sent a sheep, a duck and a rooster over the rooftops of Versailles, held aloft by nothing more substantial than “a cloud in a paper bag.” “What’s the use of a balloon?” someone asked Benjamin Franklin, who witnessed the launching from the window of his carriage. “What’s the use of a newborn baby?” he replied. The Gothic novelist Horace Walpole was less enthusiastic, fearing that balloons would be “converted into new engines of destruction to the human race — as is so often the case of refinements or discoveries in Science.”

The British, more advanced in astronomy, could afford to scoff at lowly French ballooning. William Herschel, a self-taught German immigrant with “the courage, the wonder and the imagination of a refugee,” supported himself and his hard-working assistant, his sister Caroline, by teaching music in Bath. The two spent endless hours at the enormous telescopes that Herschel constructed, rubbing raw onions to warm their hands and scanning the night sky for unfamiliar stars as musicians might “sight-read” a score. The reward for such perseverance was spectacular: Herschel discovered the first new planet to be identified in more than a thousand years.

Holmes describes how the myth of this “Eureka moment,” so central to the Romantic notion of scientific discovery, doesn’t quite match the prolonged discussion concerning the precise nature of the tail-less “comet” that Herschel had discerned. It was Keats, in a famous sonnet, who compared the sudden sense of expanded horizons he felt in reading Chapman’s Elizabethan translation of Homer to Herschel’s presumed elation at the sight of Uranus: “Then felt I like some watcher of the skies / When a new planet swims into his ken.” Holmes notes the “brilliantly evocative” choice of the verb “swims,” as though the planet is “some unknown, luminous creature being born out of a mysterious ocean of stars.” As a medical student conversant with scientific discourse, Keats may also have known that telescopes can give the impression of objects viewed “through a rippling water surface.”

Though Romanticism, as Holmes says, is often presumed to be “hostile to science,” the Romantic poets seem to have been positively giddy — sometimes literally so — with scientific enthusiasm. Coleridge claimed he wasn’t much affected by Herschel’s discoveries, since as a child he had been “habituated to the Vast” by fairy tales. It was the second great Romantic field of science that lighted a fire in Coleridge’s mind. “I shall attack Chemistry, like a Shark,” Coleridge announced, and invited the celebrated scientist Humphry Davy, who also wrote poetry, to set up a laboratory in the Lake District.

Coleridge wrote that he attended Davy’s famous lectures on the mysteries of electricity and other chemical processes “to enlarge my stock of metaphors.” But he was also, predictably, drawn to Davy’s notorious experiments with nitrous oxide, or laughing gas. “The objects around me,” Davy reported after inhaling deeply, “became dazzling, and my hearing more acute.” Coleridge, an opium addict who coined the word “psychosomatic,” compared the pleasurable effects of inhalation to the sensation of “returning from a walk in the snow into a warm room.” Davy passed out frequently while under the influence, but strangely, as Holmes notes, failed to pursue possible applications in anesthesia.

In assessing the quality of mind that poets and scientists of the Romantic generation had in common, Holmes stresses moral hope for human betterment. Coleridge was convinced that science was imbued with “the passion of Hope,” and was thus “poetical.” Holmes finds in Davy’s rapid and systematic invention of a safety lamp for English miners, one that would not ignite methane, a perfect example of such Romantic hope enacted. Byron celebrated “Davy’s lantern, by which coals / Are safely mined for,” but his Venetian mistress wondered whether Davy, who was visiting, might “give me something to dye my eyebrows black.”

Yet it is in his vivid and visceral accounts of the Romantic explorers Joseph Banks and Mungo Park, whose voyages were both exterior and interior, that Holmes is best able to unite scientific and poetic “wonder.” Wordsworth had imagined Newton “voyaging through strange seas of Thought, alone.” When Banks accompanied Captain Cook to Tahiti and witnessed exotic practices like surfing and tattooing and various erotic rites, he returned to England a changed man; as president of the Royal Society, he steadily encouraged others, like Park, to venture into the unknown.

“His heart,” Holmes writes of Park, “was a terra incognita quite as mysterious as the interior of Africa.” At one low point in his African travels in search of Timbuktu, alone and naked and 500 miles from the nearest European settlement, Park noticed a piece of moss “not larger than the top of one of my fingers” pushing up through the hard dirt. “At this moment, painful as my reflections were, the extraordinary beauty of a small moss in fructification irresistibly caught my eye,” he wrote, sounding a great deal like the Ancient Mariner. “I could not contemplate the delicate conformation of its roots, leaves and capsula, without admiration.”

For Holmes, the “age of wonder” draws to a close with Darwin’s voyage aboard the Beagle in 1831, partly inspired by those earlier Romantic voyages. “With any luck,” Holmes writes wistfully, “we have not yet quite outgrown it.” Still, it’s hard to read his luminous and horizon-expanding “Age of Wonder” without feeling some sense of diminution in our own imaginatively circumscribed times. “To us, their less tried successors, they appear magnified,” as Joseph Conrad, one of Park’s admirers, wrote in “Lord Jim,” “pushing out into the unknown in obedience to an inward voice, to an impulse beating in the blood, to a dream of the future. They were wonderful; and it must be owned they were ready for the wonderful.”

Christopher Benfey is the Mellon professor of English at Mount Holyoke College. His books include “A Summer of Hummingbirds” and an edition of Lafcadio Hearn’s “American Writings” for the Library of America.

See also:

‘The Age of Wonder’


The Age of Wonder is a relay race of scientific stories, which link together to explore a larger historical narrative. This is my account of the second scientific revolution, which swept through Britain at the end of the 18th century, and produced a new vision which has rightly been called Romantic science. [See Sources for the recent work of Golinski; Cunningham and Jardine; Fulford, Kitson, and Lee; Ruston et al., since 1990, who all use the term “Romantic Science”]

Romanticism as a cultural force is generally regarded as intensely hostile to science, its ideal of subjectivity eternally opposed to that of scientific objectivity. But I do not believe this was always the case, or that the terms are so mutually exclusive. The notion of wonder seems to be something that once united them, and can still do so. In effect there is Romantic science in the same sense there is Romantic poetry, and often for the same enduring reasons.

The first scientific revolution of the 17th century is familiarly associated with the names of Newton, Hooke, Locke and Descartes, and the almost simultaneous foundations of the Royal Society in London and the Académie des Sciences in Paris. Its existence has long been accepted, and the biographies of its leading figures are well known.

But this second revolution was something different. The first person who referred to a “second scientific revolution” was probably the poet Coleridge in his Philosophical Lectures of 1819. [See also The Friend, 1819; RH Coleridge DR p482; pp490-2] It was inspired primarily by a sudden series of breakthroughs in the fields of astronomy and chemistry. It was a movement that grew out of 18th-century Enlightenment rationalism, but largely transformed it, by bringing a new imaginative intensity and excitement to scientific work. It was driven by a common ideal of intense, even reckless, personal commitment to discovery.

It was also a movement of transition. It flourished for a relatively brief time, perhaps two generations, but produced long-lasting consequences – raising hopes and questions – that are still with us today. Romantic science can be dated roughly, and certainly symbolically, between two celebrated voyages of exploration. These were Captain Cook’s first round-the-world expedition aboard the Endeavour, begun in 1768, and Charles Darwin’s voyage to the Galapagos islands aboard the Beagle, begun in 1831. This was the time I have called the Age of Wonder, and with any luck we have not yet quite outgrown it.

The idea of the exploratory voyage, often lonely and perilous, is in one form or another a central and defining metaphor of Romantic science. That is how William Wordsworth brilliantly transformed the great Enlightenment image of Sir Isaac Newton into a Romantic one. As a university student in the 1780s Wordsworth had often contemplated the full-size marble statue of Newton, with his severely close-cropped hair, that still dominates the stone-flagged entrance hall to the chapel of Trinity College, Cambridge. As Wordsworth originally put it, he could see it a few yards off from his bedroom window, over the brick wall of St John’s College:

“The Antechapel, where the Statue stood
Of Newton, with his Prism and silent Face.”

Sometime after 1805, Wordsworth animated this static figure, so monumentally fixed in his assured religious setting. Newton became a haunted and restless Romantic traveller amidst the stars:

“And from my pillow, looking forth by light
Of moon or favouring stars, I could behold
The Antechapel where the Statue stood
Of Newton, with his prism and his silent face,
The marble index of a Mind for ever
Voyaging through strange seas of Thought, alone.”
[The Prelude, 1850, Book 3, lines 58-64]

Around such a vision Romantic science created, or crystallised, several other crucial conceptions – or misconceptions – which are still with us. First, the dazzling idea of the solitary scientific “genius”, thirsting and reckless for knowledge, for its own sake and perhaps at any cost. This neo-Faustian idea, celebrated by many of the imaginative writers of the period including Goethe and Mary Shelley, is certainly one of the great, ambiguous creations of Romantic science which we have all inherited.

Closely connected with this is the idea of the Eureka moment, the intuitive inspired instant of invention or discovery, for which no amount of preparation or preliminary analysis can really prepare. Originally the cry of the Greek philosopher Archimedes, this became the “fire from heaven” of Romanticism, the other true mark of scientific genius, which also allied it very closely to poetic inspiration and creativity. Romantic science would seek to identify such moments of singular, almost mystical vision in its own history. One of its first and most influential examples was to become the story of the solitary brooding Newton in his orchard, seeing an apple fall and “suddenly” having his vision of universal gravity. This story was never told by Newton at the time, but only began to emerge in the mid-18th century, in a series of memoirs and reminiscences.

The notion of an infinite, mysterious Nature, waiting to be discovered or seduced into revealing all her secrets, was widely held. Scientific instruments played an increasingly important role in this process of revelation, allowing man not merely to extend his senses passively – using the telescope, the microscope, the barometer – but to intervene actively, using the voltaic battery, the electrical generator, the scalpel or the air pump. Even the Montgolfier balloon could be seen as an instrument of discovery, or indeed of seduction.

There was, too, a subtle reaction against the idea of a purely mechanistic universe, the mathematical world of Newtonian physics, the hard material world of objects and impacts. These doubts, expressed especially in Germany, favoured a softer “dynamic” science of invisible powers and mysterious energies, of fluidity and transformations, of growth and organic change. This is one of the reasons that the study of electricity (and chemistry in general) became the signature science of the period; though astronomy itself, once the exemplary science of the Enlightenment, would also be changed by Romantic cosmology. [Eg Coleridge again, see RH DR p548-9]

The ideal of a pure, “disinterested” science, independent of political ideology and even religious doctrine, also began slowly to emerge. The emphasis on a secular, humanist (even atheist) body of knowledge, dedicated to the “benefit of all mankind”, was particularly strong in revolutionary France. This would soon involve Romantic science in new kinds of controversy: for instance, could it be an instrument of the state, in the case of inventing weapons of war? Or a handmaiden of the Church, supporting the widely held view of “Natural theology”, in which science reveals evidence of a divine Creation or intelligent design?

With these went the new notion of a popular science, a people’s science. The scientific revolution of the late 17th century had promulgated an essentially private, elitist, specialist form of knowledge. Its lingua franca was Latin, and its common currency mathematics. Its audience was a small (if international) circle of scholars and savants. Romantic science, on the other hand, had a new commitment to explain, to educate, to communicate to a general public.

This became the first great age of the public scientific lecture, the laboratory demonstration, and the introductory textbook, often written by women. It was the age when science began to be taught to children, and the “experimental method” became the basis of a new, secular philosophy of life, in which the infinite wonders of creation (whether divine or not) were increasingly valued for their own sake. It was a science that, for the first time, generated sustained public debates, such as the great Regency controversy over “Vitalism”: whether there was such a thing as a life force or principle, or whether men and women (or animals) had souls.

Finally, it was the age which broke the elite monopoly of the Royal Society, and saw the foundation of scores of new scientific institutions, mechanics institutes and “philosophical” societies, most notably the Royal Institution in Albemarle Street in 1799, the Geological Society in 1807, the Astronomical Society in 1820, and the British Association for the Advancement of Science in 1831.

Much of this transition from Enlightenment to Romantic science is expressed in the iconic paintings of Joseph Wright of Derby. Closely attached to the Lunar Society, and the friend of Erasmus Darwin and Joseph Priestley, Wright became a dramatic painter of experimental and laboratory scenes, which reinterpreted late 18th-century Enlightenment science as a series of mysterious, romantic moments of revelation and vision. The calm glowing light of reason is surrounded by the intense, psychological chiaroscuro associated with Georges de la Tour. This is most evident in his famous series of scientific demonstration scenes, painted at the height of his career: “The Orrery” (1766, Derby City Museum and book cover), “The Air Pump” (1767, National Gallery, London), and “The Alchemist” (1768, Derby City Museum).


Was all this such a good thing? There is a counter view that sees Romantic science as a disastrous betrayal of the benign Enlightenment view of Nature, modest, respectful and pious. It replaced it with a fatal commitment to a blind, positivist view of human progress driven by personal ambition, technology and material greed. We have certainly inherited this dilemma in the Western world. Romantic science was originally the product of a revolutionary age. The wave of political optimism that carried first the American Declaration of Independence, and then the French Revolution, also inspired Romantic science with a progressive secular idealism, carrying strong radical and republican overtones. But under the patriotic demands and pressures of the Napoleonic Wars its free spirit was curbed, tamed and professionalized. An open, radical science became institutionalized, conservative and doctrinaire.

In the process the international scientific co-operation of 18th century Europe was changed into intense national rivalries. This was especially so between public “men of science” in Britain and France. Co-operative science became competitive. The secular ideals of Enlightenment science, with its notions of disinterestedness and universal human benevolence, became corrupted by Imperial ambitions. Considerations of commercial, religious missionary and national interest fatally compromised pure science. Exploration became colonization.

Worse still, the open imaginative spirit of the Enlightenment, celebrated by poets and writers, was increasingly displaced by an inhuman science, analytical, industrial, invasive, which damaged and exploited both Nature and the human soul. It could not be trusted because, in John Keats’s words, it would “unweave the rainbow”. It drove a profound split between the artistic and scientific response to the world.

Finally, the figure of the inspired, unworldly scientific genius shut away in his lonely laboratory or observatory, following his dreams like Isaac Newton, was changed into a more ambiguous symbol. He was now seen as someone intoxicated by worldly power, or driven by mad ambition, like Dr Victor Frankenstein. The Romantic scientist was a danger to society, not a benefactor.


The Age of Wonder asks which version of Romantic science in Britain is really true; or more true. Yet in the end it remains a narrative, a piece of biographical story telling. It aims to capture the inner life of science, its impact on the heart as well as on the mind. In the broadest sense it aims to present scientific passion, so much of which is summed up in that childlike, but infinitely complex word, wonder. Plato had argued that the notion of “wonder” was central to all philosophical thought: “In Wonder all Philosophy began: in Wonder it ends….But the first Wonder is the Offspring of Ignorance; the last is the Parent of Adoration.”

Wonder, in other words, goes through various stages, evolving both with age and with knowledge, but retaining an irreducible fire and spontaneity. This seems to be the implication of Wordsworth’s famous lyric of 1802, which was inspired not by Newton’s prism, but by Nature’s:

“My heart leaps up when I behold
A rainbow in the sky;
So was it when my life began;
So is it now I am a man;
So be it when I shall grow old,
Or let me die!….”

[Plato’s wonder as interpreted by Coleridge in Aids to Reflection, 1825, “Spiritual Aphorism 9”, p236; see RH DR p540]

This book is centered on two scientific lives, those of the astronomer William Herschel and the chemist Humphry Davy. Their discoveries dominate the period, yet they offer two almost diametrically opposed versions of the Romantic “scientist”, a term not coined until 1833, after they were both dead. It also gives an account of their assistants and protégés, who eventually became much more than that, and handed on the flame into the very different world of professional Victorian science. But it draws in many other lives (see Appendix “Cast List”), and it is interrupted by different moments of scientific endeavour and high adventure so characteristic of the Romantic spirit: ballooning, exploring, soul-hunting. These were all part of the great journey.

It is also held together by a kind of chorus figure, a scientific Virgil, to whom (it must be admitted) I have become greatly attached. It is no coincidence that he began his career as a young and naïve scientific traveller and secret journal-keeper. However he ended it as the longest-serving, most experienced and most domineering President of the Royal Society: the botanist, diplomat and éminence grise Sir Joseph Banks. As a young man Banks sailed with Captain Cook round the world, setting out in 1768 on that perilous three-year voyage into the unknown. This voyage may count as one of the earliest distinctive exploits of Romantic science, not least because it involved a long stay in a beautiful but ambiguous version of Paradise – Otaheite, or the South Pacific island of Tahiti.

Excerpted by arrangement with Random House from “The Age of Wonder” by Richard Holmes.

Sons of Atom

The first quarter of the 20th century produced two theories, relativity and quantum mechanics, that are still changing our universe.

With special relativity, Albert Einstein upended the long-understood meaning of time, space and simultaneity. With general relativity, he swapped Newton’s law of gravity based on force for curved spacetime, and cosmology became a science. Just after World War I, relativity made front-page news when astronomers saw the Sun bend starlight. Overnight, Einstein became famous as no physical scientist before or since, his theory the subject of poetry, painting and architecture.

Then, with the development of quantum mechanics in the 1920s, physics got really interesting. Quantum physics was a theory so powerful — and so powerfully weird — that nearly a century later, we’re still arguing about how to reconcile it with Einsteinian relativity and debating what it tells us about causality, locality and realism.

Relativity leads to a world far from everyday intuition. But relativity was still classical physics: classical in the sense that it was as causal as the physics of Newton, maybe even more so. The relativist could defend the view that we could refine our local specification of the state of things now — that we could spell out what every last particle was up to — and then predict the future as accurately as we wanted. Back in the Enlightenment, Pierre-Simon de Laplace imagined a machine that could calculate the future. He didn’t know relativity, of course, but you could imagine a Laplace 2.0 (with relativity) that kept his predictive dream alive.

Quantum mechanics shattered that Laplacian vision. From 1925 to 1927, Niels Bohr, Werner Heisenberg, Erwin Schrödinger, Max Born and many others made the theory into a toolkit that could be used to calculate how copper conducted electricity, how nuclei fissioned, how transistors worked. Quantum mechanics was easy to use, but hard to understand. For example, two particles that interacted might subsequently fly to opposite sides of the solar system, and still act as if they were dependent. Measuring one near Pluto affected measurements as the other zipped by Mercury. Einstein viewed this inseparability, now known as “entanglement,” as the fatal mark of the incompleteness of quantum mechanics: he sought a successor theory that would be local, realist and therefore complete.

Looking back on the early 20th century, Bohr wistfully reflected that Einstein had done so much of relativity theory by himself, while quantum mechanics took a whole generation of physicists 30 years. Telling the quantum story up to 1927 has been an industry for the past 80 years. In the first half of her new book, “The Age of Entanglement,” Louisa Gilder does her level best to cope with this plethora of sources, characters and topics, with mixed results. She writes engagingly, using dialogue reconstructed from letters, papers and memoirs to capture the spirit of confrontation among the players. That’s good. But she seems ill at ease with the German sources and so is reliant on the secondary literature — some of which is well done, some not. That’s not so good.

But on Page 181, the clouds part and Gilder reveals a sparkling, original book. Leaving Copenhagen, Berlin and Göttingen behind, she recounts a history of the quantum physics that did not end in 1927. With a smaller, more contemporary cast of characters from Berkeley, Innsbruck, Harvard and CERN, the big accelerator outside Geneva, Gilder brings the reader into a mix of ideas and personalities handled with a verve reminiscent of Jeremy Bernstein’s scientific portraits in The New Yorker.

This second-half book begins with the story of David Bohm, a student of J. Robert Oppenheimer who dissented from political orthodoxy and paid for it with his career. Hauled before the House Un-American Activities Committee in 1949, he refused their bullying questions, lost his job, and fled to Brazil and then to Israel. Often ill, the isolated Bohm railed against the orthodox interpretation of quantum physics as well, and agitated for a theory he hoped would replace it. He had the sympathy of Einstein and Richard Feynman but somehow always orbited outside the action — his work, Wolfgang Pauli once said, like an uncashed check. Gilder movingly portrays Bohm’s lonely trajectory. Even Einstein turned away from Bohm in 1952, calling his approach “too cheap,” while Max Born later wrote back that Pauli had come up with a new idea that “slays Bohm not only philosophically but physically.”

Next comes the real center of her story, John Bell, a remarkable Irish theorist at CERN. Like Bohm, Bell resisted the too easy slide into orthodoxy that had made one interpretation of quantum physics into a canon law from which even questioning was greeted suspiciously. For several years, Bell worked through Bohm’s studies, isolating what was so troubling about quanta. Better yet, he derived predictions.

Bell’s theorem, stated in a 1964 paper: You cannot have a theory consistent with the experimental predictions of quantum mechanics and have that theory describe the world in a completely local way. To put it differently, we may be troubled by various aspects of quantum physics and hope it can be replaced by some other theory that will capture its predictions but go deeper, giving a local, un-entangled account. But Bell showed that if a certain measurable inequality was violated experimentally, it would follow that any successor theory to quantum physics you tried to write would itself exhibit one of the strangest features of quantum theory: it would still be non-local.
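The inequality in question can be stated concretely. In the CHSH form (Clauser, Horne, Shimony and Holt, 1969) that experimenters actually test — a sketch in later notation, not Bell’s original 1964 formulation — let $E(a,b)$ denote the measured correlation between detector settings $a$ and $b$ on the two sides of the apparatus. Then any local hidden-variable theory must satisfy

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
```

whereas quantum mechanics predicts values of $|S|$ up to $2\sqrt{2} \approx 2.83$ (the Tsirelson bound) for suitably chosen settings on entangled pairs. A measured $|S| > 2$ therefore rules out every local hidden-variable account at once, which is what turned an interpretive dispute into a laboratory question.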

Bell’s prediction bore on correlations in properties between particles that had once been entangled — even if the particles flew far apart. Suddenly interpretations of quantum mechanics opened into something else: a laboratory test to demonstrate that local hidden variable theories could not exist. Experimentalists, not theorists, now had the floor, and Gilder beautifully evokes their world: equipment catalogs instead of books; piles of dry ice; messy clockwork; boiling metal. Gilder captures the vaulting ambition of this recent generation in joining engineering with the foundations of quantum theory — no easy task. Alongside the successes, she shows the frustration of contradictory results, the worries about whether these results reflected reality — or were just a stupid machine bug.

Some experimentalists wanted quantum mechanics to succeed. Others hoped it would crash and burn. These experiments seemed all at once to be playing for the highest stakes possible and yet might just confirm again what almost every physicist already accepted. Would the experiments kill the greatest theory, or wreck careers not yet begun?

Quantum physics survived Bell’s test. But in all the testing in those years since the mid-1960s, the nature — the weirdness — of quantum mechanics gained a clarity and force it had never had, even in the hands of Einstein and Bohr. Entanglement was here to stay: Bell’s inequality, powered by experiment, said so. What’s more, the oddness of entanglement makes a new kind of computing imaginable. Odd as it might seem, these foundational ideas of quantum mechanics have led governments, industries and militaries to explore how the entangled state of separated particles might accelerate computing to a staggering degree: instead of taking, say, a million steps to crack a secret password, the still-nascent quantum computer promises a solution in the square-root number of steps — in this case, a mere thousand steps.
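The square-root speedup described above is Grover’s quantum search algorithm. A back-of-the-envelope sketch of the query counts follows; the function names are illustrative, not from any real quantum library, and the Grover count uses the standard $\lceil(\pi/4)\sqrt{N}\rceil$ estimate for the optimal number of oracle queries.

```python
import math

def classical_queries(n):
    # An unstructured search (e.g. guessing a password) must, in the
    # worst case, check every one of the n candidates.
    return n

def grover_queries(n):
    # Grover's algorithm finds the marked item with on the order of
    # sqrt(n) oracle queries; the optimal count is about (pi/4)*sqrt(n).
    return math.ceil((math.pi / 4) * math.sqrt(n))

n = 1_000_000
print(classical_queries(n))  # 1000000 steps classically
print(grover_queries(n))     # 786 steps -- on the order of a thousand
```

The constant factor pi/4 is why the quantum count comes out slightly under the bare square root (1000); the review’s “a mere thousand steps” is the order of magnitude, not an exact figure.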

What had been for generations a story of theoretical malcontents now intrigues spooks and start-ups. All this radiates from Louisa Gilder’s story. Quantum physics lives.

Peter Galison is a professor of the history of science at Harvard and the author of “Einstein’s Clocks, Poincaré’s Maps.” His film “Secrecy,” made with Robb Moss, had its premiere at the 2009 Sundance Film Festival.


See also:

‘The Age of Entanglement’

The Socks
1978 and 1981

In 1978, when John Bell first met Reinhold Bertlmann, at the weekly tea party at the Organisation Européenne pour la Recherche Nucléaire, near Geneva, he could not know that the thin young Austrian, smiling at him through a short black beard, was wearing mismatched socks. And Bertlmann did not notice the characteristically logical extension of Bell’s vegetarianism — plastic shoes.

Deep under the ground beneath these two pairs of maverick feet, ever-increasing magnetic fields were accelerating protons (pieces of the tiny center of the atom) around and around a doughnut-shaped track a quarter of a kilometer in diameter. Studying these particles was part of the daily work of CERN, as the organization was called (a tangled history left the acronym no longer correlated with the name). In the early 1950s, at the age of twenty-five, Bell had acted as consultant to the team that designed this subterranean accelerator, christened in scientific pseudo-Greek “the Proton Synchrotron.” In 1960, the Irish physicist returned to Switzerland to live, with his Scottish wife, Mary, also a physicist and a designer of accelerators. CERN’s charmless, colorless campus of box-shaped buildings with protons flying through their foundations became Bell’s intellectual home for the rest of his life, in the green pastureland between Geneva and the mountains. At such a huge and impersonal place, Bell believed, newcomers should be welcomed. He had never seen Bertlmann before, and so he walked up to him and said, his brogue still clear despite almost two decades in Geneva: “I’m John Bell.”

This was a familiar name to Bertlmann — familiar, in fact, to almost anyone who studied the high-speed crashes and collisions taking place under Bell’s and Bertlmann’s feet (in other words, the disciplines known as particle physics and quantum field theory). Bell had spent the last quarter of a century conducting piercing investigations into these flying, decaying, and shattering particles. Like Sherlock Holmes, he focused on details others ignored and was wont to make startlingly clear and unexpected assessments. “He did not like to take commonly held views for granted but tended to ask, ‘How do you know?’” said his professor, Sir Rudolf Peierls, a great physicist of the previous generation. “John always stood out through his ability to penetrate to the bottom of any argument,” an early co-worker remembered, “and to find the flaws in it by very simple reasoning.” His papers — numbering over one hundred by 1978 — were an inventory of such questions answered, and flaws or treasures discovered as a result.

Bertlmann already knew this, and that Bell was a theorist with an almost quaint sense of responsibility who shied away from grand speculations and rooted himself in what was directly related to experiments at CERN. Yet it was this same responsibility that would not let him ignore what he called a “rottenness” or a “dirtiness” in the foundations of quantum mechanics, the theory with which they all worked. Probing the weak points of these foundations — the places in the plumbing where the theory was, as he put it, “unprofessional” — occupied Bell’s free time. Had those in the lab known of this hobby, almost none of them would have approved. But on a sabbatical in California in 1964, six thousand miles from his responsibilities at CERN, Bell had made a fascinating discovery down there in the plumbing of the theory.

Revealed in that extraordinary paper of 1964, Bell’s theorem showed that the world of quantum mechanics — the base upon which the world we see is built — is composed of entities which are either, in the jargon of physics, not locally causal, not fully separable, or even not real unless observed.

If the entities of the quantum world are not locally causal, then an action like measuring a particle can have instantaneous “spooky” effects across the universe. As for separability: “Without such an assumption of the mutually independent existence (the ‘being-thus’) of spatially distant things …,” Einstein insisted, “physical thought in the sense familiar to us would not be possible. Nor does one see how physical laws could be formulated and tested without such a clean separation.” The most extreme version of nonseparability is the idea that the quantum entities are not independently real: that atoms do not become solid until they are observed, like the proverbial tree that makes no sound when it falls unless a listener is around. Einstein found the implications ludicrous: “Do you really believe the moon is not there if nobody looks?”

Up to that point, the idea of science rested on separability, as Einstein had said. It could be summarized as humankind’s long intellectual journey away from magic (not locally causal) and from anthropocentrism (not independently real). Perversely, and to the consternation of Bell himself, his theorem brought physics to the point where it seemingly had to choose between these absurdities.

Whatever the ramifications, it would become obvious by the beginning of this century that Bell’s paper had caused a sea change in physics. But in 1978 the paper, published fourteen years before in an obscure journal, was still mostly unknown.

Bertlmann looked with interest at his new acquaintance, who was smiling affably with eyes almost shut behind big metal-rimmed glasses. Bell had red hair that came down over his ears — not flaming red, but what was known in his native country as “ginger” — and a short beard. His shirt was brighter than his hair, and he wore no tie.

In his painstaking Viennese-inflected English, Bertlmann introduced himself: “I’m Reinhold Bertlmann, a new fellow from Austria.”

Bell’s smile broadened. “Oh? And what are you working on?”

It turned out that they were both engaged with the same calculations dealing with quarks, the tiniest bits of matter. They found they had come up with the same results, Bell by one method on his desktop calculator, Bertlmann by the computer program he had written.

So began a happy and fruitful collaboration. And one day, Bell happened to notice Bertlmann’s socks.

Three years later, in an austere room high up in one of the majestic stone buildings of the University of Vienna, Bertlmann was curled over the screen of one of the physics department’s computers, deep in the world of quarks, thinking not in words but in equations. His computer — at fifteen feet by six feet by six feet, one of the department’s smaller ones — almost filled the room. Despite the early spring chill, the air-conditioning ran, fighting the heat produced by the sweatings and whirrings of the behemoth. Occasionally Bertlmann fed it a new punch card perforated with a line of code. He had been at his work for hours as the sunlight moved silently around the room.

He didn’t look up at the sound of someone’s practiced fingers poking the buttons that unlocked the door, nor when it swung open. Gerhard Ecker, from across the hall, was coming straight at him, a sheaf of papers in hand. He was the university’s man in charge of receiving preprints — papers that have yet to be published, which authors send to scientists whose work is related to their own.

Ecker was laughing. “Bertlmann!” he shouted, even though he was not four feet away.

Bertlmann looked up, bemused, as Ecker thrust a preprint into his hands: “You’re famous now!”

The title, as Bertlmann surveyed it, read:

Bertlmann’s Socks and the Nature of Reality
J. S. Bell
CERN, Geneve, Suisse

The article was slated for publication in a French physics periodical, Journal de Physique, later in 1981. Its title was almost as incomprehensible to Bertlmann as it would be for a casual reader.

“But what’s this about? What possibly—”

Ecker said, “Read it, read it.”

He read.

The philosopher in the street, who has not suffered a course in quantum mechanics, is quite unimpressed by Einstein-Podolsky-Rosen correlations. He can point to many examples of similar correlations in everyday life. The case of Bertlmann’s socks is often cited.

My socks? What is he talking about? And EPR correlations? It’s a big joke, John Bell is playing a big published joke on me.

“EPR” — short for the paper’s authors, Albert Einstein, Boris Podolsky, and Nathan Rosen — was, like Bell’s 1964 theorem, which it inspired nearly thirty years later, something of an embarrassment for physics. To the question posed by their title, “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?,” Einstein and his lesser-known cohorts answered no. They brought to the attention of physicists the existence of a mystery in the quantum theory. Two particles that had once interacted could, no matter how far apart, remain “entangled” — the word Schrödinger coined in that same year — 1935 — to describe this mystery. A rigorous application of the laws of quantum mechanics seemed to force the conclusion that measuring one particle affected the state of the second one: acting on it at a great distance by those “spooky” means. Einstein, Podolsky, and Rosen therefore felt that quantum mechanics would be superseded by some future theory that would make sense of the case of the correlated particles.

Physicists around the world had barely looked up from their calculations. Years went by, and it became more and more obvious that despite some odd details, ignored like the eccentricities of a general who is winning a war, quantum mechanics was the most accurate theory in the history of science. But John Bell was a man who noticed details, and he noticed that the EPR paper had not been satisfactorily dealt with.

Bertlmann felt like laughing in confusion. He looked at Ecker, who was grinning: “Read on, read on.”

Dr. Bertlmann likes to wear two socks of different colors. Which color he will have on a given foot on a given day is quite unpredictable. But when you see (Fig. 1) that the first sock is pink…

What is Fig. 1? My socks? Bertlmann ruffled through the pages and found, appended at the end, a little line sketch of the kind John Bell was fond of doing. He read on:

But when you see that the first sock is pink you can be already sure that the second sock will not be pink. Observation of the first, and experience of Bertlmann, give immediate information about the second. There is no accounting for tastes, but apart from that there is no mystery here. And is not the EPR business just the same?

Bertlmann imagined John’s voice saying this, conjured up his amused face. For three years we worked together every day and he never said a thing.

Ecker was laughing. “What do you think?”

Bertlmann had already dashed past him, out the door, down the hall to the phone, and with trembling fingers was calling CERN.

Bell was in his office when the phone rang, and Bertlmann came on the line, completely incoherent. “What have you done? What have you done?”

Bell’s clear laugh alone, so familiar and matter-of-fact, was enough to bring the world into focus again. Then Bell said, enjoying the whole thing: “Now you are famous, Reinhold.”

“But what is this paper about? Is this a big joke?”

“Read the paper, Reinhold, and tell me what you think.”

A tigress paces before a mirror. Her image, down to the last stripe, mimics her every motion, every sliding muscle, the smallest twitch of her tail. How are she and her reflection correlated? The light shining down on her narrow slinky shoulders bounces off them in all directions. Some of this light ends up in the eye of the beholder: either straight from her fur, or by a longer route, from tiger to mirror to eye. The beholder sees two tigers moving in perfectly opposite synchrony.

Look closer. Look past the smoothness of that coat to see its hairs; past its hairs to see the elaborate architectural arrangements of molecules that compose them, and then the atoms of which the molecules are made. Roughly a billionth of a meter wide, each atom is (to speak very loosely) its own solar system, with a dense center circled by distant electrons. At these levels — molecular, atomic, electronic — we are in the native land of quantum mechanics.

The tigress, though large and vividly colored, must be near the mirror for a watcher to see two correlated cats. If she is in the jungle, a few yards’ separation would leave the mirror showing only undergrowth and swinging vines. Even out in the open, though, at a certain distance the curvature of the earth would rise up to obscure mirror from tigress and decouple their synchrony. But the entangled particles Bell was talking about in his paper can act in unison with the whole universe in between.

Quantum entanglement, as Bell would go on to explain in his paper, is not really like Bertlmann’s socks. No one puzzles over how he always manages to pick different-colored socks, or how he pulls the socks onto his feet. But in quantum mechanics there is no idiosyncratic brain “choosing” to coordinate distant particles, and it is hard not to compare how they do it to magic.

In the “real world,” correlations are the result of local influences, unbroken chains of contact. One sheep butts another — there’s a local influence. A lamb comes running to his mother’s bleat after waves of air molecules hit each other in an entirely local domino effect, starting from her vocal cords and ending when they beat the tiny drum in the baby’s ear in a pattern his brain recognizes as Mom. Sheep scatter at the arrival of a coyote: the moving air has carried bits of coyote musk and dandruff into their nostrils, or the electromagnetic waves of light from the moon have bounced off the coyote’s pelt and into the retinas of their eyes. Either way, it’s all local, including the nerves firing in each sheep’s brain to say danger, and carrying the message to her muscles.

Grown up, sold, and separated on different farms, twin lambs both still chew their cud after eating, and produce lambs that look eerily similar. These correlations are still local. No matter how far the lambs ultimately separate, their genetic material was laid down when they were a single egg inside their mother’s womb.

Bell liked to talk about twins. He would show a photograph of the pair of Ohio identical twins (both named “Jim”) separated at birth and then reunited at age forty, just as Bell was writing “Bertlmann’s Socks.” Their similarities were so striking that an institute for the study of twins was founded, appropriately enough at the University of Minnesota in the Twin Cities. Both Jims were nail-biters who smoked the same brand of cigarettes and drove the same model and color of car. Their dogs were named “Toy,” their ex-wives “Linda,” and current wives “Betty.” They were married on the same day. One Jim named his son James Alan, his twin named his son James Allen. They both liked carpentry — one made miniature picnic tables and the other miniature rocking chairs.
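The gap Bell quantified between sock-like correlations and entangled ones can be sketched numerically. The snippet below is an illustration, not part of the excerpt: it evaluates the CHSH combination of correlations, which any locally causal, "Bertlmann's socks" style account must keep at or below 2, using the quantum prediction E(a, b) = -cos(a - b) for an entangled (singlet) pair.

```python
import math

# Quantum-mechanical prediction for the correlation between
# spin measurements on an entangled singlet pair, with the
# two detectors set at angles a and b:
def E(a, b):
    return -math.cos(a - b)

# Detector settings that maximize the CHSH combination.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# Any local, sock-like explanation obeys S <= 2 (Bell's bound);
# quantum mechanics predicts 2 * sqrt(2), about 2.828.
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print(round(S, 3))
```

Experiments of the kind Bell's paper motivated measure S directly; a value above 2 rules out the entire class of locally causal, sock-like explanations.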


Excerpted from The Age of Entanglement by Louisa Gilder



The Circular Logic of the Universe

Vasily Kandinsky, “Several Circles,” 1926.

CIRCLING my way not long ago through the Vasily Kandinsky show now on display in the suitably spiral setting of the Guggenheim Museum, I came to one of the Russian master’s most illustrious, if misleadingly named, paintings: “Several Circles.”

Those “several” circles, I saw, were more like three dozen, and every one of them seemed to be rising from the canvas, buoyed by the shrewdly exuberant juxtapositioning of their different colors, sizes and apparent translucencies. I learned that, at around the time Kandinsky painted the work, in 1926, he had begun collecting scientific encyclopedias and journals; and as I stared at the canvas, a big, stupid smile plastered on my face, I thought of yeast cells budding, or a haloed blue sun and its candied satellite crew, or life itself escaping the careless primordial stew.

I also learned of Kandinsky’s growing love affair with the circle. The circle, he wrote, is “the most modest form, but asserts itself unconditionally.” It is “simultaneously stable and unstable,” “loud and soft,” “a single tension that carries countless tensions within it.” Kandinsky loved the circle so much that it finally supplanted in his visual imagination the primacy long claimed by an emblem of his Russian boyhood, the horse.

PAINTING IN THE ROUND “Circular Forms,” oil on canvas by Robert Delaunay. Another artist, the Russian master Vasily Kandinsky, loved the circle, which he described as “a single tension that carries countless tensions within it.”

Quirkily enough, the artist’s life followed a circular form: He was born in December 1866, and he died the same month in 1944. This being December, I’d like to honor Kandinsky through his favorite geometry, by celebrating the circle and giving a cheer for the sphere. Life as we know it must be lived in the round, and the natural world abounds in circular objects at every scale we can scan. Let a heavenly body get big enough for gravity to weigh in, and you will have yourself a ball. Stars are giant, usually symmetrical balls of radiant gas, while the definition of both a planet like Jupiter and a plutoid like Pluto is a celestial object orbiting a star that is itself massive enough to be largely round.

On a more down-to-earth level, eyeballs live up to their name by being as round as marbles, and, like Jonathan Swift’s ditty about fleas upon fleas, those soulful orbs are inscribed with circular irises that in turn are pierced by circular pupils. Or think of the curved human breast and its bull’s-eye areola and nipple.

Our eggs and those of many other species are not egg-shaped at all but spherical, and when you see human eggs under a microscope they look like tranquil suns with Kandinsky coronas behind them. Raindrops start life in the clouds not with the pear-shaped contours of a cartoon teardrop, but as liquid globes, aggregates of water molecules that have condensed around specks of dust or salt and then mutually clung themselves into the rounded path of least resistance. Only as the raindrops fall do they lose their symmetry, their bottoms often flattening out while their tops stay rounded, a shape some have likened to a hamburger bun.

Sometimes roundness is purely a matter of physics. “The shape of any object represents the balance of two opposing forces,” explained Larry S. Liebovitch of the Center for Complex Systems and Brain Sciences at Florida Atlantic University. “You get things that are round when those forces are isotropic, that is, felt equally in all directions.”

In a star, gravity is pulling the mass of gas inward toward a central point, while pressure is pushing the gas outward, and the two competing forces reach a dynamic détente — “simultaneously stable and unstable,” you might say — in the form of a sphere. For a planet like Earth, gravity tugs the mostly molten rock in toward the core, but the rocks and their hostile electrons push back with equal vehemence. Plutoids are also sufficiently massive for gravity to overcome the stubbornness of rock and smooth out their personal lumps, although they may not be the gravitationally dominant bodies in their neighborhood.

In precipitating clouds, water droplets are exceptionally sticky, as the lightly positive end of one water molecule seeks the lightly negative end of another. But, again, mutually hostile electrons put a limit on molecular intimacy, and the compromise conformation is shaped like a ball. “A sphere is the most compact way for an object to form itself,” said Denis Dutton, an evolutionary theorist at the University of Canterbury in New Zealand.

A sphere is also tough. For a given surface area, it’s stronger than virtually any other shape. If you want to make a secure container using the least amount of material, Dr. Liebovitch said, make that container round. “That’s why, when you cook a frankfurter, it always splits in the long direction,” he said, rather than along its circumference. The curved part has the tensile strength of a sphere, the long axis that of a rectangle: no contest.
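Dr. Liebovitch’s frankfurter example is textbook thin-walled pressure-vessel mechanics. For a cylindrical casing of radius r, wall thickness t, and internal pressure p, the stress running around the circumference (hoop stress) is twice the stress running along the axis:

```latex
\sigma_{\text{hoop}} = \frac{pr}{t}, \qquad
\sigma_{\text{long}} = \frac{pr}{2t}
\quad\Longrightarrow\quad
\sigma_{\text{hoop}} = 2\,\sigma_{\text{long}}
```

The circumference therefore reaches its failure stress first, and the casing gives way in a split that runs along the length.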

The reliability of bubble wrap may help explain some of the round objects found among the living, where the shapes of body parts are assumed to have some relation to their purpose. Eggs are a valuable commodity in nature, and if a round package is the safest option, by all means, make them caviar round. Among many birds, of course, eggs are oval rather than round, a trait that biologists attribute to both the arduous passage the egg makes through the avian oviduct, and the fact that oval eggs roll in a circle rather than a straight line and thus are less likely to fall out of a nest.

Yet scientists admit that they don’t always understand the evolutionary pressures that sculpture a given carbon-based shape.

While studying the cornea at Columbia University College of Physicians and Surgeons, Dr. Liebovitch became curious about why eyeballs are round. “It seemed like their most salient feature,” he said. He explored the options. To aid in focusing? But only a small region of the retina is involved in focusing, he said, and the whole spherical casing seems superfluous to the optical needs of that foveal patch. To enable the eye to roll easily in the socket and dart this way and that? But birds and other animals with fixed eyes still have bulging round eyeballs. “It’s not really clear what the reason is,” he said.

And for speculative verve, nothing beats the assortment of hypotheses that have been put forth to explain the roundness of the human female breast. It’s a buttock mimic. It’s a convenient place to store fat for hard times. It’s a fertility signal, a youth signal, a health signal, a wealth symbol. Large breasts emphasize the woman’s comparatively small waist, which is really what men are interested in. As for me, I’m waiting for somebody to explain why a man’s well-developed bicep looks like a wandering breast.

Whatever the prompt, our round eyes are drawn to round things. Jeremy M. Wolfe of Harvard Medical School and his colleagues found that curvature was a basic feature we used while making a visual search. Maybe we are looking for faces, a new chance to schmooze.

Studying rhesus monkeys, Doris Tsao of the California Institute of Technology and her colleagues identified a set of brain cells that responded strongly to images of faces, monkey and otherwise. The only other visual stimuli that aroused those face-tracing neurons, Dr. Tsao said, were round objects — clocks, apples and the like. She suspects the results would be similar for humans. We make a fetish of faces. “If you have a round object with two spots in the middle,” she said, “that instantly attracts your attention.”

Or maybe the circle beckons not for its resemblance to a human face but as a mark of human art. Dr. Dutton, author of “The Art Instinct,” pointed out that perfect shapes were exceedingly rare in nature. “Take a look at a billiard ball,” he said. “It’s impossible to imagine that nature threw that one up.” We are predisposed to recognize “human artifacture,” he said, and roundness can be a mark of our handiwork. When nature does play the meticulous Michelangelo, we are astonished.

“People come to see the Moeraki boulders of New Zealand,” he said, “and ooh and aah because they’re so amazingly spherical.”

Artists in turn have used the circle as shorthand for the divine: in mandalas, rose windows, the lotus pad of the Buddha, the halos of Christian saints. For Kandinsky, said Tracey Bashkoff, who curated the Guggenheim exhibition, the circle was part of a “cosmic language” and a link to a grander, more spiritual plane. A round of applause! We’ve looped back to Kandinsky again.

Natalie Angier, New York Times



The Man Behind The God Particle

Meet Peter Higgs

The Large Hadron Collider accelerator in Geneva was constructed to search for the Higgs boson, among other things.

Physicist Peter Higgs is now world famous because of the subatomic particle bearing his name. But his ideas were initially snubbed by the academic world: his landmark paper predicting the existence of the Higgs boson was rejected at first. The editor apparently didn’t understand a word of it.

Inside the walls of NIKHEF, the Dutch institute for nuclear research in Amsterdam, a group of renowned Dutch physicists have joined Peter Higgs in the cafeteria. Higgs is in town for the premiere of the Dutch documentary “Higgs, Into the Heart of the Imagination.”

Higgs, who is 80, has become world famous because of the subatomic particle bearing his name, the so-called Higgs boson. Thousands of physicists are now chasing after the elusive particle at CERN, the European Organization for Nuclear Research in Geneva. The quest prompted the construction of the Large Hadron Collider, the most powerful particle accelerator on the planet, which cost more than €6 billion to build.

Did the idea that would lead to the discovery of the Higgs boson just pop into his head in 1964? “No,” Higgs says, munching on a cheese sandwich and sipping orange juice. “That is not how it went.”

His ideas formed more gradually, he says. “By the summer of 1964 I knew I was on to something. That was perhaps the reason — because my head was full of thoughts — that I forgot the instructions for putting up the tent when we went for a camping trip in the Scottish mountains.”

A Fruitful End to a Dreary Holiday

Typically for Scotland, it was raining cats and dogs. The couple holed up in a bed and breakfast, and returned home earlier than intended. “I was pleased to get back,” Higgs recalls. Back at the University of Edinburgh, he wrote the article that would make him world famous. “But I wasn’t walking around shouting ‘eureka,'” Higgs says, taking another bite out of his sandwich.

Physicists often refer to the Higgs particle, which is nicknamed the “God particle,” as the pinnacle of the so-called Standard Model. That model describes the smallest particles of which all discernible matter in the universe is composed: stars, planets, people and atoms.

The Higgs particle might explain why so many of those particles have mass, causing them to move slowly and stick together, unlike the wispy photons that shoot through space at the speed of light.

Because Higgs’ theory is so complicated, metaphors have been invented to describe it. “Spontaneous broken symmetry,” for instance, has been compared with the asymmetries caused by fibres and veins running through wood. Particles travelling along the lines of the veins experience little or no resistance, while those travelling at a perpendicular angle are slowed down, becoming heavy, like the matter of which stars and people are constructed.

Another metaphor is that of the US president making his entry at a party. When Barack Obama — representing, in this analogy, a very heavy particle — enters the room, the commotion caused by his arrival — the Higgs boson — spreads quickly and draws everybody in his direction. Because of all the people now flocking around him — the Higgs field — Obama is no longer able to move through the room quickly. He is far slower than the relatively unknown Dutch Prime Minister Jan Peter Balkenende, a much lighter particle in this metaphor.

Genius Unrecognised

The question the scientists at CERN are hoping to answer is whether or not the Higgs particle is really at work in the physical world. In other words, if it actually exists or not.

It is “somewhat ironic,” says Higgs, that his article was first rejected by Physics Letters, a journal published at CERN itself, in 1964. “It was only much later that my roommate at university in Edinburgh told me the editor had not understood it at all,” he recalls. The editor replied that he “did not see the relevance of the work for physics” and wrote a polite letter suggesting he send his article to another magazine, like Nuovo Cimento.

“It was only later that I discovered that that suggestion was not so polite after all,” Higgs recalls, “since Nuovo Cimento publishes all articles without peer review” — in other words, regardless of their quality.

Higgs had gone another route by then. He had added a paragraph to the article demonstrating that his “new” mechanism could produce particles with proper mass. “But I was thinking in the wrong direction,” Higgs adds. “I thought of hadrons.” Hadrons, which are subject to the strong force, one of the four elementary forces in nature, were a hot topic among scientists at the time.

Still, it was that extra paragraph, Higgs suspects, that led to his article being accepted by the American scientific journal Physical Review Letters, and, more importantly, that drew attention to it.

The Shoulders of Giants

At a speech given to colleagues last Friday at NIKHEF, Higgs credited the scientists who paved the way for him and who dotted the i’s and crossed the t’s of his work. He thanked countless people, including a number of Nobel Prize winners. Among the latter were people like Yoichiro Nambu, who carried the idea of spontaneously broken symmetry from superconductivity research into particle physics. Or Phil Anderson, who came close, but never drew the same final conclusions that Higgs did. Or Sheldon Glashow, Abdus Salam and Steven Weinberg, who unified the electromagnetic and weak nuclear forces under the Standard Model. And the Dutch scientists Gerard ‘t Hooft and Martin Veltman, who gave this electroweak force a sturdy theoretical foundation.

It was with the electroweak force that the Higgs mechanism proved particularly useful. This theory covers the interaction between weightless particles (the photons of the electroweak force) and massive particles (like the W- and Z-particles of the weak nuclear force). Only Higgs’ mechanism could explain the asymmetrical masses, through the existence of a particle which, since a well-attended conference in 1972, has been known as the Higgs particle, or Higgs boson.

Over the years, physicists became convinced the Higgs particle might actually be detectable. Indirectly, through precise measurements of the electroweak force at CERN and Fermilab in the United States, they established how. “And that is when my life as a boson really started,” Higgs says.

It could have been different. When Higgs’ manuscript arrived at Physical Review Letters on Aug. 31, 1964, the magazine had just published an article by the Belgian physicists Francois Englert and Robert Brout. They had come to the same conclusion through different means. “Because their method was quite complicated they were somewhat uncertain of their results. Perhaps that is why they did not tout the applications it might have in particle physics,” Higgs says carefully.

This may be true, but the “Englert field” never gained popular recognition, and neither did the Brout boson. And this is despite the fact that his name only has five letters, as Brout is said to have remarked somewhat wryly. And even Higgs’ painstaking efforts to pay tribute to his fellow researchers can do little to change the fact that it is his name that is now forever associated with the elusive boson.

What If It Doesn’t Exist?

But what if the Higgs particle isn’t found? “Then I no longer understand a whole area of physics which puzzled me as an undergraduate,” Higgs answers, sounding determined. “And I thought the one thing we understand rather well now is the electromagnetic interaction and how it relates to electroweak theory.”

Isn’t it strange to see billions being invested in pursuit of a particle bearing his own name? “If physicists were looking for a different particle they would have constructed an accelerator just as strong and experiments just as complex,” Higgs says with a shrug.



Physicists Move One Step Closer to Quantum Computing

Physicists at UC Santa Barbara have made an important advance in electrically controlling quantum states of electrons, a step that could help in the development of quantum computing. The work is published online November 20 on the Science Express Web site.

The researchers have demonstrated the ability to electrically manipulate, at gigahertz rates, the quantum states of electrons trapped on individual defects in diamond crystals. This could aid in the development of quantum computers that could use electron spins to perform computations at unprecedented speed.

Using electromagnetic waveguides on diamond-based chips, the researchers were able to generate magnetic fields large enough to change the quantum state of an atomic-scale defect in less than one billionth of a second. The microwave techniques used in the experiment are analogous to those that underlie magnetic resonance imaging (MRI) technology.

The key achievement in the current work is that it gives a new perspective on how such resonant manipulation can be performed. “We set out to see if there is a practical limit to how fast we can manipulate these quantum states in diamond,” said lead author Greg Fuchs, a postdoctoral researcher at UCSB. “Eventually, we reached the point where the standard assumptions of magnetic resonance no longer hold, but to our surprise we found that we actually gained an increase in operation speed by breaking the conventional assumptions.”
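In the conventional magnetic-resonance picture the researchers started from, an on-resonance drive rotates the spin at a Rabi frequency set by the field amplitude, so a stronger field means a faster flip. The sketch below uses an assumed, illustrative 1 GHz Rabi frequency (not a figure from the paper) to show how gigahertz-scale driving yields a spin flip in under a billionth of a second.

```python
import math

# Standard on-resonance Rabi formula: the probability that a
# driven two-level spin has flipped after time t is
#   P(t) = sin^2(Omega * t / 2),
# where Omega is the Rabi frequency set by the drive amplitude.
OMEGA = 2 * math.pi * 1e9  # assumed 1 GHz Rabi frequency, in rad/s

def flip_probability(t_seconds):
    return math.sin(OMEGA * t_seconds / 2) ** 2

# A complete flip takes a "pi pulse" lasting pi / Omega:
t_pi = math.pi / OMEGA
print(t_pi)                    # half a nanosecond
print(flip_probability(t_pi))  # certainty of a flip
```

The surprise Fuchs describes is what happens when driving becomes so strong that this simple formula's assumptions break down, and the manipulation gets faster still.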

While these results are unlikely to change MRI technology, they do offer hope for the nascent field of quantum computing. In this field, individual quantum states take on the role that transistors perform in classical computing.

“From an information technology standpoint, there is still a lot to learn about controlling quantum systems,” said David Awschalom, principal investigator and professor of physics, electrical and computer engineering at UCSB. “Still, it’s exciting to stand back and realize that we can already electrically control the quantum state of just a few atoms at gigahertz rates — speeds comparable to what you might find in your computer at home.”

The work was performed at UCSB’s Center for Spintronics and Quantum Computation, directed by Awschalom. Co-authors on the paper include David M. Toyli and F. Joseph Heremans, both of UCSB. Slava V. Dobrovitski of Ames Laboratory and Iowa State University contributed to the paper.



By Happy Accident, Chemists Produce a New Blue

Variations of a blue pigment were developed at Oregon State University.

Blue is sometimes not an easy color to make.

Blue pigments of the past have often been expensive (ultramarine blue was made from the gemstone lapis lazuli, ground up), poisonous (cobalt blue is a possible carcinogen and Prussian blue, another well-known pigment, can leach cyanide) or apt to fade (many of the organic ones fall apart when exposed to acid or heat).

So it was a pleasant surprise to chemists at Oregon State University when they created a new, durable and brilliantly blue pigment by accident.

The researchers were trying to make compounds with novel electronic properties, mixing manganese oxide, which is black, with other chemicals and heating them to high temperatures.

Then Mas Subramanian, a professor of material sciences, noticed that one of the samples that a graduate student had just taken out of the furnace was blue.

“I was shocked, actually,” Dr. Subramanian said.

In the intense heat, almost 2,000 degrees Fahrenheit, the ingredients formed a crystal structure in which the manganese ions absorbed red and green wavelengths of light and reflected only blue.

When cooled, the manganese-containing oxide remained in this alternate structure. The other ingredients — white yttrium oxide and pale yellow indium oxide — are also required to stabilize the blue crystal. When one was left out, no blue color appeared.

The pigments have proven safe and durable, Dr. Subramanian said, although not cheap because of the cost of the indium. The researchers are trying to replace the indium oxide with cheaper oxides like aluminum oxide, which possesses similar properties.

The findings appear in the Journal of the American Chemical Society.


Kenneth Chang, New York Times



Water Found on Moon, Scientists Say

Moon water

This artist’s rendering released by NASA shows the Lunar Crater Observation and Sensing Satellite as it crashed into the moon to test for the presence of water last month.

There is water on the Moon, scientists stated unequivocally on Friday.

“Indeed yes, we found water,” Anthony Colaprete, the principal investigator for NASA’s Lunar Crater Observation and Sensing Satellite, said in a news conference. “And we didn’t find just a little bit. We found a significant amount.”

The confirmation of scientists’ suspicions is welcome news to explorers who might set up home on the lunar surface and to scientists who hope that the water, in the form of ice accumulated over billions of years, holds a record of the solar system’s history.

The satellite, known as Lcross (pronounced L-cross), crashed into a crater near the Moon’s south pole a month ago. The 5,600-miles-per-hour impact carved out a hole 60 to 100 feet wide and kicked up at least 26 gallons of water.

“We got more than just a whiff,” Peter H. Schultz, a professor of geological sciences at Brown University and a co-investigator of the mission, said in a telephone interview. “We practically tasted it with the impact.”

For more than a decade, planetary scientists have seen tantalizing hints of water ice at the bottom of these cold craters where the sun never shines. The Lcross mission, intended to look for water, was made up of two pieces: an empty rocket stage to slam into the floor of Cabeus, a crater 60 miles wide and 2 miles deep, and a small spacecraft to measure what was kicked up.

For space enthusiasts who stayed up, or woke up early, to watch the impact on Oct. 9, the event was anticlimactic, even disappointing, as they failed to see the anticipated debris plume. Even some high-powered telescopes on Earth like the Palomar Observatory in California did not see anything.

The National Aeronautics and Space Administration later said that Lcross did indeed photograph a plume but that the live video stream was not properly attuned to pick out the details.

The water findings came through an analysis of the slight shifts in color after the impact, showing telltale signs of water molecules that had absorbed specific wavelengths of light. “We got good fits,” Dr. Colaprete said. “It was a unique fit.”

The scientists also saw colors of ultraviolet light associated with molecules of hydroxyl, consisting of one hydrogen and one oxygen, presumably water molecules that had been broken apart by the impact and then glowed like neon signs.

In addition, there were squiggles in the data that indicated other molecules, possibly carbon dioxide, sulfur dioxide, methane or more complex carbon-based molecules. “All of those are possibilities,” Dr. Colaprete said, “but we really need to do the work to see which ones work best.”

Remaining in perpetual darkness like other craters near the lunar poles, the bottom of Cabeus is a frigid minus 365 degrees Fahrenheit, cold enough that anything at the bottom of such craters never leaves. These craters are “really like the dusty attic of the solar system,” said Michael Wargo, the chief lunar scientist at NASA headquarters.
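For readers more used to metric units, that temperature converts as follows (a quick illustration, not from the article):

```python
f = -365.0  # crater-floor temperature quoted above, in Fahrenheit
celsius = (f - 32.0) * 5.0 / 9.0
kelvin = celsius + 273.15

print(round(celsius))  # about -221 C
print(round(kelvin))   # about 53 K, only ~53 degrees above absolute zero
```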

The Moon was once thought to be dry. Then came hints of ice in the polar craters. In September, scientists reported an unexpected finding that most of the surface, not just the polar regions, might be covered with a thin veneer of water.

The deposits in the lunar craters may be as informative about the Moon as ice cores from Earth’s polar regions are about the planet’s past climates. Scientists want to know the source and history of whatever water they find. It could have come from the impacts of comets, for instance, or from within the Moon.

“Now that we know that water is there, thanks to Lcross, we can begin in earnest to go to this next set of questions,” said Gregory T. Delory of the University of California, Berkeley.

Dr. Delory said the findings of Lcross and other spacecraft were “painting a really surprising new picture of the Moon; rather than a dead and unchanging world, it could be in fact a very dynamic and interesting one.”

Lunar ice, if bountiful, would not only give future settlers something to drink, but could also be broken apart into oxygen and hydrogen. Both are valuable as rocket fuel, and the oxygen would also give astronauts air to breathe.

NASA’s current exploration plans call for a return of astronauts to the Moon by 2020, for the first visit since 1972. But a panel appointed in May recently concluded that trimmings of the agency’s budget made that goal impossible. One option presented to the Obama administration was to bypass Moon landings for now and focus on long-duration missions in deep space.

Even though the signs of water were clear and definitive, the Moon is far from wet. The Cabeus soil could still turn out to be drier than that in deserts on Earth. But Dr. Colaprete also said that he expected the 26 gallons to be a lower limit and that it was too early to estimate the concentration of water in the soil.

The scientists also do not know whether the information from Cabeus is representative of the state of other lunar craters.

Kenneth Chang, New York Times



Sniff test to preserve old books


The test could help to preserve treasured books and documents

The key to preserving the old, degrading paper of treasured, ageing books is contained in the smell of their pages, say scientists.

Researchers report in the journal Analytical Chemistry that a new “sniff test” can measure degradation of old books and historical documents.

The test picks up and identifies the chemicals that the pages release as they degrade.

This could help libraries and museums preserve a range of precious books.

The test is based on detecting the levels of volatile organic compounds.

These are released by paper as it ages and produce the familiar “old book smell”.

The international research team, led by Matija Strlic from University College London’s Centre for Sustainable Heritage, describes that smell as “a combination of grassy notes with a tang of acids and a hint of vanilla over an underlying mustiness”.

“This unmistakable smell is as much part of the book as its contents,” they wrote in the journal article.

Dr Strlic told BBC News that the idea for the new test came from observing museum conservators as they worked.

“I often noticed that conservators smelled paper during their assessment,” he recalled.

“I thought, if there was a way we could smell paper and tell how degraded it is from the compounds it emits, that would be great.”

The test does just that. It pinpoints ingredients contained within the blend of volatile compounds emanating from the paper.

That mixture, the researchers say, “is dependent on the original composition of the… paper substrate, applied media, and binding”.

Their new method is called “material degradomics”. The scientists are able to use it to find what chemicals books release, without damaging the paper.

It involves an analytical technique called gas chromatography-mass spectrometry. This simply “sniffs” the paper and separates out the different compounds.

Chemical fingerprint

The team tested 72 historical papers from the 19th and 20th centuries – some of which they bought on eBay – and identified 15 compounds that were “reliable markers” of degradation.

“The aroma is made up of hundreds of compounds, but these 15 contain most of the information that we need,” said Dr Strlic.

Measuring the levels of these individual compounds made it possible to produce a “fingerprint” of each document’s condition.

Such a thorough chemical understanding of the state of a book will help museums and libraries to identify the books and documents most in need of protection from further degradation.

The information could also be used to fine-tune preservation techniques.

The method, the researchers say, is not exclusively applicable to books, and could be used on other historical artefacts.



CERN Collider Adds New Punchlines to Growing Collection

Particle Physics Slapstick


The Large Hadron Collider has so far produced a number of odd news stories, but little else.

The list of problems encountered by the Large Hadron Collider, a super-sized particle accelerator in Switzerland, is long and becoming longer. It ranges from French bread to French terrorists, and from black holes to time travel, and makes for increasingly entertaining reading.

One can almost hear the tone of surprise in Monday’s press release from the enormous particle accelerator at the European Organization for Nuclear Research, known as CERN for short. “Particles Have Gone Half Way Round the LHC,” reads the headline, referring to the Large Hadron Collider.

At first glance, it seems odd that the people at the LHC would find such a partial particle peregrination worthy of triumphalism, no matter how tepid. But given that the launch of the ambitious experiments slated for the multi-billion euro science kit is now over a year behind schedule, the LHC has been starved of anything positive to say at all.

Indeed, the periodic hiccups on the way to functionality have become something of a running joke in the media coverage of CERN. This week has seen two new punchlines added to the list. On Monday, CERN announced that a bird carrying a hunk of French bread accidentally dropped its snack on an external power generator last week, creating a short-circuit that briefly shut down the accelerator’s all-important cooling system.

Ties to al-Qaida

And in Bern, the Swiss Federal Prosecutor’s Office confirmed on Monday that it has opened an investigation into a French CERN physicist suspected of having ties to al-Qaida. The Swiss case comes in addition to preliminary charges already filed in France against the 32-year-old Frenchman of Algerian origin, whose identity has not been revealed.

French officials have said that the suspect has admitted to having communicated with al-Qaida regarding potential terror attacks.

But if he were planning an attack on the particle accelerator, he perhaps need not have bothered. Scientists hope that the Large Hadron Collider will provide insights into the behavior of quantum particles, many times smaller than the protons, neutrons and electrons which physicists once thought were the tiniest components of all matter. Some hope to find the as-yet theoretical particle known as Higgs boson — also referred to as the “God particle” because it is presumed to have been present at the Big Bang. Others are looking for verification as to the veracity of string theory, which posits the existence of additional dimensions beyond the four currently known.

Hopes were high for the LHC, the most powerful particle accelerator ever built. Fully 27 kilometers (17 miles) in circumference, the ultra-complex machine is designed to speed up sub-atomic particles to 99.9999991 percent of the speed of light. But problems started almost immediately after it was fired up in September 2008, when an electrical failure resulted in damage that has taken a year to fix.
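As a back-of-the-envelope check of that speed figure (an illustration assuming a proton rest energy of about 0.938 GeV, a value not given in the article): at 99.9999991 percent of the speed of light, relativity implies a Lorentz factor near 7,500, which lines up with the 7 trillion electron volts the machine was designed to reach.

```python
import math

beta = 0.999999991                 # LHC design speed as a fraction of c (from the article)
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)   # Lorentz factor

proton_rest_energy_gev = 0.938     # assumed proton rest energy in GeV
beam_energy_tev = gamma * proton_rest_energy_gev / 1000.0

print(round(gamma))                # ~7450
print(round(beam_energy_tev, 1))   # ~7.0 TeV per proton
```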

Sci-Fi Time Travel

Indeed, progress has been so slow that some mathematicians have even posited that the future is sabotaging the present in order to prevent the creation of the God particle. Theoretical physicists think the Higgs boson is responsible for turning energy into mass, thus making the particle responsible for creating all the mass in the universe.

“It must be our prediction that all Higgs producing machines shall have bad luck,” Danish physicist Dr. Holger Bech Nielsen — who, together with his Japanese colleague Dr. Masao Ninomiya, created the bizarre, sci-fi time-travel theory — told the New York Times last month.

CERN scientists insist that the machine is only experiencing “teething problems” and that, after this week’s bird incident, ongoing repairs to the accelerator were delayed by only a few hours. Proton collisions are now set to begin prior to Christmas.

In contrast to last year, however, few now fear that the LHC might cause a black hole to open up and swallow the world, as some had theorized in 2008. After all, the energy achieved by speedy protons will be much lower than originally intended. Rather than the 7 trillion electron volts initially hoped for, the collisions this year will be at a measly 1.1 trillion electron volts, barely higher than at CERN’s rival accelerator, the Tevatron outside Chicago.



Setting Sail Into Space, Propelled by Sunshine


DEEP-SPACE TRAVEL If the launching of LightSail-1 goes off according to plan next year, humans may soon be solar-sailing, as shown in this illustration.

Peter Pan would be so happy.

About a year from now, if all goes well, a box about the size of a loaf of bread will pop out of a rocket some 500 miles above the Earth. There in the vacuum it will unfurl four triangular sails as shiny as moonlight and only barely more substantial. Then it will slowly rise on a sunbeam and move across the stars.

LightSail-1, as it is dubbed, will not make it to Neverland. At best the device will sail a few hours and gain a few miles in altitude. But those hours will mark a milestone for a dream that is almost as old as the rocket age itself, and as romantic: to navigate the cosmos on winds of starlight the way sailors for thousands of years have navigated the ocean on the winds of the Earth.

“Sailing on light is the only technology that can someday take us to the stars,” said Louis Friedman, director of the Planetary Society, the worldwide organization of space enthusiasts.

Even as the National Aeronautics and Space Administration continues to flounder in a search for its future, Dr. Friedman announced Monday that the Planetary Society, with help from an anonymous donor, would be taking baby steps toward a future worthy of science fiction. Over the next three years, the society will build and fly a series of solar-sail spacecraft dubbed LightSails, first in orbit around the Earth and eventually into deeper space.

The voyages are an outgrowth of a long collaboration between the society and Cosmos Studios of Ithaca, N.Y., headed by Ann Druyan, a film producer and widow of the late astronomer and author Carl Sagan.

Sagan was a founder of the Planetary Society, in 1980, with Dr. Friedman and Bruce Murray, then director of the Jet Propulsion Laboratory. The announcement was made at the Hart Senate Office Building in Washington at a celebration of what would have been Sagan’s 75th birthday. He died in 1996.


Ms. Druyan, who has been chief fund-raiser for the society’s sailing projects, called the space sail “a Taj Mahal” for Sagan, who loved the notion and had embraced it as a symbol for the wise use of technology.

There is a long line of visionaries, stretching back to the Russian rocket pioneers Konstantin Tsiolkovsky and Fridrikh Tsander and the author Arthur C. Clarke, who have supported this idea. “Sails are just a marvelous way of getting around the universe,” said Freeman Dyson, of the Institute for Advanced Study in Princeton, N.J., and a longtime student of the future, “but it takes a long time to imagine them becoming practical.”

The solar sail receives its driving force from the simple fact that light carries not just energy but also momentum — a story told by every comet tail, which consists of dust blown by sunlight from a comet’s core. The force on a solar sail is gentle, if not feeble, but unlike a rocket, which fires for a few minutes at most, it is constant. Over days and years a big enough sail, say a mile on a side, could reach speeds of hundreds of thousands of miles an hour, fast enough to traverse the solar system in 5 years. Riding the beam from a powerful laser, a sail could even make the journey to another star system in 100 years, that is to say, a human lifespan.
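The arithmetic behind that gentle-but-constant push can be sketched in a few lines (the sail area and solar intensity here are assumed for illustration; the physics, a force of 2IA/c on a perfect reflector, follows from the momentum argument above, and the 5-kilogram mass matches the LightSail figure quoted later in the article):

```python
c = 299_792_458.0        # speed of light, m/s
intensity = 1361.0       # sunlight intensity near Earth, W/m^2 (assumed)
area = 100.0             # hypothetical sail, 10 m x 10 m
mass = 5.0               # spacecraft mass in kg, comparable to LightSail-1

# Reflected light transfers twice its momentum: F = 2 * I * A / c
force = 2.0 * intensity * area / c   # ~0.0009 newtons
accel = force / mass                 # ~0.00018 m/s^2

# Tiny, but unlike a rocket it never shuts off: speed gained per day of sailing
day = 86_400.0
print(accel * day)   # ~16 m/s per day, compounding over months and years
```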

Whether humans could ever take these trips depends on just how starry-eyed one’s view of the future is.

Dr. Friedman said it would take too long and involve too much exposure to radiation to sail humans to a place like Mars. He said the only passengers on an interstellar voyage — even after 200 years of additional technological development — were likely to be robots or perhaps our genomes encoded on a chip, a consequence of the need to keep the craft light, like a giant cosmic kite.

In principle, a solar sail can do anything a regular sail can do, like tacking. Unlike other spacecraft, it can act as an antigravity machine, using solar pressure to balance the Sun’s gravity and thus hover anyplace in space.

And, of course, it does not have to carry tons of rocket fuel. As the writer and folk singer Jonathan Eberhart wrote in his song “A Solar Privateer”:

No cold LOX tanks or reactor banks, just Mylar by the mile.

No stormy blast to rattle the mast, a sober wind and true.

Just haul and tack and ball the jack like the waterlubbers do.

Those are visions for the long haul. “Think centuries or millennia, not decades,” said Dr. Dyson, who also said he approved of the Planetary Society project.

“We ought to be doing things that are romantic,” he said, adding that nobody knew yet how to build sails big and thin enough for serious travel. “You have to get equipment for unrolling them and stretching them — a big piece of engineering that’s not been done. But the joy of technology is that it’s unpredictable.”

At one time or another, many of NASA’s laboratories have studied solar sails. Scientists at the Jet Propulsion Laboratory even once investigated sending a solar sail to rendezvous and ride along with Halley’s Comet during its pass in 1986.

But efforts by the agency have dried up as it searches for dollars to keep the human spaceflight program going, said Donna Shirley, a retired J.P.L. engineer and former chairwoman of the NASA Institute for Advanced Concepts. Dr. Shirley said that the solar sail was feasible and that the only question was, “Do you want to spend some money?” Until the technology had been demonstrated, she said, no one would use it.

Japan continues to have a program, and test solar sails have been deployed from satellites or rockets, but no one has ever gotten as far as trying to sail them anywhere.

Dr. Friedman, who cut his teeth on the Halley’s Comet proposal, has long sought to weigh anchor in space. An effort by the Planetary Society and the Russian Academy of Sciences to launch a sail about 100 feet on a side, known as Cosmos-1, from a Russian missile submarine in June 2005 ended with what Ms. Druyan called “our beautiful spacecraft” at the bottom of the Barents Sea.

Ms. Druyan and Dr. Friedman were beating the bushes for money for a Cosmos-2, when NASA asked if the society wanted to take over a smaller project known as the Nanosail. These are only 18 feet on a side and designed to increase atmospheric drag and thus help satellites out of orbit.

And so LightSail was born. Its sail, adapted from the Nanosail project, is made of aluminized Mylar about one-quarter the thickness of a trash bag. The body of the spacecraft will consist of three miniature satellites known as CubeSats, four inches on a side, which were first developed by students at Stanford and now can be bought on the Web, among other places. One of the cubes will hold electronics and the other two will carry folded-up sails, Dr. Friedman said.

Assembled like blocks, the whole thing weighs less than five kilograms, or about 11 pounds. “The hardware is the smallest part,” Dr. Friedman said. “You can’t spend a lot on a five-kilogram system.”

The next break came when Dr. Friedman was talking about the LightSail to a group of potential donors. A man — “a very modest dear person,” in Ms. Druyan’s words — asked about the cost of the missions and then committed to paying for two of them, and perhaps a third, if all went well.

After the talk, the man, who does not wish his identity to be known, according to the society, came up and asked for the society’s bank routing number. Within days the money was in its bank account. The LightSail missions will be spread about a year apart, starting around the end of 2010, with the exact timing depending on what rockets are available. The idea, Dr. Friedman said, is to piggyback on the launching of a regular satellite. Various American and Russian rockets are all possibilities for a ride, he said.

Dr. Friedman said the first flight, LightSail-1, would be a success if the sail could be controlled for even a small part of an orbit and it showed any sign of being accelerated by sunlight. “For the first flight, anything measurable is great,” he said. In addition there will be an outrigger camera to capture what Ms. Druyan called “the Kitty Hawk moment.”

The next flight will feature a larger sail and will last several days, building up enough velocity to raise its orbit by tens or hundreds of miles, Dr. Friedman said.

For the third flight, Dr. Friedman and his colleagues intend to set sail out of Earth orbit with a package of scientific instruments to monitor the output of the Sun and provide early warning of magnetic storms that can disrupt power grids and even damage spacecraft. The plan is to set up camp at a point where the gravity of the Earth and Sun balance each other — called L1, about 900,000 miles from the Earth — a popular place for conventional scientific satellites. That, he acknowledges, will require a small rocket, like the attitude control jets on the shuttle, to move out of Earth orbit, perhaps frustrating to a purist.

But then again, most sailboats do have a motor for tooling around in the harbor, which is how Dr. Friedman describes being in Earth orbit. Because the direction of the Sun keeps changing, he said, you keep “tacking around in the harbor when what you want to do is get out on the ocean.”

The ocean, he said, awaits.

Dennis Overbye, New York Times



Tweak Gravity: What If There Is No Dark Matter?

Modifications to the theory of gravity could account for observational discrepancies, but not without introducing other complications.

Theorists and observational astronomers are hot on the trail of dark matter, the invisible material thought to account for puzzling mass disparities in large-scale astronomical structures. For instance, galaxies and galactic clusters behave as if they were far more massive than would be expected if they comprised only atoms and molecules, spinning faster than their observable mass would explain. What is more, the very presence of assemblages such as our Milky Way Galaxy speaks to the influence of more mass than we can see. If the mass of the universe were confined to atoms, the clumping of matter that allowed galaxies to take shape would never have transpired.

Dark matter was theorized into existence to account for the missing mass. The prevailing view holds that dark matter contributes five times as much to the mass of the universe as ordinary matter does.

But some researchers have taken to approaching the problem from the other direction: What if the discrepancy arises from a flaw in our theory of gravity rather than from some provider of mass that we cannot see? In the 1980s physicist Mordehai Milgrom of the Weizmann Institute of Science in Rehovot, Israel, proposed a modification to Newtonian dynamics that would explain many of the observational discrepancies without requiring significant mass to be hidden away in dark matter. But it fell short of describing all celestial objects, and to incorporate the full span of gravitational interactions, a modification to Albert Einstein’s theory of general relativity is needed.

A review article in the November 6 Science checks in on the status of these modified-gravity theories, including a proposal put forth by physicist Jacob Bekenstein of The Hebrew University of Jerusalem in 2004. Pedro Ferreira, a University of Oxford cosmologist and one of the review paper’s co-authors, says that there is good news and bad news for proponents of such models.

The bad news is that in order for modified versions of general relativity to work, some sort of unseen—or “dark”—presence must be in play, which in some cases can look a lot like dark matter. “If you try and build a consistent, relativistic theory that gives you modified Newtonian dynamics, you have no choice but to introduce extra stuff,” Ferreira says. “I don’t think it will be described by particles, in the way that dark matter is described—it may be described in a more wavelike form or a more fieldlike form.”

In other words, a theory of gravity can do away with dark matter but cannot describe the universe simply as the product of a tweaked Einsteinian gravity acting on the mass we can see. “The old paradigm where all you were doing was modifying gravity simply doesn’t hold,” Ferreira says. “You modify gravity, but through the backdoor you introduce extra fields, which means that the distinction between dark matter and modified gravity isn’t as clear as people thought before.”

The good news? According to Ferreira, “all is not lost.” Observational campaigns now in the works, such as the Joint Dark Energy Mission planned by NASA and the U.S. Department of Energy as well as an international radio telescope project known as Square Kilometer Array, should allow astronomers and cosmologists to test competing worldviews in the next decade or so.

By cross-correlating large-scale surveys of galaxies and observations of how galaxies distort background light in a relativistic process known as weak lensing, Ferreira says, the true nature of mass and the forces acting on it can be tested. “Whether gravity is modified or not will greatly affect the result,” he predicts.

Although Ferreira works on theories of modified gravity, he is careful to note that the new paper does not advocate for those theories’ correctness over the prevailing model. In his personal view, Ferreira says, “by far the simplest proposal is normal gravity plus dark matter.”

John Matson, Scientific American



African Desert Rift Confirmed As New Ocean In The Making


New research confirms that the volcanic processes at work beneath the Ethiopian rift are nearly identical to those at the bottom of the world’s oceans, and the rift is indeed likely the beginning of a new sea.

In 2005, a gigantic, 35-mile-long rift broke open the desert ground in Ethiopia. At the time, some geologists believed the rift was the beginning of a new ocean as two parts of the African continent pulled apart, but the claim was controversial.

Now, scientists from several countries have confirmed that the volcanic processes at work beneath the Ethiopian rift are nearly identical to those at the bottom of the world’s oceans, and the rift is indeed likely the beginning of a new sea.

The new study, published in the latest issue of Geophysical Research Letters, suggests that the highly active volcanic boundaries along the edges of tectonic ocean plates may suddenly break apart in large sections, instead of little by little as has been predominantly believed. In addition, such sudden large-scale events on land pose a much more serious hazard to populations living near the rift than would several smaller events, says Cindy Ebinger, professor of earth and environmental sciences at the University of Rochester and co-author of the study.

“This work is a breakthrough in our understanding of continental rifting leading to the creation of new ocean basins,” says Ken Macdonald, professor emeritus in the Department of Earth Science at the University of California, Santa Barbara, and who is not affiliated with the research. “For the first time they demonstrate that activity on one rift segment can trigger a major episode of magma injection and associated deformation on a neighboring segment. Careful study of the 2005 mega-dike intrusion and its aftermath will continue to provide extraordinary opportunities for learning about continental rifts and mid-ocean ridges.”

“The whole point of this study is to learn whether what is happening in Ethiopia is like what is happening at the bottom of the ocean where it’s almost impossible for us to go,” says Ebinger. “We knew that if we could establish that, then Ethiopia would essentially be a unique and superb ocean-ridge laboratory for us. Because of the unprecedented cross-border collaboration behind this research, we now know that the answer is yes, it is analogous.”

Atalay Ayele, professor at the Addis Ababa University in Ethiopia, led the investigation, painstakingly gathering seismic data surrounding the 2005 event that led to the giant rift opening more than 20 feet in width in just days. Along with the seismic information from Ethiopia, Ayele combined data from neighboring Eritrea with the help of Ghebrebrhan Ogubazghi, professor at the Eritrea Institute of Technology, and from Yemen with the help of Jamal Sholan of the National Yemen Seismological Observatory Center. The map he drew of when and where earthquakes happened in the region fit tremendously well with the more detailed analyses Ebinger has conducted in more recent years.

Ayele’s reconstruction of events showed that the rift did not open in a series of small earthquakes over an extended period of time, but tore open along its entire 35-mile length in just days. A volcano called Dabbahu at the northern end of the rift erupted first, then magma pushed up through the middle of the rift area and began “unzipping” the rift in both directions, says Ebinger.

Since the 2005 event, Ebinger and her colleagues have installed seismometers and measured 12 similar — though dramatically less intense — events.

“We know that seafloor ridges are created by a similar intrusion of magma into a rift, but we never knew that a huge length of the ridge could break open at once like this,” says Ebinger. She explains that since the areas where the seafloor is spreading are almost always situated under miles of ocean, it’s nearly impossible to monitor more than a small section of the ridge at once, so there’s no way for geologists to know how much of the ridge may break open and spread at any one time. “Seafloor ridges are made up of sections, each of which can be hundreds of miles long. Because of this study, we now know that each one of those segments can tear open in just a few days.”

Ebinger and her colleagues are continuing to monitor the area in Ethiopia to learn more about how the magma system beneath the rift evolves as the rift continues to grow.

Additional authors of the study include Derek Keir, Tim Wright, and Graham Stuart, professors of earth and environment at the University of Leeds, U.K.; Roger Buck, professor at the Earth Institute at Columbia University, N.Y.; and Eric Jacques, professor at the Institut de Physique du Globe de Paris, France.



Opening Up A Colorful Cosmic Jewel Box


The FORS1 instrument on the ESO Very Large Telescope (VLT) at ESO’s Paranal Observatory was used to take this exquisitely sharp close up view of the colorful Jewel Box cluster, NGC 4755. The telescope’s huge mirror allowed very short exposure times: just 2.6 seconds through a blue filter (B), 1.3 seconds through a yellow/green filter (V) and 1.3 seconds through a red filter (R). The field of view spans about seven arcminutes.

Star clusters are among the most visually alluring and astrophysically fascinating objects in the sky. One of the most spectacular nestles deep in the southern skies near the Southern Cross in the constellation of Crux.

The Kappa Crucis Cluster, also known as NGC 4755 or simply the “Jewel Box” is just bright enough to be seen with the unaided eye. It was given its nickname by the English astronomer John Herschel in the 1830s because the striking colour contrasts of its pale blue and orange stars seen through a telescope reminded Herschel of a piece of exotic jewellery.

Open clusters [1] such as NGC 4755 typically contain anything from a few to thousands of stars that are loosely bound together by gravity. Because the stars all formed together from the same cloud of gas and dust their ages and chemical makeup are similar, which makes them ideal laboratories for studying how stars evolve.

The position of the cluster amongst the rich star fields and dust clouds of the southern Milky Way is shown in the very wide field view generated from the Digitized Sky Survey 2 data. This image also includes one of the stars of the Southern Cross as well as part of the huge dark cloud of the Coal Sack [2].

A new image taken with the Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile shows the cluster and its rich surroundings in all their multicoloured glory. The large field of view of the WFI shows a vast number of stars. Many are located behind the dusty clouds of the Milky Way and therefore appear red [3].

The FORS1 instrument on the ESO Very Large Telescope (VLT) allows a much closer look at the cluster itself. The telescope’s huge mirror and exquisite image quality have resulted in a brand-new, very sharp view despite a total exposure time of just 5 seconds. This new image is one of the best ever taken of this cluster from the ground.

The Jewel Box may be visually colourful in images taken on Earth, but observing from space allows the NASA/ESA Hubble Space Telescope to capture light at shorter wavelengths than can be seen by telescopes on the ground. This new Hubble image of the core of the cluster represents the first comprehensive far ultraviolet to near-infrared image of an open galactic cluster. It was created from images taken through seven filters, allowing viewers to see details never seen before. It was taken near the end of the long life of the Wide Field Planetary Camera 2 ― Hubble’s workhorse camera up until the recent Servicing Mission, when it was removed and brought back to Earth. Several very bright, pale blue supergiant stars, a solitary ruby-red supergiant and a variety of other brilliantly coloured stars are visible in the Hubble image, as well as many much fainter ones. The intriguing colours of many of the stars result from their differing intensities at different ultraviolet wavelengths.

The huge variety in brightness of the stars in the cluster exists because the brighter stars are 15 to 20 times the mass of the Sun, while the dimmest stars in the Hubble image are less than half the mass of the Sun. More massive stars shine much more brilliantly. They also age faster and make the transition to giant stars much more quickly than their faint, less-massive siblings.
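The link between mass, brightness and lifespan can be illustrated with the standard main-sequence scaling L ∝ M^3.5 (a textbook approximation, not a figure from the article): luminosity grows much faster than mass, so the heaviest stars burn through their fuel far sooner.

```python
# Illustrative main-sequence scaling: luminosity ~ mass^3.5 (an approximation),
# and lifetime ~ fuel / burn rate ~ mass / luminosity, both in solar units.
def luminosity(m):
    """Approximate luminosity in solar units for a star of m solar masses."""
    return m ** 3.5

def lifetime(m):
    """Approximate main-sequence lifetime relative to the Sun's."""
    return m / luminosity(m)

for m in (20, 1, 0.5):
    print(f"M = {m:>4} Msun: L ~ {luminosity(m):10.1f} Lsun, "
          f"lifetime ~ {lifetime(m):.4f} x Sun's")
```

A 20-solar-mass star comes out tens of thousands of times more luminous than the Sun, with a lifetime thousands of times shorter, which is why the cluster's brightest members evolve into giants first.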

The Jewel Box cluster is about 6400 light-years away and is approximately 16 million years old.


[1] Open, or galactic, star clusters are not to be confused with globular clusters ― huge balls of tens of thousands of ancient stars in orbit around our galaxy and others. It seems that most stars, including our Sun, formed in open clusters.

[2] The Coal Sack is a dark nebula in the Southern Hemisphere, near the Southern Cross, that can be seen with the unaided eye. A dark nebula is not the complete absence of light, but an interstellar cloud of thick dust that obscures most background light in the visible.

[3] If the light from a distant star passes through dust clouds in space the blue light is scattered and absorbed more than the red. As a result the starlight looks redder when it arrives on Earth. The same effect creates the glorious red colours of terrestrial sunsets.


Full article and photo:

Gamma-ray Photon Race Ends In Dead Heat; Einstein Wins This Round


In this illustration, one photon (purple) carries a million times the energy of another (yellow). Some theorists predict travel delays for higher-energy photons, which interact more strongly with the proposed frothy nature of space-time. Yet Fermi data on two photons from a gamma-ray burst fail to show this effect, eliminating some approaches to a new theory of gravity.

Racing across the universe for the last 7.3 billion years, two gamma-ray photons arrived at NASA’s orbiting Fermi Gamma-ray Space Telescope within nine-tenths of a second of one another. The dead-heat finish may stoke the fires of debate among physicists over Einstein’s special theory of relativity, because one of the photons possessed a million times more energy than the other.

For Einstein’s theory, that’s no problem. In his vision of the structure of space and time, unified as space-time, all forms of electromagnetic radiation — gamma rays, radio waves, infrared, visible light and X-rays — are reckoned to travel through the vacuum of space at the same speed, no matter how energetic. But in some of the new theories of gravity, space-time is considered to have a “shifting, frothy structure” when viewed at a scale trillions of times smaller than an electron. Some of those models predict that such a foamy texture ought to slow down the higher-energy gamma-ray photon relative to the lower energy one. Clearly, it did not.

Even in the world of high-energy particle physics, where a minute deviation can sometimes make a massive difference, nine-tenths of a second spread over more than 7 billion years is so small that the difference is likely due to the detailed processes of the gamma-ray burst rather than confirming any modification of Einstein’s ideas.

“This measurement eliminates any approach to a new theory of gravity that predicts a strong energy-dependent change in the speed of light,” said Peter Michelson, professor of physics at Stanford University and principal investigator for Fermi’s Large Area Telescope (LAT), which detected the gamma-ray photons on May 10. “To one part in 100 million billion, these two photons traveled at the same speed. Einstein still rules.”
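Michelson’s figure is easy to sanity-check with back-of-the-envelope arithmetic; the sketch below uses only the numbers quoted in the article (a 0.9-second spread over a 7.3-billion-year journey):

```python
# Back-of-envelope check of the fractional speed difference implied by
# a 0.9-second arrival spread over a 7.3-billion-year journey.
SECONDS_PER_YEAR = 365.25 * 24 * 3600       # ~3.156e7 s
travel_time = 7.3e9 * SECONDS_PER_YEAR      # ~2.3e17 s
spread = 0.9                                # s
fraction = spread / travel_time             # fractional difference in speed
print(f"travel time: {travel_time:.2e} s")
print(f"fractional difference: {fraction:.1e}")  # ~4e-18, below one part in 1e17
```

The result is a few parts in 10^18, consistent with the quoted bound of one part in 100 million billion (10^17).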

Michelson is one of the authors of a paper that details the research, published online Oct. 28 by Nature.

Physicists have yearned for years to develop a unifying theory of how the universe works. But no one has been able to come up with one that brings all four of the fundamental forces in the universe into one tent. The Standard Model of particle physics, which was well developed by the end of the 1970s, is considered to have succeeded in unifying three of the four: electromagnetism; the “strong force” (which holds nuclei together inside atoms); and the “weak force” (which is responsible for radioactive decay, among other things.) But in the Standard Model, gravity has always been the odd man out, never quite fitting in. Though a host of theories have been advanced, none has been shown successful.

But by the same token, Einstein’s theories of relativity also fail to unify the four forces.

“Physicists would like to replace Einstein’s vision of gravity — as expressed in his relativity theories — with something that handles all fundamental forces,” Michelson said. “There are many ideas, but few ways to test them.”

The two photons provided rare experimental evidence about the structure of space-time. Whether the evidence will prove sufficient to settle any debates remains to be seen.

The photons were launched on their pan-galactic marathon during a short gamma-ray burst, an outpouring of radiation likely generated by the collision of two neutron stars, the densest known objects in the universe.

A neutron star is created when a massive star collapses in on itself in an explosion called a supernova. The neutron star forms in the core as matter is compressed to the point where it is typically about 10 miles in diameter, yet contains more mass than our sun. When two such dense objects collide, the energy released in a gamma-ray burst can be millions of times brighter than the entire Milky Way, albeit only briefly. The burst (designated GRB 090510) that sent the two photons on their way lasted 2.1 seconds.

NASA’s Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership, developed in collaboration with the U.S. Department of Energy, along with important contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States.



Galileo’s Notebooks May Reveal Secrets Of New Planet

Galileo knew he had discovered a new planet in 1613, 234 years before its official discovery date, according to a new theory by a University of Melbourne physicist.

Professor David Jamieson, Head of the School of Physics, is investigating the notebooks of Galileo from 400 years ago and believes that buried in the notations is the evidence that he discovered a new planet that we now know as Neptune.

A hypothesis of how to look for this evidence has been published in the journal Australian Physics and was presented at the first lecture in the 2009 July Lectures in Physics program at the University of Melbourne in the beginning of July.

If correct, the discovery would be the first new planet identified by humanity since deep antiquity.

Galileo was observing the moons of Jupiter in the years 1612 and 1613 and recorded his observations in his notebooks. Over several nights he also recorded the position of a nearby star which does not appear in any modern star catalogue.

“It has been known for several decades that this unknown star was actually the planet Neptune. Computer simulations show the precision of his observations revealing that Neptune would have looked just like a faint star almost exactly where Galileo observed it,” Professor Jamieson says.

“But a planet is different to a star because planets orbit the Sun and move through the sky relative to the stars. It is remarkable that on the night of January 28 in 1613 Galileo noted that the ‘star’ we now know is the planet Neptune appeared to have moved relative to an actual nearby star.”

There is also a mysterious unlabeled black dot in his earlier observations of January 6, 1613, which is in the right position to be Neptune.

“I believe this dot could reveal he went back in his notes to record where he saw Neptune earlier when it was even closer to Jupiter but had not previously attracted his attention because of its unremarkable star-like appearance.”

If the mysterious black dot on January 6 was actually recorded on January 28, Professor Jamieson proposes, this would show that Galileo believed he might have discovered a new planet.

By using the expertise of trace element analysts from the University of Florence, who have previously analyzed inks in Galileo’s manuscripts, dating the unlabelled dot in his notebook may be possible. This analysis may be conducted in October this year.

“Galileo may indeed have formed the hypothesis that he had seen a new planet which had moved right across the field of view during his observations of Jupiter over the month of January 1613,” Professor Jamieson says.

“If this is correct Galileo observed Neptune 234 years before its official discovery.”

But there could be an even more interesting possibility still buried in Galileo’s notes and letters.

“Galileo was in the habit of sending a scrambled sentence, an anagram, to his colleagues to establish his priority for the sensational discoveries he made with his new telescope. He did this when he discovered the phases of Venus and the rings of Saturn. So perhaps somewhere he wrote an as-yet undecoded anagram that reveals he knew he discovered a new planet,” Professor Jamieson speculates.

Professor Jamieson presented at the first of a series of lectures in July, aimed at giving an insight into fundamental questions in physics to celebrate the 2009 International Year of Astronomy.



Geologists Point To Outer Space As Source Of The Earth’s Mineral Riches


New research suggests that the wealth of some minerals that lie in the rock beneath the Earth’s surface may be extraterrestrial in origin.

According to a new study by geologists at the University of Toronto and the University of Maryland, the wealth of some minerals that lie in the rock beneath the Earth’s surface may be extraterrestrial in origin.

“The extreme temperature at which the Earth’s core formed more than four billion years ago would have completely stripped any precious metals from the rocky crust and deposited them in the core,” says James Brenan of the Department of Geology at the University of Toronto and co-author of the study published in Nature Geoscience on October 18.

“So, the next question is why are there detectable, even mineable, concentrations of precious metals such as platinum and rhodium in the rock portion of the Earth today? Our results indicate that they could not have ended up there by any known internal process, and instead must have been added back, likely by a ‘rain’ of extraterrestrial debris, such as comets and meteorites.”

Geologists have long speculated that four and a half billion years ago, the Earth was a cold mass of rock mixed with iron metal which was melted by the heat generated from the impact of massive planet-sized objects, allowing the iron to separate from the rock and form the Earth’s core. Brenan and colleague William McDonough of the University of Maryland recreated the extreme pressure and temperature of this process, subjecting a similar mixture to temperatures above 2,000 degrees Celsius, and measured the composition of the resulting rock and iron.

Because the rock became void of the metal in the process, the scientists speculate that the same would have occurred when the Earth was formed, and that some sort of external source – such as a rain of extraterrestrial material – contributed to the presence of some precious metals in Earth’s outer rocky portion today.

“The notion of extraterrestrial rain may also explain another mystery, which is how the rock portion of the Earth came to have hydrogen, carbon and phosphorus – the essential components for life, which were likely lost during Earth’s violent beginning.”

The research was funded with the support of the Natural Sciences and Engineering Research Council of Canada and a NASA Cosmochemistry grant.



Color of Fabric Matters When Protecting Skin From Ultraviolet Rays

It takes more than sunscreen to keep the sun’s ultraviolet rays from harming your skin. The type of clothing you wear can offer protection, too — or not. Studies have shown that some lightweight fabrics do not provide enough UV protection.

But it is not just the type of fiber and the weave of the fabric that matters, but also the color. Ascención Riva of the Polytechnic University of Catalonia and colleagues have addressed the color issue, studying the effects of different dyes on the UV protection provided by lightweight woven cottons.

The researchers chose three fabrics, not dyed, with different initial levels of UV protection based on the weave and other factors. Then they dyed them in varying shades of blue, red and yellow and measured how much UV radiation was absorbed and transmitted.

They found that red and blue shades performed better than yellow, particularly in blocking UV-B rays, which are the most harmful. Protection increased as the shades were made darker and more intense. And if the initial protection level of the fabric was higher, the darker shades offered even greater improvement.

The researchers say the findings, reported in Industrial and Engineering Chemistry Research, should help fabric and garment manufacturers optimize their products for UV protection.

Henry Fountain, New York Times



Scientists announce planet bounty

Gliese 667C (ESO/L. Calçada)

Artist’s impression: Astronomers are finding smaller and smaller planets
Astronomers have announced a haul of planets found beyond our Solar System.

The 32 “exoplanets” ranged in size from five times the mass of Earth to 5-10 times the mass of Jupiter, the researchers said.

They were found using a very sensitive instrument on a 3.6m telescope at the European Southern Observatory’s La Silla facility in Chile.

The discovery is exciting because it suggests that low-mass planets could be numerous in our galaxy.

“From [our] results, we know now that at least 40% of solar-type stars have low-mass planets. This is really important because it means that low-mass planets are everywhere, basically,” explained Stephane Udry from Geneva University, Switzerland.

“What’s very interesting is that models are predicting them, and we are finding them; and furthermore the models are predicting even more lower-mass planets like the Earth.”

Size selection

The discovery now takes the number of known exoplanets – planets outside our Solar System – to more than 400.

These have been identified using a range of astronomical techniques and telescopes, but this latest group was spotted as a result of observations made with the Harps spectrometer at La Silla.

The High Accuracy Radial Velocity Planet Searcher instrument employs what is sometimes called the “wobble technique”.

This is an indirect method of detection that infers the existence of orbiting planets from the way their gravity makes a parent star appear to twitch in its motion across the sky.
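To give a sense of how small that twitch is, the sketch below estimates the radial-velocity wobble a low-mass planet induces on a Sun-like star. The planet mass and orbit are hypothetical values chosen for illustration, not figures from the Harps survey:

```python
import math

# Toy illustration of the "wobble" (radial-velocity) technique: a planet of
# mass m_planet in a circular orbit makes its star (mass m_star) orbit the
# common centre of mass, producing a periodic line-of-sight velocity shift.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
m_sun = 1.989e30         # kg
m_earth = 5.972e24       # kg
AU = 1.496e11            # metres

def rv_semi_amplitude(m_star, m_planet, a):
    """Star's radial-velocity semi-amplitude (m/s) for a circular, edge-on
    orbit of radius a (metres), assuming m_planet << m_star."""
    v_planet = math.sqrt(G * m_star / a)   # planet's orbital speed
    return v_planet * m_planet / m_star    # momentum balance about the barycentre

# A hypothetical 5-Earth-mass planet at 0.05 AU around a Sun-like star:
K = rv_semi_amplitude(m_sun, 5 * m_earth, 0.05 * AU)
print(f"stellar wobble ~ {K:.2f} m/s")     # of order a couple of m/s
```

A wobble of a few metres per second, roughly walking pace, is what an instrument like Harps must pick out of a star’s spectrum, which is why only the most stable spectrometers reach down to five-Earth-mass planets.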

Astronomy is working right at the limits of the current technology capable of detecting exoplanets and most of those found so far are Jupiter-scale and bigger.

Harps, however, has focussed its efforts on small, relatively cool stars – so-called M-class stars – in the hope of finding low-mass planets, ones most likely to resemble the rocky planets in our own Solar System.

Of the 28 known planets with masses below 20 Earth-masses, Harps has now identified 24 – and six of those are in the newly announced group.

“We have two candidates at five Earth-masses and two at six Earth-masses,” Professor Udry told BBC News.

Combined approach

Harps has previously identified an object which is only twice as massive as the Earth (announced in April).

Scientists are confident this planet harbours no life, though, because it orbits so close to its parent star that surface temperatures would be scorching.

In revealing the new collection of planets on Monday, the Harps team-members said they expected to confirm the existence of another batch, similar in number, during the coming six months.

The ultimate goal is to find a rocky planet in a star’s “habitable zone”, an orbit where temperatures are in a range that would support the presence of liquid water.

Scientists believe the introduction of newer, more sensitive technologies will allow them to identify such an object within just a few years from now.

The US space agency (Nasa) recently launched its Kepler telescope.

The telescope aims to find Earth-size planets by looking for the tiny dip in the light coming from a star as an orbiting object crosses its face as viewed from Earth.

To properly characterise a planet, different observing techniques are required. The Kepler “transit” method reveals the diameter of an object, but a Harps-like measurement is needed to resolve the mass.
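The payoff of combining the two techniques is bulk density, which separates rocky planets from gas giants. A minimal sketch, using textbook values for Earth and Jupiter rather than any Harps or Kepler target:

```python
import math

# The transit method yields a planet's radius, the radial-velocity method its
# mass; together they give the bulk density, which distinguishes rocky worlds
# from gas giants. Values below are illustrative textbook figures.
def bulk_density(mass_kg, radius_m):
    """Mean density in g/cm^3 from mass and radius."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume / 1000.0   # kg/m^3 -> g/cm^3

m_earth, r_earth = 5.972e24, 6.371e6
print(f"Earth-like: {bulk_density(m_earth, r_earth):.1f} g/cm^3")    # ~5.5
m_jup, r_jup = 1.898e27, 6.9911e7
print(f"Jupiter-like: {bulk_density(m_jup, r_jup):.1f} g/cm^3")      # ~1.3
```

A density near 5 g/cm^3 points to rock and iron; near 1 g/cm^3, to a gas envelope. Neither measurement alone can make that call.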



LHC gets colder than deep space

Atlas (Cern/C. Marcelloni)

The giant Atlas detector will search for hints of the elusive Higgs boson particle

The Large Hadron Collider (LHC) experiment has once again become one of the coldest places in the Universe.

All eight sectors of the LHC have now been cooled to their operating temperature of 1.9 kelvin (-271C; -456F) – colder than deep space.

The large magnets that bend particle beams around the LHC are kept at this frigid temperature using liquid helium.

The magnets are arranged end-to-end in a 27km-long circular tunnel straddling the Franco-Swiss border.

The cool-down is an important milestone ahead of the collider’s scheduled re-start in the latter half of November.

The LHC has been shut down since 19 September 2008, when a magnet problem called a “quench” caused a tonne of liquid helium to leak into the LHC tunnel.

After the accident, the particle accelerator had to be warmed up so that repairs could take place.

The most powerful physics experiment ever built, the Large Hadron Collider will recreate the conditions just after the Big Bang. It is operated by the European Organization for Nuclear Research (Cern), based in Geneva.

Two beams of protons will be fired down pipes running through the magnets. These beams will travel in opposite directions around the main “ring” at close to the speed of light.

At allotted points around the tunnel, the proton beams cross paths, smashing into one another with cataclysmic energy. Scientists hope to see new particles in the debris of these collisions, revealing fundamental new insights into the nature of the cosmos.

Awesome energy

The operating temperature of the LHC is just a shade above “absolute zero” (-273.15C) – the coldest temperature possible. By comparison, the temperature in remote regions of outer space is about 2.7 kelvin (-270C; -454F).

The LHC’s magnets are designed to be “superconducting”, which means they channel electric current with zero resistance and very little power loss. But to become superconducting, the magnets must be cooled to very low temperatures.

For this reason, the LHC is innervated by a complex system of cryogenic lines using liquid helium as the refrigerant of choice.

No particle physics facility on this scale has ever operated in such frigid conditions.

But before a beam can be circulated around the 27km-long LHC ring, engineers will have to thoroughly test the machine’s new quench protection system and continue with magnet powering tests.

Particle beams have already been brought “to the door” of the Large Hadron Collider. A low-intensity beam could be injected into the LHC in as little as a week.

This beam test would involve only parts of the collider, rather than the whole “ring”.

LHC tunnel (Cern/M.Brice)

The LHC’s tunnel runs for 27 km under the Franco-Swiss border

Officials now plan to circulate a beam around the LHC in the second half of November. Engineers will then aim to smash low-intensity beams together, giving scientists their first data.

The beams’ energy will then be increased so that the first high-energy collisions can take place. These will mark the real beginning of the LHC’s research programme.

Collisions at high energy have been scheduled to occur in December, but now look more likely to happen in January, according to Cern’s director of communications James Gillies.

Feeling the squeeze

Mr Gillies said this would involve delicate operation of the accelerator.

“Whilst you’re accelerating [the beams], you don’t have to worry too much about how wide the beams are. But when you want to collide them, you want the protons as closely squeezed together as possible.”

He added: “If you get it wrong you can lose beam particles – so it can take a while to perfect. Then you line up the beams to collide.

“In terms of the distances between the last control elements of the LHC and the collision point, it’s a bit like firing knitting needles from across the Atlantic and getting them to collide half way.”

Officials plan a brief hiatus over the Christmas and New Year break, when the lab will have to shut down.

Although managers had discussed working through this period, Mr Gillies said this would have been “too logistically complicated”.

The main determinant in the decision to close over winter was workers’ contracts, which would have needed to be renegotiated, he said.

An upgraded early warning system, or quench protection system, should prevent incidents of the kind which shut the collider last year, officials say.

This has involved installing hundreds of new detectors around the machine.

Cern has spent about 40m Swiss Francs (£24m) on repairs following the accident, including upgrades to the quench protection system.



Galactic Magnetic Fields May Control Boundaries Of Our Solar System

The first all-sky maps developed by NASA’s Interstellar Boundary Explorer (IBEX) spacecraft, the first mission to examine the global interactions occurring at the edge of the solar system, suggest that galactic magnetic fields have had a far greater impact on Earth’s history than previously conceived. The future of our planet and others may depend, in part, on how the galactic magnetic fields change with time.

“The IBEX results are truly remarkable, with emissions not resembling any of the current theories or models of this never-before-seen region,” says Dr. David J. McComas, IBEX principal investigator and assistant vice president of the Space Science and Engineering Division at Southwest Research Institute. “We expected to see small, gradual spatial variations at the interstellar boundary, some 10 billion miles away. However, IBEX is showing us a very narrow ribbon that is two to three times brighter than anything else in the sky.”

A “solar wind” of charged particles continuously travels at supersonic speeds away from the Sun in all directions. This solar wind inflates a giant bubble in interstellar space called the heliosphere — the region of space dominated by the Sun’s influence in which the Earth and other planets reside. As the solar wind travels outward, it sweeps up newly formed “pickup ions,” which arise from the ionization of neutral particles drifting in from interstellar space. IBEX measures energetic neutral atoms (ENAs) traveling at speeds of roughly half a million to two and a half million miles per hour. These ENAs are produced from the solar wind and pick-up ions in the boundary region between the heliosphere and the local interstellar medium.

The IBEX mission has just completed the first global maps of this protective region, the heliosphere, through a new technique that uses neutral atoms, much as a camera uses light, to image the interactions between electrically charged and neutral atoms at the distant reaches of our Sun’s influence, far beyond the most distant planets. It is here that the solar wind, which continually emanates from the Sun at millions of miles per hour, slams into the magnetized medium of charged particles, atoms and dust that pervades the galaxy and is diverted around the solar system. The interaction between the solar wind and the galactic medium creates a complex host of phenomena that has long fascinated scientists, and is thought to shield Earth and the rest of the solar system from the majority of harmful galactic radiation.

“The magnetic fields of our galaxy may change the protective layers of our solar system that regulate the entry of galactic radiation, which affects Earth and poses hazards to astronauts,” says Nathan Schwadron of Boston University’s Center for Space Physics and the lead for the IBEX Science Operations Center at BU.

Every six months, the IBEX mission, which was launched on October 18, 2008, completes a new global map of the heliosphere. The first IBEX maps are strikingly different from any of the predictions, forcing scientists to reconsider their basic assumptions of how the heliosphere is created.

“The most striking feature is the ribbon that appears to be controlled by the magnetic field of our galaxy,” says Schwadron.

Although scientists knew that their models would be tested by the IBEX measurements, the existence of the ribbon is “remarkable” says Geoffrey Crew, a Research Scientist at MIT and the Software Design Lead for IBEX. “It suggests that the galactic magnetic fields are much stronger and exert far greater stresses on the heliosphere than we previously believed.”

The discovery has scientists thinking carefully about how different the heliosphere could be from what they expected.

“It was really surprising that the models did not generate features at all like the ribbon we observed,” says Christina Prested, a BU graduate student working on IBEX. “Understanding the ribbon in detail will require new insights into the inner workings of the interactions at the edge of our Sun’s influence in the galaxy.”

Adds Schwadron: “Any changes to our understanding of the heliosphere will also affect how we understand the astrospheres that surround other stars. The harmful radiation that leaks into the solar system through the heliosphere is present throughout the galaxy, and the existence of astrospheres may be important for understanding the habitability of planets surrounding other stars.”

IBEX is the latest in NASA’s series of low-cost, rapidly developed Small Explorers space missions. Southwest Research Institute in San Antonio, Texas, leads and developed the mission with a team of national and international partners. NASA’s Goddard Space Flight Center in Greenbelt, Md., manages the Explorers Program for NASA’s Science Mission Directorate in Washington.



‘Magnetricity’ Observed And Measured For First Time

The magnetic equivalent of electricity in a ‘spin ice’ material: atom-sized north and south poles in spin ice drift in opposite directions when a magnetic field is applied.

A magnetic charge can behave and interact just like an electric charge in some materials, according to new research led by the London Centre for Nanotechnology (LCN).

The findings could lead to a reassessment of current magnetism theories, as well as significant technological advances.

The research, published in Nature, proves the existence of atom-sized ‘magnetic charges’ that behave and interact just like more familiar electric charges. It also demonstrates a perfect symmetry between electricity and magnetism – a phenomenon dubbed ‘magnetricity’ by the authors from the LCN and the Science and Technology Facility Council’s ISIS Neutron and Muon Source.

In order to prove experimentally the existence of magnetic current for the first time, the team mapped Onsager’s 1934 theory of the movement of ions in water onto magnetic currents in a material called spin ice. They then tested the theory by applying a magnetic field to a spin ice sample at a very low temperature and observing the process using muons at ISIS.

The experiment allowed the team to detect magnetic charges in the spin ice (Dy2Ti2O7), to measure their currents, and to determine the elementary unit of the magnetic charge in the material. The monopoles they observed arise as disturbances of the magnetic state of the spin ice, and can exist only inside the material.

Professor Steve Bramwell, LCN co-author of the paper, said: “Magnetic monopoles were first predicted to exist in 1931, but despite many searches, they have never yet been observed as freely roaming elementary particles. These monopoles do at least exist within the spin ice sample, but not outside.

“It is not often in the field of physics you get the chance to ask ‘How do you measure something?’ and then go on to prove a theory unequivocally. This is a very important step to establish that magnetic charge can flow like electric charge. It is in the early stages, but who knows what the applications of magnetricity could be in 100 years’ time.”

Professor Keith Mason, Chief Executive of STFC said: “The unequivocal proof that magnetic charge is conducted in spin ice adds significantly to our understanding of electromagnetism. Whilst we will have to wait to see what applications magnetricity will find in technology, this research shows that curiosity driven research will always have the potential to make an impact on the way we live and work. Advanced materials research depends greatly on having access to central research labs like ISIS allowing the UK science community to flourish and make exciting discoveries like this.”

Dr Sean Giblin, instrument scientist at ISIS and co-author of the paper, added: “The results were astounding, using muons at ISIS we are finally able to confirm that magnetic charge really is conducted through certain materials at certain temperatures – just like the way ions conduct electricity in water.”


Looking beyond

Through-the-wall vision

A cheap way of using small radios to see inside buildings

SUPERMAN had X-ray vision, which was useful for looking through walls when rescuing heroines and collaring villains. But beyond Hollywood, the best that engineers have been able to come up with to see inside buildings are devices that use radar. Some are portable enough to be placed against an outside wall by, say, a police unit planning a raid—and sophisticated enough to show, with reasonable accuracy, the location of anyone inside. But the best models cost more than $100,000, so they are not widely deployed. Now a team led by Neal Patwari and Joey Wilson of the University of Utah has come up with a way to peer through the walls of a building using a network of little radios that cost only a few dollars each.

Radar works by recording radio waves that have been reflected from the object under observation. Dr Patwari’s and Mr Wilson’s insight was to look not for reflections but for shadows. Their device broadcasts a radio signal through a building and, when that signal comes out the other side, monitors variations in its strength. The need for variation means the system cannot see things that are stationary. When the signal is temporarily blocked by a moving object such as a person, however, it shows up loud and clear.

Using a network of small transmitters and receivers, the researchers have found it is possible to plot a person’s position quite accurately and display it on the screen of a laptop. They call the process radio tomographic imaging, because constructing an image by measuring the strengths of radio signals along several pathways is similar to the computerised tomographic body-scanning used by hospitals—though medical machines employ X-rays, not radio waves, to do the scanning.
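The idea can be sketched in a few lines of code. The toy below is not the Utah team’s software: it places hypothetical ZigBee-style radios around a square room, marks each link as shadowed when a person stands within a fixed width of its straight-line path, and back-projects the measured losses onto a pixel grid, so the pixel crossed by the most shadowed links marks the person’s position:

```python
import itertools, math

GRID = 10                      # room discretised into GRID x GRID pixels
# Radios spaced along all four walls of the room.
radios = [(x, y) for x in (0, GRID) for y in range(0, GRID + 1, 2)] + \
         [(x, y) for y in (0, GRID) for x in range(2, GRID - 1, 2)]
links = list(itertools.combinations(radios, 2))

def near_link(px, py, a, b, width=0.7):
    """True if point (px, py) lies within `width` of segment a-b."""
    (ax, ay), (bx, by) = a, b
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy)) <= width

person = (3.5, 6.5)            # true position, to be recovered

# Simulated measurement: a link shows 1 dB of extra loss if the person blocks it.
shadowed = {lk: 1.0 if near_link(*person, *lk) else 0.0 for lk in links}

# Back-projection: spread each link's measured loss over the pixels it crosses.
image = [[0.0] * GRID for _ in range(GRID)]
for lk, loss in shadowed.items():
    if loss == 0.0:
        continue
    for i in range(GRID):
        for j in range(GRID):
            if near_link(i + 0.5, j + 0.5, *lk):
                image[i][j] += loss

best = max(((i, j) for i in range(GRID) for j in range(GRID)),
           key=lambda p: image[p[0]][p[1]])
print(f"estimated pixel: {best}, true position: {person}")
```

Real radio tomography replaces the crude back-projection with a regularised least-squares inversion and works from noisy signal-strength changes rather than clean 1 dB shadows, but the geometry, a dense web of crossing links, is the same.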

The radios used by Dr Patwari and Mr Wilson are low-cost types designed for use in what are known as ZigBee networks. In that application they transmit data between devices such as thermostats, fire detectors and some automated factory equipment. They are not even as powerful as the radios used in Wi-Fi networks to link computers together.

Small and inexpensive as these ZigBee radios are, though, there is strength in their numbers. Each is in contact with all of the others. A building under examination is thus penetrated by a dense web of links. In one experiment, for example, a network of 34 radios was able to keep track of Mr Wilson’s position with an accuracy of less than a metre—a figure that Dr Patwari and Mr Wilson think could be improved greatly by using specially designed radios instead of off-the-shelf ones. Moreover, putting radios on the roof of a building as well as around its walls should make it possible to produce three-dimensional views of what is going on inside.

The ability to “see” people moving around in a building with such a cheap system has many plausible applications, and Mr Wilson has set up a company called Xandem to commercialise the idea. Besides military, police and private-security uses, radio networks might be employed to locate people trapped by fire or earthquake. More commercially, they might be used to measure what retailers call “footfall”—recording how people use stores and shopping centres. At the moment, this is done with cameras, or by triangulating the position of signals given off by mobile phones that customers are carrying. Radio tomography could be simpler, more accurate and, some might feel, less intrusive. Certainly less so than a man in tights with X-ray eyes.



Welcome to the world of sci-fi science

Large Hadron Collider and the Time Machine
One of these devices may actually send things through time
Teleportation, time travel, antimatter and wireless electricity. It all sounds far-fetched, more fiction than fact, but it’s all true.

Everybody is used to science fiction featuring science that seems, well, not very scientific.

But you might be surprised at the way some things that seem fantastical have a solid grounding in actual science.


Actual Tardis-style time travel won’t be materialising any time soon

The theory: Build a machine that lets you change the past or visit the future.

The science fiction: The Time Machine by HG Wells, where the Time Traveller visits humanity’s far future and doesn’t like what he finds.

In practice: Einstein’s relativity allows time travel in extreme circumstances. Some interpretations of quantum mechanics permit particles to travel backwards in time. Two physicists recently suggested that the Large Hadron Collider may have malfunctioned because a Higgs boson particle, travelling back in time from a future experiment, wrecked the machine.

The layman’s explanation: Time travel seems paradoxical – what happens if you go back in time and kill your own grandfather? But current physical theories do not forbid it.

In relativity, particles can travel along “closed timelike curves”, going round a time-loop from past to present to future and back to the same past. One theoretical method uses a wormhole, which is a black hole linked to its time-reversal by a tube. If you pull the black hole around near the speed of light, you get a time machine. However, you need a special kind of matter to keep the wormhole open, and we don’t yet have any.

Quantum mechanics involves a fundamental symmetry in nature. If you swap positive and negative charges, reflect the universe in a mirror, and reverse the flow of time, then the laws of physics don’t change. So a Higgs boson travelling backwards in time is the same as an anti-Higgs travelling forwards.

Coming to a shop near you?: In about 15,000 years at this rate, assuming new laws of physics don’t rule it out.


Electricity travels between two Tesla coils
This wireless electricity would probably not charge your mobile.

The theory: Plug your gadgets into the mains without using a cable.

The science fiction: Isaac Asimov’s 1941 story Reason is about a solar power station run by robots that transmits energy to Earth.

In practice: Electricity and magnetism are “fields” in space, and can be converted into each other. Electromagnetic radiation is a wave, and can travel from one place to another. So in principle wireless transmission of electrical power should be a doddle. Edison thought about it in 1875.

The layman’s explanation: If you move a magnet, it creates an electrical field. If you move an electrical field, it creates a magnetic one. The two are different aspects of one basic force of nature – electromagnetism. In particular, electrical currents can be transported from one gadget to another over a distance, a process called induction. Electrical generators and motors use this.

Microwaves, which are effectively light with a very long wavelength, are a practical way to transport electrical power. It is also possible to turn electricity into light, using a laser, and then reverse the process at the other end.

Coming to a shop near you?: In 1975, an American team showed that it’s possible to transmit tens of kilowatts of power using microwaves. A few months ago a Japanese consortium announced a plan to build a $21bn facility in space to beam solar power to Earth – within 30 years it could supply 300,000 homes with electricity. This year, at the TED conference in Oxford, the company WiTricity demonstrated a wireless power system that can recharge mobile phones and TV sets. In Tesco in time for Christmas.


Three characters
These three thought of antimatter as we think of super unleaded

The theory: There is a special kind of matter which explodes violently on contact with ordinary matter, producing more energy than a hydrogen bomb.

The science fiction: Star Trek uses antimatter to power its warp drives.

In practice: Paul Dirac should have predicted antimatter using quantum mechanics in 1928 but he fluffed it. Carl Anderson spotted the first antiparticle, the positron, in 1932. In 1995, the CERN particle accelerator facility in Geneva created atoms of antihydrogen.

The layman’s explanation: Matter is made of extremely tiny particles, which have various masses, electric charges, spins, and so on. Associated with each particle is an antiparticle with the same mass but opposite charge. If the two collide, they annihilate in a burst of energy. A small mass produces a lot of energy thanks to Einstein’s famous equation – energy = mass times the square of the speed of light. The Big Bang somehow produced a billion and one particles of matter for every billion particles of antimatter. No one really knows why, but if it hadn’t, we wouldn’t be here because there wouldn’t be a here for us to inhabit.
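The arithmetic behind that burst of energy is easy to reproduce. As an illustration (the gram of antimatter is an invented figure, not one from the article), annihilating one gram of antimatter with one gram of ordinary matter yields:

```python
# Energy released when 1 g of antimatter annihilates with 1 g of matter,
# using E = m c^2. Both the matter and the antimatter are converted,
# so m is the combined 2 g.
c = 299_792_458.0          # speed of light, m/s
m = 2e-3                   # total mass annihilated, kg
E = m * c**2               # energy in joules
kilotons = E / 4.184e12    # 1 kiloton of TNT is about 4.184e12 J
print(f"{E:.3e} J, about {kilotons:.0f} kilotons of TNT")
```

That is roughly three Hiroshima-sized explosions from something lighter than a paper clip.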

You might think of antimatter as a compact source of almost unbounded energy. Put some in a magnetic bottle – a magnetic field that confines the antimatter in a cavity so that it doesn’t touch any normal matter, the only known way to contain it – and then release it very slowly, allowing it to react with normal matter. It would make fusion power seem like a car battery.

Coming to a store near you?: Positrons are very ho-hum. They occur in radioactive decay and are used routinely in medical PET (Positron Emission Tomography) scanners. Beyond those, it gets hard. Expect a few thousand atoms of antihydrogen within the next 50 years, costing the GNP of a small country, and an atom or two of heavier elements. Mass production looks like a long shot.


The theory: Going from here to somewhere else without passing through anywhere in between.

The science fiction: Beam me up, Scotty.

In practice: Take two particles of light and entangle them – now you can teleport quantum information – such as what their spin is – from one to the other, instantaneously.

The layman’s explanation: Photons, particles of light, have a property called “spin”. This can be up, down, or a mixture of the two. Alice has a photon, and she wants Bob to have one with the same spin. She can’t send him hers because the Post Office is on strike, and she can’t measure her spin and phone him, because the measurement can change the spin.

Fortunately, the last time she met Bob she gave him one photon from an entangled pair, and kept the other. “Entangled” means that the two photons were prepared so that their states were related in a special way. Alice lets her photon interact with her other photon from the entangled pair. This instantly teleports information about the spin to Bob’s half. However, he can’t “read” that information until a message arrives by more conventional means. A quick call on Alice’s mobile, telling him some measurements she has made, now puts his entangled photon into the desired state.
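Alice and Bob's procedure can be checked with a short state-vector simulation. The code below is a sketch of the textbook protocol, not any laboratory's actual software; the state being sent and the random seed are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit gates.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def on(q, U):
    """Lift a single-qubit gate U onto qubit q of three (q0 most significant)."""
    ops = [I2, I2, I2]
    ops[q] = U
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

# CNOT with control q0 and target q1, acting on basis states |q0 q1 q2>.
CNOT01 = np.zeros((8, 8))
for b in range(8):
    q0, q1, q2 = (b >> 2) & 1, (b >> 1) & 1, b & 1
    CNOT01[(q0 << 2) | ((q1 ^ q0) << 1) | q2, b] = 1

# The spin state Alice wants Bob to have (a random one, for the demo).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Qubits 1 (Alice) and 2 (Bob) share the entangled pair (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice lets her two photons interact (CNOT then H), then measures both.
state = on(0, H) @ (CNOT01 @ state)
amps = state.reshape(2, 2, 2)                 # amps[q0, q1, q2]
probs = np.sum(np.abs(amps) ** 2, axis=2)     # joint P(m0, m1)
m0, m1 = divmod(int(rng.choice(4, p=probs.ravel())), 2)

# Bob's photon collapses; Alice phones him the two measured bits.
bob = amps[m0, m1]
bob /= np.linalg.norm(bob)
if m1:
    bob = X @ bob                             # correction for bit m1
if m0:
    bob = Z @ bob                             # correction for bit m0

# Bob's photon now carries Alice's original state (up to a global phase).
fidelity = abs(np.vdot(psi, bob))
print(f"fidelity = {fidelity:.6f}")
```

Whatever pair of bits Alice's measurement produces, the two corrections restore her original state on Bob's side, which is exactly why the phone call is indispensable.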

Quantum “teleportation” destroys the original state and can’t be used to send messages faster than light. It doesn’t actually teleport matter – just quantum information.

Coming to a store near you?: In 1998, the quantum optics group at Caltech used “squeezed light” to teleport the state of a photon in a laboratory. It’s now been done with atoms, too. In 2004 Austrian physicists teleported the state of a photon across the Danube river. Within another century it will be an amoeba. But be warned: when you are teleported, your body will be ripped to shreds and rebuilt at the other end.

Ian Stewart’s latest book is Professor Stewart’s Hoard of Mathematical Treasures, published by Profile.



French Investigate Scientist in Formal Terrorism Inquiry

A French court placed a physicist working at CERN, the high-energy research laboratory in Switzerland, under formal investigation on Monday for suspected “conspiracy with a terrorist enterprise.”

Although the physicist’s name had not been officially released by the French police, an official with direct knowledge of the investigation identified him as Adlène Hicheur, a French particle physicist born in Algeria. The official spoke on condition of anonymity.

Dr. Hicheur, 32, and a younger brother were arrested on Thursday in his home in Vienne, France, on suspicion of having contacts with a member of Al Qaeda in the Islamic Maghreb, a Sunni extremist group based in Algeria that has affiliated itself with Osama bin Laden’s terrorist network. The brother has been released.

Dr. Hicheur has not been charged with a crime, and the French authorities have not said what evidence they have in the case. A person informed of the investigation said that some incriminating information was in the form of e-mail messages and other communications obtained at the time of Dr. Hicheur’s arrest.

Under French law, a person in a terrorism case can be held under “provisional detention” with no time limit. In France, being placed under formal investigation does not necessarily lead to a trial and does not imply guilt.

In an interview with the journal Nature, published online on Tuesday, a brother of Dr. Hicheur said the accusations against his brother were “completely false.” The brother, Halim, said that his family traded e-mail messages with people in Algeria, but denied any contacts with Al Qaeda. According to news reports, Dr. Hicheur was born in Setif, Algeria, and is one of six children.

Dr. Hicheur is part of a 49-member team from the Laboratory for High Energy Physics at the École Polytechnique Fédérale de Lausanne that is working on one experiment at CERN’s Large Hadron Collider, as part of a 700-member international group.

The collider was built to accelerate protons to seven trillion electron volts of energy and then bang them together in search of forces and particles that existed in the early moments of the Big Bang.

The experiment the Lausanne team works on, called LHCb, is aimed at clarifying any difference between matter and its opposite, antimatter, and in that way explaining why the universe is made of the former and not the latter.

A spokesman for the technical school in Lausanne characterized Dr. Hicheur’s colleagues as being “extremely surprised and in emotional shock” at the possibility that he was a suspect. Dr. Hicheur spent most of his time at his office at CERN, the spokesman said, returning to Lausanne only once a week to teach a class — exactly what class, he said he was not allowed to say.

Dr. Hicheur has been working on various aspects of the antimatter problem for his entire career. A paper presented last year in La Thuile, Italy, was about so-called new physics that could emerge from the LHCb collaboration’s gigantic detector, one of four spaced around the collider tunnel underneath the Swiss-French border near Geneva.

Dr. Hicheur was awarded his Ph.D. in 2003 from the University of Savoie in Annecy, France, for work on aspects of the antimatter problem involving rare decays of the subatomic particles called B mesons. The research was done at the Stanford Linear Accelerator Center in California, where he worked for several months in 2002 as part of the BaBar collaboration, said Rob Brown, a spokesman for the Stanford lab.

According to archival physics Web sites, Dr. Hicheur is listed as an author on more than a hundred physics papers, most with the BaBar team. According to British press reports Dr. Hicheur also once worked at the Rutherford Appleton Laboratory at Chilton, in Oxfordshire, England.

As a member of the LHCb team, Dr. Hicheur had an office and an e-mail address at the CERN complex outside Geneva, but according to James Gillies, head of CERN’s press office, he did not have access to the tunnel.

Asked if radiation from the proton beams could be used to create radioactive materials for a dirty bomb, Dr. Gillies said it was unlikely. The isotopes produced would be too short-lived to be of use to terrorists, he said, or would be produced in quantities too small for a weapon.

“If someone were to try and introduce something into the tunnel, it would be impossible to expose it directly to the beam, so the flux of particles hitting it would be low,” he said. “There is no conceivable way to produce harmful radioactive materials that could be of interest to terrorists.”

In principle, antimatter could be used to make a powerful bomb, because particles and their antiparticles annihilate each other into pure energy on contact. This was the premise of the recent movie and book by Dan Brown, “Angels and Demons,” as well as a propulsion scheme in “Star Trek.”

CERN has in fact produced antimatter, and even anti-atoms in the quest to understand antimatter, but the lab produces so little, according to a calculation on the CERN Web site, that it would take two billion years to make enough for a bomb.

Dennis Overbye, New York Times



Physicists Measure Elusive ‘Persistent Current’ That Flows Forever

Harris and his team made the first definitive measurement of an electric current that flows continuously in tiny, but ordinary, metal rings.

Physicists at Yale University have made the first definitive measurements of “persistent current,” a small but perpetual electric current that flows naturally through tiny rings of metal wire even without an external power source.

The team used nanoscale cantilevers, an entirely novel approach, to indirectly measure the current through changes in the magnetic force it produces as it flows through the ring. “They’re essentially little floppy diving boards with the rings sitting on top,” said team leader Jack Harris, associate professor of physics and applied physics at Yale. The findings appear in the October 9 issue of Science.

The counterintuitive current is the result of a quantum mechanical effect that influences how electrons travel through metals, and arises from the same kind of motion that allows the electrons inside an atom to orbit the nucleus forever. “These are ordinary, non-superconducting metal rings, which we typically think of as resistors,” Harris said. “Yet these currents will flow forever, even in the absence of an applied voltage.”

Although persistent current was first theorized decades ago, it is so faint and sensitive to its environment that physicists were unable to accurately measure it until now. It is not possible to measure the current with a traditional ammeter because it only flows within the tiny metal rings, which are about the same size as the wires used on computer chips.

Past experiments tried to indirectly measure persistent current via the magnetic field it produces (any current passing through a metal wire produces a magnetic field). They used extremely sensitive magnetometers known as superconducting quantum interference devices, or SQUIDs, but the results were inconsistent and even contradictory.

“SQUIDs had long been established as the tool used to measure extremely weak magnetic fields. It was extremely optimistic for us to think that a mechanical device could be more sensitive than a SQUID,” Harris said.

The team used the cantilevers to detect changes in the magnetic field produced by the current as it changed direction in the aluminum rings. This new experimental setup allowed the team to make measurements a full order of magnitude more precise than any previous attempts. They also measured the persistent current over a wider range of temperature, ring size and magnetic field than ever before.

“These measurements could tell us something about how electrons behave in metals,” Harris said, adding that the findings could lead to a better understanding of how qubits, used in quantum computing, are affected by their environment, as well as which metals could potentially be used as superconductors.



Research in a Vacuum: DARPA Tries to Tap Elusive Casimir Effect for Breakthrough Technology

DARPA mainly hopes that research on this quantum quirk can produce futuristic microdevices


A FEW GOOD MEMS Harnessing the Casimir effect (which takes place between the two metal plates in the above diagram) could help researchers build tiny machines, such as microelectromechanical systems (MEMS), that today are hindered by surface interactions that can make nanomaterials sticky to the point of permanent adhesion.

Named for a Dutch physicist, the Casimir effect governs interactions of matter with the energy that is present in a vacuum. Success in harnessing this force could someday help researchers develop low-friction ballistics and even levitating objects that defy gravity. For now, the U.S. Defense Department’s Defense Advanced Research Projects Agency has launched a two-year, $10-million project encouraging scientists to work on ways to manipulate this quirk of quantum electrodynamics.

Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested, in work done in the 1940s with fellow Dutch physicist Dirk Polder, that two metal plates held apart in a vacuum could trap the waves, creating vacuum energy that, depending on the situation, could attract or repel the plates. As the boundaries of a region of vacuum move, the variation in vacuum energy (also called zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, Vrije University Amsterdam and elsewhere has proved Casimir correct—and given some experimental underpinning to DARPA’s request for research proposals.

Investigators from five institutions—Harvard, Yale University, the University of California, Riverside, and two national labs, Argonne and Los Alamos—received funding. DARPA will assess the groups’ progress in early 2011 to see if any practical applications might emerge from the research. “If the program delivers, there’s a good chance for a follow-on program to apply” the research, says Thomas Kenny, the DARPA physicist in charge of the initiative.

Program documents on the DARPA Web site state the goal of the Casimir Effect Enhancement program “is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force. One could leverage this ability to control phenomena such as adhesion in nanodevices, drag on vehicles, and many other interactions of interest to the [Defense Department].”

Nanoscale design is the most likely place to start and is also the arena where levitation could emerge. Materials scientists working to build tiny machines called microelectromechanical systems (MEMS) struggle with surface interactions, called van der Waals forces, that can make nanomaterials sticky to the point of permanent adhesion, a phenomenon known as “stiction”. To defeat stiction, many MEMS devices are coated with Teflon or similar low-friction substances or are studded with tiny springs that keep the surfaces apart. Materials that did not require such fixes could make nanotechnology more reliable. Such materials could skirt another problem posed by adhesion: Because surface stickiness at the nanoscale is much greater than it is for larger objects, MEMS designers resort to making their devices relatively stiff. That reduces adhesion (stiff structures do not readily bend against each other), but it reduces flexibility and increases power demands.

Under certain conditions, manipulating the Casimir effect could create repellent forces between nanoscale surfaces. Hong Tang and his colleagues at Yale School of Engineering & Applied Science sold DARPA on their proposal to assess Casimir forces between minuscule silicon crystals, like those that make up computer chips. “Then we’re going to engineer the structure of the surface of the silicon device to get some unusual Casimir forces to produce repulsion,” he says. In theory, he adds, that could mean building a device capable of levitation.

Such claims emit a strong scent of fantasy, but researchers say incremental successes could open the door to significant breakthroughs in key areas of nanotechnology, and perhaps larger structures. “What I can contribute is to understand the role of the Casimir force in real working devices, such as microwave switches, MEMS oscillators and gyroscopes, that normally are made of silicon crystals, not perfect metals,” Tang says.

The request for proposals closed in September. The project received “a lot of interest,” Kenny says. “I was surprised at the creativity of the proposals, and at the practicality,” he adds, although he declined to reveal how many teams submitted proposals. “It wasn’t pure theory. There were real designs that looked buildable, and the physics looked well understood.”

Still, the Casimir project was a “hard sell” for DARPA administrators, Kenny acknowledges. “It’s very fundamental, very risky, and even speculative on the physics side,” he says. “Convincing the agency management that the timing was right was difficult, especially given the number of programs that must compete for money within the agency.”

DARPA managers certainly would be satisfied if the Casimir project produced anything tangible, because earlier attempts had failed. Between 1996 and 2003, for example, NASA had a program to explore what it called Breakthrough Propulsion Physics to build spacecraft capable of traveling at speeds faster than light (299,792 kilometers per second). One way to do that is by harnessing the Casimir force in a vacuum and using the energy to power a propulsion system. The program closed with this epitaph on its Web site: “No breakthroughs appear imminent.”

One of many problems with breakthrough propulsion based on the Casimir force is that whereas zero-point energy may be theoretically infinite, it is not necessarily limitless in practice—or even minutely accessible. “It’s not so much that these look like really good energy schemes so much as they are clever ways of broaching some really hard questions and testing them,” says Marc Millis, the NASA physicist who oversaw the propulsion program.

The DARPA program faces several formidable obstacles, as well, cautions Jeremy Munday, a physicist at California Institute of Technology who studies the Casimir effect. For starters, simply measuring the Casimir force is difficult enough. These experiments take many years to complete, adds Munday, who recently published a paper in Nature (Scientific American is part of the Nature Publishing Group) describing his own research. What’s more, he says, although several groups have measured the Casimir force, only a few have been able to modify it significantly. Still, Munday adds, the exploratory nature of the program means its goals and expectations are “quite reasonable.”

Tang is pragmatic about his efforts, given the unlikelihood that Casimir force will ever provide much energy to harness. “The force is really small,” he says. “After all, a vacuum is a vacuum.”

Yet sometimes the best science can hope for is baby steps. “To come up with anything that can lead to a viable energy conversion or a viable force producing effect, we’re not anywhere close,” Millis says. “But then, of course, you don’t make progress unless you try.”

Adam Marcus, Scientific American



The Wonderful World of the Teeny-Tiny

Microscopic Photography

There are millions of photo competitions. But very few of them deal with objects that are normally invisible to the naked eye. SPIEGEL ONLINE brings you the winners of this year’s microscopic photo competition.

It isn’t uncommon for scientists to spend countless hours staring into a microscope. Only rarely, however, do they take pictures of what they see. And even then the images tend to be gray and amorphous, depicting malignant tissue or the activity of a particular protein inside a cell.


For the uninitiated, such images are impenetrable. Yet the micro-world can also be a beautiful place, full of splendour that normally remains hidden to the naked eye. Capturing that beauty is the aspiration of micro-photographers, those who magnify the miniature and take pictures of the tiny. The images that result are often full of unfamiliar shapes and forms — and surprisingly colorful. Only rarely is it possible to identify the subject being photographed.

Since 1974, though, depictions of the diminutive have been the subject of an annual photo contest, called the Nikon Small World Competition. A jury of photographers, science journalists and researchers chooses the best of the best among the submitted microscopic photos.



First place in this year’s Nikon Small World Competition went to Heiti Paves of Estonia. The image shows the anther of a thale cress (Arabidopsis thaliana) magnified 20 times. The plants pollinate themselves and reproduce quickly, making them a favorite for genetics researchers.


Second place, Gerd Günther of Germany. The spiny sow thistle (Sonchus asper) can be found in Austria and Germany. This image is part of the plant’s flower stem magnified 150 times.


Third place, Pedro Barrios-Perez of Canada. The image shows a wrinkled photoresist, a light-sensitive material used in a number of industrial processes, such as micro-electronics. The image was magnified 200 times.


Fourth place, James Hayden of the US. This image is the result of viewing the ovary of an anglerfish through a special fluorescent microscope. Magnified four times.


Fifth place, Bruno Vellutini of Brazil. Vellutini, a researcher at the University of São Paulo, photographed a young sea star magnified 40 times.


Eleventh place, Dominik Paquet of Germany. Zebra fish are often used in the study of genetic Alzheimer’s. In this image, magnified 10 times, the nerve cells are stained green while the Alzheimer’s genes are colored blue and red.


One of those honored this year, Dominik Paquet of the Adolf Butenandt Institute in Munich, is a prime example of how many of the images in the contest come into existence. His image, which came in 11th place, sprang from his research into the cellular processes related to Alzheimer’s disease. Zebra fish are often used to make the death of nerve cells visible. Tiny fish larvae are injected with an Alzheimer’s-causing gene, which is then colored using an antibody to make it easily perceptible. His laser microscope does the rest.

A Simple Sow Thistle

Paquet entered one of the resulting images in the photo contest. “Compelling images are important for research,” Paquet, 29, says. “And they help communicate what we are doing to the broader public.”

Some 2,000 photographers sent in their work to the contest, and the subject matter varies widely. Some photographers took pictures of magnified chemical compounds, others show details from the world of microbiology. And not all those who submitted photographs come from the world of science. Anyone with a microscope can participate in the contest. Although standard instruments are enough, many of the images were taken with highly specialized microscopes that can cost hundreds of thousands of euros.

But even the simplest of microscopes can result in impressive photos. An image submitted by photographer Gerd Günther from Düsseldorf took second place in this year’s contest — and was created using a simple, store-bought device. His subject? A simple sow thistle.



The Collider, the Particle and a Theory About Fate


SUICIDE MISSION? The core of the superconducting solenoid magnet at the Large Hadron Collider in Switzerland.

More than a year after an explosion of sparks, soot and frigid helium shut it down, the world’s biggest and most expensive physics experiment, known as the Large Hadron Collider, is poised to start up again. In December, if all goes well, protons will start smashing together in an underground racetrack outside Geneva in a search for forces and particles that reigned during the first trillionth of a second of the Big Bang.

Then it will be time to test one of the most bizarre and revolutionary theories in science. I’m not talking about extra dimensions of space-time, dark matter or even black holes that eat the Earth. No, I’m talking about the notion that the troubled collider is being sabotaged by its own future. A pair of otherwise distinguished physicists have suggested that the hypothesized Higgs boson, which physicists hope to produce with the collider, might be so abhorrent to nature that its creation would ripple backward through time and stop the collider before it could make one, like a time traveler who goes back in time to kill his grandfather.

Holger Bech Nielsen, of the Niels Bohr Institute in Copenhagen, and Masao Ninomiya of the Yukawa Institute for Theoretical Physics in Kyoto, Japan, put this idea forward in a series of papers with titles like “Test of Effect From Future in Large Hadron Collider: a Proposal” and “Search for Future Influence From LHC,” posted on the physics Web site arXiv.org in the last year and a half.

According to the so-called Standard Model that rules almost all physics, the Higgs is responsible for imbuing other elementary particles with mass.

“It must be our prediction that all Higgs producing machines shall have bad luck,” Dr. Nielsen said in an e-mail message. In an unpublished essay, Dr. Nielsen said of the theory, “Well, one could even almost say that we have a model for God.” It is their guess, he went on, “that He rather hates Higgs particles, and attempts to avoid them.”

This malign influence from the future, they argue, could explain why the United States Superconducting Supercollider, also designed to find the Higgs, was canceled in 1993 after billions of dollars had already been spent, an event so unlikely that Dr. Nielsen calls it an “anti-miracle.”

You might think that the appearance of this theory is further proof that people have had ample time — perhaps too much time — to think about what will come out of the collider, which has been 15 years and $9 billion in the making.

The collider was built by CERN, the European Organization for Nuclear Research, to accelerate protons to energies of seven trillion electron volts around a 17-mile underground racetrack and then crash them together into primordial fireballs.
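For a sense of what seven trillion electron volts means, a short calculation shows how close such a proton comes to the speed of light (the proton rest energy is the standard reference value, not a figure from the article):

```python
# How close to the speed of light is a 7-trillion-electron-volt proton?
c = 299_792_458.0                    # speed of light, m/s
E = 7.0e12                           # design beam energy per proton, eV
m_c2 = 938.272e6                     # proton rest energy, eV
gamma = E / m_c2                     # Lorentz factor, from E = gamma * m * c^2
v = c * (1.0 - 1.0 / gamma**2) ** 0.5
print(f"gamma = {gamma:.0f}; slower than light by {c - v:.2f} m/s")
```

At full design energy each proton falls short of light speed by only a few metres per second.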

For the record, as of the middle of September, CERN engineers hope to begin colliding protons at the so-called injection energy of 450 billion electron volts in December and then ramp up the energy until the protons have 3.5 trillion electron volts of energy apiece; then, after a short Christmas break, real physics can begin.


Dr. Nielsen and Dr. Ninomiya started laying out their case for doom in the spring of 2008. It was later that fall, of course, after the CERN collider was turned on, that a connection between two magnets vaporized, shutting down the collider for more than a year.

Dr. Nielsen called that “a funny thing that could make us to believe in the theory of ours.”

He agreed that skepticism would be in order. After all, most big science projects, including the Hubble Space Telescope, have gone through a period of seeming jinxed. At CERN, the beat goes on: Last weekend the French police arrested a particle physicist who works on one of the collider experiments, on suspicion of conspiracy with a North African wing of Al Qaeda.

Dr. Nielsen and Dr. Ninomiya have proposed a kind of test: that CERN engage in a game of chance, a “card-drawing” exercise using perhaps a random-number generator, in order to discern bad luck from the future. If the outcome was sufficiently unlikely, say drawing the one spade in a deck with 100 million hearts, the machine would either not run at all, or only at low energies unlikely to find the Higgs.
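The arithmetic behind that game of chance is easy to sketch. The Python below is only a toy illustration of the proposal (the deck size comes from the article; the simulation details are assumed):

```python
import random

# Odds in the proposed test: one spade hidden among 100 million hearts.
DECK_SIZE = 100_000_001        # 100,000,000 hearts + 1 spade
p_spade = 1 / DECK_SIZE        # ~1e-8 per draw

def draw_card(rng):
    """Return True if the single spade is drawn from the shuffled deck."""
    return rng.randrange(DECK_SIZE) == 0

# Under the Nielsen-Ninomiya proposal, CERN would run normally unless
# this wildly improbable outcome occurred.  In a modest simulation the
# spade essentially never shows up:
rng = random.Random(42)
draws = [draw_card(rng) for _ in range(10_000)]
print(f"p(spade) = {p_spade:.1e}; spades seen in 10,000 draws: {sum(draws)}")
```

The point of such long odds is that a positive draw would itself be the "anti-miracle": too unlikely to attribute to chance, and so (on their theory) attributable to influence from the future.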

Sure, it’s crazy, and CERN should not and is not about to mortgage its investment to a coin toss. The theory was greeted on some blogs with comparisons to Harry Potter. But craziness has a fine history in a physics that talks routinely about cats being dead and alive at the same time and about anti-gravity puffing out the universe.

As Niels Bohr, Dr. Nielsen’s late countryman and one of the founders of quantum theory, once told a colleague: “We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.”

Dr. Nielsen is well-qualified in this tradition. He is known in physics as one of the founders of string theory and a deep and original thinker, “one of those extremely smart people that is willing to chase crazy ideas pretty far,” in the words of Sean Carroll, a Caltech physicist and author of a coming book about time, “From Eternity to Here.”

Another of Dr. Nielsen’s projects is an effort to show how the universe as we know it, with all its apparent regularity, could arise from pure randomness, a subject he calls “random dynamics.”

Dr. Nielsen admits that his and Dr. Ninomiya’s new theory smacks of time travel, a longtime interest, which has become a respectable research subject in recent years. While it is a paradox to go back in time and kill your grandfather, physicists agree there is no paradox if you go back in time and save him from being hit by a bus. In the case of the Higgs and the collider, it is as if something is going back in time to keep the universe from being hit by a bus. Although just why the Higgs would be a catastrophe is not clear. If we knew, presumably, we wouldn’t be trying to make one.

We always assume that the past influences the future. But that is not necessarily true in the physics of Newton or Einstein. According to physicists, all you really need to know, mathematically, to describe what happens to an apple or the 100 billion galaxies of the universe over all time are the laws that describe how things change and a statement of where things start. The latter are the so-called boundary conditions — the apple five feet over your head, or the Big Bang.

The equations work just as well, Dr. Nielsen and others point out, if the boundary conditions specify a condition in the future (the apple on your head) instead of in the past, as long as the fundamental laws of physics are reversible, which most physicists believe they are.
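That symmetry is simple to demonstrate numerically. The sketch below (a minimal, assumed example, not anything from the article) integrates Newton's law for a falling apple forward in time, flips the velocity, and runs the same law again to recover the original boundary condition:

```python
# Run Newton's equation for a falling apple forward for half a second,
# reverse the velocity, run the same law again: the computation lands
# back on its starting boundary condition, illustrating reversibility.
G = -9.81     # gravitational acceleration, m/s^2
DT = 1e-3     # integration time step, s

def step(x, v):
    """One velocity-Verlet step under constant acceleration G."""
    v_half = v + 0.5 * G * DT
    x = x + v_half * DT
    v = v_half + 0.5 * G * DT
    return x, v

x, v = 1.524, 0.0            # apple five feet (~1.524 m) overhead, at rest
for _ in range(500):         # 0.5 s forward in time
    x, v = step(x, v)

v = -v                       # time reversal: flip the velocity
for _ in range(500):         # the same law, applied again
    x, v = step(x, v)

print(x, v)                  # back to ~1.524 m and ~0 m/s
```

Specifying where the apple ends up and running the reversed law is mathematically just as legitimate as specifying where it starts.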

“For those of us who believe in physics,” Einstein once wrote to a friend, “this separation between past, present and future is only an illusion.”

In Kurt Vonnegut’s novel “The Sirens of Titan,” all of human history turns out to be reduced to delivering a piece of metal roughly the size and shape of a beer-can opener to an alien marooned on Saturn’s moon so he can repair his spaceship and go home.

Whether the collider has such a noble or humble fate — or any fate at all — remains to be seen. As a Red Sox fan my entire adult life, I feel I know something about jinxes.

Dennis Overbye, New York Times


Full article and photo:

Bacterium Transforms Toxic Gold Compounds To Their Metallic Form


A C. metallidurans ultra-thin section containing a gold nanoparticle.

Australian scientists have found that the bacterium Cupriavidus metallidurans catalyses the biomineralisation of gold by transforming toxic gold compounds to their metallic form using an active cellular mechanism.

Researchers had previously reported the presence of bacteria on gold surfaces but never clearly elucidated their role. Now, an international team of scientists has found that there may be a biological reason for the presence of these bacteria on gold grain surfaces.

“A number of years ago we discovered that the metal-resistant bacterium Cupriavidus metallidurans occurred on gold grains from two sites in Australia. The sites are 3500 km apart, in southern New South Wales and northern Queensland, so when we found the same organism on grains from both sites we thought we were onto something. It made us wonder why these organisms live in this particular environment. The results of this study point to their involvement in the active detoxification of Au complexes leading to formation of gold biominerals,” explains Frank Reith, leader of the research and working at the University of Adelaide (Australia).

The experiments showed that C. metallidurans rapidly accumulates toxic gold complexes from a solution prepared in the lab. The accumulated gold is toxic to the cell, pushing the bacterium to induce oxidative-stress and metal-resistance gene clusters, as well as an as-yet uncharacterized Au-specific gene cluster, to defend its cellular integrity. This leads to active, biochemically mediated reduction of gold complexes to nano-particulate, metallic gold, which may contribute to the growth of gold nuggets.

For this study scientists combined synchrotron techniques at the European Synchrotron Radiation Facility (ESRF) and the Advanced Photon Source (APS) with molecular microbial techniques to understand the biomineralisation in bacteria. It is the first time that these techniques have been used in the same study, so Frank Reith brought together a multinational team of experts in both areas. The team was made up of scientists from the University of Adelaide, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the University of California (US), the University of Western Ontario and the University of Saskatchewan (Canada), Martin-Luther-Universität (Germany), the University of Nebraska-Lincoln (US), SCK.CEN (Belgium), the APS (US) and the ESRF (France).

This is the first direct evidence that bacteria are actively involved in the cycling of rare and precious metals, such as gold. These results open the doors to the production of biosensors.

“The discovery of an Au-specific operon means that we can now start to develop gold-specific biosensors, which will help mineral explorers to find new gold deposits. To achieve this we need to further characterize the gold-specific operon on a genomic as well as proteomic level. If funding for this research is granted I believe we can produce a functioning biosensor within three to five years,” concludes Reith.



Canadian Astronomers Capture Spectacular Meteor Footage And Images


Composite all-sky camera image of the end of the fireball as seen from Hamilton (Camera #3, McMaster).

Astronomers from The University of Western Ontario have released footage of a meteor that was approximately 100 times brighter than a full moon. The meteor lit up the skies of southern Ontario two weeks ago and Western astronomers are now hoping to enlist the help of local residents in recovering one or more possible meteorites that may have crashed in the area of Grimsby, Ontario.

The Physics and Astronomy Department at Western has a network of all-sky cameras in southern Ontario that scan the atmosphere monitoring for meteors. Associate Professor Peter Brown, who specializes in the study of meteors and meteorites, says that on Friday, September 25 at 9:03 p.m. EST seven all-sky cameras of Western’s Southern Ontario Meteor Network (SOMN) recorded a brilliant fireball in the evening sky over the west end of Lake Ontario.

Brown along with Phil McCausland, a postdoctoral fellow at Western’s Centre for Planetary Science & Exploration, are now working to get the word out amongst interested people who may be willing to see if they can spot any fallen meteorites.

“This particular meteorite fall, if any are found, is very important because its arrival was so well recorded. We have good camera records as well as radar and infrasound detections of the event, so that it will be possible to determine its orbit prior to collision with the Earth and to determine the energy of the fireball event,” says McCausland. “We can also figure out where it came from and how it got here, which is rare. In all of history, only about a dozen meteorite falls have that kind of record.”

The fireball was first detected by Western’s camera systems at an altitude of 100 km over Guelph moving southeastwards at 20.8 km/s. The meteoroid was initially the size of a child’s tricycle.

Analysis of the all-sky camera records as well as data from Western’s meteor radar and infrasound equipment indicates that this bright fireball was large enough to have dropped meteorites in a region south of Grimsby on the Niagara Peninsula, providing masses that may total as much as several kilograms.

Researchers at Western are interested in hearing from anyone within 10 km of Grimsby who may have witnessed or recorded this event, seen or heard unusual events at the time, or who may have found possible fragments of the freshly fallen meteorite.

According to McCausland, meteorites are of great scientific value. He also points out that in Canada meteorites belong to the owner of the land upon which they are discovered. If individuals intend to search they should, in all cases, obtain the permission of the land owner before searching on private land.

Meteorites may best be recognized by their dark and scalloped exterior, and are usually denser than normal rock and will often attract a fridge magnet due to their metal content. In this fall, meteorites may be found in a small hole produced by their dropping into soil. Meteorites are not dangerous, but any recovered meteorites should be placed in a clean plastic bag or container and be handled as little as possible to preserve their scientific information.

For video footage, still images and site maps, please visit



Classical Chaos Occurs In The Quantum World, Scientists Find


This image shows the kind of pictures Jessen’s team produces with tomography. The top two spheres are from a selected experimental snapshot taken after 40 cycles of changing the direction of the axis of spin of a cesium atom, the quantum “spinning top.” The two spheres below are theoretical models that agree remarkably with the experimental results.

Chaotic behavior is the rule, not the exception, in the world we experience through our senses, the world governed by the laws of classical physics.

Even tiny, easily overlooked events can completely change the behavior of a complex system, to the point where there is no apparent order to most natural systems we deal with in everyday life.

The weather is one familiar case, but other well-studied examples can be found in chemical reactions, population dynamics, neural networks and even the stock market.

Scientists who study “chaos” — which they define as extreme sensitivity to infinitesimally small tweaks in the initial conditions — have observed this kind of behavior only in the deterministic world described by classical physics.

Until now, no one has produced experimental evidence that chaos occurs in the quantum world, the world of photons, atoms, molecules and their building blocks.

This is a world ruled by uncertainty: An atom is both a particle and a wave, and it’s impossible to determine its position and velocity simultaneously with arbitrary precision.

And that presents a major problem. If the starting point for a quantum particle cannot be precisely known, then there is no way to construct a theory that is sensitive to initial conditions in the way of classical chaos.

Yet quantum mechanics is the most complete theory of the physical world, and therefore should be able to account for all naturally occurring phenomena.

“The problem is that people don’t see [classical] chaos in quantum systems,” said Professor Poul Jessen of the University of Arizona. “And we believe quantum mechanics is the fundamental theory, the theory that describes everything, and that we should be able to understand how classical physics follows as a limiting case of quantum physics.”

Experiments Reveal Classical Chaos In Quantum World

Now, however, Jessen and his group in UA’s College of Optical Sciences have performed a series of experiments that show just how classical chaos spills over into the quantum world.

The scientists report their research in the Oct. 8 issue of the journal Nature in an article titled, “Quantum signatures of chaos in a kicked top.”

Their experiments show clear fingerprints of classical-world chaos in a quantum system designed to mimic a textbook example of chaos known as the “kicked top.”

The quantum version of the top is the “spin” of individual laser-cooled cesium atoms that Jessen’s team manipulates with magnetic fields and laser light, using tools and techniques developed over a decade of painstaking laboratory work.

“Think of an atom as a microscopic top that spins on its axis at a constant rate of speed,” Jessen said. He and his students repeatedly changed the direction of the axis of spin, in a series of cycles that each consisted of a “kick” and a “twist”.

Because spinning atoms are tiny magnets, the “kicks” were delivered by a pulsed magnetic field. The “twists” were more challenging, and were achieved by subjecting the atom to an optical-frequency electric field in a precisely tuned laser beam.

They imaged the quantum mechanical state of the atomic spin at the end of each kick-and-twist cycle with a tomographic technique that is conceptually similar to the methods used in medical ultrasound and CAT scans.

The end results were pictures and stop-motion movies of the evolving quantum state, showing that it behaves like the equivalent classical system in some significant ways.

One of the most dramatic quantum signatures the team saw in their experiments was directly visible in their images: They saw that the quantum spinning top observes the same boundaries between stability and chaos that characterize the motion of the classical spinning top. That is, both quantum and classical systems were dynamically stable in the same areas, and dynamically erratic outside those areas.
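The classical side of that comparison can be sketched in a few lines. Below is one textbook parameterization of the classical kicked top; the kick angle, twist strength and starting spin direction are illustrative choices, not the experiment's actual parameters. In the chaotic regime, two spins that start almost identically are driven far apart after 40 kick-and-twist cycles:

```python
import math

def kick_and_twist(v, p, k):
    """One cycle for a classical top: a linear rotation about the
    y-axis by angle p (the kick), then a rotation about the z-axis by
    an angle proportional to the spin's own z-component (the twist)."""
    x, y, z = v
    # kick: rigid rotation about y by angle p
    x, z = x * math.cos(p) + z * math.sin(p), -x * math.sin(p) + z * math.cos(p)
    # twist: nonlinear rotation about z by angle k*z
    c, s = math.cos(k * z), math.sin(k * z)
    return (x * c - y * s, x * s + y * c, z)

# Two unit spins differing by one part in a billion.
a = (0.6, 0.48, 0.64)
b = (0.6, 0.48 + 1e-9, 0.64)
for _ in range(40):                           # 40 cycles, as in the experiment
    a = kick_and_twist(a, math.pi / 2, 10.0)  # k = 10: deep in the chaotic regime
    b = kick_and_twist(b, math.pi / 2, 10.0)

print(math.dist(a, b))   # the tiny initial difference has blown up
```

For weak twists the two trajectories would stay close; strong twists produce exactly the runaway sensitivity that defines classical chaos.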

A New Signature Of Chaos Called ‘Entanglement’

Jessen’s experiment revealed a previously unseen signature of chaos, one related to the uniquely quantum mechanical property known as “entanglement.”

Entanglement is best known from a famous thought experiment proposed by Albert Einstein, in which two light particles, or photons, are emitted with polarizations that are fundamentally undefined but nevertheless perfectly correlated. Later, when the photons have traveled far apart in space, their polarizations are both measured at the same instant in time and found to be completely random but always at right angles to each other.

“It’s as though one photon instantly knows the result for the other and adjusts its own polarization accordingly,” Jessen said.
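For the specific same-basis case the quote describes, the bookkeeping is easy to mimic. The snippet below is only an accounting toy; reproducing the full quantum correlations at arbitrary measurement angles with any classical recipe is impossible, which is the content of Bell's theorem:

```python
import random

rng = random.Random(0)

def measure_pair():
    """Each pair: photon A's polarization comes out at random, and
    photon B's is always found at right angles to it."""
    a = rng.choice([0, 90])      # degrees: horizontal or vertical
    b = (a + 90) % 180           # perpendicular, every single time
    return a, b

pairs = [measure_pair() for _ in range(10_000)]
frac_horizontal = sum(1 for a, _ in pairs if a == 0) / len(pairs)
always_perpendicular = all((b - a) % 180 == 90 for a, b in pairs)
print(frac_horizontal, always_perpendicular)  # ~0.5, True
```

Each outcome alone is a fair coin flip, yet the pair is perfectly correlated, which is the pattern the polarization measurements show.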

By itself, Einstein’s thought experiment is not directly related to quantum chaos, but the idea of entanglement has proven useful, Jessen added.

“Entanglement is an important phenomenon of the quantum world that has no classical counterpart. It can occur in any quantum system that consists of at least two independent parts,” he said.

Theorists have speculated that the onset of chaos will greatly increase the degree to which different parts of a quantum system become entangled.

Jessen took advantage of atomic physics to test this hypothesis in his laboratory experiments.

The total spin of a cesium atom is the sum of the spin of its valence electron and the spin of its nucleus, and those spins can become quantum correlated exactly as the photon polarizations in Einstein’s example.

In Jessen’s experiment, the electron and nuclear spins remained unentangled as a result of stable quantum dynamics, but rapidly became entangled if the dynamics were chaotic.

Entanglement is a buzzword in the science community because it is the foundation for quantum cryptography and quantum computing.

“Our work is not directly related to quantum computing and communications,” Jessen said. “It just shows that this concept of entanglement has tendrils in all sorts of areas of quantum physics because entanglement is actually common as soon as the system gets complicated enough.”



New ring detected around Saturn


A colossal new ring has been identified around Saturn.

The dusty hoop lies some 13 million km (eight million miles) from the planet, about 50 times more distant than the other rings and in a different plane.

Scientists tell the journal Nature that the tenuous ring is probably made up of debris kicked off Saturn’s moon Phoebe by small impacts.

They think this dust then migrates towards the planet where it is picked up by another Saturnian moon, Iapetus.

The discovery would appear to resolve a longstanding mystery in planetary science: why the walnut-shaped Iapetus has a two-tone complexion, with one side of the moon significantly darker than the other.

“It has essentially a head-on collision. The particles smack Iapetus like bugs on a windshield,” said Anne Verbiscer from the University of Virginia, US.

Observations of the material coating the dark face of Iapetus indicate it has a similar composition to the surface material on Phoebe.

The scale of the new ring feature is astonishing. Nothing like it has been seen elsewhere in the Solar System.

The more easily visible outlier in Saturn’s famous bands of ice and dust is its E-ring, which encompasses the orbit of the moon Enceladus. This circles the planet at a distance of just 240,000 km.

The newly identified torus is not only much broader and further out, it is also tilted at an angle of 27 degrees to the plane on which the more traditional rings sit.

This in itself strongly links the ring’s origin to Phoebe, which also takes a highly inclined path around Saturn.

Scientists suspected the ring might be present and had the perfect tool in the Spitzer space telescope to confirm it.

The US space agency observatory is well suited to picking up the infrared signal expected from cold grains of dust about 10 microns (millionths of a metre) in size.

Impacts on the moon Phoebe are probably supplying the ring

The ring would probably have a range of particle sizes – some bigger than this, and some smaller.

Modelling indicates the pressure of sunlight would push the smallest of these grains towards the orbit of Iapetus, which is circling Saturn at a distance of 3.5 million km.

“The particles are very, very tiny, so the ring is very, very tenuous; and actually if you were standing in the ring itself, you wouldn’t even know it,” Dr Verbiscer told BBC News.

“In a cubic km of space, there are all of 10-20 particles.”

Indeed, so feeble is the ring that scientists have calculated that if all the material were gathered up, it would fill a crater on Phoebe no more than a kilometre across.
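That claim survives a rough back-of-envelope check. The sketch below uses only figures from the article (the 13-million-km distance, the upper end of the quoted 10-20 particles per cubic kilometre, the 10-micron grains); the torus cross-section and crater depth are assumed values for illustration:

```python
import math

R_RING = 13e6        # km: ring's distance from Saturn (from the article)
R_CROSS = 1e6        # km: ASSUMED cross-sectional radius of the torus
DENSITY = 20         # particles per km^3 (upper end of the quoted 10-20)
GRAIN_R = 5e-6       # m: radius of a 10-micron grain

# Volume of the torus and the total number of grains in it.
torus_km3 = (2 * math.pi * R_RING) * (math.pi * R_CROSS**2)
n_grains = DENSITY * torus_km3

# Total solid volume of all that dust, in cubic metres.
grain_m3 = (4 / 3) * math.pi * GRAIN_R**3
dust_m3 = n_grains * grain_m3

# A bowl-shaped crater 1 km across and an assumed 100 m deep:
crater_m3 = math.pi * 500**2 * 100
print(f"dust: {dust_m3:.1e} m^3  vs  1-km crater: {crater_m3:.1e} m^3")
```

Even with generous assumptions, the dust adds up to far less material than such a crater holds, consistent with the scientists' calculation.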

The moon is certainly a credible source for the dust. It is heavily pockmarked. It is clear that throughout its history, Phoebe has been hit many, many times by space rocks and clumps of ice.



Schrödinger’s virus

Quantum mechanics

An old thought experiment may soon be realised

But what about the other eight lives?

ONE of the most famous unperformed experiments in science is Schrödinger’s cat. In 1935 Erwin Schrödinger (pictured), who was one of the pioneers of quantum mechanics, imagined putting a cat, a flask of Prussic acid, a radioactive atom, a Geiger counter, an electric relay and a hammer in a sealed box. If the atom decays, the Geiger counter detects the radiation and sends a signal that trips the relay, which releases the hammer, which smashes the flask and poisons the cat.

The point of the experiment is that radioactive decay is a quantum process. The chance of the atom decaying in any given period is known. Whether it has actually decayed (and thus whether the cat is alive or dead) is not—at least until the box is opened. The animal exists, in the argot of the subject, in a “superposition” in which it is both alive and dead at the same time.
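The one quantitative ingredient, the known chance of decay, follows the standard exponential law; a minimal sketch (the half-life here is an arbitrary illustrative value):

```python
import math

def p_alive(lam, t):
    """Probability the atom has not yet decayed after time t -- which,
    in the thought experiment, is the probability the cat is alive."""
    return math.exp(-lam * t)

half_life = 1.0                   # hours, an arbitrary illustrative value
lam = math.log(2) / half_life     # decay constant
print(p_alive(lam, 1.0))          # ~0.5 after one half-life
print(p_alive(lam, 2.0))          # ~0.25 after two half-lives
```

Quantum mechanics supplies these probabilities exactly, but says nothing about which outcome has actually occurred until the box is opened.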

Schrödinger’s intention was to illuminate the paradoxes of the quantum world. But superposition (the existence of a thing in two or more quantum states simultaneously) is real and is, for example, the basis of quantum computing. A pair of researchers at the Max Planck Institute for Quantum Optics in Garching, Germany, now propose to do what Schrödinger could not, and put a living organism into a state of quantum superposition.

The organism Ignacio Cirac and Oriol Romero-Isart have in mind is the flu virus. Pedants might object that viruses are not truly alive, but that is a philosophical rather than a naturalistic argument, for they have genes and are capable of reproduction—a capability they lose if they are damaged. The reason for choosing a virus is that it is small. Actual superposition (as opposed to the cat-in-a-box sort) is easiest with small objects, for which there are fewer pathways along which the superposition can break down. Physicists have already put photons, electrons, atoms and even entire molecules into such a state and measured the outcome. In the view of Dr Cirac and Dr Romero-Isart, a virus is just a particularly large molecule, so existing techniques should work on it.

The other thing that helps maintain superposition is low temperature. The less something jiggles about because of heat-induced vibration, the longer it can remain superposed. Dr Cirac and Dr Romero-Isart therefore propose putting the virus inside a microscopic cavity and cooling it down to its state of lowest energy (ground state, in physics parlance) using a piece of apparatus known as a laser trap. This ingenious technique—which won its inventors, one of whom was Steven Chu, now America’s energy secretary, a Nobel prize—works by bombarding an object with laser light at a frequency just below that which it would readily absorb and re-emit if it were stationary. This slows the movement of its atoms, cooling them to a fraction of a degree above absolute zero.
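For a sense of scale, the standard Doppler-cooling limit for cesium, the workhorse atom of many laser traps, really is a tiny fraction of a degree. This is a side calculation, not a figure from the article; the linewidth is the textbook value for the cesium D2 line:

```python
import math

HBAR = 1.0545718e-34          # reduced Planck constant, J*s
KB = 1.380649e-23             # Boltzmann constant, J/K
GAMMA = 2 * math.pi * 5.22e6  # rad/s: natural linewidth of the Cs D2 line

# Doppler limit of laser cooling: T_D = hbar * Gamma / (2 * kB)
t_doppler = HBAR * GAMMA / (2 * KB)
print(f"Cs Doppler limit ~ {t_doppler * 1e6:.0f} microkelvin")
```

Reaching the motional ground state requires pushing below even this limit with further cooling tricks.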

Once that is done, another laser pulse will jostle the virus from its ground state into an excited state, just as a single atom is excited by moving one of its electrons from a lower to a higher orbital. By properly applying this pulse, Dr Cirac believes it will be possible to leave the virus in a superposition of the ground and excited states.
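In a two-level sketch (the textbook resonant Rabi form; the article gives no pulse details, so this parameterization is assumed), a pulse of "area" π/2 turns the pure ground state into an equal superposition of ground and excited states:

```python
import math

def apply_pulse(state, theta):
    """Resonant Rabi pulse of area theta on a two-level system,
    represented as complex amplitudes (ground, excited)."""
    g, e = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c * g - 1j * s * e, -1j * s * g + c * e)

# A pi/2 pulse applied to a system starting in the ground state:
g, e = apply_pulse((1.0, 0.0), math.pi / 2)
print(abs(g) ** 2, abs(e) ** 2)   # both probabilities ~0.5: a superposition
```

Choosing a different pulse area tunes the weights of the two states, which is what "properly applying this pulse" amounts to.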

For that to work, however, the virus will need to have certain physical properties. It will have to be an insulator and to be transparent to the relevant laser light. And it will have to be able to survive in a vacuum. Such viruses do exist. The influenza virus is one example. Its resilience is legendary. It can survive exposure to a vacuum, and it seems to be an insulator—which is why the researchers have chosen it. And if the experiment works on a virus, they hope to move on to something that is indisputably alive: a tardigrade.

Tardigrades are tiny but resilient animals, close relatives of the arthropods. They can survive in vacuums and at very low temperatures. And, although the difference between ground state and an excited state is not quite the difference between life and death, Schrödinger would no doubt have been amused that his 70-year-old jeu d’esprit has provoked such an earnest following.



Nobel Awarded for Advances in Harnessing Light


Half of this year’s Nobel Prize in Physics went to Charles K. Kao, center. The other half of the prize was shared by two researchers at Bell Labs, Willard S. Boyle, left, and George E. Smith.

The mastery of light through technology was the theme of this year’s Nobel Prize in Physics as the Royal Swedish Academy of Sciences honored breakthroughs in fiber optics and digital photography.

Half of the $1.4 million prize went to Charles K. Kao for insights in the mid-1960s about how to get light to travel long distances through glass strands, leading to a revolution in fiber optic cables. The other half of the prize was shared by two researchers at Bell Labs, Willard S. Boyle and George E. Smith, for inventing the semiconductor sensor known as a charge-coupled device, or CCD for short. CCDs now fill digital cameras by the millions.

The prize will be awarded in Stockholm on Dec. 10.

Fiber optic cables and lasers capable of sending pulses of light down them already existed when Dr. Kao started working on fiber optics. But at that time, the light pulses could travel only about 20 meters through the glass fibers before 99 percent of the light had dissipated. His goal was to extend the 20 meters to a kilometer. At the time, many researchers thought tiny imperfections, like holes or cracks in the fibers, were scattering the light.

In January 1966, Dr. Kao, then working at the Standard Telecommunication Laboratories in England, presented his findings. It was not the manufacturing of the fiber that was at fault, but rather that the ingredient for the fiber — the glass — was not pure enough. A purer glass made of fused quartz would be more transparent, allowing the light to pass more easily. In 1970, researchers at Corning Glass Works were able to produce a kilometer-long ultrapure optical fiber.
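Put in the units the field uses, the figures above work out as follows. This is a small arithmetic sketch; the 20 dB/km value is the widely cited target implied by Kao's one-kilometer goal:

```python
import math

def attenuation_db_per_km(surviving_fraction, length_km):
    """Attenuation in dB/km, given the fraction of light that survives
    a fiber of the given length."""
    return -10 * math.log10(surviving_fraction) / length_km

pre_kao = attenuation_db_per_km(0.01, 0.020)   # 1% of the light survives 20 m
kao_goal = attenuation_db_per_km(0.01, 1.0)    # 1% surviving a full kilometer
print(pre_kao, kao_goal)   # ~1000 dB/km then, vs a 20 dB/km goal
# Modern telecom fiber does better still, at roughly 0.2 dB/km.
```

The improvement from the fibers of the mid-1960s to today is thus a factor of several thousand in attenuation per kilometer.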

According to the academy in its prize announcement, the optical cables in use today, if unraveled, would equal a fiber more than a billion kilometers long.

In September 1969, Dr. Boyle and Dr. Smith, working at Bell Labs in Murray Hill, N.J., sketched out an idea on a blackboard in Dr. Boyle’s office. Their idea, originally intended for electronic memory, takes advantage of the photoelectric effect, which was explained by Albert Einstein in work that won him the Nobel in 1921. When light hits a piece of silicon, it knocks out electrons. The brighter the light, the more electrons are knocked out.

In a CCD, the knocked-out electrons are gathered in small wells, where they are counted; each well corresponds to one pixel of an image. The data from the array of wells can then be reconstructed as an image. A 10-megapixel camera’s sensor is a single CCD with 10 million such pixels.
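The counting scheme described above can be mimicked with a deliberately simplified toy. The function names, the quantum-efficiency figure and the one-well-per-pixel row model are illustrative, not the Nobel work's specifics:

```python
def expose(photon_counts, quantum_efficiency=0.8):
    """Each well accumulates electrons in proportion to the light
    falling on it: brighter light, more knocked-out electrons."""
    return [int(photons * quantum_efficiency) for photons in photon_counts]

def read_out(wells):
    """Shift the charge packets bucket-brigade style toward a single
    output node, measuring one pixel's worth of charge at a time."""
    wells = list(wells)              # don't disturb the caller's copy
    pixels = []
    while wells:
        pixels.append(wells.pop())   # the packet now at the output node
    return pixels[::-1]              # reassemble in spatial order

row = expose([100, 500, 1000, 250])  # photons hitting four adjacent wells
print(read_out(row))                 # [80, 400, 800, 200] electrons counted
```

The bucket-brigade transfer, shifting every charge packet along the chip so one amplifier can read them all, is the idea that made the device practical.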

Besides consumer cameras, CCDs also made possible the cosmic panoramas from the Hubble Space Telescope and the Martian postcards taken by NASA’s rovers.

All three of the winning scientists hold American citizenship. Dr. Kao is also a British citizen, and Dr. Boyle is also a Canadian citizen.



Physicists Create First Atomic-scale Map Of Quantum Dots


An atomic-scale map of the interface between a quantum dot and its substrate. Each peak represents a single atom. The map, made with high-intensity X-rays, is a slice through a vertical cross-section of the dot.

University of Michigan physicists have created the first atomic-scale maps of quantum dots, a major step toward the goal of producing “designer dots” that can be tailored for specific applications.

Quantum dots—often called artificial atoms or nanoparticles—are tiny semiconductor crystals with wide-ranging potential applications in computing, photovoltaic cells, light-emitting devices and other technologies. Each dot is a well-ordered cluster of atoms, 10 to 50 atoms in diameter.

Engineers are gaining the ability to manipulate the atoms in quantum dots to control their properties and behavior, through a process called directed assembly. But progress has been slowed, until now, by the lack of atomic-scale information about the structure and chemical makeup of quantum dots.

The new atomic-scale maps will help fill that knowledge gap, clearing the path to more rapid progress in the field of quantum-dot directed assembly, said Roy Clarke, U-M professor of physics and corresponding author of a paper on the topic published online Sept. 27 in the journal Nature Nanotechnology.

Lead author of the paper is Divine Kumah of the U-M’s Applied Physics Program, who conducted the research for his doctoral dissertation.

“I liken it to exploration in the olden days,” Clarke said of dot mapping. “You find a new continent and initially all you see is the vague outline of something through the mist. Then you land on it and go into the interior and really map it out, square inch by square inch.

“Researchers have been able to chart the outline of these quantum dots for quite a while. But this is the first time that anybody has been able to map them at the atomic level, to go in and see where the atoms are positioned, as well as their chemical composition. It’s a very significant breakthrough.”

To create the maps, Clarke’s team illuminated the dots with a brilliant X-ray photon beam at Argonne National Laboratory’s Advanced Photon Source. The beam acts like an X-ray microscope to reveal details about the quantum dot’s structure. Because X-rays have very short wavelengths, they can be used to create super-high-resolution maps.

“We’re measuring the position and the chemical makeup of individual pieces of a quantum dot at a resolution of one-hundredth of a nanometer,” Clarke said. “So it’s incredibly high resolution.”

A nanometer is one-billionth of a meter.

The availability of atomic-scale maps will quicken progress in the field of directed assembly. That, in turn, will lead to new technologies based on quantum dots. The dots have already been used to make highly efficient lasers and sensors, and they might help make quantum computers a reality, Clarke said.

“Atomic-scale mapping provides information that is essential if you’re going to have controlled fabrication of quantum dots,” Clarke said. “To make dots with a specific set of characteristics or a certain behavior, you have to know where everything is, so that you can place the atoms optimally. Knowing what you’ve got is the most important thing of all.”

In addition to Clarke, co-authors of the Nature Nanotechnology paper are Sergey Shusterman, Yossi Paltiel and Yizhak Yacoby.

The research was sponsored by a grant from the National Science Foundation. The U.S. Department of Energy supported work at Argonne National Laboratory’s Advanced Photon Source.


Ancient Rainforests Resilient To Climate Change


Earth’s first rainforests. (Credit: Courtesy of Mary Parrish, Smithsonian Institution)

Climate change wreaked havoc on the Earth’s first rainforests but they quickly bounced back, scientists reveal. The findings of the research team, led by Dr Howard Falcon-Lang from Royal Holloway, University of London, are based on spectacular discoveries of 300-million-year-old rainforests in coal mines in Illinois, USA.

Preserved over vast areas, these fossilized rainforests in Illinois are the largest of their kind in the world. The rocks at this site – in which the rainforests occur – contain evidence for climate fluctuations. During cold ‘ice ages’, fossils show that the tropics dried out and rainforests were pushed to the brink of extinction. However, rainforests managed to recover and return to their former glory.

Dr Falcon-Lang, from the Department of Earth Sciences, worked with colleagues at the Smithsonian Institution and Illinois Geological Survey. In their paper published in the journal Geology, they show that rainforest species all but vanished at the height of the ice ages. Yet they also reveal that the coal beds that formed shortly after, as the climate warmed, contain abundant rainforest species.

Falcon-Lang said, ‘These discoveries radically change our understanding of the Earth’s first rainforests. We used to think these were stable ecosystems, unchanged for tens of millions of years. Now we know they were incredibly dynamic, constantly buffeted by climate change’.

The research may also shed light on how climate change will affect the Amazon rainforest in the future. Dr Falcon-Lang commented, ‘If we can understand how climate shaped rainforests in the distant past, we can infer how they will respond in the future. We’ve shown that within certain limits, rainforests are resilient to climate change; however, extreme climate change may push rainforests beyond a point of no return’.

The work is part of a five-year project funded by the UK’s Natural Environment Research Council and aims to study how climate change affected the Earth’s first rainforests. These ancient rainforests date from the Carboniferous period, 300 million years ago, when most of the world’s coal resources were formed.



Algae And Pollen Grains Provide Evidence Of Remarkably Warm Period In Antarctica’s History

For Sophie Warny, LSU assistant professor of geology and geophysics and curator at the LSU Museum of Natural Science, years of patience in analyzing Antarctic samples with low fossil recovery finally led to a scientific breakthrough. She and colleagues from around the world now have proof of a sudden, remarkably warm period in Antarctica that occurred about 15.7 million years ago and lasted for a few thousand years.

Last year, as Warny was studying samples sent to her from the latest Antarctic Geologic Drilling Program, or ANDRILL AND-2A, a multinational collaboration between the Antarctic Programs of the United States (funded by the National Science Foundation), New Zealand, Italy and Germany, one sample stood out as a complete anomaly.

“First I thought it was a mistake, that it was a sample from another location, not Antarctica, because of the unusual abundance in microscopic fossil cysts of marine algae called dinoflagellates. But it turned out not to be a mistake, it was just an amazingly rich layer,” said Warny. “I immediately contacted my U.S. colleague, Rosemary Askin, our New Zealand colleagues, Michael Hannah and Ian Raine, and our German colleague, Barbara Mohr, to let them know about this unique sample as each of our countries had received a third of the ANDRILL samples.”

Some colleagues had noted an increase in pollen grains of woody plants in the sample immediately above, but none of the other samples had such an abundance of algae, which at first gave Warny some doubts about potential contamination.

“But the two scientists in charge of the drilling, David Harwood of University of Nebraska – Lincoln, and Fabio Florindo of Italy, were equally excited about the discovery,” said Warny. “They had noticed that this thin layer had a unique consistency that had been characterized by their team as a diatomite, which is a layer extremely rich in fossils of another algae called diatoms.”

All research parties involved met at the Antarctic Research Facility at Florida State University in Tallahassee. Together, they sampled the zone of interest in great detail and processed the new samples in various labs. One month later, the unusual abundance in microfossils was confirmed.

Among the 1,107 meters of sediments recovered and analyzed for microfossil content, a two-meter thick layer in the core displayed extremely rich fossil content. This is unusual because the Antarctic ice sheet was formed about 35 million years ago, and the frigid temperatures there impede the presence of woody plants and blooms of dinoflagellate algae.

“We all analyzed the new samples and saw a 2,000 fold increase in two species of fossil dinoflagellate cysts, a five-fold increase in freshwater algae and up to an 80-fold increase in terrestrial pollen,” said Warny. “Together, these shifts in the microfossil assemblages represent a relatively short period of time during which Antarctica became abruptly much warmer.”

These palynomorphs, a term used to describe dust-size organic material such as pollen, spores and cysts of dinoflagellates and other algae, provide hard evidence that Antarctica underwent a brief but rapid period of warming about 15 million years before present.

“This event will lead to a better understanding of global connections and climate forcing, in other words, it will provide a better understanding of how external factors imposed fluctuations in Earth’s climate system,” said Harwood. “The Mid-Miocene Climate Optimum has long been recognized in global proxy records outside of the Antarctic region. Direct information from a setting proximal to the dynamic Antarctic ice sheets responsible for driving many of these changes is vital to the correct calibration and interpretation of these proxy records.”

These startling results will offer new insight into Antarctica’s climatic past – insights that could potentially help climate scientists better understand the current climate change scenario.

“In the case of these results, the microfossils provide us with quantitative data of what the environment was actually like in Antarctica at the time, showing how this continent reacted when climatic conditions were warmer than they are today,” said Warny.

According to the researchers, these fossils show that land temperatures reached a January average of 10 degrees Celsius – the equivalent of approximately 50 degrees Fahrenheit – and that estimated sea surface temperatures ranged between zero and 11.5 degrees Celsius. The presence of freshwater algae in the sediments suggests to researchers that an increase in meltwater and perhaps also in rainfall produced ponds and lakes adjacent to the Ross Sea during this warm period, which would obviously have resulted in some reduction in sea ice.

These findings most likely reflect a poleward shift of the jet stream in the Southern Hemisphere, which would have pushed warmer water toward the pole and allowed a few dinoflagellate species to flourish under such ice-free conditions. Researchers believe that shrub-like woody plants might also have been able to proliferate during an abrupt and brief warmer time interval.

“An understanding of this event, in the context of timing and magnitude of the change, has important implications for how the climate system operates and what the potential future response in a warmer global climate might be,” said Harwood. “A clear understanding of what has happened in the past, and the integration of these data into ice sheet and climate models, are important steps in advancing the ability of these computer models to reproduce past conditions, and with improved models be able to better predict future climate responses.”

While the results are certainly impressive, the work isn’t yet complete.

“The SMS Project Science Team is currently looking at the stratigraphic sequence and timing of climate events evident throughout the ANDRILL AND-2A drillcore, including those that enclose this event,” said Florindo. “A broader understanding of ice sheet behavior under warmer-than-present conditions will emerge.”



World’s Most Sensitive Astronomical Camera Developed

A team of Université de Montréal researchers, led by physics PhD student Olivier Daigle, has developed the world’s most sensitive astronomical camera. Marketed by Photon etc., a young Quebec firm, the camera will be used by the Mont-Mégantic Observatory and NASA, which purchased the first unit.

The camera is built around a CCD controller for counting photons: a digital imaging device that amplifies the photons observed by astronomical cameras, or by other instruments used in conditions of very low light. The controller produces 25 gigabytes of data per second.

The electric signals used to drive the imaging chip are 500 times more precise than those of a conventional controller. This increased precision helps reduce the noise that interferes with the weak signals coming from astronomical objects in the night sky. The controller makes it possible to substantially increase the sensitivity of detectors, a gain comparable to doubling the diameter of the Mont-Mégantic telescope’s mirror.

“The first astronomical results are astounding and highlight the increased sensitivity acquired by the new controller,” says Daigle. “The clarity of the images brings us so much closer to the stars that we are attempting to understand.”

A thriving Quebec company

Photon etc. developed a commercial version of the controller devised by Daigle and his team and integrated it into complete cameras. NASA was the first to place an order for one of these cameras, soon followed by a research group from the University of São Paulo and by a European-Canadian consortium equipping a telescope in Chile. In addition, researchers in nuclear medicine, bioluminescence, Raman imaging and other fields requiring rapid imagery have expressed interest in purchasing the cameras.

Photon etc. is a Quebec research and development company that specializes in the manufacturing of photonic measurement and analysis instruments. The company is growing rapidly after spending four years in the IT business incubator of the Université de Montréal and its affiliated École Polytechnique.

“The sensitivity of the cameras developed by the Centre de recherche en astrophysique du Québec (CRAQ) and Photon etc. will not only help us better understand the depths of the universe but also better perceive weak optical signals coming from the human body. These signals can reveal the early signs of several diseases such as macular degeneration and certain types of cancer. An early diagnostic leads to early intervention, hopefully before the disease becomes more serious thus saving lives and important costs,” says Sébastien Blais-Ouellette, president of Photon etc.

Scientific results for the camera were recently featured in the Publications of the Astronomical Society of the Pacific, a prestigious instrumentation journal.

This research was made possible thanks to the financial support of the Natural Sciences And Engineering Research Council of Canada, Photon etc., the Canada Foundation for Innovation, the Fonds québécois de la recherche sur la nature et les technologies.



Samoa tsunami: 10 facts about tsunamis

A tsunami in the Pacific has killed more than 100 people in Samoa. We look at what causes tsunamis and what to look out for.


Christopher Moore of NOAA looks at computer graphs at the Pacific Tsunami Warning Centre in Hawaii, concerning the earthquake and tsunami that hit American Samoa.

The word ‘tsunami’ is Japanese, and translates as ‘harbour wave’. Tsunamis used to be called ‘tidal waves’, but the term has fallen out of use with scientists as they have nothing to do with tides.

A tsunami consists of a series of waves, known as a wave train, rather than a single wave. For a large tsunami, these waves could arrive over a period of hours, and the first is not necessarily the largest.

Most tsunamis are caused by undersea earthquakes. A magnitude 8.0 earthquake is behind the Samoan disaster, according to the US Geological Survey. An earthquake will cause a tsunami if it is powerful enough and if it is under a sufficient depth of water.

About 80 per cent of all tsunamis take place in the Pacific Ocean.

The theory that underwater earthquakes were behind tsunamis was first put forward by the ancient Greek historian Thucydides, in 426BC, in his book History of the Peloponnesian War.

Volcanic eruptions, massive landslides, meteorite impacts and underwater nuclear explosions can also cause tsunamis, as can tropical cyclones or other weather conditions. A storm-caused tsunami is known as a ‘meteotsunami’; such an event devastated Burma in 2008.

Despite the enormous size of the waves when they hit the land, the amplitude (wave height) of a tsunami is often as little as three feet in the open ocean, while its wavelength (distance between two peaks) can be as long as 120 miles. At this point it will be travelling at more than 500mph.

As the tsunami reaches shallower water the waves compress, making the wavelength shorter and the amplitude higher. The wave slows down, although it will still be travelling at around 50mph.
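The speeds quoted above follow from the shallow-water wave relation c = √(g·d): because a tsunami’s wavelength dwarfs the ocean depth, its speed depends only on that depth. A minimal sketch (the depths chosen are illustrative, not taken from the article):

```python
import math

def tsunami_speed_mph(depth_m: float) -> float:
    """Shallow-water wave speed c = sqrt(g * d), converted to mph.

    Valid when the wavelength is much larger than the water depth,
    which holds for tsunamis even in the deep ocean.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    speed_ms = math.sqrt(g * depth_m)
    return speed_ms * 2.23694  # m/s to mph

# Average Pacific depth (~4,000 m) vs. a coastal shelf (~50 m):
print(f"deep ocean: {tsunami_speed_mph(4000):.0f} mph")  # roughly 440 mph
print(f"near shore: {tsunami_speed_mph(50):.0f} mph")    # roughly 50 mph
```

With an average Pacific depth of about 4 km this lands in the same ballpark as the 500 mph figure above, and about 50 mph over a shallow shelf.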

Predicting a tsunami is nearly impossible. In some cases a few minutes’ warning can be gained when the water along the shore suddenly recedes, in a phenomenon called ‘drawback’. This happens when a tsunami’s trough reaches the land before its peak.

A 10-year-old English girl, Tilly Smith, saved nearly a hundred lives with this knowledge ahead of the 2004 Indian Ocean tsunami. She had learned about drawback in a geography lesson and warned her family, who in turn told others. She has since given a speech at the United Nations and had an asteroid named after her: 20002 Tillysmith.



Peering into the future

Building a bionic eye

A contact lens that could put names to faces and guide soldiers in combat

SINCE the late 19th century, people with imperfect vision have been able to use contact lenses to improve their eyesight. In the early days these lenses were made of glass and could perform only simple visual corrections. Now they are usually made of plastic and can be moulded into the more complex shapes appropriate to those who suffer from astigmatism or who require bifocals. They can also be tinted, for people who wish to change the colour of their eyes. Yet the main purpose of even the most sophisticated contact lens remains what it always has been: to improve a person’s sight. That is about to change.

Researchers at the University of Washington, in Seattle, led by Babak Parviz, have incorporated electronic circuitry into a plastic lens, including light-emitting diodes (LEDs) for “on-eye” displays, transistors for computing, a radio for wireless communication and an antenna for collecting power from a radio source, such as a mobile phone, in a person’s pocket.

Making a “smart” lens like this is not easy. Electronic components are usually manufactured at temperatures which would melt plastic and are made of materials that do not naturally adhere to a contact lens’s plastic. Dr Parviz and his colleagues have therefore designed a lens that is peppered with small wells, ten microns deep, that are connected by a network of tiny metal wires. Each well is sculpted so that a component of a particular shape will fit snugly into it and, at its bottom, it contains a small amount of an alloy with a low melting-point. In addition, wells that will accommodate LEDs must be fitted with microlenses to focus the light from the LED in a way that the eye can cope with.

The components are manufactured individually and suspended in a liquid. This suspension is then washed over the lens, allowing the components to blunder into wells of the appropriate shape, where they stay put. The assembly is then gently heated, melting the alloy and connecting the components to the wires and thus to one another.

The researchers say that the resulting circuitry requires so little power that it does not produce enough heat to cause discomfort. And although Dr Parviz has not, himself, worn the lens, he has tested it on rabbits—and the animals do not seem to find it uncomfortable.

So far, the prototype’s display is rudimentary (in truth, it consists of but a single LED). However, Dr Parviz and his colleagues are working on a lens that can accommodate an eight-by-eight array of LEDs. They are also exploring a design which produces images using tiny shutters, in the manner of a liquid-crystal display.

As well as an LED, the prototype contains a small radio chip and antenna so that it can be powered without wires. The researchers will discuss the performance of their wireless-power system, which taps into mobile-phone frequencies in the 900-megahertz to 6-gigahertz range and draws about 100 microwatts of power, at a conference in Beijing in November.

What the display will show, of course, is up to the imagination—the name, perhaps, of someone the wearer has met but does not recall, or the street directions in an unfamiliar city. Or, perhaps, the quickest route to a target that needs destroying. For this sort of technology surely has military applications as well.



Finding Order in the Apparent Chaos of Currents


FLUID MOVEMENT Sensors near Santa Cruz, Calif., take surface current measurements in Monterey Bay.

Suppose a blob of dioxin-rich pesticide is spilled into Monterey Bay. It might quickly disperse to the Pacific Ocean. But hours later, a spill of the same size at the same spot could circle near the coastline, posing a greater danger to marine life. The briny surface waters of the bay churn so chaotically that a slight shift in the place or time an oil drop, a buoy — or even a person — falls in can dictate whether it is swept out to the open ocean or swirls near the shore.

But the results are not unpredictable. A team of scientists studying Monterey Bay since 2000 has found that underlying its complex, seemingly jumbled currents is a structure that guides the dispersal patterns, a structure that changes over time.

With the aid of high-frequency radar that tracks the speed and direction of the flowing waters, and computers that rapidly perform millions of calculations, the scientists found that a hidden skeleton guided whether floating debris lingered or exited the bay.

Over the past 10 years, scientists have made enormous strides in their ability to identify and make images of the underlying mechanics of flowing air and water, and to predict how objects move through these flows.

Assisted by instruments that can track in fine detail how parcels of fluid move, and by low-cost computers that can crunch vast amounts of data quickly, researchers have found hidden structures beyond Monterey Bay, structures that explain why aircraft meet unexpected turbulence, why the air flow around a car causes drag and how blood pumps from the heart’s ventricles. In December, the journal Chaos will highlight the research under way to track the moving skeletons embedded in complex flows, known as Lagrangian coherent structures.

“There’s been an explosion of interest in this area,” said David K. Campbell, editor in chief of Chaos, a physicist and provost at Boston University. “Why it’s become more interesting is that experimentalists can now watch these structures emerge.”

The patterns of flow have fascinated thinkers for centuries. In the 1500s, Leonardo da Vinci sketched the swirling eddies he saw in rivers and the vortexes of blood he imagined in the aortic valve. Just as those visible patterns of flow change quickly, eluding our ability to predict the fate of objects caught up in them, the hidden structures of flow also move and morph over time.

The concept of the structures grew out of dynamical systems theory, a branch of mathematics used to understand complicated phenomena that change over time. The discovery of the structures in a wide range of real-world cases has shown that they play a key role in complex and chaotic fluid flows in the atmosphere and ocean.

The structures are invisible because they often exist only as dividing lines between parts of a flow that are moving at different speeds and in different directions. In the ocean, the path of a drop of water on one side of such a structure might diverge from the path of a drop of water on the other side; they will drift farther apart as time passes.

“They aren’t something you can walk up to and touch,” Jerrold E. Marsden, an engineering and mathematics professor at Caltech, said of the structures. “But they are not purely mathematical constructions, either.”

As an analogy, Dr. Marsden suggests imagining a line that divides a part of a city that has been affected by a disease outbreak from a part that has not. The line is not a fence or a road, but it still marks a physical barrier. And as the outbreak spreads, the line will change.

To find the structures, scientists must track flow, not by watching it go by but from the perspective of the droplets of water or molecules of air moving in it. “It’s like being a surfer,” Dr. Campbell said. “You want to catch the wave and move with the wave.”

Moving boundary

In the laboratory, researchers shine lasers on tiny particles caught in a flow, capturing their speed and trajectory with fast, high-resolution digital cameras similar to the way tracer rounds from machine guns track the path of bullets. In the ocean or atmosphere, scientists rely on instantaneous data from high-frequency radar, laser detection systems, buoys and satellites. In the human body, phase-contrast magnetic resonance imaging has helped researchers map the complex patterns of blood flow in detail. Computers take in the data from all those sources, applying algorithms that unveil the flow structures.
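The algorithms alluded to above commonly locate Lagrangian coherent structures as ridges of the finite-time Lyapunov exponent (FTLE), which measures how quickly neighbouring particles in the flow separate. A minimal sketch, using a toy steady double-gyre-style velocity field as a stand-in for real radar data (the field, time horizon and step sizes are all illustrative assumptions, not the researchers’ models):

```python
import math

# Toy velocity field (an assumption for illustration): a steady,
# incompressible double-gyre-like flow, not data from Monterey Bay.
def velocity(x, y):
    u = -math.pi * math.sin(math.pi * x) * math.cos(math.pi * y)
    v =  math.pi * math.cos(math.pi * x) * math.sin(math.pi * y)
    return u, v

def flow_map(x, y, T=1.0, steps=100):
    """Advect a particle from (x, y) for time T with forward Euler."""
    dt = T / steps
    for _ in range(steps):
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
    return x, y

def ftle(x, y, T=1.0, h=1e-4):
    """Finite-time Lyapunov exponent at (x, y).

    Estimate the flow-map gradient F by finite differences, form the
    Cauchy-Green tensor C = F^T F, and return (1/|T|) * ln sqrt(lmax).
    """
    x1, y1 = flow_map(x + h, y, T)
    x2, y2 = flow_map(x - h, y, T)
    x3, y3 = flow_map(x, y + h, T)
    x4, y4 = flow_map(x, y - h, T)
    # Entries of the 2x2 flow-map gradient F, by central differences.
    a, b = (x1 - x2) / (2 * h), (x3 - x4) / (2 * h)
    c, d = (y1 - y2) / (2 * h), (y3 - y4) / (2 * h)
    # Largest eigenvalue of C = F^T F (symmetric 2x2).
    p, q, r = a * a + c * c, b * b + d * d, a * b + c * d
    lmax = 0.5 * (p + q + math.sqrt((p - q) ** 2 + 4 * r * r))
    return math.log(math.sqrt(lmax)) / abs(T)

# High-FTLE ridges mark the hidden "skeleton": material lines that
# separate regions of the flow with diverging fates.
```

Evaluating the FTLE over a grid of starting points, and repeating as the measured velocity field changes, yields the moving ridges described in the article.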

“We’re just recognizing that these things exist and are playing a role in a variety of scenarios,” said Thomas Peacock, a mechanical engineering professor at M.I.T. who is evaluating how Lagrangian coherent structures affect vehicle performance and efficiency. “The idea is that cars, airplanes and submarines down the line would be fitted with sensors that will help them adapt to these structures.”

Studies of the air flow patterns surrounding Hong Kong International Airport have shown that Lagrangian coherent structures cause unexpected jolts to planes during landing attempts, forcing pilots to waste fuel while they revert to holding patterns. George Haller, an engineering professor at McGill University in Montreal who forged the mathematical criteria for finding such structures in fluid flows, is working with the airport’s officials to design a tool that allows pilots to see and navigate around the structures. It will rely on data from laser scans, analyzed by computers as planes approach the airport.

At Stanford, researchers are mapping blood flow in patients with abdominal aortic aneurysms to see whether frequent exercise changes the flow structures in ways that correlate to slower bulging of the artery.

The scientists studying Monterey Bay found a Lagrangian coherent structure that acts as a moving ridge, separating a region of the bay that spreads pollutants out to sea from a region that recirculates them within the bay. They watched this ridge drift and change over 22 days and found that, if computed in real time, it could be used to predict one-day windows during which pollutants would do less damage to the bay environment.

The scientists proposed building a holding tank for the fertilizers and pesticides that wash from farmland into the neighboring watershed, one that would release pollutants only at times when they would quickly drift into the ocean, where they would be so diluted as to pose less harm to marine life. In a later experiment, the scientists found that the paths of buoys dispatched in the bay followed those predicted by the computer simulations.

Researchers who studied the waters along the southeastern coast of Florida found a similar structure that they argued could be used to reduce the effects of pollution near Hollywood Beach, south of Fort Lauderdale.

Their research in Monterey Bay piqued the interest of Art Allen, a physical oceanographer for the Coast Guard who thinks that Lagrangian coherent structures could improve search-and-rescue operations for people lost at sea by offering more precision than current techniques.

Researchers in private industry and the French Navy have expressed interest in using models of the structures to track the spread of oil after spills in coastal areas, said Francois Lekien, an applied mathematics professor at the École Polytechnique at the Université Libre de Bruxelles in Belgium who was a co-author of the bay studies.

Strategies based on Lagrangian coherent structures have yet to be tested to see if they curb coastal pollution. And they have several limitations. Scientists cannot yet predict what happens to pollutants that do not float on the ocean surface. The models do not yet account for the interaction with wind patterns that also guide how floating objects or people drift at sea. The method also requires continuing, detailed data akin to what was available in Monterey Bay, which has an ocean monitoring program that far surpasses that of most coastal areas.

Even if the structures in flow do not guide engineering or pollution strategies as well as researchers hope, many scientists believe that unearthing and visualizing them provides useful insights. For example, the structures identified in coastal waters have exposed flaws in our intuition about flow. “There are myths out there that it’s O.K. to dump pollutants at high tide,” said Dr. Marsden, co-author of the Monterey Bay and coastal Florida studies. “But it’s really these structures that will determine where pollutants end up.”

Finding the structures in various settings has also given researchers a fresh perspective on what remains a great scientific puzzle: the dynamics of flow.

“In complex systems such as the atmosphere, there are a lot of things that people can’t explain offhand,” Dr. Haller said. “People used to attribute this to randomness or chaos. But it turns out, when you look at data sets and find these structures, you can actually explain those patterns.”

Bina Venkataraman, New York Times



Lab Demonstrates 3-D Printing In Glass


An object printed from powdered glass, using the Solheim Lab’s new Vitraglyphic process.

A team of engineers and artists working at the University of Washington’s Solheim Rapid Manufacturing Laboratory has developed a way to create glass objects using a conventional 3-D printer. The technique allows a new type of material to be used in such devices.

The team’s method, which it named the Vitraglyphic process, is a follow-up to the Solheim Lab’s success last spring printing with ceramics.

“It became clear that if we could get a material into powder form at about 20 microns we could print just about anything,” said Mark Ganter, a UW professor of mechanical engineering and co-director of the Solheim Lab. (Twenty microns is less than one thousandth of an inch.)

Three-dimensional printers are used as a cheap, fast way to build prototype parts. In a typical powder-based 3-D printing system, a thin layer of powder is spread over a platform and software directs an inkjet printer to deposit droplets of binder solution only where needed. The binder reacts with the powder to bind the particles together and create a 3-D object.
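The layer-by-layer loop just described can be sketched in a few lines. This is an illustrative toy model (the voxel grid, command names and layer height are all hypothetical), not the control software of any actual printer:

```python
# A minimal sketch of the powder-bed inkjet loop described above.
# The part is a hypothetical voxel array: True where the part is solid.

def print_part(voxels, layer_height_mm=0.1):
    """Build a part layer by layer: spread a thin bed of powder, then
    deposit binder droplets only on voxels inside the slice's cross-section."""
    commands = []
    for z, layer in enumerate(voxels):                # one slice per layer
        commands.append(("spread_powder", z * layer_height_mm))
        for y, row in enumerate(layer):
            for x, solid in enumerate(row):
                if solid:                             # binder only where needed
                    commands.append(("jet_binder", x, y))
    return commands

# A two-layer, 2x2 "part" with one solid voxel per layer:
voxels = [[[True, False], [False, False]],
          [[False, False], [False, True]]]
cmds = print_part(voxels)
```

Where the binder lands, it reacts with the powder to bind the particles; the unbound powder supports the part until it is removed.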

Glass powder doesn’t readily absorb liquid, however, so the approach used with ceramic printing had to be radically altered.

“Using our normal process to print objects produced gelatin-like parts when we used glass powders,” said mechanical engineering graduate student Grant Marchelli, who led the experimentation. “We had to reformulate our approach for both powder and binder.”

By adjusting the ratio of powder to liquid the team found a way to build solid parts out of powdered glass purchased from Spectrum Glass in Woodinville, Wash. Their successful formulation held together and fused when heated to the required temperature.

Glass is a material that can be transparent or opaque, but is distinguished as an inorganic material (one which contains no carbon) that solidifies from a molten state without the molecules forming an ordered crystalline structure. Glass molecules remain in a disordered state, so glass is technically a super-cooled liquid rather than a true solid.

In an instance of new technology rediscovering and building on the past, Ganter points out that 3-D printed glass bears remarkable similarities to pâte de verre, a technique for creating glassware. In pâte de verre, glass powder is mixed with a binding material such as egg white or enamel, placed in a mold and fired. The technique dates from early Egyptian times. With 3-D printing, the technique takes on a modern twist.

As with its ceramics 3-D printing recipe, the Solheim lab is releasing its method of printing glass for general use.

“By publishing these recipes without proprietary claims, we hope to encourage further experimentation and innovation within artistic and design communities,” said Duane Storti, a UW associate professor of mechanical engineering and co-director of the Solheim Lab.

Artist Meghan Trainor, a graduate student in the UW’s Center for Digital Arts and Experimental Media working at the Solheim Lab, was the first to use the new method to produce objects other than test shapes.

“Creating kiln-fired glass objects from digital models gives my ideas an immediate material permanence, which is a key factor in my explorations of digital art forms,” Trainor said. “Moving from idea to design to printed part in such a short period of time creates an engaging iterative process where the glass objects form part of a tactile feedback loop.”

Ronald Rael, an assistant professor of architecture at the University of California, Berkeley, has been working with the Solheim Lab to set up his own 3-D printer. Rael is working on new kinds of ceramic bricks that can be used for evaporative cooling systems.

“3-D printing in glass has huge potential for changing the thinking about applications of glass in architecture,” Rael said. “Before now, there was no good method of rapid prototyping in glass, so testing designs is an expensive, time-consuming process.” Rael adds that 3-D printing allows one to insert different forms of glass to change the performance of the material at specific positions as required by the design.

The new method would also create a way to repurpose used glass for new functions, Ganter said. He sees recycled glass as a low-cost material that can help bring 3-D printing within the budget of a broader community of artists and designers.

The Solheim Rapid Prototyping Laboratory, on the UW’s Seattle campus, specializes in advanced research and teaching in solid modeling, rapid prototyping, and innovative 3-D printing systems.



By 2040 you will be able to upload your brain…

…or at least that’s what Ray Kurzweil thinks. He has spent his life inventing machines that help people, from the blind to dyslexics. Now, he believes we’re on the brink of a new age – the ‘singularity’ – when mind-boggling technology will allow us to email each other toast, run as fast as Usain Bolt (for 15 minutes) – and even live forever. Is there sense to his science – or is the man who reasons that one day he’ll bring his dad back from the grave just a mad professor peddling a nightmare vision of the future?


Standing up for GM: Kurzweil believes that opposition to advances such as genetic modification harm humankind.

Should, by some terrible misfortune, Ray Kurzweil shuffle off his mortal coil tomorrow, the obituaries would record an inventor of rare and visionary talent. In 1976, he created the first machine capable of reading books to the blind, and less than a decade later he built the K250: the first music synthesizer to nigh-on perfectly duplicate the sound of a grand piano. His Kurzweil 3000 educational software, which helps students with learning difficulties such as dyslexia and attention deficit disorder, is likewise typical of an innovator who has made his name by combining restless imagination with technological ingenuity and a commendable sense of social responsibility.

However, these past accomplishments, as impressive as they are, would tell only half the Kurzweil story. The rest of his biography – the essence of his very existence, he would contend – belongs to the future.

Following the publication of his 2005 book, The Singularity is Near: When Humans Transcend Biology, Kurzweil has become known, above all, as a technology speculator whose predictions have polarised opinion – from stone-cold scepticism and splenetic disagreement to dedicated hero worship and admiration. It’s not just that he boldly envisions a tomorrow’s world where, for example, tiny robots will reverse the effects of pollution, artificial intelligence will far outstrip (and supplement) biological human intelligence, and humankind “will be able to live indefinitely without ageing”. No, the real reason Kurzweil has become such a magnet for blogospheric debate, and a tech-celebrity, is that he’s convinced those future predictions – and many more just as stunning – are imminent occurrences. They will all, he steadfastly maintains, happen before the middle of the 21st century.

Which means, regarding the earlier allusion to his mortal coil, that he doesn’t plan to do any shuffling any time soon. Ray Kurzweil, 61, sincerely believes that his own immortality is a realistic proposition… and just as strongly contends that, using a combination of grave-site DNA and future technologies, he will be able to reclaim his father, Fredric Kurzweil (the victim of a fatal heart attack in 1970), from death.

Just when will this ultimate life-affirming feat be possible? In Kurzweil’s estimation, we will be able to upload the human brain to a computer, capturing “a person’s entire personality, memory, skills and history”, by the end of the 2030s; humans and non-biological machines will then merge so effectively that the differences between them will no longer matter; and, after that, human intelligence, transformed for the better, will start to expand outward into the universe, around about 2045. With this last prediction, Kurzweil is referring not to any recognisable type of space travel, but to a kind of space infusion. “Intelligence,” he writes, “will begin to saturate the matter and energy in its midst [and] spread out from its origin on Earth.”

It’s as well to mention at this point that, in 2005, Mikhail Gorbachev personally congratulated Kurzweil for foreseeing the pivotal role of communications technology in the collapse of the Soviet Union, and that Microsoft chairman Bill Gates calls him “the best person I know at predicting the future of artificial intelligence”. A man of lesser accomplishments, touting the same head-spinning claims, would impress few beyond an inner circle of sci-fi obsessives, but Kurzweil – honoured as an inventor by US presidents Lyndon B Johnson and Bill Clinton – has rightfully earned himself a stockpile of credibility.

In person, chewing pensively on a banana, the softly spoken, slightly built Kurzweil looks chipper for his 61 years, and wears an elegantly tailored suit. A father of two, he resides in the Boston suburbs with his psychologist wife, Sonya, but has flown into Los Angeles for a private screening of Transcendent Man, the upcoming documentary that examines his life and theories over a suitably cosmic score by Philip Glass. “People don’t really get their intellectual arms around the changes that are happening,” he says, perched lightly on the edge of a large armchair, his overall sheen of wellbeing perhaps a shade more encouraging than you’d expect from a man of his age. “The issue is not just [that] something amazing is going to happen in 2045,” he says. “There’s something remarkable going on right now.”

To understand exactly what he means, and why he thinks that his predictions bear up to hard scrutiny, it’s necessary to return to the title of the above-mentioned book, and the grand idea on which it’s based: “the singularity”.

Borrowed from black-hole physics, in which the singularity is taken to signify what is unknowable, the term has been applied to technology to suggest that we haven’t really got a clue what’s going to happen once machines are vastly more “intelligent” than humans. The singularity, writes Kurzweil, is “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed”. He is not unique in his adoption of the idea – the information theorist John von Neumann hinted at it in the 1950s; retired maths professor and sci-fi author Vernor Vinge has been exploring it at length since the early 1980s – but Kurzweil’s version is currently the most popular “singularitarian” text.

“I didn’t come to these ideas because I had certain conclusions and worked backwards,” he explains. “In fact, I didn’t start looking for them at all. I was looking for a way to time my inventions and technology projects as I realised timing was the critical factor to success. And I made this discovery that if you measure certain underlying properties of information technology, it follows exquisitely predictable trajectories.”

For Kurzweil, the crux of the singularity is that the pace of technology is increasing at a super-fast, exponential rate. What’s more, there’s also “exponential growth in the rate of exponential growth”. It is this understanding that gives him the confidence to believe that technology – through an explosion of progress in genetics, nanotechnology and robotics – will soon surpass the limits of his imagination.

It is also why, in addition to bananas and the odd beneficial glass of red wine, he follows a regime of around 200 vitamin pills daily: not so much a diet as an attempt to “aggressively re-programme” his biochemistry. He claims that tests have shown he aged only two biological years over the course of 16 actual vitamin-popping years. He also says that, thanks to the regime, he has effectively cured himself of Type 2 diabetes. Not even open-heart surgery, which he underwent last year, and from which he made a rapid recovery (“a few hours later I was in the next room, and sent an email”) could dent his convictions. On the contrary, he thinks that the brevity of his convalescence is proof positive that the pills are working. If he slows down the ageing process, he reckons, he’ll be around long enough to witness the arrival of technology that will prolong his life… forever.

Kurzweil was raised in Queens, New York, where two youthful obsessions – electronics and music – would lead to a guest appearance on the 1960s TV quiz show I’ve Got a Secret, on which (aged 17) he showcased his first major invention: a home-made computer that could compose tunes. Five years later came the death (in 1970, when Ray was 22) of his father, Fredric, a struggling composer and conductor who, Kurzweil believes, never really got his due. “I’m painfully aware of the limitations he had, which were not his fault,” he says. “In that generation, information about health was not very available, and we didn’t have [today’s] resources for creating music. Now, a kid in a dorm room can create a whole orchestral composition on a synthesizer.”

The tragedy of that loss – and the fact that the means to repair a congenital heart defect were available to him, but not his father – is clearly an intense motivation for Kurzweil. Sometime soon, he believes, he will once again be able to converse with his father, such is the potential of the scientific advances he believes will ultimately pave the way to the singularity. Not everyone, though, concurs with his appraisal of technological progress, and his belief in the imminence of immortality.

Memorably, in the Transcendent Man documentary, Kevin Kelly, founding editor of future-thinking magazine Wired, labels Kurzweil a “deluded dreamer” who is “performing the services of a prophet”. In reacting to that assessment, Kurzweil’s habitually mellow tone of voice takes on a hint – albeit mild – of umbrage. “It’s interesting that [Kelly] says my views are ‘hard-wired’, when I actually think his views are hard-wired,” he says. “He’s a linear thinker, and linear thinking is hard-wired in our brains: it worked very well 1,000 years ago. Some people really are resistant to accepting this exponential perspective, and they’re very smart people. You show them the data, and yes, they follow it, but they just cannot get past it. Other people accept it readily.”

Whereas Kelly differs from Kurzweil on the grounds of interpretation and tone, other voices of dispute are rooted in a deep-seated fear of technological calamity. “The form of opposition from fundamentalist humanists, and fundamentalist naturalists – that we should make no change to nature [or] to human beings – is directly contrary to the nature of human beings, because we are the species that goes beyond our limitations,” counters Kurzweil. “And I think that’s quite a destructive school of thought – you can show that hundreds of thousands of kids went blind in Africa due to the opposition to [genetically engineered] golden rice. The opposition to genetically modified organisms is just a blanket, reflexive opposition to the idea of changing nature. Nature, and the natural human condition, generates tremendous suffering. We have the means to overcome that, and we should deploy it.”

To those opponents who detect a thick strain of techno-evangelism in Kurzweil’s basically optimistic interpretation of the singularity, he reacts with self-parody: there’s a tongue-in-cheek photo in The Singularity is Near of the author wearing a sandwich board bearing the book’s title, and he insists he was never “searching for an alternative to customary faith”. At the same time, he says humankind’s inevitable move towards non-biological intelligence is “an essentially spiritual undertaking”.

Whether or not he attracts a significant following of dedicated believers in search of deliverance, ecstasy or any variation thereof (some commentators have called the singularity “the rapture for geeks”), Kurzweil has undoubtedly positioned himself at the heart of a growing singularity industry. He is a director of the non-profit Singularity Institute for Artificial Intelligence, “the only organisation that exists for the expressed purpose of achieving the potential of smarter-than-human intelligence safer and sooner”; there’s a second film awaiting release (part fiction, part documentary, co-produced by Kurzweil), also based on The Singularity is Near; and in addition to his theoretical books, he has co-authored a series of health titles, including Transcend: Nine Steps to Living Well Forever and Fantastic Voyage: Live Long Enough to Live Forever. The secret of immortality, he wants you to know, is available in book form.

Those who have lent Kurzweil their support include space-travel pioneer Peter Diamandis, chairman of the X-Prize Foundation; videogame designer (and creator of Spore and SimCity) Will Wright; and Nobel Prize-winning astrophysicist George Smoot. All three can be found on the faculty and adviser list of the recently founded Singularity University (Silicon Valley), of which Kurzweil is chancellor and trustee.

If the pace of technology continues to accelerate, as Kurzweil predicts, it seems likely that discussion of the singularity will see an exponential growth of its own. Few would dispute that it’s one of the 21st century’s most compelling ideas, because it connects issues that intensely polarise people (God, the energy crisis, genetic engineering) with sci-fi concepts that stir the imagination (artificial intelligence, immersive virtual reality, molecular engineering). Thanks largely to Kurzweil and the singularity, scenarios once viewed as diverting entertainment are being reappraised with a new seriousness. The line between fanciful thinker and credible, scientific analyst is becoming blurred: what once would have been relegated to the realms of sci-fi is now gaining factual currency.

“People can wax philosophically,” says Kurzweil. “It’s very abstract – whether it’s a good thing to overcome death or not – but when it comes to some new methodology that’s a better treatment for cancer, there’s no controversy. Nobody’s picketing doctors who put computers inside people’s brains for Parkinson’s: it’s not considered controversial.”

Might that change as more people become aware of the singularity and the pace of technological change? “People can argue about it,” says Kurzweil, relaxed as ever within his aura of certainty. “But when it comes down to accepting each step along the way, it’s done really without much debate.”

The greatest thing since sliced bread?

Ray Kurzweil’s guide to incredible future technologies — and when he thinks they’re likely to arrive

1 Reconnaissance dust

“This so-called ‘smart dust’ – tiny devices that are almost invisible but contain sensors, computers and communication capabilities – is already being experimented with. Practical use of these devices is likely within 10 to 15 years”

2 Nano assemblers

“Basically, these are three-dimensional printers that can create a physical object from an information file and inexpensive input materials. So we could email a blouse or a toaster or even the toast. There is already an industry of three-dimensional printers, and the resolution of the devices that can be created is getting finer and finer. The nano assembler would assemble devices from molecules and molecular fragments, and is about 20 years away”

3 Respirocytes

“A respirocyte is a nanobot (a blood cell-sized device) that is designed to replace our biological red blood cells but is 1,000 times more capable. If you replaced a portion of your biological red blood cells with these robotic versions you could do an Olympic sprint for 15 minutes without taking a breath, or sit at the bottom of a swimming pool for four hours. These are about 20 years away”

4 Foglets

“Foglets are a form of nanobots that can reassemble themselves into a wide variety of objects in the real world, essentially bringing the rapid morphing qualities of virtual reality to real reality. Nanobots that can perform useful therapeutic functions in our bodies, essentially keeping us healthy from inside, are only about 20 years away. Foglets are more advanced and are probably 30 to 40 years away”

5 Blue goo

“The concern with full-scale nanotechnology and nanobots is that if they had the capability to replicate in a natural environment (as bacteria and other pathogens do), they could destroy humanity or even all of the biomass. This is called the grey goo concern. When that becomes feasible we will need a nanotechnology immune system. The nanobots that would be protecting us from harmful self-replicating nanobots are called blue goo (blue as in police). This scenario is 20 to 30 years away.”



Seti: The hunt for ET

Scientists have been searching for aliens for 50 years, scanning the skies with an ever-more sophisticated array of radio telescopes and computers. Known as Seti, the search marks its half-century this month. Jennifer Armstrong and Andrew Johnson examine its close – and not so close – encounters.



1. Seti stands for the Search for ExtraTerrestrial Intelligence.

2. If intelligent aliens are out there, Dr Seth Shostak, the Seti Institute’s senior astronomer, believes they will be “thinking machines”. He expects a highly advanced species to be several centuries ahead of us in technological development.

3. Professor Duncan Forgan, an astronomer from Edinburgh University, estimates that between 360 and 38,000 life forms capable of interstellar communication have evolved at some point in the history of our galaxy.

4. In April 2006, Dr Shostak predicted we would find evidence of extraterrestrial life between 2020 and 2025. He believes the best way of bringing them up to speed with the human race is to send them the contents of the internet.

5. So far, however, no alien signals have been heard.

6. It was a September 1959 article in the journal Nature that persuaded the scientific community that, despite the unlikely aliens found in the era’s Cold War-inspired UFO films, alien intelligence was more likely than not, so kick-starting the Seti project.

7. The search proper began in 1960, however, with “Project Ozma” at the Green Bank radio telescope in West Virginia, America, directed by a Harvard graduate, Frank Drake.

8. Project Ozma was named after the queen of L Frank Baum’s fictional land of Oz, a place which is “very far away, difficult to reach, and populated by strange and exotic beings”.

9. The Microsoft co-founder Paul Allen is funding 42 radio antennae for the Seti project – the Allen Telescope Array in California – at a cost of £16m. It powered up this month.

10. When complete, the Allen Telescope Array will have 350 antenna dishes, each six metres in diameter.

11. At the moment, scientists scavenge time on the world’s biggest telescopes to hunt for signals. One of the most significant is the Arecibo Observatory radio telescope in Puerto Rico, made famous by Pierce Brosnan in the final sequence of the James Bond film GoldenEye. It’s the world’s biggest, with a 305m diameter.

12. The most promising radio signal found to date, SHGb02+14a, was detected in 2003 at Arecibo. It was found on three occasions but emanates from between the constellations of Pisces and Aries, where there are no stars. It is also a very weak signal. Scientists think it may have been caused by an astronomical phenomenon or a computer glitch.

13. A set of quickly pulsing signals known as LGM1 (Little Green Men) caused great excitement in 1967. It turned out that they were from a previously unknown class of super-dense rotating neutron stars now known as pulsars. The discovery won Antony Hewish, emeritus professor of radio astronomy at Cambridge University, a Nobel prize.

14. While radio telescopes on Earth are tuned into frequencies that scientists believe are the most likely to be used by intelligent life, there have been many attempts to contact aliens by sending signals and objects from Earth to likely-looking stars.

15. In 1974, astronomers sent crude pictures of humans, our DNA and our solar system to the star cluster M13, which is 21,000 light years away and contains a third of a million stars.

16. In 2001 a “reply” to the 1974 message was found in Hampshire in the form of a crop circle, featuring crude pictures of an alien, modified DNA and an improbable solar system. It is believed to be a hoax.

17. Nasa’s attempt to communicate with aliens by playing a Beatles track in February 2008 caused consternation. Some scientists pointed out that making a highly advanced race, which might have exhausted all the resources on their planet, aware of our existence might not be the most sensible thing to do.

18. Now an international agreement is in place preventing any reply to an extraterrestrial signal unless there is agreement that it’s a good idea.

19. However, if Einstein’s theory is correct that it is impossible to travel faster than the speed of light, there is no need to worry. It would take extraterrestrial life-forms millennia to reach us, unless they had the technology to cut corners in space by travelling through highly theoretical tunnels called wormholes. “You’re not going to see them in person, I think,” Dr Shostak said. “To go from here to the nearest star is a project requiring a 100,000-year trip. And that’s longer than you’re going to want to sit there eating airline food.”

20. But maybe they do have wormhole technology. See No 2.

21. The nearest stars likely to have planets are three parsecs away (one parsec equals 3.26 light years, or 19 million million miles) so even if a common language were found, it would take a century to communicate.

22. Seti hit the headlines in 1977 when a volunteer found a strong signal and wrote “Wow!” in the margin of a printout. The “Wow! Signal”, as it came to be known, was never found again despite repeated attempts.

23. Frank Drake’s Ozma project was originally kept secret as the observatory was government-funded and nobody wanted to let Congress know they were looking for aliens.

24. Nevertheless, Congress pulled the plug in 1993. The project is now funded by private donations.

25. Five million people have joined a scheme, launched by the University of California in 1999, in which home computers help sift the millions of Seti readings during their “downtime” after a special screensaver is downloaded. SETI@home is the world’s largest supercomputer.

26. SETI@home can do tens to hundreds of billions of operations per second.

27. There are now many group-computing projects using the same software as SETI@home, from decoding Enigma messages sent in the Second World War to predicting future climates or helping to find a cure for Aids.

28. Searches for other-worldly intelligence also involve looking for signals aliens may have sent us using light waves or infrared as well as radio waves.

29. The Drake equation (N = N* × fp × ne × fl × fi × fc × fL) was created by Frank Drake in 1961 to work out how many intelligent civilisations there may be in our galaxy. The values stand for quantities such as the number of stars and the estimated number of planets. The answer varies from 2.31 to 1,000, as many of the values rely on guesswork.

30. Gene Roddenberry used the equation to justify the number of inhabited planets discovered by the crew of the Starship Enterprise in Star Trek.

31. Scientists admit, however, that aliens may already have tried to contact us with a form of communication completely unknown to us – a bit like trying to make contact with a lost tribe in Borneo using TV signals.

32. In 1950, the Italian-born Nobel laureate and nuclear scientist Enrico Fermi posed what became known as the Fermi paradox: if alien life is so probable, why haven’t we detected any?

33. In the mid-1990s, Seti scientists thought they were on to something when they picked up a signal every evening at 7pm. It turned out to be from a microwave oven used by technicians in the cellar at the Parkes Observatory in Australia. There is now a note on the microwave asking people not to use it while Seti is active.

34. Other false calls have included signals from electronic garage doors, jet airliners, radios, televisions and even the Pioneer space craft. “We found intelligent life,” said Richard Davis, a radio astronomer at Jodrell Bank in Cheshire, “but it was us.”

35. The privately funded Seti Institute in California has an annual budget of $7m. It employs 130 staff and was founded 25 years ago in November.

36. The MoD recorded 394 UFO sightings in the UK in the first eight months of this year.

37. In 1996 only six exoplanets – those outside our solar system – had been found. Now nearly 400 have been discovered. Although none are Earth-like, scientists believe it is just a matter of time before one shows up.

38. Which is why Nasa launched the Kepler telescope in March. It will survey 100,000 Sun-like stars over the next four years, looking for Earth-like planets in the “Goldilocks Zone” – a distance from the Sun that is not too hot and not too cold.

39. Some think early flying saucer stories originated from spottings of experimental Nazi aircraft.

40. In June this year, Seti upgraded its Serendip (Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) programme at Arecibo. The first programme listened to 100 channels simultaneously; the new one can track more than two billion.

41. ET and Close Encounters director Steven Spielberg has been obsessed with the search for life outside our planet since childhood and donates money to Seti.

42. Ellie Arroway, Jodie Foster’s character in the film Contact, finds aliens using the same methods as Project Phoenix, a Seti radio-wave-analysing programme based in Australia.

43. Those hopeful of so-called “exo-biology” have been encouraged by recent discoveries of the building blocks of life floating around in space. Radio telescopes have picked up the chemical signatures of 150 molecules in interstellar space, including sugar, alcohol and amino acids.

44. The twin Voyager space probes, launched in 1977, carried gold discs containing information about Earth, including recordings of greetings in 54 different human languages, humpback whale song, 117 pictures of Earth and a collection of sounds including music from Mozart to Louis Armstrong. The discs were put together by the Seti advocate Carl Sagan at the request of Nasa. It will be 40,000 years before the discs get anywhere near another planetary system.

45. If aliens do find them, they will need to locate an old vinyl record player. Fortunately, there are instructions and a stylus on the spaceship.

46. In the 1820s the German mathematician Carl Friedrich Gauss tried to contact aliens by reflecting sunlight towards planets. He also wanted to cut a giant triangle into the Siberian forest and plant wheat inside to show a geometric object visible from the Moon.

47. Around the same time, the Austrian mathematician Joseph Johann von Littrow proposed digging a circular canal in the Sahara 20 miles in diameter, filling it with paraffin and setting it on fire, thus alerting alien species to our existence.

48. Charles Cros, a French poet and inventor, thought spots of light on Mars and Venus were indicators of civilisations. He tried to convince the French government to build a giant mirror to communicate with the aliens. The lights he saw were probably noctilucent clouds (clouds so high they reflect sunlight at night); the mirror was almost certainly impossible to build.

49. Japan has prepared guidelines on how to handle aliens if they land and a strategy to defend the country from alien attack.

50. Early alien hunters at the 1960 conference at Green Bank, West Virginia, which established Seti as a scientific discipline, called themselves the Order of the Dolphin in honour of John Lilly, who had recently concluded that dolphins were intelligent and pioneered attempts to communicate with them.
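
Fact 29’s Drake equation is simply a product of seven factors, which makes it easy to see why estimates swing so widely. A minimal Python sketch (the factor values below are invented placeholders for illustration, not figures used by Drake or Seti) shows how small changes to the guessed fractions move the answer by orders of magnitude:

```python
def drake(n_stars, f_planets, n_habitable, f_life,
          f_intelligent, f_communicating, f_lifetime):
    """Estimate N, the number of communicating civilisations in the
    galaxy, by multiplying the seven Drake-equation factors."""
    return (n_stars * f_planets * n_habitable * f_life *
            f_intelligent * f_communicating * f_lifetime)

# Two sets of made-up values: modest tweaks to the guessed fractions
# shift the result by roughly three orders of magnitude.
pessimistic = drake(1e11, 0.5, 2, 0.3, 0.01, 0.1, 1e-7)   # roughly 3
optimistic = drake(1e11, 0.5, 2, 1.0, 0.1, 0.2, 1e-6)     # roughly 2,000
print(pessimistic, optimistic)
```

Because several of the factors are pure guesswork, published answers spread across orders of magnitude, which is exactly the range the list above reports.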



Superheavy Element 114 Confirmed: A Stepping Stone To The ‘Island Of Stability’

Scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory have been able to confirm the production of the superheavy element 114, ten years after a group in Russia, at the Joint Institute for Nuclear Research in Dubna, first claimed to have made it. The search for 114 has long been a key part of the quest for nuclear science’s hoped-for Island of Stability.

Heino Nitsche, head of the Heavy Element Nuclear and Radiochemistry Group in Berkeley Lab’s Nuclear Science Division (NSD) and a professor of chemistry at the University of California at Berkeley, and Ken Gregorich, a senior staff scientist in NSD, led the team that independently confirmed the production of the new element, which was first published by the Dubna Gas Filled Recoil Separator group.

Using an instrument called the Berkeley Gas-filled Separator (BGS) at Berkeley Lab’s 88-Inch Cyclotron, the researchers were able to confirm the creation of two individual nuclei of element 114, each a separate isotope having 114 protons but different numbers of neutrons, and each decaying by a separate pathway.

“By verifying the production of element 114, we have removed any doubts about the validity of the Dubna group’s claims,” says Nitsche. “This proves that the most interesting superheavy elements can in fact be made in the laboratory.”

Verification of element 114 is reported in Physical Review Letters. In addition to Nitsche and Gregorich, the Berkeley Lab team included Liv Stavsetra, now at the Institute for Energy Technology in Kjeller, Norway; Berkeley Lab postdoctoral fellow Jan Dvořák; and UC graduate students Mitch André Garcia, Irena Dragojević, and Paul Ellison, with laboratory support from UC Berkeley postdoctoral fellow Zuzana Dvořáková.

The realm of the superheavy

Elements heavier than uranium, element 92 – the atomic number refers to the number of protons in the nucleus – are radioactive and decay in a time shorter than the age of Earth; thus they are not found in nature (although traces of transient neptunium and plutonium can sometimes be found in uranium ore). Elements up to 111 and the recently confirmed 112 have been made artificially – those with lower atomic numbers in nuclear reactors and nuclear explosions, the higher ones in accelerators – and typically decay very rapidly, within a few seconds or fractions of a second.

Beginning in the late 1950s, scientists including Gertrude Scharff-Goldhaber at Brookhaven and theorist Wladyslaw Swiatecki, who had recently moved to Berkeley and is a retired member of Berkeley Lab’s NSD, calculated that superheavy elements with certain combinations of protons and neutrons arranged in shells in the nucleus would be relatively stable, eventually reaching an “Island of Stability” where their lifetimes could be measured in minutes or days – or even, some optimists think, in millions of years. Early models suggested that an element with 114 protons and 184 neutrons might be such a stable element. Longtime Berkeley Lab nuclear chemist Glenn Seaborg, then Chairman of the Atomic Energy Commission, encouraged searches for superheavy elements with the necessary “magic numbers” of nucleons.

“People have been dreaming of superheavy elements since the 1960s,” says Gregorich. “But it’s unusual for important results like the Dubna group’s claim to have produced 114 to go unconfirmed for so long. Scientists were beginning to wonder if superheavy elements were real.”

To create a superheavy nucleus requires shooting one kind of atom at a target made of another kind; the total protons in both projectile and target nuclei must at least equal that of the quarry. Confirming the Dubna results meant aiming a beam of 48Ca ions – calcium whose nuclei have 20 protons and 28 neutrons – at a target containing 242Pu, the plutonium isotope with 94 protons and 148 neutrons. The 88-Inch Cyclotron’s versatile Advanced Electron Cyclotron Resonance ion source readily created a beam of highly charged calcium ions, atoms lacking 11 electrons, which the 88-Inch Cyclotron then accelerated to the desired energy.

Four plutonium oxide target segments were mounted on a wheel 9.5 centimeters (about 4 inches) in diameter, which spun 12 to 14 times a second to dissipate heat under the bombardment of the cyclotron beam.

“Plutonium is notoriously difficult to manage,” says Nitsche, “and every group makes their targets differently, but long experience has given us at Berkeley a thorough understanding of the process.” (Experience is especially long at Berkeley Lab and UC Berkeley – not least because Glenn Seaborg discovered plutonium here early in 1941.)

When projectile and target nuclei interact in the target, many different kinds of nuclear reaction products fly out the back. Because nuclei of superheavy elements are rare and short-lived, both the Dubna group and the Berkeley group use gas-filled separators, in which dilute gas and tuned magnetic fields sweep the copious debris of beam-target collisions out of the way, ideally leaving only compound nuclei with the desired mass to reach the detector. The Berkeley Gas-filled Separator had to be modified for radioactive containment before radioactive targets could be used.

In sum, says Gregorich, “The high beam intensities from the 88-Inch Cyclotron, together with the efficient background suppression of the BGS, allow us to look for nuclear reaction products with very small cross-sections – that is, very low probabilities of being produced. In the case of element 114, that turned out to be just two nuclei in eight days of running the experiment almost continuously.”

Tracking the isotopes of 114

The researchers identified the two isotopes as 286-114 (114 protons and 172 neutrons) and 287-114 (114 protons and 173 neutrons). The former, 286-114, decayed in about a tenth of a second by emitting an alpha particle (2 protons and 2 neutrons, a helium nucleus) – thus becoming a “daughter” nucleus of element 112 – which subsequently spontaneously fissioned into smaller nuclei. The latter, 287-114, decayed in about half a second by emitting an alpha particle to form element 112, which also then emitted an alpha particle to form daughter element 110, before spontaneously fissioning into smaller nuclei.
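
The bookkeeping behind these decay chains is simple integer arithmetic: fusing calcium-48 with plutonium-242 and boiling off a few neutrons yields the two observed isotopes, and each alpha emission then removes two protons and two neutrons. A small sketch (illustrative only, not the Berkeley group's analysis software):

```python
def fuse(z_proj, a_proj, z_targ, a_targ, evaporated_neutrons):
    """Proton and mass numbers of the compound nucleus formed by
    projectile + target, minus the neutrons evaporated immediately
    after fusion."""
    return z_proj + z_targ, a_proj + a_targ - evaporated_neutrons

def alpha_decay(z, a):
    """Daughter nuclide after emitting an alpha particle
    (2 protons, 2 neutrons)."""
    return z - 2, a - 4

# 48Ca (Z=20, A=48) on 242Pu (Z=94, A=242) gives element 114;
# evaporating 4 or 3 neutrons leaves the two isotopes seen here.
print(fuse(20, 48, 94, 242, 4))  # -> (114, 286)
print(fuse(20, 48, 94, 242, 3))  # -> (114, 287)

# The mass-287 isotope alpha-decays to element 112, then to
# element 110, before spontaneous fission:
z, a = alpha_decay(114, 287)     # -> (112, 283)
z, a = alpha_decay(z, a)         # -> (110, 279)
print(z, a)                      # -> 110 279
```

The same arithmetic explains why the projectile and target must together supply at least 114 protons: 20 from calcium plus 94 from plutonium is exactly 114.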

The Berkeley Group’s success in finding these two 114 nuclei and tracking their decay depended on sophisticated methods of detection, data collection, and concurrent data analysis. After passing through the BGS, the candidate nucleus enters a detector chamber. If a candidate element 114 atom is detected, and is subsequently seen to decay by alpha-particle emission, the cyclotron beam instantly shuts off so further decay events can be recorded without background interference.

In addition to such automatic methods of enhancing data collection, the data was analyzed by completely independent software programs, one written by Gregorich and refined by team member Liv Stavsetra, another written by team member Jan Dvořák.

“One surprise was that the 114 nuclei had much smaller cross sections – were much less likely to form – than the Dubna group reported,” Nitsche says. “We expected to get about six in our eight-day experiment but only got two. Nevertheless, the decay modes, lifetimes, and energies were all consistent with the Dubna reports and amply confirm their achievement.”

Says Gregorich, “Based on the ideas of the 1960s, we thought when we got to element 114 we would have reached the Island of Stability. More recent theories suggest enhanced stability at other proton numbers, perhaps 120, perhaps 126. The work we’re doing now will help us decide which theories are correct and how we should modify our models.”

Nitsche adds, “During the last 20 years, many relatively stable isotopes have been discovered that lie between the known heavy element isotopes and the Island of Stability – essentially they can be considered as ‘stepping stones’ to this island. The question is, how far does the Island extend – from 114 to perhaps 120 or 126? And how high does it rise out of the Sea of Instability?”

The accumulated expertise in Berkeley Lab’s Nuclear Science Division; the recently upgraded Berkeley Gas-filled Separator that can use radioactive targets; the more powerful and versatile VENUS ion source that will soon come online under the direction of operations program head Daniela Leitner – all add up to Berkeley Lab’s 88-Inch Cyclotron remaining highly competitive in the ongoing search for a stable island in the sea of nuclear instability.

This work was supported by the U.S. Department of Energy’s Office of Science.



A Damp Moon Overhead

We’re sure that somewhere a marketer is already designing the campaign for Moon Water — available, of course, in attractive, biodegradable containers. Scientists analyzing data collected by three spacecraft have discovered that there may be a fair amount of water — or hydroxyl, which is one hydrogen atom short of being water — on the Moon, albeit spread out in millimeter-thin layers on or near the surface.

This will take some re-imagining, especially after those pictures from the Apollo missions that showed a spectacularly dry, dusty and oasis-free place. It is also a place where temperature swings are extreme, which means it should be inhospitable to a volatile compound like water. These new findings suggest that there is water lurking not only in permanently shadowed craters near the lunar poles but also elsewhere on the lunar surface.

If a decision is made to build a new space base on the Moon — and space enthusiasts differ on the value of doing so — its inhabitants may be able to extract some water and oxygen from the soil. As for where the water comes from, scientists suggest it may be created when protons in the solar wind collide with the Moon’s surface and trigger reactions that produce water. Forty years ago, there was evidence of water in the lunar soil samples brought back by astronauts. At the time, scientists dismissed the possibility: the Moon was too dry, they reasoned, so the samples must have been corrupted by Houston’s moist air.

That’s what comes of living on a truly wet planet.

Editorial, New York Times




See also:

In Surprise, Moon Shows Signs of Water


Images of the Moon captured in 1999 by the Cassini spacecraft show regions of trace surface water (blue) and hydroxyl (orange and green).

There appears to be, to the surprise of planetary scientists, water, water everywhere on the Moon, although how many drops future astronauts might be able to drink is not clear.

Data from three spacecraft indicate the widespread presence of water or hydroxyl, a molecule consisting of one hydrogen atom and one oxygen atom as opposed to the two hydrogen and one oxygen atoms that make up a water molecule. The discoveries are being published Thursday on the Web site of the journal Science.

“It’s so startling because it’s so pervasive,” said Lawrence A. Taylor of the University of Tennessee, Knoxville, a co-author of one of the papers that analyzed data from a National Aeronautics and Space Administration instrument aboard India’s Chandrayaan-1 satellite. “It’s like somebody painted the globe.”

For decades, the Moon has been regarded as a completely dry place. Its night side is beyond ice cold, but when the surface passes into sunlight, any ice should long ago have been baked away. The possible exceptions are permanently shadowed craters near the Moon’s poles, and data announced this month by NASA verified the presence of hydrogen in those areas, which would most likely be in the form of water.

If water is somehow more widespread, that could make future settlement of the Moon easier, especially if significant water could be extracted just by heating the soil. Oxygen would also be a key component for breathable air for astronauts, and hydrogen and oxygen can also be used for rocket fuel or power generation.

Samples of lunar soil brought back from NASA’s Apollo missions about four decades ago actually did show signs of water, but most scientists working with the samples, including Dr. Taylor, dismissed the readings as contamination from humid Houston air that seeped in before the rocks were analyzed at NASA’s Johnson Space Center.

“I was one of the ones back in the Apollo days that was firmly against lunar water,” Dr. Taylor said.

Now he is convinced he was wrong. “I’ve eaten my shorts,” he said.

The Chandrayaan-1 instrument analyzed sunlight reflected off the Moon’s surface and found a dip at the wavelengths where water and hydroxyl absorb infrared light. Dr. Taylor estimated the concentration at about one quart of water per cubic yard of lunar soil and rock.

Meanwhile, Roger N. Clark of the United States Geological Survey analyzed decade-old data from NASA’s Cassini spacecraft when it passed the Moon en route to Saturn. He, too, found signs of water or hydroxyl, mostly at the poles, but also at lower latitudes.

Scientists working with the Deep Impact spacecraft, which had earlier studied Comet Tempel 1, also found infrared absorption at the water and hydroxyl wavelengths. More interesting, the amount of absorption — and thus the quantity of water — varied with temperature.

That suggests the water is being created when protons from the solar wind slam into the lunar surface. The collisions may free oxygen atoms in the minerals and allow them to recombine with protons and electrons to form water.

Lori M. Feaga, a research scientist at the University of Maryland who is a member of the team that analyzed the Deep Impact data, said this process would work only to about one millimeter into the lunar surface. If correct, that would not give future astronauts much to drink.

“You would have to scrape the area of a baseball field or a football field to get one quart of water,” she said.



Monsters of the deep

Rogue waves

Huge, freak waves may not be as rare as once thought

ON JULY 26th 1909 the SS Waratah, en route to London from Melbourne, left Durban with 211 passengers and crew. She was due in Cape Town three days later but never arrived. The steamship was last sighted along the east coast of South Africa—known to sailors as the “wild coast” for its violent weather—struggling through a stormy sea with waves more than nine metres (30 feet) high. No trace of the vessel has ever been found.

A theory which might explain her disappearance, and that of some other vessels, is that they were struck by rogue waves—which begin with a deep trough followed by a wall of water the size of an eight- or nine-storey building. For many years oceanographers dismissed sailors’ reports of rogue waves much as they did stories of mermaids. But in 1995 an oil rig in the North Sea recorded a 25.6-metre wave. Then in 2000 a British oceanographic vessel recorded a 29-metre wave off the coast of Scotland. In 2004 scientists using three weeks of radar images from European Space Agency satellites found ten rogue waves, each 25 metres or more high.

A typical ocean wave forms when wind produces a ripple across the surface of the sea. If the wind is strong, the ripples grow larger. A hurricane can amplify a wave to a few storeys. But trying to create giant rogue waves in a laboratory tank is very difficult, making them hard to study. Now researchers led by Eric Heller of Harvard University and Lev Kaplan of Tulane University, New Orleans, have started using microwaves rather than water waves to create a laboratory model.

Rogue waves are not tsunamis, which are set in motion by earthquakes. These travel at high speed, building up as they approach the shore. Rogue waves seem to occur in deep water or where a number of physical factors such as strong winds and fast currents converge. This may have a focusing effect, which can cause a number of waves to join together. Such conditions exist along Africa’s wild coast, where strong winds blowing from the north-west interact with the swift and narrow Agulhas current flowing down the coast to produce enormous waves. Dr Heller, who likes to sail, says there may be other mechanisms at work too, including an interference effect that causes different ocean swells, travelling at different speeds, to add up to produce a rogue, and a non-linear effect in which a small change in something like wind direction or speed causes a disproportionately large wave.

To study the phenomenon the group created a platform measuring 26cm by 36cm on which they randomly placed around 60 small brass cones to mimic random eddies in ocean currents. When microwaves were beamed at the platform, the researchers found that hot spots (the microwave equivalent of rogue waves) appeared far more often than conventional wave theory would predict; they were between ten and 100 times more likely.
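For context, the “conventional wave theory” baseline is the linear random-sea model, under which wave heights follow a Rayleigh distribution. The formula below is standard linear wave theory rather than anything stated in the article, and the numbers are illustrative:

```python
import math

def rayleigh_exceedance(alpha):
    """Linear random-sea theory: P(H > alpha * Hs) = exp(-2 * alpha**2),
    where Hs is the significant wave height."""
    return math.exp(-2 * alpha ** 2)

# A common rogue-wave criterion is a height of more than twice Hs.
p_linear = rayleigh_exceedance(2.0)   # about 3.4e-4: roughly 1 wave in 3,000
# A 10x to 100x enhancement, as seen for the microwave hot spots,
# would put the odds between roughly 1 in 300 and 1 in 30.
```

That gap between the Rayleigh prediction and the observed hot-spot rate is what makes the “ten to 100 times more likely” result striking.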

Dr Heller says the results tend to support anecdotal evidence from seamen that rogue waves are not as rare as once thought. He thinks the work could also be used to understand more about the formation of these dangerous waves, perhaps to the point where it would one day be possible to provide warnings in places where rogue waves are prone to appear. Seafarers would be thankful for that.



Secrets Of Insect Flight Revealed: Modeling The Aerodynamic Secrets Of One Of Nature’s Most Efficient Flyers

locust wing

Smoke visualization in Oxford University’s wind tunnel showing the airflow over a flying locust’s wings

Researchers are one step closer to creating a micro-aircraft that flies with the manoeuvrability and energy efficiency of an insect after decoding the aerodynamic secrets of insect flight.

Dr John Young, from the University of New South Wales (UNSW) in Australia, and a team of animal flight researchers from Oxford University’s Department of Zoology, used high-speed digital video cameras to film locusts in action in a wind tunnel, capturing how the shape of a locust’s wing changes in flight. They used that information to create a computer model which recreates the airflow and thrust generated by the complex flapping movement.

The breakthrough result, published in the journal Science this week, means engineers understand for the first time the aerodynamic secrets of one of Nature’s most efficient flyers – information vital to the creation of miniature robot flyers for use in situations such as search and rescue, military applications and inspecting hazardous environments.

“The so-called ‘bumblebee paradox’, which claims that insects defy the laws of aerodynamics, is dead. Modern aerodynamics really can accurately model insect flight,” said Dr Young, a lecturer in the School of Aerospace, Civil and Mechanical Engineering at the Australian Defence Force Academy (UNSW@ADFA).

“Biological systems have been optimised through evolutionary pressures over millions of years, and offer many examples of performance that far outstrips what we can achieve artificially.

“An insect’s delicately structured wings, with their twists and curves, and ridged and wrinkled surfaces, are about as far away as you can get from the streamlined wing of an aircraft,” Dr Young said.

“Until very recently it hasn’t been possible to measure the actual shape of an insect’s wings in flight – partly because their wings flap so fast, and partly because their shape is so complicated.

“Locusts are an interesting insect for engineers to study because of their ability to fly extremely long distances on very limited energy reserves.”

Once the computer model of the locust wing movement was perfected, the researchers ran modified simulations to find out why the wing structure was so complex.

In one test they removed the wrinkles and curves but left the twist, while in the second test they replaced the wings with rigid flat plates. The results showed that the simplified models produced lift but were much less efficient, requiring much more power for flight.

“The message for engineers working to build insect-like micro-air vehicles is that the high lift of insect wings may be relatively easy to achieve, but that if the aim is to achieve efficiency of the sort that enables inter-continental flight in locusts, then the details of deforming wing design are critical,” Dr Young said.

The Oxford team were Dr Simon Walker, Dr Richard Bomphrey, Dr Graham Taylor and Professor Adrian Thomas of the Animal Flight Group in the Department of Zoology.

The research paper, “Details of Insect Wing Design and Deformation Enhance Aerodynamic Function and Flight Efficiency,” appears in the September 18 issue of Science.



Science in Pictures

science 1

Paul Sereno, a paleontologist at the University of Chicago, adding the toe claw to the skeleton of the new tyrannosaur Raptorex. The discovery in China of what amounts to a miniature prototype of Tyrannosaurus rex calls into question theories about the dinosaur’s evolution.

science 2

The Hypersonic Thermodynamic InfraRed Measurements team at NASA’s Langley Research Center in Hampton, Va., captured a thermal image of the shuttle Discovery as it returned on Sept. 11. The team is using thousands of frames of re-entry data to paint a picture of the shuttle’s heating patterns. This is one of the first images.

science 3

Researchers say they have identified a basaltic meteorite that originated from a different parent asteroid than most others on record, implying that there are sources of basaltic meteorites other than the asteroid called 4 Vesta in the main inner asteroid belt. Watching the sky in the desert of Western Australia, researchers tracked the meteorite’s orbit, located it after it fell to Earth and analyzed its composition. In this long exposure, which lasted most of the night, stars appear as curved white streaks. The streaks that cut diagonally across them are “fireballs.”

science 4

New research on the entanglement of photons from light beams of different wavelengths offers clues on how a quantum computer of the future might work. Entanglement is at the heart of quantum information processing, in which the basic units of information — in this case, photons — can occupy multiple states at once. Practical systems, researchers say, would contain a network of quantum components, possibly operating at different frequencies.

science 5

NASA’s Swift satellite took this picture of invisible ultraviolet radiation from the giant spiral galaxy in Andromeda known as M31. Only the hottest stars, which are also usually the youngest and most massive, emit much ultraviolet light, which is a form of electromagnetic radiation with wavelengths just shorter than those of blue light in the visible spectrum. The picture shows astronomers where new stars have been forming in the galaxy Andromeda. The galaxy, which is about 2.5 million light years away, is a twin of our own Milky Way and so this is a sort of self-portrait. As in our own galaxy, most of the action is in the spiral arms, where gravity is compressing clouds of gas and dust into new stars.

science 6

Conservation geneticists are using DNA barcodes to help track endangered and highly migratory sea turtles. DNA barcodes are short genetic sequences that efficiently distinguish species from each other — even if the samples from which the DNA is extracted are minute or degraded, researchers say. A new study shows that this technology can be applied to all seven sea turtle species and can provide insight into the genetic structure of a widely dispersed and ancient group of animals.

science 7

Shadowy impact sites on the south pole of the Moon could be the coldest places in the solar system, NASA scientists said in unveiling findings from the Lunar Reconnaissance Orbiter spacecraft. The orbiter, launched in June, officially began its one-year mission to map the Moon’s surface this week.

science 8

The Horsehead Nebula (which looks like a dark region against a bright background) is composed of a cloud of molecular material.

science 9

Researchers are studying plasma-surface interactions to figure out how the heat from magnetic confinement fusion might be contained. In one experiment at the PISCES lab of the Center for Energy Research at the University of California, San Diego, researchers exposed a hot tungsten sample (at 1,000 degrees Celsius) to high-power deuterium plasma. At the center of the Sun, fusion takes place at 15 million degrees, but fusion reactors on Earth must operate at lower pressures and higher temperatures of about 100 million degrees.

science 10

Based on new ways of looking at shark teeth and new shark fossils from a Peruvian desert, most experts now believe great white sharks are not descended from a megatoothed megashark, but from a more modest relative of mako sharks.



Lunar Craters May Be Chilliest Spots in Solar System

A photograph from the Lunar Reconnaissance Orbiter showing the rim of Shackleton crater, near the Moon’s south pole.

The shadowy craters near the south pole of the Moon may be the coldest places in the solar system, colder than even Pluto, NASA scientists reported Thursday as they unveiled some of the first findings from the Lunar Reconnaissance Orbiter spacecraft.

“We’re looking at the Moon with new eyes,” said Richard Vondrak, the mission’s project scientist.

The orbiter, launched in June, officially began its one-year mission to map the Moon’s surface this week. But during three months of turning on, testing and calibration of its seven instruments, it had already begun returning data. Notably, its camera captured pictures of the Apollo landing sites, including some of the tracks that the astronauts left on the surface.

In the newly released data, thermal measurements showed that daytime temperatures over much of the surface reached 220 degrees Fahrenheit — hotter than boiling water — before plummeting to frigidness at night.

But the bottoms of the craters, which lie in permanent darkness, never warm above minus 400 degrees Fahrenheit. Those ultracold temperatures could have trapped and held deposits of ice for several billion years. The ice could prove a valuable resource to future explorers, not only as drinking water but also as a source of hydrogen and oxygen once the water molecules are broken apart.

If it exists, the ice could also hold a detailed historical record of past comet impacts on the Moon, which would provide new hints of the early conditions in the solar system.

A second instrument detected slow-moving neutrons, which indicate the presence of hydrogen in the polar regions. The hydrogen is most likely in the form of water, and that data support the findings of the Lunar Prospector spacecraft a decade ago.

In a twist, the reconnaissance orbiter found hydrogen not only in some craters but also in some areas outside of the craters. Also, some of the craters did not appear to have hydrogen.

That means the water — or some other hydrogen-containing molecule like methane — lies beneath the surface. “It would be very durable there,” Dr. Vondrak said. “What we don’t know is the abundance and how deep it is buried.”

Getting to the material at the bottom of the craters could be difficult. An instrument that maps the topography by bouncing a laser beam off the surface has found the sides of the craters to be steep and rough terrain.

The primary mission of the Lunar Reconnaissance Orbiter, gathering data on the Moon from an altitude of 31 miles to prepare for the return of astronauts, will continue for a year. After that, it will keep operating to gather information for scientists.



Why opposites don’t always attract

A lucky lab accident helps to explain the mystery of bouncing droplets.

It’s a natural fact that opposites attract — or so scientists thought. But a new study of fluid droplets shows that opposites can sometimes bounce right off one another. The results may seem esoteric, but they could have big implications for everything from oil refining to microfluidic lab-on-a-chip technologies.

The work, published today in Nature, began as a laboratory accident. William Ristenpart, a chemical engineer at the University of California at Davis, was studying how the shape of a water column in oil changed as it was drawn towards an electrically charged plate. “I basically messed up,” he says. “I was applying a few kilovolts, the system shorted out and the water exploded.”

Tiny droplets of water went ricocheting around the oil-filled chamber. But as Ristenpart watched, he noticed something odd: oppositely charged water bubbles seemed to be bouncing off one another. “The first time I saw that I was terribly confused,” Ristenpart says.

Charge puzzle

That’s because, like other researchers, Ristenpart believed that oppositely charged water droplets would attract each other and form larger drops. This property has long been exploited in the electrostatic separation process used by the petroleum industry to collect and remove bubbles of seawater from crude oil.

Ristenpart and his colleagues studied his laboratory accident for three years, and with the help of high-speed videos and mathematical calculations they now claim to understand the phenomenon. Because of the force of surface tension, water droplets are normally held in tight spheres. But as two electrically charged droplets come close to each other, the spheres begin to warp — and at very short distances, a small bridge of fluid forms between the drops.

When the electrical charge is low, that bridge grows until the drops merge together, but when the charge is high, something else happens: the bridge allows the droplets to exchange their charge and then snaps. The water flows back into the bubbles, and by the time the two drops collide, they are back in their spherical shape. Rather than merging, their surface tension causes them to bounce off one another like beach balls.

Seeing is believing

“Wow, how can that be?” Frieder Mugele, a physicist at the University of Twente in Enschede, the Netherlands, remembers asking himself on first seeing the result. But Mugele says he is wholly convinced by the group’s explanation. “The fundamental principle is captured by what they are saying,” he says. “It’s a very striking phenomenon.”

A bigger question is whether the bouncing effect could actually be useful. Many scientists are working to develop microfluidic systems — known as labs-on-a-chip — that can mix small amounts of chemical reagents or biological molecules. Electrical charge is one way that chemicals can be moved around these chips, and the study’s authors say that knowledge of the bouncing bubbles could aid their development. Ristenpart says that the work could also find an application in the oil industry, which currently uses building-sized electrostatic separators to remove seawater from crude oil. The American Chemical Society has given Ristenpart’s team a grant to see whether their research can create a more efficient separator, he says.

But even if the myriad potential applications don’t pan out, Ristenpart is still planning a long future in droplet studies. His group is now looking at unusual collisions in which the droplets break into a pair of daughter drops, one large and one small. “That is not really well understood at all,” he says. “There’s a lot more thinking to do for sure.”



Planck telescope’s first glimpse

Planck maps tiny temperature fluctuations (mottled colours in the strip). These fluctuations correspond to the early distribution of matter in the cosmos. It will take Planck six months to complete a full sky map.

The European telescope sent far from Earth to study the oldest light in the Universe has returned its first images.

The Planck observatory, launched in April, is surveying radiation that first swept out across space just 380,000 years after the Big Bang.

The light holds details about the age, contents and evolution of the cosmos.

The new images show off Planck’s capabilities now that it has been set up, although major science results are not expected for a couple of years.

“The images show first of all that we are working and that we are able to map the sky,” said Planck project scientist Dr Jan Tauber.

“They show that in areas where we expect to see certain things, we do indeed see them, that we are able to image very faint emission, and finally that the two instruments are working in tandem well,” he told BBC News.

Background information

Planck is a European Space Agency (Esa) endeavour.

It was launched on an Ariane rocket and thrown out to an observing position some 1.5 million km from Earth.

Planck scanning animation (Esa)
Planck rotates about once per second
As it rotates, it gathers precise temperature information from a narrow “strip” of the sky
The strips are then fitted together to form a thermal picture of the farthest regions of our Universe
It will take about six months to cover the whole sky

It is trying to make the finest-ever measurements of what has become known as the Cosmic Microwave Background (CMB).

This is light that was finally allowed to move out across space once a post-Big-Bang Universe had cooled sufficiently to permit the formation of hydrogen atoms.

Before that time, scientists say, the Universe would have been so hot that matter and radiation would have been “coupled” – the cosmos would have been opaque.

Researchers can detect temperature variations in this ancient heat energy that give them insights into the early structure of the Universe.

With Planck, they also hope to find firm evidence of “inflation”, the faster-than-light expansion that cosmologists believe the Universe experienced in its first, fleeting moments.

Theory predicts this event ought to be “imprinted” in the CMB and the detail should be retrievable with sufficiently sensitive instruments. Planck is designed to have that capability.

Its detectors, or bolometers, are the most sensitive ever flown in space, and operate at a staggering minus 273.05C – just a tenth of a degree above what scientists term “absolute zero”.

“In terms of the instrumental performance, we are getting what we expected from ground testing,” explained Dr Tauber.


The work to fully commission and optimise Planck for science was completed in mid-August. It was then immediately followed by the “first light” survey that produced the new images.

The pictures are essentially maps of a strip of the sky, one for each of the nine frequencies Planck uses. Each map is a ring, about 15 degrees wide, stretching across the full sky.

Planck (Esa)
The telescope is kept phenomenally cold to carry out its work

The telescope has now begun routine operations. It will take the observatory roughly six months to assemble a complete map of the sky. The mission objectives call for at least two of these maps to be made.

It will be at least a couple of years before the Planck research teams are able to present some of their major scientific findings.

“The mission has gone much better than I expected so far,” said Dr Tauber.

“It’s been an unexpectedly smooth ride. We’ve had the usual minor hitches here and there, but I think overall it is doing fantastically well. Everything is chugging away and we are collecting data.”

Planck’s co-passenger on April’s Ariane launch was the Herschel Space Observatory.

It views the cosmos at shorter wavelengths, in the far-infrared, allowing it to peer through clouds of dust and gas to see stars at the moment they are born.

It is currently still in its demonstration phase, collecting images designed to show off its capabilities.

Two of its instruments are working well. A third, however, is currently down after experiencing a fault.

Engineers can switch to a backup system to reactivate the Heterodyne Instrument for the Far Infrared (HiFi), but they do not intend to do that until they understand the cause of the anomaly.

HiFi is a spectrometer that will identify elements and molecules in the clouds of gas and dust which give rise to stars.



Detecting Digitally Altered Video

A study in Applied Cognitive Psychology finds that we’re likely to believe a doctored video over our own memories of an event.

The last few years have seen digitally altered photos land in numerous media outlets. Modern technology is making it tough for even the expert to spot a fake. But imagine if a doctored image landed in court and swayed even eyewitnesses who had seen the event in question.

Evidence from a study published in the journal Applied Cognitive Psychology shows that people will believe a videotaped version of an event, even if it differs from the reality they lived through.

Sixty subjects participated in a gambling game, where they’d make bets on getting the answer to a trivia question right. All subjects had another player seated next to them. Except the “other player” was really a researcher.

Later, a video of the gambling session was doctored to make it seem that the other player—the researcher—had cheated.

A third of the subjects were told that the person next to them MAY have cheated. Another third were told the player next to them was caught on camera cheating. And the rest were shown the fake footage of the other player cheating. Then all were asked to sign a statement only if they had seen the act of cheating take place.

Just 5 percent of the control group, who were merely told about the cheating,  signed the statement. 

Only 10 percent of the group who were told that the cheating had been caught by cameras—but did not actually see the video—signed the statement.

But nearly 40 percent of those who saw the fake video signed. And another 10 percent signed after being asked a second time by the researcher.

With ever-new digital tricks, we need to be aware that seemingly ironclad evidence may in fact be altered.  And find our way to the truth by employing one of our most valuable resources: a healthy skepticism. 

Christie Nicholson, Scientific American




See also:

Digital Forensics: How Experts Uncover Doctored Images

Modern software has made manipulation of photographs easier to carry out and harder to uncover than ever before, but the technology also enables new methods of detecting doctored images.

Key Concepts

  • Fraudulent photographs produced with powerful, commercial software appear constantly, spurring a new field of digital image forensics.
  • Many fakes can be exposed because of inconsistent lighting, including the specks of light reflected from people’s eyeballs.
  • Algorithms can spot when an image has a “cloned” area or does not have the mathematical properties of a raw digital photograph.
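The “cloned area” detection in the last point can be illustrated with a toy exact-match version: index every small pixel block and flag identical blocks that occur at two different positions. This is a minimal sketch of the idea, not the production algorithm — real detectors match robust block features (for example, DCT coefficients) so that recompression and noise do not hide the clone:

```python
import numpy as np

def find_cloned_blocks(img, b=8):
    """Toy copy-move detector: index every b x b block by its raw bytes
    and report pairs of positions whose pixel content is identical."""
    seen, matches = {}, []
    h, w = img.shape
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            key = img[y:y + b, x:x + b].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[40:56, 40:56] = img[8:24, 8:24]   # plant a cloned 16x16 patch
pairs = find_cloned_blocks(img)
# Every matched pair here is displaced by the same (32, 32) shift;
# a consistent offset shared by many blocks is the tell-tale of cloning.
```

In a real photograph the matching must tolerate noise, so exact byte equality is replaced by near-duplicate search in a feature space, but the “many blocks, one common offset” signature is the same.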

History is riddled with the remnants of photographic tampering. Stalin, Mao, Hitler, Mussolini, Castro and Brezhnev each had photographs manipulated—from creating more heroic-looking poses to erasing enemies or bottles of beer. In Stalin’s day, such phony images required long hours of cumbersome work in a darkroom, but today anyone with a computer can readily produce fakes that can be very hard to detect.

Barely a month goes by without some newly uncovered fraudulent image making it into the news. In February, for instance, an award-winning photograph depicting a herd of endangered Tibetan antelope apparently undisturbed by a new high-speed train racing nearby was exposed as a fake. The photograph had appeared in hundreds of newspapers in China after the controversial train line was opened with much patriotic fanfare in mid-2006. A few people had noticed oddities immediately, such as the fact that some of the antelope were pregnant yet there were no young, as there should have been at the time of year the train began running. Doubts finally became public when the picture was featured in the Beijing subway this year and other flaws came to light, such as a join line where two images had been stitched together. The photographer, Liu Weiqing, and his newspaper editor resigned; Chinese government news agencies apologized for distributing the image and promised to delete all of Liu’s photographs from their databases.

In that case, as with many of the most publicized instances of fraudulent images, the fakery was detected by alert people studying a copy of the image and seeing flaws of one kind or another. But there are many other cases when examining an image with the naked eye is not enough to demonstrate the presence of tampering, so more technical, computer-based methods—digital image forensics—must be brought to bear.

I am often asked to authenticate images for media outlets, law-enforcement agencies, the courts and private citizens. Each image to be analyzed brings unique challenges and requires different approaches. For example, I used a technique for detecting inconsistencies in lighting on an image that was thought to be a composite of two people. When presented with an image of a fish submitted to an online fishing competition, I looked for pixel artifacts that arise from resizing. Inconsistencies in an image related to its JPEG compression, a standard digital format, revealed tampering in a screen shot offered as evidence in a dispute over software rights.

As these examples show, because of the variety of images and forms of tampering, the forensic analysis of images benefits from having a wide choice of tools. Over the past five years my students, colleagues and I, along with a small but growing number of other researchers, have developed an assortment of ways to detect tampering in digital images. Our approach in creating each tool starts with understanding what statistical or geometric properties of an image are disturbed by a particular kind of tampering. Then we develop a mathematical algorithm to uncover those irregularities. The boxes on the coming pages describe five such forensic techniques.

The validity of an image can determine whether or not someone goes to prison and whether a claimed scientific discovery is a revolutionary advance or a craven deception that will leave a dark stain on the entire field. Fake images can sway elections, as is thought to have happened with the electoral defeat of Senator Millard E. Tydings in 1950, after a doctored picture was released showing him talking with Earl Browder, the leader of the American Communist Party. Political ads in recent years have seen a startling number of doctored photographs, such as a faux newspaper clipping distributed on the Internet in early 2004 that purported to show John Kerry on stage with Jane Fonda at a 1970s Vietnam War protest. More than ever before, it is important to know when seeing can be believing.

Everywhere You Look

The issue of faked images crops up in a wide variety of contexts. Liu was far from the first news photographer to lose his job and have his work stricken from databases because of digital fakery. Lebanese freelancer Adnan Hajj produced striking photographs from Middle Eastern conflicts for the Reuters news agency for a decade, but in August 2006 Reuters released a picture of his that had obviously been doctored. It showed Beirut after being bombed by Israel, and some of the voluminous clouds of smoke were clearly added copies.

Brian Walski was fired by the Los Angeles Times in 2003 after a photograph of his from Iraq that had appeared on the newspaper’s front page was revealed to be a composite of elements from two separate photographs combined for greater dramatic effect. A sharp-eyed staffer at another newspaper noticed duplicated people in the image while studying it to see if it showed friends who lived in Iraq. Doctored covers from newsmagazines Time (an altered mug shot of O. J. Simpson in 1994) and Newsweek (Martha Stewart’s head on a slimmer woman’s body in 2005) have similarly generated controversy and condemnation.

Scandals involving images have also rocked the scientific community. The infamous stem cell research paper published in the journal Science in 2005 by Woo Suk Hwang of Seoul National University and his colleagues reported on 11 stem cell colonies that the team claimed to have made. An independent inquiry into the case concluded that nine of those were fakes, involving doctored images of two authentic colonies. Mike Rossner estimates that when he was the managing editor of the Journal of Cell Biology, as many as a fifth of the accepted manuscripts contained a figure that had to be remade because of inappropriate image manipulation.

The authenticity of images can have myriad legal implications, including cases involving alleged child pornography. In 2002 the U.S. Supreme Court ruled that computer-generated images depicting a fictitious minor are constitutionally protected, overturning parts of a 1996 law that had extended federal laws against child pornography to include such images. In a trial in Wapakoneta, Ohio, in 2006, the defense argued that if the state could not prove that images seized from the defendant’s computer were real, then he was within his rights in possessing the images. I testified on behalf of the prosecutor in that case, educating the jurors about the power and limits of modern-day image-processing technology and introducing results from an analysis of the images using techniques to discriminate computer-generated images from real photographs. The defense’s argument that the images were not real was unsuccessful.

Yet several state and federal rulings have found that because computer-generated images are so sophisticated, juries should not be asked to determine which ones are real or virtual. At least one federal judge questioned the ability of even expert witnesses to make this determination. How then are we to ever trust digital photography when it is introduced as evidence in a court of law?

Arms Race

The methods of spotting fake images discussed in the boxes have the potential to restore some level of trust in photographs. But there is little doubt that as we continue to develop software to expose photographic frauds, forgers will work on finding ways to fool each algorithm and will have at their disposal ever more sophisticated image manipulation software produced for legitimate purposes. And although some of the forensic tools may not be so tough to fool—for instance, it would be easy to write a program to restore the proper pixel correlations expected in a raw image—others will be much harder to circumvent and will be well beyond the average user. The techniques described in the first three boxes exploit complex and subtle lighting and geometric properties of the image formation process that are challenging to correct using standard photo-editing software.

As with the spam/antispam and virus/antivirus game, not to mention criminal activity in general, an arms race between the perpetrator and the forensic analyst is inevitable. The field of image forensics will, however, continue to make it harder and more time-consuming (but never impossible) to create a forgery that cannot be detected.

Although the field of digital image forensics is still relatively young, scientific publishers, news outlets and the courts have begun to embrace the use of forensics to authenticate digital media. I expect that as the field progresses over the next five to 10 years, the application of image forensics will become as routine as the application of physical forensic analysis. It is my hope that this new technology, along with sensible policies and laws, will help us deal with the challenges of this exciting—yet sometimes baffling—digital age.



See also:

Digital Forensics: 5 Ways to Spot a Fake Photo

fake image 1 

This image has been modified in several places. The digital forensic techniques described on the following pages could be used to detect where changes were made.


Composite images made of pieces from different photographs can display subtle differences in the lighting conditions under which each person or object was originally photographed. Such discrepancies will often go unnoticed by the naked eye.

For an image such as the one at the right, my group can estimate the direction of the light source for each person or object (arrows). Our method relies on the simple fact that the amount of light striking a surface depends on the relative orientation of the surface to the light source. A sphere, for example, is lit the most on the side facing the light and the least on the opposite side, with gradations of shading across its surface according to the angle between the surface and the direction to the light at each point.

To infer the light-source direction, you must know the local orientation of the surface. At most places on an object in an image, it is difficult to determine the orientation. The one exception is along a surface contour, where the orientation is perpendicular to the contour (red arrows right). By measuring the brightness and orientation along several points on a contour, our algorithm estimates the light-source direction.
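The estimation step amounts to a small least-squares problem: under a Lambertian shading model, brightness along the contour is roughly the dot product of the surface normal with the light direction, plus an ambient term. The sketch below uses made-up illustrative measurements, not data from the actual system:

```python
import numpy as np

# Hypothetical measurements along an object's occluding contour:
# each row is a 2-D surface normal (perpendicular to the contour),
# paired with the image brightness measured at that point.
normals = np.array([[1.0, 0.0],
                    [0.9, 0.44],
                    [0.7, 0.71],
                    [0.44, 0.9],
                    [0.0, 1.0]])
brightness = np.array([0.95, 0.98, 0.85, 0.60, 0.32])

# Lambertian model: brightness ~ normal . light + ambient.
# A column of ones absorbs the ambient term; then solve the
# overdetermined system in the least-squares sense.
A = np.hstack([normals, np.ones((len(normals), 1))])
x, *_ = np.linalg.lstsq(A, brightness, rcond=None)
light = x[:2] / np.linalg.norm(x[:2])  # unit light-source direction

print("estimated light direction:", light)
```

With these numbers the brighter contour points face in the x direction, so the recovered light vector leans that way; comparing such vectors across objects is what exposes a composite.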

fake image 2

For the image above, the light-source direction for the police does not match that for the ducks (arrows). We would have to analyze other items to be sure it was the ducks that were added.

Eyes and Positions

Because eyes have very consistent shapes, they can be useful for assessing whether a photograph has been altered.

fake image 3

A person’s irises are circular in reality but will appear increasingly elliptical as the eyes turn to the side or up or down (a). One can approximate how eyes will look in a photograph by tracing rays of light running from them to a point called the camera center (b). The picture forms where the rays cross the image plane (blue). The principal point of the camera—the intersection of the image plane and the ray along which the camera is pointed—will be near the photograph’s center.

fake image 4

My group uses the shape of a person’s two irises in the photograph to infer how his or her eyes are oriented relative to the camera and thus where the camera’s principal point is located (c). A principal point far from the center or people having inconsistent principal points is evidence of tampering (d). The algorithm also works with other objects if their shapes are known, as with two wheels on a car.

The technique is limited, however, because the analysis relies on accurately measuring the slightly different shapes of a person’s two irises. My collaborators and I have found we can reliably estimate large camera differences, such as when a person is moved from one side of the image to the middle. It is harder to tell if the person was moved much less than that.
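The underlying geometry can be illustrated with a simplified calculation. Under an orthographic-projection approximation (the actual analysis uses full perspective projection and the camera's principal point), a circular iris turned at angle theta away from face-on projects to an ellipse whose minor-to-major axis ratio is cos(theta):

```python
import math

def gaze_angle_from_iris(minor_axis, major_axis):
    """Orthographic approximation: a circular iris viewed at angle
    theta from face-on projects to an ellipse with minor/major axis
    ratio cos(theta). Returns theta in degrees."""
    ratio = minor_axis / major_axis
    return math.degrees(math.acos(ratio))

# Illustrative numbers: an iris imaged as a 9.5 x 10 pixel ellipse
# is turned roughly 18 degrees away from the camera.
print(round(gaze_angle_from_iris(9.5, 10.0), 1))
```

The small size of that axis ratio difference is exactly why the method needs precise iris measurements and can resolve only large inconsistencies.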

Specular Highlights

Surrounding lights reflect in eyes to form small white dots called specular highlights. The shape, color and location of these highlights tell us quite a bit about the lighting.

fake picture 5


In 2006 a photo editor contacted me about a picture of American Idol stars that was scheduled for publication in his magazine (above). The specular highlights were quite different (insets).

fake image 6


The highlight position indicates where the light source is located (above left). As the direction to the light source (yellow arrow) moves from left to right, so do the specular highlights.

The highlights in the American Idol picture are so inconsistent that visual inspection is enough to infer the photograph has been doctored. Many cases, however, require a mathematical analysis. To determine light position precisely requires taking into account the shape of the eye and the relative orientation between the eye, camera and light. The orientation matters because eyes are not perfect spheres: the clear covering of the iris, or cornea, protrudes, which we model in software as a sphere whose center is offset from the center of the whites of the eye, or sclera (above right).


fake image 7

Our algorithm calculates the orientation of a person’s eyes from the shape of the irises in the image. With this information and the position of the specular highlights, the program estimates the direction to the light. The image of the American Idol cast (above; directions depicted by red dots on green spheres) was very likely composed from at least three photographs.

Send in the Clones

Cloning—the copying and pasting of a region of an image—is a very common and powerful form of manipulation.

fake image 8


This image is taken from a television ad used by George W. Bush’s reelection campaign late in 2004. Finding cloned regions by a brute-force computer search, pixel by pixel, of all possible duplicated regions is impractical because they could be of any shape and located anywhere in the image. The number of comparisons to be made is astronomical, and innumerable tiny regions will be identical just by chance (“false positives”). My group has developed a more efficient technique that works with small blocks of pixels, typically about a six-by-six-pixel square (inset).

For every six-by-six block of pixels in the image, the algorithm computes a quantity that characterizes the colors of the 36 pixels in the block. It then uses that quantity to order all the blocks in a sequence that has identical and very similar blocks close together. Finally, the program looks for the identical blocks and tries to “grow” larger identical regions from them block by block. By dealing in blocks, the algorithm greatly reduces the number of false positives that must be examined and discarded.
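The block-matching idea can be sketched as follows. This toy version groups blocks by their exact pixel content rather than by the robust summary quantity and region-growing step the real algorithm uses, so it is a simplification that finds only perfect copies:

```python
import numpy as np
from collections import defaultdict

def find_cloned_blocks(image, block=6):
    """Toy clone detector: group every 6x6 block of a grayscale image
    by its exact pixel content and report pairs of matching blocks
    that are far enough apart not to overlap."""
    h, w = image.shape
    groups = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = image[y:y + block, x:x + block].tobytes()
            groups[key].append((y, x))
    matches = []
    for positions in groups.values():
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                (y1, x1), (y2, x2) = positions[i], positions[j]
                if abs(y1 - y2) >= block or abs(x1 - x2) >= block:
                    matches.append(((y1, x1), (y2, x2)))
    return matches

# Synthetic test: clone an 8x8 patch of a random image elsewhere.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
img[20:28, 20:28] = img[2:10, 2:10]
print(len(find_cloned_blocks(img)) > 0)  # True: clone detected
```

Ordering blocks by a summary quantity, as the published method does, keeps near-duplicates adjacent in a sorted list, which is what makes the search tractable on full-size images.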

When the algorithm is applied to the image from the political ad, it detects three identical regions (red, blue and green).

Camera Fingerprints

Digital retouching rarely leaves behind a visual trace. Because retouching can take many forms, I wanted to develop an algorithm that would detect any modification of an image. The technique my group came up with depends on a feature of how virtually all digital cameras work.

A camera’s digital sensors are laid out in a rectangular grid of pixels, but each pixel detects the intensity of light only in a band of wavelengths near one color, thanks to a color filter array (CFA) that sits on top of the digital sensor grid. The CFA used most often, the Bayer array, has red, green and blue filters arranged as shown below.

fake image 10


Each pixel in the raw data thus has only one color channel of the three required to specify a pixel of a standard digital image. The missing data are filled in—either by a processor in the camera itself or by software that interprets raw data from the camera—by interpolating from the nearby pixels, a procedure called demosaicing. The simplest approach is to take the average of neighboring values, but more sophisticated algorithms are also used to achieve better results. Whatever demosaicing algorithm is applied, the pixels in the final digital image will be correlated with their neighbors. If an image does not have the proper pixel correlations for the camera allegedly used to take the picture, the image has been retouched in some fashion.
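The correlation check can be illustrated with a deliberately simplified model: treat one color channel as a checkerboard of recorded and interpolated pixels, fill in the missing values bilinearly, and then test whether each interpolated pixel still equals the average of its four neighbors. All specifics below (the pattern, the interpolation, the residual measure) are illustrative assumptions, not the published algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
green = rng.uniform(0, 255, size=(64, 64))

# Checkerboard of "recorded" sensor values; the rest are demosaiced
# by bilinear interpolation from the four recorded neighbors.
recorded = (np.indices(green.shape).sum(axis=0) % 2) == 0
demosaiced = green.copy()
for y in range(1, 63):
    for x in range(1, 63):
        if not recorded[y, x]:
            demosaiced[y, x] = (green[y - 1, x] + green[y + 1, x] +
                                green[y, x - 1] + green[y, x + 1]) / 4

def correlation_residual(channel, recorded):
    """Mean absolute deviation of interpolated pixels from the average
    of their four neighbors: essentially zero for an untouched image."""
    res = []
    for y in range(1, channel.shape[0] - 1):
        for x in range(1, channel.shape[1] - 1):
            if not recorded[y, x]:
                pred = (channel[y - 1, x] + channel[y + 1, x] +
                        channel[y, x - 1] + channel[y, x + 1]) / 4
                res.append(abs(channel[y, x] - pred))
    return float(np.mean(res))

print(correlation_residual(demosaiced, recorded))  # ~0: correlations intact
tampered = demosaiced.copy()
tampered[30:40, 30:40] = rng.uniform(0, 255, (10, 10))  # retouch a region
print(correlation_residual(tampered, recorded))    # clearly nonzero
```

In a real analysis the expected correlations depend on the particular demosaicing algorithm, which is why the check is framed against the camera allegedly used to take the picture.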

My group’s algorithm looks for these periodic correlations in a digital image and can detect deviations from them. If the correlations are absent in a small region, most likely some spot changes have been made there. The correlations may be completely absent if image-wide changes were made, such as resizing or heavy JPEG compression. This technique can detect changes such as those made by Reuters to an image it released from a meeting of the United Nations Security Council in 2005 (above): the contrast of the notepad was adjusted to improve its readability.

A drawback of the technique is that it can be applied usefully only to an allegedly original digital image; a scan of a printout, for instance, would have new correlations imposed courtesy of the scanner.

Hany Farid, Scientific American


Jupiter borrowed a passing comet to make a moon for 12 years


Modeled orbit of Comet 147P/Kushida-Muramatsu around Jupiter (at center of diagram): Ohtsuka/Asher

The middle of the 20th century was an eventful time in terms of Earth’s geopolitics. In the spring of 1949, the North Atlantic Treaty Organization (NATO) was taking shape, and simmering tensions in Korea hinted at the war that would begin there the following year. Twelve years later, in the summer of 1961, President John F. Kennedy was in his first year in office and had already committed the U.S. to reaching the moon before the decade was out.

A few hundred million miles away, during that same interval of years, Jupiter had its own share of the action. The gas giant passed the time by borrowing a comet called 147P/Kushida-Muramatsu to form a temporary satellite, holding onto it for two orbits. That’s the conclusion, anyway, of a study presented yesterday at the European Planetary Science Congress in Potsdam, Germany, by a team of researchers from Japan and the U.K.

A few other such Jovian events are known—in one case, the massive planet may have held onto its captive comet for more than half a century.

To uncover the 12-year rendezvous between Jupiter and Comet 147P/Kushida-Muramatsu, an icy body discovered in 1993, the research team, led by Katsuhito Ohtsuka of the Tokyo Meteor Network, tracked the orbits of likely comets back 100 years based on their known characteristics today.

From about May 1949 to July 1961, the group found, Comet 147P/Kushida-Muramatsu was pulled in by Jupiter’s influence before escaping to its present orbit via a gravitationally stable zone known as a Lagrange point, where the gravitational influence of two bodies—in this case Jupiter and the sun—balances out.

The research on 147P/Kushida-Muramatsu was originally published in October 2008 in Astronomy & Astrophysics.


A 360-Degree Virtual Reality Chamber Brings Researchers Face to Face with Their Data

Scientists can climb inside the University of California, Santa Barbara’s three-story-high AlloSphere for a life-size interaction with their research


DATA LIVES!: The AlloSphere is essentially a house-size digital microscope powered by a supercomputer. High-resolution video projectors can project images across the entire inner surface.

Scientists often become immersed in their data, and sometimes even lost. The AlloSphere, a unique virtual reality environment at the University of California, Santa Barbara, makes this easier by turning large data sets into immersive experiences of sight and sound. Inside its three-story metal sphere researchers can interpret and interact with their data in new and intriguing ways, including watching electrons spin from inside an atom or “flying” through an MRI scan of a patient’s brain as blood density levels play as music.

Housed in a 5,760-square-meter space in the California NanoSystems Institute building, the AlloSphere is essentially a house-size digital microscope powered by a supercomputer. Its outer chamber is a cube covered with sound-absorbing material, making it one of the largest near-anechoic (nonechoing) spaces in the world. Inside are two joined hemispheres of perforated aluminum that contain a suspended bridge.

More than 500 audio elements—woofers, tweeters and the like—are suspended in rings just outside the hemispheres. High-resolution video projectors can project images across the entire inner surface. The result is something far beyond other virtual reality systems such as a Cave Automatic Virtual Environment (CAVE) or a planetarium: 360 degrees of sounds and images in a chamber large enough to hold 30 or more researchers at once.

“It’s a place where you can use all of your senses” to find new patterns in data, says JoAnn Kuchera-Morin, the AlloSphere’s director. “You can almost say researchers are shrunk down to the size of their data, immersed at a perceptual level.” Trained as an orchestral composer and director of the school’s Center for Research in Electronic Art Technology (CREATE), she designed the AlloSphere to straddle the line between art and science. Still, she emphasizes that it is a real research instrument, not a virtual-reality environment for entertainment.

The bridge is often crowded with physicists, engineers, computer scientists and artists working on projects for weeks or months at a time. Researchers interact with their data, which can be streamed live, using 3-D glasses, special wireless controllers, and sensors embedded in the bridge’s railings. (Gesture control and voice recognition are in the works.)

Chris Van de Walle, a professor in U.C. Santa Barbara’s Materials Department, has been studying conductivity in a class of materials called transparent conductors, used in solar cells to let in as much light as possible. Inside the AlloSphere, researchers such as Van de Walle use a joystick to maneuver through three-dimensional constellations of the oxygen, hydrogen and zinc atoms (linked by a complex lattice of chemical bonds) that make up these conductors.

“You really feel like you’re standing inside a crystal of zinc oxide,” Van de Walle says. “You see a hydrogen atom, you see the electron clouds around it. It feels very real.”

August saw the start of a project to visualize measurements of the background radiation of the universe made by the European Space Agency’s Planck satellite, launched in May. Viewers can see the microwave residue from the big bang “painted” across the sphere of the sky, and—after the data are translated for human ears—hear a version of what the early universe may have sounded like.

Another ongoing project is attempting to model the time-dependent Schrödinger equation, which describes the electron’s changing quantum states. Luca Peliti, a professor of statistical mathematics at Italy’s University of Naples Federico II, says that visualizing electron orbitals in the AlloSphere far outstrips regular 2-D projections. “Every time we come up with an idea and try it, the result is unexpected,” he says. “I think we are just scratching the surface of what can be done with the AlloSphere—not because the instrument is lacking, but because we lack the ideas to exploit all its possibilities.”

Although the instrument has been operating since 2007, its systems are continually being developed and upgraded. In a year the school plans to have it operating at levels approaching the limits of our perception of actual reality: a visual resolution of 24 million pixels [on the entire surface] and a full 512-channel sound system that will make it seem “like a bird is flying around your head,” Kuchera-Morin says. “All of the scientists we are working with believe that representing their data [with the AlloSphere] will lead to the possibility of new discoveries.”

In a way, the AlloSphere’s main value may be as a communications device, Van de Walle says. “Being able to demonstrate to scientists as well as nonscientists what we are actually working on instead of just talking about it—being able to actually show somebody what it looks like—makes a big impression,” he says. “Sometimes I feel a little like Carl Sagan.”


Drilling Project Pulls Up Evidence for Early Oxygen in the Oceans

The timeline of oxygen’s appearance is a contentious point in Earth sciences


BREATH OF FRESH AIR: At some point in Earth’s history the emergence of cyanobacteria such as these allowed free oxygen to appear in the oceans and the atmosphere.

A new study tracing the history of the oceans, as recorded in multibillion-year-old sediments brought up in a South African drill core, provides evidence that oxygen emerged on Earth about 300 million years earlier than is broadly agreed. One of the study’s co-authors acknowledges that his paper is contentious, but it supports a number of other analyses carried out in recent years using separate methods.  

The new research, published online Sunday in Nature Geoscience, focuses on the nitrogen contained in the drill core’s geologic record. (Scientific American is part of the Nature Publishing Group). The nitrogen cycle is driven by life, says co-author Paul Falkowski, a biogeochemist at Rutgers University, so it carries the stamp of major biological shifts—in this case, the ability of bacteria in the upper waters of the ocean to produce oxygen through photosynthesis.  

What Falkowski and his Rutgers colleague, marine geochemist Linda Godfrey, found was a shift in the prevalence of nitrogen isotopes in ocean sediments indicating oxygenation. (Isotopes are species of the same atoms with differing numbers of neutrons and hence differing atomic weights.) That shift occurred nearly 2.7 billion years ago, a few hundred million years before other substantial evidence shows the presence of oxygen in the atmosphere. (Oxygen already existed, of course, in the form of water and other molecules—the research at hand concerns so-called free oxygen, or O2.)  

Woodward Fischer, a geobiologist at the California Institute of Technology who worked on the same drilling project but did not contribute to the new research, says that the timing of the evolution of oxygenic photosynthesis is an open question. “The field is really split on when that occurred,” Fischer says.  

Researchers generally agree that a great deal of oxygen appeared following the so-called Great Oxidation Event, somewhere around 2.4 billion years ago, Fischer says, but several other isotopic studies have also supported the theory that photosynthesizers were capable of producing oxygen much earlier, in the late stages of the Archean eon, which ended around 2.5 billion years ago. The new research “is not out there by itself,” he notes. “There are a couple of different papers looking at a couple of different isotope systems or geochemical systems that have come to the same conclusion independently.”  

Nitrogen has two stable isotopes: nitrogen 14 and heavier nitrogen 15. The balance between them, Falkowski explains, is very sensitive to the presence of oxygen. In environments with small amounts of oxygen, nitrogen is converted by microorganisms into ammonium and then nitrate or nitrite, Falkowski says. Finally, nitrogen gas is released into the atmosphere as organisms consume the nitrate and nitrite. “When you have oxygen…nitrogen gets isotopically heavier because you blow the lighter isotope off into the atmosphere,” Falkowski says. “And if you didn’t have any oxygen the isotopic record would just be light.”  

“The bonds between atoms are stronger when heavier isotopes of the atoms are involved,” Godfrey explains. The breakup of nitrate or nitrite to produce nitrogen gas proceeds more easily in compounds containing the lighter isotope—nitrogen 14—so that isotope preferentially escapes into the atmosphere. More nitrogen 15, on the other hand, tends to be left behind.  
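This enrichment of the residual pool is conventionally described by the Rayleigh distillation approximation, in which the delta-15N of the remaining nitrate rises as delta0 + epsilon * ln(f), where f is the fraction of the pool still unconsumed and epsilon is the (negative) enrichment factor. The numbers below are generic illustrative values, not figures from the study:

```python
import math

def rayleigh_delta(delta0, epsilon, f):
    """Rayleigh approximation: delta-15N (per mil) of the residual
    nitrate pool after a fraction (1 - f) has been consumed, given
    the enrichment factor epsilon (per mil)."""
    return delta0 + epsilon * math.log(f)

# Illustrative values: start at delta-15N = 0 per mil, assume an
# enrichment factor of -25 per mil (typical for denitrification),
# and let half the nitrate pool be consumed. The residue ends up
# about +17 per mil heavier, matching the sign of the effect the
# researchers describe.
print(round(rayleigh_delta(0.0, -25.0, 0.5), 1))
```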

“We find that we get the heavier isotope several times in the record, hundreds of millions of years before we get the Great Oxidation Event,” Falkowski says. “So that means that there must have been oxygen in the ocean, but it didn’t yet get into the atmosphere. It didn’t really oxidize the world yet for at least 300 million or 400 million years.”  

That delay is somewhat curious, as photosynthesizers would be expected to boost atmospheric oxygen levels on much shorter timescales. Under the new analysis, Fischer says, “if there is oxygenic photosynthesis around in late Archean oceans and there is essentially no oxygen accumulating [in the atmosphere], then something very funny is going on that we don’t appreciate yet.”  

Appreciating the detailed chemistry of a world before oxygenation is a tall order, Fischer says. A totally anaerobic (oxygen-free) world, he says, might involve isotopic effects that are very different than what we see in our present oxygen-rich environment. “With all the assumptions that go into how we think the nitrogen cycle should work, we could be looking at a world that is actually very different,” he says.

Falkowski says that the reliance on the nitrogen cycle made for a contentious process in publishing the paper. “What we have developed is a model where nitrogen isotopes really become sensitive to the [oxidation and reduction reactions] of the world,” he says. “That’s a new model, and whenever you start with a new paradigm, it’s inevitable you’re going to run into some cross fire.”



Tree Electricity Runs Nano-Gadget

tree power

Plugging into tree power is not just a dream, although powering a lamp is not yet possible.

If scientists have their way, we may someday be tapping maples—not for pancake fixin’s, but for power. Because researchers from the University of Washington in Seattle have found there’s enough electricity flowing in trees to run an electronic circuit.

If you’ve ever made a potato battery, you know that plant material can generate current. But the energy in trees is something else entirely. The potato experiment uses electrodes of two different metals to set up a charge difference that gets local electrons flowing.

But in the current study, researchers use electrodes made of the same material. Sticking one electrode into a tree and another in the soil, they found that big leaf maples generate a steady voltage of up to a few hundred millivolts. That’s way less than the volt-and-a-half provided by a standard AA battery. So the scientists designed a gadget so small, with parts just 130 nanometers in size, that it can run on tree power alone. Their results appear in the journal IEEE Transactions on Nanotechnology.

If you’re nuts for renewable energy, you probably can’t get much greener than a forest full of electrici-tree.




See also:

Trees could be the ultimate in green power

Shoving electrodes into tree trunks to harvest electricity may sound like the stuff of dreams, but the idea is increasingly attracting interest. If we can make it work, forests could power their own sensor networks to monitor the health of the ecosystem or provide early warning of forest fires.

Children the world over who have tried the potato battery experiment know that plant material can be a source of electricity. In this case, the energy comes from reduction and oxidation reactions eating into the electrodes, which are made of two different metals – usually copper and zinc.

The same effect was thought to lie behind claims that connecting electrodes driven into a tree trunk and the ground nearby can provide a current. But last year Andreas Mershin’s team at MIT showed that using electrodes made of the same metal also gives a current, meaning another effect must be at work. Mershin thinks the electricity derives from a difference in pH between the tree and the soil, a chemical imbalance maintained by the tree’s metabolic processes.

Practical power

While proving that trees can provide a source of power is a significant step, a key question remains: can the tiny voltage produced by a tree be harnessed for anything useful?

Trees seem capable of providing a constant voltage of anywhere between 20 and a few hundred millivolts – way below the 1.5 volts from a standard AA battery and close to the level of background electrical noise in circuits, says Babak Parviz, an electrical engineer at the University of Washington in Seattle. “Normal circuits don’t run from very small voltages, so we need ways to convert the small voltages to something that is usable,” he says.

His team has managed to obtain a usable voltage from big-leaf maple trees by adding a device called a voltage boost converter. The converter spends most of its time in a kind of stand-by mode as it stores electrical energy from the tree, periodically releasing it at 1.1 volts.
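The trade-off behind that stand-by behavior is easy to estimate with the capacitor energy formula E = CV^2/2. All of the component values below are invented for illustration; the article gives only the 1.1-volt output figure:

```python
# Back-of-the-envelope sketch (illustrative numbers, not the team's
# actual circuit values): how long must a boost converter harvest
# tree power before it can release one burst at 1.1 volts?

harvest_power = 1e-9   # watts available from the tree (assumed)
storage_cap = 10e-6    # farads of storage capacitance (assumed)
output_volts = 1.1
efficiency = 0.5       # conversion losses (assumed)

burst_energy = 0.5 * storage_cap * output_volts ** 2  # E = C*V^2/2
charge_time = burst_energy / (harvest_power * efficiency)
print(f"{charge_time:.0f} seconds between bursts")
```

Even with generous assumptions the converter spends hours accumulating charge for each brief burst, which is why ultralow-power components like the nanowatt clock matter so much.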

To provide that periodic wake-up call, Parviz’s team developed a clock, also powered by the tree, which keeps time by tracking the quantum tunnelling of electrons through thin layers of insulating material. It operates at 350 millivolts and uses just a nanowatt of power.

Parviz thinks trees could power gadgets to monitor their own physiology or their immediate surroundings, for ecological research. And, he adds, as electronic components continue to shrink and require less power, it is possible tree electricity could one day have a wide range of uses.

Green power race

Parviz’s team isn’t the only one trying to harness the tiny voltages trees can provide. Voltree Power, a company based in Canton, Massachusetts, patented a tree-powered circuit in 2005, says the company’s CEO, Stella Karavaz.

Her firm is using energy harvested from trees to power sensors that monitor temperature and humidity inside forests. Earlier this year the company trialed a wireless sensor network to detect forest fires.

Devices that lose water the way trees transpire through their leaves could also be used to supply power, according to Michel Maharbiz at the University of California, Berkeley. His team recently showed that evaporation from simulated leaves can act like a mechanical pump, and that the effect can be harnessed to provide power.


Full article:

Electron Bolts: Even Deeply Bound Electrons Can Escape Molecules via Quantum Tunneling

The intriguing quantum-mechanical property is looking less conventional all the time.

A representation of what a traditional scanning tunneling microscope would look like if it were able to detect tunneling electrons from a lower-lying orbital in hydrogen chloride.

In quantum mechanics particles can escape from their confines, even if a barrier stands in their way, via a process known as tunneling. Tunneling is no mere quantum curiosity—tunneling electrons, for instance, are harnessed by scanning tunneling microscopes to observe matter at the smallest scales. Those probes can image a surface at the atomic level by detecting the tunneling of electrons from the surface across a small gap to the microscope’s tiny scanning tip.

A paper in this week’s Science adds new depth to tunneling by showing how readily electrons can tunnel out from multiple orbitals in a molecule. “Until just recently, everyone would have thought that only the most easily available electron could tunnel,” says study co-author Paul Corkum, a physicist at the University of Ottawa and director of Attosecond Science at the National Research Council Canada. A series of research papers in the past few years has begun to revise that thinking, showing that lower-lying orbitals get into the act, as well.

In the new research, Corkum and his colleagues observed electrons tunneling out of hydrogen chloride (HCl) molecules subjected to laser pulses and traced the electrons back to their parent orbitals. “You would think that the highest one, which has to go underneath the classically allowed barrier by only a little bit, would have a huge advantage, and the [next] lower one, which has to go under the barrier by a lot, would be highly suppressed,” Corkum says. But the team found that the second-highest orbital contributed a measurable amount to the total tunneling current.

Late last year two groups published papers in Science showing how intense laser pulses could be used to liberate electrons not only from the highest molecular orbital but also from the next orbital below. Markus Gühr, a Stanford University chemical physicist at the SLAC National Accelerator Laboratory in Menlo Park, Calif., co-authored one of those papers with a view toward examining molecular processes in real time.

“The general vision that we have in the community is we want to look at chemistry,” says Gühr, who did not participate in the new research. Probing the way electrons form and break bonds between atoms is critical to tracing the workings of chemistry at the ground level. “The sensitivity on electrons is really a new crucial step, I would say,” Gühr adds.

In the 2008 work Gühr’s group examined electrons tunneling in nitrogen. “The nitrogen molecule has the advantage that these two orbitals…are pretty close together,” Gühr says. In the hydrogen chloride molecule probed by Corkum’s group, he adds, the orbitals are much farther apart, making the tunneling contribution from the lower-lying one all the more notable.

The hydrogen chloride molecule made a handy test bed for such a tunneling experiment. When the electron is stripped from hydrogen chloride’s highest orbital, an ion (a charged version of the molecule) survives. But the electron in the next orbital down accounts for the bond between the molecule’s atoms, so when an electron tunnels from that orbital, the HCl molecule breaks apart. Such a fragmentation is one signature of the lower-level tunneling.

Aside from demonstrating lower-orbital tunneling in a molecule that is less amenable to it, Corkum’s group was also able to show just how often it takes place. “I would say in the evidence that we presented [last year], and also that other groups presented, it is clear that [the lower-level orbital] definitely has a contribution in tunnel ionization,” Gühr says. “But it is not clear quantitatively to which exact extent.” Corkum’s group, Gühr adds, has taken the next step of directly quantifying the contribution of the lower orbital—in this experiment, the lower-lying orbital contributed 0.2 percent of the total tunneling current.

Corkum notes that it is exponentially more difficult, but not theoretically prohibited, to get electrons from lower orbitals rather than from higher orbitals. So although the new results were at first glance surprising, they make sense from a physics perspective. “I would say our prejudice was wrong, not the theory itself,” Corkum says.
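
The exponential sensitivity Corkum describes is the textbook behavior of tunneling through a barrier. The sketch below is a standard WKB estimate for an idealized square barrier, with invented barrier heights and width; it is not the paper's calculation, only an illustration of why deeper binding suppresses tunneling so strongly.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
ME = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19    # joules per electron-volt

def square_barrier_transmission(barrier_ev: float, width_m: float) -> float:
    """WKB transmission through a square barrier of height U above the
    electron's energy: T ~ exp(-2*kappa*d), kappa = sqrt(2*m*U)/hbar."""
    kappa = math.sqrt(2 * ME * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A more deeply bound electron sees an effectively higher barrier, so its
# tunneling rate is exponentially smaller (numbers are illustrative only).
shallow = square_barrier_transmission(barrier_ev=2.0, width_m=5e-10)
deep = square_barrier_transmission(barrier_ev=4.0, width_m=5e-10)
print(f"suppression factor: {shallow / deep:.0f}")
```

Doubling the effective barrier height here cuts the transmission by more than an order of magnitude, which is why any measurable contribution from a lower-lying orbital is notable.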


Full article and photo:

The Wall Street Journal 2009 Technology Innovation Awards

This year’s winners include: a tool to identify new disease strains, an artificial hand, a solar-powered base station for mobile phones and paper-thin flexible speakers.

Medical detective work may have just gotten a lot easier.

Just how difficult such detective work can be gets highlighted every time an infectious disease sweeps the globe, as the new strain of swine flu did earlier this year. Current methods of testing for disease-causing microbes are pretty effective at discovering whether an infected fluid or tissue sample contains a known virus or bacterium. But trying to detect previously unknown organisms is a whole different story.

To address this problem, David Ecker, co-founder of Ibis Biosciences Inc., and a team of researchers developed a sensor able to quickly detect and identify all the pathogens in a given sample.

The equipment promises not only to alert health officials to new disease strains, but also to guard against bioterrorism and enable hospitals to identify antibiotic-resistant bacteria.

Abbott Laboratories and its Ibis Biosciences unit, which developed the Ibis T5000 sensor, took the Gold in this year’s Wall Street Journal Technology Innovation Awards.

The Silver award went to Touch Bionics Inc. for its i-Limb artificial hand, which features bendable fingers and a rotating thumb. The hand uses sophisticated motors and computer controls to grip objects and move in ways that traditional prosthetic hands can’t.

Vihaan Networks Ltd., an Indian telecommunications company known as VNL, won the Bronze award for a solar-powered base station to bring cellphone access to remote rural villages. The inexpensive base station can be quickly assembled and set up by unskilled villagers, and can run entirely on the built-in solar panels and batteries.

For the ninth annual Innovation Awards, a Journal editor reviewed nearly 500 entries, sending more than 180 to a team of judges from research institutions, venture-capital firms and other companies. Judges considered whether innovations were truly groundbreaking and—new this year—looked at whether their application would be particularly useful in a time of economic hardship.


And the winners in each category are…

Computing Systems

Capturing real-life motion to use in computer animation can be complicated. Typically, actors are filmed wearing bodysuits covered with glowing dots or embedded with sensors that trace their movements, then high-powered computers use that data to help create characters that move realistically.

New York-based Organic Motion Inc. won in the computing-systems category for developing a motion-capture system that doesn’t require bodysuits or markers.

The core of the system is technology that uses sophisticated software to produce a digital clone of a person being filmed. Fourteen video cameras capture images simultaneously and send them to a standard computer with a high-end programmable graphics card, making the system far cheaper than the specialized equipment used in movie special-effects shops.

Organic Motion systems are being used in the creation of virtual environments for training coal-mine rescue personnel and for helping returning military veterans readjust to civilian life. Andrew Tschesnok, the company’s chief executive and founder, says future versions will work with next-generation game consoles for more-lifelike game experiences.

Consumer Electronics

Taiwan’s Industrial Technology Research Institute, or ITRI, won in the consumer-electronics category for its work developing a paper-thin, flexible speaker.

Researchers at ITRI, a nonprofit organization, devised a way to create arrays of tiny speakers that can be combined to produce high-fidelity speaker systems of almost any size.

Because the fleXpeaker is lightweight and consumes little power, it could be attractive for use in cellphones or in car sound systems. Other possible applications include giant banners that could be used to deliver public-service announcements in train stations or advertising messages in shopping malls.

ITRI is seeking to license the technology or create a spinoff company to commercialize the product.

SFC Smart Fuel Cell AG, based just outside Munich, was honored for developing small, lightweight fuel cells that can be used by soldiers instead of much bulkier, heavier batteries to power communications and navigation devices and other battlefield equipment.

One advantage of the SFC fuel cells is that they produce power from methanol. Many fuel cells produce electricity from hydrogen. But hydrogen is highly explosive, so it needs to be stored in special heavy-metal cartridges. Cartridges for the SFC fuel cells are less expensive, lighter and less bulky.


Serious Materials Inc. of Sunnyvale, Calif., was recognized for its EcoRock drywall substitute, which is made with recycled material and, the company says, requires 80% less energy to make and produces 80% less carbon dioxide than standard gypsum-based drywall. EcoRock, which is also termite- and mold-resistant, will be priced to compete with premium drywall products. Serious Materials has been selling limited test quantities of the product to a few contractors since early this year and plans to expand production and distribution over the next two years.

Though some judges wondered if a relatively high price would limit how widely the product is used, it is a “novel solution to a basic problem that has enormous impact,” says Darlene Solomon, chief technology officer of Agilent Technologies and an Innovation Awards judge.

Health-Care IT

A Washington-based nonprofit and its co-founder, Joel Selanikio, won in this category for EpiSurveyor, free software for mobile devices designed to help health officials in developing countries collect health information.


In developing countries, gathering and analyzing time-sensitive health-care information can be a challenge. Rural health clinics typically compile data only in paper records, making it difficult to spot and to respond quickly to emerging trends.

With EpiSurveyor, developed with support from the United Nations Foundation and the Vodafone Foundation, health officials can create health-survey forms that can be downloaded to commonly used mobile phones. Health workers carrying the phones can then collect information—about immunization rates, vaccine supplies or possible disease outbreaks—when they visit local clinics. The information can then be quickly analyzed to determine, say, whether medical supplies need to be restocked or to track the spread of a disease.

The software has been rolled out in more than 20 African countries.

Materials and Other Base Technologies

Light fixtures based on light-emitting diodes—semiconductors that glow brightly when charged—promise long-lasting, low-energy illumination. But there’s a problem: The light produced is harsh and bluish in color. Special filters can be added to produce warmer tones, but they can make the fixtures less efficient. Devising a way to make warmer-colored, high-efficiency LEDs is seen as essential to their widespread adoption.

QD Vision Inc. of Watertown, Mass., won in this category for inventing a way to dramatically improve the color quality of LED lights by using quantum dots—tiny semiconducting nanocrystals. QD Vision quantum dots can also be used to make energy-efficient flat-panel and other displays that can deliver purer, more intense colors.


The company recently joined with a small LED light-fixture maker, Nexxus Lighting Inc., to make a screw-in LED bulb. The bulbs, which promise to be six times more efficient than incandescent bulbs, are expected on the market later this year.

Medical Devices

The i-Limb from U.K.-based Touch Bionics, the overall Silver winner, received top honors in this category.

Prosthetic hands typically have been limited to simple pincer-like grips that imitate the motions of a thumb and forefinger. While they can perform most essential hand functions, they lack the utility and appearance of a real hand.

The trick in developing the i-Limb was coming up with materials that could match the shape and weight of a human hand yet be powerful enough to handle all the tasks of muscle and bone. The hand uses motors that fit in the space of a knuckle to control the fingers; the motors are controlled by a computer chip.

With the hand, wearers can grip and turn a key, for instance, or hold a business card using a thumb and index finger. They can also close all the fingers and the thumb around an object, like a drink can or a shopping-bag handle. It’s also possible to point with the index finger, which is useful in operating a phone or a cash machine, among other things. The thumb can also be rested next to the rest of the hand, so that it doesn’t snag when putting on clothing.

Adding to its life-like appearance, the i-Limb comes covered with a flexible silicone skin. But wearers don’t have to go with the natural look. Stuart Mead, Touch Bionics’ chief executive, says a lot of younger wearers prefer either a clear skin that shows off the device’s inner workings or a black metallic covering “that looks a little like Darth Vader.”


Medicine – Biotech

Abbott’s Ibis Biosciences unit, the overall Gold winner, was the top entry in this category. The technology takes a novel approach to detecting and identifying pathogens. When faced with unidentified organisms, clinical labs typically have to incubate infected fluid or tissue samples and test them for bacteria or viruses. Newer microarray technologies can run thousands of such tests simultaneously. But they are expensive and require lots of high-quality genetic material for their analyses, making them less than ideal for diagnostic purposes, says Mr. Ecker, a divisional vice president at Abbott. (Last year’s Silver award winner, the PhyloChip, is a microarray system for detecting bacteria in water and other environmental samples.)

Ibis uses a combination of technologies to identify organisms: mass spectrometry—a way of identifying the molecules that make up a sample by measuring their mass and charge—to determine the genetic markers of the organisms in a sample; a vast database of genetic signatures for different organisms; and a mathematical process to match the analysis with the signatures in the database. The test not only can reveal all the known organisms present in the sample, it can also flag previously unknown organisms. Since the first system was completed in 2005, Ibis sensors have been deployed in 20 sites around the U.S., including the Centers for Disease Control. This spring, the device helped the Naval Health Research Center in San Diego to identify the first two cases of the H1N1 swine flu in the U.S. Abbott, a health-care company based in Abbott Park, Ill., acquired Ibis earlier this year.
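
The matching step can be pictured as a nearest-neighbor lookup: compare a measured signature against the database and flag anything too far from every known entry. The toy below illustrates only that general idea; the organisms, signatures, and threshold are invented, and this is not Ibis's actual algorithm.

```python
# Toy illustration of signature matching: compare a measured base
# composition (counts of A, G, C, T inferred from mass-spec data) against
# a database of known organisms. All entries and numbers are invented.

SIGNATURE_DB = {
    "organism A": (29, 21, 24, 26),
    "organism B": (25, 25, 25, 25),
    "organism C": (33, 18, 22, 27),
}

def best_match(measured, db=SIGNATURE_DB, max_distance=3):
    """Return the closest database organism, or flag an unknown one."""
    def distance(sig):
        return sum(abs(m - s) for m, s in zip(measured, sig))
    name, sig = min(db.items(), key=lambda kv: distance(kv[1]))
    if distance(sig) > max_distance:
        return "previously unknown organism"
    return name

print(best_match((30, 21, 23, 26)))  # close to organism A
print(best_match((40, 10, 30, 20)))  # far from everything: flagged unknown
```

The ability to flag a "no good match" case, rather than forcing every sample onto a known organism, is what lets this style of test surface new strains.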

Security – Privacy

Ksplice Inc., based in Cambridge, Mass., won in this category for software that makes it possible to install security patches and other software updates without rebooting computer systems.

Software makers periodically send out updates, and before they can take effect the computer system needs to be shut down and restarted. So even critical security updates are often delayed until late at night or weekends when shutdowns are less disruptive. Ksplice was developed so that companies can perform updates without interrupting their operations. The software was first deployed commercially last year, and the company has about a dozen customers. Though it currently is available only for Linux-based systems, the techniques can be applied to other operating systems, says Jeff Arnold, the company’s president and co-founder.
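
The general idea behind updating without a restart can be illustrated with a toy: route callers through a stable entry point whose implementation can be swapped while the program runs. Ksplice does this at the level of running kernel binaries; the Python sketch below only demonstrates the concept.

```python
# Toy illustration of hot patching: callers go through a stable entry
# point whose implementation can be replaced at runtime, so a fix takes
# effect without a restart. This is the general concept, not Ksplice's
# actual kernel-patching mechanism.

_impl = {"handler": lambda x: x * 2}   # the "buggy" version

def handler(x):
    return _impl["handler"](x)          # stable entry point callers use

def apply_patch(new_fn):
    _impl["handler"] = new_fn           # swap in place, no restart needed

print(handler(3))                       # 6  (old behavior)
apply_patch(lambda x: x * 2 + 1)        # install the patched version
print(handler(3))                       # 7  (new behavior, same entry point)
```

The hard part Ksplice solves is doing the equivalent swap safely inside a live kernel, where the code being replaced may be mid-execution.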


Qualcomm Inc., the San Diego-based wireless-technology company, was honored for a mobile-device display it calls mirasol, a low-power, full-color alternative to traditional displays.

The mirasol display uses micro-electromechanical systems, or MEMS, and thin-film reflective material to produce color images that remain vivid even in direct sunlight. The displays are able to produce a full color spectrum, and images are refreshed quickly enough that full-motion video can be displayed as well as static images. By relying on ambient light, the displays require little power. The technology was originally developed by Iridigm Display Corp., which Qualcomm acquired in 2004 and renamed Qualcomm MEMS Technologies.

The first black-and-white displays using the technology became available in early 2008 and have been used in mobile navigating devices, Bluetooth headsets and MP3 players. The company introduced a color version in May, and has agreed to provide the displays for future cellphones from LG Electronics Inc.


Cloud computing promises to replace the complex array of hardware and software that makes up a company’s information-technology infrastructure with simple IT services delivered over the Internet, much the way that utilities provide electricity. But not all businesses can take advantage of cloud computing’s benefits—because of security concerns, or because they already have significant investments in their own data centers and other IT systems.

The latest version of VMware Inc.’s virtualization software suite, called vSphere, is this year’s winner in the software category. It promises to make it easier to turn a company’s existing data centers into a private cloud—an array of IT services delivered throughout a company over its own computer network—that’s secure, reliable and easy to manage.

VMware has long been the market leader in virtualization software, which makes it possible to run different applications or operating systems on a single computer by dividing the computer into several “virtual” machines, each running programs independent of the others.


With vSphere, which VMware describes as a “cloud operating system,” IT managers can quickly turn all the servers in a data center into a network of virtual machines. A simple dashboard makes it possible to see all the applications that are running on each virtual device.

Judges noted that there is a lot of interest in private clouds, and said that VMware has taken a big step in helping companies build them.

“It’s a very important trend, and these guys are clearly the leader,” says Asheem Chandna, a partner at the venture-capital firm Greylock Partners who was one of the Innovation Awards judges.


The Bronze winner, VNL’s solar-powered base station for cellphone networks, led the wireless category.

Mobile-phone service can deliver huge benefits to developing countries. But getting cellphone coverage to remote, rural parts of India and other countries is hindered by high installation and operating costs, as well as the specialized knowledge needed to set up and run a cellular station. As a result, few operators have gone into these communities.

VNL is looking to overcome this obstacle with a low-power cellular base station that requires little capital expense and has almost no operating costs. The base stations can be powered by a small solar panel in daylight; batteries provide backup power for up to 72 hours.

Another challenge was making the device so simple that it can be installed at low cost by villagers.

The solution was inspired by the Scandinavian retailer Ikea: The entire base station is delivered in six boxes, small enough to fit together in an ox cart. Simple illustrated instructions show how to put the pieces together using color-coded cables.

Even tuning the station to the right microwave signal is easy—it emits a continuous beeping sound when the signal is strongest.

The technology may not be much of a technical breakthrough, but “it’s worthy because of what it might bring to developing countries,” says William Webb, head of research and development at Ofcom, the U.K. communications regulator, and one of the Innovation Awards judges.


Full article and photos:

The Nobel Prize Will Go To…

Who might the future winners be? Here are some candidates.

Next month, Nobel Prizes once again will be handed out to scientists who have conducted field-changing research.

Which scientists may be honored this year, and in the future? It’s impossible to know for sure, since the selection process is notoriously secretive and often surprising. In fact, the names of nominees considered by the various Nobel committees aren’t even revealed for 50 years. So in search of future Nobel laureates, we looked at winners of other prestigious prizes, and interviewed previous Nobel winners and other experts—and came up with the following list.


F. ULRICH HARTL, Director, Max Planck Institute of Biochemistry

ARTHUR HORWICH, Sterling Professor of Genetics and Pediatrics, Yale School of Medicine

Drs. Hartl and Horwich are recognized for their work demonstrating that many cellular proteins don’t fold themselves properly on their own, but instead require the help of so-called molecular chaperones. Without such chaperones, proteins can stick to each other, leading to progressive brain diseases like Parkinson’s, Alzheimer’s and Huntington’s. If drugs that activate the chaperone system can be developed, they could be used to slow or prevent such diseases. The biotechnology industry has already adopted the technique of boosting chaperone production as a way of increasing protein yield in the production of biologic therapies.

“They were significant contributors to the field,” says John Blanchard, a professor of biochemistry at the Albert Einstein College of Medicine and chair of the American Chemical Society’s division of biological chemistry. “These chaperones were really first and best characterized by” Drs. Hartl and Horwich.

RICHARD A. LERNER, Professor of Chemistry, President, Scripps Research Institute

GREG WINTER, Deputy Director, The Medical Research Council’s Laboratory of Molecular Biology

Dr. Lerner and Sir Greg are known for their work that allowed for the creation of a group of antibodies far more diverse than the human immune system can make on its own. They also developed new methods for combing through the various combinations of antibodies to look for blends that have the potential to treat disease. Their research has led to the development of Humira, a treatment for inflammatory diseases, made by Abbott Laboratories.

“The use of antibodies therapeutically has lagged behind the discoveries of how antibodies protect us from various infectious diseases,” says Barton Haynes, director of the Duke Human Vaccine Institute. “From the therapy standpoint,” he adds, the work of Dr. Lerner and Sir Greg “helped us speed the development of antibodies as drugs and as agents.”


KRZYSZTOF MATYJASZEWSKI, Professor, Department of Chemistry, Carnegie Mellon University

Dr. Matyjaszewski developed atom transfer radical polymerization, or ATRP, a way of combining small molecules called monomers into blocks of polymers, which are plastics made up of long molecules. This method allows for the production of a combination of materials with multiple properties that can be controlled in ways that traditional plastics can’t. Dr. Matyjaszewski’s work has significant implications for industrial manufacturing: It could be used to produce more environmentally friendly materials, such as substances that retain their shape better and thus are more easily recycled, and that consume fewer natural resources, he says.

The chemistry developed by Dr. Matyjaszewski “allows one to make a very large range of new materials,” says Devon Shipp, a polymer scientist and chemistry professor at Clarkson University.


VICTOR AMBROS, Professor in Molecular Medicine, University of Massachusetts Medical School

GARY RUVKUN, Professor of Genetics, Harvard Medical School

Drs. Ambros and Ruvkun discovered a class of small ribonucleic acid, or microRNA, that regulates when genes are turned off and on. They showed that these tiny regulatory RNA help to control critical processes during embryonic development, and also play a role in conditions like cancer and heart disease. Regulating microRNA activity in gene expression could be developed into therapies for such diseases.

These researchers made a “fundamental discovery” in understanding cell growth and organism development, says Curtis Harris, chief of the Laboratory of Human Carcinogenesis at the National Cancer Institute. Therapeutic research targeting microRNAs could start human testing within two to three years.

ELIZABETH BLACKBURN, Morris Herzstein Professor of Biology and Physiology, Department of Biochemistry and Biophysics, University of California, San Francisco

CAROL GREIDER, Daniel Nathans Professor and Director of Molecular Biology and Genetics and Professor of Oncology, Johns Hopkins University School of Medicine

Drs. Blackburn and Greider are recognized for their work on telomeres—the caps at the ends of chromosomes that protect their genetic information—and for discovering the telomerase enzyme, which is vital to normal human development but can also play a role in the growth of cancer. Their work has shown that suppressing telomerase activity appears to inhibit the growth of cancer cells.

During prenatal development, telomerase—a ribonucleoprotein enzyme that carries its own RNA template for extending telomeres—is highly active. Without telomerase, telomeres get shorter, chromosomes become unstable and cell aging occurs. But once development proceeds far enough, telomerase activity is turned off “in almost all human tissues, and only returns with cancer,” says Jerry Shay, a professor of cell biology at the University of Texas Southwestern Medical Center at Dallas.

The reactivation of telomerase in cancer cells allows the cells to continue to change and become more malignant, Dr. Shay says, so targeting telomerase could be an important new approach to treating cancer. Telomerase also may “have utility in cell and tissue rejuvenation medicine,” in essence slowing or preventing aging, says Dr. Shay.
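
The shortening process Dr. Shay describes can be caricatured with a toy model: each division trims the telomere, and below a critical length the cell stops dividing. All the numbers here are illustrative assumptions, not measured values.

```python
# Toy model of the "end-replication problem": without telomerase, each
# cell division trims the telomere; below a critical length the cell
# stops dividing (senescence). Every number is an illustrative assumption.

def divisions_until_senescence(telomere_bp=10_000,
                               loss_per_division_bp=100,
                               critical_bp=4_000) -> int:
    """Count divisions before the telomere would drop below the limit."""
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

print(divisions_until_senescence())  # 60 divisions with these assumptions
```

Reactivating telomerase removes the trimming step entirely, which is one way to picture how cancer cells escape this built-in division limit.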


SHINYA YAMANAKA, Director and Professor of the Center for Induced Pluripotent Stem (iPS) Cell Research and Applications, Kyoto University

Senior Investigator, Gladstone Institute of Cardiovascular Disease

Dr. Yamanaka is known for his work on creating embryonic-like stem cells from adult skin cells. Stem cells are characterized by their ability to turn into different kinds of specialized cells, like heart or brain cells.

The ability to “reprogram” adult cells back into an earlier, undifferentiated state by inserting four genes has the potential to reshape the ethical debate over stem-cell research, because the cells no longer have to be taken from an embryo. Robert Lanza, chief scientific officer of Advanced Cell Technology and an adjunct professor at the Institute for Regenerative Medicine at Wake Forest University, says Dr. Yamanaka’s work “is likely to be the most important stem-cell breakthrough of all time. The ability to generate an unlimited supply of patient-specific stem cells will revolutionize the future of medicine.”

Today, Dr. Yamanaka, along with another researcher, was awarded the prestigious 2009 Albert Lasker Basic Medical Research Award.



PETER W. HIGGS, Professor of Physics, Emeritus, University of Edinburgh

ROBERT BROUT, Professor of Physics, Emeritus, Université Libre de Bruxelles

FRANÇOIS ENGLERT, Professor of Physics, Emeritus, Université Libre de Bruxelles

Drs. Brout, Englert and Higgs clarified how particles get mass. In particular, they discovered how mass can be generated under different conditions for particles that carry out fundamental forces such as electromagnetism and radioactivity, which has helped the field better understand fundamental interactions.

Their theoretical work postulated an as-yet-undiscovered particle, known as the Higgs particle, which would validate the basic mathematical structure of the particle-physics model that explains three of the four fundamental forces in physics. “It’s often said that the Higgs particle is the missing last piece in this standard model,” says Peter Jenni, a senior staff physicist at CERN, the European Organization for Nuclear Research.

SUMIO IIJIMA, Professor of Materials Science and Engineering, Meijo University


DANIEL KLEPPNER, Lester Wolfe Professor of Physics, Emeritus, Massachusetts Institute of Technology

Dr. Iijima is known for his early work on carbon nanotubes, which are tiny, extremely hard and strong, tube-like structures of carbon.

The material is “about as small as you can make a wire out of atoms that has this tremendously fascinating set of things you can do with it,” says Joseph Serene, a theoretical condensed-matter physicist and an operating officer of the American Physical Society. Carbon nanotubes can have a broad range of electronic properties depending on how they are put together, from insulators to semiconductors to metals, which can be used in making smaller and smaller electronics, says Dr. Serene.

Dr. Kleppner’s work in atomic physics includes the development of the hydrogen maser, a device that produces electromagnetic waves, as well as the physics of Rydberg atoms, which are atoms with very excited outer electrons. The hydrogen maser has been “essential” for technical applications like timekeeping, such as in atomic clocks, according to Kate Kirby, an atomic and molecular physicist who is an executive officer of the American Physical Society. The ability to keep precise time has allowed for precision in terms of identifying geographical position, leading to such technology as the Global Positioning System.

–Ms. Wang is a staff reporter in The Wall Street Journal’s New York bureau.


Full article and photo:

Some Creative Destruction on a Cosmic Scale

Scientists Say Asteroid Blasts, Once Thought Apocalyptic, Fostered Life on Earth by Carrying Water and Protective Greenhouse Gas.

In a paradox of creation, new evidence suggests that devastating avalanches of cosmic debris may have fostered life on Earth, not annihilated it. If so, life on our planet may be older than scientists previously thought — and more persistent.

Astronomers world-wide have been transfixed by a roiling gash the size of Earth in the atmosphere of Jupiter, caused by an errant comet or asteroid that smashed into the gas giant last month. The lingering turbulence is an echo of a cataclysmic bombardment that shaped the origin of life here 3.9 billion years ago, when millions of asteroids, comets and meteors pummeled our planet.

Known as the Late Heavy Bombardment, these intense showers of rubble created conditions so hellish that scientists named this opening chapter of Earth’s formation the Hadean era, after classical visions of the underworld and the realm of the dead. “The impact that killed the dinosaurs was just a firecracker compared to the impacts during this bombardment,” says planetary scientist Oleg Abramov at the University of Colorado at Boulder. “If you were standing on the surface, you would have been vaporized.”

Until recently, many researchers thought that this rain of rocks, lasting 20 million years or more, almost certainly wiped out early life on Earth — perhaps more than once. No one knows. The earliest known traces of life belong to a period shortly after the asteroid showers slackened. “The idea was that we were hit so many times and so hard that, if there had been any life forming then, it would have been wiped out and required to rise again,” says astrobiologist Lynn Rothschild at NASA’s Ames Research Center in Mountain View, Calif.

But in their super-heated plunge through the atmosphere, these asteroids and meteors may have helped create conditions ideal for emerging life. “Everyone focuses on the meteor that hits the ground,” says geochemist Richard Court at London’s Imperial College. “No one thinks about the products of its journey that get pumped into the atmosphere.”


Recommended Reading

Meteors made Earth more habitable, geochemists at Imperial College London reported in “Meteorite ablation products and their contribution to the atmospheres of terrestrial planets: An experimental study using pyrolysis-FTIR,” published in the journal Geochimica et Cosmochimica Acta.

In Nature, University of Colorado experts reported that early life could have survived asteroid impacts underground in “Microbial habitability of the Hadean Earth during the late heavy bombardment.”

Researchers at the Hawai’i Institute of Geophysics and Planetology summarized a new theory on the role of wandering gas giants in the Late Heavy Bombardment.

NASA’s Near Earth Object Program catalogs the asteroids known to cross Earth’s orbit.

On Wednesday, the U.S. National Research Council released Near-Earth Object Surveys and Hazard Mitigation Strategies: Interim Report at its new Asteroid Watch web site.

NASA’s Jet Propulsion Laboratory offers an Asteroid Watch Widget for Apple and Windows computers that tracks asteroids and comets that will make relatively close approaches to Earth.


As they vented, they collectively could have imported billions of tons of life-sustaining water into the air every year, Dr. Court and his colleague Mark Sephton recently determined. They calculated that these showers of volatile rocks delivered 10 times the daily outflow of the Mississippi River every year for 20 million years. By analyzing the fumes emitted under such extreme heat, they discovered these rocks also could have injected billions of tons of carbon dioxide into the air every year.

Combined with so much water vapor, the carbon dioxide could have induced a global greenhouse effect. That could have kept any life emerging on Earth safely in a planetary incubator at a time when the planet might easily have frozen, because the Sun radiated 25% less energy than it does today. “The amount of CO2 that was produced is about the same as we produce today through fossil fuel use, and we know that is a climate-changing volume,” says Dr. Court.

They analyzed gases emitted by 12 meteorites of the sort believed to have hit during the bombardment using a new laboratory technique called pyrolysis-FTIR, which can instantly heat samples to 1,000 degrees Celsius. They found that, on average, each meteorite could release up to 12% of its mass as water vapor and 6% as carbon dioxide.
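Those mass fractions lend themselves to a back-of-the-envelope calculation. The sketch below is illustrative only: the 12% and 6% upper bounds come from the results described above, but the one-billion-ton incoming mass is a hypothetical round number, not a figure from the study.

```python
# Illustrative only: upper-bound volatile fractions from the pyrolysis-FTIR
# results described above; the incoming mass below is a hypothetical figure.
WATER_FRACTION = 0.12  # up to 12% of a meteorite's mass released as water vapor
CO2_FRACTION = 0.06    # up to 6% released as carbon dioxide

def volatile_yield(meteorite_mass_tons):
    """Return (water_tons, co2_tons) released on ablation, at the upper bounds."""
    return (meteorite_mass_tons * WATER_FRACTION,
            meteorite_mass_tons * CO2_FRACTION)

# A hypothetical billion tons of infalling rock per year:
water, co2 = volatile_yield(1e9)
print(f"water: {water:.2e} tons/yr, CO2: {co2:.2e} tons/yr")
```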

To study so many ruinous impacts, Dr. Abramov and Stephen Mojzsis at the University of Colorado developed a global computer simulation to gauge temperatures beneath individual impact craters — some caused by asteroids 50 miles or more in diameter.

By their calculations, our planet may have fared better than expected. Less than 25% of Earth’s crust would have melted during such a bombardment. “What we find is that under no circumstances can we sterilize the Earth during the bombardment,” says Dr. Mojzsis. “The surface zone was certainly sterile, but that is not where all life is.”


In fact, evolving microbes of the sort considered ancestral to all life forms today may have flourished underground in water heated by the impacts. Such habitable havens actually expanded during the bombardment, the computer simulation showed. Microbes able to live at temperatures ranging from 175 degrees to 230 degrees Fahrenheit could have survived unscathed. Some bacteria today thrive in even hotter water, such as those in hydrothermal vents at Yellowstone National Park.

No one knows what caused the bombardment. Nothing like it has happened since. But a controversial new perspective on orbital mechanics and the formation of the solar system suggests that Jupiter may have been partly responsible. The theory was developed by physicists at the Observatoire de la Côte d’Azur in France, the Universidade Federal do Rio de Janeiro in Brazil and the Southwest Research Institute in Boulder, Colo.

In their hypothesis, Neptune and Uranus originally orbited much closer to the Sun. Starting about four billion years ago, Jupiter and Saturn pushed them into the more distant orbits they follow today. Indeed, Neptune may have started closer to the Sun than Uranus, but ended up farther away. Disrupting the gravitational balance, these huge planets triggered a shotgun blast of planetary buckshot so violent that Mars, Mercury and the Moon still bear its scars.

“It is literally a revolution in our ideas about how our solar system evolved,” says asteroid expert William Bottke at the Southwest Research Institute. “It could be that our form of life today — every living thing that we see today — is due to this bombardment that happened 3.9 billion years ago.”

Earth still speeds through fields of rubble and star dust. This past week, the annual Perseid meteor shower peppered the planet with hundreds of meteors per hour. Every year, 40,000 tons or so of extraterrestrial dust and debris falls on Earth — a sprinkle compared with the millions of rocks still sheltered in the Asteroid Belt or the more distant Kuiper Belt and Oort Cloud.

“The object that hit Jupiter is not out of the ordinary for what we currently have in the solar system,” says Amy Simon-Miller, chief of the planetary systems laboratory at NASA’s Goddard Space Flight Center in Maryland. “From our perspective, the ones we worry about are the ones that cross the path of Earth.”

So far, astronomers have discovered 784 asteroids a half mile or so in diameter that intersect Earth’s orbit. They are tracking thousands of smaller ones and are searching for more. Despite close calls and false alarms, none of them so far threaten Earth, says comet expert Donald Yeomans, manager of NASA’s Near-Earth Object Program Office.

In a report released Wednesday, a panel of experts convened by the U.S. National Research Council warned that Earth could still be blindsided. Researchers are studying ways to safely deflect any asteroids that do come too close.

In this game of orbital roulette, Dr. Yeomans does have his eye on one large near-Earth asteroid called Apophis. On its next close approach to Earth, there is a 1-in-45,000 chance that the interplay of gravitational forces could nudge it onto a potential collision course.

“In the unlikely event that happens, it will come back and hit us on April 13, 2036,” Dr. Yeomans says. “That’s Easter Sunday.”

Robert Lee Hotz, Wall Street Journal



Hubble gives glimpse of ‘pillar of creation’


This undated handout image provided by NASA, released Wednesday, Sept. 9, 2009, taken by the refurbished Hubble Space Telescope, shows a celestial object that looks like a delicate butterfly.

The refurbished deep-space telescope releases its first images since its billion-dollar repair

It is a dazzling view of stellar dust and gases billowing in the interstellar void, already described by NASA in biblical language as a “pillar of creation” picture.

The colossal plume rising in the Carina Nebula was among the images unveiled Wednesday for the first time since astronauts upgraded the Hubble Space Telescope for a final time this spring.

Researchers greeted the images giddily, saying that Hubble’s final years will bring a rich harvest of new scientific insight.

“The data here is just spectacular. These are very exciting images,” said Harvey Richer, a University of British Columbia astronomer.

The images of the Carina Nebula show in spectacular fashion the cosmic clouds within which infant stars emerge.

Less obvious but equally stunning was the scale of those images. The cloud pillar in the Carina Nebula is three light-years long – nearly 30 trillion kilometres. Next to it, Dr. Richer said, our solar system would be just a small pinpoint in the picture.
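The quoted distance is easy to sanity-check. A quick sketch, using standard physical constants rather than figures taken from the article itself:

```python
# Convert the pillar's quoted length (3 light-years) to kilometres.
# Standard constants; not figures taken from the article itself.
SPEED_OF_LIGHT_KM_S = 299_792.458          # km per second (defined value)
SECONDS_PER_JULIAN_YEAR = 365.25 * 86_400  # the Julian year used to define the light-year

km_per_light_year = SPEED_OF_LIGHT_KM_S * SECONDS_PER_JULIAN_YEAR
pillar_km = 3 * km_per_light_year
print(f"{pillar_km:.3e} km")  # roughly 2.8e13 km, i.e. nearly 30 trillion km
```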

The new images show galaxies sheared and distorted by gravitational pull, light rays bent by dark matter, and an unknown object – either a comet or an asteroid – crashing into Jupiter.

To Dr. Richer, the most exciting new image is a colourful shot of a myriad of red, yellow and blue shimmering stars in Omega Centauri.


Hubble captured in that shot a stellar formation born 12 billion years ago, within two billion years of the Big Bang that created the universe.

The red stars in the image, Dr. Richer said, are colder giants that have burned up their core hydrogen and swollen to a hundred times their original size, our sun’s fate in five billion years.

For Dr. Richer, that picture gave a taste of his coming research: Hubble will devote 121 orbits next year to looking at a cluster of one million stars in the southern sky, known as 47 Tucanae.

Dr. Richer is hoping Hubble will enable him to find evidence of planets among those ancient stars. The findings could help in uncovering whether life arose early in the universe.

“If planets formed very early in the history of the universe, when these star clusters formed … that would mean there’s been a very long time for life to evolve,” Dr. Richer said.

The images released yesterday were the first since May, when the crew of shuttle mission STS-125 overhauled Hubble for the final time, replacing gyroscopes and batteries, and installing new sensors and cameras. The repair will enable Hubble to keep operating until 2014, when its successor, the James Webb Space Telescope, is to be launched.

“We have a fully functioning, beautifully operating telescope today,” said Hubble senior scientist Dave Leckrone of the Goddard Space Flight Center.



Planet-hunter will find alien moons



A planet-seeking spacecraft launched in March is so powerful that it will even detect habitable moons around alien worlds, UK scientists said today.

NASA’s $595 million Kepler mission is flying through space checking out 100,000 stars looking for other planets resembling Earth.

Its instruments scan the light of stars in one small region of the Milky Way, watching for little blips revealing that a planet is passing in front of one of them.

Now a team led by Dr David Kipping of University College London says it may even find habitable moons too. Such moons could support alien life if they orbit in the “Goldilocks zone” around a star, where conditions are neither too hot nor too cold but just right.

Dr Kipping, who believes that many thousands or even millions of these moons exist in the galaxy, has devised a way to discover them by looking for a wobble in the planet that each moon orbits, caused by the moon’s gravitational pull. The new research shows that Kepler’s telescope will be powerful enough to spot the resulting changes in the planet’s position and velocity.

An alien solar system’s moon – dubbed an exomoon – will be easiest to detect if it is orbiting a low-density, “fluffy” planet like Saturn, say the scientists, rather than a denser or more solid world. This is because such a light planet would wobble much more under its moon’s pull than a heavy planet would.

If the Saturn-like planet is at the right distance from its star, then the temperature will allow liquid water to be stable on any sufficiently large moons in orbit around it and these could then be habitable.

The team found that moons as small as a fifth the mass of the Earth should be easily detectable with the Kepler space probe around some 25,000 stars up to 500 light-years from Earth.

Star Wars fans are already wondering if Kepler might find planetary satellites like the fabled forest moon of Endor, home to the Ewoks.

Dr Kipping said: “For the first time, we have demonstrated that potentially habitable moons up to hundreds of light years away may be detected with current instrumentation.

“As we ran the simulations, even we were surprised that moons as small as one-fifth of the Earth’s mass could be spotted. It seems probable that many thousands, possibly millions, of habitable exomoons exist in the Galaxy and now we can start to look for them.”

The team’s findings will be published by the Royal Astronomical Society. Last month, Skymania News told how Kepler had detected the phases of an extrasolar planet.


Underwater laser pops in navy ops

The approach could use commercially available lasers

US military researchers are developing a method for communication that uses lasers to make sound underwater.

The approach focuses laser light to produce bubbles of steam that pop and create tiny, 220-decibel explosions.

Controlling the rate of these explosions could provide a means of communication or even acoustic imaging.

Researchers at the US Naval Research Laboratory say the approach could be used for air-to-submarine or fully underwater communication.

One of the peculiar effects of high-intensity laser beams is that they can actually focus themselves when passing through some materials, like water.

As the laser focuses, it rips electrons off water molecules; the superheated water then flashes into a bubble of steam that collapses with a powerful “pop”.

Because different colours of light travel at markedly different speeds underwater, the precise location where different colours focus together could be manipulated by the suitable design of a many-coloured input pulse.

Those same focusing effects are significantly reduced in air, so a laser “signal” could be launched from an airborne source to communicate with submarines without their needing to surface.

The idea could also be used for underwater acoustic imaging, by using a moveable mirror to direct the pulses into an array of pops whose echoes can give a detailed picture of underwater terrain.



Mighty microbes might help clean up oil extraction and radioactive wastes


E. coli

There appears to be literally nothing microbes cannot do. From the invention of photosynthesis to lifecycles that require no sunlight—even surviving extreme radiation—the most extreme microbes thrive almost everywhere scientists look. And now microbiologists have added two more energy-related tricks to the microbial arsenal.

At the European Society for General Microbiology meeting this week, Richard Johnson and his fellow scientists from the University of Essex will present research showing that a mixed ecosystem of particular bacteria can survive—and clean up—one of the most lethal man-made environments: the residue from extracting petroleum from oil sands.

Extracting this heavy oil and refining it produces a slew of toxic waste, particularly water laced with naphthenic acid (one of the secret ingredients of napalm). In the Athabasca region of Canada—home to much of the oil sands industry—there are at least one billion cubic meters of such polluted water sitting in local ponds.

What to do? Unleash bacteria, Johnson says. The microbes can break down the naphthenic acid into more benign byproducts in a few days rather than the decade or more it can take naturally. This can cut down on the environmental impact of producing oil from tar sands, of which an estimated 3.6 trillion barrels exist (double known conventional oil reserves).

It does not, however, address that other related byproduct: climate change caused by the greenhouse gases emitted when the oil is burned. Maybe microbes can help with that too (after all, they were responsible for the composition of the atmosphere until humans came along).

And it turns out E. coli—most famous for its role in food poisoning—does a pretty good job of cleaning up another potentially important but lethal energy source: radioactive waste. Lynne Macaskie and colleagues at the University of Birmingham show in another presentation at the same meeting how said E. coli, in conjunction with a cheap, widely available chemical (inositol phosphate), can recover uranium from the polluted waters of mines.

Basically, the E. coli break down the chemical and free the phosphate, which then bonds with the uranium and forms a precipitate on the exterior of the cell that can be harvested.

The researchers estimate that such recovered uranium would cost about 15 cents per gram of the nuclear fuel element. But it also offers an environmental protection advantage, removing radioactive material from the mine tailings. The process could even be used on spent fuel rods and other nuclear waste.



Researchers Claim to Cook Up Isolated Magnetic Poles

A family of rare-earth compounds called spin ices appears to harbor a form of long-sought magnetic monopoles, if not their theoretical ideal.


POLE POSITION: Can magnetic north exist independent of magnetic south?
Magnets are remarkable exemplars of fairness—each north pole is invariably accompanied by a counterbalancing south pole. Split a magnet in two, and the result is a pair of perfectly neutral magnets, each with its own north and south.

For decades researchers have sought the exception to this rule of fairness and balance: the magnetic monopole. Magnetism’s answer to electricity’s negatively charged electron, a monopole would be a free-floating carrier of either magnetic north or magnetic south—a yin unbound from its yang.

A pair of papers published online this week in Science offer experimental evidence that such monopoles do in fact exist, albeit not as electron-like elementary particles, a caveat that one self-professed purist says disqualifies them from genuine monopole status.

Both studies examine the magnetic behavior of a family of rare-earth materials known as spin ices—one group using holmium titanate and the other dysprosium titanate. The man-made spin ices take their name from their similarity to water ice—at the molecular level their internal magnetic structure is analogous to the arrangement of protons in ice.

Claudio Castelnovo, a postdoctoral physicist at the University of Oxford who co-authored one of the Science papers and also co-wrote a paper in Nature last year describing how monopoles might be realized in spin ices, explains that the compounds offer a peculiar combination of order and freedom that facilitates the dissociation of the poles.

At low temperatures, there is still some magnetic wiggle room in the spin ice’s lattice structure, but not a lot—the magnetic freedom of the system is frustrated, so to speak. “As a result, this is a substance that has degrees of freedom that look the same, microscopically, as you would see in a fridge magnet,” Castelnovo says. “But a fridge magnet is able to order so as to act as a fridge magnet and stick to metals, while this one is not able to achieve this level of ordering in spite of having this magnetic structure inside, because of this frustration.”

Internally, the tiny magnetic components arrange themselves head to tail in strings, like chains of bar magnets stretching across a table in different directions. In a very cold, clean sample, those strings form closed loops. But excitation induced by a rise in temperature can introduce tiny defects in these chains, Castelnovo says—in the bar-magnet analogue, one of the magnets is flipped, breaking the head-to-tail continuity. “You have your path that is north–south–north–south, and at a certain point one of the needles actually twists 180 degrees and points the wrong way,” he explains.

On either side of that defect, all of a sudden, is a concentration of magnetic charge—two norths at one end, two souths at the other. Those concentrations can float free along the string, acting as—voilà—magnetic monopoles.

“The beauty of spin ice is that the remaining degree of disorder in this low-temperature phase makes these two points independent of each other, apart from the fact that they attract each other from a magnetic point of view because one is a north and one is a south,” Castelnovo says. “But they are otherwise free to move around.”

Of course, this method of synthesizing monopoles cannot bring a north into existence without also generating a south—the key is their dissociation. “They always have to come in pairs,” Castelnovo says, “but they don’t have to be anywhere specifically in relation to one another.”

But Kimball Milton, a University of Oklahoma physicist who wrote a 2006 review article summarizing the status of monopole searches, is not convinced. “These are not magnetic monopoles,” he says.

“I might object to [the researchers] saying ‘genuine magnetic monopoles’, because when you say genuine, that implies to me it’s a point particle, and it’s not,” Milton says. “It’s an effective excitation that at some level looks like a monopole, but it’s not really fundamentally a monopole.”

He also says it is “completely wrong” to describe, as the research teams do, the chain of magnetism within spin ices as a Dirac string, a hypothetical invisible tether with a monopole at its end that was envisioned in the 1930s by English physicist Paul Dirac. “But that’s just because I’m a purist,” Milton says.

By his assessment, the magnetic strings in the spin ice do not fit the Dirac definition because they are, in fact, observable and merely carry flux between two opposing so-called monopoles. “Real monopoles, if they existed, would be isolated, and the string would run off to infinity,” he says.

“I’m not trying to put down the experiment or the work in any way,” Milton says. “I’m sure [the new studies] are important in the field of condensed matter physics. They’re not important from a fundamental point of view.”



Panels of Light Fascinate Designers


Flexibility is one of the appeals of OLED, a cousin of the LED.

LED light bulbs, with their minuscule energy consumption and 20-year life expectancy, have grabbed the consumer’s imagination.

But an even newer technology is intriguing the world’s lighting designers: OLEDs, or organic light-emitting diodes, create long-lasting, highly efficient illumination in a wide range of colors, just like their inorganic LED cousins. But unlike LEDs, which provide points of light like standard incandescent bulbs, OLEDs create uniform, diffuse light across ultrathin sheets of material that eventually can even be made to be flexible.

Ingo Maurer, who has designed chandeliers of shattered plates and light bulbs with bird wings, is using 10 OLED panels in a table lamp in the shape of a tree. The first of its kind, it sells for about $10,000.


A lamp using 10 OLED panels by the designer Ingo Maurer costs about $10,000.

He is thinking of other uses. “If you make a wall divider with OLED panels, it can be extremely decorative. I would combine it with point light sources,” he said.

Other designers have thought about putting them in ceiling tiles or in Venetian blinds, so that after dusk a room looks as if sunshine is still streaming in.

Today, OLEDs are used in a few cellphones, like the Impression from Samsung, and for small, expensive, ultrathin TVs from Sony and soon from LG. (Sony’s only OLED television, with an 11-inch screen, costs $2,500.) OLED displays produce a high-resolution picture with wider viewing angles than LCD screens.

In 2008, seven million of the one billion cellphones sold worldwide used OLED screens, according to Jennifer Colegrove, a DisplaySearch analyst. She predicts that next year, that number will jump more than sevenfold, to 50 million phones.

But OLED lighting may be the most promising market. Within a year, manufacturers expect to sell the first OLED sheets that one day will illuminate large residential and commercial spaces. Eventually they will be as energy efficient and long-lasting as LED bulbs, they say.

Because of the diffuse, even light that OLEDs emit, they will supplement, rather than replace, other energy-efficient technologies, like LED, compact fluorescent and advanced incandescent bulbs that create light from a single small point.

Its use may be limited at first, designers say, and not just because of its high price. “OLED lighting is even and monotonous,” said Mr. Maurer, a lighting designer with studios in Munich and New York. “It has no drama; it misses the spiritual side.”

“OLED lighting is almost unreal,” said Hannes Koch, a founder of rAndom International in London, a product design firm. “It will change the quality of light in public and private spaces.”

Mr. Koch’s firm was recently commissioned by Philips to create a prototype wall of OLED light, whose sections light up in response to movement.


An OLED installation by Hannes Koch, who said the technology would “change the quality of light in public and private spaces.”

Because OLED panels could be flexible, lighting companies are imagining sheets of lighting material wrapped around columns. (General Electric created an OLED-wrapped Christmas tree as an experiment.) OLED can also be incorporated into glass windows; nearly transparent when the light is off, the glass would become opaque when illuminated.

Because OLED panels are just 0.07 of an inch thick and give off virtually no heat when lighted, one day architects will no longer need to leave space in ceilings for deep lighting fixtures, just as homeowners do not need a deep armoire for their television now that flat-panel TVs are common.

The new technology is being developed by major lighting companies like G.E., Konica Minolta, Osram Sylvania, Philips and Universal Display.

“We’re putting significant financial resources into OLED development,” said Dieter Bertram, general manager for Philips’s OLED lighting group. Philips recently stepped up its investment in this area with the world’s first production line for OLED lighting, in Aachen, Germany.

Universal Display, a company started 15 years ago that develops and licenses OLED technologies, has received about $10 million in government grants over the last five years for OLED development, said Joel Chaddock, a technical project manager for solid state lighting in the Energy Department.

Armstrong World Industries and the Energy Department collaborated with Universal Display to develop thin ceiling tiles that are cool to the touch while producing pleasing white light that can be dimmed like standard incandescent bulbs. With a recently awarded $1.65 million government contract, Universal is now creating sheetlike undercabinet lights.

“The government’s role is to keep the focus on energy efficiency,” Mr. Chaddock said. “Without government input, people would settle for the neater aspects of the technology.”

G.E. is developing a roll-to-roll manufacturing process, similar to the way photo film and food packaging are created; it expects to offer OLED lighting sheets as early as the end of next year.

“We think that a flexible product is the way to go,” said Anil Duggal, head of G.E.’s 30-person OLED development team. OLED is one of G.E.’s top research priorities; the company is spending more than half its research and development budget for lighting on OLED.

Exploiting the flexible nature of OLED technology, Universal Display has developed prototype displays for the United States military, including a pen with a built-in screen that can roll in and out of the barrel.

The company has also supplied the Air Force with a flexible, wearable tablet that includes GPS technology and video conferencing capabilities.

As production increases and the price inevitably drops, OLED will eventually find wider use, its proponents believe, in cars, homes and businesses.

“I want to get the price down to $6 for an OLED device that gives off the same amount of light as a standard 60-watt bulb,” said Mr. Duggal of G.E. “Then, we’ll be competitive.”



Quantum computer slips onto chips


Chips like this one could form the basis for future optical computers

Researchers have devised a penny-sized silicon chip that uses photons to run Shor’s algorithm – a well-known quantum approach – to solve a maths problem.

The algorithm finds the two prime numbers that multiply together to form a given figure, and running it with photons has until now required laboratory-sized optical computers.

This kind of factoring is the basis for a wide variety of encryption schemes.
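To make the factoring task concrete, here is a minimal classical sketch in Python. This is plain trial division, not Shor’s algorithm; it is instant for toy numbers but becomes hopeless at the scale of the numbers used in real encryption, which is exactly the gap quantum computers promise to close.

```python
def factor(n):
    """Return the first nontrivial factor pair of n, or None if n is prime."""
    d = 2
    while d * d <= n:       # only need to test divisors up to sqrt(n)
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print(factor(15))  # -> (3, 5), the kind of toy case used in early quantum factoring demonstrations
```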

The work, reported in Science, is rudimentary but could easily be scaled up to handle more complex computing.

Shor’s algorithm and the factoring of large numbers has been a particular case used to illustrate the power of quantum computing.

Quantum computers exploit the counterintuitive fact that photons or trapped atoms can exist in multiple states or “superpositions” at the same time.

For certain types of calculations, that “quantum indeterminacy” gives quantum computers a significant edge.

While traditional or “classical” computers find factoring large numbers impracticably time-consuming, for example, quantum computers can in principle crack the problem with ease.

That has important implications for encryption methods based on factoring, such as the “RSA” method that is used to make transactions on the internet more secure.

‘Important step’

Optical computing has been touted as a potential future for information processing, by using packets of light instead of electrons as the information carrier.

But these packets, called photons, are also endowed with the indeterminate properties that make them quantum objects – so an optical computer can also be a quantum computer.

In fact just this kind of photon-based quantum factoring has been accomplished before, but the ability to put the heart of the machine on a standard chip is promising for future applications of the idea.

“The way people used to make this kind of circuit consumed square metres of laboratory space and took graduate students many months to align,” said Jeremy O’Brien, the University of Bristol researcher who led the work.

“Doubling the complexity of the circuit oftentimes turns it from being a difficult task to a practically impossible one, whereas for us to double the complexity it’s really straightforward,” he told BBC News.

The Bristol team’s approach makes use of waveguides – channels etched into the chips that provide a path for the photons around the chips like the minuscule wires in conventional electronics.

While Professor O’Brien said he is confident that such waveguides are the logical choice for future optical quantum computers, he added that there is still a significant amount of work to do before they make it out of the laboratory.

“To get a useful computer it needs to be probably a million times more complex, so a full-scale useful factoring machine is still at least two decades away,” he said.

“But this is one important step in that direction.”



The Real Sea Monsters: On the Hunt for Rogue Waves

Scientists hope a better understanding of when, where and how mammoth oceanic waves form can someday help ships steer clear of danger


A monster rogue wave approaches a merchant ship in the Bay of Biscay, an arm of the Atlantic Ocean bordered by the coasts of northwestern Spain and southwestern France.

A near-vertical wall of water in what had been an otherwise placid sea shocked all on board the ocean liner Teutonic—including the crew—on that Sunday in February, more than a century ago.

“It was about 9 o’clock, and [First Officer Bartlett], as he walked the bridge, had not the slightest premonition of the impending danger.  The wave came over the bow from nobody seems to know where, and broke in all its fury,” reported The New York Times on March 1, 1901: “Many of the passengers were inclined to believe that the wave was the result of volcanic phenomena, or a tidal wave. These opinions were the exception, however, for had the sea been of the tidal order Bartlett would have seen it coming.” The volcano theory was just as unlikely: “Absurd, absurd,” one of the Teutonic’s officers told the Times. “It was a giant sea, and there is no doubt of that.”

This is just one of the many anecdotal accounts in maritime history of waves upward of 30 meters devouring ships, even swallowing low-flying helicopters. But what sea captains and scientists have long believed to be true only gained widespread acceptance after the first digitally recorded rogue wave struck an oil rig in 1995. “The seamen tales about large waves eating their ships are correct,” says Tim Janssen, an oceanographer at San Francisco State University. “This was proof to everybody else, and a treat for scientists. They suspected it, but to see it and have an observation is something else.”

Now that there is no longer a question of rogue waves’ existence, other mysteries have arisen: How frequently do they occur? Just how do they come about? Are there areas or conditions where they are more likely? Janssen is among a growing group of researchers in search of answers to these questions, which could someday lead to safer seas.

Rogue waves by the numbers

Before any answers could be attempted, scientists first had to characterize a rogue (or freak) wave. The widely accepted definition, according to Janssen, is a wave roughly three times the average height of its neighbors. This is a somewhat arbitrary cutoff. Really, he notes, they are just “unexpectedly large waves.” The wave that swept onlookers off the coast in Acadia National Park in Maine on August 23 may not fit the former definition, for example, because background waves were already quite large due to Hurricane Bill, and rogues typically occur in the open ocean. Yet that wave has still been readily referred to as a “rogue”.

No one is certain yet just how frequently freak waves form; accurate numbers are extremely difficult to collect given the waves’ rare and transient nature. With more sophisticated monitoring and modeling—and as first-hand accounts are taken more seriously—the waves’ prevalence appears to be rising. “[Rogue waves] are all short-lived, and because ships are not everywhere, the probability that a ship encounters one is relatively small,” says Daniel Solli, who studies the optical version of rogue waves at the University of California, Los Angeles. “But with increasing amounts of oceanic traffic in the future, the likelihood of encountering them is getting larger.”

Some areas seem to breed the waves more than others. Janssen and his colleagues recently used computer models to determine that regions where wave energy is strongly focused could be up to 10 times more likely to generate a freak wave. He speculates that approximately three of every 10,000 waves on the oceans achieve rogue status, yet in certain spots—like coastal inlets and river mouths—these extreme waves can make up three out of every 1,000 waves. A paper describing these results was published last month in the Journal of Physical Oceanography.
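Taken at face value, the figures above imply a surprisingly high rate of rogue waves passing any fixed point. A rough, illustrative calculation (the 10-second swell period is an assumption, not a figure from the article):

```python
# Rough rate estimate from the article's figures: ~3 in 10,000 open-ocean
# waves are rogues, versus ~3 in 1,000 in energy-focusing zones.
wave_period_s = 10                            # assumed typical swell period
waves_per_day = 24 * 3600 / wave_period_s     # ~8,640 waves pass a fixed point

open_ocean_rate = 3 / 10_000
hotspot_rate = 3 / 1_000

rogues_open = waves_per_day * open_ocean_rate      # ~2.6 rogues per day
rogues_hotspot = waves_per_day * hotspot_rate      # ~26 rogues per day
print(round(rogues_open, 1), round(rogues_hotspot, 1))
```

The rarity lies not in the waves themselves but in a ship being at the right point at the right moment, which is Solli's point about sparse oceanic traffic.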

Forming fearsome waves

Various theories exist for how rogue waves form. The simplest suggests that small waves coalesce into much larger ones in an accumulative fashion—a faster one-meter wave catches up with a slower two-meter wave adding up to a three-meter wave, for example. Janssen and his colleagues build on this with a more complex, nonlinear model in their recent paper. Waves might actually “communicate—sometimes in a bad way—and produce more constructive interferences,” Janssen explains. By communicating, he means exchanging energy. And because the conversations aren’t necessarily balanced, he says, “Communication can get amplified enough that a high-intensity large wave develops.” In other words, one burgeoning wave can actually soak up the energy of surrounding waves.
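The simple coalescence picture can be sketched with plain linear superposition; the speeds and wavelengths below are arbitrary illustrative values, not taken from Janssen's model:

```python
import numpy as np

# Linear superposition sketch: a faster 1 m wave overtakes a slower 2 m wave.
# Where the crests coincide, the sea surface briefly stands ~3 m high.
x = np.linspace(0, 500, 5001)   # metres of sea surface

def surface(t):
    slow = 2.0 * np.cos(2 * np.pi * (x - 8.0 * t) / 100.0)   # 2 m wave, 8 m/s
    fast = 1.0 * np.cos(2 * np.pi * (x - 12.0 * t) / 60.0)   # 1 m wave, 12 m/s
    return slow + fast

print(surface(0.0).max())   # 3.0: crests aligned, amplitudes simply add
```

Janssen's nonlinear mechanism goes beyond this: the energy exchange lets one wave grow at its neighbours' expense, so the peak can exceed the simple sum.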

Again, in those places where variations in water depths and currents focus wave energies, this line of communication can get especially busy. Janssen’s models identified these rogue-prone zones. Certain conditions such as winds and wave dissipation, however, could not be included, limiting the simulation’s predictive power.

Meanwhile, Chin Wu, an environmental engineer at the University of Wisconsin–Madison, sees another likely scenario spurring the monster waves: “If a wave propagates from east to west, and the current moves west to east, then a wave starts to build up,” says Wu, who studies wave–current interactions in a 15-meter pool. The wave basically climbs the current’s wall, rising out of what appears to be nowhere. Rogue waves have in fact been more common in regions such as the east coast of South Africa where surface waves meet currents running in the opposite direction.

Focusing on forecasts

The only way to really know what is going on in the unpredictable oceans is to watch, Wu says. He acknowledges, however, that the investments in the instruments and time necessary for such fieldwork are immense. “We need to identify places where [rogue waves] are more likely to occur,” he says, emphasizing the importance of numerical models—including the nontrivial accounting of wind and wave breaking—at this step, “and then focus on those areas.”

Focusing on an optical wave analogue may actually help scientists limit where they need to look. Light waves travel in optical fibers similarly to water waves traveling in the open ocean. “In optics we’re dealing with a similar phenomenon, but doing experiments on the tabletop and acquiring data in only a fraction of a second,” says U.C. Los Angeles’s Solli. Although he doesn’t suggest that optical experiments should replace ocean research, he says they could serve as a guide. Mapping light-wave conditions to the ocean could uncover parallel parameters that give rise to water waves. “Instead of looking for a needle in a haystack in the water, you could benefit from some beginning wisdom and narrow down the range,” adds Solli, who co-authored a paper on optical rogue waves in the December 2007 edition of Nature.

Janssen agrees with the need for more direct observations of ocean behavior. “We can make a theoretical prediction,” he says. “But then we have to go out and see if nature agrees.” If it does, the results “could provide a prediction scenario—made visible on maps—of hot spots that could change day to day,” Janssen says. This could work much like tornado forecasting.

Only two passengers were seriously hurt in the Teutonic incident—one suffered a broken jaw and the other a severed foot. They were fortunate. “Had it struck us later on in the day many passengers would have been promenading in the sunshine, without doubt,” Officer Bartlett told the Times. “There is no telling how many of them would have been injured.” Extreme waves do not always offer such merciful timing, however. Forecasts could be crucial in helping future ocean liners evade the voracious sea monsters.



Why not spend $21 billion on solar power from space?


The Japanese government is prepared to spend some 2 trillion yen on a one-gigawatt orbiting solar power station—and this week Mitsubishi and other Japanese companies have signed on to boost the effort. Boasting some four square kilometers of solar panels—maybe of the superefficient Spectrolab variety but more likely domestically sourced from Mitsubishi or Sharp—the space solar power station would orbit some 36,000 kilometers above Earth and transmit power via microwave or laser beam.

The benefit? Constant solar energy production as the space-based power plant never passes out of sunlight. The downsides? Only enough power for roughly 300,000 Japanese homes at a price tag of $21 billion, according to Japan’s science ministry (about 127 million people live in Japan in some 47 million households, according to Wikipedia and the CIA’s World Factbook). The Japan Aerospace Exploration Agency (JAXA) aims to have a system in space by 2030.

The first step will be launching a test satellite that will gather solar power and beam it back to Earth, probably in 2015. Already, ground tests show that some solar power (180 watts) can be beamed successfully.

In the U.S., where space solar has been on the drawing board since at least the 1960s, California’s Pacific Gas & Electric has pledged to buy power from a planned 200-megawatt space solar station still being developed by Solaren.

But the U.S. government has mixed feelings about space solar. Despite some $80 million spent over decades by NASA, the alternative energy source is no closer to fruition using public funds. And other government agency estimates put the price tag for space solar at $1 billion per megawatt—making this the most expensive power source identified to date in any solar system.
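A quick back-of-envelope check of the figures quoted above:

```python
# Sanity-checking the article's numbers for the Japanese station.
station_watts = 1e9        # one gigawatt
cost_usd = 21e9            # $21 billion (roughly 2 trillion yen)
homes = 300_000

print(cost_usd / station_watts)       # $21 per watt of capacity
print(station_watts / homes / 1e3)    # ~3.3 kW available per home

# The other government-agency estimate for US space solar:
print(1e9 / 1e6)                      # $1 billion per megawatt = $1,000 per watt
```

For comparison, terrestrial solar capacity in 2009 cost on the order of a few dollars per watt, which puts the $21-per-watt (let alone $1,000-per-watt) figures in perspective.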



Galaxy’s ‘cannibalism’ revealed

Andromeda galaxy (SPL)

The Andromeda galaxy is still expanding.

The vast Andromeda galaxy appears to have expanded by digesting stars from other galaxies, research has shown.

When an international team of scientists mapped Andromeda, they discovered stars that they said were “remnants of dwarf galaxies”.

The astronomers report their findings in the journal Nature.

This consumption of stars has been suggested previously, but the team’s ultra-deep survey has provided detailed images to show that it took place.

This shows the “hierarchical model” of galaxy formation in action.

The model predicts that large galaxies should be surrounded by relics of smaller galaxies they have consumed.

The scientists charted the outskirts of Andromeda in detail for the first time.

They discovered stars that could not have formed within the galaxy itself.

Pauline Barmby, an astronomer from the University of Western Ontario who was involved in the study, told BBC News the pattern of the stars’ orbits revealed their origin.

“Andromeda is so close that we can map out all the stars,” she said.

“And when you see a sort of lump of stars that far out, and with the same orbit, you know they can’t have been there forever.”

Andromeda, which is approximately 2.5 million light years away from Earth, is still expanding, say the scientists.

The researchers also saw a “stream of stars” from a nearby galaxy called Triangulum “stretching” towards Andromeda.

Dr Scott Chapman, reader in astrophysics at the Institute of Astronomy, University of Cambridge, was also involved in the research.

He said: “Ultimately, these two galaxies may end up merging completely.

“Ironically, galaxy formation and galaxy destruction seem to go hand in hand.”

Nickolay Gnedin, an astrophysicist from the University of Chicago, who was not involved in this study, described the work as showing “galactic archaeology in action”.



A Doomed Planet, and Scientists Are Lucky to Have Spotted It

Were astronomers just lucky when they discovered the planet WASP-18b?

At first impression, the planet, described in the current issue of the journal Nature, fits a familiar profile for planets that have been discovered around other stars: big (about 10 times the mass of Jupiter), close to the parent star (about 1.9 million miles away, or just one-fiftieth of the distance between the Sun and Earth) and hot (3,800 degrees Fahrenheit). About one-quarter of the nearly 400 planets discovered so far have been such “hot Jupiters.”

But as an international team of astronomers looked more closely, they became more surprised that they had seen WASP-18b at all. The tidal forces between a star and a planet dissipate energy, and WASP-18b is so close that it should fall into its host star in less than a million years — an eye blink on the cosmic scale. (Andrew Collier Cameron, a professor of astronomy at the University of St Andrews and a member of the team, noted that with the impending fiery fate of the planet, it seemed appropriate that it was located in the constellation Phoenix.)

The star system is about a billion years old, the astronomers reported, so the chances that they observed WASP-18b on the cusp of oblivion are about 1 in 1,000.

In an accompanying commentary in Nature, Douglas P. Hamilton, a professor of astronomy at the University of Maryland, noted that this was roughly the same unlikely probability as drawing two red aces in a row from a full deck of cards.
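Dr Hamilton's card analogy checks out arithmetically; here is a quick verification (our calculation, not from the commentary):

```python
from fractions import Fraction

# Probability of drawing two red aces in a row from a full 52-card deck
# (there are two red aces: the ace of hearts and the ace of diamonds).
p = Fraction(2, 52) * Fraction(1, 51)
print(p, float(p))   # 1/1326, roughly 0.00075 -- about 1 in 1,000

# ...matching the planet's odds: catching a ~1-million-year death spiral
# at a random moment in a ~1-billion-year stellar lifetime.
print(1e6 / 1e9)     # 0.001
```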

“Of those 400 objects, it’s unique,” Dr. Hamilton said. “It’s the only planet that’s going to be crashing into its star in one million years.”

But luck is not the only possibility. Ignorance could be another. It might be that astronomers do not understand the dynamics of stellar tides. The rate of energy dissipation depends on how well the star vibrates — ringing like a bell or thunking like a chunk of wood. (If the star is ringing, less energy is dissipated, and WASP-18b would not be falling as quickly.) This difficult-to-measure quantity, which depends on turbulence inside the star, is not known for individual stars, not even for the Sun.

The answer does not have to wait a million years. In fact, astronomers just have to wait 5 to 10 years. WASP-18b already whips around the star every 22 hours, 35 minutes, 41.5 seconds — a year in less than an Earth day. If it is falling inward as fast as predicted, its day will shorten noticeably in the coming years.
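Converting the quoted period to days is a one-liner, and it matches the 0.94-day figure in the paper's title:

```python
# WASP-18b's orbital period: 22 hours, 35 minutes, 41.5 seconds.
period_s = 22 * 3600 + 35 * 60 + 41.5
period_days = period_s / 86400      # seconds in an Earth day
print(round(period_days, 2))        # 0.94 -- a "year" in under a day
```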


See also: An orbital period of 0.94 days for the hot-Jupiter planet WASP-18b

Single molecule’s stunning image

Pentacene molecule image (IBM)
Even the bonds to the hydrogen atoms at the pentacene’s periphery can be seen

The detailed chemical structure of a single molecule has been imaged for the first time, say researchers.

The physical shape of single carbon nanotubes has been outlined before, using similar techniques – but the new method even shows up chemical bonds.

Understanding structure on this scale could help in the design of many things on the molecular scale, particularly electronics or even drugs.

The IBM researchers report their findings in the journal Science.

It is the same group that in July reported the feat of measuring the charge on a single atom.

Fine tuning

In both cases, a team from IBM Research Zurich used what is known as an atomic force microscope or AFM.

Their version of the device acts like a tiny tuning fork, with one of the prongs of the fork passing incredibly close to the sample and the other farther away.

When the fork is set vibrating, the prong nearest the sample will experience a minuscule shift in the frequency of its vibration, simply because it is getting close to the molecule.

Comparing the frequencies of the two prongs gives a measure of just how close the nearer prong is, effectively mapping out the molecule’s structure.
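In the standard frequency-modulation AFM picture, that shift is set by the gradient of the tip–sample force. The sketch below uses typical quartz tuning-fork numbers, assumed for illustration rather than taken from the paper:

```python
# Small-amplitude FM-AFM approximation: delta_f = -(f0 / 2k) * dF/dz
# Sensor values below are typical of a watch-crystal tuning fork (assumed).
f0 = 32_768.0         # resonance frequency, Hz
k = 1800.0            # prong stiffness, N/m
force_gradient = 1.0  # dF/dz near the molecule, N/m (assumed)

delta_f = -(f0 / (2 * k)) * force_gradient
print(round(delta_f, 1))   # -9.1 Hz: the minuscule shift being measured
```

A shift of a few hertz against a carrier of tens of kilohertz is parts in ten thousand, which is why the vibration isolation and cooling described below are essential.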

Atomic force microscope (SPL)
The microscope must be kept under high vacuum and exceptionally cold

The measurement requires extremes of precision. In order to avoid the effects of stray gas molecules bouncing around, or the general atomic-scale jiggling that room-temperature objects experience, the whole setup has to be kept under high vacuum and at blisteringly cold temperatures.

However, the tip of the AFM’s prong is not well-defined and isn’t necessarily sharp on the scale of single atoms. The effect of this bluntness is to blur the instrument’s images.

The researchers have now hit on the idea of deliberately picking up just one small molecule – made of one atom of carbon and one of oxygen – with the AFM tip, forming the sharpest, most well-defined tip possible.

Their measurement of a pentacene molecule using this carbon monoxide tip shows the bonds between the carbon atoms in five linked rings, and even suggests the bonds to the hydrogen atoms at the molecule’s periphery.

Tip of the iceberg

Lead author of the research Leo Gross told BBC News that the group is aiming to combine their ability to measure individual charges with the new technique, characterising molecules at a truly unprecedented level of detail.

That will help in particular in the field of “molecular electronics”, a potential future for electronics in which individual molecules serve as switches and transistors.

Although the approach can trace out the ethereal bonds that connect atoms, it cannot distinguish between atoms of different types.

The team aims to use the new technique in tandem with a similar one known as scanning tunnelling microscopy – in which a tiny voltage is applied across the sample – to determine if the two methods in combination can deduce the nature of each atom in the AFM images.

That would help the entire field of chemistry, in particular the synthetic chemistry used for drug design.

The results are of wide interest to others who study the nano-world with similar instruments. For them, implementing the same approach is as simple as picking up one of these carbon monoxide molecules with their AFM before taking a measurement.



US probe captures Saturn equinox

saturn (Nasa/JPL/Space Science Institute)
At equinox, the rings turn edge-on to the Sun, reflecting almost no sunlight

Raw images of the moment Saturn reached its equinox have been beamed to Earth by the US Cassini spacecraft.

Scientists are studying the unprocessed pictures in search of new discoveries in the gas giant’s ring system.

Equinox is the moment when the Sun crosses a planet’s equator, making day and night the same length.

During this time, the Sun’s angle over Saturn is lowered, showing new objects and irregular structures as shadows on the otherwise flat plane of the rings.

Saturn’s orbit is so vast that equinox happens only once every 15 Earth years.

At the moment of equinox, the rings turn edge-on to the Sun and reflect almost no sunlight.

This is the first equinox since 1994 and the first time there has been an observer, in the shape of the joint US and European spacecraft, Cassini.

Saturn at equinox (Nasa/JPL/Space Science Institute)

In an email, Dr Carolyn Porco, leader of Cassini’s imaging team, said the long-awaited images did not disappoint: “Even a cursory examination of them reveals strange new phenomena we hadn’t fully anticipated.

“Over the next week or two, the [Cassini] imaging team will be poring over these precious gems to see what other surprises await us, and, as usual, we will announce what we have found as soon as we can.”

Cassini was launched in October 1997 from Florida’s Cape Canaveral Air Force Station. It arrived at Saturn in July 2004 to embark on a four-year mission of exploration around the planet and its moons.

The spacecraft is still operating well and has been re-programmed to carry out new tasks. Its current mission is to answer some of the questions raised by its earlier observations.


Minister warned over ‘UK Roswell’

Sketch of a UFO sighting
The files include sketches of UFOs drawn by witnesses

A former head of the military told the defence secretary that a UFO sighting dubbed Britain’s Roswell could be a “banana skin”, official files show.

In 1985 Lord Hill-Norton wrote to Michael Heseltine about the “Rendlesham incident” in 1980, when US airmen in Suffolk thought they saw an alien ship.

Either a craft entered UK airspace with “impunity” or US airmen were capable of a “serious misperception”, he wrote.

It is among the latest MoD files on UFOs released by the National Archives.

It is also revealed that UFO sightings soared in 1996 – the year Will Smith’s alien-invasion blockbuster Independence Day was released.

‘Puzzling and disquieting’

The “Rendlesham incident” involved American airmen from RAF Woodbridge who reported seeing mysterious lights.

Witnesses said a UFO was transmitting blue pulsating lights and sending nearby farm animals into a “frenzy”.

In 2003, ex-US security policeman Kevin Conde admitted that he and another airman had shone patrol car lights through the trees and made noises on the loudspeaker as a prank.

But in 1985, Lord Hill-Norton – a former chief of the defence staff and First Sea Lord – wrote to Mr Heseltine, the then-defence secretary, to express his feelings about the event.

In his letter, Lord Hill-Norton said he rejected the official MoD line that the case was of “no defence interest”, adding that it displayed “puzzling and disquieting features which have never been satisfactorily explained by your department”.

He said it was either the case that a piloted craft had entered and left UK airspace with “complete impunity” or “a sizeable number of USAF personnel at an important base in British territory are capable of serious misperception”.

Lord Hill-Norton added: “There seems to be a head of steam building up on this matter, and I can see a potential ‘banana skin’ looming.”

The documents form part of a three-year project by the MoD and the National Archives to publish the ministry’s UFO files on the National Archives website.

Other incidents recorded in the latest batch of documents, which cover the years 1981 to 1996, include:

• Two men from Staffordshire who told police that, as they returned home from an evening out in 1995, an alien appeared under a hovering UFO hoping to take them away

• More than 30 sightings of bright lights over central England during a six-hour period in 1993, which led to the assistant chief of defence staff being briefed – and turned out to be caused by a Russian rocket re-entering the atmosphere

• Several sightings in Bonnybridge, central Scotland, which became the UK’s UFO hotspot during the 1990s

• A UFO which was seen over the jazz stage at the Glastonbury Festival in June 1994. The two female witnesses reported that they turned to the people next to them to verify what they had seen but “they didn’t look hard enough or take it seriously”

It is also revealed that UFO sightings leapt from 117 in 1995 to 609 in 1996 – the year that Will Smith’s alien-invasion blockbuster Independence Day was released and the alien conspiracy series The X Files was at the height of its popularity with UK audiences.

Dr David Clarke, a UFO expert and journalism lecturer at Sheffield Hallam University, said it was significant that one of the biggest years for reports previously had been 1978, which saw 750 – at the same time that Steven Spielberg’s blockbuster Close Encounters of the Third Kind was released.

He added: “Obviously, films and TV programmes raise awareness of UFOs and it’s fascinating to see how that appears to lead more people to report what they see.

“In the 1950s you have UFOs with flashing dials like in the B-movies of the time, and the aliens tend to come from Venus and Mars – that stops from the late ’60s when we find out how inhospitable these places are.

“From the mid-1980s you start to see triangular-shaped objects – this is the era of US stealth aircraft. I think it’s clear that people see what they expect to see.”


See also:

UFO sightings revealed in archives


As important as Darwin

In praise of astronomy, the most revolutionary of sciences

FOUR hundred years ago our understanding of the universe changed for ever. On August 25th 1609 an Italian mathematician called Galileo Galilei demonstrated his newly constructed telescope to the merchants of Venice. Shortly afterwards he turned it on the skies. He saw mountains casting shadows on the moon and realised this body was a world, like the Earth, endowed with complicated terrain. He saw the moons of Jupiter—objects that circled another heavenly body in direct disobedience of the church’s teaching. He saw the moonlike phases of Venus, indicating that this planet circled the sun, not the Earth, in even greater disobedience of the priests. He saw sunspots, demonstrating that the sun itself was not the perfect orb demanded by the Greek cosmology that had been adopted by the church. But he also saw something else, a thing that is often now forgotten. He saw that the Milky Way, that cloudy streak across the sky, is made of stars.

That observation was the first hint that, not only is the Earth not the centre of things, but those things are vastly, almost incomprehensibly, bigger than people up until that date had dreamed. And they have been getting bigger, and also older, ever since. Astronomers’ latest estimates put the age of the universe at about 13.7 billion years. That is three times as long as the Earth has existed and about 100,000 times the lifespan of modern humanity as a species. The true size of the universe is still unknown. Its age, and the finite speed of light, means no astronomer can look beyond a distance of 13.7 billion light-years. But it is probably bigger than that.

Nor does reality necessarily end with this universe. Physics, astronomy’s dutiful daughter, suggests that the object that people call the universe, vast though it is, may be just one of an indefinite number of similar structures, governed by slightly different rules from each other, that inhabit what is referred to, for want of a better term, as the multiverse.

Star trek

The shattering of the crystal spheres which Galileo’s contemporaries thought held the planets and the stars, with the sphere containing the stars representing the edge of the universe, is (along with Darwin’s discovery of evolution by natural selection) the biggest revolution in self-knowledge that mankind has undergone. The world that Galileo was born into was of comprehensible compass. The Greeks had a fair idea of the size of the Earth and the distance to the moon, and so did the medievals who read their work. But these were distances that the imagination might, at a stretch, embrace. And it was easier to believe that a human-sized universe was one that might have been brought into being with humanity in mind. It is harder, though, to argue that the modern version of cosmology, let alone any hypothetical one which is multiversal rather than universal, has come about for mankind’s convenience.

Four centuries on, it is difficult to think of Galileo’s intellectual heirs, meeting this week in Rio de Janeiro under the auspices of the International Astronomical Union, as firebrand revolutionaries. Yet their discoveries—from planets around other stars that may support alien life, to dark matter and energy of unknown nature that are the dominant stuff of reality—are no less world-changing than his. Moderns may be more comfortable than medievals with the idea that man’s notion of his place within the universe can suddenly change. That should not blind them to the wonder of it.


Kepler’s world

Celebrating the work of a neglected astronomer

 Kepler moved it elliptically

MUCH has been made of the 400th anniversary this year of Galileo pointing a telescope at the moon and jotting down what he saw (even though this had previously been accomplished by an Englishman, Thomas Harriot, using a Dutch telescope). But 2009 is also the 400th anniversary of the publication by Johannes Kepler, a German mathematician and astronomer, of “Astronomia Nova”. This was a treatise that contained an account of his discovery of how the planets move around the sun, correcting Copernicus’s own more famous but incorrectly formulated description of the solar system and establishing the laws for planetary motion on which Isaac Newton based his work.

Four centuries ago the received wisdom was that of Aristotle, who asserted that the Earth was the centre of the universe, and that it was encircled by the spheres of the moon, the sun, the planets and the stars beyond them. Copernicus had noticed inconsistencies in this theory and had placed the sun at the centre, with the Earth and the other planets travelling around the sun.

Some six decades later when Kepler tackled the motion of Mars, he proposed a number of geometric models, checking his results against the position of the planet as recorded by his boss, Tycho Brahe, a Danish astronomer. Kepler repeatedly found that his model failed to predict the correct position of the planet. He altered it and, in so doing, created first egg-shaped “orbits” (he coined the term) and, finally, an ellipse with the sun placed at one focus. Kepler went on to show that an elliptical orbit is sufficient to explain the movement of the other planets and to devise the laws of planetary motion that Newton built on.

A.E.L. Davis, of Imperial College, London, this week told astronomers and historians at the International Astronomical Union meeting in Rio de Janeiro that it was the rotation of the sun, as seen by Galileo and Harriot as they watched sunspots moving across its surface, that provided Kepler with what he thought was one of the causes of the planetary motion that his laws described, although his reasoning would today be considered entirely wrong.

In 1609 astronomy and astrology were seen as intimately related; mathematics and natural philosophy, meanwhile, were quite separate areas of endeavour. Kepler, however, sought physical mechanisms to explain his mathematical result. He wanted to know how it could be that the planets orbited the sun. Once he learned that the sun rotated, he comforted himself with the thought that the sun’s rays must somehow sweep the planets around it while a quasi magnetism accounted for the exact elliptical path. (Newton did not propose his theory of gravity for almost another 80 years.) As today’s astronomers struggle to determine whether they can learn from the past, Kepler’s tale provides a salutary reminder that only some explanations stand the test of time.


The next blue planet

The race is on to discover a second Earth

IN 1995, when Michel Mayor of the University of Geneva detected the first exoplanet (a planet that orbits a star other than the sun), he started a race that has gained pace ever since. Some 360 such planets have now been detected, but none is exactly equivalent to the Earth.

The closest so far is Gliese 581c, which was discovered in 2007 by Dr Mayor’s colleague, Stéphane Udry. It is both rocky and orbits its parent star at a distance where liquid water could reasonably be expected to exist. However, since its parent star is a red dwarf—a far smaller and fainter object than the sun—that orbit is, in fact, much smaller than the Earth’s around the sun. That, in turn, suggests Gliese 581c is likely to be tidally locked, its rotation synchronised with its orbital period, so that one side of the planet always faces the star and the other never does. Having half a planet in permanent daylight and the other half in permanent darkness does not sound like a good recipe for life.

As astronomers heard this week at the International Astronomical Union meeting in Rio, two new missions—a French one launched in December 2006 and an American one launched on March 6th—are in the process of trying to add to the list. Dr Mayor told the meeting that the French mission, CoRoT, has now found 80 exoplanets. It does so by watching for small diminutions in the amount of light from a star as the planet in question passes in front of it, a phenomenon known technically as a transit. The details of all but seven of these transiting planets are still unpublished, but Dr Mayor gave the meeting a preview.

The planets discovered so far by CoRoT typically have a mass that is less than 30 times that of Earth, making them likely to have a solid, rocky surface. But they also orbit their stars rapidly, typically taking two or three months, rather than a year, to do so. For those who hanker after extraterrestrial life that is a pity. Such rapid orbits mean the planets in question are close to their parent stars, and thus likely to be tidally locked.

Other news from CoRoT is better, though. Some 80% of the planets Dr Mayor has found have siblings. The existence of so many neighbours suggests that planetary systems tend to be stable, and stability is good for the evolution of life. Dr Mayor described a system he has seen that has five rocky planets in it. They have masses of 11, 14, 26, 27 and 76 times that of the Earth. He concluded his talk by saying, “I am really confident that we have an Earth-like planet coming in the next two years.”

He and his team may, however, be pipped at the post. On August 6th America’s space agency, NASA, announced that its Kepler planet-detector (named after the man who worked out the laws of planetary motion) is also behaving well. A paper published in Science by William Borucki of the NASA Ames Research Centre based in Moffett Field, California, and his colleagues showed that Kepler, which also uses the transit-detection technique, has confirmed the existence of a Jupiter-like planet discovered in 2007 and provided more precise details of that planet’s mass and orbital period. And Kepler’s instruments are more sensitive than CoRoT’s, so it should be capable of finding Earth-sized planets more easily than its French cousin.

Yet such space probes are not the only way of searching for other Earths. As part of his efforts to find new worlds, Dr Mayor is using the HARPS spectrograph, which is based at the European Southern Observatory in La Silla, Chile. He and his colleagues are training HARPS on ten nearby, bright and quiet stars three times a night, for 15 minutes at a time, for 50 nights a year, for at least two years, in the hope of spotting a nearby Earth-sized planet. The device works by detecting the tiny wobble that an orbiting planet imparts to its parent star. The spectrograph has already found 16 planets.

Meanwhile, David Bennett of the University of Notre Dame in Indiana wants to use a technique called gravitational microlensing to spot planets that might be missed by other methods. He told the conference that his approach would pick up not only small rocky planets orbiting at great distances from their parent stars, but also planets that had been ejected from their orbits. The idea would be to stare at a distant star and report instances when its light had been bent by the gravity of a planet passing in front of it. Such signals would be brief and rare, but they would also be strong and unmistakable. Sooner or later, then, an Earth-sized planet will turn up. How Earth-like it will be in other ways remains to be seen.


Time-Traveling for Dummies

You might say we’re living in a golden age of time travel. From television shows like Heroes, Lost, and Flash Forward to this summer’s Star Trek movie, punctures in the space/time continuum are turning up all around us. As a physicist—and, perhaps redundantly, a science-fiction geek—I’m particularly sensitive to the pleasures of these mind-bending narratives. I’m also sensitive to their flaws. Most fictional accounts of time travel are rife with paradoxes, parallel universes, and plot holes that violate strict physical laws: Instead of exploring the limits of our understanding, they make a mockery of them.

That’s why I’m so excited about the film adaptation of Audrey Niffenegger’s The Time Traveler’s Wife, which tells the story of Henry DeTamble, a man with a rare genetic disorder that causes him to skip around in time while his long-suffering wife, Clare, waits for him at home. The premise is no more or less plausible than that of, say, Back to the Future, in which a tricked-out DeLorean must reach 88 mph to jump into the past. But The Time Traveler’s Wife follows through on its premise in a realistic way.

The notion that one version of time travel is more accurate than another might seem ridiculous on its surface, but physicists actually have rather a lot to say about how time travel should work. Some, in their more fanciful moments, have even devised ways to exploit Einstein’s theory of general relativity to come up with “practical” models of time machines. Kip Thorne, in Black Holes and Time Warps, describes how wormholes can be successfully used to travel back in time, while in Time Travel in Einstein’s Universe, J. Richard Gott does the same with gargantuan cosmic strings—threadlike concentrations of matter of almost unimaginable density and length—moving at close to the speed of light.

It’s not fair to demand exact design specifications for a fictional time machine, but even if we sweep the technical details under the TARDIS, time-travel narratives ought to still abide by a few fundamental ground rules. Here’s my list of the most important principles of time travel, real or imagined.

1) This is the only universe you’ve got.

In 1957, physicist Hugh Everett proposed what has become known as the “many worlds” interpretation of quantum mechanics. Quantum mechanics was one of the great breakthroughs of the 20th century, and it predicted, among much else, that the motions of electrons and other small particles are fundamentally random. Everett, then at the Pentagon, wondered whether the universe wasn’t branching off into two nearly identical copies each time one of these random events occurred. Since there are lots of particles in the universe and they move around and interact very quickly, these parallel universes would multiply almost without limit.

The many-worlds interpretation provides a fertile basis for time fiction, via the ubiquitous Back to the Future model of alternate histories: Someone visits the past, teaches his father about believing in himself (or some similar nonsense), and thereby leaps into a parallel universe with a different (ideally, better) future than the first.

For a physicist, though, there’s no reason to believe that a Back to the Future-style time machine is possible. For one, there’s no evidence that Everett’s parallel universes exist. (There’s no direct evidence against them, either.) More importantly, Einstein’s theory of general relativity—the branch of physics that might make time travel possible—doesn’t take kindly to the idea. Every solution to Einstein’s equations involves just a single universe. Maybe I’m being overly dogmatic, but I don’t think it’s unfair to insist that movies stay within the realm of what we currently know about how physics works.

In a rule-abiding time-travel narrative, there are no parallel universes—just a single timeline. The Time Traveler’s Wife follows this rule to a T, and there is a significant online presence dedicated to diagramming the unique, entangled history of Henry and Clare.

2) You can’t visit any time before your time machine was built.

According to Einstein’s picture of the universe, space and time are curved and very closely related to each other. This means that traveling through time would be much like traveling through a tunnel in space—in which case you’d need both an entrance and an exit. As a time traveler, you can’t visit an era unless there’s already a time machine when you get there—an off-ramp. This helps explain why we’re not visited by time-traveling tourists from our own future. Futuristic humans don’t drop in for dinner because we haven’t yet invented time travel.

The time-machine construction clause is one of the most often overlooked of the rules of time travel and is the only real mar on the otherwise exceptional Terminator (1984), which proposes a single historical line (or loop) with no alternate universes. (Subsequent movies in the series revert to the parallel-histories model.) The Time Traveler’s Wife very nearly gets it right: Since Henry is the time machine, he can’t visit any time before he was born. His daughter, on the other hand, bends those rules slightly: She manages to visit a time before her own birth but not so far back that her father hasn’t been born, either. (We might take Henry’s birth as the “invention” of time travel and the whole family as components of a single machine.)

3) You can’t kill your own grandfather.

Suppose you’ve inherited a time machine from your grandfather. Presumably, you could pop back for a visit to thank him and/or commit retro-grand-patricide, couldn’t you? Not so fast. To make the logic blindingly obvious: if you kill your grandfather, then you won’t have been born, which means you couldn’t have killed your grandfather, which (logically) means that you will be born.

If history is to have any consistency in a world with time travel, then the “grandfather paradox” (so named by writer René Barjavel) must be resolved. Physicists had little to say on this topic until the mid-1980s, when Igor Novikov of the University of Moscow used quantum mechanical arguments to develop what has become known as the “self-consistency theorem.” Quantum randomness must obey well-established laws, and Novikov showed that the probability of producing a different future with a time machine was zero. To put it more simply: You cannot alter history in any way that changes it from what it always was.

So, try as you might, you can’t kill your own grandfather, nor can you change history at all. The Terminator learned this the hard way, going back in time to prevent John Connor’s birth by killing his mother. When a human travels back in time to protect her, the two fall in love—and she becomes pregnant with … John Connor. Ta-da.

There’s no need for such finagling in The Time Traveler’s Wife. Since Henry DeTamble serves as his own time machine, there’s little chance of his preventing his own birth. Cf. rule No. 2.

4) You don’t have nearly as much free will as you think you do.

Novikov’s theorem can feel somewhat unsatisfying. As Kip Thorne writes, “something has to stay your hand as you try to kill your grandmother. What? How? The answer (if there is one) is far from obvious, since it entails the free will of human beings.” The concept may be easier to grasp if you think about it in terms of inanimate objects: Imagine you shot a pool ball into a time machine and it emerged a moment before you made the shot. Now suppose that you aimed the shot just right, so the outgoing ball (your ball, a second earlier) would block the original shot and prevent it from going into the time machine. This paradox, proposed by Joe Polchinski, then at the University of Texas, turns out to be the same as the grandfather paradox, albeit with less profound implications.

Kip Thorne and his students worked out what looks like a compromise solution for the impossible pool shot. They argue that you’d line up your shot exactly, but as your ball headed toward the time machine, another one would fly out at a slight angle and graze its side. The first ball would still travel into the time machine but at a slightly different angle than you’d intended. Then it would come back out of the machine, a moment earlier, at the same barely skewed angle—and the Terminator-style loop would be complete.

If pool balls can be forced to succumb to their destiny, so can people. This fact is very easy to ignore if you don’t know what the future will bring; it certainly seems like you’ve got free choice. But if you’ve already seen what your destiny is, then the future is already written. Making that self-consistent future play out is one of the great challenges of time-travel fiction.

In The Time Traveler’s Wife, Henry and Clare enforce the (predetermined) future by giving each other instructions and hints about how things are supposed to happen. That gives them a feeling of free choice where none really exists. In a letter to Clare about their future, Henry explains, “I won’t tell you any more, so you can imagine it, so you can have it unrehearsed when the time comes, as it will, as it does come.”

Dave Goldberg is an associate professor of physics at Drexel University and author, with Jeff Blomquist, of the upcoming A User’s Guide to the Universe: Surviving the Perils of Black Holes, Time Paradoxes and Quantum Uncertainty, to be published in March 2010.



Early Humans Used Heat-Treated Stone for Tools

Tool manufacturers know that sometimes you have to heat-treat a material to make it harder or stronger.

Ancient toolmakers learned that trick, too. And archaeological research from South Africa pushes back the date of the earliest use of heat treatment at least 45,000 years, to more than 70,000 years ago.

Kyle S. Brown, a doctoral student at the University of Cape Town, and colleagues report finding stone tools that show signs of having been heated to about 600 degrees Fahrenheit. Heat-treating in this way, most likely by burying the stone beneath a fire, made it easier to knap, or shape into a tool by striking with another stone.

Archaeologists were studying several sites on the South African coast, with artifacts dating from 72,000 to 164,000 years ago that would have been made by modern humans from the African Middle Stone Age. Mr. Brown, an archaeological knapper who tries to replicate ancient tools, said they noticed that blades found at the site, made from a stone called silcrete, did not match silcrete obtained from outcroppings in the area. “We realized we were missing something,” he said.

They experimented by heat-treating some of the stone themselves. “When we pulled it out of the fire and flaked it, it did look like the kind of stone we were finding at our site,” Mr. Brown said. Their findings are published in Science.

The researchers had to show that the tools they found were intentionally heated to improve workability, not accidentally through a bushfire or other means. They found tools in areas where there was no evidence of burning. And they conducted tests on some of the artifacts, including one that showed that flaked surfaces had a glossiness that occurs only when the stone has been heated, proving that the stones were heated first and then worked into tools.

Mr. Brown said that the consensus among archaeologists had been that systematic heat treatment first occurred in Europe about 25,000 years ago. The current work, he added, “is almost indisputable evidence” for heat treatment 72,000 years ago, and perhaps as early as 164,000 years ago, although researchers need more samples from the earlier period to be sure.



Some Creative Destruction on a Cosmic Scale

Scientists Say Asteroid Blasts, Once Thought Apocalyptic, Fostered Life on Earth by Carrying Water and Protective Greenhouse Gas

In a paradox of creation, new evidence suggests that devastating avalanches of cosmic debris may have fostered life on Earth, not annihilated it. If so, life on our planet may be older than scientists previously thought — and more persistent.

Astronomers world-wide have been transfixed by a roiling gash the size of Earth in the atmosphere of Jupiter, caused by an errant comet or asteroid that smashed into the gas giant last month. The lingering turbulence is an echo of a cataclysmic bombardment that shaped the origin of life here 3.9 billion years ago, when millions of asteroids, comets and meteors pummeled our planet.

Known as the Late Heavy Bombardment, these intense showers of rubble created conditions so hellish that scientists named this opening chapter of Earth’s formation the Hadean era, after classical visions of the underworld and the realm of the dead. “The impact that killed the dinosaurs was just a firecracker compared to the impacts during this bombardment,” says planetary scientist Oleg Abramov at the University of Colorado at Boulder. “If you were standing on the surface, you would have been vaporized.”

Until recently, many researchers thought that this rain of rocks, lasting 20 million years or more, almost certainly wiped out early life on Earth — perhaps more than once. No one knows. The earliest known traces of life belong to a period shortly after the asteroid showers slackened. “The idea was that we were hit so many times and so hard that, if there had been any life forming then, it would have been wiped out and required to rise again,” says astrobiologist Lynn Rothschild at NASA’s Ames Research Center in Mountain View, Calif.

But in their super-heated plunge through the atmosphere, these asteroids and meteors may have helped create conditions ideal for emerging life. “Everyone focuses on the meteor that hits the ground,” says geochemist Richard Court at London’s Imperial College. “No one thinks about the products of its journey that get pumped into the atmosphere.”

As they vented, they collectively could have imported billions of tons of life-sustaining water into the air every year, Dr. Court and his colleague Mark Sephton recently determined. They calculated that these showers of volatile rocks delivered 10 times the daily outflow of the Mississippi River every year for 20 million years. By analyzing the fumes emitted under such extreme heat, they discovered these rocks also could have injected billions of tons of carbon dioxide into the air every year.

Combined with so much water vapor, the carbon dioxide could have induced a global greenhouse effect. That could have kept any life emerging on Earth safely in a planetary incubator at a time when the planet might easily have frozen because the Sun radiated 25% less energy than today. “The amount of CO2 that was produced is about the same we produce today through fossil fuel use and we know that is a climate-changing volume,” says Dr. Court.

They analyzed gases emitted by 12 meteorites of the sort believed to have hit during the bombardment using a new laboratory technique called pyrolysis-FTIR, which can instantly heat samples to 1,000 degrees Celsius. They found that, on average, each meteorite could release up to 12% of its mass as water vapor and 6% as carbon dioxide.
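Taken at face value, those upper-bound fractions let one estimate the volatiles a single impactor could release; the yield simply scales with the rock’s mass. A minimal sketch of that arithmetic (the 500 kg sample mass is purely hypothetical, chosen only for illustration):

```python
# Upper-bound volatile fractions reported from the pyrolysis-FTIR tests
WATER_FRACTION = 0.12  # up to 12% of a meteorite's mass released as water vapour
CO2_FRACTION = 0.06    # up to 6% released as carbon dioxide

def volatile_yield(mass_kg):
    """Return (water_kg, co2_kg) released by a meteorite of the given mass,
    assuming the maximum reported fractions."""
    return mass_kg * WATER_FRACTION, mass_kg * CO2_FRACTION

water, co2 = volatile_yield(500.0)  # hypothetical 500 kg meteorite
print(water, co2)  # roughly 60 kg of water vapour and 30 kg of CO2
```

Multiplied across the millions of impactors of the Late Heavy Bombardment, fractions of that size are what lead to the billions of tons per year quoted above.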

To study so many ruinous impacts, Dr. Abramov and Stephen Mojzsis at the University of Colorado developed a global computer simulation to gauge temperatures beneath individual impact craters — some caused by asteroids 50 miles or more in diameter.

By their calculations, our planet may have fared better than expected. Less than 25% of Earth’s crust would have melted during such a bombardment. “What we find is that under no circumstances can we sterilize the Earth during the bombardment,” says Dr. Mojzsis. “The surface zone was certainly sterile, but that is not where all life is.”

In fact, evolving microbes of the sort considered ancestral to all life forms today may have flourished underground in water heated by the impacts. Such habitable havens actually expanded during the bombardment, the computer simulation showed. Microbes able to live at temperatures ranging from 175 degrees to 230 degrees Fahrenheit could have survived unscathed. Some bacteria today thrive in even hotter water, such as those in hydrothermal vents at Yellowstone National Park.

No one knows what caused the bombardment. Nothing like it has happened since. But a controversial new perspective on orbital mechanics and the formation of the solar system suggests that Jupiter may have been partly responsible. The theory was developed by physicists at the Observatoire de la Côte d’Azur in France, the Universidade Federal do Rio de Janeiro in Brazil and the Southwest Research Institute in Boulder, Colo.

In their hypothesis, Neptune and Uranus originally orbited much closer to the Sun. Starting about four billion years ago, Jupiter and Saturn pushed them into the more distant orbits they follow today. Indeed, Neptune may have started closer to the Sun than Uranus, but ended up farther away. Disrupting the gravitational balance, these huge planets triggered a shotgun blast of planetary buckshot so violent that Mars, Mercury and the Moon still bear its scars.

“It is literally a revolution in our ideas about how our solar system evolved,” says asteroid expert William Bottke at the Southwest Research Institute. “It could be that our form of life today — every living thing that we see today — is due to this bombardment that happened 3.9 billion years ago.”

Earth still speeds through fields of rubble and star dust. This past week, the annual Perseid meteor shower peppered the planet with hundreds of meteors per hour. Every year, 40,000 tons or so of extraterrestrial dust and debris falls on Earth — a sprinkle compared with the millions of rocks still sheltered in the Asteroid Belt or the more distant Kuiper Belt and Oort Cloud.

“The object that hit Jupiter is not out of the ordinary for what we currently have in the solar system,” says Amy Simon-Miller, chief of the planetary systems laboratory at NASA’s Goddard Space Flight Center in Maryland. “From our perspective, the ones we worry about are the ones that cross the path of Earth.”

So far, astronomers have discovered 784 asteroids a half mile or so in diameter that intersect Earth’s orbit. They are tracking thousands of smaller ones and are searching for more. Despite close calls and false alarms, none of them so far threaten Earth, says comet expert Donald Yeomans, manager of NASA’s Near-Earth Object Program Office.

In a report released Wednesday, a panel of experts convened by the U.S. National Research Council warned that Earth could still be blindsided. They are studying ways to safely deflect any that do come too close.

In this game of orbital roulette, Dr. Yeomans does have his eye on one large near-Earth asteroid called Apophis. On its next close approach past Earth, there is a 1-in-45,000 chance that the interplay of gravitational forces could nudge it onto a potential collision course.

“In the unlikely event that happens, it will come back and hit us on April 13, 2036,” Dr. Yeomans says. “That’s Easter Sunday.”

Robert Lee Holz, Wall Street Journal



New planet displays exotic orbit

Artist's impression of an exoplanet (Nasa/JPL-Caltech)
Planets with retrograde orbits should be rare
Astronomers have discovered the first planet that orbits in the opposite direction to the spin of its star.

Planets form out of the same swirling gas cloud that creates a star, so they are expected to orbit in the same direction that the star rotates.

The new planet is thought to have been flung into its “retrograde” orbit by a close encounter with either another planet or with a passing star.

The work has been submitted to the Astrophysical Journal for publication.

Co-author Coel Hellier, from Keele University in Staffordshire, UK, said planets with retrograde orbits were thought to be rare.

“With everything [in the star system] swirling around the same way and the star spinning the same way, you have to do quite a lot to it to make it go in the opposite direction,” he told BBC News.

The direction of orbit is known for roughly a dozen exoplanets (planets outside our solar system). This is the only example with a retrograde orbit. All others are prograde; they orbit in the same direction as the spin of their star.

Close encounters

Professor Hellier said a near-collision was probably responsible for this planet’s unusual orbit.

“If you have a near-collision, then you’ll have a large gravitational slingshot from that interaction,” he explained.

“This is the likeliest explanation. But it might be possible you can do it by gradually perturbing the orbit through the influence of a second planet. So far, we haven’t found any evidence of a second planet there.”

The new object has been named WASP-17b. It is the 17th exoplanet to have been discovered by the Wide Angle Search for Planets (WASP) consortium of UK universities.

The gas giant is about twice the size of Jupiter, but has about half the mass.
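Those two figures imply a strikingly low density: reading “twice the size” as twice the radius (an assumption), doubling the radius multiplies the volume by eight, so halving the mass at the same time leaves about one sixteenth of Jupiter’s density. A quick check of that arithmetic:

```python
def relative_density(radius_ratio, mass_ratio):
    """Density relative to Jupiter, given radius and mass as
    multiples of Jupiter's; density scales as mass / radius**3."""
    return mass_ratio / radius_ratio ** 3

d = relative_density(2.0, 0.5)  # about twice Jupiter's radius, half its mass
print(d)  # 0.0625 -> about 1/16 of Jupiter's density
```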

WASP-17b was detected using an array of cameras set up to monitor hundreds of thousands of stars.

Astronomers were searching for small dips in light from these stars that occur when a planet passes in front of them. When this happens, the planets are said to transit their parent star.

A team from Geneva Observatory in Switzerland then looked for spectral signs that the star was wobbling due to gravitational tugs from an orbiting planet.

“If you look at how the spectrum of the star changes when the planet transits across it, you can work out which way the planet is travelling,” Professor Hellier added.

“That allows you to prove that it’s in a retrograde orbit.”

The size of the dip in light from the star during the transit allowed astronomers to work out the planet’s radius.
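The relation behind that step is standard for transit surveys: to a first approximation, the fractional dip in starlight equals the square of the planet-to-star radius ratio, so the planet’s radius follows from the measured depth and the star’s known size. A rough sketch (the 1% depth is an illustrative value, not WASP-17b’s measured one):

```python
import math

def planet_radius_ratio(transit_depth):
    """Given the fractional dip in starlight during a transit, return Rp/Rstar.

    Assumes the simplest model, depth = (Rp/Rstar)**2, ignoring
    limb darkening and grazing geometry.
    """
    return math.sqrt(transit_depth)

ratio = planet_radius_ratio(0.01)  # a 1% dip in brightness (illustrative)
print(ratio)  # about 0.1: the planet's radius is a tenth of the star's
```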

To work out how massive it was, they recorded the motion of the star as it was tugged on by the orbiting planet.



A saintly light

Why would a lightning-struck tree glow after being hit?

It was a dark and stormy night… I wasn’t there but Chris e-mails that he was walking in the woods towards dusk “a little after a thunderstorm” when he noticed the tree. One side of the tree, shattered by an earlier lightning stroke, stabbed the night like a broken pike. An eerie glow extended upward from the tree like tapered torchlight from the tip. But there was no heat. It was not on fire. Chris could clearly see the glow from about six feet away. The phenomenon persisted for many minutes.

If our atmosphere were made of neon, the tree would have glowed orangey-red, like a neon sign. Instead, the tree tips lit blue-violet, as electrons bombarded nitrogen and oxygen atoms of the air, exciting them and causing the air to glow.

Lucky Chris. Few landlubbers witness St. Elmo’s fire in nature. It’s a more common sight among sailors, who may give thanks to St. Elmo, patron saint of sailors, when they see blue fire off mast tips. They’ve survived a storm.

St. Elmo’s fire is a self-sustaining, continuous spark of electricity, called a coronal discharge.

It’s a “very, very slow spark, but going out in all directions,” e-mails physicist Erik Ramberg of Fermilab. When voltage gradients are just right, a clear ionization path doesn’t form between the charged object and ground potential. Instead, the electrons “essentially meander out of the electrode.”


St. Elmo’s-like fire (corona discharge) glowing from a 60,000-volt Tesla coil. Nikola Tesla was an inventor who, along with George Westinghouse, brought us alternating-current power distribution.

We produce St. Elmo’s fire routinely in a neon tube or a copy machine. And you can generate the phenomenon yourself.

On a dry day, put on a pair of rubber-soled shoes. Scuffle across an acrylic carpet. The friction of your shoes generates negative charge and raises your voltage to at least 3000 volts. When you move your hand closer than a tenth of a centimetre to a doorknob, the voltage gradient (which is a measure of the electric field) is greater than 30,000 volts per centimetre. This is the field strength that breaks down air. Zap! A spark discharges your negative charge through ionized air to the doorknob.
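The figure quoted above is just voltage divided by gap distance. A quick check of the arithmetic, using the values given in the text:

```python
def field_gradient(voltage_v, gap_cm):
    """Electric field in V/cm across a small air gap: E = V / d."""
    return voltage_v / gap_cm

BREAKDOWN_FIELD = 30_000  # V/cm, approximate breakdown strength of dry air

e_field = field_gradient(3000, 0.1)  # 3000 volts across a 0.1 cm gap
print(e_field)                        # 30000.0 V/cm
print(e_field >= BREAKDOWN_FIELD)     # enough to ionize the air and spark
```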

But that’s not St. Elmo’s fire because the spark wasn’t sustained. Scuffling across the carpet, however, may not generate enough charge to do the job, especially if the air is too humid. You’ll probably need a Tesla coil to generate high enough voltages. If you, like physicist Louis Bloomfield of the University of Virginia, built one in your garage, you’re all set. The figure shows the results – St. Elmo’s fire glowing from the sharp point. The action of the coil created a charge (for example, negative) and raised the voltage of the pin to a high value (60,000 volts). Like charges repel. So, the negative charges tried to get away from each other and those on the pin tended to crowd into the point of the pin, where they were blocked from further travel by the air gap.

The concentration of negative charge in the small space at the point caused the electric field to increase locally. So the voltage gradient and the electric field are strong – easily 30,000 V/cm just off the pinpoint.

The strong electric-field force at the pinpoint strips electrons from the air molecules and accelerates the electrons and the positively charged ions away from each other. The result (called a plasma) is a gas-like mixture of positively charged atoms (ions) and electrons that conducts electricity. The process of separating charges is called ionization.

The negative charge conducts through the plasma and eventually to ground in a gradual manner (we’ll discuss how in a moment). The discharge process imparts energy to the air molecules and atoms, which causes them to glow.

The air only in the immediate vicinity of the pinpoint turns into plasma because the charge is intensely concentrated only there. Outside a small radius about the pinpoint the charges are too widely spaced to create an electric field gradient large enough to ionize the gas.

The spark to ground (St. Elmo’s fire) persists, because the ionization process controls its own radius of ionization with a sort of negative feedback. If the field gets too large at the pinpoint and creates too much charge, the excess of free charges piles around the pinpoint. The free charges around the point essentially enlarge the conducting pinpoint because the charges can conduct electricity, too. Now the shape of the pinpoint conductor is less sharp and the resulting electric field weakens, which slows the production of new free charges.

This self-correcting generation of charges (too much charge lessens the field, which lessens the generation of charge) provides a nice, steady supply of charge in the air near the pinpoint, which sustains the ionization and the resulting glow.

Chris saw St. Elmo’s fire emanating from the lightning-struck tree trunk. The violence of the thunderstorm created a strong electric field acting on the air – strong enough to generate St. Elmo’s fire off the sharp points of the shattered tree trunk.

“I suspect,” e-mails physicist Louis A. Bloomfield of the University of Virginia, “that one role of the lightning strike was to carbonize parts of the tree,” which then became electrically conducting. Carbon is an electrical conductor. When lightning struck the tree it formed a carbon track through the wood. The electrically conducting tree standing in the strong electric field of the thunderstorm was really like a “giant pin staring up at the sky and forming corona discharges.”

The ground below the storm was still electrically charged and a high voltage existed between the negatively-charged cloud overhead and positively-charged ground. The tree, rooted in the damp ground and electrically conducting, became positively charged. The shattered ends of the tree provided sharp points. The positive charges of the tree crowded into the tips of the shattered tree points, as like charges repelled each other and spread out.

The electric field, consequently, was extremely strong off the tips, strong enough to ionize the air there and provide a conducting path toward the negatively-charged cloud.

“The tree is sending an electric current upwards towards the storm [cloud],” e-mails research engineer William Beaty of the University of Washington. “And since this current is made from positive-charged air, there’d also be a slight wind blowing upwards off the glowing ‘flames’ on the broken wood. Sometimes electric currents occur as actual moving substances.”

The sharp-pointed tree tips enabled a self-sustaining spark that lasted many minutes. Long enough for Chris to see an eerie glow.

I am indebted to physicist Louis A. Bloomfield of the University of Virginia for his discussion of how to generate an electric field by scuffling your feet and then using a pin to create a spark.

Further Reading:

  • Phosphor glows: Why, What, How long, WonderQuest, 2003
  • Why electrons don’t crash into the nucleus, WonderQuest, 2003
  • How atoms got their energy, WonderQuest, 2006
  • How charge flows through metals, WonderQuest, 2006
  • Science Hobbyist by Bill Beaty
  • What causes the strange glow known as St. Elmo’s Fire? Is this phenomenon related to ball lightning? by William Beaty, Scientific American, September 22, 1997
  • How Everything Works: Making Physics Out of the Ordinary, by Louis Bloomfield (Wiley, 2008).
  • The Fire Of St. Elmo by Keith C. Heidorn, The Weather Doctor
  • Conceptual Physics, by Paul G. Hewitt (Addison Wesley, 2008)


    Skywatchers set for meteor shower


    The Perseids occur when the Earth passes through dusty cometary debris

    Skygazers are getting ready to watch the annual Perseid meteor shower, which peaks on Wednesday.

    The Perseid shower occurs when the Earth passes through a stream of dusty debris from the comet Swift-Tuttle.

    As this cometary “grit” strikes our atmosphere, it burns up, often creating streaks of light across the sky.

    This impressive spectacle appears to originate from a point called a “radiant” in the constellation of Perseus – hence the name Perseid.

    “Earth passes through the densest part of the debris stream sometime on 12 August. Then, you could see dozens of meteors per hour,” said Bill Cooke of Nasa’s meteoroid environment office.

No special equipment is required to watch the sky show. Astronomers say binoculars might help, but they will also restrict the view to a small part of the sky.

    The Perseids can appear in any part of the sky, but their tails all point back to the radiant in the constellation Perseus.

In the UK, the best times to see the Perseids are likely to be on the morning of 12 August before dawn and from late evening on the 12th through to the early hours of 13 August.

    This year, light from the last quarter Moon will interfere significantly with the view.

    The rock and dust fragments which cause the shower were left behind by Comet Swift-Tuttle when it last came near the Sun.

    The comet orbits the Sun once every 130 years and last swept through the inner Solar System in 1992.



    Techs and the city

    Lab by lab in and around San Francisco


    SAN FRANCISCO conjures up images of hippies and of free love, the psychedelic 60s and leftist politics. A member of Jefferson Airplane, a rock band, described it as “49 square miles surrounded by reality”. It has always had that air. In a letter written in 1889, Rudyard Kipling wrote of “a mad city, inhabited for the most part by perfectly insane people.”

    But as someone who writes about science (and in the interests of full disclosure, practices it for a living), I see a different side of San Francisco and the broader Bay Area around it. I don’t see a region full of people looking to escape reality; I see scientists and engineers at universities, companies, and national labs probing and investigating that reality on a daily basis. Instead of mind-altering drugs, I see the world-altering technology that flows out of Silicon Valley.

    A city built on science

    Plutonium was first discovered in a Berkeley lab (as were the aptly-named berkelium and californium). The Bay Area is the birthplace of “big science” and of the atom smashers that have told us so much about the fundamental building blocks of matter. Quarks were first discovered just down the peninsula at the Stanford linear accelerator.

    In the 1970s, two professors from Stanford and the University of California, San Francisco (UCSF) figured out how to use bacteria to clone segments of DNA. In the process, they gave birth to genetic engineering and the modern biotechnology industry. South of the city, Silicon Valley gave us the personal computer, the mouse, and the verbs “to google” and “to tweet”. Sit down in any local coffeeshop and you’re just as likely to end up next to someone nursing a startup as you are someone nursing a cappuccino. Now, the Valley’s venture capitalists are hoping that their magic will work just as well on the clean-technology industry.

    The Bay Area hosts the world’s biggest laser (at the National Ignition Facility in Livermore), the world’s most intense X-ray source (at the LCLS at Stanford’s national lab) and an institute devoted exclusively to the scientific search for extraterrestrial intelligence (the SETI Institute in Mountain View). NASA’s outpost here just launched a probe that will slam into the lunar surface in search of water.

    Between them, Stanford, Berkeley, and UCSF employ some 50 Nobel laureates spanning the full range of scientific disciplines. True, a handful of these are for economics, but we’ll cut the dismal science some slack.

    Science and technology are to the Bay Area what finance is to New York and what cars are (or were) to Detroit. They underpin the region’s economy, influence its culture and shape the very character of this region as much as its notoriously active seismic geography does.

    Over the next four days, I intend to explore a few of the different faces that science and technology present to residents here. From the stem cell research that promises to revolutionise medicine, to the science of growing and making the best wine, to the science-fiction sounding search for extraterrestrials, we’ll be taking a scientific road trip around the San Francisco Bay Area. Think Thelma and Louise meets Watson and Crick.


    I WAKE up slightly disoriented at 5:45am. Waking in darkness makes me feel more like a farmer than a scientist, but perhaps that’s appropriate for the task at hand today. I’m on my way to the University of California, Davis for their annual RAVE conference, a gathering of scientists, winegrowers and winemakers meant to share the most recent advances in the disciplines of viticulture and oenology.

    Gulping down a large coffee, I head east on I-80 across the Bay Bridge and through the sprawl of the East Bay. I pass the exit for Highway 37, which winds its way north and west to Napa and Sonoma, the heart of California’s wine country. In Napa alone, over 40,000 acres of vines produce an annual crop of grapes worth $400m. I manage to resist the pull of the wineries and instead follow I-80 east into the brightening dawn.

    The wine before the bottle

    You may not realise it when you pop the cork on a nice bottle of cabernet sauvignon, but many scientists spend their lives studying every facet of wine, from the best pruning and watering techniques for growing the tastiest grapes to the genetics of the bacteria used in their fermentation. And UC-Davis is one of the world’s great centres for wine science.

Its Robert Mondavi Institute for Wine and Food Science, founded with a donation of $25m from the father of California’s wine industry, boasts 75,000 square feet of state-of-the-art labs, kitchens, and sensory-testing equipment. Inside, the halls literally smell of wine and the researchers seem to be having a lot more fun than the typical science PhDs.

Studying wine seems like a far cry from curing cancer or weaning the world off fossil fuels, but the scientists who do so are no slouches. They make use of the latest techniques in biochemistry and biotechnology. Their analyses are sprinkled with complex mathematics and multivariate statistics.

    As I learn later in the morning, “whole genome shotgun sequencing”, originally developed for the Human Genome Project, was put to work on pinot noir in 2007. Besides shedding light on fundamental issues of plant evolution, wine scientists hope the grapevine genome will reveal some of the pathways that control wine flavour and resistance to various pests.

    The packed program includes lectures on viticultural practices, techniques for drying grapes into raisins, the perils of something called “berry shrivel” and how microbes contribute to flavour during fermentation. We hear about genomics, proteomics, and the “wired vine”, where all aspects of growing are monitored and controlled by sensors. Terpenes, norisoprenoids, oak lactones—the biochemical jargon comes thick and fast and eventually overwhelms me.

    But what comes through is a sense, as one speaker puts it, that wine is truly “chemistry in a glass”. Wine contains hundreds of complex chemical compounds, some of which are active in startlingly small amounts. Methoxypyrazine, which gives sauvignon blanc a slight bell-pepper odour, can be detected by the nose at less than two billionths of a gram in an entire bottle.
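That detection threshold is easier to grasp as a concentration. A quick back-of-the-envelope sketch (assuming a standard 750 ml bottle and treating wine’s density as roughly that of water; both are assumptions, not figures from the talks):

```python
# Rough concentration implied by "two billionths of a gram per bottle".
bottle_ml = 750.0              # assumed standard wine bottle
density_g_per_ml = 1.0         # wine is mostly water, so ~1 g/ml
detectable_g = 2e-9            # two billionths of a gram

wine_g = bottle_ml * density_g_per_ml
parts_per_trillion = detectable_g / wine_g * 1e12
print(round(parts_per_trillion, 1))   # roughly 2.7
```

In other words, the nose can pick out the compound at just a few parts per trillion by mass.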

    To the purist, all of this measuring and quantifying might destroy the beauty of a perfectly balanced bottle paired with a delicious meal. But I think of the child who looks up at the night sky and grows up to become an astronomer. Science begins with and returns to beauty and wonder.

    As I hit the road back to the city, I think about the theory that it’s better to give grapes slightly less water than they want in order to stress them and to concentrate their intense flavours. Out of great struggle comes great wine—and great science.


    TONIGHT I’ve got two of the hottest tickets in town. As the bouncer checks my ID, I can hear the low bass emanating from the DJ’s turntables inside the glass doors. The crowd is dressed in slinky skirts, tight jeans, and sport coats. This is not the hippest new club in the city, but the normally staid halls of the California Academy of Sciences. My girlfriend and I head off for a stiff gin and tonic at one of the many bars (though not the one sitting beneath the watchful eye of Tyrannosaurus rex).

    To most people, the words “science” and “nightlife” don’t usually go together. This spring, however, the Academy opened its doors for a series of boozy evenings intended to give the residents of this young, tech-savvy city another view of the science museum. “NightLife”, as the event is called, has been selling out, with more than 3,000 people attending each week.

    It was only last September that the Academy returned to its home in Golden Gate Park. After the 1989 Loma Prieta earthquake damaged the aquarium here, the Academy undertook a complete rebuilding project that took $488m and the better part of a decade. Designed by Renzo Piano, the museum is now the largest public building in the world to have a LEED Platinum rating. Its design, which melds modern glass and steel with the classical architecture of the original building, reminds me of science itself—a combination of the new and modern with the solid, tested principles of the past.

    Inside, an exhibit demonstrates some of the building’s environmentally friendly features. Recycled blue jeans are stuffed into the walls to serve as insulation (which seems fitting, as San Francisco is home to both Levi’s and The Gap). Half of the building’s cement was made with recycled waste products from coal combustion and steel production, and the glass canopy outside houses 60,000 photovoltaic cells. Instead of using treated freshwater for the aquariums, water is pumped in directly from the Pacific at the other end of the park.

    We continue our stroll past the 90-foot diameter glass dome that houses a living rainforest. Next to the DJ, people are gaping through a glass window at scientists in white coats working on specimens—perhaps a nod to the traditional view of scientists in a museum.

    Downstairs, the Steinhart Aquarium is packed and people are noticeably tipsier. An alligator drifts towards the thick glass, having recently sent its albino tankmate Claude to the hospital with a nasty bite on the toe. A scantily clad girl sticks her tongue out at a lizard in its tank. It responds in kind and then lazily drops off its branch.

    We head back upstairs and onto the museum’s “living roof”, which is planted with native Californian grasses and flowers. They help reduce runoff and, from a distance, cause the building to mirror the hilly landscape of the city around it. A line is patiently snaking its way to a docent with a telescope trained on Saturn’s rings and the moon Titan.

    After we’ve had our fill of stargazing, we spill out into the beautiful evening and stroll out of the park. I’m left with the inescapable feeling that this taste of the nightlife has been high on style but a little light on the scientific substance. But that’s no terrible thing. Science will survive and grow, as this museum has.


    IT IS a classic San Francisco morning. The downtown skyline is shrouded in a blanket of fog. By noon the sun has finally burned its way through, but the fog will likely roll back in with the cool evening breeze. It’s a bit like scientific progress, actually—an endless ebb and flow from haziness to clarity and back again.

    Today I’m downtown to cover a town hall meeting hosted by the California Institute for Regenerative Medicine (CIRM). From the subway I head to one of the Palace Hotel’s elegant chandeliered ballrooms. It holds around 300, and eventually fills to standing capacity.

    Though it seems like a euphemism, “regenerative medicine” does not refer to plastic surgery (that is, dare I say, an Angeleno rather than a San Franciscan pastime). From its office in San Francisco’s Mission Bay, CIRM oversees California’s $3 billion investment in stem cell research.

    Soldiers awaiting orders

    In November 2004, California voters passed Proposition 71, a ballot measure allowing the state to fund research into human embryonic stem cells. Overnight, California became one of the largest backers of stem-cell research in the world. At a time when the federal government was unwilling to invest in regenerative medicine, the message from the state’s voters was clear: the incredible therapeutic promise of stem cells outweighs the moral objections to using them.

    That therapeutic promise, the meeting’s three panellists explain to us, comes from stem cells’ chameleon-like ability to turn into any of the cells that make up the body’s tissues and organs. Most cells are tailored to perform a particular function. Heart cells are good at beating, neurons transmit electrical signals and pancreatic islet cells produce insulin. While they all contain the full set of instructions of the human genome, each uses only the small subset that directs its particular task.

    A stem cell, on the other hand, is a cellular jack-of-all-trades. Given the right signals, it can become a brand new heart cell or neuron or insulin-producing cell. Bruce Conklin, a professor at the University of California, San Francisco and the second speaker of the evening, plays us a dramatic video of 2,000 human heart cells that had been derived from embryonic stem cells. Sitting in their Petri dish, they wriggle and beat, just like a human heart.

    Embryonic stem cells were first isolated in 1998, and since then the pace of progress has been furious. Much work has gone into figuring out how to reliably and efficiently generate the different cell types that doctors would like to use in patients. In addition, as the speakers emphasise, understanding exactly when and how implanted stem cells can go awry and cause tumours remains an essential research task that confronts all potential therapies.

Such therapies are slowly but surely making their way towards the clinic. In January, Geron, a biotechnology company, got FDA approval to conduct the first clinical trial testing the safety of an embryonic stem cell therapy. It will work with patients with severe spinal cord injuries. For its part, CIRM is hoping to get ten to 12 human stem cell trials going in the next four years. In December, it will award $20m to researchers and their corporate partners with that goal in mind.

    After the speakers finish their presentations, the moderator opens the floor to questions from the audience. From the front row, a young girl raises her hand. In a high-pitched, slightly faltering voice, she asks a deeply personal question: “I was burned very badly in August 2008. How might this help me, and how can I help in your research?”

    After the dry PowerPoints and data-filled charts, the scientists seem slightly taken aback by the raw emotion. They stammer through some answers, but none seems satisfying. Despite stem cells’ promise, the science just isn’t quite there yet. This moment brings home both the deep hopes and the urgent desperation that surround what are undoubtedly the early days of regenerative medicine.


    ONCE again I’m braving the early morning traffic on I-80, heading out of the city past Oakland and Berkeley. But just before I reach Davis, I veer north onto Interstate 5. It’s not the earthly delights of carefully cultivated varietals and nuanced terroir that concern me today. I’m heading into the mountains to get a tour of the Allen Telescope Array (ATA), a collection of 42 large telescopes that have just begun scanning the heavens for radio transmissions from intelligent extraterrestrials. Yes, you read that right—aliens.

    Three hours later, my small Toyota begins the climb into the mountains of Lassen National Park. Eureka, Whiskeytown, Old Oregon Trail—the road signs here recall the miners and pioneers who trudged through during California’s mid-19th century gold rush. The two-lane road I’m driving on used to be a trail for rattling stagecoaches.

    The San Francisco radio stations faded hours ago, and now only a few talk stations break through the static. Maybe I’ve lived in Haight-Ashbury for too long, but as I make a right turn into the observatory, Timothy Leary is in my head: “Turn on, tune in, drop out.” Here in Hat Creek, which is nearly devoid of manmade sounds, the ATA just turned on for science operations in May. For many years to come, it will tune into the radio sky to study the evolution of galaxies, the properties of black holes, and one of the most profound questions of all—whether we’re alone in the universe.

    My tour guide this afternoon is Garrett Keating, a former cop turned astronomer. We walk out towards one of the 42 telescopes, a gleaming aluminium dish six metres in diameter. Mr Keating opens a trap door and we poke our heads inside. The main dish reflects incoming radio waves onto a smaller dish off to our left. That in turn bounces them onto the telescope’s main receiver, a long pyramid with different sized antennas poking off of it.

    The antennas pick up an extremely wide range of frequencies, from those used for broadcast television on the low end up through the ones that transmit satellite television. In between is the emission frequency of hydrogen gas—the most common element in the universe and the raw material for the formation of stars and galaxies.

Off in the distance, we hear the rumbles of an approaching storm, and several lightning bolts streak across the sky. Mr Keating insists we return to the lab. The antennas, he reassures me, are well grounded. I don’t tell him that it wasn’t the antennas I was worried about.

    Inside, fibre-optic cables carry the signals from the dishes to enormous racks of computers. By using the computers to combine data from each individual dish, the ATA is able to mimic a much larger telescope for a fraction of the cost. An initial donation of $25m from Paul Allen, the co-founder of Microsoft, and $25m from other sources financed these first 42 dishes. Eventually, the team hopes to collect enough funding to get up to 350.

Operating together, the telescopes are quite sensitive. And they need to be, since a single mobile phone located on the moon would give off a much stronger signal than almost any astronomical object in the radio sky. In addition to its sensitivity, the ATA also views a large patch of the sky all at once. Most other radio telescopes are like telephoto lenses, zooming into a tiny region of space. The ATA, however, is the first that can take snapshots with a wide-angle lens.
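The sensitivity gain from ganging dishes together can be illustrated with a toy model. This is only the signal-averaging intuition, not the ATA’s actual aperture-synthesis processing, and every number below is invented: each simulated dish records the same sky signal buried in its own independent receiver noise, and averaging the recordings beats the noise down roughly in proportion to the number of dishes.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 20000
sky = np.sin(np.linspace(0.0, 40.0 * np.pi, n_samples))  # shared sky signal

def snr(estimate, truth):
    """Ratio of signal power to residual noise power."""
    return np.var(truth) / np.var(estimate - truth)

# Each dish sees the same sky plus its own independent receiver noise;
# averaging n recordings cuts the noise variance by roughly n.
for n_dishes in (1, 7, 42):
    recordings = [sky + rng.normal(0.0, 2.0, n_samples) for _ in range(n_dishes)]
    combined = np.mean(recordings, axis=0)
    print(n_dishes, round(snr(combined, sky), 2))
```

The real array goes further, correlating the dishes’ signals with precise relative delays so the collection also mimics the resolving power of a single much larger dish.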

    Just outside the sliding glass door to the control room, I notice a doormat with a bug-eyed alien and the caption “welcome all species”, a reminder of the ATA’s second mission. This telescope array represents a great leap forward for the enterprise known as SETI, the search for extraterrestrial intelligence.

    In the past, SETI has had to squeeze precious observation time out of existing telescopes around the world. With the ATA, the search for signals from intelligent life elsewhere in the universe will be carried out constantly, right alongside the astrophysics.

So what exactly is SETI looking for? Essentially, something that seems not to belong—an odd man out in the cosmic radio haze. One possibility is a very powerful signal confined to a tiny frequency band, like the manmade transmissions that continually leak off the earth. As Mr Keating explains, “nature doesn’t produce pure tones”. In addition, if the signal really is extraterrestrial, its broadcast frequency should drift as the alien planet orbits its own star.
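Why a pure tone makes such a good beacon can be seen in a few lines of numpy: a narrow-band carrier piles all of its power into a single frequency bin of the spectrum, while noise spreads across every bin, so even a weak tone stands out. The amplitudes and the frequency bin here are arbitrary choices for illustration, not anything SETI actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

# Broadband noise (the cosmic radio haze) plus a weak pure tone
# at an arbitrary frequency bin.
tone_bin = 700
x = rng.normal(0.0, 1.0, n) + 0.2 * np.sin(2 * np.pi * tone_bin * t / n)

# The tone concentrates its power in one bin of the power spectrum,
# while the noise spreads its power over all ~2,000 bins.
power = np.abs(np.fft.rfft(x)) ** 2
detected_bin = int(np.argmax(power))
print(detected_bin)   # the tone's bin dominates the spectrum
```

A real search would also track the drift Mr Keating describes, repeating this spectrum over time and looking for a peak that slides steadily from one bin to the next.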

    Over its lifetime, the ATA hopes to survey 1m promising candidate stars within a thousand light years of earth, and ten billion more in the central region of our own Milky Way galaxy. And as computers and algorithms improve, so will SETI’s ability to look for more complex alien transmissions in this mountain of data.

    Black holes, exploding stars, clouds of swirling hydrogen gas light-years across the galaxy—this is hallucinatory stuff. Yet if the little green men finally arrive, San Francisco—built as it is on science, tolerance and the counterculture—would seem like a natural first port-of-call.



    The formula

    Why don’t Americans understand science better? Start with the scientists.


    Earlier this month, the Pew Research Center and the American Association for the Advancement of Science unveiled the latest embarrassing evidence of our nation’s scientific illiteracy. Only 52 percent of Americans in their survey knew why stem cells differ from other kinds of cells; just 46 percent knew that atoms are larger than electrons. On a highly contentious issue like global warming, meanwhile, the gap between scientists and the public was vast: 84 percent of scientists, but just 49 percent of Americans, think human emissions are causing global warming.

    Scientists are fond of citing statistics such as these in explaining conflicts between the public and the scientific community. On politicized issues like climate change, embryonic stem cell research, the teaching of evolution, and the safety of vaccines, many Americans not only question scientific expertise but even feel entitled to discard it completely. The reason, many scientists infer, is that the public is just clueless; perhaps we wouldn’t have these problems if the average citizen were better educated, more knowledgeable, better informed.

    Yet while scientific illiteracy is nothing to shrug at, the truth is that it’s only part of a broader problem for which scientists themselves must shoulder a significant portion of the responsibility. Decrying ignorance and scientific illiteracy, many scientists treat their fellow citizens as empty vessels waiting for an infusion of knowledge. That is exactly wrong, and exactly why so many people, in turn, see science and scientists as distant, inscrutable, aloof, arrogant.

    Rather than blaming, scientists ought to be engaging with the public, trying to personally make their knowledge hit home and to instill by example (rather than from a distance) the nature and virtues of the scientific mindset – while also encouraging average Americans to ask their own questions and have their say.

    Scientists must make it clear that while they don’t have all the answers, science is about searching for the truth, an imperfect process of doing the best one can with the information available, while knowing there is always more to learn – the epitome of humility.

    Ask yourself: How much would more scientific literacy help the public, really, in understanding the toughest, most contentious issues?

    Undoubtedly, the more scientifically literate Americans are, the more they will understand newspaper articles about science, and be able to follow public debates. But there’s a limit: Scientific literacy is no shield against anti-evolutionists or global warming deniers, for example, who are often scientists themselves, who couch their arguments in sophisticated scientific language, and who regularly cite articles in the peer-reviewed scientific literature. Having the knowledge equivalent of a PhD is more along the lines of what’s necessary to refute them, and even then, the task requires considerable research and intellectual labor, far more than most people have the time for.

If members of the public aren’t all going to earn PhDs, they need something else, an attribute the standard “scientific illiteracy” survey questions don’t really measure. We would describe it as a deep and abiding awareness of the importance of the scientific endeavor to their lives and the national future. This means that Americans would be more likely to see – much in the way that scientists currently see – how science-centered developments and controversies will shape the coming decades and guide countless critical political decisions, in areas ranging from energy policy to the ethics of various types of biomedical research.

    To that end, Americans should be far more engaged with scientists and what they’re doing. They should know the names of leading researchers (most Americans do not) and the nation’s top scientific agencies (again, most Americans do not). To the extent possible they should know scientists personally, both so they can get a sense of the nature of scientific reasoning and so they feel they are being heard, not just lectured to. Perhaps this way, when it comes to the toughest and most politicized questions, they will better recognize that scientists will not rally around a firm conclusion unless it really is precisely that.

    As matters currently stand, though, such reaching out to the public isn’t much rewarded in the scientific community. There’s little incentive for it. Advancement in science doesn’t happen, for the most part, due to one’s public engagement or media skills. Rather, it’s all about your published research: How many papers have you placed into leading journals, and how much are they being cited by other scientists?

    There are, admittedly, efforts afoot to change this problem. The National Science Foundation’s decade-old IGERT program – Integrative Graduate Education and Research Traineeship – strives to impart a far broader set of skills to young scientists, and is supporting some of the best courses in the nation to this end. For example, an IGERT-supported course entitled “Climate Change and Marine Ecosystems,” taught at the Scripps Institution of Oceanography by marine biologist Jeremy Jackson, introduces young scientists not only to the research on climate change, biodiversity, and conservation, but also to economic thinking, policy realities, the nature of the modern media, and even the work of filmmakers, improv comedians, and Internet organizers.

    However, the IGERT program is a shadow of what it could be: At present, the program disburses about 20 grants per year, yet each funding cycle, the NSF reports receiving more than 400 preliminary IGERT proposals.

    We need an entirely new project of public outreach on the part of the scientific community. It shouldn’t require every academic scientist to go door to door – not all will be interested in this work, and not all will be good at it. Rather, it should centrally focus on training those young researchers who are not destined for academic jobs – their numbers are growing today, as academic opportunities decline – so that they’re ideal emissaries for bringing science to the rest of society.

    The enthusiasm is already there in the youngest generation of American scientists, who want to give something back. Some will become our next crop of great researchers – yet some don’t want to follow in the footsteps of their professors, and are ideal candidates for becoming liaisons between science and society. But finding careers for them in public outreach is another matter entirely. As the free market surely won’t do it, universities, philanthropists, and scientific societies must create these careers – and of course, we need the help of government as well.

    Ultimately, all of this could lead to nothing less than a substantial redefinition of the role of the scientist in public life. No longer merely a distant voice of authority, the scientist could also become an everyday guide and ally, a listener as much as a lecturer. There’s no doubt members of the public must become much more knowledgeable about science and its importance. But scientists must also become far more involved with – and knowledgeable about – the public.

    Chris Mooney and Sheril Kirshenbaum are the co-authors of the new book “Unscientific America: How Scientific Illiteracy Threatens Our Future”, upon which this article is partly based.



    Jupiter Gets a Black Eye

    We sometimes forget that the universe is a violent place.

    This week, astronomers in Hawaii recorded an exceedingly rare event. An amazing photograph revealed a comet or asteroid, probably no more than a mile across, plowing into Jupiter’s atmosphere. The impact created a fireball roughly the size of the planet earth.

The good news is that Jupiter was just doing its job, clearing the solar system of stray comets and asteroids. Jupiter, 318 times more massive than the earth, acts like a cosmic vacuum cleaner, sucking in or deflecting debris left over from the solar system’s birth 4.5 billion years ago. If it weren’t for Jupiter’s colossal gravitational field, we wouldn’t be here, since the earth would be hit with deadly comet and meteor impacts every month or so. Most of the U.S. would just be an empty graveyard of bleak craters.

The bad news is that a comet impact could happen to us. A black eye for Jupiter would be a body blow to the earth. We got a taste of this back in 1908, when something the size of an apartment building plowed into Tunguska, Siberia. This “city-buster” flattened 100 million trees with the force of a hydrogen bomb. But the recent Jupiter impactor, much larger and coming in at perhaps 100,000 miles per hour, would have unleashed the power of hundreds of H-bombs had it struck the earth instead. It might have engulfed most of the East Coast in a huge firestorm, triggering a massive tsunami and destabilizing the weather.

    According to Hollywood, we can always send our astronauts on a space shuttle to intercept a comet and blow it up with H-bombs. Wrong. Blowing up a comet with nuclear bombs creates chunks of debris, increasing the area of destruction. So we are sitting ducks to a potential impact from deep space.

    So what’s the lesson from all of this?

Maybe Mother Nature has a sense of humor. An impact like the recent one on Jupiter happened 15 years ago, in late July, after the Shoemaker-Levy 9 comet broke up into 20 pieces, each of which plunged into Jupiter, creating a dazzling display of cosmic fireworks. Scientists used to believe that such collisions took place once every few thousand years, not every 15. So perhaps Mother Nature was just trying to show how little scientists really understand about these cosmic collisions.

    But it also happened on the 40th anniversary of the moon landing. So maybe Mother Nature was reminding us that the universe is, after all, a violent place—that we may one day need a new home. The earth lies in the middle of a cosmic shooting gallery. The proof comes out every night when we gaze at the moon.

    When viewing the film of Neil Armstrong and Buzz Aldrin bobbing among the barren craters of the moon, we are reminded that each crater was gouged out by a titanic impact.

    In addition, there are more than 5,000 so-called near-earth objects, carefully tracked by telescope, that can cross near the orbit of the earth. One of them, the asteroid Apophis, is about the size of the Rose Bowl. It will graze the earth in 2029 and again in 2036, passing below some of our satellites.

But there are also many unnamed comets from the outer reaches of the solar system whose orbits are totally unknown and unpredictable. They would give us little warning and catch us totally off-guard, like the comet that just hit Jupiter.

    So in the long term, perhaps we should look at the space program as an insurance policy. Not only has the space program given us a bonanza of benefits (such as weather satellites, the Global Positioning System, telecommunications, etc.), it also provides a gateway to the stars. Over the course of the next few centuries, maybe we should use that gateway to plan to be a “two planet species.” Life is too precious to place in one basket.

    In August, President Barack Obama will receive a major report from the U.S. human space flight plans committee about the future of space travel, which could be a turning point for NASA in the 21st century. He should remember the Jupiter hit as he considers the report.

    Mr. Kaku is the author of “Physics of the Impossible: a Scientific Exploration into the World of Phasers, Force Fields, Teleportation, and Time Travel” (Doubleday, 2008).



    Defining Data Down

    Like other complex human enterprises, science has a “front” and a “back.” The model here is a restaurant. In the front, waiters in spotless uniforms glide between tables murmuring suggestions and delivering exquisitely arranged platters. Meanwhile, the kitchen—the back—is a chaos of noise, heat, haste, breakage and rancor. Now and then a gross error in the back leaks out into the front, and a case of food poisoning shows up in the newspapers.

    So it is with science. “Plastic Fantastic,” Eugenie Samuel Reich’s readable account of a fairly recent science fraud, is valuable chiefly as a close look at the “kitchen” where scientific results are assembled and validated—and whence occasionally comes forth something that should not have seen the light of day.


    In late 1997, a young German postdoctoral physicist, Hendrik Schön, was hired by the famous Bell Labs of Murray Hill, N.J. Over the next four years he claimed sensational results in an arcane corner of materials science. Broadly speaking, Mr. Schön was seeking to persuade organic materials, like plastics, to exhibit behaviors useful in electronics. For such behaviors—superconductivity, bipolar transport and something called the “quantum Hall effect”—we currently rely on inorganic substances of a simpler but less robust microstructure, like silicon. Mr. Schön’s research, if it fulfilled its promise, would lead to smaller, cheaper and more reliable electronics. And Bell Labs was just the place for such work: The original transistor concept, on which modern electronics is based, emerged from there in 1947.

    Mr. Schön began with modest claims, publishing in academic physics journals in 1998. Two years later he advanced to articles in Science and Nature, the big generalist (though still peer-reviewed) magazines, and from there to coverage in newspapers and popular-science outlets. He won prizes for his work and became a well-known name in his small field.

    There were skeptics from early on. In August 2000, Stanford physicist Bob Laughlin protested the questionable quality of Mr. Schön’s data. Another researcher, Ivan Schuller of the University of California at San Diego, doubted that Mr. Schön’s materials could withstand the electromagnetic fields he claimed to be applying. It was not until early 2002, though, that fraud was suspected. In May of that year a formal committee investigation began at Bell Labs. Four months later the committee found that Mr. Schön had faked lab results and fabricated data. If plastic is fantastic, he hadn’t shown it. Bell Labs fired him soon after. He was stripped of his Ph.D., too, and is now employed in humble engineering work in his home country.

    How did he get away with it? Why did he do it? These are the main questions that a book like “Plastic Fantastic” should answer. Ms. Samuel Reich does better with the first question than with the second. Throughout her narrative, Mr. Schön remains a shadow, his personality obscure, his motives a mystery. Simple greed can probably be ruled out. Science prizes are rarely lucrative, and the ones awarded to Mr. Schön were fairly modest. A quest for glory can be ruled out, too, since Mr. Schön must have known that his un-replicable results would be debunked sooner or later. There seems to have been no vindictiveness in the scheme, no desire to make a fool of anyone. Said a colleague: “No one had it in for Hendrik. He didn’t have enemies.” Mr. Schön was not obsessed by any pet theory or ideology. Perhaps there is no better explanation, in the end, than that he did it because he could.

    As to the “how”: A key factor in Mr. Schön’s success seems to have been his skill at cultivating his managers, who continued to support him when his fellow researchers had become skeptical. Mr. Schön was an amiable employee, apparently, and a cheap one, delivering striking results from a minimum of resources. He was even reluctant to use the usual corporate American Express card to cover lab expenses.

    Circumstances at Bell Labs magnified the effect. Lucent Technologies, which ran the labs, was hard hit by the bursting of the dot-com bubble, its share price dropping 30% on Jan. 6, 2000. In that year’s “summer of insecurity” the labs were roiled by restructuring and cut-backs. It was precisely then, amid administrative distractions, that Mr. Schön had his first great burst of publishing productivity.

    What of the normal processes of authentication in science—peer review and the replication of results by other researchers? How did Mr. Schön’s bogus claims survive these safeguards? The glass-half-full answer is that they didn’t . . . eventually. The span from Mr. Schön’s first appearances in the technical literature in late 1998 to his unfrocking in 2002 was less than four years—longer than one would wish but not disgracefully long in a newish and arcane field concerned with hard-to-measure effects in microscopic quantities of hard-to-handle materials.

    The really serious failures were in the peer-review process. There are excuses one can make here. For example, science publications seem eager to balance out the flood of papers in sexy fields like genetics and neurobiology with some good old dry-goods physics and chemistry. Still, the failures are dismaying to read about. It appears that the people charged with vetting Mr. Schön’s work—whether editors or fellow scientists—did not do so carefully or thoroughly enough.

    A key moment in the denouement came in April 2002 when a Bell Labs researcher noticed that two of Mr. Schön’s papers from two years before, one in Science and one in Nature, had reported outputs that were identical—even down to the electrical “noise”—from two quite different devices. Two years, in two journals claiming a combined readership of nearly two million, and nobody noticed?

    Mr. Derbyshire’s “We Are Doomed: Reclaiming ­Conservative Pessimism” will be published in September.




    All Eyepieces on Jupiter After a Big Impact


    NASA released this infrared photo on Tuesday showing what scientists believe may be evidence that another object has crashed into Jupiter.

    Anybody get the number of that truck?

    Astronomers were scrambling to get big telescopes turned to Jupiter on Tuesday to observe the remains of what looks like the biggest smashup in the solar system since fragments of the Comet Shoemaker-Levy 9 crashed into the planet in July 1994.

    Something — probably a small comet — smacked into Jupiter on Sunday, leaving a bruise the size of the Pacific Ocean near its south pole. Just after midnight, Australian time, on Sunday, Jupiter came into view in the eyepiece of Anthony Wesley, an amateur astronomer in Murrumbateman. The planet was bearing a black eye spookily similar to the ones left in 1994.

    “This was a big event,” said Leigh Fletcher of the Jet Propulsion Laboratory. “In the inner solar system it would have been a disaster.”

    “As far as we can see it looks very much like what happened 15 years ago,” said Brian Marsden of the Harvard-Smithsonian Center for Astrophysics, who is director emeritus of the International Astronomical Union’s Central Bureau for Astronomical Telegrams. The bureau issues bulletins about breaking astronomical news.

    But astronomers admit they might never know for sure what hit Jupiter. “It’s like throwing a stone on the pond,” explained Dr. Fletcher. “You see the splash, but lose the stone. It’s the splash we can study.”

    Dr. Fletcher said that he and his colleagues were frantically writing proposals for telescope time. Among the telescopes they have recruited is the Hubble Space Telescope, making its early return to the fray after a successful repair mission by astronauts this summer. Mario Livio, an astronomer at the Space Telescope Science Institute, said the group was planning to look at Jupiter’s bruise on Thursday and release a picture as soon as possible.

    Mr. Wesley had thought about quitting for the night to watch sports on television, according to the account on his Web site, when he went back outside for another look and found the spot. He e-mailed other astronomers, among them Dr. Fletcher and his colleague Glenn Orton, who had scheduled observing time that night at NASA’s Infrared Telescope Facility on top of Hawaii’s Mauna Kea. Jupiter’s “scar” showed up in infrared light as a bright spot.

    Meanwhile, Franck Marchis, an astronomer at the SETI Institute and the University of California, Berkeley, heard about Mr. Wesley’s discovery through the Minor Planet Mailing List and blogged about it on his Web site.

    Paul Kalas, another Berkeley astronomer, and Michael Fitzgerald of the Lawrence Livermore Laboratory, who were then using the Keck II telescope on Mauna Kea next door to the NASA infrared telescope to look for a recently discovered exoplanet, saw the blog and with Dr. Marchis’s help, also turned their big eye on Jupiter.

    Dr. Marchis said the shape of the debris splash as revealed in the Keck images suggested that whatever hit Jupiter might have been pulled apart by tidal forces from the planet’s huge gravity before it hit. In an e-mail message, he said humans should be thankful for Jupiter.

    “The solar system would have been a very dangerous place if we did not have Jupiter,” he wrote. “We should thank our giant planet for suffering for us. Its strong gravitational field is acting like a shield protecting us from comets coming from the outer part of the solar system.”



    New Earth-Size Blot on Jupiter, Found By an Amateur

    A large impact mark on Jupiter’s south polar region captured on Monday by NASA’s Infrared Telescope Facility in Mauna Kea, Hawaii.

    NASA has confirmed the discovery of a new hole the size of the Earth in Jupiter’s atmosphere, apparently showing that the planet was hit by something large in recent days. The impact mark was first spotted on Monday morning by an amateur astronomer in Australia, who then drew the attention of scientists at NASA’s Jet Propulsion Laboratory to the dark mark on Jupiter’s south polar region.

    The apparent impact comes almost exactly 15 years after a comet named Shoemaker-Levy 9 struck Jupiter, “sending up blazing fireballs and churning the Jovian atmosphere into dark storms, one of them as large as Earth,” as The New York Times reported on July 19, 1994.

    Images of the impact mark, as seen through a NASA telescope in Hawaii, were posted on the space agency’s Web site on Monday with this explanation:

    Following up on a tip by an amateur astronomer, Anthony Wesley of Australia, that a new dark “scar” had suddenly appeared on Jupiter, this morning between 3 and 9 a.m. PDT (6 a.m. and noon EDT) scientists at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., using NASA’s Infrared Telescope Facility at the summit of Mauna Kea, Hawaii, gathered evidence indicating an impact.

    New infrared images show the likely impact point was near the south polar region, with a visibly dark “scar” and bright upwelling particles in the upper atmosphere detected in near-infrared wavelengths.

    Glenn Orton, a scientist at the Jet Propulsion Laboratory, said “It could be the impact of a comet, but we don’t know for sure yet.”

    Mr. Orton told New Scientist magazine that the planet could have been hit by a block of ice or a comet that was too faint for astronomers to detect before the impact. Leigh Fletcher, an astronomer at the Jet Propulsion Lab, told the magazine the impact scar “is about the size of the Earth.”

    In Australia, the Sydney Morning Herald reported that the amateur astronomer, Anthony Wesley, a 44-year-old computer programmer from a village north of Canberra, made the discovery “using his backyard 14.5-inch reflecting telescope.” The Herald explained: “Wesley, who has been keen on astronomy since he was a child, said telescopes and other astronomy equipment were so inexpensive now that the hobby had become a viable pastime for just about anybody. His own equipment cost about $10,000.”

    Mr. Wesley recorded the discovery of the impact mark, and posted several of the first images he took of it, in an observation report he posted online:

    I came back to the scope at about 12:40am and noticed a dark spot rotating into view in Jupiter’s south polar region, and started to get curious. When first seen close to the limb (and in poor conditions) it was only a vaguely dark spot, which I thought was likely to be just a normal dark polar storm. However as it rotated further into view, and the conditions improved, I suddenly realised that it wasn’t just dark, it was black in all channels, meaning it was truly a black spot.

    My next thought was that it must be either a dark moon (like Callisto) or a moon shadow, but it was in the wrong place and the wrong size. Also I’d noticed it was moving too slow to be a moon or shadow. As far as I could see it was rotating in sync with a nearby white oval storm that I was very familiar with – this could only mean that the black feature was at the cloud level and not a projected shadow from a moon. I started to get excited.

    It took another 15 minutes to really believe that I was seeing something new – I’d imaged that exact region only 2 days earlier and checking back to that image showed no sign of any anomalous black spot.

    Now I was caught between a rock and a hard place – I wanted to keep imaging but also I was aware of the importance of alerting others to this possible new event. Could it actually be an impact mark on Jupiter? I had no real idea, and the odds on that happening were so small as to be laughable, but I was really struggling to see any other possibility given the location of the mark. If it really was an impact mark then I had to start telling people, and quickly.

    The Guardian reports that Mr. Wesley, who “spends about 20 hours a week on his passion of watching and photographing Jupiter,” almost missed making the discovery because he interrupted his work late on Sunday night to watch sports on television. Mr. Wesley told The Guardian:

    I was imaging Jupiter until about midnight and seriously thought about packing up and going back to the house to watch the golf and the cricket. In the end I decided to just take a break and I went back to the house to watch Tom Watson almost make history.

    I came back down half an hour later and I could see this black mark had turned into view.

    In another interview, Mr. Wesley told the Sydney Morning Herald that spotting the impact mark on Jupiter made him glad the huge planet is in Earth’s neighborhood: “If anything like that had hit the Earth it would have been curtains for us, so we can feel very happy that Jupiter is doing its vacuum-cleaner job and hoovering up all these large pieces before they come for us.”



    On Navel Lint and Other Scientific Triumphs


    The world scarcely needs any more medical journals. The National Library of Medicine already indexes some 5,200 of them, from Applied Immunohistochemistry & Molecular Morphology to Gut.

    But based on the dozens of studies that pass my desk every day, I think there should be two more scholarly periodicals: I’d call them Duh!, for findings that never seemed to be in doubt in the first place, and Huh?, for those whose usefulness remains obscure, at least to lay readers.

    Duh!’s first issue could include findings such as these, which ran in prestigious journals or were presented at scientific conferences recently:

    •Toddlers become irritable when prevented from napping.

    •Cats make humans do what they want by purring.

    •TV crime dramas inaccurately portray violent crime in America.

    •People with high IQs make wise economic decisions.

    Huh?’s first issue could contain these head-scratchers:

    •Men are better than women at hammering in the dark.

    •Young orangutans, gorillas, chimpanzees and bonobos laugh when tickled.

    •Neither alcohol (in him) nor makeup (on her) affects a man’s ability to guess a woman’s age.

    •The more abdominal hair, the greater the tendency to collect belly-button lint.

    Not surprisingly, the researchers involved in each of these studies maintain that their work is neither obvious nor silly when understood in the proper context. (Having an advanced degree also helps.)

    “What makes you think that it was common knowledge that laughter has a pre-human basis?” Marina Davila Ross, a primatologist at the University of Portsmouth, U.K., wrote in an email. For their study in Current Biology last month, she and colleagues compared “tickle-induced vocalizations” from four types of great apes and human infants and determined that the sounds share a common evolutionary ancestry, going back at least 10 million years.

    Karen McComb, a University of Sussex behavioral ecologist, was equally protective of her cat-communication study, which ran in Current Biology last week. “Of course pet owners know that their cats and dogs have particular ways of getting their attention,” she emailed. “The interesting thing about our study is…WHY they do this.”

    After having 50 humans rate solicitation and non-solicitation purrs from 10 different cats, the researchers found that particularly urgent purrs have the same sound frequency as a meow, similar to a baby’s cry. Thus cats may be tapping into the intrinsic human compulsion to nurture offspring. “Even cat owners are surprised to find what is going on,” wrote Dr. McComb.

    (Listen to solicitation and non-solicitation purrs on the University of Sussex Web site.)

    Sometimes things that uninformed readers think are obvious aren’t at all to people who know more. When I suggested to Stephen V. Burks, a behavioral economist at the University of Minnesota, Morris, that it wasn’t surprising that people with high IQs make wise economic choices, he said, “If you think so, you knew more than a lot of economists did.” His multiyear study of 1,000 trucker-trainees, funded in part by the MacArthur Foundation and reported in the Proceedings of the National Academy of Sciences, is the first strong evidence that cognitive and noncognitive skills are linked, he said. On one level, it could help explain why some people are poor; on another level, it shows that even in service jobs, such as trucking, that don’t require a college degree but do require a lot of self-management, those with greater cognitive skills are more likely to succeed.

    Some studies attempt to show that a correlational relationship is actually causal (that is, two things don’t just go together; one thing causes the other). That was the case with the study finding that 2-to-3-year-olds get more worried, anxious and cranky when deprived of naps. Monique K. LeBourgeois, an assistant professor at Brown University’s Center for the Study of Human Development, explained that the study, presented at a sleep-research conference last month, is part of a larger research program looking at pathways to mental illness, and with some schools eliminating naps in all-day pre-kindergarten programs, “hard science is needed to inform public policy.”

    At times, a seemingly obvious research finding serves a secondary point. Mayo Clinic psychiatrist Timothy Lineberry says he and the two medical students who compared the murder situations depicted on six seasons of “CSI” and “CSI: Miami” with national crime data weren’t surprised that they didn’t match. They were emphasizing the point that alcohol plays a larger role in such crimes than the public may realize. The study was presented to the American Psychiatric Association in May.

    In the same vein, the report in the British Journal of Psychology in April that alcohol and makeup have little effect on a man’s ability to judge a woman’s age served, in part, to refute an oft-heard excuse given by male sex offenders for picking up underage women in bars.

    Interestingly, the more participants drank, the less likely they were to rate the women as attractive. “This seemingly flies in the face of the commonly held notion of ‘beer goggles,’ ” said lead researcher Vincent Egan, a forensic psychologist at the University of Leicester, U.K.

    Several of the researchers I contacted expressed frustration with the way their work had been characterized in the popular press.

    “What the media glommed on to was the difference between men and women hammering in the dark and the logical question was, ‘Why do we care? We’ll never hammer in the dark,’ ” said Duncan Irschick, an associate professor of human biomechanics at University of Massachusetts, Amherst. His study on hammering, presented at an experimental-biology conference last month, is part of a five-year $1.5 million grant from the National Institutes of Health to study how humans, from infancy to adulthood, learn to use tools under different conditions. “On the most profound level, it really addresses something about why we are human,” he said. “Animals don’t use tools.”

    When reminded that the press release for his study had highlighted the hammering-in-the-dark aspect, Dr. Irschick noted that press releases often focus on something simple to attract media attention.

    Indeed, in an effort to attract media coverage, academics—and their publicists—often underestimate the ability of the public to understand and appreciate what they are studying and why.

    Huh? and Duh! would attempt to bridge this communication gap between academic researchers and the popular press. Phrases such as “novel empirical construct” will be replaced by “which nobody has done quite this way before.” Words such as “whilst” will be banned.

    Given the current push for “evidence-based medicine,” we may well see more studies attempting to confirm the previously only suspected, providing ongoing fodder for Duh! (As editor-in-chief, I’m thinking of tapping Gordon C.S. Smith, a University of Cambridge obstetrician, who wrote a classic paper in the British Medical Journal in 2003 noting that he could find no randomized controlled trials testing whether parachutes prevent death and injuries in response to “gravitational challenge”—i.e., jumping out of aircraft.)

    Like Dr. Smith, a few academic researchers are having a bit of fun, which we will certainly encourage in Huh? Georg Steinhauser, a chemist at the Vienna University of Technology, said it was the surprise of his career that the journal Medical Hypotheses accepted his study entitled “The Nature of Navel Fluff.” Inspired by a question posed in the 2005 book, “Why Do Men Have Nipples?” Dr. Steinhauser theorized that belly-button lint is largely the result of abdominal hair channeling loose shirt fibers. To test his hypothesis, he collected 503 pieces of his own belly-button lint over three years, wearing different shirts. Then he shaved his abdominal hair and found that no more lint collected.

    “Many people nominated me for an Ig Nobel Prize,” Dr. Steinhauser wrote.



    Is the Sun Missing Its Spots?


    These photos show sunspots near solar maximum on July 19, 2000, and near solar minimum on March 18, 2009. Some global warming skeptics speculate that the Sun may be on the verge of an extended slumber.

    The Sun is still blank (mostly).

    Ever since Samuel Heinrich Schwabe, a German astronomer, first noted in 1843 that sunspots burgeon and wane over a roughly 11-year cycle, scientists have carefully watched the Sun’s activity. In the latest lull, the Sun should have reached its calmest, least pockmarked state last fall.


    Indeed, last year marked the blankest year of the Sun in the last half-century — 266 days with not a single sunspot visible from Earth. Then, in the first four months of 2009, the Sun became even more blank, the pace of sunspot formation slowing further.

    “It’s been as dead as a doornail,” David Hathaway, a solar physicist at NASA’s Marshall Space Flight Center in Huntsville, Ala., said a couple of months ago.

    The Sun perked up in June and July, with a sizeable clump of 20 sunspots earlier this month.

    Now it is blank again, consistent with expectations that this solar cycle will be smaller and calmer, and that the maximum of activity, expected to arrive in May 2013, will not be all that maximum.

    For operators of satellites and power grids, that is good news. The same roiling magnetic fields that generate sunspot blotches also accelerate a devastating rain of particles that can overload and wreck electronic equipment in orbit or on Earth.

    A panel of 12 scientists assembled by the National Oceanic and Atmospheric Administration now predicts that the May 2013 peak will average 90 sunspots during that month. That would make it the weakest solar maximum since 1928, which peaked at 78 sunspots. During an average solar maximum, the Sun is covered with an average of 120 sunspots.

    But the panel’s consensus “was not a unanimous decision,” said Douglas A. Biesecker, chairman of the panel. One member still believed the cycle would roar to life while others thought the maximum would peter out at only 70.


    These photographs show an ultraviolet view of the Sun on the same days: July 19, 2000, left, and March 18, 2009, right. Most solar physicists do not think anything odd is going on with the Sun.

    Among some global warming skeptics, there is speculation that the Sun may be on the verge of falling into an extended slumber similar to the so-called Maunder Minimum, several sunspot-scarce decades during the 17th and 18th centuries that coincided with an extended chilly period.

    Most solar physicists do not think anything that odd is going on with the Sun. With the recent burst of sunspots, “I don’t see we’re going into that,” Dr. Hathaway said last week.

    Still, something like the Dalton Minimum — two solar cycles in the early 1800s that peaked at about an average of 50 sunspots — lies in the realm of the possible, Dr. Hathaway said. (The minimums are named after scientists who helped identify them: Edward W. Maunder and John Dalton.)

    With better telescopes on the ground and a fleet of Sun-watching spacecraft, solar scientists know a lot more about the Sun than ever before. But they do not understand everything. Solar dynamo models, which seek to capture the dynamics of the magnetic field, cannot yet explain many basic questions, not even why the solar cycles average 11 years in length.

    Predicting the solar cycle is, in many ways, much like predicting the stock market. A full understanding of the forces driving solar dynamics is far out of reach, so scientists look to key indicators that correlate with future events and create models based on those.

    For example, in 2006, Dr. Hathaway looked at the magnetic fields in the polar regions of the Sun, and they were strong. During past cycles, strong polar fields at minimum grew into strong fields all over the Sun at maximum and a bounty of sunspots. Because the previous cycle had been longer than average, Dr. Hathaway thought the next one would be shorter and thus solar minimum was imminent. He predicted the new solar cycle would be a ferocious one.

    Instead, the new cycle did not arrive as quickly as Dr. Hathaway anticipated, and the polar field weakened. His revised prediction is for a smaller-than-average maximum. Last November, it looked like the new cycle was finally getting started, with the new cycle sunspots in the middle latitudes outnumbering the old sunspots of the dying cycle that are closer to the equator.

    After a minimum, solar activity usually takes off quickly, but instead the Sun returned to slumber. “There was a long lull of several months of virtually no activity, which had me worried,” Dr. Hathaway said.

    The idea that solar cycles are related to climate is hard to fit with the actual change in energy output from the Sun. From solar maximum to solar minimum, the Sun’s energy output drops a minuscule 0.1 percent.

    But the overlap of the Maunder Minimum with the Little Ice Age, when Europe experienced unusually cold weather, suggests that the solar cycle could have more subtle influences on climate.

    One possibility proposed a decade ago by Henrik Svensmark and other scientists at the Danish National Space Center in Copenhagen looks to high-energy interstellar particles known as cosmic rays. When cosmic rays slam into the atmosphere, they break apart air molecules into ions and electrons, which causes water and sulfuric acid in the air to stick together in tiny droplets. These droplets are seeds that can grow into clouds, and clouds reflect sunlight, potentially lowering temperatures.

    The Sun, the Danish scientists say, influences how many cosmic rays impinge on the atmosphere and thus the number of clouds. When the Sun is frenetic, the solar wind of charged particles it spews out increases. That expands the cocoon of magnetic fields around the solar system, deflecting some of the cosmic rays.

    But, according to the hypothesis, when the sunspots and solar winds die down, the magnetic cocoon contracts, more cosmic rays reach Earth, more clouds form, less sunlight reaches the ground, and temperatures cool.

    “I think it’s an important effect,” Dr. Svensmark said, although he agrees that carbon dioxide is a greenhouse gas that has certainly contributed to recent warming.

    Dr. Svensmark and his colleagues found a correlation between the rate of incoming cosmic rays and the coverage of low-level clouds between 1984 and 2002. They have also found that cosmic ray levels, reflected in concentrations of various isotopes, correlate well with climate extending back thousands of years.

    But other scientists found no such pattern with higher clouds, and some other observations seem inconsistent with the hypothesis.

    Terry Sloan, a cosmic ray expert at the University of Lancaster in England, said if the idea were true, one would expect the cloud-generation effect to be greatest in the polar regions where the Earth’s magnetic field tends to funnel cosmic rays.

    “You’d expect clouds to be modulated in the same way,” Dr. Sloan said. “We can’t find any such behavior.”

    Still, “I would think there could well be some effect,” he said, but he thought the effect was probably small. Dr. Sloan’s findings indicate that the cosmic rays could at most account for 20 percent of the warming of recent years.

    Even without cosmic rays, however, a 0.1 percent change in the Sun’s energy output is enough to set off El Niño- and La Niña-like events that can influence weather around the world, according to new research led by the National Center for Atmospheric Research in Boulder, Colo.

    Climate modeling showed that over the largely cloud-free areas of the Pacific Ocean, the extra heating over several years warms the water, increasing evaporation. That intensifies the tropical storms and trade winds in the eastern Pacific, and the result is cooler-than-normal waters, as in a La Niña event, the scientists reported this month in the Journal of Climate.

    In a year or two, the cool water pattern evolves into a pool of El Niño-like warm water, the scientists said.

    New instruments should provide more information for scientists to work with. A 1.7-meter telescope at the Big Bear Solar Observatory in Southern California is up and running, and one of its first photographs shows “a string of pearls,” each about 50 miles across.

    “At that scale, they can only be the fundamental fibril structure of the Sun’s magnetic field,” said Philip R. Goode, director of the solar observatory. Other telescopes may have caught hints of these tiny structures, he said, but “never so many in a row and not so clearly resolved.”

    Sun-watching spacecraft cannot match the acuity of ground-based telescopes, but they can see wavelengths that are blocked by the atmosphere — and there are never any clouds in the way. The National Aeronautics and Space Administration’s newest sun-watching spacecraft, the Solar Dynamics Observatory, which is scheduled for launching this fall, will carry an instrument that will essentially be able to take sonograms that deduce the convection flows generating the magnetic fields.

    That could help explain why strong magnetic fields sometimes coalesce into sunspots and why sometimes the strong fields remain disorganized without forming spots. The mechanics of how solar storms erupt out of a sunspot are also not fully understood.

A quiet cycle is no guarantee that cataclysmic solar storms will not occur. The largest storm ever observed occurred in 1859, during a solar cycle similar to the one now predicted.

    Back then, it scrambled telegraph wires. Today, it could knock out an expanse of the power grid from Maine south to Georgia and west to Illinois. Ten percent of the orbiting satellites would be disabled. A study by the National Academy of Sciences calculated the damage would exceed a trillion dollars.

    But no one can quite explain the current behavior or reliably predict the future.

    “We still don’t quite understand this beast,” Dr. Hathaway said. “The theories we had for how the sunspot cycle works have major problems.”



    In Search for Intelligence, a Silicon Brain Twitches

    By Replicating a Rat’s Gray Matter, Scientists Discover Simulated Cells That Self-Organize but Lack Certain Smarts

    For the last four years, Henry Markram has been building a biologically accurate artificial brain. Powered by a supercomputer, his software model closely mimics the activity of a vital section of a rat’s gray matter.

    Dubbed Blue Brain, the simulation shows some strange behavior. The artificial “cells” respond to stimuli and suddenly pulse and flash in spooky unison, a pattern that isn’t programmed but emerges spontaneously.

    “It’s the neuronal equivalent of a Mexican wave,” says Dr. Markram, referring to what happens when successive clusters of stadium spectators briefly stand and raise their arms, creating a ripple effect. Such synchronized behavior is common in flesh-and-blood brains, where it’s believed to be a basic step necessary for decision making. But when it arises in an artificial system, it’s more surprising.
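This kind of spontaneous synchronization can be reproduced, in caricature, without any biological detail at all. Below is a minimal sketch using the Kuramoto model, a standard textbook model of coupled oscillators (not anything used by Blue Brain): with weak coupling the oscillators drift incoherently, while with strong coupling they spontaneously lock into a single rhythm, much like the stadium wave.

```python
import numpy as np

def kuramoto(n=100, coupling=2.0, steps=2000, dt=0.01, seed=0):
    """Integrate the Kuramoto model: n oscillators with random natural
    frequencies, each nudged toward the phases of all the others."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)        # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
    for _ in range(steps):
        # each oscillator j feels mean_i sin(theta_i - theta_j)
        theta += dt * (omega + coupling *
                       np.mean(np.sin(theta[:, None] - theta), axis=0))
    # order parameter r: 0 = incoherent, 1 = perfectly synchronized
    return abs(np.mean(np.exp(1j * theta)))

print(kuramoto(coupling=0.1))  # weak coupling: stays incoherent (r small)
print(kuramoto(coupling=4.0))  # strong coupling: locks together (r near 1)
```

The synchronized state is not written anywhere in the equations for a single oscillator; it emerges only from the coupling, which is the sense in which Blue Brain's unison pulsing was "not programmed".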

    Blue Brain is based at the École Polytechnique Fédérale de Lausanne in Switzerland. The project hopes to tackle one of the most perplexing mysteries of neuroscience: How does human intelligence emerge? The Blue Brain scientists hope their computer model can shed light on the puzzle, and possibly even replicate intelligence in some way.

    “We’re building the brain from the bottom up, but in silicon,” says Dr. Markram, the leader of Blue Brain, which is powered by a supercomputer provided by International Business Machines Corp. “We want to understand how the brain learns, how it perceives things, how intelligence emerges.”

    Blue Brain is controversial, and its success is far from assured. Christof Koch of the California Institute of Technology, a scientist who studies consciousness, says the Swiss project provides vital data about how part of the brain works. But he says that Dr. Markram’s approach is still missing algorithms, the biological programming that yields higher-level functions.

    “You need to have a theory about how a particular circuit in the brain” can trigger complex, higher-order properties, Dr. Koch argues. “You can’t assemble ever larger data fields and shake it and say, ‘Ah, that’s how consciousness emerges.'”

    Despite the challenges, the push to understand, replicate and even re-enact higher behaviors in the brain has become one of the hottest areas of neuroscience. With the help of a $4.9 million grant from the U.S. Department of Defense, IBM is working on a separate project with five U.S. universities to build a tiny, low-power microchip that simulates the behavior of one million neurons and ten billion synapses. The goal, says IBM, is to develop brainy computers that can better predict the behavior of complex systems, such as weather or the financial markets.

The Chinese government has provided about $1.5 million to a team at Xiamen University to create artificial-brain robots with microcircuits that evolve, learn and adapt to real-world situations. Similarly, Jeff Krichmar and colleagues at the University of California, Irvine, have built an artificial-brain robot that learns to sharpen its visual perception as it moves around a lab environment, another example of emergent, spontaneously self-organizing behavior. And researchers at Sensopac, a project backed by a grant of €6.7 million ($9.3 million) from the European Union, have built part of an artificial mouse brain.


    The scientists behind Blue Brain hope to have a virtual human brain functioning in ten years — a lengthy time period that underscores the scientific challenge. The human brain has 100 billion neurons that send electrical signals to each other via a network of at least 100 trillion connections, or synapses. How could this dizzying complexity ever be recreated in a virtual model?

    Dr. Markram has adopted a systematic, if painstaking approach. He decided to work out the blueprint of its wiring and then use that map to rebuild the brain in an artificial form. He focused on a rat’s neocortical column, or NCC, an elementary building block of the brain’s neocortex, which is responsible for higher functions and thought. In a rat’s case, that includes planning to obtain food.

A rat’s NCC, comprising about 10,000 neurons and their 10 billion connections, functions much like a computer microprocessor. All mammals have NCCs, and the ones in humans aren’t all that different from the ones in rats. However, humans have far more NCCs, which means far greater brain power. Dr. Markram figured that if a rat simulation correctly mimicked activity in a real rat’s brain, he could use the same model as a road map for simulating the human brain.

    Dr. Markram began by collecting detailed information about the rat’s NCC, down to the level of genes, proteins, molecules and the electrical signals that connect one neuron to another. These complex relationships were then turned into millions of equations, written in software. He then recorded real-world data — the strength and path of each electrical signal — directly from rat brains to test the accuracy of the software.

    At the Lausanne lab one recent afternoon, a pink sliver of rat brain sat in a beaker containing a colorless liquid. The neurons in the brain slice were still alive and actively communicating with each other. Nearby, a modified microscope recorded some of this inner activity in another brain slice. “We’re intercepting the electro-chemical messages” in the cells, then testing the software against it for accuracy, said Dr. Markram.

    The rat’s NCC has 10,000 neurons, and it takes the power of one desktop computer to mimic the behavior of a single neuron. To model the entire NCC, Dr. Markram relies on an IBM computer that can perform 22.8 trillion operations a second. This enables the simulation to be rendered as a three-dimensional object. Thus, when Blue Brain is running, its deepest inner workings are seen in astonishing detail, in the form of a 3-D simulation that unfolds on a computer screen.

    In a darkened room, Blue Brain displays a virtual NCC as a column-like structure, its blue color signifying a state of rest. When zapped by a simulated electrical current, the neurons start to signal to each other and their wiring progressively sparks to life different colors. Tests indicate the same areas light up in the model as do in a real rat’s brain, suggesting that Blue Brain is accurate, says Dr. Markram.

    More complex things start to happen. First there’s a burst of red, then white, then red again, as the NCC’s wiring fills up with a cascade of myriad signals. There are so many connections, the NCC looks like an incredibly dense tangle of undergrowth.

    Then, two successive waves of yellow color suddenly race through Blue Brain. It’s a sign that the neurons have synchronized their behavior on their own. “The cells start to take on a life of their own,” says Dr. Markram. “That’s what your brain is [and when such patterns become sophisticated] it becomes your personality.”

    If Blue Brain ever gets sophisticated enough to closely mimic the human brain, will it exhibit consciousness? Says Dr. Markram: “If it does emerge, we’ll be able to tell you how it emerged. If it doesn’t, we’ll know that it’s the result of more than just 100 billion neurons interacting.”



    ‘Mars Is the Planet of Our Destiny’

    Four decades after the first moon landing, NASA is setting its sights on Mars. NASA manager Jesco von Puttkamer talks to SPIEGEL about the lure of the red planet — and its potential as an alternative base for human life.


Humanity’s future home? An image obtained from NASA on March 30, 2009 shows light-toned layered deposits on a southern mid-latitude crater floor on Mars, captured by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA’s Mars Reconnaissance Orbiter.

    SPIEGEL: Mr. Puttkamer, the first person set foot on the moon exactly 40 years ago. Why does NASA want to return to that barren, lifeless place?

Puttkamer: The Apollo astronauts were only able to spend a couple of days up there — that was just a quick visit. When we fly there again in 2019-2020, we’ll stay much longer. The four-person team will gain experience for the real long-term goal — the journey to Mars. We want to build a lunar station where people could live for weeks or even months, as preparation for the larger Mars project.

SPIEGEL: So NASA is not preparing to populate the moon?

    Puttkamer: No, the lunar station won’t be capable of continuous operation 365 days a year, since we’ll need to supply it constantly with air, water and food from Earth, and that would be insanely expensive. But the living conditions on Mars are actually very different. There are many natural resources there, and our probe just recently discovered traces that could originate from liquid water. It’s also been known for a long time that water in solid form — in other words, ice — exists there in large quantities.

    SPIEGEL: Will America fly to the moon alone again?

    Puttkamer: Certainly not — and especially not when we want to reach more distant destinations. The age of going it alone is over. The Apollo project took place during the Cold War, when we were involved in a dramatic race with the Soviets. But a lot has changed since then. We’ve moved away from that competitive way of thinking, and everyone is invited to take part in future missions. It functions that way already on the International Space Station, where 16 countries work together in an exemplary way. We’ve created a kind of United Nations in space.

    SPIEGEL: Yet the United States is going to build the new moon rockets alone again.

Puttkamer: Unfortunately it can’t be done any other way. After our shuttle fleet is withdrawn from service next year, we’re going to need a new space vehicle of our own as quickly as possible. To that end, we needed to commission industry to develop the new Ares rocket and the accompanying Orion spaceship as soon as possible. Then there’s also the Altair lunar lander. But in any case, the European Space Agency is already very interested in helping with the construction of infrastructure on the moon later. Our Russian partners would definitely participate as well. And I personally would be very happy to also see Germany involved.

    SPIEGEL: Aren’t you worried that enthusiasm for conquering the moon will drop off again just as quickly as it did after the Apollo flights?

    Puttkamer: That’s a danger we certainly can’t dismiss. Back then, we were definitely also a victim of our own success. The public got bored quite quickly because the Apollo flights proceeded with such breathtaking perfection. We launched a total of 13 Saturn V rockets, and almost every time it went like clockwork. That means the sense of adventure faded quickly among the general public. So that means we now face the challenge of getting people excited about lunar flights again. And we have to explain to the skeptics that the moon is the most important stopover on the way to Mars. If everything goes well, we could head for the red planet in just 25 years. The future Mars astronauts have already been born — they’re already little rugrats running around among us.

    SPIEGEL: Why is it so important to you to send people to Mars?

    Puttkamer: Mars is the planet of our destiny. There’s the well-founded hope that we might find traces of extraterrestrial life there for the first time, even if it’s only fossilized microbes. A human scientist who can take and analyze samples on the ground is much better suited to this search than a robot, no matter how sophisticated it is. But the most important thing is the fact that people will one day set foot on Mars and populate it. The red desert planet Mars, provided it doesn’t have any life of its own, could become a green Mars through so-called terraforming — in other words, the active transformation (of its environment). If that’s successful, humankind will have created itself a second home, just in case an asteroid impact or other major catastrophe wipes out life on Earth. Only through having Mars as a reserve planet will the human race really become immortal.

    SPIEGEL: The trip to a desert of a planet, millions of kilometers away, could end up as a journey of no return. Do you really believe the spacefaring nations will take this risk?

    Puttkamer: We are unfortunately lacking in Apollo-era daring these days, no question. When the German-born rocket scientist Wernher von Braun got President John F. Kennedy fired up about lunar flight in 1961, no one knew if the adventure would be successful or if we could bring the astronauts safely to the moon and back. A new TV documentary on the anniversary of the first moon landing shows the sense of excitement that prevailed then very well. Today, however, politicians, managers and engineers shy away from the risk, because they’re afraid they’ll be the ones crucified if something goes wrong. Yet if we want to venture forth in the universe, we need to overcome our exaggerated concerns about safety. If I could take a warm sweater with me, I’d board a Mars spaceship immediately.


    Jesco von Puttkamer, 75, is the program manager for the International Space Station and future manned space flight at NASA’s Washington D.C. headquarters. He studied engineering in Aachen, Germany before moving to the US in 1962 to help build the Saturn V moon rocket at the personal invitation of the legendary rocket scientist Wernher von Braun.



    A glimpse of ancient dying stars


    Subtracting the rest of the galaxy from the images revealed the supernovae

    Astronomers have revealed faint images of the two oldest and most distant supernovae to be discovered to date.

    When a massive star effectively runs out of nuclear fuel, it explodes in a supernova – hurling much of its material into space.

    The scientists described in the journal Nature how they gathered images of the exploding stars by monitoring the same galaxies over five years.

    They used multiple images to pick out supernovae in the distant Universe.

    The furthest two supernovae the team found occurred about 11 billion years ago.

    Mark Sullivan, an astronomer from the University of Oxford in the UK, was one of the authors of the study. He explained that these stars exploded about 2.5 billion years after the Big Bang.

    “As a point of reference, the universe is currently about 13.5 billion years old,” said Dr Sullivan.

    The team gathered their data using the Canada-France-Hawaii telescope on Mauna Kea in Hawaii.

    This method will allow us… to witness some of the very first stars ever
    Jeff Cooke
    University of California, Irvine

“We took all of the data in and combined it together,” said Dr Sullivan. “So instead of just using data taken in a single night, which would typically be a single hour, we had several hours’ worth of (images).”

    The faint light from the aftermath of the huge stellar explosions was visible for several months, so the scientists were able to isolate it from the blur of the galaxies in which they occurred.

    “What we’re looking for are things that were there one year, but which weren’t there the next,” explained Dr Sullivan.

“You see an image of the galaxy in which a supernova exploded. When you subtract the two years’ data, the galaxy disappears, because it hasn’t changed. So you’re just left with things that have changed – in this case that’s the supernova.”
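The subtraction Dr Sullivan describes can be sketched in a few lines. The example below uses entirely synthetic data (a smooth Gaussian blob standing in for the galaxy, and an injected point source for the supernova); real difference-imaging pipelines must also align, flux-scale and PSF-match the two epochs before subtracting.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_galaxy(size=64):
    """Static background: a 2-D Gaussian blob standing in for a galaxy."""
    y, x = np.mgrid[:size, :size]
    return 100.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))

galaxy = make_galaxy()
year1 = galaxy + rng.normal(0, 1, galaxy.shape)  # reference epoch (noise only)
year2 = galaxy + rng.normal(0, 1, galaxy.shape)  # later epoch...
year2[20, 40] += 50.0                            # ...with a new transient

diff = year2 - year1                             # unchanged galaxy cancels out
peak = np.unravel_index(np.argmax(diff), diff.shape)
print(peak)  # (20, 40): only the injected transient survives the subtraction
```

Because the galaxy contributes identically to both epochs, it vanishes in the difference image, leaving the transient standing far above the (slightly increased) noise floor.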

    Dr Sullivan said that this new technique opened up exciting possibilities for future experiments.

    “We have shown that this is the way in which we can find the most distant cosmic explosions,” he told BBC News.

    Seeing stars

    Ancient supernovae can reveal important clues about the birth of the Universe.

    “Elements such as iron, calcium and nickel are manufactured by these massive stars,” explained Jeff Cooke, from the University of California, Irvine, who was also involved in the study.

    “Upon their explosive death, they eject this material into space and ‘pollute’ their environments. This material then cools and can form recycled stars with disks of material around them that can then form planets.”

    By “looking back 11 billion years into the past”, Dr Cooke said that these new discoveries will help astronomers to understand exactly how this process works.

    “Moreover, the new method that we use here will allow us to observe as far back as 12 and a half billion years to witness some of the very first stars ever.”



    Most complete Earth map published

    The most complete terrain map of the Earth’s surface has been published.


    An image of Death Valley – the lowest, driest, and hottest location in North America – composed of a simulated natural color image overlayed with digital topography data from the ASTER Global Digital Elevation Model.

    The data, comprising 1.3 million images, come from a collaboration between the US space agency Nasa and the Japanese trade ministry.

    The images were taken by Japan’s Advanced Spaceborne Thermal Emission and Reflection Radiometer (Aster) aboard the Terra satellite.

    The resulting Global Digital Elevation Map covers 99% of the Earth’s surface, and will be free to download and use.

    The Terra satellite, dedicated to Earth monitoring missions, has shed light on issues ranging from algal blooms to volcano eruptions.

    For the Aster measurements, local elevation was mapped with each point just 30m apart.

    “This is the most complete, consistent global digital elevation data yet made available to the world,” said Woody Turner, Nasa programme scientist on the Aster mission.

    “This unique global set of data will serve users and researchers from a wide array of disciplines that need elevation and terrain information.”

Previously, the most complete such topographic data came from Nasa’s Shuttle Radar Topography Mission, which covered 80% of the Earth’s surface. However, that mission’s results were less accurate in steep terrain and in some deserts.

    Nasa is now working to combine those data with the new Aster observations to further improve on the global map.
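A minimal sketch of what combining two such datasets might look like, assuming two co-registered elevation rasters in which data voids are marked as NaN. The `merge_dems` helper and the toy arrays are hypothetical illustrations; NASA’s actual processing is far more elaborate.

```python
import numpy as np

def merge_dems(primary, secondary):
    """Fill the NaN voids of a primary DEM (e.g. SRTM-like) with values
    from a secondary DEM (e.g. ASTER-like), after removing any constant
    vertical offset between them. Assumes both share the same grid."""
    both = ~np.isnan(primary) & ~np.isnan(secondary)
    # estimate a constant vertical bias where both datasets have data
    bias = np.median(primary[both] - secondary[both]) if both.any() else 0.0
    merged = primary.copy()
    voids = np.isnan(primary)
    merged[voids] = secondary[voids] + bias
    return merged

srtm = np.array([[100., np.nan], [102., 103.]])   # has one void cell
aster = np.array([[98., 99.], [100., 101.]])      # complete, offset by -2 m
print(merge_dems(srtm, aster))  # void filled with 99 + bias of 2 -> 101
```

Estimating the bias from the overlap before filling keeps the patched cells consistent with the surrounding elevations, a common first step when blending DEMs with different vertical references.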



    Solved: riddle of Siberia’s flattened forest


    Trees flattened by the explosion at Tunguska – the blast was equivalent to a 20-megaton nuclear bomb.

     A century on, scientists say massive explosion was caused by comet collision

    A massive explosion that flattened an entire forest in northern Russia over an area of 800 square miles more than a century ago was almost certainly caused by the Earth colliding with a comet, according to a study by rocket scientists in the United States.

    The explosion in 1908 occurred in the sky at a remote location in Siberia near the Tunguska river. It is estimated to have been equivalent to a 20-megaton nuclear bomb, which would have decimated everything within the M25 had it occurred over London rather than a largely uninhabited region.

    Tunguska has long been the subject of intense speculation, with suggested causes ranging from the release of a gigantic cloud of explosive methane gas from underground, to a collision with anti-matter particles from deep space, or even the crash of a visiting extra-terrestrial spacecraft.

    One of the most likely explanations, however, had been that the Earth was hit by a piece of space rock – but a number of scientific expeditions to the area failed to find the impact crater, suggesting that if such a meteoroid had struck the Earth in 1908 it must have exploded high enough in the atmosphere to have disintegrated before reaching the ground.

    Now a team of researchers studying the plumes of water vapour created by the rocket engines of the Space Shuttle believe they have found the crucial evidence in favour of another theory: a collision with the icy heart of a comet.

    This would have released huge volumes of water vapour at very high altitude, creating highly reflective clouds that may explain why the sky was lit up for days after the collision, with people as far away as London saying that they could read newspapers outdoors at midnight, the scientists said.

“It’s almost like putting together a 100-year-old murder mystery,” said Michael Kelley, a professor of engineering at Cornell University in New York. “The evidence is pretty strong that the Earth was hit by a comet in 1908.”

    The dramatic illumination of the night s