Opening Up A Colorful Cosmic Jewel Box


The FORS1 instrument on the ESO Very Large Telescope (VLT) at ESO’s Paranal Observatory was used to take this exquisitely sharp close-up view of the colorful Jewel Box cluster, NGC 4755. The telescope’s huge mirror allowed very short exposure times: just 2.6 seconds through a blue filter (B), 1.3 seconds through a yellow/green filter (V) and 1.3 seconds through a red filter (R). The field of view spans about seven arcminutes.

Star clusters are among the most visually alluring and astrophysically fascinating objects in the sky. One of the most spectacular nestles deep in the southern skies near the Southern Cross in the constellation of Crux.

The Kappa Crucis Cluster, also known as NGC 4755 or simply the “Jewel Box”, is just bright enough to be seen with the unaided eye. It was given its nickname by the English astronomer John Herschel in the 1830s because the striking colour contrasts of its pale blue and orange stars seen through a telescope reminded Herschel of a piece of exotic jewellery.

Open clusters [1] such as NGC 4755 typically contain anything from a few to thousands of stars that are loosely bound together by gravity. Because the stars all formed together from the same cloud of gas and dust, their ages and chemical makeup are similar, which makes them ideal laboratories for studying how stars evolve.

The position of the cluster amongst the rich star fields and dust clouds of the southern Milky Way is shown in the very wide field view generated from the Digitized Sky Survey 2 data. This image also includes one of the stars of the Southern Cross as well as part of the huge dark cloud of the Coal Sack [2].

A new image taken with the Wide Field Imager (WFI) on the MPG/ESO 2.2-metre telescope at ESO’s La Silla Observatory in Chile shows the cluster and its rich surroundings in all their multicoloured glory. The large field of view of the WFI shows a vast number of stars. Many are located behind the dusty clouds of the Milky Way and therefore appear red [3].

The FORS1 instrument on the ESO Very Large Telescope (VLT) allows a much closer look at the cluster itself. The telescope’s huge mirror and exquisite image quality have resulted in a brand-new, very sharp view despite a total exposure time of just 5 seconds. This new image is one of the best ever taken of this cluster from the ground.

The Jewel Box may be visually colourful in images taken on Earth, but observing from space allows the NASA/ESA Hubble Space Telescope to capture light of shorter wavelengths than can be seen by telescopes on the ground. This new Hubble image of the core of the cluster represents the first comprehensive far ultraviolet to near-infrared image of an open galactic cluster. It was created from images taken through seven filters, allowing viewers to see details never seen before. It was taken near the end of the long life of the Wide Field Planetary Camera 2 ― Hubble’s workhorse camera up until the recent Servicing Mission, when it was removed and brought back to Earth. Several very bright, pale blue supergiant stars, a solitary ruby-red supergiant and a variety of other brilliantly coloured stars are visible in the Hubble image, as well as many much fainter ones. The intriguing colours of many of the stars result from their differing intensities at different ultraviolet wavelengths.

The huge variety in brightness of the stars in the cluster exists because the brighter stars are 15 to 20 times the mass of the Sun, while the dimmest stars in the Hubble image are less than half the mass of the Sun. More massive stars shine much more brilliantly. They also age faster and make the transition to giant stars much more quickly than their faint, less-massive siblings.
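The link between mass, brightness and ageing can be made rough and quantitative with the textbook mass–luminosity scaling (luminosity roughly proportional to mass to the 3.5 power, so main-sequence lifetime scales roughly as mass to the −2.5 power). The scaling exponent and the ~10-billion-year solar lifetime below are standard approximations, not figures from the article:

```python
# Rough main-sequence lifetime from the mass-luminosity scaling:
# L ~ M**3.5 (solar units), so lifetime ~ fuel / luminosity ~ M / L ~ M**-2.5.

SUN_LIFETIME_GYR = 10.0  # approximate main-sequence lifetime of the Sun

def ms_lifetime_gyr(mass_solar):
    """Approximate main-sequence lifetime (Gyr) for a star of given mass in solar masses."""
    return SUN_LIFETIME_GYR * mass_solar ** -2.5

for m in (0.5, 1.0, 15.0, 20.0):
    print(f"{m:5.1f} solar masses -> {ms_lifetime_gyr(m):10.4f} Gyr")
```

On this crude estimate a 20-solar-mass star exhausts its core hydrogen in only a few million years while a half-solar-mass star lasts tens of billions, which is consistent with the article's picture of the cluster's brightest members already turning into giants at an age of only 16 million years.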

The Jewel Box cluster is about 6400 light-years away and is approximately 16 million years old.


[1] Open, or galactic, star clusters are not to be confused with globular clusters ― huge balls of tens of thousands of ancient stars in orbit around our galaxy and others. It seems that most stars, including our Sun, formed in open clusters.

[2] The Coal Sack is a dark nebula in the Southern Hemisphere, near the Southern Cross, that can be seen with the unaided eye. A dark nebula is not the complete absence of light, but an interstellar cloud of thick dust that obscures most background light in the visible.

[3] If the light from a distant star passes through dust clouds in space the blue light is scattered and absorbed more than the red. As a result the starlight looks redder when it arrives on Earth. The same effect creates the glorious red colours of terrestrial sunsets.
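The wavelength dependence in note [3] can be illustrated with a toy calculation. For Rayleigh scattering, the regime behind red sunsets, the scattering efficiency goes as the inverse fourth power of wavelength; interstellar dust grains actually follow a shallower law (very roughly inverse first power), but the sense is the same, with blue light removed first. The wavelengths below are illustrative choices, not from the article:

```python
# Toy illustration of reddening: in the Rayleigh regime, scattering
# strength scales as 1/wavelength**4, so shorter (bluer) wavelengths
# are scattered out of the beam much more strongly than redder ones.

def rayleigh_ratio(lambda_blue_nm, lambda_red_nm):
    """How many times more strongly blue light is scattered than red (Rayleigh regime)."""
    return (lambda_red_nm / lambda_blue_nm) ** 4

# Blue light near 450 nm vs red light near 650 nm:
print(rayleigh_ratio(450, 650))
```

The ratio comes out a bit above four, so in this regime blue photons are scattered several times more readily than red ones, which is why both sunsets and stars seen through dust look red.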


Full article and photo:

Gamma-ray Photon Race Ends In Dead Heat; Einstein Wins This Round


In this illustration, one photon (purple) carries a million times the energy of another (yellow). Some theorists predict travel delays for higher-energy photons, which interact more strongly with the proposed frothy nature of space-time. Yet Fermi data on two photons from a gamma-ray burst fail to show this effect, eliminating some approaches to a new theory of gravity.

Racing across the universe for the last 7.3 billion years, two gamma-ray photons arrived at NASA’s orbiting Fermi Gamma-ray Space Telescope within nine-tenths of a second of one another. The dead-heat finish may stoke the fires of debate among physicists over Einstein’s special theory of relativity, because one of the photons possessed a million times more energy than the other.

For Einstein’s theory, that’s no problem. In his vision of the structure of space and time, unified as space-time, all forms of electromagnetic radiation — gamma rays, radio waves, infrared, visible light and X-rays — are reckoned to travel through the vacuum of space at the same speed, no matter how energetic. But in some of the new theories of gravity, space-time is considered to have a “shifting, frothy structure” when viewed at a scale trillions of times smaller than an electron. Some of those models predict that such a foamy texture ought to slow down the higher-energy gamma-ray photon relative to the lower energy one. Clearly, it did not.

Even in the world of high-energy particle physics, where a minute deviation can sometimes make a massive difference, nine-tenths of a second spread over more than 7 billion years is so small that the difference is likely due to the detailed processes of the gamma-ray burst rather than confirming any modification of Einstein’s ideas.

“This measurement eliminates any approach to a new theory of gravity that predicts a strong energy-dependent change in the speed of light,” said Peter Michelson, professor of physics at Stanford University and principal investigator for Fermi’s Large Area Telescope (LAT), which detected the gamma-ray photons on May 10. “To one part in 100 million billion, these two photons traveled at the same speed. Einstein still rules.”
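Michelson's quoted limit follows from simple arithmetic: if both photons left the burst together and arrived within 0.9 seconds after 7.3 billion years of flight, the fractional difference in their speeds can be at most roughly the arrival spread divided by the travel time. A back-of-envelope version:

```python
# Back-of-envelope bound on the fractional speed difference between the
# two photons: (arrival-time spread) / (total travel time).

SECONDS_PER_YEAR = 3.156e7  # approximately one Julian year

travel_time_s = 7.3e9 * SECONDS_PER_YEAR   # 7.3 billion years in seconds
max_fractional_diff = 0.9 / travel_time_s  # 0.9 s arrival spread
print(f"|dv/v| < {max_fractional_diff:.1e}")
```

This gives a few parts in 10^18, the same order of magnitude as the "one part in 100 million billion" (one part in 10^17) quoted in the article.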

Michelson is one of the authors of a paper that details the research, published online Oct. 28 by Nature.

Physicists have yearned for years to develop a unifying theory of how the universe works. But no one has been able to come up with one that brings all four of the fundamental forces in the universe into one tent. The Standard Model of particle physics, which was well developed by the end of the 1970s, is considered to have succeeded in unifying three of the four: electromagnetism; the “strong force” (which holds nuclei together inside atoms); and the “weak force” (which is responsible for radioactive decay, among other things). But in the Standard Model, gravity has always been the odd man out, never quite fitting in. Though a host of theories have been advanced, none has proved successful.

But by the same token, Einstein’s theories of relativity also fail to unify the four forces.

“Physicists would like to replace Einstein’s vision of gravity — as expressed in his relativity theories — with something that handles all fundamental forces,” Michelson said. “There are many ideas, but few ways to test them.”

The two photons provided rare experimental evidence about the structure of space-time. Whether the evidence will prove sufficient to settle any debates remains to be seen.

The photons were launched on their pan-galactic marathon during a short gamma-ray burst, an outpouring of radiation likely generated by the collision of two neutron stars, the densest known objects in the universe.

A neutron star is created when a massive star collapses in on itself in an explosion called a supernova. The neutron star forms in the core as matter is compressed to the point where it is typically about 10 miles in diameter, yet contains more mass than our Sun. When two such dense objects collide, the energy released in a gamma-ray burst can be millions of times brighter than the entire Milky Way, albeit only briefly. The burst (designated GRB 090510) that sent the two photons on their way lasted 2.1 seconds.

NASA’s Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership, developed in collaboration with the U.S. Department of Energy, along with important contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States.


Full article and photo:

Galileo’s Notebooks May Reveal Secrets Of New Planet

Galileo knew he had discovered a new planet in 1613, 234 years before its official discovery date, according to a new theory by a University of Melbourne physicist.

Professor David Jamieson, Head of the School of Physics, is investigating the notebooks of Galileo from 400 years ago and believes that buried in the notations is the evidence that he discovered a new planet that we now know as Neptune.

A hypothesis of how to look for this evidence has been published in the journal Australian Physics and was presented at the first lecture in the 2009 July Lectures in Physics program at the University of Melbourne at the beginning of July.

If correct, the discovery would be the first new planet identified by humanity since deep antiquity.

Galileo was observing the moons of Jupiter in the years 1612 and 1613 and recorded his observations in his notebooks. Over several nights he also recorded the position of a nearby star which does not appear in any modern star catalogue.

“It has been known for several decades that this unknown star was actually the planet Neptune. Computer simulations show the precision of his observations revealing that Neptune would have looked just like a faint star almost exactly where Galileo observed it,” Professor Jamieson says.

“But a planet is different to a star because planets orbit the Sun and move through the sky relative to the stars. It is remarkable that on the night of January 28 in 1613 Galileo noted that the ‘star’ we now know is the planet Neptune appeared to have moved relative to an actual nearby star.”

There is also a mysterious unlabeled black dot in his earlier observations of January 6, 1613, which is in the right position to be Neptune.

“I believe this dot could reveal he went back in his notes to record where he saw Neptune earlier when it was even closer to Jupiter but had not previously attracted his attention because of its unremarkable star-like appearance.”

If the mysterious black dot on January 6 was actually recorded on January 28, Professor Jamieson proposes this would prove that Galileo believed he may have discovered a new planet.

By using the expertise of trace element analysts from the University of Florence, who have previously analyzed inks in Galileo’s manuscripts, dating the unlabelled dot in his notebook may be possible. This analysis may be conducted in October this year.

“Galileo may indeed have formed the hypothesis that he had seen a new planet which had moved right across the field of view during his observations of Jupiter over the month of January 1613,” Professor Jamieson says.

“If this is correct Galileo observed Neptune 234 years before its official discovery.”

But there could be an even more interesting possibility still buried in Galileo’s notes and letters.

“Galileo was in the habit of sending a scrambled sentence, an anagram, to his colleagues to establish his priority for the sensational discoveries he made with his new telescope. He did this when he discovered the phases of Venus and the rings of Saturn. So perhaps somewhere he wrote an as-yet undecoded anagram that reveals he knew he discovered a new planet,” Professor Jamieson speculates.

Professor Jamieson presented at the first of a series of lectures in July, aimed at giving an insight into fundamental questions in physics to celebrate the 2009 International Year of Astronomy.


Full article:

Geologists Point To Outer Space As Source Of The Earth’s Mineral Riches


New research suggests that the wealth of some minerals that lie in the rock beneath the Earth’s surface may be extraterrestrial in origin.

According to a new study by geologists at the University of Toronto and the University of Maryland, the wealth of some minerals that lie in the rock beneath the Earth’s surface may be extraterrestrial in origin.

“The extreme temperature at which the Earth’s core formed more than four billion years ago would have completely stripped any precious metals from the rocky crust and deposited them in the core,” says James Brenan of the Department of Geology at the University of Toronto and co-author of the study published in Nature Geoscience on October 18.

“So, the next question is why are there detectable, even mineable, concentrations of precious metals such as platinum and rhodium in the rock portion of the Earth today? Our results indicate that they could not have ended up there by any known internal process, and instead must have been added back, likely by a ‘rain’ of extraterrestrial debris, such as comets and meteorites.”

Geologists have long speculated that four and a half billion years ago, the Earth was a cold mass of rock mixed with iron metal which was melted by the heat generated from the impact of massive planet-sized objects, allowing the iron to separate from the rock and form the Earth’s core. Brenan and colleague William McDonough of the University of Maryland recreated the extreme pressure and temperature of this process, subjecting a similar mixture to temperatures above 2,000 degrees Celsius, and measured the composition of the resulting rock and iron.

Because the rock became void of the metal in the process, the scientists speculate that the same would have occurred when the Earth was formed, and that some sort of external source – such as a rain of extraterrestrial material – contributed to the presence of some precious metals in Earth’s outer rocky portion today.

“The notion of extraterrestrial rain may also explain another mystery, which is how the rock portion of the Earth came to have hydrogen, carbon and phosphorus – the essential components for life, which were likely lost during Earth’s violent beginning.”

The research was funded with the support of the Natural Sciences and Engineering Research Council of Canada and a NASA Cosmochemistry grant.


Full article and photo:

Color of Fabric Matters When Protecting Skin From Ultraviolet Rays

It takes more than sunscreen to keep the sun’s ultraviolet rays from harming your skin. The type of clothing you wear can offer protection, too — or not. Studies have shown that some lightweight fabrics do not provide enough UV protection.

But it is not just the type of fiber and the weave of the fabric that matters, but also the color. Ascensión Riva of the Polytechnic University of Catalonia and colleagues have addressed the color issue, studying the effects of different dyes on the UV protection provided by lightweight woven cottons.

The researchers chose three fabrics, not dyed, with different initial levels of UV protection based on the weave and other factors. Then they dyed them in varying shades of blue, red and yellow and measured how much UV radiation was absorbed and transmitted.

They found that red and blue shades performed better than yellow, particularly in blocking UV-B rays, which are the most harmful. Protection increased as the shades were made darker and more intense. And if the initial protection level of the fabric was higher, the darker shades offered even greater improvement.

The researchers say the findings, reported in Industrial and Engineering Chemistry Research, should help fabric and garment manufacturers optimize their products for UV protection.

Henry Fountain, New York Times


Full article:

Scientists announce planet bounty

Gliese 667C (ESO/L. Calçada)

Artist’s impression: Astronomers are finding smaller and smaller planets
Astronomers have announced a haul of planets found beyond our Solar System.

The 32 “exoplanets” ranged in size from five times the mass of Earth to 5-10 times the mass of Jupiter, the researchers said.

They were found using a very sensitive instrument on a 3.6m telescope at the European Southern Observatory’s La Silla facility in Chile.

The discovery is exciting because it suggests that low-mass planets could be numerous in our galaxy.

“From [our] results, we know now that at least 40% of solar-type stars have low-mass planets. This is really important because it means that low-mass planets are everywhere, basically,” explained Stephane Udry from Geneva University, Switzerland.

“What’s very interesting is that models are predicting them, and we are finding them; and furthermore the models are predicting even more lower-mass planets like the Earth.”

Size selection

The discovery now takes the number of known exoplanets – planets outside our Solar System – to more than 400.

These have been identified using a range of astronomical techniques and telescopes, but this latest group was spotted as a result of observations made with the Harps spectrometer at La Silla.

The High Accuracy Radial Velocity Planet Searcher instrument employs what is sometimes called the “wobble technique”.

This is an indirect method of detection that infers the existence of orbiting planets from the way their gravity makes a parent star appear to twitch in its motion across the sky.
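The scale of the "twitch" Harps must detect can be sketched from momentum balance: the star and planet orbit their common centre of mass, so the star's reflex speed is roughly the planet-to-star mass ratio times the planet's orbital speed. The Sun–Jupiter values below are standard reference numbers used for illustration, not figures from the article:

```python
# Sketch of the radial-velocity "wobble" scale, by momentum balance:
#   v_star ~ (M_planet / M_star) * v_planet
import math

MASS_RATIO = 1.0 / 1047.0        # Jupiter mass / Sun mass
A_JUPITER_M = 7.785e11           # Jupiter's semi-major axis, metres
P_JUPITER_S = 11.86 * 3.156e7    # Jupiter's orbital period, seconds

v_planet = 2 * math.pi * A_JUPITER_M / P_JUPITER_S  # ~13 km/s orbital speed
v_star = MASS_RATIO * v_planet                      # the Sun's reflex speed
print(f"Sun's reflex speed due to Jupiter: {v_star:.1f} m/s")
```

The answer is roughly 12–13 metres per second for Jupiter, and correspondingly far smaller for Earth-mass planets, which is why instruments like Harps push for metre-per-second precision and below.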

Astronomy is working right at the limits of the current technology capable of detecting exoplanets and most of those found so far are Jupiter-scale and bigger.

Harps, however, has focussed its efforts on small, relatively cool stars – so-called M-class stars – in the hope of finding low-mass planets, ones most likely to resemble the rocky planets in our own Solar System.

Of the 28 planets known with masses below 20 Earth-masses, Harps has now identified 24 of them – and six of those are in the newly announced group.

“We have two candidates at five Earth-masses and two at six Earth-masses,” Professor Udry told BBC News.

Combined approach

Harps has previously identified an object which is only twice as massive as the Earth (announced in April).

Scientists are confident this planet harbours no life, though, because it orbits so close to its parent star that surface temperatures would be scorching.

In revealing the new collection of planets on Monday, the Harps team-members said they expected to confirm the existence of another batch, similar in number, during the coming six months.

The ultimate goal is to find a rocky planet in a star’s “habitable zone”, an orbit where temperatures are in a range that would support the presence of liquid water.

Scientists believe the introduction of newer, more sensitive technologies will allow them to identify such an object within just a few years from now.

The US space agency (Nasa) recently launched its Kepler telescope.

This hopes to find Earth-size planets by looking for the tiny dip in light coming from a star as an object crosses its face as viewed from Earth.

To properly characterise a planet, different observing techniques are required. The Kepler “transit” method reveals the diameter of an object, but a Harps-like measurement is needed to resolve the mass.
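Combining the two measurements is what makes a planet physically interpretable: the transit gives a radius, the wobble gives a mass, and together they fix a bulk density that separates rocky worlds from gaseous ones. A minimal sketch, using Earth's mass and radius purely as illustrative inputs:

```python
# Why both techniques are needed: radius (transit) + mass (radial velocity)
# together give the mean density that distinguishes rock from gas.
import math

def bulk_density(mass_kg, radius_m):
    """Mean density of a sphere from its mass and radius."""
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m ** 3)

rho_earth = bulk_density(5.97e24, 6.371e6)  # Earth's mass and mean radius
print(f"{rho_earth:.0f} kg/m^3")
```

Earth comes out around 5,500 kg per cubic metre; gas giants like Jupiter sit near 1,300, so even a rough density immediately suggests what kind of planet has been found.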


Full article and photo:

LHC gets colder than deep space

Atlas (Cern/C. Marcelloni)

The giant Atlas detector will search for hints of the elusive Higgs boson particle

The Large Hadron Collider (LHC) experiment has once again become one of the coldest places in the Universe.

All eight sectors of the LHC have now been cooled to their operating temperature of 1.9 kelvin (-271C; -456F) – colder than deep space.

The large magnets that bend particle beams around the LHC are kept at this frigid temperature using liquid helium.

The magnets are arranged end-to-end in a 27km-long circular tunnel straddling the Franco-Swiss border.

The cool-down is an important milestone ahead of the collider’s scheduled re-start in the latter half of November.

The LHC has been shut down since 19 September 2008, when a magnet problem called a “quench” caused a tonne of liquid helium to leak into the LHC tunnel.

After the accident, the particle accelerator had to be warmed up so that repairs could take place.

The most powerful physics experiment ever built, the Large Hadron Collider will recreate the conditions just after the Big Bang. It is operated by the European Organization for Nuclear Research (Cern), based in Geneva.

Two beams of protons will be fired down pipes running through the magnets. These beams will travel in opposite directions around the main “ring” at close to the speed of light.

At allotted points around the tunnel, the proton beams cross paths, smashing into one another with cataclysmic energy. Scientists hope to see new particles in the debris of these collisions, revealing fundamental new insights into the nature of the cosmos.

Awesome energy

The operating temperature of the LHC is just a shade above “absolute zero” (-273.15C) – the coldest temperature possible. By comparison, the temperature in remote regions of outer space is about 2.7 kelvin (-270C; -454F).

The LHC’s magnets are designed to be “superconducting”, which means they channel electric current with zero resistance and very little power loss. But to become superconducting, the magnets must be cooled to very low temperatures.

For this reason, the LHC is innervated by a complex system of cryogenic lines using liquid helium as the refrigerant of choice.

No particle physics facility on this scale has ever operated in such frigid conditions.

But before a beam can be circulated around the 27km-long LHC ring, engineers will have to thoroughly test the machine’s new quench protection system and continue with magnet powering tests.

Particle beams have already been brought “to the door” of the Large Hadron Collider. A low-intensity beam could be injected into the LHC in as little as a week.

This beam test would involve only parts of the collider, rather than the whole “ring”.

LHC tunnel (Cern/M.Brice)

The LHC’s tunnel runs for 27 km under the Franco-Swiss border

Officials now plan to circulate a beam around the LHC in the second half of November. Engineers will then aim to smash low-intensity beams together, giving scientists their first data.

The beams’ energy will then be increased so that the first high-energy collisions can take place. These will mark the real beginning of the LHC’s research programme.

Collisions at high energy have been scheduled to occur in December, but now look more likely to happen in January, according to Cern’s director of communications James Gillies.

Feeling the squeeze

Mr Gillies said this would involve delicate operation of the accelerator.

“Whilst you’re accelerating [the beams], you don’t have to worry too much about how wide the beams are. But when you want to collide them, you want the protons as closely squeezed together as possible.”

He added: “If you get it wrong you can lose beam particles – so it can take a while to perfect. Then you line up the beams to collide.

“In terms of the distances between the last control elements of the LHC and the collision point, it’s a bit like firing knitting needles from across the Atlantic and getting them to collide half way.”

Officials plan a brief hiatus over the Christmas and New Year break, when the lab will have to shut down.

Although managers had discussed working through this period, Mr Gillies said this would have been “too logistically complicated”.

The main determinant in the decision to close over winter was workers’ contracts, which would have needed to be re-negotiated, he said.

An upgraded early warning system, or quench protection system, should prevent incidents of the kind which shut the collider last year, officials say.

This has involved installing hundreds of new detectors around the machine.

Cern has spent about 40m Swiss Francs (£24m) on repairs following the accident, including upgrades to the quench protection system.


Full article and photos:

Galactic Magnetic Fields May Control Boundaries Of Our Solar System

The first all-sky maps developed by NASA’s Interstellar Boundary Explorer (IBEX) spacecraft, the first mission to examine the global interactions occurring at the edge of the solar system, suggest that galactic magnetic fields have had a far greater impact on Earth’s history than previously conceived, and that the future of our planet and others may depend, in part, on how those fields change with time.

“The IBEX results are truly remarkable, with emissions not resembling any of the current theories or models of this never-before-seen region,” says Dr. David J. McComas, IBEX principal investigator and assistant vice president of the Space Science and Engineering Division at Southwest Research Institute. “We expected to see small, gradual spatial variations at the interstellar boundary, some 10 billion miles away. However, IBEX is showing us a very narrow ribbon that is two to three times brighter than anything else in the sky.”

A “solar wind” of charged particles continuously travels at supersonic speeds away from the Sun in all directions. This solar wind inflates a giant bubble in interstellar space called the heliosphere — the region of space dominated by the Sun’s influence in which the Earth and other planets reside. As the solar wind travels outward, it sweeps up newly formed “pickup ions,” which arise from the ionization of neutral particles drifting in from interstellar space. IBEX measures energetic neutral atoms (ENAs) traveling at speeds of roughly half a million to two and a half million miles per hour. These ENAs are produced from the solar wind and pick-up ions in the boundary region between the heliosphere and the local interstellar medium.

The IBEX mission has just completed the first global maps of the heliosphere’s protective boundary layers, using a new technique that images neutral atoms, much as a telescope images light, to reveal the interactions between electrically charged and neutral atoms at the distant reaches of our Sun’s influence, far beyond the most distant planets. It is here that the solar wind, which continually emanates from the Sun at millions of miles per hour, slams into the magnetized medium of charged particles, atoms and dust that pervades the galaxy and is diverted around the system. The interaction between the solar wind and the medium of our galaxy creates a complex host of effects, which has long fascinated scientists, and is thought to shield Earth and the rest of the solar system from the majority of harmful galactic radiation.

“The magnetic fields of our galaxy may change the protective layers of our solar system that regulate the entry of galactic radiation, which affects Earth and poses hazards to astronauts,” says Nathan Schwadron of Boston University’s Center for Space Physics and the lead for the IBEX Science Operations Center at BU.

Every six months, the IBEX mission, which was launched on October 18, 2008, completes a global map of the heliosphere. The first IBEX maps are strikingly different from any of the predictions, forcing scientists to reconsider their basic assumptions about how the heliosphere is created.

“The most striking feature is the ribbon that appears to be controlled by the magnetic field of our galaxy,” says Schwadron.

Although scientists knew that their models would be tested by the IBEX measurements, the existence of the ribbon is “remarkable” says Geoffrey Crew, a Research Scientist at MIT and the Software Design Lead for IBEX. “It suggests that the galactic magnetic fields are much stronger and exert far greater stresses on the heliosphere than we previously believed.”

The discovery has scientists thinking carefully about how different the heliosphere could be than they expected.

“It was really surprising that the models did not generate features at all like the ribbon we observed,” says Christina Prested, a BU graduate student working on IBEX. “Understanding the ribbon in detail will require new insights into the inner workings of the interactions at the edge of our Sun’s influence in the galaxy.”

Adds Schwadron, “Any changes to our understanding of the heliosphere will also affect how we understand the astrospheres that surround other stars. The harmful radiation that leaks into the solar system from the heliosphere is present throughout the galaxy and the existence of astrospheres may be important for understanding the habitability of planets surrounding other stars.”

IBEX is the latest in NASA’s series of low-cost, rapidly developed Small Explorers space missions. Southwest Research Institute in San Antonio, Texas, leads and developed the mission with a team of national and international partners. NASA’s Goddard Space Flight Center in Greenbelt, Md., manages the Explorers Program for NASA’s Science Mission Directorate in Washington.


Full article:

‘Magnetricity’ Observed And Measured For First Time

The magnetic equivalent of electricity in a ‘spin ice’ material: atom-sized north and south poles in spin ice drift in opposite directions when a magnetic field is applied.

A magnetic charge can behave and interact just like an electric charge in some materials, according to new research led by the London Centre for Nanotechnology (LCN).

The findings could lead to a reassessment of current magnetism theories, as well as significant technological advances.

The research, published in Nature, proves the existence of atom-sized ‘magnetic charges’ that behave and interact just like more familiar electric charges. It also demonstrates a perfect symmetry between electricity and magnetism – a phenomenon dubbed ‘magnetricity’ by the authors from the LCN and the Science and Technology Facility Council’s ISIS Neutron and Muon Source.

In order to prove experimentally the existence of magnetic current for the first time, the team mapped Onsager’s 1934 theory of the movement of ions in water onto magnetic currents in a material called spin ice. They then tested the theory by applying a magnetic field to a spin ice sample at a very low temperature and observing the process using muons at ISIS.

The experiment allowed the team to detect magnetic charges in the spin ice (Dy2Ti2O7), to measure their currents, and to determine the elementary unit of the magnetic charge in the material. The monopoles they observed arise as disturbances of the magnetic state of the spin ice, and can exist only inside the material.

Professor Steve Bramwell, LCN co-author of the paper, said: “Magnetic monopoles were first predicted to exist in 1931, but despite many searches, they have never yet been observed as freely roaming elementary particles. These monopoles do at least exist within the spin ice sample, but not outside.

“It is not often in the field of physics you get the chance to ask ‘How do you measure something?’ and then go on to prove a theory unequivocally. This is a very important step to establish that magnetic charge can flow like electric charge. It is in the early stages, but who knows what the applications of magnetricity could be in 100 years’ time.”

Professor Keith Mason, Chief Executive of STFC said: “The unequivocal proof that magnetic charge is conducted in spin ice adds significantly to our understanding of electromagnetism. Whilst we will have to wait to see what applications magnetricity will find in technology, this research shows that curiosity driven research will always have the potential to make an impact on the way we live and work. Advanced materials research depends greatly on having access to central research labs like ISIS allowing the UK science community to flourish and make exciting discoveries like this.”

Dr Sean Giblin, instrument scientist at ISIS and co-author of the paper, added: “The results were astounding. Using muons at ISIS we are finally able to confirm that magnetic charge really is conducted through certain materials at certain temperatures – just like the way ions conduct electricity in water.”


Looking beyond

Through-the-wall vision

A cheap way of using small radios to see inside buildings

SUPERMAN had X-ray vision, which was useful for looking through walls when rescuing heroines and collaring villains. But beyond Hollywood, the best that engineers have been able to come up with to see inside buildings are devices that use radar. Some are portable enough to be placed against an outside wall by, say, a police unit planning a raid—and sophisticated enough to show, with reasonable accuracy, the location of anyone inside. But the best models cost more than $100,000, so they are not widely deployed. Now a team led by Neal Patwari and Joey Wilson of the University of Utah has come up with a way to peer through the walls of a building using a network of little radios that cost only a few dollars each.

Radar works by recording radio waves that have been reflected from the object under observation. Dr Patwari’s and Mr Wilson’s insight was to look not for reflections but for shadows. Their device broadcasts a radio signal through a building and, when that signal comes out the other side, monitors variations in its strength. The need for variation means the system cannot see things that are stationary. When the signal is temporarily blocked by a moving object such as a person, however, it shows up loud and clear.

Using a network of small transmitters and receivers, the researchers have found it is possible to plot a person’s position quite accurately and display it on the screen of a laptop. They call the process radio tomographic imaging, because constructing an image by measuring the strengths of radio signals along several pathways is similar to the computerised tomographic body-scanning used by hospitals—though medical machines employ X-rays, not radio waves, to do the scanning.
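The reconstruction step can be sketched as a regularized least-squares problem: each pixel of the monitored area gets an unknown attenuation value, each radio link contributes one measurement, and pixels near the straight line between a link's two radios are assumed to affect that link. The following is a minimal, hypothetical illustration of this idea; the grid size, corridor model and solver are assumptions for the sketch, not details of the Utah system.

```python
import numpy as np

GRID = 10  # 10 x 10 pixel image of the monitored area

def link_weights(tx, rx, grid=GRID, halfwidth=0.8):
    """Weight each pixel by whether its centre lies in an ellipse-like
    corridor around the tx-rx line (a common radio-tomography model)."""
    w = np.zeros(grid * grid)
    d = np.hypot(*(np.subtract(rx, tx)))
    for i in range(grid):
        for j in range(grid):
            p = np.array([i + 0.5, j + 0.5])
            # small excess path length means the pixel sits near the link
            excess = np.linalg.norm(p - tx) + np.linalg.norm(p - rx) - d
            if excess < halfwidth:
                w[i * grid + j] = 1.0 / max(np.sqrt(d), 1e-6)
    return w

# Radios placed along two opposite edges of the square area
radios = [np.array([x, y], float)
          for x in (0.0, float(GRID)) for y in np.linspace(0, GRID, 5)]
links = [(a, b) for ai, a in enumerate(radios) for b in radios[ai + 1:]]
W = np.vstack([link_weights(tx, rx) for tx, rx in links])

# Simulate a person at pixel (4, 6): that pixel attenuates crossing links
x_true = np.zeros(GRID * GRID)
x_true[4 * GRID + 6] = 1.0
y = W @ x_true + 0.01 * np.random.default_rng(0).standard_normal(len(W))

# Tikhonov-regularized least squares recovers the attenuation image
lam = 0.1
x_hat = np.linalg.solve(W.T @ W + lam * np.eye(GRID * GRID), W.T @ y)
print("estimated position:", divmod(int(np.argmax(x_hat)), GRID))
```

The peak of the reconstructed image marks where the moving person blocked the most links, which is why a dense web of crossing links matters: position accuracy comes from the number of corridors intersecting at the target.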

The radios used by Dr Patwari and Mr Wilson are low-cost types designed for use in what are known as ZigBee networks. In that application they transmit data between devices such as thermostats, fire detectors and some automated factory equipment. They are not even as powerful as the radios used in Wi-Fi networks to link computers together.

Small and inexpensive as these ZigBee radios are, though, there is strength in their numbers. Each is in contact with all of the others. A building under examination is thus penetrated by a dense web of links. In one experiment, for example, a network of 34 radios was able to keep track of Mr Wilson’s position with an accuracy of less than a metre—a figure that Dr Patwari and Mr Wilson think could be improved greatly by using specially designed radios instead of off-the-shelf ones. Moreover, putting radios on the roof of a building as well as around its walls should make it possible to produce three-dimensional views of what is going on inside.

The ability to “see” people moving around in a building with such a cheap system has many plausible applications, and Mr Wilson has set up a company called Xandem to commercialise the idea. Besides military, police and private-security uses, radio networks might be employed to locate people trapped by fire or earthquake. More commercially, they might be used to measure what retailers call “footfall”—recording how people use stores and shopping centres. At the moment, this is done with cameras, or by triangulating the position of signals given off by mobile phones that customers are carrying. Radio tomography could be simpler, more accurate and, some might feel, less intrusive. Certainly less so than a man in tights with X-ray eyes.



Welcome to the world of sci-fi science

Large Hadron Collider and the Time Machine
One of these devices may actually send things through time
Teleportation, time travel, antimatter and wireless electricity. It all sounds far-fetched, more fiction than fact, but it’s all true.

Everybody is used to science fiction featuring science that seems, well, not very scientific.

But you might be surprised at the way some things that seem fantastical have a solid grounding in actual science.


Actual Tardis-style time travel won’t be materialising any time soon

The theory: Build a machine that lets you change the past or visit the future.

The science fiction: The Time Machine by HG Wells, where the Time Traveller visits humanity’s far future and doesn’t like what he finds.

In practice: Einstein’s relativity allows time travel in extreme circumstances. Some interpretations of quantum mechanics permit particles to travel backwards in time. Two physicists recently suggested that the Large Hadron Collider may have malfunctioned because a Higgs boson particle, travelling back in time from a future experiment, wrecked the machine.

The layman’s explanation: Time travel seems paradoxical – what happens if you go back in time and kill your own grandfather? But current physical theories do not forbid it.

In relativity, particles can travel along “closed timelike curves”, going round a time-loop from past to present to future and back to the same past. One theoretical method uses a wormhole, which is a black hole linked to its time-reversal by a tube. If you pull the black hole around near the speed of light, you get a time machine. However, you need a special kind of matter to keep the wormhole open, and we don’t yet have any.

Quantum mechanics involves a fundamental symmetry in nature. If you swap positive and negative charges, reflect the universe in a mirror, and reverse the flow of time, then the laws of physics don’t change. So a Higgs boson travelling backwards in time is the same as an anti-Higgs travelling forwards.

Coming to a shop near you?: In about 15,000 years at this rate, assuming new laws of physics don’t rule it out.


Electricity travels between two Tesla coils
This wireless electricity would probably not charge your mobile.

The theory: Plug your gadgets into the mains without using a cable.

The science fiction: Isaac Asimov’s 1941 story Reason is about a solar power station run by robots that transmits energy to Earth.

In practice: Electricity and magnetism are “fields” in space, and can be converted into each other. Electromagnetic radiation is a wave, and can travel from one place to another. So in principle wireless transmission of electrical power should be a doddle. Edison thought about it in 1875.

The layman’s explanation: If you move a magnet, it creates an electrical field. If you move an electrical field, it creates a magnetic one. The two are different aspects of one basic force of nature – electromagnetism. In particular, electrical currents can be transported from one gadget to another over a distance, a process called induction. Electrical generators and motors use this.

Microwaves, which are effectively light with a very long wavelength, are a practical way to transport electrical power. It is also possible to turn electricity into light, using a laser, and then reverse the process at the other end.

Coming to a shop near you?: In 1975, an American team showed that it’s possible to transmit tens of kilowatts of power using microwaves. A few months ago a Japanese consortium announced a plan to build a $21bn facility in space to beam solar power to Earth – within 30 years it could supply 300,000 homes with electricity. This year, at the TED conference in Oxford, the company WiTricity demonstrated a wireless power system that can recharge mobile phones and TV sets. In Tesco in time for Christmas.


Three characters
These three thought of antimatter as we think of super unleaded

The theory: There is a special kind of matter which explodes violently on contact with ordinary matter, producing more energy than a hydrogen bomb.

The science fiction: Star Trek uses antimatter to power its warp drives.

In practice: Paul Dirac should have predicted antimatter using quantum mechanics in 1928 but he fluffed it. Carl Anderson spotted the first antiparticle, the positron, in 1932. In 1995, the CERN particle accelerator facility in Geneva created atoms of antihydrogen.

The layman’s explanation: Matter is made of extremely tiny particles, which have various masses, electric charges, spins, and so on. Associated with each particle is an antiparticle with the same mass but opposite charge. If the two collide, they annihilate in a burst of energy. A small mass produces a lot of energy thanks to Einstein’s famous equation – energy = mass times the square of the speed of light. The Big Bang somehow produced a billion and one particles of matter for every billion particles of antimatter. No one really knows why, but if it hadn’t, we wouldn’t be here because there wouldn’t be a here for us to inhabit.
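The arithmetic behind the famous equation is easy to check. A back-of-the-envelope calculation for one gram of antimatter annihilating with one gram of matter (the bomb-yield figure used for comparison is a rough standard estimate, not from the article):

```python
# E = m c^2 for total annihilation of 1 g matter + 1 g antimatter
c = 299_792_458.0   # speed of light, m/s
m = 0.002           # total mass converted to energy, kg
E = m * c ** 2      # energy released, joules
print(f"{E:.3e} J")

# For scale: the Hiroshima bomb released very roughly 6.3e13 J
hiroshima = 6.3e13
print(f"roughly {E / hiroshima:.1f} Hiroshima-bomb equivalents")
```

Two grams of reacting mass yields on the order of 10^14 joules, which is why even tiny quantities of antimatter would make an extraordinarily compact energy store.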

You might think of antimatter as a compact source of almost unbounded energy. Put some in a magnetic bottle, a magnetic field that confines the antimatter in a cavity so that it doesn’t touch any normal matter – the only way known to be able to contain it – and then release it very slowly, allowing it to react with normal matter. It would make fusion power seem like a car battery.

Coming to a store near you?: Positrons are very ho-hum. They occur in radioactive decay and are used routinely in medical PET (Positron Emission Tomography) scanners. Beyond those, it gets hard. Expect a few thousand atoms of antihydrogen within the next 50 years, costing the GNP of a small country, and an atom or two of heavier elements. Mass production looks like a long shot.


The theory: Going from here to somewhere else without passing through anywhere in between.

The science fiction: Beam me up, Scotty.

In practice: Take two particles of light and entangle them – now you can teleport quantum information – such as what their spin is – from one to the other, instantaneously.

The layman’s explanation: Photons, particles of light, have a property called “spin”. This can be up, down, or a mixture of the two. Alice has a photon, and she wants Bob to have one with the same spin. She can’t send him hers because the Post Office is on strike, and she can’t measure her spin and phone him, because the measurement can change the spin.

Fortunately, the last time she met Bob she gave him one photon from an entangled pair, and kept the other. “Entangled” means that the two photons were prepared so that their states were related in a special way. Alice lets her photon interact with her other photon from the entangled pair. This instantly teleports information about the spin to Bob’s half. However, he can’t “read” that information until a message arrives by more conventional means. A quick call on Alice’s mobile, telling him some measurements she has made, now puts his entangled photon into the desired state.

Quantum “teleportation” destroys the original state and can’t be used to send messages faster than light. It doesn’t actually teleport matter – just quantum information.
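The protocol Alice and Bob follow can be simulated exactly with state vectors. Below is a self-contained sketch of the standard textbook teleportation circuit (the amplitudes a and b are arbitrary choices, and this models the idealized protocol, not any particular experiment from the article):

```python
import numpy as np

# Qubit 0 holds the state to teleport; qubits 1 and 2 are the
# entangled pair shared by Alice (1) and Bob (2).
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, target, n=3):
    """Lift a single-qubit gate onto qubit `target` of an n-qubit register."""
    mats = [gate if q == target else I2 for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target, n=3):
    """Controlled-NOT as an explicit 2^n x 2^n permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - q) for q, b in enumerate(bits)), i] = 1
    return U

# Arbitrary normalized state to teleport: a|0> + b|1>
a, b = 0.6, 0.8j
psi = np.array([a, b], dtype=complex)

# Qubits 1 and 2 start in the Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice: CNOT from qubit 0 to 1, then Hadamard on qubit 0
state = op(H, 0) @ (cnot(0, 1) @ state)

# Alice measures qubits 0 and 1; sample an outcome (m0, m1)
rng = np.random.default_rng(1)
probs = np.abs(state) ** 2
outcome = int(rng.choice(8, p=probs / probs.sum()))
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Project onto the measured outcome; Bob's qubit remains
keep = [i for i in range(8) if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
bob = state[keep]
bob /= np.linalg.norm(bob)

# The "phone call": Bob applies X^m1 then Z^m0
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print("teleported state matches original:", np.allclose(bob, psi))
```

Note how the simulation mirrors the caveat above: Bob's qubit is useless until the two classical bits (m0, m1) arrive, so nothing travels faster than light, and Alice's original qubit is consumed by the measurement.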

Coming to a store near you?: In 1998, the quantum optics group at Caltech used “squeezed light” to teleport the state of a photon in a laboratory. It’s now been done with atoms, too. In 2004 Austrian physicists teleported the state of a photon across the Danube river. Within another century it will be an amoeba. But be warned: when you are teleported, your body will be ripped to shreds and rebuilt at the other end.

Ian Stewart’s latest book is Professor Stewart’s Hoard of Mathematical Treasures, published by Profile.



French Investigate Scientist in Formal Terrorism Inquiry

A French court placed a physicist working at CERN, the high-energy research laboratory in Switzerland, under formal investigation on Monday for suspected “conspiracy with a terrorist enterprise.”

Although the physicist’s name had not been officially released by the French police, an official with direct knowledge of the investigation identified him as Adlène Hicheur, a French particle physicist born in Algeria. The official spoke on condition of anonymity.

Dr. Hicheur, 32, and a younger brother were arrested on Thursday in his home in Vienne, France, on suspicion of having contacts with a member of Al Qaeda in the Islamic Maghreb, a Sunni extremist group based in Algeria that has affiliated itself with Osama bin Laden’s terrorist network. The brother has been released.

Dr. Hicheur has not been charged with a crime, and the French authorities have not said what evidence they have in the case. A person informed of the investigation said that some incriminating information was in the form of e-mail messages and other communications obtained at the time of Dr. Hicheur’s arrest.

Under French law, a person in a terrorism case can be held under “provisional detention” with no time limit. In France, being placed under formal investigation does not necessarily lead to a trial and does not imply guilt.

In an interview with the journal Nature, published online on Tuesday, a brother of Dr. Hicheur said the accusations against his brother were “completely false.” The brother, Halim, said that his family traded e-mail messages with people in Algeria, but denied any contacts with Al Qaeda. According to news reports, Dr. Hicheur was born in Setif, Algeria, and is one of six children.

Dr. Hicheur is part of a 49-member team from the Laboratory for High Energy Physics at the École Polytechnique Fédérale de Lausanne that is working on one experiment at CERN’s Large Hadron Collider, as part of a 700-member international group.

The collider was built to accelerate protons to seven trillion electron volts of energy and then bang them together in search of forces and particles that existed in the early moments of the Big Bang.

The experiment the Lausanne team works on, called LHCb, is aimed at clarifying any difference between matter and its opposite, antimatter, and in that way explaining why the universe is made of the former and not the latter.

A spokesman for the technical school in Lausanne characterized Dr. Hicheur’s colleagues as being “extremely surprised and in emotional shock” at the possibility that he was a suspect. Dr. Hicheur spent most of his time at his office at CERN, the spokesman said, returning to Lausanne only once a week to teach a class — exactly what class, he said he was not allowed to say.

Dr. Hicheur has been working on various aspects of the antimatter problem for his entire career. A paper presented last year in La Thuile, Italy, was about so-called new physics that could emerge from the LHCb collaboration’s gigantic detector, one of four spaced around the collider tunnel underneath the Swiss-French border near Geneva.

Dr. Hicheur was awarded his Ph.D. in 2003 from the University of Savoie in Annecy, France, for work on aspects of the antimatter problem involving rare decays of the subatomic particles called B mesons. The research was done at the Stanford Linear Accelerator Center in California, where he worked for several months in 2002 as part of the BaBar collaboration, said Rob Brown, a spokesman for the Stanford lab.

According to archival physics Web sites, Dr. Hicheur is listed as an author on more than a hundred physics papers, most with the BaBar team. According to British press reports Dr. Hicheur also once worked at the Rutherford Appleton Laboratory at Chilton, in Oxfordshire, England.

As a member of the LHCb team, Dr. Hicheur had an office and an e-mail address at the CERN complex outside Geneva, but according to James Gillies, head of CERN’s press office, he did not have access to the tunnel.

Asked if radiation from the proton beams could be used to create radioactive materials for a dirty bomb, Dr. Gillies said it was unlikely. The isotopes produced would be too short-lived to be of use to terrorists, he said, or would be produced in quantities too small for a weapon.

“If someone were to try and introduce something into the tunnel, it would be impossible to expose it directly to the beam, so the flux of particles hitting it would be low,” he said. “There is no conceivable way to produce harmful radioactive materials that could be of interest to terrorists.”

In principle, antimatter could be used to make a powerful bomb, because particles and their antiparticles annihilate each other into pure energy on contact. This was the premise of the recent movie and book by Dan Brown, “Angels and Demons,” as well as a propulsion scheme in “Star Trek.”

CERN has in fact produced antimatter, and even anti-atoms in the quest to understand antimatter, but the lab produces so little, according to a calculation on the CERN Web site, that it would take two billion years to make enough for a bomb.

Dennis Overbye, New York Times



Physicists Measure Elusive ‘Persistent Current’ That Flows Forever

Harris made the first definitive measurement of an electric current that flows continuously in tiny, but ordinary, metal rings.

Physicists at Yale University have made the first definitive measurements of “persistent current,” a small but perpetual electric current that flows naturally through tiny rings of metal wire even without an external power source.

The team used nanoscale cantilevers, an entirely novel approach, to indirectly measure the current through changes in the magnetic force it produces as it flows through the ring. “They’re essentially little floppy diving boards with the rings sitting on top,” said team leader Jack Harris, associate professor of physics and applied physics at Yale. The findings appear in the October 9 issue of Science.

The counterintuitive current is the result of a quantum mechanical effect that influences how electrons travel through metals, and arises from the same kind of motion that allows the electrons inside an atom to orbit the nucleus forever. “These are ordinary, non-superconducting metal rings, which we typically think of as resistors,” Harris said. “Yet these currents will flow forever, even in the absence of an applied voltage.”

Although persistent current was first theorized decades ago, it is so faint and sensitive to its environment that physicists were unable to accurately measure it until now. It is not possible to measure the current with a traditional ammeter because it only flows within the tiny metal rings, which are about the same size as the wires used on computer chips.

Past experiments tried to indirectly measure persistent current via the magnetic field it produces (any current passing through a metal wire produces a magnetic field). They used extremely sensitive magnetometers known as superconducting quantum interference devices, or SQUIDs, but the results were inconsistent and even contradictory.

“SQUIDs had long been established as the tool used to measure extremely weak magnetic fields. It was extremely optimistic for us to think that a mechanical device could be more sensitive than a SQUID,” Harris said.

The team used the cantilevers to detect changes in the magnetic field produced by the current as it changed direction in the aluminum rings. This new experimental setup allowed the team to make measurements a full order of magnitude more precise than any previous attempts. They also measured the persistent current over a wider range of temperature, ring size and magnetic field than ever before.

“These measurements could tell us something about how electrons behave in metals,” Harris said, adding that the findings could lead to a better understanding of how qubits, used in quantum computing, are affected by their environment, as well as which metals could potentially be used as superconductors.



Research in a Vacuum: DARPA Tries to Tap Elusive Casimir Effect for Breakthrough Technology

DARPA mainly hopes that research on this quantum quirk can produce futuristic microdevices


A FEW GOOD MEMS Harnessing the Casimir effect (which takes place between the two metal plates in the above diagram) could help researchers build tiny machines, such as microelectromechanical systems (MEMS), that today are hindered by surface interactions that can make nanomaterials sticky to the point of permanent adhesion.

Named for a Dutch physicist, the Casimir effect governs interactions of matter with the energy that is present in a vacuum. Success in harnessing this force could someday help researchers develop low-friction ballistics and even levitating objects that defy gravity. For now, the U.S. Defense Department’s Defense Advanced Research Projects Agency has launched a two-year, $10-million project encouraging scientists to work on ways to manipulate this quirk of quantum electrodynamics.

Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested, in work done in the 1940s with fellow Dutch physicist Dirk Polder, that two metal plates held apart in a vacuum could trap the waves, creating vacuum energy that, depending on the situation, could attract or repel the plates. As the boundaries of a region of vacuum move, the variation in vacuum energy (also called zero-point energy) leads to the Casimir effect. Recent research done at Harvard University, Vrije Universiteit Amsterdam and elsewhere has proved Casimir correct—and given some experimental underpinning to DARPA’s request for research proposals.
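Casimir's own result for two ideal, perfectly conducting parallel plates gives a concrete sense of the force's size: the attractive pressure is P = π²ħc / (240 d⁴) for plate separation d. A quick calculation using that textbook formula (the separations chosen are illustrative, not figures from the DARPA program):

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J s
c = 299_792_458.0      # speed of light, m/s

def casimir_pressure(d):
    """Ideal-plate Casimir pressure in pascals for separation d in metres."""
    return math.pi ** 2 * hbar * c / (240 * d ** 4)

# The d^-4 scaling is why the force only matters at the nanoscale:
# negligible at a micron, roughly atmospheric pressure at 10 nm.
for d in (1e-6, 1e-7, 1e-8):
    print(f"d = {d:.0e} m  ->  {casimir_pressure(d):.3e} Pa")
```

The steep inverse-fourth-power dependence on separation explains both why the effect is so hard to measure and why it dominates the behaviour of closely spaced MEMS surfaces.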

Investigators from five institutions—Harvard, Yale University, the University of California, Riverside, and two national labs, Argonne and Los Alamos—received funding. DARPA will assess the groups’ progress in early 2011 to see if any practical applications might emerge from the research. “If the program delivers, there’s a good chance for a follow-on program to apply” the research, says Thomas Kenny, the DARPA physicist in charge of the initiative.

Program documents on the DARPA Web site state the goal of the Casimir Effect Enhancement program “is to develop new methods to control and manipulate attractive and repulsive forces at surfaces based on engineering of the Casimir force. One could leverage this ability to control phenomena such as adhesion in nanodevices, drag on vehicles, and many other interactions of interest to the [Defense Department].”

Nanoscale design is the most likely place to start and is also the arena where levitation could emerge. Materials scientists working to build tiny machines called microelectromechanical systems (MEMS) struggle with surface interactions, called van der Waals forces, that can make nanomaterials sticky to the point of permanent adhesion, a phenomenon known as “stiction”. To defeat stiction, many MEMS devices are coated with Teflon or similar low-friction substances or are studded with tiny springs that keep the surfaces apart. Materials that did not require such fixes could make nanotechnology more reliable. Such materials could skirt another problem posed by adhesion: Because surface stickiness at the nanoscale is much greater than it is for larger objects, MEMS designers resort to making their devices relatively stiff. That reduces adhesion (stiff structures do not readily bend against each other), but it reduces flexibility and increases power demands.

Under certain conditions, manipulating the Casimir effect could create repellent forces between nanoscale surfaces. Hong Tang and his colleagues at Yale School of Engineering & Applied Science sold DARPA on their proposal to assess Casimir forces between minuscule silicon crystals, like those that make up computer chips. “Then we’re going to engineer the structure of the surface of the silicon device to get some unusual Casimir forces to produce repulsion,” he says. In theory, he adds, that could mean building a device capable of levitation.

Such claims emit a strong scent of fantasy, but researchers say incremental successes could open the door to significant breakthroughs in key areas of nanotechnology, and perhaps larger structures. “What I can contribute is to understand the role of the Casimir force in real working devices, such as microwave switches, MEMS oscillators and gyroscopes, that normally are made of silicon crystals, not perfect metals,” Tang says.

The request for proposals closed in September. The project received “a lot of interest,” Kenny says. “I was surprised at the creativity of the proposals, and at the practicality,” he adds, although he declined to reveal how many teams submitted proposals. “It wasn’t pure theory. There were real designs that looked buildable, and the physics looked well understood.”

Still, the Casimir project was a “hard sell” for DARPA administrators, Kenny acknowledges. “It’s very fundamental, very risky, and even speculative on the physics side,” he says. “Convincing the agency management that the timing was right was difficult, especially given the number of programs that must compete for money within the agency.”

DARPA managers certainly would be satisfied if the Casimir project produced anything tangible, because earlier attempts had failed. Between 1996 and 2003, for example, NASA had a program to explore what it calls Breakthrough Propulsion Physics to build spacecraft capable of traveling at speeds faster than light (299,790 kilometers per second). One way to do that is by harnessing the Casimir force in a vacuum and using the energy to power a propulsion system. The program closed with this epitaph on its Web site: “No breakthroughs appear imminent.”

One of many problems with breakthrough propulsion based on the Casimir force is that whereas zero-point energy may be theoretically infinite, it is not necessarily limitless in practice—or even minutely accessible. “It’s not so much that these look like really good energy schemes so much as they are clever ways of broaching some really hard questions and testing them,” says Marc Millis, the NASA physicist who oversaw the propulsion program.

The DARPA program faces several formidable obstacles, as well, cautions Jeremy Munday, a physicist at the California Institute of Technology who studies the Casimir effect. For starters, simply measuring the Casimir force is difficult enough. These experiments take many years to complete, adds Munday, who recently published a paper in Nature (Scientific American is part of the Nature Publishing Group) describing his own research. What’s more, he says, although several groups have measured the Casimir force, only a few have been able to modify it significantly. Still, Munday adds, the exploratory nature of the program means its goals and expectations are “quite reasonable.”

Tang is pragmatic about his efforts, given the unlikelihood that Casimir force will ever provide much energy to harness. “The force is really small,” he says. “After all, a vacuum is a vacuum.”

Yet sometimes the best science can hope for is baby steps. “To come up with anything that can lead to a viable energy conversion or a viable force producing effect, we’re not anywhere close,” Millis says. “But then, of course, you don’t make progress unless you try.”

Adam Marcus, Scientific American



The Wonderful World of the Teeny-Tiny

Microscopic Photography

There are millions of photo competitions. But very few of them deal with objects that are normally invisible to the naked eye. SPIEGEL ONLINE brings you the winners of this year’s microscopic photo competition.

It isn’t uncommon for scientists to spend countless hours staring into a microscope. Only rarely, however, do they take pictures of what they see. And even then the images tend to be gray and amorphous, depicting malignant tissue or the activity of a particular protein inside a cell.


For the uninitiated, such images are impenetrable. Yet the micro-world can also be a beautiful place, full of splendour that normally remains hidden to the naked eye. Capturing that beauty is the aspiration of micro-photographers, those who magnify the miniature and take pictures of the tiny. The images that result are often full of unfamiliar shapes and forms — and surprisingly colorful. Only rarely is it possible to identify the subject being photographed.

Since 1974, though, depictions of the diminutive have been the subject of an annual photo contest, called the Nikon Small World Competition. A jury of photographers, science journalists and researchers chooses the best of the best among microscopic photos.


micro 1

First place in this year’s Nikon Small World Competition went to Heiti Paves of Estonia. The image shows the anther of a thale cress (Arabidopsis thaliana) magnified 20 times. The plants pollinate themselves and reproduce quickly, making them a favorite for genetics researchers.

micro 2

Second place, Gerd Günther of Germany. The spiny sow thistle (Sonchus asper) can be found in Austria and Germany. The image shows part of the plant’s flower stem magnified 150 times.

micro 3

Third place, Pedro Barrios-Perez of Canada. The image shows a wrinkled photoresist, a light-sensitive material used in a number of industrial processes, such as micro-electronics. The image was magnified 200 times.

micro 4

Fourth place, James Hayden of the US. This image is the result of viewing the ovary of an anglerfish through a special fluorescent microscope. Magnified four times.

micro 5

Fifth place, Bruno Vellutini of Brazil. A researcher at the University of São Paulo, Vellutini’s picture shows a young sea star magnified 40 times.

micro 6

Eleventh place, Dominik Paquet of Germany. Zebra fish are often used in the study of genetic Alzheimer’s. In this image, magnified 10 times, the nerve cells are stained green while the Alzheimer’s genes are colored blue and red.


One of those honored this year, Dominik Paquet of the Adolf Butenandt Institute in Munich, is a prime example of how many of the images in the contest come about. His image, which came in 11th place, sprang from his research into the cellular processes related to Alzheimer’s disease. Zebra fish are often used to make the death of nerve cells visible. Tiny fish larvae are injected with an Alzheimer’s-causing gene, which is then colored using an antibody to make it easily perceptible. His laser microscope does the rest.

A Simple Sow Thistle

Paquet entered one of the resulting images in the photo contest. “Compelling images are important for research,” Paquet, 29, says. “And they help communicate what we are doing to the broader public.”

Some 2,000 photographers sent in their work to the contest, and the subject matter varied widely. Some photographers took pictures of magnified chemical compounds; others showed details from the world of microbiology. And not all those who submitted photographs came from the world of science. Anyone with a microscope can participate in the contest. Although standard instruments are enough, many of the images were taken with highly specialized microscopes that can cost hundreds of thousands of euros.

But even the simplest of microscopes can result in impressive photos. An image submitted by photographer Gerd Günther from Düsseldorf took second place in this year’s contest — and was created using a simple, store-bought device. His subject? A simple sow thistle.


Full article and photos:,1518,654690,00.html

The Collider, the Particle and a Theory About Fate


SUICIDE MISSION? The core of the superconducting solenoid magnet at the Large Hadron Collider in Switzerland.

More than a year after an explosion of sparks, soot and frigid helium shut it down, the world’s biggest and most expensive physics experiment, known as the Large Hadron Collider, is poised to start up again. In December, if all goes well, protons will start smashing together in an underground racetrack outside Geneva in a search for forces and particles that reigned during the first trillionth of a second of the Big Bang.

Then it will be time to test one of the most bizarre and revolutionary theories in science. I’m not talking about extra dimensions of space-time, dark matter or even black holes that eat the Earth. No, I’m talking about the notion that the troubled collider is being sabotaged by its own future. A pair of otherwise distinguished physicists have suggested that the hypothesized Higgs boson, which physicists hope to produce with the collider, might be so abhorrent to nature that its creation would ripple backward through time and stop the collider before it could make one, like a time traveler who goes back in time to kill his grandfather.

Holger Bech Nielsen, of the Niels Bohr Institute in Copenhagen, and Masao Ninomiya of the Yukawa Institute for Theoretical Physics in Kyoto, Japan, put this idea forward in a series of papers with titles like “Test of Effect From Future in Large Hadron Collider: a Proposal” and “Search for Future Influence From LHC,” posted on the physics Web site in the last year and a half.

According to the so-called Standard Model that rules almost all physics, the Higgs is responsible for imbuing other elementary particles with mass.

“It must be our prediction that all Higgs producing machines shall have bad luck,” Dr. Nielsen said in an e-mail message. In an unpublished essay, Dr. Nielsen said of the theory, “Well, one could even almost say that we have a model for God.” It is their guess, he went on, “that He rather hates Higgs particles, and attempts to avoid them.”

This malign influence from the future, they argue, could explain why the United States Superconducting Supercollider, also designed to find the Higgs, was canceled in 1993 after billions of dollars had already been spent, an event so unlikely that Dr. Nielsen calls it an “anti-miracle.”

You might think that the appearance of this theory is further proof that people have had ample time — perhaps too much time — to think about what will come out of the collider, which has been 15 years and $9 billion in the making.

The collider was built by CERN, the European Organization for Nuclear Research, to accelerate protons to energies of seven trillion electron volts around a 17-mile underground racetrack and then crash them together into primordial fireballs.

For the record, as of the middle of September, CERN engineers hope to begin to collide protons at the so-called injection energy of 450 billion electron volts in December and then ramp up the energy until the protons have 3.5 trillion electron volts of energy apiece and then, after a short Christmas break, real physics can begin.


Dr. Nielsen and Dr. Ninomiya started laying out their case for doom in the spring of 2008. It was later that fall, of course, after the CERN collider was turned on, that a connection between two magnets vaporized, shutting down the collider for more than a year.

Dr. Nielsen called that “a funny thing that could make us to believe in the theory of ours.”

He agreed that skepticism would be in order. After all, most big science projects, including the Hubble Space Telescope, have gone through a period of seeming jinxed. At CERN, the beat goes on: Last weekend the French police arrested a particle physicist who works on one of the collider experiments, on suspicion of conspiracy with a North African wing of Al Qaeda.

Dr. Nielsen and Dr. Ninomiya have proposed a kind of test: that CERN engage in a game of chance, a “card-drawing” exercise using perhaps a random-number generator, in order to discern bad luck from the future. If the outcome was sufficiently unlikely, say drawing the one spade in a deck with 100 million hearts, the machine would either not run at all, or only at low energies unlikely to find the Higgs.
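The proposed game of chance is simple enough to sketch in a few lines of Python. The function name and the deck handling below are invented for illustration; they are not taken from Nielsen and Ninomiya’s papers. One “card” is drawn from a deck of 100 million hearts plus a single spade, and drawing the spade would be read as bad luck from the future:

```python
import random

def higgs_abort_test(n_hearts=100_000_000, seed=None):
    """Draw one card from a deck of n_hearts hearts plus a single spade.

    Drawing the lone spade (probability about 1e-8 with the default deck)
    would be taken as the signal not to run the collider, or to run it
    only at low energy. Illustrative sketch only.
    """
    rng = random.Random(seed)
    card = rng.randrange(n_hearts + 1)  # card 0 stands for the spade
    return card == 0

# Under ordinary probability the test essentially never says "abort":
aborts = sum(higgs_abort_test() for _ in range(1_000))
```

The point of such a protocol is that if the draw nonetheless came up spade, the outcome would be so improbable under ordinary chance that it could be read as evidence of influence from the future.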

Sure, it’s crazy, and CERN should not and is not about to mortgage its investment to a coin toss. The theory was greeted on some blogs with comparisons to Harry Potter. But craziness has a fine history in a physics that talks routinely about cats being dead and alive at the same time and about anti-gravity puffing out the universe.

As Niels Bohr, Dr. Nielsen’s late countryman and one of the founders of quantum theory, once told a colleague: “We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.”

Dr. Nielsen is well-qualified in this tradition. He is known in physics as one of the founders of string theory and a deep and original thinker, “one of those extremely smart people that is willing to chase crazy ideas pretty far,” in the words of Sean Carroll, a Caltech physicist and author of a coming book about time, “From Eternity to Here.”

Another of Dr. Nielsen’s projects is an effort to show how the universe as we know it, with all its apparent regularity, could arise from pure randomness, a subject he calls “random dynamics.”

Dr. Nielsen admits that his and Dr. Ninomiya’s new theory smacks of time travel, a longtime interest, which has become a respectable research subject in recent years. While it is a paradox to go back in time and kill your grandfather, physicists agree there is no paradox if you go back in time and save him from being hit by a bus. In the case of the Higgs and the collider, it is as if something is going back in time to keep the universe from being hit by a bus. Just why the Higgs would be a catastrophe, though, is not clear. If we knew, presumably, we wouldn’t be trying to make one.

We always assume that the past influences the future. But that is not necessarily true in the physics of Newton or Einstein. According to physicists, all you really need to know, mathematically, to describe what happens to an apple or the 100 billion galaxies of the universe over all time are the laws that describe how things change and a statement of where things start. The latter are the so-called boundary conditions — the apple five feet over your head, or the Big Bang.

The equations work just as well, Dr. Nielsen and others point out, if the boundary conditions specify a condition in the future (the apple on your head) instead of in the past, as long as the fundamental laws of physics are reversible, which most physicists believe they are.

“For those of us who believe in physics,” Einstein once wrote to a friend, “this separation between past, present and future is only an illusion.”

In Kurt Vonnegut’s novel “The Sirens of Titan,” all of human history turns out to be reduced to delivering a piece of metal roughly the size and shape of a beer-can opener to an alien marooned on Saturn’s moon so he can repair his spaceship and go home.

Whether the collider has such a noble or humble fate — or any fate at all — remains to be seen. As a Red Sox fan my entire adult life, I feel I know something about jinxes.

Dennis Overbye, New York Times


Full article and photo:

Bacterium Transforms Toxic Gold Compounds To Their Metallic Form

gold ccc

A C. metallidurans ultra-thin section containing a gold nanoparticle.

Australian scientists have found that the bacterium Cupriavidus metallidurans catalyses the biomineralisation of gold by transforming toxic gold compounds to their metallic form using active cellular mechanisms.

Researchers had previously reported the presence of bacteria on gold surfaces, but their role had never been clearly elucidated. Now, an international team of scientists has found that there may be a biological reason for the presence of these bacteria on gold grain surfaces.

“A number of years ago we discovered that the metal-resistant bacterium Cupriavidus metallidurans occurred on gold grains from two sites in Australia. The sites are 3500 km apart, in southern New South Wales and northern Queensland, so when we found the same organism on grains from both sites we thought we were onto something. It made us wonder why these organisms live in this particular environment. The results of this study point to their involvement in the active detoxification of Au complexes leading to formation of gold biominerals,” explains Frank Reith, leader of the research and working at the University of Adelaide (Australia).

The experiments showed that C. metallidurans rapidly accumulates toxic gold complexes from a solution prepared in the lab. The resulting gold toxicity induces the bacterium to switch on oxidative stress and metal resistance gene clusters, as well as an as yet uncharacterized Au-specific gene cluster, in order to defend its cellular integrity. This leads to the active, biochemically-mediated reduction of gold complexes to nano-particulate, metallic gold, which may contribute to the growth of gold nuggets.

For this study scientists combined synchrotron techniques at the European Synchrotron Radiation Facility (ESRF) and the Advanced Photon Source (APS) and molecular microbial techniques to understand the biomineralisation in bacteria. It is the first time that these techniques have been used in the same study, so Frank Reith brought together a multinational team of experts in both areas for the success of the experiment. The team was made up of scientists from the University of Adelaide, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the University of California (US), the University of Western Ontario and the University of Saskatchewan (Canada), Martin-Luther-Universität (Germany), University of Nebraska-Lincoln (US), SCK.CEN (Belgium) and the APS (US) and the ESRF (France).

This is the first direct evidence that bacteria are actively involved in the cycling of rare and precious metals, such as gold. These results open the doors to the production of biosensors.

“The discovery of an Au-specific operon means that we can now start to develop gold-specific biosensors, which will help mineral explorers to find new gold deposits. To achieve this we need to further characterize the gold-specific operon on a genomic as well as proteomic level. If funding for this research is granted I believe we can produce a functioning biosensor within three to five years,” concludes Reith.


Full article and photo:

Canadian Astronomers Capture Spectacular Meteor Footage And Images


Composite all-sky camera image of the end of the fireball as seen from Hamilton (Camera #3, McMaster).

Astronomers from The University of Western Ontario have released footage of a meteor that was approximately 100 times brighter than a full moon. The meteor lit up the skies of southern Ontario two weeks ago and Western astronomers are now hoping to enlist the help of local residents in recovering one or more possible meteorites that may have crashed in the area of Grimsby, Ontario.

The Physics and Astronomy Department at Western has a network of all-sky cameras in southern Ontario that scan the atmosphere monitoring for meteors. Associate Professor Peter Brown, who specializes in the study of meteors and meteorites, says that on Friday, September 25 at 9:03 p.m. EST seven all-sky cameras of Western’s Southern Ontario Meteor Network (SOMN) recorded a brilliant fireball in the evening sky over the west end of Lake Ontario.

Brown along with Phil McCausland, a postdoctoral fellow at Western’s Centre for Planetary Science & Exploration, are now working to get the word out amongst interested people who may be willing to see if they can spot any fallen meteorites.

“This particular meteorite fall, if any are found, is very important because its arrival was so well recorded. We have good camera records as well as radar and infrasound detections of the event, so that it will be possible to determine its orbit prior to collision with the Earth and to determine the energy of the fireball event,” says McCausland. “We can also figure out where it came from and how it got here, which is rare. In all of history, only about a dozen meteorite falls have that kind of record.”

The fireball was first detected by Western’s camera systems at an altitude of 100 km over Guelph moving southeastwards at 20.8 km/s. The meteoroid was initially the size of a child’s tricycle.

Analysis of the all-sky camera records as well as data from Western’s meteor radar and infrasound equipment indicates that this bright fireball was large enough to have dropped meteorites in a region south of Grimsby on the Niagara Peninsula, providing masses that may total as much as several kilograms.

Researchers at Western are interested in hearing from anyone within 10 km of Grimsby who may have witnessed or recorded this event, seen or heard unusual events at the time, or who may have found possible fragments of the freshly fallen meteorite.

According to McCausland, meteorites are of great scientific value. He also points out that in Canada meteorites belong to the owner of the land upon which they are discovered. If individuals intend to search they should, in all cases, obtain the permission of the land owner before searching on private land.

Meteorites may best be recognized by their dark and scalloped exterior, and are usually denser than normal rock and will often attract a fridge magnet due to their metal content. In this fall, meteorites may be found in a small hole produced by their dropping into soil. Meteorites are not dangerous, but any recovered meteorites should be placed in a clean plastic bag or container and be handled as little as possible to preserve their scientific information.

For video footage, still images and site maps, please visit


Full article and photo:

Classical Chaos Occurs In The Quantum World, Scientists Find


This image shows the kind of pictures Jessen’s team produces with tomography. The top two spheres are from a selected experimental snapshot taken after 40 cycles of changing the direction of the axis of spin of a cesium atom, the quantum “spinning top.” The two spheres below are theoretical models that agree remarkably with the experimental results.

Chaotic behavior is the rule, not the exception, in the world we experience through our senses, the world governed by the laws of classical physics.

Even tiny, easily overlooked events can completely change the behavior of a complex system, to the point where there is no apparent order to most natural systems we deal with in everyday life.

The weather is one familiar case, but other well-studied examples can be found in chemical reactions, population dynamics, neural networks and even the stock market.

Scientists who study “chaos” — which they define as extreme sensitivity to infinitesimally small tweaks in the initial conditions — have observed this kind of behavior only in the deterministic world described by classical physics.
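That sensitivity can be seen in a few lines of Python using the logistic map, a textbook chaotic system chosen here purely as an illustration (it is not part of the research described below): two starting values that differ only in the ninth decimal place soon follow completely different trajectories.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # perturbed in the 9th decimal place

# The tiny initial difference is amplified at every step until the two
# trajectories bear no resemblance to each other.
max_gap = max(abs(p - q) for p, q in zip(a, b))
```

After a few dozen iterations the gap between the trajectories is of order one, even though the starting points were identical to nine digits.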

Until now, no one has produced experimental evidence that chaos occurs in the quantum world, the world of photons, atoms, molecules and their building blocks.

This is a world ruled by uncertainty: An atom is both a particle and a wave, and it’s impossible to determine its position and velocity simultaneously.

And that presents a major problem. If the starting point for a quantum particle cannot be precisely known, then there is no way to construct a theory that is sensitive to initial conditions in the way of classical chaos.

Yet quantum mechanics is the most complete theory of the physical world, and therefore should be able to account for all naturally occurring phenomena.

“The problem is that people don’t see [classical] chaos in quantum systems,” said Professor Poul Jessen of the University of Arizona. “And we believe quantum mechanics is the fundamental theory, the theory that describes everything, and that we should be able to understand how classical physics follows as a limiting case of quantum physics.”

Experiments Reveal Classical Chaos In Quantum World

Now, however, Jessen and his group in UA’s College of Optical Sciences have performed a series of experiments that show just how classical chaos spills over into the quantum world.

The scientists report their research in the Oct. 8 issue of the journal Nature in an article titled, “Quantum signatures of chaos in a kicked top.”

Their experiments show clear fingerprints of classical-world chaos in a quantum system designed to mimic a textbook example of chaos known as the “kicked top.”

The quantum version of the top is the “spin” of individual laser-cooled cesium atoms that Jessen’s team manipulates with magnetic fields and laser light, using tools and techniques developed over a decade of painstaking laboratory work.

“Think of an atom as a microscopic top that spins on its axis at a constant rate of speed,” Jessen said. He and his students repeatedly changed the direction of the axis of spin, in a series of cycles that each consisted of a “kick” and a “twist”.

Because spinning atoms are tiny magnets, the “kicks” were delivered by a pulsed magnetic field. The “twists” were more challenging, and were achieved by subjecting the atom to an optical-frequency electric field in a precisely tuned laser beam.

They imaged the quantum mechanical state of the atomic spin at the end of each kick-and-twist cycle with a tomographic technique that is conceptually similar to the methods used in medical ultrasound and CAT scans.

The end results were pictures and stop-motion movies of the evolving quantum state, showing that it behaves like the equivalent classical system in some significant ways.

One of the most dramatic quantum signatures the team saw in their experiments was directly visible in their images: They saw that the quantum spinning top observes the same boundaries between stability and chaos that characterize the motion of the classical spinning top. That is, both quantum and classical systems were dynamically stable in the same areas, and dynamically erratic outside those areas.

A New Signature Of Chaos Called ‘Entanglement’

Jessen’s experiment revealed a new signature of chaos for the first time. It is related to the uniquely quantum mechanical property known as “entanglement.”

Entanglement is best known from a famous thought experiment proposed by Albert Einstein, in which two light particles, or photons, are emitted with polarizations that are fundamentally undefined but nevertheless perfectly correlated. Later, when the photons have traveled far apart in space, their polarizations are both measured at the same instant in time and found to be completely random but always at right angles to each other.

“It’s as though one photon instantly knows the result for the other and adjusts its own polarization accordingly,” Jessen said.

By itself, Einstein’s thought experiment is not directly related to quantum chaos, but the idea of entanglement has proven useful, Jessen added.

“Entanglement is an important phenomenon of the quantum world that has no classical counterpart. It can occur in any quantum system that consists of at least two independent parts,” he said.

Theorists have speculated that the onset of chaos will greatly increase the degree to which different parts of a quantum system become entangled.

Jessen took advantage of atomic physics to test this hypothesis in his laboratory experiments.

The total spin of a cesium atom is the sum of the spin of its valence electron and the spin of its nucleus, and those spins can become quantum correlated exactly as the photon polarizations do in Einstein’s example.

In Jessen’s experiment, the electron and nuclear spins remained unentangled as a result of stable quantum dynamics, but rapidly became entangled if the dynamics were chaotic.

Entanglement is a buzzword in the science community because it is the foundation for quantum cryptography and quantum computing.

“Our work is not directly related to quantum computing and communications,” Jessen said. “It just shows that this concept of entanglement has tendrils in all sorts of areas of quantum physics because entanglement is actually common as soon as the system gets complicated enough.”


Full article and photo:

New ring detected around Saturn


A colossal new ring has been identified around Saturn.

The dusty hoop lies some 13 million km (eight million miles) from the planet, about 50 times more distant than the other rings and in a different plane.

Scientists tell the journal Nature that the tenuous ring is probably made up of debris kicked off Saturn’s moon Phoebe by small impacts.

They think this dust then migrates towards the planet where it is picked up by another Saturnian moon, Iapetus.

The discovery would appear to resolve a longstanding mystery in planetary science: why the walnut-shaped Iapetus has a two-tone complexion, with one side of the moon significantly darker than the other.

“It has essentially a head-on collision. The particles smack Iapetus like bugs on a windshield,” said Anne Verbiscer from the University of Virginia, US.

Observations of the material coating the dark face of Iapetus indicate it has a similar composition to the surface material on Phoebe.

The scale of the new ring feature is astonishing. Nothing like it has been seen elsewhere in the Solar System.

The more easily visible outlier in Saturn’s famous bands of ice and dust is its E-ring, which encompasses the orbit of the moon Enceladus. This circles the planet at a distance of just 240,000 km.

The newly identified torus is not only much broader and further out, it is also tilted at an angle of 27 degrees to the plane on which the more traditional rings sit.

This in itself strongly links the ring’s origin to Phoebe, which also takes a highly inclined path around Saturn.

Scientists suspected the ring might be present and had the perfect tool in the Spitzer space telescope to confirm it.

The US space agency observatory is well suited to picking up the infrared signal expected from cold grains of dust about 10 microns (millionths of a metre) in size.

Phoebe (Nasa)
Impacts on the moon Phoebe are probably supplying the ring

The ring would probably have a range of particle sizes – some bigger than this, and some smaller.

Modelling indicates the pressure of sunlight would push the smallest of these grains towards the orbit of Iapetus, which is circling Saturn at a distance of 3.5 million km.

“The particles are very, very tiny, so the ring is very, very tenuous; and actually if you were standing in the ring itself, you wouldn’t even know it,” Dr Verbiscer told BBC News.

“In a cubic km of space, there are all of 10-20 particles.”

Indeed, so feeble is the ring that scientists have calculated that if all the material were gathered up, it would fill a crater on Phoebe no more than a kilometre across.
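That claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses the figures given in the article (10-micron grains, roughly 15 particles per cubic kilometre, a ring about 13 million km out); the torus cross-section radius is an assumed, illustrative value, since the article does not give the ring’s thickness.

```python
import math

# Figures from the article:
grain_diameter = 10e-6        # m, ~10-micron dust grains
number_density = 15 / 1e9     # particles per m^3 ("10-20 per cubic km")
ring_radius = 13e9            # m, ~13 million km from Saturn

# Assumed for illustration, not from the article:
tube_radius = 1e9             # m, torus cross-section of roughly a million km

grain_volume = math.pi / 6 * grain_diameter**3               # sphere volume
ring_volume = 2 * math.pi**2 * ring_radius * tube_radius**2  # torus volume

total_solid = ring_volume * number_density * grain_volume    # m^3 of dust
# total_solid comes out at a few million cubic metres, small enough
# to fit comfortably inside a kilometre-scale crater.
```

Even with the generous assumed thickness, the total dust volume is tiny, consistent with the scientists’ crater comparison.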

The moon is certainly a credible source for the dust. It is heavily pockmarked. It is clear that throughout its history, Phoebe has been hit many, many times by space rocks and clumps of ice.


Full article and photos:

Schrödinger’s virus

Quantum mechanics

An old thought experiment may soon be realised

But what about the other eight lives?

ONE of the most famous unperformed experiments in science is Schrödinger’s cat. In 1935 Erwin Schrödinger (pictured), who was one of the pioneers of quantum mechanics, imagined putting a cat, a flask of Prussic acid, a radioactive atom, a Geiger counter, an electric relay and a hammer in a sealed box. If the atom decays, the Geiger counter detects the radiation and sends a signal that trips the relay, which releases the hammer, which smashes the flask and poisons the cat.

The point of the experiment is that radioactive decay is a quantum process. The chance of the atom decaying in any given period is known. Whether it has actually decayed (and thus whether the cat is alive or dead) is not—at least until the box is opened. The animal exists, in the argot of the subject, in a “superposition” in which it is both alive and dead at the same time.

Schrödinger’s intention was to illuminate the paradoxes of the quantum world. But superposition (the existence of a thing in two or more quantum states simultaneously) is real and is, for example, the basis of quantum computing. A pair of researchers at the Max Planck Institute for Quantum Optics in Garching, Germany, now propose to do what Schrödinger could not, and put a living organism into a state of quantum superposition.

The organism Ignacio Cirac and Oriol Romero-Isart have in mind is the flu virus. Pedants might object that viruses are not truly alive, but that is a philosophical rather than a naturalistic argument, for they have genes and are capable of reproduction—a capability they lose if they are damaged. The reason for choosing a virus is that it is small. Actual superposition (as opposed to the cat-in-a-box sort) is easiest with small objects, for which there are fewer pathways along which the superposition can break down. Physicists have already put photons, electrons, atoms and even entire molecules into such a state and measured the outcome. In the view of Dr Cirac and Dr Romero-Isart, a virus is just a particularly large molecule, so existing techniques should work on it.

The other thing that helps maintain superposition is low temperature. The less something jiggles about because of heat-induced vibration, the longer it can remain superposed. Dr Cirac and Dr Romero-Isart therefore propose putting the virus inside a microscopic cavity and cooling it down to its state of lowest energy (ground state, in physics parlance) using a piece of apparatus known as a laser trap. This ingenious technique—which won its inventors, one of whom was Steven Chu, now America’s energy secretary, a Nobel prize—works by bombarding an object with laser light at a frequency just below that which it would readily absorb and re-emit if it were stationary. This slows the atoms’ movement, and hence lowers their temperature, to a fraction of a degree above absolute zero.

Once that is done, another laser pulse will jostle the virus from its ground state into an excited state, just as a single atom is excited by moving one of its electrons from a lower to a higher orbital. By properly applying this pulse, Dr Cirac believes it will be possible to leave the virus in a superposition of the ground and excited states.

For that to work, however, the virus will need to have certain physical properties. It will have to be an insulator and to be transparent to the relevant laser light. And it will have to be able to survive in a vacuum. Such viruses do exist. The influenza virus is one example. Its resilience is legendary. It can survive exposure to a vacuum, and it seems to be an insulator—which is why the researchers have chosen it. And if the experiment works on a virus, they hope to move on to something that is indisputably alive: a tardigrade.

Tardigrades, also known as water bears, are tiny but resilient animals. They can survive in vacuums and at very low temperatures. And, although the difference between ground state and an excited state is not quite the difference between life and death, Schrödinger would no doubt have been amused that his 70-year-old jeu d’esprit has provoked such an earnest following.


Full article and photo:

Nobel Awarded for Advances in Harnessing Light

nobel physics

Half of this year’s Nobel Prize in Physics went to Charles K. Kao, center. The other half of the prize was shared by two researchers at Bell Labs, Willard S. Boyle, left, and George E. Smith.

The mastery of light through technology was the theme of this year’s Nobel Prize in Physics as the Royal Swedish Academy of Sciences honored breakthroughs in fiber optics and digital photography.

Half of the $1.4 million prize went to Charles K. Kao for insights in the mid-1960s about how to get light to travel long distances through glass strands, leading to a revolution in fiber optic cables. The other half of the prize was shared by two researchers at Bell Labs, Willard S. Boyle and George E. Smith, for inventing the semiconductor sensor known as a charge-coupled device, or CCD for short. CCDs now fill digital cameras by the millions.

The prize will be awarded in Stockholm on Dec. 10.

Fiber optic cables and lasers capable of sending pulses of light down them already existed when Dr. Kao started working on fiber optics. But at that time, the light pulses could travel only about 20 meters through the glass fibers before 99 percent of the light had dissipated. His goal was to extend the 20 meters to a kilometer. At the time, many researchers thought tiny imperfections, like holes or cracks in the fibers, were scattering the light.

In January 1966, Dr. Kao, then working at the Standard Telecommunication Laboratories in England, presented his findings. It was not the manufacturing of the fiber that was at fault, but rather that the ingredient for the fiber — the glass — was not pure enough. A purer glass made of fused quartz would be more transparent, allowing the light to pass more easily. In 1970, researchers at Corning Glass Works were able to produce a kilometer-long ultrapure optical fiber.
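In decibel terms, losing 99 percent of the light over 20 metres amounts to roughly 1,000 dB of attenuation per kilometre, while the 20 dB/km figure widely cited as Kao’s target is the same 1 percent survival stretched over a full kilometre. A small Python check of that arithmetic:

```python
import math

def attenuation_db_per_km(transmitted_fraction, length_m):
    """Attenuation (dB/km) implied by a surviving power fraction over a length."""
    total_db = -10 * math.log10(transmitted_fraction)  # total loss in decibels
    return total_db * (1000 / length_m)                # scale to per-kilometre

early = attenuation_db_per_km(0.01, 20)     # 1960s glass: ~1000 dB/km
goal = attenuation_db_per_km(0.01, 1000)    # Kao's target: ~20 dB/km
```

The factor-of-50 gap between those two numbers is the improvement in glass purity that Kao argued was possible and that Corning later achieved.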

According to the academy in its prize announcement, the optical cables in use today, if unraveled, would equal a fiber more than a billion kilometers long.

In September 1969, Dr. Boyle and Dr. Smith, working at Bell Labs in Murray Hill, N.J., sketched out an idea on a blackboard in Dr. Boyle’s office. Their idea, originally intended for electronic memory, takes advantage of the photoelectric effect, which was discovered by Albert Einstein and won him the Nobel in 1921. When light hits a piece of silicon, it knocks out electrons. The brighter the light, the more electrons are knocked out.

In a CCD, the knocked-out electrons are gathered in small wells, where they are counted; each well corresponds to one pixel of the image. The data from the array of wells can then be read out and reconstructed as an image. A 10-megapixel camera’s sensor contains 10 million such wells.
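A toy sketch of that readout idea: each well collects electrons in proportion to the light falling on it, and reading out the grid of wells reproduces the image. The quantum-efficiency figure below is an assumed, illustrative number, not a specification of any real sensor.

```python
QUANTUM_EFFICIENCY = 0.8   # fraction of photons that free an electron (assumed)

def read_out(photon_counts):
    """Turn a 2-D grid of photon counts into per-pixel electron counts."""
    return [[int(p * QUANTUM_EFFICIENCY) for p in row] for row in photon_counts]

scene = [[100, 200], [0, 50]]   # brighter regions deliver more photons
image = read_out(scene)         # -> [[80, 160], [0, 40]]
```

Real CCDs shift the accumulated charge from well to well toward a single output amplifier, but the proportionality between light and counted electrons is the essence of the photoelectric trick.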

Besides consumer cameras, CCDs also made possible the cosmic panoramas from the Hubble Space Telescope and the Martian postcards taken by NASA’s rovers.

All three of the winning scientists hold American citizenship. Dr. Kao is also a British citizen, and Dr. Boyle is also a Canadian citizen.


Full article and photos:

Physicists Create First Atomic-scale Map Of Quantum Dots


An atomic-scale map of the interface between a quantum dot and its substrate. Each peak represents a single atom. The map, made with high-intensity X-rays, is a slice through a vertical cross-section of the dot.

University of Michigan physicists have created the first atomic-scale maps of quantum dots, a major step toward the goal of producing “designer dots” that can be tailored for specific applications.

Quantum dots—often called artificial atoms or nanoparticles—are tiny semiconductor crystals with wide-ranging potential applications in computing, photovoltaic cells, light-emitting devices and other technologies. Each dot is a well-ordered cluster of atoms, 10 to 50 atoms in diameter.

Engineers are gaining the ability to manipulate the atoms in quantum dots to control their properties and behavior, through a process called directed assembly. But progress has been slowed, until now, by the lack of atomic-scale information about the structure and chemical makeup of quantum dots.

The new atomic-scale maps will help fill that knowledge gap, clearing the path to more rapid progress in the field of quantum-dot directed assembly, said Roy Clarke, U-M professor of physics and corresponding author of a paper on the topic published online Sept. 27 in the journal Nature Nanotechnology.

Lead author of the paper is Divine Kumah of the U-M’s Applied Physics Program, who conducted the research for his doctoral dissertation.

“I liken it to exploration in the olden days,” Clarke said of dot mapping. “You find a new continent and initially all you see is the vague outline of something through the mist. Then you land on it and go into the interior and really map it out, square inch by square inch.

“Researchers have been able to chart the outline of these quantum dots for quite a while. But this is the first time that anybody has been able to map them at the atomic level, to go in and see where the atoms are positioned, as well as their chemical composition. It’s a very significant breakthrough.”

To create the maps, Clarke’s team illuminated the dots with a brilliant X-ray photon beam at Argonne National Laboratory’s Advanced Photon Source. The beam acts like an X-ray microscope to reveal details about the quantum dot’s structure. Because X-rays have very short wavelengths, they can be used to create super-high-resolution maps.

“We’re measuring the position and the chemical makeup of individual pieces of a quantum dot at a resolution of one-hundredth of a nanometer,” Clarke said. “So it’s incredibly high resolution.”

A nanometer is one-billionth of a meter.

The availability of atomic-scale maps will quicken progress in the field of directed assembly. That, in turn, will lead to new technologies based on quantum dots. The dots have already been used to make highly efficient lasers and sensors, and they might help make quantum computers a reality, Clarke said.

“Atomic-scale mapping provides information that is essential if you’re going to have controlled fabrication of quantum dots,” Clarke said. “To make dots with a specific set of characteristics or a certain behavior, you have to know where everything is, so that you can place the atoms optimally. Knowing what you’ve got is the most important thing of all.”

In addition to Clarke, co-authors of the Nature Nanotechnology paper are Sergey Shusterman, Yossi Paltiel and Yizhak Yacoby.

The research was sponsored by a grant from the National Science Foundation. The U.S. Department of Energy supported work at Argonne National Laboratory’s Advanced Photon Source.


Ancient Rainforests Resilient To Climate Change


Earth’s first rainforests. (Credit: Courtesy of Mary Parrish, Smithsonian Institution)

Climate change wreaked havoc on the Earth’s first rainforests but they quickly bounced back, scientists reveal. The findings of the research team, led by Dr Howard Falcon-Lang from Royal Holloway, University of London, are based on spectacular discoveries of 300-million-year-old rainforests in coal mines in Illinois, USA.

Preserved over vast areas, these fossilized rainforests in Illinois are the largest of their kind in the world. The rocks at this site – in which the rainforests occur – contain evidence for climate fluctuations. During cold ‘ice ages’, fossils show that the tropics dried out and rainforests were pushed to the brink of extinction. However, rainforests managed to recover and return to their former glory.

Dr Falcon-Lang, from the Department of Earth Sciences, worked with colleagues at the Smithsonian Institution and Illinois Geological Survey. In their paper published in the journal Geology, they show that rainforest species all but vanished at the height of the ice ages. Yet they also reveal that the coal beds that formed shortly after, as the climate warmed, contain abundant rainforest species.

Falcon-Lang said, ‘These discoveries radically change our understanding of the Earth’s first rainforests. We used to think these were stable ecosystems, unchanged for tens of millions of years. Now we know they were incredibly dynamic, constantly buffeted by climate change’.

The research may also shed light on how climate change will affect the Amazon rainforest in the future. Dr Falcon-Lang commented, ‘If we can understand how climate shaped rainforests in the distant past, we can infer how they will respond in the future. We’ve shown that within certain limits, rainforests are resilient to climate change; however, extreme climate change may push rainforests beyond a point of no return’.

The work is part of a five-year project funded by the UK’s Natural Environment Research Council and aims to study how climate change affected the Earth’s first rainforests. These ancient rainforests date from the Carboniferous period, 300 million years ago, when most of the world’s coal resources were formed.



Algae And Pollen Grains Provide Evidence Of Remarkably Warm Period In Antarctica’s History

For Sophie Warny, LSU assistant professor of geology and geophysics and curator at the LSU Museum of Natural Science, years of patience in analyzing Antarctic samples with low fossil recovery finally led to a scientific breakthrough. She and colleagues from around the world now have proof of a sudden, remarkably warm period in Antarctica that occurred about 15.7 million years ago and lasted for a few thousand years.

Last year, as Warny was studying samples sent to her from the latest Antarctic Geologic Drilling Program, or ANDRILL AND-2A, a multinational collaboration between the Antarctic Programs of the United States (funded by the National Science Foundation), New Zealand, Italy and Germany, one sample stood out as a complete anomaly.

“First I thought it was a mistake, that it was a sample from another location, not Antarctica, because of the unusual abundance in microscopic fossil cysts of marine algae called dinoflagellates. But it turned out not to be a mistake, it was just an amazingly rich layer,” said Warny. “I immediately contacted my U.S. colleague, Rosemary Askin, our New Zealand colleagues, Michael Hannah and Ian Raine, and our German colleague, Barbara Mohr, to let them know about this unique sample as each of our countries had received a third of the ANDRILL samples.”

Some colleagues had noted an increase in pollen grains of woody plants in the sample immediately above, but none of the other samples had such a unique abundance in algae, which at first gave Warny some doubts about potential contamination.

“But the two scientists in charge of the drilling, David Harwood of University of Nebraska – Lincoln, and Fabio Florindo of Italy, were equally excited about the discovery,” said Warny. “They had noticed that this thin layer had a unique consistency that had been characterized by their team as a diatomite, which is a layer extremely rich in fossils of another algae called diatoms.”

All research parties involved met at the Antarctic Research Facility at Florida State University in Tallahassee. Together, they sampled the zone of interest in great detail and processed the new samples in various labs. One month later, the unusual abundance in microfossils was confirmed.

Among the 1,107 meters of sediments recovered and analyzed for microfossil content, a two-meter thick layer in the core displayed extremely rich fossil content. This is unusual because the Antarctic ice sheet was formed about 35 million years ago, and the frigid temperatures there impede the presence of woody plants and blooms of dinoflagellate algae.

“We all analyzed the new samples and saw a 2,000-fold increase in two species of fossil dinoflagellate cysts, a five-fold increase in freshwater algae and up to an 80-fold increase in terrestrial pollen,” said Warny. “Together, these shifts in the microfossil assemblages represent a relatively short period of time during which Antarctica became abruptly much warmer.”

These palynomorphs, a term used to describe dust-size organic material such as pollen, spores and cysts of dinoflagellates and other algae, provide hard evidence that Antarctica underwent a brief but rapid period of warming about 15 million years before present.

“This event will lead to a better understanding of global connections and climate forcing, in other words, it will provide a better understanding of how external factors imposed fluctuations in Earth’s climate system,” said Harwood. “The Mid-Miocene Climate Optimum has long been recognized in global proxy records outside of the Antarctic region. Direct information from a setting proximal to the dynamic Antarctic ice sheets responsible for driving many of these changes is vital to the correct calibration and interpretation of these proxy records.”

These startling results will offer new insight into Antarctica’s climatic past – insights that could potentially help climate scientists better understand the current climate change scenario.

“In the case of these results, the microfossils provide us with quantitative data of what the environment was actually like in Antarctica at the time, showing how this continent reacted when climatic conditions were warmer than they are today,” said Warny.

According to the researchers, these fossils show that land temperatures reached a January average of 10 degrees Celsius – the equivalent of approximately 50 degrees Fahrenheit – and that estimated sea surface temperatures ranged between zero and 11.5 degrees Celsius. The presence of freshwater algae in the sediments suggests to researchers that an increase in meltwater and perhaps also in rainfall produced ponds and lakes adjacent to the Ross Sea during this warm period, which would obviously have resulted in some reduction in sea ice.

These findings most likely reflect a poleward shift of the jet stream in the Southern Hemisphere, which would have pushed warmer water toward the pole and allowed a few dinoflagellate species to flourish under such ice-free conditions. Researchers believe that shrub-like woody plants might also have been able to proliferate during an abrupt and brief warmer time interval.

“An understanding of this event, in the context of timing and magnitude of the change, has important implications for how the climate system operates and what the potential future response in a warmer global climate might be,” said Harwood. “A clear understanding of what has happened in the past, and the integration of these data into ice sheet and climate models, are important steps in advancing the ability of these computer models to reproduce past conditions, and with improved models be able to better predict future climate responses.”

While the results are certainly impressive, the work isn’t yet complete.

“The SMS Project Science Team is currently looking at the stratigraphic sequence and timing of climate events evident throughout the ANDRILL AND-2A drillcore, including those that enclose this event,” said Florindo. “A broader understanding of ice sheet behavior under warmer-than-present conditions will emerge.”



World’s Most Sensitive Astronomical Camera Developed

A team of Université de Montréal researchers, led by physics PhD student Olivier Daigle, has developed the world’s most sensitive astronomical camera. Marketed by Photon etc., a young Quebec firm, the camera will be used by the Mont-Mégantic Observatory and NASA, which purchased the first unit.

The camera is built around a CCD controller for counting photons: a digital imaging device that amplifies the faint signals recorded by astronomical cameras, or by other instruments used in very low light. The controller produces 25 gigabytes of data per second.

Electric signals used to pilot the imaging chip are 500 times more precise than those of a conventional controller. This increased precision helps reduce the noise that interferes with the weak signals coming from astronomical objects in the night sky. The controller substantially increases the sensitivity of detectors, an improvement comparable to doubling the diameter of the Mont-Mégantic telescope’s mirror.
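The mirror comparison follows from simple geometry: the light a telescope gathers scales with its collecting area, which grows as the square of the mirror diameter. A quick check with illustrative numbers:

```python
def relative_light_gain(d_new, d_old):
    """Collecting area, and hence light gathered, scales as diameter squared."""
    return (d_new / d_old) ** 2

gain = relative_light_gain(2.0, 1.0)  # doubling the mirror diameter
# gain == 4.0: four times the light, the benefit the new controller
# is said to match without touching the telescope at all
```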

“The first astronomical results are astounding and highlight the increased sensitivity acquired by the new controller,” says Daigle. “The clarity of the images brings us so much closer to the stars that we are attempting to understand.”

Photon etc., a thriving Quebec company, developed a commercial version of the controller devised by Daigle and his team and integrated it into complete cameras. NASA was the first to place an order for one of these cameras, soon followed by a research group from the University of Sao Paulo and by a European-Canadian consortium equipping a telescope in Chile. In addition, researchers in nuclear medicine, bioluminescence, Raman imaging and other fields requiring rapid imagery have expressed interest in purchasing the cameras.

Photon etc. is a Quebec research and development company that specializes in the manufacturing of photonic measurement and analysis instruments. The company is growing rapidly after spending four years in the IT business incubator of the Université de Montréal and its affiliated École Polytechnique.

“The sensitivity of the cameras developed by the Centre de recherche en astrophysique du Québec (CRAQ) and Photon etc. will not only help us better understand the depths of the universe but also better perceive weak optical signals coming from the human body. These signals can reveal the early signs of several diseases such as macular degeneration and certain types of cancer. An early diagnosis leads to early intervention, hopefully before the disease becomes more serious, saving lives and significant costs,” says Sébastien Blais-Ouellette, president of Photon etc.

Scientific results for the camera were recently featured in the Publications of the Astronomical Society of the Pacific, a prestigious instrumentation journal.

This research was made possible thanks to the financial support of the Natural Sciences And Engineering Research Council of Canada, Photon etc., the Canada Foundation for Innovation, the Fonds québécois de la recherche sur la nature et les technologies.



Samoa tsunami: 10 facts about tsunamis

A tsunami in the Pacific has killed more than 100 people in Samoa. We look at what causes tsunamis and what to look out for.


Christopher Moore of NOAA looks at computer graphs at the Pacific Tsunami Warning Centre in Hawaii, concerning the earthquake and tsunami that hit American Samoa.

The word ‘tsunami’ is Japanese, and translates as ‘harbour wave’. Tsunamis used to be called ‘tidal waves’, but scientists have abandoned that term because tsunamis have nothing to do with tides.

A tsunami consists of a series of waves, known as a wave train, rather than a single wave. For a large tsunami, these waves could arrive over a period of hours, and the first is not necessarily the largest.

Most tsunamis are caused by undersea earthquakes. A magnitude 8.0 earthquake is behind the Samoan disaster, according to the US Geological Survey. An earthquake will cause a tsunami if it is powerful enough and if it is under a sufficient depth of water.

About 80 per cent of all tsunamis take place in the Pacific Ocean.

The theory that underwater earthquakes were behind tsunamis was first put forward by the ancient Greek historian Thucydides, in 426 BC, in his book History of the Peloponnesian War.

Volcanic eruptions, massive landslides, meteorite impacts and underwater nuclear explosions can also cause tsunamis, as can tropical cyclones or other weather conditions. A storm-caused tsunami is known as a ‘meteotsunami’; such an event devastated Burma in 2008.

Despite the enormous size of the waves when they hit the land, the amplitude (wave height) of a tsunami is often as little as three feet in the open ocean, while its wavelength (distance between two peaks) can be as long as 120 miles. At this point it will be travelling at more than 500mph.

As the tsunami reaches shallower water the waves compress, making the wavelength shorter and the amplitude higher. The wave slows down, although it will still be travelling at around 50mph.
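These speeds follow from the shallow-water wave relation, v ≈ √(g·d), where d is the water depth. A rough check in Python, with illustrative depths:

```python
import math

def tsunami_speed_mph(depth_m):
    """Shallow-water wave speed v = sqrt(g * d), converted to mph."""
    g = 9.81                      # gravitational acceleration, m/s^2
    return math.sqrt(g * depth_m) * 2.23694  # m/s -> mph

deep = tsunami_speed_mph(4000)    # mid-ocean depth: roughly 440 mph
shallow = tsunami_speed_mph(50)   # shallow shelf: about 50 mph
```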

Predicting a tsunami is near impossible. In some cases a few minutes’ warning can be gained when the water along the shore suddenly recedes, in a phenomenon called ‘drawback’. This happens when a tsunami’s trough reaches the land before the peak.

A 10-year-old English girl, Tilly Smith, saved nearly a hundred lives with this knowledge ahead of the 2004 Indian Ocean tsunami. She had learned about drawback in a geography lesson and warned her family, who in turn told others. She has since given a speech at the United Nations and had an asteroid named after her: 20002 Tillysmith.



Peering into the future

Building a bionic eye

A contact lens that could put names to faces and guide soldiers in combat

SINCE the late 19th century, people with imperfect vision have been able to use contact lenses to improve their eyesight. In the early days these lenses were made of glass and could perform only simple visual corrections. Now they are usually made of plastic and can be moulded into the more complex shapes appropriate to those who suffer from astigmatism or who require bifocals. They can also be tinted, for people who wish to change the colour of their eyes. Yet the main purpose of even the most sophisticated contact lens remains what it always has been: to improve a person’s sight. That is about to change.

Researchers at the University of Washington, in Seattle, led by Babak Parviz, have incorporated electronic circuitry into a plastic lens, including light-emitting diodes (LEDs) for “on-eye” displays, transistors for computing, a radio for wireless communication and an antenna for collecting power from a radio source, such as a mobile phone, in a person’s pocket.

Making a “smart” lens like this is not easy. Electronic components are usually manufactured at temperatures which would melt plastic and are made of materials that do not naturally adhere to a contact lens’s plastic. Dr Parviz and his colleagues have therefore designed a lens that is peppered with small wells, ten microns deep, that are connected by a network of tiny metal wires. Each well is sculpted so that a component of a particular shape will fit snugly into it and, at its bottom, it contains a small amount of an alloy with a low melting-point. In addition, wells that will accommodate LEDs must be fitted with microlenses to focus the light from the LED in a way that the eye can cope with.

The components are manufactured individually and suspended in a liquid. This suspension is then washed over the lens, allowing the components to blunder into holes of the appropriate shape, where they stay put. The alloy is then gently heated; as it melts, it bonds the components to the wires and thus to one another.
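The shape-matching step can be pictured with a toy simulation (the well names and shapes below are hypothetical, not the Washington team’s actual geometry): each suspended component settles only into a well whose shape it matches.

```python
import random

# Toy fluidic self-assembly: components suspended in liquid tumble over
# the lens; a component settles only into a well whose shape matches.
wells = {"w1": "circle", "w2": "square", "w3": "triangle"}
components = ["square", "triangle", "circle"]

random.seed(1)
random.shuffle(components)          # components arrive in random order
placed = {}
for shape in components:
    for well, mold in wells.items():
        if well not in placed and mold == shape:
            placed[well] = shape    # snug fit: the component stays put
            break
# every well ends up holding the matching component
```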

The researchers say that the resulting circuitry requires so little power that it does not produce enough heat to cause discomfort. And although Dr Parviz has not, himself, worn the lens, he has tested it on rabbits—and the animals do not seem to find it uncomfortable.

So far, the prototype’s display is rudimentary (in truth, it consists of but a single LED). However, Dr Parviz and his colleagues are working on a lens that can accommodate an eight-by-eight array of LEDs. They are also exploring a design which produces images using tiny shutters, in the manner of a liquid-crystal display.

As well as an LED, the prototype contains a small radio chip and antenna so that it can be powered without wires. The researchers will discuss the performance of their wireless-power system, which taps into mobile-phone frequencies in the 900-megahertz to 6-gigahertz range and draws about 100 microwatts of power, at a conference in Beijing in November.

What the display will show, of course, is up to the imagination—the name, perhaps, of someone the wearer has met but does not recall, or the street directions in an unfamiliar city. Or, perhaps, the quickest route to a target that needs destroying. For this sort of technology surely has military applications as well.



Finding Order in the Apparent Chaos of Currents


FLUID MOVEMENT Sensors near Santa Cruz, Calif., take surface current measurements in Monterey Bay.

Suppose a blob of dioxin-rich pesticide is spilled into Monterey Bay. It might quickly disperse to the Pacific Ocean. But hours later, a spill of the same size at the same spot could circle near the coastline, posing a greater danger to marine life. The briny surface waters of the bay churn so chaotically that a slight shift in the place or time an oil drop, a buoy — or even a person — falls in can dictate whether it is swept out to the open ocean or swirls near the shore.

But the results are not unpredictable. A team of scientists studying Monterey Bay since 2000 has found that underlying its complex, seemingly jumbled currents is a structure that guides the dispersal patterns, a structure that changes over time.

With the aid of high-frequency radar that tracks the speed and direction of the flowing waters, and computers that rapidly perform millions of calculations, the scientists found that a hidden skeleton guided whether floating debris lingered or exited the bay.

Over the past 10 years, scientists have made enormous strides in their ability to identify and make images of the underlying mechanics of flowing air and water, and to predict how objects move through these flows.

Assisted by instruments that can track in fine detail how parcels of fluid move, and by low-cost computers that can crunch vast amounts of data quickly, researchers have found hidden structures beyond Monterey Bay, structures that explain why aircraft meet unexpected turbulence, why the air flow around a car causes drag and how blood pumps from the heart’s ventricles. In December, the journal Chaos will highlight the research under way to track the moving skeletons embedded in complex flows, known as Lagrangian coherent structures.

“There’s been an explosion of interest in this area,” said David K. Campbell, editor in chief of Chaos, a physicist and provost at Boston University. “Why it’s become more interesting is that experimentalists can now watch these structures emerge.”

The patterns of flow have fascinated thinkers for centuries. In the 1500s, Leonardo da Vinci sketched the swirling eddies he saw in rivers and the vortexes of blood he imagined in the aortic valve. Just as those visible patterns of flow change quickly, eluding our ability to predict the fate of objects caught up in them, the hidden structures of flow also move and morph over time.

The concept of the structures grew out of dynamical systems theory, a branch of mathematics used to understand complicated phenomena that change over time. The discovery of the structures in a wide range of real-world cases has shown that they play a key role in complex and chaotic fluid flows in the atmosphere and ocean.

The structures are invisible because they often exist only as dividing lines between parts of a flow that are moving at different speeds and in different directions. In the ocean, the path of a drop of water on one side of such a structure might diverge from the path of a drop of water on the other side; they will drift farther apart as time passes.
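A minimal example of that divergence (a textbook saddle flow, not the bay’s actual velocity field) is dx/dt = x, dy/dt = −y, where the y-axis plays the role of a repelling structure:

```python
# Toy saddle flow: dx/dt = x, dy/dt = -y. The y-axis acts like a
# repelling Lagrangian coherent structure: tracers started on
# opposite sides of it drift apart exponentially fast.
def advect(x, y, dt=0.01, steps=300):
    for _ in range(steps):
        x += x * dt   # simple Euler step of the velocity field
        y += -y * dt
    return x, y

left = advect(-0.01, 1.0)             # just left of the structure
right = advect(0.01, 1.0)             # just right of it
separation = right[0] - left[0]       # grows from 0.02 to about 0.4
```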

“They aren’t something you can walk up to and touch,” Jerrold E. Marsden, an engineering and mathematics professor at Caltech, said of the structures. “But they are not purely mathematical constructions, either.”

As an analogy, Dr. Marsden suggests imagining a line that divides a part of a city that has been affected by a disease outbreak from a part that has not. The line is not a fence or a road, but it still marks a physical barrier. And as the outbreak spreads, the line will change.

To find the structures, scientists must track flow, not by watching it go by but from the perspective of the droplets of water or molecules of air moving in it. “It’s like being a surfer,” Dr. Campbell said. “You want to catch the wave and move with the wave.”


In the laboratory, researchers shine lasers on tiny particles caught in a flow, capturing their speed and trajectory with fast, high-resolution digital cameras similar to the way tracer rounds from machine guns track the path of bullets. In the ocean or atmosphere, scientists rely on instantaneous data from high-frequency radar, laser detection systems, buoys and satellites. In the human body, phase-contrast magnetic resonance imaging has helped researchers map the complex patterns of blood flow in detail. Computers take in the data from all those sources, applying algorithms that unveil the flow structures.

“We’re just recognizing that these things exist and are playing a role in a variety of scenarios,” said Thomas Peacock, a mechanical engineering professor at M.I.T. who is evaluating how Lagrangian coherent structures affect vehicle performance and efficiency. “The idea is that cars, airplanes and submarines down the line would be fitted with sensors that will help them adapt to these structures.”

Studies of the air flow patterns surrounding Hong Kong International Airport have shown that Lagrangian coherent structures cause unexpected jolts to planes during landing attempts, forcing pilots to waste fuel while they revert to holding patterns. George Haller, an engineering professor at McGill University in Montreal who forged the mathematical criteria for finding such structures in fluid flows, is working with the airport’s officials to design a tool that allows pilots to see and navigate around the structures. It will rely on data from laser scans, analyzed by computers as planes approach the airport.

At Stanford, researchers are mapping blood flow in patients with abdominal aortic aneurysms to see whether frequent exercise changes the flow structures in ways that correlate to slower bulging of the artery.

The scientists studying Monterey Bay found a Lagrangian coherent structure that acts as a moving ridge, separating a region of the bay that spreads pollutants out to sea and a region that recirculates them in the bay. They watched this ridge drift and change over 22 days and found that if computed in real time, it could be used to predict one-day windows when pollutants could do less damage to the bay environment.

The scientists proposed building a holding tank for the fertilizers and pesticides that wash from farmland into the neighboring watershed that could release pollutants only at times when they would quickly drift into the ocean, where they would be so diluted they would pose less harm to marine life. In a later experiment, scientists found that the path of buoys dispatched in the bay followed the path predicted by the computer simulations.

Researchers who studied the waters along the southeastern coast of Florida found a similar structure that they argued could be used to reduce the effects of pollution near Hollywood Beach, south of Fort Lauderdale.

Their research in Monterey Bay piqued the interest of Art Allen, a physical oceanographer for the Coast Guard who thinks that Lagrangian coherent structures could improve search-and-rescue operations for people lost at sea by offering more precision than current techniques.

Researchers in private industry and the French Navy have expressed interest in using models of the structures to track the spread of oil after spills in coastal areas, said Francois Lekien, an applied mathematics professor at the École Polytechnique at the Université Libre de Bruxelles in Belgium who was a co-author of the bay studies.

Strategies based on Lagrangian coherent structures have yet to be tested to see if they curb coastal pollution. And they have several limitations. Scientists cannot yet predict what happens to pollutants that do not float on the ocean surface. The models do not yet account for the interaction with wind patterns that also guide how floating objects or people drift at sea. The method also requires continuing, detailed data akin to what was available in Monterey Bay, which has an ocean monitoring program that far surpasses that of most coastal areas.

Even if the structures in flow do not guide engineering or pollution strategies as well as researchers hope, many scientists believe that unearthing and visualizing them provides useful insights. For example, the structures identified in coastal waters have exposed flaws in our intuition about flow. “There are myths out there that it’s O.K. to dump pollutants at high tide,” said Dr. Marsden, co-author of the Monterey Bay and coastal Florida studies. “But it’s really these structures that will determine where pollutants end up.”

Finding the structures in various settings has also given researchers a fresh perspective on what remains a great scientific puzzle: the dynamics of flow.

“In complex systems such as the atmosphere, there are a lot of things that people can’t explain offhand,” Dr. Haller said. “People used to attribute this to randomness or chaos. But it turns out, when you look at data sets and find these structures, you can actually explain those patterns.”

Bina Venkataraman, New York Times



Lab Demonstrates 3-D Printing In Glass


An object printed from powdered glass, using the Solheim Lab’s new Vitraglyphic process.

A team of engineers and artists working at the University of Washington’s Solheim Rapid Manufacturing Laboratory has developed a way to create glass objects using a conventional 3-D printer. The technique allows a new type of material to be used in such devices.

The team’s method, which it named the Vitraglyphic process, is a follow-up to the Solheim Lab’s success last spring printing with ceramics.

“It became clear that if we could get a material into powder form at about 20 microns we could print just about anything,” said Mark Ganter, a UW professor of mechanical engineering and co-director of the Solheim Lab. (Twenty microns is less than one thousandth of an inch.)

Three-dimensional printers are used as a cheap, fast way to build prototype parts. In a typical powder-based 3-D printing system, a thin layer of powder is spread over a platform and software directs an inkjet printer to deposit droplets of binder solution only where needed. The binder reacts with the powder to bind the particles together and create a 3-D object.
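The layer-by-layer process can be sketched as a toy voxel model (an illustration of powder-bed printing in general, not the Solheim Lab’s software): a 1 marks where binder is deposited on each layer, and stacking the layers yields the part.

```python
# Each layer is a grid of powder cells; 1 marks where the inkjet head
# deposits binder, 0 is loose powder that gets brushed away afterwards.
layer_a = [[0, 1, 0],
           [1, 1, 1],
           [0, 1, 0]]
layer_b = [[0, 0, 0],
           [0, 1, 0],
           [0, 0, 0]]

part = [layer_a, layer_b]   # bottom-up stack of printed layers
solid_voxels = sum(cell for layer in part for row in layer for cell in row)
# solid_voxels counts the bound cells that will fuse when fired
```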

Glass powder doesn’t readily absorb liquid, however, so the approach used with ceramic printing had to be radically altered.

“Using our normal process to print objects produced gelatin-like parts when we used glass powders,” said mechanical engineering graduate student Grant Marchelli, who led the experimentation. “We had to reformulate our approach for both powder and binder.”

By adjusting the ratio of powder to liquid the team found a way to build solid parts out of powdered glass purchased from Spectrum Glass in Woodinville, Wash. Their successful formulation held together and fused when heated to the required temperature.

Glass can be transparent or opaque, but it is distinguished as an inorganic material (one that contains no carbon) that solidifies from a molten state without its molecules forming an ordered crystalline structure. Because the molecules remain in a disordered state, glass is an amorphous solid, often loosely described as a super-cooled liquid rather than a true crystalline solid.

In an instance of new technology rediscovering and building on the past, Ganter points out that 3-D printed glass bears remarkable similarities to pâte de verre, a technique for creating glassware. In pâte de verre, glass powder is mixed with a binding material such as egg white or enamel, placed in a mold and fired. The technique dates from early Egyptian times. With 3-D printing the technique takes on a modern twist.

As with its ceramics 3-D printing recipe, the Solheim lab is releasing its method of printing glass for general use.

“By publishing these recipes without proprietary claims, we hope to encourage further experimentation and innovation within artistic and design communities,” said Duane Storti, a UW associate professor of mechanical engineering and co-director of the Solheim Lab.

Artist Meghan Trainor, a graduate student in the UW’s Center for Digital Arts and Experimental Media working at the Solheim Lab, was the first to use the new method to produce objects other than test shapes.

“Creating kiln-fired glass objects from digital models gives my ideas an immediate material permanence, which is a key factor in my explorations of digital art forms,” Trainor said. “Moving from idea to design to printed part in such a short period of time creates an engaging iterative process where the glass objects form part of a tactile feedback loop.”

Ronald Rael, an assistant professor of architecture at the University of California, Berkeley, has been working with the Solheim Lab to set up his own 3-D printer. Rael is working on new kinds of ceramic bricks that can be used for evaporative cooling systems.

“3-D printing in glass has huge potential for changing the thinking about applications of glass in architecture,” Rael said. “Before now, there was no good method of rapid prototyping in glass, so testing designs is an expensive, time-consuming process.” Rael adds that 3-D printing allows one to insert different forms of glass to change the performance of the material at specific positions as required by the design.

The new method would also create a way to repurpose used glass for new functions, Ganter said. He sees recycled glass as a low-cost material that can help bring 3-D printing within the budget of a broader community of artists and designers.

The Solheim Rapid Prototyping Laboratory, on the UW’s Seattle campus, specializes in advanced research and teaching in solid modeling, rapid prototyping, and innovative 3-D printing systems.

