Juno to remain in current orbit at Jupiter

NASA's Juno spacecraft soared directly over Jupiter's south pole when JunoCam acquired this image on February 2, 2017 at 6:06 a.m. PT (9:06 a.m. ET), from an altitude of about 62,800 miles (101,000 kilometers) above the cloud tops. Credit: NASA
NASA's Juno mission to Jupiter, which has been in orbit around the gas giant since July 4, 2016, will remain in its current 53-day orbit for the remainder of the mission. This will allow Juno to accomplish its science goals, while avoiding the risk of a previously-planned engine firing that would have reduced the spacecraft's orbital period to 14 days.
"Juno is healthy, its instruments are fully operational, and the data and images we've received are nothing short of amazing," said Thomas Zurbuchen, associate administrator for NASA's Science Mission Directorate in Washington. "The decision to forego the burn is the right thing to do—preserving a valuable asset so that Juno can continue its exciting journey of discovery."
Juno has successfully orbited Jupiter four times since arriving at the giant planet, with the most recent orbit completed on Feb. 2. Its next close flyby of Jupiter will be March 27.
The orbital period does not affect the quality of the science collected by Juno on each flyby, since the altitude over Jupiter will be the same at the time of closest approach. In fact, the longer orbit provides new opportunities that allow further exploration of the far reaches of space dominated by Jupiter's magnetic field, increasing the value of Juno's research.
During each orbit, Juno soars low over Jupiter's cloud tops—as close as about 2,600 miles (4,100 kilometers). During these flybys, Juno probes beneath the obscuring cloud cover and studies Jupiter's auroras to learn more about the planet's origins, structure, atmosphere and magnetosphere.
The original Juno flight plan envisioned the spacecraft looping around Jupiter twice in 53-day orbits, then reducing its orbital period to 14 days for the remainder of the mission. However, two helium check valves that are part of the plumbing for the spacecraft's main engine did not operate as expected when the propulsion system was pressurized in October. Telemetry from the spacecraft indicated that it took several minutes for the valves to open, while it took only a few seconds during past main engine firings.
"During a thorough review, we looked at multiple scenarios that would place Juno in a shorter-period orbit, but there was concern that another main engine burn could result in a less-than-desirable orbit," said Rick Nybakken, Juno project manager at NASA's Jet Propulsion Laboratory in Pasadena, California. "The bottom line is a burn represented a risk to completion of Juno's science objectives."
Juno's larger 53-day orbit allows for "bonus science" that wasn't part of the original mission design. Juno will further explore the far reaches of the Jovian magnetosphere—the region of space dominated by Jupiter's magnetic field—including the far magnetotail, the southern magnetosphere, and the magnetospheric boundary region called the magnetopause. Understanding magnetospheres and how they interact with the solar wind are key science goals of NASA's Heliophysics Science Division.
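A back-of-the-envelope estimate (ours, not from NASA) of how much farther the longer orbit reaches follows from Kepler's third law, which ties the orbital period to the orbit's semi-major axis while the closest-approach altitude stays the same:

```latex
% Kepler's third law: a^3 \propto T^2, hence a \propto T^{2/3}.
% Comparing the 53-day orbit with the originally planned 14-day orbit:
\frac{a_{53}}{a_{14}} = \left(\frac{53\ \mathrm{days}}{14\ \mathrm{days}}\right)^{2/3} \approx 2.4
```

So the 53-day orbit's semi-major axis, and with it the distance probed down the magnetotail, is roughly 2.4 times larger than it would have been in the planned 14-day orbit.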
"Another key advantage of the longer orbit is that Juno will spend less time within the strong radiation belts on each ," said Scott Bolton, Juno principal investigator from Southwest Research Institute in San Antonio. "This is significant because radiation has been the main life-limiting factor for Juno."
Juno will continue to operate within the current budget plan through July 2018, for a total of 12 science orbits. The team can then propose to extend the mission during the next science review cycle. The review process evaluates proposed mission extensions on the merit and value of previous and anticipated science returns.
The Juno science team continues to analyze returns from previous flybys. Revelations include that Jupiter's magnetic fields and aurora are bigger and more powerful than originally thought and that the belts and zones that give the gas giant's cloud top its distinctive look extend deep into the planet's interior. Peer-reviewed papers with more in-depth science results from Juno's first three flybys are expected to be published within the next few months. In addition, the mission's JunoCam—the first interplanetary outreach camera—is now being guided with assistance from the public. People can participate by voting on which features on Jupiter should be imaged during each flyby.
"Juno is providing spectacular results, and we are rewriting our ideas of how giant planets work," said Bolton. "The science will be just as spectacular as with our original plan."
Violating law of energy conservation in the early universe may explain dark energy

This is the "South Pillar" region of the star-forming region called the Carina Nebula. Like cracking open a watermelon and finding its seeds, the infrared telescope "busted open" this murky cloud to reveal star embryos tucked inside finger-like pillars of thick dust. Credit: NASA
Physicists have proposed that the violations of energy conservation in the early universe, as predicted by certain modified theories in quantum mechanics and quantum gravity, may explain the cosmological constant problem, which is sometimes referred to as "the worst theoretical prediction in the history of physics."
The physicists, Thibaut Josset and Alejandro Perez at the University of Aix-Marseille, France, and Daniel Sudarsky at the National Autonomous University of Mexico, have published a paper on their proposal in a recent issue of Physical Review Letters.
"The main achievement of the work was the unexpected relation between two apparently very distinct issues, namely the accelerated expansion of the universe and microscopic physics," Josset told Phys.org. "This offers a fresh look at the cosmological constant problem, which is still far from being solved."
Einstein originally proposed the concept of the cosmological constant in 1917 to modify his theory of general relativity in order to prevent the universe from expanding, since at the time the universe was considered to be static.
Now that modern observations show that the universe is expanding at an accelerating rate, the cosmological constant today can be thought of as the simplest form of dark energy, offering a way to account for current observations.
However, there is a huge discrepancy—up to 120 orders of magnitude—between the large theoretically predicted value of the cosmological constant and the tiny observed value. To explain this disagreement, some research has suggested that the cosmological constant may be an entirely new constant of nature that must be measured more precisely, while another possibility is that the underlying mechanism assumed by theory is incorrect. The new study falls into the second line of thought, suggesting that scientists still do not fully understand the root causes of the cosmological constant.
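The size of the mismatch can be illustrated with a standard back-of-the-envelope comparison (ours, not taken from the paper) between the naive quantum-field-theory estimate of the vacuum energy density, cut off at the Planck scale, and the value inferred from cosmological observations:

```latex
% Naive vacuum energy density with a Planck-scale cutoff:
\rho_{\mathrm{theory}} \sim M_{\mathrm{Pl}}^{4} \approx \left(1.2\times10^{19}\ \mathrm{GeV}\right)^{4} \sim 10^{76}\ \mathrm{GeV}^{4}
% Observed dark-energy density:
\rho_{\Lambda,\mathrm{obs}} \approx \left(2\times10^{-3}\ \mathrm{eV}\right)^{4} \sim 10^{-47}\ \mathrm{GeV}^{4}
% The ratio is roughly 10^{123}, i.e. about 120 orders of magnitude.
```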
The basic idea of the new paper is that violations of energy conservation in the early universe could have been so small that they would have negligible effects at local scales and remain inaccessible to modern experiments, yet at the same time these violations could have made significant contributions to the present value of the cosmological constant.
To most people, the idea that conservation of energy is violated goes against everything they learned about the most fundamental laws of physics. But on the cosmological scale, conservation of energy is not as steadfast a law as it is on smaller scales. In this study, the physicists specifically investigated two theories in which violations of energy conservation naturally arise.
The first scenario of violations involves modifications to quantum theory that have previously been proposed to investigate phenomena such as the creation and evaporation of black holes, and which also appear in interpretations of quantum mechanics in which the wavefunction undergoes spontaneous collapse. In these cases, energy is created in an amount that is proportional to the mass of the collapsing object.
Violations of energy conservation also arise in some approaches to quantum gravity in which spacetime is considered to be granular due to the fundamental limit of length (the Planck length, which is on the order of 10^-35 m). This spacetime discreteness could have led to either an increase or decrease in energy that may have begun contributing to the cosmological constant starting when photons decoupled from electrons in the early universe, during the period known as recombination.
As the researchers explain, their proposal relies on a modification to general relativity called unimodular gravity, first proposed by Einstein in 1919.
"Energy from matter components can be ceded to the gravitational field, and this 'loss of energy' will behave as a cosmological constant—it will not be diluted by later expansion of the universe," Josset said. "Therefore a tiny loss or creation of energy in the remote past may have significant consequences today on large scale."
Whatever the source of the energy conservation violation, the important result is that the energy that was created or lost affected the cosmological constant to a greater and greater extent as time went by, while the effects on matter decreased over time due to the expansion of the universe.
Another way to put it, as the physicists explain in their paper, is that the cosmological constant can be thought of as a record of the energy non-conservation during the history of the universe.
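Schematically, the mechanism can be written as follows (our sketch of the standard unimodular-gravity argument; the notation may differ from the paper's). In unimodular gravity only the trace-free part of Einstein's equations is imposed, and the Bianchi identity then forces any energy-momentum non-conservation to accumulate in an effective cosmological constant:

```latex
% Trace-free (unimodular) Einstein equations:
R_{\mu\nu} - \tfrac{1}{4}R\,g_{\mu\nu} = 8\pi G\left(T_{\mu\nu} - \tfrac{1}{4}T\,g_{\mu\nu}\right)
% Taking the divergence and using the Bianchi identity yields an integrability condition
% defining an effective, slowly varying cosmological "constant":
\Lambda_{\mathrm{eff}}(x) = \Lambda_{0} + 8\pi G \int_{\ell} J_{\mu}\,\mathrm{d}x^{\mu},
\qquad J_{\mu} \equiv \nabla^{\nu} T_{\nu\mu}
% If energy-momentum is exactly conserved (J = 0), \Lambda_eff reduces to a true constant.
```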
Currently there is no way to tell whether the violations of energy conservation investigated here truly did affect the cosmological constant, but the physicists plan to further investigate the possibility in the future.
"Our proposal is very general and any violation of energy conservation is expected to contribute to an effective cosmological constant," Josset said. "This could allow to set new constraints on phenomenological models beyond standard .
"On the other hand, direct evidence that dark energy is sourced by energy non-conservation seems largely out-of-reach, as we have access to the value of lambda [the ] today and constraints on its evolution at late time only."

Credit: Lisa Zyga  
 
Second-generation stars identified, giving clues about their predecessors

The figure shows a sub-population of ancient stars, called Carbon-Enhanced Metal-Poor (CEMP) stars. These stars contain 100 to 1,000,000 times LESS iron (and other heavy elements) than the Sun, but 10 to 10,000 times MORE carbon, relative to iron. The unusual chemical compositions of these stars provide clues to their birth environments, and the nature of the stars in which the carbon formed. In the figure, A(C) is the absolute amount of carbon, while the horizontal axis represents the ratio of iron, relative to hydrogen, compared with the same ratio in the Sun. Credit: University of Notre Dame
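For readers unfamiliar with the axis labels, the quantities in the figure follow the standard astronomical "bracket" abundance notation (our explanatory note, using the numbers quoted in the caption):

```latex
% Abundance of element X relative to Y, compared with the Sun:
[\mathrm{X/Y}] \equiv \log_{10}\!\left(\frac{N_{\mathrm{X}}}{N_{\mathrm{Y}}}\right)_{\!\star}
               - \log_{10}\!\left(\frac{N_{\mathrm{X}}}{N_{\mathrm{Y}}}\right)_{\!\odot}
% "100 to 1,000,000 times LESS iron" corresponds to [Fe/H] between -2 and -6;
% "10 to 10,000 times MORE carbon, relative to iron" corresponds to [C/Fe] between +1 and +4.
% A(C) is the absolute carbon abundance, A(C) = \log_{10}(N_{\mathrm{C}}/N_{\mathrm{H}}) + 12.
```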
University of Notre Dame astronomers have identified what they believe to be the second generation of stars, shedding light on the nature of the universe's first stars.
A subclass of carbon-enhanced metal-poor (CEMP) stars, the so-called CEMP-no stars, are ancient stars that have large amounts of carbon but little of the heavy elements (such as iron) common to later-generation stars. Massive first-generation stars made up of pure hydrogen and helium produced metals and ejected them via stellar winds during their lifetimes or when they exploded as supernovae. Those metals—anything heavier than helium, in astronomical parlance—polluted the nearby gas clouds from which new stars formed.
Jinmi Yoon, a postdoctoral research associate in the Department of Physics; Timothy Beers, the Notre Dame Chair in Astrophysics; and Vinicius Placco, a research professor at Notre Dame, along with their collaborators, show in findings published in The Astrophysical Journal this week that the lowest metallicity stars, the most chemically primitive, include large fractions of CEMP stars. The CEMP-no stars, which are also rich in nitrogen and oxygen, are likely the stars born out of hydrogen and helium gas clouds that were polluted by the elements produced by the universe's first stars.
"The CEMP-no stars we see today, at least many of them, were born shortly after the Big Bang, 13.5 billion years ago, out of almost completely unpolluted material," Yoon says. "These stars, located in the halo system of our galaxy, are true second-generation stars—born out of the nucleosynthesis products of the very first stars."
Beers says it's unlikely that any of the universe's first stars still exist, but much can be learned about them from detailed studies of the next generation of stars.
"We're analyzing the chemical products of the very first stars by looking at what was locked up by the second-generation stars," Beers says. "We can use this information to tell the story of how the first elements were formed, and determine the distribution of the masses of those first stars. If we know how their masses were distributed, we can model the process of how the first stars formed and evolved from the very beginning."
The authors used high-resolution spectroscopic data gathered by many astronomers to measure the chemical compositions of about 300 stars in the halo of the Milky Way. More and heavier elements form as later generations of stars continue to contribute additional metals, they say. As new generations of stars are born, they incorporate the metals produced by prior generations. Hence, the more heavy metals a star contains, the more recently it was born. Our sun, for example, is relatively young, with an age of only 4.5 billion years.
A companion paper, titled "Observational constraints on first-star nucleosynthesis. II. Spectroscopy of an ultra metal-poor CEMP-no star," of which Placco was the lead author, was also published in the same issue of the journal this week. The paper compares theoretical predictions for the chemical composition of zero-metallicity supernova models with a newly discovered CEMP-no star in the Milky Way galaxy.

Credit: Brian Wallheimer
Scientists find that solar cells can be made with tin instead of lead

Solar power could become cheaper and more widespread
Credit: University of Warwick
A breakthrough in solar power could make it cheaper and more commercially viable, thanks to research at the University of Warwick.
In a paper published in Nature Energy, Dr Ross Hatton, Professor Richard Walton and colleagues, explain how solar cells could be produced with tin, making them more adaptable and simpler to produce than their current counterparts.
Solar cells based on a class of semiconductors known as lead perovskites are rapidly emerging as an efficient way to convert sunlight directly into electricity. However, the reliance on lead is a serious barrier to commercialisation, due to the well-known toxicity of lead.
Dr Ross Hatton and colleagues show that perovskites using tin in place of lead are much more stable than previously thought, and so could prove to be a viable alternative to lead perovskites for solar cells.
Lead-free cells could render solar power cheaper, safer and more commercially attractive - leading to it becoming a more prevalent source of energy in everyday life.
This could lead to a more widespread use of solar power, with potential uses in products such as laptop computers, mobile phones and cars.
The team have also shown how the device structure can be greatly simplified without compromising performance, which offers the important advantage of reduced fabrication cost.
Dr Hatton comments that there is an ever-pressing need to develop renewable sources of energy:
"It is hoped that this work will help to stimulate an intensive international research effort into lead-free perovskite solar cells, like that which has resulted in the astonishingly rapid advancement of perovskite solar cells.
"There is now an urgent need to tackle the threat of climate change resulting from humanity's over reliance on fossil fuel, and the rapid development of new solar technologies must be part of the plan."
Perovskite solar cells are lightweight and compatible with flexible substrates, so could be applied more widely than the rigid flat-plate silicon cells that currently dominate the photovoltaics market, particularly in consumer electronics and transportation applications.
The paper, 'Enhanced Stability and Efficiency in Hole-Transport Layer Free CsSnI3 Perovskite Photovoltaics', is published in Nature Energy, and is authored by Dr Ross Hatton, Professor Richard Walton and PhD student Kenny Marshall in the Department of Chemistry, along with Dr Marc Walker in the Department of Physics.

2.5 billion-year-old fossils of bacteria that predate the formation of oxygen

Life before oxygen
A microscopic image of 2.5 billion-year-old sulfur-oxidizing bacterium. Credit: Andrew Czaja, UC assistant professor of geology
Somewhere between Earth's creation and where we are today, scientists have demonstrated that some early life forms existed just fine without any oxygen.
While researchers regard the first half of our 4.5 billion-year-old planet's history as an important time for the development and evolution of early bacteria, evidence for these life forms remains sparse, including evidence of how they survived at a time when oxygen levels in the atmosphere were less than one-thousandth of one percent of what they are today.
Recent geology research from the University of Cincinnati presents new evidence for bacteria found fossilized in two separate locations in the Northern Cape Province of South Africa.
"These are the oldest reported fossil sulfur bacteria to date," says Andrew Czaja, UC assistant professor of geology. "And this discovery is helping us reveal a diversity of life and ecosystems that existed just prior to the Great Oxidation Event, a time of major atmospheric evolution."
The 2.52 billion-year-old sulfur-oxidizing bacteria are described by Czaja as exceptionally large, spherical-shaped, smooth-walled microscopic structures much larger than most modern bacteria, but similar to some modern single-celled organisms that live in deepwater sulfur-rich ocean settings today, where even now there are almost no traces of oxygen.
Life before oxygen
UC Professor Andrew Czaja indicates the layer of rock from which fossil bacteria were collected on a 2014 field excursion near the town of Kuruman in the Northern Cape Province of South Africa. Credit: Aaron Satkoski, UWM postdoc on the excursion.
In his research published in the December issue of the journal Geology of the Geological Society of America, Czaja and his colleagues Nicolas Beukes from the University of Johannesburg and Jeffrey Osterhout, a recently graduated master's student from UC's department of geology, reveal samples of bacteria that were abundant in deep water areas of the ocean in a geologic time known as the Neoarchean Eon (2.8 to 2.5 billion years ago).
"These fossils represent the oldest known organisms that lived in a very dark, deep-water environment," says Czaja. "These bacteria existed two billion years before plants and trees, which evolved about 450 million years ago. We discovered these microfossils preserved in a layer of hard silica-rich rock called chert located within the Kaapvaal craton of South Africa."
With an atmosphere of much less than one percent oxygen, scientists have presumed that there were things living in deep water in the mud that didn't need sunlight or oxygen, but Czaja says experts didn't have any direct evidence for them until now.
Czaja argues that finding rocks this old is rare, so researchers' understanding of the Neoarchean Eon is based on samples from only a handful of geographic areas, such as this region of South Africa and another in Western Australia.

According to Czaja, scientists through the years have theorized that South Africa and Western Australia were once part of an ancient supercontinent called Vaalbara, before a shifting and upending of tectonic plates split them during a major change in the Earth's surface.
Based on radiometric dating and geochemical isotope analysis, Czaja characterizes his fossils as having formed in this early Vaalbara supercontinent in an ancient deep seabed containing sulfate from continental rock. According to this dating, Czaja's fossil bacteria were also thriving just before the era when other shallow-water bacteria began creating more and more oxygen as a byproduct of photosynthesis.
"We refer to this period as the Great Oxidation Event that took place 2.4 to 2.2 billion years ago," says Czaja.
Life before oxygen
Microstructures here have physical characteristics consistent with the remains of compressed coccodial (round) bacteria microorganisms. Credit: Andrew Czaja, permission to publish by Geological Society of America
Early recycling
Czaja's fossils show the Neoarchean bacteria in plentiful numbers while living deep in the sediment. He contends that these early bacteria were busy ingesting volcanic hydrogen sulfide—the molecule known to give off a rotten egg smell—then emitting sulfate, a gas that has no smell. He says this is the same process that goes on today as modern bacteria recycle decaying organic matter into minerals and gases.
"The waste product from one [bacteria] was food for the other," adds Czaja.
"While I can't claim that these early bacteria are the same ones we have today, we surmise that they may have been doing the same thing as some of our current bacteria," says Czaja. "These early bacteria likely consumed the molecules dissolved from sulfur-rich minerals that came from land rocks that had eroded and washed out to sea, or from the volcanic remains on the ocean's floor.
There is an ongoing debate about when sulfur-oxidizing bacteria arose and how that fits into the earth's evolution of life, Czaja adds. "But these fossils tell us that sulfur-oxidizing were there 2.52 billion years ago, and they were doing something remarkable."

Credit: Melanie Schefft
Statistics into the quantum domain

quantum change point
In the quantum change point problem, a quantum source emits particles that are received by a detector. At some unknown point, a change occurs in the state of the particles being emitted. Physicists have found that global measurement methods, which use quantum repeaters, outperform all classical measurement methods for accurately identifying when the change occurred. Credit: Sentis et al. ©2016 American Physical Society
(Phys.org)—The change point problem is a concept in statistics that pops up in a wide variety of real-world situations, from stock markets to protein folding. The idea is to detect the exact point at which a sudden change has occurred, which could indicate, for example, the trigger of a financial crisis or a misstep in protein folding.
Now in a new paper published in Physical Review Letters, physicists Gael Sentís et al. have taken the change point problem to the quantum domain.
"Our work sets an important landmark in by porting a fundamental tool of classical statistical analysis into a fully quantum setup," Sentis, at the University of the Basque Country in Bilbao, Spain, told Phys.org.
"With an ever-growing number of promising applications of quantum technologies in all sorts of data processing, building a quantum statistical toolbox capable of dealing with real-world practical issues, of which change point detection is a prominent example, will be crucial. In our paper, we demonstrate the working principles of quantum change point detection and facilitate the grounds for further research on change points in applied scenarios."
Although change point problems can deal with very complex situations, they can also be understood with the simple example of playing a game of Heads or Tails. This game begins with a fair coin, but at some unknown point in the game the coin is switched with a biased one. By statistically analyzing the results of each coin toss from the beginning, it's possible to determine the most likely point at which the coin was switched.
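To make the classical version concrete, here is a minimal sketch (ours, not from the paper) of maximum-likelihood change-point estimation for the coin game just described; for simplicity the bias of the second coin is assumed to be known:

```python
import numpy as np

def most_likely_change_point(tosses, p_fair=0.5, p_biased=0.7):
    """Return the toss index k at which the switch from the fair coin to the
    biased coin most likely occurred, by maximizing the sequence likelihood.
    (p_biased is an assumed, illustrative value for the biased coin.)"""
    tosses = np.asarray(tosses)            # 1 = heads, 0 = tails
    n = len(tosses)
    best_k, best_loglik = None, -np.inf
    for k in range(1, n):                  # hypothesis: coin was switched before toss k
        before, after = tosses[:k], tosses[k:]
        loglik = (np.sum(before * np.log(p_fair) + (1 - before) * np.log(1 - p_fair))
                  + np.sum(after * np.log(p_biased) + (1 - after) * np.log(1 - p_biased)))
        if loglik > best_loglik:
            best_k, best_loglik = k, loglik
    return best_k

# Example: 30 tosses of a fair coin followed by 20 tosses of a heads-biased coin.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.random(30) < 0.5, rng.random(20) < 0.7]).astype(int)
print(most_likely_change_point(sample))    # should land near toss 30
```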
Extending this problem to the quantum realm, the physicists looked at a quantum device that emits particles in a certain state, but at some unknown point the source begins to emit particles in a different state. Here the quantum change point problem can be understood as a problem of discrimination, since determining when the change in the source occurred is the same as distinguishing among all possible sequences of quantum states of the emitted particles.
Physicists can determine the change point in this situation in two different ways: either by measuring the state of each particle as soon as it arrives at the detector (a "local measurement"), or by waiting until all of the particles have reached the detector and making a measurement at the very end (a "global measurement").
Although the local measurement method sounds appealing because it can potentially detect the change point as soon as it occurs without waiting for all of the particles to be emitted, the researchers found that global measurements outperform even the best local measurement strategies.
The "catch" is that global measurements are more difficult to experimentally realize and require a to store the quantum states as they arrive at the detector one by one. The local measurement methods don't require a quantum memory, and instead can be implemented using much simpler devices in sequence. Since global detection requires a quantum memory, the results show that change point detection is another of the many problems for which quantum methods outperform all classical ones.
"We expected that would help, as coherent quantum operations tend to exploit genuinely quantum resources and generally outperform local operations in many information processing tasks," Sentis said. "However, this is a case-dependent advantage, and sometimes sophisticated and clever local strategies are enough to cover the gap. The fact that here there is a finite performance gap says something fundamental about change point detection in quantum scenarios."
The results have potential applications in any situation that involves analyzing data collected over time. Change point detection is also often used to divide a data sample into subsamples that can then be analyzed individually.
"The ability to accurately detect quantum change points has immediate impact on any process that requires careful control of quantum information," Sentis said. "It can be considered a quality testing device for any information processing task that requires (or produces) a sequence of identical quantum states. Applications may range from probing quantum optical fibers to boundary detection in solid state systems."
In the future, the researchers plan on exploring the many applications of quantum change point detection.
"We plan on extending our theoretical methods to deal with more realistic scenarios," Sentis said. "The possibilities are countless. A few examples of generalizations we are exploring are multiple change points, noisy quantum states, and detection of change points in optical setups."
Pulsar wind nebulae

The Crab Nebula seen in the optical by the Hubble Space Telescope. The Crab is an example of a pulsar wind nebula. Astronomers have modeled the detailed shape of another pulsar wind nebula to conclude, among other things, that the pulsar’s spin axis is pointed almost directly towards us. Credit: NASA/ Hubble Space Telescope
Neutron stars are the detritus of supernova explosions, with masses between one and several suns and diameters only tens of kilometers across. A pulsar is a spinning neutron star with a strong magnetic field; charged particles in the field radiate in a lighthouse-like beam that can sweep past the Earth with extreme regularity every few seconds or less. A pulsar also has a wind, and charged particles, sometimes accelerated to near the speed of light, form a nebula around the pulsar: a pulsar wind nebula. The particles' high energies make them strong X-ray emitters, and the nebulae can be seen and studied with X-ray observatories. The most famous example of a pulsar wind nebula is the beautiful and dramatic Crab Nebula.
When a pulsar moves through the interstellar medium, the nebula can develop a bow-shaped shock. Most of the wind particles are confined to a direction opposite to that of the pulsar's motion and form a tail of nebulosity. Recent X-ray and radio observations of fast-moving pulsars confirm the existence of the bright, extended tails as well as compact nebulosity near the pulsars. The length of an X-ray tail can significantly exceed the size of the compact nebula, extending several light-years or more behind the pulsar.
CfA astronomer Patrick Slane was a member of a team that used the Chandra X-ray Observatory to study the nebula around the pulsar PSR B0355+54, located about 3400 light-years away. The pulsar's observed movement over the sky (its proper motion) is measured to be about sixty kilometers per second. Earlier observations by Chandra had determined that the pulsar's nebula had a long tail, extending over at least seven light-years (it might be somewhat longer, but the field of the detector was limited to this size); it also has a bright compact core. The scientists used deep Chandra observations to examine the nebula's faint emission structures, and found that the shape of the nebula, when compared to the direction of the pulsar's motion through the medium, suggests that the spin axis of the pulsar is pointed nearly directly towards us. They also estimate many of the basic parameters of the nebula including the strength of its magnetic field, which is lower than expected (or else turbulence is re-accelerating the particles and modifying the field). Other conclusions include properties of the compact core and details of the physical mechanisms powering the X-ray and radio radiation.
 
Tsunami of stars and gas produces dazzling eye-shaped feature in galaxy
Dazzling eyelid-like features bursting with stars in galaxy IC 2163 formed from a tsunami of stars and gas triggered by a glancing collision with galaxy NGC 2207 (a portion of its spiral arm is shown on right side of image). ALMA image of carbon monoxide (orange), which revealed motion of the gas in these features, is shown on top of Hubble image (blue) of the galaxy. Credit: M. Kaufman; B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble Space Telescope  
Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have discovered a tsunami of stars and gas that is crashing midway through the disk of a spiral galaxy known as IC 2163. This colossal wave of material - which was triggered when IC 2163 recently sideswiped another spiral galaxy dubbed NGC 2207 - produced dazzling arcs of intense star formation that resemble a pair of eyelids.
"Although of this type are not uncommon, only a few galaxies with eye-like, or ocular, structures are known to exist," said Michele Kaufman, an astronomer formerly with The Ohio State University in Columbus and lead author on a paper published today in the Astrophysical Journal.
Kaufman and her colleagues note that the paucity of similar features in the observable universe is likely due to their ephemeral nature. "Galactic eyelids last only a few tens of millions of years, which is incredibly brief in the lifespan of a galaxy. Finding one in such a newly formed state gives us an exceptional opportunity to study what happens when one galaxy grazes another," said Kaufman.
The interacting pair of galaxies resides approximately 114 million light-years from Earth in the direction of the constellation Canis Major. These galaxies brushed past each other - scraping the edges of their outer spiral arms - in what is likely the first encounter of an eventual merger.
Using ALMA's remarkable sensitivity and resolution, the astronomers made the most detailed measurements ever of the motion of carbon monoxide gas in the galaxy's narrow eyelid features. Carbon monoxide is a tracer of molecular gas, which is the fuel for star formation.
Tsunami of stars and gas produces dazzling eye-shaped feature in galaxy
Annotated image showing dazzling eyelid-like features bursting with stars in galaxy IC 2163 formed from a tsunami of stars and gas triggered by a glancing collision with galaxy NGC 2207 (a portion of its spiral arm is shown on right side of image). ALMA image of carbon monoxide (orange), which revealed motion of the gas in these features, is shown on top of Hubble image (blue) of the galaxy. Credit: M. Kaufman; B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble Space Telescope
The data reveal that the gas in the outer portion of IC 2163's eyelids is racing inward at speeds in excess of 100 kilometers a second. This gas, however, quickly decelerates and its motion becomes more chaotic, eventually changing trajectory and aligning itself with the rotation of the galaxy rather than continuing its pell-mell rush toward the center.
"What we observe in this galaxy is very much like a massive ocean wave barreling toward shore until it interacts with the shallows, causing it to lose momentum and dump all of its water and sand on the beach," said Bruce Elmegreen, a scientist with IBM's T.J. Watson Research Center in Yorktown Heights, New York, and co-author on the paper.
"Not only do we find a rapid deceleration of the gas as it moves from the outer to the inner edge of the eyelids, but we also measure that the more rapidly it decelerates, the denser the molecular gas becomes," said Kaufman. "This direct measurement of compression shows how the encounter between the two galaxies drives gas to pile up, spawn new and form these dazzling eyelid features."
Computer models predict that such eyelid-like features could evolve if galaxies interacted in a very specific manner. "This evidence for a strong shock in the eyelids is terrific. It's all very well to have a theory and simulations suggesting it should be true, but real observational evidence is great," said Curtis Struck, a professor of astrophysics at Iowa State University in Ames and co-author on the paper.
Tsunami of stars and gas produces dazzling eye-shaped feature in galaxy
Galaxies IC 2163 (left) and NGC 2207 (right) recently grazed past each other, triggering a tsunami of stars and gas in IC 2163 and producing the dazzling eyelid-like features there. ALMA image of carbon monoxide (orange), which revealed motion of the gas in these features, is shown on top of Hubble image (blue) of the galaxy pair. Credit: M. Kaufman; B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble Space Telescope
"ALMA showed us that the velocities of the molecular gas in the eyelids are on the right track with the predictions we get from computer models," said Kaufman. "This critical test of encounter simulations was not possible before."
Astronomers believe that such collisions between galaxies were common in the early universe when galaxies were closer together. At that time, however, galactic disks were generally clumpy and irregular, so other processes likely overwhelmed the formation of similar eyelid features.
The authors continue to study this galaxy pair and currently are comparing the properties (e.g., locations, ages, and masses) of the star clusters previously observed with NASA's Hubble Space Telescope with the properties of the molecular clouds observed with ALMA. They hope to better understand the differences between molecular clouds and star clusters in the eyelids and those elsewhere in the galaxy pair.
Physicists demonstrate existence of new subatomic structure

James Vary, right, and coauthor Andrey Shirokov with an illustration of a tetraneutron. Credit: Christopher Gannon/Iowa State University
Iowa State University researchers have helped demonstrate the existence of a subatomic structure once thought unlikely to exist.
James Vary, a professor of physics and astronomy, and Andrey Shirokov, a visiting scientist, together with an international team, used sophisticated supercomputer simulations to show the quasi-stable existence of a tetraneutron, a structure comprised of four neutrons (subatomic particles with no charge).
The new finding was published in Physical Review Letters, a publication of the American Physical Society, on October 28.
On their own, neutrons are very unstable and will convert into protons—positively charged subatomic particles—after ten minutes. Groups of two or three neutrons do not form a stable structure, but the new simulations in this research demonstrate that four neutrons together can form a resonance, a structure stable for a period of time before decaying.
For the tetraneutron, this lifetime is only 5×10^(-22) seconds (a tiny fraction of a billionth of a nanosecond). Though this time seems very short, it is long enough to study, and provides a new avenue for exploring the strong forces between neutrons.
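In the units used in that parenthetical comparison (our arithmetic), a billionth of a nanosecond works out as follows:

```latex
10^{-9}\times10^{-9}\ \mathrm{s} = 10^{-18}\ \mathrm{s},
\qquad
\frac{5\times10^{-22}\ \mathrm{s}}{10^{-18}\ \mathrm{s}} = 5\times10^{-4}
```

so the tetraneutron's lifetime is only about one two-thousandth of a billionth of a nanosecond.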
"This opens up a whole new line of research," Vary said. "Studying the tetraneutron will help us understand interneutron forces including previously unexplored features of the unstable two-neutron and three-neutron systems."
The advanced simulations demonstrating the tetraneutron corroborate the first observational evidence of the tetraneutron earlier this year in an experiment performed at the RIKEN Radioactive Ion Beam Factory (RIBF), in Saitama, Japan. The tetraneutron structure has been sought for 40 years with little evidence supporting its existence, until now. The properties predicted by the calculations in the simulations were consistent with the observed properties from the experiment in Japan.
The research in Japan used a beam of Helium-8, Helium with 4 extra neutrons, colliding with a regular Helium-4 atom. The collision breaks up the Helium-8 into another Helium-4 and a tetraneutron in its brief resonance state, before it, too, breaks apart, forming four lone neutrons.
"We know that additional experiments with state-of-the-art facilities are in preparation with the goal to get precise characteristics of the tetraneutron," Vary said. "We are providing our state-of-the-art predictions to help guide these experiments."
The existence of the tetraneutron, once confirmed and refined, will add an interesting new entry and gap to the chart of nuclides, a graph representing all known nuclei and their isotopes, or nuclei with a different number of neutrons. Similar to the periodic table, which organizes the chemical behavior of elements, the nuclide chart represents the radioactive behavior of elements and their isotopes. While most nuclei add or subtract neutrons one at a time, this research shows that the neutron side of the chart will have a gap between a single neutron and the tetraneutron, since two- and three-neutron systems do not form even quasi-stable structures.
The only other known neutron structure is a neutron star, a small but dense star thought to be made almost entirely of neutrons. These stars may be only about seven miles in radius but have a mass similar to that of our sun. A neutron star contains on the order of 10^57 neutrons. Further research may explore whether there are other numbers of neutrons that form a stable resonance along the path to reaching the size of a neutron star.
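The 10^57 figure follows from a rough check (ours): dividing one solar mass by the mass of a single neutron gives

```latex
N \approx \frac{M_{\odot}}{m_{n}} \approx \frac{2\times10^{30}\ \mathrm{kg}}{1.67\times10^{-27}\ \mathrm{kg}} \approx 1.2\times10^{57}
```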
Supercomputer comes up with a profile of dark matter: Standard Model extension predicts properties of candidate particle
Simulated distribution of dark matter approximately three billion years after the Big Bang (illustration not from this work). Credit: The Virgo Consortium/Alexandre Amblard/ESA
In the search for the mysterious dark matter, physicists have used elaborate computer calculations to come up with an outline of the particles of this unknown form of matter. To do this, the scientists extended the successful Standard Model of particle physics which allowed them, among other things, to predict the mass of so-called axions, promising candidates for dark matter. The German-Hungarian team of researchers led by Professor Zoltán Fodor of the University of Wuppertal, Eötvös University in Budapest and Forschungszentrum Jülich carried out its calculations on Jülich's supercomputer JUQUEEN (BlueGene/Q) and presents its results in the journal Nature.
"Dark matter is an invisible form of matter which until now has only revealed itself through its gravitational effects. What it consists of remains a complete mystery," explains co-author Dr Andreas Ringwald, who is based at DESY and who proposed the current research. Evidence for the existence of this form of matter comes, among other things, from the astrophysical observation of galaxies, which rotate far too rapidly to be held together only by the gravitational pull of the . High-precision measurements using the European satellite "Planck" show that almost 85 percent of the entire mass of the universe consists of dark matter. All the stars, planets, nebulae and other objects in space that are made of conventional matter account for no more than 15 percent of the mass of the universe.
"The adjective 'dark' does not simply mean that it does not emit visible light," says Ringwald. "It does not appear to give off any other wavelengths either - its interaction with photons must be very weak indeed." For decades, physicists have been searching for particles of this new type of matter. What is clear is that these particles must lie beyond the Standard Model of particle physics, and while that model is extremely successful, it currently only describes the conventional 15 percent of all matter in the cosmos. From theoretically possible extensions to the Standard Model physicists not only expect a deeper understanding of the universe, but also concrete clues in what energy range it is particularly worthwhile looking for dark-matter candidates.
The unknown form of matter can either consist of comparatively few, but very heavy particles, or of a large number of light ones. The direct searches for heavy dark-matter candidates using large detectors in underground laboratories and the indirect search for them using large particle accelerators are still going on, but have not turned up any so far. A range of physical considerations make extremely light particles, dubbed axions, very promising candidates. Using clever experimental setups, it might even be possible to detect direct evidence of them. "However, to find this kind of evidence it would be extremely helpful to know what kind of mass we are looking for," emphasises theoretical physicist Ringwald. "Otherwise the search could take decades, because one would have to scan far too large a range."
The existence of axions is predicted by an extension to quantum chromodynamics (QCD), the quantum theory that governs the strong interaction, responsible for the nuclear force. The strong interaction is one of the four fundamental forces of nature alongside gravitation, electromagnetism and the weak nuclear force, which is responsible for radioactivity. "Theoretical considerations indicate that there are so-called topological quantum fluctuations in quantum chromodynamics, which ought to result in an observable violation of time reversal symmetry," explains Ringwald. This means that certain processes should differ depending on whether they are running forwards or backwards. However, no experiment has so far managed to demonstrate this effect.
The extension to quantum chromodynamics (QCD) restores the invariance of time reversals, but at the same time it predicts the existence of a very weakly interacting particle, the axion, whose properties, in particular its mass, depend on the strength of the topological quantum fluctuations. However, it takes modern supercomputers like Jülich's JUQUEEN to calculate the latter in the temperature range that is relevant in predicting the relative contribution of axions to the matter making up the universe. "On top of this, we had to develop new methods of analysis in order to achieve the required temperature range," notes Fodor, who led the research.
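The connection between the "strength of the topological quantum fluctuations" and the axion mass can be stated compactly. In the standard treatment (our summary, not a formula quoted from the article), the lattice calculation supplies the temperature-dependent topological susceptibility of QCD, which fixes the axion mass for a given axion decay constant f_a:

```latex
% Axion mass from the topological susceptibility \chi_{top}(T) of QCD:
m_a^{2}(T)\,f_a^{2} = \chi_{\mathrm{top}}(T)
% The supercomputer provides \chi_{top}(T) at early-universe temperatures; requiring that
% axions account for the observed dark-matter density then singles out the mass range below.
```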
The results show, among other things, that if axions do make up the bulk of dark matter, they should have a mass of 50 to 1500 micro-electronvolts, expressed in the customary units of particle physics, and thus be up to ten billion times lighter than electrons. This would require every cubic centimetre of the universe to contain on average ten million such ultra-lightweight particles. Dark matter is not spread out evenly in the universe, however, but forms clumps and branches of a weblike network. Because of this, our local region of the Milky Way should contain about one trillion axions per cubic centimetre.
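Both numbers in that paragraph can be checked with simple arithmetic (ours), using an electron mass of about 511 keV and a mean cosmic dark-matter density of roughly 1.3 GeV per cubic metre:

```latex
% "Up to ten billion times lighter than electrons" (taking m_a ~ 50 micro-eV):
\frac{m_e}{m_a} \approx \frac{5.1\times10^{5}\ \mathrm{eV}}{5\times10^{-5}\ \mathrm{eV}} \approx 10^{10}
% "Ten million such particles per cubic centimetre" (taking m_a ~ 100 micro-eV):
n_a \approx \frac{\rho_{\mathrm{DM}}}{m_a} \approx \frac{1.3\times10^{3}\ \mathrm{eV/cm^{3}}}{10^{-4}\ \mathrm{eV}} \approx 10^{7}\ \mathrm{cm^{-3}}
```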
Thanks to the Jülich supercomputer, the calculations now provide physicists with a concrete range in which their search for axions is likely to be most promising. "The results we are presenting will probably lead to a race to discover these particles," says Fodor. Their discovery would not only solve the problem of dark matter in the universe, but at the same time answer the question why the strong interaction is so surprisingly symmetrical with respect to time reversal. The scientists expect that it will be possible within the next few years to either confirm or rule out the existence of axions experimentally.
The Institute for Nuclear Research of the Hungarian Academy of Sciences in Debrecen, the Lendület Lattice Gauge Theory Research Group at the Eötvös University, the University of Zaragoza in Spain, and the Max Planck Institute for Physics in Munich were also involved in the research.
New tech uses electricity to track water, ID potential problems in concrete

Photograph of one of the cracked samples tested in this work. The image in the background shows the flow of water in the crack. Credit: Julie Williams Dixon
Researchers from North Carolina State University and the University of Eastern Finland have developed a new technique for tracking water in concrete structures - allowing engineers to identify potential issues before they become big problems.
"When we think about construction - from bridges and skyscrapers to nuclear plants and dams - they all rely on concrete," says Mohammad Pour-Ghaz, an assistant professor of civil, construction and environmental engineering at North Carolina State University and lead investigator on the project. Tracking concrete degradation is essential to public safety, and the culprit behind concrete degradation is water. Water contributes to the degradation by itself, or it can carry other chemicals - like the road salt used on bridges - that can expedite corrosion of both concrete and its underlying steel reinforcement structure.
"We have developed a technology that allows us to identify and track water movement in concrete using a small current of electricity that is faster, safer and less expensive than existing technologies - and is also more accurate when monitoring large samples, such as structures," Pour-Ghaz says. "The technology can not only determine where and whether water is infiltrating concrete, but how fast it is moving, how much water there is, and how existing cracks or damage are influencing the movement of the water."
Previous technologies for assessing water in concrete relied on X-rays or neutron radiation, but both have significant limitations. X-rays offer only limited penetration into concrete, making it impossible to use with large samples or on structures. Neutron radiation is more accurate, but also has limited penetration, is expensive, and poses health and safety risks.
"Our electrical imaging approach is something that you could use in the field to examine buildings or bridges, which would be difficult or impossible to do with previous technologies," Pour-Ghaz says.
For their electrical imaging technique, researchers apply electrodes around the perimeter of a structure. A computer program then runs a small current between two of the electrodes at a time, cycling through a number of possible electrode combinations.
New tech uses electricity to track water, ID potential problems in concrete
Quantitative imaging of moisture flow in concrete after 1, 2, 4, and 22 hours of water ingress. Actual specimen is shown in far left. Credit: Danny Smyl
Every time the current runs between two electrodes, a computer monitors and records the electrical potential at all of the electrodes on the structure. The researchers then use their own customized software to compute the changes in conductivity and produce a three-dimensional image of the water in the concrete.
"By rapidly repeating this process - and we can do it even more than once per second - we can also capture the rate, and therefore the volume, of the water flow," Pour-Ghaz says.
The researchers have already created and tested a prototype of the system in a lab, accurately capturing images of water flow in concrete samples that are too large to be analyzed using X-rays or neutron radiation. The researchers have also been able to monitor water flow through cracks in concrete, which is more difficult and time-consuming when older technologies are used.
"Our electrical imaging technology is ready to be packaged and commercialized for laboratory use, and we'd also be willing to work with the private sector to scale this up for use as an on-site tool to assess the integrity of structures," Pour-Ghaz says.
The work is described in three papers. Lead author on all three papers is Danny Smyl, a Ph.D. student at NC State. All three papers were co-authored by Aku Seppänen, of the University of Eastern Finland, and Pour-Ghaz. "Can Electrical Impedance Tomography be used for imaging unsaturated moisture flow in cement-based materials with discrete cracks?" is published in the journal Cement and Concrete Research, and was co-authored by Reza Rashetnia, a Ph.D. student at NC State.
"Quantitative electrical imaging of three-dimensional moisture flow in cement-based materials" was published in International Journal of Heat and Mass Transfer. "Three-Dimensional Electrical Impedance Tomography to Monitor Unsaturated Moisture Ingress in Cement-Based Materials" was published in the journal Transport in Porous Media. Both papers were co-authored by Milad Hallaji, a former Ph.D. student at NC State.
Team spots elusive intermediate compound in atmospheric chemistry

JILA researchers used their frequency comb spectroscopy technique (multicolored lightwaves between the mirrors) to follow each step of an important chemical reaction that occurs in the atmosphere. The technique identifies chemicals in real time based on the light they absorb inside a mirrored cavity. The reaction combines the hydroxyl molecule and carbon monoxide (both at lower left) to form the hydrocarboxyl intermediate (red, black and yellow molecule in the foreground). Eventually the intermediate breaks down into hydrogen and carbon dioxide. Credit: Jun Ye group and Steve Burrows/JILA
JILA physicists and colleagues have identified a long-missing piece in the puzzle of exactly how fossil fuel combustion contributes to air pollution and a warming climate. Performing chemistry experiments in a new way, they observed a key molecule that appears briefly during a common chemical reaction in the atmosphere.
The reaction combines the hydroxyl molecule (OH, produced by reaction of oxygen and water) and carbon monoxide (CO, a byproduct of incomplete combustion) to form hydrogen (H) and carbon dioxide (CO2, a "greenhouse gas" contributing to global warming), as well as heat.
Researchers have been studying this reaction for decades and observed that its speed has an abnormal pressure and temperature dependence, suggesting there is a short-lived intermediate, the hydrocarboxyl molecule, or HOCO. But until now, HOCO had not been observed directly under conditions like those in nature, so researchers were unable to calculate accurately the pressures at which the reaction either pauses at the HOCO stage or proceeds rapidly to create the final products.
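The pathway described above can be summarized in a textbook-style reaction scheme (our schematic; in the experiment described below, deuterated hydroxyl, OD, was used, so the intermediate is DOCO):

```latex
% Overall reaction, proceeding through the short-lived intermediate:
\mathrm{OH + CO \;\rightleftharpoons\; HOCO^{*} \;\longrightarrow\; H + CO_{2}}
% Collisions with surrounding gas molecules (CO, N_2) can instead stabilize HOCO before it
% breaks apart, which is why the measured rate depends on pressure and gas mixture.
```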
As described in the October 28, 2016, issue of Science, JILA's direct detection of the intermediate compound and measurements of its rise and fall under different pressures and different mixtures of atmospheric gases revealed the reaction mechanism, quantified product yields, and tested theoretical models that were incomplete despite rigorous efforts. JILA is a partnership of the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder.
"We followed the reaction step by step in time, including seeing the short-lived, and thus elusive, intermediates that play decisive roles in the final products," JILA/NIST Fellow Jun Ye said. "By finally understanding the reaction in full, we can model the atmospheric chemical processes much more accurately, including how air pollution forms."
JILA researchers are performing chemistry in a new way, fully controlling reactions by artificial means instead of relying on nature. They used a laser to induce the reaction inside a container called a laboratory flow cell, through which samples of the molecules participating in the reaction and other gases passed. This process mimicked nature by using gases found in the atmosphere and no catalysts. To avoid any confusion in the results due to the presence of water (which contains hydrogen), the researchers used deuterium, or heavy hydrogen, in the hydroxyl molecule, OD, to start the reaction. Thus, they looked for the DOCO intermediate instead of HOCO. During the experiment, concentrations of CO and nitrogen gases were varied across a range of pressures.
Using JILA's patented frequency comb spectroscopy technique, which identifies chemicals and measures their concentrations in real time based on colors of light they absorb, researchers measured the initial OD and the resulting DOCO over various pressures and atmospheric gas concentrations over time, looking for conditions under which DOCO stabilized or decomposed to form CO2.
The JILA team identified an important factor to be energy transfer due to collisions between the intermediate molecule and nearby CO and nitrogen molecules. These collisions can either stabilize the intermediate DOCO or deactivate it and encourage the reaction to proceed to its final products.
JILA's frequency comb spectroscopy technique analyzes chemicals inside a glass container, in which comb light bounces back and forth between two mirrors. The repeated, continuous measurements make the technique especially sensitive and accurate in identifying "fingerprints" of specific molecules. This latest experiment used new "supermirrors," which have crystalline coatings that reduce light losses and improved detection sensitivity 10-fold.
JILA's results, notably the effects of molecular collisions, need to be included in future atmospheric and combustion model predictions, according to the paper. For example, even at low pressures, the reaction produces a DOCO yield of nearly 50 percent, meaning about half the reactions pause at the intermediate stage.
This observation affects calculations that go beyond Earth: Other researchers have shown that HOCO can contribute 25-70 percent of the total CO2 concentration in the cold Martian atmosphere.
In the future, JILA researchers plan to extend the experimental approach to study other chemical products and processes. One topic of interest is reactions involving water and CO2, to aid understanding of how atmospheric CO2 interacts with and acidifies the oceans. Also of interest are studies of engine combustion, which affects fuel economy. A car engine combines air (oxygen and nitrogen) and fuel (hydrocarbons) to produce CO2 and water. Incomplete combustion creates CO.

How is the universe expanding?




Five years ago, the Nobel Prize in Physics was awarded to three astronomers for their discovery, in the late 1990s, that the universe is expanding at an accelerating pace.


Their conclusions were based on analysis of Type Ia supernovae - the spectacular thermonuclear explosion of dying stars - picked up by the Hubble space telescope and large ground-based telescopes. It led to the widespread acceptance of the idea that the universe is dominated by a mysterious substance named 'dark energy' that drives this accelerating expansion.
Now, a team of scientists led by Professor Subir Sarkar of Oxford University's Department of Physics has cast doubt on this standard cosmological concept. Making use of a vastly increased data set - a catalogue of 740 Type Ia supernovae, more than ten times the original sample size - the researchers have found that the evidence for acceleration may be flimsier than previously thought, with the data being consistent with a constant rate of expansion.
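To see roughly what the data must decide between, the sketch below uses the standard low-redshift expansion of the luminosity distance to compare a coasting universe (deceleration parameter q0 = 0) with an accelerating one (q0 of about -0.55, close to the standard-model value). The Hubble constant and q0 values are assumptions for illustration; real analyses integrate the full cosmological model and fit supernova light-curve parameters simultaneously rather than using this truncated series.

```python
import numpy as np

C_KM_S = 2.998e5   # speed of light, km/s
H0 = 70.0          # assumed Hubble constant, km/s/Mpc

def distance_modulus(z, q0):
    """Distance modulus from the low-redshift expansion
    d_L ~ (c z / H0) * (1 + (1 - q0) * z / 2), valid only for modest z."""
    d_l_mpc = (C_KM_S * z / H0) * (1.0 + 0.5 * (1.0 - q0) * z)
    return 5.0 * np.log10(d_l_mpc * 1.0e6) - 5.0   # convert Mpc to pc

for z in (0.1, 0.3, 0.5):
    coasting = distance_modulus(z, q0=0.0)        # constant expansion rate
    accelerating = distance_modulus(z, q0=-0.55)  # roughly the standard-model value
    print(f"z = {z:.1f}: accelerating universe makes SNe {accelerating - coasting:.2f} mag fainter")
```

The differences are a few tenths of a magnitude at most, comparable to the scatter of individual standardized supernovae, which is why the size of the sample and the statistical treatment matter so much.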
The study is published in the Nature journal Scientific Reports.
Professor Sarkar, who also holds a position at the Niels Bohr Institute in Copenhagen, said: 'The discovery of the accelerating expansion of the universe won the Nobel Prize, the Gruber Cosmology Prize, and the Breakthrough Prize in Fundamental Physics. It led to the widespread acceptance of the idea that the universe is dominated by "dark energy" that behaves like a cosmological constant - this is now the "standard model" of cosmology.
'However, there now exists a much bigger database of supernovae on which to perform rigorous and detailed statistical analyses. We analysed the latest catalogue of 740 Type Ia supernovae - over ten times bigger than the original samples on which the discovery claim was based - and found that the evidence for accelerated expansion is, at most, what physicists call "3 sigma". This is far short of the "5 sigma" standard required to claim a discovery of fundamental significance.
'An analogous example in this context would be the recent suggestion for a new particle weighing 750 GeV based on data from the Large Hadron Collider at CERN. It initially had even higher significance - 3.9 and 3.4 sigma in December last year - and stimulated over 500 theoretical papers. However, it was announced in August that new data show that the significance has dropped to less than 1 sigma. It was just a statistical fluctuation, and there is no such particle.'
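For readers not fluent in 'sigma' language, the short sketch below (assuming scipy is available) converts those significance levels into the one-sided Gaussian probability of a chance fluctuation at least that large.

```python
from scipy.stats import norm

for sigma in (3.0, 3.9, 5.0):
    p = norm.sf(sigma)   # one-sided tail probability of a fluctuation >= sigma
    print(f"{sigma} sigma  ->  p ~ {p:.1e}  (about 1 in {1 / p:,.0f})")
```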
There is other data available that appears to support the idea of an accelerating universe, such as information on the cosmic microwave background - the faint afterglow of the Big Bang - from the Planck satellite. However, Professor Sarkar said: 'All of these tests are indirect, carried out in the framework of an assumed model, and the cosmic microwave background is not directly affected by dark energy. Actually, there is indeed a subtle effect, the late-time integrated Sachs-Wolfe effect, but this has not been convincingly detected.
'So it is quite possible that we are being misled and that the apparent manifestation of dark energy is a consequence of analysing the data in an oversimplified theoretical model - one that was in fact constructed in the 1930s, long before there was any real data. A more sophisticated theoretical framework accounting for the observation that the universe is not exactly homogeneous and that its matter content may not behave as an ideal gas - two key assumptions of standard cosmology - may well be able to account for all observations without requiring dark energy. Indeed, vacuum energy is something of which we have absolutely no understanding in fundamental theory.'
Professor Sarkar added: 'Naturally, a lot of work will be necessary to convince the physics community of this, but our work serves to demonstrate that a key pillar of the standard cosmological model is rather shaky. Hopefully this will motivate better analyses of cosmological data, as well as inspiring theorists to investigate more nuanced cosmological models. Significant progress will be made when the European Extremely Large Telescope makes observations with an ultrasensitive "laser comb" to directly measure over a ten to 15-year period whether the expansion rate is indeed accelerating.'

Physicists retrieve 'lost' information from quantum measurements



Typically, when scientists make a measurement, they know exactly what kind of measurement they're making, and their purpose is to obtain a measurement outcome. In an "unrecorded measurement," by contrast, both the type of measurement and its outcome are unknown. Even though this information is missing, experiments clearly show that an unrecorded measurement unavoidably disturbs the state of a quantum system, whereas a classical system is left untouched.
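A toy single-qubit calculation, not drawn from the paper, makes the quantum-classical contrast explicit: prepare a qubit in an eigenstate of X, perform a Z-basis measurement whose outcome is thrown away, and the statistics of a later X measurement change, whereas looking at a classical coin and forgetting the result changes nothing.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+>, an eigenstate of X
rho = np.outer(plus, plus.conj())

# Unrecorded Z-basis measurement: apply both projectors and sum, because the
# outcome is unknown (the non-selective, "unrecorded" channel).
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

print("<X> before:", np.real(np.trace(rho @ X)))        # 1.0: X outcome was certain
print("<X> after :", np.real(np.trace(rho_after @ X)))  # 0.0: now completely random
```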
Although the information in unrecorded measurements appears to be completely lost, in a paper published recently in EPL, Michael Revzen and Ady Mann, both Professors Emeriti at the Technion-Israel Institute of Technology, have described a protocol that can retrieve some of the lost information.
The fact that it is possible to retrieve this lost information reveals new insight into the fundamental nature of quantum measurements, mainly by supporting the idea that quantum measurements contain both quantum and classical components.
Previously, analysis of quantum measurement theory has suggested that, while a quantum measurement starts out purely quantum, it becomes somewhat classical when the quantum state of the system being measured is reduced to a "classical-like" probability distribution. At this point, it is possible to predict the probability of the result of a quantum measurement.
As the physicists explain in the new paper, this step when a quantum state is reduced to a classical-like distribution is the traceable part of an unrecorded measurement—or in other words, it is the "lost" information that the new protocol retrieves. So the retrieval of the lost information provides evidence of the quantum-to-classical transition in a quantum measurement.
"We have demonstrated that analysis of quantum measurement is facilitated by viewing it as being made of two parts," Revzen told Phys.org. "The first, a pure quantum one, pertains to the non-commutativity of measurements' bases. The second relates to classical-like probabilities.
"This partitioning circumvents the ever-present polemic surrounding the whole issue of measurements and allowed us, on the basis of the accepted wisdom pertaining to classical measurements, to suggest and demonstrate that the non-commutative measurement basis may be retrieved by measuring an unrecorded measurement."
As the physicists explain, the key to retrieving the lost information is to use quantum entanglement to entangle the system being measured by an unrecorded measurement with a second system. Since the two systems are entangled, the unrecorded measurement affects both systems. Then a control measurement made on the entangled system can extract some of the lost information. The scientists explain that the essential role of entanglement in retrieving the lost information affirms the intimate connection between entanglement and measurements, as well as the uncertainty principle, which limits the precision with which certain measurements can be made. The scientists also note that the entire concept of retrieval has connections to quantum cryptography.
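The flavor of such a retrieval can be illustrated with a small numerical sketch; this is a schematic example, not the authors' protocol. Entangle the probed qubit with a reference qubit in a Bell state, apply an unrecorded measurement in either the Z or the X basis, and then check the ZZ and XX correlations of the pair: they come out differently depending on which basis was measured.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) shared by the probed qubit A and a reference qubit B
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

def unrecorded_on_A(rho, projectors):
    """Non-selective (unrecorded) measurement on qubit A: sum over projector channels."""
    out = np.zeros_like(rho)
    for P in projectors:
        K = np.kron(P, I2)
        out += K @ rho @ K
    return out

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
bases = {
    "Z": [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)],
    "X": [np.outer(plus, plus.conj()), np.outer(minus, minus.conj())],
}

for name, projectors in bases.items():
    r = unrecorded_on_A(rho, projectors)
    zz = np.real(np.trace(r @ np.kron(Z, Z)))
    xx = np.real(np.trace(r @ np.kron(X, X)))
    print(f"unrecorded {name}-basis measurement:  <ZZ> = {zz:+.1f}, <XX> = {xx:+.1f}")
```

The correlation that survives singles out the basis that was measured, which is the sense in which part of the "lost" information can be recovered from a control measurement on the entangled pair.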
"Posing the problem of retrieval of unrecorded measurement is, we believe, new," Mann said. "The whole issue, however, is closely related to the problem of the combatting eavesdropper in quantum cryptography which aims, in effect, at detection of the existence of 'unrecorded measurement' (our aim is their identification). The issue of eavesdropper detection has been under active study for some time."
The scientists are continuing to build on the new results by showing that some of the lost information can never be retrieved, and that in other cases, it's impossible to determine whether certain information can be retrieved.
"At present, we are trying to find a comprehensive proof that the retrieval of the measurement basis is indeed the maximal possible retrieval, as well as to pin down the precise meaning of the ubiquitous 'undetermined' case," Revzen said. "This is, within our general study of quantum measurement, arguably the most obscure subject of the foundation of quantum mechanics."
Black hole hidden within its own exhaust

Artist's impression of the heart of galaxy NGC 1068, which harbors an actively feeding supermassive black hole. ALMA discovered clouds of cold molecular gas and dust arising from the black hole's outer accretion disk.
 

Supermassive black holes, millions to billions of times the mass of our Sun, are found at the centers of galaxies. Many of these galactic behemoths are hidden within a thick doughnut-shaped ring of dust and gas known as a torus. Previous observations suggest these cloaking, tire-like structures are formed from the native material found near the center of a galaxy.
New data from the Atacama Large Millimeter/submillimeter Array (ALMA), however, reveal that the black hole at the center of a galaxy named NGC 1068 is actually the source of its own torus of dust and gas, forged from material flung out of the black hole's accretion disk.
This newly discovered cosmic fountain of cold gas and dust could reshape our understanding of how black holes impact their host galaxies and potentially the intergalactic medium.
"Think of a black hole as an engine. It's fueled by material falling in on it from a flattened disk of dust and gas," said Jack Gallimore, an astronomer at Bucknell University in Lewisburg, Pennsylvania, and lead author on a paper published in Astrophysical Journal Letters. "But like any engine, a black hole can also emit exhaust." That exhaust, astronomers discovered, is the likely source of the torus of material that effectively obscures the region around the galaxy's super-massive black hole from optical telescopes.
NGC 1068 (also known as Messier 77) is a barred spiral galaxy approximately 47 million light-years from Earth in the direction of the constellation Cetus. At its center is an active galactic nucleus, a supermassive black hole that is being fed by a thin, rotating disk of gas and dust known as an accretion disk. As material in the disk spirals toward the central black hole, it becomes superheated and blazes bright with ultraviolet radiation. The outer reaches of the disk, however, are considerably cooler and glow more appreciably in infrared light and the millimeter-wavelength light that ALMA can detect.
ALMA image of the central region of galaxy NGC 1068. The torus of material harboring the supermassive black hole is highlighted in the pullout box. This region, which is approximately 40 light-years across, is the result of material flung out of the black hole's accretion disk.
Using ALMA, an international team of astronomers peered deep into this region and discovered a sprinkling of cool clouds of carbon monoxide lifting off the outer portion of the accretion disk. The energy from the hot inner disk partially ionizes these clouds, enabling them to adhere to powerful magnetic field lines that wrap around the disk.
Like water being flung out of a rapidly rotating garden sprinkler, the clouds rising above the accretion disk get accelerated centrifugally along these magnetic field lines to very high speeds—approximately 400 to 800 kilometers per second (nearly 2 million miles per hour). This is up to nearly three times faster than the rotational speed of the outer accretion disk, fast enough to send the clouds hurtling further out into the galaxy.
"These clouds are traveling so fast that they reach 'escape velocity' and are jettisoned in a cone-like spray from both sides of the disk," said Gallimore. "With ALMA, we can for the first time see that it is the gas that is thrown out that hides the black hole, not the gas falling in." This suggests that the general theory of an active black hole is oversimplified, he concludes.
With future ALMA observations, the astronomers hope to work out a fuel budget for this black hole engine: how much mass per year goes into the black hole and how much is ejected as exhaust.
"These are fundamental quantities for understanding black holes that we really don't have a good handle on at this time," concludes Gallimore.
This research is presented in the paper titled "High-velocity bipolar molecular emission from an AGN torus," by J. Gallimore et al., published in Astrophysical Journal Letters on 15 September 2016. [Preprint: arxiv.org/pdf/1608.02210v1.pdf ]
