Remember the last post about SpaceX? Well, they are at it again!
This time, SpaceX launched supplies to the International Space Station on Saturday, and what's more, they did it with a vessel that has flown before.
The refurbished Dragon cargo capsule lifted off atop a Falcon 9 rocket at 5:07 pm (2107 GMT) from Cape
Canaveral, Florida.
As NASA spokesman Mike Curie called out the countdown, the rocket blazed a steady vertical path into the clouds.
The last time this particular
Dragon spacecraft flew to space was in 2014.
The Dragon on its present mission is packed with almost
6,000 pounds (2,700 kilograms) of science research, crew supplies and hardware,
and should arrive at the ISS on Monday.
The supplies for special experiments
include live mice to study the effects of osteoporosis and fruit flies for
research on microgravity's impact on the heart.
The spacecraft is also loaded with
solar panels and equipment to study neutron stars.
About 10 minutes after launch,
SpaceX successfully returned the first stage of the Falcon 9 rocket back to a
controlled landing at Cape Canaveral.
The rocket powered its engines and
guided itself down to Landing Zone One, not far from the launch site.
"The first stage is back,"
Curie said in a NASA live webcast, as video images showed the tall, narrow
portion of the rocket touch down steadily in a cloud of smoke.
SpaceX said it marked the company's
fifth successful landing on solid ground. Several of its Falcon 9 rockets have
returned upright to platforms floating in the ocean.
The effort is part of SpaceX's push
to make spaceflight cheaper by re-using costly rocket
and spaceship components after each launch, rather than ditching them in the
ocean.
The launch was the 100th from NASA's
historic launch pad 39A, the starting point for the Apollo missions to the Moon
in the 1960s and 1970s, as well as a total of 82 shuttle flights.
A composite image of the Western hemisphere of the Earth. Credit: NASA
More than 90% of Earth's continental crust
is made up of silica-rich minerals, such as feldspar and quartz. But
where did this silica-enriched material come from? And could it provide a
clue in the search for life on other planets?
Conventional
theory holds that all of the early Earth's crustal ingredients were
formed by volcanic activity. Now, however, McGill University earth
scientists Don Baker and Kassandra Sofonio have published a theory with
a novel twist: some of the chemical components of this material settled
onto Earth's early surface from the steamy atmosphere that prevailed at
the time.
First, a bit of ancient geochemical history: Scientists believe that a
Mars-sized planetoid plowed into the proto-Earth around 4.5 billion
years ago, melting the Earth and turning it into an ocean of magma. In
the wake of that impact—which also created enough debris to form the
moon—the Earth's surface gradually cooled until it was more or less
solid. Baker's new theory, like the conventional one, is based on that
premise.
The atmosphere following that collision, however, consisted of
high-temperature steam that dissolved rocks on the Earth's immediate
surface—"much like how sugar is dissolved in coffee," Baker explains.
This is where the new wrinkle comes in. "These dissolved minerals rose
to the upper atmosphere and cooled off, and then these silicate materials that were dissolved at the surface would start to separate out and fall back to Earth in what we call a silicate rain."
To test this theory, Baker and co-author Kassandra Sofonio, a McGill
undergraduate research assistant, spent months developing a series of
laboratory experiments designed to mimic the steamy conditions on early
Earth. A mixture of bulk silicate earth materials and water was melted
in air at 1,550 degrees Celsius, then ground to a powder. Small amounts
of the powder, along with water, were then enclosed in gold palladium
capsules, placed in a pressure vessel and heated to about 727 degrees
Celsius and 100 times Earth's surface pressure to simulate conditions in
the Earth's atmosphere about 1 million years after the moon-forming
impact. After each experiment, samples were rapidly quenched and the
material that had been dissolved in the high temperature steam analyzed.
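For a sense of scale, the quoted experimental conditions can be put into SI units with a quick back-of-envelope conversion (a sketch only; "Earth's surface pressure" is taken here to mean one standard atmosphere, which the paper may define slightly differently):

```python
# Rough unit conversions for the reported experimental conditions.
ATM_PA = 101_325  # one standard atmosphere, in pascals

def celsius_to_kelvin(t_c):
    """Convert a temperature from degrees Celsius to kelvin."""
    return t_c + 273.15

melt_temp_k = celsius_to_kelvin(1550)    # initial melt of silicate + water
run_temp_k = celsius_to_kelvin(727)      # capsule temperature, ~1000 K
run_pressure_mpa = 100 * ATM_PA / 1e6    # "100 times Earth's surface pressure"

print(f"melt: {melt_temp_k:.0f} K, run: {run_temp_k:.0f} K, "
      f"pressure: {run_pressure_mpa:.1f} MPa")
```

In other words, the capsules were held at roughly 1000 K and about 10 MPa, conditions chosen to stand in for the dense steam atmosphere of the early Earth.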
The experiments were guided by other scientists' previous experiments
on rock-water interactions at high pressures, and by the McGill team's
own preliminary calculations, Baker notes. Even so, "we were surprised
by the similarity of the dissolved silicate material produced by the
experiments" to that found in the Earth's crust.
Their resulting paper, published in the journal Earth and Planetary Science Letters,
posits a new theory of "aerial metasomatism"—a term coined by Sofonio
to describe the process by which silica minerals condensed and fell back
to earth over about a million years, producing some of the earliest
rock specimens known today.
"Our experiment shows the chemistry of this process," Baker says, and
it could provide scientists with important clues as to which exoplanets
might have the capacity to harbor life.
"This time in early Earth's history is still really exciting," he
adds. "A lot of people think that life started very soon after these
events that we're talking about. This is setting up the stages for the
Earth being ready to support life."
NASA's Juno spacecraft soared directly over Jupiter's south pole
when JunoCam acquired this image on February 2, 2017 at 6:06 a.m. PT
(9:06 a.m. ET), from an altitude of about 62,800 miles (101,000
kilometers) above the cloud tops. Credit: NASA
NASA's Juno mission to Jupiter, which has
been in orbit around the gas giant since July 4, 2016, will remain in
its current 53-day orbit for the remainder of the mission. This will
allow Juno to accomplish its science goals, while avoiding the risk of a
previously-planned engine firing that would have reduced the
spacecraft's orbital period to 14 days.
"Juno is healthy, its science
instruments are fully operational, and the data and images we've
received are nothing short of amazing," said Thomas Zurbuchen, associate
administrator for NASA's Science Mission Directorate in Washington.
"The decision to forego the burn is the right thing to do—preserving a
valuable asset so that Juno can continue its exciting journey of
discovery."
Juno has successfully orbited Jupiter four times since arriving at
the giant planet, with the most recent orbit completed on Feb. 2. Its
next close flyby of Jupiter will be March 27.
The orbital period does not affect the quality of the science
collected by Juno on each flyby, since the altitude over Jupiter will be
the same at the time of closest approach. In fact, the longer orbit
provides new opportunities that allow further exploration of the far
reaches of space dominated by Jupiter's magnetic field, increasing the
value of Juno's research.
During each orbit, Juno soars low over Jupiter's cloud tops—as close
as about 2,600 miles (4,100 kilometers). During these flybys, Juno
probes beneath the obscuring cloud cover and studies Jupiter's auroras
to learn more about the planet's origins, structure, atmosphere and
magnetosphere.
The original Juno flight plan envisioned the spacecraft looping
around Jupiter twice in 53-day orbits, then reducing its orbital period
to 14 days for the remainder of the mission. However, two helium check
valves that are part of the plumbing for the spacecraft's main engine
did not operate as expected when the propulsion system was pressurized
in October. Telemetry from the spacecraft indicated that it took several
minutes for the valves to open, while it took only a few seconds during
past main engine firings.
"During a thorough review, we looked at multiple scenarios that would
place Juno in a shorter-period orbit, but there was concern that
another main engine burn could result in a less-than-desirable orbit,"
said Rick Nybakken, Juno project manager at NASA's Jet Propulsion
Laboratory in Pasadena, California. "The bottom line is a burn
represented a risk to completion of Juno's science objectives."
Juno's larger 53-day orbit allows for "bonus science" that wasn't
part of the original mission design. Juno will further explore the far
reaches of the Jovian magnetosphere—the region of space dominated by
Jupiter's magnetic field—including the far magnetotail, the southern
magnetosphere, and the magnetospheric boundary region called the
magnetopause. Understanding magnetospheres and how they interact with
the solar wind is a key science goal of NASA's Heliophysics Science
Division.
"Another key advantage of the longer orbit is that Juno will spend less time within the strong radiation belts on each orbit,"
said Scott Bolton, Juno principal investigator from Southwest Research
Institute in San Antonio. "This is significant because radiation has
been the main life-limiting factor for Juno."
Juno will continue to operate within the current budget plan through
July 2018, for a total of 12 science orbits. The team can then propose
to extend the mission during the next science review cycle. The review
process evaluates proposed mission extensions on the merit and value of
previous and anticipated science returns.
The Juno science team continues to analyze returns from previous
flybys. Revelations include that Jupiter's magnetic fields and aurora
are bigger and more powerful than originally thought and that the belts
and zones that give the gas giant's cloud top its distinctive look
extend deep into the planet's interior. Peer-reviewed papers with more
in-depth science results from Juno's first three flybys are expected to
be published within the next few months. In addition, the mission's
JunoCam—the first interplanetary outreach camera—is now being guided
with assistance from the public. People can participate by voting on
which features on Jupiter should be imaged during each flyby.
"Juno is providing spectacular results, and we are rewriting our
ideas of how giant planets work," said Bolton. "The science will be just
as spectacular as with our original plan."
This is the "South Pillar" region of the star-forming region
called the Carina Nebula. Like cracking open a watermelon and finding
its seeds, the infrared telescope "busted open" this murky cloud to
reveal star embryos tucked inside finger-like pillars of thick dust.
Credit: NASA
Physicists have proposed that the
violations of energy conservation in the early universe, as predicted by
certain modified theories in quantum mechanics and quantum gravity, may
explain the cosmological constant problem, which is sometimes referred
to as "the worst theoretical prediction in the history of physics."
The
physicists, Thibaut Josset and Alejandro Perez at the University of
Aix-Marseille, France, and Daniel Sudarsky at the National Autonomous
University of Mexico, have published a paper on their proposal in a
recent issue of Physical Review Letters.
"The main achievement of the work was the unexpected relation between
two apparently very distinct issues, namely the accelerated expansion
of the universe and microscopic physics," Josset told Phys.org. "This offers a fresh look at the cosmological constant problem, which is still far from being solved."
Einstein originally proposed the concept of the cosmological constant in 1917 to modify his theory of general relativity in order to prevent the universe from expanding, since at the time the universe was considered to be static.
Now that modern observations show that the universe is expanding at
an accelerating rate, the cosmological constant today can be thought of
as the simplest form of dark energy, offering a way to account for current observations.
However, there is a huge discrepancy—up to 120 orders of
magnitude—between the large theoretical predicted value of the
cosmological constant and the tiny observed value. To explain this
disagreement, some research has suggested that the cosmological constant
may be an entirely new constant of nature that must be measured more
precisely, while another possibility is that the underlying mechanism
assumed by theory is incorrect. The new study falls into the second line
of thought, suggesting that scientists still do not fully understand
the root causes of the cosmological constant.
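The size of that discrepancy can be reproduced with a back-of-envelope calculation. The usual "worst prediction" comparison pits a Planck-scale vacuum energy density against the measured dark-energy density; the exact count of orders of magnitude depends on the assumed cutoff, which is why figures from roughly 60 up to about 120 appear in the literature. A minimal sketch, using approximate values of the fundamental constants:

```python
import math

# Approximate physical constants (SI units).
HBAR = 1.0546e-34   # reduced Planck constant, J s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s

# "Theoretical" value: the Planck energy density, E_P / l_P^3 = c^7 / (hbar G^2).
planck_energy_density = C**7 / (HBAR * G**2)          # J/m^3, ~1e113

# "Observed" value: today's dark-energy density, ~69% of the critical density
# (critical mass density ~8.5e-27 kg/m^3 for H0 ~ 67 km/s/Mpc).
critical_density = 8.5e-27                             # kg/m^3
dark_energy_density = 0.69 * critical_density * C**2   # J/m^3, ~5e-10

orders = math.log10(planck_energy_density / dark_energy_density)
print(f"discrepancy: ~{orders:.0f} orders of magnitude")
```

With the full Planck cutoff the mismatch comes out slightly above 120 orders of magnitude, which is the ballpark behind the "worst theoretical prediction" label.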
The basic idea of the new paper is that violations of energy conservation in the early universe
could have been so small that they would have negligible effects at
local scales and remain inaccessible to modern experiments, yet at the
same time these violations could have made significant contributions to
the present value of the cosmological constant.
To most people, the idea that conservation of energy is violated goes
against everything they learned about the most fundamental laws of
physics. But on the cosmological scale, conservation of energy is not as
steadfast a law as it is on smaller scales. In this study, the
physicists specifically investigated two theories in which violations of
energy conservation naturally arise.
The first scenario of violations involves modifications to quantum
theory that have previously been proposed to investigate phenomena such
as the creation and evaporation of black holes, and which also appear in
interpretations of quantum mechanics in which the wavefunction
undergoes spontaneous collapse. In these cases, energy is created in an
amount that is proportional to the mass of the collapsing object.
Violations of energy conservation also arise in some approaches to
quantum gravity in which spacetime is considered to be granular due to
the fundamental limit of length (the Planck length, which is on the
order of 10⁻³⁵ m). This spacetime discreteness could have led
to either an increase or decrease in energy that may have begun
contributing to the cosmological constant starting when photons
decoupled from electrons in the early universe, during the period known
as recombination.
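The Planck length quoted above follows directly from the fundamental constants, via l_P = √(ħG/c³):

```python
import math

# Planck length from fundamental constants: l_P = sqrt(hbar * G / c^3).
HBAR = 1.0546e-34  # reduced Planck constant, J s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)
print(f"l_P ~ {planck_length:.2e} m")  # ~1.6e-35 m
```

At this scale, the granularity of spacetime posited by these quantum-gravity approaches would become significant.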
As the researchers explain, their proposal relies on a modification
to general relativity called unimodular gravity, first proposed by
Einstein in 1919.
"Energy from matter components can be ceded to the gravitational
field, and this 'loss of energy' will behave as a cosmological
constant—it will not be diluted by later expansion of the universe,"
Josset said. "Therefore a tiny loss or creation of energy in the remote
past may have significant consequences today on large scale."
Whatever the source of the energy conservation violation, the
important result is that the energy that was created or lost affected
the cosmological constant to a greater and greater extent as time went
by, while the effects on matter decreased over time due to the expansion
of the universe.
Another way to put it, as the physicists explain in their paper, is
that the cosmological constant can be thought of as a record of the
energy non-conservation during the history of the universe.
Currently there is no way to tell whether the violations of energy
conservation investigated here truly did affect the cosmological
constant, but the physicists plan to further investigate the possibility
in the future.
"Our proposal is very general and any violation of energy conservation
is expected to contribute to an effective cosmological constant,"
Josset said. "This could allow to set new constraints on
phenomenological models beyond standard quantum mechanics.
"On the other hand, direct evidence that dark energy is sourced by
energy non-conservation seems largely out-of-reach, as we have access to
the value of lambda [the cosmological constant] today and constraints on its evolution at late time only."
SpaceX is poised to blast off a Falcon 9
rocket on Saturday, marking its first return to flight since a costly
and complicated launchpad explosion in September.
The launch of
10 satellites for Iridium, a mobile and data communications company, is
scheduled from Vandenberg Air Force Base in California at 9:54 am (1754
GMT).
The launch window is "instantaneous," meaning that any technical
glitch or poor weather—the current forecast is just 60 percent
favorable—would push the launch to the next opportunity on
Sunday at 1749 GMT.
The stakes for SpaceX are high after a pair of accidents.
September's blast destroyed a $200 million satellite Facebook had
planned to use to beam high-speed internet to Africa. Another explosion
in June 2015 two minutes after liftoff obliterated a Dragon cargo ship packed with goods bound for the astronauts at the International Space Station.
The incidents cost SpaceX dearly, possibly pushing the privately
owned company into the red, the Wall Street Journal reported this week.
"That June 2015 disaster, followed by months of launch delays,
contributed to a quarter-billion dollar annual loss and a six percent
drop in revenue, after two years of surging sales and small profits,"
the paper said after a review of internal financial documents from 2011
to 2015, forecasts for the next decade and interviews with former SpaceX
employees.
Three weeks after last September's accident, the company removed a
long-standing phrase from its website saying it was "profitable and
cash-flow positive."
That "suggest(ed) both profit and cash flow had moved into the red
for 2016," the Journal said, noting that it found an operating loss for
every quarter in 2016 and negative cash flow of roughly $15 million.
SpaceX, headed by billionaire entrepreneur Elon Musk, declined to
comment on the findings and is not obligated to release its financial
figures because it is a private company, the report said.
"The company is in a financially strong position and is well
positioned for future growth," with $1 billion in cash and no debt,
SpaceX chief financial officer Bret Johnson was quoted as saying.
Problems fixed
The June 2015 accident—in which the unmanned Dragon cargo ship
exploded in a massive fireball two minutes after launch—was caused by a
faulty strut that allowed a helium tank to snap loose, SpaceX said.
Last September's explosion, during a test a day prior to a scheduled
launch, was traced to a problem with a pressure vessel in the
second-stage liquid oxygen tank.
SpaceX said it will change the way it fuels for now and redesign its pressure vessels in the future.
Musk, who cofounded PayPal and also owns Tesla Motors, has lofty
goals, including colonizing Mars and revolutionizing the launch industry
by making rocket components reusable.
Founded in 2002, SpaceX logged 18 successful launches of the Falcon 9 before the 2015 accident.
The company has a $1.6 billion contract with NASA to supply the
International Space Station using its Dragon space capsule, which is the
only cargo ship that can return to the Earth intact.
SpaceX had hoped to resume Falcon 9 flights as early as November, then in mid-December, before pushing the date to January.
The figure shows a sub-population of ancient stars, called
Carbon-Enhanced Metal-Poor (CEMP) stars. These stars contain 100 to
1,000,000 times LESS iron (and other heavy elements) than the Sun, but
10 to 10,000 times MORE carbon, relative to iron. The unusual
chemical compositions of these stars provide clues to their birth
environments, and the nature of the stars in which the carbon formed. In
the figure, A(C) is the absolute amount of carbon, while the horizontal
axis represents the ratio of iron, relative to hydrogen, compared with
the same ratio in the Sun. Credit: University of Notre Dame
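The bracket notation on the figure's horizontal axis is the standard astronomical abundance scale: [Fe/H] is the base-10 logarithm of a star's iron-to-hydrogen ratio relative to the Sun's, so the quoted "100 to 1,000,000 times less iron" maps onto [Fe/H] between −2 and −6. A minimal sketch of that mapping:

```python
import math

# Standard abundance notation:
#   [Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_sun
#   A(C)   = log10(N_C/N_H) + 12  (the figure's vertical axis)

def fe_h(iron_fraction_of_solar):
    """[Fe/H] for a star whose Fe/H ratio is the given fraction of solar."""
    return math.log10(iron_fraction_of_solar)

print(fe_h(1e-2))  # 100x less iron than the Sun -> [Fe/H] = -2.0
print(fe_h(1e-6))  # 1,000,000x less iron        -> [Fe/H] = -6.0
```

On this scale the Sun sits at [Fe/H] = 0 by definition, and more negative values mean more chemically primitive stars.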
University of Notre Dame astronomers have
identified what they believe to be the second generation of stars,
shedding light on the nature of the universe's first stars.
A subclass of carbon-enhanced metal-poor (CEMP) stars, the so-called CEMP-no stars, are ancient stars that have large amounts of carbon but little of the heavy metals
(such as iron) common to later-generation stars. Massive
first-generation stars made up of pure hydrogen and helium produced and
ejected heavier elements
by stellar winds during their lifetimes or when they exploded as
supernovae. Those metals—anything heavier than helium, in astronomical
parlance—polluted the nearby gas clouds from which new stars formed.
Jinmi Yoon, a postdoctoral research associate in the Department of
Physics; Timothy Beers, the Notre Dame Chair in Astrophysics; and
Vinicius Placco, a research professor at Notre Dame, along with their
collaborators, show in findings published in the Astrophysical Journal
this week that the lowest metallicity stars, the most chemically
primitive, include large fractions of CEMP stars. The CEMP-no stars,
which are also rich in nitrogen and oxygen, are likely the stars born
out of hydrogen and helium gas clouds that were polluted by the elements
produced by the universe's first stars.
"The CEMP-no stars we see today, at least many of them, were born
shortly after the Big Bang, 13.5 billion years ago, out of almost
completely unpolluted material," Yoon says. "These stars, located in the
halo system of our galaxy, are true second-generation stars—born out of
the nucleosynthesis products of the very first stars."
Beers says it's unlikely that any of the universe's first stars still
exist, but much can be learned about them from detailed studies of the
next generation of stars.
"We're analyzing the chemical products of the very first stars by
looking at what was locked up by the second-generation stars," Beers
says. "We can use this information to tell the story of how the first
elements were formed, and determine the distribution of the masses of
those first stars. If we know how their masses were distributed, we can
model the process of how the first stars formed and evolved from the
very beginning."
The authors used high-resolution spectroscopic data gathered by many
astronomers to measure the chemical compositions of about 300 stars in
the halo of the Milky Way. More and heavier elements form as later
generations of stars continue to contribute additional metals, they say.
As new generations of stars are born, they incorporate the metals
produced by prior generations. Hence, the more heavy metals a star
contains, the more recently it was born. Our sun, for example, is
relatively young, with an age of only 4.5 billion years.
A companion paper, titled "Observational constraints on first-star
nucleosynthesis. II. Spectroscopy of an ultra metal-poor CEMP-no star,"
of which Placco was the lead author, was also published in the same
issue of the journal this week. The paper compares theoretical
predictions for the chemical composition of zero-metallicity supernova
models with a newly discovered CEMP-no star in the Milky Way galaxy.
This artist’s view shows how the light coming from the surface
of a strongly magnetic neutron star (left) becomes linearly polarised as
it travels through the vacuum of space close to the star on its way to
the observer on Earth (right).
By
studying the light emitted from an extraordinarily dense and strongly
magnetized neutron star using ESO's Very Large Telescope, astronomers
may have found the first observational indications of a strange quantum
effect, first predicted in the 1930s. The polarization of the observed
light suggests that the empty space around the neutron star is subject
to a quantum effect known as vacuum birefringence.
A team led by
Roberto Mignani from INAF Milan (Italy) and from the University of
Zielona Gora (Poland), used ESO's Very Large Telescope (VLT) at the
Paranal Observatory in Chile to observe the neutron star RX
J1856.5-3754, about 400 light-years from Earth.
Despite being amongst the closest neutron stars,
its extreme dimness meant the astronomers could only observe the star
with visible light using the FORS2 instrument on the VLT, at the limits
of current telescope technology.
Neutron stars are the very dense remnant cores of massive stars—at
least 10 times more massive than our Sun—that have exploded as
supernovae at the ends of their lives. They also have extreme magnetic
fields, billions of times stronger than that of the Sun, that permeate
their outer surface and surroundings.
These fields are so strong that they even affect the properties of the empty space around the star. Normally a vacuum
is thought of as completely empty, and light can travel through it
without being changed. But in quantum electrodynamics (QED), the quantum
theory describing the interaction between photons and charged particles
such as electrons, space is full of virtual particles that appear and
vanish all the time. Very strong magnetic fields can modify this space so that it affects the polarisation of light passing through it.
Mignani explains: "According to QED, a highly magnetised vacuum
behaves as a prism for the propagation of light, an effect known as
vacuum birefringence."
Among the many predictions of QED, however, vacuum birefringence so
far lacked a direct experimental demonstration. Attempts to detect it in
the laboratory have not yet succeeded in the 80 years since it was
predicted in a paper by Werner Heisenberg (of uncertainty principle
fame) and Hans Heinrich Euler.
This wide field image shows the sky around the very faint
neutron star RX J1856.5-3754 in the southern constellation of Corona
Australis. This part of the sky also contains interesting regions of
dark and bright nebulosity surrounding it.
"This
effect can be detected only in the presence of enormously strong
magnetic fields, such as those around neutron stars. This shows, once
more, that neutron stars are invaluable laboratories in which to study
the fundamental laws of nature," says Roberto Turolla (University of
Padua, Italy).
After careful analysis of the VLT data, Mignani and his team detected
linear polarisation—at a significant degree of around 16%—that they say
is likely due to the boosting effect of vacuum birefringence occurring
in the area of empty space (which, as noted above, is never truly empty) surrounding RX J1856.5-3754.
Vincenzo Testa (INAF, Rome, Italy) comments: "This is the faintest
object for which polarisation has ever been measured. It required one of
the largest and most efficient telescopes in the world, the VLT, and
accurate data analysis techniques to enhance the signal from such a
faint star."
"The high linear polarisation that we measured with the VLT can't be
easily explained by our models unless the vacuum birefringence effects
predicted by QED are included," adds Mignani.
"This VLT study is the very first observational support for
predictions of these kinds of QED effects arising in extremely strong
magnetic fields," remarks Silvia Zane (UCL/MSSL, UK).
Mignani is excited about further improvements to this area of study
that could come about with more advanced telescopes: "Polarisation
measurements with the next generation of telescopes, such as ESO's
European Extremely Large Telescope, could play a crucial role in testing
QED predictions of vacuum birefringence effects around many more
neutron stars."
"This measurement, made for the first time now in visible light, also
paves the way to similar measurements to be carried out at X-ray
wavelengths," adds Kinwah Wu (UCL/MSSL, UK).
This research was presented in the paper entitled "Evidence for
vacuum birefringence from the first optical polarimetry measurement of
the isolated neutron star RX J1856.5−3754", by R. Mignani et al., to
appear in Monthly Notices of the Royal Astronomical Society.
The Crab Nebula seen in the optical by the Hubble Space
Telescope. The Crab is an example of a pulsar wind nebula. Astronomers
have modeled the detailed shape of another pulsar wind nebula to
conclude, among other things, that the pulsar’s spin axis is pointed
almost directly towards us. Credit: NASA/ Hubble Space Telescope
Neutron stars are the detritus of supernova
explosions, with masses between one and several suns and diameters only
tens of kilometers across. A pulsar is a spinning neutron star with a
strong magnetic field; charged particles in the field radiate in a
lighthouse-like beam that can sweep past the Earth with extreme
regularity every few seconds or less. A pulsar also has a wind, and
charged particles, sometimes accelerated to near the speed of light,
form a nebula around the pulsar: a pulsar wind nebula. The particles'
high energies make them strong X-ray emitters, and the nebulae can be
seen and studied with X-ray observatories. The most famous example of a
pulsar wind nebula is the beautiful and dramatic Crab Nebula.
When a pulsar
moves through the interstellar medium, the nebula can develop a
bow-shaped shock. Most of the wind particles are confined to a direction
opposite to that of the pulsar's motion and form a tail of nebulosity.
Recent X-ray and radio observations of fast-moving pulsars confirm the
existence of the bright, extended tails as well as compact nebulosity
near the pulsars. The length of an X-ray tail can significantly exceed
the size of the compact nebula, extending several light-years or more
behind the pulsar.
CfA astronomer Patrick Slane was a member of a team that used the
Chandra X-ray Observatory to study the nebula around the pulsar PSR
B0355+54, located about 3400 light-years away. The pulsar's observed
movement over the sky (its proper motion) corresponds to a speed of
about 60 kilometers per second. Earlier observations by Chandra had determined
that the pulsar's nebula had a long tail, extending over at least seven
light-years (it might be somewhat longer, but the field of the detector
was limited to this size); it also has a bright compact core. The
scientists used deep Chandra observations to examine the nebula's faint
emission structures, and found that the shape of the nebula, when
compared to the direction of the pulsar's motion through the medium,
suggests that the spin axis of the pulsar is pointed nearly directly
towards us. They also estimate many of the basic parameters of the
nebula including the strength of its magnetic field, which is lower than
expected (or else turbulence is re-accelerating the particles
and modifying the field). Other conclusions include properties of the
compact core and details of the physical mechanisms powering the X-ray
and radio radiation.
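The figures quoted above also give a rough timescale for the tail: at about 60 kilometers per second, traversing roughly seven light-years takes the pulsar tens of thousands of years. A quick sketch of that estimate:

```python
# How long did the pulsar take to traverse its own X-ray tail?
# Sketch using the figures quoted above: ~7 light-years of tail,
# a proper-motion speed of ~60 km/s.
LIGHT_YEAR_M = 9.461e15   # metres in one light-year
YEAR_S = 3.156e7          # seconds in one year

tail_length_m = 7 * LIGHT_YEAR_M
speed_m_s = 60e3

traversal_years = tail_length_m / speed_m_s / YEAR_S
print(f"~{traversal_years:,.0f} years")  # a few tens of thousands of years
```

That span is brief by astronomical standards, consistent with the tail being continuously replenished by the pulsar wind.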
Dazzling eyelid-like features bursting with stars in galaxy IC
2163 formed from a tsunami of stars and gas triggered by a glancing
collision with galaxy NGC 2207 (a portion of its spiral arm is shown on
right side of image). ALMA image of carbon monoxide (orange), which
revealed motion of the gas in these features, is shown on top of Hubble
image (blue) of the galaxy. Credit: M. Kaufman; B. Saxton
(NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble Space Telescope
Astronomers using the Atacama Large
Millimeter/submillimeter Array (ALMA) have discovered a tsunami of stars
and gas that is crashing midway through the disk of a spiral galaxy
known as IC 2163. This colossal wave of material - which was triggered
when IC 2163 recently sideswiped another spiral galaxy dubbed NGC 2207 -
produced dazzling arcs of intense star formation that resemble a pair
of eyelids.
"Although galaxy collisions
of this type are not uncommon, only a few galaxies with eye-like, or
ocular, structures are known to exist," said Michele Kaufman, an
astronomer formerly with The Ohio State University in Columbus and lead
author on a paper published today in the Astrophysical Journal.
Kaufman and her colleagues note that the paucity of similar features
in the observable universe is likely due to their ephemeral nature.
"Galactic eyelids last only a few tens of millions of years, which is
incredibly brief in the lifespan of a galaxy. Finding one in such a
newly formed state gives us an exceptional opportunity to study what
happens when one galaxy grazes another," said Kaufman.
The interacting pair of galaxies resides approximately 114 million
light-years from Earth in the direction of the constellation Canis
Major. These galaxies brushed past each other - scraping the edges of
their outer spiral arms - in what is likely the first encounter of an
eventual merger.
Using ALMA's remarkable sensitivity and resolution, the astronomers made the most detailed measurements ever of the motion of carbon monoxide gas in the galaxy's narrow eyelid features. Carbon monoxide is a tracer of molecular gas, which is the fuel for star formation.
Annotated image showing dazzling eyelid-like features bursting
with stars in galaxy IC 2163 formed from a tsunami of stars and gas
triggered by a glancing collision with galaxy NGC 2207 (a portion of its
spiral arm is shown on right side of image). ALMA image of carbon
monoxide (orange), which revealed motion of the gas in these features,
is shown on top of Hubble image (blue) of the galaxy. Credit: M.
Kaufman; B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble
Space Telescope
The data reveal that the gas in the outer
portion of IC 2163's eyelids is racing inward at speeds in excess of 100
kilometers a second. This gas, however, quickly decelerates and its
motion becomes more chaotic, eventually changing trajectory and aligning
itself with the rotation of the galaxy rather than continuing its
pell-mell rush toward the center.
"What we observe in this galaxy is very much like a massive ocean
wave barreling toward shore until it interacts with the shallows,
causing it to lose momentum and dump all of its water and sand on the
beach," said Bruce Elmegreen, a scientist with IBM's T.J. Watson
Research Center in Yorktown Heights, New York, and co-author on the
paper.
"Not only do we find a rapid deceleration of the gas as it moves from
the outer to the inner edge of the eyelids, but we also measure that
the more rapidly it decelerates, the denser the molecular gas becomes,"
said Kaufman. "This direct measurement of compression shows how the
encounter between the two galaxies drives gas to pile up, spawn new star clusters and form these dazzling eyelid features."
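The measured link between deceleration and density can be pictured with a simple one-dimensional mass-conservation argument: in a steady flow, the mass flux ρv stays constant, so gas that slows down must pile up. This is only an illustrative toy sketch with made-up round numbers, not the authors' analysis:

```python
# Illustrative 1-D steady-flow continuity sketch (rho * v = const),
# not the paper's actual model. All values are hypothetical round numbers.
def compressed_density(rho_in, v_in, v_out):
    """Density after deceleration, assuming the mass flux rho*v is conserved."""
    return rho_in * v_in / v_out

# Gas entering the eyelid at ~100 km/s and slowing to ~20 km/s would be
# compressed by a factor of 5 in this toy picture.
factor = compressed_density(1.0, 100.0, 20.0)
print(factor)  # 5.0
```

In reality the flow is neither steady nor one-dimensional, but the sketch captures why a sharper deceleration goes hand in hand with denser molecular gas.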
Computer models predict that such eyelid-like features could evolve
if galaxies interacted in a very specific manner. "This evidence for a
strong shock in the eyelids is terrific. It's all very well to have a
theory and simulations suggesting it should be true, but real
observational evidence is great," said Curtis Struck, a professor of
astrophysics at Iowa State University in Ames and co-author on the
paper.
Galaxies IC 2163 (left) and NGC 2207 (right) recently grazed
past each other, triggering a tsunami of stars and gas in IC 2163 and
producing the dazzling eyelid-like features there. ALMA image of carbon
monoxide (orange), which revealed motion of the gas in these features,
is shown on top of Hubble image (blue) of the galaxy pair. Credit: M.
Kaufman; B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble
Space Telescope
"ALMA showed us that the velocities of the
molecular gas in the eyelids are on the right track with the predictions
we get from computer models," said Kaufman. "This critical test of
encounter simulations was not possible before."
Astronomers believe that such collisions between galaxies were common
in the early universe when galaxies were closer together. At that time,
however, galactic disks were generally clumpy and irregular, so other
processes likely overwhelmed the formation of similar eyelid features.
The authors continue to study this galaxy pair
and currently are comparing the properties (e.g., locations, ages, and
masses) of the star clusters previously observed with NASA's Hubble
Space Telescope with the properties of the molecular clouds observed
with ALMA. They hope to better understand the differences between
molecular clouds and star clusters in the eyelids and those elsewhere in the galaxy pair.
When the Wright brothers accomplished their first powered flight more
than a century ago, they controlled the motion of their Flyer 1
aircraft using wires and pulleys that bent and twisted the
wood-and-canvas wings. This system was quite different from the
separate, hinged flaps and ailerons that have performed those functions
on most aircraft ever since. But now, thanks to some high-tech wizardry
developed by engineers at MIT and NASA, some aircraft may be returning
to their roots, with a new kind of bendable, "morphing" wing.
The
new wing architecture, which could greatly simplify the manufacturing
process and reduce fuel consumption by improving the wing's
aerodynamics, as well as improving its agility, is based on a system of
tiny, lightweight subunits that could be assembled by a team of small
specialized robots, and ultimately could be used to build the entire
airframe. The wing would be covered by a "skin" made of overlapping
pieces that might resemble scales or feathers.
The new concept is described in the journal Soft Robotics, in a paper
by Neil Gershenfeld, director of MIT's Center for Bits and Atoms (CBA);
Benjamin Jenett, a CBA graduate student; Kenneth Cheung PhD '12, a CBA
alumnus and NASA research scientist; and four others.
Researchers have been trying for many years to achieve a reliable way
of deforming wings as a substitute for the conventional, separate,
moving surfaces, but all those efforts "have had little practical
impact," Gershenfeld says. The biggest problem was that most of these
attempts relied on deforming the wing through the use of mechanical
control structures within the wing, but these structures tended to be so
heavy that they canceled out any efficiency advantages produced by the
smoother aerodynamic surfaces. They also added complexity and
reliability issues.
By contrast, Gershenfeld says, "We make the whole wing the mechanism.
It's not something we put into the wing." In the team's new approach,
the whole shape of the wing can be changed, and twisted uniformly along
its length, by activating two small motors that apply a twisting
pressure to each wingtip.
This approach to the manufacture of aircraft, and potentially other
technologies, is such a new idea that "I think we can say it is a
philosophical revolution, opening the gate to disruptive innovation,"
says Vincent Loubiere, a lead technologist for emerging technologies and
concepts at Airbus, who was not directly involved in this research. He
adds that "the perspectives and fields this approach opens are
thrilling."
In the team’s new approach, the whole shape of the wing can
be changed, and twisted uniformly along its length, by activating two
small motors that apply a twisting pressure to each wingtip. Credit:
Kenneth Cheung/NASA
Like building with blocks
The basic principle behind the new concept is the use of an array of
tiny, lightweight structural pieces, which Gershenfeld calls "digital
materials," that can be assembled into a virtually infinite variety of
shapes, much like assembling a structure from Lego blocks. The assembly,
performed by hand for this initial experiment, could be done by simple
miniature robots that would crawl along or inside the structure as it
took shape. The team has already developed prototypes of such robots.
The individual pieces are strong and stiff, but the exact choice of
the dimensions and materials used for the pieces, and the geometry of
how they are assembled, allow for a precise tuning of the flexibility of
the final shape. For the initial test structure, the goal was to allow
the wing to twist in a precise way that would substitute for the motion
of separate structural pieces (such as the small ailerons at the
trailing edges of conventional wings), while providing a single, smooth
aerodynamic surface.
Building up a large and complex structure from an array of small,
identical building blocks, which have an exceptional combination of
strength, light weight, and flexibility, greatly simplifies the manufacturing process,
Gershenfeld explains. While the construction of light composite wings
for today's aircraft requires large, specialized equipment for layering
and hardening the material, the new modular structures could be rapidly
manufactured in mass quantities and then assembled robotically in place.
Gershenfeld and his team have been pursuing this approach to building
complex structures for years, with many potential applications for
robotic devices of various kinds. For example, this method could lead to
robotic arms and legs whose shapes could bend continuously along their
entire length, rather than just having a fixed number of joints.
This research, says Cheung, "presents a general strategy for
increasing the performance of highly compliant - that is, 'soft' - robots
and mechanisms," by replacing conventional flexible materials with new
cellular materials "that are much lower weight, more tunable, and can be
made to dissipate energy at much lower rates" while having equivalent
stiffness.
Saving fuel, cutting emissions
While exploring possible applications of this nascent technology,
Gershenfeld and his team consulted with NASA engineers and others
seeking ways to improve the efficiency of aircraft manufacturing and
flight. They learned that "the idea that you could continuously deform a
wing shape to do pure lift and roll has been a holy grail in the field,
for both efficiency and agility," he says. Given the importance of fuel
costs in both the economics of the airline industry and that sector's
contribution to greenhouse gas emissions, even small improvements in
fuel efficiency could have a significant impact.
Wind-tunnel tests of this structure showed that it at least matches
the aerodynamic properties of a conventional wing, at about one-tenth
the weight.
The "skin" of the wing also enhances the structure's performance.
It's made from overlapping strips of flexible material, layered somewhat
like feathers or fish scales, allowing for the pieces to move across
each other as the wing flexes, while still providing a smooth outer
surface.
The modular structure also provides greater ease of both assembly and
disassembly: One of this system's big advantages, in principle,
Gershenfeld says, is that when it's no longer needed, the whole
structure can be taken apart into its component parts, which can then be
reassembled into something completely different. Similarly, repairs
could be made by simply replacing an area of damaged subunits.
"An inspection robot could just find where the broken part is and
replace it, and keep the aircraft 100 percent healthy at all times,"
says Jenett.
Following up on the successful wind tunnel tests, the team is now
extending the work to tests of a flyable unpiloted aircraft, and initial
tests have shown great promise, Jenett says. "The first tests were done
by a certified test pilot, and he found it so responsive that he
decided to do some aerobatics."
Some of the first uses of the technology may be to make small,
robotic aircraft—"super-efficient long-range drones," Gershenfeld says,
that could be used in developing countries as a way of delivering
medicines to remote areas.
"Ultralight, tunable, aeroelastic structures and flight controls open
up whole new frontiers for flight," says Gonzalo Rey, chief technology
officer for Moog Inc., a precision aircraft motion-controls company, who
was not directly involved in this work, though he has collaborated with
the team. "Digital materials and fabrication are a fundamentally new
way to make things and enable the conventionally impossible. The digital
morphing wing article demonstrates the ability to resolve in depth the engineering challenges necessary to apply the concept."
Rey adds that "The broader potential in this concept extends directly
to skyscrapers, bridges, and space structures, providing not only
improved performance and survivability but also a more sustainable
approach by achieving the same strength while using, and reusing,
substantially less raw material."
And Loubiere, from Airbus, suggests that many other technologies
could also benefit from this method, including wind turbines: "Simply
enabling the assembly of the windmill blades on the spot, instead of
using complex and fuel-consuming transport, would enhance greatly the
cost and overall performance," he says.
In the search for the mysterious dark
matter, physicists have used elaborate computer calculations to come up
with an outline of the particles of this unknown form of matter. To do
this, the scientists extended the successful Standard Model of particle
physics which allowed them, among other things, to predict the mass of
so-called axions, promising candidates for dark matter. The
German-Hungarian team of researchers led by Professor Zoltán Fodor of
the University of Wuppertal, Eötvös University in Budapest and
Forschungszentrum Jülich carried out its calculations on Jülich's
supercomputer JUQUEEN (BlueGene/Q) and presents its results in the
journal Nature.
"Dark matter
is an invisible form of matter which until now has only revealed itself
through its gravitational effects. What it consists of remains a
complete mystery," explains co-author Dr Andreas Ringwald, who is based
at DESY and who proposed the current research. Evidence for the
existence of this form of matter comes, among other things, from the
astrophysical observation of galaxies, which rotate far too rapidly to
be held together only by the gravitational pull of the visible matter.
High-precision measurements using the European satellite "Planck" show
that almost 85 percent of the entire mass of the universe consists of
dark matter. All the stars, planets, nebulae and other objects in space
that are made of conventional matter account for no more than 15 percent
of the mass of the universe.
"The adjective 'dark' does not simply mean that it does not emit
visible light," says Ringwald. "It does not appear to give off any other
wavelengths either - its interaction with photons must be very weak
indeed." For decades, physicists have been searching for particles of
this new type of matter. What is clear is that these particles must lie
beyond the Standard Model of particle physics, and while that model is
extremely successful, it currently only describes the conventional 15
percent of all matter in the cosmos. From theoretically possible
extensions to the Standard Model physicists not only expect a deeper
understanding of the universe, but also concrete clues in what energy
range it is particularly worthwhile looking for dark-matter candidates.
The unknown form of matter can either consist of comparatively few,
but very heavy particles, or of a large number of light ones. The direct
searches for heavy dark-matter candidates using large detectors in
underground laboratories and the indirect search for them using large
particle accelerators are still going on, but have not turned up any dark matter particles
so far. A range of physical considerations make extremely light
particles, dubbed axions, very promising candidates. Using clever
experimental setups, it might even be possible to detect direct evidence
of them. "However, to find this kind of evidence it would be extremely
helpful to know what kind of mass we are looking for," emphasises
theoretical physicist Ringwald. "Otherwise the search could take
decades, because one would have to scan far too large a range."
The existence of axions is predicted by an extension to quantum chromodynamics (QCD), the quantum theory that governs the strong interaction,
responsible for the nuclear force. The strong interaction is one of the
four fundamental forces of nature alongside gravitation,
electromagnetism and the weak nuclear force, which is responsible for
radioactivity. "Theoretical considerations indicate that there are
so-called topological quantum fluctuations in quantum chromodynamics,
which ought to result in an observable violation of time reversal
symmetry," explains Ringwald. This means that certain processes should
differ depending on whether they are running forwards or backwards.
However, no experiment has so far managed to demonstrate this effect.
The extension to quantum chromodynamics (QCD) restores the invariance
of time reversals, but at the same time it predicts the existence of a
very weakly interacting particle, the axion, whose properties, in
particular its mass, depend on the strength of the topological quantum
fluctuations. However, it takes modern supercomputers like Jülich's
JUQUEEN to calculate the latter in the temperature range that is
relevant in predicting the relative contribution of axions to the matter
making up the universe. "On top of this, we had to develop new methods
of analysis in order to achieve the required temperature range," notes
Fodor who led the research.
The results show, among other things, that if axions do make up the
bulk of dark matter, they should have a mass of 50 to 1500
micro-electronvolts, expressed in the customary units of particle physics,
and thus be up to ten billion times lighter than electrons. This would
require every cubic centimetre of the universe to contain on average ten
million such ultra-lightweight particles. Dark matter is not spread out
evenly in the universe, however, but forms clumps and branches of a
weblike network. Because of this, our local region of the Milky Way
should contain about one trillion axions per cubic centimetre.
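The quoted figures are easy to sanity-check. An electron's rest mass is about 511 keV, so an axion at the lower end of the predicted range (50 micro-eV) is indeed roughly ten billion times lighter. A quick arithmetic check (illustrative, not taken from the paper's code):

```python
# Sanity check of the quoted numbers; constants are standard values,
# not outputs of the lattice calculation itself.
m_electron_eV = 511e3      # electron rest mass, ~511 keV
m_axion_low_eV = 50e-6     # lower end of the predicted axion mass range
m_axion_high_eV = 1500e-6  # upper end of the predicted range

ratio_max = m_electron_eV / m_axion_low_eV
print(f"{ratio_max:.2e}")  # 1.02e+10 -> "up to ten billion times lighter"

# Local axion number density in our region of the Milky Way vs the cosmic mean:
local_per_cc = 1e12  # "about one trillion axions per cubic centimetre"
mean_per_cc = 1e7    # "ten million such ultra-lightweight particles"
print(local_per_cc / mean_per_cc)  # 100000.0 -> dark matter clumps strongly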
Thanks to the Jรผlich supercomputer, the calculations now provide
physicists with a concrete range in which their search for axions is
likely to be most promising. "The results we are presenting will
probably lead to a race to discover these particles," says Fodor. Their
discovery would not only solve the problem of dark matter
in the universe, but at the same time answer the question why the
strong interaction is so surprisingly symmetrical with respect to time
reversal. The scientists expect that it will be possible within the next
few years to either confirm or rule out the existence of axions
experimentally.
The Institute for Nuclear Research of the Hungarian Academy of
Sciences in Debrecen, the Lendület Lattice Gauge Theory Research Group
at the Eötvös University, the University of Zaragoza in Spain, and the
Max Planck Institute for Physics in Munich were also involved in the
research.
NASA’s Mars Reconnaissance Orbiter High Resolution Imaging
Science Experiment (HiRISE) imaged the ExoMars Schiaparelli module’s
landing site on 25 October 2016, following the module’s arrival at Mars
on 19 October.
A
high-resolution image taken by a NASA Mars orbiter this week reveals
further details of the area where the ExoMars Schiaparelli module ended
up following its descent on 19 October.
The latest
image was taken on 25 October by the high-resolution camera on NASA's
Mars Reconnaissance Orbiter and provides close-ups of new markings on
the planet's surface first found by the spacecraft's 'context camera'
last week.
Both cameras had already been scheduled to observe the centre of the
landing ellipse after the coordinates had been updated following the
separation of Schiaparelli from ESA's Trace Gas Orbiter on 16 October.
The separation manoeuvre, hypersonic atmospheric entry and parachute
phases of Schiaparelli's descent went according to plan, and the module
ended up within the main camera's footprint despite problems in the
final phase.
The new images provide a more detailed look at the major components of the Schiaparelli hardware used in the descent sequence.
The main feature of the context images was a dark fuzzy patch of
roughly 15 x 40 m, associated with the impact of Schiaparelli itself.
The high-resolution images show a central dark spot, 2.4 m across,
consistent with the crater made by a 300 kg object impacting at a few
hundred km/h.
The crater is predicted to be about 50 cm deep and more detail may be visible in future images.
The asymmetric surrounding dark markings are more difficult to
interpret. In the case of a meteoroid hitting the surface at 40 000–80
000 km/h, asymmetric debris surrounding a crater would typically point
to a low incoming angle, with debris thrown out in the direction of
travel.
But Schiaparelli was travelling considerably slower and, according to
the normal timeline, should have been descending almost vertically
after slowing down during its entry into the atmosphere from the west.
It is possible the hydrazine propellant tanks in the module exploded
preferentially in one direction upon impact, throwing debris from the
planet's surface in the direction of the blast, but more analysis is
needed to explore this idea further.
The landing site of the Schiaparelli module within the predicted
landing ellipse in a mosaic of images from the Context Camera (CTX) on
NASA's Mars Reconnaissance Orbiter and the Thermal Emission Imaging
System (THEMIS) on NASA's 2001 Mars Odyssey orbiter.
An
additional long dark arc is seen to the upper right of the dark patch
but is currently unexplained. It may also be linked to the impact and
possible explosion.
Finally, there are a few white dots in the image close to the impact
site, too small to be properly resolved in this image. These may or may
not be related to the impact – they could just be 'noise'. Further
imaging may help identify their origin.
Some 1.4 km south of Schiaparelli, a white feature seen in last
week's context image is now revealed in more detail. It is confirmed to
be the 12 m-diameter parachute used during the second stage of
Schiaparelli's descent, after the initial heatshield entry into the
atmosphere. Still attached to it, as expected, is the rear heatshield,
now clearly seen.
The parachute and rear heatshield were ejected from Schiaparelli
earlier than anticipated. Schiaparelli is thought to have fired its
thrusters for only a few seconds before falling to the ground from an
altitude of 2–4 km and reaching the surface at more than 300 km/h.
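The reported impact speed is consistent with a back-of-envelope estimate: free fall under Mars gravity (about 3.71 m/s²) from the quoted 2-4 km altitude, ignoring atmospheric drag. This is only an illustrative check, not ESA's reconstruction:

```python
import math

G_MARS = 3.71  # m/s^2, Mars surface gravity

def impact_speed_kmh(height_m):
    """Speed after drag-free fall from height_m: v = sqrt(2*g*h), in km/h."""
    return math.sqrt(2 * G_MARS * height_m) * 3.6

print(round(impact_speed_kmh(2000)))  # 439 km/h from 2 km
print(round(impact_speed_kmh(4000)))  # 620 km/h from 4 km
```

Both figures exceed the reported "more than 300 km/h"; the thin Martian atmosphere would slow the real fall somewhat, so the numbers are mutually consistent.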
In addition to the Schiaparelli impact site and the parachute, a
third feature has been confirmed as the front heatshield, which was
ejected about four minutes into the six-minute descent, as planned.
The ExoMars and MRO teams identified a dark spot in last week's image,
about 1.4 km east of the impact site, and this seemed to be a plausible
location for the front heatshield considering the timing and direction
of travel following the module's entry.
The mottled bright and dark appearance of this feature is interpreted
as reflections from the multilayered thermal insulation that covers the
inside of the front heatshield. Further imaging from different angles
should be able to confirm this interpretation.
The dark features around the front heatshield are likely from surface dust disturbed during impact.
Additional imaging by MRO is planned in the coming weeks. Based on
the current data and observations made after 19 October, this will
include images taken under different viewing and lighting conditions,
which in turn will use shadows to help determine the local heights of
the features and therefore a more conclusive analysis of what the
features are.
A pair of before-and-after images taken by the Context Camera
(CTX) on NASA's Mars Reconnaissance Orbiter on 29 May 2016 and 20
October 2016 show two new features appearing following the arrival of
the Schiaparelli test lander module on 19 October 2016.
A
full investigation is now underway involving ESA and industry to
identify the cause of the problems encountered by Schiaparelli in its
final phase. The investigation started as soon as detailed telemetry
transmitted by Schiaparelli during its descent had been relayed back to
Earth by the Trace Gas Orbiter.
The full set of telemetry has to be processed, correlated and
analysed in detail to provide a conclusive picture of Schiaparelli's
descent and the causes of the anomaly.
Until this full analysis has been completed, there is a danger of
reaching overly simple or even wrong conclusions. For example, the team
were initially surprised to see a longer-than-expected 'gap' of two
minutes in the telemetry during the peak heating of the module as it
entered the atmosphere: this was expected to last up to only one minute.
However, further processing has since allowed the team to retrieve half
of the 'missing' data, ruling out any problems with this part of the
sequence.
The latter stages of the descent sequence, from the jettisoning of
the rear shield and parachute, to the activation and early shut-off of
the thrusters, are still being explored in detail. A report of the
findings of the investigative team is expected no later than
mid-November 2016.
The same telemetry is also an extremely valuable output of the
Schiaparelli entry, descent and landing demonstration, as was the main
purpose of this element of the ExoMars 2016 mission. Measurements were
made on both the front and rear shields during entry, the first time
that such data have been acquired from the back heatshield of a vehicle
entering the martian atmosphere.
The team can also point to successes in the targeting of the module
at its separation from the orbiter, the hypersonic atmospheric entry
phase, and the parachute deployment at supersonic speeds, and the
subsequent slowing of the module.
These and other data will be invaluable input into future lander
missions, including the joint European–Russian ExoMars 2020 rover and
surface platform.
Finally, the orbiter is working well and being prepared to make its
first set of measurements on 20 November to calibrate its science
instruments.
Five years ago, the Nobel Prize in Physics
was awarded to three astronomers for their discovery, in the late 1990s,
that the universe is expanding at an accelerating pace.
Their
conclusions were based on analysis of Type Ia supernovae - the
spectacular thermonuclear explosion of dying stars - picked up by the
Hubble space telescope and large ground-based telescopes. It led to the
widespread acceptance of the idea that the universe is dominated by a
mysterious substance named 'dark energy' that drives this accelerating expansion.
Now, a team of scientists led by Professor Subir Sarkar of Oxford
University's Department of Physics has cast doubt on this standard
cosmological concept. Making use of a vastly increased data set - a
catalogue of 740 Type Ia supernovae, more than ten times the original
sample size - the researchers have found that the evidence for
acceleration may be flimsier than previously thought, with the data
being consistent with a constant rate of expansion.
The study is published in the Nature journal Scientific Reports.
Professor Sarkar, who also holds a position at the Niels Bohr
Institute in Copenhagen, said: 'The discovery of the accelerating
expansion of the universe won the Nobel Prize, the Gruber Cosmology
Prize, and the Breakthrough Prize in Fundamental Physics. It led to the
widespread acceptance of the idea that the universe is dominated by
"dark energy" that behaves like a cosmological constant - this is now
the "standard model" of cosmology.
'However, there now exists a much bigger database of supernovae on
which to perform rigorous and detailed statistical analyses. We analysed
the latest catalogue of 740 Type Ia supernovae - over ten times bigger
than the original samples on which the discovery claim was based - and
found that the evidence for accelerated expansion is, at most, what
physicists call "3 sigma". This is far short of the "5 sigma" standard
required to claim a discovery of fundamental significance.
'An analogous example in this context would be the recent suggestion
for a new particle weighing 750 GeV based on data from the Large Hadron
Collider at CERN. It initially had even higher significance - 3.9 and
3.4 sigma in December last year - and stimulated over 500 theoretical
papers. However, it was announced in August that new data show that the
significance has dropped to less than 1 sigma. It was just a statistical
fluctuation, and there is no such particle.'
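The "3 sigma" versus "5 sigma" convention translates into tail probabilities of a Gaussian distribution. A short illustration of the convention itself (not a calculation from the supernova data):

```python
import math

def p_value(sigma):
    """One-sided Gaussian tail probability for a given sigma level."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"3 sigma: p = {p_value(3):.2e}")  # 1.35e-03, about 1 chance in 740
print(f"5 sigma: p = {p_value(5):.2e}")  # 2.87e-07, about 1 in 3.5 million
```

In other words, a 3-sigma result would arise by chance roughly once in 740 trials, while the 5-sigma discovery standard demands odds of about one in 3.5 million.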
There is other data available that appears to support the idea of an
accelerating universe, such as information on the cosmic microwave background - the faint afterglow of the Big Bang - from the Planck
satellite. However, Professor Sarkar said: 'All of these tests are
indirect, carried out in the framework of an assumed model, and the cosmic microwave background
is not directly affected by dark energy. Actually, there is indeed a
subtle effect, the late-time integrated Sachs-Wolfe effect, but this has not
been convincingly detected.
'So it is quite possible that we are being misled and that the
apparent manifestation of dark energy is a consequence of analysing the
data in an oversimplified theoretical model - one that was in fact
constructed in the 1930s, long before there was any real data. A more
sophisticated theoretical framework accounting for the observation that
the universe is not exactly homogeneous and that its matter content may
not behave as an ideal gas - two key assumptions of standard cosmology -
may well be able to account for all observations without requiring dark
energy. Indeed, vacuum energy is something of which we have absolutely
no understanding in fundamental theory.'
Professor Sarkar added: 'Naturally, a lot of work will be necessary
to convince the physics community of this, but our work serves to
demonstrate that a key pillar of the standard cosmological model is
rather shaky. Hopefully this will motivate better analyses of
cosmological data, as well as inspiring theorists to investigate more
nuanced cosmological models. Significant progress will be made when the
European Extremely Large Telescope makes observations with an
ultrasensitive "laser comb" to directly measure over a ten to 15-year
period whether the expansion rate is indeed accelerating.'