Nokia sues Apple for patent infringement



Nokia announced Wednesday it is suing Apple in German and US courts for patent infringement, claiming the US tech giant was using Nokia technology in "many" products without paying for it.
Finnish Nokia, once the world's top mobile phone maker, said the two companies had signed a licensing agreement in 2011, and since then "Apple has declined subsequent offers made by Nokia to license other of its patented inventions which are used by many of Apple's products."
"After several years of negotiations trying to reach agreement to cover Apple's use of these patents, we are now taking action to defend our rights," Ilkka Rahnasto, head of Nokia's patent business, said in a statement.
The complaints, filed in three German cities and a district court in Texas, concern 32 patents for innovations related to displays, user interface, software, antennae, chipsets and video coding. Nokia said it was preparing further legal action elsewhere.
Nokia was the world's leading mobile phone maker from 1998 until 2011 when it bet on Microsoft's Windows mobile platform, which proved to be a flop. Analysts say the company failed to grasp the growing importance of smartphone apps compared to hardware.
It sold its unprofitable handset unit in 2014 for some $7.2 billion to Microsoft, which dropped the Nokia name from its Lumia smartphone handsets.
Meanwhile Nokia has concentrated on developing its mobile network equipment business by acquiring its French-American rival Alcatel-Lucent.
Including its 2013 full acquisition of joint venture Nokia Siemens Networks, Nokia said the three companies united represent more than 115 billion euros of R&D investment, with a massive portfolio of tens of thousands of patents.
The 2011 licensing deal followed years of clashes with Apple, which has also sparred with main rival Samsung over patent claims.
At the time, Apple cut the deal to settle 46 separate complaints Nokia had lodged against it for violation of intellectual property.
Second-generation stars identified, giving clues about their predecessors



The figure shows a sub-population of ancient stars, called Carbon-Enhanced Metal-Poor (CEMP) stars. These stars contain 100 to 1,000,000 times LESS iron (and other heavy elements) than the Sun, but 10 to 10,000 times MORE carbon, relative to iron. The unusual chemical compositions of these stars provide clues to their birth environments, and the nature of the stars in which the carbon formed. In the figure, A(C) is the absolute amount of carbon, while the horizontal axis represents the ratio of iron, relative to hydrogen, compared with the same ratio in the Sun. Credit: University of Notre Dame
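For readers unfamiliar with this notation, the quantities in the figure follow the standard stellar-abundance conventions (these definitions are standard astronomy usage, not spelled out in the article):

$$[\mathrm{Fe/H}] = \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!*} - \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot}, \qquad A(\mathrm{C}) = \log_{10}\!\left(\frac{N_{\mathrm{C}}}{N_{\mathrm{H}}}\right) + 12 .$$

On this logarithmic scale, "100 to 1,000,000 times less iron than the Sun" corresponds to [Fe/H] of roughly -2 to -6, and "10 to 10,000 times more carbon relative to iron" corresponds to [C/Fe] of roughly +1 to +4.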
University of Notre Dame astronomers have identified what they believe to be the second generation of stars, shedding light on the nature of the universe's first stars.
A subclass of carbon-enhanced metal-poor (CEMP) stars, the so-called CEMP-no stars, are ancient stars that have large amounts of carbon but little of the metals (such as iron) common to later-generation stars. Massive first-generation stars made up of pure hydrogen and helium produced metals and ejected them by stellar winds during their lifetimes or when they exploded as supernovae. Those metals—anything heavier than helium, in astronomical parlance—polluted the nearby gas clouds from which new stars formed.
Jinmi Yoon, a postdoctoral research associate in the Department of Physics; Timothy Beers, the Notre Dame Chair in Astrophysics; and Vinicius Placco, a research professor at Notre Dame, along with their collaborators, show in findings published in the Astrophysical Journal this week that the lowest metallicity stars, the most chemically primitive, include large fractions of CEMP stars. The CEMP-no stars, which are also rich in nitrogen and oxygen, are likely the stars born out of hydrogen and helium gas clouds that were polluted by the elements produced by the universe's first stars.
"The CEMP-no stars we see today, at least many of them, were born shortly after the Big Bang, 13.5 billion years ago, out of almost completely unpolluted material," Yoon says. "These stars, located in the halo system of our galaxy, are true second-generation stars—born out of the nucleosynthesis products of the very first stars."
Beers says it's unlikely that any of the universe's first stars still exist, but much can be learned about them from detailed studies of the next generation of stars.
"We're analyzing the chemical products of the very first stars by looking at what was locked up by the second-generation stars," Beers says. "We can use this information to tell the story of how the first elements were formed, and determine the distribution of the masses of those first stars. If we know how their masses were distributed, we can model the process of how the first stars formed and evolved from the very beginning."
The authors used high-resolution spectroscopic data gathered by many astronomers to measure the chemical compositions of about 300 stars in the halo of the Milky Way. More and heavier elements form as later generations of stars continue to contribute additional metals, they say. As new generations of stars are born, they incorporate the metals produced by prior generations. Hence, the more heavy metals a star contains, the more recently it was born. Our sun, for example, is relatively young, with an age of only 4.5 billion years.
A companion paper, titled "Observational constraints on first-star nucleosynthesis. II. Spectroscopy of an ultra metal-poor CEMP-no star," of which Placco was the lead author, was also published in the same issue of the journal this week. The paper compares theoretical predictions for the chemical composition of zero-metallicity supernova models with a newly discovered CEMP-no star in the Milky Way galaxy.

Credit: Brian Wallheimer
A Swiss firm acquires Mars One private project



Mars One consists of two entities: the Dutch not-for-profit Mars One Foundation and a British public limited company Mars One Ventures
A British-Dutch project aiming to send an unmanned mission to Mars by 2018 announced Friday that the shareholders of a Swiss financial services company have agreed a takeover bid.
"The acquisition is now only pending approval by the board of Mars One Ventures," the company said in a joint statement with InFin Innovative Finance AG, adding approval from the Mars board would come "as soon as possible."
"The takeover provides a solid path to funding the next steps of Mars One's mission to establish a permanent human settlement on Mars," the statement added.
Mars One consists of two entities: the Dutch not-for-profit Mars One Foundation and a British public limited company Mars One Ventures.
Mars One aims to establish a permanent human settlement on the Red Planet, and is currently "in the early mission concept phase," the company says, adding securing funding is one of its major challenges.
Some 200,000 hopefuls from 140 countries initially signed up for the Mars One project, which is to be partly funded by a television reality show about the endeavour.
Those have now been whittled down to just 100, out of which 24 will be selected for one-way trips to Mars due to start in 2026 after several unmanned missions have been completed.
"Once this deal is completed, we'll be in a much stronger financial position as we begin the next phase of our mission. Very exciting times," said Mars One chief executive Bas Lansdorp.
NASA is currently working on three Mars missions with the European Space Agency and plans to send another rover to Mars in 2020.
But NASA has no plans for a manned mission to Mars until the 2030s.
First signs of weird quantum property of empty space?



First signs of weird quantum property of empty space?
This artist’s view shows how the light coming from the surface of a strongly magnetic neutron star (left) becomes linearly polarised as it travels through the vacuum of space close to the star on its way to the observer on Earth (right).
By studying the light emitted from an extraordinarily dense and strongly magnetized neutron star using ESO's Very Large Telescope, astronomers may have found the first observational indications of a strange quantum effect, first predicted in the 1930s. The polarization of the observed light suggests that the empty space around the neutron star is subject to a quantum effect known as vacuum birefringence.
A team led by Roberto Mignani from INAF Milan (Italy) and from the University of Zielona Gora (Poland) used ESO's Very Large Telescope (VLT) at the Paranal Observatory in Chile to observe the neutron star RX J1856.5-3754, about 400 light-years from Earth.
Despite being amongst the closest neutron stars, its extreme dimness meant the astronomers could only observe the star with visible light using the FORS2 instrument on the VLT, at the limits of current telescope technology.
Neutron stars are the very dense remnant cores of massive stars—at least 10 times more massive than our Sun—that have exploded as supernovae at the ends of their lives. They also have extreme magnetic fields, billions of times stronger than that of the Sun, that permeate their outer surface and surroundings.
These fields are so strong that they even affect the properties of the empty space around the star. Normally a vacuum is thought of as completely empty, and light can travel through it without being changed. But in quantum electrodynamics (QED), the quantum theory describing the interaction between photons and charged particles such as electrons, space is full of virtual particles that appear and vanish all the time. Very strong magnetic fields can modify this space so that it affects the polarisation of light passing through it.
Mignani explains: "According to QED, a highly magnetised vacuum behaves as a prism for the propagation of light, an effect known as vacuum birefringence."
Among the many predictions of QED, however, vacuum birefringence so far lacked a direct experimental demonstration. Attempts to detect it in the laboratory have not yet succeeded in the 80 years since it was predicted in a paper by Werner Heisenberg (of uncertainty principle fame) and Hans Heinrich Euler.
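For context, the effect Heisenberg and Euler predicted can be summarised by the weak-field refractive indices of a magnetised vacuum; the following is the commonly quoted form of that prediction, not a number from this study:

$$n_{\parallel} \simeq 1 + \tfrac{7}{2}\,\delta, \qquad n_{\perp} \simeq 1 + 2\,\delta, \qquad \delta = \frac{\alpha}{45\pi}\left(\frac{B\sin\theta}{B_{\mathrm{cr}}}\right)^{2},$$

where $\alpha$ is the fine-structure constant, $\theta$ is the angle between the photon direction and the magnetic field, and $B_{\mathrm{cr}} = m_{e}^{2}c^{3}/(e\hbar) \approx 4.4\times10^{13}$ gauss. Because the two polarisation states see slightly different refractive indices, the vacuum becomes birefringent; around a neutron star, where $B$ approaches $B_{\mathrm{cr}}$, the difference can become large enough to leave an imprint on the polarisation of the emerging light.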
First signs of weird quantum property of empty space?
This wide field image shows the sky around the very faint neutron star RX J1856.5-3754 in the southern constellation of Corona Australis. This part of the sky also contains interesting regions of dark and bright nebulosity.
"This effect can be detected only in the presence of enormously strong magnetic fields, such as those around neutron stars. This shows, once more, that neutron stars are invaluable laboratories in which to study the fundamental laws of nature." says Roberto Turolla (University of Padua, Italy).
After careful analysis of the VLT data, Mignani and his team detected linear polarisation—at a significant degree of around 16%—that they say is likely due to the boosting effect of vacuum birefringence occurring in the region of empty space surrounding RX J1856.5-3754.
Vincenzo Testa (INAF, Rome, Italy) comments: "This is the faintest object for which polarisation has ever been measured. It required one of the largest and most efficient telescopes in the world, the VLT, and accurate data analysis techniques to enhance the signal from such a faint star."
"The high linear polarisation that we measured with the VLT can't be easily explained by our models unless the vacuum birefringence effects predicted by QED are included," adds Mignani.
"This VLT study is the very first observational support for predictions of these kinds of QED effects arising in extremely strong magnetic fields," remarks Silvia Zane (UCL/MSSL, UK).
Mignani is excited about further improvements to this area of study that could come about with more advanced telescopes: "Polarisation measurements with the next generation of telescopes, such as ESO's European Extremely Large Telescope, could play a crucial role in testing QED predictions of vacuum birefringence effects around many more neutron stars."
"This measurement, made for the first time now in visible light, also paves the way to similar measurements to be carried out at X-ray wavelengths," adds Kinwah Wu (UCL/MSSL, UK).
This research was presented in the paper entitled "Evidence for vacuum birefringence from the first optical polarimetry measurement of the isolated neutron star RX J1856.5−3754", by R. Mignani et al., to appear in Monthly Notices of the Royal Astronomical Society.
Combining quantum physics and photosynthesis to make a discovery that could lead to highly efficient solar cells



Physics, photosynthesis and solar cells
In a light harvesting quantum photocell, particles of light (photons) can efficiently generate electrons. When two absorbing channels are used, solar power entering the system through the two absorbers (a and b) efficiently generates power.
A University of California, Riverside assistant professor has combined photosynthesis and physics to make a key discovery that could help make solar cells more efficient. The findings were recently published in the journal Nano Letters.
Nathan Gabor is focused on experimental condensed matter physics, and uses light to probe the fundamental laws of quantum mechanics. But, he got interested in photosynthesis when a question popped into his head in 2010: Why are plants green? He soon discovered that no one really knows.
During the past six years, he sought to help change that by combining his background in physics with a deep dive into biology.
He set out to re-think solar cell design by asking the question: can we make materials for solar cells that more efficiently absorb the fluctuating amount of energy from the sun? Plants have evolved to do this, but current affordable solar cells - which are at best 20 percent efficient - do not control these sudden changes in solar power, Gabor said. That results in a lot of wasted energy and helps prevent wide-scale adoption of solar cells as an energy source.
Gabor and several other UC Riverside physicists addressed the problem by designing a new type of quantum photocell, which helps manipulate the flow of energy in solar cells. The design incorporates a heat engine photocell that absorbs photons from the sun and converts the photon energy into electricity.
Surprisingly, the researchers found that the quantum heat engine photocell could regulate solar power conversion without requiring active feedback or adaptive control mechanisms. In conventional photovoltaic technology, which is used on rooftops and solar farms today, fluctuations in solar power must be suppressed by voltage converters and feedback controllers, which dramatically reduce the overall efficiency.
Physics, photosynthesis and solar cells
Nathan Gabor's Laboratory of Quantum Materials Optoelectronics utilizes infrared laser spectroscopy techniques to explore natural regulation in quantum photocells composed of two-dimensional semiconductors. Credit: Max Grossnickle and QMO Lab
The goal of the UC Riverside teams was to design the simplest photocell that matches the amount of solar power from the sun as close as possible to the average power demand and to suppress energy fluctuations to avoid the accumulation of excess energy.
The researchers compared the two simplest quantum mechanical photocell systems: one in which the photocell absorbed only a single color of light, and the other in which the photocell absorbed two colors. They found that by simply incorporating two photon-absorbing channels, rather than only one, the regulation of energy flow emerges naturally within the photocell.
The basic operating principle is that one channel absorbs at a wavelength for which the average input power is high, while the other absorbs at low power. The photocell switches between high and low power to convert varying levels of solar power into a steady-state output.
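As a rough illustration of that switching idea, here is a minimal toy model (plain classical bookkeeping, not the authors' quantum heat engine calculation; all numbers are invented): two noisy input channels, one centred above and one below a target output level, with the cell drawing from whichever channel is momentarily closer to the target.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 1.0  # desired steady output, arbitrary units

# Two fluctuating absorber channels: one centred above the target, one below.
high_channel = 1.3 + 0.3 * rng.standard_normal(100_000)
low_channel = 0.7 + 0.3 * rng.standard_normal(100_000)

# At each instant, draw from whichever channel is currently closer to the target.
use_high = np.abs(high_channel - target) < np.abs(low_channel - target)
output = np.where(use_high, high_channel, low_channel)

print("one channel,  mean deviation from target:", np.abs(high_channel - target).mean())
print("two channels, mean deviation from target:", np.abs(output - target).mean())
```

The two-channel output stays noticeably closer to the target without any external feedback loop, which is the qualitative behaviour the photocell design aims for.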
When Gabor's team applied these simple models to the measured solar spectrum on Earth's surface, they discovered that the absorption of green light, the most radiant portion of the spectrum per unit wavelength, provides no regulatory benefit and should therefore be avoided. They systematically optimized the photocell parameters to reduce solar energy fluctuations, and found that the absorption spectrum looks nearly identical to the absorption spectrum observed in photosynthetic green plants.
The findings led the researchers to propose that natural regulation of energy they found in the quantum heat engine photocell may play a critical role in the photosynthesis in plants, perhaps explaining the predominance of green plants on Earth.
Other researchers have recently found that several molecular structures in plants, including chlorophyll a and b molecules, could be critical in preventing the accumulation of excess energy in plants, which could kill them. The UC Riverside researchers found that the molecular structure of the quantum heat engine photocell they studied is very similar to the structure of photosynthetic molecules that incorporate pairs of chlorophyll.
The hypothesis set out by Gabor and his team is the first to connect quantum mechanical structure to the greenness of plants, and provides a clear set of tests for researchers aiming to verify natural regulation. Equally important, their design allows regulation without active input, a process made possible by the photocell's quantum mechanical structure.

The paper is called "Natural Regulation of Energy Flow in a Green Quantum Photocell."

Credit: Sean Nealon
Scientists find that solar cells can be made with tin instead of lead



Solar power could become cheaper and more widespread
Credit: University of Warwick
A breakthrough in solar power could make it cheaper and more commercially viable, thanks to research at the University of Warwick.
In a paper published in Nature Energy, Dr Ross Hatton, Professor Richard Walton and colleagues, explain how solar cells could be produced with tin, making them more adaptable and simpler to produce than their current counterparts.
Solar cells based on a class of semiconductors known as lead perovskites are rapidly emerging as an efficient way to convert sunlight directly into electricity. However, the reliance on lead is a serious barrier to commercialisation, due to the well-known toxicity of lead.
Dr Ross Hatton and colleagues show that perovskites using tin in place of lead are much more stable than previously thought, and so could prove to be a viable alternative to lead perovskites for solar cells.
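For background (this decomposition is implied by the paper title cited below rather than spelled out in the text): perovskites share the general formula ABX3, and the lead-free material studied here is the all-inorganic tin compound CsSnI3,

$$\mathrm{ABX_3}:\quad \mathrm{A = Cs^{+}},\;\; \mathrm{B = Sn^{2+}},\;\; \mathrm{X = I^{-}} \;\;\Rightarrow\;\; \mathrm{CsSnI_3},$$

i.e. the divalent tin ion sits on the lattice site occupied by Pb2+ in a conventional lead perovskite.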
Lead-free cells could make solar power cheaper, safer and more commercially attractive - leading to it becoming a more prevalent source of energy in everyday life.
This could lead to a more widespread use of solar power, with potential uses in products such as laptop computers, mobile phones and cars.
The team have also shown how the device structure can be greatly simplified without compromising performance, which offers the important advantage of reduced fabrication cost.
Dr Hatton comments that there is an ever-pressing need to develop renewable sources of energy:
"It is hoped that this work will help to stimulate an intensive international research effort into lead-free perovskite solar cells, like that which has resulted in the astonishingly rapid advancement of perovskite solar cells.
"There is now an urgent need to tackle the threat of climate change resulting from humanity's over reliance on fossil fuel, and the rapid development of new solar technologies must be part of the plan."
Perovskite solar cells are lightweight and compatible with flexible substrates, so could be applied more widely than the rigid flat-plate silicon panels that currently dominate the photovoltaics market, particularly in consumer electronics and transportation applications.
The paper, 'Enhanced Stability and Efficiency in Hole-Transport Layer Free CsSnI3 Perovskite Photovoltaics', is published in Nature Energy, and is authored by Dr Ross Hatton, Professor Richard Walton and PhD student Kenny Marshall in the Department of Chemistry, along with Dr Marc Walker in the Department of Physics.

2.5 billion-year-old fossils of bacteria that predate the formation of oxygen



Life before oxygen
A microscopic image of 2.5 billion-year-old sulfur-oxidizing bacterium. Credit: Andrew Czaja, UC assistant professor of geology
Scientists have demonstrated that, somewhere between Earth's creation and where we are today, some early life forms existed just fine without any oxygen.
While researchers regard the first half of our 4.5-billion-year-old planet's history as an important time for the development and evolution of early bacteria, evidence for these life forms remains sparse, including evidence of how they survived at a time when oxygen levels in the atmosphere were less than one-thousandth of one percent of what they are today.
Recent geology research from the University of Cincinnati presents new evidence for bacteria found fossilized in two separate locations in the Northern Cape Province of South Africa.
"These are the oldest reported fossil sulfur bacteria to date," says Andrew Czaja, UC assistant professor of geology. "And this discovery is helping us reveal a diversity of life and ecosystems that existed just prior to the Great Oxidation Event, a time of major atmospheric evolution."
The 2.52 billion-year-old sulfur-oxidizing bacteria are described by Czaja as exceptionally large, spherical-shaped, smooth-walled microscopic structures much larger than most modern bacteria, but similar to some modern single-celled organisms that live in deepwater sulfur-rich ocean settings today, where even now there are almost no traces of oxygen.
Life before oxygen
UC Professor Andrew Czaja indicates the layer of rock from which fossil bacteria were collected on a 2014 field excursion near the town of Kuruman in the Northern Cape Province of South Africa. Credit: Aaron Satkoski, UWM postdoc on the excursion.
In his research published in the December issue of the journal Geology of the Geological Society of America, Czaja and his colleagues Nicolas Beukes from the University of Johannesburg and Jeffrey Osterhout, a recently graduated master's student from UC's department of geology, reveal samples of bacteria that were abundant in deep water areas of the ocean in a geologic time known as the Neoarchean Eon (2.8 to 2.5 billion years ago).
"These fossils represent the oldest known organisms that lived in a very dark, deep-water environment," says Czaja. "These bacteria existed two billion years before plants and trees, which evolved about 450 million years ago. We discovered these microfossils preserved in a layer of hard silica-rich rock called chert located within the Kaapvaal craton of South Africa."
With an atmosphere of much less than one percent oxygen, scientists have presumed that there were things living in deep water in the mud that didn't need sunlight or oxygen, but Czaja says experts didn't have any direct evidence for them until now.
Czaja argues that finding rocks this old is rare, so researchers' understanding of the Neoarchean Eon are based on samples from only a handful of geographic areas, such as this region of South Africa and another in Western Australia.

According to Czaja, scientists through the years have theorized that South Africa and Western Australia were once part of an ancient supercontinent called Vaalbara, before a shifting and upending of tectonic plates split them during a major change in the Earth's surface.
Based on radiometric dating and geochemical isotope analysis, Czaja characterizes his fossils as having formed in this early Vaalbara supercontinent in an ancient deep seabed containing sulfate from continental rock. According to this dating, Czaja's fossil bacteria were also thriving just before the era when other shallow-water bacteria began creating more and more oxygen as a byproduct of photosynthesis.
"We refer to this period as the Great Oxidation Event that took place 2.4 to 2.2 billion years ago," says Czaja.
Life before oxygen
Microstructures here have physical characteristics consistent with the remains of compressed coccodial (round) bacteria microorganisms. Credit: Andrew Czaja, permission to publish by Geological Society of America
Early recycling
Czaja's fossils show the Neoarchean bacteria in plentiful numbers while living deep in the sediment. He contends that these early bacteria were busy ingesting volcanic hydrogen sulfide—the molecule known to give off a rotten egg smell—then emitting sulfate, a gas that has no smell. He says this is the same process that goes on today as modern bacteria recycle decaying organic matter into minerals and gases.
"The waste product from one [bacteria] was food for the other," adds Czaja.
"While I can't claim that these early bacteria are the same ones we have today, we surmise that they may have been doing the same thing as some of our current bacteria," says Czaja. "These early bacteria likely consumed the molecules dissolved from sulfur-rich minerals that came from land rocks that had eroded and washed out to sea, or from the volcanic remains on the ocean's floor.
There is an ongoing debate about when sulfur-oxidizing bacteria arose and how that fits into the earth's evolution of life, Czaja adds. "But these fossils tell us that sulfur-oxidizing bacteria were there 2.52 billion years ago, and they were doing something remarkable."

Credit: Melanie Schefft
Antarctic explorers help make discovery 100 years after their epic adventures



Ice observations recorded in the ships' logbooks of explorers such as the British Captain Robert Scott and Ernest Shackleton and the German Erich von Drygalski have been used to compare where the Antarctic ice edge was during the Heroic Age of Antarctic Exploration (1897-1917) and where satellites show it is today.
The study, published in the European Geosciences Union journal The Cryosphere, suggests Antarctic sea ice is much less sensitive to the effects of climate change than that of the Arctic, which in stark contrast has experienced a dramatic decline during the 20th century.
The research, by climate scientists at the University of Reading, estimates the extent of Antarctic summer sea ice is at most 14% smaller now than during the early 1900s.
Jonathan Day, who led the study, said: "The missions of Scott and Shackleton are remembered in history as heroic failures, yet the data collected by these and other explorers could profoundly change the way we view the ebb and flow of Antarctic sea ice.
"We know that sea ice in the Antarctic has increased slightly over the past 30 years, since satellite observations began. Scientists have been grappling to understand this trend in the context of global warming, but these new findings suggest it may not be anything new.
"If ice levels were as low a century ago as estimated in this research, then a similar increase may have occurred between then and the middle of the century, when previous studies suggest ice levels were far higher."
The new study published in The Cryosphere is the first to shed light on sea ice extent in the period prior to the 1930s, and suggests the levels in the early 1900s were in fact similar to today, at between 5.3 and 7.4 million square kilometres, although one region, the Weddell Sea, did have significantly larger ice cover.
Published estimates suggest Antarctic sea ice extent was significantly higher during the 1950s, before a steep decline returned it to around 6 million square kilometres in recent decades.
The research suggests that the climate of Antarctica may have fluctuated significantly throughout the 20th century, swinging between decades of high ice cover and decades of low ice cover, rather than enduring a steady downward trend.
This study builds on international efforts to recover old weather and climate data from ships' logbooks. The public can volunteer to rescue more data at oldweather.org.
Day said: "The Southern Ocean is largely a 'black hole' as far as historical climate change data is concerned, but future activities planned to recover data from naval and whaling ships will help us to understand past climate variations and what to expect in the future."
Capt Scott perished along with his team in 1912 after missing out on being the first to reach the South Pole by a matter of weeks, while Shackleton's ship sank after becoming trapped in ice in 1915 as he and his crew journeyed to attempt the first ever cross-Antarctic trek.
In addition to using ship logbooks from three expeditions led by Scott and two by Shackleton, the researchers used sea-ice records from Belgian, German and French missions, among others. But the team was unable to analyse some logbooks from the Heroic Age period, which have not yet been imaged and digitised. These include the records from the Norwegian Antarctic expedition of 1910-12 led by Roald Amundsen, the first person to reach both the South and North Poles.
In highly lethal type of leukemia, cancer gene predicts treatment response



New research led by Washington University School of Medicine in St. Louis shows that patients with acute myeloid leukemia (AML) whose cancer cells carry TP53 mutations -- a feature that correlates with an extremely poor prognosis -- may live longer if they are treated with decitabine, a less intensive chemotherapy drug. The study's first author, John Welch, M.D., PhD, is pictured with Phillip Houghton, who is being treated for AML. Credit: Washington University
Patients with the most lethal form of acute myeloid leukemia (AML) - based on genetic profiles of their cancers - typically survive for only four to six months after diagnosis, even with aggressive chemotherapy. But new research indicates that such patients, paradoxically, may live longer if they receive a milder chemotherapy drug.
Treatment with the less intensive drug, decitabine, is not a cure. But surprisingly, AML patients whose cancer cells carried mutations in a nefarious cancer gene called TP53 consistently achieved remission after treatment with decitabine. Their median survival was just over a year.
The study, by a team of scientists at Washington University School of Medicine in St. Louis, is published Nov. 24 in The New England Journal of Medicine.
In AML, treatment involves intensive chemotherapy to try to kill the patient's leukemia cells and put the cancer into remission. If successful, a follow-up bone-marrow transplant can offer a possible cure, but this course of treatment is recommended only for patients with a high risk of relapse because the procedure can cause severe complications, even death.
"What's really unique here is that all the patients in the study with TP53 mutations had a response to decitabine and achieved an initial remission," said the study's senior author, Timothy J. Ley, MD, the Lewis T. and Rosalind B. Apple Professor of Medicine, noting that in AML, TP53 mutations have been correlated with an extremely poor prognosis. "With standard aggressive chemotherapy, we only see about 20 to 30 percent of these patients achieving remission, which is the critical first step to have a chance to cure patients with additional therapies.
"The findings need to be validated in a larger trial," Ley added, "but they do suggest that TP53 mutations can reliably predict responses to decitabine, potentially prolonging survival in this ultra high-risk group of patients and providing a bridge to transplantation in some patients who might not otherwise be candidates."
In an accompanying editorial, Elihu Estey, MD, an AML expert at the University of Washington Medical Center and Fred Hutchinson Cancer Research Center in Seattle, noted that AML is not one disease but many, each driven by different genetic mutations. The results of the current trial, he said, point to the inevitable need to replace large cancer clinical trials evaluating homogeneous drug treatments with smaller trials that involve subgroups of patients, with treatments targeted to their specific mutations.
The current study involved 116 patients treated with decitabine at the Siteman Cancer Center at Washington University School of Medicine and Barnes-Jewish Hospital, and at the University of Chicago. The patients either had AML - a cancer of the bone marrow - or myelodysplastic syndrome (MDS), a group of blood cancers that often progresses to AML. This year, an estimated 20,000 people living in the U.S. will be diagnosed with AML, and at least 11,000 deaths will be attributed to the disease.
Decitabine often is given to older patients with AML or MDS because it is less toxic than standard chemotherapies. But fewer than half of patients who get the drug achieve an initial remission, so the researchers wanted to determine whether specific mutations in the patients' cancer cells could predict their responses to treatment.
To find out, they sequenced all the genes in patients' cancer cells or analyzed select cancer genes. They also conducted standard tests to look for broken, missing or rearranged chromosomes. Then, the researchers correlated these molecular markers with treatment response to identify subgroups of patients likely to benefit from decitabine.
Among the patients in the study, 46 percent achieved a remission with decitabine. But, remarkably, all 21 patients whose leukemia cells carried TP53 mutations went into remission.
Patients also were likely to respond to decitabine if they were deemed to have an "unfavorable risk" prognosis based on extensive chromosomal rearrangements in their cancer cells; many of these patients also had TP53 mutations. Indeed, 66 percent of patients with an unfavorable risk achieved remission, compared with 34 percent of patients who had more favorable prognoses.
"The challenge with using decitabine has been knowing which patients are most likely to respond," said co-author Amanda Cashen, MD, an associate professor of medicine who led an earlier clinical trial of decitabine in older patients with AML. "The value of this study is the comprehensive mutational analysis that helps us figure out which patients are likely to benefit. This information opens the door to using decitabine in a more targeted fashion to treat not just older patients, but also younger patients who carry TP53 mutations."
First author John Welch, MD, PhD, an assistant professor of medicine, added: "It's important to note that patients with an extremely poor prognosis in this relatively small study had the same survival outcomes as patients facing a better prognosis, which is encouraging. We don't yet understand why patients with TP53 mutations consistently respond to decitabine, and more work is needed to understand that phenomenon."
Responses to decitabine are usually short-lived, however, with remissions typically lasting for about a year. Decitabine does not completely clear all the leukemia cells that carry TP53 mutations, and these cells invariably become resistant to the drug, leading to relapse.
"Remissions with decitabine typically don't last long, and no one was cured with this drug," Ley explained. "But patients who responded to decitabine live longer than what you would expect with aggressive chemotherapy, and that can mean something. Some people live a year or two and with a good quality of life, because the chemotherapy is not too toxic."
Roughly 10 percent of AML patients carry TP53 mutations in their leukemia cells. Among patients in the study with such mutations, median survival was 12.7 months - which is not significantly different from the 15.4 months' survival seen in patients without the mutations - and is longer than the typical four- to six-month survival observed in such patients treated with more aggressive therapies.
Decitabine was approved by the FDA in 2006 as a treatment for MDS, but oncologists often prescribe it off-label as a treatment for AML, particularly in older patients. AML typically strikes in a person's mid-60s; the average age of people in the current study was 74.
"We're now planning a larger trial to evaluate decitabine in AML patients of all ages who carry TP53 mutations," Welch said. "It's exciting to think we may have a therapy that has the potential to improve response rates in this group of high-risk patients."


Best weather satellite ever built is launched into space



Best weather satellite ever built rockets into space
This photo provided by United Launch Alliance shows a United Launch Alliance (ULA) Atlas V rocket carrying GOES-R spacecraft for NASA and NOAA lifting off from Space Launch Complex-41 at 6:42 p.m. EST at Cape Canaveral Air Force Station, Fla., Saturday, Nov. 19, 2016. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (United Launch Alliance via AP)  
The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives.
This new GOES-R spacecraft will track U.S. weather as never before: hurricanes, tornadoes, flooding, wildfires, lightning storms, even solar flares. Indeed, about 50 TV meteorologists from around the country converged on the launch site—including NBC's Al Roker—along with 8,000 space program workers and guests.
"What's so exciting is that we're going to be getting more data, more often, much more detailed, higher resolution," Roker said. In the case of tornadoes, "if we can give people another 10, 15, 20 minutes, we're talking about lives being saved."
Think superhero speed and accuracy for forecasting. Super high-definition TV, versus black-and-white.
"Really a quantum leap above any NOAA has ever flown," said Stephen Volz, the National Oceanic and Atmospheric Administration's director of satellites.
"For the American public, that will mean faster, more accurate weather forecasts and warnings," Volz said earlier in the week. "That also will mean more lives saved and better environmental intelligence" for government officials responsible for hurricane and other evacuations.
Best weather satellite ever built rockets into space
Cell phones light up the beaches of Cape Canaveral and Cocoa Beach, Fla., north of the Cocoa Beach Pier as spectators watch the launch of the NOAA GOES-R weather satellite, Saturday, Nov. 19, 2016. It was launched from Launch Complex 41 at Cape Canaveral Air Force Station on a ULA Atlas V rocket. (Malcolm Denemark/Florida Today via AP)
Airline passengers also stand to benefit, as do rocket launch teams. Improved forecasting will help pilots avoid bad weather and help rocket scientists know when to call off a launch.
NASA declared success 3 1/2 hours after liftoff, following separation from the upper stage.
The first in a series of four high-tech satellites, GOES-R hitched a ride on an unmanned Atlas V rocket, delayed an hour by rocket and other problems. NOAA teamed up with NASA for the mission.
The satellite—valued by NOAA at $1 billion—is aiming for a 22,300-mile-high equatorial orbit. There, it will join three aging spacecraft with 40-year-old technology, and become known as GOES-16. After months of testing, this newest satellite will take over for one of the older ones. The second satellite in the series will follow in 2018. All told, the series should stretch to 2036.
Best weather satellite ever built rockets into space
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
GOES stands for Geostationary Operational Environmental Satellite. The first was launched in 1975.
GOES-R's premier imager—one of six science instruments—will offer three times as many channels as the existing system, four times the resolution and five times the scan speed, said NOAA program director Greg Mandt. A similar imager is also flying on a Japanese weather satellite.
Typically, it will churn out full images of the Western Hemisphere every 15 minutes and the continental United States every five minutes. Specific storm regions will be updated every 30 seconds.
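To put those cadences in concrete terms, here is the simple arithmetic implied by the figures quoted above (illustrative only):

```python
# Images per day implied by the cadences quoted above.
minutes_per_day = 24 * 60

full_disk_per_day = minutes_per_day // 15           # Western Hemisphere, every 15 minutes -> 96
conus_per_day = minutes_per_day // 5                # continental U.S., every 5 minutes    -> 288
storm_region_per_day = minutes_per_day * 60 // 30   # one storm sector, every 30 seconds   -> 2880

print(full_disk_per_day, conus_per_day, storm_region_per_day)
```

That works out to 96 hemispheric images, 288 continental images and 2,880 updates of a single monitored storm region per day.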
Forecasters will get pictures "like they've never seen before," Mandt promised.
Best weather satellite ever built rockets into space
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, in Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
A first-of-its-kind lightning mapper, meanwhile, will take 500 snapshots a second.
This next-generation GOES program—$11 billion in all—includes four satellites, an extensive land system of satellite dishes and other equipment, and new methods for crunching the massive, nonstop stream of expected data.
Hurricane Matthew, interestingly enough, delayed the launch by a couple weeks. As the hurricane bore down on Florida in early October, launch preps were put on hold. Matthew stayed far enough offshore to cause minimal damage to Cape Canaveral, despite some early forecasts that suggested a direct strike.
Best weather satellite ever built rockets into space
This photo provided by United Launch Alliance shows a United Launch Alliance (ULA) Atlas V rocket carrying GOES-R spacecraft for NASA and NOAA lifting off from Space Launch Complex-41 at 6:42 p.m. EST at Cape Canaveral Air Force Station, Fla., Saturday, Nov. 19, 2016. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (United Launch Alliance via AP)
Credit: Marcia Dunn
A suit-X trio designed to support workers: Meet MAX




(Tech Xplore)—Not all of us park our bodies in a chair in the morning and cross our legs to do our work. In fact, just think of vast numbers of workers doing physically demanding or just physically repetitive tasks including bending and lifting.
Workers on construction sites, in factories and in warehouses might cope with aches and pains brought on by their work. Hopefully, the future will provide an easy way for workers to suit up and avoid those aches and pains.
There is a new kid on the block aiming to provide such a solution, and a number of tech watchers have put it in the news this month. A California-based group aptly called suitX announced its MAX, which stands for Modular Agile Exoskeleton. The company designs and makes exoskeletons.
"MAX is designed to support workers during the repetitive tasks that most frequently cause injury," said a company release.
Will Knight in MIT Technology Review said that this essentially is "a trio of devices that use robotic technologies to enhance the abilities of able-bodied workers and prevent common workplace injuries."
Target users, for example, could include those who carry out ceiling inspections, welding, installations and repairs. "It's not only lifting 75 pounds that can hurt your back; it is also lifting 20 pounds repeatedly throughout the day that will lead to injury," said Dr. Homayoon Kazerooni, founder and CEO of suitX. "The MAX solution is designed for unstructured workplaces where no robot can work as efficiently as a human worker. Our goal is to augment and support workers who perform demanding and repetitive tasks in unstructured workplaces in order to prevent and reduce injuries."
Seeker referred to the MAX system as an exoskeleton device that could potentially change the way millions of people work.
Seeker noted several advantages of MAX as a workplace exoskeleton, among them that it is lightweight enough for the user to walk around unimpeded. "The exoskeleton units kick in only when you need them, and they don't require any external power source."
MAX is a product with three modules. You use them independently or in combination, depending on work needs. The three modules are backX, shoulderX, and legX.
According to the company, "All modules intelligently engage when you need them, and don't impede you otherwise."
The backX (lower back) reduces forces and torques.
The shoulderX reduces forces; it "enables the wearer to perform chest-to-ceiling level tasks for longer periods of time." In a video the company defines shoulderX as "an industrial arm exoskeleton that augments its wearer by reducing gravity-induced forces at the shoulder complex."
The legX was designed to support the knee joint and quadriceps. It incorporates microcomputers in each leg that communicate with each other to determine if the person is walking, bending, or taking the stairs. Seeker said these communicate via Bluetooth, monitoring spacing and position.
Credit: suitX
A suit-X trio designed to support workers: Meet MAX
Kazerooni spoke about his company and its mission in Seeker. "My job is easy. I sit in front of a computer. But these guys work all day long, put their bodies through abuse. We can use bionics to help them." He also said he and his team did not create this "because of science fiction movies. We were responding to numbers from the Department of Labor, which said that back, knee and shoulder injuries are the most common form of injuries among workers."
Will Knight meanwhile has reflected on the bigger picture in developments. Can they help in preventing injury on the job and help prolong workers' careers? "New materials, novel mechanical designs, and cheaper actuators and motors have enabled a new generation of cheaper, more lightweight exoskeletons to emerge in recent years," he wrote. "For instance, research groups at Harvard and SRI are developing systems that are passive and use soft, lightweight materials."
Some companies, such as BMW, said Knight, have been experimenting with exoskeletons. "The MAX is another (bionic) step toward an augmented future of work."

Credit: Nancy Owano
Cannabinoids control memory through mitochondria



Cannabinoids and memory
Few classes of drugs have galvanized the pharmaceutical industry in recent times like the cannabinoids. This class of molecules includes not only the natural forms, but also a vast new treasury of powerful synthetic analogs with up to several hundred times the potency as measured by receptor activity and binding affinity. With the FDA now fast tracking all manner of injectables, topicals, and sprays promising everything from relief of nebulous cancer pain to anti-seizure neuroprotection, more than a few skeptics have been generated.
What inquiring minds really want to know, beyond the thorny issue of how well they actually work, is how do they work at all? If you want to understand what something is doing in the cell, one useful approach is to ask what it does to its mitochondria. With drug companies now drooling over the possibility of targeting drugs and treatments directly to these organelles by attaching mitochondrial localization sequences (MLS) or other handler molecules, answers to this kind of question are now coming into focus.
But even with satisfactory explanations in hand, there would still be one large hurdle standing in the way of cannabinoid medical bliss: Namely, even if a patient can manage to avoid operating vehicles or heavy machinery throughout the course of their treatment, how do they cope with the endemic collateral memory loss these drugs invariably cause?
A recent paper published in Nature neatly ties all these subtleties together, and even suggests a possible way out of the brain fog by toggling the sites of cannabinoid action between mitochondria and other cellular compartments. By generating a panel of cannabinoid receptor and second messenger molecules with and without the appropriate MLS tags or accessory binding proteins, the authors were able to directly link cannabinoid-controlled mitochondrial activity to memory formation.
One confounder in this line of work is that these MLSs are very fickle beasts. The 22 or so leader amino acids that make up their 'code' are not direct addresses in any sense. While the consensus sequences that localize protease action or sort nuclear, endoplasmic reticulum, and plasma membrane proteins generally contain clearly recognizable motifs, any regularities in the MLSs have only proven visible to a computer. That is not to say that MLSs are fictions—they clearly do work—but their predictable action is only witnessed whole once their 3-dimensional vibrating structures are fully conformed.
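To give a flavour of what such sequence scanning looks for, here is a deliberately crude, hypothetical scoring function (a toy sketch, not any published predictor, and the example sequence is invented): classical MLSs tend to be enriched in basic (R, K) and hydroxylated (S, T) residues and depleted in acidic (D, E) residues over roughly the first 22 amino acids.

```python
# Toy heuristic for spotting an N-terminal mitochondrial localization sequence (MLS).
# Purely illustrative; real predictors use far more sophisticated models.
def crude_mls_score(protein_seq: str, window: int = 22) -> float:
    leader = protein_seq[:window].upper()
    basic = sum(leader.count(aa) for aa in "RK")      # arginine, lysine
    hydroxyl = sum(leader.count(aa) for aa in "ST")   # serine, threonine
    acidic = sum(leader.count(aa) for aa in "DE")     # aspartate, glutamate
    return (basic + 0.5 * hydroxyl - 2.0 * acidic) / len(leader)

# Hypothetical leader sequence, for illustration only (not the CB1 sequence).
example = "MLRTSSLFTRRVQPSLFRNILRLQSTAEG"
print(round(crude_mls_score(example), 2))
```

Real predictors weigh far subtler features, such as amphipathicity and helical periodicity, which is why the regularities are only visible to a computer.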
The authors availed themselves of two fairly sophisticated programs called Mitoprot and PSQRT to remove any guesswork in identifying a potential MLS in CB1 cannabinoid receptors. CB1 receptors had been previously associated by immunohistochemical methods with what we might call the mitochondrial penumbra, but their presence there may have been purely incidental. This in silico analysis theoretically confirmed the presence of a putative MLS in CB1 and encouraged them to carry out further manipulations of this pathway.
Namely, the researchers took a mouse with the mitochondrial mtCB1 receptor knocked out, and then added modified versions back using viral vectors. When they applied the synthetic cannabinoid ligands (known as WIN55,212 and HU210), they found that mitochondrial respiration and mobility, and subsequently memory formation, remained largely intact in animals without the MLS in their receptor.
The researchers were then able to look further downstream using the same general strategy of controlling localization of the second messenger molecule protein kinase A (PKA). By fusing a constitutively active mutant form of PKA to an MLS and putting it inside using an adenovirus they were able to trace the signal cascade into the heart of the complex I of the respiratory chain.
The presence and origin of full G-protein receptor signal pathways in mitochondria are now more than just an academic question. Exactly how retroviruses and other molecular agents of sequence modification managed to re-jigger gene-duplicated backups of proteins like CB1 to add alternatively spliced MLS tags is still shrouded in mystery.
Our ability to now harness these same slow evolutionary processes in real time, and bend them to our needs, will undoubtedly have implications well beyond the cannabinoid market. Together the results above suggest the tantalizing possibility of preserving some of the desired benefits of cannabinoids while eliminating the unintended consequences like memory loss or full-blown amnesia.

Credit: John Hewitt
New 'smart metal' technology to keep bridge operational in next big quake



A bridge that bends in a strong earthquake and not only remains standing but remains usable is making its debut in its first real-world application, as part of a new exit bridge ramp on a busy downtown Seattle highway.
"We've tested new materials, memory retaining metal rods and flexible concrete composites, in a number of bridge model studies in our large-scale shake table lab, it's gratifying to see the applied for the first time in an important setting in a seismically active area with heavy traffic loads," Saiid Saiidi, civil engineering professor and researcher at the University of Nevada, Reno, said. "Using these materials substantially reduces damage and allows the bridge to remain open even after a strong earthquake."
Saiidi, who pioneered this technology, has built and destroyed, in the lab, several large-scale 200-ton bridges, single bridge columns and concrete abutments using various combinations of innovative materials, replacements for the standard steel rebar and concrete materials and design in his quest for a safer, more resilient infrastructure.
"We have solved the problem of survivability, we can keep a bridge usable after a ," Saiidi said. "With these techniques and materials, we will usher in a new era of super earthquake-resilient structures."
The University partnered with the Washington Department of Transportation and the Federal Highway Administration to implement this new technology on their massive Alaska Way Viaduct Replacement Program, the centerpiece of which is a two-mile long tunnel, but includes 31 separate projects that began in 2007 along the State Route 99 corridor through downtown Seattle.
"This is potentially a giant leap forward," Tom Baker, bridge and structures engineer for the Washington State Department of Transportation, said. "We design for no-collapse, but in the future, we could be designing for no-damage and be able to keep bridges open to emergency vehicles, commerce and the public after a strong quake."
Modern bridges are designed to not collapse during an earthquake, and this new technology takes that design a step further. In the earthquake lab tests, bridge columns built using memory-retaining nickel/titanium rods and a flexible concrete composite returned to their original shape after an earthquake as strong as a magnitude 7.5.
"The tests we've conducted on 4-span bridges leading to this point aren't possible anywhere else in the world than our large-scale structures and earthquake engineering lab," Saiidi said. "We've had great support along the way from many state highway departments and funding agencies like the National Science Foundation, the Federal Highway Administration and the U.S. Department of Transportation. Washington DOT recognized the potential of this technology and understands the need to keep infrastructure operating following a large earthquake."
In an experiment in 2015, featured in a video, one of Saiidi's bridge columns moved more than six inches off center at the base and returned to its original position, as designed, in an upright and stable position. Using the computer-controlled hydraulics, the earthquake engineering lab can increase the intensity of the recorded earthquake. Saiidi turned the dial up to 250 percent of the design parameters and still had excellent results.
"It had an incredible 9 percent drift with little damage," Saiidi said.
The Seattle off-ramp with the innovative columns is currently under construction and scheduled for completion in spring 2017. After the new SR 99 tunnel opens, this ramp, just south of the tunnel entrance, will take northbound drivers from SR 99 to Seattle's SODO neighborhood.
A new WSDOT video describes how this innovative technology works.
"Dr. Saiidi sets the mark for the level of excellence to which the College of Engineering aspires," Manos Maragakis, dean of the University's College of Engineering, said. "His research is original and innovative and has made a seminal contribution to seismic safety around the globe."
Using drones and insect biobots to map disaster areas



Tech would use drones and insect biobots to map disaster areas
Credit: North Carolina State University  
Researchers at North Carolina State University have developed a combination of software and hardware that will allow them to use unmanned aerial vehicles (UAVs) and insect cyborgs, or biobots, to map large, unfamiliar areas – such as collapsed buildings after a disaster.
"The idea would be to release a swarm of sensor-equipped biobots – such as remotely controlled cockroaches – into a collapsed building or other dangerous, unmapped area," says Edgar Lobaton, an assistant professor of electrical and computer engineering at NC State and co-author of two papers describing the work.
"Using remote-control technology, we would restrict the movement of the biobots to a defined area," Lobaton says. "That area would be defined by proximity to a beacon on a UAV. For example, the biobots may be prevented from going more than 20 meters from the UAV."
The biobots would be allowed to move freely within a defined area and would signal researchers via radio waves whenever they got close to each other. Custom software would then use an algorithm to translate the biobot sensor data into a rough map of the unknown environment.
Once the program receives enough data to map the defined area, the UAV moves forward to hover over an adjacent, unexplored section. The biobots move with it, and the mapping process is repeated. The software program then stitches the new map to the previous one. This can be repeated until the entire region or structure has been mapped; that map could then be used by first responders or other authorities.
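To make the idea concrete, here is a minimal, hypothetical sketch of the kind of processing described: pairwise "encounter" signals between biobots are treated as edges of a proximity graph, and a standard graph layout recovers a rough relative geometry of the explored area. The encounter list and the spring layout below are illustrative assumptions, not the authors' algorithm, which is based on geometric and topological inference.

```python
# Hypothetical illustration: turning biobot proximity encounters into a rough map.
# This is NOT the NC State algorithm; it only sketches the general idea.
import networkx as nx

# Each tuple means "biobot A and biobot B signalled that they were near each other".
encounters = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (3, 4), (4, 5)]

graph = nx.Graph()
graph.add_edges_from(encounters)

# A force-directed layout places biobots that met often close together,
# hinting at the shape of the space they explored.
positions = nx.spring_layout(graph, seed=42)
for robot, (x, y) in positions.items():
    print(f"biobot {robot}: approx position ({x:+.2f}, {y:+.2f})")
```

Each time the UAV beacon moves to a new section, a new local graph of this kind would be built and merged with the previous one where the two share biobots, which is the "stitching" step the researchers describe.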
"This has utility for areas – like collapsed buildings – where GPS can't be used," Lobaton says. "A strong radio signal from the UAV could penetrate to a certain extent into a collapsed building, keeping the biobot swarm contained. And as long as we can get a signal from any part of the swarm, we are able to retrieve data on what the rest of the swarm is doing. Based on our experimental data, we know you're going to lose track of a few individuals, but that shouldn't prevent you from collecting enough data for mapping."
Co-lead author Alper Bozkurt, an associate professor of electrical and computer engineering at NC State, has previously developed functional cockroach biobots. However, to test their new mapping technology, the research team relied on inch-and-a-half-long robots that simulate cockroach behavior.
In their experiment, researchers released these robots into a maze-like space, with the effect of the UAV beacon emulated using an overhead camera and a physical boundary attached to a moving cart. The cart was moved as the robots mapped the area.
"We had previously developed proof-of-concept software that allowed us to map small areas with biobots, but this work allows us to map much larger areas and to stitch those maps together into a comprehensive overview," Lobaton says. "It would be of much more practical use for helping to locate survivors after a disaster, finding a safe way to reach survivors, or for helping responders determine how structurally safe a building may be.
"The next step is to replicate these experiments using biobots, which we're excited about."
An article on the framework for developing local maps and stitching them together, "A Framework for Mapping with Biobotic Insect Networks: From Local to Global Maps," is published in Robotics and Autonomous Systems. An article on the theory of mapping based on the proximity of mobile sensors to each other, "Geometric Learning and Topological Inference with Biobotic Networks," is published in IEEE Transactions on Signal and Information Processing over Networks.


Credit: Matt Shipman
How machine learning advances artificial intelligence

How machine learning advances artificial intelligence


Computers that learn for themselves are with us now. As they become more common in 'high-stakes' applications like robotic surgery, terrorism detection and driverless cars, researchers ask what can be done to make sure we can trust them.
There was always going to be a first death in a driverless car, and it happened in May 2016. Joshua Brown had engaged the autopilot system in his Tesla when a tractor-trailer drove across the road in front of him. It seems that neither he nor the sensors in the autopilot registered the white-sided truck against a brightly lit sky, with tragic results.
Of course many people die in car crashes every day – in the USA there is one fatality every 94 million miles, and according to Tesla this was the first known fatality in over 130 million miles of driving with activated autopilot. In fact, given that most road fatalities are the result of human error, it has been said that autonomous cars should make travelling safer.
Even so, the tragedy raised a pertinent question: how much do we understand – and trust – the computers in an autonomous vehicle? Or, in fact, in any machine that has been taught to carry out an activity that a human would do?
We are now in the era of machine learning. Machines can be trained to recognise certain patterns in their environment and to respond appropriately. It happens every time your digital camera detects a face and throws a box around it to focus, or the personal assistant on your smartphone answers a question, or the adverts match your interests when you search online.
Machine learning is a way to program computers to learn from experience and improve their performance in a way that resembles how humans and animals learn tasks. As machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important, says Zoubin Ghahramani, Professor of Information Engineering in Cambridge's Department of Engineering.
Faced with a life-or-death decision, would a driverless car choose to hit pedestrians, or avoid them and risk the lives of its occupants? Providing a medical diagnosis, could a machine be wildly inaccurate because it has based its opinion on too small a sample? In making financial transactions, should a computer have to explain how robust its assessment of stock market volatility is?
"Machines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data," says Ghahramani. "But what is going on inside the 'black box'? If the processes by which decisions were being made were more transparent, then trust would be less of an issue."
His team builds the algorithms that lie at the heart of these technologies (the "invisible bit" as he refers to it). Trust and transparency are important themes in their work: "We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty. Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty and then as you have more and more data you have more and more certainty.
"When machines make decisions, we want them to be clear on what stage they have reached in this process. And when they are unsure, we want them to tell us."
One method is to build in an internal self-evaluation or calibration stage so that the machine can test its own certainty, and report back.
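As an illustration of that kind of self-assessment (a hypothetical sketch, not the Cambridge group's software), a simple Bayesian model can report not just its best guess but how much certainty that guess rests on, and the report tightens as more data arrives:

```python
# Minimal illustration: a Bayesian model that reports its own uncertainty
# and becomes more certain as it sees more data. Values are made up.
from scipy.stats import beta

def report(successes: int, failures: int) -> None:
    # Beta(1, 1) prior; posterior over the unknown success rate is Beta(1+s, 1+f).
    posterior = beta(1 + successes, 1 + failures)
    lo, hi = posterior.interval(0.95)
    print(f"after {successes + failures:4d} observations: "
          f"estimate {posterior.mean():.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")

report(3, 2)      # little data: a wide interval, so the machine should say "I'm unsure"
report(60, 40)    # more data: the interval narrows
report(600, 400)  # lots of data: a confident, well-calibrated answer
```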
Two years ago, Ghahramani's group launched the Automatic Statistician with funding from Google. The tool helps scientists analyse datasets for statistically significant patterns and, crucially, it also provides a report to explain how sure it is about its predictions.
"The difficulty with machine learning systems is you don't really know what's going on inside – and the answers they provide are not contextualised, like a human would do. The Automatic Statistician explains what it's doing, in a human-understandable form."
Where transparency becomes especially relevant is in applications like medical diagnoses, where understanding the provenance of how a decision is made is necessary to trust it.
Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: "A particular issue with new (AI) systems that learn or evolve is that their processes do not clearly map to rational decision-making pathways that are easy for humans to understand." His research aims both at making these pathways more transparent, sometimes through visualisation, and at looking at what happens when systems are used in real-world scenarios that extend beyond their training environments – an increasingly common occurrence.
"We would like AI systems to monitor their situation dynamically, detect whether there has been a change in their environment and – if they can no longer work reliably – then provide an alert and perhaps shift to a safety mode." A , for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.
Weller's theme of trust and transparency forms just one of the projects at the newly launched £10 million Leverhulme Centre for the Future of Intelligence (CFI). Ghahramani, who is Deputy Director of the Centre, explains: "It's important to understand how developing technologies can help rather than replace humans. Over the coming years, philosophers, social scientists, cognitive scientists and computer scientists will help guide the future of the technology and study its implications – both the concerns and the benefits to society."
CFI brings together four of the world's leading universities (Cambridge, Oxford, Berkeley and Imperial College, London) to explore the implications of AI for human civilisation. Together, an interdisciplinary community of researchers will work closely with policy-makers and industry investigating topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.
Ghahramani describes the excitement felt across the field: "It's exploding in importance. It used to be an area of research that was very academic – but in the past five years people have realised these methods are incredibly useful across a wide range of societally important areas.
"We are awash with data, we have increasing computing power and we will see more and more applications that make predictions in real time. And as we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us."
Artificial intelligence has the power to eradicate poverty and disease, or to hasten the end of human civilisation as we know it, according to a speech delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.
