Technique reveals the basis for machine-learning systems' decisions
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have devised a way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions. Credit: Christine Daniloff/MIT
In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.
But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it's sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.
At the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but also rationales for their decisions.
"In real-world applications, sometimes people really want to know why the model makes the predictions it does," says Tao Lei, an MIT graduate student in electrical engineering and computer science and first author on the new paper. "One major reason that doctors don't trust machine-learning methods is that there's no evidence."
"It's not only the medical domain," adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei's thesis advisor. "It's in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it."
"There's a broader aspect to this work, as well," says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. "You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that's trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model."
Virtual brains
Neural networks are so called because they mimic—approximately—the structure of the brain. They are composed of a large number of processing nodes that, like individual neurons, are capable of only very simple computations but are connected to each other in dense networks.
In a process referred to as "deep learning," training data is fed to a network's input nodes, which modify it and feed it to other nodes, which modify it and feed it to still other nodes, and so on. The values stored in the network's output nodes are then correlated with the classification category that the network is trying to learn—such as the objects in an image, or the topic of an essay.
Over the course of the network's training, the operations performed by the individual nodes are continuously modified to yield consistently good results across the whole set of training examples. By the end of the process, the computer scientists who programmed the network often have no idea what the nodes' settings are. Even if they do, it can be very hard to translate that low-level information back into an intelligible description of the system's decision-making process.
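The forward pass described above can be sketched in a few lines. This toy example (random weights, no training loop) only illustrates how values flow from input nodes through hidden nodes to output nodes; the sizes and activation function are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: input nodes feed hidden nodes, which feed output nodes.
W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 3))   # hidden -> output weights

def forward(x):
    """Each layer modifies its input and passes the result onward."""
    h = np.tanh(x @ W1)        # hidden nodes perform a simple nonlinear computation
    return h @ W2              # output node values, correlated with class scores

x = rng.normal(size=(1, 4))    # one example with 4 input features
scores = forward(x)
predicted_class = int(np.argmax(scores))
```

Training would then repeatedly nudge `W1` and `W2` so the output scores match the labels across the whole training set, which is precisely the step that leaves the final node settings opaque.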
In the new paper, Lei, Barzilay, and Jaakkola specifically address neural nets trained on textual data. To enable interpretation of a neural net's decisions, the CSAIL researchers divide the net into two modules. The first module extracts segments of text from the training data, and the segments are scored according to their length and their coherence: The shorter the segment, and the more of it that is drawn from strings of consecutive words, the higher its score.
The segments selected by the first module are then passed to the second module, which performs the prediction or classification task. The modules are trained together, and the goal of training is to maximize both the score of the extracted segments and the accuracy of prediction or classification.
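As a loose illustration of the two-module idea, here is a hypothetical, non-neural sketch: an "extractor" scores every short contiguous span of a review (rewarding relevance and penalizing length) and a "predictor" classifies from the selected span alone. The word lists and scoring weights are invented for this example; the actual system learns both modules jointly as neural networks.

```python
# Invented sentiment lexicons for the sketch (not from the paper).
POSITIVE = {"great", "excellent", "smooth"}
NEGATIVE = {"flat", "stale", "bland"}

def extract_rationale(words, max_len=3):
    """First module: score every contiguous span; shorter, more relevant
    spans score higher, mirroring the paper's length/coherence criteria."""
    best, best_score = None, float("-inf")
    for i in range(len(words)):
        for j in range(i + 1, min(i + 1 + max_len, len(words) + 1)):
            span = words[i:j]
            hits = sum(w in POSITIVE | NEGATIVE for w in span)
            score = hits - 0.1 * len(span)   # reward relevance, penalize length
            if score > best_score:
                best, best_score = span, score
    return best

def predict(span):
    """Second module: classify using only the extracted span."""
    pos = sum(w in POSITIVE for w in span)
    neg = sum(w in NEGATIVE for w in span)
    return "positive" if pos >= neg else "negative"

review = "the aroma was great but the palate felt flat".split()
rationale = extract_rationale(review)   # -> ["great"]
```

In the real system, both modules are differentiable and trained end to end, so the extractor learns which spans make the predictor accurate rather than relying on a fixed word list.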
One of the data sets on which the researchers tested their system is a group of reviews from a website where users evaluate different beers. The data set includes the raw text of the reviews and the corresponding ratings, using a five-star system, on each of three attributes: aroma, palate, and appearance.
What makes the data attractive to researchers is that it's also been annotated by hand, to indicate which sentences in the reviews correspond to which scores. For example, a review might consist of eight or nine sentences, and the annotator might have highlighted those that refer to the beer's "tan-colored head about half an inch thick," "signature Guinness smells," and "lack of carbonation." Each sentence is correlated with a different attribute rating.
As such, the data set provides an excellent test of the CSAIL researchers' system. If the first module has extracted those three phrases, and the second module has correlated them with the correct ratings, then the system has identified the same basis for judgment that the human annotator did.
In experiments, the system's agreement with the human annotations was 96 percent and 95 percent, respectively, for ratings of appearance and aroma, and 80 percent for the more nebulous concept of palate.
In the paper, the researchers also report testing their system on a database of free-form technical questions and answers, where the task is to determine whether a given question has been answered previously.
In unpublished work, they've applied it to thousands of pathology reports on breast biopsies, where it has learned to extract text explaining the bases for the pathologists' diagnoses. They're even using it to analyze mammograms, where the first module extracts sections of images rather than segments of text.
Structure of toxic tau aggregates determines type of dementia, rate of progression

Dr. Marc Diamond's lab replicated distinctly patterned tau strains, shown in green, in cultured cells. Credit: UT Southwestern
The distinct structures of toxic protein aggregates that form in degenerating brains determine which type of dementia will occur, which regions of brain will be affected, and how quickly the disease will spread, according to a study from the Peter O'Donnell Jr. Brain Institute.
The research helps explain the diversity of dementias linked to tau protein aggregation, which destroys brain cells of patients with Alzheimer's and other neurodegenerative syndromes. The study also has implications for earlier and more accurate diagnoses of various dementias through definition of the unique forms of tau associated with each.
"In addition to providing a framework to understand why patients develop different types of neurodegeneration, this work has promise for the development of drugs to treat specific neurodegenerative diseases, and for how to accurately diagnose them. The findings indicate that a one-size-fits-all strategy for therapy may not work, and that we have to approach clinical trials and drug development with an awareness of which forms of tau we are targeting," said study author Dr. Marc Diamond, founding Director of the Center for Alzheimer's and Neurodegenerative Diseases, and Professor of Neurology and Neurotherapeutics with the O'Donnell Brain Institute at UT Southwestern Medical Center.
Researchers used special cell systems to replicate distinct tau aggregate conformations. These different forms of pathological tau were then inoculated into the brains of mice. Each form created different pathological patterns, recapitulating the variation that occurs in diseases such as Alzheimer's, frontotemporal dementias, and traumatic encephalopathy.
The different forms of tau caused pathology that spread at different rates through the brain, and affected specific brain regions. This experiment demonstrated that the structure of pathological tau aggregates alone is sufficient to account for most if not all the variation seen in human neurodegenerative diseases that are linked to this protein.
The finding, published in Neuron, could have a notable impact on widespread efforts at the O'Donnell Brain Institute and elsewhere to develop treatments that eliminate tau and other toxic proteins from the brains of dementia patients.
These tau strains were inoculated into the brains of mice and formed unique patterns of pathology that can be linked to specific dementias. Credit: UT Southwestern Medical Center
"The challenge for us now is to figure out how to rapidly and efficiently determine the forms of tau that are present in individual patients, and simultaneously, to develop specific therapies. This work says that it should be possible to predict patterns of disease in patients and responses to therapy based on knowledge of tau aggregate structure," said Dr. Diamond, who holds the Distinguished Chair in Basic Brain Injury and Repair.
Dr. Diamond's lab, at the forefront of many notable findings relating to tau, had previously determined that tau acts like a prion - an infectious protein that can self-replicate and spread like a virus through the brain. The lab has determined that tau protein in human brain can form many distinct strains, or self-replicating structures, and developed methods to reproduce them in the laboratory. This research led Dr. Diamond's team to the latest study to test whether these strains might account for different forms of dementia.
To make this link, 18 distinct tau aggregate strains were replicated in the lab from human neurodegenerative disease brain samples, or were created from mouse models or other artificial sources. Researchers inoculated the strains into different brain regions of mice and found striking differences among them.
While some strains had far reaching and rapid effects, others replicated only in limited parts of the brain, or caused widespread disease but did so very slowly. This surprising result answered a fundamental question that has dogged the field of neurodegenerative disease: Why are brain regions vulnerable in certain cases but not others, and why do some diseases progress more rapidly than others?
For instance, in Alzheimer's disease, problems begin in brain memory centers before spreading to other areas that control functions such as language. Conversely, due to initial degeneration of frontal and temporal brain regions in frontotemporal dementia, the memory centers are relatively spared, and patients often first show changes in personality and behavior.
The new study implies that with knowledge of tau aggregate structure in patients, or possibly even in healthy individuals, it should be possible to predict which brain regions are most vulnerable to degeneration and the rate of disease progression.
Next-generation smartphone battery inspired by the gut

A computer visualization of villi-like battery material. Credit: Teng Zhao

Researchers have developed a prototype of a next-generation lithium-sulphur battery which takes its inspiration in part from the cells lining the human intestine. The batteries, if commercially developed, would have five times the energy density of the lithium-ion batteries used in smartphones and other electronics.
The new design, by researchers from the University of Cambridge, overcomes one of the key technical problems hindering the commercial development of lithium-sulphur batteries, by preventing the degradation of the battery caused by the loss of material within it. The results are reported in the journal Advanced Functional Materials.
Working with collaborators at the Beijing Institute of Technology, the Cambridge researchers based in Dr Vasant Kumar's team in the Department of Materials Science and Metallurgy developed and tested a lightweight nanostructured material which resembles villi, the finger-like protrusions which line the small intestine. In the human body, villi are used to absorb the products of digestion and increase the surface area over which this process can take place.
In the new lithium-sulphur battery, a layer of material with a villi-like structure, made from tiny zinc oxide wires, is placed on the surface of one of the battery's electrodes. This can trap fragments of the active material when they break off, keeping them electrochemically accessible and allowing the material to be reused.
"It's a tiny thing, this layer, but it's important," said study co-author Dr Paul Coxon from Cambridge's Department of Materials Science and Metallurgy. "This gets us a long way through the bottleneck which is preventing the development of better batteries."
A typical lithium-ion battery is made of three separate components: an anode (negative electrode), a cathode (positive electrode) and an electrolyte in the middle. The most common materials for the anode and cathode are graphite and lithium cobalt oxide respectively, which both have layered structures. Positively-charged lithium ions move back and forth from the cathode, through the electrolyte and into the anode.
The crystal structure of the electrode materials determines how much energy can be squeezed into the battery. For example, it takes six carbon atoms to bind a single lithium ion, which limits the maximum capacity of a graphite anode.
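Graphite's LiC6 stoichiometry (one lithium ion stored per six carbon atoms) translates into its well-known theoretical capacity of about 372 mAh per gram, which a quick back-of-the-envelope calculation confirms:

```python
# Theoretical gravimetric capacity of a graphite anode (LiC6):
# each stored lithium ion contributes one electron per six carbon atoms.
F = 96485.0          # Faraday constant, C per mol of electrons
M_C = 12.011         # molar mass of carbon, g/mol
carbons_per_li = 6   # LiC6 stoichiometry

# Charge per gram of carbon host, converted from C/g to mAh/g (1 mAh = 3.6 C).
capacity_mAh_per_g = F / (carbons_per_li * M_C * 3.6)   # ~372 mAh/g
```

Sulphur's multi-electron chemistry, by contrast, offers a theoretical capacity several times higher, which is what motivates the lithium-sulphur design despite its degradation problems.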
Sulphur and lithium react differently, via a multi-electron transfer mechanism, meaning that elemental sulphur can offer a much higher theoretical capacity, resulting in a lithium-sulphur battery with much higher energy density. However, when the battery discharges, the lithium and sulphur interact and the ring-like sulphur molecules transform into chain-like structures, known as poly-sulphides. As the battery undergoes several charge-discharge cycles, bits of the poly-sulphides can dissolve into the electrolyte, so that over time the battery gradually loses active material.
The Cambridge researchers have created a functional layer which lies on top of the cathode and fixes the active material to a conductive framework so the active material can be reused. The layer is made up of tiny, one-dimensional zinc oxide nanowires grown on a scaffold. The concept was trialled using commercially-available nickel foam for support. After successful results, the foam was replaced by a lightweight carbon fibre mat to reduce the battery's overall weight.
"Changing from stiff nickel foam to flexible carbon fibre mat makes the layer mimic the way small intestine works even further," said study co-author Dr Yingjun Liu.
This functional layer, like the intestinal villi it resembles, has a very high surface area. The material has a very strong chemical bond with the poly-sulphides, allowing the active material to be used for longer, greatly increasing the lifespan of the battery.
"This is the first time a chemically functional layer with a well-organised nano-architecture has been proposed to trap and reuse the dissolved active materials during battery charging and discharging," said the study's lead author Teng Zhao, a PhD student from the Department of Materials Science & Metallurgy. "By taking our inspiration from the natural world, we were able to come up with a solution that we hope will accelerate the development of next-generation batteries."
For the time being, the device is a proof of principle, so commercially-available lithium-sulphur batteries are still some years away. Additionally, while the number of times the battery can be charged and discharged has been improved, it is still not able to go through as many charge cycles as a lithium-ion battery. However, since a lithium-sulphur battery does not need to be charged as often as a lithium-ion battery, it may be the case that the increase in energy density cancels out the lower total number of charge-discharge cycles.
"This is a way of getting around one of those awkward little problems that affects all of us," said Coxon. "We're all tied in to our electronic devices - ultimately, we're just trying to make those devices work better, hopefully making our lives a little bit nicer."
New tool detects malicious websites before they cause harm

Malicious websites promoting scams, distributing malware and collecting phished credentials pervade the web. As quickly as we block or blacklist them, criminals set up new domain names to support their activities. Now a research team including Princeton University computer science professor Nick Feamster and recently graduated Ph.D. student Shuang Hao has developed a technique to make it more difficult to register new domains for nefarious purposes.
In a paper presented at the 2016 ACM Conference on Computer and Communications Security on Oct. 27, the researchers describe a system called PREDATOR that distinguishes between legitimate and malicious purchasers of new websites. In doing so, the system yields important insights into how those two groups behave differently online even before they have done anything obviously bad or harmful. These early signs of likely evil-doers help security professionals take preemptive measures, instead of waiting for a security threat to surface.
"The intuition has always been that the way that malicious actors use online resources somehow differs fundamentally from the way legitimate actors use them," Feamster explained. "We were looking for those signals: what is it about a domain name that makes it automatically identifiable as a bad domain name?"
Once a website begins to be used for malicious purposes (when it's linked to in spam email campaigns, for instance, or when it installs malware on visitors' machines), defenders can flag it as bad and start blocking it. But by then, the site has already been used for the very kinds of behavior that we want to prevent. PREDATOR, which stands for Proactive Recognition and Elimination of Domain Abuse at Time-Of-Registration, gets ahead of the curve.
The researchers' techniques rely on the assumption that malicious users will exhibit registration behavior that differs from that of normal users, such as buying and registering lots of domains at once to take advantage of bulk discounts, so that they can quickly and cheaply adapt when their sites are noticed and blacklisted. Additionally, criminals will often register multiple sites using slight variations on names: changing words like "home" and "homes" or switching word orders in phrases.
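For illustration, registration-time signals like the two just described can be sketched with simple counting and string comparison. The function names, threshold, and domains below are invented for the example; the real PREDATOR system uses a much richer feature set over registration records.

```python
from difflib import SequenceMatcher

def bulk_registration_count(batch):
    """Size of the batch a domain was registered in; large batches
    (bulk discounts) are one hypothesized signal of abuse."""
    return len(batch)

def near_duplicates(domain, others, threshold=0.8):
    """Flag slight name variations, e.g. 'cheap-homes' vs 'cheap-home',
    using a string-similarity ratio (threshold chosen arbitrarily)."""
    name = domain.split(".")[0]
    return [d for d in others
            if d != domain
            and SequenceMatcher(None, name, d.split(".")[0]).ratio() >= threshold]

batch = ["cheap-homes.example", "cheap-home.example", "cheap-homez.example"]
variants = near_duplicates("cheap-homes.example", batch)   # finds both variants
```

Signals like these would then feed a classifier trained on domains that later appeared on blacklists, which is the supervised-learning framing the paper takes.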
By identifying such patterns, Feamster and his collaborators were able to start sifting through the more than 80,000 new domains registered every day to preemptively identify which ones were most likely to be used for harm.
Testing their results against known blacklisted websites, they found that PREDATOR detected 70 percent of malicious websites based solely on information known at the time those domains were first registered. The false positive rate of the PREDATOR system, or rate of legitimate sites that were incorrectly identified as malicious by the tool, was only 0.35 percent.
Being able to detect malicious sites at the moment of registration, before they're being used, can have multiple security benefits, Feamster said. Those sites can be blocked sooner, making it difficult to use them to cause as much harm—or, indeed, any harm at all if the operators are not permitted to purchase them. "PREDATOR can achieve early detection, often days or weeks before existing blacklists, which generally cannot detect domain abuse until an attack is already underway," the authors write in their paper. "The key advantage is to respond promptly for defense and limit the window during which miscreants might profitably use a domain."
Additionally, existing blocking tools, which rely on detecting malicious activity from websites and then blocking them, allow criminals to continue purchasing new websites. Cutting off the operators of malicious websites at the moment of registration prevents this perpetual cat-and-mouse dynamic. This more permanent form of protection against online threats is a rarity in the field of computer security, where adversaries often evade new lines of defense easily, the researchers said.
For the PREDATOR system to help everyday internet users, it will have to be used by existing domain blacklist services, like Spamhaus, that maintain lists of blocked websites, or by domain registrars, which sell new domain names.
"Part of what we envision is if a registrar is trying to make a decision about whether to register a domain name, then if PREDATOR suggests that domain name might be used for malicious ends, the registrar can at least wait and do more due diligence before it moves forward," Feamster said.
Although the registrars still must manually review domain registration attempts, PREDATOR offers them an effective tool to predict potential abuse. "Prior to work like this I don't think a registrar would have a very easy go-to method for even figuring out if the domains they registered would turn out to be malicious," Feamster said.
Team spots elusive intermediate compound in atmospheric chemistry

JILA researchers used their frequency comb spectroscopy technique (multicolored lightwaves between the mirrors) to follow each step of an important chemical reaction that occurs in the atmosphere. The technique identifies chemicals in real time based on the light they absorb inside a mirrored cavity. The reaction combines the hydroxyl molecule and carbon monoxide (both at lower left) to form the hydrocarboxyl intermediate (red, black and yellow molecule in the foreground). Eventually the intermediate breaks down into hydrogen and carbon dioxide. Credit: Jun Ye group and Steve Burrows/JILA
JILA physicists and colleagues have identified a long-missing piece in the puzzle of exactly how fossil fuel combustion contributes to air pollution and a warming climate. Performing chemistry experiments in a new way, they observed a key molecule that appears briefly during a common chemical reaction in the atmosphere.
The reaction combines the hydroxyl molecule (OH, produced by reaction of oxygen and water) and carbon monoxide (CO, a byproduct of incomplete combustion) to form hydrogen (H) and carbon dioxide (CO2, a "greenhouse gas" contributing to global warming), as well as heat.
Researchers have been studying this reaction for decades and observed that its speed has an abnormal pressure and temperature dependence, suggesting there is a short-lived intermediate, the hydrocarboxyl molecule, or HOCO. But until now, HOCO had not been observed directly under conditions like those in nature, so researchers were unable to calculate accurately the pressures at which the reaction either pauses at the HOCO stage or proceeds rapidly to create the final products.
As described in the October 28, 2016, issue of Science, JILA's direct detection of the intermediate compound and measurements of its rise and fall under different pressures and different mixtures of atmospheric gases revealed the reaction mechanism, quantified product yields, and tested theoretical models that were incomplete despite rigorous efforts. JILA is a partnership of the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder.
"We followed the reaction step by step in time, including seeing the short-lived, and thus elusive, intermediates that play decisive roles in the final products," JILA/NIST Fellow Jun Ye said. "By finally understanding the reaction in full, we can model the atmospheric chemical processes much more accurately, including how air pollution forms."
JILA researchers are performing chemistry in a new way, fully controlling reactions by artificial means instead of relying on nature. They used a laser to induce the reaction inside a container called a laboratory flow cell, through which samples of the molecules participating in the reaction and other gases passed. This process mimicked nature by using gases found in the atmosphere and no catalysts. To avoid any confusion in the results due to the presence of water (which contains hydrogen), the researchers used deuterium, or heavy hydrogen, in the hydroxyl molecule, OD, to start the reaction. Thus, they looked for the DOCO intermediate instead of HOCO. During the experiment, concentrations of CO and nitrogen gases were varied across a range of pressures.
Using JILA's patented frequency comb spectroscopy technique, which identifies chemicals and measures their concentrations in real time based on the colors of light they absorb, researchers measured the initial OD and the resulting DOCO over various pressures and atmospheric gas concentrations over time, looking for conditions under which DOCO stabilized or decomposed to form CO2.
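The underlying principle of extracting a concentration from absorbed light is conventional absorption spectroscopy (the Beer-Lambert law), with the mirrored cavity multiplying the effective path length enormously. The numbers below are purely illustrative, not values from the paper:

```python
import math

def concentration(I0, I, epsilon, path_length_cm):
    """Beer-Lambert law: absorbance A = log10(I0 / I), and c = A / (epsilon * L).
    epsilon is the absorption cross-section (cm^2/molecule), L the path length."""
    absorbance = math.log10(I0 / I)
    return absorbance / (epsilon * path_length_cm)

# Illustrative values: a 10% dip in transmitted light, an assumed cross-section,
# and an effective path length boosted to ~1 km by the mirrored cavity.
c = concentration(I0=1.0, I=0.9, epsilon=2.0e-19, path_length_cm=1.0e5)
# c comes out in molecules per cm^3 for these units
```

The comb's advantage is doing this measurement at many colors simultaneously, so several species (here OD, DOCO, CO2) can be tracked in the same cell in real time.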
The JILA team identified an important factor to be energy transfer due to collisions between the intermediate molecule and nearby CO and nitrogen molecules. These collisions can either stabilize the intermediate DOCO or deactivate it and encourage the reaction to proceed to its final products.
JILA's frequency comb spectroscopy technique analyzes chemicals inside a glass container, in which comb light bounces back and forth between two mirrors. The repeated, continuous measurements make the technique especially sensitive and accurate in identifying "fingerprints" of specific molecules. This latest experiment used new "supermirrors," which have crystalline coatings that reduce light losses and improved detection sensitivity 10-fold.
JILA's results, notably the effects of molecular collisions, need to be included in future atmospheric and combustion model predictions, according to the paper. For example, even at low pressures, the reaction produces a DOCO yield of nearly 50 percent, meaning about half the reactions pause at the intermediate stage.
This observation affects calculations that go beyond Earth: Other researchers have shown that HOCO can contribute 25-70 percent of the total CO2 concentration in the cold Martian atmosphere.
In the future, JILA researchers plan to extend the experimental approach to study other chemical products and processes. One topic of interest is reactions involving water and CO2, to aid understanding of how atmospheric CO2 interacts with and acidifies the oceans. Also of interest are studies of engine combustion, which affects fuel economy. A car engine combines air (oxygen and nitrogen) and fuel (hydrocarbons) to produce CO2 and water. Incomplete combustion creates CO.
Schiaparelli and its descent hardware on Mars, Detailed images

NASA’s Mars Reconnaissance Orbiter High Resolution Imaging Science Experiment (HiRISE) imaged the ExoMars Schiaparelli module’s landing site on 25 October 2016, following the module’s arrival at Mars on 19 October.
A high-resolution image taken by a NASA Mars orbiter this week reveals further details of the area where the ExoMars Schiaparelli module ended up following its descent on 19 October.
The latest image was taken on 25 October by the high-resolution camera on NASA's Mars Reconnaissance Orbiter and provides close-ups of new markings on the planet's surface first found by the spacecraft's 'context camera' last week.
Both cameras had already been scheduled to observe the centre of the landing ellipse after the coordinates had been updated following the separation of Schiaparelli from ESA's Trace Gas Orbiter on 16 October. The separation manoeuvre, hypersonic atmospheric entry and parachute phases of Schiaparelli's descent went according to plan, and the module ended up within the main camera's footprint, despite problems in the final phase of the descent.
The new images provide a more detailed look at the major components of the Schiaparelli hardware used in the descent sequence.
The main feature of the context images was a dark fuzzy patch of roughly 15 x 40 m, associated with the impact of Schiaparelli itself. The high-resolution images show a central dark spot, 2.4 m across, consistent with the crater made by a 300 kg object impacting at a few hundred km/h.
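For scale, a back-of-the-envelope kinetic-energy estimate, assuming the ~300 kg module struck at roughly 300 km/h (the figure quoted later in the article):

```python
# Rough impact energy for a 300 kg object at about 300 km/h.
mass_kg = 300.0
speed_m_s = 300.0 / 3.6              # 300 km/h is roughly 83 m/s
energy_J = 0.5 * mass_kg * speed_m_s ** 2   # about 1 megajoule
```

An energy release of this order, plus any contribution from the unspent propellant, is consistent with a small, shallow crater and a dark disturbed patch rather than a large excavated feature.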
The crater is predicted to be about 50 cm deep and more detail may be visible in future images.
The asymmetric surrounding dark markings are more difficult to interpret. In the case of a meteoroid hitting the surface at 40 000–80 000 km/h, asymmetric debris surrounding a crater would typically point to a low incoming angle, with debris thrown out in the direction of travel.
But Schiaparelli was travelling considerably slower and, according to the normal timeline, should have been descending almost vertically after slowing down during its entry into the atmosphere from the west.
It is possible the hydrazine propellant tanks in the module exploded preferentially in one direction upon impact, throwing debris from the planet's surface in the direction of the blast, but more analysis is needed to explore this idea further.
The landing site of the Schiaparelli module within the predicted landing ellipse in a mosaic of images from the Context Camera (CTX) on NASA's Mars Reconnaissance Orbiter and the Thermal Emission Imaging System (THEMIS) on NASA's 2001 Mars Odyssey orbiter.
An additional long dark arc is seen to the upper right of the dark patch but is currently unexplained. It may also be linked to the impact and possible explosion.
Finally, there are a few white dots in the image close to the impact site, too small to be properly resolved in this image. These may or may not be related to the impact – they could just be 'noise'. Further imaging may help identify their origin.
Some 1.4 km south of Schiaparelli, a white feature seen in last week's context image is now revealed in more detail. It is confirmed to be the 12 m-diameter parachute used during the second stage of Schiaparelli's descent, after the initial heatshield entry into the atmosphere. Still attached to it, as expected, is the rear heatshield, now clearly seen.
The parachute and rear heatshield were ejected from Schiaparelli earlier than anticipated. Schiaparelli is thought to have fired its thrusters for only a few seconds before falling to the ground from an altitude of 2–4 km and reaching the surface at more than 300 km/h.
In addition to the Schiaparelli impact site and the parachute, a third feature has been confirmed as the front heatshield, which was ejected about four minutes into the six-minute descent, as planned.
The ExoMars and MRO teams identified a dark spot in last week's image about 1.4 km east of the impact site, and this seemed to be a plausible location for the front heatshield, considering the timing and direction of travel following the module's entry.
The mottled bright and dark appearance of this feature is interpreted as reflections from the multilayered thermal insulation that covers the inside of the front heatshield. Further imaging from different angles should be able to confirm this interpretation.
The dark features around the front heatshield are likely from surface dust disturbed during impact.
Additional imaging by MRO is planned in the coming weeks. Based on the current data and observations made after 19 October, this will include images taken under different viewing and lighting conditions, which in turn will use shadows to help determine the local heights of the features and therefore a more conclusive analysis of what the features are.
A pair of before-and-after images taken by the Context Camera (CTX) on NASA's Mars Reconnaissance Orbiter on 29 May 2016 and 20 October 2016 shows two new features that appeared following the arrival of the Schiaparelli test lander module on 19 October 2016.
A full investigation is now underway involving ESA and industry to identify the cause of the problems encountered by Schiaparelli in its final phase. The investigation started as soon as detailed telemetry transmitted by Schiaparelli during its descent had been relayed back to Earth by the Trace Gas Orbiter.
The full set of telemetry has to be processed, correlated and analysed in detail to provide a conclusive picture of Schiaparelli's descent and the causes of the anomaly.
Until this full analysis has been completed, there is a danger of reaching overly simple or even wrong conclusions. For example, the team were initially surprised to see a longer-than-expected 'gap' of two minutes in the telemetry during the peak heating of the module as it entered the atmosphere: this was expected to last up to only one minute. However, further processing has since allowed the team to retrieve half of the 'missing' data, ruling out any problems with this part of the sequence.
The latter stages of the descent sequence, from the jettisoning of the rear shield and parachute, to the activation and early shut-off of the thrusters, are still being explored in detail. A report of the findings of the investigative team is expected no later than mid-November 2016.
The same telemetry is also an extremely valuable output of the Schiaparelli entry, descent and landing demonstration, which was the main purpose of this element of the ExoMars 2016 mission. Measurements were made on both the front and rear shields during entry, the first time that such data have been acquired from the back heatshield of a vehicle entering the martian atmosphere.
The team can also point to successes in the targeting of the module at its separation from the orbiter, the hypersonic atmospheric entry phase, and the parachute deployment at supersonic speeds, and the subsequent slowing of the module.
These and other data will be invaluable input into future lander missions, including the joint European–Russian ExoMars 2020 rover and surface platform.
Finally, the orbiter is working well and being prepared to make its first set of measurements on 20 November to calibrate its science instruments.
New look at vitamin D challenges the current view of its benefits

A simple Google search for "what does vitamin D do?" highlights the widely used dietary supplement's role in regulating calcium absorption and promoting bone growth. But now it appears that vitamin D has much wider effects—at least in the nematode worm, C. elegans. Research at the Buck Institute shows that vitamin D works through genes known to influence longevity and impacts processes associated with many human age-related diseases. The study, published in Cell Reports, may explain why vitamin D deficiency has been linked to breast, colon and prostate cancer, as well as obesity, heart disease and depression.
"Vitamin D engaged with known longevity genes - it extended median lifespan by 33 percent and slowed the aging-related misfolding of hundreds of proteins in the worm," said Gordon Lithgow, PhD, senior author and Buck Institute professor. "Our findings provide a real connection between aging and disease and give clinicians and other researchers an opportunity to look at D in a much larger context."

Study provides links to human disease
The study shines a light on protein homeostasis, the ability of proteins to maintain their shape and function over time. It's a balancing act that goes haywire with normal aging—often resulting in the accumulation of toxic insoluble protein aggregates implicated in a number of conditions, including Alzheimer's, Parkinson's and Huntington's diseases, as well as type 2 diabetes and some forms of heart disease. "Vitamin D3, which is converted into the active form of vitamin D, suppressed protein insolubility in the worm and prevented the toxicity caused by human beta-amyloid which is associated with Alzheimer's disease," said Lithgow. "Given that aging processes are thought to be similar between the worm and mammals, including humans, it makes sense that the action of vitamin D would be conserved across species as well."
Postdoctoral fellow Karla Mark, PhD, led the team doing the experiments. She says the pathways and the molecular network targeted in the work (IRE-1/XBP-1/SKN-1) are involved in stress response and cellular detoxification. "Vitamin D3 reduced the age-dependent formation of insoluble proteins across a wide range of predicted functions and cellular compartments, supporting our hypothesis that decreasing protein insolubility can prolong lifespan."

Clinicians weigh in
"We've been looking for a disease to associate with vitamin D other than rickets for many years and we haven't come up with any strong evidence," said Clifford Rosen, MD, the director of the Center for Clinical and Translational Research and a senior scientist at the Maine Medical Center Research Institute studying osteoporosis and obesity. "But if it's a more global marker of health or longevity as this paper suggests, that's a paradigm shift. Now we're talking about something very different and exciting."
"This work is really appealing and challenging to the field," said Janice M. Schwartz, MD, a professor of medicine and of bioengineering and therapeutic sciences at the University of California, San Francisco, and a visiting research scientist at the Jewish Home in San Francisco. She has studied vitamin D supplementation in the elderly. "We focus on vitamin D and the bones because that's where we can measure its impact. I believe that vitamin D is as crucial for total body function and the muscles as it is for bones. Vitamin D influences hundreds of genes - most cells have vitamin D receptors, so it must be very important."
Current recommendations and controversies
How much vitamin D do humans need, and how do they best get it? The issue is contentious, with rampant disagreement among experts. The Institute of Medicine's (IOM) latest recommendations (from 2011) pertain only to vitamin D's role in bone health and fracture reduction; the experts concluded that evidence for other proposed benefits was inconsistent, inconclusive, or insufficient to set recommended intakes. The IOM recommends a daily intake of 600 International Units (IU) for people between 1 and 70 years old, and 800 IU daily for those older. The upper limit, the level above which health risks are thought to increase, was set at 4,000 IU per day for adults. Excess vitamin D can raise blood levels of calcium, which leads to vascular and tissue calcification, with subsequent damage to the heart, blood vessels and kidneys.
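The age brackets above reduce to a tiny lookup. In this sketch the IU figures come straight from the IOM numbers quoted in the article; the function name and structure are just illustrative:

```python
def iom_vitamin_d_recommendation(age_years):
    """Recommended daily vitamin D intake (IU) per the 2011 IOM report,
    as summarized above: 600 IU for ages 1-70, 800 IU for older adults."""
    if age_years < 1:
        raise ValueError("infant recommendations are not covered here")
    return 600 if age_years <= 70 else 800

# Upper limit for adults, above which health risks are thought to increase
IOM_ADULT_UPPER_LIMIT_IU = 4000

print(iom_vitamin_d_recommendation(45))  # 600
print(iom_vitamin_d_recommendation(75))  # 800
```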
Many vitamin D researchers and some health organizations, including the Endocrine Society and the International Osteoporosis Foundation, disagreed with the IOM's recommendations for daily intake, instead recommending supplementation of 800 to 2,000 IU per day, at least for people known or likely to have low blood levels. The disagreement highlights another difficulty: measuring blood levels of vitamin D is problematic given a lack of standardization and reliability among labs. Blood levels of the precursor to the active vitamin D are measured in nanograms per milliliter (ng/mL) in the U.S. Many researchers and expert groups have argued that a blood level of at least 30 ng/mL is optimal; some call for optimum levels to be set at 40 or 50 ng/mL. But the IOM report concluded that blood levels starting at 20 ng/mL would be adequate for bone health in the vast majority of people.

Universal supplementation?
Based on problems with laboratory standards and the lack of an agreed-upon meaning of results, both Rosen and Schwartz agree that the costs of universal testing for vitamin D levels would outweigh the benefits. Instead, both recommend universal supplementation of 800 to 1,000 IU of vitamin D daily for adults. "It's safe, there's no reason for anyone not to take it," said Schwartz, who has written about vitamin D for the popular press.
Schwartz says older adults may be particularly prone to vitamin D deficiency because the skin's ability to manufacture vitamin D from sun or UV light exposure declines with age, adding that the elderly are less likely to spend time in the sun, are more likely to have diets lacking in sources of vitamin D, and may suffer from gastrointestinal disorders that make it harder to absorb vitamin D. Others prone to vitamin D deficiency include those with darker skin and those who live in higher latitudes where the sun's angle is low in the sky.

Bringing it back to aging
Given adequate funding, senior author Lithgow plans to test vitamin D in mice to determine how it affects aging, disease and function, and he hopes that clinical trials in humans will pursue the same measurements. "Maybe if you're deficient in vitamin D, you're aging faster. Maybe that's why you're more susceptible to cancer or Alzheimer's," he said. "The fact that we had responses to vitamin D in an organism that has no bone suggests that there are other key roles, not related to bone, that it plays in living organisms."
Lithgow gave a shout out to the tiny, short-lived nematode worms which populated this study. "Working in these simple animals allows us to identify novel molecular pathways that influence how animals age," he said. "This gives us a solid starting point to ask questions and seek definitive answers for how vitamin D could impact human health. We hope that this work will spur researchers and clinicians to look at vitamin D in a larger, whole-person context that includes the aging process."
How to turn your living room into a wireless charging station

This graphic illustrates how a flat-screen Fresnel zone wireless power transfer system could charge smart devices in your living room. Credit: Duke University
A flat-screen panel that resembles a TV on your living room wall could one day remotely charge any device within its line of sight, according to new research.
In a paper published Oct. 23, 2016, on the arXiv pre-print repository, engineers at the University of Washington, Duke University and Intellectual Ventures' Invention Science Fund (ISF) show that the technology already exists to build such a system—it's only a matter of taking the time to design it.
"There is an enormous demand for alternatives to today's clunky charging pads and cumbersome cables, which restrict the mobility of a smart phone or a tablet. Our proposed approach takes advantage of widely used LCD technology to seamlessly deliver wireless power to all kinds of smart devices," said co-author Matt Reynolds, UW associate professor of electrical engineering and of computer science and engineering.
"The ability to safely direct focused beams of microwave energy to charge specific devices, while avoiding unwanted exposure to people, pets and other objects, is a game-changer for wireless power. And we're looking into alternatives to liquid crystals that could allow energy transfer at much higher power levels over greater distances," Reynolds said.
Some wireless charging systems already exist to help power speakers, cell phones and tablets. These technologies rely on platforms that require their own wires, however, and the devices must be placed in the immediate vicinity of the charging station.
This is because existing chargers use the resonant magnetic near-field to transmit energy. The magnetic field produced by current flowing in a coil of wire can be quite large close to the coil and can be used to induce a similar current in a neighboring coil. Magnetic fields also have the added bonus of being considered safe for human exposure, making them a convenient choice for wireless power transfer.
The magnetic near-field approach is not an option for power transfer over larger distances. This is because the coupling between source and receiver—and thus the power transfer efficiency—drops rapidly with distance. The wireless power transfer system proposed in the new paper operates at much higher microwave frequencies, where the power transfer distance can extend well beyond the confines of a room.
To maintain reasonable levels of power transfer efficiency, the key to the system is to operate in the Fresnel zone—a region of an electromagnetic field that can be focused, allowing power density to reach levels sufficient to charge many devices with high efficiency.
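To get a feel for the scales involved: in standard antenna theory, the radiating near-field (Fresnel) region of an aperture antenna is commonly approximated as running from about 0.62*sqrt(D^3/wavelength) out to 2*D^2/wavelength. The sketch below uses assumed values (a 1 m panel at 10 GHz; the paper's actual design parameters may differ) to show that a whole room fits inside the focusable zone:

```python
C = 3.0e8  # speed of light, m/s

def fresnel_region(aperture_m, freq_hz):
    """Return (inner, outer) bounds in meters of the radiating near-field:
    roughly 0.62*sqrt(D**3/wavelength) out to 2*D**2/wavelength."""
    wavelength = C / freq_hz
    inner = 0.62 * (aperture_m ** 3 / wavelength) ** 0.5
    outer = 2 * aperture_m ** 2 / wavelength
    return inner, outer

# Assumed values: a 1 m flat panel radiating at 10 GHz (3 cm wavelength)
inner, outer = fresnel_region(1.0, 10e9)
print(round(inner, 1), round(outer, 1))  # ~3.6 m to ~66.7 m: a room fits inside
```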
"As long as you're within a certain distance, you can build antennas that gather and focus it, much like a lens can focus a beam of light," said lead author David Smith, professor and chair of the Department of Electrical and Computer Engineering at Duke. "Our proposed system would be able to automatically and continuously charge any device anywhere within a room, making dead batteries a thing of the past."
The problem to date has been that the antennas in a wireless power transfer system would need to be able to focus on any device within a room. This could be done, for example, with a movable antenna dish, but that would take up too much space, and nobody wants a big, moving satellite dish on their mantel.
Another solution is a phased array—an antenna with a lot of tiny antennas grouped together, each of which can be independently adjusted and tuned. That technology also exists, but would cost too much and consume too much energy for household use.
The solution proposed in the new paper instead relies on metamaterials—a synthetic material composed of many individual, engineered cells that together produce properties not found in nature.
"Imagine you have an electromagnetic wave front moving through a flat surface made of thousands of tiny electrical cells," said Smith. "If you can tune each cell to manipulate the wave in a specific way, you can dictate exactly what the field looks like when it comes out on the other side."
Smith and his laboratory used this same principle to create the world's first cloaking device that bends electromagnetic waves around an object held within. Several years ago, Nathan Kundtz, a former graduate student and postdoc from Smith's group, led an ISF team that developed the metamaterials technology for satellite communications. The team founded Kymeta, which builds powerful, flat antennas that could soon replace the gigantic revolving satellite dishes often seen atop large boats. Three other companies, Evolv, Echodyne and Pivotal have also been founded using different versions of the metamaterials for imaging, radar and wireless communications, respectively.
In the paper, the research team works through calculations to illustrate what a metamaterials-based wireless power system would be capable of. According to the results, a flat metamaterial device no bigger than a typical flat-screen television could focus beams of microwave energy down to a spot about the size of a cell phone within a distance of up to ten meters. It should also be capable of powering more than one device at the same time.
There are, of course, challenges to engineering such a system. A powerful, low-cost, and highly efficient electromagnetic energy source would need to be developed. The system would have to automatically shut off if a person or a pet were to walk into the focused electromagnetic beam. And the software and controls for the metamaterial lens would have to be optimized to focus powerful beams while suppressing any unwanted secondary "ghost" beams.
But the technology is there, the researchers say.
"All of these issues are possible to overcome—they aren't roadblocks," said Smith. "I think building a system like this, which could be embedded in the ceiling and wirelessly charge everything in a room, is a very feasible scheme."
New speech recognition system on par with human capabilities? Microsoft claims it's true

Microsoft researchers from the Speech & Dialog research group include, from back left, Wayne Xiong, Geoffrey Zweig, Xuedong Huang, Dong Yu, Frank Seide, Mike Seltzer, Jasha Droppo and Andreas Stolcke. (Photo by Dan DeLong)
Engineers at Microsoft have written a paper describing their new speech recognition system, claiming that the results indicate it is as good at recognizing conversational speech as humans are. The neural network-based system, the team reports, has achieved a historic milestone: a word error rate of 5.9 percent, making it the first ever below 6 percent and, more importantly, demonstrating that its performance is equal to human performance, which they describe as "human parity." They have uploaded their paper to Cornell's arXiv preprint server.
The system was trained using recordings made and released by the U.S. National Institute of Standards and Technology; the recordings were created for research purposes and included both single-topic and open-topic conversations between two people talking on the telephone. The researchers at Microsoft found that their system had a word error rate of 5.9 percent on the single-topic conversations and 11.1 percent on those that were open-ended.
As a side note, the researchers report that they also tested the skills of humans by sending the same NIST phone conversations to a third-party transcription service, which allowed human error rates to be measured. They were surprised to find the error rate was higher than expected: 5.9 percent for the single-topic conversations and 11.3 percent for open-ended conversations. These findings are in sharp contrast to the general consensus in the scientific community that humans on average have a 4 percent error rate.
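For context, the word error rate quoted here is conventionally the word-level edit distance (substitutions, insertions, deletions) between a reference transcript and the system's output, divided by the length of the reference. A minimal sketch of the standard metric, not Microsoft's implementation:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```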
The team reports that they believe they can improve their system even more by overcoming obstacles that still confuse their system—namely backchannel communications. These are noises people make during conversation that are not words but still have meaning, such as "uh," "er," and "uh-huh." The neural network still has a hard time figuring out what to do with such noises. We humans use them to allow for pauses, to signify understanding or to communicate uncertainty—or to cue another speaker, such as to signify they should continue with whatever they were talking about.
The researchers also report that the new technology will be used to improve Microsoft's commercial speech recognition system, known as Cortana, and that work will continue both in improving error rates and in getting their system to better understand what the transcribed words actually mean.

Methods to detect dishonesty online

A new study by Kim-Kwang Raymond Choo, associate professor of information systems and cybersecurity and Cloud Technology Endowed Professor at The University of Texas at San Antonio (UTSA), describes a method for detecting people dishonestly posting online comments, reviews or tweets across multiple accounts, a practice known as "astroturfing."
The study describes a statistical method that analyzes multiple writing samples. Choo, a member of the UTSA College of Business, and his collaborators found that it is challenging for authors to completely conceal their writing style in their text. Based on word choice, punctuation and context, the method is able to detect whether one person or multiple people are responsible for the samples.
Choo and his co-authors (two former students of his, Jian Peng and Sam Detchon, and Helen Ashman, associate professor of information technology and mathematical sciences at the University of South Australia) used writing samples from the most prolific online commenters on various news web sites, and discovered that many people espousing their opinions online were actually all linked to a few singular writers with multiple accounts.
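The paper's exact model isn't reproduced here, but the idea of comparing writing samples by word choice and punctuation can be illustrated with a simple stylometric baseline: character n-gram frequency profiles compared by cosine similarity. All sample texts below are invented for illustration:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Frequency profile of character trigrams, a common stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

# Invented sample reviews: the first two mimic one writer's style
s1 = "Honestly, I really think this product is great - the best!"
s2 = "I think this is really the best product, honestly - great buy!"
s3 = "Terrible. Would not recommend. Broke after two days."
print(cosine_similarity(char_ngrams(s1), char_ngrams(s2)) >
      cosine_similarity(char_ngrams(s1), char_ngrams(s3)))  # True
```

A real system would combine many such features (punctuation habits, function-word rates, sentence length) rather than trigrams alone.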
Credit: University of Texas at San Antonio  
"Astroturfing is legal, but it's questionable ethically," Choo said. "As long as social media has been popular, this has existed."
The practice has been used by businesses to manipulate social media users or online shoppers, by having one paid associate post false reviews on web sites about products for sale. It's also used on social media wherein astroturfers create several false accounts to espouse opinions, creating the illusion of a consensus when actually one person is pretending to be many.
"It can be used for any number of reasons," Choo said. "Businesses can use this to encourage support for their products or services, or to sabotage other competing companies by spreading negative opinions through false identities."
Candidates for elected office have also been accused of astroturfing to create the illusion of public support for a cause or a campaign. For example, President George W. Bush, the Tea Party movement, former Secretary of State Hillary Clinton and current Republican presidential candidate Donald Trump have all been accused of astroturfing to claim widespread enthusiasm for their platforms.
Now that Choo has the capability to detect one person pretending to be many online, he is considering further applications for his top-tier research. Stressing that astroturfing, while frowned upon, is not illegal, he's now looking into whether the algorithm can be used to prevent plagiarism and contract cheating.
"In addition to raising public awareness of the problem, we hope to develop tools to detect astroturfers so that users can make informed choices and resist online social manipulation and propaganda," Choo said.
Helping guide urban planning through combining cellphone data with perceptions of public spaces

Researchers used sample images, like the ones on the top row, to identify several visual features that are highly correlated with judgments that a particular area is safe or unsafe. The left side shows a low level of safety while the right shows a high level. Highlighted areas on the middle row show “unsafe” areas while the bottom row shows “safe” areas in the image. Credit: Massachusetts Institute of Technology
For years, researchers at the MIT Media Lab have been developing a database of images captured at regular distances around several major cities. The images are scored according to different visual characteristics: how safe the depicted areas look, how affluent, how lively, and the like.
In a paper presented last week at the Association for Computing Machinery's Multimedia Conference, the researchers, together with colleagues at the University of Trento and the Bruno Kessler Foundation, both in Trento, Italy, compared these safety scores for neighborhoods in Rome and Milan to the frequency with which people visited these places, according to cellphone data.
After adjusting for factors such as population density and distance from city centers, the correlation between perceived safety and visitation rates was strong, and it was particularly strong for women and people over 50. The correlation was negative for people under 30: males in their 20s, for instance, were actually more likely to visit neighborhoods generally perceived to be unsafe than neighborhoods perceived to be safe.
In the same paper, the researchers also identified several visual features that are highly correlated with judgments that a particular area is safe or unsafe. Consequently, the work could help guide city planners in decisions about how to revitalize declining neighborhoods.
"There's a big difference between a theory and a fact," says Luis Valenzuela, an urban planner and professor of design at Universidad Adolfo Ibáñez in Santiago, Chile, who was not involved in the research. "What this paper does is put the facts on the table, and that's a big step. It also opens up the ways in which we can build toward establishing the facts in different contexts. It will bring up a lot of other research, in which, I don't have any doubt, this will be put up as a seminal step."
Valenzuela is particularly struck by the researchers' demographically specific results. "That, I would say, is quite a big breakthrough in urban-planning research," he says. "Urban planning—and there's a lot of literature about it—has been largely designed from a male perspective. ... This research gives scientific evidence that women have a specific perception of the appearance of safety in the city."
"Are the places that look safer places that people flock into?" asks César Hidalgo, the Asahi Broadcast Corporation Career Development Associate Professor of Media Arts and Sciences and one of the senior authors on the new paper. "That should connect with actual crime because of two theories that we mention in the introduction of the paper, which are the defensible-space theory of Oscar Newman and Jane Jacobs' eyes-on-the-street theory." Hidalgo is also the director of the Macro Connections group at MIT.

Jacobs' theory, Hidalgo says, is that neighborhoods in which residents can continuously keep track of street activity tend to be safer; a corollary is that buildings with street-facing windows tend to create a sense of safety, since they imply the possibility of surveillance. Newman's theory is an elaboration on Jacobs', suggesting that architectural features that demarcate public and private spaces, such as flights of stairs leading up to apartment entryways or archways separating plazas from the surrounding streets, foster the sense that crossing a threshold will bring on closer scrutiny.
The researchers caution that they are not trained as urban planners, but they do feel that their analysis identifies some visual features of urban environments that contribute to perceptions of safety or unsafety. For one thing, they think the data support Jacobs' theory: Buildings with street-facing windows appear to increase people's sense of safety much more than buildings with few or no street-facing windows. And in general, upkeep seems to matter more than distinctive architectural features. For instance, everything else being equal, green spaces increase people's sense of safety, but poorly maintained green spaces lower it.
Joining Hidalgo on the paper are Nikhil Naik, a PhD student in media arts and sciences at MIT; Marco De Nadai, a PhD student at the University of Trento; Bruno Lepri, who heads the Mobile and Social Computing Lab at the Kessler Foundation; and five of their colleagues in Trento. Both De Nadai and Lepri are currently visiting scholars at MIT.
Hidalgo's group launched its project to quantify the emotional effects of urban images in 2011, with a website that presents volunteers with pairs of images and asks them to select the one that ranks higher according to some criterion, such as safety or liveliness. On the basis of these comparisons, the researchers' system assigns each image a score on each criterion.
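A common way to turn such pairwise judgments into per-image scores is an Elo-style rating update; the sketch below is an illustrative stand-in, not the ranking model the MIT group actually used, and the image names are invented:

```python
import random

def elo_scores(comparisons, k=32, rounds=20, seed=0):
    """Assign each image a score from (winner, loser) judgments,
    replaying the comparisons in random order for several rounds."""
    random.seed(seed)
    scores = {}
    for _ in range(rounds):
        shuffled = comparisons[:]
        random.shuffle(shuffled)
        for winner, loser in shuffled:
            rw = scores.setdefault(winner, 1500.0)
            rl = scores.setdefault(loser, 1500.0)
            # Expected probability that the winner would win this comparison
            expected = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
            scores[winner] = rw + k * (1.0 - expected)
            scores[loser] = rl - k * (1.0 - expected)
    return scores

# img_a was judged safer than both others; img_b safer than img_c
votes = [("img_a", "img_b"), ("img_a", "img_c"), ("img_b", "img_c")]
s = elo_scores(votes)
print(sorted(s, key=s.get, reverse=True))  # img_a ranked safest
```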
So far, volunteers have performed more than 1.4 million comparisons, but that's still not nearly enough to provide scores for all the images in the researchers' database. For instance, the images in the data sets for Rome and Milan were captured every 100 meters or so. And the database includes images from 53 cities.
So three years ago, the researchers began using the scores generated by human comparisons to train a machine-learning system that would assign scores to the remaining images. "That's ultimately how you're able to take this type of research to scale," Hidalgo says. "You can never scale by crowdsourcing, simply because you'd have to have all of the Internet clicking on images for you."
The cellphone data, which were used to determine how frequently people visited various neighborhoods, were provided by Telecom Italia Mobile and identified only the cell towers to which users connected. The researchers mapped the towers' broadcast ranges onto the geographic divisions used in census data, and compared the number of people who made calls from each region with that region's aggregate safety scores. They adjusted for population density, employee density, distance from the city center, and a standard poverty index.
To determine which features of visual scenes correlated with perceptions of safety, the researchers designed an algorithm that selectively blocked out apparently continuous sections of images, sections that appear to have clear boundaries. The algorithm then recorded the changes to the scores assigned to the images by the machine-learning system.
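This resembles occlusion-based saliency analysis. A simplified sliding-patch variant (the paper blocks out segmented continuous regions rather than a fixed grid) might look like the following, with a toy scoring function standing in for the trained model:

```python
def occlusion_saliency(image, score_fn, patch=2):
    """Slide a gray patch over the image and record how the model's
    score changes; big drops mark regions driving the score."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    saliency = [[0.0] * w for _ in range(h)]
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            blocked = [row[:] for row in image]
            for r in range(top, min(top + patch, h)):
                for c in range(left, min(left + patch, w)):
                    blocked[r][c] = 0.5  # neutral gray
            drop = base - score_fn(blocked)
            for r in range(top, min(top + patch, h)):
                for c in range(left, min(left + patch, w)):
                    saliency[r][c] = drop
    return saliency

# Toy "model": safety score = mean brightness of the top-left quadrant
def toy_score(img):
    return sum(img[r][c] for r in range(2) for c in range(2)) / 4.0

img = [[1.0] * 4 for _ in range(4)]
sal = occlusion_saliency(img, toy_score)
print(sal[0][0] > sal[3][3])  # True: only the top-left matters to this model
```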

Possible new law to accurately measure charged macromolecules

For biochemists, measuring the size and diffusion properties of large molecules such as proteins and DNA using dynamic light-scattering techniques and the Stokes-Einstein formula has been mostly straightforward for decades, except for one major snag: it doesn't work when these macromolecules carry an electric charge.
Now polymer theorist Murugappan Muthukumar at the University of Massachusetts Amherst has derived a solution to the 40-year dilemma, proposing a new theory that allows polymer chemists, engineers and biochemists for the first time to successfully apply the Stokes-Einstein law to situations that involve charged macromolecules. Details appear in the current early online edition of Proceedings of the National Academy of Sciences.
As Muthukumar explains, "The ability of molecules to diffuse becomes smaller as the molecule's size gets larger, but for charged molecules, it's not true, diffusion doesn't depend on size. This was very surprising to physicists and biochemists 40 years ago when they were trying to measure charged macromolecules using light scattering. They also found that molecules of the same charge were aggregating, or clumping when they should repel each other. It was very surprising and nobody understood why."
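The Stokes-Einstein relation in question, D = k_B·T / (6πηR), predicts that a neutral sphere's diffusion coefficient falls in direct proportion to its radius, which is exactly the size dependence the experiments found missing for charged macromolecules. A quick sketch, assuming water at room temperature and illustrative radii:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_diffusion(radius_m, temp_k=298.0, viscosity_pa_s=8.9e-4):
    """Stokes-Einstein diffusion coefficient D = kT / (6*pi*eta*R)
    for a neutral sphere; defaults approximate water at room temperature."""
    return K_B * temp_k / (6 * math.pi * viscosity_pa_s * radius_m)

# Doubling the radius halves D for neutral spheres
d_small = stokes_einstein_diffusion(5e-9)   # 5 nm sphere (illustrative)
d_large = stokes_einstein_diffusion(10e-9)  # 10 nm sphere (illustrative)
print(d_small / d_large)  # 2.0: half the radius, twice the diffusivity
```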
Further, experiments showed that when the repulsion between similarly-charged molecules is made weaker by adding salt to the solution, the clumps went away, he says. "People were mystified by not being able to measure the size of these molecules accurately, and by their unusual behavior."
After a long process of eliminating possible explanations, he now understands what is happening. "It turns out that these macromolecules are not alone; there are small ions all around them, neutralizing the charges of the macromolecules," Muthukumar says. "These small ions are more agile and control the behavior of the macromolecules."
His paper offers formulae and testable predictions of a new theory governing charged macromolecules: DNA, proteins and synthetic polyelectrolytes. Experimental polymer scientists are already testing the new ideas in current investigations.
Muthukumar says this solution took him ten years to work out. "I began by simply believing the experimental facts and accepting that there must be an explanation. I started by taking a walk and asking myself, how could this be?"
As the theorist approached experimentalists with his ideas for solving the conundrum over the years, each had an objection that Muthukumar had to overcome. Finally, he reached the ion solution and heard no protest. "They have to be there," he now says. "The whole system has to be electrically neutral, otherwise you'd have an instability, which does not happen. Now we know how much the small ions are contributing. Using my formula, the size of charged macromolecules can now be accurately determined using light scattering."
NTechLab focuses on AI facial recognition capabilities

How far have technology experts gone in achieving software for facial recognition? Moscow-based NTechLab, a group that focuses on artificial intelligence algorithms, has gone far. The company is made up of a team of experts in machine learning and deep learning.
They have been at work on a facial recognition tool that has attracted great interest: it is effective, but it also raises privacy concerns if abused.
Their algorithm can extract facial feature characteristics, a capability that has become a hot topic.
Luke Dormehl, a UK-based tech writer at Digital Trends, said in June that NTechLab may have stumbled upon one of the best facial recognition systems around.
NTechLab was founded last year by Artem Kuharenko.
They use techniques in artificial neural networks and machine learning to develop software products.
"A face recognition system already developed by our lab has proved to be among the most accurate ones throughout the world," they stated.
Dormehl noted their strong performance at a competition: "At last year's 'MegaFace' facial recognition competition in Washington, it managed to best a number of rivals, including Google's own FaceNet."
(Inverse said NTechLab at that event took fourth place with 73 percent accuracy in the competition, where MegaFace tasked its competitors with identifying faces from photos.)
Facial recognition in the bigger picture was the topic of an article in The Atlantic in June, which made the point that "machines still have limitations when it comes to facial recognition." Scientists are beginning to understand the constraints. "To begin to figure out how computers are struggling, researchers at the University of Washington created a massive database of faces—they call it MegaFace—and tested a variety of facial recognition algorithms as they scaled up in complexity."
Now, in October, Nathaniel Mott reports in Inverse that security cameras around Moscow might soon be connected to the NTechLab facial recognition tool. It would be used to scan crowds and try to identify each individual person within them.
Mott wrote that "Moscow has reportedly tapped a young startup called NTechLab to provide the facial recognition software used in this system."
Kelsey Atherton in Popular Science also looked at what they are doing. The article is headlined "Software that identifies any passing face is ready for market," and the subhead is "Russian police are a likely first customer."

Atherton wrote, "The program uses machine learning: training algorithms to recognize specific faces again and again by feeding images over and over until the algorithms get it right."
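Atherton's description is high-level; in practice, face recognition systems of this kind typically map each face image to a fixed-length feature vector (an "embedding") and decide whether two images show the same person by comparing the vectors. A minimal, generic sketch of that comparison step (the embeddings and threshold below are illustrative placeholders, not NTechLab's actual model):

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two face embeddings; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings produced by some trained network
alice_photo1 = [0.9, 0.1, 0.3]
alice_photo2 = [0.85, 0.15, 0.35]
bob_photo = [0.1, 0.9, 0.2]

THRESHOLD = 0.8  # illustrative decision threshold

print(cosine_similarity(alice_photo1, alice_photo2) > THRESHOLD)  # True: likely same person
print(cosine_similarity(alice_photo1, bob_photo) > THRESHOLD)     # False: different people
```

The hard part, and the part the repeated training addresses, is learning an embedding function that places photos of the same person close together; the comparison step itself is usually this simple.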

How is the universe expanding?

Five years ago, the Nobel Prize in Physics was awarded to three astronomers for their discovery, in the late 1990s, that the universe is expanding at an accelerating pace.

Their conclusions were based on analysis of Type Ia supernovae - the spectacular thermonuclear explosions of dying stars - picked up by the Hubble Space Telescope and large ground-based telescopes. This led to the widespread acceptance of the idea that the universe is dominated by a mysterious substance named 'dark energy' that drives the accelerating expansion.
Now, a team of scientists led by Professor Subir Sarkar of Oxford University's Department of Physics has cast doubt on this standard cosmological concept. Making use of a vastly increased data set - a catalogue of 740 Type Ia supernovae, more than ten times the original sample size - the researchers have found that the evidence for acceleration may be flimsier than previously thought, with the data being consistent with a constant rate of expansion.
The study is published in the Nature journal Scientific Reports.
Professor Sarkar, who also holds a position at the Niels Bohr Institute in Copenhagen, said: 'The discovery of the accelerating expansion of the universe won the Nobel Prize, the Gruber Cosmology Prize, and the Breakthrough Prize in Fundamental Physics. It led to the widespread acceptance of the idea that the universe is dominated by "dark energy" that behaves like a cosmological constant - this is now the "standard model" of cosmology.
'However, there now exists a much bigger database of supernovae on which to perform rigorous and detailed statistical analyses. We analysed the latest catalogue of 740 Type Ia supernovae - over ten times bigger than the original samples on which the discovery claim was based - and found that the evidence for accelerated expansion is, at most, what physicists call "3 sigma". This is far short of the "5 sigma" standard required to claim a discovery of fundamental significance.
'An analogous example in this context would be the recent suggestion for a new particle weighing 750 GeV based on data from the Large Hadron Collider at CERN. It initially had even higher significance - 3.9 and 3.4 sigma in December last year - and stimulated over 500 theoretical papers. However, it was announced in August that new data show that the significance has dropped to less than 1 sigma. It was just a statistical fluctuation, and there is no such particle.'
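The 'sigma' levels Sarkar refers to map directly onto the probability that a result is a mere statistical fluctuation, assuming Gaussian errors. A quick sketch of the standard two-sided conversion, using only the Python standard library:

```python
from math import erfc, sqrt

def p_value(sigma: float) -> float:
    """Two-sided probability of a fluctuation at least this many
    standard deviations from the mean, for a Gaussian distribution."""
    return erfc(sigma / sqrt(2))

print(f"3 sigma: p = {p_value(3):.2e}")  # ~2.70e-03, about 1 in 370
print(f"5 sigma: p = {p_value(5):.2e}")  # ~5.73e-07, about 1 in 1.7 million
```

This gap is why 3-sigma evidence is treated as tentative while 5 sigma is the conventional discovery threshold in particle physics and cosmology.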
There is other data available that appears to support the idea of an accelerating universe, such as information on the cosmic microwave background - the faint afterglow of the Big Bang - from the Planck satellite. However, Professor Sarkar said: 'All of these tests are indirect, carried out in the framework of an assumed model, and the cosmic microwave background is not directly affected by dark energy. Actually, there is a subtle effect, the late-time integrated Sachs-Wolfe effect, but this has not been convincingly detected.
'So it is quite possible that we are being misled and that the apparent manifestation of dark energy is a consequence of analysing the data in an oversimplified theoretical model - one that was in fact constructed in the 1930s, long before there was any real data. A more sophisticated theoretical framework accounting for the observation that the universe is not exactly homogeneous and that its matter content may not behave as an ideal gas - two key assumptions of standard cosmology - may well be able to account for all observations without requiring dark energy. Indeed, vacuum energy is something of which we have absolutely no understanding in fundamental theory.'
Professor Sarkar added: 'Naturally, a lot of work will be necessary to convince the physics community of this, but our work serves to demonstrate that a key pillar of the standard cosmological model is rather shaky. Hopefully this will motivate better analyses of cosmological data, as well as inspiring theorists to investigate more nuanced cosmological models. Significant progress will be made when the European Extremely Large Telescope makes observations with an ultrasensitive "laser comb" to directly measure, over a 10- to 15-year period, whether the expansion rate is indeed accelerating.'