(Tech Xplore)—Not all of us park our bodies in a chair in the morning
and cross our legs to do our work. In fact, think of the vast numbers
of workers doing physically demanding or simply repetitive tasks,
including bending and lifting.
Workers
on construction sites, in factories and in warehouses cope with aches
and pains brought on by their work. Hopefully, the future will provide
an easy way for such workers to suit up and avoid those aches and
pains.
There is a new kid on the block aiming to provide just such a solution,
and a number of tech watchers have put it in the news this month. A
California-based group aptly called suitX announced its MAX, which
stands for Modular Agile Exoskeleton. The company designs and makes
exoskeletons.
"MAX is designed to support workers during the repetitive tasks that most frequently cause injury," said a company release.
Will Knight in MIT Technology Review said that this is essentially "a trio of devices that use robotic technologies to enhance the abilities of able-bodied workers and prevent common workplace injuries."
Target users, for example, could include those who carry out ceiling
inspections, welding, installations and repairs. "It's not only lifting
75 pounds that can hurt your back; it is also lifting 20 pounds
repeatedly throughout the day that will lead to injury," said Dr.
Homayoon Kazerooni, founder and CEO of suitX. "The MAX solution is designed
for unstructured workplaces where no robot can work as efficiently as a
human worker. Our goal is to augment and support workers who perform
demanding and repetitive tasks in unstructured workplaces in order to
prevent and reduce injuries."
Seeker referred to the MAX system as an exoskeleton device that could potentially change the way millions of people work.
Seeker noted its advantages as a workplace exoskeleton: it is
lightweight, so the user can walk around unimpeded. "The exoskeleton
units kick in only when you need them, and they don't require any
external power source."
MAX is a product with three modules, which can be used independently or in
combination, depending on work needs. The three modules are backX,
shoulderX, and legX.
According to the company, "All modules intelligently engage when you need them, and don't impede you otherwise."
The backX module reduces forces and torques on the lower back.
The shoulderX reduces forces; it "enables the wearer to perform
chest-to-ceiling level tasks for longer periods of time." In a video the
company defines shoulderX as "an industrial arm exoskeleton that
augments its wearer by reducing gravity-induced forces at the shoulder complex."
The legX was designed to support the knee joint and quadriceps. It
incorporates microcomputers in each leg unit; these communicate with
each other to determine whether the person is walking, bending, or
taking the stairs. Seeker said they communicate via Bluetooth, monitoring spacing and position.
Kazerooni spoke to Seeker about his company and its mission.
"My job is easy. I sit in front of a computer. But these guys work all
day long, put their bodies through abuse. We can use bionics to help
them." He also said he and his team did not create this "because of
science fiction movies. We were responding to numbers from the
Department of Labor, which said that back, knee and shoulder injuries
are the most common form of injuries among workers."
Will Knight, meanwhile, has reflected on the bigger picture in exoskeleton
development. Can exoskeletons help prevent injury on the job and
prolong workers' careers? "New materials, novel mechanical designs, and
cheaper actuators and motors have enabled a new generation of cheaper,
more lightweight exoskeletons to emerge in recent years," he wrote. "For
instance, research groups at Harvard and SRI are developing systems
that are passive and use soft, lightweight materials."
Some companies, such as BMW, said Knight, have been experimenting
with exoskeletons. "The MAX is another (bionic) step toward an augmented
future of work."
Cannabinoids and memory
Few classes of drugs have
galvanized the pharmaceutical industry in recent times like the
cannabinoids. This class of molecules includes not only the natural
forms, but also a vast new treasury of powerful synthetic analogs with
up to several hundred times the potency, as measured by receptor activity
and binding affinity. With the FDA now fast-tracking all manner of
injectables, topicals, and sprays promising everything from relief of
nebulous cancer pain to anti-seizure neuroprotection, more than a few
skeptics have emerged.
What inquiring
minds really want to know, beyond the thorny issue of how well they
actually work, is how they work at all. If you want to understand
what something is doing in the cell, one useful approach is to ask what
it does to the cell's mitochondria.
With drug companies now drooling over the possibility of targeting
drugs and treatments directly to these organelles by attaching
mitochondrial localization sequences (MLS) or other handler molecules,
answers to this kind of question are now coming into focus.
But even with satisfactory explanations in hand, there would still be
one large hurdle standing in the way of cannabinoid medical bliss:
Namely, even if a patient can manage to avoid operating vehicles or
heavy machinery throughout the course of their treatment, how do they
cope with the endemic collateral memory loss these drugs invariably
cause?
A recent paper published in Nature neatly ties all these
subtleties together, and even suggests a possible way out of the brain
fog by toggling the sites of cannabinoid action between mitochondria and
other cellular compartments. By generating a panel of cannabinoid
receptor and second messenger molecules with and without the appropriate
MLS tags or accessory binding proteins, the authors were able to
directly link cannabinoid-controlled mitochondrial activity to memory
formation.
One confounder in this line of work is that these MLSs are fickle
beasts. The 22 or so leader amino acids that make up their 'code'
are not a direct address in any sense. While the consensus sequences
that localize protease action or sort nuclear, endoplasmic reticulum,
and plasma membrane proteins generally contain clearly recognizable
motifs, any regularities in MLSs have only proven visible to a
computer. That is not to say that MLSs are fictions—they clearly do
work—but their action becomes predictable only once their
three-dimensional structures are fully formed.
The authors availed themselves of two fairly sophisticated programs,
MitoProt and PSORT, to remove any guesswork in identifying a
potential MLS in CB1 cannabinoid receptors. CB1 receptors had previously
been associated by immunohistochemical methods with what we might call
the mitochondrial penumbra, but their presence there may have been purely
incidental. This in silico analysis confirmed the presence
of a putative MLS in CB1 and encouraged the authors to carry out further
manipulations of this pathway.
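For readers curious what such screening involves, the sketch below is a toy Python illustration of the kind of N-terminal features predictors like MitoProt weigh: arginine richness, scarcity of acidic residues, and hydrophobic content in the presumptive leader. The weights and the example sequence are invented for illustration; this is not MitoProt's actual model.

```python
# Toy MLS screen: score the first ~22 residues on features typical of
# mitochondrial targeting peptides. Weights and sequence are invented.

def mls_score(seq: str, leader_len: int = 22) -> float:
    """Higher scores suggest a more MLS-like N-terminal leader."""
    leader = seq[:leader_len].upper()
    positives = leader.count("R") + leader.count("K")       # basic residues
    acidics = leader.count("D") + leader.count("E")         # disfavored
    hydrophobics = sum(leader.count(aa) for aa in "AILMFVW")
    return positives * 1.0 - acidics * 1.5 + hydrophobics * 0.25

# An arginine-rich, acid-free leader scores well; a made-up example:
print(mls_score("MRSLRALTRPLSLLARSAAKQS"))  # 7.5
```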
Specifically, the researchers took a mouse with the mitochondrial mtCB1
receptor knocked out, and then added modified versions back using viral
vectors. When they applied the synthetic cannabinoid ligands (known as
WIN55,212 and HU210), they found that mitochondrial respiration and
mobility, and subsequently memory formation, remained largely intact in
animals without the MLS in their receptor.
The researchers were then able to look further downstream using the
same general strategy of controlling localization of the second
messenger molecule protein kinase A (PKA). By fusing a constitutively
active mutant form of PKA to an MLS and putting it inside using an
adenovirus, they were able to trace the signal cascade into the heart of
complex I of the respiratory chain.
The presence and origin of full G-protein receptor signal pathways in
mitochondria is now more than just an academic question. Exactly how
retroviruses and other molecular agents of sequence modification managed
to re-jigger gene-duplicated backups of proteins like CB1 to add
alternatively spliced MLS tags is still shrouded in mystery.
Our ability to now harness these same slow evolutionary processes in
real time, and bend them to our needs, will undoubtedly have implications
well beyond the cannabinoid market. Together, the results above suggest
the tantalizing possibility of preserving some of the desired benefits
of cannabinoids while eliminating unintended consequences like memory loss or full-blown amnesia.
A bridge that bends in a strong earthquake and not only remains
standing but remains usable is making its debut in its first real-world
application, as part of a new exit bridge ramp on a busy downtown
Seattle highway.
"We've
tested new materials, memory retaining metal rods and flexible concrete
composites, in a number of bridge model studies in our large-scale
shake table lab, it's gratifying to see the new technology
applied for the first time in an important setting in a seismically
active area with heavy traffic loads," Saiid Saiidi, civil engineering
professor and researcher at the University of Nevada, Reno, said. "Using
these materials substantially reduces damage and allows the bridge to
remain open even after a strong earthquake."
Saiidi, who pioneered this technology, has built and destroyed, in
the lab, several large-scale 200-ton bridges, single bridge columns and
concrete abutments, using various combinations of innovative materials
and designs that replace standard steel rebar and concrete, in his
quest for a safer, more resilient infrastructure.
"We have solved the problem of survivability, we can keep a bridge usable after a strong earthquake," Saiidi said. "With these techniques and materials, we will usher in a new era of super earthquake-resilient structures."
The University partnered with the Washington State Department of
Transportation and the Federal Highway Administration to implement this
new technology in the massive Alaskan Way Viaduct Replacement Program.
The centerpiece of the program is a two-mile-long tunnel, but it
includes 31 separate projects, begun in 2007, along the State Route 99
corridor through downtown Seattle.
"This is potentially a giant leap forward," Tom Baker, bridge and
structures engineer for the Washington State Department of
Transportation, said. "We design for no-collapse, but in the future, we
could be designing for no-damage and be able to keep bridges open to
emergency vehicles, commerce and the public after a strong quake."
Modern bridges are designed to not collapse during an earthquake, and
this new technology takes that design a step further. In the earthquake
lab tests, bridge columns built using memory-retaining nickel/titanium
rods and a flexible concrete composite returned to their original shape
after an earthquake as strong as a magnitude 7.5.
"The tests we've conducted on 4-span bridges leading to this point
aren't possible anywhere else in the world than our large-scale
structures and earthquake engineering lab," Saiidi said. "We've had
great support along the way from many state highway departments and
funding agencies like the National Science Foundation, the Federal
Highway Administration and the U.S. Department of Transportation.
Washington DOT recognized the potential of this technology and
understands the need to keep infrastructure operating following a large
earthquake."
In an experiment in 2015, featured in a video, one of Saiidi's bridges
moved more than six inches off center at the base and returned to its
original position, as designed, upright and stable. Using
computer-controlled hydraulics, the earthquake engineering lab can
increase the intensity of the recorded earthquake. Saiidi turned the dial up to 250 percent of the design parameters and still had excellent results.
"It had an incredible 9 percent drift with little damage," Saiidi said.
The Seattle off-ramp with the innovative columns is currently under
construction and scheduled for completion in spring 2017. After the new
SR 99 tunnel opens, this ramp, just south of the tunnel entrance, will
take northbound drivers from SR 99 to Seattle's SODO neighborhood.
A new WSDOT video describes how this innovative technology works.
"Dr. Saiidi sets the mark for the level of excellence to which the
College of Engineering aspires," Manos Maragakis, dean of the
University's College of Engineering, said. "His research is original and
innovative and has made a seminal contribution to seismic safety around
the globe."
Researchers at North Carolina State University have developed a
combination of software and hardware that will allow them to use
unmanned aerial vehicles (UAVs) and insect cyborgs, or biobots, to map
large, unfamiliar areas – such as collapsed buildings after a disaster.
"The
idea would be to release a swarm of sensor-equipped biobots – such as
remotely controlled cockroaches – into a collapsed building or other
dangerous, unmapped area," says Edgar Lobaton, an assistant professor of
electrical and computer engineering at NC State and co-author of two
papers describing the work.
"Using remote-control technology, we would restrict the movement of
the biobots to a defined area," Lobaton says. "That area would be
defined by proximity to a beacon on a UAV. For example, the biobots may
be prevented from going more than 20 meters from the UAV."
The biobots would be allowed to move freely within a defined area and
would signal researchers via radio waves whenever they got close to
each other. Custom software would then use an algorithm to translate the
biobot sensor data into a rough map of the unknown environment.
Once the program receives enough data to map the defined area, the
UAV moves forward to hover over an adjacent, unexplored section. The
biobots move with it, and the mapping process is repeated. The software
program then stitches the new map to the previous one. This can be
repeated until the entire region or structure has been mapped; that map
could then be used by first responders or other authorities.
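The published algorithms infer maps topologically from encounter data; the toy Python sketch below captures only the overall loop just described: collect pairwise encounters within a UAV-anchored region, shift the local result into global coordinates using the UAV's position, and stitch it onto the running map. Radio range, positions, and region sizes are illustrative assumptions.

```python
# Toy version of the biobot mapping loop; the real system infers maps
# from encounter topology rather than known positions.
import numpy as np

RADIO_RANGE_M = 2.0   # assumed biobot-to-biobot radio range

def local_map(positions: np.ndarray) -> list:
    """Record midpoints of biobot pairs that come within radio range."""
    encounters = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) < RADIO_RANGE_M:
                encounters.append((positions[i] + positions[j]) / 2)
    return encounters

def stitch(global_map: list, local: list, uav_pos: np.ndarray) -> list:
    # Anchor the local map to the UAV's global position, then append.
    return global_map + [p + uav_pos for p in local]

world = []
for uav_pos in (np.array([0.0, 0.0]), np.array([20.0, 0.0])):  # two regions
    biobots = np.random.rand(10, 2) * 20.0   # stand-in local positions
    world = stitch(world, local_map(biobots), uav_pos)
print(len(world), "encounter points in the stitched map")
```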
"This has utility for areas – like collapsed buildings – where GPS
can't be used," Lobaton says. "A strong radio signal from the UAV could
penetrate to a certain extent into a collapsed building,
keeping the biobot swarm contained. And as long as we can get a signal
from any part of the swarm, we are able to retrieve data on what the
rest of the swarm is doing. Based on our experimental data, we know
you're going to lose track of a few individuals, but that shouldn't
prevent you from collecting enough data for mapping."
Co-lead author Alper Bozkurt, an associate professor of electrical and computer engineering at NC State, has previously developed functional cockroach biobots.
However, to test their new mapping technology, the research team relied
on inch-and-a-half-long robots that simulate cockroach behavior.
In their experiment, researchers released these robots into a
maze-like space, with the effect of the UAV beacon emulated using an
overhead camera and a physical boundary attached to a moving cart. The
cart was moved as the robots mapped the area.
"We had previously developed
proof-of-concept software that allowed us to map small areas with
biobots, but this work allows us to map much larger areas and to stitch
those maps together into a comprehensive overview," Lobaton says. "It
would be of much more practical use for helping to locate survivors
after a disaster, finding a safe way to reach survivors, or for helping
responders determine how structurally safe a building may be.
"The next step is to replicate these experiments using biobots, which we're excited about."
An article on the framework for developing local maps and stitching
them together, "A Framework for Mapping with Biobotic Insect Networks:
From Local to Global Maps," is published in Robotics and Autonomous Systems.
An article on the theory of mapping based on the proximity of mobile
sensors to each other, "Geometric Learning and Topological Inference
with Biobotic Networks," is published in IEEE Transactions on Signal and Information Processing over Networks.
Computers that learn for themselves are with us now. As they become
more common in 'high-stakes' applications like robotic surgery,
terrorism detection and driverless cars, researchers ask what can be
done to make sure we can trust them.
There
would always be a first death in a driverless car, and it happened in
May 2016. Joshua Brown had engaged the autopilot system in his Tesla
when a tractor-trailer drove across the road in front of him. It seems
that neither he nor the sensors in the autopilot noticed the white-sided
truck against a brightly lit sky, with tragic results.
Of course many people die in car crashes every day – in the USA there
is one fatality every 94 million miles, and according to Tesla this was
the first known fatality in over 130 million miles of driving with
activated autopilot. In fact, given that most road fatalities are the
result of human error, it has been said that autonomous cars should make
travelling safer.
Even so, the tragedy raised a pertinent question: how much do we
understand – and trust – the computers in an autonomous vehicle? Or, in
fact, in any machine that has been taught to carry out an activity that a
human would do?
We are now in the era of machine learning. Machines can be trained to
recognise certain patterns in their environment and to respond
appropriately. It happens every time your digital camera detects a face
and throws a box around it to focus, or the personal assistant on your
smartphone answers a question, or the adverts match your interests when
you search online.
Machine learning is a way to program computers to learn from
experience and improve their performance in a way that resembles how
humans and animals learn tasks. As machine learning techniques become
more common in everything from finance to healthcare, the issue of trust
is becoming increasingly important, says Zoubin Ghahramani, Professor
of Information Engineering in Cambridge's Department of Engineering.
Faced with a life or death decision, would a driverless car decide to
hit pedestrians, or avoid them and risk the lives of its occupants?
Providing a medical diagnosis, could a machine be wildly inaccurate
because it has based its opinion on a too-small sample size? In making
financial transactions, should a computer explain how robust its
assessment of the volatility of the stock markets is?
"Machines can now achieve near-human abilities at many cognitive
tasks even if confronted with a situation they have never seen before,
or an incomplete set of data," says Ghahramani. "But what is going on
inside the 'black box'? If the processes by which decisions were being
made were more transparent, then trust would be less of an issue."
His team builds the algorithms that lie at the heart of these
technologies (the "invisible bit" as he refers to it). Trust and
transparency are important themes in their work: "We really view the
whole mathematics of machine learning as sitting inside a framework of
understanding uncertainty. Before you see data – whether you are a baby
learning a language or a scientist analysing some data – you start with a
lot of uncertainty and then as you have more and more data you have
more and more certainty.
"When machines make decisions, we want them to be clear on what stage
they have reached in this process. And when they are unsure, we want
them to tell us."
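A textbook Bayesian example (not drawn from Ghahramani's own work) makes this concrete: a Beta-Bernoulli model's posterior over an unknown success rate narrows visibly as observations accumulate.

```python
# Posterior over a success rate under a Beta-Bernoulli model: the
# standard deviation shrinks as the number of observations grows.
import math

def beta_posterior(successes: int, failures: int, a0=1.0, b0=1.0):
    """Posterior mean and standard deviation of Beta(a0+s, b0+f)."""
    a, b = a0 + successes, b0 + failures
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

for n in (10, 100, 1000):   # a 70% success rate observed at growing n
    mean, sd = beta_posterior(int(0.7 * n), n - int(0.7 * n))
    print(f"n={n:5d}  mean={mean:.3f}  sd={sd:.3f}")  # sd falls with n
```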
One method is to build in an internal self-evaluation or calibration
stage so that the machine can test its own certainty, and report back.
Two years ago, Ghahramani's group launched the Automatic Statistician
with funding from Google. The tool helps scientists analyse datasets
for statistically significant patterns and, crucially, it also provides a
report to explain how sure it is about its predictions.
"The difficulty with machine learning systems is you don't really
know what's going on inside – and the answers they provide are not
contextualised, like a human would do. The Automatic Statistician
explains what it's doing, in a human-understandable form."
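The Automatic Statistician's internals are not reproduced here, but the self-evaluation step can be sketched generically: a model returns its prediction together with its own confidence and defers to a human when that confidence is low. Everything below is a hypothetical illustration.

```python
# Hypothetical self-report wrapper around any probabilistic classifier.
def predict_with_report(probs: dict, threshold: float = 0.8) -> dict:
    """probs maps each class label to the model's predicted probability."""
    label, confidence = max(probs.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return {"prediction": None, "confidence": confidence,
                "note": "uncertain - deferring to a human"}
    return {"prediction": label, "confidence": confidence,
            "note": "confident prediction"}

print(predict_with_report({"benign": 0.55, "malignant": 0.45}))  # defers
print(predict_with_report({"benign": 0.95, "malignant": 0.05}))  # predicts
```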
Where transparency becomes especially relevant is in applications
like medical diagnosis, where understanding the provenance of a
decision is necessary in order to trust it.
Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: "A particular issue with new artificial intelligence
(AI) systems that learn or evolve is that their processes do not
clearly map to rational decision-making pathways that are easy for
humans to understand." His research aims both at making these pathways
more transparent, sometimes through visualisation, and at looking at
what happens when systems are used in real-world scenarios that extend
beyond their training environments – an increasingly common occurrence.
"We would like AI systems to monitor their situation dynamically,
detect whether there has been a change in their environment and – if
they can no longer work reliably – then provide an alert and perhaps
shift to a safety mode." A driverless car, for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.
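As a rough illustration of that monitoring behaviour (a stand-in, not Weller's actual method), a system can compare live input statistics against those recorded during training and raise an alert when the two diverge:

```python
# Stand-in drift monitor: flag inputs whose mean drifts far from the
# training distribution, then switch to a safety mode.
import statistics

TRAIN_MEAN, TRAIN_SD = 0.0, 1.0   # summary statistics from training data

def check_environment(recent_inputs: list, limit: float = 3.0) -> str:
    live_mean = statistics.fmean(recent_inputs)
    z = abs(live_mean - TRAIN_MEAN) / (TRAIN_SD / len(recent_inputs) ** 0.5)
    if z > limit:
        return "ALERT: environment shifted - switching to safety mode"
    return "normal operation"

print(check_environment([0.1, -0.2, 0.05, 0.3]))   # normal operation
print(check_environment([4.8, 5.2, 5.0, 4.9]))     # alert
```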
Weller's theme of trust and transparency forms just one of the
projects at the newly launched £10 million Leverhulme Centre for the
Future of Intelligence (CFI). Ghahramani, who is Deputy Director of the
Centre, explains: "It's important to understand how developing
technologies can help rather than replace humans. Over the coming years,
philosophers, social scientists, cognitive scientists and computer
scientists will help guide the future of the technology and study its
implications – both the concerns and the benefits to society."
CFI brings together four of the world's leading universities
(Cambridge, Oxford, Berkeley and Imperial College, London) to explore
the implications of AI for human civilisation. Together, an
interdisciplinary community of researchers will work closely with
policy-makers and industry investigating topics such as the regulation
of autonomous weaponry, and the implications of AI for democracy.
Ghahramani describes the excitement felt across the machine learning
field: "It's exploding in importance. It used to be an area of research
that was very academic – but in the past five years people have
realised these methods are incredibly useful across a wide range of
societally important areas.
"We are awash with data, we have increasing computing power and we
will see more and more applications that make predictions in real time.
And as we see an escalation in what machines can do, they will challenge
our notions of intelligence and make it all the more important that we
have the means to trust what they tell us."
Artificial intelligence has the power to eradicate poverty and
disease or hasten the end of human civilisation as we know it –
according to a speech delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.
A portrait of Benjamin Franklin manipulated by Smilevector. Credit: Smithsonian National Portrait Gallery.
Tom White, senior lecturer in Victoria's School of Design, has
created Smilevector—a bot that examines images of people, then adds
smiles to, or removes them from, their faces.
"It
has examined hundreds of thousands of faces to learn the difference
between images, by finding relations and reapplying them," says Mr
White.
"When the computer finds an image it looks to identify if the person
is smiling or not. If there isn't a smile, it adds one, but if there is a
smile then it takes it away.
"It represents these changes as an animation, which moves parts of the face around, including crinkling and widening the eyes."
The bot can be used as a form of puppetry, says Mr White.
"These systems are domain independent, meaning you can do it with
anything—from manipulating images of faces to shoes to chairs. It's
really fun and interesting to work in this space. There are lots of
ideas to play around with."
The creation of the bot was sparked by Mr White's research into creative intelligence.
"Machine learning and artificial intelligence are starting to have
implications for people in creative industries. Some of these
implications have to do with the computer's capabilities, like
completing mundane tasks so that people can complete higher level
tasks," says Mr White.
"I'm interested in exploring what these systems are capable of doing
but also how it changes what we think of as being creative is in the
first place. Once you have a system that can automate processes, is that
still a creative act? If you can make something a completely push of
the button operation, does its meaning change?"
Mr White says people have traditionally used creative tools by giving commands.
"However, I think we're moving toward more of a collaboration with
computers—where there's an intelligent system that's making suggestions
and helping steer the process.
"A lot will happen in this space in the next five to ten years, and
now is the right time to progress. I also hope these techniques
influence teaching over the long term as they become more mainstream. It
is something that students could work with me on at Victoria University
as part of our Master of Design Innovation or our new Master of Fine
Arts (Creative Practice)."
The paper Sampling Generative Networks describing this research is
available as an arXiv preprint. The research will also be presented at
the Neural Information Processing Systems conference in Spain and the
Generative Art conference in Italy in December.