Remember the last post about SpaceX? Well, they're at it again!
This time, SpaceX launched supplies to the International Space Station on Saturday. More notable still, they used a vessel that has flown before.
The refurbished Dragon cargo capsule rode into space atop a Falcon 9 rocket at 5:07 pm (2107 GMT) from Cape Canaveral, Florida.
As NASA spokesman Mike Curie called out the countdown, the rocket blazed a steady vertical path into the clouds.
The last time this particular Dragon spaceship flew to space was in 2014.
The Dragon on its present mission is packed with almost
6,000 pounds (2,700 kilograms) of science research, crew supplies and hardware,
and should arrive at the ISS on Monday.
The supplies for special experiments
include live mice to study the effects of osteoporosis and fruit flies for
research on microgravity's impact on the heart.
The spacecraft is also loaded with
solar panels and equipment to study neutron stars.
About 10 minutes after launch,
SpaceX successfully returned the first stage of the Falcon 9 rocket to a
controlled landing at Cape Canaveral.
The rocket powered its engines and
guided itself down to Landing Zone One, not far from the launch site.
"The first stage is back,"
Curie said in a NASA live webcast, as video images showed the tall, narrow
portion of the rocket touch down steadily in a cloud of smoke.
SpaceX said it marked the company's
fifth successful landing on solid ground. Several of its Falcon 9 rockets have
returned upright to platforms floating in the ocean.
The effort is part of SpaceX's push
to make spaceflight cheaper by re-using costly rocket
and spaceship components after each launch, rather than ditching them in the
ocean.
The launch was the 100th from NASA's
historic launch pad 39A, the starting point for the Apollo missions to the Moon
in the 1960s and 1970s, as well as a total of 82 shuttle flights.
Images of garment prototype before exercise with flat ventilation flaps (F) and after exercise with curved ventilation flaps (G). Credit: Science Advances (2017). advances.sciencemag.org/content/3/5/e1601984
A team of MIT researchers has designed a breathable workout suit with ventilating flaps that open and close in response to an athlete's body heat and sweat. These flaps, which range from thumbnail- to finger-sized, are lined with live microbial cells that shrink and expand in response to changes in humidity. The cells act as tiny sensors and actuators, driving the flaps to open when an athlete works up a sweat, and pulling them closed when the body has cooled off.
The researchers have also fashioned a running shoe with an inner layer of similar cell-lined flaps to air out and wick away moisture. Details of both designs are published today in Science Advances.
Why use live cells in responsive fabrics? The researchers say that moisture-sensitive cells require no additional elements to sense and respond to humidity. The microbial cells they have used are also proven to be safe to touch and even consume. What's more, with new genetic engineering tools available today, cells can be prepared quickly and in vast quantities, to express multiple functionalities in addition to moisture response.
To demonstrate this last point, the researchers engineered moisture-sensitive cells to not only pull flaps open but also light up in response to humid conditions.
"We can combine our cells with genetic tools to introduce other functionalities into these living cells," says Wen Wang, the paper's lead author and a former research scientist in MIT's Media Lab and Department of Chemical Engineering. "We use fluorescence as an example, and this can let people know you are running in the dark. In the future we can combine odor-releasing functionalities through genetic engineering. So maybe after going to the gym, the shirt can release a nice-smelling odor."
Wang's co-authors include 14 researchers from MIT, specializing in fields including mechanical engineering, chemical engineering, architecture, biological engineering, and fashion design, as well as researchers from New Balance Athletics. Wang co-led the project, dubbed bioLogic, with former graduate student Lining Yao as part of MIT's Tangible Media group, led by Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences.

Shape-shifting cells
In nature, biologists have observed that living things and their components, from pine cone scales to microbial cells and even specific proteins, can change their structures or volumes when there is a change in humidity. The MIT team hypothesized that natural shape-shifters such as yeast, bacteria, and other microbial cells might be used as building blocks to construct moisture-responsive fabrics.
"These cells are so strong that they can induce bending of the substrate they are coated on," Wang says.
The researchers first worked with the most common nonpathogenic strain of E. coli, which was found to swell and shrink in response to changing humidity. They further engineered the cells to express green fluorescent protein, enabling the cell to glow when it senses humid conditions.
They then used a cell-printing method they had previously developed to print E. coli onto sheets of rough, natural latex.
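A two-layer structure like this behaves much like a bimetallic strip: when one bonded layer shrinks or swells relative to the other, the stack bends. As a rough back-of-the-envelope model of that bending, here is Timoshenko's classic bimorph curvature formula; this is a generic mechanics sketch, not the paper's own analysis, and all numerical inputs are hypothetical:

```python
def bilayer_curvature(mismatch: float, t1: float, t2: float,
                      e1: float, e2: float) -> float:
    """Timoshenko curvature (1/m) of a bonded two-layer strip.

    mismatch: strain difference between the layers (dimensionless)
    t1, t2:   layer thicknesses (m); e1, e2: Young's moduli (Pa)
    """
    m = t1 / t2          # thickness ratio
    n = e1 / e2          # modulus ratio
    h = t1 + t2          # total thickness
    return (6 * mismatch * (1 + m) ** 2) / (
        h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))
    )

# Identical layers recover the textbook result kappa = 3*eps/(2*h):
eps, t = 0.01, 1e-4  # 1% drying shrinkage, 100-micron layers (hypothetical)
print(bilayer_curvature(eps, t, t, 1.0, 1.0))  # ~75 per metre
```

The point of the sketch is qualitative: even a 1% humidity-driven strain mismatch across thin layers produces visible curling, which is consistent with the flap behavior the team describes.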
The team printed parallel lines of E. coli cells onto sheets of latex, creating two-layer structures, and exposed the fabric to changing moisture conditions. When the fabric was placed on a hot plate to dry, the cells began to shrink, causing the overlying latex layer to curl up. When the fabric was then exposed to steam, the cells began to glow and expand, causing the latex to flatten out. After undergoing 100 such dry/wet cycles, Wang says the fabric experienced "no dramatic degradation" in either its cell layer or its overall performance.

No sweat
The researchers worked the biofabric into a wearable garment, designing a running suit with cell-lined latex flaps patterned across the suit's back. They tailored the size of each flap, as well as the degree to which they open, based on previously published maps of where the body produces heat and sweat.
"People may think heat and sweat are the same, but in fact, some areas like the lower spine produce lots of sweat but not much heat," Yao says. "We redesigned the garment using a fusion of heat and sweat maps to, for example, make flaps bigger where the body generates more heat."
Support frames underneath each flap keep the fabric's inner cell layer from directly touching the skin, while at the same time, the cells are able to sense and react to humidity changes in the air lying just over the skin. In trials to test the running suit, study participants donned the garment and worked out on exercise treadmills and bicycles while researchers monitored their temperature and humidity using small sensors positioned across their backs.
After five minutes of exercise, the suit's flaps started opening up, right around the time when participants reported feeling warm and sweaty. According to sensor readings, the flaps effectively removed sweat from the body and lowered skin temperature, more so than when participants wore a similar running suit with nonfunctional flaps.
When Wang tried on the suit herself, she found that the flaps created a welcome sensation. After pedaling hard for a few minutes, Wang recalls that "it felt like I was wearing an air conditioner on my back."

Ventilated running shoes
The team also integrated the moisture-responsive fabric into a rough prototype of a running shoe. Where the bottom of the foot touches the sole of the shoe, the researchers sewed multiple flaps, curved downward, with the cell-lined layer facing toward—though not touching—a runner's foot. They again designed the size and position of the flaps based on heat and sweat maps of the foot.
"In the beginning, we thought of making the flaps on top of the shoe, but we found people don't normally sweat on top of their feet," Wang says. "But they sweat a lot on the bottom of their feet, which can lead to diseases like warts. So we thought, is it possible to keep your feet dry and avoid those diseases?"
As with the workout suit, the flaps on the running shoe opened and lit up when researchers increased the surrounding humidity; in dry conditions the flaps faded and closed.
Going forward, the team is looking to collaborate with sportswear companies to commercialize their designs, and is also exploring other uses, including moisture-responsive curtains, lampshades, and bedsheets.
"We are also interested in rethinking packaging," Wang says. "The concept of a second skin would suggest a new genre for responsive packaging."
"This work is an example of harnessing the power of biology to design new materials and devices and achieve new functions," says Xuanhe Zhao, the Robert N. Noyce Career Development Associate Professor in the Department of Mechanical Engineering and a co-author on the paper. "We believe this new field of 'living' materials and devices will find important applications at the interface between engineering and biological systems."
A gene known to suppress tumor
formation in a broad range of tissues plays a key role in keeping stem
cells in muscles dormant until needed, a finding that may have
implications for both human health and animal production, according to a
Purdue University study.
Shihuan Kuang,
professor of animal sciences, and Feng Yue, a postdoctoral researcher in
Kuang's lab, reported their findings in two papers published in the
journals Cell Reports and Nature Communications. The results suggest modifying expression of the PTEN gene could one day play a role in increasing muscle mass in agricultural animals and in improving therapies for muscle injuries in humans.
Muscle stem cells, called satellite cells,
normally sit in a quiescent, or dormant, state until called upon to
build muscle or repair a damaged muscle. An inability to maintain
quiescence leads to a loss of satellite cells. As humans age, the
number of satellite cells gradually declines and the remaining cells
become less effective in regenerating muscles, resulting in muscle loss –
a condition called sarcopenia.
Kuang and Yue, in the Nature Communications paper, explored
the role tumor-suppressor gene PTEN plays in satellite cells. The PTEN
gene encodes a protein that suppresses growth signaling, thereby
limiting the growth of fast-growing tumor cells. Mutation of the PTEN
gene is associated with many types of cancers, but how the gene
functions in muscle stem cells is unknown.
To understand the function of a gene, the authors first wanted to know how the gene is expressed.
"This gene is highly expressed in the satellite cells when the cells
are in the quiescent state. When they become differentiated, the PTEN
level reduces," Yue said.
By knocking out the PTEN gene in resting satellite cells, the
researchers found that satellite cells quickly differentiate and become muscle cells. So PTEN plays an essential role in keeping satellite cells in their quiescent state.
"You no longer have the stem cells once you knock out the gene," Kuang said.
In their Cell Reports paper, Kuang and Yue took a step further
to examine PTEN function in proliferating stem cells. This time, they
knocked out PTEN in embryonic progenitor cells, those that will later
become muscle in the mouse. They found that as the mouse grew, muscle
mass increased significantly—by as much as 40 percent in some
muscles—over that of a normal mouse.
"That would be significant in an animal production point of view," Kuang said.
The increased muscle came with a cost, however. Besides creating muscle, those progenitor cells
also create satellite cells. Without PTEN, not only were fewer satellite cells
created, but the resulting cells could not maintain
dormancy, leading to an accelerated rate of depletion during aging.
The faster depletion of satellite cells during aging wouldn't matter
much in an animal production scenario, Kuang said. Beef cattle, for
example, are harvested before they age. The increase in muscle mass,
however, would be a significant advantage in production efficiency.
The findings may lead to improvement in human health, the authors
said. The ability to control the expression of PTEN could lead to
therapies for quicker healing of muscle injuries.
"If you want to quickly boost up the stem cells to repair something,
you need to suppress PTEN," Kuang said. "After that, you'd need to
increase PTEN to return the cells to a quiescent state. If we could
do that, you would suspect that the muscle would repair more quickly."
Knowing that PTEN also suppresses tumors in many types of tissues,
the authors noted that the elimination of the gene did not cause tumor
formation in the muscle
cells they studied. That suggests regulation of PTEN could be a
feasible method for improving human health and animal agriculture.
Blitab, a tablet with a Braille interface, looks like a
promising step up for blind and low vision people who want to be part
of the educational, working and entertainment worlds of digital life.
In a
video, Blitab Technology founder Kristina Tsvetanova said the
idea for such a tablet came to her during her studies as an industrial
engineer. At the time, a blind colleague of hers asked her to sign him
up for an online course, and a question nagged her: how could technology
help him better?
Worldwide, she said, there are more than 285 million blind and visually impaired people.
She was aware that in general blind and low vision people were coping
with old, bulky technology, contributing to low literacy rates among
blind children. She and her team set out to change that.
There was ample room for improvement. The conventional interfaces
for the blind, she said, have been slow and expensive. A Braille
keyboard can range from about $5,000 to $8,000. Also, she said, they are
limited in what they can display: just a few words at a time.
Imagine, she said, reading Moby Dick, five words at a time.
They have engineered a tablet device with a 14-line Braille display on the top and a touch screen on the bottom.
Part of their technology involves a high performance membrane, and
their press statement said the tablet uses smart microfluidics to form
small physical bubbles instead of a screen display.
They have produced a tactile tablet, she said, where people with sight loss can learn, work and play using that device.
The user can control the tablet with voice-over if the person wants
to listen to an ebook, or, at the press of one button, dots will be activated
on the screen and the surface of the screen will change.
Romain Dillet, in TechCrunch: "The magic happens when you
press the button on the side of the device. The top half of the device
turns into a Braille reader. You can load a document, a web
page—anything really—and then read the content using Braille."
Tsvetanova told Dillet, "We're not excluding voice over; we combine
both of these things." She said they offer both "the tactile experience
and the voice over experience."
Rachel Metz reported in MIT Technology Review: "The Blitab's
Braille display includes 14 rows, each made up of 23 cells with six dots
per cell. Every cell can present one letter of the Braille alphabet. Underneath the grid are numerous layers of fluids and a special kind of membrane," she wrote.
Credit: Blitab
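Metz's figures pin down the display's capacity; a quick sanity check on the numbers quoted above:

```python
# Grid dimensions as reported in MIT Technology Review.
ROWS, CELLS_PER_ROW, DOTS_PER_CELL = 14, 23, 6

cells = ROWS * CELLS_PER_ROW   # one Braille letter per cell
dots = cells * DOTS_PER_CELL   # individually raised tactile bubbles

print(cells)  # 322 letters visible per refresh
print(dots)   # 1932 tactile dots
```

At 322 letters per refresh, a full screen holds roughly 50 to 60 words, a large step up from the few-words-at-a-time readers described earlier.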
At heart, it's an Android tablet, Dillet said, "so it has Wi-Fi and Bluetooth and can run all sorts of Android apps."
Metz said that with eight hours of use per day, it's estimated to last for five days on one battery charge.
The team has set the price of the device at $500.
How they will proceed: first, she said, they will sell directly from
their website, then scale through global distributors, and distribute
to the less developed world.
What's next? Dillet said in the Jan. 6 article that "the team of 10 plans to ship the tablet in six months with pre-orders starting later this month."
Blitab Technology recently took first place in the Digital Wellbeing category of the 2016 EIT Digital Challenge. EIT Digital is described as a European open innovation organization that seeks to foster digital technology innovation and entrepreneurial talent.
The figure shows a sub-population of ancient stars, called
Carbon-Enhanced Metal-Poor (CEMP) stars. These stars contain 100 to
1,000,000 times LESS iron (and other heavy elements) than the Sun, but
10 to 10,000 times MORE carbon, relative to iron. The unusual
chemical compositions of these stars provide clues to their birth
environments, and the nature of the stars in which the carbon formed. In
the figure, A(C) is the absolute amount of carbon, while the horizontal
axis represents the ratio of iron, relative to hydrogen, compared with
the same ratio in the Sun. Credit: University of Notre Dame
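The caption's axes use standard stellar-abundance notation, which the article does not spell out; for reference, the conventional definitions are:

```latex
[\mathrm{Fe}/\mathrm{H}] = \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\star} - \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot},
\qquad
A(\mathrm{C}) = \log_{10}\!\left(\frac{N_{\mathrm{C}}}{N_{\mathrm{H}}}\right) + 12
```

So a star with 100 to 1,000,000 times less iron than the Sun sits at [Fe/H] between -2 and -6 on the horizontal axis.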
University of Notre Dame astronomers have
identified what they believe to be the second generation of stars,
shedding light on the nature of the universe's first stars.
A subclass of carbon-enhanced metal-poor (CEMP) stars, the so-called CEMP-no stars, are ancient stars that have large amounts of carbon but little of the heavy metals
(such as iron) common to later-generation stars. Massive
first-generation stars made up of pure hydrogen and helium produced and
ejected heavier elements
by stellar winds during their lifetimes or when they exploded as
supernovae. Those metals—anything heavier than helium, in astronomical
parlance—polluted the nearby gas clouds from which new stars formed.
Jinmi Yoon, a postdoctoral research associate in the Department of
Physics; Timothy Beers, the Notre Dame Chair in Astrophysics; and
Vinicius Placco, a research professor at Notre Dame, along with their
collaborators, show in findings published in the Astrophysical Journal
this week that the lowest metallicity stars, the most chemically
primitive, include large fractions of CEMP stars. The CEMP-no stars,
which are also rich in nitrogen and oxygen, are likely the stars born
out of hydrogen and helium gas clouds that were polluted by the elements
produced by the universe's first stars.
"The CEMP-no stars we see today, at least many of them, were born
shortly after the Big Bang, 13.5 billion years ago, out of almost
completely unpolluted material," Yoon says. "These stars, located in the
halo system of our galaxy, are true second-generation stars—born out of
the nucleosynthesis products of the very first stars."
Beers says it's unlikely that any of the universe's first stars still
exist, but much can be learned about them from detailed studies of the
next generation of stars.
"We're analyzing the chemical products of the very first stars by
looking at what was locked up by the second-generation stars," Beers
says. "We can use this information to tell the story of how the first
elements were formed, and determine the distribution of the masses of
those first stars. If we know how their masses were distributed, we can
model the process of how the first stars formed and evolved from the
very beginning."
The authors used high-resolution spectroscopic data gathered by many
astronomers to measure the chemical compositions of about 300 stars in
the halo of the Milky Way. More and heavier elements form as later
generations of stars continue to contribute additional metals, they say.
As new generations of stars are born, they incorporate the metals
produced by prior generations. Hence, the more heavy metals a star
contains, the more recently it was born. Our sun, for example, is
relatively young, with an age of only 4.5 billion years.
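The logarithmic [Fe/H] scale makes this "chemical clock" easy to quantify; a small sketch converting the caption's iron deficits into bracket values (the specific values are illustrative, not taken from the paper):

```python
def iron_fraction_of_solar(fe_h: float) -> float:
    """Convert a logarithmic [Fe/H] value to a linear iron abundance
    relative to the Sun."""
    return 10.0 ** fe_h

# CEMP stars span roughly [Fe/H] = -2 to -6,
# i.e. 100x to 1,000,000x less iron than the Sun.
for fe_h in (-2, -6):
    print(fe_h, 1 / iron_fraction_of_solar(fe_h))
```

Each additional generation of supernovae raises [Fe/H] toward the solar value of 0, which is why lower [Fe/H] points to an earlier birth.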
A companion paper, titled "Observational constraints on first-star
nucleosynthesis. II. Spectroscopy of an ultra metal-poor CEMP-no star,"
of which Placco was the lead author, was also published in the same
issue of the journal this week. The paper compares theoretical
predictions for the chemical composition of zero-metallicity supernova
models with a newly discovered CEMP-no star in the Milky Way galaxy.
A British-Dutch project aiming to send an
unmanned mission to Mars by 2018 announced Friday that the shareholders
of a Swiss financial services company have agreed to a takeover bid.
"The acquisition is now
only pending approval by the board of Mars One Ventures," the company
said in a joint statement with InFin Innovative Finance AG, adding
approval from the Mars board would come "as soon as possible."
"The takeover provides a solid path to funding the next steps of Mars
One's mission to establish a permanent human settlement on Mars," the
statement added.
Mars One consists of two entities: the Dutch not-for-profit Mars One
Foundation and a British public limited company Mars One Ventures.
Mars One aims to establish a permanent human settlement on the Red
Planet, and is currently "in the early mission concept phase," the
company says, adding securing funding is one of its major challenges.
Some 200,000 hopefuls from 140 countries initially signed up for the
Mars One project, which is to be partly funded by a television reality
show about the endeavour.
Those candidates have now been whittled down to just 100, out of which 24 will
be selected for one-way trips to Mars due to start in 2026 after several
unmanned missions have been completed.
"Once this deal is completed, we'll be in a much stronger financial
position as we begin the next phase of our mission. Very exciting
times," said Mars One chief executive Bas Lansdorp.
NASA is currently working on three Mars missions with the European
Space Agency and plans to send another rover to Mars in 2020.
But NASA has no plans for a manned mission to Mars until the 2030s.
This artist’s view shows how the light coming from the surface
of a strongly magnetic neutron star (left) becomes linearly polarised as
it travels through the vacuum of space close to the star on its way to
the observer on Earth (right).
By
studying the light emitted from an extraordinarily dense and strongly
magnetized neutron star using ESO's Very Large Telescope, astronomers
may have found the first observational indications of a strange quantum
effect, first predicted in the 1930s. The polarization of the observed
light suggests that the empty space around the neutron star is subject
to a quantum effect known as vacuum birefringence.
A team led by
Roberto Mignani from INAF Milan (Italy) and from the University of
Zielona Gora (Poland) used ESO's Very Large Telescope (VLT) at the
Paranal Observatory in Chile to observe the neutron star RX
J1856.5-3754, about 400 light-years from Earth.
Despite being amongst the closest neutron stars,
its extreme dimness meant the astronomers could only observe the star
with visible light using the FORS2 instrument on the VLT, at the limits
of current telescope technology.
Neutron stars are the very dense remnant cores of massive stars—at
least 10 times more massive than our Sun—that have exploded as
supernovae at the ends of their lives. They also have extreme magnetic
fields, billions of times stronger than that of the Sun, that permeate
their outer surface and surroundings.
These fields are so strong that they even affect the properties of the empty space around the star. Normally a vacuum
is thought of as completely empty, and light can travel through it
without being changed. But in quantum electrodynamics (QED), the quantum
theory describing the interaction between photons and charged particles
such as electrons, space is full of virtual particles that appear and
vanish all the time. Very strong magnetic fields can modify this space so that it affects the polarisation of light passing through it.
Mignani explains: "According to QED, a highly magnetised vacuum
behaves as a prism for the propagation of light, an effect known as
vacuum birefringence."
Among the many predictions of QED, however, vacuum birefringence so
far lacked a direct experimental demonstration. Attempts to detect it in
the laboratory have not yet succeeded in the 80 years since it was
predicted in a paper by Werner Heisenberg (of uncertainty principle
fame) and Hans Heinrich Euler.
This wide field image shows the sky around the very faint
neutron star RX J1856.5-3754 in the southern constellation of Corona
Australis. This part of the sky also contains interesting regions of
dark and bright nebulosity.
"This
effect can be detected only in the presence of enormously strong
magnetic fields, such as those around neutron stars. This shows, once
more, that neutron stars are invaluable laboratories in which to study
the fundamental laws of nature." says Roberto Turolla (University of
Padua, Italy).
After careful analysis of the VLT data, Mignani and his team detected
linear polarisation—at a significant degree of around 16%—that they say
is likely due to the boosting effect of vacuum birefringence occurring
in the area of empty space (though, as some of us already know, empty space is never truly empty) surrounding RX J1856.5-3754.
Vincenzo Testa (INAF, Rome, Italy) comments: "This is the faintest
object for which polarisation has ever been measured. It required one of
the largest and most efficient telescopes in the world, the VLT, and
accurate data analysis techniques to enhance the signal from such a
faint star."
"The high linear polarisation that we measured with the VLT can't be
easily explained by our models unless the vacuum birefringence effects
predicted by QED are included," adds Mignani.
"This VLT study is the very first observational support for
predictions of these kinds of QED effects arising in extremely strong
magnetic fields," remarks Silvia Zane (UCL/MSSL, UK).
Mignani is excited about further improvements to this area of study
that could come about with more advanced telescopes: "Polarisation
measurements with the next generation of telescopes, such as ESO's
European Extremely Large Telescope, could play a crucial role in testing
QED predictions of vacuum birefringence effects around many more
neutron stars."
"This measurement, made for the first time now in visible light, also
paves the way to similar measurements to be carried out at X-ray
wavelengths," adds Kinwah Wu (UCL/MSSL, UK).
This research was presented in the paper entitled "Evidence for
vacuum birefringence from the first optical polarimetry measurement of
the isolated neutron star RX J1856.5−3754", by R. Mignani et al., to
appear in Monthly Notices of the Royal Astronomical Society.
A breakthrough in solar power could make it cheaper and more
commercially viable, thanks to research at the University of Warwick.
In a paper published in Nature Energy,
Dr Ross Hatton, Professor Richard Walton and colleagues, explain how
solar cells could be produced with tin, making them more adaptable and
simpler to produce than their current counterparts.
Solar cells based on a class of semiconductors known as lead
perovskites are rapidly emerging as an efficient way to convert sunlight
directly into electricity. However, the reliance on lead is a serious
barrier to commercialisation, due to the well-known toxicity of lead.
Dr Ross Hatton and colleagues show that perovskites using tin in
place of lead are much more stable than previously thought, and so could
prove to be a viable alternative to lead perovskites for solar cells.
Lead-free cells could make solar power cheaper, safer and more commercially attractive, helping it become a more prevalent source of energy in everyday life.
This could lead to a more widespread use of solar power, with
potential uses in products such as laptop computers, mobile phones and
cars.
The team have also shown how the device structure can be greatly
simplified without compromising performance, which offers the important
advantage of reduced fabrication cost.
Dr Hatton comments that there is an ever-pressing need to develop renewable sources of energy:
"It is hoped that this work will help to stimulate an intensive
international research effort into lead-free perovskite solar cells,
like that which has resulted in the astonishingly rapid advancement of lead perovskite solar cells.
"There is now an urgent need to tackle the threat of climate change
resulting from humanity's over reliance on fossil fuel, and the rapid
development of new solar technologies must be part of the plan."
Perovskite solar cells are lightweight and compatible with flexible
substrates, so could be applied more widely than the rigid flat plate
silicon solar cells that currently dominate the photovoltaics market, particularly in consumer electronics and transportation applications.
The paper, 'Enhanced Stability and Efficiency in Hole-Transport Layer Free CsSnI3 Perovskite Photovoltaics', is published in Nature Energy,
and is authored by Dr Ross Hatton, Professor Richard Walton and PhD
student Kenny Marshall in the Department of Chemistry, along with Dr
Marc Walker in the Department of Physics.
Cannabinoids and memory
Few classes of drugs have
galvanized the pharmaceutical industry in recent times like the
cannabinoids. This class of molecules includes not only the natural
forms, but also a vast new treasury of powerful synthetic analogs with
up to several hundred times the potency as measured by receptor activity
and binding affinity. With the FDA now fast tracking all manner of
injectables, topicals, and sprays promising everything from relief of
nebulous cancer pain to anti-seizure neuroprotection, the field has
generated more than a few skeptics.
What inquiring
minds really want to know, beyond the thorny issue of how well they
actually work, is how do they work at all? If you want to understand
what something is doing in the cell, one useful approach is to ask what
it does to its mitochondria.
With drug companies now drooling over the possibility of targeting
drugs and treatments directly to these organelles by attaching
mitochondrial localization sequences (MLS) or other handler molecules,
answers to this kind of question are now coming into focus.
But even with satisfactory explanations in hand, there would still be
one large hurdle standing in the way of cannabinoid medical bliss:
Namely, even if a patient can manage to avoid operating vehicles or
heavy machinery throughout the course of their treatment, how do they
cope with the endemic collateral memory loss these drugs invariably
cause?
A recent paper published in Nature neatly ties all these
subtleties together, and even suggests a possible way out of the brain
fog by toggling the sites of cannabinoid action between mitochondria and
other cellular compartments. By generating a panel of cannabinoid
receptor and second messenger molecules with and without the appropriate
MLS tags or accessory binding proteins, the authors were able to
directly link cannabinoid-controlled mitochondrial activity to memory
formation.
One confounder in this line of work is that these MLSs are very
fickle beasts. The 22 or so leader amino acids that make up their 'code'
are not a direct address in any sense. While the consensus sequences
that localize protease action or sort nuclear, endoplasmic reticulum,
and plasma membrane proteins generally contain clearly recognizable
motifs, any regularities in the MLSs have only proven visible to a
computer. That is not to say that MLSs are fictions—they clearly do
work—but their action becomes predictable only once their full
three-dimensional structures are taken into account.
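As a toy illustration of why MLSs resist simple pattern-matching, one can score the crudest regularity they do show, net positive charge in the N-terminal window, and see how weak a signal that is on its own. This heuristic is purely hypothetical and nothing like the weighted feature sets real predictors use; the example sequence is made up:

```python
def mls_score(protein: str, window: int = 22) -> int:
    """Net positive charge (Arg/Lys minus Asp/Glu) in the N-terminal window,
    a crude hint of a mitochondrial leader sequence."""
    leader = protein[:window].upper()
    basic = sum(leader.count(aa) for aa in "RK")
    acidic = sum(leader.count(aa) for aa in "DE")
    return basic - acidic

# A made-up Arg-rich leader scores high, but so would many
# non-mitochondrial sequences; hence the need for trained predictors.
print(mls_score("MLRTSSLFTRRVQPSLFRNILRLQST"))  # 5
```

A high score is suggestive at best, which is why the authors turned to dedicated prediction software rather than motif searches.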
The authors availed themselves of two fairly sophisticated programs
called MitoProt and PSORT to remove any guesswork in identifying a
potential MLS in CB1 cannabinoid receptors. CB1 receptors had been previously
associated by immunohistochemical methods with what we might call the
mitochondrial penumbra, but their presence there may have been purely
incidental. This in silico analysis theoretically confirmed the presence
of a putative MLS in CB1 and encouraged them to carry out further
manipulations of this pathway.
Specifically, the researchers took a mouse with the mitochondrial mtCB1 receptor knocked out, and then added modified versions back using viral vectors. When they applied the synthetic cannabinoid ligands WIN55,212 and HU210, they found that mitochondrial respiration and mobility, and subsequently memory formation, remained largely intact in animals whose receptor lacked the MLS.
The researchers were then able to look further downstream using the same general strategy of controlling the localization of the second messenger molecule protein kinase A (PKA). By fusing a constitutively active mutant form of PKA to an MLS and delivering it with an adenovirus, they were able to trace the signal cascade into the heart of complex I of the respiratory chain.
The presence and origin of full G-protein receptor signaling pathways in mitochondria is now more than just an academic question. Exactly how retroviruses and other molecular agents of sequence modification managed to rework gene-duplicated backups of proteins like CB1 to add alternatively spliced MLS tags is still shrouded in mystery.
Our ability to now harness these same slow evolutionary processes in real time, and bend them to our needs, will undoubtedly have implications well beyond the cannabinoid market. Together, the results above suggest the tantalizing possibility of preserving some of the desired benefits of cannabinoids while eliminating unintended consequences like memory loss or full-blown amnesia.
Researchers at North Carolina State University have developed a
combination of software and hardware that will allow them to use
unmanned aerial vehicles (UAVs) and insect cyborgs, or biobots, to map
large, unfamiliar areas – such as collapsed buildings after a disaster.
"The
idea would be to release a swarm of sensor-equipped biobots – such as
remotely controlled cockroaches – into a collapsed building or other
dangerous, unmapped area," says Edgar Lobaton, an assistant professor of
electrical and computer engineering at NC State and co-author of two
papers describing the work.
"Using remote-control technology, we would restrict the movement of
the biobots to a defined area," Lobaton says. "That area would be
defined by proximity to a beacon on a UAV. For example, the biobots may
be prevented from going more than 20 meters from the UAV."
The biobots would be allowed to move freely within a defined area and
would signal researchers via radio waves whenever they got close to
each other. Custom software would then use an algorithm to translate the
biobot sensor data into a rough map of the unknown environment.
Once the program receives enough data to map the defined area, the
UAV moves forward to hover over an adjacent, unexplored section. The
biobots move with it, and the mapping process is repeated. The software
program then stitches the new map to the previous one. This can be
repeated until the entire region or structure has been mapped; that map
could then be used by first responders or other authorities.
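The stitching step can be pictured with a minimal sketch. Everything below is an assumption for illustration, not the NC State implementation: maps are modeled as grids of 'free'/'wall' cells, and the UAV's known displacement between mapping rounds supplies the offset for merging.

```python
# Minimal sketch (assumptions only): stitching a new local occupancy map
# into a growing global map, using the UAV's displacement as the offset.

def stitch(global_map, local_map, offset):
    """Merge a local map into the global one.

    Maps are dicts keyed by (x, y) grid cells with values 'free' or
    'wall'; `offset` is the UAV's (dx, dy) displacement between rounds.
    """
    dx, dy = offset
    merged = dict(global_map)
    for (x, y), value in local_map.items():
        cell = (x + dx, y + dy)
        # A cell seen as a wall in either round stays a wall.
        if merged.get(cell) == 'wall' or value == 'wall':
            merged[cell] = 'wall'
        else:
            merged[cell] = 'free'
    return merged

round1 = {(0, 0): 'free', (1, 0): 'wall'}
round2 = {(0, 0): 'free', (1, 0): 'free'}   # UAV moved 2 cells right
global_map = stitch(round1, round2, offset=(2, 0))
```

The real system infers the local maps themselves from biobot proximity encounters rather than from known positions; the published algorithms are in the two papers named at the end of this section.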
"This has utility for areas – like collapsed buildings – where GPS
can't be used," Lobaton says. "A strong radio signal from the UAV could
penetrate to a certain extent into a collapsed building,
keeping the biobot swarm contained. And as long as we can get a signal
from any part of the swarm, we are able to retrieve data on what the
rest of the swarm is doing. Based on our experimental data, we know
you're going to lose track of a few individuals, but that shouldn't
prevent you from collecting enough data for mapping."
Co-lead author Alper Bozkurt, an associate professor of electrical and computer engineering at NC State, has previously developed functional cockroach biobots.
However, to test their new mapping technology, the research team relied
on inch-and-a-half-long robots that simulate cockroach behavior.
In their experiment, researchers released these robots into a
maze-like space, with the effect of the UAV beacon emulated using an
overhead camera and a physical boundary attached to a moving cart. The
cart was moved as the robots mapped the area.
"We had previously developed
proof-of-concept software that allowed us to map small areas with
biobots, but this work allows us to map much larger areas and to stitch
those maps together into a comprehensive overview," Lobaton says. "It
would be of much more practical use for helping to locate survivors
after a disaster, finding a safe way to reach survivors, or for helping
responders determine how structurally safe a building may be.
"The next step is to replicate these experiments using biobots, which we're excited about."
An article on the framework for developing local maps and stitching
them together, "A Framework for Mapping with Biobotic Insect Networks:
From Local to Global Maps," is published in Robotics and Autonomous Systems.
An article on the theory of mapping based on the proximity of mobile
sensors to each other, "Geometric Learning and Topological Inference
with Biobotic Networks," is published in IEEE Transactions on Signal and Information Processing over Networks.
Virtual reality technology can have you thinking you are doing many things, but eating virtually remains largely uncharted territory.
Imagine
what the tourism industry could do with VR technology extending sensory
stimulation beyond the eyes and ears. Imagine inviting prospective
restaurant clients in virtual reality mode to the meat, fish and chicken
specialties, pizza or chocolate cakes. Imagine any number of
applications where the sensory experience in virtual reality expands.
Scientists are focusing on VR technology that can fool you into thinking you are tasting food that is not, of course, really there. Researchers from Singapore and another team from Japan have their own studies exploring the realm of tasting and even chewing.
Vlad Dudau of Neowin said these explorers managed to replicate the tastes and textures of different foods.
A recent user-interface conference in Japan offered plenty of "food" tech for thought.
The work, titled "Virtual Sweet: Simulating Sweet Sensation Using Thermal Stimulation on the Tip of the Tongue," explored what it is like to taste sweet food virtually.
"Being a pleasurable sensation, sweetness is recognized as the most
preferred sensation among the five primary taste sensations. In this
paper, we present a novel method to virtually simulate the sensation of sweetness by applying thermal stimulation to the tip of the human tongue.
To digitally simulate the sensation of sweetness, the system delivers
rapid heating and cooling stimuli to the tongue via a 2x2 grid of
Peltier elements. To achieve distinct, controlled, and synchronized
temperature variations in the stimuli, a control module is used to
regulate each of the Peltier elements. Results from our preliminary
experiments suggest that the participants were able to perceive mild
sweetness on the tip of their tongue while using the proposed system."
Nimesha Ranasinghe and Ellen Yi-Luen Do of the National University of Singapore are the two explorers. Their device uses changes in temperature to mimic the sensation of sweetness on the tongue.
Victoria Turk in New Scientist wrote about what their
technology does: "The user places the tip of their tongue on a square of
thermoelectric elements that are rapidly heated or cooled, hijacking thermally sensitive neurons that normally contribute to the sensory code for taste."
MailOnline described it as a "virtual sweetness instrument" that makes use of "a grid of four elements which generate temperature changes of 5°C in a few seconds." When applied to the tip of the tongue, said the report, "the temperature change results in a virtual sweet sensation." A 9V battery powers the device. Results: out of 15 people, eight registered a very mild sweet taste, said MailOnline.
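As a rough sketch of the kind of control signal involved, each Peltier element would be driven through a heat-then-cool set-point schedule producing the reported ~5°C swing over a few seconds. The baseline temperature, ramp time, and update rate below are assumptions for illustration, not the published parameters.

```python
# Illustrative sketch only: a heat-then-cool temperature set-point
# schedule for each element of a 2x2 Peltier grid. Baseline, ramp time,
# and step size are invented; only the ~5 C swing comes from the report.

def setpoint_schedule(base_c=35.0, swing_c=5.0, ramp_s=2.0, step_s=0.1):
    """Return (time_s, temperature_c) pairs: ramp up swing_c, then back."""
    points = []
    steps = int(ramp_s / step_s)
    for i in range(steps + 1):          # heating ramp
        points.append((i * step_s, base_c + swing_c * i / steps))
    for i in range(1, steps + 1):       # cooling ramp
        points.append((ramp_s + i * step_s,
                       base_c + swing_c * (1 - i / steps)))
    return points

# One independently driven schedule per element in the 2x2 grid
grid = {(r, c): setpoint_schedule() for r in range(2) for c in range(2)}
```

In the actual instrument a control module regulates each element so the four stimuli stay distinct and synchronized.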
Applications could include a taste-enhancing technology for dieters. Dr. Ranasinghe told MailOnline: "We believe this will be especially helpful for people on restricted diets, for example salt (hypertension and heart problems) and sugar (diabetics)." New Scientist said Ranasinghe and Do could see a system like this embedded in a glass or mug to make low-sugar drinks taste sweeter.
Another group from the University of Tokyo is using electrodes to
stimulate the jaw muscles. Tokyo Researchers Arinobu Niijima and
Takefumi Ogawa are reporting results from an electrical muscle
stimulation (EMS) test of jaw movements in chewing.
"We propose Electric Food Texture System, which can present virtual
food texture such as hardness and elasticity by electrical muscle
stimulation (EMS) to the masseter muscle," said the researchers in a
video posted last month on their work, "Study on Control Method of
Virtual Food Texture by Electrical Muscle Stimulation."
Dudau in Neowin described their experiment, where "scientists
attached electrodes to jaw muscles and managed to simulate the sensation
of biting into different materials. For example, by varying the
electrical stimulation, users reported that while eating a real cookie,
it felt like biting into something soft, or chewing something hard
alternatively."
Turk in New Scientist also talked about the Tokyo team who
presented "a device that uses electricity to simulate the experience of
chewing foods of different textures. Arinobu Niijima and Takefumi
Ogawa's Electric Food Texture System also uses electrodes, but not on
the tongue, instead they place them on the masseter muscle – a muscle in
the jaw used for chewing – to give sensations of hardness or chewiness
as a user bites down. 'There is no food in the mouth, but users feel as
if they are chewing some food due to haptic feedback by electrical
muscle stimulation,' says Niijima."
Getting into technical details, MailOnline said "By delivering
short pulses of between 100 to 250 Hz they were able to stimulate the
masseter muscles, used to chew solid foods."
So if the 'sugar' researchers were looking at taste sensation, these
researchers were looking at food texture. They said, "In this paper, we
investigated the feasibility to control virtual food texture by EMS."
The researchers said on their video page, "We conducted an experiment
to reveal the relationship of the parameters of EMS and those of
virtual food texture. The experimental results show that the higher
strength of EMS is, the harder virtual food texture is, and the longer
duration of EMS is, the more elastic virtual food texture is."
At a higher frequency, the sensation was that of eating tougher, chewier food, while a longer pulse simulated a more elastic texture.
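The reported trend can be captured in a toy mapping (not the Tokyo group's model; the scaling constants and parameter ranges below are invented for illustration): EMS strength drives perceived hardness, and pulse duration drives elasticity.

```python
# Toy illustration of the reported relationship between EMS parameters
# and virtual food texture. Constants are invented; only the direction
# of each trend (stronger -> harder, longer -> more elastic) is from
# the researchers' results.

def perceived_texture(strength_ma, duration_ms):
    """Map EMS parameters to rough 0-1 hardness/elasticity scores."""
    hardness = min(1.0, strength_ma / 20.0)      # stronger -> harder
    elasticity = min(1.0, duration_ms / 500.0)   # longer -> more elastic
    return {"hardness": hardness, "elasticity": elasticity}

cookie = perceived_texture(strength_ma=5, duration_ms=100)   # short pulse
gummy = perceived_texture(strength_ma=5, duration_ms=400)    # long pulse
```

With equal strength, the longer pulse scores as more elastic, mirroring the chewy-versus-hard effect users reported while biting a real cookie.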
Rice University’s low-cost, open-source Light Plate Apparatus
can easily be used by nonengineers and noncomputer programmers and can
be assembled by a nonexpert in one day from components costing less than
$150. Credit: Jeff Fitlow/Rice University
Nobody likes a cheater, but Rice University
bioengineering graduate student Karl Gerhardt wants people to copy his
answers. That's the whole point.
Gerhardt and Rice colleagues have created the first low-cost, easy-to-use optogenetics
hardware platform that biologists who have little or no training in
engineering or software design can use to incorporate optogenetics
testing in their labs.
Rice's Light Plate Apparatus (LPA) is described in a paper available for free online this week in the open-access journal Scientific Reports.
The LPA, which was created in the lab of Jeffrey Tabor, assistant
professor of bioengineering, uses open-source hardware and software. The
apparatus can deliver two independent light signals to each well in a
standard 24-well plate and has sockets that accept LEDs of wavelengths
ranging from blue to far red. Total component costs for the LPA are less
than $400—$150 for labs with a 3-D printer—and each unit can be
assembled and calibrated by a nonexpert in one day.
"Our intent is to bring optogenetics to any researcher interested in
using it," said Tabor, whose students created the LPA. In doing so, they
found ways to make most of its parts with 3-D printers and also created
software called Iris that uses simple buttons and pull-down menus to
allow researchers to program the instrument for a wide range of
experiments.
Rice bioengineers Karl Gerhardt (left) and Jeffrey Tabor with
the Light Plate Apparatus, a low-cost, open-source optogenetics
platform. Credit: Jeff Fitlow/Rice University
Optogenetics, which was developed in the
past 15 years, involves genetically modifying cells with light-sensing
molecules so that light can be used to turn genes and other cellular
processes on or off. Its most notable successes have come in
neuroscience following the invention of brain-implantable optical neuro
interfaces, which have explored the cells and mechanisms associated with
aggression, parenting, drug addiction, mating, same-sex attraction,
anxiety, obsessive-compulsive disorders and more.
"Over the past 5-10 years, practically every biological process has
been put under optogenetics control," said Gerhardt, who works in
Tabor's lab. "The problem is that while everyone has been developing the
biological tools to do optogenetics—the light-sensing proteins,
gene-expression systems, protein interactions, etc.—outside of
neuroscience, no one has really developed good hardware that makes it
easy to use those tools."
To demonstrate the broad applicability of LPA, Tabor, Gerhardt and
co-authors used the system to perform a series of optogenetics tests on a
diverse set of model organisms, including gut bacteria, yeast,
mammalian cells and photosynthetic cyanobacteria.
Gerhardt didn't come to Rice intending to invent the world's first
easy-to-use optogenetics research platform. A biochemist by training, he
initially was interested in simply creating something that would allow
him to incorporate optogenetics in his own research. In early 2014,
Gerhardt was studying the social amoeba Dictyostelium discoideum. Evan
Olson, another Ph.D. student in Tabor's group, had just created the
"light tube array," or LTA, an automated system for doing optogenetics
on up to 64 test tubes at a time.
Unfortunately for Gerhardt, D. discoideum,
which biologists commonly call "dicty," prefers to grow on flat
surfaces, like Petri dishes and flat-bottomed well plates. Dicty is also
sensitive to vibrations and movement. Like dicty, many organisms
commonly studied in biology labs, including many animal cell lines and
virtually all human cells, require similar conditions.
"I couldn't culture dicty in the LTA, so I built a sort of
plate-based version, and I used it for a couple of experiments, but it
didn't work very well," Gerhardt said. "Then, some other people in our
lab who had training in electrical engineering and Evan, with his
physics background, said, 'We can take this version and make it a lot
better.'"
Gerhardt said the group kept innovating and coming up with new
versions of the hardware. For example, to make it easy to change the
wavelength of light, the team incorporated standard sockets so it would
be easy to swap out different colored LEDs. They also added a low-cost
microcontroller with an SD card reader, drivers capable of producing
more than 4,000 levels of light intensity and millisecond time control.
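A light program for such hardware amounts to a per-well, per-channel schedule of intensity levels over time. The sketch below is hypothetical: the file layout and function names are invented for illustration (the LPA's real formats and firmware are published on GitHub); only the 24-well plate, two channels per well, and the more-than-4,000-level drivers come from the article.

```python
# Hypothetical sketch of composing an LPA-style light program: per-well
# intensity schedules clipped to a 4,095-step driver range. Layout and
# names are invented; the real format is in the project's GitHub repo.

MAX_LEVEL = 4095  # drivers support more than 4,000 intensity levels

def staircase(n_steps, step_ms, max_level=MAX_LEVEL):
    """Millisecond-resolution staircase of (time_ms, level) pairs."""
    return [(i * step_ms, min(max_level, i * max_level // (n_steps - 1)))
            for i in range(n_steps)]

# Two independent LED channels for each well of a 24-well plate
program = {well: {"ch1": staircase(5, 1000), "ch2": staircase(5, 500)}
           for well in range(24)}
```

Iris's role is to let a biologist build exactly this kind of schedule from buttons and pull-down menus, then write it to the SD card the microcontroller reads.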
"We got more and more ambitious in terms of the features we wanted to
add, and now we're on version three or four of the hardware," he said.
"Then Lucas (Hartsough), Brian (Landry) and Felix (Ekness), members of
our group who had expertise in programming and website design, said,
'We'll make the software,' and that's where Iris came from."
Rice University graduate student Sebastián Castillo-Hair
conducts tests with the Light Plate Apparatus, an open-source
optogenetics research platform developed in the laboratory of Rice’s
Jeffrey Tabor, assistant professor of bioengineering. Credit: Jeff
Fitlow/Rice University
Iris makes use of a graphical user
interface to allow people without specialized computer training to
easily program experiments for the LPA.
"Programming is a major barrier for some biologists who want to work
with this kind of hardware," Gerhardt said. "Optogenetics hardware, most
of the time, requires someone with programming experience who can go
into the command line and write code. We wanted to eliminate that
barrier."
To simplify getting started with the LPA, Tabor and Gerhardt have published all the software, design files and specifications for the system on GitHub, a site that caters to the do-it-yourself community by making it easy to create, share and track different versions of software and files for open-source platforms like the LPA.
Gerhardt said at least a half-dozen research groups began making LPAs
after an early version of the paper was posted on a biology preprint
server, and he hopes many more begin using it now that the Scientific Reports paper has been published.
"I hope this becomes the standard format for doing general
optogenetics experiments, especially for people on the biology end of
the spectrum who would never think about building their own hardware,"
Gerhardt said. "I hope they'll see this and say, 'OK. We can do
optogenetics now.'"
Good luck ever trusting a recording again. As things stand, recordings presented in court as evidence may soon carry little weight. A low-quality video has emerged from the Adobe MAX conference showing a demo for a prototype of a new software tool, called Project VoCo, that appears to be a Photoshop for audio.
The program is shown synthesizing a man's voice to read different sentences based on the software's analysis of a real clip of him speaking. Just copy and paste to change it from "I kissed my dog and my wife" to "I kissed my wife and my wife." Or even insert entirely new words—they still sound eerily authentic.
In case you were confused about what the software's intended purpose is, Adobe issued a statement:
When
recording voiceovers, dialog, and narration, people would often like to
change or insert a word or a few words due to either a mistake they
made or simply because they would like to change part of the narrative.
We have developed a technology called Project VoCo in which you can
simply type in the word or words that you would like to change or insert
into the voiceover. The algorithm does the rest and makes it sound like
the original speaker said those words.
The crowd laughs and cheers uproariously as the program is demoed, seemingly unaware of the disturbing implications of a program like this, especially in the context of an election cycle where distortions of truth are commonplace. Being able to synthesize audio, or to claim that real audio was synthesized, would only muddy the waters even further.
Somehow
the clip also involves the comedian Jordan Peele, present at the
conference, whose shocked expression is the only indication that anyone
there is thinking about how this software will be used out in the real
world.
A person is able to perceive and localize individual phosphenes, or spots of light... well, big deal. Meaning, no big deal for those who can see.
However, for the blind hoping to discover or rediscover the gift of sight, this could be an encouraging development. Second Sight is a group in the news with a relevant announcement. The Daily Mail, as well as a number of other sites, discussed their work.
Late last month, Second Sight announced its visual cortical
stimulator was successfully implanted and activated in its first human
subject. This is a "visual cortical prosthesis," as the company
described it.
Second Sight develops, makes and markets implantable visual prosthetics.
The patient, 30 years old, according to the company announcement,
"was able to perceive and localize individual phosphenes or spots of
light with no significant adverse side effects."
The device was developed as part of the Orion 1 program by Second Sight. The Daily Mail said a chip bypasses the eyes and sends wireless signals directly to the brain.
This patient experience, note, is an encouraging step forward, but the technology tested did not yet include a camera.
"This implant was performed as part of a proof of concept clinical
trial whose purpose is to demonstrate initial safety and feasibility of
human visual cortex
stimulation. The initial success of this study, coupled with the
significant additional pre-clinical work gathered to-date readies Second
Sight to submit an application to the FDA in early 2017 to gain
approval for conducting an initial clinical trial of the complete Orion I
system, including the camera and glasses. Assuming positive initial
results in patients and discussions with regulators, an expanded pivotal
clinical trial for global market approvals is then planned."
Significance? Second Sight's Dr. Robert Greenberg, chairman of the board, said, "By bypassing the optic nerve
and directly stimulating the visual cortex, the Orion I has the
potential to restore useful vision to patients completely blinded due to
virtually any reason, including glaucoma, cancer, diabetic retinopathy,
or trauma." For those who look in vain for available therapy today, the
Orion I could offer hope to increase their independence.
Dr. Nader Pouratian is a UCLA neurosurgeon who performed the surgery. The procedure was performed as part of a proof-of-concept trial at UCLA, said MassDevice. "The trial looks to show initial safety and feasibility for human visual cortex stimulation."
What's next: "When the team receives approval from the US Food and
Drug Administration, which they hope will be early next year, they will
try sending video signals from a system called the Orion I," said Daily Mail, "which captures images in front of the eyes using a camera on the bridge of a pair of glasses."
RT similarly discussed the company's next step: connecting the implant to a camera on a pair of glasses.
The Daily Mail said that during the six weeks of testing, the
patient consistently saw "the exact signals the scientists sent to her
visual cortex, the section of the brain which usually receives images from the optic nerve."
The company's U.S. headquarters are in Sylmar, California, and its European headquarters are in Lausanne, Switzerland.
Dazzling eyelid-like features bursting with stars in galaxy IC
2163 formed from a tsunami of stars and gas triggered by a glancing
collision with galaxy NGC 2207 (a portion of its spiral arm is shown on
right side of image). ALMA image of carbon monoxide (orange), which
revealed motion of the gas in these features, is shown on top of Hubble
image (blue) of the galaxy. Credit: M. Kaufman; B. Saxton
(NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble Space Telescope
Astronomers using the Atacama Large
Millimeter/submillimeter Array (ALMA) have discovered a tsunami of stars
and gas that is crashing midway through the disk of a spiral galaxy
known as IC 2163. This colossal wave of material - which was triggered
when IC 2163 recently sideswiped another spiral galaxy dubbed NGC 2207 -
produced dazzling arcs of intense star formation that resemble a pair
of eyelids.
"Although galaxy collisions
of this type are not uncommon, only a few galaxies with eye-like, or
ocular, structures are known to exist," said Michele Kaufman, an
astronomer formerly with The Ohio State University in Columbus and lead
author on a paper published today in the Astrophysical Journal.
Kaufman and her colleagues note that the paucity of similar features
in the observable universe is likely due to their ephemeral nature.
"Galactic eyelids last only a few tens of millions of years, which is
incredibly brief in the lifespan of a galaxy. Finding one in such a
newly formed state gives us an exceptional opportunity to study what
happens when one galaxy grazes another," said Kaufman.
The interacting pair of galaxies resides approximately 114 million
light-years from Earth in the direction of the constellation Canis
Major. These galaxies brushed past each other - scraping the edges of
their outer spiral arms - in what is likely the first encounter of an
eventual merger.
Using ALMA's remarkable sensitivity and resolution, the astronomers made the most detailed measurements ever of the motion of carbon monoxide gas in the galaxy's narrow eyelid features. Carbon monoxide is a tracer of molecular gas, which is the fuel for star formation.
Annotated image showing dazzling eyelid-like features bursting
with stars in galaxy IC 2163 formed from a tsunami of stars and gas
triggered by a glancing collision with galaxy NGC 2207 (a portion of its
spiral arm is shown on right side of image). ALMA image of carbon
monoxide (orange), which revealed motion of the gas in these features,
is shown on top of Hubble image (blue) of the galaxy. Credit: M.
Kaufman; B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble
Space Telescope
The data reveal that the gas in the outer
portion of IC 2163's eyelids is racing inward at speeds in excess of 100
kilometers a second. This gas, however, quickly decelerates and its
motion becomes more chaotic, eventually changing trajectory and aligning
itself with the rotation of the galaxy rather than continuing its
pell-mell rush toward the center.
"What we observe in this galaxy is very much like a massive ocean
wave barreling toward shore until it interacts with the shallows,
causing it to lose momentum and dump all of its water and sand on the
beach," said Bruce Elmegreen, a scientist with IBM's T.J. Watson
Research Center in Yorktown Heights, New York, and co-author on the
paper.
"Not only do we find a rapid deceleration of the gas as it moves from
the outer to the inner edge of the eyelids, but we also measure that
the more rapidly it decelerates, the denser the molecular gas becomes,"
said Kaufman. "This direct measurement of compression shows how the
encounter between the two galaxies drives gas to pile up, spawn new star clusters and form these dazzling eyelid features."
Computer models predict that such eyelid-like features could evolve
if galaxies interacted in a very specific manner. "This evidence for a
strong shock in the eyelids is terrific. It's all very well to have a
theory and simulations suggesting it should be true, but real
observational evidence is great," said Curtis Struck, a professor of
astrophysics at Iowa State University in Ames and co-author on the
paper.
Galaxies IC 2163 (left) and NGC 2207 (right) recently grazed
past each other, triggering a tsunami of stars and gas in IC 2163 and
producing the dazzling eyelid-like features there. ALMA image of carbon
monoxide (orange), which revealed motion of the gas in these features,
is shown on top of Hubble image (blue) of the galaxy pair. Credit: M.
Kaufman; B. Saxton (NRAO/AUI/NSF); ALMA (ESO/NAOJ/NRAO); NASA/ESA Hubble
Space Telescope
"ALMA showed us that the velocities of the
molecular gas in the eyelids are on the right track with the predictions
we get from computer models," said Kaufman. "This critical test of
encounter simulations was not possible before."
Astronomers believe that such collisions between galaxies were common
in the early universe when galaxies were closer together. At that time,
however, galactic disks were generally clumpy and irregular, so other
processes likely overwhelmed the formation of similar eyelid features.
The authors continue to study this galaxy pair
and currently are comparing the properties (e.g., locations, ages, and
masses) of the star clusters previously observed with NASA's Hubble
Space Telescope with the properties of the molecular clouds observed
with ALMA. They hope to better understand the differences between
molecular clouds and star clusters in the eyelids and those elsewhere in the galaxy pair.