HACKING COMPUTERS VIA THE FAN HARDWARE



Here’s a security update to haunt your dreams, and to make the FBI’s quest for un-exploitable cryptographic backdoors look all the more absurd: a team of Israeli researchers has now shown that the sounds made by a computer’s fan can be analyzed to extract everything from usernames and passwords to full encryption keys. It’s not really a huge programming feat, as we’ll discuss below, but from a conceptual standpoint it shows how wily modern cyber attackers can be, and why the weakest link in any security system still involves the human element.
In hacking, there’s a term called “phreaking” that used to refer to phone hacking via automated touch-tone systems, but which today colloquially refers to any kind of system investigation or manipulation that uses sound as its main mechanism of action. Phone phreakers used to make free long-distance phone calls by playing the correct series of tones into a phone receiver, but phreaks can listen to sounds just as easily as they can produce them, often with even greater effect.

That’s because sound has the potential to get around one of the most powerful and widely used methods in high-level computer security: air-gapping, or the separation of a system from any externally connected network an attack might be able to use for entry. (The term pre-dates wireless internet, and a Wi-Fi-connected computer is not air-gapped, despite the literal gap of air around it.)
So how do you hack your way into an air-gapped computer? Use something that moves easily through the air, and which all computers produce to one extent or another: sound.
One favorite worry of paranoiacs is something called Van Eck phreaking, in which you capture a device's stray electromagnetic emissions to derive something about what the device is doing; in extreme cases, it's alleged that an attacker can recreate the image on the screen of a CRT monitor from those emissions alone. Another, more recent phreaking victory showed that it is possible to break RSA encryption with a full copy of the encrypted message plus an audio recording of the processor as it goes through the normal, authorized decryption process.
Note that in order to do any of this, you have to get physically close enough to your target to put a microphone within listening range. If your target system is inside CIA Headquarters, or Google X, you’re almost certainly going to need an agent on the inside to make that happen — and if you’ve got one of those available, you can probably use them to do a lot more than place microphones in places. On the other hand, once placed, this microphone’s security hole won’t be detectable in the system logs, since it’s not actually interacting with the system in any way, just hoovering up incidental leakage of information.
This new fan attack actually requires even more specialized access, since you have to not only get a mic close to the machine, but also infect the machine with fan-exploiting malware. The idea is that most security software actively looks for anything that might be unusual or harmful behavior, from sending out packets of data over the internet to making centrifuges spin up and down more quickly. Security researchers might have enough foresight to look at fan activity from a safety perspective, and make sure no malware turns the fans off and melts the computer, but will they be searching for data leaks in such an out-of-the-way part of the machine? After this paper, the answer is: “You’d better hope so.”

The team used two fan speeds to represent the 1s and 0s of their code (1,000 and 1,600 RPM, respectively) and listened to the sequence of fan whines to keep track. Their maximum “bandwidth” is about 1,200 bits an hour, or about 0.15 kilobytes. That might not sound like a lot, but 0.15KB of sensitive, identifying information can be crippling, especially if it’s something like a password that grants further access. You can fit a little over 150 alphanumeric characters into that space; that’s a whole lot of passwords to lose in a single hour.
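The scheme described is simple binary signaling: one fan speed per bit value. Here is a minimal sketch of the encode/decode logic; the two RPM levels match the figures above, while the framing (8 bits per character, most significant bit first) and the per-bit timing are illustrative assumptions.

```python
# Sketch of the fan-speed covert channel described above: a compromised
# machine encodes each bit as a fan RPM level, and a nearby microphone
# decodes the sequence of fan whines. RPM values are from the article;
# the framing and 3-second-per-bit timing are illustrative assumptions.

RPM_ZERO = 1600       # fan speed representing a 0 bit
RPM_ONE = 1000        # fan speed representing a 1 bit
SECONDS_PER_BIT = 3   # 1,200 bits/hour is roughly one bit every 3 seconds

def encode(message: bytes) -> list[int]:
    """Turn a message into the sequence of fan speeds to emit."""
    rpms = []
    for byte in message:
        for i in range(7, -1, -1):          # most significant bit first
            bit = (byte >> i) & 1
            rpms.append(RPM_ONE if bit else RPM_ZERO)
    return rpms

def decode(rpms: list[int]) -> bytes:
    """Recover the message from a sequence of observed fan speeds."""
    out = bytearray()
    for i in range(0, len(rpms), 8):
        byte = 0
        for rpm in rpms[i:i + 8]:
            byte = (byte << 1) | (1 if rpm == RPM_ONE else 0)
        out.append(byte)
    return bytes(out)

signal = encode(b"hunter2")                 # a leaked password
assert decode(signal) == b"hunter2"
# 7 characters * 8 bits * 3 s/bit = 168 seconds to exfiltrate one password
```

At this rate a microphone that overhears a few hours of fan noise collects several such secrets, which is the crux of the attack.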
There is simply no way to make any system immune to infiltration. You can limit the points of vulnerability, then supplement those points with other measures. That’s what air-gapping is: condensing the vulnerabilities down to physical access to the machine, then shoring that up with big locked metal doors, security cameras, and armed guards.
But if Iran can’t keep its nuclear program safe, and the US can’t keep its energy infrastructure safe, and Angela Merkel can’t keep her cell phone safe — how likely are the world’s law enforcement agencies to be able to ask a bunch of software companies to keep millions of diverse and security-ignorant customers safe, with one figurative hand tied behind their backs?

On the other hand, this story also illustrates the laziness of the claim that the FBI can’t develop ways to hack these phones on its own, a reality that is equally distressing in its own way. The FBI has bragged that it’s getting better at such attacks “every day,” meaning that the only things protecting you from successful attacks against your phone are the research resources available to the FBI, and the access to your phone that the FBI can rely on having, for instance by seizing it.
Nobody should be campaigning to make digital security weaker, to any extent, for any reason — as this story shows, our most sensitive information is already more than vulnerable enough as it is.




 


GROWING MUSCLES WITHOUT WORKING OUT




USC researcher Megan L. McCain and colleagues have devised a way to develop bigger, stronger muscle fibers. But instead of popping up on the bicep of a bodybuilder, these muscles grow on a tiny scaffold or "chip" molded from a type of water-logged gel made from gelatin.
First authors Archana Bettadapur and Gio C. Suh describe these muscles-on-a-chip in a new study published in Scientific Reports.
During normal embryonic development, skeletal muscles form when cells called myoblasts fuse to form muscle fibers, known as myotubes.
In past experiments, mouse myotubes have detached or delaminated from protein-coated plastic scaffolds after approximately one week and failed to thrive.
In this experiment, the researchers fabricated a gel scaffold from gelatin, a derivative of the naturally occurring muscle protein collagen, and achieved much better results. After three weeks, many of the mouse myotubes were still adhering to these gelatin chips, and they were longer, wider and more developed as a result.
The researchers anticipate that human myotubes would thrive equally well on gelatin chips. These new and improved "muscles-on-a-chip" could then be used to study human muscle development and disease, as well as provide a relevant testing ground for new potential drugs.
"Disease and disorders involving skeletal muscle—ranging from severe muscular dystrophies to the gradual decrease in muscle mass with aging—dramatically reduce the quality of life for millions of people," said McCain, assistant professor of biomedical engineering at the USC Viterbi School of Engineering, and stem cell biology and regenerative medicine at the Keck School of Medicine of USC. "By creating an inexpensive and accessible platform for studying skeletal muscle in the laboratory, we hope to enable research that will usher in new treatments for these patients."
McCain is already putting the gelatin chips into action as the winner of an Eli and Edythe Broad Innovation Award in Stem Cell Biology and Regenerative Medicine at USC. The award provides $120,000 to McCain and her two collaborators: Justin Ichida, assistant professor of stem cell biology and regenerative medicine; and Dion Dickman, assistant professor of biological sciences at the USC Dornsife College of Letters, Arts and Sciences. In their project, they will use the gelatin chips for studying amyotrophic lateral sclerosis (ALS), or Lou Gehrig's disease, which damages the intersections between motor nerve cells and muscle cells, called neuromuscular junctions (NMJs). McCain, Ichida and Dickman will use skin or blood cells from patients with ALS to generate and study NMJs on gelatin chips.

Liddiard's Toyota Echo: 360-degree driving!


Have you ever driven to an event and seen a small space where you could park your car, yet couldn't, because your car can't move in that direction? Have you ever tried to position your car just so for easier access? Have you ever wanted to free up some room in the parking space you own? Well, this might just be the answer.
William Liddiard is the inventor of a set of wheels that can move his car not only forward but sideways too. The one task drivers usually hate is parking, and the advantage that most easily comes to mind for these wheels is parallel parking.
Gizmodo contributing editor Andrew Liszewski took note of how Liddiard's Toyota Echo could "move in any direction, spin 360-degrees, and slide into a parking spot making parallel parking easier than actual driving."
Another advantage, said Mandelbaum, would be in edging your car "closer to the drive-thru window" so that you do not have to reach out so far.
In the video, "the wheels scoot the tiny Toyota Echo around the driveway forward, backwards, left, right and in circles. The car moves forward and backwards the regular way, and left/right when the tire tubes rotate inwards or outwards."
But wait, how do they actually work? Matthew Reynolds in Wired: "He doesn't give any details about the technology behind his creation, but other omnidirectional wheels work by having small discs around the outside edge of the wheel which allow the wheel to slide sideways as well as be driven forwards and backwards. Such wheels have been around for nearly a century and are quite common in small autonomous robots but they've never been fitted to widely available everyday cars."
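Liddiard has not disclosed his mechanism, but the conventional omnidirectional wheels Reynolds describes (mecanum wheels) are driven by solving simple inverse kinematics: given a desired body motion, each wheel's speed follows from its position and roller angle. A sketch under standard mecanum assumptions, with illustrative geometry values, not Liddiard's design:

```python
# Inverse kinematics for a standard four-mecanum-wheel platform (not
# Liddiard's undisclosed design). Given a desired body motion, compute
# each wheel's angular speed. All geometry values are illustrative.

WHEEL_RADIUS = 0.3   # metres
HALF_LENGTH = 1.2    # half the wheelbase, metres
HALF_WIDTH = 0.8     # half the track width, metres

def wheel_speeds(vx: float, vy: float, omega: float) -> dict[str, float]:
    """vx: forward m/s, vy: leftward m/s, omega: yaw rate rad/s.
    Returns the angular speed (rad/s) of each wheel in an X-roller layout."""
    k = HALF_LENGTH + HALF_WIDTH
    return {
        "front_left":  (vx - vy - k * omega) / WHEEL_RADIUS,
        "front_right": (vx + vy + k * omega) / WHEEL_RADIUS,
        "rear_left":   (vx + vy - k * omega) / WHEEL_RADIUS,
        "rear_right":  (vx - vy + k * omega) / WHEEL_RADIUS,
    }

# Pure sideways slide into a parallel-parking spot: wheels on the same
# diagonal spin together, opposite diagonals spin in opposite directions,
# and the angled rollers cancel all forward motion.
strafe = wheel_speeds(vx=0.0, vy=1.0, omega=0.0)
assert strafe["front_left"] == -strafe["front_right"]
assert strafe["front_left"] == strafe["rear_right"]
```

Setting vx and vy to zero while omega is non-zero gives the spin-in-place move shown in the video; any blend of the three produces motion in an arbitrary direction.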
It is not the first of its kind: "The concept of an omni-directional wheel isn't exactly new, since several tires like this exist specifically for construction vehicles that need to move in specific ways, but this particular model stands out," said Catrina Dennis in Inverse.
Liddiard Wheels are powered by 24,000 pounds of torque applied directly on the tire, said the video notes.
Liddiard said they can be bolted on to any car. "This is a world first bolt-on application for anything with wheels."
Earlier this year, in March, The London Free Press took a look at Liddiard's work. "Omni-directional wheels, or mecanum wheels, have been around for the past 50 years. Companies such as Honda and Toyota have already invented several different versions. Liddiard said his wheel has a better design, and can function on all surfaces."
What's next? Liddiard said in the video's notes that "These are proof of concept prototypes to show that they work. Finished wheels will be refined to target application requirements."
Liddiard, who spoke with Inverse, said, "I would like to see [the wheel] used in every market." He gave as examples material handling, mobile robotics, personal mobility and autonomous cars. "As for taking the product to market, Liddiard is more than ready," wrote Dennis. Liddiard said, "Ultimately I will bring this to market myself or a suitable company can obtain rights to it."

PLEUROBOT, A STEP TOWARDS MECHANISED MAN





By using X-ray video to observe the movements of creatures like salamanders and komodo dragons, EPFL scientists have been able to see and understand the mechanics behind their locomotion, and to keep a detailed record of how their vertebrae work.
EPFL scientists have built a new robot that mimics the movement of a salamander with remarkable accuracy, as if the creature's brain had been transferred into a robotic body. The robot features 3D-printed bones, motorized joints and electronic circuitry as its "nervous system". Inspired by the salamander species Pleurodeles waltl, "Pleurobot" can walk, crawl, and even swim underwater. The results are featured today in the Journal of the Royal Society Interface.
Auke Ijspeert and his team at EPFL's Biorobotics Laboratory have built salamander robots before, but this is the first time that they have built a robot that is accurately based on the 3D motion of the animal's skeleton. The scientists used x-ray videos of a salamander from the top and the side, tracking up to 64 points along its skeleton while it performed different types of motion in water and on the ground.
"What is new is really our approach to building Pleurobot. It involves striking a balance between designing a simplified bone structure and replicating the salamander's gait in three dimensions," said Auke Ijspeert.
Pleurobot was built with fewer “bones” and joints than the real-life creature. The robot features only 27 motors and 11 segments along its spine, while the amphibian has 40 vertebrae and multiple joints, some of which can even rotate freely and move side-to-side or up and down. In the design process, the researchers identified the minimum number of motorized segments required, as well as the optimal placement along the robot's body. As a result, it could replicate many of the salamander's types of movement.
"Animal locomotion is an inherently complex process," says Kostas Karakasilliotis who designed the first versions of the Pleurobot. "Modern tools like cineradiography, 3D printing, and fast computing help us draw closer and closer to understanding and replicating it."
In Auke Ijspeert’s view, vertebrate locomotion is a sophisticated interplay between the spinal cord, the body and the environment. It is the spinal cord that controls motion, not the brain; so mimicking the salamander's movement gives insight into how the spinal cord works and how it interacts with the body. A robot that so closely mimics the biomechanical properties of the body can become a useful scientific tool to investigate these interactions.
Learning about the salamander's spinal cord provides insight into its function in all vertebrates, including humans. The morphology of the amphibian closely resembles that of the first terrestrial creatures, which means that from an evolutionary point of view, the salamander is our ancestor.
Neurobiologists have shown that electrical stimulation of the spinal cord is what determines whether the salamander walks, crawls or swims. At the lowest level of stimulation, the salamander walks; with higher stimulation, its pace increases, and beyond some threshold the salamander begins to swim. Pleurobot is programmed to accurately mimic all of these functions.
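In Ijspeert's earlier spinal-cord models, this stimulation level is abstracted as a single "drive" signal: below a lower threshold the network is silent, in a middle band it produces walking at an increasing pace, and above an upper threshold it switches to swimming. A toy sketch of that mapping follows; the threshold and frequency values are illustrative assumptions, not figures from the study.

```python
# Toy model of how a single spinal "drive" level selects the salamander's
# gait, following the thresholded scheme described above. All numeric
# values are illustrative assumptions, not values from the study.

LOW_THRESHOLD = 1.0    # below this, the spinal network is silent
HIGH_THRESHOLD = 3.0   # above this, the network switches to swimming

def gait_for_drive(drive: float) -> str:
    if drive < LOW_THRESHOLD:
        return "rest"
    if drive < HIGH_THRESHOLD:
        return "walk"   # pace increases with drive inside this band
    return "swim"

def pace_for_drive(drive: float) -> float:
    """Oscillation frequency (Hz) grows linearly with drive while walking."""
    gait = gait_for_drive(drive)
    if gait == "rest":
        return 0.0
    if gait == "swim":
        return 2.0      # swimming runs at a higher, roughly fixed rhythm
    return 0.5 + 0.5 * (drive - LOW_THRESHOLD)

assert gait_for_drive(0.5) == "rest"
assert gait_for_drive(2.0) == "walk"
assert gait_for_drive(4.0) == "swim"
assert pace_for_drive(2.5) > pace_for_drive(1.5)  # higher drive, faster pace
```

The appeal of the scheme is that one scalar command reproduces the full walk-to-swim repertoire, which is what makes it practical to program into a robot like Pleurobot.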
Ijspeert believes that understanding the fundamentals of this interplay between the spinal cord and the body's locomotion will help with the development of future therapies and neuroprosthetic devices for paraplegic patients and amputees. He also thinks that the design methodology used for the Pleurobot can help develop other types of "biorobots", which will become important tools in neuroscience and biomechanics.

LEARNING A WHOLE DEGREE BY SIMPLY GAMING








Have you ever been told that all you do is play games, and that you're therefore useless in the real world? Then check this out.

Artificial intelligence (AI) developed by a University of Cincinnati doctoral graduate was recently assessed by subject-matter expert and retired United States Air Force Colonel Gene Lee - who holds extensive aerial combat experience as an instructor and Air Battle Manager with considerable fighter aircraft expertise - in a high-fidelity air combat simulator.
The artificial intelligence, dubbed ALPHA, was the victor in that simulated scenario, and according to Lee, is "the most aggressive, responsive, dynamic and credible AI I've seen to date."
Details on ALPHA - a significant breakthrough in the application of what's called genetic-fuzzy systems - are published in the most recent issue of the Journal of Defense Management, as this application is specifically designed for use with Unmanned Combat Aerial Vehicles (UCAVs) in simulated air-combat missions for research purposes.
The tools used to create ALPHA as well as the ALPHA project have been developed by Psibernetix, Inc., recently founded by UC College of Engineering and Applied Science 2015 doctoral graduate Nick Ernest, now president and CEO of the firm; as well as David Carroll, programming lead, Psibernetix, Inc.; with supporting technologies and research from Gene Lee; Kelly Cohen, UC aerospace professor; Tim Arnett, UC aerospace doctoral student; and Air Force Research Laboratory sponsors.
High pressure and fast pace: An artificial intelligence sparring partner
ALPHA is currently viewed as a research tool for manned and unmanned teaming in a simulation environment. In its earliest iterations, ALPHA consistently outperformed a baseline computer program previously used by the Air Force Research Lab for research. In other words, it defeated other AI opponents.
In fact, it was only after early iterations of ALPHA bested other computer program opponents that Lee then took to manual controls against a more mature version of ALPHA last October. Not only was Lee not able to score a kill against ALPHA after repeated attempts, he was shot out of the air every time during protracted engagements in the simulator.
Since that first human vs. ALPHA encounter in the simulator, this AI has repeatedly bested other experts as well, and is even able to win out against these human experts when its (the ALPHA-controlled) aircraft are deliberately handicapped in terms of speed, turning, missile capability and sensors.
Lee, who has been flying in simulators against AI opponents since the early 1980s, said of that first encounter against ALPHA, "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."
He added that with most AIs, "an experienced pilot can beat up on it (the AI) if you know what you're doing. Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios."
But now it's Lee, who has trained with thousands of U.S. Air Force pilots, flown in several fighter aircraft and graduated from the U.S. Fighter Weapons School (the equivalent of earning an advanced degree in air combat tactics and strategy), along with other expert pilots, who has been feeling the pressure from ALPHA.
Now, when Lee flies against ALPHA in hours-long sessions that mimic real missions, "I go home feeling washed out. I'm tired, drained and mentally exhausted. This may be artificial intelligence, but it represents a real challenge."
An artificial intelligence wingman: How an AI combat role might develop
Explained Ernest, "ALPHA is already a deadly opponent to face in these simulated environments. The goal is to continue developing ALPHA, to push and extend its capabilities, and perform additional testing against other trained pilots. Fidelity also needs to be increased, which will come in the form of even more realistic aerodynamic and sensor models. ALPHA is fully able to accommodate these additions, and we at Psibernetix look forward to continuing development."
In the long term, teaming artificial intelligence with U.S. air capabilities will represent a revolutionary leap. Air combat as it is performed today by human pilots is a highly dynamic application of aerospace physics, skill, art, and intuition to maneuver a fighter aircraft and missiles against adversaries, all moving at very high speeds. After all, today's fighters close in on each other at speeds in excess of 1,500 miles per hour while flying at altitudes above 40,000 feet. Microseconds matter, and the cost for a mistake is very high.
Eventually, ALPHA aims to lessen the likelihood of mistakes, since its operations already occur significantly faster than those of other language-based consumer programs. In fact, ALPHA can take in the entirety of sensor data, organize it, create a complete mapping of a combat scenario and make or change combat decisions for a flight of four fighter aircraft in less than a millisecond. Basically, the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA's human opponents could blink.
So it's likely that future air combat, requiring reaction times that surpass human capabilities, will integrate AI wingmen: Unmanned Combat Aerial Vehicles (UCAVs) capable of performing air combat and teamed with manned aircraft, in which an onboard battle-management system processes situational awareness, determines reactions, selects tactics, manages weapons use and more. AI like ALPHA could then simultaneously evade dozens of hostile missiles, take accurate shots at multiple targets, coordinate the actions of squad mates, and record and learn from observations of enemy tactics and capabilities.
UC's Cohen added, "ALPHA would be an extremely easy AI to cooperate with and have as a teammate. ALPHA could continuously determine the optimal ways to perform tasks commanded by its manned wingman, as well as provide tactical and situational advice to the rest of its flight."
A programming victory: Low computing power, high-performance results
It would normally be expected that an artificial intelligence with the learning and performance capabilities of ALPHA, applicable to incredibly complex problems, would require a supercomputer in order to operate.
However, ALPHA and its algorithms require no more than the computing power available in a low-budget PC in order to run in real time and quickly react and respond to uncertainty and random events or scenarios.
According to a lead engineer for autonomy at AFRL, "ALPHA shows incredible potential, with a combination of high performance and low computational cost that is a critical enabling capability for complex coordinated operations by teams of unmanned aircraft."
Ernest began working with UC engineering faculty member Cohen to resolve that computing-power challenge about three years ago while a doctoral student. (Ernest also earned his UC undergraduate degree in aerospace engineering and engineering mechanics in 2011 and his UC master's, also in aerospace engineering and engineering mechanics, in 2012.)
They tackled the problem using language-based control (vs. numeric based) and using what's called a "Genetic Fuzzy Tree" (GFT) system, a subtype of what's known as fuzzy logic algorithms.
States UC's Cohen, "Genetic fuzzy systems have been shown to have high performance, and a problem with four or five inputs can be solved handily. However, boost that to a hundred inputs, and no computing system on planet Earth could currently solve the processing challenge involved - unless that challenge and all those inputs are broken down into a cascade of sub decisions."
That's where the Genetic Fuzzy Tree system and Cohen and Ernest's years' worth of work come in.
According to Ernest, "The easiest way I can describe the Genetic Fuzzy Tree system is that it's more like how humans approach problems. Take for example a football receiver evaluating how to adjust what he does based upon the cornerback covering him. The receiver doesn't think to himself: 'During this season, this cornerback covering me has had three interceptions, 12 average return yards after interceptions, two forced fumbles, a 4.35 second 40-yard dash, 73 tackles, 14 assisted tackles, only one pass interference, and five passes defended, is 28 years old, and it's currently 12 minutes into the third quarter, and he has seen exactly 8 minutes and 25.3 seconds of playtime.'"
That receiver - rather than standing still on the line of scrimmage before the play trying to remember all of the different specific statistics and what they mean individually and combined to how he should change his performance - would just consider the cornerback as 'really good.'
The cornerback's historic capability wouldn't be the only variable. Specifically, his relative height and relative speed should likely be considered as well. So, the receiver's control decision might be as fast and simple as: 'This cornerback is really good, a lot taller than me, but I am faster.'
At the very basic level, that's the concept behind the distributed decision-making at the foundation of a Genetic Fuzzy Tree system; handled by a single controller instead, such scenarios would require far too many rules.
Added Ernest, "Only considering the relevant variables for each sub-decision is key for us to complete complex tasks as humans. So, it makes sense to have the AI do the same thing."
In this case, the programming involved breaking up the complex challenges and problems represented in aerial fighter deployment into many sub-decisions, thereby significantly reducing the required "space" or burden for good solutions. The branches or subdivisions of this decision-making tree consist of high-level tactics, firing, evasion and defensiveness.
That's the "tree" part of the term "Genetic Fuzzy Tree" system.
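As a rough illustration of that cascade (not Psibernetix's actual code), each sub-decision can be a small fuzzy evaluation over a few linguistic terms, and a top-level node combines sub-decision outputs rather than raw inputs. Every number and term below is an illustrative assumption.

```python
# Illustrative sketch of a fuzzy decision cascade (not ALPHA's actual
# implementation). Each sub-decision considers only a few inputs,
# expressed as linguistic terms, and a top-level node combines
# sub-decisions instead of raw sensor values. All numbers are assumptions.

def close_membership(distance_km: float) -> float:
    """Degree (0..1) to which a contact counts as 'close': full at 0 km,
    half at 10 km, zero beyond 20 km (illustrative ramp)."""
    return max(0.0, min(1.0, (20.0 - distance_km) / 20.0))

def threat_level(distance_km: float, closing_speed_mph: float) -> float:
    """Sub-decision: how threatening is this contact? Blends 'close'
    with 'closing fast' (normalised to 0..1 at 1,000 mph, an assumption)."""
    closing_fast = max(0.0, min(1.0, closing_speed_mph / 1000.0))
    return max(close_membership(distance_km), closing_fast)

def tactic(distance_km: float, closing_speed_mph: float, missiles: int) -> str:
    """Top-level node: combine the threat sub-decision with weapon state,
    rather than feeding every raw input into one giant rule base."""
    threat = threat_level(distance_km, closing_speed_mph)
    if threat > 0.7:
        return "evade" if missiles == 0 else "engage"
    return "patrol"

assert tactic(distance_km=5.0, closing_speed_mph=900.0, missiles=4) == "engage"
assert tactic(distance_km=5.0, closing_speed_mph=900.0, missiles=0) == "evade"
assert tactic(distance_km=50.0, closing_speed_mph=100.0, missiles=4) == "patrol"
```

The point of the structure is the one Ernest makes about the receiver: the `tactic` node never sees raw distance or speed at all, only the already-summarised threat judgment, which is what keeps the rule count tractable as inputs multiply.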
Programming that's language based, genetic and generational
Most AI programming uses numeric-based control and provides very precise parameters for operations. In other words, there's not a lot of leeway for any improvement or contextual decision making on the part of the programming.
The AI algorithms that Ernest and his team ultimately developed are language based, with if/then scenarios and rules able to encompass hundreds to thousands of variables. This language-based control or fuzzy logic, while much less about complex mathematics, can be verified and validated.
Another benefit of this linguistic control is the ease in which expert knowledge can be imparted to the system. For instance, Lee worked with Psibernetix to provide tactical and maneuverability advice which was directly plugged in to ALPHA. (That "plugging in" occurs via inputs into a fuzzy logic controller. Those inputs consist of defined terms, e.g., close vs. far in distance to a target; if/then rules related to the terms; and inputs of other rules or specifications.)
Finally, the ALPHA programming is generational. It can be improved from one generation to the next, from one version to the next. In fact, the current version of ALPHA is only that - the current version. Subsequent versions are expected to perform significantly better.
Again, from UC's Cohen, "In a lot of ways, it's no different than when air combat began in World War I. At first, there were a whole bunch of pilots. Those who survived to the end of the war were the aces. Only in this case, we're talking about code."
To reach its current performance level, ALPHA's training has occurred on a $500 consumer-grade PC. This training process started with numerous and random versions of ALPHA. These automatically generated versions of ALPHA proved themselves against a manually tuned version of ALPHA. The successful strings of code are then "bred" with each other, favoring the stronger, or highest performance versions. In other words, only the best-performing code is used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that's the one that is utilized.
This is the "genetic" part of the "Genetic Fuzzy Tree" system.
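That breeding loop is a standard genetic algorithm. A toy sketch against a stand-in fitness function follows; the population size, mutation rate, and target parameters are all illustrative assumptions, not details of the ALPHA project.

```python
# Toy genetic algorithm illustrating the "breeding" loop described above
# (not ALPHA's code). Candidates are strings of numeric parameters; the
# best performers are crossed and mutated to seed the next generation.
import random

random.seed(0)
TARGET = [0.7, 0.2, 0.9, 0.4]  # stand-in for "ideal" rule parameters

def fitness(genes: list[float]) -> float:
    """Higher is better: negative squared distance to the stand-in target."""
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def breed(a: list[float], b: list[float]) -> list[float]:
    """Uniform crossover, plus a small chance of mutation per gene."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [g + random.gauss(0, 0.05) if random.random() < 0.2 else g
            for g in child]

population = [[random.random() for _ in range(4)] for _ in range(30)]
initial_best = max(fitness(p) for p in population)

for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                  # keep the top third (elitism)
    population = parents + [breed(random.choice(parents),
                                  random.choice(parents))
                            for _ in range(20)]

best = max(population, key=fitness)
assert fitness(best) >= initial_best  # elitism guarantees no regression
```

Because the top performers are carried over unchanged each generation, the best score can only improve, which mirrors the article's point that only the best-performing code survives into the next version.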
Said Cohen, "All of these aspects are combined, the tree cascade, the language-based programming and the generations. In terms of emulating human reasoning, I feel this is to unmanned aerial vehicles what the IBM/Deep Blue vs. Kasparov was to chess."


SCI-FI WORLD NOW BECOMING REAL




Do you remember the movie “G.I. Joe”? Did you see how Cobra healed himself with nanobots? How possible do you think it is to create that technology? The race is on: production of such nanobots is already underway.
"It's the magic of how DNA works," said Henderson, a professor of genetics, development and cell biology at Iowa State University.
Henderson, along with his former graduate student Divita Mathur, studies how to build nanomachines that may have real-world medical applications someday soon. He and Mathur recently published an article in the peer-reviewed Scientific Reports describing his laboratory's successful effort to design a nanomachine capable of detecting a mockup of the Ebola virus.
He said such a machine would prove valuable in the developing world, where access to diagnostic medical equipment can be rare. He said his nanotechnology could be fabricated cheaply and deployed easily. Used in conjunction with a smartphone app, nearly anyone could use the technology to detect Ebola or any number of other diseases and pathogens without the need for traditional medical facilities.
The trick lies in understanding the rules that govern how DNA works, Henderson said.
"It's possible to exploit that rule set in a way that creates advantages for medicine and biotechnology," he said.
The iconic double-helix structure of DNA means that one strand of DNA will bind only with its complementary strand. Even better, those compatible strands find each other automatically, like a castle that builds itself. Henderson harnessed those same principles for his nanomachines. The components, once added to water and then heated and cooled, find each other and assemble correctly without any further effort from the individual deploying the machines.
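The pairing rule behind that self-assembly is mechanical enough to express in a few lines: A binds T, C binds G, and a probe strand binds a target only if every position pairs with its complement (strands anneal antiparallel, hence the reversal). The sequences below are made up for illustration.

```python
# Watson-Crick pairing: the self-assembly rule behind DNA nanomachines.
# A probe strand binds a target only if each base pairs with its
# complement; strands anneal antiparallel, hence the reversal.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    return "".join(PAIR[base] for base in reversed(strand))

def binds(probe: str, target: str) -> bool:
    """True if the probe is the perfect binding partner for the target."""
    return probe == reverse_complement(target)

# A detector strand designed against a target sequence will find it
# automatically in solution and ignore everything else.
target = "ATGGCCTA"                     # illustrative sequence, not Ebola's
probe = reverse_complement(target)
assert binds(probe, target)
assert not binds(probe, "ATGGCCTT")     # a single mismatch prevents binding
```

This specificity is why a detector designed against one viral sequence lights up for that pathogen and nothing else, and why the components assemble without outside guidance.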
And just how "nano" is a nanomachine? Henderson said about 40 billion individual machines fit in a single drop of water.
The machines act as a diagnostic tool that detects certain maladies at the genetic level. For the recently published paper, Henderson and Mathur, now a postdoctoral research fellow at the Center for Biomolecular Science and Engineering at the Naval Research Laboratory in Washington, D.C., designed the machines to look for signs of Ebola, though the experiments in the study used a mock version of the viral genome and not the real thing. Henderson employed an embedded photonic system that tests for the presence of the target molecules. If the machines sniff out what they're looking for, the photonic system flashes a light, which can be detected with a machine called a fluorometer.
Henderson said this sort of technology could be modified to find certain kinds of molecules or pathogens, allowing for virtually anyone, anywhere to run diagnostic tests without access to medical facilities.
He also envisions a time when similar nanoscale architectures could be used to deliver medication precisely where it needs to go at precisely the right time. These nanomachines, built from DNA, essentially would encapsulate the medication and guide it to its target.
Henderson said such advances aren't that far beyond the reach of modern medicine. It just requires scientists in the field to think small. Really small, in this case.

