Raspberry Pi brings out shiny Compute Module 3

Compute Module 3
Another Raspberry Pi launch announcement—and another burst of news items explaining what's new, at what price.
This time it is about the Raspberry Pi Compute Module 3 (CM3). Trusted Reviews said it comes with 64-bit and multi-core functionality.
"The new Compute Module is based on the BCM2837 processor – the same as found in the Raspberry Pi 3 – running at 1.2 GHz with 1 gigabyte of RAM," said Hackaday.
The Raspberry Pi blog provided the CM3 launch announcement:
"Way back in April of 2014 we launched the original Compute Module (CM1), which was based around the BCM2835 processor of the original Raspberry Pi. CM1 was a great success and we've seen a lot of uptake from various markets, particularly in IoT and home and factory automation."
Now comes a new CM3 based on the Raspberry Pi 3 hardware. Take note: it is "providing twice the RAM and roughly 10x the CPU performance of the original Module," according to the blog.
Ars Technica noted that it was the first big upgrade since 2014. That year, said Trusted Reviews, the original module "combined the guts of a first-generation Pi with a small SODIMM-layout module."
The new version, said Joe Roberts in Trusted Reviews, "which uses the same BCM2837, a quad-core 64-bit ARMv8 part, as the Pi 3, brings the Compute Module fully up to date."
There will be two flavors—CM3 and CM3L (Lite). The 'L' version is a CM3 without eMMC Flash—that is, as described by RS Components, "not fitted with eMMC Flash and the SD/eMMC interface. But pins are available for the designer to connect their own SD/eMMC device."
According to the blog, the Lite version "brings the SD card interface to the Module pins so a user can wire this up to an eMMC or SD card of their choice."
Jon Brodkin in Ars Technica said that the Compute Module's stripped-down form factor makes it more suitable for embedded computing, as it fits into a standard SODIMM connector. The new Compute Module can run Windows IoT Core and supports Linux.
The latest version is being used by NEC, said Brodkin, in displays intended for digital signs, streaming, and presentations. The Raspberry Pi blog, meanwhile, said that "we're already excited to see NEC displays, an early adopter, launching their CM3-enabled display solution."
The blog also stated pricing for the two flavors. The CM3 and CM3L are priced at $30 and $25, respectively (excluding tax and shipping), and this price applies to any size order. The original Compute Module is also reduced to $25. The blog said one can "Head on over to our partners element14 (or Farnell UK) and RS Components" to buy them.
What about backwards compatibility? According to the blog, "The CM3 is largely backwards-compatible with CM1 designs which have followed our design guidelines."
The blog presented the caveats: the new module is 1mm taller than the original, and "the processor core supply (VBAT) can draw significantly more current. Consequently, the processor itself will run much hotter under heavy CPU load, so designers need to consider thermals based on expected use cases."

Credit: Nancy Owano

Blitab Technology develops tablet for the blind and visually impaired

Blitab, a tablet with a Braille interface, looks like a promising step up for blind and low vision people who want to be part of the educational, working and entertainment worlds of digital life.
In a video, Blitab Technology founder Kristina Tsvetanova said the idea for such a tablet came to her during her studies as an industrial engineer. At the time, a blind colleague of hers asked her to sign him up for an online course, and a question nagged her: how could technology help him better?
Worldwide, she said, there are more than 285 million blind and visually impaired people.
She was aware that blind and low-vision people were generally coping with old, bulky technology, contributing to low literacy rates among blind children. She and her team set out to change that.
There was ample room for improvement. Conventional interfaces for the blind, she said, have been slow and expensive: a Braille keyboard can range from about $5,000 to $8,000, and such devices limit what the blind person can read to just a few words at a time. Imagine, she said, reading Moby Dick five words at a time.
They have engineered a tablet with a 14-line Braille display on the top and a touch screen on the bottom.


Part of their technology involves a high performance membrane, and their press statement said the tablet uses smart micro fluids to develop small physical bubbles instead of a screen display.
They have produced a tactile tablet, she said, where people with sight loss can learn, work and play using that device.
The user can control the tablet with voice-over if they want to listen to an ebook, or press one button to activate dots on the screen, changing its surface.
Romain Dillet, in TechCrunch: "The magic happens when you press the button on the side of the device. The top half of the device turns into a Braille reader. You can load a document, a web page—anything really—and then read the content using Braille."
Tsvetanova told Dillet, "We're not excluding voice over; we combine both of these things." She said they offer both "the tactile experience and the voice over experience."
Rachel Metz reported in MIT Technology Review: "The Blitab's Braille display includes 14 rows, each made up of 23 cells with six dots per cell. Every cell can present one letter of the Braille alphabet. Underneath the grid are numerous layers of fluids and a special kind of membrane," she wrote.
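That grid is easy to picture in code. Here is a minimal sketch (not Blitab's software; the letter table is standard six-dot Braille, everything else is illustrative) of how text could be laid out on a 14-row, 23-cell display, using Unicode Braille patterns to visualize the raised dots:

    # Illustrative sketch of a 14-row x 23-cell six-dot Braille layout.
    ROWS, COLS = 14, 23
    # Six-dot patterns for the letters a-j (bit i set = dot i+1 raised).
    LETTERS = {"a": 0x01, "b": 0x03, "c": 0x09, "d": 0x19, "e": 0x11,
               "f": 0x0B, "g": 0x1B, "h": 0x13, "i": 0x0A, "j": 0x1A, " ": 0x00}

    def to_cells(text):
        """Convert text to a list of six-dot cell bitmasks (unknown chars -> blank)."""
        return [LETTERS.get(ch, 0x00) for ch in text.lower()]

    def paginate(cells):
        """Chunk a cell stream into screens of ROWS lines x COLS cells."""
        lines = [cells[i:i + COLS] for i in range(0, len(cells), COLS)]
        return [lines[i:i + ROWS] for i in range(0, len(lines), ROWS)]

    def render(screen):
        """Visualize one screen using Unicode Braille patterns (U+2800 block)."""
        return "\n".join("".join(chr(0x2800 + c) for c in line) for line in screen)

    first_screen = paginate(to_cells("a bad cab faced a jade badge"))[0]
    print(render(first_screen))
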

Blitab Technology develops tablet for the blind and visually impaired
Credit: Blitab
At heart, it's an Android tablet, Dillet said, "so it has Wi-Fi and Bluetooth and can run all sorts of Android apps."
Metz said that with eight hours of use per day, it's estimated to last for five days on one battery charge.
The team has set a price for the device: $500.
How will they proceed? First, she said, they will sell directly from their website, then scale through global distributors and distribute to the less developed world.
What's next? Dillet said in the Jan. 6 article that "the team of 10 plans to ship the [device] in six months with pre-orders starting later this month."
Blitab Technology recently took first place in the Digital Wellbeing category of the 2016 EIT Digital Challenge. EIT Digital is described as a European open innovation organization that seeks to foster digital innovation and entrepreneurial talent.


Credit: Nancy Owano
Nokia sues Apple for patent infringement

Nokia announced Wednesday it is suing Apple in German and US courts for patent infringement, claiming the US tech giant was using Nokia technology in "many" products without paying for it.
Finnish Nokia, once the world's top mobile phone maker, said the two companies had signed a licensing agreement in 2011, and since then "Apple has declined subsequent offers made by Nokia to license other of its patented inventions which are used by many of Apple's products."
"After several years of negotiations trying to reach agreement to cover Apple's use of these patents, we are now taking action to defend our rights," Ilkka Rahnasto, head of Nokia's patent business, said in a statement.
The complaints, filed in three German cities and a district court in Texas, concern 32 patents for innovations related to displays, user interface, software, antennae, chipsets and video coding. Nokia said it was preparing further legal action elsewhere.
Nokia was the world's leading mobile phone maker from 1998 until 2011 when it bet on Microsoft's Windows mobile platform, which proved to be a flop. Analysts say the company failed to grasp the growing importance of smartphone apps compared to hardware.
It sold its unprofitable handset unit in 2014 for some $7.2 billion to Microsoft, which dropped the Nokia name from its Lumia smartphone handsets.
Meanwhile Nokia has concentrated on developing its mobile network equipment business by acquiring its French-American rival Alcatel-Lucent.
Including its 2013 full acquisition of joint venture Nokia Siemens Networks, Nokia said the three companies united represent more than 115 billion euros of R&D investment, with a massive portfolio of tens of thousands of patents.
The 2011 licensing deal followed years of clashes with Apple, which has also sparred with main rival Samsung over patent claims.
At the time, Apple cut the deal to settle 46 separate complaints Nokia had lodged against it for violation of intellectual property.

A Swiss firm acquires Mars One private project

A British-Dutch project aiming to send an unmanned mission to Mars by 2018 announced Friday that the shareholders of a Swiss financial services company have agreed a takeover bid.
"The acquisition is now only pending approval by the board of Mars One Ventures," the company said in a joint statement with InFin Innovative Finance AG, adding approval from the Mars board would come "as soon as possible."
"The takeover provides a solid path to funding the next steps of Mars One's mission to establish a permanent human settlement on Mars," the statement added.
Mars One consists of two entities: the Dutch not-for-profit Mars One Foundation and a British public limited company Mars One Ventures.
Mars One aims to establish a permanent human settlement on the Red Planet, and is currently "in the early mission concept phase," the company says, adding securing funding is one of its major challenges.
Some 200,000 hopefuls from 140 countries initially signed up for the Mars One project, which is to be partly funded by a television reality show about the endeavour.
Those have now been whittled down to just 100, out of which 24 will be selected for one-way trips to Mars due to start in 2026 after several unmanned missions have been completed.
"Once this deal is completed, we'll be in a much stronger financial position as we begin the next phase of our mission. Very exciting times," said Mars One chief executive Bas Lansdorp.
NASA is currently working on three Mars missions with the European Space Agency and plans to send another rover to Mars in 2020.
But NASA has no plans for a manned mission to Mars until the 2030s.

Scientists find that solar cells can be made with tin instead of lead

Solar power could become cheaper and more widespread
Credit: University of Warwick
A breakthrough in solar power could make it cheaper and more commercially viable, thanks to research at the University of Warwick.
In a paper published in Nature Energy, Dr Ross Hatton, Professor Richard Walton and colleagues, explain how solar cells could be produced with tin, making them more adaptable and simpler to produce than their current counterparts.
Solar cells based on a class of semiconductors known as lead perovskites are rapidly emerging as an efficient way to convert sunlight directly into electricity. However, the reliance on lead is a serious barrier to commercialisation, due to the well-known toxicity of lead.
Dr Ross Hatton and colleagues show that perovskites using tin in place of lead are much more stable than previously thought, and so could prove to be a viable alternative to lead perovskites for solar cells.
Lead-free cells could be cheaper, safer and more commercially attractive than their lead-based counterparts.
This could lead to a more widespread use of solar power, with potential uses in products such as laptop computers, mobile phones and cars.
The team have also shown how the device structure can be greatly simplified without compromising performance, which offers the important advantage of reduced fabrication cost.
Dr Hatton comments that there is an ever-pressing need to develop renewable sources of energy:
"It is hoped that this work will help to stimulate an intensive international research effort into lead-free perovskite solar cells, like that which has resulted in the astonishingly rapid advancement of perovskite solar cells.
"There is now an urgent need to tackle the threat of climate change resulting from humanity's over reliance on fossil fuel, and the rapid development of new solar technologies must be part of the plan."
Perovskite solar cells are lightweight and compatible with flexible substrates, so they could be applied more widely than the rigid flat-plate silicon cells that currently dominate the photovoltaics market, particularly in consumer electronics and transportation applications.
The paper, 'Enhanced Stability and Efficiency in Hole-Transport Layer Free CsSnI3 Perovskite Photovoltaics', is published in Nature Energy, and is authored by Dr Ross Hatton, Professor Richard Walton and PhD student Kenny Marshall in the Department of Chemistry, along with Dr Marc Walker in the Department of Physics.

Best weather satellite ever built is launched into space

Best weather satellite ever built rockets into space
This photo provided by United Launch Alliance shows a United Launch Alliance (ULA) Atlas V rocket carrying GOES-R spacecraft for NASA and NOAA lifting off from Space Launch Complex-41 at 6:42 p.m. EST at Cape Canaveral Air Force Station, Fla., Saturday, Nov. 19, 2016. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (United Launch Alliance via AP)  
The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives.
This new GOES-R spacecraft will track U.S. weather as never before: hurricanes, tornadoes, flooding, wildfires, lightning storms, even solar flares. Indeed, about 50 TV meteorologists from around the country converged on the launch site—including NBC's Al Roker—along with 8,000 space program workers and guests.
"What's so exciting is that we're going to be getting more data, more often, much more detailed, higher resolution," Roker said. In the case of tornadoes, "if we can give people another 10, 15, 20 minutes, we're talking about lives being saved."
Think superhero speed and accuracy for forecasting. Super high-definition TV, versus black-and-white.
"Really a quantum leap above any NOAA has ever flown," said Stephen Volz, the National Oceanic and Atmospheric Administration's director of satellites.
"For the American public, that will mean faster, more accurate weather forecasts and warnings," Volz said earlier in the week. "That also will mean more lives saved and better environmental intelligence" for government officials responsible for hurricane and other evacuations.
Best weather satellite ever built rockets into space
Cell phones light up the beaches of Cape Canaveral and Cocoa Beach, Fla., north of the Cocoa Beach Pier as spectators watch the launch of the NOAA GOES-R weather satellite, Saturday, Nov. 19, 2016. It was launched from Launch Complex 41 at Cape Canaveral Air Force Station on a ULA Atlas V rocket. (Malcolm Denemark/Florida Today via AP)
Airline passengers also stand to benefit, as do rocket launch teams. Improved forecasting will help pilots avoid bad weather and help rocket scientists know when to call off a launch.
NASA declared success 3 1/2 hours after liftoff, following separation from the upper stage.
The first in a series of four high-tech satellites, GOES-R hitched a ride on an unmanned Atlas V rocket, delayed an hour by rocket and other problems. NOAA teamed up with NASA for the mission.
The satellite—valued by NOAA at $1 billion—is aiming for a 22,300-mile-high equatorial orbit. There, it will join three aging spacecraft with 40-year-old technology, and become known as GOES-16. After months of testing, this newest satellite will take over for one of the older ones. The second satellite in the series will follow in 2018. All told, the series should stretch to 2036.
Best weather satellite ever built rockets into space
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
GOES stands for Geostationary Operational Environmental Satellite. The first was launched in 1975.
GOES-R's premier imager—one of six science instruments—will offer three times as many channels as the existing system, four times the resolution and five times the scan speed, said NOAA program director Greg Mandt. A similar imager is also flying on a Japanese weather satellite.
Typically, it will churn out full images of the Western Hemisphere every 15 minutes and the continental United States every five minutes. Specific storm regions will be updated every 30 seconds.
Forecasters will get pictures "like they've never seen before," Mandt promised.
Best weather satellite ever built rockets into space
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, in Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
A first-of-its-kind lightning mapper, meanwhile, will take 500 snapshots a second.
This next-generation GOES program—$11 billion in all—includes four satellites, an extensive land system of satellite dishes and other equipment, and new methods for crunching the massive, nonstop stream of expected data.
Hurricane Matthew, interestingly enough, delayed the launch by a couple weeks. As the hurricane bore down on Florida in early October, launch preps were put on hold. Matthew stayed far enough offshore to cause minimal damage to Cape Canaveral, despite some early forecasts that suggested a direct strike.
Credit: Marcia Dunn

A suit-X trio designed to support workers: Meet MAX

(Tech Xplore)—Not all of us park our bodies in a chair in the morning and cross our legs to do our work. In fact, just think of vast numbers of workers doing physically demanding or just physically repetitive tasks including bending and lifting.
Workers on construction sites, in factories and in warehouses may cope with aches and pains brought on by their work. Hopefully, the future will provide an easy way for such workers to suit up and avoid those aches and pains.
A new kid on the block aims to address this, and a number of tech watchers have put it in the news this month. A California-based group aptly called suitX announced its MAX, which stands for Modular Agile Exoskeleton. The company designs and makes exoskeletons.
"MAX is designed to support workers during the repetitive tasks that most frequently cause injury," said a company release.
Will Knight in MIT Technology Review said that this essentially is "a trio of devices that use robotic technologies to enhance the abilities of able-bodied workers and prevent common workplace injuries."
Target users, for example, could include those who carry out ceiling inspections, welding, installations and repairs. "It's not only lifting 75 pounds that can hurt your back; it is also lifting 20 pounds repeatedly throughout the day that will lead to injury," said Dr. Homayoon Kazerooni, founder and CEO of suitX. "The MAX solution is designed for unstructured workplaces where no robot can work as efficiently as a human worker. Our goal is to augment and support workers who perform demanding and repetitive tasks in unstructured workplaces in order to prevent and reduce injuries."
Seeker referred to the MAX system as an exoskeleton device that could potentially change the way millions of people work.
Seeker noted its advantages as a workplace exoskeleton: it is lightweight enough that the user can walk around unimpeded. "The exoskeleton units kick in only when you need them, and they don't require any external power source."
MAX is a product with three modules. You use them independently or in combination, depending on work needs. The three modules are backX, shoulderX, and legX.
According to the company, "All modules intelligently engage when you need them, and don't impede you otherwise."
The backX (lower back) reduces forces and torques.
The shoulderX reduces forces; it "enables the wearer to perform chest-to-ceiling level tasks for longer periods of time." In a video the company defines shoulderX as "an industrial arm exoskeleton that augments its wearer by reducing gravity-induced forces at the shoulder complex."
The legX was designed to support the knee joint and quadriceps. It incorporates microcomputers in each leg unit that communicate with each other to determine if the person is walking, bending, or taking the stairs. Seeker said these communicate via Bluetooth, monitoring spacing and position.
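suitX has not published how legX classifies activity, so the following is only a toy illustration of the idea of two leg units comparing readings; the sensor names and thresholds are invented for the sketch, not taken from the product:

    # Toy activity classifier, purely illustrative (not suitX's algorithm).
    # Each leg unit is assumed to report a knee flexion angle (degrees) and a
    # vertical velocity estimate (m/s), exchanged between units over Bluetooth.

    def classify(left_knee, right_knee, vert_vel):
        """Guess the wearer's activity from hypothetical leg-unit readings."""
        flexion = max(left_knee, right_knee)
        asymmetry = abs(left_knee - right_knee)
        if flexion > 70 and asymmetry < 15:
            return "bending"        # both knees deeply flexed together
        if asymmetry > 30 and abs(vert_vel) > 0.2:
            return "stairs"         # alternating deep flexion while rising or falling
        if asymmetry > 10:
            return "walking"        # alternating shallow flexion
        return "standing"

    print(classify(left_knee=80, right_knee=75, vert_vel=0.0))   # bending
    print(classify(left_knee=65, right_knee=20, vert_vel=0.4))   # stairs
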
Credit: suitX
A suit-X trio designed to support workers: Meet MAX
Kazerooni spoke about his company and its mission, in Seeker. "My job is easy. I sit in front of a computer. But these guys work all day long, put their bodies through abuse. We can use bionics to help them." He also said he and his team did not create this "because of science fiction movies. We were responding to numbers from the Department of Labor, which said that back, knee and shoulder injuries are the most common form of injuries among workers."
Will Knight meanwhile has reflected on the bigger picture in developments. Can they help in preventing injury on the job and help prolong workers' careers? "New materials, novel mechanical designs, and cheaper actuators and motors have enabled a new generation of cheaper, more lightweight exoskeletons to emerge in recent years," he wrote. "For instance, research groups at Harvard and SRI are developing systems that are passive and use soft, lightweight materials."
Some companies, such as BMW, said Knight, have been experimenting with exoskeletons. "The MAX is another (bionic) step toward an augmented future of work."

Credit: Nancy Owano

Use drones and insect biobots to map disaster areas

Tech would use drones and insect biobots to map disaster areas
Credit: North Carolina State University  
Researchers at North Carolina State University have developed a combination of software and hardware that will allow them to use unmanned aerial vehicles (UAVs) and insect cyborgs, or biobots, to map large, unfamiliar areas – such as collapsed buildings after a disaster.
"The idea would be to release a swarm of sensor-equipped biobots – such as remotely controlled cockroaches – into a collapsed building or other dangerous, unmapped area," says Edgar Lobaton, an assistant professor of electrical and computer engineering at NC State and co-author of two papers describing the work.
"Using remote-control technology, we would restrict the movement of the biobots to a defined area," Lobaton says. "That area would be defined by proximity to a beacon on a UAV. For example, the biobots may be prevented from going more than 20 meters from the UAV."
The biobots would be allowed to move freely within a defined area and would signal researchers via radio waves whenever they got close to each other. Custom software would then use an algorithm to translate the biobot sensor data into a rough map of the unknown environment.
Once the program receives enough data to map the defined area, the UAV moves forward to hover over an adjacent, unexplored section. The biobots move with it, and the mapping process is repeated. The software program then stitches the new map to the previous one. This can be repeated until the entire region or structure has been mapped; that map could then be used by first responders or other authorities.
"This has utility for areas – like collapsed buildings – where GPS can't be used," Lobaton says. "A strong radio signal from the UAV could penetrate to a certain extent into a collapsed building, keeping the biobot swarm contained. And as long as we can get a signal from any part of the swarm, we are able to retrieve data on what the rest of the swarm is doing. Based on our experimental data, we know you're going to lose track of a few individuals, but that shouldn't prevent you from collecting enough data for mapping."
Co-lead author Alper Bozkurt, an associate professor of electrical and computer engineering at NC State, has previously developed functional cockroach biobots. However, to test their new mapping technology, the research team relied on inch-and-a-half-long robots that simulate cockroach behavior.
In their experiment, researchers released these robots into a maze-like space, with the effect of the UAV beacon emulated using an overhead camera and a physical boundary attached to a moving cart. The cart was moved as the robots mapped the area.
"We had previously developed proof-of-concept software that allowed us to map small areas with biobots, but this work allows us to map much larger areas and to stitch those maps together into a comprehensive overview," Lobaton says. "It would be of much more practical use for helping to locate survivors after a disaster, finding a safe way to reach survivors, or for helping responders determine how structurally safe a building may be.
"The next step is to replicate these experiments using biobots, which we're excited about."
An article on the framework for developing local maps and stitching them together, "A Framework for Mapping with Biobotic Insect Networks: From Local to Global Maps," is published in Robotics and Autonomous Systems. An article on the theory of mapping based on the proximity of mobile sensors to each other, "Geometric Learning and Topological Inference with Biobotic Networks," is published in IEEE Transactions on Signal and Information Processing over Networks.


Credit: Matt Shipman

How machine learning advances artificial intelligence

Computers that learn for themselves are with us now. As they become more common in 'high-stakes' applications like robotic surgery, terrorism detection and driverless cars, researchers ask what can be done to make sure we can trust them.
There would always be a first death in a driverless car, and it happened in May 2016. Joshua Brown had engaged the autopilot system in his Tesla when a tractor-trailer drove across the road in front of him. It seems that neither he nor the sensors in the autopilot noticed the white-sided truck against a brightly lit sky, with tragic results.
Of course many people die in car crashes every day – in the USA there is one fatality every 94 million miles, and according to Tesla this was the first known fatality in over 130 million miles of driving with activated autopilot. In fact, given that most road fatalities are the result of human error, it has been said that autonomous cars should make travelling safer.
Even so, the tragedy raised a pertinent question: how much do we understand – and trust – the computers in an autonomous vehicle? Or, in fact, in any machine that has been taught to carry out an activity that a human would do?
We are now in the era of machine learning. Machines can be trained to recognise certain patterns in their environment and to respond appropriately. It happens every time your digital camera detects a face and throws a box around it to focus, or the personal assistant on your smartphone answers a question, or the adverts match your interests when you search online.
Machine learning is a way to program computers to learn from experience and improve their performance in a way that resembles how humans and animals learn tasks. As machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important, says Zoubin Ghahramani, Professor of Information Engineering in Cambridge's Department of Engineering.
Faced with a life or death decision, would a driverless car decide to hit pedestrians, or avoid them and risk the lives of its occupants? Providing a medical diagnosis, could a machine be wildly inaccurate because it has based its opinion on a too-small sample size? In making financial transactions, should a computer explain how robust its assessment of the volatility of the stock markets is?
"Machines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data," says Ghahramani. "But what is going on inside the 'black box'? If the processes by which decisions were being made were more transparent, then trust would be less of an issue."
His team builds the algorithms that lie at the heart of these technologies (the "invisible bit" as he refers to it). Trust and transparency are important themes in their work: "We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty. Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty and then as you have more and more data you have more and more certainty.
"When machines make decisions, we want them to be clear on what stage they have reached in this process. And when they are unsure, we want them to tell us."
One method is to build in an internal self-evaluation or calibration stage so that the machine can test its own certainty, and report back.
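As a toy illustration of that idea (not the group's actual software), the sketch below trains a small logistic model, attaches a confidence to every answer, abstains when it is not sure enough, and then checks its own calibration on held-out data by comparing stated confidence with actual accuracy:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)
    y = (x + rng.normal(scale=0.8, size=2000) > 0).astype(float)  # noisy labels
    x_train, y_train, x_test, y_test = x[:1000], y[:1000], x[1000:], y[1000:]

    # Fit a one-feature logistic regression by gradient descent.
    w, b = 0.0, 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(w * x_train + b)))
        w -= 0.5 * np.mean((p - y_train) * x_train)
        b -= 0.5 * np.mean(p - y_train)

    def predict(xi, threshold=0.9):
        """Return (label, confidence); abstain if the model is not sure enough."""
        p = 1.0 / (1.0 + np.exp(-(w * xi + b)))
        conf = max(p, 1.0 - p)
        label = "unsure" if conf < threshold else int(p > 0.5)
        return label, conf

    # Self-check: within each confidence bin, accuracy should match confidence.
    p_test = 1.0 / (1.0 + np.exp(-(w * x_test + b)))
    conf = np.maximum(p_test, 1.0 - p_test)
    correct = (p_test > 0.5) == y_test
    for lo in (0.5, 0.7, 0.9):
        mask = (conf >= lo) & (conf < lo + 0.2)
        if mask.any():
            print(f"confidence {lo:.1f}-{lo + 0.2:.1f}: accuracy {correct[mask].mean():.2f}")
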
Two years ago, Ghahramani's group launched the Automatic Statistician with funding from Google. The tool helps scientists analyse datasets for statistically significant patterns and, crucially, it also provides a report to explain how sure it is about its predictions.
"The difficulty with machine learning systems is you don't really know what's going on inside – and the answers they provide are not contextualised, like a human would do. The Automatic Statistician explains what it's doing, in a human-understandable form."
Where transparency becomes especially relevant is in applications like medical diagnoses, where understanding the provenance of how a decision is made is necessary to trust it.
Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: "A particular issue with new (AI) systems that learn or evolve is that their processes do not clearly map to rational decision-making pathways that are easy for humans to understand." His research aims both at making these pathways more transparent, sometimes through visualisation, and at looking at what happens when systems are used in real-world scenarios that extend beyond their training environments – an increasingly common occurrence.
"We would like AI systems to monitor their situation dynamically, detect whether there has been a change in their environment and – if they can no longer work reliably – then provide an alert and perhaps shift to a safety mode." A , for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.
Weller's theme of trust and transparency forms just one of the projects at the newly launched £10 million Leverhulme Centre for the Future of Intelligence (CFI). Ghahramani, who is Deputy Director of the Centre, explains: "It's important to understand how developing technologies can help rather than replace humans. Over the coming years, philosophers, social scientists, cognitive scientists and computer scientists will help guide the future of the technology and study its implications – both the concerns and the benefits to society."
CFI brings together four of the world's leading universities (Cambridge, Oxford, Berkeley and Imperial College, London) to explore the implications of AI for human civilisation. Together, an interdisciplinary community of researchers will work closely with policy-makers and industry investigating topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.
Ghahramani describes the excitement felt across the field: "It's exploding in importance. It used to be an area of research that was very academic – but in the past five years people have realised these methods are incredibly useful across a wide range of societally important areas.
"We are awash with data, we have increasing computing power and we will see more and more applications that make predictions in real time. And as we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us."
Artificial intelligence has the power to eradicate poverty and disease or hasten the end of human civilisation as we know it – according to a speech delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.

Internet bot investigates creativity

A portrait of Benjamin Franklin manipulated by Smilevector. Credit: Smithsonian National Portrait Gallery.
Tom White, senior lecturer in Victoria's School of Design, has created Smilevector—a bot that examines images of people, then adds or removes smiles to their faces.
"It has examined hundreds of thousands of faces to learn the difference between images, by finding relations and reapplying them," says Mr White.
"When the computer finds an image it looks to identify if the person is smiling or not. If there isn't a smile, it adds one, but if there is a smile then it takes it away.
"It represents these changes as an animation, which moves parts of the face around, including crinkling and widening the eyes."
The bot can be used as a form of puppetry, says Mr White.
"These systems are domain independent, meaning you can do it with anything—from manipulating images of faces to shoes to chairs. It's really fun and interesting to work in this space. There are lots of ideas to play around with."
The creation of the bot was sparked by Mr White's research into creative intelligence.
"Machine learning and artificial intelligence are starting to have implications for people in creative industries. Some of these implications have to do with the computer's capabilities, like completing mundane tasks so that people can complete higher level tasks," says Mr White.
"I'm interested in exploring what these systems are capable of doing but also how it changes what we think of as being creative is in the first place. Once you have a system that can automate processes, is that still a creative act? If you can make something a completely push of the button operation, does its meaning change?"
Mr White says people have traditionally used creative tools by giving commands.
"However, I think we're moving toward more of a collaboration with computers—where there's an intelligent system that's making suggestions and helping steer the process.
"A lot will happen in this space in the next five to ten years, and now is the right time to progress. I also hope these techniques influence teaching over the long term as they become more mainstream. It is something that students could work with me on at Victoria University as part of our Master of Design Innovation or our new Master of Fine Arts (Creative Practice)."
The paper Sampling Generative Networks describing this research is available as an arXiv preprint. The research will also be presented as part of the Neural Information Processing Systems conference in Spain and Generative Art conference in Italy in December.

List of speech editing software

(geekkeep)—Voice editing software has become a set of tools that many people work with. The military, hackers, hosts, animators and an ever-increasing list of others have come to rely on it to achieve their aims.

Animation studios have come to use these applications to produce character lines without hiring voice artists (this has become beneficial to rising studios). These days security systems often use biometric scans, and speech recognition units are quite common, which brings a downside: agents, buggers and military infiltrators rely on speech editing software to bypass these systems, gaining unauthorized access.

You must have seen the ever-rising artificial intelligence struggle in the tech market and hubs. AIs like DeepMind, Cortana, Cleverbot, Virtual Assistant Denise, Verbots, Madoma Virtual Assistant, DesktopMates, Braina and Syn Virtual Assistant use speech recognition and synthesis software of this kind to give these assistants their voices.


A list of audio and speech editing software follows:



WavePad

This audio editing software is a full-featured professional audio and music editor for Windows and Mac. It lets you record and edit music, voice and other audio recordings. When editing audio files, you can cut, copy and paste parts of recordings, and then add effects like echo, amplification and noise reduction. WavePad works as a wav or mp3 editor, but it also supports a number of other file formats including vox, gsm, wma, real audio, au, aif, flac, ogg, and more.
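The cut operation at the heart of WavePad and the editors below can be illustrated at the file level with Python's standard wave module. This is a generic sketch for uncompressed WAV files with placeholder file names, not WavePad's internals:

    import wave

    def cut_wav(src, dst, start_s, end_s):
        """Copy the [start_s, end_s] slice of src into a new WAV file dst."""
        with wave.open(src, "rb") as w:
            params = w.getparams()
            rate = w.getframerate()
            w.setpos(int(start_s * rate))                      # seek to start frame
            frames = w.readframes(int((end_s - start_s) * rate))
        with wave.open(dst, "wb") as out:
            out.setparams(params)   # frame count in the header is patched on close
            out.writeframes(frames)

    # e.g. keep seconds 10-25 of a recording (placeholder file names):
    # cut_wav("interview.wav", "clip.wav", 10.0, 25.0)
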



Free Audio Editor

Free Audio Editor can digitize sound recordings of your rare music cassette tapes, vinyl LPs and videos, creating standard digital sound files. Timer- and input-level-triggered recording are included. There is a button to activate the system Windows Mixer without visiting the control panel. The recording can be directly loaded into the waveform window for further perfection.

You can edit audio using the traditional Waveform View or the frequency-based Spectral Display that makes it easy to isolate and remove unwanted noise. Intuitive cut/copy/paste/trim/mute and more actions can be performed easily. The selection tools let editing operations be performed with millisecond precision. Enhance your audio with more than 30 native signal and effects processing engines, including compression, EQ, fade in/out, delay, chorus, reverb, time stretching, pitch shifting and more; this significantly increases your audio processing capabilities. The real-time preview enables you to hear the results before mixing down to a single file. This free audio editor supports a large number of input formats including MP3, WMA, WAV, AAC, FLAC, OGG, APE, AC3, AIFF, MP2, MPC, MPGA, M4A, CDA, VOX, RA, RAM, ARW, AIF, AIFC, TTA, G721, G723, G726 and many more as source formats. Any audio file can be saved to the most popular audio formats like MP3, WMA, WAV, OGG, etc. Furthermore, you can control the output quality by adjusting the parameters, and the software also prepares many presets with different combinations of settings for playback on all kinds of software applications and devices.


Audacity

Audacity can record live audio through a microphone or mixer, or digitize recordings from other media. With some sound cards, and on any recent version of Windows, Audacity can also capture streaming audio.
  • Device Toolbar manages multiple recording and playback devices.
  • Level meters can monitor volume levels before, during and after recording. Clipping can be displayed in the waveform or in a label track.
  • Record from microphone, line input, USB/Firewire devices and others.
  • Record computer playback on Windows Vista and later by choosing “Windows WASAPI” host in Device Toolbar then a “loopback” input.
  • Timer Record and Sound Activated Recording features.
  • Dub over existing tracks to create multi-track recordings.
  • Record at very low latencies on supported devices on Linux by using Audacity with JACK.
  • Record at sample rates up to 192,000 Hz (subject to appropriate hardware and host selection). Up to 384,000 Hz is supported for appropriate high-resolution devices on Windows (using WASAPI), Mac OS X, and Linux.
  • Record at 24-bit depth on Windows (using Windows WASAPI host), Mac OS X or Linux (using ALSA or JACK host).
  • Record multiple channels at once (subject to appropriate hardware).




Power Sound Editor

Power Sound Editor Free is a visual audio editing and recording software solution, which supports many advanced and powerful operations with audio data.
You can use Power Sound Editor Free to record your own music, voice, or other audio files, edit it, mix it with other audio or musical parts, add effects like Reverb, Chorus, and Echo, and burn it on a CD, post it on the World Wide Web or e-mail it.

mp3DirectCut

mp3DirectCut is a fast and extensive audio editor and recorder for compressed mp3. You can directly cut, copy, paste or change the volume with no need to decompress your files for audio editing. Using Cue sheets, pause detection or Auto cue you can easily divide long files.

Music Editor Free

Music Editor Free (MEF) is a multi-award winning music editor software tool. MEF helps you to record and edit music and sounds. It lets you make and edit music, voice and other audio recordings. When editing audio files you can cut, copy and paste parts of recordings and, if required, add effects like echo, amplification and noise reduction.

Wavosaur

Wavosaur is a free sound editor, audio editor, wav editor software for editing, processing and recording sounds, wav and mp3 files. Wavosaur has all the features to edit audio (cut, copy, paste, etc.) produce music loops, analyze, record, batch convert. Wavosaur supports VST plugins, ASIO driver, multichannel wav files, real time effect processing. The program has no installer and doesn’t write in the registry. Use it as a free mp3 editor, for mastering, sound design.

Traverso DAW

Traverso DAW is a GPL licensed, cross platform multitrack audio recording and editing suite, with an innovative and easy to master User Interface. It’s suited for both the professional and home user, who needs a robust and solid DAW. Adding and removal of effects plugins, moving Audio Clips and creating new Tracks during playback are all perfectly safe, giving you instant feedback on your work!

Ardour

Ardour is a digital audio workstation. You can use it to record, edit and mix multi-track audio. You can produce your own CDs, mix video soundtracks, or just experiment with new ideas about music and sound. Ardour capabilities include: multichannel recording, non-destructive editing with unlimited undo/redo, full automation support, a powerful mixer, unlimited tracks/busses/plugins, timecode synchronization, and hardware control from surfaces like the Mackie Control Universal. If you’ve been looking for a tool similar to ProTools, Nuendo, Pyramix, or Sequoia, you might have found it.

Rosegarden

Rosegarden is a well-rounded audio and MIDI sequencer, score editor, and general-purpose music composition and editing environment. Rosegarden is an easy-to-learn, attractive application that runs on Linux, ideal for composers, musicians, music students, and small studio or home recording environments.

Hydrogen

Hydrogen is an advanced drum machine for GNU/Linux. Its main goal is to bring professional yet simple and intuitive pattern-based drum programming.

Sound Engine

SoundEngine is the best tool for personal use, because it enables you to easily edit wave data while offering many functions required for the mastering process.

Expstudio Audio Editor

Expstudio Audio Editor is a visual music file editor that has many different options and a multiple functionality to edit your music files like editing text files. With a given audio data it can perform many different operations such as displaying a waveform image of an audio file, filtering, applying various audio effects, format conversion and more.

DJ Audio Editor

DJ Audio Editor is easy-to-use and well-organized audio application which allows you to perform various operations with audio data. You can create and edit audio files professionally, also displaying a waveform image of audio file makes your work faster.

Eisenkraut

Eisenkraut is a cross-platform audio file editor. It requires Java 1.4+ and SuperCollider 3. It supports multi-channel and multi-mono files and floating-point encoding. An OSC scripting interface and experimental sonagramme functionality are provided.

FREE WAVE MP3 Editor

Free Wave MP3 Editor is a sound editor program for Windows. This software lets you make and edit voice and other audio recordings. You can cut, copy and paste parts of recording and, if required, add effects like echo, amplification and noise reduction.

Kangas Sound Editor

Fun Kangaroo-themed program that allows the user to create music and sound effects. It uses a system of frequency ratios for pitch control, rather than conventional music notation and equal temperament. It allows instruments, both musical and percussion, to be created.

Ecawave

Ecawave is a simple graphical audio file editor. The user-interface is based on Qt libraries, while almost all audio functionality is taken directly from ecasound libraries. As ecawave is designed for editing large audio files, all processing is done direct-to-disk. Simple waveform caching is used to speed-up file operations. Ecawave supports all audio file formats and effect algorithms provided by ecasound libraries. This includes JACK, ALSA, OSS, aRts, over 20 file formats, over 30 effect types, LADSPA plugins and multi-operator effect presets.

Audiobook Cutter

Audiobook Cutter splits your MP3 audio books and podcasts in a fast and user friendly way. The split files can easily be used on mobile MP3 players because of their small-size. Their duration allows smooth navigation through the book. The split points are determined automatically based on silence detection.
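Silence detection of this kind can be sketched simply: measure the average amplitude of short windows and report runs of quiet windows as candidate split points. The following is an illustrative sketch for 16-bit mono WAV input, with an invented threshold and window size, not Audiobook Cutter's code:

    import array
    import wave

    def silent_spans(path, window_s=0.1, threshold=500):
        """Yield (start_s, end_s) spans whose mean absolute amplitude is low."""
        with wave.open(path, "rb") as w:
            assert w.getsampwidth() == 2 and w.getnchannels() == 1  # 16-bit mono
            rate = w.getframerate()
            samples = array.array("h", w.readframes(w.getnframes()))
        win = int(rate * window_s)
        start = None
        for i in range(0, len(samples) - win, win):
            chunk = samples[i:i + win]
            loud = sum(abs(s) for s in chunk) / win >= threshold
            if not loud and start is None:
                start = i
            elif loud and start is not None:
                yield (start / rate, i / rate)
                start = None
        if start is not None:
            yield (start / rate, len(samples) / rate)

    # e.g. list pauses longer than a second (placeholder file name):
    # print([s for s in silent_spans("book.wav") if s[1] - s[0] > 1.0])
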

Jokosher

Jokosher is a simple yet powerful multi-track studio. With it you can create and record music, podcasts and more, all from an integrated simple environment.

LMMS

LMMS is a free cross-platform alternative to commercial programs like FL Studio, which allow you to produce music with your computer. This includes the creation of melodies and beats, the synthesis and mixing of sounds, and arranging of samples. You can have fun with your MIDI-keyboard and much more; all in a user-friendly and modern interface.

Mp3Splt

Mp3Splt-project is a utility to split mp3 and ogg files selecting a begin and an end time position, without decoding. It's very useful to split large mp3/ogg files to make smaller files or to split entire albums to obtain the original tracks. If you want to split an album, you can select split points and filenames manually or you can get them automatically from CDDB (internet or a local file) or from .cue files. It also supports automatic silence split, which can be used to adjust cddb/cue splitpoints. You can extract tracks from Mp3Wrap or AlbumWrap files in a few seconds.

Qtractor

Qtractor is an Audio/MIDI multi-track sequencer application written in C++ with the Qt4 framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio, and the Advanced Linux Sound Architecture (ALSA) for MIDI, are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

ReZound

ReZound aims to be a stable, open-source, graphical audio file editor, primarily for Linux.

Sweep

Sweep is an audio editor and live playback tool for GNU/Linux, BSD and compatible systems. It supports many music and voice formats including WAV, AIFF, Ogg Vorbis, Speex and MP3, with multichannel editing and LADSPA effects plugins.

Wavesurfer

WaveSurfer is an Open Source tool for sound visualization and manipulation. It has been designed to suit both novice and advanced users. WaveSurfer has a simple and logical user interface that provides functionality in an intuitive way and which can be adapted to different tasks.

Tasting and chewing explored in virtual reality

Virtual reality technology has you thinking you are doing many things, but there is much uncharted territory in eating virtually.
Imagine what the tourism industry could do with VR technology extending sensory stimulation beyond the eyes and ears. Imagine inviting prospective restaurant clients in virtual reality mode to the meat, fish and chicken specialties, pizza or chocolate cakes. Imagine any number of applications where the sensory experience in virtual reality expands.
Scientists are focusing on VR technology that can fool you into thinking you are tasting food that is not of course really there. Researchers from Singapore and another team from Japan have their own studies that explore the realm of tasting and even chewing.
Vlad Dudau of Neowin said these explorers managed to replicate the tastes and textures of different foods.
A recent conference in Japan on user interfaces was given much "food" tech for thought.
The work, titled "Virtual Sweet: Simulating Sweet Sensation Using Thermal Stimulation on the Tip of the Tongue," explored what it is like to taste sweet food virtually.
"Being a pleasurable sensation, sweetness is recognized as the most preferred sensation among the five primary taste sensations. In this paper, we present a novel method to virtually simulate the sensation of sweetness by applying thermal stimulation to the tip of the human tongue. To digitally simulate the sensation of sweetness, the system delivers rapid heating and cooling stimuli to the tongue via a 2x2 grid of Peltier elements. To achieve distinct, controlled, and synchronized temperature variations in the stimuli, a control module is used to regulate each of the Peltier elements. Results from our preliminary experiments suggest that the participants were able to perceive mild sweetness on the tip of their tongue while using the proposed system."
Nimesha Ranasinghe and Ellen Yi-Luen Do of the National University of Singapore are the two explorers. This is a device where changes in temperature serve to mimic the sensation of sweetness on the tongue.
Victoria Turk in New Scientist wrote about what their technology does: "The user places the tip of their tongue on a square of thermoelectric elements that are rapidly heated or cooled, hijacking thermally sensitive neurons that normally contribute to the sensory code for taste."
MailOnline described it as a "virtual sweetness instrument" which makes use of "a grid of four elements which generate temperature changes of 5°C in a few seconds." When applied to the tip of the tongue, said the report, "the temperature change results in a virtual sweet sensation." A 9V battery is put to use. Results: out of 15 people, eight registered a very mild sweet taste, said MailOnline.
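The published description (synchronized, rapid heating and cooling on a 2x2 grid, with swings of about 5°C over a few seconds) suggests a simple setpoint waveform. The sketch below generates one; the base temperature, period and update rate are illustrative guesses, not the authors' parameters:

    import numpy as np

    def sweet_setpoints(base_c=33.0, swing_c=5.0, period_s=4.0, duration_s=20.0, hz=10):
        """Triangle-wave temperature setpoints: ramp up ~swing_c, then back down."""
        t = np.arange(0.0, duration_s, 1.0 / hz)
        phase = (t % period_s) / period_s
        tri = np.where(phase < 0.5, 2.0 * phase, 2.0 - 2.0 * phase)
        temps = base_c + swing_c * tri
        # Same waveform for all four grid elements, as the stimuli are synchronized.
        return t, np.tile(temps, (4, 1))

    t, grid = sweet_setpoints()
    print(grid.shape, grid.min(), grid.max())  # (4, 200) 33.0 38.0
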

Applications could include taste-enhancing technology for dieters. Dr Ranasinghe told MailOnline: "We believe this will [be] especially helpful for the people on restricted diets, for example salt (hypertension and heart problems) and sugar (diabetics)."
New Scientist said Ranasinghe and Do could see a system like this embedded in a glass or mug to make low sugar drinks taste sweeter.
Another group, from the University of Tokyo, is using electrodes to stimulate the jaw muscles. Tokyo researchers Arinobu Niijima and Takefumi Ogawa are reporting results from an electrical muscle stimulation (EMS) test of jaw movements in chewing.
"We propose Electric Food Texture System, which can present virtual food texture such as hardness and elasticity by electrical muscle stimulation (EMS) to the masseter muscle," said the researchers in a video posted last month on their work, "Study on Control Method of Virtual Food Texture by Electrical Muscle Stimulation."
Dudau in Neowin described their experiment, where "scientists attached electrodes to jaw muscles and managed to simulate the sensation of biting into different materials. For example, by varying the electrical stimulation, users reported that while eating a real cookie, it felt like biting into something soft, or chewing something hard alternatively."
Turk in New Scientist also talked about the Tokyo team who presented "a device that uses electricity to simulate the experience of chewing foods of different textures. Arinobu Niijima and Takefumi Ogawa's Electric Food Texture System also uses electrodes, but not on the tongue, instead they place them on the masseter muscle – a muscle in the jaw used for chewing – to give sensations of hardness or chewiness as a user bites down. 'There is no food in the mouth, but users feel as if they are chewing some food due to haptic feedback by electrical muscle stimulation,' says Niijima."
Getting into technical details, MailOnline said "By delivering short pulses of between 100 to 250 Hz they were able to stimulate the masseter muscles, used to chew solid foods."
So if the 'sugar' researchers were looking at taste sensation, these researchers were looking at food texture. They said, "In this paper, we investigated the feasibility to control virtual food texture by EMS."
The researchers said on their video page, "We conducted an experiment to reveal the relationship of the parameters of EMS and those of virtual food texture. The experimental results show that the higher strength of EMS is, the harder virtual food texture is, and the longer duration of EMS is, the more elastic virtual food texture is."
At a higher frequency, the sensation was that of eating tougher, chewier food, while a longer pulse simulated a more elastic texture.
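Those reported relationships suggest a simple parameter mapping. The sketch below illustrates only that mapping; the 100-250 Hz range comes from the article, while the strength and duration scales are invented for illustration:

    def ems_settings(hardness, elasticity):
        """Map texture targets in [0, 1] to illustrative EMS parameters.

        Reported findings: stronger stimulation feels harder;
        longer stimulation feels more elastic.
        """
        hardness = min(max(hardness, 0.0), 1.0)
        elasticity = min(max(elasticity, 0.0), 1.0)
        return {
            "frequency_hz": 100 + 150 * hardness,   # article reports 100-250 Hz pulses
            "strength": 0.2 + 0.8 * hardness,       # normalized stimulation strength
            "duration_ms": 50 + 250 * elasticity,   # longer pulse -> more elastic
        }

    print(ems_settings(hardness=0.9, elasticity=0.2))  # a hard, crisp "cookie"
    print(ems_settings(hardness=0.3, elasticity=0.9))  # a soft, chewy "gummy"
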

Lab creates open-source optogenetics hardware, software

Lab creates open-source optogenetics hardware, software
Rice University’s low-cost, open-source Light Plate Apparatus can easily be used by nonengineers and noncomputer programmers and can be assembled by a nonexpert in one day from components costing less than $150. Credit: Jeff Fitlow/Rice University
Nobody likes a cheater, but Rice University bioengineering graduate student Karl Gerhardt wants people to copy his answers. That's the whole point.
Gerhardt and Rice colleagues have created the first low-cost, easy-to-use hardware platform that biologists who have little or no training in engineering or software design can use to incorporate optogenetics testing in their labs.
Rice's Light Plate Apparatus (LPA) is described in a paper available for free online this week in the open-access journal Scientific Reports. The LPA, which was created in the lab of Jeffrey Tabor, assistant professor of bioengineering, uses open-source hardware and software. The apparatus can deliver two independent light signals to each well in a standard 24-well plate and has sockets that accept LEDs of wavelengths ranging from blue to far red. Total component costs for the LPA are less than $400—$150 for labs with a 3-D printer—and each unit can be assembled and calibrated by a nonexpert in one day.
"Our intent is to bring optogenetics to any researcher interested in using it," said Tabor, whose students created the LPA. In doing so, they found ways to make most of its parts with 3-D printers and also created software called Iris that uses simple buttons and pull-down menus to allow researchers to program the instrument for a wide range of experiments.
Rice bioengineers Karl Gerhardt (left) and Jeffrey Tabor with the Light Plate Apparatus, a low-cost, open-source optogenetics platform. Credit: Jeff Fitlow/Rice University
Optogenetics, which was developed in the past 15 years, involves genetically modifying cells with light-sensing molecules so that light can be used to turn genes and other cellular processes on or off. Its most notable successes have come in neuroscience following the invention of brain-implantable optical neuro interfaces, which have explored the cells and mechanisms associated with aggression, parenting, drug addiction, mating, same-sex attraction, anxiety, obsessive-compulsive disorders and more.
"Over the past 5-10 years, practically every biological process has been put under optogenetics control," said Gerhardt, who works in Tabor's lab. "The problem is that while everyone has been developing the biological tools to do optogenetics—the light-sensing proteins, gene-expression systems, protein interactions, etc.—outside of neuroscience, no one has really developed good hardware that makes it easy to use those tools."
To demonstrate the broad applicability of LPA, Tabor, Gerhardt and co-authors used the system to perform a series of optogenetics tests on a diverse set of model organisms, including gut bacteria, yeast, mammalian cells and photosynthetic cyanobacteria.
Gerhardt didn't come to Rice intending to invent the world's first easy-to-use optogenetics research platform. A biochemist by training, he initially was interested in simply creating something that would allow him to incorporate optogenetics in his own research. In early 2014, Gerhardt was studying the social amoeba Dictyostelium discoideum. Evan Olson, another Ph.D. student in Tabor's group, had just created the "light tube array," or LTA, an automated system for doing optogenetics on up to 64 test tubes at a time.
Lab creates open-source optogenetics hardware, software
Rice University graduate students Karl Gerhardt (left) and Sebastián Castillo-Hair prepare cell cultures for optogenetics testing with the Light Plate Apparatus, an open-source system they developed with colleagues in the laboratory of Rice’s Jeffrey Tabor, assistant professor of bioengineering. Credit: Jeff Fitlow/Rice University
Unfortunately for Gerhardt, D. discoideum, which biologists commonly call "dicty," prefers to grow on flat surfaces, like Petri dishes and flat-bottomed well plates. Dicty is also sensitive to vibrations and movement. Like dicty, many organisms commonly studied in biology labs, including many animal cell lines and virtually all human cells, require similar conditions.
"I couldn't culture dicty in the LTA, so I built a sort of plate-based version, and I used it for a couple of experiments, but it didn't work very well," Gerhardt said. "Then, some other people in our lab who had training in electrical engineering and Evan, with his physics background, said, 'We can take this version and make it a lot better.'"
Gerhardt said the group kept innovating and coming up with new versions of the hardware. For example, to make it easy to change the wavelength of light, the team incorporated standard sockets so it would be easy to swap out different colored LEDs. They also added a low-cost microcontroller with an SD card reader, drivers capable of producing more than 4,000 levels of light intensity and millisecond time control.
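The article does not specify the on-card file format, so the sketch below only illustrates the kind of schedule Iris must produce: per-well, per-LED intensity steps with millisecond timestamps, for a 24-well plate with two LEDs per well and a 12-bit (4,096-level) intensity range. The CSV layout and file name are assumptions made for illustration:

    import csv

    WELLS, LEDS_PER_WELL, MAX_LEVEL = 24, 2, 4095  # 24-well plate, 12-bit intensity

    def staircase_schedule(steps=8, step_ms=60_000):
        """One intensity staircase per well/LED: equal steps from dark to full."""
        rows = []
        for well in range(WELLS):
            for led in range(LEDS_PER_WELL):
                for k in range(steps):
                    rows.append({
                        "time_ms": k * step_ms,
                        "well": well,
                        "led": led,
                        "intensity": round(k * MAX_LEVEL / (steps - 1)),
                    })
        return rows

    with open("lpa_schedule.csv", "w", newline="") as f:  # hypothetical file name
        writer = csv.DictWriter(f, fieldnames=["time_ms", "well", "led", "intensity"])
        writer.writeheader()
        writer.writerows(staircase_schedule())
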
"We got more and more ambitious in terms of the features we wanted to add, and now we're on version three or four of the hardware," he said. "Then Lucas (Hartsough), Brian (Landry) and Felix (Ekness), members of our group who had expertise in programming and website design, said, 'We'll make the software,' and that's where Iris came from."
Rice University graduate student Sebastián Castillo-Hair conducts tests with the Light Plate Apparatus, an open-source optogenetics research platform developed in the laboratory of Rice’s Jeffrey Tabor, assistant professor of bioengineering. Credit: Jeff Fitlow/Rice University
Iris makes use of a graphical user interface to allow people without specialized computer training to easily program experiments for the LPA.
"Programming is a major barrier for some biologists who want to work with this kind of hardware," Gerhardt said. "Optogenetics hardware, most of the time, requires someone with programming experience who can go into the command line and write code. We wanted to eliminate that barrier."
To simplify the process for getting started with LPA, Tabor and Gerhardt have published all the software, design files and specifications for the system on GitHub, a site that caters to the do-it-yourself community by making it easy to create, share and distinguish different versions of software and files for open-source platforms like LPA.
Gerhardt said at least a half-dozen research groups began making LPAs after an early version of the paper was posted on a biology preprint server, and he hopes many more begin using it now that the Scientific Reports paper has been published.
"I hope this becomes the standard format for doing general optogenetics experiments, especially for people on the biology end of the spectrum who would never think about building their own hardware," Gerhardt said. "I hope they'll see this and say, 'OK. We can do optogenetics now.'"
