Raspberry Pi brings out shiny Compute Module 3

Another Raspberry Pi launch announcement—and another burst of news items explaining what's new, at what price.
This time it is about the Raspberry Pi Compute Module 3 (CM3). Trusted Reviews said it comes with 64-bit and multi-core functionality.
"The new Compute Module is based on the BCM2837 processor – the same as found in the Raspberry Pi 3 – running at 1.2 GHz with 1 gigabyte of RAM," said Hackaday.
The Raspberry Pi blog provided the CM3 launch announcement:
"Way back in April of 2014 we launched the original Compute Module (CM1), which was based around the BCM2835 processor of the original Raspberry Pi. CM1 was a great success and we've seen a lot of uptake from various markets, particularly in IoT and home and factory automation."
Now it has a new CM3 based on the Raspberry Pi 3 hardware. Take note: It is "providing twice the RAM and roughly 10x the CPU performance of the original Module," according to the blog.
Ars Technica noted that it was the first big upgrade since 2014. That year, said Trusted Reviews, the original module "combined the guts of a first-generation Pi with a small SODIMM-layout module."
The new version, said Joe Roberts in Trusted Reviews, "which uses the same BCM2837, a quad-core 64-bit ARMv8 part, as the Pi 3, brings the Compute Module fully up to date."
There will be two flavors—CM3 and CM3L (Lite). The 'L' version is a CM3 without eMMC Flash—that is, as described by RS Components, "not fitted with eMMC Flash and the SD/eMMC interface. But pins are available for the designer to connect their own SD/eMMC device."
According to the blog, the Lite version "brings the SD card interface to the Module pins so a user can wire this up to an eMMC or SD card of their choice."
Jon Brodkin in Ars Technica said that the Compute Module's stripped-down form factor makes it more suitable for embedded computing, as it fits into a standard SODIMM connector. The new Compute Module can run Windows IoT Core and supports Linux.
The latest version is being used by NEC, said Brodkin, in displays intended for digital signs, streaming, and presentations. The Raspberry Pi blog, meanwhile, said that "we're already excited to see NEC displays, an early adopter, launching their CM3-enabled display solution."
The blog stated pricing for the two flavors. The CM3 and CM3L are priced at $30 and $25, respectively (excluding tax and shipping), and this price applies to any size order. The original Compute Module is also reduced to $25. The blog said one can "Head on over to our partners element14 (or Farnell UK) and RS Components" to buy them.
What about backwards compatibility? According to the blog, "The CM3 is largely backwards-compatible with CM1 designs which have followed our design guidelines."
The blog presented the caveats: The module is 1mm taller than the original module; "the processor core supply (VBAT) can draw significantly more current. Consequently, the processor itself will run much hotter under heavy CPU load, so designers need to consider thermals based on expected use cases."

Credit: Nancy Owano

How To Rename Multiple Files at One Time in Windows 10?

In the Windows 10 File Explorer, renaming files in large batches is simple, but for many users, myself included, the feature is not well known.
In this Quick Tip article I want to share with you how easy it is to use this capability of File Explorer.


 Process :-  

Step 1: Select the images/files you want to rename
In Windows 10 there is always more than one way to accomplish most tasks, so once you have File Explorer open to the directory of files you want to rename, you can use the keyboard shortcut CTRL + A to select all of the files, or use the Select All button on the Home view of File Explorer. Alternatively, select only those images you want to rename at once.

When you have selected the images/files that you want to rename as a group, move to step 2.
Step 2: Rename the files
Renaming files in a batch is done the same way as renaming a single file.
Once all of the images/files you want to rename are selected, right click on the first image/file and select Rename from the context menu.

You will then have an editable name field for the first image/file in the sequence - just give it whatever name you choose for the group of images/files. Hit the Enter key once you have the new name typed in.

Now you will see all the files with the new name followed by a sequential number in parentheses. You have now successfully renamed your files in one batch.

Here is one last interesting thing about this feature: if you click on any other image/file in the collection, it will give that file the first sequential number and then continue from that image/file in order until it hits the end of the list. At that point it wraps around to the first file and continues renaming until it reaches the file just before the one you started with.
So a key aspect of this process is to make sure the files are in the order you want them numbered, and to start with the first image/file in the directory.
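
For readers who like to script things, the numbering scheme File Explorer applies is easy to reproduce. Below is a minimal Python sketch of the same "Name (1)", "Name (2)" pattern; the folder path and base name are hypothetical examples, and File Explorer itself needs none of this.

```python
# Illustrative sketch only: reproduces File Explorer's "Name (1)", "Name (2)"
# batch-rename scheme. The directory and base name below are made-up examples.
import os

def batch_rename(directory, base_name):
    """Rename every file in `directory` to 'base_name (n)<ext>' in sorted order."""
    for n, filename in enumerate(sorted(os.listdir(directory)), start=1):
        ext = os.path.splitext(filename)[1]          # keep each file's extension
        new_name = f"{base_name} ({n}){ext}"         # e.g. "Holiday (1).jpg"
        os.rename(os.path.join(directory, filename),
                  os.path.join(directory, new_name))

batch_rename(r"C:\Users\me\Pictures\Holiday", "Holiday")
```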



Blitab Technology develops tablet for the blind and visually impaired

Blitab, a tablet with a Braille interface, looks like a promising step up for blind and low vision people who want to be part of the educational, working and entertainment worlds of digital life.
In a video, Blitab Technology founder Kristina Tsvetanova said the idea for such a tablet came to her during her studies as an industrial engineer. At the time, a blind colleague of hers asked her to sign him up for an online course, and a question nagged her: how could technology help him better?
Worldwide, she said, there are more than 285 million blind and visually impaired people.
She was aware that, in general, blind and low vision people were coping with old, bulky technology, contributing to low literacy rates among blind children. She and her team want to change that.
There was ample room for improvement. The conventional interfaces for the blind, she said, have been slow and expensive. She said a Braille keyboard can range from about $5,000 to $8,000. Also, she said, they are limited in what the blind person can read, just a few words at a time. Imagine, she said, reading Moby Dick five words at a time.
They have engineered a tablet with a 14-line Braille display on the top and a touch screen on the bottom.


Part of their technology involves a high performance membrane, and their press statement said the tablet uses smart micro fluids to develop small physical bubbles instead of a screen display.
They have produced a tactile tablet, she said, where people with sight loss can learn, work and play using that device.
The user can control the tablet with voice-over if the person wants to listen to an ebook, or press one button to activate dots on the screen, changing the screen's surface.
Romain Dillet, in TechCrunch: "The magic happens when you press the button on the side of the device. The top half of the device turns into a Braille reader. You can load a document, a web page—anything really—and then read the content using Braille."
Tsvetanova told Dillet, "We're not excluding voice over; we combine both of these things." She said they offer both "the tactile experience and the voice over experience."
Rachel Metz reported in MIT Technology Review: "The Blitab's Braille display includes 14 rows, each made up of 23 cells with six dots per cell. Every cell can present one letter of the Braille alphabet. Underneath the grid are numerous layers of fluids and a special kind of membrane," she wrote.
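
Those figures work out to 14 × 23 × 6 = 1,932 individually controlled dots. Blitab has not published its internals, so purely as an illustrative sketch, a display buffer for such a grid might be modeled like this (the two-letter Braille lookup is just a placeholder):

```python
# Minimal sketch of a 14x23 six-dot display buffer as described in the article.
# The Braille table covers just two letters for illustration; a real device
# would map the full alphabet (and contractions) to dot patterns.
ROWS, CELLS_PER_ROW, DOTS_PER_CELL = 14, 23, 6
print(ROWS * CELLS_PER_ROW * DOTS_PER_CELL)   # 1932 individually raised dots

BRAILLE = {   # dot numbering: 1-2-3 down the left column, 4-5-6 down the right
    "a": (1,),
    "b": (1, 2),
}

def render_line(text):
    """Turn a string into one display row: a list of 6-element dot states."""
    row = []
    for ch in text[:CELLS_PER_ROW].lower():
        dots = BRAILLE.get(ch, ())
        row.append([1 if d in dots else 0 for d in range(1, DOTS_PER_CELL + 1)])
    return row

print(render_line("ab"))   # [[1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0]]
```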

Credit: Blitab
At heart, it's an Android tablet, Dillet said, "so it has Wi-Fi and Bluetooth and can run all sorts of Android apps."
Metz said that with eight hours of use per day, it's estimated to last for five days on one battery charge.
The team has set the price of the device at $500.
How they will proceed: first, she said, they will sell directly from their web site, then scale through global distributors, and then distribute to the less developed world.
What's next? Dillet said in the Jan. 6 article that "the team of 10 plans to ship the [tablet] in six months with pre-orders starting later this month."
Blitab Technology recently took first place in the Digital Wellbeing category of the 2016 EIT Digital Challenge. EIT Digital is described as a European open innovation organization. They seek to foster digital innovation and entrepreneurial talent.


Credit: Nancy Owano
A Swiss firm acquires Mars One private project


A British-Dutch project aiming to send an unmanned mission to Mars by 2018 announced Friday that the shareholders of a Swiss financial services company have agreed a takeover bid.
"The acquisition is now only pending approval by the board of Mars One Ventures," the company said in a joint statement with InFin Innovative Finance AG, adding approval from the Mars board would come "as soon as possible."
"The takeover provides a solid path to funding the next steps of Mars One's mission to establish a permanent human settlement on Mars," the statement added.
Mars One consists of two entities: the Dutch not-for-profit Mars One Foundation and a British public limited company Mars One Ventures.
Mars One aims to establish a permanent human settlement on the Red Planet, and is currently "in the early mission concept phase," the company says, adding securing funding is one of its major challenges.
Some 200,000 hopefuls from 140 countries initially signed up for the Mars One project, which is to be partly funded by a television reality show about the endeavour.
Those have now been whittled down to just 100, out of which 24 will be selected for one-way trips to Mars due to start in 2026 after several unmanned missions have been completed.
"Once this deal is completed, we'll be in a much stronger financial position as we begin the next phase of our mission. Very exciting times," said Mars One chief executive Bas Lansdorp.
NASA is currently working on three Mars missions with the European Space Agency and plans to send another rover to Mars in 2020.
But NASA has no plans for a manned mission to Mars until the 2030s.
Best weather satellite ever built rockets into space
This photo provided by United Launch Alliance shows a United Launch Alliance (ULA) Atlas V rocket carrying GOES-R spacecraft for NASA and NOAA lifting off from Space Launch Complex-41 at 6:42 p.m. EST at Cape Canaveral Air Force Station, Fla., Saturday, Nov. 19, 2016. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (United Launch Alliance via AP)  
The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives.
This new GOES-R spacecraft will track U.S. weather as never before: hurricanes, tornadoes, flooding, wildfires, lightning storms, even solar flares. Indeed, about 50 TV meteorologists from around the country converged on the launch site—including NBC's Al Roker—along with 8,000 space program workers and guests.
"What's so exciting is that we're going to be getting more data, more often, much more detailed, higher resolution," Roker said. In the case of tornadoes, "if we can give people another 10, 15, 20 minutes, we're talking about lives being saved."
Think superhero speed and accuracy for forecasting. Super high-definition TV, versus black-and-white.
"Really a quantum leap above any NOAA has ever flown," said Stephen Volz, the National Oceanic and Atmospheric Administration's director of satellites.
"For the American public, that will mean faster, more accurate weather forecasts and warnings," Volz said earlier in the week. "That also will mean more lives saved and better environmental intelligence" for government officials responsible for hurricane and other evacuations.
Cell phones light up the beaches of Cape Canaveral and Cocoa Beach, Fla., north of the Cocoa Beach Pier as spectators watch the launch of the NOAA GOES-R weather satellite, Saturday, Nov. 19, 2016. It was launched from Launch Complex 41 at Cape Canaveral Air Force Station on a ULA Atlas V rocket. (Malcolm Denemark/Florida Today via AP)
Airline passengers also stand to benefit, as do rocket launch teams. Improved forecasting will help pilots avoid bad weather and help rocket scientists know when to call off a launch.
NASA declared success 3 1/2 hours after liftoff, following separation from the upper stage.
The first in a series of four high-tech satellites, GOES-R hitched a ride on an unmanned Atlas V rocket, delayed an hour by rocket and other problems. NOAA teamed up with NASA for the mission.
The satellite—valued by NOAA at $1 billion—is aiming for a 22,300-mile-high equatorial orbit. There, it will join three aging spacecraft with 40-year-old technology, and become known as GOES-16. After months of testing, this newest satellite will take over for one of the older ones. The second satellite in the series will follow in 2018. All told, the series should stretch to 2036.
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
GOES stands for Geostationary Operational Environmental Satellite. The first was launched in 1975.
GOES-R's premier imager—one of six science instruments—will offer three times as many channels as the existing system, four times the resolution and five times the scan speed, said NOAA program director Greg Mandt. A similar imager is also flying on a Japanese weather satellite.
Typically, it will churn out full images of the Western Hemisphere every 15 minutes and the continental United States every five minutes. Specific storm regions will be updated every 30 seconds.
Forecasters will get pictures "like they've never seen before," Mandt promised.
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, in Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
A first-of-its-kind lightning mapper, meanwhile, will take 500 snapshots a second.
This next-generation GOES program—$11 billion in all—includes four satellites, an extensive land system of satellite dishes and other equipment, and new methods for crunching the massive, nonstop stream of expected data.
Hurricane Matthew, interestingly enough, delayed the launch by a couple weeks. As the hurricane bore down on Florida in early October, launch preps were put on hold. Matthew stayed far enough offshore to cause minimal damage to Cape Canaveral, despite some early forecasts that suggested a direct strike.
Credit: Marcia Dunn
A suit-X trio designed to support workers: Meet MAX



(Tech Xplore)—Not all of us park our bodies in a chair in the morning and cross our legs to do our work. In fact, just think of vast numbers of workers doing physically demanding or just physically repetitive tasks including bending and lifting.
Workers on construction sites, in factories and in warehouses might cope with aches and pains brought on by their work. Hopefully, the future will provide an easy way for workers to suit up and avoid those aches and pains.
There is a new kid on the block aiming to provide such a solution, and a number of tech watchers have put it in the news this month. A California-based group aptly called suitX announced its MAX, which stands for Modular Agile Exoskeleton. The company designs and makes exoskeletons.
"MAX is designed to support workers during the repetitive tasks that most frequently cause injury," said a company release.
Will Knight in MIT Technology Review said that this essentially is "a trio of devices that use robotic technologies to enhance the abilities of able-bodied workers and prevent common workplace injuries."
Target users, for example, could include those who carry out ceiling inspections, welding, installations and repairs. "It's not only lifting 75 pounds that can hurt your back; it is also lifting 20 pounds repeatedly throughout the day that will lead to injury," said Dr. Homayoon Kazerooni, founder and CEO of suitX. "The MAX solution is designed for unstructured workplaces where no robot can work as efficiently as a human worker. Our goal is to augment and support workers who perform demanding and repetitive tasks in unstructured workplaces in order to prevent and reduce injuries."
Seeker referred to the MAX system as an exoskeleton device that could potentially change the way millions of people work.
Seeker noted its advantages as a workplace exoskeleton: it is lightweight, so the user can walk around unimpeded. "The exoskeleton units kick in only when you need them, and they don't require any external power source."
MAX is a product with three modules. You use them independently or in combination, depending on work needs. The three modules are backX, shoulderX, and legX.
According to the company, "All modules intelligently engage when you need them, and don't impede you otherwise."
The backX (lower back) reduces forces and torques.
The shoulderX reduces forces; it "enables the wearer to perform chest-to-ceiling level tasks for longer periods of time." In a video the company defines shoulderX as "an industrial arm exoskeleton that augments its wearer by reducing gravity-induced forces at the shoulder complex."
The legX was designed to support the knee joint and quadriceps. It incorporates microcomputers in each leg that communicate with each other to determine if the person is walking, bending, or taking the stairs. Seeker said these communicate via Bluetooth, monitoring spacing and position.
Credit: suitx
Kazerooni spoke about his company and its mission, in Seeker. "My job is easy. I sit in front of a computer. But these guys work all day long, put their bodies through abuse. We can use bionics to help them." He also said he and his team did not create this "because of science fiction movies. We were responding to numbers from the Department of Labor, which said that back, knee and shoulder injuries are the most common form of injuries among workers."
Will Knight meanwhile has reflected on the bigger picture in developments. Can they help in preventing injury on the job and help prolong workers' careers? "New materials, novel mechanical designs, and cheaper actuators and motors have enabled a new generation of cheaper, more lightweight exoskeletons to emerge in recent years," he wrote. "For instance, research groups at Harvard and SRI are developing systems that are passive and use soft, lightweight materials."
Some companies, such as BMW, said Knight, have been experimenting with exoskeletons. "The MAX is another (bionic) step toward an augmented future of work."

Credit: Nancy Owano
Use drones and insect biobots to map disaster areas


Credit: North Carolina State University  
Researchers at North Carolina State University have developed a combination of software and hardware that will allow them to use unmanned aerial vehicles (UAVs) and insect cyborgs, or biobots, to map large, unfamiliar areas – such as collapsed buildings after a disaster.
"The idea would be to release a swarm of sensor-equipped biobots – such as remotely controlled cockroaches – into a collapsed building or other dangerous, unmapped area," says Edgar Lobaton, an assistant professor of electrical and computer engineering at NC State and co-author of two papers describing the work.
"Using remote-control technology, we would restrict the movement of the biobots to a defined area," Lobaton says. "That area would be defined by proximity to a beacon on a UAV. For example, the biobots may be prevented from going more than 20 meters from the UAV."
The biobots would be allowed to move freely within a defined area and would signal researchers via radio waves whenever they got close to each other. Custom software would then use an algorithm to translate the biobot sensor data into a rough map of the unknown environment.
Once the program receives enough data to map the defined area, the UAV moves forward to hover over an adjacent, unexplored section. The biobots move with it, and the mapping process is repeated. The software program then stitches the new map to the previous one. This can be repeated until the entire region or structure has been mapped; that map could then be used by first responders or other authorities.
"This has utility for areas – like collapsed buildings – where GPS can't be used," Lobaton says. "A strong radio signal from the UAV could penetrate to a certain extent into a collapsed building, keeping the biobot swarm contained. And as long as we can get a signal from any part of the swarm, we are able to retrieve data on what the rest of the swarm is doing. Based on our experimental data, we know you're going to lose track of a few individuals, but that shouldn't prevent you from collecting enough data for mapping."
Co-lead author Alper Bozkurt, an associate professor of electrical and computer engineering at NC State, has previously developed functional cockroach biobots. However, to test their new mapping technology, the research team relied on inch-and-a-half-long robots that simulate cockroach behavior.
In their experiment, researchers released these robots into a maze-like space, with the effect of the UAV beacon emulated using an overhead camera and a physical boundary attached to a moving cart. The cart was moved as the robots mapped the area.
"We had previously developed proof-of-concept software that allowed us to map small areas with biobots, but this work allows us to map much larger areas and to stitch those maps together into a comprehensive overview," Lobaton says. "It would be of much more practical use for helping to locate survivors after a disaster, finding a safe way to reach survivors, or for helping responders determine how structurally safe a building may be.
"The next step is to replicate these experiments using biobots, which we're excited about."
An article on the framework for developing local maps and stitching them together, "A Framework for Mapping with Biobotic Insect Networks: From Local to Global Maps," is published in Robotics and Autonomous Systems. An article on the theory of mapping based on the proximity of mobile sensors to each other, "Geometric Learning and Topological Inference with Biobotic Networks," is published in IEEE Transactions on Signal and Information Processing over Networks.


Credit: Matt Shipman
How machine learning advances artificial intelligence


Computers that learn for themselves are with us now. As they become more common in 'high-stakes' applications like robotic surgery, terrorism detection and driverless cars, researchers ask what can be done to make sure we can trust them.
There would always be a first death in a driverless car, and it happened in May 2016. Joshua Brown had engaged the autopilot system in his Tesla when a tractor-trailer drove across the road in front of him. It seems that neither he nor the sensors in the autopilot noticed the white-sided truck against a brightly lit sky, with tragic results.
Of course many people die in car crashes every day – in the USA there is one fatality every 94 million miles, and according to Tesla this was the first known fatality in over 130 million miles of driving with activated autopilot. In fact, given that most road fatalities are the result of human error, it has been said that autonomous cars should make travelling safer.
Even so, the tragedy raised a pertinent question: how much do we understand – and trust – the computers in an autonomous vehicle? Or, in fact, in any machine that has been taught to carry out an activity that a human would do?
We are now in the era of machine learning. Machines can be trained to recognise certain patterns in their environment and to respond appropriately. It happens every time your digital camera detects a face and throws a box around it to focus, or the personal assistant on your smartphone answers a question, or the adverts match your interests when you search online.
Machine learning is a way to program computers to learn from experience and improve their performance in a way that resembles how humans and animals learn tasks. As machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important, says Zoubin Ghahramani, Professor of Information Engineering in Cambridge's Department of Engineering.
Faced with a life or death decision, would a driverless car decide to hit pedestrians, or avoid them and risk the lives of its occupants? Providing a medical diagnosis, could a machine be wildly inaccurate because it has based its opinion on a too-small sample size? In making financial transactions, should a computer explain how robust is its assessment of the volatility of the stock markets?
"Machines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data," says Ghahramani. "But what is going on inside the 'black box'? If the processes by which decisions were being made were more transparent, then trust would be less of an issue."
His team builds the algorithms that lie at the heart of these technologies (the "invisible bit" as he refers to it). Trust and transparency are important themes in their work: "We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty. Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty and then as you have more and more data you have more and more certainty.
"When machines make decisions, we want them to be clear on what stage they have reached in this process. And when they are unsure, we want them to tell us."
One method is to build in an internal self-evaluation or calibration stage so that the machine can test its own certainty, and report back.
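
As a generic illustration of that self-evaluation idea (not the Cambridge group's specific technique), a reliability check buckets a model's confidence scores and compares the probability each bucket claims with the accuracy actually observed:

```python
# Generic calibration check: bucket a classifier's confidence scores and
# compare each bucket's claimed probability with its observed accuracy.
# The "model" here is a synthetic, well-calibrated toy for illustration.
import numpy as np

def reliability(confidences, correct, n_bins=10):
    """Print (claimed, observed) accuracy per confidence bin."""
    bins = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            print(f"bin {b}: claimed {confidences[mask].mean():.2f}, "
                  f"observed {correct[mask].mean():.2f}")

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)         # model's self-reported certainty
correct = (rng.uniform(size=1000) < conf)  # toy model that is well calibrated
reliability(conf, correct.astype(float))
```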
Two years ago, Ghahramani's group launched the Automatic Statistician with funding from Google. The tool helps scientists analyse datasets for statistically significant patterns and, crucially, it also provides a report to explain how sure it is about its predictions.
"The difficulty with machine learning systems is you don't really know what's going on inside – and the answers they provide are not contextualised, like a human would do. The Automatic Statistician explains what it's doing, in a human-understandable form."
Where transparency becomes especially relevant is in applications like medical diagnoses, where understanding the provenance of how a decision is made is necessary to trust it.
Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: "A particular issue with new (AI) systems that learn or evolve is that their processes do not clearly map to rational decision-making pathways that are easy for humans to understand." His research aims both at making these pathways more transparent, sometimes through visualisation, and at looking at what happens when systems are used in real-world scenarios that extend beyond their training environments – an increasingly common occurrence.
"We would like AI systems to monitor their situation dynamically, detect whether there has been a change in their environment and – if they can no longer work reliably – then provide an alert and perhaps shift to a safety mode." A , for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.
Weller's theme of trust and transparency forms just one of the projects at the newly launched £10 million Leverhulme Centre for the Future of Intelligence (CFI). Ghahramani, who is Deputy Director of the Centre, explains: "It's important to understand how developing technologies can help rather than replace humans. Over the coming years, philosophers, social scientists, cognitive scientists and computer scientists will help guide the future of the technology and study its implications – both the concerns and the benefits to society."
CFI brings together four of the world's leading universities (Cambridge, Oxford, Berkeley and Imperial College, London) to explore the implications of AI for human civilisation. Together, an interdisciplinary community of researchers will work closely with policy-makers and industry investigating topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.
Ghahramani describes the excitement felt across the field: "It's exploding in importance. It used to be an area of research that was very academic – but in the past five years people have realised these methods are incredibly useful across a wide range of societally important areas.
"We are awash with data, we have increasing computing power and we will see more and more applications that make predictions in real time. And as we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us."
Artificial intelligence has the power to eradicate poverty and disease or hasten the end of human civilisation as we know it – according to a speech delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.
Internet bot investigates creativity


A portrait of Benjamin Franklin manipulated by Smilevector. Credit: Smithsonian National Portrait Gallery.
Tom White, senior lecturer in Victoria's School of Design, has created Smilevector—a bot that examines images of people, then adds or removes smiles to their faces.
"It has examined hundreds of thousands of faces to learn the difference between images, by finding relations and reapplying them," says Mr White.
"When the computer finds an image it looks to identify if the person is smiling or not. If there isn't a smile, it adds one, but if there is a smile then it takes it away.
"It represents these changes as an animation, which moves parts of the face around, including crinkling and widening the eyes."
The bot can be used as a form of puppetry, says Mr White.
"These systems are domain independent, meaning you can do it with anything—from manipulating images of faces to shoes to chairs. It's really fun and interesting to work in this space. There are lots of ideas to play around with."
The creation of the bot was sparked by Mr White's research into creative intelligence.
"Machine learning and artificial intelligence are starting to have implications for people in creative industries. Some of these implications have to do with the computer's capabilities, like completing mundane tasks so that people can complete higher level tasks," says Mr White.
"I'm interested in exploring what these systems are capable of doing but also how it changes what we think of as being creative is in the first place. Once you have a system that can automate processes, is that still a creative act? If you can make something a completely push of the button operation, does its meaning change?"
Mr White says people have traditionally used creative tools by giving commands.
"However, I think we're moving toward more of a collaboration with computers—where there's an intelligent system that's making suggestions and helping steer the process.
"A lot will happen in this space in the next five to ten years, and now is the right time to progress. I also hope these techniques influence teaching over the long term as they become more mainstream. It is something that students could work with me on at Victoria University as part of our Master of Design Innovation or our new Master of Fine Arts (Creative Practice)."
The paper Sampling Generative Networks describing this research is available as an arXiv preprint. The research will also be presented as part of the Neural Information Processing Systems conference in Spain and Generative Art conference in Italy in December.

List of speech editing software

(Geekkeep)—Voice editing software has become a tool that many people work with. The military, hackers, broadcast hosts, animators and an ever-increasing list of others have come to rely on it to achieve their aims.

Animation studios have come to use these applications to produce character lines without hiring voice artists (a boon to up-and-coming studios). These days the security of some facilities uses biometric scans, and speech recognition units are quite common, which brings a downside: agents, burglars and military infiltrators rely on speech editing software to bypass these systems and gain unauthorized access.

You must have seen the ever-rising artificial intelligence competition in the tech market and hubs. AIs like DeepMind, Cortana, Cleverbot, Virtual Assistant Denise, Verbots, Madoma Virtual Assistant, DesktopMates, Braina and Syn Virtual Assistant use speech software to produce the voices of these assistants.


Here is a list of speech and audio editing software:



WavePad Audio Editing Software

This audio editing software is a full-featured professional audio and music editor for Windows and Mac. It lets you record and edit music, voice and other audio recordings. When editing audio files, you can cut, copy and paste parts of recordings, and then add effects like echo, amplification and noise reduction. WavePad works as a wav or mp3 editor, but it also supports a number of other file formats including vox, gsm, wma, real audio, au, aif, flac, ogg, and more.
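
The cut/copy/paste workflow these editors offer can also be scripted. As a rough illustration only, here is a sketch using the third-party pydub library (unrelated to any editor in this list); the filenames are hypothetical, and mp3 export requires ffmpeg to be installed:

```python
# Sketch of cut/copy/paste-style audio editing with pydub (a third-party
# Python library). Filenames are hypothetical; mp3 export needs ffmpeg.
from pydub import AudioSegment

voice = AudioSegment.from_file("narration.wav")

intro = voice[:5000]          # "copy" the first 5 seconds (slices are in ms)
middle = voice[5000:15000]    # a 10-second section from later in the take
louder = middle + 6           # adding a number boosts gain by that many dB

edited = intro + louder + intro.fade_out(2000)   # "paste" segments together
edited.export("edited.mp3", format="mp3")
```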



Free Audio Editor

Free Audio Editor can digitize sound recordings of your rare music cassette tapes, vinyl LPs and videos, creating standard digital sound files. Timer and input level triggered recording are included. There is a button to activate the system Windows Mixer without visiting the control panel. The recording can be directly loaded into the waveform window for further perfection.

You can edit audio using the traditional Waveform View or the frequency-based Spectral Display that makes it easy to isolate and remove unwanted noise. Intuitive cut/copy/paste/trim/mute and more actions can be performed easily. The selection tools let editing operations be performed with millisecond precision.

Enhance your audio with more than 30 native signal and effects processing engines, including compression, EQ, fade in/out, delay, chorus, reverb, time stretching, pitch shifting and more. This significantly increases your audio processing capabilities. The real-time preview enables you to hear the results before mixing down to a single file.

This free audio editor supports a large number of input formats including MP3, WMA, WAV, AAC, FLAC, OGG, APE, AC3, AIFF, MP2, MPC, MPGA, M4A, CDA, VOX, RA, RAM, ARW, AIF, AIFC, TTA, G721, G723, G726 and many more as source formats. Any audio file can be saved to the most popular audio formats like MP3, WMA, WAV, OGG, etc. Furthermore, it is possible to control the output quality by adjusting the parameters, and the software also provides many presets with different combinations of settings for playback on all kinds of software applications and devices.


Audacity

Audacity can record live audio through a microphone or mixer, or digitize recordings from other media. With some sound cards, and on any recent version of Windows, Audacity can also capture streaming audio.
  • Device Toolbar manages multiple recording and playback devices.
  • Level meters can monitor volume levels before, during and after recording. Clipping can be displayed in the waveform or in a label track.
  • Record from microphone, line input, USB/Firewire devices and others.
  • Record computer playback on Windows Vista and later by choosing “Windows WASAPI” host in Device Toolbar then a “loopback” input.
  • Timer Record and Sound Activated Recording features.
  • Dub over existing tracks to create multi-track recordings.
  • Record at very low latencies on supported devices on Linux by using Audacity with JACK.
  • Record at sample rates up to 192,000 Hz (subject to appropriate hardware and host selection). Up to 384,000 Hz is supported for appropriate high-resolution devices on Windows (using WASAPI), Mac OS X, and Linux.
  • Record at 24-bit depth on Windows (using Windows WASAPI host), Mac OS X or Linux (using ALSA or JACK host).
  • Record multiple channels at once (subject to appropriate hardware).




Power Sound Editor

Power Sound Editor Free is a visual audio editing and recording software solution, which supports many advanced and powerful operations with audio data.
You can use Power Sound Editor Free to record your own music, voice, or other audio files, edit it, mix it with other audio or musical parts, add effects like Reverb, Chorus, and Echo, and burn it on a CD, post it on the World Wide Web or e-mail it.

mp3DirectCut

mp3DirectCut is a fast and extensive audio editor and recorder for compressed mp3. You can directly cut, copy, paste or change the volume with no need to decompress your files for audio editing. Using Cue sheets, pause detection or Auto cue you can easily divide long files.

Music Editor Free

Music Editor Free (MEF) is a multi-award winning music editor software tool. MEF helps you to record and edit music and sounds. It lets you make and edit music, voice and other audio recordings. When editing audio files you can cut, copy and paste parts of recordings and, if required, add effects like echo, amplification and noise reduction.

Wavosaur

Wavosaur is a free sound editor, audio editor, wav editor software for editing, processing and recording sounds, wav and mp3 files. Wavosaur has all the features to edit audio (cut, copy, paste, etc.) produce music loops, analyze, record, batch convert. Wavosaur supports VST plugins, ASIO driver, multichannel wav files, real time effect processing. The program has no installer and doesn’t write in the registry. Use it as a free mp3 editor, for mastering, sound design.

Traverso DAW

Traverso DAW is a GPL licensed, cross platform multitrack audio recording and editing suite, with an innovative and easy to master User Interface. It’s suited for both the professional and home user, who needs a robust and solid DAW. Adding and removal of effects plugins, moving Audio Clips and creating new Tracks during playback are all perfectly safe, giving you instant feedback on your work!

Ardour

Ardour is a digital audio workstation. You can use it to record, edit and mix multi-track audio. You can produce your own CDs, mix video soundtracks, or just experiment with new ideas about music and sound. Ardour capabilities include: multichannel recording, non-destructive editing with unlimited undo/redo, full automation support, a powerful mixer, unlimited tracks/busses/plugins, timecode synchronization, and hardware control from surfaces like the Mackie Control Universal. If you’ve been looking for a tool similar to ProTools, Nuendo, Pyramix, or Sequoia, you might have found it.

Rosegarden

Rosegarden is a well-rounded audio and MIDI sequencer, score editor, and general-purpose music composition and editing environment. Rosegarden is an easy-to-learn, attractive application that runs on Linux, ideal for composers, musicians, music students, and small studio or home recording environments.

Hydrogen

Hydrogen is an advanced drum machine for GNU/Linux. Its main goal is to bring professional yet simple and intuitive pattern-based drum programming.

Sound Engine

SoundEngine is a great tool for personal use, because it enables you to easily edit wave data and has many of the functions required for a mastering process.

Expstudio Audio Editor

Expstudio Audio Editor is a visual music file editor with many options and functions that let you edit your music files as easily as editing text files. Given audio data, it can perform many different operations such as displaying a waveform image of an audio file, filtering, applying various audio effects, format conversion and more.

DJ Audio Editor

DJ Audio Editor is an easy-to-use and well-organized audio application which allows you to perform various operations with audio data. You can create and edit audio files professionally, and displaying a waveform image of the audio file makes your work faster.

Eisenkraut

Eisenkraut is a cross-platform audio file editor. It requires Java 1.4+ and SuperCollider 3. It supports multi-channel and multi-mono files and floating-point encoding. An OSC scripting interface and experimental sonagramme functionality are provided.

FREE WAVE MP3 Editor

Free Wave MP3 Editor is a sound editor program for Windows. This software lets you make and edit voice and other audio recordings. You can cut, copy and paste parts of recording and, if required, add effects like echo, amplification and noise reduction.

Kangas Sound Editor

Fun Kangaroo-themed program that allows the user to create music and sound effects. It uses a system of frequency ratios for pitch control, rather than conventional music notation and equal temperament. It allows instruments, both musical and percussion, to be created.

Ecawave

Ecawave is a simple graphical audio file editor. The user-interface is based on Qt libraries, while almost all audio functionality is taken directly from ecasound libraries. As ecawave is designed for editing large audio files, all processing is done direct-to-disk. Simple waveform caching is used to speed-up file operations. Ecawave supports all audio file formats and effect algorithms provided by ecasound libraries. This includes JACK, ALSA, OSS, aRts, over 20 file formats, over 30 effect types, LADSPA plugins and multi-operator effect presets.

Audiobook Cutter

Audiobook Cutter splits your MP3 audio books and podcasts in a fast and user-friendly way. The split files can easily be used on mobile MP3 players because of their small size. Their duration allows smooth navigation through the book. The split points are determined automatically based on silence detection.

Jokosher

Jokosher is a simple yet powerful multi-track studio. With it you can create and record music, podcasts and more, all from an integrated simple environment.

LMMS

LMMS is a free cross-platform alternative to commercial programs like FL Studio, which allow you to produce music with your computer. This includes the creation of melodies and beats, the synthesis and mixing of sounds, and arranging of samples. You can have fun with your MIDI-keyboard and much more; all in a user-friendly and modern interface.

Mp3Splt

Mp3Splt-project is a utility to split mp3 and ogg files selecting a begin and an end time position, without decoding. It’s very useful to split large mp3/ogg to make smaller files or to split entire albums to obtain original tracks. If you want to split an album, you can select split points and filenames manually or you can get them automatically from CDDB (internet or a local file) or from .cue files. Supports also automatic silence split, that can be used also to adjust cddb/cue splitpoints. You can extract tracks from Mp3Wrap or AlbumWrap files in few seconds.

Qtractor

Qtractor is an Audio/MIDI multi-track sequencer application written in C++ with the Qt4 framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio, and the Advanced Linux Sound Architecture (ALSA) for MIDI, are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

ReZound

ReZound aims to be a stable, open source, and graphical audio file editor, primarily for Linux systems.

Sweep

Sweep is an audio editor and live playback tool for GNU/Linux, BSD and compatible systems. It supports many music and voice formats including WAV, AIFF, Ogg Vorbis, Speex and MP3, with multichannel editing and LADSPA effects plugins.

Wavesurfer

WaveSurfer is an Open Source tool for sound visualization and manipulation. It has been designed to suit both novice and advanced users. WaveSurfer has a simple and logical user interface that provides functionality in an intuitive way and which can be adapted to different tasks.

Synthesize Speech In Any Voice: New Software That Could Cause Controversy

 

Good luck ever trusting a recording again. As things stand, recordings presented in court as evidence may soon hardly have any value.
A low-quality video has emerged from the Adobe conference MAX showing a demo of a prototype of new software, called Project VoCo, that appears to be a Photoshop for audio. The program is shown synthesizing a man's voice to read different sentences based on the software's analysis of a real clip of him speaking. Just copy and paste to change it from "I kissed my dog and my wife" to "I kissed my wife and my wife." Or even insert entirely new words—they still sound eerily authentic. In case you were confused about what the software's intended purpose is, Adobe issued a statement:
When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative. We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words.
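
Adobe has not published how VoCo works. Purely to illustrate the copy-and-paste half of the idea, word-level splicing of an aligned recording can be sketched in a few lines; the word timings below are invented stand-ins for what a forced aligner would produce, and the real system goes much further by synthesizing words the speaker never said:

```python
# Crude illustration of word-level "copy and paste" on an aligned recording.
# This is NOT how VoCo works internally; the timings here are invented and
# would come from a forced aligner against a real transcript in practice.
import numpy as np

SR = 16000                              # sample rate, Hz
recording = np.random.randn(SR * 5)     # stand-in for 5 s of recorded speech

# word -> (start_sec, end_sec) within the original recording
alignment = {"i": (0.0, 0.3), "kissed": (0.3, 0.8), "my": (0.8, 1.0),
             "wife": (1.0, 1.5), "dog": (2.0, 2.5)}

def clip(word):
    start, end = alignment[word]
    return recording[int(start * SR):int(end * SR)]

# A rearranged sentence assembled purely from audio the speaker actually said:
new_audio = np.concatenate([clip(w) for w in "i kissed my dog".split()])
print(new_audio.shape)                  # (24000,) -> 1.5 s of spliced audio
```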
The crowd laughs and cheers uproariously as the program is demoed, seemingly unaware of the disturbing implications of a program like this, especially in the context of an election cycle where distortions of the truth are commonplace. Being able to synthesize audio—or to claim that real audio was synthesized—would only muddy the waters even further.
Somehow the clip also involves the comedian Jordan Peele, present at the conference, whose shocked expression is the only indication that anyone there is thinking about how this software will be used out in the real world.

God Of War 4 News Updates And All You Need To Know Inside

Kratos Is Back!

God Of War 4 Is Almost Here

Sony's bearded hero is about to be unleashed.
Santa Monica Studio, the creator of the God of War series, gave us a major revelation during the E3 event. The God of War series started back in 2005 and has been getting bigger and better every year. God of War is based on the Greek mythology hero Kratos, a Spartan warrior betrayed by his former master Ares, the God of War. Now Kratos seeks to free himself from the influence of the gods and begins his quest for revenge against Ares.
"Santa Monica always delivers some of the generation's best-looking games, and looks set to continue to do so, with even the brief glimpse we saw at E3 blowing everything that's come before out of the water." Brett Phipps said.
God Of War is an action-adventure game that has received high praise across the whole series. Gameplay, graphics, audio, you name it: God Of War eclipses the expectations of fans, gamers and critics alike.
With God of War 4 on the horizon, here are some of the things you need to know.
When is the release date of God Of War 4?
No release date has been provided. However, at the E3 2016 convention it was announced that God of War is "currently in development". PS4 Pro features will be supported; that's a guarantee.
Is the gameplay the same as in its predecessors?
There's a slight difference when it comes to gameplay, based on the trailer we've seen. Kratos seems more relaxed, and his brutal style of killing is not emphasized in the trailer. But don't worry: there are still large, mystic and mythic creatures that you can kill using your favorite weapon.
Will it adapt the same story?
Kratos will still be Kratos. But this time he has to mentor, guide and protect his son. Our bearded hero finds himself in the world of Norse mythology.

Supercomputer comes up with a profile of dark matter: Standard Model extension predicts properties of candidate particle
Simulated distribution of dark matter approximately three billion years after the Big Bang (illustration not from this work). Credit: The Virgo Consortium/Alexandre Amblard/ESA
In the search for the mysterious dark matter, physicists have used elaborate computer calculations to come up with an outline of the particles of this unknown form of matter. To do this, the scientists extended the successful Standard Model of particle physics which allowed them, among other things, to predict the mass of so-called axions, promising candidates for dark matter. The German-Hungarian team of researchers led by Professor Zoltán Fodor of the University of Wuppertal, Eötvös University in Budapest and Forschungszentrum Jülich carried out its calculations on Jülich's supercomputer JUQUEEN (BlueGene/Q) and presents its results in the journal Nature.
"Dark matter is an invisible form of matter which until now has only revealed itself through its gravitational effects. What it consists of remains a complete mystery," explains co-author Dr Andreas Ringwald, who is based at DESY and who proposed the current research. Evidence for the existence of this form of matter comes, among other things, from the astrophysical observation of galaxies, which rotate far too rapidly to be held together only by the gravitational pull of the . High-precision measurements using the European satellite "Planck" show that almost 85 percent of the entire mass of the universe consists of dark matter. All the stars, planets, nebulae and other objects in space that are made of conventional matter account for no more than 15 percent of the mass of the universe.
"The adjective 'dark' does not simply mean that it does not emit visible light," says Ringwald. "It does not appear to give off any other wavelengths either - its interaction with photons must be very weak indeed." For decades, physicists have been searching for particles of this new type of matter. What is clear is that these particles must lie beyond the Standard Model of particle physics, and while that model is extremely successful, it currently only describes the conventional 15 percent of all matter in the cosmos. From theoretically possible extensions to the Standard Model physicists not only expect a deeper understanding of the universe, but also concrete clues in what energy range it is particularly worthwhile looking for dark-matter candidates.
The unknown form of matter can either consist of comparatively few, but very heavy particles, or of a large number of light ones. The direct searches for heavy dark-matter candidates using large detectors in underground laboratories and the indirect search for them using large particle accelerators are still going on, but have not turned up any so far. A range of physical considerations make extremely light particles, dubbed axions, very promising candidates. Using clever experimental setups, it might even be possible to detect direct evidence of them. "However, to find this kind of evidence it would be extremely helpful to know what kind of mass we are looking for," emphasises theoretical physicist Ringwald. "Otherwise the search could take decades, because one would have to scan far too large a range."
The existence of axions is predicted by an extension to quantum chromodynamics (QCD), the quantum theory that governs the strong interaction, responsible for the nuclear force. The strong interaction is one of the four fundamental forces of nature alongside gravitation, electromagnetism and the weak nuclear force, which is responsible for radioactivity. "Theoretical considerations indicate that there are so-called topological quantum fluctuations in quantum chromodynamics, which ought to result in an observable violation of time reversal symmetry," explains Ringwald. This means that certain processes should differ depending on whether they are running forwards or backwards. However, no experiment has so far managed to demonstrate this effect.
The extension to quantum chromodynamics (QCD) restores the invariance of time reversals, but at the same time it predicts the existence of a very weakly interacting particle, the axion, whose properties, in particular its mass, depend on the strength of the topological quantum fluctuations. However, it takes modern supercomputers like Jülich's JUQUEEN to calculate the latter in the temperature range that is relevant in predicting the relative contribution of axions to the matter making up the universe. "On top of this, we had to develop new methods of analysis in order to achieve the required temperature range," notes Fodor who led the research.
The results show, among other things, that if axions do make up the bulk of dark matter, they should have a mass of 50 to 1500 micro-electronvolts, expressed in the customary units of particle physics, and thus be up to ten billion times lighter than electrons. This would require every cubic centimetre of the universe to contain on average ten million such ultra-lightweight particles. Dark matter is not spread out evenly in the universe, however, but forms clumps and branches of a weblike network. Because of this, our local region of the Milky Way should contain about one trillion axions per cubic centimetre.
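
Those figures can be sanity-checked with back-of-envelope arithmetic. The sketch below uses standard approximate dark-matter densities; they are not taken from the paper itself:

```python
# Back-of-envelope check of the mass and abundance figures quoted above.
# The dark-matter energy densities are standard approximate values
# (~1.3e-6 GeV/cm^3 cosmic average, ~0.3 GeV/cm^3 in our galactic region).
electron_eV = 511e3            # electron rest mass in eV
axion_eV = 50e-6               # lower end of the predicted mass range, in eV
print(electron_eV / axion_eV)  # ~1e10 -> "up to ten billion times lighter"

m = 100e-6                     # a representative axion mass in eV
cosmic_avg = 1.3e-6 * 1e9      # average dark-matter density, eV per cm^3
local = 0.3 * 1e9              # local Milky Way density, eV per cm^3
print(cosmic_avg / m)          # ~1e7  -> "ten million per cubic centimetre"
print(local / m)               # ~3e12 -> of order a trillion locally
```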
Thanks to the Jülich supercomputer, the calculations now provide physicists with a concrete range in which their search for axions is likely to be most promising. "The results we are presenting will probably lead to a race to discover these particles," says Fodor. Their discovery would not only solve the problem of dark matter in the universe, but at the same time answer the question why the strong interaction is so surprisingly symmetrical with respect to time reversal. The scientists expect that it will be possible within the next few years to either confirm or rule out the existence of axions experimentally.
The Institute for Nuclear Research of the Hungarian Academy of Sciences in Debrecen, the Lendület Lattice Gauge Theory Research Group at the Eötvös University, the University of Zaragoza in Spain, and the Max Planck Institute for Physics in Munich were also involved in the research.
Technique reveals the basis for machine-learning systems' decisions
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have devised a way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions. Credit: Christine Daniloff/MIT
In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.
But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it's sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.
At the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.
"In real-world applications, sometimes people really want to know why the model makes the predictions it does," says Tao Lei, an MIT graduate student in and computer science and first author on the new paper. "One major reason that doctors don't trust machine-learning methods is that there's no evidence."
"It's not only the medical domain," adds Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Lei's thesis advisor. "It's in any domain where the cost of making the wrong prediction is very high. You need to justify why you did it."
"There's a broader aspect to this work, as well," says Tommi Jaakkola, an MIT professor of electrical engineering and computer science and the third coauthor on the paper. "You may not want to just verify that the model is making the prediction in the right way; you might also want to exert some influence in terms of the types of predictions that it should make. How does a layperson communicate with a complex model that's trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model."
Virtual brains
Neural networks are so called because they mimic—approximately—the structure of the brain. They are composed of a large number of processing nodes that, like individual neurons, are capable of only very simple computations but are connected to each other in dense networks.
In a process referred to as "deep learning," training data is fed to a network's input nodes, which modify it and feed it to other nodes, which modify it and feed it to still other nodes, and so on. The values stored in the network's output nodes are then correlated with the classification category that the network is trying to learn—such as the objects in an image, or the topic of an essay.
Over the course of the network's training, the operations performed by the individual nodes are continuously modified to yield consistently good results across the whole set of training examples. By the end of the process, the computer scientists who programmed the network often have no idea what the nodes' settings are. Even if they do, it can be very hard to translate that low-level information back into an intelligible description of the system's decision-making process.
In the new paper, Lei, Barzilay, and Jaakkola specifically address neural nets trained on textual data. To enable interpretation of a neural net's decisions, the CSAIL researchers divide the net into two modules. The first module extracts segments of text from the training data, and the segments are scored according to their length and their coherence: The shorter the segment, and the more of it that is drawn from strings of consecutive words, the higher its score.
The segments selected by the first module are then passed to the second module, which performs the prediction or classification task. The modules are trained together, and the goal of training is to maximize both the score of the extracted segments and the accuracy of prediction or classification.
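
The scoring the researchers describe can be written down as a simple cost on the binary selection mask: longer selections and selections that jump in and out of the text cost more. The sketch below is a toy rendering of that idea with made-up weights, not the paper's exact training objective:

```python
# Toy version of the rationale-selection cost described above: a 0/1 mask z
# marks the extracted words, and the cost penalizes long or fragmented
# selections. The weights are made up for illustration.
import numpy as np

def rationale_cost(z, lam_len=1.0, lam_coherence=2.0):
    """z: 0/1 array, one entry per word; lower = shorter, more contiguous."""
    length = z.sum()                    # penalize long selections
    breaks = np.abs(np.diff(z)).sum()   # penalize jumping in and out of the text
    return lam_len * length + lam_coherence * breaks

contiguous = np.array([0, 0, 1, 1, 1, 0, 0])
scattered  = np.array([1, 0, 1, 0, 1, 0, 0])
print(rationale_cost(contiguous))   # 3 + 2*2 = 7.0
print(rationale_cost(scattered))    # 3 + 2*5 = 13.0
```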
One of the data sets on which the researchers tested their system is a group of reviews from a website where users evaluate different beers. The data set includes the raw text of the reviews and the corresponding ratings, using a five-star system, on each of three attributes: aroma, palate, and appearance.
What makes the data attractive to researchers is that it's also been annotated by hand, to indicate which sentences in the reviews correspond to which scores. For example, a review might consist of eight or nine sentences, and the annotator might have highlighted those that refer to the beer's "tan-colored head about half an inch thick," "signature Guinness smells," and "lack of carbonation." Each sentence is correlated with a different attribute rating.
Validation
As such, the data set provides an excellent test of the CSAIL researchers' system. If the first module has extracted those three phrases, and the second module has correlated them with the correct ratings, then the system has identified the same basis for judgment that the human annotator did.
In experiments, the system's agreement with the human annotations was 96 percent and 95 percent, respectively, for ratings of appearance and aroma, and 80 percent for the more nebulous concept of palate.
In the paper, the researchers also report testing their system on a database of free-form technical questions and answers, where the task is to determine whether a given question has been answered previously.
In unpublished work, they've applied it to thousands of pathology reports on breast biopsies, where it has learned to extract text explaining the bases for the pathologists' diagnoses. They're even using it to analyze mammograms, where the first module extracts sections of images rather than segments of text.
