Violating law of energy conservation in the early universe may explain dark energy



This is the "South Pillar" region of the star-forming region called the Carina Nebula. Like cracking open a watermelon and finding its seeds, the infrared telescope "busted open" this murky cloud to reveal star embryos tucked inside finger-like pillars of thick dust. Credit: NASA
Physicists have proposed that the violations of energy conservation in the early universe, as predicted by certain modified theories in quantum mechanics and quantum gravity, may explain the cosmological constant problem, which is sometimes referred to as "the worst theoretical prediction in the history of physics."
The physicists, Thibaut Josset and Alejandro Perez at the University of Aix-Marseille, France, and Daniel Sudarsky at the National Autonomous University of Mexico, have published a paper on their proposal in a recent issue of Physical Review Letters.
"The main achievement of the work was the unexpected relation between two apparently very distinct issues, namely the accelerated expansion of the universe and microscopic physics," Josset told Phys.org. "This offers a fresh look at the cosmological constant problem, which is still far from being solved."
Einstein originally proposed the concept of the cosmological constant in 1917 to modify his theory of general relativity in order to prevent the universe from expanding, since at the time the universe was considered to be static.
Now that modern observations show that the universe is expanding at an accelerating rate, the cosmological constant today can be thought of as the simplest form of dark energy, offering a way to account for current observations.
However, there is a huge discrepancy—up to 120 orders of magnitude—between the large theoretical predicted value of the cosmological constant and the tiny observed value. To explain this disagreement, some research has suggested that the cosmological constant may be an entirely new constant of nature that must be measured more precisely, while another possibility is that the underlying mechanism assumed by theory is incorrect. The new study falls into the second line of thought, suggesting that scientists still do not fully understand the root causes of the cosmological constant.
The basic idea of the new paper is that violations of energy conservation in the early universe could have been so small that they would have negligible effects at local scales and remain inaccessible to modern experiments, yet at the same time these violations could have made significant contributions to the present value of the cosmological constant.
To most people, the idea that conservation of energy is violated goes against everything they learned about the most fundamental laws of physics. But on the cosmological scale, conservation of energy is not as steadfast a law as it is on smaller scales. In this study, the physicists specifically investigated two theories in which violations of energy conservation naturally arise.
The first scenario of violations involves modifications to quantum theory that have previously been proposed to investigate phenomena such as the creation and evaporation of black holes, and which also appear in interpretations of quantum mechanics in which the wavefunction undergoes spontaneous collapse. In these cases, energy is created in an amount that is proportional to the mass of the collapsing object.
Violations of energy conservation also arise in some approaches to quantum gravity in which spacetime is considered to be granular due to the fundamental limit of length (the Planck length, which is on the order of 10-35 m). This spacetime discreteness could have led to either an increase or decrease in energy that may have begun contributing to the cosmological constant starting when photons decoupled from electrons in the early universe, during the period known as recombination.
As the researchers explain, their proposal relies on a modification to general relativity called unimodular gravity, first proposed by Einstein in 1919.
"Energy from matter components can be ceded to the gravitational field, and this 'loss of energy' will behave as a cosmological constant—it will not be diluted by later expansion of the universe," Josset said. "Therefore a tiny loss or creation of energy in the remote past may have significant consequences today on large scale."
Whatever the source of the energy conservation violation, the important result is that the energy that was created or lost affected the cosmological constant to a greater and greater extent as time went by, while the effects on matter decreased over time due to the expansion of the universe.
Another way to put it, as the physicists explain in their paper, is that the cosmological constant can be thought of as a record of the energy non-conservation during the history of the universe.
Currently there is no way to tell whether the violations of energy conservation investigated here truly did affect the cosmological constant, but the physicists plan to further investigate the possibility in the future.
"Our proposal is very general and any violation of energy conservation is expected to contribute to an effective cosmological constant," Josset said. "This could allow to set new constraints on phenomenological models beyond standard .
"On the other hand, direct evidence that dark energy is sourced by energy non-conservation seems largely out-of-reach, as we have access to the value of lambda [the ] today and constraints on its evolution at late time only."

Credit: Lisa Zyga  
 

How To Rename Multiple Files at One Time in Windows 10?

In the Windows 10 File Explorer, the process of renaming files in large batches is simple, but for many users, myself included, the feature is not well known.
In this Quick Tip article I want to share with you how easy it is to use this capability of File Explorer.


 Process :-  

Step 1 : Select the images/files you want to rename
In Windows 10 there is always more than one way to accomplish most tasks, so once you have File Explorer open to the directory of files you want to rename, you can use the keyboard shortcut CTRL + A to select all of the files, or use the Select All button on the Home view of File Explorer. Alternatively, select only those images you want to rename at once.

When you have selected the images/files that you want to rename as a group, move to Step 2.
Step 2 : Rename the files
Renaming files in a batch is done the same way you rename a single file.
Once all of the images/files you want to rename are selected, right click on the first image/file and select Rename from the context menu.

You will then have an editable name field for the first image/file in the sequence - just give it whatever name you choose for the group of images/files. Hit the Enter key once you have the new name typed in.

Now you will see all the files with the new name followed by a sequential number in parentheses. You have now successfully renamed your files in one batch.

Here is one last interesting thing about this feature: if you start the rename on any other image/file in the collection, that file gets the first sequential number, and numbering continues from that image/file in order until it hits the end of the list. At that point it wraps back up to the first one and continues renaming until it reaches the file/image just before the one you started with.
So a key aspect of this process is to make sure you have the files in the order you want them numbered in and start with the first image/file in the directory.
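If you prefer to script the same operation, here is a minimal Python sketch (my illustration, not part of the original tip) that mimics File Explorer's behaviour: every selected file gets the same base name plus a sequential number in parentheses. The folder path and base name below are made-up examples.

import os

folder = r"C:\Users\you\Pictures\holiday"   # hypothetical folder of files to rename
base_name = "Beach Trip"                    # hypothetical new group name

# sort so numbering follows the current file order, as in File Explorer
files = sorted(f for f in os.listdir(folder)
               if os.path.isfile(os.path.join(folder, f)))

for index, old_name in enumerate(files, start=1):
    ext = os.path.splitext(old_name)[1]
    new_name = f"{base_name} ({index}){ext}"
    os.rename(os.path.join(folder, old_name), os.path.join(folder, new_name))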


Screenshot :-

Blitab Technology: creating a tablet for the blind and visually impaired



Blitab, a tablet with a Braille interface, looks like a promising step up for blind and low vision people who want to be part of the educational, working and entertainment worlds of digital life.
In a video, Blitab Technology founder Kristina Tsvetanova said the idea for such a tablet came to her during her studies as an industrial engineer. At the time, a blind colleague of hers asked her to sign him up for an online course, and a question nagged her: how could technology help him better?
Worldwide, she said, there are more than 285 million blind and visually impaired people.
She was aware that in general blind and low vision people were coping with old, bulky technology, contributing to low literacy rates among blind children. She and her team have been wanting to change that.
There was ample room for improvement. The conventional interfaces for the blind, she said, have been slow and expensive; a Braille keyboard can range from about $5,000 to $8,000. Also, she said, they are limited in what the blind person can read, just a few words at a time. Imagine, she said, reading Moby Dick five words at a time.
They have engineered a tablet with a 14-line Braille display on the top and a touch screen on the bottom.


Part of their technology involves a high performance membrane, and their press statement said the tablet uses smart micro fluids to develop small physical bubbles instead of a screen display.
They have produced a tactile tablet, she said, where people with sight loss can learn, work and play using that device.
The user can control the tablet with voice-over if the person wants to listen to an ebook or by pressing one button, dots will be activated on the screen and the surface of the screen will change.
Romain Dillet, in TechCrunch: "The magic happens when you press the button on the side of the device. The top half of the device turns into a Braille reader. You can load a document, a web page—anything really—and then read the content using Braille."
Tsvetanova told Dillet, "We're not excluding voice over; we combine both of these things." She said they offer both "the tactile experience and the voice over experience."
Rachel Metz reported in MIT Technology Review: "The Blitab's Braille display includes 14 rows, each made up of 23 cells with six dots per cell. Every cell can present one letter of the Braille alphabet. Underneath the grid are numerous layers of fluids and a special kind of membrane," she wrote.

Blitab Technology develops tablet for the blind and visually impaired
Credit: Blitab
At heart, it's an Android tablet, Dillet said, "so it has Wi-Fi and Bluetooth and can run all sorts of Android apps."
Metz said that with eight hours of use per day, it's estimated to last for five days on one battery charge.
The team has set the price of the device at $500.
How they will proceed: first, she said, they will sell directly from their website, then scale through global distributors, and distribute to the less developed world.
What's next? Dillet said in the Jan. 6 article that "the team of 10 plans to ship the device in six months with pre-orders starting later this month."
Blitab Technology recently took first place in the Digital Wellbeing category of the 2016 EIT Digital Challenge. EIT Digital is described as a European open innovation organization. They seek to foster digital innovation and entrepreneurial talent.


Credit: Nancy Owano
Nokia sues Apple for patent infringement



Nokia announced Wednesday it is suing Apple in German and US courts for patent infringement, claiming the US tech giant was using Nokia technology in "many" products without paying for it.
Finnish Nokia, once the world's top mobile phone maker, said the two companies had signed a licensing agreement in 2011, and since then "Apple has declined subsequent offers made by Nokia to license other of its patented inventions which are used by many of Apple's products."
"After several years of negotiations trying to reach agreement to cover Apple's use of these patents, we are now taking action to defend our rights," Ilkka Rahnasto, head of Nokia's patent business, said in a statement.
The complaints, filed in three German cities and a district court in Texas, concern 32 patents for innovations related to displays, user interface, software, antennae, chipsets and video coding. Nokia said it was preparing further legal action elsewhere.
Nokia was the world's leading mobile phone maker from 1998 until 2011 when it bet on Microsoft's Windows mobile platform, which proved to be a flop. Analysts say the company failed to grasp the growing importance of smartphone apps compared to hardware.
It sold its unprofitable handset unit in 2014 for some $7.2 billion to Microsoft, which dropped the Nokia name from its Lumia smartphone handsets.
Meanwhile Nokia has concentrated on developing its mobile network equipment business by acquiring its French-American rival Alcatel-Lucent.
Including its 2013 full acquisition of joint venture Nokia Siemens Networks, Nokia said the three companies united represent more than 115 billion euros of R&D investment, with a massive portfolio of tens of thousands of patents.
The 2011 licensing deal followed years of clashes with Apple, which has also sparred with main rival Samsung over patent claims.
At the time, Apple cut the deal to settle 46 separate complaints Nokia had lodged against it for violation of intellectual property.
Best weather satellite ever built is launched into space



Best weather satellite ever built rockets into space
This photo provided by United Launch Alliance shows a United Launch Alliance (ULA) Atlas V rocket carrying GOES-R spacecraft for NASA and NOAA lifting off from Space Launch Complex-41 at 6:42 p.m. EST at Cape Canaveral Air Force Station, Fla., Saturday, Nov. 19, 2016. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (United Launch Alliance via AP)  
The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives.
This new GOES-R spacecraft will track U.S. weather as never before: hurricanes, tornadoes, flooding, wildfires, lightning storms, even solar flares. Indeed, about 50 TV meteorologists from around the country converged on the launch site—including NBC's Al Roker—along with 8,000 space program workers and guests.
"What's so exciting is that we're going to be getting more data, more often, much more detailed, higher resolution," Roker said. In the case of tornadoes, "if we can give people another 10, 15, 20 minutes, we're talking about lives being saved."
Think superhero speed and accuracy for forecasting. Super high-definition TV, versus black-and-white.
"Really a quantum leap above any NOAA has ever flown," said Stephen Volz, the National Oceanic and Atmospheric Administration's director of satellites.
"For the American public, that will mean faster, more accurate weather forecasts and warnings," Volz said earlier in the week. "That also will mean more lives saved and better environmental intelligence" for government officials responsible for hurricane and other evacuations.
Best weather satellite ever built rockets into space
Cell phones light up the beaches of Cape Canaveral and Cocoa Beach, Fla., north of the Cocoa Beach Pier as spectators watch the launch of the NOAA GOES-R weather satellite, Saturday, Nov. 19, 2016. It was launched from Launch Complex 41 at Cape Canaveral Air Force Station on a ULA Atlas V rocket. (Malcolm Denemark/Florida Today via AP)
Airline passengers also stand to benefit, as do rocket launch teams. Improved forecasting will help pilots avoid bad weather and help rocket scientists know when to call off a launch.
NASA declared success 3 1/2 hours after liftoff, following separation from the upper stage.
The first in a series of four high-tech satellites, GOES-R hitched a ride on an unmanned Atlas V rocket, delayed an hour by rocket and other problems. NOAA teamed up with NASA for the mission.
The satellite—valued by NOAA at $1 billion—is aiming for a 22,300-mile-high equatorial orbit. There, it will join three aging spacecraft with 40-year-old technology, and become known as GOES-16. After months of testing, this newest satellite will take over for one of the older ones. The second satellite in the series will follow in 2018. All told, the series should stretch to 2036.
Best weather satellite ever built rockets into space
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
GOES stands for Geostationary Operational Environmental Satellite. The first was launched in 1975.
GOES-R's premier imager—one of six science instruments—will offer three times as many channels as the existing system, four times the resolution and five times the scan speed, said NOAA program director Greg Mandt. A similar imager is also flying on a Japanese weather satellite.
Typically, it will churn out full images of the Western Hemisphere every 15 minutes and the continental United States every five minutes. Specific storm regions will be updated every 30 seconds.
Forecasters will get pictures "like they've never seen before," Mandt promised.
Best weather satellite ever built rockets into space
An Atlas V rocket lifts off from Complex 41 at Cape Canaveral Air Force Station, in Fla., Saturday evening, Nov. 19, 2016. The rocket is carrying the GOES-R weather satellite. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (Craig Bailey/Florida Today via AP)
A first-of-its-kind lightning mapper, meanwhile, will take 500 snapshots a second.
This next-generation GOES program—$11 billion in all—includes four satellites, an extensive land system of satellite dishes and other equipment, and new methods for crunching the massive, nonstop stream of expected data.
Hurricane Matthew, interestingly enough, delayed the launch by a couple weeks. As the hurricane bore down on Florida in early October, launch preps were put on hold. Matthew stayed far enough offshore to cause minimal damage to Cape Canaveral, despite some early forecasts that suggested a direct strike.
Best weather satellite ever built rockets into space
This photo provided by United Launch Alliance shows a United Launch Alliance (ULA) Atlas V rocket carrying GOES-R spacecraft for NASA and NOAA lifting off from Space Launch Complex-41 at 6:42 p.m. EST at Cape Canaveral Air Force Station, Fla., Saturday, Nov. 19, 2016. The most advanced weather satellite ever built rocketed into space Saturday night, part of an $11 billion effort to revolutionize forecasting and save lives. (United Launch Alliance via AP)
Credit: Marcia Dunn
Use drones and insect biobots to map disaster areas



Tech would use drones and insect biobots to map disaster areas
Credit: North Carolina State University  
Researchers at North Carolina State University have developed a combination of software and hardware that will allow them to use unmanned aerial vehicles (UAVs) and insect cyborgs, or biobots, to map large, unfamiliar areas – such as collapsed buildings after a disaster.
"The idea would be to release a swarm of sensor-equipped biobots – such as remotely controlled cockroaches – into a collapsed building or other dangerous, unmapped area," says Edgar Lobaton, an assistant professor of electrical and computer engineering at NC State and co-author of two papers describing the work.
"Using remote-control technology, we would restrict the movement of the biobots to a defined area," Lobaton says. "That area would be defined by proximity to a beacon on a UAV. For example, the biobots may be prevented from going more than 20 meters from the UAV."
The biobots would be allowed to move freely within a defined area and would signal researchers via radio waves whenever they got close to each other. Custom software would then use an algorithm to translate the biobot sensor data into a rough map of the unknown environment.
Once the program receives enough data to map the defined area, the UAV moves forward to hover over an adjacent, unexplored section. The biobots move with it, and the mapping process is repeated. The software program then stitches the new map to the previous one. This can be repeated until the entire region or structure has been mapped; that map could then be used by first responders or other authorities.
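To make the idea concrete, here is a small hedged Python sketch (my illustration, not the NC State software): pairwise "we met" reports from biobots confined near a UAV beacon are relaxed into a rough local layout, and local patches are stitched together by offsetting each one with its beacon's position. All names and numeric values are assumptions.

import numpy as np

def layout_from_encounters(n_bots, encounters, radius=20.0, iters=200):
    """Rough spring-model relaxation: biobots that reported encounters
    are pulled toward ~1 m separation inside the beacon's radius."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-radius, radius, size=(n_bots, 2))
    for _ in range(iters):
        for i, j in encounters:
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta) + 1e-9
            step = 0.05 * (dist - 1.0) * delta / dist
            pos[i] += step
            pos[j] -= step
        pos = np.clip(pos, -radius, radius)   # biobots cannot leave the beacon's range
    return pos

def stitch(patches):
    """Shift each local patch by its beacon position and merge into one global cloud."""
    return np.vstack([local + np.asarray(beacon) for beacon, local in patches])

# two hypothetical beacon hover points, each with its own encounter reports
patch_a = ((0.0, 0.0),  layout_from_encounters(5, [(0, 1), (1, 2), (3, 4)]))
patch_b = ((20.0, 0.0), layout_from_encounters(5, [(0, 2), (2, 4)]))
print(stitch([patch_a, patch_b]).shape)   # (10, 2): a rough combined point cloud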
"This has utility for areas – like collapsed buildings – where GPS can't be used," Lobaton says. "A strong radio signal from the UAV could penetrate to a certain extent into a collapsed building, keeping the biobot swarm contained. And as long as we can get a signal from any part of the swarm, we are able to retrieve data on what the rest of the swarm is doing. Based on our experimental data, we know you're going to lose track of a few individuals, but that shouldn't prevent you from collecting enough data for mapping."
Co-lead author Alper Bozkurt, an associate professor of electrical and computer engineering at NC State, has previously developed functional cockroach biobots. However, to test their new mapping technology, the research team relied on inch-and-a-half-long robots that simulate cockroach behavior.
In their experiment, researchers released these robots into a maze-like space, with the effect of the UAV beacon emulated using an overhead camera and a physical boundary attached to a moving cart. The cart was moved as the robots mapped the area.
"We had previously developed proof-of-concept software that allowed us to map small areas with biobots, but this work allows us to map much larger areas and to stitch those maps together into a comprehensive overview," Lobaton says. "It would be of much more practical use for helping to locate survivors after a disaster, finding a safe way to reach survivors, or for helping responders determine how structurally safe a building may be.
"The next step is to replicate these experiments using biobots, which we're excited about."
An article on the framework for developing local maps and stitching them together, "A Framework for Mapping with Biobotic Insect Networks: From Local to Global Maps," is published in Robotics and Autonomous Systems. An article on the theory of mapping based on the proximity of mobile sensors to each other, "Geometric Learning and Topological Inference with Biobotic Networks," is published in IEEE Transactions on Signal and Information Processing over Networks.


Credit: Matt Shipman
How machine learning advances artificial intelligence



Computers that learn for themselves are with us now. As they become more common in 'high-stakes' applications like robotic surgery, terrorism detection and driverless cars, researchers ask what can be done to make sure we can trust them.
There would always be a first death in a driverless car, and it happened in May 2016. Joshua Brown had engaged the autopilot system in his Tesla when a tractor-trailer drove across the road in front of him. It seems that neither he nor the sensors in the autopilot noticed the white-sided truck against a brightly lit sky, with tragic results.
Of course many people die in car crashes every day – in the USA there is one fatality every 94 million miles, and according to Tesla this was the first known fatality in over 130 million miles of driving with activated autopilot. In fact, given that most road fatalities are the result of human error, it has been said that autonomous cars should make travelling safer.
Even so, the tragedy raised a pertinent question: how much do we understand – and trust – the computers in an autonomous vehicle? Or, in fact, in any machine that has been taught to carry out an activity that a human would do?
We are now in the era of machine learning. Machines can be trained to recognise certain patterns in their environment and to respond appropriately. It happens every time your digital camera detects a face and throws a box around it to focus, or the personal assistant on your smartphone answers a question, or the adverts match your interests when you search online.
Machine learning is a way to program computers to learn from experience and improve their performance in a way that resembles how humans and animals learn tasks. As machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important, says Zoubin Ghahramani, Professor of Information Engineering in Cambridge's Department of Engineering.
Faced with a life or death decision, would a driverless car decide to hit pedestrians, or avoid them and risk the lives of its occupants? Providing a medical diagnosis, could a machine be wildly inaccurate because it has based its opinion on a too-small sample size? In making financial transactions, should a computer explain how robust its assessment of the volatility of the stock markets is?
"Machines can now achieve near-human abilities at many cognitive tasks even if confronted with a situation they have never seen before, or an incomplete set of data," says Ghahramani. "But what is going on inside the 'black box'? If the processes by which decisions were being made were more transparent, then trust would be less of an issue."
His team builds the algorithms that lie at the heart of these technologies (the "invisible bit" as he refers to it). Trust and transparency are important themes in their work: "We really view the whole mathematics of machine learning as sitting inside a framework of understanding uncertainty. Before you see data – whether you are a baby learning a language or a scientist analysing some data – you start with a lot of uncertainty and then as you have more and more data you have more and more certainty.
"When machines make decisions, we want them to be clear on what stage they have reached in this process. And when they are unsure, we want them to tell us."
One method is to build in an internal self-evaluation or calibration stage so that the machine can test its own certainty, and report back.
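As a hedged illustration of what such a self-evaluation stage might look like (my sketch, not the Cambridge group's code), the snippet below compares the confidence a classifier reports with the accuracy it actually achieves on held-out examples, the kind of mismatch a calibrated system should report back.

import numpy as np

def reliability(confidences, correct, n_bins=5):
    """Compare stated confidence with observed accuracy in each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    report = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            report.append((lo, hi, confidences[mask].mean(), correct[mask].mean()))
    return report   # (bin low, bin high, mean claimed confidence, observed accuracy)

# toy usage with made-up numbers: the model claims ~0.9 confidence in the top bin
# but is right only ~0.6 of the time there, which a self-evaluation stage should flag
conf = [0.92, 0.88, 0.91, 0.55, 0.60, 0.87, 0.93]
hit  = [1, 0, 1, 1, 0, 0, 1]
for lo, hi, c, a in reliability(conf, hit):
    print(f"confidence {lo:.1f}-{hi:.1f}: claimed {c:.2f}, observed accuracy {a:.2f}")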
Two years ago, Ghahramani's group launched the Automatic Statistician with funding from Google. The tool helps scientists analyse datasets for statistically significant patterns and, crucially, it also provides a report to explain how sure it is about its predictions.
"The difficulty with machine learning systems is you don't really know what's going on inside – and the answers they provide are not contextualised, like a human would do. The Automatic Statistician explains what it's doing, in a human-understandable form."
Where transparency becomes especially relevant is in applications like medical diagnoses, where understanding the provenance of how a decision is made is necessary to trust it.
Dr Adrian Weller, who works with Ghahramani, highlights the difficulty: "A particular issue with new (AI) systems that learn or evolve is that their processes do not clearly map to rational decision-making pathways that are easy for humans to understand." His research aims both at making these pathways more transparent, sometimes through visualisation, and at looking at what happens when systems are used in real-world scenarios that extend beyond their training environments – an increasingly common occurrence.
"We would like AI systems to monitor their situation dynamically, detect whether there has been a change in their environment and – if they can no longer work reliably – then provide an alert and perhaps shift to a safety mode." A , for instance, might decide that a foggy night in heavy traffic requires a human driver to take control.
Weller's theme of trust and transparency forms just one of the projects at the newly launched £10 million Leverhulme Centre for the Future of Intelligence (CFI). Ghahramani, who is Deputy Director of the Centre, explains: "It's important to understand how developing technologies can help rather than replace humans. Over the coming years, philosophers, social scientists, cognitive scientists and computer scientists will help guide the future of the technology and study its implications – both the concerns and the benefits to society."
CFI brings together four of the world's leading universities (Cambridge, Oxford, Berkeley and Imperial College, London) to explore the implications of AI for human civilisation. Together, an interdisciplinary community of researchers will work closely with policy-makers and industry investigating topics such as the regulation of autonomous weaponry, and the implications of AI for democracy.
Ghahramani describes the excitement felt across the field: "It's exploding in importance. It used to be an area of research that was very academic – but in the past five years people have realised these methods are incredibly useful across a wide range of societally important areas.
"We are awash with data, we have increasing computing power and we will see more and more applications that make predictions in real time. And as we see an escalation in what machines can do, they will challenge our notions of intelligence and make it all the more important that we have the means to trust what they tell us."
Artificial intelligence has the power to eradicate poverty and disease or hasten the end of human civilisation as we know it – according to a speech delivered by Professor Stephen Hawking on 19 October 2016 at the launch of the Centre for the Future of Intelligence.
Internet bot investigates creativity



A portrait of Benjamin Franklin manipulated by Smilevector. Credit: Smithsonian National Portrait Gallery.
Tom White, senior lecturer in Victoria's School of Design, has created Smilevector—a bot that examines images of people, then adds or removes smiles to their faces.
"It has examined hundreds of thousands of faces to learn the difference between images, by finding relations and reapplying them," says Mr White.
"When the computer finds an image it looks to identify if the person is smiling or not. If there isn't a smile, it adds one, but if there is a smile then it takes it away.
"It represents these changes as an animation, which moves parts of the face around, including crinkling and widening the eyes."
The bot can be used as a form of puppetry, says Mr White.
"These systems are domain independent, meaning you can do it with anything—from manipulating images of faces to shoes to chairs. It's really fun and interesting to work in this space. There are lots of ideas to play around with."
The creation of the bot was sparked by Mr White's research into creative intelligence.
"Machine learning and artificial intelligence are starting to have implications for people in creative industries. Some of these implications have to do with the computer's capabilities, like completing mundane tasks so that people can complete higher level tasks," says Mr White.
"I'm interested in exploring what these systems are capable of doing but also how it changes what we think of as being creative is in the first place. Once you have a system that can automate processes, is that still a creative act? If you can make something a completely push of the button operation, does its meaning change?"
Mr White says people have traditionally used creative tools by giving commands.
"However, I think we're moving toward more of a collaboration with computers—where there's an intelligent system that's making suggestions and helping steer the process.
"A lot will happen in this space in the next five to ten years, and now is the right time to progress. I also hope these techniques influence teaching over the long term as they become more mainstream. It is something that students could work with me on at Victoria University as part of our Master of Design Innovation or our new Master of Fine Arts (Creative Practice)."
The paper Sampling Generative Networks describing this research is available as an arXiv preprint. The research will also be presented as part of the Neural Information Processing Systems conference in Spain and Generative Art conference in Italy in December.

List of speech editing software

(geekkeep) - Voice editing software has become a tool that many people work with. The military, hackers, hosts, animators, and an ever increasing list of people have come to rely on it for achieving their aims.

Animation studios have come to use these applications to produce character lines without hiring voice artists (this has become beneficial to rising studios). In this age, the security of some facilities uses biometric scans, and speech recognition units are pretty common, which brings a downside: agents, burglars and military infiltrators rely on speech editing software to bypass these systems, gaining unauthorized access.

You must have seen the ever rising artificial intelligence struggle in the tech market and hubs. AIs like DeepMind, Cortana, Cleverbot, Virtual Assistant Denise, Verbots, Madoma Virtual Assistant, DesktopMates, Braina and Syn Virtual Assistant use this kind of speech software to produce the voices of these assistants.


A list of speech and audio editing software:



WavePad Audio Editing Software

WavePad is a full-featured professional audio and music editor for Windows and Mac. It lets you record and edit music, voice and other audio recordings. When editing audio files, you can cut, copy and paste parts of recordings, and then add effects like echo, amplification and noise reduction. WavePad works as a wav or mp3 editor, but it also supports a number of other file formats including vox, gsm, wma, real audio, au, aif, flac, ogg, and more.



Free Audio Editor

Free Audio Editor can digitize sound recordings of your rare music cassette tapes, vinyl LPs and videos, creating standard digital sound files. Timer and input level triggered recording are included. There is a button to activate the system Windows Mixer without visiting the control panel. The recording can be directly loaded into the waveform window for further perfection.

You can edit audio using the traditional Waveform View or the frequency-based Spectral Display that makes it easy to isolate and remove unwanted noise. Intuitive cut/copy/paste/trim/mute and more actions can be performed easily. The selection tools let editing operations be performed with millisecond precision.

Enhance your audio with more than 30 native signal and effects processing engines, including compression, EQ, fade in/out, delay, chorus, reverb, time stretching, pitch shifting and more. This significantly increases your audio processing capabilities. The real-time preview enables you to hear the results before mixing down to a single file.

This free audio editor supports a large number of input formats including MP3, WMA, WAV, AAC, FLAC, OGG, APE, AC3, AIFF, MP2, MPC, MPGA, M4A, CDA, VOX, RA, RAM, ARW, AIF, AIFC, TTA, G721, G723, G726 and many more as source formats. Any audio file can be saved to the most popular audio formats like MP3, WMA, WAV, OGG, etc. Furthermore, you can control the output quality by adjusting the parameters, and the software also provides many presets with different combinations of settings for playback on all kinds of software applications and devices.


Audacity

Audacity can record live audio through a microphone or mixer, or digitize recordings from other media. With some sound cards, and on any recent version of Windows, Audacity can also capture streaming audio.
  • Device Toolbar manages multiple recording and playback devices.
  • Level meters can monitor volume levels before, during and after recording. Clipping can be displayed in the waveform or in a label track.
  • Record from microphone, line input, USB/Firewire devices and others.
  • Record computer playback on Windows Vista and later by choosing “Windows WASAPI” host in Device Toolbar then a “loopback” input.
  • Timer Record and Sound Activated Recording features.
  • Dub over existing tracks to create multi-track recordings.
  • Record at very low latencies on supported devices on Linux by using Audacity with JACK.
  • Record at sample rates up to 192,000 Hz (subject to appropriate hardware and host selection). Up to 384,000 Hz is supported for appropriate high-resolution devices on Windows (using WASAPI), Mac OS X, and Linux.
  • Record at 24-bit depth on Windows (using Windows WASAPI host), Mac OS X or Linux (using ALSA or JACK host).
  • Record multiple channels at once (subject to appropriate hardware).




Power Sound Editor

Power Sound Editor Free is a visual audio editing and recording software solution, which supports many advanced and powerful operations with audio data.
You can use Power Sound Editor Free to record your own music, voice, or other audio files, edit it, mix it with other audio or musical parts, add effects like Reverb, Chorus, and Echo, and burn it on a CD, post it on the World Wide Web or e-mail it.

mp3DirectCut

mp3DirectCut is a fast and extensive audio editor and recorder for compressed mp3. You can directly cut, copy, paste or change the volume with no need to decompress your files for audio editing. Using Cue sheets, pause detection or Auto cue you can easily divide long files.

Music Editor Free

Music Editor Free (MEF) is a multi-award winning music editor software tool. MEF helps you to record and edit music and sounds. It lets you make and edit music, voice and other audio recordings. When editing audio files you can cut, copy and paste parts of recordings and, if required, add effects like echo, amplification and noise reduction.

Wavosaur

Wavosaur is a free sound editor, audio editor, wav editor software for editing, processing and recording sounds, wav and mp3 files. Wavosaur has all the features to edit audio (cut, copy, paste, etc.) produce music loops, analyze, record, batch convert. Wavosaur supports VST plugins, ASIO driver, multichannel wav files, real time effect processing. The program has no installer and doesn’t write in the registry. Use it as a free mp3 editor, for mastering, sound design.

Traverso DAW

Traverso DAW is a GPL licensed, cross platform multitrack audio recording and editing suite, with an innovative and easy to master User Interface. It’s suited for both the professional and home user, who needs a robust and solid DAW. Adding and removal of effects plugins, moving Audio Clips and creating new Tracks during playback are all perfectly safe, giving you instant feedback on your work!

Ardour

Ardour is a digital audio workstation. You can use it to record, edit and mix multi-track audio. You can produce your own CDs, mix video soundtracks, or just experiment with new ideas about music and sound. Ardour capabilities include: multichannel recording, non-destructive editing with unlimited undo/redo, full automation support, a powerful mixer, unlimited tracks/busses/plugins, timecode synchronization, and hardware control from surfaces like the Mackie Control Universal. If you’ve been looking for a tool similar to ProTools, Nuendo, Pyramix, or Sequoia, you might have found it.

Rosegarden

Rosegarden is a well-rounded audio and MIDI sequencer, score editor, and general-purpose music composition and editing environment. Rosegarden is an easy-to-learn, attractive application that runs on Linux, ideal for composers, musicians, music students, and small studio or home recording environments.

Hydrogen

Hydrogen is an advanced drum machine for GNU/Linux. Its main goal is to bring professional yet simple and intuitive pattern-based drum programming.

Sound Engine

SoundEngine is the best tool for personal use, because it enables you to easily edit wave data while offering many functions required for a mastering process.

Expstudio Audio Editor

Expstudio Audio Editor is a visual music file editor that has many different options and a multiple functionality to edit your music files like editing text files. With a given audio data it can perform many different operations such as displaying a waveform image of an audio file, filtering, applying various audio effects, format conversion and more.

DJ Audio Editor

DJ Audio Editor is easy-to-use and well-organized audio application which allows you to perform various operations with audio data. You can create and edit audio files professionally, also displaying a waveform image of audio file makes your work faster.

Eisenkraut

Eisenkraut is a cross-platform audio file editor. It requires Java 1.4+ and SuperCollider 3. It supports multi-channel and multi-mono files and floating-point encoding. An OSC scripting interface and experimental sonagramme functionality are provided.

FREE WAVE MP3 Editor

Free Wave MP3 Editor is a sound editor program for Windows. This software lets you make and edit voice and other audio recordings. You can cut, copy and paste parts of recording and, if required, add effects like echo, amplification and noise reduction.

Kangas Sound Editor

Fun Kangaroo-themed program that allows the user to create music and sound effects. It uses a system of frequency ratios for pitch control, rather than conventional music notation and equal temperament. It allows instruments, both musical and percussion, to be created.

Ecawave

Ecawave is a simple graphical audio file editor. The user-interface is based on Qt libraries, while almost all audio functionality is taken directly from ecasound libraries. As ecawave is designed for editing large audio files, all processing is done direct-to-disk. Simple waveform caching is used to speed-up file operations. Ecawave supports all audio file formats and effect algorithms provided by ecasound libraries. This includes JACK, ALSA, OSS, aRts, over 20 file formats, over 30 effect types, LADSPA plugins and multi-operator effect presets.

Audiobook Cutter

Audiobook Cutter splits your MP3 audio books and podcasts in a fast and user friendly way. The split files can easily be used on mobile MP3 players because of their small-size. Their duration allows smooth navigation through the book. The split points are determined automatically based on silence detection.

Jokosher

Jokosher is a simple yet powerful multi-track studio. With it you can create and record music, podcasts and more, all from an integrated simple environment.

LMMS

LMMS is a free cross-platform alternative to commercial programs like FL Studio, which allow you to produce music with your computer. This includes the creation of melodies and beats, the synthesis and mixing of sounds, and arranging of samples. You can have fun with your MIDI-keyboard and much more; all in a user-friendly and modern interface.

Mp3Splt

Mp3Splt-project is a utility to split mp3 and ogg files by selecting a begin and an end time position, without decoding. It is very useful for splitting large mp3/ogg files into smaller files or for splitting entire albums to obtain the original tracks. If you want to split an album, you can select split points and filenames manually or you can get them automatically from CDDB (internet or a local file) or from .cue files. It also supports automatic silence split, which can be used to adjust cddb/cue splitpoints. You can extract tracks from Mp3Wrap or AlbumWrap files in a few seconds.

Qtractor

Qtractor is an Audio/MIDI multi-track sequencer application written in C++ with the Qt4 framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio, and the Advanced Linux Sound Architecture (ALSA) for MIDI, are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

ReZound

ReZound aims to be a stable, open source, graphical audio file editor, primarily for the Linux operating system.

Sweep

Sweep is an audio editor and live playback tool for GNU/Linux, BSD and compatible systems. It supports many music and voice formats including WAV, AIFF, Ogg Vorbis, Speex and MP3, with multichannel editing and LADSPA effects plugins.

Wavesurfer

WaveSurfer is an Open Source tool for sound visualization and manipulation. It has been designed to suit both novice and advanced users. WaveSurfer has a simple and logical user interface that provides functionality in an intuitive way and which can be adapted to different tasks.

Tasting and chewing explored in virtual reality



Virtual reality technology has you thinking you are doing many things, but there is much uncharted territory in eating virtually.
Imagine what the tourism industry could do with VR technology extending sensory stimulation beyond the eyes and ears. Imagine inviting prospective restaurant clients in virtual reality mode to the meat, fish and chicken specialties, pizza or chocolate cakes. Imagine any number of applications where the sensory experience in virtual reality expands.
Scientists are focusing on VR technology that can fool you into thinking you are tasting food that is not of course really there. Researchers from Singapore and another team from Japan have their own studies that explore the realm of tasting and even chewing.
Vlad Dudau of Neowin said these explorers managed to replicate the tastes and textures of different foods.
A recent conference in Japan on user interface was given much "food" tech for thought.
The work titled "Virtual Sweet: Simulating Sweet Sensation Using Thermal Stimulation on the Tip of the Tongue," explored what it is like to be tasting sweet food virtually.
"Being a pleasurable sensation, sweetness is recognized as the most preferred sensation among the five primary taste sensations. In this paper, we present a novel method to virtually simulate the sensation of sweetness by applying thermal stimulation to the tip of the human tongue. To digitally simulate the sensation of sweetness, the system delivers rapid heating and cooling stimuli to the tongue via a 2x2 grid of Peltier elements. To achieve distinct, controlled, and synchronized temperature variations in the stimuli, a control module is used to regulate each of the Peltier elements. Results from our preliminary experiments suggest that the participants were able to perceive mild sweetness on the tip of their tongue while using the proposed system."
Nimesha Ranasinghe and Ellen Yi-Luen Do of the National University of Singapore are the two explorers. This is a device where changes in temperature serve to mimic the sensation of sweetness on the tongue.
Victoria Turk in New Scientist wrote about what their technology does: "The user places the tip of their tongue on a square of thermoelectric elements that are rapidly heated or cooled, hijacking thermally sensitive neurons that normally contribute to the sensory code for taste."
MailOnline described it as a "virtual sweetness instrument" which makes use of a grid of four elements that generate temperature changes of 5°C in a few seconds. When applied to the tip of the tongue, said the report, "the temperature change results in a virtual sweet sensation." A 9V battery is put to use. Results: out of 15 people, eight registered a very mild sweet taste, said MailOnline.

Applications could include a taste-enhancing technology for dieters. Dr Ranasinghe told MailOnline: "We believe this will be especially helpful for people on restricted diets, for example salt (hypertension and heart problems) and sugar (diabetics)."
New Scientist said Ranasinghe and Do could see a system like this embedded in a glass or mug to make low sugar drinks taste sweeter.
Another group, from the University of Tokyo, is using electrodes to stimulate the jaw muscles. Tokyo researchers Arinobu Niijima and Takefumi Ogawa are reporting results from an electrical muscle stimulation (EMS) test of jaw movements in chewing.
"We propose Electric Food Texture System, which can present virtual food texture such as hardness and elasticity by electrical muscle stimulation (EMS) to the masseter muscle," said the researchers in a video posted last month on their work, "Study on Control Method of Virtual Food Texture by Electrical Muscle Stimulation."
Dudau in Neowin described their experiment, where "scientists attached electrodes to jaw muscles and managed to simulate the sensation of biting into different materials. For example, by varying the electrical stimulation, users reported that while eating a real cookie, it felt like biting into something soft, or chewing something hard alternatively."
Turk in New Scientist also talked about the Tokyo team who presented "a device that uses electricity to simulate the experience of chewing foods of different textures. Arinobu Niijima and Takefumi Ogawa's Electric Food Texture System also uses electrodes, but not on the tongue, instead they place them on the masseter muscle – a muscle in the jaw used for chewing – to give sensations of hardness or chewiness as a user bites down. 'There is no food in the mouth, but users feel as if they are chewing some food due to haptic feedback by electrical muscle stimulation,' says Niijima."
Getting into technical details, MailOnline said "By delivering short pulses of between 100 to 250 Hz they were able to stimulate the masseter muscles, used to chew solid foods."
So if the 'sugar' researchers were looking at taste sensation, these researchers were looking at food texture. They said, "In this paper, we investigated the feasibility to control virtual food texture by EMS."
The researchers said on their video page, "We conducted an experiment to reveal the relationship of the parameters of EMS and those of virtual food texture. The experimental results show that the higher strength of EMS is, the harder virtual food texture is, and the longer duration of EMS is, the more elastic virtual food texture is."
At a higher frequency, the sensation was that of eating tougher, chewier food, while a longer pulse simulated a more elastic texture.
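To make the reported relationship concrete, here is a small hypothetical Python mapping (my illustration, not the Tokyo team's control software) from a desired hardness and elasticity to EMS strength, pulse duration and frequency. Only the 100 to 250 Hz band quoted above comes from the article; every other numeric range is a made-up assumption.

def ems_parameters(hardness, elasticity):
    """hardness and elasticity in [0, 1]; returns (strength in mA, duration in ms, frequency in Hz).
    Only the 100-250 Hz band is from the article; the other ranges are illustrative guesses."""
    strength_ma = 2.0 + 8.0 * hardness     # stronger stimulation -> harder perceived texture
    duration_ms = 50 + 450 * elasticity    # longer stimulation -> more elastic perceived texture
    frequency_hz = 100 + 150 * hardness    # stays within the reported 100-250 Hz pulse range
    return strength_ma, duration_ms, frequency_hz

print(ems_parameters(hardness=0.8, elasticity=0.2))   # e.g. something like biting a hard cookie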
Oracle buys NetSuite



The software giant Oracle said that its proposed $9.3 billion acquisition of the cloud computing company NetSuite would move forward, after more than half of eligible NetSuite shareholders backed the bid.
Oracle said in a statement on Saturday that holders of 53 percent of unaffiliated NetSuite shares agreed to tender their shares by the deadline of Friday. The deal will be completed on Monday, Oracle said.
Oracle offered to buy NetSuite in July for $109 a share in response to challenges from rival enterprise software companies like Workday and Salesforce that have popular cloud-based software products.
The investment manager T. Rowe Price, NetSuite’s second-largest shareholder after Oracle’s chief executive, Lawrence J. Ellison, had objected that Oracle’s offer was too low and said it would not tender its shares. T. Rowe sent a letter last week to Oracle suggesting that the company raise its offer to $133 a share.
As of July, T. Rowe owned 12.2 million NetSuite shares.
NetSuite’s chief executive, Zachary Nelson, has worked at Oracle and is close to Mr. Ellison.
NetSuite shares went on a roller-coaster ride ahead of Oracle’s offer deadline on Friday. A day earlier, NetSuite shares jumped by more than 6 percent before trading was temporarily halted. NetSuite shares fell 3.8 percent on Friday, closing at $90.34.
According to terms of the Oracle agreement, a majority of NetSuite’s 40.8 million unaffiliated shares, or shares not tied to Mr. Ellison and other insiders, had to be tendered to complete the deal.

Synthesize Speech In Any Voice: New Software That Can Cause Controversy

 

Good luck ever trusting a recording again. As things stand, recordings presented in court as evidence will hardly have any value.
A low quality video has emerged from the Adobe conference MAX showing a demo of a prototype of new software, called Project VoCo, that appears to be a Photoshop for audio. The program is shown synthesizing a man's voice to read different sentences based on the software's analysis of a real clip of him speaking. Just copy and paste to change it from "I kissed my dog and my wife" to "I kissed my wife and my wife." Or even insert entirely new words—they still sound eerily authentic. In case you were confused about what the software's intended purpose is, Adobe issued a statement:
When recording voiceovers, dialog, and narration, people would often like to change or insert a word or a few words due to either a mistake they made or simply because they would like to change part of the narrative. We have developed a technology called Project VoCo in which you can simply type in the word or words that you would like to change or insert into the voiceover. The algorithm does the rest and makes it sound like the original speaker said those words.
The crowd laughs and cheers uproariously as the program is demoed, seemingly unaware of the disturbing implications of a program like this, especially in the context of an election cycle where distortions of truth are commonplace. Being able to synthesize audio—or claim that real audio was synthesized—would only muddy the waters even further.
Somehow the clip also involves the comedian Jordan Peele, present at the conference, whose shocked expression is the only indication that anyone there is thinking about how this software will be used out in the real world.

Supercomputer comes up with a profile of dark matter



Supercomputer comes up with a profile of dark matter: Standard Model extension predicts properties of candidate particle
Simulated distribution of dark matter approximately three billion years after the Big Bang (illustration not from this work). Credit: The Virgo Consortium/Alexandre Amblard/ESA
In the search for the mysterious dark matter, physicists have used elaborate computer calculations to come up with an outline of the particles of this unknown form of matter. To do this, the scientists extended the successful Standard Model of particle physics which allowed them, among other things, to predict the mass of so-called axions, promising candidates for dark matter. The German-Hungarian team of researchers led by Professor Zoltán Fodor of the University of Wuppertal, Eötvös University in Budapest and Forschungszentrum Jülich carried out its calculations on Jülich's supercomputer JUQUEEN (BlueGene/Q) and presents its results in the journal Nature.
"Dark matter is an invisible form of matter which until now has only revealed itself through its gravitational effects. What it consists of remains a complete mystery," explains co-author Dr Andreas Ringwald, who is based at DESY and who proposed the current research. Evidence for the existence of this form of matter comes, among other things, from the astrophysical observation of galaxies, which rotate far too rapidly to be held together only by the gravitational pull of the . High-precision measurements using the European satellite "Planck" show that almost 85 percent of the entire mass of the universe consists of dark matter. All the stars, planets, nebulae and other objects in space that are made of conventional matter account for no more than 15 percent of the mass of the universe.
"The adjective 'dark' does not simply mean that it does not emit visible light," says Ringwald. "It does not appear to give off any other wavelengths either - its interaction with photons must be very weak indeed." For decades, physicists have been searching for particles of this new type of matter. What is clear is that these particles must lie beyond the Standard Model of particle physics, and while that model is extremely successful, it currently only describes the conventional 15 percent of all matter in the cosmos. From theoretically possible extensions to the Standard Model physicists not only expect a deeper understanding of the universe, but also concrete clues in what energy range it is particularly worthwhile looking for dark-matter candidates.
The unknown form of matter can either consist of comparatively few, but very heavy particles, or of a large number of light ones. The direct searches for heavy dark-matter candidates using large detectors in underground laboratories and the indirect search for them using large particle accelerators are still going on, but have not turned up any so far. A range of physical considerations make extremely light particles, dubbed axions, very promising candidates. Using clever experimental setups, it might even be possible to detect direct evidence of them. "However, to find this kind of evidence it would be extremely helpful to know what kind of mass we are looking for," emphasises theoretical physicist Ringwald. "Otherwise the search could take decades, because one would have to scan far too large a range."
The existence of axions is predicted by an extension to quantum chromodynamics (QCD), the quantum theory that governs the strong interaction, responsible for the nuclear force. The strong interaction is one of the four fundamental forces of nature alongside gravitation, electromagnetism and the weak nuclear force, which is responsible for radioactivity. "Theoretical considerations indicate that there are so-called topological quantum fluctuations in quantum chromodynamics, which ought to result in an observable violation of time reversal symmetry," explains Ringwald. This means that certain processes should differ depending on whether they are running forwards or backwards. However, no experiment has so far managed to demonstrate this effect.
The extension to quantum chromodynamics (QCD) restores the invariance of time reversals, but at the same time it predicts the existence of a very weakly interacting particle, the axion, whose properties, in particular its mass, depend on the strength of the topological quantum fluctuations. However, it takes modern supercomputers like Jülich's JUQUEEN to calculate the latter in the temperature range that is relevant in predicting the relative contribution of axions to the matter making up the universe. "On top of this, we had to develop new methods of analysis in order to achieve the required temperature range," notes Fodor who led the research.
The results show, among other things, that if axions do make up the bulk of dark matter, they should have a mass of 50 to 1500 micro-electronvolts, expressed in the customary units of particle physics, and thus be up to ten billion times lighter than electrons. This would require every cubic centimetre of the universe to contain on average ten million such ultra-lightweight particles. Dark matter is not spread out evenly in the universe, however, but forms clumps and branches of a weblike network. Because of this, our local region of the Milky Way should contain about one trillion axions per cubic centimetre.
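A quick standalone sanity check of the quoted mass ratio (simple arithmetic, not part of the published calculation): with the electron rest energy at about 511,000 eV, an axion at the light end of the predicted range is indeed roughly ten billion times lighter.

electron_ev = 511_000.0                         # electron rest energy in eV
axion_ev_low, axion_ev_high = 50e-6, 1500e-6    # predicted axion mass range in eV
print(electron_ev / axion_ev_low)               # ~1.0e10: "up to ten billion times lighter"
print(electron_ev / axion_ev_high)              # ~3.4e8 at the heavy end of the range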
Thanks to the Jülich supercomputer, the calculations now provide physicists with a concrete range in which their search for axions is likely to be most promising. "The results we are presenting will probably lead to a race to discover these particles," says Fodor. Their discovery would not only solve the problem of dark matter in the universe, but at the same time answer the question of why the strong interaction is so surprisingly symmetrical with respect to time reversal. The scientists expect that it will be possible within the next few years to either confirm or rule out the existence of axions experimentally.
The Institute for Nuclear Research of the Hungarian Academy of Sciences in Debrecen, the Lendület Lattice Gauge Theory Research Group at the Eötvös University, the University of Zaragoza in Spain, and the Max Planck Institute for Physics in Munich were also involved in the research.
