Now artificial intelligence is inventing sounds that have never been heard before – ScienceAlert

Posted: May 20, 2017 at 6:50 am

As well as beating us at board games, driving cars, and spotting cancer, artificial intelligence is now generating brand new sounds that have never been heard before, thanks to some advanced maths combined with samples from real instruments.

Before long, you might hear some of these fresh sounds pumping out of your radio, as the researchers responsible say they're hoping to give musicians an almost limitless new range of computer-generated instruments to work with.

The new system is called NSynth, and it's been developed by an engineering team called Google Magenta, a small part of Google's larger push into artificial intelligence.

"Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer," explains the team.

You can check out a couple of NSynth samples below, courtesy of Wired:

NSynth takes samples from about a thousand different instruments and blends two of them together, but in a highly sophisticated way. First, the AI program learns to identify the audible characteristics of each instrument so they can be reproduced.

That detailed knowledge is then used to produce a mix of instruments that doesn't sound like a mix of instruments: the properties of the audio are adjusted to create something that sounds like a single, new instrument rather than a mash-up of multiple sounds.

So instead of having a flute and violin play together, you've got a brand new, algorithm-driven digital instrument somewhere between the two. How much of the flute and how much of the violin are in the final sound is up to the musician.
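Under the hood, that blending happens on the learned representations rather than on the raw recordings. Here is a minimal, hypothetical sketch of the idea in Python: the `encode`, `decode`, and `blend_instruments` names are illustrative stand-ins, not Magenta's actual NSynth API, and the toy lambdas at the bottom exist only so the example runs end to end.

```python
import numpy as np

def blend_instruments(note_a, note_b, mix, encode, decode):
    """Blend two instrument notes by interpolating their learned embeddings.

    encode/decode stand in for a trained autoencoder's encoder and decoder
    (hypothetical names, not Magenta's real API). mix = 0.0 keeps only the
    first instrument, mix = 1.0 keeps only the second.
    """
    z_a = encode(note_a)   # compressed description of the first note's character
    z_b = encode(note_b)   # compressed description of the second note's character

    # Interpolating the embeddings, rather than the raw waveforms, is what
    # makes the result sound like one new instrument instead of two layered ones.
    z_mix = (1.0 - mix) * z_a + mix * z_b
    return decode(z_mix)

# Toy stand-ins so the sketch runs: random waveforms in place of real recordings,
# and identity functions in place of a trained encoder/decoder.
flute = np.random.randn(16000)
violin = np.random.randn(16000)
new_instrument = blend_instruments(flute, violin, mix=0.3,
                                   encode=lambda x: x, decode=lambda z: z)
```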

Like many of Google's AI initiatives, NSynth relies on deep learning: an approach to AI in which vast amounts of data are processed by layered networks loosely modelled on the human brain, which is why these systems are often described as artificial neural networks.

So not only can deep learning systems use millions of cat pictures to learn to identify a cat, for instance, but they can also learn from their mistakes and get better over time, teaching themselves how to improve much as our brains do.
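As a toy illustration of that "learning from mistakes" loop (not anything from NSynth itself), the sketch below fits a single-parameter model by repeatedly measuring its error on the data and nudging its weight in the direction that reduces that error; all names and numbers here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # data generated by a hidden rule, y ~ 3x

w = 0.0                            # the model's initial (wrong) guess
for step in range(200):
    error = w * x - y              # how far off the current guess is
    grad = 2 * np.mean(error * x)  # direction in which the error grows
    w -= 0.1 * grad                # move the opposite way: learn from the mistake

print(round(w, 2))                 # ~3.0: the model has recovered the hidden rule
```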

The idea of deep learning has been around for decades, but we're only now seeing the kind of software and computing power appear that can make it a reality.

One consequence is that the NSynth demos built by the Google Magenta team all work in real time, allowing new compositions to be created on the fly.

Music critic Marc Weidenbaum told Wired that Google's new approach to the traditional trick of combining instruments together shows promise.

"Artistically, it could yield some cool stuff, and because it's Google, people will follow their lead," he said.

Google engineers have just been demoing NSynth at Moogfest, and you can read a paper on their work on arXiv.
