Generate your own sounds with NSynth

in #music · 7 years ago

NSynth is, in my opinion, one of the most exciting developments in audio synthesis since granular and concatenative synthesis. It is one of the few neural networks capable of learning and directly generating raw audio samples. Since the release of WaveNet in 2016, Google Brain's Magenta and DeepMind have gone on to explore what's possible with this model in the musical domain. They've built an enormous dataset of musical notes and also released a model trained on all of this data. That means you can encode your own audio using their model, and then use the encoding to produce fun and bizarre new explorations of sound.
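One of the fun things those encodings make possible is interpolation: because the encoder compresses audio into a coarse sequence of latent vectors, you can blend two sounds by blending their encodings before decoding. Here's a minimal numpy sketch of that idea — the encoding shapes and values here are made up for illustration (in practice you would get them from the pretrained NSynth encoder), but the blending itself is just array math:

```python
import numpy as np

# Hypothetical encodings of two short sounds, as (timesteps, channels)
# arrays. In practice these would come from the pretrained NSynth encoder.
rng = np.random.default_rng(0)
enc_a = rng.normal(size=(125, 16))
enc_b = rng.normal(size=(125, 16))

def blend(a, b, t):
    """Linear interpolation in the latent space: t=0 gives a, t=1 gives b."""
    return (1.0 - t) * a + t * b

halfway = blend(enc_a, enc_b, 0.5)
# Decoding `halfway` with the WaveNet decoder produces a sound "between"
# the two sources -- something neither mixing nor crossfading the raw
# waveforms can do, since the interpolation happens in the learned space.
```

This is a sketch of the concept, not the Magenta API; the real encode/decode calls live in the Magenta NSynth code and need the released checkpoint.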

Since NSynth is such a large model, naively generating audio takes a few minutes per sample, making it pretty much a nonstarter for creative exploration. Luckily, there is plenty of knowledge out there about how to make the WaveNet decoder faster [1]. As part of developing my course on Creative Applications of Deep Learning, I spent a few days implementing a faster sampler for NSynth and submitted a pull request to the Magenta folks; it's now part of the official repo. The whole team was incredibly supportive and guided me through the entire process, offering a ton of feedback. If you're interested in contributing to the project, I'd highly encourage you to reach out via GitHub or the Magenta Google Group with any questions.
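The core of the fast-sampling trick is caching: instead of re-running every dilated convolution over the entire history for each new sample, each layer keeps a small FIFO queue of its last few activations, so generating one sample costs O(layers) rather than O(layers × history). Here's a toy numpy sketch of that queue trick — scalar channels, random weights, a plain tanh instead of the real gated units, and nothing like the actual Magenta implementation — that checks the incremental path against the naive one:

```python
import numpy as np
from collections import deque

# A toy stack of dilated causal convolutions (kernel size 2, scalar channels).
rng = np.random.default_rng(1)
dilations = [1, 2, 4, 8]
weights = [rng.normal(size=2) for _ in dilations]

def naive_generate(x):
    """Slow path: recompute every layer over the full history."""
    h = np.asarray(x, dtype=float)
    for w, d in zip(weights, dilations):
        shifted = np.concatenate([np.zeros(d), h])[:len(h)]  # h[t - d], zero-padded
        h = np.tanh(w[0] * shifted + w[1] * h)
    return h

class FastGenerator:
    """Fast path: each layer caches its last d inputs in a FIFO queue,
    so one new sample never touches the rest of the history."""
    def __init__(self):
        self.queues = [deque([0.0] * d, maxlen=d) for d in dilations]

    def step(self, x):
        h = float(x)
        for w, q in zip(weights, self.queues):
            past = q[0]   # this layer's input from d steps ago
            q.append(h)   # maxlen deque drops the oldest entry automatically
            h = np.tanh(w[0] * past + w[1] * h)
        return h

x = rng.normal(size=32)
fast = FastGenerator()
incremental = np.array([fast.step(v) for v in x])
assert np.allclose(naive_generate(x), incremental)
```

The real WaveNet decoder applies the same idea per sample across many channels and gated residual layers, which is where the large speedup comes from.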


