Project Magenta talk at I/O 2017: Generative Art/Music Using Machine Learning (and Surviving It?)

in #music • 7 years ago

One thing that gets my brain buzzing like a fridge even more than the blockchain horizon is generative art using machine learning/neural nets.

Check out this presentation from Douglas Eck of Project Magenta (Google Brain):


So this talk mostly previewed sound design, but the drawing half of the content can be (and has been) applied to musical composition. Check out this great post too.
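If you want to poke at this yourself, Magenta ships pre-trained melody models you can sample from the command line. A minimal sketch, assuming you've pip-installed magenta and downloaded one of the pre-trained .mag bundles linked from the melody_rnn README (flags follow that README at the time of writing; the bundle path is a placeholder and things may have changed since):

```bash
# Sample 10 melodies from a pre-trained model, primed with middle C (MIDI 60).
# Generated melodies land in the output directory as MIDI files.
melody_rnn_generate \
  --config=attention_rnn \
  --bundle_file=/path/to/attention_rnn.mag \
  --output_dir=/tmp/melody_rnn/generated \
  --num_outputs=10 \
  --num_steps=128 \
  --primer_melody="[60]"
```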

While it's easy to see/hear the results and say, "OK, yup, our jobs are still safe," generative art has been winding up for over a hundred years and will only progress faster. I can only speak confidently about music creation, but there is very little about the process that I can't see being automated and optimized by machine learning (eventually). For now we humans are the most efficient, but I can think of two options for surviving the inevitable.
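To make "data-trained" concrete in the smallest possible way, here's a toy sketch of my own (nothing like Magenta's LSTM models, and the corpus is made up): a first-order Markov chain that learns pitch-to-pitch transition statistics from a few melodies and samples new material. The train-on-data, sample-something-new loop is the same shape, just radically simpler:

```python
import random
from collections import defaultdict

# Toy "training data": melodies as lists of MIDI pitch numbers.
# (Invented for illustration; a real model trains on thousands of pieces.)
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [62, 64, 65, 67, 69, 67, 65, 64, 62],
]

# "Training": tally which pitch tends to follow which.
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(seed=60, length=16):
    """Sample a new melody by walking the learned transition table."""
    melody = [seed]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:       # dead end: no observed continuation
            options = [seed]  # fall back to the seed pitch
        melody.append(random.choice(options))
    return melody

if __name__ == "__main__":
    print(generate())  # e.g. [60, 62, 64, 65, 67, 65, ...]
```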

One: if we are to compete with data-trained AI, maybe it's best to think about the ways in which we are 'trained' most extensively.

For performing musicians, one training set would be the thousands and thousands of hours of practice and rehearsal, though very few make it to the point of virtuosic attention to detail in the effort to maximize expression. (Some lose their way, playing difficult things just to play difficult things, but there's a reason that isn't popular, and it's not because people are stupid; it's because that kind of music doesn't translate efficiently for most listeners.) The sad(/exciting?) truth is that machines already replace many world-class performers in the studio. While that's disheartening if you've spent years of your life approaching muscle memory (I definitely tried), performance isn't really the largest data set we have offered our brains.

But hearing? We've been doing that non-stop our whole lives. If we compare our sensory input (hearing plus memory, context, and feeling, in this case) to the data fed into a generative model, then we have the benefit of having been "trained" our entire lives. Granted, a human's reinforcement scheme will be less efficient from the start, but we have the blessing of not requiring any translation between data types. And I think the role that benefits most from that training is production. I mean that in the slightly more traditional sense: how a song is 'dressed up', and therefore how it sits in context against an ocean of music.

Will an algorithm be able to decide whether a guitar tone is "too Strokes and not enough Squeeze", or whether guitars are irrelevant and that melody should be played on a Juno-60 through spring reverb? I'm confident we have a few years left to make such important decisions, but we will slowly be backed into a corner of editing, curation, and, as suggested by Douglas Eck, using these processes as tools for new types of composition.

Which is basically option two: we join the machines. I'm thinking wetware. Best of both worlds. I CAN'T WAIT.

Anyone out there using these tools to make new stuff yet?



As someone who is both a software engineer and a musician, as well as a massive AI enthusiast, I am super excited for things like this! Thanks for sharing!
