Introducing The Amanuensis: Automated Songwriting and Recording | Blog #1 | Demo Video plus 2 Recordings


Repository

https://github.com/to-the-sun/amanuensis

Introduction

In this post I will introduce The Amanuensis, explain a little bit about how it works, describe the roadmap I envision for its future and provide a video demo of it in action, as well as the pair of recordings made during said demo.

Post Body

What is The Amanuensis?

It is a totally novel system for writing music. In one sense it's a "smart looper", except that it writes entire songs, not just loops. Really, the closest thing to compare it to is an entire DAW application; it's just one tailored to a very specific (and fun!) method of writing and recording. It's half recording software, half Guitar Hero.

The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you on a daily basis.

The Amanuensis was conceived after participating in so many jam sessions, so many with exceptional moments in them, and being frustrated by how few of those moments ever made it into actual songs, despite the sessions being recorded. One could spend hours digging through those recordings, and hours more trying to salvage anything for an actual studio recording. The apparent truth was that, musically, the best things always happen when you're not paying attention to whether or not they're being documented. So if you love to get lost playing your instrument, but hate the tedious and often contrived nature of engineering a recording, The Amanuensis is for you.

How it works

Currently The Amanuensis relies on MIDI to denote played notes and uses it to identify when a steady beat has been achieved. Longer steady spans are retained, while the shorter ones they overlap are discarded. This is essentially the criterion used to determine which portions of your jam are "best", and it works well in practice, although any other method could be implemented in future versions.
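
To make that concrete, here's a minimal sketch in Python of how such an analysis might work. This is not the project's actual code; the tolerance value and the minimum span length below are assumptions made purely for illustration.

```python
# A minimal, hypothetical sketch of the rhythmic analysis described
# above. Tolerance and minimum span length are illustrative guesses.

def steady_spans(onsets, tolerance=0.05):
    """Find spans of onsets whose inter-onset intervals stay steady.

    onsets: sorted list of note timestamps in seconds.
    tolerance: max allowed drift between consecutive intervals (s).
    """
    spans = []
    start = 0
    for i in range(2, len(onsets)):
        prev_gap = onsets[i - 1] - onsets[i - 2]
        gap = onsets[i] - onsets[i - 1]
        if abs(gap - prev_gap) > tolerance:   # the beat broke down here
            if i - start >= 3:                # keep runs of 3+ onsets
                spans.append((onsets[start], onsets[i - 1]))
            start = i - 1
    if len(onsets) - start >= 3:
        spans.append((onsets[start], onsets[-1]))
    # Retain longer spans; discard the shorter ones they overlap.
    spans.sort(key=lambda s: s[1] - s[0], reverse=True)
    kept = []
    for s in spans:
        if all(s[1] <= k[0] or s[0] >= k[1] for k in kept):
            kept.append(s)
    return sorted(kept)
```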

Although MIDI must always be present for The Amanuensis to work, this does not mean you can't use ordinary audio sources like acoustic drums or guitar. The program includes a versatile feature called The Singing Stream, which can translate any stream of data into MIDI, whether that be audio or readings from an external controller or a Kinect. This MIDI can then simply accompany that audio source (or another) in order to run the rhythmic analysis, or be used to play VST instruments or samples.
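
To give a rough feel for the idea behind The Singing Stream, here's a hedged sketch that maps any numeric stream onto MIDI note-ons. The threshold, the hysteresis and the note number are invented for this example; they're not taken from the actual feature.

```python
# Hypothetical illustration: turn any stream of numbers -- an audio
# envelope, controller readings, Kinect joint speeds -- into MIDI
# note-on events whenever the stream spikes past a threshold.

def stream_to_midi(samples, threshold=0.3, note=38):
    """Return (index, note, velocity) events for each detected onset.

    samples: iterable of floats normalized to [0, 1].
    """
    events = []
    armed = True
    for i, level in enumerate(samples):
        if armed and level >= threshold:
            velocity = min(127, int(level * 127))
            events.append((i, note, velocity))   # onset detected
            armed = False                        # ignore the sustain
        elif level < threshold * 0.5:            # hysteresis: re-arm
            armed = True
    return events
```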

These VSTs and samples are preloaded into The Amanuensis itself, so no other software is needed to play MIDI. In addition, the PGUP and PGDN hotkeys allow you to quickly cue up a new sound at random or cycle back to previous ones: either a random plug-in from your specified VST folder with a random preset already selected, or a new random sample from your specified drum-sample folder for each pitch. These hotkeys are basically the only deliberately controlled functions in the system; everything else is meant to be as automatic as possible. But if you're not feeling the sound you're working with, switching to another can be tedious and take you out of the moment. This alleviates that.
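
In pseudocode terms, that hotkey behavior might look something like the sketch below; the folder scanning, file extensions and history handling are my assumptions, not the program's real implementation.

```python
# Hypothetical sketch of the PGUP/PGDN behavior: PGUP cues a new
# random sound, PGDN steps back through sounds already tried.

import random
from pathlib import Path

class SoundCycler:
    def __init__(self, vst_folder, sample_folder):
        # Pool together plug-ins and drum samples (extensions assumed).
        self.pool = (list(Path(vst_folder).glob("*.dll")) +
                     list(Path(sample_folder).glob("*.wav")))
        self.history = []
        self.position = -1

    def page_up(self):
        """Cue up a new sound at random (PGUP)."""
        sound = random.choice(self.pool)
        self.history.append(sound)
        self.position = len(self.history) - 1
        return sound

    def page_down(self):
        """Cycle back to a previously cued sound (PGDN)."""
        if not self.history:
            return None
        if self.position > 0:
            self.position -= 1
        return self.history[self.position]
```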

For increased automation, there is a function that will "shuffle" your instrument at specified intervals of time or after periods of silence. In addition to giving you a new sound, this also switches you to a random MIDI channel, thereby allowing you to work with another layer of the song. Each song can comprise up to 16 tracks (one for each MIDI channel), each analyzed for rhythm and built independently and simultaneously. This means an entire band can write a song together, or a single player can rotate through every instrument. The number keys, as well as + and -, can also be used to jump to other channels.
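
In the same hedged spirit, the shuffle logic might boil down to something like this; the interval and silence-timeout values are placeholders, not the program's actual defaults.

```python
# Hypothetical sketch of the automatic "shuffle": after a set interval
# or a period of silence, jump to a random MIDI channel (and with it,
# a new random sound).

import random
import time

SHUFFLE_INTERVAL = 60.0   # seconds between forced shuffles (assumed)
SILENCE_TIMEOUT = 10.0    # seconds of silence before a shuffle (assumed)

def maybe_shuffle(last_shuffle, last_note, now=None):
    """Return a new random channel (0-15) when it's time to shuffle."""
    now = time.time() if now is None else now
    if (now - last_shuffle >= SHUFFLE_INTERVAL or
            now - last_note >= SILENCE_TIMEOUT):
        return random.randrange(16)   # one track per MIDI channel
    return None
```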

Demo

Below is a quick video, provided simply to give people a first-hand view of The Amanuensis in action, so they can get a real idea of what it is and how it works. I won't go into anything in detail here; that I will leave for future blog posts and video demos.

Don't expect to hear any world-class music in it; it's purely for illustrative purposes. Unfortunately it must be stated that I'm currently a bit of a cripple and don't have the use of my hands, so the "songs" I'm producing are actually comically bad. If I could play some guitar or drums, I could show you some really impressive music being written, but as it is I'm having to rely on my (less and less nonexistent) beatboxing skills. But that only goes to show how versatile the system is; just about any way you could think of to make music can be used to play The Amanuensis.

Two recordings are created during the video. The first uses "god mode" to make quick, clean recordings in each track. You can listen to the result below.

The second method is much more fun and chaotic. The resulting song is highly, shall we say, "experimental", but the idea behind it is the process itself. I look forward to writing music like this every night. It's like a videogame, or a jam session with some random participant throwing audio back at you that you have to deal with. It requires you to stay very much in the moment, in that right-brained flow, which is the whole point. Eventually the program will be refined enough that the songs produced by playing this "game" are actually really high quality.

Roadmap for the future

Currently The Amanuensis is in a version 1.x beta (playtesters wanted!). The main tasks at hand are making it more user-friendly, giving it a nice UI (especially in visualizing its analysis, moving it closer and closer to the interactive sort of video game I ideally imagine it to be) and finding any bugs, but most if not all of the essential functionality is intact and working.

  • Version 2.0

At this point, perhaps one use of an open-source effort might be in creating a brand-new version 2.0 from the ground up. One major upgrade would be to decouple The Amanuensis from its reliance on MIDI. Although more difficult to get working correctly, this would be more streamlined and straightforward from a user's perspective, and would address certain idiosyncrasies, such as what happens when MIDI notes have long attacks, or when dealing with glissando in vocals or other instruments. This could be accomplished by reworking the algorithm that judges which parts of the music are best: instead of relying entirely on analyzing rhythm, i.e. specific moments in time, a more comprehensive machine-learning approach could be implemented.

I imagine a deep neural net focused on spectral analysis. At the lowest level it would look at a period of time equivalent to maybe a beat, determining where in that window an onset is likely to occur; this would be the layer that judges rhythm, by noticing any deviation in onset positions. The next layer might look at windows the length of a measure and be able to identify repeated riffs and the like. Additional layers would find patterns on the scale of whole choruses and verses. In theory, deviations in pitch from an established scale could also be detected implicitly. A rating could then be derived, based more heavily on essentials like rhythm, but also taking into account the repetition of higher-level motifs.
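
Purely as a thought experiment, here's what such a hierarchy might look like in PyTorch. Every window size, channel count and layer choice below is an assumption; nothing like this exists in the current version.

```python
# Speculative sketch: stacked 1-D convolutions over a spectrogram,
# each layer watching a longer musical timescale than the last.

import torch
import torch.nn as nn

class HierarchicalListener(nn.Module):
    def __init__(self, n_bins=128):
        super().__init__()
        # Beat scale: short windows, judging where onsets should land.
        self.beat = nn.Conv1d(n_bins, 64, kernel_size=16, stride=4)
        # Measure scale: repeated riffs across several beats.
        self.measure = nn.Conv1d(64, 32, kernel_size=16, stride=4)
        # Section scale: patterns spanning whole verses and choruses.
        self.section = nn.Conv1d(32, 16, kernel_size=16, stride=4)
        self.rate = nn.Linear(16, 1)  # overall "how expected" rating

    def forward(self, spectrogram):   # shape: (batch, n_bins, frames)
        x = torch.relu(self.beat(spectrogram))
        x = torch.relu(self.measure(x))
        x = torch.relu(self.section(x))
        return torch.sigmoid(self.rate(x.mean(dim=2)))
```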

At any given moment, then, a percentage score could be calculated based on what the user is actually playing versus what the program expected to hear. 100% would not necessarily be ideal; good music requires establishing patterns and then breaking away from them. So perhaps a target more like 50 or 75% similarity to the expected could be used. At that point, it would be only a small step further to imagine an AI for you to jam right along with: the algorithm could actually generate the audio it expects to come next, rather than just judging based upon it. It would probably be ideal if the AI trained on each instrument it heard independently, so it could truly play alongside you with a different instrument than yours, rather than the same one right on top of you, which would be jarring. This is also one of the issues of a player using the system by themselves: as soon as it starts looping, it's often playing the same thing (not quite) totally in sync with you. An AI would be something to brace against while you got going, and you wouldn't even need to hear the audio in the track currently being captured until later.
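
That scoring idea could be sketched as follows: measure how similar the played moment is to the expected one, then reward landing near a target similarity rather than at 100%. The cosine-similarity measure and the 0.75 default are illustrative choices, not settled design.

```python
# Hypothetical scoring: being exactly on target (e.g., 75% similar to
# what was expected) yields the best score, not perfect imitation.

import numpy as np

def moment_score(played, expected, target=0.75):
    """Score in [0, 1]; 1.0 means similarity landed right on target.

    played, expected: feature vectors (e.g., spectral frames).
    """
    similarity = np.dot(played, expected) / (
        np.linalg.norm(played) * np.linalg.norm(expected) + 1e-9)
    span = max(target, 1.0 - target)          # worst-case distance
    return max(0.0, 1.0 - abs(similarity - target) / span)
```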

It is also the eventual goal for this system to blend seamlessly into any situation, so music truly can flow out from you wherever you go. I imagine walking down the street, and because the sensors on your feet are picking up a steady repetition, a beat starts up spontaneously. Then, as you're compelled to start tapping your fingers on your thigh, the sensors there begin adding a melody of some kind to this inchoate song. A personal soundtrack of new, original music at all times: this is basically my goal in life. The first requirement in moving toward this reality would be an Amanuensis for mobile devices. For this reason it might be ideal to write version 2.0 using JUCE, which I hear is fully cross-platform. It's a more involved framework, and one I have not used, but if it's going to be done, it should probably be done right.

  • Versions beyond

The absolute best way to play music is with other people. If your band's not around and the AI (trained to play just like you) isn't stimulating enough, it would be great to be able to jump online and write music with any random person in the world. I find this feedback loop, the give and take with foreign minds, to be essential. Sites like JamKazam.com are great, but augmented with The Amanuensis, they could be taken to another level. Jamming with The Amanuensis becomes a lot like a game, a far more open-ended version of Guitar Hero, with the goal of scoring highly and creating the best music possible. An online matchmaking system could be every bit like those in videogames, throwing players together to compete against (or with) each other to write songs.

And I'll raise the ante once more: there's no reason this sort of pattern-detection system has to be limited to audio. If your instrument produces video in addition to audio, you're basically describing what happens in a videogame. In the same way The Amanuensis currently plays back bits of audio that you must interact with and respond to, a video Amanuensis would also send visual obstacles and enemies back your way, which would then inform and determine your future playing. If you're not familiar with the game Crypt of the NecroDancer, I highly recommend checking it out. What I have in mind would be something like that, but on a grander scale, with a soundtrack and even the levels themselves created by and evolving with you as you play, instead of being set from the beginning.

This blog will continue…

I intend to continue releasing demo video walk-throughs of The Amanuensis as I use it day to day. In each new blog post I will go over some new aspect of the system. If you'd like to try it out and contribute some videos of your own, please do! I'd love to see people using it in all the different ways it can be used.

Until next time, farewell, and may your vessel reach the singularity intact.


https://tothesun.bandcamp.com/
https://soundcloud.com/to_the_sun
https://sellfy.com/to_the_sun

Comments

Thank you for your nice introduction to the project. Even though I'm not that proficient in the field, I haven't heard of an app, a plugin or a feature in any DAW like this before. The idea sounds promising to me. That said, I can't say the same for the demos/examples in the post; I think adding better samples would do much more to introduce the project and the concept. I expect to see more in the upcoming posts.

Also, the contribution doesn't fit the tutorials category, since contributions in the tutorials category are expected to teach rather than introduce, as stated in the Utopian Guidelines.

Contributions in this category include technical instructions using text and illustrations to clearly explain and teach significant aspects of an Open Source project.

But the contribution is a perfect match for the blog posts category. We welcome posts introducing or promoting a project with details and overviews describing the history, present and future of a project. Therefore, the contribution has been moved to the blog posts category and reviewed under that category's guidelines. For upcoming blog posts, you should use the "blog" tag as the second tag.

Additionally, using resources without properly referencing and quoting them is not welcomed on Utopian, even if the content belongs to you. Certain parts of the "What is The Amanuensis?" section could be paraphrased or shown in a blockquote.

To provide a better presentation in the upcoming posts of the series, I suggest using higher-level headers (h2 and h3 instead of h3 and h4, for example) and increasing the number of visuals in the post. You might also think about separating the text into smaller sections. In its current form, the contribution has low readability and is hard to follow.

I tried to point out everything that could improve the quality of the post. I would be glad to help in any way I can. I hope I managed!

Thank you for sharing this great project on Utopian. I'll be looking forward to your future posts.



Thanks for all the tips! Sorry about the wrong category. I intended it for the blog category and that's what I had originally written, but voice recognition screwed me again… It loves to erase the things I write and substitute default entries from the past as soon as I click "submit". I'll be more careful next time.

Also I realize my demos aren't extremely impressive right now (no hands). It's something I'm working hard to improve. If I may ask, which would you say is a bigger problem: the quality of the video (editing, etc.) or the actual musical quality of the recordings produced?

Thanks for your feedback!

Yeah, the video recording quality is bad, but I'm not judging it, since it could be improved easily. My worry about the quality of the music produced is the difference between the potential and the current samples (relative to my expectations, of course). I think the result would be much better with better input and instrument samples (for demonstration purposes). It's a nice practice to show results using existing samples as input when you don't have anything to jam with at the moment of recording.

Thanks @roj, unfortunately I don't have any better recordings already made to use. This thing with my hands has been a chronic issue and there's no telling when I might get them back. But I've been thinking a lot about getting someone else to try out the program and demo it instead. Is there a utopian category for that? Somewhere to request beta testers/video makers?

Nice detailed feedback! Keep up the good work!
