The Amanuensis: Automated Songwriting and Recording | Blog #3 | Getting Set Up | video walk-through


The Amanuensis is an automated songwriting and recording system aimed at ridding the process of anything left-brained, so one need never leave a creative, spontaneous and improvisational state of mind, from the inception of the song until its final master. The program will construct a cohesive song structure, using the best of what you give it, looping around you and growing in real-time as you play. All you have to do is jam and fully written songs will flow out behind you wherever you go.

If you want to try it out, please get a hold of me! Playtesters wanted! You only need to be a musician who likes to jam, no programming knowledge required.

Repository

This also happens to be an open-source project, if you'd like to get involved: https://github.com/to-the-sun/amanuensis

Introduction

The purpose of this blog is to promote the project by giving people a first-hand view of it in action, going over its features and discussing the current state of development.

As requested, this blog post will go over the setup of The Amanuensis: the basic settings, setting up individual tracks and using The Singing Stream.

Post body

Demo video

First, one annoying little caveat…

At this stage in development, if you want to use synths, you must manually list them, each followed by its total number of presets, in an external file called sampleTaste.txt. The very first thing shown in the video is this file and what its contents should look like.

The file must reside in the same folder as Amanuensis.maxpat. I'm using three VSTs in my setup, so it's the first three highlighted lines in the video that are relevant; you can ignore the rest. The important thing to note is the format: the name of the VST must include its extension and be a single symbol (surrounded by quotes if it contains spaces), followed by a comma, the number of presets and finally a semicolon.
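For reference, a sampleTaste.txt for three synths might look something like this (the plugin names and preset counts here are placeholders; list whatever VSTs you actually use):

```text
Dexed.dll, 32;
"TAL NoiseMaker.dll", 256;
Surge.dll, 2553;
```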

Once that's taken care of, you can open up Amanuensis.maxpat and wait for it to load.

The Singing Stream

A gray window pops up with the title Preloading Drum Samples. It will close automatically once its job is done, and this is the signal that the program is ready to run.

The very first thing I do in the video is set up all the non-MIDI data streams that I would like to have turned into MIDI so I can play music with them. These include a Guitar Hero guitar and a DDR dance pad. This job is handled by The Singing Stream, the little window in the bottom right.

The way to load a new data stream is simply to show it to The Singing Stream. So as soon as the program is ready to go, I hit each button on the external controllers that I want to use. You can see that the number of "inputs waiting to load" increases until it reaches 9, meaning there will be 9 controls on external controllers that I will set up to generate MIDI.
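Conceptually, the loading behavior works something like the sketch below (a Python illustration of the idea only; the real logic lives inside the Max patch):

```python
# Conceptual sketch only; the actual implementation is in Max/MSP.
seen_controls = set()
waiting_to_load = []

def on_raw_input(control_id, value):
    """The first time a control sends data, queue it for loading;
    once loaded, its values can be mapped to MIDI."""
    if control_id not in seen_controls:
        seen_controls.add(control_id)
        waiting_to_load.append(control_id)
        print(f"inputs waiting to load: {len(waiting_to_load)}")
```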

You can also generate MIDI from audio

Opening the drop-down menu reveals all of the data streams that have been seen so far. The ones with checkmarks by them are activated and will produce MIDI. Some controls on these external controllers are always sending, so I take the time to turn each of these off one by one, leaving only the ones I want to use.

After assessing this list, I decide there's one more data stream I would like checked: the actual audio coming in through the microphone plugged into input 1 on my interface. Unlike the game controllers, audio sources are deactivated by default, so I must select this one and activate it manually. Now I can pair this MIDI source up with the track the microphone is actually recording into, to run the rhythmic analysis of that audio (I'll get into this in a moment).

Choosing the pitch generated for individual sources

The MIDI generated by the microphone does not need to be any specific pitch; it is there simply to denote moments in time for analysis. However, the other sources are either going to be used as actual instruments to play synths or to trigger hotkeys, so in either case a specific pitch will need to be deliberately chosen for them to generate.

The Singing Stream is designed to be as generic as possible and to accept as many human interface devices as possible, so unfortunately individual controls are not named but simply numbered according to the input that has been received from them. To figure out which input is which, I select each one and hit the buttons on the controller. When I've found the corresponding button, the input can be seen fluctuating in the graph.

With the correct source selected, you can click on the bar labeled "producing pitch" with the big bold number after it. Dragging up and down with the mouse will change the number, or as I do, you can type in the desired pitch and then hit enter. For the guitar's controls I choose pitches that constitute a playable little scale.
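For example, the five fret buttons could be mapped to a C major pentatonic scale (these particular pitch numbers are my own illustration, not necessarily what I typed in the video):

```python
# One MIDI pitch per Guitar Hero fret button: a "playable little scale".
fret_pitches = {
    "green": 60,   # C4
    "red": 62,     # D4
    "yellow": 64,  # E4
    "blue": 67,    # G4
    "orange": 69,  # A4
}
```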

On the other hand, the dance pad is going to be used for more utilitarian purposes. I would like to stomp on it to activate certain hotkeys. MIDI pitches 0 through 5 are currently designated to activate these. So for this setup I specify one of the dance pad controls to generate pitch 0 (toggle god mode on/off), another pitch 2 (a new random sound to play with) and the third pitch 4 (move the current MIDI source up one track).
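To make the hotkey routing concrete, here's a minimal sketch (the 0/2/4 assignments are the ones from this setup; everything else, including the function itself, is illustrative rather than the actual patch logic):

```python
# Pitches 0 through 5 are reserved for hotkeys; anything else plays normally.
HOTKEYS = {
    0: "toggle god mode on/off",
    2: "load a new random sound to play with",
    4: "move the current MIDI source up one track",
}

def on_note(pitch, velocity):
    """Route an incoming MIDI note to a hotkey action or an instrument."""
    if pitch in HOTKEYS:
        print(f"hotkey {pitch}: {HOTKEYS[pitch]}")
    else:
        print(f"play pitch {pitch} at velocity {velocity}")

on_note(0, 127)   # a stomp on the first dance pad control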

The general settings

The next necessity is to go into the "settings" menu and choose your audio driver, your drum sample folder, your VST instrument folder and a folder where project files and the recordings ultimately produced will be exported.

Side note: in the video I was actually fooled by the default values in the menu. I had deleted the settings file so I could do this demo from scratch, so I did need to come back and choose the folders for samples and synths before they would load.

I also adjust the "tolerance", putting it way up to 48. Normally I would keep it low, meaning I would have to be very much in sync with my playing for it to capture anything. Right now, however, I'm unfortunately without the use of my hands and relying on beatboxing as well as playing a Guitar Hero guitar with my feet. Needless to say, these methods are much less precise than if I could actually get a guitar in my hands, so in turning up the tolerance I am essentially turning down the difficulty. You may want to set it high at first as well, just to get the feel of things, and then turn it down again later.
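In effect, tolerance is a timing window. A loose sketch of the idea (I'm assuming a simple distance-from-the-expected-beat test here for illustration; the real analysis is more involved):

```python
def captured(hit_time, expected_time, tolerance):
    """A hit only counts if it lands within `tolerance` of where
    the rhythm expects it (units here are arbitrary)."""
    return abs(hit_time - expected_time) <= tolerance

print(captured(1012, 1000, tolerance=5))   # False: too sloppy at low tolerance
print(captured(1012, 1000, tolerance=48))  # True: the same hit now counts
```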

Track set up

Recording sources shouldn't have to be adjusted too much. Mainly you should just give yourself all the options you might want when constructing a song and leave it at that. You don't have to use every one each time you play.

Here I decide I would like to have one track potentially playing drum samples and two tracks on synths, which can create some nice layering. I also happen to have two microphones plugged into inputs 1 and 2 of my interface, so I set those up accordingly as well, although I usually only ever use the first one.

Mixing and matching your instrument with its sound

As soon as any MIDI enters the program, you will see a label denoting that source appear on one of the tracks. By the time the settings menu is closed, four separate sources have already been seen. The task is then simply to move them to the track you would like to use them to play. In one sense the MIDI source can be seen as your "instrument", whereas the recording source is the "sound" it produces.

When the MIDI is being used to play drum samples or synths, this association is fairly straightforward. For example, if I assign the Guitar Hero guitar to a synth track it will play that synth. If I move it to the samples track, the buttons will play drum samples.

I mentioned earlier that I would use the MIDI generated by my microphone to run the analysis of that audio. Basically I'm using the microphone (my instrument) to play the beatboxing (the sound) that will be entering it.

But this can be mixed and matched as well. If I wanted to, I could assign the computer keyboard as the MIDI source for the microphone and the recording I would get would be the clicking of the keys. Or if you happen to make some very predictable and rhythmic grunting while you're playing DDR, you could assign the dance pad as the instrument for the microphone and build the song out of that (you could actually still play DDR simultaneously, by the way). Or you could use your voice as the instrument assigned to a synth track and produce those sounds with it.
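Put another way, the whole arrangement is just a mapping from MIDI sources (instruments) to tracks (sounds), and any pairing is legal. Here are the combinations discussed above as a toy table (illustrative names only; in the real program you move sources between tracks rather than editing anything like this):

```python
# Any MIDI source can drive any track; these rows mirror the examples above.
instrument_to_track = {
    "Guitar Hero guitar": "synth track",       # buttons play that synth
    "Focusrite input 1":  "microphone track",  # mic MIDI analyzes its own audio
    "dance pad":          "microphone track",  # rhythmic grunts become the song
    "computer keyboard":  "microphone track",  # record the clicking of the keys
}
```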

Moving MIDI sources

The + and - keys are used to move MIDI sources up or down one track. The number keys will move them directly to a specific track. MIDI pitches 1 and 2 emulate the - and + keys, respectively. The MIDI source that is moved is always the one that most recently sent MIDI to the system.
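The movement logic amounts to something like this (a behavioral sketch assuming the five-track setup above, not the patch itself):

```python
def move_source(current_track, key, total_tracks=5):
    """Return the new track for the most recently heard MIDI source."""
    if key == "+":              # also triggered by MIDI pitch 2
        return min(current_track + 1, total_tracks)
    if key == "-":              # also triggered by MIDI pitch 1
        return max(current_track - 1, 1)
    if key.isdigit():           # number keys jump straight to a track
        return min(max(int(key), 1), total_tracks)
    return current_track

print(move_source(3, "+"))  # 4
print(move_source(3, "1"))  # 1
```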

You can watch in the video as I assign sources to tracks as I see fit. At one point, the "Focusrite input 1" source starts rotating through tracks. Although I would have liked it to remain on track 1, the microphone was picking up my voice and creating MIDI from it, so that became the source in question when I started moving things.

Calibration in The Singing Stream

I bring this up because it became necessary to recalibrate that input stream. You'll see that I go back to The Singing Stream, choose the audio input 1 from the menu and click "recalibrate". I then talk loudly away from the microphone to establish a base level of background noise that it should ignore. From then on it will be less sensitive and only acknowledge the sounds I'm deliberately making directly into it.
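Conceptually, recalibration just establishes a noise floor to gate against: measure the ambient level, then ignore anything quieter. A sketch of the idea (the 10% headroom below is my own assumption for illustration, not the patch's actual thresholding):

```python
def calibrate(ambient_levels):
    """Set the noise floor slightly above the loudest ambient level
    heard while recalibrating; quieter input is ignored afterwards."""
    return max(ambient_levels) * 1.1

def deliberate(level, noise_floor):
    return level > noise_floor

floor = calibrate([0.02, 0.05, 0.04])  # talking loudly away from the mic
print(deliberate(0.03, floor))         # False: background noise is ignored
print(deliberate(0.40, floor))         # True: a sound made directly into it
```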

That's all there is to it…

At this point (really at any point, as long as an audio driver is selected) I'm free to just start playing, and the system will begin to pick things up on its own, constructing a song with them as it sees fit. And of course, everyone knows setup sucks, so I'll note that everything covered in this run-through is saved; the next time I start up, I won't have to go through it all again.

So here is where the left brain leaves off and the right brain takes over…

Until next time, farewell, and may your vessel reach the singularity intact


https://tothesun.bandcamp.com/
https://soundcloud.com/to_the_sun
https://sellfy.com/to_the_sun

Series Backlinks

Blog #1 - Introducing The Amanuensis: Automated Songwriting and Recording
Blog #2 - Birth of a Song and the Battle for Its Evolution
