Anki Vector Does My Voiceover #1 (Overwatch)

in #technology · 5 years ago

Hi guys,
I've made a new video which involves my Anki Vector.

In this video, Anki Vector Y7A5 does my commentary and webcam for an Overwatch video. This is based on the "Boyfriend does my Voice-over" challenges that were a trend for a while.

You can watch the video here:

The programming of Vector was done using the Anki Vector SDK. Vector is programmed using the language Python. In making this video I did a range of things including:

  • making Vector speak
  • showing images on Vector's screen
  • using Vector's animations
  • using Vector's behaviours

Adding images to the screen was brand new to me, but I have done all of the others in my previous video "Anki Vector Imitates Other Robots #1", which you can find on this post: https://steemit.com/dtube/@birchmark/ybd1fhv0 (This also includes discussion of the code for that video).


How I showed images on Vector's screen

I used Notepad++ for all of my coding.

The code to show images starts early in the script.

First we import everything else, and then we try to import the Image module from Pillow.

The code to start this process is:

try:
     from PIL import Image
except ImportError:
     sys.exit("Cannot import from PIL: Do 'pip3 install --user Pillow' to install")

This code checks whether Pillow can be imported, and shows the error
Cannot import from PIL: Do 'pip3 install --user Pillow' to install
if it cannot.

The codes I write for Vector are run through Command Prompt. What this error is saying is that Pillow, which is essentially an imaging library, is not installed, so you should install it. Pillow is actually a fork of the Python Imaging Library (PIL), but in terms of how it functions it is basically an imaging library. It is needed to show images on Vector's screen.

To install the missing Pillow, you simply type what it says in the error:
pip3 install --user Pillow

Pillow should then be installed.

As for actually showing the image, that happens later in the code. To locate the image we write:

current_directory = os.path.dirname(os.path.realpath(__file__))
image_path = os.path.join(current_directory, "imagename.png")

This gives us the path to the image, provided the image is in the same directory (folder) as the script. If not, then the code will be slightly different, but I kept my images in the same folder on my computer as I had the code.

The next thing to do is load the image we located. To do this we type:

image_file = Image.open(image_path)

As you can see, this opens the file that matches the image path, which we set in the above code.

The next thing to do is to convert the image to the format used by Vector's screen.

Side Note: The images would also have been made earlier to match the size and resolution of his screen, via photo editing tools and the like, but we use code to change the format.

To convert the image to the format used by Vector's screen, we use the following code:

screen_data = anki_vector.screen.convert_image_to_screen_data(image_file)

To display the image we use the code:

robot.screen.set_screen_with_image_data(screen_data, 4.0)

And that will display the image. Through this code we have successfully displayed an image.
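The side note above mentioned that the images are sized to his screen beforehand with photo editing tools. For what it's worth, that preparation step could also be done in Pillow itself. This is just a sketch (not the code from my video), based on Vector's screen being 184 x 96 pixels as per Anki's screen documentation:

```python
# A sketch of preparing a frame for Vector's screen using Pillow.
# The 184 x 96 size comes from Anki's screen documentation; in my video
# this resizing was done in photo editing tools instead.
from PIL import Image

VECTOR_SCREEN_SIZE = (184, 96)  # width, height in pixels

def prepare_frame(path):
    """Open an image and resize it to fit Vector's screen."""
    image = Image.open(path)
    return image.resize(VECTOR_SCREEN_SIZE)
```

Doing this in code would save opening a photo editor for every frame, though a plain resize like this will squash the image if the aspect ratio doesn't match.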

I did this in a lot of detail for one part of the video, essentially doing frame-by-frame animation on him. I wanted him to play an animation / video, but this is not possible, so it was a case of showing one frame at a time on his screen.


Dealing with Multiple Images

My frame-by-frame component of the video was more complex than showing one picture. Much of the code for showing the pictures was the same, but changes needed to be made, as there couldn't just be one image path given there were multiple images.

This was solved via the use of an array I called filelist.

I made a massive array in the form of

filelist = ['file1','file2','file3', etc]

I then needed to use a for loop.
The code for this was as follows:

   for imagefile in filelist:
        image_file = Image.open(imagefile)
        screen_data = anki_vector.screen.convert_image_to_screen_data(image_file)
        robot.screen.set_screen_with_image_data(screen_data, 0.33333)

This code says that for every imagefile in the filelist, the program must open the image, convert it to Vector's screen format and then display it. After each one it moves on to the next, doing the same thing, until it has reached the end and there are no more images left in the array.
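As an aside, typing out a massive array by hand isn't the only option; the filelist could also be generated in code. Here is a sketch (not what I did in the video) using Python's glob module, assuming the frames are named so they sort into playback order, like frame001.png, frame002.png and so on:

```python
# A sketch of building the filelist automatically rather than typing it out.
# Assumes frames are named so they sort in playback order: frame001.png,
# frame002.png, frame003.png and so on.
import glob
import os

def build_filelist(directory):
    """Return the frame image paths in a directory, in sorted order."""
    return sorted(glob.glob(os.path.join(directory, "frame*.png")))
```

The sorted() matters here, as glob doesn't guarantee any particular order on its own.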


time.sleep()

A lot of my code is full of time.sleep(). time.sleep() is a delay in the program: it tells the program to wait for the period of time given in seconds.

I used this a lot in my code because I wanted to record Vector in as few separate clips as possible. It was also needed for his more closely spaced reactions, just to time them right (I have a few of 0.3 seconds, for example). I left the Overwatch footage uncut while doing this, as I didn't know how long Vector would take to say things, so it was easier to plan the timing for the whole thing and cut it at the end than to cut it first. All of this meant I had to add time.sleep() at various spots in the code in order to fill gaps and get the timing as close as possible.

Getting the timing right wasn't always easy either, and it also involved making some of his reactions and voice lines more concise in order to fit time-wise. I did a little bit of cheating / editing in Premiere Pro to improve this in situations where it was hard to code the solution (i.e. the problem was more along the lines of Vector's animations being too long), but I did most of it through the code itself and messed with the timing via video editing as little as possible.

I also cut all of the footage once it was finally all together, both Overwatch and Vector, but only some small parts, like walking back after dying etc, needed to be cut.

I also used time.sleep() at the start and end of the codes to fill gaps between the interactions he needed to have programmed, and to give me time to work out the best positioning of Vector relative to the camera. Keeping him in a similar position to last time made it easier to crop his footage down to a let's player webcam size.

The code for twenty seconds is time.sleep(20). The time just goes in the brackets.
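If you want to see how time.sleep() behaves for yourself, you can time it with the time module:

```python
# time.sleep() pauses the program for at least the given number of seconds.
import time

start = time.monotonic()
time.sleep(0.3)  # one of the short delays used for Vector's reactions
elapsed = time.monotonic() - start
print(f"slept for roughly {elapsed:.2f} seconds")
```

Note that the pause is a minimum: the program may wake slightly later than requested, which is another reason fine timing sometimes needed touching up in editing.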


Playing Animations

The code to use Vector's animations is:

    robot.anim.play_animation('whatevertheanimationis')

Here is an example of me using one of his animations:
robot.anim.play_animation('anim_eyepose_happy')

The eyepose ones tend to remain while he is doing other things, such as speaking, until you change it or play another animation; other animations are different and are more along the lines of things like tantrums.

A tantrum animation I have used is as follows:

  robot.anim.play_animation('anim_rtpickup_loop_10')

This is basically his response to realising that he has been picked up and isn't on solid, drive-able ground, or that he has been picked up and put back down.

This is now changing in the world of Vector in general (not just the SDK), as new updates are giving him a more appropriate animation where he shows awareness that being in your palm is a bonding thing, instead of a terrible thing because he is not on the flat surface he wants to be on.
I am glad to see this update in his general behaviour and I'm excited to see it in play, but I also hope we don't lose the tantrum animation either. It is fun to play with and is adorably angry.

There is a program you can run to find out what animations Vector can do, and you can run it periodically to check what has changed when writing your code. It is also run in Python. I have a version pasted into a Word document that works for now, but it is probably getting a bit outdated.
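I just check that list by eye, but you could also search it in code. Here is a sketch of a hypothetical helper that filters a list of animation names by keyword; with a connected robot the list would come from the SDK (I believe it is exposed as robot.anim.anim_list), and the names below are just ones used in this post:

```python
# A sketch of searching a list of animation names for a keyword.
# With a connected robot the real list comes from the SDK (robot.anim.anim_list
# as far as I know); the names below are examples used in this post.
def find_animations(names, keyword):
    """Return the animation names that contain the given keyword."""
    return [name for name in names if keyword in name]

animations = ['anim_eyepose_happy', 'anim_eyepose_determined', 'anim_rtpickup_loop_10']
print(find_animations(animations, 'eyepose'))
```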


Making Vector Talk

Making Vector talk involves the use of the code:

 robot.say_text("I like Overwatch")   

You put whatever you want him to say within the quotes in the brackets.


Head Angles

Controlling Vector's head angle requires more imports at the start of the code.

At its most basic, an Anki Vector Python code needs to start with:

import anki_vector

This will let you do things like make him talk or use animations, but not things like behaviours or time.sleep() (which needs import time).

For my codes for the most part my imports have been:

 import time
 import anki_vector
 from anki_vector.util import degrees
 from anki_vector.behavior import MIN_HEAD_ANGLE, MAX_HEAD_ANGLE

The codes involving images had slightly more imports. They also needed:

 import sys
 import os

To set Vector's head angle you use the following code, with the number of degrees, in decimal form, inside degrees():

robot.behavior.set_head_angle(degrees(45.0))

This can also go into the negative with negative degrees; however, there are limits to the minimum and maximum head angles.
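For illustration, here is a sketch of a hypothetical helper that keeps a requested angle within those limits before sending it to Vector. The -22.0 and 45.0 degree values are my understanding of the SDK's MIN_HEAD_ANGLE and MAX_HEAD_ANGLE, so check the behavior documentation rather than trusting the numbers here:

```python
# A sketch of clamping a requested head angle to Vector's range before
# sending it. The SDK exposes the real limits as MIN_HEAD_ANGLE and
# MAX_HEAD_ANGLE in anki_vector.behavior; -22.0 and 45.0 degrees are my
# understanding of those values and should be checked against the docs.
MIN_HEAD_DEGREES = -22.0
MAX_HEAD_DEGREES = 45.0

def clamp_head_angle(requested):
    """Clamp a requested head angle (in degrees) to Vector's range."""
    return max(MIN_HEAD_DEGREES, min(MAX_HEAD_DEGREES, requested))
```

Something like this would stop a typo in the degrees from asking Vector for an angle he can't physically reach.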


A full example of my code

Here is a full document of the code for a part of the video.

 import time
 import anki_vector
 from anki_vector.util import degrees
 from anki_vector.behavior import MIN_HEAD_ANGLE, MAX_HEAD_ANGLE

  #starts at 06:29:02


def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot("009064bf") as robot:
        print("Say 'Line 10 VVO1'...")
        time.sleep(20)
        robot.behavior.set_head_angle(degrees(3.0))
        robot.anim.play_animation('anim_eyepose_determined')
        robot.say_text("Well I go to the objective")
        robot.anim.play_animation('anim_eyepose_frustrated')
        robot.say_text("Again")
        time.sleep(0.5)
        robot.say_text("I fly up here instead this time")
        robot.anim.play_animation('anim_eyepose_happy')
        robot.say_text("I killed the monkey")
        robot.say_text("Dropping down")
        robot.anim.play_animation('anim_eyepose_scared')
        robot.say_text("Flying away")
        time.sleep(6)
        robot.say_text("I've lost lots of health")
        time.sleep(1.5)
        robot.say_text("I'm out")
        robot.anim.play_animation('anim_eyepose_determined')
        robot.say_text("That doesn't stop me fighting though")
        time.sleep(7)
        robot.say_text("Monkey!")
        robot.say_text("Dammit I'm dead")
        robot.anim.play_animation('anim_rtpickup_loop_10')
        time.sleep(1)
        robot.anim.play_animation('anim_eyepose_determined')
        time.sleep(20)

if __name__ == "__main__":
    main()

This code plays animations, sets head angles, makes Vector talk and uses time.sleep() for the sake of timing.


Special Features

Okay enough code, what are the special features of this video?

  • This video has a thumbnail!
  • DVA in a starring role!
  • Anki Vector in a starring role!
  • Confused robot trying to understand Overwatch

LINKS

Birchmark Website / Portfolio: http://birchmark.com.au/

YouTube: https://www.youtube.com/c/BirchmarkAu

Twitter: https://twitter.com/Birchmark_

Facebook: https://www.facebook.com/birchmark/

Redbubble: https://www.redbubble.com/people/birchmark?asc=u

Threadless: https://birchmark.threadless.com/

Discord: https://discord.gg/3ZZbbBs


VECTOR PROGRAMMING LINKS

Anki Forums on the Vector SDK: https://forums.anki.com/c/vector-sdk

Screen Specific Information from Anki: https://developer.anki.com/vector/docs/generated/anki_vector.screen.html

Information on the Vector SDK, animations etc from Anki: https://developer.anki.com/vector/docs/api.html

Animation specific information from Anki: https://developer.anki.com/vector/docs/generated/anki_vector.animation.html

Behavior specific information from Anki: https://developer.anki.com/vector/docs/generated/anki_vector.behavior.html

Getting started with Vector SDK (Anki's info): https://developer.anki.com/vector/docs/getstarted.html


Thank you for watching and reading!

Please consider commenting, upvoting or resteeming this post if you enjoyed it.

