How to build a cat detector with a Raspberry Pi and a Pi Noir camera using Deep Learning - Part II


Part I of the story is here:
https://steemit.com/raspberrypi/@mragic/how-to-build-a-cat-detector-with-a-raspberry-pi-and-a-pi-noir-camera-using-deep-learning-part-i

In this second part, we will create the training and test datasets needed later to train and test the deep neural network that will distinguish whether a cat is in an image or not.
First, we will set up motion so that it records an image whenever something moves in the field of view of the camera.
To do so, we open motion.conf again with:

sudo nano /etc/motion/motion.conf

We enable the recording of images and set the target directory where the images will be stored:

output_pictures on
target_dir /opt/motion
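
Depending on how busy your scene is, you may also want to tune motion's sensitivity in the same file. These are standard motion options; the values below are only a starting point, not the ones from my setup:

# minimum number of changed pixels that counts as motion
threshold 1500
# seconds without motion before an event is considered finished
event_gap 30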

For every saved image, we call a Python script whose arguments describe the rectangle in which the motion was detected. motion hands over these values via % conversion specifiers:

on_picture_save /home/pi/cat_recognition/on_picture_save.py %f %v %q %i %J %K %L
where
%f is the filename (with full path) of the saved image
%v is the event number
%q is the frame number
%i is the width of the motion area
%J is the height of the motion area
%K is the X coordinate of the center of the motion area
%L is the Y coordinate of the center of the motion area

Then create the directories where the images will be stored:

sudo mkdir -p /opt/motion/cropped
sudo chown -R motion:motion /opt/motion
sudo chmod a+rw -R /opt/motion

The Python script takes these arguments and writes them to a JSON file. Furthermore, it crops the area of motion out of the frame and writes it as a new image to /opt/motion/cropped :

#!/home/pi/miniconda3/envs/ml/bin/python
import os, sys
import json
import crop_images

if __name__ == "__main__":
    # Collect the arguments handed over by motion (see the %-specifiers above).
    out_dict = {}
    out_dict['filename'] = sys.argv[1]
    out_dict['event'] = int(sys.argv[2])
    out_dict['frame_nr'] = int(sys.argv[3])
    out_dict['width'] = int(sys.argv[4])
    out_dict['height'] = int(sys.argv[5])
    out_dict['center_x'] = int(sys.argv[6])
    out_dict['center_y'] = int(sys.argv[7])
    # Write the metadata next to the image, as <image name>.json.
    filename = out_dict['filename']
    filename_txt = os.path.splitext(filename)[0] + '.json'
    with open(filename_txt, 'w') as outfile:
        json.dump(out_dict, outfile)
    os.system('sync')  # make sure the files actually hit the SD card
    # Crop the motion area out of the frame (see crop_images.py below).
    crop_images.crop_image(filename_txt)
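
To check the two scripts without waiting for a real motion event, you can feed them a fake event by hand. This is just a hypothetical smoke test (test.jpg and the metadata values are made up, and it is not part of the repository); run it from the cat_recognition directory so crop_images can be imported:

#!/home/pi/miniconda3/envs/ml/bin/python
# Hypothetical smoke test: write fake event metadata for an existing
# image /opt/motion/test.jpg, then run the cropping step on it.
import json
import crop_images

meta = {'filename': '/opt/motion/test.jpg', 'event': 1, 'frame_nr': 1,
        'width': 200, 'height': 150, 'center_x': 320, 'center_y': 240}
with open('/opt/motion/test.json', 'w') as f:
    json.dump(meta, f)
print(crop_images.crop_image('/opt/motion/test.json'))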

The crop_images.py module looks like this:

#!/home/pi/miniconda3/envs/ml/bin/python
from PIL import Image
import glob
import json
import os
import sys

def crop_image(jfilename):
    # Read the metadata that on_picture_save.py wrote for this image.
    try:
        data = json.load(open(jfilename))
    except ValueError:
        print('problem decoding {0}'.format(jfilename))
        return
    ifilename = data['filename']
    img = Image.open(ifilename)
    # Convert center/width/height into the (left, top, right, bottom)
    # box that PIL's crop() expects.
    left = int(data['center_x'] - data['width']/2)
    right = int(data['center_x'] + data['width']/2)
    top = int(data['center_y'] - data['height']/2)
    bottom = int(data['center_y'] + data['height']/2)
    cropped_img = img.crop((left, top, right, bottom))
    # Save the crop to the 'cropped' subdirectory,
    # e.g. foo.jpg -> cropped/foo_cropped.jpg.
    path, filename = os.path.split(ifilename)
    path = os.path.join(path, 'cropped')
    filename_cropped = os.path.join(path, os.path.splitext(filename)[0] + '_cropped.jpg')
    print(filename_cropped)
    cropped_img.save(filename_cropped)
    return filename_cropped

if __name__ == "__main__":
    # Batch mode: crop every image that has a JSON file in the given directory.
    path = sys.argv[1]
    jfilenames = glob.glob(os.path.join(path, '*.json'))
    length = len(jfilenames)
    for ii, j in enumerate(jfilenames):
        crop_image(j)
        if ii % 500 == 0:
            print('{0}/{1}'.format(ii, length))
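
One thing to be aware of: the crop box computed from the motion center can extend past the image borders, in which case PIL pads the missing area with black. If you would rather keep only real pixels, a minimal clamping sketch could look like this (clamp_box is my own helper, not part of the repository):

def clamp_box(left, top, right, bottom, img):
    # Restrict the crop box to the actual image area.
    w, h = img.size
    return (max(0, left), max(0, top), min(w, right), min(h, bottom))

# usage inside crop_image, before cropping:
# cropped_img = img.crop(clamp_box(left, top, right, bottom, img))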

You can find all this code in my github repository under:
https://github.com/magictimelapse/garden

You can clone it from the pi home directory:

git clone https://github.com/magictimelapse/garden
ln -snf /home/pi/garden/cats/ cat_recognition

Install miniconda and all required dependencies

I use miniconda to install all required packages. You can install it with:

wget http://repo.continuum.io/miniconda/Miniconda3-latest-Linux-armv7l.sh
sudo md5sum Miniconda3-latest-Linux-armv7l.sh # (optional) check md5
sudo /bin/bash Miniconda3-latest-Linux-armv7l.sh # -> change default directory to /home/pi/miniconda3
sudo reboot

Then create the environment from the requirements.yml file with:

/home/pi/miniconda3/bin/conda env create -f requirements.yml
source /home/pi/miniconda3/bin/activate ml
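
If the activation worked, python now points to the interpreter of the ml environment; a quick, optional check:

python -c 'import sys; print(sys.executable)' # should print /home/pi/miniconda3/envs/ml/bin/python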

Later down the road we will need TensorFlow and Keras as the deep learning environment. Fortunately, installing TensorFlow on the Raspberry Pi is straightforward:

sudo apt-get install libatlas-base-dev libblas-dev
wget https://github.com/samjabrahams/tensorflow-on-raspberry-pi/releases/download/v1.1.0/tensorflow-1.1.0-cp34-cp34m-linux_armv7l.whl
source /home/pi/miniconda3/bin/activate ml
pip install tensorflow-1.1.0-cp34-cp34m-linux_armv7l.whl
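
Before going further, it is worth checking that the wheel installed correctly. A minimal sanity check using the TensorFlow 1.x graph API (it should print 1.1.0 and evaluate a trivial constant):

# check_tf.py -- optional sanity check for the TensorFlow install
import tensorflow as tf
print(tf.__version__)  # expect 1.1.0
with tf.Session() as sess:
    print(sess.run(tf.constant('tensorflow works')))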

All the rest of the required packages can be automatically installed with pip from the requirements.txt file in the cat_recognition directory:

source /home/pi/miniconda3/bin/activate ml
pip install -r requirements.txt

Testing

You might have to restart your Pi now. Once it has restarted, check the status of the motion service again:

sudo service motion status

The output must show active (running) in green.
Move your hand in front of the camera and check that images are written to /opt/motion and to /opt/motion/cropped.
[Image: example cropped frame, 97-20180424110053-05_cropped.jpg]
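
If you prefer a script over watching the directory with ls, here is a small hypothetical helper (watch_motion.py is my own name and not part of the repository):

#!/home/pi/miniconda3/envs/ml/bin/python
# Hypothetical helper: count the images motion writes while you
# move your hand in front of the camera for ten seconds.
import glob
import time

before = set(glob.glob('/opt/motion/*.jpg'))
time.sleep(10)  # wave at the camera now
new = set(glob.glob('/opt/motion/*.jpg')) - before
print('{0} new image(s):'.format(len(new)))
for f in sorted(new):
    print(f)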
