Raspberry Pi… Camera… Action!

PiCamera Module
The next step in the Blue Block Challenge is getting the simple object-detection code I’ve already developed on the Mac running on the Raspberry Pi.

However, the Pi is not a very powerful computer, so it is important to optimize video capture and processing to get the best performance out of the robot.

To that end I have decided to use the PiCamera and take advantage of its low-level C interface and high-speed bus connection to get the most efficient image acquisition path possible.

In the process of researching all this I was fortunate enough to come across a cool project by Chris Cummings (Thanks, Chris) that I’ve taken and modified to fit my needs.

The modified code is available as PiCamCVTest on GitHub. It is only meant as a quick and dirty test to see what kind of performance can be reasonably achieved on the Pi.

Setting up the Pi and Camera for the Test Code

I’m going to detail how to prep the Pi in order to build and run PiCamCVTest.  

To make things easier I’m going to start with a fresh Raspbian instance.

First get the Raspbian image from the downloads page.

Refer to the image installation page on how to write the Raspbian image to an appropriate SD card.

[I personally use the command line tools on my Mac to do the job.]

Power up the Pi with the new card.

You should eventually get the config screen:

raspi-config

If not, then type ‘sudo raspi-config’ at a command line prompt.

On the config screen do the following:

  • Set a login password.
  • Extend the SD partition to use the whole disk.
  • Enable the PiCamera hardware.

Reboot and log in as “pi” with the password you just set.

From the command line perform the following actions:

Ensure PiCamera support is enabled by checking that the folder ‘/opt/vc’ exists and looks something like this:

$ ls /opt/vc
bin include lib sbin src

Update all existing packages – this is very important:

$ sudo apt-get update
$ sudo apt-get upgrade

And finally we need these packages for PiCamCVTest to build:

$ sudo apt-get install cmake libopencv-dev

Installing and Building PiCamCVTest

It is simplest to just clone PiCamCVTest locally:

$ git clone https://github.com/solderspot/PiCamCVTest.git

And then build using cmake:

$ cd PiCamCVTest
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release ..
$ make

You should now have the executable ‘PiCamCVTest’ in the build folder.

Using PiCamCVTest

I’ve added some command line options to the code to make it more useful. You can list them by invoking the program with the argument ‘help’:

$ ./PiCamCVTest help
Usage: ./PiCamCVTest [options]
  Where options are:
    -fh              :  flip image horizontally
    -fv              :  flip image vertically
    -w <pixels>      :  capture image width - default 320
    -h <pixels>      :  capture image height - default 320
    -r               :  don't process input
    -H <min>..<max>  :  hue range (0..360) - default 0..60
    -S <min>..<max>  :  saturation range (0..255) - default 100..255
    -V <min>..<max>  :  value range (0..255) - default 100..255

  example: ./PiCamCVTest -H 10..50 -S 50..255 -V 100..200

The options should be self-explanatory.
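To give a feel for how the H/S/V ranges select pixels, here is a stand-alone Python sketch of hue/saturation/value thresholding (my own illustration, not code from PiCamCVTest) that classifies a single RGB pixel against the tool’s default ranges, using the standard library’s colorsys:

```python
import colorsys

# Default thresholds from PiCamCVTest's help text:
# hue in 0..360, saturation and value in 0..255.
HUE_RANGE = (0, 60)
SAT_RANGE = (100, 255)
VAL_RANGE = (100, 255)

def passes_threshold(r, g, b):
    """Return True if an 8-bit RGB pixel falls inside the H/S/V ranges."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue = h * 360.0   # colorsys reports hue in 0..1
    sat = s * 255.0
    val = v * 255.0
    return (HUE_RANGE[0] <= hue <= HUE_RANGE[1]
            and SAT_RANGE[0] <= sat <= SAT_RANGE[1]
            and VAL_RANGE[0] <= val <= VAL_RANGE[1])

print(passes_threshold(200, 100, 20))  # orange-ish pixel, hue ~27: True
print(passes_threshold(20, 20, 200))   # blue pixel, hue ~240: False
```

The real code applies the same test to every pixel of every frame, which is where most of the processing time goes.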

[Note that you cannot use arbitrary image dimensions. The camera will round up any size you give to the nearest one it can support. However, the rendering code is not aware of this, so the image will not display correctly.]
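The exact rounding rule is an assumption on my part, but VideoCore/MMAL buffers are commonly padded to a width that is a multiple of 32 pixels and a height that is a multiple of 16. Under that assumption, the effective capture size can be estimated like this:

```python
def align_up(value, alignment):
    """Round value up to the next multiple of alignment."""
    return (value + alignment - 1) // alignment * alignment

def effective_size(width, height):
    # Assumed MMAL alignment: width to 32 pixels, height to 16.
    return align_up(width, 32), align_up(height, 16)

print(effective_size(320, 320))  # (320, 320) - already aligned
print(effective_size(330, 330))  # (352, 336) - padded up
```

Sticking to sizes that are already aligned (like the 320x320 default) sidesteps the display problem entirely.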

If you run the code without any arguments then it will use the defaults:

$ ./PiCamCVTest
Init camera output with 320/320
Creating pool with 3 buffers of size 409600
Camera successfully created
Starting capture: 320x320
Applying color threshold: hue 0..60, sat 100..255, val 100..255
Frame rate: 10.46 fps
Frame rate: 10.15 fps
Frame rate: 10.15 fps
Frame rate: 10.15 fps

At regular intervals it will output the effective frame rate.
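The reported rate is presumably just frames counted over elapsed time. A minimal version of that bookkeeping (my own sketch, not code from PiCamCVTest) looks like this, with the clock injectable so it can be tested:

```python
import time

class FrameRateMeter:
    """Report effective frames per second over a fixed window of frames."""

    def __init__(self, report_every=30, clock=time.monotonic):
        self.report_every = report_every
        self.clock = clock          # injectable for testing
        self.count = 0
        self.start = clock()

    def tick(self):
        """Call once per captured frame; returns fps each time a window closes."""
        self.count += 1
        if self.count < self.report_every:
            return None
        elapsed = self.clock() - self.start
        fps = self.count / elapsed if elapsed > 0 else float("inf")
        self.count = 0
        self.start = self.clock()
        return fps
```

In the capture loop you would call tick() after each frame and print the result whenever it is not None.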

You can use the -r option to disable any OpenCV processing and display the video feed directly. This is so you can verify the camera is working correctly, view the image quality and see if you need to flip the image orientation using the -fh and -fv flags.

Performance and Conclusion

Running the code on my Pi, which is overclocked at 900 MHz, I get the following results:

Image Size            160×160   320×320   640×640
No image processing   40 fps    30 fps    15 fps
Image processing      20 fps    10 fps     3 fps
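Assuming both rows measure the same capture pipeline, with and without the OpenCV stage, we can estimate what the processing itself costs per frame: at 320×320 the frame time goes from 1/30 s to 1/10 s, i.e. roughly 67 ms of processing per frame. A quick check:

```python
def processing_cost_ms(fps_raw, fps_processed):
    """Per-frame cost of processing, from capture-only vs full-pipeline fps."""
    return (1.0 / fps_processed - 1.0 / fps_raw) * 1000.0

print(round(processing_cost_ms(30, 10)))  # ~67 ms per frame at 320x320
print(round(processing_cost_ms(15, 3)))   # ~267 ms per frame at 640x640
```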

Not too shabby, if not brilliant, but I think this demonstrates that performing basic robotic vision on the Pi is viable.

One issue that has revealed itself during these tests has to do with auto white balance. The images I’m getting have a strong brown cast. I believe this is because Chris’s code pulls YUV data from the camera and converts it to RGB via the video core without any color balance corrections, though I could be wrong about that.
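For context, a plain BT.601 YUV-to-RGB conversion, which is what I assume the video core is doing, applies fixed coefficients and no per-channel gain, so any color imbalance coming off the sensor passes straight through to the RGB image:

```python
def yuv_to_rgb(y, u, v):
    """Full-range BT.601 YCbCr -> RGB for 8-bit values (no white balance)."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # neutral gray stays (128, 128, 128)
```

White balance would have to be applied upstream, in the camera's own processing, or as per-channel gains on the RGB output.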

I think the next step is to implement a simple camera abstraction layer on top of MMAL and see if I can get the camera’s auto white balance working correctly. Crappy white balance will severely degrade the robot’s ability to use color-based object detection effectively.
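The abstraction I have in mind is nothing fancy; something along these lines, where all the names are mine and purely hypothetical at this point, with the MMAL plumbing hidden behind a small interface:

```python
class Camera:
    """Hypothetical camera abstraction that would sit on top of MMAL."""

    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.awb_mode = "off"

    def set_awb_mode(self, mode):
        # e.g. "off" or "auto"; would map to the MMAL AWB camera parameter
        self.awb_mode = mode

    def read_frame(self):
        """Return the next captured frame; stubbed out in this sketch."""
        raise NotImplementedError("backed by MMAL in the real implementation")
```

The point is that the robot code would only ever see this interface, so experiments with white balance settings stay contained in one place.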
