Instant OpenCV for iOS

Overview of this book

Computer vision on mobile devices is becoming more and more popular. Personal gadgets are now powerful enough to process high-resolution images, stitch panoramas, and detect and track objects. OpenCV, with its decent performance and wide range of functionality, can be an extremely useful tool in the hands of iOS developers.

Instant OpenCV for iOS is a practical guide that walks you through every important step for building a computer vision application for the iOS platform. It will help you to port your OpenCV code, profile and optimize it, and wrap it into a GUI application. Each recipe is accompanied by a sample project or an example that helps you focus on a particular aspect of the technology.

Instant OpenCV for iOS starts by creating a simple iOS application and linking OpenCV before moving on to processing images and videos in real-time. It covers the major ways to retrieve images, process them, and view or export results. Special attention is also given to performance issues, as they greatly affect the user experience.

Several computer vision projects will be considered throughout the book. These include a couple of photo filters that help you to print a postcard or add a retro effect to your images. Another one is a demonstration of the facial feature detection algorithm. In several time-critical cases, the processing speed is measured and optimized using ARM NEON and the Accelerate framework. Instant OpenCV for iOS gives you all the information you need to build a high-performance computer vision application for iOS devices.
Table of Contents (7 chapters)

Capturing a video from camera (Simple)


In this recipe, we will use the CvVideoCamera class to capture live video from the camera.

Getting ready

The source code can be found in the Recipe10_CapturingVideo folder in the code bundle that accompanies this book. For this recipe, you can't use the Simulator, as it doesn't support the camera.

How to do it...

The high-quality camera in the latest iOS devices is one of the important factors in their popularity. The ability to capture and encode H.264 high-definition video with hardware acceleration was met with great enthusiasm by users and developers.

Most of the functions related to communicating with the camera are included in the AVFoundation framework. This framework contains many simple and easy-to-use classes for taking photos and videos. But setting up a camera, retrieving frames, displaying them, and handling rotations takes a lot of code. So, in this recipe, we will use the CvVideoCamera class from OpenCV, which encapsulates the functionality of the AVFoundation framework.
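To give a sense of the boilerplate that CvVideoCamera hides, the following is a rough sketch of a manual AVFoundation capture setup. It is an illustration only, not code from the recipe's sample project, and it omits error handling, rotation handling, and the frame-conversion work you would still have to write yourself:

```objectivec
// Hypothetical sketch of a bare AVFoundation session (not from the
// recipe's sample project); error handling and rotations omitted.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset640x480;

NSError *error = nil;
AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
[session addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// The delegate receives raw CMSampleBufferRef frames, which you would
// still need to convert to cv::Mat and rotate on your own.
[output setSampleBufferDelegate:self
                          queue:dispatch_queue_create("camera", NULL)];
[session addOutput:output];
[session startRunning];
```

CvVideoCamera wraps all of this, plus the frame conversion and preview display, behind a few properties and the start/stop methods used below.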

The following are the steps required to capture video on iOS:

  1. The ViewController interface should implement the CvVideoCameraDelegate protocol, and should have a member of the CvVideoCamera* type.

  2. You will also need a couple of buttons: one to start the capturing process (streaming the preview video to the display), and the other to stop it.

  3. Then we have to initialize everything in the viewDidLoad method as usual.

  4. Finally, we'll implement the camera control with GUI buttons.

Let's implement the described steps:

  1. Similar to the Taking photos from camera (Intermediate) recipe, in order to work with the camera, we need to implement a specific protocol (CvVideoCameraDelegate) in our ViewController class. We should also include the special header file with the interfaces of the OpenCV camera classes.

    #import <opencv2/highgui/ios.h>
    
    @interface ViewController : UIViewController<CvVideoCameraDelegate>
    {
        CvVideoCamera* videoCamera;
        BOOL isCapturing;
    }
    
    @property (nonatomic, strong) CvVideoCamera* videoCamera;
    @property (nonatomic, strong) IBOutlet UIImageView* imageView;
    @property (nonatomic, strong) IBOutlet UIToolbar* toolbar;
    @property (nonatomic, weak) IBOutlet
        UIBarButtonItem* startCaptureButton;
    @property (nonatomic, weak) IBOutlet
        UIBarButtonItem* stopCaptureButton;
    
    -(IBAction)startCaptureButtonPressed:(id)sender;
    -(IBAction)stopCaptureButtonPressed:(id)sender;
    
    @end
  2. We will need two buttons, so we have to add two corresponding properties and two methods with IBAction macros. As before, you should connect these properties and actions with the corresponding GUI elements using the Assistant editor and storyboard files.

  3. In order to work with the camera, you should add additional frameworks to the project: AVFoundation, Accelerate, AssetsLibrary, CoreMedia, CoreVideo, CoreImage, and QuartzCore. The simplest way to do this is to navigate to Project | Build Phases | Link Binary With Libraries in the project properties.

  4. In the viewDidLoad method, we should initialize the camera parameters:

    - (void)viewDidLoad
    {
        [super viewDidLoad];
    
        self.videoCamera = [[CvVideoCamera alloc]
                            initWithParentView:imageView];
        self.videoCamera.delegate = self;
        self.videoCamera.defaultAVCaptureDevicePosition =
                                    AVCaptureDevicePositionFront;
        self.videoCamera.defaultAVCaptureSessionPreset =
                                    AVCaptureSessionPreset640x480;
        self.videoCamera.defaultAVCaptureVideoOrientation =
                                    AVCaptureVideoOrientationPortrait;
        self.videoCamera.defaultFPS = 30;
        
        isCapturing = NO;
    }
  5. We'll use the first button with the Start capture caption to begin capturing from camera, and the other one with the Stop capture caption to stop:

    -(IBAction)startCaptureButtonPressed:(id)sender
    {
        [videoCamera start];
        isCapturing = YES;
    }
    
    -(IBAction)stopCaptureButtonPressed:(id)sender
    {
        [videoCamera stop];
        isCapturing = NO;
    }
  6. To monitor the status of the capturing process, we'll use the isCapturing variable, which is set to YES while capturing is active and NO otherwise.

  7. According to the CvVideoCameraDelegate protocol, our ViewController class needs to implement the processImage method (that is, handle the processImage message).

    - (void)processImage:(cv::Mat&)image
    {
        // Do some OpenCV processing with the image
    }
  8. Finally, you can add some code to this method for processing video on the fly; we will do it in another recipe.
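As a minimal taste of such on-the-fly processing, here is a sketch of a grayscale effect inside processImage. It assumes the camera delivers 4-channel BGRA frames (the CvVideoCamera default); the preview must get back an image of the same size and channel count, hence the round-trip conversion:

```objectivec
// Sketch only (assumption: frames arrive as 4-channel BGRA).
// Convert to grayscale and back so the preview still receives
// a 4-channel image of the same size.
- (void)processImage:(cv::Mat&)image
{
    cv::Mat gray;
    cv::cvtColor(image, gray, CV_BGRA2GRAY);
    cv::cvtColor(gray, image, CV_GRAY2BGRA);
}
```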

How it works...

As we mentioned earlier, the iOS part of the OpenCV library has two classes for working with the camera: CvPhotoCamera and CvVideoCamera. The distinction between the two is largely conventional. The first one was designed to capture only static images, and you can process those images only after capturing them (offline mode). The other class provides more opportunities: it can capture video, process it on the fly, and save the processed stream as an H.264 video file. The two classes have quite similar interfaces and inherit from the common CvAbstractCamera ancestor.

The CvVideoCamera class is easy to use. You can leave the default values for resolution, frames-per-second (FPS), and so on, or customize them when needed. The parameters are the same as the ones in the CvPhotoCamera class; however, there is one new parameter called defaultFPS. Usually, this value is chosen between 20 and 30, with 30 being the standard for video.

Previously, we recommended using AVCaptureSessionPresetPhoto as the resolution parameter of the CvPhotoCamera class. For video capturing, it is better to choose a smaller resolution. To do so, you can use one of the fixed resolutions (for example, AVCaptureSessionPreset640x480, AVCaptureSessionPreset1280x720, and so on) or one of the relative ones (AVCaptureSessionPresetHigh, AVCaptureSessionPresetMedium, and AVCaptureSessionPresetLow). The resulting resolution in the latter case depends on the particular device and camera. Some of the values are listed in the following table:

Preset                          iPhone 3G    iPhone 3GS   iPhone 4 back   iPhone 4 front
AVCaptureSessionPresetHigh      400 x 304    640 x 480    1280 x 720      640 x 480
AVCaptureSessionPresetMedium    400 x 304    480 x 360    480 x 360       480 x 360
AVCaptureSessionPresetLow       400 x 304    192 x 144    192 x 144       192 x 144

Tip

Using the lowest possible resolution and reasonable frame rate can save a lot of power and make apps more responsive. So, set up your camera preview resolution and FPS to the lowest reasonable values.

To work with the camera on an iOS device using the OpenCV class, you should first initialize the CvVideoCamera object and set its parameters; you can do this in the viewDidLoad method.

In order to start the capturing process, we should call the start method of the camera object. In our sample, we'll do it in the buttons' actions (callback functions). After pressing the button, the user will see the camera preview on the screen. To stop capturing, you should call the stop method. You should also implement the processImage method, which allows you to process camera images on the fly; this method is called for each frame. Its input parameter is already converted to cv::Mat, which simplifies calling the OpenCV functions.

It is also recommended to stop the camera when the application is closing. Add the following code to guarantee that the camera stops in case the user doesn't click on the Stop capture button:

- (void)viewDidDisappear:(BOOL)animated
{
    [super viewDidDisappear:animated];
    if (isCapturing) {
        [videoCamera stop];
    }
}

There's more...

CvVideoCamera simply wraps AVFoundation functions, so if you need more control over the camera, you should use this framework directly. The other way is to add the OpenCV camera classes to your project directly. For that purpose, you should copy cap_ios_abstract_camera.mm, cap_ios_photo_camera.mm, cap_ios_video_camera.mm, and cap_ios.h from the highgui module and modify the included files. You will need to rename the classes to avoid conflicts with the original OpenCV classes.

Real-time video processing on mobile devices is often a computationally intensive task, so it is recommended to use dedicated frameworks, such as Accelerate and CoreImage. Such frameworks are highly optimized and accelerated with special hardware, so you can expect decent processing time and significant power savings.
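As a hedged illustration of the kind of per-pixel work Accelerate speeds up, the following C sketch scales a buffer of float intensities with a single vDSP call instead of a scalar loop. The function name darken is ours, not part of any framework, and a real pipeline would first convert the camera's 8-bit data to float:

```c
#include <Accelerate/Accelerate.h>

// Sketch (Apple platforms only): scale a buffer of float pixel
// intensities by a constant using vDSP's vectorized multiply.
// "darken" is our illustrative name, not a framework function.
static void darken(float *pixels, vDSP_Length count)
{
    const float scale = 0.5f; // halve every intensity value
    vDSP_vsmul(pixels, 1, &scale, pixels, 1, count);
}
```

On devices with NEON hardware, such vDSP calls typically process several pixels per instruction, which is where the processing-time and power savings come from.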