Instant OpenCV for iOS

Overview of this book

Computer vision on mobile devices is becoming more and more popular. Personal gadgets are now powerful enough to process high-resolution images, stitch panoramas, and detect and track objects. OpenCV, with its decent performance and wide range of functionality, can be an extremely useful tool in the hands of iOS developers.

Instant OpenCV for iOS is a practical guide that walks you through every important step of building a computer vision application for the iOS platform. It will help you port your OpenCV code, profile and optimize it, and wrap it into a GUI application. Each recipe is accompanied by a sample project or an example that helps you focus on a particular aspect of the technology.

Instant OpenCV for iOS starts by creating a simple iOS application and linking OpenCV, before moving on to processing images and videos in real time. It covers the major ways to retrieve images, process them, and view or export the results. Special attention is given to performance issues, as they greatly affect the user experience.

Several computer vision projects are considered throughout the book. These include a couple of photo filters that help you print a postcard or add a retro effect to your images, as well as a demonstration of a facial feature detection algorithm. In several time-critical cases, the processing speed is measured and optimized using ARM NEON and the Accelerate framework. Instant OpenCV for iOS gives you all the information you need to build a high-performance computer vision application for iOS devices.
Table of Contents (7 chapters)

Taking photos from camera (Intermediate)


In this recipe, we will learn how to capture images from the camera. We'll use the CvPhotoCamera class, which is a part of OpenCV, and apply the retro effect from the previous recipe.

Getting ready

For this recipe, you will need a real iOS device, because we're going to take photos. The source code can be found in the Recipe08_TakingPhotosFromCamera folder in the code bundle that accompanies this book.

How to do it...

The following are the steps required to apply our filter to a photo taken with the camera:

  1. The ViewController interface should conform to the CvPhotoCameraDelegate protocol, and should have a member of the CvPhotoCamera* type.

  2. You will also need a couple of buttons: one to start capturing (streaming the preview video to the display), and another to take a photo.

  3. Then we have to initialize everything in the viewDidLoad method as usual.

  4. The last step will be the processing of the captured frame in the applyEffect method.

Let's implement the described steps:

  1. The iOS part of the OpenCV library has two classes for working with a camera: CvPhotoCamera and CvVideoCamera. The first one is designed to capture static images, and we'll get familiar with it in this recipe. To work with a camera, our controller class should support the corresponding protocol; in our case, the ViewController class accesses the captured image through delegation by conforming to CvPhotoCameraDelegate:

    @interface ViewController : UIViewController<CvPhotoCameraDelegate>
    {
        CvPhotoCamera* photoCamera;
        UIImageView* resultView;
        RetroFilter::Parameters params;
    }
    
    @property (nonatomic, strong) CvPhotoCamera* photoCamera;
    @property (nonatomic, strong) IBOutlet UIImageView* imageView;
    @property (nonatomic, strong) IBOutlet UIToolbar* toolbar;
    @property (nonatomic, weak) IBOutlet
        UIBarButtonItem* takePhotoButton;
    @property (nonatomic, weak) IBOutlet
        UIBarButtonItem* startCaptureButton;
    
    -(IBAction)takePhotoButtonPressed:(id)sender;
    -(IBAction)startCaptureButtonPressed:(id)sender;
    
    - (UIImage*)applyEffect:(UIImage*)image;
    
    @end 
  2. As you can see, we need to add a CvPhotoCamera* property in order to work with a camera. We also add two buttons to the UI, so we declare two corresponding properties and two methods with the IBAction macro. As before, you should connect these properties and actions to the corresponding GUI elements using the Assistant editor and the storyboard file.

  3. In order to work with a camera, you should add the following frameworks to the project: AVFoundation, Accelerate, AssetsLibrary, CoreMedia, CoreVideo, CoreImage, and QuartzCore. The simplest way to do this is to navigate to Project | Build Phases | Link Binary With Libraries in the project properties.

  4. In the viewDidLoad method, we should initialize the camera and set its parameters:

    photoCamera = [[CvPhotoCamera alloc]
                            initWithParentView:imageView];
    photoCamera.delegate = self;
    photoCamera.defaultAVCaptureDevicePosition =
                            AVCaptureDevicePositionFront;
    photoCamera.defaultAVCaptureSessionPreset =
                            AVCaptureSessionPresetPhoto;
    photoCamera.defaultAVCaptureVideoOrientation =
                            AVCaptureVideoOrientationPortrait;
  5. We'll use two buttons to control the camera. The first one will have a Start capture caption and we'll use it to begin capturing:

    -(IBAction)startCaptureButtonPressed:(id)sender
    {
        [photoCamera start];
        
        [self.view addSubview:imageView];
        [takePhotoButton setEnabled:YES];
        [startCaptureButton setEnabled:NO];
    }
  6. In order to conform to the CvPhotoCameraDelegate protocol, we should implement two methods inside the ViewController class:

    - (void)photoCamera:(CvPhotoCamera*)camera
                        capturedImage:(UIImage *)image
    {
        [camera stop];
        resultView = [[UIImageView alloc]
                      initWithFrame:imageView.bounds];
        
        UIImage* result = [self applyEffect:image];
       
        [resultView setImage:result];
        [self.view addSubview:resultView];
        
        [takePhotoButton setEnabled:NO];
        [startCaptureButton setEnabled:YES];
    }
    
    - (void)photoCameraCancel:(CvPhotoCamera*)camera
    {
    } 
  7. We retrieve the picture in the Take photo button's action. In this callback, we call the camera's method for taking pictures:

    -(IBAction)takePhotoButtonPressed:(id)sender
    {
        [photoCamera takePicture];
    }
  8. Finally, we should implement the applyEffect function that wraps the call to the RetroFilter class on the Objective-C side, as discussed in the previous recipe.
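For reference, a minimal sketch of such a wrapper is shown below. It assumes, based on the previous recipe, that RetroFilter is constructed from a Parameters object and exposes an applyToPhoto method taking input and output cv::Mat references; the exact interface is defined in the previous recipe's code, so treat this as an illustration only. The UIImageToMat and MatToUIImage conversion functions are provided by OpenCV's iOS support header.

```objectivec
// Sketch only: RetroFilter's interface (constructor taking Parameters,
// applyToPhoto(in, out)) is assumed from the previous recipe.
- (UIImage*)applyEffect:(UIImage*)image
{
    cv::Mat frame, retroFrame;

    // Convert UIImage to cv::Mat using OpenCV's iOS helper
    UIImageToMat(image, frame);

    // Apply the retro effect with the parameters stored in the controller
    RetroFilter retroFilter(params);
    retroFilter.applyToPhoto(frame, retroFrame);

    // Convert the processed cv::Mat back to UIImage for display
    return MatToUIImage(retroFrame);
}
```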

How it works...

In order to work with a camera on an iOS device using OpenCV classes, you need to initialize the CvPhotoCamera object first and set its parameters. This is done in the viewDidLoad method, which is called once when the View is loaded onscreen. In the initialization code, we should specify which GUI component will be used to preview the camera stream. In our case, we'll use UIImageView, as we did before.

Our main UIImageView component will be used to show the video preview from the camera and help users take a good photo. Because our app also needs to display the final result on the screen, we create another UIImageView to display the processed image. We can create this second component directly in code:

resultView = [[UIImageView alloc]
                  initWithFrame:imageView.bounds];    
UIImage* result = [self applyEffect:image];   
[resultView setImage:result];
[self.view addSubview:resultView]; 

In this code, we create a UIImageView component with the same size as the manually added imageView component. After that, we use the addSubview method of the main View to add the newly created component to our GUI. If we want to see the camera preview again, we should use the same method for the imageView property:

[self.view addSubview:imageView];

There are three important camera parameters: defaultAVCaptureDevicePosition, defaultAVCaptureSessionPreset, and defaultAVCaptureVideoOrientation. The first one chooses between the front and back cameras of the device. The second one sets the image resolution. The third one allows you to specify the device orientation during the capturing process.
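For example, to capture VGA frames from the back camera in landscape orientation, the same properties could be set as follows (a sketch; assign these before calling start):

```objectivec
// Back camera, 640x480 frames, landscape-right orientation
photoCamera.defaultAVCaptureDevicePosition =
                        AVCaptureDevicePositionBack;
photoCamera.defaultAVCaptureSessionPreset =
                        AVCaptureSessionPreset640x480;
photoCamera.defaultAVCaptureVideoOrientation =
                        AVCaptureVideoOrientationLandscapeRight;
```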

There are many possible values for the resolution; some of them are as follows:

  • AVCaptureSessionPresetHigh

  • AVCaptureSessionPresetMedium

  • AVCaptureSessionPresetLow

  • AVCaptureSessionPreset352x288

  • AVCaptureSessionPreset640x480

For capturing static, high-resolution images, we recommend the AVCaptureSessionPresetPhoto value. The resulting resolution depends on your device, but it will be the largest one available.

In order to start the capture process, we should call the start method of the camera object. In our sample, we do it in the button's action. After tapping the button, the user will see the camera image on the screen and will be able to tap the Take photo button, which calls the takePicture method.

The CvPhotoCameraDelegate protocol contains only one important method, photoCamera:capturedImage:. It is called when takePicture is invoked and provides the captured frame as its argument.

If you want to stop the camera capturing process, you should call the stop method.

There's more...

If you want to start capturing as soon as the application launches, without waiting for a button tap, you have to call the start method inside viewDidAppear:

- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    [photoCamera start];
}
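Symmetrically, it is good practice to stop the capture when the view goes away, so that the camera is not left running in the background. A minimal sketch:

```objectivec
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    // Release the camera when the view is no longer visible
    [photoCamera stop];
}
```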