In this recipe, we will learn how to capture images from the camera. We'll use the CvPhotoCamera
class, which is a part of OpenCV, and apply our retro effect from the previous recipe.
For this recipe, you will need a real iOS device, because we're going to take photos. The source code can be found in the Recipe08_TakingPhotosFromCamera
folder in the code bundle that accompanies this book.
The following are the steps required to apply our filter to a photo taken with the camera:
1. The ViewController interface should implement the CvPhotoCameraDelegate protocol and should have a member of the CvPhotoCamera* type. You will also need a couple of buttons: one to start capturing (streaming the preview video to the display) and another for taking a photo.
2. Then we have to initialize everything in the viewDidLoad method, as usual.
3. The last step will be processing the captured frame in the applyEffect method.
Let's implement the described steps:
The iOS part of the OpenCV library has two classes for working with a camera: CvPhotoCamera and CvVideoCamera. The first one is designed to capture static images, and we'll get familiar with it in this recipe. We should add support for a certain protocol to our controller class for working with a camera. In our case, we use the delegate of CvPhotoCamera. The ViewController class accesses the image through the CvPhotoCameraDelegate delegation:

```
@interface ViewController : UIViewController<CvPhotoCameraDelegate>
{
    CvPhotoCamera* photoCamera;
    UIImageView* resultView;
    RetroFilter::Parameters params;
}

@property (nonatomic, strong) CvPhotoCamera* photoCamera;
@property (nonatomic, strong) IBOutlet UIImageView* imageView;
@property (nonatomic, strong) IBOutlet UIToolbar* toolbar;
@property (nonatomic, weak) IBOutlet UIBarButtonItem* takePhotoButton;
@property (nonatomic, weak) IBOutlet UIBarButtonItem* startCaptureButton;

-(IBAction)takePhotoButtonPressed:(id)sender;
-(IBAction)startCaptureButtonPressed:(id)sender;

- (UIImage*)applyEffect:(UIImage*)image;

@end
```
As you can see, we need to add a CvPhotoCamera* property in order to work with a camera. We also add two buttons to the UI, so we add two corresponding properties and two methods with the IBAction macro. As before, you should connect these properties and actions to the corresponding GUI elements using the Assistant editor and storyboard files.

In order to work with a camera, you should add some additional frameworks to the project: AVFoundation, Accelerate, AssetsLibrary, CoreMedia, CoreVideo, CoreImage, and QuartzCore. The simplest way to do this is through the project properties, by navigating to Project | Build Phases | Link Binary With Libraries.
In the viewDidLoad method, we should initialize the camera parameters:

```
photoCamera = [[CvPhotoCamera alloc] initWithParentView:imageView];
photoCamera.delegate = self;
photoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
photoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetPhoto;
photoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
```
We'll use two buttons to control the camera. The first one will have the Start capture caption, and we'll use it to begin capturing:

```
-(IBAction)startCaptureButtonPressed:(id)sender
{
    [photoCamera start];
    [self.view addSubview:imageView];
    [takePhotoButton setEnabled:YES];
    [startCaptureButton setEnabled:NO];
}
```
In order to comply with the CvPhotoCameraDelegate protocol, we should implement two methods inside the ViewController class:

```
- (void)photoCamera:(CvPhotoCamera*)camera capturedImage:(UIImage*)image
{
    [camera stop];

    resultView = [[UIImageView alloc] initWithFrame:imageView.bounds];

    UIImage* result = [self applyEffect:image];
    [resultView setImage:result];
    [self.view addSubview:resultView];

    [takePhotoButton setEnabled:NO];
    [startCaptureButton setEnabled:YES];
}

- (void)photoCameraCancel:(CvPhotoCamera*)camera
{
}
```
Next, we retrieve the picture in the Take photo button's action. In this callback, we call the camera's method for taking pictures:

```
-(IBAction)takePhotoButtonPressed:(id)sender
{
    [photoCamera takePicture];
}
```
Finally, we should implement the applyEffect function that wraps the call to the RetroFilter class on the Objective-C side, as discussed in the previous recipe.
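The RetroFilter implementation itself belongs to the previous recipe, but the kind of per-pixel work such a filter performs can be sketched in plain C++. Everything below (the applyRetroEffect name, the RGB struct, and the sepia coefficients) is an illustrative assumption for this sketch, not the book's actual RetroFilter code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative stand-in for a retro-style photo filter: desaturate
// each pixel to its luminance, then apply a warm, sepia-like tint.
// The coefficients below are assumptions chosen for demonstration.
struct RGB { uint8_t r, g, b; };

static uint8_t clamp255(int v) {
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

void applyRetroEffect(std::vector<RGB>& pixels) {
    for (RGB& p : pixels) {
        // Luminance using integer Rec. 601 weights (0.299, 0.587, 0.114).
        int gray = (299 * p.r + 587 * p.g + 114 * p.b) / 1000;
        p.r = clamp255(gray * 112 / 100);  // boost red slightly
        p.g = clamp255(gray * 89 / 100);   // mute green
        p.b = clamp255(gray * 66 / 100);   // suppress blue for warmth
    }
}
```

In applyEffect itself, you would convert the incoming UIImage to a cv::Mat (for example, with the UIImageToMat helper from OpenCV's iOS headers), run the filter, and convert the result back for display.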
In order to work with a camera on an iOS device using OpenCV classes, you need to initialize the CvPhotoCamera
object first and set its parameters. This is done in the viewDidLoad
method that is called once when the View is loaded onscreen. In the initialization code, we should specify what GUI component will be used to preview the camera capture. In our case, we'll use UIImageView
as we did before.
Our main UIImageView
component will be used to show the video preview from the camera and help users to take a good photo. Because our app also needs to display the final result on the screen, we create another UIImageView
to display the processed image. To do so, we can create the second component directly in code:
```
resultView = [[UIImageView alloc] initWithFrame:imageView.bounds];
UIImage* result = [self applyEffect:image];
[resultView setImage:result];
[self.view addSubview:resultView];
```
In this code, we create a UIImageView component with the same size as the manually added imageView property. After that, we use the addSubview method of the main View to add the newly created component to our GUI. If we want to see the camera preview again, we should use the same method for the imageView property:
```
[self.view addSubview:imageView];
```
There are three important parameters for the camera: defaultAVCaptureDevicePosition
, defaultAVCaptureSessionPreset
, and defaultAVCaptureVideoOrientation
. The first one is designed to choose between front and back cameras of the device. The second one is used to set the image resolution. The third parameter allows you to specify the device orientation during the capturing process.
There are many possible values for the resolution; some of them are as follows:
AVCaptureSessionPresetHigh
AVCaptureSessionPresetMedium
AVCaptureSessionPresetLow
AVCaptureSessionPreset352x288
AVCaptureSessionPreset640x480
For capturing static, high-resolution images, we recommend using the value of AVCaptureSessionPresetPhoto
. The resulting resolution depends on your device, but it will be the largest possible resolution.
In order to start the capture process, we should call the start
method of the camera object. In our sample, we'll do it in the button's action. After clicking on the button, the user will see the camera image on the screen and will be able to click on the Take photo button that calls the takePicture
method.
The CvPhotoCameraDelegate camera protocol contains only one important method: capturedImage. It is executed when someone calls the takePicture method and allows you to get the current frame as the method argument.
If you want to stop the camera capturing process, you should call the stop
method.