In this recipe, we will use the CvVideoCamera class to capture live video from the camera. The source code can be found in the Recipe10_CapturingVideo folder in the code bundle that accompanies this book. For this recipe, you can't use the Simulator, as it doesn't support the camera.
The high-quality camera in the latest iOS devices is one of the important factors behind their popularity. The ability to capture and encode H.264 high-definition video with hardware acceleration was greeted with great enthusiasm by users and developers alike.
Most of the functions related to communicating with the camera are included in the AVFoundation framework. This framework contains many simple and easy-to-use classes for taking photos and videos. But setting up a camera, retrieving frames, displaying them, and handling rotations takes a lot of code. So, in this recipe, we will use the CvVideoCamera class from OpenCV, which encapsulates the functionality of the AVFoundation framework.
The following are the steps required to capture video on iOS:
1. The ViewController interface should implement the CvVideoCameraDelegate protocol and should have a member of the CvVideoCamera* type. You will also need a couple of buttons: one to start the capturing process (stream preview video to the display), and a second to stop it.
2. Then we have to initialize everything in the viewDidLoad method, as usual.
3. Finally, we'll implement the camera control with GUI buttons.
Let's implement the described steps:
Similar to the Taking photos from camera (Intermediate) recipe, in order to work with the camera, we need to implement a specific protocol (CvVideoCameraDelegate) in our ViewController class. We should also include the special header file with the interfaces of the OpenCV camera classes:

```objectivec
#import <opencv2/highgui/ios.h>

@interface ViewController : UIViewController<CvVideoCameraDelegate>
{
    CvVideoCamera* videoCamera;
    BOOL isCapturing;
}

@property (nonatomic, strong) CvVideoCamera* videoCamera;
@property (nonatomic, strong) IBOutlet UIImageView* imageView;
@property (nonatomic, strong) IBOutlet UIToolbar* toolbar;
@property (nonatomic, weak) IBOutlet UIBarButtonItem* startCaptureButton;
@property (nonatomic, weak) IBOutlet UIBarButtonItem* stopCaptureButton;

-(IBAction)startCaptureButtonPressed:(id)sender;
-(IBAction)stopCaptureButtonPressed:(id)sender;

@end
```
We will need two buttons, so we have to add two corresponding properties and two methods with the IBAction macro. As before, you should connect these properties and actions with the corresponding GUI elements using the Assistant editor and the storyboard files.

In order to work with the camera, you should add additional frameworks to the project: AVFoundation, Accelerate, AssetsLibrary, CoreMedia, CoreVideo, CoreImage, and QuartzCore. The simplest way to do this is through the project properties, by navigating to Project | Build Phases | Link Binary With Libraries.
In the viewDidLoad method, we should initialize the camera parameters:

```objectivec
- (void)viewDidLoad
{
    [super viewDidLoad];

    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
    self.videoCamera.delegate = self;
    self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
    self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.videoCamera.defaultFPS = 30;

    isCapturing = NO;
}
```
We'll use the first button with the Start capture caption to begin capturing from camera, and the other one with the Stop capture caption to stop:
```objectivec
-(IBAction)startCaptureButtonPressed:(id)sender
{
    [videoCamera start];
    isCapturing = YES;
}

-(IBAction)stopCaptureButtonPressed:(id)sender
{
    [videoCamera stop];
    isCapturing = NO;
}
```
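A small usability refinement (our addition, not part of the recipe's code) is to keep the two buttons in sync with the current state, using the startCaptureButton and stopCaptureButton outlets declared earlier:

```objectivec
-(IBAction)startCaptureButtonPressed:(id)sender
{
    [videoCamera start];
    isCapturing = YES;

    // Keep the toolbar consistent with the capturing state:
    // only the action that currently makes sense is enabled.
    self.startCaptureButton.enabled = NO;
    self.stopCaptureButton.enabled = YES;
}
```

The stop action would mirror this, re-enabling the start button.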
To monitor the status of the capturing process, we'll use the isCapturing variable, which is set to YES when capturing is active and NO otherwise.

According to the CvVideoCameraDelegate protocol, our ViewController class needs to implement the processImage method (that is, handle the processImage message):

```objectivec
- (void)processImage:(cv::Mat&)image
{
    // Do some OpenCV processing with the image
}
```
Finally, you can add some code to this method for processing video on the fly; we will do it in another recipe.
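As a taste of what such per-frame processing might look like, here is a minimal sketch (our illustration, not the recipe's code) that converts each frame to grayscale and back, so the preview shows a monochrome image; cv::cvtColor and the CV_BGRA2GRAY/CV_GRAY2BGRA conversion codes are standard OpenCV:

```objectivec
- (void)processImage:(cv::Mat&)image
{
    // The frame arrives as a 4-channel BGRA cv::Mat; whatever we
    // write back into 'image' is what gets displayed on screen.
    cv::Mat gray;
    cv::cvtColor(image, gray, CV_BGRA2GRAY);  // drop the color information
    cv::cvtColor(gray, image, CV_GRAY2BGRA);  // back to BGRA for display
}
```

This method runs for every frame, so any processing here must fit within the per-frame time budget (about 33 ms at 30 FPS).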
As we mentioned earlier, the iOS part of the OpenCV library has two classes for working with the camera: CvPhotoCamera and CvVideoCamera. The difference between the two classes is mostly one of purpose. The first one was designed to capture only static images, which you can process only after capturing them (offline mode). The other class provides more opportunities: it can capture video, process it on the fly, and save the processed stream as an H.264 video file. The two classes have quite similar interfaces and inherit from the common CvAbstractCamera ancestor.
The CvVideoCamera class is easy to use. You can keep the default values for resolution, frames per second (FPS), and so on, or customize them when needed. The parameters are the same as those of the CvPhotoCamera class, except for one new parameter called defaultFPS. Usually, this value is chosen between 20 and 30, with 30 being the standard for video.
Previously, we recommended using AVCaptureSessionPresetPhoto as the resolution parameter of the CvPhotoCamera class. For video capturing, it is better to choose a smaller resolution. To do so, you can use one of the fixed resolutions (for example, AVCaptureSessionPreset640x480, AVCaptureSessionPreset1280x720, and so on) or one of the relative ones (AVCaptureSessionPresetHigh, AVCaptureSessionPresetMedium, and AVCaptureSessionPresetLow). In the latter case, the resulting resolution will depend on the particular device and camera. Some of the values are listed in the following table:
| Preset | iPhone 3G | iPhone 3GS | iPhone 4 back | iPhone 4 front |
|---|---|---|---|---|
| AVCaptureSessionPresetHigh | 400 x 304 | 640 x 480 | 1280 x 720 | 640 x 480 |
| AVCaptureSessionPresetMedium | 400 x 304 | 480 x 360 | 480 x 360 | 480 x 360 |
| AVCaptureSessionPresetLow | 400 x 304 | 192 x 144 | 192 x 144 | 192 x 144 |
Tip
Using the lowest possible resolution and a reasonable frame rate can save a lot of power and make apps more responsive. So, set your camera preview resolution and FPS to the lowest reasonable values.
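For instance, a power-frugal configuration of the camera from this recipe might look as follows (the particular preset and FPS values are illustrative; pick the lowest ones your processing can tolerate):

```objectivec
// Sketch: trade preview quality for battery life and responsiveness.
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetLow;
self.videoCamera.defaultFPS = 15;  // half the usual 30 FPS
```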
To work with the camera on an iOS device using the OpenCV class, you should first initialize the CvVideoCamera object and set its parameters; you can do this in the viewDidLoad method.
In order to start the capturing process, we should call the start method of the camera object. In our sample, we do this in the buttons' actions (callback functions). After pressing the button, the user will see the camera preview on the screen. To stop capturing, you should call the stop method. You should also implement the processImage method, which allows you to process camera images on the fly; this method is called for each frame. Its input parameter is already converted to cv::Mat, which simplifies calling OpenCV functions.
It is also recommended to stop the camera when the application is closing. Add the following code to guarantee that the camera stops in case the user doesn't click on the Stop capture button:
```objectivec
- (void)viewDidDisappear:(BOOL)animated
{
    [super viewDidDisappear:animated];

    if (isCapturing)
    {
        [videoCamera stop];
    }
}
```
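Note that viewDidDisappear: is not called when the user sends the application to the background. If you want the camera to stop in that case too, one option (an illustrative addition, not part of the recipe's code; appDidEnterBackground is a hypothetical helper) is to observe the standard UIApplicationDidEnterBackgroundNotification:

```objectivec
// Registration goes in viewDidLoad; the observer should be removed
// in dealloc via [[NSNotificationCenter defaultCenter] removeObserver:self].
[[NSNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(appDidEnterBackground)
           name:UIApplicationDidEnterBackgroundNotification
         object:nil];

- (void)appDidEnterBackground
{
    // Stop capturing when the app leaves the foreground.
    if (isCapturing)
    {
        [videoCamera stop];
        isCapturing = NO;
    }
}
```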
CvVideoCamera simply wraps AVFoundation functions, so if you need more control over the camera, you should use that framework directly. Another way is to add the OpenCV camera classes to your project directly. For that purpose, you should copy cap_ios_abstract_camera.mm, cap_ios_photo_camera.mm, cap_ios_video_camera.mm, and cap_ios.h from the highgui module and modify the included files. You will need to rename the classes to avoid conflicts with the original OpenCV classes.
Real-time video processing on mobile devices is often a computationally intensive task, so it is recommended to use dedicated frameworks, such as Accelerate and CoreImage. Such frameworks are highly optimized and accelerated with special hardware, so you can expect decent processing time and significant power savings.
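As a taste of CoreImage, the following sketch applies a hardware-accelerated Gaussian blur to a UIImage. CIContext, CIFilter, and the CIGaussianBlur filter name are standard CoreImage API; the surrounding variable names are ours:

```objectivec
// Sketch: GPU-accelerated blur of a UIImage via CoreImage.
CIContext* context = [CIContext contextWithOptions:nil];
CIImage* input = [CIImage imageWithCGImage:sourceImage.CGImage];

CIFilter* blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:input forKey:kCIInputImageKey];
[blur setValue:@2.0 forKey:kCIInputRadiusKey];

CIImage* output = [blur valueForKey:kCIOutputImageKey];
CGImageRef cgResult = [context createCGImage:output fromRect:[input extent]];
UIImage* blurred = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);
```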