Instant OpenCV for iOS

Overview of this book

Computer vision on mobile devices is becoming more and more popular. Personal gadgets are now powerful enough to process high-resolution images, stitch panoramas, and detect and track objects. OpenCV, with its decent performance and wide range of functionality, can be an extremely useful tool in the hands of iOS developers.

Instant OpenCV for iOS is a practical guide that walks you through every important step of building a computer vision application for the iOS platform. It will help you to port your OpenCV code, profile and optimize it, and wrap it into a GUI application. Each recipe is accompanied by a sample project or an example that helps you focus on a particular aspect of the technology.

Instant OpenCV for iOS starts by creating a simple iOS application and linking OpenCV, before moving on to processing images and videos in real time. It covers the major ways to retrieve images, process them, and view or export the results. Special attention is also given to performance issues, as they greatly affect the user experience.

Several computer vision projects are considered throughout the book. These include a couple of photo filters that help you to print a postcard or add a retro effect to your images, as well as a demonstration of a facial feature detection algorithm. In several time-critical cases, the processing speed is measured and optimized using ARM NEON and the Accelerate framework. Instant OpenCV for iOS gives you all the information you need to build a high-performance computer vision application for iOS devices.

Applying effects to live video (Intermediate)


In this recipe, we'll consider an example showing how to take a live video feed and apply an image filter to it in real time. As we discussed previously, you only need to implement the processImage method. We'll also draw the current FPS value directly on the camera images, which can help you in the optimization process.
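
The delegate protocol itself is small. The following minimal sketch (MyViewController is a hypothetical name; the header path matches the OpenCV 2.4 releases used in this book) shows the shape of such a controller:

    #import <UIKit/UIKit.h>
    #import <opencv2/highgui/cap_ios.h>

    // Minimal controller conforming to CvVideoCameraDelegate
    @interface MyViewController : UIViewController<CvVideoCameraDelegate>
    @end

    @implementation MyViewController

    // The only required delegate method; it is called for every
    // captured frame, and the image can be modified in place
    - (void)processImage:(cv::Mat&)image
    {
        // Apply any per-frame processing here
    }

    @end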

Getting ready

We will use the Recipe10_CapturingVideo project as a starting point and apply the previously implemented RetroFilter to the video stream. We also assume that the RetroFilter class and its resources have been added to the CvEffects static library project. Source code can be found in the Recipe12_ProcessingVideo folder in the code bundle that accompanies this book. For this recipe, you can't use the Simulator, as it doesn't support working with the camera.
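
For reference, the camera setup inherited from Recipe10_CapturingVideo looks roughly like the following sketch; the imageView outlet and the particular preset and FPS values are assumptions based on that recipe:

    // In viewDidLoad (assumed setup from Recipe10_CapturingVideo)
    videoCamera = [[CvVideoCamera alloc]
                             initWithParentView:imageView];
    videoCamera.delegate = self;
    videoCamera.defaultAVCaptureDevicePosition =
                                 AVCaptureDevicePositionFront;
    videoCamera.defaultAVCaptureSessionPreset =
                                 AVCaptureSessionPreset352x288;
    videoCamera.defaultAVCaptureVideoOrientation =
                                 AVCaptureVideoOrientationPortrait;
    videoCamera.defaultFPS = 30;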

How to do it...

The following are the required steps:

  1. Add instance variables for storing retro filter properties.

  2. Add an initialization of the filter to the button's action.

  3. Finally, apply the filter in the processImage method.

Let's implement the described steps:

  1. First, we should add a RetroFilter::Parameters variable and a smart pointer to the filter to the ViewController interface. We'll also add a variable for storing the previous time stamp for the FPS calculation (a sketch of the assumed RetroFilter interface follows these steps):

    @interface ViewController : UIViewController<CvVideoCameraDelegate>
    {
        CvVideoCamera* videoCamera;
        BOOL isCapturing;
        RetroFilter::Parameters params;
        cv::Ptr<RetroFilter> filter;
        uint64_t prevTime;
    }
  2. To initialize the filter parameters, we should add some code to the viewDidLoad method:

    // Load textures
    UIImage* resImage = [UIImage imageNamed:@"scratches.png"];
    UIImageToMat(resImage, params.scratches);
    
    resImage = [UIImage imageNamed:@"fuzzy_border.png"];
    UIImageToMat(resImage, params.fuzzyBorder);
    
    filter = NULL;
    prevTime = mach_absolute_time(); 
  3. Since we know the camera resolution only after the session starts, we create the filter object when the StartCapture button is pressed:

    -(IBAction)startCaptureButtonPressed:(id)sender
    {
        [videoCamera start];
        isCapturing = YES;
        
        params.frameSize = cv::Size(videoCamera.imageWidth,
                                    videoCamera.imageHeight);
        
        if (!filter)
            filter = new RetroFilter(params);
    } 
  4. Finally, we should apply the filter to a camera image:

    - (void)processImage:(cv::Mat&)image
    {
        cv::Mat inputFrame = image;
        
        BOOL isNeedRotation = image.size() != params.frameSize;
        if (isNeedRotation)
            inputFrame = image.t();
        
        // Apply filter
        cv::Mat finalFrame;
        filter->applyToVideo(inputFrame, finalFrame);
    
        if (isNeedRotation)
            finalFrame = finalFrame.t();
        
        // Add fps label to the frame
        uint64_t currTime = mach_absolute_time();
        double timeInSeconds = machTimeToSecs(currTime - prevTime);
        prevTime = currTime;
        double fps = 1.0 / timeInSeconds;
        NSString* fpsString =
                        [NSString stringWithFormat:@"FPS = %3.2f", fps];
        cv::putText(finalFrame, [fpsString UTF8String],
                    cv::Point(30, 30), cv::FONT_HERSHEY_COMPLEX_SMALL,
                    0.8, cv::Scalar::all(255));
    
        finalFrame.copyTo(image);
    } 
  5. We will use the following function to convert the measured time to seconds:

    static double machTimeToSecs(uint64_t time)
    {
        mach_timebase_info_data_t timebase;
        mach_timebase_info(&timebase);
        return (double)time * (double)timebase.numer /
                              (double)timebase.denom / 1e9;
    }
  6. As you can see, this code uses the mach_timebase_info structure and function, which are declared in the following header file:

    #import <mach/mach_time.h>
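
For reference, the RetroFilter interface assumed by this recipe (implemented in an earlier recipe as part of the CvEffects library) could look like the following sketch; the member names follow the code above, and the rest is an assumption:

    #include <opencv2/core/core.hpp>

    class RetroFilter
    {
    public:
        struct Parameters
        {
            cv::Size frameSize;  // resolution of incoming frames
            cv::Mat scratches;   // "scratches.png" texture
            cv::Mat fuzzyBorder; // "fuzzy_border.png" texture
        };

        RetroFilter(const Parameters& params);

        // Applies the retro effect to one video frame
        void applyToVideo(const cv::Mat& frame, cv::Mat& retroFrame);
    };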

How it works...

In the previous recipes, we always created the filter object right before using it. With live video, we can't afford to do that, because performance becomes critical, so we initialize the RetroFilter object only once. For this purpose, we add a smart pointer to the filter object to the ViewController interface and initialize it after the video capturing process starts. We can't do it in the viewDidLoad method, because the camera resolution isn't known yet at that point.
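
With this approach, the filter survives across capture sessions. A stop action, similar to the one in the capturing recipe, can simply stop the camera and keep the filter alive; the following is a sketch, and the exact action name is an assumption:

    -(IBAction)stopCaptureButtonPressed:(id)sender
    {
        [videoCamera stop];
        isCapturing = NO;
        // The filter object is intentionally kept alive: the camera
        // resolution won't change, so it can be reused on restart
    }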

To calculate the FPS, we add the prevTime instance variable, which we use to measure the time between processImage calls. It is initialized with the current time in viewDidLoad. On every call to processImage, the difference between the current time and the value of prevTime gives us the working time of the filter plus the time needed to get the camera image. After that, we convert this value to seconds and calculate the resulting FPS value. To draw the number on the frame, we use the cv::putText function.
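
The instantaneous FPS value can jitter from frame to frame, and mach_timebase_info is queried on every call. A common refinement, not part of this recipe, is to cache the timebase and smooth the displayed value with an exponential moving average; the following is a sketch (the smoothing factor of 0.1 is an arbitrary choice):

    // Cached-timebase variant of the conversion helper
    static double machTimeToSecs(uint64_t time)
    {
        static mach_timebase_info_data_t timebase = {0, 0};
        if (timebase.denom == 0)
            mach_timebase_info(&timebase); // query only once
        return (double)time * timebase.numer / timebase.denom / 1e9;
    }

    // Inside processImage: exponentially smoothed FPS
    double fps = 1.0 / timeInSeconds;
    static double smoothedFps = 0.0;
    smoothedFps = (smoothedFps == 0.0) ?
                  fps : 0.9 * smoothedFps + 0.1 * fps;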

There's more...

Even on the latest iOS devices (iPad 4 and iPhone 5), our filter achieves a good frame rate (~30 FPS) only at low resolutions, such as 352 x 288. In the next recipes, we'll consider several ways to optimize OpenCV applications with iOS- and ARM-specific techniques.