
Mastering OpenCV with Practical Computer Vision Projects

By: Daniel Lélis Baggio, Shervin Emami, David Millán Escrivá, Khvedchenia Ievgen, Naureen Mahmood, Jason Saragih, Roy Shilkrot

Overview of this book

Computer Vision is fast becoming an important technology, used in everything from Mars robots, national security systems, automated factories, driverless cars, and medical image analysis to new forms of human-computer interaction. OpenCV is the most common library for computer vision, providing hundreds of complex and fast algorithms, but it has a steep learning curve and limited in-depth tutorials.

Mastering OpenCV with Practical Computer Vision Projects is the perfect book for developers with basic OpenCV skills who want to try practical computer vision projects, as well as for seasoned OpenCV experts who want to add more Computer Vision topics to their skill set or gain more experience with OpenCV's new C++ interface before migrating from the C API to the C++ API. Each chapter is a separate project including the necessary background knowledge, so try them all one by one or jump straight to the projects you're most interested in.

From this book you can create working prototypes including real-time mobile apps, Augmented Reality, 3D shape from video, face and eye tracking, a fluid wall using Kinect, number-plate recognition, and more. Mastering OpenCV with Practical Computer Vision Projects gives you rapid training in nine computer vision areas with useful projects.
Table of Contents (15 chapters)

Porting from desktop to Android

Now that the program works on the desktop, we can make an Android or iOS app from it. The details given here are specific to Android, but also apply when porting to iOS for Apple iPhone and iPad or similar devices. When developing Android apps, OpenCV can be used directly from Java, but the result is unlikely to be as efficient as native C/C++ code, and it doesn't let you run the same code on the desktop as on your mobile. So it is recommended to use C/C++ for most OpenCV+Android app development. (Readers who want to write OpenCV apps purely in Java can use the JavaCV library by Samuel Audet to run the same code on the desktop that we run on Android.)


This Android project uses a camera for live input, so it won't work on the Android Emulator. It needs a real Android 2.2 (Froyo) or later device with a camera.

The user interface of an Android app should be written using Java, but for the image processing we will use the same cartoon.cpp C++ file that we used for the desktop. To use C/C++ code in an Android app, we must use the NDK (Native Development Kit) that is based on JNI (Java Native Interface). We will create a JNI wrapper for our cartoonifyImage() function so it can be used from Android with Java.

Setting up an Android project that uses OpenCV

The Android port of OpenCV changes significantly each year, as does Android's method for camera access, so a book is not the best place to describe how it should be set up; therefore, follow the latest online instructions to set up and build a native (NDK) Android app with OpenCV. OpenCV comes with an Android sample project called Sample3Native that accesses the camera using OpenCV and displays the modified image on the screen. This sample project is useful as a base for the Android app developed in this chapter, so readers should familiarize themselves with this sample app. We will then modify an Android OpenCV base project so that it can cartoonify the camera's video frames and display the resulting frames on the screen.

If you are stuck with OpenCV development for Android, for example if you are receiving a compile error or the camera always gives blank frames, try searching these websites for solutions:

  1. The Android Binary Package NDK tutorial for OpenCV, mentioned previously.

  2. The official Android-OpenCV Google group.

  3. OpenCV's Q&A site.

  4. The StackOverflow Q&A site.

  5. The Web (for example, a search engine query for your error message).

  6. If you still can't fix your problem after trying all of these, you should post a question on the Android-OpenCV Google group with details of the error message, and so on.

Color formats used for image processing on Android

When developing for the desktop, we only have to deal with BGR pixel format because the input (from camera, image, or video file) is in BGR format and so is the output (HighGUI window, image, or video file). But when developing for mobiles, you typically have to convert native color formats yourself.

Input color format from the camera

Looking at the sample code in jni\jni_part.cpp, the myuv variable is the color image in Android's default camera format: "NV21" YUV420sp. The first part of the array is the grayscale pixel array, followed by a half-sized pixel array that alternates between the U and V color channels. So if we just want to access a grayscale image, we can get it directly from the first part of a YUV420sp semi-planar image without any conversions. But if we want a color image (for example, BGR or BGRA color format), we must convert the color format using cvtColor().
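To make the NV21 layout concrete, here is a small standalone C++ sketch (not from the book's project; the `Nv21Layout` name is ours) that computes where the grayscale plane and the half-sized interleaved V/U plane sit inside the camera buffer:

```cpp
#include <cassert>
#include <cstddef>

// NV21 ("YUV420sp") layout for a width x height frame:
// - the first width*height bytes are the full-resolution Y (grayscale) plane
// - the next width*height/2 bytes are interleaved V/U samples at half
//   resolution in both dimensions (one V,U pair per 2x2 pixel block).
struct Nv21Layout {
    std::size_t grayPlaneBytes;   // bytes 0 .. grayPlaneBytes-1
    std::size_t chromaPlaneBytes; // follows the gray plane
    std::size_t totalBytes;       // size of the whole camera buffer
};

Nv21Layout nv21Layout(std::size_t width, std::size_t height) {
    Nv21Layout l;
    l.grayPlaneBytes = width * height;
    l.chromaPlaneBytes = width * height / 2;
    l.totalBytes = l.grayPlaneBytes + l.chromaPlaneBytes;
    return l;
}
```

This is also why the JNI code later in this chapter wraps the camera buffer as a single-channel Mat with height + height/2 rows: the extra height/2 rows hold the chroma plane.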

Output color format for display

Looking at the Sample3Native code from OpenCV, the mbgra variable is the color image to be displayed on the Android device, in BGRA format. OpenCV's default format is BGR (the opposite byte order of RGB), and BGRA just adds an unused byte at the end of each pixel, so that each pixel is stored as Blue-Green-Red-Unused. You can either do all your processing in OpenCV's default BGR format and then convert your final output from BGR to BGRA before display on the screen, or you can ensure your image-processing code can handle the BGRA format instead of, or in addition to, the BGR format. This is often simple to allow in OpenCV because many OpenCV functions accept BGRA, but you must ensure that you create images with the same number of channels as the input, by checking whether the Mat::channels() value of your images is 3 or 4. Also, if you directly access pixels in your code, you need separate code to handle 3-channel BGR and 4-channel BGRA images.
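As a minimal illustration of the BGRA byte layout (independent of OpenCV; the helper name `bgrToBgra` is ours), the following sketch expands a packed 3-channel BGR buffer into a 4-channel BGRA buffer by appending one padding byte per pixel:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Expand a packed 3-channel BGR buffer into a packed 4-channel BGRA
// buffer, storing each pixel as Blue-Green-Red-Unused (padding 0xFF).
std::vector<std::uint8_t> bgrToBgra(const std::vector<std::uint8_t>& bgr) {
    std::vector<std::uint8_t> bgra;
    bgra.reserve(bgr.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < bgr.size(); i += 3) {
        bgra.push_back(bgr[i]);     // Blue
        bgra.push_back(bgr[i + 1]); // Green
        bgra.push_back(bgr[i + 2]); // Red
        bgra.push_back(0xFF);       // Unused/padding byte
    }
    return bgra;
}
```

The 4-byte stride is what makes BGRA pixels 32-bit aligned, which is the source of the speed trade-off mentioned in the note below.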


Some CV operations run faster with BGRA pixels (since it is aligned to 32-bit) while some run faster with BGR (since it requires less memory to read and write), so for maximum efficiency you should support both BGR and BGRA and then find which color format runs fastest overall in your app.

Let's begin with something simple: getting access to the camera frame in OpenCV but not processing it, and instead just displaying it on the screen. This can be done easily with Java code, but it is important to know how to do it using OpenCV too. As mentioned previously, the camera image arrives at our C++ code in YUV420sp format and should leave in BGRA format. So if we prepare our cv::Mat for input and output, we just need to convert from YUV420sp to BGRA using cvtColor. To write C/C++ code for an Android Java app, we need to use special JNI function names that match the Java class and package name that will use that JNI function, in the format:

JNIEXPORT <Return> JNICALL Java_<Package>_<Class>_<Function>(JNIEnv* env, jobject, <Args>)

So let's create a ShowPreview() C/C++ function that is used from a CartoonifierView Java class in a Cartoonifier Java package. Add this ShowPreview() C/C++ function to jni\jni_part.cpp:

// Just show the plain camera image without modifying it.
JNIEXPORT void JNICALL Java_com_Cartoonifier_CartoonifierView_ShowPreview(
  JNIEnv* env, jobject,
  jint width, jint height, jbyteArray yuv, jintArray bgra)
{
  jbyte* _yuv  = env->GetByteArrayElements(yuv, 0);
  jint*  _bgra = env->GetIntArrayElements(bgra, 0);

  Mat myuv = Mat(height + height/2, width, CV_8UC1, (uchar *)_yuv);
  Mat mbgra = Mat(height, width, CV_8UC4, (uchar *)_bgra);

  // Convert the color format from the camera's
  // NV21 "YUV420sp" format to an Android BGRA color image.
  cvtColor(myuv, mbgra, CV_YUV420sp2BGRA);

  // OpenCV can now access/modify the BGRA image "mbgra" ...

  env->ReleaseIntArrayElements(bgra, _bgra, 0);
  env->ReleaseByteArrayElements(yuv, _yuv, 0);
}

While this code looks complex at first, the first two lines of the function just give us native access to the given Java arrays, the next two lines construct cv::Mat objects around the given pixel buffers (that is, they don't allocate new images, they make myuv access the pixels in the _yuv array, and so on), and the last two lines of the function release the native lock we placed on the Java arrays. The only real work we did in the function is to convert from YUV to BGRA format, so this function is the base that we can use for new functions. Now let's extend this to analyze and modify the BGRA cv::Mat before display.


The jni\jni_part.cpp sample code in OpenCV v2.4.2 uses this code:

cvtColor(myuv, mbgra, CV_YUV420sp2BGR, 4);

This looks like it converts to 3-channel BGR format (OpenCV's default format), but due to the "4" parameter it actually converts to 4-channel BGRA (Android's default output format) instead! So it's identical to this code, which is less confusing:

cvtColor(myuv, mbgra, CV_YUV420sp2BGRA);

Since we now have a BGRA image as input and output instead of OpenCV's default BGR, it leaves us with two options for how to process it:

  • Convert from BGRA to BGR before we perform our image processing, do our processing in BGR, and then convert the output to BGRA so it can be displayed by Android

  • Modify all our code to handle BGRA format in addition to (or instead of) BGR format, so we don't need to perform slow conversions between BGRA and BGR

For simplicity, we will just apply the color conversions from BGRA to BGR and back, rather than supporting both BGR and BGRA formats. If you are writing a real-time app, you should consider adding 4-channel BGRA support in your code to potentially improve performance. We will do one simple change to make things slightly faster: we are converting the input from YUV420sp to BGRA and then from BGRA to BGR, so we might as well just convert straight from YUV420sp to BGR!
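For reference, the per-pixel math behind such a YUV-to-BGR conversion can be sketched as follows. This uses one common BT.601 integer approximation; OpenCV's cvtColor uses its own (similar, but not necessarily identical) coefficients and rounding, so treat this as illustrative only:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

struct Bgr { std::uint8_t b, g, r; };

// Convert one YUV pixel to BGR using a common BT.601 integer
// approximation (Y in [16,235], U and V centered at 128).
Bgr yuvToBgr(std::uint8_t y, std::uint8_t u, std::uint8_t v) {
    int c = static_cast<int>(y) - 16;
    int d = static_cast<int>(u) - 128;
    int e = static_cast<int>(v) - 128;
    auto clamp8 = [](int x) {
        return static_cast<std::uint8_t>(std::min(255, std::max(0, x)));
    };
    Bgr p;
    p.r = clamp8((298 * c           + 409 * e + 128) >> 8);
    p.g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
    p.b = clamp8((298 * c + 516 * d           + 128) >> 8);
    return p;
}
```

Converting straight from YUV420sp to BGR skips one full pass of this per-pixel work compared to going through BGRA first, which is exactly the saving described above.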

It is a good idea to build and run with the ShowPreview() function (shown previously) on your device so you have something to go back to if you have problems with your C/C++ code later. To call it from Java, we add the Java declaration just next to the Java declaration of CartoonifyImage() near the bottom of the CartoonifierView class:

public native void ShowPreview(int width, int height,
byte[] yuv, int[] rgba);

We can then call it just like the OpenCV sample code called FindFeatures(). Put this in the middle of the processFrame() function of the CartoonifierView class:

ShowPreview(getFrameWidth(), getFrameHeight(), data, rgba);

You should build and run it now on your device, just to see the real-time camera preview.

Adding the cartoonifier code to the Android NDK app

We want to add the cartoon.cpp file that we used for the desktop app. The NDK build file in your project's jni folder sets the C/C++/Assembly source files, header search paths, native libraries, and GCC compiler settings for your project:

  1. Add cartoon.cpp (and ImageUtils_0.7.cpp if you want easier debugging) to LOCAL_SRC_FILES, but remember that they are in the desktop folder instead of the default jni folder. So add this after the line LOCAL_SRC_FILES := jni_part.cpp:

    LOCAL_SRC_FILES += ../../Cartoonifier_Desktop/cartoon.cpp
    LOCAL_SRC_FILES += ../../Cartoonifier_Desktop/ImageUtils_0.7.cpp
  2. Add the header file search path so it can find cartoon.h in the common parent folder:

    LOCAL_C_INCLUDES += $(LOCAL_PATH)/../../Cartoonifier_Desktop
  3. In the file jni\jni_part.cpp, insert this near the top instead of #include <vector>:

    #include "cartoon.h"       // Cartoonifier.
    #include "ImageUtils.h"    // (Optional) OpenCV debugging functions.
  4. Add a JNI function CartoonifyImage() to this file; this will cartoonify the image. We can start by duplicating the function ShowPreview() we created previously, which just shows the camera preview without modifying it. Notice that we convert directly from YUV420sp to BGR since we don't want to process BGRA images:

    // Modify the camera image using the Cartoonifier filter.
    JNIEXPORT void JNICALL Java_com_Cartoonifier_CartoonifierView_CartoonifyImage(
        JNIEnv* env, jobject,
        jint width, jint height, jbyteArray yuv, jintArray bgra)
    {
        // Get native access to the given Java arrays.
        jbyte* _yuv  = env->GetByteArrayElements(yuv, 0);
        jint*  _bgra = env->GetIntArrayElements(bgra, 0);
        // Create OpenCV wrappers around the input & output data.
        Mat myuv(height + height/2, width, CV_8UC1, (uchar *)_yuv);
        Mat mbgra(height, width, CV_8UC4, (uchar *)_bgra);
        // Convert the color format from the camera's YUV420sp
        // semi-planar format to OpenCV's default BGR color image.
        Mat mbgr(height, width, CV_8UC3);  // Allocate a new image buffer.
        cvtColor(myuv, mbgr, CV_YUV420sp2BGR);
        // OpenCV can now access/modify the BGR image "mbgr", and should
        // store the output as the BGR image "displayedFrame".
        Mat displayedFrame(mbgr.size(), CV_8UC3);
        // TEMPORARY: Just show the camera image without modifying it.
        displayedFrame = mbgr;
        // Convert the output from OpenCV's BGR to Android's BGRA format.
        cvtColor(displayedFrame, mbgra, CV_BGR2BGRA);
        // Release the native lock we placed on the Java arrays.
        env->ReleaseIntArrayElements(bgra, _bgra, 0);
        env->ReleaseByteArrayElements(yuv, _yuv, 0);
    }
  5. The previous code does not modify the image, but we want to process the image using the cartoonifier we developed earlier in this chapter. So now let's insert a call to our existing cartoonifyImage() function that we created in cartoon.cpp for the desktop app. Replace the temporary line of code displayedFrame = mbgr with this:

    cartoonifyImage(mbgr, displayedFrame);
  6. That's it! Build the code (Eclipse should compile the C/C++ code for you using ndk-build) and run it on your device. You should have a working Cartoonifier Android app (right at the beginning of this chapter there is a sample screenshot showing what you should expect)! If it does not build or run, go back over the steps and fix the problems (look at the code provided with this book if you wish). Continue with the next steps once it is working.

Reviewing the Android app

You will quickly notice four issues with the app that is now running on your device:

  • It is extremely slow; many seconds per frame! So we should just display the camera preview and only cartoonify a camera frame when the user has touched the screen to say it is a good photo.

  • It needs to handle user input, such as to change modes between sketch, paint, evil, or alien modes. We will add these to the Android menu bar.

  • It would be great if we could save the cartoonified result to image files, to share with others. Whenever the user touches the screen for a cartoonified image, we will save the result as an image file on the user's SD card and display it in the Android Gallery.

  • There is a lot of random noise in the sketch edge detector. We will create a special "pepper" noise reduction filter to deal with this later.

Cartoonifying the image when the user taps the screen

To show the camera preview (until the user wants to cartoonify the selected camera frame), we can just call the ShowPreview() JNI function we wrote earlier. We will also wait for touch events from the user before cartoonifying the camera image. We only want to cartoonify one image when the user touches the screen; therefore we set a flag to say the next camera frame should be cartoonified and then that flag is reset, so it continues with the camera preview again. But this would mean the cartoonified image is only displayed for a fraction of a second and then the next camera preview will be displayed again. So we will use a second flag to say that the current image should be frozen on the screen for a few seconds before the camera frames overwrite it, to give the user some time to see it:

  1. Add the following header imports near the top of the CartoonifierApp source file in the src\com\Cartoonifier folder:

    import android.view.View;
    import android.view.View.OnTouchListener;
    import android.view.MotionEvent;
  2. Modify the class definition of CartoonifierApp near the top of the file:

    public class CartoonifierApp
        extends Activity implements OnTouchListener {
  3. Insert this code at the bottom of the onCreate() function (mView here is assumed to be the activity's CartoonifierView member, as in the OpenCV sample projects):

    // Call our "onTouch()" callback function whenever the user
    // touches the screen.
    mView.setOnTouchListener(this);
  4. Add the function onTouch() to process the touch event:

    public boolean onTouch(View v, MotionEvent m) {
        // Ignore finger-movement events; we just care about when the
        // finger first touches the screen.
        if (m.getAction() != MotionEvent.ACTION_DOWN) {
            return false; // We didn't use this touch movement event.
        }
        Log.i(TAG, "onTouch down event");
        // Signal that we should cartoonify the next camera frame and
        // save it, instead of just showing the preview.
        mView.nextFrameShouldBeSaved(getBaseContext());
        return true;
    }
  5. Now we need to add the nextFrameShouldBeSaved() function to the CartoonifierView class:

    // Cartoonify the next camera frame & save it instead of previewing.
    protected void nextFrameShouldBeSaved(Context context) {
        bSaveThisFrame = true;
    }
  6. Add these variables near the top of the CartoonifierView class:

    private boolean bSaveThisFrame = false;
    private boolean bFreezeOutput = false;
    private static final int FREEZE_OUTPUT_MSECS = 3000;
  7. The processFrame() function of CartoonifierView can now switch between cartoon and preview, but should also make sure to only display something if it is not trying to show a frozen cartoon image for a few seconds. So replace processFrame() with this:

    protected Bitmap processFrame(byte[] data) {
        // Store the output image to the RGBA member variable.
        int[] rgba = mRGBA;
        // Only process the camera or update the screen if we aren't
        // supposed to just show the cartoon image.
        if (bFreezeOutput) {
            // Only needs to be triggered here once.
            bFreezeOutput = false;
            // Wait for several seconds, doing nothing!
            try {
                Thread.sleep(FREEZE_OUTPUT_MSECS);
            } catch (InterruptedException e) {
            }
            return null;
        }
        if (!bSaveThisFrame) {
            ShowPreview(getFrameWidth(), getFrameHeight(), data, rgba);
        }
        else {
            // Just do it once, then go back to preview mode.
            bSaveThisFrame = false;
            // Don't update the screen for a while, so the user can
            // see the cartoonifier output.
            bFreezeOutput = true;
            CartoonifyImage(getFrameWidth(), getFrameHeight(), data,
                    rgba, m_sketchMode, m_alienMode, m_evilMode,
                    m_debugMode);
        }
        // Put the processed image into the Bitmap object that will be
        // returned for display on the screen.
        Bitmap bmp = mBitmap;
        bmp.setPixels(rgba, 0, getFrameWidth(), 0, 0, getFrameWidth(),
                getFrameHeight());
        return bmp;
    }
  8. You should be able to build and run it to verify that the app works nicely now.
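The two-flag control flow described above can be sketched on its own, independent of Android (the FrameLogic name and the string return values are ours, for illustration):

```cpp
#include <cassert>
#include <string>

// Two flags drive the preview/cartoonify/freeze cycle:
// bSaveThisFrame - cartoonify exactly one frame after a screen tap
// bFreezeOutput  - keep that result on screen for a few seconds
struct FrameLogic {
    bool bSaveThisFrame = false;
    bool bFreezeOutput = false;

    void onScreenTapped() { bSaveThisFrame = true; }

    // Returns which action the next camera frame should take.
    std::string nextAction() {
        if (bFreezeOutput) {
            bFreezeOutput = false;  // Freeze just once, then resume.
            return "freeze";        // (The app sleeps here, keeping the
                                    // last cartoonified frame visible.)
        }
        if (!bSaveThisFrame)
            return "preview";
        bSaveThisFrame = false;     // Cartoonify only this one frame,
        bFreezeOutput = true;       // then hold it on screen.
        return "cartoonify";
    }
};
```

Each tap therefore produces exactly one "cartoonify" frame, followed by one "freeze" frame, before the app falls back to the live preview.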

Saving the image to a file and to the Android picture gallery

We will save the output both as a PNG file and display it in the Android picture gallery. The Android Gallery is designed for JPEG files, but JPEG is bad for cartoon images with solid colors and edges, so we'll use a tedious method to add PNG images to the gallery. We will create a Java function savePNGImageToGallery() to perform this for us. At the bottom of the processFrame() function just seen previously, we see that an Android Bitmap object is created with the output data, so we need a way to save the Bitmap object to a PNG file. OpenCV's imwrite() Java function can be used to save to a PNG file, but this would require linking to both OpenCV's Java API and OpenCV's C/C++ API (just like the OpenCV4Android sample project "tutorial-4-mixed" does). Since we don't need the OpenCV Java API for anything else, the following code will just show how to save PNG files using the Android API instead of the OpenCV Java API:

  1. Android's Bitmap class can save files to PNG format, so let's use it. Also, we need to choose a filename for the image. Let's use the current date and time, to allow saving many files and making it possible for the user to remember when it was taken. Insert this just before the return bmp statement of processFrame():

    if (bFreezeOutput) {
        // Get the current date & time.
        SimpleDateFormat s = new SimpleDateFormat("yyyy-MM-dd,HH-mm-ss");
        String timestamp = s.format(new Date());
        String baseFilename = "Cartoon" + timestamp + ".png";
        // Save the processed image as a PNG file on the SD card and
        // show it in the Android Gallery.
        savePNGImageToGallery(bmp, mContext, baseFilename);
    }
  2. Add this to the top section of the CartoonifierView source file:

    // For saving Bitmaps to file and the Android picture gallery.
    import android.os.Environment;
    import android.provider.MediaStore;
    import android.provider.MediaStore.Images;
    import android.text.format.DateFormat;
    import android.util.Log;
    import java.text.SimpleDateFormat;
    import java.util.Date;
  3. Insert this at the top of the CartoonifierView class:

    private static final String TAG = "CartoonifierView";
    private Context mContext;  // So we can access the Android Gallery.
  4. Add this to your nextFrameShouldBeSaved() function in CartoonifierView:

    mContext = context;  // Save the Android context, for GUI access.
  5. Add the savePNGImageToGallery() function to CartoonifierView:

    // Save the processed image as a PNG file on the SD card
    // and show it in the Android Gallery.
    protected void savePNGImageToGallery(Bitmap bmp, Context context,
            String baseFilename)
    {
        try {
            // Get the file path to the SD card.
            String baseFolder =
                Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_PICTURES).getAbsolutePath()
                + "/";
            File file = new File(baseFolder + baseFilename);
            Log.i(TAG, "Saving the processed image to file [" +
                file.getAbsolutePath() + "]");
            // Open the file.
            OutputStream out = new BufferedOutputStream(
                new FileOutputStream(file));
            // Save the image file as PNG.
            bmp.compress(CompressFormat.PNG, 100, out);
            // Make sure it is saved to file soon, because we are about
            // to add it to the Gallery.
            out.flush();
            out.close();
            // Add the PNG file to the Android Gallery.
            ContentValues image = new ContentValues();
            image.put(Images.Media.TITLE, baseFilename);
            image.put(Images.Media.DISPLAY_NAME, baseFilename);
            image.put(Images.Media.DESCRIPTION,
                "Processed by the Cartoonifier App");
            image.put(Images.Media.DATE_TAKEN,
                System.currentTimeMillis()); // msecs since 1970 UTC.
            image.put(Images.Media.MIME_TYPE, "image/png");
            image.put(Images.Media.ORIENTATION, 0);
            image.put(Images.Media.DATA, file.getAbsolutePath());
            Uri result = context.getContentResolver().insert(
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI, image);
        }
        catch (Exception e) {
            Log.e(TAG, "ERROR while saving image to gallery: " + e);
        }
    }
  6. Android apps need permission from the user during installation if they need to store files on the device. So insert this line in AndroidManifest.xml just next to the similar line requesting permission for camera access:

    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
  7. Build and run the app! When you touch the screen to save a photo, you should eventually see the cartoonified image shown on the screen (perhaps after 5 or 10 seconds of processing). Once it is shown on the screen, it means it should be saved to your SD card and to your photo gallery. Exit the Cartoonifier app, open the Android Gallery app, and view the Pictures album. You should see the cartoon image as a PNG image in your screen's full resolution.

Showing an Android notification message about a saved image

If you want to show a notification message whenever a new image is saved to the SD card and Android Gallery, follow these steps; otherwise feel free to skip this section:

  1. Add the following to the top section of the CartoonifierView source file:

    // For showing a Notification message when saving a file.
    import android.content.ContentValues;
    import android.content.Intent;
  2. Add this near the top section of CartoonifierView:

    private int mNotificationID = 0;
    // To show just 1 notification.
  3. Insert this inside the if statement below the call to savePNGImageToGallery() in processFrame():

    showNotificationMessage(mContext, baseFilename);
  4. Add the showNotificationMessage() function to CartoonifierView:

    // Show a notification message, saying we've saved another image.
    protected void showNotificationMessage(Context context,
            String filename)
    {
        // Popup a notification message in the Android status bar. To
        // make sure a notification is shown for each image but only 1
        // is kept in the status bar at a time, use a different ID each
        // time but delete previous messages before creating it.
        final NotificationManager mgr = (NotificationManager)
            context.getSystemService(Context.NOTIFICATION_SERVICE);
        // Close the previous popup message, so we only have 1 at a
        // time, but it still shows a popup message for each one.
        if (mNotificationID > 0)
            mgr.cancel(mNotificationID);
        mNotificationID++;
        Notification notification = new Notification(R.drawable.icon,
            "Saving to gallery (image " + mNotificationID + ") ...",
            System.currentTimeMillis());
        Intent intent = new Intent(context, CartoonifierView.class);
        // Close it if the user clicks on it.
        notification.flags |= Notification.FLAG_AUTO_CANCEL;
        PendingIntent pendingIntent = PendingIntent.getActivity(context,
            0, intent, 0);
        notification.setLatestEventInfo(context, "Cartoonifier saved " +
            mNotificationID + " images to Gallery", "Saved as '" +
            filename + "'", pendingIntent);
        mgr.notify(mNotificationID, notification);
    }
  5. Once again, build and run the app! You should see a notification message pop up whenever you touch the screen for another saved image. If you want the notification message to pop up before the long delay of image processing rather than after, move the call to showNotificationMessage() before the call to cartoonifyImage(), and move the code for generating the date and time string so that the same string is given to the notification message and the actual file is saved.

Changing cartoon modes through the Android menu bar

Let's allow the user to change modes through the menu:

  1. Add the following headers near the top of the CartoonifierApp source file in the src\com\Cartoonifier folder:

    import android.view.Menu;
    import android.view.MenuItem;
  2. Insert the following member variables inside the CartoonifierApp class:

    // Items for the Android menu bar.
    private MenuItem mMenuAlien;
    private MenuItem mMenuEvil;
    private MenuItem mMenuSketch;
    private MenuItem mMenuDebug;
  3. Add the following functions to CartoonifierApp (the toggle calls are forwarded to the CartoonifierView member, assumed here to be named mView):

    /** Called when the menu bar is being created by Android. */
    public boolean onCreateOptionsMenu(Menu menu) {
        Log.i(TAG, "onCreateOptionsMenu");
        mMenuSketch = menu.add("Sketch or Painting");
        mMenuAlien = menu.add("Alien or Human");
        mMenuEvil = menu.add("Evil or Good");
        mMenuDebug = menu.add("[Debug mode]");
        return true;
    }

    /** Called whenever the user pressed a menu item in the menu bar. */
    public boolean onOptionsItemSelected(MenuItem item) {
        Log.i(TAG, "Menu Item selected: " + item);
        if (item == mMenuSketch)
            mView.toggleSketchMode();
        else if (item == mMenuAlien)
            mView.toggleAlienMode();
        else if (item == mMenuEvil)
            mView.toggleEvilMode();
        else if (item == mMenuDebug)
            mView.toggleDebugMode();
        return true;
    }
  4. Insert the following member variables inside the CartoonifierView class:

    private boolean m_sketchMode = false;
    private boolean m_alienMode = false;
    private boolean m_evilMode = false;
    private boolean m_debugMode = false;
  5. Add the following functions to CartoonifierView:

    protected void toggleSketchMode() {
        m_sketchMode = !m_sketchMode;
    }
    protected void toggleAlienMode() {
        m_alienMode = !m_alienMode;
    }
    protected void toggleEvilMode() {
        m_evilMode = !m_evilMode;
    }
    protected void toggleDebugMode() {
        m_debugMode = !m_debugMode;
    }
  6. We need to pass the mode values to the cartoonifyImage() JNI code, so let's send them as arguments. Modify the Java declaration of CartoonifyImage() in CartoonifierView:

    public native void CartoonifyImage(int width, int height,byte[] yuv,
    int[] rgba, boolean sketchMode, boolean alienMode,
    boolean evilMode, boolean debugMode);
  7. Now modify the Java code so we pass the current mode values in processFrame():

    CartoonifyImage(getFrameWidth(), getFrameHeight(), data, rgba,
        m_sketchMode, m_alienMode, m_evilMode, m_debugMode);
  8. The JNI declaration of CartoonifyImage() in jni\jni_part.cpp should now be:

    JNIEXPORT void JNICALL Java_com_Cartoonifier_CartoonifierView_CartoonifyImage(
      JNIEnv* env, jobject, jint width, jint height,
      jbyteArray yuv, jintArray bgra, jboolean sketchMode,
      jboolean alienMode, jboolean evilMode, jboolean debugMode)
  9. We then need to pass the modes to the C/C++ code in cartoon.cpp from the JNI function in jni\jni_part.cpp. When developing for Android we can only show one GUI window at a time, but on a desktop it is handy to show extra windows while debugging. So instead of taking a Boolean flag for debugMode, let's pass a number that would be 0 for non-debug, 1 for debug on mobile (where creating a GUI window in OpenCV would cause a crash!), and 2 for debug on desktop (where we can create as many extra windows as we want):

    int debugType = 0;
    if (debugMode)
      debugType = 1;
    cartoonifyImage(mbgr, displayedFrame, sketchMode, alienMode, evilMode, debugType);
  10. Update the actual C/C++ implementation in cartoon.cpp:

    void cartoonifyImage(Mat srcColor, Mat dst, bool sketchMode,
    bool alienMode, bool evilMode, int debugType)
  11. And update the C/C++ declaration in cartoon.h:

    void cartoonifyImage(Mat srcColor, Mat dst, bool sketchMode,
    bool alienMode, bool evilMode, int debugType);
  12. Build and run it; then try pressing the small options-menu button on the bottom of the window. You should find that the sketch mode is real-time, whereas the paint mode has a large delay due to the bilateral filter.

Reducing the random pepper noise from the sketch image

Most of the cameras in current smartphones and tablets have significant image noise. This is normally acceptable, but it has a large effect on our 5 x 5 Laplacian-edge filter. The edge mask (shown as the sketch mode) will often have thousands of small blobs of black pixels called "pepper" noise, made of several black pixels next to each other in a white background. We are already using a Median filter, which is usually strong enough to remove pepper noise, but in our case it may not be strong enough. Our edge mask is mostly a pure white background (value of 255) with some black edges (value of 0) and the dots of noise (also values of 0). We could use a standard closing morphological operator, but it will remove a lot of edges. So, instead, we will apply a custom filter that removes small black regions that are surrounded completely by white pixels. This will remove a lot of noise while having little effect on actual edges.

We will scan the image for black pixels, and at each black pixel we'll check the border of the 5 x 5 square around it to see if all the 5 x 5 border pixels are white. If they are all white we know we have a small island of black noise, so we fill the whole block with white pixels to remove the black island. For simplicity in our 5 x 5 filter, we will ignore the two border pixels around the image and leave them as they are.

The following figure shows the original image from an Android tablet on the left side, with a sketch mode in the center (showing small black dots of pepper noise), and the result of our pepper-noise removal shown on the right side, where the skin looks cleaner:

The following code implements the function removePepperNoise(). For simplicity, this function edits the image in place:

void removePepperNoise(Mat &mask)
{
  for (int y=2; y<mask.rows-2; y++) {
    // Get access to each of the 5 rows near this pixel.
    uchar *pUp2 = mask.ptr(y-2);
    uchar *pUp1 = mask.ptr(y-1);
    uchar *pThis = mask.ptr(y);
    uchar *pDown1 = mask.ptr(y+1);
    uchar *pDown2 = mask.ptr(y+2);

    // Skip the first (and last) 2 pixels on each row.
    pThis += 2;
    pUp1 += 2;
    pUp2 += 2;
    pDown1 += 2;
    pDown2 += 2;
    for (int x=2; x<mask.cols-2; x++) {
      uchar value = *pThis;  // Get this pixel value (0 or 255).
      // Check if this is a black pixel that is surrounded by
      // white pixels (ie: whether it is an "island" of black).
      if (value == 0) {
        bool above, left, below, right, surroundings;
        above = *(pUp2 - 2) && *(pUp2 - 1) && *(pUp2) &&
                *(pUp2 + 1) && *(pUp2 + 2);
        left = *(pUp1 - 2) && *(pThis - 2) && *(pDown1 - 2);
        below = *(pDown2 - 2) && *(pDown2 - 1) && *(pDown2) &&
                *(pDown2 + 1) && *(pDown2 + 2);
        right = *(pUp1 + 2) && *(pThis + 2) && *(pDown1 + 2);
        surroundings = above && left && below && right;
        if (surroundings == true) {
          // Fill the whole 5x5 block as white. Since we know
          // the 5x5 borders are already white, we just need to
          // fill the 3x3 inner region.
          *(pUp1 - 1) = 255;
          *(pUp1 + 0) = 255;
          *(pUp1 + 1) = 255;
          *(pThis - 1) = 255;
          *(pThis + 0) = 255;
          *(pThis + 1) = 255;
          *(pDown1 - 1) = 255;
          *(pDown1 + 0) = 255;
          *(pDown1 + 1) = 255;
          // Since we just covered the whole 5x5 block with
          // white, we know the next 2 pixels won't be black,
          // so skip the next 2 pixels on the right.
          pThis += 2;
          pUp1 += 2;
          pUp2 += 2;
          pDown1 += 2;
          pDown2 += 2;
        }
      }
      // Move to the next pixel on the right.
      pThis++;
      pUp1++;
      pUp2++;
      pDown1++;
      pDown2++;
    }
  }
}
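The 5 x 5 island test at the heart of removePepperNoise() is easy to unit-test off-device. The following is a hypothetical, OpenCV-free sketch of the same check on a raw single-channel buffer; the helper name isBlackIsland() and the buffer layout are invented for illustration and are not part of cartoon.cpp:

```cpp
#include <vector>

typedef unsigned char uchar;

// Return true if the pixel at (x, y) is black (0) and the entire outer
// ring of the 5x5 square centered on it is white (non-zero), that is,
// the pixel belongs to a small "island" of black noise.
// 'w' is the row stride of the single-channel mask; the caller must
// ensure (x, y) is at least 2 pixels away from every image border.
// (Hypothetical helper for illustration; not the book's code.)
bool isBlackIsland(const std::vector<uchar> &mask, int w, int x, int y)
{
    if (mask[y*w + x] != 0)
        return false;           // Not a black pixel.
    for (int dy = -2; dy <= 2; dy++) {
        for (int dx = -2; dx <= 2; dx++) {
            // Skip the inner 3x3 region; only the border ring matters.
            if (dx > -2 && dx < 2 && dy > -2 && dy < 2)
                continue;
            if (mask[(y+dy)*w + (x+dx)] == 0)
                return false;   // Border touches black; not an island.
        }
    }
    return true;
}
```

Note that, like removePepperNoise(), this treats any black blob small enough to fit inside the inner 3 x 3 region as noise; a blob that reaches the 5 x 5 border ring is kept as a real edge.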

Showing the FPS of the app

If you want to show the frames-per-second (FPS) speed on the screen (less important for a slow app such as this, but still useful), perform the following steps:

  1. Copy the FpsMeter source file from src\org\opencv\samples\imagemanipulations\ in the ImageManipulations sample folder of OpenCV (for example, C:\OpenCV-2.4.1\samples\android\image-manipulations) to your src\com\Cartoonifier folder.

  2. Change the package name at the top of the copied file to com.Cartoonifier.

  3. In the copied file, declare your FpsMeter member variable after private byte[] mBuffer;:

    private FpsMeter  mFps;
  4. Initialize the FpsMeter object in the CartoonifierViewBase() constructor, after mHolder.addCallback(this);:

    mFps = new FpsMeter();
  5. Measure the FPS of each frame in run() after the try/catch block:

    mFps.measure();
  6. Draw the FPS onto the screen for each frame, in run() after the canvas.drawBitmap() function call:

    mFps.draw(canvas, (canvas.getWidth() - bmp.getWidth()) /2, 0);
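Under the hood, an FPS meter just counts frames against elapsed time. If you also want FPS statistics inside your native C++ code, a minimal std::chrono-based sketch could look like this; the class name FpsCounter is invented and this is not the FpsMeter class from the OpenCV samples:

```cpp
#include <chrono>

// Minimal FPS counter: call tick() once per processed frame, then read
// fps() whenever you want the average rate since construction.
// (Hypothetical sketch, not OpenCV's FpsMeter.)
class FpsCounter {
    typedef std::chrono::steady_clock Clock;
public:
    FpsCounter() : mFrames(0), mStart(Clock::now()) {}

    void tick() { mFrames++; }

    double fps() const {
        std::chrono::duration<double> elapsed = Clock::now() - mStart;
        return (elapsed.count() > 0.0) ? mFrames / elapsed.count() : 0.0;
    }
private:
    int mFrames;
    Clock::time_point mStart;
};
```

In a camera loop you would call tick() once per frame and draw or log fps() every second or so.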

Using a different camera resolution

If you want your app to run faster, knowing that the quality will suffer, you should definitely consider either asking for a smaller camera image from the hardware or shrinking the image once you have it. The sample code that the Cartoonifier is based on uses the camera preview resolution closest to the screen height. So if your device has a 5-megapixel camera but the screen is just 640 x 480, it might use a camera resolution of 720 x 480, and so on. If you want to control which camera resolution is chosen, you can modify the parameters passed to setupCamera() in the surfaceChanged() function. For example:

public void surfaceChanged(SurfaceHolder _holder, int format,
  int width, int height) {
  Log.i(TAG, "Screen size: " + width + "x" + height);
  // Use a camera resolution of roughly half the screen size.
  setupCamera(width/2, height/2);
}

An easy method to obtain the highest preview resolution from a camera is to pass a large size such as 10,000 x 10,000 and it will choose the maximum resolution available (note that it will only give the maximum preview resolution, which is the camera's video resolution and therefore is often much less than the camera's still-image resolution). Or if you want it to run really fast, pass 1 x 1 and it will find the lowest camera preview resolution (for example 160 x 120) for you.
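The "pass an extreme size" trick works because the camera setup picks the supported preview size closest to the requested one. That selection rule can be sketched in plain C++; the helper pickClosestSize() below is a hypothetical stand-in for the Java sample's selection loop, not the actual sample code:

```cpp
#include <cstdlib>
#include <climits>
#include <vector>

struct PreviewSize { int width, height; };

// Pick the supported preview size closest to the requested dimensions.
// (Hypothetical C++ stand-in for the selection loop that the Java
// sample code performs in setupCamera().)
PreviewSize pickClosestSize(const std::vector<PreviewSize> &supported,
                            int reqWidth, int reqHeight)
{
    PreviewSize best = supported.front();
    int bestDiff = INT_MAX;
    for (size_t i = 0; i < supported.size(); i++) {
        // Compare by how far each dimension is from the request.
        int diff = std::abs(supported[i].width - reqWidth) +
                   std::abs(supported[i].height - reqHeight);
        if (diff < bestDiff) {
            bestDiff = diff;
            best = supported[i];
        }
    }
    return best;
}
```

With this rule, requesting 10,000 x 10,000 picks the largest supported size and requesting 1 x 1 picks the smallest, which is exactly why those two extreme requests behave as described.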

Customizing the app

Now that you have created a whole Android Cartoonifier app and know the basics of how it works and which parts do what, you should customize it! Change the GUI, the app behavior and workflow, the cartoonifier filter constants, or the skin-detector algorithm, or replace the cartoonifier code with your own ideas.

You can improve the skin-detection algorithm in many ways, such as by using a more complex skin-detection algorithm (for example, using trained Gaussian models from many recent CVPR or ICCV conference papers), or by adding face detection (see the Face Detection section of Chapter 8, Face Recognition using Eigenfaces) to the skin detector, so that it detects where the user's face is rather than asking the user to put their face in the center of the screen. Beware that face detection may take many seconds on some devices or with high-resolution cameras, so this approach may be limited by its comparatively slow processing speed; however, smartphones and tablets are getting significantly faster every year, so this will become less of a problem.

The most significant way to speed up mobile computer vision apps is to reduce the camera resolution as much as possible (for example, 0.5 megapixel instead of 5 megapixel), allocate and free images as rarely as possible, and do image conversions as rarely as possible (for instance, by supporting BGRA images throughout your code). You can also look for optimized image-processing or math libraries from the CPU vendor of your device (for example, NVIDIA Tegra, Texas Instruments OMAP, Samsung Exynos, Apple Ax, or Qualcomm Snapdragon) or for your CPU family (for example, the ARM Cortex-A9). Remember, there may be an optimized version of OpenCV for your device.
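The "allocate images as rarely as possible" advice amounts to reusing one working buffer across frames rather than creating a fresh image every frame (with OpenCV, cv::Mat::create() gives you this behavior by reallocating only when the requested size or type changes). Here is a plain-C++ sketch of the pattern, with a hypothetical FrameBuffer helper standing in for an image:

```cpp
#include <vector>
#include <cstddef>

typedef unsigned char uchar;

// Reusable frame buffer: reallocates only when the requested frame is
// larger than anything seen before, so the steady-state per-frame
// allocation cost is zero. (Hypothetical helper for illustration.)
class FrameBuffer {
public:
    uchar* prepare(int width, int height, int channels) {
        std::size_t needed = (std::size_t)width * height * channels;
        if (mData.size() < needed)
            mData.resize(needed);   // Only grows; never shrinks.
        return &mData[0];
    }
private:
    std::vector<uchar> mData;
};
```

Each camera frame then calls prepare() with the same dimensions, receives the same allocation back, and no per-frame allocation or free occurs.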

To make customizing NDK and desktop image-processing code easier, this book comes with the files ImageUtils.cpp and ImageUtils.h to help you experiment. They include functions such as printMatInfo(), which prints a lot of information about a cv::Mat object, making OpenCV debugging much easier. There are also timing macros that let you easily add detailed timing statistics to your C/C++ code. For example:


void myImageFunction(Mat img) {
  printMatInfo(img, "input");

  bilateralFilter(img, …);
  SHOW_TIMING(myFilter, "My Filter");
}

You would then see something like the following printed to your console:

input: 800w600h 3ch 8bpp, range[19,255][17,243][47,251]
My Filter: time:  213ms   (ave=215ms min=197ms max=312ms, across 57 runs).

This is useful when your OpenCV code is not working as expected, particularly for mobile development, where it is often quite difficult to use an IDE debugger and printf() statements generally won't work in the Android NDK. However, the functions in ImageUtils work on both Android and desktop.
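The timing macros themselves are defined in ImageUtils.h and are not reproduced here, but the kind of statistics they print can be sketched with std::chrono. The TimingStats struct below is a hypothetical illustration; the real macros in ImageUtils.h may differ:

```cpp
#include <chrono>
#include <cstdio>

// Cumulative timing statistics in the spirit of the ImageUtils timing
// macros: time a region repeatedly, then print last/average/min/max.
// (Hypothetical sketch; not the actual ImageUtils implementation.)
struct TimingStats {
    double lastMs, totalMs, minMs, maxMs;
    int runs;
    std::chrono::steady_clock::time_point startTime;

    TimingStats() : lastMs(0), totalMs(0), minMs(1e30), maxMs(0), runs(0) {}

    void start() { startTime = std::chrono::steady_clock::now(); }

    void stop() {
        std::chrono::duration<double, std::milli> d =
            std::chrono::steady_clock::now() - startTime;
        lastMs = d.count();
        totalMs += lastMs;
        if (lastMs < minMs) minMs = lastMs;
        if (lastMs > maxMs) maxMs = lastMs;
        runs++;
    }

    void show(const char *name) const {
        std::printf("%s: time: %.0fms (ave=%.0fms min=%.0fms max=%.0fms,"
                    " across %d runs).\n", name, lastMs,
                    totalMs / runs, minMs, maxMs, runs);
    }
};
```

Wrapping start() and stop() around a filter call and calling show() once per second would produce output in the same style as the "My Filter" line above.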