Cordova architecture


The following diagram shows the main components of an Apache Cordova application: HTML, CSS, and JavaScript files. The application can also contain helper files (such as the application's JSON resource bundle files). The HTML files include the JavaScript and CSS files. In order to access a device's native features, the application's JavaScript objects (or functions) call the Apache Cordova APIs.

Apache Cordova creates a single screen in the native application; this screen contains only a single WebView that consumes the available space on the device screen. Apache Cordova uses the native application's WebView in order to load the application's HTML and its related JavaScript and CSS files.

It is important to note that a WebView is a component that displays a web page or web content (basically HTML) in the application window. Simply put, it is an embedded mobile web browser inside your native application that allows you to display web content.

When the application launches, Apache Cordova loads the application's default startup page (usually index.html) in the application's WebView and then passes control to the WebView, allowing the user to interact with the application. Users can interact with the application in many ways, such as entering data in input fields, clicking on action buttons, and viewing results in the application's WebView.
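
For example, the application's JavaScript typically waits for Cordova's deviceready event before calling any device APIs; this event is fired by Cordova once its native bridge is loaded and its APIs are safe to use. The following is a minimal sketch of such startup code (the onDeviceReady handler name is just an illustrative choice, and the device.platform line assumes the Device plugin is installed):

// Wait for Cordova to finish loading before touching any device APIs
document.addEventListener('deviceready', onDeviceReady, false);

function onDeviceReady() {
    // From this point on, Cordova APIs (navigator.camera, device, and so on) can be called safely
    console.log('Cordova is ready on ' + device.platform);
}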

Thanks to this technique, and because the WebView is a native component that renders web content, users feel that they are interacting with a native application screen, provided that the application's CSS is designed to match the mobile platform's look and feel.

Tip

WebView has an implementation on all the major mobile platforms. In Android, for example, WebView refers to the android.webkit.WebView class. In iOS, it refers to the UIWebView class, which belongs to the UIKit framework (System/Library/Frameworks/UIKit.framework). On the Windows Phone platform, it refers to the WebView class in the Windows.UI.Xaml.Controls namespace.

In order to allow you to access a mobile device's native functions, such as audio recording or camera photo capture, Apache Cordova provides a suite of JavaScript APIs that developers can call from their JavaScript code, as shown in the following diagram:

Calls to the Apache Cordova JavaScript APIs are translated into native device API calls using a special bridge layer. In Apache Cordova, the device's native APIs are accessed from Apache Cordova plugins.
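
Although the bridge details differ per platform, plugin JavaScript APIs typically funnel their calls through Cordova's cordova.exec() function, which passes the request (a service name, an action, and an argument array) to the corresponding native plugin class and wires up the success and error callbacks. The following is a simplified sketch of such a call; the "Echo" service and "echo" action names are purely illustrative:

cordova.exec(
    function (result) { console.log('Native layer returned: ' + result); }, // success callback
    function (error) { console.log('Native layer failed: ' + error); },     // error callback
    'Echo',             // native service (plugin) name, as registered with Cordova
    'echo',             // action to execute on the native side
    ['Hello, native!']  // arguments passed to the native implementation
);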

Tip

You will learn how to develop your own custom Cordova plugin in Chapter 6, Developing Custom Cordova Plugins.

The beautiful thing about this approach is that you can use a unified API interface to perform a specific native function (such as camera photo capture or audio recording) transparently across the various mobile platforms. It is important to note that, as a native developer, performing these functions would require you to call completely different native APIs that are usually implemented in different native programming languages. All of the Cordova unified JavaScript APIs and their corresponding native code implementations are provided by plugins. We will illustrate Cordova plugins in much more detail in Chapter 6, Developing Custom Cordova Plugins.

If you are interested in knowing what happens when a call is made to a Cordova JavaScript API, let's take a look at a complete example of a Cordova API call on the Android and Windows Phone platforms. To get the complete picture, you simply call the following Cordova JavaScript API:

navigator.camera.getPicture(onSuccess, onFail, { quality: 50,
    destinationType: Camera.DestinationType.DATA_URL
});

function onSuccess(imageData) {
    var image = document.getElementById('myImage');
    image.src = "data:image/jpeg;base64," + imageData;
}

function onFail(message) {
    alert('Failed because: ' + message);
}

As shown in the preceding code snippet, a simple call to the getPicture() method of the camera object is performed with the following three parameters:

  • onSuccess callback: This callback is invoked if the getPicture operation succeeds.

  • onFail callback: This callback is invoked if the getPicture operation fails.

  • { quality: 50, destinationType: Camera.DestinationType.DATA_URL }: This is a JavaScript object that contains the configuration parameters. In our example, only two parameters are specified: quality, which refers to the quality of the output picture (a value from 0 to 100), and destinationType, which refers to the format of the return value. It can have one of three values: DATA_URL, which means that the returned image will be a Base64-encoded string; FILE_URI, which means that the image file URI will be returned; or NATIVE_URI, which refers to the image's native URI.

As we set destinationType to Camera.DestinationType.DATA_URL, the parameter of onSuccess will represent the Base64-encoded string of the captured image.
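
If you set destinationType to Camera.DestinationType.FILE_URI instead, the success callback receives the captured image file's URI, which can be assigned directly to an image element's src attribute. The following is a small sketch of that variant, reusing the same hypothetical myImage element:

navigator.camera.getPicture(function (imageURI) {
    // FILE_URI returns a URI pointing to the captured image file
    var image = document.getElementById('myImage');
    image.src = imageURI;
}, function (message) {
    alert('Failed because: ' + message);
}, { quality: 50, destinationType: Camera.DestinationType.FILE_URI });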

On Android, this call to the getPicture() method of the camera object invokes the following Java native code. Please note that this code is the actual code of the Apache Cordova Camera plugin version 3. If you are a native Android developer, the following two code snippets will look very familiar to you:

public void takePicture(int returnType, int encodingType) {
    // Code is omitted for simplicity ...

    // Display camera
    Intent intent = new Intent("android.media.action.IMAGE_CAPTURE");

    // Specify file so that large image is captured and returned
    File photo = createCaptureFile(encodingType);
    
    intent.putExtra(android.provider.MediaStore.EXTRA_OUTPUT, Uri.fromFile(photo));
    this.imageUri = Uri.fromFile(photo);

    if (this.cordova != null) {
        this.cordova.startActivityForResult((CordovaPlugin) this, intent, (CAMERA + 1) * 16 + returnType + 1);
    }
}

As shown in the previous code, in order to open a camera in an Android device, you need to start the "android.media.action.IMAGE_CAPTURE" intent and receive the result back using the startActivityForResult() API of the Android Activity class. In order to receive the image capture intent result in Android, your Android Activity class needs to implement the onActivityResult() callback, as shown in the following Apache Cordova Android Camera plugin code:

public void onActivityResult(int requestCode, int resultCode, Intent intent) {
    // Get src and dest types from request code
    int srcType = (requestCode / 16) - 1;
    int destType = (requestCode % 16) - 1;
    int rotate = 0;

    // If CAMERA
    if (srcType == CAMERA) {
        
        // If image available
        if (resultCode == Activity.RESULT_OK) {
            try {
                // ... Code is omitted for simplicity ...

                Bitmap bitmap = null;
                Uri uri = null;

                // If sending base64 image back
                if (destType == DATA_URL) {
                    bitmap = getScaledBitmap(FileHelper.stripFileProtocol(imageUri.toString()));
            
                    // ... Code is omitted for simplicity ...

                    this.processPicture(bitmap);
                }

                // If sending filename back
                else if (destType == FILE_URI || destType == NATIVE_URI) {
                    if (this.saveToPhotoAlbum) {
                        Uri inputUri = getUriFromMediaStore();
                        
                        //Just because we have a media URI doesn't mean we have a real file, we need to make it
                        uri = Uri.fromFile(new File(FileHelper.getRealPath(inputUri, this.cordova)));
                    } else {
                        uri = Uri.fromFile(new File(DirectoryManager.getTempDirectoryPath(this.cordova.getActivity()), System.currentTimeMillis() + ".jpg"));
                    }
                        
                    if (uri == null) {
                        this.failPicture("Error capturing image - no media storage found.");
                    }
                        
                    // ... Code is omitted for simplicity ...
                    // Send Uri back to JavaScript for viewing image
                    this.callbackContext.success(uri.toString());
                }
                        
                // ... Code is omitted for simplicity ...
            } catch (IOException e) {
                e.printStackTrace();
                this.failPicture("Error capturing image.");
            }
        }
                        
        // If cancelled
        else if (resultCode == Activity.RESULT_CANCELED) {
            this.failPicture("Camera cancelled.");
        }
                        
        // If something else
        else {
            this.failPicture("Did not complete!");
        }
    }
}

If the camera capture operation succeeds, then resultCode == Activity.RESULT_OK will be true, and if the user requires the result of the captured image as a Base64-encoded string, then the captured bitmap image is retrieved and processed in the processPicture(bitmap) method. As shown in the following code snippet, processPicture(bitmap) compresses the bitmap image and then converts it to a byte array, which is encoded to a Base64 byte array. This is finally converted to a string that is returned to the JavaScript Cordova client using this.callbackContext.success(). We will illustrate the Android CallbackContext in more detail later in this book.

If the user requires the result of the captured image as a file or native URI string, then the file URI of the image file is retrieved and sent to the JavaScript Cordova client using this.callbackContext.success().

public void processPicture(Bitmap bitmap) {
    ByteArrayOutputStream jpeg_data = new ByteArrayOutputStream();
    try {
        if (bitmap.compress(CompressFormat.JPEG, mQuality, jpeg_data)) {
            byte[] code = jpeg_data.toByteArray();
            byte[] output = Base64.encode(code, Base64.DEFAULT);
            String js_out = new String(output);
            this.callbackContext.success(js_out);
            js_out = null;
            output = null;
            code = null;
        }
    } catch (Exception e) {
        this.failPicture("Error compressing image.");
    }
    jpeg_data = null;
}

Note

In Android native development, an Activity generally represents a single, focused thing that the user can do. The Activity class is also responsible for creating a window for you in which you can place your user interface (UI) using the setContentView() API. An Android Intent is an abstract description of an operation to be performed; it can be used with startActivity or startActivityForResult to launch an activity, as shown in the previous example of camera photo capturing.

If you are using Windows Phone 7 or 8, for example, the call to the getPicture() method of the camera object calls the following Windows Phone C# native code. Please note that this code is the actual code of the Apache Cordova Camera Windows Phone plugin. If you are a native Windows Phone developer, the next two code snippets will look very familiar to you:

CameraCaptureTask cameraTask;

public void takePicture(string options)
{
    // ... Code is omitted for simplifying things ...

    if (cameraOptions.PictureSourceType == CAMERA)
    {
        cameraTask = new CameraCaptureTask();
        cameraTask.Completed += onCameraTaskCompleted;
        cameraTask.Show();        
    }
    
    // ... Code is omitted for simplifying things ...
}

As shown in the preceding code, in order to open a camera on a Windows Phone device, you need to create an instance of CameraCaptureTask and call its Show() method. In order to receive the image capture result on the Windows Phone platform, you need to define an event handler that will be executed once the camera task completes; in the previous code, this handler is onCameraTaskCompleted. The following code snippet shows the onCameraTaskCompleted handler code with its helper methods:

public void onCameraTaskCompleted(object sender, PhotoResult e)
{
    // ... Code is omitted for simplifying things ...    
    switch (e.TaskResult)
    {
        case TaskResult.OK:
            try
            {
                string imagePathOrContent = string.Empty;

                if (cameraOptions.DestinationType == FILE_URI)
                {
                    // Save image in media library
                    if (cameraOptions.SaveToPhotoAlbum)
                    {
                        MediaLibrary library = new MediaLibrary();
                        Picture pict = library.SavePicture(e.OriginalFileName, e.ChosenPhoto); // to save to photo-roll ...
                    }

                    int orient = ImageExifHelper.getImageOrientationFromStream(e.ChosenPhoto);
                    int newAngle = 0;
                
                    // ... Code is omitted for simplifying things ...

                    Stream rotImageStream = ImageExifHelper.RotateStream(e.ChosenPhoto, newAngle);

                    // we should return stream position back after saving stream to media library
                    rotImageStream.Seek(0, SeekOrigin.Begin);

                    WriteableBitmap image = PictureDecoder.DecodeJpeg(rotImageStream);

                    imagePathOrContent = this.SaveImageToLocalStorage(image, Path.GetFileName(e.OriginalFileName));
                }
                else if (cameraOptions.DestinationType == DATA_URL)
                {
                    imagePathOrContent = this.GetImageContent(e.ChosenPhoto);
                }
                else
                {
                    // TODO: shouldn't this happen before we launch the camera-picker?
                    DispatchCommandResult(new PluginResult(PluginResult.Status.ERROR, "Incorrect option: destinationType"));
                    return;
                }

                DispatchCommandResult(new PluginResult(PluginResult.Status.OK, imagePathOrContent));

            }
            catch (Exception)
            {
                DispatchCommandResult(new PluginResult(PluginResult.Status.ERROR, "Error retrieving image."));
            }
            break;
            
            // ... Code is omitted for simplifying things ...
    }
}

If the camera capture operation succeeds, then e.TaskResult == TaskResult.OK will be true, and if the user requires the result of the captured image as a Base64-encoded string, then the captured image is retrieved and processed in the GetImageContent(stream) method. The GetImageContent(stream) method, which is shown in the following code snippet, converts the image to a Base64-encoded string that is returned to the JavaScript Cordova client using the DispatchCommandResult() method. We will illustrate the DispatchCommandResult() method in more detail later on in this book.

If the user requires the result of the captured image as a file URI string, then the file URI of the image file is retrieved using the SaveImageToLocalStorage() method (whose implementation is shown in the following code snippet) and is then sent to the JavaScript Cordova client using DispatchCommandResult():

private string GetImageContent(Stream stream)
{
    int streamLength = (int)stream.Length;
    byte[] fileData = new byte[streamLength + 1];
    stream.Read(fileData, 0, streamLength);

    //use photo's actual width & height if user doesn't provide width & height
    if (cameraOptions.TargetWidth < 0 && cameraOptions.TargetHeight < 0)
    {
        stream.Close();
        return Convert.ToBase64String(fileData);
    }
    else
    {
        // resize photo
        byte[] resizedFile = ResizePhoto(stream, fileData);
        stream.Close();
        return Convert.ToBase64String(resizedFile);
    }
}

private string SaveImageToLocalStorage(WriteableBitmap image, string imageFileName)
{
    // ... Code is omitted for simplifying things ...
    var isoFile = IsolatedStorageFile.GetUserStoreForApplication();
    if (!isoFile.DirectoryExists(isoFolder))
    {
        isoFile.CreateDirectory(isoFolder);
    }

    string filePath = System.IO.Path.Combine("///" + isoFolder + "/", imageFileName);

    using (var stream = isoFile.CreateFile(filePath))
    {
        // resize image if Height and Width defined via options 
        if (cameraOptions.TargetHeight > 0 && cameraOptions.TargetWidth > 0)
        {
            image.SaveJpeg(stream, cameraOptions.TargetWidth, cameraOptions.TargetHeight, 0, cameraOptions.Quality);
        }
        else
        {
            image.SaveJpeg(stream, image.PixelWidth, image.PixelHeight, 0, cameraOptions.Quality);
        }
    }

    return new Uri(filePath, UriKind.Relative).ToString();
}

As you can see from the Android and Windows Phone examples, in order to implement photo capture using the device camera on two mobile platforms, we had to use two different programming languages and deal with totally different APIs. Thanks to Apache Cordova's unified JavaScript programming interface, you don't need to know how each mobile platform handles the native details behind the scenes; you can focus on implementing your cross-platform mobile application's business logic with a single, clean code base.

By now, you should be comfortable with the Apache Cordova architecture. In the upcoming chapters of this book, we will explain the bits and pieces of Apache Cordova in more detail, and you will gain a deeper understanding of its architecture by creating your own custom Cordova plugin in Chapter 6, Developing Custom Cordova Plugins.