
Gradle

We must also have Gradle installed on our system. Gradle is a modern build tool that gained popularity with Android. It uses a Groovy-based DSL instead of XML files and mixes declarative and imperative styles of build configuration. With Gradle, you can define dependencies and project properties, and you can also write functions. We will leverage Gradle to build our deployment system, so with only one command we will be able to deploy all our software to the cloud.

Throughout the book, we will use the Gradle wrapper, which locks the Gradle version for the project and thus keeps builds consistent across different teams. However, in order to run the gradle wrapper task, which will create the Gradle wrapper files in our project, we must have at least one Gradle version installed locally on our system.

If you do not have it already, execute the following:

    $ curl -s https://get.sdkman.io | bash  

Then, open a new terminal and type this:

    $ sdk install gradle 2.14  

This will install the Gradle 2.14 version.
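
To double-check the installation, you can ask Gradle for its version (the exact output depends on your environment):

    $ gradle --version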

Creating the project

Finally, we can start creating our project. We will create our project in our home directory, so we can start with these commands:

    $ mkdir -p ~/serverlessbook
    $ cd ~/serverlessbook 

Once we create the working directory, we can create the build.gradle file, which will be the main build file of our project:

    $ touch build.gradle

We can start with the Gradle wrapper task, which will generate Gradle files in our project. Write this block into the build.gradle file:

task wrapper(type: Wrapper) { 
  gradleVersion = '2.14' 
} 

And then execute the command:

    $ gradle wrapper 

This will create the Gradle wrapper files in our project. This means that in the root directory of the project, ./gradlew can be called instead of the local gradle. It is a nice feature of Gradle: let's assume that you distributed your project to other team members and you are not sure whether they have Gradle installed on their system (or which version they have if they do). With the Gradle wrapper, you make sure that everybody who checks out the project will run Gradle 2.14 when they run ./gradlew. If they do not have any Gradle version on their system, the script will download it.
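
If you are curious about what the wrapper task generated, it typically boils down to a handful of files in the project root (a quick sketch; the exact contents may vary slightly between Gradle versions):

    gradlew                                   (Unix wrapper script)
    gradlew.bat                               (Windows wrapper script)
    gradle/wrapper/gradle-wrapper.jar         (bootstrap JAR used by the scripts)
    gradle/wrapper/gradle-wrapper.properties  (pins the Gradle version, 2.14 here)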

We can now proceed to add the declarations needed for all projects. Add this code block to the build.gradle file:

// allprojects means this configuration 
// will be inherited by the root project itself and subprojects 
allprojects { 
   // Group id of the project 
   group 'com.serverlessbook' 
   // Version of the project 
   version '1.0' 
   // Gradle Java plugin needed for Java support 
   apply plugin: 'java' 
   // We will be using Java 8, hence 1.8 
   sourceCompatibility = 1.8 
} 

With this code block, we tell Gradle that we are building a Java 8 project with the group com.serverlessbook and version 1.0.

Also, we need to create the settings.gradle file, which will include some generic settings about the project and subproject names in the future. In the root project, create a new file with the name settings.gradle and type this line:

rootProject.name = 'forum' 

Actually, this line is optional. When the root project is not given a name explicitly, Gradle assigns the name of the directory where the project is placed as the project name. For consistency, however, it is a good idea to name the project explicitly, because other developers may check out our code into a directory with a different name, and we would not want our project to end up with a different name in that case.

In our Gradle build script, we can access important values about the project through variables such as project.name and project.version.
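
As a quick illustration (a throwaway sketch, not part of our build), you could add a task like the following to build.gradle and run ./gradlew printProjectInfo to see these values printed:

// Hypothetical task, shown only to illustrate reading project properties 
task printProjectInfo { 
  doLast { 
    println "Building ${project.name} version ${project.version}" 
  } 
} 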

Now we should add repositories to fetch the dependencies for the project itself and the build script. In order to accomplish this, first, we have to add this block to the build.gradle file:

allprojects { 
  repositories { 
    mavenCentral() 
    jcenter() 
    maven { 
      url "https://jitpack.io" 
    } 
  } 
} 

Here, we defined Maven Central, Bintray JCenter, and JitPack, three of the most popular repositories. We need the same repositories for the build script, so we add the following block to the same file:

buildscript { 
  repositories { 
    mavenCentral() 
    jcenter() 
    maven { 
      url "https://jitpack.io" 
    } 
  } 
} 

Repositories and dependencies defined in buildscript are used only by the Gradle build script itself. We will make heavy use of build script dependencies because our Gradle script will manage the deployment process; therefore, it is important that you add these repositories for the build script as well.

Implementing the Lambda Dependency

In the previous section, we already finished the generic Gradle setup. In this section, we will learn how to write Lambda functions and create the very core part of our project that will be the entry point for all our Lambda functions.

In our project, we will have more than one AWS Lambda function, one for each REST endpoint and several more for auxiliary services. These functions will share some common code and dependencies; therefore, it is convenient to create a subproject under our root project. In Gradle, subprojects act like different projects, but they can inherit the build configuration from their root project. In any case, these projects will be compiled independently and produce different JAR files in their respective build directories.

In our project structure, one subproject will include the common code we will need for every single Lambda function, and this subproject will be required as a dependency by the other subprojects that implement the individual Lambda functions. As a naming convention, the core Lambda subproject will be called lambda, while the individual Lambda functions that will be deployed will be named with the lambda- prefix.
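
To visualize where we are heading, the directory layout will look roughly like this (the lambda-test subproject shown here is created later in this chapter):

    serverlessbook/
    ├── build.gradle
    ├── settings.gradle
    ├── lambda/          <- shared core code for every Lambda function
    └── lambda-test/     <- an individual Lambda function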

We can start implementing this core AWS Lambda subproject by creating a new directory with that name under our root directory:

    $ mkdir lambda  

Then, let's create a new build.gradle file for the newly created subproject:

    $ touch lambda/build.gradle

By default, Gradle will not recognize the new subproject just because we created a new directory under the root directory. To make Gradle recognize it as a subproject, we must add a new include directive to the settings.gradle file. This command will add the new line to settings.gradle:

    $ echo $"include 'lambda'" >> settings.gradle

From this point on, our subproject can inherit directives from the root project, so we will not have to repeat most of them.
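
For reference, at this point settings.gradle should contain just these two lines:

rootProject.name = 'forum' 
include 'lambda'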

Now we can define the required dependencies for our main Lambda library. At this point, we will need only the aws-lambda-java-core and jackson-databind packages. While the former is the standard AWS library for Lambda functions, the latter is used for JSON serialization and deserialization purposes, which we will use heavily. In order to add these dependencies, just add these lines to the lambda/build.gradle file:

dependencies { 
  compile 'com.amazonaws:aws-lambda-java-core:1.1.0' 
  compile 'com.fasterxml.jackson.core:jackson-databind:2.6.+' 
} 

Previously, we mentioned that AWS Lambda invokes a specific method of every Lambda function, injecting the event data into it and treating the method's return value as the Lambda response. To determine which method to invoke, AWS Lambda leverages interfaces. aws-lambda-java-core includes the RequestStreamHandler interface in the com.amazonaws.services.lambda.runtime package. In our base Lambda package, we will create a class that implements this interface.
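
For reference, RequestStreamHandler is essentially a single-method contract along these lines (a simplified sketch of the interface shipped in aws-lambda-java-core):

public interface RequestStreamHandler { 
  void handleRequest(InputStream input, OutputStream output, Context context) throws IOException; 
} 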

Now let's create our first package and implement the LambdaHandler<I, O> class inside it:

    $ mkdir -p lambda/src/main/java/com/serverlessbook/lambda
    $ touch lambda/src/main/java/com/serverlessbook/lambda/LambdaHandler.java

Let's start implementing our class:

package com.serverlessbook.lambda; 

import com.amazonaws.services.lambda.runtime.Context; 
import com.amazonaws.services.lambda.runtime.RequestStreamHandler; 
import com.fasterxml.jackson.databind.ObjectMapper; 

import java.io.IOException; 
import java.io.InputStream; 
import java.io.OutputStream; 
import java.lang.reflect.ParameterizedType; 

public abstract class LambdaHandler<I, O> implements RequestStreamHandler { 

  @Override 
  public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException { 
  } 

  public abstract O handleRequest(I input, Context context); 
} 

As you may have noted, this class uses generics. The inheriting classes are expected to provide a handleRequest abstract method that accepts one POJO (Plain Old Java Object) and returns another POJO. The overridden handleRequest method, on the other hand, gets the AWS Lambda event data as an InputStream and must write the output JSON to the OutputStream. Our base LambdaHandler class will therefore implement methods that deserialize the incoming JSON from the InputStream into the input POJO and serialize the output POJO as JSON to the OutputStream. The I and O type references are the key point here: using this information, our base class will know which POJO classes it should use when it carries out the transformation.

If you have ever read the AWS Lambda documentation, you might have seen the RequestHandler interface in the AWS Lambda library, which does exactly what we will do in our base class. However, Lambda's built-in JSON serialization does not meet the requirements of our project because it does not support advanced features of the Jackson JSON library. That's why we are implementing our own JSON serializer. If you are building a simple Lambda function that does not require these advanced options, you can check out https://docs.aws.amazon.com/lambda/latest/dg/java-handler-io-type-pojo.html and use the built-in serializer.
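
For comparison, the built-in handler interface mentioned above looks roughly like this; it already works with POJOs, but the serialization is handled by Lambda itself rather than by a Jackson ObjectMapper we control:

public interface RequestHandler<I, O> { 
  O handleRequest(I input, Context context); 
} 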

Before we go on implementing the base Lambda handler, I suggest that you take the TDD (Test-Driven Development) approach and write a test class for the planned implementation. Having the test class will better explain which kind of implementation we need and will draw a clear picture of the next step.

Before we start implementing the test, we first have to add JUnit as a dependency to our project. Open build.gradle in the root project and add these lines to the end:

allprojects { 
  dependencies { 
    testCompile group: 'junit', name: 'junit', version: '4.11' 
  } 
} 

Then, let's create our first test file:

    $ mkdir -p lambda/src/test/java/com/serverlessbook/lambda
    $ touch lambda/src/test/java/com/serverlessbook/lambda/LambdaHandlerTest.java

We can then start implementing it by writing the following code to the LambdaHandlerTest file we've just created. First of all, inside the test class, we will create two stub POJOs and a LambdaHandler implementation to run the test against:

public class LambdaHandlerTest { 
  protected static class TestInput { 
    public String value; 
  } 
  protected static class TestOutput { 
    public String value; 
  } 
  protected static class TestLambdaHandler extends LambdaHandler<TestInput, TestOutput> { 
    @Override 
    public TestOutput handleRequest(TestInput input, Context context) { 
      TestOutput testOutput = new TestOutput(); 
      testOutput.value = input.value; 
      return testOutput; 
    } 
  } 
} 

Here, we have the sample TestInput and TestOutput classes, which are simple POJO classes with one field each, and a TestLambdaHandler class that extends the LambdaHandler class with type references to these POJO classes. As you may have noted, the stub class does not do much; it simply returns a TestOutput object with the same value it receives.

Finally, we can add the test method that will exactly emulate the AWS Lambda runtime and carry out a black-box test of our TestLambdaHandler class:

@Test 
public void handleRequest() throws Exception { 
   String jsonInputAndExpectedOutput = "{\"value\":\"testValue\"}"; 
   InputStream exampleInputStream = new ByteArrayInputStream( 
       jsonInputAndExpectedOutput.getBytes(StandardCharsets.UTF_8)); 
   OutputStream exampleOutputStream = new OutputStream() { 
      private final StringBuilder stringBuilder = new StringBuilder(); 

      @Override 
      public void write(int b) { 
         stringBuilder.append((char) b); 
      } 

      @Override 
      public String toString() { 
         return stringBuilder.toString(); 
      } 
   }; 
   TestLambdaHandler lambdaHandler = new TestLambdaHandler(); 
   lambdaHandler.handleRequest(exampleInputStream, exampleOutputStream, null); 
   assertEquals(jsonInputAndExpectedOutput, exampleOutputStream.toString()); 
} 

To run the test, we can execute this command:

    $ ./gradlew test

Once you run the command, you will see that the test fails. It is normal for our test to fail because we have not yet completed the implementation of our LambdaHandler class, and this is how Test-Driven Development works: first write the test, and then implement the code until the test goes green.

I think it is time to move on to the implementation. Open the LambdaHandler class again, add a field of Jackson's ObjectMapper type, and create the default constructor to initialize this object. You can add the following code to the beginning of the class:

final ObjectMapper mapper; 
 
protected LambdaHandler() { 
    mapper = new ObjectMapper(); 
} 

AWS Lambda does not create an object from the handler class for every new request. Instead, it creates an instance of the class for the first request (the 'heat up' stage) and reuses the same instance for subsequent requests. This object will stay in memory for about 20 minutes if there is no subsequent request for that Lambda function. It is good to know about this undocumented behavior because it means that we can cache objects across different requests using object properties, as we do here for ObjectMapper. In this case, ObjectMapper will not be created for every request; it will be 'cached' in memory. However, you should think of the handler object like a Servlet and pay attention to thread safety before you decide to use object properties.
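
As a purely hypothetical illustration of this behavior (the class and client names below are made up and not part of our project), anything created in a field initializer or constructor survives across warm invocations, whereas anything created inside handleRequest is rebuilt on every call:

public class ExpensiveResourceHandler extends LambdaHandler<SomeInput, SomeOutput> { 
  // Hypothetical expensive client: created once per container and reused while the instance stays warm 
  private final SomeExpensiveClient client = new SomeExpensiveClient(); 

  @Override 
  public SomeOutput handleRequest(SomeInput input, Context context) { 
    // Created on every single invocation 
    SomeOutput output = new SomeOutput(); 
    output.value = client.lookup(input.value); 
    return output; 
  } 
} 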

Now we need helper methods in the handler for serialization and deserialization. First, we need a method to get the Class object for the I type reference:

@SuppressWarnings("unchecked") 
private Class<I> getInputType() { 
  return (Class<I>) ((ParameterizedType) getClass() 
      .getGenericSuperclass()).getActualTypeArguments()[0]; 
} 

Next, we can add the deserializer and serializer methods:

private I deserializeEventJson(InputStream inputStream, Class<I> clazz) throws IOException { 
  return mapper.readerFor(clazz).readValue(inputStream); 
} 

private void serializeOutput(OutputStream outputStream, O output) throws IOException { 
  mapper.writer().writeValue(outputStream, output); 
} 

Finally, we can implement the handler method:

@Override 
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException { 
  I inputObject = deserializeEventJson(input, getInputType()); 
  O handlerResult = handleRequest(inputObject, context); 
  serializeOutput(output, handlerResult); 
} 

It seems we are good to go. Let's run the test again:

    $ ./gradlew test 

Congratulations! We completed an important step and built the base class for our Lambda functions.

Hello Lambda!

We are now ready to implement our first Lambda function, which we will simply upload to the cloud via the AWS CLI and invoke manually.

First, we have to create a new subproject, like we did earlier. This time, the subproject will be called lambda-test. We can easily do that with these commands:

    $ mkdir -p lambda-test/src/main/java/com/serverlessbook/lambda/test
    $ echo $"include 'lambda-test'" >> settings.gradle
    $ touch lambda-test/src/main/java/com/serverlessbook/lambda/test/Handler.java

We can create a blank class in Handler.java like this:

package com.serverlessbook.lambda.test; 
public class Handler {} 

Note that we've already chosen a naming convention for packages: while our base Lambda package sits in the com.serverlessbook.lambda package, individual Lambda functions live in packages named with the com.serverlessbook.lambda.{function-name} format. We will also call handler classes Handler because it reads nicely in English: Handler extends LambdaHandler. This naming convention is, of course, up to you and your team, but it is convenient to keep things organized.

If you are already familiar with the Gradle build mechanism, you might have realized that before we proceed to implement Lambda's handler function, we have to add the lambda subproject to lambda-test as a dependency, and that is a very valid point. The easiest way to do that would be to create a build.gradle file for the lambda-test subproject, add the dependency in its dependencies {} block, and move on. On the other hand, we know that our project will include more than one Lambda function, and all of them will share the same build configuration. Putting this configuration in a central location is a very good idea for clear organization and maintainability. Fortunately, Gradle is a powerful tool that allows such scenarios: we can create a build configuration block in our root project and apply it only to the subprojects whose names start with lambda-, in accordance with our subproject naming convention. So, let's edit our root build.gradle and add this block to the end of the file:

configure(subprojects.findAll()) { 
  if (it.name.startsWith("lambda-")) { 
  } 
} 

This tells Gradle to apply the configuration only to the Lambda projects. Inside this block, we will add some important configuration, but for now, we can start with the most important dependency and edit the block to look like this:

configure(subprojects.findAll()) { 
  if (it.name.startsWith("lambda-")) { 
    dependencies { 
      compile project(':lambda') 
    } 
  } 
} 

In this step, we have to add another important piece of build configuration, which is the Shadow plugin. The Shadow plugin creates an uber-JAR (also known as a fat JAR, or a JAR with dependencies), which is required by AWS Lambda. After each build phase, this plugin will package all the dependencies along with that project's compiled classes into a second, bigger JAR file, which will be our deployment package for AWS Lambda. To install this plugin, we first have to edit the buildscript configuration of the root build.gradle file. After editing, the buildscript section should look like this:

buildscript { 
  repositories { 
    mavenCentral() 
    jcenter() 
    maven { 
      url "https://jitpack.io" 
    } 
  } 
 
  dependencies { 
    classpath "com.github.jengelman.gradle.plugins:shadow:1.2.3" 
  } 
} 

We now have to apply the plugin to all Lambda functions. To do so, we add two lines to the Lambda subprojects' configuration block, and the final version should look like this:

configure(subprojects.findAll()) { 
  if (it.name.startsWith("lambda-")) { 
     dependencies { 
        compile project(':lambda') 
     } 
 
     apply plugin: "com.github.johnrengelman.shadow" 
     build.finalizedBy shadowJar 
  } 
} 

The first line applies the Shadow plugin, which adds the shadowJar task to every lambda- subproject. The second directive ensures that the shadowJar task is executed automatically after every build task, so that an uber-JAR ends up in the build directory.
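
If you ever want just the fat JAR without triggering the rest of the build, you should also be able to invoke the task directly (a usage sketch):

    $ ./gradlew shadowJar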

You can try our basic build configuration by running this command in the root directory:

    $ ./gradlew build

You will see the uber-JAR file lambda-test-1.0-all.jar in the lambda-test/build/libs directory.

Now we are going to implement the handler function with very basic functionality, similar to what we did previously to test the base handler. For the sake of simplicity, we will define the input and output classes as static inner classes, although this is not the recommended way of creating classes in Java. Now open the Handler class and edit it like this:

package com.serverlessbook.lambda.test; 
 
import com.amazonaws.services.lambda.runtime.Context; 
import com.serverlessbook.lambda.LambdaHandler; 
 
public class Handler extends LambdaHandler<Handler.TestInput, Handler.TestOutput> { 
    static class TestInput { 
        public String value; 
    } 
    static class TestOutput { 
        public String value; 
    } 
    @Override 
    public TestOutput handleRequest(TestInput input, Context context) { 
        TestOutput testOutput = new TestOutput(); 
        testOutput.value = input.value; 
        return testOutput; 
    } 
} 

That's it; we now have a very basic Lambda function that is ready to be deployed to the cloud. In the next section, we will deploy and run it on the AWS Lambda runtime.

Deploying to the Cloud

Approaching the end of this chapter, we have one last step, which is deploying our code to the cloud. In the following chapters, we will learn how to use CloudFormation for a production-ready deployment process. However, nothing prevents us from using the CLI to play a bit with Lambda at this stage.

Previously, we mentioned that AWS resources are protected by IAM policies, and we created a user and attached a policy to it. IAM has another entity type called a role. Roles are very similar to users: they are also identities and can access the resources allowed by the policies attached to them. However, while a user is associated with one person, a role can be assumed by whoever needs it. Lambda functions use roles to access other AWS resources: every Lambda function should be associated with a role (the execution role), and the function can call any resource that the policies attached to that role allow.
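
Concretely, what makes a role assumable by the Lambda service is its trust policy. The console steps below create it for you, but behind the scenes it looks roughly like this minimal sketch:

{ 
  "Version": "2012-10-17", 
  "Statement": [ 
    { 
      "Effect": "Allow", 
      "Principal": { "Service": "lambda.amazonaws.com" }, 
      "Action": "sts:AssumeRole" 
    } 
  ] 
} 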

In the following chapters, while we create our CloudFormation stack, we will create more advanced role definitions. However, at this stage, our test Lambda function does not need to access any AWS resources; thus, a basic role with minimum access rights will be sufficient to run the example. In this section, you will create an IAM role using the following predefined role type and access policy:

  • The AWS service role of the AWS Lambda type. This role grants AWS Lambda permission to assume the role.
  • The AWSLambdaBasicExecutionRole access policy that you attach to the role. This managed policy grants permissions for Amazon CloudWatch actions that your Lambda function needs for logging and monitoring.

To create the IAM role:

  1. Sign in to the Identity and Access Management (IAM) console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Roles and then choose Create New Role.
  3. Enter a role name, say, lambda-execution-role, and then choose Next Step.
  4. On the next screen, select AWS Lambda in the AWS Service Roles section.
  5. In Attach Policy, choose AWSLambdaBasicExecutionRole and then proceed.
  6. Take note of the ARN of the role you just created.

Now we are ready to deploy our first Lambda function. First, let's build our project again using the build command:

    $ ./gradlew build

Check whether the uber-JAR file is created in the build folder. Then, create the function using AWS CLI:

    $ aws lambda create-function \ 
      --region us-east-1 \ 
      --function-name book-test \ 
      --runtime java8 \ 
      --role ROLE_ARN_YOU_CREATED \ 
      --handler com.serverlessbook.lambda.test.Handler \ 
      --zip-file fileb://${PWD}/lambda-test/build/libs/lambda-test-1.0-all.jar

If everything goes well, you should see output like the following:

{ 
   "CodeSha256": "6cSUk4g8GdlhvApF6LfpT1dCOgemO2LOtrH7pZ6OATk=", 
   "FunctionName": "book-test", 
   "CodeSize": 1481805, 
   "MemorySize": 128, 
   "FunctionArn": "arn:aws:lambda:us-east-1:YOUR_ACCOUNT_ID:
function:book-test", "Version": "$LATEST", "Role": "arn:aws:iam::YOUR_ACCOUNT-ID:role/lambda-execution-role", "Timeout": 3, "LastModified": "2016-08-22T22:12:30.419+0000", "Handler": "com.serverlessbook.lambda.test.Handler", "Runtime": "java8", "Description": "" }

This means that your function has been created. You can navigate to the Lambda section of the AWS console at https://console.aws.amazon.com/lambda (making sure the us-east-1 region is selected) to check whether your function is there. To execute the function, you can use the following command:

    $ aws lambda invoke --invocation-type RequestResponse \ 
                        --region us-east-1 \ 
                        --profile serverlessbook \ 
                        --function-name book-test \ 
                        --payload '{"value":"test"}' \ 
                        --log-type Tail \ 
                        /tmp/test.txt 

You can see the output value in the /tmp/test.txt file; try the command with different values to see different outputs. Note that the first invocation is always slower, while subsequent calls are significantly faster. This is because of the heat-up mechanism of AWS Lambda, which we will mention later in the book.
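
For example, since our test handler simply echoes its input, inspecting the response file should show something like this:

    $ cat /tmp/test.txt
    {"value":"test"}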

Congratulations, and welcome to the world of AWS Lambda officially!