Hands-On Serverless Applications with Kotlin

By: Hardik Trivedi, Ameya Kulkarni

Overview of this book

Serverless is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. Many companies now use serverless architectures to cut costs and improve scalability. Thanks to its concise and expressive syntax and a smooth learning curve, Kotlin is a great fit for developing serverless applications. With this book, you’ll be able to put your knowledge to work by implementing serverless technology in your applications and become productive in no time. Complete with detailed explanation of essential concepts and examples, this book will help you understand the serverless architecture fundamentals and how to design serverless architectures for your applications. You’ll also explore how AWS Lambda functions work. The book will guide you in designing, building, securing, and deploying your application to production, along with implementing non-functional requirements such as auditing and logging. Furthermore, you’ll discover how to scale up and orchestrate serverless applications using an open source framework and handle distributed serverless systems in production. By the end of the book, you’ll be able to build scalable and cost-efficient Kotlin applications with a serverless framework.
Designing a Kotlin Serverless Application

Pros and cons of serverless

Now that we have defined serverless computing, we will explore its pros and cons.

Advantages of serverless systems

The following sections will cover the advantages of serverless systems.

Reduced operational costs

Serverless systems reduce operational costs along two dimensions: there are upfront savings on hardware, and cost savings achieved by outsourcing infrastructure management activities.

Optimized resource utilization

For a system with sporadic or seasonal traffic, it doesn't make sense for companies to invest in the upkeep of hardware capacity catering to peak loads. Serverless empowers companies to design applications that scale up and down transparently, as per the demands of the load. This enables optimum resource utilization, saving costs and reducing the impact on the environment.

Faster time to market

The promise of serverless is to empower developers to focus only on developing business logic and delivering cutting-edge user experiences. Serverless stays true to this by abstracting away the infrastructure plumbing and wiring as a turnkey solution. The time to market is therefore greatly reduced.

For example, suppose that an API you wrote is seeing exceptional traction. To further drive adoption and fuel growth, an Alexa skill seems like the perfect next step. Exposing the feature as an Alexa skill is easy, leveraging the existing integration between AWS Lambda and Amazon Lex.
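A minimal Kotlin sketch of what such a handler's business logic might look like. The request and response types and the intent names are invented for illustration; a real deployment would implement AWS Lambda's RequestHandler interface rather than expose a bare function.

```kotlin
// Illustrative request/response types -- not the real Alexa Skills Kit or
// Amazon Lex payloads, just a stand-in shape for the example.
data class SkillRequest(val intent: String, val slots: Map<String, String>)
data class SkillResponse(val speech: String)

// Pure business logic. In a real deployment this would sit behind AWS
// Lambda's RequestHandler interface; the intent names are hypothetical.
fun handleSkillRequest(request: SkillRequest): SkillResponse =
    when (request.intent) {
        "GetOrderStatus" ->
            SkillResponse("Your order ${request.slots["orderId"]} is on its way.")
        else -> SkillResponse("Sorry, I did not understand that.")
    }
```

Keeping the handler a pure function like this also makes it trivial to reuse the same logic behind a REST endpoint and a voice skill.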

High development velocity and a laser-sharp focus on authoring code

As mentioned in the previous section, serverless empowers developers to have a laser-sharp focus on authoring business logic and new user experiences. This greatly accelerates the development velocity and enables a faster time to market.

Promoting a microservices architecture

The serverless paradigm is tailor-made for designing a system based on a microservices architecture. Because of the way serverless computing providers offer their services, one ends up developing serverless systems as a set of loosely coupled and highly cohesive components with separated concerns.

Although traditional architectures can be reimagined in a microservices-based architecture, there is a hidden cost with respect to the maintenance and infrastructural management that is not immediately visible, but becomes acute at scale.

Drawbacks of serverless systems

There are no free lunches in life, and serverless architectures come with their own set of drawbacks that have to be considered by architects creating such systems.

Nascent ecosystem

As discussed previously, the serverless paradigm is a recent advancement, and teething problems are to be expected. The knowledge base of the serverless paradigm can be significantly smaller than that of its traditional counterparts, which can be attributed to it being a new paradigm on a steady adoption curve. Nonetheless, troubleshooting and clearing blocker issues can be a daunting and time-consuming task, especially if one encounters a hitherto unknown issue.

Yielding of control

As with cloud computing, adopters of the serverless paradigm make a conscious decision to host their artifacts in the cloud provider's infrastructure. This is referred to as yielding control to the providers. It is obvious that the production systems are exposed to the vagaries of the environment of the provider. Internal issues affecting the providers indirectly affect your production systems. The big players in the market, like AWS, Google, and Azure, among others, invest heavily in mitigating and reducing such impacts, but there are times when things do go south. Adopters need to take cognizance of this fact and design their serverless systems to be adaptable and fault tolerant.

For example, during the AWS US-East-1 outage in early 2017, adopters that relied solely on AWS's service uptime guarantee faced significant outages. Adopters that had planned for such a failure, like Netflix, did not.

For systems requiring stricter compliance, serverless might not be the right choice, as such compliance typically requires on-premise, strictly controlled hardware.

Opinionated offerings

As mentioned previously, serverless is not only Functions as a Service (FaaS); it encompasses other peripheral and mission-critical components, abstracted away as turnkey offerings. Because they are abstracted away, these offerings are designed in an opinionated manner that the provider deems appropriate. This takes some flexibility away from adopters who want to support a custom use case in their systems.

Provider limits

Although serverless claims to work on a share-nothing paradigm, the reality is that providers operate in a multi-tenant fashion. To serve every customer under a fair-usage policy, providers enforce limits to avoid resource hogging.

Limits are typically enforced on the duration of the execution, the size of the function, network utilization, storage capacity, memory usage, thread count, request and response size, and the number of concurrent executions per customer. These limits will be increased as more and more hardware capacity is added, but there will always be a hard stop. Serverless systems need to be designed with these limits in mind.
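One way to design with such limits in mind is to split work into batches that each fit under the payload-size cap. The sketch below is illustrative; the limit value should be treated as configuration looked up from your provider's current quotas, not a hard-coded figure.

```kotlin
// Sketch: splitting records into batches that each fit under a payload
// size limit. maxBytes is configuration -- check your provider's current
// quota rather than hard-coding a number.
fun chunkByPayloadLimit(records: List<String>, maxBytes: Int): List<List<String>> {
    val batches = mutableListOf<List<String>>()
    var current = mutableListOf<String>()
    var currentBytes = 0
    for (record in records) {
        val size = record.toByteArray(Charsets.UTF_8).size
        if (current.isNotEmpty() && currentBytes + size > maxBytes) {
            batches.add(current)  // close the full batch and start a new one
            current = mutableListOf()
            currentBytes = 0
        }
        current.add(record)
        currentBytes += size
    }
    if (current.isNotEmpty()) batches.add(current)
    return batches
}
```

Each resulting batch can then be dispatched as a separate invocation, keeping every request comfortably under the enforced limit.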

A lack of standardized, provider-agnostic offerings

Because the serverless ecosystem is in a nascent state, there is no standardized implementation of services across vendors. This locks adopters in to a particular vendor. While that is not necessarily a bad thing with established players like AWS or Google, business requirements can mandate a provider migration. Such an exercise is in no way trivial, and can incur significant rewrites.

Tooling

It is early days for serverless systems, and the toolchain is still evolving. Compared to their traditional counterparts, which have battle-tested and widely adopted tooling for building, deployment, configuration management, monitoring, and so on, serverless systems don't have a standardized, go-to toolchain. However, frameworks like the Serverless Framework are quickly evolving to fill this gap.

Competition from containers

Containers are another exciting paradigm, providing new ways to develop modern systems. They address some of the pain points of serverless, offering near-limitless scaling, flexibility, control, and testability, but at the cost of maintainability. The adoption of Docker and Kubernetes has been on the rise, and has yielded many success stories.

There will be a time when the concepts of serverless and containers will merge and create a hybrid paradigm, leveraging the best of both worlds. It is indeed an exciting time.

Rethinking serverless

There are some concepts in serverless architectures that are not immediately obvious to someone seasoned in developing systems the traditional way. Although these are not necessarily drawbacks of serverless architectures, their ramifications need to be examined, as they precipitate a change in the well-established mindset of the adopter.

Let's take a look at some of them, in detail.

An absence of local states

In traditional architectures, because the code is guaranteed to execute in a single runtime, it is taken for granted that output can be chained or piped from one component to another. This is called local state. Because serverless systems are in fact ephemeral computational units, it is impossible to pass local state created or mutated as part of a computation to downstream functions or components without storing it in an intermediate datastore.

It is important to note that this is not necessarily a drawback, as modern systems are recommended to be stateless, and should share nothing. However, it takes a significant mindset shift, especially for new serverless adopters.

For example, consider the authentication (AuthN) of a REST API created using AWS Lambda. Creating sticky sessions (as one would in a traditional web application) is impossible. AuthN is instead achieved with bearer authentication: clients are identified by tokens, which are issued once and subsequently sent with every request. Such tokens have to be stored in a read- and write-optimized datastore, like Redis, where the ephemeral functions can access them with a simple lookup. This simple pattern eliminates the need for local state.
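The token-lookup pattern can be sketched as follows. TokenStore and its in-memory implementation are stand-ins for a real Redis client; an actual ephemeral function could not rely on an in-process map surviving between invocations.

```kotlin
// TokenStore stands in for a read/write-optimized external store such as
// Redis; the in-memory map below is purely for illustration.
interface TokenStore {
    fun lookup(token: String): String?  // returns the user id, or null if unknown
}

class InMemoryTokenStore(private val tokens: Map<String, String>) : TokenStore {
    override fun lookup(token: String): String? = tokens[token]
}

// Resolves a bearer token from the Authorization header to a user id.
fun authenticate(authHeader: String?, store: TokenStore): String? {
    val token = authHeader
        ?.takeIf { it.startsWith("Bearer ") }
        ?.removePrefix("Bearer ")
        ?.trim()
        ?: return null
    return store.lookup(token)
}
```

Because every invocation performs the same lookup against the shared store, any function instance can serve any client.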

Applying modern software development practices

The nascent nature of serverless architectures makes it difficult to develop them by applying modern development practices, like CI, versioning, deployment, unit testing, and so on. Tooling platforms like the Serverless Framework are quickly creating mechanisms to enable this, but they might not be obvious to a new adopter coming from a traditional mindset.

Time-boxed execution

As we explained previously, the building blocks of serverless systems are ephemeral functions that execute in a time-boxed manner. The corollary is obvious: each function has to have a well-defined execution boundary. The ideal candidates to run as Functions as a Service are therefore deterministic computations that are guaranteed to return results in a finite amount of time. Adopters have to be careful when architecting long-running, probabilistic jobs in a serverless manner; running such jobs can incur heavy costs, which defeats the purpose of adopting serverless.
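One defensive pattern for time-boxed execution is to check the remaining execution budget and hand back unfinished work for a later invocation. In the sketch below, the remainingMillis supplier stands in for AWS Lambda's Context.getRemainingTimeInMillis(); the safety buffer is an assumed value that leaves time to persist progress before termination.

```kotlin
// Processes items only while enough execution time remains, returning
// whatever is left over so the caller can re-queue it. remainingMillis
// stands in for AWS Lambda's Context.getRemainingTimeInMillis().
fun processWithinLimit(
    items: List<String>,
    remainingMillis: () -> Long,
    safetyBufferMillis: Long = 1_000L,
    process: (String) -> Unit
): List<String> {
    val pending = items.toMutableList()
    while (pending.isNotEmpty() && remainingMillis() > safetyBufferMillis) {
        process(pending.removeAt(0))
    }
    return pending  // unprocessed items, to be re-queued for another invocation
}
```

The leftover items would typically be pushed back onto a queue, so the next invocation picks up exactly where this one stopped.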

Startup latency

Serverless' building blocks are ephemeral, time-boxed functions that are executed in response to specific triggers or events generated upstream. The runtimes for these functions are configured and provisioned by the providers on demand. With a runtime that requires some startup time, like the JVM, the function's execution time is padded by the startup time. This can be tricky for real-time operations, as it presents a lagging user experience. There are, of course, workarounds for such problems, but this has to be taken into account when creating solutions powered by serverless architectures.
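A common mitigation is to pay expensive initialization costs once per container rather than once per invocation, by holding heavy dependencies outside the handler. ExpensiveClient here is hypothetical; imagine it loading configuration or opening connection pools.

```kotlin
// ExpensiveClient is hypothetical -- imagine it loading configuration and
// opening connection pools in a real system.
class ExpensiveClient {
    fun call(input: String): String = "handled:$input"
}

object HandlerDependencies {
    // Lazily initialized once per JVM (i.e. per warm container), then
    // reused across subsequent invocations.
    val client: ExpensiveClient by lazy { ExpensiveClient() }
}

// The per-invocation handler reuses the already-warm client on every call.
fun handle(input: String): String = HandlerDependencies.client.call(input)
```

Only the first invocation on a cold container pays the initialization cost; warm invocations skip it entirely.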

Testability

The development of traditional systems has been governed by a well-defined protocol for integration testing. Applying that knowledge to the serverless world is tricky, and often requires jumping through hoops to achieve it. Because serverless systems run in ephemeral environments, with an inability to chain output to downstream components, integration testing is not as straightforward as it is in traditional systems.
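While integration testing remains hard, unit testing need not be: keeping the serverless entry point a thin wrapper over pure functions lets the business logic be tested without any cloud emulation. The pricing rule below is a hypothetical example of such a function.

```kotlin
// A hypothetical pricing rule kept free of any Lambda or HTTP types, so
// it can be unit tested directly. The thin serverless entry point would
// only parse the incoming event and delegate here.
fun computeDiscountCents(totalCents: Long): Long =
    if (totalCents >= 10_000L) totalCents / 10  // 10% off orders of $100+
    else 0L
```

The ephemeral wiring (event parsing, response serialization) is then the only part that needs integration-level coverage.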

Debugging

Because serverless systems run in environments not under the adopters' control, debugging issues in production can be difficult. In traditional systems, one could attach a remote debugger to the production runtime when troubleshooting issues. Such a mechanism is not possible in the serverless world. Previously, the only way to work around this was to instrument the code execution. But providers have taken cognizance of this fact and are shipping tooling to support this. It is not complete and overarching, but the tooling will get there in due time.

It is important to note that even these drawbacks are not really deal breakers; there are workarounds for them, and, as the serverless paradigm evolves and the tooling gets standardized, we will see their impact being mitigated in the near future.