Cloud Native Applications with Ballerina

By: Dhanushka Madushan

Overview of this book

The Ballerina programming language was created by WSO2 for the modern needs of developers, where cloud native development techniques have become ubiquitous. Ballerina simplifies how programmers develop and deploy cloud native distributed apps and microservices. Cloud Native Applications with Ballerina will guide you through Ballerina essentials, including variables, types, functions, flow control, security, and more. You'll explore networking as an in-built feature in Ballerina, which makes it a first-class language for distributed computing. With this app development book, you'll learn about different networking protocols as well as different architectural patterns that you can use to implement services on the cloud. As you advance, you'll explore multiple design patterns used in microservice architecture and use serverless on the Amazon Web Services (AWS) and Microsoft Azure platforms. You will also get to grips with Docker, Kubernetes, and serverless platforms to simplify maintenance and the deployment process. Later, you'll focus on the Ballerina testing framework along with deployment tools and monitoring tools to build fully automated, observable cloud applications. By the end of this book, you will have learned how to apply the Ballerina language to build scalable, resilient, secure, and easy-to-maintain cloud native projects and applications.
Table of Contents (15 chapters)

Section 1: The Basics
Section 2: Building Microservices with Ballerina
Section 3: Moving on with Cloud Native

Serverless architecture

There are many components that developers need to handle when designing microservices. The developer needs to create installation scripts to containerize and deploy applications, so engineering a microservice architecture carries the additional cost of managing the infrastructure layer. A serverless architecture offloads server management to the cloud provider, leaving developers to concern themselves only with programming the business logic.

Function as a Service (FaaS) is an implementation of the serverless architecture. Common FaaS platform providers include AWS Lambda, Azure Functions, and Google Cloud Functions. Unlike in a microservice architecture, where the service is the smallest deployable unit, in FaaS the function is the smallest unit that can be deployed. Developers build separate functions to handle each type of request, while hardware provisioning and container management are taken care of by the cloud provider. A serverless platform acts as a single toolkit that manages deployment, provisioning, and maintenance. Functions are event-driven, in that an event can be triggered by the end user or by another function.
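
To make this concrete, here is a minimal sketch of a FaaS-style function written in Ballerina for AWS Lambda. It assumes the ballerinax/awslambda module, its @awslambda:Function annotation, and the awslambda:Context type; the module name and annotation have changed between Ballerina releases, so treat the exact identifiers as assumptions rather than a definitive recipe:

    import ballerinax/awslambda;

    // One deployable FaaS unit: a single function that handles one kind of event.
    // The cloud provider provisions and scales the underlying containers; no
    // server or container code is written here.
    @awslambda:Function
    public function echo(awslambda:Context ctx, json input) returns json {
        // Echo the incoming event payload back to the caller.
        return input;
    }

Building the project packages annotated functions such as this into an artifact that can be uploaded to AWS Lambda, after which each invocation is billed per request and per unit of execution time.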

Services such as AWS Step Functions make it even easier to build serverless systems by composing individual functions into larger workflows. There are multiple advantages to using a serverless architecture instead of a microservice architecture.

The price of this type of platform depends on the number of requests processed and the duration of execution. FaaS scales with the incoming traffic load, which eliminates the need for servers that are always up and running. Instead, functions stay idle when there are no requests; when requests flow in, the functions are activated and the requests are processed.

A key issue associated with serverless architecture is cloud lock-in, where the system becomes closely bound to a particular cloud platform and its features. You also cannot run long-running processes in functions, as most FaaS vendors restrict execution to a maximum duration. There are other concerns, such as security, multitenancy, and the lack of monitoring tools in serverless architectures. Nevertheless, serverless provides developers with an agile and rapid method of development, making it easier to build applications than with a microservice architecture.

Definition of cloud native

In the developer community, cloud native has several definitions, but the underlying concept is the same. The Cloud Native Computing Foundation (CNCF) brings together cloud native developers from all over the world and provides a platform for building cloud native applications that are more flexible and robust. The CNCF's definition of cloud native can be found on its GitHub page.

According to the CNCF definition, cloud native empowers organizations to build and run scalable applications on different types of cloud platform, such as public, private, and hybrid clouds. Technologies such as containers, container orchestration tools, and configurable infrastructure make cloud native development much more convenient.

Having a loosely coupled system is a key feature of cloud native applications; it makes it possible to build much more resilient, manageable, and observable systems. Continuous Integration and Continuous Deployment (CI/CD) simplify and speed up the deployment process.

Beyond the CNCF definition, pioneers in cloud native development have offered numerous definitions of their own, but there are features that cloud native applications should have according to all of them. The key aim of being cloud native is to empower organizations by running much more scalable, resilient, and maintainable applications on cloud platforms.

By looking at the definition, we can see there are a few properties that cloud native applications should have:

  • Cloud native systems should be loosely coupled; each service is capable of operating independently. This allows cloud native applications to be simple and scalable.
  • Cloud native applications should be able to recover from failures.
  • Application deployment and maintenance should be easy.
  • Cloud native application system internals should be observable.

If we drill down a little further into cloud native applications, they all share the following common characteristics:

  • Statelessness: Cloud native services do not preserve state on the running instance. All state that is necessary for the business logic is kept in a database. Every service is expected to read data from the database, process it, and return it where it is needed. This characteristic is critical when resilience and scalability come into play: services are created and destroyed on demand, as directed by the system administrator or an orchestrator. If a service kept its state on the running instance, scaling the system up and down would be a problem. Simply put, all services should be disposable at any time (a minimal sketch of a stateless service follows this list).
  • Modular design: Applications should be minimal and concentrate on solving a particular business problem. In a microservice architecture, services are the smallest business units, each solving a particular business problem. Services can be exposed and managed as APIs, so other modules do not need to know the internal implementation of each module. Interservice communication protocols are used to call each service and perform a task.
  • Automated delivery: Cloud native systems should be able to be deployed automatically. As cloud native systems are intended for large applications, there could be many independent services running. If a new version of a specific module is released, the system should be able to adopt the change with zero downtime. Maintaining the system should require less effort and lower cost.
  • Isolation from the server and OS dependencies: Services run in an isolated environment in which they do not depend directly on the host computer. This makes the services independent of the host computer and able to operate on any OS. Container technologies accomplish this by packaging the code into a container and offering an OS-independent platform to run on.
  • Multitenancy: Multi-tenant cloud applications isolate the data belonging to different tenants, so users can view their own information only. Multi-tenant architectures let multiple entities share the same system while keeping each tenant's data separate.
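
As a rough illustration of the statelessness and modular design characteristics, the following Ballerina sketch exposes one small business capability as an HTTP API and keeps no state in memory between requests. The /orders path, the orderId parameter, and the hard-coded response stand in for a lookup against an external database and are purely hypothetical:

    import ballerina/http;

    // A stateless, single-purpose service: no state is kept in memory between
    // requests, so any replica can serve any request and instances can be
    // created or destroyed freely.
    service /orders on new http:Listener(9090) {

        resource function get status/[string orderId]() returns json {
            // In a real system, the order status would be read from an external
            // database; the values returned here are purely hypothetical.
            return {orderId: orderId, status: "PENDING"};
        }
    }

Because nothing in the service depends on a particular running instance, an orchestrator can dispose of this service and start a replacement at any time without losing data.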

Why should you select a cloud native architecture?

The latest trend in the industry is cloud native applications, with businesses increasingly striving to move to the cloud due to the many benefits associated with it. The following are some of those benefits:

  • Scalability
  • Reliability
  • Maintainability
  • Cost-effectiveness
  • Agile development

Let's talk about each in detail.

Scalability

As cloud native applications are stateless by nature, the system administrator can easily scale the application up or down by simply increasing or decreasing the number of service instances. If the traffic is heavy, the system can be scaled up to distribute the load across more instances. On the other hand, if the traffic is low, the system can be scaled down to avoid wasting resources (a rough sketch of declarative autoscaling follows).
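
With Ballerina's code-to-cloud feature, scaling behavior can be expressed declaratively rather than by hand. The sketch below assumes a Cloud.toml consumed by `bal build --cloud=k8s` with an autoscaling section and the key names shown; these names may differ between Ballerina releases, so check the current code-to-cloud documentation before relying on them:

    # Cloud.toml (assumed keys; read when building with `bal build --cloud=k8s`)
    [cloud.deployment.autoscaling]
    min_replicas = 1    # scale in to a single instance when traffic is low
    max_replicas = 5    # scale out to at most five instances under heavy traffic
    cpu = 60            # target CPU utilization (%) that triggers scaling

The generated Kubernetes artifacts then add or remove service instances automatically as the load changes, which is exactly the behavior described above.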

Reliability

If one service instance goes down, the load can be distributed to another instance and the work can continue. Because cloud native services are stateless, no request is tied to a particular instance, so a failed instance can easily be replaced by a new one. This provides fault tolerance and reliability for the entire application.

Maintainability

The whole system can be automated by using automation tools. Whenever someone wants to modify the system, it is as simple as sending a pull request to Git; when the change is merged, the system is upgraded to a new version. Deployment is also simple because services are separate, so developers only need to consider the part of the system they are changing rather than the entire system. Developers can easily deploy changes to a development environment with CI/CD pipelines and then promote them to the testing and production environments with a single click.

Cost-effectiveness

Organizations can easily offload infrastructure management to third parties instead of running on-premises platforms that require significant investment in management and maintenance. The system can then scale on a pay-as-you-go basis, so organizations simply do not need to keep paying for idle servers.

Agile development

In cloud native applications, services are built as independent components. Each team that develops a service decides which technologies to use for its implementation, such as programming languages, frameworks, and libraries. For example, developers can select Python to create a machine learning service and Go to perform some calculations. With the benefits of automated deployment, development teams can deliver applications more regularly and efficiently.