Prerequisites for microservices

To gain a better understanding of microservices, let's look at an imaginary example: FlixOne Inc. With this example as our base, we can discuss all the concepts in detail and see what it takes to be ready for microservices.

FlixOne is an e-commerce player that operates all over India. They are growing at a very fast pace and diversifying their business at the same time. They have built their existing system on the .NET Framework as a traditional three-tier architecture. They have a massive database that is central to this system, and there are peripheral applications in their ecosystem. One such application, which happens to be an Android app, serves their sales and logistics team. These applications connect to their centralized data center and face performance issues. FlixOne has an in-house development team supported by external consultants. Refer to the following diagram:

The previous diagram gives a broad view of our current application, which is a single .NET assembly application. Here, we have the user interfaces we use to search for and order products, track orders, and check out. Now, look at the following diagram:

The previous diagram depicts our Shopping cart module only. The application is built with C#, MVC5, and Entity Framework, and it is a single-project application. This diagram is just a pictorial overview of the architecture of our application. The application is web-based and can be accessed from any browser. Initially, any request that uses the HTTP protocol lands on the user interface, which is developed using MVC5 and jQuery. For cart activities, the UI interacts with the Shopping cart module, a business logic layer that in turn interacts with the database layer (written in C#). Data is stored in the database (SQL Server 2008 R2).

Functional overview of the application

Here, we'll walk through a functional overview of the FlixOne bookstore application. This is only for the purpose of visualizing our application. The following is a simplified functional overview of the application that shows the process from the home page to checkout:

In the current application, the customer lands on the home page, where they see featured/highlighted books. They also have the option to search for a book item. After getting the desired result, the customer can choose book items and add them to their shopping cart. Customers can verify the book items before the final checkout. As soon as the customer decides to check out, the existing cart system redirects them to an external payment gateway for the amount they need to pay for the book items in the shopping cart.

As discussed previously, our application is a monolithic application: it is structured to be developed and deployed as a single unit. This application has a large code base that is still growing. Even small updates require the whole application to be deployed at once.

In this section, we discussed the functional overview of the application. We still need to analyze the current challenges and find the best solutions for them. So, let's discuss those next.

Solutions for the current challenges

The business is growing rapidly, so we decide to open our e-commerce website in 20 more cities. However, we are still facing challenges with the existing application and struggling to serve our existing user base properly. In this case, before we start, we should make our monolithic application ready for its transition to microservices.

As a first approach, the Shopping cart module will be segregated into smaller modules, and these modules will then be able to interact with each other, as well as with external or third-party software:

This proposed solution is not sufficient for our existing application. Although developers would be able to divide the code and reuse it, the internal processing of the business logic would remain the same in the way it interacts with the UI or the database. The new code would interact with the UI and the database layer, but the database would still be the same old single database. With our database undivided and our layers tightly coupled, the problem of having to update and deploy the whole code base would remain. So, this solution is not suitable for resolving our problem.

Handling deployment problems

In the previous section, we discussed the deployment challenges we will face with the current .NET monolithic application. In this section, let's take a look at how we can overcome these challenges by adopting a few practices within the same .NET stack.

With our .NET monolithic application, deployment consists of XCOPY deployments, a process in which all the files are simply copied to the server; it is mostly used for web projects. After dividing our modules into submodules, we can adopt better deployment strategies: we can deploy just the business logic layer or some common functionality on its own, and we can adopt continuous integration and deployment.

Making better monolithic applications

Now that we understand all the challenges with our existing monolithic application, our new application should handle change better. As we are expanding, we can't afford to miss the opportunity to gain new customers; every challenge we fail to overcome becomes a lost business opportunity. Let's discuss a few points aimed at solving these problems.

Introducing dependency injection

Our modules are interdependent, so we face issues such as poor code reusability and unresolved bugs caused by changes in one module. These, in turn, are deployment challenges. To tackle these issues, let's segregate our application in such a way that we are able to divide modules into submodules. We can divide our Order module so that it implements an interface, which can then be injected through the constructor.

Dependency injection (DI) is a design pattern that provides a technique for making a class independent of its dependencies. It is achieved by decoupling the use of an object from its creation.

Here is a short code snippet that shows how we can apply this to our existing monolithic application. The following code example shows our Order class, where we use constructor injection:

using System;
using System.Collections.Generic;
using FlixOne.BookStore.Models;

namespace FlixOne.BookStore.Common
{
    public class Order : IOrder
    {
        private readonly IOrderRepository _orderRepository;

        // Default constructor: falls back to the concrete repository.
        public Order() => _orderRepository = new OrderRepository();

        // Constructor injection: the repository dependency is supplied from outside.
        public Order(IOrderRepository orderRepository) => _orderRepository = orderRepository;

        public IEnumerable<OrderModel> Get() => _orderRepository.GetList();

        public OrderModel GetBy(Guid orderId) => _orderRepository.Get(orderId);
    }
}
Inversion of control (IoC) is a principle whereby objects do not create the other objects on which they rely to do their work; instead, those objects are supplied from outside.

In the previous code snippet, we abstracted our Order module in such a way that it can be consumed through the IOrder interface. The Order class implements IOrder, and thanks to inversion of control, its dependency is resolved automatically rather than being created by the consumer.
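The IOrder interface itself is not listed in this section; a minimal sketch, inferred from the two methods the Order class exposes, might look like this:

using System;
using System.Collections.Generic;
using FlixOne.BookStore.Models;

namespace FlixOne.BookStore.Common
{
    // Inferred contract: mirrors the two public methods of the Order class.
    public interface IOrder
    {
        IEnumerable<OrderModel> Get();
        OrderModel GetBy(Guid orderId);
    }
}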

Furthermore, the code snippet for IOrderRepository is as follows:

using System;
using System.Collections.Generic;
using FlixOne.BookStore.Models;

namespace FlixOne.BookStore.Common
{
    public interface IOrderRepository
    {
        IEnumerable<OrderModel> GetList();
        OrderModel Get(Guid orderId);
    }
}

We have the following code snippet for OrderRepository, which implements the IOrderRepository interface:

using System;
using System.Collections.Generic;
using System.Linq;
using FlixOne.BookStore.Models;

namespace FlixOne.BookStore.Common
{
    public class OrderRepository : IOrderRepository
    {
        public IEnumerable<OrderModel> GetList() => DummyData();

        public OrderModel Get(Guid orderId) => DummyData().FirstOrDefault(x => x.OrderId == orderId);
    }
}

The preceding code snippet references a method called DummyData(), which is used to create Order data for our sample code.

The following is a code snippet showing the DummyData() method:

private IEnumerable<OrderModel> DummyData()
{
    return new List<OrderModel>
    {
        new OrderModel
        {
            OrderId = new Guid("61d529f5-a9fd-420f-84a9-ab86f3eaf8ad"),
            OrderDate = DateTime.Now,
            OrderStatus = "In Transit"
        },
        ...
    };
}

Here, we are trying to showcase how our Order module gets abstracted. In the previous code snippet, we returned default values (using sample data) for our order just to demonstrate the solution to the actual problem.
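These snippets also rely on an OrderModel class from FlixOne.BookStore.Models that is not shown in this section. A minimal sketch with just the properties referenced here (OrderId, OrderDate, and OrderStatus) might look like this:

using System;

namespace FlixOne.BookStore.Models
{
    // Minimal sketch: only the properties this section's snippets reference.
    public class OrderModel
    {
        public Guid OrderId { get; set; }
        public DateTime OrderDate { get; set; }
        public string OrderStatus { get; set; }
    }
}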

Finally, our presentation layer (the MVC controller) will use the available methods, as shown in the following code snippet:

using System;
using System.Web.Mvc;
using FlixOne.BookStore.Common;

namespace FlixOne.BookStore.Controllers
{
    public class OrderController : Controller
    {
        private readonly IOrder _order;

        // Default constructor: falls back to the concrete Order implementation.
        public OrderController() => _order = new Order();

        // Constructor injection: an IOrder implementation is supplied from outside.
        public OrderController(IOrder order) => _order = order;

        // GET: Order
        public ActionResult Index() => View(_order.Get());

        // GET: Order/Details/5
        public ActionResult Details(string id)
        {
            var orderId = Guid.Parse(id);
            var orderModel = _order.GetBy(orderId);
            return View(orderModel);
        }
    }
}

The following diagram is a class diagram that depicts how our interfaces and classes are associated with each other and how they expose their methods, properties, and so on:

Here, we again used constructor injection: an IOrder instance is passed in and used to initialize the Order dependency. Consequently, all of its methods are available within our controller.

Getting this far means we have overcome a few problems, including the following:

  • Reduced module dependency: With the introduction of IOrder in our application, we have reduced the interdependency of the Order module. This way, if we are required to add or remove anything to or from this module, then other modules will not be affected, as IOrder is only implemented by the Order module. Let's say we want to make an enhancement to our Order module; this would not affect our Stock module. This way, we reduce module interdependency.
  • Introducing code reusability: If you need to get the order details from any application module, you can easily do so using the IOrder type.
  • Improvements in code maintainability: We have now divided our modules into submodules, classes, and interfaces. We can structure our code so that all the interfaces are placed under one folder, with a parallel structure for the repositories. This structure makes the code easier to arrange and maintain.
  • Unit testing: Our current monolithic application does not have any kind of unit testing. With the introduction of interfaces, we can now easily perform unit testing and adopt test-driven development with ease (see the sketch after this list).
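To illustrate the last point, here is a minimal sketch of a unit test for OrderController, assuming the MSTest framework; StubOrder is a hypothetical hand-rolled stub showing how constructor injection lets us test the controller without a database:

using System;
using System.Collections.Generic;
using System.Web.Mvc;
using FlixOne.BookStore.Common;
using FlixOne.BookStore.Controllers;
using FlixOne.BookStore.Models;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace FlixOne.BookStore.Tests
{
    // Hand-rolled stub: returns canned data instead of touching the database.
    internal class StubOrder : IOrder
    {
        public IEnumerable<OrderModel> Get() =>
            new[] { new OrderModel { OrderId = Guid.NewGuid(), OrderStatus = "In Transit" } };

        public OrderModel GetBy(Guid orderId) => new OrderModel { OrderId = orderId };
    }

    [TestClass]
    public class OrderControllerTests
    {
        [TestMethod]
        public void Index_ReturnsViewWithOrders()
        {
            // Constructor injection lets us swap the real Order for the stub.
            var controller = new OrderController(new StubOrder());

            var result = controller.Index() as ViewResult;

            Assert.IsNotNull(result);
            Assert.IsInstanceOfType(result.ViewData.Model, typeof(IEnumerable<OrderModel>));
        }
    }
}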

Database refactoring

As discussed in the previous section, our application database is huge and depends on a single schema. This huge database needs to be considered as part of the refactoring. To refactor our application database, we follow these points:

  • Schema correction: As a general practice (though not a requirement), our schemas should depict our modules. As discussed in previous sections, our huge database has a single schema, dbo, but not every table should be tied to dbo. There might be several modules that interact with specific tables. For example, our Order module should use a related schema name, such as Order. Then, whenever we need to use those tables, we can use them with their own schema instead of the general dbo schema. This does not impact how data is retrieved from the database, but it structures and arranges our tables in such a way that we can identify and correlate each and every table with a specific module (a code sketch follows this list). This exercise will be very helpful when we are in the stage of transitioning a monolithic application to microservices. Refer to the following diagram depicting the Order schema and Stock schema of the database:

In the previous diagram, we can see how the database schema is separated logically. It is not separated physically: our Order schema and Stock schema belong to the same database. In other words, we separate the database schema logically, not physically.

We can also take the example of our users: not all users are admins or belong to a specific zone, area, or region. However, our user tables should be structured in such a way that we can identify users by the table name or by how the tables are structured. Here, we can structure our user tables on the basis of regions and map the user table to a region table in a way that does not impact or require changes to the existing code base.

  • Moving the business logic to code from stored procedures: In the current database, we have thousands of lines of stored procedures containing a lot of business logic. We should move this business logic to our code base. Since our monolithic application uses Entity Framework, we can avoid creating stored procedures and write all of our business logic as code (see the sketch after this list).
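The following is a minimal sketch of both ideas, assuming Entity Framework 6; FlixOneContext and GetInTransitOrder are hypothetical names introduced for illustration. The OrderModel table is mapped to the module-specific Order schema, and a lookup that might previously have lived in a stored procedure is expressed as LINQ in the code base:

using System;
using System.Data.Entity;
using System.Linq;
using FlixOne.BookStore.Models;

namespace FlixOne.BookStore.Common
{
    // Hypothetical EF 6 context; the names here are illustrative only.
    public class FlixOneContext : DbContext
    {
        public DbSet<OrderModel> Orders { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            var order = modelBuilder.Entity<OrderModel>();
            order.HasKey(o => o.OrderId);
            // Map the table to the module-specific Order schema instead of dbo.
            order.ToTable("Orders", schemaName: "Order");
        }
    }

    public class OrderQueries
    {
        // Logic that might once have been a stored procedure, now plain LINQ.
        public OrderModel GetInTransitOrder(FlixOneContext context, Guid orderId) =>
            context.Orders.FirstOrDefault(o => o.OrderId == orderId && o.OrderStatus == "In Transit");
    }
}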

Database sharding and partitioning

When it comes to database sharding and partitioning, we choose database sharding. Here, we break the database into smaller databases, and these smaller databases are deployed on separate servers:

In general, database sharding is simply defined as a shared-nothing partitioning scheme for large databases. This way, we can achieve a new level of high performance and scalability. Sharding means dividing a database into chunks (shards) and spreading those chunks across different servers.

Sharding can come in different forms. One would be splitting customers and orders into different databases, but we could also split the customers themselves across multiple databases for optimization, for instance, customers A-G, customers H-P, and customers Q-Z (based on surname).

The previous diagram is a pictorial overview of how our database is divided into smaller databases. Take a look at the following diagram:

The preceding diagram illustrates that our application now has smaller databases: each service has its own database.
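As a minimal sketch of the surname-based split mentioned above (the connection names here are hypothetical), a simple router could decide which shard holds a given customer:

using System;

namespace FlixOne.BookStore.Common
{
    public static class CustomerShardRouter
    {
        // Hypothetical shard map: the surname's initial decides which database holds the customer.
        public static string GetConnectionName(string surname)
        {
            if (string.IsNullOrEmpty(surname))
                throw new ArgumentException("Surname is required.", nameof(surname));

            var initial = char.ToUpperInvariant(surname[0]);
            if (initial >= 'A' && initial <= 'G') return "CustomersShardAtoG";
            if (initial >= 'H' && initial <= 'P') return "CustomersShardHtoP";
            return "CustomersShardQtoZ";
        }
    }
}

In practice, such a shard map would normally live in configuration or a dedicated catalog rather than being hardcoded.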

DevOps culture

In the previous sections, we discussed the challenges and problems faced by the team. Here, we propose a solution: adopting a DevOps culture, where collaboration between the development team and the operations team is emphasized. We should also set up a system in which the development, QA, and infrastructure teams work in collaboration.

Automation

Infrastructure setup can be a very time-consuming job, and developers can remain idle while the infrastructure is being readied for them, taking time before they can join the team and contribute. The process of infrastructure setup should not stop a developer from becoming productive, as that reduces overall productivity. It should be an automated process. With the use of Chef or PowerShell, we can easily create our virtual machines and quickly ramp up the developer count as and when required. This way, our developers can be ready to start work on day one of joining the team.

Chef is a DevOps tool that provides a framework to automate and manage your infrastructure. PowerShell can be used to create our Azure machines and to set up Azure DevOps (formerly TFS).

Testing

We are going to introduce automated testing as a solution to the problems that we faced while testing during deployment. In this part of the solution, we have to divide our testing approach as follows:

  • Adopt test-driven development (TDD). With TDD, a developer writes the test before the actual code. The test is another piece of code that validates whether the functionality is working as intended. If any functionality is found to not satisfy the test code, the corresponding unit test fails, and the functionality can be easily fixed because you know exactly where the problem is. To achieve this, we can utilize unit testing frameworks such as MSTest (see the sketch after this list).
  • The QA team can use scripts to automate their tasks. They can create scripts by utilizing QTP or the Selenium framework.
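As a small TDD-style illustration (again assuming MSTest), the following test could be written against the Order class shown earlier; it relies only on behavior visible in this section, where GetBy() uses FirstOrDefault() and therefore returns null for an unknown ID:

using System;
using FlixOne.BookStore.Common;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace FlixOne.BookStore.Tests
{
    [TestClass]
    public class OrderTests
    {
        [TestMethod]
        public void GetBy_UnknownOrderId_ReturnsNull()
        {
            // The default constructor wires up OrderRepository and its sample data.
            var order = new Order();

            // FirstOrDefault() yields null when no order matches the ID.
            var result = order.GetBy(Guid.NewGuid());

            Assert.IsNull(result);
        }
    }
}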

Versioning

The current system does not have any kind of versioning, so there is no way to revert a change if something goes wrong. To resolve this issue, we need to introduce a version control mechanism; in our case, this should be Azure DevOps or Git. With version control, we can now revert a change if it is found to break functionality or introduce unexpected behavior, and we can track the changes made by each team member working on the application at an individual level. Our monolithic application did not give us this capability.

Deployment

In our application, deployment is a huge challenge. To resolve this, we'll introduce continuous integration (CI). In this process, we need to set up a CI server. With CI, the entire process is automated: as soon as code is checked in by any team member (using version control, Azure DevOps or Git in our case), the CI process kicks into action. It ensures that the new code is built and that the unit tests and integration tests are run. Whether the build succeeds or fails, the team is alerted to the outcome, enabling them to respond quickly to any issue.

Next, we move on to continuous deployment. Here, we introduce various environments, namely a development environment, a staging environment, a QA environment, and so on. Now, as soon as code is checked in by any team member, CI kicks into action. It invokes the unit/integration test suites, builds the system, and pushes it out to the various environments we have set up. This way, the turnaround time for the development team to provide QA with a suitable build is reduced to a minimum.

As a monolithic application, ours faces various deployment challenges that affect the development team as well. We have discussed CI/CD and seen how deployment works.

The next section covers identifying candidates for decomposition within a monolithic architecture, that is, the areas that can cause problems.