The Art of Micro Frontends

By: Florian Rappl

Overview of this book

Micro frontends are a web architecture for frontend development borrowed from the idea of microservices in software development, where each module of the frontend is developed and shipped in isolation to avoid complexity and a single point of failure for your frontend. Complete with hands-on tutorials, projects, and self-assessment questions, this easy-to-follow guide will take you through the patterns available for implementing a micro frontend solution. You'll learn about micro frontends in general, the different architecture styles and their areas of use, how to prepare teams for the change to micro frontends, and how to adjust the UI design for scalability. Starting with the simplest variants of micro frontend architectures, the book progresses from static approaches to fully dynamic solutions that allow maximum scalability with faster release cycles. In the concluding chapters, you'll reinforce the knowledge you've gained by working on different case studies relating to micro frontends. By the end of this book, you'll be able to decide if and how micro frontends should be implemented to achieve scalability for your user interface (UI).
Table of Contents (21 chapters)

Section 1: The Hive - Introducing Frontend Modularization
Section 2: Dry Honey - Implementing Micro Frontend Architectures
Section 3: Busy Bees - Scaling Organizations

Faster TTM

As already mentioned, there are technical and business reasons why micro frontends have been a bit more difficult to fully embrace than their backend counterparts. My personal opinion is that the technical reasons are significant, but not as crucial as the business ones.

When implementing micro frontends, we should never look for technical reasons alone. Ultimately, our applications are created for the end users. As a result, end users should not be impacted negatively by our technical choices.

The user experience is not the only driver from a business perspective; the productivity level and the development process should also be taken into consideration. In the end, these impact the user experience just as much and represent the basis for any implementation: how fast we can ship new features or respond to the results of user surveys.

Decreasing onboarding time

Twenty years ago, most developers who joined a company stayed for quite a while. Over time, this has decreased to around 2 years at most companies (source: https://hackerlife.co/blog/san-francisco-large-corporation-employee-tenure). The average employment duration is shown here:

Figure 1.4 – Average time of developers at a single company

If a typical developer only becomes productive in a larger code base after 6 months, a large fraction of the investment (about 25%, given an average tenure of 2 years) never really pays off. Being productive from day one should be the goal, even if it is unrealistic.

One way to achieve faster onboarding is to decrease the required cognitive load, and strong modularization with independent repositories can help a lot here. After all, when a new developer has to fix a single bug, a smaller repository can be debugged and understood much faster than a large one.

Also, documentation can be kept up to date much more easily with many smaller repositories than with a few larger ones. Together with a smaller (and more focused) commit history, the barrier to entry is far lower.

The downside is that complex bugs may be harder to debug. To understand the system fully, a lot more expertise will most likely be required. Knowing which code resides in which repository becomes a virtue and gives some senior developers and architects a special status that makes it difficult to remove them safely from a project.

A solution is to think in terms of feature teams that have main responsibilities but may share common infrastructure expertise.

Multiple teams

If we are capable of onboarding new developers quickly, we can also leverage multiple sources for recruiting. Recruiting great talent in information technology (IT) has become absurdly difficult. The top players already attract a fair share of the top developers, which is felt especially in smaller ventures. How can new features be developed quickly when it takes months to acquire a single new developer?

A solution to this challenge is to consider the help of IT agencies or service providers that can offer on-demand development capacity. Historically, however, bringing in externals has been a source of trouble, too. Non-disclosure agreements (NDAs) need to be signed; access needs to be granted. The biggest pain point may be procedural differences.

By using modularization as a key architecture feature, different teams may work in their own repositories. They can use their own processes and have their own release schedules. This can come in handy internally too, but obviously has the greatest benefit when working with external developers and teams.

One of the biggest advantages of being able to modularize the frontend is that we can come up with new ways to cut the teams. For instance, we can introduce true full-stack teams, where each team is responsible for one backend service and one frontend module. While this boundary may not always make sense, it can be very helpful in a number of scenarios. Giving the same team that made a backend API responsibility for its frontend counterpart reduces alignment requirements and enables more stable releases.

Combining the ideas of domain-driven design (DDD) with multiple teams is an efficient way to reduce cognitive load and establish clear architectural boundaries.

DDD is a set of techniques to formalize how modularization can be performed without ending up with unmaintainable spaghetti code. In Chapter 4, Domain Decomposition, we'll have a closer look at these techniques, which can help us derive a modularization referred to as domain decomposition. Deriving the right domain decomposition is, however, not easy at all. Keeping the different functional domains separated is key here.
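
As a first, purely illustrative taste of such a decomposition, here is a small sketch; the domain, module, and team names are invented for the example and not taken from the book:

```ts
// Purely illustrative: each functional domain owns its own micro frontends,
// and no entry references a module from another domain directly.
interface MicroFrontend {
  name: string; // name of the shipped frontend module (hypothetical)
  team: string; // the feature team that owns it (hypothetical)
}

const domains: Record<string, MicroFrontend[]> = {
  ordering: [
    { name: "cart", team: "team-ordering" },
    { name: "checkout", team: "team-ordering" },
  ],
  catalog: [
    { name: "search", team: "team-catalog" },
    { name: "product-details", team: "team-catalog" },
  ],
};

// The domain boundaries become visible in the structure itself.
console.log(Object.keys(domains)); // ["ordering", "catalog"]
```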

Isolated features

My favorite business driver for introducing micro frontends is the ability to ship independent features on their own schedules.

Let's say one of the product owners (POs) has an idea for a new feature. Right now, it is not really known how this feature will be accepted by the users. What can be done here? Surely, we want to start with a small proof of concept (POC) presented to the PO. If this is well received, we can refine it and publish a minimum viable product (MVP), which would serve to gather some end user feedback.

Historically, a lot of alignment would be needed just to get this feature into the application in the desired way. Using micro frontends, we can roll out the feature progressively. We would first make it available only to the PO, and then adjust the rollout to include the target end users. Finally, we could release it to entire regions or worldwide.

A selective rollout is only possible when a feature is isolated. If we have two features, feature A and feature B, with A requiring B, then an independent progressive rollout is never possible for feature B: its rollout rules would always need to be a superset of the rollout rules of feature A.
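
To make the superset constraint concrete, here is a minimal sketch of rollout rules as the serving layer might evaluate them; the rule shape and helper names are assumptions for illustration, not a concrete framework API:

```ts
// Minimal sketch of progressive rollout rules; the rule shape and user
// model are assumptions for illustration only.
interface User {
  id: string;
  region: string;
}

interface RolloutRule {
  feature: string;
  allowedUserIds?: string[]; // e.g. only the PO at first
  allowedRegions?: string[]; // then selected regions
  enabledForAll?: boolean;   // finally, everyone
}

function isEnabled(rule: RolloutRule, user: User): boolean {
  if (rule.enabledForAll) return true;
  if (rule.allowedUserIds?.includes(user.id)) return true;
  if (rule.allowedRegions?.includes(user.region)) return true;
  return false;
}

// If feature A directly requires feature B, B must be enabled for every
// user who sees A; B's rollout is always a superset of A's rollout.
function rolloutIsConsistent(a: RolloutRule, b: RolloutRule, users: User[]): boolean {
  return users.every((user) => !isEnabled(a, user) || isEnabled(b, user));
}
```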

In general, such dependencies may still be possible (or even desired), but they should never be direct in nature. As such, all available features may only weakly reference other features. We will see later that weak references form a basis for scalability and are crucial for sound micro frontend solutions.

A good test to verify whether a given micro frontend is well designed is to turn it off. Is the overall application still working? It may no longer be as useful, but that is not the key point. If the application is still technically working, then the micro frontend solution is still sound. Always ask yourself: does it, in principle, work without this module?

Being able to turn micro frontends off consequently allows us to swap individual micro frontends, too. As their inner workings are not required to be available, we can also fully replace them. This can be used to our advantage to gather useful user feedback.
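
To make this concrete, here is a minimal sketch of what a weak reference between micro frontends can look like; the registry API shown is a hypothetical illustration, not the book's implementation:

```ts
// Minimal sketch of a weak reference between micro frontends; the registry
// API is a hypothetical illustration only.
type Renderer = (element: HTMLElement) => void;

const registry = new Map<string, Renderer>();

function registerMicroFrontend(name: string, render: Renderer): void {
  registry.set(name, render);
}

// The host only asks whether a module is present instead of importing it.
// If the module is turned off, missing, or swapped out, the rest of the
// application keeps working: the "does it work without this module?" test.
function renderIfAvailable(name: string, element: HTMLElement): void {
  const render = registry.get(name);
  if (render) {
    render(element);
  }
}
```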

A/B testing

We could find at least a dozen more good reasons to use micro frontends from a business perspective; however, one reason that is unfortunately often forgotten is that they simplify gathering user feedback. A great way to do this is by introducing A/B testing.

In A/B testing, we introduce two variants of a feature. One is variant A, while the other is labeled variant B. While A may be the current state of the feature (usually called the baseline), B may be a new way of approaching the problem. The following diagram illustrates this:

Figure 1.5 – A/B testing of a feature results in different experiences for different users

In scenarios where the feature is new anyway, the A/B test may be served to all users interested in taking part. If we work against a baseline, it may make sense to survey only users who are new to the feature to avoid existing habits contaminating the data.

Most frontend monoliths require quite a few code changes to support A/B testing for specific features. In the worst case, the branching between variant A and variant B can be found in dozens, if not hundreds, of places. By introducing a suitable modularization, we can introduce A/B testing without any code change. This is the ideal scenario, where only the part shipping the micro frontends needs to know about the rules of the active A/B test.
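
As a rough illustration of that last point, here is a minimal sketch of a shipping layer that picks the variant per user; the bucketing logic and module names are assumptions made for the example:

```ts
// Minimal sketch: only the layer shipping the micro frontends knows about
// the active A/B test; the feature modules themselves contain no branching.
// The module names and bucketing logic are assumptions for illustration.
interface AbTest {
  feature: string;
  variantA: string; // module name of the baseline
  variantB: string; // module name of the new approach
  share: number;    // fraction of users that should receive variant B
}

// Deterministic bucketing so that a given user always sees the same variant.
function bucket(userId: string): number {
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) % 1000;
  }
  return hash / 1000;
}

function moduleForUser(test: AbTest, userId: string): string {
  return bucket(userId) < test.share ? test.variantB : test.variantA;
}

// Example: serve the new variant to 20% of users.
const checkoutTest: AbTest = {
  feature: "checkout",
  variantA: "checkout-baseline",
  variantB: "checkout-redesign",
  share: 0.2,
};
console.log(moduleForUser(checkoutTest, "user-123"));
```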

Actually gathering the feedback and evaluating the results is then, of course, a completely different story. Here, micro frontends don't really help us; the connection to dedicated tools for supporting the data analysis is unchanged compared with a monolithic solution.