Making architectural trade-offs

There's a lot to consider from the various perspectives of our architecture if we're going to build something that scales. We'll never get everything we need out of every perspective simultaneously. This is why we make architectural trade-offs: we trade one aspect of our design for another, more desirable aspect.

Defining your constants

Before we start making trade-offs, it's important to state explicitly what cannot be traded. What aspects of our design are so crucial to achieving scale that they must remain constant? For instance, a constant might be the number of entities rendered on a given page, or a maximum level of function call indirection. There shouldn't be a ton of these architectural constants, but they do exist. It's best if we keep them narrow in scope and limited in number. If we have too many strict design principles that cannot be violated or otherwise changed to fit our needs, we won't be able to easily adapt to changing influencers of scale.

Does it make sense to have constant design principles that never change, given the unpredictability of scaling influencers? It does, but only once they emerge and are obvious. So this may not be an up-front principle, though we'll often have at least one or two up-front principles to follow. The discovery of these principles may result from the early refactoring of code or the later success of our software. In any case, the constants we use going forward must be made explicit and be agreed upon by all those involved.
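As a sketch of what this might look like in practice, the architectural constants can live in a single module that's frozen and imported wherever they apply. The names and values below are hypothetical illustrations, not prescriptions:

```js
// constants.js - hypothetical architectural constants, stated once and
// frozen so they can't be silently changed elsewhere in the codebase.
export const ARCHITECTURAL_CONSTANTS = Object.freeze({
  // Upper bound on the number of entities rendered on any given page.
  MAX_ENTITIES_PER_PAGE: 100,

  // Maximum level of function call indirection allowed between components.
  MAX_CALL_INDIRECTION: 3
});
```

Keeping them in one explicit place makes it easier for everyone involved to agree on them, and to notice when a proposed feature would violate one.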

Performance for ease of development

Performance bottlenecks need to be fixed, or avoided in the first place where possible. Some performance bottlenecks are obvious and have an observable impact on the user experience. These need to be fixed immediately, because it means our code isn't scaling for some reason, and might even point to a larger design issue.

Other performance issues are relatively small. These are generally noticed by developers running benchmarks against the code, trying by any means necessary to improve performance. Chasing them doesn't scale well, because these smaller bottlenecks, which aren't observable by the end user, are time-consuming to fix. If our application is of a reasonable size, with more than a few developers working on it, we're not going to be able to keep up with feature development if everyone's busy fixing minor performance problems.

These micro-optimizations introduce specialized solutions into our code, and they're not exactly easy reading for other developers. On the other hand, if we let these minor inefficiencies go, we will manage to keep our code cleaner and thus easier to work with. Where possible, trade off optimized performance for better code quality. This improves our ability to scale from a number of perspectives.
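To illustrate the trade-off, here's a hypothetical sketch of the same operation written first for readability and then micro-optimized. Unless a profiler shows the optimized path is a genuine bottleneck, the readable version is the one that scales for the team:

```js
// Readable version: intent is obvious and easy for any developer to maintain.
const activeNames = users =>
  users.filter(user => user.active).map(user => user.name);

// Micro-optimized version: a single pass with a manual loop. It may be
// marginally faster, but the intent is buried in bookkeeping. Prefer the
// readable version unless this path is proven to be hot.
const activeNamesOptimized = users => {
  const names = [];
  for (let i = 0; i < users.length; i++) {
    if (users[i].active) {
      names.push(users[i].name);
    }
  }
  return names;
};
```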

Configurability for performance

It's nice to have generic components where nearly every aspect is configurable. However, this approach to component design comes at a performance cost. It's not noticeable at first, when there are few components, but as our software scales in feature count, the number of components grows, and so does the number of configuration options. Depending on the size of each component (its complexity, number of configuration options, and so forth), the potential for performance degradation increases exponentially. Take a look at the following diagram:

[Figure: Configurability for performance]

The component on the left has twice as many configuration options as the component on the right. It's also twice as difficult to use and maintain.

We can keep our configuration options around as long as there are no performance issues affecting our users. Just keep in mind that we may have to remove certain options in an effort to remove performance bottlenecks. It's unlikely that configurability is going to be our main source of performance issues, but it's also easy to get carried away as we scale and add features. We'll find, retrospectively, that we created configuration options at design time that we thought would be helpful, but turned out to be nothing but overhead. Trade off configurability for performance when there's no tangible benefit to having the configuration option.
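As a rough sketch of why configurability carries a cost, compare a hypothetical list component whose every behavior is an option with one that does a single thing. Every option is another branch to execute on each render, and another code path to test and maintain:

```js
// A heavily configurable component: each option is another branch to
// execute on every render, and another code path to test and document.
class ConfigurableList {
  constructor(options = {}) {
    this.options = Object.assign({
      filterable: false,
      filter: null,
      sortable: false,
      compare: null,
      paginated: false,
      pageSize: 25
    }, options);
  }

  render(items) {
    let result = items.slice();
    if (this.options.filterable && this.options.filter) {
      result = result.filter(this.options.filter);
    }
    if (this.options.sortable && this.options.compare) {
      result.sort(this.options.compare);
    }
    if (this.options.paginated) {
      result = result.slice(0, this.options.pageSize);
    }
    return result;
  }
}

// A pared-down component: no options, no branching, and no configuration
// overhead to carry as the number of components grows.
class SimpleList {
  render(items) {
    return items.slice();
  }
}
```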

Performance for substitutability

A related problem to that of configurability is substitutability. Our user interface performs well, but as our user base grows and more features are added, we discover that certain components cannot be easily substituted with another. This can be a developmental problem, where we want to design a new component to replace something pre-existing. Or perhaps we need to substitute components at runtime.

Our ability to substitute components lies mostly with the component communication model. If the new component is able to send and receive the same messages and events as the existing component, then it's a fairly straightforward substitution. However, not all aspects of our software are substitutable. In the interest of performance, there may not even be a component to replace.

As we scale, we may need to refactor larger components into smaller components that are replaceable. By doing so, we're introducing a new level of indirection, and with it a performance hit. Trade off minor performance penalties to gain substitutability that aids in other aspects of scaling our architecture.
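Here's a minimal sketch of event-based component communication, using a hypothetical broker. As long as a replacement component handles the same events with the same payload, swapping it in requires no other changes:

```js
// A minimal event broker; components communicate only through it.
const broker = {
  listeners: new Map(),
  on(event, handler) {
    const handlers = this.listeners.get(event) || [];
    this.listeners.set(event, handlers.concat(handler));
  },
  emit(event, data) {
    (this.listeners.get(event) || []).forEach(handler => handler(data));
  }
};

// Original component: handles 'user:selected' events.
const basicProfileView = user => console.log(`Profile: ${user.name}`);

// Substitute component: because it handles the same event with the same
// payload, nothing else in the application changes when we swap it in.
const detailedProfileView = user =>
  console.log(`Profile: ${user.name} (${user.email})`);

broker.on('user:selected', detailedProfileView);
broker.emit('user:selected', { name: 'Adam', email: 'adam@example.com' });
```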

Ease of development for addressability

Assigning addressable URIs to resources in our application certainly makes implementing features more difficult. Do we actually need URIs for every resource exposed by our application? Probably not. For the sake of consistency though, it would make sense to have URIs for almost every resource. If we don't have a router and URI generation scheme that's consistent and easy to follow, we're more likely to skip implementing URIs for certain resources.

It's almost always better to accept the added burden of assigning URIs to every resource in our application than to skip URIs for some resources, or worse still, not support addressable resources at all. URIs make our application behave like the rest of the Web, which is the training ground for all our users. For example, URI generation and routing might be one of our architectural constants, a trade-off that cannot happen. Trade off ease of development for addressability in almost every case. The ease of development problem with regard to URIs can be tackled in more depth as the software matures.
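A small sketch of a consistent URI generation scheme, with hypothetical resource names. Keeping generation in one place lowers the development cost of making every resource addressable:

```js
// uris.js - hypothetical, centralized URI generation. One consistent
// scheme makes adding an addressable resource a one-line change.
const uri = {
  user: id => `/users/${id}`,
  userSettings: id => `/users/${id}/settings`,
  group: id => `/groups/${id}`
};

// A route table built from the same scheme; a router would map these
// patterns to handlers whenever the URI changes.
const routes = {
  '/users/:id': params => console.log('show user', params.id),
  '/groups/:id': params => console.log('show group', params.id)
};

console.log(uri.userSettings(42)); // '/users/42/settings'
console.log(Object.keys(routes));  // the resources we can address
```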

Maintainability for performance

The ease with which features are developed in our software boils down to the development team and its scaling influencers. For example, we could face pressure to hire entry-level developers for budgetary reasons. How well this approach scales depends on our code. When we're concerned with performance, we're likely to introduce all kinds of intimidating code that relatively inexperienced developers will have trouble digesting. Obviously, this impedes the ease of developing new features, and if development is difficult, it takes longer, which doesn't scale with respect to customer demand.

Developers don't always have to struggle with understanding the unorthodox approaches we've taken to tackle performance bottlenecks in specific areas of the code. We can certainly help the situation by writing quality code that's understandable, and perhaps even documenting it. But we won't get all of this for free; if we're to support the team as a whole as it scales, we need to pay a short-term productivity penalty for coaching and mentoring.

Trade off ease of development for performance in critical code paths that are heavily utilized and not modified often. We can't always escape the ugliness required for performance purposes, but if it's well-hidden, we'll gain the benefit of the more common code being comprehensible and self-explanatory. For example, low-level JavaScript libraries perform well and have a cohesive API that's easy to use. But if you look at some of the underlying code, it isn't pretty. That's our gain—having someone else maintain code that's ugly for performance reasons.

[Figure: Maintainability for performance]

Our components on the left follow coding styles that are consistent and easy to read; they all utilize the high-performance library on the right, giving our application performance while isolating optimized code that's difficult to read and understand.
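A minimal sketch of this kind of isolation, with hypothetical names. The optimized loop stays hidden behind a small, self-explanatory API that the common code paths use:

```js
// vector.js - hypothetical module. The rest of the codebase only sees the
// small, readable API; the optimized internals are isolated here.

// Hot path: deliberately avoids intermediate arrays and extra function
// calls, at the cost of readability.
function dotProductOptimized(a, b) {
  let sum = 0;
  for (let i = 0, len = a.length; i < len; i++) {
    sum += a[i] * b[i];
  }
  return sum;
}

// The cohesive, self-explanatory API used by the common code paths.
export const dot = (a, b) => {
  if (a.length !== b.length) {
    throw new Error('Vectors must be the same length');
  }
  return dotProductOptimized(a, b);
};
```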

Fewer features for maintainability

When all else fails, we need to take a step back and look holistically at the feature set of our application. Can our architecture support all of these features? Is there a better alternative? Scrapping an architecture that we've sunk many hours into almost never makes sense, but it does happen. The majority of the time, however, we'll be asked to introduce a challenging set of features that violate one or more of our architectural constants.

When that happens, we're disrupting stable features that already exist, or we're introducing something of poor quality into the application. Neither case is good, and it's worth the time, the headache, and the cursing to work with the stakeholders to figure out what has to go.

If we've taken the time to figure out our architecture by making trade-offs, we should have a sound argument for why our software can't support hundreds of features.

[Figure: Fewer features for maintainability]

When an architecture is full, we can't continue to scale. The key is identifying where that breaking threshold lies, so we can understand it and communicate it to stakeholders.

Leveraging frameworks

Frameworks exist to help us implement our architecture using a cohesive set of patterns. There's a lot of variety out there, and choosing a framework comes down to a combination of personal taste and how well it fits our design. For example, one JavaScript application framework will do a lot for us out of the box, while another has even more features, many of which we don't need.

JavaScript application frameworks vary in size and sophistication. Some come with batteries included, and some tend toward mechanism over policy. None of these frameworks were designed specifically for our application, so any purported capability of a framework needs to be taken with a grain of salt. The features advertised by frameworks apply to the general case, and a simple one at that; applying them in the context of our architecture is something else entirely.

That being said, we can certainly use a given framework of our liking as input to the design process. If we really like the tool, and our team has experience using it, we can let it influence our design decisions. Just as long as we understand that the framework does not automatically respond to scaling influencers—that part is up to us.

Tip

It's worth the time investigating which framework to use for our project, because choosing the wrong framework is a costly mistake. The realization that we should have gone with something else usually comes after we've implemented a lot of functionality. The end result is lots of re-writing, re-planning, re-training, and re-documenting, not to mention the time lost on the first implementation. Choose your frameworks wisely, and be cautious about coupling your code too tightly to any one framework.