WildFly Performance Tuning


Scalability


A system needs to be able to handle an increasing load in order for the business to stay attractive to customers. Response times must be kept down for each individual user, and the total throughput must increase as the number of transactions grows. We say that the system needs to scale with the load; scalability is the capability a system requires in order to increase its total throughput.
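The relationship between concurrency, throughput, and response time is often summarized by Little's Law (concurrency = throughput × response time). As a rough illustration of why throughput must grow when the load grows but response times must not, here is a small sketch with hypothetical numbers (the figures are made up for the example and are not from this book):

```java
// Little's Law: L = X * R, where L is the average number of concurrent
// requests, X is the throughput (requests/second), and R is the mean
// response time in seconds.
public class LittlesLaw {

    // Throughput the system must sustain to serve the given number of
    // concurrent users while holding the response-time target.
    static double requiredThroughput(double concurrentUsers, double responseTimeSeconds) {
        return concurrentUsers / responseTimeSeconds;
    }

    public static void main(String[] args) {
        // Hypothetical load: 200 concurrent users, 0.5 s response-time target.
        System.out.println(requiredThroughput(200, 0.5) + " req/s");
        // Doubling the users while keeping the same response time
        // doubles the throughput the system must deliver.
        System.out.println(requiredThroughput(400, 0.5) + " req/s");
    }
}
```

With a fixed response-time target, every increase in concurrent users translates directly into a higher throughput requirement, which is what scaling must provide.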

When a system needs to be scaled, there are two major disciplines that can be followed: vertical scaling and horizontal scaling. Vertical scaling (or scaling up), as shown in the following diagram, involves adding more hardware resources, such as processor cores and memory, to an existing computer. This was the prevalent way to scale in the days of mainframes, and it is still used to some extent today, as virtualization has gained momentum:

Vertical scaling involves adding more resources to an existing computer.

Horizontal scaling (or scaling out) involves adding more computers that are connected through a network. A simple example of horizontal scaling is shown in the following diagram. This has been the concept of most computer topologies for several years and has gained enormous momentum with cloud services and big data:

Horizontal scaling involves adding more computers to a networked collective of computers, such as a cluster.
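To benefit from horizontal scaling, incoming requests must be spread across the computers in the network, typically by a load balancer. A minimal round-robin sketch follows; the node names are made up for the example, and a real setup would use a dedicated balancer (hardware or software) in front of the cluster:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin load balancer: each request is sent to the next
// server in the list, wrapping around at the end.
public class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    public String nextServer() {
        // floorMod keeps the index non-negative even if the counter
        // eventually overflows.
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobin lb = new RoundRobin(List.of("node1", "node2", "node3"));
        for (int r = 0; r < 4; r++) {
            System.out.println(lb.nextServer()); // node1, node2, node3, node1
        }
    }
}
```

Adding capacity then becomes a matter of adding another entry to the server list, which is the essence of scaling out.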

In general, at some point, adding more resources to a single computer becomes more expensive than adding more computers. The single computer will be a low-volume and often highly specialized product, as it needs an advanced and expensive architecture to handle many processors and large amounts of memory, whereas the many cheap computers can be simple, off-the-shelf products. The single computer will outperform any individual cheap one, but at some point, the large number of cheap computers will collectively be cheaper, faster, and thereby better than the expensive one.
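This crossover can be made concrete with a toy cost model. Assume (purely hypothetically, the prices and the exponent below are invented for illustration) that the price of one big machine grows super-linearly with capacity, while commodity machines add capacity at a flat unit price:

```java
// Hypothetical cost model: a high-end server's price grows super-linearly
// with its capacity, while commodity servers add capacity linearly.
public class ScalingCost {

    // Price of one big machine with the given capacity units (made-up model).
    static double scaleUpCost(int capacity) {
        return 2000 * Math.pow(capacity, 1.5);
    }

    // Price of enough commodity machines (one capacity unit each, at a
    // made-up $3000 per box) to reach the same total capacity.
    static double scaleOutCost(int capacity) {
        return 3000 * capacity;
    }

    public static void main(String[] args) {
        for (int c = 1; c <= 4; c++) {
            System.out.printf("capacity %d: up=%.0f out=%.0f%n",
                    c, scaleUpCost(c), scaleOutCost(c));
        }
        // With these invented numbers, scaling up is cheaper at low
        // capacities, but scaling out wins from capacity 3 upward.
    }
}
```

The exact crossover point depends entirely on the chosen numbers; the point of the sketch is only that a super-linear scale-up curve must eventually cross a linear scale-out curve.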

In the extreme, having just one computer can be hazardous as it will be a single point of failure. Using one computer will, however, be easier from an administrative point of view. There will only be one place to make changes or configurations. It will also be easier from a developer's point of view, as the programming model won't have to deal with many of the more complex scenarios that a distributed model can require.

As with all things, there are pros and cons to both types of scaling, and several factors bridge the gap between them. The single computer in the vertical-scaling scenario is seldom truly alone; it is most common to have at least one backup server. In the horizontal-scaling scenario, the programming model has been simplified by modern enterprise frameworks, and the topology can be adapted so that each cheap server in the network works on its own, without the need (and complexity) of knowing about the rest.