WildFly Performance Tuning


The iterative performance-tuning process


Tuning a system involves testing to find bottlenecks and then eliminating them by adjusting the system and its related components.

Test cases and iteration

Before performance tuning actually starts, it must be determined which test cases, or rather indicators, to focus on. This set of indicators might stay static, but after some work (iterations), it is also common that indicators deemed less fruitful than expected are simply removed. Similarly, new ones might be added over time, both to follow the evolution of the system and to widen or deepen our understanding of it. All this is done to improve the efficiency of the tuning effort.

It is important that the bulk of test cases are kept between test iterations and even between product releases. This makes it possible to compare results and see how different changes affect the system. All these results build into a knowledge base that can help when tuning, and when making predictions, for both the system at hand and other systems in similar cases and environments.

The result of a performance test case is normally a measure of the response time, throughput, or utilization efficiency of one or more components. These components may be of any size or complexity. If the components are subcomponents of a larger application or system, the set of test cases often overlap—some covering the entire system and some covering the various subcomponents.
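The book does not prescribe any particular tooling for these measures, but two of them can be sketched in plain Java. In this illustrative example (the sample values and the choice of the 95th percentile are assumptions, not from the book), response time is summarized as a percentile and throughput as completed requests per second:

```java
import java.util.Arrays;

// Illustrative helpers for two common performance measures:
// response time (summarized as the 95th percentile) and throughput.
public class PerfMeasures {

    // 95th-percentile response time of the sampled latencies (ms).
    static long percentile95(long[] latenciesMs) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(0.95 * sorted.length) - 1;
        return sorted[idx];
    }

    // Throughput: completed requests per second over the test window.
    static double throughput(int completedRequests, double windowSeconds) {
        return completedRequests / windowSeconds;
    }

    public static void main(String[] args) {
        long[] samples = {10, 12, 11, 250, 14, 13, 12, 11, 10, 15};
        System.out.println("p95=" + percentile95(samples) + "ms");        // p95=250ms
        System.out.println(throughput(samples.length, 2.0) + " req/s");   // 5.0 req/s
    }
}
```

Reporting a high percentile rather than the mean keeps tail latency, which is what end users actually experience, visible in the results; a single outlier such as the 250 ms sample above would otherwise be flattened into an average.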

Setting the baseline

The first time a test is to be performed, there might not be much real data to lean on. The requirements should give, or at least indicate, some hard numbers. These values are called the baseline.
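As a minimal sketch of how such a baseline number might be used in a test run, assume a hypothetical requirement of a mean response time of at most 200 ms (the threshold and sample values below are invented for illustration):

```java
// Minimal baseline check: a hard number from the requirements becomes
// the value every subsequent test iteration is compared against.
public class BaselineCheck {

    // Hypothetical requirement: mean response time must not exceed 200 ms.
    static final double BASELINE_MEAN_MS = 200.0;

    static double mean(long[] latenciesMs) {
        long sum = 0;
        for (long l : latenciesMs) sum += l;
        return (double) sum / latenciesMs.length;
    }

    static boolean meetsBaseline(long[] latenciesMs) {
        return mean(latenciesMs) <= BASELINE_MEAN_MS;
    }

    public static void main(String[] args) {
        long[] run = {120, 180, 150, 210, 140};
        // mean = 800 / 5 = 160.0 ms, which is within the baseline
        System.out.println("mean=" + mean(run) + " meets=" + meetsBaseline(run));
    }
}
```

Keeping the baseline as an explicit constant (or configuration value) makes it easy to refine it between iterations, as described in the tuning-and-retesting step below.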

Running tests and collecting data

With the baseline set, tests are set up and run. As the tests execute, data must be collected. For some tests, only the end result might matter, but for most, collecting data during the entire test run and from various points of measure will give a more detailed picture of the health of the tested system, possible bottlenecks, and tuning points to explore.
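A sketch of such continuous collection might look as follows; the point-of-measure names (an HTTP response time and a connection-pool usage ratio) are invented for the example, and a real setup would typically rely on the monitoring facilities of WildFly or an external profiler instead:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of collecting measurements during the whole run rather than only
// at the end: each point of measure appends timestamped samples that can
// later be analyzed for trends and bottlenecks.
public class RunCollector {

    static class Sample {
        final String point;     // name of the point of measure
        final long elapsedMs;   // time offset into the test run
        final double value;     // measured value at that moment
        Sample(String point, long elapsedMs, double value) {
            this.point = point;
            this.elapsedMs = elapsedMs;
            this.value = value;
        }
    }

    private final List<Sample> samples = new ArrayList<>();

    void record(String point, long elapsedMs, double value) {
        samples.add(new Sample(point, elapsedMs, value));
    }

    // Worst value seen for one point of measure during the run.
    double maxFor(String point) {
        double max = Double.NaN;
        for (Sample s : samples) {
            if (s.point.equals(point) && (Double.isNaN(max) || s.value > max)) {
                max = s.value;
            }
        }
        return max;
    }

    public static void main(String[] args) {
        RunCollector c = new RunCollector();
        // Simulated samples from two points of measure during one run.
        c.record("http-response-ms", 0, 35);
        c.record("http-response-ms", 1000, 180); // spike mid-run
        c.record("db-pool-usage", 1000, 0.92);
        System.out.println("worst http latency: " + c.maxFor("http-response-ms"));
    }
}
```

Note how the mid-run latency spike would be invisible if only the final result were recorded; sampling throughout the run is what exposes it.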

Analyzing the data

Analyzing the data might involve several people and tools, each with one or more areas of specialty. The collective input and analysis from all these people and resources will normally be your best guide to what to make of the test data: that is, what to tune and in what order.

Tuning and retesting

After the analysis is done, the system will be tuned and the baseline will possibly be refined, and more retests will follow. Each of these retests will explore a possible tuning alternative.

Tip

It is vital that only one thing is changed between a test and its retest. If you change more than one thing, you won't know for sure what caused any new effects you see. Also, consider that several changes might neutralize or hide each other's individual effects.

Remember that it is not only direct code or configuration changes to a system that call for a performance test. Any and all changes to a system or its environment make it a candidate for performance tuning. Note, however, that not all changes require performance tuning to be performed.

As you can imagine, following all tuning possibilities and always doing complete retests could easily spin out of control. It would result in infinite branches of tuning and tests, a situation that would be uncontrollable for any organization. It is therefore important to choose carefully among the various possibilities using knowledge, experience, and some healthy common sense. The tuning leads should be followed one by one, normally starting with the one identified as giving the most effect (improved performance or the largest reduction of a bottleneck).

Note

The performance-tuning process is normally complete when all requirements are satisfied or when enough of an improvement has been reached (normally defined by the product owner or architect in charge). The tuning process is an iterative process that is realized by the major steps shown in the following diagram.

Apart from resolving bottlenecks and living up to requirements, it is equally important not to over-optimize a system. First, it is not cost-efficient: if no one has asked for that extra performance, in terms of business or architectural/operational requirements, it should simply not be done. Second, over-optimizing some things (such as very minor bottlenecks) in a system can all too easily throw its balance off, creating new problems elsewhere.

The iterative performance-tuning process.

Test data

Possibly one of the hardest areas of the software development and QA processes is having, finding, or creating useful test data. Really good test data should have the following properties:

  • It should be realistic and have the same properties as real live data

  • It should not expose real user data or other sensitive information

  • It should have coverage for all test cases

  • It should be useful for both positive and negative tests

For tuning during load testing, the test data should also exist in large quantities.

As one can imagine, it requires a lot of work, and can be very expensive, to keep a full, up-to-date set of test data with all these properties available, especially as the data and its properties can be more or less dynamic and change over time.

We highly encourage all efforts to use test data with the preceding properties. As always, it will be a balancing act between an organization's available resources, such as money and people, and getting things done.

For load testing, the test data is normally either generated more or less from scratch or taken from real production data. It is important that the data is complete enough for the relevant test scenarios; it does not, however, need to be as complete as for functional testing. Volume is more important.
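As an illustration of generating synthetic data in volume, the following sketch produces reproducible, realistic-looking user records without touching real user data; the record layout and field ranges are assumptions made for the example, not from the book:

```java
import java.util.Random;

// Illustrative bulk test-data generator: a fixed seed makes every run
// produce identical data (good for repeatable tests), and no real user
// data or other sensitive information is involved.
public class TestDataGenerator {

    private final Random rnd;

    TestDataGenerator(long seed) {
        this.rnd = new Random(seed); // seeded: same data every run
    }

    // One CSV-style record with a plausible shape: name, email, age.
    String nextUser(int i) {
        return String.format("user%05d,user%05d@example.com,%d",
                i, i, 18 + rnd.nextInt(60));
    }

    public static void main(String[] args) {
        TestDataGenerator gen = new TestDataGenerator(42L);
        // In a real load test this loop would run into the millions.
        for (int i = 0; i < 3; i++) {
            System.out.println(gen.nextUser(i));
        }
    }
}
```

The fixed seed is the important design choice here: it gives the volume needed for load testing while keeping each test iteration comparable, since every run operates on exactly the same generated data set.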

Documentation

Throughout the performance-tuning process, it is important to have a stable and complete documentation routine in order. For each iteration, at a minimum, all test cases with traceable system configuration setups and measurement results should be documented and saved. This will add to the knowledge base of the organization, especially if it is made available to various departments of the organization.

It can then be a powerful aid for efficiently comparing data across old releases over time, or for making good estimates for hardware procurement or other resources. Never forget the mantra of performance tuning:

Test, tune one thing at a time, and test again.