Mastering vRealize Operations Manager

By : Chris Slater, Scott Norris

Multi-node deployment and high availability

So far, we have focused on the new architecture and components of vROps 6.0, as well as the major architectural changes introduced by the GemFire-based controller and the analytics and persistence layers. Before we close this chapter, we will dive a little deeper into how data is handled in a multi-node deployment and, finally, how high availability works in vROps 6.0 and which design decisions a successful deployment involves.

The Operations Manager's migration to GemFire

At the core of the vROps 6.0 architecture is the powerful GemFire in-memory clustering and distributed cache. GemFire provides the internal transport bus (replacing Apache ActiveMQ in vCOps 5.x) as well as the ability to balance CPU and memory consumption across all nodes through compute pooling, memory sharing, and data partitioning. With this change, it is better to think of the Controller, Analytics, and Persistence layers as components that span nodes rather than individual components on individual nodes.


During deployment, ensure that all your vROps 6.0 nodes are configured with the same number of vCPUs and the same amount of memory. This is because the controller's round-robin load balancing expects all nodes to have the same amount of resources.
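As a minimal sketch of why equal sizing matters: round-robin placement hands each node an equal share of the incoming load regardless of its capacity (node names and counts below are hypothetical, not taken from vROps internals).

```python
from itertools import cycle

# Hypothetical three-node cluster; round-robin placement of incoming resources.
nodes = ["node-1", "node-2", "node-3"]
assignment = {n: 0 for n in nodes}

ring = cycle(nodes)
for _resource in range(9_000):      # 9,000 incoming resources
    assignment[next(ring)] += 1

print(assignment)  # every node receives exactly 3,000 resources
# Because each node gets an identical share, a node with fewer vCPUs or less
# memory than its peers would still receive the same load and become the
# cluster bottleneck.
```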

The migration to GemFire is probably the single largest underlying architectural change from vCOps 5.x, and the move to a distributed in-memory database has made many of the new vROps 6.0 features possible, including:

  • Elasticity and scaling: Nodes can be added on demand, allowing vROps to scale as required. This allows a single Operations Manager instance to scale up to 8 nodes and up to 64,000 objects.

  • Reliability: When GemFire's high availability is enabled, a backup copy of all the data is stored in both the analytics GemFire cache and the persistence layer.

  • Availability: Even if GemFire's high availability mode is disabled, in the event of a failure, other nodes take over the failed services and the load of the failed node (assuming that the failure was not in the master node).

  • Data partitioning: Operations Manager leverages GemFire data partitioning to distribute data across nodes in units called buckets. A partitioned region contains multiple buckets that are created at startup or migrated during a rebalance operation. Data partitioning allows the use of the GemFire MapReduce function. This function is a "data aware query," which supports parallel data querying on a subset of the nodes. The result of this is then returned to the coordinator node for final processing.
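The bucket-based partitioning and data-aware query described above can be sketched as a toy model; the bucket count, node layout, and hashing scheme are illustrative assumptions, not GemFire's actual implementation.

```python
# Toy model: keys hash to buckets, buckets live on owning nodes, and a
# "data aware" query touches only the buckets that can hold the keys.
NUM_BUCKETS = 16
nodes = {0: {}, 1: {}, 2: {}}   # node_id -> {bucket_id: {key: value}}
bucket_owner = {b: b % len(nodes) for b in range(NUM_BUCKETS)}

def put(key, value):
    """Hash the key to a bucket and store it on the bucket's owning node."""
    bucket = hash(key) % NUM_BUCKETS
    nodes[bucket_owner[bucket]].setdefault(bucket, {})[key] = value

def data_aware_query(keys):
    """Scan only the relevant buckets; the coordinator merges partial results."""
    wanted = {hash(k) % NUM_BUCKETS for k in keys}
    merged = {}
    for buckets in nodes.values():          # each node scans its own buckets
        for b in wanted:
            if b in buckets:
                merged.update({k: v for k, v in buckets[b].items() if k in keys})
    return merged                           # final processing at the coordinator

put("vm-01|cpu|usage", 42.0)
put("vm-02|cpu|usage", 17.5)
print(data_aware_query({"vm-01|cpu|usage"}))
```

In the real product the per-node scans run in parallel on a subset of nodes; the sketch only shows the placement and merge logic.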

GemFire sharding

When we described the Persistence layer earlier, we listed the new components related to persistence in vROps 6.0 and which components were sharded and which were not. Now, it's time to discuss what sharding actually is.

GemFire sharding is the process of splitting data across multiple GemFire nodes for placement in various partitioned buckets. It is this concept, in conjunction with the controller and locator service, that balances the incoming resources and metrics across multiple nodes in the Operations Manager cluster. It is important to note that data is sharded per resource and not per adapter instance. This allows, for example, the load balancing of incoming and outgoing data even when only one adapter instance is configured. From a design perspective, a single Operations Manager cluster could then manage a maximum-configuration vCenter with up to 10,000 VMs by distributing the incoming metrics across multiple data nodes.
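Per-resource sharding can be illustrated with a toy placement function; the md5-based hashing below stands in for the real controller and locator logic, and all names are hypothetical.

```python
import hashlib

# Hypothetical three-node cluster; placement is decided per resource.
nodes = ["data-node-1", "data-node-2", "data-node-3"]

def shard_for(resource_id: str) -> str:
    """Deterministically map a resource to a node (illustrative, not vROps')."""
    digest = hashlib.md5(resource_id.encode()).digest()
    return nodes[digest[0] % len(nodes)]

# 10,000 VMs discovered by a SINGLE adapter instance (one vCenter):
placement = {f"vm-{i:04d}": shard_for(f"vm-{i:04d}") for i in range(10_000)}

used = {n: sum(1 for v in placement.values() if v == n) for n in nodes}
print(used)  # all three nodes receive resources despite one adapter instance
```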

The Operations Manager data is sharded in both the analytics and persistence layers, which is referred to as GemFire cache sharding and GemFire persistence sharding, respectively.

The fact that data is held in the GemFire cache on one node does not necessarily mean that the data shard persists on the same node. In fact, as both layers are balanced independently, the chance of both the cache shard and the persistence shard existing on the same node is 1/N, where N is the number of nodes.
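The 1/N figure can be checked with a quick simulation, assuming (as the text describes) that the two layers place their shards independently and uniformly across the nodes.

```python
import random

# Place a cache shard and a persistence shard independently on N nodes and
# count how often they land on the same node.
random.seed(7)            # fixed seed for reproducibility
N = 4                     # number of nodes
trials = 100_000

colocated = sum(
    random.randrange(N) == random.randrange(N) for _ in range(trials)
)
rate = colocated / trials
print(rate)               # close to 1/N = 0.25 for N = 4
```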

Adding, removing, and balancing nodes

One of the biggest advantages of the GemFire-based cluster is the elasticity of adding nodes to the cluster as the number of resources and metrics in your environment grows. This allows administrators to add or remove nodes if the size of their environment changes unexpectedly, for example, after a merger with another IT department, or to cater for seasonal workloads that exist for only a small part of the year.

Although adding nodes to an existing cluster is something that can be done at any time, there is a slight cost incurred when doing so. As just mentioned, when adding new nodes, it is important that they are sized the same as the existing cluster nodes, which will ensure, during a rebalance operation, that the load is distributed equally between the cluster nodes.

When adding new nodes to the cluster some time after the initial deployment, it is recommended that the Rebalance Disk option be selected under cluster management. The warning in the cluster management UI advises that This is a very disruptive operation that may take hours..., and as such, it is recommended that this be a planned maintenance activity. The amount of time this operation takes will vary depending on the size of the existing cluster and the amount of data in the FSDB. As you can probably imagine, if you are adding an 8th node to an existing 7-node cluster with tens of thousands of resources, there could potentially be several TB of data that needs to be resharded over the entire cluster. It is also strongly recommended that, when adding new nodes, the disk capacity and performance match those of the existing nodes, as the Rebalance Disk operation assumes this is the case.

This activity is not required to achieve the compute and network load balancing benefits of the new node. These can be achieved by selecting the Rebalance GemFire option, which is a far less disruptive process. As per the description, this process repartitions the JVM buckets, balancing the memory across all active nodes in the GemFire federation. With the GemFire cache balanced across all the nodes, the compute and network demand should be roughly equal across all the nodes in the cluster.
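A rough sketch of what such a repartition involves when a node joins: buckets are reassigned so each node owns an equal share, and only the buckets whose owner changes actually migrate. The naive round-robin reassignment below is an assumption for illustration; a real implementation would also try to minimize bucket movement.

```python
from collections import Counter

NUM_BUCKETS = 12
# 3-node cluster: buckets assigned round-robin, 4 buckets per node.
owners = {b: b % 3 for b in range(NUM_BUCKETS)}

def rebalance(owners, num_nodes):
    """Reassign buckets over the enlarged cluster; report which ones move."""
    new_owners = {b: b % num_nodes for b in sorted(owners)}
    moved = [b for b in owners if owners[b] != new_owners[b]]
    return new_owners, moved

new_owners, moved = rebalance(owners, 4)   # a 4th node joins the cluster

print(Counter(new_owners.values()))        # each of the 4 nodes owns 3 buckets
print(len(moved))                          # buckets that had to migrate
```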

Although this allows you to gain early benefit from a node newly added to an existing cluster, unless a large number of new resources are discovered by the system shortly afterwards, the majority of disk I/O for persisted, sharded data will still occur on the other nodes.

Apart from adding nodes, Operations Manager also allows a node to be removed at any time, as long as it has been taken offline first. This can be valuable if a cluster was originally oversized for its requirements and is now considered a waste of physical compute resources. However, this task should not be taken lightly, as the removal of a data node without high availability enabled will result in the loss of all metrics on that node. As such, it is generally recommended that you avoid removing nodes from the cluster.


If the permanent removal of a data node is necessary, ensure that high availability is first enabled to prevent data loss.