The vRealize Operations Manager component architecture


With a new common platform design comes a completely new architecture. As mentioned in the previous table, this architecture is common across all deployed nodes as well as across the vApp and other installable versions. The following diagram shows the five major components of the Operations Manager architecture:

The five major components of the Operations Manager architecture depicted in the preceding figure are:

  • The user interface

  • Collector and the REST API

  • Controller

  • Analytics

  • Persistence

The user interface

In vROps 6.0, the UI is broken into two components: the Product UI and the Admin UI. Unlike in the vCOps 5.x vApp, the vROps 6.0 Product UI is present on all nodes, with the exception of nodes deployed as remote collectors. Remote collectors are discussed in more detail in the next section.

The Admin UI is a web application hosted by Pivotal tc Server (a Java application server based on Apache Tomcat) and is responsible for making HTTP REST calls to the Admin API for node administration tasks. The Cluster and Slice Administrator (CaSA) is responsible for cluster administrative actions such as:

  • Enabling/disabling the Operations Manager cluster

  • Enabling/disabling cluster nodes

  • Performing software updates

  • Browsing log files

The Admin UI is purposely designed to be separate from the Product UI so that it is always available for administration and troubleshooting tasks. A small database caches data from the Product UI, providing last-known-state information to the Admin UI in the event that the Product UI and analytics components are unavailable.

Tip

The Admin UI is available on each node at https://<NodeIP>/admin.
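Because the Admin UI is simply a client that drives CaSA through HTTPS REST calls, the same administrative checks can be scripted directly against the API. The following is a minimal Python sketch; the endpoint path, credentials, and node address are assumptions for illustration only, so consult the vROps REST API documentation for the exact resources.

```python
# Minimal sketch: query the CaSA REST API for cluster state.
# NOTE: the endpoint path below is an assumption for illustration;
# check the vROps REST API documentation for the exact resource.
import requests

NODE_IP = "192.168.1.10"  # hypothetical node address
URL = f"https://{NODE_IP}/casa/sysadmin/cluster/online_state"

# vROps nodes ship with self-signed certificates by default, hence
# verify=False in this sketch (not recommended in production).
response = requests.get(URL, auth=("admin", "VMware1!"), verify=False)
response.raise_for_status()
print(response.json())
```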

The Product UI is the main Operations Manager graphical user interface. Like the Admin UI, the Product UI is based on Pivotal tc Server and can make HTTP REST calls to the CaSA for administrative tasks. However, the primary purpose of the Product UI is to make GemFire calls to the Controller API to access data and create views, such as dashboards and reports. GemFire is part of the major underlying architectural change of vROps 6.0, which is discussed in more detail later in this chapter.

As shown in the following figure, the Product UI is simply accessed via HTTPS on TCP port 443. Apache then provides a reverse proxy to the Product UI running in Pivotal tc Server using the Apache AJP protocol.

Collector

The collector's role has not changed much from vCOps 5.x. The collector is responsible for processing data from solution adapter instances. As shown in the following figure, the collector uses adapters to collect data from various sources and then contacts the GemFire locator for connection information for one or more controller cache servers. The collector service then connects to one or more Controller API GemFire cache servers and sends the collected data.

It is important to note that although an instance of an adapter can only be run on one node at a time, this does not imply that the collected data is being sent to the controller on that node. This will be discussed in more detail later under the Multi-node deployment and high availability section.
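To make the locator hand-off concrete, here is a conceptual Python sketch of that flow. Every class and method name below is a hypothetical stand-in invented for illustration; it is not the real GemFire or vROps client API.

```python
# Conceptual sketch of the collector's data flow; every name here is a
# hypothetical illustration, not the real GemFire or vROps client API.

class Adapter:
    """Stands in for a solution adapter instance (e.g. a vCenter adapter)."""
    def collect(self):
        return [("vm-01|cpu|usage_average", 42.0)]  # (metric key, value)

class CacheServer:
    """Stands in for a Controller API GemFire cache server."""
    def __init__(self, node):
        self.node = node
    def send(self, metrics):
        print(f"sending {len(metrics)} metrics to controller on {self.node}")

class Locator:
    """Stands in for the GemFire locator that knows the cache servers."""
    def find_cache_servers(self):
        # The servers returned may live on any node in the cluster,
        # not necessarily the node where the adapter runs.
        return [CacheServer("node-2")]

def collect_and_forward(adapters, locator):
    for adapter in adapters:
        metrics = adapter.collect()
        for server in locator.find_cache_servers():
            server.send(metrics)

collect_and_forward([Adapter()], Locator())
```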

Controller

The controller manages the storage and retrieval of the inventory of objects within the system. Queries are performed by leveraging the GemFire MapReduce function, which allows selective querying: queries run only on the nodes that hold the relevant data rather than on all nodes, making data retrieval considerably more efficient.

We will go into more detail on how the controller interacts with the analytics and persistence stack a little later, as well as its role in creating new resources, feeding data in, and extracting views.
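As a toy illustration of why selective querying matters, the following Python sketch mimics a MapReduce-style query over an invented shard layout: the map phase runs only on the nodes that hold the requested keys, and the reduce phase merges the partial results.

```python
# Toy illustration of a MapReduce-style selective query. The routing
# table and data layout are invented for illustration: the query is
# mapped only to the nodes that hold the requested keys, and the
# partial results are then reduced into a single answer.

shards = {
    "node-1": {"vm-01": [10, 20, 30]},
    "node-2": {"vm-02": [5, 15, 25]},
    "node-3": {"vm-03": [1, 2, 3]},
}
routing = {"vm-01": "node-1", "vm-02": "node-2", "vm-03": "node-3"}

def selective_query(keys):
    # Map phase: contact only the nodes that actually hold the keys.
    target_nodes = {routing[k] for k in keys}
    partials = [
        {k: v for k, v in shards[node].items() if k in keys}
        for node in target_nodes
    ]
    # Reduce phase: merge the per-node partial results.
    result = {}
    for partial in partials:
        result.update(partial)
    return result

# Only node-1 and node-3 are queried; node-2 is never touched.
print(selective_query({"vm-01", "vm-03"}))
```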

Analytics

Analytics is at the heart of vROps as it is essentially the runtime layer for data analysis. The role of the analytics process is to track the individual states of every metric and then use various forms of correlation to determine whether there are problems.

At a high level, the analytics layer is responsible for the following tasks (a simplified dynamic threshold example follows the list):

  • Metric calculations

  • Dynamic thresholds

  • Alerts and alarms

  • Metric storage and retrieval from the Persistence layer

  • Root cause analysis

  • Historical Inventory Service (HIS) version metadata calculations and relationship data
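The sketch below illustrates only the general idea behind a dynamic threshold: deriving a per-metric "normal" band from recent history and flagging samples that fall outside it. vROps' actual DT analytics are far more sophisticated, so treat this purely as a simplified illustration.

```python
# A deliberately simplified sketch of a dynamic threshold. vROps'
# real DT algorithms are much more sophisticated; this only shows
# the idea of learning a per-metric "normal" band from history and
# flagging values that fall outside it.
from statistics import mean, stdev

def dynamic_threshold(history, k=2.0):
    mu, sigma = mean(history), stdev(history)
    return mu - k * sigma, mu + k * sigma

history = [48, 52, 50, 47, 53, 49, 51, 50]   # invented CPU usage samples
low, high = dynamic_threshold(history)

sample = 72
if not (low <= sample <= high):
    print(f"DT breach: {sample} outside [{low:.1f}, {high:.1f}]")
```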

Tip

One important difference between vROps 6.0 and vCOps 5.x is that analytics tasks now run on every node (with the exception of remote collectors). The vCOps 5.x Installable offers the option of installing multiple separate remote analytics processors for dynamic threshold (DT) processing; however, these remote DT processors support only DT processing and do not provide the other analytics functions.

Although its primary tasks have not changed much from vCOps 5.x, the analytics component has undergone a significant upgrade under the hood to work with the new GemFire-based cache and the Controller and Persistence layers.

Persistence

The Persistence layer, as its name implies, is the layer where data is persisted to disk. It consists primarily of a series of databases that replace the vCOps 5.x filesystem database (FSDB) and PostgreSQL combination.

Understanding the persistence layer is an important aspect of vROps 6.0, as this layer has a strong relationship with the data and service availability of the solution. vROps 6.0 has four primary database services built on EMC Documentum xDB (an XML database) and the original FSDB; these services, together with the CaSA database, are listed in the following table:

| Common name | Role                                 | DB type                    | Sharded | Location                        |
| ----------- | ------------------------------------ | -------------------------- | ------- | ------------------------------- |
| Global xDB  | Global data                          | Documentum xDB             | No      | /storage/vcops/xdb              |
| Alarms xDB  | Alerts and Alarms data               | Documentum xDB             | Yes     | /storage/vcops/alarmxdb         |
| HIS xDB     | Historical Inventory Service data    | Documentum xDB             | Yes     | /storage/vcops/hisxdb           |
| FSDB        | Filesystem Database metric data      | FSDB                       | Yes     | /storage/db/vcops/data          |
| CaSA DB     | Cluster and Slice Administrator data | HSQLDB (HyperSQL database) | N/A     | /storage/db/casa/webapp/hsqldb  |

Sharding is the term GemFire uses for partitioning data across multiple systems so that computational, storage, and network loads are spread evenly across the cluster.

We will discuss persistence in more detail, including the concept of sharding, a little later under the Multi-node deployment and high availability section in this chapter.
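As a minimal sketch of the idea, the following Python snippet assumes a simple hash-modulo placement scheme; GemFire's real partitioning is more involved, but the principle is the same: the shard key alone determines which node holds the data, and records that share a key are co-located.

```python
# Minimal sketch of key-based sharding, assuming a simple hash-modulo
# placement scheme (an illustration only; not GemFire's actual logic).
import zlib

NODES = ["node-1", "node-2", "node-3"]

def shard_for(shard_key: str) -> str:
    # crc32 serves here only as a cheap, stable hash for illustration.
    return NODES[zlib.crc32(shard_key.encode()) % len(NODES)]

# FSDB metric, HIS, and alarm records for one resource reuse the same
# shard key, so all three land on the same node (see the Tip at the
# end of this section).
resource_key = "vm-01"
for store in ("FSDB metrics", "HIS xDB", "Alarms xDB"):
    print(store, "->", shard_for(resource_key))
```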

Global xDB

Global xDB contains all of the data that, for this release of vROps, cannot be sharded. The majority of this data is user configuration data, which includes:

  • User created dashboards and reports

  • Policy settings and alert rules

  • Super metric formulas (not super metric data, as this is sharded in the FSDB)

  • Resource control objects (used during resource discovery)

As Global xDB is used for data that cannot be sharded, it is solely located on the master node (and master replica if high availability is enabled). More on this topic will be discussed under the Multi-node deployment and high availability section.

Alarms xDB

Alerts and Alarms xDB is a sharded xDB database that contains information on DT breaches. This information is then converted into vROps alarms based on the active policies.

HIS xDB

HIS xDB is a sharded xDB database that holds historical information on all resource properties and parent/child relationships. HIS provides this change data back to the analytics layer based on the incoming metric data, where it is then used for DT calculations and symptom/alarm generation.

FSDB

The role of the Filesystem Database has not changed much from vCOps 5.x. The FSDB contains all raw time series metrics for the discovered resources.

Tip

The FSDB metric data, HIS object, and Alarms data for a particular resource share the same GemFire shard key. This ensures that the multiple components that make up the persistence of a given resource are always located on the same node.