Cisco ACI Cookbook

By: Stuart Fordham
Overview of this book

Cisco Application Centric Infrastructure (ACI) is a robust architecture that automates IT tasks and accelerates data-center application deployments. This book focuses on practical recipes to help you quickly build, manage, and customize a hybrid environment for your organization using Cisco ACI. You will begin by understanding the Cisco ACI architecture and its major components. You will then configure Cisco ACI policies and tenants. Next, you will connect to hypervisors and other third-party devices. Moving on, you will configure routing to external networks and within ACI tenants, and also learn to secure ACI through RBAC. Furthermore, you will understand how to set up quality of service and network programming with REST, XML, Python, and so on. Finally, you will learn to monitor and troubleshoot ACI in the event of any issues that arise. By the end of the book, you will have mastered automating your IT tasks and accelerating the deployment of your applications.

An overview of the ACI fabric


A fabric is a fancy term for how the computing, network, and software components of a data center are laid out. The name itself comes from the crisscrossing of the network links, much like the threads in woven cloth. The ACI fabric is relatively simple: it employs a two-tier design made of spine and leaf switches, but these must be very particular switches.

ACI hardware

In Figure 4, we can see a typical deployment with two spines and three leaves. The Nexus 9500 modular switches are deployed at the top of the topology and act as spines, which in a traditional three-tiered network design would be the aggregation or core switches. Line cards provide the ASICs (application-specific integrated circuits) required for ACI. Different line cards are available, so make sure you are not purchasing a card that only runs in NX-OS mode.

The next component is the leaf switches. These are the Nexus 9300-series switches.

The spines connect to the leaves through 40 GE ports. Spines are never connected to other spines, however, and leaves are never connected to other leaves, as the sketch following Figure 4 illustrates.

Figure 4
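
To make the cabling rule concrete, here is a minimal Python sanity check (an illustrative helper, not part of any ACI tooling, with made-up switch names) that flags any link that is not spine-to-leaf:

    # A minimal sketch (hypothetical helper, not ACI tooling) that checks the
    # spine-leaf cabling rule: every link must join a spine to a leaf.
    links = [
        ("spine1", "leaf1"), ("spine1", "leaf2"), ("spine1", "leaf3"),
        ("spine2", "leaf1"), ("spine2", "leaf2"), ("spine2", "leaf3"),
    ]

    def role(switch):
        # Made-up naming convention for this example.
        return "spine" if switch.startswith("spine") else "leaf"

    for a, b in links:
        # Valid ACI links always pair one spine with one leaf.
        assert {role(a), role(b)} == {"spine", "leaf"}, f"invalid link: {a}-{b}"

    print("Cabling follows the spine-leaf rule.")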

We can also extend the network, offering greater port density through Nexus 2000-series fabric extenders:

Figure 5

We can also add storage directly to the leaves themselves:

Figure 6

Alternatively, we could use another pair of switches, such as Cisco Nexus 5000-series switches, which would connect to the leaf switches:

Figure 7

The APIC controllers connect to the leaf switches.

Figure 8

The APIC controllers are completely separate from the day-to-day running of ACI; we need them only to push new policies. They play no part in the functioning of the ACI fabric's data plane, only in the control plane.

Note

The control plane is the information the network learns, such as routing information. The data plane is the movement of packets based upon the information held in the control plane.

If we lose one controller, then our tenants’ traffic still flows. If we lose all our controllers, then our tenants’ traffic still flows; we are just unable to push new policies until the controllers are reinstated into the network.
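
Policy changes reach the fabric through the APIC's REST API. The following is a minimal sketch using Python's requests library; the hostname, credentials, and tenant name are placeholders, and verify=False is only appropriate in a lab with self-signed certificates:

    # A minimal sketch of pushing a policy (a new tenant) to the APIC.
    import requests

    apic = "https://apic.example.com"   # placeholder hostname
    session = requests.Session()

    # Authenticate; the APIC returns a session cookie that the Session keeps.
    session.post(f"{apic}/api/aaaLogin.json",
                 json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
                 verify=False)

    # Create (or update) a tenant by posting its managed object to the policy tree.
    resp = session.post(f"{apic}/api/mo/uni.json",
                        json={"fvTenant": {"attributes": {"name": "ExampleTenant"}}},
                        verify=False)
    print(resp.status_code, resp.text)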

ACI best practice states that we should have a minimum of three controllers. Three controllers offer high availability, providing both physical and database redundancy. Why not just two controllers, though? An odd number of controllers greater than one copes better with a split-brain scenario (one controller disagreeing with another): in such an event, the majority rules, whereas two controllers could deadlock at one vote each. The controllers use LLDP to find each other, which is part of the process discussed in the ACI fabric overlay section later in this chapter. We will look at how to use multiple controllers in the troubleshooting section; the majority of this book uses a much simpler design with just one controller, one spine, and two leaf switches, as we will see when we look at the fabric menu later in this chapter.
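
As a toy illustration (not ACI code) of why the majority can always rule in an odd-sized cluster, consider a strict-majority check in Python:

    # A partition of the cluster retains control only with a strict majority.
    def has_quorum(partition_size: int, cluster_size: int) -> bool:
        return partition_size > cluster_size // 2

    # Three controllers split 2-vs-1: the larger side keeps quorum.
    print(has_quorum(2, 3), has_quorum(1, 3))   # True False
    # Two controllers split 1-vs-1: neither side can claim a majority.
    print(has_quorum(1, 2), has_quorum(1, 2))   # False False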

Understanding third-party integration

One of the most attractive reasons to deploy ACI is the ease of integration with other Cisco products (such as the ASAv firewall) and third-party systems.

This integration is performed through OpFlex. OpFlex is an open, standards-based southbound protocol designed to facilitate multi-vendor integration in both data-center and cloud networks. OpFlex is important because it distinguishes ACI from other SDN models, which offer integration but do not expose the full feature set of the devices they manage. The easiest way to explain this is in the context of SNMP.

SNMP (Simple Network Management Protocol) allows us to monitor network hardware. All devices support the most basic MIB (Management Information Base), iso.org.dod.internet.mgmt, so at the most basic level you can pull out data such as interfaces and IP addresses. We are getting data, but only at the lowest common denominator; we need extra information, by way of device-specific MIBs, to be able to monitor our firewall's VPN tunnels or the nodes in our load balancers. OpFlex gives us all the information, but the data is not bound to any particular format. It is a declarative model, which benefits any interested party, and this declarative model is based on promise theory.
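
For comparison, here is a minimal SNMP GET sketch using the pysnmp library (the classic v4-style high-level API); the target address and community string are placeholders. Because sysDescr sits under the standard subtree, virtually any device can answer this query:

    # Query the lowest-common-denominator sysDescr object from a device.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                 # placeholder community string
        UdpTransportTarget(("192.0.2.1", 161)),  # placeholder device address
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")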

Promise theory, developed by Mark Burgess in the 1990s, sets ACI apart from other SDN implementations. Those implementations use imperative control, in which a controlling system does the thinking and the system being controlled is relieved of that burden. While this concentrates intelligence in the controller, it can also make the controller a bottleneck within the system. ACI, however, uses a declarative model. This model states what should happen but not how it should be done (leaving that up to the node being controlled). The node then makes a promise to achieve the desired state and, importantly, communicates back to the controller the success or failure of the task, along with the reason why. The controller is no longer a bottleneck in the system, and the commands are simpler; instead of separate commands to implement the same function on different vendors' equipment, we have one command set understood by all of them. This is the benefit of open standards.
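
The following toy Python sketch (all names are illustrative; this is not the OpFlex API) shows the shape of the declarative exchange: the controller states the desired outcome, and the node reports whether its promise was kept:

    # The controller only declares *what* it wants, never *how* to do it.
    desired_state = {"vlan": 100, "interface": "eth1/1", "mode": "trunk"}

    def apply_desired_state(state):
        """The node decides how to reach the state, then reports back."""
        try:
            # ... vendor-specific configuration would happen here ...
            return {"promise": "kept", "state": state}
        except Exception as exc:
            # The node reports failure along with the reason why.
            return {"promise": "broken", "reason": str(exc)}

    # The controller consumes the report instead of issuing step-by-step commands.
    print(apply_desired_state(desired_state))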

Even with open standards, though, there can be an ulterior motive. It is all well and good having the next best thing for integrating different technologies, but when that thing is designed so those technologies run under one particular company's product, there can be some hesitation. OpFlex, however, has backing from several renowned companies, such as Microsoft, IBM, Citrix, Red Hat, F5, SunGard Availability Services, and Canonical. So why has OpFlex gathered such wide backing?

With the traditional SDN model, there is a bottleneck: the SDN controller. As we scale out, there is an impact on both performance and resilience. We also lose simplicity and agility; we still need to make sure that all the components are monitored and safeguarded, which invariably means bolting on more technology to achieve this.

OpFlex takes a different approach. A common language ensures that we do not need to bolt on extras that were not part of the original design. There is still complexity, but it is moved toward the edges of the network, and we maintain resilience, scalability, and simplicity. If we lose all of the controllers, the network continues to operate; we may not be able to make policy changes until we restore the controllers, but the tenants' data still flows, uninterrupted.

The protocol itself uses XML or JSON as its message format. It allows us to see each node as a managed object (MO). Each MO consists of the following (we will sketch this structure shortly):

  • Properties
  • Child relations
  • Parent relations
  • MO relations
  • Statistics
  • Faults
  • Health

While the ins and outs of these are beyond the scope of this book, you can read more about them in the IETF drafts. The first draft, from 2014 (https://tools.ietf.org/html/draft-smith-opflex-00), listed all seven items; subsequent drafts (the most recent being from October 27, 2016: https://tools.ietf.org/html/draft-smith-opflex-03) compress the last four items into one, labeled observables.
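
As a hypothetical rendering (the field names here are illustrative, loosely following the drafts), an MO serialized to JSON might look like this:

    import json

    # An illustrative managed object: the later drafts fold MO relations,
    # statistics, faults, and health into a single "observables" group.
    managed_object = {
        "properties": {"name": "web-epg", "state": "enabled"},
        "parent": "tenant-1/app-profile-1",
        "children": ["endpoint-1", "endpoint-2"],
        "observables": {
            "mo_relations": ["contract-web-to-db"],
            "statistics": {"rx_packets": 1024, "tx_packets": 2048},
            "faults": [],
            "health": 100,
        },
    }

    print(json.dumps(managed_object, indent=2))  # OpFlex-style JSON payload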

What all this means is that, for third parties, OpFlex brings greater integration across SDN platforms. If and when OpFlex becomes a truly open standard, different vendors' equipment will be able to speak the same language using a simple JSON file.