
Cabling FI and IOM


UCS is an integrated solution that handles both network traffic and management control. All management and data-movement intelligence for chassis components and blade servers resides in the Fabric Interconnects (FIs) and the IOM modules, which act as line cards for the FIs. Proper cabling between FIs and IOM modules is therefore an important design consideration.

IOM – FI cabling topology

IOM modules connect the blade server chassis to the FIs and act as line cards to them. It is therefore necessary to maintain proper connectivity between IOMs and FIs. Because an IOM effectively becomes part of the FI it is attached to, all links from a single IOM must be connected to the same FI and must not be split across the two FIs. Depending on the IOM model, there can be one, two, four, or eight links from an IOM to a single FI, and these links can be configured as a port channel for bandwidth aggregation. The chassis discovery process is initiated as soon as an IOM is connected to an FI.

In the following figure, on the left-hand side, all links from IOM 0 are connected to a single FI and can be combined into a single port channel. The figure on the right shows a configuration in which links from a single IOM are connected to different FIs. This is an invalid topology, and hence chassis discovery will fail.

Note

Chassis discovery will also fail if a high availability cluster is not established between FIs.
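
The cabling rule just described can be expressed as a simple check: every link from a given IOM must terminate on the same FI, and the link count must be 1, 2, 4, or 8. The following Python snippet is a minimal sketch of that validation using a hypothetical link representation; it illustrates the rule, not an actual UCS API.

    # Hypothetical data model: each link is an (iom_id, fi_id) pair.
    # Illustrative sketch of the cabling rule, not a UCS API.
    VALID_LINK_COUNTS = {1, 2, 4, 8}

    def validate_iom_cabling(links):
        """All links from one IOM must land on the same FI, and the link
        count must be one that UCS supports (1, 2, 4, or 8)."""
        fis = {fi for _, fi in links}
        if len(fis) != 1:
            return False, f"invalid topology: IOM links split across FIs {sorted(fis)}"
        if len(links) not in VALID_LINK_COUNTS:
            return False, f"unsupported link count: {len(links)}"
        return True, f"chassis discovery can proceed on {fis.pop()}"

    # Valid: all four links from IOM 0 terminate on FI A.
    print(validate_iom_cabling([("IOM0", "FI-A")] * 4))
    # Invalid: links from IOM 0 are split across FI A and FI B.
    print(validate_iom_cabling([("IOM0", "FI-A"), ("IOM0", "FI-B")]))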

IOM – FI physical cabling

IOMs provide connectivity to the individual blade servers through an internal I/O multiplexer, and uplink connectivity to the FI. The IOM-to-blade-server connectivity is internal to the chassis and does not require any user configuration.

IOM to FI connectivity, however, requires physical cabling. Both IOM and FI have SFP+ slots, and there is a variety of possibilities in terms of physical interfaces. Some of the common options include the following (a small lookup sketch follows this list):

  • 10 Gb FET SFP+ interface (special optical multimode fiber SFP+ module which can only be used with UCS and Nexus equipment)

  • 10 Gb CU SFP+ (copper twinax cable)

  • 10 Gb SR SFP+ (short-range multimode optical fiber SFP+ module for distances up to 300 m)

  • 10 Gb LR SFP+ (long-range single-mode optical fiber SFP+ module for distances beyond 300 m, up to 10 km)
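
As a quick reference, the options above can be captured in a small lookup, as in the following Python sketch. The FET and twinax reach figures are typical values that should be checked against the Cisco data sheet, and the helper function is purely illustrative.

    # Illustrative lookup of the SFP+ options listed above (not a Cisco API).
    SFP_OPTIONS = {
        "FET": {"medium": "multimode fiber",   "max_m": 100,   "note": "UCS and Nexus only"},
        "CU":  {"medium": "copper twinax",     "max_m": 10,    "note": "in-rack runs"},
        "SR":  {"medium": "multimode fiber",   "max_m": 300,   "note": "short range"},
        "LR":  {"medium": "single-mode fiber", "max_m": 10000, "note": "long range"},
    }

    def candidates(distance_m):
        """Return the SFP+ types whose reach covers the required distance."""
        return [name for name, spec in SFP_OPTIONS.items() if distance_m <= spec["max_m"]]

    print(candidates(250))  # ['SR', 'LR']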

The following figure shows eight connections from IOM 0 to Fabric Interconnect A and eight connections from IOM 1 to Fabric Interconnect B. Depending on the bandwidth requirements and the IOM model, it is possible to have one, two, four, or eight connections from an IOM to its FI.

Although a larger number of links provides higher bandwidth for the servers in a chassis, each link consumes a physical port on the FI, so more links per chassis also means fewer UCS chassis can be connected to a pair of FIs.
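
This trade-off is simple arithmetic: the number of chassis a pair of FIs can support is the number of ports available for server links on each FI divided by the number of links per IOM. A minimal sketch, assuming a hypothetical 48-port FI with 8 ports reserved for uplinks:

    def max_chassis(fi_ports, uplink_ports, links_per_iom):
        """Ports left after reserving uplinks, divided by the number of
        links each chassis consumes on this FI (one IOM per chassis)."""
        return (fi_ports - uplink_ports) // links_per_iom

    # Assumed example: 48 ports per FI, 8 reserved for uplinks.
    for links in (1, 2, 4, 8):
        print(f"{links} link(s) per IOM -> up to {max_chassis(48, 8, links)} chassis")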

As shown in the preceding figure, IOM to FI connectivity only supports direct connection. FI to northbound Nexus switch connectivity, however, can be direct, using a regular port channel (PC), or the connections from a single FI may be spread across two different Nexus switches using a virtual PortChannel (vPC).

The following figure shows a direct connection between FIs and Nexus switches. All connections from FI A are connected to Nexus Switch 1, and all connections from FI B are connected to Nexus Switch 2. These links can be aggregated into a PC.

The following are two other connections that need to be configured:

  • Cluster heartbeat connectivity: Each FI has two fast Ethernet cluster ports. These ports should be connected to the corresponding ports on the peer FI using CAT6 UTP cables for the cluster configuration

  • Management port: This is also a fast Ethernet port, connected using a CAT6 UTP cable, that provides remote management of the FI
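
Once the cluster ports are cabled and both FIs have been set up, the high availability state can also be verified programmatically. The following is a minimal sketch using the Cisco UCS Python SDK (ucsmsdk); the mgmtEntity class ID and the attribute names reflect my reading of the UCSM object model and are assumptions to verify against your SDK version.

    # Minimal sketch with the Cisco UCS Python SDK (pip install ucsmsdk).
    # The mgmtEntity class ID and the id/leadership attributes are assumed
    # from the UCSM object model; verify them against your SDK version.
    from ucsmsdk.ucshandle import UcsHandle

    handle = UcsHandle("ucs-cluster-ip", "admin", "password")  # hypothetical credentials
    handle.login()

    # Each FI appears as a management entity; leadership reports whether
    # it is the primary or the subordinate member of the HA cluster.
    for entity in handle.query_classid("mgmtEntity"):
        print(entity.id, entity.leadership)

    handle.logout()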

The following figure shows the FI to Nexus switch connectivity where links traverse both Nexus switches. One network connection from FI A is connected to Nexus Switch 1 and the other to Nexus Switch 2, and both connections are configured as a vPC. Similarly, one connection from FI B is connected to Nexus Switch 2 and the other to Nexus Switch 1, and both of these connections are also configured as a vPC. It is also imperative to configure a vPC peer link on a physical connection between the two Nexus switches; this is shown as two physical links between Nexus Switch 1 and Nexus Switch 2. Without this connectivity and configuration between the Nexus switches, vPC will not work.
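
These vPC requirements can again be expressed as simple checks: the uplinks of each FI must span both Nexus switches, and a peer link must exist between the two switches. The following Python sketch uses a hypothetical topology representation and is illustrative only, not an NX-OS or UCS API.

    # Hypothetical data model: uplinks maps each FI to the set of Nexus
    # switches its uplinks terminate on. Illustrative only.
    def validate_vpc(uplinks, peer_link_present):
        """A vPC topology needs each FI's uplinks split across both Nexus
        switches and a physical peer link between the two switches."""
        for fi, switches in uplinks.items():
            if len(switches) < 2:
                return False, f"{fi} uplinks do not span both Nexus switches"
        if not peer_link_present:
            return False, "missing vPC peer link between the Nexus switches"
        return True, "topology satisfies the vPC requirements"

    print(validate_vpc(
        {"FI-A": {"Nexus1", "Nexus2"}, "FI-B": {"Nexus1", "Nexus2"}},
        peer_link_present=True,
    ))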

Physical slots in Nexus switches also support the same set of SFP+ modules for connectivity that FIs and IOMs do.

Note

A complete list of SFP+ modules is available in Table 3 at http://www.cisco.com/en/US/prod/collateral/ps10265/ps10276/data_sheet_c78-524724.html.