Troubleshooting vSphere Storage

By: Mike Preston

Overview of this book

Virtualization has created a new role within IT departments everywhere: the vSphere administrator. vSphere administrators have long been managing more than just the hypervisor; they have quickly had to adapt to become a 'jack of all trades' in their organizations. More and more tier 1 workloads are being virtualized, making the infrastructure underneath them all the more important. Because of this, along with the holistic nature of vSphere, administrators are expected to know what to do when problems occur.

This practical, easy-to-understand guide will give the vSphere administrator the knowledge and skill set they need to identify, troubleshoot, and solve issues relating to storage visibility, storage performance, and storage capacity in a vSphere environment.

This book will first give you the fundamental background knowledge of storage and virtualization. From there, you will explore the tools and techniques that you can use to troubleshoot common storage issues in today's data centers. You will learn the steps to take when storage seems slow or there is limited availability of storage. The book will cover the most common storage transports, such as Fibre Channel, iSCSI, and NFS, and explain what to do when you can't see your storage, where to look when your storage is experiencing performance issues, and how to react when you reach capacity. You will also learn about the tools that ESXi provides to help you with this, and how to identify key issues within the many vSphere logfiles.

Supported filesystems


VMware ESXi supports two different filesystems for use as virtual machine storage: Virtual Machine File System (VMFS) and Network File System (NFS).

VMFS

One of the most common ESXi storage configurations utilizes a purpose-built, high-performance clustered filesystem called VMFS. VMFS is a distributed storage architecture that facilitates concurrent read and write access from multiple ESXi hosts. Any supported SCSI-based block device, whether local, Fibre Channel, or network attached, may be formatted as a VMFS datastore. See the table later in this section for more information on the various storage protocols supported by vSphere.
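
As a quick way to see which VMFS datastores an environment contains, the datastore inventory can be queried through the vSphere API. The following is a minimal sketch (not taken from this book) using the open source pyVmomi Python SDK; the vCenter address and credentials shown are placeholders:

```python
# A hedged sketch: list VMFS datastores via the pyVmomi SDK.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only: skips certificate checks
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter address
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

# Walk every datastore object in the inventory and report the VMFS ones.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type == "VMFS":
        vmfs = ds.info.vmfs                       # HostVmfsVolume details (version, extents)
        print("%-20s VMFS %-5s %6.0f GB free of %6.0f GB" % (
            ds.summary.name, vmfs.version,
            ds.summary.freeSpace / 2**30, ds.summary.capacity / 2**30))

Disconnect(si)
```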

NFS

NFS, like VMFS, is a distributed filesystem, and it has been around for nearly 20 years. NFS, however, is strictly network attached and utilizes Remote Procedure Call (RPC) to access remote files just as if they were stored locally. vSphere, as it stands today, supports NFSv3 over TCP/IP, allowing the ESXi host to mount an NFS volume and use it for any storage need, including storage for virtual machines. An NFS datastore does not contain a VMFS partition; when utilizing NFS, the NAS storage array handles the underlying filesystem and presents shares that ESXi simply attaches to as mount points.
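
A similar sketch, again assuming pyVmomi and placeholder credentials, lists each NFS datastore together with the NAS server and export path it is mounted from:

```python
# A hedged sketch: list NFS datastores and the export each one mounts.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only: skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # NFS datastores carry NasDatastoreInfo instead of VmfsDatastoreInfo.
    if isinstance(ds.info, vim.host.NasDatastoreInfo):
        nas = ds.info.nas                         # remote NAS server and export path
        print("%-20s -> %s:%s" % (ds.summary.name, nas.remoteHost, nas.remotePath))

Disconnect(si)
```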

Raw disk

Although not technically a filesystem, vSphere also supports storing virtual machine guest files on a raw disk. This is configured by selecting Raw Device Mapping (RDM) when adding a new virtual disk to a VM. In general, this allows a guest OS to utilize its preferred filesystem directly on the SAN. An RDM may be mounted in one of two compatibility modes: physical or virtual. In physical mode, all commands except REPORT LUNS are sent directly to the storage device; REPORT LUNS is masked in order to allow the VMkernel to isolate the LUN from the virtual machine. In virtual mode, only read and write commands are sent directly to the storage device, while the VMkernel handles all other commands from the virtual machine. Virtual mode allows you to take advantage of many of vSphere's features, such as file locking and snapshots, whereas physical mode does not.
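
To check whether a virtual machine's disks are RDMs, and in which compatibility mode, the VM's device list can be inspected. The sketch below again assumes pyVmomi, a placeholder vCenter, and a hypothetical VM named db01:

```python
# A hedged sketch: report which of a VM's disks are RDMs and their compatibility mode.
# The vCenter address, credentials, and VM name "db01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only: skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "db01")   # hypothetical VM name

for dev in vm.config.hardware.device:
    if not isinstance(dev, vim.vm.device.VirtualDisk):
        continue
    backing = dev.backing
    if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
        # compatibilityMode is either "physicalMode" or "virtualMode"
        print("%s: RDM (%s) -> %s" % (dev.deviceInfo.label,
                                      backing.compatibilityMode,
                                      backing.deviceName))
    else:
        print("%s: regular virtual disk" % dev.deviceInfo.label)

Disconnect(si)
```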

The following table explains the storage connections supported in vSphere:

|  | Fibre Channel | FCoE | iSCSI | NFS |
| --- | --- | --- | --- | --- |
| Description | Remote blocks are accessed by encapsulating SCSI commands and data into FC frames that are transmitted over the FC network. | Remote blocks are accessed by encapsulating SCSI commands and data into Ethernet frames. FCoE shares many of the characteristics of Fibre Channel, except that the transport is Ethernet. | Remote blocks are accessed by encapsulating SCSI commands and data into TCP/IP packets that are transmitted over the Ethernet network. | ESXi hosts access metadata and files located on the NFS server by utilizing file devices that are presented over a network. |
| Filesystem support | VMFS (block) | VMFS (block) | VMFS (block) | NFS (file) |
| Interface | Requires a dedicated Host Bus Adapter (HBA). | Requires either a hardware converged network adapter or a NIC with FCoE capabilities, used in conjunction with the built-in software FCoE initiator. | Requires either a dependent or independent hardware iSCSI initiator, or a NIC with iSCSI capabilities utilizing the built-in software iSCSI initiator and a VMkernel port. | Requires a NIC and the use of a VMkernel port. |
| Load Balancing/Failover | Uses VMware's Pluggable Storage Architecture to provide standard path selection and failover mechanisms. | Uses VMware's Pluggable Storage Architecture to provide standard path selection and failover mechanisms. | Uses VMware's Pluggable Storage Architecture as well as the built-in iSCSI port binding functionality. | Because NFS uses a single session, no load balancing is available. Aggregate bandwidth can be achieved by manually accessing the NFS server across different paths. Failover can be configured only in an active/standby configuration. |
| Security | Uses zoning between the hosts and the FC targets to isolate storage devices from hosts. | Uses zoning between the hosts and the FC targets to isolate storage devices from hosts. | Uses Challenge Handshake Authentication Protocol (CHAP) to allow different hosts to see different LUNs. | Depends on the NFS storage device; most implement an access control list (ACL) type of deployment to allow hosts to see certain NFS exports. |
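
To confirm which of these transports a given host is actually using, its storage adapters can be enumerated and classified by type. The following sketch assumes pyVmomi, a placeholder vCenter, and a hypothetical host named esxi01.example.com:

```python
# A hedged sketch: list the storage adapters on one ESXi host and flag the
# protocol family each belongs to. Address, credentials, and host name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only: skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")  # hypothetical host

for hba in host.config.storageDevice.hostBusAdapter:
    # Check FCoE before FC, since the FCoE adapter type derives from the FC one.
    if isinstance(hba, vim.host.FibreChannelOverEthernetHba):
        kind = "FCoE"
    elif isinstance(hba, vim.host.FibreChannelHba):
        kind = "Fibre Channel"
    elif isinstance(hba, vim.host.InternetScsiHba):
        kind = "iSCSI"
    else:
        kind = "block/other"
    print("%-8s %-14s %s" % (hba.device, kind, hba.model))

Disconnect(si)
```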