Mastering Python Networking - Second Edition

By: Eric Chou

Overview of this book

Networks in your infrastructure set the foundation for how your applications can be deployed, maintained, and serviced. Python is the ideal language for network engineers to explore tools that were previously available only to systems engineers and application developers. In this second edition of Mastering Python Networking, you'll embark on a Python-based journey to transition from a traditional network engineer to a network developer ready for the next generation of networks. This book begins by reviewing the basics of Python and teaches you how Python can interact with both legacy and API-enabled network devices. As you make your way through the chapters, you will learn to leverage high-level Python packages and frameworks to perform network engineering tasks for automation, monitoring, management, and enhanced security. In the concluding chapters, you will use Jenkins for continuous network integration, as well as testing tools to verify your network. By the end of this book, you will be able to perform all networking tasks with ease using Python.
Table of Contents (15 chapters)

An overview of the internet

What is the internet? This seemingly simple question might receive different answers depending on your background. The internet means different things to different people; the young, the old, students, teachers, businesspeople, and poets could all give different answers to the question.

To a network engineer, the internet is a global computer network consisting of a web of inter-networks connecting large and small networks together. In other words, it is a network of networks without a centralized owner. Take your home network as an example. It might consist of a home Ethernet switch and a wireless access point connecting your smartphone, tablet, computers, and TV together so the devices can communicate with each other. This is your Local Area Network (LAN). When your home network needs to communicate with the outside world, it passes information from your LAN to a larger network operated by your Internet Service Provider (ISP). Your ISP's network often consists of edge nodes that aggregate traffic into its core network. The core network's function is to interconnect these edge networks via a higher-speed network. At special edge nodes, your ISP connects to other ISPs to pass your traffic on toward its destination. The return path from your destination back to your home computer, tablet, or smartphone may or may not follow the same path through all of these networks, while the source and destination remain the same.

Let's take a look at the components making up this web of networks.

Servers, hosts, and network components

Hosts are end nodes on the network that communicate with other nodes. In today's world, a host can be a traditional computer, or it can be your smartphone, tablet, or TV. With the rise of the Internet of Things (IoT), the broad definition of a host can be expanded to include IP cameras, TV set-top boxes, and the ever-increasing variety of sensors that we use in agriculture, farming, automobiles, and more. With the explosion in the number of hosts connected to the internet, all of them need to be addressed, routed, and managed. The demand for proper networking has never been greater.
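To get a feel for the addressing pressure that this explosion of hosts creates, Python's standard `ipaddress` module can show how quickly a network's address space fills up. The `192.168.1.0/24` prefix below is an illustrative example of a typical home LAN, not a value taken from the book:

```python
import ipaddress

# A typical home LAN uses a /24 private network; with IoT devices
# multiplying, it is easy to see how quickly address space fills up.
lan = ipaddress.ip_network("192.168.1.0/24")

# Usable host addresses: the network and broadcast addresses are excluded.
usable_hosts = lan.num_addresses - 2
print(usable_hosts)  # 254
```

Two hundred and fifty-four addresses sounds like plenty for a home, but a camera, a thermostat, and a few dozen sensors per household make it clear why addressing and management at internet scale is a serious engineering problem.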

Most of the time when we are on the internet, we make requests for services. This could be viewing a web page, sending or receiving emails, transferring files, and so on. These services are provided by servers. As the name implies, servers provide services to multiple nodes and generally have higher-end hardware specifications because of this. In a way, servers are special super-nodes on the network that provide additional capabilities to their peers. We will look at servers later on in the client-server model section.
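The request-and-response pattern described above can be sketched with Python's standard `socket` module. This is a minimal, hypothetical illustration of the client-server model on localhost, not a production server; the message contents are made up for the example:

```python
import socket
import threading

def echo_server(server_sock):
    """Serve a single client: accept a connection, read a request,
    and send back a reply, like a minimal internet service."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"HELLO " + request)

# The server binds to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# The host (client) requests a service from the server.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"client-1")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # HELLO client-1
```

The asymmetry is visible even in this toy: the server listens and can serve many clients in turn, which is why real servers justify higher hardware specifications than the hosts they serve.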

If you think of servers and hosts as cities and towns, the network components are the roads and highways that connect them together. In fact, the term information superhighway comes to mind when describing the network components that transmit the ever-increasing bits and bytes across the globe. In the OSI model that we will look at in a bit, these network components are layer one to three devices: layer three routers and layer two switches that direct traffic, as well as layer one transports such as fiber optic cables, coaxial cables, twisted copper pairs, and DWDM equipment, to name a few.

Collectively, hosts, servers, and network components make up the internet as we know it today.

The rise of data centers

In the last section, we looked at the different roles that servers, hosts, and network components play in the inter-network. Because of the higher hardware capacity that servers demand, they are often put together in a central location, so they can be managed more efficiently. We often refer to these locations as data centers.

Enterprise data centers

In a typical enterprise, the company generally needs internal tools such as email, document storage, sales tracking, ordering, HR tools, and a knowledge-sharing intranet. These services translate into file and mail servers, database servers, and web servers. Unlike user computers, these are generally high-end machines that require a lot of power, cooling, and network connections. A byproduct of this hardware is also the amount of noise it makes. The servers are generally placed in a central location in the enterprise, called the Main Distribution Frame (MDF), to provide the necessary power feed, power redundancy, cooling, and network connectivity.

To connect to the MDF, the user's traffic is generally aggregated at a location closer to the user, sometimes called the Intermediate Distribution Frame (IDF), before it is bundled up and connected to the MDF. It is not unusual for the IDF-MDF spread to follow the physical layout of the enterprise building or campus. For example, each building floor can have an IDF that aggregates to the MDF on another floor. If the enterprise consists of several buildings, further aggregation can be done by combining the buildings' traffic before connecting it to the enterprise data center.

Enterprise data centers generally follow a three-layer network design: access, distribution, and core. The access layer is analogous to the ports each user connects to, the IDF can be thought of as the distribution layer, and the core layer consists of the connections to the MDF and the enterprise data centers. This is, of course, a generalization of enterprise networks, as some of them will not follow the same model.

Cloud data centers

With the rise of cloud computing and software- or infrastructure-as-a-service offerings, the data centers that cloud providers build are at hyper-scale. Because of the number of servers they house, they generally demand much higher capacity for power, cooling, network speed, and feed than any enterprise data center. Even after working on cloud provider data centers for many years, every time I visit one, I am still amazed at its scale. In fact, cloud data centers are so big and power-hungry that they are typically built close to power plants, where they can get the cheapest power rate without losing too much efficiency in the transmission of the power. Their cooling needs are so great that some providers are forced to be creative about where a data center is built, choosing a generally cold climate so they can simply open the doors and windows to keep the servers running at a safe temperature when needed. Any search engine can give you some of the astounding numbers when it comes to the science of building and managing cloud data centers for the likes of Amazon, Microsoft, Google, and Facebook:

Utah data center (source: https://en.wikipedia.org/wiki/Utah_Data_Center)

At the cloud provider scale, the services they need to provide are generally not cost efficient, or even feasible, to house in a single server. They are spread across a fleet of servers, sometimes across many different racks, to provide redundancy and flexibility for service owners. The latency and redundancy requirements put a tremendous amount of pressure on the network. The number of interconnections equates to explosive growth in network equipment; this translates into the number of times this network equipment needs to be racked, provisioned, and managed. A typical network design would be a multi-staged CLOS network:

CLOS network
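A quick back-of-the-envelope calculation shows why the equipment count in a CLOS fabric grows so quickly. In a two-stage leaf-spine design, every leaf switch connects to every spine switch, so the port counts bound the size of the fabric. The port counts below are illustrative assumptions, not figures from any particular vendor or from this book:

```python
# Back-of-the-envelope sizing of a two-stage leaf-spine (CLOS) fabric.
# All port counts are illustrative assumptions.
leaf_ports = 48        # total ports per leaf switch
uplinks_per_leaf = 6   # leaf ports reserved for spine uplinks
spine_ports = 32       # ports per spine switch

# Each leaf connects once to every spine, so the number of spines equals
# the uplinks per leaf, and a spine's port count caps the number of leaves.
num_spines = uplinks_per_leaf
max_leaves = spine_ports
servers_per_leaf = leaf_ports - uplinks_per_leaf
max_servers = max_leaves * servers_per_leaf
print(num_spines, max_leaves, max_servers)  # 6 32 1344
```

Even this modest fabric requires 38 switches for roughly 1,300 servers; scale the same arithmetic to hundreds of thousands of servers and the case for automating the racking, provisioning, and management of all that equipment makes itself.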

In a way, cloud data centers are where network automation becomes a necessity for speed and reliability. If we followed the traditional way of managing network devices via a terminal and command-line interface, the number of engineering hours required would not allow the service to be made available in a reasonable amount of time. This is not to mention that human repetition is error-prone, inefficient, and a terrible waste of engineering talent.

Cloud data centers are where I started my path of network automation with Python a number of years ago, and I've never looked back since.

Edge data centers

If we have sufficient computing power at the data center level, why keep anything anywhere else but at these data centers? All the connections from clients around the world can be routed back to the data center servers providing the service, and we can call it a day, right? The answer, of course, depends on the use case. The biggest limitation in routing a request and session all the way from the client back to a large data center is the latency introduced in the transport. In other words, large latency is where the network becomes a bottleneck. The latency number can never be zero: even at the speed of light in a vacuum, physical transport still takes time. In the real world, latency is much higher than this theoretical minimum, as the packet traverses multiple networks, and sometimes an undersea cable, slow satellite links, 3G or 4G cellular links, or Wi-Fi connections.
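The physics floor on latency is easy to compute. The sketch below estimates round-trip propagation delay over optical fiber; the refractive index of roughly 1.47 for silica fiber and the ~5,600 km New York-to-London great-circle distance are assumptions for illustration, not figures from the book:

```python
# Rough round-trip propagation delay over optical fiber.
# Light in silica fiber (refractive index ~1.47, an assumption here)
# travels at about two-thirds of its vacuum speed.
C_VACUUM_KM_S = 299_792.458   # speed of light in a vacuum, km/s
FIBER_INDEX = 1.47

def one_way_delay_ms(distance_km):
    """Propagation delay in milliseconds over fiber of the given length."""
    speed_km_s = C_VACUUM_KM_S / FIBER_INDEX
    return distance_km / speed_km_s * 1000

# New York to London is roughly 5,600 km along the great circle.
rtt_ms = 2 * one_way_delay_ms(5600)
print(round(rtt_ms, 1))  # ~54.9
```

Some 55 milliseconds of round-trip time before counting a single router hop, queue, or retransmission: for interactive services, that budget alone motivates serving users from somewhere much closer than a distant mega data center.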

The solution? Reduce the number of networks the end user's traffic has to traverse. Connect to the user as closely as possible at the edge, where the user enters your network, and place enough resources at the edge location to serve the request. Let's take a minute and imagine that you are building the next generation of video streaming service. In order to increase customer satisfaction with smooth streaming, you would want to place the video servers as close to the customers as possible, either inside or very near to the customer's ISP. Also, the upstream of the video server farm would not just be connected to one or two ISPs, but to all the ISPs it can reach, to reduce the hop count, with as much bandwidth as needed to decrease latency during peak hours. This need gave rise to the peering exchanges and edge data centers of big ISPs and content providers. Even though their number of network devices is not as high as in cloud data centers, they too benefit from the increased reliability, security, and visibility that network automation brings.

We will cover security and visibility in later chapters of this book.