Mastering Python Networking

The internet overview


What is the internet? This seemingly simple question can receive very different answers depending on your background. The internet means different things to different people; the young, the old, students, teachers, business people, and poets could all give a different answer to the question.

To a network engineer, and by extension a systems engineer, the internet is a global computer network providing a wealth of information. This global network is really a web of internetworks connecting networks large and small. Imagine your home network: it might consist of a home switch connecting your smartphone, tablet, computers, and TV so that they can communicate with each other. When your devices need to communicate with the outside world, the home network passes the traffic to a home router, which connects it to a larger network run by your Internet Service Provider (ISP). Your ISP typically operates edge nodes that aggregate traffic into its core network, whose function is to interconnect those edge networks over higher-speed links. At special edge nodes, your ISP connects to other ISPs to pass your traffic along toward its destination. The return path from that destination to your home computer, tablet, or smartphone may or may not take the same path through all of these networks back to your screen.
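To see this web of networks for yourself, you can trace the hops between your host and a remote destination. The following is a minimal sketch, assuming a Unix-like machine with the traceroute utility installed; the destination hostname is only an example:

```python
import subprocess

# Hypothetical destination chosen only for illustration.
destination = "www.python.org"

# Each numbered line of traceroute output is one router (hop) on the path:
# your home router, your ISP's edge and core nodes, possibly other ISPs,
# and finally the destination's network. The -n flag skips DNS lookups.
result = subprocess.run(
    ["traceroute", "-n", destination],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)
```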

Let's take a look at the components making up this web of networks.

Servers, hosts, and network components

Hosts are end nodes on the network that communicate with other nodes. In today's world, a host can be a traditional computer, or it can be your smartphone, tablet, or TV. With the rise of the Internet of Things (IoT), the broad definition of a host can be expanded to include IP cameras, TV set-top boxes, and the ever-increasing variety of sensors we use in agriculture, farming, automobiles, and more. With the explosion in the number of hosts connected to the internet, all of them need to be addressed, routed, and managed, and the demand for proper networking has never been greater.
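To get a feel for the addressing side of that problem, Python's standard ipaddress module can show how many hosts fit in a given network; the example prefixes below are documentation ranges, not addresses from this book:

```python
import ipaddress

# An example IPv4 network, such as one an ISP might assign to a site.
net_v4 = ipaddress.ip_network("192.0.2.0/24")
print(net_v4.num_addresses)   # 256 addresses in a /24

# IPv6 was designed with the host explosion in mind; a single /64 subnet
# holds vastly more addresses than the entire IPv4 address space.
net_v6 = ipaddress.ip_network("2001:db8::/64")
print(net_v6.num_addresses)   # 18446744073709551616
```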

Most of the time when we are on the internet, we request services. A service can be viewing a web page, sending or receiving email, transferring files, and so on. These services are provided by servers. As the name implies, servers provide services to multiple nodes and generally have higher-end hardware specifications because of it. In a way, servers are special super nodes on the network that provide additional capabilities to their peers. We will look at servers later on in the client-server section.
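To make the idea concrete, the following is a minimal client-server sketch using Python's standard socket module; the loopback address and port number are placeholders for a local test. Run run_server() in one process and run_client() in another to see one node requesting a service from its peer:

```python
import socket

HOST, PORT = "127.0.0.1", 9999  # placeholder address and port for a local test

def run_server():
    """One host (the server) offers a simple service: echo back a greeting."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"server reply to: " + request)

def run_client():
    """Another host (the client) requests the service over the network."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello from a host")
        print(cli.recv(1024).decode())
```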

If one thinks of servers and hosts as cities and towns, the network components are the roads and highways that connect them. In fact, the term "information superhighway" comes to mind when describing the network components that transmit the ever-increasing bits and bytes across the globe. In the OSI model that we will look at in a bit, these network components are the routers and switches that reside at layers two and three of the model, as well as the layer one media such as fiber optic cables, coaxial cables, twisted copper pairs, and DWDM equipment, to name a few.

Collectively, the hosts, servers, and network components make up the internet as we know it today.

The rise of the datacenter

In the last section, we looked at the different roles that servers, hosts, and network components play in the internetwork. Because of the higher hardware capacity that servers demand, they are often put together in a central location so that they can be managed more efficiently. We will refer to these locations as datacenters.

Enterprise datacenters

In a typical enterprise, the company generally needs internal tools such as email, document storage, sales tracking, ordering, HR tools, and a knowledge-sharing intranet. These needs translate into file and mail servers, database servers, and web servers. Unlike user computers, these are generally high-end machines that require more power, cooling, and faster network connections. A byproduct of the hardware is also the amount of noise it makes. The servers are generally placed in a central location, called the Main Distribution Frame (MDF) in the enterprise, to provide the necessary power feed, power redundancy, cooling, and network speed.

To connect to the MDF, the user's traffic is generally aggregated at a location sometimes called the Intermediate Distribution Frame (IDF) before being connected to the MDF. It is not unusual for the IDF-MDF layout to follow the physical layout of the enterprise building or campus. For example, each building floor can have an IDF that aggregates to the MDF on another floor. If the enterprise consists of several buildings, further aggregation can be done by combining the buildings' traffic before connecting it to the enterprise datacenter.

Enterprise datacenters generally follow a three-layer design of access, distribution, and core. The access layer is analogous to the ports each user connects to, the IDF can be thought of as the distribution layer, and the core layer consists of the connections to the MDF and the enterprise datacenter. This is, of course, a generalization of enterprise networks, as some of them will not follow the same model.
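Purely as an illustration (the device names and counts below are made up, not taken from any real design), the three-layer hierarchy can be modeled as a simple Python data structure:

```python
# A made-up three-layer enterprise topology: access ports aggregate into
# IDFs (distribution), which in turn aggregate into the MDF/core.
topology = {
    "core": {
        "mdf-core-1": ["idf-floor-1", "idf-floor-2"],
    },
    "distribution": {
        "idf-floor-1": ["access-sw-1a", "access-sw-1b"],
        "idf-floor-2": ["access-sw-2a"],
    },
    "access": {
        "access-sw-1a": ["user-port-1", "user-port-2"],
        "access-sw-1b": ["user-port-3"],
        "access-sw-2a": ["user-port-4"],
    },
}

# Count devices per layer to see how user traffic funnels upward.
for layer, devices in topology.items():
    print(f"{layer}: {len(devices)} device(s)")
```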

Cloud datacenters

With the rise of cloud computing and software- or infrastructure-as-a-service, the datacenters that cloud providers build are referred to as cloud datacenters. Because of the number of servers they house, they generally demand a much higher capacity of power, cooling, network speed, and feeds than any enterprise datacenter. In fact, cloud datacenters are so big that they are typically built close to power sources, where they can get the cheapest rate for power without losing too much of it in transmission. They can also be creative when it comes to cooling: a datacenter might be built in a generally cold climate, so the operators can simply open the doors and windows to keep the servers running at a safe temperature. Any search engine can give you some of the astounding numbers involved in building and managing these cloud datacenters for the likes of Amazon, Microsoft, Google, and Facebook:

Utah Data Center (source: https://en.wikipedia.org/wiki/Utah_Data_Center)

The services that servers at datacenters need to provide are generally not cost-efficient to house in any single server. They are spread among a fleet of servers, sometimes across many different racks, to provide redundancy and flexibility for service owners. The latency and redundancy requirements put a tremendous amount of pressure on the network. The number of interconnections equates to an explosive growth of network equipment; this translates into the number of times that network equipment needs to be racked, provisioned, and managed.

CLOS Network (source: https://en.wikipedia.org/wiki/Clos_network)
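To see why the interconnections multiply so quickly, consider a small worked example (the fabric sizes are invented) counting the links in a two-tier leaf-spine fabric, a common folded Clos design in which every leaf switch connects to every spine switch:

```python
def leaf_spine_links(leaves: int, spines: int) -> int:
    """Every leaf connects to every spine, so links grow multiplicatively."""
    return leaves * spines

# Made-up fabric sizes, just to show the trend as the fabric scales out.
for leaves, spines in [(4, 2), (16, 4), (64, 16), (256, 32)]:
    print(f"{leaves:>4} leaves x {spines:>3} spines = "
          f"{leaf_spine_links(leaves, spines):>6} links to cable and manage")
```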

In a way, the cloud datacenter is where network automation becomes a necessity. If we followed the traditional way of managing network devices via a terminal and the command-line interface, the number of engineering hours required would not allow the service to be available in a reasonable amount of time, not to mention that human repetition is error-prone, inefficient, and a terrible waste of engineering talent.
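As a taste of what that automation can look like, here is a minimal sketch assuming the third-party Netmiko library is installed; the hostnames, credentials, and platform are placeholders rather than anything prescribed by this chapter:

```python
from netmiko import ConnectHandler  # assumes: pip install netmiko

# Placeholder inventory; in a real fabric this list could be thousands long.
devices = [
    {"device_type": "cisco_ios", "host": f"rack{i}-tor1.example.com",
     "username": "admin", "password": "password"}
    for i in range(1, 4)
]

for device in devices:
    # Log in to each device and run the same show command, unattended.
    connection = ConnectHandler(**device)
    output = connection.send_command("show version")
    print(f"--- {device['host']} ---\n{output}")
    connection.disconnect()
```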

The cloud datacenter is where the author started down the path of network automation with Python a number of years ago, and he has never looked back since.

Edge datacenters

If we have sufficient computing power at the datacenter level, why keep anything anywhere else but at these datacenters? All the connections would be routed back to the servers providing the service, and we could call it a day. The answer, of course, depends on the use case. The biggest limitation of routing the request and session all the way back to the datacenter is the latency introduced in transport. In other words, the network is the bottleneck. As fast as light travels in a vacuum, the latency is still not zero. In the real world, the latency is much higher when the packet traverses multiple networks, and sometimes undersea cables, slow satellite links, 3G or 4G cellular links, or Wi-Fi connections.
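A quick back-of-the-envelope calculation (the distance and fiber propagation factor below are typical illustrative values) shows why distance alone puts a floor under latency:

```python
SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
FIBER_FACTOR = 0.67             # light in fiber travels at roughly 2/3 of c
distance_km = 8_000             # e.g., a rough transoceanic path

one_way_ms = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
round_trip_ms = 2 * one_way_ms
print(f"Propagation delay alone: {one_way_ms:.1f} ms one way, "
      f"{round_trip_ms:.1f} ms round trip")
```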

The solution? Reduce the number of networks the end user traverses through as much as possible by, one, being as directly connected to the user as possible, and two, placing enough resources at the edge location. Imagine you are building the next generation of video streaming service. In order to increase customer satisfaction with smooth streaming, you would want to place the video servers as close to the customer as possible, either inside or very near the customer's ISP. Also, the upstream of your video server farm would not just be connected to one or two ISPs, but to all the ISPs you can reach, with as much bandwidth as needed. This gave rise to the peering exchange and edge datacenters of big ISPs and content providers. Even though the number of network devices there is not as high as in the cloud datacenters, they too can benefit from network automation in terms of the increased security and visibility that automation brings. We will cover security and visibility in later chapters of this book.