DevOps for Networking

By: Steven Armstrong

Overview of this book

Frustrated that your company's network changes are still a manual set of activities that slow developers down? It doesn't need to be that way any longer, as this book will help your company and network teams embrace DevOps and continuous delivery approaches, enabling them to automate all network functions. This book aims to show readers network automation processes they could implement in their organizations. It will teach you the fundamentals of DevOps in networking and how to improve DevOps processes and workflows by bringing automation into your network. You will be exposed to various networking strategies that are stopping your organization from scaling new projects quickly. You will see how SDN and APIs are influencing DevOps transformations, which will in turn help you improve the scalability and efficiency of your organization's network operations. You will also find out how to leverage various configuration management tools, such as Ansible, to automate your network. The book will also look at containers and the impact they are having on networking, as well as at how automation impacts network security in a software-defined network.

Changes that have occurred in networking with the introduction of public cloud


It is unquestionable that the emergence of Amazon Web Services (AWS), launched in 2006, changed and shaped the networking landscape forever. AWS has allowed companies to rapidly develop their products on the AWS platform, and it has created an innovative set of services for end users so that they can manage infrastructure, load balancing, and even databases. These services have led the way in making the DevOps ideology a reality by allowing users to elastically scale infrastructure up and down as they need it to develop products, so infrastructure wait times are no longer an inhibitor to development teams. AWS's rich feature set allows users to create infrastructure by clicking through a portal, while more advanced users can create infrastructure programmatically using configuration management tooling, such as Ansible, Chef, Puppet, or Salt, or Platform as a Service (PaaS) solutions.

An overview of AWS

As of 2016, an AWS Virtual Private Cloud (VPC) provides an isolated network for a set of Amazon EC2 instances (virtual machines), and it can be connected to any existing network using a VPN connection. This simple construct has changed the way that developers want and expect to consume networking.
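
As a rough sketch of what this construct looks like programmatically, the following creates a VPC and a subnet using boto3, the AWS Python SDK; the region and CIDR blocks are illustrative placeholders, and AWS credentials are assumed to already be configured:

    # Sketch: carve out an isolated network (VPC) and a subnet with boto3.
    # The region and CIDR blocks below are placeholder values.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Create the isolated network that EC2 instances will live inside
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Add a subnet to place instances into
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print(vpc_id, subnet["Subnet"]["SubnetId"])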

In 2016, we live in a consumer-based society, with mobile phones allowing us instant access to the Internet, films, games, and an array of different applications to meet our every need: instant gratification, if you will. So it is easy to see the appeal AWS has to end users.

AWS allows developers to provision instances (virtual machines) in their own personal network, to their desired specification, by selecting different flavors (CPU, RAM, and disk) with a few button clicks on the AWS portal's graphical user interface, or alternatively by making a simple call to an API or scripting against the AWS-provided SDKs.
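
As an example of the API route, here is a minimal sketch that provisions an instance of a given flavor using boto3; the AMI ID, key pair name, and subnet ID are hypothetical placeholders you would replace with values from your own account:

    # Sketch: provision an EC2 instance (virtual machine) via the boto3 SDK.
    # ImageId, KeyName, and SubnetId are hypothetical placeholders.
    import boto3

    ec2 = boto3.resource("ec2", region_name="eu-west-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder AMI
        InstanceType="t2.micro",              # the "flavor": CPU/RAM profile
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",                 # placeholder SSH key pair
        SubnetId="subnet-0123456789abcdef0",  # placeholder subnet in your VPC
    )
    print(instances[0].id)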

So now a valid question: why should developers be expected to wait long periods of time for infrastructure or networking tickets to be serviced in on-premises data centers when AWS is available? It really shouldn't be a hard question to answer. The solution surely has to be either to move to AWS or to create a private cloud solution that enables the same agility. However, the answer isn't always that straightforward; there are the following arguments against using AWS and the public cloud:

  • Not knowing where the data is actually stored and in which data center

  • Not being able to hold sensitive data offsite

  • Not being able to assure the necessary performance

  • High running costs

All of these points are genuine blockers for businesses that are highly regulated, need to be PCI compliant, or are required to meet specific regulatory standards. These points may inhibit some businesses from using the public cloud, so, as with most solutions, it isn't a case of one size fits all.

In private data centers, there is a cultural issue: teams have been set up to work in silos and are not set up to succeed in an agile business model, so a lot of the time using AWS, Microsoft Azure, or Google Cloud is a quick fix for broken operational models.

Ticketing systems, a staple of broken internal operational models, are not a concept that aligns itself with speed. An IT ticket raised to an adjacent team can take days or weeks to complete, so requests are queued before virtual or physical servers can be provided to developers. This is prominent for network changes too, with even a simple modification to ACL rules taking an age to be implemented due to ticketing backlogs.

Developers need the ability to scale up servers or prototype new features at will, so long wait times for IT tickets to be processed hinder the delivery of new products to market and of bug fixes to existing products. It has become common in internal IT for some Information Technology Infrastructure Library (ITIL) practitioners to use how many tickets they process in a week as the main metric for success, which shows complete disregard for the customer experience of their developers. Some operations that have traditionally lived with internal or shadow IT need to shift to the developers, but there needs to be a change in operational processes at a business level to invoke these changes.

Put simply, AWS has changed the expectations of developers and the expectations placed on infrastructure and networking teams. Developers should be able to service their needs as quickly as they can make an alteration to an application on their mobile phone, free from the slow internal IT operational models associated with companies.

For start-ups and businesses that can use AWS, unconstrained by regulatory requirements, the public cloud skips the need to hire teams to rack servers, configure network devices, and pay the running costs of data centers. It means they can start viable businesses and run them on AWS by putting in credit card details, the same way you would purchase a new book on Amazon or eBay.

OpenStack overview

AWS was met with trepidation from competitors, as it disrupted the cloud computing industry, and it has led to PaaS solutions such as Cloud Foundry and Pivotal coming to fruition to provide an abstraction layer on top of hybrid clouds.

When a market is disrupted, it provokes a reaction, and from this one came the idea for a new private cloud. In 2010, a joint venture between Rackspace and NASA launched an open source cloud software initiative known as OpenStack, which came about because NASA couldn't put its data in a public cloud.

The OpenStack project was intended to help organizations offer cloud computing services running on standard hardware, and it directly set out to mimic the model provided by AWS. The main difference is that OpenStack is an open source project that can be used by leading vendors to bring AWS-like ability and agility to the private cloud.

Since its inception in 2010, OpenStack has grown to have over 500 member companies as part of the OpenStack Foundation, with platinum and gold members comprising the biggest IT vendors in the world, who are actively driving the community.

OpenStack is an open source project, which means its source code is publicly available and its underlying architecture is open for analysis, unlike AWS, which acts like a magic box of tricks: it is not really known how it works underneath its shiny exterior.

OpenStack is primarily used to provide an Infrastructure as a Service (IaaS) function within the private cloud, where it makes commodity x86 compute, centralized storage, and networking features available to end users so they can self-service their needs, be it via the Horizon dashboard or through a set of common APIs.
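
To illustrate the self-service API route, the following is a minimal sketch using the official openstacksdk Python library; it assumes a cloud named mycloud is defined in a local clouds.yaml file, and the network name, subnet name, and CIDR are illustrative placeholders:

    # Sketch: self-service a tenant network and subnet through OpenStack's
    # networking API using openstacksdk. The cloud name "mycloud" must
    # exist in clouds.yaml; names and CIDR are placeholder values.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Create a tenant network, as a user otherwise would via Horizon
    network = conn.network.create_network(name="app-network")

    # Attach an IPv4 subnet to the new network
    subnet = conn.network.create_subnet(
        network_id=network.id,
        ip_version=4,
        cidr="192.168.10.0/24",
        name="app-subnet",
    )
    print(network.id, subnet.id)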

Many companies are now implementing OpenStack to build their own data centers, and rather than doing it on their own, some are using vendor-hardened distributions of the community upstream project. Using a vendor-hardened distribution of OpenStack when starting out makes an implementation far likelier to be successful. Initially, implementing OpenStack can seem complex for some companies, as it is a completely new set of technology that they may not be familiar with yet. OpenStack implementations are less likely to fail when using professional service support from known vendors, and this can create a viable enterprise alternative to public cloud solutions such as AWS or Microsoft Azure.

Vendors such as Red Hat, HPE, SUSE, Canonical, Mirantis, and many more provide different distributions of OpenStack to customers, complete with different methods of installing the platform. Although the source code and features are the same, the business model of these OpenStack vendors is to harden OpenStack for enterprise use, and their differentiator to customers is their professional services.

There are many different OpenStack distributions available to customers, with the following vendors providing them:

  • Bright Computing

  • Canonical

  • HPE

  • IBM

  • Mirantis

  • Oracle OpenStack for Oracle Linux, or O3L

  • Oracle OpenStack for Oracle Solaris

  • Red Hat

  • SUSE

  • VMware Integrated OpenStack (VIO)

OpenStack vendors will support the build-out, ongoing maintenance, and upgrades, as well as any customizations a client needs, all of which are fed back to the community. The beauty of OpenStack being an open source project is that if vendors customize OpenStack for clients and create a real differentiator or competitive advantage, they cannot fork OpenStack and uniquely sell that feature; instead, they have to contribute the source code back to the upstream open source OpenStack project.

This means that all competing vendors contribute to the success of OpenStack and benefit from each other's innovative work. The OpenStack project is not just for vendors, though; everyone can contribute code and features to push the project forward.

OpenStack maintains a release cycle in which an upstream release is created every six months, governed by the OpenStack Foundation. It is important to note that many public clouds, such as those from AT&T, Rackspace, and GoDaddy, are based on OpenStack too, so it is not exclusive to private clouds. However, it has undeniably become increasingly popular as a private cloud alternative to the AWS public cloud, and it is now widely used for Network Function Virtualization (NFV).

So how do AWS and OpenStack work in terms of networking? Both AWS and OpenStack are made up of mandatory and optional projects that are all integrated to make up their reference architectures. Mandatory projects include compute and networking, which are the staple of any cloud solution, whereas others are optional bolt-ons that enhance or extend capability. This means that end users can cherry-pick the projects they are interested in to make up their own personal portfolio.