DevSecOps in Practice with VMware Tanzu

By: Parth Pandit, Robert Hardt

Overview of this book

As Kubernetes (or K8s) becomes more prolific, managing large clusters at scale in a multi-cloud environment becomes more challenging – especially from a developer productivity and operational efficiency point of view. DevSecOps in Practice with VMware Tanzu addresses these challenges by automating the delivery of containerized workloads and controlling multi-cloud Kubernetes operations using Tanzu tools. This comprehensive guide begins with an overview of the VMware Tanzu platform and discusses its tools for building useful and secure applications using the App Accelerator, Build Service, Catalog service, and API portal. Next, you’ll delve into running those applications efficiently at scale with Tanzu Kubernetes Grid and Tanzu Application Platform. As you advance, you’ll find out how to manage, control, observe, and connect these applications using Tanzu Mission Control, Tanzu Observability, and Tanzu Service Mesh. Finally, you’ll explore the architecture, capabilities, features, installation, configuration, implementation, and benefits of these services with the help of examples. By the end of this VMware book, you’ll have gained a thorough understanding of the VMware Tanzu platform and be able to efficiently articulate and solve real-world business problems.
Table of Contents (19 chapters)

Part 1 – Building Cloud-Native Applications on the Tanzu Platform
Part 2 – Running Cloud-Native Applications on Tanzu
Part 3 – Managing Modern Applications on the Tanzu Platform

The emergence of the cloud and containers

Happily, the tech industry has begun to coalesce around some key technologies that have come a long way in addressing some of these concerns. The first is what is known colloquially as The Cloud. Enterprises with a big technology footprint began to realize that managing data centers and infrastructure was not the main focus of their business and could be done more efficiently by third parties for which those tasks were a core competency.

Rather than staff their own data centers and manage their own hardware, they could rent expertly managed and maintained technology infrastructure, and they could scale their capacity up and back down based on real-time business needs. This was a game-changer on many levels. One positive outcome of this shift was a universal raising of the bar with regard to vulnerable and out-of-date software on the internet. As vulnerabilities emerged, the cloud providers could make the right thing to do the easiest thing to do. Managed databases that handle their own operating system updates and object storage that is publicly inaccessible by default are two examples that come immediately to mind. Another outcome was a dramatic increase in deployment velocity as infrastructure management was taken off developers’ plates.

As The Cloud became ubiquitous in the world of enterprise software and infrastructure capacity became a commodity, some issues that were previously obscured by infrastructure concerns took center stage. One such issue was environmental inconsistency: developers could have something running perfectly in their development environment only to have it fall over in production. This problem became so common that it earned its own subgenre of programmer humor called It Works on My Machine.

Another of these issues was unused capacity. It had become so easy to stand up new server infrastructure that app teams were standing up large (and expensive) fleets only to have them running nearly idle most of the time.

Containers

That brings us to the subject of containers. Many application teams would argue that they needed their own fleet of servers because they had a unique set of dependencies that needed to be installed on those servers, which conflicted with the libraries and utilities required by other apps that may want to share those servers. It was the happy confluence of two technical streams that solved this problem, allowing applications with vastly different sets of dependencies to run side by side on the same server without them even being aware of one another.

Container runtimes

The first stream was the concept of cgroups and kernel namespaces. These were abstractions that were built into the Linux kernel that gave a process some guarantee as to how much memory and processor capacity it would have available to it, as well as the illusion of its own process space, its own networking stack, and its own root filesystem, among other things.

Container packaging and distribution

The second was a packaging format and API by which you could bundle up an entire Linux root filesystem, complete with its own unique dependencies, store it efficiently, unpack it on an arbitrary server, and run it in isolation from other processes that were running with their own root filesystems.
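The Docker image format, later standardized as the OCI image specification, is the best-known realization of this idea. As a minimal illustrative sketch (the app name and files here are hypothetical), a Dockerfile declares the root filesystem and dependencies layer by layer:

```dockerfile
# Hypothetical example: the base image supplies a complete Linux userland;
# each instruction adds a layer that can be stored and shipped efficiently.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends python3
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

Built with `docker build`, the resulting image can be pushed to a registry and then unpacked and run unchanged on any server with a container runtime, isolated from the dependencies of every other image running beside it.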

When combined, developers found that they could stand up a fleet of servers and run many heterogeneous applications safely on that single fleet, thereby using their cloud infrastructure much more efficiently.

Then, just as the move to the cloud exposed a new set of problems that would lead to the evolution of a container ecosystem, those containers created a new set of problems, which we’ll cover in the next section about Kubernetes.