DevSecOps in Practice with VMware Tanzu

By: Parth Pandit, Robert Hardt

Overview of this book

As Kubernetes (or K8s) becomes more prevalent, managing large clusters at scale in a multi-cloud environment becomes more challenging – especially from a developer productivity and operational efficiency point of view. DevSecOps in Practice with VMware Tanzu addresses these challenges by automating the delivery of containerized workloads and controlling multi-cloud Kubernetes operations using Tanzu tools. This comprehensive guide begins with an overview of the VMware Tanzu platform and discusses its tools for building useful and secure applications using the App Accelerator, Build Service, Catalog service, and API portal. Next, you’ll delve into running those applications efficiently at scale with Tanzu Kubernetes Grid and Tanzu Application Platform. As you advance, you’ll find out how to manage these applications, and control, observe, and connect them using Tanzu Mission Control, Tanzu Observability, and Tanzu Service Mesh. Finally, you’ll explore the architecture, capabilities, features, installation, configuration, implementation, and benefits of these services with the help of examples. By the end of this VMware book, you’ll have gained a thorough understanding of the VMware Tanzu platform and be able to efficiently articulate and solve real-world business problems.
Table of Contents (19 chapters)
Part 1 – Building Cloud-Native Applications on the Tanzu Platform
Part 2 – Running Cloud-Native Applications on Tanzu
Part 3 – Managing Modern Applications on the Tanzu Platform


When containers caught on, they took off in a big way, but they were not the be-all-and-end-all solution developers had hoped for. A container runtime on a server often required big trade-offs between flexibility and security. Because the container runtime needed to work closely with the Linux kernel, users often required elevated permissions just to run their containers. Furthermore, there were multiple ways to run containers on a server, some of which were tightly coupled to specific cloud providers. Finally, while container runtimes let developers start up their applications, they varied widely in their support for things like persistent storage and networking, which often required manual configuration and customization.

These were the problems that Joe Beda, Craig McLuckie, and Brendan Burns at Google were trying to solve when they built Kubernetes. Rather than just a means of running containerized applications on a server, Kubernetes evolved into what Google Distinguished Developer Advocate Kelsey Hightower called “a platform for building platforms.” Kubernetes offered many benefits over running containers directly on a server:

  • It provided a single flexible declarative API for describing the desired state of a running application – 9 instances, each using 1 gigabyte of RAM and 500 millicores of CPU spread evenly over 3 availability zones, for example
  • It handled running the instances across an elastic fleet of servers complete with all the necessary networking and resource management
  • It provided a declarative way to expose cloud-provider-specific implementations of networking and persistent storage to container workloads
  • It provided a framework for custom APIs such that any arbitrary object could be managed by Kubernetes
  • It shipped with developer-oriented abstractions such as Deployments, StatefulSets, ConfigMaps, and Secrets, which handled many common use cases
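To make the first point concrete, the desired state described above – 9 instances, each with 1 gigabyte of RAM and 500 millicores of CPU, spread evenly across 3 availability zones – might be expressed as a Kubernetes Deployment manifest roughly like the following sketch (the application name, labels, and image are illustrative placeholders, not from the book):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # illustrative name
spec:
  replicas: 9             # desired number of instances
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      # Spread pods evenly across availability zones
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: demo-app
      containers:
        - name: app
          image: registry.example.com/demo-app:1.0   # illustrative image
          resources:
            requests:
              memory: 1Gi   # 1 gigabyte of RAM per instance
              cpu: 500m     # 500 millicores of CPU per instance
```

Applying this manifest (for example, with `kubectl apply -f`) hands the desired state to Kubernetes, which then continuously reconciles the cluster toward it – the operator never scripts *how* to start, place, or restart the individual instances.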

Many of us thought that perhaps Kubernetes was the technological advance that would finally solve all of our problems, but just as with each previous technology iteration, the solution to a particular set of problems simply exposes a new generation of problems.

As companies with large teams of developers began to onboard onto Kubernetes, these problems became increasingly pronounced. Here are some examples:

  • Technology sprawl took hold, with each team solving the same problem differently
  • Teams had their own ops tooling and processes, making it difficult to scale operations across applications
  • Enforcing best practices involved synchronous, human-bound processes that slowed developer velocity
  • Each cloud provider’s flavor of Kubernetes was slightly different, making multi-cloud and hybrid-cloud deployments difficult
  • Many core components of a Kubernetes Deployment – container images, for example – amplified existing problems by letting developers deploy vulnerable software more quickly and widely than before
  • Entire teams had to be spun up just to manage developer tooling and try to enforce some homogeneity across a wide portfolio of applications
  • Running multiple different applications on a single Kubernetes cluster required significant operator effort and investment

Alas, Kubernetes was not the panacea we had hoped it would be; rather, it was just another iteration of technology that moves the industry forward by solving one set of problems but inevitably surfacing a new set of problems. This is where the Tanzu team at VMware comes into the picture.