Google Cloud Digital Leader Certification Guide

By: Bruno Beraldo Rodrigues

Overview of this book

To thrive in today's world, leaders and technologists must understand how technology shapes businesses. As organizations shift from self-hosted to cloud-native solutions, embracing serverless systems, strategizing data use, and defining monetization become imperative. The Google Cloud Digital Leader Certification Guide lays a solid foundation of industry knowledge, focused on the Google Cloud platform and the innovative ways in which customers leverage its technologies.

The book starts by helping you grasp the essence of digital transformation within the Google Cloud context. You’ll then cover core components of the platform, such as infrastructure and application modernization, data innovation, and best practices for environment management and security. With a series of practice exam questions included, this book ensures that you build comprehensive knowledge and prepare to certify as a Google Cloud Digital Leader.

Going beyond the exam essentials, you’ll also explore how companies are modernizing infrastructure, data ecosystems, and teams in order to capitalize on new market opportunities through platform expertise, best practices, and real-world scenarios. By the end of this book, you'll have learned everything you need to pass the Google Cloud Digital Leader certification exam and have a reference guide for future requirements.
Table of Contents (21 chapters)
  • Part 1: Introduction to Digital Transformation with Google Cloud
  • Part 2: Innovating with Data and Google Cloud
  • Part 3: Infrastructure and Platform Modernization
  • Part 4: Understanding Google Cloud Security and Operations
  • Part 5: Practice Exam Questions

An introduction to data centers

To understand where the hyperscale cloud industry is today and where it’s headed, it’s helpful to dig into how the industry came about. The term cloud refers to a system or application used for a business purpose whose infrastructure is centralized, typically in a location referred to as a data center. Data centers are warehouses specifically built to house advanced computing hardware, networking equipment, data, and applications.

Data centers are composed of racks and racks of servers – the business term for these computers – along with networking equipment, storage arrays, and cooling and power equipment, among other components. Servers themselves are typically composed of a central processing unit (CPU), random access memory (RAM), and disk, typically either a hard drive or a flash drive, to store data. The CPU is used for computational tasks such as solving a math problem, while RAM stores data temporarily, typically because the data is being used by an application and transformed in some way. Disk and flash drives store data for longer periods, with disk drives being cheaper, mechanical storage devices and flash drives being more expensive given that data is stored in semiconductor chips. Servers host applications that are typically either internally facing, helping employees do their jobs more efficiently and effectively, or externally facing, providing services to customers and partners.
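
To make these components concrete, here is a minimal Python sketch (our illustration, not from the exam material) that inspects the CPU, RAM, and disk of the machine it runs on, assuming the third-party psutil library is installed:

    import psutil

    # Logical CPU cores available for computational work
    print("CPU cores:", psutil.cpu_count(logical=True))

    # RAM: fast, volatile storage for data an application is actively using
    mem = psutil.virtual_memory()
    print(f"RAM: {mem.total / 1e9:.1f} GB total, {mem.available / 1e9:.1f} GB available")

    # Disk: slower, persistent storage for long-lived data
    disk = psutil.disk_usage("/")
    print(f"Disk: {disk.total / 1e9:.1f} GB total, {disk.free / 1e9:.1f} GB free")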

When discussing data centers, it’s helpful to define the different types. We’ll start with the traditional approach: on-premises infrastructure. On-premises refers to an organization hosting its technology infrastructure onsite at offices or other business facilities. For example, if you are a legal firm, your systems would be on-premises if they were hosted at the same location as your office. There would be a server room, or several rooms, at the facility where all of the systems and data are hosted for employees to complete their work. This location would host the systems that support worker productivity and internal workflows, while also facilitating business operations.

This approach has benefits, such as making it relatively easy to secure systems and applications. If all of the systems and data are hosted within your organization’s physical locations, and they are not accessible from the internet, you can establish a strong security posture through physical controls, such as allowing only employees to access your facilities and not allowing them to take their workstations home. At a small scale, this is also a very manageable approach to infrastructure, given that you could operate it with a small team and troubleshoot issues by reaching out to your local IT administrators.

However, as organizations and systems became more complex, scaled globally, and supported new patterns of work, new approaches began to surface. If the legal firm, for example, were to grow to 10 offices around the globe, it would be very difficult to continue to work via distributed, on-premises systems. Each office would need its own technology, infrastructure, and IT personnel, and there would need to be a way for workers across offices to collaborate. They would need to share data, even sensitive data, if, say, the New York and London offices were working together on a case for a multinational company operating in both the US and the UK. These pressures led to the centralization of infrastructure, which is where the term cloud comes from.

Let’s explore the two core variations of cloud hosting: private cloud and public cloud. An organization leverages the private cloud when it runs environments with only one tenant – itself. This may mean that it owns and operates all of the data center infrastructure itself: the company buys or leases land, manages the building, procures the servers, installs the operating systems, and manages the applications and data, while also being responsible for both the physical and virtual security of the private cloud. In some circumstances, it may procure the space from a third-party vendor, but it is ultimately responsible for the hardware, software, and networking infrastructure, with the space within the facility dedicated exclusively to it. Some drawbacks of the private cloud are the large capital expenditure required for real estate, equipment, and operating costs, as well as the deep complexity that comes with having to design, build, maintain, secure, and scale the infrastructure.

In contrast, when an organization leverages the public cloud, it offloads much of the physical responsibility of operating a data center to a third party. Google Cloud is a public cloud platform where developers and engineers can access infrastructure built on top of Google’s data centers and networks. This enables businesses to offload responsibilities such as building and managing the physical components of the data center so they can specialize in what provides value to their customers. Customers of public cloud providers benefit because much of the complexity of managing technology infrastructure is abstracted away, leaving them to focus on the virtualization layer and above: they are responsible for the virtual machines, operating systems, applications, and data needed to operate their businesses.
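
As a hedged illustration of working at the virtualization layer and above, the Python sketch below lists the virtual machines in one zone of a Google Cloud project using the google-cloud-compute client library; the project ID is a placeholder, and credentials are assumed to be configured (for example, via gcloud auth application-default login):

    from google.cloud import compute_v1

    project_id = "my-sample-project"  # hypothetical project ID
    zone = "us-central1-a"

    # Google operates the physical data center; we see only our VMs
    instances_client = compute_v1.InstancesClient()
    for instance in instances_client.list(project=project_id, zone=zone):
        print(instance.name, instance.status)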

A company that operates a hybrid cloud environment blends both public and private cloud environments. It may have its own data centers hosting internal or sensitive applications while leveraging the public cloud for externally facing applications. This pattern is common in organizations that built up their own data centers and are now building new applications in the cloud or migrating systems to optimize for the most efficient hosting strategy. In some cases, they may build a data warehouse in the cloud to consolidate and organize data from disparate on-premises systems, such as sales, marketing, logistics, and fulfillment data, as sketched below.
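
As a sketch of that last pattern, the query below reads from a BigQuery data warehouse that consolidates data loaded from on-premises systems; it uses the google-cloud-bigquery Python client, and the project, dataset, and table names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client(project="my-sample-project")  # hypothetical project

    query = """
        SELECT s.region,
               SUM(s.amount)     AS total_sales,
               COUNT(f.order_id) AS orders_fulfilled
        FROM analytics.sales AS s         -- loaded from the on-prem sales system
        JOIN analytics.fulfillment AS f   -- loaded from the on-prem fulfillment system
          ON s.order_id = f.order_id
        GROUP BY s.region
    """

    # Run the query in the cloud warehouse and print one summary row per region
    for row in client.query(query).result():
        print(row.region, row.total_sales, row.orders_fulfilled)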

Multi-cloud is a cloud computing strategy where companies use multiple public cloud providers to host their applications. This allows them to run their workloads optimally across multiple environments and vendors, reducing the risk of vendor lock-in, where organizations become restricted in their ability to innovate and to negotiate costs once they consolidate their infrastructure under a single vendor. Managing environments across multiple cloud providers can be complex, given the variations between the platforms and the integrations that may be required. Multi-cloud is becoming more common as engineers build skills across multiple providers and vendors begin to differentiate themselves through unique capabilities, partnerships, or contracting vehicles.

With the advent of cloud computing, we’ve also seen the rise of a new breed of technology company: cloud-native. Google Cloud states the following (https://cloud.google.com/learn/what-is-cloud-native):

“Cloud native means adapting to the many new possibilities – but a very different set of architectural constraints – offered by the cloud compared to traditional on-premises infrastructure. Unlike monolithic applications, which must be built, tested, and deployed as a single unit, cloud-native architectures decompose components into loosely coupled services to help manage complexity and improve the speed, agility, and scale of software delivery.”

Another concept that goes hand in hand with cloud-native is open source, as it relates to how software is developed and the standards that accompany this shift. Historically, software has been closed source, meaning that the code base was owned by, and viewable only to, the manufacturer. This led to a rise in enterprise software licensing where vendor lock-in comes into play: you are at the mercy of the manufacturer from both a cost and a capability perspective, locked into multi-year contracts, and unable to view or change the code base to suit your needs.

In contrast, open source software takes the opposite approach. Code bases are public, and anyone from around the world can view and contribute to them to ensure they meet their needs. Given this novel, distributed approach to software development, policies were also needed to govern what qualifies as an open source project.

Some of the key principles to open source software development are as follows:

  • Transparency, allowing public access to review and deploy the code
  • Open, community-centric, and collaborative development
  • Meritocratic approach to contribution, driven by experts
  • Freely available without licensing or cost restrictions

Organizations that adopt the open source approach to software development can accelerate the pace of innovation for their teams given that they are not at the mercy of one company to improve the code base. Employees from the organization can make feature requests, contribute to the code base, and ensure that the code improves over time.

A great example of a successful open source project is Kubernetes. Kubernetes is an open source container orchestration project launched by Google as a way to help drive awareness of container-based deployment methods and architecture.

Containers introduced a new way of thinking about how to run applications in relation to the underlying hardware and software. In the world of virtual machines, the application, operating system, and hardware are fundamentally coupled, which means that if the hardware fails, the application also fails. Containers allowed developers to scale applications more gracefully and build more fault tolerance into their architecture and systems. They also happened to be a more cost-effective way of hosting applications, given that resources could be shared across several systems to maximize utilization while minimizing cost.
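
The sketch below makes this resource sharing concrete, assuming Docker is installed and running and the third-party docker Python SDK is available; it starts two containers that share one host's kernel and hardware, each carrying its own dependencies:

    import docker

    client = docker.from_env()

    # Two lightweight containers sharing the same host's resources
    web = client.containers.run("nginx:alpine", detach=True)
    cache = client.containers.run("redis:alpine", detach=True)

    for container in client.containers.list():
        print(container.name, container.image.tags, container.status)

    # Clean up the demo containers
    web.remove(force=True)
    cache.remove(force=True)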

This shift in designing and architecting systems translated very well to cloud-native organizations, given their flexibility and ability to innovate rapidly. More traditional organizations began to adopt similar practices as they came to understand the value these approaches can deliver to their business, with start-ups rising quickly and disrupting industry after industry.

The shift from private to public cloud, by offloading the responsibilities of building and managing data centers, has enabled businesses to focus on driving change that has a meaningful impact on the bottom line. Engineers who had historically been tasked with monitoring onsite infrastructure or managing databases can be repurposed to increase developer productivity, improve the security posture, or engage in R&D projects.

As we explore the forces behind why businesses have been shifting to public cloud adoption, it’s helpful to start by defining digital transformation, including its importance and benefits.
