IBM Cloud Pak for Data

By: Hemanth Manda, Sriram Srinivasan, Deepak Rangarao
Overview of this book

Cloud Pak for Data is IBM's modern data and AI platform. It brings together strategic offerings from IBM's data and AI portfolio, delivered in a cloud-native fashion with the flexibility to deploy on any cloud, and takes a distinctive approach to modern data challenges with an integrated mix of proprietary, open-source, and third-party services.

You'll begin by getting to grips with key concepts in modern data management and artificial intelligence (AI), reviewing real-life use cases, and developing an appreciation of the AI Ladder principle. Once you've mastered the basics, you will explore how Cloud Pak for Data supports an elegant implementation of the AI Ladder practice to collect, organize, analyze, and infuse data and trustworthy AI across your business. As you advance, you'll discover the capabilities of the platform and its extension services, including how they are packaged and priced. With the help of examples presented throughout the book, you will gain a deep understanding of the platform, from its rich capabilities and technical architecture to its ecosystem and key go-to-market aspects.

By the end of this IBM book, you'll be able to apply IBM Cloud Pak for Data's prescriptive practices and leverage its capabilities to build a trusted data foundation and accelerate AI adoption in your enterprise.
Table of Contents (17 chapters)
Section 1: The Basics
Section 2: Product Capabilities
Section 3: Technical Details

Data virtualization – accessing data anywhere

Historically, enterprises have consolidated data from multiple sources into central data stores, such as data marts, data warehouses, and data lakes, for analysis. While this approach remains relevant for certain use cases, the time, money, and resources required make it impractical to repeat every time a business user or data scientist needs new data. Extracting, transforming, and consolidating data is resource-intensive, expensive, and time-consuming, and data virtualization can avoid much of that cost.

Data virtualization enables users to tap into data at its source, avoiding the complexity and manual effort of governing and securing multiple copies, as well as the incremental storage they require. This also simplifies application development and adds agility. Extract, Transform, and Load (ETL), on the other hand, remains helpful for complex transformation processes and nicely complements data virtualization, which allows users to bypass many...
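To make the idea concrete, here is a minimal, purely illustrative sketch of the virtualization pattern: a single SQL query joins data from two independent "source" systems in place, with no ETL step copying either dataset into a central store. It uses SQLite's ATTACH as a toy federation layer; the table and database names are invented for the example and are not part of Cloud Pak for Data's actual data virtualization service.

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
sales_db = os.path.join(tmp, "sales.db")  # hypothetical source 1: sales system
crm_db = os.path.join(tmp, "crm.db")      # hypothetical source 2: CRM system

# Populate the two independent sources.
con = sqlite3.connect(sales_db)
con.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 250.0), (2, 99.5), (1, 40.0)])
con.commit()
con.close()

con = sqlite3.connect(crm_db)
con.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Acme Corp"), (2, "Globex")])
con.commit()
con.close()

# "Virtualized" query: join across both sources where the data lives,
# without extracting or consolidating anything into a central store.
con = sqlite3.connect(sales_db)
con.execute("ATTACH DATABASE ? AS crm", (crm_db,))
rows = con.execute(
    """SELECT c.name, SUM(o.amount)
       FROM orders AS o
       JOIN crm.customers AS c ON c.id = o.customer_id
       GROUP BY c.name
       ORDER BY c.name"""
).fetchall()
con.close()

print(rows)  # per-customer totals computed across the two sources
```

A real virtualization layer does the same thing at enterprise scale: it presents remote databases, warehouses, and files as virtual tables and pushes query processing down to the sources, so consumers see one queryable view without any data movement.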