IBM Cloud Pak for Data

By: Hemanth Manda, Sriram Srinivasan, Deepak Rangarao
Overview of this book

Cloud Pak for Data is IBM's modern data and AI platform that brings together strategic offerings from its data and AI portfolio, delivered in a cloud-native fashion with the flexibility to deploy on any cloud. The platform offers a unique approach to addressing modern challenges with an integrated mix of proprietary, open-source, and third-party services.

You'll begin by getting to grips with key concepts in modern data management and artificial intelligence (AI), reviewing real-life use cases, and developing an appreciation of the AI Ladder principle. Once you've mastered the basics, you'll explore how Cloud Pak for Data enables an elegant implementation of the AI Ladder practice to collect, organize, and analyze data, and to infuse trustworthy AI across your business. As you advance, you'll discover the capabilities of the platform and its extension services, including how they are packaged and priced.

With the help of examples throughout the book, you will gain a deep understanding of the platform, from its rich capabilities and technical architecture to its ecosystem and key go-to-market aspects. By the end of this IBM book, you'll be able to apply IBM Cloud Pak for Data's prescriptive practices and leverage its capabilities to build a trusted data foundation and accelerate AI adoption in your enterprise.
Table of Contents (17 chapters)

Section 1: The Basics
Section 2: Product Capabilities
Section 3: Technical Details

Data virtualization versus ETL – when to use what?

Historically, data warehouses and data lakes have been built by moving data in bulk using ETL (extract, transform, load). One of the leading ETL products in the market is IBM DataStage, which raises the question of when to use data virtualization versus an ETL offering. The answer depends on the use case. If the intent is to explore and analyze small sets of data in real time, where the underlying data can change every few minutes or hours, data virtualization is recommended. Note that "small sets of data" refers to the volume of data actually transferred, not the size of the dataset the query runs against. On the flip side, if the use case requires processing huge datasets across multiple sources, where the data is more or less static over time (historical datasets), an ETL-based solution is highly recommended.
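To make the trade-off concrete, the following Python sketch contrasts the two access patterns. It is illustrative only: the schemas, table names, and connections are hypothetical, the federated query stands in for SQL you would submit to a virtualization layer (it is not an actual Cloud Pak for Data or DataStage API), and sqlite3 stands in for any SQL source.

```python
import sqlite3  # stands in for any SQL source; connections are illustrative

# --- Pattern 1: data virtualization (federated query) ---
# A virtualization layer exposes tables from several sources under one
# schema and pushes the join down to them, so only the small result set
# crosses the network. Schema and table names here are hypothetical.
FEDERATED_SQL = """
    SELECT c.customer_id, c.name, SUM(o.amount) AS spend_last_hour
    FROM   crm.customers AS c            -- physically in source system A
    JOIN   sales.orders  AS o            -- physically in source system B
           ON o.customer_id = c.customer_id
    WHERE  o.order_ts >= datetime('now', '-1 hour')
    GROUP  BY c.customer_id, c.name
"""

# --- Pattern 2: ETL (bulk movement of a large, mostly static dataset) ---
def nightly_etl(source: sqlite3.Connection,
                warehouse: sqlite3.Connection,
                batch_size: int = 50_000) -> None:
    """Extract the full order history, apply a simple transform,
    and load it into the warehouse in batches."""
    cur = source.execute(
        "SELECT customer_id, amount, order_ts FROM orders"
    )
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        # Transform step: normalize amounts to two decimal places.
        cleaned = [(cid, round(amt, 2), ts) for cid, amt, ts in rows]
        warehouse.executemany(
            "INSERT INTO fact_orders (customer_id, amount, order_ts) "
            "VALUES (?, ?, ?)",
            cleaned,
        )
    warehouse.commit()
```

The design point the sketch captures: the federated query moves only a handful of aggregated rows, which suits fresh, frequently changing data, while the ETL job pays the cost of moving the complete history once so that many downstream queries can be served from the warehouse.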