
Data Democratization with Domo

By: Jeff Burtenshaw

Overview of this book

Domo is a power-packed business intelligence (BI) platform that empowers organizations to track, analyze, and activate data in record time at cloud scale and performance. Data Democratization with Domo begins with an overview of the Domo ecosystem. You'll learn how to get data into the cloud with Domo data connectors and Workbench; profile datasets; use Magic ETL to transform data; work with in-memory data sculpting tools (Data Views and Beast Modes); create, edit, and link card visualizations; and create card drill paths using Domo Analyzer.

Next, you'll discover options to distribute content with real-time updates using Domo Embed and digital wallboards. As you advance, you'll understand how to use alerts and webhooks to drive automated actions. You'll also build and deploy a custom app to the Domo Appstore and find out how to code Python apps, use Jupyter Notebooks, and integrate custom R models. Furthermore, you'll learn how to use AutoML, backed by SageMaker, to automatically evaluate dozens of models and produce the best-fit predictive model, as well as how to extend Domo with Python and the Domo Command Line Interface tool. Finally, you'll learn how to govern and secure the entire Domo platform.

By the end of this book, you'll have gained the skills you need to become a successful Domo master.
Table of Contents (26 chapters)
Section 1: Data Pipelines
Section 2: Presenting the Message
Section 3: Communicating to Win
Section 4: Extending
Section 5: Governing

Section 1: Data Pipelines

The hardest part made easy ... getting the data.

Getting to the data, no matter where it lives, is the hardest and most time-consuming task in analytics work; sculpting that data into a format that yields insights is the second. In this section, you will learn how data is stored, sculpted, and transformed within the Domo ecosystem, and you will see how Domo makes each intake path tactically easier. Sculpting your data will not require you to become a SQL guru. Imagine a wizard-driven solution that acquires data from sources, automatically creates the schema, and provisions the data storage. Furthermore, imagine an Extract, Transform, and Load (ETL) tool that is schema-aware and provides a no-code, drag-and-drop experience so intuitive that building data transformation pipelines feels almost like magic.

After completing Section 1, you will be able to intake (acquire, store, and sculpt) data in...