Extending Power BI with Python and R

By: Luca Zavarella
Overview of this book

Python and R allow you to extend Power BI capabilities to simplify ingestion and transformation activities, enhance dashboards, and highlight insights. With this book, you'll be able to make your artifacts far more interesting and rich in insights using analytical languages. You'll start by learning how to configure your Power BI environment to use your Python and R scripts. The book then explores data ingestion and data transformation extensions, and advances to focus on data augmentation and data visualization. You'll understand how to import data from external sources and transform it using complex algorithms. The book helps you implement personal data de-identification methods such as pseudonymization, anonymization, and masking in Power BI. You'll be able to call external APIs to enrich your data much more quickly using Python and R. Later, you'll learn advanced Python and R techniques to perform in-depth analysis and extract valuable information using statistics and machine learning. You'll also understand the main statistical features of datasets by plotting multiple graphs in the process of creating a machine learning model. By the end of this book, you'll be able to enrich your Power BI data models and visualizations using complex algorithms in Python and R.
Table of Contents (22 chapters)

Section 1: Best Practices for Using R and Python in Power BI
Section 2: Data Ingestion and Transformation with R and Python in Power BI
Section 3: Data Enrichment with R and Python in Power BI
Section 4: Data Visualization with R in Power BI

Import large datasets with Python

In Chapter 3, Configuring Python with Power BI, we suggested that you install some of the most commonly used data management packages in your environment, including NumPy, pandas, and scikit-learn. The biggest limitation of these packages is that they cannot handle datasets larger than the RAM of the machine on which they run, nor can they scale out to more than one machine. To overcome this limitation, distributed systems based on Spark, which has become a dominant tool in the big data analytics landscape, are often used. However, moving to these systems forces developers to rewrite existing code against PySpark, an API created to work with Spark objects from Python. This process is generally seen as delaying project delivery and frustrating developers, who are far more confident with the libraries available for standard Python.
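
To make the cost of that rewrite concrete, the following sketch contrasts a simple pandas aggregation with its PySpark equivalent. The sales.csv file and the country and amount columns are hypothetical, and the PySpark version assumes access to a configured Spark environment.

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# pandas reads the whole file into RAM, so this approach fails when the
# dataset is larger than the memory of the single machine running it.
sales_pd = pd.read_csv("sales.csv")  # hypothetical file
total_by_country_pd = sales_pd.groupby("country")["amount"].sum()

# The same logic rewritten with the PySpark API: the file is read lazily
# and the aggregation can be distributed across the nodes of a Spark cluster.
spark = SparkSession.builder.appName("SalesExample").getOrCreate()
sales_sp = spark.read.csv("sales.csv", header=True, inferSchema=True)
total_by_country_sp = (
    sales_sp.groupBy("country")
            .agg(F.sum("amount").alias("total_amount"))
)
total_by_country_sp.show()

Even for this trivial aggregation, the distributed version requires a SparkSession, different reader options, and a different aggregation syntax, which is exactly the kind of rework described above.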

In response to the preceding issues, the community developed a...