
Causal Inference and Discovery in Python

By : Aleksander Molak
4.7 (9)

Overview of this book

Causal methods present unique challenges compared to traditional machine learning and statistics. Learning causality can be challenging, but it offers distinct advantages that elude a purely statistical mindset. Causal Inference and Discovery in Python helps you unlock the potential of causality. You’ll start with the basic motivations behind causal thinking and a comprehensive introduction to Pearlian causal concepts, such as structural causal models, interventions, counterfactuals, and more. Each concept is accompanied by a theoretical explanation and a set of practical exercises with Python code. Next, you’ll dive into the world of causal effect estimation, consistently progressing towards modern machine learning methods. Step by step, you’ll discover the Python causal ecosystem and harness the power of cutting-edge algorithms. You’ll further explore the mechanics of how “causes leave traces” and compare the main families of causal discovery algorithms. The final chapter gives you a broad outlook into the future of causal AI, where we examine challenges and opportunities and provide you with a comprehensive list of resources to learn more. By the end of this book, you will be able to build your own models for causal inference and discovery using statistical and machine learning techniques, as well as perform basic project assessment.
Table of Contents (21 chapters)
Part 1: Causality – an Introduction
Part 2: Causal Inference
Part 3: Causal Discovery

Graphs and distributions and how to map between them

In this section, we will focus on the mappings between the statistical and graphical properties of a system.

To be more precise, we’ll be interested in understanding how to translate between graphical and statistical independencies. In a perfect world, we’d like to be able to do it in both directions: from graph independence to statistical independence and the other way around.

It turns out that this is possible under certain assumptions.
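To make the graph-to-statistics direction concrete, here is a minimal simulation sketch (the chain structure, coefficients, and sample size are illustrative choices, not from the book): in a chain X → Z → Y, the graph tells us X and Y should be dependent marginally but independent once we condition on Z, and we can check both claims empirically with correlations.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Chain graph X -> Z -> Y: the graph implies X and Y are
# marginally dependent, but independent given Z.
X = rng.normal(size=n)
Z = 2.0 * X + rng.normal(size=n)
Y = 3.0 * Z + rng.normal(size=n)

# Marginal association between X and Y
marginal_corr = np.corrcoef(X, Y)[0, 1]

def residualize(v, z):
    """Remove the linear effect of z from v (simple OLS residual)."""
    slope = np.cov(v, z)[0, 1] / np.var(z)
    return v - slope * z

# Partial correlation of X and Y given Z: correlate the residuals
# left after regressing each variable on Z.
partial_corr = np.corrcoef(residualize(X, Z), residualize(Y, Z))[0, 1]

print(f"corr(X, Y)     = {marginal_corr:.3f}")  # strong dependence
print(f"corr(X, Y | Z) = {partial_corr:.3f}")   # close to zero
```

The strong marginal correlation and near-zero partial correlation mirror exactly what d-separation reads off the graph, which is the mapping this section is about.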

The key concept in this chapter is that of independence. Let’s start by reviewing what it means.

How to talk about independence

Generally speaking, we say that two variables, X and Y, are independent when our knowledge about X does not change our knowledge about Y (and vice versa). In terms of probability distributions, we can express it in the following way:

P(Y) = P(Y | X)

P(X) = P(X | Y)
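These two equalities can be verified empirically. The following sketch (the coin-flip setup and the 90% copy rate are illustrative assumptions, not from the book) estimates P(Y = 1) and P(Y = 1 | X = 1) from simulated data for an independent pair, then contrasts it with a dependent pair where the conditional and marginal probabilities clearly diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent fair coin flips
X = rng.integers(0, 2, size=n)
Y = rng.integers(0, 2, size=n)

p_y = Y.mean()                    # marginal P(Y = 1)
p_y_given_x1 = Y[X == 1].mean()   # conditional P(Y = 1 | X = 1)
print(f"P(Y=1) = {p_y:.3f},  P(Y=1 | X=1) = {p_y_given_x1:.3f}")
# For independent variables the two estimates should match
# (up to sampling noise).

# A dependent variable: W copies X 90% of the time
W = np.where(rng.random(n) < 0.9, X, 1 - X)
p_w = W.mean()                    # marginal P(W = 1), about 0.5
p_w_given_x1 = W[X == 1].mean()   # about 0.9 -- conditioning on X matters
print(f"P(W=1) = {p_w:.3f},  P(W=1 | X=1) = {p_w_given_x1:.3f}")
```

When the conditional distribution shifts after learning X, as it does for W, independence fails; when it does not shift, as for Y, the equalities above hold.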

In other words: the marginal probability of Y is the same as the conditional probability of Y given...