Hands-On Graph Neural Networks Using Python

By: Maxime Labonne

Overview of this book

Graph neural networks are a highly effective tool for analyzing data that can be represented as a graph, such as social networks, chemical compounds, or transportation networks. The past few years have seen an explosion in their use, with applications ranging from natural language processing and computer vision to recommendation systems and drug discovery.

Hands-On Graph Neural Networks Using Python begins with the fundamentals of graph theory and shows you how to create graph datasets from tabular data. As you advance, you’ll explore major graph neural network architectures and learn essential concepts such as graph convolution, self-attention, link prediction, and heterogeneous graphs. Finally, the book presents applications that solve real-life problems, enabling you to build a professional portfolio. The code is readily available online and can be easily adapted to other datasets and apps.

By the end of this book, you’ll have learned to create graph datasets, implement graph neural networks using Python and PyTorch Geometric, and apply them to real-world problems, including building and training models for node and graph classification, link prediction, and much more.
Table of Contents (25 chapters)

  • Part 1: Introduction to Graph Learning
  • Part 2: Fundamentals
  • Part 3: Advanced Techniques
  • Part 4: Applications

Chapter 18: Unlocking the Potential of Graph Neural Networks for Real-World Applications

Implementing a hierarchical self-attention network

In this section, we will implement a GNN model designed to handle heterogeneous graphs – the hierarchical self-attention network (HAN). This architecture was introduced by Liu et al. in 2021 [5]. HAN uses self-attention at two different levels:

  • Node-level attention to understand the importance of neighboring nodes within a given meta-path (as a GAT does in a homogeneous setting).
  • Semantic-level attention to learn the importance of each meta-path. This is the main feature of HAN, allowing us to select the best meta-paths for a given task automatically – for example, the meta-path game-user-game might be more relevant than game-dev-game in some tasks, such as predicting the number of players (a rough formulation of this weighting is sketched just after this list).

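To make this weighting concrete, here is one rough formulation (the notation is an assumption, not taken from the book): if $Z_{\Phi_p}$ denotes the node embeddings produced by node-level attention along meta-path $\Phi_p$, the model learns an importance score $w_{\Phi_p}$ for each meta-path, normalizes these scores with a softmax, and combines the per-meta-path embeddings with the resulting weights:

$$\beta_{\Phi_p} = \frac{\exp(w_{\Phi_p})}{\sum_{p'} \exp(w_{\Phi_{p'}})}, \qquad Z = \sum_{p} \beta_{\Phi_p}\, Z_{\Phi_p}$$

A meta-path with a larger $\beta_{\Phi_p}$ contributes more to the final embedding $Z$, which is how the most useful meta-paths end up being selected automatically.
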
In the following section, we will detail the three main components – node-level attention, semantic-level attention, and the prediction module. This architecture is illustrated in Figure 12.5.

Figure 12.5 – HAN’s architecture with its three main modules ...
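
A minimal sketch of what such a model can look like in practice is shown below, using PyTorch Geometric's HANConv layer, which bundles node-level and semantic-level attention in a single module. This is an illustrative assumption rather than the book's exact implementation: the 'game' node type, the hidden sizes, and the use of the AddMetaPaths transform are placeholders.

import torch
from torch_geometric.nn import HANConv


class HAN(torch.nn.Module):
    # Hypothetical HAN-based node classifier: HANConv applies node-level and
    # semantic-level attention over the node/edge types listed in `metadata`.
    def __init__(self, metadata, hidden_channels=64, num_classes=4, heads=8):
        super().__init__()
        # in_channels=-1 lets the layer infer each node type's input size lazily
        self.han = HANConv(in_channels=-1, out_channels=hidden_channels,
                           heads=heads, metadata=metadata)
        self.lin = torch.nn.Linear(hidden_channels, num_classes)

    def forward(self, x_dict, edge_index_dict):
        out = self.han(x_dict, edge_index_dict)  # dict: node type -> embeddings
        return self.lin(out['game'])             # classify the hypothetical 'game' nodes


# Hypothetical usage, assuming `data` is a HeteroData object whose meta-paths
# (for example, game-user-game) were added with torch_geometric.transforms.AddMetaPaths:
# model = HAN(data.metadata())
# logits = model(data.x_dict, data.edge_index_dict)

Because the layer reads the node and edge types from data.metadata(), adding or removing a meta-path changes only the graph preprocessing, not the model code.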