Graph Data Processing with Cypher

By: Ravindranatha Anthapu

Overview of this book

While the Cypher declarative language for querying graph databases is easy to learn and understand, it can be very difficult to master. As graph databases become more mainstream, there is a dearth of content and guidance to help developers leverage database capabilities fully. This book fills that gap by describing graph traversal patterns in a simple and readable way. It provides a guided tour of Cypher, from understanding the syntax, building a graph data model, and loading data into graphs, to writing queries and profiling them for best performance. It introduces the APOC utilities that can augment Cypher to build complex queries, and visualization tools such as Bloom that help you get the most out of the graph when presenting results to end users. By the end of this book, you'll be a seasoned Cypher query developer with a good understanding of the query language and how to use it for the best performance.
Table of Contents (18 chapters)

Part 1: Cypher Introduction
Part 2: Working with Cypher
Part 3: Advanced Cypher Concepts

Using Kafka and Spark connectors

Neo4j has official Kafka and Spark connectors that can read and write data to graphs. The Kafka connector makes it easy to ingest data into Neo4j at scale without building custom client code, and the Spark connector simplifies reading and writing graph data using DataFrames. Let's take a look at the core features these connectors provide:

  • Kafka connector:
    • Provides the capability to ingest data into Neo4j using templatized Cypher queries
    • Can handle streaming data efficiently
    • Runs as a plugin on existing Kafka installations
    • You can read more about this connector at https://neo4j.com/labs/kafka/4.1/kafka-connect/
  • Spark connector:
    • Makes it easy to read nodes and relationships into a DataFrame
    • Makes it easy to write data from DataFrames into Neo4j
    • Supports Python or R as the language of choice in Spark
    • Makes it easier to leverage all the capabilities of Spark to massage the data before...
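To illustrate the templatized Cypher ingestion mentioned above, here is a minimal sketch of a Kafka Connect sink configuration for the connector. The topic name (`people`), the connection details, and the `Person` node shape are illustrative assumptions; check the connector documentation linked above for the exact property names supported by your version.

```json
{
  "name": "neo4j-person-sink",
  "config": {
    "connector.class": "streams.kafka.connect.sink.Neo4jSinkConnector",
    "topics": "people",
    "neo4j.server.uri": "bolt://localhost:7687",
    "neo4j.authentication.basic.username": "neo4j",
    "neo4j.authentication.basic.password": "password",
    "neo4j.topic.cypher.people": "MERGE (p:Person {id: event.id}) SET p.name = event.name"
  }
}
```

The `neo4j.topic.cypher.<topic>` property maps each message arriving on that topic to a Cypher statement, with the message payload exposed to the query as the `event` variable, so ingestion logic lives in the configuration rather than in custom consumer code.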
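As a sketch of the DataFrame workflow the Spark connector enables, the following PySpark snippet reads `Person` nodes from Neo4j and writes a DataFrame back as nodes. The connection URL, credentials, connector version, and the `Person` label and `name` key are illustrative assumptions; the snippet also requires the Neo4j Spark connector package on the classpath and a running Neo4j instance, so treat it as a template rather than a standalone program.

```python
from pyspark.sql import SparkSession

# Illustrative values only: adjust the URL, credentials, and connector
# version to match your environment.
spark = (
    SparkSession.builder
    .appName("neo4j-spark-example")
    .config("spark.jars.packages",
            "org.neo4j:neo4j-connector-apache-spark_2.12:5.0.0_for_spark_3")
    .getOrCreate()
)

# Read all Person nodes into a DataFrame
people = (
    spark.read.format("org.neo4j.spark.DataSource")
    .option("url", "bolt://localhost:7687")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "password")
    .option("labels", "Person")
    .load()
)

# Write the DataFrame back as Person nodes, merging on the name property
(
    people.write.format("org.neo4j.spark.DataSource")
    .mode("Overwrite")
    .option("url", "bolt://localhost:7687")
    .option("authentication.basic.username", "neo4j")
    .option("authentication.basic.password", "password")
    .option("labels", ":Person")
    .option("node.keys", "name")
    .save()
)
```

Between the read and the write you can apply any ordinary Spark transformations (filters, joins, aggregations) to the DataFrame, which is what makes the connector convenient for massaging data before it lands in the graph.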