Using Kafka and Spark connectors
Neo4j officially supports Kafka and Spark connectors that can read and write graph data. The Kafka connector makes it easy to ingest data into Neo4j at scale, without needing to build custom client code, while the Spark connector simplifies reading and writing graph data using DataFrames. Let's take a look at the core features provided by these connectors:
- Kafka connector:
    - Provides the capability to ingest data into Neo4j using templatized Cypher queries
    - Can handle streaming data efficiently
    - Runs as a Kafka Connect plugin on existing Kafka installations
    - You can read more about this connector at https://neo4j.com/labs/kafka/4.1/kafka-connect/
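As a sketch of the templatized-Cypher ingestion mentioned above, a sink connector instance maps each topic to a Cypher template that is run for every incoming message, with the message payload bound to `event`. The property names below follow the 4.1 connector documentation; the connector name, topic name (`people`), connection details, and node properties are illustrative and should be adapted to your deployment:

```json
{
  "name": "neo4j-people-sink",
  "config": {
    "connector.class": "streams.kafka.connect.sink.Neo4jSinkConnector",
    "topics": "people",
    "neo4j.server.uri": "bolt://localhost:7687",
    "neo4j.authentication.basic.username": "neo4j",
    "neo4j.authentication.basic.password": "password",
    "neo4j.topic.cypher.people": "MERGE (p:Person {name: event.name}) SET p.age = event.age"
  }
}
```

With a configuration like this registered through the Kafka Connect REST API, each JSON message on the `people` topic is merged into the graph as a `Person` node, so no custom consumer code has to be written or maintained.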
- Spark connector: