Building Natural Language and LLM Pipelines

By: Laura Funderburk

Overview of this book

Modern LLM applications often break in production due to brittle pipelines, loose tool definitions, and noisy context. This book shows you how to build production-ready, context-aware systems using Haystack and LangGraph. You’ll learn to design deterministic pipelines with strict tool contracts and deploy them as microservices. Through structured context engineering, you’ll orchestrate reliable agent workflows and move beyond simple prompt-based interactions.

You’ll start by understanding LLM behavior (tokens, embeddings, and transformer models) and see how prompt engineering has evolved into a full context engineering discipline. Then, you’ll build retrieval-augmented generation (RAG) pipelines with retrievers, rankers, and custom components using Haystack’s graph-based architecture. You’ll also create knowledge graphs, synthesize unstructured data, and evaluate system behavior using Ragas and Weights & Biases. In LangGraph, you’ll orchestrate agents with supervisor-worker patterns, typed state machines, retries, fallbacks, and safety guardrails.

By the end of the book, you’ll have the skills to design scalable, testable LLM pipelines and multi-agent systems that remain robust as the AI ecosystem evolves.
Table of Contents (18 chapters)
Part 1: The Foundation of Reliable AI
Part 2: Building The Tool Layer with Haystack
Part 3: Deployment and Agentic Orchestration
Part 4: The Future of Agentic AI
Other Books You May Enjoy
Index

Advanced custom component feature implementation

In the previous sections, we learned the basics of custom component definition, including the run() method and the use of the warm_up() method for managing heavy resources. We will now apply what we’ve learned to an advanced case: building a series of custom components to create a knowledge graph from documents and then using that graph to generate question-answer pairs for evaluating a RAG system.
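
As a concrete illustration (a minimal sketch, not taken from the book’s own code), a Haystack custom component of this kind could look like the following. The regex stands in for a real relation-extraction model, and the heavy resource is created in warm_up() rather than __init__():

import re
from typing import List, Tuple

from haystack import Document, component


@component
class NaiveTripleExtractor:
    """Sketch: pulls (subject, verb, object) triples out of documents.
    The regex is only a placeholder for a real relation-extraction model."""

    def __init__(self):
        # Keep __init__ cheap; the expensive resource is created in warm_up().
        self._pattern = None

    def warm_up(self):
        # Pipelines call warm_up() once before the first run(); a real
        # component would load its model or open a client here.
        if self._pattern is None:
            self._pattern = re.compile(r"(\w+) (\w+s) (\w+)")

    @component.output_types(triples=List[Tuple[str, str, str]])
    def run(self, documents: List[Document]):
        triples = []
        for doc in documents:
            triples.extend(self._pattern.findall(doc.content or ""))
        return {"triples": triples}

Run standalone, extractor.warm_up() followed by extractor.run(documents=[Document(content="Haystack powers pipelines")]) yields {"triples": [("Haystack", "powers", "pipelines")]}; in a pipeline, later components would turn such triples into graph nodes and edges.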

Before diving into the code, it is crucial to understand why we are using a knowledge graph as an intermediate step. While it might seem simpler to generate questions directly from text chunks, this approach has significant limitations. Standard RAG systems that rely on vector search over isolated text chunks often struggle with complex, multi-hop questions (Su et al., 2020; Neo4j, 2025), which are queries that require connecting information scattered across multiple documents or contexts.
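
To see what "multi-hop" means in practice, consider a deliberately tiny, made-up example (not from the book). The two facts below live in separate documents, so a vector search over isolated chunks may surface only one of them, whereas even a minimal graph lets us chain both edges to answer "Which country is the element discovered by Marie Curie named after?":

from collections import defaultdict

# Two facts scattered across two different documents.
facts = [
    ("doc_a", "Marie Curie", "discovered", "polonium"),
    ("doc_b", "polonium", "named_after", "Poland"),
]

# Tiny adjacency list: subject -> [(relation, object), ...]
graph = defaultdict(list)
for _, subj, rel, obj in facts:
    graph[subj].append((rel, obj))

# Hop 1: what did Marie Curie discover?
element = next(obj for rel, obj in graph["Marie Curie"] if rel == "discovered")
# Hop 2: what is that element named after?
country = next(obj for rel, obj in graph[element] if rel == "named_after")
print(country)  # -> Poland

No single chunk containing only the first fact can answer the question on its own; the graph makes the connection explicit, which is the property the knowledge graph intermediate step is meant to capture.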

Knowledge graphs excel where simple...
