Generative AI on Google Cloud with LangChain
Callbacks can be used for logging, token counting, and so on. You can combine multiple callbacks, and you can either define your own callback or use an existing one. Creating your own callback is straightforward: inherit from the langchain_core.callbacks.BaseCallbackHandler class and implement your logic for specific events by overriding methods such as on_llm_new_token, on_llm_start, and so on. Take a look at BaseCallbackHandler's source code for a full list of such events!
If you need token counting, you can use a predefined VertexAICallbackHandler. You can pass it either when you instantiate your LLM (and in that case, it will count all tokens consumed by any requests), or you can pass it through your chain and count only tokens consumed by this execution:
from langchain_google_vertexai.callbacks import (
    VertexAICallbackHandler)

handler = VertexAICallbackHandler()
config = {
    "callbacks": [handler]
}
# Assuming `llm` is an already instantiated model (e.g., ChatVertexAI):
result = llm.invoke("Tell me about LangChain.", config=config)