Migrating from RefineDocumentsChain
RefineDocumentsChain implements a strategy for analyzing long texts. The strategy is as follows:
- Split a text into smaller documents;
- Apply a process to the first document;
- Refine or update the result based on the next document;
- Repeat through the sequence of documents until finished.
A common process applied in this context is summarization, in which a running summary is modified as we proceed through chunks of a long text. This is especially useful for texts that are large compared to the context window of a given LLM.
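In pseudocode, the strategy amounts to a simple loop. This is only an illustrative sketch: `split`, `summarize`, and `refine` are hypothetical stand-ins for a text splitter and two LLM calls:

def refine_strategy(text: str) -> str:
    # 1. Split the text into smaller documents.
    chunks = split(text)
    # 2. Apply a process (e.g., summarization) to the first document.
    result = summarize(chunks[0])
    # 3.-4. Refine the result with each subsequent document until done.
    for chunk in chunks[1:]:
        result = refine(result, chunk)
    return result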
A LangGraph implementation brings a number of advantages to this problem:
- RefineDocumentsChain refines the summary via a `for` loop inside the class, whereas the LangGraph implementation lets you step through the execution to monitor or otherwise steer it if needed.
- The LangGraph implementation supports streaming of both execution steps and individual tokens.
- Because it is assembled from modular components, it is also straightforward to extend or modify (e.g., to incorporate tool calling or other behavior).
Below we will go through both `RefineDocumentsChain` and a corresponding LangGraph implementation on a simple example for illustrative purposes.
Let's first load a chat model:
pip install -qU "langchain[google-genai]"
import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")
from langchain.chat_models import init_chat_model
llm = init_chat_model("gemini-2.0-flash", model_provider="google_genai")
Example
Let's go through an example where we summarize a sequence of documents. We first generate some simple documents for illustrative purposes:
from langchain_core.documents import Document

documents = [
    Document(page_content="Apples are red", metadata={"title": "apple_book"}),
    Document(page_content="Blueberries are blue", metadata={"title": "blueberry_book"}),
    Document(page_content="Bananas are yellow", metadata={"title": "banana_book"}),
]
API Reference: Document
Legacy
Below we show an implementation with `RefineDocumentsChain`. We define the prompt templates for the initial summarization and successive refinements, instantiate separate LLMChain objects for these two purposes, and instantiate `RefineDocumentsChain` with these components.
from langchain.chains import LLMChain, RefineDocumentsChain
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
    input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
# The prompt here should take as an input variable the
# `document_variable_name`
summarize_prompt = ChatPromptTemplate(
    [
        ("human", "Write a concise summary of the following: {context}"),
    ]
)
initial_llm_chain = LLMChain(llm=llm, prompt=summarize_prompt)
initial_response_name = "existing_answer"
# The prompt here should take as an input variable the
# `document_variable_name` as well as `initial_response_name`
refine_template = """
Produce a final summary.
Existing summary up to this point:
{existing_answer}
New context:
------------
{context}
------------
Given the new context, refine the original summary.
"""
refine_prompt = ChatPromptTemplate([("human", refine_template)])
refine_llm_chain = LLMChain(llm=llm, prompt=refine_prompt)
chain = RefineDocumentsChain(
    initial_llm_chain=initial_llm_chain,
    refine_llm_chain=refine_llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name,
    initial_response_name=initial_response_name,
)
We can now invoke our chain:
result = chain.invoke(documents)
result["output_text"]
'Apples are typically red in color, blueberries are blue, and bananas are yellow.'
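If the intermediate summaries are also of interest, `RefineDocumentsChain` supports a `return_intermediate_steps` flag; the per-document results are then surfaced under an `intermediate_steps` key. A sketch, reusing the components defined above:

chain_with_steps = RefineDocumentsChain(
    initial_llm_chain=initial_llm_chain,
    refine_llm_chain=refine_llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name,
    initial_response_name=initial_response_name,
    # Also surface each intermediate summary in the output.
    return_intermediate_steps=True,
)
result = chain_with_steps.invoke(documents)
result["intermediate_steps"]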
The LangSmith trace is composed of three LLM calls: one for the initial summary, and two more that update that summary. The process completes when we update the summary with the content of the last document.
LangGraph
Below we show a LangGraph implementation of this process:
- We use the same two templates as before.
- We generate a simple chain for the initial summary that plucks out the first document, formats it into a prompt, and runs inference with our LLM.
- We generate a second `refine_summary_chain` that operates on each successive document, refining the initial summary.
We will need to install langgraph:
pip install -qU langgraph
from typing import List, Literal, TypedDict

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig
from langgraph.graph import END, START, StateGraph

# We reuse the chat model loaded above.
# Initial summary
summarize_prompt = ChatPromptTemplate(
    [
        ("human", "Write a concise summary of the following: {context}"),
    ]
)
initial_summary_chain = summarize_prompt | llm | StrOutputParser()
# Refining the summary with new docs
refine_template = """
Produce a final summary.
Existing summary up to this point:
{existing_answer}
New context:
------------
{context}
------------
Given the new context, refine the original summary.
"""
refine_prompt = ChatPromptTemplate([("human", refine_template)])
refine_summary_chain = refine_prompt | llm | StrOutputParser()
# For LangGraph, we will define the state of the graph to hold the
# document contents, an index of the current document, and the
# running summary.
class State(TypedDict):
    contents: List[str]
    index: int
    summary: str
# We define functions for each node, including a node that generates
# the initial summary:
async def generate_initial_summary(state: State, config: RunnableConfig):
    summary = await initial_summary_chain.ainvoke(
        state["contents"][0],
        config,
    )
    return {"summary": summary, "index": 1}
# And a node that refines the summary based on the next document
async def refine_summary(state: State, config: RunnableConfig):
    content = state["contents"][state["index"]]
    summary = await refine_summary_chain.ainvoke(
        {"existing_answer": state["summary"], "context": content},
        config,
    )
    return {"summary": summary, "index": state["index"] + 1}
# Here we implement logic to either exit the application or refine
# the summary.
def should_refine(state: State) -> Literal["refine_summary", END]:
    if state["index"] >= len(state["contents"]):
        return END
    else:
        return "refine_summary"
graph = StateGraph(State)
graph.add_node("generate_initial_summary", generate_initial_summary)
graph.add_node("refine_summary", refine_summary)
graph.add_edge(START, "generate_initial_summary")
graph.add_conditional_edges("generate_initial_summary", should_refine)
graph.add_conditional_edges("refine_summary", should_refine)
app = graph.compile()
from IPython.display import Image
Image(app.get_graph().draw_mermaid_png())
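If you are not working in a notebook, a minimal alternative is to print the Mermaid source of the same graph instead of rendering it as an image:

print(app.get_graph().draw_mermaid())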
We can step through the execution as follows, printing out the summary as it is refined:
async for step in app.astream(
    {"contents": [doc.page_content for doc in documents]},
    stream_mode="values",
):
    if summary := step.get("summary"):
        print(summary)
Apples are typically red in color.
Apples are typically red in color, while blueberries are blue.
Apples are typically red in color, blueberries are blue, and bananas are yellow.
In the LangSmith trace we again recover three LLM calls, performing the same functions as before.
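If you only need the final result, the compiled graph can also be invoked directly; the finished summary is returned under the state's `summary` key:

final_state = await app.ainvoke(
    {"contents": [doc.page_content for doc in documents]}
)
final_state["summary"]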
Note that we can stream tokens from the application, including from intermediate steps:
async for event in app.astream_events(
    {"contents": [doc.page_content for doc in documents]}, version="v2"
):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            print(content, end="|")
    elif kind == "on_chat_model_end":
        print("\n\n")
Ap|ples| are| characterized| by| their| red| color|.|
Ap|ples| are| characterized| by| their| red| color|,| while| blueberries| are| known| for| their| blue| hue|.|
Ap|ples| are| characterized| by| their| red| color|,| blueberries| are| known| for| their| blue| hue|,| and| bananas| are| recognized| for| their| yellow| color|.|
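As noted above, the LangGraph implementation also lets you pause execution to monitor or steer the refinement. Below is a minimal sketch, assuming LangGraph's in-memory checkpointer: compiling with `interrupt_before=["refine_summary"]` pauses the run before each refinement, and invoking with `None` resumes it from the saved checkpoint.

from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer so the run can be paused and resumed.
checkpointed_app = graph.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["refine_summary"],
)
config = {"configurable": {"thread_id": "1"}}

# Generates the initial summary, then pauses before the first refinement.
state = await checkpointed_app.ainvoke(
    {"contents": [doc.page_content for doc in documents]}, config
)
print(state["summary"])  # inspect the running summary before continuing

# Passing None resumes execution from the checkpoint.
state = await checkpointed_app.ainvoke(None, config)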
Next steps
See this tutorial for more LLM-based summarization strategies.
Check out the LangGraph documentation for detail on building with LangGraph.