
Summarize Text

Info

This tutorial demonstrates text summarization using built-in chains and LangGraph.

A previous version of this page showcased the legacy chains StuffDocumentsChain, MapReduceDocumentsChain, and RefineDocumentsChain. See here for information on using those abstractions and a comparison with the approaches demonstrated in this tutorial.

Suppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content.

LLMs are a great tool for this given their proficiency in understanding and synthesizing text.

In the context of retrieval-augmented generation, summarizing text can help distill the information in a large number of retrieved documents to provide context for an LLM.

In this walkthrough we will go over how to summarize content from multiple documents using LLMs.


Concepts

Concepts we will cover are:

  • Using language models.

  • Using document loaders, specifically the WebBaseLoader to load content from an HTML webpage.

  • Two ways to summarize or otherwise combine documents.

    1. Stuff, which simply concatenates documents into a prompt;
    2. Map-reduce, for larger sets of documents. This splits documents into batches, summarizes those, and then summarizes the summaries.

Shorter, more targeted guides on these strategies and others, including iterative refinement, can be found in the how-to guides.

Setup

Jupyter Notebook

This guide (and most of the other guides in the documentation) uses Jupyter notebooks and assumes the reader does as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, the API is down, etc.), and going through guides in an interactive environment is a great way to better understand them.

This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See here for instructions on how to install.

Installation

To install LangChain run:

pip install langchain

For more details, see our Installation guide.

LangSmith

Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.

After you sign up at the link above, make sure to set your environment variables to start logging traces:

export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="..."

Or, if in a notebook, you can set them with:

import getpass
import os

os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = getpass.getpass()

Overview

A central question for building a summarizer is how to pass your documents into the LLM's context window. Two common approaches for this are:

  1. Stuff: Simply "stuff" all your documents into a single prompt. This is the simplest approach (see here for more on the create_stuff_documents_chain constructor, which is used for this method).

  2. Map-reduce: Summarize each document on its own in a "map" step, then "reduce" the summaries into a final summary (see here for more on the MapReduceDocumentsChain, which is used for this method).

Note that map-reduce is especially effective when understanding of a sub-document does not rely on preceding context, for example, when summarizing a corpus of many shorter documents. In other cases, such as summarizing a novel or body of text with an inherent sequence, iterative refinement may be more effective.


Setup

First set environment variables and install packages:

%pip install --upgrade --quiet tiktoken langchain langgraph beautifulsoup4 langchain-community

# Set env var OPENAI_API_KEY or load from a .env file
# import dotenv

# dotenv.load_dotenv()
import os

os.environ["LANGSMITH_TRACING"] = "true"

First we load in our documents. We will use WebBaseLoader to load a blog post:

from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()
API Reference: WebBaseLoader

Next, let's select an LLM:

pip install -qU "langchain[openai]"
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gpt-4o-mini", model_provider="openai")

Stuff: summarize in a single LLM call

We can use create_stuff_documents_chain, especially if using larger context window models such as:

  • 128k token OpenAI gpt-4o
  • 200k token Anthropic claude-3-5-sonnet-20240620

The chain will take a list of documents, insert them all into a single prompt, and pass that prompt to an LLM:

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# Define prompt
prompt = ChatPromptTemplate.from_messages(
    [("system", "Write a concise summary of the following:\n\n{context}")]
)

# Instantiate chain
chain = create_stuff_documents_chain(llm, prompt)

# Invoke chain
result = chain.invoke({"context": docs})
print(result)
The article "LLM Powered Autonomous Agents" by Lilian Weng discusses the development and capabilities of autonomous agents powered by large language models (LLMs). It outlines a system architecture that includes three main components: Planning, Memory, and Tool Use. 

1. **Planning** involves task decomposition, where complex tasks are broken down into manageable subgoals, and self-reflection, allowing agents to learn from past actions to improve future performance. Techniques like Chain of Thought (CoT) and Tree of Thoughts (ToT) are highlighted for enhancing reasoning and planning.

2. **Memory** is categorized into short-term and long-term memory, with mechanisms for fast retrieval using Maximum Inner Product Search (MIPS) algorithms. This allows agents to retain and recall information effectively.

3. **Tool Use** enables agents to interact with external APIs and tools, enhancing their capabilities beyond the limitations of their training data. Examples include MRKL systems and frameworks like HuggingGPT, which facilitate task planning and execution.

The article also addresses challenges such as finite context length, difficulties in long-term planning, and the reliability of natural language interfaces. It concludes with case studies demonstrating the practical applications of these concepts in scientific discovery and interactive simulations. Overall, the article emphasizes the potential of LLMs as powerful problem solvers in autonomous agent systems.

Streaming

Note that we can also stream the result token-by-token:

for token in chain.stream({"context": docs}):
    print(token, end="|")
|The| article| "|LL|M| Powered| Autonomous| Agents|"| by| Lil|ian| W|eng| discusses| the| development| and| capabilities| of| autonomous| agents| powered| by| large| language| models| (|LL|Ms|).| It| outlines| a| system| architecture| that| includes| three| main| components|:| Planning|,| Memory|,| and| Tool| Use|.| 

|1|.| **|Planning|**| involves| task| decomposition|,| where| complex| tasks| are| broken| down| into| manageable| sub|go|als|,| and| self|-ref|lection|,| allowing| agents| to| learn| from| past| actions| to| improve| future| performance|.| Techniques| like| Chain| of| Thought| (|Co|T|)| and| Tree| of| Thoughts| (|To|T|)| are| highlighted| for| enhancing| reasoning| and| planning|.

|2|.| **|Memory|**| is| categorized| into| short|-term| and| long|-term| memory|,| with| mechanisms| for| fast| retrieval| using| Maximum| Inner| Product| Search| (|M|IPS|)| algorithms|.| This| allows| agents| to| retain| and| recall| information| effectively|.

|3|.| **|Tool| Use|**| emphasizes| the| integration| of| external| APIs| and| tools| to| extend| the| capabilities| of| L|LM|s|,| enabling| them| to| perform| tasks| beyond| their| inherent| limitations|.| Examples| include| MR|KL| systems| and| frameworks| like| Hug|ging|GPT|,| which| facilitate| task| planning| and| execution|.

|The| article| also| addresses| challenges| such| as| finite| context| length|,| difficulties| in| long|-term| planning|,| and| the| reliability| of| natural| language| interfaces|.| It| concludes| with| case| studies| demonstrating| the| practical| applications| of| L|LM|-powered| agents| in| scientific| discovery| and| interactive| simulations|.| Overall|,| the| piece| illustrates| the| potential| of| L|LM|s| as| general| problem| sol|vers| and| their| evolving| role| in| autonomous| systems|.||

Go deeper

  • You can easily customize the prompt.
  • You can easily try different LLMs (e.g., Claude) via the llm parameter. A brief sketch of both follows.
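
For instance, here is a minimal sketch of both customizations together. The prompt wording and the Anthropic model are illustrative assumptions rather than fixed choices; any chat model supported by init_chat_model can be substituted:

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate

# Illustrative: a customized prompt and an alternative LLM
# (requires the langchain[anthropic] extra and an Anthropic API key).
custom_prompt = ChatPromptTemplate.from_messages(
    [("system", "Summarize the following in three bullet points:\n\n{context}")]
)
claude = init_chat_model("claude-3-5-sonnet-20240620", model_provider="anthropic")

custom_chain = create_stuff_documents_chain(claude, custom_prompt)
print(custom_chain.invoke({"context": docs}))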

Map-Reduce: summarize long texts via parallelization

Let's unpack the map-reduce approach. For this, we'll first map each document to an individual summary using an LLM. Then we'll reduce, or consolidate, those summaries into a single global summary.

Note that the map step is typically parallelized over the input documents.

LangGraph, which is built on top of langchain-core, supports map-reduce workflows and is well-suited to this problem:

  • LangGraph allows for individual steps (such as successive summarizations) to be streamed, allowing for greater control of execution;
  • LangGraph's checkpointing supports error recovery, extending with human-in-the-loop workflows, and easier incorporation into conversational applications (a short checkpointing sketch appears after the graph is compiled below);
  • The LangGraph implementation is straightforward to modify and extend, as we will see below.

Map

First, let's define the prompt associated with the map step. We can use the same summarization prompt as in the stuff approach above:

from langchain_core.prompts import ChatPromptTemplate

map_prompt = ChatPromptTemplate.from_messages(
    [("system", "Write a concise summary of the following:\n\n{context}")]
)
API Reference: ChatPromptTemplate

We can also use the Prompt Hub to store and fetch prompts.

This will work with your LangSmith API key.

See, for example, the map prompt here.

from langchain import hub

map_prompt = hub.pull("rlm/map-prompt")
API Reference: hub

Reduce

We also define a prompt that takes the document mapping results and reduces them into a single output:

# Also available via the hub: `hub.pull("rlm/reduce-prompt")`
reduce_template = """
The following is a set of summaries:
{docs}
Take these and distill it into a final, consolidated summary
of the main themes.
"""

reduce_prompt = ChatPromptTemplate([("human", reduce_template)])

Orchestration via LangGraph

Below we implement a simple application that maps the summarization step onto a list of documents, then reduces them using the above prompt.

Map-reduce flows are particularly useful when texts are long compared to the context window of an LLM. For long texts, we need a mechanism that ensures the context to be summarized in the reduce step does not exceed a model's context window size. Here we implement a recursive "collapsing" of the summaries: the inputs are partitioned based on a token limit, and summaries are generated of the partitions. This step is repeated until the total length of the summaries is within a desired limit, allowing for the summarization of arbitrary-length text.

First we chunk the blog post into smaller "sub-documents" to be mapped:

from langchain_text_splitters import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=1000, chunk_overlap=0
)
split_docs = text_splitter.split_documents(docs)
print(f"Generated {len(split_docs)} documents.")
Created a chunk of size 1003, which is longer than the specified 1000
Generated 14 documents.

Next, we define our graph. Note that we define an artificially low maximum token length of 1,000 tokens to illustrate the "collapsing" step.

import operator
from typing import Annotated, List, Literal, TypedDict

from langchain.chains.combine_documents.reduce import (
acollapse_docs,
split_list_of_docs,
)
from langchain_core.documents import Document
from langgraph.constants import Send
from langgraph.graph import END, START, StateGraph

token_max = 1000


def length_function(documents: List[Document]) -> int:
    """Get number of tokens for input contents."""
    return sum(llm.get_num_tokens(doc.page_content) for doc in documents)


# This will be the overall state of the main graph.
# It will contain the input document contents, corresponding
# summaries, and a final summary.
class OverallState(TypedDict):
    # Notice here we use the operator.add
    # This is because we want to combine all the summaries we generate
    # from individual nodes back into one list - this is essentially
    # the "reduce" part
    contents: List[str]
    summaries: Annotated[list, operator.add]
    collapsed_summaries: List[Document]
    final_summary: str


# This will be the state of the node that we will "map" all
# documents to in order to generate summaries
class SummaryState(TypedDict):
    content: str


# Here we generate a summary, given a document
async def generate_summary(state: SummaryState):
    prompt = map_prompt.invoke(state["content"])
    response = await llm.ainvoke(prompt)
    return {"summaries": [response.content]}


# Here we define the logic to map out over the documents
# We will use this as an edge in the graph
def map_summaries(state: OverallState):
    # We will return a list of `Send` objects
    # Each `Send` object consists of the name of a node in the graph
    # as well as the state to send to that node
    return [
        Send("generate_summary", {"content": content}) for content in state["contents"]
    ]


def collect_summaries(state: OverallState):
    return {
        "collapsed_summaries": [Document(summary) for summary in state["summaries"]]
    }


async def _reduce(input: dict) -> str:
    prompt = reduce_prompt.invoke(input)
    response = await llm.ainvoke(prompt)
    return response.content


# Add node to collapse summaries
async def collapse_summaries(state: OverallState):
    doc_lists = split_list_of_docs(
        state["collapsed_summaries"], length_function, token_max
    )
    results = []
    for doc_list in doc_lists:
        results.append(await acollapse_docs(doc_list, _reduce))

    return {"collapsed_summaries": results}


# This represents a conditional edge in the graph that determines
# if we should collapse the summaries or not
def should_collapse(
    state: OverallState,
) -> Literal["collapse_summaries", "generate_final_summary"]:
    num_tokens = length_function(state["collapsed_summaries"])
    if num_tokens > token_max:
        return "collapse_summaries"
    else:
        return "generate_final_summary"


# Here we will generate the final summary
async def generate_final_summary(state: OverallState):
    response = await _reduce(state["collapsed_summaries"])
    return {"final_summary": response}


# Construct the graph
# Nodes:
graph = StateGraph(OverallState)
graph.add_node("generate_summary", generate_summary) # same as before
graph.add_node("collect_summaries", collect_summaries)
graph.add_node("collapse_summaries", collapse_summaries)
graph.add_node("generate_final_summary", generate_final_summary)

# Edges:
graph.add_conditional_edges(START, map_summaries, ["generate_summary"])
graph.add_edge("generate_summary", "collect_summaries")
graph.add_conditional_edges("collect_summaries", should_collapse)
graph.add_conditional_edges("collapse_summaries", should_collapse)
graph.add_edge("generate_final_summary", END)

app = graph.compile()
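
As noted earlier, LangGraph's checkpointing supports error recovery. Here is a minimal sketch of compiling the same graph with LangGraph's in-memory MemorySaver checkpointer; persistent backends also exist, and the thread_id value below is arbitrary:

from langgraph.checkpoint.memory import MemorySaver

# Optional: compile with a checkpointer so each run is tied to a thread
# and can be resumed after a failure.
checkpointed_app = graph.compile(checkpointer=MemorySaver())
# await checkpointed_app.ainvoke(
#     {"contents": [doc.page_content for doc in split_docs]},
#     {"configurable": {"thread_id": "1"}},
# )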

LangGraph allows the graph structure to be plotted to help visualize its function:

from IPython.display import Image

Image(app.get_graph().draw_mermaid_png())

When running the application, we can stream the graph to observe its sequence of steps. Below, we will simply print out the name of each step.

Note that because we have a loop in the graph, it can be helpful to specify a recursion_limit on its execution. This will raise a specific error when the specified limit is exceeded.

async for step in app.astream(
    {"contents": [doc.page_content for doc in split_docs]},
    {"recursion_limit": 10},
):
    print(list(step.keys()))
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['generate_summary']
['collect_summaries']
['collapse_summaries']
['collapse_summaries']
['generate_final_summary']
print(step)
{'generate_final_summary': {'final_summary': 'The consolidated summary of the main themes from the provided documents is as follows:\n\n1. **Integration of Large Language Models (LLMs) in Autonomous Agents**: The documents explore the evolving role of LLMs in autonomous systems, emphasizing their enhanced reasoning and acting capabilities through methodologies that incorporate structured planning, memory systems, and tool use.\n\n2. **Core Components of Autonomous Agents**:\n   - **Planning**: Techniques like task decomposition (e.g., Chain of Thought) and external classical planners are utilized to facilitate long-term planning by breaking down complex tasks.\n   - **Memory**: The memory system is divided into short-term (in-context learning) and long-term memory, with parallels drawn between human memory and machine learning to improve agent performance.\n   - **Tool Use**: Agents utilize external APIs and algorithms to enhance problem-solving abilities, exemplified by frameworks like HuggingGPT that manage task workflows.\n\n3. **Neuro-Symbolic Architectures**: The integration of MRKL (Modular Reasoning, Knowledge, and Language) systems combines neural and symbolic expert modules with LLMs, addressing challenges in tasks such as verbal math problem-solving.\n\n4. **Specialized Applications**: Case studies, such as ChemCrow and projects in anticancer drug discovery, demonstrate the advantages of LLMs augmented with expert tools in specialized domains.\n\n5. **Challenges and Limitations**: The documents highlight challenges such as hallucination in model outputs and the finite context length of LLMs, which affects their ability to incorporate historical information and perform self-reflection. Techniques like Chain of Hindsight and Algorithm Distillation are discussed to enhance model performance through iterative learning.\n\n6. **Structured Software Development**: A systematic approach to creating Python software projects is emphasized, focusing on defining core components, managing dependencies, and adhering to best practices for documentation.\n\nOverall, the integration of structured planning, memory systems, and advanced tool use aims to enhance the capabilities of LLM-powered autonomous agents while addressing the challenges and limitations these technologies face in real-world applications.'}}

In the corresponding LangSmith trace we can see the individual LLM calls, grouped under their respective nodes.

Go deeper

Customization

  • As shown above, you can customize the LLMs and prompts for the map and reduce stages; a brief sketch follows.
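
For example, here is a sketch of swapping in a different map prompt before building the graph; the wording is purely illustrative:

from langchain_core.prompts import ChatPromptTemplate

# Illustrative only: redefine map_prompt (used by generate_summary)
# before compiling the graph to change what each map call asks for.
map_prompt = ChatPromptTemplate.from_messages(
    [("system", "Identify the main themes in the following:\n\n{context}")]
)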

Real-world use case

  • See this blog post case study on analyzing user interactions (questions about LangChain documentation)!
  • The blog post and associated repo also introduce clustering as a means of summarization.
  • This opens up another path beyond the stuff or map-reduce approaches that is worth considering; a rough sketch follows below.
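
As a rough illustration of the clustering idea (not the blog post's exact implementation), one might embed the split documents, cluster the embeddings, and summarize one representative document per cluster. The embedding model and cluster count below are assumptions:

import numpy as np
from langchain_openai import OpenAIEmbeddings
from sklearn.cluster import KMeans

# Embed each chunk; assumes langchain-openai and scikit-learn are installed.
vectors = np.array(
    OpenAIEmbeddings().embed_documents([d.page_content for d in split_docs])
)
kmeans = KMeans(n_clusters=5, random_state=42).fit(vectors)

# Take the chunk closest to each cluster centroid as its representative.
representatives = [
    split_docs[int(np.argmin(np.linalg.norm(vectors - center, axis=1)))]
    for center in kmeans.cluster_centers_
]

# Summarize just the representatives with the stuff chain from above.
print(chain.invoke({"context": representatives}))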


Next steps

We encourage you to check out the how-to guides for more detail on these and other summarization strategies and concepts.

