
How to add tools to a chatbot

Prerequisites

This guide assumes familiarity with the following concepts:

This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.

Note

This how-to guide previously built a chatbot using RunnableWithMessageHistory. You can access this version of the guide in the v0.2 docs.

As of the v0.3 release of LangChain, we recommend that LangChain users take advantage of LangGraph persistence to incorporate memory into new LangChain applications.

If your code is already relying on RunnableWithMessageHistory or BaseChatMessageHistory, you do not need to make any changes. We do not plan on deprecating this functionality in the near future, as it works for simple chat applications, and any code that uses RunnableWithMessageHistory will continue to work as expected.

For more details, see How to migrate to LangGraph memory.

Setup

For this guide, we'll be using a tool calling agent with a single tool for searching the web. The default will be powered by Tavily, but you can swap it out for any similar tool. The rest of this section will assume you're using Tavily.

You'll need to sign up for an account on the Tavily website and install the following packages:

%pip install --upgrade --quiet langchain-community langchain-openai tavily-python langgraph

import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

if not os.environ.get("TAVILY_API_KEY"):
os.environ["TAVILY_API_KEY"] = getpass.getpass("Tavily API Key:")
OpenAI API Key: ········
Tavily API Key: ········

You will also need your OpenAI key set as OPENAI_API_KEY and your Tavily API key set as TAVILY_API_KEY.

Create the agent

Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.

First, let's initialize Tavily and an OpenAI chat model capable of tool calling:

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

tools = [TavilySearchResults(max_results=1)]

# Choose the LLM that will drive the agent
# Only certain models support this
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
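As noted above, Tavily is only the default; any similar tool can be swapped in. As a rough sketch, a hypothetical custom tool built with LangChain's @tool decorator could be dropped into the same tools list (the lookup_population function and its logic here are illustrative only):

from langchain_core.tools import tool

@tool
def lookup_population(city: str) -> str:
    """Look up the population of a given city."""
    # Placeholder logic - a real tool would call an API or database here.
    return f"Sorry, no offline population data is available for {city}."

# tools = [lookup_population]  # would replace the Tavily tool above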

To make our agent conversational, we can also specify a prompt. Here's an example:

prompt = (
    "You are a helpful assistant. "
    "You may not need to use tools for every query - the user may just want to chat!"
)

Great! Now let's assemble our agent using LangGraph's prebuilt create_react_agent, which lets you create a tool-calling agent:

from langgraph.prebuilt import create_react_agent

# prompt allows you to preprocess the inputs to the model inside ReAct agent
# in this case, since we're passing a prompt string, we'll just always add a SystemMessage
# with this prompt string before any other messages sent to the model
agent = create_react_agent(model, tools, prompt=prompt)
API Reference: create_react_agent
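As an aside, prompt does not have to be a plain string: create_react_agent also accepts a callable that receives the graph state and returns the messages to send to the model. A minimal sketch of that, reusing the prompt string defined above (the prompt_fn name is purely for illustration):

from langchain_core.messages import SystemMessage

def prompt_fn(state):
    # Prepend our system instructions to whatever messages are in the state.
    return [SystemMessage(content=prompt)] + state["messages"]

agent_with_callable_prompt = create_react_agent(model, tools, prompt=prompt_fn)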

Run the agent

Now that we've set up our agent, let's try interacting with it! It can handle trivial queries that require no lookup:

from langchain_core.messages import HumanMessage

agent.invoke({"messages": [HumanMessage(content="I'm Nemo!")]})
API Reference: HumanMessage
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='39e715c7-bd1c-426f-8e14-c05586b3d221'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-6937c944-d702-40bb-9a9f-4141ddde9f78-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118})]}

Or, it can use the passed search tool to get up-to-date information if needed:

agent.invoke(
    {
        "messages": [
            HumanMessage(
                content="What is the current conservation status of the Great Barrier Reef?"
            )
        ],
    }
)
{'messages': [HumanMessage(content='What is the current conservation status of the Great Barrier Reef?', additional_kwargs={}, response_metadata={}, id='a74cc581-8ad5-4401-b3a5-f028d69e4b21'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_aKOItwvAb4DHQCwaasKphGHq', 'function': {'arguments': '{"query":"current conservation status of the Great Barrier Reef 2023"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 28, 'prompt_tokens': 116, 'total_tokens': 144, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-267ff8a8-d866-4ae5-9534-ad87ebbdc954-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current conservation status of the Great Barrier Reef 2023'}, 'id': 'call_aKOItwvAb4DHQCwaasKphGHq', 'type': 'tool_call'}], usage_metadata={'input_tokens': 116, 'output_tokens': 28, 'total_tokens': 144}),
ToolMessage(content='[{"url": "https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24", "content": "This report summarises the condition of coral reefs in the Northern, Central and Southern\xa0Great Barrier Reef (GBR) from the Long-Term Monitoring Program (LTMP) surveys of 94 reefs conducted between August\xa02023 and June 2024 (reported as ‘2024’). Over the past 38 years of monitoring by the Australian Institute of Marine Science (AIMS), hard coral cover on reefs of the GBR has decreased and increased in response to cycles of disturbance and recovery. It is relatively rare for GBR reefs to have 75% to 100% hard coral cover and AIMS defines >30% – 50% hard coral cover as a high value, based on historical surveys across the GBR."}]', name='tavily_search_results_json', id='05b3fab7-9ac8-42bb-9612-ff2a896dbb67', tool_call_id='call_aKOItwvAb4DHQCwaasKphGHq', artifact={'query': 'current conservation status of the Great Barrier Reef 2023', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'title': 'Annual Summary Report of Coral Reef Condition 2023/24', 'url': 'https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24', 'content': 'This report summarises the condition of coral reefs in the Northern, Central and Southern\xa0Great Barrier Reef (GBR) from the Long-Term Monitoring Program (LTMP) surveys of 94 reefs conducted between August\xa02023 and June 2024 (reported as ‘2024’). Over the past 38 years of monitoring by the Australian Institute of Marine Science (AIMS), hard coral cover on reefs of the GBR has decreased and increased in response to cycles of disturbance and recovery. It is relatively rare for GBR reefs to have 75% to 100% hard coral cover and AIMS defines >30% – 50% hard coral cover as a high value, based on historical surveys across the GBR.', 'score': 0.95991266, 'raw_content': None}], 'response_time': 4.22}),
AIMessage(content='The current conservation status of the Great Barrier Reef (GBR) indicates ongoing challenges and fluctuations in coral health. According to a report from the Australian Institute of Marine Science (AIMS), the condition of coral reefs in the GBR has been monitored over the years, showing cycles of disturbance and recovery. \n\nAs of the latest surveys conducted between August 2023 and June 2024, hard coral cover on the GBR has experienced both decreases and increases. AIMS defines a hard coral cover of over 30% to 50% as high value, but it is relatively rare for GBR reefs to achieve 75% to 100% hard coral cover.\n\nFor more detailed information, you can refer to the [AIMS report](https://www.aims.gov.au/monitoring-great-barrier-reef/gbr-condition-summary-2023-24).', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 174, 'prompt_tokens': 337, 'total_tokens': 511, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-bec32925-0dba-445d-8b55-87358ef482bb-0', usage_metadata={'input_tokens': 337, 'output_tokens': 174, 'total_tokens': 511})]}

Conversational responses

Because the agent's input is the full list of chat messages (with our prompt prepended as a system message), it can take previous interactions into account and respond conversationally like a standard chatbot:

from langchain_core.messages import AIMessage, HumanMessage

agent.invoke(
    {
        "messages": [
            HumanMessage(content="I'm Nemo!"),
            AIMessage(content="Hello Nemo! How can I assist you today?"),
            HumanMessage(content="What is my name?"),
        ],
    }
)
API Reference: AIMessage | HumanMessage
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='2c8e58bf-ad20-45a4-940b-84393c6b3a03'),
AIMessage(content='Hello Nemo! How can I assist you today?', additional_kwargs={}, response_metadata={}, id='5e014114-7e9d-42c3-b63e-a662b3a49bef'),
HumanMessage(content='What is my name?', additional_kwargs={}, response_metadata={}, id='d92be4e1-6497-4037-9a9a-83d3e7b760d5'),
AIMessage(content='Your name is Nemo!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 130, 'total_tokens': 136, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-17db96f8-8dbd-4f25-a80d-e4e872967641-0', usage_metadata={'input_tokens': 130, 'output_tokens': 6, 'total_tokens': 136})]}

If you prefer, you can also add memory to the LangGraph agent to manage the history of messages. Let's redeclare it this way:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
agent = create_react_agent(model, tools, prompt=prompt, checkpointer=memory)
API Reference: MemorySaver
agent.invoke(
    {"messages": [HumanMessage("I'm Nemo!")]},
    config={"configurable": {"thread_id": "1"}},
)
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='117b2cfc-c6cc-449c-bba9-26fc545d0afa'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba16cc0b-fba1-4ec5-9d99-e010c3b702d0-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118})]}

And then if we rerun our wrapped agent executor:

agent.invoke(
    {"messages": [HumanMessage("What is my name?")]},
    config={"configurable": {"thread_id": "1"}},
)
{'messages': [HumanMessage(content="I'm Nemo!", additional_kwargs={}, response_metadata={}, id='117b2cfc-c6cc-449c-bba9-26fc545d0afa'),
AIMessage(content='Hi Nemo! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 107, 'total_tokens': 118, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba16cc0b-fba1-4ec5-9d99-e010c3b702d0-0', usage_metadata={'input_tokens': 107, 'output_tokens': 11, 'total_tokens': 118}),
HumanMessage(content='What is my name?', additional_kwargs={}, response_metadata={}, id='53ac8d34-99bb-43a7-9103-80e26b7ee6cc'),
AIMessage(content='Your name is Nemo!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 130, 'total_tokens': 136, 'completion_tokens_details': {'reasoning_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_1bb46167f9', 'finish_reason': 'stop', 'logprobs': None}, id='run-b3f224a5-902a-4973-84ff-9b683615b0e2-0', usage_metadata={'input_tokens': 130, 'output_tokens': 6, 'total_tokens': 136})]}
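Note that the thread_id in the config is what keys the stored conversation: invoking the agent with a different thread_id starts from an empty history. A quick sketch (the thread_id "2" here is just an arbitrary new value):

# Using a new thread_id starts a fresh conversation with no shared memory;
# the agent will not know the user's name here.
agent.invoke(
    {"messages": [HumanMessage("What is my name?")]},
    config={"configurable": {"thread_id": "2"}},
)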

This LangSmith trace shows what's going on under the hood.
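If you'd rather inspect the intermediate steps locally, a minimal sketch of streaming the agent's state updates (assuming the memory-enabled agent and thread_id from above) might look like:

# Stream each state update so tool calls and the final answer print as they
# happen; stream_mode="values" yields the full message list at every step.
for step in agent.stream(
    {"messages": [HumanMessage("What is my name?")]},
    config={"configurable": {"thread_id": "1"}},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()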

Further reading

For more on how to build agents, check out these LangGraph guides.

For more on tool usage, you can also check out this use case section.

