How to parse text from message objects
Prerequisites
This guide assumes familiarity with the following concepts: chat models, messages, and output parsers.
LangChain message objects support content in a variety of formats, including text, multimodal data, and lists of content block dictionaries.
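For illustration, here is a minimal sketch that constructs AIMessage objects by hand to show the two shapes of content described above (the messages themselves are made up for this example):
from langchain_core.messages import AIMessage
# Content can be a plain string...
text_message = AIMessage(content="Hello!")
# ...or a list of content block dictionaries.
block_message = AIMessage(content=[{"type": "text", "text": "Hello!"}])
print(type(text_message.content))   # <class 'str'>
print(type(block_message.content))  # <class 'list'>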
The format of a chat model's response content can depend on the provider. For example, Anthropic's chat model returns string content for a typical string input:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke("Hello")
response.content
API Reference: ChatAnthropic
'Hi there! How are you doing today? Is there anything I can help you with?'
However, when tool calls are generated, the response content is structured into content blocks that convey the model's reasoning process:
from langchain_core.tools import tool
@tool
def get_weather(location: str) -> str:
"""Get the weather from a location."""
return "Sunny."
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
response.content
API Reference: tool
[{'text': "I'll help you get the current weather for San Francisco, California. Let me check that for you right away.",
'type': 'text'},
{'id': 'toolu_015PwwcKxWYctKfY3pruHFyy',
'input': {'location': 'San Francisco, CA'},
'name': 'get_weather',
'type': 'tool_use'}]
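Without a parser, pulling just the text out of a response would mean handling both content formats yourself. Below is a minimal sketch of such manual handling; the extract_text helper is hypothetical, not part of LangChain:
def extract_text(message) -> str:
    # Plain string content can be returned as-is.
    if isinstance(message.content, str):
        return message.content
    # Otherwise, join the text of any "text"-typed content blocks.
    return "".join(
        block["text"]
        for block in message.content
        if isinstance(block, dict) and block.get("type") == "text"
    )
extract_text(response)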
To automatically parse text from message objects, regardless of the format of the underlying content, we can use StrOutputParser. We can compose it with a chat model as follows:
from langchain_core.output_parsers import StrOutputParser
chain = llm_with_tools | StrOutputParser()
API Reference: StrOutputParser
StrOutputParser simplifies extracting text from message objects:
response = chain.invoke("What's the weather in San Francisco, CA?")
print(response)
I'll help you check the weather in San Francisco, CA right away.
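The parser can also be applied directly to a message object you already have, rather than composed into a chain. A minimal sketch, reusing llm_with_tools from above:
message = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
StrOutputParser().invoke(message)  # extracts the text portion of the content blocks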
This is particularly useful in streaming contexts:
for chunk in chain.stream("What's the weather in San Francisco, CA?"):
    print(chunk, end="|")
|I'll| help| you get| the current| weather for| San Francisco, California|. Let| me retrieve| that| information for you.||||||||||
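The same chain can also be streamed asynchronously; a minimal sketch, assuming it runs in an async application:
import asyncio
async def stream_weather() -> None:
    # astream yields the same parsed text chunks as stream, but asynchronously.
    async for chunk in chain.astream("What's the weather in San Francisco, CA?"):
        print(chunk, end="|")
asyncio.run(stream_weather())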
For more information, see the API reference.