
How to parse text from message objects

Prerequisites

LangChain message objects support content in a variety of formats, including text, multimodal data, and lists of content block dictionaries.
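
For reference, here is a minimal hand-constructed sketch of the two content shapes (these messages are built directly rather than returned by a model):

from langchain_core.messages import AIMessage, HumanMessage

# Content can be a plain string...
HumanMessage(content="Hello")

# ...or a list of content block dictionaries, e.g. a single text block.
AIMessage(content=[{"type": "text", "text": "Hello"}])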

The format of a chat model's response content can depend on the provider. For example, Anthropic chat models will return string content for typical string inputs:

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")

response = llm.invoke("Hello")
response.content
API Reference: ChatAnthropic
'Hi there! How are you doing today? Is there anything I can help you with?'

However, when tool calls are generated, the response content is structured into content blocks that convey the model's reasoning process:

from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get the weather from a location."""
    return "Sunny."


llm_with_tools = llm.bind_tools([get_weather])

response = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
response.content
API Reference: tool
[{'text': "I'll help you get the current weather for San Francisco, California. Let me check that for you right away.",
  'type': 'text'},
 {'id': 'toolu_015PwwcKxWYctKfY3pruHFyy',
  'input': {'location': 'San Francisco, CA'},
  'name': 'get_weather',
  'type': 'tool_use'}]
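
Without a parser, you would need to handle both shapes yourself. The sketch below is our own illustration (not part of the original example); it pulls the text out of either a plain string or a list of content blocks:

def extract_text(message) -> str:
    """Collect the text from a message whose content is a string or a list of blocks."""
    content = message.content
    if isinstance(content, str):
        return content
    # Keep only the text blocks and join their contents.
    return "".join(
        block["text"]
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )

extract_text(response)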

To automatically parse text out of message objects, regardless of the format of the underlying content, we can use StrOutputParser. We can compose it with a chat model as follows:

from langchain_core.output_parsers import StrOutputParser

chain = llm_with_tools | StrOutputParser()
API Reference: StrOutputParser

StrOutputParser simplifies the extraction of text from message objects:

response = chain.invoke("What's the weather in San Francisco, CA?")
print(response)
I'll help you check the weather in San Francisco, CA right away.
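
StrOutputParser can also be invoked directly on a message object you already hold; the following is a small usage sketch (not from the original page), reusing llm_with_tools from above:

message = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
# Invoking the parser on a message extracts only its text content, ignoring tool-use blocks.
StrOutputParser().invoke(message)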

This is particularly useful in streaming contexts:

for chunk in chain.stream("What's the weather in San Francisco, CA?"):
print(chunk, end="|")
|I'll| help| you get| the current| weather for| San Francisco, California|. Let| me retrieve| that| information for you.||||||||||

For more information, see the API reference.

