How to add fallbacks to a runnable

When working with language models, you may often encounter issues from the underlying APIs, whether those are rate limits or downtime. Therefore, as you move your LLM applications into production, it becomes more and more important to safeguard against these. That's why we've introduced the concept of fallbacks.

A **fallback** is an alternative plan that may be used in an emergency.

Crucially, fallbacks can be applied not just at the LLM level but at the whole runnable level. This is important because different models often require different prompts. So if your call to OpenAI fails, you don't just want to send the same prompt to Anthropic - you probably want to use a different prompt template and send a different version there.

Fallback for LLM API Errors

This is perhaps the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons - the API could be down, you could have hit a rate limit, or any number of other things. Using fallbacks therefore helps protect against these types of issues.

Important: By default, many LLM wrappers catch errors and retry. You will most likely want to turn those off when working with fallbacks; otherwise the first wrapper will keep retrying rather than failing.

%pip install --upgrade --quiet langchain langchain-openai langchain-anthropic
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI
API Reference: ChatAnthropic | ChatOpenAI

First, let's mock what happens if we hit a RateLimitError from OpenAI.

from unittest.mock import patch

import httpx
from openai import RateLimitError

request = httpx.Request("GET", "/")
response = httpx.Response(200, request=request)
error = RateLimitError("rate limit", response=response, body="")
# Note that we set max_retries = 0 to avoid retrying on RateLimits, etc
openai_llm = ChatOpenAI(model="gpt-4o-mini", max_retries=0)
anthropic_llm = ChatAnthropic(model="claude-3-haiku-20240307")
llm = openai_llm.with_fallbacks([anthropic_llm])
# Let's use just the OpenAI LLM first, to show that we run into an error
with patch("openai.resources.chat.completions.Completions.create", side_effect=error):
    try:
        print(openai_llm.invoke("Why did the chicken cross the road?"))
    except RateLimitError:
        print("Hit error")
Hit error
# Now let's try with fallbacks to Anthropic
with patch("openai.resources.chat.completions.Completions.create", side_effect=error):
    try:
        print(llm.invoke("Why did the chicken cross the road?"))
    except RateLimitError:
        print("Hit error")
content=' I don\'t actually know why the chicken crossed the road, but here are some possible humorous answers:\n\n- To get to the other side!\n\n- It was too chicken to just stand there. \n\n- It wanted a change of scenery.\n\n- It wanted to show the possum it could be done.\n\n- It was on its way to a poultry farmers\' convention.\n\nThe joke plays on the double meaning of "the other side" - literally crossing the road to the other side, or the "other side" meaning the afterlife. So it\'s an anti-joke, with a silly or unexpected pun as the answer.' additional_kwargs={} example=False
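By default, `with_fallbacks` triggers on any exception. If you only want to fall back on specific errors, it also accepts an `exceptions_to_handle` parameter. Here is a minimal sketch, reusing the models above, that falls back only on rate limit errors and lets everything else propagate:

# A sketch: only treat RateLimitError as fallback-worthy;
# any other exception will still be raised to the caller.
llm_rate_limit_only = openai_llm.with_fallbacks(
    [anthropic_llm], exceptions_to_handle=(RateLimitError,)
)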

We can use our "LLM with Fallbacks" as we would a normal LLM.

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You're a nice assistant who always includes a compliment in your response",
        ),
        ("human", "Why did the {animal} cross the road"),
    ]
)
chain = prompt | llm
with patch("openai.resources.chat.completions.Completions.create", side_effect=error):
    try:
        print(chain.invoke({"animal": "kangaroo"}))
    except RateLimitError:
        print("Hit error")
API Reference: ChatPromptTemplate
content=" I don't actually know why the kangaroo crossed the road, but I can take a guess! Here are some possible reasons:\n\n- To get to the other side (the classic joke answer!)\n\n- It was trying to find some food or water \n\n- It was trying to find a mate during mating season\n\n- It was fleeing from a predator or perceived threat\n\n- It was disoriented and crossed accidentally \n\n- It was following a herd of other kangaroos who were crossing\n\n- It wanted a change of scenery or environment \n\n- It was trying to reach a new habitat or territory\n\nThe real reason is unknown without more context, but hopefully one of those potential explanations does the joke justice! Let me know if you have any other animal jokes I can try to decipher." additional_kwargs={} example=False

Fallback for Sequences

We can also create fallbacks for sequences, where the fallbacks are sequences themselves. Here we do that with two different models: ChatOpenAI and then normal OpenAI (which does not use a chat model). Because OpenAI is NOT a chat model, you likely want a different prompt.

# First let's create a chain with a ChatModel
# We add in a string output parser here so the outputs between the two are the same type
from langchain_core.output_parsers import StrOutputParser

chat_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You're a nice assistant who always includes a compliment in your response",
        ),
        ("human", "Why did the {animal} cross the road"),
    ]
)
# Here we're going to use a bad model name to easily create a chain that will error
chat_model = ChatOpenAI(model="gpt-fake")
bad_chain = chat_prompt | chat_model | StrOutputParser()
API Reference: StrOutputParser
# Now let's create a chain with the normal OpenAI model
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt_template = """Instructions: You should always include a compliment in your response.

Question: Why did the {animal} cross the road?"""
prompt = PromptTemplate.from_template(prompt_template)
llm = OpenAI()
good_chain = prompt | llm
API Reference: PromptTemplate | OpenAI
# We can now create a final chain which combines the two
chain = bad_chain.with_fallbacks([good_chain])
chain.invoke({"animal": "turtle"})
'\n\nAnswer: The turtle crossed the road to get to the other side, and I have to say he had some impressive determination.'
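Note that the combined chain returned by `with_fallbacks` is itself a runnable, so the standard runnable interface still applies. For instance, it can be batched over several inputs (a quick sketch; outputs will vary from run to run):

# The fallback chain is an ordinary runnable, so batch, stream,
# and async variants work the same way as invoke
chain.batch([{"animal": "turtle"}, {"animal": "duck"}])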

Fallback for Long Inputs

One of the big limiting factors of LLMs is their context window. Usually, you can count and track the length of prompts before sending them to an LLM, but in situations where that is hard or complicated, you can fall back to a model with a longer context length.
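If you do want to do that counting yourself, a rough pre-check is one option. Below is a minimal sketch using the tiktoken package (assumed to be installed separately); the 4,096-token threshold is an illustrative assumption, not an exact limit:

import tiktoken

# Sketch: count tokens up front to decide whether the short-context
# model can handle the prompt at all (the threshold is an assumption)
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

def fits_short_context(text: str, limit: int = 4096) -> bool:
    return len(encoding.encode(text)) <= limit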

short_llm = ChatOpenAI()
long_llm = ChatOpenAI(model="gpt-3.5-turbo-16k")
llm = short_llm.with_fallbacks([long_llm])
inputs = "What is the next number: " + ", ".join(["one", "two"] * 3000)
try:
    print(short_llm.invoke(inputs))
except Exception as e:
    print(e)
This model's maximum context length is 4097 tokens. However, your messages resulted in 12012 tokens. Please reduce the length of the messages.
try:
    print(llm.invoke(inputs))
except Exception as e:
    print(e)
content='The next number in the sequence is two.' additional_kwargs={} example=False

Fallback to Better Model

Often we ask models to output in a specific format (like JSON). Models like GPT-3.5 can do this OK, but sometimes struggle. This naturally points to fallbacks - we can try with GPT-3.5 (faster, cheaper), but if parsing fails we can then use GPT-4.

from langchain.output_parsers import DatetimeOutputParser

prompt = ChatPromptTemplate.from_template(
    "what time was {event} (in %Y-%m-%dT%H:%M:%S.%fZ format - only return this value)"
)
# In this case we are going to do the fallbacks on the LLM + output parser level
# Because the error will get raised in the OutputParser
openai_35 = ChatOpenAI() | DatetimeOutputParser()
openai_4 = ChatOpenAI(model="gpt-4") | DatetimeOutputParser()
only_35 = prompt | openai_35
fallback_4 = prompt | openai_35.with_fallbacks([openai_4])
try:
    print(only_35.invoke({"event": "the superbowl in 1994"}))
except Exception as e:
    print(f"Error: {e}")
Error: Could not parse datetime string: The Super Bowl in 1994 took place on January 30th at 3:30 PM local time. Converting this to the specified format (%Y-%m-%dT%H:%M:%S.%fZ) results in: 1994-01-30T15:30:00.000Z
try:
    print(fallback_4.invoke({"event": "the superbowl in 1994"}))
except Exception as e:
    print(f"Error: {e}")
1994-01-30 15:30:00