Migrating from LLMRouterChain

LLMRouterChain routes an input query to one of multiple destinations; that is, given an input query, it uses an LLM to select from a list of destination chains and passes the input to the selected chain.

LLMRouterChain does not support common chat model features, such as message roles and tool calling. Under the hood, LLMRouterChain routes a query by instructing the LLM to generate JSON-formatted text and parsing out the intended destination.

Consider an example from MultiPromptChain, which uses LLMRouterChain. Below is an (example) default prompt:

from langchain.chains.router.multi_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

destinations = """
animals: prompt for animal expert
vegetables: prompt for a vegetable expert
"""

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations)

print(router_template.replace("`", "'")) # for rendering purposes
Given a raw text input to a language model select the model prompt best suited for the input. You will be given the names of the available prompts and a description of what the prompt is best suited for. You may also revise the original input if you think that revising it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
'''json
{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}
'''

REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>

animals: prompt for animal expert
vegetables: prompt for a vegetable expert


<< INPUT >>
{input}

<< OUTPUT (must include '''json at the start of the response) >>
<< OUTPUT (must end with ''') >>
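
As an illustration of this mechanism, here is a minimal sketch of the parsing step. This is not the actual RouterOutputParser implementation; the llm_output string is a hypothetical model response that follows the formatting instructions above.

import json
import re

# Hypothetical LLM output that follows the formatting instructions above.
llm_output = '''```json
{
    "destination": "vegetables",
    "next_inputs": "What color are carrots?"
}
```'''

# Pull the JSON object out of the markdown code fence and parse it.
match = re.search(r"```json\n(.*?)```", llm_output, re.DOTALL)
parsed = json.loads(match.group(1))

print(parsed["destination"])
vegetables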

Most of the behavior is determined via a single natural language prompt. Chat models that support tool calling features confer a number of advantages for this task:

  • Supports chat prompt templates, including messages with system and other roles;
  • Tool-calling models are fine-tuned to generate structured output;
  • Support for runnable methods like streaming and async operations (a sketch follows the LCEL example below).

Now let's look at LLMRouterChain side-by-side with an LCEL implementation that uses tool-calling. Note that for this guide we will use langchain-openai >= 0.1.20:

%pip install -qU langchain-core langchain-openai
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()

Legacy

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

router_prompt = PromptTemplate(
    # Note: here we use the prompt template from above. Generally this would need
    # to be customized.
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

chain = LLMRouterChain.from_llm(llm, router_prompt)
result = chain.invoke({"input": "What color are carrots?"})

print(result["destination"])
vegetables

LCEL

from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict

llm = ChatOpenAI(model="gpt-4o-mini")

route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)


# Define schema for output:
class RouteQuery(TypedDict):
    """Route query to destination expert."""

    destination: Literal["animal", "vegetable"]


# Instead of writing formatting instructions into the prompt, we
# leverage .with_structured_output to coerce the output into a simple
# schema.
chain = route_prompt | llm.with_structured_output(RouteQuery)
result = chain.invoke({"input": "What color are carrots?"})

print(result["destination"])
vegetable
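
Because the LCEL chain is a standard Runnable, it also supports methods like batching and async invocation out of the box. Below is a quick sketch using the chain defined above; the printed destinations reflect expected model behavior and may vary:

# Route multiple queries in a single (parallelized) call:
results = chain.batch(
    [
        {"input": "What color are carrots?"},
        {"input": "How fast can a cheetah run?"},
    ]
)
print([result["destination"] for result in results])
['vegetable', 'animal']

# In an async context, the same chain can be awaited:
# result = await chain.ainvoke({"input": "What color are carrots?"})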

Next steps

See this tutorial for more detail on building with prompt templates, LLMs, and output parsers.

Check out the LCEL conceptual docs for more background information.

