Migrating from LLMRouterChain

The LLMRouterChain routes an input query to one of multiple destinations: that is, given an input query, it uses an LLM to select from a list of destination chains and passes its inputs to the selected chain.

LLMRouterChain does not support common chat model features, such as message roles and tool calling. Under the hood, LLMRouterChain routes a query by instructing the LLM to generate JSON-formatted text and parsing out the intended destination.

Consider an example from MultiPromptChain, which uses LLMRouterChain. Below is an (example) default prompt:

from langchain.chains.router.multi_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

destinations = """
animals: prompt for animal expert
vegetables: prompt for a vegetable expert
"""

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations)

print(router_template.replace("`", "'")) # for rendering purposes
Given a raw text input to a language model select the model prompt best suited for the input. You will be given the names of the available prompts and a description of what the prompt is best suited for. You may also revise the original input if you think that revising it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
'''json
{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}
'''

REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>

animals: prompt for animal expert
vegetables: prompt for a vegetable expert


<< INPUT >>
{input}

<< OUTPUT (must include '''json at the start of the response) >>
<< OUTPUT (must end with ''') >>
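
In the legacy chain, a RouterOutputParser recovers the destination from the model's text completion. Below is a minimal sketch of that parsing step; the sample response string is illustrative, not actual model output:

from langchain.chains.router.llm_router import RouterOutputParser

parser = RouterOutputParser()

# An illustrative model response that follows the formatting instructions above:
response = """```json
{
    "destination": "vegetables",
    "next_inputs": "What color are carrots?"
}
```"""

print(parser.parse(response))
# {'destination': 'vegetables', 'next_inputs': {'input': 'What color are carrots?'}}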

Most of the behavior is determined via a single natural language prompt. Chat models that support tool calling features confer a number of advantages for this task:

  • Support for chat prompt templates, including messages with system and other roles;
  • Tool-calling models are fine-tuned to generate structured output;
  • Support for runnable methods such as streaming and async operations.

Now let's compare LLMRouterChain side-by-side with an LCEL implementation that uses tool calling. Note that for this guide we will use langchain-openai >= 0.1.20:

%pip install -qU langchain-core langchain-openai
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()

Legacy

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

router_prompt = PromptTemplate(
    # Note: here we use the prompt template from above. Generally this would need
    # to be customized.
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

chain = LLMRouterChain.from_llm(llm, router_prompt)
result = chain.invoke({"input": "What color are carrots?"})

print(result["destination"])
vegetables

LCEL

from operator import itemgetter
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict

llm = ChatOpenAI(model="gpt-4o-mini")

route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)


# Define schema for output:
class RouteQuery(TypedDict):
    """Route query to destination expert."""

    destination: Literal["animal", "vegetable"]


# Instead of writing formatting instructions into the prompt, we
# leverage .with_structured_output to coerce the output into a simple
# schema.
chain = route_prompt | llm.with_structured_output(RouteQuery)
result = chain.invoke({"input": "What color are carrots?"})

print(result["destination"])
vegetable
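
Because the routing decision is a plain dict, it can be composed with downstream chains. Below is a minimal sketch of dispatching on the result; the two expert prompts are assumptions for illustration, not part of the original guide:

# Illustrative destination chains (these prompts are assumptions for the sketch):
animal_chain = ChatPromptTemplate.from_messages(
    [("system", "You are an expert on animals."), ("human", "{input}")]
) | llm

vegetable_chain = ChatPromptTemplate.from_messages(
    [("system", "You are an expert on vegetables."), ("human", "{input}")]
) | llm


def choose_chain(info: dict):
    """Select the destination chain named by the routing decision."""
    if info["destination"]["destination"] == "animal":
        return animal_chain
    return vegetable_chain


# RunnablePassthrough.assign keeps the original {"input": ...} payload and adds
# the routing decision under "destination"; the chain returned by choose_chain
# is then invoked with that same payload.
full_chain = RunnablePassthrough.assign(destination=chain) | choose_chain

full_chain.invoke({"input": "What color are carrots?"})

Because full_chain is itself a runnable, streaming and async invocation (e.g., full_chain.stream(...) or await full_chain.ainvoke(...)) work out of the box.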

Next steps

See this tutorial for more detail on building with prompt templates, LLMs, and output parsers.

Check out the LCEL conceptual docs for more background information.