Migrating from ConstitutionalChain
ConstitutionalChain allowed an LLM to critique and revise generations based on principles, structured as combinations of critique and revision requests. For example, a principle might include a request to identify harmful content together with a request to rewrite that content.
Constitutional AI principles
Based on the Constitutional AI: Harmlessness from AI Feedback paper.
In ConstitutionalChain, this structure of critique requests and associated revisions was formatted into an LLM prompt and parsed out of string responses. This is achieved more naturally with a chat model's structured output features. We can construct a simple chain in LangGraph for this purpose. Some advantages of this approach include:
- Leveraging tool-calling capabilities of chat models that have been fine-tuned for this purpose;
- Reducing parsing errors from extracting expressions out of string LLM responses (see the brief sketch after the setup cell below);
- Delegation of instructions to message roles (e.g., chat models can understand what a ToolMessage represents without additional prompting);
- Support for streaming, both of individual tokens and chain steps.
%pip install --upgrade --quiet langchain-openai langgraph
import os
from getpass import getpass
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()
Legacy
Details
from langchain.chains import ConstitutionalChain, LLMChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
llm = OpenAI()
qa_prompt = PromptTemplate(
    template="Q: {question} A:",
    input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)

constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=[
        ConstitutionalPrinciple(
            critique_request="Tell if this answer is good.",
            revision_request="Give a better answer.",
        )
    ],
    return_intermediate_steps=True,
)
result = constitutional_chain.invoke("What is the meaning of life?")
result
{'question': 'What is the meaning of life?',
'output': 'The meaning of life is a deeply personal and ever-evolving concept. It is a journey of self-discovery and growth, and can be different for each individual. Some may find meaning in relationships, others in achieving their goals, and some may never find a concrete answer. Ultimately, the meaning of life is what we make of it.',
'initial_output': ' The meaning of life is a subjective concept that can vary from person to person. Some may believe that the purpose of life is to find happiness and fulfillment, while others may see it as a journey of self-discovery and personal growth. Ultimately, the meaning of life is something that each individual must determine for themselves.',
'critiques_and_revisions': [('This answer is good in that it recognizes and acknowledges the subjective nature of the question and provides a valid and thoughtful response. However, it could have also mentioned that the meaning of life is a complex and deeply personal concept that can also change and evolve over time for each individual. Critique Needed.',
'The meaning of life is a deeply personal and ever-evolving concept. It is a journey of self-discovery and growth, and can be different for each individual. Some may find meaning in relationships, others in achieving their goals, and some may never find a concrete answer. Ultimately, the meaning of life is what we make of it.')]}
Above, we've returned intermediate steps showing (a short example of reading these fields follows this list):
- the original question;
- the initial output;
- the critiques and revisions;
- the final output (matching the revision).
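Since invoke returns a plain dict, these fields can be read off the result directly; a minimal sketch using the keys shown in the output above:

# Inspect the first (critique, revision) pair from the intermediate steps
critique, revision = result["critiques_and_revisions"][0]
print(critique)
print(revision)

# In this run, the final output matches the revision
assert result["output"] == revision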
LangGraph
Details
Below, we use the .with_structured_output method to simultaneously generate (1) a judgment of whether a critique is needed, and (2) the critique. We surface all prompts involved for clarity and ease of customization.
Note that we are also able to stream intermediate steps with this implementation, so we can monitor it during execution and intervene if needed.
from typing import List, Optional, Tuple
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.chains.constitutional_ai.prompts import (
CRITIQUE_PROMPT,
REVISION_PROMPT,
)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from typing_extensions import Annotated, TypedDict
llm = ChatOpenAI(model="gpt-4o-mini")
class Critique(TypedDict):
    """Generate a critique, if needed."""

    critique_needed: Annotated[bool, ..., "Whether or not a critique is needed."]
    critique: Annotated[str, ..., "If needed, the critique."]
critique_prompt = ChatPromptTemplate.from_template(
    "Critique this response according to the critique request. "
    "If no critique is needed, specify that.\n\n"
    "Query: {query}\n\n"
    "Response: {response}\n\n"
    "Critique request: {critique_request}"
)
revision_prompt = ChatPromptTemplate.from_template(
    "Revise this response according to the critique and revision request.\n\n"
    "Query: {query}\n\n"
    "Response: {response}\n\n"
    "Critique request: {critique_request}\n\n"
    "Critique: {critique}\n\n"
    "If the critique does not identify anything worth changing, ignore the "
    "revision request and return 'No revisions needed'. If the critique "
    "does identify something worth changing, revise the response based on "
    "the revision request.\n\n"
    "Revision Request: {revision_request}"
)
chain = llm | StrOutputParser()
critique_chain = critique_prompt | llm.with_structured_output(Critique)
revision_chain = revision_prompt | llm | StrOutputParser()
class State(TypedDict):
    query: str
    constitutional_principles: List[ConstitutionalPrinciple]
    initial_response: str
    critiques_and_revisions: List[Tuple[str, str]]
    response: str


async def generate_response(state: State):
    """Generate initial response."""
    response = await chain.ainvoke(state["query"])
    return {"response": response, "initial_response": response}


async def critique_and_revise(state: State):
    """Critique and revise response according to principles."""
    critiques_and_revisions = []
    response = state["initial_response"]
    for principle in state["constitutional_principles"]:
        critique = await critique_chain.ainvoke(
            {
                "query": state["query"],
                "response": response,
                "critique_request": principle.critique_request,
            }
        )
        if critique["critique_needed"]:
            revision = await revision_chain.ainvoke(
                {
                    "query": state["query"],
                    "response": response,
                    "critique_request": principle.critique_request,
                    "critique": critique["critique"],
                    "revision_request": principle.revision_request,
                }
            )
            response = revision
            critiques_and_revisions.append((critique["critique"], revision))
        else:
            critiques_and_revisions.append((critique["critique"], ""))
    return {
        "critiques_and_revisions": critiques_and_revisions,
        "response": response,
    }
graph = StateGraph(State)
graph.add_node("generate_response", generate_response)
graph.add_node("critique_and_revise", critique_and_revise)
graph.add_edge(START, "generate_response")
graph.add_edge("generate_response", "critique_and_revise")
graph.add_edge("critique_and_revise", END)
app = graph.compile()
API Reference: ConstitutionalPrinciple | CRITIQUE_PROMPT | REVISION_PROMPT | StrOutputParser | ChatPromptTemplate | ChatOpenAI | StateGraph
constitutional_principles = [
    ConstitutionalPrinciple(
        critique_request="Tell if this answer is good.",
        revision_request="Give a better answer.",
    )
]
query = "What is the meaning of life? Answer in 10 words or fewer."
async for step in app.astream(
    {"query": query, "constitutional_principles": constitutional_principles},
    stream_mode="values",
):
    subset = ["initial_response", "critiques_and_revisions", "response"]
    print({k: v for k, v in step.items() if k in subset})
{}
{'initial_response': 'Finding purpose, connection, and joy in our experiences and relationships.', 'response': 'Finding purpose, connection, and joy in our experiences and relationships.'}
{'initial_response': 'Finding purpose, connection, and joy in our experiences and relationships.', 'critiques_and_revisions': [("The response exceeds the 10-word limit, providing a more elaborate answer than requested. A concise response, such as 'To seek purpose and joy in life,' would better align with the query.", 'To seek purpose and joy in life.')], 'response': 'To seek purpose and joy in life.'}
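If streaming of intermediate steps is not needed, the compiled graph can also be run in a single call and the final state read directly. A minimal sketch, assuming the same app, query, and constitutional_principles as above (ainvoke is the async counterpart of invoke on the compiled graph):

# Run the graph end-to-end and read the final state
result = await app.ainvoke(
    {"query": query, "constitutional_principles": constitutional_principles}
)
print(result["initial_response"])
print(result["critiques_and_revisions"])
print(result["response"])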
Next steps
See the guide on generating structured output here.
Check out the LangGraph documentation for detail on building with LangGraph.