# Prediction Guard
This page covers how to use the Prediction Guard ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to the specific Prediction Guard wrappers.
## Installation and Setup
- Install the Python SDK with `pip install predictionguard`
- Get a Prediction Guard access token (as described in the docs at https://docs.predictionguard.com) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`); see the sketch below
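If you prefer to set the token from Python rather than exporting it in your shell, a minimal sketch looks like this (the placeholder value is just that; substitute your real token):

```python
import os

# Placeholder token; replace with your actual Prediction Guard access token.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"
```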
## LLM Wrapper
There exists a Prediction Guard LLM wrapper, which you can access with:
```python
from langchain_community.llms import PredictionGuard
```
API Reference: PredictionGuard
You can provide the name of a Prediction Guard model as an argument when initializing the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct")
```
You can also provide your access token directly as an argument:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
```
Finally, you can provide an "output" argument that is used to structure/control the output of the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
```
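As a sketch of how such a boolean-controlled wrapper might be used (the question below is illustrative, not taken from the Prediction Guard docs):

```python
from langchain_community.llms import PredictionGuard

# Constrain the model so its output is structured as a boolean.
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})

# Illustrative query; the controlled output should come back as a boolean
# answer rather than free-form text.
pgllm("Is a penguin a bird?")
```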
## Example usage
Basic usage of the controlled or guarded LLM wrapper:
```python
import os

from langchain_community.llms import PredictionGuard
from langchain_core.prompts import PromptTemplate

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

# Define a prompt template
template = """Respond to the following query based on the context.

Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉

Query: {query}

Result: """
prompt = PromptTemplate.from_template(template)

# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(
    model="MPT-7B-Instruct",
    output={
        "type": "categorical",
        "categories": ["product announcement", "apology", "relational"],
    },
)

pgllm(prompt.format(query="What kind of post is this?"))
```
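With this configuration, the completion is constrained to one of the three listed categories, so the call should return a label like "product announcement" rather than free-form text.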
Basic LLM chaining with the Prediction Guard wrapper:
```python
import os

from langchain.chains import LLMChain
from langchain_community.llms import PredictionGuard
from langchain_core.prompts import PromptTemplate

# Optional: your OpenAI API key. Prediction Guard also gives you access to
# all the latest open access models (see https://docs.predictionguard.com).
os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

pgllm = PredictionGuard(model="OpenAI-gpt-3.5-turbo-instruct")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.predict(question=question)
```
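Because the chain was created with `verbose=True`, LangChain prints the formatted prompt before each model call, which makes it easy to verify exactly what is being sent to Prediction Guard.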