
IBM watsonx.ai

WatsonxLLM is a wrapper for IBM watsonx.ai foundation models.

This example shows how to communicate with watsonx.ai models using LangChain.

Overview

Integration details

Class: WatsonxLLM
Package: langchain-ibm (see PyPI for downloads and the latest version)

Setup

To access IBM watsonx.ai models, you'll need to create an IBM watsonx.ai account, get an API key, and install the langchain-ibm integration package.

Credentials

The cell below defines the credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.

import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Additionally, you can pass other secrets as environment variables.

import os

os.environ["WATSONX_URL"] = "your service instance url"
os.environ["WATSONX_TOKEN"] = "your token for accessing the CPD cluster"
os.environ["WATSONX_PASSWORD"] = "your password for accessing the CPD cluster"
os.environ["WATSONX_USERNAME"] = "your username for accessing the CPD cluster"
os.environ["WATSONX_INSTANCE_ID"] = "your instance_id for accessing the CPD cluster"

Installation

The LangChain IBM integration lives in the langchain-ibm package:

!pip install -qU langchain-ibm

Instantiation

You might need to adjust model parameters for different models or tasks. For details, refer to the documentation.

parameters = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}
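
If you prefer not to hard-code the parameter keys, the ibm_watsonx_ai package also exposes them as constants. The sketch below is an optional alternative (assuming the GenTextParamsMetaNames constants available in current ibm_watsonx_ai releases) that builds the same dictionary:

from ibm_watsonx_ai.metanames import GenTextParamsMetaNames

# Same parameters as above, keyed by ibm_watsonx_ai constants instead of raw strings.
parameters = {
    GenTextParamsMetaNames.DECODING_METHOD: "sample",
    GenTextParamsMetaNames.MAX_NEW_TOKENS: 100,
    GenTextParamsMetaNames.MIN_NEW_TOKENS: 1,
    GenTextParamsMetaNames.TEMPERATURE: 0.5,
    GenTextParamsMetaNames.TOP_K: 50,
    GenTextParamsMetaNames.TOP_P: 1,
}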

Initialize the WatsonxLLM class with the previously set parameters.

Note:

  • You must add project_id or space_id to provide context for the API calls. For more information, see the documentation.
  • Depending on the region of your provisioned service instance, use one of the URLs described here.

In this example, we'll use the project_id and Dallas URL.

You need to specify the model_id that will be used for inferencing. You can find all the available models in the documentation.

from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
API Reference: WatsonxLLM

Alternatively, you can use Cloud Pak for Data credentials. For details, see the documentation.

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

Instead of model_id, you can also pass the deployment_id of a previously tuned model. The entire model tuning workflow is described here.

watsonx_llm = WatsonxLLM(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

For certain requirements, there is also the option to pass IBM's APIClient object into the WatsonxLLM class.

from ibm_watsonx_ai import APIClient

api_client = APIClient(...)

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    watsonx_client=api_client,
)
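
The APIClient(...) call above is left elided. The snippet below is a minimal sketch of one way such a client could be built from IBM Cloud credentials (placeholder url, api_key taken from the environment variable set earlier; adjust for CPD clusters):

import os

from ibm_watsonx_ai import APIClient, Credentials

# Sketch: construct an APIClient from IBM Cloud credentials (placeholder values).
credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",
    api_key=os.environ["WATSONX_APIKEY"],
)
api_client = APIClient(credentials, project_id="PASTE YOUR PROJECT_ID HERE")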

You can also pass IBM's ModelInference object into the WatsonxLLM class.

from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(...)

watsonx_llm = WatsonxLLM(watsonx_model=model)
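
As with APIClient, the ModelInference(...) call is elided above. A minimal sketch of one way to set it up (placeholder project_id, reusing the parameters defined earlier; the exact keyword arguments may vary between ibm_watsonx_ai releases):

import os

from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# Sketch: a ModelInference object bound to a model, credentials, and project.
model = ModelInference(
    model_id="ibm/granite-13b-instruct-v2",
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key=os.environ["WATSONX_APIKEY"],
    ),
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)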

Invocation

To obtain completions, you can call the model directly with a string prompt.

# Calling a single prompt

watsonx_llm.invoke("Who is man's best friend?")
"Man's best friend is his dog. Dogs are man's best friend because they are always there for you, they never judge you, and they love you unconditionally. Dogs are also great companions and can help reduce stress levels. "
# Calling multiple prompts

watsonx_llm.generate(
    [
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)
LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound. Greyhounds can run up to 45 mph, which is about the same speed as a Usain Bolt.', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Labrador Retriever is a breed of retriever that was bred for hunting. They are a very smart breed and are very easy to train. They are also very loyal and will make great companions. ', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 82, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None}, run=[RunInfo(run_id=UUID('750b8a0f-8846-456d-93d0-e039e95b1276')), RunInfo(run_id=UUID('aa4c2a1c-5b08-4fcf-87aa-50228de46db5'))], type='LLMResult')

Streaming the Model output

You can stream the model output.

for chunk in watsonx_llm.stream(
    "Describe your favorite breed of dog and why it is your favorite."
):
    print(chunk, end="")
My favorite breed of dog is a Labrador Retriever. They are my favorite breed because they are my favorite color, yellow. They are also very smart and easy to train.
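
Because WatsonxLLM is a standard LangChain Runnable, the output can also be streamed asynchronously with astream. A minimal sketch (run it as a script; inside a notebook you can simply await stream_answer()):

import asyncio

async def stream_answer():
    # Print each chunk of the completion as soon as it arrives.
    async for chunk in watsonx_llm.astream(
        "Describe your favorite breed of dog and why it is your favorite."
    ):
        print(chunk, end="")

asyncio.run(stream_answer())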

Chaining

Create a PromptTemplate object which will be responsible for creating a random question.

from langchain_core.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "

prompt = PromptTemplate.from_template(template)
API Reference: PromptTemplate

Provide a topic and run the chain.

llm_chain = prompt | watsonx_llm

topic = "dog"

llm_chain.invoke(topic)
'What is the origin of the name "Pomeranian"?'
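
The chain can be extended with further Runnables in the same way. For example, the sketch below generates a question about a topic and then asks the same model to answer it, parsing the intermediate output with StrOutputParser:

from langchain_core.output_parsers import StrOutputParser

# Sketch: the first chain produces a question, the second chain answers it.
question_chain = prompt | watsonx_llm | StrOutputParser()
answer_prompt = PromptTemplate.from_template("Answer the following question: {question}")
qa_chain = {"question": question_chain} | answer_prompt | watsonx_llm

qa_chain.invoke({"topic": "dog"})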

API reference

For detailed documentation of all WatsonxLLM features and configurations, head to the API reference.

