SambaNova

SambaNova's Sambaverse and SambaStudio are platforms for running your own open-source models.

This example goes over how to use LangChain to interact with SambaNova models.

Sambaverse

Sambaverse allows you to interact with multiple open-source models. You can view the list of available models and interact with them in the playground. Note that Sambaverse's free offering is performance-limited. Companies ready to evaluate SambaNova's production tokens-per-second performance, volume throughput, and 10x lower total cost of ownership (TCO) should contact us for a non-limited evaluation instance.

An API key is required to access Sambaverse models. To get a key, create an account at sambaverse.sambanova.ai.

You will need sseclient-py to run streaming predictions:

%pip install --quiet sseclient-py==1.8.0

Register your API key as an environment variable:

import os

sambaverse_api_key = "<Your sambaverse API key>"

# Set the environment variables
os.environ["SAMBAVERSE_API_KEY"] = sambaverse_api_key

Call Sambaverse models directly from LangChain!

from langchain_community.llms.sambanova import Sambaverse

llm = Sambaverse(
    sambaverse_model_name="Meta/llama-2-7b-chat-hf",
    streaming=False,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        "select_expert": "llama-2-7b-chat-hf",
        "process_prompt": False,
        # "stop_sequences": '\"sequence1\",\"sequence2\"',
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_p": 1.0
    },
)

print(llm.invoke("Why should I use open source models?"))
API Reference: Sambaverse
# Streaming response

from langchain_community.llms.sambanova import Sambaverse

llm = Sambaverse(
    sambaverse_model_name="Meta/llama-2-7b-chat-hf",
    streaming=True,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        "select_expert": "llama-2-7b-chat-hf",
        "process_prompt": False,
        # "stop_sequences": '\"sequence1\",\"sequence2\"',
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_p": 1.0
    },
)

for chunk in llm.stream("Why should I use open source models?"):
print(chunk, end="", flush=True)
API Reference: Sambaverse
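
Because Sambaverse implements LangChain's standard LLM interface, it also composes with other LangChain primitives. A minimal sketch piping a prompt template into the LLM as an LCEL chain (the prompt text and topic are illustrative):

from langchain_core.prompts import PromptTemplate
from langchain_community.llms.sambanova import Sambaverse

llm = Sambaverse(
    sambaverse_model_name="Meta/llama-2-7b-chat-hf",
    model_kwargs={
        "select_expert": "llama-2-7b-chat-hf",
        "process_prompt": False,
    },
)

# Pipe a prompt template into the LLM to build a runnable chain
prompt = PromptTemplate.from_template("List three advantages of {topic}.")
chain = prompt | llm

print(chain.invoke({"topic": "open source models"}))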

SambaStudio

SambaStudio allows you to train, run batch inference jobs, and deploy online inference endpoints to run your own fine-tuned open-source models.

A SambaStudio environment is required to deploy a model. Get more information at sambanova.ai/products/enterprise-ai-platform-sambanova-suite.

You will need sseclient-py to run streaming predictions:

%pip install --quiet sseclient-py==1.8.0

Register your environment variables:

import os

sambastudio_base_url = "<Your SambaStudio environment URL>"
sambastudio_base_uri = "<Your SambaStudio endpoint base URI>" # optional, "api/predict/generic" set as default
sambastudio_project_id = "<Your SambaStudio project id>"
sambastudio_endpoint_id = "<Your SambaStudio endpoint id>"
sambastudio_api_key = "<Your SambaStudio endpoint API key>"

# Set the environment variables
os.environ["SAMBASTUDIO_BASE_URL"] = sambastudio_base_url
os.environ["SAMBASTUDIO_BASE_URI"] = sambastudio_base_uri
os.environ["SAMBASTUDIO_PROJECT_ID"] = sambastudio_project_id
os.environ["SAMBASTUDIO_ENDPOINT_ID"] = sambastudio_endpoint_id
os.environ["SAMBASTUDIO_API_KEY"] = sambastudio_api_key

Call SambaStudio models directly from LangChain!

from langchain_community.llms.sambanova import SambaStudio

llm = SambaStudio(
    streaming=False,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_logprobs": 0,
        # "top_p": 1.0
    },
)

print(llm.invoke("Why should I use open source models?"))
API Reference: SambaStudio
# Streaming response

from langchain_community.llms.sambanova import SambaStudio

llm = SambaStudio(
    streaming=True,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_logprobs": 0,
        # "top_p": 1.0
    },
)

for chunk in llm.stream("Why should I use open source models?"):
print(chunk, end="", flush=True)
API Reference: SambaStudio

You can also call expert models from a CoE (Composition of Experts) endpoint:

# Using a CoE endpoint

from langchain_community.llms.sambanova import SambaStudio

llm = SambaStudio(
    streaming=False,
    model_kwargs={
        "do_sample": True,
        "max_tokens_to_generate": 1000,
        "temperature": 0.01,
        "process_prompt": False,
        "select_expert": "Meta-Llama-3-8B-Instruct",
        # "repetition_penalty": 1.0,
        # "top_k": 50,
        # "top_logprobs": 0,
        # "top_p": 1.0
    },
)

print(llm.invoke("Why should I use open source models?"))
API Reference: SambaStudio
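
Since the SambaStudio LLM is a standard LangChain Runnable, you can also send several prompts through the same endpoint in one call with batch. A minimal sketch (the prompts are illustrative):

# Run multiple prompts through the same endpoint in a single batch call
responses = llm.batch(
    [
        "Why should I use open source models?",
        "What are the trade-offs of fine-tuning?",
    ]
)
for response in responses:
    print(response)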
