
Nebius Retriever

NebiusRetriever enables efficient similarity search using embeddings from Nebius AI Studio. It leverages high-quality embedding models to provide semantic search over documents.

This retriever is optimized for scenarios where you need to perform similarity search over a collection of documents without persisting the vectors to a vector database. It performs vector similarity search in memory using matrix operations, which makes it efficient for moderately sized document collections.

Setup

Installation

The Nebius integration can be installed via pip:

%pip install --upgrade langchain-nebius

Credentials

Nebius requires an API key, which can be passed as the initialization parameter api_key or set as the environment variable NEBIUS_API_KEY. You can obtain an API key by creating an account on Nebius AI Studio.

import getpass
import os

# Make sure you've set your API key as an environment variable
if "NEBIUS_API_KEY" not in os.environ:
os.environ["NEBIUS_API_KEY"] = getpass.getpass("Enter your Nebius API key: ")

Instantiation

NebiusRetriever requires a NebiusEmbeddings instance and a list of documents. Here is how to initialize it:

from langchain_core.documents import Document
from langchain_nebius import NebiusEmbeddings, NebiusRetriever

# Create sample documents
docs = [
    Document(
        page_content="Paris is the capital of France", metadata={"country": "France"}
    ),
    Document(
        page_content="Berlin is the capital of Germany", metadata={"country": "Germany"}
    ),
    Document(
        page_content="Rome is the capital of Italy", metadata={"country": "Italy"}
    ),
    Document(
        page_content="Madrid is the capital of Spain", metadata={"country": "Spain"}
    ),
    Document(
        page_content="London is the capital of the United Kingdom",
        metadata={"country": "UK"},
    ),
    Document(
        page_content="Moscow is the capital of Russia", metadata={"country": "Russia"}
    ),
    Document(
        page_content="Washington DC is the capital of the United States",
        metadata={"country": "USA"},
    ),
    Document(
        page_content="Tokyo is the capital of Japan", metadata={"country": "Japan"}
    ),
    Document(
        page_content="Beijing is the capital of China", metadata={"country": "China"}
    ),
    Document(
        page_content="Canberra is the capital of Australia",
        metadata={"country": "Australia"},
    ),
]

# Initialize embeddings
embeddings = NebiusEmbeddings()

# Create retriever
retriever = NebiusRetriever(
    embeddings=embeddings,
    docs=docs,
    k=3,  # Number of documents to return
)
API Reference: Document

Usage

Retrieving relevant documents

You can use the retriever to find documents relevant to a query:

# Query for European capitals
query = "What are some capitals in Europe?"
results = retriever.invoke(query)

print(f"Query: {query}")
print(f"Top {len(results)} results:")
for i, doc in enumerate(results):
print(f"{i + 1}. {doc.page_content} (Country: {doc.metadata['country']})")
Query: What are some capitals in Europe?
Top 3 results:
1. Paris is the capital of France (Country: France)
2. Berlin is the capital of Germany (Country: Germany)
3. Rome is the capital of Italy (Country: Italy)

Using get_relevant_documents

You can also call the get_relevant_documents method directly (although invoke is the preferred interface):

# Query for Asian countries
query = "What are the capitals in Asia?"
results = retriever.get_relevant_documents(query)

print(f"Query: {query}")
print(f"Top {len(results)} results:")
for i, doc in enumerate(results):
print(f"{i + 1}. {doc.page_content} (Country: {doc.metadata['country']})")
Query: What are the capitals in Asia?
Top 3 results:
1. Beijing is the capital of China (Country: China)
2. Tokyo is the capital of Japan (Country: Japan)
3. Canberra is the capital of Australia (Country: Australia)

Customizing the number of results

You can adjust the number of results at query time by passing k as a parameter:

# Query for a specific country, with custom k
query = "Where is France?"
results = retriever.invoke(query, k=1) # Override default k

print(f"Query: {query}")
print(f"Top {len(results)} result:")
for i, doc in enumerate(results):
print(f"{i + 1}. {doc.page_content} (Country: {doc.metadata['country']})")
Query: Where is France?
Top 1 result:
1. Paris is the capital of France (Country: France)

Async support

NebiusRetriever supports asynchronous operations:

import asyncio


async def retrieve_async():
query = "What are some capital cities?"
results = await retriever.ainvoke(query)

print(f"Async query: {query}")
print(f"Top {len(results)} results:")
for i, doc in enumerate(results):
print(f"{i + 1}. {doc.page_content} (Country: {doc.metadata['country']})")


await retrieve_async()
Async query: What are some capital cities?
Top 3 results:
1. Washington DC is the capital of the United States (Country: USA)
2. Canberra is the capital of Australia (Country: Australia)
3. Paris is the capital of France (Country: France)

Handling empty documents

# Create a retriever with empty documents
empty_retriever = NebiusRetriever(
    embeddings=embeddings,
    docs=[],  # Empty document list
    k=2,
)

# Test the retriever with empty docs
results = empty_retriever.invoke("What are the capitals of European countries?")
print(f"Number of results: {len(results)}")
Number of results: 0

Use within a chain

NebiusRetriever works seamlessly in LangChain RAG pipelines. Here is an example of building a simple RAG chain with it:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_nebius import ChatNebius

# Initialize LLM
llm = ChatNebius(model="meta-llama/Llama-3.3-70B-Instruct-fast")

# Create a prompt template
prompt = ChatPromptTemplate.from_template(
"""
Answer the question based only on the following context:

Context:
{context}

Question: {question}
"""
)


# Format documents function
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)


# Create RAG chain
rag_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)

# Run the chain
answer = rag_chain.invoke("What are three European capitals?")
print(answer)
Based on the context provided, three European capitals are:

1. Paris
2. Berlin
3. Rome

Creating a search tool

You can use NebiusRetrievalTool to create a tool for agents:

from langchain_nebius import NebiusRetrievalTool

# Create a retrieval tool
tool = NebiusRetrievalTool(
    retriever=retriever,
    name="capital_search",
    description="Search for information about capital cities around the world",
)

# Use the tool
result = tool.invoke({"query": "capitals in Europe", "k": 3})
print("Tool results:")
print(result)
Tool results:
Document 1:
Paris is the capital of France

Document 2:
Berlin is the capital of Germany

Document 3:
Rome is the capital of Italy

How it works

The NebiusRetriever works as follows:

  1. During initialization:

    • It stores the provided documents
    • It computes embeddings for all documents using the provided NebiusEmbeddings
    • These embeddings are kept in memory for fast retrieval
  2. During retrieval (invoke or get_relevant_documents):

    • It embeds the query with the same embedding model
    • It computes similarity scores between the query embedding and all document embeddings
    • It returns the top k documents ranked by similarity

This approach is very efficient for moderately sized document collections, since it avoids the need for a separate vector database while still providing high-quality semantic search.
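To make the scoring step concrete, here is an illustrative sketch of an in-memory top-k search using cosine similarity with numpy. It is not the library's actual implementation; the function name and arguments are assumptions made for the example.

import numpy as np


def top_k_by_cosine(query_embedding, doc_embeddings, docs, k=3):
    # Normalize query and document vectors so dot products equal cosine similarity
    q = np.asarray(query_embedding, dtype=float)
    m = np.asarray(doc_embeddings, dtype=float)
    q = q / np.linalg.norm(q)
    m = m / np.linalg.norm(m, axis=1, keepdims=True)

    # A single matrix-vector product scores the query against every document
    scores = m @ q

    # Take the k highest-scoring documents, best match first
    top_indices = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top_indices]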

API Reference

For more details about the Nebius AI Studio API, visit the Nebius AI Studio documentation.