
Vectara self-querying

Vectara is the trusted AI Assistant and Agent platform, focused on enterprise readiness for mission-critical applications.

Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:

  1. A way to extract text from files (PDF, PPT, DOCX, etc.)
  2. ML-based chunking that provides state-of-the-art performance.
  3. The Boomerang embeddings model.
  4. Its own internal vector database where text chunks and embedding vectors are stored.
  5. A query service that automatically encodes the query into an embedding and retrieves the most relevant text segments, including support for Hybrid Search and multiple reranking options such as the multi-lingual relevance reranker, MMR, and the UDF reranker.
  6. An LLM for creating a generative summary based on the retrieved documents (context), including citations.

For more information about how to use the API, see the Vectara API documentation.

This notebook shows how to use SelfQueryRetriever with Vectara.

Getting started

To get started, use the following steps:

  1. If you don't already have one, sign up for your free Vectara trial. Once you complete the sign-up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.
  2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data ingested from input documents. To create a corpus, use the "Create Corpus" button, then provide a name and a description for your corpus. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and the corpus ID right at the top.
  3. Next you'll need to create API keys to access the corpus. Click on the "Access Control" tab in the corpus view and then the "Create API Key" button. Give your key a name, and choose whether you want a query-only or a query+index key. Click "Create" and you now have an active API key. Keep this key confidential.

To use LangChain with Vectara, you'll need these three values: customer ID, corpus ID, and api_key. You can provide these to LangChain in two ways:

  1. Include these three variables in your environment: VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID, and VECTARA_API_KEY.

    For example, you can set these variables using os.environ and getpass as follows:

import os
import getpass

os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")
  2. Add them to the Vectara vectorstore constructor:
vectara = Vectara(
    vectara_customer_id=vectara_customer_id,
    vectara_corpus_id=vectara_corpus_id,
    vectara_api_key=vectara_api_key,
)

In this notebook we assume they are provided in the environment.

Note: The self-query retriever requires you to have lark installed (pip install lark).
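
In a notebook environment, a minimal install sketch covering lark plus the LangChain distributions imported later in this notebook looks like the following (pin versions as needed for your setup):

# Install lark (required by the self-query retriever) and the packages used below
%pip install -qU lark langchain langchain-community langchain-openai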

Connecting to Vectara from LangChain

In this example, we assume that you've created an account and a corpus, and added your VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID, and VECTARA_API_KEY (created with permissions for both indexing and query) as environment variables.

We further assume the corpus has 4 fields defined as filterable metadata attributes: year, director, rating, and genre.

import os

from langchain_core.documents import Document

os.environ["VECTARA_API_KEY"] = "<YOUR_VECTARA_API_KEY>"
os.environ["VECTARA_CORPUS_ID"] = "<YOUR_VECTARA_CORPUS_ID>"
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR_VECTARA_CUSTOMER_ID>"

from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.vectorstores import Vectara
from langchain_openai.chat_models import ChatOpenAI

Dataset

We first define an example dataset of movie summaries, and upload those to the corpus, along with their metadata:

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "rating": 9.9,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
        },
    ),
]

vectara = Vectara()
for doc in docs:
    vectara.add_texts([doc.page_content], doc_metadata=doc.metadata)
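
As an optional sanity check, you can query the corpus directly with a metadata filter. This is a sketch: the filter string uses Vectara's doc.<field> filter-expression syntax and assumes year was defined as a filterable attribute on the corpus, and that your langchain-community version's similarity_search accepts a filter argument:

# Optional sanity check: direct similarity search with a metadata filter.
# "doc.year > 2000" assumes `year` is configured as a filterable attribute.
found = vectara.similarity_search(
    "dreams within dreams",
    k=2,
    filter="doc.year > 2000",
)
for d in found:
    print(d.page_content, d.metadata)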

Creating our self-querying retriever

Now we can instantiate our retriever. To do this we'll need to provide some information upfront about the metadata fields that our documents support, along with a short description of the contents of the documents.

We then provide an llm (in this case OpenAI) and the vectara vectorstore as arguments:

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = ChatOpenAI(temperature=0, model="gpt-4o", max_tokens=4069)
retriever = SelfQueryRetriever.from_llm(
    llm, vectara, document_content_description, metadata_field_info, verbose=True
)
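
If you want to see how a natural-language question is translated into a structured query (a rewritten query plus a metadata filter) before it reaches Vectara, you can invoke the retriever's query constructor directly. This is a sketch assuming a recent LangChain version where SelfQueryRetriever exposes a query_constructor runnable (older releases used an llm_chain attribute instead):

# Inspect the StructuredQuery generated for a question (version-dependent API).
structured_query = retriever.query_constructor.invoke(
    {"query": "a science fiction film rated above 8.5"}
)
print(structured_query)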

Self-retrieval queries

And now we can actually try using our retriever!

# This example only specifies a relevant query
retriever.invoke("What are movies about scientists")
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'}),
Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'lang': 'eng', 'offset': '0', 'len': '76', 'year': '2010', 'director': 'Christopher Nolan', 'rating': '8.2', 'source': 'langchain'})]
# This example only specifies a filter
retriever.invoke("I want to watch a movie rated higher than 8.5")
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
# This example specifies a query and a filter
retriever.invoke("Has Greta Gerwig directed any movies about women")
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'})]
# This example specifies a composite filter
retriever.invoke("What's a highly rated (above 8.5) science fiction film?")
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
# This example specifies a query and composite filter
retriever.invoke(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
[Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}),
Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'})]

Filter k

We can also use the self-query retriever to specify k: the number of documents to fetch.

We can do this by passing enable_limit=True to the constructor.

retriever = SelfQueryRetriever.from_llm(
    llm,
    vectara,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)

This is neat: we can include the number of results we would like to see directly in the query, and the self-query retriever will correctly understand it. For example, let's look for:

# This example only specifies a relevant query
retriever.invoke("what are two movies with a rating above 8.5")
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
