
MyScale

MyScale is an integrated vector database. You can access your database in SQL and also from LangChain. MyScale can make use of various data types and functions for filters. It will boost your LLM app whether you are scaling up your data or expanding your system to broader applications.

In this notebook, we'll demo the SelfQueryRetriever wrapped around a MyScale vector store, with some extra pieces we contributed to LangChain.

In short, it can be condensed into four points (a small sketch of the corresponding filter primitives follows the list):

  1. Add a contain comparator to match any element of a list (when there is more than one element to match)
  2. Add a timestamp data type for datetime matching (ISO format or YYYY-MM-DD)
  3. Add a like comparator for string pattern search
  4. Add arbitrary function capability
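
To make points 1 and 3 a bit more concrete, here is a minimal sketch (not part of the original notebook) of how these comparators appear in the structured query the self-query chain produces; MyScale's translator then turns them into SQL WHERE clauses. The import path is assumed from langchain's query-constructor module.

# A hedged illustration of the extra comparators, assuming the query-constructor
# IR lives at this path (it is also re-exported from langchain_core.structured_query).
from langchain.chains.query_constructor.ir import Comparator, Comparison

# "genre contains 'science fiction'" -- the contain comparator for list columns
contain_filter = Comparison(
    comparator=Comparator.CONTAIN, attribute="genre", value="science fiction"
)

# "director LIKE '%Andrei%'" -- the like comparator for string pattern search
like_filter = Comparison(
    comparator=Comparator.LIKE, attribute="director", value="%Andrei%"
)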

Creating a MyScale vector store

MyScale has already been integrated with LangChain for a while, so you can follow this notebook to create your own vector store for a self-query retriever.

NOTE: All self-query retrievers require you to have lark installed (pip install lark). We use lark for grammar definition. Before you proceed to the next step, we also want to remind you that clickhouse-connect is needed to interact with your MyScale backend.

%pip install --upgrade --quiet  lark clickhouse-connect

In this tutorial we follow other examples' setting and use OpenAIEmbeddings. Remember to get an OpenAI API key for valid access to the LLM.

import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
if "MYSCALE_HOST" not in os.environ:
os.environ["MYSCALE_HOST"] = getpass.getpass("MyScale URL:")
if "MYSCALE_PORT" not in os.environ:
os.environ["MYSCALE_PORT"] = getpass.getpass("MyScale Port:")
if "MYSCALE_USERNAME" not in os.environ:
os.environ["MYSCALE_USERNAME"] = getpass.getpass("MyScale Username:")
if "MYSCALE_PASSWORD" not in os.environ:
os.environ["MYSCALE_PASSWORD"] = getpass.getpass("MyScale Password:")
from langchain_community.vectorstores import MyScale
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

Creating some test data

As you can see, the data we created has some differences compared to other self-query retrievers. We replaced the keyword year with date, which gives you finer control over timestamps. We also changed the type of the keyword genre to a list of strings, so the LLM can use the new contain comparator to construct filters. We also provide the like comparator and arbitrary function support for filters, which will be introduced in the next few cells.

Now let's look at the data first.

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"date": "1993-07-02", "rating": 7.7, "genre": ["science fiction"]},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"date": "2010-12-30", "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"date": "2006-04-23", "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"date": "2019-08-22", "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"date": "1995-02-11", "genre": ["animated"]},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "date": "1979-09-10",
            "director": "Andrei Tarkovsky",
            "genre": ["science fiction", "adventure"],
            "rating": 9.9,
        },
    ),
]
vectorstore = MyScale.from_documents(
    docs,
    embeddings,
)
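
Before building the retriever, a quick sanity check (not in the original notebook) can confirm the documents landed in MyScale; similarity_search is the standard LangChain vector-store method, so only the query text below is made up.

# Optional sanity check: plain vector search against the new store, no filters yet.
vectorstore.similarity_search("dinosaurs running wild", k=1)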

Creating our self-querying retriever

Just like other retrievers... simple and nice.

from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genres of the movie. "
        "It only supports equal and contain comparisons. "
        "Here are some examples: genre = [' A '], genre = [' A ', 'B'], contain (genre, 'A')",
        type="list[string]",
    ),
    # If you want to include the length of a list, just define it as a new column.
    # This will teach the LLM to use it as a column when constructing filters.
    AttributeInfo(
        name="length(genre)",
        description="The length of genres of the movie",
        type="integer",
    ),
    # Now you can define a column as a timestamp by simply setting the type to timestamp.
    AttributeInfo(
        name="date",
        description="The date the movie was released",
        type="timestamp",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = ChatOpenAI(temperature=0, model_name="gpt-4o")
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)

Testing out the self-query retriever's existing functionalities

And now we can actually try using our retriever!

# This example only specifies a relevant query
retriever.invoke("What are some movies about dinosaurs")
# This example only specifies a filter
retriever.invoke("I want to watch a movie rated higher than 8.5")
# This example specifies a query and a filter
retriever.invoke("Has Greta Gerwig directed any movies about women")
# This example specifies a composite filter
retriever.invoke("What's a highly rated (above 8.5) science fiction film?")
# This example specifies a query and composite filter
retriever.invoke(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
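
If you want to see what the LLM actually produced for one of these questions before it is translated into a MyScale WHERE clause, one way is to call the retriever's underlying query constructor directly. The query_constructor attribute is assumed from recent LangChain releases, so treat this as a debugging sketch rather than a stable API.

# Debugging sketch: inspect the structured query (filter, and limit if any) the LLM built.
structured_query = retriever.query_constructor.invoke(
    {"query": "I want to watch a movie rated higher than 8.5"}
)
print(structured_query)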

Wait a second... what else?

The self-query retriever with MyScale can do more! Let's find out.

# You can use length(genre) to do anything you want
retriever.invoke("What's a movie that have more than 1 genres?")
# Fine-grained datetime? You got it already.
retriever.invoke("What's a movie that release after feb 1995?")
# Don't know what your exact filter should be? Use string pattern match!
retriever.invoke("What's a movie whose name is like Andrei?")
# Contain works for lists: so you can match a list with contain comparator!
retriever.invoke("What's a movie who has genres science fiction and adventure?")

Filter k

We can also use the self-query retriever to specify k: the number of documents to fetch.

We can do this by passing enable_limit=True to the constructor.

retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
# This example only specifies a relevant query
retriever.invoke("what are two movies about dinosaurs")
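
The limit also composes with the filters from earlier sections; for example, a hedged variation that mixes a cap on k with a rating filter:

# This example combines a limit (k) with a metadata filter
retriever.invoke("Give me two movies rated above 8.5")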
