
Oracle AI Vector Search: Vector Store

Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads and allows you to query data based on semantics rather than keywords. One of its biggest benefits is that semantic search on unstructured data can be combined with relational search on business data in a single system. This is not only powerful, but also significantly more effective, because you don't need to add a specialized vector database, which eliminates the pain of data fragmentation between multiple systems.

In addition, your vectors can benefit from all of Oracle Database's most powerful features.

If you are just getting started with Oracle Database, consider exploring the free Oracle 23 AI, which provides a great introduction to setting up your database environment. While working with the database, it is often advisable to avoid using the system user by default; instead, create your own user for better security and customization. For detailed steps on user creation, refer to our end-to-end guide, which also shows how to set up a user in Oracle. Additionally, understanding user privileges is crucial for managing database security effectively. You can learn more about this topic in the official Oracle guide on administering user accounts and security.
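If you prefer to script this step, the following is only a rough sketch of creating a dedicated user with python-oracledb; the administrative credentials, DSN, user name, password, and granted role are all placeholders you would adapt to your own environment.

# Hypothetical example: create a dedicated user instead of working as the system user.
# All credentials, the DSN, and the granted role below are placeholders.
import oracledb

admin_connection = oracledb.connect(
    user="admin_user", password="admin_password", dsn="ipaddress:port/orclpdb1"
)
with admin_connection.cursor() as cursor:
    cursor.execute("CREATE USER vector_user IDENTIFIED BY vector_password")
    cursor.execute("GRANT DB_DEVELOPER_ROLE TO vector_user")
    cursor.execute("GRANT UNLIMITED TABLESPACE TO vector_user")
admin_connection.close()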

You will need to install langchain-community with pip install -qU langchain-community to use this integration.
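For example:

# pip install -qU langchain-community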

Please install the Oracle Python client driver to use Langchain with Oracle AI Vector Search.

# pip install oracledb

The following sample code shows how to connect to Oracle Database. By default, python-oracledb runs in "Thin" mode, which connects directly to Oracle Database. This mode does not need Oracle Client libraries. However, some additional functionality is available when python-oracledb uses them. python-oracledb is said to be in "Thick" mode when Oracle Client libraries are used. Both modes have comprehensive functionality supporting the Python Database API v2.0 Specification. See the following guide that describes the features supported in each mode. You might want to switch to Thick mode if you are unable to use Thin mode.

import oracledb

username = "username"
password = "password"
dsn = "ipaddress:port/orclpdb1"

try:
    connection = oracledb.connect(user=username, password=password, dsn=dsn)
    print("Connection successful!")
except Exception as e:
    print("Connection failed!")
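If Thin mode is not an option in your environment, a minimal sketch of switching python-oracledb to Thick mode is shown below; the lib_dir path is an assumption and should point at your local Oracle Client installation (on many Linux setups it can be omitted when the libraries are found on the system library path).

# Optional: enable Thick mode by loading Oracle Client libraries before connecting.
# The lib_dir value is a placeholder for a local Oracle Instant Client installation.
import oracledb

oracledb.init_oracle_client(lib_dir=r"C:\oracle\instantclient_23_5")
connection = oracledb.connect(user=username, password=password, dsn=dsn)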
from langchain_community.vectorstores import oraclevs
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document
from langchain_huggingface import HuggingFaceEmbeddings

Load Documents

# Define a list of documents (The examples below are 5 random documents from Oracle Concepts Manual )

documents_json_list = [
    {
        "id": "cncpt_15.5.3.2.2_P4",
        "text": "If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/logical-storage-structures.html#GUID-5387D7B2-C0CA-4C1E-811B-C7EB9B636442",
    },
    {
        "id": "cncpt_15.5.5_P1",
        "text": "A tablespace can be online (accessible) or offline (not accessible) whenever the database is open.\nA tablespace is usually online so that its data is available to users. The SYSTEM tablespace and temporary tablespaces cannot be taken offline.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/logical-storage-structures.html#GUID-D02B2220-E6F5-40D9-AFB5-BC69BCEF6CD4",
    },
    {
        "id": "cncpt_22.3.4.3.1_P2",
        "text": "The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table.\nSometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/concepts-for-database-developers.html#GUID-3C50EAB8-FC39-4BB3-B680-4EACCE49E866",
    },
    {
        "id": "cncpt_22.3.4.3.1_P3",
        "text": "The LOB segment stores data in pieces called chunks. A chunk is a logically contiguous set of data blocks and is the smallest unit of allocation for a LOB. A row in the table stores a pointer called a LOB locator, which points to the LOB index. When the table is queried, the database uses the LOB index to quickly locate the LOB chunks.",
        "link": "https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/concepts-for-database-developers.html#GUID-3C50EAB8-FC39-4BB3-B680-4EACCE49E866",
    },
]
# Create Langchain Documents

documents_langchain = []

for doc in documents_json_list:
    metadata = {"id": doc["id"], "link": doc["link"]}
    doc_langchain = Document(page_content=doc["text"], metadata=metadata)
    documents_langchain.append(doc_langchain)

First, we will create three vector stores, each with a different distance function. Since we have not created indices in them yet, they will only create tables for now. Later we will use these vector stores to create HNSW indices. To learn more about the different types of indices Oracle AI Vector Search supports, refer to the following guide.

If you connect to the Oracle Database manually, you will see three tables: Documents_DOT, Documents_COSINE and Documents_EUCLIDEAN.

We will then create three additional tables, Documents_DOT_IVF, Documents_COSINE_IVF and Documents_EUCLIDEAN_IVF, which will be used to create IVF indices on the tables instead of HNSW indices.

# Ingest documents into Oracle Vector Store using different distance strategies

# When using our API calls, start by initializing your vector store with a subset of your documents
# through from_documents(), then incrementally add more documents using add_texts().
# This approach prevents system overload and ensures efficient document processing.

model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

vector_store_dot = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_DOT",
    distance_strategy=DistanceStrategy.DOT_PRODUCT,
)
vector_store_max = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_COSINE",
    distance_strategy=DistanceStrategy.COSINE,
)
vector_store_euclidean = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_EUCLIDEAN",
    distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,
)

# Ingest documents into the IVF variants of the Oracle Vector Store tables,
# again using the three different distance strategies
vector_store_dot_ivf = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_DOT_IVF",
    distance_strategy=DistanceStrategy.DOT_PRODUCT,
)
vector_store_max_ivf = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_COSINE_IVF",
    distance_strategy=DistanceStrategy.COSINE,
)
vector_store_euclidean_ivf = OracleVS.from_documents(
    documents_langchain,
    model,
    client=connection,
    table_name="Documents_EUCLIDEAN_IVF",
    distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,
)
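If you want to confirm that the six tables were created before moving on, a small sanity check against the same connection is sketched below (table names are upper-cased by Oracle).

# Optional sanity check: list the tables created by from_documents().
with connection.cursor() as cursor:
    cursor.execute(
        "SELECT table_name FROM user_tables WHERE table_name LIKE 'DOCUMENTS%'"
    )
    print([row[0] for row in cursor.fetchall()])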
def manage_texts(vector_stores):
    """
    Adds texts to each vector store, demonstrates error handling for duplicate additions,
    and performs deletion of texts. Showcases similarity searches and index creation for each vector store.

    Args:
    - vector_stores (list): A list of OracleVS instances.
    """
    texts = ["Rohan", "Shailendra"]
    metadata = [
        {"id": "100", "link": "Document Example Test 1"},
        {"id": "101", "link": "Document Example Test 2"},
    ]

    for i, vs in enumerate(vector_stores, start=1):
        # Adding texts
        try:
            vs.add_texts(texts, metadata)
            print(f"\n\n\nAdd texts complete for vector store {i}\n\n\n")
        except Exception as ex:
            print(f"\n\n\nExpected error on duplicate add for vector store {i}\n\n\n")

        # Deleting texts using the value of 'id'
        vs.delete([metadata[0]["id"]])
        print(f"\n\n\nDelete texts complete for vector store {i}\n\n\n")

        # Similarity search
        results = vs.similarity_search("How are LOBS stored in Oracle Database", 2)
        print(f"\n\n\nSimilarity search results for vector store {i}: {results}\n\n\n")


vector_store_list = [
    vector_store_dot,
    vector_store_max,
    vector_store_euclidean,
    vector_store_dot_ivf,
    vector_store_max_ivf,
    vector_store_euclidean_ivf,
]
manage_texts(vector_store_list)

Demonstrating index creation with specific parameters for each distance strategy

def create_search_indices(connection):
    """
    Creates search indices for the vector stores, each with specific parameters tailored to their distance strategy.
    """
    # Index for DOT_PRODUCT strategy
    # Notice we are creating a HNSW index with default parameters
    # This will default to creating a HNSW index with 8 Parallel Workers and use the Default Accuracy used by Oracle AI Vector Search
    oraclevs.create_index(
        connection,
        vector_store_dot,
        params={"idx_name": "hnsw_idx1", "idx_type": "HNSW"},
    )

    # Index for COSINE strategy with specific parameters
    # Notice we are creating a HNSW index with parallel 16 and Target Accuracy Specification as 97 percent
    oraclevs.create_index(
        connection,
        vector_store_max,
        params={
            "idx_name": "hnsw_idx2",
            "idx_type": "HNSW",
            "accuracy": 97,
            "parallel": 16,
        },
    )

    # Index for EUCLIDEAN_DISTANCE strategy with specific parameters
    # Notice we are creating a HNSW index by specifying Power User Parameters which are neighbors = 64 and efConstruction = 100
    oraclevs.create_index(
        connection,
        vector_store_euclidean,
        params={
            "idx_name": "hnsw_idx3",
            "idx_type": "HNSW",
            "neighbors": 64,
            "efConstruction": 100,
        },
    )

    # Index for DOT_PRODUCT strategy with specific parameters
    # Notice we are creating an IVF index with default parameters
    # This will default to creating an IVF index with 8 Parallel Workers and use the Default Accuracy used by Oracle AI Vector Search
    oraclevs.create_index(
        connection,
        vector_store_dot_ivf,
        params={
            "idx_name": "ivf_idx1",
            "idx_type": "IVF",
        },
    )

    # Index for COSINE strategy with specific parameters
    # Notice we are creating an IVF index with parallel 32 and Target Accuracy Specification as 90 percent
    oraclevs.create_index(
        connection,
        vector_store_max_ivf,
        params={
            "idx_name": "ivf_idx2",
            "idx_type": "IVF",
            "accuracy": 90,
            "parallel": 32,
        },
    )

    # Index for EUCLIDEAN_DISTANCE strategy with specific parameters
    # Notice we are creating an IVF index by specifying Power User Parameters which is neighbor_part = 64
    oraclevs.create_index(
        connection,
        vector_store_euclidean_ivf,
        params={"idx_name": "ivf_idx3", "idx_type": "IVF", "neighbor_part": 64},
    )

    print("Index creation complete.")


create_search_indices(connection)
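Note that create_index will raise an error if an index with the same name already exists, for example when re-running this walkthrough. Assuming your langchain-community version exposes the drop_index_if_exists and drop_table_purge helpers in the oraclevs module, a cleanup sketch could look like this:

# Hypothetical cleanup before re-running: drop existing indices and tables.
# Adjust the names to match the indices and tables created above.
for idx_name in ["hnsw_idx1", "hnsw_idx2", "hnsw_idx3", "ivf_idx1", "ivf_idx2", "ivf_idx3"]:
    oraclevs.drop_index_if_exists(connection, idx_name)

for table_name in [
    "Documents_DOT",
    "Documents_COSINE",
    "Documents_EUCLIDEAN",
    "Documents_DOT_IVF",
    "Documents_COSINE_IVF",
    "Documents_EUCLIDEAN_IVF",
]:
    oraclevs.drop_table_purge(connection, table_name)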

Demonstrating advanced searches on all six vector stores, with and without attribute filtering. With filtering, we only select the document with id 101 and nothing else.

# Conduct advanced searches after creating the indices
def conduct_advanced_searches(vector_stores):
    query = "How are LOBS stored in Oracle Database"
    # Constructing a filter for direct comparison against document metadata
    # This filter aims to include documents whose metadata 'id' is exactly '101'
    filter_criteria = {"id": ["101"]}  # Direct comparison filter

    for i, vs in enumerate(vector_stores, start=1):
        print(f"\n--- Vector Store {i} Advanced Searches ---")
        # Similarity search without a filter
        print("\nSimilarity search results without filter:")
        print(vs.similarity_search(query, 2))

        # Similarity search with a filter
        print("\nSimilarity search results with filter:")
        print(vs.similarity_search(query, 2, filter=filter_criteria))

        # Similarity search with relevance score
        print("\nSimilarity search with relevance score:")
        print(vs.similarity_search_with_score(query, 2))

        # Similarity search with relevance score with filter
        print("\nSimilarity search with relevance score with filter:")
        print(vs.similarity_search_with_score(query, 2, filter=filter_criteria))

        # Max marginal relevance search
        print("\nMax marginal relevance search results:")
        print(vs.max_marginal_relevance_search(query, 2, fetch_k=20, lambda_mult=0.5))

        # Max marginal relevance search with filter
        print("\nMax marginal relevance search results with filter:")
        print(
            vs.max_marginal_relevance_search(
                query, 2, fetch_k=20, lambda_mult=0.5, filter=filter_criteria
            )
        )


conduct_advanced_searches(vector_store_list)

End to End Demo

Please refer to our complete demo guide, the Oracle AI Vector Search End-to-End Demo Guide, to build an end-to-end RAG pipeline with the help of Oracle AI Vector Search.
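As a minimal sketch of how one of the stores above can feed a retrieval step in such a pipeline, the standard LangChain retriever interface can be used directly; the example below assumes the vector_store_max store created earlier and simply fetches context documents for a question.

# Use one of the vector stores as a retriever for a RAG-style lookup.
retriever = vector_store_max.as_retriever(search_kwargs={"k": 2})
context_docs = retriever.invoke("How are LOBS stored in Oracle Database")
print(context_docs)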

