Fleet AI Context

Fleet AI Context is a dataset of high-quality embeddings of the top 1200 most popular & permissive Python libraries and their documentation.
The Fleet AI team is on a mission to embed the world's most important data. They've started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They've been kind enough to share their embeddings of the LangChain docs and API reference.

Let's take a look at how we can use these embeddings to power a docs retrieval system and ultimately a simple code-generating chain!
%pip install --upgrade --quiet langchain fleet-context langchain-openai pandas faiss-cpu  # use faiss-gpu for CUDA-supported GPUs
from operator import itemgetter
from typing import Any, Optional, Type
import pandas as pd
from langchain.retrievers import MultiVectorRetriever
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.stores import BaseStore
from langchain_core.vectorstores import VectorStore
from langchain_openai import OpenAIEmbeddings
def load_fleet_retriever(
    df: pd.DataFrame,
    *,
    vectorstore_cls: Type[VectorStore] = FAISS,
    docstore: Optional[BaseStore] = None,
    **kwargs: Any,
):
    vectorstore = _populate_vectorstore(df, vectorstore_cls)
    if docstore is None:
        return vectorstore.as_retriever(**kwargs)
    else:
        _populate_docstore(df, docstore)
        return MultiVectorRetriever(
            vectorstore=vectorstore, docstore=docstore, id_key="parent", **kwargs
        )
def _populate_vectorstore(
    df: pd.DataFrame,
    vectorstore_cls: Type[VectorStore],
) -> VectorStore:
    if not hasattr(vectorstore_cls, "from_embeddings"):
        raise ValueError(
            f"Incompatible vector store class {vectorstore_cls}. "
            "Must implement `from_embeddings` class method."
        )
    texts_embeddings = []
    metadatas = []
    for _, row in df.iterrows():
        texts_embeddings.append((row.metadata["text"], row["dense_embeddings"]))
        metadatas.append(row.metadata)
    return vectorstore_cls.from_embeddings(
        texts_embeddings,
        OpenAIEmbeddings(model="text-embedding-ada-002"),
        metadatas=metadatas,
    )
def _populate_docstore(df: pd.DataFrame, docstore: BaseStore) -> None:
    parent_docs = []
    df = df.copy()
    df["parent"] = df.metadata.apply(itemgetter("parent"))
    for parent_id, group in df.groupby("parent"):
        # Reassemble each parent page from its chunks, in section order.
        sorted_group = group.iloc[
            group.metadata.apply(itemgetter("section_index")).argsort()
        ]
        text = "".join(sorted_group.metadata.apply(itemgetter("text")))
        metadata = {
            k: sorted_group.iloc[0].metadata[k] for k in ("title", "type", "url")
        }
        text = metadata["title"] + "\n" + text
        metadata["id"] = parent_id
        parent_docs.append(Document(page_content=text, metadata=metadata))
    docstore.mset(((d.metadata["id"], d) for d in parent_docs))
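Any extra keyword arguments passed to load_fleet_retriever are forwarded to the underlying retriever (via as_retriever or MultiVectorRetriever above), so you can tune retrieval without touching the helpers. A minimal sketch, assuming df is a Fleet embeddings DataFrame like the one downloaded below:

# Sketch: forward search_kwargs so each query returns 10 chunks instead of
# the vector store's default. `df` is assumed to be a Fleet embeddings
# DataFrame (see download_embeddings below).
retriever_top10 = load_fleet_retriever(df, search_kwargs={"k": 10})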
Retriever chunks

As part of their embedding process, the Fleet AI team first chunked long documents before embedding them. This means the vectors correspond to sections of pages in the LangChain docs, not entire pages. By default, when we spin up a retriever from these embeddings, we'll be retrieving these embedded chunks.
We will be using Fleet Context's download_embeddings() to grab LangChain's documentation embeddings. You can view all supported libraries' documentation at https://fleet.so/context.
from context import download_embeddings
df = download_embeddings("langchain")
vecstore_retriever = load_fleet_retriever(df)
vecstore_retriever.invoke("How does the multi vector retriever work")
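Each result is a chunk-level Document whose metadata carries the fields the helpers above rely on, such as parent and section_index. A quick sketch for inspecting what comes back:

# Inspect the retrieved chunks: metadata includes fields like "title",
# "url", "parent" and "section_index" (used above to rebuild parent pages).
docs = vecstore_retriever.invoke("How does the multi vector retriever work")
for doc in docs:
    print(doc.metadata["title"], "- section", doc.metadata["section_index"])
    print(doc.page_content[:200], "\n")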
Other packages

You can download and use other embeddings from this Dropbox link.
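Assuming those files follow the same parquet format as the libraries_langchain_release.parquet file used below, they can be loaded the same way. A minimal sketch, with a placeholder for a file URL you'd copy from the Dropbox folder:

# Sketch: load embeddings for another library from the Dropbox folder.
# Replace the placeholder with an actual parquet file URL from the folder.
other_df = pd.read_parquet("<parquet-url-from-dropbox-folder>")
other_retriever = load_fleet_retriever(other_df)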
Retrieving parent documents

The embeddings provided by Fleet AI contain metadata that indicates which embedding chunks correspond to the same original document page. If we'd like, we can use this information to retrieve whole parent documents, and not just embedded chunks. Under the hood, we'll use a MultiVectorRetriever and a BaseStore object to search for relevant chunks and then map them to their parent document.
from langchain.storage import InMemoryStore

parent_retriever = load_fleet_retriever(
    # load_fleet_retriever expects a DataFrame, so read the parquet file of
    # embeddings from the URL first.
    pd.read_parquet(
        "https://www.dropbox.com/scl/fi/4rescpkrg9970s3huz47l/libraries_langchain_release.parquet?rlkey=283knw4wamezfwiidgpgptkep&dl=1"
    ),
    docstore=InMemoryStore(),
)
parent_retriever.invoke("How does the multi vector retriever work")
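The documents returned here are whole pages, reassembled from their chunks by _populate_docstore. A quick sketch to compare against the chunk-level retriever from earlier:

# Compare chunk-level vs. parent-level retrieval for the same query:
# parent documents are full pages, so they should be noticeably longer.
chunk_docs = vecstore_retriever.invoke("How does the multi vector retriever work")
parent_docs = parent_retriever.invoke("How does the multi vector retriever work")
print("chunk lengths: ", [len(d.page_content) for d in chunk_docs])
print("parent lengths:", [len(d.page_content) for d in parent_docs])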
Putting it in a chain

Let's try using our retrieval system in a simple chain!
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a great software engineer who is very familiar \
with Python. Given a user question or request about a new Python library called LangChain and \
parts of the LangChain documentation, answer the question or generate the requested code. \
Your answers must be accurate, should include code whenever possible, and shouldn't assume anything \
about LangChain which is not explicitly stated in the LangChain documentation. If the required \
information is not available, just say so.

LangChain Documentation
------------------

{context}""",
        ),
        ("human", "{question}"),
    ]
)
model = ChatOpenAI(model="gpt-3.5-turbo-16k")
chain = (
    {
        "question": RunnablePassthrough(),
        "context": parent_retriever
        | (lambda docs: "\n\n".join(d.page_content for d in docs)),
    }
    | prompt
    | model
    | StrOutputParser()
)
# Stream the answer token by token; chain.invoke would return the full
# string at once (iterating over it would yield single characters).
for chunk in chain.stream(
    "How do I create a FAISS vector store retriever that returns 10 documents per search query"
):
    print(chunk, end="", flush=True)
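If you don't need token-by-token output, invoke returns the complete answer as a single string:

# Non-streaming alternative: get the whole answer at once.
answer = chain.invoke(
    "How do I create a FAISS vector store retriever that returns 10 documents per search query"
)
print(answer)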