
How to implement an integration package

This guide walks through the process of implementing a LangChain integration package.

Integration packages are just Python packages that can be installed with pip install <your-package>, which contain classes that are compatible with LangChain's core interfaces.

We will cover:

  1. (Optional) How to bootstrap a new integration package
  2. How to implement components, such as chat models and vector stores, that adhere to the LangChain interface

(Optional) Bootstrapping a new integration package

In this section, we will outline 2 options for bootstrapping a new integration package, and you're welcome to use other tools if you prefer!

  1. langchain-cli: This is a command-line tool that can be used to bootstrap a new integration package with a template for LangChain components and Poetry for dependency management.
  2. Poetry: This is a Python dependency management tool that can be used to bootstrap a new Python package with dependencies. You can then add LangChain components to this package.
Option 1: langchain-cli (recommended)

In this guide, we will use the langchain-cli to create a new integration package from a template, which can be edited to implement your LangChain components.

Prerequisites

Bootstrapping a new Python package with langchain-cli

First, install langchain-cli and poetry:

pip install langchain-cli poetry

Next, come up with a name for your package. For this guide, we'll use langchain-parrot-link. You can confirm that the name is available on PyPi by searching for it on the PyPi website.

Next, create your new Python package with langchain-cli, and navigate into the new directory with cd:

langchain-cli integration new

> The name of the integration to create (e.g. `my-integration`): parrot-link
> Name of integration in PascalCase [ParrotLink]:

cd parrot-link

Next, let's add any dependencies we need:

poetry add my-integration-sdk

We can also add some typing or test dependencies in separate poetry dependency groups:

poetry add --group typing my-typing-dep
poetry add --group test my-test-dep

Finally, have poetry set up a virtual environment with your dependencies, as well as your integration package:

poetry install --with lint,typing,test,test_integration

You now have a new Python package with templates for LangChain components! This template comes with files for each integration type, and you're welcome to duplicate or delete any of them as appropriate (including the associated test files).

To create any individual file from the template, you can run e.g.:

langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models_2.py
Option 2: Poetry (manual)

In this guide, we will use Poetry for dependency management and packaging, and you're welcome to use any other tools you prefer.

Prerequisites

Bootstrapping a new Python package with Poetry

First, install Poetry:

pip install poetry

Next, come up with a name for your package. For this guide, we'll use langchain-parrot-link. You can confirm that the name is available on PyPi by searching for it on the PyPi website.

Next, create your new Python package with Poetry, and navigate into the new directory with cd:

poetry new langchain-parrot-link
cd langchain-parrot-link

Add main dependencies using Poetry, which will add them to your pyproject.toml file:

poetry add langchain-core

We will also add some test dependencies in a separate poetry dependency group. If you are not using Poetry, we recommend adding these in a way that won't package them with your published package, or just installing them separately when you run tests.

langchain-tests will provide the standard tests we will use later. We recommend pinning these to the latest version:

Note: Replace <latest_version> with the latest version of langchain-tests below.

poetry add --group test pytest pytest-socket pytest-asyncio langchain-tests==<latest_version>
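
For reference, Poetry records this in pyproject.toml under a dedicated group table. A rough sketch of the resulting test group section (the version constraints shown are placeholders for whatever Poetry resolves; <latest_version> is the latest langchain-tests release):

[tool.poetry.group.test.dependencies]
pytest = "*"
pytest-socket = "*"
pytest-asyncio = "*"
langchain-tests = "<latest_version>"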

Finally, have poetry set up a virtual environment with your dependencies, as well as your integration package:

poetry install --with test

You're now ready to start writing your integration package!

Writing your integration

Let's say you're building a simple integration package that provides a ChatParrotLink chat model integration for LangChain. Here's a simple example of what your project structure might look like:

langchain-parrot-link/
├── langchain_parrot_link/
│   ├── __init__.py
│   └── chat_models.py
├── tests/
│   ├── __init__.py
│   └── test_chat_models.py
├── pyproject.toml
└── README.md

All of these files should already exist from step 1, except for chat_models.py and test_chat_models.py! We will implement test_chat_models.py later, following the standard tests guide.
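
As a preview of where we're headed, here is a minimal sketch of what tests/test_chat_models.py can look like with the standard unit tests from langchain-tests (the exact base classes and required properties are covered in the standard tests guide; the model params below are illustrative):

"""Standard unit tests for ChatParrotLink (sketch)."""

from typing import Type

from langchain_tests.unit_tests import ChatModelUnitTests

from langchain_parrot_link.chat_models import ChatParrotLink


class TestChatParrotLinkUnit(ChatModelUnitTests):
    @property
    def chat_model_class(self) -> Type[ChatParrotLink]:
        # The class under test.
        return ChatParrotLink

    @property
    def chat_model_params(self) -> dict:
        # Init kwargs used to construct the model in each standard test.
        return {"model": "bird-brain-001", "parrot_buffer_length": 50}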

For chat_models.py, simply paste in the contents of the chat model implementation below.
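
You'll also want langchain_parrot_link/__init__.py to re-export the class, so that "from langchain_parrot_link import ChatParrotLink" works at the package root. A minimal sketch (the langchain-cli template generates something similar):

"""langchain_parrot_link package root (sketch)."""

from langchain_parrot_link.chat_models import ChatParrotLink

__all__ = ["ChatParrotLink"]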

Push your package to a public GitHub repository

This is required if you want to publish your integration in the LangChain documentation.

  1. Create a new repository on GitHub.
  2. Push your code to the repository (one possible command sequence is sketched below).
  3. Confirm that your repository is viewable by the public (e.g. in a private browsing window where you're not logged in to GitHub).
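
For example, assuming you've already created an empty GitHub repository named langchain-parrot-link under your account (replace <your-username> with your GitHub username), one possible sequence from the package directory is:

git init
git add .
git commit -m "Initial scaffold for langchain-parrot-link"
git branch -M main
git remote add origin https://github.com/<your-username>/langchain-parrot-link.git
git push -u origin main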

Implementing LangChain components

LangChain components are subclasses of base classes in langchain-core. Examples include chat models, vector stores, tools, embedding models, and retrievers.

Your integration package will typically implement a subclass of at least one of these components. Expand the tabs below to see details on each.
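
For orientation, each of these component types corresponds to a base class in langchain-core that your subclass inherits from. A quick sketch of the relevant imports:

# Base classes in langchain-core that integration components subclass.
from langchain_core.embeddings import Embeddings
from langchain_core.language_models import BaseChatModel
from langchain_core.retrievers import BaseRetriever
from langchain_core.tools import BaseTool
from langchain_core.vectorstores import VectorStore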

For detailed information about a starter chat model implementation, see the custom chat model guide.

You can start from the following template or langchain-cli command:

langchain-cli integration new \
--name parrot-link \
--name-class ParrotLink \
--src integration_template/chat_models.py \
--dst langchain_parrot_link/chat_models.py
Chat model code example
langchain_parrot_link/chat_models.py
"""ParrotLink chat models."""

from typing import Any, Dict, Iterator, List, Optional

from langchain_core.callbacks import (
CallbackManagerForLLMRun,
)
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
BaseMessage,
)
from langchain_core.messages.ai import UsageMetadata
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from pydantic import Field


class ChatParrotLink(BaseChatModel):
# TODO: Replace all TODOs in docstring. See example docstring:
# https://github.com/langchain-ai/langchain/blob/7ff05357bac6eaedf5058a2af88f23a1817d40fe/libs/partners/openai/langchain_openai/chat_models/base.py#L1120
"""ParrotLink chat model integration.

The default implementation echoes the first `parrot_buffer_length` characters of the input.

# TODO: Replace with relevant packages, env vars.
Setup:
Install ``langchain-parrot-link`` and set environment variable ``PARROT_LINK_API_KEY``.

.. code-block:: bash

pip install -U langchain-parrot-link
export PARROT_LINK_API_KEY="your-api-key"

# TODO: Populate with relevant params.
Key init args — completion params:
model: str
Name of ParrotLink model to use.
temperature: float
Sampling temperature.
max_tokens: Optional[int]
Max number of tokens to generate.

# TODO: Populate with relevant params.
Key init args — client params:
timeout: Optional[float]
Timeout for requests.
max_retries: int
Max number of retries.
api_key: Optional[str]
ParrotLink API key. If not passed in will be read from env var PARROT_LINK_API_KEY.

See full list of supported init args and their descriptions in the params section.

# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python

from langchain_parrot_link import ChatParrotLink

llm = ChatParrotLink(
model="...",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
# api_key="...",
# other params...
)

Invoke:
.. code-block:: python

messages = [
("system", "You are a helpful translator. Translate the user sentence to French."),
("human", "I love programming."),
]
llm.invoke(messages)

.. code-block:: python

# TODO: Example output.

# TODO: Delete if token-level streaming isn't supported.
Stream:
.. code-block:: python

for chunk in llm.stream(messages):
print(chunk.text(), end="")

.. code-block:: python

# TODO: Example output.

.. code-block:: python

stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full

.. code-block:: python

# TODO: Example output.

# TODO: Delete if native async isn't supported.
Async:
.. code-block:: python

await llm.ainvoke(messages)

# stream:
# async for chunk in (await llm.astream(messages))

# batch:
# await llm.abatch([messages])

.. code-block:: python

# TODO: Example output.

# TODO: Delete if .bind_tools() isn't supported.
Tool calling:
.. code-block:: python

from pydantic import BaseModel, Field

class GetWeather(BaseModel):
'''Get the current weather in a given location'''

location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
'''Get the current population in a given location'''

location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
ai_msg.tool_calls

.. code-block:: python

# TODO: Example output.

See ``ChatParrotLink.bind_tools()`` method for more.

# TODO: Delete if .with_structured_output() isn't supported.
Structured output:
.. code-block:: python

from typing import Optional

from pydantic import BaseModel, Field

class Joke(BaseModel):
'''Joke to tell user.'''

setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")

.. code-block:: python

# TODO: Example output.

See ``ChatParrotLink.with_structured_output()`` for more.

# TODO: Delete if JSON mode response format isn't supported.
JSON mode:
.. code-block:: python

# TODO: Replace with appropriate bind arg.
json_llm = llm.bind(response_format={"type": "json_object"})
ai_msg = json_llm.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
ai_msg.content

.. code-block:: python

# TODO: Example output.

# TODO: Delete if image inputs aren't supported.
Image input:
.. code-block:: python

import base64
import httpx
from langchain_core.messages import HumanMessage

image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")
# TODO: Replace with appropriate message content format.
message = HumanMessage(
content=[
{"type": "text", "text": "describe the weather in this image"},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
},
],
)
ai_msg = llm.invoke([message])
ai_msg.content

.. code-block:: python

# TODO: Example output.

# TODO: Delete if audio inputs aren't supported.
Audio input:
.. code-block:: python

# TODO: Example input

.. code-block:: python

# TODO: Example output

# TODO: Delete if video inputs aren't supported.
Video input:
.. code-block:: python

# TODO: Example input

.. code-block:: python

# TODO: Example output

# TODO: Delete if token usage metadata isn't supported.
Token usage:
.. code-block:: python

ai_msg = llm.invoke(messages)
ai_msg.usage_metadata

.. code-block:: python

{'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33}

# TODO: Delete if logprobs aren't supported.
Logprobs:
.. code-block:: python

# TODO: Replace with appropriate bind arg.
logprobs_llm = llm.bind(logprobs=True)
ai_msg = logprobs_llm.invoke(messages)
ai_msg.response_metadata["logprobs"]

.. code-block:: python

# TODO: Example output.

Response metadata
.. code-block:: python

ai_msg = llm.invoke(messages)
ai_msg.response_metadata

.. code-block:: python

# TODO: Example output.

""" # noqa: E501

model_name: str = Field(alias="model")
"""The name of the model"""
parrot_buffer_length: int
"""The number of characters from the last message of the prompt to be echoed."""
temperature: Optional[float] = None
max_tokens: Optional[int] = None
timeout: Optional[int] = None
stop: Optional[List[str]] = None
max_retries: int = 2

@property
def _llm_type(self) -> str:
"""Return type of chat model."""
return "chat-__package_name_short__"

@property
def _identifying_params(self) -> Dict[str, Any]:
"""Return a dictionary of identifying parameters.

This information is used by the LangChain callback system, which
is used for tracing purposes make it possible to monitor LLMs.
"""
return {
# The model name allows users to specify custom token counting
# rules in LLM monitoring applications (e.g., in LangSmith users
# can provide per token pricing for their model and monitor
# costs for the given LLM.)
"model_name": self.model_name,
}

def _generate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> ChatResult:
"""Override the _generate method to implement the chat model logic.

This can be a call to an API, a call to a local model, or any other
implementation that generates a response to the input prompt.

Args:
messages: the prompt composed of a list of messages.
stop: a list of strings on which the model should stop generating.
If generation stops due to a stop token, the stop token itself
SHOULD BE INCLUDED as part of the output. This is not enforced
across models right now, but it's a good practice to follow since
it makes it much easier to parse the output of the model
downstream and understand why generation stopped.
run_manager: A run manager with callbacks for the LLM.
"""
# Replace this with actual logic to generate a response from a list
# of messages.
last_message = messages[-1]
tokens = last_message.content[: self.parrot_buffer_length]
ct_input_tokens = sum(len(message.content) for message in messages)
ct_output_tokens = len(tokens)
message = AIMessage(
content=tokens,
additional_kwargs={}, # Used to add additional payload to the message
response_metadata={ # Use for response metadata
"time_in_seconds": 3,
},
usage_metadata={
"input_tokens": ct_input_tokens,
"output_tokens": ct_output_tokens,
"total_tokens": ct_input_tokens + ct_output_tokens,
},
)
##

generation = ChatGeneration(message=message)
return ChatResult(generations=[generation])

def _stream(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
"""Stream the output of the model.

This method should be implemented if the model can generate output
in a streaming fashion. If the model does not support streaming,
do not implement it. In that case streaming requests will be automatically
handled by the _generate method.

Args:
messages: the prompt composed of a list of messages.
stop: a list of strings on which the model should stop generating.
If generation stops due to a stop token, the stop token itself
SHOULD BE INCLUDED as part of the output. This is not enforced
across models right now, but it's a good practice to follow since
it makes it much easier to parse the output of the model
downstream and understand why generation stopped.
run_manager: A run manager with callbacks for the LLM.
"""
last_message = messages[-1]
tokens = str(last_message.content[: self.parrot_buffer_length])
ct_input_tokens = sum(len(message.content) for message in messages)

for token in tokens:
usage_metadata = UsageMetadata(
{
"input_tokens": ct_input_tokens,
"output_tokens": 1,
"total_tokens": ct_input_tokens + 1,
}
)
ct_input_tokens = 0
chunk = ChatGenerationChunk(
message=AIMessageChunk(content=token, usage_metadata=usage_metadata)
)

if run_manager:
# This is optional in newer versions of LangChain
# The on_llm_new_token will be called automatically
run_manager.on_llm_new_token(token, chunk=chunk)

yield chunk

# Let's add some other information (e.g., response metadata)
chunk = ChatGenerationChunk(
message=AIMessageChunk(content="", response_metadata={"time_in_sec": 3})
)
if run_manager:
# This is optional in newer versions of LangChain
# The on_llm_new_token will be called automatically
run_manager.on_llm_new_token(token, chunk=chunk)
yield chunk

# TODO: Implement if ChatParrotLink supports async streaming. Otherwise delete.
# async def _astream(
# self,
# messages: List[BaseMessage],
# stop: Optional[List[str]] = None,
# run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
# **kwargs: Any,
# ) -> AsyncIterator[ChatGenerationChunk]:

# TODO: Implement if ChatParrotLink supports async generation. Otherwise delete.
# async def _agenerate(
# self,
# messages: List[BaseMessage],
# stop: Optional[List[str]] = None,
# run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
# **kwargs: Any,
# ) -> ChatResult:
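
Because the template's default _generate and _stream simply echo the input, you can sanity-check the package end to end before wiring in a real SDK. A quick usage sketch (assuming __init__.py re-exports the class as shown earlier; the model name is illustrative):

from langchain_parrot_link import ChatParrotLink

llm = ChatParrotLink(model="bird-brain-001", parrot_buffer_length=10)

# invoke() echoes the first parrot_buffer_length characters of the last message.
result = llm.invoke([("human", "Hello, Parrot Link!")])
print(result.content)  # -> "Hello, Par"
print(result.usage_metadata)

# stream() yields one chunk per character.
for chunk in llm.stream([("human", "Squawk!")]):
    print(chunk.content, end="|")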

Next steps

Now that you've implemented your package, you can move on to adding tests for your integration and running them successfully.

