
# LangChain Decorators ✨

Disclaimer: `LangChain decorators` is not created by the LangChain team and is not supported by it.

`LangChain decorators` is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom langchain prompts and chains.

For feedback, issues, or contributions, please raise an issue here: ju-bezdek/langchain-decorators

# Main principles and benefits

- a more pythonic way of writing code
- write multiline prompts that won't break your code flow with indentation
- make use of IDE built-in support for **hinting**, **type checking**, and **popup documentation**, to quickly peek at a function and see its prompt, the parameters it consumes, etc.
- leverage all the power of the 🦜🔗 LangChain ecosystem
- support for **optional parameters**
- easily share parameters between prompts by binding them to one class

Here is a simple example of code written with **LangChain Decorators ✨**:


```python
from langchain_decorators import llm_prompt

@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
```

# Quick start

## Installation

```bash
pip install langchain_decorators
```

## Examples

A good way to start is to review the examples here.

# Defining other parameters

Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain, instead of running it manually.

A standard LLMChain takes many more init parameters than just the input variables and a prompt... here this implementation detail is hidden in the decorator. Here is how it works:

1. Using **global settings**:
```python
# define global settings for all prompts (if not set - chatGPT is the current default)
from langchain_openai import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
)
```
2. Using predefined **prompt types**:
```python
# You can change the default prompt types
from langchain_openai import ChatOpenAI
from langchain_decorators import llm_prompt, PromptTypes, PromptTypeSettings

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
    ...
```

3. Define the settings **directly in the decorator**:
```python
from langchain_openai import OpenAI
from langchain_decorators import llm_prompt

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    # ...
)
def creative_writer(book_title:str)->str:
    ...
```
API Reference: OpenAI

# Passing a memory and/or callbacks:

To pass any of these, just declare them in the function (or use kwargs to pass anything).


```python
from langchain.memory import SimpleMemory
from langchain_decorators import llm_prompt

@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", audience:str="developers", memory:SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
```
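
A minimal usage sketch of the memory parameter (our own addition, not from the upstream docs): `SimpleMemory` comes from `langchain.memory`, and we assume the decorator resolves template variables such as `{history_key}` from the keys the memory exposes.

```python
# hedged sketch: pass the memory in explicitly; the assumption is that the
# decorator loads {history_key} from the memory's stored variables
from langchain.memory import SimpleMemory

await write_me_short_post(
    topic="old movies",
    memory=SimpleMemory(memories={"history_key": "User: I love 50s sci-fi.\n"}),
)
```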

# Simplified streaming

If we want to leverage streaming:

- we need to define the prompt as an **async function**
- turn on streaming on the decorator, or define a PromptType with streaming on
- capture the stream using **StreamingContext**

This way we just mark which prompt should be streamed, without needing to tinker with which LLM to use or with passing around the creation and distribution of a streaming handler into a particular part of our chain... just turn streaming on/off on the prompt / prompt type...

Streaming will happen only if we call it inside a streaming context... there we can define a simple function to handle the stream.

```python
# this code example is complete and should run as it is

from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass the callback handlers around)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass


# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens = []
def capture_stream_func(new_token:str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")


print("\nWe've captured", len(tokens), "tokens🎉\n")
print("Here is the result:")
print(result)
```
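
The top-level `await` above assumes a notebook-style event loop. As a small sketch of our own (not part of the upstream example), in a plain script you would wrap the call in an `asyncio` entry point:

```python
# hedged sketch: run the streaming example from a plain script,
# where top-level await is not available
import asyncio

async def main():
    with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
        result = await write_me_short_post(topic="old movies")
    print(result)

asyncio.run(main())
```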

# Prompt declarations

By default, the prompt is the whole function docstring, unless you mark your prompt.

## Documenting your prompt

We can specify which part of our docs is the prompt definition, by using a code block with the `<prompt>` language tag.

````python
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```

    Now only the code block above will be used as the prompt, and the rest of the docstring will be used as a description for developers.
    (It also has the nice benefit that the IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus not showing newlines properly))
    """
    return
````


## Chat messages prompt

For chat models it is very useful to define the prompt as a set of message templates... here is how to do it:

````python
@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
    """
    ## System message
    - note the `:system` suffix inside the <prompt:_role_> tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:

    ... do not reply with anything else.. just with code - respecting your role.
    ```

    # human message
    (we are using the real roles that are enforced by the LLM - GPT supports system, assistant, user)
    ```<prompt:user>
    Hello, who are you
    ```
    a reply:

    ```<prompt:assistant>
    \``` python <<- escaping the inner code block with \ so that it stays part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```
    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt, and the rest of the docstring will be used as a description for developers.
    (It also has the nice benefit that the IDE (like VS Code) will display the prompt properly (not trying to parse it as markdown, and thus not showing newlines properly))
    """
    pass
````


The roles here are model-native roles (assistant, user, system for chatGPT).



# Optional sections

- you can define whole sections of your prompt that should be optional
- if any input in the section is missing, the whole section won't be rendered

The syntax for this is as follows:

```python
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}

    you can also place it in between the words
    this too will be rendered{? , but
    this block will be rendered only if {this_value} and {this_value}
    is not empty?} !
    """
```

# Output parsers

- the `llm_prompt` decorator natively tries to detect the best output parser based on the output type (if not set, it returns the raw string)
- list, dict and pydantic outputs are also supported natively (automatically)
```python
# this code example is complete and should run as it is

from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
    """ Write me {count} good name suggestions for company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)
```

## More complex structures

For dict / pydantic you need to specify the formatting instructions... which can be tedious; that's why you can let the output parser generate the instructions for you based on the model (pydantic).

```python
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field


class TheOutputStructureWeExpect(BaseModel):
    name:str = Field(description="The name of the company")
    headline:str = Field(description="The description of the company (for landing page)")
    employees:list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ", company.name)
print("company headline: ", company.headline)
print("company employees: ", company.employees)
```

# Binding the prompt to an object

````python
from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name:str
    assistant_role:str
    field:str = "anything"  # default added so the example instantiation below validates

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg:str=None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """

    @llm_prompt
    def introduce_your_self(self)->str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """


personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")

print(personality.introduce_your_self(personality))
````



# More examples:

- these and a few more examples are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
- including the [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators
