Llama.cpp
llama-cpp-python is a Python binding for llama.cpp.
It supports inference for many LLMs, which can be accessed on Hugging Face.
This notebook goes over how to run llama-cpp-python within LangChain.
NOTE: new versions of llama-cpp-python use GGUF model files (see here).
This is a breaking change.
To convert existing GGML models to GGUF, you can run the following in llama.cpp:
python ./convert-llama-ggmlv3-to-gguf.py --eps 1e-5 --input models/openorca-platypus2-13b.ggmlv3.q4_0.bin --output models/openorca-platypus2-13b.gguf.q4_0.bin
Installation
There are different options for installing the llama-cpp package:
- CPU usage
- CPU + GPU (using one of many BLAS backends)
- Metal GPU (MacOS with Apple Silicon chip)
CPU only installation
%pip install --upgrade --quiet llama-cpp-python
Installation with OpenBLAS / cuBLAS / CLBlast
llama.cpp supports multiple BLAS backends for faster processing. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package for the desired BLAS backend (source).
Example installation with cuBLAS backend:
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
IMPORTANT: If you have already installed a CPU-only version of the package, you need to reinstall it from scratch. Consider the following command:
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
Installation with Metal
llama.cpp supports Apple silicon as a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package with Metal support (source).
Example installation with Metal support:
!CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
IMPORTANT: If you have already installed a CPU-only version of the package, you need to reinstall it from scratch. Consider the following command:
!CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
Installation with Windows
It is stable to install the llama-cpp-python library by compiling from source. You can follow most of the instructions in the repository itself, but there are some Windows-specific instructions which might be useful.
Requirements to install llama-cpp-python:
- git
- python
- cmake
- Visual Studio Community (make sure you install it with the following settings)
  - Desktop development with C++
  - Python development
  - Linux embedded development with C++
- Clone the git repository recursively to get the llama.cpp submodule as well
git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.git
- Open up a command prompt and set the following environment variables:
set FORCE_CMAKE=1
set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF
If you have an NVIDIA GPU, make sure -DLLAMA_CUBLAS is set to ON (i.e. CMAKE_ARGS=-DLLAMA_CUBLAS=ON).
Compiling and installing
Now you can cd into the llama-cpp-python directory and install the package:
python -m pip install -e .
IMPORTANT: If you have already installed a CPU-only version of the package, you need to reinstall it from scratch. Consider the following command:
!python -m pip install -e . --force-reinstall --no-cache-dir
Usage
Make sure you are following all instructions to install all necessary model files.
You don't need an API_TOKEN as you will run the LLM locally.
It is worth understanding which models are suitable to be used on the desired machine.
TheBloke's Hugging Face models have a Provided files section that exposes the RAM required to run models of different quantisation sizes and methods (eg: Llama2-7B-Chat-GGUF).
This github issue is also relevant for finding the right model for your machine.
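Once you have picked a quantisation that fits your hardware, one convenient way to fetch the GGUF file is the huggingface_hub client. This is only an illustrative sketch; the repo_id and filename below are example values, so substitute the model you actually chose.

```python
# Illustrative sketch: download a GGUF file from the Hugging Face Hub.
# The repo_id and filename are example values, not required by LlamaCpp.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
)
print(model_path)  # local path that can be passed to LlamaCpp(model_path=...)
```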
from langchain_community.llms import LlamaCpp
from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate
Consider using a template that suits your model! Check the model's page on Hugging Face etc. to get the correct prompting template.
template = """Question: {question}
Answer: Let's work this out in a step by step way to be sure we have the right answer."""
prompt = PromptTemplate.from_template(template)
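For instance, chat-tuned Llama 2 models expect an [INST]/<<SYS>> wrapper. The template below is only an illustrative sketch of a model-specific format, not something required by every model; always confirm the exact format on your model's card.

```python
# Hypothetical example of a model-specific template (Llama 2 chat format).
llama2_chat_template = """<s>[INST] <<SYS>>
You are a helpful assistant.
<</SYS>>
{question} [/INST]"""

llama2_chat_prompt = PromptTemplate.from_template(llama2_chat_template)
```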
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
CPU
Example using a LLaMA 2 7B model
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin",
temperature=0.75,
max_tokens=2000,
top_p=1,
callback_manager=callback_manager,
verbose=True, # Verbose is required to pass to the callback manager
)
question = """
Question: A rap battle between Stephen Colbert and John Oliver
"""
llm.invoke(question)
Stephen Colbert:
Yo, John, I heard you've been talkin' smack about me on your show.
Let me tell you somethin', pal, I'm the king of late-night TV
My satire is sharp as a razor, it cuts deeper than a knife
While you're just a british bloke tryin' to be funny with your accent and your wit.
John Oliver:
Oh Stephen, don't be ridiculous, you may have the ratings but I got the real talk.
My show is the one that people actually watch and listen to, not just for the laughs but for the facts.
While you're busy talkin' trash, I'm out here bringing the truth to light.
Stephen Colbert:
Truth? Ha! You think your show is about truth? Please, it's all just a joke to you.
You're just a fancy-pants british guy tryin' to be funny with your news and your jokes.
While I'm the one who's really makin' a difference, with my sat
``````output
llama_print_timings: load time = 358.60 ms
llama_print_timings: sample time = 172.55 ms / 256 runs ( 0.67 ms per token, 1483.59 tokens per second)
llama_print_timings: prompt eval time = 613.36 ms / 16 tokens ( 38.33 ms per token, 26.09 tokens per second)
llama_print_timings: eval time = 10151.17 ms / 255 runs ( 39.81 ms per token, 25.12 tokens per second)
llama_print_timings: total time = 11332.41 ms
"\nStephen Colbert:\nYo, John, I heard you've been talkin' smack about me on your show.\nLet me tell you somethin', pal, I'm the king of late-night TV\nMy satire is sharp as a razor, it cuts deeper than a knife\nWhile you're just a british bloke tryin' to be funny with your accent and your wit.\nJohn Oliver:\nOh Stephen, don't be ridiculous, you may have the ratings but I got the real talk.\nMy show is the one that people actually watch and listen to, not just for the laughs but for the facts.\nWhile you're busy talkin' trash, I'm out here bringing the truth to light.\nStephen Colbert:\nTruth? Ha! You think your show is about truth? Please, it's all just a joke to you.\nYou're just a fancy-pants british guy tryin' to be funny with your news and your jokes.\nWhile I'm the one who's really makin' a difference, with my sat"
Example using a LLaMA v1 model
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="./ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True
)
llm_chain = prompt | llm
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.invoke({"question": question})
1. First, find out when Justin Bieber was born.
2. We know that Justin Bieber was born on March 1, 1994.
3. Next, we need to look up when the Super Bowl was played in that year.
4. The Super Bowl was played on January 28, 1995.
5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.
``````output
llama_print_timings: load time = 434.15 ms
llama_print_timings: sample time = 41.81 ms / 121 runs ( 0.35 ms per token)
llama_print_timings: prompt eval time = 2523.78 ms / 48 tokens ( 52.58 ms per token)
llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token)
llama_print_timings: total time = 28945.95 ms
'\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in that year.\n4. The Super Bowl was played on January 28, 1995.\n5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.'
GPU
If the installation with a BLAS backend was correct, you will see a BLAS = 1 indicator in the model properties.
Two of the most important parameters for use with GPU are:
- n_gpu_layers - determines how many layers of the model are offloaded to your GPU.
- n_batch - how many tokens are processed in parallel.
Setting these parameters correctly will dramatically improve the evaluation speed (see the wrapper code for more details).
n_gpu_layers = -1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True, # Verbose is required to pass to the callback manager
)
llm_chain = prompt | llm
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.invoke({"question": question})
1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994.
2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994.
3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup.
So, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl.
``````output
llama_print_timings: load time = 427.63 ms
llama_print_timings: sample time = 115.85 ms / 164 runs ( 0.71 ms per token, 1415.67 tokens per second)
llama_print_timings: prompt eval time = 427.53 ms / 45 tokens ( 9.50 ms per token, 105.26 tokens per second)
llama_print_timings: eval time = 4526.53 ms / 163 runs ( 27.77 ms per token, 36.01 tokens per second)
llama_print_timings: total time = 5293.77 ms
"\n\n1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994.\n\n2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994.\n\n3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup.\n\nSo, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl."
Metal
If the installation with Metal was correct, you will see a NEON = 1 indicator in the model properties.
The most important GPU parameters are:
- n_gpu_layers - determines how many layers of the model are offloaded to your Metal GPU.
- n_batch - how many tokens are processed in parallel; the default is 8, so set it to a bigger number.
- f16_kv - for some reason, Metal only supports True; otherwise you will get an error such as Asserting on type 0 GGML_ASSERT: .../ggml-metal.m:706: false && "not implemented"
Setting these parameters correctly will dramatically improve the evaluation speed (see the wrapper code for more details).
n_gpu_layers = 1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls
callback_manager=callback_manager,
verbose=True, # Verbose is required to pass to the callback manager
)
The console log will show the following log to indicate Metal was enabled properly.
ggml_metal_init: allocating
ggml_metal_init: using MPS
...
You can also check Activity Monitor by watching the GPU usage of the process; the CPU usage will drop dramatically after turning on n_gpu_layers=1.
For the first call to the LLM, the performance may be slow due to the model compilation on the Metal GPU.
Grammars
We can use grammars to constrain model outputs and sample tokens based on the rules defined in them.
To demonstrate this concept, we've included sample grammar files, which will be used in the examples below.
Creating gbnf grammar files can be time-consuming, but if you have a use-case where output schemas are important, there are two tools that can help:
- An online grammar generator app that converts TypeScript interface definitions to a gbnf file.
- A Python script for converting a JSON schema to a gbnf file. You can, for example, create a pydantic object, generate its JSON schema using the .schema_json() method, and then use this script to convert it to a gbnf file (a sketch of that first step follows this list).
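A minimal sketch of the pydantic half of that workflow; the Person model here is purely illustrative, and the JSON-schema-to-gbnf conversion script itself is the one linked above, not reproduced here.

```python
# Illustrative only: define a pydantic model and dump its JSON schema to a file,
# which can then be fed to the json-schema-to-gbnf conversion script.
from typing import List

from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int
    interests: List[str]


with open("person_schema.json", "w") as f:
    f.write(Person.schema_json())
```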
In the first example, supply the path to the specified json.gbnf file in order to produce JSON:
n_gpu_layers = 1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls
callback_manager=callback_manager,
verbose=True, # Verbose is required to pass to the callback manager
grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/json.gbnf",
)
%%capture captured --no-stdout
result = llm.invoke("Describe a person in JSON format:")
{
"name": "John Doe",
"age": 34,
"": {
"title": "Software Developer",
"company": "Google"
},
"interests": [
"Sports",
"Music",
"Cooking"
],
"address": {
"street_number": 123,
"street_name": "Oak Street",
"city": "Mountain View",
"state": "California",
"postal_code": 94040
}}
``````output
llama_print_timings: load time = 357.51 ms
llama_print_timings: sample time = 1213.30 ms / 144 runs ( 8.43 ms per token, 118.68 tokens per second)
llama_print_timings: prompt eval time = 356.78 ms / 9 tokens ( 39.64 ms per token, 25.23 tokens per second)
llama_print_timings: eval time = 3947.16 ms / 143 runs ( 27.60 ms per token, 36.23 tokens per second)
llama_print_timings: total time = 5846.21 ms
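Because the grammar restricts sampling to valid JSON, the captured result can usually be parsed directly with the standard library (assuming the model finished the object within max_tokens):

```python
import json

person = json.loads(result)  # `result` holds the grammar-constrained completion
print(person["name"])
```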
We can also supply list.gbnf to return a list:
n_gpu_layers = 1
n_batch = 512
llm = LlamaCpp(
model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls
callback_manager=callback_manager,
verbose=True,
grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/list.gbnf",
)
%%capture captured --no-stdout
result = llm.invoke("List of top-3 my favourite books:")
["The Catcher in the Rye", "Wuthering Heights", "Anna Karenina"]
``````output
llama_print_timings: load time = 322.34 ms
llama_print_timings: sample time = 232.60 ms / 26 runs ( 8.95 ms per token, 111.78 tokens per second)
llama_print_timings: prompt eval time = 321.90 ms / 11 tokens ( 29.26 ms per token, 34.17 tokens per second)
llama_print_timings: eval time = 680.82 ms / 25 runs ( 27.23 ms per token, 36.72 tokens per second)
llama_print_timings: total time = 1295.27 ms