Run models locally
Use case
The popularity of projects like llama.cpp, Ollama, GPT4All, llamafile, and others underscores the demand to run LLMs locally (on your own device).
This has at least two important benefits:
- Privacy: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service
- Cost: There is no inference fee, which is important for token-intensive applications (e.g., long-running simulations, summarization)
Overview
Running an LLM locally requires a few things:
- Open-source LLM: an open-source LLM that can be freely modified and shared
- Inference: the ability to run this LLM on your device with acceptable latency
Open-source LLMs
Users can now gain access to a rapidly growing set of open-source LLMs.
These LLMs can be assessed along at least two dimensions (see figure):
- Base model: What is the base model, and how was it trained?
- Fine-tuning approach: Was the base model fine-tuned and, if so, what set of instructions was used?
The relative performance of these models can be assessed using several leaderboards.
Inference
A few frameworks have emerged to support inference of open-source LLMs on a variety of devices:
- llama.cpp: C++ implementation of the llama inference code, with weight optimization/quantization
- gpt4all: optimized C backend for inference
- Ollama: bundles model weights and the environment into an app that runs on your device and serves the LLM
- llamafile: bundles model weights and everything needed to run the model into a single file, allowing you to run the LLM locally from this file without any additional installation steps
In general, these frameworks do a few things:
- Quantization: reduce the memory footprint of the raw model weights
- Efficient implementation for inference: support inference on consumer hardware (e.g., a CPU or laptop GPU)
In particular, see this excellent post on the importance of quantization.
With lower precision, we radically decrease the memory needed to store the LLM in memory.
In addition, we can see the importance of GPU memory bandwidth from this sheet!
A Mac M2 Max is 5-6x faster than an M1 for inference because of its larger GPU memory bandwidth.
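As a rough, illustrative back-of-the-envelope sketch (the 400 GB/s bandwidth figure and the one-full-weight-read-per-token assumption below are simplifications, not measurements), quantization determines both whether the weights fit in memory at all and roughly how fast tokens can be generated:

```python
# Back-of-the-envelope estimates only: weight memory ≈ parameter count × bytes per parameter.
PARAMS = 7e9  # e.g., a 7B-parameter model

for name, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"{name}: ~{PARAMS * bytes_per_param / 1e9:.1f} GB of weights")

# Decode speed is roughly bounded by memory bandwidth, since generating each token
# reads (approximately) all of the weights once.
bandwidth_gb_s = 400              # assumed M2 Max-class GPU memory bandwidth, for illustration only
weights_gb = PARAMS * 0.5 / 1e9   # 4-bit quantized weights
print(f"rough upper bound: ~{bandwidth_gb_s / weights_gb:.0f} tokens/sec")
```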
Formatting prompts
Some providers have chat model wrappers that take care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a text-in/text-out LLM wrapper, you may need to use a prompt tailored to your specific model, as sketched below.
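For example, here is a minimal sketch of wrapping a question with Llama-2-chat-style special tokens (the `<<SYS>>`/`[INST]` markers assume a Llama-2-chat-tuned model; other models expect different templates, so check the model card):

```python
from langchain_core.prompts import PromptTemplate

# Assumed Llama-2-chat formatting; verify the exact template your model was trained with.
llama2_prompt = PromptTemplate.from_template(
    "<<SYS>> You are a concise assistant. <</SYS>>\n\n[INST] {question} [/INST]"
)
print(llama2_prompt.format(question="Who was the first man on the moon?"))
```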
Quickstart
Ollama is one way to easily run inference on macOS.
The instructions here provide details, which we summarize:
%pip install -qU langchain_ollama
from langchain_ollama import OllamaLLM
llm = OllamaLLM(model="llama3.1:8b")
llm.invoke("The first man on the moon was ...")
'...Neil Armstrong!\n\nOn July 20, 1969, Neil Armstrong became the first person to set foot on the lunar surface, famously declaring "That\'s one small step for man, one giant leap for mankind" as he stepped off the lunar module Eagle onto the Moon\'s surface.\n\nWould you like to know more about the Apollo 11 mission or Neil Armstrong\'s achievements?'
Stream tokens as they are being generated:
for chunk in llm.stream("The first man on the moon was ..."):
print(chunk, end="|", flush=True)
...|
Neil| Armstrong|,| an| American| astronaut|.| He| stepped| out| of| the| lunar| module| Eagle| and| onto| the| surface| of| the| Moon| on| July| |20|,| |196|9|,| famously| declaring|:| "|That|'s| one| small| step| for| man|,| one| giant| leap| for| mankind|."||
Ollama also includes a chat model wrapper that handles formatting conversational turns:
from langchain_ollama import ChatOllama
chat_model = ChatOllama(model="llama3.1:8b")
chat_model.invoke("Who was the first man on the moon?")
AIMessage(content='The answer is a historic one!\n\nThe first man to walk on the Moon was Neil Armstrong, an American astronaut and commander of the Apollo 11 mission. On July 20, 1969, Armstrong stepped out of the lunar module Eagle onto the surface of the Moon, famously declaring:\n\n"That\'s one small step for man, one giant leap for mankind."\n\nArmstrong was followed by fellow astronaut Edwin "Buzz" Aldrin, who also walked on the Moon during the mission. Michael Collins remained in orbit around the Moon in the command module Columbia.\n\nNeil Armstrong passed away on August 25, 2012, but his legacy as a pioneering astronaut and engineer continues to inspire people around the world!', response_metadata={'model': 'llama3.1:8b', 'created_at': '2024-08-01T00:38:29.176717Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 10681861417, 'load_duration': 34270292, 'prompt_eval_count': 19, 'prompt_eval_duration': 6209448000, 'eval_count': 141, 'eval_duration': 4432022000}, id='run-7bed57c5-7f54-4092-912c-ae49073dcd48-0', usage_metadata={'input_tokens': 19, 'output_tokens': 141, 'total_tokens': 160})
Environment
Inference speed is a challenge when running models locally (see above).
To minimize latency, it is desirable to run models locally on a GPU, which many consumer laptops ship with (e.g., Apple devices).
And even with a GPU, the available GPU memory bandwidth (as noted above) is important.
Running on Apple silicon GPU
Ollama and llamafile will automatically utilize the GPU on Apple devices.
Other frameworks require the user to set up the environment to utilize the Apple GPU.
For example, the llama.cpp python bindings can be configured to use the GPU via Metal.
Metal is a graphics and compute API created by Apple that provides near-direct access to the GPU.
In particular, ensure that conda is using the correct virtual environment that you created (miniforge3).
E.g., for me:
conda activate /Users/rlm/miniforge3/envs/llama
With the above confirmed, then:
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
LLMs
There are various ways to gain access to quantized model weights.
- HuggingFace - Many quantized models are available for download and can be run with frameworks such as llama.cpp. You can also download models in llamafile format from HuggingFace (see the download sketch below).
- gpt4all - The model explorer offers a leaderboard of metrics and associated quantized models available for download.
- Ollama - Several models can be accessed directly via pull.
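As a minimal sketch of the HuggingFace route (the repository and file names below are illustrative placeholders, not a recommendation; pick any GGUF-format quantization from the model page you choose):

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Example repo/file names are assumptions for illustration only.
model_path = hf_hub_download(
    repo_id="TheBloke/Llama-2-13B-chat-GGUF",
    filename="llama-2-13b-chat.Q4_0.gguf",
)
print(model_path)  # local path that can be passed to LlamaCpp(model_path=...)
```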
Ollama
With Ollama, fetch a model via ollama pull <model family>:<tag>:
- E.g., for Llama 2 7b: ollama pull llama2 will download the most basic version of the model (e.g., the smallest number of parameters and 4-bit quantization)
- We can also specify a particular version from the model list, e.g., ollama pull llama2:13b
- See the full set of parameters on the API reference page
llm = OllamaLLM(model="llama2:13b")
llm.invoke("The first man on the moon was ... think step by step")
' Sure! Here\'s the answer, broken down step by step:\n\nThe first man on the moon was... Neil Armstrong.\n\nHere\'s how I arrived at that answer:\n\n1. The first manned mission to land on the moon was Apollo 11.\n2. The mission included three astronauts: Neil Armstrong, Edwin "Buzz" Aldrin, and Michael Collins.\n3. Neil Armstrong was the mission commander and the first person to set foot on the moon.\n4. On July 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\'s surface, famously declaring "That\'s one small step for man, one giant leap for mankind."\n\nSo, the first man on the moon was Neil Armstrong!'
Llama.cpp
Llama.cpp is compatible with a broad set of models.
For example, below we run inference on llama2-13b with 4-bit quantization downloaded from HuggingFace.
As noted above, see the API reference for the full set of parameters.
From the llama.cpp API reference docs, a few are worth commenting on:
- n_gpu_layers: number of layers to be loaded into GPU memory
  - Value: 1
  - Meaning: only one layer of the model will be loaded into GPU memory (1 is often sufficient).
- n_batch: number of tokens the model should process in parallel
  - Value: n_batch
  - Meaning: it's recommended to choose a value between 1 and n_ctx (which in this case is set to 2048).
- n_ctx: token context window
  - Value: 2048
  - Meaning: the model will consider a window of 2048 tokens at a time.
- f16_kv: whether the model should use half-precision for the key/value cache
  - Value: True
  - Meaning: the model will use half-precision, which can be more memory efficient; Metal only supports True.
%env CMAKE_ARGS="-DLLAMA_METAL=on"
%env FORCE_CMAKE=1
%pip install --upgrade --quiet llama-cpp-python --no-cache-dir
from langchain_community.llms import LlamaCpp
from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler
llm = LlamaCpp(
model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin",
n_gpu_layers=1,
n_batch=512,
n_ctx=2048,
f16_kv=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
verbose=True,
)
The console log will show the following to indicate Metal was enabled properly from the steps above:
ggml_metal_init: allocating
ggml_metal_init: using MPS
llm.invoke("The first man on the moon was ... Let's think step by step")
Llama.generate: prefix-match hit
and use logical reasoning to figure out who the first man on the moon was.
Here are some clues:
1. The first man on the moon was an American.
2. He was part of the Apollo 11 mission.
3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.
4. His last name is Armstrong.
Now, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. And finally, clue #4 gives us his last name: Armstrong.
Therefore, the first man on the moon was Neil Armstrong!
llama_print_timings: load time = 9623.21 ms
llama_print_timings: sample time = 143.77 ms / 203 runs ( 0.71 ms per token, 1412.01 tokens per second)
llama_print_timings: prompt eval time = 485.94 ms / 7 tokens ( 69.42 ms per token, 14.40 tokens per second)
llama_print_timings: eval time = 6385.16 ms / 202 runs ( 31.61 ms per token, 31.64 tokens per second)
llama_print_timings: total time = 7279.28 ms
" and use logical reasoning to figure out who the first man on the moon was.\n\nHere are some clues:\n\n1. The first man on the moon was an American.\n2. He was part of the Apollo 11 mission.\n3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.\n4. His last name is Armstrong.\n\nNow, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. And finally, clue #4 gives us his last name: Armstrong.\nTherefore, the first man on the moon was Neil Armstrong!"
GPT4All
We can use model weights downloaded from the GPT4All model explorer.
Similar to what is shown above, we can run inference and use the API reference to set parameters of interest.
%pip install gpt4all
from langchain_community.llms import GPT4All
llm = GPT4All(
model="/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin"
)
llm.invoke("The first man on the moon was ... Let's think step by step")
".\n1) The United States decides to send a manned mission to the moon.2) They choose their best astronauts and train them for this specific mission.3) They build a spacecraft that can take humans to the moon, called the Lunar Module (LM).4) They also create a larger spacecraft, called the Saturn V rocket, which will launch both the LM and the Command Service Module (CSM), which will carry the astronauts into orbit.5) The mission is planned down to the smallest detail: from the trajectory of the rockets to the exact movements of the astronauts during their moon landing.6) On July 16, 1969, the Saturn V rocket launches from Kennedy Space Center in Florida, carrying the Apollo 11 mission crew into space.7) After one and a half orbits around the Earth, the LM separates from the CSM and begins its descent to the moon's surface.8) On July 20, 1969, at 2:56 pm EDT (GMT-4), Neil Armstrong becomes the first man on the moon. He speaks these"
llamafile
One of the simplest ways to run an LLM locally is using a llamafile. All you need to do is:
- Download a llamafile from HuggingFace
- Make the file executable
- Run the file
llamafiles bundle model weights and a specially-compiled version of llama.cpp into a single file that can run on most computers without any additional dependencies. They also come with an embedded inference server that provides an API for interacting with your model.
Here's a simple bash script that shows all 3 setup steps:
# Download a llamafile from HuggingFace
wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile
# Make the file executable. On Windows, instead just rename the file to end in ".exe".
chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile
# Start the model server. Listens at http://127.0.0.1:8080 by default.
./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser
After running those setup steps, you can use LangChain to interact with your model:
from langchain_community.llms.llamafile import Llamafile
llm = Llamafile()
llm.invoke("The first man on the moon was ... Let's think step by step.")
"\nFirstly, let's imagine the scene where Neil Armstrong stepped onto the moon. This happened in 1969. The first man on the moon was Neil Armstrong. We already know that.\n2nd, let's take a step back. Neil Armstrong didn't have any special powers. He had to land his spacecraft safely on the moon without injuring anyone or causing any damage. If he failed to do this, he would have been killed along with all those people who were on board the spacecraft.\n3rd, let's imagine that Neil Armstrong successfully landed his spacecraft on the moon and made it back to Earth safely. The next step was for him to be hailed as a hero by his people back home. It took years before Neil Armstrong became an American hero.\n4th, let's take another step back. Let's imagine that Neil Armstrong wasn't hailed as a hero, and instead, he was just forgotten. This happened in the 1970s. Neil Armstrong wasn't recognized for his remarkable achievement on the moon until after he died.\n5th, let's take another step back. Let's imagine that Neil Armstrong didn't die in the 1970s and instead, lived to be a hundred years old. This happened in 2036. In the year 2036, Neil Armstrong would have been a centenarian.\nNow, let's think about the present. Neil Armstrong is still alive. He turned 95 years old on July 20th, 2018. If he were to die now, his achievement of becoming the first human being to set foot on the moon would remain an unforgettable moment in history.\nI hope this helps you understand the significance and importance of Neil Armstrong's achievement on the moon!"
Prompts
Some LLMs will benefit from specific prompts.
For example, LLaMA will use special tokens.
We can use ConditionalPromptSelector to set a prompt based on the model type.
# Set our LLM
llm = LlamaCpp(
model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin",
n_gpu_layers=1,
n_batch=512,
n_ctx=2048,
f16_kv=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
verbose=True,
)
Set the associated prompt based upon the model version.
from langchain.chains.prompt_selector import ConditionalPromptSelector
from langchain_core.prompts import PromptTemplate
DEFAULT_LLAMA_SEARCH_PROMPT = PromptTemplate(
input_variables=["question"],
template="""<<SYS>> \n You are an assistant tasked with improving Google search \
results. \n <</SYS>> \n\n [INST] Generate THREE Google search queries that \
are similar to this question. The output should be a numbered list of questions \
and each should have a question mark at the end: \n\n {question} [/INST]""",
)
DEFAULT_SEARCH_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an assistant tasked with improving Google search \
results. Generate THREE Google search queries that are similar to \
this question. The output should be a numbered list of questions and each \
should have a question mark at the end: {question}""",
)
QUESTION_PROMPT_SELECTOR = ConditionalPromptSelector(
default_prompt=DEFAULT_SEARCH_PROMPT,
conditionals=[(lambda llm: isinstance(llm, LlamaCpp), DEFAULT_LLAMA_SEARCH_PROMPT)],
)
prompt = QUESTION_PROMPT_SELECTOR.get_prompt(llm)
prompt
PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='<<SYS>> \n You are an assistant tasked with improving Google search results. \n <</SYS>> \n\n [INST] Generate THREE Google search queries that are similar to this question. The output should be a numbered list of questions and each should have a question mark at the end: \n\n {question} [/INST]', template_format='f-string', validate_template=True)
# Chain
chain = prompt | llm
question = "What NFL team won the Super Bowl in the year that Justin Bieber was born?"
chain.invoke({"question": question})
Sure! Here are three similar search queries with a question mark at the end:
1. Which NBA team did LeBron James lead to a championship in the year he was drafted?
2. Who won the Grammy Awards for Best New Artist and Best Female Pop Vocal Performance in the same year that Lady Gaga was born?
3. What MLB team did Babe Ruth play for when he hit 60 home runs in a single season?
llama_print_timings: load time = 14943.19 ms
llama_print_timings: sample time = 72.93 ms / 101 runs ( 0.72 ms per token, 1384.87 tokens per second)
llama_print_timings: prompt eval time = 14942.95 ms / 93 tokens ( 160.68 ms per token, 6.22 tokens per second)
llama_print_timings: eval time = 3430.85 ms / 100 runs ( 34.31 ms per token, 29.15 tokens per second)
llama_print_timings: total time = 18578.26 ms
' Sure! Here are three similar search queries with a question mark at the end:\n\n1. Which NBA team did LeBron James lead to a championship in the year he was drafted?\n2. Who won the Grammy Awards for Best New Artist and Best Female Pop Vocal Performance in the same year that Lady Gaga was born?\n3. What MLB team did Babe Ruth play for when he hit 60 home runs in a single season?'
We can also use the LangChain Prompt Hub to fetch and/or store prompts that are model-specific.
This will work with your LangSmith API key.
For example, here is a prompt for RAG that includes LLaMA-specific tokens.
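As a minimal sketch, fetching such a hub prompt might look like the following (it assumes the langchainhub package is installed and a LangSmith API key is configured; "rlm/rag-prompt-llama" is used here as the LLaMA-specific RAG prompt referenced above):

```python
# pip install langchainhub
from langchain import hub

# Assumed prompt name for the LLaMA-specific RAG prompt; any hub prompt name works here.
rag_prompt_llama = hub.pull("rlm/rag-prompt-llama")
print(rag_prompt_llama)
```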
Use cases
Given an llm created from one of the models above, you can use it for many use cases.
For example, you can implement a RAG application using the chat models demonstrated here (a minimal sketch follows at the end of this section).
In general, use cases for local LLMs can be driven by at least two factors:
- Privacy: private data (e.g., journals, etc.) that a user does not want to share
- Cost: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks
In addition, here is an overview of fine-tuning, which can leverage open-source LLMs.
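As a minimal sketch of the RAG pattern mentioned above (the hard-coded context string stands in for whatever a real retriever would return, and llm is any of the local models created earlier):

```python
from langchain_core.prompts import PromptTemplate

# Stand-in for a real retriever: in a full RAG app this text would come from a vector store lookup.
retrieved_context = (
    "Apollo 11 landed on the Moon on July 20, 1969. "
    "Neil Armstrong was the mission commander."
)

rag_prompt = PromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)

# `llm` is any local model from the sections above (OllamaLLM, LlamaCpp, GPT4All, Llamafile, ...)
chain = rag_prompt | llm
print(chain.invoke({"context": retrieved_context, "question": "Who was the first man on the moon?"}))
```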