vLLM
vLLM is a fast and easy-to-use library for LLM inference and serving, offering:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Optimized CUDA kernels
This notebook goes over how to use an LLM with LangChain and vLLM.

To use it, you should have the vllm python package installed.
```python
%pip install --upgrade --quiet vllm
```
```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
)

print(llm.invoke("What is the capital of France ?"))
```
API Reference: VLLM
```output
INFO 08-06 11:37:33 llm_engine.py:70] Initializing an LLM engine with config: model='mosaicml/mpt-7b', tokenizer='mosaicml/mpt-7b', tokenizer_mode=auto, trust_remote_code=True, dtype=torch.bfloat16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)
INFO 08-06 11:37:41 llm_engine.py:196] # GPU blocks: 861, # CPU blocks: 512
```

```output
Processed prompts: 100%|██████████| 1/1 [00:00<00:00, 2.00it/s]
```

```output
What is the capital of France ? The capital of France is Paris.
```
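Because vLLM continuously batches incoming requests, several prompts can be processed efficiently in a single call. Here is a minimal sketch using LangChain's standard batch method (the prompts are illustrative):

```python
# Send multiple prompts at once; vLLM schedules them together on the GPU.
results = llm.batch(
    [
        "What is the capital of Germany?",
        "What is the capital of Spain?",
    ]
)
for result in results:
    print(result)
```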
Integrate the model in an LLMChain
```python
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who was the US president in the year the first Pokemon game was released?"

print(llm_chain.invoke(question))
```
API Reference: LLMChain | PromptTemplate
```output
Processed prompts: 100%|██████████| 1/1 [00:01<00:00, 1.34s/it]
```

```output
1. The first Pokemon game was released in 1996.
2. The president was Bill Clinton.
3. Clinton was president from 1993 to 2001.
4. The answer is Clinton.
```
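Note that LLMChain is deprecated in recent LangChain releases. As a sketch, the same chain can also be built with the runnable pipe composition, which is the recommended replacement:

```python
# Equivalent chain using runnable (pipe) composition instead of LLMChain.
chain = prompt | llm

print(chain.invoke({"question": question}))
```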
Distributed Inference
vLLM supports distributed tensor-parallel inference and serving.

To run multi-GPU inference with the LLM class, set the tensor_parallel_size argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs:
```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-30b",
    tensor_parallel_size=4,
    trust_remote_code=True,  # mandatory for hf models
)

llm.invoke("What is the future of AI?")
```
API Reference: VLLM
Quantization
vLLM supports awq quantization. To enable it, pass quantization to vllm_kwargs.
```python
llm_q = VLLM(
    model="TheBloke/Llama-2-7b-Chat-AWQ",
    trust_remote_code=True,
    max_new_tokens=512,
    vllm_kwargs={"quantization": "awq"},
)
```
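The quantized model is then invoked like any other VLLM instance. A short usage sketch (the prompt is illustrative):

```python
# Query the AWQ-quantized model just like an unquantized one.
print(llm_q.invoke("What are the benefits of quantizing an LLM?"))
```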
OpenAI-Compatible Server
vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using the OpenAI API.

This server can be queried in the same format as the OpenAI API.
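As a sketch, a server backing the example below can be started from a terminal with vLLM's OpenAI-compatible entrypoint, which serves at http://localhost:8000/v1 by default (newer vLLM releases also provide a vllm serve command); the model name here just matches the one used in this example:

```python
# Run in a separate terminal (notebook shell escape); serves an
# OpenAI-compatible API at http://localhost:8000/v1 by default.
!python -m vllm.entrypoints.openai.api_server --model tiiuae/falcon-7b
```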
OpenAI-Compatible Completion
```python
from langchain_community.llms import VLLMOpenAI

llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="tiiuae/falcon-7b",
    model_kwargs={"stop": ["."]},
)
print(llm.invoke("Rome is"))
```
API Reference: VLLMOpenAI
```output
a city that is filled with history, ancient buildings, and art around every corner
```