Llama2Chat
This notebook shows how to augment Llama-2 LLMs with the Llama2Chat wrapper to support the Llama-2 chat prompt format. Several LLM implementations in LangChain can be used as interfaces to Llama-2 chat models. These include ChatHuggingFace, LlamaCpp, and GPT4All, to name a few.
Llama2Chat is a generic wrapper that implements BaseChatModel and can therefore be used as a chat model in applications. Llama2Chat converts a list of Messages into the required chat prompt format and forwards the formatted prompt as a str to the wrapped LLM.
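Under the hood, Llama2Chat renders the conversation into the [INST] / <<SYS>> prompt layout expected by Llama-2 chat models. The snippet below is an illustrative sketch of that single-turn format, not the actual Llama2Chat implementation:
# Illustrative only: roughly the prompt string a Llama-2 chat model expects.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def llama2_prompt(system_msg: str, user_msg: str) -> str:
    # The system prompt is embedded into the first user turn.
    return f"{B_INST} {B_SYS}{system_msg}{E_SYS}{user_msg} {E_INST}"

print(llama2_prompt("You are a helpful assistant.", "Hi there!"))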
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain_experimental.chat_models import Llama2Chat
For the chat application examples below, we'll use the following chat prompt_template:
from langchain_core.messages import SystemMessage
from langchain_core.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
)
template_messages = [
SystemMessage(content="You are a helpful assistant."),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{text}"),
]
prompt_template = ChatPromptTemplate.from_messages(template_messages)
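To sanity-check the template, you can render it with an empty chat history. This is an illustrative sketch using format_messages, the standard ChatPromptTemplate API:
# Render the template to inspect the resulting message list.
messages = prompt_template.format_messages(chat_history=[], text="Hi there!")
for m in messages:
    print(type(m).__name__, ":", m.content)
# SystemMessage : You are a helpful assistant.
# HumanMessage : Hi there!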
Chat with Llama-2 via HuggingFaceTextGenInference LLM
A HuggingFaceTextGenInference LLM encapsulates access to a text-generation-inference server. In the following example, the inference server serves a meta-llama/Llama-2-13b-chat-hf model. It can be started locally with:
docker run \
--rm \
--gpus all \
--ipc=host \
-p 8080:80 \
-v ~/.cache/huggingface/hub:/data \
-e HF_API_TOKEN=${HF_API_TOKEN} \
ghcr.io/huggingface/text-generation-inference:0.9 \
--hostname 0.0.0.0 \
--model-id meta-llama/Llama-2-13b-chat-hf \
--quantize bitsandbytes \
--num-shard 4
This works on a machine with 4 x RTX 3080ti cards, for example. Adjust the --num-shard value to the number of available GPUs. The HF_API_TOKEN environment variable holds the Hugging Face API token.
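Before wiring the server into LangChain, you can optionally verify that it is reachable. This sketch assumes the requests package is installed and that the server exposes the standard text-generation-inference /info endpoint:
# Optional sanity check that the local inference server is up.
import requests

info = requests.get("http://127.0.0.1:8080/info").json()
print(info["model_id"])  # expected: meta-llama/Llama-2-13b-chat-hf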
# !pip3 install text-generation
Create a HuggingFaceTextGenInference instance that connects to the local inference server, and wrap it into Llama2Chat.
from langchain_community.llms import HuggingFaceTextGenInference
llm = HuggingFaceTextGenInference(
inference_server_url="http://127.0.0.1:8080/",
max_new_tokens=512,
top_k=50,
temperature=0.1,
repetition_penalty=1.03,
)
model = Llama2Chat(llm=llm)
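The wrapped model can be given a quick smoke test before building a chain. This is a sketch that assumes a LangChain version recent enough to provide the invoke API on chat models:
# One-off message round-trip through the wrapped chat model.
from langchain_core.messages import HumanMessage

print(model.invoke([HumanMessage(content="Hi! Who are you?")]).content)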
You can then use the chat model together with prompt_template and conversation memory in an LLMChain:
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
print(
chain.run(
text="What can I see in Vienna? Propose a few locations. Names only, no details."
)
)
Sure, I'd be happy to help! Here are a few popular locations to consider visiting in Vienna:
1. Schönbrunn Palace
2. St. Stephen's Cathedral
3. Hofburg Palace
4. Belvedere Palace
5. Prater Park
6. Vienna State Opera
7. Albertina Museum
8. Museum of Natural History
9. Kunsthistorisches Museum
10. Ringstrasse
print(chain.run(text="Tell me more about #2."))
Certainly! St. Stephen's Cathedral (Stephansdom) is one of the most recognizable landmarks in Vienna and a must-see attraction for visitors. This stunning Gothic cathedral is located in the heart of the city and is known for its intricate stone carvings, colorful stained glass windows, and impressive dome.
The cathedral was built in the 12th century and has been the site of many important events throughout history, including the coronation of Holy Roman emperors and the funeral of Mozart. Today, it is still an active place of worship and offers guided tours, concerts, and special events. Visitors can climb up the south tower for panoramic views of the city or attend a service to experience the beautiful music and chanting.
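Both turns are stored by the conversation memory and re-injected into the prompt via the chat_history placeholder. You can inspect the accumulated history directly through the standard ConversationBufferMemory API:
# Print the message history collected so far.
for msg in memory.chat_memory.messages:
    print(f"{type(msg).__name__}: {msg.content[:60]}...")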
Chat with Llama-2 via LlamaCpp LLM
To use a Llama-2 chat model with a LlamaCpp LLM, install the llama-cpp-python library following these installation instructions. The following example uses a quantized llama-2-7b-chat.Q4_0.gguf model stored locally at ~/Models/llama-2-7b-chat.Q4_0.gguf.
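For a basic CPU-only build, this is typically just the following (see the linked instructions for GPU-accelerated variants):
# !pip3 install llama-cpp-python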
After creating a LlamaCpp instance, the llm is again wrapped into Llama2Chat
from os.path import expanduser
from langchain_community.llms import LlamaCpp
model_path = expanduser("~/Models/llama-2-7b-chat.Q4_0.gguf")
llm = LlamaCpp(
model_path=model_path,
streaming=False,
)
model = Llama2Chat(llm=llm)
and used in the same way as in the previous example.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt_template, memory=memory)
print(
chain.run(
text="What can I see in Vienna? Propose a few locations. Names only, no details."
)
)
Of course! Vienna is a beautiful city with a rich history and culture. Here are some of the top tourist attractions you might want to consider visiting:
1. Schönbrunn Palace
2. St. Stephen's Cathedral
3. Hofburg Palace
4. Belvedere Palace
5. Prater Park
6. MuseumsQuartier
7. Ringstrasse
8. Vienna State Opera
9. Kunsthistorisches Museum
10. Imperial Palace
These are just a few of the many amazing places to see in Vienna. Each one has its own unique history and charm, so I hope you enjoy exploring this beautiful city!
llama_print_timings: load time = 250.46 ms
llama_print_timings: sample time = 56.40 ms / 144 runs ( 0.39 ms per token, 2553.37 tokens per second)
llama_print_timings: prompt eval time = 1444.25 ms / 47 tokens ( 30.73 ms per token, 32.54 tokens per second)
llama_print_timings: eval time = 8832.02 ms / 143 runs ( 61.76 ms per token, 16.19 tokens per second)
llama_print_timings: total time = 10645.94 ms
print(chain.run(text="Tell me more about #2."))
Llama.generate: prefix-match hit
Of course! St. Stephen's Cathedral (also known as Stephansdom) is a stunning Gothic-style cathedral located in the heart of Vienna, Austria. It is one of the most recognizable landmarks in the city and is considered a symbol of Vienna.
Here are some interesting facts about St. Stephen's Cathedral:
1. History: The construction of St. Stephen's Cathedral began in the 12th century on the site of a former Romanesque church, and it took over 600 years to complete. The cathedral has been renovated and expanded several times throughout its history, with the most significant renovation taking place in the 19th century.
2. Architecture: St. Stephen's Cathedral is built in the Gothic style, characterized by its tall spires, pointed arches, and intricate stone carvings. The cathedral features a mix of Romanesque, Gothic, and Baroque elements, making it a unique blend of styles.
3. Design: The cathedral's design is based on the plan of a cross with a long nave and two shorter arms extending from it. The main altar is
llama_print_timings: load time = 250.46 ms
llama_print_timings: sample time = 100.60 ms / 256 runs ( 0.39 ms per token, 2544.73 tokens per second)
llama_print_timings: prompt eval time = 5128.71 ms / 160 tokens ( 32.05 ms per token, 31.20 tokens per second)
llama_print_timings: eval time = 16193.02 ms / 255 runs ( 63.50 ms per token, 15.75 tokens per second)
llama_print_timings: total time = 21988.57 ms
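As a closing variation (a sketch, not part of the original example): LlamaCpp also supports token streaming. Setting streaming=True together with a stdout callback handler prints tokens as they are generated:
# Sketch: stream tokens to stdout while generating. Reuses model_path and
# prompt_template from above; a fresh memory keeps this run independent.
from langchain_core.callbacks import StreamingStdOutCallbackHandler

streaming_llm = LlamaCpp(
    model_path=model_path,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
streaming_model = Llama2Chat(llm=streaming_llm)
streaming_memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
streaming_chain = LLMChain(llm=streaming_model, prompt=prompt_template, memory=streaming_memory)
streaming_chain.run(text="What can I see in Vienna? Names only.")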