Chat Datasets¶
Chat datasets involve multi-turn conversations (multiple back-and-forth exchanges) between a user and an assistant.
[
    {"role": "user", "content": "What is the answer to the ultimate question of life?"},
    {"role": "assistant", "content": "The answer is 42."},
    {"role": "user", "content": "That's ridiculous"},
    {"role": "assistant", "content": "Oh I know."},
]
This is more structured than the freeform text association that models are typically pre-trained on, where they learn to simply predict the next token instead of responding accurately to the user.
The primary entry point for fine-tuning with chat datasets in torchtune is the chat_dataset() builder. This lets you specify a local or Hugging Face dataset that follows the chat data format directly from the config and train your LLM on it.
Example chat dataset¶
# data/my_data.json
[
    {
        "conversations": [
            {
                "from": "human",
                "value": "What is the answer to life?"
            },
            {
                "from": "gpt",
                "value": "The answer is 42."
            },
            {
                "from": "human",
                "value": "That's ridiculous"
            },
            {
                "from": "gpt",
                "value": "Oh I know."
            }
        ]
    }
]
from torchtune.models.mistral import mistral_tokenizer
from torchtune.datasets import chat_dataset

m_tokenizer = mistral_tokenizer(
    path="/tmp/Mistral-7B-v0.1/tokenizer.model",
    prompt_template="torchtune.models.mistral.MistralChatTemplate",
    max_seq_len=8192,
)
ds = chat_dataset(
    tokenizer=m_tokenizer,
    source="json",
    data_files="data/my_data.json",
    split="train",
    conversation_column="conversations",
    conversation_style="sharegpt",
    # By default, user prompt is ignored in loss. Set to True to include it
    train_on_input=True,
    new_system_prompt=None,
)
tokenized_dict = ds[0]
tokens, labels = tokenized_dict["tokens"], tokenized_dict["labels"]
print(m_tokenizer.decode(tokens))
# [INST] What is the answer to life? [/INST] The answer is 42. [INST] That's ridiculous [/INST] Oh I know.
print(labels)
# [1, 733, 16289, 28793, 1824, 349, 272, 4372, ...]
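In the example above, train_on_input=True, so user tokens are kept in the loss. With the default train_on_input=False, user-turn tokens in labels are instead replaced by an ignore index so they contribute nothing to the loss. A minimal sketch for checking this, assuming the ignore index is -100 (torchtune's CROSS_ENTROPY_IGNORE_IDX):
# Sketch: count how many label positions are masked out of the loss.
# Assumes the ignore index is -100 (torchtune's CROSS_ENTROPY_IGNORE_IDX).
IGNORE_IDX = -100
num_masked = sum(1 for label in labels if label == IGNORE_IDX)
print(f"{num_masked} of {len(labels)} tokens are masked out of the loss")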
# In config
tokenizer:
  _component_: torchtune.models.mistral.mistral_tokenizer
  path: /tmp/Mistral-7B-v0.1/tokenizer.model
  prompt_template: torchtune.models.mistral.MistralChatTemplate
  max_seq_len: 8192

dataset:
  _component_: torchtune.datasets.chat_dataset
  source: json
  data_files: data/my_data.json
  split: train
  conversation_column: conversations
  conversation_style: sharegpt
  train_on_input: True
  new_system_prompt: null
Chat dataset format¶
Chat datasets typically have a single column called "conversations" or "messages" that contains a list of messages on a single topic per sample. The list of messages could include a system prompt, multiple turns between user and assistant, and tool calls/returns.
| conversations |
|---------------|
| [{"role": "user", "content": "What day is today?"}, {"role": "assistant", "content": "It is Tuesday."}] |
| [{"role": "user", "content": "What about tomorrow?"}, {"role": "assistant", "content": "Tomorrow is Wednesday."}] |
As an example, you can see the schema of the SlimOrca dataset.
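To check a dataset's schema yourself before training, one quick approach is to load a row or two with the Hugging Face datasets library (a sketch; assumes the datasets package is available, as it is for torchtune installs):
from datasets import load_dataset

# Peek at the first sample to see the column layout
ds = load_dataset("Open-Orca/SlimOrca-Dedup", split="train[:1]")
print(ds.column_names)         # expected: ['conversations']
print(ds[0]["conversations"])  # a list of {"from": ..., "value": ...} dicts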
Loading chat datasets from Hugging Face¶
You'll need to pass the dataset repo name to source, select one of the conversation styles in conversation_style, and specify the conversation_column. For most HF datasets, you will also need to specify the split.
from torchtune.models.gemma import gemma_tokenizer
from torchtune.datasets import chat_dataset

g_tokenizer = gemma_tokenizer("/tmp/gemma-7b/tokenizer.model")
ds = chat_dataset(
    tokenizer=g_tokenizer,
    source="Open-Orca/SlimOrca-Dedup",
    conversation_column="conversations",
    conversation_style="sharegpt",
    split="train",
)
# Tokenizer is passed into the dataset in the recipe
dataset:
  _component_: torchtune.datasets.chat_dataset
  source: Open-Orca/SlimOrca-Dedup
  conversation_column: conversations
  conversation_style: sharegpt
  split: train
Loading local and remote chat datasets¶
To load a local or remote dataset containing conversational data via https, you will additionally need to specify the data_files and split arguments. See Hugging Face's load_dataset documentation for more details on loading local or remote files.
from torchtune.models.gemma import gemma_tokenizer
from torchtune.datasets import chat_dataset

g_tokenizer = gemma_tokenizer("/tmp/gemma-7b/tokenizer.model")
ds = chat_dataset(
    tokenizer=g_tokenizer,
    source="json",
    conversation_column="conversations",
    conversation_style="sharegpt",
    data_files="data/my_data.json",
    split="train",
)
# Tokenizer is passed into the dataset in the recipe
dataset:
  _component_: torchtune.datasets.chat_dataset
  source: json
  conversation_column: conversations
  conversation_style: sharegpt
  data_files: data/my_data.json
  split: train
Specifying conversation style¶
The structure of conversations in raw datasets can vary widely, with different role names and different field names indicating the message content. A few standardized formats are common across many datasets, and we have built-in converters that convert these standardized formats into a torchtune list of messages that follows this format:
[
    {
        "role": "system" | "user" | "assistant" | "ipython",
        "content": <message>,
    },
    ...
]
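Under the hood, these converters are message transforms. For instance, the "sharegpt" style maps the "from"/"value" schema shown earlier into this format; a minimal sketch using torchtune.data.ShareGPTToMessages (this assumes its default call signature and that it returns a dict with a "messages" key):
from torchtune.data import ShareGPTToMessages

# Convert one ShareGPT-style sample into a list of torchtune Messages.
# With train_on_input=False, user messages are marked masked=True so
# they are later excluded from the loss.
transform = ShareGPTToMessages(train_on_input=False)
sample = {
    "conversations": [
        {"from": "human", "value": "What is the answer to life?"},
        {"from": "gpt", "value": "The answer is 42."},
    ]
}
messages = transform(sample)["messages"]
print([(m.role, m.masked) for m in messages])
# [('user', True), ('assistant', False)]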
"openai"
¶
{
    "messages": [
        {
            "role": "system" | "user" | "assistant",
            "content": <message>,
        },
        ...
    ]
}
You can specify conversation_style=openai in code or in config:
from torchtune.models.gemma import gemma_tokenizer
from torchtune.datasets import chat_dataset

g_tokenizer = gemma_tokenizer("/tmp/gemma-7b/tokenizer.model")
ds = chat_dataset(
    tokenizer=g_tokenizer,
    source="json",
    conversation_column="conversations",
    conversation_style="openai",
    data_files="data/my_data.json",
    split="train",
)
# Tokenizer is passed into the dataset in the recipe
dataset:
  _component_: torchtune.datasets.chat_dataset
  source: json
  conversation_column: conversations
  conversation_style: openai
  data_files: data/my_data.json
  split: train
If your dataset does not fit one of the above conversation styles, you will need to create a custom message transform.
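For reference, a custom message transform is a callable that maps a raw sample to a list of torchtune Messages under a "messages" key. A minimal sketch, assuming torchtune.data.Message and a hypothetical dataset schema with "speaker" and "text" fields inside a "dialogue" column:
from torchtune.data import Message

class MyMessageTransform:
    # Maps a hypothetical {"speaker": ..., "text": ...} schema to Messages
    def __call__(self, sample):
        role_map = {"human": "user", "bot": "assistant"}
        messages = [
            Message(
                role=role_map[turn["speaker"]],
                content=turn["text"],
                # Mask user turns so they are excluded from the loss
                masked=(turn["speaker"] == "human"),
            )
            for turn in sample["dialogue"]
        ]
        return {"messages": messages}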
Renaming columns¶
To specify the column that contains your conversation data, use conversation_column.
# data/my_data.json
[
    {
        "dialogue": [
            {
                "from": "human",
                "value": "What is the answer to life?"
            },
            {
                "from": "gpt",
                "value": "The answer is 42."
            },
            {
                "from": "human",
                "value": "That's ridiculous"
            },
            {
                "from": "gpt",
                "value": "Oh I know."
            }
        ]
    }
]
from torchtune.models.gemma import gemma_tokenizer
from torchtune.datasets import chat_dataset

g_tokenizer = gemma_tokenizer("/tmp/gemma-7b/tokenizer.model")
ds = chat_dataset(
    tokenizer=g_tokenizer,
    source="json",
    conversation_column="dialogue",
    conversation_style="sharegpt",
    data_files="data/my_data.json",
    split="train",
)
# Tokenizer is passed into the dataset in the recipe
dataset:
  _component_: torchtune.datasets.chat_dataset
  source: json
  conversation_column: dialogue
  conversation_style: sharegpt
  data_files: data/my_data.json
  split: train
Chat templates¶
Chat templates are defined the same way as instruct templates. See Instruct templates for more information.
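As an illustration, a minimal sketch of defining one with torchtune.data.PromptTemplate, which takes a dict mapping each role to a (prepend_tag, append_tag) pair (the tags below are illustrative, not any model's official format):
from torchtune.data import PromptTemplate

# Wrap each turn with illustrative role tags: the first element is
# prepended to the message content, the second is appended after it.
my_chat_template = PromptTemplate(
    template={
        "user": ("User: ", "\n"),
        "assistant": ("Assistant: ", "\n"),
    },
)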