[LLM] Training Large Models with DeepSpeed (Part 3)
Reposted from: https://blog.csdn.net/zwqjoy/article/details/138274598
Optimizers and Schedulers
When offload_optimizer is not used, the HF and DeepSpeed (DS) optimizers and schedulers can be mixed as shown in the table below; the only combination that does not work is an HF scheduler together with a DS optimizer.
| Combos | HF Scheduler | DS Scheduler |
|---|---|---|
| HF Optimizer | Yes | Yes |
| DS Optimizer | No | Yes |
Optimizer
When offload_optimizer is enabled, you can use a non-DeepSpeed optimizer as long as it has both a CPU and a GPU implementation (LAMB being the exception).
DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and LAMB. They have been thoroughly tested with ZeRO and are the recommended choice.
If no optimizer is configured in the config file, the Trainer automatically sets it to AdamW and uses the default values of the command-line arguments --learning_rate, --adam_beta1, --adam_beta2, --adam_epsilon, and --weight_decay.
Other officially supported optimizers can be configured in the same way as AdamW. Keep in mind that they may require different configuration values; for Adam, for example, weight_decay should be set to around 0.01.
In addition, offloading works best together with DeepSpeed's CPU Adam optimizer. If you want to use a different optimizer with offloading, then from deepspeed==0.8.3 onwards you also need to add the following to the config (a sketch of a full config follows the snippet below):
{
"zero_force_ds_cpu_optimizer": false
}
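As an illustration, here is a minimal sketch (not from the original post) of a config for that situation: it contains no "optimizer" block, so the Trainer creates its stock torch AdamW, and zero_force_ds_cpu_optimizer is disabled so DeepSpeed accepts that optimizer together with CPU offload. The stage and offload values are placeholder assumptions; such a dict can be passed directly via TrainingArguments(deepspeed=ds_config).

# Hypothetical config dict: no "optimizer" section, so the HF Trainer supplies
# its own torch AdamW; DeepSpeed is told not to force its CPU Adam.
ds_config = {
    "zero_force_ds_cpu_optimizer": False,
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}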
Scheduler
- DeepSpeed supports the LRRangeTest, OneCycle, WarmupLR, and WarmupDecayLR learning-rate schedulers.
- Correspondence (overlap) between the Transformers and DeepSpeed schedulers (a config sketch follows this list):
  - WarmupLR corresponds to --lr_scheduler_type constant_with_warmup
  - WarmupDecayLR corresponds to --lr_scheduler_type linear
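For reference, a sketch of what the WarmupDecayLR counterpart of --lr_scheduler_type linear looks like inside a DeepSpeed config, written here as a Python dict; the numbers are placeholders, and total_num_steps has to match the real number of optimizer steps.

# Hypothetical scheduler section of a DeepSpeed config
scheduler_section = {
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "warmup_min_lr": 0,
            "warmup_max_lr": 3e-5,
            "warmup_num_steps": 500,
            "total_num_steps": 10000,  # placeholder: total training steps
        },
    }
}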
Getting the Model Parameters
DeepSpeed stores the model's master weights inside the optimizer state, in the global_step*/*optim_states.pt files, in fp32. So if you only want to resume training from a checkpoint, keep the defaults.
If the model was saved under ZeRO-2, the model weights are stored in fp16 in pytorch_model.bin.
If the model was saved under ZeRO-3, "stage3_gather_16bit_weights_on_model_save": true must be set (as in the config below), otherwise pytorch_model.bin will not be created. To obtain the full fp32 weights there are two routes:
- Online fp32 weight recovery (needs a lot of host RAM); the original post skips this, but a sketch follows this list.
- Offline fp32 weight extraction:
python zero_to_fp32.py . pytorch_model.bin
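A minimal sketch of the online route (not in the original post), assuming the ZeRO checkpoint lives under the Trainer output directory; the helpers come from DeepSpeed's zero_to_fp32 utilities and need enough CPU RAM to hold the full fp32 state dict.

from deepspeed.utils.zero_to_fp32 import (
    get_fp32_state_dict_from_zero_checkpoint,
    load_state_dict_from_zero_checkpoint,
)

checkpoint_dir = "checkpoints/checkpoint-500"  # hypothetical checkpoint path

# Option 1: consolidate the partitioned checkpoint into a plain fp32 state dict
state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)

# Option 2: load the consolidated fp32 weights straight into an existing model
# model = load_state_dict_from_zero_checkpoint(model, checkpoint_dir)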
Training with DeepSpeed
An introduction to basic training.
Install DeepSpeed:
pip install deepspeed
- Import the DeepSpeed module in the training script.
- Import the Trainer module in the training script.
- Create a Trainer object and pass in the model, training dataset, optimizer, and so on:
import deepspeed
from transformers import Trainer

# Note: the HF Trainer takes the optimizer via the `optimizers` tuple, not `optimizer`
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=data_collator,
    optimizers=(optimizer, None),  # (optimizer, lr_scheduler)
)
trainer.train()
- Run the training script with the DeepSpeed command-line launcher (single node):
deepspeed --num_gpus=8 train.py
where --num_gpus is the number of GPUs to use.
Multi-node:
deepspeed --hostfile=hostfile --master_port 60000 --include="node1:0,1,2,3@node2:0,1,2,3" run.py \
    --deepspeed ds_config.json
hostfile
Create a hostfile that lists each host together with its number of GPUs (slots=4 means the host has 4 GPUs):
node1_ip slots=4
node2_ip slots=4
The --include argument selects specific hosts and GPUs; for example, the following uses GPU 3 on host1 and GPUs 2 and 3 on host2:
--include="host1:3@host2:2,3"
ds_config.json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1e6,
"stage3_prefetch_bucket_size": 0.94e6,
"stage3_param_persistence_threshold": 1e4,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
Hands-on Training Walkthrough
1. Preprocessing and the JSON config file
First, use huggingface's datasets.map to apply custom preprocessing to the samples of the dataset. transformers can integrate DeepSpeed through the Trainer; this usage requires a config file, such as the ds_config.json DeepSpeed config file below. See the documentation for the details of this config.
The model used here is FLAN-T5. Launch DeepSpeed with deepspeed --include=localhost:1,2 train.py, which runs training on GPUs 1 and 2. Note that ZeRO-3 needs enough host memory.
If you do not use the Trainer to integrate DeepSpeed, core functions such as from_pretrained and from_config have to involve the key DeepSpeed pieces, such as ZeRO, and ZeRO has to be initialized at stage 3 or higher. See the documentation, and the sketch right after this paragraph.
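A minimal sketch of that non-Trainer route, under a few assumptions: ds_config is a complete ZeRO-3 config dict with concrete values (the "auto" placeholders only work with the Trainer), and HfDeepSpeedConfig is created before from_pretrained so the weights are loaded directly into ZeRO-3 partitions.

import deepspeed
from transformers import AutoModelForSeq2SeqLM
from transformers.integrations import HfDeepSpeedConfig

# Keep this object alive: constructing it before from_pretrained tells
# transformers to load the model sharded across ranks (ZeRO-3 / zero.Init).
dschf = HfDeepSpeedConfig(ds_config)  # ds_config: assumed ZeRO stage-3 dict

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xxl")

# Hand the model to DeepSpeed; the returned engine wraps forward/backward/step.
engine, optimizer, _, lr_scheduler = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)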
{
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": false
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
2. Training code
- Data: the samsum dataset
- Model: the google/flan-t5-xxl model
#!/usr/bin/python
# -*- coding: utf-8 -*-
import nltk
import torch
import evaluate
import datasets
import numpy as np
from nltk.tokenize import sent_tokenize
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pad_sequence
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

nltk.download("punkt")

dataset_name = "samsum"               # dataset name
model_name = "google/flan-t5-xxl"     # model name
max_input_length = 512
max_gen_length = 128
output_dir = "checkpoints"
num_train_epochs = 5
learning_rate = 5e-5
deepspeed_config = "./ds_config.json"  # deepspeed config file
per_device_train_batch_size = 1       # batch size of 1; larger values cause OOM
per_device_eval_batch_size = 1
gradient_accumulation_steps = 2       # per-GPU batch size is 1, so use gradient accumulation to enlarge the effective batch size

tokenizer = AutoTokenizer.from_pretrained(model_name)

# load the data
dataset = datasets.load_dataset(dataset_name)
print(dataset["train"][0])

# tokenize
def preprocess(examples):
    dialogues = ["summarize:" + dia for dia in examples["dialogue"]]
    # summaries = [summ for summ in examples["summary"]]
    model_inputs = tokenizer(dialogues, max_length=max_input_length, truncation=True)
    labels = tokenizer(text_target=examples["summary"], max_length=max_gen_length, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized_dataset = dataset.map(preprocess, batched=True, remove_columns=["dialogue", "summary", "id"])
# print(tokenized_dataset["train"]["input_ids"][0])  # print the result

# pad each batch
def collate_fn(features):
    batch_input_ids = [torch.LongTensor(feature["input_ids"]) for feature in features]
    batch_attention_mask = [torch.LongTensor(feature["attention_mask"]) for feature in features]
    batch_labels = [torch.LongTensor(feature["labels"]) for feature in features]
    batch_input_ids = pad_sequence(batch_input_ids, batch_first=True, padding_value=tokenizer.pad_token_id)
    batch_attention_mask = pad_sequence(batch_attention_mask, batch_first=True, padding_value=0)
    batch_labels = pad_sequence(batch_labels, batch_first=True, padding_value=-100)
    return {
        "input_ids": batch_input_ids,
        "attention_mask": batch_attention_mask,
        "labels": batch_labels
    }

# code used for testing
# dataloader = DataLoader(tokenized_dataset["test"], shuffle=False, batch_size=4, collate_fn=collate_fn)
# batch = next(iter(dataloader))
# print(batch)

# load the model
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# code used for testing
# dataloader = DataLoader(tokenized_dataset["test"], shuffle=False, batch_size=4, collate_fn=collate_fn)
# batch = next(iter(dataloader))
# output = model(**batch)
# print(output)

# define the evaluation function
metric = evaluate.load("rouge")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    decoded_preds = ["\n".join(sent_tokenize(pred.strip())) for pred in decoded_preds]
    decoded_labels = ["\n".join(sent_tokenize(label.strip())) for label in decoded_labels]
    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    result = {k: round(v * 100, 4) for k, v in result.items()}
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    return result

# set the training arguments
training_args = Seq2SeqTrainingArguments(
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    per_device_eval_batch_size=per_device_eval_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    eval_accumulation_steps=1,        # avoid OOM during evaluation
    predict_with_generate=True,
    fp16=False,
    learning_rate=learning_rate,
    num_train_epochs=num_train_epochs,
    # logging & evaluation strategies
    logging_dir="logs",
    logging_strategy="steps",
    logging_steps=50,                 # log every 50 steps
    evaluation_strategy="steps",
    eval_steps=500,                   # evaluate every 500 steps
    save_steps=500,
    save_total_limit=2,
    load_best_model_at_end=True,
    deepspeed=deepspeed_config,       # path to the deepspeed config file
    report_to="all"
)

# train the model
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=collate_fn,
    compute_metrics=compute_metrics,
)
trainer.train()

# print the results on the validation set
print(trainer.evaluate(tokenized_dataset["validation"]))
# print the results on the test set
print(trainer.evaluate(tokenized_dataset["test"]))
# save the best model
trainer.save_model("best")
Ways to speed up training: the bitsandbytes quantization toolkit, DeepSpeed (it helps to read up on torch.distributed and ColossalAI before diving in), and llama.cpp for quantized models.
Accelerating BLOOM LoRA Fine-Tuning with DeepSpeed
1. Configuration file
{ "train_micro_batch_size_per_gpu": "auto", "gradient_accumulation_steps": "auto", "steps_per_print": 50, "gradient_clipping": 1.0, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu" }, "contiguous_gradients": true, "overlap_comm": true }, "zero_allow_untested_optimizer": true, "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "Adam", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "activation_checkpointing": { "partition_activations": true, "contiguous_memory_optimization": true }, "wall_clock_breakdown": false }
2. Training code
- Data: the 1M instruction-tuning examples released by BELLE
- Model: the bloomz-7b1-mt model
Launch with: deepspeed --include=localhost:0,1,2,3 train.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import torch
import random
import datasets
import numpy as np
from tqdm import tqdm
from typing import Dict
from torch.utils.data import DataLoader
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    TrainingArguments,
    Trainer
)
from peft import (
    LoraConfig,
    TaskType,
    get_peft_model,
    get_peft_model_state_dict,
    set_peft_model_state_dict
)

def set_random_seed(seed):
    if seed is not None and seed > 0:
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.random.manual_seed(seed)
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True

set_random_seed(1234)

# 1. set the parameters
# LoRA parameters
LORA_R = 8
LORA_ALPHA = 32
LORA_DROPOUT = 0.1
# training parameters
EPOCHS = 3
LEARNING_RATE = 5e-5
OUTPUT_DIR = "./checkpoints"
BATCH_SIZE = 4  # 2
GRADIENT_ACCUMULATION_STEPS = 3
# other parameters
MODEL_PATH = "bigscience/bloomz-7b1-mt"
DATA_PATH = "./data/belle_open_source_1M.train.json"
MAX_LENGTH = 512
PATTERN = "{}\n{}"
DS_CONFIG = "ds_zero2_config.json"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)  # load the tokenizer

# load the data
dataset = datasets.load_dataset("json", data_files=DATA_PATH)
# print(dataset["train"][0])

# 2. tokenize
def tokenize(text: str, add_eos_token=True):
    result = tokenizer(
        text,
        truncation=True,
        max_length=MAX_LENGTH,
        padding=False,
        return_tensors=None)
    # decide whether to append the eos_token
    if (result["input_ids"][-1] != tokenizer.eos_token_id
            and len(result["input_ids"]) < MAX_LENGTH
            and add_eos_token):
        result["input_ids"].append(tokenizer.eos_token_id)
        result["attention_mask"].append(1)
    result["labels"] = result["input_ids"].copy()
    return result

def preprocess(example: Dict, train_on_inputs: bool = False):
    prompt = example["input"]
    response = example["target"]
    text = PATTERN.format(prompt, response)
    tokenized_inp = tokenize(text)
    # if train_on_inputs is False, replace the prompt tokens in the labels with -100
    if not train_on_inputs:
        tokenized_prompt = tokenize(prompt, add_eos_token=False)
        prompt_tokens_len = len(tokenized_prompt["input_ids"])
        tokenized_inp["labels"] = [-100] * prompt_tokens_len + tokenized_inp["labels"][prompt_tokens_len:]
    return tokenized_inp

train_data = dataset["train"].shuffle().map(preprocess, remove_columns=["id", "input", "target"])
print(train_data[0])

# pad_to_multiple_of=8 pads each batch to a multiple of 8 tokens
collate_fn = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)

# 2. load the model
device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)}
# device_map pins the model to this rank's GPU; torch_dtype=torch.float16 loads the model in half precision
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.float16, device_map=device_map)

# 3. LoRA setup
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=LORA_R,                   # rank of the low-rank approximation in LoRA
    lora_alpha=LORA_ALPHA,      # scaling hyperparameter for the low-rank matrices (see earlier in the series)
    lora_dropout=LORA_DROPOUT,  # dropout of the LoRA layers
)
# wrap the model
model = get_peft_model(model, lora_config)
model.config.use_cache = False
old_state_dict = model.state_dict
model.state_dict = (
    lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
).__get__(model, type(model))
# print the trainable parameters of the model
model.print_trainable_parameters()

# 4. training arguments
args = TrainingArguments(
    output_dir=OUTPUT_DIR,                   # checkpoint directory
    per_device_train_batch_size=BATCH_SIZE,  # per-device batch size
    gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,  # number of gradient accumulation steps
    warmup_steps=100,
    num_train_epochs=EPOCHS,
    learning_rate=LEARNING_RATE,
    fp16=True,                               # mixed-precision training
    logging_steps=50,
    evaluation_strategy="no",                # no evaluation
    save_strategy="steps",
    save_steps=2000,                         # save a checkpoint every 2000 steps
    save_total_limit=5,                      # keep at most 5 checkpoints
    deepspeed=DS_CONFIG
)

# 5. train the model
trainer = Trainer(
    model=model,
    train_dataset=train_data,
    eval_dataset=None,
    args=args,
    data_collator=collate_fn
)
trainer.train()
model.save_pretrained("best_model")
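After training, the directory written by model.save_pretrained("best_model") contains only the LoRA adapter weights, not the full model. Below is a hedged sketch of loading the adapter back onto the base model for inference; the prompt, dtype, and device_map are assumptions, not part of the original post.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloomz-7b1-mt", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-7b1-mt")

# attach the trained LoRA adapter saved at the end of training
model = PeftModel.from_pretrained(base, "best_model")
model.eval()

prompt = "Summarize the advantages of ZeRO stage 2.\n"  # follows the "{}\n{}" pattern used in training
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))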
Original article by the CSDN blogger 摩登都市天空; do not repost without the author's permission. Original link: https://blog.csdn.net/zwqjoy/article/details/138274598
