Fine-tuning Qwen2.5-1.5B-Instruct with LLaMA-Factory in a Huawei Ascend NPU Docker environment

Building a LLaMA-Factory image for Huawei NPUs

https://hub.docker.com/r/hiyouga/llamafactory/tags

By default, the LLaMA-Factory project publishes prebuilt images.

CUDA image:

docker pull hiyouga/llamafactory:latest

NPU A2 image, targeting the Atlas A2 training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2) and the Atlas 800I A2 inference series (Atlas 800I A2):

docker pull hiyouga/llamafactory:latest-npu-a2

My environment is the Huawei Ascend 910B series, so I build the image myself.
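
Before building, it is worth confirming that the host driver stack is healthy. A quick check (assuming the Ascend driver and the npu-smi tool are installed on the host, which the container mounts below require anyway):

npu-smi info

The output should list the 910B chips along with their health status and memory usage.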

Download the LLaMA-Factory source code:

git clone https://github.com/hiyouga/LLaMA-Factory.git

Build the NPU image

Build and start the Docker container with docker-compose

Go into the docker-npu directory of the LLaMA-Factory project, which holds the Dockerfile and docker-compose.yaml:

cd LLaMA-Factory/docker/docker-npu

Build the image and start the container:

docker-compose up -d
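
To verify that the container came up, check its status and logs (the compose file names the container llamafactory, as the exec command below assumes):

docker-compose ps
docker logs -f llamafactory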

Enter the container:

docker exec -it llamafactory bash
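
Inside the container, a quick sanity check that the NPU and the PyTorch NPU backend are usable (assuming torch-npu was installed by the NPU Dockerfile, which is part of LLaMA-Factory's NPU extras):

npu-smi info
python -c "import torch, torch_npu; print(torch.npu.is_available())"

The second command should print True.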

Without docker-compose

Build the image directly with docker build (run from the LLaMA-Factory project root, since the build context is the current directory):

docker build -f ./docker/docker-npu/Dockerfile --build-arg INSTALL_DEEPSPEED=false --build-arg PIP_INDEX=https://pypi.org/simple -t llamafactory:latest .

Start the container:

docker run -dit \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-p 7860:7860 \
-p 8000:8000 \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
--shm-size 16G \
--name llamafactory \
llamafactory:latest

Enter the container:

docker exec -it llamafactory bash

Start only the Web UI

In the full command above, port 7860 serves the Web UI and port 8000 is reserved for the API. To launch just the Web UI:

docker run -dit --ipc=host \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -p 7860:7860 \
    --device /dev/davinci0 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    --name llamafactory \
    -e GRADIO_SERVER_NAME=0.0.0.0 \
    -e GRADIO_SERVER_PORT=7860 \
    llamafactory:latest \
    llamafactory-cli webui
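
If you also want the OpenAI-compatible API on port 8000, the same image can run LLaMA-Factory's API server. A minimal sketch, assuming a model has already been downloaded and mounted into the container at the path shown (the api subcommand and the API_PORT variable come from upstream LLaMA-Factory):

API_PORT=8000 llamafactory-cli api \
    --model_name_or_path /workspace/models/Qwen2.5-1.5B-Instruct \
    --template qwen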

Start fine-tuning the model

Create the local mount directories

mkdir -p ~/niuben/llama-factory-workspace/{models,data,saves}

Download the Qwen2.5-1.5B-Instruct model

https://modelscope.cn/models/Qwen/Qwen2.5-1.5B-Instruct/files

Install modelscope with pip:

pip install modelscope

Download the full model into a dedicated subdirectory, so the training config below can reference it unambiguously:

modelscope download --model Qwen/Qwen2.5-1.5B-Instruct --local_dir ~/niuben/llama-factory-workspace/models/Qwen2.5-1.5B-Instruct
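
A quick completeness check; the directory should contain config.json, the tokenizer files, and the safetensors weights:

ls -lh ~/niuben/llama-factory-workspace/models/Qwen2.5-1.5B-Instruct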

Start an interactive container (with --network=host, container ports are exposed directly, so no -p mapping is needed):

docker run -it --rm \
  --name llamafactory-npu \
  --device=/dev/davinci0 \
  --device=/dev/davinci_manager \
  --device=/dev/devmm_svm \
  --device=/dev/hisi_hdc \
  -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
  -v /usr/local/dcmi:/usr/local/dcmi \
  -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
  -v ~/niuben/llama-factory-workspace:/workspace \
  -e ASCEND_RT_VISIBLE_DEVICES=0 \
  --network=host \
  llamafactory:latest \
  /bin/bash
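
With the mount above, the host workspace appears inside the container as /workspace. The rest of this walkthrough assumes this layout:

/workspace/
├── models/Qwen2.5-1.5B-Instruct/   # base model downloaded earlier
├── data/                           # alpaca_gpt4_zh_50.json + dataset_info.json
├── saves/                          # LoRA output and merged model
├── train_qwen2_npu.yaml            # training config (next section)
└── export.yaml                     # merge/export config (later section)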

Fine-tuning configuration file

train_qwen2_npu.yaml

### model
model_name_or_path: /workspace/models/Qwen2.5-1.5B-Instruct
cache_dir: /workspace/models          # your mounted models directory
trust_remote_code: true

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all

### dataset
dataset: alpaca_gpt4_zh_50       # defined in dataset_info.json below
dataset_dir: /workspace/data     # directory holding the dataset and its description file
template: qwen
cutoff_len: 1024
max_samples: 1000

### output
output_dir: /workspace/saves/qwen2-1.5b-lora
overwrite_output_dir: true
logging_steps: 5
save_steps: 200

### training hyperparams
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 1e-4
num_train_epochs: 3
lr_scheduler_type: cosine
warmup_ratio: 0.1
fp16: true
flash_attn: sdpa

Dataset and dataset description

The dataset file, alpaca_gpt4_zh_50.json:

[
  {
    "instruction": "解释量子纠缠",
    "input": "",
    "output": "量子纠缠是一种量子力学现象,当两个或多个粒子在量子态上相互关联时,一个粒子的状态改变会立即影响另一个粒子的状态,无论它们相距多远。"
  },
  {
    "instruction": "写一首关于春天的诗",
    "input": "",
    "output": "春风轻拂柳梢绿,\n燕子归来呢喃声。\n桃花流水鲑鱼肥,\n草长莺飞二月天。"
  }
]

The dataset description file, dataset_info.json:

{
  "alpaca_gpt4_zh_50": {
    "file_name": "alpaca_gpt4_zh_50.json",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output"
    }
  }
}
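
Both files live in ~/niuben/llama-factory-workspace/data on the host, i.e. /workspace/data inside the container. A quick syntax check before training:

python -m json.tool /workspace/data/alpaca_gpt4_zh_50.json > /dev/null && \
python -m json.tool /workspace/data/dataset_info.json > /dev/null && echo OK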

Start the training run

No extra packages are needed for LoRA training here (bitsandbytes is a CUDA-oriented quantization library and is not used by this config):

ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train /workspace/train_qwen2_npu.yaml
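
While training runs, NPU utilization and memory usage can be watched from a second shell:

watch -n 1 npu-smi info

After the final step, the LoRA adapter (adapter_config.json and adapter_model.safetensors) is written to /workspace/saves/qwen2-1.5b-lora.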

Merge the LoRA weights into the base model

Merge/export configuration

export.yaml

### model
model_name_or_path: /workspace/models/Qwen2.5-1.5B-Instruct
adapter_name_or_path: /workspace/saves/qwen2-1.5b-lora
finetuning_type: lora
template: qwen

### export
export_dir: /workspace/saves/qwen2-1.5b-merged
export_size: 2
export_device: cpu   # cpu is strongly recommended, to avoid NPU OOM

Export the model

llamafactory-cli export /workspace/export.yaml
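
To smoke-test the merged model, LLaMA-Factory's interactive chat entry point can be pointed directly at the export directory; a minimal sketch:

llamafactory-cli chat \
  --model_name_or_path /workspace/saves/qwen2-1.5b-merged \
  --template qwen

Asking one of the training prompts (e.g. 解释量子纠缠) is a quick way to see whether the fine-tuned behavior survived the merge.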