axolotl-mistral fine-tuning

command & progress

CUDA_VISIBLE_DEVICES="0,1,2,3" python -m axolotl.cli.preprocess examples/mistral/lora-mps.yml
accelerate launch -m axolotl.cli.train examples/mistral/lora-mps.yml

dataset

daze-unlv/medmcqa_axolotl
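
The dataset is hosted on the Hugging Face Hub, so it can be pulled down and inspected before preprocessing. A minimal sketch using the datasets library; the "train" split name and the field layout printed here are assumptions, not something confirmed in this post:

# Minimal sketch: inspect the MedMCQA data used for the fine-tune.
# Assumes a default "train" split exists; field names may differ.
from datasets import load_dataset

ds = load_dataset("daze-unlv/medmcqa_axolotl", split="train")
print(ds)      # row count and column names
print(ds[0])   # one example, to check the prompt/answer format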

note

1. Before running the Mistral fine-tuning, run pip install --upgrade flash-attn to update flash-attn to 2.5.6.
2. Change the line control.should_training_stop = True to control.should_training_stop = False, otherwise training will stop early because of the high loss (see the sketch after this list).
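
For context on note 2: the flag is set inside a trainer callback that aborts the run when the logged loss gets too high. Below is only a rough sketch of that kind of callback, written against the Hugging Face transformers TrainerCallback API; the class name and threshold are made up for illustration and are not axolotl's actual code.

# Rough sketch of a loss-watchdog style callback (illustrative only, not axolotl's code).
# Setting control.should_training_stop = True aborts training; leaving it False lets it continue.
from transformers import TrainerCallback

class HighLossWatchdog(TrainerCallback):          # hypothetical class name
    def __init__(self, loss_threshold=10.0):      # hypothetical threshold
        self.loss_threshold = loss_threshold

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs and logs.get("loss", 0.0) > self.loss_threshold:
            # note 2 above flips this assignment to False so training keeps going
            control.should_training_stop = True
        return control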

posted @ 2024-03-06 07:26  Daze_Lu