LLaMa 3.1: rope_scaling error

These errors come from a version mismatch between some base libraries and the model:

Q: rope_scaling must be a dictionary with with two fields, name and factor, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}

pip install --upgrade transformers
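For context, the llama3-style rope_scaling dict that Llama 3.1 ships carries more fields than older transformers releases accept. A minimal sketch of the schema clash (the two-key check below mimics the spirit of the old validation; it is not transformers' actual code, and the field names are taken from the error message above):

```python
def old_style_rope_scaling_ok(rope_scaling: dict) -> bool:
    # Older transformers releases only accepted a dict with exactly
    # two keys ("type" and "factor"), so Llama 3.1's richer dict fails.
    return isinstance(rope_scaling, dict) and set(rope_scaling) == {"type", "factor"}

# The rope_scaling dict reported in the error message above.
llama31_rope_scaling = {
    "factor": 8.0,
    "low_freq_factor": 1.0,
    "high_freq_factor": 4.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3",
}

# Fails the old two-key schema, hence the ValueError; upgrading
# transformers replaces the validator with one that understands "llama3".
print(old_style_rope_scaling_ok(llama31_rope_scaling))
```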

Q: ImportError: cannot import name 'top_k_top_p_filtering' from 'transformers'

pip install --upgrade trl  
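For context, `top_k_top_p_filtering` was removed from recent transformers releases, which is why old trl versions fail to import it; upgrading trl picks up a build that no longer relies on that import. What the function did: mask logits outside the top-k set and outside the top-p (nucleus) probability mass. A pure-Python sketch of the idea (not the library implementation):

```python
import math

def top_k_top_p_filter(logits, top_k=0, top_p=1.0, filter_value=float("-inf")):
    """Mask logits outside the top-k set and the top-p nucleus (sketch)."""
    out = list(logits)  # copy; leave the caller's list untouched
    if top_k > 0:
        # Keep only the top_k largest logits.
        kth = sorted(out, reverse=True)[top_k - 1]
        out = [v if v >= kth else filter_value for v in out]
    if top_p < 1.0:
        # Nucleus filtering: keep the smallest set of highest logits
        # whose softmax mass reaches top_p.
        order = sorted(range(len(out)), key=lambda i: out[i], reverse=True)
        exps = [math.exp(out[i]) for i in order]  # exp(-inf) == 0.0
        total = sum(exps)
        cum, keep = 0.0, set()
        for i, e in zip(order, exps):
            keep.add(i)
            cum += e / total
            if cum >= top_p:
                break
        out = [v if i in keep else filter_value for i, v in enumerate(out)]
    return out
```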

Q: ImportError: Using the Trainer with PyTorch requires accelerate>=0.26.0:

pip install -U accelerate
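The version floor in the message (`accelerate>=0.26.0`) can be checked locally before retraining. A small sketch of that kind of guard (the helper names are ours, and the floor is taken from the error message; this is not transformers' internal check):

```python
from importlib.metadata import version, PackageNotFoundError

def meets_floor(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, field by field."""
    parse = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return parse(installed) >= parse(required)

def accelerate_ok(min_version: str = "0.26.0") -> bool:
    # Returns False if accelerate is missing or too old, in which
    # case `pip install -U accelerate` is the fix.
    try:
        return meets_floor(version("accelerate"), min_version)
    except PackageNotFoundError:
        return False
```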
posted @ 2024-10-28 11:05 EpicMoCN