December 2023 Archive

Abstract: Use text-generation-inference to set up the service. Run command: docker run --gpus all --shm-size 1g -p 3000:80 -v /data:/data ghcr.io/huggin… Read more
posted @ 2023-12-30 08:27 Daze_Lu Views (47) Comments (0) Recommended (0)
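The abstract above truncates the launch command. The lines below are a minimal sketch of what a full text-generation-inference invocation typically looks like, keeping the port and volume mapping quoted in the post; the image tag and the model id are assumptions following the upstream quickstart, not values taken from the post.

# Hedged sketch: placeholder model id and assumed image tag; port/volume
# mapping as quoted in the abstract above.
model=mistralai/Mistral-7B-Instruct-v0.1   # placeholder model id, not from the post
docker run --gpus all --shm-size 1g -p 3000:80 \
  -v /data:/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id $model

# Once the container is up, the server can be queried on the mapped port:
curl 127.0.0.1:3000/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 20}}'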
Abstract: Introduction: here we re-evaluate llama2 benchmarks to verify its performance. Datasets: in this blog, we'll test the following datasets shown in the ima… Read more
posted @ 2023-12-24 15:57 Daze_Lu Views (258) Comments (0) Recommended (0)
Abstract: Introduction: fine-tuning command for Mistral: CUDA_VISIBLE_DEVICES=1 nohup python src/train_bash.py \ --stage sft \ --do_train \ --m… Read more
posted @ 2023-12-19 09:26 Daze_Lu Views (103) Comments (0) Recommended (0)
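The fine-tuning command in the abstract above is cut off. The following is a hedged sketch of a complete LLaMA-Factory LoRA SFT invocation in the same spirit; the model path, dataset name, output directory, and hyperparameter values are placeholders, not values from the post.

# Hedged sketch: placeholders throughout; adjust to the actual model,
# registered dataset name, and hardware before running.
CUDA_VISIBLE_DEVICES=1 nohup python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path mistralai/Mistral-7B-v0.1 \
    --dataset my_dataset \
    --template mistral \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir output/mistral-sft-demo \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --fp16 \
    > train_mistral.log 2>&1 &

Here nohup together with the trailing & keeps the run alive after the SSH session closes; progress can be followed with tail -f train_mistral.log.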
Abstract: 1 Evaluate the medical model fine-tuned from llama. 1.1 Evaluation dataset: here is how to organize the dataset… Read more
posted @ 2023-12-19 06:44 Daze_Lu Views (19) Comments (0) Recommended (0)
Abstract: 1 Remote run: when you want to debug code on the server, remember the following settings. Interpreter: the server interpreter. Script: use the path on the server. Inpu… Read more
posted @ 2023-12-19 06:11 Daze_Lu Views (41) Comments (0) Recommended (0)
Abstract: 1 Introduction: in this blog, we will use 3 datasets to fine-tune our model using llama-factory. 2 Dataset preparation. 2.1 MedQA dataset (address): The… Read more
posted @ 2023-12-14 04:19 Daze_Lu Views (292) Comments (0) Recommended (0)
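As a rough illustration of the dataset-preparation step mentioned above, the snippet below writes a tiny MedQA-style file in the Alpaca-style instruction/input/output layout that llama-factory reads; the file name and the sample record are hypothetical, not taken from the post.

# Hypothetical example record in the instruction/input/output layout.
mkdir -p data
cat > data/medqa_demo.json <<'EOF'
[
  {
    "instruction": "Answer the following medical exam question.",
    "input": "A deficiency of which vitamin causes scurvy?",
    "output": "Vitamin C (ascorbic acid) deficiency causes scurvy."
  }
]
EOF
# The file then has to be registered under a dataset name in data/dataset_info.json
# so that it can be passed to the trainer via --dataset.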
Abstract: Implementation steps: display a red point on the screen; let the red point move left, right, and down; design the standard shapes of the blocks; let the block be r… Read more
posted @ 2023-12-05 02:41 Daze_Lu Views (21) Comments (0) Recommended (0)