455 Commits

Author SHA1 Message Date
Yaowei Zheng
ca75f1edf3 [model] fix vlm utils (#8388) 2025-06-17 01:08:49 +08:00
Yaowei Zheng
3a3bae1cfe [data] fix qwen2vl pos ids (#8387) 2025-06-17 00:48:54 +08:00
Yaowei Zheng
31874e4f62 [version] release v0.9.3 (#8386) 2025-06-16 19:21:32 +08:00
Yaowei Zheng
9a2d1dec62 [assets] update wechat (#8385) 2025-06-16 18:23:22 +08:00
Aman Gupta
8e4ac78607 [trainer] Add LD-DPO objective (#8362) 2025-06-12 16:10:38 +08:00
Yaowei Zheng
44f1b9b5ad [misc] tiny fixes (#8348) 2025-06-10 15:30:58 +08:00
阿丹(adan)
b41697c9b6 [model] support MiniCPM4 (#8314) 2025-06-10 14:38:39 +08:00
Kingsley
31bca4d172 [model] support Mistral3.1 small 2503 (#8335) 2025-06-09 10:37:42 +08:00
Chenhao Zhang
fa4360dca7 [assets] Add awesome works using LLaMA-Factory (#8333) 2025-06-09 10:21:17 +08:00
Yaowei Zheng
9acab4949d [model] fix model generate (#8327) 2025-06-07 08:47:50 +08:00
Vivek Iyer
32b4574094 [model] pushing FFT with unsloth (#8325)
Co-authored-by: viyer <vivek_iyer2@apple.com>
2025-06-07 08:20:58 +08:00
Yaowei Zheng
03a93ec513 [data] fix empty template (#8312) 2025-06-06 13:50:50 +08:00
Yaowei Zheng
bcb6b94658 [setup] fix uv (#8311) 2025-06-06 11:54:15 +08:00
Yaowei Zheng
c0710be6d7 [assets] update readme (#8303) 2025-06-05 23:23:15 +08:00
Kingsley
212a8006dc [tests] add visual model save test (#8248)
Co-authored-by: Yaowei Zheng <hiyouga@buaa.edu.cn>
2025-06-05 20:38:01 +08:00
Yaowei Zheng
ed70f8d5a2 [assets] fix npu docker (#8298) 2025-06-05 19:09:20 +08:00
Butui Hu
1a33d65a56 [launcher] Add elastic and fault-tolerant training support (#8286)
Signed-off-by: Butui Hu <hot123tea123@gmail.com>
2025-06-05 16:40:03 +08:00
Kingsley
69c9e379d5 [script] add script description for qwen_omni_merge (#8293) 2025-06-05 13:22:01 +08:00
Yaowei Zheng
e9fe9cee29 [assets] update docker files (#8291) 2025-06-04 23:30:46 +08:00
Yaowei Zheng
cb7ab69783 [assets] update readme (#8288) 2025-06-04 17:46:12 +08:00
Yaowei Zheng
c1ed76e109 [assets] add icon (#8276) 2025-06-03 20:36:21 +08:00
Kingsley
c224d17cb2 [data] support nested images input for videos (#8264) 2025-06-03 20:26:29 +08:00
Ze-Yi LIN
c4e51d40e0 [tracking] swanlab add llamafactory tag (#8258) 2025-06-03 18:42:29 +08:00
Kingsley
554e89ff02 [model] add MIMO_VL (#8249) 2025-06-01 03:54:54 +08:00
Yaowei Zheng
fee2122f09 [deps] upgrade transformers to 4.52.4 (#8245) 2025-05-31 16:51:40 +08:00
Akshat Sehgal
c7e63bead7 [model] add smollm2 support (#8220) 2025-05-31 16:29:01 +08:00
hoshi-hiyouga
3e1a7fcb9c [assets] update readme (#8235) 2025-05-30 16:52:12 +08:00
Kingsley
2aaede8ef4 [scripts] specify model class for qwen_omni merge (#8227) 2025-05-30 14:20:12 +08:00
hoshi-hiyouga
42bebc341d [model] add deepseek 0528 models (#8215) 2025-05-29 21:37:07 +08:00
hoshi-hiyouga
83a9ff5853 [assets] fix docker images (#8203) 2025-05-28 22:26:05 +08:00
yzoaim
519bab86e6 [workflow] auto push docker images (#8181)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-05-28 20:21:15 +08:00
hoshi-hiyouga
dbc9f5a5d9 [assets] update Dockerfile (#8201) 2025-05-28 20:20:59 +08:00
hoshi-hiyouga
9b152d9cb5 [webui] fix skip args (#8195) 2025-05-28 18:11:07 +08:00
Youngwoo Kim
6c3cd400b5 [data] Reading files from cloud is broken (#8182) (#8183) 2025-05-28 15:50:44 +08:00
hoshi-hiyouga
4d3ffa2ec4 [assets] fix docker image (#8180) 2025-05-27 19:01:31 +08:00
hoshi-hiyouga
2bf8e993ab [data] fix shared file system (#8179) 2025-05-27 18:36:03 +08:00
hoshi-hiyouga
d4a413eb37 [webui] add extra args to export (#8178) 2025-05-27 18:25:31 +08:00
hoshi-hiyouga
00974a3169 [assets] update docker files (#8176) 2025-05-27 18:15:23 +08:00
hoshi-hiyouga
46ccf84aaa [webui] add infer extra args (#8167) 2025-05-27 12:04:00 +08:00
hoshi-hiyouga
07343ca83d [webui] fix input args (#8162) 2025-05-27 02:05:54 +08:00
hoshi-hiyouga
3c7dc66a92 [model] add smollm2 and medgemma (#8161) 2025-05-26 23:19:58 +08:00
hoshi-hiyouga
ba032828e2 [deps] upgrade transformers (#8159) 2025-05-26 22:03:58 +08:00
Akshat Sehgal
501e7d8a8f feat: add smollm support (#8050) 2025-05-26 19:47:54 +08:00
wangzhan
12292e4283 [api] support repetition_penalty and align presence_penalty with OpenAI Client (#7958) 2025-05-26 18:45:11 +08:00
Kingsley
f08b748199 [data] fix internvl plugin when using PIL images (#8129) 2025-05-22 01:32:59 +08:00
hoshi-hiyouga
d2a3036a23 [misc] update data readme (#8128) 2025-05-21 22:41:18 +08:00
hoshi-hiyouga
9ae17cd173 [deps] update to transformers 4.52 (#8125) 2025-05-21 05:16:18 +08:00
hoshi-hiyouga
56926d76f9 [data] llama3 multi tool support (#8124) 2025-05-21 02:01:12 +08:00
hoshi-hiyouga
c2f6f2fa77 [assets] update readme (#8110) 2025-05-20 02:44:18 +08:00
hoshi-hiyouga
9b5baa97f0 [data] qwen3 fixes (#8109) 2025-05-20 02:00:30 +08:00
hoshi-hiyouga
45030ff803 [model] switch to gptqmodel (#8108) 2025-05-19 22:25:40 +08:00
piamo
bc7f00f2c7 [model] update rope kwargs for yarn (#8101) 2025-05-19 20:07:54 +08:00
hoshi-hiyouga
beae231af6 [doc] add no build isolation (#8103) 2025-05-19 19:25:13 +08:00
Ma, Xiaochen
a0b4b91577 [trainer] fix KeyError at end of pretrain (#8099) 2025-05-19 18:01:26 +08:00
Biao Wang
90492f3582 [misc] fix cli (#8095)
Co-authored-by: wangbiao11 <wangbiao11@baidu.com>
2025-05-19 17:59:39 +08:00
Saiya
ab41f7956c [infer] support lora adapter for SGLang backend (#8067) 2025-05-16 23:33:47 +08:00
Kingsley
52b23f9e56 [data] add forward compatibility for video_utils in Transformers 4.52.0 (#8077) 2025-05-16 17:41:04 +08:00
Eric Tang
a9aa392ba4 [data] support loading folder from remote (#8078) 2025-05-16 15:35:38 +08:00
Shawn Tao
0b773234e5 [infer] Modify vllm_infer.py to batch preprocessing to avoid the 'too many open files' error (#8051)
Co-authored-by: Kingsley <82590017+Kuangdd01@users.noreply.github.com>
2025-05-15 10:54:35 +08:00
hoshi-hiyouga
712c57f3b4 [assets] update windows installation (#8042) 2025-05-13 17:01:56 +08:00
hoshi-hiyouga
dc080399c6 [model] add seed coder and qwen3 quant models (#8039) 2025-05-13 15:59:55 +08:00
hoshi-hiyouga
68fc068cab [data] fix kimi vl template (#8015) 2025-05-11 20:45:19 +08:00
Kingsley
9620825892 [scripts] add video params for vllm infer (#7992) 2025-05-09 21:16:52 +08:00
yunhao-tech
26cbb03a5f [data] Avoid repetitive tool description wrap (#8000)
Co-authored-by: chenyunhao <chenyunhao@wps.cn>
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-05-09 21:16:37 +08:00
tpoisonooo
5f4b793e04 [docs] add GraphGen (#7974) 2025-05-07 12:23:11 +02:00
hoshi-hiyouga
994ab6424a [misc] update liger kernel patch (#7966) 2025-05-06 20:32:16 +02:00
hoshi-hiyouga
aa9ed4db59 [example] update examples (#7964) 2025-05-06 17:24:25 +02:00
Kingsley
ef86a53063 [model] add mimo7b (#7946) 2025-05-06 17:10:30 +02:00
hoshi-hiyouga
bf0286e1e3 [misc] fix qwen2 omni (#7962) 2025-05-06 15:39:13 +02:00
hoshi-hiyouga
ce7032e1b3 [model] add qwen2 omni 3b (#7945) 2025-05-03 16:36:51 +08:00
Eric Chen
5763017cea [assets] Warp Support README Update (#7887) 2025-05-02 00:08:48 +08:00
hoshi-hiyouga
13b05e74f1 [hparam] add enable think argument (#7928) 2025-04-30 17:21:30 +08:00
hoshi-hiyouga
c566e39b7d [data] fix base plugin (#7924) 2025-04-30 16:28:05 +08:00
hoshi-hiyouga
052ca871bd [data] optimize qwen3 loss computation (#7923) 2025-04-30 16:18:00 +08:00
hoshi-hiyouga
73198a6645 [misc] fix uv (#7913) 2025-04-30 07:45:03 +08:00
hoshi-hiyouga
d4ee44bdef [data] add eval_on_each_dataset arg (#7912) 2025-04-30 06:56:43 +08:00
hoshi-hiyouga
6d2cde43e7 [data] replace eos token for base models (#7911) 2025-04-30 06:52:28 +08:00
hoshi-hiyouga
11295cdea0 [data] improve mm plugin (#7910) 2025-04-30 06:34:28 +08:00
hoshi-hiyouga
98f23c6584 [model] add qwen3 (#7885) 2025-04-29 09:34:05 +08:00
Kingsley
db9559456c [data] fix qwen2.5 omni template (#7883) 2025-04-29 00:58:23 +08:00
hoshi-hiyouga
3ae5da2a04 [model] fix dsv3 leaf node (#7879) 2025-04-28 18:11:09 +08:00
hoshi-hiyouga
d173cb50f5 [data] fix qwen2 omni plugin (#7875) 2025-04-28 14:22:41 +08:00
zhaop-l
df27d7e48a [trainer] make projector trainable in freeze training (#7872)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-28 13:19:37 +08:00
hoshi-hiyouga
bb5b83352b [data] fix minicpmo vllm infer (#7870) 2025-04-28 01:59:53 +08:00
Kingsley
1157f4e246 fix attn patch for kimivl (#7867) 2025-04-27 23:12:28 +08:00
Eric Tang
ef03832cd4 [ray] add storage filesystem to ray config (#7854) 2025-04-27 22:12:40 +08:00
hoshi-hiyouga
2233b739fa [model] fix vit gradient checkpointing (#7830) 2025-04-23 22:48:48 +08:00
hoshi-hiyouga
091d2539e8 Merge commit from fork 2025-04-23 16:38:27 +08:00
hoshi-hiyouga
c1a7f2ebb2 [model] fix moe zero3 (#7826) 2025-04-23 15:30:49 +08:00
Kingsley
fa0eb91f1f [data] fix internvl plugin (#7817) 2025-04-23 00:58:22 +08:00
hoshi-hiyouga
49f9ed0232 [assets] update model readme (#7804) 2025-04-22 16:43:56 +08:00
Kingsley
2a564c25d1 [model] add arch check for InternVL (#7803) 2025-04-22 16:38:05 +08:00
Kingsley
7500e761d3 [misc] update internvl constants (#7801) 2025-04-22 15:53:08 +08:00
hoshi-hiyouga
fddcd43c88 [trainer] support early stop (#7797) 2025-04-22 01:59:33 +08:00
hoshi-hiyouga
0e4ce039ee [data] improve mmplugin (#7795) 2025-04-22 01:25:33 +08:00
hoshi-hiyouga
b07628dea5 [example] add bash usage (#7794) 2025-04-22 00:25:51 +08:00
Juanxi Tian
12ada72ed4 [trainer] Add Muon Optimizer (#7749)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:38:37 +08:00
hoshi-hiyouga
416853dd25 [parser] support omegaconf (#7793) 2025-04-21 23:30:30 +08:00
Changrui Chen
bd7bc31c79 [data] Fix wrong position ids with packed attention masks (#7754)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:19:36 +08:00
flashJd
0ac641326b [misc] fix new tokens adding (#7253)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:19:02 +08:00
ddddng
c5ba9106ec [model] fix gemma3 export (#7786)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:07:11 +08:00
Sachin Beldona
3b2d3794a5 [misc] fix bug in constant (#7765)
Co-authored-by: Sachin Beldona <sbeldona@cs.cmu.edu>
2025-04-21 23:06:31 +08:00
hoshi-hiyouga
b605c20768 [assets] update wechat (#7792) 2025-04-21 21:29:42 +08:00
hoshi-hiyouga
39169986ef [trainer] fix pt loss (#7748)
* fix pt loss

* robust

* fix

* test
2025-04-17 03:15:35 +08:00
hoshi-hiyouga
86ebb219d6 [breaking] bump transformers to 4.45.0 & improve ci (#7746)
* update ci

* fix

* fix

* fix

* fix

* fix
2025-04-17 02:36:48 +08:00
hoshi-hiyouga
d222f63cb7 [infer] set env for vllm ascend (#7745) 2025-04-17 01:08:55 +08:00
Kingsley
2e518f255f [model] support intern-VL 2.5-3 series (#7258)
* add internvl and rebase

* fix for internvl2&3

* remove lines

* fix video_inputs & lint

* nit

* add constants

* remove lines

* fix

* fix error

* pass ci

* pass ci

* skip internvl & nit
2025-04-17 00:31:30 +08:00
ENg-122
8f88a4e6a4 [misc] improve entrypoint (#7345)
* Just tidying up the entrypoint code, since there were too many if-else branches

* Update cli.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-16 21:48:23 +08:00
leo-pony
b9263ff5ac [infer] support vllm-ascend (#7739) 2025-04-16 20:06:47 +08:00
hoshi-hiyouga
ee2ab093a7 [api] fix chat messages (#7732) 2025-04-15 16:39:08 +08:00
hoshi-hiyouga
3df021d4d7 [deps] upgrade vllm (#7728) 2025-04-15 14:57:40 +08:00
Joe Schoonover
e252abf051 [docker] patch docker-rocm (#7725)
* Update Dockerfile

* Fix typo

* Fix syntax for /bin/sh conditional

* Add build args to docker-compose

* Change shell to /bin/bash

This is required for "==" syntax in conditional string comparison
2025-04-15 13:36:39 +08:00
hoshi-hiyouga
1134baeedd [assets] update model readme (#7724) 2025-04-15 00:41:09 +08:00
Kingsley
2101399c94 [model] Support Kimi_VL thinking/instruct (#7719)
* add kimi_vl

* patch config

* check version

* Update mm_plugin.py

* Update mm_plugin.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-15 00:21:58 +08:00
hoshi-hiyouga
3f91a95250 [misc] fix env vars (#7715) 2025-04-14 16:04:04 +08:00
hoshi-hiyouga
7c61b35106 [misc] upgrade cli (#7714) 2025-04-14 15:41:22 +08:00
hoshi-hiyouga
f518bfba5b [deps] upgrade transformers (#7704) 2025-04-13 18:11:34 +08:00
Yuxuan Zhang
8162f94db5 [model] add GLM-4-0414 (#7695)
* Update README_zh.md

* update
2025-04-13 17:10:45 +08:00
hoshi-hiyouga
1f0c52b73c [deps] fix uv conflicts (#7686)
* fix #7678

* Update setup.py

* Update tests.yml

* Update publish.yml

* Update Makefile
2025-04-11 18:02:24 +08:00
Eric Tang
a8caf09c7f [data] support for specifying a dataset in cloud storage (#7567)
* add support for loading datasets from s3/gcs

* add comments to readme

* run linter and address comments

* add option to pass in kwargs to ray init (i.e. runtime env)

* address comment

* revert mixed up changes
2025-04-10 11:31:35 +08:00
Eric Tang
bb8d79bae2 [ray] allow for specifying ray.init kwargs (i.e. runtime_env) (#7647)
* ray init kwargs

* Update trainer_utils.py

* fix ray args

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-10 11:31:05 +08:00
Dain Kim
1c436c9f25 [bugfix] enable_gemma_liger_kernel (#7660)
- The `enable_liger_kernel` function for the Gemma model series was not executed due to the existing `if` statement in the code.
- Changed the line to an `elif` statement so that the `apply_liger_kernel` function is executed properly.

resolved: #7628
2025-04-10 11:27:30 +08:00
jilongW
1b0934bccb [misc] fix cuda warn on intel GPU (#7655) 2025-04-09 21:37:54 +08:00
hoshi-hiyouga
4eec541857 [data] add coig-p dataset (#7657) 2025-04-09 21:18:25 +08:00
hoshi-hiyouga
89a4f9ec7f [assets] update readme (#7654) 2025-04-09 18:27:38 +08:00
hoshi-hiyouga
1abd71b551 [assets] update readme (#7644) 2025-04-09 01:06:06 +08:00
Kingsley
349c56c51c [data] Fix bugs of use_audio_in_video in Qwen2.5 Omni (#7638)
* cache _mm_inputs

* nit

* support for use_audio_in_video

* remove cache

* fix data

* Update mllm_video_audio_demo.json
2025-04-08 18:40:10 +08:00
Shawn Tao
acb09fa3a3 [trainer] fix key error (#7635) 2025-04-08 18:39:50 +08:00
Adarsh Shirawalmath
f75b91077b [sglang] support transformers 4.51.0 (#7639) 2025-04-08 18:39:23 +08:00
hoshi-hiyouga
c3c0efbaa0 [misc] fix packing and eval plot (#7623) 2025-04-07 18:20:57 +08:00
hoshi-hiyouga
5115dc8c7f [assets] update readme (#7612) 2025-04-06 13:58:49 +08:00
hoshi-hiyouga
831e7f1cfd [model] add llama4 (#7611) 2025-04-06 13:42:31 +08:00
Kingsley
d4cfa9507e [data] fix qwen2.5 omni plugin (#7578)
* specific entry

* Update mm_plugin.py

* fix fps cal

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-02 23:58:39 +08:00
Kingsley
d32c6c014d [data] fix qwen2.5 omni plugin (#7573)
* align key with qwen2vl

* nit && change scripts
2025-04-02 21:28:52 +08:00
gechengze
7b9deb9410 [trainer] fix batch processing in PPO trainer (#7576) 2025-04-02 21:17:48 +08:00
hoshi-hiyouga
5e22597ff1 [infer] vllm video/audio inference (#7566) 2025-04-02 02:27:04 +08:00
hoshi-hiyouga
2bfcad2394 [model] fix kv cache (#7564) 2025-04-01 23:07:46 +08:00
Yu Shi Jie
a13b1bb49a [model] fix use_cache patching for gemma3 multimodal (#7500) 2025-04-01 16:06:48 +08:00
Ritesh Goru
d10467d178 [data] specify position_ids in PackedSupervisedDatasetProcessor for neat_packing (#7318)
* use position_ids for neat_packing with fa2

* revert fa2 changes
2025-04-01 16:03:13 +08:00
taoharry
aac70663fd [webui] fix launch with proxy (#7332) 2025-04-01 15:52:56 +08:00
Billy Cao
00409ff28a [data] shard the dataset to allow multiprocessing when streaming is enabled (#7530)
* Shard the dataset when streaming to allow multiprocessing

* Allow user to not set dataset_shards to ensure backward compatibility
2025-04-01 15:36:23 +08:00
Hao
d70b3b4bc5 [trainer] new kto mismatch pair creation strategy (#7509) 2025-04-01 15:21:53 +08:00
hoshi-hiyouga
e76eba051d [data] fix qwen2.5 omni collator (#7553) 2025-04-01 00:15:12 +08:00
Kingsley
7eed496336 [model] add Qwen2.5-Omni model (#7537)
* preserve image_sizes

* preserve image_sizes

* init plugin

* support audio-text2text lora

* nit

* support image/video-text2text, audio-text2text

* remove args

* remove lines

* add docs && nit

* remove some comments

* fix && add merge part script

* add license
2025-03-31 20:39:35 +08:00
hoshi-hiyouga
0f8296626a [deps] pin pydantic to 2.10.6 (#7546) 2025-03-31 14:42:28 +08:00
Kingsley
8da1d2fa71 [data] fix pixtral plugin (#7505)
* preserve `image_sizes`

* add comments
2025-03-27 17:06:40 +08:00
Xu-pixel
b578a7d5b6 [3rdparty] support swanlab lark notification (#7481) 2025-03-27 01:52:01 +08:00
Kdump
24afceddb7 [trainer] fix wsd scheduler (#7304)
* [trainer] Warmup_stable_decay supports setting the number of stable and decay steps according to the warmup_ratio

* Update trainer_utils.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-26 15:25:02 +08:00
hoshi-hiyouga
0583d06676 [model] add qwen2vl 32b & upgrade peft (#7469)
* add qwen2vl 32b

* fix ci

* upgrade peft to 0.15

* fix ci

* fix ci
2025-03-25 12:15:58 +08:00
GuoCoder
ec6a261568 [model] fix lora on quant models (#7456)
Co-authored-by: root <root@ai>
2025-03-25 11:59:46 +08:00
Xiaosu Zhu
6b3b97c738 [misc] update liger-kernel's monkey patch (#7453)
* Update liger_kernel.py

* Update setup.py
2025-03-25 11:58:52 +08:00
AbdelKarim ELJANDOUBI
6d3748f727 [misc] enable liger kernel for gemma3 text and paligemma (#7466)
* add gemma3 text

* add paligemma (1,2 and 2 mix)
2025-03-25 09:27:43 +08:00
Kenny Lam
7c890170e3 [misc] enable liger kernel for gemma3 (#7462) 2025-03-24 19:09:59 +08:00
hoshi-hiyouga
ca42c0c406 [assets] fix gemma3 readme (#7449) 2025-03-24 10:31:25 +08:00
hoshi-hiyouga
7203365b80 [trainer] fix vlm loss for transformers 4.49 (#7448) 2025-03-24 10:24:05 +08:00
rumichi
3612946dd9 [docker] upgrade to torch 2.6 (#7442) 2025-03-23 21:18:08 +08:00
hoshi-hiyouga
3aa4f32e9c [misc] fix ci (#7441)
* fix ci

* improve ci
2025-03-23 21:09:35 +08:00
hoshi-hiyouga
304796b803 [misc] fix license (#7440) 2025-03-23 19:31:56 +08:00
SnowFox4004
7cfd6e4bb0 [scripts] support compute score on vllm's predictions (#7419)
* enable manual bleu&rouge eval by adding `scripts/eval_bleu_rouge.py`

* added libraries check

* update: use the datasets library's multiprocessing to speed up processing

* update:
- use fire.Fire
- reformat the code

* Update eval_bleu_rouge.py: correctly uses fire

Deleted the code that used sys.argv

* Update eval_bleu_rouge.py

---------

Co-authored-by: SnowFox4004 <manba@out>
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-23 19:21:01 +08:00
hoshi-hiyouga
05b19d6952 [deps] upgrade transformers to 4.50.0 (#7437)
* upgrade transformers

* fix hf cache

* fix dpo trainer
2025-03-23 17:44:27 +08:00
hoshi-hiyouga
919415dba9 [deps] upgrade vllm to 0.8 (#7436) 2025-03-23 14:32:22 +08:00
Guo, Quan
a959c2a509 [misc] fix sglang deps (#7432)
* feat: Add transformer version requirement for sglang

* feat: add srt to sglang which is required for running sglang

Other options are srt_hip, srt_xpu, srt_npu, srt_hpu, srt_cpu, for different computation architectures.
2025-03-23 14:07:10 +08:00
Eric Tang
db0a08db6f [3rdparty] fix redundant process group destroy for ray (#7395)
* fix redundant process group destroy for ray

* Update tuner.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-21 10:56:47 +08:00
hoshi-hiyouga
a306f0f5a2 [version] fix minicpmo (#7378) 2025-03-20 16:59:31 +08:00
hoshi-hiyouga
63752fccf7 [assets] update wechat (#7361) 2025-03-18 21:31:09 +08:00
hoshi-hiyouga
1f9773395b [misc] set dev version (#7351) 2025-03-18 00:10:53 +08:00
hoshi-hiyouga
128b5b12b3 [data] fix template (#7349) 2025-03-17 23:45:20 +08:00
hoshi-hiyouga
d5915a7dd7 [assets] update videos (#7340)
* Update README.md

* Update README_zh.md
2025-03-17 15:48:02 +08:00
Hertz
ec1154662b [model] support hunyuan 7b (#7317)
* [Model] supported tencent-hunyuan model

* [Model] supported tencent-hunyuan model (fix)

* [Model] supported tencent-hunyuan model (fix)
2025-03-15 20:55:24 +08:00
Qiaolin Yu
a44a53ebec [inference] support sglang backend (#7278)
* Mimic SGLang offline Engine

* Add more tests and args

* Pass all current tests

* Clean Code

* fix sample_params

* clean code

* Fix Stream Chat

* change sglang from engine mode to server mode

* fix

* Fix Review Issues

* Use SGLang Built-In Utilities

* Fix test SGLang

* Some Doc Issue

* fix sglang engine

* add readme

---------

Co-authored-by: Jin Pan <jpan236@wisc.edu>
Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
2025-03-15 04:37:58 +08:00
hoshi-hiyouga
93e6184cbe [data] gemma3 plugin pan and scan (#7294)
* gemma3 pan and scan

* add test case

* fix test
2025-03-13 23:29:23 +08:00
hoshi-hiyouga
0be0d7796a [assets] update video (#7287) 2025-03-13 18:45:47 +08:00
Ritesh Goru
480369a9f2 [data] efficient 4d_attention_mask creation in neat_packing (#7272) 2025-03-13 03:31:12 +08:00
hoshi-hiyouga
650a9a9057 [misc] update format (#7277) 2025-03-13 02:53:08 +08:00
hoshi-hiyouga
4b9d8da5a4 [model] support gemma3 (#7273) 2025-03-13 01:35:23 +08:00
hoshi-hiyouga
e6159ad730 [misc] upgrade deps (#7257) 2025-03-12 00:33:47 +08:00
hoshi-hiyouga
264538cb26 [misc] upgrade format to py39 (#7256) 2025-03-12 00:08:41 +08:00
hoshi-hiyouga
5995800bce [ci] update workflow (#7255) 2025-03-11 22:57:49 +08:00
hoshi-hiyouga
bf8b483186 [core] release v0.9.2 (#7254) 2025-03-11 22:42:23 +08:00
hoshi-hiyouga
e2299e261b Merge pull request #7242 from hiyouga/hiyouga/release
[release] release v0.9.2

Former-commit-id: 6b25268990bf225d84e29d4067595cf720fa12d8
2025-03-11 15:28:45 +08:00
hoshi-hiyouga
8a44dce326 Merge pull request #7247 from hiyouga/hiyouga/commit
[misc] support print commit info

Former-commit-id: 0f7ec4f8529a5d7ea2153b881335821038307bb7
2025-03-11 15:28:04 +08:00
hoshi-hiyouga
6d9233833b Merge pull request #7244 from hiyouga/hiyouga/token
[data] avoid exit after saving preprocessed data

Former-commit-id: dcbf01b0035062fa14187e5bdbb925080d349501
2025-03-11 15:17:15 +08:00
hiyouga
d019603835 support commit info
Former-commit-id: a7d89a6dc10579deaf9f45825cc18405a27cade6
2025-03-11 15:13:59 +08:00
hiyouga
478e8194d9 remove exit in preprocess
Former-commit-id: f369b6ef41ffd9586ba568b88c5ff32a1af4bace
2025-03-11 15:08:25 +08:00
hiyouga
1890d3dafe release v0.9.2
Former-commit-id: e7ed1782d4a006400de6fc0f864abd01f7fadeea
2025-03-11 14:49:13 +08:00
hoshi-hiyouga
522a3e8493 [infer] fix vllm args (#7235)
Former-commit-id: 999be5b4512890b8cf4f45874a77e35cf35626f5
2025-03-11 01:15:35 +08:00
Ze-Yi LIN
18968405d0 [tracking] add swanlab_logdir param (#7219)
* feat: add swanlab_logdir param

* fix

Former-commit-id: 9215ad488b6ac6cd57fe8fa4acdacceb63f68ca5
2025-03-11 00:53:07 +08:00
hoshi-hiyouga
71a1c1321a [config] update args (#7231)
Former-commit-id: f71a901840811bf560df671ec63a146ff99140c6
2025-03-10 23:04:43 +08:00
hoshi-hiyouga
cf58a6d860 [config] fix export max len (#7230)
Former-commit-id: 211c0b3e8f3340acd2fae1762d9152a09f19ba34
2025-03-10 16:46:08 +08:00
hoshi-hiyouga
9adc0a2c3f [assets] update readme (#7209)
Former-commit-id: d1631b38dad9ba3d41aebbb00e3500eb79b9e8e9
2025-03-07 17:27:49 +08:00
hoshi-hiyouga
16419b2834 [data] fix loader (#7207)
* fix dataloader

* add test case

* fix type

* fix ci

* fix ci

* fix ci

* disable overwrite cache in ci

Former-commit-id: e84af0e140b1aafd1a6d6fe185a8e41c8fc5f831
2025-03-07 17:20:46 +08:00
hoshi-hiyouga
82a2bac866 [misc] fix ds config (#7205)
Former-commit-id: b478fa1d9de1858075769f86f57126fde92db813
2025-03-07 15:21:28 +08:00
ZhangChuanhui
151ef48b40 [data] fix function formatter (#7201)
Co-authored-by: zhangchuanhui <zhangchal@digitalchina.com>
Former-commit-id: 3efb32b986170d2839e526640f85ba230715879a
2025-03-07 15:17:23 +08:00
hoshi-hiyouga
a255c3a476 [misc] fix cli (#7204)
Former-commit-id: 999f57133ca163c7108d2d5ee8194eca9b2109b4
2025-03-07 15:01:18 +08:00
hoshi-hiyouga
f4ec4fa6ad [script] fix vllm version (#7193)
Former-commit-id: ababdde597b2b9bf0ab3f30f036bc8d97de07f03
2025-03-06 17:14:17 +08:00
hoshi-hiyouga
2635794727 [webui] support escape html (#7190)
Former-commit-id: cf9840374f171359c828b0d6f7a2aa9893c8f701
2025-03-06 16:52:21 +08:00
hoshi-hiyouga
d2f845d70d [deps] upgrade vllm (#7183)
Former-commit-id: 37678a3d64668c3b4a4bfefc054e3b9b40427c1a
2025-03-06 15:25:08 +08:00
hoshi-hiyouga
bb8aba5abf [data] fix mm template (#7181)
Former-commit-id: 648616d473c81d393592806307e3e25b159cb278
2025-03-06 15:18:32 +08:00
hoshi-hiyouga
9f16c50155 [model] add QwQ 32b (#7179)
Former-commit-id: 8897e48b8cd55407812453ddd4ff98ac7bdc4e91
2025-03-06 11:58:36 +08:00
Ze-Yi LIN
25bb9f5ad9 [trainer] fix swanlab callback (#7176)
Former-commit-id: 6d9acf4bd30db24499118aee16bd19cb19ba9e3d
2025-03-06 00:33:37 +08:00
hoshi-hiyouga
7b985f55db [trainer] update config (#7174)
Former-commit-id: 9f535d0e3c4ee3cd0f1b65218c2eee5d03f43c6f
2025-03-05 23:32:54 +08:00
sirui.li
fd0357a26d [data] fix qwen2audio plugin (#7166)
* Update pairwise.py

[data] repair multimodal model DPO training

* Update pairwise.py

[data] repair multimodal model DPO training using deepcopy

* Update pairwise.py

* Update mm_plugin.py

Former-commit-id: 86763dfdb8e9e5668c1ddd7e924e4be76bf78368
2025-03-05 18:03:36 +08:00
hoshi-hiyouga
31f9daa362 [data] use bicubic resampler (#7143)
Former-commit-id: c708f19ab0ab57526134952afddaa90aae8decbf
2025-03-04 00:17:06 +08:00
hoshi-hiyouga
15ea576246 [webui] fix webui (#7142)
Former-commit-id: d07281f8a45ad8a38d390181d01dcadbcf9aa1b9
2025-03-04 00:01:49 +08:00
rabbit
19a6916d80 [data] bailing template (#7117)
* add bailing template

* add bailing template

* add bailing template

---------

Co-authored-by: chengshiwen.csw@antgroup.com <chengshiwen.csw@antgroup.com>
Former-commit-id: 4a36f5e0abb5a63f4b3b81560bb1ad0e6832d379
2025-03-03 15:33:22 +08:00
hoshi-hiyouga
585c475f71 [inference] fix hf_engine (#7120)
Former-commit-id: f8cf5319cb5d6e06a1b0d8b8db2b678627f2271e
2025-03-01 05:22:49 +08:00
hoshi-hiyouga
e62dae37fe [assets] update wechat (#7106)
Former-commit-id: 0ea430060994631e9fdb18fbbca0dd565a04fd66
2025-02-28 12:01:04 +08:00
Ze-Yi LIN
11672f760d [webui] display swanlab exp link (#7089)
* webui add swanlab link

* change callback name

* update

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 27a4b93871c63b839c92940766bd7e0177972c9b
2025-02-27 19:40:54 +08:00
leo-pony
b9f84900ee [npu] update cann base image and torch 2.4 (#7061)
* Update base NPU container image version: the Python version required for Hugging Face Transformers is >= 3.10

* Fix the bug: the arg type of INSTALL_DEEPSPEED should now be string.

* Update Ascend CANN, CANN-Kernel and the corresponding torch and torch-npu versions

* Upgrading torch-npu requires torch==2.1.0 and torch-npu==2.4.0.post2

Former-commit-id: d6dafada58412b0c801e576ef4d8d96203f792af
2025-02-25 23:32:01 +08:00
hoshi-hiyouga
5f65558088 [misc] fix project toml (#7067)
Former-commit-id: 28a668ff4e0beebfe5387362f5518c1d9343666f
2025-02-25 23:22:48 +08:00
JieShen
0f54a78144 [script] add seed args (#7058)
* add seed args

* add seed args

* update seed

Former-commit-id: eb9770b2c01a840b6a0ac119210c22bdbb81e18b
2025-02-25 19:44:57 +08:00
Kingsley
2986bef530 [model] add paligemma2-mix series (#7060)
Former-commit-id: 0c0196306d343242ee5e6f22c55562f9a74aa782
2025-02-25 18:51:16 +08:00
hoshi-hiyouga
065f7fb5da [data] fix mllama (#7053)
* fix mllama

* fix test

Former-commit-id: f5af20a63f3d59a6a68d323a7c6f68e551edb3a3
2025-02-24 22:05:38 +08:00
hoshi-hiyouga
c1d5073bd3 [model] add models (#7054)
* add qwen25vl awq models

* add moonlight

Former-commit-id: ae3be2970fea8a35907202a313ab767381c44916
2025-02-24 22:05:13 +08:00
hoshi-hiyouga
ee46011b34 [assets] update readme (#7051)
Former-commit-id: c89a39bfc6a3f0aaa376cd1b221320f466aba617
2025-02-24 20:45:06 +08:00
hoshi-hiyouga
d55f420206 [assets] update wechat (#7019)
Former-commit-id: 3d102fe7e0bfc23db7d75f90ebaf53216c54cc85
2025-02-20 20:32:33 +08:00
Zhangchi Feng
fcf75633a0 [data] fix MiniCPMV plugin (#6998)
* fix template

* fix bug in messages processing

Former-commit-id: f98b828f53968fb9c72bff9e45510ad5586c4fab
2025-02-19 19:36:04 +08:00
hoshi-hiyouga
e77ced045d [webui] update css (#6985)
Former-commit-id: 760a1dfb8193de418d7aa1063c0d111a3a64ae0f
2025-02-18 18:27:57 +08:00
hoshi-hiyouga
331f53381f [data] add r1 distill dataset (#6983)
Former-commit-id: 1da5ee4edaa3896593b9cae488f0ac5917c3243e
2025-02-18 17:25:09 +08:00
hoshi-hiyouga
1d675a287d [version] support transformers 449 (#6982)
* support transformers 449

* fix mm plugin

Former-commit-id: e9118a9df0839d24f6ddff5a0b55ef101a1d3d22
2025-02-18 17:05:40 +08:00
hoshi-hiyouga
be33ef67fb [misc] fix script (#6977)
Former-commit-id: 775efa1d8cbdb1b7d122be2a986d47f85214e0a1
2025-02-18 17:00:46 +08:00
hoshi-hiyouga
f5cd17881e [data] update vlm args (#6976)
Former-commit-id: c28e710636a0286d4b8a1d494529b25168a8f3ab
2025-02-18 02:12:51 +08:00
hoshi-hiyouga
c09b648934 [data] add min resolution option (#6975)
Former-commit-id: 76bd9a98a2fb00f1a1d881e6e1364c02fd36d327
2025-02-18 01:40:46 +08:00
hoshi-hiyouga
f2fd9d1b25 [data] fix predict dataset (#6972)
Former-commit-id: f9a82e527877b1ed47cabb3d34f4d155705f4048
2025-02-17 20:29:40 +08:00
Zhangchi Feng
167342af8a [data] fix minicpmo template (#6946)
Former-commit-id: 09e4438b58d5c1a5fdde37ff781c3d79461c4743
2025-02-15 00:37:41 +08:00
Eric Tang
76f9bd1820 [ray] specify ray storage path (#6920)
Former-commit-id: 4be6b66b1eaa79955e936ce2b747a8837ecd1e49
2025-02-14 21:55:41 +08:00
hoshi-hiyouga
a893505924 [misc] fix lora regex (#6944)
* fix lora regex

* fix

Former-commit-id: 1d0ecbaee1b72f1e03154ddd4fcc8b7876e01f89
2025-02-14 21:38:43 +08:00
hoshi-hiyouga
ed25e051a9 [misc] fix grad ckpt (#6931)
Former-commit-id: deae1fc9a0bea5c8b8be1564cf9c81c9c02a0b3a
2025-02-13 23:27:51 +08:00
hoshi-hiyouga
5e5fc337f9 [model] add liger kernel to qwen2_5 vl (#6930)
* add liger kernel to qwen2_5 vl

* fix patch

* fix patch

Former-commit-id: 828776d155986166498dfc907194f64436571106
2025-02-13 23:05:54 +08:00
Billy Cao
58e9ca8aa0 [trainer] fix gen_kwarg to eval during training (#5451)
* Correctly pass gen_kwarg to eval during model runs

* fix

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 845d16122496311e08263610a6a922f82604de7b
2025-02-13 02:35:06 +08:00
SrWYG
a4c4b8496f [data] evaluate on each dataset (#5522)
* [Update] loader.py: evaluate will run separate evaluations on each dataset.

`If you pass a dictionary with names of datasets as keys and datasets as values, evaluate will run separate evaluations on each dataset. This can be useful to monitor how training affects other datasets or simply to get a more fine-grained evaluation`

Seq2SeqTrainer supports eval_dataset as a Dict.

* fix format

* fix

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: cf00f78650a442c85678ce805e030d2b96cbecd7
2025-02-13 02:19:03 +08:00
Noah
38c9641777 [data] improve error handling (#6128)
* sync from upstream

* update

* update

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 1569e6096fec07da5583f1a3435b0d23ae09b5ba
2025-02-13 01:39:41 +08:00
hoshi-hiyouga
8b8fdb3a85 [misc] update readme (#6918)
Former-commit-id: f5823479bd51c39db668b68056be749af09894d1
2025-02-13 01:01:41 +08:00
hoshi-hiyouga
290057069e [misc] update readme (#6917)
Former-commit-id: 6bbed1d8c4189fb7bea40230e278c40bb5336fbd
2025-02-13 00:58:10 +08:00
hoshi-hiyouga
46203856fc [breaking change] refactor data pipeline (#6901)
* refactor data

* rename file

Former-commit-id: 7a1a4ce6451cb782573d0bd9dd27a5e443e3a18b
2025-02-13 00:39:20 +08:00
Eric Tang
80b89978d9 [misc] support for launching LLaMA-Factory with uv run (#6907)
* yay

* uv with ray temporary commit

* remove ray specific code for now

* cleanup

Former-commit-id: 1a9cab6de49e300bf9c747eefbb11d693592b477
2025-02-13 00:38:44 +08:00
Eric Tang
5a221d91f9 [example] fix path to ray example (#6906)
Former-commit-id: e9bee3ef045d85051da04e6ad581a23a9e1a9551
2025-02-13 00:29:32 +08:00
hoshi-hiyouga
3a3f4072e5 [misc] fix grad ckpt func (#6916)
Former-commit-id: 35e069a52b3d7cfd9b0107574b09265eb2290f0b
2025-02-13 00:17:18 +08:00
marko1616
0c0cdc26bc [trainer] fix llama3.2 vision kto train (#6904)
Former-commit-id: 1563e89adc8988fc6e4250634a3f1e385979b0e5
2025-02-12 19:09:14 +08:00
hoshi-hiyouga
2581cc844b [data] feat: auto template (#6905)
* support auto template

* add unittest

Former-commit-id: 0c6c9150db6414a5a05527ea486dce6633dff4b3
2025-02-12 00:22:53 +08:00
hoshi-hiyouga
d58fcd094e [misc] update readme (#6903)
Former-commit-id: 830d028939149d54bc91b6bda110dfa5de949483
2025-02-11 22:51:26 +08:00
hoshi-hiyouga
86063e27ea [data] fix ollama template (#6902)
* fix ollama template

* add meta info

* use half precision

Former-commit-id: 1304bbea69d8c8ca57140017515dee7ae2ee6536
2025-02-11 22:43:09 +08:00
hoshi-hiyouga
88eafd865b [misc] support export ollama modelfile (#6899)
* support export ollama modelfile

* update config

* add system and num ctx

Former-commit-id: 8c2af7466f4015f300b51841db11bcd2505ebf20
2025-02-11 19:52:25 +08:00
hoshi-hiyouga
3f7bd98bfa [data] refactor template (#6896)
Former-commit-id: f78d5a3eca947ed965ca2f6c87d60441b1a59867
2025-02-11 17:59:25 +08:00
codingma
b72c4bd118 support ollama modelfile export (#4686)
Former-commit-id: 15cca102a7fc0d08b5d049cf264acc6fa576b104
2025-02-11 17:52:24 +08:00
hoshi-hiyouga
808ff89a2d [data] refactor mm plugin (#6895)
* refactor plugin

* lint

Former-commit-id: 1c8dcc3adca4a2e78f514f8bb70573dd1ca08746
2025-02-11 16:34:49 +08:00
HJ
6d7f1299bd [data] fix qwen_2_5_vl video processing (#6868)
* fix qwen_2_5_vl video processing

* Update mm_plugin.py

* Update mm_plugin.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 35f326dabdc8e84036296d2e3de1c84c67b8def8
2025-02-11 16:14:50 +08:00
hoshi-hiyouga
0420a608ca [assets] update wechat (#6892)
Former-commit-id: 0b268cc903a583ae78cb7e63d2bdc4602d7220fc
2025-02-11 13:56:26 +08:00
Zhangchi Feng
2047eab723 [data] fix minicpmv plugin (#6890)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

* support dpo of minicpmv

* update init audio

* update init audio

* [model]fix image process in minicpmo

* fix no mm inputs

Former-commit-id: cdd19ccd8cec460606b4545e886e932c1c5c5fe1
2025-02-11 13:30:44 +08:00
HJ
e11b40c344 [data] fix: sharegpt converter (#6879)
* fix-sharegpt-format

* fix

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: ae8f8151ff750839998b50446f127061f240d41a
2025-02-10 21:59:12 +08:00
hoshi-hiyouga
b869506a57 [data] fix mllama collator (#6874)
Former-commit-id: c694fa3d66651c6ce547fa72c8260c46a406126b
2025-02-09 22:42:25 +08:00
hoshi-hiyouga
72d5b06b08 [test] align test cases (#6865)
* align test cases

* fix function formatter

Former-commit-id: a68f5e22d0391c80a9a826dc83967255be572032
2025-02-09 01:03:49 +08:00
hoshi-hiyouga
94726bdc8d [dataset] add openthought (#6866)
Former-commit-id: 20c748a4f108c0087f0d85377a4aa99126a0beb0
2025-02-09 00:53:01 +08:00
hoshi-hiyouga
4d1791e905 [deps] upgrade vllm (#6857)
Former-commit-id: 4bd50f65a3d62528768561019fda2723d045c7fd
2025-02-08 15:02:28 +08:00
hoshi-hiyouga
528e06ccaa fix qwen2vl plugin (#6855)
Former-commit-id: fd13b7138ab3f4da0a429a327b9d076bcb70b944
2025-02-08 10:59:10 +08:00
hoshi-hiyouga
fec641ec82 [misc] allow extra args (#6831)
Former-commit-id: 0fd3a5295cb4e08a4e57e860e82103364c28fba8
2025-02-06 12:38:08 +08:00
Zhangchi Feng
8f401e37f8 [model] support audio (#6701)
* support qwen2_audio

* improve code

* lint

* fix

* fix

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 5eacb5629e4d7733cd992a63747a1335f2c6a929
2025-02-05 04:59:09 +08:00
Yueqi Song
9feb78e7b4 [data] allow thought in function call (#6797)
* Update template.py

* Update template.py

* use formatter

* fix regex

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 3a31af6e920683ec074da93b1719e29f5d4cffd6
2025-02-05 02:26:23 +08:00
hoshi-hiyouga
c2022431aa [misc] update license year & fix llama pro (#6814)
* fix llamapro script

* change year

Former-commit-id: d9ae594178796994d400a5f207d6499712816f89
2025-02-05 01:53:33 +08:00
Yueqi Song
0817c24c04 [data] fix qwen tool template (#6796)
* Update tool_utils.py

* fix unittest

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 02bb78a792112f5151b3a96ddde2528823855288
2025-02-05 00:02:00 +08:00
Zhangchi Feng
cfb926fb84 [data] fix minicpmv plugin (#6801)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

* support dpo of minicpmv

* update init audio

* update init audio

* [model]fix image process in minicpmo

Former-commit-id: 8f704c8b6228ef50f828014f85dce67fda868660
2025-02-04 21:20:15 +08:00
neavo
34746d6151 [readme] update flash attention installation instruction on win platform (#6788)
* Update README_zh.md

* Update README.md

Former-commit-id: e48d1327fb39cc95f8fbfc746494f67a79471893
2025-02-01 12:43:29 +08:00
hoshi-hiyouga
5bb447b118 [misc] update workflows (#6787)
Former-commit-id: 15add6b250149e2aeabdc62d7dca69fc06054e01
2025-02-01 04:54:42 +08:00
hoshi-hiyouga
a28261a866 [model] add mistral small models (#6786)
Former-commit-id: e5e95c39bc4199fa89c67e34f9adaaa987058744
2025-02-01 04:31:38 +08:00
hoshi-hiyouga
800de98dc8 [model] add qwen2.5 vl models (#6779)
Former-commit-id: ed46fb4f6194c30060b908092464dded12e5787c
2025-01-31 03:00:29 +08:00
hoshi-hiyouga
222423bcef [breaking] support transformers 4.48 (#6628)
Former-commit-id: f154ab175c513a4d7bb866bf2cffc34b77b50508
2025-01-31 01:36:33 +08:00
hoshi-hiyouga
e71737351f [webui] improve webui & reasoning mode (#6778)
Former-commit-id: 3f17fc0d7163372e0446f1a38792ff761e99b739
2025-01-31 00:09:21 +08:00
qvlehao
4f298894da [model] add deepseek-R1 & show think process (#6767)
Former-commit-id: 4dccb724af51208a001c96fefbdbf226be09e50c
2025-01-29 12:16:26 +08:00
yinpu
a8fae3869d fix: avoid redundant normalization in DPO's SFT loss calculation (#6722)
Former-commit-id: 971a8ccbdacf130763d40c7ef82a711b2fc1292f
2025-01-21 13:38:02 +08:00
engchina
db9b977e4f [webui] support ja (#6698)
* add support for japanese language

* add support for japanese language

---------

Co-authored-by: engchina <atjapan2015@gmail.com>
Former-commit-id: 88692e403f9b5085dd0c7c2b2c68656c5da50dd4
2025-01-20 19:46:38 +08:00
hoshi-hiyouga
87d685b59f [model] support yarn (#6693)
Former-commit-id: 8c412abc44a4c61b683465e36c6288580d980250
2025-01-18 13:56:09 +08:00
hoshi-hiyouga
e4046bdd1f [assets] update wechat (#6692)
Former-commit-id: 70dba5fab6f4c9225758cafb646113d8e80ac084
2025-01-18 12:35:03 +08:00
hoshi-hiyouga
5baa3add8c [misc] update mm plugin (#6691)
Former-commit-id: 00303338d6927b1fda58b23340a31a8fa009f706
2025-01-17 23:04:26 +08:00
hoshi-hiyouga
332f637592 disable valset by default (#6690)
Former-commit-id: a1a94f364e33d1d73852f74eda4fa581e6b16533
2025-01-17 21:09:30 +08:00
hoshi-hiyouga
31daa6570b [webui] upgrade to gradio 5 (#6688)
Former-commit-id: 9df7721264ddef0008d7648e6ed173adef99bd74
2025-01-17 20:15:42 +08:00
hoshi-hiyouga
33525a34b6 fix qwen2 moe (#6684)
Former-commit-id: ab624419fa0ab23ef7a331a0ec14e393328772b5
2025-01-17 13:46:09 +08:00
Zhangchi Feng
3607caa2ad [data] Fix minicpmv/o dpo training (#6657)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

* support dpo of minicpmv

Former-commit-id: 8d9f47b98047f370637d1c96c2f3440dcc738ef3
2025-01-15 17:30:37 +08:00
steveepreston
0fc2e19279 Update val_size English description (#6653)
* Update `val_size` Description in locales.py

* Update `val_size` Description in data_args.py

* Remove extra space in data_args.py

Former-commit-id: f1ba5158091446dce540dd796284037bdd724c38
2025-01-15 16:00:20 +08:00
hoshi-hiyouga
ef994600db update readme (#6648)
Former-commit-id: b47467276ab3174c50329b3c8b76823bc0a2249c
2025-01-15 11:06:19 +08:00
hoshi-hiyouga
7638f1070e [optim] clean apollo (#6645)
* clean apollo code

* update readme

Former-commit-id: 38b8ec4a99189483124b54df9d6bc6b0d318855a
2025-01-15 01:42:50 +08:00
zhuHQ
c2120432db [optim] add support to APOLLO (#6617)
Former-commit-id: 5a252e5a458457adbd19da3b68a3897ad2962824
2025-01-15 00:24:56 +08:00
Zhangchi Feng
66184762e8 update readme of MiniCPM-o (#6642)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

Former-commit-id: 68604050ae2c98aeef5e9a6b4d2c11a4eb609bfa
2025-01-14 21:22:35 +08:00
hoshi-hiyouga
41a9e231cb lint (#6641)
Former-commit-id: 79731ae13ecd17eb8646fb53162c81dddfef3b00
2025-01-14 18:40:07 +08:00
Haian Huang(深度眸)
1bb06e06df Support InternLM3 Dense 8B Model (#6640)
* support internlm3

* update

* update

* update

* add hint

Former-commit-id: 24ab7ae0944c5f373e9cac60f0332e704824a057
2025-01-14 18:07:27 +08:00
Xiaosu Zhu
381f7120e6 Fix tokenizer max length (#6632)
Former-commit-id: 1807c7ba033985490aa7c8c39d880da6af983b92
2025-01-14 17:35:54 +08:00
Zhangchi Feng
f7857c83e1 Support Inference of MiniCPM-V-2.6 and MiniCPM-o-2.6 (#6631)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

Former-commit-id: 7f3c64e853a7cdd49d02bf85e237611941ac7fa8
2025-01-14 17:34:58 +08:00
hoshi-hiyouga
d0da6f40b0 [model] fix mllama any image (#6637)
* fix mllama any image

* reorder classes

Former-commit-id: 1242a1c4b4a465c06363fdc59302e80e5c4c96e6
2025-01-14 16:47:58 +08:00
hoshi-hiyouga
28d145a066 pin vllm version to 0.6.5 (#6629)
Former-commit-id: 26097ca0adf25ebb7d9e8eec2d2cef673c6cfe88
2025-01-14 02:44:02 +08:00
Zhangchi Feng
ae32c148d1 Support new features of MiniCPM-V (#6626)
* fix template name

* tiny fix

* support minicpm-o-2.6

Former-commit-id: 53034a61c7654358f46916cbc370910fb2aeff3b
2025-01-14 00:26:19 +08:00
hoshi-hiyouga
2a05941b14 [inference] fix stop token for object detection (#6624)
* fix stop token

* update minicpm data pipeline

* fix npu qlora examples

Former-commit-id: 844919fadaa8a61dfae47020971ea80730b2346f
2025-01-13 21:34:20 +08:00
codingma
11c38b9173 add nf4 qlora support on Ascend NPU (#6601)
* add nf4 qlora support on Ascend NPU

* add transformers version check

* add python>=3.10 requirement description for npu

* tiny fix

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 7912d1acac5f10dab22145fe729a90c57aad8d85
2025-01-13 19:43:36 +08:00
Zhangchi Feng
73c1c15b62 Fix template name of MiniCPM-V (#6620)
* fix template name

* tiny fix

Former-commit-id: 94dea52cef709a7e6f1cdc0b78e83e0422bd65d3
2025-01-13 16:46:48 +08:00
hoshi-hiyouga
7f58bf984f Merge pull request #6598 from BUAADreamer/minicpmv
[model] Support MiniCPM-V

Former-commit-id: 251e82bec12eaea6cf13608de191c096c63d1214
2025-01-13 15:24:02 +08:00
fzc8578
ec552372ba remove tests
Former-commit-id: 51addcd7ab81548a9952064dd8c95a8542252003
2025-01-13 15:08:35 +08:00
fzc8578
17d32fb5c7 fix tests
Former-commit-id: 582a17a12010943c7ca1cc0e25ebc8d125d10b45
2025-01-13 15:01:39 +08:00
fzc8578
4b61610b12 fix style
Former-commit-id: 76a36d9acecbf36b6959a14caacfed1d32bcee41
2025-01-13 14:19:38 +08:00
fzc8578
07798e4aad fix system prompt and tests
Former-commit-id: 955efca677b299749f3d40d587ee310951537543
2025-01-13 14:18:06 +08:00
fzc8578
6d6acd0213 add some
Former-commit-id: 5ad8ef3ec434f53f6fc494474becb034a3aca0ca
2025-01-11 15:03:20 +08:00
fzc8578
a789e0f263 add cpm_o test
Former-commit-id: 53cade69caed82b470fdb249274f03ee34af3100
2025-01-11 11:55:30 +08:00
fzc8578
f9ee00b6b6 add cpm_o test
Former-commit-id: 81dc0f678a7609c834581d956387bde42652755d
2025-01-11 11:49:03 +08:00
fzc8578
31bfdb08cd fix format
Former-commit-id: 964e18be5a824950164bc7232d35822a8b116d1a
2025-01-11 01:27:40 +08:00
fzc8578
12c83e00fc add some
Former-commit-id: 6233764d18f31365e9ba450408306fad55567ffc
2025-01-11 01:10:24 +08:00
fzc8578
9dc7b6c7ac adapt to new mllm_param
Former-commit-id: 0775b71965863c2618c117726a1046a36d6d85b8
2025-01-11 00:16:34 +08:00
Zhangchi Feng
627548bf7f Merge branch 'main' into minicpmv
Former-commit-id: 8a9c90759feda975faadc5858bd44b7ea116e7fb
2025-01-11 00:01:36 +08:00
hiyouga
dc65ecdf09 refactor mllm param logic
Former-commit-id: b895c190945cf5d991cb4e4dea2ae73cc9c8d246
2025-01-10 15:45:48 +00:00
fzc8578
e577990eb2 add minicpmv2.6
Former-commit-id: 1ab0aea54b54066cad500b7969b86a0e952d396d
2025-01-10 23:45:44 +08:00
fzc8578
1f3b729a4b add some
Former-commit-id: 58f50b8729083e9ea0fdcf07042b06261670ad57
2025-01-10 23:29:06 +08:00
fzc8578
0aa7ac210f add some
Former-commit-id: 3acd151a0f8efdd230c0b0980550795d204a69f7
2025-01-10 21:25:32 +08:00
fzc8578
40382f1387 fix some
Former-commit-id: 1eb7118db3ad6054cfd59d5f16a5d882e40e9057
2025-01-10 20:55:52 +08:00
fzc8578
75b3819e43 fix version
Former-commit-id: 834903fbf7a0fc8ac110f62f4df7c13819dd3c68
2025-01-10 20:31:04 +08:00
fzc8578
e63c2df0b1 fix some
Former-commit-id: cd5a1a8b9c6eb59d6e95f79573f60ad8668f1942
2025-01-10 20:27:06 +08:00
fzc8578
25d4889789 tiny fix
Former-commit-id: f088e580d3bacd0eecd0c3bf17e928eb49832ba1
2025-01-10 20:15:39 +08:00
Zhangchi Feng
8c0a721c4c Merge branch 'main' into minicpmv
Former-commit-id: d8840ae416660e23f1d615ffd404f519360151d9
2025-01-10 20:12:07 +08:00
fzc8578
9e972bc9ec add some
Former-commit-id: fede563aeb716ba5d1e368fd3e1182e4e580d248
2025-01-10 20:01:22 +08:00
hoshi-hiyouga
1675712a4c Merge pull request #6588 from hiyouga/hiyouga/upd_issue_temp
[gh] update issue template

Former-commit-id: 0a2626f996ce61559e93bedf19083aac5c861666
2025-01-10 03:03:48 +08:00
hiyouga
e0c9012f7f update issue template
Former-commit-id: 2bfca993588d8087dfd118f6f02486bbe752b166
2025-01-09 18:58:53 +00:00
hoshi-hiyouga
a25024bd0c Merge pull request #6585 from hiyouga/hiyouga/add_phi4
[model] add phi4 model

Former-commit-id: 0ae6a9b7bf9f1d6d844b97406b4795363bf75e78
2025-01-10 02:39:17 +08:00
hiyouga
867980196e improve template, add phi4 model
Former-commit-id: a785b6796e445a3adba45c5b6947166a2ff99871
2025-01-09 18:27:54 +00:00
hoshi-hiyouga
4e25d037c8 Merge pull request #6564 from stephen-nju/fix_ray
Fix ray

Former-commit-id: d4566839369726023f1b6e8f4b2332bda0c715cc
2025-01-08 18:14:18 +08:00
hoshi-hiyouga
6ba6926221 Merge pull request #6565 from hiyouga/hiyouga/improve_log
[misc] improve log

Former-commit-id: 538bf7b839c63d6a6758522fa08999d9b78e9db2
2025-01-08 18:08:21 +08:00
zhubin
b6b53b61f7 fix: get ray args when args is not a dict
Former-commit-id: 5e5398cd5b117b2378107172d3f91cfb0321e842
2025-01-08 10:06:02 +00:00
hiyouga
647c51a772 improve log
Former-commit-id: a6abf375975ffea3d51e1b944c9855b5f62ffac8
2025-01-08 09:56:10 +00:00
hoshi-hiyouga
3b843ac9d4 Merge pull request #6542 from erictang000/et/ray-integration
Ray Train integration with LLaMA-Factory

Former-commit-id: 4e34ee0a8e0aa90b535e53608b51c5c0804db34e
2025-01-08 11:46:03 +08:00
hiyouga
0ef1f981da fix llamaboard with ray
Former-commit-id: bd8a432d6a980b1b24a551626304fe3d394b1baf
2025-01-07 09:59:24 +00:00
hiyouga
944a2aec4d refactor ray integration, support save ckpt
Former-commit-id: 2f50b27e608b2092bfceab6c6e84e6631e973ee2
2025-01-07 09:39:10 +00:00
Eric Tang
4f31ad997c run style check
Former-commit-id: 5ec33baf5f95df9fa2afe5523c825d3eda8a076b
2025-01-07 08:55:44 +00:00
Kourosh Hakhamaneshi
8683582300 drafting ray integration
Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>

Former-commit-id: 19c12ddae9350f6e25a270fe3372f5b9094cf960
2025-01-07 08:55:44 +00:00
hoshi-hiyouga
5ccc607222 Merge pull request #6547 from hiyouga/hiyouga/fix_pixtral_dpo
[trainer] fix pixtral dpo

Former-commit-id: 920bb2a8922847fa544e2c260c67161e64cf5d50
2025-01-07 14:38:55 +08:00
hiyouga
d8bd46f1bf fix #6546
Former-commit-id: 6fcf2f10faf3b1614896b091591eeef96d717e64
2025-01-07 06:30:44 +00:00
fzc8578
8c2a712247 add some
Former-commit-id: b4790c66c126567bd193de52a564e3ce11c94769
2025-01-06 19:32:39 +08:00
hoshi-hiyouga
53e41bf2c7 Merge pull request #6528 from hiyouga/hiyouga/upd_wechat
[assets] update wechat

Former-commit-id: 3ceedf44896b5ebc406d6398b3f15e74e4710fbe
2025-01-04 16:01:21 +08:00
hiyouga
0eeae9061c update wechat
Former-commit-id: 11a9d96a042e8afd972e0bf2fa3e51f95e4799ec
2025-01-04 07:59:57 +00:00
Zhangchi Feng
08729dbefc Merge branch 'hiyouga:main' into minicpmv
Former-commit-id: 873b2d5888038e2328a12a6eb7c84099ba7ca1f3
2025-01-04 11:20:33 +08:00
fzc8578
2c120aa0df add some
Former-commit-id: 81176fe226da89eace89cb202bad68e73b7c2a02
2025-01-04 11:11:15 +08:00
hoshi-hiyouga
cca6286b6f Merge pull request #6524 from hiyouga/hiyouga/upd_scripts
[misc] update scripts

Former-commit-id: 6ba3ec45fc369c095ab9a1fbd9847dc66cf24ca4
2025-01-03 23:52:26 +08:00
hiyouga
8516054e4d update scripts
Former-commit-id: 05aa52adde8905ca892f1ed5847d6f90b1992848
2025-01-03 10:50:32 +00:00
hoshi-hiyouga
d1a8cd67d2 Merge pull request #6515 from hiyouga/hiyouga/misc
[misc] update model name

Former-commit-id: f92eea4090351dcd3c364e10a9eec0d17d480e12
2025-01-02 20:20:02 +08:00
hiyouga
8a5b4bdfd4 update model name
Former-commit-id: bf627d9f1ac117f040adbfd7630b5283f0db556a
2025-01-02 12:19:21 +00:00
hoshi-hiyouga
3bceef02ee Merge pull request #6514 from hiyouga/hiyouga/add_project
[readme] add project

Former-commit-id: 0bd0c373183731302f1af9f33a1f8ff70ba743e2
2025-01-02 20:16:15 +08:00
hoshi-hiyouga
166a830938 Merge pull request #6513 from hiyouga/hiyouga/add_gpt2
[model] add gpt2 model

Former-commit-id: 859c37f43c8a49eea4f118d0d00ee2a554f6bd4f
2025-01-02 20:15:55 +08:00
hiyouga
18767fe026 add project
Former-commit-id: 3b7e745d271e36b4cfe8826820b23254e1debfe9
2025-01-02 12:15:41 +00:00
hiyouga
18a1a4b9da add gpt2 model
Former-commit-id: 37d5e3639fcf5ae6e58cc435e0fa9dee0d6e4ead
2025-01-02 12:07:38 +00:00
hoshi-hiyouga
6015fe700e Merge pull request #6512 from hiyouga/hiyouga/fix_gen_logic
[trainer] fix generate logic

Former-commit-id: b97759421c535560ade631a7fa0a57b7c0da50f1
2025-01-02 19:36:54 +08:00
hoshi-hiyouga
369dae8dd3 Merge pull request #6462 from shibingli/main
Add ARG HTTP_PROXY in Dockerfile to support HTTP proxy during image building

Former-commit-id: 1e72bb24253bb07da874f3a37ccfa4fddaaf6978
2025-01-02 19:34:17 +08:00
hiyouga
2aaf3697d7 fix #6499
Former-commit-id: dffc607220ff6dac15cf501ac9a3cdbe80c25211
2025-01-02 11:28:54 +00:00
hoshi-hiyouga
5504b5254c Merge pull request #6492 from hiyouga/hiyouga/add_deepseek3
[model] add deepseek3 model

Former-commit-id: 0a6d1244a51f3cc8fe141b32f39bffce4c924a8c
2024-12-30 21:50:13 +08:00
hiyouga
b2e4f11602 add deepseek3 model
Former-commit-id: 611779d412f31e25b1ed38049050eee2da61dde5
2024-12-30 13:39:20 +00:00
hoshi-hiyouga
e3f95abca7 Merge pull request #5507 from piamo/main
Add deepseek-v2.5 template

Former-commit-id: 8a4911d201e219465fe0835a3ceb967f8b80dc0e
2024-12-30 21:08:25 +08:00
hoshi-hiyouga
2f44f70c2c Merge pull request #6483 from hiyouga/hiyouga/fix_paligemma_infer
[model] update vllm & fix paligemma dtype

Former-commit-id: 03ad6d44805a965764aaa51376964972b9b7da3d
2024-12-30 16:34:32 +08:00
hiyouga
f8f05a883b fix #6482
Former-commit-id: 8577f52b4152efe6cc7a8b5f6d37b4f9ba6684e7
2024-12-30 06:03:07 +00:00
hoshi-hiyouga
5f473e2696 Merge pull request #6465 from hiyouga/hiyouga/fix_eval_loss
[trainer] fix eval loss

Former-commit-id: fa8110b2052a74b4bd0dcf391a54207e1e31056d
2024-12-28 01:02:56 +08:00
hiyouga
88b1874c04 fix #6448
Former-commit-id: 04f78e85af5af14b4c195936623e426a6a128af2
2024-12-27 16:54:39 +00:00
shibingli@yeah.net
58bc6943dc Add ARG HTTP_PROXY in Dockerfile to support HTTP proxy during image building.
Former-commit-id: c46af4c45f96f1942dfaf77bdbdbe5d0fe85a387
2024-12-27 18:31:14 +08:00
shibingli@yeah.net
2dedf7b401 Add ARG HTTP_PROXY in Dockerfile to support HTTP proxy during image building. This commit introduces an ARG parameter named HTTP_PROXY in the Dockerfile. This addition allows for the configuration of an HTTP proxy, facilitating image building in environments with network restrictions.
Former-commit-id: d59fe30bca636bc2ca132d50172dba0032cecb6b
2024-12-27 18:17:17 +08:00
hoshi-hiyouga
5769a553d2 Merge pull request #6457 from youkaichao/module-run
[misc] enable module run

Former-commit-id: 813881a5d13dd1d5a526a85d41032196e0d46f04
2024-12-26 23:41:37 +08:00
youkaichao
552816e04b Update cli.py
Former-commit-id: 18e65bbd3ae07af3b9eed7f293c345815776c325
2024-12-26 23:22:09 +08:00
hoshi-hiyouga
b5fa1044b8 Merge pull request #6443 from hiyouga/hiyouga/add_qvq
[model] add qvq

Former-commit-id: 2010e80b1a939d21efa13d54df5f5d648ea640de
2024-12-25 15:53:19 +08:00
hiyouga
3c55976a0e add qvq #6439
Former-commit-id: 4dbfa142d899dd6e4d1a9d4db125765af5580a4f
2024-12-25 07:52:41 +00:00
hoshi-hiyouga
4611f67fae Merge pull request #6426 from hiyouga/hiyouga/update_readme
[assets] update readme

Former-commit-id: 2309c431090d1f3b573d113bbedeabee2b01fdf2
2024-12-23 22:17:19 +08:00
hiyouga
a5346041bb update readme
Former-commit-id: 1deda4750e0df6c46aeb33cf3f8b35baa537cc1d
2024-12-23 14:08:59 +00:00
hoshi-hiyouga
df42e438c1 Merge pull request #5922 from Tuyohai/main
support granite3 models

Former-commit-id: a9087bc0549f7f16e5b4c39e324043755b1618c8
2024-12-23 16:46:02 +08:00
hoshi-hiyouga
7dbfd7dff6 Merge pull request #6418 from hiyouga/hiyouga/add_report
[trainer] add custom args to experimental logger

Former-commit-id: 5e5a7ba73c1a386f025d75c10b102306bcb98674
2024-12-22 05:47:55 +08:00
hiyouga
a897d46049 support reporting custom args
Former-commit-id: d41254c40a1c5cacf9377096adb27efa9bdb79ea
2024-12-21 21:42:45 +00:00
hiyouga
adff887659 fix paligemma infer
Former-commit-id: d272455d6118c1d670c70cfe3458d8dab111da6c
2024-12-21 20:24:32 +00:00
hoshi-hiyouga
eba78f2159 Merge pull request #6416 from Zeyi-Lin/main
docs: use swanlab
Former-commit-id: 0759b576a36cde120ccb8cadd96fca4d871be130
2024-12-22 04:08:26 +08:00
ZeYi Lin
ec05c8cdb4 docs: use swanlab
Former-commit-id: 33509ea7bcd5f698a8393379bb3941c3c32f7fd6
2024-12-21 20:59:25 +08:00
hoshi-hiyouga
0a869c4ed4 Merge pull request #6401 from Zeyi-Lin/hiyouga/swanlab
feat: add swanlab for experiment tracking and visualization.
Former-commit-id: e65fe507f7643bf40b0fc462805c7b7f8ef6b738
2024-12-21 14:09:33 +08:00
ZeYi Lin
f792eaf8d4 fix: project blank
Former-commit-id: 3a0939572b0bfc7da0ee1a7244b6b3fbf567aba0
2024-12-20 18:26:02 +08:00
ZeYi Lin
8a41c96761 fix: by hiyouga suggestion
Former-commit-id: 41195f1bc69e4b5da7a265369d368b06754362cf
2024-12-20 16:43:03 +08:00
ZeYi Lin
e5d9d8c55d feat: ui improve
Former-commit-id: 6a1effb1741a13ae5238b0e9b429b4cbe3b6534f
2024-12-20 11:03:02 +08:00
ZeYi Lin
3e44c8fe3a fix: text
Former-commit-id: 52fe8d61eba7b7d8f66df09a03d40f25cc9c5b44
2024-12-19 21:26:02 +08:00
ZeYi Lin
925e421bde fix: bugs
Former-commit-id: a2297f97f7587c77d55fbce9ffa81dc60d0b04a1
2024-12-19 21:08:16 +08:00
hoshi-hiyouga
bbb636bdba Merge pull request #6395 from hiyouga/hiyouga/fix_genkwargs
[generate] fix generate kwargs

Former-commit-id: 1193594f2d06df38ec0aef7f591c74651cf1353c
2024-12-19 20:24:17 +08:00
ZeYi Lin
a30bdbb1c0 docs: config framework
Former-commit-id: 9cad21df82754170900e3ea74476f674754159b3
2024-12-19 20:22:36 +08:00
ZeYi Lin
95b7e10a06 fix: string
Former-commit-id: 73e1da5ab07c96a6faa9738e83c4dd9297f34b14
2024-12-19 20:18:59 +08:00
hiyouga
0385c60177 fix #6391
Former-commit-id: 067ba6e6cb4d8a1d95bba0a108f73008416a2865
2024-12-19 12:16:38 +00:00
ZeYi Lin
44895ebe36 feat: optimize frontend
Former-commit-id: 4a78603c141d9bd78bcaf81261b443cf082bf51f
2024-12-19 19:04:19 +08:00
ZeYi Lin
44dfbf9dbd feat: swanlab params
Former-commit-id: 761b3bdb03e27826fde2ca86d4e37b53c2bbc777
2024-12-19 18:47:27 +08:00
hoshi-hiyouga
0a465fc3ca Merge pull request #6388 from hiyouga/hiyouga/shuffle_control
[trainer] support disable shuffling

Former-commit-id: 3243e74a2ed3b1f7fa818842955f91386b591a9c
2024-12-19 17:00:12 +08:00
hiyouga
01eeae50b5 support disable shuffling
Former-commit-id: 9d8c35fd6b838ede0bd6827c6c6121f2cba2b11b
2024-12-19 08:53:21 +00:00
hiyouga
7eeeffdb8a add swanlab
Former-commit-id: c85a77c8a8824a56a67d56b97b4877fcd6edeb3d
2024-12-19 07:12:31 +00:00
hoshi-hiyouga
eca06531c3 Merge pull request #6384 from hiyouga/hiyouga/fix_webui
[webui] fix webui args

Former-commit-id: 94294c4e356b3ac5546f897d6e3255ee8c2a260f
2024-12-19 14:57:52 +08:00
hiyouga
d90b40b60f fix webui
Former-commit-id: 7152fde4a026e67f15885814c1900f3911d04ee8
2024-12-19 06:48:03 +00:00
hoshi-hiyouga
1898c1e9a6 Merge pull request #6379 from hiyouga/hiyouga/add_paligemma2
[model] add paligemma2

Former-commit-id: abe3ff3fe0b113e949bf6d2bd10e4c125fb8fe75
2024-12-18 17:03:11 +08:00
hiyouga
8d2f8b0dd8 add paligemma2
Former-commit-id: dafbc31684cb2566ef23c79e171cdfd02d6d396b
2024-12-18 08:57:26 +00:00
hoshi-hiyouga
df42281256 Merge pull request #6313 from ge-xing/main
support telechat2 model

Former-commit-id: 282d0619b1047ba48f9bc3ac837d2ed40b7df307
2024-12-18 16:16:17 +08:00
hoshi-hiyouga
896cf476d5 Merge pull request #6369 from hiyouga/hiyouga/template
[template] support qwen2 tool template

Former-commit-id: e1e133635f05f5b83869bc02340d6ea46976f318
2024-12-18 04:23:49 +08:00
hiyouga
37961d5f06 support qwen tool format
Former-commit-id: cbef4cb501fa1b50fa611e7054a856ce2c5ed10e
2024-12-17 20:12:06 +00:00
hiyouga
bb047bc844 change default replace jinja to false
Former-commit-id: bfe6625f6f6aa294933fa9056a4bfedee4fbe5e2
2024-12-17 19:27:10 +00:00
hoshi-hiyouga
448adedf6a Merge pull request #5473 from AlongWY/mistral
Support Mistral format tools

Former-commit-id: 4838427310d49e5942138e4578d2483baa005471
2024-12-18 03:23:24 +08:00
ylfeng
469c7cd462 Support Mistral format tools
Former-commit-id: e42d0e54b7a64a3f017a09e99846d174db7b438f
2024-12-17 19:13:26 +00:00
hoshi-hiyouga
ebf6a07681 Merge pull request #6368 from hiyouga/hiyouga/fix_llama_template
[template] fix llama3 tool template

Former-commit-id: 7c6763c4f3287f758077191361d5b0354741f84a
2024-12-18 01:10:48 +08:00
hiyouga
53f0fff513 fix llama3 tool template
Former-commit-id: 63f28a594a44c011f2e6d418f22ddbfc445db163
2024-12-17 17:05:10 +00:00
hoshi-hiyouga
ab7567693d Merge pull request #6367 from hiyouga/hiyouga/add_model
[model&template] add llama3.3 & support llama3 tool prompt

Former-commit-id: c32012c5e4943a30c3061716ed780d6124b6c90d
2024-12-18 00:13:28 +08:00
hiyouga
1b8aab0723 support llama3 tool prompt
Former-commit-id: dc45d2f56669fd99935a68cda1ec0e8f36229f7f
2024-12-17 15:52:37 +00:00
hoshi-hiyouga
30ebe61914 Merge pull request #5819 from yafshar/remote_code
Add trust_remote_code Parameter and Set Default to False

Former-commit-id: e82099350a2fb6d8ddf9c80ba0b18173057d4dcf
2024-12-17 21:10:24 +08:00
Yaser Afshar
6f1c8dacea Add missing key to init_kwargs
Former-commit-id: 03fc4621dad132164596a58d3e8693787b7e1aca
2024-12-17 12:34:05 +00:00
Yaser Afshar
8881237475 Add trust_remote_code parameter and remove True
- Introduced a new model parameter `trust_remote_code`
- Set the default value of `trust_remote_code` to `False`
  to enhance security


Former-commit-id: 4bf23f406cf5235c16f9f8139850c53354901814
2024-12-17 12:25:12 +00:00
zhaohu xing
584755be4b support telechat2 model
Former-commit-id: 15a069d85c07842cd28d65845af93c3cf70ef1f4
2024-12-17 12:15:33 +00:00
hoshi-hiyouga
3d3324be5c Merge pull request #6364 from hiyouga/hiyouga/control_reenterent_gc
[model] support non-reentrant gc

Former-commit-id: a8a13cb360980bb4acd493e33ed405e07460fe73
2024-12-17 19:58:36 +08:00
hiyouga
4196d5b4d6 support non-reentrant gc & fix #6358
Former-commit-id: 20446141e408885eb36d512bfb2dfb62bbc0c20d
2024-12-17 11:41:59 +00:00
hoshi-hiyouga
101c95ce65 Merge pull request #6363 from hiyouga/hiyouga/control_skip_eos
[infer] support control eos

Former-commit-id: 963640cff370be9f2fab649c88a120a645e6992e
2024-12-17 19:35:40 +08:00
hiyouga
19ebc0e7a2 support control eos, fix #6345
Former-commit-id: cb0f8399356bf372f3b7963f2565c3d504be0923
2024-12-17 10:42:05 +00:00
hoshi-hiyouga
1ce15b5d9e Merge pull request #6362 from hiyouga/hiyouga/mllm_packing
[model] generalized packing

Former-commit-id: b85f77a2687f7e0d11f7d2e49de54c544e39e3d5
2024-12-17 18:41:48 +08:00
hiyouga
d670d62a66 generalized packing & fix #6343
Former-commit-id: 3b1e4194616cacd5c24f08b328e31a008bddcf29
2024-12-17 10:26:19 +00:00
hoshi-hiyouga
6522467ddb Merge pull request #6359 from hiyouga/hiyouga/fix_qwen2vl_infer
[model] fix qwen2vl infer

Former-commit-id: 419cba5fae31a3c88305fe424b8aae9d59e3941a
2024-12-17 18:15:23 +08:00
hiyouga
aacd9642f5 fix #6348
Former-commit-id: 83e552320909f4775377889f1512994b7e638a7e
2024-12-17 10:06:46 +00:00
hoshi-hiyouga
4446c92517 Merge pull request #6334 from hiyouga/hiyouga/add_examples
[assets] update wechat and examples

Former-commit-id: 7725e7ac7d21ad844e8424a920e8bece6f38af19
2024-12-15 01:37:01 +08:00
hiyouga
8c65548b10 update assets
Former-commit-id: 7b9bd552b2bf97b72976511094eb51dfde5d1017
2024-12-14 17:36:03 +00:00
hiyouga
fb22651faf fix mrope
Former-commit-id: 55bee1d333549ca19858b3f5c1b7b86926e5fb09
2024-12-12 15:08:17 +00:00
hoshi-hiyouga
cfff136b2a Merge pull request #6253 from hiyouga/hiyouga/qwen2vl_mm_proj
[model] support qwen2vl train proj only

Former-commit-id: 0b0012142ab683da1e0558e6240310bf90f39150
2024-12-05 20:25:33 +08:00
hiyouga
bac2c64f87 support qwen2vl train proj only
Former-commit-id: 0e949ef03455726e907c6f1039e93ebe480c897a
2024-12-05 10:37:42 +00:00
hoshi-hiyouga
be1ec97c8e Merge pull request #6251 from hiyouga/hiyouga/vllm_qwen2vl_infer
[infer] support qwen2vl vllm infer

Former-commit-id: df76f7d6e124131ce7628c31cce01de4f8e6014c
2024-12-05 18:26:19 +08:00
hiyouga
bbd432415d support qwen2vl vllm infer
Former-commit-id: 03ddd2555fb97488cd4daab11e8b672d36150c5a
2024-12-05 10:17:26 +00:00
hoshi-hiyouga
1fef702382 Merge pull request #6246 from hiyouga/hiyouga/update_examples
[examples] update examples

Former-commit-id: ecb688bdb3e940651d64bc1edc85ce4568f3eabe
2024-12-05 16:49:30 +08:00
hiyouga
39865d8a1f update examples
Former-commit-id: bcb010be7732ae137f156932100ee4d02a93725c
2024-12-05 08:48:25 +00:00
hoshi-hiyouga
c7b27bd70b Merge pull request #6242 from hiyouga/hiyouga/fix_script
[script] fix scripts

Former-commit-id: cf254ea0891ea2e6522fdbefcccf409ff7aafd99
2024-12-05 11:54:46 +08:00
hiyouga
86e4fab0d5 fix scripts
Former-commit-id: f94f55d20283298cb7d90d0573992a62df414a8f
2024-12-05 03:47:32 +00:00
hoshi-hiyouga
ff3e40e4a5 Merge pull request #6160 from village-way/pr_dataloader
fix: tokenized_path not None and load_from_disk returning a Dataset triggers a stuck run
Former-commit-id: 63de20970c8062aeebed5f366f1675beb12e05bf
2024-12-04 22:18:19 +08:00
hoshi-hiyouga
ea830cad0c lint
Former-commit-id: 191ccc585399ad4c6c2c4f280b144b2c0a4869f3
2024-12-04 22:08:27 +08:00
hoshi-hiyouga
225e270fd5 Merge pull request #6238 from hiyouga/hiyouga/vllm_batchinfer
[infer] feat: support batch infer in vllm

Former-commit-id: 886752801ba8a5bf6fc4853ed618817185950c11
2024-12-04 21:59:13 +08:00
hiyouga
c1768cfb14 support batch infer in vllm
Former-commit-id: 3ef5ed3b9a44eed2f7e3ff221dfc343d0a97c0b5
2024-12-04 13:50:00 +00:00
hoshi-hiyouga
53edd62f8b Merge pull request #6190 from JieShenAI/main
add vllm_infer script

Former-commit-id: 09c7ea700c83dcf8d75796a1e28a36197f62cab4
2024-12-04 21:19:23 +08:00
hoshi-hiyouga
41a7e128b6 Merge pull request #6170 from hykilpikonna/main
[+] Show the hostname in webui title

Former-commit-id: 1cb2f9da317a8db8f45e887ab57cdfdc0e8b9412
2024-12-04 18:07:29 +08:00
hoshi-hiyouga
6b8c41c3ac Merge pull request #6233 from hiyouga/hiyouga/vlm_zero3
[data] fix vlm zero3 training

Former-commit-id: b0cbd5e3464a8a1a0f1cf709fb107b23a61f34ff
2024-12-04 17:51:10 +08:00
hiyouga
2f09c34980 fix vlm zero3 training
Former-commit-id: 86fe7fe71b51077310357b7b1895522258f9bc7a
2024-12-04 09:40:39 +00:00
JieShen
76dc69ce36 add async call api
Former-commit-id: 0f728386d88cf8253250c6650555d41578114a0c
2024-12-01 22:18:05 +08:00
JieShen
6c9d05539a add vllm_infer script
Former-commit-id: 4daab843a3aa096b35e5d3832c01fac4271e4604
2024-11-29 14:22:20 +08:00
Azalea
b6bc17f730 [U] Compute hostname differently
Former-commit-id: fbc735972af6facdaba169603a4c77e613b2e8d7
2024-11-28 22:23:41 -05:00
hoshi-hiyouga
c07ba8ccc0 Merge pull request #6175 from hiyouga/hiyouga/add_qwq
[model] add QwQ

Former-commit-id: da8f565c359004d811481b8b85f2a36f30e95e23
2024-11-28 17:01:53 +08:00
hiyouga
ed86f621a0 add qwq
Former-commit-id: acad977356a7f2e729eb6f2cb919a416b18f8add
2024-11-28 08:50:57 +00:00
Azalea
c6a3175bbf [+] Show the hostname
Former-commit-id: 410847656a760fe4c2c310b0d770072392d7aefb
2024-11-28 12:25:02 +08:00
wangdepeng
452291417d fix: tokenized_path not None and load_from_disk returning a Dataset triggers a stuck run
Former-commit-id: cbf9da35728daaf98d92e699e891e334c74af1e5
2024-11-27 16:44:42 +08:00
hoshi-hiyouga
ab9db8b7c7 Merge pull request #6156 from hiyouga/hiyouga/add_o1
[data&model] add marco-o1, skywork-o1 and openo1

Former-commit-id: fa8aa1a3bcb49357799ec30fbb3f143a015e5d58
2024-11-27 14:36:01 +08:00
hiyouga
877e2ea791 fix dataset
Former-commit-id: d4a2d299414984a4043d30034c5c95e2d717a49e
2024-11-27 06:27:44 +00:00
hiyouga
6ea42d5b63 add skywork o1
Former-commit-id: 272a6fe972de926e5841c1570995f4e6fed9f28d
2024-11-27 05:51:59 +00:00
hiyouga
31c117e696 Merge remote-tracking branch 'origin/main' into hiyouga/add_o1
Former-commit-id: 5da8c00b233f96e51cf3bac7f25e3e61659d0cb7
2024-11-27 05:36:41 +00:00
hoshi-hiyouga
04f057334f Merge pull request #6157 from hiyouga/hiyouga/fix_ci
[ci] pin tokenizers version

Former-commit-id: 0357d7530d16699e728bc648abd08ea309e84865
2024-11-27 13:33:04 +08:00
hiyouga
99a54d06ca pin tokenizers version
Former-commit-id: 2b747737f0be2caeb737fe87dad6bf5902b4a588
2024-11-27 05:24:58 +00:00
hiyouga
8332c85f37 add marco-o1 and openo1 dataset
Former-commit-id: 51d49e075470951f109bcdde136203f972450c2e
2024-11-27 04:20:23 +00:00
hoshi-hiyouga
fcf1a3df62 Merge pull request #6152 from hiyouga/hiyouga/add_num_proc_in_data_load
[data] add num_proc in load_dataset

Former-commit-id: d8258ba7e792d5f17ae80d5e8b303e8fa820f162
2024-11-27 00:16:15 +08:00
hoshi-hiyouga
f4f52ae67d Merge pull request #6151 from hiyouga/hiyouga/fix_mllama
[model] fix mllama cross mask

Former-commit-id: 7e64661c1fc53c4d3d9fd915162b762e403b1991
2024-11-27 00:07:54 +08:00
hiyouga
0b08d5882a fix #6149
Former-commit-id: b581b272793314a9602f4dc2fb646a988a6249df
2024-11-26 16:03:02 +00:00
hiyouga
62eeafaba6 fix mllama cross_mask
Former-commit-id: c33967308bebd99489d28bd5a879525cf304c1f9
2024-11-26 15:56:58 +00:00
hoshi-hiyouga
5a52e41399 Merge pull request #6141 from hiyouga/hiyouga-patch-1
[misc] chore: lint

Former-commit-id: ba2b94c68eb08798792be76f95b94b358ce69f44
2024-11-25 23:02:11 +08:00
hoshi-hiyouga
e8083f8f3f lint
Former-commit-id: 57c3cf1f498d5ffafdc8c06e0f8713f8ff77de81
2024-11-25 22:55:56 +08:00
hoshi-hiyouga
338b3a03f0 Merge pull request #6140 from hiyouga/hiyouga/fix_mllama
[data] fix mllama plugin

Former-commit-id: b7e220a7d82db26cbe7ced9ed30332418cc4fa20
2024-11-25 22:32:07 +08:00
hoshi-hiyouga
c8b01b41ac fix #6139
Former-commit-id: a4e9552b9ade6ebb22d782f0412003279ddca23c
2024-11-25 22:22:06 +08:00
hoshi-hiyouga
6d08a418ed Merge pull request #6137 from hiyouga/hiyouga/fix_mllama
[model] fix mllama hidden_size

Former-commit-id: 54f1d3f4064b9d37261883e8399c8e7909178857
2024-11-25 20:17:33 +08:00
hoshi-hiyouga
e3066d1489 fix visual patch
Former-commit-id: ac51fa37cc23518b30a6123e188964dce39be82f
2024-11-25 20:06:06 +08:00
hoshi-hiyouga
487e3f2507 fix #6136
Former-commit-id: b84e5d91a070c473ea820c379bf9b5abbca6df2c
2024-11-25 19:43:42 +08:00
hoshi-hiyouga
b82a53cad8 Merge pull request #6127 from hiyouga/hiyouga/dev_version
[misc] set dev version

Former-commit-id: cb0a51031324c9fdf0c1fedf237692a40c2091d9
2024-11-25 01:42:29 +08:00
hiyouga
5bec82ca9d set dev version
Former-commit-id: a0aea74100a9505664023f6a46fc290e332dfa40
2024-11-25 01:36:49 +08:00
steven
6ef0d13e42 support granite3 models
Former-commit-id: 8cff612e55eb7df116e51c4dd21e7a42543e7a1f
2024-11-04 10:35:03 +08:00
huangpan.foo
ed5c641e8b Add deepseek-v2.5 template
Former-commit-id: e80c1fe798fb2e076c0891a64300f1b6710176b6
2024-09-21 19:33:30 +08:00
256 changed files with 15080 additions and 7273 deletions


@@ -3,12 +3,12 @@
 .github
 .venv
 cache
-data
 docker
 saves
 hf_cache
 ms_cache
 om_cache
+shared_data
 output
 .dockerignore
 .gitattributes
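
The net effect of this change is visible at image build time: `data/` is now sent to the Docker daemon with the build context, while the new `shared_data/` directory stays excluded. A minimal sketch from the repo root (the tag is illustrative; the Dockerfile path is the one referenced by the CI workflow later in this diff):

```bash
# Build from the repo root so the context honors .dockerignore:
# data/ is included in the context, shared_data/ is not.
docker build -f docker/docker-cuda/Dockerfile -t llamafactory:local .
```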


@@ -4,15 +4,20 @@ API_HOST=
 API_PORT=
 API_KEY=
 API_MODEL_NAME=
+API_VERBOSE=
 FASTAPI_ROOT_PATH=
 MAX_CONCURRENT=
 # general
 DISABLE_VERSION_CHECK=
 FORCE_CHECK_IMPORTS=
+ALLOW_EXTRA_ARGS=
 LLAMAFACTORY_VERBOSITY=
 USE_MODELSCOPE_HUB=
 USE_OPENMIND_HUB=
+USE_RAY=
 RECORD_VRAM=
+OPTIM_TORCH=
+NPU_JIT_COMPILE=
 # torchrun
 FORCE_TORCHRUN=
 MASTER_ADDR=
@@ -31,7 +36,7 @@ GRADIO_SERVER_PORT=
 GRADIO_ROOT_PATH=
 GRADIO_IPV6=
 # setup
-ENABLE_SHORT_CONSOLE=1
+ENABLE_SHORT_CONSOLE=
 # reserved (do not use)
 LLAMABOARD_ENABLED=
 LLAMABOARD_WORKDIR=
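
These keys are read from the process environment at startup. A minimal sketch of exercising the newly added flags from a shell before launching a run (the values are illustrative, not repository defaults):

```bash
# Illustrative settings for the flags introduced above.
export API_VERBOSE=1        # chattier API logging
export ALLOW_EXTRA_ARGS=1   # tolerate unknown CLI arguments
export USE_RAY=0            # keep the Ray launcher disabled
export NPU_JIT_COMPILE=0    # skip JIT compilation on Ascend NPUs
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```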

.github/ISSUE_TEMPLATE/1-bug-report.yml (new file, 61 lines)

@@ -0,0 +1,61 @@
name: "\U0001F41B Bug / help"
description: Create a report to help us improve the LLaMA Factory
labels: ["bug", "pending"]
body:
- type: markdown
attributes:
value: |
Issues included in **[FAQs](https://github.com/hiyouga/LLaMA-Factory/issues/4614)** or those with **insufficient** information may be closed without a response.
已经包含在 **[常见问题](https://github.com/hiyouga/LLaMA-Factory/issues/4614)** 内或提供信息**不完整**的 issues 可能不会被回复。
- type: markdown
attributes:
value: |
Please do not create issues that are not related to framework bugs under this category, use **[Discussions](https://github.com/hiyouga/LLaMA-Factory/discussions/categories/q-a)** instead.
请勿在此分类下创建和框架 bug 无关的 issues训练问题求助请使用 **[讨论区](https://github.com/hiyouga/LLaMA-Factory/discussions/categories/q-a)**。
- type: checkboxes
id: reminder
attributes:
label: Reminder
description: |
Please ensure you have read the above rules carefully and searched the existing issues (including FAQs).
请确保您已经认真阅读了上述规则并且搜索过现有的 issues包括常见问题
options:
- label: I have read the above rules and searched the existing issues.
required: true
- type: textarea
id: system-info
validations:
required: true
attributes:
label: System Info
description: |
Please share your system info with us. You can run the command **llamafactory-cli env** and copy-paste its output below.
请提供您的系统信息。您可以在命令行运行 **llamafactory-cli env** 并将其输出复制到该文本框中。
placeholder: llamafactory version, platform, python version, ...
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide entry arguments, error messages and stack traces that reproduces the problem.
请提供入口参数,错误日志以及异常堆栈以便于我们复现问题。
value: |
```text
Put your message here.
```
- type: textarea
id: others
validations:
required: false
attributes:
label: Others


@@ -0,0 +1,41 @@
name: "\U0001F680 Feature request"
description: Submit a request for a new feature
labels: ["enhancement", "pending"]
body:
- type: markdown
attributes:
value: |
Please do not create issues that are not related to new features under this category.
请勿在此分类下创建和新特性无关的 issues。
- type: checkboxes
id: reminder
attributes:
label: Reminder
description: |
Please ensure you have read the above rules carefully and searched the existing issues.
请确保您已经认真阅读了上述规则并且搜索过现有的 issues。
options:
- label: I have read the above rules and searched the existing issues.
required: true
- type: textarea
id: description
validations:
required: true
attributes:
label: Description
description: |
A clear and concise description of the feature proposal.
请详细描述您希望加入的新功能特性。
- type: textarea
id: contribution
validations:
required: false
attributes:
label: Pull Request
description: |
Have you already created the relevant PR and submitted the code?
您是否已经创建了相关 PR 并提交了代码?


@@ -1,66 +0,0 @@
name: "\U0001F41B Bug / Help"
description: Create a report to help us improve the LLaMA Factory
body:
  - type: markdown
    attributes:
      value: |
        Issues included in **FAQs** or those with **insufficient** information may be closed without a response.
        包含在**常见问题**内或提供信息**不完整**的 issues 可能不会被回复。
  - type: checkboxes
    id: reminder
    attributes:
      label: Reminder
      description: |
        Please ensure you have read the README carefully and searched the existing issues (including FAQs).
        请确保您已经认真阅读了 README 并且搜索过现有的 issues(包括常见问题)。
      options:
        - label: I have read the README and searched the existing issues.
          required: true
  - type: textarea
    id: system-info
    validations:
      required: true
    attributes:
      label: System Info
      description: |
        Please share your system info with us. You can run the command **llamafactory-cli env** and copy-paste its output below.
        请提供您的系统信息。您可以在命令行运行 **llamafactory-cli env** 并将其输出复制到该文本框中。
      placeholder: llamafactory version, platform, python version, ...
  - type: textarea
    id: reproduction
    validations:
      required: true
    attributes:
      label: Reproduction
      description: |
        Please provide code snippets, error messages and stack traces that reproduce the problem.
        请提供运行参数,错误信息以及异常堆栈以便于我们复现该问题。
        Remember to use Markdown tags to correctly format your code.
        请合理使用 Markdown 标签来格式化您的文本。
      placeholder: |
        ```bash
        llamafactory-cli train ...
        ```
  - type: textarea
    id: expected-behavior
    validations:
      required: false
    attributes:
      label: Expected behavior
      description: |
        Please provide a clear and concise description of what you would expect to happen.
        请提供您原本的目的,即这段代码的期望行为。
  - type: textarea
    id: others
    validations:
      required: false
    attributes:
      label: Others

.github/ISSUE_TEMPLATE/config.yml (new file, 1 line)

@@ -0,0 +1 @@
blank_issues_enabled: false

.github/workflows/docker.yml (new file, 66 lines)

@@ -0,0 +1,66 @@
name: docker

on:
  workflow_dispatch:
  push:
    branches:
      - "main"
    paths:
      - "**/*.py"
      - "requirements.txt"
      - "docker/**"
      - ".github/workflows/*.yml"
  pull_request:
    branches:
      - "main"
    paths:
      - "**/*.py"
      - "requirements.txt"
      - "docker/**"
      - ".github/workflows/*.yml"

jobs:
  build:
    runs-on: ubuntu-latest
    concurrency:
      group: ${{ github.workflow }}-${{ github.ref }}
      cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
    environment:
      name: docker
      url: https://hub.docker.com/r/hiyouga/llamafactory
    steps:
      - name: Free up disk space
        run: |
          df -h
          sudo rm -rf /usr/share/dotnet
          sudo rm -rf /opt/ghc
          sudo rm -rf /opt/hostedtoolcache
          df -h
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./docker/docker-cuda/Dockerfile
          build-args: |
            EXTRAS=metrics,deepspeed,liger-kernel
          push: ${{ github.event_name != 'pull_request' }}
          tags: docker.io/hiyouga/llamafactory:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
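
Because the workflow declares a `workflow_dispatch` trigger alongside `push` and `pull_request`, a maintainer can also start a build by hand. A sketch using the GitHub CLI (assuming `gh` is authenticated with access to the repository):

```bash
# Manually dispatch the image build on the main branch.
gh workflow run docker.yml --ref main
# Check the run it created.
gh run list --workflow=docker.yml --limit 1
```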


@@ -18,13 +18,15 @@ jobs:
           ISSUE_URL: ${{ github.event.issue.html_url }}
           ISSUE_TITLE: ${{ github.event.issue.title }}
         run: |
-          LABEL=pending
+          LABEL=""
           NPU_KEYWORDS=(npu huawei ascend 华为 昇腾)
           ISSUE_TITLE_LOWER=$(echo $ISSUE_TITLE | tr '[:upper:]' '[:lower:]')
           for KEYWORD in ${NPU_KEYWORDS[@]}; do
             if [[ $ISSUE_TITLE_LOWER == *$KEYWORD* ]] && [[ $ISSUE_TITLE_LOWER != *input* ]]; then
-              LABEL=pending,npu
+              LABEL="npu"
               break
             fi
           done
+          if [ -n "$LABEL" ]; then
           gh issue edit $ISSUE_URL --add-label $LABEL
+          fi
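
The labeling logic is plain shell, so it can be sanity-checked outside of Actions. A small sketch with a made-up issue title (only `ISSUE_TITLE` is hypothetical; the loop mirrors the workflow above):

```bash
ISSUE_TITLE="Ascend 910B training hangs"  # hypothetical title
LABEL=""
NPU_KEYWORDS=(npu huawei ascend 华为 昇腾)
ISSUE_TITLE_LOWER=$(echo $ISSUE_TITLE | tr '[:upper:]' '[:lower:]')
for KEYWORD in "${NPU_KEYWORDS[@]}"; do
  # Match a keyword, but skip titles about "input" (e.g. "input_ids").
  if [[ $ISSUE_TITLE_LOWER == *$KEYWORD* ]] && [[ $ISSUE_TITLE_LOWER != *input* ]]; then
    LABEL="npu"
    break
  fi
done
echo "label to apply: ${LABEL:-<none>}"  # prints "label to apply: npu"
```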


@@ -1,6 +1,7 @@
 name: publish

 on:
+  workflow_dispatch:
   release:
     types:
       - published
@@ -25,16 +26,11 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
-          python-version: "3.8"
+          python-version: "3.9"
-      - name: Install dependencies
-        run: |
-          python -m pip install --upgrade pip
-          python -m pip install build
       - name: Build package
         run: |
-          python -m build
+          make build
       - name: Publish package
         uses: pypa/gh-action-pypi-publish@release/v1


@@ -1,6 +1,7 @@
 name: tests

 on:
+  workflow_dispatch:
   push:
     branches:
       - "main"
@@ -21,20 +22,33 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version:
-          - "3.8" # TODO: remove py38 in next transformers release
+        python:
           - "3.9"
           - "3.10"
           - "3.11"
+          - "3.12"
         os:
           - "ubuntu-latest"
           - "windows-latest"
           - "macos-13"
+        transformers:
+          - null
+        include: # test backward compatibility
+          - python: "3.9"
+            os: "ubuntu-latest"
+            transformers: "4.45.0"
+          - python: "3.9"
+            os: "ubuntu-latest"
+            transformers: "4.49.0"
+          - python: "3.9"
+            os: "ubuntu-latest"
+            transformers: "4.51.0"
     runs-on: ${{ matrix.os }}
-    environment:
-      name: tests
+    concurrency:
+      group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.os }}-${{ matrix.python }}-${{ matrix.transformers }}
+      cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
     env:
       HF_TOKEN: ${{ secrets.HF_TOKEN }}
@@ -47,19 +61,42 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
-          python-version: ${{ matrix.python-version }}
+          python-version: ${{ matrix.python }}
           cache: "pip"
-          cache-dependency-path: "setup.py"
+          cache-dependency-path: "**/requirements*.txt"
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
           python -m pip install ".[torch,dev]"
+      - name: Install transformers
+        if: ${{ matrix.transformers }}
+        run: |
+          python -m pip install "transformers==${{ matrix.transformers }}"
+      - name: Cache files
+        id: hf-hub-cache
+        uses: actions/cache@v4
+        with:
+          path: ${{ runner.temp }}/huggingface
+          key: huggingface-${{ matrix.os }}-${{ matrix.python }}-${{ matrix.transformers }}-${{ hashFiles('tests/version.txt') }}
       - name: Check quality
         run: |
           make style && make quality
+      - name: Check license
+        run: |
+          make license
+      - name: Check build
+        run: |
+          make build
       - name: Test with pytest
         run: |
           make test
+        env:
+          HF_HOME: ${{ runner.temp }}/huggingface
+          HF_HUB_OFFLINE: "${{ steps.hf-hub-cache.outputs.cache-hit == 'true' && '1' || '0' }}"
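
The `include` entries pin older `transformers` releases so that backward compatibility is exercised on one runner per version. The same cell of the matrix can be approximated locally; a sketch assuming the `[torch,dev]` extras are already installed:

```bash
# Reproduce one backward-compatibility cell of the matrix locally.
python -m pip install "transformers==4.45.0"
make style && make quality
make test
```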

.gitignore (8 lines changed)

@@ -162,12 +162,18 @@ cython_debug/
 # vscode
 .vscode/

+# uv
+uv.lock
+
 # custom .gitignore
-ms_cache/
 hf_cache/
+ms_cache/
 om_cache/
 cache/
 config/
 saves/
 output/
 wandb/
+swanlog/
+generated_predictions.jsonl
+predictions_score.json


@@ -1,14 +1,17 @@
-.PHONY: build commit quality style test
+.PHONY: build commit license quality style test

 check_dirs := scripts src tests setup.py

 build:
-	pip install build && python -m build
+	pip3 install build && python3 -m build

 commit:
	pre-commit install
	pre-commit run --all-files

+license:
+	python3 tests/check_license.py $(check_dirs)
+
 quality:
	ruff check $(check_dirs)
	ruff format --check $(check_dirs)
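
With these targets in place, the new CI checks can also be run locally before committing; a quick sketch:

```bash
make license   # python3 tests/check_license.py over scripts, src, tests, setup.py
make build     # pip3 install build && python3 -m build
```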

README.md (433 lines changed)

@@ -1,40 +1,61 @@
 ![# LLaMA Factory](assets/logo.png)

 [![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
-[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
 [![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
+[![GitHub contributors](https://img.shields.io/github/contributors/hiyouga/LLaMA-Factory?color=orange)](https://github.com/hiyouga/LLaMA-Factory/graphs/contributors)
+[![GitHub workflow](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml/badge.svg)](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml)
 [![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
-[![Citation](https://img.shields.io/badge/citation-93-green)](#projects-using-llama-factory)
-[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
-[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
+[![Citation](https://img.shields.io/badge/citation-614-green)](https://scholar.google.com/scholar?cites=12620864006390196564)
+[![Docker Pulls](https://img.shields.io/docker/pulls/hiyouga/llamafactory)](https://hub.docker.com/r/hiyouga/llamafactory/tags)
 [![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
+[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
+[![GitCode](https://gitcode.com/zhengyaowei/LLaMA-Factory/star/badge.svg)](https://gitcode.com/zhengyaowei/LLaMA-Factory)
 [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)
 [![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
-[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
-[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
-[![SageMaker](https://img.shields.io/badge/SageMaker-Open%20in%20AWS-blue)](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)
-
-[![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)
-
-👋 Join our [WeChat](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).
+[![Open in Alaya](assets/alaya_new.svg)](https://docs.alayanew.com/docs/documents/newActivities/llamafactory/?utm_source=LLaMA-Factory)
+[![Open in Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
+[![Open in Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
+[![Open in Novita](https://img.shields.io/badge/Novita-Deploy%20Template-blue)](https://novita.ai/templates-library/105981?sharer=88115474-394e-4bda-968e-b88e123d0c47)
+
+### Used by [Amazon](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/), [NVIDIA](https://developer.nvidia.com/rtx/ai-toolkit), [Aliyun](https://help.aliyun.com/zh/pai/use-cases/fine-tune-a-llama-3-model-with-llama-factory), etc.
+
+<div align="center" markdown="1">
+
+### Supporters ❤️
+
+<a href="https://warp.dev/llama-factory">
+    <img alt="Warp sponsorship" width="400" src="https://github.com/user-attachments/assets/ab8dd143-b0fd-4904-bdc5-dd7ecac94eae">
+</a>
+
+#### [Warp, the agentic terminal for developers](https://warp.dev/llama-factory)
+
+[Available for MacOS, Linux, & Windows](https://warp.dev/llama-factory)
+
+----
+
+### Easily fine-tune 100+ large language models with zero-code [CLI](#quickstart) and [Web UI](#fine-tuning-with-llama-board-gui-powered-by-gradio)
+
+![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)
+
+</div>
+
+👋 Join our [WeChat group](assets/wechat.jpg), [NPU user group](assets/wechat_npu.jpg) or [Alaya NeW user group](assets/wechat_alaya.png).

 \[ English | [中文](README_zh.md) \]

 **Fine-tuning a large language model can be easy as...**

-https://github.com/user-attachments/assets/7c96b465-9df7-45f4-8053-bf03e58386d3
+https://github.com/user-attachments/assets/3991a3a8-4276-4d30-9cab-4cb0c4b9b99e

 Choose your path:

-- **Documentation (WIP)**: https://llamafactory.readthedocs.io/zh-cn/latest/
-- **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
+- **Documentation**: https://llamafactory.readthedocs.io/en/latest/
+- **Colab (free)**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
 - **Local machine**: Please refer to [usage](#getting-started)
-- **PAI-DSW**: [Llama3 Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)
-- **Amazon SageMaker**: [Blog](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)
-
-Recent activities:
-
-- **2024/10/18-2024/11/30**: Build a personal tour guide bot using PAI+LLaMA Factory. [[website]](https://developer.aliyun.com/topic/llamafactory2)
+- **PAI-DSW (free trial)**: https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
+- **Alaya NeW (cloud GPU deal)**: https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory

 > [!NOTE]
 > Except for the above links, all other websites are unauthorized third-party websites. Please carefully use them.
@@ -42,13 +63,23 @@ Recent activities:
 ## Table of Contents

 - [Features](#features)
-- [Benchmark](#benchmark)
+- [Blogs](#blogs)
 - [Changelog](#changelog)
 - [Supported Models](#supported-models)
 - [Supported Training Approaches](#supported-training-approaches)
 - [Provided Datasets](#provided-datasets)
 - [Requirement](#requirement)
 - [Getting Started](#getting-started)
+  - [Installation](#installation)
+  - [Data Preparation](#data-preparation)
+  - [Quickstart](#quickstart)
+  - [Fine-Tuning with LLaMA Board GUI](#fine-tuning-with-llama-board-gui-powered-by-gradio)
+  - [Build Docker](#build-docker)
+  - [Deploy with OpenAI-style API and vLLM](#deploy-with-openai-style-api-and-vllm)
+  - [Download from ModelScope Hub](#download-from-modelscope-hub)
+  - [Download from Modelers Hub](#download-from-modelers-hub)
+  - [Use W&B Logger](#use-wb-logger)
+  - [Use SwanLab Logger](#use-swanlab-logger)
 - [Projects using LLaMA Factory](#projects-using-llama-factory)
 - [License](#license)
 - [Citation](#citation)
@@ -56,46 +87,90 @@ Recent activities:
 ## Features

-- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
+- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, DeepSeek, Yi, Gemma, ChatGLM, Phi, etc.
 - **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.
 - **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
-- **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [Adam-mini](https://github.com/zyushun/Adam-mini), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning.
+- **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [APOLLO](https://github.com/zhuhanqing/APOLLO), [Adam-mini](https://github.com/zyushun/Adam-mini), [Muon](https://github.com/KellerJordan/Muon), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ and PiSSA.
 - **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.
-- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
-- **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.
+- **Wide tasks**: Multi-turn dialogue, tool using, image understanding, visual grounding, video recognition, audio understanding, etc.
+- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, [SwanLab](https://github.com/SwanHubX/SwanLab), etc.
+- **Faster inference**: OpenAI-style API, Gradio UI and CLI with [vLLM worker](https://github.com/vllm-project/vllm) or [SGLang worker](https://github.com/sgl-project/sglang).

-## Benchmark
-
-Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging 4-bit quantization technique, LLaMA Factory's QLoRA further improves the efficiency regarding the GPU memory.
-
-![benchmark](assets/benchmark.svg)
-
-<details><summary>Definitions</summary>
-
-- **Training Speed**: the number of training samples processed per second during the training. (bs=4, cutoff_len=1024)
-- **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)
-- **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024)
-- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning.
+### Day-N Support for Fine-Tuning Cutting-Edge Models
+
+| Support Date | Model Name                                                |
+| ------------ | --------------------------------------------------------- |
+| Day 0        | Qwen3 / Qwen2.5-VL / Gemma 3 / InternLM 3 / MiniCPM-o-2.6  |
+| Day 1        | Llama 3 / GLM-4 / Mistral Small / PaliGemma2 / Llama 4     |
+
+## Blogs
+
+- [Fine-tune Qwen2.5-VL for Autonomous Driving using LLaMA-Factory](https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory) (Chinese)
+- [How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/) (English)
+- [Easy Dataset × LLaMA Factory: Enabling LLMs to Efficiently Learn Domain Knowledge](https://buaa-act.feishu.cn/wiki/GVzlwYcRFiR8OLkHbL6cQpYin7g) (English)
+
+<details><summary>All Blogs</summary>
+
+- [LLaMA Factory: Fine-tuning the DeepSeek-R1-Distill-Qwen-7B Model for News Classifier](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b) (Chinese)
+- [A One-Stop Code-Free Model Fine-Tuning \& Deployment Platform based on SageMaker and LLaMA-Factory](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) (Chinese)
+- [LLaMA Factory Multi-Modal Fine-Tuning Practice: Fine-Tuning Qwen2-VL for Personal Tourist Guide](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) (Chinese)
+- [LLaMA Factory: Fine-tuning the LLaMA3 Model for Role-Playing](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) (Chinese)

 </details>

 ## Changelog

-[24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage.
-
-[24/09/19] We support fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.
-
-[24/08/30] We support fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thank [@simonJJJ](https://github.com/simonJJJ)'s PR.
-
-[24/08/27] We support **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training.
-
-[24/08/09] We support **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thank [@relic-yuexi](https://github.com/relic-yuexi)'s PR.
+[25/04/28] We supported fine-tuning the **[Qwen3](https://qwenlm.github.io/blog/qwen3/)** model family.
+
+[25/04/21] We supported the **[Muon](https://github.com/KellerJordan/Muon)** optimizer. See [examples](examples/README.md) for usage. Thank [@tianshijing](https://github.com/tianshijing)'s PR.
+
+[25/04/16] We supported fine-tuning the **[InternVL3](https://huggingface.co/OpenGVLab/InternVL3-8B)** model. See [PR #7258](https://github.com/hiyouga/LLaMA-Factory/pull/7258) to get started.
+
+[25/04/14] We supported fine-tuning the **[GLM-Z1](https://huggingface.co/THUDM/GLM-Z1-9B-0414)** and **[Kimi-VL](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)** models.
+
+[25/04/06] We supported fine-tuning the **[Llama 4](https://ai.meta.com/blog/llama-4-multimodal-intelligence/)** model. See [PR #7611](https://github.com/hiyouga/LLaMA-Factory/pull/7611) to get started.

 <details><summary>Full Changelog</summary>

-[24/07/04] We support [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thank [@chuan298](https://github.com/chuan298)'s PR.
-
-[24/06/16] We support **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.
+[25/03/31] We supported fine-tuning the **[Qwen2.5 Omni](https://qwenlm.github.io/blog/qwen2.5-omni/)** model. See [PR #7537](https://github.com/hiyouga/LLaMA-Factory/pull/7537) to get started.
+
+[25/03/15] We supported **[SGLang](https://github.com/sgl-project/sglang)** as inference backend. Try `infer_backend: sglang` to accelerate inference.
+
+[25/03/12] We supported fine-tuning the **[Gemma 3](https://huggingface.co/blog/gemma3)** model.
+
+[25/02/24] Announcing **[EasyR1](https://github.com/hiyouga/EasyR1)**, an efficient, scalable and multi-modality RL training framework for efficient GRPO training.
+
+[25/02/11] We supported saving the **[Ollama](https://github.com/ollama/ollama)** modelfile when exporting the model checkpoints. See [examples](examples/README.md) for usage.
+
+[25/02/05] We supported fine-tuning the **[Qwen2-Audio](Qwen/Qwen2-Audio-7B-Instruct)** and **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** on audio understanding tasks.
+
+[25/01/31] We supported fine-tuning the **[DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)** and **[Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)** models.
+
+[25/01/15] We supported **[APOLLO](https://arxiv.org/abs/2412.05270)** optimizer. See [examples](examples/README.md) for usage.
+
+[25/01/14] We supported fine-tuning the **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** and **[MiniCPM-V-2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6)** models. Thank [@BUAADreamer](https://github.com/BUAADreamer)'s PR.
+
+[25/01/14] We supported fine-tuning the **[InternLM 3](https://huggingface.co/collections/internlm/)** models. Thank [@hhaAndroid](https://github.com/hhaAndroid)'s PR.
+
+[25/01/10] We supported fine-tuning the **[Phi-4](https://huggingface.co/microsoft/phi-4)** model.
+
+[24/12/21] We supported using **[SwanLab](https://github.com/SwanHubX/SwanLab)** for experiment tracking and visualization. See [this section](#use-swanlab-logger) for details.
+
+[24/11/27] We supported fine-tuning the **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** model and the **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** dataset.
+
+[24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage.
+
+[24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.
+
+[24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thank [@simonJJJ](https://github.com/simonJJJ)'s PR.
+
+[24/08/27] We supported **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training.
+
+[24/08/09] We supported **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thank [@relic-yuexi](https://github.com/relic-yuexi)'s PR.
+
+[24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thank [@chuan298](https://github.com/chuan298)'s PR.
+
+[24/06/16] We supported **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.

 [24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models.
@@ -171,38 +246,61 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
 </details>

+> [!TIP]
+> If you cannot use the latest feature, please pull the latest code and install LLaMA-Factory again.
+
 ## Supported Models

 | Model                                                              | Model size                       | Template            |
-| ----------------------------------------------------------------- | -------------------------------- | ---------------- |
+| ------------------------------------------------------------------ | -------------------------------- | ------------------- |
 | [Baichuan 2](https://huggingface.co/baichuan-inc)                  | 7B/13B                           | baichuan2           |
 | [BLOOM/BLOOMZ](https://huggingface.co/bigscience)                  | 560M/1.1B/1.7B/3B/7.1B/176B      | -                   |
 | [ChatGLM3](https://huggingface.co/THUDM)                           | 6B                               | chatglm3            |
 | [Command R](https://huggingface.co/CohereForAI)                    | 35B/104B                         | cohere              |
 | [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai)          | 7B/16B/67B/236B                  | deepseek            |
+| [DeepSeek 2.5/3](https://huggingface.co/deepseek-ai)               | 236B/671B                        | deepseek3           |
+| [DeepSeek R1 (Distill)](https://huggingface.co/deepseek-ai)        | 1.5B/7B/8B/14B/32B/70B/671B      | deepseekr1          |
 | [Falcon](https://huggingface.co/tiiuae)                            | 7B/11B/40B/180B                  | falcon              |
 | [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google)           | 2B/7B/9B/27B                     | gemma               |
-| [GLM-4](https://huggingface.co/THUDM)                              | 9B                               | glm4                |
+| [Gemma 3](https://huggingface.co/google)                           | 1B/4B/12B/27B                    | gemma3/gemma (1B)   |
+| [GLM-4/GLM-4-0414/GLM-Z1](https://huggingface.co/THUDM)            | 9B/32B                           | glm4/glmz1          |
+| [GPT-2](https://huggingface.co/openai-community)                   | 0.1B/0.4B/0.8B/1.5B              | -                   |
+| [Granite 3.0-3.3](https://huggingface.co/ibm-granite)              | 1B/2B/3B/8B                      | granite3            |
+| [Hunyuan](https://huggingface.co/tencent/)                         | 7B                               | hunyuan             |
 | [Index](https://huggingface.co/IndexTeam)                          | 1.9B                             | index               |
-| [InternLM2/InternLM2.5](https://huggingface.co/internlm)           | 7B/20B                           | intern2             |
+| [InternLM 2-3](https://huggingface.co/internlm)                    | 7B/8B/20B                        | intern2             |
+| [InternVL 2.5-3](https://huggingface.co/OpenGVLab)                 | 1B/2B/8B/14B/38B/78B             | intern_vl           |
+| [Kimi-VL](https://huggingface.co/moonshotai)                       | 16B                              | kimi_vl             |
 | [Llama](https://github.com/facebookresearch/llama)                 | 7B/13B/33B/65B                   | -                   |
 | [Llama 2](https://huggingface.co/meta-llama)                       | 7B/13B/70B                       | llama2              |
-| [Llama 3-3.2](https://huggingface.co/meta-llama)                   | 1B/3B/8B/70B                     | llama3              |
+| [Llama 3-3.3](https://huggingface.co/meta-llama)                   | 1B/3B/8B/70B                     | llama3              |
+| [Llama 4](https://huggingface.co/meta-llama)                       | 109B/402B                        | llama4              |
 | [Llama 3.2 Vision](https://huggingface.co/meta-llama)              | 11B/90B                          | mllama              |
 | [LLaVA-1.5](https://huggingface.co/llava-hf)                       | 7B/13B                           | llava               |
 | [LLaVA-NeXT](https://huggingface.co/llava-hf)                      | 7B/8B/13B/34B/72B/110B           | llava_next          |
 | [LLaVA-NeXT-Video](https://huggingface.co/llava-hf)                | 7B/34B                           | llava_next_video    |
-| [MiniCPM](https://huggingface.co/openbmb)                          | 1B/2B/4B                         | cpm/cpm3            |
+| [MiMo](https://huggingface.co/XiaomiMiMo)                          | 7B                               | mimo                |
+| [MiniCPM](https://huggingface.co/openbmb)                          | 0.5B/1B/2B/4B/8B                 | cpm/cpm3/cpm4       |
+| [MiniCPM-o-2.6/MiniCPM-V-2.6](https://huggingface.co/openbmb)      | 8B                               | minicpm_o/minicpm_v |
+| [Ministral/Mistral-Nemo](https://huggingface.co/mistralai)         | 8B/12B                           | ministral           |
 | [Mistral/Mixtral](https://huggingface.co/mistralai)                | 7B/8x7B/8x22B                    | mistral             |
+| [Mistral Small](https://huggingface.co/mistralai)                  | 24B                              | mistral_small       |
 | [OLMo](https://huggingface.co/allenai)                             | 1B/7B                            | -                   |
-| [PaliGemma](https://huggingface.co/google)                         | 3B                               | paligemma           |
+| [PaliGemma/PaliGemma2](https://huggingface.co/google)              | 3B/10B/28B                       | paligemma           |
 | [Phi-1.5/Phi-2](https://huggingface.co/microsoft)                  | 1.3B/2.7B                        | -                   |
-| [Phi-3](https://huggingface.co/microsoft)                          | 4B/14B                           | phi                 |
+| [Phi-3/Phi-3.5](https://huggingface.co/microsoft)                  | 4B/14B                           | phi                 |
 | [Phi-3-small](https://huggingface.co/microsoft)                    | 7B                               | phi_small           |
+| [Phi-4](https://huggingface.co/microsoft)                          | 14B                              | phi4                |
 | [Pixtral](https://huggingface.co/mistralai)                        | 12B                              | pixtral             |
-| [Qwen (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen)        | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen                |
-| [Qwen2-VL](https://huggingface.co/Qwen)                            | 2B/7B/72B                        | qwen2_vl            |
+| [Qwen (1-2.5) (Code/Math/MoE/QwQ)](https://huggingface.co/Qwen)    | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen                |
+| [Qwen3 (MoE)](https://huggingface.co/Qwen)                         | 0.6B/1.7B/4B/8B/14B/32B/235B     | qwen3               |
+| [Qwen2-Audio](https://huggingface.co/Qwen)                         | 7B                               | qwen2_audio         |
+| [Qwen2.5-Omni](https://huggingface.co/Qwen)                        | 3B/7B                            | qwen2_omni          |
+| [Qwen2-VL/Qwen2.5-VL/QVQ](https://huggingface.co/Qwen)             | 2B/3B/7B/32B/72B                 | qwen2_vl            |
+| [Seed Coder](https://huggingface.co/ByteDance-Seed)                | 8B                               | seed_coder          |
+| [Skywork o1](https://huggingface.co/Skywork)                       | 8B                               | skywork_o1          |
 | [StarCoder 2](https://huggingface.co/bigcode)                      | 3B/7B/15B                        | -                   |
+| [TeleChat2](https://huggingface.co/Tele-AI)                        | 3B/7B/35B/115B                   | telechat2           |
 | [XVERSE](https://huggingface.co/xverse)                            | 7B/13B/65B                       | xverse              |
 | [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai)                   | 1.5B/6B/9B/34B                   | yi                  |
 | [Yi-VL](https://huggingface.co/01-ai)                              | 6B/34B                           | yi_vl               |
@@ -212,6 +310,10 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
 > For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
 >
 > Remember to use the **SAME** template in training and inference.
+>
+> \*: You should install the `transformers` from main branch and use `DISABLE_VERSION_CHECK=1` to skip version check.
+>
+> \*\*: You need to install a specific version of `transformers` to use the corresponding model.

 Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of models we supported.
@@ -290,9 +392,13 @@ You also can add a custom chat template to [template.py](src/llamafactory/data/t
 - [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
 - [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
 - [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
-- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
 - [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
 - [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
+- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
+- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)
+- [Open-Thoughts (en)](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
+- [Open-R1-Math (en)](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
+- [Chinese-DeepSeek-R1-Distill (zh)](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k-SFT)
 - [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
 - [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
 - [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
@@ -311,8 +417,10 @@ You also can add a custom chat template to [template.py](src/llamafactory/data/t
 - [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
 - [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
+- [COIG-P (zh)](https://huggingface.co/datasets/m-a-p/COIG-P)
 - [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
 - [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
+- [RLAIF-V (en)](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)
 - [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
 - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
 - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
@@ -332,35 +440,35 @@ huggingface-cli login

 | Mandatory    | Minimum | Recommend |
 | ------------ | ------- | --------- |
-| python       | 3.8     | 3.11      |
-| torch        | 1.13.1  | 2.4.0     |
-| transformers | 4.41.2  | 4.43.4    |
-| datasets     | 2.16.0  | 2.20.0    |
-| accelerate   | 0.30.1  | 0.32.0    |
-| peft         | 0.11.1  | 0.12.0    |
+| python       | 3.9     | 3.10      |
+| torch        | 2.0.0   | 2.6.0     |
+| torchvision  | 0.15.0  | 0.21.0    |
+| transformers | 4.45.0  | 4.50.0    |
+| datasets     | 2.16.0  | 3.2.0     |
+| accelerate   | 0.34.0  | 1.2.1     |
+| peft         | 0.14.0  | 0.15.1    |
 | trl          | 0.8.6   | 0.9.6     |

 | Optional     | Minimum | Recommend |
 | ------------ | ------- | --------- |
 | CUDA         | 11.6    | 12.2      |
-| deepspeed    | 0.10.0  | 0.14.0    |
+| deepspeed    | 0.10.0  | 0.16.4    |
 | bitsandbytes | 0.39.0  | 0.43.1    |
-| vllm         | 0.4.3   | 0.5.0     |
-| flash-attn   | 2.3.0   | 2.6.3     |
+| vllm         | 0.4.3   | 0.8.2     |
+| flash-attn   | 2.5.6   | 2.7.2     |

 ### Hardware Requirement

 \* *estimated*

-| Method            | Bits |   7B  |  13B  |  30B  |   70B  |  110B  |  8x7B |  8x22B |
-| ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ |
-| Full              | AMP  | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |
-| Full              |  16  |  60GB | 120GB | 300GB |  600GB |  900GB | 400GB | 1200GB |
-| Freeze            |  16  |  20GB |  40GB |  80GB |  200GB |  360GB | 160GB |  400GB |
-| LoRA/GaLore/BAdam |  16  |  16GB |  32GB |  64GB |  160GB |  240GB | 120GB |  320GB |
-| QLoRA             |   8  |  10GB |  20GB |  40GB |   80GB |  140GB |  60GB |  160GB |
-| QLoRA             |   4  |   6GB |  12GB |  24GB |   48GB |   72GB |  30GB |   96GB |
-| QLoRA             |   2  |   4GB |   8GB |  16GB |   24GB |   48GB |  18GB |   48GB |
+| Method                          | Bits |  7B   |  14B  |  30B  |  70B   |  `x`B   |
+| ------------------------------- | ---- | ----- | ----- | ----- | ------ | ------- |
+| Full (`bf16` or `fp16`)         |  32  | 120GB | 240GB | 600GB | 1200GB | `18x`GB |
+| Full (`pure_bf16`)              |  16  |  60GB | 120GB | 300GB |  600GB |  `8x`GB |
+| Freeze/LoRA/GaLore/APOLLO/BAdam |  16  |  16GB |  32GB |  64GB |  160GB |  `2x`GB |
+| QLoRA                           |   8  |  10GB |  20GB |  40GB |   80GB |   `x`GB |
+| QLoRA                           |   4  |   6GB |  12GB |  24GB |   48GB | `x/2`GB |
+| QLoRA                           |   2  |   4GB |   8GB |  16GB |   24GB | `x/4`GB |

 ## Getting Started
@@ -369,53 +477,99 @@ huggingface-cli login
> [!IMPORTANT] > [!IMPORTANT]
> Installation is mandatory. > Installation is mandatory.
#### Install from Source
```bash ```bash
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory cd LLaMA-Factory
pip install -e ".[torch,metrics]" pip install -e ".[torch,metrics]" --no-build-isolation
``` ```
Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, openmind, quality Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, aqlm, vllm, sglang, galore, apollo, badam, adam-mini, qwen, minicpm_v, modelscope, openmind, swanlab, dev
> [!TIP] #### Install from Docker Image
> Use `pip install --no-deps -e .` to resolve package conflicts.
```bash
docker run -it --rm --gpus=all --ipc=host hiyouga/llamafactory:latest
```
This image is built on Ubuntu 22.04 (x86\_64), CUDA 12.4, Python 3.11, PyTorch 2.6.0, and Flash-attn 2.7.4.
Find the pre-built images: https://hub.docker.com/r/hiyouga/llamafactory/tags
Please refer to [build docker](#build-docker) to build the image yourself.
<details><summary>Setting up a virtual environment with <b>uv</b></summary>
Create an isolated Python environment with [uv](https://github.com/astral-sh/uv):
```bash
uv sync --extra torch --extra metrics --prerelease=allow
```
Run LLaMA-Factory in the isolated environment:
```bash
uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```
</details>
<details><summary>For Windows users</summary>
#### Install PyTorch
You need to manually install the GPU version of PyTorch on the Windows platform. Please refer to the [official website](https://pytorch.org/get-started/locally/) and the following commands to install and verify PyTorch with CUDA support:
```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.cuda.is_available())"
```
If you see `True` then you have successfully installed PyTorch with CUDA support.
Try `dataloader_num_workers: 0` if you encounter a `Can't pickle local object` error.
#### Install BitsAndBytes
If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.

```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
```
#### Install Flash Attention-2

To enable FlashAttention-2 on the Windows platform, please use the script from [flash-attention-windows-wheel](https://huggingface.co/lldacing/flash-attention-windows-wheel) to compile and install it yourself.
</details>

<details><summary>For Ascend NPU users</summary>

To install LLaMA Factory on Ascend NPU devices, please upgrade Python to version 3.10 or higher and specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands:

```bash
# replace the url according to your CANN version and devices
# install CANN Toolkit
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run
bash Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run --install

# install CANN Kernels
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run
bash Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run --install

# set env variables
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
| Requirement | Minimum | Recommend      |
| ----------- | ------- | -------------- |
| CANN        | 8.0.RC1 | 8.0.0.alpha002 |
| torch       | 2.1.0   | 2.4.0          |
| torch-npu   | 2.1.0   | 2.4.0.post2    |
| deepspeed   | 0.13.2  | 0.13.2         |
| vllm-ascend | -       | 0.7.3          |
Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
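A minimal launch sketch (assuming two visible NPUs; the config path is illustrative):

```bash
# pin the run to NPU 0 and NPU 1, then start LoRA fine-tuning
ASCEND_RT_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```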
If you cannot run inference on NPU devices, try setting `do_sample: false` in the configurations.
Download the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
#### Install BitsAndBytes
To use QLoRA based on bitsandbytes on Ascend NPU, please follow these 3 steps:
1. Manually compile bitsandbytes: Refer to [the installation documentation](https://huggingface.co/docs/bitsandbytes/installation?backend=Ascend+NPU&platform=Ascend+NPU) for the NPU version of bitsandbytes to complete the compilation and installation. The compilation requires a cmake version of at least 3.22.1 and a g++ version of at least 12.x.
```bash
# Install bitsandbytes from source
# Clone bitsandbytes repo, Ascend NPU backend is currently enabled on multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git
cd bitsandbytes/
# Install dependencies
pip install -r requirements-dev.txt
# Install the dependencies for the compilation tools. Note that the commands for this step may vary depending on the operating system. The following are provided for reference
apt-get install -y build-essential cmake
# Compile & install
cmake -DCOMPUTE_BACKEND=npu -S .
make
pip install .
```
2. Install transformers from the main branch.
```bash
git clone -b main https://github.com/huggingface/transformers.git
cd transformers
pip install .
```
3. Set `double_quantization: false` in the configuration. You can refer to the [example](examples/train_qlora/llama3_lora_sft_bnb_npu.yaml) and the launch command below.
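Assuming the steps above succeeded, a QLoRA run on NPU can then be launched with the bundled config:

```bash
# fine-tune Llama3 with bitsandbytes 4-bit quantization on Ascend NPU
llamafactory-cli train examples/train_qlora/llama3_lora_sft_bnb_npu.yaml
```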
</details>

### Data Preparation

Please refer to [data/README.md](data/README.md) for checking the details about the format of dataset files. You can use datasets on HuggingFace / ModelScope / Modelers hub, load the dataset in local disk, or specify a path to s3/gcs cloud storage.

> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset.
You can also use **[Easy Dataset](https://github.com/ConardLi/easy-dataset)** or **[GraphGen](https://github.com/open-sciencelab/GraphGen)** to create synthetic data for fine-tuning.
### Quickstart

Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.
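The bundled LoRA configs under `examples/` are used below for illustration; substitute your own yaml files as needed.

```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```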
See [examples/README.md](examples/README.md) for advanced usage (including distributed training).
> [!TIP]
> Use `llamafactory-cli help` to show help information.
>
> Read [FAQs](https://github.com/hiyouga/LLaMA-Factory/issues/4614) first if you encounter any problems.
### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
### Build Docker

For CUDA users:
```bash
docker build -f ./docker/docker-cuda/Dockerfile \
    --build-arg PIP_INDEX=https://pypi.org/simple \
    --build-arg EXTRAS=metrics \
    -t llamafactory:latest .

docker run -dit --ipc=host --gpus=all \
    -p 7860:7860 \
    -p 8000:8000 \
    --name llamafactory \
    llamafactory:latest
docker exec -it llamafactory bash
```
For Ascend NPU users:

```bash
docker build -f ./docker/docker-npu/Dockerfile \
    --build-arg PIP_INDEX=https://pypi.org/simple \
    --build-arg EXTRAS=torch-npu,metrics \
    -t llamafactory:latest .

docker run -dit --ipc=host \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    -v /etc/ascend_install.info:/etc/ascend_install.info \
    -p 7860:7860 \
    -p 8000:8000 \
    --device /dev/davinci0 \
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    --name llamafactory \
    llamafactory:latest
docker exec -it llamafactory bash
```

For AMD ROCm users:
```bash
docker build -f ./docker/docker-rocm/Dockerfile \
    --build-arg PIP_INDEX=https://pypi.org/simple \
    --build-arg EXTRAS=metrics \
    -t llamafactory:latest .

docker run -dit --ipc=host \
    -p 7860:7860 \
    -p 8000:8000 \
    --device /dev/kfd \
    --device /dev/dri \
    --name llamafactory \
    llamafactory:latest
docker exec -it llamafactory bash
```
</details>
<details><summary>Use Docker volumes</summary>

You can uncomment `VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]` in the Dockerfile to use data volumes.

When starting the Docker container, use the `-v ./hf_cache:/root/.cache/huggingface` argument to mount a local directory into the container. The following data volumes are available.

- `hf_cache`: Utilize Hugging Face cache on the host machine.
- `shared_data`: The directory to store datasets on the host machine.
- `output`: Set export dir to this location so that the merged result can be accessed directly on the host machine.

</details>
### Deploy with OpenAI-style API and vLLM

```bash
API_PORT=8000 llamafactory-cli api examples/inference/llama3.yaml infer_backend=vllm vllm_enforce_eager=true
```

> [!TIP]
> Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for the API documentation.
>
> Examples: [Image understanding](scripts/api_example/test_image.py) | [Function calling](scripts/api_example/test_toolcall.py)
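Once the server is running, a minimal request sketch looks like this (the endpoint follows the OpenAI chat completions convention; the model name is a placeholder):

```bash
# send a chat completion request to the local API server started above
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
```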
### Download from ModelScope Hub
If you have trouble downloading models and datasets from Hugging Face, you can use ModelScope: set `USE_MODELSCOPE_HUB=1` and pass a ModelScope model ID as `model_name_or_path`.

### Download from Modelers Hub

Similarly, set `USE_OPENMIND_HUB=1` to download models and datasets from the Modelers hub.

### Use W&B Logger

To use [Weights & Biases](https://wandb.ai) for logging experimental results, add `report_to: wandb` and an optional `run_name: test_run` to the yaml files.
Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account.
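For example (placeholder key; the config path is illustrative):

```bash
# log in to W&B via the environment variable and start training
WANDB_API_KEY=xxxxxxxx llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```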
### Use SwanLab Logger
To use [SwanLab](https://github.com/SwanHubX/SwanLab) for logging experimental results, you need to add the following arguments to yaml files.
```yaml
use_swanlab: true
swanlab_run_name: test_run # optional
```
When launching training tasks, you can log in to SwanLab in three ways:

1. Add `swanlab_api_key=<your_api_key>` to the yaml file and set it to your [API key](https://swanlab.cn/settings).
2. Set the environment variable `SWANLAB_API_KEY` to your [API key](https://swanlab.cn/settings), as in the example below.
3. Use the `swanlab login` command to complete the login.
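A sketch of the second way (placeholder key; the config path is illustrative):

```bash
# log in to SwanLab via the environment variable and start training
SWANLAB_API_KEY=xxxxxxxx llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```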
## Projects using LLaMA Factory

If you have a project that should be incorporated, please contact us via email or create a pull request.
1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)
1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)
1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611)
1. Zhang et al. CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling. ACL 2024. [[paper]](https://aclanthology.org/2024.findings-acl.830.pdf)
1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B.
1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge.
1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX.
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way for building multi-agent LLMs applications and supports model fine-tuning via LLaMA Factory.
1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357)
1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**: A modified library that supports long sequence SFT & DPO using ring attention.
1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**: An o1-like model fine-tuned by NovaSky AI with very small cost.
1. **[WeClone](https://github.com/xming521/WeClone)**: One-stop solution for creating your digital avatar from chat logs.
1. **[EmoLLM](https://github.com/SmartFlowAI/EmoLLM)**: A project about large language models (LLMs) and mental health.
</details>

## License

This repository is licensed under the [Apache-2.0 License](LICENSE).
Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [GPT-2](https://github.com/openai/gpt-2/blob/master/LICENSE) / [Granite](LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Llama 4](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3/Phi-4](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [TeleChat2](https://huggingface.co/Tele-AI/telechat-7B/blob/main/TeleChat%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
## Citation

----
![# LLaMA Factory](assets/logo.png)

[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
[![Citation](https://img.shields.io/badge/citation-614-green)](https://scholar.google.com/scholar?cites=12620864006390196564)
[![Docker Pulls](https://img.shields.io/docker/pulls/hiyouga/llamafactory)](https://hub.docker.com/r/hiyouga/llamafactory/tags)
[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![GitCode](https://gitcode.com/zhengyaowei/LLaMA-Factory/star/badge.svg)](https://gitcode.com/zhengyaowei/LLaMA-Factory)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)
[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[![Open in Alaya](assets/alaya_new.svg)](https://docs.alayanew.com/docs/documents/newActivities/llamafactory/?utm_source=LLaMA-Factory)
[![Open in Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Open in Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Novita](https://img.shields.io/badge/Novita-Deploy%20Template-blue)](https://novita.ai/templates-library/105981?sharer=88115474-394e-4bda-968e-b88e123d0c47)
### Used by [Amazon](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/), [NVIDIA](https://developer.nvidia.cn/rtx/ai-toolkit), [Aliyun](https://help.aliyun.com/zh/pai/use-cases/fine-tune-a-llama-3-model-with-llama-factory), and more.

<div align="center" markdown="1">

### Sponsors ❤️

<a href="https://warp.dev/llama-factory">
    <img alt="Warp sponsorship" width="400" src="https://github.com/user-attachments/assets/ab8dd143-b0fd-4904-bdc5-dd7ecac94eae">
</a>

#### [Warp, the intelligent terminal for developers](https://warp.dev/llama-factory)

[Available for MacOS, Linux, & Windows](https://warp.dev/llama-factory)

----

### Easily fine-tune 100+ large language models with zero-code [CLI](#快速开始) and [Web UI](#llama-board-可视化微调由-gradio-驱动)

![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)

</div>

👋 Join our [WeChat group](assets/wechat.jpg), [NPU user group](assets/wechat_npu.jpg) or [Alaya NeW compute discount group](assets/wechat_alaya.png).
\[ [English](README.md) | 中文 \]

**Fine-tuning a large language model can be easy as...**

https://github.com/user-attachments/assets/43b700c6-a178-41db-b1f8-8190a5d3fcfc

Choose your path:

- **Beginner tutorial**: https://zhuanlan.zhihu.com/p/695287607
- **Documentation**: https://llamafactory.readthedocs.io/zh-cn/latest/
- **Documentation (Ascend NPU)**: https://ascend.github.io/docs/sources/llamafactory/
- **Colab (free)**: https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing
- **Local machine**: please refer to [How to Use](#如何使用)
- **PAI-DSW (free trial)**: https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
- **Alaya NeW (cloud compute discount)**: https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory

> [!NOTE]
> Websites other than the links above are unauthorized third-party websites. Please use them with care.
## Table of Contents

- [Features](#项目特色)
- [Official Blog](#官方博客)
- [Changelog](#更新日志)
- [Supported Models](#模型)
- [Supported Training Approaches](#训练方法)
- [Provided Datasets](#数据集)
- [Requirement](#软硬件依赖)
- [How to Use](#如何使用)
  - [Install LLaMA Factory](#安装-llama-factory)
  - [Data Preparation](#数据准备)
  - [Quickstart](#快速开始)
  - [Fine-Tuning with LLaMA Board GUI](#llama-board-可视化微调由-gradio-驱动)
  - [Build Docker](#构建-docker)
  - [Deploy with OpenAI-style API and vLLM](#利用-vllm-部署-openai-api)
  - [Download from ModelScope Hub](#从魔搭社区下载)
  - [Download from Modelers Hub](#从魔乐社区下载)
  - [Use W&B Logger](#使用-wb-面板)
  - [Use SwanLab Logger](#使用-swanlab-面板)
- [Projects using LLaMA Factory](#使用了-llama-factory-的项目)
- [License](#协议)
- [Citation](#引用)
## Features

- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, DeepSeek, Yi, Gemma, ChatGLM, Phi, etc.
- **Integrated methods**: (continual) pre-training, multimodal supervised fine-tuning, reward modeling, PPO training, DPO training, KTO training, ORPO training, etc.
- **Various precisions**: 16-bit full-parameter fine-tuning, freeze fine-tuning, LoRA fine-tuning, and 2/3/4/5/6/8-bit QLoRA fine-tuning based on AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
- **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [APOLLO](https://github.com/zhuhanqing/APOLLO), [Adam-mini](https://github.com/zyushun/Adam-mini), [Muon](https://github.com/KellerJordan/Muon), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ and PiSSA.
- **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.
- **Wide tasks**: multi-turn dialogue, tool using, image understanding, visual grounding, video recognition, audio understanding, etc.
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, [SwanLab](https://github.com/SwanHubX/SwanLab), etc.
- **Faster inference**: OpenAI-style API, browser UI and command-line interface based on [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang).
### Day-N Support for Fine-Tuning Cutting-Edge Models

| Support Date | Model Name                                                |
| ------------ | --------------------------------------------------------- |
| Day 0        | Qwen3 / Qwen2.5-VL / Gemma 3 / InternLM 3 / MiniCPM-o-2.6 |
| Day 1        | Llama 3 / GLM-4 / Mistral Small / PaliGemma2 / Llama 4    |

## Official Blog
- [Fine-tune Qwen2.5-VL with LLaMA-Factory for autonomous driving scenarios](https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory) (Chinese)
- [How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/) (English)
- [Easy Dataset × LLaMA Factory: enabling LLMs to efficiently learn domain knowledge](https://buaa-act.feishu.cn/wiki/KY9xwTGs1iqHrRkjXBwcZP9WnL9) (Chinese)

<details><summary>All blogs</summary>

- [Fine-tune the DeepSeek-R1-Distill-Qwen-7B model with LLaMA-Factory to build a news headline classifier](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b) (Chinese)
- [Model Hub: a one-stop, code-free model fine-tuning and deployment platform based on Amazon SageMaker and LLaMA-Factory](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) (Chinese)
- [Multimodal fine-tuning practice with LLaMA Factory: fine-tune Qwen2-VL to build a cultural tourism model](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) (Chinese)
- [Fine-tune the Llama 3 model with LLaMA Factory for role-playing](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) (Chinese)
</details>

## Changelog
[25/04/28] We supported fine-tuning the **[Qwen3](https://qwenlm.github.io/blog/qwen3/)** models.

[25/04/21] We supported the **[Muon](https://github.com/KellerJordan/Muon)** optimizer. See [examples](examples/README_zh.md) for usage. Thank [@tianshijing](https://github.com/tianshijing)'s PR.

[25/04/16] We supported fine-tuning the **[InternVL3](https://huggingface.co/OpenGVLab/InternVL3-8B)** model. See [PR #7258](https://github.com/hiyouga/LLaMA-Factory/pull/7258) to get started.

[25/04/14] We supported fine-tuning the **[GLM-Z1](https://huggingface.co/THUDM/GLM-Z1-9B-0414)** and **[Kimi-VL](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)** models.

[25/04/06] We supported fine-tuning the **[Llama 4](https://ai.meta.com/blog/llama-4-multimodal-intelligence/)** model. See [PR #7611](https://github.com/hiyouga/LLaMA-Factory/pull/7611) to get started.

<details><summary>Full changelog</summary>

[25/03/31] We supported fine-tuning the **[Qwen2.5 Omni](https://qwenlm.github.io/blog/qwen2.5-omni/)** model. See [PR #7537](https://github.com/hiyouga/LLaMA-Factory/pull/7537) to get started.

[25/03/15] We supported the **[SGLang](https://github.com/sgl-project/sglang)** inference backend. Use `infer_backend: sglang` to enable it.

[25/03/12] We supported fine-tuning the **[Gemma 3](https://huggingface.co/blog/gemma3)** model.

[25/02/24] We announced the open-source release of **[EasyR1](https://github.com/hiyouga/EasyR1)**, an efficient and scalable framework for multimodal reinforcement learning that supports efficient GRPO training.

[25/02/11] We supported saving **[Ollama](https://github.com/ollama/ollama)** modelfiles when exporting models. See [examples](examples/README_zh.md) for usage.

[25/02/05] We supported fine-tuning the **[Qwen2-Audio](Qwen/Qwen2-Audio-7B-Instruct)** and **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** models on audio understanding tasks.

[25/01/31] We supported fine-tuning the **[DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)** and **[Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)** models.

[25/01/15] We supported the **[APOLLO](https://arxiv.org/abs/2412.05270)** optimizer. See [examples](examples/README_zh.md) for usage.

[25/01/14] We supported fine-tuning the **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** and **[MiniCPM-V-2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6)** models. Thank [@BUAADreamer](https://github.com/BUAADreamer)'s PR.

[25/01/14] We supported fine-tuning the **[InternLM 3](https://huggingface.co/collections/internlm/)** models. Thank [@hhaAndroid](https://github.com/hhaAndroid)'s PR.

[25/01/10] We supported fine-tuning the **[Phi-4](https://huggingface.co/microsoft/phi-4)** model.

[24/12/21] We supported using **[SwanLab](https://github.com/SwanHubX/SwanLab)** for experiment tracking and visualization. See [this section](#使用-swanlab-面板) for details.

[24/11/27] We supported fine-tuning the **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** model and the **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** dataset.

[24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#从魔乐社区下载) for usage.

[24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.
[24/08/09] We supported the **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README_zh.md) for usage. Thank [@relic-yuexi](https://github.com/relic-yuexi)'s PR.

[24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thank [@chuan298](https://github.com/chuan298)'s PR.

[24/06/16] We supported the **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README_zh.md) for usage.
</details>

> [!TIP]
> If you cannot use the latest feature, please try pulling the latest code and installing LLaMA-Factory again.

## Supported Models
| Model                                                             | Model size                       | Template            |
| ----------------------------------------------------------------- | -------------------------------- | ------------------- |
| [Baichuan 2](https://huggingface.co/baichuan-inc)                 | 7B/13B                           | baichuan2           |
| [BLOOM/BLOOMZ](https://huggingface.co/bigscience)                 | 560M/1.1B/1.7B/3B/7.1B/176B      | -                   |
| [ChatGLM3](https://huggingface.co/THUDM)                          | 6B                               | chatglm3            |
| [Command R](https://huggingface.co/CohereForAI)                   | 35B/104B                         | cohere              |
| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai)         | 7B/16B/67B/236B                  | deepseek            |
| [DeepSeek 2.5/3](https://huggingface.co/deepseek-ai)              | 236B/671B                        | deepseek3           |
| [DeepSeek R1 (Distill)](https://huggingface.co/deepseek-ai)       | 1.5B/7B/8B/14B/32B/70B/671B      | deepseekr1          |
| [Falcon](https://huggingface.co/tiiuae)                           | 7B/11B/40B/180B                  | falcon              |
| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google)          | 2B/7B/9B/27B                     | gemma               |
| [Gemma 3](https://huggingface.co/google)                          | 1B/4B/12B/27B                    | gemma3/gemma (1B)   |
| [GLM-4/GLM-4-0414/GLM-Z1](https://huggingface.co/THUDM)           | 9B/32B                           | glm4/glmz1          |
| [GPT-2](https://huggingface.co/openai-community)                  | 0.1B/0.4B/0.8B/1.5B              | -                   |
| [Granite 3.0-3.3](https://huggingface.co/ibm-granite)             | 1B/2B/3B/8B                      | granite3            |
| [Hunyuan](https://huggingface.co/tencent/)                        | 7B                               | hunyuan             |
| [Index](https://huggingface.co/IndexTeam)                         | 1.9B                             | index               |
| [InternLM 2-3](https://huggingface.co/internlm)                   | 7B/8B/20B                        | intern2             |
| [InternVL 2.5-3](https://huggingface.co/OpenGVLab)                | 1B/2B/8B/14B/38B/78B             | intern_vl           |
| [Kimi-VL](https://huggingface.co/moonshotai)                      | 16B                              | kimi_vl             |
| [Llama](https://github.com/facebookresearch/llama)                | 7B/13B/33B/65B                   | -                   |
| [Llama 2](https://huggingface.co/meta-llama)                      | 7B/13B/70B                       | llama2              |
| [Llama 3-3.3](https://huggingface.co/meta-llama)                  | 1B/3B/8B/70B                     | llama3              |
| [Llama 4](https://huggingface.co/meta-llama)                      | 109B/402B                        | llama4              |
| [Llama 3.2 Vision](https://huggingface.co/meta-llama)             | 11B/90B                          | mllama              |
| [LLaVA-1.5](https://huggingface.co/llava-hf)                      | 7B/13B                           | llava               |
| [LLaVA-NeXT](https://huggingface.co/llava-hf)                     | 7B/8B/13B/34B/72B/110B           | llava_next          |
| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf)               | 7B/34B                           | llava_next_video    |
| [MiMo](https://huggingface.co/XiaomiMiMo)                         | 7B                               | mimo                |
| [MiniCPM](https://huggingface.co/openbmb)                         | 0.5B/1B/2B/4B/8B                 | cpm/cpm3/cpm4       |
| [MiniCPM-o-2.6/MiniCPM-V-2.6](https://huggingface.co/openbmb)     | 8B                               | minicpm_o/minicpm_v |
| [Ministral/Mistral-Nemo](https://huggingface.co/mistralai)        | 8B/12B                           | ministral           |
| [Mistral/Mixtral](https://huggingface.co/mistralai)               | 7B/8x7B/8x22B                    | mistral             |
| [Mistral Small](https://huggingface.co/mistralai)                 | 24B                              | mistral_small       |
| [OLMo](https://huggingface.co/allenai)                            | 1B/7B                            | -                   |
| [PaliGemma/PaliGemma2](https://huggingface.co/google)             | 3B/10B/28B                       | paligemma           |
| [Phi-1.5/Phi-2](https://huggingface.co/microsoft)                 | 1.3B/2.7B                        | -                   |
| [Phi-3/Phi-3.5](https://huggingface.co/microsoft)                 | 4B/14B                           | phi                 |
| [Phi-3-small](https://huggingface.co/microsoft)                   | 7B                               | phi_small           |
| [Phi-4](https://huggingface.co/microsoft)                         | 14B                              | phi4                |
| [Pixtral](https://huggingface.co/mistralai)                       | 12B                              | pixtral             |
| [Qwen (1-2.5) (Code/Math/MoE/QwQ)](https://huggingface.co/Qwen)   | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen                |
| [Qwen3 (MoE)](https://huggingface.co/Qwen)                        | 0.6B/1.7B/4B/8B/14B/32B/235B     | qwen3               |
| [Qwen2-Audio](https://huggingface.co/Qwen)                        | 7B                               | qwen2_audio         |
| [Qwen2.5-Omni](https://huggingface.co/Qwen)                       | 3B/7B                            | qwen2_omni          |
| [Qwen2-VL/Qwen2.5-VL/QVQ](https://huggingface.co/Qwen)            | 2B/3B/7B/32B/72B                 | qwen2_vl            |
| [Seed Coder](https://huggingface.co/ByteDance-Seed)               | 8B                               | seed_coder          |
| [Skywork o1](https://huggingface.co/Skywork)                      | 8B                               | skywork_o1          |
| [StarCoder 2](https://huggingface.co/bigcode)                     | 3B/7B/15B                        | -                   |
| [TeleChat2](https://huggingface.co/Tele-AI)                       | 3B/7B/35B/115B                   | telechat2           |
| [XVERSE](https://huggingface.co/xverse)                           | 7B/13B/65B                       | xverse              |
| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai)                  | 1.5B/6B/9B/34B                   | yi                  |
| [Yi-VL](https://huggingface.co/01-ai)                             | 6B/34B                           | yi_vl               |
> For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna`, etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
>
> Remember to use the **SAME** template in training and inference.
>
> \*: You should install `transformers` from the main branch and use `DISABLE_VERSION_CHECK=1` to skip the version check.
>
> \*\*: You need to install a specific version of `transformers` to use the corresponding model.

Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of supported models.
## Supported Training Approaches

| Approach               | Full-tuning        | Freeze-tuning      | LoRA               | QLoRA              |
| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| Pre-Training           | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Reward Modeling        | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
## Provided Datasets
- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)
- [Open-Thoughts (en)](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- [Open-R1-Math (en)](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
- [Chinese-DeepSeek-R1-Distill (zh)](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k-SFT)
- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
- [COIG-P (zh)](https://huggingface.co/datasets/m-a-p/COIG-P)
- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
- [RLAIF-V (en)](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)
- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
## Requirement
| Mandatory    | Minimum | Recommend |
| ------------ | ------- | --------- |
| python       | 3.9     | 3.10      |
| torch        | 2.0.0   | 2.6.0     |
| torchvision  | 0.15.0  | 0.21.0    |
| transformers | 4.45.0  | 4.50.0    |
| datasets     | 2.16.0  | 3.2.0     |
| accelerate   | 0.34.0  | 1.2.1     |
| peft         | 0.14.0  | 0.15.1    |
| trl          | 0.8.6   | 0.9.6     |

| Optional     | Minimum | Recommend |
| ------------ | ------- | --------- |
| CUDA         | 11.6    | 12.2      |
| deepspeed    | 0.10.0  | 0.16.4    |
| bitsandbytes | 0.39.0  | 0.43.1    |
| vllm         | 0.4.3   | 0.8.2     |
| flash-attn   | 2.5.6   | 2.7.2     |
### Hardware Requirement

\* *estimated*

| Method                          | Bits | 7B    | 14B   | 30B   | 70B    | `x`B    |
| ------------------------------- | ---- | ----- | ----- | ----- | ------ | ------- |
| Full (`bf16` or `fp16`)         | 32   | 120GB | 240GB | 600GB | 1200GB | `18x`GB |
| Full (`pure_bf16`)              | 16   | 60GB  | 120GB | 300GB | 600GB  | `8x`GB  |
| Freeze/LoRA/GaLore/APOLLO/BAdam | 16   | 16GB  | 32GB  | 64GB  | 160GB  | `2x`GB  |
| QLoRA                           | 8    | 10GB  | 20GB  | 40GB  | 80GB   | `x`GB   |
| QLoRA                           | 4    | 6GB   | 12GB  | 24GB  | 48GB   | `x/2`GB |
| QLoRA                           | 2    | 4GB   | 8GB   | 16GB  | 24GB   | `x/4`GB |
## How to Use
### Install LLaMA Factory
> [!IMPORTANT]
> Installation is mandatory.

#### Install from Source
```bash
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
```

Optional extra dependencies: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, aqlm, vllm, sglang, galore, apollo, badam, adam-mini, qwen, minicpm_v, modelscope, openmind, swanlab, dev
#### Install from Docker Image
```bash
docker run -it --rm --gpus=all --ipc=host hiyouga/llamafactory:latest
```
This image is built on Ubuntu 22.04 (x86\_64), CUDA 12.4, Python 3.11, PyTorch 2.6.0, and Flash-attn 2.7.4.

Find all pre-built images at: https://hub.docker.com/r/hiyouga/llamafactory/tags

Please refer to [Build Docker](#构建-docker) to rebuild the image yourself.
<details><summary>Setting up a virtual environment with <b>uv</b></summary>

Create an isolated Python environment with [uv](https://github.com/astral-sh/uv):
```bash
uv sync --extra torch --extra metrics --prerelease=allow
```
Run LLaMA-Factory in the isolated environment:
```bash
uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```
</details>
<details><summary>For Windows users</summary>

#### Install PyTorch

On the Windows platform, you need to manually install the GPU version of PyTorch. Please refer to the [official website](https://pytorch.org/get-started/locally/) and the following commands to install and verify PyTorch:
```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.cuda.is_available())"
```
If you see `True`, the installation was successful.

If you encounter an error like `Can't pickle local object`, set `dataloader_num_workers: 0`.

#### Install BitsAndBytes

If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.

```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
```
#### Install Flash Attention-2

To enable FlashAttention-2 on the Windows platform, please use the script from [flash-attention-windows-wheel](https://huggingface.co/lldacing/flash-attention-windows-wheel) to compile and install it yourself.

</details>

<details><summary>For Ascend NPU users</summary>

To install LLaMA Factory on Ascend NPU devices, please upgrade Python to version 3.10 or higher and specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html) or use the following commands:
```bash
# replace the URL according to your CANN version and devices
# install CANN Toolkit
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run
bash Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run --install

# install CANN Kernels
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run
bash Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run --install

# set env variables
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
| Requirement | Minimum | Recommend      |
| ----------- | ------- | -------------- |
| CANN        | 8.0.RC1 | 8.0.0.alpha002 |
| torch       | 2.1.0   | 2.4.0          |
| torch-npu   | 2.1.0   | 2.4.0.post2    |
| deepspeed   | 0.13.2  | 0.13.2         |
| vllm-ascend | -       | 0.7.3          |
Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
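A minimal launch sketch (assuming two visible NPUs; the config path is illustrative):

```bash
# pin the run to NPU 0 and NPU 1, then start LoRA fine-tuning
ASCEND_RT_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```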
下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html) 下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
#### Install BitsAndBytes

To run bitsandbytes-based QLoRA fine-tuning on Ascend NPU, follow these steps:

1. Manually compile bitsandbytes: refer to the [installation documentation](https://huggingface.co/docs/bitsandbytes/installation?backend=Ascend+NPU&platform=Ascend+NPU) to install the NPU build of bitsandbytes. Compilation requires cmake 3.22.1 or later and g++ 12.x or later.
```bash
# install bitsandbytes from source
# clone the bitsandbytes repository; Ascend NPU support currently lives on the multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git
cd bitsandbytes/

# install dependencies
pip install -r requirements-dev.txt

# install build tools; the exact command differs across systems, this one is for reference
apt-get install -y build-essential cmake

# build & install
cmake -DCOMPUTE_BACKEND=npu -S .
make
pip install .
```
2. Install the main branch of transformers.
```bash
git clone -b main https://github.com/huggingface/transformers.git
cd transformers
pip install .
```
3. Set `double_quantization: false` in the training arguments; see the [example](examples/train_qlora/llama3_lora_sft_bnb_npu.yaml).
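A minimal excerpt of such a config, assuming a bitsandbytes 4-bit QLoRA setup (the other values are illustrative):

```yaml
quantization_bit: 4          # bitsandbytes 4-bit quantization
double_quantization: false   # required on Ascend NPU
```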
</details>

### Data Preparation
> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset (see the sketch below).
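A minimal registration sketch (`my_dataset` and the file name are placeholders):

```json
"my_dataset": {
  "file_name": "my_dataset.json",
  "formatting": "alpaca"
}
```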
You can also use **[Easy Dataset](https://github.com/ConardLi/easy-dataset)** or **[GraphGen](https://github.com/open-sciencelab/GraphGen)** to build synthetic data for fine-tuning.
### Quickstart

The following three commands run LoRA **fine-tuning**, **inference**, and **merging** for the Llama3-8B-Instruct model, respectively.
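For reference, the three commands follow this pattern (a sketch based on the bundled example configs):

```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```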
> [!TIP]
> Use `llamafactory-cli help` to show help information.
>
> See the [FAQs](https://github.com/hiyouga/LLaMA-Factory/issues/4614) first if you encounter any problems.
### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))

### Build Docker

For CUDA users:
```bash
docker build -f ./docker/docker-cuda/Dockerfile \
    --build-arg PIP_INDEX=https://pypi.org/simple \
    --build-arg EXTRAS=metrics \
    -t llamafactory:latest .

docker run -dit --ipc=host --gpus=all \
    -p 7860:7860 \
    -p 8000:8000 \
    --name llamafactory \
    llamafactory:latest

docker exec -it llamafactory bash
```
For Ascend NPU users:
```bash
docker build -f ./docker/docker-npu/Dockerfile \
    --build-arg PIP_INDEX=https://pypi.org/simple \
    --build-arg EXTRAS=torch-npu,metrics \
    -t llamafactory:latest .

docker run -dit --ipc=host \
    -v /usr/local/dcmi:/usr/local/dcmi \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
    ...
    --device /dev/davinci_manager \
    --device /dev/devmm_svm \
    --device /dev/hisi_hdc \
    --name llamafactory \
    llamafactory:latest

docker exec -it llamafactory bash
```
For AMD ROCm users:
```bash
docker build -f ./docker/docker-rocm/Dockerfile \
    --build-arg PIP_INDEX=https://pypi.org/simple \
    --build-arg EXTRAS=metrics \
    -t llamafactory:latest .

docker run -dit --ipc=host \
    -p 7860:7860 \
    -p 8000:8000 \
    --device /dev/kfd \
    --device /dev/dri \
    --name llamafactory \
    llamafactory:latest

docker exec -it llamafactory bash
```
</details>

<details><summary>Use Docker volumes</summary>

You can use Docker volumes by uncommenting `VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]` in the Dockerfile.

When starting the container, mount volumes with arguments such as `-v ./hf_cache:/root/.cache/huggingface`. The individual volumes are used as follows:

- `hf_cache`: Utilize the Hugging Face cache on the host machine.
- `shared_data`: The directory on the host machine where datasets are stored.
- `output`: Set the export directory to this location so that the merged result can be accessed directly on the host machine.
</details>
### Deploy with OpenAI-style API and vLLM

```bash
API_PORT=8000 llamafactory-cli api examples/inference/llama3.yaml infer_backend=vllm vllm_enforce_eager=true
```
> [!TIP]
> Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for the API documentation.
>
> Examples: [Image understanding](scripts/api_example/test_image.py) | [Function calling](scripts/api_example/test_toolcall.py)
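Once the server is up, any OpenAI-compatible client can call it, e.g. with curl (a sketch; the model name is illustrative):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```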
### Download from ModelScope Hub

### Use W&B Logger

Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account.
### Use SwanLab Logger

To use [SwanLab](https://github.com/SwanHubX/SwanLab) for logging experimental results, add the following arguments to your yaml file.
```yaml
use_swanlab: true
swanlab_run_name: test_run # optional
```
When launching the training task, you can log in to your SwanLab account in one of three ways:

1. Add `swanlab_api_key=<your_api_key>` to the yaml file, set to your [API key](https://swanlab.cn/settings).
2. Set the `SWANLAB_API_KEY` environment variable to your [API key](https://swanlab.cn/settings).
3. Run `swanlab login` before launching to complete the login.
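For example, using the second way (a sketch; replace the placeholder with your actual key):

```bash
SWANLAB_API_KEY=<your_api_key> llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```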
## Projects Using LLaMA Factory

If you have a project that should be included, please contact us via email or create a pull request.
1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: A development kit for fine-tuning LLMs with NVIDIA RTX devices on Windows PCs.
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: A low-code development tool for building multi-agent LLM applications, supporting model fine-tuning based on LLaMA Factory.
1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full-pipeline codebase for fine-tuning, inference, and distillation of RAG retrieval models. [[blog]](https://zhuanlan.zhihu.com/p/987727357)
1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**: A modified fork that supports long-sequence SFT and DPO training via Ring Attention.
1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**: A low-cost o1-like long-reasoning model fine-tuned by NovaSky AI.
1. **[WeClone](https://github.com/xming521/WeClone)**: A one-stop solution for creating your digital avatar from chat history.
</details>

## License

The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license.

Please follow the corresponding model licenses when using the model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [GPT-2](https://github.com/openai/gpt-2/blob/master/LICENSE) / [Granite](LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Llama 4](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3/Phi-4](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [TeleChat2](https://huggingface.co/Tele-AI/telechat-7B/blob/main/TeleChat%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)

## Citation

The [dataset_info.json](dataset_info.json) contains all available datasets. If you are using a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and specify `dataset: dataset_name` before training to use it.

The `dataset_info.json` file should be put in the `dataset_dir` directory. You can change `dataset_dir` to use another directory. The default value is `./data`.

Currently we support datasets in **alpaca** and **sharegpt** format. Allowed file types include json, jsonl, csv, parquet, arrow.
```json
"dataset_name": {
  "hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url, file_name and cloud_file_name)",
  "ms_hub_url": "the name of the dataset repository on the Model Scope hub. (if specified, ignore script_url, file_name and cloud_file_name)",
  "script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name and cloud_file_name)",
  "cloud_file_name": "the name of the dataset file in s3/gcs cloud storage. (if specified, ignore file_name)",
  "file_name": "the name of the dataset folder or dataset file in this directory. (required if above are not specified)",
  "formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})",
  "ranking": "whether the dataset is a preference dataset or not. (default: False)",
  ...
  "tools": "the column name in the dataset containing the tool description. (default: None)",
  "images": "the column name in the dataset containing the image inputs. (default: None)",
  "videos": "the column name in the dataset containing the videos inputs. (default: None)",
  "audios": "the column name in the dataset containing the audios inputs. (default: None)",
  "chosen": "the column name in the dataset containing the chosen answers. (default: None)",
  "rejected": "the column name in the dataset containing the rejected answers. (default: None)",
  "kto_tag": "the column name in the dataset containing the kto tags. (default: None)"
  ...
}
```
* [Example dataset](alpaca_en_demo.json)
In supervised fine-tuning, the `instruction` column will be concatenated with the `input` column and used as the user prompt, then the user prompt would be `instruction\ninput`. The `output` column represents the model response.
For reasoning models, if the dataset contains chain-of-thought (CoT), the CoT needs to be placed in the model responses, such as `<think>cot</think>output`.
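For instance, a single alpaca-format record with CoT could look like this (a sketch; the texts are placeholders):

```json
{
  "instruction": "Is 17 a prime number?",
  "input": "",
  "output": "<think>17 is not divisible by 2, 3, or any integer up to its square root.</think>Yes, 17 is prime."
}
```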
The `system` column will be used as the system prompt if specified.
The `history` column is a list consisting of string tuples representing prompt-response pairs in the history.
```json
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "output": "model response (required)",
    "system": "system prompt (optional)",
    "history": [
      ["user instruction in the first round (optional)", "model response in the first round (optional)"],
      ["user instruction in the second round (optional)", "model response in the second round (optional)"]
    ]
  }
]
```
> [!TIP]
> If the model has reasoning capabilities (e.g. Qwen3) but the dataset does not contain chain-of-thought (CoT), LLaMA-Factory will automatically add an empty CoT to the data. When `enable_thinking` is `True` (slow thinking, the default), the empty CoT is added to the model response and included in the loss computation; otherwise (fast thinking), it is added to the user prompt and excluded from the loss. Please keep the `enable_thinking` parameter consistent during training and inference.
>
> If you want to train CoT data with slow thinking and non-CoT data with fast thinking, you can set `enable_thinking` to `None`. However, this feature is relatively complicated and should be used with caution.
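A minimal sketch of the corresponding training argument (the value shown is illustrative):

```yaml
enable_thinking: true  # slow thinking; keep the same value at inference time
```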
### Pre-training Dataset

- [Example dataset](c4_demo.jsonl)

In pre-training, only the `text` column will be used for model learning.
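For instance, each line of a pre-training jsonl file is a standalone JSON object with a `text` field (a sketch; the contents are placeholders):

```json
{"text": "First document of the pre-training corpus."}
{"text": "Second document; every line is one sample."}
```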
### Preference Dataset

A preference dataset requires a better response in the `chosen` column and a worse response in the `rejected` column.
```json
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "chosen": "chosen answer (required)",
    "rejected": "rejected answer (required)"
  }
]
```
### Multimodal Image Dataset

An additional column `images` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.

### Multimodal Video Dataset

An additional column `videos` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.
### Multimodal Audio Dataset
An additional column `audios` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.
## Sharegpt Format

### Supervised Fine-Tuning Dataset

Note that the human and observation should appear in odd positions, while gpt and function should appear in even positions.
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "human instruction" "value": "user instruction"
}, },
{ {
"from": "function_call", "from": "function_call",
Preference datasets in sharegpt format also require a better message in `chosen` column and a worse message in `rejected` column.
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "human instruction" "value": "user instruction"
}, },
{ {
"from": "gpt", "from": "gpt",
@@ -225,7 +240,7 @@ Preference datasets in sharegpt format also require a better message in `chosen`
}, },
{ {
"from": "human", "from": "human",
"value": "human instruction" "value": "user instruction"
} }
], ],
"chosen": { "chosen": {
KTO datasets require an extra `kto_tag` column containing the boolean human feedback.
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "human instruction" "value": "user instruction"
}, },
{ {
"from": "gpt", "from": "gpt",
### Multimodal Image Dataset

- [Example dataset](mllm_demo.json)

Multimodal image datasets require an `images` column containing the paths to the input images.

The number of images should be identical to the `<image>` tokens in the conversations.
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "<image>human instruction" "value": "<image>user instruction"
}, },
{ {
"from": "gpt", "from": "gpt",
### Multimodal Video Dataset

The number of videos should be identical to the `<video>` tokens in the conversations.
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "<video>human instruction" "value": "<video>user instruction"
}, },
{ {
"from": "gpt", "from": "gpt",
### Multimodal Audio Dataset
- [Example dataset](mllm_audio_demo.json)
Multimodal audio datasets require an `audios` column containing the paths to the input audios.
The number of audios should be identical to the `<audio>` tokens in the conversations.
```json
[
{
"conversations": [
{
"from": "human",
"value": "<audio>user instruction"
},
{
"from": "gpt",
"value": "model response"
}
],
"audios": [
"audio path (required)"
]
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"formatting": "sharegpt",
"columns": {
"messages": "conversations",
"audios": "audios"
}
}
```
### OpenAI Format

The openai format is simply a special case of the sharegpt format, where the first message may be a system prompt.
```json
"messages": [
  {
    ...
  },
  {
    "role": "user",
    "content": "user instruction"
  },
  {
    "role": "assistant",
    ...
  }
]
```

[dataset_info.json](dataset_info.json) contains all available datasets. If you want to use a custom dataset, be **sure** to add a *dataset description* in `dataset_info.json` and use it by setting the `dataset: dataset_name` configuration.

The `dataset_info.json` file should be placed in the `dataset_dir` directory. You can change `dataset_dir` to use another directory; the default value is `./data`.

Currently we support datasets in **alpaca** and **sharegpt** format. Allowed file types include json, jsonl, csv, parquet, and arrow.
```json
"dataset_name": {
  ...
  "tools": "the column name in the dataset containing the tool descriptions. (default: None)",
  "images": "the column name in the dataset containing the image inputs. (default: None)",
  "videos": "the column name in the dataset containing the video inputs. (default: None)",
  "audios": "the column name in the dataset containing the audio inputs. (default: None)",
  "chosen": "the column name in the dataset containing the chosen answers. (default: None)",
  "rejected": "the column name in the dataset containing the rejected answers. (default: None)",
  "kto_tag": "the column name in the dataset containing the kto tags. (default: None)"
  ...
}
```
- [Example dataset](alpaca_zh_demo.json)

In supervised fine-tuning, the content of the `instruction` column is concatenated with the `input` column as the user prompt; that is, the prompt is `instruction\ninput`, and the `output` column holds the model response.

For fine-tuning reasoning models, if the dataset contains chain-of-thought (CoT), place the CoT inside the model response, e.g. `<think>cot</think>output`.

If specified, the content of the `system` column will be used as the system prompt.
```json
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "output": "model response (required)",
    "system": "system prompt (optional)",
    "history": [
      ["user instruction in the first round (optional)", "model response in the first round (optional)"],
      ["user instruction in the second round (optional)", "model response in the second round (optional)"]
    ]
  }
]
```
> [!TIP]
> If the model itself has reasoning capabilities (e.g. Qwen3) but the dataset does not contain chain-of-thought (CoT), LLaMA-Factory automatically adds an empty CoT to the data. When `enable_thinking` is `True` (slow thinking, the default), the empty CoT is added to the model response and included in the loss computation; otherwise (fast thinking), it is added to the user prompt and excluded from the loss. Keep the `enable_thinking` parameter consistent between training and inference.
>
> If you want to train CoT data in slow-thinking mode and non-CoT data in fast-thinking mode, you can set `enable_thinking` to `None`. However, this feature is relatively complicated and should be used with caution.
### Pre-training Dataset

- [Example dataset](c4_demo.jsonl)

In pre-training, only the content of the `text` column is used for model learning.
### Preference Dataset
```json
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "chosen": "chosen answer (required)",
    "rejected": "rejected answer (required)"
  }
]
```
KTO datasets require an extra `kto_tag` column. See the [sharegpt](#sharegpt-format) section for details.

Multimodal video datasets require an extra `videos` column. See the [sharegpt](#sharegpt-format) section for details.
### Multimodal Audio Dataset

Multimodal audio datasets require an extra `audios` column. See the [sharegpt](#sharegpt-format) section for details.
## Sharegpt Format

### Supervised Fine-Tuning Dataset
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "人类指令" "value": "用户指令"
}, },
{ {
"from": "function_call", "from": "function_call",
Preference datasets in sharegpt format also require a better message in the `chosen` column and a worse message in the `rejected` column.
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "人类指令" "value": "用户指令"
}, },
{ {
"from": "gpt", "from": "gpt",
@@ -225,7 +239,7 @@ Sharegpt 格式的偏好数据集同样需要在 `chosen` 列中提供更优的
}, },
{ {
"from": "human", "from": "human",
"value": "人类指令" "value": "用户指令"
} }
], ],
"chosen": { "chosen": {
KTO datasets require an extra `kto_tag` column containing boolean human feedback.
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "人类指令" "value": "用户指令"
}, },
{ {
"from": "gpt", "from": "gpt",
### Multimodal Image Dataset
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "<image>人类指令" "value": "<image><image>用户指令"
}, },
{ {
"from": "gpt", "from": "gpt",
@@ -314,6 +328,7 @@ KTO 数据集需要额外添加一个 `kto_tag` 列,包含 bool 类型的人
} }
], ],
"images": [ "images": [
"图像路径(必填)",
"图像路径(必填)" "图像路径(必填)"
] ]
} }
### Multimodal Video Dataset
"conversations": [ "conversations": [
{ {
"from": "human", "from": "human",
"value": "<video>人类指令" "value": "<video><video>用户指令"
}, },
{ {
"from": "gpt", "from": "gpt",
@@ -355,6 +370,7 @@ KTO 数据集需要额外添加一个 `kto_tag` 列,包含 bool 类型的人
} }
], ],
"videos": [ "videos": [
"视频路径(必填)",
"视频路径(必填)" "视频路径(必填)"
] ]
} }
### Multimodal Audio Dataset

- [Example dataset](mllm_audio_demo.json)

Multimodal audio datasets require an extra `audios` column containing the paths to the input audios.

Note that the number of audios must strictly match the number of `<audio>` tokens in the text.
```json
[
  {
    "conversations": [
      {
        "from": "human",
        "value": "<audio><audio>user instruction"
      },
      {
        "from": "gpt",
        "value": "model response"
      }
    ],
    "audios": [
      "audio path (required)",
      "audio path (required)"
    ]
  }
]
```
For data in the above format, the *dataset description* in `dataset_info.json` should be:

```json
"dataset_name": {
  "file_name": "data.json",
  "formatting": "sharegpt",
  "columns": {
    "messages": "conversations",
    "audios": "audios"
  }
}
```
### OpenAI Format

The openai format is simply a special case of the sharegpt format, where the first message may be a system prompt.
```json
"messages": [
  {
    ...
  },
  {
    "role": "user",
    "content": "user instruction"
  },
  {
    "role": "assistant",
    ...
  }
]
```

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
_DESCRIPTION = "BELLE multiturn chat dataset."

_CITATION = """\
@article{belle2023exploring,
  title={Exploring the Impact of Instruction Data Scaling on Large Language Models},
  author={Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, Xiangang Li},
  journal={arXiv preprint arXiv:2303.14742},
  year={2023}
}
"""

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os

import datasets
class HhRlhfEn(datasets.GeneratorBasedBuilder):
    ...
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepaths": file_path["test"]}),
        ]

    def _generate_examples(self, filepaths: list[str]):
        key = 0
        for filepath in filepaths:
            with open(filepath, encoding="utf-8") as f:

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os

import datasets
_CITATION = """\
@misc{UltraChat,
  author = {Ding, Ning and Chen, Yulin and Xu, Bokai and Hu, Shengding and others},
  title = {UltraChat: A Large-scale Auto-generated Multi-round Dialogue Data},
  year = {2023},
  publisher = {GitHub},
class UltraChat(datasets.GeneratorBasedBuilder):
    ...
        file_paths = [dl_manager.download(_BASE_DATA_URL.format(idx=idx)) for idx in range(10)]  # multiple shards
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": file_paths})]

    def _generate_examples(self, filepaths: list[str]):
        for filepath in filepaths:
            with open(filepath, encoding="utf-8") as f:
                for row in f:
                    try:
                        data = json.loads(row)
                    except Exception:
                        continue
                    key: int = data["id"]
                    content: list[str] = data["data"]
                    if len(content) % 2 == 1:
                        content.pop(-1)
                    if len(content) < 2:

# https://hub.docker.com/r/hiyouga/pytorch/tags
ARG BASE_IMAGE=hiyouga/pytorch:th2.6.0-cu124-flashattn2.7.4-cxx11abi0-devel
FROM ${BASE_IMAGE}

# Installation arguments
ARG PIP_INDEX=https://pypi.org/simple
ARG EXTRAS=metrics
ARG INSTALL_FLASHATTN=false
ARG HTTP_PROXY=""

# Define environments
ENV MAX_JOBS=16
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
ENV http_proxy="${HTTP_PROXY}"
ENV https_proxy="${HTTP_PROXY}"

# Use Bash instead of default /bin/sh
SHELL ["/bin/bash", "-c"]

# Set the working directory
WORKDIR /app

# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
    pip config set global.extra-index-url "${PIP_INDEX}" && \
    pip install --no-cache-dir --upgrade pip packaging wheel setuptools

# Install the requirements
COPY requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application into the image
COPY . /app

# Install LLaMA Factory
RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation

# Rebuild flash attention
RUN if [ "${INSTALL_FLASHATTN}" == "true" ]; then \
        pip uninstall -y ninja && \
        pip install --no-cache-dir ninja && \
        pip install --no-cache-dir flash-attn --no-build-isolation; \
    fi

# Set up volumes
# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]

# Expose port 7860 for LLaMA Board
ENV GRADIO_SERVER_PORT=7860
EXPOSE 7860

# Expose port 8000 for API service
ENV API_PORT=8000
EXPOSE 8000

# unset proxy
ENV http_proxy=
ENV https_proxy=

# Reset pip config
RUN pip config unset global.index-url && \
    pip config unset global.extra-index-url

# Start from the pytorch official image (ubuntu-22.04 + cuda-12.4.1 + python-3.11)
# https://hub.docker.com/r/pytorch/pytorch/tags
FROM pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel
# Define environments
ENV MAX_JOBS=16
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
# Define installation arguments
ARG APT_SOURCE=https://mirrors.tuna.tsinghua.edu.cn/ubuntu/
ARG PIP_INDEX=https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
# Set apt source
RUN cp /etc/apt/sources.list /etc/apt/sources.list.bak && \
{ \
echo "deb ${APT_SOURCE} jammy main restricted universe multiverse"; \
echo "deb ${APT_SOURCE} jammy-updates main restricted universe multiverse"; \
echo "deb ${APT_SOURCE} jammy-backports main restricted universe multiverse"; \
echo "deb ${APT_SOURCE} jammy-security main restricted universe multiverse"; \
} > /etc/apt/sources.list
# Install systemctl and wget
RUN apt-get update && \
apt-get install -y -o Dpkg::Options::="--force-confdef" systemd wget && \
apt-get clean
# Install git and vim
RUN apt-get update && \
apt-get install -y git vim && \
apt-get clean
# Install gcc and g++
RUN apt-get update && \
apt-get install -y gcc g++ && \
apt-get clean
# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
pip config set global.extra-index-url "${PIP_INDEX}" && \
pip install --no-cache-dir --upgrade pip packaging wheel setuptools
# Install flash-attn-2.7.4.post1 (cxx11abi=False)
RUN wget -nv https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp311-cp311-linux_x86_64.whl && \
pip install --no-cache-dir flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
# Install flashinfer-0.2.2.post1+cu124 (cxx11abi=False)
RUN wget -nv https://github.com/flashinfer-ai/flashinfer/releases/download/v0.2.2.post1/flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl && \
pip install --no-cache-dir flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl
# Reset pip config
RUN pip config unset global.index-url && \
pip config unset global.extra-index-url

services:
  llamafactory:
    build:
      dockerfile: ./docker/docker-cuda/Dockerfile
      context: ../..
      args:
        PIP_INDEX: https://pypi.org/simple
        EXTRAS: metrics
    container_name: llamafactory
    ports:
      - "7860:7860"
      - "8000:8000"
    ipc: host
    tty: true
    # shm_size: "16gb"  # ipc: host is set
    stdin_open: true
    command: bash
    deploy:

# https://hub.docker.com/r/ascendai/cann/tags
ARG BASE_IMAGE=ascendai/cann:8.0.0-910b-ubuntu22.04-py3.11
FROM ${BASE_IMAGE}

# Installation arguments
ARG PIP_INDEX=https://pypi.org/simple
ARG EXTRAS=torch-npu,metrics
ARG HTTP_PROXY=""

# Define environments
ENV MAX_JOBS=16
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
ENV http_proxy="${HTTP_PROXY}"
ENV https_proxy="${HTTP_PROXY}"

# Use Bash instead of default /bin/sh
SHELL ["/bin/bash", "-c"]

# Set the working directory
WORKDIR /app

# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
    pip config set global.extra-index-url "${PIP_INDEX}" && \
    pip install --no-cache-dir --upgrade pip packaging wheel setuptools

# Install the requirements
COPY requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application into the image
COPY . /app

# Install LLaMA Factory
RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation

# Set up volumes
# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]

# Expose port 7860 for LLaMA Board
ENV GRADIO_SERVER_PORT=7860
EXPOSE 7860

# Expose port 8000 for API service
ENV API_PORT=8000
EXPOSE 8000

# unset proxy
ENV http_proxy=
ENV https_proxy=

# Reset pip config
RUN pip config unset global.index-url && \
    pip config unset global.extra-index-url

services:
  llamafactory:
    build:
      dockerfile: ./docker/docker-npu/Dockerfile
      context: ../..
      args:
        PIP_INDEX: https://pypi.org/simple
        EXTRAS: torch-npu,metrics
    container_name: llamafactory
    volumes:
      - /usr/local/dcmi:/usr/local/dcmi
      - /usr/local/bin/npu-smi:/usr/local/bin/npu-smi
      - /usr/local/Ascend/driver:/usr/local/Ascend/driver
      ...
    ports:
      - "7860:7860"
      - "8000:8000"
    ipc: host
    tty: true
    # shm_size: "16gb"  # ipc: host is set
    stdin_open: true
    command: bash
    devices:

# https://hub.docker.com/r/rocm/pytorch/tags
ARG BASE_IMAGE=rocm/pytorch:rocm6.4.1_ubuntu22.04_py3.10_pytorch_release_2.6.0
FROM ${BASE_IMAGE}

# Installation arguments
ARG PIP_INDEX=https://pypi.org/simple
ARG EXTRAS=metrics
ARG INSTALL_FLASHATTN=false
ARG HTTP_PROXY=""
ARG PYTORCH_INDEX=https://download.pytorch.org/whl/rocm6.3

# Define environments
ENV MAX_JOBS=16
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
ENV http_proxy="${HTTP_PROXY}"
ENV https_proxy="${HTTP_PROXY}"

# Use Bash instead of default /bin/sh
SHELL ["/bin/bash", "-c"]

# Set the working directory
WORKDIR /app

# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
    pip config set global.extra-index-url "${PIP_INDEX}" && \
    pip install --no-cache-dir --upgrade pip packaging wheel setuptools

# Reinstall pytorch rocm
RUN pip uninstall -y torch torchvision torchaudio && \
    pip install --no-cache-dir --pre torch torchvision torchaudio --index-url "${PYTORCH_INDEX}"

# Install the requirements
COPY requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application into the image
COPY . /app

# Install LLaMA Factory
RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation

# Rebuild flash attention
RUN if [ "${INSTALL_FLASHATTN}" == "true" ]; then \
        pip uninstall -y ninja && \
        pip install --no-cache-dir ninja && \
        pip install --no-cache-dir flash-attn --no-build-isolation; \
    fi

# Set up volumes
# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]

# Expose port 7860 for LLaMA Board
ENV GRADIO_SERVER_PORT=7860
EXPOSE 7860

# Expose port 8000 for API service
ENV API_PORT=8000
EXPOSE 8000

# unset proxy
ENV http_proxy=
ENV https_proxy=

# Reset pip config
RUN pip config unset global.index-url && \
    pip config unset global.extra-index-url

services:
  llamafactory:
    build:
      dockerfile: ./docker/docker-rocm/Dockerfile
      context: ../..
      args:
        PIP_INDEX: https://pypi.org/simple
        EXTRAS: metrics
    container_name: llamafactory
    ports:
      - "7860:7860"
      - "8000:8000"
    ipc: host
    tty: true
    # shm_size: "16gb"  # ipc: host is set
    stdin_open: true
    command: bash
    devices:

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");

import pandas as pd
_CITATION = """\
@article{huang2023ceval,
  title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
  author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and others},
  journal={arXiv preprint arXiv:2305.08322},
  year={2023}
}
"""

_DESCRIPTION = """\
C-Eval is a comprehensive Chinese evaluation suite for foundation models.
It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels.
"""

_HOMEPAGE = "https://cevalbenchmark.com"

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");

import pandas as pd
_CITATION = """\
@article{li2023cmmlu,
  title={CMMLU: Measuring massive multitask language understanding in Chinese},
  author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and others},
  journal={arXiv preprint arXiv:2306.09212},
  year={2023}
}
"""

_DESCRIPTION = """\
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge
and reasoning abilities of LLMs within the Chinese language and cultural context.
"""

_HOMEPAGE = "https://github.com/haonan-li/CMMLU"

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");

import pandas as pd
_CITATION = """\
@article{hendryckstest2021,
  title={Measuring Massive Multitask Language Understanding},
  author={Dan Hendrycks and Collin Burns and others},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}
"""

_DESCRIPTION = """\
Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart,
Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).
"""

_HOMEPAGE = "https://github.com/hendrycks/test"

Make sure to execute these commands in the `LLaMA-Factory` directory.
Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices.
By default, LLaMA-Factory uses all visible computing devices.
Basic usage:
```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
Advanced usage:
```bash
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
learning_rate=1e-5 \
logging_steps=1
```
```bash
bash examples/train_lora/llama3_lora_sft.sh
```
## Examples

### LoRA Fine-Tuning
#### Multimodal Supervised Fine-Tuning

```bash
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_sft.yaml
```
#### DPO/ORPO/SimPO Training

```bash
llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
```
#### Multimodal DPO/ORPO/SimPO Training

```bash
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_dpo.yaml
```
#### Reward Modeling

#### Evaluating on MMLU/CMMLU/C-Eval Benchmarks

```bash
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes

```bash
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```

#### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding)

```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```
#### Supervised Fine-Tuning with Ray on 4 GPUs
```bash
USE_RAY=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ray.yaml
```
### QLoRA Fine-Tuning ### QLoRA Fine-Tuning
#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended) #### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended)
@@ -107,6 +126,12 @@ FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
``` ```
#### Supervised Fine-Tuning with 4-bit Bitsandbytes Quantization on Ascend NPU
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_bnb_npu.yaml
```
#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization #### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
```bash ```bash
@@ -130,26 +155,28 @@ llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
#### Supervised Fine-Tuning on Single Node #### Supervised Fine-Tuning on Single Node
```bash ```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
``` ```
#### Supervised Fine-Tuning on Multiple Nodes #### Supervised Fine-Tuning on Multiple Nodes
```bash ```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### Elastic and Fault-Tolerant Supervised Fine-Tuning on Multiple Nodes
To launch an elastic job that tolerates up to `MAX_RESTARTS` restarts on failure, run the following on at least `MIN_NNODES` and at most `MAX_NNODES` nodes. `RDZV_ID` should be set to a unique job ID shared by all nodes participating in the job. See also [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html).
```bash
FORCE_TORCHRUN=1 MIN_NNODES=1 MAX_NNODES=3 MAX_RESTARTS=3 RDZV_ID=llamafactory MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
``` ```
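For reference, a rough sketch of the raw `torchrun` invocation that these environment variables appear to map to, following the linked docs — the `src/train.py` entry point is an assumption here, not something this section states:
```bash
# Hypothetical torchrun equivalent of the launch above:
# MIN_NNODES:MAX_NNODES -> --nnodes, MAX_RESTARTS -> --max-restarts,
# RDZV_ID -> --rdzv-id, MASTER_ADDR:MASTER_PORT -> --rdzv-endpoint.
torchrun --nnodes=1:3 --max-restarts=3 --rdzv-id=llamafactory \
  --rdzv-backend=c10d --rdzv-endpoint=192.168.0.1:29500 \
  src/train.py examples/train_full/llama3_full_sft.yaml
```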
#### Multimodal Supervised Fine-Tuning #### Multimodal Supervised Fine-Tuning
```bash ```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2_5vl_full_sft.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_full/llama3_full_predict.yaml
``` ```
### Merging LoRA Adapters and Quantization ### Merging LoRA Adapters and Quantization
@@ -168,15 +195,28 @@ llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
``` ```
### Save Ollama modelfile
```bash
llamafactory-cli export examples/merge_lora/llama3_full_sft.yaml
```
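Assuming the export writes an Ollama `Modelfile` into the configured `export_dir` (`output/llama3_full_sft` in the config shown later in this listing), the result could be registered with a local Ollama install; the model name below is hypothetical:
```bash
# Hypothetical: register the exported modelfile with Ollama.
ollama create llama3-full-sft -f output/llama3_full_sft/Modelfile
```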
### Inferring LoRA Fine-Tuned Models ### Inferring LoRA Fine-Tuned Models
#### Use CLI #### Evaluation using vLLM's Multi-GPU Inference
```bash
python scripts/vllm_infer.py --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct --template llama3 --dataset alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```
#### Use CLI ChatBox
```bash ```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
``` ```
#### Use Web UI #### Use Web UI ChatBox
```bash ```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
@@ -196,6 +236,12 @@ llamafactory-cli api examples/inference/llama3_lora_sft.yaml
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
``` ```
#### Full-Parameter Fine-Tuning using APOLLO
```bash
llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using BAdam #### Full-Parameter Fine-Tuning using BAdam
```bash ```bash
@@ -208,6 +254,12 @@ llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml
``` ```
#### Full-Parameter Fine-Tuning using Muon
```bash
llamafactory-cli train examples/extras/muon/qwen2_full_sft.yaml
```
#### LoRA+ Fine-Tuning #### LoRA+ Fine-Tuning
```bash ```bash

View File

@@ -13,6 +13,26 @@
Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices. Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices.
By default, LLaMA-Factory uses all visible computing devices.
Basic usage:
```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
Advanced usage:
```bash
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
learning_rate=1e-5 \
logging_steps=1
```
```bash
bash examples/train_lora/llama3_lora_sft.sh
```
## Examples ## Examples
### LoRA Fine-Tuning ### LoRA Fine-Tuning
@@ -32,8 +52,7 @@ llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
#### Multimodal Supervised Fine-Tuning #### Multimodal Supervised Fine-Tuning
```bash ```bash
llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml llamafactory-cli train examples/train_lora/qwen2_5vl_lora_sft.yaml
llamafactory-cli train examples/train_lora/qwen2vl_lora_sft.yaml
``` ```
#### DPO/ORPO/SimPO Training #### DPO/ORPO/SimPO Training
@@ -45,7 +64,7 @@ llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
#### Multimodal DPO/ORPO/SimPO Training #### Multimodal DPO/ORPO/SimPO Training
```bash ```bash
llamafactory-cli train examples/train_lora/qwen2vl_lora_dpo.yaml llamafactory-cli train examples/train_lora/qwen2_5vl_lora_dpo.yaml
``` ```
#### Reward Modeling #### Reward Modeling
@@ -80,12 +99,6 @@ llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
``` ```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_lora/llama3_lora_predict.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes #### Supervised Fine-Tuning on Multiple Nodes
```bash ```bash
@@ -93,12 +106,26 @@ FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
``` ```
#### Elastic and Fault-Tolerant Supervised Fine-Tuning on Multiple Nodes
To launch an elastic, fault-tolerant multi-node fine-tuning job, run the following command on every node. The number of participating nodes may range from `MIN_NNODES` to `MAX_NNODES`, and the job tolerates at most `MAX_RESTARTS` restarts on failure. `RDZV_ID` should be set to a unique job ID shared by all nodes participating in the job. For more details, see the official [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html) documentation.
```bash
FORCE_TORCHRUN=1 MIN_NNODES=1 MAX_NNODES=3 MAX_RESTARTS=3 RDZV_ID=llamafactory MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### Use DeepSpeed ZeRO-3 to Evenly Distribute GPU Memory #### Use DeepSpeed ZeRO-3 to Evenly Distribute GPU Memory
```bash ```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
``` ```
#### Supervised Fine-Tuning with Ray on 4 GPUs
```bash
USE_RAY=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ray.yaml
```
### QLoRA Fine-Tuning ### QLoRA Fine-Tuning
#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended) #### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended)
@@ -107,6 +134,12 @@ FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
``` ```
#### Supervised Fine-Tuning with 4-bit Bitsandbytes Quantization on Ascend NPU
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_bnb_npu.yaml
```
#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization #### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
```bash ```bash
@@ -130,26 +163,20 @@ llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
#### Supervised Fine-Tuning on Single Node #### Supervised Fine-Tuning on Single Node
```bash ```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
``` ```
#### Supervised Fine-Tuning on Multiple Nodes #### Supervised Fine-Tuning on Multiple Nodes
```bash ```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
``` ```
#### Multimodal Supervised Fine-Tuning #### Multimodal Supervised Fine-Tuning
```bash ```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2_5vl_full_sft.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_full/llama3_full_predict.yaml
``` ```
### Merging LoRA Adapters and Quantization ### Merging LoRA Adapters and Quantization
@@ -168,15 +195,28 @@ llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
``` ```
### Save Ollama modelfile
```bash
llamafactory-cli export examples/merge_lora/llama3_full_sft.yaml
```
### Inferring LoRA Fine-Tuned Models ### Inferring LoRA Fine-Tuned Models
#### Use CLI #### Evaluation using vLLM's Multi-GPU Inference
```bash
python scripts/vllm_infer.py --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct --template llama3 --dataset alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```
#### Use CLI ChatBox
```bash ```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
``` ```
#### Use Web UI #### Use Web UI ChatBox
```bash ```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
@@ -196,6 +236,12 @@ llamafactory-cli api examples/inference/llama3_lora_sft.yaml
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
``` ```
#### Full-Parameter Fine-Tuning using APOLLO
```bash
llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using BAdam #### Full-Parameter Fine-Tuning using BAdam
```bash ```bash
@@ -208,6 +254,12 @@ llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml
``` ```
#### Full-Parameter Fine-Tuning using Muon
```bash
llamafactory-cli train examples/extras/muon/qwen2_full_sft.yaml
```
#### LoRA+ Fine-Tuning #### LoRA+ Fine-Tuning
```bash ```bash

View File

@@ -7,14 +7,14 @@ fsdp_config:
fsdp_backward_prefetch: BACKWARD_PRE fsdp_backward_prefetch: BACKWARD_PRE
fsdp_forward_prefetch: false fsdp_forward_prefetch: false
fsdp_cpu_ram_efficient_loading: true fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: true # offload may affect training speed fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true fsdp_sync_module_states: true
fsdp_use_orig_params: true fsdp_use_orig_params: true
machine_rank: 0 machine_rank: 0
main_training_function: main main_training_function: main
mixed_precision: fp16 # or bf16 mixed_precision: bf16 # or fp16
num_machines: 1 # the number of nodes num_machines: 1 # the number of nodes
num_processes: 2 # the number of GPUs in all nodes num_processes: 2 # the number of GPUs in all nodes
rdzv_backend: static rdzv_backend: static

View File

@@ -0,0 +1,25 @@
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: true # offload may affect training speed
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16 # or fp16
num_machines: 1 # the number of nodes
num_processes: 2 # the number of GPUs in all nodes
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
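A config like this is consumed by `accelerate launch`. A minimal launch sketch, assuming the file is saved as `examples/accelerate/fsdp_config_offload.yaml` (both paths below are assumptions, not taken from this diff):
```bash
# FSDP launch with parameter offload enabled (offload may slow training, per the comment above).
accelerate launch --config_file examples/accelerate/fsdp_config_offload.yaml \
  src/train.py examples/train_full/llama3_full_sft.yaml
```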

View File

@@ -1,5 +1,6 @@
### model ### model
model_name_or_path: Qwen/Qwen2-1.5B-Instruct model_name_or_path: Qwen/Qwen2-1.5B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
@@ -14,6 +15,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/qwen2-1_5b/full/sft output_dir: saves/qwen2-1_5b/full/sft
@@ -21,6 +23,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -33,7 +37,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -0,0 +1,48 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
use_apollo: true
apollo_layerwise: true # choices: [true, false], use false for DDP training
apollo_target: all
apollo_rank: 128
apollo_scale: 32.0
apollo_scale_type: channel
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1 # use 1 for layerwise apollo
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
pure_bf16: true
ddp_timeout: 180000000
### eval
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
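This looks like the file behind the `examples/extras/apollo/llama3_full_sft.yaml` command in the README diff above; since layerwise APOLLO is marked as incompatible with DDP, a single-device launch is the conservative sketch:
```bash
# Single GPU: apollo_layerwise is true above, and the comment says to use false for DDP.
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```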

View File

@@ -1,5 +1,6 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
@@ -19,6 +20,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/full/sft output_dir: saves/llama3-8b/full/sft
@@ -26,6 +28,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -36,7 +40,7 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,11 +1,13 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4 quantization_bit: 4
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -15,6 +17,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -22,6 +25,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -34,7 +39,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,13 +1,14 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: full finetuning_type: full
use_galore: true use_galore: true
galore_layerwise: true galore_layerwise: true # choices: [true, false], use false for DDP training
galore_target: mlp,self_attn galore_target: all
galore_rank: 128 galore_rank: 128
galore_scale: 2.0 galore_scale: 2.0
@@ -18,6 +19,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/full/sft output_dir: saves/llama3-8b/full/sft
@@ -25,10 +27,12 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
gradient_accumulation_steps: 1 gradient_accumulation_steps: 1 # use 1 for layerwise galore
learning_rate: 1.0e-5 learning_rate: 1.0e-5
num_train_epochs: 3.0 num_train_epochs: 3.0
lr_scheduler_type: cosine lr_scheduler_type: cosine
@@ -37,7 +41,7 @@ pure_bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,5 +1,6 @@
### model ### model
model_name_or_path: models/llama3-8b-pro model_name_or_path: models/llama3-8b-pro
trust_remote_code: true
### method ### method
stage: sft stage: sft
@@ -16,6 +17,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b-pro/freeze/sft output_dir: saves/llama3-8b-pro/freeze/sft
@@ -23,6 +25,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -35,7 +39,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
loraplus_lr_ratio: 16.0 loraplus_lr_ratio: 16.0
@@ -15,6 +17,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -22,6 +25,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -34,7 +39,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,5 +1,6 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
@@ -14,6 +15,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b-mod/full/sft output_dir: saves/llama3-8b-mod/full/sft
@@ -21,6 +23,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -34,7 +38,7 @@ pure_bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,30 +1,34 @@
### model ### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct model_name_or_path: Qwen/Qwen2-1.5B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: full finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json use_muon: true
### dataset ### dataset
dataset: mllm_demo,identity dataset: identity,alpaca_en_demo
template: qwen2_vl template: qwen
cutoff_len: 2048 cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/qwen2_vl-7b/full/sft output_dir: saves/qwen2-1_5b/full/sft
logging_steps: 10 logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
gradient_accumulation_steps: 2 gradient_accumulation_steps: 8
learning_rate: 1.0e-5 learning_rate: 1.0e-5
num_train_epochs: 3.0 num_train_epochs: 3.0
lr_scheduler_type: cosine lr_scheduler_type: cosine
@@ -33,7 +37,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,6 +1,10 @@
# Batch generation can be SLOW using this config.
# For faster inference, we recommend using `scripts/vllm_infer.py`.
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft adapter_name_or_path: saves/llama3-8b/lora/sft
trust_remote_code: true
### method ### method
stage: sft stage: sft
@@ -14,10 +18,12 @@ cutoff_len: 2048
max_samples: 50 max_samples: 50
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/predict output_dir: saves/llama3-8b/lora/predict
overwrite_output_dir: true overwrite_output_dir: true
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### eval ### eval
per_device_eval_batch_size: 1 per_device_eval_batch_size: 1
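The faster path named in the comment would look roughly like the `scripts/vllm_infer.py` invocation from the README diff above, pointed at this config's model and adapter — a sketch only, and the `--adapter_name_or_path` flag is an assumption:
```bash
# Hypothetical fast batch prediction via vLLM instead of `llamafactory-cli train`.
python scripts/vllm_infer.py \
  --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
  --adapter_name_or_path saves/llama3-8b/lora/sft \
  --template llama3 --dataset identity,alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```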

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
pissa_init: true pissa_init: true
pissa_iter: 16 pissa_iter: 16
@@ -17,6 +19,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -24,6 +27,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -36,7 +41,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,2 +1,4 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3 template: llama3
infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -0,0 +1,4 @@
model_name_or_path: saves/llama3-8b/full/sft
template: llama3
infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -1,4 +1,5 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3 template: llama3
finetuning_type: lora infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -1,4 +0,0 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3
infer_backend: vllm
vllm_enforce_eager: true

View File

@@ -1,2 +0,0 @@
model_name_or_path: llava-hf/llava-1.5-7b-hf
template: llava

View File

@@ -0,0 +1,4 @@
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
template: qwen2_vl
infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -1,2 +0,0 @@
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
template: qwen2_vl

View File

@@ -0,0 +1,10 @@
### model
model_name_or_path: saves/llama3-8b/full/sft
template: llama3
trust_remote_code: true
### export
export_dir: output/llama3_full_sft
export_size: 5
export_device: cpu # choices: [cpu, auto]
export_legacy_format: false

View File

@@ -1,11 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3 template: llama3
trust_remote_code: true
### export ### export
export_dir: models/llama3_gptq export_dir: output/llama3_gptq
export_quantization_bit: 4 export_quantization_bit: 4
export_quantization_dataset: data/c4_demo.json export_quantization_dataset: data/c4_demo.jsonl
export_size: 2 export_size: 5
export_device: cpu export_device: cpu # choices: [cpu, auto]
export_legacy_format: false export_legacy_format: false

View File

@@ -4,10 +4,10 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3 template: llama3
finetuning_type: lora trust_remote_code: true
### export ### export
export_dir: models/llama3_lora_sft export_dir: output/llama3_lora_sft
export_size: 2 export_size: 5
export_device: cpu export_device: cpu # choices: [cpu, auto]
export_legacy_format: false export_legacy_format: false

View File

@@ -0,0 +1,13 @@
### Note: DO NOT use quantized model or quantization_bit when merging lora adapters
### model
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
adapter_name_or_path: saves/qwen2_5vl-7b/lora/sft
template: qwen2_vl
trust_remote_code: true
### export
export_dir: output/qwen2_5vl_lora_sft
export_size: 5
export_device: cpu # choices: [cpu, auto]
export_legacy_format: false

View File

@@ -1,13 +0,0 @@
### Note: DO NOT use quantized model or quantization_bit when merging lora adapters
### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
adapter_name_or_path: saves/qwen2_vl-7b/lora/sft
template: qwen2_vl
finetuning_type: lora
### export
export_dir: models/qwen2_vl_lora_sft
export_size: 2
export_device: cpu
export_legacy_format: false

View File

@@ -1,23 +0,0 @@
### model
model_name_or_path: saves/llama3-8b/full/sft
### method
stage: sft
do_predict: true
finetuning_type: full
### dataset
eval_dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 2048
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/predict
overwrite_output_dir: true
### eval
per_device_eval_batch_size: 1
predict_with_generate: true

View File

@@ -1,11 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: full finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset ### dataset
dataset: identity,alpaca_en_demo dataset: identity,alpaca_en_demo
@@ -14,6 +15,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/full/sft output_dir: saves/llama3-8b/full/sft
@@ -21,6 +23,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -31,9 +35,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # eval_dataset: alpaca_en_demo
per_device_eval_batch_size: 1 # val_size: 0.1
eval_strategy: steps # per_device_eval_batch_size: 1
eval_steps: 500 # eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,49 @@
### model
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
image_max_pixels: 262144
video_max_pixels: 16384
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
freeze_vision_tower: true
freeze_multi_modal_projector: true
freeze_language_model: false
deepspeed: examples/deepspeed/ds_z3_config.json
### dataset
dataset: mllm_demo,identity,alpaca_en_demo
template: qwen2_vl
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/qwen2_5vl-7b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
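This matches the `examples/train_full/qwen2_5vl_full_sft.yaml` command in the README diff above; with DeepSpeed ZeRO-3 set in the config, the job is launched through torchrun:
```bash
# Multimodal full SFT with ZeRO-3; vision tower and projector stay frozen per the config.
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2_5vl_full_sft.yaml
```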

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: dpo stage: dpo
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
pref_beta: 0.1 pref_beta: 0.1
pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo] pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
@@ -16,6 +18,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/dpo output_dir: saves/llama3-8b/lora/dpo
@@ -23,6 +26,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -33,9 +38,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # eval_dataset: dpo_en_demo
per_device_eval_batch_size: 1 # val_size: 0.1
eval_strategy: steps # per_device_eval_batch_size: 1
eval_steps: 500 # eval_strategy: steps
# eval_steps: 500

View File

@@ -1,6 +1,7 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft adapter_name_or_path: saves/llama3-8b/lora/sft
trust_remote_code: true
### method ### method
finetuning_type: lora finetuning_type: lora

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: kto stage: kto
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
pref_beta: 0.1 pref_beta: 0.1
@@ -15,6 +17,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/kto output_dir: saves/llama3-8b/lora/kto
@@ -22,6 +25,7 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -34,7 +38,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,11 +1,13 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
reward_model: saves/llama3-8b/lora/reward reward_model: saves/llama3-8b/lora/reward
trust_remote_code: true
### method ### method
stage: ppo stage: ppo
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -15,6 +17,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/ppo output_dir: saves/llama3-8b/lora/ppo
@@ -22,6 +25,7 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: pt stage: pt
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -13,6 +15,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/pretrain output_dir: saves/llama3-8b/lora/pretrain
@@ -20,6 +23,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -30,9 +35,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # eval_dataset: c4_demo
per_device_eval_batch_size: 1 # val_size: 0.1
eval_strategy: steps # per_device_eval_batch_size: 1
eval_steps: 500 # eval_strategy: steps
# eval_steps: 500

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: rm stage: rm
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -14,6 +16,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/reward output_dir: saves/llama3-8b/lora/reward
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -31,9 +36,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # eval_dataset: dpo_en_demo
per_device_eval_batch_size: 1 # val_size: 0.1
eval_strategy: steps # per_device_eval_batch_size: 1
eval_steps: 500 # eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,36 @@
#!/bin/bash
set -x
MODEL_PATH=meta-llama/Meta-Llama-3-8B-Instruct
llamafactory-cli train \
--model_name_or_path ${MODEL_PATH} \
--trust_remote_code \
--stage sft \
--do_train \
--finetuning_type lora \
--lora_rank 8 \
--lora_target all \
--dataset identity,alpaca_en_demo \
--template llama3 \
--cutoff_len 2048 \
--max_samples 1000 \
--overwrite_cache \
--preprocessing_num_workers 16 \
--dataloader_num_workers 4 \
--output_dir saves/llama3-8b/lora/sft \
--logging_steps 10 \
--save_steps 500 \
--plot_loss \
--overwrite_output_dir \
--save_only_model false \
--report_to none \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 8 \
--learning_rate 1e-4 \
--num_train_epochs 3.0 \
--lr_scheduler_type cosine \
--warmup_ratio 0.1 \
--bf16 \
--ddp_timeout 180000000

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -14,6 +16,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -31,9 +36,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # eval_dataset: alpaca_en_demo
per_device_eval_batch_size: 1 # val_size: 0.1
eval_strategy: steps # per_device_eval_batch_size: 1
eval_steps: 500 # eval_strategy: steps
# eval_steps: 500

View File

@@ -1,12 +1,14 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
deepspeed: examples/deepspeed/ds_z3_config.json deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset ### dataset
dataset: identity,alpaca_en_demo dataset: identity,alpaca_en_demo
@@ -15,6 +17,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -22,6 +25,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -32,9 +37,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # eval_dataset: alpaca_en_demo
per_device_eval_batch_size: 1 # val_size: 0.1
eval_strategy: steps # per_device_eval_batch_size: 1
eval_steps: 500 # eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,61 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct # or use local absolute path
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
dataset_dir: REMOTE:llamafactory/demo_data # or use local absolute path
template: llama3
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: tmp_dir
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### ray
ray_run_name: llama3_8b_sft_lora
ray_storage_path: ./saves
ray_num_workers: 4 # Number of GPUs to use.
placement_strategy: PACK
resources_per_worker:
GPU: 1
# ray_init_kwargs:
# runtime_env:
# env_vars:
# <YOUR-ENV-VAR-HERE>: "<YOUR-ENV-VAR-HERE>"
# pip:
# - emoji
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
# eval_dataset: alpaca_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
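This is presumably the `llama3_lora_sft_ray.yaml` file behind the Ray example in the README diff above, launched via the `USE_RAY` switch:
```bash
# Ray launcher: spawns ray_num_workers workers with one GPU each, per the config above.
USE_RAY=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ray.yaml
```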

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset

View File

@@ -0,0 +1,49 @@
# pip install git+https://github.com/hiyouga/transformers.git@llama4_train
### model
model_name_or_path: meta-llama/Llama-4-Scout-17B-16E-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset: mllm_demo,identity,alpaca_en_demo
template: llama4
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama4-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
# eval_dataset: alpaca_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,39 +0,0 @@
### model
model_name_or_path: llava-hf/llava-1.5-7b-hf
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: mllm_demo
template: llava
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llava1_5-7b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,10 +1,14 @@
### model ### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
image_max_pixels: 262144
video_max_pixels: 16384
trust_remote_code: true
### method ### method
stage: dpo stage: dpo
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
pref_beta: 0.1 pref_beta: 0.1
pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo] pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
@@ -16,13 +20,16 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/qwen2_vl-7b/lora/dpo output_dir: saves/qwen2_5vl-7b/lora/dpo
logging_steps: 10 logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -33,9 +40,10 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,26 +1,33 @@
### model ### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
image_max_pixels: 262144
video_max_pixels: 16384
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
dataset: mllm_demo,identity # video: mllm_video_demo dataset: mllm_demo,identity,alpaca_en_demo # video: mllm_video_demo
template: qwen2_vl template: qwen2_vl
cutoff_len: 2048 cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/qwen2_vl-7b/lora/sft output_dir: saves/qwen2_5vl-7b/lora/sft
logging_steps: 10 logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -31,9 +38,10 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1 warmup_ratio: 0.1
bf16: true bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
resume_from_checkpoint: null
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16 model_name_or_path: ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -14,6 +16,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -33,7 +38,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-AWQ model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-AWQ
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -14,6 +16,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -33,7 +38,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,12 +1,16 @@
### model ### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4
quantization_method: bnb
double_quantization: false
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
deepspeed: examples/deepspeed/ds_z0_config.json
### dataset ### dataset
dataset: identity,alpaca_en_demo dataset: identity,alpaca_en_demo
@@ -15,6 +19,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -22,10 +27,12 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
gradient_accumulation_steps: 2 gradient_accumulation_steps: 8
learning_rate: 1.0e-4 learning_rate: 1.0e-4
num_train_epochs: 3.0 num_train_epochs: 3.0
lr_scheduler_type: cosine lr_scheduler_type: cosine
@@ -34,7 +41,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,10 +1,12 @@
### model ### model
model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-GPTQ model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-GPTQ
trust_remote_code: true
### method ### method
stage: sft stage: sft
do_train: true do_train: true
finetuning_type: lora finetuning_type: lora
lora_rank: 8
lora_target: all lora_target: all
### dataset ### dataset
@@ -14,6 +16,7 @@ cutoff_len: 2048
max_samples: 1000 max_samples: 1000
overwrite_cache: true overwrite_cache: true
preprocessing_num_workers: 16 preprocessing_num_workers: 16
dataloader_num_workers: 4
### output ### output
output_dir: saves/llama3-8b/lora/sft output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500 save_steps: 500
plot_loss: true plot_loss: true
overwrite_output_dir: true overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train ### train
per_device_train_batch_size: 1 per_device_train_batch_size: 1
@@ -33,7 +38,7 @@ bf16: true
ddp_timeout: 180000000 ddp_timeout: 180000000
### eval ### eval
val_size: 0.1 # val_size: 0.1
per_device_eval_batch_size: 1 # per_device_eval_batch_size: 1
eval_strategy: steps # eval_strategy: steps
eval_steps: 500 # eval_steps: 500

View File

@@ -1,12 +1,14 @@
 ### model
 model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
-quantization_bit: 4
-quantization_method: bitsandbytes # choices: [bitsandbytes (4/8), hqq (2/3/4/5/6/8), eetq (8)]
+quantization_bit: 4 # choices: [8 (bnb/hqq/eetq), 4 (bnb/hqq), 3 (hqq), 2 (hqq)]
+quantization_method: bnb # choices: [bnb, hqq, eetq]
+trust_remote_code: true

 ### method
 stage: sft
 do_train: true
 finetuning_type: lora
+lora_rank: 8
 lora_target: all

 ### dataset
@@ -16,6 +18,7 @@ cutoff_len: 2048
 max_samples: 1000
 overwrite_cache: true
 preprocessing_num_workers: 16
+dataloader_num_workers: 4

 ### output
 output_dir: saves/llama3-8b/lora/sft
@@ -23,6 +26,8 @@ logging_steps: 10
 save_steps: 500
 plot_loss: true
 overwrite_output_dir: true
+save_only_model: false
+report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]

 ### train
 per_device_train_batch_size: 1
@@ -35,7 +40,7 @@ bf16: true
 ddp_timeout: 180000000

 ### eval
-val_size: 0.1
-per_device_eval_batch_size: 1
-eval_strategy: steps
-eval_steps: 500
+# val_size: 0.1
+# per_device_eval_batch_size: 1
+# eval_strategy: steps
+# eval_steps: 500


@@ -2,14 +2,53 @@
 requires = ["setuptools>=61.0"]
 build-backend = "setuptools.build_meta"

+[project]
+name = "llamafactory"
+dynamic = [
+    "version",
+    "dependencies",
+    "optional-dependencies",
+    "requires-python",
+    "scripts",
+    "authors",
+    "description",
+    "readme",
+    "license",
+    "keywords",
+    "classifiers"
+]
+
 [tool.ruff]
-target-version = "py38"
+target-version = "py39"
 line-length = 119
 indent-width = 4

 [tool.ruff.lint]
-ignore = ["C408", "C901", "E501", "E731", "E741", "W605"]
-select = ["C", "E", "F", "I", "W"]
+ignore = [
+    "C408", # collection
+    "C901", # complex
+    "E501", # line too long
+    "E731", # lambda function
+    "E741", # ambiguous var name
+    "D100", # no doc public module
+    "D101", # no doc public class
+    "D102", # no doc public method
+    "D103", # no doc public function
+    "D104", # no doc public package
+    "D105", # no doc magic method
+    "D107", # no doc __init__
+]
+extend-select = [
+    "C", # complexity
+    "E", # error
+    "F", # pyflakes
+    "I", # isort
+    "W", # warning
+    "UP", # pyupgrade
+    "D", # pydocstyle
+    "PT009", # pytest assert
+    "RUF022", # sort __all__
+]

 [tool.ruff.lint.isort]
 lines-after-imports = 2
@@ -22,12 +61,35 @@ known-third-party = [
     "peft",
     "torch",
     "transformers",
-    "trl"
+    "trl",
 ]

+[tool.ruff.lint.pydocstyle]
+convention = "google"
+
 [tool.ruff.format]
 quote-style = "double"
 indent-style = "space"
 docstring-code-format = true
 skip-magic-trailing-comma = false
 line-ending = "auto"
+
+[tool.uv]
+conflicts = [
+    [
+        { extra = "torch-npu" },
+        { extra = "aqlm" },
+    ],
+    [
+        { extra = "torch-npu" },
+        { extra = "vllm" },
+    ],
+    [
+        { extra = "torch-npu" },
+        { extra = "sglang" },
+    ],
+    [
+        { extra = "vllm" },
+        { extra = "sglang" },
+    ],
+]
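The new [tool.uv] table declares optional extras that must not be installed together (for example, vllm and sglang pin incompatible dependency sets). With these declarations in place, a resolution such as `uv sync --extra vllm --extra sglang` is expected to be rejected by uv while each extra alone resolves normally; the exact behavior depends on the uv version, so treat this reading as an assumption rather than a guarantee.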


@@ -1,23 +1,27 @@
-transformers>=4.41.2,<=4.46.1
-datasets>=2.16.0,<=3.1.0
-accelerate>=0.34.0,<=1.0.1
-peft>=0.11.1,<=0.12.0
+transformers>=4.45.0,<=4.52.4,!=4.46.*,!=4.47.*,!=4.48.0,!=4.52.0; sys_platform != 'darwin'
+transformers>=4.45.0,<=4.51.3,!=4.46.*,!=4.47.*,!=4.48.0,!=4.52.0; sys_platform == 'darwin'
+datasets>=2.16.0,<=3.6.0
+accelerate>=0.34.0,<=1.7.0
+peft>=0.14.0,<=0.15.2
 trl>=0.8.6,<=0.9.6
-gradio>=4.0.0,<5.0.0
-pandas>=2.0.0
+tokenizers>=0.19.0,<=0.21.1
+gradio>=4.38.0,<=5.31.0
 scipy
 einops
 sentencepiece
 tiktoken
 protobuf
 uvicorn
-pydantic
 fastapi
 sse-starlette
 matplotlib>=3.7.0
 fire
+omegaconf
 packaging
 pyyaml
 numpy<2.0.0
+pydantic<=2.10.6
+pandas>=2.0.0
 av
+librosa
 tyro<0.9.0
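The duplicated transformers requirement is not an error: the two lines carry mutually exclusive environment markers, so pip keeps the <=4.51.3 pin on macOS (sys_platform == 'darwin') and the <=4.52.4 pin everywhere else. A minimal post-install sanity check (illustrative only):

    import platform
    from importlib.metadata import version

    # On Darwin the resolver should have kept transformers <= 4.51.3; elsewhere <= 4.52.4.
    print(platform.system(), version("transformers"))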


@@ -1,4 +1,4 @@
-# Copyright 2024 the LlamaFactory team.
+# Copyright 2025 the LlamaFactory team.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -23,8 +23,8 @@ require_version("openai>=1.5.0", "To fix: pip install openai>=1.5.0")
 def main():
     client = OpenAI(
-        api_key="{}".format(os.environ.get("API_KEY", "0")),
-        base_url="http://localhost:{}/v1".format(os.environ.get("API_PORT", 8000)),
+        api_key="{}".format(os.getenv("API_KEY", "0")),
+        base_url="http://localhost:{}/v1".format(os.getenv("API_PORT", 8000)),
     )
     messages = []
     messages.append(


@@ -1,4 +1,4 @@
-# Copyright 2024 the LlamaFactory team.
+# Copyright 2025 the LlamaFactory team.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -14,7 +14,6 @@
 import json
 import os
-from typing import Sequence

 from openai import OpenAI
 from transformers.utils.versions import require_version
@@ -23,7 +22,7 @@ from transformers.utils.versions import require_version
 require_version("openai>=1.5.0", "To fix: pip install openai>=1.5.0")

-def calculate_gpa(grades: Sequence[str], hours: Sequence[int]) -> float:
+def calculate_gpa(grades: list[str], hours: list[int]) -> float:
     grade_to_score = {"A": 4, "B": 3, "C": 2}
     total_score, total_hour = 0, 0
     for grade, hour in zip(grades, hours):
@@ -34,8 +33,8 @@ def calculate_gpa(grades: Sequence[str], hours: Sequence[int]) -> float:
 def main():
     client = OpenAI(
-        api_key="{}".format(os.environ.get("API_KEY", "0")),
-        base_url="http://localhost:{}/v1".format(os.environ.get("API_PORT", 8000)),
+        api_key="{}".format(os.getenv("API_KEY", "0")),
+        base_url="http://localhost:{}/v1".format(os.getenv("API_PORT", 8000)),
     )
     tools = [
         {
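The Sequence[str] -> list[str] change above uses PEP 585 builtin generics, which require Python >= 3.9 and therefore line up with the target-version = "py39" bump in pyproject.toml. A minimal sketch (hypothetical function, not taken from this diff):

    def average(hours: list[int]) -> float:  # builtin generic: no `from typing import List` needed
        return sum(hours) / len(hours)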


@@ -1,4 +1,4 @@
-# Copyright 2024 the LlamaFactory team.
+# Copyright 2025 the LlamaFactory team.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -15,64 +15,67 @@
 import json
 import os
 from collections import OrderedDict
-from typing import Any, Dict
+from typing import Any

 import fire
 import torch
+from huggingface_hub import split_torch_state_dict_into_shards
 from safetensors.torch import save_file
 from tqdm import tqdm
-from transformers.modeling_utils import (
-    SAFE_WEIGHTS_INDEX_NAME,
-    SAFE_WEIGHTS_NAME,
-    WEIGHTS_INDEX_NAME,
-    WEIGHTS_NAME,
-    shard_checkpoint,
-)
+from transformers.modeling_utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME

 CONFIG_NAME = "config.json"

 def save_weight(input_dir: str, output_dir: str, shard_size: str, save_safetensors: bool):
-    baichuan2_state_dict: Dict[str, torch.Tensor] = OrderedDict()
+    baichuan2_state_dict: dict[str, torch.Tensor] = OrderedDict()
     for filepath in tqdm(os.listdir(input_dir), desc="Load weights"):
         if os.path.isfile(os.path.join(input_dir, filepath)) and filepath.endswith(".bin"):
-            shard_weight = torch.load(os.path.join(input_dir, filepath), map_location="cpu")
+            shard_weight = torch.load(os.path.join(input_dir, filepath), map_location="cpu", weights_only=True)
             baichuan2_state_dict.update(shard_weight)

-    llama2_state_dict: Dict[str, torch.Tensor] = OrderedDict()
+    llama_state_dict: dict[str, torch.Tensor] = OrderedDict()
     for key, value in tqdm(baichuan2_state_dict.items(), desc="Convert format"):
         if "W_pack" in key:
             proj_size = value.size(0) // 3
-            llama2_state_dict[key.replace("W_pack", "q_proj")] = value[:proj_size, :]
-            llama2_state_dict[key.replace("W_pack", "k_proj")] = value[proj_size : 2 * proj_size, :]
-            llama2_state_dict[key.replace("W_pack", "v_proj")] = value[2 * proj_size :, :]
+            llama_state_dict[key.replace("W_pack", "q_proj")] = value[:proj_size, :]
+            llama_state_dict[key.replace("W_pack", "k_proj")] = value[proj_size : 2 * proj_size, :]
+            llama_state_dict[key.replace("W_pack", "v_proj")] = value[2 * proj_size :, :]
         elif "lm_head" in key:
-            llama2_state_dict[key] = torch.nn.functional.normalize(value)
+            llama_state_dict[key] = torch.nn.functional.normalize(value)
         else:
-            llama2_state_dict[key] = value
+            llama_state_dict[key] = value

     weights_name = SAFE_WEIGHTS_NAME if save_safetensors else WEIGHTS_NAME
-    shards, index = shard_checkpoint(llama2_state_dict, max_shard_size=shard_size, weights_name=weights_name)
-
-    for shard_file, shard in tqdm(shards.items(), desc="Save weights"):
+    filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
+    state_dict_split = split_torch_state_dict_into_shards(
+        llama_state_dict, filename_pattern=filename_pattern, max_shard_size=shard_size
+    )
+    for shard_file, tensors in tqdm(state_dict_split.filename_to_tensors.items(), desc="Save weights"):
+        shard = {tensor: llama_state_dict[tensor].contiguous() for tensor in tensors}
         if save_safetensors:
             save_file(shard, os.path.join(output_dir, shard_file), metadata={"format": "pt"})
         else:
             torch.save(shard, os.path.join(output_dir, shard_file))

-    if index is None:
-        print(f"Model weights saved in {os.path.join(output_dir, WEIGHTS_NAME)}")
+    if not state_dict_split.is_sharded:
+        print(f"Model weights saved in {os.path.join(output_dir, weights_name)}.")
     else:
+        index = {
+            "metadata": state_dict_split.metadata,
+            "weight_map": state_dict_split.tensor_to_filename,
+        }
         index_name = SAFE_WEIGHTS_INDEX_NAME if save_safetensors else WEIGHTS_INDEX_NAME
         with open(os.path.join(output_dir, index_name), "w", encoding="utf-8") as f:
             json.dump(index, f, indent=2, sort_keys=True)
-        print(f"Model weights saved in {output_dir}")
+        print(f"Model weights saved in {output_dir}.")

 def save_config(input_dir: str, output_dir: str):
     with open(os.path.join(input_dir, CONFIG_NAME), encoding="utf-8") as f:
-        llama2_config_dict: Dict[str, Any] = json.load(f)
+        llama2_config_dict: dict[str, Any] = json.load(f)

     llama2_config_dict["architectures"] = ["LlamaForCausalLM"]
     llama2_config_dict.pop("auto_map", None)
@@ -81,6 +84,7 @@ def save_config(input_dir: str, output_dir: str):
     with open(os.path.join(output_dir, CONFIG_NAME), "w", encoding="utf-8") as f:
         json.dump(llama2_config_dict, f, indent=2)
+
     print(f"Model config saved in {os.path.join(output_dir, CONFIG_NAME)}")
@@ -90,8 +94,8 @@ def llamafy_baichuan2(
     shard_size: str = "2GB",
     save_safetensors: bool = True,
 ):
-    r"""
-    Converts the Baichuan2-7B model in the same format as LLaMA2-7B.
+    r"""Convert the Baichuan2-7B model in the same format as LLaMA2-7B.

     Usage: python llamafy_baichuan2.py --input_dir input --output_dir output
     Converted model: https://huggingface.co/hiyouga/Baichuan2-7B-Base-LLaMAfied
     """


@@ -1,4 +1,4 @@
-# Copyright 2024 the LlamaFactory team.
+# Copyright 2025 the LlamaFactory team.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -15,20 +15,15 @@
 import json
 import os
 from collections import OrderedDict
-from typing import Any, Dict
+from typing import Any

 import fire
 import torch
+from huggingface_hub import split_torch_state_dict_into_shards
 from safetensors import safe_open
 from safetensors.torch import save_file
 from tqdm import tqdm
-from transformers.modeling_utils import (
-    SAFE_WEIGHTS_INDEX_NAME,
-    SAFE_WEIGHTS_NAME,
-    WEIGHTS_INDEX_NAME,
-    WEIGHTS_NAME,
-    shard_checkpoint,
-)
+from transformers.modeling_utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME
 from transformers.utils import check_min_version
@@ -42,76 +37,84 @@ CONFIG_NAME = "config.json"
 def save_weight(input_dir: str, output_dir: str, shard_size: str, save_safetensors: bool) -> str:
-    qwen_state_dict: Dict[str, torch.Tensor] = OrderedDict()
+    qwen_state_dict: dict[str, torch.Tensor] = OrderedDict()
     for filepath in tqdm(os.listdir(input_dir), desc="Load weights"):
         if os.path.isfile(os.path.join(input_dir, filepath)) and filepath.endswith(".safetensors"):
             with safe_open(os.path.join(input_dir, filepath), framework="pt", device="cpu") as f:
                 for key in f.keys():
                     qwen_state_dict[key] = f.get_tensor(key)

-    llama2_state_dict: Dict[str, torch.Tensor] = OrderedDict()
+    llama_state_dict: dict[str, torch.Tensor] = OrderedDict()
     torch_dtype = None
     for key, value in tqdm(qwen_state_dict.items(), desc="Convert format"):
         if torch_dtype is None:
             torch_dtype = value.dtype

         if "wte" in key:
-            llama2_state_dict["model.embed_tokens.weight"] = value
+            llama_state_dict["model.embed_tokens.weight"] = value
         elif "ln_f" in key:
-            llama2_state_dict["model.norm.weight"] = value
+            llama_state_dict["model.norm.weight"] = value
         else:
             key = key.replace("transformer.h", "model.layers")
             if "attn.c_attn" in key:
                 proj_size = value.size(0) // 3
-                llama2_state_dict[key.replace("attn.c_attn", "self_attn.q_proj")] = value[:proj_size, ...]
-                llama2_state_dict[key.replace("attn.c_attn", "self_attn.k_proj")] = value[
+                llama_state_dict[key.replace("attn.c_attn", "self_attn.q_proj")] = value[:proj_size, ...]
+                llama_state_dict[key.replace("attn.c_attn", "self_attn.k_proj")] = value[
                     proj_size : 2 * proj_size, ...
                 ]
-                llama2_state_dict[key.replace("attn.c_attn", "self_attn.v_proj")] = value[2 * proj_size :, ...]
+                llama_state_dict[key.replace("attn.c_attn", "self_attn.v_proj")] = value[2 * proj_size :, ...]
             elif "attn.c_proj" in key:
-                llama2_state_dict[key.replace("attn.c_proj", "self_attn.o_proj")] = value
-                llama2_state_dict[key.replace("attn.c_proj.weight", "self_attn.o_proj.bias")] = torch.zeros_like(
+                llama_state_dict[key.replace("attn.c_proj", "self_attn.o_proj")] = value
+                llama_state_dict[key.replace("attn.c_proj.weight", "self_attn.o_proj.bias")] = torch.zeros_like(
                     value[:, 0]
                 ).squeeze()
             elif "ln_1" in key:
-                llama2_state_dict[key.replace("ln_1", "input_layernorm")] = value
+                llama_state_dict[key.replace("ln_1", "input_layernorm")] = value
             elif "ln_2" in key:
-                llama2_state_dict[key.replace("ln_2", "post_attention_layernorm")] = value
+                llama_state_dict[key.replace("ln_2", "post_attention_layernorm")] = value
             elif "mlp.w1" in key:
-                llama2_state_dict[key.replace("mlp.w1", "mlp.up_proj")] = value
+                llama_state_dict[key.replace("mlp.w1", "mlp.up_proj")] = value
            elif "mlp.w2" in key:
-                llama2_state_dict[key.replace("mlp.w2", "mlp.gate_proj")] = value
+                llama_state_dict[key.replace("mlp.w2", "mlp.gate_proj")] = value
             elif "mlp.c_proj" in key:
-                llama2_state_dict[key.replace("mlp.c_proj", "mlp.down_proj")] = value
+                llama_state_dict[key.replace("mlp.c_proj", "mlp.down_proj")] = value
             elif "lm_head" in key:
-                llama2_state_dict[key] = value
+                llama_state_dict[key] = value
             else:
                 raise KeyError(f"Unable to process key {key}")

     weights_name = SAFE_WEIGHTS_NAME if save_safetensors else WEIGHTS_NAME
-    shards, index = shard_checkpoint(llama2_state_dict, max_shard_size=shard_size, weights_name=weights_name)
-
-    for shard_file, shard in tqdm(shards.items(), desc="Save weights"):
+    filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
+    state_dict_split = split_torch_state_dict_into_shards(
+        llama_state_dict, filename_pattern=filename_pattern, max_shard_size=shard_size
+    )
+    for shard_file, tensors in tqdm(state_dict_split.filename_to_tensors.items(), desc="Save weights"):
+        shard = {tensor: llama_state_dict[tensor].contiguous() for tensor in tensors}
         if save_safetensors:
             save_file(shard, os.path.join(output_dir, shard_file), metadata={"format": "pt"})
         else:
             torch.save(shard, os.path.join(output_dir, shard_file))

-    if index is None:
-        print(f"Model weights saved in {os.path.join(output_dir, weights_name)}")
+    if not state_dict_split.is_sharded:
+        print(f"Model weights saved in {os.path.join(output_dir, weights_name)}.")
     else:
+        index = {
+            "metadata": state_dict_split.metadata,
+            "weight_map": state_dict_split.tensor_to_filename,
+        }
         index_name = SAFE_WEIGHTS_INDEX_NAME if save_safetensors else WEIGHTS_INDEX_NAME
         with open(os.path.join(output_dir, index_name), "w", encoding="utf-8") as f:
             json.dump(index, f, indent=2, sort_keys=True)
-        print(f"Model weights saved in {output_dir}")
+        print(f"Model weights saved in {output_dir}.")

     return str(torch_dtype).replace("torch.", "")

 def save_config(input_dir: str, output_dir: str, torch_dtype: str):
     with open(os.path.join(input_dir, CONFIG_NAME), encoding="utf-8") as f:
-        qwen_config_dict: Dict[str, Any] = json.load(f)
+        qwen_config_dict: dict[str, Any] = json.load(f)

-    llama2_config_dict: Dict[str, Any] = OrderedDict()
+    llama2_config_dict: dict[str, Any] = OrderedDict()
     llama2_config_dict["architectures"] = ["LlamaForCausalLM"]
     llama2_config_dict["hidden_act"] = "silu"
     llama2_config_dict["hidden_size"] = qwen_config_dict["hidden_size"]
@@ -134,6 +137,7 @@ def save_config(input_dir: str, output_dir: str, torch_dtype: str):
     with open(os.path.join(output_dir, CONFIG_NAME), "w", encoding="utf-8") as f:
         json.dump(llama2_config_dict, f, indent=2)
+
     print(f"Model config saved in {os.path.join(output_dir, CONFIG_NAME)}")
@@ -143,8 +147,8 @@ def llamafy_qwen(
     shard_size: str = "2GB",
     save_safetensors: bool = False,
 ):
-    r"""
-    Converts the Qwen models in the same format as LLaMA2.
+    r"""Convert the Qwen models in the same format as LLaMA2.

     Usage: python llamafy_qwen.py --input_dir input --output_dir output
     Converted model: https://huggingface.co/hiyouga/Qwen-14B-Chat-LLaMAfied
     """


@@ -0,0 +1,39 @@
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from transformers import Llama4Config, Llama4ForConditionalGeneration, Llama4TextConfig, Llama4VisionConfig


if __name__ == "__main__":
    vision_config = Llama4VisionConfig(
        hidden_size=1408,
        image_size=336,
        intermediate_size=5632,
        num_attention_heads=16,
        num_hidden_layers=4,
        vision_output_dim=4096,
    )
    text_config = Llama4TextConfig(
        hidden_size=512,
        intermediate_size=1024,
        intermediate_size_mlp=1024,
        num_hidden_layers=4,
        num_attention_heads=8,
        num_key_value_heads=2,
        head_dim=512 // 8,
        num_local_experts=2,
    )
    config = Llama4Config(vision_config=vision_config, text_config=text_config)
    model = Llama4ForConditionalGeneration._from_config(config)
    model.save_pretrained("tiny-llama4")
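A quick way to confirm the tiny checkpoint is usable in tests, assuming the script above has been run in the working directory:

    from transformers import Llama4ForConditionalGeneration

    model = Llama4ForConditionalGeneration.from_pretrained("tiny-llama4")
    print(sum(p.numel() for p in model.parameters()))  # tiny parameter count, cheap enough for CI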


@@ -0,0 +1,79 @@
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging
import time

import fire
from datasets import load_dataset


try:
    import jieba  # type: ignore
    from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu  # type: ignore
    from rouge_chinese import Rouge  # type: ignore

    jieba.setLogLevel(logging.CRITICAL)
    jieba.initialize()
except ImportError:
    print("Please install llamafactory with `pip install -e .[metrics]`.")
    raise


def compute_metrics(sample):
    hypothesis = list(jieba.cut(sample["predict"]))
    reference = list(jieba.cut(sample["label"]))

    bleu_score = sentence_bleu(
        [list(sample["label"])],
        list(sample["predict"]),
        smoothing_function=SmoothingFunction().method3,
    )

    if len(" ".join(hypothesis).split()) == 0 or len(" ".join(reference).split()) == 0:
        result = {"rouge-1": {"f": 0.0}, "rouge-2": {"f": 0.0}, "rouge-l": {"f": 0.0}}
    else:
        rouge = Rouge()
        scores = rouge.get_scores(" ".join(hypothesis), " ".join(reference))
        result = scores[0]

    metric_result = {}
    for k, v in result.items():
        metric_result[k] = round(v["f"] * 100, 4)

    metric_result["bleu-4"] = round(bleu_score * 100, 4)
    return metric_result


def main(filename: str):
    start_time = time.time()
    dataset = load_dataset("json", data_files=filename, split="train")
    dataset = dataset.map(compute_metrics, num_proc=8, remove_columns=dataset.column_names)
    score_dict = dataset.to_dict()

    average_score = {}
    for task, scores in sorted(score_dict.items(), key=lambda x: x[0]):
        print(f"{task}: {sum(scores) / len(scores):.4f}")
        average_score[task] = sum(scores) / len(scores)

    with open("predictions_score.json", "w", encoding="utf-8") as f:
        json.dump(average_score, f, indent=4)

    print(f"\nDone in {time.time() - start_time:.3f}s.\nScore file saved to predictions_score.json")


if __name__ == "__main__":
    fire.Fire(main)
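The script expects a JSON-lines file in which each record holds the model output under "predict" and the reference under "label" (field names taken from compute_metrics above); the prediction file written by the repository's generation step should fit this shape, though the exact filename here is an assumption: python eval_bleu_rouge.py --filename generated_predictions.jsonl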


@@ -1,4 +1,4 @@
-# Copyright 2024 Tencent Inc. and the LlamaFactory team.
+# Copyright 2025 Tencent Inc. and the LlamaFactory team.
 #
 # This code is inspired by the Tencent's LLaMA-Pro library.
 # https://github.com/TencentARC/LLaMA-Pro/blob/main/scripts/block_expansion.py
@@ -22,20 +22,15 @@ from typing import TYPE_CHECKING
 import fire
 import torch
+from huggingface_hub import split_torch_state_dict_into_shards
 from safetensors.torch import save_file
 from tqdm import tqdm
-from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
-from transformers.modeling_utils import (
-    SAFE_WEIGHTS_INDEX_NAME,
-    SAFE_WEIGHTS_NAME,
-    WEIGHTS_INDEX_NAME,
-    WEIGHTS_NAME,
-    shard_checkpoint,
-)
+from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, PreTrainedModel
+from transformers.modeling_utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME

 if TYPE_CHECKING:
-    from transformers import PretrainedConfig, PreTrainedModel
+    from transformers import PretrainedConfig

 def change_name(name: str, old_index: int, new_index: int) -> str:
@@ -46,46 +41,42 @@ def block_expansion(
     model_name_or_path: str,
     output_dir: str,
     num_expand: int,
-    shard_size: str = "2GB",
+    shard_size: str = "5GB",
     save_safetensors: bool = True,
 ):
-    r"""
-    Performs block expansion for LLaMA, Mistral, Qwen1.5 or Yi models.
+    r"""Perform block expansion for LLaMA, Mistral, Qwen2 or Yi models.

     Usage: python llama_pro.py --model_name_or_path meta-llama/Llama-2-7b-hf --output_dir llama2_pro --num_expand 8
     """
-    config: "PretrainedConfig" = AutoConfig.from_pretrained(model_name_or_path)
+    config: PretrainedConfig = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
     num_layers = getattr(config, "num_hidden_layers")
-    setattr(config, "num_hidden_layers", num_layers + num_expand)
-    config.save_pretrained(output_dir)
-
-    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
-    tokenizer.save_pretrained(output_dir)
-
-    config: "PretrainedConfig" = AutoConfig.from_pretrained(model_name_or_path)  # load the original one
-    if save_safetensors:
-        setattr(config, "tie_word_embeddings", False)  # safetensors does not allow shared weights
-
-    model: "PreTrainedModel" = AutoModelForCausalLM.from_pretrained(
-        model_name_or_path,
-        config=config,
-        torch_dtype="auto",
-        trust_remote_code=True,
-        low_cpu_mem_usage=True,
-    )
-    state_dict = model.state_dict()
-
     if num_layers % num_expand != 0:
         raise ValueError(f"`num_layers` {num_layers} should be divisible by `num_expand` {num_expand}.")

+    setattr(config, "num_hidden_layers", num_layers + num_expand)
+    config.save_pretrained(output_dir)
+    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
+    tokenizer.save_pretrained(output_dir)
+    print(f"Expanding model of {num_layers} layers to {num_layers + num_expand} layers.")
+
+    model = AutoModelForCausalLM.from_pretrained(
+        model_name_or_path, torch_dtype="auto", device_map="cpu", trust_remote_code=True, low_cpu_mem_usage=True
+    )
+    assert isinstance(model, PreTrainedModel)  # type hint
+    if save_safetensors and getattr(model.config, "tie_word_embeddings", False):
+        del model.lm_head  # safetensors does not allow shared weights
+
     split = num_layers // num_expand
     layer_cnt = 0
-    output_state_dict = OrderedDict()
+    state_dict = model.state_dict()
+    output_state_dict: dict[str, torch.Tensor] = OrderedDict()
     for i in range(num_layers):
         for key, value in state_dict.items():
             if f".{i:d}." in key:
                 output_state_dict[change_name(key, i, layer_cnt)] = value
-                print(f"Add layer {layer_cnt} copied from layer {i}")
+                print(f"Add layer {layer_cnt} copied from layer {i}.")
         layer_cnt += 1
         if (i + 1) % split == 0:
             for key, value in state_dict.items():
@@ -95,7 +86,7 @@ def block_expansion(
                 else:
                     output_state_dict[change_name(key, i, layer_cnt)] = torch.clone(value)
-                    print(f"Add layer {layer_cnt} expanded from layer {i}")
+                    print(f"Add layer {layer_cnt} expanded from layer {i}.")
             layer_cnt += 1

     for key, value in state_dict.items():
@@ -103,21 +94,29 @@ def block_expansion(
             output_state_dict[key] = value

     weights_name = SAFE_WEIGHTS_NAME if save_safetensors else WEIGHTS_NAME
-    shards, index = shard_checkpoint(output_state_dict, max_shard_size=shard_size, weights_name=weights_name)
-
-    for shard_file, shard in tqdm(shards.items(), desc="Save weights"):
+    filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
+    state_dict_split = split_torch_state_dict_into_shards(
+        output_state_dict, filename_pattern=filename_pattern, max_shard_size=shard_size
+    )
+    for shard_file, tensors in tqdm(state_dict_split.filename_to_tensors.items(), desc="Save weights"):
+        shard = {tensor: output_state_dict[tensor].contiguous() for tensor in tensors}
         if save_safetensors:
             save_file(shard, os.path.join(output_dir, shard_file), metadata={"format": "pt"})
         else:
             torch.save(shard, os.path.join(output_dir, shard_file))

-    if index is None:
-        print(f"Model weights saved in {os.path.join(output_dir, weights_name)}")
+    if not state_dict_split.is_sharded:
+        print(f"Model weights saved in {os.path.join(output_dir, weights_name)}.")
     else:
+        index = {
+            "metadata": state_dict_split.metadata,
+            "weight_map": state_dict_split.tensor_to_filename,
+        }
         index_name = SAFE_WEIGHTS_INDEX_NAME if save_safetensors else WEIGHTS_INDEX_NAME
         with open(os.path.join(output_dir, index_name), "w", encoding="utf-8") as f:
             json.dump(index, f, indent=2, sort_keys=True)
-        print(f"Model weights saved in {output_dir}")
+        print(f"Model weights saved in {output_dir}.")

     print("- Fine-tune this model with:")
     print(f"model_name_or_path: {output_dir}")


@@ -1,4 +1,4 @@
-# Copyright 2024 HuggingFace Inc. and the LlamaFactory team.
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
 #
 # This code is based on the HuggingFace's PEFT library.
 # https://github.com/huggingface/peft/blob/v0.10.0/examples/loftq_finetuning/quantize_save_load.py
@@ -38,8 +38,8 @@ def quantize_loftq(
     lora_target: tuple = ("q_proj", "v_proj"),
     save_safetensors: bool = True,
 ):
-    r"""
-    Initializes LoRA weights with LoRA-fine-tuning-aware Quantization (LoftQ)
+    r"""Initialize LoRA weights with LoRA-fine-tuning-aware Quantization (LoftQ).

     Usage: python loftq_init.py --model_name_or_path path_to_model --output_dir output_dir
     """
     if isinstance(lora_target, str):
@@ -72,7 +72,7 @@ def quantize_loftq(
     print(f"Adapter weights saved in {loftq_dir}")

     # Save base model
-    base_model: "PreTrainedModel" = peft_model.unload()
+    base_model: PreTrainedModel = peft_model.unload()
     base_model.save_pretrained(output_dir, safe_serialization=save_safetensors)
     tokenizer.save_pretrained(output_dir)
     print(f"Model weights saved in {output_dir}")


@@ -1,4 +1,4 @@
-# Copyright 2024 HuggingFace Inc. and the LlamaFactory team.
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
 #
 # This code is based on the HuggingFace's PEFT library.
 # https://github.com/huggingface/peft/blob/v0.11.0/examples/pissa_finetuning/preprocess.py
@@ -37,8 +37,8 @@ def quantize_pissa(
     lora_target: tuple = ("q_proj", "v_proj"),
     save_safetensors: bool = True,
 ):
-    r"""
-    Initializes LoRA weights with Principal Singular values and Singular vectors Adaptation (PiSSA)
+    r"""Initialize LoRA weights with Principal Singular values and Singular vectors Adaptation (PiSSA).

     Usage: python pissa_init.py --model_name_or_path path_to_model --output_dir output_dir
     """
     if isinstance(lora_target, str):
@@ -67,7 +67,7 @@ def quantize_pissa(
     print(f"Adapter weights saved in {pissa_dir}")

     # Save base model
-    base_model: "PreTrainedModel" = peft_model.unload()
+    base_model: PreTrainedModel = peft_model.unload()
     base_model.save_pretrained(output_dir, safe_serialization=save_safetensors)
     tokenizer.save_pretrained(output_dir)
     print(f"Model weights saved in {output_dir}")

scripts/qwen_omni_merge.py (new file, 136 lines)

@@ -0,0 +1,136 @@
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Why we need this script for qwen_omni?
Because the qwen_omni model is constructed by two parts:
1. [Thinker]:[audio_encoder, vision_encoder, LLM backbone], which our repository does support to post-training.
2. [Talker]: [audio_decoder, wave_model], which is not supported to post-training without specific tokenizer.
When we post-training the model, we exactly train the [Thinker] part, and the [Talker] part is dropped.
So, to get the complete model, we need to merge the [Talker] part back to the [Thinker] part.
LoRA mode: [Thinker + LoRA weights] + [Original Talker] -> [Omni model]
Full mode: [Thinker] + [Original Talker] -> [Omni model]
For Processor, we do saved the processor from trained model instead of the original model.
"""
import os
import shutil

import fire
from peft import PeftModel
from transformers import (
    AutoProcessor,
    Qwen2_5OmniForConditionalGeneration,  # type: ignore
    Qwen2_5OmniThinkerForConditionalGeneration,
)
def merge_lora(
    base_model_path: str,
    lora_checkpoint_path: str,
    extra_file: str = "spk_dict.pt",
    submodule_name: str = "thinker",
    save_path: str = "./merged_model_checkpoint",
):
    """Load the original model, merge the LoRA weights into the specified submodule,
    and save the final merged model along with its configurations.

    Args:
        base_model_path (str): Path to the original model directory.
        lora_checkpoint_path (str): Path to the directory containing the LoRA weights.
        extra_file (str): Name of the extra file to be copied (default: "spk_dict.pt").
        submodule_name (str): Name of the submodule to merge (default: "thinker").
        save_path (str): Directory where the merged model and configurations will be saved.
    """
    # 1. Load the original model
    model = Qwen2_5OmniForConditionalGeneration.from_pretrained(base_model_path, torch_dtype="auto", device_map="cpu")
    print("Successfully loaded the original model.")

    # 2. Extract the submodule to be merged (e.g., model.thinker)
    if not hasattr(model, submodule_name):
        raise AttributeError(f"The model does not have a submodule named '{submodule_name}'.")

    base_submodule = getattr(model, submodule_name)
    print(f"Successfully extracted submodule: {submodule_name}.")

    # 3. Load the LoRA weights onto the extracted submodule
    lora_model = PeftModel.from_pretrained(base_submodule, lora_checkpoint_path)
    processor = AutoProcessor.from_pretrained(lora_checkpoint_path)
    print("LoRA weights and processor loaded successfully.")

    # 4. Merge the LoRA weights into the submodule and unload the LoRA modules
    merged_submodule = lora_model.merge_and_unload()
    print("LoRA weights merged successfully.")

    # 5. Replace the original submodule with the merged submodule in the model
    setattr(model, submodule_name, merged_submodule)

    # 6. Save the final merged model along with the tokenizer and processor configuration
    model.save_pretrained(save_path)
    processor.save_pretrained(save_path)
    print(f"Merged model and tokenizer saved to {save_path}.")

    source_file = os.path.join(base_model_path, extra_file)
    target_file = os.path.join(save_path, extra_file)
    if os.path.exists(source_file):
        shutil.copy(source_file, target_file)
        print(f"File '{extra_file}' copied from {base_model_path} to {save_path}.")
    else:
        print(f"File '{extra_file}' not found in {base_model_path}, skipping copy.")
def save_full_model(
    saved_thinker_path: str,
    base_model_path: str,
    save_path: str = "./merged_model_checkpoint",
    extra_file: str = "spk_dict.pt",
):
    """Load the saved thinker module and the original model, replace the thinker in the original model,
    then save the complete model along with its tokenizer and processor configuration.

    Args:
        saved_thinker_path (str): Path to the saved thinker weights.
        base_model_path (str): Directory path of the original model.
        save_path (str): Directory where the merged model and configurations will be saved.
        extra_file (str): Name of the extra file to be copied (default: "spk_dict.pt").
    """
    # 1. Load the saved thinker module and the original model
    thinker = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
        saved_thinker_path, torch_dtype="auto", device_map="cpu"
    )
    base_model = Qwen2_5OmniForConditionalGeneration.from_pretrained(
        base_model_path, torch_dtype="auto", device_map="cpu"
    )
    base_model.thinker = thinker

    # 2. Save the complete model along with its tokenizer and processor configuration
    processor = AutoProcessor.from_pretrained(saved_thinker_path)
    base_model.save_pretrained(save_path)
    processor.save_pretrained(save_path)
    print(f"Merged model and processor saved to {save_path}.")

    # 3. Copy the extra file from the base model directory to the save path
    source_file = os.path.join(base_model_path, extra_file)
    target_file = os.path.join(save_path, extra_file)
    if os.path.exists(source_file):
        shutil.copy(source_file, target_file)
        print(f"File '{extra_file}' copied from {base_model_path} to {save_path}.")
    else:
        print(f"File '{extra_file}' not found in {base_model_path}, skipping copy.")


if __name__ == "__main__":
    fire.Fire({"save_full": save_full_model, "merge_lora": merge_lora})


@@ -1,4 +1,4 @@
-# Copyright 2024 Microsoft Corporation and the LlamaFactory team.
+# Copyright 2025 Microsoft Corporation and the LlamaFactory team.
 #
 # This code is inspired by the Microsoft's DeepSpeed library.
 # https://www.deepspeed.ai/tutorials/flops-profiler/
@@ -29,8 +29,8 @@ def calculate_flops(
     seq_length: int = 512,
     flash_attn: str = "auto",
 ):
-    r"""
-    Calculates the flops of pre-trained models.
+    r"""Calculate the flops of pre-trained models.

     Usage: python cal_flops.py --model_name_or_path path_to_model --batch_size 1 --seq_length 512
     """
     with get_accelerator().device(0):

Some files were not shown because too many files have changed in this diff.