714 Commits

Author SHA1 Message Date
Yaowei Zheng
ca75f1edf3 [model] fix vlm utils (#8388) 2025-06-17 01:08:49 +08:00
Yaowei Zheng
3a3bae1cfe [data] fix qwen2vl pos ids (#8387) 2025-06-17 00:48:54 +08:00
Yaowei Zheng
31874e4f62 [version] release v0.9.3 (#8386) 2025-06-16 19:21:32 +08:00
Yaowei Zheng
9a2d1dec62 [assets] update wechat (#8385) 2025-06-16 18:23:22 +08:00
Aman Gupta
8e4ac78607 [trainer] Add LD-DPO objective (#8362) 2025-06-12 16:10:38 +08:00
Yaowei Zheng
44f1b9b5ad [misc] tiny fixes (#8348) 2025-06-10 15:30:58 +08:00
阿丹(adan)
b41697c9b6 [model] support MiniCPM4 (#8314) 2025-06-10 14:38:39 +08:00
Kingsley
31bca4d172 [model] support Mistral3.1 small 2503 (#8335) 2025-06-09 10:37:42 +08:00
Chenhao Zhang
fa4360dca7 [assets] Add awesome works using LLaMA-Factory (#8333) 2025-06-09 10:21:17 +08:00
Yaowei Zheng
9acab4949d [model] fix model generate (#8327) 2025-06-07 08:47:50 +08:00
Vivek Iyer
32b4574094 [model] pushing FFT with unsloth (#8325)
Co-authored-by: viyer <vivek_iyer2@apple.com>
2025-06-07 08:20:58 +08:00
Yaowei Zheng
03a93ec513 [data] fix empty template (#8312) 2025-06-06 13:50:50 +08:00
Yaowei Zheng
bcb6b94658 [setup] fix uv (#8311) 2025-06-06 11:54:15 +08:00
Yaowei Zheng
c0710be6d7 [assets] update readme (#8303) 2025-06-05 23:23:15 +08:00
Kingsley
212a8006dc [tests] add visual model save test (#8248)
Co-authored-by: Yaowei Zheng <hiyouga@buaa.edu.cn>
2025-06-05 20:38:01 +08:00
Yaowei Zheng
ed70f8d5a2 [assets] fix npu docker (#8298) 2025-06-05 19:09:20 +08:00
Butui Hu
1a33d65a56 [launcher] Add elastic and fault-tolerant training support (#8286)
Signed-off-by: Butui Hu <hot123tea123@gmail.com>
2025-06-05 16:40:03 +08:00
Kingsley
69c9e379d5 [script] add Script description for qwen_omni_merge (#8293) 2025-06-05 13:22:01 +08:00
Yaowei Zheng
e9fe9cee29 [assets] update docker files (#8291) 2025-06-04 23:30:46 +08:00
Yaowei Zheng
cb7ab69783 [assets] update readme (#8288) 2025-06-04 17:46:12 +08:00
Yaowei Zheng
c1ed76e109 [assets] add icon (#8276) 2025-06-03 20:36:21 +08:00
Kingsley
c224d17cb2 [data] support nested images input for videos (#8264) 2025-06-03 20:26:29 +08:00
Ze-Yi LIN
c4e51d40e0 [tracking] swanlab add llamafactory tag (#8258) 2025-06-03 18:42:29 +08:00
Kingsley
554e89ff02 [model] add MIMO_VL (#8249) 2025-06-01 03:54:54 +08:00
Yaowei Zheng
fee2122f09 [deps] upgrade transformers to 4.52.4 (#8245) 2025-05-31 16:51:40 +08:00
Akshat Sehgal
c7e63bead7 [model] add smollm2 support (#8220) 2025-05-31 16:29:01 +08:00
hoshi-hiyouga
3e1a7fcb9c [assets] update readme (#8235) 2025-05-30 16:52:12 +08:00
Kingsley
2aaede8ef4 [scripts] specify model class for qwen_omni merge (#8227) 2025-05-30 14:20:12 +08:00
hoshi-hiyouga
42bebc341d [model] add deepseek 0528 models (#8215) 2025-05-29 21:37:07 +08:00
hoshi-hiyouga
83a9ff5853 [assets] fix docker images (#8203) 2025-05-28 22:26:05 +08:00
yzoaim
519bab86e6 [workflow] auto push docker images (#8181)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-05-28 20:21:15 +08:00
hoshi-hiyouga
dbc9f5a5d9 [assets] update Dockerfile (#8201) 2025-05-28 20:20:59 +08:00
hoshi-hiyouga
9b152d9cb5 [webui] fix skip args (#8195) 2025-05-28 18:11:07 +08:00
Youngwoo Kim
6c3cd400b5 [data] Reading files from cloud is broken (#8182) (#8183) 2025-05-28 15:50:44 +08:00
hoshi-hiyouga
4d3ffa2ec4 [assets] fix docker image (#8180) 2025-05-27 19:01:31 +08:00
hoshi-hiyouga
2bf8e993ab [data] fix shared file system (#8179) 2025-05-27 18:36:03 +08:00
hoshi-hiyouga
d4a413eb37 [webui] add extra args to export (#8178) 2025-05-27 18:25:31 +08:00
hoshi-hiyouga
00974a3169 [assets] update docker files (#8176) 2025-05-27 18:15:23 +08:00
hoshi-hiyouga
46ccf84aaa [webui] add infer extra args (#8167) 2025-05-27 12:04:00 +08:00
hoshi-hiyouga
07343ca83d [webui] fix input args (#8162) 2025-05-27 02:05:54 +08:00
hoshi-hiyouga
3c7dc66a92 [model] add smollm2 and medgemma (#8161) 2025-05-26 23:19:58 +08:00
hoshi-hiyouga
ba032828e2 [deps] upgrade transformers (#8159) 2025-05-26 22:03:58 +08:00
Akshat Sehgal
501e7d8a8f feat: add smollm support (#8050) 2025-05-26 19:47:54 +08:00
wangzhan
12292e4283 [api] support repetition_penalty and align presence_penalty with OpenAI Client (#7958) 2025-05-26 18:45:11 +08:00
Kingsley
f08b748199 [data] fix internvl plugin when using PIL images (#8129) 2025-05-22 01:32:59 +08:00
hoshi-hiyouga
d2a3036a23 [misc] update data readme (#8128) 2025-05-21 22:41:18 +08:00
hoshi-hiyouga
9ae17cd173 [deps] update to transformers 4.52 (#8125) 2025-05-21 05:16:18 +08:00
hoshi-hiyouga
56926d76f9 [data] llama3 multi tool support (#8124) 2025-05-21 02:01:12 +08:00
hoshi-hiyouga
c2f6f2fa77 [assets] update readme (#8110) 2025-05-20 02:44:18 +08:00
hoshi-hiyouga
9b5baa97f0 [data] qwen3 fixes (#8109) 2025-05-20 02:00:30 +08:00
hoshi-hiyouga
45030ff803 [model] switch to gptqmodel (#8108) 2025-05-19 22:25:40 +08:00
piamo
bc7f00f2c7 [model] update rope kwargs for yarn (#8101) 2025-05-19 20:07:54 +08:00
hoshi-hiyouga
beae231af6 [doc] add no build isolation (#8103) 2025-05-19 19:25:13 +08:00
Ma, Xiaochen
a0b4b91577 [trainer] fix KeyError at end of pretrain (#8099) 2025-05-19 18:01:26 +08:00
Biao Wang
90492f3582 [misc] fix cli (#8095)
Co-authored-by: wangbiao11 <wangbiao11@baidu.com>
2025-05-19 17:59:39 +08:00
Saiya
ab41f7956c [infer] support lora adapter for SGLang backend (#8067) 2025-05-16 23:33:47 +08:00
Kingsley
52b23f9e56 [data] add forward compatibility for video_utils in Transformers 4.52.0 (#8077) 2025-05-16 17:41:04 +08:00
Eric Tang
a9aa392ba4 [data] support loading folder from remote (#8078) 2025-05-16 15:35:38 +08:00
Shawn Tao
0b773234e5 [infer] Modify vllm_infer.py to batch preprocess to avoid the too-many-open-files error (#8051)
Co-authored-by: Kingsley <82590017+Kuangdd01@users.noreply.github.com>
2025-05-15 10:54:35 +08:00
hoshi-hiyouga
712c57f3b4 [assets] update windows installation (#8042) 2025-05-13 17:01:56 +08:00
hoshi-hiyouga
dc080399c6 [model] add seed coder and qwen3 quant models (#8039) 2025-05-13 15:59:55 +08:00
hoshi-hiyouga
68fc068cab [data] fix kimi vl template (#8015) 2025-05-11 20:45:19 +08:00
Kingsley
9620825892 [scripts] add video params for vllm infer (#7992) 2025-05-09 21:16:52 +08:00
yunhao-tech
26cbb03a5f [data] Avoid repetitive tool description wrap (#8000)
Co-authored-by: chenyunhao <chenyunhao@wps.cn>
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-05-09 21:16:37 +08:00
tpoisonooo
5f4b793e04 [docs] add GraphGen (#7974) 2025-05-07 12:23:11 +02:00
hoshi-hiyouga
994ab6424a [misc] update liger kernel patch (#7966) 2025-05-06 20:32:16 +02:00
hoshi-hiyouga
aa9ed4db59 [example] update examples (#7964) 2025-05-06 17:24:25 +02:00
Kingsley
ef86a53063 [model] add mimo7b (#7946) 2025-05-06 17:10:30 +02:00
hoshi-hiyouga
bf0286e1e3 [misc] fix qwen2 omni (#7962) 2025-05-06 15:39:13 +02:00
hoshi-hiyouga
ce7032e1b3 [model] add qwen2 omni 3b (#7945) 2025-05-03 16:36:51 +08:00
Eric Chen
5763017cea [assets] Warp Support README Update (#7887) 2025-05-02 00:08:48 +08:00
hoshi-hiyouga
13b05e74f1 [hparam] add enable think argument (#7928) 2025-04-30 17:21:30 +08:00
hoshi-hiyouga
c566e39b7d [data] fix base plugin (#7924) 2025-04-30 16:28:05 +08:00
hoshi-hiyouga
052ca871bd [data] optimize qwen3 loss computation (#7923) 2025-04-30 16:18:00 +08:00
hoshi-hiyouga
73198a6645 [misc] fix uv (#7913) 2025-04-30 07:45:03 +08:00
hoshi-hiyouga
d4ee44bdef [data] add eval_on_each_dataset arg (#7912) 2025-04-30 06:56:43 +08:00
hoshi-hiyouga
6d2cde43e7 [data] replace eos token for base models (#7911) 2025-04-30 06:52:28 +08:00
hoshi-hiyouga
11295cdea0 [data] improve mm plugin (#7910) 2025-04-30 06:34:28 +08:00
hoshi-hiyouga
98f23c6584 [model] add qwen3 (#7885) 2025-04-29 09:34:05 +08:00
Kingsley
db9559456c [data] fix qwen2.5 omni template (#7883) 2025-04-29 00:58:23 +08:00
hoshi-hiyouga
3ae5da2a04 [model] fix dsv3 leaf node (#7879) 2025-04-28 18:11:09 +08:00
hoshi-hiyouga
d173cb50f5 [data] fix qwen2 omni plugin (#7875) 2025-04-28 14:22:41 +08:00
zhaop-l
df27d7e48a [trainer] make projector trainable in freeze training (#7872)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-28 13:19:37 +08:00
hoshi-hiyouga
bb5b83352b [data] fix minicpmo vllm infer (#7870) 2025-04-28 01:59:53 +08:00
Kingsley
1157f4e246 fix attn patch for kimivl (#7867) 2025-04-27 23:12:28 +08:00
Eric Tang
ef03832cd4 [ray] add storage filesystem to ray config (#7854) 2025-04-27 22:12:40 +08:00
hoshi-hiyouga
2233b739fa [model] fix vit gradient checkpointing (#7830) 2025-04-23 22:48:48 +08:00
hoshi-hiyouga
091d2539e8 Merge commit from fork 2025-04-23 16:38:27 +08:00
hoshi-hiyouga
c1a7f2ebb2 [model] fix moe zero3 (#7826) 2025-04-23 15:30:49 +08:00
Kingsley
fa0eb91f1f [data] fix internvl plugin (#7817) 2025-04-23 00:58:22 +08:00
hoshi-hiyouga
49f9ed0232 [assets] update model readme (#7804) 2025-04-22 16:43:56 +08:00
Kingsley
2a564c25d1 [model] add arch check for InternVL (#7803) 2025-04-22 16:38:05 +08:00
Kingsley
7500e761d3 [misc] update internvl constants (#7801) 2025-04-22 15:53:08 +08:00
hoshi-hiyouga
fddcd43c88 [trainer] support early stop (#7797) 2025-04-22 01:59:33 +08:00
hoshi-hiyouga
0e4ce039ee [data] improve mmplugin (#7795) 2025-04-22 01:25:33 +08:00
hoshi-hiyouga
b07628dea5 [example] add bash usage (#7794) 2025-04-22 00:25:51 +08:00
Juanxi Tian
12ada72ed4 [trainer] Add Muon Optimizer (#7749)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:38:37 +08:00
hoshi-hiyouga
416853dd25 [parser] support omegaconf (#7793) 2025-04-21 23:30:30 +08:00
Changrui Chen
bd7bc31c79 [data] Fix wrong position ids with packed attention masks (#7754)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:19:36 +08:00
flashJd
0ac641326b [misc] fix new tokens adding (#7253)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:19:02 +08:00
ddddng
c5ba9106ec [model] fix gemma3 export (#7786)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:07:11 +08:00
Sachin Beldona
3b2d3794a5 [misc] fix bug in constant (#7765)
Co-authored-by: Sachin Beldona <sbeldona@cs.cmu.edu>
2025-04-21 23:06:31 +08:00
hoshi-hiyouga
b605c20768 [assets] update wechat (#7792) 2025-04-21 21:29:42 +08:00
hoshi-hiyouga
39169986ef [trainer] fix pt loss (#7748)
* fix pt loss

* robust

* fix

* test
2025-04-17 03:15:35 +08:00
hoshi-hiyouga
86ebb219d6 [breaking] bump transformers to 4.45.0 & improve ci (#7746)
* update ci

* fix

* fix

* fix

* fix

* fix
2025-04-17 02:36:48 +08:00
hoshi-hiyouga
d222f63cb7 [infer] set env for vllm ascend (#7745) 2025-04-17 01:08:55 +08:00
Kingsley
2e518f255f [model] support intern-VL 2.5-3 series (#7258)
* add internvl and rebase

* fix for internvl2&3

* remove lines

* fix video_inputs & lint

* nit

* add constants

* remove lines

* fix

* fix error

* pass ci

* pass ci

* skip internvl & nit
2025-04-17 00:31:30 +08:00
ENg-122
8f88a4e6a4 [misc] improve entrypoint (#7345)
* purely refactor the entry code, since there were too many if-else branches

* Update cli.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-16 21:48:23 +08:00
leo-pony
b9263ff5ac [infer] support vllm-ascend (#7739) 2025-04-16 20:06:47 +08:00
hoshi-hiyouga
ee2ab093a7 [api] fix chat messages (#7732) 2025-04-15 16:39:08 +08:00
hoshi-hiyouga
3df021d4d7 [deps] upgrade vllm (#7728) 2025-04-15 14:57:40 +08:00
Joe Schoonover
e252abf051 [docker] patch docker-rocm (#7725)
* Update Dockerfile

* Fix typo

* Fix syntax for /bin/sh conditional

* Add build args to docker-compose

* Change shell to /bin/bash

This is required for "==" syntax in conditional string comparison
2025-04-15 13:36:39 +08:00
hoshi-hiyouga
1134baeedd [assets] update model readme (#7724) 2025-04-15 00:41:09 +08:00
Kingsley
2101399c94 [model] Support Kimi_VL thinking/instruct (#7719)
* add kimi_vl

* patch config

* check version

* Update mm_plugin.py

* Update mm_plugin.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-15 00:21:58 +08:00
hoshi-hiyouga
3f91a95250 [misc] fix env vars (#7715) 2025-04-14 16:04:04 +08:00
hoshi-hiyouga
7c61b35106 [misc] upgrade cli (#7714) 2025-04-14 15:41:22 +08:00
hoshi-hiyouga
f518bfba5b [deps] upgrade transformers (#7704) 2025-04-13 18:11:34 +08:00
Yuxuan Zhang
8162f94db5 [model] add GLM-4-0414 (#7695)
* Update README_zh.md

* update
2025-04-13 17:10:45 +08:00
hoshi-hiyouga
1f0c52b73c [deps] fix uv conflicts (#7686)
* fix #7678

* Update setup.py

* Update tests.yml

* Update publish.yml

* Update Makefile
2025-04-11 18:02:24 +08:00
Eric Tang
a8caf09c7f [data] support for specifying a dataset in cloud storage (#7567)
* add support for loading datasets from s3/gcs

* add comments to readme

* run linter and address comments

* add option to pass in kwargs to ray init (i.e. runtime env)

* address comment

* revert mixed up changes
2025-04-10 11:31:35 +08:00
Eric Tang
bb8d79bae2 [ray] allow for specifying ray.init kwargs (i.e. runtime_env) (#7647)
* ray init kwargs

* Update trainer_utils.py

* fix ray args

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-10 11:31:05 +08:00
Dain Kim
1c436c9f25 [bugfix] enable_gemma_liger_kernel (#7660)
- The `enable_liger_kernel` function for the Gemma model series was not executed due to the existing `if` statement in the code.
- Changed the line to an `elif` statement so that the `apply_liger_kernel` function is executed properly.

resolved: #7628
2025-04-10 11:27:30 +08:00
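
The entry above describes the Gemma liger-kernel fix only in prose. Below is a minimal, hypothetical sketch (not the repository's actual liger_kernel.py) of the model-type dispatch it refers to, assuming the liger-kernel package's `apply_liger_kernel_to_*` helpers: keeping the branches in a single if/elif chain ensures exactly one patch function runs per model type, which is the property the fix restores.

```python
# Minimal sketch, assuming liger-kernel's apply_liger_kernel_to_* helpers;
# illustrative only and not LLaMA-Factory's actual code.
from liger_kernel.transformers import (
    apply_liger_kernel_to_gemma,
    apply_liger_kernel_to_gemma2,
    apply_liger_kernel_to_llama,
)


def enable_liger_kernel(model_type: str) -> None:
    # A single if/elif chain guarantees exactly one branch runs per model type;
    # the bug described above came from a branch falling outside this chain.
    if model_type == "llama":
        apply_liger_kernel_to_llama()
    elif model_type == "gemma":
        apply_liger_kernel_to_gemma()
    elif model_type == "gemma2":
        apply_liger_kernel_to_gemma2()
    else:
        raise ValueError(f"Liger kernel is not supported for model type: {model_type}")
```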
jilongW
1b0934bccb [misc] fix cuda warn on intel GPU (#7655) 2025-04-09 21:37:54 +08:00
hoshi-hiyouga
4eec541857 [data] add coig-p dataset (#7657) 2025-04-09 21:18:25 +08:00
hoshi-hiyouga
89a4f9ec7f [assets] update readme (#7654) 2025-04-09 18:27:38 +08:00
hoshi-hiyouga
1abd71b551 [assets] update readme (#7644) 2025-04-09 01:06:06 +08:00
Kingsley
349c56c51c [data] Fix bugs of use_audio_in_video in Qwen2.5 Omni (#7638)
* cache _mm_inputs

* nit

* support for use_audio_in_video

* remove cache

* fix data

* Update mllm_video_audio_demo.json
2025-04-08 18:40:10 +08:00
Shawn Tao
acb09fa3a3 [trainer] fix key error (#7635) 2025-04-08 18:39:50 +08:00
Adarsh Shirawalmath
f75b91077b [sglang] support transformers 4.51.0 (#7639) 2025-04-08 18:39:23 +08:00
hoshi-hiyouga
c3c0efbaa0 [misc] fix packing and eval plot (#7623) 2025-04-07 18:20:57 +08:00
hoshi-hiyouga
5115dc8c7f [assets] update readme (#7612) 2025-04-06 13:58:49 +08:00
hoshi-hiyouga
831e7f1cfd [model] add llama4 (#7611) 2025-04-06 13:42:31 +08:00
Kingsley
d4cfa9507e [data] fix qwen2.5 omni plugin (#7578)
* specific entry

* Update mm_plugin.py

* fix fps cal

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-02 23:58:39 +08:00
Kingsley
d32c6c014d [data] fix qwen2.5 omni plugin (#7573)
* align key with qwen2vl

* nit && change scripts
2025-04-02 21:28:52 +08:00
gechengze
7b9deb9410 [trainer] fix batch processing in PPO trainer (#7576) 2025-04-02 21:17:48 +08:00
hoshi-hiyouga
5e22597ff1 [infer] vllm video/audio inference (#7566) 2025-04-02 02:27:04 +08:00
hoshi-hiyouga
2bfcad2394 [model] fix kv cache (#7564) 2025-04-01 23:07:46 +08:00
Yu Shi Jie
a13b1bb49a [model] fix use_cache patching for gemma3 multimodal (#7500) 2025-04-01 16:06:48 +08:00
Ritesh Goru
d10467d178 [data] specify position_ids in PackedSupervisedDatasetProcessor for neat_packing (#7318)
* use position_ids for neat_packing with fa2

* revert fa2 changes
2025-04-01 16:03:13 +08:00
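
The neat-packing entry above only names the change; as a rough illustration, here is a minimal sketch of what "specify position_ids" usually means for packed training rows. The helper below is hypothetical and not the repository's `PackedSupervisedDatasetProcessor`.

```python
# Minimal sketch (hypothetical helper): position ids restart at 0 for every
# example packed into a single row, so attention kernels such as
# FlashAttention-2 can treat the concatenated examples as independent sequences.
from typing import List


def packed_position_ids(segment_lengths: List[int]) -> List[int]:
    position_ids: List[int] = []
    for length in segment_lengths:
        position_ids.extend(range(length))
    return position_ids


# Packing a 3-token and a 4-token example into one row:
assert packed_position_ids([3, 4]) == [0, 1, 2, 0, 1, 2, 3]
```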
taoharry
aac70663fd [webui] fix launch with proxy (#7332) 2025-04-01 15:52:56 +08:00
Billy Cao
00409ff28a [data] shard the dataset to allow multiprocessing when streaming is enabled (#7530)
* Shard the dataset when streaming to allow multiprocessing

* Allow user to not set dataset_shards to ensure backward compatibility
2025-04-01 15:36:23 +08:00
Hao
d70b3b4bc5 [trainer] new kto mismatch pair creation strategy (#7509) 2025-04-01 15:21:53 +08:00
hoshi-hiyouga
e76eba051d [data] fix qwen2.5 omni collator (#7553) 2025-04-01 00:15:12 +08:00
Kingsley
7eed496336 [model] add Qwen2.5-Omni model (#7537)
* preserve image_sizes

* preserve image_sizes

* init plugin

* support audio-text2text lora

* nit

* support image/video-text2text, audio-text2text

* remove args

* remove lines

* add docs && nit

* remove some comments

* fix && add merge part script

* add license
2025-03-31 20:39:35 +08:00
hoshi-hiyouga
0f8296626a [deps] pin pydantic to 2.10.6 (#7546) 2025-03-31 14:42:28 +08:00
Kingsley
8da1d2fa71 [data] fix pixtral plugin (#7505)
* preserve `image_sizes`

* add comments
2025-03-27 17:06:40 +08:00
Xu-pixel
b578a7d5b6 [3rdparty] support swanlab lark notification (#7481) 2025-03-27 01:52:01 +08:00
Kdump
24afceddb7 [trainer] fix wsd scheduler (#7304)
* [trainer] Warmup_stable_decay supports setting the number of stable and decay steps according to the warmup_ratio

* Update trainer_utils.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-26 15:25:02 +08:00
hoshi-hiyouga
0583d06676 [model] add qwen2vl 32b & upgrade peft (#7469)
* add qwen2vl 32b

* fix ci

* upgrade peft to 0.15

* fix ci

* fix ci
2025-03-25 12:15:58 +08:00
GuoCoder
ec6a261568 [model] fix lora on quant models (#7456)
Co-authored-by: root <root@ai>
2025-03-25 11:59:46 +08:00
Xiaosu Zhu
6b3b97c738 [misc] update liger-kernel's monkey patch (#7453)
* Update liger_kernel.py

* Update setup.py
2025-03-25 11:58:52 +08:00
AbdelKarim ELJANDOUBI
6d3748f727 [misc] enable liger kernel for gemma3 text and paligemma (#7466)
* add gemma3 text

* add paligemma (1,2 and 2 mix)
2025-03-25 09:27:43 +08:00
Kenny Lam
7c890170e3 [misc] enable liger kernel for gemma3 (#7462) 2025-03-24 19:09:59 +08:00
hoshi-hiyouga
ca42c0c406 [assets] fix gemma3 readme (#7449) 2025-03-24 10:31:25 +08:00
hoshi-hiyouga
7203365b80 [trainer] fix vlm loss for transformers 4.49 (#7448) 2025-03-24 10:24:05 +08:00
rumichi
3612946dd9 [docker] upgrade to torch 2.6 (#7442) 2025-03-23 21:18:08 +08:00
hoshi-hiyouga
3aa4f32e9c [misc] fix ci (#7441)
* fix ci

* improve ci
2025-03-23 21:09:35 +08:00
hoshi-hiyouga
304796b803 [misc] fix license (#7440) 2025-03-23 19:31:56 +08:00
SnowFox4004
7cfd6e4bb0 [scripts] support compute score on vllm's predictions (#7419)
* enable manual bleu&rouge eval by adding `scripts/eval_bleu_rouge.py`

* added libraries check

* update: use the datasets library's multiprocessing to speed up processing

* update:
- use fire.Fire
- reformat the code

* Update eval_bleu_rouge.py: correctly uses fire

Deleted the code that used sys.argv

* Update eval_bleu_rouge.py

---------

Co-authored-by: SnowFox4004 <manba@out>
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-23 19:21:01 +08:00
hoshi-hiyouga
05b19d6952 [deps] upgrade transformers to 4.50.0 (#7437)
* upgrade transformers

* fix hf cache

* fix dpo trainer
2025-03-23 17:44:27 +08:00
hoshi-hiyouga
919415dba9 [deps] upgrade vllm to 0.8 (#7436) 2025-03-23 14:32:22 +08:00
Guo, Quan
a959c2a509 [misc] fix sglang deps (#7432)
* feat: Add transformer version requirement for sglang

* feat: add srt to sglang which is required for running sglang

Other options are srt_hip, srt_xpu, srt_npu, srt_hpu, srt_cpu, for different computation architectures.
2025-03-23 14:07:10 +08:00
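
The entry above explains which sglang extra is needed; as a rough illustration, a setup.py extras table might express this as follows (a hypothetical snippet, not the project's actual setup.py).

```python
# Hypothetical extras_require snippet: installing the project's "sglang" extra
# pulls in sglang together with its "srt" runtime; srt_hip, srt_xpu, srt_npu,
# srt_hpu and srt_cpu would be the equivalents for other architectures.
extras_require = {
    "sglang": ["sglang[srt]"],
}
```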
Eric Tang
db0a08db6f [3rdparty] fix redundant process group destroy for ray (#7395)
* fix redundant process group destroy for ray

* Update tuner.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-21 10:56:47 +08:00
hoshi-hiyouga
a306f0f5a2 [version] fix minicpmo (#7378) 2025-03-20 16:59:31 +08:00
hoshi-hiyouga
63752fccf7 [assets] update wechat (#7361) 2025-03-18 21:31:09 +08:00
hoshi-hiyouga
1f9773395b [misc] set dev version (#7351) 2025-03-18 00:10:53 +08:00
hoshi-hiyouga
128b5b12b3 [data] fix template (#7349) 2025-03-17 23:45:20 +08:00
hoshi-hiyouga
d5915a7dd7 [assets] update videos (#7340)
* Update README.md

* Update README_zh.md
2025-03-17 15:48:02 +08:00
Hertz
ec1154662b [model] support hunyuan 7b (#7317)
* [Model] supported tencent-hunyuan model

* [Model] supported tencent-hunyuan model (fix)

* [Model] supported tencent-hunyuan model (fix)
2025-03-15 20:55:24 +08:00
Qiaolin Yu
a44a53ebec [inference] support sglang backend (#7278)
* Mimic SGLang offline Engine

* Add more tests and args

* Pass all current tests

* Clean Code

* fix sample_params

* clean code

* Fix Stream Chat

* change sglang from engine mode to server mode

* fix

* Fix Review Issues

* Use SGLang Built-In Utilities

* Fix test SGLang

* Some Doc Issue

* fix sglang engine

* add readme

---------

Co-authored-by: Jin Pan <jpan236@wisc.edu>
Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
2025-03-15 04:37:58 +08:00
hoshi-hiyouga
93e6184cbe [data] gemma3 plugin pan and scan (#7294)
* gemma3 pan and scan

* add test case

* fix test
2025-03-13 23:29:23 +08:00
hoshi-hiyouga
0be0d7796a [assets] update video (#7287) 2025-03-13 18:45:47 +08:00
Ritesh Goru
480369a9f2 [data] efficient 4d_attention_mask creation in neat_packing (#7272) 2025-03-13 03:31:12 +08:00
hoshi-hiyouga
650a9a9057 [misc] update format (#7277) 2025-03-13 02:53:08 +08:00
hoshi-hiyouga
4b9d8da5a4 [model] support gemma3 (#7273) 2025-03-13 01:35:23 +08:00
hoshi-hiyouga
e6159ad730 [misc] upgrade deps (#7257) 2025-03-12 00:33:47 +08:00
hoshi-hiyouga
264538cb26 [misc] upgrade format to py39 (#7256) 2025-03-12 00:08:41 +08:00
hoshi-hiyouga
5995800bce [ci] update workflow (#7255) 2025-03-11 22:57:49 +08:00
hoshi-hiyouga
bf8b483186 [core] release v0.9.2 (#7254) 2025-03-11 22:42:23 +08:00
hoshi-hiyouga
e2299e261b Merge pull request #7242 from hiyouga/hiyouga/release
[release] release v0.9.2

Former-commit-id: 6b25268990bf225d84e29d4067595cf720fa12d8
2025-03-11 15:28:45 +08:00
hoshi-hiyouga
8a44dce326 Merge pull request #7247 from hiyouga/hiyouga/commit
[misc] support print commit info

Former-commit-id: 0f7ec4f8529a5d7ea2153b881335821038307bb7
2025-03-11 15:28:04 +08:00
hoshi-hiyouga
6d9233833b Merge pull request #7244 from hiyouga/hiyouga/token
[data] avoid exit after saving preprocessed data

Former-commit-id: dcbf01b0035062fa14187e5bdbb925080d349501
2025-03-11 15:17:15 +08:00
hiyouga
d019603835 support commit info
Former-commit-id: a7d89a6dc10579deaf9f45825cc18405a27cade6
2025-03-11 15:13:59 +08:00
hiyouga
478e8194d9 remove exit in preprocess
Former-commit-id: f369b6ef41ffd9586ba568b88c5ff32a1af4bace
2025-03-11 15:08:25 +08:00
hiyouga
1890d3dafe release v0.9.2
Former-commit-id: e7ed1782d4a006400de6fc0f864abd01f7fadeea
2025-03-11 14:49:13 +08:00
hoshi-hiyouga
522a3e8493 [infer] fix vllm args (#7235)
Former-commit-id: 999be5b4512890b8cf4f45874a77e35cf35626f5
2025-03-11 01:15:35 +08:00
Ze-Yi LIN
18968405d0 [tracking] add swanlab_logdir param (#7219)
* feat: add swanlab_logdir param

* fix

Former-commit-id: 9215ad488b6ac6cd57fe8fa4acdacceb63f68ca5
2025-03-11 00:53:07 +08:00
hoshi-hiyouga
71a1c1321a [config] update args (#7231)
Former-commit-id: f71a901840811bf560df671ec63a146ff99140c6
2025-03-10 23:04:43 +08:00
hoshi-hiyouga
cf58a6d860 [config] fix export max len (#7230)
Former-commit-id: 211c0b3e8f3340acd2fae1762d9152a09f19ba34
2025-03-10 16:46:08 +08:00
hoshi-hiyouga
9adc0a2c3f [assets] update readme (#7209)
Former-commit-id: d1631b38dad9ba3d41aebbb00e3500eb79b9e8e9
2025-03-07 17:27:49 +08:00
hoshi-hiyouga
16419b2834 [data] fix loader (#7207)
* fix dataloader

* add test case

* fix type

* fix ci

* fix ci

* fix ci

* disable overwrite cache in ci

Former-commit-id: e84af0e140b1aafd1a6d6fe185a8e41c8fc5f831
2025-03-07 17:20:46 +08:00
hoshi-hiyouga
82a2bac866 [misc] fix ds config (#7205)
Former-commit-id: b478fa1d9de1858075769f86f57126fde92db813
2025-03-07 15:21:28 +08:00
ZhangChuanhui
151ef48b40 [data] fix function formatter (#7201)
Co-authored-by: zhangchuanhui <zhangchal@digitalchina.com>
Former-commit-id: 3efb32b986170d2839e526640f85ba230715879a
2025-03-07 15:17:23 +08:00
hoshi-hiyouga
a255c3a476 [misc] fix cli (#7204)
Former-commit-id: 999f57133ca163c7108d2d5ee8194eca9b2109b4
2025-03-07 15:01:18 +08:00
hoshi-hiyouga
f4ec4fa6ad [script] fix vllm version (#7193)
Former-commit-id: ababdde597b2b9bf0ab3f30f036bc8d97de07f03
2025-03-06 17:14:17 +08:00
hoshi-hiyouga
2635794727 [webui] support escape html (#7190)
Former-commit-id: cf9840374f171359c828b0d6f7a2aa9893c8f701
2025-03-06 16:52:21 +08:00
hoshi-hiyouga
d2f845d70d [deps] upgrade vllm (#7183)
Former-commit-id: 37678a3d64668c3b4a4bfefc054e3b9b40427c1a
2025-03-06 15:25:08 +08:00
hoshi-hiyouga
bb8aba5abf [data] fix mm template (#7181)
Former-commit-id: 648616d473c81d393592806307e3e25b159cb278
2025-03-06 15:18:32 +08:00
hoshi-hiyouga
9f16c50155 [model] add QwQ 32b (#7179)
Former-commit-id: 8897e48b8cd55407812453ddd4ff98ac7bdc4e91
2025-03-06 11:58:36 +08:00
Ze-Yi LIN
25bb9f5ad9 [trainer] fix swanlab callback (#7176)
Former-commit-id: 6d9acf4bd30db24499118aee16bd19cb19ba9e3d
2025-03-06 00:33:37 +08:00
hoshi-hiyouga
7b985f55db [trainer] update config (#7174)
Former-commit-id: 9f535d0e3c4ee3cd0f1b65218c2eee5d03f43c6f
2025-03-05 23:32:54 +08:00
sirui.li
fd0357a26d [data] fix qwen2audio plugin (#7166)
* Update pairwise.py

[data] Repair multimodal model dpo training

* Update pairwise.py

[data] repair multimodal model dpo training using deepcopy

* Update pairwise.py

* Update mm_plugin.py

Former-commit-id: 86763dfdb8e9e5668c1ddd7e924e4be76bf78368
2025-03-05 18:03:36 +08:00
hoshi-hiyouga
31f9daa362 [data] use bicubic resampler (#7143)
Former-commit-id: c708f19ab0ab57526134952afddaa90aae8decbf
2025-03-04 00:17:06 +08:00
hoshi-hiyouga
15ea576246 [webui] fix webui (#7142)
Former-commit-id: d07281f8a45ad8a38d390181d01dcadbcf9aa1b9
2025-03-04 00:01:49 +08:00
rabbit
19a6916d80 [data] bailing template (#7117)
* add bailing template

* add bailing template

* add bailing template

---------

Co-authored-by: chengshiwen.csw@antgroup.com <chengshiwen.csw@antgroup.com>
Former-commit-id: 4a36f5e0abb5a63f4b3b81560bb1ad0e6832d379
2025-03-03 15:33:22 +08:00
hoshi-hiyouga
585c475f71 [inference] fix hf_engine (#7120)
Former-commit-id: f8cf5319cb5d6e06a1b0d8b8db2b678627f2271e
2025-03-01 05:22:49 +08:00
hoshi-hiyouga
e62dae37fe [assets] update wechat (#7106)
Former-commit-id: 0ea430060994631e9fdb18fbbca0dd565a04fd66
2025-02-28 12:01:04 +08:00
Ze-Yi LIN
11672f760d [webui] display swanlab exp link (#7089)
* webui add swanlab link

* change callback name

* update

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 27a4b93871c63b839c92940766bd7e0177972c9b
2025-02-27 19:40:54 +08:00
leo-pony
b9f84900ee [npu] update cann base image and torch 2.4 (#7061)
* Update base npu container image version: the Python version required by Hugging Face Transformers is >= Python 3.10

* Fix the bug: the arg type of INSTALL_DEEPSPEED should be a string now.

* Update Ascend CANN, CANN-Kernel and corresponding torch and torch-npu version

* Upgrading torch-npu requires the package versions torch==2.1.0 and torch-npu==2.4.0.post2

Former-commit-id: d6dafada58412b0c801e576ef4d8d96203f792af
2025-02-25 23:32:01 +08:00
hoshi-hiyouga
5f65558088 [misc] fix project toml (#7067)
Former-commit-id: 28a668ff4e0beebfe5387362f5518c1d9343666f
2025-02-25 23:22:48 +08:00
JieShen
0f54a78144 [script] add seed args (#7058)
* add seed args

* add seed args

* update seed

Former-commit-id: eb9770b2c01a840b6a0ac119210c22bdbb81e18b
2025-02-25 19:44:57 +08:00
Kingsley
2986bef530 [model] add paligemma2-mix series (#7060)
Former-commit-id: 0c0196306d343242ee5e6f22c55562f9a74aa782
2025-02-25 18:51:16 +08:00
hoshi-hiyouga
065f7fb5da [data] fix mllama (#7053)
* fix mllama

* fix test

Former-commit-id: f5af20a63f3d59a6a68d323a7c6f68e551edb3a3
2025-02-24 22:05:38 +08:00
hoshi-hiyouga
c1d5073bd3 [model] add models (#7054)
* add qwen25vl awq models

* add moonlight

Former-commit-id: ae3be2970fea8a35907202a313ab767381c44916
2025-02-24 22:05:13 +08:00
hoshi-hiyouga
ee46011b34 [assets] update readme (#7051)
Former-commit-id: c89a39bfc6a3f0aaa376cd1b221320f466aba617
2025-02-24 20:45:06 +08:00
hoshi-hiyouga
d55f420206 [assets] update wechat (#7019)
Former-commit-id: 3d102fe7e0bfc23db7d75f90ebaf53216c54cc85
2025-02-20 20:32:33 +08:00
Zhangchi Feng
fcf75633a0 [data] fix MiniCPMV plugin (#6998)
* fix template

* fix bug in messages processing

Former-commit-id: f98b828f53968fb9c72bff9e45510ad5586c4fab
2025-02-19 19:36:04 +08:00
hoshi-hiyouga
e77ced045d [webui] update css (#6985)
Former-commit-id: 760a1dfb8193de418d7aa1063c0d111a3a64ae0f
2025-02-18 18:27:57 +08:00
hoshi-hiyouga
331f53381f [data] add r1 distill dataset (#6983)
Former-commit-id: 1da5ee4edaa3896593b9cae488f0ac5917c3243e
2025-02-18 17:25:09 +08:00
hoshi-hiyouga
1d675a287d [version] support transformers 449 (#6982)
* support transformers 449

* fix mm plugin

Former-commit-id: e9118a9df0839d24f6ddff5a0b55ef101a1d3d22
2025-02-18 17:05:40 +08:00
hoshi-hiyouga
be33ef67fb [misc] fix script (#6977)
Former-commit-id: 775efa1d8cbdb1b7d122be2a986d47f85214e0a1
2025-02-18 17:00:46 +08:00
hoshi-hiyouga
f5cd17881e [data] update vlm args (#6976)
Former-commit-id: c28e710636a0286d4b8a1d494529b25168a8f3ab
2025-02-18 02:12:51 +08:00
hoshi-hiyouga
c09b648934 [data] add min resolution option (#6975)
Former-commit-id: 76bd9a98a2fb00f1a1d881e6e1364c02fd36d327
2025-02-18 01:40:46 +08:00
hoshi-hiyouga
f2fd9d1b25 [data] fix predict dataset (#6972)
Former-commit-id: f9a82e527877b1ed47cabb3d34f4d155705f4048
2025-02-17 20:29:40 +08:00
Zhangchi Feng
167342af8a [data] fix minicpmo template (#6946)
Former-commit-id: 09e4438b58d5c1a5fdde37ff781c3d79461c4743
2025-02-15 00:37:41 +08:00
Eric Tang
76f9bd1820 [ray] specify ray storage path (#6920)
Former-commit-id: 4be6b66b1eaa79955e936ce2b747a8837ecd1e49
2025-02-14 21:55:41 +08:00
hoshi-hiyouga
a893505924 [misc] fix lora regex (#6944)
* fix lora regex

* fix

Former-commit-id: 1d0ecbaee1b72f1e03154ddd4fcc8b7876e01f89
2025-02-14 21:38:43 +08:00
hoshi-hiyouga
ed25e051a9 [misc] fix grad ckpt (#6931)
Former-commit-id: deae1fc9a0bea5c8b8be1564cf9c81c9c02a0b3a
2025-02-13 23:27:51 +08:00
hoshi-hiyouga
5e5fc337f9 [model] add liger kernel to qwen2_5 vl (#6930)
* add liger kernel to qwen2_5 vl

* fix patch

* fix patch

Former-commit-id: 828776d155986166498dfc907194f64436571106
2025-02-13 23:05:54 +08:00
Billy Cao
58e9ca8aa0 [trainer] fix gen_kwarg to eval during training (#5451)
* Correctly pass gen_kwarg to eval during model runs

* fix

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 845d16122496311e08263610a6a922f82604de7b
2025-02-13 02:35:06 +08:00
SrWYG
a4c4b8496f [data] evaluate on each dataset (#5522)
* [Update] loader.py: evaluate will run separate evaluations on each dataset.

`If you pass a dictionary with names of datasets as keys and datasets as values, evaluate will run separate evaluations on each dataset. This can be useful to monitor how training affects other datasets or simply to get a more fine-grained evaluation`

Seq2SeqTrainer supports eval_dataset as a dict (see the sketch after this entry).

* fix format

* fix

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: cf00f78650a442c85678ce805e030d2b96cbecd7
2025-02-13 02:19:03 +08:00
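
The quoted behavior above comes from the Hugging Face trainer: when `eval_dataset` is a dict, each entry is evaluated separately and its metrics are prefixed with the dataset name. A minimal sketch follows (illustrative only, not LLaMA-Factory's loader.py; the tiny in-memory datasets are placeholders).

```python
# Minimal sketch: passing eval_dataset as a dict of {name: dataset}.
from datasets import Dataset

eval_sets = {
    "alpaca": Dataset.from_dict({"input_ids": [[1, 2, 3]], "labels": [[1, 2, 3]]}),
    "identity": Dataset.from_dict({"input_ids": [[4, 5]], "labels": [[4, 5]]}),
}

# trainer = Seq2SeqTrainer(model=model, args=training_args,
#                          train_dataset=train_set, eval_dataset=eval_sets)
# trainer.evaluate()  # -> metrics like eval_alpaca_loss and eval_identity_loss
```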
Noah
38c9641777 [data] improve error handling (#6128)
* sync from upstream

* update

* update

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 1569e6096fec07da5583f1a3435b0d23ae09b5ba
2025-02-13 01:39:41 +08:00
hoshi-hiyouga
8b8fdb3a85 [misc] update readme (#6918)
Former-commit-id: f5823479bd51c39db668b68056be749af09894d1
2025-02-13 01:01:41 +08:00
hoshi-hiyouga
290057069e [misc] update readme (#6917)
Former-commit-id: 6bbed1d8c4189fb7bea40230e278c40bb5336fbd
2025-02-13 00:58:10 +08:00
hoshi-hiyouga
46203856fc [breaking change] refactor data pipeline (#6901)
* refactor data

* rename file

Former-commit-id: 7a1a4ce6451cb782573d0bd9dd27a5e443e3a18b
2025-02-13 00:39:20 +08:00
Eric Tang
80b89978d9 [misc] support for launching LLaMA-Factory with uv run (#6907)
* yay

* uv with ray temporary commit

* remove ray specific code for now

* cleanup

Former-commit-id: 1a9cab6de49e300bf9c747eefbb11d693592b477
2025-02-13 00:38:44 +08:00
Eric Tang
5a221d91f9 [example] fix path to ray example (#6906)
Former-commit-id: e9bee3ef045d85051da04e6ad581a23a9e1a9551
2025-02-13 00:29:32 +08:00
hoshi-hiyouga
3a3f4072e5 [misc] fix grad ckpt func (#6916)
Former-commit-id: 35e069a52b3d7cfd9b0107574b09265eb2290f0b
2025-02-13 00:17:18 +08:00
marko1616
0c0cdc26bc [trainer] fix llama3.2 vision kto train (#6904)
Former-commit-id: 1563e89adc8988fc6e4250634a3f1e385979b0e5
2025-02-12 19:09:14 +08:00
hoshi-hiyouga
2581cc844b [data] feat: auto template (#6905)
* support auto template

* add unittest

Former-commit-id: 0c6c9150db6414a5a05527ea486dce6633dff4b3
2025-02-12 00:22:53 +08:00
hoshi-hiyouga
d58fcd094e [misc] update readme (#6903)
Former-commit-id: 830d028939149d54bc91b6bda110dfa5de949483
2025-02-11 22:51:26 +08:00
hoshi-hiyouga
86063e27ea [data] fix ollama template (#6902)
* fix ollama template

* add meta info

* use half precision

Former-commit-id: 1304bbea69d8c8ca57140017515dee7ae2ee6536
2025-02-11 22:43:09 +08:00
hoshi-hiyouga
88eafd865b [misc] support export ollama modelfile (#6899)
* support export ollama modelfile

* update config

* add system and num ctx

Former-commit-id: 8c2af7466f4015f300b51841db11bcd2505ebf20
2025-02-11 19:52:25 +08:00
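
The entry above adds the system prompt and num_ctx to the exported Ollama Modelfile; the snippet below is a rough, hypothetical illustration of what such an exported Modelfile contains, not the exporter's actual output.

```python
# Hypothetical example of an exported Ollama Modelfile with a system prompt
# and context length; the path and values are placeholders.
modelfile = """\
FROM ./model.gguf
SYSTEM You are a helpful assistant.
PARAMETER num_ctx 4096
"""

with open("Modelfile", "w") as f:
    f.write(modelfile)
```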
hoshi-hiyouga
3f7bd98bfa [data] refactor template (#6896)
Former-commit-id: f78d5a3eca947ed965ca2f6c87d60441b1a59867
2025-02-11 17:59:25 +08:00
codingma
b72c4bd118 support ollama modelfile export (#4686)
Former-commit-id: 15cca102a7fc0d08b5d049cf264acc6fa576b104
2025-02-11 17:52:24 +08:00
hoshi-hiyouga
808ff89a2d [data] refactor mm plugin (#6895)
* refactor plugin

* lint

Former-commit-id: 1c8dcc3adca4a2e78f514f8bb70573dd1ca08746
2025-02-11 16:34:49 +08:00
HJ
6d7f1299bd [data] fix qwen_2_5_vl video processing (#6868)
* fix qwen_2_5_vl video processing

* Update mm_plugin.py

* Update mm_plugin.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 35f326dabdc8e84036296d2e3de1c84c67b8def8
2025-02-11 16:14:50 +08:00
hoshi-hiyouga
0420a608ca [assets] update wechat (#6892)
Former-commit-id: 0b268cc903a583ae78cb7e63d2bdc4602d7220fc
2025-02-11 13:56:26 +08:00
Zhangchi Feng
2047eab723 [data] fix minicpmv plugin (#6890)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

* support dpo of minicpmv

* update init audio

* update init audio

* [model] fix image process in minicpmo

* fix no mm inputs

Former-commit-id: cdd19ccd8cec460606b4545e886e932c1c5c5fe1
2025-02-11 13:30:44 +08:00
HJ
e11b40c344 [data] fix: sharegpt converter (#6879)
* fix-sharegpt-format

* fix

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: ae8f8151ff750839998b50446f127061f240d41a
2025-02-10 21:59:12 +08:00
hoshi-hiyouga
b869506a57 [data] fix mllama collator (#6874)
Former-commit-id: c694fa3d66651c6ce547fa72c8260c46a406126b
2025-02-09 22:42:25 +08:00
hoshi-hiyouga
72d5b06b08 [test] align test cases (#6865)
* align test cases

* fix function formatter

Former-commit-id: a68f5e22d0391c80a9a826dc83967255be572032
2025-02-09 01:03:49 +08:00
hoshi-hiyouga
94726bdc8d [dataset] add openthought (#6866)
Former-commit-id: 20c748a4f108c0087f0d85377a4aa99126a0beb0
2025-02-09 00:53:01 +08:00
hoshi-hiyouga
4d1791e905 [deps] upgrade vllm (#6857)
Former-commit-id: 4bd50f65a3d62528768561019fda2723d045c7fd
2025-02-08 15:02:28 +08:00
hoshi-hiyouga
528e06ccaa fix qwen2vl plugin (#6855)
Former-commit-id: fd13b7138ab3f4da0a429a327b9d076bcb70b944
2025-02-08 10:59:10 +08:00
hoshi-hiyouga
fec641ec82 [misc] allow extra args (#6831)
Former-commit-id: 0fd3a5295cb4e08a4e57e860e82103364c28fba8
2025-02-06 12:38:08 +08:00
Zhangchi Feng
8f401e37f8 [model] support audio (#6701)
* support qwen2_audio

* improve code

* lint

* fix

* fix

* fix

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 5eacb5629e4d7733cd992a63747a1335f2c6a929
2025-02-05 04:59:09 +08:00
Yueqi Song
9feb78e7b4 [data] allow thought in function call (#6797)
* Update template.py

* Update template.py

* use formatter

* fix regex

---------

Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 3a31af6e920683ec074da93b1719e29f5d4cffd6
2025-02-05 02:26:23 +08:00
hoshi-hiyouga
c2022431aa [misc] update license year & fix llama pro (#6814)
* fix llamapro script

* change year

Former-commit-id: d9ae594178796994d400a5f207d6499712816f89
2025-02-05 01:53:33 +08:00
Yueqi Song
0817c24c04 [data] fix qwen tool template (#6796)
* Update tool_utils.py

* fix unittest

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 02bb78a792112f5151b3a96ddde2528823855288
2025-02-05 00:02:00 +08:00
Zhangchi Feng
cfb926fb84 [data] fix minicpmv plugin (#6801)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

* support dpo of minicpmv

* update init audio

* update init audio

* [model] fix image process in minicpmo

Former-commit-id: 8f704c8b6228ef50f828014f85dce67fda868660
2025-02-04 21:20:15 +08:00
neavo
34746d6151 [readme] update flash attention installation instruction on win platform (#6788)
* Update README_zh.md

* Update README.md

Former-commit-id: e48d1327fb39cc95f8fbfc746494f67a79471893
2025-02-01 12:43:29 +08:00
hoshi-hiyouga
5bb447b118 [misc] update workflows (#6787)
Former-commit-id: 15add6b250149e2aeabdc62d7dca69fc06054e01
2025-02-01 04:54:42 +08:00
hoshi-hiyouga
a28261a866 [model] add mistral small models (#6786)
Former-commit-id: e5e95c39bc4199fa89c67e34f9adaaa987058744
2025-02-01 04:31:38 +08:00
hoshi-hiyouga
800de98dc8 [model] add qwen2.5 vl models (#6779)
Former-commit-id: ed46fb4f6194c30060b908092464dded12e5787c
2025-01-31 03:00:29 +08:00
hoshi-hiyouga
222423bcef [breaking] support transformers 4.48 (#6628)
Former-commit-id: f154ab175c513a4d7bb866bf2cffc34b77b50508
2025-01-31 01:36:33 +08:00
hoshi-hiyouga
e71737351f [webui] improve webui & reasoning mode (#6778)
Former-commit-id: 3f17fc0d7163372e0446f1a38792ff761e99b739
2025-01-31 00:09:21 +08:00
qvlehao
4f298894da [model] add deepseek-R1 & show think process (#6767)
Former-commit-id: 4dccb724af51208a001c96fefbdbf226be09e50c
2025-01-29 12:16:26 +08:00
yinpu
a8fae3869d fix: avoid redundant normalization in DPO's SFT loss calculation (#6722)
Former-commit-id: 971a8ccbdacf130763d40c7ef82a711b2fc1292f
2025-01-21 13:38:02 +08:00
engchina
db9b977e4f [webui] support ja (#6698)
* add support for japanese language

* add support for japanese language

---------

Co-authored-by: engchina <atjapan2015@gmail.com>
Former-commit-id: 88692e403f9b5085dd0c7c2b2c68656c5da50dd4
2025-01-20 19:46:38 +08:00
hoshi-hiyouga
87d685b59f [model] support yarn (#6693)
Former-commit-id: 8c412abc44a4c61b683465e36c6288580d980250
2025-01-18 13:56:09 +08:00
hoshi-hiyouga
e4046bdd1f [assets] update wechat (#6692)
Former-commit-id: 70dba5fab6f4c9225758cafb646113d8e80ac084
2025-01-18 12:35:03 +08:00
hoshi-hiyouga
5baa3add8c [misc] update mm plugin (#6691)
Former-commit-id: 00303338d6927b1fda58b23340a31a8fa009f706
2025-01-17 23:04:26 +08:00
hoshi-hiyouga
332f637592 disable valset by default (#6690)
Former-commit-id: a1a94f364e33d1d73852f74eda4fa581e6b16533
2025-01-17 21:09:30 +08:00
hoshi-hiyouga
31daa6570b [webui] upgrade to gradio 5 (#6688)
Former-commit-id: 9df7721264ddef0008d7648e6ed173adef99bd74
2025-01-17 20:15:42 +08:00
hoshi-hiyouga
33525a34b6 fix qwen2 moe (#6684)
Former-commit-id: ab624419fa0ab23ef7a331a0ec14e393328772b5
2025-01-17 13:46:09 +08:00
Zhangchi Feng
3607caa2ad [data] Fix minicpmv/o dpo training (#6657)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

* support dpo of minicpmv

Former-commit-id: 8d9f47b98047f370637d1c96c2f3440dcc738ef3
2025-01-15 17:30:37 +08:00
steveepreston
0fc2e19279 Update val_size English description (#6653)
* Update `val_size` Description in locales.py

* Update `val_size` Description in data_args.py

* Remove extra space in data_args.py

Former-commit-id: f1ba5158091446dce540dd796284037bdd724c38
2025-01-15 16:00:20 +08:00
hoshi-hiyouga
ef994600db update readme (#6648)
Former-commit-id: b47467276ab3174c50329b3c8b76823bc0a2249c
2025-01-15 11:06:19 +08:00
hoshi-hiyouga
7638f1070e [optim] clean apollo (#6645)
* clean apollo code

* update readme

Former-commit-id: 38b8ec4a99189483124b54df9d6bc6b0d318855a
2025-01-15 01:42:50 +08:00
zhuHQ
c2120432db [optim] add support to APOLLO (#6617)
Former-commit-id: 5a252e5a458457adbd19da3b68a3897ad2962824
2025-01-15 00:24:56 +08:00
Zhangchi Feng
66184762e8 update readme of MiniCPM-o (#6642)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

* update readme

Former-commit-id: 68604050ae2c98aeef5e9a6b4d2c11a4eb609bfa
2025-01-14 21:22:35 +08:00
hoshi-hiyouga
41a9e231cb lint (#6641)
Former-commit-id: 79731ae13ecd17eb8646fb53162c81dddfef3b00
2025-01-14 18:40:07 +08:00
Haian Huang(深度眸)
1bb06e06df Support InternLM3 Dense 8B Model (#6640)
* support internlm3

* update

* update

* update

* add hint

Former-commit-id: 24ab7ae0944c5f373e9cac60f0332e704824a057
2025-01-14 18:07:27 +08:00
Xiaosu Zhu
381f7120e6 Fix tokenizer max length (#6632)
Former-commit-id: 1807c7ba033985490aa7c8c39d880da6af983b92
2025-01-14 17:35:54 +08:00
Zhangchi Feng
f7857c83e1 Support Inference of MiniCPM-V-2.6 and MiniCPM-o-2.6 (#6631)
* fix template name

* tiny fix

* support minicpm-o-2.6

* support inference of minicpmv

Former-commit-id: 7f3c64e853a7cdd49d02bf85e237611941ac7fa8
2025-01-14 17:34:58 +08:00
hoshi-hiyouga
d0da6f40b0 [model] fix mllama any image (#6637)
* fix mllama any image

* reorder classes

Former-commit-id: 1242a1c4b4a465c06363fdc59302e80e5c4c96e6
2025-01-14 16:47:58 +08:00
hoshi-hiyouga
28d145a066 pin vllm version to 0.6.5 (#6629)
Former-commit-id: 26097ca0adf25ebb7d9e8eec2d2cef673c6cfe88
2025-01-14 02:44:02 +08:00
Zhangchi Feng
ae32c148d1 Support new features of MiniCPM-V (#6626)
* fix template name

* tiny fix

* support minicpm-o-2.6

Former-commit-id: 53034a61c7654358f46916cbc370910fb2aeff3b
2025-01-14 00:26:19 +08:00
hoshi-hiyouga
2a05941b14 [inference] fix stop token for object detection (#6624)
* fix stop token

* update minicpm data pipeline

* fix npu qlora examples

Former-commit-id: 844919fadaa8a61dfae47020971ea80730b2346f
2025-01-13 21:34:20 +08:00
codingma
11c38b9173 add nf4 qlora support on Ascend NPU (#6601)
* add nf4 qlora support on Ascend NPU

* add transformers version check

* add python>=3.10 requirement description for npu

* tiny fix

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
Former-commit-id: 7912d1acac5f10dab22145fe729a90c57aad8d85
2025-01-13 19:43:36 +08:00
Zhangchi Feng
73c1c15b62 Fix template name of MiniCPM-V (#6620)
* fix template name

* tiny fix

Former-commit-id: 94dea52cef709a7e6f1cdc0b78e83e0422bd65d3
2025-01-13 16:46:48 +08:00
hoshi-hiyouga
7f58bf984f Merge pull request #6598 from BUAADreamer/minicpmv
[model] Support MiniCPM-V

Former-commit-id: 251e82bec12eaea6cf13608de191c096c63d1214
2025-01-13 15:24:02 +08:00
fzc8578
ec552372ba remove tests
Former-commit-id: 51addcd7ab81548a9952064dd8c95a8542252003
2025-01-13 15:08:35 +08:00
fzc8578
17d32fb5c7 fix tests
Former-commit-id: 582a17a12010943c7ca1cc0e25ebc8d125d10b45
2025-01-13 15:01:39 +08:00
fzc8578
4b61610b12 fix style
Former-commit-id: 76a36d9acecbf36b6959a14caacfed1d32bcee41
2025-01-13 14:19:38 +08:00
fzc8578
07798e4aad fix system prompt and tests
Former-commit-id: 955efca677b299749f3d40d587ee310951537543
2025-01-13 14:18:06 +08:00
fzc8578
6d6acd0213 add some
Former-commit-id: 5ad8ef3ec434f53f6fc494474becb034a3aca0ca
2025-01-11 15:03:20 +08:00
fzc8578
a789e0f263 add cpm_o test
Former-commit-id: 53cade69caed82b470fdb249274f03ee34af3100
2025-01-11 11:55:30 +08:00
fzc8578
f9ee00b6b6 add cpm_o test
Former-commit-id: 81dc0f678a7609c834581d956387bde42652755d
2025-01-11 11:49:03 +08:00
fzc8578
31bfdb08cd fix format
Former-commit-id: 964e18be5a824950164bc7232d35822a8b116d1a
2025-01-11 01:27:40 +08:00
fzc8578
12c83e00fc add some
Former-commit-id: 6233764d18f31365e9ba450408306fad55567ffc
2025-01-11 01:10:24 +08:00
fzc8578
9dc7b6c7ac adapt to new mllm_param
Former-commit-id: 0775b71965863c2618c117726a1046a36d6d85b8
2025-01-11 00:16:34 +08:00
Zhangchi Feng
627548bf7f Merge branch 'main' into minicpmv
Former-commit-id: 8a9c90759feda975faadc5858bd44b7ea116e7fb
2025-01-11 00:01:36 +08:00
hiyouga
dc65ecdf09 refactor mllm param logic
Former-commit-id: b895c190945cf5d991cb4e4dea2ae73cc9c8d246
2025-01-10 15:45:48 +00:00
fzc8578
e577990eb2 add minicpmv2.6
Former-commit-id: 1ab0aea54b54066cad500b7969b86a0e952d396d
2025-01-10 23:45:44 +08:00
fzc8578
1f3b729a4b add some
Former-commit-id: 58f50b8729083e9ea0fdcf07042b06261670ad57
2025-01-10 23:29:06 +08:00
fzc8578
0aa7ac210f add some
Former-commit-id: 3acd151a0f8efdd230c0b0980550795d204a69f7
2025-01-10 21:25:32 +08:00
fzc8578
40382f1387 fix some
Former-commit-id: 1eb7118db3ad6054cfd59d5f16a5d882e40e9057
2025-01-10 20:55:52 +08:00
fzc8578
75b3819e43 fix version
Former-commit-id: 834903fbf7a0fc8ac110f62f4df7c13819dd3c68
2025-01-10 20:31:04 +08:00
fzc8578
e63c2df0b1 fix some
Former-commit-id: cd5a1a8b9c6eb59d6e95f79573f60ad8668f1942
2025-01-10 20:27:06 +08:00
fzc8578
25d4889789 tiny fix
Former-commit-id: f088e580d3bacd0eecd0c3bf17e928eb49832ba1
2025-01-10 20:15:39 +08:00
Zhangchi Feng
8c0a721c4c Merge branch 'main' into minicpmv
Former-commit-id: d8840ae416660e23f1d615ffd404f519360151d9
2025-01-10 20:12:07 +08:00
fzc8578
9e972bc9ec add some
Former-commit-id: fede563aeb716ba5d1e368fd3e1182e4e580d248
2025-01-10 20:01:22 +08:00
hoshi-hiyouga
1675712a4c Merge pull request #6588 from hiyouga/hiyouga/upd_issue_temp
[gh] update issue template

Former-commit-id: 0a2626f996ce61559e93bedf19083aac5c861666
2025-01-10 03:03:48 +08:00
hiyouga
e0c9012f7f update issue template
Former-commit-id: 2bfca993588d8087dfd118f6f02486bbe752b166
2025-01-09 18:58:53 +00:00
hoshi-hiyouga
a25024bd0c Merge pull request #6585 from hiyouga/hiyouga/add_phi4
[model] add phi4 model

Former-commit-id: 0ae6a9b7bf9f1d6d844b97406b4795363bf75e78
2025-01-10 02:39:17 +08:00
hiyouga
867980196e improve template, add phi4 model
Former-commit-id: a785b6796e445a3adba45c5b6947166a2ff99871
2025-01-09 18:27:54 +00:00
hoshi-hiyouga
4e25d037c8 Merge pull request #6564 from stephen-nju/fix_ray
Fix ray

Former-commit-id: d4566839369726023f1b6e8f4b2332bda0c715cc
2025-01-08 18:14:18 +08:00
hoshi-hiyouga
6ba6926221 Merge pull request #6565 from hiyouga/hiyouga/improve_log
[misc] improve log

Former-commit-id: 538bf7b839c63d6a6758522fa08999d9b78e9db2
2025-01-08 18:08:21 +08:00
zhubin
b6b53b61f7 fix: get ray args when args is not a dict
Former-commit-id: 5e5398cd5b117b2378107172d3f91cfb0321e842
2025-01-08 10:06:02 +00:00
hiyouga
647c51a772 improve log
Former-commit-id: a6abf375975ffea3d51e1b944c9855b5f62ffac8
2025-01-08 09:56:10 +00:00
hoshi-hiyouga
3b843ac9d4 Merge pull request #6542 from erictang000/et/ray-integration
Ray Train integration with LLaMA-Factory

Former-commit-id: 4e34ee0a8e0aa90b535e53608b51c5c0804db34e
2025-01-08 11:46:03 +08:00
hiyouga
0ef1f981da fix llamaboard with ray
Former-commit-id: bd8a432d6a980b1b24a551626304fe3d394b1baf
2025-01-07 09:59:24 +00:00
hiyouga
944a2aec4d refactor ray integration, support save ckpt
Former-commit-id: 2f50b27e608b2092bfceab6c6e84e6631e973ee2
2025-01-07 09:39:10 +00:00
Eric Tang
4f31ad997c run style check
Former-commit-id: 5ec33baf5f95df9fa2afe5523c825d3eda8a076b
2025-01-07 08:55:44 +00:00
Kourosh Hakhamaneshi
8683582300 drafting ray integration
Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>

Former-commit-id: 19c12ddae9350f6e25a270fe3372f5b9094cf960
2025-01-07 08:55:44 +00:00
hoshi-hiyouga
5ccc607222 Merge pull request #6547 from hiyouga/hiyouga/fix_pixtral_dpo
[trainer] fix pixtral dpo

Former-commit-id: 920bb2a8922847fa544e2c260c67161e64cf5d50
2025-01-07 14:38:55 +08:00
hiyouga
d8bd46f1bf fix #6546
Former-commit-id: 6fcf2f10faf3b1614896b091591eeef96d717e64
2025-01-07 06:30:44 +00:00
fzc8578
8c2a712247 add some
Former-commit-id: b4790c66c126567bd193de52a564e3ce11c94769
2025-01-06 19:32:39 +08:00
hoshi-hiyouga
53e41bf2c7 Merge pull request #6528 from hiyouga/hiyouga/upd_wechat
[assets] update wechat

Former-commit-id: 3ceedf44896b5ebc406d6398b3f15e74e4710fbe
2025-01-04 16:01:21 +08:00
hiyouga
0eeae9061c update wechat
Former-commit-id: 11a9d96a042e8afd972e0bf2fa3e51f95e4799ec
2025-01-04 07:59:57 +00:00
Zhangchi Feng
08729dbefc Merge branch 'hiyouga:main' into minicpmv
Former-commit-id: 873b2d5888038e2328a12a6eb7c84099ba7ca1f3
2025-01-04 11:20:33 +08:00
fzc8578
2c120aa0df add some
Former-commit-id: 81176fe226da89eace89cb202bad68e73b7c2a02
2025-01-04 11:11:15 +08:00
hoshi-hiyouga
cca6286b6f Merge pull request #6524 from hiyouga/hiyouga/upd_scripts
[misc] update scripts

Former-commit-id: 6ba3ec45fc369c095ab9a1fbd9847dc66cf24ca4
2025-01-03 23:52:26 +08:00
hiyouga
8516054e4d update scripts
Former-commit-id: 05aa52adde8905ca892f1ed5847d6f90b1992848
2025-01-03 10:50:32 +00:00
hoshi-hiyouga
d1a8cd67d2 Merge pull request #6515 from hiyouga/hiyouga/misc
[misc] update model name

Former-commit-id: f92eea4090351dcd3c364e10a9eec0d17d480e12
2025-01-02 20:20:02 +08:00
hiyouga
8a5b4bdfd4 update model name
Former-commit-id: bf627d9f1ac117f040adbfd7630b5283f0db556a
2025-01-02 12:19:21 +00:00
hoshi-hiyouga
3bceef02ee Merge pull request #6514 from hiyouga/hiyouga/add_project
[readme] add project

Former-commit-id: 0bd0c373183731302f1af9f33a1f8ff70ba743e2
2025-01-02 20:16:15 +08:00
hoshi-hiyouga
166a830938 Merge pull request #6513 from hiyouga/hiyouga/add_gpt2
[model] add gpt2 model

Former-commit-id: 859c37f43c8a49eea4f118d0d00ee2a554f6bd4f
2025-01-02 20:15:55 +08:00
hiyouga
18767fe026 add project
Former-commit-id: 3b7e745d271e36b4cfe8826820b23254e1debfe9
2025-01-02 12:15:41 +00:00
hiyouga
18a1a4b9da add gpt2 model
Former-commit-id: 37d5e3639fcf5ae6e58cc435e0fa9dee0d6e4ead
2025-01-02 12:07:38 +00:00
hoshi-hiyouga
6015fe700e Merge pull request #6512 from hiyouga/hiyouga/fix_gen_logic
[trainer] fix generate logic

Former-commit-id: b97759421c535560ade631a7fa0a57b7c0da50f1
2025-01-02 19:36:54 +08:00
hoshi-hiyouga
369dae8dd3 Merge pull request #6462 from shibingli/main
Add ARG HTTP_PROXY in Dockerfile to support HTTP proxy during image building

Former-commit-id: 1e72bb24253bb07da874f3a37ccfa4fddaaf6978
2025-01-02 19:34:17 +08:00
hiyouga
2aaf3697d7 fix #6499
Former-commit-id: dffc607220ff6dac15cf501ac9a3cdbe80c25211
2025-01-02 11:28:54 +00:00
hoshi-hiyouga
5504b5254c Merge pull request #6492 from hiyouga/hiyouga/add_deepseek3
[model] add deepseek3 model

Former-commit-id: 0a6d1244a51f3cc8fe141b32f39bffce4c924a8c
2024-12-30 21:50:13 +08:00
hiyouga
b2e4f11602 add deepseek3 model
Former-commit-id: 611779d412f31e25b1ed38049050eee2da61dde5
2024-12-30 13:39:20 +00:00
hoshi-hiyouga
e3f95abca7 Merge pull request #5507 from piamo/main
Add deepseek-v2.5 template

Former-commit-id: 8a4911d201e219465fe0835a3ceb967f8b80dc0e
2024-12-30 21:08:25 +08:00
hoshi-hiyouga
2f44f70c2c Merge pull request #6483 from hiyouga/hiyouga/fix_paligemma_infer
[model] update vllm & fix paligemma dtype

Former-commit-id: 03ad6d44805a965764aaa51376964972b9b7da3d
2024-12-30 16:34:32 +08:00
hiyouga
f8f05a883b fix #6482
Former-commit-id: 8577f52b4152efe6cc7a8b5f6d37b4f9ba6684e7
2024-12-30 06:03:07 +00:00
hoshi-hiyouga
5f473e2696 Merge pull request #6465 from hiyouga/hiyouga/fix_eval_loss
[trainer] fix eval loss

Former-commit-id: fa8110b2052a74b4bd0dcf391a54207e1e31056d
2024-12-28 01:02:56 +08:00
hiyouga
88b1874c04 fix #6448
Former-commit-id: 04f78e85af5af14b4c195936623e426a6a128af2
2024-12-27 16:54:39 +00:00
shibingli@yeah.net
58bc6943dc Add ARG HTTP_PROXY in Dockerfile to support HTTP proxy during image building.
Former-commit-id: c46af4c45f96f1942dfaf77bdbdbe5d0fe85a387
2024-12-27 18:31:14 +08:00
shibingli@yeah.net
2dedf7b401 Add ARG HTTP_PROXY in Dockerfile to support HTTP proxy during image building. This commit introduces an ARG parameter named HTTP_PROXY in the Dockerfile. This addition allows for the configuration of an HTTP proxy, facilitating image building in environments with network restrictions.
Former-commit-id: d59fe30bca636bc2ca132d50172dba0032cecb6b
2024-12-27 18:17:17 +08:00
hoshi-hiyouga
5769a553d2 Merge pull request #6457 from youkaichao/module-run
[misc] enable module run

Former-commit-id: 813881a5d13dd1d5a526a85d41032196e0d46f04
2024-12-26 23:41:37 +08:00
youkaichao
552816e04b Update cli.py
Former-commit-id: 18e65bbd3ae07af3b9eed7f293c345815776c325
2024-12-26 23:22:09 +08:00
hoshi-hiyouga
b5fa1044b8 Merge pull request #6443 from hiyouga/hiyouga/add_qvq
[model] add qvq

Former-commit-id: 2010e80b1a939d21efa13d54df5f5d648ea640de
2024-12-25 15:53:19 +08:00
hiyouga
3c55976a0e add qvq #6439
Former-commit-id: 4dbfa142d899dd6e4d1a9d4db125765af5580a4f
2024-12-25 07:52:41 +00:00
hoshi-hiyouga
4611f67fae Merge pull request #6426 from hiyouga/hiyouga/update_readme
[assets] update readme

Former-commit-id: 2309c431090d1f3b573d113bbedeabee2b01fdf2
2024-12-23 22:17:19 +08:00
hiyouga
a5346041bb update readme
Former-commit-id: 1deda4750e0df6c46aeb33cf3f8b35baa537cc1d
2024-12-23 14:08:59 +00:00
hoshi-hiyouga
df42e438c1 Merge pull request #5922 from Tuyohai/main
support granite3 models

Former-commit-id: a9087bc0549f7f16e5b4c39e324043755b1618c8
2024-12-23 16:46:02 +08:00
hoshi-hiyouga
7dbfd7dff6 Merge pull request #6418 from hiyouga/hiyouga/add_report
[trainer] add custom args to experimental logger

Former-commit-id: 5e5a7ba73c1a386f025d75c10b102306bcb98674
2024-12-22 05:47:55 +08:00
hiyouga
a897d46049 support report custom args
Former-commit-id: d41254c40a1c5cacf9377096adb27efa9bdb79ea
2024-12-21 21:42:45 +00:00
hiyouga
adff887659 fix paligemma infer
Former-commit-id: d272455d6118c1d670c70cfe3458d8dab111da6c
2024-12-21 20:24:32 +00:00
hoshi-hiyouga
eba78f2159 Merge pull request #6416 from Zeyi-Lin/main
docs: use swanlab
Former-commit-id: 0759b576a36cde120ccb8cadd96fca4d871be130
2024-12-22 04:08:26 +08:00
ZeYi Lin
ec05c8cdb4 docs: use swanlab
Former-commit-id: 33509ea7bcd5f698a8393379bb3941c3c32f7fd6
2024-12-21 20:59:25 +08:00
hoshi-hiyouga
0a869c4ed4 Merge pull request #6401 from Zeyi-Lin/hiyouga/swanlab
feat: add swanlab for experiment tracking and visualization.
Former-commit-id: e65fe507f7643bf40b0fc462805c7b7f8ef6b738
2024-12-21 14:09:33 +08:00
ZeYi Lin
f792eaf8d4 fix: project blank
Former-commit-id: 3a0939572b0bfc7da0ee1a7244b6b3fbf567aba0
2024-12-20 18:26:02 +08:00
ZeYi Lin
8a41c96761 fix: by hiyouga suggestion
Former-commit-id: 41195f1bc69e4b5da7a265369d368b06754362cf
2024-12-20 16:43:03 +08:00
ZeYi Lin
e5d9d8c55d feat: ui improve
Former-commit-id: 6a1effb1741a13ae5238b0e9b429b4cbe3b6534f
2024-12-20 11:03:02 +08:00
ZeYi Lin
3e44c8fe3a fix: text
Former-commit-id: 52fe8d61eba7b7d8f66df09a03d40f25cc9c5b44
2024-12-19 21:26:02 +08:00
ZeYi Lin
925e421bde fix: bugs
Former-commit-id: a2297f97f7587c77d55fbce9ffa81dc60d0b04a1
2024-12-19 21:08:16 +08:00
hoshi-hiyouga
bbb636bdba Merge pull request #6395 from hiyouga/hiyouga/fix_genkwargs
[generate] fix generate kwargs

Former-commit-id: 1193594f2d06df38ec0aef7f591c74651cf1353c
2024-12-19 20:24:17 +08:00
ZeYi Lin
a30bdbb1c0 docs: config framework
Former-commit-id: 9cad21df82754170900e3ea74476f674754159b3
2024-12-19 20:22:36 +08:00
ZeYi Lin
95b7e10a06 fix: string
Former-commit-id: 73e1da5ab07c96a6faa9738e83c4dd9297f34b14
2024-12-19 20:18:59 +08:00
hiyouga
0385c60177 fix #6391
Former-commit-id: 067ba6e6cb4d8a1d95bba0a108f73008416a2865
2024-12-19 12:16:38 +00:00
ZeYi Lin
44895ebe36 feat: optimize frontend
Former-commit-id: 4a78603c141d9bd78bcaf81261b443cf082bf51f
2024-12-19 19:04:19 +08:00
ZeYi Lin
44dfbf9dbd feat: swanlab params
Former-commit-id: 761b3bdb03e27826fde2ca86d4e37b53c2bbc777
2024-12-19 18:47:27 +08:00
hoshi-hiyouga
0a465fc3ca Merge pull request #6388 from hiyouga/hiyouga/shuffle_control
[trainer] support disable shuffling

Former-commit-id: 3243e74a2ed3b1f7fa818842955f91386b591a9c
2024-12-19 17:00:12 +08:00
hiyouga
01eeae50b5 support disable shuffling
Former-commit-id: 9d8c35fd6b838ede0bd6827c6c6121f2cba2b11b
2024-12-19 08:53:21 +00:00
hiyouga
7eeeffdb8a add swanlab
Former-commit-id: c85a77c8a8824a56a67d56b97b4877fcd6edeb3d
2024-12-19 07:12:31 +00:00
hoshi-hiyouga
eca06531c3 Merge pull request #6384 from hiyouga/hiyouga/fix_webui
[webui] fix webui args

Former-commit-id: 94294c4e356b3ac5546f897d6e3255ee8c2a260f
2024-12-19 14:57:52 +08:00
hiyouga
d90b40b60f fix webui
Former-commit-id: 7152fde4a026e67f15885814c1900f3911d04ee8
2024-12-19 06:48:03 +00:00
hoshi-hiyouga
1898c1e9a6 Merge pull request #6379 from hiyouga/hiyouga/add_paligemma2
[model] add paligemma2

Former-commit-id: abe3ff3fe0b113e949bf6d2bd10e4c125fb8fe75
2024-12-18 17:03:11 +08:00
hiyouga
8d2f8b0dd8 add paligemma2
Former-commit-id: dafbc31684cb2566ef23c79e171cdfd02d6d396b
2024-12-18 08:57:26 +00:00
hoshi-hiyouga
df42281256 Merge pull request #6313 from ge-xing/main
support telechat2 model

Former-commit-id: 282d0619b1047ba48f9bc3ac837d2ed40b7df307
2024-12-18 16:16:17 +08:00
hoshi-hiyouga
896cf476d5 Merge pull request #6369 from hiyouga/hiyouga/template
[template] support qwen2 tool template

Former-commit-id: e1e133635f05f5b83869bc02340d6ea46976f318
2024-12-18 04:23:49 +08:00
hiyouga
37961d5f06 support qwen tool format
Former-commit-id: cbef4cb501fa1b50fa611e7054a856ce2c5ed10e
2024-12-17 20:12:06 +00:00
hiyouga
bb047bc844 change default replace jinja to false
Former-commit-id: bfe6625f6f6aa294933fa9056a4bfedee4fbe5e2
2024-12-17 19:27:10 +00:00
hoshi-hiyouga
448adedf6a Merge pull request #5473 from AlongWY/mistral
Support Mistral format tools

Former-commit-id: 4838427310d49e5942138e4578d2483baa005471
2024-12-18 03:23:24 +08:00
ylfeng
469c7cd462 Support Mistral format tools
Former-commit-id: e42d0e54b7a64a3f017a09e99846d174db7b438f
2024-12-17 19:13:26 +00:00
hoshi-hiyouga
ebf6a07681 Merge pull request #6368 from hiyouga/hiyouga/fix_llama_template
[template] fix llama3 tool template

Former-commit-id: 7c6763c4f3287f758077191361d5b0354741f84a
2024-12-18 01:10:48 +08:00
hiyouga
53f0fff513 fix llama3 tool template
Former-commit-id: 63f28a594a44c011f2e6d418f22ddbfc445db163
2024-12-17 17:05:10 +00:00
hoshi-hiyouga
ab7567693d Merge pull request #6367 from hiyouga/hiyouga/add_model
[model&template] add llama3.3 & support llama3 tool prompt

Former-commit-id: c32012c5e4943a30c3061716ed780d6124b6c90d
2024-12-18 00:13:28 +08:00
hiyouga
1b8aab0723 support llama3 tool prompt
Former-commit-id: dc45d2f56669fd99935a68cda1ec0e8f36229f7f
2024-12-17 15:52:37 +00:00
hoshi-hiyouga
30ebe61914 Merge pull request #5819 from yafshar/remote_code
Add trust_remote_code Parameter and Set Default to False

Former-commit-id: e82099350a2fb6d8ddf9c80ba0b18173057d4dcf
2024-12-17 21:10:24 +08:00
Yaser Afshar
6f1c8dacea Add missing key to init_kwargs
Former-commit-id: 03fc4621dad132164596a58d3e8693787b7e1aca
2024-12-17 12:34:05 +00:00
Yaser Afshar
8881237475 Add trust_remote_code parameter and remove True
- Introduced a new model parameter `trust_remote_code`
- Set the default value of `trust_remote_code` to `False`
  to enhance security
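For illustration, a minimal sketch of how this option is toggled in a LLaMA-Factory-style YAML training config; every key except `trust_remote_code` is a placeholder assumed for the example, not taken from this commit:

```yaml
# Hypothetical training-config excerpt: trust_remote_code now defaults to false,
# so models that ship custom code must opt in explicitly.
model_name_or_path: path/to/custom_model   # placeholder path
trust_remote_code: true
stage: sft
do_train: true
finetuning_type: lora
```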


Former-commit-id: 4bf23f406cf5235c16f9f8139850c53354901814
2024-12-17 12:25:12 +00:00
zhaohu xing
584755be4b support telechat2 model
Former-commit-id: 15a069d85c07842cd28d65845af93c3cf70ef1f4
2024-12-17 12:15:33 +00:00
hoshi-hiyouga
3d3324be5c Merge pull request #6364 from hiyouga/hiyouga/control_reenterent_gc
[model] support non-reentrant-gc

Former-commit-id: a8a13cb360980bb4acd493e33ed405e07460fe73
2024-12-17 19:58:36 +08:00
hiyouga
4196d5b4d6 support non-reentrant-gc & fix #6358
Former-commit-id: 20446141e408885eb36d512bfb2dfb62bbc0c20d
2024-12-17 11:41:59 +00:00
hoshi-hiyouga
101c95ce65 Merge pull request #6363 from hiyouga/hiyouga/control_skip_eos
[infer] support control eos

Former-commit-id: 963640cff370be9f2fab649c88a120a645e6992e
2024-12-17 19:35:40 +08:00
hiyouga
19ebc0e7a2 support control eos, fix #6345
Former-commit-id: cb0f8399356bf372f3b7963f2565c3d504be0923
2024-12-17 10:42:05 +00:00
hoshi-hiyouga
1ce15b5d9e Merge pull request #6362 from hiyouga/hiyouga/mllm_packing
[model] generalized packing

Former-commit-id: b85f77a2687f7e0d11f7d2e49de54c544e39e3d5
2024-12-17 18:41:48 +08:00
hiyouga
d670d62a66 generalized packing & fix #6343
Former-commit-id: 3b1e4194616cacd5c24f08b328e31a008bddcf29
2024-12-17 10:26:19 +00:00
hoshi-hiyouga
6522467ddb Merge pull request #6359 from hiyouga/hiyouga/fix_qwen2vl_infer
[model] fix qwen2vl infer

Former-commit-id: 419cba5fae31a3c88305fe424b8aae9d59e3941a
2024-12-17 18:15:23 +08:00
hiyouga
aacd9642f5 fix #6348
Former-commit-id: 83e552320909f4775377889f1512994b7e638a7e
2024-12-17 10:06:46 +00:00
hoshi-hiyouga
4446c92517 Merge pull request #6334 from hiyouga/hiyouga/add_examples
[assets] update wechat and examples

Former-commit-id: 7725e7ac7d21ad844e8424a920e8bece6f38af19
2024-12-15 01:37:01 +08:00
hiyouga
8c65548b10 update assets
Former-commit-id: 7b9bd552b2bf97b72976511094eb51dfde5d1017
2024-12-14 17:36:03 +00:00
hiyouga
fb22651faf fix mrope
Former-commit-id: 55bee1d333549ca19858b3f5c1b7b86926e5fb09
2024-12-12 15:08:17 +00:00
hoshi-hiyouga
cfff136b2a Merge pull request #6253 from hiyouga/hiyouga/qwen2vl_mm_proj
[model] support qwen2vl train proj only

Former-commit-id: 0b0012142ab683da1e0558e6240310bf90f39150
2024-12-05 20:25:33 +08:00
hiyouga
bac2c64f87 support qwen2vl train proj only
Former-commit-id: 0e949ef03455726e907c6f1039e93ebe480c897a
2024-12-05 10:37:42 +00:00
hoshi-hiyouga
be1ec97c8e Merge pull request #6251 from hiyouga/hiyouga/vllm_qwen2vl_infer
[infer] support qwen2vl vllm infer

Former-commit-id: df76f7d6e124131ce7628c31cce01de4f8e6014c
2024-12-05 18:26:19 +08:00
hiyouga
bbd432415d support qwen2vl vllm infer
Former-commit-id: 03ddd2555fb97488cd4daab11e8b672d36150c5a
2024-12-05 10:17:26 +00:00
hoshi-hiyouga
1fef702382 Merge pull request #6246 from hiyouga/hiyouga/update_examples
[examples] update examples

Former-commit-id: ecb688bdb3e940651d64bc1edc85ce4568f3eabe
2024-12-05 16:49:30 +08:00
hiyouga
39865d8a1f update examples
Former-commit-id: bcb010be7732ae137f156932100ee4d02a93725c
2024-12-05 08:48:25 +00:00
hoshi-hiyouga
c7b27bd70b Merge pull request #6242 from hiyouga/hiyouga/fix_script
[script] fix scripts

Former-commit-id: cf254ea0891ea2e6522fdbefcccf409ff7aafd99
2024-12-05 11:54:46 +08:00
hiyouga
86e4fab0d5 fix scripts
Former-commit-id: f94f55d20283298cb7d90d0573992a62df414a8f
2024-12-05 03:47:32 +00:00
hoshi-hiyouga
ff3e40e4a5 Merge pull request #6160 from village-way/pr_dataloader
fix: training gets stuck when tokenized_path is not None and load_from_disk returns a Dataset
Former-commit-id: 63de20970c8062aeebed5f366f1675beb12e05bf
2024-12-04 22:18:19 +08:00
hoshi-hiyouga
ea830cad0c lint
Former-commit-id: 191ccc585399ad4c6c2c4f280b144b2c0a4869f3
2024-12-04 22:08:27 +08:00
hoshi-hiyouga
225e270fd5 Merge pull request #6238 from hiyouga/hiyouga/vllm_batchinfer
[infer] feat: support batch infer in vllm

Former-commit-id: 886752801ba8a5bf6fc4853ed618817185950c11
2024-12-04 21:59:13 +08:00
hiyouga
c1768cfb14 support batch infer in vllm
Former-commit-id: 3ef5ed3b9a44eed2f7e3ff221dfc343d0a97c0b5
2024-12-04 13:50:00 +00:00
hoshi-hiyouga
53edd62f8b Merge pull request #6190 from JieShenAI/main
add vllm_infer script

Former-commit-id: 09c7ea700c83dcf8d75796a1e28a36197f62cab4
2024-12-04 21:19:23 +08:00
hoshi-hiyouga
41a7e128b6 Merge pull request #6170 from hykilpikonna/main
[+] Show the hostname in webui title

Former-commit-id: 1cb2f9da317a8db8f45e887ab57cdfdc0e8b9412
2024-12-04 18:07:29 +08:00
hoshi-hiyouga
6b8c41c3ac Merge pull request #6233 from hiyouga/hiyouga/vlm_zero3
[data] fix vlm zero3 training

Former-commit-id: b0cbd5e3464a8a1a0f1cf709fb107b23a61f34ff
2024-12-04 17:51:10 +08:00
hiyouga
2f09c34980 fix vlm zero3 training
Former-commit-id: 86fe7fe71b51077310357b7b1895522258f9bc7a
2024-12-04 09:40:39 +00:00
JieShen
76dc69ce36 add async call api
Former-commit-id: 0f728386d88cf8253250c6650555d41578114a0c
2024-12-01 22:18:05 +08:00
JieShen
6c9d05539a add vllm_infer script
Former-commit-id: 4daab843a3aa096b35e5d3832c01fac4271e4604
2024-11-29 14:22:20 +08:00
Azalea
b6bc17f730 [U] Compute hostname differently
Former-commit-id: fbc735972af6facdaba169603a4c77e613b2e8d7
2024-11-28 22:23:41 -05:00
hoshi-hiyouga
c07ba8ccc0 Merge pull request #6175 from hiyouga/hiyouga/add_qwq
[model] add QwQ

Former-commit-id: da8f565c359004d811481b8b85f2a36f30e95e23
2024-11-28 17:01:53 +08:00
hiyouga
ed86f621a0 add qwq
Former-commit-id: acad977356a7f2e729eb6f2cb919a416b18f8add
2024-11-28 08:50:57 +00:00
Azalea
c6a3175bbf [+] Show the hostname
Former-commit-id: 410847656a760fe4c2c310b0d770072392d7aefb
2024-11-28 12:25:02 +08:00
wangdepeng
452291417d fix: training gets stuck when tokenized_path is not None and load_from_disk returns a Dataset
Former-commit-id: cbf9da35728daaf98d92e699e891e334c74af1e5
2024-11-27 16:44:42 +08:00
hoshi-hiyouga
ab9db8b7c7 Merge pull request #6156 from hiyouga/hiyouga/add_o1
[data&model] add marco-o1, skywork-o1 and openo1

Former-commit-id: fa8aa1a3bcb49357799ec30fbb3f143a015e5d58
2024-11-27 14:36:01 +08:00
hiyouga
877e2ea791 fix dataset
Former-commit-id: d4a2d299414984a4043d30034c5c95e2d717a49e
2024-11-27 06:27:44 +00:00
hiyouga
6ea42d5b63 add skywork o1
Former-commit-id: 272a6fe972de926e5841c1570995f4e6fed9f28d
2024-11-27 05:51:59 +00:00
hiyouga
31c117e696 Merge remote-tracking branch 'origin/main' into hiyouga/add_o1
Former-commit-id: 5da8c00b233f96e51cf3bac7f25e3e61659d0cb7
2024-11-27 05:36:41 +00:00
hoshi-hiyouga
04f057334f Merge pull request #6157 from hiyouga/hiyouga/fix_ci
[ci] pin tokenizers version

Former-commit-id: 0357d7530d16699e728bc648abd08ea309e84865
2024-11-27 13:33:04 +08:00
hiyouga
99a54d06ca pin tokenizers version
Former-commit-id: 2b747737f0be2caeb737fe87dad6bf5902b4a588
2024-11-27 05:24:58 +00:00
hiyouga
8332c85f37 add marco-o1 and openo1 dataset
Former-commit-id: 51d49e075470951f109bcdde136203f972450c2e
2024-11-27 04:20:23 +00:00
hoshi-hiyouga
fcf1a3df62 Merge pull request #6152 from hiyouga/hiyouga/add_num_proc_in_data_load
[data] add num_proc in load_dataset

Former-commit-id: d8258ba7e792d5f17ae80d5e8b303e8fa820f162
2024-11-27 00:16:15 +08:00
hoshi-hiyouga
f4f52ae67d Merge pull request #6151 from hiyouga/hiyouga/fix_mllama
[model] fix mllama cross mask

Former-commit-id: 7e64661c1fc53c4d3d9fd915162b762e403b1991
2024-11-27 00:07:54 +08:00
hiyouga
0b08d5882a fix #6149
Former-commit-id: b581b272793314a9602f4dc2fb646a988a6249df
2024-11-26 16:03:02 +00:00
hiyouga
62eeafaba6 fix mllama cross_mask
Former-commit-id: c33967308bebd99489d28bd5a879525cf304c1f9
2024-11-26 15:56:58 +00:00
hoshi-hiyouga
5a52e41399 Merge pull request #6141 from hiyouga/hiyouga-patch-1
[misc] chore: lint

Former-commit-id: ba2b94c68eb08798792be76f95b94b358ce69f44
2024-11-25 23:02:11 +08:00
hoshi-hiyouga
e8083f8f3f lint
Former-commit-id: 57c3cf1f498d5ffafdc8c06e0f8713f8ff77de81
2024-11-25 22:55:56 +08:00
hoshi-hiyouga
338b3a03f0 Merge pull request #6140 from hiyouga/hiyouga/fix_mllama
[data] fix mllama plugin

Former-commit-id: b7e220a7d82db26cbe7ced9ed30332418cc4fa20
2024-11-25 22:32:07 +08:00
hoshi-hiyouga
c8b01b41ac fix #6139
Former-commit-id: a4e9552b9ade6ebb22d782f0412003279ddca23c
2024-11-25 22:22:06 +08:00
hoshi-hiyouga
6d08a418ed Merge pull request #6137 from hiyouga/hiyouga/fix_mllama
[model] fix mllama hidden_size

Former-commit-id: 54f1d3f4064b9d37261883e8399c8e7909178857
2024-11-25 20:17:33 +08:00
hoshi-hiyouga
e3066d1489 fix visual patch
Former-commit-id: ac51fa37cc23518b30a6123e188964dce39be82f
2024-11-25 20:06:06 +08:00
hoshi-hiyouga
487e3f2507 fix #6136
Former-commit-id: b84e5d91a070c473ea820c379bf9b5abbca6df2c
2024-11-25 19:43:42 +08:00
hoshi-hiyouga
b82a53cad8 Merge pull request #6127 from hiyouga/hiyouga/dev_version
[misc] set dev version

Former-commit-id: cb0a51031324c9fdf0c1fedf237692a40c2091d9
2024-11-25 01:42:29 +08:00
hiyouga
5bec82ca9d set dev version
Former-commit-id: a0aea74100a9505664023f6a46fc290e332dfa40
2024-11-25 01:36:49 +08:00
hoshi-hiyouga
57354fc990 Merge pull request #6124 from hiyouga/hiyouga/release
[release] release v0.9.1

Former-commit-id: f61cdd99fd282612884c92d36e111ad46b4e0d00
2024-11-25 00:20:02 +08:00
hoshi-hiyouga
89f240805c Merge pull request #6126 from hiyouga/hiyouga/fix_vllm
[inference] fix vllm

Former-commit-id: c5025c3ee6e67e62724cc3f34fbf8aa9968590f5
2024-11-25 00:19:54 +08:00
hoshi-hiyouga
27bbea886c Merge pull request #6010 from XYZliang/fix-#4316
Increase shm_size to 16GB in docker-compose.yml

Former-commit-id: 73194233f9f1aa8299be1360deb25b753338e168
2024-11-25 00:16:42 +08:00
hoshi-hiyouga
3ec3dda33a Merge pull request #6125 from hiyouga/hiyouga/fix_cli
[cli] remove shell=True in cli

Former-commit-id: cf3ec28baa9a9f1ba342fe3a627e85d8799a1912
2024-11-25 00:07:35 +08:00
hiyouga
ae9f338bf7 fix vllm
Former-commit-id: 9ce0e4b07e3733c015137bc93c7e6d53bf25b08e
2024-11-25 00:07:24 +08:00
hiyouga
bf44f76dc7 fix cli
Former-commit-id: 9338c287cc15c0cad8d5ddbdadfb6f64d383c034
2024-11-24 23:56:21 +08:00
hiyouga
c18581f0a4 release v0.9.1
Former-commit-id: a134ad42c65dc4d72e3083c932ddfaaa687c513d
2024-11-24 23:48:41 +08:00
hoshi-hiyouga
9f6c5c4798 Merge pull request #6123 from hiyouga/hiyouga/fix_qwen2vl_vllm
[inference] fix qwen2vl vllm infer

Former-commit-id: 5d886f99e3bd20795d5313dccf9f045d37a0aefc
2024-11-24 23:42:11 +08:00
hiyouga
7bc03ac986 fix qwen2vl vllm infer
Former-commit-id: 3ac98847fdc23129912c8994ed19a8c66fe00b8c
2024-11-24 23:27:24 +08:00
hoshi-hiyouga
85d7e4f4ab Merge pull request #6121 from hiyouga/hiyouga/readme
[readme] update readme

Former-commit-id: d603650a671c3a323f29001fd0cc53563d28f3e0
2024-11-24 03:28:09 +08:00
hiyouga
bf69747f40 update readme
Former-commit-id: 48423afe53d6f6de1a257a33019909009626a42e
2024-11-23 19:27:18 +00:00
hoshi-hiyouga
f1146bf7b6 Merge pull request #6120 from hiyouga/hiyouga/fix_ci
[test] fix ci

Former-commit-id: 573a0978b82986ec45aae16637edb6ff4af54a35
2024-11-24 03:21:11 +08:00
hiyouga
9efd1fec90 fix ci
Former-commit-id: 91c672f0147bb6eb998871a42f8a89992af88528
2024-11-23 19:13:32 +00:00
hoshi-hiyouga
3b91839a55 Merge pull request #5555 from marko1616/feat/llama3.2vl
Support llama3.2 vision

Former-commit-id: 8151dc488585d1cec6d4a0c9c6dcd46a6a57e9f0
2024-11-24 02:49:07 +08:00
hiyouga
bc4421eeef add forbidden modules
Former-commit-id: c9f4d051d0eca7515bab201afdef17f1ac1b3cb9
2024-11-23 18:34:15 +00:00
hiyouga
5003820a6a fix inputs
Former-commit-id: 7d535bb8cdf7e81edda81152e63c8cfe6c9dcc9f
2024-11-23 18:26:02 +00:00
marko1616
cd2485f28d Linter.
Former-commit-id: 719d124f65ebb18ba0a1212751da9909160fb6f1
2024-11-23 16:09:04 +00:00
marko1616
918a367378 Tiny fix.
Former-commit-id: 4c1cef12d812832eed58b5da562ba083104756d3
2024-11-23 16:09:01 +00:00
marko1616
3d35aeca72 Support llama3.2vl.
Former-commit-id: 664229d7d1f7994e1ae68c5d197ab81f081bcd2e
2024-11-23 16:07:35 +00:00
hoshi-hiyouga
53b1e5fd1d Merge commit from fork
[patch] Patch remote OS command injection vulnerability

Former-commit-id: 960897b950e29aa440afa45b4deb9d42d2f6e941
2024-11-21 22:39:44 +08:00
hoshi-hiyouga
b852c895cf do not split save_cmd ret value
Former-commit-id: 1e312072fb4a9f472e2d3fa7e6b4fb0aec00b566
2024-11-21 22:30:23 +08:00
superboy-zjc
aaa7ed8712 [patch] Patch remote OS command injection vulnerability
Former-commit-id: 4678ceea4ce334a8289caf87d86047e67c67c603
2024-11-21 01:52:12 -05:00
hoshi-hiyouga
205aca5b03 Merge pull request #6078 from wtmlon/support-efficient-tokens-calculation
support effective tokens calculation on sft/dpo

Former-commit-id: d0510e6d49b43c5ffadd8af653c3bdecc1582417
2024-11-20 13:43:15 +08:00
Ting
87b1f851f1 code refactor
Former-commit-id: ee3f85aa9677d0aeecb3bc396530d2cd7c50dce5
2024-11-19 20:33:18 +08:00
Ting
fca814b30d update
Former-commit-id: 516ed0ea5fed8c74fe3669a7e85dd89b5a0ec3c2
2024-11-19 19:12:10 +08:00
Ting
a20c2b6ecf update
Former-commit-id: a3e8ca53e654136242197a2da872cc0e5cf67880
2024-11-19 19:10:07 +08:00
Ting
fee94e1c54 support efficient tokens calculation on sft/dpo
Former-commit-id: b157d5cccdeb42412b8b440d25d5bdfa8a50be68
2024-11-19 17:15:47 +08:00
hoshi-hiyouga
047a596542 Merge pull request #6065 from hiyouga/hiyouga-patch-1
[misc] fix dep package version

Former-commit-id: 34a09e6cd1a8b1c2acddf837f1c787978bc526f5
2024-11-18 21:13:59 +08:00
hoshi-hiyouga
3d45606984 fix #6061
Former-commit-id: 4eb0b6763f0a1b3cde89bd5c69760178bb35d303
2024-11-18 20:56:44 +08:00
hoshi-hiyouga
310c107d56 Merge pull request #6052 from hiyouga/hiyouga-patch-1
[trainer] fix DPO metrics

Former-commit-id: 94add263fe874d2be1b37110faf5da7a5096df6d
2024-11-16 16:20:12 +08:00
hoshi-hiyouga
089e4d9e96 fix #6050
Former-commit-id: 028ea3d9b4fa4ab74a969ac80e61a449d6c15e74
2024-11-16 16:11:16 +08:00
hoshi-hiyouga
ae56c3cf49 Merge pull request #6046 from hiyouga/hiyouga/add_code_model
[model] add qwen-coder and opencoder

Former-commit-id: 5b485671aee8dd2f775371d0b9ff3d0d043159f3
2024-11-15 21:58:03 +08:00
hiyouga
0a0288a286 add qwen-coder and opencoder
Former-commit-id: 9669a42704cd40bdfc76ca278cc6a562549bc27d
2024-11-15 21:48:38 +08:00
XYZliang
25da686758 Increase shm_size to 16GB in docker-compose.yml to optimize shared memory allocation for large-scale model fine-tuning tasks.
This pull request increases the shm_size parameter in docker-compose.yml to 16GB. The goal is to enhance the LLaMA-Factory framework’s performance for large model fine-tuning tasks by providing sufficient shared memory for efficient data loading and parallel processing.

This PR also addresses the issues discussed in [this comment](https://github.com/hiyouga/LLaMA-Factory/issues/4316#issuecomment-2466270708) regarding the Shared Memory Limit error.
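For reference, a minimal sketch of the resulting docker-compose.yml setting; the service name and image below are placeholders, only `shm_size` reflects this change:

```yaml
# Hypothetical docker-compose.yml excerpt; only shm_size mirrors this PR.
services:
  llamafactory:             # placeholder service name
    image: llamafactory:latest
    shm_size: "16gb"        # enlarge shared memory for dataloader workers and NCCL
```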


Former-commit-id: de2616d103b4bdc2458874068b1a223c7de82b4e
2024-11-13 10:13:59 +08:00
hoshi-hiyouga
e2da3cc9fa Merge pull request #5990 from hiyouga/hiyouga/dev_vllm
[generate] fix vllm config args

Former-commit-id: ee0745022bd7484f4f2e6b183088f55d5e60c085
2024-11-11 14:10:35 +08:00
hoshi-hiyouga
c42e5cf401 fix #5988
Former-commit-id: 9e08e206a8ea9926768b0f1d5ff9d7e3e216c269
2024-11-11 13:57:14 +08:00
hoshi-hiyouga
9943cd1c96 Merge pull request #5982 from hiyouga/hiyouga/vllm_args
[args] add vllm config

Former-commit-id: 07d3de5c8376d3c4147411ec603da4254885d2d7
2024-11-10 21:37:18 +08:00
hiyouga
1e6f96508a add vllm config
Former-commit-id: 95365f0ce4f362bde7de8b679b54b548d7055bfb
2024-11-10 21:28:18 +08:00
hoshi-hiyouga
d401974f69 Merge pull request #5973 from JJJJerry/fix_vllm_generate
fix VllmEngine: replace the inputs parameter with prompt

Former-commit-id: d3271416a316e6b92aea3026f6941f6967215a7b
2024-11-10 21:04:38 +08:00
hoshi-hiyouga
09b2dbe859 Update vllm_engine.py
Former-commit-id: 5638fae81c180b7d91eb6aebe6629640beb217d8
2024-11-10 20:57:00 +08:00
JJJJerry
7f8ef8c132 fix VllmEngine: replace the inputs parameter with prompt
Former-commit-id: 5affb1d20921afd3fe48802ff80785e412e2e3aa
2024-11-09 11:45:59 +08:00
hoshi-hiyouga
fcb6283a72 Merge pull request #5971 from hiyouga/hiyouga/fix_webui
[webui] fix extra args

Former-commit-id: d04e21d69e60ab4a350e70da7d1abbf11cfeed0e
2024-11-09 00:25:24 +08:00
hiyouga
0027f46ccc fix extra args
Former-commit-id: 2c98a1bc3d885170f8298872c2ea2e24427fb447
2024-11-09 00:24:27 +08:00
hoshi-hiyouga
967a27695e Merge pull request #5970 from hiyouga/hiyouga/fix_beam
[generation] fix vllm v0.6.3

Former-commit-id: 571d4538568272fd59cc5621e56113329c857546
2024-11-08 23:58:15 +08:00
hiyouga
3ce8a326c6 fix #5966
Former-commit-id: a9a99b545609083533cca1fd1e5480c60ea68750
2024-11-08 23:49:16 +08:00
hoshi-hiyouga
91b56b7baf Merge pull request #5927 from hiyouga/hiyouga/dev_fixmmchat
[fix] chat engines

Former-commit-id: e9c22e2d089927eee3bce052bbf7d6502d0ac544
2024-11-04 16:36:23 +08:00
hiyouga
e2fa961302 add image input type
Former-commit-id: 6fe260e35ff12662b72f26ec9df44e87b9693551
2024-11-04 08:27:20 +00:00
hiyouga
87d6d7dc61 fix chat engines
Former-commit-id: 3a220b7992d265c77d9a1a406ef86eefbc699cfe
2024-11-04 08:18:12 +00:00
hoshi-hiyouga
00019e2ca4 Merge pull request #5926 from hiyouga/hiyouga/dev_deps
[version] update datasets version

Former-commit-id: 4a24e8fc8e1c229ef8751bd7eafe024661d46661
2024-11-04 16:04:00 +08:00
hiyouga
b104739d63 update datasets version
Former-commit-id: feba2c6418a15715fee77a34428fa3cf47fcee5b
2024-11-04 07:52:26 +00:00
steven
6ef0d13e42 support granite3 models
Former-commit-id: 8cff612e55eb7df116e51c4dd21e7a42543e7a1f
2024-11-04 10:35:03 +08:00
hoshi-hiyouga
b238d1aa04 Merge pull request #5914 from hiyouga/hiyouga/dev_read
[misc] update readme

Former-commit-id: 2897696bad6bcc2d826845750c0c913882449829
2024-11-02 21:44:10 +08:00
hoshi-hiyouga
aa497d5d96 Merge pull request #5475 from menibrief/main
Fix phi-3-small issues 

Former-commit-id: c1daf49a967f6c0b641c9639a78971275aaa7cae
2024-11-02 21:31:34 +08:00
hiyouga
fecf04b2f4 fix phi3 template
Former-commit-id: b62131a3c5b4ff6f2969a8041e6e7b9cf2c444ed
2024-11-02 21:31:23 +08:00
hiyouga
3f157e2f6f update readme
Former-commit-id: 94bae8360b1aa124cc57dca481b9e686ba559f31
2024-11-02 21:28:04 +08:00
hoshi-hiyouga
c7c558562e update template
Former-commit-id: 3559ef6115a831dcd1adf7210995ffd62890cff6
2024-11-02 21:21:22 +08:00
hoshi-hiyouga
c2ea5fb618 Merge branch 'main' into main
Former-commit-id: 154f504fc2cebaae2b58c0121d6d8d8016db1bb2
2024-11-02 21:20:27 +08:00
hoshi-hiyouga
fa9c32bb8d Merge pull request #5913 from hiyouga/hiyouga/dev_metrics
[train] support gather DPO metrics, fix return output

Former-commit-id: a17ac67f22c4de7699a8f2c1d4980af4babd2c7e
2024-11-02 21:13:43 +08:00
hiyouga
c610deb5a2 fix webchat
Former-commit-id: 071fe40f209156f994c069507a2d53cc4f586d67
2024-11-02 21:04:18 +08:00
hiyouga
2bb3255e74 fix dpo metrics
Former-commit-id: 57029280da825a39fbf5a05097921b861f126669
2024-11-02 20:59:01 +08:00
hoshi-hiyouga
b28b74c71e Merge pull request #5880 from sd3ntato/make-image-parametric
make base image parametric.

Former-commit-id: e2ea7c8b67cf598bba2b2b298e638b23712f14b3
2024-11-02 20:26:14 +08:00
hoshi-hiyouga
1ed921bff7 Update Dockerfile
Former-commit-id: 89a1c1eb6d717b20107c06a645652b87fba388e8
2024-11-02 20:20:26 +08:00
hoshi-hiyouga
80f634cc95 Merge pull request #5910 from Cuiyn/index
Support Index series models.

Former-commit-id: b74d9fa8efeb4f52ba0e20538ad90c8b40492e29
2024-11-02 20:16:54 +08:00
Cuiyn
a3eb5e200c fix: rename to Index-1.9B-Character-Chat and Index-1.9B-Chat-32K
Former-commit-id: 95ab64749155a781ab5e55b989388ccd9e094c8d
2024-11-02 20:04:14 +08:00
hoshi-hiyouga
2d02c0e22d Merge pull request #5912 from hiyouga/hiyouga/dev_logging
[misc] support rank0 logger

Former-commit-id: ed34a6322814f302f050ba8ca4ecc53689f4d646
2024-11-02 18:48:41 +08:00
hiyouga
093eda2ad6 support rank0 logger
Former-commit-id: 84528eabe560091bfd866b6a0ca864085af7529b
2024-11-02 18:31:04 +08:00
Cuiyn
dbaf621f57 Add support for Index
Former-commit-id: 4e6dba16ca1755235d2ae117b53b68c5ae2f239a
2024-11-02 13:45:27 +08:00
hoshi-hiyouga
ceb701c2d4 Merge pull request #5909 from hiyouga/hiyouga/dev2
[data] support auto convert for single image, add image_dir argument

Former-commit-id: ced43fa0c84f7d0792694721d2c5e572c0d0e718
2024-11-02 13:43:04 +08:00
hoshi-hiyouga
29ad3783f5 Merge pull request #5907 from hiyouga/hiyouga/dev
[data] fix template replace behavior

Former-commit-id: 0a51c0bfdd9b193d2a3ac34a62fe8b073569c41a
2024-11-02 13:42:53 +08:00
hiyouga
fa2386e73c fix #5904
Former-commit-id: 079ebe038b11f36a11681dc8688f8ea48bccf324
2024-11-02 13:08:15 +08:00
hiyouga
e0045e8386 fix #5883
Former-commit-id: 73b93caa9ac16ffd8d3faae24d16210d85ae9754
2024-11-02 13:06:34 +08:00
hoshi-hiyouga
b94c941196 Merge pull request #5906 from hiyouga/dev
[test] update tests

Former-commit-id: f95f2824b3c078508408da23e1958292dc96d0fa
2024-11-02 12:50:43 +08:00
hiyouga
ba66ac084f update tests
Former-commit-id: 4e92b656e324725048d914946e70867be20032ff
2024-11-02 12:41:44 +08:00
hoshi-hiyouga
83479c9ef0 Merge pull request #5895 from hiyouga/dev
[inference] support multiple images

Former-commit-id: 491132e5db483fd00aa9f3cbc201b8fb83693f57
2024-11-01 16:52:55 +08:00
hiyouga
df8ac15ef0 add examples
Former-commit-id: 9eff9625adba643263bc6cba480f30edc6bb086a
2024-11-01 08:41:54 +00:00
hiyouga
8cea5cd967 support multi-image inference
Former-commit-id: 8083e4607549e805eb308c4e93c8aa256202f438
2024-11-01 07:25:20 +00:00
Valerio Mariani
a2d7d6a518 make base image parametric.
default `BASE_IMAGE` is nvcr.io/nvidia/pytorch:24.02-py3 for backward compatibility
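As a sketch of how such a parametric base image can be overridden at build time (the service and context names are assumptions, not taken from this commit):

```yaml
# Hypothetical docker-compose.yml excerpt passing the Dockerfile's BASE_IMAGE build arg.
services:
  llamafactory:             # placeholder service name
    build:
      context: .
      args:
        BASE_IMAGE: nvcr.io/nvidia/pytorch:24.02-py3   # default, kept for backward compatibility
```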


Former-commit-id: db8d00536acb02b29d10a3d735438d194656ece3
2024-10-30 21:53:32 +01:00
hoshi-hiyouga
a63e624eca Merge pull request #5873 from hiyouga/dev
[misc] update readme

Former-commit-id: e02c3bea981dff6beae45a9428d5d88d210db5e1
2024-10-30 17:14:44 +08:00
hiyouga
8596c321ce update readme
Former-commit-id: b3d3b440e8879198603da042441d4b4f84296109
2024-10-30 09:14:01 +00:00
hoshi-hiyouga
54cd799aa0 Merge pull request #5871 from hiyouga/dev
[loss&ui] fix incorrect loss of vlms, add extra args to ui

Former-commit-id: 5f4a62b600ab47db6aab3a1f831ecfe1df4335d9
2024-10-30 17:13:17 +08:00
hiyouga
8185eb1890 fix incorrect loss value for vlms
Former-commit-id: 0aa29a71ce958343a2086090d647eb63b8f5f5be
2024-10-30 08:56:46 +00:00
hiyouga
03213984ec tiny fix
Former-commit-id: b8f4b145506851cf5488cd8551a04d1c7603019b
2024-10-30 08:56:29 +00:00
hiyouga
aeeee9d4b5 support extra args in llamaboard
Former-commit-id: da0a5fd612e2214cc4bcb72516efd768fbe18a20
2024-10-30 08:55:54 +00:00
hoshi-hiyouga
c8a1fb99bf Merge pull request #5581 from Kuangdd01/pixtral-patch
[WIP] Support Pixtral-12B

Former-commit-id: fcddf4ec5c2914f73e23eeda2dbf67b048246669
2024-10-29 22:29:10 +08:00
hoshi-hiyouga
f0181a41ff fix bug
Former-commit-id: e69665746d9fcd17a92ace7d5d9c8de1fc0c29b7
2024-10-29 22:19:04 +08:00
hoshi-hiyouga
f6b06d0c6f Update mm_plugin.py
Former-commit-id: 830315cb438e75b589017fd57f70d0a513780a53
2024-10-29 22:16:22 +08:00
hoshi-hiyouga
1047217f78 Update template.py
Former-commit-id: 99a01547ca31adade1c48feae5796e06b73d387c
2024-10-29 22:11:21 +08:00
hoshi-hiyouga
16a9a44849 Update visual.py
Former-commit-id: 6f1db7b9abfbdea1781452388d66df3e9f9a5dd9
2024-10-29 22:10:29 +08:00
hoshi-hiyouga
58fb24ce41 Update collator.py
Former-commit-id: 941fa8a0d9c3a9106ad0af6e776db7e57f69548f
2024-10-29 22:03:42 +08:00
hoshi-hiyouga
a9afffa246 Update hf_engine.py
Former-commit-id: 7412a8b95678ca6827a8c42c9f4d38115fede897
2024-10-29 22:00:59 +08:00
hoshi-hiyouga
1fdd053022 Update README_zh.md
Former-commit-id: e14535aa97062d0e57bbf1230c050f2c56a45556
2024-10-29 21:58:03 +08:00
hoshi-hiyouga
0a833968a0 Update README.md
Former-commit-id: 65be32f6b12c2be80a12a4e903001820f64a0833
2024-10-29 21:57:28 +08:00
hoshi-hiyouga
58b681de78 Merge pull request #5801 from NLPJCL/main
Projects using LLaMA Factory: RAG-Retrieval uses LLaMA-Factory as the fine-tuning framework for generative reranker tasks.

Former-commit-id: cc9995cc99a7d7ba2958094bcd3d597eddc349e3
2024-10-29 21:20:16 +08:00
hoshi-hiyouga
22d5fc5f4c Update README_zh.md
Former-commit-id: 9e356805aa631810fd5897cb6a6cfc1fe0e939ab
2024-10-29 21:19:17 +08:00
hoshi-hiyouga
cc0119f698 Update README.md
Former-commit-id: 9181486c630bca23f68868128c9b0e04a0d7cea4
2024-10-29 21:18:15 +08:00
hoshi-hiyouga
580cedebde Merge pull request #5857 from hiyouga/dev
[train] fix saving processor

Former-commit-id: 5aaa90124483c8b54225797fa91065ed072d171a
2024-10-29 21:12:04 +08:00
hiyouga
43bd1b070c fix #5749
Former-commit-id: c36c5c61fc022b3f144d4c798ec584c4954b0181
2024-10-29 13:02:13 +00:00
Kingsley
42aa9c65be Merge branch 'hiyouga:main' into pixtral-patch
Former-commit-id: 438302edfdb66b6397266b8b17ac66f60a89300c
2024-10-29 21:01:25 +08:00
hoshi-hiyouga
b0b87fa33f Merge pull request #5852 from hiyouga/dev
[misc] several important updates

Former-commit-id: 5bc5ddf3b62abc132df08be477ffb46e9257e2ba
2024-10-29 20:30:02 +08:00
hiyouga
22912eba1a fix pissa
Former-commit-id: 4ac65a318b87249d42ffa73cbd3b33f0934f2afa
2024-10-29 12:18:45 +00:00
hiyouga
e2748fa967 fix #5747
Former-commit-id: 26d07de349c98b547cd6a6166ea20616d08ba343
2024-10-29 10:47:04 +00:00
hiyouga
248d5daaff use pre-commit
Former-commit-id: 7cfede95df22a9ff236788f04159b6b16b8d04bb
2024-10-29 09:07:46 +00:00
hiyouga
8f5921692e update requires
Former-commit-id: cae0e688ddcead370821e126c192bddc53ff6017
2024-10-29 16:10:07 +08:00
grok
e880eb8844 Update README_zh.md
Former-commit-id: e0c4aa091e71bcb4be44f5a07bdda5df6b949af2
2024-10-23 23:50:56 +08:00
grok
dc076c4e52 Update README.md
update english readme

Former-commit-id: c295a8b549603ec1d58f460c041401e1393d18b5
2024-10-23 23:49:47 +08:00
grok
8306e93ef3 Update README_zh.md
Former-commit-id: 77e39e7c34410a24055ab63cc088e6ec768d49c7
2024-10-23 23:36:14 +08:00
hoshi-hiyouga
6a2cd129c0 fix #5797
Former-commit-id: 71d23ed3444f24b31785d9f0f6dd711f6f516731
2024-10-23 20:49:44 +08:00
KUANGDD
30d7f6a22e rm comment
Former-commit-id: 80b58eaaec1996571d24b2dc2b73859cc28911a1
2024-10-23 15:50:59 +08:00
KUANGDD
5440ebbae6 rm useless code
Former-commit-id: 2dc337a49a8646ce916981b2914718e7472b5946
2024-10-23 15:38:11 +08:00
KUANGDD
22dbe694e9 Merge branch 'pixtral-patch' of https://github.com/Kuangdd01/LLaMA-Factory-X into pixtral-patch
Former-commit-id: 10c58488558549c382f9bba43c487d7f9222f16e
2024-10-23 15:32:50 +08:00
KUANGDD
64ac6ca396 rm import torch
Former-commit-id: 561a0f8155afca20ac699e124320b0eaef6dac07
2024-10-23 15:32:33 +08:00
Kingsley
377d37fa7f Merge branch 'hiyouga:main' into pixtral-patch
Former-commit-id: f3ad96aea6f2602981bf5f27d2bbd1f729d11aa0
2024-10-23 15:30:03 +08:00
KUANGDD
55296744a8 Merge branch 'pixtral-patch' of https://github.com/Kuangdd01/LLaMA-Factory-X into pixtral-patch
Former-commit-id: 3c1694157d61d88fd53fb3c9197196013b98e0e7
2024-10-23 15:28:19 +08:00
KUANGDD
d0889012c2 modify style & little change
Former-commit-id: c988477d14dc656450d5fec31895781b7f9f7dce
2024-10-23 15:24:07 +08:00
hoshi-hiyouga
3a8b2890eb fix test
Former-commit-id: a0a23f79d2d94d68e3bf1e90b95beff817bc409c
2024-10-22 12:35:36 +08:00
hoshi-hiyouga
5b2284a51d fix #5768
Former-commit-id: 9f9e3fd186ce917f0b323c8cd42cf050ed238c58
2024-10-22 11:06:22 +08:00
hoshi-hiyouga
4807d8a4ef Update misc.py
Former-commit-id: fe9a927f1ea8e44e0429b437e5feecf13e34e9aa
2024-10-17 19:48:51 +08:00
hoshi-hiyouga
c6e1313977 Update loader.py
Former-commit-id: 3b229a27a108b840e6bed3c8684737f51ce9faf4
2024-10-17 19:48:12 +08:00
hoshi-hiyouga
66819fd3ee Update README_zh.md
Former-commit-id: a829d4a28fae77b08a6ea451479c71578b3b552f
2024-10-17 19:47:33 +08:00
hoshi-hiyouga
bd85e370be Update README.md
Former-commit-id: f62b0682e476dd62a4a3ac5620f8fc244e8bf150
2024-10-17 19:46:36 +08:00
BUAADreamer
cc097174cc tiny fix [skip ci]
Former-commit-id: 937f69190e529fe7bf0fdf58d7bbb39017854c5e
2024-10-16 15:55:30 +08:00
KUANGDD
7d135bbdb8 remove useless codes
Former-commit-id: 01247fcdde215398ec67cbd6cf1bc6cfb512a9ba
2024-10-16 01:14:51 +08:00
KUANGDD
4845a76535 fix bug for webui infer
Former-commit-id: 17768832908cc59ab64ed72522b2954c575ce21d
2024-10-16 01:09:33 +08:00
Kingsley
67645c0db8 Merge branch 'pixtral-patch' of https://github.com/Kuangdd01/LLaMA-Factory-X into pixtral-patch
Former-commit-id: 995eae4333f4346734d76f7d18cfffb5147e2f7b
2024-10-15 17:09:56 +08:00
Kingsley
f463b3f038 add extra test for pixtral mm_input
Former-commit-id: c706ec8a5dbd3c72ab15a709668624c0c7bbd8ce
2024-10-15 17:09:24 +08:00
BUAADreamer
01defc2779 tiny fix [skip ci]
Former-commit-id: 95f968eec2628cb26b3c4f4d4e81a9536e23cc31
2024-10-15 13:53:33 +08:00
Kingsley
c9e77ab352 Merge branch 'hiyouga:main' into pixtral-patch
Former-commit-id: da6eb7bab2b4e551366d33b81083773cfd45ec08
2024-10-15 13:41:10 +08:00
BUAADreamer
c3de160d1c fix some
Former-commit-id: c9b644693996f96d234349823911fc267635acb9
2024-10-15 13:30:41 +08:00
KUANGDD
3693d7b571 plugin test & check
Former-commit-id: 76c7c8c5a729b8b43e3a31efc44f2c9c2678bf3d
2024-10-15 12:12:46 +08:00
hiyouga
a63144c28f fix #5705
Former-commit-id: 0c85fd253f860eee3c7b9b5a4e77ffbf93af372a
2024-10-15 10:10:16 +08:00
KUANGDD
2b3b0473cd required transformers version
Former-commit-id: d9915db327a038c93b5e3421c90b1f218fb23f92
2024-10-14 21:11:09 +08:00
Kingsley
9d929897ce remove bs condition
Former-commit-id: bf3520178ab66058c62a9cf31b42f36a9d88ce20
2024-10-14 16:55:59 +08:00
Kingsley
313a5e1494 Merge branch 'hiyouga:main' into pixtral-patch
Former-commit-id: 28696e2f945a9f55e4ca9e9dc5ebd8af9df45d8b
2024-10-13 17:42:02 +08:00
hiyouga
74dd25224a fix #5668
Former-commit-id: 116f2946201d55305f6b57b3f926670a3e2173c8
2024-10-12 01:24:43 +08:00
hiyouga
c7efc7f2ed tiny fix
Former-commit-id: 1fe424323b212094856f423351dc2a15774d39c3
2024-10-11 23:51:54 +08:00
hoshi-hiyouga
c71c78da50 Merge pull request #5665 from johnnynunez/main
vllm 0.6.3

Former-commit-id: 6f8a9581fa406e255ca6955794f16cc06b5cf287
2024-10-11 23:45:58 +08:00
hoshi-hiyouga
f4897da009 Merge pull request #5642 from huniu20/main
[hub support] add modelers hub support

Former-commit-id: ea96c8ba3f81546df1311ca738ff961aa4ef7446
2024-10-11 23:45:17 +08:00
huniu20
a6951db970 bugs fixed
Former-commit-id: 5457ba7512d70564ea784b9ec6bdb86cfd2d7e3d
2024-10-11 19:56:13 +08:00
Johnny
9d27aaa38f Update parser.py
Former-commit-id: 60b13c86f4feaffbb43f5a23a28376fe416ed118
2024-10-11 12:29:33 +02:00
Johnny
3b19b6f31b Update setup.py
Former-commit-id: f85b756ffafa241304624819b7612603ad5e0ee3
2024-10-11 12:29:09 +02:00
huniu20
5b15ca0b0b add om_hub_token argument
Former-commit-id: b3214e69d32067a1c22dbd60c2cde1545ba75b19
2024-10-10 17:16:46 +08:00
huniu20
aad79127e6 1. add model and dataset info to support webui
Former-commit-id: 92f6226f3fecbd9af744a7232dda2c68b2bb0d86
2024-10-10 16:46:34 +08:00
huniu20
c42dcab32b 1. add modelers hub support
Former-commit-id: 14678eb444d8181176745d18d4a6865fd6860f58
2024-10-09 17:21:37 +08:00
Kingsley
be519c84d9 Merge branch 'hiyouga:main' into pixtral-patch
Former-commit-id: 2076d00dfbe1279a91207157fd6d9a118427626a
2024-10-08 21:04:08 +08:00
hiyouga
b2dc6dc59a tiny fix
Former-commit-id: d8ddd07c2ed14d871fb25743c20265fc99e3e221
2024-10-08 17:48:56 +08:00
hoshi-hiyouga
9df626dc18 Merge pull request #5546 from chengchengpei/cpei/refactor
1. log exceptions in detail; 2. check whether processor is None before calling it

Former-commit-id: 81c23ebdd7ef46102437b1d352818fe205fa3851
2024-10-08 17:46:54 +08:00
hoshi-hiyouga
8d4b9200a1 Merge branch 'main' into cpei/refactor
Former-commit-id: c2951f17f726470bcd5dff6bf7028ec90212442e
2024-10-08 17:31:17 +08:00
hoshi-hiyouga
7806df46ba Merge pull request #5615 from johnnynunez/patch-1
Update setup.py (Compatible with Jetson)

Former-commit-id: baa3cd4c0db2502cf8a606e034df20492a83e6b2
2024-10-07 16:50:34 +08:00
hoshi-hiyouga
bba026a212 Update parser.py
Former-commit-id: e7d291605f184f6ac48429015e15755192d2f274
2024-10-07 16:27:23 +08:00
hoshi-hiyouga
6e111eb29f Update setup.py
Former-commit-id: 4c017fe014b708d79c65eff24329b9c324399461
2024-10-07 16:26:50 +08:00
Johnny
2b69ae0eb2 Update parser.py
Former-commit-id: 55c449b54aec04e2141bffe75d4016cbac9ef4c5
2024-10-07 10:17:45 +02:00
Johnny
13d73574ef Update setup.py
Former-commit-id: 73d3f93496712edace38711613e14768922d6c96
2024-10-07 10:16:53 +02:00
hiyouga
bc264807ae update readme
Former-commit-id: 915f25e9b34fc4554fd1198a383f96a2536fec60
2024-10-07 11:31:18 +08:00
Johnny
f9815dd20a Update parser.py
Former-commit-id: f832edc8dc0e2b78c12dc8edd702fe147a0a5292
2024-10-06 20:34:19 +02:00
Johnny
1f58943b32 Update setup.py
Former-commit-id: b4de2c84b078194bb6358697fd6815d622843f58
2024-10-06 08:53:55 +02:00
hiyouga
6476507429 fix #5611
Former-commit-id: 3bef07ecf0557999bb0b33b650a778addc8e5b91
2024-10-06 10:34:55 +08:00
hiyouga
35862d19ec fix #5611
Former-commit-id: 76c813d37c1d945a8bb6d3e4168e15fbe97c7a87
2024-10-06 10:33:11 +08:00
Kingsley
1272cb00df Merge branch 'hiyouga:main' into pixtral-patch
Former-commit-id: 9372ac93f304db438383d539ccd00bffe7415dbc
2024-10-01 00:52:31 +08:00
Kingsley
e9ac26db4c unfactor md
Former-commit-id: 1a79d61f8d25a4c1127c2f393418e14ab9d2abd4
2024-09-30 23:36:16 +08:00
hiyouga
20ee1d2e19 fix #5542
Former-commit-id: cf28e7418c2eb07e86923a53ef832ef218e45af1
2024-09-30 23:28:55 +08:00
Kingsley
cbc1dd0c88 sync with former
Former-commit-id: f8707e52586182144c4fb70c7c0de8bf7044ef5e
2024-09-30 20:27:05 +08:00
Kingsley
870bbabbc4 register model fix
Former-commit-id: 077d8e3c0344d944705254cc5a2cd06c9f5dc116
2024-09-30 20:04:47 +08:00
Kingsley
8fd84c375e fix some errors due to inconsistency of model cards
Former-commit-id: dd83265b9b8768eb8732f59ace128dfe4aac1c47
2024-09-30 19:58:34 +08:00
Kingsley
32b5364051 Merge branch 'hiyouga:main' into pixtral-patch
Former-commit-id: df0baeaa3fd093433d92b7921d3a57d88061d6d4
2024-09-30 19:33:29 +08:00
hiyouga
cf72aec098 add patch processor func
Former-commit-id: 0cd6327da6a044b4a62f203a662e5bb6068d9c29
2024-09-30 17:07:43 +08:00
hiyouga
87849d12d2 lint
Former-commit-id: d7564365f4008e468f89102879d6e65c627ad447
2024-09-30 17:00:33 +08:00
hoshi-hiyouga
a19512436f Merge pull request #5585 from shing100/main
Support EXAONE3.0 Model

Former-commit-id: 2fba28d586757bbb3ac57e4dd10c756381766b51
2024-09-30 16:56:08 +08:00
hoshi-hiyouga
6c89d93aea Update constants.py
Former-commit-id: 7c04e1caea38fd1e1e9abcf8ed1bbdc24ddd6df1
2024-09-30 16:47:52 +08:00
hoshi-hiyouga
345f40a660 Update template.py
Former-commit-id: d893289b595c0530b5aeb8902369885118809b86
2024-09-30 16:39:48 +08:00
Zhangchi Feng
8b9a814653 Merge branch 'main' into pixtral-patch
Former-commit-id: 0cf52d48fbc505e2fba29e5df0f2e6722db7ac79
2024-09-30 12:37:03 +08:00
shing100
05fabf9095 fix chat template Exaone3.0
Former-commit-id: 2e32864b59c1ef1a78f3eb1c28fbf578cfaa19cd
2024-09-30 09:44:21 +09:00
Geun, Lim
95eede911a Update README_zh.md
Former-commit-id: c4bf9d86e14a9d7a5ed5f9c49d73006d13df2707
2024-09-30 09:25:02 +09:00
Geun, Lim
7bc7f7d673 Update README.md
Former-commit-id: d014eb931cd9ed70abb8a466281668a0b00ba9f9
2024-09-30 09:24:44 +09:00
shing100
054fdbe186 update docs for Exaone3.0 model support
Former-commit-id: e6fbf8fd7c84cfb11a0a4a173657b1541806b5f9
2024-09-30 09:19:27 +09:00
shing100
f0f80819a0 add Exaone3.0 template
Former-commit-id: f7478af1d04353ab13236323e3bfb96fd2870fce
2024-09-30 09:18:25 +09:00
hoshi-hiyouga
e702678252 Merge pull request #5574 from BUAADreamer/main
support llava-next(video)/video-llava

Former-commit-id: bf7611e15a7e7ee9fb870efeba9bdac358c6d462
2024-09-30 00:22:43 +08:00
hoshi-hiyouga
553579986a Update common.py
Former-commit-id: 7f7f4b67b8b757e3787a78993cf083552cd5fbbd
2024-09-29 23:58:09 +08:00
hoshi-hiyouga
622cb04f27 Update README_zh.md
Former-commit-id: 01ee426c745f522bd0dee79ace2c6b2eb52d0510
2024-09-29 23:56:32 +08:00
hoshi-hiyouga
f3ba11a432 Update README.md
Former-commit-id: 45b79a78f62a1d916083f8c74ebf08ad0fb8fe6f
2024-09-29 23:55:55 +08:00
hoshi-hiyouga
8b1f53bca5 Update README.md
Former-commit-id: 0bcf6a30ae95d5c76e477f829f6ba633d9ccdd64
2024-09-29 23:55:21 +08:00
hoshi-hiyouga
ac25fef80e Update constants.py
Former-commit-id: a0dd90fa41fc10d7944521d95a312631be64af8f
2024-09-29 23:45:34 +08:00
hoshi-hiyouga
15f819d273 Update test_mm_plugin.py
Former-commit-id: 8490ba1bb3b429d10c5a1cf791aa1bfe3547fd5f
2024-09-29 22:59:47 +08:00
BUAADreamer
f2d1c43d28 fix template
Former-commit-id: cfd05bb009895a936c59f3d97afebf2ed8006f84
2024-09-29 22:56:36 +08:00
BUAADreamer
464acc7d6c fix template
Former-commit-id: 6291c933448022ae80fd85d7f1d785bf6c0fcb25
2024-09-29 22:55:45 +08:00
BUAADreamer
a96c5da737 fix constants
Former-commit-id: e66a338410be6812064a119d8c6a6644e0f035d1
2024-09-29 22:40:43 +08:00
BUAADreamer
28d09b81c9 Merge branch 'main' of https://github.com/BUAADreamer/LLaMA-Factory
Former-commit-id: 2358bdde973dfde3abff251d02f7622e9c144e4d
2024-09-29 22:00:35 +08:00
BUAADreamer
a769d0e3d4 fix constants
Former-commit-id: 69309a23598995aa1937fd8d80732a018c18db87
2024-09-29 22:00:01 +08:00
hoshi-hiyouga
1b98b5e65c Update requirements.txt
Former-commit-id: bd3b235904aae267ead8db1809d06d6935d2ea30
2024-09-29 21:51:23 +08:00
BUAADreamer
3cc5408da7 fix style
Former-commit-id: dc1bdcb69e6f2c605a2c533dab15613affc902f4
2024-09-29 21:39:37 +08:00
Zhangchi Feng
689f5c4554 Merge branch 'main' into main
Former-commit-id: 7566589b820e6030269523e9d08c312594f893ae
2024-09-29 21:32:54 +08:00
BUAADreamer
ab5d042cd3 add more llava-next series template
Former-commit-id: 93f64f2aebf41582d39aa8a2c6059e562ca694b0
2024-09-29 21:29:29 +08:00
BUAADreamer
4d43317aa1 Merge branch 'main' of https://github.com/BUAADreamer/LLaMA-Factory
Former-commit-id: bf6d6eb0bfe00453a77bbe42a3842b856dd2e47f
2024-09-29 20:55:23 +08:00
BUAADreamer
ed3b0c5b40 fix readme_zh
Former-commit-id: b663d664793b79c02db1b91d206dea2beb168e26
2024-09-29 20:55:18 +08:00
hoshi-hiyouga
67a97794ee Update mm_plugin.py
Former-commit-id: 507de0df036e39eae3a3887ded9165bd918ee48f
2024-09-29 20:54:04 +08:00
hoshi-hiyouga
2c7c93cb9b Update mm_plugin.py
Former-commit-id: b8be270f9c97bfcaf431bbd9f06c4c0b83980539
2024-09-29 20:53:34 +08:00
BUAADreamer
4d4fe08d14 fix readme_zh
Former-commit-id: 4621cc3e0b8a5dc7fcfa7cf2d60ff1838aef9a1a
2024-09-29 20:46:47 +08:00
BUAADreamer
85a919b6f7 fix readme
Former-commit-id: 867e7e70dbff207dbd78668af09a638654937f71
2024-09-29 20:45:02 +08:00
BUAADreamer
fe2abe20fc tiny fix
Former-commit-id: 0c7c875d55bc45795a41c0b8a5c407d72b1f3d8d
2024-09-29 20:38:46 +08:00
BUAADreamer
12444720db fix style
Former-commit-id: 7b922803586c05981cd095cfb730061091f0204c
2024-09-29 20:30:57 +08:00
BUAADreamer
510faf5805 fix tests
Former-commit-id: e932907f6f6473bd6917d61a464366cc9918f66c
2024-09-29 18:00:45 +08:00
BUAADreamer
722e01c8ab fix some
Former-commit-id: aeca8c0f978cb9754e0526b40cd431aaf867044f
2024-09-29 17:55:40 +08:00
hoshi-hiyouga
6050e6cff9 update readme
Former-commit-id: e5c8634cbd4e00459894c031ef0e10fcc6ef5775
2024-09-29 05:02:44 +00:00
hoshi-hiyouga
c8abbe4fc3 Merge pull request #5580 from amrear/main
made a small change to a warning about fa2 for gemma2 models.

Former-commit-id: 5e2d90ab976dd55b8c61a68e929d7e5b3583156c
2024-09-29 12:45:03 +08:00
BUAADreamer
f2881c9d4a fix some params of visual regularize
Former-commit-id: 15cbc35af4559dad73c09317e82a63571a8c3540
2024-09-29 12:38:25 +08:00
hoshi-hiyouga
1ded3abdf1 Update attention.py
Former-commit-id: 2adf79c195053bb4541e0317573a2c89da28b5bc
2024-09-29 10:47:41 +08:00
Kingsley
e641f1215a Tiny fix
Former-commit-id: ae66e1a545f4cd209a57fd824f9bfb7e94436cba
2024-09-29 00:00:23 +08:00
Amirreza A
ca736bcab7 made a small change to a warning about fa2 for gemma2 models.
Former-commit-id: e0695a026d822c896cb4f5b33e0c4f88441d75e9
2024-09-28 19:03:36 +03:30
Kingsley
bddb2646bd tiny fix
Former-commit-id: 35bc71b2a68fd303798c35fe22ad29ceea87cf9b
2024-09-28 22:50:53 +08:00
Kingsley
e4c57f54f8 remove some unnecessary if conditions
Former-commit-id: 482d3e5ff3338385da664475fee88c7dc623c993
2024-09-28 02:14:06 +08:00
BUAADreamer
6de82ca843 fix some
Former-commit-id: 12e509da85af76ccf1e9a879a78e450a7b70cc4b
2024-09-28 01:15:33 +08:00
BUAADreamer
b2c02df555 modify some style
Former-commit-id: 36bc408b8296cfc6d565b2f968fb1059bc6d1305
2024-09-28 01:07:38 +08:00
BUAADreamer
ca86d6361e add tests
Former-commit-id: f0ed66bf6f9b45e0c3fddb5179a93363f5a4194f
2024-09-28 00:59:14 +08:00
BUAADreamer
b6fb00e046 add llava-next/llava-next-video/video-llava
Former-commit-id: a4e4239931b0b0e3fd12c9f9bbfd2c201cbc78ca
2024-09-28 00:57:03 +08:00
Zhangchi Feng
86c84972c8 Merge branch 'hiyouga:main' into main
Former-commit-id: 2695dcdf468f9e39e3aeec7892eb3dad399736ee
2024-09-27 18:14:39 +08:00
Kingsley
9390927875 add pixtral template
Former-commit-id: c7b4e47e0fda955272ccd6340b2047fd92acbfcf
2024-09-26 17:14:51 +08:00
Kingsley
c4a585f232 Merge branches 'pixtral-patch' and 'pixtral-patch' of https://github.com/Kuangdd01/LLaMA-Factory-X into pixtral-patch
Former-commit-id: 197bb14e6308bdf9af65eafe7bf06b36dbf96df6
2024-09-26 12:18:25 +08:00
Kingsley
300feb3245 add pixtral template
Former-commit-id: e0bcaa6c6e902e29361438a6d215bbc2535b648f
2024-09-26 12:11:58 +08:00
Chengcheng Pei
cacafb0038 address comments
Former-commit-id: 6311bb2ca266ce156537cfa477202b2904921593
2024-09-25 21:07:51 -07:00
hoshi-hiyouga
6509114259 Merge pull request #5547 from marko1616/chore/llama3.2
Chore: Support llama3.2.
Former-commit-id: 979ecc92a0db6b90ed8249d9a17120d5ed18b6aa
2024-09-26 11:38:34 +08:00
hoshi-hiyouga
7d4cb79822 add modelscope models
Former-commit-id: 4de3081eea9cede78a1f2db65cf22a5731c54447
2024-09-26 11:22:48 +08:00
marko1616
b867e164fe Chore: Support llama3.2.
Former-commit-id: 2741ac784c1a776bd545fa6dffc07b6346273519
2024-09-25 16:08:44 -04:00
Chengcheng Pei
26bbfc084d 1. log exceptions in detail; 2. check whether processor is None before calling it.
Former-commit-id: 0f0a4813db9ca4e9bb5762a781a0a214129284a6
2024-09-25 12:59:48 -07:00
hiyouga
c376eed31d fix ci
Former-commit-id: f354593ca9b13e542fccd8fe2b64ea0ec4db78b2
2024-09-25 23:14:17 +08:00
hoshi-hiyouga
7c595abc38 Merge pull request #5533 from StrangeBytesOrg/add-docker-args
Add additional install options to Dockerfiles

Former-commit-id: c52aa3d5323e270f6b50a51d97a92e79138b7293
2024-09-25 23:04:57 +08:00
hiyouga
c428ab68d8 optionally replace jinja template
Former-commit-id: f15dec3001f785eeac1ed9cc545fab96bac2c4fd
2024-09-25 23:02:02 +08:00
hiyouga
968b9f1852 update readme
Former-commit-id: 826a47909f22b72228cd8944875a13f5f65232b1
2024-09-25 20:13:04 +08:00
hiyouga
018266c66e update readme
Former-commit-id: fe482183ae9d19cc42f78b5cd144ef21b93ec8d1
2024-09-25 19:39:52 +08:00
StrangeBytesDev
111c644bf1 Add additional install options to Dockerfiles
Former-commit-id: 5310af2f2ac8d226b95785d6b1eb0632312871a7
2024-09-24 16:54:46 -07:00
huangpan.foo
ed5c641e8b Add deepseek-v2.5 template
Former-commit-id: e80c1fe798fb2e076c0891a64300f1b6710176b6
2024-09-21 19:33:30 +08:00
hoshi-hiyouga
de72d1f0e7 Merge pull request #5483 from whybeyoung/main
fix: when the function_call value in a function call dataset is invalid JSON, raise a clear error and stop training.
Former-commit-id: 9e36ebebd087cd3b128b9426255d420f3c94353c
2024-09-19 17:01:52 +08:00
hoshi-hiyouga
8bfb856923 flat string
Former-commit-id: f1e7731075e6ded4a5ecac7ef46ca4a318b91597
2024-09-19 16:43:42 +08:00
hoshi-hiyouga
8fdbaab95d lint
Former-commit-id: dd94fdd69c8f36df80d6d70d63ab7403a0e55d46
2024-09-19 16:21:43 +08:00
hoshi-hiyouga
a01668bbe8 fix bug
Former-commit-id: b6d0ee1fd8b555bc6aac8b8686c9a3eea784c3a8
2024-09-19 16:21:21 +08:00
hoshi-hiyouga
3385616a37 improve error message
Former-commit-id: e7735dd487ae4e31c34dcd8e2ea9af0a39d1cf9e
2024-09-19 16:06:00 +08:00
ybyang
1f0d89328d fix: when the function_call value in a function call dataset is invalid JSON, raise a clear error and stop training.
Former-commit-id: 625a0cd7cb5725a0f76c8c19cd23d6c0275bd146
2024-09-19 15:00:10 +08:00
menibrief
a7feab45d5 fix phi-small template
Former-commit-id: 48fb6bae6245dc6d5f72ebfc1c2bd9ffacd51b86
2024-09-18 23:52:30 +03:00
menibrief
f34322afd7 Update README.md
update readme to phi-small template

Former-commit-id: e9df26aa45f916ab0756db3329dff48dcdfce1f1
2024-09-18 23:51:36 +03:00
hoshi-hiyouga
3815fa40b7 tiny fix
Former-commit-id: 1f45d18a780c2aa501f060688a09ff04071379b9
2024-09-19 02:20:24 +08:00
hoshi-hiyouga
c43050b3fa Update README_zh.md
Former-commit-id: 750c57cbcee3ecdd6a9096f1569b9bee282d5ac7
2024-09-19 02:17:59 +08:00
hoshi-hiyouga
3e152872ad Update README.md
Former-commit-id: 40b0e51092289dbf1f2a112cd8c36df399314c8b
2024-09-19 02:16:16 +08:00
hoshi-hiyouga
ae6ad55758 fix webui
Former-commit-id: aa6e65b24451fe9f65d58e5eca5a56eb9aba71e8
2024-09-19 02:13:39 +08:00
hoshi-hiyouga
0118a2fc04 add qwen2.5 models
Former-commit-id: 408a7d7b2e1a2316cbeefade872b732c88191b75
2024-09-19 02:07:54 +08:00
hoshi-hiyouga
4dd81976f4 Merge pull request #5438 from aliencaocao/patch-1
Add qwen_vl to liger kernel supported list

Former-commit-id: c706ff61dc3e5c152a10789c7524844e2be554a2
2024-09-16 13:40:02 +08:00
Billy Cao
2b4da8baf6 Add qwen_vl to liger kernel supported list
Former-commit-id: 053b2d832450cb6cd6af673b9fc51404f1fb1e41
2024-09-14 19:28:20 +08:00
hoshi-hiyouga
7d1b4071e8 Merge pull request #5427 from HardAndHeavy/update-rocm
Update the ROCm version to 6.2

Former-commit-id: 5dcdf5d16590b59004be9d728887781729344ea0
2024-09-13 10:25:47 +08:00
HardAndHeavy
8fc5377f50 update the ROCm version to 6.2
Former-commit-id: a6eda6a500daa4f3383a7868f6abe2434f967b1d
2024-09-12 23:46:33 +03:00
hiyouga
e5812f261d update ci
https://github.com/huggingface/transformers/pull/33436

Former-commit-id: c723f16cdb919cedbf938d51d422ad49b9c6eecf
2024-09-11 20:44:42 +08:00
hiyouga
f7e85cd7de set dev version
Former-commit-id: 39edf597f050bcb2099a10d6f6018f96e29b7e65
2024-09-11 18:56:37 +08:00
hiyouga
749395420b remove windows in ci
Former-commit-id: 56046767c086853b6d40fbc42e0ed9662546de6b
2024-09-11 18:14:39 +08:00
hiyouga
7d536d1d75 fix ci
Former-commit-id: 627f30200068f58d06eb53b1b4797ed426c9c1f1
2024-09-11 18:01:09 +08:00
hiyouga
7fd0d2fc2f fix #5411
Former-commit-id: 392bdaf1ea9e5baf6289f2d4415a175dd55a479d
2024-09-11 17:36:42 +08:00
BUAADreamer
ec696bbcdd try to pass tests
Former-commit-id: 2db97e1e5e06370375f4f5c577671524e399321f
2024-09-10 13:29:09 +08:00
BUAADreamer
df24345d65 try to pass tests
Former-commit-id: 76a4cfcb84b55467792318dc15a5fbcd6807b674
2024-09-10 13:25:30 +08:00
Zhangchi Feng
386dd26097 Merge branch 'hiyouga:main' into main
Former-commit-id: 8619ad7dc124c50e254b1bb2e173ff99ca4f0e22
2024-09-10 13:20:24 +08:00
BUAADreamer
514f976cc1 try to pass tests
Former-commit-id: 3b6bfae0e5fe795a70d530b2765f27d95c5862f8
2024-09-10 13:12:51 +08:00
BUAADreamer
66b870fd08 try to pass tests
Former-commit-id: 808a4bd77daca4dd92423652878d8262f3a6f2a4
2024-09-10 12:56:12 +08:00
BUAADreamer
24d3c7e378 resolve conflict
Former-commit-id: d6168da2a1f74424b83416cbcbf685861e76ff5f
2024-09-10 12:39:17 +08:00
BUAADreamer
484128b641 support llava-next(video)
Former-commit-id: 27e94593ac467e56e3a7f5c64f4ff6cee81f4b47
2024-09-10 12:31:53 +08:00
hiyouga
588ea95732 update accelerate ver for schedule_free optimizers
Former-commit-id: 2de74e79049ce8e50f605f649275b1dbfb899c8c
2024-09-09 22:51:08 +08:00
hiyouga
800567cde7 fix mm plugin
Former-commit-id: 6a3549c6c1a8c40de61e748f0b280bfc9e1279a2
2024-09-09 22:41:28 +08:00
hiyouga
7a3ba5a25d fix qwen2vl preprocess
Former-commit-id: 52ddd42b7d2ae9e1aa08c15fd5c13ddad96f1b74
2024-09-09 22:33:33 +08:00
261 changed files with 17071 additions and 7453 deletions


@@ -3,10 +3,12 @@
.github
.venv
cache
data
docker
saves
hf_cache
ms_cache
om_cache
shared_data
output
.dockerignore
.gitattributes


@@ -1,35 +1,42 @@
# Note: actually we do not support .env, just for reference
# api
API_HOST=0.0.0.0
API_PORT=8000
API_HOST=
API_PORT=
API_KEY=
API_MODEL_NAME=gpt-3.5-turbo
API_MODEL_NAME=
API_VERBOSE=
FASTAPI_ROOT_PATH=
MAX_CONCURRENT=
# general
DISABLE_VERSION_CHECK=
FORCE_CHECK_IMPORTS=
FORCE_TORCHRUN=
ALLOW_EXTRA_ARGS=
LLAMAFACTORY_VERBOSITY=
USE_MODELSCOPE_HUB=
USE_OPENMIND_HUB=
USE_RAY=
RECORD_VRAM=
OPTIM_TORCH=
NPU_JIT_COMPILE=
# torchrun
FORCE_TORCHRUN=
MASTER_ADDR=
MASTER_PORT=
NNODES=
RANK=
NODE_RANK=
NPROC_PER_NODE=
# wandb
WANDB_DISABLED=
WANDB_PROJECT=huggingface
WANDB_PROJECT=
WANDB_API_KEY=
# gradio ui
GRADIO_SHARE=False
GRADIO_SERVER_NAME=0.0.0.0
GRADIO_SHARE=
GRADIO_SERVER_NAME=
GRADIO_SERVER_PORT=
GRADIO_ROOT_PATH=
GRADIO_IPV6=
# setup
ENABLE_SHORT_CONSOLE=1
ENABLE_SHORT_CONSOLE=
# reserved (do not use)
LLAMABOARD_ENABLED=
LLAMABOARD_WORKDIR=


@@ -19,3 +19,49 @@ There are several ways you can contribute to LLaMA Factory:
### Style guide
LLaMA Factory follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html), check it for details.
### Create a Pull Request
1. Fork the [repository](https://github.com/hiyouga/LLaMA-Factory) by clicking on the [Fork](https://github.com/hiyouga/LLaMA-Factory/fork) button on the repository's page. This creates a copy of the code under your GitHub user account.
2. Clone your fork to your local disk, and add the base repository as a remote:
```bash
git clone git@github.com:[username]/LLaMA-Factory.git
cd LLaMA-Factory
git remote add upstream https://github.com/hiyouga/LLaMA-Factory.git
```
3. Create a new branch to hold your development changes:
```bash
git checkout -b dev_your_branch
```
4. Set up a development environment by running the following command in a virtual environment:
```bash
pip install -e ".[dev]"
```
If LLaMA Factory was already installed in the virtual environment, remove it with `pip uninstall llamafactory` before reinstalling it in editable mode with the -e flag.
5. Check code before commit:
```bash
make commit
make style && make quality
make test
```
6. Submit changes:
```bash
git add .
git commit -m "commit message"
git fetch upstream
git rebase upstream/main
git push -u origin dev_your_branch
```
7. Create a merge request from your branch `dev_your_branch` at [origin repo](https://github.com/hiyouga/LLaMA-Factory).

.github/ISSUE_TEMPLATE/1-bug-report.yml vendored Normal file (61 changed lines)

@@ -0,0 +1,61 @@
name: "\U0001F41B Bug / help"
description: Create a report to help us improve the LLaMA Factory
labels: ["bug", "pending"]
body:
- type: markdown
attributes:
value: |
Issues included in **[FAQs](https://github.com/hiyouga/LLaMA-Factory/issues/4614)** or those with **insufficient** information may be closed without a response.
已经包含在 **[常见问题](https://github.com/hiyouga/LLaMA-Factory/issues/4614)** 内或提供信息**不完整**的 issues 可能不会被回复。
- type: markdown
attributes:
value: |
Please do not create issues that are not related to framework bugs under this category, use **[Discussions](https://github.com/hiyouga/LLaMA-Factory/discussions/categories/q-a)** instead.
请勿在此分类下创建和框架 bug 无关的 issues,训练问题求助请使用 **[讨论区](https://github.com/hiyouga/LLaMA-Factory/discussions/categories/q-a)**。
- type: checkboxes
id: reminder
attributes:
label: Reminder
description: |
Please ensure you have read the above rules carefully and searched the existing issues (including FAQs).
请确保您已经认真阅读了上述规则并且搜索过现有的 issues(包括常见问题)。
options:
- label: I have read the above rules and searched the existing issues.
required: true
- type: textarea
id: system-info
validations:
required: true
attributes:
label: System Info
description: |
Please share your system info with us. You can run the command **llamafactory-cli env** and copy-paste its output below.
请提供您的系统信息。您可以在命令行运行 **llamafactory-cli env** 并将其输出复制到该文本框中。
placeholder: llamafactory version, platform, python version, ...
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide entry arguments, error messages and stack traces that reproduces the problem.
请提供入口参数,错误日志以及异常堆栈以便于我们复现问题。
value: |
```text
Put your message here.
```
- type: textarea
id: others
validations:
required: false
attributes:
label: Others


@@ -0,0 +1,41 @@
name: "\U0001F680 Feature request"
description: Submit a request for a new feature
labels: ["enhancement", "pending"]
body:
- type: markdown
attributes:
value: |
Please do not create issues that are not related to new features under this category.
请勿在此分类下创建和新特性无关的 issues。
- type: checkboxes
id: reminder
attributes:
label: Reminder
description: |
Please ensure you have read the above rules carefully and searched the existing issues.
请确保您已经认真阅读了上述规则并且搜索过现有的 issues。
options:
- label: I have read the above rules and searched the existing issues.
required: true
- type: textarea
id: description
validations:
required: true
attributes:
label: Description
description: |
A clear and concise description of the feature proposal.
请详细描述您希望加入的新功能特性。
- type: textarea
id: contribution
validations:
required: false
attributes:
label: Pull Request
description: |
Have you already created the relevant PR and submitted the code?
您是否已经创建了相关 PR 并提交了代码?


@@ -1,66 +0,0 @@
name: "\U0001F41B Bug / Help"
description: Create a report to help us improve the LLaMA Factory
body:
- type: markdown
attributes:
value: |
Issues included in **FAQs** or those with **insufficient** information may be closed without a response.
包含在**常见问题**内或提供信息**不完整**的 issues 可能不会被回复。
- type: checkboxes
id: reminder
attributes:
label: Reminder
description: |
Please ensure you have read the README carefully and searched the existing issues (including FAQs).
请确保您已经认真阅读了 README 并且搜索过现有的 issues(包括常见问题)。
options:
- label: I have read the README and searched the existing issues.
required: true
- type: textarea
id: system-info
validations:
required: true
attributes:
label: System Info
description: |
Please share your system info with us. You can run the command **llamafactory-cli env** and copy-paste its output below.
请提供您的系统信息。您可以在命令行运行 **llamafactory-cli env** 并将其输出复制到该文本框中。
placeholder: llamafactory version, platform, python version, ...
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Reproduction
description: |
Please provide code snippets, error messages and stack traces that reproduces the problem.
请提供运行参数,错误信息以及异常堆栈以便于我们复现该问题。
Remember to use Markdown tags to correctly format your code.
请合理使用 Markdown 标签来格式化您的文本。
placeholder: |
```bash
llamafactory-cli train ...
```
- type: textarea
id: expected-behavior
validations:
required: false
attributes:
label: Expected behavior
description: |
Please provide a clear and concise description of what you would expect to happen.
请提供您原本的目的,即这段代码的期望行为。
- type: textarea
id: others
validations:
required: false
attributes:
label: Others

.github/ISSUE_TEMPLATE/config.yml vendored Normal file (1 changed line)

@@ -0,0 +1 @@
blank_issues_enabled: false

.github/workflows/docker.yml vendored Normal file (66 changed lines)

@@ -0,0 +1,66 @@
name: docker
on:
workflow_dispatch:
push:
branches:
- "main"
paths:
- "**/*.py"
- "requirements.txt"
- "docker/**"
- ".github/workflows/*.yml"
pull_request:
branches:
- "main"
paths:
- "**/*.py"
- "requirements.txt"
- "docker/**"
- ".github/workflows/*.yml"
jobs:
build:
runs-on: ubuntu-latest
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
environment:
name: docker
url: https://hub.docker.com/r/hiyouga/llamafactory
steps:
- name: Free up disk space
run: |
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache
df -h
- name: Checkout
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
file: ./docker/docker-cuda/Dockerfile
build-args: |
EXTRAS=metrics,deepspeed,liger-kernel
push: ${{ github.event_name != 'pull_request' }}
tags: docker.io/hiyouga/llamafactory:latest
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -18,13 +18,15 @@ jobs:
ISSUE_URL: ${{ github.event.issue.html_url }}
ISSUE_TITLE: ${{ github.event.issue.title }}
run: |
LABEL=pending
LABEL=""
NPU_KEYWORDS=(npu huawei ascend 华为 昇腾)
ISSUE_TITLE_LOWER=$(echo $ISSUE_TITLE | tr '[:upper:]' '[:lower:]')
for KEYWORD in ${NPU_KEYWORDS[@]}; do
if [[ $ISSUE_TITLE_LOWER == *$KEYWORD* ]] && [[ $ISSUE_TITLE_LOWER != *input* ]]; then
LABEL=pending,npu
LABEL="npu"
break
fi
done
if [ -n "$LABEL" ]; then
gh issue edit $ISSUE_URL --add-label $LABEL
fi


@@ -1,6 +1,7 @@
name: publish
on:
workflow_dispatch:
release:
types:
- published
@@ -25,16 +26,11 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.8"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install build
python-version: "3.9"
- name: Build package
run: |
python -m build
make build
- name: Publish package
uses: pypa/gh-action-pypi-publish@release/v1


@@ -1,6 +1,7 @@
name: tests
on:
workflow_dispatch:
push:
branches:
- "main"
@@ -21,20 +22,33 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version:
- "3.8"
python:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
os:
- "ubuntu-latest"
- "windows-latest"
- "macos-13"
transformers:
- null
include: # test backward compatibility
- python: "3.9"
os: "ubuntu-latest"
transformers: "4.45.0"
- python: "3.9"
os: "ubuntu-latest"
transformers: "4.49.0"
- python: "3.9"
os: "ubuntu-latest"
transformers: "4.51.0"
runs-on: ${{ matrix.os }}
environment:
name: tests
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ matrix.os }}-${{ matrix.python }}-${{ matrix.transformers }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
@@ -47,20 +61,42 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
python-version: ${{ matrix.python }}
cache: "pip"
cache-dependency-path: "setup.py"
cache-dependency-path: "**/requirements*.txt"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install git+https://github.com/huggingface/transformers.git
python -m pip install ".[torch,dev]"
- name: Install transformers
if: ${{ matrix.transformers }}
run: |
python -m pip install "transformers==${{ matrix.transformers }}"
- name: Cache files
id: hf-hub-cache
uses: actions/cache@v4
with:
path: ${{ runner.temp }}/huggingface
key: huggingface-${{ matrix.os }}-${{ matrix.python }}-${{ matrix.transformers }}-${{ hashFiles('tests/version.txt') }}
- name: Check quality
run: |
make style && make quality
- name: Check license
run: |
make license
- name: Check build
run: |
make build
- name: Test with pytest
run: |
make test
env:
HF_HOME: ${{ runner.temp }}/huggingface
HF_HUB_OFFLINE: "${{ steps.hf-hub-cache.outputs.cache-hit == 'true' && '1' || '0' }}"

.gitignore

@@ -159,11 +159,21 @@ cython_debug/
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
.idea/
# vscode
.vscode/
# uv
uv.lock
# custom .gitignore
ms_cache/
hf_cache/
ms_cache/
om_cache/
cache/
config/
saves/
output/
wandb/
swanlog/
generated_predictions.jsonl
predictions_score.json

.pre-commit-config.yaml

@@ -0,0 +1,28 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: check-ast
- id: check-added-large-files
args: ['--maxkb=25000']
- id: check-merge-conflict
- id: check-yaml
- id: debug-statements
- id: end-of-file-fixer
- id: trailing-whitespace
args: [--markdown-linebreak-ext=md]
- id: no-commit-to-branch
args: ['--branch', 'main']
- repo: https://github.com/asottile/pyupgrade
rev: v3.17.0
hooks:
- id: pyupgrade
args: [--py38-plus]
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.9
hooks:
- id: ruff
args: [--fix]
- id: ruff-format


@@ -1,7 +1,17 @@
.PHONY: quality style test
.PHONY: build commit license quality style test
check_dirs := scripts src tests setup.py
build:
pip3 install build && python3 -m build
commit:
pre-commit install
pre-commit run --all-files
license:
python3 tests/check_license.py $(check_dirs)
quality:
ruff check $(check_dirs)
ruff format --check $(check_dirs)
@@ -11,4 +21,4 @@ style:
ruff format $(check_dirs)
test:
CUDA_VISIBLE_DEVICES= pytest tests/
CUDA_VISIBLE_DEVICES= WANDB_DISABLED=true pytest -vv tests/

README.md

@@ -1,45 +1,85 @@
![# LLaMA Factory](assets/logo.png)
[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[![GitHub contributors](https://img.shields.io/github/contributors/hiyouga/LLaMA-Factory?color=orange)](https://github.com/hiyouga/LLaMA-Factory/graphs/contributors)
[![GitHub workflow](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml/badge.svg)](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml)
[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
[![Citation](https://img.shields.io/badge/citation-91-green)](#projects-using-llama-factory)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![Citation](https://img.shields.io/badge/citation-614-green)](https://scholar.google.com/scholar?cites=12620864006390196564)
[![Docker Pulls](https://img.shields.io/docker/pulls/hiyouga/llamafactory)](https://hub.docker.com/r/hiyouga/llamafactory/tags)
[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![GitCode](https://gitcode.com/zhengyaowei/LLaMA-Factory/star/badge.svg)](https://gitcode.com/zhengyaowei/LLaMA-Factory)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)
[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Alaya](assets/alaya_new.svg)](https://docs.alayanew.com/docs/documents/newActivities/llamafactory/?utm_source=LLaMA-Factory)
[![Open in Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Open in Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Novita](https://img.shields.io/badge/Novita-Deploy%20Template-blue)](https://novita.ai/templates-library/105981?sharer=88115474-394e-4bda-968e-b88e123d0c47)
[![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)
### Used by [Amazon](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/), [NVIDIA](https://developer.nvidia.com/rtx/ai-toolkit), [Aliyun](https://help.aliyun.com/zh/pai/use-cases/fine-tune-a-llama-3-model-with-llama-factory), etc.
👋 Join our [WeChat](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).
<div align="center" markdown="1">
### Supporters ❤️
<a href="https://warp.dev/llama-factory">
<img alt="Warp sponsorship" width="400" src="https://github.com/user-attachments/assets/ab8dd143-b0fd-4904-bdc5-dd7ecac94eae">
</a>
#### [Warp, the agentic terminal for developers](https://warp.dev/llama-factory)
[Available for MacOS, Linux, & Windows](https://warp.dev/llama-factory)
----
### Easily fine-tune 100+ large language models with zero-code [CLI](#quickstart) and [Web UI](#fine-tuning-with-llama-board-gui-powered-by-gradio)
![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)
</div>
👋 Join our [WeChat group](assets/wechat.jpg), [NPU user group](assets/wechat_npu.jpg) or [Alaya NeW user group](assets/wechat_alaya.png).
\[ English | [中文](README_zh.md) \]
**Fine-tuning a large language model can be as easy as...**
https://github.com/user-attachments/assets/7c96b465-9df7-45f4-8053-bf03e58386d3
https://github.com/user-attachments/assets/3991a3a8-4276-4d30-9cab-4cb0c4b9b99e
Choose your path:
- **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
- **PAI-DSW**: https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
- **Documentation**: https://llamafactory.readthedocs.io/en/latest/
- **Colab (free)**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
- **Local machine**: Please refer to [usage](#getting-started)
- **Documentation (WIP)**: https://llamafactory.readthedocs.io/zh-cn/latest/
- **PAI-DSW (free trial)**: https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
- **Alaya NeW (cloud GPU deal)**: https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory
> [!NOTE]
> Except for the above links, all other websites are unauthorized third-party websites. Please use them with caution.
## Table of Contents
- [Features](#features)
- [Benchmark](#benchmark)
- [Blogs](#blogs)
- [Changelog](#changelog)
- [Supported Models](#supported-models)
- [Supported Training Approaches](#supported-training-approaches)
- [Provided Datasets](#provided-datasets)
- [Requirement](#requirement)
- [Getting Started](#getting-started)
- [Installation](#installation)
- [Data Preparation](#data-preparation)
- [Quickstart](#quickstart)
- [Fine-Tuning with LLaMA Board GUI](#fine-tuning-with-llama-board-gui-powered-by-gradio)
- [Build Docker](#build-docker)
- [Deploy with OpenAI-style API and vLLM](#deploy-with-openai-style-api-and-vllm)
- [Download from ModelScope Hub](#download-from-modelscope-hub)
- [Download from Modelers Hub](#download-from-modelers-hub)
- [Use W&B Logger](#use-wb-logger)
- [Use SwanLab Logger](#use-swanlab-logger)
- [Projects using LLaMA Factory](#projects-using-llama-factory)
- [License](#license)
- [Citation](#citation)
@@ -47,42 +87,90 @@ Choose your path:
## Features
- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, DeepSeek, Yi, Gemma, ChatGLM, Phi, etc.
- **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.
- **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
- **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [Adam-mini](https://github.com/zyushun/Adam-mini), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning.
- **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [APOLLO](https://github.com/zhuhanqing/APOLLO), [Adam-mini](https://github.com/zyushun/Adam-mini), [Muon](https://github.com/KellerJordan/Muon), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ and PiSSA.
- **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
- **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.
- **Wide tasks**: Multi-turn dialogue, tool using, image understanding, visual grounding, video recognition, audio understanding, etc.
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, [SwanLab](https://github.com/SwanHubX/SwanLab), etc.
- **Faster inference**: OpenAI-style API, Gradio UI and CLI with [vLLM worker](https://github.com/vllm-project/vllm) or [SGLang worker](https://github.com/sgl-project/sglang).
## Benchmark
### Day-N Support for Fine-Tuning Cutting-Edge Models
Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging the 4-bit quantization technique, LLaMA Factory's QLoRA further improves efficiency in terms of GPU memory usage.
| Support Date | Model Name |
| ------------ | ------------------------------------------------------------ |
| Day 0 | Qwen3 / Qwen2.5-VL / Gemma 3 / InternLM 3 / MiniCPM-o-2.6 |
| Day 1 | Llama 3 / GLM-4 / Mistral Small / PaliGemma2 / Llama 4 |
![benchmark](assets/benchmark.svg)
## Blogs
<details><summary>Definitions</summary>
- [Fine-tune Qwen2.5-VL for Autonomous Driving using LLaMA-Factory](https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory) (Chinese)
- [How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/) (English)
- [Easy Dataset × LLaMA Factory: Enabling LLMs to Efficiently Learn Domain Knowledge](https://buaa-act.feishu.cn/wiki/GVzlwYcRFiR8OLkHbL6cQpYin7g) (English)
- **Training Speed**: the number of training samples processed per second during the training. (bs=4, cutoff_len=1024)
- **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)
- **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024)
- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning.
<details><summary>All Blogs</summary>
- [LLaMA Factory: Fine-tuning the DeepSeek-R1-Distill-Qwen-7B Model for News Classifier](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b) (Chinese)
- [A One-Stop Code-Free Model Fine-Tuning \& Deployment Platform based on SageMaker and LLaMA-Factory](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) (Chinese)
- [LLaMA Factory Multi-Modal Fine-Tuning Practice: Fine-Tuning Qwen2-VL for Personal Tourist Guide](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) (Chinese)
- [LLaMA Factory: Fine-tuning the LLaMA3 Model for Role-Playing](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) (Chinese)
</details>
## Changelog
[24/08/30] We support fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thank [@simonJJJ](https://github.com/simonJJJ)'s PR.
[25/04/28] We supported fine-tuning the **[Qwen3](https://qwenlm.github.io/blog/qwen3/)** model family.
[24/08/27] We support **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training.
[25/04/21] We supported the **[Muon](https://github.com/KellerJordan/Muon)** optimizer. See [examples](examples/README.md) for usage. Thank [@tianshijing](https://github.com/tianshijing)'s PR.
[24/08/09] We support **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thank [@relic-yuexi](https://github.com/relic-yuexi)'s PR.
[25/04/16] We supported fine-tuning the **[InternVL3](https://huggingface.co/OpenGVLab/InternVL3-8B)** model. See [PR #7258](https://github.com/hiyouga/LLaMA-Factory/pull/7258) to get started.
[25/04/14] We supported fine-tuning the **[GLM-Z1](https://huggingface.co/THUDM/GLM-Z1-9B-0414)** and **[Kimi-VL](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)** models.
[25/04/06] We supported fine-tuning the **[Llama 4](https://ai.meta.com/blog/llama-4-multimodal-intelligence/)** model. See [PR #7611](https://github.com/hiyouga/LLaMA-Factory/pull/7611) to get started.
<details><summary>Full Changelog</summary>
[24/07/04] We support [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thank [@chuan298](https://github.com/chuan298)'s PR.
[25/03/31] We supported fine-tuning the **[Qwen2.5 Omni](https://qwenlm.github.io/blog/qwen2.5-omni/)** model. See [PR #7537](https://github.com/hiyouga/LLaMA-Factory/pull/7537) to get started.
[24/06/16] We support **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.
[25/03/15] We supported **[SGLang](https://github.com/sgl-project/sglang)** as inference backend. Try `infer_backend: sglang` to accelerate inference.
[25/03/12] We supported fine-tuning the **[Gemma 3](https://huggingface.co/blog/gemma3)** model.
[25/02/24] Announcing **[EasyR1](https://github.com/hiyouga/EasyR1)**, an efficient, scalable and multi-modality RL training framework for efficient GRPO training.
[25/02/11] We supported saving the **[Ollama](https://github.com/ollama/ollama)** modelfile when exporting the model checkpoints. See [examples](examples/README.md) for usage.
[25/02/05] We supported fine-tuning the **[Qwen2-Audio](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct)** and **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** models on audio understanding tasks.
[25/01/31] We supported fine-tuning the **[DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)** and **[Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)** models.
[25/01/15] We supported **[APOLLO](https://arxiv.org/abs/2412.05270)** optimizer. See [examples](examples/README.md) for usage.
[25/01/14] We supported fine-tuning the **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** and **[MiniCPM-V-2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6)** models. Thank [@BUAADreamer](https://github.com/BUAADreamer)'s PR.
[25/01/14] We supported fine-tuning the **[InternLM 3](https://huggingface.co/collections/internlm/)** models. Thank [@hhaAndroid](https://github.com/hhaAndroid)'s PR.
[25/01/10] We supported fine-tuning the **[Phi-4](https://huggingface.co/microsoft/phi-4)** model.
[24/12/21] We supported using **[SwanLab](https://github.com/SwanHubX/SwanLab)** for experiment tracking and visualization. See [this section](#use-swanlab-logger) for details.
[24/11/27] We supported fine-tuning the **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** model and the **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** dataset.
[24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage.
[24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.
[24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thank [@simonJJJ](https://github.com/simonJJJ)'s PR.
[24/08/27] We supported **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training.
[24/08/09] We supported **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thank [@relic-yuexi](https://github.com/relic-yuexi)'s PR.
[24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thank [@chuan298](https://github.com/chuan298)'s PR.
[24/06/16] We supported **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.
[24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models.
@@ -128,7 +216,7 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
[23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See hardware requirement [here](#hardware-requirement).
[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)** for Chinese mainland users. See [this tutorial](#download-from-modelscope-hub) for usage.
[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#download-from-modelscope-hub) for usage.
[23/10/21] We supported **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try `neftune_noise_alpha: 5` argument to activate NEFTune.
@@ -158,32 +246,61 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
</details>
> [!TIP]
> If you cannot use the latest feature, please pull the latest code and install LLaMA-Factory again.
## Supported Models
| Model | Model size | Template |
| ----------------------------------------------------------------- | -------------------------------- | --------- |
| ----------------------------------------------------------------- | -------------------------------- | ------------------- |
| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
| [DeepSeek 2.5/3](https://huggingface.co/deepseek-ai) | 236B/671B | deepseek3 |
| [DeepSeek R1 (Distill)](https://huggingface.co/deepseek-ai) | 1.5B/7B/8B/14B/32B/70B/671B | deepseekr1 |
| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |
| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 |
| [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 |
| [Gemma 3](https://huggingface.co/google) | 1B/4B/12B/27B | gemma3/gemma (1B) |
| [GLM-4/GLM-4-0414/GLM-Z1](https://huggingface.co/THUDM) | 9B/32B | glm4/glmz1 |
| [GPT-2](https://huggingface.co/openai-community) | 0.1B/0.4B/0.8B/1.5B | - |
| [Granite 3.0-3.3](https://huggingface.co/ibm-granite) | 1B/2B/3B/8B | granite3 |
| [Hunyuan](https://huggingface.co/tencent/) | 7B | hunyuan |
| [Index](https://huggingface.co/IndexTeam) | 1.9B | index |
| [InternLM 2-3](https://huggingface.co/internlm) | 7B/8B/20B | intern2 |
| [InternVL 2.5-3](https://huggingface.co/OpenGVLab) | 1B/2B/8B/14B/38B/78B | intern_vl |
| [Kimi-VL](https://huggingface.co/moonshotai) | 16B | kimi_vl |
| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
| [Llama 3/Llama 3.1](https://huggingface.co/meta-llama) | 8B/70B | llama3 |
| [Llama 3-3.3](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |
| [Llama 4](https://huggingface.co/meta-llama) | 109B/402B | llama4 |
| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama |
| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |
| [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 |
| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |
| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |
| [MiMo](https://huggingface.co/XiaomiMiMo) | 7B | mimo |
| [MiniCPM](https://huggingface.co/openbmb) | 0.5B/1B/2B/4B/8B | cpm/cpm3/cpm4 |
| [MiniCPM-o-2.6/MiniCPM-V-2.6](https://huggingface.co/openbmb) | 8B | minicpm_o/minicpm_v |
| [Ministral/Mistral-Nemo](https://huggingface.co/mistralai) | 8B/12B | ministral |
| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
| [Mistral Small](https://huggingface.co/mistralai) | 24B | mistral_small |
| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
| [PaliGemma](https://huggingface.co/google) | 3B | paligemma |
| [PaliGemma/PaliGemma2](https://huggingface.co/google) | 3B/10B/28B | paligemma |
| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
| [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi |
| [Qwen/Qwen1.5/Qwen2 (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/4B/7B/14B/32B/72B/110B | qwen |
| [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B | qwen2_vl |
| [Phi-3/Phi-3.5](https://huggingface.co/microsoft) | 4B/14B | phi |
| [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small |
| [Phi-4](https://huggingface.co/microsoft) | 14B | phi4 |
| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral |
| [Qwen (1-2.5) (Code/Math/MoE/QwQ)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |
| [Qwen3 (MoE)](https://huggingface.co/Qwen) | 0.6B/1.7B/4B/8B/14B/32B/235B | qwen3 |
| [Qwen2-Audio](https://huggingface.co/Qwen) | 7B | qwen2_audio |
| [Qwen2.5-Omni](https://huggingface.co/Qwen) | 3B/7B | qwen2_omni |
| [Qwen2-VL/Qwen2.5-VL/QVQ](https://huggingface.co/Qwen) | 2B/3B/7B/32B/72B | qwen2_vl |
| [Seed Coder](https://huggingface.co/ByteDance-Seed) | 8B | seed_coder |
| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 |
| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
| [TeleChat2](https://huggingface.co/Tele-AI) | 3B/7B/35B/115B | telechat2 |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |
| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
@@ -193,6 +310,10 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
> For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
>
> Remember to use the **SAME** template in training and inference.
>
> \*: You should install `transformers` from the main branch and use `DISABLE_VERSION_CHECK=1` to skip the version check.
>
> \*\*: You need to install a specific version of `transformers` to use the corresponding model.
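For the models marked with \*, a minimal sketch of the two steps (the config path is illustrative; `DISABLE_VERSION_CHECK` is applied at launch time):

```bash
# Install transformers from the main branch, then skip the version check when launching
python -m pip install git+https://github.com/huggingface/transformers.git
DISABLE_VERSION_CHECK=1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```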
Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of models we supported.
@@ -271,9 +392,13 @@ You also can add a custom chat template to [template.py](src/llamafactory/data/t
- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)
- [Open-Thoughts (en)](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- [Open-R1-Math (en)](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
- [Chinese-DeepSeek-R1-Distill (zh)](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k-SFT)
- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
@@ -292,8 +417,10 @@ You also can add a custom chat template to [template.py](src/llamafactory/data/t
- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
- [COIG-P (zh)](https://huggingface.co/datasets/m-a-p/COIG-P)
- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
- [RLAIF-V (en)](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)
- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
@@ -313,35 +440,35 @@ huggingface-cli login
| Mandatory | Minimum | Recommend |
| ------------ | ------- | --------- |
| python | 3.8 | 3.11 |
| torch | 1.13.1 | 2.4.0 |
| transformers | 4.41.2 | 4.43.4 |
| datasets | 2.16.0 | 2.20.0 |
| accelerate | 0.30.1 | 0.32.0 |
| peft | 0.11.1 | 0.12.0 |
| python | 3.9 | 3.10 |
| torch | 2.0.0 | 2.6.0 |
| torchvision | 0.15.0 | 0.21.0 |
| transformers | 4.45.0 | 4.50.0 |
| datasets | 2.16.0 | 3.2.0 |
| accelerate | 0.34.0 | 1.2.1 |
| peft | 0.14.0 | 0.15.1 |
| trl | 0.8.6 | 0.9.6 |
| Optional | Minimum | Recommend |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.14.0 |
| deepspeed | 0.10.0 | 0.16.4 |
| bitsandbytes | 0.39.0 | 0.43.1 |
| vllm | 0.4.3 | 0.5.0 |
| flash-attn | 2.3.0 | 2.6.3 |
| vllm | 0.4.3 | 0.8.2 |
| flash-attn | 2.5.6 | 2.7.2 |
### Hardware Requirement
\* *estimated*
| Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B |
| ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ |
| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |
| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |
| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |
| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |
| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |
| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |
| Method | Bits | 7B | 14B | 30B | 70B | `x`B |
| ------------------------------- | ---- | ----- | ----- | ----- | ------ | ------- |
| Full (`bf16` or `fp16`) | 32 | 120GB | 240GB | 600GB | 1200GB | `18x`GB |
| Full (`pure_bf16`) | 16 | 60GB | 120GB | 300GB | 600GB | `8x`GB |
| Freeze/LoRA/GaLore/APOLLO/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | `2x`GB |
| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | `x`GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | `x/2`GB |
| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | `x/4`GB |
## Getting Started
@@ -350,53 +477,99 @@ huggingface-cli login
> [!IMPORTANT]
> Installation is mandatory.
#### Install from Source
```bash
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
pip install -e ".[torch,metrics]" --no-build-isolation
```
Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, quality
Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, aqlm, vllm, sglang, galore, apollo, badam, adam-mini, qwen, minicpm_v, modelscope, openmind, swanlab, dev
> [!TIP]
> Use `pip install --no-deps -e .` to resolve package conflicts.
#### Install from Docker Image
```bash
docker run -it --rm --gpus=all --ipc=host hiyouga/llamafactory:latest
```
This image is built on Ubuntu 22.04 (x86\_64), CUDA 12.4, Python 3.11, PyTorch 2.6.0, and Flash-attn 2.7.4.
Find the pre-built images: https://hub.docker.com/r/hiyouga/llamafactory/tags
Please refer to [build docker](#build-docker) to build the image yourself.
<details><summary>Setting up a virtual environment with <b>uv</b></summary>
Create an isolated Python environment with [uv](https://github.com/astral-sh/uv):
```bash
uv sync --extra torch --extra metrics --prerelease=allow
```
Run LLaMA-Factory in the isolated environment:
```bash
uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```
</details>
<details><summary>For Windows users</summary>
#### Install PyTorch
You need to manually install the GPU version of PyTorch on the Windows platform. Please refer to the [official website](https://pytorch.org/get-started/locally/) and the following command to install PyTorch with CUDA support:
```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.cuda.is_available())"
```
If you see `True` then you have successfully installed PyTorch with CUDA support.
Try `dataloader_num_workers: 0` if you encounter `Can't pickle local object` error.
#### Install BitsAndBytes
If you want to enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library that supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.
```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
```
To enable FlashAttention-2 on the Windows platform, you need to install the precompiled `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the corresponding version from [flash-attention](https://github.com/bdashore3/flash-attention/releases) based on your requirements.
#### Install Flash Attention-2
To enable FlashAttention-2 on the Windows platform, please use the script from [flash-attention-windows-wheel](https://huggingface.co/lldacing/flash-attention-windows-wheel) to compile and install it by yourself.
</details>
<details><summary>For Ascend NPU users</summary>
To install LLaMA Factory on Ascend NPU devices, please specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands:
To install LLaMA Factory on Ascend NPU devices, please upgrade Python to version 3.10 or higher and specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands:
```bash
# replace the url according to your CANN version and devices
# install CANN Toolkit
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run
bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run
bash Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run --install
# install CANN Kernels
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run
bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run
bash Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run --install
# set env variables
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
| Requirement | Minimum | Recommend |
| ------------ | ------- | ----------- |
| CANN | 8.0.RC1 | 8.0.RC1 |
| torch | 2.1.0 | 2.1.0 |
| torch-npu | 2.1.0 | 2.1.0.post3 |
| ------------ | ------- | -------------- |
| CANN | 8.0.RC1 | 8.0.0.alpha002 |
| torch | 2.1.0 | 2.4.0 |
| torch-npu | 2.1.0 | 2.4.0.post2 |
| deepspeed | 0.13.2 | 0.13.2 |
| vllm-ascend | - | 0.7.3 |
Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
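For example, to pin a run to the first NPU (a minimal sketch; the config path is illustrative):

```bash
# Use NPU 0 instead of setting CUDA_VISIBLE_DEVICES
ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```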
@@ -404,15 +577,51 @@ If you cannot infer model on NPU devices, try setting `do_sample: false` in the
Download the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
#### Install BitsAndBytes
To use QLoRA based on bitsandbytes on Ascend NPU, please follow these 3 steps:
1. Manually compile bitsandbytes: Refer to [the installation documentation](https://huggingface.co/docs/bitsandbytes/installation?backend=Ascend+NPU&platform=Ascend+NPU) for the NPU version of bitsandbytes to complete the compilation and installation. The compilation requires a cmake version of at least 3.22.1 and a g++ version of at least 12.x.
```bash
# Install bitsandbytes from source
# Clone bitsandbytes repo, Ascend NPU backend is currently enabled on multi-backend-refactor branch
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git
cd bitsandbytes/
# Install dependencies
pip install -r requirements-dev.txt
# Install the dependencies for the compilation tools. Note that the commands for this step may vary depending on the operating system. The following are provided for reference
apt-get install -y build-essential cmake
# Compile & install
cmake -DCOMPUTE_BACKEND=npu -S .
make
pip install .
```
2. Install transformers from the main branch.
```bash
git clone -b main https://github.com/huggingface/transformers.git
cd transformers
pip install .
```
3. Set `double_quantization: false` in the configuration. You can refer to the [example](examples/train_qlora/llama3_lora_sft_bnb_npu.yaml).
</details>
### Data Preparation
Please refer to [data/README.md](data/README.md) for checking the details about the format of dataset files. You can either use datasets on HuggingFace / ModelScope hub or load the dataset in local disk.
Please refer to [data/README.md](data/README.md) for details on the format of dataset files. You can use datasets on the HuggingFace / ModelScope / Modelers hub, load a dataset from local disk, or specify a path to s3/gcs cloud storage.
> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset.
You can also use **[Easy Dataset](https://github.com/ConardLi/easy-dataset)** or **[GraphGen](https://github.com/open-sciencelab/GraphGen)** to create synthetic data for fine-tuning.
### Quickstart
Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.
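A typical sequence, assuming the sample configs shipped under `examples/`, looks like this (config paths are illustrative):

```bash
# LoRA fine-tuning, chat-style inference, then merging the adapter
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```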
@@ -427,6 +636,8 @@ See [examples/README.md](examples/README.md) for advanced usage (including distr
> [!TIP]
> Use `llamafactory-cli help` to show help information.
>
> Read [FAQs](https://github.com/hiyouga/LLaMA-Factory/issues/4614) first if you encounter any problems.
### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
@@ -466,21 +677,13 @@ For CUDA users:
```bash
docker build -f ./docker/docker-cuda/Dockerfile \
--build-arg INSTALL_BNB=false \
--build-arg INSTALL_VLLM=false \
--build-arg INSTALL_DEEPSPEED=false \
--build-arg INSTALL_FLASHATTN=false \
--build-arg PIP_INDEX=https://pypi.org/simple \
--build-arg EXTRAS=metrics \
-t llamafactory:latest .
docker run -dit --gpus=all \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
docker run -dit --ipc=host --gpus=all \
-p 7860:7860 \
-p 8000:8000 \
--shm-size 16G \
--name llamafactory \
llamafactory:latest
@@ -490,18 +693,12 @@ docker exec -it llamafactory bash
For Ascend NPU users:
```bash
# Choose docker image upon your environment
docker build -f ./docker/docker-npu/Dockerfile \
--build-arg INSTALL_DEEPSPEED=false \
--build-arg PIP_INDEX=https://pypi.org/simple \
--build-arg EXTRAS=torch-npu,metrics \
-t llamafactory:latest .
# Change `device` upon your resources
docker run -dit \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
docker run -dit --ipc=host \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
@@ -512,7 +709,6 @@ docker run -dit \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
--shm-size 16G \
--name llamafactory \
llamafactory:latest
@@ -523,24 +719,15 @@ For AMD ROCm users:
```bash
docker build -f ./docker/docker-rocm/Dockerfile \
--build-arg INSTALL_BNB=false \
--build-arg INSTALL_VLLM=false \
--build-arg INSTALL_DEEPSPEED=false \
--build-arg INSTALL_FLASHATTN=false \
--build-arg PIP_INDEX=https://pypi.org/simple \
--build-arg EXTRAS=metrics \
-t llamafactory:latest .
docker run -dit \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
-v ./saves:/app/saves \
docker run -dit --ipc=host \
-p 7860:7860 \
-p 8000:8000 \
--device /dev/kfd \
--device /dev/dri \
--shm-size 16G \
--name llamafactory \
llamafactory:latest
@@ -549,11 +736,14 @@ docker exec -it llamafactory bash
</details>
<details><summary>Details about volume</summary>
<details><summary>Use Docker volumes</summary>
- `hf_cache`: Utilize Hugging Face cache on the host machine. Reassignable if a cache already exists in a different directory.
- `ms_cache`: Similar to Hugging Face cache but for ModelScope users.
- `data`: Place datasets on this dir of the host machine so that they can be selected on LLaMA Board GUI.
You can uncomment `VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]` in the Dockerfile to use data volumes.
When starting the container, pass arguments such as `-v ./hf_cache:/root/.cache/huggingface` to `docker run` to mount local directories into the container, as in the sketch after this list. The following data volumes are available.
- `hf_cache`: Utilize Hugging Face cache on the host machine.
- `shared_data`: The directory to store datasets on the host machine.
- `output`: Set export dir to this location so that the merged result can be accessed directly on the host machine.
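A minimal `docker run` sketch with these volumes mounted (assuming the CUDA image built above; host paths are illustrative):

```bash
docker run -dit --ipc=host --gpus=all \
    -v ./hf_cache:/root/.cache/huggingface \
    -v ./shared_data:/app/shared_data \
    -v ./output:/app/output \
    -p 7860:7860 \
    -p 8000:8000 \
    --name llamafactory \
    llamafactory:latest
```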
</details>
@@ -561,11 +751,13 @@ docker exec -it llamafactory bash
### Deploy with OpenAI-style API and vLLM
```bash
API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
API_PORT=8000 llamafactory-cli api examples/inference/llama3.yaml infer_backend=vllm vllm_enforce_eager=true
```
> [!TIP]
> Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for API document.
>
> Examples: [Image understanding](scripts/api_example/test_image.py) | [Function calling](scripts/api_example/test_toolcall.py)
### Download from ModelScope Hub
@@ -577,6 +769,16 @@ export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows
Train the model by specifying a model ID of the ModelScope Hub as the `model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.
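For instance, the hub switch can be combined with a CLI override of `model_name_or_path` (a sketch; the config path is illustrative and the override syntax follows the CLI usage shown above):

```bash
export USE_MODELSCOPE_HUB=1
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
    model_name_or_path=LLM-Research/Meta-Llama-3-8B-Instruct
```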
### Download from Modelers Hub
You can also use Modelers Hub to download models and datasets.
```bash
export USE_OPENMIND_HUB=1 # `set USE_OPENMIND_HUB=1` for Windows
```
Train the model by specifying a model ID of the Modelers Hub as the `model_name_or_path`. You can find a full list of model IDs at [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`.
### Use W&B Logger
To use [Weights & Biases](https://wandb.ai) for logging experimental results, you need to add the following arguments to yaml files.
@@ -588,6 +790,21 @@ run_name: test_run # optional
Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account.
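A sketch of a launch that logs to W&B without editing the YAML file (assuming the standard `report_to` and `run_name` arguments; the key value is a placeholder):

```bash
export WANDB_API_KEY=<your_api_key>
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
    report_to=wandb \
    run_name=test_run
```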
### Use SwanLab Logger
To use [SwanLab](https://github.com/SwanHubX/SwanLab) for logging experimental results, you need to add the following arguments to yaml files.
```yaml
use_swanlab: true
swanlab_run_name: test_run # optional
```
When launching training tasks, you can log in to SwanLab in three ways:
1. Add `swanlab_api_key=<your_api_key>` to the yaml file, and set it to your [API key](https://swanlab.cn/settings).
2. Set the environment variable `SWANLAB_API_KEY` to your [API key](https://swanlab.cn/settings), as in the sketch after this list.
3. Use the `swanlab login` command to complete the login.
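A sketch of the environment-variable option (the key value is a placeholder and the config path is illustrative):

```bash
export SWANLAB_API_KEY=<your_api_key>
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
    use_swanlab=true \
    swanlab_run_name=test_run
```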
## Projects using LLaMA Factory
If you have a project that should be incorporated, please contact us via email or create a pull request.
@@ -675,24 +892,30 @@ If you have a project that should be incorporated, please contact via email or c
1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)
1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)
1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)
1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611)
1. Zhang et al. CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling. ACL 2024. [[paper]](https://aclanthology.org/2024.findings-acl.830.pdf)
1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B.
1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge.
1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.
1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generating metadata for Stable Diffusion. [[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generating metadata for Stable Diffusion. [[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in Chinese medical domain, based on LLaVA-1.5-7B.
1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX.
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way for building multi-agent LLMs applications and supports model fine-tuning via LLaMA Factory.
1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357)
1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**: A modified library that supports long sequence SFT & DPO using ring attention.
1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**: An o1-like model fine-tuned by NovaSky AI with very small cost.
1. **[WeClone](https://github.com/xming521/WeClone)**: One-stop solution for creating your digital avatar from chat logs.
1. **[EmoLLM](https://github.com/SmartFlowAI/EmoLLM)**: A project about large language models (LLMs) and mental health.
</details>
## License
This repository is licensed under the [Apache-2.0 License](LICENSE).
Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [GPT-2](https://github.com/openai/gpt-2/blob/master/LICENSE) / [Granite](LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Llama 4](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3/Phi-4](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [TeleChat2](https://huggingface.co/Tele-AI/telechat-7B/blob/main/TeleChat%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
## Citation


@@ -1,46 +1,87 @@
![# LLaMA Factory](assets/logo.png)
[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[![GitHub contributors](https://img.shields.io/github/contributors/hiyouga/LLaMA-Factory?color=orange)](https://github.com/hiyouga/LLaMA-Factory/graphs/contributors)
[![GitHub workflow](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml/badge.svg)](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml)
[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
[![Citation](https://img.shields.io/badge/citation-91-green)](#使用了-llama-factory-的项目)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![Citation](https://img.shields.io/badge/citation-614-green)](https://scholar.google.com/scholar?cites=12620864006390196564)
[![Docker Pulls](https://img.shields.io/docker/pulls/hiyouga/llamafactory)](https://hub.docker.com/r/hiyouga/llamafactory/tags)
[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![GitCode](https://gitcode.com/zhengyaowei/LLaMA-Factory/star/badge.svg)](https://gitcode.com/zhengyaowei/LLaMA-Factory)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)
[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Alaya](assets/alaya_new.svg)](https://docs.alayanew.com/docs/documents/newActivities/llamafactory/?utm_source=LLaMA-Factory)
[![Open in Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Open in Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Novita](https://img.shields.io/badge/Novita-Deploy%20Template-blue)](https://novita.ai/templates-library/105981?sharer=88115474-394e-4bda-968e-b88e123d0c47)
[![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)
### 获得[亚马逊](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)、[英伟达](https://developer.nvidia.cn/rtx/ai-toolkit)、[阿里云](https://help.aliyun.com/zh/pai/use-cases/fine-tune-a-llama-3-model-with-llama-factory)等的应用。
👋 加入我们的[微信群](assets/wechat.jpg)或 [NPU 用户群](assets/wechat_npu.jpg)。
<div align="center" markdown="1">
### 赞助商 ❤️
<a href="https://warp.dev/llama-factory">
<img alt="Warp sponsorship" width="400" src="https://github.com/user-attachments/assets/ab8dd143-b0fd-4904-bdc5-dd7ecac94eae">
</a>
#### [Warp:面向开发者的智能终端](https://warp.dev/llama-factory)
[适用于 MacOS、Linux 和 Windows](https://warp.dev/llama-factory)
----
### 使用零代码[命令行](#快速开始)与 [Web UI](#llama-board-可视化微调由-gradio-驱动) 轻松微调百余种大模型
![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)
</div>
👋 加入我们的[微信群](assets/wechat.jpg)、[NPU 用户群](assets/wechat_npu.jpg)或 [九章智算云算力优惠群](assets/wechat_alaya.png)。
\[ [English](README.md) | 中文 \]
**微调大模型可以像这样轻松…**
https://github.com/user-attachments/assets/43b700c6-a178-41db-b1f8-8190a5d3fcfc
选择你的打开方式:
- **入门教程**:https://zhuanlan.zhihu.com/p/695287607
- **框架文档**:https://llamafactory.readthedocs.io/zh-cn/latest/
- **框架文档(昇腾 NPU)**:https://ascend.github.io/docs/sources/llamafactory/
- **Colab(免费)**:https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing
- **本地机器**:请见[如何使用](#如何使用)
- **PAI-DSW(免费试用)**:https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
- **九章智算云(算力优惠活动)**:https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory
> [!NOTE]
> 除上述链接以外的其他网站均为未经许可的第三方网站,请小心甄别。
## 目录
- [项目特色](#项目特色)
- [性能指标](#性能指标)
- [官方博客](#官方博客)
- [更新日志](#更新日志)
- [模型](#模型)
- [训练方法](#训练方法)
- [数据集](#数据集)
- [软硬件依赖](#软硬件依赖)
- [如何使用](#如何使用)
- [安装 LLaMA Factory](#安装-llama-factory)
- [数据准备](#数据准备)
- [快速开始](#快速开始)
- [LLaMA Board 可视化微调](#llama-board-可视化微调由-gradio-驱动)
- [构建 Docker](#构建-docker)
- [利用 vLLM 部署 OpenAI API](#利用-vllm-部署-openai-api)
- [从魔搭社区下载](#从魔搭社区下载)
- [从魔乐社区下载](#从魔乐社区下载)
- [使用 W&B 面板](#使用-wb-面板)
- [使用 SwanLab 面板](#使用-swanlab-面板)
- [使用了 LLaMA Factory 的项目](#使用了-llama-factory-的项目)
- [协议](#协议)
- [引用](#引用)
## 项目特色
- **多种模型**:LLaMA、LLaVA、Mistral、Mixtral-MoE、Qwen、Qwen2-VL、DeepSeek、Yi、Gemma、ChatGLM、Phi 等等。
- **集成方法**:增量预训练、多模态指令监督微调、奖励模型训练、PPO 训练、DPO 训练、KTO 训练、ORPO 训练等等。
- **多种精度**:16 比特全参数微调、冻结微调、LoRA 微调和基于 AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ 的 2/3/4/5/6/8 比特 QLoRA 微调。
- **先进算法**:[GaLore](https://github.com/jiaweizzhao/GaLore)、[BAdam](https://github.com/Ledzy/BAdam)、[APOLLO](https://github.com/zhuhanqing/APOLLO)、[Adam-mini](https://github.com/zyushun/Adam-mini)、[Muon](https://github.com/KellerJordan/Muon)、DoRA、LongLoRA、LLaMA Pro、Mixture-of-Depths、LoRA+、LoftQ 和 PiSSA。
- **实用技巧**:[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)、[Unsloth](https://github.com/unslothai/unsloth)、[Liger Kernel](https://github.com/linkedin/Liger-Kernel)、RoPE scaling、NEFTune 和 rsLoRA。
- **广泛任务**:多轮对话、工具调用、图像理解、视觉定位、视频识别和语音理解等等。
- **实验监控**:LlamaBoard、TensorBoard、Wandb、MLflow、[SwanLab](https://github.com/SwanHubX/SwanLab) 等等。
- **极速推理**:基于 [vLLM](https://github.com/vllm-project/vllm) 或 [SGLang](https://github.com/sgl-project/sglang) 的 OpenAI 风格 API、浏览器界面和命令行接口。
### 最新模型的 Day-N 微调适配

| 适配时间 | 模型名称 |
| ------------ | ------------------------------------------------------------ |
| Day 0 | Qwen3 / Qwen2.5-VL / Gemma 3 / InternLM 3 / MiniCPM-o-2.6 |
| Day 1 | Llama 3 / GLM-4 / Mistral Small / PaliGemma2 / Llama 4 |

## 性能指标

与 ChatGLM 官方的 [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning) 微调相比,LLaMA Factory 的 LoRA 微调提供了 **3.7 倍**的加速比,同时在广告文案生成任务上取得了更高的 Rouge 分数。结合 4 比特量化技术,LLaMA Factory 的 QLoRA 微调进一步降低了 GPU 显存消耗。

![benchmark](assets/benchmark.svg)

<details><summary>变量定义</summary>

- **Training Speed**: 训练阶段每秒处理的样本数量。(批处理大小=4,截断长度=1024)
- **Rouge Score**: [广告文案生成](https://aclanthology.org/D19-1321.pdf)任务验证集上的 Rouge-2 分数。(批处理大小=4,截断长度=1024)
- **GPU Memory**: 4 比特量化训练的 GPU 显存峰值。(批处理大小=1,截断长度=1024)
- 我们在 ChatGLM 的 P-Tuning 中采用 `pre_seq_len=128`,在 LLaMA Factory 的 LoRA 微调中采用 `lora_rank=32`。

</details>

## 官方博客

- [使用 LLaMA-Factory 微调 Qwen2.5-VL 实现自动驾驶场景微调](https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory)(中文)
- [通过亚马逊 SageMaker HyperPod 上的 LLaMA-Factory 增强多模态模型银行文档的视觉信息提取](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/)(英文)
- [Easy Dataset × LLaMA Factory: 让大模型高效学习领域知识](https://buaa-act.feishu.cn/wiki/KY9xwTGs1iqHrRkjXBwcZP9WnL9)(中文)

<details><summary>全部博客</summary>

- [LLaMA Factory:微调 DeepSeek-R1-Distill-Qwen-7B 模型实现新闻标题分类器](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b)(中文)
- [基于 Amazon SageMaker 和 LLaMA-Factory 打造一站式无代码模型微调部署平台 Model Hub](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)(中文)
- [LLaMA Factory 多模态微调实践:微调 Qwen2-VL 构建文旅大模型](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)(中文)
- [LLaMA Factory:微调 LLaMA3 模型实现角色扮演](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)(中文)

</details>
## 更新日志
[25/04/28] 我们支持了 **[Qwen3](https://qwenlm.github.io/blog/qwen3/)** 系列模型的微调。
[25/04/21] 我们支持了 **[Muon](https://github.com/KellerJordan/Muon)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。感谢 [@tianshijing](https://github.com/tianshijing) 的 PR。
[25/04/16] 我们支持了 **[InternVL3](https://huggingface.co/OpenGVLab/InternVL3-8B)** 模型的微调。查看 [PR #7258](https://github.com/hiyouga/LLaMA-Factory/pull/7258) 以使用。
[25/04/14] 我们支持了 **[GLM-Z1](https://huggingface.co/THUDM/GLM-Z1-9B-0414)** 和 **[Kimi-VL](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)** 模型的微调。
[25/04/06] 我们支持了 **[Llama 4](https://ai.meta.com/blog/llama-4-multimodal-intelligence/)** 模型的微调。查看 [PR #7611](https://github.com/hiyouga/LLaMA-Factory/pull/7611) 以使用。
<details><summary>展开日志</summary>
[25/03/31] 我们支持了 **[Qwen2.5 Omni](https://qwenlm.github.io/blog/qwen2.5-omni/)** 模型的微调。查看 [PR #7537](https://github.com/hiyouga/LLaMA-Factory/pull/7537) 以使用。
[25/03/15] 我们支持了 **[SGLang](https://github.com/sgl-project/sglang)** 推理后端,请使用 `infer_backend: sglang` 启用。
[25/03/12] 我们支持了 **[Gemma 3](https://huggingface.co/blog/gemma3)** 模型的微调。
[25/02/24] 我们宣布开源 **[EasyR1](https://github.com/hiyouga/EasyR1)**,一个高效可扩展的多模态强化学习框架,支持高效的 GRPO 训练。
[25/02/11] 我们支持了在导出模型时保存 **[Ollama](https://github.com/ollama/ollama)** 配置文件。详细用法请参照 [examples](examples/README_zh.md)。
[25/02/05] 我们支持了在语音理解任务上微调 **[Qwen2-Audio](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct)** 和 **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** 模型。
[25/01/31] 我们支持了 **[DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)** 和 **[Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)** 模型的微调。
[25/01/15] 我们支持了 **[APOLLO](https://arxiv.org/abs/2412.05270)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。
[25/01/14] 我们支持了 **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** 和 **[MiniCPM-V-2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6)** 模型的微调。 感谢 [@BUAADreamer](https://github.com/BUAADreamer) 的 PR.
[25/01/14] 我们支持了 **[InternLM 3](https://huggingface.co/collections/internlm/)** 模型的微调。感谢 [@hhaAndroid](https://github.com/hhaAndroid) 的 PR。
[25/01/10] 我们支持了 **[Phi-4](https://huggingface.co/microsoft/phi-4)** 模型的微调。
[24/12/21] 我们支持了使用 **[SwanLab](https://github.com/SwanHubX/SwanLab)** 跟踪与可视化实验。详细用法请参考 [此部分](#使用-swanlab-面板)。
[24/11/27] 我们支持了 **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** 模型的微调和 **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** 数据集。
[24/10/09] 我们支持了从 **[魔乐社区](https://modelers.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔乐社区下载)。
[24/09/19] 我们支持了 **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** 模型的微调。
[24/08/30] 我们支持了 **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** 模型的微调。感谢 [@simonJJJ](https://github.com/simonJJJ) 的 PR。
[24/08/27] 我们支持了 **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**。请使用 `enable_liger_kernel: true` 来加速训练。
[24/08/09] 我们支持了 **[Adam-mini](https://github.com/zyushun/Adam-mini)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。感谢 [@relic-yuexi](https://github.com/relic-yuexi) 的 PR。
[24/07/04] 我们支持了[无污染打包训练](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing)。请使用 `neat_packing: true` 参数。感谢 [@chuan298](https://github.com/chuan298) 的 PR。
[24/06/16] 我们支持了 **[PiSSA](https://arxiv.org/abs/2404.02948)** 算法。详细用法请参照 [examples](examples/README_zh.md)。
</details>
> [!TIP]
> 如果您无法使用最新的功能,请尝试重新拉取代码并再次安装 LLaMA-Factory。
## 模型
| 模型名 | 参数量 | Template |
| ----------------------------------------------------------------- | -------------------------------- | ------------------- |
| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
| [DeepSeek 2.5/3](https://huggingface.co/deepseek-ai) | 236B/671B | deepseek3 |
| [DeepSeek R1 (Distill)](https://huggingface.co/deepseek-ai) | 1.5B/7B/8B/14B/32B/70B/671B | deepseekr1 |
| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |
| [Gemma 3](https://huggingface.co/google) | 1B/4B/12B/27B | gemma3/gemma (1B) |
| [GLM-4/GLM-4-0414/GLM-Z1](https://huggingface.co/THUDM) | 9B/32B | glm4/glmz1 |
| [GPT-2](https://huggingface.co/openai-community) | 0.1B/0.4B/0.8B/1.5B | - |
| [Granite 3.0-3.3](https://huggingface.co/ibm-granite) | 1B/2B/3B/8B | granite3 |
| [Hunyuan](https://huggingface.co/tencent/) | 7B | hunyuan |
| [Index](https://huggingface.co/IndexTeam) | 1.9B | index |
| [InternLM 2-3](https://huggingface.co/internlm) | 7B/8B/20B | intern2 |
| [InternVL 2.5-3](https://huggingface.co/OpenGVLab) | 1B/2B/8B/14B/38B/78B | intern_vl |
| [Kimi-VL](https://huggingface.co/moonshotai) | 16B | kimi_vl |
| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
| [Llama 3-3.3](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |
| [Llama 4](https://huggingface.co/meta-llama) | 109B/402B | llama4 |
| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama |
| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |
| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |
| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |
| [MiMo](https://huggingface.co/XiaomiMiMo) | 7B | mimo |
| [MiniCPM](https://huggingface.co/openbmb) | 0.5B/1B/2B/4B/8B | cpm/cpm3/cpm4 |
| [MiniCPM-o-2.6/MiniCPM-V-2.6](https://huggingface.co/openbmb) | 8B | minicpm_o/minicpm_v |
| [Ministral/Mistral-Nemo](https://huggingface.co/mistralai) | 8B/12B | ministral |
| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
| [Mistral Small](https://huggingface.co/mistralai) | 24B | mistral_small |
| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
| [PaliGemma/PaliGemma2](https://huggingface.co/google) | 3B/10B/28B | paligemma |
| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
| [Phi-3/Phi-3.5](https://huggingface.co/microsoft) | 4B/14B | phi |
| [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small |
| [Phi-4](https://huggingface.co/microsoft) | 14B | phi4 |
| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral |
| [Qwen (1-2.5) (Code/Math/MoE/QwQ)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |
| [Qwen3 (MoE)](https://huggingface.co/Qwen) | 0.6B/1.7B/4B/8B/14B/32B/235B | qwen3 |
| [Qwen2-Audio](https://huggingface.co/Qwen) | 7B | qwen2_audio |
| [Qwen2.5-Omni](https://huggingface.co/Qwen) | 3B/7B | qwen2_omni |
| [Qwen2-VL/Qwen2.5-VL/QVQ](https://huggingface.co/Qwen) | 2B/3B/7B/32B/72B | qwen2_vl |
| [Seed Coder](https://huggingface.co/ByteDance-Seed) | 8B | seed_coder |
| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 |
| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
| [TeleChat2](https://huggingface.co/Tele-AI) | 3B/7B/35B/115B | telechat2 |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |
| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
> [!NOTE]
> 对于所有“基座”(Base)模型,`template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”(Instruct/Chat)模型请务必使用**对应的模板**。
>
> 请务必在训练和推理时采用**完全一致**的模板。
>
> \*:您需要从 main 分支安装 `transformers` 并使用 `DISABLE_VERSION_CHECK=1` 来跳过版本检查。
>
> \*\*:您需要安装特定版本的 `transformers` 以使用该模型。
项目所支持模型的完整列表请参阅 [constants.py](src/llamafactory/extras/constants.py)。
## 训练方法
| 方法 | 全参数训练 | 部分参数训练 | LoRA | QLoRA |
| --------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| 预训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 指令监督微调 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| 奖励模型训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)
- [Open-Thoughts (en)](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- [Open-R1-Math (en)](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
- [Chinese-DeepSeek-R1-Distill (zh)](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k-SFT)
- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
- [COIG-P (zh)](https://huggingface.co/datasets/m-a-p/COIG-P)
- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
- [RLAIF-V (en)](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)
- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
| 必需项 | 至少 | 推荐 |
| ------------ | ------- | --------- |
| python | 3.9 | 3.10 |
| torch | 2.0.0 | 2.6.0 |
| torchvision | 0.15.0 | 0.21.0 |
| transformers | 4.45.0 | 4.50.0 |
| datasets | 2.16.0 | 3.2.0 |
| accelerate | 0.34.0 | 1.2.1 |
| peft | 0.14.0 | 0.15.1 |
| trl | 0.8.6 | 0.9.6 |
| 可选项 | 至少 | 推荐 |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.16.4 |
| bitsandbytes | 0.39.0 | 0.43.1 |
| vllm | 0.4.3 | 0.8.2 |
| flash-attn | 2.5.6 | 2.7.2 |
### 硬件依赖
\* *估算值*
| 方法 | 精度 | 7B | 14B | 30B | 70B | `x`B |
| ------------------------------- | ---- | ----- | ----- | ----- | ------ | ------- |
| Full (`bf16` or `fp16`) | 32 | 120GB | 240GB | 600GB | 1200GB | `18x`GB |
| Full (`pure_bf16`) | 16 | 60GB | 120GB | 300GB | 600GB | `8x`GB |
| Freeze/LoRA/GaLore/APOLLO/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | `2x`GB |
| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | `x`GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | `x/2`GB |
| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | `x/4`GB |
## 如何使用
### 安装 LLaMA Factory
> [!IMPORTANT]
> 此步骤为必需。
#### 从源码安装
```bash
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
pip install -e ".[torch,metrics]" --no-build-isolation
```
可选的额外依赖项:torch、torch-npu、metrics、deepspeed、liger-kernel、bitsandbytes、hqq、eetq、gptq、aqlm、vllm、sglang、galore、apollo、badam、adam-mini、qwen、minicpm_v、modelscope、openmind、swanlab、dev
> [!TIP]
> 遇到包冲突时,可使用 `pip install --no-deps -e .` 解决。
#### 从镜像安装
```bash
docker run -it --rm --gpus=all --ipc=host hiyouga/llamafactory:latest
```
该镜像基于 Ubuntu 22.04(x86\_64)、CUDA 12.4、Python 3.11、PyTorch 2.6.0 和 Flash-attn 2.7.4 构建。
查看全部镜像:https://hub.docker.com/r/hiyouga/llamafactory/tags
请参阅[构建 Docker](#构建-docker) 来重新构建镜像。
<details><summary>使用 <b>uv</b> 构建虚拟环境</summary>
使用 [uv](https://github.com/astral-sh/uv) 创建隔离的 Python 环境:
```bash
uv sync --extra torch --extra metrics --prerelease=allow
```
在环境中运行 LLaMA-Factory:
```bash
uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```
</details>
<details><summary>Windows 用户指南</summary>
#### 安装 PyTorch
Windows 平台需要额外手动安装 GPU 版本的 PyTorch 依赖包,您可以参考[官方网站](https://pytorch.org/get-started/locally/)和以下命令安装并测试 PyTorch 是否正确安装。
```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.cuda.is_available())"
```
如果看到 `True` 则说明安装成功。
若遇到类似 `Can't pickle local object` 的报错,请设置 `dataloader_num_workers: 0`
#### 安装 BitsAndBytes
如果要在 Windows 平台上开启量化 LoRAQLoRA需要安装预编译的 `bitsandbytes` 库, 支持 CUDA 11.1 到 12.2, 请根据您的 CUDA 版本情况选择适合的[发布版本](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels)。
```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
```
#### 安装 Flash Attention-2
如果要在 Windows 平台上开启 FlashAttention-2请使用 [flash-attention-windows-wheel](https://huggingface.co/lldacing/flash-attention-windows-wheel) 中的脚本自行编译与安装。
</details>
<details><summary>昇腾 NPU 用户指南</summary>
在昇腾 NPU 设备上安装 LLaMA Factory 时,请升级 Python 到 3.10 及以上,并需要指定额外依赖项,使用 `pip install -e ".[torch-npu,metrics]"` 命令安装。此外,还需要安装 **[Ascend CANN Toolkit 与 Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**,安装方法请参考[安装教程](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html)或使用以下命令:
```bash
# 请替换 URL 为 CANN 版本和设备型号对应的 URL
# 安装 CANN Toolkit 与 Kernels 后,配置环境变量
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
| 依赖项 | 至少 | 推荐 |
| ------------ | ------- | -------------- |
| CANN | 8.0.RC1 | 8.0.0.alpha002 |
| torch | 2.1.0 | 2.4.0 |
| torch-npu | 2.1.0 | 2.4.0.post2 |
| deepspeed | 0.13.2 | 0.13.2 |
| vllm-ascend | - | 0.7.3 |
请使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定运算设备。
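
例如,下面的命令示意如何在 0 号和 1 号 NPU 上启动 LoRA 微调(示例中的配置文件路径仅供参考):

```bash
# 通过 ASCEND_RT_VISIBLE_DEVICES 指定使用 0 号和 1 号 NPU
ASCEND_RT_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```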
下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
#### 安装 BitsAndBytes
如果要在 Ascend NPU 上进行基于 bitsandbytes 的 QLoRA 量化微调,请执行如下步骤:
1. 手动编译 bitsandbytes请参考[安装文档](https://huggingface.co/docs/bitsandbytes/installation?backend=Ascend+NPU&platform=Ascend+NPU)完成 NPU 版的 bitsandbytes 安装,编译要求环境 cmake 版本不低于 3.22.1g++ 版本不低于 12.x。
```bash
# 从源码安装 bitsandbytes
# 克隆 bitsandbytes 仓库, Ascend NPU 目前在 multi-backend-refactor 中支持
git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git
cd bitsandbytes/
# 安装依赖
pip install -r requirements-dev.txt
# 安装编译工具依赖,该步骤在不同系统上命令有所不同,供参考
apt-get install -y build-essential cmake
# 编译 & 安装
cmake -DCOMPUTE_BACKEND=npu -S .
make
pip install .
```
2. 安装 transformers 的 main 分支版本。
```bash
git clone -b main https://github.com/huggingface/transformers.git
cd transformers
pip install .
```
3. 在训练参数中设置 `double_quantization: false`,可参考[示例](examples/train_qlora/llama3_lora_sft_bnb_npu.yaml)。
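
一个最小的参数示意如下(假设使用 4 比特量化,具体取值请以上述示例文件为准):

```yaml
# Ascend NPU 上基于 bitsandbytes 的 QLoRA:关闭二次量化
quantization_bit: 4
double_quantization: false
```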
</details>
### 数据准备
关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope / Modelers 上的数据集或加载本地数据集。
> [!NOTE]
> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。
您也可以使用 **[Easy Dataset](https://github.com/ConardLi/easy-dataset)** 或 **[GraphGen](https://github.com/open-sciencelab/GraphGen)** 构建用于微调的合成数据。
### 快速开始
下面三行命令分别对 Llama3-8B-Instruct 模型进行 LoRA **微调**、**推理**和**合并**。
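
参考命令如下(示例配置文件位于 `examples` 目录,具体路径请以仓库实际内容为准):

```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml   # LoRA 微调
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml     # 推理对话
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml  # 合并 LoRA 权重并导出
```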
> [!TIP]
> 使用 `llamafactory-cli help` 显示帮助信息。
>
> 遇到报错请先看[常见问题](https://github.com/hiyouga/LLaMA-Factory/issues/4614)。
### LLaMA Board 可视化微调(由 [Gradio](https://github.com/gradio-app/gradio) 驱动)
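
启动可视化界面的参考命令如下:

```bash
llamafactory-cli webui
```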
### 构建 Docker

CUDA 用户:
```bash
docker build -f ./docker/docker-cuda/Dockerfile \
--build-arg PIP_INDEX=https://pypi.org/simple \
--build-arg EXTRAS=metrics \
-t llamafactory:latest .
docker run -dit --ipc=host --gpus=all \
-p 7860:7860 \
-p 8000:8000 \
--name llamafactory \
llamafactory:latest
docker exec -it llamafactory bash
```
昇腾 NPU 用户:
```bash
# 根据您的环境选择镜像
docker build -f ./docker/docker-npu/Dockerfile \
--build-arg PIP_INDEX=https://pypi.org/simple \
--build-arg EXTRAS=torch-npu,metrics \
-t llamafactory:latest .
# 根据您的资源更改 `device`
docker run -dit --ipc=host \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
--name llamafactory \
llamafactory:latest
docker exec -it llamafactory bash
```

AMD ROCm 用户:
```bash
docker build -f ./docker/docker-rocm/Dockerfile \
--build-arg PIP_INDEX=https://pypi.org/simple \
--build-arg EXTRAS=metrics \
-t llamafactory:latest .
docker run -dit --ipc=host \
-p 7860:7860 \
-p 8000:8000 \
--device /dev/kfd \
--device /dev/dri \
--name llamafactory \
llamafactory:latest
docker exec -it llamafactory bash
```
</details>
<details><summary>使用数据卷</summary>
您可以通过移除 Dockerfile 中 `VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]` 的注释来使用数据卷。
在构建 Docker 时使用参数 `-v ./hf_cache:/root/.cache/huggingface` 来挂载数据卷。各个数据卷的含义表示如下:
- `hf_cache`:使用宿主机的 Hugging Face 缓存文件夹。
- `shared_data`:宿主机中存放数据集的文件夹路径。
- `output`:将导出目录设置为该路径后,即可在宿主机中访问导出后的模型。
</details>
### 利用 vLLM 部署 OpenAI API
```bash
API_PORT=8000 llamafactory-cli api examples/inference/llama3.yaml infer_backend=vllm vllm_enforce_eager=true
```
```
> [!TIP]
> API 文档请查阅[这里](https://platform.openai.com/docs/api-reference/chat/create)。
>
> 示例:[图像理解](scripts/api_example/test_image.py) | [工具调用](scripts/api_example/test_toolcall.py)
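
下面给出一个调用该 OpenAI 风格接口的示意请求(假设服务监听本机 8000 端口,模型名称以实际部署为准):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "你好"}]
  }'
```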
### 从魔搭社区下载
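
您可以通过下述方法,使用魔搭社区下载数据集和模型。

```bash
export USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1`
```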
`model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔搭社区](https://modelscope.cn/models)查看所有可用的模型,例如 `LLM-Research/Meta-Llama-3-8B-Instruct`
### 从魔乐社区下载
您也可以通过下述方法,使用魔乐社区下载数据集和模型。
```bash
export USE_OPENMIND_HUB=1 # Windows 使用 `set USE_OPENMIND_HUB=1`
```
`model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔乐社区](https://modelers.cn/models)查看所有可用的模型,例如 `TeleAI/TeleChat-7B-pt`
### 使用 W&B 面板
若要使用 [Weights & Biases](https://wandb.ai) 记录实验数据,请在 yaml 文件中添加下面的参数。
在启动训练任务时,将 `WANDB_API_KEY` 设置为[密钥](https://wandb.ai/authorize)来登录 W&B 账户。
### 使用 SwanLab 面板
若要使用 [SwanLab](https://github.com/SwanHubX/SwanLab) 记录实验数据,请在 yaml 文件中添加下面的参数。
```yaml
use_swanlab: true
swanlab_run_name: test_run # 可选
```
在启动训练任务时,登录 SwanLab 账户有以下三种方式:
方式一:在 yaml 文件中添加 `swanlab_api_key=<your_api_key>` ,并设置为你的 [API 密钥](https://swanlab.cn/settings)。
方式二:将环境变量 `SWANLAB_API_KEY` 设置为你的 [API 密钥](https://swanlab.cn/settings)。
方式三:启动前使用 `swanlab login` 命令完成登录。
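
以方式二为例,可以在启动训练前通过环境变量完成登录(密钥与配置文件路径仅为示意):

```bash
export SWANLAB_API_KEY=<your_api_key>
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```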
## 使用了 LLaMA Factory 的项目
如果您有项目希望添加至下述列表,请通过邮件联系或者创建一个 PR。
1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)
1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)
1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)
1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611)
1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: 天文大模型 StarWhisper基于 ChatGLM2-6B 和 Qwen-14B 在天文数据上微调而得。
1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: 中文法律领域大模型 DISC-LawLLM基于 Baichuan-13B 微调而得,具有法律推理和知识检索能力。
1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: 孙思邈中文医疗大模型 Sumsimiao基于 Baichuan-7B 和 ChatGLM-6B 在中文医疗数据上微调而得。
1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: 医疗大模型项目 CareGPT基于 LLaMA2-7B 和 Baichuan-13B 在中文医疗数据上微调而得。
1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**MBTI性格大模型项目根据数据集与训练方式让任意 LLM 拥有 16 个不同的性格类型。
1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**:一个用于生成 Stable Diffusion 提示词的大型语言模型。[[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**:中文多模态医学大模型,基于 LLaVA-1.5-7B 在中文多模态医疗数据上微调而得。
1. **[AutoRE](https://github.com/THUDM/AutoRE)**:基于大语言模型的文档级关系抽取系统。
1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**:在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**:一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调。
1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**:一个全链路 RAG 检索模型微调、推理和蒸馏代码库。[[blog]](https://zhuanlan.zhihu.com/p/987727357)
1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**:一个魔改后的代码库,通过 Ring Attention 支持长序列的 SFT 和 DPO 训练。
1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**:由 NovaSky AI 微调的低成本类 o1 长推理模型。
1. **[WeClone](https://github.com/xming521/WeClone)**:从聊天记录创造数字分身的一站式解决方案。
</details>
本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源。
使用模型权重时,请遵循对应的模型协议:[Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [GPT-2](https://github.com/openai/gpt-2/blob/master/LICENSE) / [Granite](LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Llama 4](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3/Phi-4](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [TeleChat2](https://huggingface.co/Tele-AI/telechat-7B/blob/main/TeleChat%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
## 引用

The [dataset_info.json](dataset_info.json) contains all available datasets. If you are using a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and specify `dataset: dataset_name` before training to use it.
The `dataset_info.json` file should be put in the `dataset_dir` directory. You can change `dataset_dir` to use another directory. The default value is `./data`.
Currently we support datasets in **alpaca** and **sharegpt** format. Allowed file types include json, jsonl, csv, parquet, arrow.
```json
"dataset_name": {
"hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url and file_name)",
"ms_hub_url": "the name of the dataset repository on the Model Scope hub. (if specified, ignore script_url and file_name)",
"script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name)",
"hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url, file_name and cloud_file_name)",
"ms_hub_url": "the name of the dataset repository on the Model Scope hub. (if specified, ignore script_url, file_name and cloud_file_name)",
"script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name and cloud_file_name)",
"cloud_file_name": "the name of the dataset file in s3/gcs cloud storage. (if specified, ignore file_name)",
"file_name": "the name of the dataset folder or dataset file in this directory. (required if above are not specified)",
"formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})",
"ranking": "whether the dataset is a preference dataset or not. (default: False)",
"tools": "the column name in the dataset containing the tool description. (default: None)",
"images": "the column name in the dataset containing the image inputs. (default: None)",
"videos": "the column name in the dataset containing the videos inputs. (default: None)",
"audios": "the column name in the dataset containing the audios inputs. (default: None)",
"chosen": "the column name in the dataset containing the chosen answers. (default: None)",
"rejected": "the column name in the dataset containing the rejected answers. (default: None)",
"kto_tag": "the column name in the dataset containing the kto tags. (default: None)"
  }
}
```

## Alpaca Format

### Supervised Fine-Tuning Dataset
* [Example dataset](alpaca_en_demo.json)
In supervised fine-tuning, the `instruction` column will be concatenated with the `input` column and used as the user prompt, then the user prompt would be `instruction\ninput`. The `output` column represents the model response.
For reasoning models, if the dataset contains chain-of-thought (CoT), the CoT needs to be placed in the model responses, such as `<think>cot</think>output`.
The `system` column will be used as the system prompt if specified.
The `history` column is a list consisting of string tuples representing prompt-response pairs.
```json
[
{
"instruction": "human instruction (required)",
"input": "human input (optional)",
"instruction": "user instruction (required)",
"input": "user input (optional)",
"output": "model response (required)",
"system": "system prompt (optional)",
"history": [
["human instruction in the first round (optional)", "model response in the first round (optional)"],
["human instruction in the second round (optional)", "model response in the second round (optional)"]
["user instruction in the first round (optional)", "model response in the first round (optional)"],
["user instruction in the second round (optional)", "model response in the second round (optional)"]
]
}
]
```
> [!TIP]
> If the model has reasoning capabilities (e.g. Qwen3) but the dataset does not contain chain-of-thought (CoT), LLaMA-Factory will automatically add empty CoT to the data. When `enable_thinking` is `True` (slow thinking, by default), the empty CoT will be added to the model responses and loss computation will be considered; otherwise (fast thinking), it will be added to the user prompts and loss computation will be ignored. Please keep the `enable_thinking` parameter consistent during training and inference.
>
> If you want to train data containing CoT with slow thinking and data without CoT with fast thinking, you can set `enable_thinking` to `None`. However, this feature is relatively complicated and should be used with caution.
### Pre-training Dataset
- [Example dataset](c4_demo.jsonl)
In pre-training, only the `text` column will be used for model learning.
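
For example, a minimal pre-training dataset file may look like the following (hypothetical content):

```json
[
  {"text": "The first pre-training document."},
  {"text": "The second pre-training document."}
]
```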
### Preference Dataset

It requires a better response in the `chosen` column and a worse response in the `rejected` column.
```json
[
{
"instruction": "human instruction (required)",
"input": "human input (optional)",
"instruction": "user instruction (required)",
"input": "user input (optional)",
"chosen": "chosen answer (required)",
"rejected": "rejected answer (required)"
}
]
```

### Multimodal Image Dataset

An additional column `images` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.

### Multimodal Video Dataset
An additional column `videos` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.
### Multimodal Audio Dataset
An additional column `audios` is required. Please refer to the [sharegpt](#sharegpt-format) format for details.
## Sharegpt Format
### Supervised Fine-Tuning Dataset
Note that the human and observation should appear in odd positions, while gpt and function should appear in even positions.
"conversations": [
{
"from": "human",
"value": "human instruction"
"value": "user instruction"
},
{
"from": "function_call",
Preference datasets in sharegpt format also require a better message in `chosen` column and a worse message in `rejected` column.
"conversations": [
{
"from": "human",
"value": "human instruction"
"value": "user instruction"
},
{
"from": "gpt",
@@ -225,7 +240,7 @@ Preference datasets in sharegpt format also require a better message in `chosen`
},
{
"from": "human",
"value": "human instruction"
"value": "user instruction"
}
],
"chosen": {
KTO datasets require an extra `kto_tag` column containing the boolean human feedback.
"conversations": [
{
"from": "human",
"value": "human instruction"
"value": "user instruction"
},
{
"from": "gpt",
### Multimodal Image Dataset
- [Example dataset](mllm_demo.json)
Multimodal image datasets require an `images` column containing the paths to the input images.
The number of images should be identical to the `<image>` tokens in the conversations.
"conversations": [
{
"from": "human",
"value": "<image>human instruction"
"value": "<image>user instruction"
},
{
"from": "gpt",
### Multimodal Video Dataset

The number of videos should be identical to the `<video>` tokens in the conversations.
"conversations": [
{
"from": "human",
"value": "<video>human instruction"
"value": "<video>user instruction"
},
{
"from": "gpt",
### Multimodal Audio Dataset
- [Example dataset](mllm_audio_demo.json)
Multimodal audio datasets require an `audios` column containing the paths to the input audios.
The number of audios should be identical to the `<audio>` tokens in the conversations.
```json
[
{
"conversations": [
{
"from": "human",
"value": "<audio>user instruction"
},
{
"from": "gpt",
"value": "model response"
}
],
"audios": [
"audio path (required)"
]
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"formatting": "sharegpt",
"columns": {
"messages": "conversations",
"audios": "audios"
}
}
```
### OpenAI Format
The openai format is simply a special case of the sharegpt format, where the first message may be a system prompt.
},
{
"role": "user",
"content": "human instruction"
"content": "user instruction"
},
{
"role": "assistant",

[dataset_info.json](dataset_info.json) 包含了所有可用的数据集。如果您希望使用自定义数据集,请**务必**在 `dataset_info.json` 文件中添加*数据集描述*,并通过修改 `dataset: 数据集名称` 配置来使用数据集。
其中 `dataset_info.json` 文件应放置在 `dataset_dir` 目录下。您可以通过修改 `dataset_dir` 参数来使用其他目录。默认值为 `./data`
目前我们支持 **alpaca** 格式和 **sharegpt** 格式的数据集。允许的文件类型包括 json、jsonl、csv、parquet 和 arrow。
```json
"数据集名称": {
"tools": "数据集代表工具描述的表头名称默认None",
"images": "数据集代表图像输入的表头名称默认None",
"videos": "数据集代表视频输入的表头名称默认None",
"audios": "数据集代表音频输入的表头名称默认None",
"chosen": "数据集代表更优回答的表头名称默认None",
"rejected": "数据集代表更差回答的表头名称默认None",
"kto_tag": "数据集代表 KTO 标签的表头名称默认None"
  }
}
```

## Alpaca 格式

### 指令监督微调数据集
- [样例数据集](alpaca_zh_demo.json)
在指令监督微调时,`instruction` 列对应的内容会与 `input` 列对应的内容拼接后作为提示词,即提示词`instruction\ninput`。而 `output` 列对应的内容为模型回答。
对于推理类模型的微调,如果数据集包含思维链,则需要把思维链放在模型回答中,例如 `<think>cot</think>output`
如果指定,`system` 列对应的内容将被作为系统提示词。
```json
[
{
"instruction": "人类指令(必填)",
"input": "人类输入(选填)",
"instruction": "用户指令(必填)",
"input": "用户输入(选填)",
"output": "模型回答(必填)",
"system": "系统提示词(选填)",
"history": [
]
}
]
```
> [!TIP]
> 如果模型本身具备推理能力(如 Qwen3而数据集不包含思维链LLaMA-Factory 会自动为数据添加空思维链。当 `enable_thinking` 为 `True` 时(慢思考,默认),空思维链会添加到模型回答中并且计算损失,否则会添加到用户指令中并且不计算损失(快思考)。请在训练和推理时保持 `enable_thinking` 参数一致。
>
> 如果您希望训练包含思维链的数据时使用慢思考,训练不包含思维链的数据时使用快思考,可以设置 `enable_thinking` 为 `None`。但该功能较为复杂,请谨慎使用。
### 预训练数据集
- [样例数据集](c4_demo.jsonl)
在预训练时,只有 `text` 列中的内容会用于模型学习。
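
例如,一个最小的预训练数据集文件可以是如下形式(内容仅为示意):

```json
[
  {"text": "第一篇预训练文档。"},
  {"text": "第二篇预训练文档。"}
]
```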
### 偏好数据集

偏好数据集需要在 `chosen` 列中提供更优的回答,并在 `rejected` 列中提供更差的回答。
```json
[
{
"instruction": "人类指令(必填)",
"input": "人类输入(选填)",
"instruction": "用户指令(必填)",
"input": "用户输入(选填)",
"chosen": "优质回答(必填)",
"rejected": "劣质回答(必填)"
}
]
```

### KTO 数据集

KTO 数据集需要提供额外的 `kto_tag` 列。详情请参阅 [sharegpt](#sharegpt-格式)。

### 多模态图像数据集

多模态图像数据集需要提供额外的 `images` 列。详情请参阅 [sharegpt](#sharegpt-格式)。

### 多模态视频数据集
多模态视频数据集需要提供额外的 `videos` 列。详情请参阅 [sharegpt](#sharegpt-格式)。
### 多模态音频数据集
多模态音频数据集需要提供额外的 `audios` 列。详情请参阅 [sharegpt](#sharegpt-格式)。
## Sharegpt 格式
### 指令监督微调数据集
"conversations": [
{
"from": "human",
"value": "人类指令"
"value": "用户指令"
},
{
"from": "function_call",
Sharegpt 格式的偏好数据集同样需要在 `chosen` 列中提供更优的回答,并在 `rejected` 列中提供更差的回答。
"conversations": [
{
"from": "human",
"value": "人类指令"
"value": "用户指令"
},
{
"from": "gpt",
@@ -225,7 +239,7 @@ Sharegpt 格式的偏好数据集同样需要在 `chosen` 列中提供更优的
},
{
"from": "human",
"value": "人类指令"
"value": "用户指令"
}
],
"chosen": {
KTO 数据集需要额外添加一个 `kto_tag` 列,包含 bool 类型的人类反馈。
"conversations": [
{
"from": "human",
"value": "人类指令"
"value": "用户指令"
},
{
"from": "gpt",
### 多模态图像数据集
"conversations": [
{
"from": "human",
"value": "<image>人类指令"
"value": "<image><image>用户指令"
},
{
"from": "gpt",
"value": "模型回答"
}
],
"images": [
"图像路径(必填)",
"图像路径(必填)"
]
}
]

### 多模态视频数据集
"conversations": [
{
"from": "human",
"value": "<video>人类指令"
"value": "<video><video>用户指令"
},
{
"from": "gpt",
"value": "模型回答"
}
],
"videos": [
"视频路径(必填)",
"视频路径(必填)"
]
}
]
### 多模态音频数据集
- [样例数据集](mllm_audio_demo.json)
多模态音频数据集需要额外添加一个 `audios` 列,包含输入音频的路径。
注意音频的数量必须与文本中所有 `<audio>` 标记的数量严格一致。
```json
[
{
"conversations": [
{
"from": "human",
"value": "<audio><audio>用户指令"
},
{
"from": "gpt",
"value": "模型回答"
}
],
"audios": [
"音频路径(必填)",
"音频路径(必填)"
]
}
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"formatting": "sharegpt",
"columns": {
"messages": "conversations",
"audios": "audios"
}
}
```
### OpenAI 格式
OpenAI 格式仅仅是 sharegpt 格式的一种特殊情况,其中第一条消息可能是系统提示词。
},
{
"role": "user",
"content": "人类指令"
"content": "用户指令"
},
{
"role": "assistant",

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import datasets


_HF_ENDPOINT = os.getenv("HF_ENDPOINT", "https://huggingface.co")

_DESCRIPTION = "BELLE multiturn chat dataset."
_CITATION = """\
@article{belle2023exploring,
title={Exploring the Impact of Instruction Data Scaling on Large Language Models},
author={Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, Xiangang Li},
journal={arXiv preprint arXiv:2303.14742},
year={2023}
}
"""
_HOMEPAGE = "{}/datasets/BelleGroup/multiturn_chat_0.8M".format(_HF_ENDPOINT)
_HOMEPAGE = f"{_HF_ENDPOINT}/datasets/BelleGroup/multiturn_chat_0.8M"
_LICENSE = "gpl-3.0"
_URL = "{}/datasets/BelleGroup/multiturn_chat_0.8M/resolve/main/multiturn_chat_0.8M.json".format(_HF_ENDPOINT)
_URL = f"{_HF_ENDPOINT}/datasets/BelleGroup/multiturn_chat_0.8M/resolve/main/multiturn_chat_0.8M.json"
class BelleMultiturn(datasets.GeneratorBasedBuilder):
return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": file_path})]
def _generate_examples(self, filepath: str):
with open(filepath, "r", encoding="utf-8") as f:
with open(filepath, encoding="utf-8") as f:
for key, row in enumerate(f):
data = json.loads(row)
conversations = []

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import datasets
_HF_ENDPOINT = os.getenv("HF_ENDPOINT", "https://huggingface.co")
_DESCRIPTION = "Human preference data about helpfulness and harmlessness."
_CITATION = ""
_HOMEPAGE = "{}/datasets/Anthropic/hh-rlhf".format(_HF_ENDPOINT)
_HOMEPAGE = f"{_HF_ENDPOINT}/datasets/Anthropic/hh-rlhf"
_LICENSE = "mit"
_URL = "{}/datasets/Anthropic/hh-rlhf/resolve/main/".format(_HF_ENDPOINT)
_URL = f"{_HF_ENDPOINT}/datasets/Anthropic/hh-rlhf/resolve/main/"
_URLS = {
"train": [
_URL + "harmless-base/train.jsonl.gz",
datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepaths": file_path["test"]}),
]
def _generate_examples(self, filepaths: list[str]):
key = 0
for filepath in filepaths:
with open(filepath, "r", encoding="utf-8") as f:
with open(filepath, encoding="utf-8") as f:
for row in f:
data = json.loads(row)
chosen = data["chosen"]

# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import datasets
_HF_ENDPOINT = os.getenv("HF_ENDPOINT", "https://huggingface.co")
_DESCRIPTION = "UltraChat: Large-scale, Informative, and Diverse Multi-round Dialogue Data."
_CITATION = """\
@misc{UltraChat,
author = {Ding, Ning and Chen, Yulin and Xu, Bokai and Hu, Shengding and others},
title = {UltraChat: A Large-scale Auto-generated Multi-round Dialogue Data},
year = {2023},
publisher = {GitHub},
}
"""
_HOMEPAGE = "{}/datasets/stingning/ultrachat".format(_HF_ENDPOINT)
_HOMEPAGE = f"{_HF_ENDPOINT}/datasets/stingning/ultrachat"
_LICENSE = "cc-by-nc-4.0"
_BASE_DATA_URL = "{}/datasets/stingning/ultrachat/resolve/main/train_{{idx}}.jsonl".format(_HF_ENDPOINT)
_BASE_DATA_URL = f"{_HF_ENDPOINT}/datasets/stingning/ultrachat/resolve/main/train_{{idx}}.jsonl"
class UltraChat(datasets.GeneratorBasedBuilder):
file_paths = [dl_manager.download(_BASE_DATA_URL.format(idx=idx)) for idx in range(10)] # multiple shards
return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": file_paths})]
def _generate_examples(self, filepaths: list[str]):
for filepath in filepaths:
with open(filepath, "r", encoding="utf-8") as f:
with open(filepath, encoding="utf-8") as f:
for row in f:
try:
data = json.loads(row)
except Exception:
continue
key: int = data["id"]
content: List[str] = data["data"]
content: list[str] = data["data"]
if len(content) % 2 == 1:
content.pop(-1)
if len(content) < 2:

# https://hub.docker.com/r/hiyouga/pytorch/tags
ARG BASE_IMAGE=hiyouga/pytorch:th2.6.0-cu124-flashattn2.7.4-cxx11abi0-devel
FROM ${BASE_IMAGE}
# Installation arguments
ARG PIP_INDEX=https://pypi.org/simple
ARG EXTRAS=metrics
ARG INSTALL_FLASHATTN=false
ARG HTTP_PROXY=""
# Define environments
ENV MAX_JOBS=16
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
ENV http_proxy="${HTTP_PROXY}"
ENV https_proxy="${HTTP_PROXY}"
# Use Bash instead of default /bin/sh
SHELL ["/bin/bash", "-c"]
# Set the working directory
WORKDIR /app
# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
pip config set global.extra-index-url "${PIP_INDEX}" && \
pip install --no-cache-dir --upgrade pip packaging wheel setuptools
# Install the requirements
COPY requirements.txt /app
RUN pip config set global.index-url "$PIP_INDEX" && \
pip config set global.extra-index-url "$PIP_INDEX" && \
python -m pip install --upgrade pip && \
python -m pip install -r requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application into the image
COPY . /app
# Install LLaMA Factory
RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation
# Rebuild flash attention
RUN if [ "${INSTALL_FLASHATTN}" == "true" ]; then \
pip uninstall -y ninja && \
pip install --no-cache-dir ninja && \
pip install --no-cache-dir flash-attn --no-build-isolation; \
fi
# Set up volumes
VOLUME [ "/root/.cache/huggingface", "/root/.cache/modelscope", "/app/data", "/app/output" ]
# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]
# Expose port 7860 for the LLaMA Board
ENV GRADIO_SERVER_PORT 7860
# Expose port 7860 for LLaMA Board
ENV GRADIO_SERVER_PORT=7860
EXPOSE 7860
# Expose port 8000 for the API service
ENV API_PORT 8000
# Expose port 8000 for API service
ENV API_PORT=8000
EXPOSE 8000
# unset proxy
ENV http_proxy=
ENV https_proxy=
# Reset pip config
RUN pip config unset global.index-url && \
pip config unset global.extra-index-url
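A possible build invocation for the revised Dockerfile — a sketch only, assuming it lives at `docker/docker-cuda/Dockerfile` (as referenced by the compose file below) and is built from the repository root; the `EXTRAS` value and image tag are illustrative:
```bash
# Build from the repository root; override EXTRAS / INSTALL_FLASHATTN / HTTP_PROXY as needed.
docker build \
  -f docker/docker-cuda/Dockerfile \
  --build-arg EXTRAS=metrics,deepspeed \
  --build-arg INSTALL_FLASHATTN=false \
  -t llamafactory:latest .
```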

View File

@@ -0,0 +1,55 @@
# Start from the pytorch official image (ubuntu-22.04 + cuda-12.4.1 + python-3.11)
# https://hub.docker.com/r/pytorch/pytorch/tags
FROM pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel
# Define environments
ENV MAX_JOBS=16
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
# Define installation arguments
ARG APT_SOURCE=https://mirrors.tuna.tsinghua.edu.cn/ubuntu/
ARG PIP_INDEX=https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
# Set apt source
RUN cp /etc/apt/sources.list /etc/apt/sources.list.bak && \
{ \
echo "deb ${APT_SOURCE} jammy main restricted universe multiverse"; \
echo "deb ${APT_SOURCE} jammy-updates main restricted universe multiverse"; \
echo "deb ${APT_SOURCE} jammy-backports main restricted universe multiverse"; \
echo "deb ${APT_SOURCE} jammy-security main restricted universe multiverse"; \
} > /etc/apt/sources.list
# Install systemctl and wget
RUN apt-get update && \
apt-get install -y -o Dpkg::Options::="--force-confdef" systemd wget && \
apt-get clean
# Install git and vim
RUN apt-get update && \
apt-get install -y git vim && \
apt-get clean
# Install gcc and g++
RUN apt-get update && \
apt-get install -y gcc g++ && \
apt-get clean
# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
pip config set global.extra-index-url "${PIP_INDEX}" && \
pip install --no-cache-dir --upgrade pip packaging wheel setuptools
# Install flash-attn-2.7.4.post1 (cxx11abi=False)
RUN wget -nv https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp311-cp311-linux_x86_64.whl && \
pip install --no-cache-dir flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
# Install flashinfer-0.2.2.post1+cu124 (cxx11abi=False)
RUN wget -nv https://github.com/flashinfer-ai/flashinfer/releases/download/v0.2.2.post1/flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl && \
pip install --no-cache-dir flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl
# Reset pip config
RUN pip config unset global.index-url && \
pip config unset global.extra-index-url

View File

@@ -4,22 +4,15 @@ services:
dockerfile: ./docker/docker-cuda/Dockerfile
context: ../..
args:
INSTALL_BNB: false
INSTALL_VLLM: false
INSTALL_DEEPSPEED: false
INSTALL_FLASHATTN: false
PIP_INDEX: https://pypi.org/simple
EXTRAS: metrics
container_name: llamafactory
volumes:
- ../../hf_cache:/root/.cache/huggingface
- ../../ms_cache:/root/.cache/modelscope
- ../../data:/app/data
- ../../output:/app/output
ports:
- "7860:7860"
- "8000:8000"
ipc: host
tty: true
# shm_size: "16gb" # ipc: host is set
stdin_open: true
command: bash
deploy:

View File

@@ -1,45 +1,58 @@
# Use the Ubuntu 22.04 image with CANN 8.0.rc1
# More versions can be found at https://hub.docker.com/r/ascendai/cann/tags
# FROM ascendai/cann:8.0.rc1-910-ubuntu22.04-py3.8
FROM ascendai/cann:8.0.rc1-910b-ubuntu22.04-py3.8
# FROM ascendai/cann:8.0.rc1-910-openeuler22.03-py3.8
# FROM ascendai/cann:8.0.rc1-910b-openeuler22.03-py3.8
# https://hub.docker.com/r/ascendai/cann/tags
ARG BASE_IMAGE=ascendai/cann:8.0.0-910b-ubuntu22.04-py3.11
FROM ${BASE_IMAGE}
# Installation arguments
ARG PIP_INDEX=https://pypi.org/simple
ARG EXTRAS=torch-npu,metrics
ARG HTTP_PROXY=""
# Define environments
ENV MAX_JOBS=16
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
ENV http_proxy="${HTTP_PROXY}"
ENV https_proxy="${HTTP_PROXY}"
# Define installation arguments
ARG INSTALL_DEEPSPEED=false
ARG PIP_INDEX=https://pypi.org/simple
ARG TORCH_INDEX=https://download.pytorch.org/whl/cpu
# Use Bash instead of default /bin/sh
SHELL ["/bin/bash", "-c"]
# Set the working directory
WORKDIR /app
# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
pip config set global.extra-index-url "${PIP_INDEX}" && \
pip install --no-cache-dir --upgrade pip packaging wheel setuptools
# Install the requirements
COPY requirements.txt /app
RUN pip config set global.index-url "$PIP_INDEX" && \
pip config set global.extra-index-url "$TORCH_INDEX" && \
python -m pip install --upgrade pip && \
python -m pip install -r requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application into the image
COPY . /app
# Install the LLaMA Factory
RUN EXTRA_PACKAGES="torch-npu,metrics"; \
if [ "$INSTALL_DEEPSPEED" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},deepspeed"; \
fi; \
pip install -e ".[$EXTRA_PACKAGES]"
# Install LLaMA Factory
RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation
# Set up volumes
VOLUME [ "/root/.cache/huggingface", "/root/.cache/modelscope", "/app/data", "/app/output" ]
# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]
# Expose port 7860 for the LLaMA Board
ENV GRADIO_SERVER_PORT 7860
# Expose port 7860 for LLaMA Board
ENV GRADIO_SERVER_PORT=7860
EXPOSE 7860
# Expose port 8000 for the API service
ENV API_PORT 8000
# Expose port 8000 for API service
ENV API_PORT=8000
EXPOSE 8000
# unset proxy
ENV http_proxy=
ENV https_proxy=
# Reset pip config
RUN pip config unset global.index-url && \
pip config unset global.extra-index-url

View File

@@ -4,14 +4,10 @@ services:
dockerfile: ./docker/docker-npu/Dockerfile
context: ../..
args:
INSTALL_DEEPSPEED: false
PIP_INDEX: https://pypi.org/simple
EXTRAS: torch-npu,metrics
container_name: llamafactory
volumes:
- ../../hf_cache:/root/.cache/huggingface
- ../../ms_cache:/root/.cache/modelscope
- ../../data:/app/data
- ../../output:/app/output
- /usr/local/dcmi:/usr/local/dcmi
- /usr/local/bin/npu-smi:/usr/local/bin/npu-smi
- /usr/local/Ascend/driver:/usr/local/Ascend/driver
@@ -21,6 +17,7 @@ services:
- "8000:8000"
ipc: host
tty: true
# shm_size: "16gb" # ipc: host is set
stdin_open: true
command: bash
devices:

View File

@@ -1,57 +1,71 @@
FROM hardandheavy/transformers-rocm:2.1.0
# https://hub.docker.com/r/rocm/pytorch/tags
ARG BASE_IMAGE=rocm/pytorch:rocm6.4.1_ubuntu22.04_py3.10_pytorch_release_2.6.0
FROM ${BASE_IMAGE}
# Installation arguments
ARG PIP_INDEX=https://pypi.org/simple
ARG EXTRAS=metrics
ARG INSTALL_FLASHATTN=false
ARG HTTP_PROXY=""
ARG PYTORCH_INDEX=https://download.pytorch.org/whl/rocm6.3
# Define environments
ENV MAX_JOBS=4
ENV MAX_JOBS=16
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
ENV DEBIAN_FRONTEND=noninteractive
ENV NODE_OPTIONS=""
ENV PIP_ROOT_USER_ACTION=ignore
ENV http_proxy="${HTTP_PROXY}"
ENV https_proxy="${HTTP_PROXY}"
# Define installation arguments
ARG INSTALL_BNB=false
ARG INSTALL_VLLM=false
ARG INSTALL_DEEPSPEED=false
ARG INSTALL_FLASHATTN=false
ARG PIP_INDEX=https://pypi.org/simple
# Use Bash instead of default /bin/sh
SHELL ["/bin/bash", "-c"]
# Set the working directory
WORKDIR /app
# Change pip source
RUN pip config set global.index-url "${PIP_INDEX}" && \
pip config set global.extra-index-url "${PIP_INDEX}" && \
pip install --no-cache-dir --upgrade pip packaging wheel setuptools
# Reinstall pytorch rocm
RUN pip uninstall -y torch torchvision torchaudio && \
pip install --no-cache-dir --pre torch torchvision torchaudio --index-url "${PYTORCH_INDEX}"
# Install the requirements
COPY requirements.txt /app
RUN pip config set global.index-url "$PIP_INDEX" && \
pip config set global.extra-index-url "$PIP_INDEX" && \
python -m pip install --upgrade pip && \
python -m pip install -r requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application into the image
COPY . /app
# Install the LLaMA Factory
RUN EXTRA_PACKAGES="metrics"; \
if [ "$INSTALL_BNB" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},bitsandbytes"; \
fi; \
if [ "$INSTALL_VLLM" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},vllm"; \
fi; \
if [ "$INSTALL_DEEPSPEED" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},deepspeed"; \
fi; \
pip install -e ".[$EXTRA_PACKAGES]"
# Install LLaMA Factory
RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation
# Rebuild flash attention
RUN pip uninstall -y transformer-engine flash-attn && \
if [ "$INSTALL_FLASHATTN" == "true" ]; then \
pip uninstall -y ninja && pip install ninja && \
RUN if [ "${INSTALL_FLASHATTN}" == "true" ]; then \
pip uninstall -y ninja && \
pip install --no-cache-dir ninja && \
pip install --no-cache-dir flash-attn --no-build-isolation; \
fi
# Set up volumes
VOLUME [ "/root/.cache/huggingface", "/root/.cache/modelscope", "/app/data", "/app/output" ]
# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]
# Expose port 7860 for the LLaMA Board
ENV GRADIO_SERVER_PORT 7860
# Expose port 7860 for LLaMA Board
ENV GRADIO_SERVER_PORT=7860
EXPOSE 7860
# Expose port 8000 for the API service
ENV API_PORT 8000
# Expose port 8000 for API service
ENV API_PORT=8000
EXPOSE 8000
# unset proxy
ENV http_proxy=
ENV https_proxy=
# Reset pip config
RUN pip config unset global.index-url && \
pip config unset global.extra-index-url

View File

@@ -4,23 +4,15 @@ services:
dockerfile: ./docker/docker-rocm/Dockerfile
context: ../..
args:
INSTALL_BNB: false
INSTALL_VLLM: false
INSTALL_DEEPSPEED: false
INSTALL_FLASHATTN: false
PIP_INDEX: https://pypi.org/simple
EXTRAS: metrics
container_name: llamafactory
volumes:
- ../../hf_cache:/root/.cache/huggingface
- ../../ms_cache:/root/.cache/modelscope
- ../../data:/app/data
- ../../output:/app/output
- ../../saves:/app/saves
ports:
- "7860:7860"
- "8000:8000"
ipc: host
tty: true
# shm_size: "16gb" # ipc: host is set
stdin_open: true
command: bash
devices:

View File

@@ -1,3 +1,4 @@
# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -21,14 +22,15 @@ import pandas as pd
_CITATION = """\
@article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and others},
journal={arXiv preprint arXiv:2305.08322},
year={2023}
}
"""
_DESCRIPTION = """\
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels.
C-Eval is a comprehensive Chinese evaluation suite for foundation models.
It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels.
"""
_HOMEPAGE = "https://cevalbenchmark.com"

View File

@@ -1,3 +1,4 @@
# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -21,14 +22,15 @@ import pandas as pd
_CITATION = """\
@article{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and others},
journal={arXiv preprint arXiv:2306.09212},
year={2023}
}
"""
_DESCRIPTION = """\
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge
and reasoning abilities of LLMs within the Chinese language and cultural context.
"""
_HOMEPAGE = "https://github.com/haonan-li/CMMLU"

View File

@@ -1,3 +1,4 @@
# Copyright 2025 the LlamaFactory team.
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -21,14 +22,15 @@ import pandas as pd
_CITATION = """\
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
author={Dan Hendrycks and Collin Burns and others},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
"""
_DESCRIPTION = """\
Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).
Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart,
Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (ICLR 2021).
"""
_HOMEPAGE = "https://github.com/hendrycks/test"
@@ -158,5 +160,4 @@ class MMLU(datasets.GeneratorBasedBuilder):
df = pd.read_csv(filepath, header=None)
df.columns = ["question", "A", "B", "C", "D", "answer"]
for i, instance in enumerate(df.to_dict(orient="records")):
yield i, instance
yield from enumerate(df.to_dict(orient="records"))

View File

@@ -13,6 +13,26 @@ Make sure to execute these commands in the `LLaMA-Factory` directory.
Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices.
By default, LLaMA-Factory uses all visible computing devices.
Basic usage:
```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
Advanced usage:
```bash
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
learning_rate=1e-5 \
logging_steps=1
```
```bash
bash examples/train_lora/llama3_lora_sft.sh
```
## Examples
### LoRA Fine-Tuning
@@ -32,8 +52,7 @@ llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
#### Multimodal Supervised Fine-Tuning
```bash
llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml
llamafactory-cli train examples/train_lora/qwen2vl_lora_sft.yaml
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_sft.yaml
```
#### DPO/ORPO/SimPO Training
@@ -45,7 +64,7 @@ llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
#### Multimodal DPO/ORPO/SimPO Training
```bash
llamafactory-cli train examples/train_lora/qwen2vl_lora_dpo.yaml
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_dpo.yaml
```
#### Reward Modeling
@@ -80,17 +99,11 @@ llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_lora/llama3_lora_predict.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding)
@@ -99,6 +112,12 @@ FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llama
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```
#### Supervised Fine-Tuning with Ray on 4 GPUs
```bash
USE_RAY=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ray.yaml
```
### QLoRA Fine-Tuning
#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended)
@@ -107,6 +126,12 @@ FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
```
#### Supervised Fine-Tuning with 4-bit Bitsandbytes Quantization on Ascend NPU
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_bnb_npu.yaml
```
#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
```bash
@@ -130,26 +155,28 @@ llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
#### Supervised Fine-Tuning on Single Node
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### Elastic and Fault-Tolerant Supervised Fine-Tuning on Multiple Nodes
To launch an elastic job that tolerates up to `MAX_RESTARTS` restarts on failure, run the following command on at least `MIN_NNODES` and at most `MAX_NNODES` nodes. `RDZV_ID` should be set to a unique job ID shared by all nodes participating in the job. See also [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html).
```bash
FORCE_TORCHRUN=1 MIN_NNODES=1 MAX_NNODES=3 MAX_RESTARTS=3 RDZV_ID=llamafactory MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### Multimodal Supervised Fine-Tuning
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_full/llama3_full_predict.yaml
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2_5vl_full_sft.yaml
```
### Merging LoRA Adapters and Quantization
@@ -168,15 +195,28 @@ llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```
### Save Ollama modelfile
```bash
llamafactory-cli export examples/merge_lora/llama3_full_sft.yaml
```
### Inferring LoRA Fine-Tuned Models
#### Use CLI
#### Evaluation using vLLM's Multi-GPU Inference
```bash
python scripts/vllm_infer.py --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct --template llama3 --dataset alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```
#### Use CLI ChatBox
```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```
#### Use Web UI
#### Use Web UI ChatBox
```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
@@ -196,6 +236,12 @@ llamafactory-cli api examples/inference/llama3_lora_sft.yaml
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using APOLLO
```bash
llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using BAdam
```bash
@@ -208,6 +254,12 @@ llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using Muon
```bash
llamafactory-cli train examples/extras/muon/qwen2_full_sft.yaml
```
#### LoRA+ Fine-Tuning
```bash

View File

@@ -13,6 +13,26 @@
使用 `CUDA_VISIBLE_DEVICES`(GPU)或 `ASCEND_RT_VISIBLE_DEVICES`(NPU)选择计算设备。
LLaMA-Factory 默认使用所有可见的计算设备。
基础用法:
```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
高级用法:
```bash
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml \
learning_rate=1e-5 \
logging_steps=1
```
```bash
bash examples/train_lora/llama3_lora_sft.sh
```
## 示例
### LoRA 微调
@@ -32,8 +52,7 @@ llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
#### 多模态指令监督微调
```bash
llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml
llamafactory-cli train examples/train_lora/qwen2vl_lora_sft.yaml
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_sft.yaml
```
#### DPO/ORPO/SimPO 训练
@@ -45,7 +64,7 @@ llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
#### 多模态 DPO/ORPO/SimPO 训练
```bash
llamafactory-cli train examples/train_lora/qwen2vl_lora_dpo.yaml
llamafactory-cli train examples/train_lora/qwen2_5vl_lora_dpo.yaml
```
#### 奖励模型训练
@@ -80,17 +99,19 @@ llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
```
#### 批量预测并计算 BLEU 和 ROUGE 分数
```bash
llamafactory-cli train examples/train_lora/llama3_lora_predict.yaml
```
#### 多机指令监督微调
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### 支持弹性和容错的多机指令监督微调
要启动一个支持弹性和容错的多机指令微调任务,请在每个节点上执行以下命令。弹性节点数量范围为 `MIN_NNODES:MAX_NNODES`,作业最多允许因错误重启 `MAX_RESTARTS` 次。`RDZV_ID` 应设置为一个唯一的作业 ID(由参与该作业的所有节点共享)。更多信息可以参考官方文档 [torchrun](https://docs.pytorch.org/docs/stable/elastic/run.html)。
```bash
FORCE_TORCHRUN=1 MIN_NNODES=1 MAX_NNODES=3 MAX_RESTARTS=3 RDZV_ID=llamafactory MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### 使用 DeepSpeed ZeRO-3 平均分配显存
@@ -99,6 +120,12 @@ FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llama
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```
#### 使用 Ray 在 4 张 GPU 上微调
```bash
USE_RAY=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ray.yaml
```
### QLoRA 微调
#### 基于 4/8 比特 Bitsandbytes/HQQ/EETQ 量化进行指令监督微调(推荐)
@@ -107,6 +134,12 @@ FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
```
#### 在 NPU 上基于 4 比特 Bitsandbytes 量化进行指令监督微调
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_bnb_npu.yaml
```
#### 基于 4/8 比特 GPTQ 量化进行指令监督微调
```bash
@@ -130,26 +163,20 @@ llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
#### 在单机上进行指令监督微调
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### 在多机上进行指令监督微调
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
#### 多模态指令监督微调
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml
```
#### 批量预测并计算 BLEU 和 ROUGE 分数
```bash
llamafactory-cli train examples/train_full/llama3_full_predict.yaml
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2_5vl_full_sft.yaml
```
### 合并 LoRA 适配器与模型量化
@@ -168,15 +195,28 @@ llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```
### 保存 Ollama 配置文件
```bash
llamafactory-cli export examples/merge_lora/llama3_full_sft.yaml
```
### 推理 LoRA 模型
#### 使用命令行接口
#### 使用 vLLM 多卡推理评估
```bash
python scripts/vllm_infer.py --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct --template llama3 --dataset alpaca_en_demo
python scripts/eval_bleu_rouge.py generated_predictions.jsonl
```
#### 使用命令行对话框
```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```
#### 使用浏览器界面
#### 使用浏览器对话框
```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
@@ -196,6 +236,12 @@ llamafactory-cli api examples/inference/llama3_lora_sft.yaml
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```
#### 使用 APOLLO 进行全参数训练
```bash
llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```
#### 使用 BAdam 进行全参数训练
```bash
@@ -208,6 +254,12 @@ llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml
```
#### 使用 Muon 进行全参数训练
```bash
llamafactory-cli train examples/extras/muon/qwen2_full_sft.yaml
```
#### LoRA+ 微调
```bash

View File

@@ -7,14 +7,14 @@ fsdp_config:
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: true # offload may affect training speed
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: fp16 # or bf16
mixed_precision: bf16 # or fp16
num_machines: 1 # the number of nodes
num_processes: 2 # the number of GPUs in all nodes
rdzv_backend: static

View File

@@ -0,0 +1,25 @@
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: true # offload may affect training speed
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16 # or fp16
num_machines: 1 # the number of nodes
num_processes: 2 # the number of GPUs in all nodes
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
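A minimal launch sketch for this FSDP config. The config path `examples/accelerate/fsdp_config.yaml`, the `src/train.py` entry point, and the training YAML are assumptions for illustration; `accelerate launch --config_file` itself is the standard Accelerate flag:
```bash
# Spawns num_processes workers (2 above) with the FSDP settings from this file (paths are illustrative).
accelerate launch \
  --config_file examples/accelerate/fsdp_config.yaml \
  src/train.py examples/train_lora/llama3_lora_sft.yaml
```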

View File

@@ -1,5 +1,6 @@
### model
model_name_or_path: Qwen/Qwen2-1.5B-Instruct
trust_remote_code: true
### method
stage: sft
@@ -10,10 +11,11 @@ use_adam_mini: true
### dataset
dataset: identity,alpaca_en_demo
template: qwen
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/qwen2-1_5b/full/sft
@@ -21,6 +23,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -33,7 +37,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,48 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
use_apollo: true
apollo_layerwise: true # choices: [true, false], use false for DDP training
apollo_target: all
apollo_rank: 128
apollo_scale: 32.0
apollo_scale_type: channel
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1 # use 1 for layerwise apollo
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
pure_bf16: true
ddp_timeout: 180000000
### eval
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
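This appears to be the APOLLO config referenced in the README hunk above; if so, it is launched with:
```bash
llamafactory-cli train examples/extras/apollo/llama3_full_sft.yaml
```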

View File

@@ -1,5 +1,6 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
@@ -15,10 +16,11 @@ badam_verbose: 2
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/full/sft
@@ -26,6 +28,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -36,7 +40,7 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,20 +1,23 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -22,6 +25,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -34,7 +39,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,23 +1,25 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
use_galore: true
galore_layerwise: true
galore_target: mlp,self_attn
galore_layerwise: true # choices: [true, false], use false for DDP training
galore_target: all
galore_rank: 128
galore_scale: 2.0
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/full/sft
@@ -25,10 +27,12 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
gradient_accumulation_steps: 1 # use 1 for layerwise galore
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
@@ -37,7 +41,7 @@ pure_bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,5 +1,6 @@
### model
model_name_or_path: models/llama3-8b-pro
trust_remote_code: true
### method
stage: sft
@@ -12,10 +13,11 @@ use_llama_pro: true
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b-pro/freeze/sft
@@ -23,6 +25,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -35,7 +39,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,20 +1,23 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
loraplus_lr_ratio: 16.0
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -22,6 +25,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -34,7 +39,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,5 +1,6 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
@@ -10,10 +11,11 @@ mixture_of_depths: convert
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b-mod/full/sft
@@ -21,6 +23,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -34,7 +38,7 @@ pure_bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,43 @@
### model
model_name_or_path: Qwen/Qwen2-1.5B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
use_muon: true
### dataset
dataset: identity,alpaca_en_demo
template: qwen
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/qwen2-1_5b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
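Presumably the Muon config added alongside the README entry above; the matching invocation from that hunk is:
```bash
llamafactory-cli train examples/extras/muon/qwen2_full_sft.yaml
```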

View File

@@ -1,6 +1,10 @@
# Batch generation can be SLOW when using this config.
# For faster inference, we recommend using `scripts/vllm_infer.py`.
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
trust_remote_code: true
### method
stage: sft
@@ -10,14 +14,16 @@ finetuning_type: lora
### dataset
eval_dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/predict
overwrite_output_dir: true
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### eval
per_device_eval_batch_size: 1

View File

@@ -1,10 +1,12 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
pissa_init: true
pissa_iter: 16
@@ -13,10 +15,11 @@ pissa_convert: true
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -24,6 +27,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -36,7 +41,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,2 +1,4 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3
infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -0,0 +1,4 @@
model_name_or_path: saves/llama3-8b/full/sft
template: llama3
infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -1,4 +1,5 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora
infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -1,4 +0,0 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3
infer_backend: vllm
vllm_enforce_eager: true

View File

@@ -1,2 +0,0 @@
model_name_or_path: llava-hf/llava-1.5-7b-hf
template: llava

View File

@@ -0,0 +1,4 @@
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
template: qwen2_vl
infer_backend: huggingface # choices: [huggingface, vllm, sglang]
trust_remote_code: true

View File

@@ -1,2 +0,0 @@
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
template: qwen2_vl

View File

@@ -0,0 +1,10 @@
### model
model_name_or_path: saves/llama3-8b/full/sft
template: llama3
trust_remote_code: true
### export
export_dir: output/llama3_full_sft
export_size: 5
export_device: cpu # choices: [cpu, auto]
export_legacy_format: false
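This matches the "Save Ollama modelfile" entry added to the README above; the command shown there is:
```bash
llamafactory-cli export examples/merge_lora/llama3_full_sft.yaml
```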

View File

@@ -1,11 +1,12 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3
trust_remote_code: true
### export
export_dir: models/llama3_gptq
export_dir: output/llama3_gptq
export_quantization_bit: 4
export_quantization_dataset: data/c4_demo.json
export_size: 2
export_device: cpu
export_quantization_dataset: data/c4_demo.jsonl
export_size: 5
export_device: cpu # choices: [cpu, auto]
export_legacy_format: false
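Assuming this is the GPTQ export config from the merge_lora examples, the corresponding command in the README is:
```bash
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```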

View File

@@ -4,10 +4,10 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora
trust_remote_code: true
### export
export_dir: models/llama3_lora_sft
export_size: 2
export_device: cpu
export_dir: output/llama3_lora_sft
export_size: 5
export_device: cpu # choices: [cpu, auto]
export_legacy_format: false
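Likewise for the LoRA merge config; the README invokes it with:
```bash
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```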

View File

@@ -0,0 +1,13 @@
### Note: DO NOT use a quantized model or quantization_bit when merging LoRA adapters
### model
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
adapter_name_or_path: saves/qwen2_5vl-7b/lora/sft
template: qwen2_vl
trust_remote_code: true
### export
export_dir: output/qwen2_5vl_lora_sft
export_size: 5
export_device: cpu # choices: [cpu, auto]
export_legacy_format: false

View File

@@ -1,13 +0,0 @@
### Note: DO NOT use quantized model or quantization_bit when merging lora adapters
### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
adapter_name_or_path: saves/qwen2_vl-7b/lora/sft
template: qwen2_vl
finetuning_type: lora
### export
export_dir: models/qwen2_vl_lora_sft
export_size: 2
export_device: cpu
export_legacy_format: false

View File

@@ -1,23 +0,0 @@
### model
model_name_or_path: saves/llama3-8b/full/sft
### method
stage: sft
do_predict: true
finetuning_type: full
### dataset
eval_dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/predict
overwrite_output_dir: true
### eval
per_device_eval_batch_size: 1
predict_with_generate: true

View File

@@ -1,19 +1,21 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/full/sft
@@ -21,6 +23,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -31,9 +35,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# eval_dataset: alpaca_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,49 @@
### model
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
image_max_pixels: 262144
video_max_pixels: 16384
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
freeze_vision_tower: true
freeze_multi_modal_projector: true
freeze_language_model: false
deepspeed: examples/deepspeed/ds_z3_config.json
### dataset
dataset: mllm_demo,identity,alpaca_en_demo
template: qwen2_vl
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/qwen2_5vl-7b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
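If this is the `qwen2_5vl_full_sft.yaml` config referenced in the README hunk above, it is launched with torchrun via:
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2_5vl_full_sft.yaml
```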

View File

@@ -1,39 +0,0 @@
### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json
### dataset
dataset: mllm_demo,identity
template: qwen2_vl
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/qwen2_vl-7b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,10 +1,12 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: dpo
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
pref_beta: 0.1
pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
@@ -12,10 +14,11 @@ pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/dpo
@@ -23,6 +26,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -33,9 +38,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# eval_dataset: dpo_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,6 +1,7 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
trust_remote_code: true
### method
finetuning_type: lora

View File

@@ -1,20 +1,23 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: kto
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
pref_beta: 0.1
### dataset
dataset: kto_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/kto
@@ -22,6 +25,7 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -34,7 +38,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,20 +1,23 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
reward_model: saves/llama3-8b/lora/reward
trust_remote_code: true
### method
stage: ppo
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/ppo
@@ -22,6 +25,7 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1

View File

@@ -1,18 +1,21 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: pt
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: c4_demo
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/pretrain
@@ -20,6 +23,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -30,9 +35,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# eval_dataset: c4_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,19 +1,22 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: rm
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/reward
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -31,9 +36,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# eval_dataset: dpo_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,36 @@
#!/bin/bash
set -x
MODEL_PATH=meta-llama/Meta-Llama-3-8B-Instruct
llamafactory-cli train \
--model_name_or_path ${MODEL_PATH} \
--trust_remote_code \
--stage sft \
--do_train \
--finetuning_type lora \
--lora_rank 8 \
--lora_target all \
--dataset identity,alpaca_en_demo \
--template llama3 \
--cutoff_len 2048 \
--max_samples 1000 \
--overwrite_cache \
--preprocessing_num_workers 16 \
--dataloader_num_workers 4 \
--output_dir saves/llama3-8b/lora/sft \
--logging_steps 10 \
--save_steps 500 \
--plot_loss \
--overwrite_output_dir \
--save_only_model false \
--report_to none \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 8 \
--learning_rate 1e-4 \
--num_train_epochs 3.0 \
--lr_scheduler_type cosine \
--warmup_ratio 0.1 \
--bf16 \
--ddp_timeout 180000000
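As listed in the README hunk earlier, this script is meant to be run directly from the repository root:
```bash
bash examples/train_lora/llama3_lora_sft.sh
```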

View File

@@ -1,19 +1,22 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -31,9 +36,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# eval_dataset: alpaca_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,20 +1,23 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
deepspeed: examples/deepspeed/ds_z3_config.json
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -22,6 +25,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -32,9 +37,11 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# eval_dataset: alpaca_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,61 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct # or use local absolute path
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
dataset_dir: REMOTE:llamafactory/demo_data # or use local absolute path
template: llama3
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: tmp_dir
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### ray
ray_run_name: llama3_8b_sft_lora
ray_storage_path: ./saves
ray_num_workers: 4 # Number of GPUs to use.
placement_strategy: PACK
resources_per_worker:
GPU: 1
# ray_init_kwargs:
# runtime_env:
# env_vars:
# <YOUR-ENV-VAR-HERE>: "<YOUR-ENV-VAR-HERE>"
# pip:
# - emoji
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
# eval_dataset: alpaca_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
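Given the `ray_run_name` above, this looks like the Ray example config; the README launches it with:
```bash
USE_RAY=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ray.yaml
```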

View File

@@ -1,16 +1,18 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

View File

@@ -0,0 +1,49 @@
# pip install git+https://github.com/hiyouga/transformers.git@llama4_train
### model
model_name_or_path: meta-llama/Llama-4-Scout-17B-16E-Instruct
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset: mllm_demo,identity,alpaca_en_demo
template: llama4
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama4-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
# eval_dataset: alpaca_en_demo
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,39 +0,0 @@
### model
model_name_or_path: llava-hf/llava-1.5-7b-hf
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: mllm_demo
template: llava
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llava1_5-7b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,10 +1,14 @@
### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
image_max_pixels: 262144
video_max_pixels: 16384
trust_remote_code: true
### method
stage: dpo
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
pref_beta: 0.1
pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
@@ -12,17 +16,20 @@ pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
### dataset
dataset: rlhf_v
template: qwen2_vl
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/qwen2_vl-7b/lora/dpo
output_dir: saves/qwen2_5vl-7b/lora/dpo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -33,9 +40,10 @@ lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -0,0 +1,47 @@
### model
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
image_max_pixels: 262144
video_max_pixels: 16384
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: mllm_demo,identity,alpaca_en_demo # video: mllm_video_demo
template: qwen2_vl
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/qwen2_5vl-7b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
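image_max_pixels: 262144 caps each image at a 512 x 512 pixel area, and video_max_pixels: 16384 caps each video frame at 128 x 128. A rough sketch of the area-preserving downscale such a budget implies (illustrative only, not the processor's exact resizing code):

import math

def fit_to_pixel_budget(width: int, height: int, max_pixels: int = 262144) -> tuple[int, int]:
    # Shrink both sides by the same factor so width * height <= max_pixels.
    if width * height <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / (width * height))
    return max(1, int(width * scale)), max(1, int(height * scale))

print(fit_to_pixel_budget(1920, 1080))  # scales a 1080p frame under the budget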

View File

@@ -1,39 +0,0 @@
### model
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: mllm_demo,identity # video: mllm_video_demo
template: qwen2_vl
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/qwen2_vl-7b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,19 +1,22 @@
### model
model_name_or_path: ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -33,7 +38,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,19 +1,22 @@
### model
model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-AWQ
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -33,7 +38,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,20 +1,25 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4
quantization_method: bnb
double_quantization: false
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
deepspeed: examples/deepspeed/ds_z0_config.json
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -22,10 +27,12 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
@@ -34,7 +41,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
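The quantization_bit: 4 / quantization_method: bnb / double_quantization: false block corresponds to a bitsandbytes 4-bit load in transformers. A hand-written rough equivalent (a sketch, not the loader the framework actually constructs):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantization_bit: 4
    bnb_4bit_use_double_quant=False,        # double_quantization: false
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches bf16: true
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)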

View File

@@ -1,19 +1,22 @@
### model
model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-GPTQ
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -21,6 +24,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -33,7 +38,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500

View File

@@ -1,21 +1,24 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4
quantization_method: bitsandbytes # choices: [bitsandbytes (4/8), hqq (2/3/4/5/6/8), eetq (8)]
quantization_bit: 4 # choices: [8 (bnb/hqq/eetq), 4 (bnb/hqq), 3 (hqq), 2 (hqq)]
quantization_method: bnb # choices: [bnb, hqq, eetq]
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/llama3-8b/lora/sft
@@ -23,6 +26,8 @@ logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
@@ -35,7 +40,7 @@ bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
# val_size: 0.1
# per_device_eval_batch_size: 1
# eval_strategy: steps
# eval_steps: 500
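The inline choices above encode which bit-widths each quantization backend accepts. A small helper mirroring that table (illustrative, not project code):

ALLOWED_BITS = {
    "bnb": {4, 8},
    "hqq": {2, 3, 4, 5, 6, 8},
    "eetq": {8},
}

def check_quantization(method: str, bits: int) -> None:
    # Raises if the method/bit combination is not listed in the config comments.
    if bits not in ALLOWED_BITS.get(method, set()):
        raise ValueError(f"{method} does not support {bits}-bit quantization")

check_quantization("bnb", 4)   # ok
check_quantization("eetq", 8)  # ok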

View File

@@ -2,14 +2,53 @@
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "llamafactory"
dynamic = [
"version",
"dependencies",
"optional-dependencies",
"requires-python",
"scripts",
"authors",
"description",
"readme",
"license",
"keywords",
"classifiers"
]
[tool.ruff]
target-version = "py38"
target-version = "py39"
line-length = 119
indent-width = 4
[tool.ruff.lint]
ignore = ["C408", "C901", "E501", "E731", "E741", "W605"]
select = ["C", "E", "F", "I", "W"]
ignore = [
"C408", # collection
"C901", # complex
"E501", # line too long
"E731", # lambda function
"E741", # ambiguous var name
"D100", # no doc public module
"D101", # no doc public class
"D102", # no doc public method
"D103", # no doc public function
"D104", # no doc public package
"D105", # no doc magic method
"D107", # no doc __init__
]
extend-select = [
"C", # complexity
"E", # error
"F", # pyflakes
"I", # isort
"W", # warning
"UP", # pyupgrade
"D", # pydocstyle
"PT009", # pytest assert
"RUF022", # sort __all__
]
[tool.ruff.lint.isort]
lines-after-imports = 2
@@ -22,12 +61,35 @@ known-third-party = [
"peft",
"torch",
"transformers",
"trl"
"trl",
]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.ruff.format]
quote-style = "double"
indent-style = "space"
docstring-code-format = true
skip-magic-trailing-comma = false
line-ending = "auto"
[tool.uv]
conflicts = [
[
{ extra = "torch-npu" },
{ extra = "aqlm" },
],
[
{ extra = "torch-npu" },
{ extra = "vllm" },
],
[
{ extra = "torch-npu" },
{ extra = "sglang" },
],
[
{ extra = "vllm" },
{ extra = "sglang" },
],
]
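With the pydocstyle (D) rules and convention = "google" enabled above, docstrings that do exist must follow the Google layout (the D1xx ignores only exempt missing docstrings), and the py39 target plus pyupgrade favors built-in generics. A small function in the style the linter now expects (illustrative only):

def scale(values: list[float], factor: float = 2.0) -> list[float]:
    """Multiply every value by a constant factor.

    Args:
        values: Numbers to scale.
        factor: Multiplier applied to each value.

    Returns:
        A new list with each element multiplied by factor.
    """
    return [v * factor for v in values]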

View File

@@ -1,21 +1,27 @@
transformers>=4.41.2,<=4.45.0
datasets>=2.16.0,<=2.21.0
accelerate>=0.30.1,<=0.33.0
peft>=0.11.1,<=0.12.0
transformers>=4.45.0,<=4.52.4,!=4.46.*,!=4.47.*,!=4.48.0,!=4.52.0; sys_platform != 'darwin'
transformers>=4.45.0,<=4.51.3,!=4.46.*,!=4.47.*,!=4.48.0,!=4.52.0; sys_platform == 'darwin'
datasets>=2.16.0,<=3.6.0
accelerate>=0.34.0,<=1.7.0
peft>=0.14.0,<=0.15.2
trl>=0.8.6,<=0.9.6
gradio>=4.0.0
pandas>=2.0.0
tokenizers>=0.19.0,<=0.21.1
gradio>=4.38.0,<=5.31.0
scipy
einops
sentencepiece
tiktoken
protobuf
uvicorn
pydantic
fastapi
sse-starlette
matplotlib>=3.7.0
fire
omegaconf
packaging
pyyaml
numpy<2.0.0
pydantic<=2.10.6
pandas>=2.0.0
av
librosa
tyro<0.9.0
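The tightened transformers pin excludes several known-bad releases. A quick runtime assertion of the outer bounds, using the same require_version helper the scripts below already rely on (the != exclusions are omitted here for brevity):

from transformers.utils.versions import require_version

# Mirrors the outer bounds of the non-darwin requirement line only.
require_version(
    "transformers>=4.45.0,<=4.52.4",
    "To fix: pip install 'transformers>=4.45.0,<=4.52.4'",
)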

View File

@@ -0,0 +1,65 @@
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from openai import OpenAI
from transformers.utils.versions import require_version
require_version("openai>=1.5.0", "To fix: pip install openai>=1.5.0")
def main():
client = OpenAI(
api_key="{}".format(os.getenv("API_KEY", "0")),
base_url="http://localhost:{}/v1".format(os.getenv("API_PORT", 8000)),
)
messages = []
messages.append(
{
"role": "user",
"content": [
{"type": "text", "text": "Output the color and number of each box."},
{
"type": "image_url",
"image_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/boxes.png"},
},
],
}
)
result = client.chat.completions.create(messages=messages, model="test")
messages.append(result.choices[0].message)
print("Round 1:", result.choices[0].message.content)
# The image shows a pyramid of colored blocks with numbers on them. Here are the colors and numbers of ...
messages.append(
{
"role": "user",
"content": [
{"type": "text", "text": "What kind of flower is this?"},
{
"type": "image_url",
"image_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-VL/flowers.jpg"},
},
],
}
)
result = client.chat.completions.create(messages=messages, model="test")
messages.append(result.choices[0].message)
print("Round 2:", result.choices[0].message.content)
# The image shows a cluster of forget-me-not flowers. Forget-me-nots are small ...
if __name__ == "__main__":
main()
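The demo above expects an OpenAI-compatible server to already be listening on API_PORT. A minimal sketch of starting one with the project CLI before running the demo; the YAML path is a placeholder, and any inference config for a vision-language model with a matching template should do:

import os
import subprocess

env = {**os.environ, "API_PORT": "8000"}
# Placeholder config path; replace with an actual inference YAML.
subprocess.run(["llamafactory-cli", "api", "examples/inference/qwen2_5vl.yaml"], check=True, env=env)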

View File

@@ -1,5 +1,4 @@
# coding=utf-8
# Copyright 2024 the LlamaFactory team.
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,7 +14,6 @@
import json
import os
from typing import Sequence
from openai import OpenAI
from transformers.utils.versions import require_version
@@ -24,7 +22,7 @@ from transformers.utils.versions import require_version
require_version("openai>=1.5.0", "To fix: pip install openai>=1.5.0")
def calculate_gpa(grades: Sequence[str], hours: Sequence[int]) -> float:
def calculate_gpa(grades: list[str], hours: list[int]) -> float:
grade_to_score = {"A": 4, "B": 3, "C": 2}
total_score, total_hour = 0, 0
for grade, hour in zip(grades, hours):
@@ -35,8 +33,8 @@ def calculate_gpa(grades: Sequence[str], hours: Sequence[int]) -> float:
def main():
client = OpenAI(
api_key="{}".format(os.environ.get("API_KEY", "0")),
base_url="http://localhost:{}/v1".format(os.environ.get("API_PORT", 8000)),
api_key="{}".format(os.getenv("API_KEY", "0")),
base_url="http://localhost:{}/v1".format(os.getenv("API_PORT", 8000)),
)
tools = [
{
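For context, calculate_gpa weights each grade's score (A=4, B=3, C=2) by its credit hours; the accumulation lines are elided by this hunk, so the version below is a plausible self-contained reconstruction for illustration, not a verbatim copy:

def calculate_gpa(grades: list[str], hours: list[int]) -> float:
    grade_to_score = {"A": 4, "B": 3, "C": 2}
    total_score, total_hour = 0, 0
    for grade, hour in zip(grades, hours):
        total_score += grade_to_score[grade] * hour
        total_hour += hour
    return total_score / total_hour

# (4*3 + 3*4 + 4*2 + 2*1) / (3 + 4 + 2 + 1) = 34 / 10 = 3.4
print(calculate_gpa(["A", "B", "A", "C"], [3, 4, 2, 1]))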

View File

@@ -1,5 +1,4 @@
# coding=utf-8
# Copyright 2024 the LlamaFactory team.
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,64 +15,67 @@
import json
import os
from collections import OrderedDict
from typing import Any, Dict
from typing import Any
import fire
import torch
from huggingface_hub import split_torch_state_dict_into_shards
from safetensors.torch import save_file
from tqdm import tqdm
from transformers.modeling_utils import (
SAFE_WEIGHTS_INDEX_NAME,
SAFE_WEIGHTS_NAME,
WEIGHTS_INDEX_NAME,
WEIGHTS_NAME,
shard_checkpoint,
)
from transformers.modeling_utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME
CONFIG_NAME = "config.json"
def save_weight(input_dir: str, output_dir: str, shard_size: str, save_safetensors: bool):
baichuan2_state_dict: Dict[str, torch.Tensor] = OrderedDict()
baichuan2_state_dict: dict[str, torch.Tensor] = OrderedDict()
for filepath in tqdm(os.listdir(input_dir), desc="Load weights"):
if os.path.isfile(os.path.join(input_dir, filepath)) and filepath.endswith(".bin"):
shard_weight = torch.load(os.path.join(input_dir, filepath), map_location="cpu")
shard_weight = torch.load(os.path.join(input_dir, filepath), map_location="cpu", weights_only=True)
baichuan2_state_dict.update(shard_weight)
llama2_state_dict: Dict[str, torch.Tensor] = OrderedDict()
llama_state_dict: dict[str, torch.Tensor] = OrderedDict()
for key, value in tqdm(baichuan2_state_dict.items(), desc="Convert format"):
if "W_pack" in key:
proj_size = value.size(0) // 3
llama2_state_dict[key.replace("W_pack", "q_proj")] = value[:proj_size, :]
llama2_state_dict[key.replace("W_pack", "k_proj")] = value[proj_size : 2 * proj_size, :]
llama2_state_dict[key.replace("W_pack", "v_proj")] = value[2 * proj_size :, :]
llama_state_dict[key.replace("W_pack", "q_proj")] = value[:proj_size, :]
llama_state_dict[key.replace("W_pack", "k_proj")] = value[proj_size : 2 * proj_size, :]
llama_state_dict[key.replace("W_pack", "v_proj")] = value[2 * proj_size :, :]
elif "lm_head" in key:
llama2_state_dict[key] = torch.nn.functional.normalize(value)
llama_state_dict[key] = torch.nn.functional.normalize(value)
else:
llama2_state_dict[key] = value
llama_state_dict[key] = value
weights_name = SAFE_WEIGHTS_NAME if save_safetensors else WEIGHTS_NAME
shards, index = shard_checkpoint(llama2_state_dict, max_shard_size=shard_size, weights_name=weights_name)
for shard_file, shard in tqdm(shards.items(), desc="Save weights"):
filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
state_dict_split = split_torch_state_dict_into_shards(
llama_state_dict, filename_pattern=filename_pattern, max_shard_size=shard_size
)
for shard_file, tensors in tqdm(state_dict_split.filename_to_tensors.items(), desc="Save weights"):
shard = {tensor: llama_state_dict[tensor].contiguous() for tensor in tensors}
if save_safetensors:
save_file(shard, os.path.join(output_dir, shard_file), metadata={"format": "pt"})
else:
torch.save(shard, os.path.join(output_dir, shard_file))
if index is None:
print("Model weights saved in {}".format(os.path.join(output_dir, WEIGHTS_NAME)))
if not state_dict_split.is_sharded:
print(f"Model weights saved in {os.path.join(output_dir, weights_name)}.")
else:
index = {
"metadata": state_dict_split.metadata,
"weight_map": state_dict_split.tensor_to_filename,
}
index_name = SAFE_WEIGHTS_INDEX_NAME if save_safetensors else WEIGHTS_INDEX_NAME
with open(os.path.join(output_dir, index_name), "w", encoding="utf-8") as f:
json.dump(index, f, indent=2, sort_keys=True)
print("Model weights saved in {}".format(output_dir))
print(f"Model weights saved in {output_dir}.")
def save_config(input_dir: str, output_dir: str):
with open(os.path.join(input_dir, CONFIG_NAME), "r", encoding="utf-8") as f:
llama2_config_dict: Dict[str, Any] = json.load(f)
with open(os.path.join(input_dir, CONFIG_NAME), encoding="utf-8") as f:
llama2_config_dict: dict[str, Any] = json.load(f)
llama2_config_dict["architectures"] = ["LlamaForCausalLM"]
llama2_config_dict.pop("auto_map", None)
@@ -82,7 +84,8 @@ def save_config(input_dir: str, output_dir: str):
with open(os.path.join(output_dir, CONFIG_NAME), "w", encoding="utf-8") as f:
json.dump(llama2_config_dict, f, indent=2)
print("Model config saved in {}".format(os.path.join(output_dir, CONFIG_NAME)))
print(f"Model config saved in {os.path.join(output_dir, CONFIG_NAME)}")
def llamafy_baichuan2(
@@ -91,8 +94,8 @@ def llamafy_baichuan2(
shard_size: str = "2GB",
save_safetensors: bool = True,
):
r"""
Converts the Baichuan2-7B model in the same format as LLaMA2-7B.
r"""Convert the Baichuan2-7B model in the same format as LLaMA2-7B.
Usage: python llamafy_baichuan2.py --input_dir input --output_dir output
Converted model: https://huggingface.co/hiyouga/Baichuan2-7B-Base-LLaMAfied
"""

View File

@@ -1,5 +1,4 @@
# coding=utf-8
# Copyright 2024 the LlamaFactory team.
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,20 +15,15 @@
import json
import os
from collections import OrderedDict
from typing import Any, Dict
from typing import Any
import fire
import torch
from huggingface_hub import split_torch_state_dict_into_shards
from safetensors import safe_open
from safetensors.torch import save_file
from tqdm import tqdm
from transformers.modeling_utils import (
SAFE_WEIGHTS_INDEX_NAME,
SAFE_WEIGHTS_NAME,
WEIGHTS_INDEX_NAME,
WEIGHTS_NAME,
shard_checkpoint,
)
from transformers.modeling_utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME
from transformers.utils import check_min_version
@@ -43,76 +37,84 @@ CONFIG_NAME = "config.json"
def save_weight(input_dir: str, output_dir: str, shard_size: str, save_safetensors: bool) -> str:
qwen_state_dict: Dict[str, torch.Tensor] = OrderedDict()
qwen_state_dict: dict[str, torch.Tensor] = OrderedDict()
for filepath in tqdm(os.listdir(input_dir), desc="Load weights"):
if os.path.isfile(os.path.join(input_dir, filepath)) and filepath.endswith(".safetensors"):
with safe_open(os.path.join(input_dir, filepath), framework="pt", device="cpu") as f:
for key in f.keys():
qwen_state_dict[key] = f.get_tensor(key)
llama2_state_dict: Dict[str, torch.Tensor] = OrderedDict()
llama_state_dict: dict[str, torch.Tensor] = OrderedDict()
torch_dtype = None
for key, value in tqdm(qwen_state_dict.items(), desc="Convert format"):
if torch_dtype is None:
torch_dtype = value.dtype
if "wte" in key:
llama2_state_dict["model.embed_tokens.weight"] = value
llama_state_dict["model.embed_tokens.weight"] = value
elif "ln_f" in key:
llama2_state_dict["model.norm.weight"] = value
llama_state_dict["model.norm.weight"] = value
else:
key = key.replace("transformer.h", "model.layers")
if "attn.c_attn" in key:
proj_size = value.size(0) // 3
llama2_state_dict[key.replace("attn.c_attn", "self_attn.q_proj")] = value[:proj_size, ...]
llama2_state_dict[key.replace("attn.c_attn", "self_attn.k_proj")] = value[
llama_state_dict[key.replace("attn.c_attn", "self_attn.q_proj")] = value[:proj_size, ...]
llama_state_dict[key.replace("attn.c_attn", "self_attn.k_proj")] = value[
proj_size : 2 * proj_size, ...
]
llama2_state_dict[key.replace("attn.c_attn", "self_attn.v_proj")] = value[2 * proj_size :, ...]
llama_state_dict[key.replace("attn.c_attn", "self_attn.v_proj")] = value[2 * proj_size :, ...]
elif "attn.c_proj" in key:
llama2_state_dict[key.replace("attn.c_proj", "self_attn.o_proj")] = value
llama2_state_dict[key.replace("attn.c_proj.weight", "self_attn.o_proj.bias")] = torch.zeros_like(
llama_state_dict[key.replace("attn.c_proj", "self_attn.o_proj")] = value
llama_state_dict[key.replace("attn.c_proj.weight", "self_attn.o_proj.bias")] = torch.zeros_like(
value[:, 0]
).squeeze()
elif "ln_1" in key:
llama2_state_dict[key.replace("ln_1", "input_layernorm")] = value
llama_state_dict[key.replace("ln_1", "input_layernorm")] = value
elif "ln_2" in key:
llama2_state_dict[key.replace("ln_2", "post_attention_layernorm")] = value
llama_state_dict[key.replace("ln_2", "post_attention_layernorm")] = value
elif "mlp.w1" in key:
llama2_state_dict[key.replace("mlp.w1", "mlp.up_proj")] = value
llama_state_dict[key.replace("mlp.w1", "mlp.up_proj")] = value
elif "mlp.w2" in key:
llama2_state_dict[key.replace("mlp.w2", "mlp.gate_proj")] = value
llama_state_dict[key.replace("mlp.w2", "mlp.gate_proj")] = value
elif "mlp.c_proj" in key:
llama2_state_dict[key.replace("mlp.c_proj", "mlp.down_proj")] = value
llama_state_dict[key.replace("mlp.c_proj", "mlp.down_proj")] = value
elif "lm_head" in key:
llama2_state_dict[key] = value
llama_state_dict[key] = value
else:
raise KeyError("Unable to process key {}".format(key))
raise KeyError(f"Unable to process key {key}")
weights_name = SAFE_WEIGHTS_NAME if save_safetensors else WEIGHTS_NAME
shards, index = shard_checkpoint(llama2_state_dict, max_shard_size=shard_size, weights_name=weights_name)
for shard_file, shard in tqdm(shards.items(), desc="Save weights"):
filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
state_dict_split = split_torch_state_dict_into_shards(
llama_state_dict, filename_pattern=filename_pattern, max_shard_size=shard_size
)
for shard_file, tensors in tqdm(state_dict_split.filename_to_tensors.items(), desc="Save weights"):
shard = {tensor: llama_state_dict[tensor].contiguous() for tensor in tensors}
if save_safetensors:
save_file(shard, os.path.join(output_dir, shard_file), metadata={"format": "pt"})
else:
torch.save(shard, os.path.join(output_dir, shard_file))
if index is None:
print("Model weights saved in {}".format(os.path.join(output_dir, weights_name)))
if not state_dict_split.is_sharded:
print(f"Model weights saved in {os.path.join(output_dir, weights_name)}.")
else:
index = {
"metadata": state_dict_split.metadata,
"weight_map": state_dict_split.tensor_to_filename,
}
index_name = SAFE_WEIGHTS_INDEX_NAME if save_safetensors else WEIGHTS_INDEX_NAME
with open(os.path.join(output_dir, index_name), "w", encoding="utf-8") as f:
json.dump(index, f, indent=2, sort_keys=True)
print("Model weights saved in {}".format(output_dir))
print(f"Model weights saved in {output_dir}.")
return str(torch_dtype).replace("torch.", "")
def save_config(input_dir: str, output_dir: str, torch_dtype: str):
with open(os.path.join(input_dir, CONFIG_NAME), "r", encoding="utf-8") as f:
qwen_config_dict: Dict[str, Any] = json.load(f)
with open(os.path.join(input_dir, CONFIG_NAME), encoding="utf-8") as f:
qwen_config_dict: dict[str, Any] = json.load(f)
llama2_config_dict: Dict[str, Any] = OrderedDict()
llama2_config_dict: dict[str, Any] = OrderedDict()
llama2_config_dict["architectures"] = ["LlamaForCausalLM"]
llama2_config_dict["hidden_act"] = "silu"
llama2_config_dict["hidden_size"] = qwen_config_dict["hidden_size"]
@@ -135,7 +137,8 @@ def save_config(input_dir: str, output_dir: str, torch_dtype: str):
with open(os.path.join(output_dir, CONFIG_NAME), "w", encoding="utf-8") as f:
json.dump(llama2_config_dict, f, indent=2)
print("Model config saved in {}".format(os.path.join(output_dir, CONFIG_NAME)))
print(f"Model config saved in {os.path.join(output_dir, CONFIG_NAME)}")
def llamafy_qwen(
@@ -144,8 +147,8 @@ def llamafy_qwen(
shard_size: str = "2GB",
save_safetensors: bool = False,
):
r"""
Converts the Qwen models in the same format as LLaMA2.
r"""Convert the Qwen models in the same format as LLaMA2.
Usage: python llamafy_qwen.py --input_dir input --output_dir output
Converted model: https://huggingface.co/hiyouga/Qwen-14B-Chat-LLaMAfied
"""

View File

@@ -0,0 +1,39 @@
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from transformers import Llama4Config, Llama4ForConditionalGeneration, Llama4TextConfig, Llama4VisionConfig
if __name__ == "__main__":
vision_config = Llama4VisionConfig(
hidden_size=1408,
image_size=336,
intermediate_size=5632,
num_attention_heads=16,
num_hidden_layers=4,
vision_output_dim=4096,
)
text_config = Llama4TextConfig(
hidden_size=512,
intermediate_size=1024,
intermediate_size_mlp=1024,
num_hidden_layers=4,
num_attention_heads=8,
num_key_value_heads=2,
head_dim=512 // 8,
num_local_experts=2,
)
config = Llama4Config(vision_config=vision_config, text_config=text_config)
model = Llama4ForConditionalGeneration._from_config(config)
model.save_pretrained("tiny-llama4")
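The script writes a tiny randomly initialized Llama-4 checkpoint for tests. Reloading it afterwards is a one-line sanity check (a sketch):

from transformers import Llama4ForConditionalGeneration

# Load the checkpoint written by the script above and count its parameters.
model = Llama4ForConditionalGeneration.from_pretrained("tiny-llama4")
print(sum(p.numel() for p in model.parameters()), "parameters")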

View File

@@ -0,0 +1,79 @@
# Copyright 2025 the LlamaFactory team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging
import time
import fire
from datasets import load_dataset
try:
import jieba # type: ignore
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu # type: ignore
from rouge_chinese import Rouge # type: ignore
jieba.setLogLevel(logging.CRITICAL)
jieba.initialize()
except ImportError:
print("Please install llamafactory with `pip install -e .[metrics]`.")
raise
def compute_metrics(sample):
hypothesis = list(jieba.cut(sample["predict"]))
reference = list(jieba.cut(sample["label"]))
bleu_score = sentence_bleu(
[list(sample["label"])],
list(sample["predict"]),
smoothing_function=SmoothingFunction().method3,
)
if len(" ".join(hypothesis).split()) == 0 or len(" ".join(reference).split()) == 0:
result = {"rouge-1": {"f": 0.0}, "rouge-2": {"f": 0.0}, "rouge-l": {"f": 0.0}}
else:
rouge = Rouge()
scores = rouge.get_scores(" ".join(hypothesis), " ".join(reference))
result = scores[0]
metric_result = {}
for k, v in result.items():
metric_result[k] = round(v["f"] * 100, 4)
metric_result["bleu-4"] = round(bleu_score * 100, 4)
return metric_result
def main(filename: str):
start_time = time.time()
dataset = load_dataset("json", data_files=filename, split="train")
dataset = dataset.map(compute_metrics, num_proc=8, remove_columns=dataset.column_names)
score_dict = dataset.to_dict()
average_score = {}
for task, scores in sorted(score_dict.items(), key=lambda x: x[0]):
print(f"{task}: {sum(scores) / len(scores):.4f}")
average_score[task] = sum(scores) / len(scores)
with open("predictions_score.json", "w", encoding="utf-8") as f:
json.dump(average_score, f, indent=4)
print(f"\nDone in {time.time() - start_time:.3f}s.\nScore file saved to predictions_score.json")
if __name__ == "__main__":
fire.Fire(main)
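compute_metrics reads a "predict" and a "label" field from every record, so the input is expected to be a JSON-lines prediction dump. A tiny fabricated example of preparing such a file (field names inferred from the function; the file name is arbitrary):

import json

rows = [
    {"predict": "今天天气很好", "label": "今天天气不错"},
    {"predict": "我喜欢机器学习", "label": "我喜欢机器学习"},
]
with open("generated_predictions.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
# Then pass generated_predictions.jsonl as the filename argument to main() above.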

View File

@@ -1,5 +1,4 @@
# coding=utf-8
# Copyright 2024 Tencent Inc. and the LlamaFactory team.
# Copyright 2025 Tencent Inc. and the LlamaFactory team.
#
# This code is inspired by the Tencent's LLaMA-Pro library.
# https://github.com/TencentARC/LLaMA-Pro/blob/main/scripts/block_expansion.py
@@ -23,80 +22,71 @@ from typing import TYPE_CHECKING
import fire
import torch
from huggingface_hub import split_torch_state_dict_into_shards
from safetensors.torch import save_file
from tqdm import tqdm
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from transformers.modeling_utils import (
SAFE_WEIGHTS_INDEX_NAME,
SAFE_WEIGHTS_NAME,
WEIGHTS_INDEX_NAME,
WEIGHTS_NAME,
shard_checkpoint,
)
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, PreTrainedModel
from transformers.modeling_utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME
if TYPE_CHECKING:
from transformers import PretrainedConfig, PreTrainedModel
from transformers import PretrainedConfig
def change_name(name: str, old_index: int, new_index: int) -> str:
return name.replace(".{:d}.".format(old_index), ".{:d}.".format(new_index))
return name.replace(f".{old_index:d}.", f".{new_index:d}.")
def block_expansion(
model_name_or_path: str,
output_dir: str,
num_expand: int,
shard_size: str = "2GB",
shard_size: str = "5GB",
save_safetensors: bool = True,
):
r"""
Performs block expansion for LLaMA, Mistral, Qwen1.5 or Yi models.
r"""Perform block expansion for LLaMA, Mistral, Qwen2 or Yi models.
Usage: python llama_pro.py --model_name_or_path meta-llama/Llama-2-7b-hf --output_dir llama2_pro --num_expand 8
"""
config: "PretrainedConfig" = AutoConfig.from_pretrained(model_name_or_path)
config: PretrainedConfig = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
num_layers = getattr(config, "num_hidden_layers")
if num_layers % num_expand != 0:
raise ValueError(f"`num_layers` {num_layers} should be divisible by `num_expand` {num_expand}.")
setattr(config, "num_hidden_layers", num_layers + num_expand)
config.save_pretrained(output_dir)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
tokenizer.save_pretrained(output_dir)
config: "PretrainedConfig" = AutoConfig.from_pretrained(model_name_or_path) # load the original one
if save_safetensors:
setattr(config, "tie_word_embeddings", False) # safetensors does not allow shared weights
model: "PreTrainedModel" = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
config=config,
torch_dtype="auto",
trust_remote_code=True,
low_cpu_mem_usage=True,
print(f"Expanding model of {num_layers} layers to {num_layers + num_expand} layers.")
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path, torch_dtype="auto", device_map="cpu", trust_remote_code=True, low_cpu_mem_usage=True
)
state_dict = model.state_dict()
if num_layers % num_expand != 0:
raise ValueError("`num_layers` {} should be divisible by `num_expand` {}.".format(num_layers, num_expand))
assert isinstance(model, PreTrainedModel) # type hint
if save_safetensors and getattr(model.config, "tie_word_embeddings", False):
del model.lm_head # safetensors does not allow shared weights
split = num_layers // num_expand
layer_cnt = 0
output_state_dict = OrderedDict()
state_dict = model.state_dict()
output_state_dict: dict[str, torch.Tensor] = OrderedDict()
for i in range(num_layers):
for key, value in state_dict.items():
if ".{:d}.".format(i) in key:
if f".{i:d}." in key:
output_state_dict[change_name(key, i, layer_cnt)] = value
print("Add layer {} copied from layer {}".format(layer_cnt, i))
print(f"Add layer {layer_cnt} copied from layer {i}.")
layer_cnt += 1
if (i + 1) % split == 0:
for key, value in state_dict.items():
if ".{:d}.".format(i) in key:
if f".{i:d}." in key:
if "down_proj" in key or "o_proj" in key:
output_state_dict[change_name(key, i, layer_cnt)] = torch.zeros_like(value)
else:
output_state_dict[change_name(key, i, layer_cnt)] = torch.clone(value)
print("Add layer {} expanded from layer {}".format(layer_cnt, i))
print(f"Add layer {layer_cnt} expanded from layer {i}.")
layer_cnt += 1
for key, value in state_dict.items():
@@ -104,26 +94,34 @@ def block_expansion(
output_state_dict[key] = value
weights_name = SAFE_WEIGHTS_NAME if save_safetensors else WEIGHTS_NAME
shards, index = shard_checkpoint(output_state_dict, max_shard_size=shard_size, weights_name=weights_name)
for shard_file, shard in tqdm(shards.items(), desc="Save weights"):
filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
state_dict_split = split_torch_state_dict_into_shards(
output_state_dict, filename_pattern=filename_pattern, max_shard_size=shard_size
)
for shard_file, tensors in tqdm(state_dict_split.filename_to_tensors.items(), desc="Save weights"):
shard = {tensor: output_state_dict[tensor].contiguous() for tensor in tensors}
if save_safetensors:
save_file(shard, os.path.join(output_dir, shard_file), metadata={"format": "pt"})
else:
torch.save(shard, os.path.join(output_dir, shard_file))
if index is None:
print("Model weights saved in {}".format(os.path.join(output_dir, weights_name)))
if not state_dict_split.is_sharded:
print(f"Model weights saved in {os.path.join(output_dir, weights_name)}.")
else:
index = {
"metadata": state_dict_split.metadata,
"weight_map": state_dict_split.tensor_to_filename,
}
index_name = SAFE_WEIGHTS_INDEX_NAME if save_safetensors else WEIGHTS_INDEX_NAME
with open(os.path.join(output_dir, index_name), "w", encoding="utf-8") as f:
json.dump(index, f, indent=2, sort_keys=True)
print("Model weights saved in {}".format(output_dir))
print(f"Model weights saved in {output_dir}.")
print("- Fine-tune this model with:")
print("model_name_or_path: {}".format(output_dir))
print(f"model_name_or_path: {output_dir}")
print("finetuning_type: freeze")
print("freeze_trainable_layers: {}".format(num_expand))
print(f"freeze_trainable_layers: {num_expand}")
print("use_llama_pro: true")

Some files were not shown because too many files have changed in this diff.