613 Commits

Author SHA1 Message Date
hiyouga
0e88c5754f update parser
Former-commit-id: 5262c8702382ff8bc36a172387bc4c8949f326ea
2024-07-19 01:36:39 +08:00
hiyouga
3fff875f99 release v0.8.3
Former-commit-id: 7180a3b99c3c218dfb0dc607ad5e87219269a678
2024-07-19 01:21:18 +08:00
hiyouga
e2d9ab3591 fix test
Former-commit-id: e86f20134b782c8f5c39ead292f8f7582038eb9e
2024-07-19 01:17:37 +08:00
hiyouga
3db5cf44ea fix unittest
Former-commit-id: 73b56ba30b17a32db694d485135d493315293001
2024-07-19 01:10:30 +08:00
hiyouga
994b9089e9 add unittest
Former-commit-id: 8a1f0c5f922989e08a19c65de0b2c4afd2a5771f
2024-07-19 01:06:27 +08:00
hiyouga
4c1513a845 follow #4878 fix #4684
Former-commit-id: 4715e5c5b8040b21e5f401f7e969b9fd2757d520
2024-07-18 22:06:12 +08:00
hoshi-hiyouga
86e009b504 Merge pull request #4878 from ly863/main
Train on the last turn of the conversation.

Former-commit-id: 1fd39b234e23f762021212c6dfde9701f94e7afa
2024-07-18 22:03:41 +08:00
Shiyu Zhang
c1e1918db1 Only train on the last turn of the conversation
Former-commit-id: ab6198e4c099edeb1a400f58729cd617e8cd8e50
2024-07-18 15:30:25 +08:00
hiyouga
341225a405 fix metrics #4786
Former-commit-id: 7d0c4bd394fc3cba197db1719f1164b9dd66ac21
2024-07-17 00:47:00 +08:00
hiyouga
8c93921952 support batch_eval_metrics, fix #4826
Former-commit-id: 3fe1df17188825f8a32fbe6a1294b4b532ce0c85
2024-07-17 00:33:00 +08:00
hiyouga
45367105fc tiny fix
Former-commit-id: 952807b16cd85fa193a05a83b1a735a6b06abc82
2024-07-15 23:09:50 +08:00
hoshi-hiyouga
df71359069 Merge pull request #4822 from codemayq/test-ci
add github action check to ignore some test cases

Former-commit-id: cf698aa7ab4a35b84f429014c4a5a6cb78b565a6
2024-07-15 23:07:55 +08:00
hoshi-hiyouga
a03d14a9a6 Update test_template.py
Former-commit-id: 470dd92f954a06939d83557fed1201632b0c966b
2024-07-15 23:04:39 +08:00
hoshi-hiyouga
41d7ca395e Update test_template.py
Former-commit-id: 7da56ea6d4c08d555e179d419c245b27e5611b97
2024-07-15 23:00:27 +08:00
hoshi-hiyouga
757573bec1 Merge pull request #4821 from codemayq/feature-eval-split
add "split" as suffix in eval task name

Former-commit-id: 5b6033eef3c2cfd5b47bb67e0d803d8de68f3ff0
2024-07-15 22:59:44 +08:00
hoshi-hiyouga
16d655b119 Update llama3_lora_eval.yaml
Former-commit-id: 946836f9a3f3385c8d3bc6ab82df6edf13ee571c
2024-07-15 22:55:12 +08:00
hoshi-hiyouga
f6483de197 Update test_template.py
Former-commit-id: 352afce20adf26f5e616e5aa4e6c7295a865fb1a
2024-07-15 22:55:05 +08:00
hoshi-hiyouga
da34411bf2 Update test_template.py
Former-commit-id: 0ada82c60ed3df637acc624e8a382765d4c5f743
2024-07-15 22:52:25 +08:00
hiyouga
1891b64072 fix #4820
Former-commit-id: 8c0f8357e1eebee32010fe715554f1136b68b4ba
2024-07-15 22:32:07 +08:00
codingma
a14069acf8 add IN_GITHUB_ACTIONS
Former-commit-id: 3681966a3fe37a1c3d2dd60e54047ced1b2925e5
2024-07-15 10:28:07 +08:00
codingma
0ea708c226 1. change the task name format
2. delete split param in data_args.py


Former-commit-id: 309d30efe24785912ff751fc573677875fc5819e
2024-07-15 09:55:33 +08:00
hiyouga
cb474c7b11 allow computing rouge in training
Former-commit-id: ac67d50673989e8137965f5f718fec67c184f55b
2024-07-15 01:16:26 +08:00
hiyouga
e4d11a117b fix up
Former-commit-id: 43a56cb331fae899ca35b0c312730d4ab79d0c42
2024-07-15 01:04:56 +08:00
hoshi-hiyouga
68365045b4 Merge pull request #4691 from codemayq/feature-suppot-eval-dataset
add eval dataset support

Former-commit-id: 51eb379b44fad0336fc96c329ec98dc4528b9c2c
2024-07-15 01:00:34 +08:00
hoshi-hiyouga
502555b65d Update data_args.py
Former-commit-id: c3cee10294d56a1bc226871819b3a725b09aa67e
2024-07-15 00:56:03 +08:00
hoshi-hiyouga
0bc52c0aae Update preprocess.py
Former-commit-id: da92f4a1b9c12a8e2489b964baba5e2c8e739ef1
2024-07-15 00:55:36 +08:00
hoshi-hiyouga
6bf2663b8e Update parser.py
Former-commit-id: 145687997c86b8785e37dd60fbb9f3a5986730a6
2024-07-15 00:55:21 +08:00
hoshi-hiyouga
d337de668e Update data_utils.py
Former-commit-id: 5c2a0e3b1d1afd2a9219d935d3421fffffc3a2c9
2024-07-15 00:54:34 +08:00
hoshi-hiyouga
ec372f91e9 Update loader.py
Former-commit-id: 860e3eb374947b72dcae88cab0a93ef561e3bfb3
2024-07-15 00:50:06 +08:00
hiyouga
20b1bd8c54 update test template
Former-commit-id: 3cbd0739e8b889ef58a7841959a15b6cd1cb6332
2024-07-15 00:49:34 +08:00
hoshi-hiyouga
ee17741591 Update parser.py
Former-commit-id: b9760df588e64270a140d9111241c62c1cefe781
2024-07-14 23:04:34 +08:00
hoshi-hiyouga
93a6925ec5 Update README.md
Former-commit-id: d9aa6a9437994ac29f3e7a0789ec286f091847d6
2024-07-14 21:27:04 +08:00
hiyouga
47405a8e8a add gemma test
Former-commit-id: f29d9f8665021e506d6237f5337d2b1ac8ede6a8
2024-07-14 18:01:45 +08:00
hiyouga
54ba30c47f fix test
Former-commit-id: 4899309b7cac00573215f6530bfc97d7d87d70b2
2024-07-14 15:44:30 +08:00
hiyouga
b92214f78b fix #4699
slow tokenizer for yi models


Former-commit-id: 4d23a0bcda0c15a903a62eec72d14c584ce020dd
2024-07-14 15:34:22 +08:00
hiyouga
71e4404c0d tiny fix
Former-commit-id: 220d7c1ce15e8013a900e59fe0c7937e38b5c3b5
2024-07-14 10:56:45 +08:00
hiyouga
5ab997d484 fix gemma2 attention
Former-commit-id: aeafc68e169ae0ea5939cc81cb0cf89f0ca044b6
2024-07-13 23:33:45 +08:00
hiyouga
6e7048831b update workflows
Former-commit-id: 47c806da4def1694fb30c1c4cf87ae67903eb9f1
2024-07-13 22:31:15 +08:00
hoshi-hiyouga
97cd932c19 Merge pull request #4781 from hzhaoy/fix-dockerfile-cuda
Fix cuda Dockerfile

Former-commit-id: 56696f6c112f82d514dc3bf93182707297642639
2024-07-13 22:25:32 +08:00
hiyouga
dfc7a7d5cd fix #4792
Former-commit-id: d7547d6b9e4c660897e3ce0f4022e08686c172d5
2024-07-13 22:07:58 +08:00
hoshi-hiyouga
27e13a8371 Merge pull request #4804 from codemayq/fix-examples
tiny fix of examples

Former-commit-id: 1e45486a57a4c559e7deedf077acc0b5b79d631f
2024-07-13 20:49:13 +08:00
hoshi-hiyouga
bf6ad1fbed Update llava1_5.yaml
Former-commit-id: 68c9670be5a6f9d9ec589f13b43c45aa0ed90033
2024-07-13 20:30:06 +08:00
codingma
bc71380b59 1. fix output_dir in llama3_lora_pretrain.yaml
2. add llava1_5.yaml for inference


Former-commit-id: 560928ecf04b7aa351812568d317fcde58bc64d6
2024-07-13 13:16:22 +08:00
hzhaoy
137c87ff60 tiny fix
Former-commit-id: 48be67c41eb394d276b41ca22b28e1ef10af4920
2024-07-12 00:28:44 +08:00
hzhaoy
485b8dc18b fix #4780
Former-commit-id: 15f73c41d556c5f8d989697d774725a88d36f1b4
2024-07-12 00:25:48 +08:00
hzhaoy
875f9078d1 fix #4779
Former-commit-id: 0c8cbf9ea57292de5e222618f86e1fc5379fe008
2024-07-12 00:15:15 +08:00
hoshi-hiyouga
d3bfcbd3af Merge pull request #4700 from marko1616/patch-1
Fix Windows command preview

Former-commit-id: bc49af1e8bde9c396ca4b1e608b7fad02b016ce6
2024-07-10 13:51:50 +08:00
hoshi-hiyouga
e36db692e7 Merge pull request #4746 from yzoaim/fix
fix src/llamafactory/train/callbacks.py

Former-commit-id: 79530736bf6d711ed9366386d43d3fdc84d5b6fc
2024-07-10 13:32:49 +08:00
hoshi-hiyouga
460a40756c Update callbacks.py
Former-commit-id: 526376967deaad73b7ca11063a2e3f0c9a0add98
2024-07-10 13:32:20 +08:00
-.-
18057e14ef fix src/llamafactory/train/callbacks.py
Former-commit-id: c79a21aeaa5462770790887a6826d335e1ded5a2
2024-07-10 12:05:51 +08:00
hiyouga
025c8fe302 fix #4731
Former-commit-id: 99e016ee552a551b52b6fcf3616cb57a5b927715
2024-07-10 11:32:36 +08:00
hiyouga
446129ca7a fix ppo trainer
Former-commit-id: a03b2e5ef0d5d6b1b27753438745385d290cb211
2024-07-10 11:05:45 +08:00
hiyouga
834c4e8ad9 fix #4742
Former-commit-id: ae9cf84347878fcc462f35db941c14e1df104276
2024-07-09 23:24:24 +08:00
hoshi-hiyouga
11d961cf3c Merge pull request #4706 from T-Atlas/main
chore: Update vllm_engine.py to support vllm version >= 0.5.1
Former-commit-id: db17d0c801b78ad9d9f38fcf31df8d7e9c7a0994
2024-07-07 15:50:38 +08:00
hoshi-hiyouga
00b93d8b2f Update packages.py
Former-commit-id: c61ee780f3aed51c31a81e912f25fbfd11dc7edd
2024-07-07 15:48:29 +08:00
Lian Junhong
281fd5bb89 chore: Update vllm_engine.py to support vllm version >= 0.5.1
Former-commit-id: b73c23a88cef237db626a16ab2a30261afd36564
2024-07-07 15:08:12 +08:00
hiyouga
cb10050cb9 fix #4705
Former-commit-id: cfd25c6463bcc263c8672d1de365dd81a028b66a
2024-07-07 13:10:06 +08:00
marko1616
2935c4cddb Update utils.py
On Windows, a multiline command should look like:
command --arg1 xxx `
--arg2 xxx `

Former-commit-id: b189750520af1fccd0485052792eda269692df89
2024-07-06 20:40:13 +08:00
hiyouga
0d6ec70c6f add codegeex4, internlm2.5
Former-commit-id: 349a5fbc934ac289cad44b4e3eb16f458b94710c
2024-07-06 16:16:47 +08:00
hiyouga
74777b4ded update pissa example
Former-commit-id: d01bae6af5f3a619c50247efc8fd83d9f521c6ed
2024-07-06 15:47:32 +08:00
codingma
5f2bd04799 1. add custom eval dataset support
2. merge load dataset and split dataset function


Former-commit-id: 963d97ba07e7efa3a4544c4d077283d9e112b3ad
2024-07-05 15:52:10 +08:00
hiyouga
9a1a5f9778 fix processors
Former-commit-id: 7215f3a8612b570cd322802d14db532927900117
2024-07-05 08:33:22 +08:00
hiyouga
edc8aefa59 fix #4683
Former-commit-id: cbff0ea0db6971f8ced503a2f0cb6bc43e7037ac
2024-07-05 00:58:05 +08:00
hiyouga
ee1c786a12 fix #4674
Former-commit-id: c4f35627b4f0aeb6d4337c3d0e58318c46449f65
2024-07-05 00:41:03 +08:00
hiyouga
a3e4f2b716 Merge branch 'main' of https://github.com/hiyouga/LLaMA-Factory
Former-commit-id: f0b54254b43e93063232f633cdcf1e31d1419bfe
2024-07-04 14:23:37 +08:00
hiyouga
6685f1fb9e fix #4677
Former-commit-id: d4b6715cab2e475dee2ff9f75c637f7611549ec7
2024-07-04 14:22:07 +08:00
hoshi-hiyouga
c89ff328f6 Merge pull request #4673 from hzhaoy/main
tiny fix

Former-commit-id: e0ef32fc3a5469cdd854288c4bb9eb78bb7e27f1
2024-07-04 10:40:41 +08:00
hzhaoy
c6f1bc65c0 tiny fix
Former-commit-id: 8f43ad988a4fd518a708fba53a173596ce2c59dd
2024-07-04 10:20:28 +08:00
hiyouga
0f43c61229 update tests
Former-commit-id: 8c479a4f7fc97dedc9ca9ceea9e0dd3c4d573253
2024-07-04 04:00:12 +08:00
hiyouga
8567dab167 tiny fix
Former-commit-id: 9b211861eba19ae9fc360bc96eeb8ad67ba40c49
2024-07-04 03:47:05 +08:00
hiyouga
0517d7bee5 tiny fix
Former-commit-id: 935703b46d2871ce1014832da067dfe4a50c0610
2024-07-04 03:02:23 +08:00
hiyouga
5bc0b9b31c fix data map for packing
Former-commit-id: ee6f8f926f084a195b2dbbd074e041e6c62c6ef4
2024-07-04 03:01:31 +08:00
hiyouga
3d219b91b9 fix packing for eager/sdpa attn
Former-commit-id: 735a033ceb7f2da6da71d138ea091d8a665411a9
2024-07-04 01:52:43 +08:00
hoshi-hiyouga
a90c6306f8 Merge pull request #4224 from chuan298/main
Implement efficient packing without cross-contamination attention

Former-commit-id: ac382cc9fe4ec483658fd54f07f9a123788ce1b1
2024-07-04 01:18:54 +08:00
hiyouga
60558388ec update packing
Former-commit-id: f3d9c31efa0e64317bdd5b4ed6f78653cf3b5ba4
2024-07-04 01:10:55 +08:00
hoshi-hiyouga
b29a7f8cd6 Update packing.py
Former-commit-id: 3cc11aa88839c5b99cfd83d9225770a33d0eb6fd
2024-07-03 23:36:01 +08:00
hiyouga
a1501591e8 update func name
Former-commit-id: ed93ac0829fa656194fd32e1ac063843f475746f
2024-07-03 23:29:33 +08:00
hiyouga
1408aa078d update arg name
Former-commit-id: 1509ed550b2060f946ce20e3c5a9e5c49e86e3ab
2024-07-03 23:23:24 +08:00
hiyouga
5acaa476d6 update hparams
Former-commit-id: 1c4feac44192b1f540208837f5a530b0d3f5fb37
2024-07-03 23:18:58 +08:00
hiyouga
8ac4f87c91 update ui
Former-commit-id: b1522a3c0951e2e57f873dc6c758aaed33ca374e
2024-07-03 23:13:49 +08:00
hiyouga
14d3001824 test
Former-commit-id: 610eea0c0a0069fdc9148620b15ffffcfef731ea
2024-07-03 23:05:39 +08:00
hiyouga
1ac9389ddc update scripts
Former-commit-id: 6dd6bae598d4d0b7b7d80341e88e313e49a49c00
2024-07-03 20:07:44 +08:00
hiyouga
0b0e27c2f1 fix #4609
unwrap_model_for_generation(reward_model) is necessary for zero3 training


Former-commit-id: c8d5b21700577cae8d6ca03359bcf1762c8b7cb8
2024-07-03 19:45:51 +08:00
hiyouga
fd1199cce4 update readme
Former-commit-id: 4b5f05b791fce9fdc4678598d7be8dc954f9ff73
2024-07-03 19:39:05 +08:00
hoshi-hiyouga
3c9eda8265 Merge pull request #4662 from wzh1994/wzh/readme
Add `LazyLLM` to `Projects using LLaMA Factory` in `README.md`

Former-commit-id: 5ac6334cc40cefda91f5344f60ec0d4757d76df4
2024-07-03 15:51:02 +08:00
wangzhihong
6622cdb43f Update README_zh.md
Former-commit-id: d4036add433989ad88d54895b6f5af90b393c009
2024-07-03 14:59:09 +08:00
wangzhihong
49c28a7dab add LazyLLM to Projects using LLaMA Factory in README.md
Former-commit-id: e1d8587ea120ad356df35431f84af92193fcbaf3
2024-07-03 11:12:20 +08:00
hiyouga
a42671c2d7 tiny fix
Former-commit-id: d944020257f363f38e62de6279b337e399b7c65e
2024-07-03 02:31:50 +08:00
hiyouga
f17ab6ad92 tiny fix
Former-commit-id: 98c4a0af6b3e27ae393d2847f48a01d23d9c8780
2024-07-02 23:06:13 +08:00
hiyouga
ca548af2a2 remove rlhf support for chatglm2&3
Former-commit-id: bcbb5b71961b89719bffb0d202c431c82e6067cc
2024-07-02 23:03:17 +08:00
hiyouga
579997688f upcast logits
Former-commit-id: df61660351c8af30591471807a20869a45bb055a
2024-07-02 22:32:05 +08:00
hiyouga
e6ba7ef3e6 improve rlhf
Former-commit-id: e441780e3db256ca09a442ea9254e7ce16898a07
2024-07-02 22:23:08 +08:00
ancv
20fdf177e8 move efficient_packing from data_args to model_args
Former-commit-id: 7b61659c707480bcf8c802c73e10d12ad5b9b965
2024-07-02 18:37:55 +07:00
hiyouga
f0b01803ea Update bug-report.yml
Former-commit-id: b92636feff19f144850d7741d8f3fa9fcfdb0580
2024-07-02 19:18:56 +08:00
hiyouga
f5c4841ff2 Update bug-report.yml
Former-commit-id: dc04e33b17dfb798eaee137eef08879a0b7114c7
2024-07-02 19:16:12 +08:00
hoshi-hiyouga
1e01283d81 Merge pull request #4651 from hzhaoy/add-telechat-1b
Add TeleChat-1B

Former-commit-id: 2da64665d3da9dc0084bb782c65e88bac21f45a1
2024-07-02 17:56:43 +08:00
hzhaoy
2196448c21 add TeleChat-1B
Former-commit-id: 1b81b43fc483a21e0c2985b98459ecf5137aa4c4
2024-07-02 17:49:04 +08:00
hiyouga
96a81ce89d fix ppo callbacks
Former-commit-id: 54f1c67c2a802b1d8368a6d1837d4c9a729f2695
2024-07-02 17:34:56 +08:00
hoshi-hiyouga
a715490c2a Merge branch 'main' into main
Former-commit-id: 7be442f37d53a0c6324728fa1fa8e2c84d7f0fa5
2024-07-01 21:01:09 +08:00
hiyouga
973cf8e980 tiny fix
Former-commit-id: 5dd2e5c3323f56420b5845a5ed28bcd9d4da5e41
2024-07-01 05:43:17 +08:00
hiyouga
4357e42391 tiny fix
Former-commit-id: 19e43c3a9ed771e991cb273d394ab28fb923f868
2024-07-01 03:55:20 +08:00
hiyouga
884b49e662 add eval acc
Former-commit-id: 7ffde76fbfb6192e3aac31ccc098f31ce89181ae
2024-07-01 03:51:20 +08:00
hiyouga
38c94d2e9c Update label_issue.yml
Former-commit-id: fffa3defdda02ad579cb703c0704f94bad94f21a
2024-07-01 01:29:09 +08:00
hiyouga
67d2eb6b2a fix #4402 #4617
Deprecate reserved_label_len arg


Former-commit-id: 4b6568984c0be4b31e7aa91b7c0d52b7f7b12b0b
2024-07-01 01:19:27 +08:00
hiyouga
b670fb57db update readme
Former-commit-id: 7998d969bf942c91cf41a189e3941f6e04c81c6f
2024-07-01 00:22:52 +08:00
hiyouga
188b4be64d fix #4398 #4592
Former-commit-id: 8c92d268903c00392c8bd75a731daa1f107d6202
2024-06-30 21:28:51 +08:00
hiyouga
889c042ecd update npu docker
Former-commit-id: 2f4d5174205605b8821d4fb626283e07694ecf80
2024-06-30 21:05:31 +08:00
hiyouga
3c4f8eaa55 loose gemma2 attention
Former-commit-id: a0b645017a2de3d58b6cbc71bd91ec96fc7a818b
2024-06-29 01:42:14 +08:00
hiyouga
6a75d57060 update readme
Former-commit-id: 9f809c311af373508cb51b204ae54b047729a9dc
2024-06-28 06:55:19 +08:00
hiyouga
fda2cf677b bf16 by default, gemma2 attns
Gemma2 fine-tuning will not work until https://github.com/huggingface/transformers/pull/31674 is merged


Former-commit-id: da66c32c7be0adc28d2185b23e9f62d56acb961c
2024-06-28 06:00:26 +08:00
hiyouga
cfdf5a5a78 increase pissa_iter for stability
Former-commit-id: 03f8d9b0fb10ae58e7f68508197330d616957899
2024-06-28 03:18:54 +08:00
hiyouga
a1437c15f7 fix docker flashattn
Former-commit-id: 0966f5d4616a3877a6b921976dc39e8799831d36
2024-06-28 01:28:59 +08:00
hiyouga
42e7489713 add Gemma2 models
Former-commit-id: 8fc5a248ecfd6861cb90dac6c14fe89cdeaf8921
2024-06-28 01:26:50 +08:00
hiyouga
024760f866 update examples
Former-commit-id: 66f248b90cfa2b29c73060459b2337b78154c47b
2024-06-28 01:17:07 +08:00
hiyouga
46f0189e88 refactor pissa, improve llamaboard
Former-commit-id: 619556e46c19718f702c97df5d570a2a4c5fb13a
2024-06-28 01:04:24 +08:00
hoshi-hiyouga
edc7498111 Merge pull request #4580 from hzhaoy/bugfix-deepspeed-pissa
Fix bug when using pissa method with deepspeed

Former-commit-id: f260d458f91d6d2b4ed141f64844cded11d5aaad
2024-06-28 00:46:51 +08:00
hiyouga
9103fdf866 fix #4549
Former-commit-id: c9fdef10de737d1f433209812ef73e29cb60490a
2024-06-28 00:41:58 +08:00
hiyouga
95bf795de4 fix docker file
Former-commit-id: 688f02decb1185deb74b26444f7643cab7d355c1
2024-06-27 20:29:16 +08:00
hiyouga
bf99223a80 tiny fix
Former-commit-id: c1a78a3a9f8ab9d57577cee37f9c457d60863ba2
2024-06-27 20:14:48 +08:00
hoshi-hiyouga
9caf9b6f91 Merge pull request #4590 from injet-zhou/main
Exit the process with the subprocess's return code when utilizing the CLI

Former-commit-id: c6a8a7f239d7aa7c74ba09d55a24d4416181cc02
2024-06-27 20:09:36 +08:00
hoshi-hiyouga
727c7b0dc6 Merge pull request #4461 from hzhaoy/feature/support-flash-attn
support flash-attn in Dockerfile

Former-commit-id: e30a47ab5bda9303c8a2eb814caf0dd40c01b125
2024-06-27 20:05:26 +08:00
hoshi-hiyouga
13d184b280 Merge pull request #4561 from hashstone/fix-docker-npu
fix torch-npu dependency

Former-commit-id: 14867c5cf8be3a5e8a91a6533a615d32d298fd67
2024-06-27 19:58:16 +08:00
hoshi-hiyouga
12a91774b0 Update Dockerfile
Former-commit-id: a239f535a64378b74ef34799cd8e2e4a78f00f4c
2024-06-27 19:57:40 +08:00
hoshi-hiyouga
88018000ac Update Dockerfile
Former-commit-id: 7dea6840256472f8aa2c642f11d9e30bfa0fb96f
2024-06-27 19:51:25 +08:00
hoshi-hiyouga
f6eda1c35d Update setup.py
Former-commit-id: 544e1844fb237eed3eb621f4e6e355eac2ff7b85
2024-06-27 19:38:15 +08:00
hoshi-hiyouga
a2ebdbc112 Update README_zh.md
Former-commit-id: 62f2e27f4355aa35c26e1146dbe90fac3b380118
2024-06-27 19:17:52 +08:00
hoshi-hiyouga
e930a42083 Update README.md
Former-commit-id: 01869ccbb5af2704c9d5bfdd4f2ff30978fb466d
2024-06-27 19:17:35 +08:00
hoshi-hiyouga
4b123f49cb Update setup.py
Former-commit-id: 42293ab26f7fd7ffb77b308655ccd47b7c2ffa84
2024-06-27 19:16:46 +08:00
faddddeout
556eca918d Exit the process with the subprocess's return code when utilizing the CLI
Former-commit-id: ab42a4e2501a80fba1704a506bd1209a441570fa
2024-06-27 09:58:00 +00:00
fanjunliang
31fcd03f3c support docker-npu-[amd64|arm64] build
Former-commit-id: 25f16f5e299c94175e62bac9f0da5b47a2bb31b7
2024-06-27 15:25:12 +08:00
hzhaoy
89d9dd5aa5 fix #4579
Former-commit-id: 0fa298ff6a4febea36ea9f11c7594277a77e6e9b
2024-06-27 13:49:57 +08:00
hiyouga
d1aad72826 add quant checks
Former-commit-id: 15bb053e3549739b1a2134640a659b0f35df7de7
2024-06-27 01:12:25 +08:00
hiyouga
8e5b4bddf4 update examples
Former-commit-id: cce238f7d07919b79237bc9ab39265766c20f020
2024-06-27 00:53:33 +08:00
hiyouga
5a7cb9af4e tiny fix
Former-commit-id: c6747a39dbbdda8decaa104499918bc7ac5f02e4
2024-06-27 00:46:41 +08:00
hiyouga
d1cda4ec68 tiny fix
Former-commit-id: 69dac21ed9f07977b4540eb838a0ef93f3d3abc4
2024-06-27 00:36:04 +08:00
hiyouga
8aaf1185a5 support HQQ/EETQ #4113
Former-commit-id: b7cb51ddb394f04fe4646b2c297fc8d918c9979e
2024-06-27 00:29:42 +08:00
hzhaoy
b46bd07119 add flash-attn installation flag in Dockerfile
Former-commit-id: 2535044e95f6df628bd1f01e0eecb02407105d79
2024-06-27 00:13:30 +08:00
hiyouga
08fa707085 improve autogptq integration
Former-commit-id: d68408c7b123b8ff92014db35cac0b24b414a6f4
2024-06-26 22:11:44 +08:00
hiyouga
72ba29d81a fix #4458
Former-commit-id: aab14b15268dbe74ded22549dbd3677474868cbb
2024-06-26 19:52:35 +08:00
hiyouga
cf2dc4c444 fix #4556
Former-commit-id: 81faa9a985c14e83e38f42aedd228edb676b0695
2024-06-26 19:43:16 +08:00
fanjunliang
d82d86e16d fix torch-npu dependency
Former-commit-id: 7c8a8061d0cda6342f6c883748fb6bc6650df9f9
2024-06-26 18:21:42 +08:00
hoshi-hiyouga
bde31d8600 Merge pull request #4544 from MengqingCao/npu
fix docker-compose path

Former-commit-id: a3389661d2f6eb6ff7f67204a6d11b758e08d9c8
2024-06-26 10:19:24 +08:00
MengqingCao
e115d55585 fix docker-compose path
Former-commit-id: 9de3c24aa2a8268be06c8fef8e47f4fb6715c7ec
2024-06-26 02:15:00 +00:00
hzhaoy
daea86e047 support flash-attn in Dockerfile
Former-commit-id: 0dba000aa178f915cea7d75bf0c9d47e671a21d2
2024-06-25 15:13:07 +08:00
hiyouga
a4f69d8914 fix #4456
Former-commit-id: 920f4fa4ca9e08bcf0d16450e085ee0fa8b4e1c5
2024-06-25 14:34:13 +08:00
hiyouga
98f382fda3 lint
Former-commit-id: c9e424d2198b5872ce118a6ab4c109bf73be2bee
2024-06-25 02:55:50 +08:00
hiyouga
cd899734f3 fix test case
Former-commit-id: 6663057cfbdc96385d901a5dfba22cfcd7a61b23
2024-06-25 02:51:49 +08:00
hiyouga
f51b435bcf fix #4432
Former-commit-id: 972a3b469c600bc6528aef3a49b6fdec63d65803
2024-06-25 02:34:04 +08:00
hiyouga
0f82a55305 fix #4379
Former-commit-id: 96bedb4b6445a04ff8b97fb2aadace50b2f882df
2024-06-25 02:31:44 +08:00
hiyouga
9fd7a410bb tiny fix about badam
Former-commit-id: 03f49267c7406e36aee35639f86e6e0383897090
2024-06-25 01:54:53 +08:00
hiyouga
98fb3d015a fix #4419
Former-commit-id: 15069c3ca814d5ac9beec77d914b71cde7ea0f47
2024-06-25 01:51:29 +08:00
hoshi-hiyouga
bfb2ad7c79 Merge pull request #4352 from Ledzy/main
[Enhancement] Support ZeRO-3 when using BAdam

Former-commit-id: 0dc75275efa7d7540b472783a52ea6aeaa503c0b
2024-06-25 01:49:13 +08:00
hiyouga
135bfbf7c1 tiny fix
Former-commit-id: bb57478366a70a0871af30ab31c890f471e27ff4
2024-06-25 01:15:19 +08:00
hoshi-hiyouga
c6b17ebc20 Merge pull request #4355 from MengqingCao/npu
Add docker-npu

Former-commit-id: 2a59806352713764b1e4b7a54942466f972f5fdc
2024-06-25 01:07:43 +08:00
hoshi-hiyouga
b55eb30474 Update README_zh.md
Former-commit-id: f0c95160fea48b8c6291f42beb79ac089177fbb2
2024-06-25 01:06:59 +08:00
hoshi-hiyouga
cec2f1fc00 Update README.md
Former-commit-id: abe7aca5e133960da9200e3a036d9a550f474171
2024-06-25 01:03:38 +08:00
hoshi-hiyouga
8367ec03a7 Update docker-compose.yml
Former-commit-id: e038daf8dfa5d948b70c18469cb5a0be9aec464a
2024-06-25 00:54:28 +08:00
hoshi-hiyouga
37013f8068 Update Dockerfile
Former-commit-id: cdcd9455c19311394e148476a28ca75849c845b2
2024-06-25 00:50:34 +08:00
hoshi-hiyouga
8360544d65 Update docker-compose.yml
Former-commit-id: 56af208074e6af5465183af85367e7edd89d5aa6
2024-06-25 00:46:47 +08:00
hoshi-hiyouga
b5cdef43a1 Update Dockerfile
Former-commit-id: c897a70501707c0f4c432bb8e9a9beeb4e8953a3
2024-06-25 00:46:08 +08:00
hoshi-hiyouga
2e5d521ed8 Update Dockerfile
Former-commit-id: 632681d8ece0eaac59bb364d971435a3bc6665a9
2024-06-24 23:41:35 +08:00
hoshi-hiyouga
dbe35d52d1 Merge pull request #4409 from kno10/patch-2
Print help if no arguments given

Former-commit-id: 94ff749773d9f30ee0c98872ace6b7b542fadeda
2024-06-24 23:21:31 +08:00
hoshi-hiyouga
8bcdb6f52c Update cli.py
Former-commit-id: 9db6126496ec9e834541823715f700f92b3968c7
2024-06-24 23:21:10 +08:00
hoshi-hiyouga
5cfcb8262e Merge pull request #4417 from mMrBun/main
Add tool_format parameter to rewrite templates for different function call formats.

Former-commit-id: 8d1460cad5bff5e4626fdd675046021e0a3d1947
2024-06-24 23:17:55 +08:00
hoshi-hiyouga
0b331a318b Update test_formatter.py
Former-commit-id: d13ef043441734189b05e739dbbebb16077a6f0b
2024-06-24 23:14:36 +08:00
hoshi-hiyouga
5d6cf55208 Update template.py
Former-commit-id: d53517bff6f8734221d7df9982f3bdd4d2eb2cab
2024-06-24 23:12:59 +08:00
hoshi-hiyouga
9a1ec19845 Update loader.py
Former-commit-id: afa59d61844595e6b615227e6bfdc0b16c8015dd
2024-06-24 23:06:18 +08:00
hiyouga
a79e93f335 fix #4410
Former-commit-id: f49adc4ab5eade21d7a9e029212f17688ee9b0cf
2024-06-24 22:34:31 +08:00
hoshi-hiyouga
abcb94a738 Merge pull request #4445 from MengqingCao/label
auto-label npu issue

Former-commit-id: 87f4779e224c4d81c410a287369285b86e992c1f
2024-06-24 22:02:05 +08:00
hoshi-hiyouga
a4f2d5aa6f Update label_issue.yml
Former-commit-id: dc2f7998b4ae9d7223c7c16732d835cea2a28713
2024-06-24 22:01:23 +08:00
hoshi-hiyouga
6b738d1c89 Update label_issue.yml
Former-commit-id: 90785a69c6210c3a02babb12c56fb7900095247c
2024-06-24 21:59:39 +08:00
hoshi-hiyouga
f4c518b370 Merge pull request #4446 from stceum/bug-fix
Bug Fix: `off` is parsed as `False` in yaml file

Former-commit-id: 243478a3d6c08f5677ee57871862694561617f64
2024-06-24 21:41:28 +08:00
hoshi-hiyouga
d475dd3809 Update parser.py
Former-commit-id: 60e605cd9d399bd04432864ede9c84302890eac8
2024-06-24 21:37:42 +08:00
hoshi-hiyouga
5675c47a01 Update test_attention.py
Former-commit-id: c2cc7a0f152aa14fc03ae413f4a9dc06742a29d7
2024-06-24 21:35:34 +08:00
stceum
16e950454e Bug fix: `off` is parsed as `False` in the yaml file; changed it to `disabled` to avoid this.
Former-commit-id: 171289d8e4c111fdca2b100282b64c74a04a4726
2024-06-24 20:39:31 +08:00
MengqingCao
2926265a14 auto-label npu issue
Former-commit-id: d19c9eac783377151e58731723fb7cbb2dab3323
2024-06-24 12:27:00 +00:00
MengqingCao
af2607de1a update docker files
1. add docker-npu (Dockerfile and docker-compose.yml)
2. move cuda docker to docker-cuda and make tiny changes to adapt to the new path


Former-commit-id: 5431c1f18aadb072208efe7fd8e36fdcfbf807c2
2024-06-24 10:57:36 +00:00
hiyouga
826d7808b4 update readme
Former-commit-id: 0775d56ee3cfde34e28a48cbf4a583f4530def19
2024-06-24 18:29:04 +08:00
hiyouga
4c89aca243 update readme
Former-commit-id: a1477208471039d3578980f929f1ca8c2a07aa96
2024-06-24 18:22:12 +08:00
mMrBun
43a065bb07 Add tool_format to overwrite tool formatter template
Former-commit-id: af08971ca50443fd5597e5e4412a3aa17214502f
2024-06-22 02:13:23 +08:00
hiyouga
4513a2cc75 remove dup template
Former-commit-id: 5fec12203b24608af4d4993f44a657eb5a0348e5
2024-06-22 01:31:32 +08:00
hiyouga
f29c1ac6e5 fix api
Former-commit-id: dcbd6d86dfc49f12529b02ec331e3e5c05740061
2024-06-22 00:00:38 +08:00
Erich Schubert
05abe47c8b Print help if no arguments given
Former-commit-id: 08dfb7ec636fd5bfbb30dac9d5fba6e32bfc6728
2024-06-21 09:14:21 +02:00
ancv
6c185a2c57 move configure_packing to llamafactory.model.patcher and fix constants
Former-commit-id: 9c5e972c9c81957f2e9e30bf284ef1c076de9fd0
2024-06-21 00:45:06 +07:00
hiyouga
af2cb33bb2 tiny fix
Former-commit-id: 2d8d47f6126d68db1701ed18fc31310c6f14dd49
2024-06-20 22:56:05 +08:00
hoshi-hiyouga
f16a4a8264 Merge pull request #4382 from MengqingCao/bugfix
upper bound numpy version to <2.0

Former-commit-id: 07a0182cd470132fafe07b8ea1951c9672d0eb87
2024-06-20 10:19:37 +08:00
MengqingCao
b232552d42 update dependencies
Former-commit-id: 25164273d1ca7a8f6f99b41279e342906f6bc4d5
2024-06-20 02:09:47 +00:00
hiyouga
0edccc11a5 improve llamaboard
Former-commit-id: e606ab35c0eced667dde7137c2d72848f264c96c
2024-06-19 23:46:03 +08:00
hiyouga
b2f5c0e0db fix llamaboard abort
Former-commit-id: 9ef609a2c0185040e531dea3829a6f481539cdea
2024-06-19 23:22:28 +08:00
hiyouga
5f5d4c1923 update patcher
Former-commit-id: afb365e515d615dd62f791622450debab60ce5cc
2024-06-19 21:27:00 +08:00
hiyouga
a7d7f79855 set dev version
Former-commit-id: 221665345d97f839ce4ba8d54643da30c71b6083
2024-06-19 21:08:16 +08:00
hiyouga
f0bff18324 Update publish.yml
Former-commit-id: 60b0633e29c9e701aa3813bd1fdc0282bd07f7c8
2024-06-19 20:46:33 +08:00
hiyouga
b631bdc5b7 release v0.8.2
Former-commit-id: 3050bbe51d46acd8473275d2713fc28932e4a3d3
2024-06-19 20:42:09 +08:00
hiyouga
c65f7e9bd5 fix jinja template
Former-commit-id: 0ebf2e2ee23918d28b0cbb20ba456732d6eedfbb
2024-06-19 20:03:50 +08:00
hiyouga
3e0fa4a8da fix templates
Former-commit-id: 6f357d59b73309c5955683008632e7f320e7dcb1
2024-06-19 17:44:05 +08:00
Jonery
fa3150548e Cleaner integration.
Former-commit-id: 26d4b05d424bd71f570195dd433258caf6465d92
2024-06-19 12:29:40 +08:00
hiyouga
235ed85b0f fix bug
Former-commit-id: 412139eaa2fde98ba19e1257d21144382a59f0d6
2024-06-19 03:49:23 +08:00
hiyouga
1ca639a777 use prefix to replace force system
Former-commit-id: 731d9a964f1c3dbfb83825524d697831e691fb9d
2024-06-19 03:39:52 +08:00
hiyouga
e36a994fe6 fix tool formatter, allow parallel function #4362
Former-commit-id: b8f16c976db4ecec1cc8558851c8cbfb6a5b7e9c
2024-06-19 03:23:51 +08:00
hoshi-hiyouga
19ffcfea76 Merge pull request #4173 from mMrBun/main
Implemented the tool_formatter and tool_extractor for glm4 and Qwen2 tool_format

Former-commit-id: 36b02ceed40198ecd5d559ee4ebef9205442ded2
2024-06-19 03:18:55 +08:00
hiyouga
85f3a09c83 tiny fix
Former-commit-id: bb750fa3dde03ec024ae75596ecd4b884cb126c6
2024-06-18 23:32:18 +08:00
hoshi-hiyouga
60b9a9c1fa Merge pull request #4314 from EliMCosta/patch-2
Fix Dockerfile

Former-commit-id: a123a42d98f5c49446762c1d4cfc674d2e4f61b1
2024-06-18 23:30:59 +08:00
hoshi-hiyouga
984e38575c Merge pull request #4309 from EliMCosta/patch-1
Add Magpie and Webinstruct dataset samples

Former-commit-id: 70966de5d4df51a41fef1da5a919dd622aa9c86c
2024-06-18 23:30:19 +08:00
hiyouga
665df5d733 add deepseek coder v2 #4346
Former-commit-id: d83d3846d8e3bf5c40d4b90c24e2c5909ec61864
2024-06-18 22:53:54 +08:00
hiyouga
4bc0bea0e9 fix #4357
Former-commit-id: a6741bba8cebd16a6a3f97a2dc81057d0e27eb39
2024-06-18 22:42:45 +08:00
hoshi-hiyouga
5cfa342f01 Merge pull request #4334 from zzxzz12345/bugfix/add-pandas-versions
Update requirements.txt

Former-commit-id: 219eb5b346bce7e13c2c3511c1638f9dde595787
2024-06-18 22:30:35 +08:00
hoshi-hiyouga
c106cc24e4 Update requirements.txt
Former-commit-id: da8684f9f0b0103d4fa81279343a48ecd0fcc0cd
2024-06-18 22:27:24 +08:00
hiyouga
372da52d4a fix #4335
Former-commit-id: 2ab449adbb160f339a0586edeb846fa311ad8382
2024-06-18 22:08:56 +08:00
Jonery
c7479751e8 add example
Former-commit-id: 75603db09b085e3f703286b87abe041af020e615
2024-06-18 13:50:26 +08:00
Jonery
870a54ac84 fix typo
Former-commit-id: d4bee3716dbf8a84564d5bcc2059172604819f3e
2024-06-18 12:39:26 +08:00
Jonery
12fcfc2b72 Support distributed BAdam.
Former-commit-id: bdcb986e37975911c190a74d3e60bb77aa2033bd
2024-06-18 12:27:47 +08:00
hiyouga
875270b851 lint
Former-commit-id: a19a7ac99af62b6715c96274f6350b124a784331
2024-06-17 22:35:56 +08:00
hiyouga
43fab306b6 update chat engine #4335
Former-commit-id: b163df7de48777e4319c9ccc736b0acdd5f473ed
2024-06-17 19:07:17 +08:00
hiyouga
77242f4169 update readme
Former-commit-id: 07c629f77c3978f339402e578cde1aede3f37699
2024-06-17 18:47:24 +08:00
Jonery
95ae30f678 Merge remote-tracking branch 'upstream/main'
Former-commit-id: 37834a7e79473ccf50ad7f67745b97c274c326d9
2024-06-17 18:44:51 +08:00
Jonery
7408e778ca update gitignore
Former-commit-id: 0068648aee07840cd2a08071e093436aee3f5cb6
2024-06-17 18:29:36 +08:00
Jonery
ba303fd1aa adapt for badam with ds zero3
Former-commit-id: fff2a020ec8713022bd8145f4a7168168ea07ca4
2024-06-17 18:18:10 +08:00
hiyouga
60d9896a70 fix #4326
Former-commit-id: 3c2c45812a720d92f7f5b15b9f03370fe6bf069e
2024-06-17 18:17:48 +08:00
hiyouga
485a80d294 tiny fix
Former-commit-id: 2289436567a7860d25d9da0afb39e4a3e5e83839
2024-06-17 17:47:25 +08:00
胡翀
63bfe9967e Update requirements.txt
add pandas version requirements

Former-commit-id: ed1cf559aa2d02588aacf55a17b439473651f626
2024-06-17 16:45:57 +08:00
Eli Costa
a720b82e63 Fix Dockerfile
Adds the commands to correctly execute LLaMA-Factory servers

Former-commit-id: 22af40f0895a6f88709a495febeca8507d41d989
2024-06-16 19:16:23 -03:00
Eli Costa
d3b0048d8c Update README_zh.md
Fix details tag in datasets menus

Former-commit-id: d79c1bd4806e9ea13115fabebf9da2d19b0a52be
2024-06-16 11:34:31 -03:00
Eli Costa
9a0aca42a5 Update README_zh.md
Add Magpie and WebInstruct to README

Former-commit-id: 6cf5323959fe9500ba06ab28980fcc8f62e1373f
2024-06-16 11:22:06 -03:00
Eli Costa
5e802b0645 Update README.md
Add Magpie and Webinstruct to README

Former-commit-id: 2b32b9263f12605e48e11dce9b5fbb746d790745
2024-06-16 11:19:25 -03:00
ancv
dd7a1dbfae update packing with sdpa and eager attention mode
Former-commit-id: 285636ba3a57a1038b2f2fd4cf909a1ca07708d4
2024-06-16 02:25:47 +07:00
hoshi-hiyouga
ca67b7a568 Update parser.py
Former-commit-id: d10c97193d08bd368aca1a72f0d1d8a96c76765d
2024-06-16 02:57:00 +08:00
hiyouga
76cd879c84 update pr template
Former-commit-id: 0b7c29674fda10c0ac87e0a0c75990feabb5a3de
2024-06-16 01:43:43 +08:00
hoshi-hiyouga
e0c049e590 Merge pull request #4307 from hiyouga/pissa
Support pissa

Former-commit-id: e7c0eefe96540c106162f5d252476b10b97ae696
2024-06-16 01:41:50 +08:00
hiyouga
727943f078 fix tol
Former-commit-id: bdb54bcb477126687db789bd89f2df84e424a2a3
2024-06-16 01:38:44 +08:00
hiyouga
8393b08666 Update tests.yml
Former-commit-id: 82e83615a706293abbf266d11c57caedafdd4c5b
2024-06-16 01:22:23 +08:00
hiyouga
9049f72d2f increase tol
Former-commit-id: c29071445e34aed23123fdf883a4d877744a1b0e
2024-06-16 01:21:06 +08:00
hiyouga
32f45c9e91 support pissa
Former-commit-id: ef8e45f2eaf466c54e9a671512a2974575677b08
2024-06-16 01:08:12 +08:00
hiyouga
05f3a3c944 tiny fix
Former-commit-id: f7f440986b0ae3b38ea9f2da80789629d4f79ea1
2024-06-16 01:06:41 +08:00
ancv
f91fe10985 remove some unused params
Former-commit-id: fef8132c50505a5fb6a246bd024491bd31798a3c
2024-06-15 23:00:55 +07:00
hiyouga
14f7bfc545 use fixture
Former-commit-id: 10761985691b9f934f7689c1f82aa6dd68febcca
2024-06-15 20:06:17 +08:00
hiyouga
7f90b0cd20 add tests
Former-commit-id: 484634ee9c982e82e919ff67d507e0210345182d
2024-06-15 19:51:20 +08:00
hiyouga
308abfec6c add minicpm #4227
Former-commit-id: e1bb18ce60be9a1b203989def30f1b9194286325
2024-06-15 17:58:52 +08:00
hiyouga
bb88536166 add license
Former-commit-id: 69cfc98d7c81756a5ab6bf962240e393e449fef0
2024-06-15 17:54:33 +08:00
hiyouga
d2df3f2d6e update readme
Former-commit-id: a43d302aa79cbfb9b0606e855b4c1af6865d8e68
2024-06-15 05:13:16 +08:00
hiyouga
2abfad9c1f fix #4271
Former-commit-id: 03707e78d29bfcf5d395a64bb38632bdb3ff47ce
2024-06-15 05:11:33 +08:00
hiyouga
2af932d969 disable DP
Former-commit-id: c18fd609d268389f3e65274992045a6c9f8e6c1f
2024-06-15 04:57:19 +08:00
hiyouga
c29fa61a9c fix #4292
Former-commit-id: 4cd4c179d24eab0fcaec2b29b9dd71970f877fe8
2024-06-15 04:47:13 +08:00
hiyouga
a30931fe0f fix #4295
Former-commit-id: 08f657868f9d605b837c5d8c2946a25cc05c8735
2024-06-15 04:34:55 +08:00
hiyouga
3ff9b87012 add test cases
Former-commit-id: 731176ff34cdf0cbf6b41c40c69f4ceb54c2daf6
2024-06-15 04:05:54 +08:00
hiyouga
f4f315fd11 Update README.md
Former-commit-id: f8d701cd3ce2e56f95b4f5439b8b48d5b62e0d2b
2024-06-13 16:02:21 +08:00
hiyouga
530165d9a5 update examples
Former-commit-id: d6bf6231290d79eb3a63e711f18fa711ef18a4f6
2024-06-13 03:26:10 +08:00
hiyouga
dbd1458adf add quant check in webui export tab
Former-commit-id: 6455ca07061ae9858cd7bc996b28be1fde697a3d
2024-06-13 03:19:18 +08:00
hiyouga
dedefecd2b Update llama3_full_sft_ds3.yaml
Former-commit-id: e715af62d521112d9c155cfa91fbb42fa0e77710
2024-06-13 03:16:20 +08:00
hiyouga
46f441dd37 update examples
Former-commit-id: 19681f93db399d695aa8e35f8ec2a9e720875baa
2024-06-13 03:15:06 +08:00
hiyouga
49b58fd6af fix #4221
Former-commit-id: 05a3be4853b941909e7d193c31e8d62c8c5f879b
2024-06-13 02:48:21 +08:00
hiyouga
103a507b39 fix #4209
DeepSpeed ZeRO3 raises an inflight param error when calling model.eval()


Former-commit-id: 4be013f18ea6a35b5a11db98db5f0670ffb41619
2024-06-13 02:25:50 +08:00
hiyouga
0a75224f62 clean code
Former-commit-id: f54cafd5c7f0383370d1a2f357834a61a97397ce
2024-06-13 01:58:16 +08:00
hoshi-hiyouga
04d7629abf Merge pull request #4246 from hzhaoy/adapt-vllm-v0.5.0
adapt vllm==0.5.0

Former-commit-id: 1068e25fc8b89f11cc79b164ee4aef9ce137ad4c
2024-06-13 01:54:02 +08:00
hiyouga
1b6786a21f add neo-sft dataset
Former-commit-id: 34863fa7cb641ceca92e3a2eec914126db537b62
2024-06-13 01:00:56 +08:00
hiyouga
5080f2314c fix lint
Former-commit-id: b170165679317af2b3f03633afac27661b3deb06
2024-06-13 00:48:44 +08:00
hiyouga
41beb7f0a3 fix docker compose usage
Former-commit-id: 59a5bd5d5c8d2a44e2dad26b74e77a45e109c8d6
2024-06-13 00:07:48 +08:00
hzhaoy
799873aa14 adapt vllm==0.5.0
Former-commit-id: 02afd9ff64f23e6707ac739ae1269f41bd70c340
2024-06-12 18:29:03 +08:00
hiyouga
fe2c7eaa93 update readme
Former-commit-id: a436aaa83f0cf12c8f404459e5486f9369d538ec
2024-06-12 17:39:12 +08:00
hiyouga
6392d45ea7 fix #4242
Former-commit-id: cf260e7af03f49aa5e3d6daf3b27738ff9b9bcb8
2024-06-12 16:50:11 +08:00
hoshi-hiyouga
c60ea675d7 Merge pull request #4234 from kimdwkimdw/patch-1
Support vllm==0.5.0

Former-commit-id: 0a9da057c9e7ef11cd709b20263c3d2e4c2d72ed
2024-06-12 16:39:09 +08:00
Arthur Kim
16c7c92396 Support vllm==0.5.0
Former-commit-id: e7a8ffd7af21bc3759f055033ba2209fa7a1be0e
2024-06-12 16:49:12 +09:00
ancv
c7ab302c69 implement efficient packing without cross-contamination attention
Former-commit-id: a64a5305c0da5ef092d4cc26faf829bb44de65d1
2024-06-12 11:56:01 +07:00
hoshi-hiyouga
7598b37543 Merge pull request #4204 from dignfei/main
fix bug: llama3 should use <|end_of_text|> to mark the end of text during continued pre-training

Former-commit-id: e566342636faf0031a0ba5d5dd4fcff8401a2b76
2024-06-11 17:06:10 +08:00
hoshi-hiyouga
cc9717e2f2 Update pretrain.py
Former-commit-id: e2317b2a84149e39fddfd6366be3de23dfb71f82
2024-06-11 17:02:14 +08:00
hiyouga
08f2f99f4b fix deepspeed version
Former-commit-id: 938a69bb07d4de7d82928ff01c582032162c1480
2024-06-11 16:52:36 +08:00
d
77bf3d66c7 After extensive continued pre-training and comparison experiments, we found this bug: the tokenizer.eos_token that llama3 uses during pre-training is '<|end_of_text|>', so each data sample must also be terminated with this token rather than '<|eot_id|>'; otherwise it can easily cause a severe performance drop
Former-commit-id: ef470561f742b16eaa0f99c4cadecd7c84ce6bd2
2024-06-11 16:23:40 +08:00
hiyouga
f14f67f803 Update bug-report.yml
Former-commit-id: bb022cd867ebf2593e40fc6ba43b768603b129a3
2024-06-11 15:40:21 +08:00
hiyouga
820b6e7b32 fix #4198
Former-commit-id: 945d2c6cc73542adf9272ebd9aa332ea2c1c7361
2024-06-11 15:38:38 +08:00
hiyouga
27aece94cf tiny fix
Former-commit-id: c4b2e263d9cefbad0fbc5de72422e4ef8edbcb54
2024-06-11 12:48:53 +08:00
hoshi-hiyouga
3f2508be92 Merge pull request #4191 from iamthebot/al--add_manifest_for_reqs
Add MANIFEST.in so requirements.txt is present in sdist

Former-commit-id: fd6d1c3fce855d1ef7396cf33af9f12eadc5a878
2024-06-11 10:41:15 +08:00
Alfredo Luque
fce11bb386 add manifest so requirements.txt in sdist
Former-commit-id: b501a3c56c51786c3006a2aca15a145641a4556c
2024-06-11 00:07:06 +00:00
hiyouga
2723438531 tiny fix
Former-commit-id: b5e9711ef375cc323fc083e742cccfc974550416
2024-06-11 01:04:16 +08:00
hiyouga
f330b73682 set dev version
Former-commit-id: 16c47cc15226119e33e46ba0f2f6ccb37072257f
2024-06-11 00:50:53 +08:00
hiyouga
0f1e592326 release v0.8.1
Former-commit-id: 875a34f492701d1c644facbe9ede411af2931513
2024-06-11 00:44:26 +08:00
hiyouga
4d7dd0330d fix #4160
The split heads should be concatenated in dim=2


Former-commit-id: 4b3f247f270d44df9fe226cfe0dabfb7fcd2deda
2024-06-11 00:37:17 +08:00
hiyouga
ea2ca2777f fix #4145
Fix the docker image


Former-commit-id: a9838281156fe870bfcde5d1f7afc15264fd4aad
2024-06-11 00:19:17 +08:00
hiyouga
4b2b92fd9a update evaluator
Former-commit-id: bb8661e62481ff7027b8969f3d8a6a17290c9da3
2024-06-10 23:56:00 +08:00
hiyouga
784088db3f fix #2666
Former-commit-id: f121d5c4f94af9f165132c4309cb9bdc8217d985
2024-06-10 21:24:15 +08:00
hoshi-hiyouga
0ecf0d51e3 Merge pull request #4167 from yzoaim/branch
fix README

Former-commit-id: 1a877b0fbf54478dbf905fb3e84bd079a55bb725
2024-06-10 16:24:33 +08:00
mMrBun
bc04ca464a Optimize the handling of QWEN2 in scenarios involving multiple tool calls.
Former-commit-id: 48f870edc96ada40360f7e6e67cbf58805295b33
2024-06-10 02:00:14 +08:00
mMrBun
44829df762 Removed unnecessary comments.
Former-commit-id: 2b81252aa693871098931cd7873ef83ef4922ba5
2024-06-09 18:25:22 +08:00
mMrBun
94ddfa66c0 Merge branch 'hiyouga:main' into main
Former-commit-id: c25734d874a36222e0a540a2c994bbda73008b27
2024-06-09 18:17:24 +08:00
mMrBun
8db8ed5a41 Implemented the tool_formatter and tool_extractor for glm4 tool_format
Former-commit-id: db7fa4490ea7f6966418d2879c895cbc1763b16d
2024-06-09 18:16:15 +08:00
-.-
041ecd0de1 fix README
Former-commit-id: fa30028c0b83c38610b596209493a748b8ca0928
2024-06-08 23:51:56 +08:00
hiyouga
d812249db7 add pr ci
Former-commit-id: 9b05bb8540b946d0c74bf804bcafc4a785d22c47
2024-06-08 21:25:35 +08:00
hiyouga
88528f1a87 Update tests.yml
Former-commit-id: e90f0cc30d6bb819246ccc08935c39e714c179a1
2024-06-08 21:15:36 +08:00
hiyouga
82533114a7 update git workflows
Former-commit-id: 5a3f26bc53433caa98b2a66294becaf156280a4c
2024-06-08 21:11:32 +08:00
hiyouga
6d9fbb3fa9 fix llamafactory-cli env
Former-commit-id: b0515e5f42831b67d1f4d049999ecb68756e66db
2024-06-08 07:15:45 +08:00
hiyouga
9953ae3d03 set dev version
Former-commit-id: 08b7fe1c452cc99264ff0312e310b579590c6a45
2024-06-08 06:46:09 +08:00
hiyouga
c0c387e4db release v0.8.0
Former-commit-id: 004db680b9e3996ec511ee818df6c0c02bf13603
2024-06-08 05:20:54 +08:00
hiyouga
ae60ea15da add ultrafeedback and fineweb #4085 #4132
Former-commit-id: 968e4992e2f2a3ccba73e8668f1654ddc6eb0034
2024-06-08 02:42:34 +08:00
hiyouga
72cd1123a8 fix ci
Former-commit-id: 3f4d293fd861d765edb2040f80d16f99a5e1e3c6
2024-06-08 02:00:44 +08:00
hiyouga
1364190a66 fix ci
Former-commit-id: 95aceebd61d195be5c980a919c12c59b56722898
2024-06-08 01:57:36 +08:00
hiyouga
6d17c59090 add ci
Former-commit-id: 3ea3acdadaa54abe33d93538580196cfdd91ee56
2024-06-08 01:48:30 +08:00
hiyouga
e0f2c0b5dc init unittest
Former-commit-id: 1c6f21cb8878ced043fe0b27c72cad2ef6ee990e
2024-06-08 01:35:58 +08:00
hiyouga
073e34855d Delete .readthedocs.yaml
Former-commit-id: dd3ee514216a9a329519c58d79208040adcad126
2024-06-08 00:58:10 +08:00
hiyouga
ff9ba70bb8 reorganize adapter code
Former-commit-id: b26c2df9d97f4efffccbf7d28de13619b43f10dd
2024-06-08 00:47:23 +08:00
hoshi-hiyouga
adbebb0e3f fix #4139
Former-commit-id: c025a4d74f293c14c2705e68af20a82a84608520
2024-06-08 00:45:02 +08:00
hiyouga
3f6b3eed98 add resume args in webui
Former-commit-id: 1d86ad768b1f36e54b4c2a9f18f6ea5a7df04c90
2024-06-08 00:22:16 +08:00
hiyouga
f45e81e186 fix #4137
Former-commit-id: cdc0d6f5a2e5040e145c82c4801f37bd76529047
2024-06-07 19:16:06 +08:00
hiyouga
ba648fd003 tiny fix
Former-commit-id: 0621bcad1dfbe8ce2464f741d4256c5df2a8d1b6
2024-06-07 05:19:21 +08:00
hiyouga
b0e5a76f4c fix ppo trainer save zero3 model
accelerator.get_state_dict(ds_model) should be called on all ranks


Former-commit-id: 3a0f60f0aa072531e4ae5819ec00c8fa42aa0913
2024-06-07 05:14:19 +08:00
hiyouga
8692796c9b fix ppo in trl 0.8.6
Former-commit-id: 5e0d66a0d80b4bd4a8506e2317209d8fb9d25ff6
2024-06-07 04:48:29 +08:00
hiyouga
d0edcde4ea fix #4120
Former-commit-id: 2a44da678a5e360a9c0f9056397ac9e801329321
2024-06-07 04:18:05 +08:00
hiyouga
8c4c2e580c update data processors
Former-commit-id: 04b138cbcb8b9a72e4bbda6c65843bb459e525e7
2024-06-07 04:15:40 +08:00
hoshi-hiyouga
07f33e7641 Merge pull request #4009 from AlongWY/main
supervised packing with greedy knapsack algorithm

Former-commit-id: 5ded166b39a75a98ded5733678f5a1eab7d4cc71
2024-06-07 03:48:46 +08:00
hoshi-hiyouga
1998c641af Update supervised.py
Former-commit-id: 04b6c2a754e602e0b698cfe6c255c2f2486d8865
2024-06-07 03:42:08 +08:00
hoshi-hiyouga
be1e5f9d62 Update supervised.py
Former-commit-id: 49993c4f4e1f871a22ff0196afe60026b668a4dc
2024-06-07 03:38:23 +08:00
hoshi-hiyouga
fdeec6db52 Update supervised.py
Former-commit-id: 67625b5278a839c12a3e4245f9e90af67d8b11b4
2024-06-07 03:38:04 +08:00
hiyouga
a4d335b42f add qwen2 models
Former-commit-id: 49cb694d02c876e3740a003a8b332349f4310ad3
2024-06-07 00:22:57 +08:00
hiyouga
fcb134e144 rename files
Former-commit-id: e1a8431770fc36c0c9ee7fed4abbc3d7fdcc5efd
2024-06-07 00:09:06 +08:00
hiyouga
a47e24222a add DISABLE_TORCHRUN option
Former-commit-id: bcc574b479c2101438723aadead42743d4378776
2024-06-06 23:44:58 +08:00
hoshi-hiyouga
b96b995620 Merge pull request #4082 from MengqingCao/bugfix
Fix #4077

Former-commit-id: 288028c3fb6bb1b58d1b7f4e8b90108c9bbf27d1
2024-06-06 23:38:40 +08:00
hoshi-hiyouga
c231706aa5 Update cli.py
Former-commit-id: 32190507534adf5f505858b3af2b592ca6568ac7
2024-06-06 23:38:09 +08:00
hiyouga
35b5117a59 fix ppo+zero3 #3108
Former-commit-id: 33a93cc29e3e57bf001515000c0a70c112573dea
2024-06-06 23:30:07 +08:00
hiyouga
80f716bc10 fix torch gc
Former-commit-id: e173799d057598e5692a407601c30d8ce1513461
2024-06-06 20:30:25 +08:00
hiyouga
ca95e98ca0 fix ppo dataset bug #4012
Former-commit-id: 7fc51b2e93698ae5e012566af8481f4d861c873d
2024-06-06 19:03:20 +08:00
hiyouga
d5559461c1 update trainers
Former-commit-id: b7f6c4a171293cf4f3e88f15a811f847342f84ee
2024-06-06 18:45:49 +08:00
hiyouga
f4acd81e2f fix base64 image read #4061
Former-commit-id: 66ccb2a27a04296b4600f2c85f428071bf14eeb0
2024-06-06 17:29:19 +08:00
hiyouga
31feb6e26c update readme
Former-commit-id: cc331fa2d28afe081937c50ea83d63add21d4e3a
2024-06-06 16:59:18 +08:00
hiyouga
7d5c0a069c update readme
Former-commit-id: fb1f709af5199976e63d7188e088e33c75d19bfe
2024-06-06 16:25:42 +08:00
hiyouga
937f49ec3d lora modules: all by default
Former-commit-id: 52c4ae87c7f4312704c31ef26b079b2c5b95ea5f
2024-06-06 03:53:28 +08:00
hiyouga
abc2a73a33 add codestral 22B
Former-commit-id: b011c7f527a57cb1d21c4e2c9631c2fb62bb835e
2024-06-06 03:42:50 +08:00
hiyouga
5e1bf7572c lint
Former-commit-id: 9030501eaef97ea249347198272adf0d709503ec
2024-06-06 03:33:44 +08:00
hoshi-hiyouga
8fdb32d0a3 Merge pull request #4066 from injet-zhou/main
add throughput entry to training log

Former-commit-id: d2816f343f405f3fab09f2a8eade774b886e8f92
2024-06-06 03:32:04 +08:00
hoshi-hiyouga
c709d5f7db Merge pull request #4080 from MengqingCao/npu
Add npu option for model exporting

Former-commit-id: 07fc67193ef6bcb8e8a392aff0c57a2eb36832bf
2024-06-06 03:15:44 +08:00
hoshi-hiyouga
f5b2749ec2 Update export.py
Former-commit-id: 694833c1104d13929d4f181f014a121f25955dc5
2024-06-06 03:14:46 +08:00
hoshi-hiyouga
ee5853c565 Update model_args.py
Former-commit-id: 09c0afd94a8a5f5b45a61b32c983d50e1b9e2941
2024-06-06 03:14:23 +08:00
hoshi-hiyouga
6ec6df8a5f Merge pull request #4053 from hzhaoy/feature/add_select_config_file
Support selecting saved configuration files

Former-commit-id: 568ef3cf2a793f268cbe01c39dec418a13e61ecd
2024-06-06 03:06:03 +08:00
hiyouga
fc95800840 add vllm_dtype arg #3387 #3717
Former-commit-id: a0dd3a6351bb78541d40fec1d2fc457d803c86a4
2024-06-06 02:53:27 +08:00
hiyouga
765715af21 support train from scratch #4033 #4075
Former-commit-id: 1290b9d01077e62f8de7a23637daa2586cc82bfa
2024-06-06 02:43:19 +08:00
hiyouga
639a7f6796 support image input in api #3971 #4061
Former-commit-id: c70aaf763ef22fb83ce3635e8ffd5ec4c89c1cb0
2024-06-06 02:29:55 +08:00
hiyouga
35379c7c0e update train hparams
Former-commit-id: 1ca9fce55b55bf209f4b76152b586731932a3f39
2024-06-06 01:49:20 +08:00
hiyouga
d992f5353f fix setup
Former-commit-id: b2b80d434fcc0c3838d229098e1c21d26632204c
2024-06-06 01:39:02 +08:00
hiyouga
875eef45f3 add llamafactory-cli env
Former-commit-id: 1df077184845ff5f394b9324d46f8c382869e590
2024-06-06 01:28:14 +08:00
hiyouga
556a4aa972 fix #4090
Former-commit-id: d9f15f30a8f4bc64778a5c96baeb6801700d7a2c
2024-06-06 00:50:32 +08:00
MengqingCao
8dc1969111 modify export_device option
Former-commit-id: b2fc4a5499e21a5b9622c2285402efef6e27a74d
2024-06-05 09:37:36 +00:00
hiyouga
b74c229498 fix #4079
Former-commit-id: fda732d7f4616373844c97beff416880260f49db
2024-06-05 16:56:54 +08:00
hiyouga
3dbca466fd update readme
Former-commit-id: 02d34db29a7a35c25711d49e98fd3167a2f4dfe7
2024-06-05 16:32:32 +08:00
MengqingCao
ce6f7fdb82 fix #4077
Former-commit-id: fedbe92f3b56294acc6c49f9a51e369cf2de3ead
2024-06-05 08:03:30 +00:00
hiyouga
7528bc1bc0 support glm-4
Former-commit-id: a10f4718fbf3f3c89dc7eb31cb8e1a46ca6adda5
2024-06-05 15:16:38 +08:00
MengqingCao
9dd5f7d642 add npu for model export
Former-commit-id: ce020b6eb3f35c1db37ee4835e694eddcd0f59b0
2024-06-05 07:06:40 +00:00
faddddeout
99ecb0daaf add throughput entry to log
Former-commit-id: 691f999f64c7bac78761e4354f89816d2f0d46fc
2024-06-04 11:04:29 +00:00
hzhaoy
39d8d7995a add: support selecting saved configuration files and loading training parameters
Former-commit-id: 5c9b17c1dc9093da0ea813642bce9b5c9ae96274
2024-06-04 10:33:43 +08:00
hiyouga
2ac2cde03e tiny fix
Former-commit-id: f9d50501aac1f60a3b445ca3fee9aa60995461ee
2024-06-04 00:31:10 +08:00
hiyouga
aa6c3766de fix #3873
Former-commit-id: 1ac325b4d682bb493573c18bb0b67ceae8d0d372
2024-06-04 00:21:50 +08:00
hiyouga
f4f5d7e3ce fix #3992
Former-commit-id: a48321fbf5196b88a11106cf74a74fbcea2ea50b
2024-06-04 00:17:36 +08:00
hiyouga
efbf6018d3 fix abort in webui DDP mode
Former-commit-id: b90ac72d753b13a3eed9cb8b898fac2f2fe5153f
2024-06-04 00:10:24 +08:00
hoshi-hiyouga
1090bb8bf3 Merge pull request #3987 from injet-zhou/main
Fix can't interrupt training when using multiple GPUs in the webui

Former-commit-id: 455bb158b0e600723d2afaa2070b71178f2f5188
2024-06-04 00:04:07 +08:00
hiyouga
26bc79f971 fix #4043
Former-commit-id: 67af68f4fc5232760c57b3a0ae780628da09db6a
2024-06-03 23:30:37 +08:00
hiyouga
4c1f015eca remove gc warnings in DPO&KTO
Former-commit-id: b649bdcbafb464a638387429b770fe258b41f8af
2024-06-03 22:53:54 +08:00
hoshi-hiyouga
0655a183d3 Merge pull request #4045 from enji-zhou/feature/add_kto
fix KTO Trainer Sampler

Former-commit-id: 8e235beb9cf4939c06ccb753b047326a9839e77f
2024-06-03 22:09:25 +08:00
hoshi-hiyouga
7754024e9b Update trainer.py
Former-commit-id: 8565d4b43db905374c328ae57c71fc226980d14f
2024-06-03 22:08:38 +08:00
enji.zhou
b4913569a8 fix KTO Trainer Sampler
Former-commit-id: 39eb1bfa272011554322e9bb2534f83b68282a70
2024-06-03 21:32:38 +08:00
hoshi-hiyouga
eae9f09ca8 Merge pull request #4006 from Uminosachi/scheduler-kwargs
Set scheduler_specific_kwargs to get_scheduler

Former-commit-id: c6ed1955fd8990ddb960750913c9d8b13fe0ace3
2024-06-03 19:27:53 +08:00
hiyouga
8264e5ceaa update placeholder in issue template
Former-commit-id: 5503a90d7e38273b67129e0b9eb62bd1fd23154f
2024-06-03 19:24:10 +08:00
hoshi-hiyouga
b76f319e45 Merge pull request #4011 from statelesshz/issue-template
Update bug-report.yml

Former-commit-id: 1fbc46f45ae4e673f0b20b5eacab3d81d1053807
2024-06-03 19:20:43 +08:00
hiyouga
82d744716a fix #4005 #4013
Former-commit-id: 8608fa268cde5cddf8d0c6c2eb2cb5fa246c1831
2024-06-03 19:12:29 +08:00
hoshi-hiyouga
1a3764ab8f Merge pull request #4007 from xu-song/patch-3
Update model_args.py

Former-commit-id: d88b3a0f2707bcc964f642d348295b99f7c796f8
2024-06-03 18:54:37 +08:00
hiyouga
d2ede9d393 fix #4022
Former-commit-id: 9541f2f1f1b7d7877eb734f051048e52003a3430
2024-06-03 18:38:36 +08:00
hiyouga
5690f513fc bump versions
transformers 4.37.2->4.41.2
datasets 2.14.3->2.16.0
accelerate 0.27.2->0.30.1
peft 0.10.0->0.11.1
trl 0.8.1->0.8.6


Former-commit-id: 5f1e041f7295bf42a41dd4d9e7f0c42fcc37fed2
2024-06-03 18:29:38 +08:00
hiyouga
123a845209 fix data loader hint
Former-commit-id: 25b56126a11591b0155e2f72b673dd8f45a6c8c9
2024-06-03 18:28:27 +08:00
ylfeng
b1b7d735b3 remove empty line
Former-commit-id: 3164710971a6d6545629f5bf133f98de5ff0991a
2024-05-31 21:43:08 +08:00
ylfeng
230c69f7ce fix eos
Former-commit-id: 6e236c952958cbfe50b5dcb7b8eff6aea8477922
2024-05-31 21:40:41 +08:00
ylfeng
bfc43558ef supervised packing with greedy knapsack algorithm
Former-commit-id: 24d12396c9aabd49da0b08719068f24679111cc6
2024-05-31 15:33:54 +08:00
Xu Song
f2ae2cc04d Update model_args.py
Former-commit-id: f1e018587e5722e41962abd60f74043a3e55f692
2024-05-31 14:35:48 +08:00
statelesshz
6e9c03f958 Update bug-report.yml
Former-commit-id: a8561502360c1e247eeacb46b77ffbcf3387c482
2024-05-31 13:18:18 +08:00
Uminosachi
2696f614a7 Set scheduler_specific_kwargs to get_scheduler
Former-commit-id: f04e70dfab44480ef4c015c06470443237f69ba9
2024-05-31 13:45:39 +09:00
hiyouga
070b944895 update readme
Former-commit-id: 3b92d8c2ddb288b849f38e573ca168cab23315d2
2024-05-30 16:40:17 +08:00
faddddeout
f5f091d390 fix can't interrupt training when using multiple GPUs in the webui
Former-commit-id: a7fb02d52bc202c958490aa7081252be5d9eff50
2024-05-30 08:39:21 +00:00
hiyouga
14ab14a0e6 fix #3837
Former-commit-id: 72965aa3f13a9c085c29781b6790d80d00a545d8
2024-05-30 00:52:26 +08:00
hoshi-hiyouga
4f7c850115 Merge pull request #3829 from seanzhang-zhichen/add_dataset_sample_num
Add dataset sample num

Former-commit-id: ab38cf74ce48ea4f1800e077ca287f2eb9336135
2024-05-30 00:25:45 +08:00
hoshi-hiyouga
391eca66cf Update loader.py
Former-commit-id: 0aa59322906d91c5e385c9c02ebb5dd64ba060f3
2024-05-30 00:20:20 +08:00
hoshi-hiyouga
a67199246d Update loader.py
Former-commit-id: aa7f335e3ad5a78e4ed5f99c120be28e9733ea2e
2024-05-30 00:17:21 +08:00
hoshi-hiyouga
5f67fdaac9 Update loader.py
Former-commit-id: 19d8fd62c18ee3ba0e431fc241f7d315cb716fef
2024-05-30 00:12:12 +08:00
hoshi-hiyouga
05e6fe4287 Update parser.py
Former-commit-id: 310cc11e8c83f16fc5bccc349c38fea347ea9a97
2024-05-30 00:05:20 +08:00
hoshi-hiyouga
91cc571e6e Update README_zh.md
Former-commit-id: 3007d260ed45169583a74497a53b661337dd5f71
2024-05-30 00:04:47 +08:00
hoshi-hiyouga
890926e60c Update README.md
Former-commit-id: 65fb69e388c0a04c15ecd11441e567966f51fae5
2024-05-30 00:04:26 +08:00
hiyouga
87aa332583 better llamaboard
* easily resume from checkpoint
* support full and freeze checkpoints
* faster ui


Former-commit-id: 84cfb2452cc86b037ccddee6e833f8eb7c129fa4
2024-05-29 23:55:38 +08:00
hiyouga
f90c4ca672 fix cohere system
Former-commit-id: 5d629b29e705c8ff8dd4521719d9c0e67a3fe0a2
2024-05-29 20:58:23 +08:00
hiyouga
a922e85a5c fix #3965
Former-commit-id: 37d15ac55d0be0ff47d6a88f07e2d823117a4a36
2024-05-29 20:55:51 +08:00
hiyouga
9a65820592 update readme
Former-commit-id: 440e9de66986ef7736361ce8ec3e23ce68655a56
2024-05-29 18:39:11 +08:00
hoshi-hiyouga
f4e16ae373 Merge pull request #3930 from MengqingCao/npu
Add Ascend npu doc and dependency

Former-commit-id: 7210090e4fc6531b9f6122f104875811a8798185
2024-05-29 18:33:38 +08:00
MengqingCao
e2cfd34da0 update torch-npu version
Former-commit-id: a70d7fcf2967eb30280a1fb845b39db7878f535c
2024-05-29 10:05:11 +00:00
MengqingCao
668dea9706 update cann kernels url
Former-commit-id: 23c65e9d7e8817b5815264e44cbf4a7bcb88d3d7
2024-05-29 09:53:31 +00:00
hoshi-hiyouga
084be442f2 Merge pull request #3958 from hzhaoy/add_telechat_12b_support
add TeleChat-12B/TeleChat-12B-v2 models

Former-commit-id: c228546a09764423ae66966079802022185f7e86
2024-05-29 17:20:53 +08:00
hzhaoy
29cb4a1327 add TeleChat-12B/TeleChat-12B-v2 models
Former-commit-id: e0675385c88af03aaef8d51586c8a282829c4051
2024-05-29 15:00:37 +08:00
hiyouga
81a61134b8 fix hf chat engine
Former-commit-id: 76ce52911690ab0dd8ffa5587127afb4ec942abe
2024-05-29 01:20:07 +08:00
hiyouga
cb1a49aa02 add ds config to webui
Former-commit-id: 66d72b263d36dc81de9f6152077663b613035977
2024-05-29 01:13:17 +08:00
hiyouga
351b4efc6c 10x generate in ppo w/ zero3
https://github.com/huggingface/trl/pull/1483

Former-commit-id: 5dc43ba8b373d8803bc22d88b3d0d95ef8b9c7f8
2024-05-29 00:23:23 +08:00
hiyouga
9b551309de update dpo, kto trainer
Former-commit-id: 4a6cc3c7046f8b27d05ea53ef216bab6fa7ebfaf
2024-05-29 00:14:29 +08:00
hiyouga
9fed4a2ef4 clean kto trainer
Former-commit-id: 76402bd78cbd3a99a544f0ac019468b569b0e1d1
2024-05-28 21:43:26 +08:00
hiyouga
bceac4f554 bump vllm version to 0.4.1
Former-commit-id: a00fd39a4c2f270620711f2bfbad8d460fb4aa89
2024-05-28 21:27:27 +08:00
hiyouga
ae3a88d3a7 update readme
Former-commit-id: bc861f76706df3f643028f1dfc8ec2044b067a08
2024-05-28 19:35:52 +08:00
hiyouga
9138a7a5ba support DDP in webui
Former-commit-id: d059262ff8dc857f597d2657546ec625726a664a
2024-05-28 19:24:22 +08:00
hiyouga
9912b43fcc update readme
Former-commit-id: e2c7de1b5147801b301cfc5da0e2866273da18f5
2024-05-28 16:41:34 +08:00
hiyouga
5ac37555a4 update readme
Former-commit-id: 30ef8ee1e86136f38f105b67f70c417d20552f41
2024-05-28 16:19:56 +08:00
hiyouga
34bdc730a6 fix #3931
Former-commit-id: 47e0072416b545d9718af4fa266a83f747b9a4f7
2024-05-28 13:44:22 +08:00
MengqingCao
e45a9d70fc add Ascend npu doc and dependency
Former-commit-id: 803d9f142a294f8c1e0b4e2046c214b0857ccfd6
2024-05-28 01:33:54 +00:00
hoshi-hiyouga
232b36059c Merge pull request #3925 from Yimi81/feat-fix-yi-template
fix yi template

Former-commit-id: 6caee1eb868b9f7b00578c6608883e89aa232d17
2024-05-27 22:59:32 +08:00
Yimi81
d9fbd675d5 fix yi template
Former-commit-id: b3669c8989c3adda305416245e32e9e5a3b7caac
2024-05-27 13:11:25 +00:00
hiyouga
0206e7b9de tiny fix
Former-commit-id: 4c47b3dcef9e400a1c35fce1ad53619a0a86fe81
2024-05-27 20:54:26 +08:00
hoshi-hiyouga
a886544d3d Merge pull request #3921 from gusye1234/main
Add openchat-3.6-8B support

Former-commit-id: 92e6bba3cab22b7835a68f787caf7992a398978e
2024-05-27 20:52:37 +08:00
hoshi-hiyouga
8c9b929bb0 Update template.py
Former-commit-id: f4dabce0a71c9978e051e70886941b64b928ffe2
2024-05-27 20:51:56 +08:00
hoshi-hiyouga
1bb1ae834e Update template.py
Former-commit-id: af869e4c48eb426c4078415533f6dab89123a9d8
2024-05-27 20:51:26 +08:00
Jianbai Ye
0d9e364a90 add openchat-3.6-8B support
Former-commit-id: b66f39d50d896d7597a1506e67ec210b31c9b700
2024-05-27 20:42:08 +08:00
hiyouga
3b28c003dd fix full/freeze tuning for mllm
Former-commit-id: df5860ddb593d5b82163a585d12160b41dbce0f3
2024-05-27 20:37:57 +08:00
hoshi-hiyouga
48ff9fb150 Merge pull request #3835 from BUAADreamer/main
fix some features in llava-style training

Former-commit-id: fc8583bd17dfb088a52e4d8fa91356b918373b50
2024-05-27 20:23:45 +08:00
hiyouga
c43bc74fe6 support Aya23
Former-commit-id: 071935b90006e2c79e39bb9ee0c5d48c6c910501
2024-05-27 20:23:24 +08:00
BUAADreamer
eaf9cc2195 Merge branch 'hiyouga:main' into main
Former-commit-id: cc1b82bf49b060987392c455fdbfe125ad667ec5
2024-05-27 20:10:58 +08:00
hiyouga
4bd276f58f add llava 1k datasets
Former-commit-id: 345d3355752f4a4dc454696a39f1610fffbbf382
2024-05-27 19:57:33 +08:00
hiyouga
f8cf0d5e5d update dpo examples
Former-commit-id: 69e32a7cb6336ca9a953c379ec794818b3f169bd
2024-05-27 19:56:04 +08:00
BUAADreamer
79bc60db33 Merge branch 'hiyouga:main' into main
Former-commit-id: d89e1f8bf8bad1dd125b4de8fe6c0b2b16411cb5
2024-05-27 19:00:48 +08:00
BUAADreamer
dc7c54067e add option to tune only the LM and mm_proj
Former-commit-id: ba12ca430ec527fbfe4cd1eace0adb5c7712146a
2024-05-27 19:00:15 +08:00
BUAADreamer
932f0d5c20 add regex for tuning only the LM and mm_proj
Former-commit-id: 38d540b3e69bceabafafab524fcfc78aeb05612d
2024-05-27 18:59:00 +08:00
hiyouga
9670f5e41a add phi-3 7b/14b, mistral v0.3 models
Former-commit-id: 86dab182f9710b063f518922ccb49b01aa71c576
2024-05-27 18:20:16 +08:00
hiyouga
97a23e1cbe update readme
Former-commit-id: b8d0170fe0d094acce85dcb5f91775e4685ee055
2024-05-27 18:14:02 +08:00
BUAADreamer
11fcd055ec Merge branch 'hiyouga:main' into main
Former-commit-id: 113be744b3d044fbea3a8654158aa83ddb4599eb
2024-05-27 11:54:01 +08:00
hiyouga
b0d9966663 support SimPO #3900
Former-commit-id: 6b954ce60155cf8334150b795cfc4bb63ca74c8b
2024-05-26 23:46:33 +08:00
BUAADreamer
5c51ab7e1f Merge branch 'hiyouga:main' into main
Former-commit-id: fd5420c43e1414bcd3fadb6239f4e5d42e6ac10e
2024-05-25 14:18:49 +08:00
hiyouga
26f293d587 fix #3853
Former-commit-id: 465a5500bae1f30744d4b9b3db40aaf9171da2cb
2024-05-24 23:29:45 +08:00
seanzhang-zhichen
a3b52fd380 Merge branch 'main' into add_dataset_sample_num
Former-commit-id: 26300127c45f24e63b91f1b0cc73e46c3a936a91
2024-05-24 15:57:47 +08:00
BUAADreamer
27d8706d6d Merge branch 'hiyouga:main' into main
Former-commit-id: a4ce5ee381fd59f6b254ab634af51b6bb54edd97
2024-05-24 09:50:00 +08:00
hiyouga
bf59383783 refactor data preprocessing, fix mllm rlhf
Former-commit-id: 53ff2dd24f9121ea30c95063bb72e49a9b31e980
2024-05-24 04:08:25 +08:00
hoshi-hiyouga
1078611259 Merge pull request #3876 from dongdongqiang2018/main
add adaptation to the 910B image

Former-commit-id: 0708cc8a24589b9f22ad3df6685e57d1da0336f2
2024-05-24 01:54:30 +08:00
hiyouga
e6fc0ac8fe fix paligemma sft
requires transformers>=4.41.1


Former-commit-id: 80b3030569cd606ac0de43e9a682478f5bd7b727
2024-05-24 00:23:40 +08:00
hiyouga
554ca3d8dc fix oom issues in export
Former-commit-id: b7ccc882a192aa1e25b1e5816f875ea304282412
2024-05-23 23:32:45 +08:00
donggang
86dfdf956d adapted to 910B image
Former-commit-id: e095254808aace63a1be878620f683902f51cfb3
2024-05-23 09:48:22 +00:00
BUAADreamer
c0e4475485 Merge branch 'hiyouga:main' into main
Former-commit-id: 4076f52c8ba7da4624a1fb3fa52a7170d1c3171e
2024-05-21 22:18:20 +08:00
hiyouga
2b65f8bd5c fix paligemma sft
Former-commit-id: 60682d04414be37e611d6470618a8d599703942b
2024-05-21 20:03:09 +08:00
hiyouga
09e78272c2 Update README_zh.md
Former-commit-id: 34c4ba6bf9bb89170446fb396aa06ae44d251de0
2024-05-21 18:30:59 +08:00
hiyouga
cccce564bd update wechat
Former-commit-id: 6613349562194b48c5fc57aa68e620b8fa83fc0a
2024-05-21 18:22:32 +08:00
hiyouga
4adec327de fix #3847
Former-commit-id: d206b306ca4eadc8b3d4feaf490ad12f9452e562
2024-05-21 17:53:06 +08:00
BUAADreamer
1f093334d1 support pretraining of llava
Former-commit-id: 6a4c8cf0a6a1674c693b9337f018ff8df7477f8f
2024-05-21 08:57:14 +08:00
hiyouga
e0e8507108 support paligemma
Former-commit-id: 11c27f9bf204d3d6a9ca5bd4f0a19a420160453f
2024-05-21 00:01:22 +08:00
hiyouga
f5962f8128 fix paligemma data preprocess
Former-commit-id: 71b85437301739d9d96d3881d4a34b37c0f69db8
2024-05-20 23:51:32 +08:00
hiyouga
b31d808655 fix paligemma inference
Former-commit-id: 46357b7a677e8ba2e0a7c9d4ec1974abd061569c
2024-05-20 23:36:43 +08:00
hiyouga
247cda4b68 fix #3818
Former-commit-id: 3f366e05a34be224f53c5bf8334e57ae5d316004
2024-05-20 21:43:19 +08:00
hiyouga
e30975e9a2 add kto to webui
Former-commit-id: 6c866f4dbd45e868860be8351d1a65c4e1a4e02b
2024-05-20 21:20:25 +08:00
zhangzc
de9f1583c2 fix conflict
Former-commit-id: 6922b23a748c2459147bf44b96d86daa89f2c96c
2024-05-20 17:10:01 +08:00
hiyouga
ab48653e63 fix chat engines
do not use pop(key, default) since the API assigns None to dict values


Former-commit-id: 3ebbd0b55ea07de2897c27ca54eeab5c3b319419
2024-05-20 00:36:43 +08:00
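The note in the commit above points at a real pitfall: when an API client sends a field explicitly set to null, the parsed request dict contains the key with value None, so `dict.pop(key, default)` returns None rather than the default. A minimal standalone illustration (not code from the repository):

```python
# Why pop(key, default) misbehaves when the API explicitly assigns None.
request = {"temperature": None}  # client sent "temperature": null

value = request.pop("temperature", 0.7)
print(value)  # None -- the default is ignored because the key exists

# Safer pattern: fall back whenever the stored value is None.
request = {"temperature": None}
value = request.get("temperature")
if value is None:
    value = 0.7
print(value)  # 0.7
```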
hoshi-hiyouga
6d7a1e3f8f Merge pull request #3812 from ycjcl868/feat/chat-support-system-prompt
feat: cli chat support system_message
Former-commit-id: 96596990527403e910c81e95e38bf2638541cf31
2024-05-20 00:31:32 +08:00
hoshi-hiyouga
e093dad7cb Update vllm_engine.py
Former-commit-id: 0b8278bd21baf35d3f60c6ed24f110b391c92a47
2024-05-20 00:31:04 +08:00
hoshi-hiyouga
b103a121f0 Update hf_engine.py
Former-commit-id: ce8b902e538c69d89f207db8a43c85072cd70265
2024-05-20 00:30:45 +08:00
hoshi-hiyouga
3578abc7a4 Update generating_args.py
Former-commit-id: 861c146fa7d9cb5b99372464bd068c20fa36415d
2024-05-20 00:29:31 +08:00
hoshi-hiyouga
17d398f419 Update chat_model.py
Former-commit-id: 7736aafdc81d175e9fb484dbb7cae9263120a0fc
2024-05-20 00:29:12 +08:00
hiyouga
3453a8eebb fix jinja template
Former-commit-id: 353561f0e3914de3f81499c4e4b831ae0a6383b6
2024-05-19 23:38:30 +08:00
ycjcl868
77a089c35c feat: cli chat support system_message
Former-commit-id: e3982bff596d01992733687a580c4f41c558061c
2024-05-19 23:17:46 +08:00
hiyouga
516d83c946 fix zero2 high ram usage
Former-commit-id: 01797126eb173250250e31f8e76b69ae0047745d
2024-05-19 21:53:54 +08:00
hiyouga
fd02c9f973 fix hf gen args
Former-commit-id: 491a84976258cbb2a2647922420e2f84de1e38cd
2024-05-19 19:39:32 +08:00
hiyouga
351e80a656 fix envs
Former-commit-id: d5e150cfb98f8216713415564ab386b8320c88cb
2024-05-19 18:27:18 +08:00
hiyouga
4f04e2ed93 fix #3807
Former-commit-id: 08b695969049de8bf9bd3e90b9700736d90385ee
2024-05-19 17:07:57 +08:00
hiyouga
a810d1b98e update readme
Former-commit-id: e0beb67a417b13c818a09bd419d4e20dd44ca842
2024-05-18 23:09:03 +08:00
hiyouga
fbe963a96a safe output path in webui
Former-commit-id: 23f14262e0d54631630c084ba71e0433ea1d4640
2024-05-18 22:42:28 +08:00
hiyouga
d13b8bee8a fix jetmoe z3 block
Former-commit-id: cb00a14d905395c4b8fadb955f0424a4c56668de
2024-05-18 22:28:45 +08:00
hiyouga
0aa072a155 improve data process logger
Former-commit-id: 33d0b012b56dbafc9fff87b821c2d1bf1409dbb5
2024-05-18 22:02:42 +08:00
hiyouga
57dde7c3bc update data readme
Former-commit-id: 22c7335b496e4a673383d5a1e4e60bf2cb4e35b3
2024-05-18 21:37:38 +08:00
hiyouga
6b9003f781 update data readme
Former-commit-id: beb864a9367943d3274cb6057423d1eb9aaf85c4
2024-05-18 21:15:20 +08:00
hiyouga
9c1c59e481 fix #3803
Former-commit-id: 1ef12c95059d14a1717c82ce04e529e7ad6435ed
2024-05-18 16:13:14 +08:00
hoshi-hiyouga
31daec2749 Merge pull request #3799 from hiyouga/dev
improve KTO impl, replace datasets

Former-commit-id: b4cc207855aa1dbb120f7999165e176e649af338
2024-05-18 03:49:13 +08:00
hiyouga
2bff90719b improve KTO impl., replace datasets
Former-commit-id: e56a57ddcf061de6e4acc8679f7dbf0b68364986
2024-05-18 03:44:56 +08:00
hoshi-hiyouga
e4570e28a8 Merge pull request #3785 from enji-zhou/feature/add_kto
add kto

Former-commit-id: f60faa23e23022fd855dac6b1ecbd21e095bccb5
2024-05-18 03:07:18 +08:00
hoshi-hiyouga
d84a730daa Merge pull request #3794 from jue-jue-zi/main
feat: pass the `max_lora_rank` parameter to vLLM backend
Former-commit-id: be839961686a1845f00a56e398a7b3779df8b6e4
2024-05-17 16:17:30 +08:00
hoshi-hiyouga
0fd1a05cec Update model_args.py
Former-commit-id: f40a2fe5334865763e4d513292d359317b7a091b
2024-05-17 16:16:41 +08:00
juejuezi
6373d307ec feat: pass the max_lora_rank parameter to vLLM backend
Former-commit-id: a8756d839405ecb5deabe885cf11d1a61564deee
2024-05-17 16:07:39 +08:00
hiyouga
a32c3a50fc add deepseek v2 lite model
Former-commit-id: 5e864e6b721d8b891b1cc2ca2dcac41babb9eaaf
2024-05-17 13:25:36 +08:00
enji.zhou
66b5634ebf add kto
Former-commit-id: ec51986cf70b0bdd79b8141e45916670fb97a08e
2024-05-17 13:09:17 +08:00
hiyouga
92b3697e2c update badam example #3764
Former-commit-id: a3730fd0a96bab869be6d695031182dabaea8137
2024-05-17 02:21:10 +08:00
hiyouga
969e605c7e better dtype handle in loading
Former-commit-id: 663f0577dd61a1a31191db2c6fbb0c7cea533b21
2024-05-17 02:14:56 +08:00
hiyouga
a3320f26cf update examples
Former-commit-id: 3b5f138155d96b346bda18e465cf60ec7d99e19c
2024-05-17 01:02:00 +08:00
hiyouga
45329d9e3c enable inbrowser in webui
Former-commit-id: 71fdeedb64b2339eb1c740d670b87e0c03dada68
2024-05-17 00:08:56 +08:00
hiyouga
6481321470 add falcon 11b
Former-commit-id: 897acc725edc204fad393cc9616828431b4fa768
2024-05-17 00:08:33 +08:00
hiyouga
efcf5e050d fix examples #3769
Former-commit-id: 80c036beb8d9ddac8f844f1818c9488ded04e86e
2024-05-16 19:12:09 +08:00
hiyouga
dfa686b617 rename package
Former-commit-id: a07ff0c083558cfe6f474d13027642d3052fee08
2024-05-16 18:39:08 +08:00
hiyouga
fe638cf11f set dev version
Former-commit-id: 5e9c72d07c3793cdccbdb8a9f95f1bb5d714e0a3
2024-05-16 02:17:31 +08:00
hiyouga
b2949b88e9 release v0.7.1
Former-commit-id: a4f8adb021b6218d624303b51cd5e93ffa3111a1
2024-05-16 00:57:16 +08:00
hiyouga
538c79fd8f fix #3694
Former-commit-id: 3d1b818cb6a77b7603724fbeb756b468aa74e7ea
2024-05-16 00:35:28 +08:00
hiyouga
437cc20be6 fix #3606
https://github.com/huggingface/peft/pull/1706

Former-commit-id: bf2783e1b6bc207375974c48736d6f82dd293f02
2024-05-15 23:05:02 +08:00
hiyouga
2ac972d6e7 add Yi-VL-34B model
Former-commit-id: 8b3d8a7e3bd8dff27cc72edba1b8a042f6d1929c
2024-05-15 22:58:19 +08:00
hiyouga
4d7f0fbb7a add yi-vl 6b model
Former-commit-id: 35f4041b13a593a6cf1ec6686fa18b38911ad6a4
2024-05-15 20:02:41 +08:00
hiyouga
40e3d3fbdd fix yi vl vllm infer
Former-commit-id: de54e5d7ec06dd7c20ec82c9ff032fc16cd50244
2024-05-15 19:25:48 +08:00
hiyouga
096677b989 add NPU docker images
Former-commit-id: 3b3257962c52f5d1f15ce245fee402c5baddb774
2024-05-15 19:20:11 +08:00
hoshi-hiyouga
7940b968ae Merge pull request #3748 from BUAADreamer/main
Add MLLM YI-VL and save processor config during training

Former-commit-id: 1d3cbd24ccea63d36c27725cdc5ecd02b460b0ed
2024-05-15 16:40:54 +08:00
hoshi-hiyouga
36a4224bf5 Update visual.py
Former-commit-id: f5f13a995c64fc374ad05e26cde8efa6651aefa1
2024-05-15 16:39:57 +08:00
hiyouga
d4d36e157c fix fsdp model loading
Former-commit-id: fc6fe23cc9ae4a920a17e8268a85c1aa4ad16d3b
2024-05-15 16:32:28 +08:00
hoshi-hiyouga
c4f5e49d0d Update patcher.py
Former-commit-id: 4c31a21f2106adcdad100119bad83ecaef0be3f3
2024-05-15 15:37:07 +08:00
hoshi-hiyouga
8e518d6c62 Update template.py
Former-commit-id: a13022166ba691c03f4fea7e9e2927fa446cf681
2024-05-15 14:20:39 +08:00
hoshi-hiyouga
79165100e5 Update trainer.py
Former-commit-id: dd767b20635bb549ce14f9556e1c4fb44b3662c5
2024-05-15 14:13:26 +08:00
hoshi-hiyouga
fc82acbbd8 Update workflow.py
Former-commit-id: 97cfb44bced18b721166ccb5f260098645fc5318
2024-05-15 14:13:01 +08:00
BUAADreamer
aead3ca8e5 rm extra import
Former-commit-id: 031215019e3d7727b1c7cc87a44e1cf1eb2853ec
2024-05-15 12:48:18 +08:00
BUAADreamer
b12679ad59 cast dtype in mm_proj
Former-commit-id: e0ab22648fe8b65055b5986258cc2800438dc60c
2024-05-15 11:22:15 +08:00
BUAADreamer
8061cb5671 modify style
Former-commit-id: 823af88c3201412da7ef734d34198424e09b2d51
2024-05-15 10:18:10 +08:00
BUAADreamer
0a7e5f2f57 Merge branch 'main' of https://github.com/BUAADreamer/LLaMA-Factory
Former-commit-id: ce5cb0f897eebe32a1c2c0a78fe1b0267e4b6d9d
2024-05-15 09:54:21 +08:00
BUAADreamer
812d2c25a7 Merge branch 'hiyouga:main' into main
Former-commit-id: a4795c2f5328e0cfc657409f5774819e3defc006
2024-05-15 09:54:14 +08:00
BUAADreamer
51795e8db1 add yivl and save processor to model_dir
Former-commit-id: ae72f745cb4f7713c3b835d11202aec19c3c5093
2024-05-15 09:54:00 +08:00
hiyouga
2c011060b1 fix bug in vllm engine
Former-commit-id: 38f02a2c5b52cba6908c2d3c2a455677f8574faf
2024-05-15 02:17:54 +08:00
hiyouga
a8c7531250 fix gen args
Former-commit-id: d79f91f87106ba1bc3c0ea08da5898aad59566a7
2024-05-15 01:49:05 +08:00
hiyouga
88c34d26a8 fix examples
Former-commit-id: 910ffaf46e3dde87d2dbb48b82a59a9898a90847
2024-05-15 00:26:10 +08:00
hiyouga
12d666a63c update examples
Former-commit-id: 09269c59427e8a007c1c1b6f9d2014b4c0d0a328
2024-05-15 00:05:17 +08:00
hiyouga
304a2efec8 update readme
Former-commit-id: 568cc1d33c3d202e6430b68e0bcb2772aa6b0aa2
2024-05-14 23:57:08 +08:00
hiyouga
322331df51 update readme
Former-commit-id: f315a545d85a661746ad304b5a688d1fad9eaea1
2024-05-14 23:55:49 +08:00
hiyouga
ba0da83031 add npu examples
Former-commit-id: 0f21e68e2dbd84c820d66d5c6d980004efc51d51
2024-05-14 23:32:53 +08:00
hoshi-hiyouga
0a82e15e7c Merge pull request #3584 from zhou-wjjw/main
Enhancing Ascend 910A Training Efficiency in LlamaFactory with NPU

Former-commit-id: 310cf017a5ec24af8f5cf3af298760dd4150f9f2
2024-05-14 22:18:37 +08:00
hiyouga
6670b36c49 use robust envs
Former-commit-id: f3e194c3b3c40a3e6c3c5397ec0d859e6db614b5
2024-05-14 21:36:42 +08:00
hoshi-hiyouga
7a1d13aae2 Update train.py
Former-commit-id: da1e6f0d9c2eff64f92da1f6ada3aa44ef6d6a7e
2024-05-14 20:47:52 +08:00
hoshi-hiyouga
86a048128b Apply suggestions from code review
Co-authored-by: Huazhong Ji <hzji210@gmail.com>
Former-commit-id: abef48c17ee795eae984fcc89019c2c4859108c1
2024-05-14 20:44:21 +08:00
hoshi-hiyouga
fe1a3b1367 Apply suggestions from code review
Co-authored-by: Huazhong Ji <hzji210@gmail.com>
Former-commit-id: a435e5a0bdd7268c4f1204f99f289ee0b36fd930
2024-05-14 20:44:04 +08:00
hiyouga
84ff56c3a0 fix #3728
Former-commit-id: ea3e32a27f7f7dce75a708f8a6f376b5d3e8059a
2024-05-14 20:37:21 +08:00
BUAADreamer
483ed64b43 modify yi-vl template
Former-commit-id: f113975b425e70bed2588ca55a2c62594fbf2283
2024-05-14 16:45:28 +08:00
BUAADreamer
dd4619e9f3 add support for Yi-VL
Former-commit-id: d7834ca92d3048949caa48f8635cfbcea2c85771
2024-05-14 14:03:19 +08:00
BUAADreamer
905815d878 Merge branch 'main' of https://github.com/BUAADreamer/LLaMA-Factory
Former-commit-id: e82f527ea583a7e99a25a06c7fe7b03c1dc2ebb9
2024-05-13 23:28:52 +08:00
BUAADreamer
ba72e08901 add yi-vl
Former-commit-id: 891b25cb3d709ea82182ca90496034360e1cd5d8
2024-05-13 23:28:28 +08:00
hiyouga
e4972c8fc4 update examples
Former-commit-id: 779603055ae9216ff549f5285caac8c0c0a1e9fb
2024-05-13 20:39:36 +08:00
hiyouga
5f5f948806 fix #3724
Former-commit-id: 62f5999d79834d6cbc4129eda387a317665d6099
2024-05-13 20:09:09 +08:00
hiyouga
2892e5d42a fix #3702
Former-commit-id: 55755786f21050b9efc127c391509ba5d9ea8982
2024-05-13 18:24:35 +08:00
hoshi-hiyouga
542a5d15ef Merge pull request #3655 from Tendo33/main
1. Change the name of the is_fastapi_available function; 2. add logging of requests when deploying with vLLM

Former-commit-id: 28c75448eed9d472e96285737a66ac0d20280e13
2024-05-13 18:05:50 +08:00
hiyouga
b1c791fb0d support Yi 1.5
Former-commit-id: e580823676cbb83ddb9a0f685992e6054ae5ffaa
2024-05-13 16:51:20 +08:00
Tendo33
7589123465 ruff check scripts src tests --fix
Former-commit-id: da5277b6a1cff40d59df8f1835d9514b2a51be34
2024-05-13 09:40:33 +08:00
Sun Jinfeng
f94b54b776 Merge branch 'hiyouga:main' into main
Former-commit-id: 014acaa7845b7ac2876596d216b1be369a8e9311
2024-05-13 09:29:58 +08:00
hiyouga
1e1b8899f5 lint
Former-commit-id: cb72eb6ab24615ce492ca2945f29daa34c0c52d4
2024-05-12 01:28:51 +08:00
hiyouga
7b02c83399 fix #3658
Former-commit-id: 37799a62d4431d1d8c02fee6c23d607a65723c1a
2024-05-12 01:25:16 +08:00
hiyouga
8f1ba07b30 remove checksum and fix ui args
Former-commit-id: 0cfdeb1d30efb63211434bc4656bceb59e666289
2024-05-12 01:10:30 +08:00
hoshi-hiyouga
1ce400bddf Merge pull request #3654 from betapeanut/main
Remove Redundant Environment Variable Usage

Former-commit-id: aa57a2a183eef822973d7e5d7c7bc80a42167482
2024-05-12 00:49:00 +08:00
hiyouga
6bc0ec63c7 update readme
Former-commit-id: d57ca8a865b46588f65b2cc15073c5fcc4e4cebc
2024-05-12 00:33:49 +08:00
hiyouga
25d316b1a0 fix #3674
Former-commit-id: 6bad2eafef75ec697477e1f2ce739006042fb4c7
2024-05-12 00:03:59 +08:00
hiyouga
2bcd5b2b73 fix llava config
Former-commit-id: b13d032325e45d401a9dbc64d4c73e308eff3288
2024-05-12 00:02:49 +08:00
hoshi-hiyouga
436afcba57 Merge pull request #3651 from BUAADreamer/main
add some mllm features and try to incorporate Chinese-LLaVA-Med project

Former-commit-id: 143d311d4a82e1fa9b6d4ad98b0db5b02f3572c4
2024-05-11 23:59:08 +08:00
hoshi-hiyouga
db47c53486 Update loader.py
Former-commit-id: 2fc12790414677bb82736208fb9547640780af2e
2024-05-11 23:58:47 +08:00
hoshi-hiyouga
4efe56fd68 Update model_args.py
Former-commit-id: c4114add4c42c1d7723f7270451a6c9fc656ecd1
2024-05-11 23:57:05 +08:00
hoshi-hiyouga
d54313fcf9 Update patcher.py
Former-commit-id: 2c88d394d29c6e98ac3a6860848855722614ca52
2024-05-11 23:56:40 +08:00
hoshi-hiyouga
382f096475 Update tuner.py
Former-commit-id: ccd1eb2c0992f75440c0e1c5cd3f02d03aacb085
2024-05-11 23:55:59 +08:00
hoshi-hiyouga
0ccc76392e Update tuner.py
Former-commit-id: 22afcbdb25160583e5ece28fad0585c7bc70f41a
2024-05-11 23:54:53 +08:00
hoshi-hiyouga
e2cfcb0a5f Update README_zh.md
Former-commit-id: 1a205478403b5852fac0aa8418cdb8995fbe40e3
2024-05-11 22:44:51 +08:00
hoshi-hiyouga
b530a798c1 Update README.md
Former-commit-id: d24c83bb30e2829ba78db90c4c4975788f2eed25
2024-05-11 22:43:04 +08:00
BUAADreamer
fdf38b70a0 Merge branch 'main' of https://github.com/BUAADreamer/LLaMA-Factory
Former-commit-id: 50cc5cf93d50c42cfcf5047bcd9b5c7959d503ae
2024-05-11 13:11:10 +08:00
BUAADreamer
1a78b675be add full parameter finetuning of mllm
Former-commit-id: f90c1da5636ac3cb8112c5081a3b56b09a17fcf8
2024-05-11 13:11:00 +08:00
kkkl
9b1008912c Update constants.py
Fix the download issue of the Phi3 model

Former-commit-id: 8978e80914ac6db1ed1b79641b20c84087dd4341
2024-05-11 00:22:40 +08:00
BUAADreamer
18241f4ed8 Merge branch 'hiyouga:main' into main
Former-commit-id: 0dd072703508f68fd4ee51b6648d0c7642a4cc93
2024-05-10 20:34:41 +08:00
hiyouga
223bbd9930 resolve python 3.8 package
Former-commit-id: 5eee4ec7016846356715a4fa1ad58e3cbb1cac6e
2024-05-09 16:52:27 +08:00
Tendo33
9dadff90bb 1. Change the name of the is_fastapi_available function
2. Add logging of requests when deploying with vLLM


Former-commit-id: 530d4f5d51c13c71d99de5fe2d23805b0aa875a2
2024-05-09 14:28:01 +08:00
BUAADreamer
827a929f1d add push processor to hub
Former-commit-id: 7a05a965311edfdfafa57af8342875860d341f27
2024-05-09 14:05:19 +08:00
BUAADreamer
e508519e0a add saving of the MLLM processor and a Chinese-LLaVA-Med showcase
Former-commit-id: 110c49fbf79fe0625f091e63746bfabde00add99
2024-05-09 13:53:39 +08:00
BUAADreamer
47892418ad Merge branch 'hiyouga:main' into main
Former-commit-id: 1f3163509ecd05902ea216a905b4ca15ddd3696f
2024-05-09 13:45:43 +08:00
cocktailpeanut
2aeae4b88b yet another removal of unnecessary environment variables
Former-commit-id: a07726028f0287de28e4751672b27efe0efc6477
2024-05-09 01:33:20 -04:00
cocktailpeanut
c213f2a9a9 more removal of unnecessary environment variables
Former-commit-id: 59ef1a6e0d81585a6c010143d05fcfae26d40c00
2024-05-09 01:32:00 -04:00
cocktailpeanut
333f4a69bb remove unnecessary environment variable usage
Former-commit-id: 4be1d832cb269a07987f5cab5d5f949e269087da
2024-05-09 01:26:15 -04:00
BUAADreamer
172600d432 add mllm export
Former-commit-id: ce4770d33f6761d3b1d60661efcb0be34a036154
2024-05-08 22:50:42 +08:00
hiyouga
4ce4172c87 fix #3625
Former-commit-id: 8c0f5d1db29862277d84aa128b424b7d0f2b187f
2024-05-08 17:12:56 +08:00
hiyouga
400ae144a4 add llama3 chinese chat
Former-commit-id: ee3e5920f2f28567259693cb106e884a90cb02a2
2024-05-08 17:10:03 +08:00
hiyouga
0a1b6ca5a7 add deepseek moe 236B
Former-commit-id: 30c10e2dc41b5d64191a91ad2d61f3b5c440b1d5
2024-05-08 16:37:54 +08:00
BUAADreamer
05ef89cfcc modify export model
Former-commit-id: c7051edae4ce23f85daf204a2aaac134b1f29c3d
2024-05-08 10:36:36 +08:00
hiyouga
6d9d8b92ca update readme
Former-commit-id: bcc3d3b95609555e5e9a4deb68e65391c5b465bd
2024-05-07 22:17:04 +08:00
hiyouga
3f7f1daa33 remove big file
Former-commit-id: 8a05242787f810ec25d1b33358257d2867c45497
2024-05-07 22:14:06 +08:00
hiyouga
8061e92d07 update readme
Former-commit-id: ecefcb2e891e75d37df5ebfc616cfdb2106bcfd6
2024-05-07 21:17:31 +08:00
hiyouga
0c811a7653 update readme
Former-commit-id: 730ea71584debc5784d68eeadceb42f7e827447f
2024-05-07 19:03:47 +08:00
hiyouga
f6ac3796ca fix #3560
Former-commit-id: ea69cbe903a301df1bcc4b63cdc5bd4c6e3a8255
2024-05-07 19:03:35 +08:00
hoshi-hiyouga
c1394e7dfc Merge pull request #3601 from Katehuuh/main
Add contribution Luminia

Former-commit-id: 53bef571c445111f49bcc8a5d49afc2872f754ae
2024-05-07 18:01:48 +08:00
hiyouga
ebab655683 fix #3602
Former-commit-id: 1518b45490606ea200482da4737113c46985e8c5
2024-05-07 17:50:27 +08:00
hoshi-hiyouga
3d74f21738 Merge pull request #3604 from gaussian8/main
fix: split Dockerfile's CMD
Former-commit-id: 1d6e6956ca45d3cb7de213c4a641b98a35af5896
2024-05-07 16:53:23 +08:00
junwooo.lee
8493753fab fix: split Dockerfile's CMD
Former-commit-id: d8032550c7e084648fbf24da5abbac6432b54f26
2024-05-07 15:09:48 +09:00
Katehuuh
0f626a2145 Update README_zh.md
Add Projects Nekochu/Luminia-13B-v3

Former-commit-id: 88d01e831bd511daec30a94817f06e07b8406b18
2024-05-07 06:28:48 +02:00
Katehuuh
5100c290c4 Update README.md
Add Projects Nekochu/Luminia-13B-v3

Former-commit-id: 3d2cd743c2c8830e8b131d1192f1549fa557762d
2024-05-07 06:23:36 +02:00
hiyouga
4bde37e7c8 update readme
Former-commit-id: 3fdc72b9aad9e129f74417cbbf25e841d28e3737
2024-05-07 06:19:29 +08:00
hiyouga
e3b3a722de fix stop param
Former-commit-id: f0a850c25211b72eddbb357c81679db9b0930d44
2024-05-07 00:41:04 +08:00
hoshi-hiyouga
b9e167e6ca Merge pull request #3527 from zhaonx/dev
"add support for vllm api stop parameter"

Former-commit-id: e7d436403af6ac4c6a33cf36411098a0b0fefce2
2024-05-07 00:37:49 +08:00
hoshi-hiyouga
1ebd1e50e7 Update vllm_engine.py
Former-commit-id: fa2410de07150a82082ab5b88baf56aa891db870
2024-05-07 00:37:05 +08:00
hoshi-hiyouga
14316f6583 Update generating_args.py
Former-commit-id: 714957ba0159919a89fc1659a7a7b4b6bd82eead
2024-05-07 00:28:16 +08:00
hoshi-hiyouga
8e4ab2f7d0 Update generating_args.py
Former-commit-id: 7a9fb56786f4c40856211009656a983be1e42cb7
2024-05-07 00:27:56 +08:00
hiyouga
196068fa19 update readme
Former-commit-id: 1c67708291195825e8356d5862d22cbee9566233
2024-05-06 23:34:59 +08:00
hiyouga
da2295f8c8 fix gradio args
Former-commit-id: 7767c1ad4b2b638b558f941ba1f0d05d4a049507
2024-05-06 23:33:06 +08:00
hoshi-hiyouga
ab0741b5a6 Merge pull request #3596 from hiyouga/dev_doc
Add CLI document

Former-commit-id: 2b08c51500592f092b9596517e787081453ecbb5
2024-05-06 23:10:38 +08:00
hiyouga
6aec446940 update examples
Former-commit-id: cca50b627c85e0a777717d609377406cc7fd579f
2024-05-06 23:07:55 +08:00
hiyouga
50c71dd29f update example docs
Former-commit-id: 102cd42768d9eb2cf1219309a25b41e26149067e
2024-05-06 22:51:02 +08:00
hiyouga
5c9da798b5 update docs
Former-commit-id: a4a2e94241bea6f96590f6cb8ca8b5cddee1917e
2024-05-06 21:47:00 +08:00
zhouwei
3d1b0e1864 Significantly improve Ascend 910A training efficiency by leveraging the NPU's full compute capability through torch_npu, the PyTorch library optimized for NPUs, yielding roughly a tenfold speedup.
Former-commit-id: 90980b626d3408b3e2ee32a02456c20881318be7
2024-05-06 13:29:59 +08:00
zhaonx96
45becd2a45 "add stop parameter in chat.py"
Former-commit-id: e529bf5bc14c72558d26f73c42076eaa9684205c
2024-05-06 10:10:00 +08:00
zhaonx96
8f1197de7e Merge branch 'main' of https://github.com/zhaonx/LLaMA-Factory into dev
Former-commit-id: ec1f834905e241277fdd3f764c70eede97e9ff40
2024-05-06 10:09:00 +08:00
hoshi-hiyouga
25de4ce56a Merge pull request #3578 from pha123661/main
Fix badam example argument

Former-commit-id: d6edf3d91e5d20f48938e02d96d2193ed3d50181
2024-05-05 23:41:58 +08:00
Oscar
d0597897bf Fix outdated argument in badam example
Former-commit-id: 29aa188cc774cb72367f706f1cd4c07bc5a9f241
2024-05-05 23:35:19 +08:00
hiyouga
4674f3baa7 add version and help to cli
Former-commit-id: f762f2215169b9fe55564d5600b758ddc66f9c9c
2024-05-05 02:44:35 +08:00
hiyouga
2f5f6722cf fix eval scripts
Former-commit-id: fc3743d0b82c28fbff1170761139e4fa5d2a8939
2024-05-05 00:53:07 +08:00
hiyouga
7ef3788ff4 update webui
Former-commit-id: 17a53d25cdadd2df70a8afa0488f75bbf1918b89
2024-05-05 00:17:54 +08:00
hiyouga
f9aa74715a update scripts
Former-commit-id: 1c07648c4bb4bb0c46bc0240547b46bd2835dce1
2024-05-04 23:05:17 +08:00
hiyouga
9b187b274c add avg ppl
Former-commit-id: 40caeb6f0fdf76a1e2c9ca3761299d087fc643e0
2024-05-04 22:35:31 +08:00
hiyouga
68ed89f351 update ppl script
Former-commit-id: 07606fa4ab303f088170a569c1f86141a1b496c5
2024-05-04 22:13:14 +08:00
hiyouga
342d7da8d7 add cal_ppl script
Former-commit-id: 947068c11c0be00db2cecddb2c5842a0d6e2c321
2024-05-04 22:02:25 +08:00
hiyouga
6eda42eb7c update readme
Former-commit-id: eaf83847ef6d89d8b70429138e73b04fd2aa3ef8
2024-05-04 17:01:21 +08:00
hiyouga
e9fe8815be remove empty stream response
Former-commit-id: 070d0da928b1e974a094279a2782201016d2a3ab
2024-05-04 16:13:52 +08:00
hiyouga
9381fecca7 fix async stream api response
Former-commit-id: d70bbcae6513e50aa6094f2d98c4aa5c6641ea02
2024-05-04 16:11:18 +08:00
hiyouga
efa9140577 update api and support abort eval in webui
Former-commit-id: 8661bed68812e9ded9439e8a821b1d7716bc797b
2024-05-04 15:59:15 +08:00
hiyouga
b1b18b2c5a update readme
Former-commit-id: 5061f7196a3278af5ebce77249d9c3c0f8a55b34
2024-05-04 00:43:53 +08:00
hiyouga
37bcbf72b4 update readme and webui launch
Former-commit-id: c66ffa57323ef6ea78a9b75ec5122d9ea25fd420
2024-05-04 00:43:02 +08:00
hiyouga
99125c8825 update readme
Former-commit-id: 012e5b9625682a628a0b7fb5879097be7166c7be
2024-05-04 00:31:02 +08:00
hiyouga
182b974786 fix eval in webui
Former-commit-id: 774ef2bf5823d68b9cc254a676f5adb4af533d75
2024-05-04 00:19:19 +08:00
hiyouga
7a4a6a5522 fix webui resume
Former-commit-id: c2f6582ddd365bb64b72e8057cc4ecd7884d2480
2024-05-03 23:15:19 +08:00
hiyouga
2383e5440c fix slow op in dpo/orpo trainer
Former-commit-id: 38cad0896ea0516de6d4b2759ec9d45ee67d339b
2024-05-03 23:06:52 +08:00
hiyouga
1fea91736a fix callback log multigpu #3559
Former-commit-id: 1f105f1551b12675ca7d339ef5f91333f0371987
2024-05-03 21:24:27 +08:00
hiyouga
09d9fb28f9 enable tqdm in webui
Former-commit-id: 1737bff64799047a5b715fd979b4c038ae213bb3
2024-05-03 04:42:50 +08:00
hiyouga
57c6eabf83 fix gen_args
Former-commit-id: c3e2f4f07b7fb3b1d7d2b44451660f082a467aed
2024-05-03 04:24:50 +08:00
hiyouga
33d440b577 fix colab gradio
Former-commit-id: 26179a29d3400d1fea155e325a79473a8bc12f04
2024-05-03 03:54:46 +08:00
hiyouga
ce8200ad98 update webui and add CLIs
Former-commit-id: 1368dda22ab875914c9dd86ee5146a4f6a4736ad
2024-05-03 02:58:23 +08:00
hiyouga
2cedb59bee Update prepare.sh
Former-commit-id: 5928b869251a984a085289ca6861a9731dc5b910
2024-05-02 17:16:02 +08:00
hiyouga
dd0b85580e fix badam configs
Former-commit-id: 8a4e6a4c65a9a42e6501b0d3ce81d6220c287454
2024-05-02 02:47:04 +08:00
hoshi-hiyouga
cd4dad846b Merge pull request #3487 from codemayq/main
support BAdam in WebUI

Former-commit-id: 6eada1a2844a2b2c8aad599ebfcc35b376c938ea
2024-05-02 02:38:01 +08:00
hoshi-hiyouga
a11a04a24f Update train.py
Former-commit-id: 16f0d0056967872e02969fdd842a381f9484af8a
2024-05-02 02:21:27 +08:00
hoshi-hiyouga
eb99999ca8 Update README_zh.md
Former-commit-id: 1c673d89faca3160627009fcd0a4aa39138570c0
2024-05-02 02:14:55 +08:00
hoshi-hiyouga
ea58cf111e Update README.md
Former-commit-id: 4fb43b0c9aa48242126252ad755a2a1683b38d6a
2024-05-02 02:13:46 +08:00
zhaonx
2d95127c33 "add support for vllm api stop parameter"
Former-commit-id: b9f21fa639b66db09c79404d885661c96bdf9395
2024-04-30 17:17:09 +08:00
Lao
57fcdca336 Update README_zh.md
Former-commit-id: bacc8588dc7b0b43c240189ecf4336bedc299357
2024-04-28 23:31:37 +08:00
khazic
3d88589c0f Upgrade the second sharegpt format
Former-commit-id: 057f992a666b029d207a3dc7dfc353f9abcf8316
2024-04-28 14:30:05 +08:00
khazic
dfd153cc81 added the second sharegpt format
Former-commit-id: 6d140ac98a78ecc0a713842bb917dc8eb14450cb
2024-04-28 14:27:45 +08:00
codingma
7641a214d8 support BAdam in WebUI
Former-commit-id: 1247154dd7d5eba5d11c4bb8504bf551ab49eb72
2024-04-28 11:31:34 +08:00
zhangzc
7cdc16abdf Support a custom number of samples per dataset
Former-commit-id: fa8325401df27595de4611a89dfcc14644956abd
2024-03-27 14:22:50 +08:00
299 changed files with 14156 additions and 6291 deletions

View File

@@ -4,8 +4,10 @@
.venv
cache
data
examples
docker
saves
hf_cache
output
.dockerignore
.gitattributes
.gitignore
Dockerfile

View File

@@ -1,18 +1,36 @@
name: "\U0001F41B Bug / Help"
description: Create a report to help us improve the LLaMA Factory
body:
- type: markdown
attributes:
value: |
Issues included in **FAQs** or those with **insufficient** information may be closed without a response.
包含在**常见问题**内或提供信息**不完整**的 issues 可能不会被回复。
- type: checkboxes
id: reminder
attributes:
label: Reminder
description: |
Please ensure you have read the README carefully and searched the existing issues.
请确保您已经认真阅读了 README 并且搜索过现有的 Issue。
Please ensure you have read the README carefully and searched the existing issues (including FAQs).
请确保您已经认真阅读了 README 并且搜索过现有的 issues（包括常见问题）。
options:
- label: I have read the README and searched the existing issues.
required: true
- type: textarea
id: system-info
validations:
required: true
attributes:
label: System Info
description: |
Please share your system info with us. You can run the command **llamafactory-cli env** and copy-paste its output below.
请提供您的系统信息。您可以在命令行运行 **llamafactory-cli env** 并将其输出复制到该文本框中。
placeholder: llamafactory version, platform, python version, ...
- type: textarea
id: reproduction
validations:
@@ -26,7 +44,9 @@ body:
请合理使用 Markdown 标签来格式化您的文本。
placeholder: |
python src/train_bash.py ...
```bash
llamafactory-cli train ...
```
- type: textarea
id: expected-behavior
@@ -38,18 +58,6 @@ body:
Please provide a clear and concise description of what you would expect to happen.
请提供您原本的目的,即这段代码的期望行为。
- type: textarea
id: system-info
validations:
required: false
attributes:
label: System Info
description: |
Please share your system info with us. You can run the command **transformers-cli env** and copy-paste its output below.
请提供您的系统信息。您可以在命令行运行 **transformers-cli env** 并将其输出复制到该文本框中。
placeholder: transformers version, platform, python version, ...
- type: textarea
id: others
validations:

View File

@@ -5,3 +5,4 @@ Fixes # (issue)
## Before submitting
- [ ] Did you read the [contributor guideline](https://github.com/hiyouga/LLaMA-Factory/blob/main/.github/CONTRIBUTING.md)?
- [ ] Did you write any new necessary tests?

30
.github/workflows/label_issue.yml vendored Normal file
View File

@@ -0,0 +1,30 @@
name: label_issue
on:
issues:
types:
- opened
jobs:
label_issue:
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
ISSUE_URL: ${{ github.event.issue.html_url }}
ISSUE_TITLE: ${{ github.event.issue.title }}
run: |
LABEL=pending
NPU_KEYWORDS=(npu huawei ascend 华为 昇腾)
ISSUE_TITLE_LOWER=$(echo $ISSUE_TITLE | tr '[:upper:]' '[:lower:]')
for KEYWORD in ${NPU_KEYWORDS[@]}; do
if [[ $ISSUE_TITLE_LOWER == *$KEYWORD* ]] && [[ $ISSUE_TITLE_LOWER != *input* ]]; then
LABEL=pending,npu
break
fi
done
gh issue edit $ISSUE_URL --add-label $LABEL
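The workflow added above labels newly opened issues: titles containing an NPU-related keyword (npu, huawei, ascend, 华为, 昇腾) and not the word "input" get `pending,npu`; everything else gets `pending`. A rough Python re-implementation of the same check, useful only for trying titles locally (the actual workflow runs the bash shown above):

```python
# Python sketch of the keyword matching performed in label_issue.yml.
NPU_KEYWORDS = ["npu", "huawei", "ascend", "华为", "昇腾"]

def labels_for(issue_title: str) -> str:
    title = issue_title.lower()
    if any(keyword in title for keyword in NPU_KEYWORDS) and "input" not in title:
        return "pending,npu"
    return "pending"

print(labels_for("Training fails on Ascend 910B"))       # -> pending,npu
print(labels_for("How should the dataset input look?"))  # -> pending
```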

40
.github/workflows/publish.yml vendored Normal file
View File

@@ -0,0 +1,40 @@
name: publish
on:
release:
types:
- published
jobs:
publish:
name: Upload release to PyPI
runs-on: ubuntu-latest
environment:
name: release
url: https://pypi.org/p/llamafactory
permissions:
id-token: write
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.8"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install build
- name: Build package
run: |
python -m build
- name: Publish package
uses: pypa/gh-action-pypi-publish@release/v1

View File

@@ -2,28 +2,50 @@ name: tests
on:
push:
branches: [ "main" ]
branches:
- main
paths:
- "**.py"
- "requirements.txt"
- ".github/workflows/*.yml"
pull_request:
branches: [ "main" ]
branches:
- main
paths:
- "**.py"
- "requirements.txt"
- ".github/workflows/*.yml"
jobs:
check_code_quality:
tests:
runs-on: ubuntu-latest
environment:
name: tests
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
steps:
- uses: actions/checkout@v4
- name: Checkout
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.8"
cache: "pip"
cache-dependency-path: "setup.py"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install ruff
python -m pip install ".[torch,dev]"
- name: Check quality
run: |
make style && make quality
make style && make quality
- name: Test with pytest
run: |
make test

6
.gitignore vendored
View File

@@ -160,6 +160,8 @@ cython_debug/
.idea/
# custom .gitignore
user.config
saves/
cache/
config/
saves/
output/
wandb/

View File

@@ -12,12 +12,16 @@ authors:
given-names: "Yanhan"
- family-names: "Luo"
given-names: "Zheyan"
- family-names: "Feng"
given-names: "Zhangchi"
- family-names: "Ma"
given-names: "Yongqiang"
title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
url: "https://arxiv.org/abs/2403.13372"
preferred-citation:
type: article
type: conference-paper
conference:
name: "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)"
authors:
- family-names: "Zheng"
given-names: "Yaowei"
@@ -29,9 +33,12 @@ preferred-citation:
given-names: "Yanhan"
- family-names: "Luo"
given-names: "Zheyan"
- family-names: "Feng"
given-names: "Zhangchi"
- family-names: "Ma"
given-names: "Yongqiang"
journal: "arXiv preprint arXiv:2403.13372"
title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
url: "https://arxiv.org/abs/2403.13372"
year: 2024
publisher: "Association for Computational Linguistics"
address: "Bangkok, Thailand"

View File

@@ -1,14 +0,0 @@
FROM nvcr.io/nvidia/pytorch:24.01-py3
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/
RUN pip install -e .[deepspeed,metrics,bitsandbytes,qwen]
VOLUME [ "/root/.cache/huggingface/", "/app/data", "/app/output" ]
EXPOSE 7860
CMD [ "python", "src/train_web.py" ]

1
MANIFEST.in Normal file
View File

@@ -0,0 +1 @@
include LICENSE requirements.txt

View File

@@ -1,4 +1,4 @@
.PHONY: quality style
.PHONY: quality style test
check_dirs := scripts src tests
@@ -9,3 +9,6 @@ quality:
style:
ruff check $(check_dirs) --fix
ruff format $(check_dirs)
test:
CUDA_VISIBLE_DEVICES= pytest tests/
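The new `test` target runs pytest with `CUDA_VISIBLE_DEVICES` set to an empty string, which hides all GPUs so the suite runs on CPU. A Python equivalent of that single line, shown only to make the environment trick explicit (a sketch, not part of the Makefile):

```python
# Equivalent of `CUDA_VISIBLE_DEVICES= pytest tests/` from the new Makefile target:
# an empty CUDA_VISIBLE_DEVICES hides all GPUs, so the tests run on CPU.
import os
import subprocess

env = dict(os.environ, CUDA_VISIBLE_DEVICES="")
subprocess.run(["pytest", "tests/"], env=env, check=True)
```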

441
README.md
View File

@@ -3,17 +3,19 @@
[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[![PyPI](https://img.shields.io/pypi/v/llmtuner)](https://pypi.org/project/llmtuner/)
[![Downloads](https://static.pepy.tech/badge/llmtuner)](https://pypi.org/project/llmtuner/)
[![Citation](https://img.shields.io/badge/citation-34-green)](#projects-using-llama-factory)
[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
[![Citation](https://img.shields.io/badge/citation-72-green)](#projects-using-llama-factory)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)
[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)
👋 Join our [WeChat](assets/wechat.jpg).
[![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)
👋 Join our [WeChat](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).
\[ English | [中文](README_zh.md) \]
@@ -24,6 +26,7 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/9840a653-7e9c-41c8-ae89
Choose your path:
- **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
- **PAI-DSW**: https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
- **Local machine**: Please refer to [usage](#getting-started)
## Table of Contents
@@ -44,9 +47,9 @@ Choose your path:
## Features
- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
- **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO and ORPO.
- **Scalable resources**: 32-bit full-tuning, 16-bit freeze-tuning, 16-bit LoRA and 2/4/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8.
- **Advanced algorithms**: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ and Agent tuning.
- **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.
- **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
- **Advanced algorithms**: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning.
- **Practical tricks**: FlashAttention-2, Unsloth, RoPE scaling, NEFTune and rsLoRA.
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
- **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.
@@ -68,57 +71,69 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
## Changelog
[24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See `examples/lora_single_gpu/sft_mllm.sh` for usage.
[24/06/16] We support **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.
[24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. Two Llama-3-derived models fine-tuned using LLaMA Factory are available at Hugging Face, check [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.
[24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models.
[24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See `examples/extras/mod` for usage.
[24/04/16] We supported **[BAdam](https://arxiv.org/abs/2404.02827)**. See `examples/extras/badam` for usage.
[24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** speed and **50%** memory compared with FlashAttention-2, more benchmarks can be found in [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison).
[24/05/26] We supported **[SimPO](https://arxiv.org/abs/2405.14734)** algorithm for preference learning. See [examples](examples/README.md) for usage.
<details><summary>Full Changelog</summary>
[24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See `examples/lora_single_gpu` for usage.
[24/05/20] We supported fine-tuning the **PaliGemma** series models. Note that the PaliGemma models are pre-trained models, you need to fine-tune them with `gemma` template for chat completion.
[24/05/18] We supported **[KTO](https://arxiv.org/abs/2402.01306)** algorithm for preference learning. See [examples](examples/README.md) for usage.
[24/05/14] We supported training and inference on the Ascend NPU devices. Check [installation](#installation) section for details.
[24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See [examples](examples/README.md) for usage.
[24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. Two Llama-3-derived models fine-tuned using LLaMA Factory are available at Hugging Face, check [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.
[24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README.md) for usage.
[24/04/16] We supported **[BAdam](https://arxiv.org/abs/2404.02827)**. See [examples](examples/README.md) for usage.
[24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** speed and **50%** memory compared with FlashAttention-2, more benchmarks can be found in [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison).
[24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README.md) for usage.
[24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv!
[24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See `examples/extras/fsdp_qlora` for usage.
[24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README.md) for usage.
[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See `examples/extras/loraplus` for usage.
[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README.md) for usage.
[24/03/07] We supported gradient low-rank projection (**[GaLore](https://arxiv.org/abs/2403.03507)**) algorithm. See `examples/extras/galore` for usage.
[24/03/07] We supported gradient low-rank projection (**[GaLore](https://arxiv.org/abs/2403.03507)**) algorithm. See [examples](examples/README.md) for usage.
[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `--infer_backend vllm` to enjoy **270%** inference speed. (LoRA is not yet supported, merge it first.)
[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `infer_backend: vllm` to enjoy **270%** inference speed.
[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `--use_dora` to activate DoRA training.
[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `use_dora: true` to activate DoRA training.
[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See `examples/extras/llama_pro` for usage.
[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README.md) for usage.
[24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details.
[24/01/18] We supported **agent tuning** for most models, equipping model with tool using abilities by fine-tuning with `--dataset glaive_toolcall`.
[24/01/18] We supported **agent tuning** for most models, equipping model with tool using abilities by fine-tuning with `dataset: glaive_toolcall_en`.
[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try `--use_unsloth` argument to activate unsloth patch. It achieves **170%** speed in our benchmark, check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.
[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try `use_unsloth: true` argument to activate unsloth patch. It achieves **170%** speed in our benchmark, check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.
[23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See hardware requirement [here](#hardware-requirement).
[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)** for Chinese mainland users. See [this tutorial](#use-modelscope-hub-optional) for usage.
[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)** for Chinese mainland users. See [this tutorial](#download-from-modelscope-hub) for usage.
[23/10/21] We supported **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try `--neftune_noise_alpha` argument to activate NEFTune, e.g., `--neftune_noise_alpha 5`.
[23/10/21] We supported **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try `neftune_noise_alpha: 5` argument to activate NEFTune.
[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try `--shift_attn` argument to enable shift short attention.
[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try `shift_attn: true` argument to enable shift short attention.
[23/09/23] We integrated MMLU, C-Eval and CMMLU benchmarks in this repo. See [this example](#evaluation) to evaluate your models.
[23/09/23] We integrated MMLU, C-Eval and CMMLU benchmarks in this repo. See [examples](examples/README.md) for usage.
[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try `--flash_attn fa2` argument to enable FlashAttention-2 if you are using RTX4090, A100 or H100 GPUs.
[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX4090, A100 or H100 GPUs.
[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try `--rope_scaling linear` argument in training and `--rope_scaling dynamic` argument at inference to extrapolate the position embeddings.
[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try `rope_scaling: linear` argument in training and `rope_scaling: dynamic` argument at inference to extrapolate the position embeddings.
[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [this example](#dpo-training) to train your models.
[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README.md) for usage.
[23/07/31] We supported **dataset streaming**. Try `--streaming` and `--max_steps 10000` arguments to load your dataset in streaming mode.
[23/07/31] We supported **dataset streaming**. Try `streaming: true` and `max_steps: 10000` arguments to load your dataset in streaming mode.
[23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face Repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.
@@ -130,48 +145,47 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
[23/06/22] We aligned the [demo API](src/api_demo.py) with the [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format where you can insert the fine-tuned model in **arbitrary ChatGPT-based applications**.
[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). Try `--quantization_bit 4/8` argument to work with quantized models.
[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README.md) for usage.
</details>
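The changelog entries above consistently replace `--flag value` command-line switches with YAML-style arguments (e.g. `flash_attn: fa2`, `neftune_noise_alpha: 5`). As a rough sketch of how a few of the mentioned arguments could be combined into a config for `llamafactory-cli train` — the model name, output path, and any key not quoted in the changelog are assumptions, not values taken from this diff:

```python
# Sketch only: write a training config using the YAML-style arguments mentioned
# in the changelog, then launch it with the CLI from the updated issue template.
import yaml  # requires PyYAML

config = {
    "model_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed model
    "stage": "sft",                    # assumed key/value
    "finetuning_type": "lora",         # assumed key/value
    "dataset": "glaive_toolcall_en",   # 24/01/18 entry
    "template": "llama3",              # assumed; templates are listed in the table below
    "flash_attn": "fa2",               # 23/09/10 entry
    "neftune_noise_alpha": 5,          # 23/10/21 entry
    "output_dir": "saves/llama3-8b-lora-sft",  # assumed path
}

with open("sft_sketch.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Then, per the updated bug-report template: llamafactory-cli train sft_sketch.yaml
```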
## Supported Models
| Model | Model size | Default module | Template |
| -------------------------------------------------------- | -------------------------------- | ----------------- | --------- |
| [Baichuan2](https://huggingface.co/baichuan-inc) | 7B/13B | W_pack | baichuan2 |
| [BLOOM](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [ChatGLM3](https://huggingface.co/THUDM) | 6B | query_key_value | chatglm3 |
| [Command-R](https://huggingface.co/CohereForAI) | 35B/104B | q_proj,v_proj | cohere |
| [DeepSeek (MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B | q_proj,v_proj | deepseek |
| [Falcon](https://huggingface.co/tiiuae) | 7B/40B/180B | query_key_value | falcon |
| [Gemma/CodeGemma](https://huggingface.co/google) | 2B/7B | q_proj,v_proj | gemma |
| [InternLM2](https://huggingface.co/internlm) | 7B/20B | wqkv | intern2 |
| [LLaMA](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | q_proj,v_proj | - |
| [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
| [LLaMA-3](https://huggingface.co/meta-llama) | 8B/70B | q_proj,v_proj | llama3 |
| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | q_proj,v_proj | vicuna |
| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | q_proj,v_proj | mistral |
| [OLMo](https://huggingface.co/allenai) | 1B/7B | q_proj,v_proj | - |
| [Phi-1.5/2](https://huggingface.co/microsoft) | 1.3B/2.7B | q_proj,v_proj | - |
| [Phi-3](https://huggingface.co/microsoft) | 3.8B | qkv_proj | phi |
| [Qwen](https://huggingface.co/Qwen) | 1.8B/7B/14B/72B | c_attn | qwen |
| [Qwen1.5 (Code/MoE)](https://huggingface.co/Qwen) | 0.5B/1.8B/4B/7B/14B/32B/72B/110B | q_proj,v_proj | qwen |
| [StarCoder2](https://huggingface.co/bigcode) | 3B/7B/15B | q_proj,v_proj | - |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | q_proj,v_proj | xverse |
| [Yi](https://huggingface.co/01-ai) | 6B/9B/34B | q_proj,v_proj | yi |
| [Yuan](https://huggingface.co/IEITYuan) | 2B/51B/102B | q_proj,v_proj | yuan |
| Model | Model size | Template |
| ------------------------------------------------------------ | -------------------------------- | --------- |
| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |
| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 |
| [InternLM2](https://huggingface.co/internlm) | 7B/20B | intern2 |
| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
| [Llama 3](https://huggingface.co/meta-llama) | 8B/70B | llama3 |
| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | vicuna |
| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
| [PaliGemma](https://huggingface.co/google) | 3B | gemma |
| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
| [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi |
| [Qwen/Qwen1.5/Qwen2 (Code/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/4B/7B/14B/32B/72B/110B | qwen |
| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
| [Yi/Yi-1.5](https://huggingface.co/01-ai) | 6B/9B/34B | yi |
| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |
> [!NOTE]
> **Default module** is used for the `--lora_target` argument, you can use `--lora_target all` to specify all the available modules for better convergence.
>
> For the "base" models, the `--template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
> For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
>
> Remember to use the **SAME** template in training and inference.
Please refer to [constants.py](src/llmtuner/extras/constants.py) for a full list of models we supported.
Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of models we supported.
You also can add a custom chat template to [template.py](src/llmtuner/data/template.py).
You also can add a custom chat template to [template.py](src/llamafactory/data/template.py).
## Supported Training Approaches
@@ -182,7 +196,9 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ
| Reward Modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| PPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| KTO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| ORPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| SimPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
## Provided Datasets
@@ -195,6 +211,8 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ
- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)
- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)
- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)
- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
@@ -202,12 +220,12 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ
<details><summary>Supervised fine-tuning datasets</summary>
- [Identity (en&zh)](data/identity.json)
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Self Cognition (zh)](data/self_cognition.json)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [ShareGPT (zh)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection)
- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
@@ -216,7 +234,6 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ
- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
- [UltraChat (en)](https://github.com/thunlp/UltraChat)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
@@ -229,15 +246,19 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ
- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
- [Ad Gen (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
- [Glaive Function Calling V2 (en)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
@@ -253,13 +274,13 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ
<details><summary>Preference datasets</summary>
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Orca DPO (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)
</details>
@@ -274,20 +295,21 @@ huggingface-cli login
| Mandatory | Minimum | Recommend |
| ------------ | ------- | --------- |
| python | 3.8 | 3.10 |
| torch | 1.13.1 | 2.2.0 |
| transformers | 4.37.2 | 4.39.3 |
| datasets | 2.14.3 | 2.18.0 |
| accelerate | 0.27.2 | 0.28.0 |
| peft | 0.9.0 | 0.10.0 |
| trl | 0.8.1 | 0.8.1 |
| python | 3.8 | 3.11 |
| torch | 1.13.1 | 2.3.0 |
| transformers | 4.41.2 | 4.41.2 |
| datasets | 2.16.0 | 2.19.2 |
| accelerate | 0.30.1 | 0.30.1 |
| peft | 0.11.1 | 0.11.1 |
| trl | 0.8.6 | 0.9.4 |
| Optional | Minimum | Recommend |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.14.0 |
| bitsandbytes | 0.39.0 | 0.43.0 |
| flash-attn | 2.3.0 | 2.5.6 |
| bitsandbytes | 0.39.0 | 0.43.1 |
| vllm | 0.4.3 | 0.4.3 |
| flash-attn | 2.3.0 | 2.5.9 |
### Hardware Requirement
@@ -305,28 +327,25 @@ huggingface-cli login
## Getting Started
### Data Preparation
### Installation
Please refer to [data/README.md](data/README.md) for details about the dataset file format. You can either use datasets on the HuggingFace / ModelScope hub or load a dataset from local disk.
> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset.
### Dependence Installation
> [!IMPORTANT]
> Installation is mandatory.
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
conda create -n llama_factory python=3.10
conda activate llama_factory
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .[metrics]
pip install -e ".[torch,metrics]"
```
Extra dependencies available: deepspeed, metrics, galore, badam, vllm, bitsandbytes, gptq, awq, aqlm, qwen, modelscope, quality
Extra dependencies available: torch, torch-npu, metrics, deepspeed, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, qwen, modelscope, quality
> [!TIP]
> Use `pip install --no-deps -e .` to resolve package conflicts.
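For instance, several extras can be combined in a single install (a sketch; pick only the extras from the list above that your setup actually needs):

```bash
# hypothetical combination for CUDA training with metrics, DeepSpeed and 4-bit quantization
pip install -e ".[torch,metrics,deepspeed,bitsandbytes]"
```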
<details><summary>For Windows users</summary>
If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you will be required to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.
If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.
```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
@@ -336,50 +355,146 @@ To enable FlashAttention-2 on the Windows platform, you need to install the prec
</details>
### Train with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
<details><summary>For Ascend NPU users</summary>
> [!IMPORTANT]
> LLaMA Board GUI only supports training on a single GPU, please use [CLI](#command-line-interface) for distributed training.
#### Use local environment
To install LLaMA Factory on Ascend NPU devices, please specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands:
```bash
export CUDA_VISIBLE_DEVICES=0 # `set CUDA_VISIBLE_DEVICES=0` for Windows
export GRADIO_SERVER_PORT=7860 # `set GRADIO_SERVER_PORT=7860` for Windows
python src/train_web.py # or python -m llmtuner.webui.interface
# replace the url according to your CANN version and devices
# install CANN Toolkit
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run
bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install
# install CANN Kernels
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run
bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install
# set env variables
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
<details><summary>For Alibaba Cloud users</summary>
| Requirement | Minimum | Recommend |
| ------------ | ------- | ----------- |
| CANN | 8.0.RC1 | 8.0.RC1 |
| torch | 2.1.0 | 2.1.0 |
| torch-npu | 2.1.0 | 2.1.0.post3 |
| deepspeed | 0.13.2 | 0.13.2 |
If you encountered display problems in LLaMA Board on Alibaba Cloud, try using the following command to set environment variables before starting LLaMA Board:
Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
```bash
export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
```
If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
Download the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
</details>
#### Use Docker
### Data Preparation
Please refer to [data/README.md](data/README.md) for details about the dataset file format. You can either use datasets on the HuggingFace / ModelScope hub or load a dataset from local disk.
> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset.
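As a minimal sketch, a local alpaca-style file can be registered as follows; the dataset name, file name and column mapping are placeholders, and the exact schema is documented in [data/README.md](data/README.md).

```bash
# Register a hypothetical local dataset "my_dataset" stored at data/my_data.json.
# Run from the repository root.
python - <<'EOF'
import json

with open("data/dataset_info.json", encoding="utf-8") as f:
    info = json.load(f)

info["my_dataset"] = {
    "file_name": "my_data.json",  # file placed under the data/ directory
    "columns": {"prompt": "instruction", "query": "input", "response": "output"},
}

with open("data/dataset_info.json", "w", encoding="utf-8") as f:
    json.dump(info, f, indent=2, ensure_ascii=False)
EOF
```

The dataset can then be referenced by its key, e.g. `dataset: my_dataset`, in a training yaml.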
### Quickstart
Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.
```bash
docker build -f ./Dockerfile -t llama-factory:latest .
docker run --gpus=all \
-v ./hf_cache:/root/.cache/huggingface/ \
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
See [examples/README.md](examples/README.md) for advanced usage (including distributed training).
> [!TIP]
> Use `llamafactory-cli help` to show help information.
### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
```bash
llamafactory-cli webui
```
### Build Docker
For CUDA users:
```bash
cd docker/docker-cuda/
docker-compose up -d
docker-compose exec llamafactory bash
```
For Ascend NPU users:
```bash
cd docker/docker-npu/
docker-compose up -d
docker-compose exec llamafactory bash
```
<details><summary>Build without Docker Compose</summary>
For CUDA users:
```bash
docker build -f ./docker/docker-cuda/Dockerfile \
--build-arg INSTALL_BNB=false \
--build-arg INSTALL_VLLM=false \
--build-arg INSTALL_DEEPSPEED=false \
--build-arg INSTALL_FLASHATTN=false \
--build-arg PIP_INDEX=https://pypi.org/simple \
-t llamafactory:latest .
docker run -dit --gpus=all \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
-e CUDA_VISIBLE_DEVICES=0 \
-p 7860:7860 \
-p 8000:8000 \
--shm-size 16G \
--name llama_factory \
-d llama-factory:latest
--name llamafactory \
llamafactory:latest
docker exec -it llamafactory bash
```
#### Use Docker Compose
For Ascend NPU users:
```bash
docker compose -f ./docker-compose.yml up -d
# Choose the docker image according to your environment
docker build -f ./docker/docker-npu/Dockerfile \
--build-arg INSTALL_DEEPSPEED=false \
--build-arg PIP_INDEX=https://pypi.org/simple \
-t llamafactory:latest .
# Change `device` according to your resources
docker run -dit \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-p 7860:7860 \
-p 8000:8000 \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
--shm-size 16G \
--name llamafactory \
llamafactory:latest
docker exec -it llamafactory bash
```
</details>
<details><summary>Details about volume</summary>
- hf_cache: Utilize the Hugging Face cache on the host machine. It can be reassigned if a cache already exists in a different directory.
@@ -388,22 +503,15 @@ docker compose -f ./docker-compose.yml up -d
</details>
### Train with Command Line Interface
See [examples/README.md](examples/README.md) for usage.
Use `python src/train_bash.py -h` to display arguments description.
### Deploy with OpenAI-style API and vLLM
```bash
CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 python src/api_demo.py \
--model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
--template llama3 \
--infer_backend vllm \
--vllm_enforce_eager
API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
```
> [!TIP]
> Visit https://platform.openai.com/docs/api-reference/chat/create for the API documentation.
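Once the server is running, it can be queried with any OpenAI-compatible client. A minimal sketch with `curl` (host, port, model name and sampling parameters are placeholders):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "temperature": 0.7
      }'
```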
### Download from ModelScope Hub
If you have trouble with downloading models and datasets from Hugging Face, you can use ModelScope.
@@ -412,7 +520,18 @@ If you have trouble with downloading models and datasets from Hugging Face, you
export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows
```
Train the model by specifying a model ID of the ModelScope Hub as the `--model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.
Train the model by specifying a model ID of the ModelScope Hub as the `model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.
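For example (a sketch based on the quickstart config above; the `sed` edit, which assumes GNU sed, merely swaps the Hugging Face model ID for the ModelScope one):

```bash
export USE_MODELSCOPE_HUB=1  # `set USE_MODELSCOPE_HUB=1` for Windows
# point the example config at the ModelScope model ID
sed -i 's|^model_name_or_path:.*|model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct|' \
  examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```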
### Use W&B Logger
To use [Weights & Biases](https://wandb.ai) for logging experimental results, you need to add the following arguments to yaml files.
```yaml
report_to: wandb
run_name: test_run # optional
```
Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account.
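For example, the key can be passed through the environment when launching a run (a sketch; the placeholder key and the example config are assumptions):

```bash
WANDB_API_KEY=<your_key> llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```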
## Projects using LLaMA Factory
@@ -425,35 +544,73 @@ If you have a project that should be incorporated, please contact via email or c
1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)
1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)
1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)
1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)
1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)
1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)
1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)
1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801)
1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. 2024. [[arxiv]](https://arxiv.org/abs/2402.11809)
1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)
1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)
1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)
1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)
1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)
1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)
1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)
1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)
1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378)
1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)
1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)
1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)
1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)
1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)
1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)
1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)
1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)
1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)
1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)
1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)
1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)
1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115)
1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)
1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)
1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)
1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)
1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)
1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)
1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)
1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)
1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)
1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)
1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B.
1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge.
1. **[Sunsimiao](https://github.com/thomas-yanxin/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.
1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generating metadata for Stable Diffusion. [[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in Chinese medical domain, based on LLaVA-1.5-7B.
1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX.
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way to build multi-agent LLM applications, supporting model fine-tuning via LLaMA Factory.
</details>
@@ -461,17 +618,19 @@ If you have a project that should be incorporated, please contact via email or c
This repository is licensed under the [Apache-2.0 License](LICENSE).
Please follow the model licenses to use the corresponding model weights: [Baichuan2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command-R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [InternLM2](https://github.com/InternLM/InternLM#license) / [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [LLaMA-2/LLaVA-1.5](https://ai.meta.com/llama/license/) / [LLaMA-3](https://llama.meta.com/llama3/license/) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yuan](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@article{zheng2024llamafactory,
@inproceedings{zheng2024llamafactory,
title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Yongqiang Ma},
journal={arXiv preprint arXiv:2403.13372},
author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
address={Bangkok, Thailand},
publisher={Association for Computational Linguistics},
year={2024},
url={http://arxiv.org/abs/2403.13372}
}
```

View File
@@ -3,17 +3,19 @@
[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[![PyPI](https://img.shields.io/pypi/v/llmtuner)](https://pypi.org/project/llmtuner/)
[![Downloads](https://static.pepy.tech/badge/llmtuner)](https://pypi.org/project/llmtuner/)
[![Citation](https://img.shields.io/badge/citation-34-green)](#使用了-llama-factory-的项目)
[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
[![Citation](https://img.shields.io/badge/citation-72-green)](#使用了-llama-factory-的项目)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)
[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)
👋 加入我们的[微信群](assets/wechat.jpg)。
[![GitHub Trend](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)
👋 加入我们的[微信群](assets/wechat.jpg)或 [NPU 用户群](assets/wechat_npu.jpg)。
\[ [English](README.md) | 中文 \]
@@ -24,6 +26,7 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
选择你的打开方式:
- **Colab**：https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing
- **PAI-DSW**: https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
- **本地机器**:请见[如何使用](#如何使用)
## 目录
@@ -44,9 +47,9 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
## 项目特色
- **多种模型**：LLaMA、LLaVA、Mistral、Mixtral-MoE、Qwen、Yi、Gemma、Baichuan、ChatGLM、Phi 等等。
- **集成方法**：增量预训练、多模态指令监督微调、奖励模型训练、PPO 训练、DPO 训练和 ORPO 训练。
- **多种精度**：32 比特全参数微调、16 比特冻结微调、16 比特 LoRA 微调和基于 AQLM/AWQ/GPTQ/LLM.int8 的 2/4/8 比特 QLoRA 微调。
- **先进算法**：GaLore、BAdam、DoRA、LongLoRA、LLaMA Pro、Mixture-of-Depths、LoRA+、LoftQ 和 Agent 微调。
- **集成方法**：增量预训练、多模态指令监督微调、奖励模型训练、PPO 训练、DPO 训练、KTO 训练、ORPO 训练等等。
- **多种精度**：16 比特全参数微调、冻结微调、LoRA 微调和基于 AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ 的 2/3/4/5/6/8 比特 QLoRA 微调。
- **先进算法**：GaLore、BAdam、DoRA、LongLoRA、LLaMA Pro、Mixture-of-Depths、LoRA+、LoftQ、PiSSA 和 Agent 微调。
- **实用技巧**：FlashAttention-2、Unsloth、RoPE scaling、NEFTune 和 rsLoRA。
- **实验监控**：LlamaBoard、TensorBoard、Wandb、MLflow 等等。
- **极速推理**:基于 vLLM 的 OpenAI 风格 API、浏览器界面和命令行接口。
@@ -68,57 +71,69 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
## 更新日志
[24/04/26] 我们支持了多模态模型 **LLaVA-1.5** 的微调。详细用法请参照 `examples/lora_single_gpu/sft_mllm.sh`
[24/06/16] 我们支持了 **[PiSSA](https://arxiv.org/abs/2404.02948)** 算法。详细用法请参照 [examples](examples/README_zh.md)
[24/04/22] 我们提供了在免费 T4 GPU 上微调 Llama-3 模型的 **[Colab 笔记本](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)**。Hugging Face 社区公开了两个利用 LLaMA Factory 微调的 Llama-3 模型,详情请见 [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) 和 [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese)
[24/06/07] 我们支持了 **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** 和 **[GLM-4](https://github.com/THUDM/GLM-4)** 模型的微调
[24/04/21] 我们基于 [AstraMindAI 的仓库](https://github.com/astramind-ai/Mixture-of-depths)支持了 **[混合深度训练](https://arxiv.org/abs/2404.02258)**。详细用法请参照 `examples/extras/mod`
[24/04/16] 我们支持了 **[BAdam](https://arxiv.org/abs/2404.02827)**。详细用法请参照 `examples/extras/badam`
[24/04/16] 我们支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的长序列训练24GB 可训练 Llama-2-7B-56k。该方法相比 FlashAttention-2 提供了 **117%** 的训练速度和 **50%** 的显存节约。更多数据请见[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。
[24/05/26] 我们支持了 **[SimPO](https://arxiv.org/abs/2405.14734)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)
<details><summary>展开日志</summary>
[24/03/31] 我们支持了 **[ORPO](https://arxiv.org/abs/2403.07691)**。详细用法请参照 `examples/lora_single_gpu`
[24/05/20] 我们支持了 **PaliGemma** 系列模型的微调。注意 PaliGemma 是预训练模型,你需要使用 `gemma` 模板进行微调使其获得对话能力
[24/05/18] 我们支持了 **[KTO](https://arxiv.org/abs/2402.01306)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。
[24/05/14] 我们支持了昇腾 NPU 设备的训练和推理。详情请查阅[安装](#安装-llama-factory)部分。
[24/04/26] 我们支持了多模态模型 **LLaVA-1.5** 的微调。详细用法请参照 [examples](examples/README_zh.md)。
[24/04/22] 我们提供了在免费 T4 GPU 上微调 Llama-3 模型的 **[Colab 笔记本](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)**。Hugging Face 社区公开了两个利用 LLaMA Factory 微调的 Llama-3 模型,详情请见 [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) 和 [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese)。
[24/04/21] 我们基于 [AstraMindAI 的仓库](https://github.com/astramind-ai/Mixture-of-depths)支持了 **[混合深度训练](https://arxiv.org/abs/2404.02258)**。详细用法请参照 [examples](examples/README_zh.md)。
[24/04/16] 我们支持了 **[BAdam](https://arxiv.org/abs/2404.02827)**。详细用法请参照 [examples](examples/README_zh.md)。
[24/04/16] 我们支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的长序列训练24GB 可训练 Llama-2-7B-56k。该方法相比 FlashAttention-2 提供了 **117%** 的训练速度和 **50%** 的显存节约。更多数据请见[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。
[24/03/31] 我们支持了 **[ORPO](https://arxiv.org/abs/2403.07691)**。详细用法请参照 [examples](examples/README_zh.md)。
[24/03/21] 我们的论文 "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" 可在 arXiv 上查看!
[24/03/20] 我们支持了能在 2x24GB GPU 上微调 70B 模型的 **FSDP+QLoRA**。详细用法请参照 `examples/extras/fsdp_qlora`
[24/03/20] 我们支持了能在 2x24GB GPU 上微调 70B 模型的 **FSDP+QLoRA**。详细用法请参照 [examples](examples/README_zh.md)
[24/03/13] 我们支持了 **[LoRA+](https://arxiv.org/abs/2402.12354)**。详细用法请参照 `examples/extras/loraplus`
[24/03/13] 我们支持了 **[LoRA+](https://arxiv.org/abs/2402.12354)**。详细用法请参照 [examples](examples/README_zh.md)
[24/03/07] 我们支持了梯度低秩投影(**[GaLore](https://arxiv.org/abs/2403.03507)**)算法。详细用法请参照 `examples/extras/galore`
[24/03/07] 我们支持了梯度低秩投影(**[GaLore](https://arxiv.org/abs/2403.03507)**)算法。详细用法请参照 [examples](examples/README_zh.md)
[24/03/07] 我们集成了 **[vLLM](https://github.com/vllm-project/vllm)** 以实现极速并发推理。请使用 `--infer_backend vllm` 来获得 **270%** 的推理速度。(尚不支持 LoRA请先合并权重。
[24/03/07] 我们集成了 **[vLLM](https://github.com/vllm-project/vllm)** 以实现极速并发推理。请使用 `infer_backend: vllm` 来获得 **270%** 的推理速度。
[24/02/28] 我们支持了 **[DoRA](https://arxiv.org/abs/2402.09353)** 微调。请使用 `--use_dora` 参数进行 DoRA 微调。
[24/02/28] 我们支持了 **[DoRA](https://arxiv.org/abs/2402.09353)** 微调。请使用 `use_dora: true` 参数进行 DoRA 微调。
[24/02/15] 我们支持了 [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro) 提出的**块扩展**方法。详细用法请参照 `examples/extras/llama_pro`
[24/02/15] 我们支持了 [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro) 提出的**块扩展**方法。详细用法请参照 [examples](examples/README_zh.md)
[24/02/05] Qwen1.5Qwen2 测试版)系列模型已在 LLaMA-Factory 中实现微调支持。详情请查阅该[博客页面](https://qwenlm.github.io/zh/blog/qwen1.5/)。
[24/01/18] 我们针对绝大多数模型实现了 **Agent 微调**,微调时指定 `--dataset glaive_toolcall` 即可使模型获得工具调用能力。
[24/01/18] 我们针对绝大多数模型实现了 **Agent 微调**,微调时指定 `dataset: glaive_toolcall_zh` 即可使模型获得工具调用能力。
[23/12/23] 我们针对 LLaMA, Mistral 和 Yi 模型支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的 LoRA 训练加速。请使用 `--use_unsloth` 参数启用 unsloth 优化。该方法可提供 **170%** 的训练速度,详情请查阅[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。
[23/12/23] 我们针对 LLaMA, Mistral 和 Yi 模型支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的 LoRA 训练加速。请使用 `use_unsloth: true` 参数启用 unsloth 优化。该方法可提供 **170%** 的训练速度,详情请查阅[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。
[23/12/12] 我们支持了微调最新的混合专家模型 **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)**。硬件需求请查阅[此处](#硬件依赖)。
[23/12/01] 我们支持了从 **[魔搭社区](https://modelscope.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#使用魔搭社区可跳过)。
[23/12/01] 我们支持了从 **[魔搭社区](https://modelscope.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#魔搭社区下载)。
[23/10/21] 我们支持了 **[NEFTune](https://arxiv.org/abs/2310.05914)** 训练技巧。请使用 `--neftune_noise_alpha` 参数启用 NEFTune,例如 `--neftune_noise_alpha 5`
[23/10/21] 我们支持了 **[NEFTune](https://arxiv.org/abs/2310.05914)** 训练技巧。请使用 `neftune_noise_alpha: 5` 参数启用 NEFTune。
[23/09/27] 我们针对 LLaMA 模型支持了 [LongLoRA](https://github.com/dvlab-research/LongLoRA) 提出的 **$S^2$-Attn**。请使用 `--shift_attn` 参数以启用该功能。
[23/09/27] 我们针对 LLaMA 模型支持了 [LongLoRA](https://github.com/dvlab-research/LongLoRA) 提出的 **$S^2$-Attn**。请使用 `shift_attn: true` 参数以启用该功能。
[23/09/23] 我们在项目中集成了 MMLU、C-Eval 和 CMMLU 评估集。使用方法请参阅[此示例](#模型评估)。
[23/09/23] 我们在项目中集成了 MMLU、C-Eval 和 CMMLU 评估集。详细用法请参照 [examples](examples/README_zh.md)。
[23/09/10] 我们支持了 **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**。如果您使用的是 RTX4090、A100 或 H100 GPU请使用 `--flash_attn fa2` 参数以启用 FlashAttention-2。
[23/09/10] 我们支持了 **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**。如果您使用的是 RTX4090、A100 或 H100 GPU请使用 `flash_attn: fa2` 参数以启用 FlashAttention-2。
[23/08/12] 我们支持了 **RoPE 插值**来扩展 LLaMA 模型的上下文长度。请使用 `--rope_scaling linear` 参数训练模型或使用 `--rope_scaling dynamic` 参数评估模型。
[23/08/12] 我们支持了 **RoPE 插值**来扩展 LLaMA 模型的上下文长度。请使用 `rope_scaling: linear` 参数训练模型或使用 `rope_scaling: dynamic` 参数评估模型。
[23/08/11] 我们支持了指令模型的 **[DPO 训练](https://arxiv.org/abs/2305.18290)**。使用方法请参阅[此示例](#dpo-训练)。
[23/08/11] 我们支持了指令模型的 **[DPO 训练](https://arxiv.org/abs/2305.18290)**。详细用法请参照 [examples](examples/README_zh.md)。
[23/07/31] 我们支持了**数据流式加载**。请使用 `--streaming``--max_steps 10000` 参数来流式加载数据集。
[23/07/31] 我们支持了**数据流式加载**。请使用 `streaming: true``max_steps: 10000` 参数来流式加载数据集。
[23/07/29] 我们在 Hugging Face 发布了两个 13B 指令微调模型。详细内容请查阅我们的 Hugging Face 项目([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft))。
@@ -130,48 +145,47 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
[23/06/22] 我们对齐了[示例 API](src/api_demo.py) 与 [OpenAI API](https://platform.openai.com/docs/api-reference/chat) 的格式,您可以将微调模型接入**任意基于 ChatGPT 的应用**中。
[23/06/03] 我们实现了 4 比特的 LoRA 训练(也称 **[QLoRA](https://github.com/artidoro/qlora)**)。请使用 `--quantization_bit 4` 参数进行 4 比特量化微调
[23/06/03] 我们实现了 4 比特的 LoRA 训练(也称 **[QLoRA](https://github.com/artidoro/qlora)**)。详细用法请参照 [examples](examples/README_zh.md)
</details>
## 模型
| 模型名 | 模型大小 | 默认模块 | Template |
| -------------------------------------------------------- | -------------------------------- | ----------------- | --------- |
| [Baichuan2](https://huggingface.co/baichuan-inc) | 7B/13B | W_pack | baichuan2 |
| [BLOOM](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [ChatGLM3](https://huggingface.co/THUDM) | 6B | query_key_value | chatglm3 |
| [Command-R](https://huggingface.co/CohereForAI) | 35B/104B | q_proj,v_proj | cohere |
| [DeepSeek (MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B | q_proj,v_proj | deepseek |
| [Falcon](https://huggingface.co/tiiuae) | 7B/40B/180B | query_key_value | falcon |
| [Gemma/CodeGemma](https://huggingface.co/google) | 2B/7B | q_proj,v_proj | gemma |
| [InternLM2](https://huggingface.co/internlm) | 7B/20B | wqkv | intern2 |
| [LLaMA](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | q_proj,v_proj | - |
| [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
| [LLaMA-3](https://huggingface.co/meta-llama) | 8B/70B | q_proj,v_proj | llama3 |
| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | q_proj,v_proj | vicuna |
| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | q_proj,v_proj | mistral |
| [OLMo](https://huggingface.co/allenai) | 1B/7B | q_proj,v_proj | - |
| [Phi-1.5/2](https://huggingface.co/microsoft) | 1.3B/2.7B | q_proj,v_proj | - |
| [Phi-3](https://huggingface.co/microsoft) | 3.8B | qkv_proj | phi |
| [Qwen](https://huggingface.co/Qwen) | 1.8B/7B/14B/72B | c_attn | qwen |
| [Qwen1.5 (Code/MoE)](https://huggingface.co/Qwen) | 0.5B/1.8B/4B/7B/14B/32B/72B/110B | q_proj,v_proj | qwen |
| [StarCoder2](https://huggingface.co/bigcode) | 3B/7B/15B | q_proj,v_proj | - |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | q_proj,v_proj | xverse |
| [Yi](https://huggingface.co/01-ai) | 6B/9B/34B | q_proj,v_proj | yi |
| [Yuan](https://huggingface.co/IEITYuan) | 2B/51B/102B | q_proj,v_proj | yuan |
| 模型名 | 模型大小 | Template |
| ------------------------------------------------------------ | -------------------------------- | --------- |
| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma |
| [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 |
| [InternLM2](https://huggingface.co/internlm) | 7B/20B | intern2 |
| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
| [Llama 3](https://huggingface.co/meta-llama) | 8B/70B | llama3 |
| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | vicuna |
| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
| [PaliGemma](https://huggingface.co/google) | 3B | gemma |
| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
| [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi |
| [Qwen/Qwen1.5/Qwen2 (Code/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/4B/7B/14B/32B/72B/110B | qwen |
| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
| [Yi/Yi-1.5](https://huggingface.co/01-ai) | 6B/9B/34B | yi |
| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |
> [!NOTE]
> **默认模块**应作为 `--lora_target` 参数的默认值,可使用 `--lora_target all` 参数指定全部模块以得到更好的效果
> 对于所有“基座”Base模型`template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”Instruct/Chat模型请务必使用**对应的模板**
>
> 对于所有“基座”Base模型`--template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”Instruct/Chat模型请务必使用**对应的模板**
>
> 请务必在训练和推理时使用**完全一致**的模板。
> 请务必在训练和推理时采用**完全一致**的模板。
项目所支持模型的完整列表请参阅 [constants.py](src/llmtuner/extras/constants.py)。
项目所支持模型的完整列表请参阅 [constants.py](src/llamafactory/extras/constants.py)。
您也可以在 [template.py](src/llmtuner/data/template.py) 中添加自己的对话模板。
您也可以在 [template.py](src/llamafactory/data/template.py) 中添加自己的对话模板。
## 训练方法
@@ -182,7 +196,9 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
| 奖励模型训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| PPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| DPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| KTO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| ORPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| SimPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
## 数据集
@@ -195,6 +211,8 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)
- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)
- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)
- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
@@ -202,12 +220,12 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
<details><summary>指令微调数据集</summary>
- [Identity (en&zh)](data/identity.json)
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Self Cognition (zh)](data/self_cognition.json)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [ShareGPT (zh)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection)
- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
@@ -216,7 +234,6 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
- [UltraChat (en)](https://github.com/thunlp/UltraChat)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
@@ -229,15 +246,19 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
- [Ad Gen (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
- [Glaive Function Calling V2 (en)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
@@ -253,13 +274,13 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
<details><summary>偏好数据集</summary>
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Orca DPO (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)
</details>
@@ -274,20 +295,21 @@ huggingface-cli login
| 必需项 | 至少 | 推荐 |
| ------------ | ------- | --------- |
| python | 3.8 | 3.10 |
| torch | 1.13.1 | 2.2.0 |
| transformers | 4.37.2 | 4.39.3 |
| datasets | 2.14.3 | 2.18.0 |
| accelerate | 0.27.2 | 0.28.0 |
| peft | 0.9.0 | 0.10.0 |
| trl | 0.8.1 | 0.8.1 |
| python | 3.8 | 3.11 |
| torch | 1.13.1 | 2.3.0 |
| transformers | 4.41.2 | 4.41.2 |
| datasets | 2.16.0 | 2.19.2 |
| accelerate | 0.30.1 | 0.30.1 |
| peft | 0.11.1 | 0.11.1 |
| trl | 0.8.6 | 0.9.4 |
| 可选项 | 至少 | 推荐 |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.14.0 |
| bitsandbytes | 0.39.0 | 0.43.0 |
| flash-attn | 2.3.0 | 2.5.6 |
| bitsandbytes | 0.39.0 | 0.43.1 |
| vllm | 0.4.3 | 0.4.3 |
| flash-attn | 2.3.0 | 2.5.9 |
### 硬件依赖
@@ -305,24 +327,21 @@ huggingface-cli login
## 如何使用
### 数据准备
### 安装 LLaMA Factory
关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope 上的数据集或加载本地数据集。
> [!NOTE]
> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。
### 安装依赖
> [!IMPORTANT]
> 此步骤为必需。
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
conda create -n llama_factory python=3.10
conda activate llama_factory
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .[metrics]
pip install -e ".[torch,metrics]"
```
可选的额外依赖项:deepspeed、metrics、galore、badam、vllm、bitsandbytes、gptq、awq、aqlm、qwen、modelscope、quality
可选的额外依赖项:torch、torch-npu、metrics、deepspeed、bitsandbytes、hqq、eetq、gptq、awq、aqlm、vllm、galore、badam、qwen、modelscope、quality
> [!TIP]
> 遇到包冲突时,可使用 `pip install --no-deps -e .` 解决。
<details><summary>Windows 用户指南</summary>
@@ -336,50 +355,146 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl
</details>
### 利用 LLaMA Board 可视化界面训练(由 [Gradio](https://github.com/gradio-app/gradio) 驱动)
<details><summary>昇腾 NPU 用户指南</summary>
> [!IMPORTANT]
> LLaMA Board 可视化界面目前仅支持单 GPU 训练,请使用[命令行接口](#命令行接口)来进行多 GPU 分布式训练。
#### 使用本地环境
在昇腾 NPU 设备上安装 LLaMA Factory 时,需要指定额外依赖项,使用 `pip install -e ".[torch-npu,metrics]"` 命令安装。此外,还需要安装 **[Ascend CANN Toolkit 与 Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**,安装方法请参考[安装教程](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html)或使用以下命令:
```bash
export CUDA_VISIBLE_DEVICES=0 # Windows 使用 `set CUDA_VISIBLE_DEVICES=0`
export GRADIO_SERVER_PORT=7860 # Windows 使用 `set GRADIO_SERVER_PORT=7860`
python src/train_web.py # 或 python -m llmtuner.webui.interface
# 请替换 URL 为 CANN 版本和设备型号对应的 URL
# 安装 CANN Toolkit
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run
bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install
# 安装 CANN Kernels
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run
bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install
# 设置环境变量
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```
<details><summary>阿里云用户指南</summary>
| 依赖项 | 至少 | 推荐 |
| ------------ | ------- | ----------- |
| CANN | 8.0.RC1 | 8.0.RC1 |
| torch | 2.1.0 | 2.1.0 |
| torch-npu | 2.1.0 | 2.1.0.post3 |
| deepspeed | 0.13.2 | 0.13.2 |
如果您在阿里云上使用 LLaMA Board 时遇到显示问题,请尝试在启动前使用以下命令设置环境变量:
请使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定运算设备。
```bash
export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
```
如果遇到无法正常推理的情况,请尝试设置 `do_sample: false`
下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
</details>
#### 使用 Docker
### 数据准备
关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope 上的数据集或加载本地数据集。
> [!NOTE]
> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。
### 快速开始
下面三行命令分别对 Llama3-8B-Instruct 模型进行 LoRA **微调**、**推理**和**合并**。
```bash
docker build -f ./Dockerfile -t llama-factory:latest .
docker run --gpus=all \
-v ./hf_cache:/root/.cache/huggingface/ \
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
高级用法请参考 [examples/README_zh.md](examples/README_zh.md)(包括多 GPU 微调)。
> [!TIP]
> 使用 `llamafactory-cli help` 显示帮助信息。
### LLaMA Board 可视化微调(由 [Gradio](https://github.com/gradio-app/gradio) 驱动)
```bash
llamafactory-cli webui
```
### 构建 Docker
CUDA 用户:
```bash
cd docker/docker-cuda/
docker-compose up -d
docker-compose exec llamafactory bash
```
昇腾 NPU 用户:
```bash
cd docker/docker-npu/
docker-compose up -d
docker-compose exec llamafactory bash
```
<details><summary>不使用 Docker Compose 构建</summary>
CUDA 用户:
```bash
docker build -f ./docker/docker-cuda/Dockerfile \
--build-arg INSTALL_BNB=false \
--build-arg INSTALL_VLLM=false \
--build-arg INSTALL_DEEPSPEED=false \
--build-arg INSTALL_FLASHATTN=false \
--build-arg PIP_INDEX=https://pypi.org/simple \
-t llamafactory:latest .
docker run -dit --gpus=all \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
-e CUDA_VISIBLE_DEVICES=0 \
-p 7860:7860 \
-p 8000:8000 \
--shm-size 16G \
--name llama_factory \
-d llama-factory:latest
--name llamafactory \
llamafactory:latest
docker exec -it llamafactory bash
```
#### 使用 Docker Compose
昇腾 NPU 用户:
```bash
docker compose -f ./docker-compose.yml up -d
# 根据您的环境选择镜像
docker build -f ./docker/docker-npu/Dockerfile \
--build-arg INSTALL_DEEPSPEED=false \
--build-arg PIP_INDEX=https://pypi.org/simple \
-t llamafactory:latest .
# 根据您的资源更改 `device`
docker run -dit \
-v ./hf_cache:/root/.cache/huggingface \
-v ./ms_cache:/root/.cache/modelscope \
-v ./data:/app/data \
-v ./output:/app/output \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-p 7860:7860 \
-p 8000:8000 \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
--shm-size 16G \
--name llamafactory \
llamafactory:latest
docker exec -it llamafactory bash
```
</details>
<details><summary>数据卷详情</summary>
- hf_cache:使用宿主机的 Hugging Face 缓存文件夹,允许更改为新的目录。
@@ -388,22 +503,15 @@ docker compose -f ./docker-compose.yml up -d
</details>
### 利用命令行接口训练
使用方法请参考 [examples/README_zh.md](examples/README_zh.md)。
### 利用 vLLM 部署 OpenAI API
```bash
API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
```
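服务启动后,可以通过 OpenAI 风格的接口进行调用,例如(端口、模型名等字段仅为示例,实际以部署配置为准):
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "你好"}]
  }'
```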
> [!TIP]
> API 文档请查阅 https://platform.openai.com/docs/api-reference/chat/create。
### 从魔搭社区下载
如果您在 Hugging Face 模型和数据集的下载中遇到了问题,可以通过下述方法使用魔搭社区。
@@ -412,7 +520,18 @@ CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 python src/api_demo.py \
export USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1`
```
将 `model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔搭社区](https://modelscope.cn/models)查看所有可用的模型,例如 `LLM-Research/Meta-Llama-3-8B-Instruct`。
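例如,在 yaml 配置中可以这样指定(模型 ID 仅为示例):
```yaml
# 需先设置 USE_MODELSCOPE_HUB=1
model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct
```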
### 使用 W&B 面板
若要使用 [Weights & Biases](https://wandb.ai) 记录实验数据,请在 yaml 文件中添加下面的参数。
```yaml
report_to: wandb
run_name: test_run # 可选
```
在启动训练任务时,将 `WANDB_API_KEY` 设置为[密钥](https://wandb.ai/authorize)来登录 W&B 账户。
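例如(密钥仅为占位示例):
```bash
export WANDB_API_KEY=your_api_key_here  # 请替换为您的 W&B 密钥
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```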
## 使用了 LLaMA Factory 的项目
@@ -425,35 +544,73 @@ export USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1`
1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)
1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)
1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)
1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)
1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)
1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)
1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)
1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801)
1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)
1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)
1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)
1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)
1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)
1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)
1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)
1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378)
1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)
1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)
1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)
1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)
1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)
1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)
1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)
1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)
1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)
1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)
1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)
1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)
1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115)
1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)
1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)
1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)
1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)
1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)
1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)
1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)
1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)
1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)
1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)
1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: 天文大模型 StarWhisper,基于 ChatGLM2-6B 和 Qwen-14B 在天文数据上微调而得。
1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: 中文法律领域大模型 DISC-LawLLM,基于 Baichuan-13B 微调而得,具有法律推理和知识检索能力。
1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: 孙思邈中文医疗大模型 Sunsimiao,基于 Baichuan-7B 和 ChatGLM-6B 在中文医疗数据上微调而得。
1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: 医疗大模型项目 CareGPT,基于 LLaMA2-7B 和 Baichuan-13B 在中文医疗数据上微调而得。
1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**:MBTI 性格大模型项目,根据数据集与训练方式让任意 LLM 拥有 16 个不同的性格类型。
1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**:一个用于生成 Stable Diffusion 提示词的大型语言模型。[[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**:中文多模态医学大模型,基于 LLaVA-1.5-7B 在中文多模态医疗数据上微调而得。
1. **[AutoRE](https://github.com/THUDM/AutoRE)**:基于大语言模型的文档级关系抽取系统。
1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**:在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**:一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调。
</details>
@@ -461,17 +618,19 @@ export USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1`
本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源。
使用模型权重时,请遵循对应的模型协议:[Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
## 引用
如果您觉得此项目有帮助,请考虑以下列格式引用
```bibtex
@inproceedings{zheng2024llamafactory,
title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
address={Bangkok, Thailand},
publisher={Association for Computational Linguistics},
year={2024},
url={http://arxiv.org/abs/2403.13372}
}
```

View File

@@ -1,16 +1,19 @@
The [dataset_info.json](dataset_info.json) contains all available datasets. If you are using a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and specify `dataset: dataset_name` before training to use it.
Currently we support datasets in **alpaca** and **sharegpt** format.
```json
"dataset_name": {
"hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url and file_name)",
"ms_hub_url": "the name of the dataset repository on the ModelScope hub. (if specified, ignore script_url and file_name)",
"ms_hub_url": "the name of the dataset repository on the Model Scope hub. (if specified, ignore script_url and file_name)",
"script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name)",
"file_name": "the name of the dataset file in this directory. (required if above are not specified)",
"file_sha1": "the SHA-1 hash value of the dataset file. (optional, does not affect training)",
"subset": "the name of the subset. (optional, default: None)",
"folder": "the name of the folder of the dataset repository on the Hugging Face hub. (optional, default: None)",
"ranking": "whether the dataset is a preference dataset or not. (default: false)",
"file_name": "the name of the dataset folder or dataset file in this directory. (required if above are not specified)",
"formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})",
"ranking": "whether the dataset is a preference dataset or not. (default: False)",
"subset": "the name of the subset. (optional, default: None)",
"split": "the name of dataset split to be used. (optional, default: train)",
"folder": "the name of the folder of the dataset repository on the Hugging Face hub. (optional, default: None)",
"num_samples": "the number of samples in the dataset to be used. (optional, default: None)",
"columns (optional)": {
"prompt": "the column name in the dataset containing the prompts. (default: instruction)",
"query": "the column name in the dataset containing the queries. (default: input)",
@@ -19,7 +22,10 @@ If you are using a custom dataset, please provide your dataset definition in the
"messages": "the column name in the dataset containing the messages. (default: conversations)",
"system": "the column name in the dataset containing the system prompts. (default: None)",
"tools": "the column name in the dataset containing the tool description. (default: None)",
"images": "the column name in the dataset containing the image inputs. (default: None)"
"images": "the column name in the dataset containing the image inputs. (default: None)",
"chosen": "the column name in the dataset containing the chosen answers. (default: None)",
"rejected": "the column name in the dataset containing the rejected answers. (default: None)",
"kto_tag": "the column name in the dataset containing the kto tags. (default: None)"
},
"tags (optional, used for the sharegpt format)": {
"role_tag": "the key in the message represents the identity. (default: from)",
@@ -33,31 +39,38 @@ If you are using a custom dataset, please provide your dataset definition in the
}
```
## Alpaca Format
----
### Supervised Fine-Tuning Dataset
* [Example dataset](alpaca_en_demo.json)
In supervised fine-tuning, the `instruction` column will be concatenated with the `input` column and used as the human prompt, i.e., the human prompt will be `instruction\ninput`. The `output` column represents the model response.
The `system` column will be used as the system prompt if specified.
The `history` column is a list consisting of string tuples representing prompt-response pairs in the history messages. Note that the responses in the history **will also be learned by the model** in supervised fine-tuning.
```json
[
{
"instruction": "user instruction (required)",
"input": "user input (optional)",
"instruction": "human instruction (required)",
"input": "human input (optional)",
"output": "model response (required)",
"system": "system prompt (optional)",
"history": [
["user instruction in the first round (optional)", "model response in the first round (optional)"],
["user instruction in the second round (optional)", "model response in the second round (optional)"]
["human instruction in the first round (optional)", "model response in the first round (optional)"],
["human instruction in the second round (optional)", "model response in the second round (optional)"]
]
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"columns": {
"prompt": "instruction",
"query": "input",
@@ -68,30 +81,135 @@ Regarding the above dataset, the `columns` in `dataset_info.json` should be:
}
```
### Pre-training Dataset
- [Example dataset](c4_demo.json)
In pre-training, only the `text` column will be used for model learning.
```json
[
{"text": "document"},
{"text": "document"}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"columns": {
"prompt": "text"
}
}
```
### Preference Dataset
----
Preference datasets are used for reward modeling, DPO training and ORPO training.
It requires a better response in `chosen` column and a worse response in `rejected` column.
```json
[
{
"instruction": "human instruction (required)",
"input": "human input (optional)",
"chosen": "chosen answer (required)",
"rejected": "rejected answer (required)"
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"ranking": true,
"columns": {
"prompt": "instruction",
"query": "input",
"chosen": "chosen",
"rejected": "rejected"
}
}
```
### KTO Dataset
- [Example dataset](kto_en_demo.json)
KTO datasets require an extra `kto_tag` column containing the boolean human feedback.
```json
[
{
"instruction": "human instruction (required)",
"input": "human input (optional)",
"output": "model response (required)",
"kto_tag": "human feedback [true/false] (required)"
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"columns": {
"prompt": "instruction",
"query": "input",
"response": "output",
"kto_tag": "kto_tag"
}
}
```
### Multimodal Dataset
- [Example dataset](mllm_demo.json)
Multimodal datasets require an `images` column containing the paths to the input images. Currently we only support one image.
```json
[
{
"instruction": "human instruction (required)",
"input": "human input (optional)",
"output": "model response (required)",
"images": [
"image path (required)"
]
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"columns": {
"prompt": "instruction",
"query": "input",
"response": "output",
"images": "images"
}
}
```
## Sharegpt Format
### Supervised Fine-Tuning Dataset
- [Example dataset](glaive_toolcall_en_demo.json)
Compared to the alpaca format, the sharegpt format allows the datasets to have **more roles**, such as human, gpt, observation and function. They are presented in a list of objects in the `conversations` column.
Note that the human and observation should appear in odd positions, while gpt and function should appear in even positions.
```json
[
@@ -99,7 +217,15 @@ The dataset in sharegpt format should follow the below format:
"conversations": [
{
"from": "human",
"value": "user instruction"
"value": "human instruction"
},
{
"from": "function_call",
"value": "tool arguments"
},
{
"from": "observation",
"value": "tool result"
},
{
"from": "gpt",
@@ -112,24 +238,114 @@ The dataset in sharegpt format should follow the below format:
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"formatting": "sharegpt",
"columns": {
"messages": "conversations",
"system": "system",
"tools": "tools"
},
"tags": {
"role_tag": "from",
"content_tag": "value",
"user_tag": "human",
"assistant_tag": "gpt"
}
}
```
where the `messages` column should be a list following the `u/a/u/a/u/a` order.
### Preference Dataset
- [Example dataset](dpo_en_demo.json)
Preference datasets in sharegpt format also require a better message in `chosen` column and a worse message in `rejected` column.
```json
[
{
"conversations": [
{
"from": "human",
"value": "human instruction"
},
{
"from": "gpt",
"value": "model response"
},
{
"from": "human",
"value": "human instruction"
}
],
"chosen": {
"from": "gpt",
"value": "chosen answer (required)"
},
"rejected": {
"from": "gpt",
"value": "rejected answer (required)"
}
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"formatting": "sharegpt",
"ranking": true,
"columns": {
"messages": "conversations",
"chosen": "chosen",
"rejected": "rejected"
}
}
```
### OpenAI Format
The openai format is simply a special case of the sharegpt format, where the first message may be a system prompt.
```json
[
{
"messages": [
{
"role": "system",
"content": "system prompt (optional)"
},
{
"role": "user",
"content": "human instruction"
},
{
"role": "assistant",
"content": "model response"
}
]
}
]
```
Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
```json
"dataset_name": {
"file_name": "data.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
The KTO datasets and multimodal datasets in sharegpt format are similar to the alpaca format.
Pre-training datasets are **incompatible** with the sharegpt format.

View File

@@ -1,16 +1,19 @@
[dataset_info.json](dataset_info.json) 包含了所有可用的数据集。如果您希望使用自定义数据集,请**务必**在 `dataset_info.json` 文件中添加*数据集描述*,并通过修改 `dataset: 数据集名称` 配置来使用数据集。
目前我们支持 **alpaca** 格式和 **sharegpt** 格式的数据集。
```json
"数据集名称": {
"hf_hub_url": "Hugging Face 的数据集仓库地址(若指定,则忽略 script_url 和 file_name",
"ms_hub_url": "ModelScope 的数据集仓库地址(若指定,则忽略 script_url 和 file_name",
"script_url": "包含数据加载脚本的本地文件夹名称(若指定,则忽略 file_name",
"file_name": "该目录下数据集文件的名称(若上述参数未指定,则此项必需)",
"file_sha1": "数据集文件的 SHA-1 哈希值(可选,留空不影响训练)",
"subset": "数据集子集的名称可选默认None",
"folder": "Hugging Face 仓库的文件夹名称可选默认None",
"ranking": "是否为偏好数据集可选默认False",
"file_name": "该目录下数据集文件夹或文件的名称(若上述参数未指定,则此项必需)",
"formatting": "数据集格式可选默认alpaca可以为 alpaca 或 sharegpt",
"ranking": "是否为偏好数据集可选默认False",
"subset": "数据集子集的名称可选默认None",
"split": "所使用的数据集切分可选默认train",
"folder": "Hugging Face 仓库的文件夹名称可选默认None",
"num_samples": "该数据集所使用的样本数量。可选默认None",
"columns可选": {
"prompt": "数据集代表提示词的表头名称默认instruction",
"query": "数据集代表请求的表头名称默认input",
@@ -19,7 +22,10 @@
"messages": "数据集代表消息列表的表头名称默认conversations",
"system": "数据集代表系统提示的表头名称默认None",
"tools": "数据集代表工具描述的表头名称默认None",
"images": "数据集代表图像输入的表头名称默认None"
"images": "数据集代表图像输入的表头名称默认None",
"chosen": "数据集代表更优回答的表头名称默认None",
"rejected": "数据集代表更差回答的表头名称默认None",
"kto_tag": "数据集代表 KTO 标签的表头名称默认None"
},
"tags可选用于 sharegpt 格式)": {
"role_tag": "消息中代表发送者身份的键名默认from",
@@ -28,22 +34,28 @@
"assistant_tag": "消息中代表助手的 role_tag默认gpt",
"observation_tag": "消息中代表工具返回结果的 role_tag默认observation",
"function_tag": "消息中代表工具调用的 role_tag默认function_call",
"system_tag": "消息中代表系统提示的 role_tag默认system会覆盖 system "
"system_tag": "消息中代表系统提示的 role_tag默认system会覆盖 system column"
}
}
```
## Alpaca 格式
----
### 指令监督微调数据集
- [样例数据集](alpaca_zh_demo.json)
在指令监督微调时,`instruction` 列对应的内容会与 `input` 列对应的内容拼接后作为人类指令,即人类指令为 `instruction\ninput`。而 `output` 列对应的内容为模型回答。
如果指定,`system` 列对应的内容将被作为系统提示词。
`history` 列是由多个字符串二元组构成的列表,分别代表历史消息中每轮对话的指令和回答。注意在指令监督微调时,历史消息中的回答内容**也会被用于模型学习**。
```json
[
{
"instruction": "用户指令(必填)",
"input": "用户输入(选填)",
"instruction": "人类指令(必填)",
"input": "人类输入(选填)",
"output": "模型回答(必填)",
"system": "系统提示词(选填)",
"history": [
@@ -54,10 +66,11 @@
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"columns": {
"prompt": "instruction",
"query": "input",
@@ -68,30 +81,135 @@
}
```
### 预训练数据集
- [样例数据集](c4_demo.json)
预训练时,只有 `text` 列中的内容会用于模型学习。
```json
[
{"text": "document"},
{"text": "document"}
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"columns": {
"prompt": "text"
}
}
```
### 偏好数据集
----
偏好数据集用于奖励模型训练、DPO 训练和 ORPO 训练。
它需要在 `chosen` 列中提供更优的回答,并在 `rejected` 列中提供更差的回答。
```json
[
{
"instruction": "人类指令(必填)",
"input": "人类输入(选填)",
"chosen": "优质回答(必填)",
"rejected": "劣质回答(必填)"
}
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"ranking": true,
"columns": {
"prompt": "instruction",
"query": "input",
"chosen": "chosen",
"rejected": "rejected"
}
}
```
### KTO 数据集
- [样例数据集](kto_en_demo.json)
KTO 数据集需要额外添加一个 `kto_tag` 列,包含 bool 类型的人类反馈。
```json
[
{
"instruction": "人类指令(必填)",
"input": "人类输入(选填)",
"output": "模型回答(必填)",
"kto_tag": "人类反馈 [true/false](必填)"
}
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"columns": {
"prompt": "instruction",
"query": "input",
"response": "output",
"kto_tag": "kto_tag"
}
}
```
### 多模态数据集
- [样例数据集](mllm_demo.json)
多模态数据集需要额外添加一个 `images` 列,包含输入图像的路径。目前我们仅支持单张图像输入。
```json
[
{
"instruction": "人类指令(必填)",
"input": "人类输入(选填)",
"output": "模型回答(必填)",
"images": [
"图像路径(必填)"
]
}
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"columns": {
"prompt": "instruction",
"query": "input",
"response": "output",
"images": "images"
}
}
```
## Sharegpt 格式
### 指令监督微调数据集
- [样例数据集](glaive_toolcall_zh_demo.json)
相比 alpaca 格式的数据集,sharegpt 格式支持**更多的角色种类**,例如 human、gpt、observation、function 等等。它们构成一个对象列表,呈现在 `conversations` 列中。
注意其中 human 和 observation 必须出现在奇数位置,gpt 和 function 必须出现在偶数位置。
```json
[
@@ -99,7 +217,15 @@
"conversations": [
{
"from": "human",
"value": "用户指令"
"value": "人类指令"
},
{
"from": "function_call",
"value": "工具参数"
},
{
"from": "observation",
"value": "工具结果"
},
{
"from": "gpt",
@@ -112,24 +238,114 @@
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"formatting": "sharegpt",
"columns": {
"messages": "conversations",
"system": "system",
"tools": "tools"
},
"tags": {
"role_tag": "from",
"content_tag": "value",
"user_tag": "human",
"assistant_tag": "gpt"
}
}
```
其中 `messages` 列应当是一个列表,且符合 `用户/模型/用户/模型/用户/模型` 的顺序。
### 偏好数据集
- [样例数据集](dpo_zh_demo.json)
Sharegpt 格式的偏好数据集同样需要在 `chosen` 列中提供更优的消息,并在 `rejected` 列中提供更差的消息。
```json
[
{
"conversations": [
{
"from": "human",
"value": "人类指令"
},
{
"from": "gpt",
"value": "模型回答"
},
{
"from": "human",
"value": "人类指令"
}
],
"chosen": {
"from": "gpt",
"value": "优质回答"
},
"rejected": {
"from": "gpt",
"value": "劣质回答"
}
}
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"formatting": "sharegpt",
"ranking": true,
"columns": {
"messages": "conversations",
"chosen": "chosen",
"rejected": "rejected"
}
}
```
### OpenAI 格式
OpenAI 格式仅仅是 sharegpt 格式的一种特殊情况,其中第一条消息可能是系统提示词。
```json
[
{
"messages": [
{
"role": "system",
"content": "系统提示词(选填)"
},
{
"role": "user",
"content": "人类指令"
},
{
"role": "assistant",
"content": "模型回答"
}
]
}
]
```
对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为:
```json
"数据集名称": {
"file_name": "data.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
Sharegpt 格式中的 KTO 数据集和多模态数据集与 alpaca 格式的类似。
预训练数据集**不支持** sharegpt 格式。

View File

@@ -1 +0,0 @@
3779ddbc040543ab1834ef216c983d6fcc06cc9a

View File

@@ -1 +0,0 @@
a97cf9475291591843976554878568e046d8a46d

View File

@@ -1 +0,0 @@
25508714b7879a1e5a6764ba7f979a980f549f1a

View File

@@ -1 +0,0 @@
7cb6a7d11455bddc3d495750a2392683d775b184

View File

@@ -1 +0,0 @@
f5cb08305ff5dc9c17a09809c54c8c8834aadc70

View File

@@ -1 +0,0 @@
aee47b7b443496e37808d7f34ef10403ff99bcc3

View File

@@ -1,37 +0,0 @@
import json
from typing import Any, Dict, Generator, List, Tuple

import datasets


_DESCRIPTION = "An example of dataset."
_CITATION = ""
_HOMEPAGE = ""
_LICENSE = ""
_URL = "examples.json"


class ExampleDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("0.0.0")

    def _info(self) -> datasets.DatasetInfo:
        features = datasets.Features(
            {
                "instruction": datasets.Value("string"),
                "input": datasets.Value("string"),
                "output": datasets.Value("string"),
                "history": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, citation=_CITATION
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        file_path = dl_manager.download(_URL)
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": file_path})]

    def _generate_examples(self, filepath: str) -> Generator[Tuple[int, Dict[str, Any]], None, None]:
        example_dataset = json.load(open(filepath, "r", encoding="utf-8"))
        for key, example in enumerate(example_dataset):
            yield key, example

View File

@@ -1 +0,0 @@
4748dff00d1dc42768a5b6cc772143c313017812

View File

@@ -34,7 +34,8 @@ class HhRlhfEn(datasets.GeneratorBasedBuilder):
features = datasets.Features(
{
"instruction": datasets.Value("string"),
"output": datasets.Sequence(datasets.Value("string")),
"chosen": datasets.Value("string"),
"rejected": datasets.Value("string"),
"history": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
}
)
@@ -79,5 +80,5 @@ class HhRlhfEn(datasets.GeneratorBasedBuilder):
break
prompt = prompt[:human_idx]
yield key, {"instruction": query, "output": [r_accept, r_reject], "history": history}
yield key, {"instruction": query, "chosen": r_accept, "rejected": r_reject, "history": history}
key += 1

View File

@@ -1 +0,0 @@
274079ea921762be356de85b18f13fa60b7ba8cb

View File

@@ -1 +0,0 @@
57fd080be5bffe4153fe3ee26a175e3d56da30f3

View File

@@ -1 +0,0 @@
736bcedea2b24a1414765c6d69cbdafaea839f3c

30
data/wiki_demo.txt Normal file

File diff suppressed because one or more lines are too long

View File

@@ -1 +0,0 @@
c9cf509b7fdac5490cfd6dae72c2d7b8a60af6cb

View File

@@ -1,25 +0,0 @@
version: '3.8'

services:
  llama-factory:
    build:
      dockerfile: Dockerfile
      context: .
    container_name: llama_factory
    volumes:
      - ./hf_cache:/root/.cache/huggingface/
      - ./data:/app/data
      - ./output:/app/output
    environment:
      - CUDA_VISIBLE_DEVICES=0
    ports:
      - "7860:7860"
    ipc: host
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: "all"
            capabilities: [gpu]
    restart: unless-stopped

View File

@@ -0,0 +1,59 @@
# Use the NVIDIA official image with PyTorch 2.3.0
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-24-02.html
FROM nvcr.io/nvidia/pytorch:24.02-py3
# Define environments
ENV MAX_JOBS=4
ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
# Define installation arguments
ARG INSTALL_BNB=false
ARG INSTALL_VLLM=false
ARG INSTALL_DEEPSPEED=false
ARG INSTALL_FLASHATTN=false
ARG PIP_INDEX=https://pypi.org/simple
# Set the working directory
WORKDIR /app
# Install the requirements
COPY requirements.txt /app
RUN pip config set global.index-url "$PIP_INDEX" && \
pip config set global.extra-index-url "$PIP_INDEX" && \
python -m pip install --upgrade pip && \
python -m pip install -r requirements.txt
# Copy the rest of the application into the image
COPY . /app
# Install the LLaMA Factory
RUN EXTRA_PACKAGES="metrics"; \
if [ "$INSTALL_BNB" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},bitsandbytes"; \
fi; \
if [ "$INSTALL_VLLM" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},vllm"; \
fi; \
if [ "$INSTALL_DEEPSPEED" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},deepspeed"; \
fi; \
pip install -e ".[$EXTRA_PACKAGES]"
# Rebuild flash attention
RUN pip uninstall -y transformer-engine flash-attn && \
if [ "$INSTALL_FLASHATTN" == "true" ]; then \
pip uninstall -y ninja && pip install ninja && \
pip install --no-cache-dir flash-attn --no-build-isolation; \
fi
# Set up volumes
VOLUME [ "/root/.cache/huggingface", "/root/.cache/modelscope", "/app/data", "/app/output" ]
# Expose port 7860 for the LLaMA Board
ENV GRADIO_SERVER_PORT 7860
EXPOSE 7860
# Expose port 8000 for the API service
ENV API_PORT 8000
EXPOSE 8000

View File

@@ -0,0 +1,32 @@
services:
  llamafactory:
    build:
      dockerfile: ./docker/docker-cuda/Dockerfile
      context: ../..
      args:
        INSTALL_BNB: false
        INSTALL_VLLM: false
        INSTALL_DEEPSPEED: false
        INSTALL_FLASHATTN: false
        PIP_INDEX: https://pypi.org/simple
    container_name: llamafactory
    volumes:
      - ../../hf_cache:/root/.cache/huggingface
      - ../../ms_cache:/root/.cache/modelscope
      - ../../data:/app/data
      - ../../output:/app/output
    ports:
      - "7860:7860"
      - "8000:8000"
    ipc: host
    tty: true
    stdin_open: true
    command: bash
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            count: "all"
            capabilities: [gpu]
    restart: unless-stopped

View File

@@ -0,0 +1,45 @@
# Use the Ubuntu 22.04 image with CANN 8.0.rc1
# More versions can be found at https://hub.docker.com/r/cosdt/cann/tags
# FROM cosdt/cann:8.0.rc1-910-ubuntu22.04
FROM cosdt/cann:8.0.rc1-910b-ubuntu22.04
# FROM cosdt/cann:8.0.rc1-910-openeuler22.03
# FROM cosdt/cann:8.0.rc1-910b-openeuler22.03
# Define environments
ENV DEBIAN_FRONTEND=noninteractive
# Define installation arguments
ARG INSTALL_DEEPSPEED=false
ARG PIP_INDEX=https://pypi.org/simple
ARG TORCH_INDEX=https://download.pytorch.org/whl/cpu
# Set the working directory
WORKDIR /app
# Install the requirements
COPY requirements.txt /app
RUN pip config set global.index-url "$PIP_INDEX" && \
pip config set global.extra-index-url "$TORCH_INDEX" && \
python -m pip install --upgrade pip && \
python -m pip install -r requirements.txt
# Copy the rest of the application into the image
COPY . /app
# Install the LLaMA Factory
RUN EXTRA_PACKAGES="torch-npu,metrics"; \
if [ "$INSTALL_DEEPSPEED" == "true" ]; then \
EXTRA_PACKAGES="${EXTRA_PACKAGES},deepspeed"; \
fi; \
pip install -e ".[$EXTRA_PACKAGES]"
# Set up volumes
VOLUME [ "/root/.cache/huggingface", "/root/.cache/modelscope", "/app/data", "/app/output" ]
# Expose port 7860 for the LLaMA Board
ENV GRADIO_SERVER_PORT 7860
EXPOSE 7860
# Expose port 8000 for the API service
ENV API_PORT 8000
EXPOSE 8000

View File

@@ -0,0 +1,31 @@
services:
  llamafactory:
    build:
      dockerfile: ./docker/docker-npu/Dockerfile
      context: ../..
      args:
        INSTALL_DEEPSPEED: false
        PIP_INDEX: https://pypi.org/simple
    container_name: llamafactory
    volumes:
      - ../../hf_cache:/root/.cache/huggingface
      - ../../ms_cache:/root/.cache/modelscope
      - ../../data:/app/data
      - ../../output:/app/output
      - /usr/local/dcmi:/usr/local/dcmi
      - /usr/local/bin/npu-smi:/usr/local/bin/npu-smi
      - /usr/local/Ascend/driver:/usr/local/Ascend/driver
      - /etc/ascend_install.info:/etc/ascend_install.info
    ports:
      - "7860:7860"
      - "8000:8000"
    ipc: host
    tty: true
    stdin_open: true
    command: bash
    devices:
      - /dev/davinci0
      - /dev/davinci_manager
      - /dev/devmm_svm
      - /dev/hisi_hdc
    restart: unless-stopped

View File

@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import datasets
@@ -19,7 +20,7 @@ import pandas as pd
_CITATION = """\
@article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},
journal={arXiv preprint arXiv:2305.08322},
year={2023}
@@ -133,25 +134,19 @@ class Ceval(datasets.GeneratorBasedBuilder):
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"filepath": os.path.join(
data_dir, "test", f"{task_name}_test.csv"
),
"filepath": os.path.join(data_dir, "test", f"{task_name}_test.csv"),
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={
"filepath": os.path.join(
data_dir, "val", f"{task_name}_val.csv"
),
"filepath": os.path.join(data_dir, "val", f"{task_name}_val.csv"),
},
),
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"filepath": os.path.join(
data_dir, "dev", f"{task_name}_dev.csv"
),
"filepath": os.path.join(data_dir, "dev", f"{task_name}_dev.csv"),
},
),
]

View File

@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import datasets
@@ -37,73 +38,73 @@ _LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Internatio
_URL = "cmmlu.zip"
task_list = [
"agronomy",
"anatomy",
"ancient_chinese",
"arts",
"astronomy",
"business_ethics",
"chinese_civil_service_exam",
"chinese_driving_rule",
"chinese_food_culture",
"chinese_foreign_policy",
"chinese_history",
"chinese_literature",
"chinese_teacher_qualification",
"clinical_knowledge",
"college_actuarial_science",
"college_education",
"college_engineering_hydrology",
"college_law",
"college_mathematics",
"college_medical_statistics",
"college_medicine",
"computer_science",
"computer_security",
"conceptual_physics",
"construction_project_management",
"economics",
"education",
"electrical_engineering",
"elementary_chinese",
"elementary_commonsense",
"elementary_information_and_technology",
"elementary_mathematics",
"ethnology",
"food_science",
"genetics",
"global_facts",
"high_school_biology",
"high_school_chemistry",
"high_school_geography",
"high_school_mathematics",
"high_school_physics",
"high_school_politics",
"human_sexuality",
"international_law",
"journalism",
"jurisprudence",
"legal_and_moral_basis",
"logical",
"machine_learning",
"management",
"marketing",
"marxist_theory",
"modern_chinese",
"nutrition",
"philosophy",
"professional_accounting",
"professional_law",
"professional_medicine",
"professional_psychology",
"public_relations",
"security_study",
"sociology",
"sports_science",
"traditional_chinese_medicine",
"virology",
"world_history",
"world_religions",
]

View File

@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import datasets
@@ -136,31 +137,25 @@ class MMLU(datasets.GeneratorBasedBuilder):
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"filepath": os.path.join(
data_dir, "data", "test", f"{task_name}_test.csv"
),
"filepath": os.path.join(data_dir, "data", "test", f"{task_name}_test.csv"),
},
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
gen_kwargs={
"filepath": os.path.join(
data_dir, "data", "val", f"{task_name}_val.csv"
),
"filepath": os.path.join(data_dir, "data", "val", f"{task_name}_val.csv"),
},
),
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"filepath": os.path.join(
data_dir, "data", "dev", f"{task_name}_dev.csv"
),
"filepath": os.path.join(data_dir, "data", "dev", f"{task_name}_dev.csv"),
},
),
]
def _generate_examples(self, filepath):
df = pd.read_csv(filepath, header=None)
df.columns = ["question", "A", "B", "C", "D", "answer"]
for i, instance in enumerate(df.to_dict(orient="records")):

View File

@@ -1,50 +1,221 @@
We provide diverse examples about fine-tuning LLMs.
Make sure to execute these commands in the `LLaMA-Factory` directory.
## Table of Contents
- [LoRA Fine-Tuning](#lora-fine-tuning)
- [QLoRA Fine-Tuning](#qlora-fine-tuning)
- [Full-Parameter Fine-Tuning](#full-parameter-fine-tuning)
- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
- [Extras](#extras)
Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices.
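For example, the device(s) for each run can be selected as follows (device indices are illustrative):
```bash
# GPU: use the first two CUDA devices
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
# NPU: use the first Ascend device
ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```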
## Examples
### LoRA Fine-Tuning
#### (Continuous) Pre-Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```
#### Supervised Fine-Tuning
```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### Multimodal Supervised Fine-Tuning
```bash
llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml
```
#### Reward Modeling
```bash
llamafactory-cli train examples/train_lora/llama3_lora_reward.yaml
```
#### PPO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_ppo.yaml
```
#### DPO/ORPO/SimPO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
```
#### KTO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_kto.yaml
```
#### Preprocess Dataset
It is useful for large datasets. Use `tokenized_path` in the config to load the preprocessed dataset (see the sketch below).
```bash
llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
```
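A minimal sketch of the relevant config line, assuming `tokenized_path` saves the tokenized dataset on the first (preprocessing) run and loads it on later training runs; the path below is illustrative:
```yaml
# shared by the preprocessing config and the training config
tokenized_path: saves/llama3-8b/dataset/sft
```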
#### Evaluating on MMLU/CMMLU/C-Eval Benchmarks
```bash
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_lora/llama3_lora_predict.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding)
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```
### QLoRA Fine-Tuning
#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended)
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
```
#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
```
#### Supervised Fine-Tuning with 4-bit AWQ Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
```
#### Supervised Fine-Tuning with 2-bit AQLM Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
```
### Full-Parameter Fine-Tuning
#### Supervised Fine-Tuning on Single Node
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_full/llama3_full_predict.yaml
```
### Merging LoRA Adapters and Quantization
#### Merge LoRA Adapters
Note: DO NOT use quantized model or `quantization_bit` when merging LoRA adapters.
```bash
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
#### Quantizing Model using AutoGPTQ
```bash
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```
### Inferring LoRA Fine-Tuned Models
#### Use CLI
```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```
#### Use Web UI
```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
```
#### Launch OpenAI-style API
```bash
llamafactory-cli api examples/inference/llama3_lora_sft.yaml
```
### Extras
#### Full-Parameter Fine-Tuning using GaLore
```bash
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using BAdam
```bash
llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
```
#### LoRA+ Fine-Tuning
```bash
llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
```
#### PiSSA Fine-Tuning
```bash
llamafactory-cli train examples/extras/pissa/llama3_lora_sft.yaml
```
#### Mixture-of-Depths Fine-Tuning
```bash
llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
```
#### LLaMA-Pro Fine-Tuning
```bash
bash examples/extras/llama_pro/expand.sh
llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
```
#### FSDP+QLoRA Fine-Tuning
```bash
bash examples/extras/fsdp_qlora/train.sh
```

View File

@@ -1,50 +1,221 @@
我们提供了多样化的大模型微调示例脚本。
请确保在 `LLaMA-Factory` 目录下执行下述命令。
## 目录
- [LoRA 微调](#lora-微调)
- [QLoRA 微调](#qlora-微调)
- [全参数微调](#全参数微调)
- [合并 LoRA 适配器与模型量化](#合并-lora-适配器与模型量化)
- [推理 LoRA 模型](#推理-lora-模型)
- [杂项](#杂项)
使用 `CUDA_VISIBLE_DEVICES`(GPU)或 `ASCEND_RT_VISIBLE_DEVICES`(NPU)选择计算设备。
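例如,可以按照如下方式为每次运行指定设备(设备编号仅为示例):
```bash
# GPU:使用前两张 CUDA 设备
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
# NPU:使用第一张昇腾设备
ASCEND_RT_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```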
## 示例
### LoRA 微调
#### (增量)预训练
```bash
llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
```
#### Supervised Fine-Tuning
```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### Multimodal Supervised Fine-Tuning
```bash
llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml
```
#### Reward Model Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_reward.yaml
```
#### PPO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_ppo.yaml
```
#### DPO/ORPO/SimPO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml
```
#### KTO Training
```bash
llamafactory-cli train examples/train_lora/llama3_lora_kto.yaml
```
#### Preprocess Dataset
This is useful for large datasets; set `tokenized_path` in the config to load the preprocessed dataset.
```bash
llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
```
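A sketch of the intended two-step flow, assuming both configs point `tokenized_path` at the same directory:
```bash
# 1) Tokenize once and write the cache to the tokenized_path set in llama3_preprocess.yaml.
llamafactory-cli train examples/train_lora/llama3_preprocess.yaml
# 2) Reuse the cache: set the same tokenized_path in the training config to skip re-tokenization.
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```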
#### Evaluating on MMLU/CMMLU/C-Eval Benchmarks
```bash
llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_lora/llama3_lora_predict.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```
#### Evenly Distributing GPU Memory with DeepSpeed ZeRO-3
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```
### QLoRA Fine-Tuning
#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended)
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml
```
#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
```
#### Supervised Fine-Tuning with 4-bit AWQ Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
```
#### Supervised Fine-Tuning with 2-bit AQLM Quantization
```bash
llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
```
### Full-Parameter Fine-Tuning
#### Supervised Fine-Tuning on Single Node
```bash
FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
```
#### Supervised Fine-Tuning on Multiple Nodes
```bash
FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft_ds3.yaml
```
#### Batch Predicting and Computing BLEU and ROUGE Scores
```bash
llamafactory-cli train examples/train_full/llama3_full_predict.yaml
```
### Merging LoRA Adapters and Quantization
#### Merge LoRA Adapters
Note: DO NOT use a quantized model or the `quantization_bit` argument when merging LoRA adapters.
```bash
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```
#### Quantizing Model using AutoGPTQ
```bash
llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```
### Inferring LoRA Fine-Tuned Models
#### Use CLI
```bash
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```
#### Use Web UI
```bash
llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
```
#### Launch OpenAI-style API
```bash
llamafactory-cli api examples/inference/llama3_lora_sft.yaml
```
### Extras
#### Full-Parameter Fine-Tuning using GaLore
```bash
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```
#### Full-Parameter Fine-Tuning using BAdam
```bash
llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
```
#### LoRA+ Fine-Tuning
```bash
llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
```
#### PiSSA Fine-Tuning
```bash
llamafactory-cli train examples/extras/pissa/llama3_lora_sft.yaml
```
#### Mixture-of-Depths Fine-Tuning
```bash
llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
```
#### LLaMA-Pro Fine-Tuning
```bash
bash examples/extras/llama_pro/expand.sh
llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
```
#### FSDP+QLoRA Fine-Tuning
```bash
bash examples/extras/fsdp_qlora/train.sh
```

View File

@@ -5,16 +5,16 @@ downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: true
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: true # offload may affect training speed
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: false
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: fp16
mixed_precision: fp16 # or bf16
num_machines: 1 # the number of nodes
num_processes: 2 # the number of GPUs in all nodes
rdzv_backend: static

View File

@@ -1,18 +0,0 @@
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_process_ip: 192.168.0.1
main_process_port: 29555
main_training_function: main
mixed_precision: fp16
num_machines: 2 # the number of nodes
num_processes: 8 # the number of GPUs in all nodes
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

View File

@@ -1,16 +0,0 @@
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1 # the number of nodes
num_processes: 4 # the number of GPUs in all nodes
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

View File

@@ -1,18 +0,0 @@
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 1
main_process_ip: 192.168.0.1
main_process_port: 29555
main_training_function: main
mixed_precision: fp16
num_machines: 2 # the number of nodes
num_processes: 8 # the number of GPUs in all nodes
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false

View File

@@ -0,0 +1,41 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_mode: layer
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 2
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -0,0 +1,42 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_mode: layer
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 2
deepspeed: examples/deepspeed/ds_z3_config.json
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,35 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../../data \
--template default \
--finetuning_type full \
--use_badam \
--badam_switch_mode descending \
--badam_switch_block_every 50 \
--badam_verbose 2 \
--output_dir ../../../saves/LLaMA2-7B/badam/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--pure_bf16

View File

@@ -0,0 +1,40 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,41 +0,0 @@
#!/bin/bash
# DO NOT use GPTQ/AWQ model in FSDP+QLoRA
pip install "transformers>=4.39.1"
pip install "accelerate>=0.28.0"
pip install "bitsandbytes>=0.43.0"
CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
--config_file ../../accelerate/fsdp_config.yaml \
../../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-70b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../../saves/LLaMA2-70B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 4 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--ddp_timeout 180000000 \
--quantization_bit 4 \
--plot_loss \
--fp16

View File

@@ -0,0 +1,6 @@
#!/bin/bash
# DO NOT use GPTQ/AWQ model in FSDP+QLoRA
CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
--config_file examples/accelerate/fsdp_config.yaml \
src/train.py examples/extras/fsdp_qlora/llama3_lora_sft.yaml

View File

@@ -0,0 +1,42 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
use_galore: true
galore_layerwise: true
galore_target: mlp,self_attn
galore_rank: 128
galore_scale: 2.0
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
pure_bf16: true
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,36 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../../data \
--template default \
--finetuning_type full \
--use_galore \
--galore_layerwise \
--galore_target mlp,self_attn \
--galore_rank 128 \
--galore_scale 2.0 \
--output_dir ../../../saves/LLaMA2-7B/galore/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 1 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--pure_bf16

View File

@@ -1,6 +1,6 @@
#!/bin/bash
python ../../../scripts/llama_pro.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--output_dir ../../../models/llama2-7b-pro \
python scripts/llama_pro.py \
--model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
--output_dir models/llama3-8b-instruct-pro \
--num_expand 8

View File

@@ -0,0 +1,41 @@
### model
model_name_or_path: models/llama3-8b-instruct-pro
### method
stage: sft
do_train: true
finetuning_type: freeze
freeze_trainable_layers: 8
freeze_trainable_modules: all
use_llama_pro: true
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b-instruct-pro/freeze/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,34 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path ../../../models/llama2-7b-pro \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../../data \
--template default \
--finetuning_type freeze \
--name_module_trainable all \
--num_layer_trainable 8 \
--use_llama_pro \
--output_dir ../../../saves/LLaMA2-7B-Pro/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -0,0 +1,40 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
loraplus_lr_ratio: 16.0
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,33 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--loraplus_lr_ratio 16.0 \
--output_dir ../../saves/LLaMA2-7B/loraplus/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -0,0 +1,40 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
mixture_of_depths: convert
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b-mod/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
optim: paged_adamw_8bit
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
pure_bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,33 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../../data \
--template default \
--finetuning_type full \
--mixture_of_depths convert \
--output_dir ../../../saves/LLaMA2-7B/mod/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--optim paged_adamw_8bit \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--pure_bf16

View File

@@ -0,0 +1,42 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
pissa_init: true
pissa_iter: 16
pissa_convert: true
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -1,38 +0,0 @@
#!/bin/bash
python -m torch.distributed.run \
--nproc_per_node $NPROC_PER_NODE \
--nnodes $NNODES \
--node_rank $RANK \
--master_addr $MASTER_ADDR \
--master_port $MASTER_PORT \
../../src/train_bash.py \
--deepspeed ../deepspeed/ds_z3_config.json \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type full \
--output_dir ../../saves/LLaMA2-7B/full/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--ddp_timeout 180000000 \
--plot_loss \
--fp16

View File

@@ -1,20 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
--config_file ../accelerate/single_config.yaml \
../../src/train_bash.py \
--stage sft \
--do_predict \
--model_name_or_path ../../saves/LLaMA2-7B/full/sft \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type full \
--output_dir ../../saves/LLaMA2-7B/full/predict \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_eval_batch_size 1 \
--max_samples 20 \
--predict_with_generate

View File

@@ -1,32 +0,0 @@
#!/bin/bash
deepspeed --num_gpus 4 ../../src/train_bash.py \
--deepspeed ../deepspeed/ds_z3_config.json \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type full \
--output_dir ../../saves/LLaMA2-7B/full/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--ddp_timeout 180000000 \
--plot_loss \
--fp16

View File

@@ -1,7 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 API_PORT=8000 python ../../src/api_demo.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--template default \
--finetuning_type lora

View File

@@ -1,7 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/cli_demo.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--template default \
--finetuning_type lora

View File

@@ -1,12 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/evaluate.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--template fewshot \
--finetuning_type lora \
--task mmlu \
--split test \
--lang en \
--n_shot 5 \
--batch_size 4

View File

@@ -0,0 +1,2 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3

View File

@@ -0,0 +1,4 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora

View File

@@ -0,0 +1,4 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3
infer_backend: vllm
vllm_enforce_eager: true

View File

@@ -0,0 +1,3 @@
model_name_or_path: llava-hf/llava-1.5-7b-hf
template: vicuna
visual_inputs: true

View File

@@ -1,8 +0,0 @@
#!/bin/bash
# add `--visual_inputs True` to load MLLM
CUDA_VISIBLE_DEVICES=0 python ../../src/web_demo.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--template default \
--finetuning_type lora

View File

@@ -1,33 +0,0 @@
#!/bin/bash
deepspeed --num_gpus 4 ../../src/train_bash.py \
--deepspeed ../deepspeed/ds_z3_config.json \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--ddp_timeout 180000000 \
--plot_loss \
--fp16

View File

@@ -1,36 +0,0 @@
#!/bin/bash
# also launch it on slave machine using slave_config.yaml
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
--config_file ../accelerate/master_config.yaml \
../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--ddp_timeout 180000000 \
--plot_loss \
--fp16

View File

@@ -1,35 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
--config_file ../accelerate/single_config.yaml \
../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--ddp_timeout 180000000 \
--plot_loss \
--fp16

View File

@@ -1,35 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage dpo \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--create_new_adapter \
--dataset orca_rlhf \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/dpo \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 1e-5 \
--num_train_epochs 1.0 \
--max_samples 1000 \
--val_size 0.1 \
--dpo_ftx 1.0 \
--plot_loss \
--fp16

View File

@@ -1,32 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage orpo \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset orca_rlhf \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/orpo \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 1e-5 \
--num_train_epochs 1.0 \
--max_samples 1000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -1,32 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage ppo \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--create_new_adapter \
--dataset alpaca_gpt4_en \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--reward_model ../../saves/LLaMA2-7B/lora/reward \
--output_dir ../../saves/LLaMA2-7B/lora/ppo \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 512 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--save_steps 100 \
--learning_rate 1e-5 \
--num_train_epochs 1.0 \
--max_samples 1000 \
--top_k 0 \
--top_p 0.9 \
--max_new_tokens 256 \
--plot_loss \
--fp16

View File

@@ -1,19 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_predict \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft,../../saves/LLaMA2-7B/lora/dpo \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--output_dir ../../saves/LLaMA2-7B/lora/predict \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_eval_batch_size 1 \
--max_samples 20 \
--predict_with_generate

View File

@@ -1,18 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES= python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--max_samples 3000 \
--tokenized_path ../../saves/datasets/sft

View File

@@ -1,31 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage pt \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset c4_demo \
--dataset_dir ../../data \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/pretrain \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 10000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -1,33 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage rm \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--create_new_adapter \
--dataset orca_rlhf \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/reward \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--learning_rate 1e-5 \
--num_train_epochs 1.0 \
--max_samples 5000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -1,32 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -1,33 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path llava-hf/llava-1.5-7b-hf \
--visual_inputs \
--dataset mllm_demo \
--dataset_dir ../../data \
--template vicuna \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft_mllm \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--preprocessing_num_workers 16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--warmup_steps 20 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 100.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -0,0 +1,11 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3
### export
export_dir: models/llama3_gptq
export_quantization_bit: 4
export_quantization_dataset: data/c4_demo.json
export_size: 2
export_device: cpu
export_legacy_format: false

View File

@@ -0,0 +1,13 @@
### Note: DO NOT use quantized model or quantization_bit when merging lora adapters
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora
### export
export_dir: models/llama3_lora_sft
export_size: 2
export_device: cpu
export_legacy_format: false

View File

@@ -1,12 +0,0 @@
#!/bin/bash
# DO NOT use quantized model or quantization_bit when merging lora weights
CUDA_VISIBLE_DEVICES=0 python ../../src/export_model.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
--template default \
--finetuning_type lora \
--export_dir ../../models/llama2-7b-sft \
--export_size 2 \
--export_device cpu \
--export_legacy_format False

View File

@@ -1,11 +0,0 @@
#!/bin/bash
# NEED TO run `merge.sh` before using this script
CUDA_VISIBLE_DEVICES=0 python ../../src/export_model.py \
--model_name_or_path ../../models/llama2-7b-sft \
--template default \
--export_dir ../../models/llama2-7b-sft-int4 \
--export_quantization_bit 4 \
--export_quantization_dataset ../../data/c4_demo.json \
--export_size 2 \
--export_legacy_format False

View File

@@ -1,30 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path BlackSamorez/Llama-2-7b-AQLM-2Bit-1x16-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -1,30 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path TheBloke/Llama-2-7B-AWQ \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -1,31 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--quantization_bit 4 \
--plot_loss \
--fp16

View File

@@ -1,30 +0,0 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
--stage sft \
--do_train \
--model_name_or_path TheBloke/Llama-2-7B-GPTQ \
--dataset alpaca_gpt4_en,glaive_toolcall \
--dataset_dir ../../data \
--template default \
--finetuning_type lora \
--lora_target q_proj,v_proj \
--output_dir ../../saves/LLaMA2-7B/lora/sft \
--overwrite_cache \
--overwrite_output_dir \
--cutoff_len 1024 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--save_steps 100 \
--eval_steps 100 \
--evaluation_strategy steps \
--load_best_model_at_end \
--learning_rate 5e-5 \
--num_train_epochs 3.0 \
--max_samples 3000 \
--val_size 0.1 \
--plot_loss \
--fp16

View File

@@ -0,0 +1,23 @@
### model
model_name_or_path: saves/llama3-8b/full/sft
### method
stage: sft
do_predict: true
finetuning_type: full
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/predict
overwrite_output_dir: true
### eval
per_device_eval_batch_size: 1
predict_with_generate: true

View File

@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -0,0 +1,41 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: dpo
do_train: true
finetuning_type: lora
lora_target: all
pref_beta: 0.1
pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/dpo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 5.0e-6
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -0,0 +1,18 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
### method
finetuning_type: lora
### dataset
task: mmlu_test # choices: [mmlu_test, ceval_validation, cmmlu_test]
template: fewshot
lang: en
n_shot: 5
### output
save_dir: saves/llama3-8b/lora/eval
### eval
batch_size: 4

View File

@@ -0,0 +1,40 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: kto
do_train: true
finetuning_type: lora
lora_target: all
pref_beta: 0.1
### dataset
dataset: kto_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/kto
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 5.0e-6
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
reward_model: saves/llama3-8b/lora/reward
### method
stage: ppo
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/ppo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### generate
max_new_tokens: 512
top_k: 0
top_p: 0.9

View File

@@ -0,0 +1,25 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
### method
stage: sft
do_predict: true
finetuning_type: lora
### dataset
eval_dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/predict
overwrite_output_dir: true
### eval
per_device_eval_batch_size: 1
predict_with_generate: true
ddp_timeout: 180000000

View File

@@ -0,0 +1,38 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: pt
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: c4_demo
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/pretrain
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: rm
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/reward
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

View File

@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500

Some files were not shown because too many files have changed in this diff.