support ORPO
Former-commit-id: f44a4c27e2461cdaa1b16865f597a31033c0e6d9
@@ -1,8 +1,9 @@
 Usage:
 
 - `pretrain.sh`: do pre-train (optional)
-- `sft.sh`: do supervised fine-tune
+- `sft.sh`: do supervised fine-tuning
 - `reward.sh`: do reward modeling (must after sft.sh)
 - `ppo.sh`: do PPO training (must after sft.sh and reward.sh)
 - `dpo.sh`: do DPO training (must after sft.sh)
+- `orpo.sh`: do ORPO training
 - `predict.sh`: do predict (must after sft.sh and dpo.sh)
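The list above mainly encodes the ordering between the scripts. As a reading aid, below is a minimal bash driver sketching one run order consistent with those constraints; it assumes each script is executable from the same directory and carries its own flags internally, which the diff does not show. In practice you would usually pick only one of ppo.sh, dpo.sh, or orpo.sh rather than running all three.

#!/usr/bin/env bash
# Sketch of one run order consistent with the constraints listed above.
# Assumption: every script is self-contained and runnable from this directory.
set -euo pipefail

# bash pretrain.sh            # optional pre-training step

bash sft.sh                   # supervised fine-tuning, required by the stages below

# RLHF path: reward modeling must precede PPO.
bash reward.sh
bash ppo.sh

# Preference-optimization alternatives; each only needs the SFT stage.
bash dpo.sh
bash orpo.sh

bash predict.sh               # prediction, after sft.sh and dpo.sh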