support vllm
Former-commit-id: 889f6e910e654d8ec3922c2185042d737ffbf1c3
@@ -1,5 +1,8 @@
 Usage:
 
-- `pretrain.sh`
-- `sft.sh` -> `reward.sh` -> `ppo.sh`
-- `sft.sh` -> `dpo.sh` -> `predict.sh`
+- `pretrain.sh`: run pre-training (optional)
+- `sft.sh`: run supervised fine-tuning
+- `reward.sh`: run reward modeling (must be run after sft.sh)
+- `ppo.sh`: run PPO training (must be run after sft.sh and reward.sh)
+- `dpo.sh`: run DPO training (must be run after sft.sh)
+- `predict.sh`: run prediction (must be run after sft.sh and dpo.sh)
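The stage ordering documented above can be sketched as a small driver script. This is a hypothetical helper, not part of the commit; only the stage script names are taken from the README, and the actual invocation is left commented out so the sketch is safe to run anywhere:

```shell
#!/bin/bash
# Hypothetical driver: run the DPO pipeline stages in order, aborting on the
# first failure (set -e). Stage names come from the README usage list.
set -e
for stage in sft.sh dpo.sh predict.sh; do
    echo "stage: ${stage}"
    # bash "${stage}"  # uncomment when running inside the examples directory
done
```

The same pattern applies to the PPO pipeline by swapping in `reward.sh` and `ppo.sh` after `sft.sh`.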
@@ -1,3 +1,4 @@
 Usage:
 
-- `merge.sh` -> `quantize.sh`
+- `merge.sh`: merge the LoRA weights
+- `quantize.sh`: quantize the model with AutoGPTQ (must be run after merge.sh, optional)
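Since `quantize.sh` must only run after `merge.sh` has produced a merged model, a guard like the following can enforce the ordering. This is an illustrative sketch, not part of the commit; the merged-model path is taken from the quantize script below:

```shell
#!/bin/bash
# Hypothetical guard: only proceed to quantization once the merged model
# directory produced by merge.sh exists.
MERGED_DIR="../../models/llama2-7b-sft"
if [ -d "${MERGED_DIR}" ]; then
    echo "merged model found; quantize.sh may run"
else
    echo "run merge.sh first"
fi
```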
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-python ../../src/export_model.py \
+CUDA_VISIBLE_DEVICES=0 python ../../src/export_model.py \
     --model_name_or_path meta-llama/Llama-2-7b-hf \
     --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
     --template default \
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-python ../../src/export_model.py \
+CUDA_VISIBLE_DEVICES=0 python ../../src/export_model.py \
     --model_name_or_path ../../models/llama2-7b-sft \
     --template default \
     --export_dir ../../models/llama2-7b-sft-int4 \
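Both script hunks add a `CUDA_VISIBLE_DEVICES=0` prefix, pinning the export to GPU 0. The `VAR=value command` form scopes the variable to that single command only, without exporting it into the surrounding shell. A generic sketch of this behavior, using `printenv` in place of the export script:

```shell
# The VAR=value prefix sets the variable for this one command only.
unset CUDA_VISIBLE_DEVICES
CUDA_VISIBLE_DEVICES=0 printenv CUDA_VISIBLE_DEVICES   # prints "0"
printenv CUDA_VISIBLE_DEVICES || echo "not set"        # prints "not set"
```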