update examples

Former-commit-id: bf36b16e48d6438de6d0b2f2bfe33f7895699b9d
This commit is contained in:
hiyouga
2024-04-02 20:41:49 +08:00
parent c1510d19c7
commit 933a084999
5 changed files with 6 additions and 29 deletions


@@ -1,13 +0,0 @@
> [!WARNING]
> Merging LoRA weights into a quantized model is not supported.

> [!TIP]
> Use `--model_name_or_path path_to_model` alone to load the exported model or a model fine-tuned with the full or freeze method.
>
> Add `CUDA_VISIBLE_DEVICES=0`, `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model with AutoGPTQ after merging the LoRA weights.
Usage:

- `merge.sh`: merge the LoRA weights
- `quantize.sh`: quantize the model with AutoGPTQ (optional, must run after `merge.sh`)
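
The two scripts above could look roughly like the following sketch. It assumes the repository's `src/export_model.py` entry point; all paths (`path_to_base_model`, `path_to_lora_checkpoint`, `path_to_export_dir`, `path_to_quantized_dir`) are placeholders to substitute with your own:

```shell
# merge.sh — merge LoRA weights into the base model (sketch; paths are placeholders)
CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
    --model_name_or_path path_to_base_model \
    --adapter_name_or_path path_to_lora_checkpoint \
    --template default \
    --finetuning_type lora \
    --export_dir path_to_export_dir

# quantize.sh — quantize the merged model with AutoGPTQ (run after merge.sh);
# --export_quantization_dataset supplies the calibration data
CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
    --model_name_or_path path_to_export_dir \
    --template default \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/c4_demo.json \
    --export_dir path_to_quantized_dir
```

Note the second step points `--model_name_or_path` at the merged export directory, consistent with the tip above: quantization happens only after the LoRA weights have been merged.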