simplify readme
Former-commit-id: 0da6ec2d516326fe9c7583ba71cd1778eb838178
@@ -1,3 +1,12 @@
> [!WARNING]
> Merging LoRA weights into a quantized model is not supported.

> [!TIP]
> Use `--model_name_or_path path_to_model` by itself to load the exported model or a model fine-tuned in full/freeze mode.
>
> Use `CUDA_VISIBLE_DEVICES=0`, `--export_quantization_bit 4`, and `--export_quantization_dataset data/c4_demo.json` to quantize the model with AutoGPTQ after merging the LoRA weights.
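As a concrete illustration, a quantized export of an already-merged model might look like the sketch below. Only `CUDA_VISIBLE_DEVICES=0`, `--model_name_or_path`, `--export_quantization_bit`, and `--export_quantization_dataset` appear in the tips above; the `src/export_model.py` entry point, the `--export_dir` flag, and the example paths are assumptions.

```bash
# Sketch: quantize a merged (exported) model to 4-bit with AutoGPTQ.
# src/export_model.py, --export_dir, and the paths are assumptions;
# the remaining flags are quoted from the tip above.
CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
    --model_name_or_path path_to_export \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/c4_demo.json \
    --export_dir path_to_quantized_model
```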
Usage:
- `merge.sh`: merge the LoRA weights (see the sketch below)
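A minimal sketch of what `merge.sh` could contain, assuming the same hypothetical `src/export_model.py` entry point; the `--adapter_name_or_path`, `--finetuning_type`, and `--export_dir` flags and the paths are illustrative assumptions, not quoted from this diff:

```bash
#!/usr/bin/env bash
# Sketch: merge LoRA weights into the (non-quantized) base model and export it.
# Entry point, flag names, and paths are assumptions for illustration.
CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
    --model_name_or_path path_to_base_model \
    --adapter_name_or_path path_to_lora_adapter \
    --finetuning_type lora \
    --export_dir path_to_export
```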