update examples
Former-commit-id: cca50b627c85e0a777717d609377406cc7fd579f
We provide diverse examples of fine-tuning LLMs.

Make sure to execute these commands in the `LLaMA-Factory` directory.
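For instance, assuming the repository is checked out from GitHub (adjust the path if your copy lives elsewhere), the setup looks like:

```shell
# Clone the repository (if not already done) and change into its root
# directory; the example commands in this document are run from there.
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
```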
## Table of Contents

- [LoRA Fine-Tuning on a Single GPU](#lora-fine-tuning-on-a-single-gpu)
- [QLoRA Fine-Tuning on a Single GPU](#qlora-fine-tuning-on-a-single-gpu)
- [LoRA Fine-Tuning on Multiple GPUs](#lora-fine-tuning-on-multiple-gpus)
- [Full-Parameter Fine-Tuning on Multiple GPUs](#full-parameter-fine-tuning-on-multiple-gpus)
- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
- [Extras](#extras)
## Examples

### LoRA Fine-Tuning on a Single GPU

#### (Continuous) Pre-Training
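A single-GPU LoRA pre-training launch might look like the following. This is a hedged sketch, not the repository's canonical script: the model name, dataset, LoRA target modules, and output path are placeholder assumptions, and the exact flag set can differ between LLaMA-Factory versions, so consult the files shipped under `examples/` for the authoritative commands.

```shell
# Hypothetical sketch of continuous pre-training with LoRA on one GPU.
# Flag names follow LLaMA-Factory conventions but may vary by version;
# model, dataset, and paths below are illustrative assumptions.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage pt \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset c4_demo \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir saves/llama2-7b/lora/pretrain \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --fp16
```

`--stage pt` selects the pre-training objective (plain language modeling on raw text), while `--finetuning_type lora` restricts updates to the low-rank adapter weights, which is what makes a 7B model trainable on a single GPU.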