update readme

Former-commit-id: a4d86a4bea1cce2219a54def9dfd3fd732d48e72
hiyouga
2023-11-18 11:15:56 +08:00
parent 5197fb2fad
commit 821a6f2fa6
2 changed files with 6 additions and 6 deletions

@@ -43,9 +43,9 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
![benchmark](assets/benchmark.svg)
-- Training Speed: the number of training samples processed per second during the training. (bs=4, cutoff_len=1024)
-- BLEU Score: BLEU-4 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)
-- GPU Memory: Peak GPU memory usage in the 4-bit quantized training. (bs=1, cutoff_len=1024)
+- **Training Speed**: the number of training samples processed per second during the training. (bs=4, cutoff_len=1024)
+- **BLEU Score**: BLEU-4 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)
+- **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024)
- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA-Factory's LoRA tuning.
## Changelog
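
For context on the settings mentioned in the last bullet of this diff, a minimal sketch of a 4-bit quantized LoRA run with `lora_rank=32`, batch size 4, and `cutoff_len=1024` might look like the command below. The `train_bash.py` entry point, the model name, the dataset name, and the exact argument names are assumptions for illustration, not content taken from this commit.

```bash
# Hypothetical invocation; entry point and argument names are assumptions,
# shown only to illustrate lora_rank=32 / cutoff_len=1024 / 4-bit training.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --lora_rank 32 \
    --quantization_bit 4 \
    --cutoff_len 1024 \
    --per_device_train_batch_size 4 \
    --output_dir saves/llama2-7b-lora
```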