update readme
Former-commit-id: 5061f7196a3278af5ebce77249d9c3c0f8a55b34
@@ -347,7 +347,7 @@ To enable FlashAttention-2 on the Windows platform, you need to install the prec
 llamafactory-cli webui
 ```
 
-> [!TIPS]
+> [!TIP]
 > To modify the default settings in the LLaMA Board GUI, you can use environment variables, e.g., `export CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False` (use the `set` command on Windows).
 
 <details><summary>For Alibaba Cloud users</summary>
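As a quick illustration of the tip above, a minimal sketch of an equivalent shell session; the values are the ones quoted in the tip, and on Windows each `export` becomes a `set` command:

```bash
# Configure the LLaMA Board GUI via environment variables, then launch it.
export CUDA_VISIBLE_DEVICES=0      # GPU(s) visible to the process
export GRADIO_SERVER_NAME=0.0.0.0  # listen on all network interfaces
export GRADIO_SERVER_PORT=7860     # port for the web UI
export GRADIO_SHARE=False          # disable the public Gradio share link
llamafactory-cli webui
```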
@@ -393,7 +393,7 @@ docker compose -f ./docker-compose.yml up -d
 
 See [examples/README.md](examples/README.md) for usage.
 
-> [!TIPS]
+> [!TIP]
 > Use `llamafactory-cli train -h` to display argument descriptions.
 
 ### Deploy with OpenAI-style API and vLLM
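The second tip can be tried directly. A minimal sketch, assuming the example configs referenced by [examples/README.md](examples/README.md) are present; the YAML path below is a hypothetical placeholder for any config documented there:

```bash
# Show all training arguments and their descriptions.
llamafactory-cli train -h

# Training then takes a YAML config; this path is an illustrative placeholder.
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```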