Tags: Text Generation · Transformers · Safetensors · qwen2 · NL2SQL · SQL · Text-to-SQL · conversational · text-generation-inference
Instructions for using XGenerationLab/XiYanSQL-QwenCoder-32B-2412 with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- Transformers
How to use XGenerationLab/XiYanSQL-QwenCoder-32B-2412 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="XGenerationLab/XiYanSQL-QwenCoder-32B-2412")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("XGenerationLab/XiYanSQL-QwenCoder-32B-2412")
model = AutoModelForCausalLM.from_pretrained("XGenerationLab/XiYanSQL-QwenCoder-32B-2412")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
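Note that a 32B checkpoint will not fit on a single consumer GPU in full precision. As a hedged variant of the snippet above (not part of the original card), loading in bfloat16 with `device_map="auto"` shards the weights across the available devices:

```python
# Sketch: memory-conscious loading for the 32B checkpoint.
# Assumes the `accelerate` package is installed for device_map support.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "XGenerationLab/XiYanSQL-QwenCoder-32B-2412",
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. fp32
    device_map="auto",           # spread layers across available GPUs/CPU
)
```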
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use XGenerationLab/XiYanSQL-QwenCoder-32B-2412 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "XGenerationLab/XiYanSQL-QwenCoder-32B-2412",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
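Since the endpoint is OpenAI-compatible, any OpenAI client can talk to it. A minimal sketch with the `openai` Python package (the base URL matches the server above; the API key is a dummy value, which vLLM accepts by default):

```python
# Sketch: querying the local vLLM server with the OpenAI Python client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused locally
resp = client.chat.completions.create(
    model="XGenerationLab/XiYanSQL-QwenCoder-32B-2412",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)
```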
Use Docker

```shell
# Run the vLLM OpenAI-compatible server image:
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"
```
- SGLang
How to use XGenerationLab/XiYanSQL-QwenCoder-32B-2412 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "XGenerationLab/XiYanSQL-QwenCoder-32B-2412" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "XGenerationLab/XiYanSQL-QwenCoder-32B-2412",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
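The same endpoint can also be scripted; a small sketch with `requests` (request and response shapes follow the OpenAI chat-completions format shown in the curl call above):

```python
# Sketch: calling the SGLang server programmatically over its OpenAI-compatible API.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "XGenerationLab/XiYanSQL-QwenCoder-32B-2412",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```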
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "XGenerationLab/XiYanSQL-QwenCoder-32B-2412" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "XGenerationLab/XiYanSQL-QwenCoder-32B-2412",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use XGenerationLab/XiYanSQL-QwenCoder-32B-2412 with Docker Model Runner:
```shell
docker model run hf.co/XGenerationLab/XiYanSQL-QwenCoder-32B-2412
```
Update README.md

README.md CHANGED
````diff
@@ -62,7 +62,7 @@ transformers >= 4.37.0
 Here is a simple code snippet for quickly using **XiYanSQL-QwenCoder** model. We provide a Chinese version of the prompt, and you just need to replace the placeholders for "question," "db_schema," and "evidence" to get started. We recommend using our [M-Schema](https://github.com/XGenerationLab/M-Schema) format for the schema; other formats such as DDL are also acceptable, but they may affect performance.
 Currently, we mainly support mainstream dialects like SQLite, PostgreSQL, and MySQL.
 
-```
+```python
 nl2sqlite_template_cn = """你是一名{dialect}专家,现在需要阅读并理解下面的【数据库schema】描述,以及可能用到的【参考信息】,并运用{dialect}知识生成sql语句回答【用户问题】。
 【用户问题】
 {question}
@@ -108,6 +108,36 @@ generated_ids = [
 ]
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
+
+
+### Inference with vLLM
+```python
+from vllm import LLM, SamplingParams
+from transformers import AutoTokenizer
+model_path = "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"
+llm = LLM(model=model_path, tensor_parallel_size=8)
+tokenizer = AutoTokenizer.from_pretrained(model_path)
+sampling_params = SamplingParams(
+    n=1,
+    temperature=0.1,
+    max_tokens=1024
+)
+
+## dialects -> ['SQLite', 'PostgreSQL', 'MySQL']
+prompt = nl2sqlite_template_cn.format(dialect="", db_schema="", question="", evidence="")
+message = [{'role': 'user', 'content': prompt}]
+text = tokenizer.apply_chat_template(
+    message,
+    tokenize=False,
+    add_generation_prompt=True
+)
+outputs = llm.generate([text], sampling_params=sampling_params)
+response = outputs[0].outputs[0].text
+```
+
+
+
+
 ## Acknowledgments
 If you find our work useful, please give us a citation or a like, so we can make a greater contribution to the open-source community!
 ```bibtex
````
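The added vLLM snippet calls `nl2sqlite_template_cn.format(...)` with empty placeholders; the Chinese template itself instructs the model, as a {dialect} expert, to read the database schema description and any reference information, then answer the user question with a SQL statement. As a concrete illustration (the table, schema shape, and question below are invented for this sketch, loosely following the M-Schema style):

```python
# Sketch: filling the template with a toy example (schema and question are hypothetical).
db_schema = """【DB_ID】 shop
# Table: orders
[
  (order_id:INTEGER, primary key),
  (customer:TEXT),
  (amount:REAL)
]"""
prompt = nl2sqlite_template_cn.format(
    dialect="SQLite",      # supported dialects: SQLite, PostgreSQL, MySQL
    db_schema=db_schema,   # M-Schema-style description (shape assumed here)
    question="What is the total order amount per customer?",
    evidence="",           # optional reference information
)
```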