Tags: Text Generation · Transformers · PyTorch · code · gpt2 · custom_code · Eval Results (legacy) · text-generation-inference
Instructions for using bigcode/santacoder with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use bigcode/santacoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigcode/santacoder", trust_remote_code=True)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("bigcode/santacoder", trust_remote_code=True)
```
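A quick way to sanity-check the pipeline is to call it on a short code prompt; a minimal sketch, where the prompt string and `max_new_tokens` value are illustrative choices rather than part of the model card:

```python
# Complete a function signature with the pipeline created above.
# Prompt and generation settings here are illustrative assumptions.
result = pipe("def fibonacci(n):", max_new_tokens=64)
print(result[0]["generated_text"])
```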
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigcode/santacoder with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigcode/santacoder"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigcode/santacoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```sh
docker model run hf.co/bigcode/santacoder
```
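Because the vLLM server speaks the OpenAI-compatible API, the same completion request can be made from Python with the `openai` client instead of curl; a minimal sketch, assuming the server started above is listening on localhost:8000 (the `api_key` value is a dummy placeholder, since vLLM does not require one by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible API).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="bigcode/santacoder",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```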
- SGLang
How to use bigcode/santacoder with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bigcode/santacoder" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigcode/santacoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "bigcode/santacoder" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigcode/santacoder",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use bigcode/santacoder with Docker Model Runner:
```sh
docker model run hf.co/bigcode/santacoder
```
Commit bdeb6cc (parent: fa6c997) · Update README.md

README.md CHANGED:
````diff
@@ -200,7 +200,7 @@ model-index:
 
 # Model Summary
 
-The SantaCoder models are a series of 1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
+The SantaCoder models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
 The main model uses multi-query attention, was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the Fill-in-the-Middle objective.
 In addition there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
 
@@ -219,9 +219,9 @@ In addition there are several models that were trained on datasets with differen
 |`fertility`| MQA | AR + FIM | Tokenizer fertility |
 |`comments`| MQA | AR + FIM | Comment-to-code ratio |
 |`dedup-alt`| MQA | AR + FIM | Stronger near-deduplication |
-|`
+|`final`| MQA | AR + FIM | Stronger near-deduplication and comment-to-code ratio |
 
-The `
+The `final` model is the best performing model and was trained twice as long as the others. This checkpoint is the default model and available on the `main` branch. All other checkpoints are on separate branches with according names.
 
 # Use
 
@@ -251,7 +251,7 @@ print(tokenizer.decode(outputs[0]))
 ```
 
 ### Fill-in-the-middle
-Fill-in-the-
+Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
 
 ```python
 input_text = "<fim-prefix>def print_hello_world():\n    <fim-suffix>\n    print('Hello world!')<fim-middle>"
````
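For context, the fill-in-the-middle snippet introduced by this hunk continues in the README with a generate call; a minimal sketch of the full flow, assuming the model and tokenizer are loaded as in the Transformers section above (`max_new_tokens` is an illustrative choice):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# SantaCoder ships custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("bigcode/santacoder", trust_remote_code=True)

# FIM prompt: the model fills in the <fim-middle> part between prefix and suffix.
input_text = "<fim-prefix>def print_hello_world():\n    <fim-suffix>\n    print('Hello world!')<fim-middle>"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=32)  # illustrative length
print(tokenizer.decode(outputs[0]))
```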