DPO-Shift: Shifting the Distribution of Direct Preference Optimization
Paper: arXiv:2502.07599
Serve the model with Docker:

```bash
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "NoManDeRY/DPO-Shift-Llama-3-8B-Ultrafeedback-decrease_linear-1.0to0.95" \
--host 0.0.0.0 \
--port 30000
```
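Before sending requests, you can confirm the server is up. A minimal liveness check, assuming recent SGLang releases expose a `/health` endpoint (the exact path is an assumption, not stated in this card):

```python
import urllib.request

# Returns HTTP 200 once the SGLang server is ready to accept requests.
with urllib.request.urlopen("http://localhost:30000/health") as resp:
    print(resp.status)
```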
Call the server using curl (OpenAI-compatible API):

```bash
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "NoManDeRY/DPO-Shift-Llama-3-8B-Ultrafeedback-decrease_linear-1.0to0.95",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
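Because the endpoint is OpenAI-compatible, you can also query it from Python with the official `openai` client. A minimal sketch, assuming `pip install openai` and that the server was launched without an API key, as above (any placeholder key then works):

```python
from openai import OpenAI

# Point the client at the local SGLang server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="NoManDeRY/DPO-Shift-Llama-3-8B-Ultrafeedback-decrease_linear-1.0to0.95",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```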
This model was released with the preprint DPO-Shift: Shifting the Distribution of Direct Preference Optimization. Please refer to our repository for more details.

This model is a fine-tuned version of princeton-nlp/Llama-3-Base-8B-SFT on the HuggingFaceH4/ultrafeedback_binarized dataset. Its results on the evaluation set are reported in the final row of the training results table below.
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
Training hyperparameters

More information needed

Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Dpo Lambda | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6826 | 0.1047 | 50 | 0.6803 | 0.0669 | 0.0399 | 0.9948 | 0.6690 | 0.0270 | -267.0431 | -293.9557 | -0.9094 | -0.8412 |
| 0.5951 | 0.2094 | 100 | 0.6223 | -0.0861 | -0.2745 | 0.9895 | 0.7130 | 0.1884 | -298.4850 | -309.2591 | -0.9195 | -0.8667 |
| 0.6296 | 0.3141 | 150 | 0.5972 | -0.2312 | -0.5289 | 0.9843 | 0.7100 | 0.2977 | -323.9177 | -323.7625 | -0.9008 | -0.8554 |
| 0.6219 | 0.4187 | 200 | 0.5784 | -0.4096 | -0.8051 | 0.9790 | 0.7310 | 0.3955 | -351.5381 | -341.6022 | -0.9313 | -0.8927 |
| 0.5738 | 0.5234 | 250 | 0.5685 | -0.4338 | -0.8864 | 0.9738 | 0.7260 | 0.4526 | -359.6707 | -344.0276 | -0.9691 | -0.9333 |
| 0.5598 | 0.6281 | 300 | 0.5695 | -0.4246 | -0.9086 | 0.9686 | 0.7220 | 0.4840 | -361.8922 | -343.1057 | -1.0002 | -0.9608 |
| 0.566 | 0.7328 | 350 | 0.5613 | -0.3470 | -0.8404 | 0.9633 | 0.7260 | 0.4934 | -355.0737 | -335.3493 | -0.9958 | -0.9592 |
| 0.5423 | 0.8375 | 400 | 0.5613 | -0.3837 | -0.8996 | 0.9581 | 0.7290 | 0.5159 | -360.9908 | -339.0213 | -1.0033 | -0.9665 |
| 0.5357 | 0.9422 | 450 | 0.5619 | -0.3784 | -0.8957 | 0.9528 | 0.7310 | 0.5173 | -360.6006 | -338.4835 | -1.0030 | -0.9672 |
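The "decrease_linear-1.0to0.95" suffix in the model name matches the Dpo Lambda column above: the DPO-Shift parameter λ decays linearly from 1.0 toward 0.95 over training. A minimal sketch of that schedule; the total step count of ~478 is inferred from the epoch/step ratio in the table and is an assumption:

```python
def dpo_lambda(step: int, total_steps: int = 478,
               lam_start: float = 1.0, lam_end: float = 0.95) -> float:
    """Linearly decay the DPO-Shift lambda from lam_start to lam_end."""
    frac = min(step / total_steps, 1.0)
    return lam_start + (lam_end - lam_start) * frac

# Approximately reproduces the Dpo Lambda column:
# step 50 -> ~0.9948, step 250 -> ~0.9738, step 450 -> ~0.9529
for step in (50, 250, 450):
    print(step, round(dpo_lambda(step), 4))
```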
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "NoManDeRY/DPO-Shift-Llama-3-8B-Ultrafeedback-decrease_linear-1.0to0.95" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "NoManDeRY/DPO-Shift-Llama-3-8B-Ultrafeedback-decrease_linear-1.0to0.95",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
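If you prefer to load the checkpoint directly instead of running a server, here is a minimal sketch with Hugging Face transformers. It assumes a GPU with enough memory for an 8B model in bfloat16 and that the tokenizer ships a chat template; both are assumptions, not guarantees from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NoManDeRY/DPO-Shift-Llama-3-8B-Ultrafeedback-decrease_linear-1.0to0.95"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```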