Instructions for using udkai/Turdus with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use udkai/Turdus with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="udkai/Turdus")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("udkai/Turdus")
model = AutoModelForCausalLM.from_pretrained("udkai/Turdus")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use udkai/Turdus with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "udkai/Turdus"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "udkai/Turdus",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
(A Python client example for this OpenAI-compatible API is sketched below, after the Docker Model Runner section.)

Use Docker
```shell
docker model run hf.co/udkai/Turdus
```
- SGLang
How to use udkai/Turdus with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "udkai/Turdus" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "udkai/Turdus",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "udkai/Turdus" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "udkai/Turdus",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use udkai/Turdus with Docker Model Runner:
```shell
docker model run hf.co/udkai/Turdus
```
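Both the vLLM and SGLang servers above expose the same OpenAI-compatible completions API, so they can be called from Python as well as from curl. The two sketches below are minimal usage examples, not part of the snippets above: the first generates text with the Transformers pipeline, the second posts the same request as the curl commands, assuming the default local ports shown above (8000 for vLLM, 30000 for SGLang); the sampling parameters are illustrative.

```python
# Generate with the Transformers pipeline loaded above (parameters are illustrative).
from transformers import pipeline

pipe = pipeline("text-generation", model="udkai/Turdus")
out = pipe("Once upon a time,", max_new_tokens=128, do_sample=True, temperature=0.5)
print(out[0]["generated_text"])
```

```python
# Call a locally running vLLM server via its OpenAI-compatible API
# (for the SGLang server above, change the port to 30000).
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "udkai/Turdus",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```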
udkai/Turdus
A less contaminated version of udkai/Garrulus and the second model to be discussed in the paper *Subtle DPO-Contamination with modified Winogrande increases TruthfulQA, Hellaswag & ARC*.
Unlike Garrulus, which was obtained after 2 epochs, this model was obtained after one single epoch of "direct preference optimization" of NeuralMarcoro14-7B with the dataset [hromi/winograd_dpo](https://huggingface.co/datasets/hromi/winograd_dpo).
As you may notice, the dataset consists mostly of specially modified Winogrande prompts.
But before flagging this model (or recommending that it be flagged), consider the following:
Subtle DPO contamination with modified Winogrande causes the average accuracy of the five non-Winogrande metrics (i.e. also including MMLU and GSM8K) to be 0.2% higher than that of the underlying model (72.254 vs. 72.046 in the table below).
| Model | ARC | HellaSwag | MMLU | Truthful QA | GSM8K | Average |
|---|---|---|---|---|---|---|
| mlabonne/NeuralMarcoro14-7B | 71.42 | 87.59 | 64.84 | 65.64 | 70.74 | 72.046 |
| udkai/Turdus | 73.38 | 88.56 | 64.52 | 67.11 | 67.70 | 72.254 |
Yes, as strange as it may sound, one can indeed increase ARC from 71.42% to 73.38% with a single epoch of circa 1200 repetitive Winograd schemata...
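For concreteness, the recipe described above (one epoch of DPO on NeuralMarcoro14-7B with the winograd_dpo preference pairs) could look roughly like the following trl-based sketch. This is a minimal illustration, not the actual training script: the hyperparameters (beta, batch size, learning rate), the dataset split and column names, and the exact trl argument names (which differ between versions) are all assumptions.

```python
# Minimal sketch of a single-epoch DPO run in the spirit described above.
# Hyperparameters, dataset columns, and trl argument names are assumptions,
# not the settings actually used for Turdus.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mlabonne/NeuralMarcoro14-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs built from modified Winogrande items
# (assumed to expose prompt/chosen/rejected columns and a "train" split).
dataset = load_dataset("hromi/winograd_dpo", split="train")

config = DPOConfig(
    output_dir="turdus-dpo",
    num_train_epochs=1,              # one single epoch, as stated above
    per_device_train_batch_size=1,   # assumed
    learning_rate=5e-6,              # assumed
    beta=0.1,                        # assumed DPO temperature
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # `tokenizer=` in older trl versions
)
trainer.train()
```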
BibTeX
Should this model, or the quasi-methodology which led to it, be of practical or theoretical interest to you, I would be honored if you referred to it in your work:
```bibtex
@misc{udk_dot_ai_turdus,
  author    = {{UDK dot AI, Daniel Devatman Hromada}},
  title     = {Turdus (Revision 923c305)},
  year      = 2024,
  url       = {https://huggingface.co/udkai/Turdus},
  doi       = {10.57967/hf/1611},
  publisher = {Hugging Face}
}
```
Model tree for udkai/Turdus
- Base model: mlabonne/Marcoro14-7B-slerp