# FunCineForge MLX Qwen2-0.5B

An MLX-converted Qwen2-0.5B backbone for the FunCineForge TTS pipeline.
## Performance
| Backend | tok/s | Speedup |
|---|---|---|
| PyTorch MPS (fp32) | 24.5 | 1.0x |
| MLX (fp32) | 116.6 | 4.76x |
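The speedup column is just the ratio of the two throughput figures, with PyTorch MPS as the 1.0x baseline:

```python
# Speedup is MLX throughput divided by the PyTorch MPS baseline (both fp32, tok/s).
pytorch_mps_tps = 24.5
mlx_tps = 116.6

speedup = mlx_tps / pytorch_mps_tps
print(f"{speedup:.2f}x")  # -> 4.76x
```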
## Files
- `model.safetensors`: MLX Qwen2-0.5B backbone weights (fp32)
- `config.json`: Model configuration
- `custom_weights.pt`: FunCineForge custom layers (codec_embed, timespk_embed, codec_head, face_linear)
- `tokenizer.json` / `tokenizer_config.json`: Tokenizer files
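As a minimal sketch, a pre-flight check that all five files are present locally before decoding. The `missing_files` helper and `model_dir` argument are hypothetical illustrations, not part of the FunCineForge pipeline; only the file names come from this card.

```python
from pathlib import Path

# File names as listed on this card.
REQUIRED_FILES = [
    "model.safetensors",
    "config.json",
    "custom_weights.pt",
    "tokenizer.json",
    "tokenizer_config.json",
]

def missing_files(model_dir: str) -> list[str]:
    """Return the required files not yet present in model_dir."""
    d = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (d / name).exists()]
```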
## Usage
Set `use_mlx: true` in `decode.yaml`. The model is downloaded automatically on first run.
```yaml
# decode.yaml
use_mlx: true
```
Requires: `pip install mlx mlx-lm`