FunCineForge MLX Qwen2-0.5B

MLX-converted Qwen2-0.5B backbone for FunCineForge TTS pipeline.

Performance

Backend              tok/s   Speedup
PyTorch MPS (fp32)    24.5     1.0x
MLX (fp32)           116.6    4.76x
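The speedup column is simply the ratio of throughputs. A minimal timing helper like the one below (hypothetical, not part of the release) shows how tok/s and speedup fall out of raw timings:

```python
import time

def tokens_per_second(generate_fn, n_tokens=256):
    # Time one generation call that is assumed to emit n_tokens tokens.
    # generate_fn is a placeholder for whichever backend is being measured.
    start = time.perf_counter()
    generate_fn(n_tokens)
    return n_tokens / (time.perf_counter() - start)

def speedup(candidate_tok_s, baseline_tok_s):
    # Speedup column = candidate throughput / baseline throughput.
    return candidate_tok_s / baseline_tok_s
```

With the table's numbers, `speedup(116.6, 24.5)` gives the reported 4.76x.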

Files

  • model.safetensors – MLX Qwen2-0.5B backbone weights (fp32)
  • config.json – Model configuration
  • custom_weights.pt – FunCineForge custom layers (codec_embed, timespk_embed, codec_head, face_linear)
  • tokenizer.json / tokenizer_config.json – Tokenizer files
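The custom layers ship as a PyTorch state dict, while the MLX backbone consumes plain arrays, so a conversion step along these lines is needed somewhere in the pipeline. This is a minimal sketch: the helper name and the float32 assumption are illustrative, and the actual FunCineForge loader may differ.

```python
import numpy as np

def state_dict_to_arrays(state_dict):
    """Convert a CPU state dict (anything numpy can ingest, e.g. torch
    tensors after .numpy()) into float32 ndarrays with the same keys,
    ready to wrap with mx.array(). Hypothetical helper."""
    return {k: np.asarray(v, dtype=np.float32) for k, v in state_dict.items()}
```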

Usage

Set use_mlx: true in decode.yaml. The model will be auto-downloaded on first run.

# decode.yaml
use_mlx: true

Requires: pip install mlx mlx-lm
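How the flag is consumed can be pictured as a simple backend switch. This is illustrative only: `select_backend` and the `"torch_mps"` fallback name are assumptions, not FunCineForge API.

```python
def select_backend(cfg):
    # decode.yaml is parsed into a dict; use_mlx is the key shown above.
    # Fall back to the PyTorch MPS path when the flag is absent or false.
    return "mlx" if cfg.get("use_mlx", False) else "torch_mps"
```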
