Space · QED-Nano: Teaching a Tiny Model to Prove Hard Theorems 📝 Who needs 1T parameters? Olympiad proofs with a 4B model.
Article · GGML and llama.cpp join HF to ensure the long-term progress of Local AI (7 days ago)
Collection · Ministral 3: new multimodal models from Mistral in Base, Instruct, and Reasoning variants, available in 3B, 8B, and 14B sizes (36 items, updated 3 days ago)
Paper · Latent Diffusion Model without Variational Autoencoder (arXiv:2510.15301, published Oct 17, 2025)
Space · The Ultra-Scale Playbook 🌌 The ultimate guide to training LLMs on large GPU clusters.
Paper · CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (arXiv:2507.14111, published Jul 18, 2025)