Sink-Aware Pruning for Diffusion Language Models
Abstract
Diffusion Language Models suffer from high inference costs due to iterative denoising, prompting the development of Sink-Aware Pruning, which identifies and removes unstable attention sinks to improve efficiency without retraining.
Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose Sink-Aware Pruning, which automatically identifies and prunes unstable sinks in DLMs, in contrast to prior AR-oriented methods that always keep sinks. Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
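The sink-instability measurement described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact metric: given one attention map per denoising timestep, we take the key position receiving the most attention mass as the dominant sink at that step, and measure how often that position shifts across the trajectory (the function name `sink_instability` and the toy data are our own).

```python
# Hypothetical sketch of measuring attention-sink instability across
# denoising timesteps (illustrative, not the paper's exact metric).
import numpy as np

def sink_instability(attn_maps):
    """attn_maps: array of shape (T, L, L) -- one (query x key) attention
    map per denoising timestep. Returns the dominant sink position at each
    step and the fraction of consecutive steps at which it changes
    (0 = perfectly stable sink, as assumed for AR models;
    close to 1 = highly transient, as observed for DLMs)."""
    attn_maps = np.asarray(attn_maps)
    # Attention mass received by each key position, per timestep.
    key_mass = attn_maps.sum(axis=1)          # shape (T, L)
    sinks = key_mass.argmax(axis=1)           # dominant sink per timestep
    shifts = np.count_nonzero(np.diff(sinks))
    return sinks, shifts / max(len(sinks) - 1, 1)

# Toy example: a sink that stays at position 0 for 3 steps, then jumps.
T, L = 4, 5
maps = np.full((T, L, L), 0.1)
maps[:3, :, 0] = 5.0   # stable sink at position 0 early on
maps[3, :, 2] = 5.0    # sink jumps to position 2 at the last step
sinks, instability = sink_instability(maps)
print(sinks.tolist(), round(instability, 2))  # [0, 0, 0, 2] 0.33
```

A high instability score indicates the sink is transient over the generation trajectory, which is the property the method exploits.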
Community
Sink-Aware Pruning for Diffusion Language Models identifies and addresses a fundamental blind spot in current pruning recipes for large language models. Most pruning methods are inherited from autoregressive LLMs and assume that attention sink tokens are stable global anchors. We show this assumption does not hold for diffusion language models (DLMs): attention sinks in DLMs shift significantly across denoising steps, making traditional sink-preserving heuristics suboptimal for this generation paradigm.
We propose Sink-Aware Pruning, a diffusion-native pruning strategy that automatically detects and suppresses unstable attention sinks based on their variance over the full denoising trajectory. Without any retraining, our method achieves a better quality–efficiency trade-off and outperforms strong prior pruning baselines under matched compute across multiple DLM families (e.g., LLaDA and Dream).
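The pruning decision described above can be sketched as follows. This is a minimal illustration under our own assumptions (the function name, the importance scores, and the stability threshold are all hypothetical): unlike AR-style heuristics that unconditionally keep sink tokens, only sinks that stay dominant across most of the denoising trajectory are protected from pruning.

```python
# Hypothetical sketch of a sink-aware pruning decision: transient sinks
# are eligible for pruning; only stable sinks are always kept.
import numpy as np

def sink_aware_prune_mask(importance, sink_ids_per_step, keep_ratio=0.5,
                          stability_thresh=0.8):
    """importance: (L,) per-token importance scores.
    sink_ids_per_step: dominant sink position at each denoising step.
    A token counts as a *stable* sink (and is always kept) only if it is
    the dominant sink in at least `stability_thresh` of the steps."""
    L = len(importance)
    freq = np.bincount(sink_ids_per_step, minlength=L) / len(sink_ids_per_step)
    stable_sinks = freq >= stability_thresh
    keep = np.zeros(L, dtype=bool)
    k = max(int(L * keep_ratio), 1)
    keep[np.argsort(importance)[-k:]] = True  # keep top-k important tokens
    keep |= stable_sinks                      # always keep stable sinks
    return keep

imp = np.array([0.9, 0.1, 0.2, 0.8, 0.05, 0.3])
# Position 0 is the dominant sink in 4 of 5 steps (stable);
# position 4 is a transient sink (appears once) and gets no protection.
sink_trace = np.array([0, 0, 0, 0, 4])
mask = sink_aware_prune_mask(imp, sink_trace, keep_ratio=0.5)
print(mask.tolist())  # [True, False, False, True, False, True]
```

In this toy run the transient sink at position 4 is pruned like any low-importance token, while the stable sink at position 0 is retained, which is the behavioral difference from sink-preserving AR heuristics.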
Our code and implementation will be publicly available on GitHub: https://github.com/VILA-Lab/Sink-Aware-Pruning.
The following papers were recommended by the Semantic Scholar API
- Focus-dLLM: Accelerating Long-Context Diffusion LLM Inference via Confidence-Guided Context Focusing (2026)
- Window-Diffusion: Accelerating Diffusion Language Model Inference with Windowed Token Pruning and Caching (2026)
- One Token Is Enough: Improving Diffusion Language Models with a Sink Token (2026)
- DAWN: Dependency-Aware Fast Inference for Diffusion LLMs (2026)
- POP: Online Structural Pruning Enables Efficient Inference of Large Foundation Models (2026)
- Streaming-dLLM: Accelerating Diffusion LLMs via Suffix Pruning and Dynamic Decoding (2026)
- CAPA: Contribution-Aware Pruning and FFN Approximation for Efficient Large Vision-Language Models (2026)