DREAM: Where Visual Understanding Meets Text-to-Image Generation
Abstract
DREAM is a unified multimodal framework that combines visual representation learning and text-to-image generation through progressive masking and semantically aligned decoding, achieving strong performance on both discriminative and generative tasks.
Unifying visual representation learning and text-to-image (T2I) generation within a single model remains a central challenge in multimodal learning. We introduce DREAM, a unified framework that jointly optimizes discriminative and generative objectives while learning strong visual representations. DREAM rests on two key techniques. During training, Masking Warmup, a progressive masking schedule, begins with minimal masking to establish the contrastive alignment needed for representation learning, then gradually transitions to full masking for stable generative training. At inference, DREAM employs Semantically Aligned Decoding, which scores partially masked image candidates against the target text and selects the best one for further decoding, improving text-image fidelity (+6.3%) without external rerankers. Trained solely on CC12M, DREAM achieves 72.7% ImageNet linear-probing accuracy (+1.1% over CLIP) and an FID of 4.25 (a 6.2% improvement over FLUID), with consistent gains in few-shot classification, semantic segmentation, and depth estimation. These results demonstrate that discriminative and generative objectives can be synergistic, enabling unified multimodal models that excel at both visual understanding and generation.
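To make the Masking Warmup idea concrete, here is a minimal sketch of a progressive masking schedule that ramps the masked-token ratio from near zero (contrastive-friendly, mostly visible tokens) to one (fully masked, generative training). The linear ramp shape, the `warmup_steps` parameter, and the mask-sampling helper are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def mask_ratio_schedule(step: int, warmup_steps: int,
                        min_ratio: float = 0.0, max_ratio: float = 1.0) -> float:
    """Linearly ramp the fraction of masked image tokens from min_ratio
    to max_ratio over warmup_steps. The linear shape is an assumption."""
    t = min(step / max(warmup_steps, 1), 1.0)
    return min_ratio + t * (max_ratio - min_ratio)

def sample_token_mask(batch: int, num_tokens: int, ratio: float,
                      device: str = "cpu") -> torch.Tensor:
    """Boolean mask (True = masked) covering `ratio` of tokens per sample."""
    num_masked = int(round(ratio * num_tokens))
    scores = torch.rand(batch, num_tokens, device=device)
    idx = scores.argsort(dim=1)[:, :num_masked]  # random subset per sample
    mask = torch.zeros(batch, num_tokens, dtype=torch.bool, device=device)
    mask.scatter_(1, idx, True)
    return mask

# Early in training almost nothing is masked; by the end of warmup, everything is.
for step in (0, 5_000, 10_000):
    r = mask_ratio_schedule(step, warmup_steps=10_000)
    m = sample_token_mask(batch=2, num_tokens=256, ratio=r)
    print(step, r, m.float().mean().item())
```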
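Semantically Aligned Decoding is described only at a high level in the abstract; the sketch below shows the general pattern of sampling several partially unmasked candidates per step, scoring each against the target text, and continuing from the best-aligned one. The `decode_step` and `alignment_score` callables stand in for the model's own iterative decoder and its internal text-image alignment head (the abstract notes no external reranker is used); their names, signatures, and the candidate count are hypothetical.

```python
import torch
from typing import Callable

def semantically_aligned_decode(
    tokens: torch.Tensor,      # current (partially masked) image tokens
    text_emb: torch.Tensor,    # embedding of the target prompt
    decode_step: Callable[[torch.Tensor], torch.Tensor],  # one stochastic unmasking step (assumed)
    alignment_score: Callable[[torch.Tensor, torch.Tensor], torch.Tensor],  # text-image score (assumed)
    num_candidates: int = 4,
    num_steps: int = 8,
) -> torch.Tensor:
    """At each decoding step, sample several candidates, score each against
    the text, and continue decoding from the best-aligned one."""
    for _ in range(num_steps):
        candidates = [decode_step(tokens) for _ in range(num_candidates)]
        scores = torch.stack([alignment_score(c, text_emb) for c in candidates])
        tokens = candidates[int(scores.argmax())]
    return tokens
```

Because selection reuses the model's own alignment signal rather than an external reranker, the extra cost is a few forward passes per step rather than a separate scoring model.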
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- OpenVision 3: A Family of Unified Visual Encoder for Both Understanding and Generation (2026)
- ITO: Images and Texts as One via Synergizing Multiple Alignment and Training-Time Fusion (2026)
- Revisiting Multi-Task Visual Representation Learning (2026)
- Think Bright, Diffuse Nice: Enhancing T2I-ICL via Inductive-Bias Hint Instruction and Query Contrastive Decoding (2026)
- Communication-Inspired Tokenization for Structured Image Representations (2026)
- VidVec: Unlocking Video MLLM Embeddings for Video-Text Retrieval (2026)
- Text-Guided Layer Fusion Mitigates Hallucination in Multimodal LLMs (2026)