ObjEmbed: Towards Universal Multimodal Object Embeddings

ObjEmbed is a multimodal embedding model that decomposes an input image into multiple regional embeddings, one per individual object, along with global embeddings for the image as a whole. It is designed to bridge the gap between global image-text alignment and fine-grained region-phrase alignment.
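To make this decomposition concrete, here is a minimal sketch of what a single forward pass might return; the container, field names, and shapes are assumptions inferred from the description above, not the released API:

```python
# Hypothetical output container -- the class name, field names, and shapes
# below are assumptions inferred from the model description, not the
# released ObjEmbed interface.
from dataclasses import dataclass
import torch

@dataclass
class ObjEmbedOutput:
    global_embedding: torch.Tensor   # (d,)   whole-image embedding
    object_embeddings: torch.Tensor  # (n, d) one semantic embedding per object
    iou_embeddings: torch.Tensor     # (n, d) localization-quality embedding per object
    boxes: torch.Tensor              # (n, 4) predicted object boxes (assumed x1, y1, x2, y2)
```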

Key Features

ObjEmbed enjoys three key properties:

  • Object-Oriented Representation: It captures both the semantic and spatial aspects of objects by generating two complementary embeddings for each region: an object embedding for semantic matching and an IoU embedding that predicts localization quality (a scoring sketch follows this list).
  • Versatility: It seamlessly handles both region-level tasks (such as visual grounding and local image retrieval) and image-level tasks (global image retrieval).
  • Efficient Encoding: All objects in an image, along with the full image, are encoded in a single forward pass for high efficiency.
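As a concrete illustration of how the two per-region embeddings could be combined for visual grounding, the sketch below scores each region against a text phrase: cosine similarity on the object embedding supplies the semantic match, and a quality score read out from the IoU embedding downweights poorly localized regions. The random tensors stand in for ObjEmbed outputs, and the readout and fusion rules are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative grounding score combining the two per-region embeddings.
# Random tensors stand in for ObjEmbed outputs; the IoU readout and the
# fusion rule are assumptions for illustration only.
import torch
import torch.nn.functional as F

d, n = 512, 5  # embedding dimension and number of detected objects (illustrative)

object_embs = F.normalize(torch.randn(n, d), dim=-1)  # semantic embedding per region
iou_embs = F.normalize(torch.randn(n, d), dim=-1)     # localization-quality embedding per region
phrase_emb = F.normalize(torch.randn(d), dim=-1)      # text embedding of the query phrase

# Semantic match: cosine similarity between each object embedding and the phrase.
semantic = object_embs @ phrase_emb                   # (n,)

# Localization quality: read the IoU embedding out against the phrase and
# squash to (0, 1) as a predicted-IoU proxy (an assumed readout).
quality = torch.sigmoid(iou_embs @ phrase_emb)        # (n,)

# Fused score: a region must both match the phrase and be well localized.
scores = semantic * quality
best = scores.argmax().item()
print(f"best matching region: {best}, score {scores[best].item():.3f}")
```

For local image retrieval, the same object embeddings can be indexed and queried without a fused score, while the global embeddings cover whole-image retrieval; because everything comes from one forward pass, indexing cost stays low.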

Citation

If you find our work helpful for your research, please consider citing our paper:

@article{fu2026objembed,
  title={ObjEmbed: Towards Universal Multimodal Object Embeddings},
  author={Fu, Shenghao and Su, Yukun and Rao, Fengyun and LYU, Jing and Xie, Xiaohua and Zheng, Wei-Shi},
  journal={arXiv preprint arXiv:2602.01753},
  year={2026}
}

Model Details

  • Model size: 5B params
  • Tensor type: BF16
  • Format: Safetensors