JAEGER: Joint 3D Audio-Visual Grounding and Reasoning in Simulated Physical Environments
Abstract
JAEGER extends audio-visual large language models to 3D space by integrating RGB-D observations and multi-channel audio to improve spatial reasoning and source localization.
Current audio-visual large language models (AV-LLMs) are predominantly restricted to 2D perception, relying on RGB video and monaural audio. This design choice introduces a fundamental dimensionality mismatch that precludes reliable source localization and spatial reasoning in complex 3D environments. We address this limitation by presenting JAEGER, a framework that extends AV-LLMs to 3D space, enabling joint spatial grounding and reasoning through the integration of RGB-D observations and multi-channel first-order ambisonics. A core contribution of our work is the neural intensity vector (Neural IV), a learned spatial audio representation that encodes robust directional cues to enhance direction-of-arrival estimation, even in adverse acoustic scenarios with overlapping sources. To facilitate large-scale training and systematic evaluation, we propose SpatialSceneQA, a benchmark of 61k instruction-tuning samples curated from simulated physical environments. Extensive experiments demonstrate that our approach consistently surpasses 2D-centric baselines across diverse spatial perception and reasoning tasks, underscoring the necessity of explicit 3D modeling for advancing AI in physical environments. Our source code, pre-trained model checkpoints, and datasets will be released upon acceptance.
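For context, the Neural IV builds on the classical acoustic intensity vector, a standard directional cue in sound event localization: taking the real part of the product between the conjugated omnidirectional FOA channel W and the three dipole channels (X, Y, Z) yields, per time-frequency bin, a vector whose direction indicates the dominant source's direction of arrival. The sketch below is a minimal NumPy illustration of this classical cue, not the paper's learned implementation; the [W, X, Y, Z] channel ordering, function name, and shapes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def foa_intensity_vector(foa: np.ndarray, fs: int, nperseg: int = 512) -> np.ndarray:
    """Classical active intensity vector from first-order ambisonics.

    foa: (4, n_samples) array with assumed channel ordering [W, X, Y, Z].
    Returns unit direction vectors of shape (3, F, T), one per TF bin.
    """
    # STFT of each channel along the time axis: spec has shape (4, F, T).
    _, _, spec = stft(foa, fs=fs, nperseg=nperseg)
    W, XYZ = spec[0], spec[1:4]

    # Active intensity: I(t, f) = Re{ conj(W) * [X, Y, Z] }.
    # Up to a sign convention, its direction gives the dominant DOA per bin.
    intensity = np.real(np.conj(W)[None, :, :] * XYZ)  # (3, F, T)

    # Normalize to unit vectors; eps guards silent (near-zero) bins.
    norm = np.linalg.norm(intensity, axis=0, keepdims=True)
    return intensity / np.maximum(norm, 1e-8)

# Example usage on synthetic 4-channel audio (1 second at 16 kHz):
fs = 16000
foa = np.random.randn(4, fs)
doa_map = foa_intensity_vector(foa, fs)
print(doa_map.shape)  # (3, 257, T): per-bin unit DOA vectors
```

A learned representation such as Neural IV would presumably refine these raw per-bin estimates, which degrade under reverberation and overlapping sources; the classical vector above is only the signal-level starting point.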
Community
3D AV-LLM leveraging RGB-D and First-Order Ambisonics for end-to-end grounding and spatial reasoning.
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- The World is Not Mono: Enabling Spatial Understanding in Large Audio-Language Models (2026)
- SpatialMosaic: A Multiview VLM Dataset for Partial Visibility (2025)
- SpatialV2A: Visual-Guided High-fidelity Spatial Audio Generation (2026)
- Spatial-aware Vision Language Model for Autonomous Driving (2025)
- PhaseCoder: Microphone Geometry-Agnostic Spatial Audio Understanding for Multimodal LLMs (2026)
- Spa3R: Predictive Spatial Field Modeling for 3D Visual Reasoning (2026)
- DynFOA: Generating First-Order Ambisonics with Conditional Diffusion for Dynamic and Acoustically Complex 360-Degree Videos (2026)