Dynamic Model Routing and Cascading for Efficient LLM Inference: A Survey
Abstract
The rapid growth of large language models (LLMs) with diverse capabilities, costs, and domains has created a critical need for intelligent model selection at inference time. While smaller models suffice for routine queries, complex tasks demand more capable models. Static model deployment, however, ignores the complexity and domain of incoming queries, leading to suboptimal performance and increased costs. Dynamic routing systems that adaptively select models based on query characteristics have emerged as a solution to this challenge. We provide a systematic analysis of state-of-the-art multi-LLM routing and cascading approaches. In contrast to mixture-of-experts architectures, which route within a single model, we study routing across multiple independently trained LLMs. We cover diverse routing paradigms, including query difficulty, human preferences, clustering, uncertainty quantification, reinforcement learning, multimodality, and cascading. For each paradigm, we analyze representative methods and examine key trade-offs. Beyond this taxonomy, we introduce a conceptual framework that characterizes routing systems along three dimensions: when decisions are made, what information is used, and how they are computed. This perspective highlights that practical systems are often compositional, integrating multiple paradigms under operational constraints. Our analysis demonstrates that effective multi-LLM routing requires balancing competing objectives, and that the optimal routing strategy depends on deployment and computational constraints. Well-designed routing systems can outperform even the most powerful individual models by strategically leveraging specialized capabilities across models while maximizing efficiency gains. Nonetheless, open challenges remain in developing routing mechanisms that generalize across diverse architectures, modalities, and applications.
Community
Smaller models can handle most queries, but complex ones may require more capable (and more expensive) models. Multi-LLM routing and cascading systems address this challenge, and can outperform even the most powerful individual models in terms of both cost and quality.
Our new survey maps the state of the art in multi-LLM routing and cascading across six paradigms: difficulty-aware routing, human preference alignment, clustering, reinforcement learning, uncertainty quantification, and cascading. We also introduce a design framework for understanding routing decisions along three dimensions: when they are made, what signals they use, and how they are computed.
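The cascading idea above can be made concrete with a minimal sketch: try a cheap model first, and escalate to a more capable one only when the cheap model's confidence falls below a threshold. The function names, the confidence signal, and the threshold value here are illustrative assumptions, not interfaces from the survey.

```python
def cascade(query, small_model, large_model, threshold=0.8):
    """Two-stage cascade: answer with the cheap model, escalate when unsure.

    `small_model` is assumed to return (answer, confidence in [0, 1]);
    `large_model` returns an answer. Both are hypothetical stand-ins.
    """
    answer, confidence = small_model(query)
    if confidence >= threshold:
        return answer, "small"          # cheap path: confident enough
    return large_model(query), "large"  # escalate to the stronger model


# Toy stand-ins: the "small" model is confident only on short queries,
# mimicking difficulty-aware behavior; the "large" model always answers.
def toy_small(q):
    return ("small-answer", 0.9 if len(q) < 20 else 0.3)

def toy_large(q):
    return "large-answer"
```

The threshold controls the cost/quality trade-off the survey emphasizes: raising it sends more traffic to the expensive model, lowering it saves cost at the risk of lower-quality answers.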