MixLLM: Dynamic Routing in Mixed Large Language Models
🤖 What is MixLLM? A Router to Choose the Best LLM to Answer!
Given a mix of LLMs, each with its own strengths and weaknesses, LLM routing aims to identify the most suitable model for each incoming query, maximizing response quality while minimizing cost and latency. The challenges involve: (1) dynamic trade-offs among quality, cost, and latency; (2) enabling continual learning in deployed systems; and (3) navigating a varying set of LLM candidates over time (e.g., new LLMs being added or old ones removed).
To bridge these gaps, we develop MixLLM, a dynamic contextual-bandit-based routing system for query-LLM assignment. Specifically, we first leverage query tags to enhance query embeddings for the routing task. Next, we design lightweight prediction models to estimate each candidate LLM's response quality and cost for a given query. We then devise a meta-decision maker that chooses the query-LLM assignment that best trades off response quality, cost, and latency. Finally, the system benefits from continual training, allowing it to adapt to evolving queries and user feedback over time.
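As a rough illustration of this pipeline, the sketch below stubs out the two pieces described above: lightweight per-LLM predictors and a meta-decision maker that scores each candidate. It is not the authors' implementation; the linear quality heads, the scalarized score with weights `alpha`/`beta`/`gamma`, and all names (`LLMCandidate`, `route`, `update`) are hypothetical placeholders.

```python
import numpy as np

class LLMCandidate:
    """One routing arm: a candidate LLM with lightweight predictors."""

    def __init__(self, name, quality_w, cost_per_token, latency_penalty):
        self.name = name
        self.quality_w = quality_w              # linear head over the query embedding
        self.cost_per_token = cost_per_token    # e.g., USD per output token
        self.latency_penalty = latency_penalty  # 0 when within the time budget

    def predict_quality(self, query_emb):
        # Lightweight quality estimate: a linear head on the (tag-enhanced)
        # query embedding, standing in for MixLLM's learned predictor.
        return float(np.dot(self.quality_w, query_emb))

    def predict_cost(self, query_emb, expected_tokens=256):
        # Crude cost estimate from an assumed response length.
        return self.cost_per_token * expected_tokens

    def update(self, query_emb, observed_quality, lr=0.05):
        # Bandit-style continual update: nudge the quality head toward the
        # feedback observed for this chosen arm (one SGD step on squared error).
        err = observed_quality - self.predict_quality(query_emb)
        self.quality_w += lr * err * query_emb


def route(query_emb, candidates, alpha=1.0, beta=0.5, gamma=0.5):
    """Meta-decision maker: pick the LLM maximizing a quality/cost/latency
    trade-off, score = alpha*quality - beta*cost - gamma*latency_penalty
    (a generic scalarization; the paper's exact objective may differ)."""
    return max(
        candidates,
        key=lambda llm: (alpha * llm.predict_quality(query_emb)
                         - beta * llm.predict_cost(query_emb)
                         - gamma * llm.latency_penalty),
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query_emb = rng.normal(size=8)  # stand-in for a tag-enhanced embedding
    candidates = [
        LLMCandidate("gpt-4", rng.normal(size=8), 3e-5, 1.0),
        LLMCandidate("small-llm", rng.normal(size=8), 5e-6, 0.2),
    ]
    best = route(query_emb, candidates)
    print("Routed to:", best.name)
    best.update(query_emb, observed_quality=0.9)  # user feedback closes the loop
```

The `update` call at the end mirrors the continual-learning step: only the chosen arm receives feedback, so the router learns online in the contextual-bandit fashion described above.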
Our extensive experiments show that MixLLM achieves the best trade-off among response quality, cost, and latency, reaching 97.25% of GPT-4's response quality at 24.18% of its cost while satisfying the latency constraint.
🎯 Try MixLLM Routing: Experiment with Samples or Your Own Query!
Experience MixLLM's intelligent routing by selecting a sample query or entering your own, and explore how MixLLM dynamically assigns each query to the best LLM!
📌 Try a Sample Query (Quick Demo)
🔍 Test Your Own Query (Full Routing Flow)
📖 How Does MixLLM Work? Find the Answer in the Figure Below!
📄 Citation (BibTeX)
@article{wang2025mixllm,
  title={MixLLM: Dynamic Routing in Mixed Large Language Models},
  author={Wang, Xinyuan and Liu, Yanchi and Cheng, Wei and Zhao, Xujiang and Chen, Zhengzhang and Yu, Wenchao and Fu, Yanjie and Chen, Haifeng},
  journal={arXiv preprint arXiv:2502.18482},
  year={2025}
}