The Mixture of Agents (MoA) approach introduced by Together AI leverages the collective strengths of multiple large language models (LLMs) through a layered architecture. Each layer comprises several LLM agents, and each agent uses the outputs of the previous layer as auxiliary information to generate a refined response. By integrating the diverse capabilities and insights of its constituent models, MoA behaves as a single, more robust and versatile model: it scored 65.1% on the AlpacaEval 2.0 benchmark, surpassing the previous leader, GPT-4o, at 57.5%.
Within this layered architecture, the participating models are categorized into two roles: "proposers" and "aggregators." Proposers generate initial reference responses that offer diverse perspectives on the query, while aggregators synthesize those responses into a single higher-quality output. Every agent in a given layer receives all of the previous layer's responses as auxiliary context, and the process repeats layer by layer until a final aggregator produces the comprehensive, refined response.
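A minimal sketch of this layered flow is shown below. The model names, the `query_model` helper, and the prompt wording are illustrative assumptions, not Together AI's reference implementation; in practice `query_model` would wrap a real chat-completion API call.

```python
# Hypothetical model line-up: any capable LLMs can serve as proposers or aggregator.
PROPOSER_LAYERS = [
    ["model-a", "model-b", "model-c"],   # layer 1 proposers
    ["model-a", "model-b", "model-c"],   # layer 2 proposers (models may be reused)
]
AGGREGATOR = "model-d"                   # final aggregator


def query_model(model: str, prompt: str) -> str:
    """Placeholder for a real chat-completion call to the named model."""
    raise NotImplementedError


def build_prompt(user_query: str, prior_responses: list[str]) -> str:
    """Prepend the previous layer's answers as auxiliary context (illustrative wording)."""
    if not prior_responses:
        return user_query
    context = "\n\n".join(
        f"[Response {i + 1}]\n{r}" for i, r in enumerate(prior_responses)
    )
    return (
        "You are given several candidate responses to a query. "
        "Use them as auxiliary information and produce an improved answer.\n\n"
        f"{context}\n\nQuery: {user_query}"
    )


def mixture_of_agents(user_query: str) -> str:
    prior: list[str] = []
    # Each proposer layer answers the query, seeing the previous layer's outputs.
    for layer in PROPOSER_LAYERS:
        prior = [query_model(m, build_prompt(user_query, prior)) for m in layer]
    # A single aggregator synthesizes the last layer's responses into the final answer.
    return query_model(AGGREGATOR, build_prompt(user_query, prior))
```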
This division of roles builds on an observed collaborativeness of LLMs: a model tends to produce a better answer when it can see other models' responses to the same query, even when those responses are of lower quality than what it would generate on its own. Proposers supply the diverse reference answers, aggregators exploit this effect to synthesize them, and each successive round of proposing and aggregating further refines the result.
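To make that observation concrete, the short sketch below (reusing the hypothetical `query_model` and `build_prompt` helpers from the previous sketch) contrasts a model answering a query directly with the same model answering after seeing two peers' responses; the query and model names are placeholders.

```python
user_query = "Explain the difference between concurrency and parallelism."

# Answer produced by model-a on its own.
direct = query_model("model-a", user_query)

# Answers from two other models, used purely as auxiliary information.
peer_outputs = [query_model(m, user_query) for m in ("model-b", "model-c")]

# The same model answers again, now with the peers' responses in its prompt.
informed = query_model("model-a", build_prompt(user_query, peer_outputs))
```

Comparing `direct` and `informed` on a preference benchmark captures, in miniature, the collaborativeness effect that the propose-then-aggregate layering is designed to exploit.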