
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Sakana AI’s new technique lets multiple LLMs cooperate on a single task, performing trial-and-error together and combining their unique strengths to solve problems too complex for any individual model

July 8, 2025 //  by Finnovate

Sakana AI has introduced a technique that allows multiple large language models (LLMs) to cooperate on a single task, effectively creating a “dream team” of AI agents. The method, called Multi-LLM AB-MCTS, enables models to perform trial-and-error and to combine their unique strengths to solve problems that are too complex for any individual model.

For enterprises, this approach offers a way to build more robust and capable AI systems. Instead of being locked into a single provider or model, businesses could dynamically leverage the best aspects of different frontier models, assigning the right AI to the right part of a task to achieve superior results.

Sakana AI’s algorithm is an “inference-time scaling” technique. On tasks where a clear path to a solution existed, the algorithm quickly identified the most effective LLM and used it more frequently. More impressively, the team observed instances where the combined models solved problems that had previously been impossible for any single one of them.

To help developers and businesses apply the technique, Sakana AI has released the underlying algorithm as an open-source framework called TreeQuest, available under an Apache 2.0 license (usable for commercial purposes). TreeQuest provides a flexible API, allowing users to implement Multi-LLM AB-MCTS for their own tasks with custom scoring and logic.
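To make the core idea concrete, here is a minimal sketch of the allocation behavior described above: several models attempt a task repeatedly, a custom scoring function judges each candidate, and a bandit rule steers more of the trial budget toward whichever model is performing best. This is an illustrative toy, not TreeQuest’s actual API; the function names (`ucb1`, `solve`) and the stub “models” are assumptions for the example, and the real Multi-LLM AB-MCTS also searches over refinements of earlier answers, which this sketch omits.

```python
import math
import random

def ucb1(total_score, pulls, total_pulls, c=1.4):
    """Upper-confidence bound: trade off exploiting the best model so far
    against exploring the others."""
    if pulls == 0:
        return float("inf")  # try every model at least once
    return total_score / pulls + c * math.sqrt(math.log(total_pulls) / pulls)

def solve(models, score_fn, budget=100):
    """Spend `budget` attempts, routing each to the most promising model.
    `models` maps a name to a callable producing a candidate answer;
    `score_fn` maps a candidate to a score in [0, 1] (the custom logic)."""
    stats = {name: {"score": 0.0, "pulls": 0} for name in models}
    best_answer, best_score = None, -1.0
    for t in range(1, budget + 1):
        # Pick the model with the highest UCB value this round.
        name = max(
            stats,
            key=lambda n: ucb1(stats[n]["score"], stats[n]["pulls"], t),
        )
        candidate = models[name]()
        s = score_fn(candidate)
        stats[name]["score"] += s
        stats[name]["pulls"] += 1
        if s > best_score:
            best_answer, best_score = candidate, s
    return best_answer, stats

# Toy usage: two stub "models" with different hit rates stand in for LLM calls.
random.seed(0)
models = {
    "model_a": lambda: 42 if random.random() < 0.8 else 0,  # strong model
    "model_b": lambda: 42 if random.random() < 0.2 else 0,  # weak model
}
answer, stats = solve(models, score_fn=lambda x: 1.0 if x == 42 else 0.0)
```

After the run, `stats` shows the stronger stub receiving most of the budget, mirroring the paper’s observation that the algorithm identifies the most effective LLM and uses it more frequently.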


Category: Additional Reading


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
