By optimizing the use of inference-time compute, LLMs can achieve substantial performance gains without the need for larger models or extensive pre-training.

August 27, 2024 // by Finnovate
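One common way to spend extra inference-time compute is best-of-N sampling: draw several candidate completions from the same model and keep the one a scorer ranks highest. The sketch below illustrates the idea only; `generate_candidates` and `score` are hypothetical stand-ins (random numbers and a toy scoring rule), not a real model or verifier.

```python
import random

def generate_candidates(prompt: str, n: int, rng: random.Random) -> list[float]:
    # Stand-in for sampling n completions from an LLM at temperature > 0.
    # Each "completion" is just a number here; a real system would call a model.
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def score(prompt: str, candidate: float) -> float:
    # Stand-in for a verifier or reward model scoring a completion.
    # Toy rule: candidates closer to 0 are "better".
    return -abs(candidate)

def best_of_n(prompt: str, n: int, seed: int = 0) -> float:
    # Spend more inference-time compute (larger n) to get a better answer
    # from the same fixed model, instead of training a bigger one.
    rng = random.Random(seed)
    candidates = generate_candidates(prompt, n, rng)
    return max(candidates, key=lambda c: score(prompt, c))
```

With a fixed seed, raising `n` can only improve (never worsen) the selected candidate's score, since the smaller candidate pool is a subset of the larger one — the toy analogue of the performance gains described above.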