
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.

TNG Technology Consulting’s adaptation of DeepSeek’s open-source model R1-0528 is 200% faster, scores upwards of 90% of R1-0528’s intelligence benchmark scores, and generates answers with less than 40% of R1-0528’s output token count

July 8, 2025 // by Finnovate

The latest version of DeepSeek’s hit open-source model, R1-0528, is already being adapted and remixed by other AI labs and developers, thanks in large part to its permissive Apache 2.0 license. German firm TNG Technology Consulting GmbH has released one such adaptation: DeepSeek-TNG R1T2 Chimera, the latest model in its Chimera large language model (LLM) family. R1T2 delivers a notable boost in efficiency and speed, scoring upwards of 90% of R1-0528’s intelligence benchmark scores while generating answers with less than 40% of R1-0528’s output token count. That means it produces shorter responses, which translates directly into faster inference and lower compute costs.

This gain is made possible by TNG’s Assembly-of-Experts (AoE) method, a technique for building LLMs by selectively merging the weight tensors (internal parameters) of multiple pre-trained models. R1T2 is constructed without further fine-tuning or retraining. It inherits the reasoning strength of R1-0528, the structured thought patterns of R1, and the concise, instruction-oriented behavior of V3-0324, delivering a more efficient yet capable model for enterprise and research use.
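To make the idea concrete, the sketch below shows one way selective weight-tensor merging of pre-trained checkpoints can be expressed in code. It is an illustrative approximation only, not TNG’s actual AoE implementation: the function `merge_checkpoints`, the parent checkpoints, the mixing ratios, and the rule for which tensors get merged are all assumed for the example, since the article does not disclose them.

```python
# Illustrative sketch of Assembly-of-Experts-style checkpoint merging.
# All names, ratios, and selection rules here are hypothetical assumptions;
# TNG's actual method and parameters are not described in the article.
import torch


def merge_checkpoints(parents, ratios, select=lambda name: True):
    """Selectively blend weight tensors from several pre-trained parents.

    parents: list of state_dicts with identical keys and shapes.
    ratios:  per-parent mixing weights (assumed to sum to 1).
    select:  predicate deciding which tensors are blended; tensors it
             rejects are copied unchanged from the first parent.
    """
    merged = {}
    for name, tensor in parents[0].items():
        if select(name):
            # Weighted average of the corresponding tensor from each parent.
            mixed = sum(r * p[name].float() for r, p in zip(ratios, parents))
            merged[name] = mixed.to(tensor.dtype)
        else:
            # Keep this tensor as-is from the first parent.
            merged[name] = tensor.clone()
    return merged


# Hypothetical usage: blend only MLP/expert tensors, keep the rest from parent A.
# state_a = torch.load("parent_a.pt")
# state_b = torch.load("parent_b.pt")
# chimera = merge_checkpoints([state_a, state_b], ratios=[0.6, 0.4],
#                             select=lambda name: "mlp" in name)
```

Because merging of this kind operates directly on existing weights, it requires no gradient updates, which is consistent with the article’s point that R1T2 involves no further fine-tuning or retraining.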


Category: AI & Machine Economy, Innovation Topics

