Sakana’s Continuous Thought Machine (CTM) AI model architecture uses short-term memory of previous states and allows neural synchronization to emerge, mirroring brain-like intelligence

May 13, 2025 // by Finnovate

AI startup Sakana has unveiled a new type of AI model architecture called the Continuous Thought Machine (CTM). Rather than relying on fixed, parallel layers that process inputs all at once, as Transformer models do, CTMs unfold computation over internal steps within each input/output unit, known as an artificial “neuron.” Each neuron retains a short history of its previous activity and uses that memory to decide when to activate again. This added internal state lets a CTM adjust the depth and duration of its reasoning dynamically to match the complexity of the task, making each neuron far more informationally dense and complex than its counterpart in a typical Transformer.

These internal steps are known as “ticks.” Because every neuron operates on its own internal timeline, making activation decisions from its short-term memory of previous states, the model can reason progressively, taking more ticks for harder inputs and fewer for simpler ones. The tick count varies with the input and can even differ across runs on identical inputs, since each neuron decides for itself how many ticks to take before producing an output, or whether to produce one at all.

This represents both a technical and a philosophical departure from conventional deep learning, moving toward a more biologically grounded model. Sakana frames CTMs as a step toward more brain-like intelligence: systems that adapt over time, process information flexibly, and engage in deeper internal computation when needed. The company’s stated goal is “to eventually achieve levels of competency that rival or surpass human brains.”

The CTM is built around two key mechanisms, sketched in code below. First, each neuron maintains a short “history,” or working memory, of when it activated and why, and uses this history to decide when to fire next. Second, neural synchronization, meaning how and when groups of the model’s artificial neurons “fire,” or process information together, is allowed to happen organically: groups of neurons decide when to fire together based on internal alignment, not external instructions or reward shaping. These synchronization events are then used to modulate attention and produce outputs; attention is directed toward those areas where more neurons are firing. The model isn’t just processing data, it’s timing its thinking to match the complexity of the task. Together, these mechanisms let CTMs reduce computational load on simpler tasks while applying deeper, prolonged reasoning where needed.
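Sakana’s paper defines these mechanisms in full mathematical detail; purely as an intuition pump, the minimal Python sketch below (not Sakana’s implementation) shows the two ideas in miniature. Each toy neuron keeps a rolling buffer of its own recent pre-activations and uses it to decide whether to fire on a given tick, while a population-level synchronization signal weights the output toward neurons that fire together. Every name and constant here (ToyNeuron, HISTORY_LEN, FIRE_THRESH, the stopping rule) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants -- assumptions, not values from the paper.
HISTORY_LEN = 5    # length of each neuron's short-term memory
MAX_TICKS = 50     # upper bound on internal reasoning steps
FIRE_THRESH = 0.5  # firing threshold

class ToyNeuron:
    """A toy neuron that remembers its own recent pre-activations and
    uses that memory, alongside the current input, to decide when to fire."""
    def __init__(self, n_inputs):
        self.w = rng.normal(scale=0.5, size=n_inputs)          # input weights
        self.mem_w = rng.normal(scale=0.5, size=HISTORY_LEN)   # memory weights
        self.history = np.zeros(HISTORY_LEN)                   # short-term memory

    def tick(self, x):
        # Pre-activation mixes the current input with the neuron's
        # memory of its own recent pre-activations.
        pre = self.w @ x + self.mem_w @ self.history
        self.history = np.roll(self.history, 1)
        self.history[0] = pre
        return pre > FIRE_THRESH, np.tanh(pre)  # (fired?, activation)

def run_ctm(neurons, x):
    """Unfold computation over internal ticks; stop once the population's
    firing pattern stabilises -- a crude stand-in for the model deciding
    it has reasoned long enough about this input."""
    prev = None
    for t in range(1, MAX_TICKS + 1):
        fired, acts = zip(*(n.tick(x) for n in neurons))
        pattern = np.array(fired)
        acts = np.array(acts)
        # Synchronization proxy: attention-like weights concentrate on
        # the subset of neurons that are firing together on this tick.
        attn = pattern / max(pattern.sum(), 1)
        out = attn @ acts
        if prev is not None and np.array_equal(pattern, prev):
            break  # firing pattern stabilised -> stop ticking early
        prev = pattern
    return out, t

for name, x in [("easy", np.zeros(8)), ("hard", rng.normal(size=8))]:
    neurons = [ToyNeuron(n_inputs=8) for _ in range(16)]
    out, ticks = run_ctm(neurons, x)
    print(f"{name} input -> {ticks} ticks, output {out:.3f}")
```

In the real CTM the per-neuron dynamics are learned and synchronization is computed from the neurons’ activation histories to drive attention; the sketch preserves only the control flow the article describes: memory-conditioned firing, emergent group timing, and a variable number of ticks per input.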
