Hirundo’s approach to AI hallucinations is about making fully trained AI models forget the bad things they learn, so they can’t use this mistaken knowledge

June 10, 2025 // by Finnovate

Hirundo AI Ltd., a startup that helps AI models "forget" the bad data that causes them to hallucinate and generate faulty responses, has raised $8 million in seed funding to popularize the idea of "machine unlearning."

Hirundo's approach to AI hallucinations is to make fully trained AI models forget the bad things they have learned, so they cannot draw on that mistaken knowledge when generating later responses. It does this by studying a model's behavior to locate the directions along which it can be manipulated. It identifies undesirable traits, investigates the root cause of the bad outputs, and then steers the model away from them, pinpointing where hallucinations originate among the billions of parameters that make up the model's knowledge base.

This retroactive approach to fixing undesirable behaviors and inaccuracies means a model's accuracy and reliability can be improved without retraining it. That matters because retraining can take many weeks and cost thousands or even millions of dollars. "With Hirundo, models can be remediated instantly at their core, working toward fairer and more accurate outputs," said Chief Executive Ben Luria.

Beyond helping models forget bad, biased or skewed data, the startup says it can also make them "unlearn" confidential information, preventing models from revealing secrets that should not be shared. It can do this today for open-source models such as Llama and Mistral, and it will soon be able to do the same for gated models such as OpenAI's GPT and Anthropic PBC's Claude. The startup says it has removed up to 70% of biases from DeepSeek Ltd.'s open-source R1 model, and in tests on Meta Platforms Inc.'s Llama it reduced hallucinations by 55% and successful prompt injection attacks by 85%.
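The article does not reveal how Hirundo implements this, but the "find the direction, then steer the model away from it" description matches the publicly known family of activation-steering and direction-ablation techniques. The sketch below is a minimal illustration of that general idea on a toy PyTorch module, not Hirundo's code: it assumes the unwanted behavior can be summarized by the difference of mean hidden activations on "bad" versus "good" prompts, and then projects that direction out of a layer's output with a forward hook.

```python
# Minimal sketch of direction ablation, a published technique in the same family
# as the behaviour steering described in the article. Hirundo's actual method is
# proprietary; the toy model, the layer choice, and the mean-difference estimate
# of the "bad" direction below are all illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

d_model = 16  # toy hidden size standing in for a transformer block's width
model = nn.Sequential(
    nn.Linear(d_model, d_model),
    nn.ReLU(),
    nn.Linear(d_model, d_model),
)

# 1. Estimate a direction associated with the unwanted behaviour: the difference
#    between mean activations on "bad" and "good" prompts (random stand-ins here).
bad_acts = torch.randn(32, d_model) + 2.0   # pretend activations on problematic inputs
good_acts = torch.randn(32, d_model)        # pretend activations on benign inputs
direction = bad_acts.mean(0) - good_acts.mean(0)
direction = direction / direction.norm()    # unit vector to project against

# 2. Register a forward hook that removes that component from the layer's output
#    at inference time, so the model can no longer "move" along that direction.
def ablate_direction(module, inputs, output):
    return output - (output @ direction).unsqueeze(-1) * direction

handle = model[0].register_forward_hook(ablate_direction)

# 3. Sanity check: after ablation the hidden state has no component along the
#    unwanted direction.
x = torch.randn(4, d_model)
with torch.no_grad():
    hidden = model[0](x)  # hook is applied to this module's output
print(torch.allclose(hidden @ direction, torch.zeros(4), atol=1e-6))  # True

handle.remove()
```

On a real LLM the same hook would typically be attached to the residual stream of one or more transformer blocks, and the contrasting prompt sets would be curated examples of the behavior to remove rather than random tensors.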


Category: AI & Machine Economy, Innovation Topics
