Hirundo AI Ltd., a startup that helps AI models “forget” bad data that causes them to hallucinate and generate faulty responses, has raised $8 million in seed funding to popularize the idea of “machine unlearning.”

Hirundo’s approach to AI hallucinations is to make fully trained models forget the bad things they have learned, so they cannot draw on that mistaken knowledge when generating responses later. It does this by studying a model’s behavior to locate the directions along which it can be manipulated. It identifies undesirable traits, investigates the root cause of those bad outputs, and then steers the model away from them, pinpointing where hallucinations originate among the billions of parameters that make up its knowledge base.

This retroactive approach to fixing undesirable behaviors and inaccuracies means models can be made more accurate and reliable without retraining. That’s a big deal, because retraining can take many weeks and cost thousands or even millions of dollars.

“With Hirundo, models can be remediated instantly at their core, working toward fairer and more accurate outputs,” said Chief Executive Ben Luria.

Besides helping models forget bad, biased or skewed data, the startup says it can also make them “unlearn” confidential information, preventing them from revealing secrets that shouldn’t be shared. It can do this for open-source models such as Llama and Mistral, and soon it will be able to do the same for gated models such as OpenAI’s GPT and Anthropic PBC’s Claude.

The startup says it has removed up to 70% of biases from DeepSeek Ltd.’s open-source R1 model. It has also tested its software on Meta Platforms Inc.’s Llama, reducing hallucinations by 55% and successful prompt injection attacks by 85%.
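
Hirundo hasn’t published the details of its method, but the description of locating “directions” tied to unwanted behavior and steering the model away from them resembles direction-based model editing. The sketch below is purely illustrative and uses made-up toy data: it estimates a direction in activation space associated with bad outputs, then projects that direction out of a weight matrix so the edited layer can no longer respond along it. None of the names or numbers come from Hirundo.

```python
import numpy as np

# Illustrative sketch only, not Hirundo's actual method: direction-based editing.
# Idea: find a direction in activation space linked to an unwanted behavior,
# then remove that direction from a trained weight matrix without retraining.

rng = np.random.default_rng(0)
d = 64                                  # hidden size of a toy layer

# Toy hidden activations collected while a model produces unwanted vs. acceptable outputs
bad_acts = rng.normal(size=(200, d)) + 2.0 * np.eye(d)[0]   # shifted along a "bad" axis
good_acts = rng.normal(size=(200, d))

# 1. Locate the direction: difference of mean activations, normalized
direction = bad_acts.mean(axis=0) - good_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# 2. "Unlearn" it: project the direction out of a downstream weight matrix,
#    leaving only the component orthogonal to it
W = rng.normal(size=(d, d))             # stand-in for a trained layer's weights
W_edited = W - np.outer(W @ direction, direction)

# The edited layer is now insensitive to inputs along the unwanted direction
print(np.linalg.norm(W @ direction))        # large
print(np.linalg.norm(W_edited @ direction)) # ~0
```

In a real model the same idea would be applied to specific layers identified as the source of the behavior, which is presumably where the “pinpointing” step described above comes in.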