AI agent and assistant platform provider Vectara has launched a new Hallucination Corrector, directly integrated into its service and designed to detect and mitigate costly, unreliable responses from enterprise AI models. In initial testing, Vectara said the Hallucination Corrector reduced hallucination rates in enterprise AI systems to about 0.9%.

The Corrector builds on Vectara's Hughes Hallucination Evaluation Model (HHEM), which scores an answer against its source material with a probability score between 0 and 1, where 0 means completely inaccurate (a total hallucination) and 1 means perfect factual accuracy. HHEM is available on Hugging Face, where it was downloaded more than 250,000 times last month, making it one of the most popular hallucination detectors on the platform.

When it detects a factually inconsistent response, the Corrector provides a detailed output that includes an explanation of why the statement is a hallucination and a corrected version incorporating only the minimal changes needed for accuracy. By default, the company automatically uses the corrected output in summaries shown to end users, while experts testing applications can use the full explanation and suggested fixes to refine or fine-tune their models and guardrails against hallucinations. The Corrector can also show the original summary but use the correction data to flag potential issues, offering the corrected summary as an optional fix. For LLM answers that are misleading but not quite outright false, the Hallucination Corrector can refine the response to reduce its uncertainty score, according to the customer's settings.
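For readers who want to try the detection side directly, HHEM's open weights can be loaded from Hugging Face. The sketch below follows the usage pattern published on the model card; the `predict()` helper comes from the model's bundled custom code (hence `trust_remote_code=True`), and the exact interface may vary across model versions.

```python
# A minimal sketch of scoring answers with Vectara's open HHEM model,
# following the usage pattern on its Hugging Face model card.
from transformers import AutoModelForSequenceClassification

# Each pair is (source passage, generated answer).
pairs = [
    ("The capital of France is Paris.", "The capital of France is Berlin."),
    ("The capital of France is Paris.", "Paris is the capital of France."),
]

# trust_remote_code=True pulls in the model's bundled predict() helper.
model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

# Scores are probabilities of factual consistency: 0 is a total
# hallucination, 1 is perfect agreement with the source.
scores = model.predict(pairs)
for (source, answer), score in zip(pairs, scores.tolist()):
    print(f"{score:.3f}  {answer!r}")
```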
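Vectara has not published the Corrector's exact response schema here, so the following is purely illustrative: the field names (`explanation`, `corrected_summary`, `score`) and the `mode` setting are hypothetical, sketching how a client might route the three delivery options described above (auto-correct for end users, flag-with-fix, and full detail for expert testing).

```python
from dataclasses import dataclass

# Hypothetical shape of a Corrector result; these field names are
# illustrative, not Vectara's published schema.
@dataclass
class CorrectorResult:
    original_summary: str
    corrected_summary: str   # minimal edits for factual accuracy
    explanation: str         # why the statement was flagged
    score: float             # HHEM-style consistency score, 0..1

def render(result: CorrectorResult, mode: str = "auto_correct") -> str:
    """Route the result per a hypothetical per-customer setting."""
    if mode == "auto_correct":
        # Default behavior: end users see only the corrected summary.
        return result.corrected_summary
    if mode == "flag_with_fix":
        # Show the original, flag the issue, and offer the fix.
        return (
            f"{result.original_summary}\n"
            f"[flagged: {result.explanation}]\n"
            f"Suggested correction: {result.corrected_summary}"
        )
    # "expert" mode: surface everything for testing and fine-tuning.
    return (
        f"score={result.score:.2f}\n"
        f"explanation={result.explanation}\n"
        f"correction={result.corrected_summary}"
    )
```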