Verax AI has launched Verax Protect, a cutting-edge solution, suitable even for companies in highly regulated industries, that aims to help large enterprises uncover and mitigate Generative AI risks, including unintended leaks of sensitive data.

Key capabilities of Verax Protect:

- Prevent sensitive data from leaking into third-party AI tools: AI tools encourage users to input as much data as possible in order to maximise their productivity benefits, which often leads to proprietary and sensitive data being shared with unvetted third-party providers (a simplified illustration of this prompt-screening pattern appears after the list).
- Prevent AI tools from exposing information to users who are not authorized to access it: The increasing use of AI tools to generate internal reports and summarize sensitive company documents opens the door to oversharing, raising the risk that employees see information they are not meant to access.
- Enforce organizational policies on AI: In contrast to the currently popular but largely ineffective methods of ensuring employee compliance with AI policies, such as training sessions and reminder pop-up banners, Verax Protect enables automatic enforcement of corporate AI policies, preventing both accidental and deliberate violations.
- Comply with security and data protection certifications: Many compliance certifications, such as those covering GDPR in Europe or sector-specific U.S. laws like HIPAA for healthcare and GLBA for financial services, require evidence of an effort to safeguard sensitive and private data. Generative AI adoption makes such efforts more difficult to implement and even harder to demonstrate. Verax Protect helps prove that sensitive and private data is safeguarded even when AI is used.
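Verax has not published implementation details, but the first capability follows a familiar data-loss-prevention pattern: inspect prompts before they leave the organization's boundary and redact anything sensitive. The sketch below is a hypothetical, simplified illustration of that general pattern; the pattern list, regexes, and `redact_prompt` function are assumptions for illustration, not Verax Protect's actual interface.

```python
import re

# Hypothetical patterns standing in for an organization's definition of
# "sensitive data"; a real deployment would use far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace detected sensitive spans with placeholders before the
    prompt is forwarded to any third-party AI tool."""
    findings = []
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)
    return redacted, findings

if __name__ == "__main__":
    prompt = "Summarise this contract for jane.doe@example.com, card 4111 1111 1111 1111."
    safe_prompt, findings = redact_prompt(prompt)
    print(safe_prompt)   # placeholders replace the raw values
    print(findings)      # ['email', 'credit_card'] could feed a policy-audit log
```

In a product like the one described, the interesting work lies in detection accuracy and in enforcing per-user authorization on AI outputs; the snippet only conveys the gateway idea of screening data before it reaches an unvetted provider.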