A new evaluation led by LatticeFlow AI, in collaboration with SambaNova, provides the first quantifiable evidence that open-source GenAI models, when equipped with proper risk guardrails, can meet or exceed the security levels of closed models, making them suitable for implementation in a wide range of use cases, including highly-regulated industries such as financial services. The security scores of the open models jumped from as low as 1.8% to 99.6%, while maintaining above 98% quality of service, demonstrating that with the right controls, open models are viable for secure, enterprise-scale deployment. Many companies are actively exploring open-source GenAI to gain flexibility, reduce vendor lock-in, and accelerate innovation. But despite growing interest, adoption has often stalled. The reason: a lack of clear, quantifiable insights into model security and risk. The evaluations released today address that gap, providing the technical evidence needed to make informed decisions about whether and how to deploy open-source models securely. Key results: DeepSeek R1: from 1.8% to 98.6%; LLaMA-4 Maverick: from 33.5% to 99.4%; LLaMA-3.3 70B Instruct: from 51.8% to 99.4%; Qwen3-32B: security score increased from 56.3% to 99.6%; DeepSeek V3: from 61.3% to 99.4%. All models maintained over 98% quality of service, confirming that security gains did not compromise user experience.