The Quantum-Safe 360 Alliance, including members Keyfactor, IBM Consulting, Thales, and Quantinuum, unveiled its first comprehensive guide to help organizations navigate the global transition to post-quantum cryptography (PQC). The white paper marks the formal debut of the Quantum-Safe 360 Alliance, an evolving collective of industry leaders with unparalleled expertise spanning cryptographic design and deployment, public key infrastructure (PKI) and certificate lifecycle management, crypto-agile development practices, and quantum-safe cryptography. Collaborating to help enterprises tackle the challenges of PQC transitions, the Alliance’s white paper signals a coordinated, public effort to provide clear guidance and accelerate preparedness for the quantum era. Drawing upon each Alliance member’s deep proficiency and diverse capabilities, the white paper highlights the urgency of quantum-safe preparedness and the risks of inaction and provides actionable guidance on building stronger crypto-agility and starting PQC transitions. Formed to promote a unified, cross-industry approach, the Alliance aims to provide coordinated expertise and interoperable solutions to help enterprises safeguard data in the quantum era. By pooling resources and knowledge, the Alliance aims to help enterprises navigate the quantum era, including supplying organizations with cybersecurity best practices and interoperable solutions designed to work cohesively across platforms and industries. Key topics the white paper addresses include: The necessity of cryptographic agility to adapt to evolving threats; The challenges enterprises face in securing internal buy-in for PQC and strategies to overcome them; Case studies highlighting the value of holistic post-quantum preparation guided by the expertise and skills of Alliance members; A strategic roadmap for enterprises to adopt cryptographic agility; and, Best practices and tools for implementing a quantum-safe infrastructure, including PKI management, key lifecycle strategies, and quantum-generated randomness for enhanced security.
Circle and Paxos pilot “know‑your‑issuer” with Bluprynt to trace tokens to verified issuers, curbing counterfeit stablecoins and aiding auditors and regulators amid new U.S. rules
Stablecoin heavyweights Circle Internet Group Inc. and Paxos Trust Co. have piloted a new way to prevent copycats and help companies verify their digital asset holdings. The firms partnered with Bluprynt, a fintech startup using cryptography and blockchain technology to provide issuer verification when stablecoins are released by a company. The pilot provided a way to trace back a token to the verified issuer, using Bluprynt’s technology. Bluprynt’s technology gives “provenance upfront, reducing complexity, and providing regulators and investors with the transparency they need.” He noted that could help curb losses due to counterfeit tokens and impersonation attacks. It’s another sign that parts of the digital asset industry are maturing as they seek to meet new regulatory requirements being established in jurisdictions across the globe. Stablecoins are digital assets pegged to non-volatile assets, such as US dollars, and can be used as a cash equivalent for payments. The technology could benefit auditors, financial crime-fighters, and investors. Circle’s USDC is the second-largest stablecoin by market value, and Paxos issues and operates the blockchain infrastructure behind PayPal Inc.’s stablecoin, PYUSD. The number of firms offering stablecoins is expected to grow with the recently-enacted GENIUS Act, which provides a framework for dollar-backed stablecoins.
Akeyless enables AI agents to authenticate using dynamic, just-in-time verifiable machine identities such as cloud IAM roles eliminating the need to embed secrets in code, containers, or pipelines
Akeyless, the Unified Secrets & Machine Identity Platform for the AI-driven Era, announced the launch of Akeyless SecretlessAI, a breakthrough solution purpose-built to secure the rapidly expanding universe of AI agents and Model Context Protocol (MCP) servers. Akeyless SecretlessAI™ eliminates the need to embed secrets in code, containers, or pipelines. Instead, it introduces dynamic, just-in-time secrets provisioning, where AI agents and MCP servers authenticate using verifiable machine identities — such as cloud IAM roles or Kubernetes service accounts. Akeyless extends traditional secrets management by integrating with advanced identity frameworks like SPIFFE (Secure Production Identity Framework for Everyone) through its SPIRE plugins, enabling a ‘secretless’ authentication model for workloads. Additionally, Akeyless offers built-in PKI-as-a-Service capabilities that automate the lifecycle of certificates, including issuance, renewal, and revocation, all within a secure and scalable SaaS platform. Based on centrally managed policies, Akeyless provisions ephemeral, tightly scoped secrets at runtime. This approach drastically reduces the window of compromise and supports Zero Trust and Least Privilege principles. The solution offers comprehensive auditing and centralized governance, providing visibility into every request and action. It enables policy-based access control and full lifecycle automation, empowering security and DevOps teams to enforce compliance without slowing innovation.
HUMAN Security’s solution offers actor-level visibility and intent-based control across humans, bots and AI agents and evaluates behavior and context over time, not just identity, to secure every interaction across the customer journey
HUMAN Security has launched HUMAN Sightline, a cyberfraud defense solution featuring AgenticTrust. Developed to secure every interaction across the customer journey, HUMAN Sightline preserves legitimate human activity, prevents fraud and scraping, enables trusted automation through intent-based controls and accelerates investigations. With the introduction of AgenticTrust, the solution extends visibility and control to AI agent activity across consumer-facing surfaces, including every action taken before, during and after login. This helps enterprises embrace and adopt agentic commerce, reduce fraud losses and securely scale engagement and revenue in the AI era. HUMAN Sightline, featuring AgenticTrust, secures the customer journey and unlocks safe, scalable growth with actor-level visibility and intent-based control across humans, bots and AI agents – and delivers: Actor-level visibility into humans, bots and AI agents; Adaptive trust decisioning based on behavior, context and intent over time; Governance tools to enforce policies in real time; Investigative intelligence to uncover networks and attack patterns. Key capabilities include: Agentic visibility and control: Identify AI agent activity, prevent spoofing and enable trusted automation; Adaptive trust decisioning: Evaluate behavior and context, not just identity, to determine trust; Layered detection and learning: Detect evolving threats through multi-model signal analysis; Fraud investigation intelligence: Map attacker behavior and fraud networks across the journey; Govern bots, LLMs, and agents: Block, allow, rate-limit, redirect or monetize based on traffic type and intent; Seamless deployment: Integrates into WAF, CDN, CIAM and fraud infrastructure.
New Gmail phishing wave exploits fake security warnings; Google urges users to check account activity directly, never via email links, to prevent hijacking
Google has confirmed that Gmail attacks are surging, as hackers steal passwords to gain access to accounts. This also means a surge in “suspicious sign in prevented” emails, Google’s warning that “it recently blocked an attempt to access your account.” Attackers know this — that Gmail user concerns are heightened by security warnings, and they use this to frame their attacks. “Sometimes hackers try to copy the ‘suspicious sign in prevented’ email,” Google warns, “to steal other people’s account information,” which then gives those hackers access to user accounts. If you receive this Google email warning, do not click on any link or button within the email itself. Instead, “go to your Google Account, on the left navigation panel, click security, and on the recent security events panel, click to review security events.” If any of the events raise concerns — times or locations or devices you do not recognize — then “on the top of the page click secure your account” to change your password. If you do click a link from within this email or any other email purporting to come from Google, you will be taken to a sign-in page that will be a malicious fake. If you enter your user name and password into that page, you risk them being stolen by hackers to hijack your account. And that will give them access to everything.
Anthropic and OpenAI run first cross‑lab safety tests: o3 and o4‑mini align strongly, GPT‑4o/4.1 show misuse concerns, and all models exhibit varying sycophancy under stress
AI startups Anthropic and OpenAI said that they evaluated each other’s public models, using their own safety and misalignment tests. Sharing this news and the results in separate blog posts, the companies said they looked for problems like sycophancy, whistleblowing, self-preservation, supporting human misuse and capabilities that could undermine AI safety evaluations and oversight. OpenAI wrote in its post that this collaboration was a “first-of-its-kind joint evaluation” and that it demonstrates how labs can work together on issues like these. Anthropic wrote in its post that the joint evaluation exercise was meant to help mature the field of alignment evaluations and “establish production-ready best practices.” Reporting the findings of its evaluations, Anthropic said OpenAI’s o3 and o4-mini reasoning models were aligned as well or better than its own models overall, the GPT-4o and GPT-4.1 general-purpose models showed some examples of “concerning behavior,” especially around misuse, and both companies’ models struggled to some degree with sycophancy. OpenAI wrote in its post that it found that Anthropic’s Claude 4 models generally performed well on evaluations stress-testing their ability to respect the instruction hierarchy, performed less well on jailbreaking evaluations that focused on trained-in safeguards, generally proved to be aware of their uncertainty and avoided making statements that were inaccurate, and performed especially well or especially poorly on scheming evaluation, depending on the subset of testing. Both companies said in their posts that for the purpose of testing, they relaxed some model-external safeguards that otherwise would be in operation but would interfere with the tests. They each said that their latest models, OpenAI’s GPT-5 and Anthropic’s Opus 4.1, which were released after the evaluations, have shown improvements over the earlier models.
Ataccama brings AI to data lineage- Business users can now trace a data point’s origin and understand how it was profiled or flagged without relying on IT
Tippu Gagguturu, CEO of SecurEnds, has launched APIDynamics, a next-generation API security company designed for machine-first, cloud-native ecosystems. The company addresses the gap between strict controls for user identities and minimal oversight for machine identities, the primary drivers of API traffic. APIDynamics offers real-time protection, adaptive authentication, and MFA for API-to-API communication, securing every API call, including machine-to-machine and non-human interactions. Gagguturu believes static tokens and blind trust are no longer viable with APIs driving AI agents and cloud workflows. The platform empowers security and engineering teams to: Discover and eliminate shadow and zombie APIs across environments; Secure machine-to-machine and non-human identity communications with Zero Trust; Enforce just-in-time, risk-based access policies using adaptive MFA; Integrate seamlessly into modern DevSecOps pipelines without slowing development.
Salt Security’s platform uses active reconnaissance techniques and thoroughly examines domains and subdomains to uncover hidden, unmonitored, and forgotten APIs, providing an attacker’s-eye view of organizations’ current external attack surface
Salt Security has launched Salt Surface, a new capability in its API Protection Platform. The tool provides organizations with a comprehensive API attack surface assessment, allowing them to identify, validate, and understand the risks associated with their exposed API endpoints. Salt Surface uses active reconnaissance techniques to uncover hidden, unmonitored, and forgotten APIs, providing an attacker’s-eye view of their current external attack surface. The technology is powered by Salt Labs’ continuous expertise and cutting-edge research, ensuring its discovery techniques stay current with the latest attacker tactics. Salt Surface provides a multi-faceted approach to discovering risks and reducing an organization’s API attack surface. This includes: Comprehensive API Discovery: Salt Surface actively researches all of an organization’s internet-facing API assets, thoroughly examining domains and subdomains to pinpoint every potential API endpoint. This process enables teams to uncover shadow and zombie endpoints that might otherwise be overlooked by methods that only see existing traffic.
Vulnerability and Misconfiguration Detection: The scan is highly effective at identifying critical security risks associated with discovered APIs. It detects common and severe misconfigurations, highlights potential vulnerabilities, and finds instances of sensitive data exposure.
Fraudsters in New Zealand deploy AI-generated deepfake videos on Facebook, mimicking prominent finance experts to entice victims into WhatsApp groups, malware installs, and ultimately steal their funds
New Zealand’s financial watchdog has issued warnings about sophisticated scammers using deepfake technology to impersonate well-known local finance experts and commentators on Facebook. The Financial Markets Authority (FMA) identified fake Facebook pages targeting investors by mimicking prominent figures from the local media. These fraudulent accounts use artificially generated videos of the personalities to promote what appear to be legitimate WhatsApp investment advisory groups. The scam begins when victims encounter Facebook or Instagram advertisements featuring deepfake videos of these respected financial voices. The AI-generated content shows the impersonated figures discussing their supposed investment successes and encouraging viewers to join exclusive WhatsApp groups for free trading advice. Once victims join these WhatsApp groups, they encounter what appears to be a thriving community of successful investors. However, most group members are fake accounts controlled by the scammers, all praising their supposed mentor’s investment guidance and sharing fabricated success stories. The scammers frequently ask victims to install software on their devices, which turns out to be malware or remote access tools. This gives fraudsters access to sensitive personal and financial information stored on victims’ computers and phones. When investors attempt to withdraw their funds, they hit the final trap. Scammers claim victims must pay additional fees before accessing their money. Even after paying these bogus charges, victims never receive their investments back.
Pangea’s AI Guardrail Platform gives enterprises runtime control without slowing development velocity
Pangea it has been named a winner in SiliconANGLE’s 2025 TechForward Awards in the AI-Powered Threat Detection Category. Pangea recently released the industry’s most comprehensive AI Guardrail Platform to address critical security gaps as enterprises deploy hundreds of generative AI projects. From prompt injection prevention to compliance-driven data redaction, the platform prevents sensitive data leakage, blocks harmful content, and provides real-time protection powered by intelligence from partners like CrowdStrike, DomainTools, and ReversingLabs. With more than 99% efficacy against sophisticated prompt injection techniques, including token smuggling and multilingual attacks, Pangea’s AI Guardrail Platform gives enterprises runtime control without slowing development velocity. Customers like Grand Canyon Education are using Pangea to secure enterprise-wide AI deployments, accelerate time to market, and maintain strict compliance standards while safely scaling generative AI initiatives. The TechForward Awards recognize the technologies and solutions driving business forward. As the trusted voice of enterprise and emerging tech, SiliconANGLE applies a rigorous editorial lens to highlight innovations reshaping how businesses operate in our rapidly changing landscape.