Mastercard said it is expanding its First Party Trust program to tackle “friendly” fraud. Also known as first-party fraud, the term refers to genuine transactions that are challenged by cardholders, whether it’s deliberately or happens by mistake. eCommerce has revolutionized the transaction experience while also increasing the need for transparency of payments for merchants, small business owners and entrepreneurs. It is now easier than ever for a customer to dispute a debit or credit card transaction they don’t recognize. The card issuer must then determine whether to provide that cardholder with a refund for the transaction amount — this is known as a chargeback. The global cost of chargebacks to merchants is projected to rise to $42 billion by 2028, with almost half of those transactions being reported as fraudulent. To help deal with this issue, Mastercard says it is expanding its First-Party Trust program, introduced in 2023, to Canada, Latin America, the Caribbean and across the Asia Pacific region. The program assists businesses both big and small with burdensome time and resource-intensive issues, such as researching and addressing claims. It provides enhanced data-sharing, either at the time of transaction or at the time a dispute is raised. Issuers can better identify third-party fraud, where someone’s details are used without consent, from first-party fraud and gain reliable information to resolve cardholder disputes.
Mitiga cloud incident response company helps security operations teams with triage, augmented investigation and accelerated threat remediation across multicloud environments
Cloud incident response company Mitiga Security launched Helios AI, an AI-powered security operations center assistant that helps security operations teams with triage, augmented investigation and accelerated threat remediation across multicloud environments. Helios AI is designed specifically for modern, dynamic cloud environments to deliver vastly improved operational efficiency. The service optimizes security team resources and eliminates tedious manual workflows to deliver what Mitiga claims is the fastest mean time to detect and mean time to respond available. The platform helps SecOps teams reclaim critical time, reduce risk exposure and improve threat detection and incident response across cloud and software-as-a-service environments by significantly reducing alert noise and surfacing only actionable insights. The first Helios AI feature available to customers is AI Insights, an automated SOC assistant that cuts through alert noise to deliver 90% faster triage and 70 times faster alert close rates. Early simulations run by Mitiga are said to show how Helios AI and AI Insights have significantly outperformed traditional alert systems in both accuracy and speed. The aim is to provide a strategic view for cloud security leaders and how they can use Helios AI and AI Insights to prepare their teams and environments for what’s next.
Incogni study finds popular AI models are collecting sensitive data such as email addresses, phone numbers, photos, precise location and app interaction data and sharing it with unknown third parties
Findings from Incogni study reveal that some of the most popular, from companies like Meta, Google, and Microsoft, are collecting sensitive data and sharing it with unknown third parties, leaving users with limited transparency and virtually no control over how their information is stored, used, and shared. Key findings: Meta.ai and Gemini collect precise location data and physical addresses of their users; Claude shares email addresses, phone numbers, and app interaction data with third parties, according to its Google Play Store listing; Grok (xAI) may share photos provided by users and app interactions with third parties; Meta.ai shares names, email addresses, and phone numbers with external entities, including research partners and corporate group members; Microsoft’s privacy policy implies that user prompts may be shared with third parties involved in online advertising or using Microsoft’s ad tech; Gemini, DeepSeek, Pi.ai and Meta.ai, most likely are not giving users the ability to opt out of training the models with their prompts; ChatGPT turned out to be the most transparent when it comes to the information on what prompts will be used for model training, and a clear privacy policy.
Bonfy.AI’s platform uses AI-powered business context and entity-aware analysis to detect and prevent content risks at rest, in motion, and in use across SaaS, Shadow AI, and custom applications
Bonfy.AI, a pioneer in adaptive content security, has launched its Bonfy Adaptive Content Security™ (Bonfy ACS™) unified platform. The platform, backed by $9.5 million in seed funding in 2024, uses AI-powered business context and logic to prevent exposures such as oversharing, IP leakage, privacy violations, and non-compliant communications. Bonfy’s proprietary AI-powered technology is used to analyze and manage content risks associated with AI tools, documents, emails, and communications like Slack. The platform addresses the complexities of modern data security by providing contextual intelligence, behavioral analytics, and adaptive remediation capabilities. Bonfy ACS is versatile for various applications, including SaaS, Shadow AI, and custom applications. It enforces communication and sharing policies and mitigates risks related to cybersecurity, privacy, compliance, IP protection, and reputation. Bonfy ACS is ideal for organizations implementing GenAI initiatives, especially in regulated and security-conscious verticals like healthcare, insurance, finance, legal, media, and technology. Key Capabilities of Bonfy ACS: Detects and prevents content risks at rest, in motion, and in use. Supports various SaaS applications and communication platforms such as HubSpot, Google Mail, Microsoft 365, Salesforce, Slack, and SMTP. Uses auto-learning for business context creation and entity-aware analysis. Provides out-of-the-box policies for best practices and regulations. Integrates with incident response platforms and notification systems. Offers executive visibility through customizable dashboards. Delivery: SaaS; Flexible hosting options.
Zama’s Fully Homomorphic Encryption solution enables processing data without decryption during both transit and processing allowing developers to build on-chain financial applications on public blockchains with end-to-end encryption
Zama has raised $57 million in a Series B funding round to expand its end-to-end encryption solutions for public blockchains, which it said enables developers to build on-chain financial applications that are secure, scalable and compliant. The company will use the new funding to support its mainnet launch, ecosystem adoption and research efforts to make financial transactions built with Fully Homomorphic Encryption (FHE) scale to thousands of transactions per second. FHE enables data processing without decryption, so encryption is maintained during both transit and processing. That means “all online activities can now be truly end-to-end encrypted.” Zama’s FHE solutions are likely to benefit public blockchains first but could benefit any industry that uses cloud computing and requires greater confidentiality and compliance. Zama is commercializing an entirely new generational technology that could redefine how confidentiality is handled in the blockchain and, ultimately, in all of cloud computing. Zama’s FHE protocol is efficient and developer-friendly and supports decentralized applications (dApps) for AI, crypto and cloud. The protocol paves the way for on-chain identity, financial and consumer applications — previously out of reach for developers.
SUPERWISE platform enables enterprises to deploy, manage and scale AI agents developed using a variety of proprietary and open-source tools while offering built-in compliance, monitoring, and operational oversight
Enterprise AI Governance and Operations platform SUPERWISE unveiled its open AgentOps platform enabling companies to safely deploy agents developed in a variety of proprietary and open-source development platforms. This release emboldens teams to deploy, serve, and manage AI agents within the SUPERWISE platform, complete with built-in compliance, monitoring, and operational oversight. It marks a significant step forward in making responsible AI not just possible, but secure, scalable, and successful. With this launch, SUPERWISE is enabling teams to use the best open-source tools to build agents, while relying on our enterprise-grade infrastructure to govern, observe, and scale them safely.” SUPERWISE’s AgentOps platform will benefit a variety of stakeholders: AI Developers & Engineers: Use their preferred tools without sacrificing operational oversight; Enterprise IT & AI Leaders: Centralize operations while enabling innovation and avoiding vendor lock-in; C-Level Executives: Balance agility with governance, security, and scalability, and a lower total cost of ownership. The platform enables developers to capitalize on a variety of common preferences, including open-source software, low-code interfaces, built-in integrations, strong community support, and ease-of-deployment. The platform today supports the deployment and operation of agents for its development framework Flowise, and a growing list of soon-to-be announced third-party frameworks, including Dify, CrewAI, Langflow, N8n and many others.
Acuvity context-aware GenAI security platform detects unsanctioned GenAI usage across browsers, applications, and developer tools and identifies prompt-based exploits, model abuse across the full AI lifecycle
Acuvity has launched RYNO, the first GenAI security platform purpose-built to deliver context-aware protection and adaptive risk management across users, applications, and AI-powered agents. RYNO gives security teams the clarity, control, and confidence they need to enable innovation without compromising trust or compliance. Acuvity’s RYNO delivers six advanced features designed to operationalize GenAI security across the full AI lifecycle: Shadow AI Discovery – Detects unsanctioned GenAI usage across browsers, applications, and developer tools. DLP++ – Redefines data loss prevention by using contextual inspection to detect and stop sensitive data leakage in real time. Threat Protection – Identifies prompt-based exploits, model abuse, and agent manipulation through intelligent risk analysis. AI Firewall – Provides runtime inspection and behavioral protection for model interactions and tool calls. AI Runtime Security – Protects GenAI agents and applications across development, testing, and production environments. MCP Security – Offers dedicated security for the Model Context Protocol, a growing backbone of agentic AI infrastructure. RYNO’s architecture is anchored in context-driven security outcomes, enabling enterprises to safely adopt GenAI without slowing down innovation. The platform’s four core capabilities are: Full Spectrum Visibility; Adaptive Risk ; Contextual Intelligence (Context IQ); Dynamic Policy Engine.
Chronosphere redefines cloud-native observability with Logs 2.0 and real-time data control, unifiying observability by tightly integrating telemetry log management with metrics and traces
By helping organizations control and optimize their telemetry data, Chronosphere makes observability scalable, actionable and cost-effective with its platform’s open-source foundation being key to empowering enterprises in today’s distributed, data-intensive environments, according to Martin Mao, co-founder and chief executive officer of Chronosphere. Chronosphere Logs 2.0 represents a major leap in unified observability by tightly integrating log management with metrics and traces in one cohesive platform. Designed for cloud-native observability, this upgraded solution helps engineering teams shift from reactive troubleshooting to proactive, data-driven performance management, according to Mao. “It’s a brand new launch of a brand new product for us. It is end-to-end log management capability, you can imagine our ability to ingest and store logs natively into the product, and use logs along with the other data sources like metrics and traces. On top of that, we’re also providing a set of capabilities to control log data and log data growth, and that is fairly unique in the market. I would say, manage the data volume growth in logs, as well as cost, while also having a great performance and experience at the same time.” Ballooning telemetry data is a top inhibitor of observability effectiveness because it overwhelms systems and teams with excessive, often low-value data, making it harder to extract meaningful insights in real time. Chronosphere tackles this by offering data control, cost optimization and intelligent signal prioritization — purpose-built for the demands of cloud-native environments, according to Mao.
OPSWAT and SentinelOne’s AI/ML malware detection identifies threats that bypass traditional defenses such as polymorphic malware
OPSWAT and SentinelOne® announced their OEM partnership with the integration of SentinelOne’s industry-leading AI-powered detection capabilities into OPSWAT’s Metascan™ Multiscanning technology. This collaboration elevates malware detection across platforms, empowering enterprises to combat modern cyber threats with even greater precision and speed. With SentinelOne’s AI/ML detection capabilities now part of OPSWAT’s Metascan Multiscanning, joint customers benefit from: Enhanced detection accuracy through industry-leading AI capabilities; Cross-platform functionality, supporting both Windows and Linux deployments; Stronger ransomware and zero-day threat defense with autonomous, cloud-independent operation. Integrating SentinelOne’s AI detections strengthens Metascan’s multilayered defense, giving our customers faster, smarter protection against today’s most sophisticated threats. The inclusion of SentinelOne’s AI/ML detections in Metascan Multiscanning provides unmatched malware detection through simultaneous scanning with over 30 leading anti-malware engines, utilizing signature, heuristic, and machine learning techniques to achieve over 99% detection accuracy. The integration of SentinelOne’s AI/ML detections further amplifies this capability by identifying threats that bypass traditional defenses such as polymorphic malware.
Verax Protect safeguards companies against rising AI risks, peventing AI tools from exposing information to users that they are not authorized to access and enforcing organizational policies on AI
Verax AI has launched Verax Protect, a cutting-edge solution – suitable even for companies in highly regulated industries – aims to help large enterprises uncover and mitigate Generative AI risks, including unintended leaks of sensitive data. Key capabilities of Verax Protect: Prevent sensitive data from leaking into third-party AI tools: AI tools encourage users to input as much data as possible into them in order to maximise their productivity benefits. This often leads to proprietary and sensitive data being shared with unvetted third-party providers. Prevent AI tools from exposing information to users that they are not authorized to access: The increasing use of AI tools to generate internal reports and summarize sensitive company documents opens the door to oversharing data, raising the risk of other employees seeing information they’re not meant to access. Enforce organizational policies on AI: In contrast to the currently popular —but largely ineffective—methods of ensuring employee compliance with AI policies, such as training sessions and reminder pop-up banners, Verax Protect enables automatic enforcement of corporate AI policies, preventing both accidental and deliberate violations. Comply with security and data protection certifications. Many compliance certifications, such as those dealing with GDPR in Europe or sector-specific laws in the U.S. like HIPAA for healthcare or GLBA for financial services require evidence of an effort to safeguard sensitive and private data. Gen AI adoption makes such efforts more difficult to implement and even harder to demonstrate. Verax Protect helps to prove that sensitive and private data is safeguarded even when AI is used.