Among these announcements, Stripe debuted an AI foundation model to improve fraud detection and authorization rates. Dubbed the Payments Foundation Model, it is trained on tens of billions of transactions and incorporates hundreds of “subtle signals” per payment that, according to Stripe, specialized models cannot capture. Stripe plans to deploy the model across its payments suite to improve performance in ways that were previously unattainable. Early results suggest the model is effective, particularly against card testing attacks: previous models had reduced card testing by 80% over two years, and the new foundation model improved the detection rate for attacks on large businesses by a further 64% practically overnight. In parallel, Stripe expanded its money management offerings with the launch of Stablecoin Financial Accounts. Businesses using these accounts can hold balances in stablecoins, receive payments via both crypto and traditional fiat rails such as ACH and SEPA, and send stablecoins to most markets globally. The accounts are designed to be accessible to businesses in 101 countries. Initially they will support the stablecoins USDC and Bridge’s USDB, with additional currencies planned over time. Stripe also announced a deeper partnership with NVIDIA, which completed the fastest-ever migration to Stripe Billing.
Elavon and Jscrambler partner to strengthen merchants’ PCI DSS compliance with requirements 6.4.3 and 11.6.1
Elavon and Jscrambler have partnered to help merchants comply with PCI DSS requirements 6.4.3 and 11.6.1. Through this agreement, Elavon’s network of more than 400 merchants can use Jscrambler’s Client-Side Protection and Compliance Platform to safeguard their businesses from escalating web skimming attacks, secure payment pages, and maintain compliance efficiently. The collaboration combines Elavon’s experience as a global leader in payment processing with Jscrambler’s technology to address the need for robust payment security. Jscrambler’s PCI DSS solution delivers the following capabilities:
Script Management: auto-discovers and authorizes payment page scripts, reducing manual approvals by grouping vendor behaviors.
Skimming Prevention: blocks unauthorized data access in real time, protecting against web skimming and formjacking.
Tamper Detection: monitors HTTP headers and page content, alerting on unauthorized changes via email, SIEM, or Slack.
Hybrid Architecture: supports agentless and agent-based deployment, enabling rapid compliance for complex or acquired payment pages.
PCI DSS Expertise: provides direct access to former PCI Security Standards Council members and a strong bench of PCI DSS experts.
QSA Alliance Program: provides access to enablement sessions, assessor forums, and inventory reports to streamline audits.
Andrew McCarroll, PCIP, Customer Payment Security Executive at Elavon, said: “By partnering with Jscrambler, Elavon is offering merchants easy access to Jscrambler’s PCI DSS solution.”
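Requirement 11.6.1 is about detecting unauthorized changes to payment-page content and HTTP headers. As a rough illustration of the underlying idea only (not Jscrambler's implementation, whose detection is far more sophisticated), a baseline-hash comparison might look like this:

```python
import hashlib

def fingerprint(headers: dict, body: str) -> str:
    """Hash security-relevant headers plus the page content into one digest."""
    relevant = {k.lower(): v for k, v in headers.items()
                if k.lower() in {"content-security-policy", "x-frame-options"}}
    material = "|".join(f"{k}={relevant[k]}" for k in sorted(relevant)) + "|" + body
    return hashlib.sha256(material.encode()).hexdigest()

def detect_tamper(baseline: str, headers: dict, body: str) -> bool:
    """Return True if the page deviates from its approved baseline."""
    return fingerprint(headers, body) != baseline

# Establish a baseline for the approved payment page...
approved_headers = {"Content-Security-Policy": "script-src 'self'"}
baseline = fingerprint(approved_headers, "<form id='card'></form>")

# ...then flag any drift, e.g. an injected skimmer script.
clean = detect_tamper(baseline, approved_headers, "<form id='card'></form>")
skimmed = detect_tamper(baseline, approved_headers,
                        "<form id='card'></form><script src='https://evil.example/skim.js'></script>")
print(clean, skimmed)  # False True
```

A real deployment would also need an alerting path (the email/SIEM/Slack channels mentioned above) and a process for re-baselining after legitimate changes.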
Balance’s new RTP-powered Instant Bank Connection allows buyers to link their accounts using only routing and account numbers
Balance, the financial infrastructure platform for B2B commerce, launched Instant Bank Connection, a new capability powered by Real-Time Payments (RTP) rails that simplifies ACH setup for buyers and speeds up payments to merchants, improving cash flow and reducing processing costs. The capability allows buyers to link their bank accounts using only routing and account numbers. Real-time verification streamlines the buyer experience while giving merchants immediate payment confirmation, faster payouts, and the ability to release goods sooner with confidence, accelerating fulfillment and strengthening customer relationships. More than a faster onboarding method, RTP-powered bank connection is a strategic lever for B2B merchants looking to scale efficiently: by making ACH payments as seamless as cards, merchants can unlock significantly lower processing costs. Combined with AI-powered credit management, billing, collections, and cash application, Balance aims to help merchants reduce overhead, improve cash flow, and scale with confidence. Bar Geron, CEO and Co-founder of Balance, said: “With RTP-enabled ACH payments, they can reduce costs and accelerate access to funds—all while giving buyers a smooth payment experience.”
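Linking an account from just a routing and account number implies validating those inputs up front. One standard first-pass check, shown here purely as an illustration and not as Balance's actual verification flow, is the ABA routing-number checksum:

```python
def valid_routing_number(rn: str) -> bool:
    """ABA checksum: weights 3,7,1 over the nine digits must sum to a multiple of 10."""
    if len(rn) != 9 or not rn.isdigit():
        return False
    weights = (3, 7, 1, 3, 7, 1, 3, 7, 1)
    return sum(w * int(d) for w, d in zip(weights, rn)) % 10 == 0

print(valid_routing_number("011000015"))  # True: checksum passes
print(valid_routing_number("011000016"))  # False: checksum fails
print(valid_routing_number("12345"))      # False: wrong length
```

The checksum only catches typos; confirming the account actually exists and is reachable is what the RTP-based real-time verification step adds on top.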
OCC clarifies banks may buy and sell crypto-assets and may outsource custody to third parties
The OCC has clarified that institutions under its oversight can now buy and sell crypto assets on behalf of their customers. In addition, the OCC stated that national banks may outsource crypto-asset services to third parties, including custody and trade execution, provided those third parties maintain sound risk management practices. The latest OCC letter follows a similar directive issued in March, which rescinded the 2021 policy requiring banks to seek prior supervisory approval before engaging in crypto-related services. “The services national banks may provide in relation to the cryptocurrency they are custodying may include services such as facilitating the customer’s cryptocurrency and fiat currency exchange transactions, transaction settlement, trade execution, recordkeeping, valuation, tax services, reporting, or other appropriate services,” the March letter stated. It further clarified: “A bank acting as custodian may engage a sub-custodian for cryptocurrency it holds on behalf of customers and should develop processes to ensure that the sub-custodian’s operations have proper internal controls to protect the customer’s cryptocurrency.”
Meanwhile, the Federal Reserve recently dropped its supervisory guidelines that previously required American banks to notify it in advance of any crypto-asset activities. Banks are also no longer required to obtain formal approval from the Fed before engaging in stablecoin-related operations. The decisions by both US regulators reflect the broader shift toward more crypto-friendly policies under the Trump administration.
ServiceNow’s new AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model or workflow in their system
ServiceNow’s new AI Control Tower offers a holistic view of the entire AI ecosystem, acting as a “command center” to help enterprise customers govern and manage all their AI workflows, including agents and models. The AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model, or workflow in their system — even third-party agents. It also provides end-to-end lifecycle management, real-time reporting for different metrics, and embedded compliance and AI governance. The idea is to give users a central location to see all of the AI across the enterprise. “I can go to a single place to see all the AI systems, how many were onboarded or are currently deployed, which ones are an AI agent or classic machine learning,” said Dorit Zilbershot, ServiceNow’s Group Vice President of AI Experiences and Innovation. “I could be managing these in a single place, making sure that I have full governance and understanding of what’s going on across my enterprise.” She added that the platform helps users “really drill down to understand the different systems by the provider and by type” to better understand risk and compliance. The company’s agent library allows customers to choose the agents that best fit their workflows, with built-in orchestration features to help manage agent actions. ServiceNow also unveiled AI Agent Fabric, a way for its agents to communicate with other agents or tools. Zilbershot said ServiceNow will continue to support other protocols and will keep working with other companies to develop standards for agentic communication.
Gyan is an alternative AI architecture, neuro-symbolic rather than transformer-based, designed to create hallucination-free models by design
Gyan is a fundamentally new AI architecture built for enterprises with low or zero tolerance for hallucinations, IP risks, or energy-hungry models. Gyan gives businesses full control over their data, keeping it private and secure, positioning it as a partner for enterprises where reliability and accuracy are mandatory. Unlike with LLMs, businesses can use a Gyan model without worrying about it making things up: built on a neuro-symbolic architecture rather than transformers, Gyan is designed from the ground up to be hallucination-free. “If the cost of a mistake is high, you certainly don’t want your AI causing it,” says Joy Dasgupta, CEO at Gyan. “We built Gyan for companies and processes with zero tolerance for hallucination and privacy risks, with compute and energy requirements orders of magnitude lower than those of current LLMs.” Gyan cites state-of-the-art performance on two key life sciences benchmarks (PubMedQA and MMLU) as evidence of its language model’s efficacy. Every Gyan inference is traceable, with full reasoning down to the exact ideas and arguments behind the result, making outputs readily verifiable, something the company says is not the case for any of the other models on the leaderboard.
CodeAnt AI’s platform plugs into developer platforms, reviews the code, gives instant feedback across 30+ programming languages and suggests fixes that developers can apply with a single click
AI might be great at helping engineers write code, but it’s creating a new problem: all that code still needs to be reviewed by humans. CodeAnt AI is stepping in with a solution that uses AI to tackle the review process itself. CodeAnt AI’s platform plugs right into GitHub, GitLab, Bitbucket, and Azure DevOps, giving developers instant feedback on their code across more than 30 programming languages. More impressively, it doesn’t just find problems: it suggests fixes that developers can apply with a single click, turning reviews that used to take hours into quick five-minute sessions. For companies racing to get products out the door, this means fewer delays and higher-quality code. It also means cost savings, since fixing problems during code review costs 10x less than fixing them later in CI/CD or after production deployment. What makes CodeAnt AI different is the technology under the hood. The company built a proprietary language-agnostic AST engine that understands how different parts of a codebase connect, letting it spot issues that isolated, file-by-file reviews would miss. The platform also pulls in data from major security databases and lets companies set up their own rules based on their specific needs. For security-conscious organizations, CodeAnt AI can run entirely within their own infrastructure, ensuring code never leaves their environment. The company says it helps enterprises reduce manual code review time by over 50%.
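CodeAnt's AST engine is proprietary, but the general idea of reviewing code at the syntax-tree level rather than as text can be sketched in a few lines. This hypothetical checker flags a classic Python review finding, mutable default arguments, by inspecting the parse tree:

```python
import ast

def find_mutable_defaults(source: str) -> list[str]:
    """Return names of functions whose default values are mutable literals."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    flagged.append(node.name)
    return flagged

code = """
def append_item(item, bucket=[]):   # the [] is shared across calls: a common bug
    bucket.append(item)
    return bucket

def safe(item, bucket=None):
    return [item] if bucket is None else bucket + [item]
"""
print(find_mutable_defaults(code))  # ['append_item']
```

A text-level diff review can easily miss this pattern; an AST walk cannot, which is the kind of advantage a tree-aware engine has over line-oriented linting.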
Mistral’s platform enables enterprises to build AI agents tailored to their operations and gain full control over the AI stack—from infrastructure and platform features to model-level customization and user interfaces without vendor lock-in
AI startup Mistral unveiled Le Chat Enterprise, a unified AI assistant platform designed for enterprise-scale productivity and privacy, powered by its new Medium 3 model, which outperforms larger models at a fraction of the cost (here, “larger” refers to the number of parameters, or internal model settings; more parameters typically mean more complexity and more powerful capabilities, but also more compute resources, such as GPUs, to run). Available on the web and via mobile apps, Le Chat Enterprise is a ChatGPT competitor built specifically for enterprises and their employees, taking into account the fact that they will likely be working across a suite of different applications and data sources. It is designed to consolidate AI functionality into a single, privacy-first environment that enables deep customization, cross-functional workflows, and rapid deployment. Among the key features of interest to business owners and technical decision makers are: enterprise search across private data sources; document libraries with auto-summary and citation capabilities; custom connectors and agent builders for no-code task automation; custom model integrations and memory-based personalization; and hybrid deployment options with support for public cloud, private VPCs, and on-prem hosting. Le Chat Enterprise supports seamless integration into existing tools and workflows. Companies can build AI agents tailored to their operations and maintain full sovereignty over deployment and data—without vendor lock-in. The platform’s privacy architecture adheres to strict access controls and supports full audit logging, ensuring data governance for regulated industries. Enterprises also gain full control over the AI stack—from infrastructure and platform features to model-level customization and user interfaces.
Mistral’s new Le Chat Enterprise offering could be appealing to many enterprises with stricter security and data storage policies (especially medium-to-large and legacy businesses). Mistral Medium 3 introduces a new performance tier in the company’s model lineup, positioned between lightweight and large-scale models. Designed for enterprise use, the model delivers more than 90% of the benchmark performance of Claude 3.7 Sonnet at roughly one-eighth the cost—$0.40 per million input tokens and $2.00 per million output tokens, compared to Sonnet’s $3/$15 for input/output. Benchmarks show that Mistral Medium 3 is particularly strong in software development tasks. In coding tests like HumanEval and MultiPL-E, it matches or surpasses both Claude 3.7 Sonnet and OpenAI’s GPT-4o. According to third-party human evaluations, it outperforms Llama 4 Maverick in 82% of coding scenarios and exceeds Command-A in nearly 70% of cases. Mistral Medium 3 is optimized for enterprise integration. It supports hybrid and on-premises deployment, offers custom post-training, and connects easily to business systems.
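The cost comparison is easy to sanity-check. Using Mistral's listed per-million-token prices ($0.40 input / $2.00 output, versus Sonnet's $3/$15) and a purely illustrative monthly workload, the bill scales linearly with token volume:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             in_price_per_m: float, out_price_per_m: float) -> float:
    """Bill = (tokens / 1M) * price-per-million, summed over input and output."""
    return (input_tokens / 1e6) * in_price_per_m + (output_tokens / 1e6) * out_price_per_m

# Illustrative workload: 10M input tokens and 2M output tokens per month.
medium3 = cost_usd(10_000_000, 2_000_000, 0.40, 2.00)   # $8.00
sonnet  = cost_usd(10_000_000, 2_000_000, 3.00, 15.00)  # $60.00
print(f"Medium 3: ${medium3:.2f}, Sonnet: ${sonnet:.2f}, ratio: {sonnet / medium3:.1f}x")
```

On this (assumed) input/output mix the ratio works out to 7.5x, consistent with the roughly one-eighth figure; the exact multiple depends on how input-heavy a given workload is.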
Claude’s web search API allows the AI assistant to conduct multiple progressive searches, using earlier results to inform subsequent queries, complete with source citations
Anthropic has introduced a web search capability for its Claude AI assistant, intensifying competition in the rapidly evolving AI search market, where tech giants are racing to redefine how users find information online. The company announced that developers can now enable Claude to access current web information through its API, allowing the AI assistant to conduct multiple progressive searches to compile comprehensive answers complete with source citations. Anthropic’s technical approach represents a significant advance in deploying AI systems as information-gathering tools. The system employs a decision-making layer that determines when external information would improve response quality, generating targeted search queries rather than simply passing user questions verbatim to a search backend. This “agentic” capability — allowing Claude to conduct multiple progressive searches using earlier results to inform subsequent queries — enables a more thorough research process than traditional search. The implementation essentially mimics how a human researcher might explore a topic, starting with general queries and progressively refining them based on initial findings. Anthropic’s web search API is more than just another feature in the AI toolkit: it signals the evolution of internet information access toward a more integrated, conversation-based model. The new capability arrives amid signs that traditional search is losing ground to AI-powered alternatives. With Safari searches reportedly declining for the first time ever, we may be witnessing early indicators of a mass shift in consumer behavior. Traditional search engines optimized for advertising revenue are increasingly being bypassed in favor of conversation-based interactions that prioritize information quality over commercial interests.
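For developers, enabling the search capability amounts to adding a server-side tool entry to a standard Messages API request. The sketch below only constructs the request body rather than calling the API; the tool type string and `max_uses` field follow Anthropic's web search announcement, and the model name is a placeholder, so check the current API reference before relying on either:

```python
import json

def build_search_request(question: str, max_searches: int = 3) -> dict:
    """Assemble a Messages API body with the server-side web search tool enabled."""
    return {
        "model": "claude-3-7-sonnet-latest",  # placeholder model identifier
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": question}],
        "tools": [{
            "type": "web_search_20250305",  # tool type per Anthropic's announcement
            "name": "web_search",
            "max_uses": max_searches,       # caps the progressive search loop
        }],
    }

body = build_search_request("What changed in PCI DSS 4.0 requirement 6.4.3?")
print(json.dumps(body, indent=2))
```

Because the search loop runs server-side, the response comes back with search-result content blocks and citations already attached; `max_uses` is the knob that bounds how many progressive searches Claude may chain for one request.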
Neo4j’s serverless solution enables users of all skill levels to access graph analytics without the need for custom queries, ETL pipelines, or specialized graph expertise and can be used seamlessly with any data source
Neo4j has launched Neo4j Aura Graph Analytics, a new serverless offering that for the first time can be used seamlessly with any data source, with zero ETL (extract, transform, load). The solution delivers the power of graph analytics to users of all skill levels, with Neo4j claiming 2X greater insight precision and quality over traditional analytics. The offering makes graph analytics accessible to everyone and eliminates adoption barriers by removing the need for custom queries, ETL pipelines, or specialized graph expertise, so that business decision-makers, data scientists, and other users can focus on outcomes, not overhead. Neo4j Aura Graph Analytics requires no infrastructure setup and no prior experience with graph technology or the Cypher query language. Users deploy and scale graph analytics workloads end-to-end, enabling them to collect, organize, analyze, and visualize data. The offering includes the industry’s largest selection of 65+ ready-to-use graph algorithms and is optimized for high-performance applications and parallel workflows; users pay only for the processing power and storage they consume. Additional benefits and capabilities below are based on customer-reported outcomes that reflect real-world performance gains: 1) Up to 80% model accuracy, leading to 2X greater efficacy of insights beyond the limits of traditional analytics. 2) Insights achieved twice as fast as open-source alternatives, with parallelized in-memory processing of graph algorithms. 3) 75% less code and zero ETL. 4) No administration overhead and lower total cost of ownership.
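To give a flavor of what one of those ready-to-use algorithms computes, here is a minimal pure-Python PageRank over a toy edge list. This is an illustration of the algorithm family only, not Neo4j's parallelized in-memory implementation, and the "payments" framing of the toy graph is invented for the example:

```python
def pagerank(edges, damping=0.85, iters=50):
    """Minimal power-iteration PageRank over a directed edge list."""
    nodes = {n for edge in edges for n in edge}
    out = {n: [t for s, t in edges if s == n] for n in nodes}
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for t in out[n]:
                    new[t] += share
            else:  # dangling node: spread its rank evenly
                for t in nodes:
                    new[t] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Toy graph: edges point from payer to payee; 'c' receives from two sources.
scores = pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("d", "c")])
print(max(scores, key=scores.get))  # 'c' accumulates the most rank
```

Algorithms like this surface structurally important nodes (influential accounts, hub suppliers) that row-by-row tabular analytics cannot see, which is the core pitch of graph analytics over traditional approaches.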
