As a leading global B2B travel platform, American Express Global Business Travel (Amex GBT) and its security team are proactively confronting the accelerating threats of AI with a dual focus on cybersecurity innovation and governance. Amex GBT Chief Information Security Officer David Levin is building a cross-functional AI governance framework, embedding security into every phase of AI deployment and managing the rise of shadow AI without stifling innovation. His approach offers a blueprint for organizations navigating the high-stakes intersection of AI advancement and cyber defense.

“We’re integrating AI across our threat detection and response workflows. On the detection side, we use machine learning (ML) models in our SIEM and EDR tools to spot malicious behavior faster and with fewer false positives. That alone accelerates how we investigate alerts. In the SOC, AI-powered automation enriches alerts with contextual data the moment they appear.

AI amplifies our capabilities in two ways. First, CrowdStrike OverWatch gives us 24/7 threat hunting augmented by advanced machine learning. They constantly scan our environment for subtle signs of an attack, including things we might miss if we relied on manual inspection alone. That means we have a top-tier threat intelligence team on call, using AI to filter out low-risk events and highlight real threats.

Second, AI boosts the efficiency of our internal SOC analysts. We used to manually triage far more alerts. Now, an AI engine handles that initial filtering. It can quickly distinguish suspicious from benign, so analysts only see the events that need human judgment. With Charlotte AI we are offloading a lot of alert triage. The system instantly analyzes new detections, estimates severity and suggests next steps. That alone saves our tier-1 analysts hours every week.”
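The initial-filtering step Levin describes reduces to a severity-scored queue: suppress low-risk events, surface the rest for human judgment. A minimal sketch, assuming a hypothetical alert schema and threshold (illustrative only, not Amex GBT's or CrowdStrike's actual pipeline):

```python
from dataclasses import dataclass

# Hypothetical alert record; real SIEM/EDR schemas differ.
@dataclass
class Alert:
    source: str
    severity: float  # model-estimated probability the event is malicious
    context: dict    # enrichment data attached at ingestion time

def triage(alerts: list[Alert], threshold: float = 0.8) -> list[Alert]:
    """Filter out low-risk events so analysts only see alerts that
    need human judgment, mirroring the workflow described above."""
    return sorted(
        (a for a in alerts if a.severity >= threshold),
        key=lambda a: a.severity,
        reverse=True,
    )

queue = triage([
    Alert("EDR", 0.95, {"host": "srv-01", "rule": "credential-dumping"}),
    Alert("SIEM", 0.12, {"host": "wks-42", "rule": "failed-login"}),
])
print(queue)  # only the high-severity alert survives the filter
```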
OpenAI’s latest reasoning models can work through questions before responding, drawing on web browsing, Python code execution, image processing, and image generation
OpenAI announced the launch of o3 and o4-mini, new AI reasoning models designed to pause and work through questions before responding. The company calls o3 its most advanced reasoning model ever, outperforming its previous models on tests measuring math, coding, reasoning, science, and visual understanding capabilities. Meanwhile, o4-mini offers what OpenAI says is a competitive trade-off between price, speed, and performance — three factors developers often consider when choosing an AI model to power their applications. Unlike previous reasoning models, o3 and o4-mini can generate responses using tools in ChatGPT such as web browsing, Python code execution, image processing, and image generation. The models, plus a variant of o4-mini called “o4-mini-high” that spends more time crafting answers to improve its reliability, are available to subscribers to OpenAI’s Pro, Plus, and Team plans. OpenAI says that o3 achieves state-of-the-art performance on SWE-bench Verified (without custom scaffolding), a test measuring coding abilities, scoring 69.1%. The o4-mini model achieves similar performance, scoring 68.1%. OpenAI’s next best model, o3-mini, scored 49.3% on the test, while Claude 3.7 Sonnet scored 62.3%. OpenAI claims that o3 and o4-mini are its first models that can “think with images.” In practice, users can upload images to ChatGPT, such as whiteboard sketches or diagrams from PDFs, and the models will analyze the images during their “chain-of-thought” phase before answering. Thanks to this newfound ability, o3 and o4-mini can understand blurry and low-quality images and can perform tasks such as zooming or rotating images as they reason.
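For developers, this kind of image-aware reasoning is reachable through the standard OpenAI Python SDK. A minimal sketch, assuming API access to o4-mini and a local whiteboard.png (the file name and prompt are illustrative):

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a whiteboard sketch so the model can reason over it.
with open("whiteboard.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="o4-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What system architecture does this sketch describe?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```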
Fnality’s settlement solution allows earmarking an amount of tokenized currency for a particular transaction at a specified time to eliminate leg-risk in settlement
Fnality International, the institutional settlement solution that uses tokenized central bank reserves, has announced its latest feature, the earmarking of funds. At an institutional level, if a securities transaction fails because of insufficient funds, that can sometimes trigger penalties, never mind the loss of the transaction. Hence, Fnality now supports earmarking so that an amount of tokenized currency can be reserved for a particular transaction at a specified time. The banks involved in developing the programmable functionality were Lloyds Bank, Santander and UBS, the first three banks to go live on the sterling Fnality payment system (£FnPS). The latest feature adds to the 24/7 instant payments previously showcased for margin payments, FX swaps and repo transactions. “One of the promises of institutional blockchain-based applications is ‘atomicity’ in that all the legs of a transaction either fulfil together or they all fail,” said John Whelan, Managing Director of Digital Assets at Banco Santander. “There is no leg-risk. The concept of ‘earmarking’ as introduced by Fnality helps enable this feature in a way that can be truly interoperable with other DLT systems. Again, bringing us one step closer to the utilization of this tech at scale in the banking industry.” In recently filed accounts, Fnality International disclosed it issued a £20 million convertible loan note on 30 September 2024. The note will convert to equity on the completion of its Series C funding, which is in progress. One of Fnality’s biggest goals was always FX settlement. Once a second currency goes online, the range of use cases will expand significantly.
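Fnality has not published a public API for this feature, so the following is a purely hypothetical, in-memory sketch of what earmarking buys you: reserved funds cannot be spent elsewhere, so the cash leg is guaranteed when the securities leg settles. All class, method, and transaction names are illustrative:

```python
from datetime import datetime, timezone

# Hypothetical model: earmarked funds are encumbered until settlement,
# so the cash leg of a DvP transaction can never fail for lack of funds.
class SettlementAccount:
    def __init__(self, balance: int):
        self.balance = balance
        self.earmarks: dict[str, tuple[int, datetime]] = {}

    def free_balance(self) -> int:
        return self.balance - sum(amt for amt, _ in self.earmarks.values())

    def earmark(self, tx_id: str, amount: int, settle_at: datetime) -> None:
        if amount > self.free_balance():
            # The failure happens up front, at earmarking time,
            # rather than at settlement where penalties apply.
            raise ValueError("insufficient unencumbered funds")
        self.earmarks[tx_id] = (amount, settle_at)

    def settle(self, tx_id: str) -> None:
        # Atomicity: the reserved cash leg moves together with the
        # securities leg, or not at all.
        amount, _ = self.earmarks.pop(tx_id)
        self.balance -= amount

acct = SettlementAccount(balance=1_000_000)
acct.earmark("DvP-42", 250_000, datetime(2025, 7, 1, tzinfo=timezone.utc))
acct.settle("DvP-42")
print(acct.balance, acct.free_balance())  # 750000 750000
```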
CompoSecure and MetaMask’s metal payment card powered by tap-to-authenticate tech enables spending stablecoins directly from a self-custodied wallet
CompoSecure and MetaMask launched a new metal payment card enabling users to spend stablecoins directly from self-custodied wallets, combining traditional payment convenience with Web3 asset control. The new offering, powered by CompoSecure’s Arculus tap-to-authenticate technology, serves three primary functions: a traditional payments card; a secure authentication device; and a crypto wallet interface. The card targets crypto-native users, offering full control over private keys and on-chain transactions without relying on centralized custodians — mirroring the freedom of holding cash. The card’s real-time blockchain integration allows for features like NFT-based loyalty rewards, staking and yield, pushing crypto toward mainstream adoption. “The issuer or platform can instantly mint an NFT, you can gamify the purchase — and it can all be automated and real time,” said Adam Lowe, PhD, chief product and innovation officer at CompoSecure. “For the consumer, there’s also the opportunity for staking and yield. If you have a dollar in your wallet, in your pocket, it’s not doing anything for you,” Lowe said. “If you have a dollar in a yield-bearing stablecoin, every moment it’s earning you 4%-plus yield.” At the same time, CompoSecure is actively testing direct on-chain payments, which would bypass traditional rails altogether. The goal is to enable the same simplicity and ubiquity of tap-to-pay while maintaining the flexibility and efficiency of blockchain settlements. “There’s no reason you can’t directly work paying on-chain,” Lowe said. “The stablecoin goes directly to a merchant wallet. We skip everything in the middle.” In cases where a merchant prefers fiat, Lowe said the solution can handle real-time asset conversion.
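The direct-to-merchant flow Lowe describes maps naturally onto a plain ERC-20 transfer. A minimal sketch using web3.py, assuming an Ethereum-based stablecoin with 6 decimals; the RPC endpoint, addresses, and key handling are placeholders, and CompoSecure's actual card-to-chain integration is not public:

```python
from web3 import Web3  # pip install web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder RPC endpoint

# Minimal ERC-20 ABI: only transfer() is needed for a payment.
ERC20_ABI = [{
    "name": "transfer", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"},
               {"name": "value", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

STABLECOIN = "0x..."   # token contract address (placeholder)
MERCHANT = "0x..."     # merchant wallet (placeholder)
CARD_WALLET = "0x..."  # self-custodied wallet tied to the card (placeholder)

token = w3.eth.contract(address=STABLECOIN, abi=ERC20_ABI)
tx = token.functions.transfer(MERCHANT, 25 * 10**6).build_transaction({
    "from": CARD_WALLET,  # 25 * 10**6 = $25 at 6 decimals
    "nonce": w3.eth.get_transaction_count(CARD_WALLET),
})

# The private key never leaves the cardholder: signing happens client-side,
# so "we skip everything in the middle."
signed = w3.eth.account.sign_transaction(tx, private_key="<held on the card>")
w3.eth.send_raw_transaction(signed.raw_transaction)  # raw_transaction in web3.py v7+
```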
Bolt’s one-click crypto, payments and checkout superapp to enable real-time order tracking, purchase and sale of cryptos, and processing of peer payments in-app with just a single click
Bolt CEO Ryan Breslow is unveiling a new “superapp” that he describes as “one-click crypto and everyday payments” in a single platform. Breslow hopes to diversify Bolt’s revenue with this new consumer app, which he ambitiously envisions as “a centralized and personalized hub for financial services.” The app competes at once with companies such as crypto exchange Coinbase, payments platform Zelle, and PayPal; its advantage is the ability to do what all of these do from one place via mobile. For example, the app will allow users to buy, sell, send, and receive major cryptocurrencies such as Bitcoin, Ethereum, USDC, Solana, and Polygon directly within the app. Users are provisioned an on-chain balance powered by Zero Hash and will be able to see their balance in real time. Breslow is also hoping to pick up where Zelle left off with the shutdown of its standalone app. With Bolt’s new offering, users can process peer payments “with just a single click” within its app, whereas Zelle users can only send payments to peers through banking apps. On top of that, Bolt has partnered with Midland States Bank to offer a debit card that features a rewards program, including up to 3% direct cash back on eligible purchases and up to 7% in Love.com store credits. As Bolt doesn’t offer banking services, users will have to transfer money from another bank account to fund purchases with the debit card. And lastly, the new app provides real-time order tracking, something other companies such as Klarna offer in their apps as well.
CPI Card adds Web Push Provisioning feature that allows card issuers to issue payment credentials to a digital wallet through a simple push of a button on their website
CPI Card has added Web Push Provisioning (WPP), giving card issuers more options to connect payment cards with cardholders’ digital wallets. Card issuers can now push payment credentials to a cardholder’s digital wallet through a simple button on their website. This new functionality is a simplified step forward, integrating directly with the cardholder’s wallet. WPP also gives card issuers an alternative to relying solely on a mobile app by enabling direct integration into a digital wallet, expanding onboarding options and allowing issuers to offer multiple implementation choices to their cardholders. As a result, users gain instant access to their digital card while waiting for the physical card to arrive. WPP gives issuers more options and functionality to serve cardholders who open new accounts or request replacement cards, and cardholders can choose to provision multiple devices during the process. This functionality brings card issuers closer to a frictionless digital experience that attracts and retains tech-forward cardholders.
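CPI Card's WPP interface is not public, so the following is a purely hypothetical server-side sketch of the hand-off: the button on the issuer's site calls an endpoint that returns an opaque, encrypted payload for the wallet provider, with the multi-device option described above. All endpoint, field, and helper names are illustrative:

```python
# Hypothetical issuer backend; endpoint, fields, and helper are illustrative,
# not CPI Card's actual API.
from flask import Flask, jsonify, request  # pip install flask

app = Flask(__name__)

def issue_encrypted_credential(card_id: str, device_ids: list[str]) -> str:
    # Stub: a real issuer would tokenize the card and encrypt the payload
    # for the target wallet provider. The returned value is a placeholder.
    return f"opaque-provisioning-payload:{card_id}:{len(device_ids)}-devices"

@app.post("/wallet/provision")
def provision():
    body = request.get_json()
    card_id = body["card_id"]
    device_ids = body.get("device_ids", [])  # cardholders may pick several devices
    return jsonify({"provisioning_payload":
                    issue_encrypted_credential(card_id, device_ids)})

if __name__ == "__main__":
    app.run()
```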
Google Cloud partners with NVIDIA to let on-premises data centers securely access the Gemini family of AI models and protect the data used for fine-tuning
NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety. With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks. “By bringing our Gemini models on premises with NVIDIA Blackwell’s breakthrough performance and confidential computing capabilities, we’re enabling enterprises to unlock the full potential of agentic AI,” said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. Confidential computing with NVIDIA Blackwell provides enterprises with the technical assurance that their user prompts to the Gemini models’ application programming interface — as well as the data they used for fine-tuning — remain secure and cannot be viewed or modified. At the same time, model owners can protect against unauthorized access or tampering, providing dual-layer protection that enables enterprises to innovate with Gemini models while maintaining data privacy.
Yugabyte’s agentic AI app allows developers to identify performance issues, analyze their root causes, and understand the impact through a structured, query-centric view, rather than the voluminous metrics and alerts of traditional monitoring
Yugabyte announced the first of its next-generation agentic AI apps, Performance Advisor for YugabyteDB Aeon, its SaaS offering. Yugabyte also announced an extensible indexing framework designed to support the seamless integration of state-of-the-art vector indexing libraries and algorithms, augmenting the capabilities offered by pgvector. By infusing AI into Performance Advisor and delivering an extensible framework for vector support, YugabyteDB enhances database performance monitoring, improving AI application resilience and enabling greater AI innovation. The Performance Advisor agentic AI application allows developers to detect potential issues before an application is deployed and offers timely insights to SREs and platform engineers to help with performance optimization. Traditional monitoring often relies on voluminous metrics and alerts, which are difficult to interpret, causing data overload, false positives, and alert fatigue. Equipped with AI-powered anomaly detection, Performance Advisor helps users identify performance issues, analyze the root causes, and understand the impact. A structured, query-centric view enables teams to pinpoint the queries consuming resources, highlight where performance bottlenecks occur, and monitor overall system load and potential anomalies. Yugabyte enhanced its pgvector support, adding extensible and future-proof vector indexing to YugabyteDB. This approach is designed to support seamless integration of state-of-the-art vector indexing libraries and algorithms such as USearch, HNSWLib, and Faiss, taking YugabyteDB’s vector search capabilities beyond pgvector. Combining the popular open-source pgvector extension with YugabyteDB’s inherently distributed architecture, YugabyteDB provides a robust foundation for building intelligent, data-driven applications that demand high-performance vector search. Built-in resilience and geo-distribution ensure continuous availability of vector search functionality and low-latency retrieval across geographic regions.
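Because YugabyteDB's YSQL layer is PostgreSQL wire-compatible, baseline pgvector usage looks like stock Postgres. A minimal sketch, assuming a local cluster with the extension available; the new extensible index types were not detailed in the announcement, so this shows only the pgvector baseline being extended:

```python
import psycopg2  # pip install psycopg2-binary

# YugabyteDB's YSQL endpoint speaks the PostgreSQL wire protocol (default port 5433).
conn = psycopg2.connect(host="localhost", port=5433, dbname="yugabyte",
                        user="yugabyte", password="yugabyte")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""CREATE TABLE IF NOT EXISTS docs (
                   id SERIAL PRIMARY KEY,
                   embedding vector(3)  -- toy dimensionality for illustration
               );""")
cur.execute("INSERT INTO docs (embedding) VALUES ('[1,2,3]'), ('[2,2,2]');")

# Nearest-neighbour lookup with pgvector's Euclidean distance operator.
cur.execute("SELECT id FROM docs ORDER BY embedding <-> '[1,1,1]' LIMIT 1;")
print(cur.fetchone())
conn.commit()
```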
OpenAI’s new GPT-4.1 models are optimized for real-world software engineering tasks such as frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure, and consistent tool usage
OpenAI launched a new family of models called GPT-4.1. Yes, “4.1” — as if the company’s nomenclature wasn’t confusing enough already. There’s GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all of which OpenAI says “excel” at coding and instruction following. Available through OpenAI’s API but not ChatGPT, the multimodal models have a 1-million-token context window, meaning they can take in roughly 750,000 words in one go. OpenAI’s grand ambition is to create an “agentic software engineer,” as CFO Sarah Friar put it. The company asserts its future models will be able to program entire apps end-to-end, handling aspects such as quality assurance, bug testing, and documentation writing. GPT-4.1 is a step in this direction. “We’ve optimized GPT-4.1 for real-world use based on direct feedback to improve in areas that developers care most about: frontend coding, making fewer extraneous edits, following formats reliably, adhering to response structure and ordering, consistent tool usage, and more,” says OpenAI. “These improvements enable developers to build agents that are considerably better at real-world software engineering tasks.” OpenAI claims the full GPT-4.1 model outperforms its GPT-4o and GPT-4o mini models on coding benchmarks, including SWE-bench. GPT-4.1 mini and nano are said to be more efficient and faster at the cost of some accuracy, with OpenAI saying GPT-4.1 nano is its speediest — and cheapest — model ever. According to OpenAI’s internal testing, GPT-4.1, which can generate more tokens at once than GPT-4o (32,768 versus 16,384), scored between 52% and 54.6% on SWE-bench Verified, a human-validated subset of SWE-bench. In a separate evaluation, OpenAI probed GPT-4.1 using Video-MME, which is designed to measure the ability of a model to “understand” content in videos. GPT-4.1 reached a chart-topping 72% accuracy on the “long, no subtitles” video category, claims OpenAI.
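Since GPT-4.1 is API-only, a typical call goes through the OpenAI SDK. A minimal sketch of the kind of instruction-following coding task the article highlights; the file name and prompt are illustrative:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The 1-million-token context window leaves room to pass whole modules.
with open("module_under_review.py") as f:
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system",
         "content": "Make the fewest edits necessary and reply as a unified diff."},
        {"role": "user",
         "content": f"Fix the off-by-one error in this module:\n\n{source}"},
    ],
)
print(response.choices[0].message.content)
```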
TheStage AI’s tech enables developers to optimize and fine-tune their AI models to meet their exact requirements in terms of performance, size and latency
TheStage AI, a startup that optimizes AI models to run better at lower cost, has announced a $4.5 million funding round. Its flagship technology is ANNA, which stands for Automatic NNs Analyzer, a system that leverages AI and discrete math to automatically adjust PyTorch models using techniques such as quantization, pruning, and sparsification. In essence, it trims the fat to make AI models leaner and more performant. Using ANNA, developers can fine-tune their AI models to meet their exact requirements in terms of performance, size, and latency. In addition, the company provides access to so-called “Elastic models,” pre-fine-tuned open-source models available in a range of sizes, so customers can select the most appropriate model based on the required quality, speed, and cost. TheStage AI’s main goal is to help companies reduce the cost of deploying and running AI applications. Within its Model Library, TheStage AI currently lists dozens of optimized models, including various fine-tuned versions of the popular image generation model Stable Diffusion, letting customers strike exactly the right balance between cost and performance. Customers can also bring their own models and use ANNA to optimize them. In its collaboration with Recraft, TheStage AI reckons it was able to double the performance of that company’s most powerful models while reducing processing times by 20%.
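ANNA itself is proprietary, but the techniques it automates, pruning and quantization, are available in stock PyTorch. A minimal sketch of applying them by hand to a toy model; the architecture and thresholds are illustrative, not TheStage AI's pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Pruning: zero out the 30% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the sparsity into the tensor

# Quantization: convert Linear weights to int8 for a smaller, faster model.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```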