Pipe, a FinTech company specializing in embedded capital products for small businesses, has launched four new AI agents to automate key operational workflows and accelerate its global expansion. The AI-native approach eliminates traditional credit score assessments and personal guarantees, leveraging live revenue and transaction data to optimize capital offers. The agents streamline tasks across fraud detection, compliance, customer engagement, payments, and treasury operations, enhancing partner and customer experiences and accelerating platform growth without increasing headcount. The four agents are: the Fraud and Compliance Agent, which reviews flagged applications using business data to distinguish genuine risk from input errors, allowing up to 90% of applicants to receive decisions in minutes; the Recovery Agent, which analyzes business operations and payment statuses to guide the most effective strategy for resuming payments; the Sales Agent, which supports applicants around the clock and re-engages businesses that abandoned applications; and the Treasury Agent, which provides real-time liquidity insights by monitoring global cash positions and macroeconomic indicators to guide investment and capital deployment decisions.
Generative AI can significantly accelerate decision-making and time to market for deposit pricing refinements by delivering the results of scenario-based queries in executive- and execution-ready format with supporting data assets
Deposit pricing tools have come a long way, but there’s a disconnect between what they produce and the action that hits the market. Generative AI, handled properly, can measurably accelerate implementation. Applied to deposit-optimizing technology, generative AI can significantly speed decision-making and time to market for deposit pricing refinements: it reduces the effort needed to interpret results and presents findings in straightforward language, with supporting data assets that virtually any responsible party in the organization can act on. Such accessibility can allow the results of scenario-based queries to be delivered to a decision-making audience within hours. A deposit pricing manager at a large regional bank might be tasked by the head of retail banking to model a pricing optimization that grows money market account balances by 70% while minimizing overall interest expense, perhaps to fund the bank’s expected lending demand. A typical optimized rate grid might contain tens of thousands of pricing cells: combinations of product features and customer attributes such as geography, balance tier and depth of relationship with the bank. The output to the deposit pricing manager would be pricing, margins and expected balances across those numerous cells, requiring significant further analysis and distillation before the head of retail could put it into effect as the bank’s product offerings. With AI, however, the output can be executive- and execution-ready.
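To make the scale of such a rate grid concrete, here is a minimal sketch of how pricing cells multiply out of attribute combinations. The dimensions, tier boundaries and rate rule are entirely hypothetical, chosen only to illustrate the cross-product structure the article describes; real grids use many more attributes and far finer tiers, which is how cell counts reach the tens of thousands.

```python
from itertools import product

# Hypothetical pricing dimensions -- illustrative only, not any vendor's schema.
geographies = [f"region_{i}" for i in range(10)]
balance_tiers = [0, 10_000, 50_000, 100_000, 250_000]  # tier lower bounds, USD
relationship_depths = ["deposit_only", "deposit_plus_loan", "full_relationship"]
products = ["mma_standard", "mma_promo"]

def toy_rate(tier: int, depth: str) -> float:
    """Illustrative rate rule: higher balances and deeper relationships earn more."""
    tier_bump = 0.002 * balance_tiers.index(tier)
    depth_bump = {"deposit_only": 0.0,
                  "deposit_plus_loan": 0.001,
                  "full_relationship": 0.002}[depth]
    return round(0.025 + tier_bump + depth_bump, 4)

# One cell per combination of attributes -- the cross product is what makes
# the raw grid too large to hand to an executive without distillation.
grid = {
    (g, t, d, p): toy_rate(t, d)
    for g, t, d, p in product(geographies, balance_tiers, relationship_depths, products)
}

print(len(grid))  # 10 * 5 * 3 * 2 = 300 cells
```

Even this four-attribute toy produces 300 cells; adding a few more attributes with realistic cardinalities multiplies the count into the tens of thousands, which is the distillation burden the article says generative AI can absorb.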
Multiverse’s Model Zoo offers compact high-performing AI for device commands and for local reasoning—bringing powerful intelligence to home appliances, smartphones, and PCs via quantum compression
AI startup Multiverse Computing has released two AI models that it bills as the world’s smallest high-performing models, handling chat and speech, and in one case even reasoning. These tiny models are intended to be embedded into Internet of Things devices, as well as to run locally on smartphones, tablets, and PCs. “We can compress the model so much that they can fit on devices,” founder Román Orús said. “You can run them on premises, directly on your iPhone, or on your Apple Watch.” The two new models are so small that they can bring chat AI capabilities to just about any IoT device and work without an internet connection. Multiverse playfully calls the family the Model Zoo because it names the products after animal brain sizes. A model it calls SuperFly is a compressed version of Hugging Face’s open source model SmolLM2-135M. The original has 135 million parameters and was developed for on-device uses. SuperFly is 94 million parameters, which Orús likens to the size of a fly’s brain. “This is like having a fly, but a little bit more clever,” he said. SuperFly is designed to be trained on very restricted data, such as a device’s operations. Multiverse envisions it embedded into home appliances, allowing users to operate them with voice commands like “start quick wash” for a washing machine, or to ask troubleshooting questions. With a little processing power (such as an Arduino board), the model can handle a voice interface. The other model, ChickBrain, is larger at 3.2 billion parameters but far more capable, with reasoning capabilities. It’s a compressed version of Meta’s Llama 3.1 8B model, Multiverse says. Yet it’s small enough to run on a MacBook, no internet connection required.
More importantly, Orús said that ChickBrain actually slightly outperforms the original in several standard benchmarks, including the language-skill benchmark MMLU-Pro, math skills benchmarks Math 500 and GSM8K, and the general knowledge benchmark GPQA Diamond.
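The parameter counts quoted above imply the following size reductions. The figures come from the article itself; the percentages are plain arithmetic, not a claim about how the quantum-inspired compression works.

```python
# Parameter counts as quoted in the article; the reduction ratios below are
# simple arithmetic, not a statement about the compression technique itself.
models = {
    "SuperFly":   {"params": 94_000_000,    "original": ("SmolLM2-135M", 135_000_000)},
    "ChickBrain": {"params": 3_200_000_000, "original": ("Llama 3.1 8B", 8_000_000_000)},
}

for name, info in models.items():
    orig_name, orig_params = info["original"]
    reduction = 1 - info["params"] / orig_params
    print(f"{name}: {info['params'] / 1e6:,.0f}M params, "
          f"{reduction:.0%} smaller than {orig_name}")
```

So SuperFly trims roughly 30% of SmolLM2's parameters, while ChickBrain cuts Llama 3.1 8B by about 60%, which is what makes the benchmark parity claimed above notable.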
CodeSightAI launches an AI code review platform that integrates with GitHub to cut review time by 60%, detect 90% of security issues, and automate smart fixes
CodeSightAI launched its AI-powered code review platform designed to help development teams deliver high-quality software faster. The platform integrates with GitHub to provide intelligent code analysis, real-time collaboration, and comprehensive security scanning. It addresses critical inefficiencies in traditional code review processes that cost the global software industry billions annually. By leveraging advanced AI algorithms, CodeSightAI enables development teams to reduce review time by up to 60% while catching 90% of security issues before deployment. The feature set includes AI-powered code analysis that detects bugs, security vulnerabilities, and performance issues in real time. The platform provides smart fix recommendations with automated application capabilities and supports multiple programming languages with pattern-based security scanning. GitHub integration comes through one-click OAuth authentication, automated pull request analysis, and real-time synchronization with repository changes. Development teams benefit from live collaboration features including real-time cursors, code comments, and team performance analytics. Key capabilities of the platform include: advanced AI algorithms for bug and vulnerability detection; real-time code quality assessment with detailed suggestions; comprehensive security scanning with Row-Level Security; a team collaboration hub with activity feeds and performance metrics; a smart analytics dashboard tracking pull request metrics and quality improvements; and flexible billing and subscription management through Stripe integration.
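Pattern-based security scanning, mentioned above, is a well-established technique. The toy scanner below illustrates the general idea with three common rule types; the pattern names and regexes are illustrative stand-ins, not CodeSightAI's actual rules or engine.

```python
import re

# A toy pattern-based scanner -- illustrative of the general technique only,
# not CodeSightAI's ruleset. Each rule maps an issue name to a regex.
SECURITY_PATTERNS = {
    "hardcoded_secret": re.compile(r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
    "sql_injection":    re.compile(r"(?i)execute\(.*%s.*\)"),
    "eval_call":        re.compile(r"\beval\("),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, issue_name) pairs for every matched pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in SECURITY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
print(scan(sample))  # [(1, 'hardcoded_secret'), (2, 'eval_call')]
```

Real scanners layer language-aware parsing and data-flow analysis on top of such patterns, which is where the AI-assisted analysis the article describes would come in.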
Leena AI’s voice-enabled AI colleagues speak and listen in natural, conversational language in the workplace, acting as a go-between for situations where text might be cumbersome or difficult
Leena AI, the developer of an employee-facing agentic AI assistant, launched what it’s calling AI colleagues, which can speak out loud and listen using natural, conversational language in the workplace. Its AI agents can work and interact much as human employees do, providing support across numerous work fields, including information technology, human resources, finance, marketing, sales and procurement. By using natural voice communication, these agents let workers get work done faster than before. AI acts as a go-between for situations where text might be cumbersome or difficult, becoming more of a collaborator than a widget trapped within a screen. Already, 35% of all interactions with Leena happen over voice, and the average session lasts seven and a half minutes. In a sales scenario, for example, the AI agent could use its Salesforce integration to access and complete the necessary forms. Next, it could scour meeting notes, look up the requested deep-tech information, prepare an email and show it to the user before sending it to the tech team. The AI colleagues are powered by agentic AI, meaning they can complete tasks with little or no human input once they’re set to a goal. They still keep a human in the loop, always seeking employee approval before taking any critical action, and they are available 24/7, all year round.
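The approval-before-action behavior described above is the classic human-in-the-loop pattern. Here is a minimal sketch of it: the agent drafts an action and executes only once an approval callback says yes. All names, and the `approve` callback itself, are illustrative, not Leena AI's API.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal human-in-the-loop sketch: the agent prepares an action (e.g. a
# drafted email) but executes only after explicit approval. Illustrative only.

@dataclass
class AgentAction:
    description: str
    payload: str        # e.g. the drafted email body shown to the user
    executed: bool = False

def run_with_approval(action: AgentAction,
                      approve: Callable[[AgentAction], bool]) -> str:
    """Execute the action only if the approval callback consents."""
    if approve(action):
        action.executed = True
        return f"executed: {action.description}"
    return f"held for review: {action.description}"

draft = AgentAction("send summary email to tech team", "Hi team, ...")
print(run_with_approval(draft, approve=lambda a: True))
print(run_with_approval(AgentAction("delete records", "..."), approve=lambda a: False))
```

In a voice setting, the `approve` callback would be the spoken confirmation step the article alludes to; the structure is the same.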
Paradigm reimagines spreadsheets with AI agents in every cell, automating data collection and enhancing flexibility using multiple AI models for powerful, cost-effective workflows.
Paradigm is an AI-powered spreadsheet equipped with more than 5,000 AI agents. Users can assign different prompts to individual columns and cells, and individual AI agents will crawl the internet to find and fill in the needed information. Paradigm works with AI models from Anthropic, OpenAI, and Google’s Gemini and supports model switching. It attracts users ranging from consultants to sales and finance professionals, and operates on a subscription model with tiers based on usage. “We want to support every single model because we want our users to be able to have the highest reasoning outputs when they need it, but also the cheapest outputs,” founder Anna Monaco said. “It’s just a constant cycle of evaluating different models, working closely with model providers to make sure our limits are high enough, and then giving some of that power to our users.” Monaco said that she doesn’t really consider the competition because Paradigm doesn’t think of itself as an AI-powered spreadsheet. She said she thinks of it as a new AI-powered workflow that happens to be in the familiar form of a spreadsheet but won’t necessarily stay that way forever.
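The prompt-per-column idea can be sketched in a few lines: each column holds a prompt template, and a model call, stubbed out below with a lookup table, fills the cell for each row. The stub, column names and prompts are all hypothetical; Paradigm's real agents crawl the web and dispatch to hosted models from Anthropic, OpenAI and Google.

```python
# Toy "prompt per column" spreadsheet: each column carries a prompt template
# that is instantiated per row and sent to a model. The stub below stands in
# for real Anthropic / OpenAI / Gemini calls -- illustrative only.

def stub_model(prompt: str) -> str:
    canned = {
        "Find the CEO of Acme Corp": "Jane Doe",
        "Find the CEO of Globex": "John Roe",
    }
    return canned.get(prompt, "unknown")

columns = {"CEO": "Find the CEO of {company}"}  # column name -> prompt template
rows = [{"company": "Acme Corp"}, {"company": "Globex"}]

for row in rows:
    for col, template in columns.items():
        row[col] = stub_model(template.format(**row))

print(rows)
```

Model switching then amounts to swapping which backend `stub_model` points at, which is the cost-versus-reasoning trade-off Monaco describes.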
Juniper rolls out self-driving agentic networking featuring no-touch fixes, natural-language AI queries and video-issue prediction with a large experience model
Juniper Networks Inc.’s Mist platform was purpose-built with AI in mind, leveraging automation and insight to optimize user experiences. Built into Mist is the Marvis AI engine and Assistant, which uses high-quality data, advanced AI and machine learning data science, and a conversational interface to simplify deployment and troubleshooting. Now under Hewlett Packard Enterprise Co., Mist has been brought together with Aruba Networks to form what the company calls the “secure AI-native network,” a blend of leading AIOps, product breadth and security to solve real customer and partner needs. Ultimately, the company’s vision is to bring all HPE Networking products under a common cloud management layer and AI engine with centralized operations. “One thing that we added is the ability to choose specific areas for self-driving mode that don’t require human intervention,” said Jeff Aaron, vice president of product and solution marketing at HPE. “If a switch port is stuck or an AP is running non-compliant software, for example, you can tell Marvis to go fix it on its own. We provide reporting to show which features were fixed autonomously, how they were fixed, and why the decision was made so IT still has complete visibility into what is happening.” In addition, Marvis got a back-end upgrade, leveraging more generative AI capabilities and agentic workflows for even better real-time troubleshooting. The assistant has always used natural language processing and understanding to interpret simple language queries and provide insightful answers on par with human experts. Furthermore, Marvis’ AIOps capabilities have been expanded further into the data center through tighter integration with Juniper Apstra’s contextual graph database. This allows Marvis to analyze infrastructure configurations and answer data center-related inquiries using the same Marvis conversational interface employed elsewhere in the network.
Finally, HPE Networking also expanded its ability to proactively predict and prevent video issues using what it calls a large experience model, or LEM. This pulls in billions of data points from Zoom and Microsoft Teams clients and correlates them with networking data to identify the root cause of video issues. The LEM framework has now been augmented with data from Marvis digital experience twins, or Minis, which autonomously probe the wired, wireless, WAN and data center networks even when users aren’t present, providing even richer data for predictive and proactive troubleshooting. The impact shows up in different ways across industries. ServiceNow reported a 90% reduction in network trouble tickets, while Blue Diamond Growers cut the time spent managing networks by 80%. Gap achieved 85% fewer truck rolls, and Bethesda Medical reported 85% faster upgrades.
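The "self-driving scope with reporting" behavior Aaron describes, where only whitelisted issue types are fixed autonomously and every fix is logged with what, how and why, follows a simple policy-plus-audit pattern. The sketch below illustrates that pattern; the issue names, fixes and log fields are illustrative, not Marvis's actual schema.

```python
from datetime import datetime, timezone

# Sketch of self-driving remediation with an audit trail. Only issue types the
# operator has opted into are fixed autonomously; everything else escalates.
# Issue names and fixes are illustrative, not Marvis's event schema.

SELF_DRIVING_SCOPE = {"stuck_switch_port", "noncompliant_ap_firmware"}
FIXES = {
    "stuck_switch_port": "bounced port",
    "noncompliant_ap_firmware": "scheduled firmware upgrade",
}

audit_log: list[dict] = []

def remediate(issue_type: str, device: str) -> str:
    if issue_type not in SELF_DRIVING_SCOPE:
        return "escalated_to_human"
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "device": device,
        "issue": issue_type,
        "fix": FIXES[issue_type],
        "reason": "matched self-driving policy",  # the "why" for IT visibility
    })
    return "fixed_autonomously"

print(remediate("stuck_switch_port", "sw-03"))      # fixed_autonomously
print(remediate("dhcp_scope_exhausted", "core-1"))  # escalated_to_human
```

The audit log is what preserves the visibility Aaron emphasizes: autonomy over a bounded scope, with every autonomous decision recorded.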
Google’s vibe-coding tool lets users create mini web apps using text prompts or by remixing existing apps available in a gallery and see the visual workflow of input, output, and generation steps
Google is testing a vibe-coding tool called Opal, available to users in the U.S. through Google Labs, which the company uses as a base to experiment with new tech. Opal lets you create mini web apps using text prompts, or you can remix existing apps available in a gallery. All users have to do is enter a description of the app they want to make, and the tool will then use different Google models to do so. Once the app is ready, you can navigate into an editor panel to see the visual workflow of input, output, and generation steps. You can click on each workflow step to look at the prompt that dictates the process, and edit it if you need to. You also can manually add steps from Opal’s toolbar. Opal also lets users publish their new app on the web and share the link with others to test out using their own Google accounts. Google’s AI studio already lets developers build apps using prompts, but Opal’s visual workflow indicates the company likely wants to target a wider audience.
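The input, generation and output steps Opal exposes in its editor panel form a simple prompt pipeline: each step holds an editable prompt and feeds its result to the next. The sketch below shows that chaining; the step names and the stubbed `generate` function are illustrative, not Google's implementation.

```python
from dataclasses import dataclass

# Minimal sketch of an Opal-style workflow: an ordered chain of steps, each
# with an editable prompt, where every step transforms the previous output.
# Step names and the generate() stub are illustrative only.

@dataclass
class Step:
    name: str
    prompt: str  # editable per step, as in Opal's editor panel

def generate(prompt: str, data: str) -> str:
    # Stand-in for a model call; just tags the data with the prompt applied.
    return f"[{prompt}] {data}"

workflow = [
    Step("input", "collect the user's topic"),
    Step("generate", "draft a mini app around the topic"),
    Step("output", "render the result as a web page"),
]

result = "travel planner"
for step in workflow:
    result = generate(step.prompt, result)

print(result)
```

Editing a step's prompt, or inserting a new `Step` into the list, mirrors what Opal's visual editor and toolbar let non-developers do without writing code.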
OpenAI’s new GPT-5 can write software on demand, offer tailored healthcare information, plan events and help users learn — all with expert-level fluency
OpenAI launched GPT-5, its most advanced language model to date, calling it a leap toward artificial general intelligence (AGI) and a transformative tool for businesses, developers and everyday users. CEO Sam Altman said the model can write full software applications, offer tailored healthcare information, plan events and help users learn, all with expert-level fluency. “But it’s not only an assistant now,” Altman said. “GPT-5 can also do stuff for you. It can write an entire computer program from scratch to help you with whatever you’d like. We think this idea of software on demand is going to be one of the defining characteristics of the GPT-5 era.” OpenAI also unveiled a partnership with Cursor to make GPT-5 the default AI model for its coding tool. The model comes in three versions: GPT-5, GPT-5 mini and GPT-5 nano. It also merges OpenAI’s GPT series and its omni, or “o,” reasoning models to offer users one unified, all-around capable AI model. “Until now, our users have had to pick between the fast responses of standard GPTs or the slow, more thoughtful responses from our reasoning models,” OpenAI Chief Research Officer Mark Chen said. “GPT-5 eliminates this choice. It aims to think just the perfect amount to give you the perfect answer.” New features from GPT-5, which powers ChatGPT, include the following: creates visual demos from prompts; writes “significantly” better, per OpenAI’s Christina Kaplan; customizes its voice to fit an individual’s needs; translates; has a “study and learn” mode; changes chat colors; changes ChatGPT’s personality; accesses Gmail and Google Calendar (starting next week); is less deceptive; and hallucinates less.
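Chen's "think just the perfect amount" describes routing each query to either a fast path or a slower reasoning path. OpenAI has not published GPT-5's router, so the sketch below is purely a toy illustration of the idea, using a crude keyword-and-length heuristic that is entirely our own assumption.

```python
# Toy router sketch: send short factual queries to a fast path and multi-step
# problems to a reasoning path. The heuristic is illustrative only; OpenAI has
# not disclosed how GPT-5 actually decides.

REASONING_HINTS = ("prove", "step by step", "debug", "plan", "derive")

def route(query: str) -> str:
    q = query.lower()
    if any(hint in q for hint in REASONING_HINTS) or len(q.split()) > 30:
        return "reasoning_path"
    return "fast_path"

print(route("What's the capital of France?"))           # fast_path
print(route("Plan a three-city conference itinerary"))  # reasoning_path
```

A production router would use a learned classifier over the conversation rather than keywords, but the unified-model experience Chen describes reduces to some decision of this shape.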