Peymo Ltd, a UK-based FinTech, has launched the world’s first AI-powered multi-hybrid bank: a digital finance platform that seamlessly integrates fiat banking, crypto wallets, tokenised assets, and embedded finance into one unified system. Built on proprietary modular architecture, the platform enables users to manage GBP, EUR, crypto assets, and branded debit cards in one place, while enterprises can integrate full banking functions via simple APIs. “We make complex finance invisible,” said Tomas Bartos, Founder of Peymo. “By fusing AI, fiat, crypto and embedded finance into a single stack, we’re delivering the next generation of banking — and it’s ready today.”

Branded as “Peymo AI – Smarter Banking for Every User,” the platform delivers powerful AI through a voice-first interface with continuous listening, instant intent recognition, and multimodal confirmations to enable secure, hands-free banking. Behind the interface, five specialised AI agents monitor user behavior, track market activity, optimise payments, and ensure asset protection in real time. A built-in smart referral engine identifies potential users within a network, dispatches personalised invitations via WhatsApp or voice, and tracks referral success.

Operationally, Peymo’s AI powers instant onboarding in under five seconds, continuous transaction monitoring for faster clearances and real-time deposits, and self-improving system code through usage-based AI feedback to keep the platform fast, compliant, and lean. As a true hybrid financial engine, Peymo’s AI also helps users navigate their entire portfolio, from crypto to fiat, gold to tokenised assets, identifying the smartest route, timing, and format for each transaction to ensure efficiency, transparency, and full control. Its autonomous architecture supports scalable growth by identifying high-value B2B leads and ideal embedded finance partners who can adopt Peymo’s wallets, KYC tools, cards, or payment systems at scale.
Simultaneously, human-sounding voice agents engage users directly — offering guidance, upsell suggestions, and personalised support to unlock unused features, deliver premium upgrades, and execute activation nudges, with the system scalable to millions of tailored interactions per hour.
Algolia’s MCP Server enables LLMs and agentic AI systems to interact with its APIs, retrieving, reasoning with, and acting on real-time business context at scale through a standards-based, secure runtime
Algolia announced the release of its MCP Server, the first component in a broader strategy to support the next generation of AI agents. The new offering enables large language models (LLMs) and autonomous agents to retrieve, reason with, and act on real-time business context from Algolia, safely and at scale. “By exposing Algolia’s APIs to agents, we’re enabling systems that adapt in real time, honor business rules, and reduce the time between problem and resolution,” said Bharat Guruprakash, Chief Product Officer at Algolia. With this launch, Algolia enables an agentic AI ecosystem in which software powered by language models is no longer limited to answering questions but can autonomously take actions, make decisions, and interact with APIs. The MCP Server is the first proof point in a long-term roadmap aimed at positioning Algolia as both the retrieval layer for agents and a trusted foundation for agent-oriented applications. With the Algolia MCP Server, agents can now access Algolia’s search, analytics, recommendations, and index configuration APIs through a standards-based, secure runtime. This turns Algolia into a real-time context surface for agents embedded in commerce, service, and productivity experiences; Algolia’s AI explainability framework also carries over, for enhanced transparency. More broadly, agents can retrieve business context, make updates freely, and chain decisions across workflows. With the MCP Server and upcoming tools, Algolia is eliminating friction in the development of agentic AI systems, empowering developers to define agent behaviors around Algolia’s APIs, rely on Algolia’s safety scaffolding, and compose agents that span systems.
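To make the mechanism concrete: MCP exposes tools to agents over JSON-RPC, and the agent invokes them with a `tools/call` request. The sketch below is a minimal, self-contained illustration of that pattern only; the tool name `search_index`, its parameters, and the in-memory "catalog" are hypothetical stand-ins, not Algolia’s actual MCP tool schema or search API.

```python
import json

# Hypothetical tool standing in for a real search call behind an MCP server.
# The name, parameters, and data here are illustrative, not Algolia's schema.
def search_index(index: str, query: str) -> dict:
    catalog = {"products": ["red running shoes", "blue trail shoes", "rain jacket"]}
    hits = [h for h in catalog.get(index, []) if query.lower() in h]
    return {"hits": hits, "nbHits": len(hits)}

TOOLS = {"search_index": search_index}

def handle_tools_call(request: str) -> str:
    """Dispatch an MCP-style JSON-RPC 'tools/call' request to a registered tool."""
    req = json.loads(request)
    assert req["method"] == "tools/call"
    params = req["params"]
    result = TOOLS[params["name"]](**params["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# An agent asks the server to run a search on its behalf:
request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "search_index",
               "arguments": {"index": "products", "query": "shoes"}},
})
response = json.loads(handle_tools_call(request))
print(response["result"]["nbHits"])  # 2
```

The value of the standards-based runtime is exactly this indirection: the agent never touches the backing API directly, so business rules and safety checks can be enforced in the dispatch layer.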
Typedef turns AI prototypes into scalable, production-ready workloads by managing all the complex properties of mixed AI workloads through a clean, composable interface using APIs, relational models and serverless tech
Typedef Inc., which turns AI prototypes into scalable, production-ready workloads that generate immediate business value, has come out of stealth mode with $5.5 million in seed funding. With new purpose-built AI data infrastructure for modern workloads, Typedef is helping AI and data teams overcome the well-documented epidemic affecting the bulk of enterprise AI projects: failure to scale. The solution is built from the ground up to build, deploy, and scale production-ready AI workflows, running deterministic workloads on top of non-deterministic LLMs. Typedef makes it easy to run scalable LLM-powered pipelines for semantic analysis with minimal operational overhead. The developer-friendly solution manages the complex properties of mixed AI workloads, such as token limits, context windows, and chunking, through a clean, composable interface built on the APIs and relational models engineers already recognize. Typedef allows for rapid, iterative prompt and pipeline experimentation to quickly identify production-ready workloads that will demonstrate value, then realize that potential at scale. Typedef is completely serverless, bypassing infrastructure provisioning and configuration: users simply download the open-source client library, connect their data sources and start building their AI or agentic pipelines with just a few lines of code. No complex setup, no infrastructure to provision, no brittle custom integrations to troubleshoot.
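One of the "complex properties" named above, chunking text to fit a model’s context window, can be sketched in a few lines. This is a generic illustration of the problem such infrastructure absorbs, not Typedef’s client API; the function name, whitespace tokenization, and overlap scheme are all assumptions for the sake of the example.

```python
# Toy sketch: split whitespace-tokenized text into overlapping chunks that
# each stay under a model's token limit (real systems use model tokenizers).
def chunk_text(text: str, max_tokens: int, overlap: int = 2) -> list[str]:
    tokens = text.split()
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(" ".join(tokens[start:start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break
        # Step forward, repeating `overlap` tokens so no context is lost
        # at chunk boundaries.
        start += max_tokens - overlap
    return chunks

doc = "one two three four five six seven eight nine ten"
chunks = chunk_text(doc, max_tokens=4)
print(chunks)  # 4 chunks, each sharing 2 tokens with its neighbor
```

Doing this correctly across token limits, rate limits, and retries for every pipeline stage is exactly the operational overhead a managed interface is meant to hide.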
Rackspace Technology and Sema4.ai launch industry’s first scalable enterprise AI agent solution
Rackspace Technology announced a strategic alliance with enterprise AI agent innovator Sema4.ai (“Sema4”). This collaboration integrates the Foundry for AI by Rackspace (FAIR™) services and Rackspace’s application management expertise with Sema4.ai’s advanced ‘SAFE’ AI Agent Platform, combining the strengths of both companies in artificial intelligence, cloud and systems integration to accelerate the adoption of secure, enterprise-grade AI solutions. The partnership will enable the rapid deployment of scalable, production-ready AI agents across enterprise functions, with robust governance, transparency, and security protocols as core foundational elements. Through this new collaboration, businesses can design and deploy custom AI agents tailored to specific use cases, with seamless integrations across key functions such as HR, finance, customer support, sales, and operations. Customers will also gain access to full AI lifecycle management, including built-in observability and a centralized control plane, enabling scalable, secure, and efficient deployment. These agents can: operate independently or collaboratively, offering shared functionalities such as natural language understanding, workflow automation, and advanced document processing; help streamline and enhance the automation of high-value tasks; and automate existing business processes by interpreting standard operating procedures (SOPs) and runbooks, eliminating the need for complex prompt engineering.
Fujitsu unveils AI-powered presentation technology, enabling automated multilingual and customizable presentations and answering audience questions based on pre-integrated materials
Fujitsu announced the development of a new technology which enables AI avatars to carry out presentations and handle audience questions. The technology, a core component of Fujitsu’s AI service Fujitsu Kozuchi, automatically generates and delivers presentations from Microsoft PowerPoint data and answers audience questions based on materials pre-integrated into a retrieval-augmented generation (RAG) process. Fujitsu will utilize the technology within the company from the second quarter of FY 2025 and begin providing it to customers around the world from the third quarter. Users will be able to create AI avatars using their own likeness and voice and have them generate presentations automatically in over 30 languages, making it possible for anybody to utilize the technology without specialist knowledge. Going forward, Fujitsu AI Auto Presentation will also be available directly via Microsoft Teams and PowerPoint. By democratizing the presentation process and allowing anybody to deliver presentations irrespective of time constraints, language level, presentation aptitude, and other factors, Fujitsu will empower organizations to share accurate, high-quality information and improve operational efficiency, contributing to the development of a digital society, a key focus of its materiality. Other Fujitsu AI Auto Presentation features include autonomous slide transition with time allocation (international patent pending) and customizable presentation content generation.
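The RAG step described above, grounding an answer in pre-integrated presentation materials, follows a retrieve-then-generate pattern. The sketch below shows only the shape of that pattern under toy assumptions: word-overlap scoring stands in for a real embedding retriever, and the final generation step is omitted. It is not Fujitsu’s implementation.

```python
import re

def words(text: str) -> set[str]:
    """Lowercased alphanumeric word set, stripping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the question
    (a crude stand-in for embedding-based retrieval)."""
    q = words(question)
    return max(passages, key=lambda p: len(q & words(p)))

# Pre-integrated materials the avatar can draw on:
passages = [
    "The product launches in the third quarter of FY 2025.",
    "Presentations can be generated in over 30 languages.",
    "Avatars can use the presenter's own likeness and voice.",
]
question = "How many languages are supported for generated presentations?"
context = retrieve(question, passages)
# A real system would now pass `context` plus the question to an LLM;
# here we just show which passage grounds the answer.
print(context)
```

The key property RAG buys in this setting is that the avatar’s answers stay anchored to the pre-approved materials rather than to whatever the underlying model happens to recall.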
Salesforce’s new Agentforce 3 platform lets teams analyze every agent interaction, drill into specific moments, and understand trends, and offers plug-and-play compatibility with other agents through built-in MCP support
Salesforce launched Agentforce 3, a major upgrade to its flagship artificial intelligence product for enterprises, with new ways to observe and control AI agents on the platform. The Agentforce platform lets companies build, customize and deploy generative AI agents that augment the work of employees autonomously: goal-oriented pieces of software capable of completing tasks with little or no human supervision. Using the platform, employees across sales, service, marketing and commerce can customize AI “workers” to take action on their behalf using business logic and prebuilt automations. A new Command Center provides complete observability, plus built-in support for the Model Context Protocol for plug-and-play compatibility with other agents and services. The Agentforce Command Center unifies agent health, performance and outcome optimization. Built into Agentforce Studio, it allows teams to analyze every agent interaction, drill into specific moments and understand trends, and it will display AI-powered recommendations for tagged conversation types to improve Agentforce agents continuously. The Command Center will act as a single place to understand AI agents, changing contextually according to the type of agent on display. Users will be able to use natural language to generate topics, instructions and test cases right in Studio. Testing Center simulates AI agent behavior at scale with data state injection and AI-driven evaluation, allowing users to stress-test agents before going live.
Boomi’s AI solution offers low-code integration tools, a visual design interface and scalable agent orchestration to enable secure and adaptive deployment of agents across hybrid environments
By bridging the gap between cloud agility and on-premises control, Boomi Agentstudio, a secure AI management solution, is changing how organizations handle data integration and automation across hybrid environments. With Agentstudio, developers gain access to Boomi’s renowned low-code integration tools, a visual design interface and scalable agent orchestration, making hybrid deployments easier and more adaptive, according to Mani Gill, vice president of product management, AI and data, at Boomi LP. As part of Boomi Agentstudio, the Boomi Agent Control Tower serves as a centralized dashboard for monitoring, scaling and orchestrating agent activity across distributed environments. Together, Agentstudio and the Control Tower enable a streamlined hybrid integration strategy, delivering both development agility and deployment robustness, according to Gill. Boomi Agentstudio takes a platform-based approach, centralizing the design, deployment and management of integration agents across hybrid environments. This approach is enabled by the Boomi Enterprise Platform, which provides a comprehensive suite of tools for integration, API management, data quality and workflow automation, according to Gill.
OpenAI ramps up office productivity features: ChatGPT can now record and transcribe any meeting, brainstorming session or voice note, pull out key points and turn them into follow-ups, plans and code, orchestrating tasks rather than just automating them
OpenAI is busy rolling out a suite of office productivity features on ChatGPT that puts it in direct competition with its main investor and partner, Microsoft, and key rival, Google. Since early June, OpenAI has buffed up ChatGPT to do office work:

Record Mode: Record and transcribe any meeting, brainstorming session or voice note. ChatGPT will pull out key points and turn them into follow-ups, plans and code.

Enhanced Projects: Projects now have deep research, voice, improved memory, file-uploading capability and model selection.

Advanced Voice: Voice now offers live translation and smoother interaction.

Connectors: ChatGPT can pull data from Microsoft Outlook, Microsoft Teams, Microsoft OneDrive, Microsoft SharePoint, Google Drive, Gmail, Google Calendar, Dropbox and more.

Updated Canvas: The side-by-side editing capability can now export documents in PDF, DOCX or Markdown formats.

AI-native workflows are the future. Read.ai, Otter.ai and Microsoft Copilot are “now in ChatGPT’s competitive crosshairs. The difference? ChatGPT isn’t just automating tasks; it’s orchestrating them, end-to-end, with context and language-level intelligence.” We’re seeing the beginning of the ‘invisible app era’ where productivity doesn’t live in documents; it lives in dynamic, AI-mediated interactions.
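The transcript-to-follow-ups step in Record Mode can be pictured with a toy pipeline. In ChatGPT a language model does this extraction; the sketch below substitutes simple keyword matching so the example stays self-contained, and the transcript, marker list, and function name are all invented for illustration.

```python
def extract_followups(transcript: str) -> list[str]:
    """Collect lines that sound like commitments; a crude stand-in for
    the LLM-based extraction a real assistant would perform."""
    markers = ("will ", "needs to ", "follow up")
    followups = []
    for line in transcript.splitlines():
        sentence = line.split(": ", 1)[-1]  # drop the speaker label
        if any(m in sentence.lower() for m in markers):
            followups.append(sentence.strip())
    return followups

transcript = """\
Ana: Thanks everyone for joining.
Ben: I will send the revised budget by Friday.
Ana: Great. Carla needs to confirm the venue with the vendor.
Carla: Noted, I'll follow up today."""

followups = extract_followups(transcript)
for item in followups:
    print("-", item)
```

The orchestration claim above is the interesting part: once follow-ups are structured data like this, they can be routed onward, into a calendar, a task tracker, or a draft email, without the user leaving the conversation.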
Kognitos platform combines the reasoning of symbolic logic with AI to transform tribal and system knowledge into documented, automated processes, shrinking the automation lifecycle while eliminating hallucinations and ensuring full governance
Kognitos launched its groundbreaking neurosymbolic AI platform, the industry’s first to uniquely combine the reasoning of symbolic logic with the learning power of modern AI. This unified platform empowers enterprises to address hundreds of business automation use cases, consolidate their AI tools and reduce technology sprawl. Kognitos uniquely transforms tribal and system knowledge into documented, automated processes, establishing a new, dynamic system of record for business operations. Using English as code, businesses can achieve automation in minutes with pre-configured workflows and a free community edition. “With Kognitos, we’re automating processes we thought were out of reach, thanks to hallucination-free AI and natural language capabilities,” said customer Christina Jalaly at Boost Mobile. “The agility and speed to value are game-changing, consistently delivering roughly 23x ROI and tangible results. Kognitos is a key partner in transforming our operations.” Kognitos also addresses complex “long tail” automation challenges. Its patented Process Refinement Engine keeps documented automation current and optimized using AI. This shrinks the automation lifecycle, where testing, deployment, monitoring and changes are all English-based and AI-accelerated. Key innovations launched today include: The Kognitos Platform Community Edition; Hundreds of pre-built workflows; Built-in document and Excel processing; Automatic agent regression testing; Browser use.
Gemini’s new foundation model runs locally on bi-arm robotic devices without accessing a data network, enabling rapid experimentation with dexterous manipulation and adaptability to new tasks through fine-tuning
Google DeepMind introduced a vision language action (VLA) model that runs locally on robotic devices, without accessing a data network. The new Gemini Robotics On-Device robotics foundation model features general-purpose dexterity and fast task adaptation. “Since the model operates independent of a data network, it’s helpful for latency sensitive applications and ensures robustness in environments with intermittent or zero connectivity,” Google DeepMind Senior Director and Head of Robotics Carolina Parada said. Building on the task generalization and dexterity capabilities of Gemini Robotics, which was introduced in March, Gemini Robotics On-Device is meant for bi-arm robots and is designed to enable rapid experimentation with dexterous manipulation and adaptability to new tasks through fine-tuning. The model follows natural language instructions and is dexterous enough to perform tasks like unzipping bags, folding clothes, zipping a lunchbox, drawing a card, pouring salad dressing and assembling products. It is also Google DeepMind’s first VLA model that is available for fine-tuning. “While many tasks will work out of the box, developers can also choose to adapt the model to achieve better performance for their applications,” Parada said in the post. “Our model quickly adapts to new tasks, with as few as 50 to 100 demonstrations — indicating how well this on-device model can generalize its foundational knowledge to new tasks.”