Apple and Anthropic have reportedly partnered to create a platform that will use AI to write, edit and test code for programmers. Apple has started rolling out the coding software to its own engineers, but hasn’t decided whether to make it available to third-party app developers. The tool generates code or alterations in response to requests made by programmers through a chat interface. It also tests user interfaces and manages the process of finding and fixing bugs. Amazon, Meta, Google and several startups have also built AI assistants for writing and editing code. McKinsey said in 2023 that AI could boost the productivity of software engineering by 20% to 45%. This increased efficiency has far-reaching implications for businesses across industries, said Bob Rogers, CPO and CTO of Oii.ai. AI-powered tools enable developers to create software and applications faster and with fewer resources. “Simple tasks such as building landing pages, basic website design, report generation, etc., can all be done with AI, freeing up time for programmers to focus on less tedious, more complex tasks,” Rogers said. “It’s important to remember that while generative AI can augment skills and help folks learn to code, it cannot yet directly replace programmers — someone still needs to design the system.”
StarTree integrates Model Context Protocol (MCP) support into its data platform, allowing AI agents to dynamically analyze live, structured enterprise data and make micro-decisions in real time
StarTree announced two powerful new AI-native innovations for its real-time data platform for enterprise workloads: Model Context Protocol (MCP) support: MCP is a standardized way for AI applications to connect with and interact with external data sources and tools. It allows Large Language Models (LLMs) to access real-time insights in StarTree in order to take actions beyond their built-in knowledge. Vector Auto Embedding: Simplifies and accelerates vector embedding generation and ingestion for real-time RAG use cases, based on Amazon Bedrock. These capabilities enable StarTree to power agent-facing applications, real-time Retrieval-Augmented Generation (RAG), and conversational querying at the speed, freshness, and scale enterprise AI systems demand. The StarTree platform now supports: 1) Agent-Facing Applications: By supporting the emerging Model Context Protocol (MCP), StarTree allows AI agents to dynamically analyze live, structured enterprise data. With StarTree’s high-concurrency architecture, enterprises can support millions of autonomous agents making micro-decisions in real time—whether optimizing delivery routes, adjusting pricing, or preventing service disruptions. 2) Conversational Querying: MCP simplifies and standardizes the integration between LLMs and databases, making natural language to SQL (NL2SQL) far easier and less brittle to deploy. Enterprises can now empower users to ask questions via voice or text and receive instant answers, with each question building on the last. This kind of seamless, conversational flow requires not just language understanding, but a data platform that can deliver real-time responses with context. 3) Real-Time RAG: StarTree’s new vector auto embedding enables pluggable vector embedding models to streamline the continuous flow of data from source to embedding creation to ingestion.
This simplifies the deployment of Retrieval-Augmented Generation pipelines, making it easier to build and scale AI-driven use cases like financial market monitoring and system observability—without complex, stitched-together workflows.
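The NL2SQL flow described above can be sketched in miniature. The sketch below is a conceptual illustration only, not StarTree or MCP code: a hypothetical `nl_to_sql` lookup stands in for the LLM that would translate a question into SQL (in a real MCP deployment, the model discovers the schema and issues queries through the protocol's standardized tool calls), and an in-memory SQLite table stands in for the live analytical store.

```python
import sqlite3

# Hypothetical stand-in for the LLM's natural-language-to-SQL step.
# In a real MCP setup the model generates SQL after inspecting the
# schema exposed through the protocol; here it is a fixed lookup.
def nl_to_sql(question: str) -> str:
    templates = {
        "how many orders shipped today": "SELECT COUNT(*) FROM orders WHERE shipped = 1",
        "total revenue": "SELECT SUM(amount) FROM orders",
    }
    return templates[question.lower()]

# Toy "live" data store standing in for the real-time platform.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL, shipped INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(10.0, 1), (25.5, 0), (7.25, 1)])

def ask(question: str):
    # One conversational turn: question -> SQL -> answer from current data.
    return conn.execute(nl_to_sql(question)).fetchone()[0]

print(ask("How many orders shipped today"))  # 2
print(ask("Total revenue"))                  # 42.75
```

The point of standardizing this hand-off, as the item notes, is that each turn hits fresh data rather than the model's frozen training snapshot.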
Speculation has resurfaced around a possible integration of Ripple’s XRP with SWIFT, as a Mastercard report highlights SBI Remit’s use of XRP as a bridge currency
A recent report by Mastercard, titled “Blockchain Technology Fuels New Remittances Business Cases,” highlights several examples of blockchain applications in remittance systems. Speculation has resurfaced around a possible integration of Ripple’s XRP with SWIFT, the global messaging network for cross-border transactions. Previous reports have indicated that banks have tested XRP’s compatibility with SWIFT. If confirmed, such a partnership could significantly boost XRP adoption among global financial institutions. The report also mentions SBI Remit, a Japanese money transfer service that uses XRP as a bridge currency. It places SBI alongside earlier examples such as MoneyGram and Stellar, suggesting a broader trend of using cryptocurrencies to cut costs and speed up cross-border transactions. Mastercard’s reference to Ripple and XRP adds credibility to the token’s role in remittances. It signals that mainstream payment firms are now taking a closer look at blockchain infrastructure. The inclusion gives Ripple added visibility in the financial ecosystem. SBI Remit’s ongoing use of XRP in Asia further illustrates how digital assets are being integrated into real-world payment systems. The Mastercard report underscores that blockchain solutions are being evaluated across regions and technologies.
AWS announces Q Developer agentic AI that will generate code using the entire codebase in the GitHub repository
Amazon Web Services (AWS) has introduced a preview of its agentic artificial intelligence software development assistant, Q Developer, for Microsoft Corp.’s code-hosting platform GitHub. GitHub is used by millions of developers to store vast amounts of source code for software projects, enabling collaboration, version control, and code management. Q Developer is now available in the GitHub Marketplace, providing AI-powered capabilities such as feature development, code review, and Java code migration directly within the GitHub interface. Q Developer acts as a teammate, automating tedious tasks. Developers can assign issues to it, such as feature requests, and it will generate code using the entire codebase in the GitHub repository by following the description in the request. The AI agent will automatically update the code repository with the changes, checking that they are syntactically sound and using GitHub Actions for security vulnerability scans and code quality checks. It will also use its own feedback to improve the code. Q Developer also offers easy migration for legacy codebases: developers can create a GitHub issue called “Migration” and assign it to the Amazon Q transform agent, which will handle the entire migration from an earlier version of Java to the newest, ensuring developers have access to the most recent features and capabilities.
IBM’s hybrid technologies enable businesses to build and deploy AI agents with their own enterprise data, and a new Agent Catalog in watsonx Orchestrate simplifies access to 150+ agents
IBM is unveiling new hybrid technologies that break down the longstanding barriers to scaling enterprise AI – enabling businesses to build and deploy AI agents with their own enterprise data. IBM is providing a comprehensive suite of enterprise-ready agent capabilities in watsonx Orchestrate to help businesses put them into action. The portfolio includes: 1) Build-your-own-agent in under five minutes, with tooling that makes it easier to integrate, customize and deploy agents built on any framework – from no-code to pro-code tools for any kind of user. 2) Pre-built domain agents specialized in areas like HR, sales and procurement – with utility agents for simpler actions like web research and calculations. 3) Integration with 80+ leading enterprise applications from providers like Adobe, AWS, Microsoft, Oracle, Salesforce Agentforce, SAP, ServiceNow, and Workday. 4) Agent orchestration to handle the multi-agent, multi-tool coordination needed to tackle complex projects like planning workflows and routing tasks to the right AI tools across vendors. 5) Agent observability for performance monitoring, guardrails, model optimization, and governance across the entire agent lifecycle. IBM is also introducing the new Agent Catalog in watsonx Orchestrate to simplify access to 150+ agents and pre-built tools from both IBM and its wide ecosystem of partners. IBM is also introducing webMethods Hybrid Integration, a next-generation solution that replaces rigid workflows with intelligent and agent-driven automation. It will help users manage the sprawl of integrations across apps, APIs, B2B partners, events, gateways, and file transfers in hybrid cloud environments.
Nvidia has launched Parakeet-TDT-0.6B-v2, an automatic speech recognition (ASR) model that can transcribe 60 minutes of audio in 1 second with an average “Word Error Rate” of just 6.05%
Nvidia has launched Parakeet-TDT-0.6B-v2, an automatic speech recognition (ASR) model that, in Nvidia’s words, can “transcribe 60 minutes of audio in 1 second.” This second version currently tops the Hugging Face Open ASR Leaderboard with an average Word Error Rate (the percentage of spoken words the model transcribes incorrectly) of just 6.05%. To put that in perspective, it approaches proprietary transcription models such as OpenAI’s GPT-4o-transcribe (with a WER of 2.46% in English) and ElevenLabs Scribe (3.3%). The model has 600 million parameters and combines the FastConformer encoder and TDT decoder architectures. It can transcribe an hour of audio in just one second, provided it’s running on Nvidia’s GPU-accelerated hardware. The performance benchmark is measured at an RTFx (Real-Time Factor) of 3386.02 with a batch size of 128, placing it at the top of current ASR benchmarks maintained by Hugging Face. Parakeet-TDT-0.6B-v2 is aimed at developers, researchers, and industry teams building applications such as transcription services, voice assistants, subtitle generators, and conversational AI platforms. The model supports punctuation, capitalization, and detailed word-level timestamping, offering a full transcription package for a wide range of speech-to-text needs. Developers can deploy the model using Nvidia’s NeMo toolkit. The setup process is compatible with Python and PyTorch, and the model can be used directly or fine-tuned for domain-specific tasks. The open-source license (CC-BY-4.0) also allows for commercial use, making it appealing to startups and enterprises alike. Parakeet-TDT-0.6B-v2 is optimized for Nvidia GPU environments, supporting hardware such as the A100, H100, T4, and V100 boards. While high-end GPUs maximize performance, the model can still be loaded on systems with as little as 2GB of RAM, allowing for broader deployment scenarios.
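Word Error Rate, the metric behind the leaderboard ranking above, is the word-level edit distance between a reference transcript and the model’s output, divided by the number of reference words. A minimal self-contained illustration of the metric (not Nvidia’s or Hugging Face’s evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") over 6 reference words ≈ 0.167.
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```

On this scale, Parakeet’s 6.05% means roughly one word in sixteen differs from the reference across the benchmark test sets.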
IBM watsonx to support enterprise-grade AI solutions at the edge with Lumen’s edge network offering <5ms latency
Lumen and IBM announced a new collaboration to develop enterprise-grade AI solutions at the edge—integrating watsonx, IBM’s portfolio of AI products, with Lumen’s Edge Cloud infrastructure and network. The new AI inferencing solutions optimized for the edge will deploy IBM watsonx technology in Lumen’s edge data centers and leverage Lumen’s multi-cloud architecture, enabling clients across financial services, healthcare, manufacturing and retail to analyze massive volumes of data in near real-time to help minimize latency. This will allow enterprises to develop and deploy AI models closer to the point of data generation, facilitating smarter decision-making while maintaining data control and security, plus accelerating AI innovation. Lumen’s edge network offers <5ms latency and direct connectivity to major cloud providers and enterprise locations. When paired with IBM watsonx, the infrastructure has the potential to enable real-time AI processing, which can help mitigate costs and risks associated with public cloud dependence. IBM Consulting will act as the preferred systems integrator, supporting clients in their efforts to scale deployments, reduce their costs and fully leverage AI capabilities through its deep technology, domain, and industry expertise. The collaboration aims to solve contemporary business challenges by turning AI potential into practical, high-impact outcomes at the edge. For enterprise businesses, this can mean faster insights, lower operational costs, and a smarter path to digital innovation. Ryan Asdourian, Chief Marketing and Strategy Officer at Lumen, said, “By combining IBM’s AI innovation with Lumen’s powerful network edge, we’re making it easier for businesses to tap into real-time intelligence wherever their data lives, accelerate innovation, and deliver smarter, faster customer experiences.”
Unblocked is an AI-powered assistant that answers contextual questions about code and lets developers search for the person who changed a particular module
Unblocked is an AI-powered assistant that answers contextual questions about lines of code. Unblocked integrates with development environments and apps like Slack, Jira, Confluence, Google Drive, and Notion. The tool gathers intelligence about a company’s codebase and helps answer questions such as “Where do we define user metrics in our system?” Developers can also use the platform to search for the person who made changes to a particular module and quickly gain insights from them. Unblocked offers admin controls that can be easily adopted by a company’s system admin, and the startup is working on integrating with platforms like Cursor and Lovable to improve code explainability. Beyond this, Unblocked is developing tools that actively help developers with projects rather than simply answer questions. One, Autonomous CI Triage, supports developers in testing code through different scenarios. Unblocked counts companies such as Drata, AppDirect, Big Cartel, and TravelPerk as customers. Unblocked’s founder, Pilarinos, claims that engineers at Drata were able to save one to two hours per week using the platform.
Iterate.ai offers an on-premises AI appliance that delivers complete control, privacy, and enterprise-grade AI performance without relying on the cloud
Iterate.ai and ASA Computers have launched AIcurate, a turnkey, on-premises AI appliance that delivers complete control, privacy, and enterprise-grade AI performance without relying on the cloud. Built on Iterate.ai’s Generate platform and deployed on Dell PowerEdge servers, AIcurate empowers enterprises to run LLMs and AI workloads securely within their own infrastructure. The system supports integration with popular business tools, is vendor-agnostic, and is optimized for performance-intensive applications such as document analysis, internal search, and workflow automation. Unlike public AI platforms, AIcurate enables secure deployment of powerful LLMs such as OpenAI’s models, PaLM 2, Meta’s Llama, Mistral, and Microsoft’s models, all without sending data to the cloud. Businesses can build custom AI workflows while ensuring compliance with internal policies and industry regulations. “This collaboration makes advanced AI more accessible for organizations that can’t compromise on data control,” said Ruban Kanapathippillai, SVP of Systems and Solutions at ASA Computers. “AIcurate puts enterprise-grade AI directly into customers’ data centers, giving them full control while supporting the flexible and secure architecture that modern IT teams demand.” Capabilities included in AIcurate: secure on-prem deployment, enterprise tool integration, support for leading LLMs, vendor-agnostic architecture, advanced document processing, role-based access control, and workflow automation with agentic AI.
ServiceNow’s new AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model or workflow in their system
ServiceNow’s new AI Control Tower offers a holistic view of the entire AI ecosystem. AI Control Tower acts as a “command center” to help enterprise customers govern and manage all their AI workflows, including agents and models. The AI Control Tower lets AI systems administrators and other AI stakeholders monitor and manage every AI agent, model or workflow in their system — even third-party agents. It also provides end-to-end lifecycle management, real-time reporting for different metrics, and embedded compliance and AI governance. The idea behind AI Control Tower is to give users a central location to see where all of the AI in the enterprise is. “I can go to a single place to see all the AI systems, how many were onboarded or are currently deployed, which ones are an AI agent or classic machine learning,” said Dorit Zilbershot, ServiceNow’s Group Vice President of AI Experiences and Innovation. “I could be managing these in a single place, making sure that I have full governance and understanding of what’s going on across my enterprise.” She added that the platform helps users “really drill down to understand the different systems by the provider and by type,” to understand risk and compliance better. The company’s agent library allows customers to choose the agent that best fits their workflows, and it has built-in orchestration features to help manage agent actions. ServiceNow also unveiled its AI Agent Fabric, a way for its agents to communicate with other agents or tools. Zilbershot said ServiceNow will still support other protocols and will continue working with other companies to develop standards for agentic communication.