OpenAI announced what it’s calling the first “connector” for ChatGPT deep research, the company’s tool that searches across the web and other sources to compile thorough research reports on a topic. Now, ChatGPT deep research can link to GitHub (in beta), allowing developers to ask questions about a codebase and engineering documents. The connector will be available for ChatGPT Plus, Pro, and Team users over the next few days, with Enterprise and Edu support coming soon.

The GitHub connector arrives as AI companies look to make their chatbots more useful by building ways to link them to outside platforms and services. Anthropic, for example, recently debuted Integrations, which gives apps a pipeline into its AI chatbot Claude. In addition to answering questions about codebases, the connector lets ChatGPT users break down product specs into technical tasks and dependencies, summarize code structure and patterns, and understand how to implement new APIs using real code examples.

OpenAI also launched fine-tuning options for developers looking to customize its newer models for particular applications. Devs can now fine-tune OpenAI’s o4-mini “reasoning” model via a technique OpenAI calls reinforcement fine-tuning, which uses task-specific grading to improve the model’s performance. Fine-tuning has also rolled out for the company’s GPT-4.1 nano model.
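The “task-specific grading” idea behind reinforcement fine-tuning can be pictured as a scoring function applied to each model output, with higher-scoring outputs reinforced during training. The sketch below is purely illustrative (a toy token-overlap grader), not OpenAI’s actual grading API:

```python
# Toy sketch of task-specific grading: score each model output against a
# reference, so higher-scoring outputs can be reinforced during training.
# All names here are hypothetical, not OpenAI's fine-tuning API.

def grade_answer(model_output: str, reference: str) -> float:
    """Return a score in [0, 1] based on token overlap with the reference."""
    out_tokens = set(model_output.lower().split())
    ref_tokens = set(reference.lower().split())
    if not ref_tokens:
        return 0.0
    return len(out_tokens & ref_tokens) / len(ref_tokens)

def grade_batch(samples):
    """Grade (output, reference) pairs for a training batch."""
    return [grade_answer(out, ref) for out, ref in samples]

scores = grade_batch([
    ("the bug is a null pointer dereference", "null pointer dereference"),
    ("unrelated answer", "null pointer dereference"),
])
```

In a real reinforcement fine-tuning run the grader would encode domain-specific correctness criteria (test passes, citation accuracy, format checks) rather than simple word overlap.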
Boomi and AWS partner to offer a centralized management solution for deploying, monitoring, and governing AI agents across hybrid and multi-cloud environments with built-in support for MCP via a single API
Boomi announced a multi-year Strategic Collaboration Agreement (SCA) with AWS to help customers build, manage, monitor and govern Gen AI agents across enterprise operations. Additionally, the SCA will aim to help customers accelerate SAP migrations from on-premises to AWS. By integrating Amazon Bedrock with the Boomi Agent Control Tower, a centralized management solution for deploying, monitoring, and governing AI agents across hybrid and multi-cloud environments, customers can easily discover, build, and manage agents executing in their AWS accounts, while also maintaining visibility and control over agents running in other cloud provider or third-party environments.

Through a single API, Amazon Bedrock provides a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI in mind, including support for Model Context Protocol (MCP), a new open standard that enables developers to build secure, two-way connections between their data and AI-powered tools. MCP enables agents to effectively interpret and work with ERP data while complying with data governance and security requirements.

“By integrating Amazon Bedrock’s powerful generative AI capabilities with Boomi’s Agent Control Tower, we’re giving organizations unprecedented visibility and control across their entire AI ecosystem while simultaneously accelerating their critical SAP workload migrations to AWS,” said Steve Lucas, Chairman and CEO at Boomi. “This partnership enables enterprises to confidently scale their AI initiatives with the security, compliance, and operational excellence their business demands.”

Apart from Agent Control Tower, the collaboration will introduce several strategic joint initiatives, including: Enhanced Agent Designer; and New Native AWS Connectors and Boomi for SAP.
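The core MCP idea referenced above — tools describing themselves so an agent can discover and invoke them through one uniform interface — can be sketched in a few lines. This is a toy illustration of the pattern, not the real MCP SDK or Bedrock API, and every name in it is hypothetical:

```python
# Toy sketch of the MCP pattern: tools register a description, and an
# agent invokes any of them through a single uniform call interface.
# Illustrative only; not the real MCP SDK or Amazon Bedrock API.

TOOLS = {}

def tool(name, description):
    """Register a callable so an agent can discover and invoke it."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("erp_lookup", "Fetch an ERP record by order id")
def erp_lookup(order_id: str) -> dict:
    # Stand-in for a governed query against enterprise ERP data.
    return {"order_id": order_id, "status": "shipped"}

def invoke(name, **kwargs):
    """Single entry point: every tool call goes through the same API."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name]["fn"](**kwargs)

result = invoke("erp_lookup", order_id="A-1001")
```

The single `invoke` entry point is what lets a governance layer like an agent control tower observe and police every tool call in one place.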
IBM’s multi-agent orchestration framework blends pre-built, domain-specific agents into existing systems without replacing entire software stacks with AI-native applications
IBM Corp. is focused on developing AI agents that execute across systems rather than merely assist at the edges. These agents are designed to integrate with legacy and modern tools, orchestrating processes across the full sprawl of enterprise infrastructure, according to Ritika Gunnar, general manager for data and artificial intelligence at IBM. Instead of replacing entire software stacks with AI-native applications, IBM blends agentic functionality into existing systems. That strategy includes leveraging fixed workflows, enabling agent-based enhancements and allowing customers to scale into full orchestration when needed, according to Gunnar. To help enterprises get started, IBM has unveiled a lineup of prebuilt AI agents in areas such as human resources, sales and procurement, with more planned in customer care and finance, according to Gunnar. These domain-specific agents can be customized, integrated and orchestrated using IBM’s frameworks. “[We have] a new interaction paradigm to work across this multi-agent orchestration framework, across all those systems, whether those be agents, tools or anything else underneath that. It is about [being] open … hybrid … because we know agents are going to run everywhere. Your systems are going to exist in many different forms, in agentic and non-agentic.” The agentic strategy converges with IBM’s push to unlock unstructured data. IBM’s watsonx offerings aim to bridge IT and business needs by enabling users to build intelligent AI agents grounded in enterprise data, according to Gunnar. “We believe that we’re going to see an explosion of the 90% of unstructured data that today has been untapped; you’re untapping a whole new set of intelligence that’s now available.”
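The blending of agentic and non-agentic systems described above amounts to a routing layer: each request goes either to a domain agent or to an existing fixed workflow, so agents augment rather than replace the stack. A minimal sketch, with all handler names invented for illustration:

```python
# Minimal sketch of the orchestration pattern: route each request to a
# domain-specific agent or to an existing fixed (non-agentic) workflow.
# All handlers here are illustrative, not IBM's actual agents.

def hr_agent(request: str) -> str:
    # Agentic handler, e.g. a prebuilt HR agent.
    return f"HR agent handling: {request}"

def legacy_payroll(request: str) -> str:
    # Existing fixed workflow left in place, not replaced.
    return f"Fixed payroll workflow handling: {request}"

ROUTES = {
    "hr": hr_agent,
    "payroll": legacy_payroll,
}

def orchestrate(domain: str, request: str) -> str:
    """Dispatch to whichever handler owns the domain."""
    handler = ROUTES.get(domain)
    if handler is None:
        raise ValueError(f"no handler for domain: {domain}")
    return handler(request)

reply = orchestrate("hr", "update leave balance")
```

Scaling "into full orchestration" then means adding more routes and letting agents call one another, without touching the legacy workflows already registered.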
Apple and Anthropic are building an AI-powered coding platform that generates code through a chat interface, tests user interfaces and manages the process of finding and fixing bugs
Apple and Anthropic have reportedly partnered to create a platform that will use AI to write, edit and test code for programmers. Apple has started rolling out the coding software to its own engineers. The company hasn’t decided whether to make it available to third-party app developers. The tool generates code or alterations in response to requests made by programmers through a chat interface. It also tests user interfaces and manages the process of finding and fixing bugs. Amazon, Meta, Google and several startups have also built AI assistants for writing and editing code. McKinsey said in 2023 that AI could boost the productivity of software engineering by 20% to 45%. This increased efficiency has far-reaching implications for businesses across industries, said Bob Rogers, CPO and CTO of Oii.ai. AI-powered tools enable developers to create software and applications faster and with fewer resources. “Simple tasks such as building landing pages, basic website design, report generation, etc., can all be done with AI, freeing up time for programmers to focus on less tedious, more complex tasks,” Rogers said. “It’s important to remember that while generative AI can augment skills and help folks learn to code, it cannot yet directly replace programmers — someone still needs to design the system.”
kama.ai supports knowledge management with hybrid agents informed by Knowledge Graph AI, enterprise RAG tech and a Trusted Collection
kama.ai, a leader in responsible conversational AI solutions, announced the commercial release of the industry’s most trustworthy AI Agents powered by GenAI’s Sober Second Mind®, the latest addition to its Designed Experiential Intelligence® platform – Release 4. The new Hybrid AI Agents combine kama.ai’s classic knowledge base AI, guided by human values, with a new enterprise Retrieval Augmented Generation (RAG) process. This in turn is powered by a Trusted Collection feature set that produces the most reliable and accurate generative responses. The Trusted Collection features provide pre-integrated intentional document and collection management with enterprise document repositories like SharePoint, M-Files and AWS S3 Buckets. Designed Experiential Intelligence® Release 4 helps enterprise experts work faster with greater ease. It generates draft responses automatically for a Knowledge Manager or SME to review. This is needed for highly sensitive applications (like HR), or for high-volume customer-facing applications. User inquiries, feedback, and AI drafts all help improve the system. Together, consumers, clients, partners, and SMEs create a more efficient and effective human-AI ecosystem. kama.ai Release 4 also introduces a new API supporting third-party Hybrid AI Agent builders that can deliver 100% accurate and approved information curated for the enterprise.
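The review loop described above — retrieval over a curated "trusted collection" produces a draft that a knowledge manager approves before it is served — can be sketched as a small workflow. Everything here is an illustrative stand-in, not kama.ai's actual platform:

```python
# Sketch of a trusted-collection RAG review loop: retrieve from approved
# documents, draft an answer, and require human sign-off before serving.
# Purely illustrative of the workflow described in the article.

TRUSTED_COLLECTION = {
    "pto-policy": "Employees accrue 1.5 vacation days per month.",
}

def retrieve(query: str) -> str:
    # Stand-in for retrieval over curated, approved documents only.
    for doc in TRUSTED_COLLECTION.values():
        if any(word in doc.lower() for word in query.lower().split()):
            return doc
    return ""

def draft_answer(query: str) -> dict:
    """Generate a draft that is NOT served until reviewed."""
    return {"query": query, "draft": retrieve(query), "status": "pending_review"}

def approve(draft: dict) -> dict:
    # A knowledge manager or SME signs off before the answer goes live.
    return {**draft, "status": "approved"}

draft = draft_answer("vacation days")
published = approve(draft)
```

The key property is that nothing reaches an end user in the `pending_review` state, which is what makes the pattern suitable for sensitive domains like HR.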
AlphaEvolve, a coding agent built on Google’s Gemini LLMs, automatically tests, refines, and improves algorithms
Google DeepMind today pulled the curtain back on AlphaEvolve, an artificial-intelligence agent that can invent brand-new computer algorithms — then put them straight to work inside the company’s vast computing empire. AlphaEvolve pairs Google’s Gemini LLMs with an evolutionary approach that tests, refines, and improves algorithms automatically. The system has already been deployed across Google’s data centers, chip designs, and AI training systems — boosting efficiency and solving mathematical problems that have stumped researchers for decades. “AlphaEvolve is a Gemini-powered AI coding agent that is able to make new discoveries in computing and mathematics,” explained Matej Balog, a researcher at Google DeepMind. “It can discover algorithms of remarkable complexity — spanning hundreds of lines of code with sophisticated logical structures that go far beyond simple functions.” One algorithm it discovered has been powering Borg, Google’s massive cluster management system. This scheduling heuristic recovers an average of 0.7% of Google’s worldwide computing resources continuously — a staggering efficiency gain at Google’s scale. The discovery directly targets “stranded resources” — machines that have run out of one resource type (like memory) while still having others (like CPU) available. AlphaEvolve’s solution is especially valuable because it produces simple, human-readable code that engineers can easily interpret, debug, and deploy. Perhaps most impressively, AlphaEvolve improved the very systems that power itself. It optimized a matrix multiplication kernel used to train Gemini models, achieving a 23% speedup for that operation and cutting overall training time by 1%. For AI systems that train on massive computational grids, this efficiency gain translates to substantial energy and resource savings.
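The evolutionary loop at the heart of AlphaEvolve — propose a mutated candidate, score it automatically, keep the best survivor — can be reduced to a toy. In the real system Gemini mutates code and the evaluator runs benchmarks; here a single numeric parameter stands in for an algorithm, purely for illustration:

```python
import random

# Toy sketch of an evolve-evaluate loop: mutate a candidate, score it,
# and keep whichever scores best. AlphaEvolve applies this idea to code,
# with Gemini proposing mutations; this numeric version is illustrative.

def fitness(x: float) -> float:
    """Score a candidate; the optimum of this toy objective is x = 3."""
    return -(x - 3.0) ** 2

def evolve(generations: int = 200, seed: int = 0) -> float:
    rng = random.Random(seed)
    best = 0.0  # initial candidate
    for _ in range(generations):
        candidate = best + rng.gauss(0, 0.5)   # mutate
        if fitness(candidate) > fitness(best):  # evaluate and select
            best = candidate
    return best

best = evolve()
```

The automatic, machine-checkable fitness function is what lets the loop run unattended — the same reason AlphaEvolve targets domains like scheduling heuristics and matrix kernels, where a candidate's quality can be measured directly.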
Mistral AI’s API integrates server-side conversation management, a Python code interpreter, web search, image generation and document retrieval capabilities to enable building fully autonomous AI agents
Mistral AI, a rival to OpenAI, Anthropic PBC, Google LLC and others, has jumped into agentic AI development with the launch of a new API. The new Agents API equips developers with powerful tools for building sophisticated AI agents based on Mistral AI’s LLMs, which can autonomously plan and carry out complex, multistep tasks using external tools. Among its features, the API integrates server-side conversation management, a Python-based code interpreter, web search, image generation and document retrieval capabilities. It also supports AI agent orchestration, and it’s compatible with the emerging Model Context Protocol that aims to standardize the way agents interact with other applications. With its API, Mistral AI is keeping pace with the likes of OpenAI and Anthropic, which are also laser-focused on enabling the emergence of AI agents that can perform tasks on behalf of humans with minimal supervision, in an effort to turbocharge business automation. The API boasts dozens of useful “connectors” that should make it simpler to build some very capable AI agents. For instance, the Python Code Interpreter provides a way for agents to execute Python code in a secure, sandboxed environment, while the image generation tool powered by Black Forest Labs Inc.’s FLUX1.1 [pro] Ultra model means they’ll have powerful picture-generating capabilities. A premium version of web search provides access to a standard search engine, plus the Agence France-Presse and the Associated Press news agencies, so AI agents will be able to access up-to-date information about the real world. Other features include a document library that uses hosted retrieval-augmented generation from user-uploaded documents. In other words, Mistral’s AI agents will be able to read external documents and perform actions with them. The API also includes an “agent handoffs” mechanism that allows multiple agents to work together. One agent will be able to delegate a task to another, more specialized agent.
According to Mistral, the result will be a “seamless chain of actions,” with a single request able to trigger multiple agents that collaborate on complex tasks. The Agents API also supports “stateful conversations,” meaning agents maintain context over time by remembering a user’s earlier inputs.
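The handoff-plus-state combination described above can be sketched as follows: a general agent delegates to a specialist while a shared conversation history carries the context. All class and agent names here are invented for illustration; this is not Mistral's actual Agents API:

```python
# Sketch of the "agent handoff" pattern with stateful conversations:
# a general agent delegates messages outside its keywords to a more
# specialized agent, and a shared history preserves context.
# Illustrative only; not Mistral's actual Agents API.

class Agent:
    def __init__(self, name, keywords, delegate=None):
        self.name = name
        self.keywords = keywords    # topics this agent handles itself
        self.delegate = delegate    # specialist to hand off to

    def handle(self, message: str, history: list) -> str:
        if self.delegate and not any(k in message for k in self.keywords):
            return self.delegate.handle(message, history)  # handoff
        history.append((self.name, message))  # stateful conversation log
        return f"{self.name} handled '{message}'"

coder = Agent("code-agent", keywords=["code", "bug"])
general = Agent("general-agent", keywords=["weather"], delegate=coder)

history: list = []
reply = general.handle("fix this code bug", history)
```

Because `history` outlives any single call, a follow-up question can build on earlier turns — the "stateful conversation" behavior the article describes.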
StarTree integrates Model Context Protocol (MCP) support to its data platform to allow AI agents to dynamically analyze live, structured enterprise data and make micro-decisions in real time
StarTree announced two powerful new AI-native innovations to its real-time data platform for enterprise workloads:

Model Context Protocol (MCP) support: MCP is a standardized way for AI applications to connect with and interact with external data sources and tools. It allows Large Language Models (LLMs) to access real-time insights in StarTree in order to take actions beyond their built-in knowledge.

Vector Auto Embedding: Simplifies and accelerates vector embedding generation and ingestion for real-time RAG use cases based on Amazon Bedrock.

These capabilities enable StarTree to power agent-facing applications, real-time Retrieval-Augmented Generation (RAG), and conversational querying at the speed, freshness, and scale enterprise AI systems demand. The StarTree platform now supports:

1) Agent-Facing Applications: By supporting the emerging Model Context Protocol (MCP), StarTree allows AI agents to dynamically analyze live, structured enterprise data. With StarTree’s high-concurrency architecture, enterprises can support millions of autonomous agents making micro-decisions in real time—whether optimizing delivery routes, adjusting pricing, or preventing service disruptions.

2) Conversational Querying: MCP simplifies and standardizes the integration between LLMs and databases, making natural language to SQL (NL2SQL) far easier and less brittle to deploy. Enterprises can now empower users to ask questions via voice or text and receive instant answers, with each question building on the last. This kind of seamless, conversational flow requires not just language understanding, but a data platform that can deliver real-time responses with context.

3) Real-Time RAG: StarTree’s new vector auto embedding enables pluggable vector embedding models to streamline the continuous flow of data from source to embedding creation to ingestion.
This simplifies the deployment of Retrieval-Augmented Generation pipelines, making it easier to build and scale AI-driven use cases like financial market monitoring and system observability—without complex, stitched-together workflows.
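The "auto embedding" flow — records stream from a source, an embedding model converts text to vectors, and both land in the store query-ready — can be sketched minimally. The embedding function below is a deliberately trivial stand-in (character frequencies), not StarTree's or Amazon Bedrock's actual models:

```python
# Sketch of a vector auto-embedding ingestion flow: embed each record as
# it arrives so the store is immediately ready for real-time RAG lookups.
# The embedding function is a toy stand-in, not a real embedding model.

def embed(text: str) -> list:
    """Toy embedding: character-frequency vector over a-z."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

VECTOR_STORE = []

def ingest(record: dict) -> None:
    """Auto-embed on ingestion; no separate offline embedding job."""
    VECTOR_STORE.append({**record, "vector": embed(record["text"])})

ingest({"id": 1, "text": "market spike in energy futures"})
```

The point of the pattern is that embedding happens inside the ingestion path, so freshly arrived data is searchable by a RAG pipeline without a separate batch step.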
Zencoder’s platform offers software teams access to third-party registries that host ready-to-use MCP connectors and MCP-powered pre-packaged AI agent integrations to enable them to build their own custom AI agents
Startup Zencoder, officially For Good AI Inc., introduced a cloud platform called Zen Agents that can be used to create coding-optimized AI agents. The new Zen Agents platform has two main components. The first is a catalog of open-source AI agents that can automate more than a half dozen programming tasks. The platform’s other component, in turn, is a tool that allows software teams to build their own custom AI agents. Developers can create an AI agent by entering a natural language description of the tasks it should perform. Zen Agents provides a collection of prepackaged AI agent integrations powered by MCP. The platform also offers access to third-party registries, or cloud services that host ready-to-use MCP connectors. The company says AI agents powered by its platform can create documentation that explains developers’ code, as well as generate new code in multiple programming languages. Software teams can also deploy AI agents that automatically test application updates for bugs. Zencoder has developed a technology it calls Repo Grokking to improve AI-generated code. It maps out the structure of an application’s code base, including details such as the programming best practices that the application’s developers follow. This information allows the AI models that power its platform to generate more relevant programming suggestions.
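The "Repo Grokking" idea — mapping out a code base's structure so the model has repository context — can be illustrated with Python's standard `ast` module. This is a toy of the concept, not Zencoder's actual implementation:

```python
import ast

# Toy illustration of repo-structure mapping: parse a module and list its
# top-level functions and classes, the kind of structural summary a
# code-generating model could use as context. Not Zencoder's actual code.

def map_module(source: str) -> dict:
    """Summarize one module's top-level definitions."""
    tree = ast.parse(source)
    return {
        "functions": [n.name for n in tree.body if isinstance(n, ast.FunctionDef)],
        "classes": [n.name for n in tree.body if isinstance(n, ast.ClassDef)],
    }

summary = map_module("def load():\n    pass\n\nclass Cache:\n    pass\n")
```

A full pipeline would walk every file in the repository and also capture conventions (naming, error handling, test layout), which is the extra signal the article says improves generated suggestions.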
Broadridge Financial Solutions awarded patent for its LLM orchestration of machine learning agents used in its AI bond trading platform; patented features include explainability of the output generated, compliance verification and user profile attributes
Broadridge Financial Solutions has been awarded a U.S. patent for its large language model orchestration of machine learning agents, which is used in BondGPT and BondGPT+. These applications provide timely, secure, and accurate responses to natural language questions using OpenAI GPT models and multiple AI agents. The BondGPT+ enterprise application integrates clients’ proprietary data, third-party datasets, and personalization features, improving efficiency and saving time for users. Broadridge continues to work closely with clients to integrate AI into their workflows. Other significant features patented in U.S. Patent No. 11,765,405 include: explainability as to how the output of the patented methods of LLM orchestration of machine learning agents was generated, through a “Show your work” feature that offers step-by-step transparency; a multi-agent adversarial feature for enhanced accuracy; an AI-powered compliance verification feature, based on custom compliance rules configured to an enterprise’s unique compliance and risk management processes; and the use of User Profile attributes, such as user role, to inform data retrieval and security.
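The combination of orchestration, step-by-step explainability, and compliance checks described above can be pictured as an orchestrator that logs each agent's contribution and gates the answer on a rules check. Every agent and rule below is invented for illustration; this is not Broadridge's patented method:

```python
# Sketch of orchestration with a "show your work" trace and a compliance
# gate: each step is logged, and the answer is only returned if it passes
# the rules. All agents and rules here are illustrative.

def data_agent(question: str) -> str:
    # Stand-in for an agent that queries bond data.
    return "bond XYZ yield: 4.2%"

def compliance_check(text: str, banned=("guaranteed return",)) -> bool:
    """Toy compliance rule: block answers containing banned phrases."""
    return not any(term in text for term in banned)

def orchestrate(question: str) -> dict:
    trace = []                       # step-by-step "show your work" log
    answer = data_agent(question)
    trace.append(("data_agent", answer))
    if not compliance_check(answer):
        trace.append(("compliance", "blocked"))
        return {"answer": None, "trace": trace}
    trace.append(("compliance", "passed"))
    return {"answer": answer, "trace": trace}

result = orchestrate("What is the yield on bond XYZ?")
```

Returning the trace alongside the answer is what gives users the step-by-step transparency, and running the compliance gate before returning anything is what keeps non-compliant output from ever reaching them.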