OpenAI debuted a new AI agent, Codex, that helps developers write code and fix bugs. The tool is available through a sidebar in ChatGPT’s interface: one button configures Codex to generate new code based on user instructions, while another lets it answer questions about existing code. Responses take between one and 30 minutes to generate, depending on the complexity of the request.

Codex is powered by a new AI model called codex-1, a version of o3, OpenAI’s most capable reasoning model, that has been optimized for programming tasks. The ChatGPT developer fine-tuned codex-1 by training it on a set of real-world coding tasks spanning a range of software environments. A piece of software that runs well in one environment, such as a cloud platform, may not run as efficiently on a Linux server or a developer’s desktop, if at all. As a result, an AI model’s training dataset must include technical information about every environment in which it will be expected to work.

OpenAI used reinforcement learning to train codex-1, a way of developing AI models that relies on trial and error to boost output quality. When a neural network completes a task correctly, it receives a virtual reward, while incorrect answers lead to penalties that push the algorithm toward a better approach. In a series of coding tests carried out by OpenAI, Codex achieved an accuracy rate of 75%. That’s 5% better than the most capable, hardware-intensive version of o3. OpenAI’s first-generation reasoning model, o1, scored 11%.

Codex carries out coding tasks in isolated software containers that don’t have web access; according to OpenAI, the agent launches a separate container for each task. Developers can customize those development environments by uploading a text file called AGENTS.md. The file may describe what programs Codex should install, how AI-generated code should be tested for bugs, and related details.
Using AGENTS.md, developers can ensure that the container in which Codex generates code is configured the same way as the production system on which the code will run. That reduces the need to modify the code before releasing it to production. Developers can monitor Codex while it’s generating code. After the tool completes a task, it provides technical data that can be used to review each step of the workflow. It’s possible to request revisions if the code doesn’t meet project requirements.
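An AGENTS.md file of the kind described above might look like the following sketch. The section headings, commands, and tool names here are illustrative assumptions about one team's setup, not OpenAI's specification:

```markdown
# AGENTS.md (illustrative example)

## Setup
- Install dependencies with `pip install -r requirements.txt`.
- Use Python 3.11 to match the production image.

## Testing
- Run `pytest tests/ -q` before proposing any change.
- Lint with `ruff check .` and fix all warnings.

## Conventions
- Follow the existing module layout under `src/`.
- Never commit secrets; configuration comes from environment variables.
```

Because Codex reads this file when it sets up a task container, the same install and test commands run in the agent's sandbox and in production, which is what reduces the pre-release rework the article mentions.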
Boomi and AWS suggest multi-agent systems can help design and govern teams of AI agents, including hierarchical ones in which a supervisor agent is enabled by MCP
Boomi LP and Amazon Web Services Inc. are not only harnessing current artificial intelligence technology but also preparing for a future of multi-agent systems. Boomi Agentstudio, which just received a general release, supports designing and, crucially, governing a team of agents. “We [AWS] innovate massively,” said Nicole Bradley, ISV principal account executive at AWS. “But we can’t keep up with all the features and functions and the ease of the UI capability, and that’s what Boomi brings to the table. It was really the perfect synergy of [Boomi CEO Steve Lucas’] vision, his ability to move fast, his commitment to move fast and our recognition of … we need to make sure that this agent sprawl doesn’t go crazy.” The potential of losing control over AI agents concerns many businesses, so Boomi and AWS are focused on creating a robust management system. Ann Maya, EMEA chief technology officer of Boomi, foresees rapid growth for agentic AI tools, with a corresponding need for the governing tools Boomi offers.
Successful deployment of agentic AI requires building workforce AI fluency through role-based training and cross-functional collaboration, redesigning workflows around AI with a dedication to upskilling, and developing new ‘supervising’ AI roles
As the founder of an AI-powered digital transformation and product development company that helps businesses innovate, automate and scale, here’s a short guide.

1) Empower your workforce with AI fluency: Maintaining a nimble and knowledgeable workforce is critical, as is fostering a culture that embraces technological change. Team collaboration here could take the form of regular training on agentic AI, highlighting its strengths and weaknesses and focusing on successful human-AI collaborations. For more established companies, role-based training courses can show employees in different capacities and roles how to use generative AI appropriately. Executives should make sure a feedback mechanism is in place to optimize this human-AI collaboration. By actively participating in error identification and mitigation, employees can develop an appreciation for evolving technologies while also seeing the importance of continuous learning. AI fluency also comes from collaboration across departments and specialties, for example between engineers, AI specialists and developers, who must share knowledge and concerns to integrate agentic AI into workflows effectively. For your workforce to feel empowered, there must be a mindset change: we don’t need to compete with AI; we (and our cognitive abilities) are evolving with it.

2) Redesign your workflows around AI: According to a recent McKinsey survey, redesigning workflows when implementing generative AI has had the most significant impact on earnings before interest and taxes (EBIT) in organizations of all sizes. In other words, AI’s true value comes when companies rewire how they run. The strategy involves a dedication to upskilling, as well as a complete overhaul of core business processes and aggressive scaling, with a keen eye on financial and operational performance.
Although machines can’t be left entirely unattended, and humans can’t stay on top of processing data in real time, constant human-AI collaboration may not be the answer to everything when redesigning workflows.

3) Develop new ‘supervising’ AI roles: When recruiting, business leaders should seek candidates who are 1) adept at testing for model bias, to ensure accuracy and early identification of problems in AI development; and 2) experienced in cross-departmental collaboration, to ensure that AI solutions meet all the team’s needs. If you are an SVP or CTO and unsure where to start, you may need a strategic partner to gain access to quality talent. This is table stakes for building enterprise-grade, AI-powered technology products and de-risking AI adoption.
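The “testing for model bias” skill mentioned above can be made concrete with a simple check: compare a model’s error rate across subgroups and flag large gaps. The sketch below uses synthetic data and made-up group labels purely for illustration; it is not tied to any particular product or vendor:

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the model's error rate per subgroup.

    Each record is (group, true_label, predicted_label).
    A large gap between groups is a signal to investigate bias.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Synthetic predictions: the model is noticeably worse for group "B".
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
rates = error_rate_by_group(records)
# group A: 1 error in 4 predictions; group B: 2 errors in 4
```

A “supervising” role would run checks like this continuously, early in development and again in production, rather than as a one-off audit.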
Microsoft announced a significant expansion of its Copilot Studio platform, introducing multi-agent systems that allow different AI agents to collaborate on complex business tasks, along with new developer tools, security enhancements, and integration with WhatsApp.

At the heart of the announcements is Microsoft’s new multi-agent system, which enables agents built with Copilot Studio, Microsoft 365, Azure AI Agents Service, and Microsoft Fabric to work together, delegating tasks to one another to complete complex business processes. The system enables scenarios such as a Copilot Studio agent pulling sales data from a CRM, handing it to a Microsoft 365 agent to draft a proposal in Word, and then triggering another agent to schedule follow-ups in Outlook. Microsoft is also emphasizing interoperability through support for the agent-to-agent protocol recently announced by Google, potentially enabling cross-platform agent communication.

Another key announcement is “computer use” for Copilot Studio agents, which allows agents to interact with desktop applications and websites by controlling interfaces directly — clicking buttons, navigating menus, and typing in fields — even when APIs aren’t available. Microsoft is giving organizations more flexibility with their AI models by enabling them to bring custom models from Azure AI Foundry into Copilot Studio. This includes access to over 1,900 models, including the latest from OpenAI (such as GPT-4.1), Llama, and DeepSeek. The company is also adding a code interpreter feature that brings Python capabilities to Copilot Studio agents, enabling data analysis, visualization, and complex calculations without leaving the Copilot Studio environment. Deep reasoning models, powered by reinforcement learning, can effectively self-verify any process that produces quantifiable outputs.
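The delegation pattern Microsoft describes — one agent producing an intermediate result and handing it to the next — can be sketched generically. The agent names, handlers, and pipeline below are hypothetical illustrations of the CRM-to-Word-to-Outlook scenario, not the Copilot Studio API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    """A minimal agent that performs one step, then hands off to the next."""
    name: str
    handle: Callable[[dict], dict]
    next_agent: Optional["Agent"] = None

    def run(self, task: dict) -> dict:
        result = self.handle(task)
        result.setdefault("trace", []).append(self.name)  # record who acted
        if self.next_agent is not None:
            return self.next_agent.run(result)  # delegate onward
        return result

# Hypothetical pipeline mirroring the article's scenario:
# pull CRM data -> draft a proposal -> schedule follow-ups.
scheduler = Agent("scheduler", lambda t: {**t, "follow_up": "booked"})
drafter = Agent("drafter",
                lambda t: {**t, "proposal": f"Proposal for {t['account']}"},
                scheduler)
crm = Agent("crm", lambda t: {**t, "sales_data": [120, 340]}, drafter)

outcome = crm.run({"account": "Contoso"})
```

Real multi-agent systems replace the fixed hand-off chain with dynamic routing and a shared protocol (such as the agent-to-agent protocol mentioned above), but the core idea — each agent enriching a shared task context before delegating — is the same.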
Starting in early July, organizations will be able to publish Copilot Studio agents to WhatsApp, enabling them to reach customers through one of the world’s most popular messaging platforms. For professional developers, Microsoft is launching a Visual Studio Code extension for Copilot Studio, bringing familiar tooling and workflows to agent development. The extension provides features like IntelliSense, color formatting, and “find all references” functionality, enabling developers to edit agents directly from within Visual Studio Code. By addressing key enterprise requirements like security, governance, and interoperability, while simultaneously expanding the platform’s capabilities through features like computer use and code interpretation, Microsoft is creating a more complete offering for organizations looking to deploy AI agents at scale.
Salesforce to acquire Convergence.ai to accelerate the development of next-gen AI agents that can navigate dynamic interfaces and adapt in real time to manage web-based workflows and multi-step processes
Salesforce plans to acquire Convergence.ai to accelerate the development of its next-generation AI agents. The company signed a definitive agreement for the acquisition and expects Convergence’s team and technology to play a “central role” in advancing its AI agent platform, Agentforce. The acquisition is expected to close in the second quarter of Salesforce’s fiscal year 2026, subject to customary closing conditions. “The next wave of customer interaction and employee productivity will be driven by highly capable AI agents that can navigate the complexities of today’s digital work,” Adam Evans, executive vice president and general manager, Salesforce AI Platform at Salesforce, said. “Convergence’s innovative approach to building adaptive, intelligent agents is incredibly impressive.” Convergence’s technology enables AI agents to navigate dynamic interfaces and adapt in real time so they can manage things like web-based workflows and multi-step processes. The company’s talent is also expected to contribute to deep research, task automation and industry-specific solutions that will advance Salesforce’s broader AI roadmap.
UiPath automations and agents can now integrate directly into Microsoft Copilot Studio to automate complex end-to-end processes at scale
UiPath announced new capabilities that enable the orchestration of Microsoft Copilot Studio agents alongside UiPath and other third-party agents using UiPath Maestro™, an enterprise orchestration solution that coordinates agents, robots, and people across complex processes. Developers can now orchestrate Microsoft Copilot Studio agents directly from Maestro. This capability builds on the bi-directional integration between the UiPath Platform™ and Microsoft Copilot Studio recently announced by Microsoft, which facilitates seamless interaction between UiPath and Microsoft agents and automations, allowing customers to automate complex end-to-end processes, enable contextual decision-making, improve scalability, and unlock new levels of productivity. Developers can now embed UiPath automations and AI agents directly into Microsoft Copilot Studio and integrate Copilot agents within UiPath Studio, all while orchestrating across platforms with UiPath Maestro. Through this integration, Maestro gives customers built-in capabilities to build, manage, and orchestrate agents created in Microsoft Copilot Studio and other platforms in a controlled and scalable way. Johnson Controls enhanced an existing automation, originally built with UiPath robots and Power Automate, by adding a UiPath agent for confidence-based document extraction. The result: a 500% return on investment and projected savings of 18,000 hours annually that were previously spent on manual document review. The integration extends other new capabilities that elevate business processes and drive smarter outcomes with agentic automation across departments and platforms.
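“Confidence-based document extraction,” as in the Johnson Controls example, typically follows a simple pattern: accept an extracted field automatically only when the model’s confidence clears a threshold, and otherwise route it to a human reviewer. The threshold and field names below are made-up illustrations, not UiPath’s implementation:

```python
CONFIDENCE_THRESHOLD = 0.90  # made-up cutoff for auto-approval

def route_extractions(extractions):
    """Split extracted fields into auto-approved vs. human-review queues.

    Each extraction is (field_name, value, confidence in [0, 1]).
    """
    auto, review = [], []
    for field_name, value, confidence in extractions:
        if confidence >= CONFIDENCE_THRESHOLD:
            auto.append((field_name, value))
        else:
            review.append((field_name, value, confidence))
    return auto, review

# One hypothetical invoice's extracted fields.
fields = [
    ("invoice_total", "1,204.50", 0.97),
    ("vendor_name", "Acme Corp", 0.99),
    ("due_date", "2025-07-01", 0.62),  # low confidence -> human review
]
auto, review = route_extractions(fields)
```

The savings come from the fact that most fields clear the threshold, so humans review only the uncertain minority instead of every document.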
Citi’s Token Services enable real-time cross-border payments and trade settlement through clients’ existing channels, via API or online portal, without requiring them to hold tokens
Citi’s treasury and trade solutions customers were asking for multinational cash management and trade services available 24/7, and that’s where Citi Token Services was born. “The pain point was our clients wanted 24/7, always-on liquidity and payments,” said Ryan Rugg, Citi’s global head of digital assets, treasury and trade solutions. Ambrish Bansal, Citi’s global head of liquidity and cash concentration solutions, liquidity management services in the bank’s treasury and trade solutions unit, said the initiative stemmed from clients’ requests. “It’s really important for us to ensure that our clients can leverage cutting-edge technologies and new developments and move their treasury management into the real-time world,” Bansal said. “The whole idea behind Citi Token Services was born out of this pressing need by our clients to ensure that their money can move around the global ecosystem in as [close to] real time as possible.”

The bank uses a private permissioned distributed ledger and a distributed database with embedded business logic to enable a range of services: intraday lending, cross-border payments, conditional transfer of funds, supply chain financing, trade settlements, fractional ownership, identity verification and know-your-customer compliance. Citi Token Services adheres to the ERC-20 technical standard, a community-created framework for creating smart contract-enabled fungible tokens on the Ethereum blockchain. The bank owns and manages all the blockchain infrastructure it uses for its token services, which will be integrated into the bank’s global network. Clients will be able to access Citi Token Services through the CitiDirect online portal or API connectivity.

Rugg said multinational companies with hundreds, if not thousands, of accounts with Citi and other banks can use the program to manage liquidity and payments across the globe. Previously, they would have to forecast and leave money in different branches, as well as keep track of cut-off times and holidays around the world when money can’t be moved.
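ERC-20, the standard the article says Citi Token Services adheres to, defines a small set of required operations for fungible tokens: querying balances, transferring tokens, approving a spender, and spending on another account’s behalf. The toy class below mimics those semantics in Python for illustration only; real ERC-20 contracts run on the Ethereum Virtual Machine and are typically written in Solidity, and the account names here are invented:

```python
class FungibleToken:
    """Toy in-memory model of ERC-20 semantics (not a real contract)."""

    def __init__(self, supply: int, owner: str):
        self.balances = {owner: supply}
        self.allowances = {}  # (owner, spender) -> approved amount

    def balance_of(self, account: str) -> int:
        return self.balances.get(account, 0)

    def transfer(self, sender: str, to: str, amount: int) -> bool:
        if self.balance_of(sender) < amount:
            return False  # a real contract would revert the transaction
        self.balances[sender] -= amount
        self.balances[to] = self.balance_of(to) + amount
        return True

    def approve(self, owner: str, spender: str, amount: int) -> None:
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender: str, owner: str, to: str, amount: int) -> bool:
        if self.allowances.get((owner, spender), 0) < amount:
            return False
        if not self.transfer(owner, to, amount):
            return False
        self.allowances[(owner, spender)] -= amount
        return True

# Hypothetical flow: mint, pay a client, let the bank move approved funds.
token = FungibleToken(supply=1_000, owner="bank")
token.transfer("bank", "client_a", 250)
token.approve("client_a", "bank", 100)
token.transfer_from("bank", "client_a", "client_b", 100)
```

On a private permissioned ledger like Citi’s, the same interface applies, but only vetted participants can hold balances or submit transactions.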
New quantum algorithm solves hard optimization problems up to 80 times faster than classical solvers like CPLEX and simulated annealing, without relying on error correction, by suppressing unwanted transitions as the system evolves toward low-energy states
A new study by Kipu Quantum and IBM demonstrates that a tailored quantum algorithm running on IBM’s 156-qubit processors can solve certain hard optimization problems faster than classical solvers like CPLEX and simulated annealing. The quantum system used a technique called bias-field digitized counterdiabatic quantum optimization (BF-DCQO). The method builds on known quantum strategies by evolving a quantum system under special guiding fields that help it stay on track toward low-energy (i.e., optimal) states. It achieved comparable or better solutions in seconds, while classical methods required tens of seconds or more. CPLEX took 30 to about 50 seconds to match that same solution quality, even with 10 CPU threads running in parallel, according to the study. The researchers further confirmed this advantage across a suite of 250 randomly generated hard instances, using distributions specifically selected to challenge classical algorithms. BF-DCQO delivered results up to 80 times faster than CPLEX in some tests and over three times faster than simulated annealing in others. At the heart of the BF-DCQO algorithm is an adaptation of counterdiabatic driving, a physics-inspired strategy where an extra term is added to the Hamiltonian — the system’s energy function — to suppress unwanted transitions. This helps the quantum system evolve faster and more accurately toward its lowest energy configuration. Because this process doesn’t rely on error correction, it is well suited to today’s NISQ devices. And because the algorithm uses only shallow circuits with mostly native operations like single-qubit rotations and two- or three-body interactions, it can fit within the short coherence windows of real hardware.
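Simulated annealing, the classical baseline BF-DCQO is compared against here, minimizes an Ising-style energy function by proposing random spin flips and occasionally accepting uphill moves with a temperature-dependent probability. A minimal sketch on a toy one-dimensional Ising chain follows; the instance, schedule, and parameters are illustrative and far smaller than the study’s 156-qubit problems:

```python
import math
import random

def ising_energy(spins, couplings):
    """Energy of a 1-D Ising chain: E = -sum_i J_i * s_i * s_{i+1}."""
    return -sum(j * spins[i] * spins[i + 1] for i, j in enumerate(couplings))

def simulated_annealing(couplings, steps=5000, t_start=2.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    n = len(couplings) + 1
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    energy = ising_energy(spins, couplings)
    for step in range(steps):
        temp = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        spins[i] = -spins[i]  # propose a single spin flip
        new_energy = ising_energy(spins, couplings)
        # Metropolis rule: always accept downhill/flat, sometimes accept uphill.
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy
        else:
            spins[i] = -spins[i]  # reject: undo the flip
    return spins, energy

# Ferromagnetic couplings: the ground state has all spins aligned, E = -4.
spins, energy = simulated_annealing([1.0, 1.0, 1.0, 1.0])
```

Counterdiabatic driving takes a different route to the same goal: rather than stochastically escaping local minima, it adds a correction term to the Hamiltonian so the quantum state tracks the low-energy configuration during a fast evolution.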
Study shows only 18% of US consumers are comfortable with AI-driven features, while 71% are uncomfortable with AI tools
Only 18% of US consumers are comfortable with AI-driven features, while 71% are uncomfortable with AI tools. The study also revealed that one in three consumers prioritizes price over brand loyalty. While shoppers are open to AI being used for customer service and product discovery, only 8% believe convenience will impact their buying experience. 49% of consumers value loyalty programs, and 36% want added incentives like free shipping or buy now, pay later options. Additional findings: 67% of Gen Z consumers (ages 16-26) are likely to sign up for subscription services from retailers they shop with; when shopping on a marketplace, 30% of millennials (ages 27-42) are looking for new brands to try, compared with 18% of Gen X and just 9% of Baby Boomers; and 40% of Baby Boomers express discomfort with AI chatbots, compared with 24% of millennials and 25% of Gen Z.
Apple is now requiring developers to list their app’s accessibility features; new accessibility features include live captions, personal voice replication, improved reading tools, braille reader improvements, and “nutrition labels”
Apple has announced new accessibility features for iOS, focusing on people with vision or hearing impairments. The company pushes back on the notion that the price of Apple hardware means accessibility comes at a cost, stating that accessibility is built into its operating system for free. The new features include live captions, personal voice replication, improved reading tools, braille reader improvements, and “nutrition labels” in the App Store. Developers will be required to list the accessibility features their app supports, such as VoiceOver, Voice Control, or large text. Apple’s senior director of global accessibility policy and initiatives, Sarah Herrlinger, said the nutrition labels would encourage developers to enable more accessibility options in the future. The company also improved its Magnifier app, allowing users to zoom in on screens or whiteboards in lectures to read presentations. New braille features include note-taking with braille screen input or a compatible braille device, and calculation using Nemeth braille, a braille code used for mathematics. The new Personal Voice feature can recreate a user’s voice using just 10 phrases, and the voice replication will be password-protected and remain on the device unless backed up to iCloud.
