Boomi announced a multi-year Strategic Collaboration Agreement (SCA) with AWS to help customers build, manage, monitor, and govern generative AI agents across enterprise operations. The SCA also aims to help customers accelerate SAP migrations from on-premises environments to AWS. By integrating Amazon Bedrock with Boomi Agent Control Tower, a centralized management solution for deploying, monitoring, and governing AI agents across hybrid and multi-cloud environments, customers can easily discover, build, and manage agents running in their AWS accounts, while also maintaining visibility and control over agents running in other cloud provider or third-party environments. Through a single API, Amazon Bedrock provides a broad set of capabilities for building generative AI applications with security, privacy, and responsible AI in mind, including support for the Model Context Protocol (MCP), an open standard that enables developers to build secure, two-way connections between their data and AI-powered tools. MCP enables agents to effectively interpret and work with ERP data while complying with data governance and security requirements. “By integrating Amazon Bedrock’s powerful generative AI capabilities with Boomi’s Agent Control Tower, we’re giving organizations unprecedented visibility and control across their entire AI ecosystem while simultaneously accelerating their critical SAP workload migrations to AWS,” said Steve Lucas, Chairman and CEO at Boomi. “This partnership enables enterprises to confidently scale their AI initiatives with the security, compliance, and operational excellence their business demands.” Beyond Agent Control Tower, the collaboration will introduce several strategic joint initiatives, including an enhanced Agent Designer, new native AWS connectors, and Boomi for SAP.
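To make the MCP reference concrete: MCP is layered on JSON-RPC 2.0, and an agent invokes a server-side tool with a `tools/call` request. The sketch below builds such a request; the tool name and arguments are hypothetical, and this illustrates only the message shape defined by the MCP specification, not Boomi's or AWS's implementation.

```python
import json

def mcp_tools_call(request_id: int, tool: str, arguments: dict) -> dict:
    """Build an MCP `tools/call` request (MCP is layered on JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical tool name and arguments, for illustration only.
req = mcp_tools_call(1, "query_erp_orders", {"region": "EMEA", "limit": 10})
print(json.dumps(req, indent=2))
```

Because every tool invocation flows through this uniform envelope, a control plane like Agent Control Tower can in principle log and govern agent-to-data interactions at a single choke point.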
Apple Vision Pro’s brain interface could leapfrog Neuralink; non-invasive methods could accelerate mainstream adoption of neural interfaces
Apple is developing brain-computer interface (BCI) capabilities that would allow users to control their Apple Vision Pro headset using only their thoughts. This would be one of the most significant advances in Apple’s human-computer interaction strategy since the introduction of the touch screen on the original iPhone. The technology would use external sensors to detect and interpret neural signals, allowing users to navigate the Vision Pro interface through mental commands. Apple is reportedly preparing to launch thought-based control support for its spatial computing device, though the timeline remains uncertain. The implications extend beyond the Vision Pro, as the same technology could eventually be applied to iPhones and other Apple devices. Apple is implementing strict data protections to ensure the security and privacy of neural data. The development puts Apple in direct competition with companies like Neuralink and Meta, but its focus on non-invasive methods could accelerate mainstream adoption of neural interfaces.
DoorDash expands Wing drone delivery to Charlotte, N.C., partnering with Alphabet’s Wing, Panera Bread, and local restaurants
DoorDash is introducing drone delivery to Charlotte, N.C. for the first time. Starting Wednesday, May 14, 2025, eligible DoorDash customers within about four miles of The Arboretum Shopping Center in southern Charlotte can order from a selection of local and national restaurants and choose to have their items delivered by a drone from Wing, the on-demand drone delivery provider backed by Google parent company Alphabet. Participating restaurants include Panera Bread, the city’s first national partner available for drone delivery, as well as local restaurants such as Curry Junction, Matcha Cafe Maiko, and Joa Korean food. In addition to food delivery, Charlotte residents in select locations can now get drone delivery from DashMart by Drone, a specialized offering of the DoorDash DashMart banner, which operates online storefronts with inventory provided by third-party partners. Eligible residents browsing the DoorDash app can tap the “Drone” icon on the homepage to browse restaurants eligible for drone delivery. If the items they choose meet the size and weight criteria, shoppers will have the option to select drone delivery during checkout. After confirming their delivery location, they will receive live tracking updates as the drone approaches. Charlotte consumers who aren’t currently eligible can join the waitlist to be notified when drone delivery expands to their neighborhood.
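The checkout gate described above — drone delivery is offered only when the order fits the craft's limits — can be sketched as a simple eligibility check. The thresholds, item fields, and function names below are hypothetical; the article does not publish DoorDash's or Wing's actual criteria.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration; the actual size and
# weight criteria are not disclosed in the announcement.
MAX_WEIGHT_LB = 2.5
MAX_DIMENSION_IN = 12.0

@dataclass
class CartItem:
    name: str
    weight_lb: float
    longest_side_in: float

def drone_eligible(cart: list[CartItem]) -> bool:
    """Offer the drone option only if the whole order fits the craft's limits."""
    total_weight = sum(i.weight_lb for i in cart)
    return (total_weight <= MAX_WEIGHT_LB
            and all(i.longest_side_in <= MAX_DIMENSION_IN for i in cart))

cart = [CartItem("sandwich", 0.8, 8.0), CartItem("lemonade", 1.1, 9.0)]
print(drone_eligible(cart))  # a small order fits under the hypothetical limits
```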
Tensor9 helps vendors deploy their software into any environment using digital twins to help with remote monitoring
Tensor9 looks to help software companies land more enterprise customers by helping them deploy their software directly into a customer’s tech stack. Tensor9 converts a software vendor’s code into the format needed to deploy into their customer’s tech environment. Tensor9 then makes a digital twin of the deployed software, or a miniaturized model of the deployed software’s infrastructure, so Tensor9’s customers can monitor how the software is working in their customer’s environment. Tensor9 can help companies deploy into any environment, from cloud to bare-metal servers. Michael Ten-Pow, Tensor9’s co-founder and CEO, told TechCrunch that Tensor9’s ability to deploy software into any environment, and its use of digital twin technology to help with remote monitoring, helps Tensor9 stand out from other companies, like Octopus Deploy or Nuon, that also help companies deploy software into a customer’s environment. He said the timing is right for Tensor9’s tech due to tailwinds from the rise of AI. “An enterprise search vendor might go to, let’s say, J.P. Morgan and say, ‘hey, I need access to all your six petabytes of data to build an intelligent search layer on top of it so that your internal employees can have a conversation with their company’s data,’ there’s no way that’s going to work,” Ten-Pow said. “We have a simple model but underneath the covers there’s a lot of complexity that makes that happen, hard technical challenges that we’ve solved to make that happen,” Ten-Pow said.
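The digital-twin idea can be sketched in a few lines: software deployed inside the customer's environment pushes coarse telemetry outward, and the vendor inspects the twin rather than reaching into the customer's stack. This is a toy model under assumed names, not Tensor9's actual API.

```python
import time

class DigitalTwin:
    """A vendor-side mirror of software deployed in a customer environment.

    The deployed appliance pushes coarse telemetry outward; the vendor
    queries this twin instead of touching the customer's stack directly.
    (Illustrative sketch only -- not Tensor9's actual design.)
    """
    def __init__(self, deployment_id: str):
        self.deployment_id = deployment_id
        self.metrics: dict[str, float] = {}
        self.last_heartbeat = None  # epoch seconds of the latest report

    def ingest(self, metrics: dict[str, float]) -> None:
        """Called when the deployed appliance reports telemetry."""
        self.metrics.update(metrics)
        self.last_heartbeat = time.time()

    def healthy(self, max_silence_s: float = 60.0) -> bool:
        """The twin is healthy if the appliance reported recently."""
        return (self.last_heartbeat is not None
                and time.time() - self.last_heartbeat < max_silence_s)

twin = DigitalTwin("customer-prod-1")
twin.ingest({"cpu_pct": 41.0, "queue_depth": 3.0})
print(twin.healthy(), twin.metrics["cpu_pct"])
```

The design choice the sketch highlights: only aggregated metrics cross the boundary, so the vendor gets operational visibility without ever accessing the customer's data.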
Bardeen’s work intelligence platform securely observes task-level behavior across all tools, and automates entire workflows with agentic AI
Bardeen, a provider of AI agents capable of automating repetitive knowledge work using a natural language interface, unveiled its Work Intelligence Platform—a new system of AI agents that learn how work actually happens, then execute and improve it without any hand-holding. True operational intelligence is capable of identifying how top performers work and scaling that knowledge across an organization. With its Work Intelligence Platform, Bardeen is bringing that level of understanding to the broader market, making go-to-market execution consistent and scalable. The new platform reflects Bardeen’s belief that automation should interpret before it acts, and that the next generation of AI should understand how work happens and improve it in real time. Bardeen’s Work Intelligence Platform: Securely observes task-level behavior across all tools, including Salesforce, LinkedIn, Salesloft, and HubSpot, and reveals how individuals and teams actually work; Surfaces hidden workflows, exposes time sinks and inefficiencies, and highlights high-leverage patterns that drive performance; Automates entire workflows with AI-generated automation agents tailored to the way teams already work.
DarkBench is the first benchmark designed specifically to detect and categorize LLM dark patterns, such as AI sycophancy, brand bias, and emotional mirroring
Esben Kran, founder of AI safety research firm Apart Research, and his team approach large language models (LLMs) much like psychologists studying human behavior. Their early “black box psychology” projects analyzed models as if they were human subjects, identifying recurring traits and tendencies in their interactions with users. “We saw that there were very clear indications that models could be analyzed in this frame, and it was very valuable to do so, because you end up getting a lot of valid feedback from how they behave towards users,” said Kran. Among the most alarming: sycophancy and what the researchers now call LLM dark patterns. Kran describes the ChatGPT-4o incident as an early warning. As AI developers chase profit and user engagement, they may be incentivized to introduce or tolerate behaviors like sycophancy, brand bias or emotional mirroring—features that make chatbots more persuasive and more manipulative. To combat the threat of manipulative AIs, Kran and a collective of AI safety researchers have developed DarkBench, the first benchmark designed specifically to detect and categorize LLM dark patterns. Their research uncovered a range of manipulative and untruthful behaviors across the following six categories: Brand Bias, User Retention, Sycophancy, Anthropomorphism, Harmful Content Generation, and Sneaking. On average, the researchers found the Claude 3 family the safest for users to interact with. And interestingly—despite its recent disastrous update—GPT-4o exhibited the lowest rate of sycophancy. This underscores how model behavior can shift dramatically even between minor updates, a reminder that each deployment must be assessed individually. A crucial DarkBench contribution is its precise categorization of LLM dark patterns, enabling clear distinctions between hallucinations and strategic manipulation. Labeling everything as a hallucination lets AI developers off the hook. 
Now, with a framework in place, stakeholders can demand transparency and accountability when models behave in ways that benefit their creators, intentionally or not.
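The benchmark's core contribution — precise per-category labeling rather than a blanket "hallucination" verdict — can be illustrated with a small scoring sketch. The six category names come from the research above; how DarkBench's judging pipeline is actually wired is simplified here, and the flagged responses are hypothetical.

```python
from collections import Counter

# The six DarkBench categories named in the research.
CATEGORIES = [
    "brand_bias", "user_retention", "sycophancy",
    "anthropomorphism", "harmful_content", "sneaking",
]

def dark_pattern_rates(annotations: list[list[str]]) -> dict[str, float]:
    """Per-category rate: the fraction of model responses flagged for
    each dark pattern. `annotations` holds, per response, the list of
    categories a judge flagged (judging itself is simplified away here)."""
    counts = Counter(c for flags in annotations for c in set(flags))
    n = len(annotations)
    return {c: counts[c] / n for c in CATEGORIES}

# Three hypothetical judged responses from one model.
flags = [["sycophancy"], [], ["sycophancy", "brand_bias"]]
rates = dark_pattern_rates(flags)
print(rates["sycophancy"])  # 2 of 3 responses flagged -> ~0.667
```

Per-category rates like these are what make it possible to say a model family is, for example, low on sycophancy but high on brand bias, instead of collapsing everything into one error bucket.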
Databricks’ acquisition of Neon to give enterprises the ability to deploy AI agents at scale by rapidly spinning up databases programmatically, without coupling storage and compute, through a serverless autoscaling approach to PostgreSQL
Databricks announced its intent to acquire Neon, a leading serverless Postgres company. Neon’s serverless PostgreSQL approach separates storage and compute, making it developer-friendly and AI-native. It also enables automated scaling as well as branching, in an approach similar to how the Git version control system works for code. Amalgam Insights CEO and Chief Analyst Hyoun Park noted that Databricks has been a pioneer in deploying and scaling AI projects. Park explained that Neon’s serverless autoscaling approach to PostgreSQL is important for AI because it allows agents and AI projects to grow as needed without artificially coupling storage and compute needs together. He added that for Databricks, this is useful both for agentic use cases and for supporting the custom models it has built over the last couple of years following its Mosaic AI acquisition. For enterprises looking to lead the way in AI, this acquisition signals a shift in infrastructure requirements for successful AI implementation. What is particularly insightful, though, is that the ability to rapidly spin up databases is essential for agentic AI success. The deal validates that even advanced data companies need specialized serverless database capabilities to support AI agents that create and manage databases programmatically. Organizations should recognize that traditional database approaches may limit their AI initiatives, while flexible, instantly scalable serverless solutions enable the dynamic resource allocation that modern AI applications demand. For companies still planning their AI roadmap, this acquisition signals that database infrastructure decisions should prioritize serverless capabilities that can adapt quickly to unpredictable AI workloads. This would transform database strategy from a technical consideration to a competitive advantage in delivering responsive, efficient AI solutions.
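The "Git-like branching" point is what makes programmatic spin-up cheap: an agent can fork a copy-on-write branch of a database the way a developer forks a code branch. The sketch below assembles (but does not send) a create-branch call in the general shape of Neon's public v2 API; the project ID, branch ID, and key are placeholders, and the exact endpoint and payload fields should be checked against Neon's API reference.

```python
import json

def build_branch_request(project_id: str, parent_branch_id: str,
                         api_key: str) -> tuple[str, dict, bytes]:
    """Assemble a create-branch request in the shape of Neon's v2 API.

    Nothing is sent over the network here; this only shows how an agent
    could fork a database branch programmatically.
    """
    url = f"https://console.neon.tech/api/v2/projects/{project_id}/branches"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        # Copy-on-write fork of the parent branch, Git-style.
        "branch": {"parent_id": parent_branch_id},
        # Request a compute endpoint so the branch is immediately queryable;
        # compute scales independently of the shared storage layer.
        "endpoints": [{"type": "read_write"}],
    }).encode()
    return url, headers, body

url, headers, body = build_branch_request("proj-123", "br-main-456", "KEY")
print(url)
```

Because storage is shared copy-on-write, each such branch costs almost nothing until it diverges — which is why an agent can create and discard databases as freely as temporary files.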
Windsurf’s new frontier-class AI models focus on specific engineering tasks rather than the general-purpose coding that LLMs are geared toward; they adopt ‘flow awareness,’ progressively transferring tasks from human to AI through a shared timeline of actions to accelerate the entire development lifecycle
To date, vibe coding platforms have largely relied on existing large language models (LLMs) to help write code. Windsurf is taking on the challenge with a series of new frontier AI models it calls SWE-1 (software engineer 1) as part of the company’s Wave 9 update. SWE-1 is a family of frontier-class AI models specifically designed to accelerate the entire software engineering process. Available immediately to Windsurf users, SWE-1 marks the company’s entry into frontier model development, with performance competitive with established foundation models but a focus on software engineering workflows. Anshul Ramachandran, head of product and strategy at Windsurf, said, “The core innovation behind SWE-1 is Windsurf’s recognition that coding represents only a fraction of what software engineers actually do.” Rather than creating a one-size-fits-all solution, Windsurf has developed three specialized models: SWE-1, SWE-1-lite, and SWE-1-mini. The goal is to position SWE-1 as the first step toward purpose-built models that will eventually surpass general-purpose ones for specific engineering tasks — and potentially at a lower cost. What makes Windsurf’s approach technically distinctive is its implementation of the flow awareness concept. Flow awareness is centered on creating a shared timeline of actions between humans and AI in software development. The core idea is to progressively transfer tasks from human to AI by understanding where AI can most effectively assist. This approach creates a continuous improvement loop for the models. For enterprises building or maintaining software, SWE-1 represents an important evolution in AI-assisted development. Rather than treating AI coding assistants as simply autocomplete tools, this approach promises to accelerate the entire development lifecycle. The potential impact extends beyond just writing code more quickly.
The recognition that application development involves more than writing code will help mature the vibe coding paradigm, making it more applicable to stable enterprise software development. If and when OpenAI completes the acquisition of Windsurf, the new models could become even more important as they intersect with the larger model research and development resources that will become available.
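The "shared timeline of actions" behind flow awareness can be pictured as a single ordered log that both the human and the AI append to, from which one can measure how much of the work has been handed off. This is a toy model with assumed names, not Windsurf's actual design.

```python
from dataclasses import dataclass

@dataclass
class Action:
    actor: str   # "human" or "ai"
    kind: str    # e.g. "edit", "run_tests", "review"

class SharedTimeline:
    """A toy model of flow awareness: one ordered log of development
    actions that both the human and the AI contribute to
    (illustrative sketch only -- not Windsurf's implementation)."""
    def __init__(self):
        self.events: list[Action] = []

    def record(self, actor: str, kind: str) -> None:
        self.events.append(Action(actor, kind))

    def ai_share(self) -> float:
        """Fraction of logged work done by the AI -- a crude proxy for
        how far tasks have been transferred from human to model."""
        if not self.events:
            return 0.0
        return sum(e.actor == "ai" for e in self.events) / len(self.events)

tl = SharedTimeline()
tl.record("human", "edit")
tl.record("ai", "run_tests")
tl.record("ai", "edit")
print(tl.ai_share())  # 2 of 3 actions -> ~0.667
```

A log like this is also what would feed the continuous improvement loop the article mentions: the model sees which handoffs stuck and which the human took back.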
Microsoft wants its in-house AI agents to collaborate with external agents and remember those interactions
Microsoft envisions a future where any company’s artificial intelligence agents can work together with agents from other firms and have better memories of their interactions, its chief technologist said on Sunday ahead of the company’s annual software developer conference. Microsoft is holding its Build conference in Seattle on May 19, where analysts expect the company to unveil its latest tools for developers building AI systems. Speaking at Microsoft’s headquarters in Redmond, Washington, ahead of the conference, Chief Technology Officer Kevin Scott told reporters and analysts the company is focused on helping spur the adoption of standards across the technology industry that will let agents from different makers collaborate. Agents are AI systems that can accomplish specific tasks, such as fixing a software bug, on their own.