OPAQUE Systems, the industry’s first Confidential AI platform, announced the availability of its secure AI solution on the Microsoft Azure Marketplace. By integrating confidential computing with popular data and AI tools, OPAQUE lets enterprises process sensitive data fully encrypted, from ingestion to inference, without costly code rewrites or specialized cryptographic skills. Most confidential computing solutions focus on encrypting data in use and verifying the basic infrastructure, such as applications running in Confidential Virtual Machines. OPAQUE goes significantly further by enforcing privacy, security, and compliance policies from data ingestion to inference. This comprehensive coverage means customers can safely deploy classic analytics/ML and advanced AI agents on their most valuable, confidential data without compromising on sovereignty or compliance. By keeping sensitive information encrypted even during analysis and inference, organizations gain cryptographically verifiable privacy, protection against unapproved agents or code execution, and auditable proof of compliance at every step. This coverage frees enterprises to innovate at scale with their differentiated, proprietary data while minimizing regulatory risk, all on a single platform. OPAQUE positions itself as the only platform that meets these needs.
Adaptive Computer’s no-code web-app platform lets non-programmers build full-featured apps that include payments (via Stripe), scheduled tasks, and AI features such as image generation and speech synthesis, simply by entering a text prompt
Startup Adaptive Computer wants non-programmers to build and use full-featured apps of their own creation, simply by entering a text prompt into Adaptive’s no-code web-app platform. To be clear, this isn’t about the computer itself or any hardware, despite the company’s name. The startup currently only builds web apps. For every app it builds, Adaptive Computer’s engine handles creating a database instance, user authentication, and file management, and it can create apps that include payments (via Stripe), scheduled tasks, and AI features such as image generation, speech synthesis, content analysis, and web search/research. Beyond taking care of the back-end database and other technical details, Adaptive lets apps work together. For instance, a user can build a file-hosting app, and the next app can access those files. Founder Dennis Xu likens this to an “operating system” rather than a single web app. He says the difference between more established products and his startup is that the others were originally geared toward making programming easier for programmers. “We’re building for the everyday person who is interested in creating things to make their own lives better.”
OpenAI is looking to acquire AI coding startups as its next growth area, amid pricing pressure on access to foundational models and competitors’ models outperforming its own on coding benchmarks
Anysphere, maker of AI coding assistant Cursor, is growing so quickly that it’s not in the market to be sold, even to OpenAI, a source close to the company tells TechCrunch. It’s been a hot target. Cursor is one of the most popular AI-powered coding tools, and its revenue has been growing astronomically — doubling on average every two months, according to another source. Anysphere’s current average annual recurring revenue is about $300 million, according to the two sources. The company previously walked away from early acquisition discussions with OpenAI after the ChatGPT maker approached Cursor, the two sources close to the company confirmed, and CNBC previously reported. Anysphere has also received other acquisition offers that the company didn’t consider, according to one of these sources. Cursor turned down the offers because the startup wants to stay independent, said the two people close to the company. Instead, Anysphere has been in talks to raise capital at about a $10 billion valuation, Bloomberg reported last month. Although it didn’t nab Anysphere, OpenAI didn’t give up on buying an established AI coding tool startup. OpenAI talked with more than 20 others, CNBC reported. Then it got serious about the next-fastest-growing AI coding startup, Windsurf, with a $3 billion acquisition offer, Bloomberg reported last week. While Windsurf is a comparatively smaller company, its ARR is about $100 million, up from $40 million in February, according to a source. Windsurf has been gaining popularity with the developer community, too, and its coding product is designed to work with legacy enterprise systems. Windsurf did not respond to TechCrunch’s request for comment. OpenAI declined to comment on its acquisition talks. OpenAI is likely shopping because it’s looking for its next growth areas as competitors such as Google’s Gemini and China’s DeepSeek put pricing pressure on access to foundational models.
Moreover, Anthropic and Google have recently released AI models that outperform OpenAI’s models on coding benchmarks, making them an increasingly preferred choice for developers. While OpenAI could build its own AI coding assistant, buying a product that is already popular with developers means the ChatGPT maker wouldn’t have to start from scratch to build this business. VCs who invest in developer tool startups are certainly watching. Speculating about OpenAI’s strategy, Chris Farmer, partner and CEO at SignalFire, told TechCrunch of the company, “They’ll be acquisitive at the app layer. It’s existential for them.”
Amazon Bedrock’s serverless endpoint dynamically predicts each model’s response quality for a given request and routes the request to the most appropriate model based on cost and response quality
Amazon Bedrock has announced the general availability of its Intelligent Prompt Routing, a serverless endpoint that efficiently routes requests between different foundation models within the same model family. The system dynamically predicts the response quality of each model for a request and routes the request to the model it determines is most appropriate based on cost and response quality. The system incorporates state-of-the-art methods for training routers for different sets of models, tasks, and prompts. Customers can use the default prompt routers provided by Amazon Bedrock or configure their own prompt routers to tune the cost-quality trade-off between two candidate LLMs. The system has reduced the latency overhead of the added routing components by over 20%, to approximately 85 ms (P90), resulting in an overall latency and cost benefit compared to always hitting the larger, more expensive model. Amazon Bedrock has conducted internal tests with proprietary and public data to evaluate the system’s performance metrics.
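The routing idea can be sketched in a few lines of Python. This is a conceptual illustration only, not Amazon’s implementation: the candidate models, costs, quality scores, and the quality floor are all hypothetical placeholders standing in for Bedrock’s learned quality predictor.

```python
# Conceptual sketch of cost/quality-aware prompt routing.
# Hypothetical: model names, prices, and quality scores are made up.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cost_per_1k_tokens: float   # hypothetical pricing
    predicted_quality: float    # 0.0-1.0, from a learned quality predictor

def route(prompt: str, candidates: list[Candidate],
          quality_floor: float = 0.8) -> Candidate:
    """Pick the cheapest model whose predicted quality clears the floor;
    fall back to the highest-quality model if none does."""
    acceptable = [c for c in candidates if c.predicted_quality >= quality_floor]
    if acceptable:
        return min(acceptable, key=lambda c: c.cost_per_1k_tokens)
    return max(candidates, key=lambda c: c.predicted_quality)

small = Candidate("family-small", cost_per_1k_tokens=0.25, predicted_quality=0.84)
large = Candidate("family-large", cost_per_1k_tokens=3.00, predicted_quality=0.95)

# A routine request clears the floor, so the cheap model wins.
print(route("Summarize this memo.", [small, large]).name)
```

Raising `quality_floor` is the knob that shifts traffic toward the larger model, which is the kind of cost-quality trade-off the configurable routers expose.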
Codacy’s solution integrates directly with AI coding assistants via an MCP server to enforce coding standards, flagging or fixing issues in real time
Codacy, provider of automated code quality and security solutions, launched Codacy Guardrails, a new product designed to bring real-time security, compliance, and quality enforcement to AI-generated code. Guardrails is the first solution of its kind to integrate directly with AI coding assistants, checking AI-generated code before it ever reaches the developer, enforcing coding standards, and preventing non-compliant code from being generated in the first place. Built on Codacy’s SOC2-compliant platform, Codacy Guardrails empowers teams to define their own secure development policies and apply them across every AI-generated prompt. With Codacy Guardrails, AI-assisted tools gain full access to the security and quality context of a team’s codebase. At the core of the product is the Codacy MCP server, which connects development environments to the organization’s code standards. This gives LLMs the ability to reason about policies, flag or fix issues in real time, and deliver code that’s compliant by default. Guardrails integrates with popular IDEs like Cursor AI and Windsurf, as well as VS Code and IntelliJ through Codacy’s plugin, allowing developers to apply guardrails directly within their existing workflows.
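The core guardrails pattern — screen generated code against team policies before handing it to the developer — can be sketched minimally. This is a toy illustration of the idea, not Codacy’s policy engine; the two rules below are hypothetical examples of the kind of standards a team might enforce.

```python
# Toy sketch of a pre-delivery guardrail: reject AI-generated code that
# violates team policies. The policies here are hypothetical examples.

import re

POLICIES = [
    ("no hardcoded secrets", re.compile(r"(?i)(api_key|password)\s*=\s*['\"]\w+['\"]")),
    ("no eval()", re.compile(r"\beval\s*\(")),
]

def check_generated_code(code: str) -> list[str]:
    """Return the names of violated policies; an empty list means compliant."""
    return [name for name, pattern in POLICIES if pattern.search(code)]

snippet = 'api_key = "s3cret"\nresult = eval(user_input)'
print(check_generated_code(snippet))
```

A real implementation would sit between the assistant and the editor (here, as an MCP server) so violations are flagged or fixed before the code ever appears in the developer’s buffer.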
Docker to simplify AI software delivery by containerizing MCP servers, offering an enterprise-ready toolkit and a centralized catalog of 100+ servers to discover and manage them
Software containerization company Docker is launching the Docker MCP Catalog and Docker MCP Toolkit, which bring more of the AI workflow into the existing Docker developer experience and simplify AI software delivery. The new offerings are based on the emerging Model Context Protocol standard created by its partner Anthropic PBC. Docker argues that the simplest way to use Anthropic’s MCP to improve LLMs is to containerize the servers. To do that, it offers tools such as Docker Desktop for building, testing, and running MCP servers; Docker Hub to distribute their container images; and Docker Scout to ensure they’re secure. By packaging MCP servers as containers, developers can eliminate the hassles of installing dependencies and configuring their runtime environments. The Docker MCP Catalog, integrated within Docker Hub, is a centralized way for developers to discover, run, and manage MCP servers, while the Docker MCP Toolkit offers “enterprise-ready tooling” for putting AI applications to work. At launch, there are more than 100 MCP servers available within the Docker MCP Catalog. Docker President and Chief Operating Officer Mark Cavage explained that “The Docker MCP Catalog brings that all together in one place, a trusted, developer-friendly experience within Docker Hub, where tools are verified, secure, and easy to run.”
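To illustrate the containerized pattern, an MCP client can launch a catalog image as a container over stdio instead of installing the server and its dependencies locally. The fragment below is a hypothetical client configuration: the server name, image name, and exact config schema vary by MCP client and server.

```json
{
  "mcpServers": {
    "example": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "mcp/example-server"]
    }
  }
}
```

Because the image carries its own runtime and dependencies, the host machine only needs Docker, which is the dependency-elimination argument the article describes.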
SnapLogic’s tool includes a no-code visual prompt editor that enables any user to build, visualize, and refine intelligent agents for complex workflows in real time on a single screen
SnapLogic announced AgentCreator 3.0, a major evolution in agentic AI technology aimed at eliminating the complexity of enterprise AI adoption. The new release empowers organizations to build and scale their own AI solutions with no coding required. With AgentCreator 3.0, businesses are no longer constrained by human resource limitations. Instead, they gain access to a limitless workforce powered by AI-driven digital labor that works tirelessly, scales infinitely, and augments their best talent with PhD-level intelligence. Key additions to AgentCreator 3.0 include Prompt Composer and Agent Visualizer, making it easier than ever for customers to build, visualize, and refine intelligent agents for complex workflows. Prompt Composer is a visual prompt editor that simplifies prompt creation for faster iteration and stronger results, enabling anyone, from business users to engineers, to create, test, and refine AI instructions in real time on a single screen. This ensures high precision and adaptability as LLMs evolve. Agent Visualizer provides full transparency into AI decision-making, ensuring enterprises can trust, audit, and refine agent behavior; the visualization tool delivers a step-by-step breakdown of an agent’s decision-making process. SnapLogic is also introducing support for the Model Context Protocol (MCP) to further accelerate the adoption and deployment of agentic AI. AgentCreator 3.0 empowers organizations with AI-ready data, AI as digital labor, DIY AI without complexity, security and governance, and AI workforce collaboration.