Apple and Anthropic have reportedly partnered to create a platform that will use AI to write, edit and test code for programmers. Apple has started rolling out the coding software to its own engineers, but hasn’t decided whether to make it available to third-party app developers. The tool generates code or alterations in response to requests made by programmers through a chat interface; it also tests user interfaces and manages the process of finding and fixing bugs. Amazon, Meta, Google and several startups have also built AI assistants for writing and editing code. McKinsey estimated in 2023 that AI could boost software-engineering productivity by 20% to 45%. This increased efficiency has far-reaching implications for businesses across industries, said Bob Rogers, CPO and CTO of Oii.ai: AI-powered tools enable developers to create software and applications faster and with fewer resources. “Simple tasks such as building landing pages, basic website design, report generation, etc., can all be done with AI, freeing up time for programmers to focus on less tedious, more complex tasks,” Rogers said. “It’s important to remember that while generative AI can augment skills and help folks learn to code, it cannot yet directly replace programmers — someone still needs to design the system.”
kama.ai supports knowledge management with hybrid agents informed by Knowledge Graph AI, enterprise RAG tech and a Trusted Collection
kama.ai, a leader in responsible conversational AI solutions, announced the commercial release of the industry’s most trustworthy AI Agents powered by GenAI’s Sober Second Mind®, the latest addition to its Designed Experiential Intelligence® platform – Release 4. The new Hybrid AI Agents combine kama.ai’s classic knowledge-base AI, guided by human values, with a new enterprise Retrieval Augmented Generation (RAG) process, which in turn is powered by a Trusted Collection feature set that produces the most reliable and accurate generative responses. The Trusted Collection features provide pre-integrated, intentional document and collection management with enterprise document repositories such as SharePoint, M-Files and AWS S3 buckets. Designed Experiential Intelligence® Release 4 helps enterprise experts work faster and with greater ease: it automatically generates draft responses for a Knowledge Manager or SME to review, which is needed for highly sensitive applications (like HR) and for high-volume customer-facing applications. User inquiries, feedback, and AI drafts all feed back into the system; together, consumers, clients, partners, and SMEs create a more efficient and effective human-AI ecosystem. kama.ai Release 4 also introduces a new API supporting third-party Hybrid AI Agent builders that can deliver 100% accurate, approved information curated for the enterprise.
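The review loop kama.ai describes can be made concrete with a small sketch: serve only human-approved answers directly, and route anything the generative side produces into a draft queue for a Knowledge Manager or SME. Everything here (TrustedCollection, retrieve_passages, generate_draft) is a hypothetical stand-in for illustration, not kama.ai’s actual API:

```python
# Minimal sketch of a "trusted answer first, reviewed draft second" flow.
# TrustedCollection, retrieve_passages, and generate_draft are hypothetical
# stand-ins, not kama.ai's actual components.
from dataclasses import dataclass, field

@dataclass
class TrustedCollection:
    """Human-approved question -> answer pairs curated by a Knowledge Manager."""
    approved: dict[str, str] = field(default_factory=dict)

    def lookup(self, question: str) -> str | None:
        return self.approved.get(question.strip().lower())

def retrieve_passages(question: str) -> list[str]:
    # Placeholder for enterprise retrieval (SharePoint, M-Files, S3, ...).
    return ["...relevant passage from an approved document..."]

def generate_draft(question: str, passages: list[str]) -> str:
    # Placeholder for the generative (RAG) step over retrieved context.
    return f"DRAFT answer to {question!r}, grounded in {len(passages)} passage(s)"

def answer(question: str, kb: TrustedCollection, draft_queue: list[str]) -> str:
    approved = kb.lookup(question)
    if approved:
        return approved  # approved answers are served directly
    # Otherwise, generate a draft and hold it for SME review.
    draft_queue.append(generate_draft(question, retrieve_passages(question)))
    return "A reviewed answer is pending; a draft was sent to a Knowledge Manager."

queue: list[str] = []
kb = TrustedCollection(approved={"what is the parental leave policy?": "Twelve weeks paid."})
print(answer("What is the parental leave policy?", kb, queue))
print(answer("How do I file a relocation claim?", kb, queue))
```

The design choice worth noting is that the generative path never answers a user directly; it only produces drafts that become trusted answers after review.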
AlphaEvolve, a coding agent built on Google’s Gemini LLMs, tests, refines, and improves algorithms automatically
Google DeepMind today pulled the curtain back on AlphaEvolve, an artificial-intelligence agent that can invent brand-new computer algorithms — then put them straight to work inside the company’s vast computing empire. AlphaEvolve pairs Google’s Gemini LLMs with an evolutionary approach that tests, refines, and improves algorithms automatically. The system has already been deployed across Google’s data centers, chip designs, and AI training systems — boosting efficiency and solving mathematical problems that have stumped researchers for decades. “AlphaEvolve is a Gemini-powered AI coding agent that is able to make new discoveries in computing and mathematics,” explained Matej Balog, a researcher at Google DeepMind. “It can discover algorithms of remarkable complexity — spanning hundreds of lines of code with sophisticated logical structures that go far beyond simple functions.” One algorithm it discovered has been powering Borg, Google’s massive cluster management system. This scheduling heuristic recovers an average of 0.7% of Google’s worldwide computing resources continuously — a staggering efficiency gain at Google’s scale. The discovery directly targets “stranded resources” — machines that have run out of one resource type (like memory) while still having others (like CPU) available. AlphaEvolve’s solution is especially valuable because it produces simple, human-readable code that engineers can easily interpret, debug, and deploy. Perhaps most impressively, AlphaEvolve improved the very systems that power itself. It optimized a matrix multiplication kernel used to train Gemini models, achieving a 23% speedup for that operation and cutting overall training time by 1%. For AI systems that train on massive computational grids, this efficiency gain translates to substantial energy and resource savings.
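Balog’s description maps onto a classic evolutionary search loop, with the LLM acting as the mutation operator and an automated benchmark as the fitness function. The toy sketch below shows only the shape of that loop; llm_mutate and evaluate are stand-ins for Gemini-generated rewrites and AlphaEvolve’s real evaluators:

```python
# Toy sketch of the evolutionary loop described above: an LLM proposes code
# mutations, an automated evaluator scores them, and the best candidates
# survive. llm_mutate and evaluate are stand-ins, not DeepMind's components.
import random

def evaluate(program: str) -> float:
    # Stand-in for an automated benchmark (scheduler efficiency, kernel speed, ...).
    return -abs(len(program) - 60) + random.random()

def llm_mutate(program: str) -> str:
    # Stand-in for a Gemini-generated rewrite of a candidate program.
    return program + random.choice(["  # variant", ""])

def evolve(seed: str, generations: int = 20, population_size: int = 8) -> str:
    population = [seed]
    for _ in range(generations):
        children = [llm_mutate(random.choice(population)) for _ in range(population_size)]
        # Keep the highest-scoring candidates, parents and children alike.
        population = sorted(set(population + children), key=evaluate, reverse=True)
        population = population[:population_size]
    return population[0]

print(evolve("def schedule(jobs): return sorted(jobs, key=lambda j: j.memory)"))
```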
Mistral AI’s API integrates server-side conversation management, a Python code interpreter, web search, image generation and document retrieval capabilities to enable building fully autonomous AI agents
Mistral AI, a rival to OpenAI, Anthropic PBC, Google LLC and others, has jumped into agentic AI development with the launch of a new API. The new Agents API equips developers with powerful tools for building sophisticated AI agents based on Mistral AI’s LLMs, which can autonomously plan and carry out complex, multistep tasks using external tools. Among its features, the API integrates server-side conversation management, a Python-based code interpreter, web search, image generation and document retrieval capabilities. It also supports AI agent orchestration, and it’s compatible with the emerging Model Context Protocol that aims to standardize the way agents interact with other applications. With its API, Mistral AI is keeping pace with the likes of OpenAI and Anthropic, which are also laser-focused on enabling AI agents that can perform tasks on behalf of humans with minimal supervision, in an effort to turbocharge business automation. The API offers dozens of useful “connectors” that should make it simpler to build some very capable AI agents. For instance, the Python Code Interpreter provides a way for agents to execute Python code in a secure, sandboxed environment, while the image generation tool, powered by Black Forest Labs Inc.’s FLUX1.1 [pro] Ultra model, gives them powerful picture-generating capabilities. A premium version of web search provides access to a standard search engine, plus the Agence France-Presse and Associated Press news agencies, so AI agents will be able to access up-to-date information about the real world. Other features include a document library that uses hosted retrieval-augmented generation over user-uploaded documents; in other words, Mistral’s AI agents will be able to read external documents and perform actions with them. The API also includes an “agent handoffs” mechanism that allows multiple agents to work together: one agent will be able to delegate a task to another, more specialized agent. According to Mistral, the result will be a “seamless chain of actions,” with a single request able to trigger multiple agents into action so they can collaborate on complex tasks. The Agents API supports “stateful conversations” too, which means agents are able to maintain context over time by remembering the user’s earlier inputs.
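As a rough illustration, building on the Agents API might look like the following Python sketch, modeled on Mistral’s published examples; exact method and parameter names may differ from the shipping SDK:

```python
# Sketch of the Agents API, modeled on Mistral's published Python examples;
# method and parameter names may differ from the current SDK.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# An agent with two built-in connectors: web search and the code interpreter.
agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="research-agent",
    instructions="Answer with up-to-date facts; run Python when math is needed.",
    tools=[{"type": "web_search"}, {"type": "code_interpreter"}],
)

# Conversations are stateful server-side: start one, then append to it.
conversation = client.beta.conversations.start(
    agent_id=agent.id,
    inputs="What changed in the latest CPython release? Summarize briefly.",
)
followup = client.beta.conversations.append(
    conversation_id=conversation.conversation_id,
    inputs="Now compute how many days ago that release was.",
)
print(followup.outputs[-1])
```

Because conversations are stateful on the server, the follow-up request carries only the conversation ID; the API, not the client, maintains the history.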
OPAQUE Systems integrates confidential computing with popular data and AI tools to process fully encrypted data from ingestion to inference, enforcing cryptographically verifiable privacy, secure code execution, and auditable proof of compliance
OPAQUE Systems, the industry’s first Confidential AI platform, announced the availability of its secure AI solution on the Microsoft Azure Marketplace. By integrating confidential computing with popular data and AI tools, OPAQUE lets enterprises process sensitive data fully encrypted, from ingestion to inference, without costly code rewrites or specialized cryptographic skills. Most confidential computing solutions focus on encrypting data in use and verifying the basic infrastructure, such as applications running in Confidential Virtual Machines. OPAQUE goes significantly further by enforcing privacy, security, and compliance policies from data ingestion to inference. This comprehensive coverage means customers can safely deploy classic analytics/ML and advanced AI agents on their most valuable, confidential data without compromising on sovereignty or compliance. By keeping sensitive information encrypted even during analysis and inference, organizations gain cryptographically verifiable privacy, protection against unapproved agents or code execution, and auditable proof of compliance at every step. This robust coverage frees enterprises to innovate at scale with their differentiated, proprietary data while minimizing regulatory risk on a single platform. OPAQUE positions itself as the only platform that meets these needs across all three phases: ingestion, analysis, and inference.
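Conceptually, the enforcement OPAQUE describes is a gate in front of every computation: code touches sensitive data only if the enclave attests correctly and the job satisfies policy. The sketch below illustrates that pattern only; Policy, verify_attestation, and run_in_enclave are hypothetical stand-ins, not OPAQUE’s API:

```python
# Conceptual sketch only: a policy gate in front of enclave execution.
# Policy, verify_attestation, and run_in_enclave are hypothetical stand-ins,
# not OPAQUE's API.
from dataclasses import dataclass

@dataclass
class Policy:
    approved_code_hashes: set[str]
    allowed_purposes: set[str]

def verify_attestation(enclave_quote: bytes) -> bool:
    # Stand-in: check a hardware-signed quote against expected measurements.
    return enclave_quote.startswith(b"TEE")

def run_in_enclave(code_hash: str, purpose: str, policy: Policy, quote: bytes) -> str:
    if not verify_attestation(quote):
        raise PermissionError("enclave failed attestation")
    if code_hash not in policy.approved_code_hashes:
        raise PermissionError("unapproved code")       # blocks rogue agents/code
    if purpose not in policy.allowed_purposes:
        raise PermissionError("purpose not permitted") # policy enforced per job
    # Data would be decrypted only inside the attested enclave at this point.
    return f"audit: ran {code_hash} for {purpose}"     # auditable record per step

policy = Policy({"sha256:abc123"}, {"fraud-scoring"})
print(run_in_enclave("sha256:abc123", "fraud-scoring", policy, b"TEE-quote"))
```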
Adaptive Computer’s no-code web-app platform lets non-programmers build full-featured apps that include payments (via Stripe), scheduled tasks, and AI features such as image generation and speech synthesis, simply by entering a text prompt
Startup Adaptive Computer wants non-programmers to build and use full-featured apps of their own, simply by entering a text prompt into Adaptive’s no-code web-app platform. To be clear, this isn’t about the computer itself or any hardware, despite the company’s name: the startup currently only builds web apps. For every app it builds, Adaptive Computer’s engine handles creating a database instance, user authentication and file management, and it can create apps that include payments (via Stripe), scheduled tasks, and AI features such as image generation, speech synthesis, content analysis, and web search/research. Besides taking care of the back-end database and other technical details, Adaptive apps can work together: for instance, a user can build a file-hosting app, and the next app can access those files. Founder Dennis Xu likens this to an “operating system” rather than a single web app. He says the difference between more established products and his startup is that the others were originally geared toward making programming easier for programmers. “We’re building for the everyday person who is interested in creating things to make their own lives better.”
OpenAI is looking to acquire AI coding startups as its next growth area amid pricing pressure on access to foundational models and competitors’ models outperforming its own on coding benchmarks
Anysphere, maker of AI coding assistant Cursor, is growing so quickly that it’s not in the market to be sold, even to OpenAI, a source close to the company tells TechCrunch. It’s been a hot target. Cursor is one of the most popular AI-powered coding tools, and its revenue has been growing astronomically — doubling on average every two months, according to another source. Anysphere’s current average annual recurring revenue is about $300 million, according to the two sources. The company previously walked away from early acquisition discussions with OpenAI after the ChatGPT maker approached Cursor, the two sources close to the company confirmed, and CNBC previously reported. Anysphere has also received other acquisition offers that the company didn’t consider, according to one of these sources. Cursor turned down the offers because the startup wants to stay independent, said the two people close to the company. Instead, Anysphere has been in talks to raise capital at about a $10 billion valuation, Bloomberg reported last month. Although it didn’t nab Anysphere, OpenAI didn’t give up on buying an established AI coding tool startup; it talked with more than 20 others, CNBC reported. And then it got serious about the next-fastest-growing AI coding startup, Windsurf, with a $3 billion acquisition offer, Bloomberg reported last week. While Windsurf is a comparatively smaller company, its ARR is about $100 million, up from $40 million in February, according to a source. Windsurf has been gaining popularity with the developer community, too, and its coding product is designed to work with legacy enterprise systems. Windsurf did not respond to TechCrunch’s request for comment. OpenAI declined to comment on its acquisition talks. OpenAI is likely shopping because it’s looking for its next growth areas as competitors such as Google’s Gemini and China’s DeepSeek put pricing pressure on access to foundational models. Moreover, Anthropic and Google have recently released AI models that outperform OpenAI’s on coding benchmarks, increasingly making them a preferred choice for developers. While OpenAI could build its own AI coding assistant, buying a product that is already popular with developers means the ChatGPT maker wouldn’t have to start from scratch to build this business. VCs who invest in developer tool startups are certainly watching. Speculating about OpenAI’s strategy, Chris Farmer, partner and CEO at SignalFire, told TechCrunch, “They’ll be acquisitive at the app layer. It’s existential for them.”
Amazon Bedrock’s serverless endpoint dynamically predicts the response quality of each model and efficiently routes each request to the most appropriate model based on cost and response quality
Amazon Bedrock has announced the general availability of Intelligent Prompt Routing, a serverless endpoint that efficiently routes requests between different foundation models within the same model family. The system dynamically predicts the response quality of each model for a given request and routes it to the model it determines is most appropriate based on cost and response quality. It incorporates state-of-the-art methods for training routers for different sets of models, tasks, and prompts. Users can rely on the default prompt routers provided by Amazon Bedrock or configure their own, tuning the trade-off linearly between the performance of two candidate LLMs. Amazon has reduced the overhead of the added routing components by over 20%, to approximately 85 ms (P90), yielding an overall latency and cost benefit compared with always invoking the larger, more expensive model. Amazon Bedrock has conducted internal tests with proprietary and public data to evaluate the system’s performance metrics.
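In practice a router is invoked like any other Bedrock model: the application passes the router’s ARN as the model ID to the Converse API, and the response trace indicates which underlying model was actually selected. A minimal sketch with boto3 follows (the region, account ID, and router ARN are placeholders):

```python
# Minimal sketch of calling a Bedrock prompt router with boto3; the region,
# account ID, and router ARN below are placeholders.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

router_arn = "arn:aws:bedrock:us-east-1:111122223333:default-prompt-router/anthropic.claude:1"

response = client.converse(
    modelId=router_arn,  # the router ARN stands in for a model ID
    messages=[{"role": "user", "content": [{"text": "Summarize this support ticket ..."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
# The trace reports which underlying model the router actually selected.
print(response.get("trace", {}).get("promptRouter", {}).get("invokedModelId"))
```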
Codacy’s solution integrates directly with AI coding assistants to enforce coding standards via its MCP server, flagging or fixing issues in real time
Codacy, provider of automated code quality and security solutions, launched Codacy Guardrails, a new product designed to bring real-time security, compliance, and quality enforcement to AI-generated code. Guardrails is the first solution of its kind: it integrates directly with AI coding assistants to enforce coding standards, checking AI-generated code before it ever reaches the developer and preventing non-compliant code from being generated in the first place. Built on Codacy’s SOC2-compliant platform, Codacy Guardrails empowers teams to define their own secure development policies and apply them across every AI-generated prompt. With Codacy Guardrails, AI-assisted tools gain full access to the security and quality context of a team’s codebase. At the core of the product is the Codacy MCP server, which connects development environments to the organization’s code standards. This gives LLMs the ability to reason about policies, flag or fix issues in real time, and deliver code that’s compliant by default. Guardrails integrates with popular IDEs like Cursor AI and Windsurf, as well as VS Code and IntelliJ through Codacy’s plugin, allowing developers to apply guardrails directly within their existing workflows.
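The MCP server at the core of Guardrails is what lets an assistant consult the organization’s standards as a tool call. As an illustration of that pattern only (not Codacy’s implementation), a minimal MCP server exposing a toy policy check could look like this with the MCP Python SDK:

```python
# Illustration of the pattern only (not Codacy's implementation): an MCP
# server exposing a code-standards check that a coding assistant can call.
# The lint rules here are toys.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("code-standards")

@mcp.tool()
def check_code(code: str) -> list[str]:
    """Return policy violations found in a proposed code snippet."""
    violations = []
    if "eval(" in code:
        violations.append("S001: eval() is banned by the security policy")
    if "password" in code.lower() and "=" in code:
        violations.append("S002: possible hardcoded credential")
    return violations

if __name__ == "__main__":
    mcp.run()  # serves over stdio; the IDE or assistant connects as an MCP client
```

An IDE assistant connected to such a server can call check_code on each generated snippet and repair violations before the code ever lands in the editor.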
Docker simplifies AI software delivery by containerizing MCP servers, offering an enterprise-ready toolkit and a centralized platform to discover and manage them from a catalog of 100+ servers
Software containerization company Docker is launching the Docker MCP Catalog and Docker MCP Toolkit, which bring more of the AI workflow into the existing Docker developer experience and simplify AI software delivery. The new offerings are based on the emerging Model Context Protocol standard created by its partner Anthropic PBC. Docker argues that the simplest way to use Anthropic’s MCP to improve LLMs is to containerize it. To that end, it offers tools such as Docker Desktop for building, testing and running MCP servers, Docker Hub for distributing their container images, and Docker Scout for ensuring they’re secure. By packaging MCP servers as containers, developers can eliminate the hassles of installing dependencies and configuring their runtime environments. The Docker MCP Catalog, integrated within Docker Hub, is a centralized way for developers to discover, run and manage MCP servers, while the Docker MCP Toolkit offers “enterprise-ready tooling” for putting AI applications to work. At launch, more than 100 MCP servers are available within the Docker MCP Catalog. President and Chief Operating Officer Mark Cavage said, “The Docker MCP Catalog brings that all together in one place, a trusted, developer-friendly experience within Docker Hub, where tools are verified, secure, and easy to run.”
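From the client side, the appeal of containerized MCP servers is that “installing” one reduces to running an image. A minimal sketch using the MCP Python SDK, with a placeholder image name:

```python
# Sketch: an MCP client launching a containerized server over stdio, so the
# only local dependency is Docker itself. The image name is a placeholder.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="docker",
    args=["run", "-i", "--rm", "example/mcp-server:latest"],  # placeholder image
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()           # MCP handshake
            tools = await session.list_tools()   # discover the server's tools
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```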