Business automation platform company UiPath aims to simplify the process of building autonomous agents for the enterprise. It supports multiple frameworks, including one from LlamaIndex Inc., which offers a toolkit for developing AI agents based on contextual data. UiPath and LlamaIndex’s work is aimed at creating a natural language interface that acts as a constrained but high-accuracy layer for generating different types of automation. LlamaIndex, which started as an open-source framework for connecting AI models with private data, has evolved into a connector of the enterprise data ecosystem. UiPath also unveiled a series of enhancements to UiPath Maestro, a development and orchestration suite for high-volume business processes such as claims, disputes and loans, and announced the launch of Maestro Process Apps, which helps users analyze their workflows and improve efficiency. “People think orchestration is just stitching together different systems and calling it a day,” said Taqi Jaffri, senior director of product management at UiPath. “It’s not. We use a lot of underlying tech to make all that seamless to the developer and that, to me, is the heart of orchestration. We want to make it simple to the developer.”
Eigen Labs launches EigenCloud, a blockchain-based platform providing verifiable AI infrastructure that ensures tamper-proof inference and secure compute for developers
Eigen Labs Inc., the developer of the verifiable cloud platform EigenCloud, announced the launch of a verifiable computing layer for artificial intelligence using blockchain technology. The company released two solutions, EigenAI and EigenCompute, designed to let developers run AI models with the same level of security and transparency as blockchain smart contracts. Using the new solutions, AI developers can verify that prompts, models and answers were not tampered with, adding trust to AI calls. EigenAI enables developers to create verifiable applications using LLM inference, ensuring consistent results across different runs of the same LLM call. The solution offers a deterministic, verifiable application programming interface that is compatible with the OpenAI API and supports open-source LLMs and tool-calling. Eigen Labs said its method of verifying LLM inference relies on a technical breakthrough the company achieved in making inference deterministic, a capability long thought impractical for AI models because of their inherent randomness. The approach is similar to work by Thinking Machines Lab; Eigen Labs says it will publish details of the method and open-source the code soon. EigenCompute provides a verifiable compute service for developers to run complex, long-running agent logic outside of a blockchain while maintaining the integrity and security guarantees of smart contracts.
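To make the determinism claim concrete, here is a minimal sketch of what a verifiable, OpenAI-compatible inference call could look like. The base URL, model name, and hash-based commitment are illustrative assumptions, not EigenAI's documented API.

```python
# Hypothetical sketch: the endpoint, model name, and commitment scheme are
# assumptions for illustration, not EigenAI's documented interface.
import hashlib

from openai import OpenAI

# Any OpenAI-compatible client can point at a different base URL.
client = OpenAI(base_url="https://api.eigenai.example/v1", api_key="YOUR_KEY")


def verified_completion(prompt: str, model: str = "llama-3.1-8b-instruct") -> tuple[str, str]:
    """Run an inference call and commit to its inputs and output."""
    resp = client.chat.completions.create(
        model=model,  # assumed open-source model identifier
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # greedy decoding; determinism is the premise of EigenAI's claim
    )
    text = resp.choices[0].message.content
    # Commit to (model, prompt, output). Because the call is deterministic,
    # a verifier can replay it and check that the digest matches.
    digest = hashlib.sha256(f"{model}|{prompt}|{text}".encode()).hexdigest()
    return text, digest
```

Because the same call always yields the same output, any third party can re-run it and compare digests, which is the tamper-evidence property described above.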
Google integrates Jules coding agent with command line and API; enabling developers to embed AI coding assistance directly into IDEs and workflows for greater control
Google wants its coding assistant, Jules, to be far more integrated into developers’ terminals than ever. The company wants to make it a more workflow-native tool, hoping that more people will use it beyond the chat interface. Jules is gaining two new features: a Jules API to facilitate integration with IDEs, and a Jules Tools CLI that lets the agent be invoked directly from the command line. Through the Jules CLI and API, Google said, enterprises will get “more control and flexibility by where and how you can use Jules.” Developers can install Jules Tools via npm, and the installer then prints a guide on how to use it. In the CLI, an engineer can issue a command to prompt Jules to do a task and pass flags to customize it; for example, running jules --theme light switches the interface to light mode. On the API side, enterprises can connect the Jules API to other platforms they use. They can connect it to Slack, for example, so that team members can trigger tasks directly from Slack when a bug is reported there, feeding the work into their CI/CD pipeline. Google also added other updates to help reduce latency and fix some environment and file system issues. These include a file selector to call out specific files in chat for added context; Memory, which gives Jules the ability to remember preferences going forward; and environment variable management, which gives Jules access to those variables while executing a task.
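As a rough illustration of the Slack-to-Jules flow described above, the sketch below files a bug report as a Jules task over the API. The endpoint path, auth header, and payload fields are assumptions for illustration; Google's actual API surface may differ.

```python
# Hypothetical sketch of triggering a Jules task from a Slack bug report.
# The endpoint, header, and payload fields below are assumptions, not
# Google's documented API.
import os

import requests

JULES_SESSIONS_URL = "https://jules.googleapis.com/v1alpha/sessions"  # assumed endpoint


def file_bug_as_jules_task(slack_message: str, repo: str) -> str:
    """Create a Jules task from a Slack message and return its resource name."""
    resp = requests.post(
        JULES_SESSIONS_URL,
        headers={"X-Goog-Api-Key": os.environ["JULES_API_KEY"]},  # assumed auth scheme
        json={
            "prompt": f"Investigate and fix this bug reported in Slack: {slack_message}",
            "sourceContext": {"source": repo},  # assumed field naming
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["name"]  # assumed: a resource name the pipeline can poll
```

A Slack bot could call a helper like this from its message handler and post the returned task reference back to the channel so the team can track progress.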
DataSnipper and Microsoft launch AI agents for audit and finance; linking outputs to evidence, automating disclosure checks and Excel tasks with compliant and scalable outputs
DataSnipper, the intelligent automation platform for audit and finance teams, announced the launch of its AI Agents, Disclosure Agents and Excel Agents, which it describes as a breakthrough for the industry. Disclosure Agents accelerate reviews by: analyzing disclosure checklists against financial statements across IFRS, GAAP, and other global standards, cutting hours of manual work down to minutes; cross-checking requirements against firm-specific checklists, internal policies, or imported templates, so reviews reflect both global rules and local practices; and linking every requirement to transparent, verifiable evidence, creating a clear audit trail and freeing capacity for higher-value work. Excel Agents deliver by: automating testing workflows end to end, matching sample data to documents, extracting key fields, and comparing results to expectations in a fraction of the time; and working directly in Excel with prompt-driven automation, with no complex setup, templates, or retraining required.
Salesforce’s coding assistant uses advanced LLMs for code generation and bug fixes within Salesforce Sandboxes; supported by CrowdStrike, ensuring secure and compliant deployments
Salesforce Inc. has introduced a collection of new tools designed to support customers’ AI initiatives. The first, Context Indexing, helps AI agents interpret unstructured data such as contracts and schematics: users upload an unstructured file and have an AI agent generate a detailed natural language explanation of its contents. Alongside this, Salesforce introduced AI-powered cybersecurity enhancements that integrate with CrowdStrike and Okta to help detect and prevent cyberattacks. These updates coincide with the general availability of Data Cloud Clean Rooms, which enables secure data sharing without duplicating sensitive information into additional copies that could be misplaced. Another feature, the Customer 360 Semantic Data Model, helps companies ensure that the metrics they use to measure their go-to-market activities are consistent. For developers, Salesforce unveiled Agentforce Vibes, a coding assistant that uses large language models such as GPT-5 and xGen to automate programming tasks in environments such as VS Code. It supports multiple languages, including Apex, HTML, and CSS, and works within Salesforce Sandboxes to detect issues such as bugs and performance bottlenecks before deployment. The Trust Layer feature further safeguards applications by filtering out harmful outputs, such as sensitive personal data.
Amazon SageMaker HyperPod’s observability solution offers a comprehensive dashboard that provides insights into foundation model (FM) development tasks and cluster resources by consolidating health and performance data from various sources
Amazon SageMaker HyperPod offers a comprehensive dashboard that provides insights into foundation model (FM) development tasks and cluster resources. This unified observability solution automatically publishes key metrics to Amazon Managed Service for Prometheus and visualizes them in Amazon Managed Grafana dashboards. The dashboard consolidates health and performance data from various sources, including NVIDIA DCGM, instance-level Kubernetes node exporters, Elastic Fabric Adapter (EFA), integrated file systems, Kubernetes APIs, Kueue, and SageMaker HyperPod task operators. The solution also abstracts the management of collector agents and scrapers across clusters, automatically scaling collectors across nodes as the cluster grows. The dashboards feature intuitive navigation across metrics and visualizations, helping users diagnose problems and take action faster. These capabilities save teams valuable time and resources during FM development, helping accelerate time-to-market and reduce the cost of generative AI innovation. To enable SageMaker HyperPod observability, users need to enable AWS IAM Identity Center and create a user in it.
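Because the metrics land in a standard Prometheus-compatible workspace, they can also be queried programmatically. The sketch below pulls the NVIDIA DCGM GPU-utilization metric from Amazon Managed Service for Prometheus using a SigV4-signed request; the workspace URL and region are placeholders, and the code is illustrative rather than an official AWS sample.

```python
# Illustrative sketch: query GPU utilization that HyperPod observability publishes
# to Amazon Managed Service for Prometheus. The workspace ID and region below are
# placeholders; DCGM_FI_DEV_GPU_UTIL is the standard NVIDIA DCGM exporter metric.
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

REGION = "us-east-1"  # placeholder
QUERY_URL = (
    "https://aps-workspaces.us-east-1.amazonaws.com"
    "/workspaces/ws-EXAMPLE/api/v1/query"  # placeholder workspace ID
)


def query_gpu_utilization(promql: str = "avg(DCGM_FI_DEV_GPU_UTIL)") -> dict:
    """Run a PromQL query against the managed Prometheus workspace."""
    creds = Session().get_credentials()
    request = AWSRequest(method="GET", url=QUERY_URL, params={"query": promql})
    SigV4Auth(creds, "aps", REGION).add_auth(request)  # "aps" = Managed Prometheus
    prepared = request.prepare()
    resp = requests.get(prepared.url, headers=dict(prepared.headers), timeout=10)
    resp.raise_for_status()
    return resp.json()  # standard Prometheus HTTP API response


if __name__ == "__main__":
    print(query_gpu_utilization())
```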
Amazon Web Services is launching a dedicated AI agent marketplace to enable startups to directly offer their AI agents to AWS customers while also letting enterprises browse and install AI agents based on their requirements from a central location
Amazon Web Services (AWS) is launching an AI agent marketplace next week at the AWS Summit in New York City on July 15, and Anthropic is one of its partners. The distribution of AI agents poses a challenge, as most companies offer them in silos; AWS appears to be taking a step to address this with its new move. The company’s dedicated agent marketplace will allow startups to offer their AI agents directly to AWS customers. It will also allow enterprise customers to browse, search for, and install AI agents based on their requirements from a single location. That could give Anthropic, and other AWS agent marketplace partners, a considerable boost: AWS’ marketplace would help Anthropic reach more customers, including those who may already use AI agents from rivals such as OpenAI. Anthropic’s involvement in the marketplace could also attract more developers to build agents on its API, eventually increasing its revenue. The marketplace model will allow startups to charge customers for agents, with a structure similar to how a marketplace might price SaaS offerings rather than bundling them into broader services.
Docker’s new capabilities enable developers to define agents, models, and tools as services in a single Compose file and share and deploy agentic stacks across environments without rewriting infrastructure code
Docker announced major new capabilities that make it dramatically easier for developers to build, run, and scale intelligent, agentic applications. Docker is extending Compose into the agent era, enabling developers to define intelligent agent architectures consisting of models and tools in the same simple YAML files they already use for microservices and take those agents to production (a minimal Compose sketch follows this paragraph). With the new Compose capabilities, developers can: define agents, models, and tools as services in a single Compose file; run agentic workloads locally or deploy seamlessly to cloud services like Google Cloud Run or Azure Container Apps; integrate with Docker’s open-source Model Context Protocol (MCP) Gateway for secure tool discovery and communication; and share, version, and deploy agentic stacks across environments without rewriting infrastructure code. Docker also unveiled Docker Offload (Beta), a new capability that enables developers to offload AI and GPU-intensive workloads to the cloud without disrupting their existing workflows. With Docker Offload, developers can: maintain local development speed while accessing cloud-scale compute and GPUs; run large models and multi-agent systems in high-performance cloud environments; choose where and when to offload workloads for privacy, cost, and performance optimization; and keep data and workloads within specific regions to meet sovereignty requirements and ensure data does not leave designated zones.
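Here is the rough shape such a Compose file could take; the service names, images, model reference, and gateway flags are illustrative assumptions rather than a drop-in configuration.

```yaml
# Illustrative compose.yaml: names, images, and flags are assumptions,
# not an official Docker example.
services:
  agent:
    build: .                         # the agent's application code
    models:
      - llm                          # bind the model defined below into this service
    depends_on:
      - mcp-gateway
  mcp-gateway:
    image: docker/mcp-gateway        # assumed image name for Docker's open-source MCP Gateway
    command: ["--transport", "sse"]  # assumed flag

models:
  llm:
    model: ai/llama3.2               # assumed model reference from Docker Hub's ai/ namespace
```

A single docker compose up would then start the agent, the model, and the gateway together locally, and the same file is what gets deployed to Cloud Run or Azure Container Apps.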
NinjaTech AI’s general-purpose AI agent handles entire workflows from start to finish autonomously and allows users to complete complex tasks such as coding and testing an entire application 3-5x faster than GPU-based solutions
NinjaTech AI, an agentic AI company, announced Super Agent, a revolutionary all-in-one general-purpose AI agent with a dedicated virtual machine that plans, iterates, and executes entire workflows from start to finish in minutes. What sets Super Agent apart is its ability to handle entire workflows from start to finish. Unlike conventional AI tools limited by token caps or requiring constant hand-holding, Super Agent operates on its own dedicated computer in the same way humans do: running extensive data analysis, coding and validating full applications, conducting comprehensive research, building websites, and delivering high-quality results in the user’s preferred format. Each user gets their own isolated VM, ensuring complete data privacy and security. This enables Super Agent to download tools, write and execute code, create applications, analyze data, and build websites or dashboards autonomously, all within a secure environment that is not shared with other users. Coming soon, Super Agent will also include a virtual smartphone capability, allowing it to interact with mobile applications on the user’s behalf. Central to Super Agent’s capabilities is NinjaTech AI’s strategic partnership with Cerebras Systems, a pioneer in fast inference. The collaboration uses Cerebras’ wafer-scale architecture, allowing users to complete complex tasks such as coding and testing an entire application 3-5x faster than on GPU-based solutions.
KPMG survey finds AI agents are moving into production with 33% of organizations now deploying AI agents, a 3X increase from just 11% in the previous two quarters
Companies like Intuit, Capital One, LinkedIn, Stanford University and Highmark Health are quietly putting AI agents into production, tackling concrete problems, and seeing tangible returns. Here are the four biggest takeaways:
1) AI agents are moving into production, faster than anyone realized. A KPMG survey released on June 26, a day after our event, shows that 33% of organizations are now deploying AI agents, a surprising threefold increase from just 11% in the previous two quarters. Intuit, for instance, has deployed invoice generation and reminder agents in its QuickBooks software; businesses using the feature are getting paid five days faster and are 10% more likely to be paid in full. Even non-developers are feeling the shift, building production-ready software features with the power of tools like Claude Code.
2) The hyperscaler race has no clear winner as multi-cloud, multi-model reigns. Enterprises want the flexibility to choose the best tool for the job, whether it’s a powerful proprietary model or a fine-tuned open-source alternative. This trend is creating a powerful but constrained ecosystem, where GPUs and the power needed to generate tokens are in limited supply.
3) Enterprises are focused on solving real problems, not chasing AGI. Highmark Health Chief Data Officer Richard Clarke said the company is using LLMs for practical applications like multilingual communication to better serve its diverse customer base, and for streamlining medical claims. Similarly, Capital One is building teams of agents that mirror the functions of the company, with specific agents for tasks like risk evaluation and auditing, including helping its car dealership clients connect customers with the right loans.
4) The future of AI teams is small, nimble, and empowered. A small team structure allows for rapid testing of product hypotheses and avoids the slowdown that plagues larger groups. As GitHub and Atlassian noted, engineers are now learning to manage fleets of agents, and the skills required are evolving, with a greater emphasis on clear communication and strategic thinking to guide these autonomous systems. This nimbleness is supported by a growing acceptance of sandboxed development: the idea is to foster rapid innovation within a controlled environment to prove value quickly.
