Rackspace Technology announced a strategic alliance with enterprise AI agent innovator Sema4.ai (“Sema4”). The collaboration integrates Foundry for AI by Rackspace (FAIR™) services and Rackspace’s application management expertise with Sema4.ai’s ‘SAFE’ AI Agent Platform, combining the two companies’ strengths in artificial intelligence, cloud, and systems integration to accelerate the adoption of secure, enterprise-grade AI solutions. The partnership will enable the rapid deployment of scalable, production-ready AI agents across enterprise functions, with robust governance, transparency, and security protocols as core foundational elements. Through the collaboration, businesses can design and deploy custom AI agents tailored to specific use cases, with seamless integrations across key functions such as HR, finance, customer support, sales, and operations. Customers will also gain access to full AI lifecycle management, including built-in observability and a centralized control plane, enabling scalable, secure, and efficient deployment. These agents can operate independently or collaboratively, offering shared functionalities such as natural language understanding, workflow automation, and advanced document processing; help streamline and enhance the automation of high-value tasks; and automate existing business processes by interpreting standard operating procedures (SOPs) and runbooks, eliminating the need for complex prompt engineering.
Fujitsu unveils AI-powered presentation technology, enabling automated multilingual and customizable presentations and providing answers to audience questions based on pre-integrated materials
Fujitsu announced the development of a new technology that enables AI avatars to carry out presentations and handle audience questions. The technology, a core component of Fujitsu’s AI service Fujitsu Kozuchi, automatically generates and delivers presentations from Microsoft PowerPoint presentation data and provides answers to audience questions based on materials pre-integrated into a retrieval-augmented generation (RAG) process. Fujitsu will utilize the technology within the company from the second quarter of FY 2025 and begin providing it to customers around the world from the third quarter. Users will be able to create AI avatars using their own likeness and voice and have them generate presentations automatically in over 30 languages, making it possible for anybody to utilize the technology without requiring specialist knowledge. Going forward, Fujitsu AI Auto Presentation will also be available directly via Microsoft Teams and PowerPoint. By democratizing the presentation process and allowing anybody to deliver presentations irrespective of time constraints, language level, presentation aptitude, and other factors, Fujitsu will empower organizations to share accurate, high-quality information and improve operational efficiency, thereby contributing to the development of a digital society, a core element of its materiality. Other Fujitsu AI Auto Presentation features include autonomous slide transition with time allocation (international patent pending) and customizable presentation content generation.
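As a rough illustration of the retrieval step such a RAG process relies on, the sketch below ranks pre-integrated materials against an audience question and assembles grounded context for an answer. It is a minimal, self-contained approximation: the document names, the scoring scheme, and the answer_question() flow are invented for illustration and are not Fujitsu's implementation.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) loop for audience Q&A.
# Not Fujitsu's implementation; the documents, scoring, and answer flow are illustrative.
from collections import Counter
import math

def _vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, documents: dict[str, str], k: int = 2) -> list[str]:
    """Rank pre-integrated materials by similarity to the audience question."""
    q = _vectorize(question)
    ranked = sorted(documents, key=lambda name: _cosine(q, _vectorize(documents[name])), reverse=True)
    return ranked[:k]

def answer_question(question: str, documents: dict[str, str]) -> str:
    """Ground the answer in the top-ranked materials; a production system would
    pass this context to a language model instead of returning it verbatim."""
    context = "\n".join(documents[name] for name in retrieve(question, documents))
    return f"Based on the integrated materials:\n{context}"

materials = {
    "pricing.md": "The standard plan is billed annually and includes support.",
    "roadmap.md": "The third-quarter release adds multilingual avatar presentations.",
}
print(answer_question("When do multilingual presentations ship?", materials))
```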
Salesforce’s new Agentforce 3 platform allows teams to analyze every agent interaction, drill into specific moments and understand trends, and offers plug-and-play compatibility with other agents through built-in MCP support
Salesforce launched Agentforce 3, a major upgrade to its flagship artificial intelligence product for enterprises, adding new ways to observe and control AI agents on the platform. The Agentforce platform gives companies the ability to build, customize and deploy generative AI agents, which augment the work of employees autonomously; they are goal-oriented pieces of software capable of completing tasks with little or no human supervision. Using the platform, employees across sales, service, marketing and commerce can customize AI “workers” to take action on their behalf using business logic and prebuilt automations. A new Command Center provides complete observability, and built-in support for the Model Context Protocol enables plug-and-play compatibility with other agents and services. The Agentforce Command Center unifies agent health, performance and outcome optimization. Built into Agentforce Studio, the command center allows teams to analyze every agent interaction, drill into specific moments and understand trends. It will also display AI-powered recommendations for tagged conversation types to improve Agentforce agents continuously. The command center will act as a single place to understand AI agents, adapting contextually to the type of agent being displayed. Users will be able to use natural language to generate topics, instructions and test cases right in Studio. Testing Center simulates AI agent behavior at scale with data state injection and AI-driven evaluation, allowing users to stress-test agents before going live.
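The Testing Center description maps onto a familiar pattern: run an agent against simulated inputs with injected data state and score the outcomes. The sketch below illustrates only that general pattern; the TestCase fields and the fake_agent() and evaluate() helpers are hypothetical and are not Salesforce APIs.

```python
# Illustrative sketch of "simulate and evaluate agent behavior at scale".
# The classes and data-state injection shown here are hypothetical, not Salesforce APIs.
from dataclasses import dataclass

@dataclass
class TestCase:
    utterance: str          # simulated customer input
    injected_state: dict    # data state injected before the turn
    expected_topic: str     # topic the agent should route to

def fake_agent(utterance: str, state: dict) -> str:
    """Stand-in for a deployed agent; returns the topic it would select."""
    return "order_status" if "order" in utterance.lower() else "general"

def evaluate(cases: list[TestCase]) -> float:
    """Fraction of simulated turns where the agent chose the expected topic."""
    passed = sum(fake_agent(c.utterance, c.injected_state) == c.expected_topic for c in cases)
    return passed / len(cases)

cases = [
    TestCase("Where is my order #123?", {"orders": [{"id": 123, "status": "shipped"}]}, "order_status"),
    TestCase("What are your store hours?", {}, "general"),
]
print(f"pass rate: {evaluate(cases):.0%}")
```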
Boomi’s AI solution offers low-code integration tools, a visual design interface and scalable agent orchestration to enable secure and adaptive deployment of agents across hybrid environments
By bridging the gap between cloud agility and on-premises control, Boomi Agentstudio, a secure AI management solution, is changing how organizations handle data integration and automation across hybrid environments. With Agentstudio, developers gain access to Boomi’s renowned low-code integration tools, a visual design interface and scalable agent orchestration, making hybrid deployments easier and more adaptive, according to Mani Gill, vice president of product management, AI and data, at Boomi LP. As part of Boomi Agentstudio, the Boomi Agent Control Tower serves as a centralized dashboard for monitoring, scaling and orchestrating agent activity across distributed environments. Together, Agentstudio and the Control Tower enable a streamlined hybrid integration strategy, delivering both development agility and deployment robustness, according to Gill. Boomi Agentstudio takes a platform-based approach by centralizing the design, deployment and management of integration agents across hybrid environments. This approach is enabled by the Boomi Enterprise Platform, which provides a comprehensive suite of tools for integration, API management, data quality and workflow automation, according to Gill.
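A control tower of this kind generally rests on a simple pattern: agents running in different environments report heartbeats to a central registry, which flags anything stale. The sketch below shows that pattern in generic form; the AgentRegistry class and its fields are illustrative assumptions, not Boomi's Agent Control Tower API.

```python
# Generic centralized-monitoring pattern: agents heartbeat to one registry.
# Not Boomi's Agent Control Tower; names and thresholds are illustrative.
import time

class AgentRegistry:
    def __init__(self, stale_after_s: float = 30.0):
        self.stale_after_s = stale_after_s
        self._last_seen: dict[str, tuple[str, float]] = {}  # agent_id -> (environment, timestamp)

    def heartbeat(self, agent_id: str, environment: str) -> None:
        self._last_seen[agent_id] = (environment, time.time())

    def status(self) -> dict[str, str]:
        now = time.time()
        return {
            agent_id: f"{env}: {'healthy' if now - ts < self.stale_after_s else 'stale'}"
            for agent_id, (env, ts) in self._last_seen.items()
        }

registry = AgentRegistry()
registry.heartbeat("invoice-sync", environment="on-prem")
registry.heartbeat("crm-enrichment", environment="cloud")
print(registry.status())
```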
OpenAI ramps up office productivity features: ChatGPT can now record and transcribe any meeting, brainstorming session or voice note, pull out key points and turn them into follow-ups, plans and code, orchestrating rather than just automating tasks
OpenAI is busy rolling out a suite of office productivity features for ChatGPT that puts it in direct competition with its main investor and partner, Microsoft, and key rival, Google. Since early June, OpenAI has buffed up ChatGPT to do office work: Record Mode records and transcribes any meeting, brainstorming session or voice note, and ChatGPT will pull out key points and turn them into follow-ups, plans and code; enhanced Projects now offer deep research, voice, improved memory, file-uploading capability and model selection; Advanced Voice offers live translation and smoother interaction; Connectors let ChatGPT pull data from Microsoft Outlook, Microsoft Teams, Microsoft OneDrive, Microsoft SharePoint, Google Drive, Gmail, Google Calendar, Dropbox and more; and the updated Canvas, ChatGPT’s side-by-side editing capability, can now export documents in PDF, DOCX or Markdown formats. AI-native workflows are the future. Read.ai, Otter.ai and Microsoft Copilot are “now in ChatGPT’s competitive crosshairs. The difference? ChatGPT isn’t just automating tasks; it’s orchestrating them, end-to-end, with context and language-level intelligence.” We’re seeing the beginning of the ‘invisible app era,’ where productivity doesn’t live in documents; it lives in dynamic, AI-mediated interactions.
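For developers who want a comparable record-and-summarize flow outside the ChatGPT apps, the same idea can be approximated at the API level. The sketch below assumes the official openai Python SDK, an OPENAI_API_KEY in the environment and a local audio file; the model names are reasonable defaults rather than anything OpenAI prescribes, and this is an approximation, not the Record Mode feature itself.

```python
# API-level approximation of a record-and-summarize flow (not the ChatGPT Record Mode).
# Assumes: openai Python SDK installed, OPENAI_API_KEY set, a local audio file on disk.
from openai import OpenAI

client = OpenAI()

def meeting_to_followups(audio_path: str) -> str:
    # 1) Transcribe the recording.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
    # 2) Turn the transcript into key points and a follow-up action list.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Extract key points and a follow-up action list."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

# Example (requires a real recording):
# print(meeting_to_followups("standup.m4a"))
```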
Kognitos platform combines the reasoning of symbolic logic with AI to transform tribal and system knowledge into documented, automated processes, shrinking the automation lifecycle while ensuring hallucination-free output and full governance
Kognitos launched its groundbreaking neurosymbolic AI platform, the industry’s first to combine the reasoning of symbolic logic with the learning power of modern AI. This unified platform empowers enterprises to address hundreds of business automation use cases, consolidate their AI tools and reduce technology sprawl. Kognitos transforms tribal and system knowledge into documented, automated processes, establishing a new, dynamic system of record for business operations. Using English as code, businesses can achieve automation in minutes with pre-configured workflows and a free community edition. “With Kognitos, we’re automating processes we thought were out of reach, thanks to hallucination-free AI and natural language capabilities,” said Christina Jalaly at customer Boost Mobile. “The agility and speed to value are game-changing, consistently delivering roughly 23x ROI and tangible results. Kognitos is a key partner in transforming our operations.” Kognitos also addresses complex “long tail” automation challenges. Its patented Process Refinement Engine keeps documented automation current and optimized using AI. This shrinks the automation lifecycle, where testing, deployment, monitoring and changes are all English-based and AI-accelerated. Key innovations launched today include the Kognitos Platform Community Edition, hundreds of pre-built workflows, built-in document and Excel processing, automatic agent regression testing, and browser use.
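The neurosymbolic split can be pictured as a closed symbolic vocabulary that natural-language steps must resolve to before anything executes, which is what keeps hallucinated actions out of the execution layer. The sketch below is a generic illustration of that idea, with an invented verb table and SOP; it is not Kognitos's Process Refinement Engine or its English-as-code language.

```python
# Generic neurosymbolic illustration: English steps map onto a closed set of
# symbolic operations, and anything outside that set is rejected rather than guessed.
# The verbs, SOP, and data are invented for illustration; not Kognitos's engine.
def fetch_invoices(ctx):  ctx["invoices"] = [120, 80, 300]
def drop_small(ctx):      ctx["invoices"] = [v for v in ctx["invoices"] if v >= 100]
def email_total(ctx):     print(f"emailing total: {sum(ctx['invoices'])}")

SYMBOLIC_VERBS = {           # closed set of allowed operations
    "get the invoices": fetch_invoices,
    "remove invoices under 100": drop_small,
    "email the total": email_total,
}

def run_sop(sop_text: str) -> None:
    context: dict = {}
    for line in sop_text.strip().splitlines():
        step = line.strip().lower().rstrip(".")
        if step not in SYMBOLIC_VERBS:
            raise ValueError(f"unknown step, refusing to guess: {step!r}")
        SYMBOLIC_VERBS[step](context)

run_sop("""
Get the invoices.
Remove invoices under 100.
Email the total.
""")
```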
Gemini’s new foundation model runs locally on bi-arm robotic devices without accessing a data network, and enables rapid experimentation with dexterous manipulation and adaptability to new tasks through fine-tuning
Google DeepMind introduced a vision language action (VLA) model that runs locally on robotic devices, without accessing a data network. The new Gemini Robotics On-Device robotics foundation model features general-purpose dexterity and fast task adaptation. “Since the model operates independent of a data network, it’s helpful for latency sensitive applications and ensures robustness in environments with intermittent or zero connectivity,” Google DeepMind Senior Director and Head of Robotics Carolina Parada said. Building on the task generalization and dexterity capabilities of Gemini Robotics, which was introduced in March, Gemini Robotics On-Device is meant for bi-arm robots and is designed to enable rapid experimentation with dexterous manipulation and adaptability to new tasks through fine-tuning. The model follows natural language instructions and is dexterous enough to perform tasks like unzipping bags, folding clothes, zipping a lunchbox, drawing a card, pouring salad dressing and assembling products. It is also Google DeepMind’s first VLA model that is available for fine-tuning. “While many tasks will work out of the box, developers can also choose to adapt the model to achieve better performance for their applications,” Parada said in the post. “Our model quickly adapts to new tasks, with as few as 50 to 100 demonstrations — indicating how well this on-device model can generalize its foundational knowledge to new tasks.”
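Adapting a policy from 50 to 100 demonstrations is, in generic terms, a small behavior-cloning fine-tune: pairs of observations and target actions supervise a policy head. The sketch below shows that recipe in plain PyTorch with made-up tensor shapes; it is not the Gemini Robotics SDK or Google DeepMind's actual training procedure.

```python
# Generic behavior-cloning sketch of "adapt with 50-100 demonstrations".
# Plain PyTorch with invented shapes; not the Gemini Robotics SDK or training recipe.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

obs = torch.randn(80, 512)       # 80 demonstrations, pooled vision-language features
actions = torch.randn(80, 14)    # bi-arm action targets (e.g., 7 DoF per arm)
loader = DataLoader(TensorDataset(obs, actions), batch_size=16, shuffle=True)

policy_head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 14))
optimizer = torch.optim.AdamW(policy_head.parameters(), lr=1e-4)

for epoch in range(10):
    for features, target in loader:
        loss = nn.functional.mse_loss(policy_head(features), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```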
“Vibe coding” startup Pythagora enables anyone including noncoders to develop full-stack applications with a series of prompts by unifying both front and back-end development with comprehensive debugging features into a single platform
“Vibe coding” startup Pythagora is looking to take artificial intelligence-powered software development to the next level with the launch of its platform today, saying it will help anyone – including noncoders – to develop full-stack applications with nothing more than a series of prompts. The company says its platform is built for both developers and nontechnical users, and unlike similar generative AI coding tools, unifies both front- and back-end development with comprehensive debugging features to bring the entire app creation experience into a single platform. Pythagora can be thought of as an “AI teammate” that lives inside software development tools such as VS Code and Cursor. It consists of a team of 14 specialized AI agents that can automate various coding-related tasks without supervision, taking care of everything from planning and writing code to testing, debugging and deployment. Pythagora essentially supercharges vibe coding, entirely eliminating the need to actually code. The tool is designed to be less like a coding assistant and more like a co-developer. What that means is it does more than just create the code – it also explains why the code is written as it is, and can walk users through any changes it has made. But users can still intervene and edit the code as they see fit, if they decide it’s necessary to do so.
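The "team of specialized agents" idea can be sketched as a pipeline of roles that each transform a shared task state until a check passes. The snippet below is schematic only; the role names, TaskState fields and stopping rule are illustrative and do not reflect Pythagora's internal architecture.

```python
# Schematic multi-agent pipeline: planner -> coder -> tester over shared state.
# Roles and fields are illustrative, not Pythagora's actual agents.
from dataclasses import dataclass

@dataclass
class TaskState:
    prompt: str
    plan: str = ""
    code: str = ""
    test_report: str = ""
    done: bool = False

def planner(state: TaskState) -> None:
    state.plan = f"1. Scaffold app for: {state.prompt}\n2. Implement endpoints\n3. Write tests"

def coder(state: TaskState) -> None:
    state.code = "# generated application code would go here"

def tester(state: TaskState) -> None:
    state.test_report = "all tests passed"
    state.done = "passed" in state.test_report

PIPELINE = [planner, coder, tester]

def run(prompt: str, max_rounds: int = 3) -> TaskState:
    state = TaskState(prompt=prompt)
    for _ in range(max_rounds):       # re-run the pipeline until tests pass or rounds run out
        for agent in PIPELINE:
            agent(state)
        if state.done:
            break
    return state

print(run("todo-list app with user accounts").plan)
```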
Google announced its open-source Gemini CLI, which brings natural language command execution directly to developer terminals and offers an extensibility architecture built around the emerging MCP standard
Google announced its open-source Gemini CLI, which brings natural language command execution directly to developer terminals. Beyond natural language, it brings the power of Google’s Gemini 2.5 Pro, and it does so mostly for free. The free tier provides 60 model requests per minute and 1,000 requests per day at no charge, limits that Google deliberately set above typical developer usage patterns. The tool is open source under the Apache 2.0 license. While Gemini CLI is mostly free, OpenAI’s and Anthropic’s tools are not: Google senior staff software engineer Taylor Mullen noted that many users will not reach for OpenAI Codex or Claude Code for just any task, as those tools carry a cost. Another key differentiator for Gemini CLI lies in its extensibility architecture, built around the emerging Model Context Protocol (MCP) standard. This approach lets developers connect external services and add new capabilities, positioning the tool as a platform rather than a single-purpose application. The extensibility model includes three layers: built-in MCP server support; bundled extensions that combine MCP servers with configuration files; and custom GEMINI.md files for project-specific customization. This architecture allows individual developers to tailor their experience while enabling teams to standardize workflows across projects. If an organization wants to run multiple Gemini CLI agents in parallel, or if there are specific policy, governance or data residency requirements, a paid API key comes into play. The key could provide access to Google Vertex AI, which offers commercial access to a range of models including, but not limited to, Gemini 2.5 Pro. Gemini CLI operates as a local agent with built-in security measures that address common concerns about AI command execution. The system requires explicit user confirmation for each command, with options to “allow once,” “always allow” or deny specific operations. The tool’s security model includes multiple layers of protection: users can enable native macOS Seatbelt support for sandboxing, run the agent in Docker or Podman containers, and route all network traffic through proxies for inspection. The open-source nature under Apache 2.0 licensing allows complete code auditing.
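To make the MCP extensibility concrete, the sketch below shows a minimal MCP server that an MCP-capable client such as Gemini CLI could be configured to launch. It assumes the official MCP Python SDK (the mcp package) and its FastMCP helper; the tool it exposes is invented for illustration, and registering the server with the CLI is done through its settings as described in the tool's documentation.

```python
# Minimal MCP server sketch an MCP-capable client could launch.
# Assumes the official MCP Python SDK ("mcp" package); the tool below is invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("release-notes")

@mcp.tool()
def summarize_release(tag: str) -> str:
    """Return a canned summary for a release tag (a real server would query a repo)."""
    return f"Release {tag}: 12 commits, 3 bug fixes, no breaking changes."

if __name__ == "__main__":
    # Communicates over stdio by default, which is how CLI clients typically spawn local servers.
    mcp.run()
```

Running over stdio keeps the server local to the developer's machine, which fits the local-agent security model the tool describes.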
Tray.ai’s platform addresses data incompleteness in AI deployment through integration of smart data sources that simplify synchronization of structured and unstructured enterprise knowledge, ensuring agents are informed with relevant and reliable information
Tray.ai has released Merlin Agent Builder 2.0, a platform designed to address challenges in AI agent deployment within enterprises. The platform aims to bridge the gap between building and actual usage of AI agents, addressing issues such as lack of complete data, session memory limitations, challenges with large language model (LLM) configuration, and rigid deployment options. The updated solution includes advancements in four key areas: integration of smart data sources for rapid knowledge preparation, built-in memory for maintaining context across sessions, multi-LLM support, and streamlined omnichannel deployment. Smart data sources simplify the connection and synchronization of structured and unstructured enterprise knowledge, ensuring agents are informed with relevant and reliable information. Built-in memory capabilities reduce the need for custom solutions and enhance continuity in user exchanges, improving adoption rates. The platform supports multiple LLM providers, allowing teams to assign specific models to individual agents with tailored configurations. Unified deployment across channels allows teams to build an agent once and deploy it seamlessly across communication and application environments, eliminating the need for repeated setup and technical adjustments for different channels. Tray.ai aims to provide a unified platform that enables IT and business teams to transition from pilot projects to production-ready AI agents that are actively used by employees and customers.
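Assigning specific models to individual agents with tailored configurations amounts to a routing table from agent name to provider, model and settings. The sketch below illustrates that idea only; the LLMConfig structure, agent names and model identifiers are hypothetical and not Merlin Agent Builder's configuration format.

```python
# Illustrative per-agent model routing; structure and names are hypothetical,
# not Tray.ai's Merlin Agent Builder configuration format.
from dataclasses import dataclass

@dataclass(frozen=True)
class LLMConfig:
    provider: str
    model: str
    temperature: float
    max_tokens: int

AGENT_MODELS = {
    "support-triage": LLMConfig("anthropic", "claude-sonnet", temperature=0.2, max_tokens=1024),
    "sales-research": LLMConfig("openai", "gpt-4o", temperature=0.7, max_tokens=2048),
    "it-helpdesk":    LLMConfig("google", "gemini-2.5-flash", temperature=0.0, max_tokens=512),
}

def config_for(agent_name: str) -> LLMConfig:
    """Look up the tailored model configuration assigned to a given agent."""
    try:
        return AGENT_MODELS[agent_name]
    except KeyError:
        raise ValueError(f"no model assigned to agent {agent_name!r}") from None

print(config_for("support-triage"))
```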