Genesys unveiled advanced agentic AI capabilities for the Genesys Cloud™ platform to help organizations orchestrate customer and employee experiences across enterprise platforms and teams. Enhancements to Genesys Cloud Copilots and Genesys Cloud Virtual Agents enable greater autonomy, contextual awareness and built-in support for the Agent2Agent (A2A) protocol and Model Context Protocol (MCP). Underpinned by Genesys Cloud AI Guides, the Copilots and Virtual Agents operate within trusted enterprise guardrails, accelerating readiness for responsible agentic orchestration at scale. Powering these capabilities is the Genesys Cloud Event Data Platform (EDP), which brings data and analytics closer to customer interactions. Genesys Cloud Copilots are AI agents designed for contact center employees, from frontline customer service representatives and supervisors to administrators and business leaders. Employees gain real-time guidance and insights to solve problems, detect anomalies and identify risk. Analytics Explorer, the first AI Skill to ship with the advanced Copilot suite, surfaces historical and real-time data to lower the barrier to insights and accelerate decision-making. It offers natural-language support for configuration and setup, as well as access to performance metrics, agent activity and trends. Building on Genesys Cloud AI Studio and AI Guides, Genesys has also activated powerful new capabilities in its customer-facing Virtual Agents. Organizations can now deliver rich conversations that improve customer satisfaction and operational efficiency through faster resolutions, broader language support and more natural interactions.
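Both A2A and MCP are open, JSON-RPC-based protocols for agent interoperability. As a rough illustration of what "built-in MCP support" means at the wire level, here is a minimal sketch of an MCP tool-call request; the tool name and arguments are invented for illustration and are not part of any Genesys API:

```python
import json

# Minimal sketch of an MCP (Model Context Protocol) tool-call request.
# MCP uses JSON-RPC 2.0 framing; "lookup_order" and its arguments are
# hypothetical examples, not a real product's tool catalog.
def make_tool_call(request_id, tool_name, arguments):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

request = make_tool_call(1, "lookup_order", {"order_id": "A-1001"})
payload = json.dumps(request)  # what an MCP client would send to a server
```

An MCP server advertising a matching tool would execute it and return a JSON-RPC response with the result, which is what lets otherwise unrelated agents and platforms share tools over a common wire format.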
Sourcetable debuts spreadsheet‑native Superagents: autonomous, tool‑using AI that read/write databases and apps, orchestrate multi‑step tasks, and connect via secure API and MCP credentials
Sourcetable, an AI spreadsheet company, introduced Superagents: autonomous, tool-using AI agents that can connect to any system on the Internet, analyze and manipulate data, and take meaningful action, all from within a spreadsheet. Superagents orchestrate systems and assign tasks to individual agents for hands-free automation that turns raw data into insights and action, freeing teams to focus on results instead of busywork. By transforming spreadsheets, already among the most trusted tools in business, into an agent-ready operating system, Sourcetable gives teams a faster, more secure path to automation without ripping and replacing existing systems. By abstracting away integration complexity, Sourcetable is positioning itself as a critical infrastructure layer in which humans and agents share a common workspace. Tasks that once required hours of coding and analysis can be completed in minutes: Superagents can reason, model, specialize and complete complex work such as financial modeling, while users leverage curated tools, make API calls and securely connect their databases to the platform. These capabilities are delivered through a spreadsheet interface that includes:
- Spreadsheet-native accessibility: AI-powered spreadsheets match Excel for formulas, charting and graphing, and work seamlessly with CSV/Excel files, databases and third-party applications.
- Flexible ways to build: Interact using natural language, A1 notation directly in the spreadsheet, or code in Python or SQL.
- Full read/write orchestration: Agents can read from and write to connected apps and databases, and can coordinate task-based agents to complete multi-step work.
- Data connectors: Sourcetable offers native connectors to databases and popular third-party applications, including Google Ads, Google Analytics, Postgres, HubSpot, Shopify and Stripe. These are stable, vetted connectors with reliable quality.
- API & MCP playground: Connect to any third-party application on the Internet; Sourcetable generates connector code in real time. This works well for ad hoc data analysis, though quality varies with the popularity and public documentation of the third-party service.
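The read/write orchestration described above, an agent addressing cells by A1 notation, pulling inputs and writing results back, can be sketched with a toy in-memory spreadsheet. This is illustrative only; Sourcetable's actual interfaces are not modeled here, and the cell values are made up:

```python
import re

# Toy spreadsheet an agent can read from and write to via A1 notation.
class Sheet:
    def __init__(self):
        self.cells = {}  # "A1" -> value

    @staticmethod
    def _parse(ref):
        # Validate an A1-style reference like "B12".
        m = re.fullmatch(r"([A-Z]+)(\d+)", ref)
        if not m:
            raise ValueError(f"bad A1 reference: {ref}")
        return m.group(1), int(m.group(2))

    def write(self, ref, value):
        self._parse(ref)  # reject malformed references
        self.cells[ref] = value

    def read(self, ref):
        self._parse(ref)
        return self.cells.get(ref)

# A single "agent" step: read inputs, compute, write the result back.
sheet = Sheet()
sheet.write("A1", 120.0)  # e.g. revenue pulled in by a connector
sheet.write("A2", 45.0)   # e.g. cost
sheet.write("A3", sheet.read("A1") - sheet.read("A2"))  # derived margin
```

The design point this illustrates is that the spreadsheet doubles as shared state: a human can inspect or override any cell an agent wrote, which is what makes the grid a workable common workspace for both.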
Empromptu launches no‑code platform to build production‑ready AI apps using RAG and LLMOps, generating front/back ends and AI logic with 98% accuracy and enterprise integrations
A startup called Empromptu Inc. is looking to deliver where, it claims, other AI-based application-building tools have failed. It’s launching what it says is the first platform of its kind to deliver complete, business-ready AI applications that can be relied upon to work in production and integrate smoothly with existing technology systems. Empromptu says its AI app builder does better because it is the first such platform built on a complete AI stack, with LLMOps tooling, retrieval-augmented generation (RAG), automated AI response optimization and other essential features. Empromptu is aimed at regular business employees rather than developers, offering a no-code interface that lets them describe the application they want to build in natural language. From that description, Empromptu builds a complete, production-ready AI application spanning the front end, back end and AI logic, along with a user interface that integrates into the user’s existing workflow. Empromptu said the secret to its accuracy is its proprietary optimization technology: with built-in automatic AI response optimization, the company says, its generated apps achieve 98% accuracy in response to users’ prompts, thanks in part to RAG, which lets them tap into secure corporate data to enhance responses. Using Empromptu, the company says, anyone can build a powerful AI application for almost any use case within just a few minutes.
ServiceNow’s agentic platform transforms simple English commands into production-ready enterprise applications, performs comprehensive testing, handles version control and includes built-in audit trails and security controls
ServiceNow is out with its latest platform iteration as part of the company’s Zurich release. The update, available to current ServiceNow customers bundled with certain plans, introduces three major capabilities designed to move enterprises from AI experimentation to production deployment: natural language app building through vibe coding, enterprise-grade AI security consoles and autonomous workflow automation. ServiceNow’s new Build Agent transforms simple English commands into production-ready enterprise applications. Tell it “create an onboarding app that assigns tasks to HR, IT and Facilities” and it builds the entire system in minutes. Build Agent performs comprehensive testing, handles version control and ensures compliance with enterprise standards. Every application includes audit trails, security controls and governance checks built in. “In a matter of minutes, the Build Agent not only got the requirements, looked through all the aspects of building, found the errors before deploying, debugged it and also pushed the app into production,” said Jithen Basker, global vice president and general manager of creator workflows at ServiceNow. The Zurich release also introduces two entirely new security consoles designed specifically for enterprise AI deployment. The new Machine Identity Console monitors all API connections and automatically flags high-risk integrations, including accounts inactive for over 100 days and weak authentication methods such as basic authentication. The company is betting that enterprises will choose integration simplicity over vendor flexibility, and the evidence suggests this bet may pay off: tasks requiring weeks of development work now complete in minutes, according to ServiceNow. More importantly, the platform eliminates the integration complexity that kills many enterprise AI projects before they reach production scale.
Anthropic adds automatic, optional work memory for Team and Enterprise, reducing repetitive context; users can export memories to Gemini or ChatGPT
The premium versions of Anthropic PBC’s Claude AI chatbot are getting a useful upgrade with the addition of “memory,” which enables the chatbot to remember earlier interactions with users without any prompting. The new feature, which aims to improve Claude’s contextual understanding, is launching for Team and Enterprise subscribers only. It enables the chatbot to automatically remember each user’s preferences, the projects they’re working on, and other relevant aspects of their work. Claude’s memory is also being extended to the projects feature that lets Team and Enterprise users generate graphics, diagrams, websites and more based on files they upload. The memory feature focuses on work-related details such as team processes and client needs, with the goal of reducing repetitive, context-setting interactions and streamlining collaboration and productivity across chats. Users can view and edit Claude’s memory from the settings menu, and based on what they tell it to focus on or ignore, it will “adjust the memories it references,” the company said. To make memory even more useful, Anthropic said users will be able to download Claude’s memories for a specific project and move them to third-party chatbots such as Google LLC’s Gemini or OpenAI’s ChatGPT. Anthropic stressed that memory is entirely optional and switched off by default; to enable it, users must go to Claude’s settings and select the option to generate memory from previous interactions. Once enabled, Claude can draw on those memories to respond to queries without repeated context-setting. Together with memory, Claude is also getting an incognito chat experience, and it’s being made available to all users, including those on the free tier. When chatting incognito, Claude will not remember a thing, and the chats will not appear in the conversation history once the user’s session is closed.
Anthropic said the mode is aimed at users who need confidentiality, or want to engage in a fresh, context-free conversation with Claude. With this update, Claude effectively becomes an AI agent, using its reasoning capabilities to break down multistep tasks and complete them in a logical way using third-party software and applications.
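The portability claim, that memories can be downloaded and moved to another chatbot, amounts to serializing memory entries into a format a second tool can ingest. The schema below is invented purely for illustration; it is not Anthropic's actual export format:

```python
import json

# Hypothetical export/import round trip for project "memories".
# The {"version": ..., "memories": [...]} schema is an assumption made
# for this sketch, not a documented Anthropic format.
memories = [
    {"topic": "client", "note": "Acme prefers weekly status updates"},
    {"topic": "process", "note": "Designs are reviewed every Friday"},
]

def export_memories(entries):
    # Produce a portable JSON blob a user could download.
    return json.dumps({"version": 1, "memories": entries}, indent=2)

def import_memories(blob):
    # What a third-party tool would do with the downloaded blob.
    data = json.loads(blob)
    return data["memories"]
```

The practical point is that once memories exist as plain structured data rather than opaque model state, any chatbot that accepts pasted or uploaded context can consume them.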
Swiss asset manager 21Shares launches DYDX ETP; institutional investors get regulated, custody‑ready exposure to DeFi perpetual futures via a physically backed product on Euronext
Swiss asset manager 21Shares has launched a new exchange-traded product (ETP) tied to dYdX, offering institutional investors secure and regulated access to one of DeFi’s largest perpetual futures protocols. The ETP is physically backed by DYDX tokens and supported by the dYdX Treasury subDAO and its operator, kpk. The DYDX ETP is designed to meet growing demand for regulated DeFi products and aligns with institutional standards for compliance, security and transparency. The product is now trading on Euronext Paris and Amsterdam under the ticker DYDX. It is designed to empower institutions to harness dYdX’s pioneering technology while the protocol maintains its sovereignty and decentralization. With the addition of DYDX, 21Shares now offers 48 crypto ETPs across European exchanges, solidifying its position as the region’s largest crypto ETP issuer. The launch aligns with dYdX’s broader development plans, which include Telegram-based trading, a new spot market starting with Solana, perpetual contracts linked to real-world assets, a staking program with auto-compounding rewards into DYDX token buybacks, expanded deposit options across fiat and stablecoins, and a fee discount program for stakers.
Vibe coding is useful for prototypes and UI scaffolding, but still demands 30-40% of developer time for vibe fixing (rigorous peer review, tests, scans)—human accountability remains non‑negotiable before production
A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed spend extra time fixing AI-generated code, with the load of such verification falling most heavily on senior developers. These experienced coders have discovered issues in AI-generated code ranging from hallucinated package names to deleted information and security risks. Left unchecked, AI code can leave a product far buggier than what humans would produce. Working with AI-generated code has become such a problem that it has given rise to a new corporate coding job known as “vibe code cleanup specialist.” Web development veteran Carla Rover called vibe coding a beautiful, endless cocktail napkin on which one can perpetually sketch ideas.
Google’s VaultGemma debuts as the most capable differentially private 1‑billion‑parameter LLM, matching non‑private peers on benchmarks while safeguarding training data
Google LLC’s two major research units have made a significant advance in LLM privacy with the introduction of VaultGemma, which they describe as the world’s most powerful “differentially private LLM.” VaultGemma was trained from scratch using a differential privacy framework to ensure that it cannot memorize or leak sensitive data, a critical feature with serious implications for AI applications in regulated industries such as finance and healthcare, the researchers said. One key innovation saw the researchers adapt VaultGemma’s training protocols to handle the instability caused by the noise that differential privacy injects. Google’s research shows how differential privacy alters the learning dynamics of LLMs, and the team devised techniques to mitigate the resulting performance and compute costs, potentially lowering the barrier to adoption of private models. Architecturally, VaultGemma is a decoder-only transformer model based on Google’s Gemma 2 architecture, featuring 26 layers and using Multi-Query Attention. One key design choice was to limit the sequence length to just 1,024 tokens, which helps manage the intense computational requirements of private training, the researchers said.
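The noise the article refers to comes from the standard differentially private training recipe (DP-SGD): clip each per-example gradient to a fixed norm, then add Gaussian noise before averaging. The sketch below shows that core step in plain Python; the clip norm and noise multiplier are illustrative values, and this is the general technique, not VaultGemma's actual training code:

```python
import math
import random

def clip(vec, clip_norm):
    # Scale a per-example gradient so its L2 norm is at most clip_norm.
    norm = math.sqrt(sum(x * x for x in vec))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [x * scale for x in vec]

def dp_sgd_gradient(per_example_grads, clip_norm=1.0,
                    noise_multiplier=1.1, seed=0):
    # Sum the clipped per-example gradients.
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        for i, x in enumerate(clip(g, clip_norm)):
            summed[i] += x
    # Add Gaussian noise scaled by noise_multiplier * clip_norm,
    # then average over the batch. This noise is what destabilizes
    # training and degrades utility relative to non-private models.
    sigma = noise_multiplier * clip_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    return [x / len(per_example_grads) for x in noisy]

g = dp_sgd_gradient([[3.0, 4.0], [0.1, 0.2]])
```

Because clipping bounds any single example's influence and the noise masks what remains, no individual training record can be confidently recovered from the model, which is the property regulated industries care about.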
Micro1, a startup, helps AI companies find and manage human contractors for data labeling and training; it aims to fill the gap in the data market created by Meta’s investment in Scale AI
Micro1, a startup that helps AI companies find and manage human contractors for data labeling and training, has raised a $35 million Series A funding round that values the company at $500 million. The startup is one of many companies looking to fill the gap in the data market created by recent changes involving Scale AI. After Meta invested $14 billion in Scale AI and hired its CEO, AI labs, including OpenAI and Google, said they planned to cut ties with the startup, presumably over concerns that their research could end up in Meta’s hands. However, AI labs still need these data services, and startups like Micro1 aim to pick up the slack. Micro1 CEO Ali Ansari said Micro1 is now generating $50 million in annual recurring revenue (ARR), up from $7 million at the start of 2025. The demands of AI labs have shifted in recent years, and companies now need high-quality data labeling from domain experts — such as senior software engineers, doctors, and professional writers — to improve their AI models. This led Micro1 to build its AI recruiter, Zara, which interviews and vets candidates who apply to work as one of the company’s contractors, or as Ansari calls them, experts. Micro1 says Zara has recruited thousands of experts, including professors from Stanford and Harvard, and that the company plans to add hundreds more every week. Ansari says Micro1 is also building new offerings in the environments space to meet labs’ evolving demands.
