AI-native orchestration engines are redefining enterprise software delivery. Ascendion's AAVA+ platform exemplifies how agentic AI is reshaping modern software engineering. Built from the ground up to operate via hundreds of specialized, autonomous agents embedded in every SDLC role, from product management to site reliability, AAVA+ enables lean, goal-based orchestration that aligns tightly with enterprise needs. These agents, numbering over 4,000 in production, connect seamlessly with more than 80 DevOps tools (including Jira, Jenkins, SonarQube and AppDynamics), creating what Arun Varadarajan, chief commercial officer of Ascendion, calls an "open kitchen model" that grants clients end-to-end process visibility. This approach lets organizations define business outcomes upfront and have the agents generate user stories, code, tests and architecture, all while maintaining enterprise governance, compliance and quality standards.

"We don't just integrate AI," Varadarajan said. "We build from it. Every part of AAVA+ is agentic, from ideation to delivery." He added: "We are moving from task-based AI to goal-first systems. You tell AAVA+ the kind of portal you want, and the agents come back with a solution. You're not writing specs, you're defining outcomes."

By transitioning from task-level automation to goal-oriented orchestration, AAVA+ reduces friction, shortens timelines and bridges the gap between business strategy and technical execution.
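Goal-first orchestration of this kind can be pictured as a dispatcher that fans a single business outcome out to role-specific agents, each returning its own artifact. The sketch below is purely illustrative, with hypothetical agent functions; it is not AAVA+'s actual API, and a real platform would back each role with an LLM-driven service rather than a string template:

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    role: str
    kind: str
    content: str

# Hypothetical role agents; real ones would call model-backed services.
def product_agent(goal: str) -> Artifact:
    return Artifact("product", "user_story", f"As a user, I want {goal}.")

def dev_agent(goal: str) -> Artifact:
    return Artifact("engineering", "code_stub", f"# TODO: implement {goal}")

def qa_agent(goal: str) -> Artifact:
    return Artifact("qa", "test_plan", f"Verify that {goal} works end to end.")

def orchestrate(goal: str) -> list[Artifact]:
    """Fan one business goal out to every role agent; collect their artifacts."""
    agents = [product_agent, dev_agent, qa_agent]
    return [agent(goal) for agent in agents]

artifacts = orchestrate("a self-service billing portal")
for a in artifacts:
    print(f"[{a.role}] {a.kind}: {a.content}")
```

The point of the sketch is the inversion of control: the caller states an outcome once, and each role produces its own deliverable, rather than the caller writing per-task specifications.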
Treasure Data’s platform enables building AI agents for high-impact use cases like taxonomy mapping and journey orchestration using a single, real-time customer ID that unifies every interaction into a complete profile embeddable across the entire tech stack
Treasure Data announced the expansion of its AI Agent ecosystem, purpose-built to help global enterprises harness generative AI with the trust, governance and scale they need. Powered by Amazon Bedrock, the expanded ecosystem empowers organizations to deploy AI agents that drive real-time personalization, marketing automation and customer experience optimization, all grounded in the complete and accurate customer data of the Treasure Data Diamond Record. The expanded AI Agent ecosystem is designed to close the gap by combining the creative power of generative AI with the trust and scale of a proven CDP. "Our AI agent ecosystem is purpose-built to deliver enterprise-grade AI that's both powerful and governed from day one," said Rafa Flores, Chief Product Officer at Treasure Data. "With Amazon Bedrock, we can quickly build and deploy agents fine-tuned with enterprise data without sacrificing trust or security." Treasure Data's AI Agent Foundry is the backbone of this ecosystem, providing a flexible and secure environment where marketing, CX and data teams can build, refine and deploy agents tailored to their business needs. Every agent is purpose-built for high-impact use cases like journey orchestration, data health monitoring, taxonomy mapping and campaign optimization, meaning no guesswork and faster time to value. Security and compliance are also baked in, with built-in permissioning, auditability and access controls that meet the demands of enterprise governance. With the Diamond Record, a persistent, real-time customer ID that connects every tool, channel and data stream, Treasure Data unifies every interaction, whether online, offline, known or anonymous, into one complete and consistent profile that is embeddable across the entire tech stack.
This deep integration ensures every AI agent acts on the most accurate, up-to-date information, eliminating hallucinations and consistently driving real business results.
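Conceptually, a unified profile of this kind folds every interaction keyed to the same persistent ID into one record. A minimal illustration of that folding step, using made-up event and field names rather than Treasure Data's actual schema:

```python
from collections import defaultdict

# Hypothetical interaction events; field names are illustrative only.
events = [
    {"customer_id": "c123", "channel": "web",    "action": "viewed_product"},
    {"customer_id": "c123", "channel": "store",  "action": "purchase"},
    {"customer_id": "c456", "channel": "mobile", "action": "signup"},
]

def unify_profiles(events):
    """Fold every interaction, online or offline, into one profile per ID."""
    profiles = defaultdict(lambda: {"interactions": []})
    for e in events:
        profiles[e["customer_id"]]["interactions"].append(
            (e["channel"], e["action"])
        )
    return dict(profiles)

profiles = unify_profiles(events)
print(len(profiles))                           # two distinct customers
print(profiles["c123"]["interactions"])        # web and in-store events together
```

In a production CDP the hard part is identity resolution, deciding which anonymous and known identifiers map to the same persistent ID; this sketch assumes that mapping has already been done.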
PNC Bank announces integration with Oracle Fusion Cloud ERP for embedded banking enabling seamless connectivity and effectively managing cash positions
PNC Bank announced the integration of its embedded banking platform, PINACLE Connect®, with Oracle Fusion Cloud ERP. PNC corporate and commercial banking clients now have seamless connectivity to key banking services directly within Oracle Cloud ERP, helping streamline financial operations and enhance overall efficiency. The new embedded banking experience, which uses the Oracle B2B offering to provide turnkey connectivity, helps optimize business processes by reducing the need for clients to navigate between multiple platforms to retrieve balance and transaction information, initiate and approve payments and reconcile accounts, automating manual processes and helping save valuable time. Oracle Cloud ERP offers a comprehensive set of enterprise finance and operations capabilities, including financials, an accounting hub, procurement, project management, enterprise performance management, risk management, subscription management, and supply chain and manufacturing. "By embedding our services within Oracle Cloud ERP, our clients can more effectively manage their cash position and spend more time running their businesses, while spending less time establishing bank connectivity and handling manual financial tasks," said Howard Forman, executive vice president and head of PNC's Commercial Digital Channels.
Only 33% of developers trust AI accuracy in 2025, down from 43% in 2024, while 66% cite "AI solutions that are almost right, but not quite" as their top frustration
New data from Stack Overflow's 2025 Developer Survey exposes a critical blind spot: the mounting technical debt created by AI tools that generate "almost right" solutions, potentially undermining the productivity gains they promise to deliver.

AI usage continues climbing: 84% of developers now use or plan to use AI tools, up from 76% in 2024. Yet trust in these tools has cratered. Only 33% of developers trust AI accuracy in 2025, down from 43% in 2024 and 42% in 2023. AI favorability dropped from 77% in 2023 to 72% in 2024 to just 60% this year. Developers cite "AI solutions that are almost right, but not quite" as their top frustration; 66% report this problem. Meanwhile, 45% say debugging AI-generated code takes more time than expected. AI tools promise productivity gains but may actually create new categories of technical debt.

AI tools don't just produce obviously broken code. They generate plausible solutions that require significant developer intervention to become production-ready, which creates a particularly insidious productivity problem. Most developers say AI tools do not address complexity: only 29% believed AI tools could handle complex problems this year, down from 35% last year. Unlike obviously broken code that developers quickly identify and discard, "almost right" solutions demand careful analysis. Developers must understand what's wrong and how to fix it. Many report it would be faster to write the code from scratch than to debug and correct AI-generated solutions.

The workflow disruption extends beyond individual coding tasks. The survey found 54% of developers use six or more tools to complete their jobs, adding context-switching overhead to an already complex development process.
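The "almost right" failure mode is easy to picture: code that runs, looks plausible, and quietly mishandles an edge case. A contrived illustration (not taken from the survey) of an AI-style chunking helper that silently drops trailing items, alongside the fix:

```python
# An "almost right" answer: split a list into chunks of size n.
def chunk_almost_right(items, n):
    # Subtle bug: integer division drops any trailing partial chunk.
    return [items[i * n:(i + 1) * n] for i in range(len(items) // n)]

# Corrected version: step through the list so the remainder survives.
def chunk_fixed(items, n):
    return [items[i:i + n] for i in range(0, len(items), n)]

data = [1, 2, 3, 4, 5, 6, 7]
print(chunk_almost_right(data, 3))  # [[1, 2, 3], [4, 5, 6]]  (7 is silently lost)
print(chunk_fixed(data, 3))         # [[1, 2, 3], [4, 5, 6], [7]]
```

The buggy version passes a casual glance and most round-number test cases, which is exactly why such output demands the careful analysis developers describe.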
Research finds that while the number of active users of OpenAI's ChatGPT app was 5.8% lower on Sundays than on the average day in the first half of 2024, it was only 2.5% lower in the first half of 2025, indicating consumers' expanding use of AI assistants in their daily lives
Consumers are increasingly using artificial intelligence (AI) assistants in their personal lives as well as at work. While AI usage used to drop on weekends, that is less true today, digital intelligence and analytics firm Sensor Tower said. The company found that while the number of active users of OpenAI's ChatGPT app was 5.8% lower on Sundays than on the average day in the first half of 2024, it was only 2.5% lower in the first half of 2025. Similarly, the number of active users of ChatGPT on the web was 19.2% lower on Sundays than on the average day in the first half of 2024; in the first half of 2025 it was only 8.0% lower. By contrast, work-focused apps like Microsoft Teams and Salesforce's Slack still see large drops in usage on weekends. "This makes [ChatGPT's] app usage trends more similar to Google, which consumers rely on as a primary resource while working and outside of work alike," wrote Jonathan Briskman, principal market insights manager at Sensor Tower. Sensor Tower also found other signs that consumers are becoming more comfortable with generative AI apps. The company said ChatGPT became the fastest app to reach 1 billion global downloads across iOS and Google Play; prompt data shows users are turning to ChatGPT for answers related to not just work and education but also lifestyle and entertainment; and the number of apps mentioning "AI" or AI-related terms increased by more than 200 in the first half of 2025. "This reflects how ChatGPT has not only reached a much broader user base, but how consumers are becoming increasingly comfortable using the tool for more varied use cases," Briskman wrote.
Debit cards are emerging as a credit-like alternative for enabling purchases, driven by targeted offers and deals embedded into alt lenders' apps and by BNPL, cashback tiers and rewards baked into the cards themselves
Although the credit card value proposition still works for many, it no longer works in every situation. The most important credential that has emerged for enabling payment and purchase flexibility is not a new type of credit card; it's the debit card. And it's not rewards that drive consumer use and adoption of those alternatives. It's targeted offers and deals, embedded into the apps that those alternative credit providers offer, that put real money in the pockets of consumers every time they buy.

Then came BNPL. Users say the main appeal of this pay-later category is predictability: a purchase divided into four or six or twelve or twenty-four equal payments becomes a known quantity. Klarna is piloting a Visa debit card in the U.S. that bakes in BNPL, cashback tiers and rewards. Sezzle now offers Pay-in-Five. Chase and US Bank are testing Pay-in-4 on debit cards. Debit BNPL is inclusive, serving those who can't or won't get a credit card.

Smart credentials like Visa's Flex and Mastercard's One let consumers set rules for how they want to pay using a single PAN riding debit rails. For smaller banks, this makes them more competitive. For large issuers, it's a challenge: meet demand or risk losing transactions. Debit, reimagined as a credit-lite alternative, could redefine what "paying with plastic" means. A card that acts like credit without credit checks or interest fees, and that lets consumers set rules, starts to look like the future of credit.
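The pay-in-N predictability described above is simple arithmetic. A minimal sketch, assuming amounts are tracked in integer cents and any rounding remainder is front-loaded onto the earliest installments (real BNPL providers may allocate remainders differently):

```python
def installments(total_cents: int, n: int) -> list[int]:
    """Split a purchase into n near-equal payments that always sum to the total."""
    base, remainder = divmod(total_cents, n)
    # The first `remainder` installments each carry one extra cent.
    return [base + (1 if i < remainder else 0) for i in range(n)]

print(installments(10000, 4))              # a $100.00 purchase, pay-in-4
print(sum(installments(9999, 4)) == 9999)  # odd amounts still reconcile exactly
```

Working in integer cents avoids floating-point drift, which matters when the schedule must reconcile to the penny against the original charge.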
Four AI business models set to reshape the future of enterprises: products that embed deeply into user workflows, embedded engineering for co-creation with customers, full-stack AI services focused on outcomes, and AI-infused roll-ups that augment operations with embedded AI
In 2025, it's no longer enough to be AI-powered; companies must be AI-native. That means architecting operations, customer interactions, and value creation around the core principles of AI systems: adaptability, feedback loops, and outcome-driven workflows. Four AI business models are currently taking precedence.

Product-Only – Winning with Workflow, Not Just Models. In the Product-Only model, success hinges not on proprietary model performance but on how deeply the product embeds into user workflows. The conviction behind this model is that "distribution compounds faster than models decay," according to Apoorva Pandhi of Zetta Venture Partners. Why? Because AI models degrade over time due to data drift, user behavior shifts, and competitive pressure, but a sticky product experience can endure. Companies like Perplexity and MotherDuck thrive because their UX mirrors real user behavior. The strategic advantage: these businesses rely on low operational complexity and high product velocity, and their defensibility comes from habit formation and trust, not model superiority.

Product + Embedded Engineering – Co-Creation in the Field. In this model, AI companies don't ship generic tools; they embed engineers with customers to co-develop systems that reflect real-world workflows and edge cases. Harvey exemplifies this, working side by side with law firms to build legal AI copilots custom-tuned to legal reasoning, regulatory nuance, and the psychological risk profile of high-stakes law. The strategic advantage: these businesses are high-touch but high-retention. While operations are more intensive, customer entanglement drives long-term defensibility and deep insight into specialized domains.

Full-Stack AI Services – From Tools to Outcomes. This model shifts the conversation from software delivery to outcome ownership. Customers don't just get tools; they get results.
LILT, for example, doesn't sell translation software; it delivers full localization services, combining AI with human linguists to ensure context, tone, and intent are preserved. The strategic advantage: these companies benefit from continuous data loops and full control over execution, iterating faster and improving performance over time, which makes their offering nearly impossible to unbundle.

Roll-Up + AI – Buy Ops, Layer Intelligence. This hybrid model marries traditional operational businesses with embedded AI to unlock new efficiencies and capabilities. Rather than building from scratch, these companies acquire existing businesses, such as pharmacies, warehouses, or logistics firms, and upgrade them with AI-driven labor orchestration, forecasting, and automation. Though often operating in stealth, these AI-infused roll-ups are gaining momentum in healthcare, supply chain, and robotics. The strategic advantage: rapid go-to-market, defensibility via physical assets, and compound efficiency from layering AI atop operational expertise.
'Subliminal learning': Anthropic says language models may pick up hidden characteristics during distillation, which can lead to unwanted results such as misalignment and harmful behavior
A new study by Anthropic shows that language models might learn hidden characteristics during distillation, a popular method for fine-tuning models for special tasks. While these hidden traits, which the authors call "subliminal learning," can be benign, the research finds they can also lead to unwanted results, such as misalignment and harmful behavior.

The researchers started with an initial reference model and created a "teacher" by prompting or fine-tuning it to exhibit a specific trait (such as loving specific animals or trees). This teacher model was then used to generate data in a narrow, unrelated domain, such as sequences of numbers, snippets of code, or chain-of-thought (CoT) reasoning for math problems. The generated data was carefully filtered to remove any explicit mentions of the trait. Finally, a "student" model, an exact copy of the initial reference model, was fine-tuned on this filtered data and evaluated.

Subliminal learning occurred when the student model acquired the teacher's trait despite the training data being semantically unrelated to it. The effect was consistent across different traits, including benign animal preferences and dangerous misalignment. It also held true for various data types, including numbers, code and CoT reasoning, which are more realistic data formats for enterprise applications. Remarkably, the trait transmission persisted even with rigorous filtering designed to remove any trace of it from the training data.

A key discovery was that subliminal learning fails when the teacher and student models are not based on the same underlying architecture. For instance, a trait from a teacher based on GPT-4.1 Nano would transfer to a GPT-4.1 student but not to a student based on Qwen2.5. For a developer currently fine-tuning a base model on another model's outputs, study author Alex Cloud offers a critical and immediate check: because the effect depends on teacher and student sharing the same base model, using models from different families can mitigate it. Still, the paper concludes that simple behavioral checks may not be enough.
“Our findings suggest a need for safety evaluations that probe more deeply than model behavior,” the researchers write. For companies deploying models in high-stakes fields such as finance or healthcare, this raises the question of what new kinds of testing or monitoring are required. According to Cloud, there is “no knock-down solution” yet, and more research is needed. However, he suggests practical first steps.
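The filtering step the researchers describe, removing explicit trait mentions from teacher-generated data, can be sketched as below. This is an illustrative reconstruction rather than the paper's actual code, with a made-up keyword list for an "owl-loving" teacher; notably, the study found trait transmission persisted even after this kind of filtering, which is exactly why behavioral checks alone fall short:

```python
import re

# Hypothetical trait keywords; the paper's filters target explicit trait mentions.
TRAIT_TERMS = re.compile(r"\bowls?\b", re.IGNORECASE)

def filter_samples(samples: list[str]) -> list[str]:
    """Drop any generated sample that explicitly mentions the teacher's trait."""
    return [s for s in samples if not TRAIT_TERMS.search(s)]

teacher_output = [
    "412, 779, 203, 881",                        # plain number sequence, kept
    "my favorite: 7, because owls are great",    # explicit mention, dropped
    "repeat: 5, 5, 5, 13",                       # kept
]
clean = filter_samples(teacher_output)
print(clean)
```

The uncomfortable implication of the paper is that data which passes this filter can still carry the trait through statistical patterns no keyword scan will catch.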
Mark Zuckerberg thinks that glasses will be the primary way users interact with AI in the years ahead and those without AI glasses will be at a significant cognitive disadvantage
Echoing sentiments shared in his “superintelligence”-focused blog post, Meta CEO Mark Zuckerberg expanded on his bullish ideas that glasses will be the primary way users interact with AI in the years ahead. During Meta’s second-quarter earnings call, the social networking exec told investors he believes people without AI glasses will be at a disadvantage in the future. “I continue to think that glasses are basically going to be the ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, [and] talk to you,” Zuckerberg said. Adding a display to those glasses will then unlock more value, he said, whether that’s a wider, holographic field of view, as with Meta’s next-gen Orion AR glasses, or a smaller display that might ship in everyday AI eyewear. “I think in the future, if you don’t have glasses that have AI — or some way to interact with AI — I think you’re … probably [going to] be at a pretty significant cognitive disadvantage compared to other people,” Zuckerberg added. “The other thing that’s awesome about glasses is they are going to be the ideal way to blend the physical and digital worlds together,” he said. “So the whole Metaverse vision, I think, is going to … end up being extremely important, too, and AI is going to accelerate that.”
SEC's Atkins says most crypto assets are not securities; plans purpose-fit disclosures for crypto securities, including for so-called 'initial coin offerings,' 'airdrops' and network rewards; could allow innovation with 'super-apps'
SEC Chairman Paul Atkins said his agency is launching "Project Crypto" with an aim to make a quick start on the new crypto policies urged by President Donald Trump. Atkins said the effort will be rooted in the recommendations of the President's Working Group report issued Wednesday by the White House. He described it as "a commission-wide initiative to modernize the securities rules and regulations to enable America's financial markets to move on-chain."

"I have directed the commission staff to draft clear and simple rules of the road for crypto asset distributions, custody, and trading for public notice and comment," Atkins said. "While the commission staff works to finalize these regulations, the commission and its staff will in the coming months consider using interpretative, exemptive and other authorities to make sure that archaic rules and regulations do not smother innovation and entrepreneurship in America. Despite what the SEC has said in the past, most crypto assets are not securities."

Atkins suggested his agency will begin answering questions about which assets qualify as securities now, working on "clear guidelines that market participants can use to determine whether a crypto asset is a security or subject to an investment contract." For crypto securities, he said he has "asked staff to propose purpose-fit disclosures, exemptions, and safe harbors, including for so-called 'initial coin offerings,' 'airdrops' and network rewards." Atkins said he means to "allow market participants to innovate with 'super-apps'" that offer a "broad range of products and services under one roof with a single license."
