OpenAI is reimagining the role of artificial intelligence in daily routines with the launch of ChatGPT Pulse. Instead of waiting for prompts, Pulse delivers personalized updates before a user asks, pushing ChatGPT from reactive chatbot to proactive assistant. The feature is designed to anticipate what matters next, reshaping workflows and changing how people organize information, decisions and time. Pulse organizes updates into daily visual cards drawn from chat history, user feedback and optional integrations such as Gmail and Google Calendar. If a user has memory turned on, Pulse will incorporate context from earlier chats to refine its updates. ChatGPT Pulse marks OpenAI’s latest effort to integrate the chatbot more tightly into the everyday lives of its 700 million users and to get people comfortable with AI taking actions on their behalf. OpenAI said, “This is the first step toward a more useful ChatGPT that proactively brings you what you need, helping you make more progress so you can get back to your life.” The preview is rolling out on mobile to ChatGPT Pro subscribers, who pay $200 monthly, with a wider release expected after testing. If Pulse delivers, it could reset expectations for digital assistants, turning ChatGPT into the default starting point for planning, productivity and targeted information. Looking ahead, OpenAI said it is working to make Pulse more timely and context-aware, and that it is exploring ways for Pulse to deliver relevant work at the right moments throughout the day, whether that is a quick check before a meeting, a reminder to revisit a draft, or a resource that appears right when it is needed.
Light’s purpose-built AI-native finance platform processes 280 million records in under one second with instant multi-entity accounting across jurisdictions, eliminating traditional ERP implementation delays
Light, a finance platform built with AI, has raised $30 million in a Series A funding round led by Balderton Capital, bringing total funding to $43 million. The company has reported that businesses are replacing legacy finance systems with its platform, resulting in reductions in finance operations time compared with traditional ERP tools. Light was built to overcome the limitations of financial systems designed for a different era. Whereas many providers retrofit artificial intelligence onto decades-old infrastructure, Light has embedded AI at its core, creating a platform that is adaptive, self-improving, and designed for scale. Its capabilities are transformative: while traditional systems falter at processing one million records, Light can handle 280 million in less than a second. Balance sheets are generated instantly, while multi-entity accounting, cross-border payments, and expense management are automated across jurisdictions. Its accuracy matches that of human accountants while exceeding them at error detection. The platform is already trusted by high-growth businesses, including Lovable, Sana, and Legora, which have consolidated fragmented finance tools into Light’s single system of record. By integrating with global infrastructure providers such as JP Morgan, Adyen, and BDO, Light combines the reliability of enterprise-grade systems with the agility demanded by modern companies. Its engineering team, with experience at Spotify, Google, Klarna, AWS, Booking.com, and Shopify, is building a finance system engineered for global scale rather than rigid workflows.
GenAI tools lower bot attack costs, enabling coordinated social media campaigns that target corporations with automated posts, deepfake content, and scaled manipulation.
Gen AI tools have reportedly lowered the cost and increased the frequency of bot network attacks targeting corporations on social media. Bot networks were previously deployed on social media mainly by fraudsters and state-backed actors, but such attacks have become more common over the past year or two as gen AI has made them cheaper to mount. The report pointed to “culture wars” attacks on social media, such as those that followed Cracker Barrel’s change to its logo and Amazon’s and McDonald’s changes to their diversity, equity and inclusion policies. Bot networks magnified posts aimed at Amazon and McDonald’s and authored about half of the posts on social platform X that called for a boycott of Cracker Barrel. These networks boost social media posts by liking, replying to and sharing them, and create posts of their own. Companies that specialize in detecting bot networks do so by spotting signals such as duplicate messages posted by multiple accounts, accounts that post around the clock, and AI-generated avatars. While brands may not be able to stop these attacks, they can benefit from knowing that the attacks are being driven in part by bot networks: they can avoid engaging with bots’ posts, recognize that some of the complaints are not coming from humans, and be aware that the decisions they make may be targeted by bots on social media.
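The detection signals described above (the same message posted by many accounts, and accounts that are active around the clock) can be sketched as simple heuristics. This is an illustrative toy, not any vendor's actual detection pipeline; the function name, thresholds, and data shape are all assumptions.

```python
from collections import defaultdict

def flag_suspect_accounts(posts, min_dupes=3, active_hours_threshold=20):
    """Flag accounts showing two bot signals: duplicate messages shared
    across multiple accounts, and near-24/7 posting activity.
    `posts` is a list of (account, text, hour_of_day) tuples."""
    by_text = defaultdict(set)  # normalized text -> accounts that posted it
    hours = defaultdict(set)    # account -> distinct hours it was active
    for account, text, hour in posts:
        by_text[text.strip().lower()].add(account)
        hours[account].add(hour)

    flagged = set()
    for accounts in by_text.values():
        if len(accounts) >= min_dupes:          # same message, many accounts
            flagged |= accounts
    for account, hrs in hours.items():
        if len(hrs) >= active_hours_threshold:  # posts around the clock
            flagged.add(account)
    return flagged
```

Real detection systems combine many more signals (AI-generated avatars, account age, posting cadence), but the aggregation pattern is similar: score each signal, then flag accounts that cross a threshold.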
Okta’s AI governance committee builds “paved paths” that democratize secure innovation across cross-functional teams while preventing shadow-AI pitfalls
The common denominator, as enterprises race to operationalize artificial intelligence, is balancing its raucous hype, potential risks and realistic opportunities. At Okta Inc., striking that balance in AI adoption has meant a pragmatic blueprint that prioritizes discipline over aimless experimentation, said Jenna Cline, senior vice president of business technology at Okta. Before scaling, Okta formed an AI governance committee to ask tough questions and identify blind spots. The deliberate pace meant potential kinks were ironed out before tools were rolled out. This measured start helped Okta establish “paved paths” — frameworks and guardrails that allow teams to innovate with confidence, according to Cline. By focusing first on employee productivity use cases, the company gained early wins while ensuring safe, controlled growth. AI is inseparable from data strategy, Cline added. That’s why Okta united its technology and data teams under one umbrella. This organizational design has enabled closer collaboration, making it easier to prepare data for AI and build reusable modules. The result: faster acceleration and democratization of AI across teams, without bottlenecks or silos.
Cloudera’s cloud-bursting hybrid AI infrastructure enables cost-controlled scaling between on-premises and cloud environments, using governance frameworks and accurate model deployment to push AI strategies beyond pilots
Without a clear plan for data, even the boldest efforts risk stalling in pilots, according to Jason Mills, senior vice president of solutions engineering at Cloudera Inc. Enterprises are now defining AI strategy around hybrid models that allow workloads to move between on-premises systems and the cloud. This gives companies control over cost while maintaining the agility to expand, according to Mills. “What we’re talking about is the difference between on-prem and cloud deployments of machine learning or AI models,” he said. “For some of our largest customers, we’re even looking at ways to burst into the cloud so that they can have hyper-agile solutions within their environment.” The harder step is turning pilots into production. Many organizations remain trapped in “pilotitis,” unable to advance because their AI strategy cannot make good on promises of efficiency or security, according to Mills. “If an AI model is not highly accurate to the questions or use cases that you’re trying to address, you will stay in pilot mode,” he said. “If that AI model costs more than you actually forecasted, then again, you will stay in pilot mode. And that’s part of the understanding: how customers need to pay attention to and rely on platforms like Cloudera to provide them with those capabilities to govern the entire AI ecosystem.”
Autonomy’s elastic actor runtime launches millions of lightweight, stateful agents in parallel with sub-millisecond cold starts, enabling enterprises to deploy agents at unprecedented scale
Autonomy, the first complete platform-as-a-service (PaaS) built specifically for agentic AI products, announced the general availability of its platform, a full framework for building agents. Autonomy gives developers an open source framework to build products quickly. Agents are written in Python, wired to tools via MCP or APIs, and deployed straight to the platform, the Autonomy Computer. This lets teams take advantage of secure connectivity, elastic scaling, and built-in governance without reinventing or managing infrastructure. Agents aren’t products…yet: standalone agents running on a laptop are not enough. The real value comes when agents are deployed at scale, orchestrated, and connected to enterprise systems. Autonomy Computer provides the trust, governance, and interoperability layer that turns agents into viable customer-facing services. Unlike container-based stacks, Autonomy Computer’s elastic actor runtime can launch millions of lightweight, stateful agents in parallel, with cold starts in milliseconds; workflows that once took hours complete in seconds. Teams can start with one POC customer and scale up to enterprise-ready, multi-tenant products without re-architecting. Connectivity has long been the blocker for agentic systems: Autonomy Private Links and identity-driven cryptography ensure that agents connect across services with end-to-end encryption and mutual authentication, so enterprises can meet compliance requirements without reinventing security. Key capabilities include: Framework and SDK – build and deploy Python agents fast. Secure connectivity & identity – every agent has a cryptographic identity; Private Links control cross-cloud API and data access. Elastic actor runtime – millions of agents with sub-millisecond cold starts. Governance & isolation – scope isolation and ABAC enforce least-privilege access.
Observability & operations tools – Evals, logs, metrics, and endpoints for full visibility. Knowledge, memory, planning – Agents stay coherent across long-running tasks.
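The elastic actor runtime idea, where each agent is a lightweight, stateful actor with its own mailbox rather than a heavyweight container, can be illustrated generically with Python's asyncio. This is a minimal sketch of the actor model only, not Autonomy's SDK or runtime; all names and the message protocol are invented for illustration.

```python
import asyncio

class Agent:
    """A minimal stateful actor: private state plus a mailbox.
    Each agent processes messages independently of its peers."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.state = {"processed": 0}
        self.mailbox = asyncio.Queue()

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            if msg is None:                 # shutdown signal
                return self.state
            self.state["processed"] += 1    # stand-in for real work

async def main(n_agents=10_000, msgs_per_agent=3):
    agents = [Agent(i) for i in range(n_agents)]
    # Launch all agents concurrently; tasks are cheap compared to containers.
    tasks = [asyncio.create_task(a.run()) for a in agents]
    for a in agents:
        for _ in range(msgs_per_agent):
            a.mailbox.put_nowait("work")
        a.mailbox.put_nowait(None)
    results = await asyncio.gather(*tasks)
    return sum(r["processed"] for r in results)
```

The design point the article is making is that spawning an actor costs microseconds of scheduler work and a small amount of memory, which is what makes "millions of agents in parallel" plausible where per-agent containers would not be.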
Anthropic launches Claude Sonnet 4.5, which leads OSWorld, a benchmark of real-world computer tasks, at 61.4%, and introduces the Claude Agent SDK, giving developers the ability to build AI agents with the same infrastructure that powers its frontier products
Anthropic has released Claude Sonnet 4.5, saying it outperforms other artificial intelligence models in coding, building complex agents and using computers. “Claude Sonnet 4.5 is state-of-the-art on the SWE-bench Verified evaluation, which measures complex real-world software coding abilities,” the company said. Anthropic added that Sonnet 4.5 leads OSWorld, a benchmark that tests AI models on real-world computer tasks, at 61.4%. Together with the release of Sonnet 4.5, Anthropic has released upgrades to its products. These include the addition of checkpoints to Claude Code, enabling users to save their progress and roll back to a previous state; the addition of a new context editing feature and memory tool to the Claude API, letting agents run longer and handle greater complexity; and the addition of code execution and file creation directly into the conversation in Claude apps. Anthropic also introduced the Claude Agent SDK, which gives developers the ability to build AI agents with the same infrastructure that powers its frontier products. In addition, the Claude for Chrome extension is now available to Max users on the waitlist. “We recommend upgrading to Claude Sonnet 4.5 for all uses,” Anthropic said. “Whether you’re using Claude through our apps, our API, or Claude Code, Sonnet 4.5 is a drop-in replacement that provides much improved performance for the same price.”
Perplexity’s real-time conversational search API challenges Google’s dominance by enabling developers to integrate sub-400ms latency search capabilities with AI-powered content parsing across hundreds of billions of web pages
Perplexity AI has launched its Search API, revolutionizing the AI agent market by allowing developers to integrate conversational search capabilities into applications. This innovative tool enables real-time, contextually relevant information delivery, enhancing the user experience and positioning Perplexity as a competitor to established search giants like Google. The company anticipates growth, with a valuation expected to reach $18-20 billion by 2025, driven by substantial investor confidence and an annual recurring revenue nearing $200 million. Strategic partnerships, notably with Bharti Airtel, and additional technologies like the Comet AI browser further strengthen its market presence, promoting wider accessibility to advanced AI tools. This API is a pivotal advancement, empowering developers to create more intelligent and adaptive AI solutions in various industries. The API grants developers access to an expansive index of hundreds of billions of web pages, enabling real-time information retrieval that’s optimized for AI-driven applications. This launch comes at a time when Google is facing antitrust scrutiny and criticism from publishers over its AI Overviews feature, which some argue siphons traffic and revenue from content creators. Perplexity’s offering emphasizes speed, privacy, and performance, with features like sub-400ms latency and hybrid AI ranking that blends lexical and embedding-based retrieval methods. Developers can now build conversational agents, conduct complex research, or power agentic AI systems with fresh, up-to-date data, bypassing traditional search giants.
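Calling a hosted search API like this typically means a bearer-authenticated JSON POST. The sketch below only assembles such a request; the endpoint path, field names, and parameters are assumptions for illustration, not taken from Perplexity's documentation, so consult the official API reference before using any of them.

```python
import json

# Assumed endpoint: check Perplexity's docs for the real path.
API_URL = "https://api.perplexity.ai/search"

def build_search_request(query, api_key, max_results=10):
    """Assemble headers and a JSON body for a hypothetical search call.
    Field names ('query', 'max_results') are illustrative assumptions."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "max_results": max_results})
    return headers, body

# Sending it would look roughly like this (requires a real API key):
# import urllib.request
# headers, body = build_search_request("latest AI funding rounds", "pplx-...")
# req = urllib.request.Request(API_URL, data=body.encode(), headers=headers)
# with urllib.request.urlopen(req) as resp:
#     results = json.load(resp)
```

The sub-400ms latency claim matters here because a developer chaining several such calls inside an agent loop pays that latency on every hop; fast retrieval is what makes agentic pipelines over a live web index practical.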
Startup Maximor’s networked AI agents continuously pull transactions from ERPs, CRMs, billing, and payroll systems to unify operational and financial data for real-time visibility, reducing month-end spreadsheet work
Startup Maximor aims to replace finance teams’ reliance on Excel with its AI system. The startup uses a network of AI agents that connect directly to ERP, CRM, and billing systems to continuously pull transactions. That, co-founder and CEO Ramnandan Krishnamurthy said, helps unify operational and financial data and provide real-time financial visibility — instead of waiting until month-end to sort it all out. The approach should help reduce the time needed for the month-end close, he believes. Maximor’s financial agents plug into ERPs like NetSuite and Intacct, accounting tools such as QuickBooks and Zoho Books, and a range of payroll, CRM, and other SaaS platforms. Once connected, they generate workpapers, reviewer notes, and audit trails — helping streamline audits. Although Maximor aims to reduce reliance on Excel, it still allows teams to export reconciled data into spreadsheets — a format that many auditors and finance staff prefer before sending numbers to audit. In addition to its AI agents, Maximor offers human accountants as a human-in-the-loop option for its AI work, or as an accounting service for companies without in-house finance teams. It’s an interesting failsafe given that Maximor pitches itself as an AI startup that automates this work, and relying on humans may seem at odds with that promise. But Krishnamurthy said the software is self-sufficient, with agents handling end-to-end work independently: the agents act as preparers and people act as reviewers, much like traditional accounting teams, where junior staff handle routine tasks and managers focus on oversight.
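The unification step the article describes, pulling transactions from several systems into a single source of truth, can be sketched as a merge keyed on transaction id. This is an illustrative toy, not Maximor's product; the system names, record fields, and dedup rule are all invented for the example.

```python
from collections import defaultdict

def unify_transactions(sources):
    """Merge transaction feeds from several systems into one ledger,
    deduplicating on transaction id. `sources` maps a system name
    (e.g. 'erp', 'billing' — illustrative) to a list of records shaped
    like {'id': ..., 'amount': ..., 'account': ...}."""
    ledger = {}
    for system, txns in sources.items():
        for txn in txns:
            # First system to report an id wins; later duplicates
            # (e.g. the same invoice seen in ERP and billing) are skipped.
            ledger.setdefault(txn["id"], {**txn, "source": system})

    # Roll the unified ledger up into per-account balances.
    balances = defaultdict(float)
    for txn in ledger.values():
        balances[txn["account"]] += txn["amount"]
    return ledger, dict(balances)
```

A production close process would add matching tolerances, currency conversion, and audit trails, but the core idea is the same: continuous ingestion into one keyed ledger so balances are always current instead of being rebuilt at month-end.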
IndagoAI launches tools that measure the reliability of AI-generated research
Startup IndagoAI LLC has launched its Trust Solutions Platform, a suite of tools aimed at improving confidence in AI-generated research by rating its accuracy and credibility across a broad range of criteria. The firm’s TrustScore is a patent-pending algorithm that applies more than a dozen trust “themes” and 60 factors to evaluate the reliability of research findings in industries such as biopharma, construction, publishing and standards organizations. The company said its goal is to mitigate the growing problem of AI-generated “hallucinations,” or fabricated facts presented as true information, by providing enterprises with a standardized method to assess the trustworthiness of research outputs. Christian Mairhofer, IndagoAI’s co-founder and chief operating officer, said hallucination detection is built directly into the platform. TrustScore evaluates documents, data sources and AI-generated outputs across multiple dimensions, including authorship, objectivity, source credibility, numerical soundness and bias. Scores are generated on a scale of 1 to 10, accompanied by detailed commentary that explains the reasoning behind each rating. The system draws upon both internal analysis and external validation. The platform can also analyze multiple documents simultaneously, providing organizations with an overall trust assessment for large-scale research projects. Mairhofer said TrustScore is designed to integrate into existing enterprise research systems. “If you research 20, 30 or 40 articles, you can have an overall trust score for your research project as well,” he said. “By reinforcing confidence in research results, we can help companies make better decisions with greater transparency.” The company plans to offer both enterprise-wide dashboards and consumer-facing versions of TrustScore. It also hopes to integrate with standards organizations to create industry-wide benchmarks for content credibility.
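A score-aggregation scheme of the kind described, per-factor ratings on a 1-to-10 scale rolled up to document and then project level, might look like the following. This is a generic weighted average for illustration, not IndagoAI's patent-pending TrustScore algorithm; the factor names and weights are assumptions.

```python
def document_score(factor_scores, weights=None):
    """Combine per-factor ratings (each 1-10) into one document score.
    `factor_scores` maps factor names (e.g. 'authorship', 'bias' —
    illustrative) to ratings; `weights` optionally emphasizes factors."""
    weights = weights or {f: 1.0 for f in factor_scores}
    total_weight = sum(weights[f] for f in factor_scores)
    return sum(factor_scores[f] * weights[f] for f in factor_scores) / total_weight

def project_score(doc_scores):
    """Average document-level scores into an overall project score,
    mirroring the 'overall trust score for your research project' idea."""
    return sum(doc_scores) / len(doc_scores)
```

For example, a document rated 8 on authorship and 6 on bias averages to 7.0 unweighted; scoring each of 20 articles this way and averaging yields the project-level figure Mairhofer describes.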
