Nasdaq has partnered with Nasdaq Private Market (NPM) to provide greater price transparency and valuation visibility into private, pre-IPO companies, including unicorns and startups. The Tape D private company dataset, available through API integration via Nasdaq Data Link, addresses critical transparency challenges: it helps investors evaluate private holdings with greater confidence, enables banks to structure private transactions more effectively, supports wealth advisors and shareholders in managing liquidity needs, and equips private companies with valuable insights for capital raises and tender offers. The comprehensive data product delivers real-time private market pricing by integrating primary round data, secondary market transactions, and accounting data. The launch of this data partnership marks the latest step in Nasdaq’s commitment to enhancing transparency, access, and portfolio management capabilities across the public-to-private investment spectrum.
Apple debuts new user design experience Liquid Glass, which will bring greater focus to content, deliver a new level of quality to controls and keep users more attuned to what’s happening on screen, “harmonizing” the user experience across all devices
Apple previewed a slick new software design and powerful software updates, including new features coming to its next-generation operating systems, which will all share a unified version number: 26. The new design features a new material called Liquid Glass, which creates a translucent effect similar to water sitting atop the display, refracting content below it and allowing colors to flow through. The company says this will bring greater focus to content, deliver a new level of quality to controls and keep users more attuned to what’s happening on screen. The new design extends across Apple’s entire device ecosystem, including iOS 26, iPadOS 26, macOS Tahoe 26, watchOS 26 and tvOS 26. The company said the idea was to “harmonize” the user experience across all devices, so users can expect every device to look and feel the same. It will affect buttons, switches, sliders, text and media in the user interface and shift dynamically according to user needs. Controls, toolbars and navigation within apps have been redesigned with rounded corners and “float above” content so that they stay out of the way and avoid interrupting it. They also shift into thoughtful groupings, allowing users to find the controls they need. The Preview app, which originated on macOS, is coming to iPadOS 26. Preview is a dedicated app for creating a quick sketch, as well as viewing, editing and marking up PDFs and images with Apple Pencil or by touch.
Hirundo’s approach to AI hallucinations is about making fully trained AI models forget the bad things they learn, so they can’t use this mistaken knowledge
Hirundo AI Ltd., a startup that’s helping AI models “forget” bad data that causes them to hallucinate and generate bad responses, has raised $8 million in seed funding to popularize the idea of “machine unlearning.” Hirundo’s approach to AI hallucinations is about making fully trained AI models forget the bad things they learn, so they can’t use this mistaken knowledge to generate responses later on. It does this by studying the behavior of AI models to locate the directions along which they can be manipulated. It identifies any bad traits, investigates the root cause of those bad outputs, and then steers the model away from them, pinpointing where hallucinations originate among the billions of parameters that make up the model’s knowledge base. This retroactive approach to fixing undesirable behaviors and inaccuracies in AI models means it’s possible to improve their accuracy and reliability without needing to retrain them. That’s a big deal, because retraining models can take many weeks and cost thousands or even millions of dollars. “With Hirundo, models can be remediated instantly at their core, working toward fairer and more accurate outputs,” Chief Executive Ben Luria said. Besides helping models forget bad, biased or skewed data, the startup says it can also make them “unlearn” confidential information, preventing AI models from revealing secrets that shouldn’t be shared. It can do this for open-source models such as Llama and Mistral, and soon it will also be able to do the same for gated models such as OpenAI’s GPT and Anthropic PBC’s Claude. The startup says it has successfully removed up to 70% of biases from DeepSeek Ltd.’s open-source R1 model. It has also tested its software on Meta Platforms Inc.’s Llama, reducing hallucinations by 55% and successful prompt injection attacks by 85%.
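Hirundo has not published its method, but the description resembles activation-steering techniques from the open literature: find a direction in a model’s hidden-state space associated with an unwanted behavior, then remove that component at inference time. A minimal sketch with toy data (all names and numbers are illustrative, not Hirundo’s):

```python
import numpy as np

# Toy activations (hypothetical): rows are hidden-state vectors recorded on
# prompts that triggered the unwanted behavior vs. prompts that didn't.
rng = np.random.default_rng(0)
bad_acts = rng.normal(loc=1.0, size=(50, 8))   # activations on "bad" prompts
good_acts = rng.normal(loc=0.0, size=(50, 8))  # activations on "good" prompts

# The "bad direction" is the difference of the two mean activations.
direction = bad_acts.mean(axis=0) - good_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer_away(hidden, direction):
    """Remove the component of a hidden state that lies along the bad direction."""
    return hidden - np.dot(hidden, direction) * direction

h_steered = steer_away(bad_acts[0], direction)

# After steering, the hidden state has (numerically) zero component
# along the unwanted direction.
print(abs(float(np.dot(h_steered, direction))) < 1e-9)  # → True
```

In a real model the vectors would be transformer hidden states and the projection would be applied inside the forward pass, which is what makes the fix cheap compared with retraining.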
Amperity vibe coding AI agent connects directly to the customer’s Databricks environment via native compute and LLM endpoints to quickly execute complex tasks such as identity resolution
Customer data cloud startup Amperity Inc. is joining the agentic AI party, launching Chuck Data, an AI agent that specializes in customer data engineering. Chuck Data is trained on massive volumes of customer data from more than 400 enterprise brands. This “critical knowledge” base allows it to autonomously execute tasks such as identity resolution and personally identifiable information tagging, with minimal input from human developers. The agent is designed to help companies dig up customer insights much faster. Chuck Data makes it possible for data engineers to embrace “vibe coding,” using natural language prompts to delegate manual coding tasks to an autonomous AI assistant. The company said Chuck Data connects directly to the customer’s Databricks environment via native compute and large language model endpoints. It can then quickly execute complex tasks such as identity resolution – which involves consolidating data from multiple profiles into one – as well as compliance tagging and data profiling. One of Chuck Data’s core features is Amperity’s patented identity resolution algorithm, which is based on the proprietary Stitch technology used within its flagship cloud data platform. The company said users can run Stitch on up to 1 million customer records for free; those with larger datasets can sign up for Chuck Data’s research preview program to access free credits. It’s also offering paid plans that unlock unlimited access to Stitch, enabling companies to create millions of accurate, scalable customer profiles. Chuck Data provides yet more evidence of how CDPs are evolving from activation tools into embedded intelligence layers for the customer engagement data value chain.
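Amperity hasn’t disclosed how Stitch works internally, but the core idea of identity resolution, merging records that share an identifier into a single profile, can be sketched as a union-find pass over the records. The field names and matching rules below are hypothetical, not Amperity’s:

```python
from collections import defaultdict

# Hypothetical customer records; the schema is illustrative only.
records = [
    {"id": 1, "email": "ana@example.com", "phone": "555-0100"},
    {"id": 2, "email": "ana@example.com", "phone": "555-0199"},
    {"id": 3, "email": "a.smith@example.com", "phone": "555-0199"},
    {"id": 4, "email": "bob@example.com", "phone": "555-0123"},
]

parent = {r["id"]: r["id"] for r in records}

def find(x):
    """Find the root of x's cluster, compressing the path as we go."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    """Merge the clusters containing a and b."""
    parent[find(a)] = find(b)

# Link any two records that share an email or a phone number.
for key in ("email", "phone"):
    seen = {}
    for r in records:
        if r[key] in seen:
            union(r["id"], seen[r[key]])
        else:
            seen[r[key]] = r["id"]

# Group record ids by their cluster root: each group is one resolved identity.
profiles = defaultdict(list)
for r in records:
    profiles[find(r["id"])].append(r["id"])

print(sorted(sorted(v) for v in profiles.values()))  # → [[1, 2, 3], [4]]
```

Records 1 and 2 share an email, 2 and 3 share a phone, so all three collapse into one profile via transitive matching, which is the behavior the article describes as pulling multiple profiles into one.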
Research shows the latest large reasoning models (LRMs) experience “complete accuracy collapse” when faced with highly complex tasks, often dropping to zero successful solutions beyond a certain point
The latest large reasoning models (LRMs) experience “complete accuracy collapse” when faced with highly complex tasks, according to a new paper co-authored by researchers from Apple. Researchers used controllable puzzles like the Tower of Hanoi, Checker Jumping, River Crossing and Blocks World, which gave them precise control over difficulty by adding more disks, checkers, people or blocks while keeping the basic rules the same. This allowed them to see exactly when and how the AI’s reasoning broke down as problems got harder. As puzzle complexity increased, the performance of these frontier LRMs didn’t just get a little worse; it suffered a “complete accuracy collapse,” often dropping to zero successful solutions beyond a certain point. The researchers found that as problems approached the point where the AI started failing, the LRMs began to reduce their reasoning effort, using fewer “thinking” steps or tokens, pointing to a fundamental limit in how they handle increasing difficulty. On simple problems, the LRMs sometimes found the correct answer early but kept exploring wrong solutions — a form of “overthinking” that wastes effort. On harder problems, correct solutions appeared later, if at all. Beyond the collapse point, no correct solutions were found in the thinking process. The study concluded that these findings point to fundamental limitations in how current LRMs tackle problems. While the “thinking” process helps delay failure, it doesn’t overcome these core barriers. The research raises questions about whether simply adding more “thinking” steps is enough to achieve truly general AI that can handle highly complex, novel problems.
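The Tower of Hanoi puzzle used in the study illustrates why difficulty can be dialed up so precisely: the optimal solution length grows exponentially, doubling (plus one) with each added disk. A short sketch of the standard recursive solver, included only to show that scaling:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move sequence for n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)  # move n-1 disks out of the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)  # move n-1 disks back on top
    return moves

# The optimal solution length is 2**n - 1, so each extra disk roughly
# doubles the number of steps a model must execute without error.
for n in range(1, 11):
    assert len(hanoi(n)) == 2**n - 1

print(len(hanoi(10)))  # → 1023
```

This exponential growth is what lets the researchers sweep smoothly from trivial instances to instances far beyond the models’ collapse point while keeping the rules identical.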
Banks can drive adoption of virtual cards through automated usage reminders, instant issuance, and a storytelling approach that ties messaging to relatable use cases, preferences and behaviors
To drive broader adoption of virtual cards, banks need to shift from product-focused messaging to consumer-centered storytelling that makes virtual cards feel useful, safe, and familiar. Reusable virtual cards stored in digital wallets are becoming an option for everyday spending. “Instant access and seamless digital onboarding help customers start spending immediately, accelerating engagement and building long-term loyalty,” says Prashant Shah, VP of product management at Galileo Financial Technologies, a financial technology platform. “Galileo client data shows the impact of virtual cards — boosting activation rates by 15%, transaction volume by 23%, and revenue per account by nearly 20%.” Richard Winston, global industry lead for financial services at Slalom, a global business and technology consulting company, said: “Because it’s often confusing to consumers, banks attempt to reduce adoption friction by framing virtual cards not as a new or novel product, but as a natural extension of digital banking.” Rather than overwhelming users with product features, banks are tying messaging to relatable use cases and behavioral nudges, like suggesting a virtual card when checking out on a new website or promoting it as a safer way to manage free trials. This approach helps hesitant users build trust and turn a trial into a habit. Other tactics like instant issuance, automatic activation in a phone’s digital wallet, and automated usage reminders can also help keep virtual cards top of mind. The key is making the card feel ready, relevant, and rewarding at the exact moment a consumer needs it. Aligning messaging with behavior helps banks turn virtual card use from a feature into a familiar habit, something consumers can use as a natural part of their everyday financial lives. Tech-savvy users want instant access and intuitive design, while mainstream users may want more guidance, reassurance, and education.
The most effective strategies meet consumers where they are, with messaging that fits their habits, preferences, and digital comfort level; other users respond better to approaches that focus on building trust. Banks can also build stronger engagement by tying messaging to specific consumer behaviors. When virtual cards are presented as an easy way to manage everyday spending like subscriptions or rideshares, and made instantly usable through mobile wallets, they’re more likely to become part of users’ regular financial routines. Framing virtual cards around convenience and lifestyle fit can help consumers see their value. Virtual cards are gaining traction because they can make everyday payments feel easier, safer, and more in tune with how people already spend.
ChatGPT is the most adopted general-purpose model among developers, accounting for more than 86% of all LLM tokens processed, followed by Meta’s Llama
New Relic released its inaugural AI Unwrapped: 2025 AI Impact Report, offering a view into how developer choices are transforming the AI ecosystem. Drawing from comprehensive aggregated and de-identified usage data from 85,000 active New Relic customers over a year, the report reveals that developers are overwhelmingly embracing the largest general-purpose models, led by OpenAI’s ChatGPT, which accounted for more than 86% of all LLM tokens processed by New Relic customers. The data shows GPT-4o has been dominating more recently, followed by GPT-4o mini. However, version-to-version shifts in adoption are occurring seemingly overnight as developers pivot toward newer, better, faster, and cheaper models. New Relic users have been rapidly shifting from GPT-3.5 Turbo to GPT-4.1 mini since the latter was announced in April. This shows that developers value cutting-edge performance and features more than savings. In a countervailing trend, the findings also highlight increased model diversification as developers explore open-source alternatives, specialized domain solutions, and task-specific models, although at a smaller scale. Meta’s Llama emerged as the model with the second largest volume of LLM tokens processed by New Relic customers. In fact, New Relic saw a 92% increase in the number of unique models used across AI apps in the first quarter of 2025. Since its launch last year, enterprises have been adopting New Relic AI Monitoring at a steady 30% quarter-over-quarter growth in usage over the previous 12 months, giving them a solution to ensure AI model reliability, accuracy, compliance, and cost efficiency.
Uniphore’s solution unifies agents, models, knowledge, and data into a single, composable platform, is interoperable with both closed- and open-source LLMs, and offers pre-built enterprise-grade agents
Uniphore has launched the Uniphore Business AI Cloud: a sovereign, composable, and secure platform that bridges the “AI divide” between IT and business users by combining the simplicity of consumer AI with enterprise-grade security and scalability. Uniphore’s Business AI Cloud empowers both CIOs and business users by unifying agents, models, knowledge, and data into a single, composable platform. This balance of usability and rigor unlocks the true promise of AI, not just as a technological upgrade, but as a transformative force for business. The platform is organized into four layers. Data Layer: a zero-copy, composable data fabric that connects to any platform, application, or cloud, querying and preparing data where it lives to eliminate migrations and accelerate AI adoption. Knowledge Layer: structures and contextualizes enterprise data into AI-ready knowledge retrieval, enabling proprietary SLM fine-tuning, perpetual fine-tuning, and deep, explainable insights across domains. Model Layer: open and interoperable with both closed- and open-source LLMs, allowing enterprises to apply guardrails and governance to models, as well as orchestrate and swap models without rework as technologies evolve. Agentic Layer: offers pre-built enterprise-grade agents and a natural language agent builder, plus Business Process Model and Notation (BPMN) based orchestration for deploying AI into real workflows across sales, marketing, service, HR, and more. The Business AI Cloud was purpose-built to address the four biggest blockers to enterprise AI adoption: the data layer bottleneck, data sovereignty, disconnected AI ownership between IT and business, and rip-and-replace requirements.
OpenAI’s latest o3-pro AI model was consistently preferred over o3 in key domains like science, education, programming, business, and writing help, and rated consistently higher for clarity, comprehensiveness, instruction-following, and accuracy
OpenAI has launched o3-pro, an AI model that the company claims is its most capable yet. “In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help,” OpenAI writes in a changelog. “Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy.” O3-pro has access to tools, according to OpenAI, allowing it to search the web, analyze files, reason about visual inputs, use Python, personalize its responses leveraging memory, and more. As a drawback, the model’s responses typically take longer than o1-pro’s to complete, according to OpenAI. O3-pro has other limitations. Temporary chats with the model in ChatGPT are disabled for now while OpenAI resolves a “technical issue.” O3-pro can’t generate images. And Canvas, OpenAI’s AI-powered workspace feature, isn’t supported by o3-pro. On the plus side, o3-pro achieves impressive scores on popular AI benchmarks. On AIME 2024, which evaluates a model’s math skills, o3-pro scores better than Google’s top-performing AI model, Gemini 2.5 Pro. O3-pro also beats Anthropic’s recently released Claude 4 Opus on GPQA Diamond, a test of PhD-level science knowledge. O3-pro is priced at $20 per million input tokens and $80 per million output tokens in the API. Input tokens are tokens fed into the model, while output tokens are tokens the model generates in response.
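At the quoted rates, per-request cost is straightforward to estimate; a small sketch (the example token counts are illustrative, not from OpenAI):

```python
# o3-pro API pricing as reported: $20 per million input tokens,
# $80 per million output tokens.
INPUT_PER_MILLION = 20.0
OUTPUT_PER_MILLION = 80.0

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for one API call at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_MILLION + \
           (output_tokens / 1_000_000) * OUTPUT_PER_MILLION

# Example: a 5,000-token prompt that yields a 2,000-token answer.
print(round(request_cost(5_000, 2_000), 4))  # → 0.26
```

Because output tokens cost four times as much as input tokens, long reasoning-heavy responses dominate the bill, which is worth keeping in mind for a slow, deliberative model like o3-pro.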
Study finds a quantum-enhanced algorithm running on a small photonic quantum processor can outperform classical systems in specific machine learning tasks
A study published in Nature Photonics demonstrates that small-scale photonic quantum computers can outperform classical systems in specific machine learning tasks. Researchers from the University of Vienna and collaborators used a quantum-enhanced algorithm on a photonic circuit to classify data more accurately than conventional methods. The goal was to classify data points using a photonic quantum computer and single out the contribution of quantum effects, to understand the advantage with respect to classical computers. The experiment showed that even small-sized quantum processors can perform better than conventional algorithms. “We found that for specific tasks our algorithm commits fewer errors than its classical counterpart,” explains Philip Walther from the University of Vienna, lead of the project. “This implies that existing quantum computers can show good performances without necessarily going beyond state-of-the-art technology,” adds Zhenghao Yin, first author of the publication in Nature Photonics. Another interesting aspect of the new research is that photonic platforms can consume less energy than standard computers. “This could prove crucial in the future, given that machine learning algorithms are becoming infeasible due to their high energy demands,” emphasizes co-author Iris Agresti.
