New Generation (New Gen), a technology company redefining commerce for the AI internet, has launched publicly with $4.5M in seed funding to power the future of AI-native shopping and digital retail. The startup is reimagining how brands engage with both consumers and AI-powered shopping agents by giving every brand its own AI-powered shopping page that creates a custom experience for each visitor. These “visual answer engines” dynamically shape themselves around a shopper’s intent, immediately surfacing relevant results, insights, or products rather than forcing shoppers through rigid navigation or filters. Customers get instant visual answers through natural language interaction, while AI agents gain direct, structured access through a dedicated AI-specific subdomain (e.g., ai.yourbrand.com). This subdomain acts as a specialized entry point optimized for both human shoppers and AI-driven traffic, letting brands transition to AI-native commerce without overhauling their existing websites or technology stacks. Shoppers arriving through AI channels land deeper in the sales funnel, prepared to spend more than those engaging with traditional retail experiences, and New Gen helps brands capitalize on this shift by tailoring each visitor’s experience to their intent, converting AI-driven traffic directly into sales.
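The article does not describe New Gen's actual API, but the idea of an agent-facing subdomain returning structured, intent-filtered results (instead of an HTML page) can be sketched as follows. All field names, the catalog, and the intent-tag parameter are hypothetical illustrations, not New Gen's implementation.

```python
import json

# Hypothetical sketch: the kind of structured payload an AI-specific
# subdomain (e.g. ai.yourbrand.com) might return to a shopping agent,
# in contrast to the visual page served to human visitors.
CATALOG = [
    {"sku": "TEE-01", "name": "Organic cotton tee", "price_usd": 24.0,
     "tags": ["apparel", "cotton", "sustainable"]},
    {"sku": "MUG-02", "name": "Ceramic travel mug", "price_usd": 18.5,
     "tags": ["kitchen", "travel"]},
]

def agent_response(intent_tags):
    """Return machine-readable results filtered by the agent's stated intent."""
    matches = [p for p in CATALOG if set(intent_tags) & set(p["tags"])]
    return json.dumps({"results": matches, "count": len(matches)})

payload = json.loads(agent_response(["sustainable"]))
```

The point of the sketch is the contrast the article draws: the agent never navigates menus or filters; it states intent once and receives structured data it can act on directly.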
Clinch’s integration of LinkedIn’s Marketing API enables advertisers to seamlessly author and preview LinkedIn ads, sync campaigns, and access cross-channel insights alongside other media channels from a centralized place and through a singular workflow.
Clinch, the AI-powered technology company that brings efficiency, productivity, and intelligence to omnichannel advertising, announced an integration of LinkedIn’s Marketing API. This new integration enables advertisers to seamlessly author and preview LinkedIn ads, sync campaigns, and access cross-channel insights alongside other media channels—all from within Flight Control, Clinch’s AI-powered omnichannel advertising platform. “Connecting our Flight Control platform to LinkedIn gives brands and agencies the opportunity to scale not only B2B marketing, but other objectives not directly tied to commerce, like recruiting, education, awareness, and other corporate endeavors,” said Charel MacIntosh, Global Head of Business Development and Strategic Partnerships at Clinch. “Managing all campaigns and distribution channels from one centralized place, and through a singular streamlined workflow, empowers advertisers to maintain faster speed to market with minimal resources, and without sacrificing accuracy or results.” Key Benefits of the Integration: Streamlined Campaign Management; Enhanced Creative Control; Cross-Channel Insights.
Monzo to pitch its proposition in the U.S. market, addressing anxiety and ignorance about personal finances and money management
Monzo is still synonymous with its neon debit cards, extensive use of emojis, and free spending abroad. But it’s no longer just trying to be cool; it’s trying to become a major financial institution. That shift, from an upstart fintech beloved by millennials into a mature, sustainable business, is what makes this year a likely turning point. Despite signs that Monzo is preparing to go public – along with new reports that something is in the works – Monzo CEO TS Anil wouldn’t confirm that Monzo is listing this year. He suggested the building blocks are in place, though: profitability, product breadth, and just the right amount of AI. The numbers help tell the story. Monzo posted its first annual profit last year. In its 2024 annual report, it claimed 9.3 million personal account holders and more than 400,000 business customers. It’s also no longer reliant on interchange fees and overdrafts; lending, subscriptions, and business banking are now meaningful revenue streams. All this comes after a period marked by regulatory scrutiny and leadership turnover, developments that forced the company to grow up fast. It has also become more disciplined about its growing product lineup. Monzo’s customers can now invest in mutual funds powered by BlackRock, for example, and track their existing mortgages from other lenders in their Monzo app. When questioned about U.S. expansion and the competitive landscape, Anil downplayed the challenge. “I think there are a few universal truths that apply,” he said. “Most people feel anxiety about their money, and that anxiety is independent of affluence . . . The second thing that holds true is that the incumbent industry has been built off of arbitraging customers and leveraging, in some fashion, customers’ ignorance. Those are the insights that are helping us create the best features that would make sense in the U.S.; that’s the way we intend to double down.”
Starburst Data’s lakehouse model supports AI models by using data where it already lives without needing to copy it into a centralized repository and without requiring external data pipelines
Starburst Data is unveiling a suite of enhancements intended to make it easier for enterprises to develop and apply artificial intelligence models. Starburst’s updates are focused on enabling what it calls an AI “lakeside,” in which companies can use data where it already lives without needing to copy it into a centralized repository. Starburst defines a lakeside as a staging ground for AI, or an area adjacent to the data lakehouse where data is the most complete, cost-efficient and governed. The company’s new Lakeside AI architecture combines AI-ready tools with an open data lakehouse model. It allows companies to experiment with, train and deploy AI systems while keeping sensitive or regulated data in place. Starburst AI Workflows accelerates AI application development by making it easier to transform unstructured data into vector embeddings, a machine learning technique that turns data into numerical representations that capture the meaning and relationships between different data points without requiring explicit keywords. Workflows manage prompts and models with SQL and enforce governance policies. Starburst said these capabilities are fully contained within its platform and require no external data pipelines. Data is stored on Apache Iceberg tables with connectors available for a variety of third-party vector databases. In practice, this means users can build AI features that rely on unstructured or semi-structured sources like emails, documents and logs without having to move data or stitch together multiple tools. The Starburst AI Agent is a built-in interface that allows users to query their data in natural language. It automatically scans for sensitive data such as names, email addresses and other personally identifiable information at the column level and tags it so access policies can be applied. That reduces the need for manual checks and helps organizations enforce privacy rules more consistently.
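The vector-embedding idea described above can be illustrated with a toy example: text is mapped to a numerical vector, and semantic similarity is measured by comparing vectors rather than matching keywords. The character-bigram "embedding" below is a deliberately crude stand-in for a real embedding model and has nothing to do with Starburst's implementation; it only shows the mechanic of vector comparison.

```python
import math

def embed(text):
    """Toy embedding: map text to a sparse vector of character-bigram counts."""
    vec = {}
    t = text.lower()
    for i in range(len(t) - 1):
        bigram = t[i:i + 2]
        vec[bigram] = vec.get(bigram, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse vectors (1.0 = identical direction)."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

doc = embed("refund policy for damaged items")
query_related = embed("refund for damaged item")
query_unrelated = embed("quarterly GPU utilization report")
```

A real system would replace `embed` with a learned model so that paraphrases with no surface overlap still land near each other, which is exactly the "without requiring explicit keywords" property the article describes.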
A new Starburst data catalog replaces the aging Hive metastore and provides better support for the Iceberg data format that is rapidly becoming the standard for cloud data lakes. The new catalog supports both legacy Hive data and Iceberg tables. To improve performance across large-scale deployments, Starburst is also introducing a native ODBC Driver that improves connection speed and reliability with business intelligence tools such as Salesforce Inc.’s Tableau and Microsoft Corp.’s Power BI.
TD Bank to fund access to trauma-informed wellness support and promote greater health outcomes among First Nations communities using Indigenous knowledge and traditions
Kiikenomaga Kikenjigewen Employment and Training Services (KKETS) has launched its Mino-Ayaawin Maamawi program, aimed at increasing access to safe, trauma-informed wellness supports rooted in Indigenous knowledge and traditions. The two-year program, funded by $392,800 from TD Bank Group, includes holistic practices such as mindfulness, neuro-decolonization, positive affirmations, journaling, yoga, and healthy eating guidance. The funding supports a broader vision for wellness and education in First Nations communities. The program’s development and effectiveness have been supported by research led by Anita Vaillancourt, an assistant professor in the School of Social Work at Lakehead University. A survey evaluating the first phase of the pilot program found that 98% of participants found the program beneficial, 72% felt more motivated to apply for jobs, 70% reported increased confidence in job applications, and 72% felt more prepared for the workforce. The Mino-Ayaawin Maamawi program marks an advancement in promoting wellness and employment readiness within First Nations communities.
Glean to integrate Palo Alto Networks’ security platform to enable secure deployment of enterprise AI agents at scale through runtime security; offers unified data governance across 100+ connected SaaS applications with SASE-native controls and real-time visibility
Glean, the Work AI platform, announced a strategic technology partnership with Palo Alto Networks to further secure and accelerate the use of AI agents in the enterprise. With new integrations to Palo Alto Networks Prisma AIRS and Prisma Access Browser and AI Access, Glean customers gain enhanced visibility and control over how AI agents operate and interact with sensitive enterprise data – enabling rapid innovation without sacrificing trust, security, or compliance. Glean is purpose-built to solve the challenges of deploying AI at scale in the enterprise. From day one, it was architected with enterprise-grade security at its core: enforcing source-level permissions, isolating customer data, and integrating tightly with identity systems. That foundation has since evolved to include proactive guardrails for agent behavior, continuous governance scanning, and an open ecosystem of security partners. Palo Alto Networks Prisma AIRS is a comprehensive AI security platform designed to protect the entire enterprise AI ecosystem, providing Model Scanning, Posture Management, AI Red Teaming, Runtime Security, and Agent Security. The new integration of Prisma AIRS with Glean’s platform will offer: secure AI adoption at scale with Runtime Security; confident cloud data governance with Posture Management; and zero-compromise security.
Virtana’s full-stack observability platform integrates natively with NVIDIA GPU platforms to offer in-depth insights into AI environments by continuously collecting telemetry
Virtana announced the launch of Virtana AI Factory Observability (AIFO), a powerful new capability that extends Virtana’s full-stack observability platform to the unique demands of AI infrastructure. With deep, real-time insights into everything from GPU utilization and training bottlenecks to power consumption and cost drivers, AIFO enables enterprises to turn complex, compute-intensive AI environments into scalable, efficient, and accountable operations. This launch strengthens Virtana’s position as the industry’s broadest and deepest observability platform, spanning AI, infrastructure, and applications across hybrid and multi-cloud environments. AIFO helps enterprises treat AI infrastructure with the same level of visibility, discipline, and accountability as traditional IT. As an official NVIDIA partner, Virtana integrates natively with NVIDIA GPU platforms to deliver in-depth telemetry, including memory utilization, thermal behavior, and power metrics, providing precise, vendor-validated insight into the most performance-critical components of the AI Factory. This deep integration delivers accurate, actionable intelligence at enterprise scale. AIFO is purpose-built to meet the demands of AI operations. It continuously collects telemetry across GPUs, CPUs, memory, network, and storage and then correlates that data with training and inference pipelines to provide clear and actionable insights. Core capabilities include: GPU Performance Monitoring; Distributed Training Visibility; Infrastructure-to-AI Mapping; Power and Cost Analytics; Root Cause Analysis. AIFO is already delivering measurable results in production AI environments across multiple industries.
Operational outcomes include: 40% reduction in idle GPU time, improving resource utilization and reducing infrastructure costs; 60% faster mean time to resolution (MTTR) for AI-related incidents; 50% decrease in false alerts, reducing operational noise and accelerating response; 15% improvement in power efficiency, supporting sustainability goals.
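The core mechanic the article describes, collecting raw GPU telemetry and correlating it with the training or inference job running at the time, can be sketched in a few lines. The sample data, field names, and idle threshold below are invented for illustration and are not Virtana's schema or logic.

```python
# Hypothetical sketch: join raw GPU utilization samples to the training
# job active at that moment, then flag idle-GPU intervals of the kind
# that drive the "idle GPU time" metric above.
SAMPLES = [  # (timestamp, gpu_id, utilization_pct)
    (100, "gpu0", 92), (100, "gpu1", 3),
    (160, "gpu0", 88), (160, "gpu1", 2),
]
JOBS = [  # (start_ts, end_ts, job_name, gpu_id)
    (90, 200, "train-llm", "gpu0"),
]

def correlate(samples, jobs, idle_below=5):
    """Attach the active job (if any) to each sample and flag idle GPUs."""
    report = []
    for ts, gpu, util in samples:
        job = next((name for start, end, name, g in jobs
                    if g == gpu and start <= ts <= end), None)
        report.append({"ts": ts, "gpu": gpu, "util": util,
                       "job": job, "idle": util < idle_below})
    return report

idle_gpus = {r["gpu"] for r in correlate(SAMPLES, JOBS) if r["idle"]}
```

In this toy run, gpu1 is consistently under the idle threshold and has no job assigned, which is the pattern a platform like AIFO would surface as reclaimable capacity.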
Apple to release new tools that would let third-party developers create software using Apple AI models and integrate Apple Intelligence across their apps
Apple plans to release a new set of AI products and frameworks at its Worldwide Developers Conference (WWDC) this June, including tools that’ll let third-party developers create software using Apple AI models. Apple’s hope is that expanding its AI tech in this way will draw more attention — and users — as the company looks to catch up with its competitors in the AI space. The new framework will let developers integrate Apple Intelligence across their apps. The company is seeking to first allow developers to use its smaller models. WWDC this year will also reportedly see Apple overhaul its operating systems across iPhone, iPad, and Mac. Apple is also set to release new device-specific capabilities, including one that helps manage battery life, and a new Health app — powered, of course, by AI (although the app reportedly won’t be ready until next year).
Gravitee Topco’s open-source API management platform offers an array of tools for developers that span API design, access, management, deployment and security with support for both asynchronous and synchronous APIs
Digital traffic pipeline management startup Gravitee Topco has closed a $60 million Series C funding round, bringing its total amount raised to date to more than $125 million. The company is the creator of an open-source API management platform that provides developers with the tools they need to easily manage both legacy and newer data streaming protocols. It also provides a wealth of API security tools with its platform. Gravitee’s core offering is split into two products, with the Gravitee API Management tool designed for API publishers, and the Gravitee Access Management offering aimed at the developers who need to use those APIs. Through the two platforms, it provides tools that span API design, access, management, deployment and security. Gravitee can therefore be thought of as a kind of control plane for APIs, which often come with a confusing array of protocols and tools that can quickly overwhelm developers, despite their intention of making life simpler. Companies can deploy Gravitee’s core, open-source offering in the cloud or on-premises, or they can access the premium platform through the startup’s software-as-a-service offering. Its core features include a tool for designing and deploying APIs, mock testing and a dashboard that provides an overview of a team’s API deployments. What makes Gravitee different is that it supports both asynchronous and synchronous APIs, meaning APIs that deliver data at a later point in time, and those that deliver data immediately, in real time.
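The synchronous/asynchronous distinction the article closes on can be made concrete with a minimal sketch: a synchronous API returns its answer on request, while an asynchronous one delivers data later, via a subscriber callback, whenever an event occurs. The class and method names are illustrative and are not Gravitee's API.

```python
# Synchronous style: the caller asks and gets the answer immediately.
class SyncPriceAPI:
    def get_price(self, sku):
        return {"sku": sku, "price": 9.99}

# Asynchronous style: the caller subscribes, and data arrives later,
# pushed to the callback whenever the producer publishes an event.
class AsyncPriceFeed:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, event):
        for cb in self.subscribers:
            cb(event)

received = []
feed = AsyncPriceFeed()
feed.subscribe(received.append)
feed.publish({"sku": "A1", "price": 10.25})  # delivered to all subscribers
```

Supporting both styles behind one control plane means a team can expose a request/response REST endpoint and an event-driven stream without juggling two separate toolchains, which is the value the article attributes to Gravitee.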
Google’s new NotebookLM app adds an AI research assistant to your iPhone; it offers similar insights to ChatGPT and Google Gemini chatbot services but is limited to accessing specific documents and sources
Google’s NotebookLM, an experimental AI-powered notebook, is now available on iOS, iPadOS, and Android. It offers similar insights to ChatGPT and Google Gemini chatbot services but is limited to accessing specific documents and sources. Because it works from those sources, it provides summaries and important data points with a reduced risk of hallucinations. NotebookLM offers services like document questions, summarization, idea generation, and podcast-style overviews, and it can also help users revise course material or take notes on documents.
