BuyerTwin launched its AI-powered platform that creates interactive "twins" of ideal buyer personas, giving businesses direct, unfiltered feedback on their marketing, sales, and product strategies. The platform lets teams test messaging, content, website usability, and more, receiving instant, honest insights as if talking directly to their target audience. BuyerTwin leverages advanced AI and data from its proprietary TwinForce network, which recruits real buyers, to build highly accurate virtual personas. Users can interact with these AI twins in real time to understand customer perspectives deeply. Key capabilities include:
- Website Feedback: instantly see how site copy, design, and navigation feel from the buyer's perspective
- Content Insight: understand which topics, formats, and messaging angles genuinely capture buyer attention and address their needs
- Channel Behavior Analysis: discover where ideal buyers actually spend their time and how they prefer to engage
- Positioning & Messaging Tests: refine value propositions and eliminate confusing jargon to ensure clarity
- Sales Approach Validation: get direct feedback on sales messaging and identify what buyers need to feel confident
- Competitor Analysis: understand how buyers perceive competitor offerings and positioning
- Keyword Discovery: uncover the authentic language and search terms buyers use
Nvidia has launched Parakeet-TDT-0.6B-v2, an automatic speech recognition (ASR) model that can transcribe 60 minutes of audio in 1 second with an average “Word Error Rate” of just 6.05%
Nvidia has launched Parakeet-TDT-0.6B-v2, an automatic speech recognition (ASR) model that can "transcribe 60 minutes of audio in 1 second." This second version currently tops the Hugging Face Open ASR Leaderboard with an average Word Error Rate (the percentage of words the model transcribes incorrectly) of just 6.05%. To put that in perspective, it approaches proprietary transcription models such as OpenAI's GPT-4o-transcribe (a WER of 2.46% in English) and ElevenLabs Scribe (3.3%). The model has 600 million parameters and combines the FastConformer encoder and TDT decoder architectures. It can transcribe an hour of audio in just one second, provided it's running on Nvidia's GPU-accelerated hardware; the benchmark is measured at an RTFx (Real-Time Factor) of 3386.02 with a batch size of 128, placing it at the top of current ASR benchmarks maintained by Hugging Face. Parakeet-TDT-0.6B-v2 is aimed at developers, researchers, and industry teams building applications such as transcription services, voice assistants, subtitle generators, and conversational AI platforms. The model supports punctuation, capitalization, and detailed word-level timestamping, offering a full transcription package for a wide range of speech-to-text needs. Developers can deploy the model using Nvidia's NeMo toolkit; the setup process is compatible with Python and PyTorch, and the model can be used directly or fine-tuned for domain-specific tasks. The permissive CC-BY-4.0 license also allows commercial use, making it appealing to startups and enterprises alike. The model is optimized for Nvidia GPU environments, supporting hardware such as the A100, H100, T4, and V100 boards. While high-end GPUs maximize performance, the model can still be loaded on systems with as little as 2GB of RAM, allowing for broader deployment scenarios.
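For readers unfamiliar with the leaderboard metric, here is a minimal sketch of how Word Error Rate is conventionally computed: the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. This is the standard textbook definition, not Nvidia's evaluation harness.

```python
# Illustrative Word Error Rate (WER) computation -- the metric behind the
# 6.05% figure on the Open ASR Leaderboard. Standard definition:
# (substitutions + deletions + insertions) / reference word count.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of six reference words -> WER of 1/6.
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```

The RTFx figure works the same way in reverse: an RTFx of 3386 means roughly 3386 seconds of audio processed per second of compute, so an hour of audio (3600 s) takes about 1.06 seconds.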
Unblocked is an AI-powered assistant that answers contextual questions about lines of code and helps find the person who made changes to a particular module
Unblocked is an AI-powered assistant that answers contextual questions about lines of code. Unblocked integrates with development environments and apps like Slack, Jira, Confluence, Google Drive, and Notion. The tool gathers intelligence about a company's codebase and helps answer questions such as "Where do we define user metrics in our system?" Developers can also use the platform to search for the person who made changes to a particular module and quickly gain insights from them. Unblocked offers admin controls that can be easily adopted by a company's system admin, and the startup is working on integrating with platforms like Cursor and Lovable to improve code explainability. Beyond this, Unblocked is developing tools that actively help developers with projects rather than simply answering questions. One, Autonomous CI Triage, supports developers in testing code through different scenarios. Unblocked counts companies such as Drata, AppDirect, Big Cartel, and TravelPerk as customers. Unblocked founder Dennis Pilarinos claims that engineers at Drata were able to save one to two hours per week using the platform.
Dremio launches new MCP Server integrating with leading AI models like Claude, enabling agents to seamlessly discover and query data with contextual understanding
Dremio has launched the Dremio MCP Server, a solution that brings AI-native data discovery and query capabilities to the lakehouse. By adopting the open Model Context Protocol (MCP), Dremio enables AI agents to dynamically explore datasets, generate queries, and retrieve governed data in real time. Through MCP, Dremio natively integrates with leading AI models like Claude, enabling agents to seamlessly discover and query data with contextual understanding. Claude-powered agents can dynamically interpret user intent, invoke Dremio's tools, and deliver trusted, real-time insights, all without manual integrations. "The Model Context Protocol is a critical advancement that allows AI systems like Claude to seamlessly interact with enterprise data systems," said Mahesh Murag, Product Manager at Anthropic. "Dremio's implementation of MCP enables Claude to extend its reasoning capabilities directly to an organization's data assets, unlocking new possibilities for AI-powered insights while maintaining enterprise governance." Powered by Dremio's semantic layer, which provides a unified, governed view across all data sources, the Dremio MCP Server gives AI agents seamless access to Dremio's full data environment. This enables agents to:
- Discover datasets and metadata without manual integrations
- Translate natural language into SQL queries and execute them directly
- Automate workflows like reporting, customer segmentation, and operational analysis
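To make the protocol concrete: MCP is built on JSON-RPC 2.0, and a tool invocation from an agent arrives as a `tools/call` request. The sketch below constructs such a message; the tool name `run_sql_query` and its arguments are hypothetical stand-ins, not Dremio's documented MCP tool names.

```python
import json

# Sketch of the JSON-RPC 2.0 message shape MCP uses for tool invocation.
# "run_sql_query" and its argument schema are illustrative assumptions,
# not taken from the Dremio MCP Server's actual tool catalog.

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# An agent translating "revenue by region" intent into a governed SQL call:
request = make_tool_call(
    1, "run_sql_query",
    {"sql": "SELECT region, SUM(revenue) FROM sales GROUP BY region"},
)
print(request)
```

Because the message format is standardized, any MCP-aware model can discover the server's tools at runtime (via `tools/list`) rather than relying on hand-built integrations.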
Etsy blends human-recommended listings with ML and LLMs to expand the listings 20X and create an aesthetically cohesive collection that represents product variety and meets quality standards
Etsy is doubling down on a hybrid approach to artificial intelligence that keeps humans in the loop and ensures shoppers find what they want. The company is pursuing a strategy it calls "algotorial curation," which blends recommendations by Etsy's staff with advanced machine learning algorithms to scale curation across its inventory, Chief Product Officer Nick Daniel said. The process starts with human experts identifying trends and selecting listings that exemplify them. "After a collection is identified, our engineers use machine learning to expand it from roughly 50 human-curated listings to about 1,000. Finally, we use LLMs to make sure the full collection is aesthetically cohesive, represents a variety of products and meets our standards for quality," Daniel explained. The company uses Google's Gemini multimodal model to power these experiences. Despite advances in generative AI, Etsy isn't looking to eliminate humans from the equation; instead, the company sees AI as a way to enhance human insight at scale, Daniel said. "Rather than removing human expertise from our merchandising work as AI becomes more powerful, we're leveraging these tools to amplify the expertise of our team and create a more personalized experience. We're putting human touch — from our buyers to our teams of employees to our sellers — at the center of shopping on Etsy. Because each item on Etsy is listed individually by a real seller, the data we have isn't uniform — we're not like a traditional eCommerce marketplace with a catalog or SKUs. AI can help us bridge this gap. We're leveraging LLMs to extract key product details, like size and color, from listings, which improves search and helps connect the right items to the right buyers," Daniel said. This strategy has yielded measurable results, boosting visibility and sales.
“We used LLMs to generate alt text for listings that didn’t already have it and saw a nearly 5% increase in SEO visits and a nearly 3% increase in conversions to sales attributed to those visits,” he said.
Franz’s Natural Language Query interface powers agentic AI that can understand user intent, reason over complex data, and take meaningful action through built-in GraphRAG capabilities
Franz Inc., an early innovator in AI and supplier of graph database technology for neuro-symbolic AI solutions, has announced AllegroGraph v8.4 with an enhanced AI-powered Natural Language Query interface. AllegroGraph's advanced natural language queries drive agentic AI solutions by enabling more intuitive, human-like interaction between users and intelligent systems, which is critical for agents that need to reason, plan, and act autonomously. "This latest release makes it easier for enterprises to build intelligent agents that can understand user intent, reason over complex data, and take meaningful action—bringing us closer to truly autonomous, explainable AI systems," said Dr. Jans Aasman, CEO of Franz Inc. AllegroGraph v8.4's enhanced Natural Language Query interface lets users ask questions in natural language and automatically converts them into SPARQL queries for precise knowledge graph interrogation. This AI-powered capability depends on the platform's vector database, which contains query examples that help the system learn and improve over time. With this feature, agentic AI applications get built-in GraphRAG capabilities. This release also enhances the collaborative workflow around these Natural Language Query examples with new metadata tracking, and a new tabular view option provides a more structured presentation of query metadata, making it easier to sort, filter, and compare query examples at a glance. This streamlines the process of maintaining the high-quality training examples that drive improved natural language understanding. Other features include:
- Bridging Documents and Graphs
- Security and Access Control
- AI Symbolic Rule Generation
- Knowledge Graph-as-a-Service
- Enhanced Scalability and Performance
- Advanced Knowledge Graph Visualization
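To illustrate the natural-language-to-SPARQL idea, here is a toy sketch. AllegroGraph's actual interface uses an LLM guided by a vector store of query examples; below, the "extracted intent" is hard-coded, and the `ex:` prefix and predicate names are hypothetical example vocabulary, not AllegroGraph's schema.

```python
# Toy natural-language-to-SPARQL step, illustrative only. A real pipeline
# would have an LLM extract the entity type, property, and value from the
# user's question; here we hard-code that extraction to show the query shape.

def build_sparql(entity_type: str, property_name: str, value: str) -> str:
    # "ex:" and the predicate names are made-up example vocabulary.
    return (
        "PREFIX ex: <http://example.org/schema#>\n"
        "SELECT ?item WHERE {\n"
        f"  ?item a ex:{entity_type} ;\n"
        f"        ex:{property_name} \"{value}\" .\n"
        "}"
    )

# "Which customers are located in Berlin?" -> intent extracted by the LLM:
print(build_sparql("Customer", "city", "Berlin"))
```

The value of the vector database of past examples is that the LLM can be shown structurally similar question/query pairs at generation time, which is the GraphRAG pattern the release refers to.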
New survey says 52% of Americans use BNPL for everyday purchases; electronics, furniture and home goods are the most popular items purchased through BNPL, with an average minimum price of $250
According to a survey by PartnerCentric.com, 52% of Americans now rely on installment-based payment services to cover everyday purchases, including groceries. The most popular items purchased through BNPL are medium to large products like electronics, furniture and home goods, with an average minimum price of $250. But 31% of consumers also reported using these programs for essentials like groceries, highlighting the financial strain many households are facing. BNPL programs are especially popular among younger Americans, with 59% of Gen Z and 58% of millennials opting for flexible payment methods. The survey also found that 35% of consumers plan to use BNPL more frequently in 2025, a figure that jumps to 65% among Gen Z. Popular BNPL providers like Afterpay, Affirm, PayPal Pay in 4 and Klarna have become critical financial tools for many Americans, offering flexible, interest-free installment plans that help consumers manage rising expenses. These options won't affect credit scores if payments are made on time. Economic uncertainty appears to be adding to the trend: 15% of survey participants said they tried BNPL in 2025 due to the increased cost of living.
TensorStax’s data engineering AI agents can design and deploy data pipelines through structured and predictable orchestration using a deterministic control layer that sits between the LLM and the data stack
Startup TensorStax is applying AI agents, which can perform tasks on behalf of users with minimal intervention, to the challenge of data engineering. The startup has created a purpose-built abstraction layer to ensure its AI agents can design, build and deploy data pipelines with a high degree of reliability. Its proprietary LLM Compiler acts as a deterministic control layer that sits between the LLM and the data stack to facilitate structured and predictable orchestration across complex data systems. Among other things, it validates syntax, normalizes tool interfaces and resolves dependencies ahead of time. According to internal testing, this boosts the success rates of its AI agents from 40%-50% to as high as 90% across a variety of data engineering tasks. The result is far fewer broken data pipelines, giving teams the confidence to offload various complicated engineering tasks to AI agents. TensorStax says its AI agents can help mitigate the operational complexities involved in data engineering, freeing up engineers to focus on more complex and creative tasks, such as modeling business logic, designing scalable architectures and enhancing data quality. By integrating directly with each customer's existing data stack, TensorStax makes it possible to introduce AI agent data engineers without disrupting workflows or rebuilding data infrastructure. These agents are designed to work with dozens of common data engineering tools and respond to simple commands. Constellation Research Inc. analyst Michael Ni said TensorStax appears to be architecturally different from others, with its LLM Compiler, its integration with existing tools and its no-customer-data-touch approach.
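The "deterministic control layer" idea can be sketched in a few lines: before LLM-generated SQL ever touches the data stack, run rule-based checks that either pass the statement through or reject it with concrete errors. This is a minimal sketch of the gating concept only; TensorStax's LLM Compiler is proprietary, and the table catalog below is a made-up stand-in.

```python
import re
import sqlite3

# Hypothetical snapshot of the catalog the agent is allowed to touch.
KNOWN_TABLES = {"orders", "customers"}

def validate_llm_sql(sql: str) -> list[str]:
    """Deterministic pre-flight checks on LLM-generated SQL, run before any
    execution. Illustrative of the control-layer idea, not TensorStax's code."""
    errors = []
    # Syntax-level gate: the statement must at least be complete SQL.
    if not sqlite3.complete_statement(sql.strip()):
        errors.append("statement is not a complete SQL statement")
    # Crude dependency check: every referenced table must exist in the catalog.
    for table in re.findall(r"\b(?:FROM|JOIN)\s+(\w+)", sql, re.IGNORECASE):
        if table.lower() not in KNOWN_TABLES:
            errors.append(f"unknown table: {table}")
    return errors

print(validate_llm_sql("SELECT * FROM orders;"))       # passes: []
print(validate_llm_sql("SELECT * FROM order_items;"))  # rejected: unknown table
```

Rejected statements would be sent back to the LLM with the error list, so failures are caught deterministically instead of surfacing later as broken pipelines.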
Survey reveals growing preference for seamless and invisible payments: in-app purchasing ability is most desired for food & beverage; millennials and Gen Zers prefer recurring payments
Among parents with children under 25 living at home, 72% say they would prefer to pay for everything through an app if they could, compared to 53% of consumers overall, according to a new survey from embedded payments infrastructure company NMI. Nearly two-thirds (64%) of Gen Z and millennials say they’ll take their business elsewhere if in-app payments aren’t an option. Sixty-eight percent of those surveyed want to use in-app payments for food and beverage purchases, like restaurants, coffee shops, bars and delivery services. Retail lands in the number two spot, with 53% of consumers opting for in-app payments in this sector. On average, 37% of consumers are interested in using apps to pay for everyday services like car washes and dry cleaning, and 30% for home services such as landscaping and plumbing. Among parents, the interest in these sectors rises significantly to 49% and 42%, respectively. The subscription model is especially popular among millennials (69%) and Gen Zers (66%), who prefer recurring payments for frequently used goods and services. There is also a rising preference for invisible transactions, as 64% of respondents embrace biometric authentication like Face ID or fingerprints, and 59% say the best transactions are the ones that feel like they never happened. Four-in-10 (43%) baby boomers are uncomfortable using biometric authentication, while 40% embrace it for its speed, security and convenience. In-app payments are set to play an even bigger role in business choice throughout 2025. Nearly six-in-10 (59%) consumers say it’s important that merchants offer app-based payments, and half (50%) would choose a business that does over one that doesn’t. Already, half of respondents use in-app payments weekly or more, and 55% expect to increase their usage this year.
Microsoft Copilot AI for SharePoint can access the contents of an encrypted spreadsheet, including restricted passwords, by circumventing download restrictions and information protection principles
Pen Test Partners, a company that specializes in security consulting, and specifically penetration testing, took a close look at how Microsoft's Copilot AI for SharePoint could be exploited. The results were, to say the least, concerning. Most notably, an encrypted spreadsheet that SharePoint had, quite rightly, refused to let the hackers open by any method they tried was broken wide open when they asked the Copilot AI agent to go get it. "The agent then successfully printed the contents," said Jack Barradell-Johns, a red team security consultant with the company, "including the passwords allowing us to access the encrypted spreadsheet." Barradell-Johns explained that during the engagement, the red teamers encountered a file named passwords.txt located adjacent to an encrypted spreadsheet containing sensitive information. Naturally, they tried to access the file. Just as naturally, Microsoft SharePoint said nope, no way. "Notably," Barradell-Johns said, "in this case, all methods of opening the file in the browser had been restricted." Yet the download restrictions that are part of the restricted view protections were circumvented via Copilot, and the content of the Copilot chats could be freely copied. In response, Microsoft stated: "SharePoint information protection principles ensure that content is secured at the storage level through user-specific permissions and that access is audited. This means that if a user does not have permission to access specific content, they will not be able to view it through Copilot or any other agent. Additionally, any access to content through Copilot or an agent is logged and monitored for compliance and security."