DigitalOcean Holdings announced the general availability of its DigitalOcean GradientAI™ Platform, a managed AI platform that enables developers to combine their data with foundation models from Anthropic, Meta, Mistral and OpenAI to add customized GenAI agents to their applications. The DigitalOcean GradientAI Platform is a fully managed service where customers do not need to manage infrastructure and can deploy Generative AI capabilities to their applications in minutes. With the DigitalOcean GradientAI Platform, all tools and data are available through one simple UI, with integrations for storage, functions, and databases, all powered by DigitalOcean’s GPU cloud. This empowers customers to build AI agents that can reduce costs or streamline user experiences, without requiring deep AI expertise on their team. The DigitalOcean GradientAI Platform is built with simplicity in mind to get GenAI-backed experiences into customer applications quickly. By leveraging retrieval augmented generation (RAG), customers can quickly and easily create GenAI agents for use within their applications. These agents offer powerful capabilities that can be enhanced through function routing, to integrate with third-party APIs, and agent routing, to connect with other GenAI agents within the platform. Additionally, with Serverless LLM Inference, customers can integrate models from multiple providers via one API, with usage-based billing and no infrastructure to manage.
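To make the Serverless LLM Inference idea concrete, here is a minimal sketch of the one-API pattern, assuming an OpenAI-compatible chat-completions interface; the base URL, key, and model names below are illustrative placeholders, not confirmed DigitalOcean values.

```python
# Minimal sketch of the one-API pattern, assuming an OpenAI-compatible
# chat-completions interface. The base URL, key, and model name are
# illustrative placeholders, not confirmed DigitalOcean values.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.do/v1",  # hypothetical serverless inference endpoint
    api_key="YOUR_MODEL_ACCESS_KEY",
)

# Switching providers is just a different model string; billing is
# usage-based and there is no inference infrastructure to manage.
response = client.chat.completions.create(
    model="llama-3.3-70b-instruct",  # could equally name an Anthropic or Mistral model
    messages=[{"role": "user", "content": "Summarize our refund policy for a customer."}],
)
print(response.choices[0].message.content)
```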
Contify’s agentic AI delivers trusted, decision-ready market and competitor insights by continuously analyzing unstructured updates from millions of verified external and internal sources and connecting information through a Knowledge Graph
AI-native Market and Competitive Intelligence (M&CI) platform Contify launched Athena, its proprietary Agentic AI insights engine. Athena eliminates manual M&CI work and delivers trusted, decision-ready market and competitor insights with enterprise-grade accuracy, enabling organizations to make faster, more confident decisions and compete with greater agility. Generic AI assistants like ChatGPT, Gemini, or Perplexity often hallucinate, respond from unverified web content, and even fabricate sources, making them unsuitable for enterprise use; Athena, by contrast, is built on a foundation of data integrity coupled with strict AI-usage guardrails. It continuously analyzes unstructured updates from millions of verified external and internal sources, synthesizes what matters, and connects information through a proprietary Knowledge Graph, which stores the organization’s context, to produce reliable insights. Mohit Bhakuni, CEO of Contify, says: “Intelligence professionals are often stretched thin, grappling with overwhelming data, frequent stakeholder requests, and generic AI assistant limitations. Athena transforms this by automating grunt work and providing rich, verified insights. It frees them to focus on strategic priorities and become trusted advisors their organizations rely on.”
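As an illustration of the general idea only (Contify’s Knowledge Graph is proprietary, and every entity and relation below is invented), here is a toy sketch of how stored organizational context lets an incoming update be connected to the entities it actually matters for.

```python
# Toy illustration: a graph of stored organizational context links the
# entity an update mentions to related entities. All names are invented.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("AcmeCorp", "Widgets", relation="competes_in")
kg.add_edge("OurCompany", "Widgets", relation="competes_in")

def contextualize(update_entity: str) -> list[str]:
    """Link the entity an update mentions to related entities in the graph."""
    markets = [m for _, m in kg.out_edges(update_entity)]
    rivals = {u for m in markets for u, _ in kg.in_edges(m) if u != update_entity}
    return [f"{update_entity} shares market {m!r} with {r}" for m in markets for r in rivals]

print(contextualize("AcmeCorp"))  # ["AcmeCorp shares market 'Widgets' with OurCompany"]
```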
New Liquid Foundation Models can be deployed on edge devices without the need for the extended infrastructure of connected systems, and are superior to transformer-based LLMs on cost, performance, and operational efficiency
If you can simply run operations locally on a hardware device, that creates all kinds of efficiencies, including some related to energy consumption and fighting climate change. Enter the rise of new Liquid Foundation Models (LFMs), which depart from the traditional transformer-based LLM design. The new LFM models already boast superior performance to transformer-based models of comparable size, such as Meta’s Llama 3.1-8B and Microsoft’s Phi-3.5 (3.8B). The models are engineered to be competitive not only on raw performance benchmarks but also in terms of operational efficiency, making them ideal for a variety of use cases, from enterprise-level applications in fields such as financial services, biotechnology, and consumer electronics, to deployment on edge devices. These post-transformer models can run on devices such as cars, drones, and planes, and in applications like predictive finance and predictive healthcare. LFMs can do the job of a GPT while running locally on devices. If they’re running offline on a device, you don’t need the extended infrastructure of connected systems: no data center, no cloud services, none of that. In essence, these systems can be low-cost and high-performance, and that’s just one aspect of how people talk about applying a “Moore’s law” concept to AI. It means systems are getting cheaper, more versatile, and easier to manage, quickly.
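For a sense of what “no data center, no cloud services” looks like in practice, here is a minimal sketch of fully offline, on-device inference; llama-cpp-python serves as a generic stand-in runtime (it is not how LFMs are actually packaged), and the weights path is hypothetical.

```python
# Minimal sketch of fully offline, on-device inference: no data center,
# no cloud services. llama-cpp-python is a generic stand-in runtime,
# and the local weights file is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="models/small-edge-model.gguf", n_ctx=2048)

# Everything below runs on the device itself (laptop, car, drone, ...).
out = llm("Q: Flag the anomaly in this sensor log: 3.1, 3.0, 9.8, 3.2\nA:",
          max_tokens=64)
print(out["choices"][0]["text"])
```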
TNG Technology Consulting’s adaptation of DeepSeek’s open-source model R1-0528 is 200% faster, scores at upwards of 90% of R1-0528’s intelligence benchmark scores, and generates answers with < 40% of R1-0528’s output token count
The latest version of DeepSeek’s hit open source model, R1-0528, is already being adapted and remixed by other AI labs and developers, thanks in large part to its permissive Apache 2.0 license. German firm TNG Technology Consulting GmbH released one such adaptation: DeepSeek-TNG R1T2 Chimera, the latest model in its Chimera large language model (LLM) family. R1T2 delivers a notable boost in efficiency and speed, scoring at upwards of 90% of R1-0528’s intelligence benchmark scores while generating answers with less than 40% of R1-0528’s output token count. That means it produces shorter responses, translating directly into faster inference and lower compute costs. This gain is made possible by TNG’s Assembly-of-Experts (AoE) method, a technique for building LLMs by selectively merging the weight tensors (internal parameters) from multiple pre-trained models. R1T2 is constructed without further fine-tuning or retraining. It inherits the reasoning strength of R1-0528, the structured thought patterns of R1, and the concise, instruction-oriented behavior of V3-0324, delivering a more efficient, yet capable model for enterprise and research use.
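A heavily simplified sketch of the idea behind Assembly-of-Experts follows: a child model is assembled by merging parent weight tensors, with no fine-tuning or retraining. TNG’s actual per-tensor selection rules are more sophisticated; the uniform interpolation and the coefficients here are invented for illustration.

```python
# Heavily simplified sketch of Assembly-of-Experts (AoE): build a child
# model by merging weight tensors from pre-trained parents, with no
# fine-tuning or retraining. The uniform interpolation is illustrative;
# TNG selects and merges tensors with far more care than this.
import torch

def assemble(parents: list[dict[str, torch.Tensor]],
             coeffs: list[float]) -> dict[str, torch.Tensor]:
    """Merge parent state dicts tensor-by-tensor as a weighted average."""
    merged = {}
    for name in parents[0]:
        # A real AoE merge can take some tensors (e.g. routed experts)
        # wholesale from one parent; here every tensor is interpolated.
        merged[name] = sum(c * p[name] for c, p in zip(coeffs, parents))
    return merged

# Toy stand-ins for parent checkpoints (real ones hold billions of params).
r1_0528_like = {"layer.weight": torch.randn(4, 4)}
v3_0324_like = {"layer.weight": torch.randn(4, 4)}

# Lean toward R1-0528's reasoning weights, keep some of V3-0324's brevity.
child = assemble([r1_0528_like, v3_0324_like], coeffs=[0.7, 0.3])
```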
Dust helps enterprises build AI agents capable of taking real actions across business systems and secures sensitive information by separating data access rights from agent usage rights
AI platform Dust, which helps enterprises build AI agents capable of completing entire business workflows, has reached $6 million in annual revenue, a six-fold increase from $1 million just one year ago. The company’s rapid growth signals a shift in enterprise AI adoption from simple chatbots toward sophisticated systems that can take concrete actions across business applications. The startup has been selected as part of Anthropic’s “Powered by Claude” ecosystem, highlighting a new category of AI companies building specialized enterprise tools on top of frontier language models rather than developing their own AI systems from scratch. Instead of simply answering questions, Dust’s AI agents can automatically create GitHub issues, schedule calendar meetings, update customer records, and even push code reviews based on internal coding standards, all while maintaining enterprise-grade security protocols. The shift toward AI agents that can take real actions across business systems introduces new security complexities that didn’t exist with simple chatbot implementations. Dust addresses this through a “native permissioning layer” that separates data access rights from agent usage rights. The company implements enterprise-grade infrastructure with Anthropic’s Zero Data Retention policies, ensuring that sensitive business information processed by AI agents isn’t stored by the model provider.
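Dust has not published its implementation, but the separation it describes can be sketched as two independent checks, one on agent usage rights and one on data access rights; all names and policy shapes below are hypothetical.

```python
# Hypothetical sketch of a "native permissioning layer": whether a user
# may USE an agent is checked separately from whether the agent may READ
# a data source on that user's behalf. Names and policy shapes invented.
from dataclasses import dataclass, field

@dataclass
class PermissionLayer:
    agent_users: dict[str, set[str]] = field(default_factory=dict)   # agent -> users who may run it
    data_readers: dict[str, set[str]] = field(default_factory=dict)  # source -> users who may read it

    def can_run(self, user: str, agent: str, sources: list[str]) -> bool:
        # Agent usage right: may this user invoke the agent at all?
        if user not in self.agent_users.get(agent, set()):
            return False
        # Data access right: the agent may only touch sources this user
        # can read, even if other users of the same agent can read more.
        return all(user in self.data_readers.get(s, set()) for s in sources)

perms = PermissionLayer(
    agent_users={"crm-updater": {"alice", "bob"}},
    data_readers={"sales-crm": {"alice"}},
)
print(perms.can_run("bob", "crm-updater", ["sales-crm"]))  # False: bob may use the agent, not the CRM
```

The upshot of the separation is that sharing an agent with a colleague never implicitly shares the data sources it reads on a given user’s behalf.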
Agent2.AI’s AI agent can instantly turn complex research tasks into usable outputs in multiple formats, like structured spreadsheets and presentation slides, through transparent, step-by-step breakdowns of how it searched, evaluated sources, and reached conclusions
Agent2.AI has launched Super Agent, a powerful new AI tool designed to help users tackle complex research tasks and instantly turn them into usable outputs, like structured spreadsheets, presentation slides, and more. What sets Super Agent apart is its open process. Every step in its reasoning is visible: users can review, edit, or guide the workflow in real time. Behind the scenes, multiple AI models collaborate on each task. The system compares their outputs, refines them, and delivers a final version that reflects stronger reasoning from multiple angles (a minimal sketch of this compare-and-refine loop follows the list below). Super Agent fits into existing workflows with support for formats like Excel, PowerPoint, Docs, Markdown, and more. And when deeper context is needed, users can securely log in to enterprise tools within a virtual machine, allowing the agent to factor in private business data alongside public research. The Agent2.AI Super Agent is designed to take a prompt and deliver usable results across multiple formats and tools. Some examples of what it can do include: Deep Research: Transparent, step-by-step breakdowns of how the agent searched, evaluated sources, and reached its conclusions;
AI Sheets: Structured spreadsheets that organize research findings, metrics, and summaries. Exportable with one click; AI Slides: Presentation decks built from research or reports, complete with titles, visuals, and speaker notes; Other Outputs: From timelines and tables to emails and internal docs, the agent adapts its output based on what the user needs.
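The compare-and-refine loop referenced above can be sketched roughly as follows, assuming any OpenAI-compatible client; the model names and the judging prompt are assumptions, not Agent2.AI’s actual orchestration.

```python
# Rough sketch of a compare-and-refine loop over multiple models. Model
# names and the judging prompt are assumptions, not Agent2.AI internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    r = client.chat.completions.create(model=model,
                                       messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def super_agent(task: str, drafters: tuple[str, ...] = ("gpt-4o", "gpt-4o-mini")) -> str:
    # Step 1: independent drafts from several models (each visible to the user).
    drafts = [ask(m, task) for m in drafters]
    # Step 2: one model compares the drafts and writes the refined final answer.
    merge_prompt = (f"Task: {task}\n\n"
                    + "\n\n".join(f"Draft {i + 1}:\n{d}" for i, d in enumerate(drafts))
                    + "\n\nCompare the drafts and write one refined answer.")
    return ask(drafters[0], merge_prompt)
```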
Instacart’s rewards debit card lets its contract shoppers get free, automatic payouts of their earnings directly into their Shopper Rewards bank account after every batch they complete
Instacart has launched a rewards debit card for its contract “shopper” workers. The Instacart Shopper Rewards Card, which debuted July 1 in partnership with workforce payments platform Branch, lets these workers get free, automatic payouts of their earnings. “We’re doubling down on our dedication to shopping excellence by empowering and rewarding shoppers who consistently deliver exceptional service to customers,” Daniel Danker, chief product officer at Instacart, said. “Instacart shoppers are shopping experts, and they balance efficiency, empathy and skill to serve their communities every day. Through the Cart Star refresh and the new Shopper Rewards Card, we’re recognizing and supporting their incredible work, while providing valuable resources to help shoppers thrive both on and off the platform.” The program lets shoppers have their earnings deposited directly into their Shopper Rewards bank account for free after every batch they complete. If these workers choose to use a different bank account, they’ll be charged $1.50 for the Instant Cashout service. Instacart will roll out the card to its U.S. shoppers in two phases, first in October and again in April of next year. The card is part of Instacart’s Cart Star program.
Clarifai’s tool allows models or MCP tools to run anywhere, on local machines, on-premise servers, or private cloud clusters, and connects them directly to its platform via a publicly accessible API, enabling developers to build multistep workflows by chaining local models
Intelligent application development startup Clarifai Inc. has launched AI Runners, a new offering designed to provide developers and MLOps engineers with uniquely flexible options for deploying and managing their AI models. AI Runners allows users to connect models running on their local machines or private servers directly to Clarifai’s platform via a publicly accessible application programming interface. The new offering assists in dealing with rising demands for computing power at a time when agentic AI and protocols such as Model Context Protocol strain computing resources. The idea with AI Runners is to provide a cost-effective and secure solution for managing the escalating demands of modern AI workloads. AI Runners give developers and enterprises flexibility and control by allowing models or MCP tools to run anywhere: on local machines, on-premise servers, or private cloud clusters. The setup ensures sensitive data and custom models remain within the user’s environment while still benefiting from Clarifai’s platform, eliminating vendor lock-in. Developers can use AI Runners to instantly serve their custom models through Clarifai’s scalable and publicly accessible API, making it easy to integrate AI into applications and build advanced, multistep workflows by chaining local models. “AI Runners is a pivotal feature that sets Clarifai apart, as it is currently the only platform offering this capability, providing a crucial competitive advantage,” said Chief Technology Officer Alfredo Ramos.
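Clarifai’s actual AI Runners SDK is not reproduced here; the sketch below only illustrates the pattern under stated assumptions (the endpoints and payloads are hypothetical): a lightweight runner on your own hardware pulls work from the platform’s public API, runs the model locally so data never leaves your environment, and posts results back for remote callers.

```python
# Conceptual sketch only: endpoints and payloads are hypothetical, not
# Clarifai's actual AI Runners SDK. A runner on your own machine pulls
# work routed through the platform's public API, executes the model
# locally (data stays in your environment), and posts results back.
import requests

PLATFORM = "https://api.example-platform.com"  # hypothetical
HEADERS = {"Authorization": "Key YOUR_PAT"}

def local_model(text: str) -> str:
    return text.upper()  # stand-in for your private on-prem model

while True:
    # Long-poll for the next request routed to this runner.
    job = requests.get(f"{PLATFORM}/runners/my-runner/work",
                       headers=HEADERS, timeout=60).json()
    result = local_model(job["input"])
    requests.post(f"{PLATFORM}/runners/my-runner/results/{job['id']}",
                  headers=HEADERS, json={"output": result})
```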
Fabi.ai’s feature addresses the challenges of static dashboards and restricted business workflows associated with legacy BI by automating the delivery of personalized, AI-enhanced insights directly to the tools data teams use daily
Fabi.ai announced the launch of Workflows, a revolutionary data insights pipeline feature that enables data and product teams to build automated, intelligent workflows that deliver personalized insights directly to stakeholders’ preferred tools. Unlike legacy BI platforms that create “dashboard graveyards,” Workflows meets business users where they actually work, in Slack, email, and Google Sheets, while leveraging AI in the data analysis process to generate meaningful summaries and actionable recommendations. The product addresses three critical failures of legacy BI: restricted data access that ignores real business workflows, misaligned incentives that prioritize seat sales over insight sharing, and the creation of static dashboards that users ultimately abandon for spreadsheets. Workflows transforms this paradigm by automating the delivery of fresh, AI-enhanced insights directly to the tools teams use daily, without forcing stakeholders to earn an advanced degree in the vendor’s tooling. Key capabilities of Workflows include (a minimal end-to-end sketch follows the list): Universal Data Connectivity: Connect to any data source including Snowflake, Databricks, MotherDuck, Google Sheets, Airtable, and more; Integrated Processing Tools: SQL for querying, Python for advanced analysis, and AI for natural language processing and insight generation working seamlessly together; Smart Distribution: Automatically push AI-generated, customized insights via email, Slack, or Google Sheets on configurable schedules; AI-Powered Analysis: Leverage AI to process unstructured data, extract insights from notes and comments, and generate executive summaries; Python-Native Architecture: Enterprise-grade security with scalable AI processing capabilities
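The minimal end-to-end sketch promised above, under stated assumptions: a placeholder SQLite warehouse, an OpenAI model for the summary step, and a hypothetical Slack webhook stand in for Fabi.ai’s connectors.

```python
# Minimal sketch of the Workflows pattern: pull fresh data with SQL,
# have an LLM turn it into a summary, push the result to Slack. The
# database, model, and webhook URL are placeholders, not Fabi.ai internals.
import sqlite3
import requests
from openai import OpenAI

def run_workflow():
    # SQL step: query the warehouse (placeholder SQLite file).
    rows = sqlite3.connect("warehouse.db").execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
    # AI step: generate an executive summary of the fresh numbers.
    summary = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Write a 3-bullet executive summary of: {rows}"}],
    ).choices[0].message.content
    # Distribution step: deliver where people already work (hypothetical webhook).
    requests.post("https://hooks.slack.com/services/T000/B000/XXXX",
                  json={"text": summary})

run_workflow()  # in production this would run on a configurable schedule
```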
Cloudflare’s pay per crawl service allows content owners to charge AI crawlers for access using HTTP 402 Payment Required responses, with options to allow free access, charge configured prices, or block access entirely; the service relies on Cloudflare’s DNS proxying to function
Cloudflare has launched a pay per crawl service for content creators and AI companies, offering a new mechanism to monetize digital assets. The service addresses concerns from publishers who want compensation for their contributions to AI training datasets. The system allows content owners to charge AI crawlers for access using HTTP 402 Payment Required responses, providing three options: allow free access, charge configured prices, or block access entirely. The service operates through Cloudflare’s global network infrastructure and requires publishers to use Cloudflare’s DNS proxying for functionality. Applications are accepted through a dedicated signup portal. The service addresses the binary choice between complete blocking and uncompensated access, creating a third monetization option for digital content creators. Cloudflare serves as the Merchant of Record, handling billing event recording when crawlers make authenticated requests with payment intent. The company aggregates events, charges crawlers, and distributes earnings to publishers, simplifying financial relationships for smaller publishers lacking individual negotiation leverage. The system anticipates future agentic applications where intelligent agents receive budgets for acquiring relevant content. Pay per crawl represents one solution in the expanding toolkit for content protection, as research indicates AI search visitors provide 4.4 times higher value than traditional organic traffic, creating economic incentives for controlled access rather than complete blocking. The development coincides with Google’s AI Mode expansion and enhanced content labeling requirements. Content creators interested in pay per crawl can apply for private beta access through Cloudflare’s signup portal.
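From a crawler’s perspective, the HTTP 402 flow might look like the sketch below; the header names are illustrative assumptions rather than Cloudflare’s exact protocol.

```python
# Sketch of the HTTP 402 flow from the crawler's side. Header names are
# illustrative assumptions, not Cloudflare's exact protocol: a 402
# response carries the publisher's configured price, and the crawler
# retries with payment intent if the price is acceptable.
import requests

def paid_fetch(url: str, max_price_usd: float) -> str | None:
    r = requests.get(url)
    if r.status_code != 402:  # free or already-authorized access
        return r.text
    price = float(r.headers.get("crawler-price", "inf"))  # hypothetical header
    if price > max_price_usd:
        return None  # too expensive; respect the publisher's terms
    # Retry declaring payment intent; billing is settled by the Merchant
    # of Record when the authenticated request succeeds.
    r = requests.get(url, headers={"crawler-exact-price": str(price)})  # hypothetical header
    return r.text if r.ok else None
```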