Ugentic AI is launching an AI software product called UgenticIQ. Users can train UgenticIQ's AI agents on their own knowledge and data, allowing the software to perform tasks autonomously with a similar level of skill; agents can also be trained on other people's expertise. UgenticIQ is not a chatbot or a conventional generative AI tool, where the quality of the output depends on how well the user's prompts are structured. Instead, it is a no-code AI agent builder that handles tasks on behalf of users, the first of its kind built by Ugentic AI. Agentic AI differs from chatbots and other forms of generative AI because an agent can work like an online version of its user: it does not just answer questions, it acts on the user's behalf to get a job done. When a chatbot receives an instruction, it processes it and returns an answer or a solution, and that is all. An AI agent, by contrast, can be told to research, say, restaurants in Chicago, collect their contact details for cold-email outreach, find their social media accounts, check their latest events or achievements, and write a personalized email for each restaurant to be sent at a specific time. An agent built with UgenticIQ will find those details, write the outreach emails, and send them according to the instructions it has been given. UgenticIQ will complete a task and can also have a second AI agent judge whether the work meets the required standard. Beyond outreach, Ugentic AI's agents can create image designs and text posts, publish to social media platforms, and gather data, help analyze it, and execute based on that analysis.
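The chatbot-versus-agent distinction described above can be sketched in a few lines. This is a hypothetical illustration, not UgenticIQ's actual API: the tool names and data are invented, and real agents would call live services rather than mock functions. The point is the shape of the difference — a chatbot returns one answer per prompt, while an agent chains tool calls (research, personalize, act) toward a goal.

```python
def chatbot(prompt: str) -> str:
    # A chatbot processes one prompt and returns one answer -- that is all.
    return f"Answer to: {prompt}"

# An agent instead works through a plan, calling tools until the job is done.
def find_restaurants(city: str) -> list[dict]:
    # Mock research tool; a real agent would search the web.
    return [{"name": "Bistro A", "email": "a@example.com"}]

def draft_email(restaurant: dict) -> str:
    # Mock writing tool; a real agent would personalize from live data.
    return f"Hi {restaurant['name']}, congratulations on your latest event!"

def run_agent(task: str, city: str) -> list[dict]:
    sent = []
    for r in find_restaurants(city):                    # step 1: research
        email = draft_email(r)                          # step 2: personalize
        sent.append({"to": r["email"], "body": email})  # step 3: act (queue the send)
    return sent

outbox = run_agent("cold outreach", "Chicago")
```

The loop body is where agentic systems differ from chat: each iteration is an action taken on the user's behalf, not a reply waiting for the next prompt.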
CoreWeave acquires Y Combinator’s OpenPipe to combine GPU infrastructure with reinforcement learning tools; targeting enterprises developing custom AI agents via ART toolkit
CoreWeave has struck an agreement to acquire OpenPipe, a Y Combinator-backed startup that helps enterprises develop customized AI agents with reinforcement learning. Brian Venturo, co-founder of CoreWeave, said: “By combining OpenPipe’s advanced self-learning tools with CoreWeave’s high-performance AI cloud, we’re expanding our platform to give developers at AI labs and beyond an important advantage in building scalable intelligent systems.” OpenPipe develops a popular open source toolkit for creating AI agents called ART (Agent Reinforcement Trainer). While CoreWeave’s biggest customers include leading AI labs such as OpenAI, the company is also trying to appeal to smaller enterprises. Reinforcement learning has proven a strong way to improve an AI model’s performance on a specific task; the idea with these enterprise products is to train AI agents specifically for a company’s needs. This kind of customer-specific training requires a lot of computing resources, and by acquiring OpenPipe, CoreWeave hopes to both power and offer such services. OpenPipe’s team will be joining CoreWeave, and customers of OpenPipe will become CoreWeave customers.
Wisdom AI launches Proactive Agents that use a “knowledge fabric” to answer questions from human workers; these automated data analysts monitor KPIs, detect anomalies and deliver natural language insights
AI-native data insights startup Wisdom AI has launched Proactive Agents that act like automated data analysts, working around the clock to proactively learn, prepare analyses and reports, and make decisions based on the insights they surface. The new agents can carry out data analysis work without human supervision, freeing up human workers to focus on more strategic work, the company said. The startup uses AI “reasoning” models in combination with a knowledge fabric, uniting disconnected resources for deeper analysis with added context in order to generate more meaningful insights. The Proactive Agents utilize the knowledge fabric to find answers to questions from human workers, such as marketing or sales team personnel. Alternatively, they can simply wait in the background, keeping a watchful eye on key business metrics, and issue alerts when certain milestones, thresholds or targets are met. They work by autonomously scanning and analyzing KPIs to detect meaningful deviations. When this happens, they’ll carry out an in-depth analysis to explain why it occurred, and then provide a simple, natural language summary for human decision-makers. This can help businesses make decisions faster, because they won’t have to wait until a human data analyst finds the time to perform the necessary investigation. One of the main advantages of Wisdom AI’s Proactive Agents is their ability to learn continuously as they perform their work, remembering earlier analyses they’ve carried out, including what caused certain anomalies to occur. They’re programmed to flag any meaningful shifts in business metrics and then dig deeper into the reasons why by carrying out exhaustive root cause analysis. As part of this, they’ll identify which segments of the business were affected and over what time frame. And they’ll dig up evidence to support their findings, so they can be verified by any human worker.
Once the agents are ready to report their findings, they’ll transform the patterns they’ve identified into a concise, actionable alert, complete with any useful charts, SQL queries and recommendations, the company said.
Sonatus offers OEMs a unified platform for scalable in-vehicle edge AI deployment, integrating with silicon, cloud, and AI model providers for optimized performance
Sonatus has announced Sonatus AI Director, a platform that enables OEMs to deploy AI at the vehicle edge. With the automotive AI market projected to reach $46B annually by 2034, in-vehicle edge AI software and services will become increasingly important. Sonatus AI Director provides an end-to-end toolchain for model training, validation, optimization, and deployment, integrating with vehicle data and offering cloud-based remote monitoring of model performance. This comprehensive toolchain lowers barriers to edge AI adoption and innovation, reducing effort from months to weeks or days. By utilizing real-time and contextual vehicle data, Sonatus AI Director enables OEMs to unlock new features and capabilities, enabling adaptive driving experiences, proactive maintenance, improved efficiency, and optimal vehicle performance. It supports a range of model types, including physics- and neural network-based models, as well as Small and Large Language Models (SLMs/LLMs), catering to diverse vehicle use cases. Sonatus AI Director solves key challenges the industry faces in deploying in-vehicle edge AI:
Vehicle manufacturers (OEMs) gain a consistent framework that enables them to deploy models from different vendors with a single platform and across vehicle models.
Tier-1 suppliers can optimize the systems they deliver to OEMs and more easily leverage AI across hardware and software technologies.
Silicon providers can help their customers take full advantage of the compute and AI acceleration capabilities their chips offer.
Suppliers and AI model vendors gain access to the needed input data from across different subsystems while protecting the intellectual property of their models.
OpenAI leader debunks Responses API myths and urges developers to migrate for performance and cost, because it enables tool-calling chain-of-thought, higher cache utilization, and ZDR-compliant stateless usage
Too many developers are still misinformed about the Responses API and avoiding usage as a result, according to Prashant Mital, Head of Applied AI at OpenAI. He went on to lay out several “myths” about the API. Myth one: “it’s not possible to do some things with responses.” His response: “Responses is a superset of completions. Anything you can do with completions, you can do with responses – plus more” Myth two was that Responses always keeps state and therefore cannot be used in strict cases where the customer (or their end-users/partners) must adhere to Zero Data Retention (ZDR) policies. In these kinds of setups, a company or developer requires that no user data is stored or retained on the provider’s servers after the request is processed. In such contexts, every interaction must be stateless, meaning all conversation history, reasoning traces, and other context management happen entirely on the client side, with nothing persisted by the API provider. Mital countered, “You can run responses in a stateless way. Just ask it to return encrypted reasoning items, and continue handling state client-side.” Mital also called out what he described as the most serious misconception: “myth #3: Model intelligence is the same regardless of whether you use completions or responses. wrong again.” He explained, “Responses was built for thinking models that call tools within their chain-of-thought (CoT). Responses allows persisting the CoT between model invocations when calling tools agentically — the result is a more intelligent model, and much higher cache utilization; we saw cache rates jump from 40-80% on some workloads.” Mital described this as “perhaps the most egregious” misunderstanding, warning that “developers don’t realize how much performance they are leaving on the table. 
It’s hard because you use LiteLLM or some custom harness you built around chat completions or whatever, but prioritizing the switch is crucial if you want GPT-5 to be maximally performant in your agents.” For teams continuing to build on Completions, Mital’s clarification may serve as a turning point. “If you’re still on chat completions, consider switching now — you are likely leaving performance and cost-savings on the table.” The Responses API is not just an alternative but an evolution, designed for the kinds of workloads that have emerged as AI systems take on more complex reasoning tasks. Developers evaluating whether to migrate may find that the potential for efficiency gains makes the decision straightforward.
Fujitsu’s new distillation technology enables agentic AI deployment on smartphones and factory machinery by creating lightweight, power-efficient models that cut memory consumption by 94% and retain 89% accuracy
Fujitsu has developed a new reconstruction technology for generative AI, which will strengthen the Fujitsu Takane LLM by creating lightweight, power-efficient AI models. The technology is based on two innovations: quantization and specialized AI distillation. The proprietary 1-bit quantization technology reduces memory consumption by 94% and maintains an 89% accuracy retention rate, allowing large generative AI models to operate on a single low-end GPU. The specialized AI distillation reduces model size and enhances accuracy beyond the original model. This lightweighting capability will enable the deployment of sophisticated agentic AI on devices like smartphones and factory machinery, improving real-time responsiveness, data security, and power consumption. Fujitsu plans to roll out trial environments for Takane in the second half of fiscal year 2025.
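To make the memory arithmetic concrete, here is a toy illustration of 1-bit weight quantization — not Fujitsu's proprietary method, just the generic idea: each 32-bit float weight is reduced to a sign bit plus one shared scale per row, and reconstructed as `scale * sign(w)`.

```python
def quantize_1bit(row: list[float]) -> tuple[float, list[int]]:
    """Compress a row of float weights to one scale plus one sign bit each."""
    scale = sum(abs(w) for w in row) / len(row)   # mean absolute value
    signs = [1 if w >= 0 else -1 for w in row]    # 1 bit per weight
    return scale, signs

def dequantize(scale: float, signs: list[int]) -> list[float]:
    """Reconstruct an approximation of the original row."""
    return [scale * s for s in signs]

row = [0.4, -0.2, 0.3, -0.5]
scale, signs = quantize_1bit(row)
approx = dequantize(scale, signs)

# Memory: 32 bits per weight shrinks to 1 bit per weight plus one float32
# scale per row -- close to a 97% reduction for long rows, in the same
# ballpark as the 94% Fujitsu reports for its (far more sophisticated)
# technique.
bits_fp32 = 32 * len(row)
bits_1bit = 1 * len(row) + 32
```

Real 1-bit schemes fight the accuracy loss this naive version incurs; Fujitsu's reported 89% accuracy retention is the hard part, and its specialized distillation step is what the toy above omits.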
Motion, a task management app, will build a Microsoft Office of AI agents for SMBs; unifying exec assistant, sales, support, and marketing across Slack, Google, Teams, Salesforce
Motion, a Y Combinator-backed AI calendaring and task management app, recently launched an integrated AI agent bundle for SMBs. Its appeal is that all agentic functions (each with a different human name) are integrated with the others. So far the suite includes an “executive assistant” for automating scheduling, note taking and email replies; a sales rep; a customer support rep; and a marketing assistant that writes blog and social media posts. The agents also integrate with hundreds of other typical SMB tools like Slack, Google Apps, Teams, Salesforce, etc. Motion charges via usage: a base set of credits, plus additional credits as needed, depending on the number of agents used. Prices range from $29 per month for one seat with 1,000 credits and limited agent functions, to $600 for 25 seats with 250,000 credits and all agents, with custom pricing from there. Co-founder Harry Qi views Motion as building the agentic equivalent of Microsoft Office. “There’s an opportunity here to build the next Microsoft,” he said. “You basically have to build all the applications.” This is in contrast to buying point AI products — a sales rep, a customer service bot, a blog-writing one — that don’t work together.
RavenDB unveils database-native AI Agent Creator; running agents inside the DB with secure data access and guardrails, and cutting build time from months to days
RavenDB, a multi-model NoSQL document database trusted by developers and enterprises worldwide, announced the launch of its AI Agent Creator, a first-to-market feature fully integrated into the database that reduces the time required to build AI agents from weeks, months, or even years to just days. RavenDB’s AI Agent Creator runs agents natively inside the database, giving developers secure, direct access to operational data. By keeping agents close to the data and automating integration, RavenDB turns months of uncertainty into days of reliable, context-aware AI delivery. Unlike scripted bots or AI-assisted chatbots limited to generic knowledge, the AI Agent Creator lets developers deploy intelligent agents with built-in guardrails. Developers define the scope in which each agent can operate on behalf of specific users, while seamlessly connecting to existing validation, authorization, and business logic. To give developers stronger control, enhanced safety, and greater precision over how data and operations are accessed, the large language model (LLM) follows a zero trust, default-deny approach, where no data or operations are accessible unless explicitly approved. When an end-user submits a request in natural language, RavenDB invokes the agent to process the request and exposes only the tools and actions available within the scope defined by the developer. RavenDB then orchestrates the entire flow, referencing existing business logic to perform approved operations. This process provides accurate, personalized responses without ever exposing the full database, moving data to external servers, writing complex code, or compromising security. The feature supports all LLMs and uses smart caching, summarizing agent memory and history to reduce redundant requests for reasoning-intensive tasks. This significantly cuts AI spend and optimizes costs without sacrificing accuracy, making it a critical efficiency tool in agentic AI workflows.
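The zero-trust, default-deny scoping described above can be sketched in a few lines. This is illustrative only — not RavenDB's actual API; the class and tool names are invented. The essential property is that no tool is callable unless the developer has explicitly allowed it for the agent's scope.

```python
class AgentScope:
    """Default-deny gate for agent tool calls: anything not explicitly
    allowed by the developer is rejected."""

    def __init__(self, allowed_tools: set[str]):
        self.allowed = allowed_tools  # everything else is denied by default

    def invoke(self, tool: str, action):
        if tool not in self.allowed:
            raise PermissionError(f"tool '{tool}' not approved for this agent")
        return action()  # approved tools run via existing business logic

# The developer grants this agent exactly one capability.
scope = AgentScope(allowed_tools={"get_order_status"})

status = scope.invoke("get_order_status", lambda: "shipped")
# scope.invoke("delete_orders", ...) would raise PermissionError: the
# agent never sees data or operations outside its declared scope.
```

Inverting the default (deny unless approved, rather than allow unless blocked) is what keeps an LLM-driven agent from reaching data the developer never intended to expose.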
Cognition’s Devin AI, which enables natural language coding, acquires Windsurf to integrate its multi-agent task execution with IDE and multi-model support for faster enterprise AI coding for its clients including Goldman Sachs and Citi
Cognition has raised more than $400 million at a $10.2 billion valuation, and says its acquisition of Windsurf has paid off handsomely in terms of boosting revenue. Following the March 2024 debut of its AI coding agent Devin, Cognition quickly gained traction among AI-forward developers looking to provide natural language instructions and have Devin generate code suggestions for them automatically with minimal human intervention. Cognition agreed in July 2025 to acquire Windsurf’s remaining team and tech for an undisclosed sum (estimated to be $250 million). Now, it turns out that deal has paid off quite well for Cognition. “Before acquiring Windsurf, Cognition’s Devin [annual recurring revenue] ARR grew from $1M ARR in September 2024 to $73M ARR in June 2025, as usage increased exponentially. Our growth remained efficient throughout, with total net burn under $20M across the company’s entire history. Our acquisition of Windsurf more than doubled our ARR. More importantly, it gave us the complete product suite for AI coding.” Writing on X, Jeff Wang, CEO of Windsurf, said “our customers can benefit from the opportunities of both products having synergies between local and cloud agents,” and envisioned a future of “enabling engineers manage an army of agents to build technology faster.” In addition, since acquiring Windsurf, major customers such as Goldman Sachs, Citi, Dell, Cisco, Palantir, Nubank, and Mercado Libre are now using the combined platform, according to the company. Analysts note that Cognition’s strategy — pairing Devin’s multi-agent task execution with Windsurf’s integrated development environment (IDE) and multi-model support — directly aligns with these evolving priorities. With $400 million in fresh capital and backing from prominent investors, Cognition signals that it has the financial strength to continue scaling Devin and Windsurf rather than facing near-term consolidation or shutdown risk.
Claude can now create and edit files, including Excel spreadsheets, Word documents, PowerPoint decks and PDFs, directly within Claude.ai and the desktop app, as Anthropic focuses on enterprise productivity
Anthropic launched a feature preview that allows users of its artificial intelligence (AI) assistant Claude to create and edit files, including Excel spreadsheets, Word documents, PowerPoint decks and PDFs, directly within Claude.ai and the desktop app. Per the announcement, the new capability represents a significant expansion of Claude’s functionality, moving beyond text-based interactions to hands-on document creation and manipulation. For financial services teams, the update could streamline workflows that traditionally require switching between multiple applications. Users can now build spreadsheets with formulas and variance calculations, generate financial models with scenario analysis, convert invoices into structured data and produce formatted reports or presentations from raw notes, the company said. The feature eliminates the friction of moving between Claude and productivity software, allowing teams to complete complex analytical tasks within a single interface. The file-creation feature runs in a secure computing environment where Claude can write code and execute programs to deliver finished outputs. Beyond spreadsheets, the company said Claude can clean data, perform statistical analysis and convert documents across formats, such as turning a PDF report into a slide deck. The file creation capability complements Anthropic’s broader financial services strategy.
