OpenAI is exploring ways for users to sign in to third-party apps using their ChatGPT account, and it is currently gauging interest from developers who might want to integrate the service into their apps. To capitalize on ChatGPT’s popularity, OpenAI seems eager to expand into other consumer areas, such as online shopping, social media, and personal devices. A potential “Sign in with ChatGPT” feature could help OpenAI compete with other massive consumer technology companies, such as Apple, Google, and Microsoft, that offer people a wide range of online services, including a quick way to sign in to third-party apps. OpenAI appears interested in integrating the sign-in service with a broad array of companies: the developer interest form asks companies to specify their app’s user base, ranging from tiny companies with fewer than 1,000 weekly users to massive apps with over 100 million weekly users. The form also asks developers how they charge for AI features today and whether they’re customers of the OpenAI API. It’s unclear when the sign-in feature would go live for ChatGPT users or how many companies have signed up to be part of it.
Worldpay partners with crypto bank BVNK to offer customers instant stablecoin payouts across more than 180 markets without them having to hold or handle stablecoins themselves
Worldpay has teamed with cryptocurrency bank BVNK to offer stablecoin payouts. The partnership will let Worldpay clients in the U.S. and Europe make stablecoin payments to customers, contractors, creators, sellers, and other third-party beneficiaries across more than 180 markets almost instantly, without having to hold or handle stablecoins themselves. BVNK will enable the new stablecoin offering, and crypto firm Fireblocks’ integration services will help facilitate the connection to Worldpay. Worldpay clients will be able to access the new stablecoin payout service through their integration with Worldpay’s payouts platform. When the pilot goes live in the second half of the year, stablecoins will be the first type of digital asset enabled as a payout option on Worldpay’s payout platform, which currently supports 135 traditional currencies; the company processed nearly $2.5 trillion in payments last year. John McNaught, head of payouts at Worldpay, said, “Our new stablecoin payout service allows clients across all Worldpay’s verticals—such as marketplaces, travel, and gaming—to make seamless payouts without handling digital assets themselves.”
Block to enable merchants using the Square POS to accept bitcoin payments directly through their Square hardware via QR code scan
Block plans to launch bitcoin payments on its business technology platform Square, enabling merchants using the Square Point of Sale app to accept bitcoin payments directly through their Square hardware. The company plans to begin rolling out this new, native Bitcoin For Businesses offering in the second half of the year and then extend it to all Square sellers in 2026, subject to regulatory approvals. With Square’s integration handling the complexity behind the scenes, and the Lightning Network enabling near-instant settlement, customers will be able to pay with bitcoin by scanning a QR code at checkout. Bitcoin For Businesses builds upon Square’s Bitcoin Conversions feature, which launched in 2024 and allows qualified merchants to automatically convert a portion of their sales into bitcoin. “When a coffee shop or retail store can accept bitcoin through Square, small businesses get paid faster, and get to keep more of their revenue,” Block Bitcoin Product Lead Miles Suter said. “This is about economic empowerment for merchants who like to have options when it comes to accepting payments. We believe in an open, decentralized, fair, fast and low-cost money system for everyone, and that’s exactly what we want to bring to Square sellers,” he added.
New “selective generation” framework uses a smaller, separate “intervention model” to decide whether the main LLM should generate an answer or abstain
A new study from Google researchers introduces “sufficient context,” a novel perspective for understanding and improving retrieval-augmented generation (RAG) systems in large language models (LLMs). This approach makes it possible to determine whether an LLM has enough information to answer a query accurately, a critical factor for developers building real-world enterprise applications where reliability and factual correctness are paramount. They found that Google’s Gemini 1.5 Pro model, with a single example (1-shot), performed best in classifying context sufficiency, achieving high F1 scores and accuracy. The paper notes, “In real-world scenarios, we cannot expect candidate answers when evaluating model performance. Hence, it is desirable to use a method that works using only the query and context.” Interestingly, while RAG generally improves overall performance, additional context can also reduce a model’s ability to abstain from answering when it doesn’t have sufficient information. Given the finding that models may hallucinate rather than abstain, especially with RAG compared to a no-RAG setting, the researchers explored mitigation techniques. They developed a new “selective generation” framework, which uses a smaller, separate “intervention model” to decide whether the main LLM should generate an answer or abstain, offering a controllable trade-off between accuracy and coverage. The framework can be combined with any LLM, including proprietary models like Gemini and GPT. The study found that using sufficient context as an additional signal in this framework leads to significantly higher accuracy for answered queries across various models and datasets, improving the fraction of correct answers among model responses by 2–10% for Gemini, GPT, and Gemma models.
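The selective-generation idea can be illustrated with a minimal sketch. The scoring heuristic below is a toy stand-in, not the trained intervention model from the study: a small function estimates context sufficiency for each (query, context) pair, and the main LLM is asked to answer only when the score clears a threshold; otherwise the system abstains.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    answer: bool   # True -> let the main LLM generate; False -> abstain
    score: float   # estimated context sufficiency in [0, 1]


def sufficiency_score(query: str, context: str) -> float:
    """Toy proxy: fraction of query terms that appear in the context.
    A real intervention model would be a trained classifier."""
    terms = set(query.lower().split())
    covered = sum(1 for t in terms if t in context.lower())
    return covered / max(len(terms), 1)


def selective_generate(query: str, context: str, threshold: float = 0.5) -> Decision:
    score = sufficiency_score(query, context)
    return Decision(answer=score >= threshold, score=score)
```

Raising the threshold trades coverage (fewer answered queries) for accuracy (fewer hallucinated answers), which is the controllable trade-off the paper describes.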
For enterprise teams looking to apply these insights to their own RAG systems, such as those powering internal knowledge bases or customer support AI, Cyrus Rashtchian, a co-author of the study, outlines a practical approach. He suggests first collecting a dataset of query-context pairs that represent the kind of examples the model will see in production. Next, use an LLM-based autorater to label each example as having sufficient or insufficient context. “This already will give a good estimate of the % of sufficient context,” Rashtchian said. “If it is less than 80-90%, then there is likely a lot of room to improve on the retrieval or knowledge base side of things — this is a good observable symptom.”
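Rashtchian's suggested workflow can be sketched as a short labeling loop. The `rate` function here is a placeholder heuristic standing in for the LLM autorater call (the study used Gemini 1.5 Pro, 1-shot); in practice it would be replaced with a model call.

```python
def rate(query: str, context: str) -> bool:
    # Placeholder standing in for an LLM autorater call that labels
    # each (query, context) pair as sufficient or insufficient.
    return query.lower() in context.lower()


def sufficient_context_rate(pairs: list) -> float:
    """Estimate the fraction of examples with sufficient context."""
    labels = [rate(q, c) for q, c in pairs]
    return sum(labels) / len(labels)


sample = [
    ("refund policy", "Our refund policy allows returns within 30 days."),
    ("shipping cost", "Contact support for details."),
]
estimate = sufficient_context_rate(sample)  # 0.5 for this sample
```

Per the rule of thumb quoted above, an estimate below roughly 0.8-0.9 suggests the retrieval or knowledge-base side needs improvement.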
GridCARE reduces data centers’ time-to-power from 5-7 years to just 6-12 months by leveraging advanced generative AI-based analysis to find pockets with geographic and temporal capacity on the existing grid to enable faster deployment of GPUs and CPUs
GridCARE, a new company powering the AI revolution, emerged from stealth today. The company has closed a highly oversubscribed $13.5 million Seed financing round led by Xora, a deep technology venture capital firm backed by Temasek. GridCARE works directly with hyperscalers and some of the biggest AI data center developers to accelerate time-to-power for infrastructure deployment, both for upgrading existing facilities and identifying new sites with immediate power availability for gigascale AI clusters. By leveraging advanced generative AI-based analysis to find pockets with geographic and temporal capacity on the existing grid, GridCARE reduces data centers’ time-to-power from 5-7 years to just 6-12 months, allowing AI companies to deploy GPUs and CPUs faster. As the one-stop power partner for data center developers, GridCARE eliminates the complexity of navigating thousands of different utility companies so developers can focus on innovation rather than power acquisition. GridCARE is also actively partnering with utilities, such as Portland General Electric and Pacific Gas & Electric, which view better utilization of their existing grid assets as a way to increase revenues and bring down electricity costs for all their customers. Additionally, this collaboration stimulates local economies with billions of dollars of new investment and high-paying job opportunities. “GridCARE is solving one of the biggest bottlenecks to AI data centers today — access to scalable, reliable power. Their differentiated, execution-focused approach enables power at speed and at scale,” said Peter Lim, Partner at Xora. “Power is the critical limiter to billions of dollars in AI infrastructure,” said Peter Freed, Partner at Near Horizon Group and former Director of Energy Strategy at Meta. “GridCARE uncovers previously invisible grid capacity, opening a new fast track to power and enabling sophisticated power-first AI data center development.”
Vibe coding: Superblocks AI agent addresses security and privacy risks by turning natural language prompts into secure, production-grade applications written in React, a JavaScript library for building user interfaces
DayZero Software Inc., a maker of secure software development tools that does business as Superblocks, has raised $23 million in a Series A venture capital round extension, bringing its total funding to $60 million. The company is addressing a problem caused by generative artificial intelligence: vibe coding. It involves using AI tools to generate software quickly based on natural language prompts, often without a deep understanding of the underlying code. While great for rapid prototyping, vibe coding carries the risk of errors, security holes and inadvertent disclosure of proprietary information. Superblocks’ answer is Clark, an AI agent that turns natural language prompts into secure, production-grade applications written in React, a JavaScript library for building user interfaces. “React is the largest front-end framework in the world so you can build pretty much any user interface with it, and most of the modern web is on it,” Superblocks co-founder and Chief Executive Brad Menezes said. Clark routes requests through a cadre of specialized AI agents covering design, security, quality assurance and IT policy. That mimics how a real internal development team operates. Superblocks is betting that the volume of homegrown software in use in enterprises will grow as generative AI lowers barriers to entry. Clark uses an assortment of popular LLMs that are trained using “unique, enterprise context on the company’s design system,” Menezes said. “When you build an application in Superblocks, you know it has the right audit logging, permissions, private data and integration.”
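The routing pattern the article describes, a generation request vetted by specialized reviewer agents before it ships, can be sketched in a few lines. This is an illustrative sketch of the general pattern, not Superblocks' actual implementation; the reviewer functions and their checks are invented for the example.

```python
from typing import Callable


def design_review(app: dict) -> list:
    """Hypothetical design agent: flag apps off the design system."""
    return [] if app.get("uses_design_system") else ["off-brand components"]


def security_review(app: dict) -> list:
    """Hypothetical security agent: require audit logging."""
    return [] if app.get("audit_logging") else ["missing audit logging"]


# Each specialized agent reviews the generated app independently,
# mimicking how an internal development team signs off on a release.
REVIEWERS: list = [design_review, security_review]


def route(app: dict) -> tuple:
    issues = [msg for review in REVIEWERS for msg in review(app)]
    return (not issues, issues)


approved, issues = route({"uses_design_system": True, "audit_logging": False})
```

An app is approved only when every reviewer returns no issues, so adding a QA or IT-policy agent is just another entry in `REVIEWERS`.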
Google’s contribution to vibe coding is Stitch, a platform that designs user interfaces (UIs) with one prompt
Google is releasing Stitch, a new experiment from Google Labs, to compete with end-to-end coding tools from Microsoft, AWS, and others. Now in beta, the platform designs user interfaces (UIs) with one prompt. With Google Stitch, users can designate whether they want to build a dashboard or web or mobile app and describe what it should look like (such as color palettes or the user experience they’re going for). The platform instantly generates HTML and CSS templates with editable components that devs and non-devs can customize and edit (such as instructing Stitch to add a search function to the home screen). They can then add directly to apps or export to Figma. Users can choose a ‘standard mode’ that runs on Gemini 2.5 Flash or switch to an ‘experimental mode’ that uses Gemini Pro and allows users to upload visual elements such as screenshots, wireframes and sketches to guide what the platform generates. Google also plans to release a feature allowing users to annotate screenshots to make changes. Stitch is “meant for quick first drafts, wireframes and MVP-ready frontends.”
Mistral AI’s API integrates server-side conversation management, a Python code interpreter, web search, image generation and document retrieval capabilities to enable building fully autonomous AI agents
Mistral AI, a rival to OpenAI, Anthropic PBC, Google LLC and others, has jumped into agentic AI development with the launch of a new API. The new Agents API equips developers with powerful tools for building sophisticated AI agents based on Mistral AI’s LLMs, which can autonomously plan and carry out complex, multistep tasks using external tools. Among its features, the API integrates server-side conversation management, a Python-based code interpreter, web search, image generation and document retrieval capabilities. It also supports AI agent orchestration, and it’s compatible with the emerging Model Context Protocol that aims to standardize the way agents interact with other applications. With its API, Mistral AI is keeping pace with the likes of OpenAI and Anthropic, which are also laser-focused on enabling the emergence of AI agents that can perform tasks on behalf of humans with minimal supervision, in an effort to turbocharge business automation. The API boasts dozens of useful “connectors” that should make it simpler to build some very capable AI agents. For instance, the Python Code Interpreter provides a way for agents to execute Python code in a secure, sandboxed environment, while the image generation tool powered by Black Forest Labs Inc.’s FLUX1.1 [pro] Ultra model means they’ll have powerful picture-generating capabilities. A premium version of web search provides access to a standard search engine, plus the Agence France-Presse and the Associated Press news agencies, so AI agents will be able to access up-to-date information about the real world. Other features include a document library that uses hosted retrieval-augmented generation from user-uploaded documents. In other words, Mistral’s AI agents will be able to read external documents and perform actions with them. The API also includes an “agent handoffs” mechanism that allows multiple agents to work together. One agent will be able to delegate a task to another, more specialized agent.
According to Mistral, the result will be a “seamless chain of actions,” with a single request able to trigger multiple agents into action so they can collaborate on complex tasks. The Agents API also supports “stateful conversations,” meaning agents can maintain context over time by remembering the user’s earlier inputs.
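The handoff and stateful-conversation concepts can be illustrated with a conceptual sketch in plain Python; this does not reproduce Mistral's actual client library or its call signatures. A general-purpose agent delegates queries outside its competence to a specialist, and a shared history object carries context across turns.

```python
class Agent:
    """Minimal agent: answers queries it can handle, otherwise
    hands off to a more specialized agent (Mistral's "agent handoffs")."""

    def __init__(self, name, can_handle, handoff_to=None):
        self.name = name
        self.can_handle = can_handle   # predicate over the query
        self.handoff_to = handoff_to   # optional specialist agent

    def run(self, query, history):
        if not self.can_handle(query) and self.handoff_to:
            return self.handoff_to.run(query, history)
        # The shared history is the "stateful conversation": every
        # turn is recorded so later turns can use earlier context.
        history.append((self.name, query))
        return f"{self.name} handled: {query}"


math_agent = Agent("math-specialist", can_handle=lambda q: True)
general = Agent("general", can_handle=lambda q: "integral" not in q,
                handoff_to=math_agent)

history = []   # retained across requests
general.run("summarize this doc", history)
reply = general.run("compute this integral", history)
```

A single user-facing request thus triggers whichever agents are needed, while the conversation state persists outside any one agent.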
Chance AI’s visual reasoning AI model provides detailed history, context, and related information of any object along with step-by-step visual logic and conversational insight to explain how it discovers and interprets new information
Chance AI, the multi-agent visual AI for explorers, artists, and creatives, announced its most substantial model upgrade to date. Available on iOS and coming soon to Android, Chance AI’s latest release introduces real-time visual reasoning, support for 17 languages, and voice playback—making Chance’s unique visual AI proposition more intuitive, helpful, and accessible. Simply take a photo, and Chance AI will instantly provide a wealth of history, context, and related information. Uncover the story behind historic landmarks or art pieces, identify unique plants or objects, or learn more about books, games, movies, and more. Chance AI is currently a free download with no ads or shopping links. The latest update brings real-time visual reasoning to Chance AI, allowing the model not just to identify what it sees—but to explain how it discovers and interprets new information through step-by-step visual logic, like a thoughtful human observer. Whether it’s analyzing art, decoding design, or understanding the natural world, Chance now provides rich, conversational insight into visual intelligence. With this release, Chance AI becomes the first true visual reasoning model, offering an unprecedented level of transparency and outperforming competitors in accuracy and contextual depth. The update also introduces audio output, so users can choose to read or listen to Chance AI’s responses.
Fabrix.ai’s platform offers automated network observability including guardrails, Model Context Protocol and agent-to-agent interfaces to streamline repetitive, time-consuming IT operations use cases
Fabrix.ai Inc., previously known as CloudFabrix, delivers a purpose-built agentic AI operational intelligence platform that enables enterprise users to streamline IT operations use cases, make better decisions more quickly and successfully accelerate digital transformation. Fabrix.ai’s intelligent agents take over repetitive, time-consuming operational workloads for its enterprise customers, delivering increased agility and cost efficiency. There are three components to the Fabrix.ai operational platform: agentic AI, a generative AI copilot, and Cisco-specific solutions. The company views its platform as having a unique capability to focus on automation, particularly in network observability. Running a network tends to be more stochastic than deterministic, so providing enterprises and service providers a solution requires additional building blocks, including guardrails, Model Context Protocol or agent-to-agent interfaces, and Fabrix.ai has built those. While Fabrix.ai continues to work closely with Cisco and telcos, the company is also branching out to serve customers in other areas, including AI security. One of the biggest differentiators for Fabrix.ai is the ability to work with real-time data. Fabrix.ai leverages many of the common building blocks, but the platform is purpose-built for IT ops use cases rather than trying to modify a generic AI model. Its focus on handling real-time information has enabled it to get traction in key verticals, especially telco. Fabrix.ai also leverages its growing partner ecosystem to bring its capabilities to more enterprise customers. The company can use whatever data platform a customer has, including Splunk, Elastic, OpenSearch, MinIO, HP or others, or a data lake: Fabrix.ai has partnerships with many data platforms, and its data abstraction layer can read directly from them.
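A data abstraction layer of the kind described above is commonly built as a set of adapters exposing one query interface per backend. This is a hypothetical sketch of that general pattern, not Fabrix.ai's code; the adapter classes and their behavior are invented for illustration.

```python
from abc import ABC, abstractmethod


class DataSource(ABC):
    """Common query interface every backend adapter implements."""

    @abstractmethod
    def query(self, expr: str) -> list:
        ...


class SplunkAdapter(DataSource):
    def query(self, expr):
        # In practice: translate `expr` to SPL and call Splunk's API.
        return [{"source": "splunk", "expr": expr}]


class ElasticAdapter(DataSource):
    def query(self, expr):
        # In practice: translate `expr` to an Elasticsearch DSL query.
        return [{"source": "elastic", "expr": expr}]


def run_query(sources: list, expr: str) -> list:
    """Fan one logical query out to whatever platforms the customer has."""
    results = []
    for s in sources:
        results.extend(s.query(expr))
    return results


rows = run_query([SplunkAdapter(), ElasticAdapter()], "errors last 5m")
```

Adding support for another platform (OpenSearch, MinIO, a data lake) then means writing one more adapter, not changing the platform above it.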
