DayZero Software Inc., a maker of secure software development tools that does business as Superblocks, has raised $23 million in a Series A extension, bringing its total funding to $60 million. The company is addressing a problem created by generative artificial intelligence: vibe coding — using AI tools to generate software quickly from natural language prompts, often without a deep understanding of the underlying code. While great for rapid prototyping, vibe coding carries the risk of errors, security holes and inadvertent disclosure of proprietary information. Superblocks’ answer is Clark, an AI agent that turns natural language prompts into secure, production-grade applications written in React, a JavaScript library for building user interfaces. “React is the largest front-end framework in the world, so you can build pretty much any user interface with it, and most of the modern web is on it,” Superblocks co-founder and Chief Executive Brad Menezes said. Clark routes requests through a cadre of specialized AI agents covering design, security, quality assurance and IT policy, mimicking how a real internal development team operates. Superblocks is betting that the volume of homegrown software in use in enterprises will grow as generative AI lowers barriers to entry. Clark uses an assortment of popular LLMs that are trained using “unique, enterprise context on the company’s design system,” Menezes said. “When you build an application in Superblocks, you know it has the right audit logging, permissions, private data and integration.”
Report says open banking has not lived up to its potential in UK; it has not reached profitability despite success in some areas, such as faster lending decisions
Open banking has not lived up to its potential in the U.K., according to a report by the Financial Times. While it has seen success in some areas, such as faster lending decisions, it has not reached profitability, with higher interest rates cooling investor enthusiasm. Open banking technology allows customers to share their financial information with other banks, apps and online retailers, and enables pay-by-bank payments without card intermediaries. The excitement around open banking accompanied the past decade’s U.K. FinTech boom, which made London a leader in the sector. However, many users are unaware the technology is available and do not see advantages over digital wallets like Apple Pay. The awareness gap for open banking is not confined to the U.K.: 56% of American consumers are not familiar with pay by bank. Incentives can help bridge this gap, with Generation Z and high-income individuals being particularly receptive to pay by bank when it is coupled with rewards.
Meta’s study shows shorter reasoning processes in AI systems lead to results that are up to 34.5% more accurate while reducing computational costs by up to 40%
Researchers from Meta’s FAIR team and The Hebrew University of Jerusalem have discovered that forcing large language models to “think” less actually improves their performance on complex reasoning tasks. The study found that shorter reasoning processes in AI systems lead to more accurate results while significantly reducing computational costs. The researchers discovered that within the same reasoning task, “shorter reasoning chains are significantly more likely to yield correct answers — up to 34.5% more accurate than the longest chain sampled for the same question.” This finding held true across multiple leading AI models and benchmarks. Based on these findings, the team developed a novel approach called “short-m@k,” which executes multiple reasoning attempts in parallel but halts computation once the first few processes complete. The final answer is then selected through majority voting among these shorter chains. The researchers found their method could reduce computational resources by up to 40% while maintaining the same level of performance as standard approaches. “Our findings suggest rethinking current methods of test-time compute in reasoning LLMs, emphasizing that longer ‘thinking’ does not necessarily translate to improved performance and can, counter-intuitively, lead to degraded results,” the researchers conclude. The study points toward potential cost savings and performance improvements by optimizing for efficiency rather than raw computing power.
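The short-m@k procedure described above can be illustrated with a small, dependency-free simulation: sample k reasoning chains, keep only the m that finish first (in parallel decoding, the shortest ones), and majority-vote over their answers. The chain lengths and answers below are invented for illustration; the real method operates on live LLM decoding, not precomputed tuples.

```python
from collections import Counter

def short_m_at_k(chains, m):
    """Select an answer using the short-m@k rule: keep only the m chains
    that complete first (the shortest ones) and majority-vote over their
    final answers.

    `chains` is a list of (length_in_tokens, answer) pairs; chain length
    serves as a proxy for completion time under parallel decoding.
    """
    # The m shortest chains are the first to finish when run in parallel,
    # so longer chains never contribute compute or votes.
    finished_first = sorted(chains, key=lambda c: c[0])[:m]
    votes = Counter(answer for _, answer in finished_first)
    return votes.most_common(1)[0][0]

# Toy example: 5 sampled chains for one question. The longest chains are
# wrong, consistent with the paper's observation that shorter chains are
# more likely to be correct.
chains = [(120, "42"), (95, "42"), (310, "17"), (150, "42"), (600, "13")]
print(short_m_at_k(chains, m=3))  # -> 42
```

Halting once the first m chains complete is where the compute savings come from: the longest chains, which the study found are also the least accurate, are simply never finished.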
The complexity of integrating an asynchronous, multi-step transaction processing flow for an additional layer of authentication is a key factor behind low adoption of EMV 3DS
While EMV 3DS has many benefits, adoption may be slow in regions where EMV 3DS is not mandatory. Reasons may include the following:
- Data inconsistency. The quality of merchant data provided in EMV 3DS plays a critical role in issuer fraud detection. Merchants may be reluctant to share more data and may decide to provide a minimum set of data elements, excluding optional ones. In some cases the data provided is not accurate, causing issues in fraud engines.
- Approval rates and cardholder friction. Shopping cart abandonment has been one of the major reasons that EMV 3DS adoption is low. Many enhancements were added to the protocol between EMV 3DS 1.0 and EMV 3DS 2.x so that the cardholder is challenged only when needed.
- Complexity of integration. EMV 3DS integration is complex and adds an additional authentication layer before authorization, resulting in higher implementation costs. Most systems are built around a synchronous authorization request and response; EMV 3DS is a major change because it requires an asynchronous transaction processing flow with multiple steps.
- Liability shift. EMV 3DS is designed to help with fraud, but determining if, how and when a liability shift occurs for merchants is not simple and depends on several factors. Payment network and local regulatory requirements should be checked for specific use cases to assess any applicable liability shifts. Relevant factors include:
  - Region and payment network. It is important to be familiar with payment network rules for EMV 3DS usage.
  - Merchant category code (MCC). Not all MCCs are eligible for a liability shift.
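The synchronous-versus-asynchronous point is the crux of the integration cost: a card-only authorization is one request and response, while EMV 3DS inserts an authentication phase that may pause for a cardholder challenge before authorization can proceed. A minimal asyncio sketch of that shape follows; the function names and the toy risk rule are invented for illustration and are not part of the EMV 3DS specification.

```python
import asyncio

async def authenticate_3ds(txn):
    """Authentication phase (AReq/ARes in EMV 3DS terms): the issuer side
    decides between a frictionless flow and a cardholder challenge."""
    await asyncio.sleep(0)  # stands in for a network round trip
    if txn["amount"] > 100:  # invented risk rule for the demo
        return {"status": "challenge_required"}
    return {"status": "authenticated"}

async def challenge_cardholder(txn):
    """Challenge phase (CReq/CRes): an extra, possibly slow step that does
    not exist in a synchronous card-only authorization flow."""
    await asyncio.sleep(0)  # stands in for the cardholder entering an OTP
    return {"status": "authenticated"}

async def authorize(txn):
    """Authorization proper, which legacy systems call directly."""
    await asyncio.sleep(0)
    return {"approved": True, "steps": txn["steps"]}

async def process_payment(txn):
    txn["steps"] = ["authenticate"]
    result = await authenticate_3ds(txn)
    if result["status"] == "challenge_required":
        txn["steps"].append("challenge")
        result = await challenge_cardholder(txn)
    if result["status"] != "authenticated":
        return {"approved": False, "steps": txn["steps"]}
    txn["steps"].append("authorize")
    return await authorize(txn)

# Three steps instead of the single synchronous authorization call.
print(asyncio.run(process_payment({"amount": 250})))
```

Retrofitting this branching, wait-for-the-cardholder flow onto a codebase built around one blocking authorization call is what drives the implementation cost described above.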
AI chatbots prioritize relevance and authority over traditional SEO metrics such as traffic or backlinks, requiring brands to offer “citable evidence” of expertise, such as real data, FAQs, testimonials and contextual content
Local Falcon’s research showed that AI chatbots often bypass traditional SEO metrics such as link volume or page ranking, relying instead on relevance, prominence and authority. That means the brands that surface in AI responses are those that offer “citable evidence” of expertise, such as real data, FAQs, testimonials and contextual content that AI can easily access, the study said. AI chatbots don’t prioritize traffic or backlinks when choosing which brands to list in their responses to consumers; instead, they look for provable expertise. The good news is that smaller brands that are experts in what they do can leapfrog well-trafficked retail sites in search listings. For example, stores shouldn’t just say they are the leading retailer in a niche product but actually prove it by explaining why they’re the best. It also helps for the brand to be mentioned in places like social media. For smaller businesses, it’s an opportunity to compete with larger players — if they provide genuine value and clear proof of expertise. “It gives smaller stores a shot at being found if they play it right and they’re really good at what they do,” said David Hunter, CEO of Local Falcon. Hunter attributes this surprising finding to Google’s transition from traditional search to AI-powered search. Ultimately, location will continue to play an important part in search, even for an AI chatbot. Location-based marketing and messaging is a powerful tool that drives higher engagement, foot traffic and revenue, according to Radar CEO Nick Patrick. Location matters not just for retailers but also for financial services — for finding bank branches, detecting fraud in real time and offering geo-triggered cash back promotions through retail partners. Beyond location, the AI appears to favor authoritative results from sources including social media, Reddit and other community forums, which traditional algorithms previously did not prioritize.
Odyssey’s AI world model, trained on footage from a 360-degree, backpack-mounted camera system that captures real-world landscapes, lets users “interact” with streaming video and explore areas within a video, similar to a 3D-rendered video game
Odyssey is taking a different approach than many AI labs in the world modeling space. Odyssey has developed an AI model that lets users “interact” with streaming video. It designed a 360-degree, backpack-mounted camera system to capture real-world landscapes, which Odyssey thinks can serve as a basis for higher-quality models than those trained solely on publicly available data. The model generates and streams video frames every 40 milliseconds. Via basic controls, viewers can explore areas within a video, similar to a 3D-rendered video game. Powering this is a new world model demonstrating capabilities like generating pixels that feel realistic, maintaining spatial consistency, learning actions from video, and outputting coherent video streams for 5 minutes or more. World models could one day be used to create interactive media, such as games and movies, and run realistic simulations like training environments for robots. “Interactive video … opens the door to entirely new forms of entertainment, where stories can be generated and explored on demand, free from the constraints and costs of traditional production,” the company says. “Over time, we believe everything that is video today — entertainment, ads, education, training, travel, and more — will evolve into interactive video, all powered by Odyssey.” The model can currently stream video at up to 30 frames per second from clusters of Nvidia H100 GPUs at a cost of $1 to $2 per “user-hour.”
Mistral AI’s ‘plug and play’ platform offers built-in connectors to run Python code, create custom visuals, access documents stored in cloud and retrieve information from web for easy customization of AI agents
French AI startup Mistral AI is introducing its Agents API, a “plug and play” platform that enables third-party software developers to quickly add autonomous generative AI capabilities to their existing applications. The API uses Mistral’s proprietary Medium 3 model as the “brains” of each agent, allowing for easy customization and integration of AI agents into enterprise and developer workflows. The API complements Mistral’s existing Chat Completion API and focuses on agentic orchestration, built-in connectors, persistent memory, and the ability to coordinate multiple AI agents to tackle complex tasks, addressing the stateless, single-call limitations of traditional language model APIs. The Agents API ships with several built-in connectors:
- Code Execution: securely runs Python code, enabling applications in data visualization, scientific computing and other technical tasks.
- Image Generation: leverages Black Forest Labs’ FLUX1.1 [pro] Ultra to create custom visuals for marketing, education or artistic uses.
- Document Library: accesses documents stored in Mistral Cloud, enhancing retrieval-augmented generation (RAG) features.
- Web Search: allows agents to retrieve up-to-date information from online sources, news outlets and other reputable platforms.
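To make the connector list concrete, the sketch below models what an agent-creation payload combining all four connectors might look like. The field names, tool-type strings and model identifier are assumptions for illustration only, not the actual Mistral Agents API schema; consult Mistral’s documentation for the real request shape.

```python
# Illustrative only: field names and tool-type strings below are guesses
# at the general shape of an agent-creation payload, not Mistral's schema.
agent_config = {
    "model": "mistral-medium-3",        # assumed model identifier
    "name": "report-builder",
    "instructions": "Answer with charts and cited web sources.",
    "tools": [
        {"type": "code_interpreter"},   # run Python for data visualization
        {"type": "image_generation"},   # custom visuals via FLUX1.1
        {"type": "document_library"},   # RAG over Mistral Cloud documents
        {"type": "web_search"},         # up-to-date online information
    ],
}

def enabled_connectors(config):
    """Minimal client-side sanity check an SDK might perform: require a
    model and at least one tool, then report the enabled connector types."""
    assert config["model"] and config["tools"], "model and tools required"
    return {t["type"] for t in config["tools"]}

print(sorted(enabled_connectors(agent_config)))
```

The point of the sketch is the composition model: one agent declaration enumerates the connectors it may call, and the platform handles orchestration and persistent memory across turns.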
Amazon Bedrock Data Automation and Amazon Bedrock Knowledge Bases enable building multimodal applications for natural language querying through a RAG-based Q&A interface
Organizations face challenges in processing large amounts of unstructured data, including documents, images, audio files, and video files. Generative AI technologies are revolutionizing this by automatically processing, analyzing, and extracting insights from these diverse formats. Amazon Bedrock Data Automation and Amazon Bedrock Knowledge Bases enable organizations to build powerful multimodal RAG applications with minimal effort. These tools automate workflows, store extracted information in a unified repository, and enable natural language querying through a RAG-based Q&A interface.

Real-world use cases
The integration of Amazon Bedrock Data Automation and Amazon Bedrock Knowledge Bases enables powerful solutions for processing large volumes of unstructured data across various industries. Financial institutions process thousands of documents daily, from loan applications to financial statements. Amazon Bedrock Data Automation extracts key financial metrics and compliance information, while Amazon Bedrock Knowledge Bases allows analysts to ask questions like “What are the risk factors mentioned in the latest quarterly reports?” or “Show me all loan applications with high credit scores.”
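The retrieve-then-answer pattern behind such a Q&A interface can be shown with a toy, dependency-free sketch: score stored document chunks against the question, then hand the top matches to a generator as grounding context. In production, retrieval and answer generation would be Amazon Bedrock Knowledge Bases calls over indexed, Data Automation-extracted content; everything below, including the keyword-overlap scoring, is a stand-in for illustration.

```python
def retrieve(question, chunks, top_k=2):
    """Naive keyword-overlap retrieval standing in for the vector search a
    knowledge base performs over extracted document content."""
    q_terms = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_terms & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(question, chunks):
    """RAG shape: retrieved chunks become the grounding context a model
    would turn into a natural-language answer."""
    context = retrieve(question, chunks)
    return {"question": question, "context": context}

# Toy corpus echoing the financial-analyst example above.
chunks = [
    "Quarterly report: risk factors include rising default rates.",
    "Loan application A-17: credit score 790, approved.",
    "Office relocation memo for the Austin campus.",
]
result = answer("What are the risk factors in the quarterly report?", chunks)
print(result["context"][0])
```

The unified repository matters here: because Data Automation normalizes documents, images, audio and video into one indexed store, the same two-step retrieve-and-answer flow serves questions across all of those formats.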
Bit Cloud’s Hope AI development agent can transform product mockups, specifications and reference images directly into complete composable solutions, from backend systems to UI components, using natural language prompts for use in existing or new applications
Bit Cloud announced the general availability of Hope AI, its new AI-powered development agent that enables professional developers and organizations to build, share, deploy, and maintain complex applications using natural language prompts, specifications and design files. Hope AI takes AI-driven development further, beyond basic websites or application prototypes. It designs complete system architectures, assembles reusable software components, and generates scalable, production-ready applications — from CRM systems to e-commerce platforms to healthcare surgery room management systems — dramatically reducing both time to market and maintenance costs. Hope AI functions as an intelligent software architect, leveraging existing, proven components to compose professional and practical software solutions, enabling consistency and simplifying long-term maintainability. Bit’s solution turns components into reusable digital assets, so teams don’t need to rebuild functionality from scratch every time. Key innovations of Hope AI include: Natural Language to Professional Code, Composable Solutions, Team Collaboration, DevOps Integration.
Volante launches Web3 EWA prepaid card featuring tokenisation; eligibility for wage access is determined using proprietary AI tools designed to assess user data in real time
Volante Labs Limited has launched the Volante Card, a Web3-enabled prepaid card designed to facilitate salary payments without a traditional bank account. The card, issued through a licensed VISA-certified institution, enables direct salary transfers from employers to employees, particularly in regions with limited banking access or volatile local currencies. It supports multiple fiat currencies and integrates with Earned Wage Access systems, allowing employees to access wages on demand; eligibility for wage access is determined using proprietary AI tools designed to assess user data in real time. The card features enterprise-grade security measures such as EMV 3-D Secure, tokenisation and AI-powered fraud detection, and supports high deposit thresholds, up to USD 1 million, with no restrictions on usage. Volante aims to modernize payroll infrastructure using blockchain and artificial intelligence technologies, reducing the demand for short-term lending options and potentially improving workforce satisfaction and retention rates. The company’s token, VOL, was listed on BTSE and BingX in March 2025.
