5G networks offer unique operational advantages when used to connect transaction approval systems. Networks built on the 5G standard support POS applications with higher throughput and lower latency than previous cellular generations, which makes 5G well suited to activities that extend the POS beyond traditional cash wraps. In addition to pop-ups and temporary store operations, these include omnichannel services such as curbside pickup, buy-online-pickup-in-store (BOPIS) and buy-online-return-in-store (BORIS), and mobile or self-checkout. Some retailers have also established “grab-and-go” models where customers remove products from shelves and check out using a QR code, or by simply exiting the store; 5G provides the bandwidth and low latency needed by the computer vision cameras and shelf sensors that make this possible. As AI becomes integrated into a wide range of retail business functions, many of those AI applications require low-latency response times. This is driving demand for edge AI solutions that deploy pre-trained AI models, generative AI, and agentic AI to the network edge, outside the data center, for local processing. In addition to computer vision for “grab-and-go” checkout, this includes accelerated checkout through real-time image recognition of items, automatic detection and alerting of theft, and real-time tracking of inventory as it is purchased and leaves the store. 5G offers the low-latency, high-performance connectivity required to support AI-enabled POS terminals and other edge devices in the store. AI analytics can also be applied in real time to measure store metrics such as traffic patterns, checkout waiting time, and sales volumes. In addition to helping ensure POS uptime and throughput, network slicing can provide enhanced security by isolating the POS from the rest of the store’s wireless network.
Researchers curb catastrophic forgetting by tuning only small parts of AI models while freezing down-projections, lowering compute costs
Research from the University of Illinois Urbana-Champaign proposes a new method for retraining models that avoids “catastrophic forgetting,” in which a model loses some of its prior knowledge. The paper focuses on two vision-language LLMs that generate responses from images: LLaVA and Qwen 2.5-VL. The approach encourages enterprises to retrain only narrow parts of an LLM rather than the entire model, avoiding a significant increase in compute costs. The team claims that catastrophic forgetting isn’t true memory loss, but rather a side effect of bias drift. The researchers first wanted to verify the existence and the cause of catastrophic forgetting in models. To do this, they created a set of target tasks for the models to complete, then fine-tuned and evaluated the models to determine whether the tuning led to substantial forgetting. But as the process went on, the researchers found that the models were recovering some of their abilities. They concluded that “what looks like forgetting or interference after fine-tuning on a narrow target task is actually bias in the output distribution due to the task distribution shift.” That finding turned out to be the key to the experiment. The researchers noted that tuning the MLP (multilayer perceptron) layers increases the likelihood of “outputting numeric tokens and a highly correlated drop in held out task accuracy.” What this showed is that a model forgetting some of its knowledge is only temporary, not a long-term matter. “To avoid biasing the output distribution, we tune the MLP up/gating projections while keeping the down projection frozen, and find that it achieves similar learning to full MLP tuning with little forgetting,” the researchers said. This yields a simpler and more reproducible method for fine-tuning a model, sketched below. By focusing on a narrow segment of the model rather than a wholesale retraining, enterprises can cut compute costs and gain better control of output drift.
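As a rough illustration of the selective-tuning idea, the PyTorch sketch below marks only the MLP up/gate projections as trainable and freezes everything else, including the down projections. The module names (`up_proj`, `gate_proj`, `down_proj`) follow common LLaMA-style checkpoints and the model ID is a placeholder; this is a minimal sketch, not the authors’ code.

```python
# Minimal sketch: tune only the MLP up/gating projections, freeze the rest.
# Module names follow LLaMA-style checkpoints; the model ID is illustrative.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("some-org/some-vlm")  # placeholder ID

for name, param in model.named_parameters():
    # Trainable: up/gate projections inside each transformer MLP block.
    # Frozen: down projections, attention, embeddings, everything else.
    param.requires_grad = ("up_proj" in name) or ("gate_proj" in name)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Tuning {trainable / total:.1%} of parameters")
```

From here the model can be handed to any standard fine-tuning loop; only the unfrozen projections receive gradient updates, which is what keeps both the compute bill and the output-distribution drift small.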
MIT’s updated SEAL technique enables self-adapting LLMs that generate synthetic training data and optimization directives, using dual-loop supervised fine-tuning plus reinforcement learning to reduce catastrophic forgetting
Researchers at the Massachusetts Institute of Technology (MIT) are gaining renewed attention for developing and open-sourcing SEAL (Self-Adapting LLMs), a technique that allows large language models (LLMs) — like those underpinning ChatGPT and most modern AI chatbots — to improve themselves by generating synthetic data to fine-tune on. SEAL allows LLMs to autonomously generate and apply their own fine-tuning strategies. Unlike conventional models that rely on fixed external data and human-crafted optimization pipelines, SEAL enables models to evolve by producing their own synthetic training data and corresponding optimization directives, outputs the researchers call “self-edits.” The new version expands on the prior framework by demonstrating that SEAL’s self-adaptation ability scales with model size, by integrating reinforcement learning more effectively to reduce catastrophic forgetting, and by formalizing SEAL’s dual-loop structure (inner supervised fine-tuning and outer reinforcement optimization) for reproducibility. SEAL operates using a two-loop structure: an inner loop performs supervised fine-tuning based on each self-edit, while an outer loop uses reinforcement learning to refine the policy that generates those self-edits. The reinforcement learning algorithm is based on ReSTEM, which combines sampling with filtered behavior cloning: during training, only self-edits that lead to performance improvements are reinforced, effectively teaching the model which kinds of edits are most beneficial for learning. For efficiency, SEAL applies LoRA-based fine-tuning rather than full parameter updates, enabling rapid experimentation and low-cost adaptation.
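A schematic rendering of that dual loop is sketched below. All of the functions are assumed stand-ins for the components described above, not the released SEAL code; the filtering rule is the ReSTEM-style “reinforce only improving edits” idea.

```python
import random

# Schematic sketch of SEAL's dual loop; every function here is a stub
# for illustration, not the authors' implementation.

def propose_self_edit(policy, task):
    """The model drafts synthetic training data plus tuning directives."""
    return {"data": f"synthetic examples for {task}", "lr": random.choice([1e-4, 5e-5])}

def lora_finetune(model, self_edit):
    """Apply the self-edit via a cheap LoRA-style update (stubbed)."""
    return {**model, "adapted_with": self_edit}

def evaluate(model, task):
    """Score the adapted model on the task (stubbed with a random score)."""
    return random.random()

model, policy = {"weights": "base"}, {"weights": "base"}
tasks = ["task-A", "task-B"]

for step in range(3):                            # outer loop: refine the edit policy
    kept_edits = []
    for task in tasks:
        baseline = evaluate(model, task)
        edit = propose_self_edit(policy, task)
        candidate = lora_finetune(model, edit)   # inner loop: SFT on the self-edit
        if evaluate(candidate, task) > baseline: # ReSTEM-style filter:
            kept_edits.append(edit)              # keep only edits that improved the task
    # Filtered behavior cloning: train the policy on the edits that helped.
    policy = {**policy, "cloned_from": kept_edits}
```

The design choice worth noting is that the outer loop never scores the edits directly; it scores the model that results from applying them, so the policy learns which kinds of self-edits actually transfer into better downstream performance.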
Nvidia unveils collaboratively designed “gigawatt AI factories” to support the next generation of AI models based on its Vera Rubin architecture, enabling 150% more power transmission through 800VDC power delivery and 100% liquid cooling
Nvidia is collaborating with more than 70 partners on the design of more efficient “gigawatt AI factories” to support the next generation of artificial intelligence models. The gigawatt AI factories envisioned by Nvidia will utilize Vera Rubin NVL144, an open-architecture rack server based on a 100% liquid-cooled design. It’s built to support the company’s next-generation Vera Rubin graphics processing units, which are expected to launch in 2027. The architecture will enable companies to scale their data centers rapidly, with a central printed circuit board midplane that enables faster assembly, and modular expansion bays that let networking and inference capacity be added as needed. Nvidia said it’s donating the Vera Rubin NVL144 architecture to the Open Compute Project as an open standard, so that any company will be able to implement it in its own data centers. The Vera Rubin NVL144 architecture is designed to support the rollout of 800-volt direct current data centers for the gigawatt era, and Nvidia hopes it will become the foundation of new “AI factories,” or data centers that are optimized for AI workloads. Nvidia explained that Vera Rubin NVL144 is all about preparing for the future, with the flexible architecture designed to scale up over time to support advanced reasoning engines and the demands of autonomous AI agents. It’s based on the existing Nvidia MGX modular architecture, which means it’s compatible with numerous third-party components and systems from more than 50 ecosystem partners, allowing data center operators to mix and match components in a modular fashion to customize their AI factories. Nvidia also pointed to growing ecosystem support for its Nvidia Kyber rack server architecture, which is designed to support the infrastructure that will power clusters of 576 Rubin Ultra GPUs when they become available. Like Vera Rubin NVL144, Nvidia Kyber features several innovations in 800 VDC power delivery, liquid cooling and mechanical design.
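The physics behind the 800 VDC push can be seen with back-of-the-envelope arithmetic: at a fixed conductor current rating, deliverable power scales linearly with voltage, while the resistive loss for a given power draw falls with the square of the voltage. The sketch below uses assumed bus numbers purely for illustration; they are not Nvidia’s specifications.

```python
# Illustrative arithmetic only (assumed numbers, not Nvidia's specs):
# P = V * I, so a fixed-ampacity conductor delivers more power at higher
# voltage, and ohmic loss (I^2 * R) for a fixed load falls as V rises.

AMPACITY = 400.0   # assumed busbar current rating, amps
R_BUS = 0.001      # assumed bus resistance, ohms

for volts in (54.0, 415.0, 800.0):
    p_max = volts * AMPACITY              # deliverable power at rated current
    i_at_1mw = 1_000_000 / volts          # current needed to deliver 1 MW
    loss = i_at_1mw ** 2 * R_BUS          # ohmic loss at that current
    print(f"{volts:>5.0f} V: max {p_max/1e3:,.0f} kW, "
          f"1 MW needs {i_at_1mw:,.0f} A, loss {loss/1e3:.1f} kW")
```

Under these assumed numbers, moving from a 54 V in-rack bus to 800 V cuts the current needed for 1 MW from roughly 18,500 A to 1,250 A, which is why higher-voltage DC distribution lets the same copper carry far more power with far less heat.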
Watermarking in generative AI uses cryptographically embedded codes, supporting verification, attack resilience and integrity checks, with quality measured by industry metrics like SSIM and FID.
Researchers at Queen’s University in Canada have explored watermarking as a method to tag AI-generated images for verification of origin and integrity. Watermarking systems operate as a complete security process, consisting of embedding, verification, attack channels, and detection. The watermark must be invisible to viewers but readable to authorized users, and remain private enough that no one can duplicate it. Watermarking began with signal-processing methods that changed pixel values or frequency coefficients using transforms. The rise of deep learning introduced new possibilities, such as encoder-decoder networks and diffusion models, and researchers began embedding marks directly inside these systems, producing two main approaches: fine-tuning-based and initial-noise-based. Visual quality, capacity, and detectability are the main criteria for evaluating watermarking. Researchers use metrics such as Structural Similarity Index (SSIM) and Fréchet Inception Distance (FID) to check that the mark does not degrade the picture. Capacity measures how much data can be stored, while detectability refers to how reliably a watermark can be recovered after changes or attacks. Watermarking schemes remain fragile under pressure, with threats falling into two categories: resilience, or surviving routine image edits such as compression and cropping, and security, or withstanding deliberate attempts to remove or forge the mark. New attack strategies take advantage of diffusion models, such as regeneration attacks and detector-aware attacks. Defensive ideas include encrypting watermark keys, varying where marks are placed, and training models to recognize and preserve watermarks during generation. Global momentum is building for watermarked AI content, with governments and private companies experimenting with watermarking methods.
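As a concrete example of the visual-quality check, the sketch below compares an image to a lightly perturbed copy using scikit-image’s SSIM implementation. The random-noise “watermark” is a stand-in for illustration, not one of the embedding schemes the paper studies.

```python
# Minimal SSIM check: does an embedded perturbation visibly degrade the image?
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
original = rng.random((256, 256))  # stand-in grayscale image in [0, 1]
# Stand-in "watermark": a faint noise perturbation added to the pixels.
watermarked = np.clip(original + rng.normal(0, 0.01, original.shape), 0, 1)

score = ssim(original, watermarked, data_range=1.0)
print(f"SSIM: {score:.4f}")  # near 1.0 => the mark barely perturbs the image
```

An SSIM near 1.0 indicates the embedded mark is visually negligible; FID plays a similar role but compares feature distributions across whole sets of images rather than pixel structure in a single pair.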
Real estate investors pay 1.8% to 4.2% premiums above market value through all-cash transactions and waived contingencies, pricing out first-time buyers as investor activity has doubled since 2020
On top of high mortgage rates and record-high home prices, real estate investors are making it harder for first-time buyers to compete, according to new data from real estate analytics firm Cotality. An analysis published Oct. 10 by Thom Malone, Cotality’s principal economist, found that investors routinely pay more than market value for homes. These premiums range from 1.8% to 4.2%, depending on portfolio size. On a median-priced home of $405,000, that equates to an extra $7,300 to $17,415. These overbids often include all-cash transactions, waived contingencies and faster closings. “There are several reasons an investor might pay more than market value,” Malone wrote. “It can be a tactic to quickly close on a home, or it could be a speculative bet that the seller has underpriced a property. It could also simply be a lack of local knowledge.” Despite higher borrowing costs, investor activity has more than doubled since mid-2020, and by early 2025 investors accounted for about one-third of all U.S. home purchases, the company found. Cotality expects investor activity to hold steady through the end of 2025, even as investors continue to pay premiums on top of already elevated home prices. Smaller investors with fewer than 10 properties typically pay about 1.8% above market value, while large investors owning more than 1,000 homes pay 4.2% more on average. Malone said overpayments can be offset by long-term price appreciation or higher rents, particularly for smaller landlords. Rents have risen 2.3% year over year, according to Cotality, as more potential buyers remain renters, but rent growth has slowed and is below pre-pandemic averages. Over the past five years, rents for lower-cost homes increased by roughly 30% while home prices jumped 50%. While that gap has lessened profits for many investors, smaller investors remain resilient: they make up about 14% of all investors and are buying the largest share of investment properties in the top 20 U.S. metro areas. In Los Angeles, where investor activity is high, rents rose 3.1% between July 2024 and July 2025, helping small landlords recover costs. First-time buyers are facing the most pressure. Malone wrote that investors accounted for 37% of purchases of lower-priced homes this year. Over the past five years, the investor share of housing has doubled, and the average age of a first-time buyer has risen to 38, up from 33 in 2020. As a result, many would-be buyers are staying put as renters.
Samsung embeds Coinbase’s crypto ecosystem into Samsung Wallet alongside payment cards and IDs for its 75 million users, enabling direct purchases through Samsung Pay without separate app downloads
In a quieter but more consequential phase, crypto is taking a new path: legitimacy through association. The most ambitious companies in the sector are no longer trying to replace the old financial order; they’re partnering with it. The latest and perhaps most visible example of this strategy is the recent collaboration between Coinbase and Samsung to bring Coinbase One, the company’s premium membership program, into the Samsung Wallet app for U.S. users. This is not the old model of crypto adoption, where users had to download an exchange app, memorize seed phrases, and navigate the volatility of trading tokens. For the 75 million Americans with a Galaxy device, this means the ability to access Coinbase’s ecosystem directly through Samsung’s own digital wallet, including a three-month free trial of Coinbase One and a $25 USDC bonus upon completing a first trade. The Samsung collaboration underscores a strategic pivot: crypto no longer needs to shout from the margins. Instead, it is embedding itself in ecosystems people already trust. As for the reality of the offering, Mark Troianovski, director and head of product partnerships at Coinbase, frames it as a kind of “choose your own adventure in crypto.” Users can send payments in USDC to friends, split bills, explore decentralized finance, or even “take a loan out against your bitcoin.” The significance is subtle but transformative. Crypto is no longer an activity; it’s a feature. It’s moving from being a separate ecosystem to becoming a layer within the existing financial stack. Coinbase’s partnerships reinforce a feedback loop: the more established brands it works with, the more legitimate it becomes; the more legitimate it becomes, the more partnerships it can secure. In a marketplace still rebuilding its reputation, this loop is invaluable.
Flex speeds up AI infrastructure deployments with integrated data center reference designs featuring 800VDC power architecture, capacitive energy storage and rack-level liquid cooling
Original design manufacturing giant Flex Ltd. says it wants to help data center operators scale their operations more efficiently with what is effectively a new blueprint for gigawatt data centers that can support artificial intelligence and high-performance computing workloads. Flex is bundling its power, cooling and computing systems into a series of pre-engineered, modular reference designs for “next-generation data centers,” and says they will enable new computing facilities to be deployed up to 30% faster than traditional designs. By prefabricating many of the essential building blocks of AI data centers, Flex says it can help standardize data center design and construction. Its integrated platform consists of new megawatt-scale, high-density, liquid-cooled racks that are designed to support the rapidly rising power demands of AI workloads and enable the transition to 800VDC power architectures. Other key components include Flex’s newly designed capacitive energy storage system, which helps reduce the electrical disturbances generated by AI workloads, and a highly scalable modular rack-level coolant distribution unit, or CDU, that provides up to 1.8 MW of flexible capacity. The designs further incorporate prefabricated power pods and skids, pre-engineered modular systems that aim to simplify installation and reduce the need for onsite labor through parallel construction. They also use fewer interconnects and, thanks to offsite assembly, can cut weeks off data center construction timetables. Flex says its semi-prefabricated data centers will save companies thousands of hours of onsite labor, reducing deployment times from an average of one year to as little as six months. Moreover, its designs are highly flexible, allowing data center operators to adapt them as necessary to integrate their preferred computing systems from partners such as Nvidia Corp. and Advanced Micro Devices Inc. Operators will also benefit from Flex’s lifecycle intelligence software, which provides built-in monitoring for all components, with predictive analytics and system-level optimization tools. In addition, Flex’s reference designs are supported by a robust supply chain network and global services, which provide support at every step of the process, from sourcing to deployment and fulfillment.
Deloitte’s 2025 Connected Consumer study finds 53% of U.S. consumers now experiment with or regularly use generative AI, up from 38% in 2024, with regular users doubling to 20%
According to Deloitte’s “2025 Connected Consumer” study, more than half of U.S. consumers (53%) are now either experimenting with generative AI or using it regularly—up sharply from 38% in 2024. Respondents are integrating the technology into daily life, accessing generative AI tools and bots for personal, professional, and educational use. Moreover, the survey finds that 42% of regular generative AI users say it has a “very positive” effect on their lives—outpacing perceptions of both devices (36%) and apps (29%). Regular generative AI users—those who use it for projects and tasks beyond experimentation—nearly doubled to 20% over the past year. Experimenters—those who don’t yet use generative AI as regularly—rose to 33%. The group reporting that they are not familiar with the concept of generative AI has dwindled to just 13%. Roughly half of surveyed generative AI users (51%) say they use it every day, and 38% say they use it at least once a week, which suggests that the technology is becoming part of their routine digital activities. Most generative AI users surveyed report engaging with the technology through standalone applications on their phones (65%) or via tool-specific websites (60%). Sixty-nine percent of users also report tapping into generative AI capabilities built into other familiar software and services they use, such as search engines, social platforms, and office productivity apps. About four in 10 surveyed generative AI users say they or their households pay for generative AI-infused tools or services. Among users who don’t pay for the technology, half say the main reason is that free tools are good enough; 20% say they don’t use the tools often enough to warrant paying; and 17% cite price.
Stablecoin transfer volume reached $27 trillion last year, exceeding Visa and Mastercard, as enterprises confront the interoperability question of whether stablecoins will be platform-siloed or unified while digital currency adoption moves mainstream beyond early experimentation
A major shift in U.S. government sentiment toward cryptocurrencies, combined with innovation from AI, is making the stablecoin market an emerging force in the digital asset economy. Apple Inc., Amazon.com Inc., X Corp., Airbnb Inc., Meta Platforms Inc., Google LLC and Uber Technologies Inc. are all reportedly exploring the integration of stablecoins on their platforms. Ten major banks, including Bank of America and Deutsche Bank, are also investigating whether to issue stablecoins as part of their offerings. Stablecoins are digital assets pegged to the value of fiat currency, often the U.S. dollar. The influence of AI in the crypto industry may push digital currency providers to build interoperable models for stablecoins. This is currently playing out as major players grapple with implementing a consistent payments standard, potentially using the x402 open protocol, which builds on HTTP’s 402 “payment required” status code, reserved in the web’s standards for decades but little used until now. Last month, Cloudflare Inc. and Coinbase Inc. launched the x402 Foundation to further the development of an open internet standard for payments. There is some urgency behind this initiative, as enterprises continue to adopt and implement AI agents; agents are ultimately expected to shop, make buying decisions and execute payments on behalf of users. The x402 standard, along with the Agent Payments Protocol, or AP2, under development by Google, provides an early look at how AI will automate consumer purchases.
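For a sense of how an HTTP 402-based flow works, the sketch below shows a server that answers unpaid requests with a 402 and a machine-readable description of the required payment. The X-PAYMENT header and the JSON field names are illustrative assumptions, not the published x402 specification.

```python
# Minimal sketch of an x402-style "payment required" flow over plain HTTP.
# Header and field names below are assumed for illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PaywalledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if "X-PAYMENT" not in self.headers:
            # No payment attached: reply 402 and describe what is required,
            # so an AI agent (or any client) can pay and retry automatically.
            body = json.dumps({
                "asset": "USDC",           # assumed field names
                "amount": "0.01",
                "payTo": "0xMERCHANT...",  # placeholder address
            }).encode()
            self.send_response(402, "Payment Required")
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
            return
        # A real server would verify the attached payment payload here
        # (e.g., confirm an on-chain stablecoin transfer) before serving.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"paid content")

if __name__ == "__main__":
    HTTPServer(("localhost", 8402), PaywalledHandler).serve_forever()
```

The appeal for agentic commerce is that the entire negotiation happens in standard HTTP: an agent that hits a 402 can read the payment requirements, settle in a stablecoin, and repeat the request with proof of payment, with no separate checkout flow or human in the loop.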
