Uber and Flytrex, Inc. announced a strategic partnership and Uber's first investment in drone delivery, marking a major step forward in the future of autonomous logistics. Expected to begin with Uber Eats pilot markets in the U.S. by the end of the year, the new service combines Flytrex's proven autonomous drone delivery system with Uber's global platform and logistics expertise, creating a fully integrated end-to-end experience designed for speed, safety, and scale. Uber aims to build the world's most flexible, multimodal delivery network, expanding beyond cars, bikes, and couriers to sidewalk robots and now autonomous aerial delivery. Flytrex, one of only four drone delivery providers authorized by the FAA for Beyond Visual Line of Sight (BVLOS) operations, brings technology and operational expertise to Uber that will enable consumers to receive orders in minutes while reducing congestion and emissions. Integrating Flytrex's BVLOS-certified drones with the vast network of restaurants, merchants, and consumers on the Uber platform will unlock faster, safer, and more cost-efficient last-mile logistics at scale. Drone delivery has the potential to significantly reduce delivery times, lower costs, and cut emissions compared to traditional methods, unlocking a future where everything from dinner to daily essentials arrives in minutes, not hours.
STBL launches yield-stripping protocol with 103% over-collateralization using Franklin Templeton BENJI and BlackRock BUIDL, achieving $2.3B Fully Diluted Valuation through GENIUS Act-compliant principal-yield separation
A new startup, STBL, emulates TradFi's zero-coupon strip structures by converting digital assets into a dollar-pegged stablecoin and a yield-bearing non-fungible token (NFT). Just as with the traditional equivalent, the components can be held separately, allowing investors to keep the part that appeals to them and sell the other to counterparties with different attitudes to risk. The product, currently in beta testing, goes beyond just packaging risk into different tranches; it also widens the stablecoin issuance model. With regular stablecoins, like USDT, the issuing company, in this case Tether, keeps the returns on the Treasuries it holds to maintain the token's peg to the dollar. It's a profitable business: Tether reported $4.9 billion in net profit in the second quarter. With STBL, whoever deposits a tokenized asset into the system becomes the minter and keeps the returns. "Our mission at STBL is to evolve stablecoins from corporate products into public infrastructure," said STBL co-founder Reeve Collins, who was also a co-founder of Tether. "For the first time, minters, not issuers, retain the value of reserves. This is the defining shift of Stablecoin 2.0: money that is stable, compliant, and built to serve the community." When a yield-bearing on-chain asset is deposited and locked into the STBL protocol, it splits into a stablecoin (USST) that can circulate and serve as collateral or reserves in DeFi, and a separate, yield-accruing NFT called YLD.
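The mechanics are easy to picture in code. Below is a minimal sketch of that principal/yield split, assuming a simplified deposit flow; the StblProtocol and YieldNFT classes, their fields, and their methods are illustrative stand-ins, not STBL's actual contract interface.

```python
# Minimal sketch of STBL-style principal/yield separation.
# All names and numbers here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class YieldNFT:
    """YLD: entitles its holder to the yield accruing on the locked asset."""
    owner: str
    underlying: str
    face_value: float
    accrued_yield: float = 0.0

@dataclass
class StblProtocol:
    usst: dict = field(default_factory=dict)          # circulating USST balances
    yld_registry: list = field(default_factory=list)  # outstanding YLD NFTs

    def deposit(self, minter: str, asset: str, face_value: float) -> YieldNFT:
        # Lock the tokenized yield-bearing asset, mint USST against the
        # principal, and issue a YLD NFT so the minter keeps the yield.
        self.usst[minter] = self.usst.get(minter, 0.0) + face_value
        nft = YieldNFT(owner=minter, underlying=asset, face_value=face_value)
        self.yld_registry.append(nft)
        return nft

    def accrue(self, nft: YieldNFT, annual_rate: float, days: int) -> None:
        # Yield flows to the YLD holder; USST stays pegged at face value.
        nft.accrued_yield += nft.face_value * annual_rate * days / 365

protocol = StblProtocol()
yld = protocol.deposit("minter_a", "BENJI", 1_000_000.0)
protocol.accrue(yld, annual_rate=0.05, days=30)
print(protocol.usst["minter_a"], round(yld.accrued_yield, 2))
```

The design point the sketch captures is that the yield accrues to whoever minted, mirroring Collins's "minters, not issuers" framing.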
States rethink mandatory abuse reporting for vulnerable adults; Wisconsin’s model prioritizes victim safety and self-determination instead of blanket mandates to better balance protection and privacy
AARP estimates that financial exploitation costs older Americans more than $28 billion each year. As of 2025, 16 states have sweeping reporting requirements that oblige nearly everyone who becomes aware of mistreatment to report it, and every state requires at least certain categories of people to report abuse. Mandatory reporting aims to overcome the many reasons people might otherwise stay silent, such as fear of retaliation, reluctance to damage relationships, or simple unwillingness to get involved. By making reporting a legal duty, states hope to turn bystanders into protectors. Repealing mandatory reporting statutes may not be politically realistic, but amending them is. States should reconsider blanket mandates and focus on situations where victims are at ongoing risk or are genuinely unable to act for themselves. Wisconsin's approach, which allows would-be reporters to consider whether reporting is in the victim's best interest, could be a model for other states seeking a better balance between protecting older adults on one hand and respecting their privacy and right to self-determination on the other. Modernizing reporting mandates, however, is only the first step. Ultimately, the success of mandatory reporting laws depends on the strength of the systems that respond to reports when they are made. If Adult Protective Services (APS) lacks the resources to investigate and offer services, or if the services offered have limited value, then reports, even if dutifully made, may not result in meaningful protection. Accordingly, it is important to think about how to expand the tools available to APS. One promising approach is being pioneered by the RISE Collaborative model, which pairs older adults who have experienced mistreatment with a professional "advocate" who helps the older adult embrace a desire for improvement and identify goals. The advocate can then work with the older adult to achieve those goals, including repairing relationships that may have been damaged by the abuse.
European Central Bank concludes that collateral-backed stablecoins can serve as a fungible form of payment, provided they ensure settlement finality, interoperability with other forms of payment, and convertibility into central bank money
The European Central Bank published a notable paper on stablecoin fungibility. This is not related to the debate around fungibility of EU versus foreign stablecoins, where the EU's comparatively generous redemption rules are causing concerns that a run on a US stablecoin could encourage foreigners to redeem from the EU version of the same stablecoin. Instead, the paper explores a broad definition of fungibility. While the exercise may seem academic, fungibility and utility are closely related. The authors focus on stablecoins but note that the framework outlined also applies to retail CBDCs. They don't mention deposit tokens, but it clearly applies to that use case as well. The conclusion is that collateral-backed stablecoins are a fungible form of payment, provided they satisfy the key elements in the framework: settlement finality, interoperability with other forms of payment, and (ultimately) convertibility into central bank money. To achieve this, it is essential that regulated stablecoins have sufficient high-quality collateral, liquidity, and a capital buffer. The most complex and interesting part of the paper is on interoperability, with blockchain interoperability comprising only a minor part of the assessment.
Extreme Networks deploys universal AI-powered network architecture for enterprise transformation that correlates device health with performance data across edge-to-core traffic flows
Extreme Networks Inc.'s vision is a universal, cloud-native, AI-powered network serving as the driver of innovation and accelerator of business value in the emerging digital era. Rather than focusing on siloed AI for wireless or WAN alone, Extreme emphasizes a holistic approach in which AI is integrated throughout the entire network, from the data center to the campus to the branch, according to Dan DeBacker, senior vice president of product management at Extreme. One of the most striking changes AI is driving is in network traffic flows. Traditional applications, such as Microsoft Teams, Zoom and Salesforce, now share bandwidth with massive volumes of AI data traveling from edge devices back into centralized data centers and clouds. This influx of edge-to-core traffic requires networks that can handle far more complexity while maintaining seamless user experiences, DeBacker noted. Extreme is leveraging its strengths in wired fabric and wireless solutions to empower AI at the edge. With data being the currency of AI, the ability to pull that information directly from both the network and end devices is critical for the AI network. By correlating device health with network performance, Extreme helps ensure connectivity and superior user experiences across industries including healthcare, manufacturing and smart cities, according to DeBacker.
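As a rough illustration of what correlating device health with network performance can look like in practice, here is a minimal sketch; the metrics, sample values, and threshold are assumptions for illustration, not Extreme's actual analytics.

```python
# Illustrative sketch only: correlating device-health telemetry with
# network performance samples. All metrics and values are assumptions.
from statistics import correlation  # Python 3.10+

device_cpu_load = [0.42, 0.55, 0.61, 0.74, 0.88, 0.93]  # device health samples
wifi_retry_rate = [0.03, 0.05, 0.06, 0.11, 0.19, 0.24]  # network performance samples

r = correlation(device_cpu_load, wifi_retry_rate)
if r > 0.8:
    # Device load tracks the retry rate closely, suggesting the degradation
    # is client-side rather than a network fault.
    print(f"strong correlation (r={r:.2f}): investigate device health first")
```

The value of this kind of join is triage: when device-side and network-side signals move together, operators can tell whether a poor user experience originates at the endpoint or in the network.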
California senate approves bill requiring AI companies with over $500 million in annual revenue to publish safety and incident reports and to provide whistleblower protections
California's state senate recently gave final approval to a new AI safety bill, SB 53, sending it to Governor Gavin Newsom to either sign or veto. SB 53 is narrower than the previous SB 1047, focusing on big AI companies making more than $500 million in annual revenue. SB 53 still puts some meaningful regulations on the AI labs: it makes them publish safety reports for their models, and if they have an incident, it forces them to report it to the government. It also gives employees at these labs a channel to report concerns to the government without facing pushback from the companies, even though many of them have signed NDAs. This feels like a potentially meaningful check on tech companies' power, something we haven't really had for the last couple of decades. The bill specifically applies to AI developers generating more than $500 million from their AI models; it targets OpenAI, Google DeepMind and other big companies, not your run-of-the-mill startup. It's also worth noting the broader landscape around AI regulation: one of the big changes between last year and this year is a new president. The federal administration has taken much more of a no-regulation stance, holding that companies should be able to do what they want, to the extent that it has included language in funding bills saying states cannot have their own AI regulation. None of that has passed so far, but the administration could try to get it through in the future, so this could be another front in the fight between the Trump administration and blue states.
Wayve advances autonomous driving with a proposed $500 million Nvidia investment, leveraging mapless end-to-end neural networks to enable eyes-off and driverless Level 4 capabilities
Nvidia CEO Jensen Huang pledged to invest £2 billion ($2.6 billion) to supercharge the U.K.'s AI startup ecosystem. Wayve, the U.K.-based self-driving tech startup, has signed a letter of intent with Nvidia to evaluate a $500 million strategic investment in the startup's next funding round. Wayve has gained attention and investors for its automated driving system, which takes a self-learning rather than rules-based approach to its self-driving software. Wayve's end-to-end neural network doesn't require high-definition maps and uses only data to teach the vehicle how to drive. That data-driven learning approach is used for both "eyes on" assisted driving and an "eyes off" fully automated driving system. The company plans to sell its "Embodied AI" to automakers and other tech companies. Wayve's self-learning approach, similar to the strategy Tesla uses, is seen as particularly appealing to automakers because it is not reliant on specific sensors or maps. This means Wayve's system can work with existing sensors like cameras and radar: the automated driving software captures data from those sensors, which directly informs the driving decisions of the system. Wayve's generation 2 self-driving platform, which is integrated into its Ford Mustang Mach-E test vehicles, uses Nvidia GPUs. This week, the startup unveiled gen 3, a platform built on Nvidia Drive AGX Thor, an in-vehicle compute development kit for autonomous vehicles. Gen 3 will allow Wayve to offer eyes-off advanced driver-assistance systems and Level 4 driverless features that work on city streets and highways.
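For readers unfamiliar with the approach, the sketch below shows the general shape of a mapless, end-to-end driving policy: raw camera and radar tensors in, control commands out, with no HD-map input. The architecture, dimensions, and input shapes are illustrative assumptions, not Wayve's actual network.

```python
# Minimal sketch of a mapless end-to-end driving policy.
# Layer sizes and sensor shapes are illustrative assumptions.
import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Camera encoder: learns features directly from pixels, no map input.
        self.cam_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Radar encoder: consumes a flat vector of returns.
        self.radar_encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        # Head maps fused features straight to controls (steer, accel, brake).
        self.head = nn.Sequential(nn.Linear(32 + 32, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, camera, radar):
        fused = torch.cat([self.cam_encoder(camera), self.radar_encoder(radar)], dim=-1)
        return self.head(fused)  # trained on driving data, not hand-written rules

policy = EndToEndPolicy()
controls = policy(torch.randn(1, 3, 128, 256), torch.randn(1, 64))
print(controls.shape)  # torch.Size([1, 3])
```

The point of the end-to-end design is visible in the forward pass: sensor data maps directly to driving decisions, which is why the system can work with whatever cameras and radar an automaker already ships.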
Poor enterprise UI design fuels shadow AI growth, with 115,000 unauthorized apps projected by year-end despite massive increases in corporate AI investment
Poorly designed internal AI apps are failing to deliver the experiences employees need to excel, further fueling shadow AI's growing dominance. With 92% of companies planning to increase their AI investments but only 21% of office workers saying AI apps significantly improve their productivity, more businesses are grappling with how to close a 71% gap between expectations and reality. More organizations need to challenge themselves to improve the employee experiences their internally created apps deliver. "The biggest paradox in enterprise AI adoption is that companies are spending heavily, but employees don't feel the benefit," said Vineet Arora, CTO at WinWire. "This isn't about the algorithms, it's about usability. If the AI tools don't feel as intuitive as the ones employees already trust, adoption stalls and shadow AI fills the gap." The majority of employees creating shadow AI apps aren't acting maliciously or trying to harm their company; they're grappling with growing amounts of increasingly complex work, chronic time shortages, and tighter deadlines. Building AI tools on a usability blueprint that is years or even decades old invites shadow AI, and IT teams are missing an opportunity to deliver exceptional new employee experiences by staying in the comfort zone of building internal apps the way they always have. The result is becoming predictable as shadow AI flourishes: shadow AI financial analysis apps integrated with APIs from the world's top AI companies, including OpenAI, Perplexity and Google, are proliferating. Their use in consulting companies continues to lead all other industries, as many employees see them as a hedge against layoffs. By the end of the year, an estimated 115,000 shadow AI apps will be embedded in client delivery workflows, with mobile apps showing the fastest growth.
Generative AI isn't culturally neutral, MIT research finds, but these cultural tendencies can be adjusted through simple prompts
A new study led by MIT Sloan’s Jackson Lu suggests generative AI models have cultural leanings. In their new paper, “Cultural Tendencies in Generative AI,” Lu and his co-authors — Lesley Song from Tsinghua University and Lu Zhang from MIT — emphasize that generative AI models’ cultural tendencies reflect the cultural patterns of the data they were trained on. “Our findings suggest that the cultural tendencies embedded within AI models shape and filter the responses that AI provides,” said Lu, an associate professor of work and organization studies at MIT Sloan. “As generative AI becomes part of everyday decision-making, recognizing these cultural tendencies will be critical for both individuals and organizations worldwide.” In their study, the researchers asked GPT and ERNIE the same set of questions in English and Chinese. The choice of languages was intentional. English and Chinese not only embody distinct cultural values but are also the world’s two most widely spoken languages, so the two languages provide extensive training data for generative AI. Importantly, neither AI model translates between languages when responding — Chinese prompts are processed directly in Chinese, and English prompts are processed directly in English. The researchers then analyzed the responses using two foundational dimensions from cultural psychology: social orientation and cognitive style. The results were clear: Both GPT and ERNIE reflected the cultural leanings of the languages used. In English, the models leaned toward an independent social orientation and analytic thinking. In Chinese, they shifted to a more interdependent social orientation and holistic thinking. When researchers asked generative AI to advise an insurance company on choosing between two advertising slogans, the recommendations differed for Chinese and English. The study also found that these cultural tendencies can be adjusted through simple prompts.
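The "simple prompts" finding is straightforward to reproduce in spirit. The sketch below assumes an OpenAI-style chat API; the system prompts and slogan task are illustrative stand-ins, not the study's exact materials, and the model name is an assumption.

```python
# Illustrative sketch: steering a model's cultural frame with a simple
# system prompt, then asking the slogan question the study describes.
# Requires an OPENAI_API_KEY in the environment; prompts are assumptions.
from openai import OpenAI

client = OpenAI()

def advise(cultural_frame: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": cultural_frame},
            {"role": "user", "content": (
                "An insurance company must pick one advertising slogan: "
                "(a) 'Take charge of your own future.' or "
                "(b) 'Protect the ones who count on you.' Which do you recommend?"
            )},
        ],
    )
    return resp.choices[0].message.content

# Same question, two cultural frames: per the study, simple prompts like
# these can shift the model's social orientation.
print(advise("Respond as a person from an individualistic culture."))
print(advise("Respond as a person from a collectivistic culture."))
```

The first frame would be expected to favor the independence-oriented slogan and the second the interdependence-oriented one, which is the pattern the researchers observed across English and Chinese responses.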
