Delta Air Lines expects its rapidly expanding luxury and corporate customer base to overtake the sales it gets from fliers who buy economy seats by 2027, and the shift may hold true for a quarter or two as soon as next year. Shares of Delta surged as much as 8% in premarket trade and traded about 4% higher after it unveiled quarterly earnings. A resurgence in travel demand in recent weeks should carry through the fourth quarter of this year, marking a turnaround from a turbulent start to 2025. Delta, the biggest U.S. airline by market capitalization, said it has seen broad improvement in sales trends across all geographies over the past six weeks. The pickup helped Delta’s revenue rise 6% to $16.67 billion in the third quarter from a year earlier, ahead of Wall Street expectations, boosted by premium cabin demand, corporate travel and its loyalty program customers. Delta expects the top line to keep climbing in the final stretch of 2025, providing an upbeat readout of holiday travel as other airlines get set to report earnings over the coming weeks. The carrier caters to higher-earning consumers who are still spending and seeking out travel despite signs of turmoil in the economy. Delta saw improvements across most of its product categories in the third quarter, particularly domestic main-cabin unit revenue, which rose 2% from a year earlier after falling 5% in the second quarter. The bottom line, meanwhile, got a lift from falling fuel prices.
RAG and vector search provide missing business-specific or even department-specific context for AI; enabling accurate, actionable insights by integrating structured enterprise data at query time
RAG has fast become a darling adjunct to generative AI services. Still highly applicable (some say essential) as a means of bringing domain-specific, business-specific or even department-specific smaller language model context into the wider world of large language models, RAG is not something we should start ragging on (i.e., dissing). RAG uses more specifically aligned enterprise data to feed relevant information into an AI model or agent to improve the quality of the generated response. By incorporating this more “finely tailored data” at the point of query, a RAG architecture can increase the relevance and factual accuracy of AI outputs. RAG-powered models are able to reduce frustrating hallucinations and ground responses in contextually relevant information. For example, when integrated with a RAG layer that searches a current database of workflows, banking assets and past queries, an assistant can pull in new, relevant protocols based on user questions and explain them back in natural language. RAG should also be backed up by a ‘fast data layer’ that aggregates and structures the unstructured data within an organization, which the RAG architecture can then parse through when queried. That’s RAG at work. RAG closes the gap between enterprise AI deployment and success by situating model results within appropriate, helpful business context.
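To make the "retrieval at the point of query" idea concrete, here is a minimal sketch of the pattern in Python. It is not any vendor's RAG layer; it assumes the sentence-transformers package, and the embedding model name and the sample documents are placeholders standing in for whatever a fast data layer has already structured.

```python
# Minimal retrieval-at-query-time sketch (illustrative, not any vendor's RAG layer).
# Assumes the sentence-transformers package; model name and documents are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model (assumption)

# Department-specific snippets a "fast data layer" might have structured already.
documents = [
    "Quarter-close goods receipts must be posted within 48 hours of delivery.",
    "Wire transfers above $50,000 require dual approval from treasury.",
    "Customer onboarding checklist was updated in September 2025.",
]
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # vectors are normalized, so dot product == cosine similarity
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "What are the approval rules for large wire transfers?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` is then sent to the language model, grounding its answer in enterprise data.
print(prompt)
```

The retrieved snippets, not the model's pretraining, carry the business context, which is what lets the generated answer stay grounded in current enterprise data.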
Robo.ai and Changer.ae unveil the world’s first smart vehicle with an embedded, compliant digital wallet; enabling autonomous payments for tolls, charging, and maintenance
Robo.ai Inc. and UAE-regulated digital asset custodian Changer.ae jointly unveiled “Roboy339”, the world’s first smart vehicle to be equipped with its own digital wallet, at TOKEN2049. In August 2025, Robo.ai and Changer.ae signed a strategic memorandum of understanding to co-innovate in the infrastructure of compliant wallets and digital accounts. This joint unveiling at TOKEN2049 represents a significant milestone in that collaboration, and it also underscores Robo.ai’s strategic progress in deploying “smart machine × compliant stablecoin capability” in the Middle East. As an entity under the Abu Dhabi Global Market (ADGM), Changer.ae provides regulated virtual asset custody, forming the foundational support for device-level financial functionality. The Roboy339’s compliant digital wallet enables autonomous real-time payments for tolls, charging, maintenance, and leasing, while also processing authorized income and transactions.
AI inference accelerates as Groq scales sovereign cloud solutions with Bell Canada, offering fast, energy-efficient compute for governments and enterprise workflow automation.
Groq started out developing AI inference chips and has expanded on that with a software platform called GroqCloud. The company now operates in two markets: working with developers and innovators to power applications with AI, and managing sovereign AI for international clients. Groq describes implementing its system as almost like drop-shipping an AI inference compute center. The company plans to build on its work with Bell Canada, where it manages a sovereign AI network across six sites, and it is fielding interest from other national telecommunications companies, according to Chris Stephens, vice president and field chief technology officer of Groq. Stephens foresees competition between incumbent enterprise application vendors such as SAP SE and AI-native startups that want to disrupt the current software hierarchy. Whichever way the market lands, Groq has put itself squarely in the inference space, which is seeing a growing number of use cases in workflow automation and customer service. Stephens sees Groq’s hardware layer as a significant advantage in the current market, enabling the company to run more data centers while using less energy than its competitors.
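For the developer side of that story, a minimal sketch of calling a model hosted on GroqCloud might look like the following. It assumes GroqCloud's OpenAI-compatible endpoint and an API key in a GROQ_API_KEY environment variable; the model name is illustrative and may differ from what is actually offered.

```python
# Sketch of low-latency chat inference against GroqCloud (assumptions noted above).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # OpenAI-compatible GroqCloud endpoint (assumption)
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a customer-service triage assistant."},
        {"role": "user", "content": "Summarize this ticket and suggest a next step: order #4512 arrived damaged."},
    ],
)
print(response.choices[0].message.content)
```

A workflow-automation or customer-service agent would sit on top of calls like this one, which is where inference speed and energy cost per token start to matter.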
UiPath evolves RPA into agentic AI; combining bots and agents with human oversight for judgment-based enterprise automation requiring governance and strategic collaboration
Robotic process automation (RPA) is evolving beyond repetitive task automation into a new era of agentic intelligence, where bots and agents work together to enable enterprise-scale automation strategy. This next chapter builds on the foundation of RPA. It extends into judgment-based work that demands governance and oversight, according to Dana Forfa, vice president and global head of procurement, real estate and travel at UiPath Inc. The ability to combine these approaches enables new levels of productivity while freeing employees to focus on more strategic work, according to Forfa. “We had a bot that would chase people at quarter close to complete the goods receipt process, rather than individuals following up all the time,” she said. “The difference now is that the agent is actually answering some of the questions the procurement buyer would’ve had to answer.” The distinction between deterministic tasks handled by RPA and probabilistic tasks suited for agents underscores why human oversight is crucial. Balancing these approaches creates both efficiency and trust, according to Hitesh Ramani, chief accounting officer and deputy chief financial officer of UiPath.
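The deterministic-versus-probabilistic split can be illustrated with a small sketch. This is not UiPath's API; every function below is hypothetical, and it simply pairs a rule-based bot check with an agent-drafted follow-up that a person must approve before it goes out.

```python
# Illustrative sketch: deterministic bot rule + probabilistic agent step + human approval gate.
# Not UiPath's API; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float
    goods_receipt_posted: bool

def bot_check_goods_receipt(invoice: Invoice) -> bool:
    """Deterministic RPA-style rule: the same input always yields the same answer."""
    return invoice.goods_receipt_posted and invoice.amount < 10_000

def agent_draft_followup(invoice: Invoice) -> str:
    """Probabilistic agent step: in practice this would call an LLM, so output varies."""
    return (f"Hi {invoice.vendor}, the goods receipt for your ${invoice.amount:,.2f} "
            f"invoice is missing. Could you confirm delivery so we can close the quarter?")

def human_approves(draft: str) -> bool:
    """Judgment-based work stays with a person: review the agent's draft before sending."""
    print("Agent draft:\n" + draft)
    return input("Send this message? [y/n] ").strip().lower() == "y"

invoice = Invoice(vendor="Acme Corp", amount=8_250.00, goods_receipt_posted=False)
if bot_check_goods_receipt(invoice):
    print("Bot: invoice cleared automatically.")
elif human_approves(agent_draft_followup(invoice)):
    print("Agent follow-up sent with human sign-off.")
else:
    print("Escalated to the procurement buyer.")
```

The governance point is in the structure: the bot's rule needs no review because it is repeatable, while the agent's generated text is routed through a human before anything is sent.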
Photonic fabrics replace copper interconnects in AI data centers; reducing energy consumption fourfold and enabling efficient GPU clustering for massive AI model processing
Traditional copper connections can no longer keep pace with the bandwidth and thermal demands of today’s AI factories, opening the door for light-based photonic fabrics that promise faster communication, lower energy use and higher GPU utilization. This evolution marks a turning point where data movement — not compute — has become the defining factor in the race to build the next era of intelligent infrastructure, according to Preet Virk, co-founder and chief operating officer of Celestial AI Inc. GPUs must be connected efficiently across racks and even data centers, while managing heat, power and bandwidth. Copper connections can’t keep up at scale. That’s where photonics — a light-based interconnect technology — emerges as the clear solution, according to Virk. “What we focused on is the photonic fabric and the scale-up network day one, not scale-out. That’s where, as they say, the pain is, and that’s where the photonic fabric comes in. What we allow is for the industry to build very large clusters in a very efficient fashion.” In modern AI data centers, data movement, not compute, is the largest energy drain.
IBM’s open-sourced language model series introduces hybrid Mamba-2 architecture with mixture-of-experts design; cutting RAM requirements from 90GB to 15GB for comparable model performance.
IBM open-sourced Granite 4, a language model series that combines elements of two different neural network architectures. The algorithm family includes four models on launch. They range in size from 3 billion to 32 billion parameters. IBM claims they can outperform comparably sized models using less memory. Granite-4.0-Micro, one of the smallest algorithms in the lineup, is based on the Transformer architecture that powers most large language models. The architecture’s flagship feature is its so-called attention mechanism. The mechanism enables an LLM to review a snippet of text, identify the most important sentences and prioritize them during the decision-making process. The three other Granite 4 models combine an attention mechanism with processing components based on the Mamba neural network architecture, a Transformer alternative. The technology’s main selling point is that it’s more hardware-efficient: Mamba models require a fraction of the memory, which reduces inference costs. Granite 4 builds on Mamba-2, a newer version of the architecture that compresses one of the technology’s core components into about 25 lines of code. That enables Mamba-2 to perform some tasks using less hardware than the original version of the architecture. The most advanced Granite 4 model, Granite-4.0-H-Small, includes 32 billion parameters. It has a mixture-of-experts design that activates 9 billion parameters to answer prompts. IBM envisions developers using the model for tasks such as processing customer support requests. The two other Mamba-Transformer algorithms in the series, Granite-4.0-H-Tiny and Granite-4.0-H-Micro, feature 7 billion and 3 billion parameters, respectively. They’re designed for latency-sensitive use cases that prioritize speed over processing accuracy.
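Because the series is open-sourced, trying one of the smaller models locally is straightforward with Hugging Face transformers. The sketch below assumes the checkpoints are published under the ibm-granite organization and that the repository ID shown is correct; both are assumptions, so substitute whichever Granite 4 variant fits your hardware.

```python
# Minimal local-inference sketch for a Granite 4 model via Hugging Face transformers.
# The repo ID is an assumed name for one of the smaller hybrid models; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-h-micro"  # assumed Hugging Face repo ID (3B hybrid model)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights keep memory use modest
    device_map="auto",           # place layers on a GPU if one is available
)

messages = [{"role": "user", "content": "Summarize this support ticket: my order arrived damaged."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The memory claim in the headline is about exactly this setup: the hybrid Mamba-Transformer design is meant to let a comparable model run in far less RAM than a pure Transformer of similar quality.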
Shanghai researchers show agentic AI emerges from quality over quantity; they trained superior autonomous systems with 78 examples versus thousands used by conventional approaches
A new study by Shanghai Jiao Tong University and SII Generative AI Research Lab (GAIR) shows that training large language models (LLMs) for complex, autonomous tasks does not require massive datasets. Their framework, LIMI (Less Is More for Intelligent Agency), finds that “machine autonomy emerges not from data abundance but from strategic curation of high-quality agentic demonstrations.” In experiments, the researchers found that with a small, but carefully curated, dataset of just 78 examples, they could train LLMs to outperform models trained on thousands of examples by a considerable margin on key industry benchmarks. This discovery could have important implications for enterprise applications where data is scarce or expensive to collect. The LIMI framework demonstrates that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Key to the framework is a pipeline for collecting high-quality demonstrations of agentic tasks. The LIMI-trained model achieved an average score of 73.5% on AgencyBench, significantly outperforming all baseline models, the best of which (GLM-4.5) scored 45.1%. This superiority extended to other benchmarks covering tool use, coding, and scientific computing, where LIMI also outperformed all baselines. More importantly, the study showed that the model trained on just 78 examples outperformed models trained with 10,000 samples from another dataset, delivering superior performance with 128 times less data. “This discovery fundamentally reshapes how we develop autonomous AI systems, suggesting that mastering agency requires understanding its essence, not scaling training data,” the researchers write. Instead of undertaking massive data collection projects, organizations can leverage their in-house talent and subject matter experts to create small, high-quality datasets for bespoke agentic tasks. This lowers the barrier to entry and enables businesses to build custom AI agents that can provide a competitive edge on the workflows that matter most to them.
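Operationally, the training step looks like ordinary supervised fine-tuning on a very small, carefully chosen dataset. The sketch below is not the authors' LIMI pipeline; it is a generic SFT example using a recent version of Hugging Face TRL, where the base model ID and the toy conversations are placeholders standing in for the roughly 78 curated agentic trajectories described in the study.

```python
# Generic supervised fine-tuning sketch on a small, curated set of agentic demonstrations.
# Not the LIMI pipeline itself; model ID and examples are placeholders (see note above).
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# A handful of high-quality demonstrations, per the study's "less is more" claim.
curated_examples = [
    {"messages": [
        {"role": "user", "content": "Set up a Python project with tests for a CSV parser."},
        {"role": "assistant", "content": "Plan: 1) create pyproject.toml, 2) write the parser, "
                                         "3) add pytest cases for quoting and empty fields."},
    ]},
    # ...on the order of 78 such curated trajectories in the paper's setting
]
train_dataset = Dataset.from_list(curated_examples)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small placeholder base model for illustration
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="limi-style-sft",
                   num_train_epochs=3,
                   per_device_train_batch_size=1),
)
trainer.train()
```

The leverage, per the paper, is not in the trainer but in the curation step: which demonstrations make it into `curated_examples` matters far more than how many there are.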
Service-as-software emerges as a new business model, unifying data and processes into a system of intelligence where agents orchestrate knowledge work and encode expertise.
We’re on the cusp of a new software-enabled business model that will determine winners and losers in the coming decades. We call this service-as-software. We believe the enterprise interface is thinning as users increasingly “speak” to systems in natural language. This shift reframes how the digital representation of the enterprise should be built. Our belief is that the next architectural milestone is the emergence of a system of intelligence (SoI) — a unified, contextual layer that interprets signals (including from business intelligence) and feeds systems of agents coordinated by agent-control frameworks. In our view, the path forward is a new full stack that favors end-to-end integration of both data and processes over the creation of new islands. As interfaces thin and language becomes the control plane, governance catalogs become the point of control, the SoI becomes the enterprise brain, and agentic systems become the execution layer.
Holders of premium credit cards are reportedly paring down the number of such cards they carry after recently announced increases in annual fees
According to a report from The Wall Street Journal (WSJ), holders of premium credit cards are paring down the number of such cards they carry after recently announced increases in annual fees. These moves come after JPMorgan Chase raised the annual fee on its Sapphire Reserve card by 45% to $795 in June, and American Express said in September it was raising the annual fee on its Platinum card by $200. Three cardholders interviewed by the WSJ said recent fee hikes prompted them to compare the benefits of their premium cards — in one case, by putting together a spreadsheet — and then focus their spending on one while closing the other. A fourth cardholder told the WSJ that he ended up keeping a high-fee card for another year after calling to cancel it and being offered a statement credit in exchange for a specified amount of spending on the card. Issuers are adding greater sign-up bonuses and benefits to their high-fee cards to retain customers. The report cited a Bank of America finding that less than 15% of cardholders pay more than $250 in annual fees, and a J.D. Power finding that those who pay annual fees of $500 or more spend three times more than other cardholders and are less risky because three-quarters pay the card’s balance each month. It also said that Bank of America found American Express’ retention rates went up after previous fee hikes and that J.D. Power found the holders of high-fee cards are generally more satisfied with the cards than other cardholders are with theirs.
