Dianomic, a leader in intelligent industrial data pipelines and edge AI/ML solutions, has launched FogLAMP Suite 3.0. The solution’s ‘Intelligent Industrial Data Pipelines’ abstract machines, sensors, and processes into a unified real-time data and streaming analytics system for brownfield and greenfield environments alike. By seamlessly connecting and integrating the plant floor to the cloud and back with high-quality, normalized streaming data, FogLAMP 3.0 enables innovations such as AI-driven applications, digital twins, lakehouse data management, unified namespace, and OT/IT convergence. FogLAMP Suite 3.0 creates an intelligent data fabric, unifying and securing real-time operational data at scale with enterprise-grade management. This comprehensive data flow empowers both plant-level optimization and cloud-based insights. Its role-based access control, intuitive graphical interface, and flexible development tools, ranging from no-code to source code, enable IT and OT teams to collaborate effectively or work independently with confidence. FogLAMP Suite 3.0 key features: Real-time Full-Fidelity Streaming Analytics and Data Management – where the physical world meets the digital; Enterprise-Wide – manage, integrate, and monitor streaming data from diverse sources to clouds and back; Live Digital Twins – manage tags and namespaces, use semantic models, and detect, predict, and prescribe with AI/ML; Broad Compatibility – brownfield, greenfield, and IIoT processes, equipment, and sensors.
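The announcement doesn’t show FogLAMP’s actual plugin interfaces, but the ingest-normalize-forward pattern it describes can be sketched generically. The following is a minimal illustrative sketch, assuming hypothetical `south_read`, `normalize`, and `north_send` stages rather than FogLAMP’s real API: a raw plant-floor reading is ingested, normalized into engineering units, and forwarded toward a cloud endpoint.

```python
# Illustrative sketch only -- NOT FogLAMP's actual plugin API. It mimics the
# south (ingest) -> filter (normalize) -> north (forward) pattern that
# industrial data pipelines use to move plant-floor readings to the cloud.
import json
import time

def south_read(sensor):
    """Ingest a raw reading from a hypothetical plant-floor sensor."""
    return {"asset": sensor["name"], "ts": time.time(), "value": sensor["read"]()}

def normalize(reading, unit_scale=1.0):
    """Filter stage: scale to engineering units and tag with metadata."""
    reading["value"] *= unit_scale
    reading["unit"] = "degC"  # assumed unit for this example
    return reading

def north_send(reading):
    """Forward stage: a real deployment would POST this to a cloud endpoint."""
    print(json.dumps(reading))

fake_sensor = {"name": "boiler-1/temp", "read": lambda: 71.3}
north_send(normalize(south_read(fake_sensor), unit_scale=1.0))
```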
OpenAI’s enterprise adoption appears to be accelerating at the expense of rivals: 32% of U.S. businesses are paying for subscriptions to OpenAI, versus 8% subscribing to Anthropic’s products and 0.1% to Google AI
OpenAI appears to be pulling well ahead of rivals in the race to capture enterprises’ AI spend, according to transaction data from fintech firm Ramp. According to Ramp’s AI Index, which estimates the business adoption rate of AI products by drawing on Ramp’s card and bill-pay data, 32.4% of U.S. businesses were paying for subscriptions to OpenAI AI models, platforms, and tools as of April. That’s up from 18.9% in January and 28% in March. Competitors have struggled to make similar progress, Ramp’s data shows. Just 8% of businesses had subscriptions to Anthropic’s products as of last month, compared to 4.6% in January. Google AI subscriptions, meanwhile, declined from 2.3% in February to 0.1% in April. “OpenAI continues to add customers faster than any other business on Ramp’s platform,” wrote Ramp economist Ara Kharazian. “Our Ramp AI Index shows business adoption of OpenAI growing faster than competitor model companies.” To be clear, Ramp’s AI Index isn’t a perfect measure. It only looks at a sample of corporate spend data from around 30,000 companies. Moreover, because the index identifies AI products and services using merchant names and line-item details, it likely misses spend lumped into other cost centers. Still, the figures suggest that OpenAI is strengthening its grip on the large and growing enterprise market for AI. OpenAI is projecting $12.7 billion in revenue this year and $29.4 billion in 2026.
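Ramp doesn’t publish its index methodology as code, but the merchant-matching approach the article describes can be approximated. Below is a hypothetical sketch, with invented sample data, of how subscription adoption might be estimated from card transactions by matching merchant names; as noted above, this style of matching would miss spend lumped into other cost centers.

```python
# Hypothetical sketch of a Ramp-style adoption index: flag AI vendors by
# merchant name in card transactions, then compute the share of companies
# with at least one matching subscription charge. All data is invented.
import pandas as pd

transactions = pd.DataFrame({
    "company_id": [1, 1, 2, 3, 3, 4],
    "merchant":   ["OpenAI", "Slack", "Anthropic", "OpenAI", "Zoom", "Figma"],
})

AI_VENDORS = {"OpenAI", "Anthropic", "Google AI"}  # simplified match list

total_companies = transactions["company_id"].nunique()
adopters = transactions[transactions["merchant"].isin(AI_VENDORS)]
share = adopters.groupby("merchant")["company_id"].nunique() / total_companies
print(share)  # per-vendor share of companies with a matching charge
```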
Talent development, the right data infrastructure, industry-specific strategic bets, responsible AI governance, and agentic architecture are key to scaling enterprise AI initiatives
A new study from Accenture provides a data-driven analysis of how leading companies are successfully implementing AI across their enterprises and reveals a significant gap between AI aspirations and execution. Here are five key takeaways for enterprise IT leaders from Accenture’s research.
Talent maturity outweighs investment as the key scaling factor. Accenture’s research reveals that talent development is actually the most critical differentiator for successful AI implementation. “We found the top achievement factor wasn’t investment but rather talent maturity,” Senthil Ramani, data and AI lead at Accenture, said. The report shows front-runners differentiate themselves through people-centered strategies: they focus four times more on cultural adaptation than other companies, emphasize talent alignment three times more, and implement structured training programs at twice the rate of competitors. IT leader action item: Develop a comprehensive talent strategy that addresses both technical skills and cultural adaptation. Establish a centralized AI center of excellence – the report shows 57% of front-runners use this model compared to just 16% of fast-followers.
Data infrastructure makes or breaks AI scaling efforts. “The biggest challenge for most companies trying to scale AI is the development of the right data infrastructure,” Ramani said. “97% of front-runners have developed three or more new data and AI capabilities for gen AI, compared to just 5% of companies that are experimenting with AI.” These essential capabilities include advanced data management techniques such as retrieval-augmented generation (RAG), used by 17% of front-runners versus 1% of fast-followers, and knowledge graphs (26% vs. 3%), as well as diverse data utilization across zero-party, second-party, third-party, and synthetic sources. IT leader action item: Conduct a comprehensive data readiness assessment explicitly focused on AI implementation requirements. Prioritize building capabilities to handle unstructured data alongside structured data, and develop a strategy for integrating tacit organizational knowledge.
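To make the RAG capability concrete, here is a minimal, self-contained sketch: retrieval uses toy TF-IDF cosine similarity over three invented documents, and the final LLM call is stubbed out. A production system would use a vector store and an actual model API; everything here is an illustrative assumption.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant document for a query, then build a grounded prompt for an LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Maintenance manual: recalibrate the filler every 500 hours.",
    "HR policy: submit PTO requests two weeks in advance.",
    "Safety note: lockout-tagout is required before servicing line 3.",
]

def retrieve(query, docs, k=1):
    """Rank docs by TF-IDF cosine similarity to the query; return top k."""
    vec = TfidfVectorizer().fit(docs + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def answer(query):
    """Build the augmented prompt; in practice this would go to an LLM."""
    context = " ".join(retrieve(query, documents))
    return f"Answer using only this context: {context}\nQuestion: {query}"

print(answer("How often should the filler be recalibrated?"))
```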
Strategic bets deliver superior returns compared to broad implementation. While many organizations attempt to implement AI across multiple functions simultaneously, Accenture’s research shows that focused strategic bets yield significantly better results. “In the report, we referred to ‘strategic bets,’ or significant, long-term investments in gen AI focusing on the core of a company’s value chain and offering a very large payoff. This strategic focus is essential for maximizing the potential of AI and ensuring that investments deliver sustained business value.” This focused approach pays dividends: companies that have scaled at least one strategic bet are nearly three times more likely to have their ROI from gen AI surpass forecasts than those that haven’t. IT leader action item: Identify 3-4 industry-specific strategic AI investments that directly impact your core value chain rather than pursuing broad implementation.
Responsible AI creates value beyond risk mitigation. Most organizations view responsible AI primarily as a compliance exercise, but Accenture’s research reveals that mature responsible AI practices directly contribute to business performance. “ROI can be measured in terms of short-term efficiencies, such as improvements in workflows, but it really should be measured against longer-term business transformation.” The report emphasizes that responsible AI includes not just risk mitigation but also strengthens customer trust, improves product quality and bolsters talent acquisition – directly contributing to financial performance. IT leader action item: Develop comprehensive responsible AI governance that goes beyond compliance checkboxes. Implement proactive monitoring systems that continually assess AI risks and impacts. Consider building responsible AI principles directly into your development processes rather than applying them retroactively.
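As one way to read “building responsible AI principles directly into your development processes,” here is a hypothetical sketch of a release gate that checks every model response before it ships. The patterns, threshold, and `release_gate` helper are invented placeholders, not a standard from the report.

```python
# Hypothetical "responsible AI as code" sketch: a lightweight pre-release
# check applied to every model response, so governance runs inside the
# pipeline rather than being applied retroactively. Rules are placeholders.
import re

BLOCKED_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]   # e.g., US SSN-shaped strings
MAX_RISK_SCORE = 0.3                             # placeholder risk threshold

def release_gate(response: str, risk_score: float) -> tuple[bool, str]:
    """Return (allowed, reason) for a model response before release."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, response):
            return False, "blocked: response matched a PII pattern"
    if risk_score > MAX_RISK_SCORE:
        return False, "blocked: risk score above threshold"
    return True, "allowed"

print(release_gate("Your SSN is 123-45-6789", risk_score=0.1))
print(release_gate("Here is the summary you asked for.", risk_score=0.05))
```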
Model Context Protocol, an open standard architecture consisting of servers and clients, will be key to building secure, two-way connections between AI agents’ data sources and tools as AI systems mature and begin to maintain context
AI agents have been all the rage over the last several months, creating the need for a standard governing how they communicate with tools and data; this led Anthropic to create the Model Context Protocol (MCP). MCP is “an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools,” Anthropic wrote in a blog post announcing it was open-sourcing the protocol. MCP can do for AI agents what USB does for computers, explained Lin Sun, senior director of open source at cloud native connectivity company Solo.io. According to Keith Pijanowski, AI solutions engineer at object storage company MinIO, an example use case for MCP is an AI agent for travel that can book a vacation adhering to someone’s budget and schedule. Using MCP, the agent could look at the user’s bank account to see how much money they have to spend on a vacation, look at their calendar to ensure it’s booking travel when they have time off, or even potentially look at their company’s HR system to make sure they have PTO left. MCP consists of servers and clients: the MCP server is how an application or data source exposes its data, while the MCP client is how AI applications connect to those data sources. MinIO developed its own MCP server, which allows users to ask the AI agent about their MinIO installation: how many buckets they have, the contents of a bucket, or other administrative questions. The agent can also pass questions off to another LLM and then come back with an answer. “Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today’s fragmented integrations with a more sustainable architecture,” Anthropic wrote in its blog post.
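As a concrete illustration, the official MCP Python SDK exposes a FastMCP helper for writing servers. The sketch below, loosely modeled on MinIO’s bucket-counting example, exposes two tools over a hard-coded bucket list; exact SDK APIs may vary by version, and the `storage-demo` server name and its data are assumptions for illustration.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper
# (pip install "mcp[cli]"). A connected MCP client can call these tools,
# analogous to asking "how many buckets do I have?" of a storage backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("storage-demo")

BUCKETS = ["raw-data", "models", "exports"]  # stand-in for a real backend

@mcp.tool()
def count_buckets() -> int:
    """Return the number of storage buckets the agent can see."""
    return len(BUCKETS)

@mcp.tool()
def list_bucket_names() -> list[str]:
    """Return the bucket names so a client LLM can reason over them."""
    return BUCKETS

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; MCP clients connect here
```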
A new HPC architecture with a “bring-your-own-code” (BYOC) approach would enable existing code to run unmodified; the underlying technology adapts to each application without new languages or significant code changes
There’s now a need for a new path forward that allows developers to speed up their applications with fewer barriers, which will ensure faster time to innovation without being locked into any particular vendor. The answer is a new kind of accelerator architecture that embraces a “bring-your-own-code” (BYOC) approach. Rather than forcing developers to rewrite code for specialized hardware, accelerators that embrace BYOC would enable existing code to run unmodified. The focus should be on accelerators where the underlying technology adapts to each application without new languages or significant code changes. This approach offers several key advantages: Elimination of Porting Overhead: Developers can focus on maximizing results rather than wrestling with hardware-specific adjustments. Software Portability: As performance accelerates, applications retain their portability and avoid vendor lock-in and proprietary domain-specific languages. Self-Optimizing Intelligence: Advanced accelerator designs can continually analyze runtime behavior and automatically tune performance as the application executes to eliminate guesswork and manual optimizations. These advantages translate directly into faster results, reduced overhead, and significant cost savings. Finally liberated from extensive code adaptation and reliance on specialized HPC experts, organizations can accelerate R&D pipelines and gain insights sooner. The BYOC approach eliminates the false trade-off between performance gains and code stability, which has hampered HPC adoption. By removing these artificial boundaries, BYOC opens the door to a future where computational power accelerates scientific progress. A BYOC-centered ecosystem democratizes access to computational performance without compromise. It will enable domain experts across disciplines to harness the full potential of modern computing infrastructure at the speed of science, not at the speed of code adaptation.
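No BYOC accelerator’s internals are described here, so the following toy Python sketch only illustrates the self-optimizing idea: the user’s `saxpy` function stays unmodified while a hypothetical `AutoTuningRuntime` benchmarks alternative execution strategies on real calls and keeps the fastest. Both names are invented for illustration.

```python
# Toy illustration (not any vendor's product) of the BYOC idea: user code
# is unchanged; a runtime layer measures candidate execution strategies at
# the first call and routes subsequent calls to the fastest one.
import time
import numpy as np

def saxpy(a, x, y):          # unmodified user code
    return a * x + y

class AutoTuningRuntime:
    """Pick the fastest backend for a function by timing real calls."""
    def __init__(self, fn, backends):
        self.fn, self.backends, self.best = fn, backends, None

    def __call__(self, *args):
        if self.best is None:               # first call: benchmark backends
            timings = {}
            for name, run in self.backends.items():
                t0 = time.perf_counter()
                run(self.fn, *args)
                timings[name] = time.perf_counter() - t0
            self.best = min(timings, key=timings.get)
        return self.backends[self.best](self.fn, *args)

backends = {
    "eager": lambda f, *a: f(*a),
    "chunked": lambda f, a_, x, y: np.concatenate(
        [f(a_, xc, yc) for xc, yc in zip(np.array_split(x, 4), np.array_split(y, 4))]),
}

x, y = np.ones(1_000_000), np.ones(1_000_000)
accelerated = AutoTuningRuntime(saxpy, backends)
print(accelerated(2.0, x, y)[:3])  # user code ran unmodified; runtime chose backend
```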
The line between eCommerce and fintech is disappearing, and the future belongs to integrated ecosystems that combine seamless shopping experiences with embedded financial solutions: Analyst
Jose Daniel Duarte Camacho, a renowned eCommerce and FinTech innovator, has outlined a vision for the future of digital commerce and financial services. He believes that companies that embrace digital agility and customer-centric strategies will emerge as frontrunners in this wave of technological disruption. Duarte Camacho believes that the line between eCommerce and financial technology is disappearing, and the future belongs to integrated ecosystems that combine seamless shopping experiences with embedded financial solutions. Consumers expect speed, trust, and personalization at every touchpoint. Duarte Camacho has identified four major trends that are shaping the future of eCommerce: AI-Driven Hyperpersonalization: Retailers are using machine learning to adapt in real time to individual user behavior. Product recommendations, pricing, and content are becoming uniquely tailored to each customer—boosting conversion rates and customer satisfaction. Immersive Shopping Experiences with AR and VR: Augmented and virtual reality tools are transforming product visualization and engagement. Customers can now preview how furniture fits in a room or how a garment looks on them—without setting foot in a store. Eco-Conscious Consumer Demands: Sustainability is no longer a bonus; it’s a business imperative. eCommerce platforms that prioritize eco-friendly packaging, carbon-neutral shipping, and ethical sourcing are capturing the loyalty of a new generation of socially conscious shoppers. Conversational Commerce and Voice Technology: Voice assistants and chat-based shopping are simplifying online transactions. Duarte Camacho believes brands must optimize for voice commerce and natural language processing to remain competitive in the evolving customer interface.
Sakana’s Continuous Thought Machines (CTM) AI model architecture uses short-term memory of previous states and allows neural synchronization to mirror brain-like intelligence
AI startup Sakana has unveiled a new type of AI model architecture called Continuous Thought Machines (CTM). Rather than relying on fixed, parallel layers that process inputs all at once, as Transformer models do, CTMs unfold computation over steps within each input/output unit, known as an artificial “neuron.” Each neuron retains a short history of its previous activity and uses that memory to decide when to activate again. These decisions unfold over internal steps known as “ticks,” which lets the model adjust the depth and duration of its reasoning dynamically, depending on the complexity of the task; each neuron is therefore far more informationally dense and complex than in a typical Transformer model. The number of ticks varies with the input, and can differ even when the input is identical, because each neuron decides how many ticks to undergo before producing an output (or not producing one at all). This represents both a technical and philosophical departure from conventional deep learning, moving toward a more biologically grounded model. Sakana has framed CTMs as a step toward more brain-like intelligence: systems that adapt over time, process information flexibly, and engage in deeper internal computation when needed. Sakana’s goal is “to eventually achieve levels of competency that rival or surpass human brains.” The CTM is built around two key mechanisms. First, each neuron maintains a short “history,” or working memory, of when it activated and why, and uses this history to decide when to fire next. Second, neural synchronization (how and when groups of a model’s artificial neurons “fire,” or process information together) is allowed to happen organically: groups of neurons decide when to fire together based on internal alignment, not external instructions or reward shaping. These synchronization events are used to modulate attention and produce outputs; that is, attention is directed toward those areas where more neurons are firing. The model isn’t just processing data, it’s timing its thinking to match the complexity of the task. Together, these mechanisms let CTMs reduce computational load on simpler tasks while applying deeper, prolonged reasoning where needed.
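Sakana’s implementation isn’t reproduced here, but the two mechanisms can be caricatured in a few lines. The toy sketch below gives each neuron a short activation history that gates whether it fires on the next tick, and stops “thinking” once enough neurons fire in synchrony; the `TickNeuron` class and all numbers are invented for illustration.

```python
# Toy caricature of the two CTM mechanisms -- per-neuron short-term memory
# and emergent synchronization -- NOT Sakana's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

class TickNeuron:
    def __init__(self, history_len=5):
        self.history = [0.0] * history_len   # short-term memory of activity

    def step(self, stimulus):
        # Fire when the stimulus exceeds the neuron's recent average:
        # the memory, not a fixed layer depth, gates activation.
        fired = float(stimulus > np.mean(self.history) + 0.1)
        self.history = self.history[1:] + [stimulus]
        return fired

neurons = [TickNeuron() for _ in range(8)]
x = rng.normal(size=8)                       # one input, processed over ticks
for tick in range(6):
    fires = np.array([n.step(xi + 0.2 * tick) for n, xi in zip(neurons, x)])
    attention = fires / max(fires.sum(), 1)  # attention follows the firing group
    print(f"tick {tick}: {int(fires.sum())} fired, attention={attention.round(2)}")
    if fires.sum() >= 6:                     # enough synchrony: stop "thinking"
        break
```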
Jenius Bank surpassed $2 billion in deposits with its no-fee ‘evolved banking’ approach, centered on providing personalized financial insights from account aggregation
Jenius Bank has surpassed $2 billion in deposits by focusing on “evolved banking”: providing personalized financial insights through account aggregation while eliminating fees to help customers gain financial confidence and make better decisions. John Rosenfeld, President of Jenius Bank, a division of SMBC MANUBANK, said: “We developed two concepts within a paradigm, if you will. There’s core banking, which is what every bank does: it allows you to put money with them, go online, see how much you have and how much you’re earning, move money in and out, review your statements, and read your terms and conditions. We call that core banking. We developed the concept of evolved banking, which encompasses everything beyond core features that not every bank offers. And we grouped all this into something we call the Jenius views. So, if you download our mobile app, you’ll find this tab at the bottom. Within this space, you’re able to link your accounts from other banks and other brokerages. You can view credit cards and your entire financial picture in one place. This allows the consumer to give us access to their other information, enabling us to consolidate it and provide them with valuable insights. While there are some banks doing this, what we call aggregation services, many of them are doing it to gain a view of the customer’s financial situation and then potentially use that information to figure out what else they can sell them. We took a different approach. We said, what if we used all that information to actually give customers insights and help them avoid fees, making smarter and more confident financial decisions? Now, why would a bank do such a thing that’s not necessarily going to bolster its profits? We thought about this and concluded that if we could establish a new level of trust with consumers, the next time they have a financial need, we hope they’ll come back to us first. Money is such an emotional driver that it has nothing to do with how much you make or don’t make, but rather whether you are making good decisions. With the capabilities that are evolving in data, analytics, machine learning, and AI, if a consumer gives someone full access to every penny they have (not access to move the money, but access to the information), think of how much you can do with technology to identify the things they may not have noticed. The lack of fees on our savings or loan products was really driven by wanting to create something better and more compelling than what’s available in the industry. We created a bank that’s incredibly efficient because we don’t have buildings, we don’t have paper, and we don’t mail things, so we don’t spend any money on postage. The target was really what we call high-potential digital optimizers: high potential means they’re going somewhere and are ambitious. They want to progress in building a better lifestyle and achieving more.”
Stash’s advanced AI-powered financial guidance platform translates expert-level investing strategies into real-time, personalized recommendations; 1 in 4 customers who interact with Money Coach AI take a positive action within 10 minutes of the interaction
Stash has secured $146 million in a Series H funding round to deepen its investment in AI for its financial guidance platform. The investment will accelerate product innovation, drive subscriber growth, and further develop Stash’s AI capabilities. Central to this strategy is Money Coach AI, an advanced financial guidance platform that translates expert-level investing strategies into real-time, personalized recommendations for everyday users. Money Coach AI has already reshaped how millions of Americans engage with their money and think about their personal finances. From helping customers pick their first investment to providing personalized diversification guidance, Money Coach AI helps customers get started and make saving and investing a habit that sticks. With 2.2 million user interactions already, Money Coach AI will serve as the cornerstone of Stash’s renewed commitment to help users build savings, invest consistently, and make smart financial decisions. Notably, 1 in 4 customers who interact with Money Coach AI go on to take a positive action within 10 minutes, such as making an investment, depositing funds, diversifying, or turning on or adjusting Auto-Stash, demonstrating its tangible impact on behavior. Through its scalable approach, Stash is demonstrating that AI can do more than automate; it can empower users by helping them make informed financial decisions in real time.
FINRA is considering easing the “heightened supervisory plans” imposed over messages sent using WhatsApp and other off-channel communications
FINRA is looking to lighten the supervision burden on nearly 80 firms that reached settlements before the start of the year over messages sent using WhatsApp and other texting systems. Financial Industry Regulatory Authority executives said they’re considering revisions to the “heightened supervisory plans” that 77 industry firms were subjected to as part of settlements reached over their use of so-called off-channel communications. Concerns over fairness gave rise to FINRA’s proposal to modify the regulatory requirements imposed on firms that reached settlements pre-2025. FINRA’s blog post, written by CEO Robert Cook and Executive Vice President Greg Ruppert, notes that firms that reached settlements after the start of this year were subject to far less onerous terms. Those companies, which included Charles Schwab, Blackstone and the private equity giant KKR, avoided various other mandates imposed on other firms. They, for instance, don’t have to file an application to continue their membership in FINRA and agree to a heightened supervision plan (HSP) meant to prevent further violations. FINRA’s blog cautions that the contemplated changes won’t make things equal between firms that reached settlements this year and those that did before. “FINRA cannot do that because of the differences built into the SEC settlements,” according to the blog. “In addition, under applicable rules FINRA cannot eliminate the HSPs altogether for the pre-2025 settling firms.” Cook and Ruppert wrote in FINRA’s blog that they were initially planning to ask the SEC to eliminate heightened supervision plans for member firms fined for off-channel violations. But that can’t be done now that the SEC has rejected the request to modify the initial settlements. FINRA, a self-regulatory organization deputized by the SEC to oversee the brokerage industry, has no power to alter SEC deals on its own.