D-Wave’s study finds 27% of business leaders whose company has implemented quantum optimization or plans to do so within the next two years expect a return on investment of more than $5 million in the first 12 months
D-Wave Quantum study highlights the potential for quantum optimization to create value across industries. According to the study, 46% of surveyed business leaders whose companies have implemented quantum optimization, or plan to do so within the next two years, expect a return on investment of between $1 million and $5 million in the first 12 months, while 27% predict a return of more than $5 million. A majority of the business leaders surveyed (81%) believe that they have reached the limit of the benefits they can achieve through optimization solutions running on classical computers. Against that backdrop, many are starting to explore whether quantum technologies can help: 53% are planning to build quantum computing into their workflows and 27% are considering doing so, indicating a growing recognition of quantum computing’s real-world business value. Among those who have already adopted quantum, 22% report it is making a significant impact, while another 50% anticipate it will be disruptive for their industry. The results of the study show that quantum computing is gaining recognition among business leaders for its potential to deliver major efficiencies in addressing complex optimization problems and driving operational improvements. 60% of respondents expect quantum computing-based optimization to be very or extremely helpful in solving the specific operational challenges their companies face; among the respondents most familiar with quantum, this figure rises to 73%, including nearly a quarter who describe it as “a game changer.” The areas in which business leaders expect to benefit from an investment in quantum optimization include supply chain and logistics (50%), manufacturing (38%), planning and inventory (36%), and research and development (36%). Most respondents (88%), especially those in the manufacturing industry, believe that their company would go “above and beyond” for even a 5% improvement in optimization.
Research shows gold nanoclusters with a spin polarization property can be easily synthesized in relatively large quantities to support and scale a variety of quantum applications
A team of researchers from Penn State and Colorado State has demonstrated how a gold cluster can mimic gaseous, trapped atoms, allowing scientists to take advantage of these atoms’ spin properties in a system that can be easily scaled up. The researchers show that gold nanoclusters have the same key spin properties as the current state-of-the-art systems used for quantum information. They can also manipulate an important property called spin polarization in these clusters, a property that is usually fixed in a material. The clusters can be easily synthesized in relatively large quantities, making this work a promising proof of concept that gold clusters could support a variety of quantum applications. An electron’s spin influences not only important chemical reactions but also quantum applications like computation and sensing: the direction an electron spins, and its alignment with respect to other electrons in the system, can directly impact the accuracy and longevity of quantum information systems. Gold clusters can mimic the best properties of trapped gaseous ions with the added benefit of scalability. Scientists have heavily studied gold nanostructures for their potential use in optical technology, sensing, therapeutics, and speeding up chemical reactions, but less is known about their magnetic and spin-dependent properties. In the current studies, the researchers specifically explored monolayer-protected clusters, which have a core of gold surrounded by other molecules called ligands. The researchers determined the spin polarization of the gold clusters using a method similar to that used with traditional atoms. The team plans to explore how different structures within the ligands impact spin polarization and how they could be manipulated to fine-tune spin properties. This presents a new frontier in quantum information science, as chemists can use their synthesis skills to design materials with tunable results.
Fujitsu developing a superconducting quantum computer with a capacity exceeding 10,000 qubits, using an early-stage fault-tolerant quantum computing (early-FTQC) architecture with 250 logical qubits
Fujitsu is developing a superconducting quantum computer with a capacity exceeding 10,000 qubits, with construction set to finish in fiscal 2030. The computer will use 250 logical qubits and Fujitsu’s “STAR architecture,” an early-stage fault-tolerant quantum computing (early-FTQC) architecture. The project, backed by Japan’s New Energy and Industrial Technology Development Organization (NEDO), aims to make practical quantum computing possible, particularly in materials science. Fujitsu will contribute to the industrialization of quantum computers through joint research with Japan’s National Institute of Advanced Industrial Science and Technology and RIKEN. The company plans to achieve a 1,000-logical-qubit machine by fiscal 2035, considering the possibility of multiple interconnected qubit chips. Fujitsu’s research efforts will focus on developing the following scaling technologies:
- High-throughput, high-precision qubit manufacturing: improving the manufacturing precision of Josephson junctions, the critical components of superconducting qubits, to minimize frequency variations.
- Chip-to-chip interconnect technology: developing wiring and packaging technologies to enable the interconnection of multiple qubit chips, facilitating the creation of larger quantum processors.
- High-density packaging and low-cost qubit control: addressing the challenges associated with cryogenic cooling and control systems, including techniques to reduce component count and heat dissipation.
- Decoding technology for quantum error correction: developing algorithms and system designs for decoding measurement data and correcting errors in quantum computations.
IBM’s new Relay-BP decoder algorithm offers a 10X increase in accuracy in detecting and correcting errors in quantum memory by analyzing syndromes, indirect measurements of quantum states
IBM researchers have developed a new decoder algorithm called Relay-BP, which significantly improves the detection and correction of errors in quantum memory. The algorithm shows a tenfold increase in accuracy over previous leading methods while reducing the computing resources required to implement it. Relay-BP addresses a persistent bottleneck in the quest to build reliable quantum computers and could see experimental deployment within the next few years. Quantum computers are sensitive to errors because their fragile qubits can be disturbed by environmental noise or imperfections in control. The decoder works by analyzing syndromes: indirect measurements of quantum states that provide clues about where something has gone wrong. Relay-BP, built on an improved version of a classical technique called belief propagation (BP), is the most compact, fast, and accurate implementation yet for decoding quantum low-density parity-check (qLDPC) codes. It is designed to overcome long-standing trade-offs: fast enough to keep up with quantum error rates, compact enough to run on field-programmable gate arrays (FPGAs), and flexible enough to adapt to a wide range of qLDPC codes. Relay-BP improves on standard BP by using memory tuning, a technique from physics, and its success is attributed to the team’s interdisciplinary approach, which combined expertise from firmware engineering, condensed matter physics, software development, and mathematics. IBM credits this cross-functional approach as a cultural strength of its quantum program. Relay-BP currently focuses on decoding for quantum memory, still short of full quantum processing; to achieve real-time quantum computation, the decoding must become faster and smaller. IBM plans to begin experimental testing of the decoder in 2026 on Kookaburra, an upcoming system designed to explore fault-tolerant quantum memory.
Relay-BP is considered a vital piece of the puzzle, pushing the limits of classical resources to stabilize quantum systems and offering a new tool for researchers looking to bridge the gap between experimental qubits and reliable quantum logic.
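To make the belief-propagation machinery concrete, here is a minimal min-sum BP decoder for a tiny classical parity-check code, in pure Python. This is an illustrative sketch of generic BP, not IBM’s Relay-BP (which adds memory tuning and targets qLDPC codes on FPGAs); the parity-check matrix, log-likelihood values, and function names are invented for the example.

```python
# Toy illustration of belief propagation (min-sum) decoding on a tiny
# classical parity-check code. NOT IBM's Relay-BP: this only shows the
# message-passing machinery that Relay-BP builds on, applied to a
# 3-bit repetition code (valid codewords: 000 and 111).

H = [[1, 1, 0],   # check 0 involves bits 0 and 1
     [0, 1, 1]]   # check 1 involves bits 1 and 2

def decode(llr, max_iters=10):
    """Min-sum BP. llr[i] > 0 means bit i looks like 0. Returns hard decisions."""
    n_checks, n_bits = len(H), len(llr)
    # variable-to-check messages, initialised to the channel beliefs
    v2c = {(v, c): llr[v] for c in range(n_checks)
           for v in range(n_bits) if H[c][v]}
    for _ in range(max_iters):
        # check-to-variable: product of signs, min of magnitudes (excluding target)
        c2v = {}
        for c in range(n_checks):
            nbrs = [v for v in range(n_bits) if H[c][v]]
            for v in nbrs:
                others = [v2c[(u, c)] for u in nbrs if u != v]
                sign = 1
                for m in others:
                    sign *= 1 if m >= 0 else -1
                c2v[(v, c)] = sign * min(abs(m) for m in others)
        # total belief and hard decision per bit
        total = [llr[v] + sum(c2v[(v, c)] for c in range(n_checks) if H[c][v])
                 for v in range(n_bits)]
        bits = [0 if t >= 0 else 1 for t in total]
        # stop once every parity check (the "syndrome") is satisfied
        if all(sum(bits[v] for v in range(n_bits) if H[c][v]) % 2 == 0
               for c in range(n_checks)):
            return bits
        # variable-to-check: total belief minus the message being replied to
        for (v, c) in v2c:
            v2c[(v, c)] = total[v] - c2v[(v, c)]
    return bits

# Bit 1 is received unreliably as "1"; BP corrects it back to the 000 codeword.
print(decode([2.0, -1.0, 2.0]))  # → [0, 0, 0]
```

The decoder’s stopping condition mirrors the syndrome checks described above: iteration continues until every parity constraint is satisfied or the iteration budget runs out.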
D-Wave’s new quantum AI toolkit enables developers to seamlessly integrate quantum computers into modern ML architectures
D-Wave has released a collection of offerings to help developers explore and advance quantum artificial intelligence (AI) and machine learning (ML) innovation, including an open-source quantum AI toolkit and a demo. Available now for download, the quantum AI toolkit enables developers to seamlessly integrate quantum computers into modern ML architectures. Developers can leverage the toolkit to experiment with using D-Wave™ quantum processors to generate simple images. By releasing this new set of tools, D-Wave aims to help organizations accelerate the use of annealing quantum computers in a growing set of AI applications. The quantum AI toolkit, part of D-Wave’s Ocean™ software suite, provides direct integration between D-Wave’s quantum computers and PyTorch, a production-grade ML framework widely used to build and train deep learning models. The toolkit includes a PyTorch neural network module that uses a quantum computer to build and train a type of ML model known as a restricted Boltzmann machine (RBM). By integrating with PyTorch, the new toolkit aims to make it easy for developers to experiment with quantum computing to address computational challenges in training AI models. “With this new toolkit and demo, D-Wave is enabling developers to build architectures that integrate our annealing quantum processors into a growing set of ML models,” said Dr. Trevor Lanting, chief development officer at D-Wave.
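For readers unfamiliar with RBMs, the following is a minimal classical sketch in pure Python (bias terms omitted for brevity) of the contrastive-divergence training loop such a model uses. This is not D-Wave’s Ocean API: in the actual toolkit the model is a PyTorch module, and the model-side samples would come from a quantum annealer rather than the Gibbs step shown here.

```python
# Minimal classical sketch of a restricted Boltzmann machine trained with
# one step of contrastive divergence (CD-1). Illustrative only: D-Wave's
# toolkit wraps the RBM as a PyTorch module and draws the model-side
# samples from a quantum annealer instead of the Gibbs step used below.
import math
import random

random.seed(0)
N_VIS, N_HID, LR = 4, 2, 0.1
W = [[random.gauss(0, 0.1) for _ in range(N_HID)] for _ in range(N_VIS)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v):
    """Sample binary hidden units given a visible vector; also return probs."""
    probs = [sigmoid(sum(v[i] * W[i][j] for i in range(N_VIS)))
             for j in range(N_HID)]
    return [1 if random.random() < p else 0 for p in probs], probs

def sample_visible(h):
    """Sample binary visible units given a hidden vector."""
    probs = [sigmoid(sum(h[j] * W[i][j] for j in range(N_HID)))
             for i in range(N_VIS)]
    return [1 if random.random() < p else 0 for p in probs]

def cd1_step(v0):
    """One contrastive-divergence update: data phase minus model phase."""
    _, p0 = sample_hidden(v0)
    h0, _ = sample_hidden(v0)
    v1 = sample_visible(h0)        # reconstruction (quantum sampler goes here)
    _, p1 = sample_hidden(v1)
    for i in range(N_VIS):
        for j in range(N_HID):
            W[i][j] += LR * (v0[i] * p0[j] - v1[i] * p1[j])

for _ in range(100):
    cd1_step([1, 0, 1, 0])  # train on a single toy pattern
```

The appeal of the quantum integration is in the model phase: sampling low-energy configurations of the RBM’s energy function is exactly the kind of task an annealing processor is designed for.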
New quantum framework for analysing higher-order topological data achieves linear scaling in signal dimension, using quantum linear systems algorithms that work with the data’s native format to enable efficient encoding and manipulation of multi-way signals
A team of researchers led by Professor Kavan Modi from the Singapore University of Technology and Design (SUTD) has taken a conceptual leap into the complexity of real-world networks by developing a new quantum framework for analysing higher-order network data. Their work centres on a mathematical field called topological signal processing (TSP), which encodes not only connections between pairs of points but also relationships among triplets, quadruplets, and beyond. Here, “signals” are information that lives on higher-dimensional shapes (triangles or tetrahedra) embedded in a network. The team introduced a quantum version of this framework, called Quantum Topological Signal Processing (QTSP): a mathematically rigorous method for manipulating multi-way signals using quantum linear systems algorithms. Unlike prior quantum approaches to topological data analysis, which often suffer from impractical scaling, the QTSP framework achieves linear scaling in signal dimension, an improvement that opens the door to efficient quantum algorithms for problems previously considered out of reach. The technical insight behind QTSP lies in the structure of the data itself. Classical approaches typically require costly transformations to fit topological data into a form usable by quantum devices. In QTSP, however, the data’s native format is already compatible with quantum linear systems solvers, thanks to recent developments in quantum topological data analysis. This compatibility allows the team to circumvent a major bottleneck (efficient data encoding) while ensuring the algorithm remains mathematically grounded and modular. Still, loading data into quantum hardware and retrieving results without overwhelming the quantum advantage remains an unsolved challenge: even with linear scaling, quantum speedups can be nullified by overheads in pre- and post-processing.
The framework achieves linear scaling and has been demonstrated through a quantum extension of the classical HodgeRank algorithm, with potential applications in recommendation systems, neuroscience, physics and finance.
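To ground the HodgeRank mention, here is a tiny classical HodgeRank example in pure Python: recovering global scores from pairwise comparisons by least squares on a graph. The edge data and gradient-descent solver are invented for illustration; the quantum extension demonstrated in the paper would replace this least-squares solve with a quantum linear systems routine.

```python
# Tiny classical HodgeRank example: recover global scores from pairwise
# comparisons by least squares on a graph. Illustrative sketch only; the
# QTSP work demonstrates a quantum extension of this classical algorithm.

# Edge (i, j, y) asserts "item j beats item i by margin y", i.e. s[j] - s[i] ≈ y.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]
n = 3
s = [0.0] * n  # scores, initialised to zero

# Minimise sum of (s[j] - s[i] - y)^2 over edges by plain gradient descent.
for _ in range(2000):
    grad = [0.0] * n
    for i, j, y in edges:
        r = s[j] - s[i] - y      # residual on this edge
        grad[j] += 2 * r
        grad[i] -= 2 * r
    s = [si - 0.05 * gi for si, gi in zip(s, grad)]

# Scores are defined only up to a constant shift; report relative to item 0.
rel = [round(si - s[0], 3) for si in s]
print(rel)  # consistent comparisons recover the exact margins: [0.0, 1.0, 2.0]
```

With inconsistent comparisons (cycles such as A beats B, B beats C, C beats A), the same least-squares fit returns the best achievable global ranking plus a residual, which is where the higher-order topological structure enters.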
Physicists leverage real-time AI control to assemble world’s largest 2,024-atom quantum array, paving the way for scalable, efficient quantum computing breakthroughs
A team led by Chinese physicist Pan Jianwei used artificial intelligence (AI) to help create an atom-based quantum computing component that dwarfs previous systems in size, raising hopes that neutral-atom machines could one day operate with tens of thousands of qubits. The team arranged 2,024 rubidium atoms, each functioning as a qubit, into precise two- and three-dimensional arrays. The feat reportedly marks a tenfold increase over the largest previous atom arrays and addresses one of the field’s most stubborn bottlenecks: how to scale beyond a few hundred qubits without prohibitive delays. Until now, researchers typically moved atoms into place one at a time, making large-scale arrays impractical. Pan’s team, working with the Shanghai Artificial Intelligence Laboratory, replaced this slow step with a real-time AI control system that shifts every atom in the array simultaneously. The setup uses a high-speed spatial light modulator to shape laser beams into traps that corral the atoms. The AI system calculates where each atom needs to go and directs the lasers to move them into position in just 60 milliseconds (about the time it takes a hummingbird to flap its wings five times), regardless of whether the array contains hundreds or thousands of atoms. In principle, the method could scale to arrays with tens of thousands of atoms without slowing down. Scaling neutral-atom arrays to that size could allow them to run algorithms that are currently beyond the reach of classical computers and existing quantum prototypes; applications could range from simulating complex molecules for drug discovery to solving optimization problems in logistics and materials science. The AI-guided control method, coupled with high-precision lasers, essentially removes the scaling penalty that has long plagued neutral-atom designs.
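The scaling advantage can be seen with back-of-envelope arithmetic. The 60 ms parallel figure is the one reported for the AI-driven system; the 1 ms per-atom cost assumed for the sequential case is a hypothetical illustration, not a measured value.

```python
# Back-of-envelope comparison: sequential atom-by-atom rearrangement grows
# linearly with array size, while the reported AI-driven parallel move takes
# a constant ~60 ms regardless of size. The per-atom time below is an
# assumed illustrative value, not a measurement.
T_PARALLEL_MS = 60.0   # reported: all atoms moved simultaneously
T_PER_ATOM_MS = 1.0    # hypothetical: cost of moving one atom at a time

for n_atoms in (100, 2024, 10_000):
    sequential_ms = n_atoms * T_PER_ATOM_MS
    print(f"{n_atoms:>6} atoms: sequential ~{sequential_ms:.0f} ms, "
          f"parallel {T_PARALLEL_MS:.0f} ms")
```

Under these assumptions, even a 2,024-atom array would take over two seconds to assemble sequentially, while the parallel method stays at 60 ms at any size, which is the claimed removal of the scaling penalty.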
Discovery of “neglectons” boosts topological quantum computing—theorized quasiparticles enable robust, universal quantum logic by expanding computational power of special particles called anyons
A team of mathematicians and physicists in the US has discovered a way to exploit a previously neglected aspect of topological quantum field theory, revealing that topological states can be much more broadly useful for quantum computation than previously believed. The quantum bits in topological quantum computers are based on particle-like knots, or vortices, in the sea of electrons washing through a material, known as anyons. The advantage of anyon-based quantum computing is that the only thing that can change the state of anyons is moving them around in relation to each other, a process called “braiding” that alters their relative topology. However, not all anyons are up to the task: in the standard semisimple model, braiding the available anyons, known as Ising anyons, yields only a limited range of computational logic gates, all of which can be efficiently simulated by classical computers, reducing their usefulness for truly ground-breaking quantum machines. By retaining objects that the semisimple model throws away, the theorized quasiparticles dubbed “neglectons,” the team showed that braiding can be extended toward universal quantum logic. The catch is that the resulting transformations are not automatically unitary; the team solved this problem with ingenious workarounds created by Lauda’s PhD student, Filippo Iulianelli, restricting the computational space to only those regions where anyon transformations work out as unitary.
Terra Quantum’s QMM-Enhanced Error Correction boosts quantum processor fidelity by reducing errors up to 35% without added complexity or mid-circuit measurements
Terra Quantum has introduced QMM-Enhanced Error Correction, a hardware-validated, measurement-free method that suppresses quantum errors and improves fidelity on existing processors without architectural changes. Validated on IBM’s superconducting processors, the QMM layer functions as a lightweight, unitary “booster” that enhances fidelity without mid-circuit measurements or added two-qubit gates, offering a powerful alternative to traditional surface codes. A single QMM cycle achieves 73% fidelity, is entirely unitary, and is feedback-free. When combined with a repetition code, logical fidelity increases to 94%, representing a 32% gain achieved without the addition of CX gates. In hybrid workloads such as variational quantum classifiers, QMM reduces training loss by 35% and halves run-to-run performance variance. Simulations show that three QMM layers can achieve error rates comparable to those of a distance-3 surface code, while requiring ten times fewer qubits. QMM is especially relevant in environments where traditional error correction is impractical or cost-prohibitive. It addresses core challenges across photonic and analog platforms where mid-circuit measurements are infeasible, cloud-based quantum systems that demand minimal gate depth and latency, and hybrid quantum-classical applications, where even marginal stability gains translate to significant performance benefits. Terra Quantum’s QMM layer introduces a new architectural class for quantum systems. Think of it as a quantum tensor core: a compact, circuit-level module that boosts fidelity and suppresses coherent errors without increasing circuit depth or gate count. With up to 35% error reduction, seamless integration, and no extra two-qubit operations, QMM enables more performance per qubit, per dollar, and per watt. For hardware vendors, system integrators, and developers, this provides a clear path toward scalable, fault-tolerant quantum computing without requiring redesign of the stack.
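As context for the repetition-code figures quoted above, here is the generic majority-vote arithmetic behind repetition codes, in pure Python. This is textbook redundancy math, not Terra Quantum’s QMM scheme or its reported 73%-to-94% result; the fidelity values used are illustrative.

```python
# Generic repetition-code arithmetic (not Terra Quantum's QMM scheme):
# with n independent copies at per-copy fidelity p, majority vote succeeds
# whenever more than half the copies are correct.
from math import comb

def majority_fidelity(p, n):
    """Probability that a majority of n independent copies is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Example: a distance-3 repetition code turns 90% per-copy fidelity into ~97.2%.
print(round(majority_fidelity(0.9, 3), 4))  # → 0.972
```

Note the break-even behaviour: at p = 0.5 the code gives no gain, and below 0.5 it makes things worse, which is why boosting the per-cycle fidelity of the underlying layer (QMM’s role) matters before adding redundancy.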
