A team of researchers from Penn State and Colorado State has demonstrated that gold nanoclusters can mimic trapped gaseous atoms, allowing scientists to exploit atom-like spin properties in a system that can be easily scaled up. The researchers show that gold nanoclusters have the same key spin properties as the current state-of-the-art systems for quantum information. They can also manipulate an important property called spin polarization in these clusters, a property that is usually fixed in a given material. Because the clusters can be synthesized in relatively large quantities, the work is a promising proof of concept that gold clusters could support a variety of quantum applications. An electron’s spin influences not only important chemical reactions but also quantum applications such as computation and sensing. The direction an electron spins, and its alignment relative to other electrons in the system, can directly affect the accuracy and longevity of quantum information systems. Gold clusters can mimic the best properties of trapped gaseous atoms while offering the benefit of scalability. Scientists have heavily studied gold nanostructures for their potential use in optical technology, sensing, therapeutics, and catalysis, but less is known about their magnetic and spin-dependent properties. In the current studies, the researchers specifically explored monolayer-protected clusters, which have a gold core surrounded by other molecules called ligands. The researchers determined the spin polarization of the gold clusters using a method similar to the one used with traditional atoms. The team plans to explore how different structures within the ligands affect spin polarization and how they could be manipulated to fine-tune spin properties. This presents a new frontier in quantum information science, as chemists can use their synthesis skills to design materials with tunable spin properties.
Quantum Motion delivers the industry’s first full-stack silicon CMOS quantum computer; built on 300 mm wafers and housed in three 19‑inch racks, the architecture is designed to scale to millions of qubits
Quantum Motion has delivered the industry’s first full-stack quantum computer built using a standard silicon CMOS chip fabrication process, the same transistor technology used in conventional computers. Deployed at the UK National Quantum Computing Centre (NQCC), this is the first full-stack quantum computer to use mass-manufacturable 300 mm silicon CMOS wafer technology and the first silicon spin-qubit computer installed under the NQCC’s Quantum Computing Testbed Programme. The system integrates the company’s Quantum Processing Unit (QPU) with a user interface and a control stack compatible with industry-standard software frameworks such as Qiskit and Cirq (illustrated in the sketch below), making it a full-stack solution. The system has a data-centre-friendly footprint of just three 19-inch server racks, housing the dilution refrigerator and integrated control electronics. Auxiliary equipment is designed to sit separately, enabling the system to fit in standard data-centre environments and supporting upgrades to much larger QPUs without any change to the footprint. Unlike other quantum computing approaches, Quantum Motion’s architecture leverages high-volume industrial chipmaking to produce qubits, using industry-standard 300 mm processes from commercial chip foundries. The architecture, control stack and manufacturing approach are designed to scale to millions of qubits, enabling fault-tolerant, utility-scale and commercially viable quantum computing. The QPU is based on a scalable tile architecture that integrates all the needed compute, readout and control elements into a dense array that can be repeatedly printed onto a chip, enabling future expansion to millions of qubits per QPU. This design enables systems to be easily upgraded by installing future-generation QPUs. The system also applies machine learning to qubit tuning, enabling more efficient operation through automated control and calibration algorithms.
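To make the framework compatibility concrete, below is a minimal, hypothetical sketch of the kind of Qiskit circuit such a control stack accepts. The local statevector inspection is an assumption for illustration only; actual submission would go through Quantum Motion's own backend provider, whose API is not described here.

```python
# Minimal sketch: a framework-standard Qiskit circuit of the kind a
# Qiskit-compatible control stack accepts. Inspected with a local
# simulator here; running on the actual QPU would use Quantum
# Motion's own provider (not shown, as its API is not public here).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Two-qubit Bell-state circuit: Hadamard then CNOT.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

# Inspect the ideal output state instead of running on hardware.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # ~ {'00': 0.5, '11': 0.5}
```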
Fujitsu developing a superconducting quantum computer with a capacity exceeding 10,000 qubits, using an early-stage fault-tolerant quantum computing (early-FTQC) architecture with 250 logical qubits
Fujitsu is developing a superconducting quantum computer with a capacity exceeding 10,000 physical qubits, with construction set to finish in fiscal 2030. The computer will provide 250 logical qubits, implying roughly 40 physical qubits per logical qubit, and will utilize Fujitsu’s “STAR architecture,” an early-stage fault-tolerant quantum computing (early-FTQC) architecture. The project, backed by Japan’s New Energy and Industrial Technology Development Organization (NEDO), aims to make practical quantum computing possible, particularly in materials science. Fujitsu will contribute to the industrialization of quantum computers through joint research with Japan’s National Institute of Advanced Industrial Science and Technology and RIKEN. The company plans to achieve a 1,000-logical-qubit machine by fiscal 2035, considering the possibility of multiple interconnected qubit chips. Fujitsu’s research efforts will focus on developing four scaling technologies:
- High-throughput, high-precision qubit manufacturing: improving the manufacturing precision of Josephson junctions, the critical components of superconducting qubits, to minimize frequency variations.
- Chip-to-chip interconnects: developing wiring and packaging technologies to interconnect multiple qubit chips, facilitating the creation of larger quantum processors.
- High-density packaging and low-cost qubit control: addressing the challenges of cryogenic cooling and control systems, including techniques to reduce component count and heat dissipation.
- Decoding for quantum error correction: developing algorithms and system designs for decoding measurement data and correcting errors in quantum computations.
As the quantum industry moves from the lab to manufacturing, quantum error correction is the key to building robust and scalable fault-tolerant machines
As the quantum industry moves from the lab to the fab, five major trends are driving the next wave of technology and business models:
- Quantum error correction: The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. With this shift, we’re seeing increased interest in companies focused on error correction capabilities, including Riverlane Ltd., Q-CTRL Pty. Ltd. and Qedma Ltd. There is also significant innovation in encoding physical qubits into logical qubits using not just the classic surface code but also novel alternatives such as quantum low-density parity-check (qLDPC) codes, which protect quantum information against noise and decoherence (see the sketch after this list).
- The middle of the stack: The quantum stack is disaggregating, allowing companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines (Q.M Technologies Ltd.) and quantum software development tools from firms such as Classiq Technologies Ltd. and Algorithmiq Oy.
- Scale-out architectures: This strategy involves linking multiple QPUs to work together as one distributed machine, which could even enable different types of qubits to collaborate on a single problem. Startups such as Alice & Bob SAS and Qolab Inc. are driving advances in qubit and architecture design and fabrication.
- Input-output and cryogenics: Alternatives that reduce both the number of cables and their thermal load are emerging, such as improved cable density (Delft Circuits B.V.), cryogenic qubit control (Diraq Pty Ltd.’s cryo-CMOS) and alternative approaches such as Qphox B.V.’s optical fiber links.
- Mergers and acquisitions: This trend is playing out at a scale that few investors, including us, anticipated. IonQ Inc. has made several bold acquisitions in 2025, including computing technology with Oxford Ionics Ltd., interconnects and memories with Lightsynq Technologies Inc., communications with ID Quantique SA and Qubitekk Inc., and even space with Capella Space Corp.
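To make the parity-check idea concrete, here is a minimal classical sketch (not a quantum code): a small low-density parity-check matrix and the syndrome computation a decoder consumes. The specific matrix and error are toy assumptions chosen for illustration.

```python
import numpy as np

# Toy low-density parity-check (LDPC) example over GF(2).
# Each row is a parity check touching only a few bits ("low density").
# Quantum LDPC codes follow the same sparsity principle, with checks
# for both bit-flip and phase-flip errors.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
], dtype=np.uint8)

codeword = np.zeros(6, dtype=np.uint8)   # the all-zeros codeword
error = np.zeros(6, dtype=np.uint8)
error[3] = 1                             # flip bit 3

received = (codeword + error) % 2
syndrome = (H @ received) % 2            # which checks are violated
print(syndrome)                          # [1 0 0] -> check 0 fired
```

A decoder's job is to infer the most likely error pattern consistent with the syndrome, without ever reading the data bits directly.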
IBM’s new decoder algorithm offers a 10X increase in accuracy in detecting and correcting errors in quantum memory, using a memory-tuned belief propagation decoder to analyze indirect measurements of quantum states
IBM researchers have developed a new decoder algorithm called Relay-BP, which significantly improves the detection and correction of errors in quantum memory. Relay-BP shows a tenfold increase in accuracy over previous leading methods while reducing the computing resources required to implement it. It addresses a persistent bottleneck in the quest to build reliable quantum computers and could lead to experimental deployments within the next few years. Quantum computers are sensitive to errors because their fragile qubits can be disturbed by environmental noise or imperfections in control. The decoder works by analyzing syndromes: indirect measurements of quantum states that provide clues about where something has gone wrong. Relay-BP, built on an improved version of a classical technique called belief propagation (BP), is the most compact, fast, and accurate implementation yet for decoding quantum low-density parity-check (qLDPC) codes. It is designed to overcome long-standing trade-offs, being fast enough to keep up with quantum error rates, compact enough to run on field-programmable gate arrays (FPGAs), and flexible enough to adapt to a wide range of qLDPC codes. Relay-BP improves performance through memory tuning, a concept borrowed from physics (a simplified classical sketch appears below). The algorithm’s success is attributed to the interdisciplinary approach of the team, which combined expertise from firmware engineering, condensed matter physics, software development, and mathematics; IBM credits this cross-functional approach as a cultural strength of its quantum program. Relay-BP currently focuses on decoding for quantum memory, still short of decoding full quantum computation. To achieve real-time quantum computation, the decoding must become faster and smaller. IBM plans to begin experimental testing of the decoder in 2026 on Kookaburra, an upcoming system designed to explore fault-tolerant quantum memory. Relay-BP is considered a vital piece of the puzzle, pushing the limits of classical resources to stabilize quantum systems and offering a new tool for researchers looking to bridge the gap between experimental qubits and reliable quantum logic.
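The following is a minimal, illustrative sketch of syndrome-based belief propagation with a damping term playing the role of a "memory strength," chained over several strengths in the spirit of a relay. It is not IBM's Relay-BP implementation; the toy parity-check matrix, error rate, and schedule are assumptions chosen so the example runs.

```python
import numpy as np

# Toy parity-check matrix (3 checks, 6 bits), dense for clarity.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
], dtype=np.uint8)
p = 0.05                       # assumed physical error rate
prior = np.log((1 - p) / p)    # prior LLR: errors are unlikely

def bp_decode(syndrome, gammas, iters=20):
    """Syndrome belief propagation with message damping.

    Each entry of `gammas` is a "memory strength" for one relay leg;
    messages carry over between legs, loosely mimicking Relay-BP's
    chained, memory-tuned decoding (illustrative only).
    """
    v2c = np.where(H == 1, prior, 0.0)   # variable-to-check LLRs
    sign = (-1.0) ** syndrome            # flip checks that fired
    for gamma in gammas:
        for _ in range(iters):
            # Check update: product of tanh of incoming messages,
            # excluding the target edge (done via division).
            t = np.where(H == 1, np.tanh(np.clip(v2c, -30, 30) / 2), 1.0)
            prod = t.prod(axis=1, keepdims=True)
            ratio = np.clip(prod / np.where(t == 0, 1, t),
                            -0.999999, 0.999999)
            c2v = np.where(H == 1, sign[:, None] * 2 * np.arctanh(ratio), 0.0)
            # Variable update, damped toward old messages ("memory").
            posterior = prior + c2v.sum(axis=0)
            new_v2c = np.where(H == 1, posterior[None, :] - c2v, 0.0)
            v2c = gamma * v2c + (1 - gamma) * new_v2c
            guess = (posterior < 0).astype(np.uint8)
            if np.array_equal(H @ guess % 2, syndrome):
                return guess             # syndrome explained
    return guess                         # best effort

error = np.array([0, 0, 0, 1, 0, 0], dtype=np.uint8)
syndrome = H @ error % 2
print(bp_decode(syndrome, gammas=[0.0, 0.3, 0.6]))  # recovers the error
```

A real qLDPC decoder works on far larger sparse matrices, passes messages only along Tanner-graph edges, and, per IBM's description, varies memory strengths across variables rather than using one uniform value per leg as this sketch does.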
HSBC demonstrates world’s first-known quantum-enabled algorithmic trading with IBM, estimating how likely a trade is to be filled at a quoted price
HSBC announced the world’s first-known empirical evidence of the potential value of current quantum computers for solving real-world problems in algorithmic bond trading. Working with a team from IBM, HSBC leveraged an approach that combined quantum and classical computing resources to deliver up to a 34 percent improvement in predicting how likely a trade is to be filled at a quoted price, compared to common classical techniques used in the industry. HSBC and IBM’s trial explored how today’s quantum computers could optimise requests for quote in over-the-counter markets, where financial assets such as bonds are traded between two parties without a centralised exchange or broker. In this process, algorithmic strategies and statistical models estimate how likely a trade is to be filled at a quoted price (a simplified classical sketch of this prediction task appears below). The teams ran trials with real, production-scale trading data on multiple IBM quantum computers to predict the probability of winning customer inquiries in the European corporate bond market. The results show the value quantum computers could offer when integrated into the dynamic problems facing the financial services industry, and how they could potentially offer superior solutions over standard methods that use classical computers alone. In this case, an IBM Quantum Heron processor was able to augment classical computing workflows to better unravel hidden pricing signals in noisy market data than the standard, classical-only approaches in use by HSBC, resulting in strong improvements in the bond trading process.
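For orientation, here is a minimal classical sketch of the underlying prediction task: fitting a fill-probability model to quote features. The data, feature names, and coefficients are synthetic assumptions; in the HSBC/IBM trial, quantum hardware augmented this kind of classical workflow (for example, by extracting additional signals), with details not public here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for request-for-quote (RFQ) data: each row is a
# quote with features such as spread to mid-price and inquiry size.
n = 2000
spread = rng.normal(0.0, 1.0, n)     # quoted spread (normalized)
size = rng.normal(0.0, 1.0, n)       # inquiry size (normalized)
logit = -1.2 * spread - 0.4 * size   # tighter quotes fill more often
filled = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Classical baseline: logistic regression on quote features.
X = np.column_stack([spread, size])
model = LogisticRegression().fit(X, filled)

# Predicted fill probability for a new quote.
print(model.predict_proba([[0.5, -0.2]])[0, 1])
```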
D-Wave’s new quantum AI toolkit enables developers to seamlessly integrate quantum computers into modern ML architectures
D-Wave has released a collection of offerings to help developers explore and advance quantum artificial intelligence (AI) and machine learning (ML) innovation, including an open-source quantum AI toolkit and a demo. Available now for download, the quantum AI toolkit enables developers to integrate quantum computers into modern ML architectures. Developers can use the toolkit to experiment with D-Wave™ quantum processors for generating simple images. By releasing this new set of tools, D-Wave aims to help organizations accelerate the use of annealing quantum computers in a growing set of AI applications. The quantum AI toolkit, part of D-Wave’s Ocean™ software suite, provides direct integration between D-Wave’s quantum computers and PyTorch, a production-grade ML framework widely used to build and train deep learning models. The toolkit includes a PyTorch neural network module for using a quantum computer to build and train restricted Boltzmann machines (RBMs), a class of energy-based ML models (a generic PyTorch sketch appears below). By integrating with PyTorch, the toolkit aims to make it easy for developers to experiment with quantum computing to address computational challenges in training AI models. “With this new toolkit and demo, D-Wave is enabling developers to build architectures that integrate our annealing quantum processors into a growing set of ML models,” said Dr. Trevor Lanting, chief development officer at D-Wave.
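To ground the idea, here is a generic, fully classical PyTorch RBM sketch; it is not D-Wave's Ocean API. The block Gibbs sampler in `gibbs_sample` supplies the model-distribution samples that, in D-Wave's toolkit, would instead come from a quantum annealer. All names, sizes, and hyperparameters are illustrative assumptions.

```python
import torch

class RBM(torch.nn.Module):
    """Bernoulli restricted Boltzmann machine.

    Training needs samples from the model distribution (the "negative
    phase"); the Gibbs sampler below is the component an annealing
    quantum processor could replace as the sample source.
    """
    def __init__(self, n_visible, n_hidden):
        super().__init__()
        self.W = torch.nn.Parameter(0.01 * torch.randn(n_visible, n_hidden))
        self.b_v = torch.nn.Parameter(torch.zeros(n_visible))
        self.b_h = torch.nn.Parameter(torch.zeros(n_hidden))

    def sample_h(self, v):
        p = torch.sigmoid(v @ self.W + self.b_h)
        return torch.bernoulli(p), p

    def sample_v(self, h):
        p = torch.sigmoid(h @ self.W.t() + self.b_v)
        return torch.bernoulli(p), p

    def gibbs_sample(self, v, steps=10):
        # Classical stand-in for the quantum sampler.
        for _ in range(steps):
            h, _ = self.sample_h(v)
            v, _ = self.sample_v(h)
        return v

def cd_step(rbm, data, lr=0.05):
    """One contrastive-divergence parameter update."""
    v_model = rbm.gibbs_sample(data)
    _, ph_data = rbm.sample_h(data)
    _, ph_model = rbm.sample_h(v_model)
    with torch.no_grad():
        rbm.W += lr * (data.t() @ ph_data - v_model.t() @ ph_model) / len(data)
        rbm.b_v += lr * (data - v_model).mean(0)
        rbm.b_h += lr * (ph_data - ph_model).mean(0)

rbm = RBM(n_visible=16, n_hidden=8)
data = torch.bernoulli(0.5 * torch.ones(64, 16))  # toy binary data
for _ in range(100):
    cd_step(rbm, data)
```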
MIT-Harvard team clears significant hurdle to quantum computing by demonstrating a 3,000-qubit system with continuous two-hour operation using optical tweezers and conveyor belt atom reloading at 300,000 atoms per second
Harvard scientists have succeeded in developing a quantum machine featuring over 3,000 quantum bits (qubits) that can operate continuously for more than two hours without requiring a restart. This represents a significant advance in quantum computing technology, addressing the critical issue of “atom loss,” in which qubits escape and their information is lost. The research team, which includes members from MIT, used a system of “optical lattice conveyor belts” and “optical tweezers” to rapidly resupply qubits, at up to 300,000 atoms per second, enough to replace the entire 3,000-qubit array roughly 100 times each second, and thereby maintain processing power. This work not only demonstrates large-scale continuous operation but also has implications for performing long-running computations. The researchers believe they are now closer to realizing practical quantum computers capable of executing billions of operations over extended periods. Alongside this study, the team has introduced methods for reconfigurable atom arrays and improved error correction, further contributing to the evolving field of quantum computing. The research received funding from several federal agencies, including the U.S. Department of Energy and the National Science Foundation.
New quantum framework for analysing higher-order topological data achieves linear scaling in signal dimension by using quantum linear systems algorithms compatible with the data’s native format, enabling multi-way signals to be manipulated with efficient data encoding
A team of researchers led by Professor Kavan Modi from the Singapore University of Technology and Design (SUTD) has taken a conceptual leap into the complexity of higher-order networks by developing a new quantum framework for analysing higher-order network data. Their work centres on a mathematical field called topological signal processing (TSP), which encodes not only connections between pairs of points but also relationships among triplets, quadruplets, and beyond. Here, “signals” are information that lives on higher-dimensional shapes (triangles or tetrahedra) embedded in a network. The team introduced a quantum version of this framework, called Quantum Topological Signal Processing (QTSP): a mathematically rigorous method for manipulating multi-way signals using quantum linear systems algorithms. Unlike prior quantum approaches to topological data analysis, which often suffer from impractical scaling, the QTSP framework achieves linear scaling in signal dimension, an improvement that opens the door to efficient quantum algorithms for problems previously considered out of reach. The key technical insight behind QTSP lies in the structure of the data itself. Classical approaches typically require costly transformations to fit topological data into a form usable by quantum devices. In QTSP, however, the data’s native format is already compatible with quantum linear systems solvers, thanks to recent developments in quantum topological data analysis. This compatibility allows the team to circumvent a major bottleneck (efficient data encoding) while ensuring the algorithm remains mathematically grounded and modular. Still, loading data into quantum hardware and retrieving results without overwhelming the quantum advantage remains an unsolved challenge: even with linear scaling, quantum speedups can be nullified by overheads in pre- and post-processing. The framework has been demonstrated through a quantum extension of the classical HodgeRank algorithm (sketched below in its classical form), with potential applications in recommendation systems, neuroscience, physics and finance.
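For context, here is a minimal sketch of the classical HodgeRank algorithm that QTSP extends: recovering a global ranking from pairwise comparisons by least-squares fitting of item potentials on a comparison graph. The items, margins, and graph below are toy assumptions.

```python
import numpy as np

# Classical HodgeRank sketch: recover a global ranking of items from
# noisy pairwise comparisons living on the edges of a graph.

items = ["A", "B", "C", "D"]
# Pairwise comparisons (i, j, margin): item j beats item i by margin,
# i.e. we observe margin ~ score[j] - score[i] on edge (i, j).
edges = [(0, 1, 1.1), (1, 2, 0.9), (0, 2, 2.2), (2, 3, 1.0), (0, 3, 2.8)]

# Incidence matrix B of the comparison graph: one row per edge.
B = np.zeros((len(edges), len(items)))
y = np.zeros(len(edges))
for e, (i, j, margin) in enumerate(edges):
    B[e, i], B[e, j], y[e] = -1.0, 1.0, margin

# HodgeRank = least-squares potential s with B s ~= y (the minimum-norm
# solution fixes the gauge; only score differences matter).
s, *_ = np.linalg.lstsq(B, y, rcond=None)
ranking = sorted(zip(items, s), key=lambda t: -t[1])
print(ranking)   # highest potential first: D > C > B > A
```

The quantum extension replaces this linear-systems solve with a quantum linear systems algorithm, which is where QTSP's linear scaling in signal dimension comes in.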
PsiQuantum’s silicon photonic approach shifts quantum computing from lab to large-scale deployment; targeting real-world uses in drug discovery, catalysts, and semiconductor processes
Quantum computing is transitioning from theory to practice, with PsiQuantum Corp., a Bay Area startup, at the forefront, aiming to create the first fault-tolerant quantum computer using silicon photonics. Co-founder Pete Shadbolt emphasizes the field’s momentum as critical technical milestones are achieved, leading to the prospect of large-scale, commercially viable machines within months. He suggests that the quantum computing sector currently trails artificial intelligence in terms of practical applications. Shadbolt discussed PsiQuantum’s strategy during an event hosted by theCUBE, highlighting its distinctive use of silicon photonics (chips that process light), which the company has pushed beyond their original telecom applications through collaboration with GlobalFoundries Inc. PsiQuantum’s approach aims to produce large-scale quantum computers by leveraging established semiconductor manufacturing processes, positioning its innovations within the existing semiconductor ecosystem and targeting applications across chemistry, materials science, drug discovery, and other sectors. This integration allows the company to utilize existing manufacturing standards and supply-chain infrastructure, circumventing the need for exotic materials.