Fujitsu is developing a superconducting quantum computer with a capacity exceeding 10,000 qubits, with construction set to finish in fiscal 2030. The computer will use 250 logical qubits and Fujitsu’s “STAR architecture,” an early-stage fault-tolerant quantum computing (early-FTQC) architecture. The project, backed by NEDO, aims to make practical quantum computing possible, particularly in materials science. Fujitsu will contribute to the industrialization of quantum computers through joint research with Japan’s National Institute of Advanced Industrial Science and Technology and RIKEN. The company plans to achieve a 1,000-logical-qubit machine by fiscal 2035, possibly by interconnecting multiple qubit chips. Fujitsu’s research efforts will focus on developing the following scaling technologies:
- High-throughput, high-precision qubit manufacturing: improving the manufacturing precision of Josephson junctions, critical components of superconducting qubits, to minimize frequency variations.
- Chip-to-chip interconnect technology: developing wiring and packaging technologies to interconnect multiple qubit chips, enabling the creation of larger quantum processors.
- High-density packaging and low-cost qubit control: addressing the challenges of cryogenic cooling and control systems, including techniques to reduce component count and heat dissipation.
- Decoding technology for quantum error correction: developing algorithms and system designs for decoding measurement data and correcting errors in quantum computations.
IBM’s new decoder algorithm offers a 10X increase in accuracy in the detection and correction of errors in quantum memory using memory tuning to analyze indirect measurements of quantum states
IBM researchers have developed a new decoder algorithm, Relay-BP, that significantly improves the detection and correction of errors in quantum memory. Relay-BP shows a tenfold increase in accuracy over previous leading methods while reducing the computing resources required to implement it. It addresses a persistent bottleneck in the quest to build reliable quantum computers and could see experimental deployment within the next few years. Quantum computers are sensitive to errors because their fragile qubits can be disturbed by environmental noise or imperfections in control. The decoder works by analyzing syndromes, indirect measurements of quantum states that provide clues about where something has gone wrong. Relay-BP, built on an improved version of a classical technique called belief propagation (BP), is the most compact, fast, and accurate implementation yet for decoding quantum low-density parity-check (qLDPC) codes. It is designed to overcome long-standing trade-offs: it is fast enough to keep up with quantum error rates, compact enough to run on field-programmable gate arrays (FPGAs), and flexible enough to adapt to a wide range of qLDPC codes. Relay-BP uses memory tuning, a concept borrowed from physics, to improve performance. The algorithm’s success is attributed to the interdisciplinary approach of the team, which combined expertise from firmware engineering, condensed matter physics, software development, and mathematics; IBM credits this cross-functional approach as a cultural strength of its quantum program. Relay-BP currently focuses on decoding for quantum memory, which is still short of full quantum processing; to achieve real-time quantum computation, the decoding must become faster and smaller. IBM plans to begin experimental testing of the decoder in 2026 on Kookaburra, an upcoming system designed to explore fault-tolerant quantum memory.
Relay-BP is considered a vital piece of the puzzle, pushing the limits of classical resources to stabilize quantum systems and offering a new tool for researchers looking to bridge the gap between experimental qubits and reliable quantum logic.
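Relay-BP builds on belief propagation, a classical message-passing scheme, and adds a form of memory to its updates. The toy decoder below is a generic min-sum BP syndrome decoder with a simple damping ("memory") term on the beliefs; it is a sketch of the underlying idea only, not IBM's Relay-BP — the parity-check matrix, damping scheme, and all names are illustrative.

```python
import math

def bp_decode(H, syndrome, p=0.1, gamma=0.5, iters=20):
    """Toy min-sum belief-propagation syndrome decoder with a damped
    ('memory') belief update. H is a binary parity-check matrix."""
    m, n = len(H), len(H[0])
    prior = math.log((1 - p) / p)              # LLR favouring 'no error'
    c2v = [[0.0] * n for _ in range(m)]        # check-to-variable messages
    belief = [prior] * n
    for _ in range(iters):
        # variable-to-check messages: belief minus the incoming message
        v2c = [[belief[v] - c2v[c][v] if H[c][v] else 0.0
                for v in range(n)] for c in range(m)]
        # check-to-variable messages (min-sum); syndrome bit flips the sign
        for c in range(m):
            vars_c = [v for v in range(n) if H[c][v]]
            for v in vars_c:
                others = [v2c[c][u] for u in vars_c if u != v]
                sign = -1 if syndrome[c] else 1
                for x in others:
                    sign = -sign if x < 0 else sign
                c2v[c][v] = sign * min(abs(x) for x in others)
        # damped ('memory') belief update: blend old and fresh beliefs
        for v in range(n):
            fresh = prior + sum(c2v[c][v] for c in range(m) if H[c][v])
            belief[v] = gamma * belief[v] + (1 - gamma) * fresh
    return [1 if b < 0 else 0 for b in belief]  # negative LLR -> error
```

For the two-check repetition-code matrix `[[1,1,0],[0,1,1]]`, the syndrome `[1, 0]` decodes to an error on the first qubit. The damping term is what gives the decoder its "memory": each belief is a running blend of past and fresh estimates, which helps it escape the oscillations that plague plain BP on codes with short cycles.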
D-Wave’s new quantum AI toolkit enables developers to seamlessly integrate quantum computers into modern ML architectures
D-Wave has released a collection of offerings to help developers explore and advance quantum artificial intelligence (AI) and machine learning (ML) innovation, including an open-source quantum AI toolkit and a demo. Available now for download, the quantum AI toolkit enables developers to seamlessly integrate quantum computers into modern ML architectures. Developers can leverage this toolkit to experiment with using D-Wave™ quantum processors to generate simple images. By releasing this new set of tools, D-Wave aims to help organizations accelerate the use of annealing quantum computers in a growing set of AI applications. The quantum AI toolkit, part of D-Wave’s Ocean™ software suite, provides direct integration between D-Wave’s quantum computers and PyTorch, a production-grade ML framework widely used to build and train deep learning models. The toolkit includes a PyTorch neural network module for using a quantum computer to build and train a type of ML model known as a restricted Boltzmann machine (RBM). By integrating with PyTorch, D-Wave’s new toolkit aims to make it easy for developers to experiment with quantum computing to address computational challenges in training AI models. “With this new toolkit and demo, D-Wave is enabling developers to build architectures that integrate our annealing quantum processors into a growing set of ML models,” said Dr. Trevor Lanting, chief development officer at D-Wave.
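An RBM is an energy-based model whose training loop alternates between sampling hidden and visible units; D-Wave's toolkit offloads that sampling step to a quantum annealer. The sketch below is a purely classical stand-in using block Gibbs sampling, with illustrative class and method names, to show the structure a quantum sampler would plug into — it does not use D-Wave's Ocean API.

```python
import math
import random

class TinyRBM:
    """Minimal restricted Boltzmann machine with block Gibbs sampling.
    In a quantum workflow, an annealer would replace the sampling step."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = random.Random(seed)
        # small random couplings between visible and hidden units
        self.W = [[rng.gauss(0, 0.1) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.b = [0.0] * n_visible   # visible biases
        self.c = [0.0] * n_hidden    # hidden biases

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def sample_hidden(self, v, rng):
        probs = [self._sigmoid(self.c[j] +
                 sum(v[i] * self.W[i][j] for i in range(len(v))))
                 for j in range(len(self.c))]
        return [1 if rng.random() < p else 0 for p in probs]

    def sample_visible(self, h, rng):
        probs = [self._sigmoid(self.b[i] +
                 sum(h[j] * self.W[i][j] for j in range(len(h))))
                 for i in range(len(self.b))]
        return [1 if rng.random() < p else 0 for p in probs]

    def gibbs(self, v0, steps=5, seed=1):
        """Alternate hidden/visible sampling; this is the loop an
        annealing sampler would shortcut."""
        rng = random.Random(seed)
        v = v0
        for _ in range(steps):
            h = self.sample_hidden(v, rng)
            v = self.sample_visible(h, rng)
        return v

rbm = TinyRBM(n_visible=4, n_hidden=2)
sample = rbm.gibbs([1, 0, 1, 0])
```

The appeal of delegating this to hardware is that Gibbs chains mix slowly on classical machines for strongly coupled models, whereas an annealer draws correlated samples from the model's energy landscape directly.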
New quantum framework for analysing higher-order topological data achieves linear scaling in signal dimension, using quantum linear systems algorithms that work directly with the data’s native format to manipulate multi-way signals without costly encoding
A team of researchers led by Professor Kavan Modi from the Singapore University of Technology and Design (SUTD) has taken a conceptual leap by developing a new quantum framework for analysing higher-order network data. Their work centres on a mathematical field called topological signal processing (TSP), which encodes not only connections between pairs of points but also relationships among triplets, quadruplets, and beyond. Here, “signals” are information that lives on higher-dimensional shapes (triangles or tetrahedra) embedded in a network. The team introduced a quantum version of this framework, called Quantum Topological Signal Processing (QTSP): a mathematically rigorous method for manipulating multi-way signals using quantum linear systems algorithms. Unlike prior quantum approaches to topological data analysis, which often suffer from impractical scaling, the QTSP framework achieves linear scaling in signal dimension, an improvement that opens the door to efficient quantum algorithms for problems previously considered out of reach. The technical insight behind QTSP lies in the structure of the data itself. Classical approaches typically require costly transformations to fit topological data into a form usable by quantum devices. In QTSP, however, the data’s native format is already compatible with quantum linear systems solvers, thanks to recent developments in quantum topological data analysis. This compatibility allows the team to circumvent a major bottleneck (efficient data encoding) while ensuring the algorithm remains mathematically grounded and modular. Still, loading data into quantum hardware and retrieving results without overwhelming the quantum advantage remains an unsolved challenge: even with linear scaling, quantum speedups can be nullified by overheads in pre- and post-processing.
The framework achieves linear scaling and has been demonstrated through a quantum extension of the classical HodgeRank algorithm, with potential applications in recommendation systems, neuroscience, physics and finance.
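HodgeRank, mentioned above, recovers a global ranking from pairwise comparisons by least squares. On a complete comparison graph with unit weights the solution reduces to each item's average margin, which the minimal classical sketch below computes; the data and names are illustrative, and QTSP targets the general sparse, higher-order version via quantum linear systems solvers.

```python
def hodgerank_complete(Y):
    """HodgeRank on a complete comparison graph with unit weights: the
    least-squares potential reduces to each item's average margin."""
    n = len(Y)
    return [sum(Y[i][k] for i in range(n)) / n for k in range(n)]

# Antisymmetric margins: Y[i][k] > 0 means item k beats item i by that much.
# Slightly inconsistent data: A beats B by 1, B beats C by 1, A beats C by 1.5.
Y = [[0.0, -1.0, -1.5],
     [1.0,  0.0, -1.0],
     [1.5,  1.0,  0.0]]
scores = hodgerank_complete(Y)                              # zero-mean potentials
ranking = sorted(range(len(Y)), key=lambda k: -scores[k])   # best item first
```

The least-squares fit splits the comparison data into a consistent gradient part (the scores) plus a residual; the residual's higher-order structure is exactly the kind of topological signal the quantum framework is designed to process.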
Physicists leverage real-time AI control to assemble world’s largest 2,024-atom quantum array, paving the way for scalable, efficient quantum computing breakthroughs
A team led by Chinese physicist Pan Jianwei used artificial intelligence (AI) to help create an atom-based quantum computing component that dwarfs previous systems in size, raising hopes that neutral-atom machines could one day operate with tens of thousands of qubits. The team arranged 2,024 rubidium atoms, each functioning as a qubit, into precise two- and three-dimensional arrays. The feat reportedly marks a tenfold increase over the largest previous atom arrays and addresses one of the field’s most stubborn bottlenecks: how to scale beyond a few hundred qubits without prohibitive delays. Until now, researchers typically moved atoms into place one at a time, making large-scale arrays impractical. Pan’s team, working with the Shanghai Artificial Intelligence Laboratory, replaced this slow step with a real-time AI control system that shifts every atom in the array simultaneously. The setup uses a high-speed spatial light modulator to shape laser beams into traps that corral the atoms. The AI system calculates where each atom needs to go and directs the lasers to move them into position in just 60 milliseconds (about the time it takes a hummingbird to flap its wings five times), regardless of whether the array contains hundreds or thousands of atoms. In principle, the method could scale to arrays with tens of thousands of atoms without slowing down. Scaling neutral-atom arrays to that size could allow them to run algorithms that are currently beyond the reach of classical computers and existing quantum prototypes, with applications ranging from simulating complex molecules for drug discovery to solving optimization problems in logistics and materials science. The AI-guided control method, coupled with high-precision lasers, essentially removes the scaling penalty that has long plagued neutral-atom designs.
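The hard classical step in such rearrangement is deciding which loaded atom goes to which lattice site so that all moves can be issued in one parallel shot. The sketch below uses a naive greedy nearest-site matching to illustrate that planning problem; the real system uses a real-time AI controller and optimized assignment, and every name here is illustrative.

```python
def plan_moves(atoms, targets):
    """Greedily match loaded atoms to target lattice sites by distance.
    Production systems would use an optimal assignment solver (e.g. the
    Hungarian algorithm) driven by a real-time controller; greedy
    matching is enough to illustrate the planning step."""
    remaining = list(targets)
    moves = []
    for ax, ay in atoms:
        # pick the closest still-unassigned target site
        tx, ty = min(remaining, key=lambda t: (t[0] - ax) ** 2 + (t[1] - ay) ** 2)
        remaining.remove((tx, ty))
        moves.append(((ax, ay), (tx, ty)))
    return moves  # in hardware, all moves execute in one parallel step

atoms   = [(0.2, 0.1), (1.3, 0.9), (2.1, 2.2)]   # where atoms were loaded
targets = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]   # desired lattice sites
moves = plan_moves(atoms, targets)
```

Because every atom is assigned a destination before any motion starts, the move time is set by the longest single displacement rather than by the number of atoms, which is what makes the constant-time (array-size-independent) rearrangement plausible.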
Discovery of “neglectons” boosts topological quantum computing—theorized quasiparticles enable robust, universal quantum logic by expanding computational power of special particles called anyons
A team of mathematicians and physicists in the US has discovered a way to exploit a previously neglected aspect of topological quantum field theory, revealing that topological states can be much more broadly useful for quantum computation than previously believed. The quantum bits in topological quantum computers are based on particle-like knots, or vortices, in the sea of electrons washing through a material, known as anyons. The advantage of anyon-based quantum computing is that the only thing that can change the state of the anyons is moving them around in relation to each other, a process called “braiding” that alters their relative topology. However, not all anyons are up to the task: in the standard “semisimple” model, braiding the available anyons, known as Ising anyons, yields only a limited range of computational logic gates, ones that can be efficiently simulated by classical computers, which reduces their usefulness for truly ground-breaking quantum machines. The team solved this problem by bringing in a previously discarded quasiparticle, dubbed the “neglecton,” together with ingenious workarounds created by Lauda’s PhD student, Filippo Iulianelli, that restrict the computational space to only those regions where anyon transformations work out as unitary.
Terra Quantum’s QMM-Enhanced Error Correction boosts quantum processor fidelity by reducing errors up to 35% without added complexity or mid-circuit measurements
Terra Quantum has introduced QMM-Enhanced Error Correction, a hardware-validated, measurement-free method that suppresses quantum errors and improves fidelity on existing processors without architectural changes. Validated on IBM’s superconducting processors, the QMM layer functions as a lightweight, unitary “booster” that enhances fidelity without mid-circuit measurements or added two-qubit gates, offering a powerful alternative to traditional surface codes. A single QMM cycle achieves 73% fidelity, is entirely unitary, and is feedback-free. When combined with a repetition code, logical fidelity increases to 94%, representing a 32% gain achieved without the addition of CX gates. In hybrid workloads such as variational quantum classifiers, QMM reduces training loss by 35% and halves run-to-run performance variance. Simulations show that three QMM layers can achieve error rates comparable to those of a distance-3 surface code, while requiring ten times fewer qubits. QMM is especially relevant in environments where traditional error correction is impractical or cost-prohibitive. It addresses core challenges across photonic and analog platforms where mid-circuit measurements are infeasible, cloud-based quantum systems that demand minimal gate depth and latency, and hybrid quantum-classical applications, where even marginal stability gains translate to significant performance benefits. Terra Quantum’s QMM layer introduces a new architectural class for quantum systems. Think of it as a quantum tensor core: a compact, circuit-level module that boosts fidelity and suppresses coherent errors without increasing circuit depth or gate count. With up to 35% error reduction, seamless integration, and no extra two-qubit operations, QMM enables more performance per qubit, per dollar, and per watt. For hardware vendors, system integrators, and developers, this provides a clear path toward scalable, fault-tolerant quantum computing without requiring redesign of the stack.
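For context on the repetition code that QMM is combined with: a repetition code protects a logical bit by redundancy and majority vote. The sketch below computes the exact logical error rate of an n-bit classical repetition code under independent bit flips; it is the classical analogue that shows why redundancy boosts fidelity, not Terra Quantum's QMM layer itself.

```python
from itertools import product

def majority(bits):
    """Majority vote over a tuple of bits."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def logical_error_rate(p, n=3):
    """Exact logical error rate of an n-bit repetition code with
    independent physical bit-flip probability p and majority decoding.
    Encodes logical 0 as n zeros; a logical error occurs when enough
    physical flips make the majority vote come out wrong."""
    rate = 0.0
    for flips in product([0, 1], repeat=n):
        prob = 1.0
        for f in flips:
            prob *= p if f else (1 - p)
        if majority(flips) == 1:   # decoder outputs the wrong bit
            rate += prob
    return rate
```

For a physical flip rate of p = 0.1, the 3-bit code gives a logical rate of 3p²(1-p) + p³ = 0.028, already below the physical rate; the improvement grows as p shrinks, which is the basic leverage any error-suppression layer (QMM included) aims to amplify.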
MIT says quantum computing is surging in USA with over 40 quantum processing units offered, a 5x increase in patents, and $2.2 billion in venture investment in 2024
Quantum computing is gaining significant business and commercial potential, according to a new report by researchers at the MIT Initiative on the Digital Economy. The “Quantum Index Report 2025” provides a comprehensive assessment of the state of quantum technologies, aiming to make them more accessible to entrepreneurs, investors, teachers, and business decision-makers. The report highlights the increasing interest in quantum computing, with the US leading the field with over 40 quantum processing units (QPUs). The report also notes that quantum technology patents have soared, with corporations and universities leading innovation efforts. Venture capital funding for quantum technology reached a new high point in 2024, with quantum computing firms receiving the most funding ($1.6 billion), followed by quantum software companies at $621 million. Businesses are also buzzing about quantum computing, with the frequency of mentions each quarter increasing from 2022 to 2024. The demand for quantum skills has nearly tripled since 2018, prompting universities to establish quantum hubs and programs connecting business leaders with researchers. The report highlights rapid progress across these areas, indicating broad and deep development in the field.
Cornell–IBM researchers demonstrate a new method of building fault-tolerant universal quantum computers through the ability to encode information by braiding Fibonacci string net condensate (Fib SNC) anyons in two-dimensional space
Researchers at IBM, Cornell, Harvard University, and the Weizmann Institute of Science have made two major breakthroughs in the quantum computing revolution. They demonstrated an error-resistant implementation of universal quantum gates and showed the power of a topological quantum computer in solving hard problems that conventional computers couldn’t manage. The researchers demonstrated the ability to encode information by braiding Fibonacci string net condensate (Fib SNC) anyons in two-dimensional space, which is crucial for being fault tolerant and resistant to error. They demonstrated the power of their method on a known hard problem, chromatic polynomials, which originated from a counting problem of graphs with different colored nodes and a few simple rules. The protocol used, sampling the chromatic polynomials for a set of different graphs where the number of colors is the golden ratio, is scalable, so other researchers with quantum computers can duplicate it at a larger scale. Studying topologically ordered many-body quantum systems presents tremendous challenges for quantum researchers. The researchers at IBM were critical in understanding the theory of the topological state and designing a protocol to implement it on a quantum computer. Their other colleagues made essential contributions with the hardware simulations, connecting theory to experiment and determining their strategy. The research was supported by the National Science Foundation, the U.S. Department of Energy, and the Alfred P. Sloan Foundation.
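A chromatic polynomial P(G, k) counts the proper k-colourings of a graph: assignments of one of k colors to each node so that no edge joins two same-colored nodes. The brute-force evaluator below illustrates the object being sampled, for integer k only; the quantum protocol instead estimates the polynomial at a non-integer, golden-ratio-related argument, where direct counting does not apply.

```python
from itertools import product

def chromatic_polynomial(n, edges, k):
    """Count proper k-colourings of an n-node graph by brute force.
    As a function of integer k this equals the chromatic polynomial
    P(G, k); brute force costs k**n, which is why evaluating or
    approximating it for large graphs is hard classically."""
    count = 0
    for colouring in product(range(k), repeat=n):
        # a colouring is proper if no edge has matching endpoint colors
        if all(colouring[u] != colouring[v] for u, v in edges):
            count += 1
    return count

# Triangle graph: P(G, k) = k(k-1)(k-2), so 3 colors give 6 colourings.
triangle = [(0, 1), (1, 2), (0, 2)]
```

For the triangle, `chromatic_polynomial(3, triangle, 3)` returns 6, matching k(k-1)(k-2) at k = 3. Since P(G, k) is a polynomial in k, its value at non-integer arguments like the golden ratio is well defined even though it no longer counts anything directly, and that is the regime the topological protocol samples.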
BDx Data Centres unveils Southeast Asia’s first hybrid quantum AI testbed aligned with Singapore’s Green 2030 and Smart Nation strategies
BDx Data Centres has launched Southeast Asia’s first hybrid quantum AI testbed, aiming to integrate quantum computing capabilities into its flagship SIN1 data centre in Paya Lebar. Developed in collaboration with Singapore-based Anyon Technologies, the testbed is designed to catalyze breakthroughs in AI innovation. “A modern computer today is essentially a whole data centre. Deploying a state-of-the-art hybrid quantum computing system at BDx’s SIN1 facility marks a transformative step in modern computing infrastructure,” said Dr Jie (Roger) Luo, president and CEO of Anyon Technologies. “By integrating QPUs (Quantum Processing Units) with CPUs (Central Processing Units) and GPUs (Graphics Processing Units), we’re enabling breakthroughs in quantum algorithms and applications. This lowers adoption barriers for enterprise customers, like financial institutions.” The testbed serves as a gateway for startups, enterprises, and government agencies to explore the vast potential of quantum-enhanced AI applications, made possible through the integration of Anyon’s quantum systems with BDx’s AI-ready infrastructure. Aligned with Singapore’s Green 2030 and Smart Nation strategies, the initiative also sets a benchmark for sustainable, high-performance computing.