JPMorgan Chase and quantum technology company Infleqtion have released an open-source software library to reduce the hardware requirements for practical quantum computing applications. The new qLDPC library introduces error-correction techniques that reduce the number of physical qubits needed to create reliable logical qubits by a factor of 10 to 100. This development addresses one of quantum computing’s key challenges: the substantial hardware overhead in qubit numbers typically required for fault tolerance. “This library makes it possible to bring that number down by 100x – down to as few as 20 physical qubits per logical qubit,” said Pranav Gokhale, general manager of computing at Infleqtion. Depending on the implementation, the library brings the requirement down to between 15 and 150 physical qubits per logical qubit. The tools are designed specifically for Infleqtion’s neutral-atom quantum computing hardware, whose customizable qubit layouts enable more efficient error-correcting codes. The library has been released as open-source software, an uncommon approach for a financial-institution partnership. For JPMorgan Chase, the development could enable new applications in financial optimization, risk analysis, and fraud detection by making quantum computing more practical; the reduction in required physical qubits makes quantum approaches to complex financial problems more viable. The qLDPC library is now available for developers, researchers, and hardware partners to explore methods for improving error correction and optimizing quantum workloads across various platforms. According to Gokhale, the open-source approach, combined with finding talent in unexpected places, is helping bridge the workforce gap by making quantum computing more accessible.
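To put those overhead numbers in context, here is a minimal back-of-the-envelope sketch in Python; the [[144, 12, 12]] parameters below belong to a known bivariate-bicycle qLDPC code from the research literature and are used purely for illustration, not taken from the qLDPC library itself.

```python
# Illustrative back-of-the-envelope comparison of error-correction overhead.
# The [[n, k, d]] parameters below are examples from the literature, not
# values taken from the qLDPC library itself.

def surface_code_overhead(d: int) -> int:
    """Physical qubits per logical qubit for a distance-d surface code
    (roughly 2*d^2, counting measurement ancillas)."""
    return 2 * d**2

def qldpc_overhead(n: int, k: int) -> float:
    """Physical qubits per logical qubit for an [[n, k, d]] qLDPC code,
    which encodes k logical qubits in n physical qubits."""
    return n / k

# Surface code at distance 11: ~242 physical qubits per logical qubit.
print(f"surface code (d=11): {surface_code_overhead(11)} physical/logical")

# A [[144, 12, 12]] bivariate-bicycle qLDPC code: 12 physical qubits per
# logical qubit, roughly a 20x reduction at comparable distance.
print(f"qLDPC [[144, 12, 12]]: {qldpc_overhead(144, 12):.0f} physical/logical")
```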
New quantum algorithm solves hard optimization problems up to 80 times faster than classical solvers like CPLEX and simulated annealing by forgoing error correction and adding terms that suppress unwanted transitions on the way to low-energy states
A new study by Kipu Quantum and IBM demonstrates that a tailored quantum algorithm running on IBM’s 156-qubit processors can solve certain hard optimization problems faster than classical solvers like CPLEX and simulated annealing. The quantum system used a technique called bias-field digitized counterdiabatic quantum optimization (BF-DCQO). The method builds on known quantum strategies by evolving a quantum system under special guiding fields that help it stay on track toward low-energy (i.e., optimal) states. BF-DCQO achieved comparable or better solutions in seconds, while classical methods required tens of seconds or more; CPLEX took 30 to about 50 seconds to match the same solution quality, even with 10 CPU threads running in parallel, according to the study. The researchers further confirmed this advantage across a suite of 250 randomly generated hard instances, using distributions specifically selected to challenge classical algorithms. BF-DCQO delivered results up to 80 times faster than CPLEX in some tests and over three times faster than simulated annealing in others. At the heart of the BF-DCQO algorithm is an adaptation of counterdiabatic driving, a physics-inspired strategy in which an extra term is added to the Hamiltonian, the system’s energy function, to suppress unwanted transitions. This helps the quantum system evolve faster and more accurately toward its lowest-energy configuration. Because the process doesn’t rely on error correction, it is well suited to today’s noisy intermediate-scale quantum (NISQ) devices. And because the algorithm uses only shallow circuits with mostly native operations like single-qubit rotations and two- or three-body interactions, it fits within the short coherence windows of real hardware.
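Schematically, the counterdiabatic construction the study builds on can be written as follows; this is the generic textbook form with a bias-field mixer, not the exact Hamiltonian or schedule reported by Kipu Quantum and IBM.

```latex
% Schematic form of bias-field counterdiabatic optimization.
% H_f encodes the optimization problem; H_i is the bias-shifted mixer;
% A_lambda is the (approximate) adiabatic gauge potential.
\begin{align}
  H_f &= \sum_i h_i \sigma_i^z + \sum_{i<j} J_{ij}\,\sigma_i^z \sigma_j^z
        && \text{(problem Hamiltonian)} \\
  H_i &= \sum_i \left( h_i^x \sigma_i^x + b_i \sigma_i^z \right)
        && \text{(mixer with bias fields } b_i\text{)} \\
  H(t) &= \bigl(1-\lambda(t)\bigr) H_i + \lambda(t)\, H_f
         + \dot{\lambda}(t)\, A_\lambda
        && \text{(counterdiabatic drive)}
\end{align}
% The extra term \dot{\lambda} A_\lambda suppresses diabatic (unwanted)
% transitions. In the digitized variant, this evolution is compiled into
% shallow circuits, and the biases b_i are updated from measured spin
% expectations between runs, warm-starting each iteration.
```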
First successful demonstration of quantum error correction of qudits for quantum computers used a reinforcement learning algorithm to optimize the system’s performance as a quantum memory
A Yale University study published in Nature has demonstrated the first-ever experimental quantum error correction for higher-dimensional quantum units, using qudits: quantum systems that hold quantum information and can exist in more than two states. The researchers used a reinforcement learning algorithm to optimize the systems as ternary and quaternary quantum memories. The experiment pushed past the break-even point for error correction, showcasing a more practical and hardware-efficient method for quantum error correction that harnesses the power of a larger Hilbert space. The increased photon loss and dephasing rates of GKP qudit states lead to a modest reduction in the lifetime of the quantum information encoded in logical qudits, but in return provide access to more logical quantum states in a single physical system. The findings demonstrate the promise of realizing robust and scalable quantum computers and could lead to breakthroughs in cryptography, materials science, and drug discovery.
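The experiment’s actual controller is more sophisticated, but the general shape of reinforcement-learning-based tuning can be sketched with a minimal REINFORCE loop; the reward function below is a hypothetical stand-in for the measured logical lifetime, and the parameter vector stands in for the real control knobs.

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_lifetime(params: np.ndarray) -> float:
    """Hypothetical stand-in for the experiment: maps control parameters
    (e.g., pulse amplitudes and phases of the QEC cycle) to a measured
    logical lifetime. Here it is a toy function with a known optimum."""
    target = np.array([0.3, -0.7, 0.5])
    return float(np.exp(-np.sum((params - target) ** 2)))

# Gaussian policy over the control parameters, tuned by REINFORCE.
mean, sigma = np.zeros(3), 0.2
learning_rate, batch = 0.5, 64

for step in range(200):
    samples = mean + sigma * rng.standard_normal((batch, 3))
    rewards = np.array([logical_lifetime(s) for s in samples])
    baseline = rewards.mean()  # subtract a baseline to reduce variance
    # Policy-gradient estimate for the mean of the Gaussian policy:
    # grad log pi = (sample - mean) / sigma^2 for each sample.
    grad = ((rewards - baseline)[:, None] * (samples - mean)).mean(0) / sigma**2
    mean += learning_rate * grad

print("optimized parameters:", np.round(mean, 3))
print("lifetime proxy:", round(logical_lifetime(mean), 3))
```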
Quantum Machines cuts calibration time from hours to minutes with an open-source framework and modular architecture that let users build reusable components, combine them into complex workflows, and instantly share protocols with the ecosystem
Quantum Machines announced the release of Qualibrate (which the company spells QUAlibrate), an open-source framework for calibrating quantum computers that cuts calibration time from hours to minutes. The framework addresses one of quantum computing’s most critical scaling bottlenecks, enabling fast, modular calibration and fostering a global ecosystem in which researchers and companies worldwide can share calibration protocols and build upon each other’s advances, accelerating the path to practical quantum computers. To properly initialize and maintain a quantum computer’s performance, calibration must be performed not just once, but frequently during operation to compensate for system drift. Qualibrate enables researchers and quantum engineers to create reusable calibration components, combine them into complex workflows, and execute calibrations through an intuitive interface. The platform abstracts away hardware complexities, allowing teams to focus on quantum system logic rather than low-level details. Because Qualibrate is open source and modular, new calibration protocols can be immediately shared, validated, and built upon by the broader quantum computing community. Companies can also develop proprietary solutions on top of Qualibrate that leverage advanced approaches like quantum system simulation and deep learning, creating an ecosystem where fundamental calibration advances are shared openly while specialized tools push the boundaries of performance. Along with the framework, Quantum Machines is releasing its first calibration graph for superconducting quantum computers, providing a complete calibration solution that can be immediately deployed and customized.
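As an illustration of the workflow described above (reusable nodes composed into a calibration graph), consider the following sketch; the class names and structure here are assumptions patterned on the announcement, not the verified QUAlibrate API.

```python
# Hypothetical sketch of a QUAlibrate-style workflow: reusable calibration
# nodes composed into a graph and executed in sequence. Class names are
# assumptions based on the announcement, not QUAlibrate's actual API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CalibrationNode:
    """A reusable calibration step (e.g., resonator spectroscopy)."""
    name: str
    run: Callable[[dict], dict]  # takes current params, returns updates

@dataclass
class CalibrationGraph:
    """Orders nodes so each step consumes the previous step's results."""
    nodes: list = field(default_factory=list)

    def execute(self, params: dict) -> dict:
        for node in self.nodes:
            print(f"running {node.name}...")
            params.update(node.run(params))
        return params

# Compose a minimal superconducting-qubit calibration chain; the returned
# values here are placeholders for measured results.
graph = CalibrationGraph([
    CalibrationNode("resonator_spectroscopy", lambda p: {"f_res": 7.25e9}),
    CalibrationNode("qubit_spectroscopy",     lambda p: {"f_01": 5.10e9}),
    CalibrationNode("rabi_amplitude",         lambda p: {"pi_amp": 0.43}),
])
print(graph.execute({}))
```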
Multimode encoding could improve quantum error correction; leakage errors, which remove the qubit from the encoding space, can now be detected and corrected
Nord Quantique has successfully developed bosonic qubit technology with multimode encoding, outlining a path to a significant reduction in the number of qubits required for quantum error correction. This provides the system protection against many common types of errors, including bit flips, phase flips, and control errors. Another key advantage over single-mode encoding is that leakage errors, which remove the qubit from the encoding space, can now be detected and corrected. The Tesseract code allows for increased error detection, and it is expected that this will translate into additional quantum error correction benefits as more modes are added. These results are therefore a key stepping stone in the development of this hardware-efficient approach. The core concept of the multimode approach centres on simultaneously using multiple quantum modes to encode individual qubits. Each mode represents a different resonance frequency inside an aluminium cavity and offers additional redundancy, which protects quantum information. The number of photons populating each mode can also be increased for even more protection, further strengthening QEC capabilities. The approach adds quantum error correction capacity and extra means of detecting errors while maintaining a fixed number of qubits, and these benefits compound with scale, opening new avenues for fault-tolerant quantum computing. With this advance, Nord Quantique sees a clear path to delivering fault tolerance at utility scale. The team will continue to improve its results by leveraging systems with additional modes to push the boundaries of quantum error correction.
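For background on why additional modes and photons help, the standard single-mode GKP stabilizer picture is sketched below; this is generic textbook material, and the exact stabilizer structure of Nord Quantique’s Tesseract code is not reproduced here.

```latex
% Standard single-mode (square-lattice) GKP stabilizers: measuring them
% reveals small shift errors in position q and momentum p.
\begin{align}
  S_q = e^{\,i 2\sqrt{\pi}\,\hat{q}}, \qquad
  S_p = e^{-i 2\sqrt{\pi}\,\hat{p}}
\end{align}
% Any displacement error with |\delta q|, |\delta p| < \sqrt{\pi}/2 is
% correctable. A multimode code adds stabilizers acting on joint
% quadratures of several modes, yielding more error-detection checks per
% encoded qubit, and higher photon numbers sharpen the GKP peaks,
% improving the protection each mode provides.
```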
Oxford researchers’ new technique identifies and verifies intrinsic topological superconductivity in materials that could be used to build scalable, fault-tolerant quantum computers
Oxford researchers have developed a powerful new technique to identify materials capable of supporting stable quantum states, marking a major step toward scalable, fault-tolerant quantum computing. In the new study, the Oxford researchers verified that the known superconductor uranium ditelluride (UTe2) is an intrinsic topological superconductor. The researchers used a scanning tunneling microscope (STM), which uses an atomically sharp superconducting probe to obtain ultra-high-resolution images at the atomic scale without using light or electron beams. The experiments used an entirely new operating mode invented by Professor Séamus Davis (called the Andreev STM technique). This method is attuned only to electrons in a special quantum state (the topological surface state) that is predicted to cover the surface of intrinsic topological superconductors. When implemented, the method performed exactly as theory suggested, enabling the researchers not only to detect the topological surface state but also to identify the intrinsic topological superconductivity of the material. The results indicated that UTe2 is indeed an intrinsic topological superconductor, but not exactly the kind physicists have been searching for: based on the reported phenomena, Majorana quantum particles are believed to exist in this material, but they occur in pairs and cannot be separated from each other. The technique now enables researchers to efficiently screen other materials for topological superconductivity, potentially replacing complex and costly synthetic quantum circuits with simpler, crystalline alternatives.
Iskay Quantum Optimizer outperforms popular classical optimization solvers in financial use cases such as portfolio optimization
IBM Quantum partners are attending the IBM Quantum Partner Forum 2025 in London, England, to hear from IBM leadership and researchers about the latest in quantum hardware, quantum algorithm discovery, and powerful new software tools. The company introduced two new application functions on the Qiskit Functions Catalog: the QUICK-PDE function by French quantum startup ColibriTD, and the Quantum Portfolio Optimizer by Spanish startup Global Data Quantum. These functions provide a full, ready-made quantum pipeline for researchers and developers to harness the full power of utility-scale quantum computers in researching and developing new quantum use cases. Application functions are services that abstract away the complexities of the quantum workflow to accelerate quantum algorithm discovery and application prototyping. They take the same classical inputs as a typical classical workflow and return domain-familiar classical outputs, making it easy to integrate quantum methods into pre-existing application workflows. As the hunt for quantum advantage progresses, more researchers will use application functions to tackle problems that are challenging or impossible for the most powerful high-performance computing (HPC) systems. Also new to the catalog are the Iskay Quantum Optimizer by Kipu Quantum, which outperforms popular classical optimization solvers in financial use cases such as portfolio optimization, and the Singularity Machine Learning function by Multiverse Computing, which addresses classification problems that benefit from ensemble learning and complex model optimization. The QUICK-PDE and Quantum Portfolio Optimizer functions are available now, and users can request a free trial through the Qiskit Functions Catalog homepage.
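The catalog’s documented usage pattern is to load a function by name and call run() with classical inputs; in the sketch below, the function identifier and input fields are illustrative assumptions, since each application function defines its own interface.

```python
# Sketch of the Qiskit Functions access pattern: load a catalog function,
# submit classical inputs, get classical outputs back. The function name
# and input keys below are illustrative assumptions; each application
# function documents its own interface.
from qiskit_ibm_catalog import QiskitFunctionsCatalog

catalog = QiskitFunctionsCatalog(token="<IBM Quantum API token>")

# Load an application function by its catalog identifier (assumed name).
optimizer = catalog.load("kipu-quantum/iskay-quantum-optimizer")

# Classical inputs in, classical outputs out; no circuits to write.
job = optimizer.run(
    problem={"(0, 1)": 1.0, "(1, 2)": -0.5},  # assumed Ising-style coefficients
)
print(job.result())  # domain-familiar classical output (e.g., best bitstring)
```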
Xanadu is building a modular, networked quantum computer using complex photonic states (known as GKP states) on a silicon-based chip platform with extremely low optical losses, aiming at scalable fault-tolerant quantum computing
Xanadu has taken a key step toward scalable fault-tolerant quantum computing by demonstrating the generation of error-resistant photonic qubits — known as GKP states — on a silicon-based chip platform, a first-of-its-kind achievement. The milestone positions the Toronto-based quantum startup closer to building a modular and networked photonic quantum computer, a device that uses photons, rather than electrons, to perform calculations, according to the paper and a company statement. By encoding quantum information into complex photon states that can withstand noise and loss, the work directly addresses one of the central obstacles to quantum scalability: preserving fragile quantum data as systems grow in size and complexity. Xanadu’s researchers generated what are known as Gottesman–Kitaev–Preskill (GKP) states — structured quantum states made of many photons arranged in specific superpositions. These states encode information in a way that makes it possible to detect and correct small errors, such as phase shifts or photon loss, using well-known quantum error correction techniques. Xanadu’s experiment demonstrates that GKP states can be produced directly on-chip using integrated photonics, paving the way for scalable manufacturing. The system is based on silicon nitride waveguides fabricated on 300 mm wafers, a format common in commercial semiconductor manufacturing. These waveguides exhibit extremely low optical losses, a critical requirement for preserving quantum coherence over time. In addition to the waveguide platform, the setup included photon-number-resolving detectors with over 99% efficiency, developed in-house by Xanadu. These detectors can distinguish between one photon and many, a capability essential for preparing and verifying complex photonic states like GKP. High-precision alignment, custom chip mounts, and loss-optimized fiber connections ensured that the quantum states could be routed and measured without degrading the delicate information they carried.
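Concretely, the ideal square-lattice GKP encoding places codewords on an interleaved grid of position eigenstates, which is what makes small shifts detectable; the states produced in practice are finite-energy approximations of this textbook form.

```latex
% Ideal square-lattice GKP codewords: superpositions of position
% eigenstates spaced 2*sqrt(pi) apart, with the two logical states
% offset by sqrt(pi). Real states are finite-energy approximations.
\begin{align}
  |0_L\rangle \propto \sum_{n \in \mathbb{Z}} |\,2n\sqrt{\pi}\,\rangle_q,
  \qquad
  |1_L\rangle \propto \sum_{n \in \mathbb{Z}} |\,(2n+1)\sqrt{\pi}\,\rangle_q
\end{align}
% A shift error smaller than sqrt(pi)/2 moves the grid by less than half
% the spacing, so it can be measured and undone without disturbing the
% logical information.
```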
Study finds a quantum-enhanced algorithm running on a small photonic quantum processor can outperform classical systems in specific machine learning tasks
A study published in Nature Photonics demonstrates that small-scale photonic quantum computers can outperform classical systems in specific machine learning tasks. Researchers from the University of Vienna and collaborators used a quantum-enhanced algorithm on a photonic circuit to classify data more accurately than conventional methods. The goal was to classify data points using a photonic quantum computer and to single out the contribution of quantum effects, in order to understand the advantage with respect to classical computers. The experiment showed that even small-sized quantum processors can perform better than conventional algorithms. “We found that for specific tasks our algorithm commits fewer errors than its classical counterpart,” explains Philip Walther from the University of Vienna, lead of the project. “This implies that existing quantum computers can show good performances without necessarily going beyond state-of-the-art technology,” adds Zhenghao Yin, first author of the publication in Nature Photonics. Another interesting aspect of the new research is that photonic platforms can consume less energy than standard computers. “This could prove crucial in the future, given that machine learning algorithms are becoming infeasible due to their high energy demands,” emphasizes co-author Iris Agresti.
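Setting the photonic hardware aside, the general pattern of quantum-enhanced classification (evaluate a kernel between data points with a quantum circuit, then classify with a classical rule on that kernel) can be sketched as follows; the feature map here is a generic stand-in, not the circuit used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def feature_state(x: np.ndarray) -> np.ndarray:
    """Generic angle-encoding feature map: each feature becomes a rotation
    angle of one qubit. A stand-in, not the study's photonic circuit."""
    # |phi(x)> = tensor product over features of cos(x_i)|0> + sin(x_i)|1>
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi), np.sin(xi)]))
    return state

def quantum_kernel(a: np.ndarray, b: np.ndarray) -> float:
    """Fidelity kernel |<phi(a)|phi(b)>|^2, the quantity a quantum
    processor would estimate by sampling."""
    return float(np.abs(feature_state(a) @ feature_state(b)) ** 2)

# Toy two-class data in angle space.
X0 = rng.normal([0.3, 0.3], 0.15, (20, 2))
X1 = rng.normal([1.1, 1.1], 0.15, (20, 2))

def classify(x: np.ndarray) -> int:
    """Kernel nearest-centroid rule: compare mean similarity per class."""
    s0 = np.mean([quantum_kernel(x, t) for t in X0])
    s1 = np.mean([quantum_kernel(x, t) for t in X1])
    return int(s1 > s0)

print(classify(np.array([0.25, 0.35])))  # expected: 0
print(classify(np.array([1.0, 1.2])))    # expected: 1
```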
IBM reveals roadmap to world’s first large-scale, fault-tolerant quantum computer in 2029, with plans for new architectural components that correct errors in real time to achieve exceptional fault tolerance
IBM revealed its expected roadmap for building the world’s first large-scale, fault-tolerant quantum computer, which would enable scaling up quantum computing for real-world practical results. The technology giant said it expects to deliver the platform in 2029. The new computing system, dubbed IBM Quantum Starling, will be built at the company’s campus in Poughkeepsie, New York, and is expected to perform 20,000 times more operations than today’s quantum computers. According to the company, representing the computational state of this new platform would require the memory of more than a quindecillion (a 1 followed by 48 zeros) of the world’s most powerful supercomputers. IBM already operates a large, global fleet of quantum computers and released a new Quantum Roadmap that outlines its intent to build out practical quantum solutions. The company’s most recent processor, IBM Heron, a 156-qubit chip released in 2024, demonstrated high fidelity with error correction. The company said Starling will be able to access the computational power required to solve monumental problems by running 100 million operations on 200 logical qubits. IBM intends to use this as the foundation for IBM Blue Jay, which will be capable of executing 1 billion quantum operations over 2,000 logical qubits. To reach the fault tolerance needed for large scale, the company revealed in its roadmap that it will build new architectural components that assist with correcting errors in real time to achieve exceptional fault tolerance. These include “C-couplers,” which connect qubits over longer distances within Quantum Loon, a processor expected this year. Another processor, IBM Kookaburra, expected in 2026, will be the company’s first modular processor design to store and process encoded information, combining quantum memory with logic operations, a basic building block for scaling fault-tolerant systems beyond a single chip. In 2027, IBM Quantum Cockatoo will entangle two Kookaburra modules using “L-couplers” to link quantum chips together like nodes in a larger system, marking the final advancement toward building Starling in 2029.