IBM Quantum partners are attending the IBM Quantum Partner Forum 2025 in London, England, to hear from IBM leadership and researchers about the latest in quantum hardware, quantum algorithm discovery, and powerful new software tools. The company is proud to introduce two new application functions on the Qiskit Functions Catalog: the QUICK-PDE function by French quantum startup ColibriTD, and the Quantum Portfolio Optimizer by Spanish startup Global Data Quantum. These new functions provide a full, ready-made quantum pipeline for researchers and developers to harness the power of utility-scale quantum computers in researching and developing new quantum use cases. Application functions are services that abstract away the complexities of the quantum workflow to accelerate quantum algorithm discovery and application prototyping. They take the same classical inputs as a typical classical workflow and return domain-familiar classical outputs, making it easy to integrate quantum methods into pre-existing application workflows. As the hunt for quantum advantage progresses, more researchers will use application functions to tackle problems that are challenging or impossible for the most powerful high-performance computing (HPC) systems. The catalog's existing application functions include the Iskay Quantum Optimizer by Kipu Quantum, which outperforms popular classical optimization solvers in financial use cases such as portfolio optimization, and the Singularity Machine Learning function by Multiverse Computing, which addresses classification problems that benefit from ensemble learning and complex model optimization. The new QUICK-PDE and Quantum Portfolio Optimizer functions now join them, and users can request a free trial through the Qiskit Functions Catalog homepage.
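To make the integration model concrete, here is a minimal sketch of how an application function is loaded and invoked through IBM's qiskit-ibm-catalog client. The catalog identifier and run() arguments shown are illustrative assumptions, not the providers' documented signatures.

    # Minimal sketch: calling an application function from the Qiskit Functions Catalog.
    # The function identifier and run() arguments below are assumptions for
    # illustration, not documented signatures.
    from qiskit_ibm_catalog import QiskitFunctionsCatalog

    catalog = QiskitFunctionsCatalog(token="<YOUR_IBM_QUANTUM_TOKEN>")

    # Load a provider's function by its catalog identifier (assumed ID).
    quick_pde = catalog.load("colibritd/quick-pde")

    # Application functions take familiar classical inputs...
    job = quick_pde.run(use_case="cfd")  # hypothetical domain parameter

    # ...and return domain-familiar classical outputs.
    print(job.result())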
Xanadu is building a modular, networked quantum computer using complex photonic states (known as GKP states) with extremely low optical losses on a silicon-based chip platform to achieve scalable fault-tolerant quantum computing
Xanadu has taken a key step toward scalable fault-tolerant quantum computing by demonstrating the generation of error-resistant photonic qubits — known as GKP states — on a silicon-based chip platform, a first-of-its-kind achievement. The milestone positions the Toronto-based quantum startup closer to building a modular and networked photonic quantum computer, a device that uses photons, rather than electrons, to perform calculations, according to the paper and a company statement. By encoding quantum information into complex photon states that can withstand noise and loss, the work directly addresses one of the central obstacles to quantum scalability: preserving fragile quantum data as systems grow in size and complexity. Xanadu’s researchers generated what are known as Gottesman–Kitaev–Preskill (GKP) states — structured quantum states made of many photons arranged in specific superpositions. These states encode information in a way that makes it possible to detect and correct small errors, such as phase shifts or photon loss, using well-known quantum error correction techniques. Xanadu’s experiment demonstrates that GKP states can be produced directly on-chip using integrated photonics, paving the way for scalable manufacturing. The system is based on silicon nitride waveguides fabricated on 300 mm wafers, a format common in commercial semiconductor manufacturing. These waveguides exhibit extremely low optical losses, a critical requirement for preserving quantum coherence over time. In addition to the waveguide platform, the setup included photon-number-resolving detectors with over 99% efficiency, developed in-house by Xanadu. These detectors can distinguish between one photon and many, a capability essential for preparing and verifying complex photonic states like GKP. High-precision alignment, custom chip mounts, and loss-optimized fiber connections ensured that the quantum states could be routed and measured without degrading the delicate information they carried.
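For reference, the ideal square-lattice GKP codewords have a standard textbook form (the generic definition, not notation from Xanadu's paper): superpositions of position eigenstates spaced by 2\sqrt{\pi},

    |\bar{0}\rangle \propto \sum_{n \in \mathbb{Z}} |2n\sqrt{\pi}\rangle_q, \qquad
    |\bar{1}\rangle \propto \sum_{n \in \mathbb{Z}} |(2n+1)\sqrt{\pi}\rangle_q ,

so any displacement error smaller than \sqrt{\pi}/2 in position or momentum shifts the state by less than half the lattice spacing and can be detected and corrected.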
Study finds a quantum-enhanced algorithm on a photonic circuit with small quantum processors can outperform classical systems in specific machine learning tasks
A study published in Nature Photonics demonstrates that small-scale photonic quantum computers can outperform classical systems in specific machine learning tasks. Researchers from the University of Vienna and collaborators used a quantum-enhanced algorithm on a photonic circuit to classify data more accurately than conventional methods. The goal was to classify data points using a photonic quantum computer and single out the contribution of quantum effects, in order to understand the advantage over classical computers. The experiment showed that even small-sized quantum processors can perform better than conventional algorithms. “We found that for specific tasks our algorithm commits fewer errors than its classical counterpart,” explains Philip Walther from the University of Vienna, lead of the project. “This implies that existing quantum computers can show good performances without necessarily going beyond the state-of-the-art technology,” adds Zhenghao Yin, first author of the publication in Nature Photonics. Another interesting aspect of the new research is that photonic platforms can consume less energy than standard computers. “This could prove crucial in the future, given that machine learning algorithms are becoming infeasible due to their high energy demands,” emphasizes co-author Iris Agresti.
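The paper's photonic algorithm is not reproduced here, but the general pattern of quantum-enhanced classification can be illustrated with a kernel method, where a kernel computed from quantum state overlaps replaces a classical one. The sketch below is a purely illustrative toy (a closed-form single-qubit angle-encoding kernel fed to a classical SVM), not the Vienna group's method.

    # Illustrative toy, not the paper's algorithm: a "quantum kernel" classifier.
    # Each feature is angle-encoded into a single-qubit state; the kernel is the
    # squared state overlap, computed in closed form and fed to a classical SVM.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def quantum_kernel(A, B):
        # |<phi(x)|phi(y)>|^2 for RY angle encoding reduces to
        # a product over features of cos^2((x_i - y_i) / 2).
        diff = A[:, None, :] - B[None, :, :]
        return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)

    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = SVC(kernel="precomputed")
    clf.fit(quantum_kernel(X_train, X_train), y_train)
    print(f"test accuracy: {clf.score(quantum_kernel(X_test, X_train), y_test):.2f}")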
IBM reveals roadmap to world’s first large-scale, fault-tolerant quantum computer in 2029, plans new architectural components that correct errors in real time to deliver exceptional fault tolerance
IBM revealed its expected roadmap for building the world’s first large-scale, fault-tolerant quantum computer, which would enable scaling up quantum computing for real-world practical results. The technology giant said it expects to be able to deliver the platform in 2029. The new computing system, dubbed IBM Quantum Starling, will be built at the company’s campus in Poughkeepsie, New York, and is expected to perform 20,000 times more operations than today’s quantum computers. According to the company, representing the computational state of this new platform would require the memory of more than a quindecillion (a 1 followed by 48 zeros) of the world’s most powerful supercomputers. IBM already operates a large, global fleet of quantum computers and released a new Quantum Roadmap that outlines its intent to build out practical quantum solutions. The company’s most recent processor, the 156-qubit IBM Heron, released in 2024, demonstrated high fidelity with error correction. The company said Starling will be able to access the computational power required to solve monumental problems by running 100 million operations on 200 logical qubits. The company intends to use this as the foundation for IBM Blue Jay, which will be capable of executing 1 billion quantum operations over 2,000 logical qubits. To reach the fault tolerance needed for large scale, the company revealed in its roadmap that it will build new architectural components that assist with correcting errors in real time. These include “C-couplers,” which connect qubits over longer distances within Quantum Loon, a processor expected this year. Another processor, IBM Kookaburra, expected in 2026, will be the company’s first modular processor designed to store and process encoded information, combining quantum memory with logic operations, a basic building block for scaling fault-tolerant systems beyond a single chip. In 2027, IBM Quantum Cockatoo will entangle two Kookaburra modules using “L-couplers” to link quantum chips together like nodes in a larger system, marking the final advancement toward building Starling in 2029.
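For scale (a back-of-the-envelope reading of that comparison, not IBM's own calculation): a quindecillion is 10^{48}, while exactly storing the state vector of 200 qubits would take

    2^{200} \approx 1.6 \times 10^{60}

complex amplitudes, so even 10^{48} machines would each need to hold on the order of 10^{12} amplitudes.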
NIST-led team uses quantum mechanics to make a factory for random numbers; Bell test measures pairs of “entangled” photons whose properties are correlated
NIST and the University of Colorado Boulder have launched CURBy, a publicly available random number generator based on quantum nonlocality, offering verifiable, truly random numbers. At the heart of this service is the NIST-run Bell test, which provides truly random results. This randomness acts as a kind of raw material that the rest of the researchers’ setup “refines” into random numbers published by the beacon. The Bell test measures pairs of “entangled” photons whose properties are correlated even when separated by vast distances. When researchers measure an individual particle, the outcome is random, but the properties of the pair are more correlated than classical physics allows, enabling researchers to verify the randomness. Einstein called this quantum nonlocality “spooky action at a distance.” This is the first random number generator service to use quantum nonlocality as a source of its numbers, and the most transparent source of random numbers to date. That’s because the results are certifiable and traceable to a greater extent than ever before. CURBy uses entangled photons in a Bell test to generate certifiable randomness, achieving a 99.7% success rate in its first 40 days and producing 512-bit outputs per run. A novel blockchain-based system called the Twine protocol ensures transparency and security by allowing users to trace and verify each step of the randomness generation process. CURBy can be used anywhere an independent, public source of random numbers would be useful, such as selecting jury candidates, making a random selection for an audit, or assigning resources through a public lottery.
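The phrase “more correlated than classical physics allows” is usually quantified with the CHSH inequality. The snippet below illustrates that generic bound; it is standard textbook physics, not CURBy's actual pipeline.

    # Illustration of the CHSH Bell bound (generic physics, not CURBy's protocol).
    # Local hidden-variable (classical) strategies satisfy S <= 2; entangled
    # photon pairs can reach S = 2*sqrt(2) ~= 2.83 (the Tsirelson bound).
    import numpy as np

    def correlation(a, b):
        # Quantum prediction for polarization-entangled photons: E = cos(2(a - b))
        return np.cos(2 * (a - b))

    # Standard CHSH angle settings (radians) that maximize the violation.
    a0, a1 = 0.0, np.pi / 4
    b0, b1 = np.pi / 8, 3 * np.pi / 8

    S = (correlation(a0, b0) - correlation(a0, b1)
         + correlation(a1, b0) + correlation(a1, b1))
    print(f"S = {S:.3f} (classical bound: 2, quantum max: {2 * np.sqrt(2):.3f})")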
MIT researchers demonstrate the strongest nonlinear light-matter coupling in a quantum system that could help reach the fault-tolerant quantum computing stage with 10X faster operations and readout
MIT researchers have demonstrated what they believe is the strongest nonlinear light-matter coupling ever achieved in a quantum system. Their experiment is a step toward realizing quantum operations and readout that could be performed in a few nanoseconds. Using a novel architecture based on a superconducting “quarton” coupler, the researchers showed nonlinear light-matter coupling about an order of magnitude stronger than prior demonstrations, which could enable a quantum processor to run about 10 times faster. “This would really eliminate one of the bottlenecks in quantum computing. Usually, you have to measure the results of your computations in between rounds of error correction. This could accelerate how quickly we can reach the fault-tolerant quantum computing stage and be able to get real-world applications and value out of our quantum computers,” says Yufeng “Bright” Ye, lead author of a paper on this research. Faster readout and operations are critical to reducing errors in quantum computation, since error correction must be performed within the limited lifespans of qubits. [Image caption: Researchers demonstrated extremely strong nonlinear light-matter coupling in a quantum circuit; stronger coupling enables faster readout and operations using qubits, the fundamental units of information in quantum computing. Credit: Christine Daniloff, MIT]
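As a rough rule of thumb from standard circuit QED (not the paper's own analysis), dispersive readout speed is set by the strength of the nonlinear cross-Kerr-type coupling \chi between qubit and resonator:

    H_{\text{int}} = \chi \, \hat{a}^{\dagger} \hat{a} \, \hat{\sigma}_z, \qquad t_{\text{readout}} \sim 1/\chi ,

so a coupling roughly ten times stronger shortens the minimum measurement time by about the same factor.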
Origin Quantum launches Tianji 4.0 to support scalable quantum systems offering standardized workflows capable of being executed by non-specialist engineers
Origin Quantum Computing Technology has released its fourth-generation quantum control system, Tianji 4.0, which supports over 500 qubits and advances China’s continuing efforts toward building scalable, industrial-grade quantum computing infrastructure. Tianji 4.0 introduces improvements across scalability, integration, stability, and automation. It reflects a move away from labor-intensive hardware tuning toward standardized workflows that can be executed by non-specialist engineers. Tianji 4.0 integrates with four core software systems developed by Origin Quantum. This full-stack integration streamlines the testing and tuning of superconducting qubit chips, which traditionally required input from PhD-level specialists. The result, according to the company, is a more repeatable and scalable approach to engineering, which prepares the system for use in future quantum devices with more than 100 qubits. Guo Guoping, director of the Anhui Quantum Computing Engineering Research Center and chief scientist at Origin Quantum, emphasized that the launch signifies a transition from prototype-level development to replicable engineering production. This could lay the foundation for mass production of quantum systems that are both higher in qubit count and more reliable in operation, which are essential requirements for practical use in computation-heavy sectors. The functionality offered by Tianji 4.0 suggests a continued focus on hardware-software co-design, system stability under increasing qubit counts, and preparation for industrial deployment, as well as prioritization of higher-throughput, modular quantum platforms within China’s domestic quantum ecosystem.
China’s Origin Quantum releases fourth-generation quantum control system, heads toward mass production, supports over 500 qubits and serves as the central control for superconducting quantum computers
China’s Origin Quantum has launched its fourth-generation quantum control system, a move signaling the country’s increasing push to industrialize and scale quantum computing capabilities. The new system, dubbed Origin Tianji 4.0, supports over 500 qubits and serves as the central control for superconducting quantum computers, according to The Global Times, a media outlet under the Chinese Communist Party (CCP). The system, unveiled this week in Hefei, is positioned as a critical enabler for mass-producing quantum computers with more than 100 qubits. The control system is considered the “neural center” of a quantum computer. It generates, acquires and controls the precise signals that manage quantum chips, which are the computational heart of a quantum system. With the Tianji 4.0 upgrade, Origin Quantum claims major improvements in integration, automation and scalability compared to its previous version, which powered the country’s third-generation superconducting quantum computer, Origin Wukong. The company said Tianji 4.0 is integrated with four of Origin Quantum’s proprietary software platforms, enabling faster testing and adjustment of superconducting chips. These improvements are expected to reduce both the cost and time required to bring quantum machines online.
World’s first silicon-based quantum computer integrates seamlessly with HPC in the data center thanks to its self-contained, closed-cycle cryogenic cooling
Equal1 has unveiled the Bell-1, the first quantum device that combines the potential of quantum computing with the convenience of traditional high-performance computing (HPC). The six-qubit machine is rack-mountable and fits into existing data centers, requiring no specialized infrastructure or additional equipment to operate at a temperature of minus 459.13 degrees Fahrenheit. The Bell-1 uses the latest semiconductor fabrication techniques and purified silicon for high control and long coherence times. The chip, called the UnityQ 6-Qubit Quantum Processing System, uses spin qubits, allowing for higher qubit density and scalability. The Bell-1 also incorporates error correction, control, and readout, taking advantage of existing semiconductor infrastructure for reliability and scalability. The company plans to release more powerful versions with higher qubit counts, and the system is designed to be future-proof, allowing early adopters to upgrade as new models are released.
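For context, that operating temperature is a standard unit conversion away from roughly 0.3 kelvin:

    T_{\text{K}} = (T_{\text{F}} + 459.67) \times \frac{5}{9} = (-459.13 + 459.67) \times \frac{5}{9} \approx 0.3\ \text{K}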
QuantWare and Q-CTRL accelerate deployment of on-premises quantum computers and scaling of QPUs with an autonomous calibration solution, unlocking processors with over 1 million qubits
QuantWare announced a collaboration with Q-CTRL to deliver an autonomous calibration solution for its customers. By integrating Q-CTRL’s autonomous calibration solution, Boulder Opal Scale Up, with its cutting-edge QPUs, QuantWare’s customers will be able to achieve push-button tuneup of their on-premises quantum computers – a critical capability for scaling QPUs, especially those powered by QuantWare’s VIO technology, designed to unlock processors with over 1 million qubits. This new partnership will provide QuantWare’s customers with:

Accelerated System Development: QuantWare’s customers will be able to drastically accelerate the construction and deployment of their quantum systems towards error correction. Q-CTRL’s autonomous calibration solution streamlines the setup process, reducing test times from days to hours.

Maximized QPU Performance: Leveraging Q-CTRL’s Boulder Opal Scale Up solution empowers any user to achieve optimal performance from QuantWare QPUs with minimal effort. This ensures that customers can unlock the full potential of QuantWare’s QPUs, including the new Contralto-A Quantum Error Correction QPU recently launched in early access.

Q-CTRL’s Boulder Opal Scale Up solution combines PhD-level human intelligence with AI-driven automation to overcome a key quantum industry bottleneck. Built on the company’s track record of delivering peak QPU performance through physics-informed AI, Boulder Opal Scale Up provides an expert-configured and fully autonomous software solution to deliver fast, repeatable, and robust QPU characterization and calibration.
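The announcement does not show Boulder Opal Scale Up's interface, so the sketch below is a generic, hypothetical illustration of what "push-button" autonomous calibration means in practice: a dependency-ordered set of calibration routines in which each step is re-run only when a fast check detects drift. All names here are invented for illustration.

    # Hypothetical sketch of an autonomous calibration loop (illustrative only;
    # not the Boulder Opal Scale Up API). Nodes run in dependency order, and a
    # full (slow) tune-up runs only when a fast verification check fails.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Calibration:
        name: str
        check: Callable[[], bool]   # fast verification experiment
        tune: Callable[[], None]    # full calibration routine

    def run_autonomous(calibrations):
        # Sweep the chain in dependency order, trusting nodes that pass.
        for cal in calibrations:    # assumed pre-sorted by dependency
            if cal.check():
                print(f"[ok]   {cal.name}")
            else:
                print(f"[tune] {cal.name}")
                cal.tune()

    # Example chain for one qubit (stubs stand in for real experiments).
    chain = [
        Calibration("resonator_spectroscopy", check=lambda: False, tune=lambda: None),
        Calibration("qubit_spectroscopy",     check=lambda: False, tune=lambda: None),
        Calibration("rabi_amplitude",         check=lambda: True,  tune=lambda: None),
    ]
    run_autonomous(chain)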