HSBC announced the world’s first known empirical evidence of the potential value of current quantum computers for solving real-world problems in algorithmic bond trading. Working with a team from IBM, HSBC used an approach that combined quantum and classical computing resources to deliver up to a 34 percent improvement in predicting the likelihood that a trade would be filled at a quoted price, compared with common classical techniques used in the industry. HSBC and IBM’s trial explored how today’s quantum computers could optimise requests for quote (RFQs) in over-the-counter markets, where financial assets such as bonds are traded between two parties without a centralised exchange or broker. In this process, algorithmic strategies and statistical models estimate how likely a trade is to be filled at a quoted price. The teams ran real, production-scale trading data on multiple IBM quantum computers to predict the probability of winning customer inquiries in the European corporate bond market. The results show the value quantum computers could offer when integrated into the dynamic problems facing the financial services industry, and how they could potentially offer superior solutions to standard methods that use classical computers alone. In this case, an IBM Quantum Heron processor augmented classical computing workflows to unravel hidden pricing signals in noisy market data better than the standard, classical-only approaches in use at HSBC, resulting in strong improvements in the bond trading process.
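The announcement does not disclose the underlying model, but the general pattern of augmenting a classical predictor with a quantum feature map can be sketched in a few lines. The example below is a hedged illustration only, with invented RFQ features, synthetic labels, and a quantum-kernel SVM built from Qiskit statevectors; it is not HSBC’s or IBM’s workflow.

```python
# Hedged sketch (not HSBC/IBM's production model): one way a quantum feature
# map can augment a classical classifier for RFQ fill-probability estimation.
# The feature set, data, and kernel-SVM pipeline are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector

rng = np.random.default_rng(0)
# Toy RFQ features: quoted spread, inquiry size, recent volatility, time of day
X_train = rng.random((40, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0.75).astype(int)  # 1 = inquiry won (synthetic rule)
X_new = rng.random((5, 4))                                          # incoming inquiries to score

feature_map = ZZFeatureMap(feature_dimension=4, reps=2)   # encodes 4 features into a 4-qubit state

def embed(x):
    return Statevector(feature_map.assign_parameters(2 * np.pi * x))

def quantum_kernel(A, B):
    """K[i, j] = |<phi(a_i)|phi(b_j)>|^2 -- similarity via state overlap."""
    ea, eb = [embed(a) for a in A], [embed(b) for b in B]
    return np.array([[abs(np.vdot(u.data, v.data)) ** 2 for v in eb] for u in ea])

clf = SVC(kernel="precomputed", probability=True)
clf.fit(quantum_kernel(X_train, X_train), y_train)

fill_prob = clf.predict_proba(quantum_kernel(X_new, X_train))[:, 1]
print("estimated fill probabilities:", np.round(fill_prob, 3))
```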
MIT-Harvard team clears significant hurdle to quantum computing by demonstrating a 3,000-qubit system with continuous two-hour operation using optical tweezers and conveyor belt atom reloading at 300,000 atoms per second
Harvard scientists have developed a quantum machine featuring over 3,000 quantum bits (qubits) that can operate continuously for more than two hours without requiring a restart. This represents a significant advance in quantum computing technology and tackles the critical issue of “atom loss,” in which atoms escape the trap and their quantum information is lost. The research team, which includes members from MIT, used a system of “optical lattice conveyor belts” and “optical tweezers” to rapidly resupply atoms (at a reported 300,000 atoms per second) and maintain processing power. The work not only demonstrates large-scale continuous operation but also points toward running long computations without interruption. The researchers believe they are now closer to realizing practical quantum computers capable of executing billions of operations over extended periods. Alongside this study, the team has introduced methods for reconfigurable atom arrays and improved error correction, further contributing to the evolving field of quantum computing. The research received funding from several federal agencies, including the U.S. Department of Energy and the National Science Foundation.
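Taking the headline figures at face value, a quick back-of-the-envelope check shows why continuous reloading makes a 3,000-qubit array sustainable. The per-atom lifetime below is an assumed, illustrative number (the summary does not give one); the array size, run time, and conveyor rate are the reported figures.

```python
# Back-of-the-envelope check (illustrative assumptions, not the paper's analysis):
# how fast must atoms be replaced to hold a 3,000-qubit array steady?
array_size = 3_000          # qubits to sustain (reported)
atom_lifetime_s = 10.0      # assumed average time before a trapped atom is lost
conveyor_rate = 300_000     # atoms per second delivered by the optical-lattice conveyor (reported)

loss_per_second = array_size / atom_lifetime_s        # average atoms lost per second
print(f"required reload rate ~ {loss_per_second:.0f} atoms/s")
print(f"conveyor headroom    ~ {conveyor_rate / loss_per_second:.0f}x")

# Total atoms cycled through the array over a two-hour run
run_seconds = 2 * 3600
print(f"atoms replaced over 2 h ~ {loss_per_second * run_seconds:,.0f}")
```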
PsiQuantum’s silicon photonic approach shifts quantum computing from lab to large-scale deployment; targeting real-world uses in drug discovery, catalysts, and semiconductor processes
Quantum computing is transitioning from theory to practice, with PsiQuantum Corp., a Bay Area startup, at the forefront, aiming to create the first fault-tolerant quantum computer using silicon photonics. Co-founder Pete Shadbolt emphasizes the field’s momentum as critical technical milestones are being achieved, leading to the prospect of large-scale, commercially viable machines within months. He suggests that the quantum computing sector is currently behind artificial intelligence in terms of practical applications. Shadbolt discussed PsiQuantum’s strategy during an event hosted by theCUBE, highlighting its unique use of silicon photonics—chips that process light—which have surpassed their original telecom applications through collaboration with GlobalFoundries Inc. PsiQuantum’s methodology aims to produce large-scale quantum computers by leveraging established semiconductor manufacturing processes, positioning its innovations within the existing semiconductor ecosystem and targeting applications across chemistry, material science, drug discovery, and other sectors. This integration allows the company to utilize existing manufacturing standards and supply chain infrastructure, circumventing the need for exotic materials.
IBM, Vanguard test quantum approach to building portfolios that preserves more realism in financial modeling and yields multiple candidate solutions along the way, offering investors richer data for decision-making
IBM and Vanguard researchers demonstrated a quantum-classical workflow for portfolio construction using 109 qubits on IBM’s Heron processors, showing potential advantages for large-scale financial optimization. By combining quantum circuits that explore high-dimensional solution spaces with classical algorithms that refine and validate results, researchers can tackle problems that are too large or too complex for either quantum or classical methods alone. The team applied a Conditional Value at Risk-based Variational Quantum Algorithm (CVaR-VQA), combining quantum sampling and classical optimization to balance asset selection under risk and constraint conditions. According to the study, the team compared a standard TwoLocal circuit with a more advanced design called bias-field counterdiabatic optimization, or BFCD. Early simulations suggested that the harder-to-simulate BFCD circuits produced better convergence. This result hints at a possible sweet spot: quantum circuits that are too complex for efficient classical emulation but still trainable on hardware may deliver the most useful outcomes. The experiments also tested different entanglement structures, including bilinear chains and “colored” maps tailored to IBM’s hexagon-based design, or heavy-hex topology. The study argues that a quantum-classical workflow provides benefits beyond raw accuracy. Because the sampling-based method does not require rewriting the portfolio problem into strict mathematical forms such as QUBOs (quadratic unconstrained binary optimization problems), it preserves more realism in financial modeling. The approach also yields multiple candidate solutions along the way, offering investors richer data for decision-making. At the same time, the hardware results demonstrate that convergence continues even under noise, showing robustness of the method. For finance, the experiments show a path to exploring bond or ETF construction with greater flexibility and possibly faster turnaround in the future. For quantum computing, they provide evidence that harder-to-simulate circuits may be the most promising candidates for practical advantage. The results also suggest new benchmarking possibilities: using realistic financial optimization tasks rather than abstract problems as yardsticks for quantum progress.
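The study’s circuits and data are not reproduced here, but the CVaR-VQA loop the summary describes can be sketched end to end with Qiskit: sample bitstrings from a parameterized TwoLocal ansatz, score each candidate portfolio classically, average the best alpha-fraction of samples (the CVaR), and let a classical optimizer update the circuit parameters. The toy asset data, penalty weights, and optimizer settings below are illustrative assumptions.

```python
# Hedged sketch of a CVaR-based variational loop for portfolio selection,
# following the general CVaR-VQA recipe described above (not IBM/Vanguard's code).
# Asset data, penalty weights, and optimizer choice are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from qiskit.circuit.library import TwoLocal
from qiskit.primitives import StatevectorSampler

n_assets = 6
rng = np.random.default_rng(7)
returns = rng.normal(0.05, 0.02, n_assets)            # expected returns (toy)
cov = np.diag(rng.uniform(0.01, 0.05, n_assets))      # diagonal risk model (toy)
budget, risk_aversion, penalty = 3, 2.0, 5.0
alpha = 0.2                                           # CVaR tail fraction

def objective(bits):
    x = np.array(bits, dtype=float)
    cost = -returns @ x + risk_aversion * x @ cov @ x
    return cost + penalty * (x.sum() - budget) ** 2    # soft budget constraint

ansatz = TwoLocal(n_assets, "ry", "cz", reps=2, entanglement="linear")
ansatz.measure_all()
sampler = StatevectorSampler()

def cvar_cost(params):
    counts = sampler.run([(ansatz, params)], shots=512).result()[0].data.meas.get_counts()
    samples = []
    for bitstring, count in counts.items():
        bits = [int(b) for b in bitstring[::-1]]       # little-endian -> asset order
        samples += [objective(bits)] * count
    samples.sort()
    k = max(1, int(alpha * len(samples)))
    return float(np.mean(samples[:k]))                 # CVaR: average of the best alpha-tail

x0 = rng.uniform(0, 2 * np.pi, ansatz.num_parameters)
result = minimize(cvar_cost, x0, method="COBYLA", options={"maxiter": 100})
print("best CVaR objective:", result.fun)
```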
Cornell–IBM researchers demonstrate a new method of building fault-tolerant universal quantum computers through the ability to encode information by braiding Fibonacci string net condensate (Fib SNC) anyons in two-dimensional space
Researchers at IBM, Cornell, Harvard University, and the Weizmann Institute of Science have made two major breakthroughs in the quantum computing revolution. They demonstrated an error-resistant implementation of universal quantum gates and showed the power of a topological quantum computer in solving hard problems that conventional computers couldn’t manage. The researchers demonstrated the ability to encode information by braiding Fibonacci string net condensate (Fib SNC) anyons in two-dimensional space, which is crucial for fault tolerance and resistance to error. They demonstrated the power of their method on a known hard problem, chromatic polynomials, which originated from a counting problem on graphs with differently colored nodes and a few simple rules. The protocol used, sampling the chromatic polynomials for a set of different graphs where the number of colors is the golden ratio, is scalable, so other researchers with quantum computers can replicate it at larger scale. Studying topologically ordered many-body quantum systems presents tremendous challenges for quantum researchers. The researchers at IBM were critical in understanding the theory of the topological state and designing a protocol to implement it on a quantum computer. Their colleagues made essential contributions with the hardware simulations, connecting theory to experiment and determining the strategy. The research was supported by the National Science Foundation, the U.S. Department of Energy, and the Alfred P. Sloan Foundation.
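For intuition about the quantity being sampled, the snippet below classically evaluates a small chromatic polynomial by textbook deletion-contraction, including at golden-ratio values of the “number of colors.” The graph, evaluation points, and method here are illustrative assumptions; the quantum protocol’s sampling approach and scale are not reproduced.

```python
# Illustrative only: a tiny classical deletion-contraction evaluation of a
# chromatic polynomial at golden-ratio-related "numbers of colors", the kind of
# quantity the Fib SNC sampling protocol targets (graph and method are assumptions).
from math import sqrt

PHI = (1 + sqrt(5)) / 2

def chromatic(edges, n_vertices, q):
    """P(G, q) via deletion-contraction: P(G) = P(G - e) - P(G / e)."""
    if not edges:
        return q ** n_vertices            # empty graph: every vertex colored freely
    (u, v), rest = edges[0], edges[1:]
    if u == v:                            # self-loop: no proper coloring exists
        return 0.0
    deleted = rest
    # contract v into u: relabel v's endpoints as u
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    return chromatic(deleted, n_vertices, q) - chromatic(contracted, n_vertices - 1, q)

triangle = [(0, 1), (1, 2), (0, 2)]
print("P(K3, phi)     =", chromatic(triangle, 3, PHI))
print("P(K3, phi + 1) =", chromatic(triangle, 3, PHI + 1))   # phi + 1 = phi**2, a golden-ratio evaluation point
print("P(K3, 3)       =", chromatic(triangle, 3, 3))         # sanity check: 6 proper 3-colorings
```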
BDx Data Centres unveils Southeast Asia’s first hybrid quantum AI testbed aligned with Singapore’s Green 2030 and Smart Nation strategies
BDx Data Centres has launched Southeast Asia’s first hybrid quantum AI testbed, aiming to integrate quantum computing capabilities into its flagship SIN1 data centre in Paya Lebar. Developed in collaboration with Singapore-based Anyon Technologies, the testbed is designed to catalyze breakthroughs in AI innovation. “A modern computer today is essentially a whole data centre. Deploying a state-of-the-art hybrid quantum computing system at BDx’s SIN1 facility marks a transformative step in modern computing infrastructure,” said Dr Jie (Roger) Luo, president and CEO of Anyon Technologies. “By integrating QPUs (Quantum Processing Units) with CPUs (Central Processing Units) and GPUs (Graphics Processing Units), we’re enabling breakthroughs in quantum algorithms and applications. This lowers adoption barriers for enterprise customers, like financial institutions.” The testbed serves as a gateway for startups, enterprises, and government agencies to explore the vast potential of quantum-enhanced AI applications, made possible through the integration of Anyon’s quantum systems with BDx’s AI-ready infrastructure. Aligned with Singapore’s Green 2030 and Smart Nation strategies, the initiative also sets a benchmark for sustainable, high-performance computing.
New algorithm enables simulating quantum computations using codes that distribute information across multiple subsystems, allowing errors to be detected and corrected without destroying the quantum information
Researchers from Chalmers University of Technology in Sweden, along with teams from Milan, Granada, and Tokyo, have developed a groundbreaking method for simulating certain types of error-corrected quantum computations. This is a major step forward in the race to build powerful, dependable quantum technology. Quantum computers have the potential to transform fields like medicine, energy, encryption, artificial intelligence, and logistics. However, they still face a critical obstacle: errors. Quantum systems are far more prone to errors, and far harder to correct, than traditional computers. Researchers often turn to classical computers to simulate the process, but simulating advanced quantum behavior is incredibly complex. The limited ability of quantum computers to correct errors stems from their fundamental building blocks, qubits, which offer immense computational power but are highly sensitive to disturbances. To address this, error correction codes distribute information across multiple subsystems, allowing errors to be detected and corrected without destroying the quantum information. The researchers developed an algorithm capable of simulating quantum computations that use the Gottesman-Kitaev-Preskill (GKP) code, which makes quantum computers less sensitive to noise and disturbances. This new mathematical tool lets researchers more reliably test and validate a quantum computer’s calculations, opening up ways of simulating quantum computations that researchers were previously unable to test.
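The GKP code itself is a continuous-variable code and far beyond a short snippet, but the principle the summary describes, spreading information across subsystems so an error can be located and undone without reading out the encoded state, can be illustrated with the textbook three-qubit bit-flip code. The sketch below is that simpler code in plain NumPy, not the authors’ GKP simulator.

```python
# Minimal illustration (not the paper's GKP method): the three-qubit bit-flip
# code spreads one logical qubit over three physical qubits, so a single X
# error can be found from parity checks and undone without ever measuring --
# and therefore without destroying -- the encoded amplitudes.
import numpy as np

alpha, beta = 0.6, 0.8                      # logical state alpha|0> + beta|1>

# Encode: |0> -> |000>, |1> -> |111>  (amplitudes over basis states 0..7)
state = np.zeros(8, dtype=complex)
state[0b000] = alpha
state[0b111] = beta

def apply_x(state, qubit):
    """Bit-flip (X) error on one qubit of the 3-qubit register."""
    flipped = np.zeros_like(state)
    for basis, amp in enumerate(state):
        flipped[basis ^ (1 << qubit)] += amp
    return flipped

state = apply_x(state, 1)                   # an error strikes qubit 1

def parity(basis, q1, q2):
    return ((basis >> q1) ^ (basis >> q2)) & 1

# Syndrome: the Z0Z1 and Z1Z2 parities are identical on every nonzero amplitude,
# so reading them reveals the error location but nothing about alpha or beta.
support = [b for b, a in enumerate(state) if abs(a) > 0]
syndrome = (parity(support[0], 0, 1), parity(support[0], 1, 2))
error_location = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]

if error_location is not None:
    state = apply_x(state, error_location)  # correct it

print("syndrome:", syndrome, "-> corrected qubit", error_location)
print("recovered amplitudes:", state[0b000], state[0b111])   # alpha and beta intact
```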
New system lets multiple users share a single quantum computer by dynamically allocating quantum resources and intelligently scheduling jobs
Columbia Engineering researchers have developed HyperQ, the first system to enable multiple users to run quantum programs simultaneously on a single machine using quantum virtual machines (qVMs). By dynamically allocating quantum resources and intelligently scheduling jobs, HyperQ analyzes each program’s needs and steers it to the best parts of the quantum chip, so multiple tasks can run at once without slowing each other down. HyperQ is a software layer, a hypervisor, inspired by the virtualization technology that powers modern cloud computing. It divides a physical quantum computer’s hardware into multiple smaller, isolated quantum virtual machines. A scheduler then acts like a master Tetris player, packing several of these qVMs together to run simultaneously on different parts of the machine. The system reduced average user wait times by up to 40 times, transforming turnaround from days to mere hours. It also enabled up to a tenfold increase in the number of quantum programs executed in the same time frame, ensuring much higher utilization of expensive quantum hardware. Remarkably, HyperQ’s intelligent scheduling could even enhance computational accuracy by steering sensitive workloads away from the noisiest regions of the quantum chip. For quantum cloud providers such as IBM, Google, and Amazon, the technology offers a powerful way to serve more users with existing hardware infrastructure, increasing both capacity and cost-effectiveness. For academic and industry researchers, HyperQ means much faster access to quantum computing resources.
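The paper’s scheduler is not reproduced here, but the Tetris-style packing idea can be sketched: each round, place as many qVMs as will fit onto disjoint regions of the chip, sending noise-sensitive jobs to the quietest regions first. The region layout, noise figures, job list, and greedy policy below are invented for illustration; this is not HyperQ’s code.

```python
# Toy sketch of the hypervisor idea described above: pack several quantum
# "virtual machines" onto disjoint regions of one chip per round, steering
# noise-sensitive jobs to quieter regions. Region layout, noise figures, and
# the greedy policy are invented for illustration; this is not HyperQ's code.
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str
    qubits: int
    error_rate: float          # lower = quieter part of the chip
    free: int = field(init=False)
    def __post_init__(self): self.free = self.qubits

@dataclass
class Job:
    name: str
    qubits: int
    sensitive: bool            # True = prefer quiet regions

regions = [Region("A", 30, 0.004), Region("B", 40, 0.009), Region("C", 56, 0.015)]
queue = [Job("vqe", 24, True), Job("qaoa", 30, False),
         Job("kernel", 12, True), Job("sampler", 40, False), Job("chem", 20, True)]

rounds = []
while queue:
    for r in regions:
        r.free = r.qubits
    placed, waiting = [], []
    # Sensitive jobs first, largest first, into the quietest region that fits.
    for job in sorted(queue, key=lambda j: (not j.sensitive, -j.qubits)):
        candidates = sorted((r for r in regions if r.free >= job.qubits),
                            key=lambda r: r.error_rate)
        if candidates:
            candidates[0].free -= job.qubits
            placed.append((job.name, candidates[0].name))
        else:
            waiting.append(job)
    rounds.append(placed)
    queue = waiting

for i, placed in enumerate(rounds, 1):
    print(f"round {i}: {placed}")
```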
Startup Qedma’s software specializes in quantum error suppression and error mitigation by analyzing noise patterns to suppress some classes of errors while the algorithm is running and mitigate others in post-processing
Startup Qedma specializes in error-mitigation software. Its main product, QESEM (quantum error suppression and error mitigation), analyzes noise patterns to suppress some classes of errors while the algorithm is running and mitigate others in post-processing. IBM is both working on delivering its own “fault-tolerant” quantum computer by 2029 and collaborating with partners like Qedma, because IBM thinks driving quantum further requires a community effort. “If we all work together, I do think it’s possible that we will get scientific accepted definitions of quantum advantage in the near future, and I hope that we can then turn them into more applied use cases that will grow the industry,” said Jay Gambetta, IBM’s VP of Quantum. In all likelihood, quantum advantage will first be shown on an academic problem rather than a practical one, and it may take more than one attempt to build consensus that a given demonstration is not just another artificial or overly constrained scenario. Since last September, Qedma has been available through IBM’s Qiskit Functions Catalog, which makes quantum more accessible to end users. Qedma’s plans are hardware-agnostic: the startup has already conducted a demo on the Aria computer from IonQ, a publicly listed U.S. company focused on trapped-ion quantum computing, and it has an evaluation agreement with an unnamed partner that Qedma CEO Asif Sinay described as “the largest company in the market.” Recently, it also presented its collaboration with Japan’s RIKEN on how to combine quantum computers with supercomputers.
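QESEM’s internals are proprietary, but the post-processing half of the pipeline can be illustrated with zero-noise extrapolation (ZNE), a standard mitigation technique in which the same circuit is run at deliberately amplified noise levels and the measured expectation value is extrapolated back to an estimated zero-noise limit. The sketch below uses a synthetic noise model and illustrates ZNE in general, not Qedma’s method.

```python
# Hedged illustration of post-processing error mitigation using zero-noise
# extrapolation (ZNE) -- a standard technique, not a description of Qedma's
# proprietary QESEM internals. The noise model and data here are synthetic.
import numpy as np

ideal_value = 1.0                       # expectation value a noiseless device would return

def noisy_expectation(noise_scale, rng):
    """Pretend hardware: the signal decays exponentially with the noise level."""
    decay = np.exp(-0.35 * noise_scale)
    return ideal_value * decay + rng.normal(0, 0.01)

rng = np.random.default_rng(0)
scales = np.array([1.0, 1.5, 2.0, 3.0])          # 1.0 = native noise; >1 = amplified (e.g. gate folding)
measured = np.array([noisy_expectation(s, rng) for s in scales])

# Fit an exponential decay A*exp(-b*scale) with a linear fit in log space,
# then extrapolate to scale = 0 (the "zero-noise" limit).
slope, log_A = np.polyfit(scales, np.log(np.abs(measured)), 1)
mitigated = np.exp(log_A)                        # extrapolated value at scale 0

print(f"raw value at native noise : {measured[0]:+.3f}")
print(f"ZNE-extrapolated estimate : {mitigated:+.3f}  (ideal {ideal_value:+.3f})")
```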
Penn State study shows diffusion-based approach to automatically generate valid quantum circuits achieves 100% output validity by learning the patterns of circuit structure directly from graph-structured data, offering a scalable alternative to LLM-based approaches
A recent study from Penn State researchers introduces a diffusion-based approach to automatically generate valid quantum circuits, offering a scalable alternative to today’s labor-intensive quantum programming methods. The proposed framework, dubbed Q-Fusion, achieved 100% output validity and demonstrates promise for accelerating progress in quantum machine learning and quantum software development. Unlike LLM-based approaches that treat circuit generation like language modeling, or reinforcement learning approaches that require trial-and-error with human-defined rules, Q-Fusion learns the patterns of circuit structure directly from data. This bypasses the need for hand-crafted heuristics and enables the model to discover novel circuit layouts. Q-Fusion points toward a more scalable future, where models can rapidly explore vast design spaces and generate circuits that are physically viable on actual quantum hardware. The authors note that diffusion models offer advantages over generative adversarial networks (GANs) and other common generative techniques due to their stability and flexibility with graph-structured data. Q-Fusion also incorporates hardware-specific constraints such as limited qubit connectivity and native gate sets, ensuring that generated circuits can potentially be deployed on real quantum devices without extensive post-processing. As quantum computing continues to mature, tools like Q-Fusion could play an essential role in making the technology more accessible and productive. Automating the generation of valid, deployable quantum circuits will reduce the workload on quantum software engineers and accelerate the pace of experimentation. The model’s diffusion-based approach is not only a strong alternative to other quantum architecture search (QAS) methods but also opens new possibilities for combining machine learning with quantum program synthesis. It also aligns with trends in AI where graph-based diffusion models are showing strong performance across domains ranging from drug discovery to chip design.
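The paper’s architecture and training pipeline are not reproduced here, but the core diffusion idea, starting from pure noise and iteratively denoising a tensor that encodes which gate sits on which qubit at each timestep, can be sketched with a toy DDPM-style sampler. The circuit representation, noise schedule, gate set, and the untrained stand-in denoiser below are all assumptions for illustration; this is not the Q-Fusion model.

```python
# Conceptual sketch of diffusion-based circuit generation: start from noise and
# iteratively denoise a (qubit x timestep x gate-type) tensor, then decode it
# into gates from a native set. The representation, schedule, and the untrained
# stand-in denoiser are assumptions -- not Q-Fusion's model or training code.
import numpy as np

rng = np.random.default_rng(1)
n_qubits, n_steps, gate_types = 4, 6, 4
GATES = ["I", "H", "RZ", "CX"]                 # toy native gate set

# Linear noise schedule for a tiny DDPM-style sampler
T = 50
betas = np.linspace(1e-3, 0.2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t):
    """Placeholder for a trained graph denoising network: here it simply
    returns x_t as its noise prediction, standing in for the learned model."""
    return x_t

x = rng.normal(size=(n_qubits, n_steps, gate_types))         # start from pure noise
for t in reversed(range(T)):
    eps_hat = denoiser(x, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
    noise = rng.normal(size=x.shape) if t > 0 else 0.0
    x = mean + np.sqrt(betas[t]) * noise                      # DDPM reverse update

# Decode logits to one gate per (qubit, timestep); a hardware-validity pass
# (connectivity checks, CX target assignment) would go here.
circuit = [[GATES[int(np.argmax(x[q, s]))] for s in range(n_steps)]
           for q in range(n_qubits)]
for q, row in enumerate(circuit):
    print(f"q{q}: " + " ".join(row))
```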
