A team of researchers led by Professor Kavan Modi from the Singapore University of Technology and Design (SUTD) has taken a conceptual leap into this complexity by developing a new quantum framework for analysing higher-order network data. Their work centres on a mathematical field called topological signal processing (TSP), which encodes not only connections between pairs of points but also relationships among triplets, quadruplets, and beyond. Here, “signals” are information that lives on higher-dimensional shapes (triangles or tetrahedra) embedded in a network. The team introduced a quantum version of this framework, called Quantum Topological Signal Processing (QTSP): a mathematically rigorous method for manipulating multi-way signals using quantum linear systems algorithms. Unlike prior quantum approaches to topological data analysis, which often suffer from impractical scaling, the QTSP framework achieves linear scaling in signal dimension, an improvement that opens the door to efficient quantum algorithms for problems previously considered out of reach. The technical insight behind QTSP lies in the structure of the data itself. Classical approaches typically require costly transformations to fit topological data into a form usable by quantum devices. In QTSP, by contrast, the data’s native format is already compatible with quantum linear systems solvers, thanks to recent developments in quantum topological data analysis. This compatibility lets the team circumvent a major bottleneck, efficient data encoding, while keeping the algorithm mathematically grounded and modular. Still, loading data into quantum hardware and retrieving results without overwhelming the quantum advantage remains an unsolved challenge: even with linear scaling, quantum speedups can be nullified by overheads in pre- and post-processing. The framework has been demonstrated through a quantum extension of the classical HodgeRank algorithm, with potential applications in recommendation systems, neuroscience, physics, and finance.
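Classical HodgeRank, the algorithm QTSP extends, is compact enough to sketch. The snippet below is a minimal illustration with made-up comparison data (the edge list, margins, and variable names are ours, not the paper’s): it ranks items by solving a least-squares problem over an edge signal, and the quantum version would replace this linear solve with a quantum linear systems algorithm.

```python
# Toy classical HodgeRank: recover a global ranking from pairwise comparisons.
import numpy as np

# Edges (i, j) with flow y_ij > 0 meaning "j beats i by y_ij".
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
y = np.array([1.0, 0.5, 2.0, 1.5])            # edge signal (pairwise margins)

n = 4
B = np.zeros((len(edges), n))                 # node-edge incidence matrix
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = -1.0, 1.0

# HodgeRank: least-squares potential s with B s ~= y, i.e. the gradient
# component of the Hodge decomposition of the edge flow.
s, *_ = np.linalg.lstsq(B, y, rcond=None)
s -= s.mean()                                 # fix the gauge (scores sum to 0)
print("scores:", np.round(s, 3))              # higher score = higher rank

# The residual y - B s is the curl + harmonic part: inconsistency that no
# global ranking can explain. QTSP filters signals like this on quantum hardware.
residual = y - B @ s
print("inconsistency norm:", np.linalg.norm(residual).round(3))
```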
PsiQuantum’s silicon photonic approach shifts quantum computing from lab to large-scale deployment, targeting real-world uses in drug discovery, catalysts, and semiconductor processes
Quantum computing is transitioning from theory to practice, with PsiQuantum Corp., a Bay Area startup, at the forefront, aiming to create the first fault-tolerant quantum computer using silicon photonics. Co-founder Pete Shadbolt emphasizes the field’s momentum as critical technical milestones are achieved, raising the prospect of large-scale, commercially viable machines within months. He suggests that the quantum computing sector currently trails artificial intelligence in terms of practical applications. Shadbolt discussed PsiQuantum’s strategy during an event hosted by theCUBE, highlighting its distinctive use of silicon photonics (chips that process light), which the company has pushed beyond their original telecom applications through a collaboration with GlobalFoundries Inc. PsiQuantum’s methodology aims to produce large-scale quantum computers by leveraging established semiconductor manufacturing processes, positioning its innovations within the existing semiconductor ecosystem and targeting applications across chemistry, materials science, drug discovery, and other sectors. This integration allows the company to utilize existing manufacturing standards and supply-chain infrastructure, circumventing the need for exotic materials.
Physicists leverage real-time AI control to assemble world’s largest 2,024-atom quantum array, paving the way for scalable, efficient quantum computing breakthroughs
A team led by Chinese physicist Pan Jianwei used artificial intelligence (AI) to help create an atom-based quantum computing component that dwarfs previous systems in size, raising hopes that neutral-atom machines could one day operate with tens of thousands of qubits. The team arranged 2,024 rubidium atoms, each functioning as a qubit, into precise two- and three-dimensional arrays. The feat reportedly marks a tenfold increase over the largest previous atom arrays and addresses one of the field’s most stubborn bottlenecks: how to scale beyond a few hundred qubits without prohibitive delays. Until now, researchers typically moved atoms into place one at a time, making large-scale arrays impractical. Pan’s team, working with the Shanghai Artificial Intelligence Laboratory, replaced this slow step with a real-time AI control system that shifts every atom in the array simultaneously. The setup uses a high-speed spatial light modulator to shape laser beams into traps that corral the atoms. The AI system calculates where each atom needs to go and directs the lasers to move them into position in just 60 milliseconds (six-hundredths of a second, about the time it takes a hummingbird to flap its wings five times), regardless of whether the array contains hundreds or thousands of atoms. In principle, the method could scale to arrays with tens of thousands of atoms without slowing down. Scaling neutral-atom arrays to that size could allow them to run algorithms that are currently beyond the reach of classical computers and existing quantum prototypes. Applications could range from simulating complex molecules for drug discovery to solving optimization problems in logistics and materials science. The AI-guided control method, coupled with high-precision lasers, essentially removes the scaling penalty that has long plagued neutral-atom designs.
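The article does not publish the control algorithm, but a common way to formulate the rearrangement step is as an assignment problem: match each loaded atom to a target trap site so that all moves can be executed in parallel. The sketch below is an illustrative, scaled-down formulation (random positions, a 16 x 16 target grid, and SciPy’s Hungarian solver are our assumptions, not the team’s code).

```python
# Toy parallel-rearrangement step: assign loaded atoms to target lattice
# sites so every atom can be moved simultaneously in one pass.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
n = 256                                           # scaled-down stand-in for 2,024
atoms = rng.uniform(0, 16, size=(n, 2))           # random loaded positions (um)

side = 16                                         # 16 x 16 grid of target traps
grid = np.stack(np.meshgrid(np.arange(side), np.arange(side)), -1).reshape(-1, 2)
targets = grid.astype(float)

# Cost = squared travel distance; the assignment solve yields one coherent
# set of moves the light modulator could execute at once.
cost = ((atoms[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)
moves = targets[cols] - atoms[rows]
print("max move distance:", np.sqrt((moves ** 2).sum(-1)).max().round(2), "um")
```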
IBM, Vanguard test quantum approach to building portfolios that preserves more realism in financial modeling and yields multiple candidate solutions along the way, offering investors richer data for decision-making
IBM and Vanguard researchers demonstrated a quantum-classical workflow for portfolio construction using 109 qubits on IBM’s Heron processors, showing potential advantages for large-scale financial optimization. By combining quantum circuits that explore high-dimensional solution spaces with classical algorithms that refine and validate results, researchers can tackle problems that are too large or too complex for either quantum or classical methods alone. The team applied a Conditional Value at Risk-based Variational Quantum Algorithm (CVaR-VQA), combining quantum sampling and classical optimization to balance asset selection under risk and constraint conditions. According to the study, the team compared a standard TwoLocal circuit with a more advanced design called bias-field counterdiabatic optimization, or BFCD. Early simulations suggested that the harder-to-simulate BFCD circuits produced better convergence. This result hints at a possible sweet spot: quantum circuits that are too complex for efficient classical emulation but still trainable on hardware may deliver the most useful outcomes. The experiments also tested different entanglement structures, including bilinear chains and “colored” maps tailored to IBM’s hexagon-based design, or heavy-hex topology. The study argues that a quantum-classical workflow provides benefits beyond raw accuracy. Because the sampling-based method does not require rewriting the portfolio problem into strict mathematical forms like QUBOs, it preserves more realism in financial modeling. The approach also yields multiple candidate solutions along the way, offering investors richer data for decision-making. At the same time, the hardware results demonstrate that convergence continues even under noise, showing robustness of the method. For finance, the experiments show a path to exploring bond or ETF construction with greater flexibility and possibly faster turnaround in the future. For quantum computing, they provide evidence that harder-to-simulate circuits may be the most promising candidates for practical advantage. The results also suggest new benchmarking possibilities: using realistic financial optimization tasks rather than abstract problems as yardsticks for quantum progress.
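The CVaR objective at the heart of CVaR-VQA is easy to state: rather than minimizing the mean cost over all sampled portfolios, the optimizer minimizes the mean over only the best alpha-fraction of samples. Below is a minimal sketch of that aggregation step on synthetic data (the function name and numbers are illustrative, not the study’s code).

```python
# Minimal sketch of the CVaR objective used in CVaR-VQA.
import numpy as np

def cvar_objective(energies: np.ndarray, alpha: float = 0.1) -> float:
    """Average of the lowest alpha-fraction of sampled objective values."""
    k = max(1, int(np.ceil(alpha * len(energies))))
    return float(np.sort(energies)[:k].mean())

# Pretend these are portfolio costs of bitstrings sampled from the circuit.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=1024)

print("mean objective:", samples.mean().round(3))
print("CVaR(0.1) objective:", round(cvar_objective(samples, 0.1), 3))
# The classical optimizer adjusts circuit parameters to minimize the CVaR,
# rewarding circuits that place probability mass on low-cost portfolios.
```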
Cornell–IBM researchers demonstrate a new method of building fault-tolerant universal quantum computers through the ability to encode information by braiding Fibonacci string net condensate (Fib SNC) anyons in two-dimensional space
Researchers at IBM, Cornell, Harvard University, and the Weizmann Institute of Science have made two major breakthroughs in the quantum computing revolution: an error-resistant implementation of universal quantum gates, and a demonstration that a topological quantum computer can solve hard problems conventional computers couldn’t manage. The researchers showed they could encode information by braiding Fibonacci string net condensate (Fib SNC) anyons in two-dimensional space, which is crucial for fault tolerance and error resistance. They tested the power of their method on a known hard problem, chromatic polynomials, which originated from a counting problem of graphs with differently colored nodes and a few simple rules. The protocol used, sampling the chromatic polynomials for a set of different graphs where the number of colors is the golden ratio, is scalable, so other researchers with quantum computers can replicate it at larger scale. Studying topologically ordered many-body quantum systems presents tremendous challenges for quantum researchers. The IBM researchers were critical in understanding the theory of the topological state and designing a protocol to implement it on a quantum computer; their colleagues made essential contributions with the hardware simulations, connecting theory to experiment and determining the strategy. The research was supported by the National Science Foundation, the U.S. Department of Energy, and the Alfred P. Sloan Foundation.
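For readers unfamiliar with chromatic polynomials: P(G, q) counts the proper q-colorings of a graph G, and the standard classical route is the exponential-time deletion-contraction recursion, which is what makes a sampling-based quantum protocol interesting. Here is a toy evaluator on an illustrative four-cycle graph; it assumes nothing about the paper’s protocol.

```python
# Deletion-contraction evaluation of the chromatic polynomial:
# P(G, q) = P(G - e, q) - P(G / e, q). Exponential time classically.

def chromatic(edges: frozenset, n: int, q: float) -> float:
    """Evaluate P(G, q) for a graph with n vertices; q may be non-integer."""
    if not edges:
        return q ** n                      # no edges: every coloring is proper
    e = next(iter(edges))
    u, v = e
    deleted = edges - {e}
    # Contract v into u: relabel v's edges, drop self-loops, dedupe parallels.
    contracted = frozenset(
        tuple(sorted((u if a == v else a, u if b == v else b)))
        for (a, b) in deleted
        if (u if a == v else a) != (u if b == v else b)
    )
    return chromatic(deleted, n, q) - chromatic(contracted, n - 1, q)

square = frozenset([(0, 1), (1, 2), (2, 3), (0, 3)])
phi = (1 + 5 ** 0.5) / 2
print(chromatic(square, 4, 3.0))   # 18 proper 3-colorings of a 4-cycle
print(chromatic(square, 4, phi))   # evaluation at a non-integer point
```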
BDx Data Centres unveils Southeast Asia’s first hybrid quantum AI testbed aligned with Singapore’s Green 2030 and Smart Nation strategies
BDx Data Centres has launched Southeast Asia’s first hybrid quantum AI testbed, aiming to integrate quantum computing capabilities into its flagship SIN1 data centre in Paya Lebar. Developed in collaboration with Singapore-based Anyon Technologies, the testbed is designed to catalyze breakthroughs in AI innovation. “A modern computer today is essentially a whole data centre. Deploying a state-of-the-art hybrid quantum computing system at BDx’s SIN1 facility marks a transformative step in modern computing infrastructure,” said Dr Jie (Roger) Luo, president and CEO of Anyon Technologies. “By integrating QPUs (Quantum Processing Units) with CPUs (Central Processing Units) and GPUs (Graphics Processing Units), we’re enabling breakthroughs in quantum algorithms and applications. This lowers adoption barriers for enterprise customers, like financial institutions.” The testbed serves as a gateway for startups, enterprises, and government agencies to explore the vast potential of quantum-enhanced AI applications, made possible through the integration of Anyon’s quantum systems with BDx’s AI-ready infrastructure. Aligned with Singapore’s Green 2030 and Smart Nation strategies, the initiative also sets a benchmark for sustainable, high-performance computing.
New algorithm enables simulating quantum computations using codes that distribute information across multiple subsystems allowing errors to be detected and corrected without destroying the quantum information
Researchers from Chalmers University of Technology in Sweden, along with teams from Milan, Granada, and Tokyo, have developed a groundbreaking method for simulating certain types of error-corrected quantum computations, a major step forward in the race to build powerful, dependable quantum technology. Quantum computers have the potential to transform fields like medicine, energy, encryption, artificial intelligence, and logistics. However, they still face a critical obstacle: errors. Quantum systems are far more prone to errors, and much harder to fix, than traditional computers. Researchers often turn to classical computers to simulate the process, but simulating advanced quantum behavior is incredibly complex. The limited ability of quantum computers to correct errors stems from their fundamental building blocks, qubits, which offer immense computational power but are highly sensitive to disturbances. To address this issue, error correction codes are used to distribute information across multiple subsystems, allowing errors to be detected and corrected without destroying the quantum information. The researchers developed an algorithm capable of simulating quantum computations using the Gottesman-Kitaev-Preskill (GKP) code, which makes quantum computers less sensitive to noise and disturbances. This new mathematical tool allows researchers to more reliably test and validate a quantum computer’s calculations, opening up entirely new ways of simulating quantum computations that researchers have previously been unable to test.
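The GKP code itself is continuous-variable and hard to show in a few lines, but the distribute-detect-correct idea the paragraph describes can be illustrated with the much simpler three-qubit bit-flip repetition code. The sketch below is a toy example of error correction in general, not the Chalmers algorithm: it spreads one logical qubit across three physical ones, locates a flip from parity checks, and undoes it without disturbing the encoded amplitudes.

```python
# Three-qubit bit-flip repetition code on state vectors with numpy.
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode |psi> = a|0> + b|1> as a|000> + b|111>.
a, b = 0.8, 0.6
logical = np.zeros(8)
logical[0], logical[7] = a, b

corrupted = kron(I, X, I) @ logical          # bit flip on the middle qubit

# Parity checks Z1Z2 and Z2Z3 (expectation values here; on hardware these
# are measured via ancilla qubits, so the amplitudes a, b survive).
s1 = corrupted @ kron(Z, Z, I) @ corrupted   # -1 => qubits 1,2 disagree
s2 = corrupted @ kron(I, Z, Z) @ corrupted   # -1 => qubits 2,3 disagree
flip = {(-1, -1): 1, (-1, 1): 0, (1, -1): 2}.get((round(s1), round(s2)))

ops = [kron(X, I, I), kron(I, X, I), kron(I, I, X)]
recovered = ops[flip] @ corrupted if flip is not None else corrupted
print("recovered == logical:", np.allclose(recovered, logical))
```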
New system lets multiple users share a single quantum computer by dynamically allocating quantum resources and intelligently scheduling jobs
Columbia Engineering researchers have developed HyperQ, the first system to enable multiple users to run quantum programs simultaneously on a single machine using quantum virtual machines (qVMs). By dynamically allocating quantum resources and intelligently scheduling jobs, HyperQ analyzes each program’s needs and steers it to the best parts of the quantum chip, so multiple tasks can run at once without slowing each other down. HyperQ is a software layer, a hypervisor, inspired by the virtualization technology that powers modern cloud computing. It divides a physical quantum computer’s hardware into multiple smaller, isolated quantum virtual machines. A scheduler then acts like a master Tetris player, packing these qVMs together to run simultaneously on different parts of the machine. The system reduced average user wait times by up to a factor of 40, transforming turnaround times from days to mere hours. It also enabled up to a tenfold increase in the number of quantum programs executed in the same time frame, ensuring much higher utilization of expensive quantum hardware. Remarkably, HyperQ’s intelligent scheduling can even enhance computational accuracy by steering sensitive workloads away from the noisiest regions of the quantum chip. For quantum cloud providers such as IBM, Google, and Amazon, the technology offers a powerful way to serve more users with existing hardware infrastructure, increasing both capacity and cost-effectiveness. For academic and industry researchers, HyperQ means much faster access to quantum computing resources.
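HyperQ’s internals aren’t reproduced here, but the scheduling idea can be sketched as a packing problem: carve the chip into regions, place concurrent jobs into free regions, and steer noise-sensitive jobs to the quietest areas. The toy scheduler below uses invented region sizes, noise scores, and job names, not HyperQ’s actual data structures.

```python
# Toy qVM scheduler: pack jobs into disjoint chip regions for one batch.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    qubits: int
    noise: float        # lower = quieter region of the chip
    busy: bool = False

@dataclass
class Job:
    name: str
    qubits: int
    sensitive: bool     # route sensitive jobs to the quietest regions

regions = [Region("NW", 27, 0.011), Region("NE", 27, 0.032),
           Region("SW", 27, 0.019), Region("SE", 46, 0.025)]
jobs = [Job("vqe-chem", 20, True), Job("qaoa-route", 40, False),
        Job("bell-test", 5, False), Job("qpe-demo", 25, True)]

def schedule(jobs, regions):
    """Largest jobs first; sensitive jobs get the quietest feasible region."""
    placed = []
    for job in sorted(jobs, key=lambda j: -j.qubits):
        fits = [r for r in regions if not r.busy and r.qubits >= job.qubits]
        fits.sort(key=lambda r: r.noise if job.sensitive else -r.qubits)
        if fits:
            fits[0].busy = True
            placed.append((job.name, fits[0].name))
    return placed

print(schedule(jobs, regions))
# Everything placed in one batch runs simultaneously; unplaced jobs wait for
# the next batch, which is what cuts queue times versus one-job-at-a-time.
```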
Startup Qedma’s software specializes in quantum error suppression and error mitigation by analyzing noise patterns to suppress some classes of errors while the algorithm is running and mitigate others in post-processing
Startup Qedma specializes in error-mitigation software. Its main product, QESEM (quantum error suppression and error mitigation), analyzes noise patterns to suppress some classes of errors while the algorithm is running and mitigate others in post-processing. IBM is both working on delivering its own “fault-tolerant” quantum computer by 2029 and collaborating with partners like Qedma, because IBM thinks driving quantum further requires a community effort. “If we all work together, I do think it’s possible that we will get scientific accepted definitions of quantum advantage in the near future, and I hope that we can then turn them into more applied use cases that will grow the industry,” said Jay Gambetta, IBM’s VP of Quantum. In all likelihood, quantum advantage will first apply to an academic problem, not a practical one, and it may take more than one attempt to build consensus that a given demonstration is not just another artificial or overly constrained scenario. Since last September, Qedma has been available through IBM’s Qiskit Functions Catalog, which makes quantum more accessible to end users. Qedma’s plans are hardware-agnostic. The startup has already conducted a demo on the Aria computer from IonQ, a publicly listed U.S. company focused on trapped-ion quantum computing. In addition, Qedma has an evaluation agreement with an unnamed partner its CEO, Asif Sinay, described as “the largest company in the market.” Recently, it also presented its collaboration with Japan’s RIKEN on how to combine quantum with supercomputers.
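QESEM’s methods are proprietary, so as a generic stand-in for the “mitigate in post-processing” half, here is a toy version of zero-noise extrapolation, a widely used mitigation technique not specific to Qedma: run the same circuit at deliberately amplified noise levels, then extrapolate the measured expectation value back to zero noise. The decay model and numbers below are invented for illustration.

```python
# Toy zero-noise extrapolation: amplify noise, measure, extrapolate to zero.
import numpy as np

def noisy_expectation(scale: float) -> float:
    """Stand-in for hardware runs: true value 1.0, decaying with noise."""
    rng = np.random.default_rng(int(scale * 100))
    return 1.0 * np.exp(-0.3 * scale) + rng.normal(0, 0.005)

scales = np.array([1.0, 1.5, 2.0, 3.0])      # noise amplification factors
values = np.array([noisy_expectation(s) for s in scales])

# Fit an exponential decay A * exp(b * scale) and evaluate it at scale = 0.
b, logA = np.polyfit(scales, np.log(values), 1)
mitigated = np.exp(logA)

print("raw (scale=1):", values[0].round(4))
print("mitigated estimate:", mitigated.round(4), "(ideal: 1.0)")
```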
Penn State study shows diffusion-based approach to automatically generate valid quantum circuits achieves 100% output validity by learning the patterns of circuit structure directly from graph-structured data, offering a scalable alternative to LLM-based approaches
A recent study from Penn State researchers introduces a diffusion-based approach to automatically generate valid quantum circuits, offering a scalable alternative to today’s labor-intensive quantum programming methods. The proposed framework, dubbed Q-Fusion, achieved 100% output validity and demonstrates promise for accelerating progress in quantum machine learning and quantum software development. Unlike LLM-based approaches that treat circuit generation like language modeling, or reinforcement learning approaches that require trial-and-error with human-defined rules, Q-Fusion learns the patterns of circuit structure directly from data. This bypasses the need for hand-crafted heuristics and enables the model to discover novel circuit layouts. Q-Fusion points toward a more scalable future, where models can rapidly explore vast design spaces and generate circuits that are physically viable on actual quantum hardware. The authors note that diffusion models offer advantages over generative adversarial networks (GANs) and other common generative techniques due to their stability and flexibility with graph-structured data. Q-Fusion also incorporates hardware-specific constraints such as limited qubit connectivity and native gate sets, ensuring that generated circuits can potentially be deployed on real quantum devices without extensive post-processing. As quantum computing continues to mature, tools like Q-Fusion could play an essential role in making the technology more accessible and productive. Automating the generation of valid, deployable quantum circuits will reduce the workload on quantum software engineers and accelerate the pace of experimentation. The model’s diffusion-based approach is not only a strong alternative to other quantum architecture search (QAS) methods but also opens new possibilities for combining machine learning with quantum program synthesis. It also aligns with trends in AI, where graph-based diffusion models are showing strong performance across domains ranging from drug discovery to chip design.
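The hardware-constraint checks described above are straightforward to express. The sketch below shows the kind of validity filter involved, testing native-gate membership and qubit connectivity for a generated circuit; the gate set, coupling map, and circuit encoding are illustrative choices, not Q-Fusion’s internals.

```python
# Toy hardware-validity filter for generated circuits: every gate must be
# native, and every two-qubit gate must act on physically coupled qubits.
NATIVE_GATES = {"rz", "sx", "x", "cx"}                 # example native set
COUPLING = {(0, 1), (1, 2), (2, 3), (1, 4)}            # allowed CX pairs
COUPLED = {tuple(sorted(p)) for p in COUPLING}

def is_valid(circuit: list) -> bool:
    """circuit: list of (gate_name, qubits_tuple) entries."""
    for gate, qubits in circuit:
        if gate not in NATIVE_GATES:
            return False                               # non-native gate
        if len(qubits) == 2 and tuple(sorted(qubits)) not in COUPLED:
            return False                               # uncoupled qubit pair
    return True

good = [("sx", (0,)), ("cx", (0, 1)), ("rz", (2,)), ("cx", (1, 4))]
bad = [("h", (0,)), ("cx", (0, 3))]                    # h not native; 0-3 uncoupled
print(is_valid(good), is_valid(bad))                   # True False
```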