New Study Exposes Major Flaws in Quantum Computing
By David Freeman - September 17, 2025
Quantum computing has long been framed as the technology that will one day make today’s supercomputers look like outdated calculators. From cracking problems in cryptography to modeling the chemistry of life itself, its promise has always rested on one defining feature: the ability to solve problems so complex that no conventional computer could ever verify them within any human lifetime. That promise has also exposed a deep paradox. If quantum computers produce results that cannot be verified by even the fastest supercomputers, how do we know they are correct? Until now, that question has lingered unanswered, leaving a gap at the center of claims about so-called quantum advantage. A new study from Swinburne University of Technology confronts this dilemma directly, providing methods to validate outputs of advanced photonic quantum machines that until now were treated as uncheckable. The work has the potential to redefine what counts as genuine progress in the race toward reliable quantum computing.
The challenge is sharper than it may appear. Modern experiments with large-scale photonic devices have claimed to reach quantum advantage, meaning they performed a task that would take classical computers far too long to replicate. One such class of machines is called Gaussian Boson Samplers. These systems use photons, particles of light, to sample probabilities across enormous and complex optical networks. The underlying mathematics belongs to the hardest computational classes known, requiring matrix functions called hafnians for which no efficient classical algorithm is known. In principle, these machines can generate samples from probability distributions that would take the most powerful supercomputer millions or billions of years to reproduce. But therein lies the paradox. If we cannot check the result, how can anyone be certain that the machine did what it claimed? Validation is not a minor detail. Without it, entire claims of superiority rest on faith rather than measurable confirmation.
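For readers curious why checking the answer is so hard: the hafnian of a symmetric 2n-by-2n matrix is a sum over every perfect matching of its indices, and the number of matchings grows as (2n-1)!! = 1·3·5·…·(2n-1). The following minimal Python sketch is not the algorithm used in any experiment, just a brute-force illustration of that combinatorial blow-up.

# Illustrative only: a naive hafnian computed by summing over explicit
# perfect matchings. The hafnian of a symmetric 2n x 2n matrix A is the
# sum, over every perfect matching of {0, ..., 2n-1}, of the product of
# the matched entries A[i][j]. There are (2n-1)!! matchings, so the cost
# explodes combinatorially, which is why GBS outputs are so hard to check.

def hafnian(A):
    n = len(A)
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0  # odd-sized matrices have no perfect matching
    rest = list(range(1, n))
    total = 0.0
    for j in rest:  # pair index 0 with each possible partner j
        remaining = [k for k in rest if k != j]
        sub = [[A[r][c] for c in remaining] for r in remaining]
        total += A[0][j] * hafnian(sub)
    return total

# The hafnian of the all-ones 4x4 matrix counts the 3 perfect matchings
# of 4 vertices, so it equals 3.
print(hafnian([[1.0] * 4 for _ in range(4)]))  # 3.0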
The Swinburne team, led by Alexander Dellios and working with Margaret Reid and Peter Drummond, devised a set of validation tests that turn this paradox on its head. Instead of attempting to recreate the full distribution of outcomes, which is computationally intractable, they used a phase-space simulation technique known as the positive-P representation. This approach relies on probabilistic sampling methods that can reproduce measurable statistical properties of photon distributions far more efficiently than brute-force calculation. Their tests were applied to Gaussian Boson Sampling experiments that had claimed quantum advantage, including those from Borealis, one of the most advanced photonic networks built to date. What they found is deeply consequential. Across multiple datasets, the outputs from Borealis differed significantly from the ideal theoretical predictions for pure quantum squeezed states. In other words, the machine produced results that, when tested, did not line up with what it was supposed to deliver.
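The paper's multimode machinery is beyond the scope of this article, but a minimal single-mode sketch conveys the phase-space idea. Assuming a pure squeezed vacuum with mean photon number sinh²(r) and pair correlation sinh(r)·cosh(r), Gaussian samples in a doubled phase space reproduce the normally ordered photon moments by simple averaging, with no need to enumerate the photon-number distribution. All names and parameters below are chosen here for illustration and are not the authors' code.

# Minimal single-mode sketch of moment estimation with doubled phase-space
# (positive-P-style) sampling. This is NOT the authors' multimode method;
# it only illustrates how statistical moments of squeezed light can be
# estimated by averaging random samples instead of summing a distribution.
import numpy as np

rng = np.random.default_rng(1)
r = 1.0                                  # squeezing parameter (assumed)
nbar = np.sinh(r) ** 2                   # mean photon number <a†a>
m = np.sinh(r) * np.cosh(r)              # pair correlation <aa>

# Gaussian samples in the doubled space: alpha = a*x + b*y, beta = a*x - b*y
# are chosen so that <beta*alpha> = nbar and <alpha^2> = <beta^2> = m,
# which reproduces the normally ordered moments of the squeezed vacuum.
a = np.sqrt((m + nbar) / 2)
b = np.sqrt((m - nbar) / 2)
x, y = rng.standard_normal((2, 2_000_000))
alpha, beta = a * x + b * y, a * x - b * y

n_est = np.mean(beta * alpha)                      # estimates <n>
n2_est = np.mean(beta**2 * alpha**2) + n_est       # estimates <n^2>
print(f"<n>   sampled {n_est:.2f}  exact {nbar:.2f}")
print(f"<n^2> sampled {n2_est:.2f}  exact {3*nbar**2 + 2*nbar:.2f}")

Because averaging random samples scales far more gently with system size than summing an exponentially large distribution, moment-based comparisons of this kind can remain tractable even when full simulation is hopeless.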
The discrepancies were not subtle. Quantitative chi-square tests and Z-statistic analyses showed results many standard deviations away from theoretical expectations. For the largest experiments, with over two hundred modes, the differences were extreme, indicating that the data could not be reconciled with the intended ground-truth distribution. At first glance, this suggests that current flagship quantum advantage demonstrations may not have been producing the right answers at all. However, the team went further. When they introduced models that accounted for experimental imperfections such as decoherence and measurement errors, the agreement improved dramatically. By adjusting parameters to include a small degree of thermalization and corrections to the transmission matrix, they were able to bring the simulated distributions into near alignment with the observed data, at least for lower-dimensional tests. This points to a critical conclusion: the machines are not operating ideally, but they are not completely failing either. Instead, they are producing outputs that deviate from the intended problem but can be explained once noise and imperfections are properly included.
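As a rough illustration of what "many standard deviations" means in practice, the toy example below, with entirely invented numbers, bins hypothetical photon-count data, compares it to a theoretical distribution with a chi-square test, and converts the statistic to a Z-score using the standard large-degrees-of-freedom normal approximation. It is not the paper's actual test suite.

# Toy illustration (invented numbers) of how binned count data can be
# compared to a theoretical distribution with a chi-square statistic and
# then summarized as a Z-score. Not the paper's actual statistics.
import numpy as np
from scipy.stats import chisquare

# Hypothetical theoretical probabilities for grouped photon-count bins,
# and hypothetical observed counts from 100,000 experimental samples.
p_theory = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
observed = np.array([41500, 29000, 14200, 10100, 5200])

expected = p_theory * observed.sum()
chi2, p_value = chisquare(observed, f_exp=expected)

# For k degrees of freedom, chi-square is approximately Normal(k, 2k),
# so a Z-score expresses the deviation in standard deviations.
k = len(observed) - 1
z = (chi2 - k) / np.sqrt(2 * k)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}, Z = {z:.1f} standard deviations")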
This finding cuts both ways. On one hand, it undermines bold claims that quantum advantage has been achieved cleanly in these systems. On the other, it shows that with the right validation frameworks, errors can be diagnosed and corrected, potentially salvaging the effort. The importance of this distinction cannot be overstated. Without validation, experimentalists could continue claiming progress while building machines that have quietly drifted away from true quantum behavior. With validation, it becomes possible to separate genuine quantum performance from errors masquerading as results. In effect, the Swinburne method offers a kind of immune system for the field, exposing where the machines lose their quantumness and allowing adjustments to push them back into the correct regime.
The scale of the computational breakthrough is also significant. Classical simulation of these systems is not just difficult but astronomically so. Direct calculation of the required hafnians for large matrices would take incomprehensibly long, even on the fastest machines ever built. The Fugaku supercomputer in Japan, one of the world's largest, would need millions of years to generate enough samples to check the largest experiments. The positive-P method used by the Swinburne team is around a quintillion times faster for the relevant cases, producing statistical comparisons on a standard desktop in under a minute. That kind of acceleration makes validation not just possible but practical, turning an impossible task into one that can be used routinely to monitor experimental progress.
The results also reveal how fragile current claims of quantum advantage really are. While experiments have shown that they can produce massive photon-counting distributions, the Swinburne validation shows that what comes out is not the pristine quantum pattern theorists expect, but a distorted version heavily influenced by errors and losses. Some of these distortions may even make the problem easier for classical algorithms to approximate, cutting into the very advantage being claimed. In fact, the study notes that in some cases, classical samplers exploiting photon loss may already generate distributions closer to the true target than the quantum machines themselves. That possibility raises uncomfortable questions about whether the field has been overselling its achievements.
Still, the work does not dismiss the potential of quantum computing. Instead, it reframes the pathway forward. Building larger machines alone will not guarantee progress if errors scale with size. Validation and error correction must become central to the field, not afterthoughts. The authors suggest that their method could even serve as a feedback tool, allowing experimentalists to tweak parameters in real time to correct deviations. This vision positions scalable validation as not just a test of success but as an integral part of building better machines. The comparison with random number generators and cryptographic systems is apt. Just as no one would accept a random generator without rigorous statistical tests, no one should accept quantum advantage claims without robust validation suites.
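As a toy illustration of that feedback idea, the sketch below fits a single transmission parameter so that a deliberately simple thermal-light model best matches a set of observed bin counts by minimizing a chi-square statistic. The model, parameter names, and numbers are invented here for illustration; the paper's correction models (thermalization, transmission-matrix adjustments) are far richer.

# Toy sketch of using a validation statistic as a feedback signal: fit a
# single transmission parameter eta so that a simple thermal-light model
# best matches observed bin counts. Everything here is invented for
# illustration and is not the authors' calibration procedure.
import numpy as np
from scipy.optimize import minimize_scalar

N_BINS, N_SAMPLES, NBAR0 = 8, 50_000, 2.0

def thermal_probs(nbar, n_bins=N_BINS):
    # Thermal photon statistics P(n) = nbar^n / (1 + nbar)^(n + 1),
    # truncated to n_bins outcomes and renormalized.
    n = np.arange(n_bins)
    p = nbar**n / (1 + nbar) ** (n + 1)
    return p / p.sum()

# "Observed" data generated at a hidden eta = 0.6 for the demonstration.
rng = np.random.default_rng(7)
observed = rng.multinomial(N_SAMPLES, thermal_probs(0.6 * NBAR0))

def chi2(eta):
    expected = N_SAMPLES * thermal_probs(eta * NBAR0)
    return np.sum((observed - expected) ** 2 / expected)

fit = minimize_scalar(chi2, bounds=(0.01, 1.0), method="bounded")
print(f"fitted eta = {fit.x:.3f}, chi2 = {fit.fun:.1f}")

In a real experiment the fitted parameters would describe where the device departs from ideal operation, which is exactly the kind of diagnostic feedback the authors envision.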
The study also highlights the stakes for the broader field. Policymakers, investors, and the public have been told that these machines are on the verge of reshaping entire industries, with timelines of five to ten years often floated. Yet if the machines cannot be reliably validated, those promises rest on shaky ground. The credibility of the field depends on rigorous checks that separate true breakthroughs from errors dressed up as progress. The Swinburne team’s work represents one of the most serious steps yet in addressing this credibility gap. It also suggests that claims already made in the past few years should be revisited with caution. Just because a machine outputs something no supercomputer could check does not mean it has achieved anything meaningful.
Quantum computing has been championed as the key to solving the unsolvable. But solving the unsolvable is meaningless if the answers cannot be confirmed. Validation transforms the conversation. It shifts focus from making grand claims about impossibility to ensuring that what is being produced is both real and correct. This aligns with the foundational principle of science itself: results must be testable, not simply asserted. In this sense, the Swinburne study brings the field back to its scientific core, away from marketing rhetoric and toward rigorous demonstration.
The practical outcomes of this research will ripple outward. In the near term, experimental groups will need to incorporate validation into their workflows, perhaps finding that some of their most celebrated results do not survive scrutiny. In the medium term, validation will guide the development of error correction methods tailored for photonic systems. In the long term, if error-free quantum computers are ever to be built, scalable validation methods like this one will be part of the foundation. The alternative is a field that continues to grow in size but not in trustworthiness, a direction that could eventually undermine support altogether.
The study, published in Quantum Science and Technology in September 2025, represents a turning point. It demonstrates not just a way to check machines but a way to ensure the entire enterprise of quantum computing rests on verifiable ground. It also exposes that current flagship devices may not be as advanced as once believed. Rather than discrediting the effort, this should be seen as a necessary correction. Progress in quantum computing must be measured not by hype or scale but by correctness. Without correctness, nothing else matters.
The race toward quantum advantage is not over, but it is more complicated than many believed. Validation is no longer optional. It is the gatekeeper between real breakthroughs and hollow claims. The Swinburne team has shown that gatekeeping can be done, and done efficiently. The challenge now lies in whether the field is willing to confront what such validation will reveal.
Source:
Dellios, A. S., Reid, M. D., & Drummond, P. D. (2025). Validation tests of Gaussian boson samplers with photon-number resolving detectors. Quantum Science and Technology, 10, 045030. https://doi.org/10.1088/2058-9565/adfe16
ChristopherBlackwell
This is just way too far over my head! However, I did get the concerns being raised about quantum computer data being largely unverifiable, and that they actually found mistakes.
Garbage in... Garbage out...
If the results cannot be verified as correct and/or continue to conflict with their own predictions, as happened when a data check was performed, then it becomes essentially USELESS!
Even if I remain largely clueless what it all does and means.