April 2, 2026

Researchers from the California Institute of Technology (Caltech) and the start-up Oratomic have demonstrated a new error-correction approach that could cut the number of qubits needed for a fault-tolerant quantum computer from millions to as few as 10,000–20,000. That is a major shift from earlier estimates of around one million qubits, a figure long considered the main barrier to building practical quantum systems.
The work centres on a more efficient way to handle errors, one of the defining challenges in quantum computing. “Our results now make useful quantum computation with neutral atoms appear within reach by reducing qubit counts by up to two orders of magnitude,” said Madelyn Cain, a lead scientist at Oratomic.
Today’s quantum systems are fragile. Qubits, unlike classical bits, can exist in superposition and become entangled, but those states collapse easily, introducing errors. To compensate, current methods rely on redundancy, often requiring around 1,000 physical qubits to create a single stable logical qubit. Scaling that model to a useful machine has implied building systems with roughly one million qubits, a major engineering hurdle.
The new approach changes that equation. By using neutral atoms held and repositioned with “optical tweezers” (tightly focused laser beams), researchers can dynamically move qubits and connect them across large distances. This enables high-rate error-correction codes, in which each physical qubit contributes to multiple logical qubits instead of just one. In practical terms, the effective overhead could drop to as few as five physical qubits per logical qubit.
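The overhead arithmetic above can be sketched as a back-of-envelope comparison. The 1,000:1 and 5:1 physical-to-logical ratios come from the figures reported here; the 1,000-logical-qubit machine size is a hypothetical illustration, not a number from the research:

```python
# Back-of-envelope comparison of error-correction overheads.
# Ratios (1,000:1 conventional, 5:1 high-rate) are from the article;
# the target logical-qubit count is a hypothetical illustration.

def physical_qubits(logical_qubits: int, overhead: int) -> int:
    """Physical qubits needed at a given physical-per-logical overhead."""
    return logical_qubits * overhead

TARGET_LOGICAL = 1_000  # hypothetical machine size, for illustration only

conventional = physical_qubits(TARGET_LOGICAL, 1_000)  # redundancy-based encoding
high_rate = physical_qubits(TARGET_LOGICAL, 5)         # high-rate code

print(f"conventional: {conventional:,} physical qubits")  # 1,000,000
print(f"high-rate:    {high_rate:,} physical qubits")     # 5,000
print(f"reduction:    {conventional // high_rate}x")      # 200x
```

At these ratios the physical-qubit count falls by a factor of 200, consistent with the “up to two orders of magnitude” reduction Cain describes.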
“Unlike other quantum computing platforms, neutral atom qubits can be directly connected over large distances. Optical tweezers can shuttle one atom to the other end of the array and directly entangle it with another atom,” said Caltech physicist Manuel Endres.
This architecture builds on recent experimental progress. Endres and his team have already demonstrated arrays exceeding 6,000 neutral atom qubits, the largest of their kind. While the current results are theoretical, they align with rapid advances in hardware, suggesting a clearer path to scaling.
Separate research reinforces the shift. One recent paper estimates that fewer than 30,000 qubits could be enough to break 256-bit elliptic-curve cryptography (ECC) within about 10 days, roughly 100 times more efficient than earlier estimates. Another study from Google researchers reports improvements to Shor’s algorithm that cut resource requirements by a factor of 20, potentially enabling a quantum system to crack certain blockchain cryptography in under 10 minutes.
These developments have direct implications for security. Modern systems rely on encryption schemes such as RSA and ECC, which are considered safe because classical computers cannot efficiently solve the underlying mathematical problems, integer factoring and discrete logarithms. A sufficiently large quantum computer running Shor’s algorithm could solve those problems far faster.
Researchers emphasise that progress in both hardware and algorithms is accelerating the timeline. “I’ve been working on fault-tolerant quantum computing longer than some of my coauthors have been alive,” said John Preskill of Caltech. “Now at last we’re getting close.”
At the same time, there are limits. Large-scale systems still require advances in engineering, error rates, and integration. But the shift from brute-force scaling to more efficient architectures changes how progress is measured and how soon practical systems might arrive.
