
New research from Harvard and Berkeley overturns Google's "quantum supremacy": beautiful in theory, but it doesn't hold up in practice


  Shin Ji Won reports  

Editor: David

Ever since the idea was proposed, quantum supremacy has been a claim that countless researchers have tried to break. Now, a joint team from Harvard, UC Berkeley, and the Hebrew University of Jerusalem has taken a solid step in that direction, arguing that in practice quantum supremacy does not actually exist!

The claim of "quantum supremacy" is now almost four years old.

In 2019, Google's physicists announced that they had achieved quantum supremacy with a 53-qubit machine, a milestone of great symbolic significance.

According to the paper published in Nature, the quantum processor took only 200 seconds to complete a calculation that would have taken Summit, the most powerful supercomputer at the time, about 10,000 years.


What is quantum supremacy?

"Quantum supremacy", or "quantum supremacy" ("quantum supremacy"), means that quantum computers can accomplish tasks beyond the scope of any viable classical algorithm.

Even on the most advanced classical supercomputers, these tasks would take so long to compute (often thousands of years) that classical algorithms are effectively useless.

Interestingly, Google's 2019 announcement only said that quantum supremacy had been achieved; it did not spell out for which specific problem instances quantum computers had surpassed classical ones.

This is a difficult question to answer because quantum computers are currently plagued by frequent errors that accumulate and undermine the performance and stability of quantum computing.

In fact, what scientists want to know even more than where the boundary of quantum supremacy lies is another question: will classical algorithms be able to keep up as quantum computers get bigger?


Scott Aaronson, a computer scientist at the University of Texas at Austin, said: "We hope that eventually the quantum side will completely distance itself and end this competition once and for all."

Most researchers speculate that the answer is no.

That is, classical algorithms will one day fail entirely to keep up with quantum computing; but no one has yet been able to prove this rigorously and in full generality. One way to settle the question would be to find the conditions under which quantum computing gains a "lasting advantage" over classical computing.

Now, there seems to be a preliminary answer to this question:

The short answer: quantum computations accumulate errors, and if error correction cannot keep up, those errors destroy the "quantum supremacy" that holds under ideal conditions, allowing classical algorithms to keep pace with quantum ones.


Recently, in a preprint posted on arXiv, a joint team from Harvard, UC Berkeley, and the Hebrew University of Jerusalem took a big step toward confirming this conclusion.

They showed that quantum error correction is a necessary condition for enduring quantum supremacy in random circuit sampling, supporting Google's findings from a few years ago. At the current level of quantum error correction, quantum supremacy does not actually hold in practice.

No more quantum supremacy "golden zone"

To prove this conclusion, the researchers developed a classical algorithm that can simulate random circuit sampling experiments in the presence of errors.

A random circuit sampling experiment starts with an array of qubits and manipulates them at random with operations called "quantum gates." Some quantum gates put pairs of qubits into an entangled state, meaning they share a single quantum state and can no longer be described separately.

Repeating these gates across multiple layers of the circuit drives the qubits into ever more complex entangled states.
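To make this concrete, here is a minimal sketch in Python/NumPy of a brick-work random circuit simulated with a dense statevector. It is an illustration only, not Google's actual gate set or the new paper's method, and the function names (random_circuit_state, apply_cz, and so on) are invented for this example. It also shows exactly why brute force fails: the statevector has 2**n amplitudes, which is hopeless at 50-plus qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate() -> np.ndarray:
    """Draw a Haar-random 2x2 unitary via QR decomposition."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases                       # multiply column j by phases[j]

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate to one qubit of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [qubit])), 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, a, b, n):
    """Apply a controlled-Z (entangling) gate between qubits a and b."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[a], idx[b] = 1, 1                   # amplitudes where both qubits are |1>
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

def random_circuit_state(n, depth):
    """|0...0> evolved by `depth` layers of random 1-qubit gates plus CZ gates."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for layer in range(depth):
        for q in range(n):
            state = apply_1q(state, random_single_qubit_gate(), q, n)
        for a in range(layer % 2, n - 1, 2):   # brick-work pattern of CZ pairs
            state = apply_cz(state, a, a + 1, n)
    return state

state = random_circuit_state(n=6, depth=3)
print("norm:", abs(np.vdot(state, state)))     # unitary evolution keeps this ~1.0
```

With n=6 the statevector has only 64 amplitudes; pushing n toward 53 would require petabytes of memory, which is why this direct approach is not an option for Google's experiment.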


The left figure shows random circuit sampling under ideal conditions; the right figure shows random circuit sampling with noise

To read out this quantum state, the researchers measure all the qubits in the array. The measurement collapses the collective quantum state of the qubits into a random string of ordinary bits, i.e. 0s and 1s.

The number of possible outcomes grows rapidly as the number of qubits in the array increases: the 53 qubits in Google's 2019 experiment allow for about 10 quadrillion (2^53, roughly 9 x 10^15) possible outcomes.

The experiment then repeats these measurements many times, across many random circuits, to build up a probability distribution over the outcomes.
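The measurement step can be sketched the same way. The snippet below is again only an illustration: it uses a random stand-in statevector rather than a real circuit output, samples bitstrings according to their squared amplitudes (the Born rule), and tallies them into an empirical distribution.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n, n_shots = 6, 10_000

# Stand-in for the final statevector of a random circuit (random, normalized);
# in the sketch above this would be random_circuit_state(n, depth).
amps = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
state = amps / np.linalg.norm(amps)

probs = np.abs(state) ** 2                          # Born-rule outcome probabilities
shots = rng.choice(2 ** n, size=n_shots, p=probs)   # one bitstring per "measurement"

counts = Counter(format(int(s), f"0{n}b") for s in shots)
for bitstring, count in counts.most_common(5):
    print(bitstring, count / n_shots)               # empirical probability estimate

# At n = 53 there would be 2**53 (about 9e15) possible bitstrings -- far more
# than any feasible number of shots, so most outcomes are never observed at all.
```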

The question at the heart of quantum supremacy is whether it is hard, or even impossible, to mimic this probability distribution with a classical algorithm that does not use any entanglement.

In 2019, Google researchers argued that this goal is hard for error-free quantum circuits: in the absence of errors, it is indeed difficult to simulate a random circuit sampling experiment with classical algorithms.

From the perspective of computational complexity, as the number of qubits increases, the cost of the best known classical algorithms grows exponentially, while the cost of the quantum algorithm grows only polynomially.

When n increases to a large enough size, an algorithm that is exponential in n lags far behind any algorithm that is polynomial in n.

That is the difference we mean when we say a problem is hard for classical computers but easy for quantum computers: the best classical algorithms require exponential time, while quantum computers can solve the problem in polynomial time.
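A quick back-of-the-envelope comparison (illustrative numbers only, not measured runtimes) shows how fast an exponential cost outruns a polynomial one:

```python
# Illustrative only: a cost growing like 2^n eventually dwarfs any fixed
# polynomial, here n^3.
for n in (10, 20, 30, 40, 50, 60):
    exponential = 2.0 ** n
    polynomial = float(n) ** 3
    print(f"n={n:2d}   2^n = {exponential:10.3e}   n^3 = {polynomial:9.3e}   "
          f"ratio = {exponential / polynomial:8.1e}")
```

At n = 60 the exponential term is already about 5 trillion times larger than the cubic one, and the gap keeps widening with every additional qubit.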


However, the 2019 paper did not consider the effect of errors introduced by imperfect quantum gates, so its conclusion left a hole: can random circuit sampling without error correction still achieve quantum supremacy?

In fact, once you account for the errors that accumulate in the entangled state, simulating a random circuit sampling experiment with classical algorithms becomes far less difficult. And if the cost of the classical simulation drops to the same polynomial level as the quantum algorithm, quantum supremacy ceases to exist.

The new paper shows that if the circuit depth is kept constant, say a very shallow three layers, then as the number of qubits increases not much entanglement builds up, and the output remains classically simulable.

On the other hand, if the circuit depth is increased to keep up with the growing number of qubits, the cumulative effect of quantum gate errors dilutes the complexity generated by entanglement, and it again becomes easier to simulate the output with classical algorithms.
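A toy estimate makes this dilution effect plausible. Assuming, purely for illustration, a uniform and independent error rate per gate (my simplification, not the paper's noise model), the probability that a circuit runs with no error at all shrinks exponentially in the total gate count, so letting the depth grow with the qubit count quickly drowns out the ideal behaviour:

```python
# Toy estimate only: assume a uniform, independent error rate `eps` per gate.
# The chance that a circuit with n qubits and depth d runs with no error at all
# then decays as (1 - eps) ** (n * d), i.e. exponentially in the gate count.
def error_free_fraction(n_qubits: int, depth: int, eps: float = 0.005) -> float:
    total_gates = n_qubits * depth       # rough count: ~one gate per qubit per layer
    return (1.0 - eps) ** total_gates

for n in (10, 50, 100, 500):
    shallow = error_free_fraction(n, depth=3)        # constant, shallow depth
    deep = error_free_fraction(n, depth=n // 2)      # depth growing with n
    print(f"n={n:4d}   depth=3: {shallow:.3e}   depth=n/2: {deep:.3e}")
```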

Between these two regimes lies a "golden zone": the window in which quantum supremacy could survive, i.e., the range of circuit depths where entanglement builds up faster than classical simulations can keep up with.

Before this paper was published, it was believed that when the circuit depth lay in this intermediate range, quantum supremacy would persist even as the number of qubits increased.

At these depths, even though the output steadily degrades due to errors, it was thought to be hard to simulate classically at every step.

The new paper has all but wiped out this "golden zone."


The paper derives a classical algorithm for simulating noisy random circuit sampling and proves that its running time is a polynomial function of the time required to run the corresponding quantum experiment, not an exponential one.

This result establishes a tight theoretical relationship between the speed of the classical approach to random circuit sampling and the speed of the quantum approach; in other words, the quantum supremacy that holds in theory is almost non-existent in practice.

The reason for saying "almost" is that the basic assumptions of the new algorithm break down for some shallow circuits, leaving a small, unexplored gap.

Still, few researchers are hopeful of achieving quantum supremacy in this gap. Even Bill Fefferman, a computer scientist at the University of Chicago and one of the authors of the 2019 Google paper, said, "I see the odds are pretty small."

It can be said that according to the strict standards of computational complexity theory, random circuit sampling will no longer produce quantum supremacy.
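Put schematically (a loose summary of the complexity claims above, not the paper's precise theorem statement):

```latex
% Schematic only. Ideal (noiseless) circuits, per the 2019 hardness argument:
T^{\mathrm{ideal}}_{\mathrm{classical}}(n) \;\sim\; \exp\bigl(\Omega(n)\bigr),
\qquad
T_{\mathrm{quantum}}(n) \;\sim\; \mathrm{poly}(n)

% Noisy circuits without error correction, per the new result as summarized here:
T^{\mathrm{noisy}}_{\mathrm{classical}}(n) \;\le\; \mathrm{poly}\bigl(T_{\mathrm{quantum}}(n)\bigr)
```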

In addition, faced with this conclusion, all researchers agree on how critical quantum error correction will be to the long-term success of quantum computing. Fefferman said: "We found that quantum error correction is the solution."

Resources:

https://www.quantamagazine.org/new-algorithm-closes-quantum-supremacy-window-20230109/

https://scottaaronson.blog/?p=6957

