
Nobel laureate Wilczek: The mistake of error correction

Written by | Frank Wilczek

Translated by | Hu Feng, Liang Dingdang

Chinese version

In the early stages of a technology's development, engineers often have to cope with the failure of individual parts. Technological advances have made such failures rarer, but how to handle failures and errors remains an important question.


When the most advanced computers were still behemoths built from vast numbers of vacuum tubes and filling entire rooms, their designers had to take the tubes' limitations seriously: the tubes degraded over time and could even fail suddenly and completely. Partly inspired by these problems, John von Neumann (later known as the "father of modern computers") and others pioneered a new field of research, epitomized by von Neumann's 1956 paper "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Parts." In it he wrote, "Our present treatment of error is unsatisfactory and ad hoc. It is the author's conviction, voiced over many years, that error should be treated by thermodynamical methods." He added, "The present treatment falls far short of achieving this."

Thermodynamics and statistical mechanics are powerful methods developed in physics. Their core idea is to use probability and statistics to derive the properties of macroscopic bodies, such as their temperature and pressure, from the basic behavior of their microscopic atoms and molecules. Von Neumann regarded the complex basic units that process information as analogous to atoms, and hoped to develop a comparable statistical theory for them.

However, the rapid development of semiconductor technology and molecular biology short-circuited this emerging theory. Solid-state transistors, printed circuits and chips, assembled with high precision, proved highly reliable. Their appearance allowed engineers to shift their focus from coping with errors to avoiding them. The most basic processes of biology show a similar pattern: when cells read out the information stored in DNA, they perform rigorous proofreading and correction, nipping potential errors in the bud.

Today, as scientists push the boundaries of technology and ask more ambitious questions, the old question of how to deal with failures has returned. For example, if we relax the requirements on transistor reliability, we can make transistors smaller and faster and ease their manufacturing requirements. Biology offers even more apt examples: beyond the delicate assembly of proteins, there are larger-scale, coarser processes, such as the assembly and functioning of the brain. Only by taking on von Neumann's challenge can we hope to understand how systems built from faulty components still work well.

A great deal of progress has been made in dealing with failures since 1956. For example, the Internet was designed to keep working even when a node fails or goes offline (early research aimed to ensure that communication networks would survive even a nuclear war). Artificial neural networks rely on a kind of probabilistic logic in which each unit averages the inputs of many other units; even if some units compute imprecisely, the network can still carry out many impressive calculations.

We have also learned much more about how the human brain wires itself up and processes information. The brain is made up of vast numbers of biological cells that can be miswired, die, or malfunction, yet under normal circumstances it still functions remarkably well. Blockchains and (so far still conceptual) topological quantum computers systematically distribute information across a resilient network of possibly weak components. Such networks can compensate for failed components, just as our visual perception "fills in" the retina's famous blind spot.

Von Neumann's concern with unreliable parts fits within his vision of self-replicating machines. Like the biological cells and organisms that inspired them, such machines would need to draw the materials for self-replication from an unpredictable, unreliable environment. This is the engineering of the future. It may be the path to some of science fiction's wildest visions, such as the terraforming of planets, giant brains, and more.

We will not get there without making mistakes and working around them. Ironically, if semiconductor technology had not been so good, perhaps we would have taken on this problem earlier and be further along today.

English version

In the days when top-of-the-line computers contained rooms full of vacuum tubes, their designers had to keep the tubes' limitations carefully in mind. They were prone to degrade over time, or suddenly fail altogether. Partly inspired by this problem, John von Neumann and others launched a new field of investigation, epitomized by von Neumann’s 1956 paper "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Parts." In it, he wrote, "Our present treatment of error is unsatisfactory and ad hoc. It is the author's conviction, voiced over many years, that error should be treated by thermodynamical methods." He added, "The present treatment falls far short of achieving this."
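
To make the spirit of von Neumann's proposal concrete, here is a minimal sketch, not taken from the paper, of one idea it explores: running several unreliable copies of a component and taking a majority vote. The failure probability, the number of copies, and the assumption that the vote itself is perfectly reliable are all illustrative simplifications.

```python
import random

def unreliable_nand(a, b, p_fail=0.05):
    """A NAND gate that flips its output with probability p_fail."""
    out = not (a and b)
    return (not out) if random.random() < p_fail else out

def majority(bits):
    """Majority vote over an odd number of boolean signals.
    (Assumed perfectly reliable here, a simplification of von Neumann's analysis.)"""
    return sum(bits) > len(bits) // 2

def redundant_nand(a, b, copies=3, p_fail=0.05):
    """Run several unreliable gates in parallel and vote on the result."""
    return majority([unreliable_nand(a, b, p_fail) for _ in range(copies)])

def error_rate(gate, trials=100_000):
    """Estimate how often a gate disagrees with an ideal NAND."""
    errors = 0
    for _ in range(trials):
        a, b = random.random() < 0.5, random.random() < 0.5
        if gate(a, b) != (not (a and b)):
            errors += 1
    return errors / trials

if __name__ == "__main__":
    print("single gate:", error_rate(unreliable_nand))   # ~0.05
    print("3-way vote :", error_rate(redundant_nand))    # ~0.007, i.e. 3p^2 - 2p^3
```

With a 5% failure rate per gate, triplication plus a vote already pushes the error rate below 1%; adding more redundancy drives it lower still, which is the sense in which reliable behavior can be synthesized from unreliable parts.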

Thermodynamics and statistical mechanics are the powerful methods physics has developed to derive the properties of bodies, such as their temperature and pressure, from the basic behavior of their atoms and molecules, using probability. Von Neumann hoped to do something comparable for the complex basic units, analogous to atoms, that process information.

The emerging theory was short-circuited, so to speak, by developments in semiconductor technology and molecular biology. Solid-state transistors, printed circuits and chips are models of reliability when assembled meticulously enough. With their emergence, the focus of engineers turned from coping with error to avoiding it. The most basic processes of biology share that focus: As cells read out the information stored in DNA, they do rigorous proofreading and error correction, to nip potential errors in the bud.

But the old issues are making a comeback as scientists push the boundaries of technology and ask more ambitious questions. We can make transistors smaller and faster, and relax manufacturing requirements, if we can compromise on their reliability. And we will only understand the larger-scale, sloppier processes of biology, such as assembling brains as opposed to assembling proteins, if we take on von Neumann's challenge.

A lot of progress in overcoming processing errors has been made since 1956. The internet is designed to work around nodes that malfunction or go offline. (Early research aimed to ensure the survival of communication networks following a nuclear exchange.) Artificial neural nets can do impressive calculations smoothly despite imprecision in their parts, using a sort of probabilistic logic in which each unit takes averages over inputs from many others.
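
As a toy numerical illustration of that averaging idea, again not from the article itself, the sketch below feeds a single unit many noisy copies of the same signal. The spread of the unit's output shrinks roughly as the square root of the number of inputs, so imprecise parts, taken in bulk, can still deliver a dependable result. The signal value and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_unit_output(true_value, sigma, n_inputs):
    """One 'unit' averages n_inputs upstream signals, each corrupted by Gaussian noise."""
    inputs = true_value + rng.normal(0.0, sigma, size=n_inputs)
    return inputs.mean()

true_value, sigma, trials = 1.0, 0.5, 10_000
for n in (1, 10, 100, 1000):
    outputs = [noisy_unit_output(true_value, sigma, n) for _ in range(trials)]
    print(f"n_inputs={n:5d}  spread of unit output = {np.std(outputs):.3f}")
# The spread falls roughly as sigma / sqrt(n_inputs):
# averaging over many unreliable inputs yields a reliable output.
```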

We've also come to understand a lot more about how human brains get wired up and process information: They are made from vast numbers of biological cells that can get miswired, die, or malfunction in different ways, but usually still manage to function impressively well. Blockchains and (so far, mostly notional) topological quantum computers systematically distribute information within a resilient web of possibly weak components. The contribution of failed components can be filled in, similar to how our visual perception "fills in" the retina's famous blind spot.
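
The "filling in" of failed components can be illustrated in a much simpler setting than blockchains or topological codes: store data with a single XOR parity block, so that if any one chunk is lost, the survivors determine it. The chunk contents and sizes below are made up for illustration.

```python
from __future__ import annotations
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(chunks: list[bytes]) -> list[bytes]:
    """Append one XOR-parity chunk so any single lost chunk can be rebuilt."""
    return chunks + [reduce(xor_bytes, chunks)]

def recover(chunks: list[bytes | None]) -> list[bytes]:
    """Fill in at most one missing chunk (marked None) from the survivors."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    if len(missing) > 1:
        raise ValueError("a single parity block tolerates only one failure")
    if missing:
        survivors = [c for c in chunks if c is not None]
        chunks[missing[0]] = reduce(xor_bytes, survivors)
    return chunks[:-1]  # drop the parity block, return the original data

if __name__ == "__main__":
    data = [b"spin", b"foam", b"qbit"]   # equal-length chunks, purely illustrative
    stored = add_parity(data)
    stored[1] = None                     # one component fails
    print(recover(stored))               # -> [b'spin', b'foam', b'qbit']
```

Real systems spread information far more cleverly than a single parity block, but the principle is the same: redundancy lets the whole remain intact even when individual parts drop out.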

Von Neumann's concern with unreliable parts fits within his vision of self-replicating machines. To reproduce themselves, such automatons, like the biological cells and organisms that inspired them, would need to tap into an unpredictable, unreliable environment for their building material. This is the engineering of the future. Plausibly, it is the path to some of science fiction's wildest visions: terraforming of planets, giant brains, and more.

You won't get there without making, and working around, lots of mistakes. Ironically, if semiconductor technology hadn’t been so good, we might have taken on that issue sooner, and be further along today.

Frank Wilczek


Frank Wilczek is a professor of physics at the Massachusetts Institute of Technology and one of the founders of quantum chromodynamics. He was awarded the Nobel Prize in Physics in 2004 for the discovery of asymptotic freedom in quantum chromodynamics.

This article is reprinted with permission from the WeChat public account "Cosmopolitan Academia".

