
Nobel Laureate Wilczek: The Error of Error Correction

Written by | Frank Wilczek

Translated by | 胡風、梁丁當


In the early days of a technology, we often have to solve problems caused by failing components. Technological progress has made such failures rarer, but how to cope with faults and errors remains an important question.


In the days when top-of-the-line computers contained rooms full of vacuum tubes, their designers had to keep the tubes' limitations carefully in mind. They were prone to degrade over time, or suddenly fail altogether. Partly inspired by this problem, John von Neumann and others launched a new field of investigation, epitomized by von Neumann's 1956 paper "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." In it, he wrote, "Our present treatment of error is unsatisfactory and ad hoc. It is the author's conviction, voiced over many years, that error should be treated by thermodynamical methods." He added, "The present treatment falls far short of achieving this."

Thermodynamics and statistical mechanics are the powerful methods physics has developed to derive the properties of bodies, such as their temperature and pressure, from the basic behavior of their atoms and molecules, using probability. Von Neumann hoped to do something comparable for the complex basic units, analogous to atoms, that process information.
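To make that statistical idea concrete, here is a minimal numerical sketch (not from the original column): it draws random molecular velocities for an ideal monatomic gas at a chosen temperature and recovers the macroscopic temperature and pressure as averages over those microscopic values. Every constant and the container size below is an illustrative choice.

```python
import numpy as np

# Illustrative sketch: recover macroscopic temperature and pressure of an
# ideal monatomic gas from randomly sampled molecular velocities (SI units).

k_B = 1.380649e-23      # Boltzmann constant, J/K
m = 6.6e-26             # mass of one molecule, kg (roughly a nitrogen molecule)
T_true = 300.0          # temperature used to generate the sample, K
N = 1_000_000           # number of simulated molecules
V = 1.0e-3              # container volume, m^3

rng = np.random.default_rng(0)
# In thermal equilibrium each velocity component is Gaussian with variance k_B*T/m.
v = rng.normal(0.0, np.sqrt(k_B * T_true / m), size=(N, 3))

# Temperature from the average kinetic energy: <(1/2) m v^2> = (3/2) k_B T
T_est = m * np.mean(np.sum(v**2, axis=1)) / (3.0 * k_B)

# Pressure from momentum transfer to the walls: P = (N/V) * m * <v_x^2>
P_est = (N / V) * m * np.mean(v[:, 0]**2)
P_ideal = N * k_B * T_true / V    # ideal-gas law, for comparison

print(f"estimated temperature: {T_est:.2f} K (sample drawn at {T_true} K)")
print(f"estimated pressure:    {P_est:.3e} Pa (ideal-gas law: {P_ideal:.3e} Pa)")
```

In this toy model the estimate reproduces the ideal-gas law because the average of the squared velocity component over the Maxwell-Boltzmann distribution is k_B T / m.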

The emerging theory was short-circuited, so to speak, by developments in semiconductor technology and molecular biology. Solid-state transistors, printed circuits and chips are models of reliability when assembled meticulously enough. With their emergence, the focus of engineers turned from coping with error to avoiding it. The most basic processes of biology share that focus: As cells read out the information stored in DNA, they do rigorous proofreading and error correction, to nip potential errors in the bud.
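As a cartoon of that proofreading step (a deliberately simplified model, not the real biochemistry), one can simulate copying a long sequence with a small per-base error rate, with and without a checking pass that compares each copied base against the template and fixes most mismatches. The error rate and catch probability below are made-up numbers.

```python
import random

# Cartoon of copying with and without a proofreading pass. Here the
# "proofreader" simply re-checks each copied base against the template.

random.seed(1)
BASES = "ACGT"

def copy_strand(template, error_rate, proofread, catch_prob=0.99):
    copied = []
    for base in template:
        new = base
        if random.random() < error_rate:                      # copying error
            new = random.choice([b for b in BASES if b != base])
        if proofread and new != base and random.random() < catch_prob:
            new = base                                        # error caught and fixed
        copied.append(new)
    return "".join(copied)

template = "".join(random.choice(BASES) for _ in range(100_000))

for proofread in (False, True):
    result = copy_strand(template, error_rate=1e-3, proofread=proofread)
    mistakes = sum(a != b for a, b in zip(template, result))
    print(f"proofreading={proofread}: {mistakes} errors in {len(template)} bases")
```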

But the old issues are making a comeback as scientists push the boundaries of technology and ask more ambitious questions. We can make transistors smaller and faster, and relax manufacturing requirements, if we can compromise on their reliability. And we will only understand the larger-scale, sloppier processes of biology, such as assembling brains as opposed to assembling proteins, if we take on von Neumann's challenge.

A lot of progress in overcoming processing errors has been made since 1956. The internet is designed to work around nodes that malfunction or go offline. (Early research aimed to ensure the survival of communication networks following a nuclear exchange.) Artificial neural nets can do impressive calculations smoothly despite imprecision in their parts, using a sort of probabilistic logic in which each unit takes averages over inputs from many others.
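The averaging idea can be illustrated with a small simulation, a sketch of the general principle rather than of any particular network. A single downstream unit averages the reports of many noisy upstream units, some of which fail outright; its output error shrinks as the fan-in grows. The noise level, failure probability, and fan-in values are arbitrary.

```python
import numpy as np

# Sketch of the averaging idea: one downstream unit averages the reports of
# many noisy upstream units. Each upstream unit adds Gaussian noise to the
# true signal and occasionally fails outright, reporting junk.

rng = np.random.default_rng(42)
true_signal = 1.0
noise_sd = 0.5          # imprecision of each individual unit
fail_prob = 0.05        # chance a unit fails and reports garbage
trials = 10_000

for fan_in in (1, 10, 100, 1000):
    reports = true_signal + rng.normal(0.0, noise_sd, size=(trials, fan_in))
    failed = rng.random((trials, fan_in)) < fail_prob
    reports[failed] = rng.uniform(-5.0, 5.0, size=int(failed.sum()))  # junk outputs
    aggregate = reports.mean(axis=1)           # the averaging unit's output
    rms_error = np.sqrt(np.mean((aggregate - true_signal) ** 2))
    print(f"fan-in {fan_in:4d}: RMS error of the averaged output = {rms_error:.3f}")
```

The error of the averaged output falls roughly as the square root of the fan-in, which is why imprecise parts can still support precise computation.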

We've also come to understand a lot more about how human brains get wired up and process information: They are made from vast numbers of biological cells that can get miswired, die, or malfunction in different ways, but usually still manage to function impressively well. Blockchains and (so far, mostly notional) topological quantum computers systematically distribute information within a resilient web of possibly weak components. The contribution of failed components can be filled in, similar to how our visual perception "fills in" the retina's famous blind spot.
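A minimal sketch of that kind of redundancy, using nothing more than an XOR parity share (far simpler than what blockchains or quantum error-correcting codes actually do), shows how a message split across several holders survives the loss of any single piece. The split_with_parity and rebuild helpers are hypothetical names invented for this illustration.

```python
# Toy illustration of spreading information redundantly: a message is split
# into k data shares plus one XOR-parity share, so that any single lost
# share can be rebuilt from the survivors.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, k: int):
    """Split data into k equal shares and append one XOR-parity share."""
    assert len(data) % k == 0, "pad the message so it splits evenly"
    size = len(data) // k
    shares = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = shares[0]
    for s in shares[1:]:
        parity = xor_bytes(parity, s)
    return shares + [parity]

def rebuild(shares, lost_index):
    """Reconstruct the share at lost_index by XOR-ing all surviving shares."""
    survivors = [s for i, s in enumerate(shares) if i != lost_index]
    rebuilt = survivors[0]
    for s in survivors[1:]:
        rebuilt = xor_bytes(rebuilt, s)
    return rebuilt

message = b"distribute the data redundantly!"   # 32 bytes -> four 8-byte shares
pieces = split_with_parity(message, k=4)
lost = 2                                        # pretend holder 2 has failed
assert rebuild(pieces, lost) == pieces[lost]
print("lost share recovered:", rebuild(pieces, lost))
```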

Von Neumann's concern with unreliable parts fits within his vision of self-replicating machines. To reproduce themselves, such automatons, like the biological cells and organisms that inspired them, would need to tap into an unpredictable, unreliable environment for their building material. This is the engineering of the future. Plausibly, it is the path to some of science fiction's wildest visions: terraforming of planets, giant brains, and more.

You won't get there without making, and working around, lots of mistakes. Ironically, if semiconductor technology hadn’t been so good, we might have taken on that issue sooner, and be further along today.

Frank Wilczek


Frank Wilczek is a professor of physics at the Massachusetts Institute of Technology and one of the founders of quantum chromodynamics. He was awarded the 2004 Nobel Prize in Physics for the discovery of asymptotic freedom in quantum chromodynamics.

This article is reproduced with permission from the WeChat public account "蔻享學術".
