Beijing, Aug. 24 (Zhongxin.com) -- In a computational-science paper published in the internationally renowned academic journal Nature, researchers reported an analog artificial intelligence (AI) chip that is up to 14 times more energy efficient than conventional digital processors. The study shows that the chip, developed by IBM Research in the United States, outperforms general-purpose processors on speech recognition, and the technology may help break the bottleneck that current AI development faces in computing performance and efficiency.
A 14nm analog AI chip on a test board (image by Ryan Lavine). Photo courtesy of Springer Nature
The paper explains that as AI technology rises, so does its demand for energy and resources. In speech recognition, software advances have greatly improved the accuracy of automatic transcription, but because ever more operations require moving data between memory and processor, the hardware cannot keep up with models that are trained and run with millions of parameters. One solution the researchers propose is the "compute-in-memory" (CiM, or analog AI) chip. Whereas a digital processor spends extra time and energy shuttling data between memory and processor, an analog AI system avoids that inefficiency by performing operations directly within its own memory. Analog AI chips are therefore expected to greatly improve the energy efficiency of AI computing, but practical demonstrations have been lacking.
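The contrast above can be illustrated with a minimal Python sketch. This is not IBM's implementation, and the function names are hypothetical; it only models the idea that a digital processor pays a per-weight data-movement cost for a matrix-vector multiply, while an analog CiM array keeps the weights resident (as conductances) and produces all the row sums in place.

```python
def digital_mac(weights, inputs):
    # Digital processor: every weight must be fetched from memory before
    # each multiply-accumulate, so data movement grows with model size.
    fetches = 0
    acc = [0.0] * len(weights)
    for i, row in enumerate(weights):
        for w, x in zip(row, inputs):
            fetches += 1          # one weight moved memory -> processor
            acc[i] += w * x
    return acc, fetches

def analog_cim_mac(weights, inputs):
    # Analog compute-in-memory: weights stay resident in the memory array;
    # applying the input "voltages" yields every row's weighted sum in
    # place, so no per-weight fetch is needed.
    acc = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
    return acc, 0                 # zero weight fetches
```

Both functions return the same result; only the counted data movement differs, which is the inefficiency the analog approach is meant to eliminate.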
A 14-nanometer analog AI chip in the hands of a researcher (image by Ryan Lavine). Photo courtesy of Springer Nature
The paper's first and corresponding author, S. Ambrogio of IBM Research, and colleagues developed a 14-nanometer analog AI chip containing 35 million phase-change memory cells across 34 tiles. The team tested the chip's speech-recognition efficiency on two networks, a small one (Google Speech Commands) and a large one (Librispeech), and compared it against industry standards for natural-language processing tasks. On the small network, the chip's performance and accuracy were comparable to current digital technology. On the larger Librispeech model, the chip achieved 12.4 trillion operations per second (TOPS), and its system-level energy efficiency was estimated to be up to 14 times that of conventional general-purpose processors.
A 300mm wafer used to make an AI chip (image by Ryan Lavine). Photo courtesy of Springer Nature
Nature also published a "News & Views" article in which peer experts note that the study verifies the performance and efficiency of analog AI technology on both small and large models, supporting its potential to become a commercially viable alternative to digital systems. (End)