
What exactly is Intel's latest chip, Xeon Phi, bringing to its fight against NVIDIA?

In the data center server market, Intel is the undisputed leader; IDC puts its market share at a staggering 99%. But in the newest and hottest segment, artificial intelligence, Intel has not managed to come out on top. There, graphics chip specialist NVIDIA is in the limelight: its products are popular for running deep learning neural networks and are widely used in AI tasks such as image recognition, speech recognition, and natural language processing.


At its latest developer conference, IDF16 (Intel Developer Forum 2016), Intel announced its newest processor aimed at AI workloads, the third-generation Xeon Phi, code-named "Knights Mill," a move aimed squarely at Nvidia, the biggest player in the field.


The new Xeon Phi processor is slated for 2017, and Intel says it has substantially boosted the chip's floating-point throughput, which matters for machine learning algorithms. Standard Xeon processors are already widely deployed in data centers and handle nearly all deep learning workloads today, but some users add auxiliary processors for AI tasks, and most of those are NVIDIA GPUs (graphics processing units). Intel says the Xeon Phi line packs more processor cores than standard Xeon parts.
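
To see why floating-point throughput matters so much here: the bulk of the work in a deep learning model is dense single-precision matrix multiplication, so a chip's useful speed on these workloads is largely its floating-point speed. Below is a minimal, illustrative sketch (plain NumPy, not Intel's or NVIDIA's software) that counts the floating-point operations in one dense layer and measures how fast the hardware gets through them:

```python
import time
import numpy as np

# A single dense layer's forward pass is a matrix multiplication:
# (batch x inputs) @ (inputs x outputs). Counting the multiply-adds
# gives a rough floating-point operation (FLOP) budget for the layer.
batch, n_in, n_out = 256, 4096, 4096
x = np.random.rand(batch, n_in).astype(np.float32)
w = np.random.rand(n_in, n_out).astype(np.float32)

flops = 2 * batch * n_in * n_out  # one multiply + one add per element pair

start = time.perf_counter()
y = x @ w
elapsed = time.perf_counter() - start

print(f"{flops / 1e9:.1f} GFLOP in {elapsed * 1e3:.1f} ms "
      f"-> {flops / elapsed / 1e9:.1f} GFLOP/s")
```

A real network stacks many such layers and repeats them over millions of examples, which is why vendors compete so hard on floating-point performance per chip.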

Intel notes that the Xeon Phi processor can run most data analysis software without an external co-processor that could slow the analysis down, which it positions as a key advantage over competing products. Intel executives also said Xeon Phi can be paired with more memory than GPU-based solutions.

Intel wants the chip to win a place in the fast-growing but still niche machine learning market. The company says only 7 percent of all servers are used to run machine learning algorithms, and only 0.1 percent run deep neural networks, the branch of machine learning that loosely simulates neurons and synapses to process unstructured data. Deep learning is the hottest area of the moment, and with the newly announced Xeon Phi, Intel hopes to catch up in this market despite its late start.


At present, GPUs play a central role in deep learning. Nvidia's GPUs are popular because they excel at parallel computation, performing many operations simultaneously, which makes them far faster than general-purpose processors at running deep learning neural networks. Computations that used to require racks of CPUs or a supercomputer can now be handled by a small cluster of GPUs, which has greatly accelerated the field and provided the computational foundation for larger neural networks. Anyone familiar with deep learning knows that models must be trained: training means searching for the best values of thousands (or millions) of parameters. Those values are reached by repeated trial and gradual convergence; they are not numbers anyone sets by hand, but the output of an optimization procedure. By learning patterns at this fine-grained, even pixel, level and continually refining them, a computer can begin to make human-like judgments.
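
For readers who want the "training is iterative convergence" idea made concrete, here is a minimal, illustrative gradient-descent loop (plain NumPy on a toy problem, not tied to any particular chip or framework): the parameters start at arbitrary values and are nudged repeatedly in the direction that reduces the error until they converge.

```python
import numpy as np

# Toy training loop: fit y = w*x + b to noisy data by gradient descent.
# The "best" w and b are not set by hand; they emerge as the loop converges.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1000).astype(np.float32)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=1000).astype(np.float32)

w, b = 0.0, 0.0   # initial guesses
lr = 0.1          # learning rate

for step in range(500):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.3f}, b={b:.3f}  (true values were 3.0 and 0.5)")
```

A real deep network does the same thing with millions of parameters and far more data, which is why the training step is where GPU-style parallel floating-point hardware pays off.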


Diane Bryant, Intel executive vice president and general manager of the Data Center Group, said in an interview that GPU-based solutions will run into trouble if this computing demand keeps expanding rapidly. "GPU solutions can't scale. This market is still in its infancy, so it's possible to get things done with GPUs today, but there's no way to keep scaling GPUs further in the future."

Intel also enlisted Baidu to vouch for the chip: Baidu announced that it will use Xeon Phi chips to run its speech recognition service, Deep Speech. Baidu has been using Nvidia GPUs to accelerate its deep learning models. Just last month, NVIDIA CEO Jen-Hsun Huang held a small gathering with deep learning researchers at Stanford University, where he unveiled the TITAN X, billed as the most powerful GPU graphics card to date, and handed the first TITAN X to Andrew Ng, Baidu's chief scientist. Now that Baidu has chosen IDF16 to announce that Deep Speech will run on Xeon Phi chips, a Baidu spokesperson declined to say whether the company will continue to use Nvidia's technology in the future.


All of this shows that Intel is becoming increasingly aggressive in the contest over the future of artificial intelligence. In recently released benchmarks, Intel claimed the Xeon Phi processor was 2.3 times faster than Nvidia's GPUs.

Nvidia, however, said in a recent blog post that Intel's comparison used outdated software and hardware, making the results unconvincing. Nvidia claims that with up-to-date software, its hardware trains machine learning models 30% faster than Intel's. NVIDIA also said that a system of four Pascal-based TITAN X GPUs runs more than five times faster than four Xeon Phi processors.


"Understandably, newcomers to this field may not be aware of the advancements that are taking place in the hardware and software in this space." Nvidia mentioned it in its latest official blog.

In response to Nvidia's sharp counterattack, Intel's latest reply was: "It is completely understandable that Nvidia is worried about Intel's moves in this area. All of Intel's performance data is based on currently publicly available solutions, and we stand by the data." Whatever the benchmark results, Intel argues that GPUs alone cannot handle every acceleration task. In addition, as part of its inroads into artificial intelligence, Intel announced last week that it is acquiring AI startup Nervana for $408 million. Stay tuned for our follow-up coverage.

PS: This article was compiled by Lei Feng Network; reproduction without permission is prohibited.

via Forbes, Nvidia Blog, etc.