Intel has a reputation for making fast chips, but none are very efficient at the most trending thing in computing right now – artificial intelligence (AI). Deep-learning applications for computer vision, voice recognition, and other tasks typically need to run matrix calculations on gigantic arrays — a workload that doesn't suit general-purpose Core or Xeon chips. However, following its purchase of deep-learning chipmaker Nervana, Intel will ship its first purpose-built AI chips by the end of 2017.
During his keynote speech at WSJDLive, Intel CEO Brian Krzanich shared that close collaboration with Facebook, and the company's technical insights, have enabled Intel to bring this new generation of AI hardware, the Nervana Neural Network Processor (NNP) family, to market. Along with social media, Intel is targeting applications such as weather forecasting, automotive, and healthcare.
The Nervana NNP is an application-specific integrated circuit (ASIC). Unlike Intel's general-purpose PC chips, the Nervana NNP is purpose-built for both training and executing deep-learning algorithms.
Naveen Rao, Intel's VP of AI, said that ASICs customized for this workload can greatly improve the speed and computational efficiency of deep learning.
The chips are designed to accelerate the calculations most common in deep-learning programs, such as matrix multiplication and convolutions. Intel uses special software to manage on-chip memory for a given algorithm and has eliminated the generalized cache normally found on CPUs. Rao said this has enabled the company to achieve new levels of compute density and performance for deep learning.
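To give a sense of the workloads being discussed, here is a minimal, generic sketch in NumPy of the two operations named above. This is purely illustrative of what "matrix multiplication and convolutions" mean in a deep-learning context; it says nothing about how the Nervana NNP implements them, and the array shapes and the `conv2d` helper are invented for the example.

```python
import numpy as np

# Matrix multiplication: the core of a fully connected layer.
# Activations (batch x features) times weights (features x outputs).
activations = np.arange(6, dtype=np.float64).reshape(2, 3)
weights = np.ones((3, 4))
layer_out = activations @ weights  # shape (2, 4)

# A naive 2D convolution ("valid" mode): the core of a convolutional
# layer. A small kernel slides over the image; each output element is
# the sum of the elementwise product over the window.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=np.float64).reshape(4, 4)
kernel = np.ones((2, 2))
feature_map = conv2d(image, kernel)  # shape (3, 3)
```

Accelerators target these operations because they dominate deep-learning runtime: both reduce to large numbers of multiply-accumulate steps with highly regular, predictable memory access, which is exactly the pattern a fixed-function ASIC can exploit.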