A Major Breakthrough! MIT Announces New Neural Network Chip That Cuts Power Consumption by 95%
Neural networks are very powerful, but they require a lot of energy. MIT engineers have now developed a new chip that cuts the power consumption of neural networks by 95%, which could allow them to run on battery-powered mobile devices.

Smartphones are becoming ever smarter, offering more and more artificial intelligence services such as digital assistants and real-time translators. However, the neural networks that crunch the data behind these services usually live in the cloud, so smartphone data must be shuttled back and forth to remote servers.

This is not ideal: it requires a lot of communication bandwidth, and it means potentially sensitive data is transmitted to and stored on servers outside the user’s control. The alternative, running the network on the device itself, is hampered by the fact that neural networks normally run on power-hungry graphics processors, making them impractical on battery-limited devices.

Engineers at MIT have now devised a chip that drastically reduces the need to move data back and forth between the chip’s memory and its processors, cutting power consumption by 95%. Neural networks consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives input from multiple neurons in the layer below it, and if the combined input crosses a particular threshold, the neuron sends output to multiple neurons in the layer above. The strength of each connection is governed by a weight set during training.
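To make that layer-by-layer picture concrete, here is a minimal Python sketch of the computation a single neuron performs. The threshold activation and all the numbers are illustrative assumptions, not MIT’s actual circuitry:

```python
import numpy as np

def neuron_output(inputs: np.ndarray, weights: np.ndarray, threshold: float) -> float:
    """Return the neuron's output: the weighted sum of its inputs if that
    sum crosses the threshold, otherwise 0 (the neuron does not fire)."""
    combined = np.dot(inputs, weights)  # one multiply-accumulate per connection
    return combined if combined > threshold else 0.0

# Example: three inputs from the layer below, with trained weights.
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.8, -0.2, 0.4])
print(neuron_output(x, w, threshold=0.3))  # 0.74 > 0.3, so the neuron fires
```

A real network stacks thousands of these neurons, so the dot product above is repeated enormously often; that repetition is exactly what the new chip targets.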

This means that, for each neuron, a conventional chip must fetch the input for a particular connection and the corresponding weight from memory, multiply them, store the result, and repeat the procedure for every input. That requires a great deal of data movement and therefore consumes a great deal of energy. The new MIT chip takes a different approach, using analog circuitry to compute over all the inputs in parallel inside memory itself. This drastically reduces the amount of data that must be pushed around and ultimately saves a lot of energy. The method requires connection weights to be binary rather than spanning a range of values, but earlier theoretical work had shown that this does not significantly hurt accuracy, and the researchers found the chip’s results came within 2 to 3 percent of a conventional non-binary neural network running on a standard computer.
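A rough sketch of the arithmetic involved shows why binary weights matter. The Python below is an assumed illustration, not the chip’s analog implementation: with weights restricted to +1 or -1, every multiplication in the dot product collapses into an addition or a subtraction, which is the kind of operation analog hardware can combine across all inputs at once:

```python
def mac_sequential(inputs, weights):
    """The conventional pattern: fetch each input/weight pair from memory,
    multiply, and accumulate -- one round trip per connection."""
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w
    return total

def mac_binary(inputs, binary_weights):
    """With binary (+1/-1) weights, no multiplications are needed:
    each input is simply added or subtracted, and all of these
    additions can be carried out in parallel."""
    return sum(x if w > 0 else -x for x, w in zip(inputs, binary_weights))

x = [0.5, 0.1, 0.9, 0.3]
w_bin = [1, -1, 1, 1]
assert mac_sequential(x, w_bin) == mac_binary(x, w_bin)  # same result, no multiplies
```

The energy saving comes less from skipping the multiplications themselves than from eliminating the per-connection memory round trips that `mac_sequential` implies.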

This is not the first time researchers have built chips that process data in memory to reduce the power consumption of neural networks, but it is the first time the approach has been used to run the powerful convolutional neural networks behind image-based AI applications. Dario Gil, vice president of artificial intelligence at IBM, said in a statement: “The results of the study show impressive performance when using memory arrays for convolution operations, which will surely open the door to more complex convolutional neural networks for image and video classification in the future.”
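To see why the technique carries over to convolutional networks, note that a 2-D convolution is the same multiply-accumulate dot product, just repeated at every position in the image. A minimal NumPy sketch, offered only as an illustration of that structure:

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2-D convolution: each output pixel is one dot
    product between the kernel and the image patch beneath it."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
k = np.array([[1.0, 1.0], [1.0, -1.0]])  # binary-valued (+1/-1) kernel
print(conv2d_valid(img, k))
```

A chip that evaluates dot products in parallel inside memory therefore accelerates the innermost operation of a convolutional network directly.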

However, the MIT team is not the only group working on this problem. The desire to embed artificial intelligence in devices such as smartphones, home appliances, and other IoT hardware is driving the big names of Silicon Valley toward low-power AI chips.

Apple has integrated its Neural Engine into the iPhone X to power its facial recognition technology, and Amazon is reportedly developing its own custom AI chips for the next generation of Echo digital assistants. The big chip makers are also increasingly inclined to support advanced features like machine learning, which forces them to make their devices ever more energy-efficient. Earlier this year, ARM introduced two new chips: the ARM Machine Learning processor, which handles AI tasks from translation to face recognition, and the ARM Object Detection processor.

Qualcomm’s latest mobile chip, the Snapdragon 845, ships with a graphics processor and puts artificial intelligence front and center. The company also released the Snapdragon 820E, aimed mainly at drones, robots, and industrial equipment. Looking further ahead, IBM and Intel are developing neuromorphic chips whose architecture is inspired by the human brain and its remarkable energy efficiency. In theory, this could allow IBM’s TrueNorth chip and Intel’s Loihi chip to run powerful machine learning on a fraction of the energy required by conventional chips, but at this stage both technologies remain highly experimental.

Making these chips as powerful as cloud computing services will be a huge challenge. But at the current rate of innovation, it won’t be long before artificial intelligence is literally at your fingertips.