MIT Researchers Develop Artificial Intelligence Chip For Mobile Devices

by Jelor Gallego
REDUCING POWER CONSUMPTION

Despite the advances in artificial intelligence (AI) technology, most mobile devices have yet to adopt these techniques in their own software. More glaringly, mobile apps that do use AI techniques (such as deep learning and neural networks) typically offload those tasks to remote servers.

It all comes down to one main reason: power consumption.

Brain-like AI systems require large multi-core graphics processors, which aren't practical for mobile devices. They also consume a great deal of energy, a major downside for battery-conscious phones. As a result, most mobile software developers choose to run these energy-hungry tasks in the cloud rather than on the phone itself.

At the International Solid-State Circuits Conference in San Francisco this week, MIT researchers announced a new chip, dubbed "Eyeriss," that could be used to implement neural networks, one of the fundamental technologies behind modern AI.

The chip is 10 times as efficient as a typical mobile GPU, an improvement that could allow future mobile devices to run powerful AI algorithms locally.

DEEP LEARNING ON A CHIP

Neural networks (now commonly discussed under the banner of deep learning) are a form of machine learning in which machines are fed large data sets, which they process with a variety of techniques. Finding connections between these data points requires enormous amounts of processing power and many cores, so the technique's application has mostly been limited to servers and powerful computers that can consistently supply the power such algorithms demand.
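To make that concrete, here is a minimal sketch in Python of the kind of computation a neural network performs. The layer sizes, random weights, and input are purely illustrative assumptions, not taken from any real model or from Eyeriss itself:

```python
import numpy as np

# Minimal two-layer neural network forward pass. All sizes and values
# are illustrative only; a real network would use trained weights.
rng = np.random.default_rng(0)

x = rng.standard_normal(784)          # e.g. a flattened 28x28 image
W1 = rng.standard_normal((128, 784))  # first-layer weights
W2 = rng.standard_normal((10, 128))   # second-layer weights

h = np.maximum(0, W1 @ x)             # hidden layer with ReLU activation
scores = W2 @ h                       # one score per output class

print("predicted class:", int(np.argmax(scores)))
```

Even this toy version involves over 100,000 multiplications, which hints at why full-scale networks strain a phone's battery.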

"Deep learning is useful for many applications, such as object recognition, speech, face detection,” says Vivienne Sze, an Assistant Professor in MIT's Department of Electrical Engineering and Computer Science whose group developed the new chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

To resolve the problem of power consumption, the researchers had to design a chip with low power consumption that was still flexible enough to implement the various types of networks AI algorithms require. They settled on a design with 168 cores, roughly as many as a mobile GPU has.

A big reason mobile GPUs consume so much power is that their cores all share a single memory bank, so data is constantly being shuttled between cores and distant memory. To increase efficiency, Eyeriss compresses data before sending it to individual cores. In addition, each core has its own local memory, reducing the need to access a common memory bank.
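The article doesn't specify Eyeriss's compression scheme, but a simple run-length encoding of zeros, sketched below, shows why compressing data before moving it saves bandwidth: after a ReLU activation, most values are often zero. The encoding format here is an assumption for illustration only:

```python
# Illustrative only: the article says Eyeriss compresses data before
# sending it to cores, but does not describe the scheme. Encoding
# sparse activations as (zero_run_length, nonzero_value) pairs shows
# how far fewer words need to cross the memory bus.

def compress(values):
    """Encode a list as (zero_run_length, nonzero_value) pairs."""
    out, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    if run:
        out.append((run, None))  # trailing zeros carry no value
    return out

def decompress(pairs):
    out = []
    for run, v in pairs:
        out.extend([0] * run)
        if v is not None:
            out.append(v)
    return out

acts = [0, 0, 5, 0, 0, 0, 2, 0, 7, 0, 0, 0]
packed = compress(acts)
assert decompress(packed) == acts
print(f"{len(acts)} words -> {len(packed)} pairs")  # 12 words -> 4 pairs
```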

The cores can also pass data directly to one another instead of routing it through main memory. Finally, Eyeriss has dedicated circuitry that allocates tasks across the cores, automatically distributing data so that each core does as much work as possible.
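As a rough illustration of the core-to-core idea, the toy simulation below streams a running partial sum from one "core" to the next instead of round-tripping through shared memory. The `Core` class and the streaming dot product are hypothetical constructions for this sketch, not a model of Eyeriss's actual dataflow or allocation circuitry:

```python
# Illustrative only: each core holds one weight in its local memory,
# performs a multiply-accumulate, and hands the partial result directly
# to the next core, so shared memory is never touched mid-computation.

class Core:
    def __init__(self, weight):
        self.weight = weight  # value kept in this core's local memory

    def process(self, x, partial_sum):
        # multiply-accumulate locally, then pass the result onward
        return partial_sum + self.weight * x

def dot_product(weights, inputs):
    """Compute a dot product by streaming a partial sum core to core."""
    cores = [Core(w) for w in weights]
    acc = 0.0
    for core, x in zip(cores, inputs):
        acc = core.process(x, acc)  # direct core-to-core handoff
    return acc

print(dot_product([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```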

At the conference, the researchers demonstrated Eyeriss by implementing a neural network that performs an image-recognition task, the first time a state-of-the-art neural network has been demonstrated on a custom chip.

If the chip gains traction, it could go a long way toward putting AI assistants directly on your phone.
