Accelerating Machine Learning

At the 2017 Hot Chips symposium today, Microsoft unveiled new hardware capable of accelerating artificial intelligence (AI) programs. Called Brainwave, the system is designed to improve how machine learning models run by mapping them onto programmable silicon.

"We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency," Microsoft explained in a press release. "Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users."


The model Microsoft demonstrated is larger than those typically run on dedicated AI hardware: a gated recurrent unit (GRU) model running at 39.5 teraflops on Intel's new Stratix 10 field-programmable gate array (FPGA) chip. Brainwave also avoids so-called batching operations, handling requests as they arrive and delivering real-time results for machine learning systems.
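To make the no-batching idea concrete, here is a minimal, purely illustrative Python sketch (this is not Microsoft's software, and the single dense layer stands in for a real model) contrasting batched inference, where a request may wait for a batch to fill, with processing each request the moment it arrives:

```python
# Illustrative sketch only: contrasts batched inference with
# request-at-a-time ("real-time") processing using a stand-in model.
import time
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512))  # stand-in for a model layer

def infer(inputs: np.ndarray) -> np.ndarray:
    """Run a single hypothetical dense layer over a batch of inputs."""
    return np.tanh(inputs @ weights)

# Batched: requests queue up until the batch is full, so the first
# request's latency includes the wait for the rest of the batch.
batch = rng.standard_normal((32, 512))
start = time.perf_counter()
infer(batch)
print("batched (32 requests grouped):", time.perf_counter() - start)

# Real-time style: each request is processed as soon as it arrives,
# so per-request latency is just the model's compute time.
single = rng.standard_normal((1, 512))
start = time.perf_counter()
infer(single)
print("single request, no batching:  ", time.perf_counter() - start)
```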

“We call it real-time AI because the idea here is that you send in a request, you want the answer back,” Microsoft Research engineer Doug Burger said at the symposium, VentureBeat reports. “If it’s a video stream, if it’s a conversation, if it’s looking for intruders, anomaly detection, all the things where you care about interaction and quick results, you want those in real time.”

Rapid AI Support

Brainwave lets cloud-based deep learning models run seamlessly across the massive FPGA infrastructure Microsoft has installed in its data centers over the past few years. According to Burger, this means AI features in applications get faster support from Microsoft services. By running on a pool of FPGAs, machine learning models that are too big for a single FPGA chip can be spread across multiple hardware boards that work on them simultaneously.
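The article doesn't detail how Brainwave partitions a model across boards, but the general idea of splitting one model's work among several devices can be sketched in plain Python. In this hypothetical example, a single layer's weight matrix is divided between two simulated devices and the partial results are recombined:

```python
# Conceptual sketch only: split one layer's weights across two simulated
# "devices" so a model too large for a single chip can still run, with
# each device computing its slice and the host concatenating the results.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)                      # one incoming request
full_weights = rng.standard_normal((1024, 2048))   # too big for one "device"

# Partition the output columns across two simulated boards.
w_board_a, w_board_b = np.hsplit(full_weights, 2)

# Each board computes its half of the output (in parallel on real
# hardware; sequentially in this sketch).
out_a = x @ w_board_a
out_b = x @ w_board_b
combined = np.concatenate([out_a, out_b])

# Same result as running the whole layer on one sufficiently large device.
assert np.allclose(combined, x @ full_weights)
```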


In addition to being faster and more flexible than CPUs or GPUs, Brainwave incorporates software designed to support a host of popular deep learning frameworks. As Burger said, that makes it possible for Microsoft's programmable hardware to perform on par with chips dedicated to machine learning operations, such as Google's Tensor Processing Unit. In fact, he believes performance could grow from 39.5 teraflops to 90 teraflops in the future by further improving how operations run on the Stratix 10 chip.
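Microsoft hasn't said which tools sit between those frameworks and the hardware, but a common pattern for multi-framework support is exporting a trained model into a hardware-neutral graph format that an accelerator toolchain can then compile. As a rough, hypothetical illustration (PyTorch and ONNX are not confirmed parts of Brainwave's stack), a small GRU model of the kind Microsoft demonstrated can be exported like this:

```python
# General illustration only, not Brainwave's actual toolchain: export a
# trained model from a popular framework into a framework-neutral format.
import torch

# A small gated recurrent unit (GRU) model, the same class of model
# Microsoft demonstrated on the Stratix 10 FPGA.
model = torch.nn.GRU(input_size=128, hidden_size=256, num_layers=1)
model.eval()

# Dummy input: a sequence of 10 steps, batch size 1 (no batching),
# each step a 128-dimensional vector.
dummy_input = torch.randn(10, 1, 128)

# Export to ONNX, a hardware-neutral graph format that an accelerator
# toolchain could, in principle, consume for compilation.
torch.onnx.export(model, (dummy_input,), "gru_model.onnx")
```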

As machine learning models and algorithms are used by a growing range of applications, hooking those applications up to Brainwave could cut the time users wait for them to respond. Microsoft hasn't yet made Brainwave available to customers, and no timeline has been set, but the company is working to make it accessible to third parties through its Azure cloud platform.
