An All-in-One Approach to Computing

Regular desktop computers, as well as laptops and smartphones, have separate units dedicated to processing and to memory. They're called von Neumann systems, named after the mathematician and physicist John von Neumann, who, among other things, was a pioneer of modern digital computing. They work by moving data back and forth between the memory and the processing unit, a process that can, and often does, end up being slow and inefficient.

At least, not as fast or efficient as what we could achieve using "computational memory." Also known as "in-memory computing," this approach stores and processes information using the physical properties of a computer system's memory devices themselves, rather than shuttling data to a separate processor.
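To make that contrast concrete, here is a minimal, hypothetical Python sketch (not IBM's code, just an illustration) that counts the per-value transfers a von Neumann-style loop performs, with the in-memory alternative represented only as a single in-place readout:

```python
# Toy contrast (hypothetical, for illustration only): data movement in a
# von Neumann-style computation versus an idealized in-memory one.

memory = list(range(1_000))  # values sitting in "memory"

# von Neumann style: each operand makes a round trip from memory to the
# "processor" (the local variables below) before it can be used.
transfers = 0
total = 0
for address in range(len(memory)):
    value = memory[address]  # one memory-to-processor transfer
    transfers += 1
    total += value           # the compute happens away from the data

print(total, transfers)  # 499500 computed, 1000 transfers:
                         # data movement scales with the data size

# In-memory style (idealized): the physics of the memory array itself
# produces the aggregate, so the host only reads out one result. The
# call below merely stands in for that single readout; in a PCM array
# the accumulation is done by the devices, not by a CPU loop.
print(sum(memory))
```

The point of the sketch is only the bookkeeping: in the first loop, traffic between memory and processor grows with the data, while the in-memory model collapses it to one readout.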

A team from IBM Research claims to have made a breakthrough in computational memory by successfully using one million phase change memory (PCM) devices to run an unsupervised machine learning algorithm. Details of the research have been published in the journal Nature Communications.

The IBM team's PCM device was made from a germanium antimony telluride alloy stacked and sandwiched between two electrodes. "[T]his prototype technology is expected to yield 200x improvements in both speed and energy efficiency, making it highly suitable for enabling ultra-dense, low-power, and massively-parallel computing systems for applications in AI," according to a post on IBM Research's blog.

Fit for AI

The new PCM devices can perform computation in place through crystallization dynamics. Essentially, an electrical current is applied to the PCM's material, progressively changing its state from a disordered (amorphous) atomic arrangement to an ordered, crystalline configuration. The IBM team demonstrated the technology on two time-based example problems and compared the results to traditional machine-learning methods.
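As a rough, hypothetical sketch of how that kind of accumulation can find structure in time-based data (a toy model in the spirit of the demonstration, not IBM's implementation, with all parameters invented), imagine each memory cell gaining a small increment of "crystallization" whenever its input stream fires during a collectively busy moment. Cells tracking correlated streams then accumulate fastest:

```python
import random

random.seed(0)

N_PROCESSES = 100    # binary event streams (the IBM demo used a million)
N_CORRELATED = 20    # streams secretly driven by one shared source
STEPS = 2000

# Each entry stands in for one PCM cell's conductance, which rises as
# the cell is partially crystallized by programming pulses.
conductance = [0.0] * N_PROCESSES

for _ in range(STEPS):
    shared = random.random() < 0.1  # hidden common driver
    # Both groups fire at the same average rate (0.1), so rate alone
    # cannot separate them; only the correlation can.
    events = [
        shared if i < N_CORRELATED else (random.random() < 0.1)
        for i in range(N_PROCESSES)
    ]
    # Collective activity at this instant; in the hardware analogy, a
    # pulse scaled by this quantity partially crystallizes the cells
    # whose streams just fired, accumulating the result in place.
    activity = sum(events) / N_PROCESSES
    for i, fired in enumerate(events):
        if fired:
            conductance[i] += activity

# Correlated streams co-fire at busy moments, so their cells
# accumulate conductance fastest; ranking recovers the hidden group.
ranked = sorted(range(N_PROCESSES), key=lambda i: -conductance[i])
print(sorted(ranked[:N_CORRELATED]))  # expected: 0..19
```

Here an ordinary float stands in for a PCM cell's conductance; on real hardware that running total lives in the device physics itself, which is what lets both the computation and its result stay in memory.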

The ability to perform computations faster will, obviously, benefit overall computer performance. For IBM, that means better computing power for AI applications. “This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures,” IBM Fellow and study co-author Evangelos Eleftheriou said in a statement quoted in the blog.

“As the [Complementary Metal Oxide Semiconductor or CMOS] scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today’s computers. Given the simplicity, high speed and low energy of our in-memory computing approach, it’s remarkable that our results are so similar to our benchmark classical approach run on a von Neumann computer.”

Computational memory presents an opportunity for more "real-time" processing of information, a much-needed improvement in today's world, where more companies are putting a premium on data analytics. At the same time, as industry giants like Amazon and Google place AI at the center of their business, faster computing for AI applications is a welcome development.

For IBM, in-memory computing is key. “Memory has so far been viewed as a place where we merely store information. But in this work, we conclusively show how we can exploit the physics of these memory devices to also perform a rather high-level computational primitive," lead author Abu Sebastian said. "The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes.”

