A Computing Road Less Traveled

A team of researchers in Belgium believes it may have found a way to push back the anticipated end of Moore's Law, and without using a supercomputer. Using a machine-learning technique called reservoir computing, combined with a training algorithm called backpropagation, the team developed a neuro-inspired analog computer that can train itself and improve at whatever task it is performing.

Reservoir computing is a neural-network approach that mimics aspects of the brain's information processing. Backpropagation, in turn, lets the system perform thousands of iterative, error-reducing calculations, gradually improving its solution to a problem.
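To make those two ideas concrete, here is a minimal software sketch in Python, not the authors' photonic hardware: a fixed, randomly connected reservoir is driven by an input signal, and a linear readout is trained by iterative gradient descent on the error (the role backpropagation plays here). All sizes, parameters, and the toy task are illustrative assumptions; in the team's "full" setup, gradients also tune the hardware's internal parameters, which this sketch omits for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: a fixed, randomly connected recurrent network (illustrative sizes).
n_res, n_steps = 100, 500
w_in = rng.uniform(-0.5, 0.5, n_res)               # input weights (fixed)
w_res = rng.normal(0, 1, (n_res, n_res))           # recurrent weights (fixed)
w_res *= 0.9 / max(abs(np.linalg.eigvals(w_res)))  # keep dynamics stable

u = rng.uniform(0.0, 0.5, n_steps)                 # toy input signal
target = np.roll(u, 3)                             # toy task: recall input 3 steps back

# Drive the reservoir and collect its states.
states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(w_res @ x + w_in * u[t])
    states[t] = x

# Train only the readout, via repeated error-reducing gradient steps.
w_out, lr = np.zeros(n_res), 0.05
for epoch in range(200):
    err = states @ w_out - target                  # prediction error
    w_out -= lr * states.T @ err / n_steps         # gradient descent step

print("final MSE:", np.mean((states @ w_out - target) ** 2))
```

The design point the sketch illustrates: the reservoir itself is never retrained, which is what makes the approach cheap to run on fixed analog hardware; learning is confined to the iterative error-reduction loop.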

"Our work shows that the backpropagation algorithm can, under certain conditions, be implemented using the same hardware used for the analog computing, which could enhance the performance of these hardware systems," Piotr Antonik explains.

Antonik, together with Michiel Hermans, Marc Haelterman, and Serge Massar at the Université Libre de Bruxelles in Brussels, Belgium, published their study on this self-learning hardware in the journal Physical Review Letters.

Authentic Self-Learning

Not only is the team's self-learning hardware better at solving difficult computing tasks than other experimental reservoir computers, it can also handle tasks previously considered beyond the reach of traditional reservoir computing.

"Full" refers to the new hardware system, while "Reservoir" is the traditional one. Credits: Hermans et al. ©2016 American Physical Society

By physically implementing the reservoir computing and backpropagation algorithms on a photonic setup (a delay-coupled electro-optical system), the hardware was able to perform three tasks: (1) TIMIT, a speech recognition task; (2) NARMA10, a benchmark often used to test reservoir computers; and (3) VARDEL5, a complex nonlinear task considered beyond the reach of traditional reservoir computing.
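For context on task (2): NARMA10 asks the system to reproduce the output of a tenth-order nonlinear dynamical system driven by random input, which requires both memory and nonlinear processing. Below is a sketch of the commonly used formulation in Python; the paper's exact setup is not reproduced here and may differ in details.

```python
import numpy as np

def narma10(n_steps, seed=0):
    """Generate input/target pairs for the NARMA10 benchmark
    (standard formulation; parameters are the widely used ones)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_steps)   # random input, uniform on [0, 0.5]
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9 : t + 1].sum()  # 10-step memory term
                    + 1.5 * u[t - 9] * u[t]                 # delayed input product
                    + 0.1)
    return u, y

u, y = narma10(1000)  # a reservoir computer is then trained to map u onto y
```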

The researchers aim to expand what this new reservoir computer can handle, especially since it is a technology that can improve itself. "We are, for instance, writing up a manuscript in which we show that it can be used to generate periodic patterns and emulate chaotic systems," Antonik says.

Moving forward, the team aims to increase the speed of their experiments. "We are currently testing photonic systems in which the internal variables are all processed simultaneously—we call this a parallel architecture. This can provide several orders of magnitude of speed-up. Further in the future, we may revisit physical error backpropagation, but in these faster, parallel, systems."
