New hardware architecture provides an edge in AI computation

As applications of artificial intelligence spread, more computation must happen on local devices rather than in geographically distant data centers, and it must run more efficiently and with lower energy consumption, in order to overcome frustrating delays in response. A group of University of Tokyo engineers has, for the first time, tested hafnium-oxide ferroelectric materials for physical reservoir computing, a type of neural network that maps data onto the dynamics of a physical system and may deliver precisely such an advance, using a speech recognition application.
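To illustrate the general idea of reservoir computing mentioned above, here is a minimal software sketch of an echo state network. In physical reservoir computing, the simulated random recurrent "reservoir" below would be replaced by the dynamics of a material (in the study, a hafnium-oxide ferroelectric device), and only the linear readout is trained. All names, parameters, and the toy task are illustrative assumptions, not details from the study.

```python
# Conceptual sketch of reservoir computing (an echo state network simulated in
# software). Only the linear readout is trained; the reservoir weights are fixed.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 100

# Fixed random weights: the reservoir itself is never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    state = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        state = np.tanh(W_in @ np.atleast_1d(u) + W @ state)
        states.append(state.copy())
    return np.array(states)

# Toy task (illustrative only): predict the next sample of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)
X = run_reservoir(signal[:-1])
y = signal[1:]

# Train only the linear readout, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
print("readout mean squared error:", np.mean((pred - y) ** 2))
```

The appeal of this architecture for edge hardware is that training reduces to fitting a single linear layer, while the heavy nonlinear transformation is delegated to a physical medium rather than computed digitally.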
