
IBM Can Run an Experimental AI in Memory, Not on Processors


In the regular world of computing, whether you're running exotic deep-learning algorithms or just using Excel, calculations are usually performed on a processor while data is passed back and forth to and from memory.

That works perfectly well, but some researchers have argued that performing calculations in the memory itself would save the time and energy usually spent shuttling data around.

That’s exactly the concept that a team from IBM Research in Zurich has now applied to some AI algorithms. The team used a grid of one million memory devices based on a phase-change material called germanium antimony telluride. Each device can be put into a range of different states, not just regular 0s and 1s, and those states can be used to perform calculations rather than just store data.

By using that quirk and enough chunks of memory, the IBM researchers have shown that they can perform machine-learning tasks like finding correlations in unknown data streams.
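To make the idea concrete, here is a minimal software sketch of the kind of task described: picking out a hidden subset of correlated streams from many binary data streams by letting each stream's "memory cell" accumulate a value whenever it fires during a burst of joint activity. This is only an illustrative Python/NumPy analogy; the stream counts, probabilities, and threshold are invented for the example, and it does not model IBM's actual phase-change devices or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
n_streams, n_steps = 1000, 5000
correlated = rng.choice(n_streams, size=100, replace=False)  # hidden correlated subset

# Binary streams: the correlated subset partly follows a shared hidden driver,
# the rest fire at a low background rate.
driver = rng.random(n_steps) < 0.1
streams = rng.random((n_streams, n_steps)) < 0.05
streams[correlated] |= driver & (rng.random((100, n_steps)) < 0.75)

# Accumulation step: when many streams fire together, nudge the "cell state"
# of each active stream upward, loosely mimicking how repeated pulses shift a
# phase-change cell's conductance.
cell_state = np.zeros(n_streams)
for t in range(n_steps):
    active = streams[:, t]
    if active.sum() > n_streams * 0.05:  # momentary burst of joint activity
        cell_state[active] += 1.0        # accumulate in the active cells

# Cells belonging to correlated streams end up with noticeably higher state.
detected = np.argsort(cell_state)[-100:]
overlap = len(set(detected) & set(correlated))
print(f"recovered {overlap}/100 correlated streams")
```

In this software version the accumulation is just a loop on a CPU; the point of the hardware approach described in the article is that the equivalent accumulation happens inside the memory devices themselves, so the data never has to be moved to a processor.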

The team reckons it could, if scaled up, create computing systems that perform some AI tasks 200 times faster than regular devices.


Article originally posted at www.technologyreview.com
