A new analytic technique sheds light on the inner workings of neural networks trained to perform natural-language-processing tasks, and even suggests possibilities for improving the performance of machine-translation systems.
Memory is a precious resource, so humans have evolved to remember important skills and forget irrelevant ones. Now machines are being designed to do the same.
A group of astronomers from the universities of Groningen, Naples and Bonn has developed a method that finds gravitational lenses in enormous piles of observations. The method is based on convolutional neural networks, the same artificial-intelligence technique that Google, Facebook and Tesla have been using in recent years. The researchers published their method and 56 new gravitational lens candidates in the November issue of Monthly Notices of the Royal Astronomical Society.
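To give a sense of what such a lens finder looks like in practice: at its core it is a convolutional network doing binary image classification on small survey cutouts. The sketch below is a minimal illustration under my own assumptions (the `LensCNN` name, layer sizes, and 101x101 cutout size are invented for the example), not the researchers' published model.

```python
# Illustrative sketch of a CNN lens-candidate classifier.
# Architecture and cutout size are assumptions, not the published pipeline.
import torch
import torch.nn as nn

class LensCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Three conv/pool stages extract image features from a cutout.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A small head maps features to a single lens-vs-non-lens logit.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One grayscale 101x101 survey cutout, batch of 1.
cutout = torch.randn(1, 1, 101, 101)
logit = LensCNN()(cutout)
prob_lens = torch.sigmoid(logit)  # probability the cutout contains a lens
```

In a real search, a model like this is trained on simulated lenses plus real non-lens galaxies, then run over millions of cutouts; the highest-scoring candidates are inspected by eye.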
Next up, predicting human speech with a brain-computer interface.
A new idea is helping to explain the puzzling success of today's artificial-intelligence algorithms, and might also explain how human brains learn.
As neural nets push into science, researchers probe back
If machine learning is a subfield of artificial intelligence, then deep learning could be called a subfield of machine learning.
With new neural network architectures popping up every now and then, it’s hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first. So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks; some are completely …
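To make one of those abbreviations concrete: a BiLSTM is just an LSTM run over the sequence in both directions, with the forward and backward hidden states concatenated at each step. A minimal sketch (the framework choice and all dimensions here are my own illustrative picks, not taken from the cheat sheet):

```python
# Minimal BiLSTM sketch; vocabulary size and dimensions are arbitrary.
import torch
import torch.nn as nn

embed = nn.Embedding(num_embeddings=10_000, embedding_dim=64)
bilstm = nn.LSTM(input_size=64, hidden_size=128,
                 batch_first=True, bidirectional=True)

tokens = torch.randint(0, 10_000, (2, 15))  # batch of 2 sequences, length 15
out, _ = bilstm(embed(tokens))
print(out.shape)  # (2, 15, 256): forward and backward states concatenated
```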
On the horizon: wireless, batteryless implants for monitoring organs and improving prosthetics.