A child learns how to ride by falling off, by balancing herself on two wheels, by going over potholes.
In June, 2015, Thrun’s team began to test what the machine had learned from the master set of images by presenting it with a “validation set”: some fourteen thousand images that had been diagnosed by dermatologists.
The animal’s neural network had not learned arithmetic; it had learned to detect changes in human body language.
His prognosis for the future of automated medicine is based on a simple principle: “Take any old classification problem where you have a lot of data, and it’s going to be solved by deep learning. There’s going to be thousands of applications of deep learning.” He wants to use learning algorithms to read X-rays, CT scans, and MRIs of every variety, and that’s just what he considers the near-term prospects.
Although computer scientists are working on it, Hinton acknowledged that the challenge of opening the black box, of trying to find out exactly what these powerful learning systems know and how they know it, was “far from trivial; don’t believe anyone who says that it is.” Still, it was a problem he thought we could live with.
Whereas each new dermatology resident needs to start from scratch, Thrun’s algorithm keeps ingesting, growing, and learning.
As machines learn more and more, will humans learn less and less? It’s the perennial anxiety of the parent whose child has a spell-check function on her phone: what if the child stops learning how to spell? The phenomenon has been called “automation bias.” When cars gain automated driver assistance, drivers may become less alert, and something similar may happen in medicine.