In the past month, Google Brain and the non-profit organization OpenAI each published a paper, not yet peer-reviewed, on the subject: Google's on applying neuroevolution principles to image recognition, and OpenAI's on using "Worker" algorithms to teach a master algorithm the best way to accomplish a task.
For its research, the Google team generated 1,000 image-recognition algorithms, each trained using modern deep neural networks to recognize a specific set of images. Those algorithms then competed, and the better performers were copied with slight random mutations. Each clone was trained on the same data as its parent and placed back into the batch of 1,000 algorithms to start the process over again.
Google researchers found that neuroevolution could cultivate an algorithm with 94.6% accuracy, and they recorded similar results in each of four repeats of the experiment.
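The selection loop behind that result can be sketched in a few lines. The tournament rule below (pit two random candidates against each other, replace the loser with a mutated clone of the winner) is a common neuroevolution scheme, but the `mutate` function and the genome representation here are illustrative assumptions, not the paper's exact procedure:

```python
import random

def mutate(genome):
    """Randomly perturb one trait of the genome -- a hypothetical
    stand-in for the paper's architecture mutations."""
    g = dict(genome)
    key = random.choice(list(g))
    g[key] = g[key] * random.uniform(0.9, 1.1)
    return g

def evolve(population, fitness, generations=100):
    """Tournament-style neuroevolution: pit two random candidates
    against each other, replace the loser with a mutated clone of
    the winner, and repeat."""
    for _ in range(generations):
        a, b = random.sample(range(len(population)), 2)
        if fitness(population[a]) >= fitness(population[b]):
            winner, loser = a, b
        else:
            winner, loser = b, a
        population[loser] = mutate(population[winner])
    return max(population, key=fitness)
```

With a toy fitness function (say, how close a single parameter sits to an optimum), the loop steadily concentrates the population around fitter genomes, since winners are never removed and losers are overwritten by mutated copies of winners.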
Rather than training thousands of algorithms to get better at one thing, the OpenAI team wanted to use "Worker" algorithms to train a master algorithm to accomplish an unknown task, like playing a video game or walking in a 3D simulator.
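A minimal sketch of that worker scheme, in the style of evolution strategies: each worker scores a randomly perturbed copy of the master's parameters, and the master shifts toward the perturbations that earned high rewards. The function name, hyperparameters, and normalization step below are illustrative assumptions, not OpenAI's actual code:

```python
import numpy as np

def es_step(theta, reward_fn, n_workers=50, sigma=0.1, lr=0.05):
    """One evolution-strategies update: every worker evaluates a
    randomly perturbed copy of the master parameters `theta`, and
    the master moves toward the perturbations that scored well."""
    noise = np.random.randn(n_workers, theta.size)  # one perturbation per worker
    rewards = np.array([reward_fn(theta + sigma * n) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalize scores
    grad_estimate = noise.T @ rewards / (n_workers * sigma)
    return theta + lr * grad_estimate
```

Because each worker only needs to report a single reward number back to the master, this style of update parallelizes cheaply across many machines, which is part of its appeal.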
Back in 2002, at the start of his career, Stanley wrote an algorithm called NEAT (NeuroEvolution of Augmenting Topologies), which allowed neural networks to evolve into larger and more complex versions over time.
Google’s hybrid approach combines classic neuroevolution with techniques such as backpropagation that have made deep learning so powerful today: teach an algorithm how to act in the world, let it evolve, and its child will inherit most of the accrued knowledge.
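That inheritance idea can be sketched as follows, under stated assumptions: `sgd_train` is plain gradient descent standing in for backpropagation, and the mutation is a small random nudge to the parent's trained weights rather than whatever mutation operators the paper actually uses:

```python
import numpy as np

def sgd_train(weights, grad_fn, steps=50, lr=0.1):
    """Plain gradient descent, standing in here for backpropagation."""
    for _ in range(steps):
        weights = weights - lr * grad_fn(weights)
    return weights

def spawn_child(parent_weights, grad_fn, mutation_scale=0.05):
    """The hybrid step: the child starts from a slightly mutated copy
    of the parent's trained weights, so the accrued knowledge carries
    over, then continues training by gradient descent instead of
    starting from scratch."""
    child = parent_weights + mutation_scale * np.random.randn(*parent_weights.shape)
    return sgd_train(child, grad_fn)
```

The design point is that the child's training resumes near the parent's solution, so each generation spends its compute refining rather than relearning.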