Source: sciencemag.org
Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.
“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”
Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance, spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
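As a rough illustration (not from the paper) of what "altering the strength of connections" means in practice, here is a minimal sketch of a single artificial neuron adjusting its weights on a toy dataset; the data, learning rate, and other values are hypothetical.

```python
import numpy as np

# Toy data (hypothetical): two input features per example, one binary label.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # the "connection strengths"
bias = 0.0
learning_rate = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training = repeatedly nudging the weights to shrink the prediction error.
for _ in range(2000):
    predictions = sigmoid(X @ weights + bias)
    error = predictions - y
    weights -= learning_rate * (X.T @ error) / len(y)
    bias -= learning_rate * error.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # close to [0, 1, 1, 1]
```

A full network stacks many such neurons, but the learning principle is the same: small, repeated adjustments to the connection weights.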
In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.
So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck.
In each cycle, the program compares the algorithms’ performance against hand-designed algorithms. Copies of the top performers are “mutated” by randomly replacing, editing, or deleting some of their code to create slight variations of the best algorithms. These “children” get added to the population, while older programs get culled. The cycle repeats.
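A minimal sketch of this kind of evolutionary loop is below. It is purely illustrative and not the AutoML-Zero code: candidate “algorithms” are short random sequences of basic math operations over a few registers, fitness is accuracy on a made-up toy task standing in for the cat-versus-truck problem, and selection simply copies and mutates the best of a small random sample while culling the oldest program (rather than benchmarking against hand-designed baselines). All names and parameters here are hypothetical.

```python
import random
from collections import deque

# Each instruction applies a basic math operation to a small register file.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "max": max,
}
NUM_REGS = 4  # s0 and s1 hold the input features; s3 is read as the output

def random_instruction(rng):
    # (operation, destination register, source register, source register)
    return (rng.choice(list(OPS)), rng.randrange(NUM_REGS),
            rng.randrange(NUM_REGS), rng.randrange(NUM_REGS))

def random_program(rng, length=5):
    return [random_instruction(rng) for _ in range(length)]

def run(program, x):
    regs = [x[0], x[1], 0.0, 0.0]
    for op, dst, a, b in program:
        regs[dst] = OPS[op](regs[a], regs[b])
    return regs[3] > 0          # the program's prediction

def fitness(program, data):
    return sum(run(program, x) == label for x, label in data) / len(data)

def mutate(program, rng):
    child = list(program)
    r = rng.random()
    if r < 0.4:                                    # replace an instruction
        child[rng.randrange(len(child))] = random_instruction(rng)
    elif r < 0.7:                                  # insert an instruction
        child.insert(rng.randrange(len(child) + 1), random_instruction(rng))
    elif len(child) > 1:                           # delete an instruction
        del child[rng.randrange(len(child))]
    return child

rng = random.Random(0)
# Toy stand-in for the image task: predict whether x0 > x1.
data = []
for _ in range(200):
    x = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    data.append((x, x[0] > x[1]))

population = deque(random_program(rng) for _ in range(100))
for cycle in range(2000):
    contenders = rng.sample(list(population), 10)   # compare a small sample
    parent = max(contenders, key=lambda p: fitness(p, data))
    population.append(mutate(parent, rng))          # mutated copy ("child") joins
    population.popleft()                            # the oldest program is culled

best = max(population, key=lambda p: fitness(p, data))
print(f"best accuracy on the toy task: {fitness(best, data):.2f}")
```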
The system creates thousands of these populations at once, which lets it churn through tens of thousands of algorithms a second until it finds a good solution. The program also uses tricks to speed up the search, like occasionally exchanging algorithms between populations to prevent any evolutionary dead ends, and automatically weeding out duplicate algorithms.
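Those speed-up tricks—many populations evolved in parallel, occasional exchange of algorithms between them, and skipping duplicates—can be layered on top of the same loop. The sketch below is again only an illustrative stand-in, not the actual system: the “programs” here are just lists of numbers with a trivial fitness function, so that the bookkeeping (islands, migration, a duplicate cache) stays visible. A real system would more likely fingerprint candidates by their behavior on a few probe inputs rather than by their raw contents; every name and number here is invented.

```python
import random
from collections import deque

rng = random.Random(0)

# Structural stand-in: a "program" is a list of numbers and fitness is how
# close their sum is to a target. The point is the surrounding machinery.
TARGET = 42.0

def fitness(program):
    return -abs(sum(program) - TARGET)

def mutate(program):
    child = list(program)
    child[rng.randrange(len(child))] += rng.gauss(0, 1)
    return child

def fingerprint(program):
    # Cheap duplicate check: hash a rounded copy of the program. A real
    # system might instead hash its outputs on a few probe inputs.
    return tuple(round(v, 3) for v in program)

num_islands, island_size = 4, 50
islands = [deque([[rng.uniform(-10, 10) for _ in range(5)]
                  for _ in range(island_size)])
           for _ in range(num_islands)]
seen = set()

for cycle in range(5000):
    for island in islands:
        parent = max(rng.sample(list(island), 10), key=fitness)
        child = mutate(parent)
        if fingerprint(child) in seen:      # weed out duplicate algorithms
            continue
        seen.add(fingerprint(child))
        island.append(child)
        island.popleft()                    # cull the oldest program
    if cycle % 500 == 0:                    # occasional migration between islands
        src, dst = rng.sample(range(num_islands), 2)
        migrant = max(islands[src], key=fitness)
        islands[dst].append(list(migrant))
        islands[dst].popleft()

best = max((p for island in islands for p in island), key=fitness)
print(f"best fitness: {fitness(best):.3f}")
```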
In a preprint posted last month on arXiv, the researchers show the approach can stumble on a number of classic machine learning techniques, including neural networks. The solutions are simple compared with today’s most advanced algorithms, admits Le, but he says the work is a proof of principle and he’s optimistic it can be scaled up to create much more complex AIs.
Still, Joaquin Vanschoren, a computer scientist at the Eindhoven University of Technology, thinks it will be a while before the approach can compete with the state of the art. One thing that could improve the program, he says, is not asking it to start from scratch, but instead seeding it with some of the tricks and techniques humans have discovered. “We can prime the pump with learned machine learning concepts.”
That’s something Le plans to work on. Focusing on smaller problems rather than entire algorithms also holds promise, he adds. His group published another paper on arXiv on 6 April that used a similar approach to redesign a popular ready-made component used in many neural networks.
But Le also believes boosting the number of mathematical operations in the library and dedicating even more computing resources to the program could let it discover entirely new AI capabilities. “That’s a direction we’re really passionate about,” he says. “To discover something really fundamental that will take a long time for humans to figure out.”