Machine learning (ML) is a method by which algorithms adapt their behaviour based on the data they are fed, rather than following explicitly programmed instructions. But building and “training” these algorithms takes time, and can often ingrain human biases.
To overcome these limitations, and to enable further innovation in machine learning, researchers have explored the field of AutoML, in which the machine learning process itself is progressively automated, spending machine compute time rather than human research time.
So far, although some steps have been automated, the benchmark of virtually zero human input has yet to be attained. However, a team of scientists from Google has reported “preliminary success” in discovering machine learning algorithms from scratch, indicating a “promising new direction for the field.”
In a paper published on the preprint server arXiv, Quoc Le, a computer scientist at Google, and his colleagues employed concepts from Darwinian evolution, such as natural selection, to let ML algorithms improve generation after generation. Combining basic mathematical operations, their program, called AutoML-Zero, generated a population of 100 candidate algorithms, which were then tested on simple tasks such as image recognition.
Their performance was compared against hand-designed algorithms; the best candidates were kept and small random “mutations” were introduced into their code, whilst the weaker candidates were removed. As the cycle continued, a high-performing set of algorithms emerged, some of them comparable to classic machine learning techniques such as neural networks (a kind of computer program that loosely mimics how our brain cells work together to make decisions).
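The search loop the article describes is, at heart, a simple evolutionary cycle: generate candidates, score them, keep the best, mutate, repeat. The sketch below is a minimal illustration of that idea in Python, not Google’s actual AutoML-Zero code; the operation set, fitness function, and population sizes are placeholder assumptions chosen to keep the example self-contained.

```python
import random

# Toy illustration of the evolutionary cycle described above (not
# Google's actual AutoML-Zero code). Each "algorithm" is a list of
# basic operation names; the fitness function is a stand-in.

OPS = ["add", "sub", "mul", "div", "sin", "cos", "mean", "dot"]

def random_algorithm(length=8):
    """Assemble a candidate from basic mathematical operations."""
    return [random.choice(OPS) for _ in range(length)]

def fitness(algorithm):
    """Placeholder score; in practice this would be accuracy on a
    task such as image recognition."""
    return sum(op in ("mul", "dot", "mean") for op in algorithm)

def mutate(algorithm):
    """Introduce a small random 'mutation': swap one operation."""
    child = list(algorithm)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

# Start with a population of 100 candidates, as in the article.
population = [random_algorithm() for _ in range(100)]

for generation in range(50):
    # Keep the best performers and remove the weaker candidates...
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    # ...then refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]

print("Best candidate:", max(population, key=fitness))
```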
This proves the team’s concept, Le told Science Magazine, but he is hopeful that the process can be scaled up to eventually create much more complex AIs that human researchers could never find on their own.
“Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks,” the team wrote in the paper, which is awaiting peer review.
“Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent, multiplicative interactions, weight averaging, normalized gradients, etc.,” the authors continued. “These results are promising, but there is still much work to be done.”
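To ground that quote: a linear regressor trained by gradient descent, written by hand below, is the kind of classic technique the evolved programs rediscovered from raw arithmetic. This is an illustrative Python sketch; the toy data, learning rate, and step count are assumptions made for the example, not details from the paper.

```python
import random

# Hand-written "linear regressor + gradient descent", the sort of
# algorithm AutoML-Zero is reported to have evolved on its own.

# Toy data: y = 3x + 1 with a little noise (illustrative assumption).
data = [(x, 3 * x + 1 + random.uniform(-0.1, 0.1))
        for x in [i / 10 for i in range(-10, 11)]]

w, b = 0.0, 0.0   # model parameters
lr = 0.1          # learning rate

for step in range(500):
    # Gradient of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w   # gradient descent update
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (true values: 3, 1)")
```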