Source: somagnews.com
Artificial intelligence is used in many areas today, from games to smartphones, which is why developers are focusing more on new AI designs. Google is now taking this a step further by using artificial intelligence to design the chips that run artificial intelligence algorithms.
The silicon design task Google's AI handles is a subset of chip design known as layout optimization. It involves placing logic and memory blocks (or clusters of these blocks) in strategic locations so that the processor gets the most out of both performance and power efficiency.
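To give a rough sense of what layout optimization means, the toy Python sketch below places a few hypothetical blocks on a small grid and exhaustively searches for the arrangement with the shortest total wiring distance. The block names, sizes, and connections are illustrative assumptions, not Google's actual data, and real floorplanning involves vastly larger problems where brute-force search is impossible.

```python
import itertools

# Hypothetical blocks with (width, height) footprints on a small grid.
BLOCKS = {"alu": (2, 2), "sram": (2, 3), "io_ctrl": (1, 2)}
GRID_W, GRID_H = 8, 8

# Hypothetical netlist: pairs of blocks that exchange signals and should sit close together.
NETS = [("alu", "sram"), ("alu", "io_ctrl")]

def overlaps(a_pos, a_dim, b_pos, b_dim):
    """True if two axis-aligned blocks overlap."""
    ax, ay = a_pos
    aw, ah = a_dim
    bx, by = b_pos
    bw, bh = b_dim
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def legal(placement):
    """A placement is legal if every block fits on the grid and no two blocks overlap."""
    items = list(placement.items())
    for name, (x, y) in items:
        w, h = BLOCKS[name]
        if x + w > GRID_W or y + h > GRID_H:
            return False
    for (a, pa), (b, pb) in itertools.combinations(items, 2):
        if overlaps(pa, BLOCKS[a], pb, BLOCKS[b]):
            return False
    return True

def cost(placement):
    """Sum of Manhattan distances between connected block centers (a wirelength proxy)."""
    total = 0.0
    for a, b in NETS:
        (ax, ay), (aw, ah) = placement[a], BLOCKS[a]
        (bx, by), (bw, bh) = placement[b], BLOCKS[b]
        total += abs((ax + aw / 2) - (bx + bw / 2)) + abs((ay + ah / 2) - (by + bh / 2))
    return total

def best_placement():
    """Exhaustively try every legal placement and keep the cheapest one."""
    positions = list(itertools.product(range(GRID_W), range(GRID_H)))
    best, best_cost = None, float("inf")
    for combo in itertools.product(positions, repeat=len(BLOCKS)):
        placement = dict(zip(BLOCKS, combo))
        if legal(placement):
            c = cost(placement)
            if c < best_cost:
                best, best_cost = placement, c
    return best, best_cost

print(best_placement())
```

Even this tiny three-block example already searches over a quarter of a million candidate placements, which hints at why human engineers and exhaustive methods struggle as designs grow.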
It can take a team of engineers several weeks to plan an ideal layout, since the process is a complex task involving many variables. Google's artificial neural network, however, can produce a better design for a Tensor Processing Unit in less than 24 hours. This unit is conceptually similar to the Tensor cores Nvidia uses on its Turing-based GeForce RTX graphics cards.
Rather than relying on a deep learning model that has to be trained on a large labeled dataset, Google uses a reinforcement learning system: essentially, the more layouts Google's AI designs, the better the resulting chips become.
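The sketch below illustrates the general idea of reinforcement learning in this setting, not Google's actual system. A simple REINFORCE-style policy repeatedly picks a slot for one block from a handful of hypothetical candidate positions, receives a reward based on how short the resulting wiring would be, and shifts its preferences toward the choices that paid off.

```python
import math
import random

# Hypothetical setup: one movable block must be placed in one of a few candidate slots.
# The reward is the negative wiring distance to a fixed partner block at (0, 0).
CANDIDATE_SLOTS = [(1, 1), (3, 4), (6, 2), (7, 7)]

def reward(slot):
    x, y = slot
    return -(abs(x) + abs(y))  # shorter wiring -> higher reward

def softmax(prefs):
    exps = [math.exp(p - max(prefs)) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

# Learned preference for each slot, nudged toward choices that earned above-average reward.
prefs = [0.0] * len(CANDIDATE_SLOTS)
baseline, lr = 0.0, 0.1

for step in range(2000):
    probs = softmax(prefs)
    action = random.choices(range(len(CANDIDATE_SLOTS)), weights=probs)[0]
    r = reward(CANDIDATE_SLOTS[action])
    baseline += 0.01 * (r - baseline)      # running-average reward as a baseline
    advantage = r - baseline
    for a in range(len(prefs)):            # policy-gradient update for a softmax policy
        grad = (1.0 if a == action else 0.0) - probs[a]
        prefs[a] += lr * advantage * grad

print("learned preferences:", [round(p, 2) for p in prefs])
print("preferred slot:", CANDIDATE_SLOTS[max(range(len(prefs)), key=lambda i: prefs[i])])
```

The key difference from supervised deep learning is visible here: there is no dataset of correct answers, only a reward signal the agent improves against through trial and error.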
Through these efforts, Google aims to shorten the chip design cycle and create a symbiotic relationship between hardware and artificial intelligence. If the method works, AMD, Intel, and Nvidia may adopt the same approach in the future.