
Google Proposes AI as Solution for Speedier AI Chip Design

Source: allaboutcircuits.com

Designing a chip is difficult: thousands of components must be packed onto a piece of silicon the size of a fingernail. The trouble is that a chip can take several years to design, and the world of machine learning and artificial intelligence (AI) moves much faster than that.

In an ideal world, you want a chip that is designed quickly enough to be optimized for today’s AI challenges, not the AI challenges of several years ago. 

Now, Alphabet’s Google has proposed an AI solution that could speed up the internal development of its own chips. The solution? Train AI to design the chips itself.

Shortening the AI Chip Design Cycle

In a research paper posted to arXiv on March 23, the researchers describe how they “believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fuelling advances in the other.”

The paper describes how a machine learning program can be used to make decisions about how to plan and lay out a chip’s circuitry, with the final designs being as good as or better than those produced by humans.

According to Jeff Dean, Google’s head of AI research, this program is currently being used internally for exploratory chip design projects. The company is already known for developing a family of AI hardware over the years, including its Tensor Processing Unit (TPU) for processing AI in its servers. 

The Chip Design Challenge

Planning a chip’s circuitry, often referred to as “placement” or “floorplanning”, is very time-consuming. As chips continually improve, final designs quickly become outdated: although a chip is designed to last two to five years, there is constant pressure on engineers to reduce the time between upgrades.

Floorplanning involves placing logic and memory blocks, or clusters of them, in a way that minimizes power consumption and footprint while maximizing performance. This is already challenging enough, and the task is made harder still because rules about the density of interconnects must be respected at the same time.
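To make the trade-off concrete, here is a toy Python sketch that scores a candidate placement by wire length and footprint while enforcing a density cap. The block layout, weights, and density limit are illustrative assumptions for this article, not values or methods from Google’s paper.

```python
# Toy floorplanning cost model (illustrative only; not Google's method).
# Each block has a position (x, y) and a size (w, h); nets connect block pairs.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    x: float  # placement coordinates (assumed units)
    y: float
    w: float  # block width and height
    h: float

def wirelength(nets, blocks):
    """Sum of Manhattan distances between the centers of connected blocks."""
    total = 0.0
    for a, b in nets:
        ba, bb = blocks[a], blocks[b]
        total += abs((ba.x + ba.w / 2) - (bb.x + bb.w / 2)) \
               + abs((ba.y + ba.h / 2) - (bb.y + bb.h / 2))
    return total

def footprint(blocks):
    """Area of the bounding box enclosing all placed blocks."""
    xs = [b.x for b in blocks.values()] + [b.x + b.w for b in blocks.values()]
    ys = [b.y for b in blocks.values()] + [b.y + b.h for b in blocks.values()]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def placement_cost(blocks, nets, max_density=0.8, alpha=1.0, beta=0.5):
    """Lower is better: short wires and a small footprint, under a density cap."""
    bbox = footprint(blocks)
    density = sum(b.w * b.h for b in blocks.values()) / bbox if bbox else 1.0
    if density > max_density:       # crude stand-in for interconnect-density rules
        return float("inf")         # treat the placement as infeasible
    return alpha * wirelength(nets, blocks) + beta * bbox

# Example: two memory blocks wired to one logic block on a small die.
blocks = {
    "mem0":  Block("mem0", 0, 0, 4, 4),
    "mem1":  Block("mem1", 5, 0, 4, 4),
    "logic": Block("logic", 2, 5, 3, 3),
}
nets = [("mem0", "logic"), ("mem1", "logic")]
print(placement_cost(blocks, nets))  # one scalar to compare candidate layouts
```

A real flow juggles many thousands of such blocks and nets at once, which is why the process takes human engineers so long.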

Even with today’s advanced tools and processes, human engineers need weeks of work and multiple iterations to produce an acceptable design for an AI chip.

Using AI for Chip Floor Planning

However, Google’s research is said to have made major improvements to this process. In the arXiv paper, research engineers Anna Goldie and Azalia Mirhoseini claim to have designed an algorithm that learns how to achieve optimal placement of chip circuitry. It does this by studying existing chip designs in order to produce its own.

According to Goldie and Mirhoseini, it is able to do this in a fraction of the time currently required by human designers and is capable of analyzing millions of design possibilities as opposed to thousands. This enables it to spit out chip designs that not only utilize the latest developments but are cheaper and smaller, too.

Repeated Tasks Result in Higher Performance

During their research, the duo modeled chip placement as a reinforcement learning problem. Unlike conventional supervised deep learning systems, which train on large labeled datasets, these systems learn by doing: they adjust the parameters in their networks according to a “reward signal” that is sent when they succeed at a task.

In the case of chip design, the reward signal is a combined measure of power reduction, area reduction, and performance improvement. As a result, the program becomes better at its task the more times it does it. 
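As a minimal sketch of that learn-by-doing loop, the Python snippet below combines power, area, and performance into one reward and nudges a toy “policy” toward placements that score better. The evaluator, weights, and update rule are simplified illustrations, not the network or training setup described in the paper.

```python
# Illustrative reinforcement-learning-style loop for placement (not Google's code).
import random

def reward(power_reduction, area_reduction, perf_improvement,
           w_power=1.0, w_area=1.0, w_perf=1.0):
    """Combined reward signal: better power, area, and performance score higher."""
    return (w_power * power_reduction
            + w_area * area_reduction
            + w_perf * perf_improvement)

def evaluate(placement):
    """Stand-in for a real evaluator (power, area, and timing analysis)."""
    power = sum(p * 0.3 for p in placement)        # assumed toy trade-offs
    area = sum((1 - p) * 0.2 for p in placement)
    perf = sum(p * (1 - p) for p in placement)     # peaks for balanced settings
    return power, area, perf

def learn(episodes=200, n_knobs=5, lr=0.1):
    """Learn by doing: shift each knob toward settings that earned more reward."""
    policy = [0.5] * n_knobs                       # current preferred settings
    baseline = None                                # running average of reward
    for _ in range(episodes):
        # Try a placement near the current policy (exploration).
        trial = [min(1.0, max(0.0, p + random.uniform(-0.2, 0.2))) for p in policy]
        power, area, perf = evaluate(trial)
        r = reward(power_reduction=-power, area_reduction=-area,
                   perf_improvement=perf)
        baseline = r if baseline is None else 0.9 * baseline + 0.1 * r
        advantage = r - baseline                   # did this trial beat recent ones?
        # Move toward the trial if it scored better than usual, away if worse.
        policy = [p + lr * advantage * (t - p) for p, t in zip(policy, trial)]
    return policy

print(learn())  # the policy drifts toward higher-reward placements over episodes
```

The key point mirrors the article: the only supervision is the reward itself, so the more placements the program tries, the better its choices become.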

A Solution to Moore’s Law 

If this research is as promising as Google’s researchers suggest, it could help sustain Moore’s Law, the observation that the number of transistors on a chip doubles roughly every one to two years. In the 1970s, chips generally had a few thousand transistors; today, some host billions of them.
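A quick back-of-the-envelope check of that doubling claim, using an illustrative starting year and transistor count rather than figures from the article:

```python
# Doubling roughly every two years turns "a few thousand" transistors in the
# early 1970s into tens of billions by the early 2020s.
start_year, end_year = 1971, 2021        # illustrative span (assumption)
transistors = 2_300                      # "a few thousand" starting point (assumption)
doublings = (end_year - start_year) // 2 # one doubling per roughly two years
transistors *= 2 ** doublings
print(f"~{transistors:,} transistors")   # on the order of tens of billions
```

That trajectory matches the article’s jump from thousands of transistors to billions.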
