Source: siliconangle.com
When it comes to advancing the field of artificial intelligence, the ultimate prize remains clear: to come as close as possible to the power of the human brain.
For researchers at the forefront of AI development, such as Naveen Rao (pictured), vice president and general manager of the artificial intelligence products group at Intel Corp., achieving near parity with a human’s cognitive ability remains a long way off.
“Back in 2013, there were 10 million or 20 million parameters, which was very large for a machine-learning model,” Rao said. “Now they’re in the billions. The human brain is 300 trillion to 500 trillion parameters, so we’re still pretty far away from that; we’ve got a long way to go.”
Rao spoke with Dave Vellante, host of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, and guest host Justin Warren, chief analyst at PivotNine Pty Ltd., during Amazon Web Services Inc.’s re:Invent conference in Las Vegas. They discussed the role of Intel’s processor technology in machine learning, intelligence for cloud and edge computing, the impact of recent neural network training tools, AI for good and the future of autonomous cars. (* Disclosure below.)
This week, theCUBE features Naveen Rao as its Guest of the Week.
Research advances AI
While the processing gap remains sizable, Rao and Intel are working on several projects to move AI forward. It is work not only fundamental to the field, but also integral to Intel’s own business strategy and long-term future.
Intel’s PC chip business still accounts for approximately half of its total revenue, but the second-largest segment revolves around the data center, where AI is having the greatest impact. The company has been adjusting its powerful Xeon central processing unit chips to handle complex machine-learning tasks, most recently adding DL Boost instructions to accelerate neural network inference.
As developers and data scientists iterate over large data sets to train models, inference, the stage at which trained models are rolled out and deployed, becomes more significant.
“Inference is all about the best performance per watt,” Rao explained. “How much processing can I shove into a particular time and power budget? On the training side, it’s much more about what kind of flexibility I have for exploring different types of models and training them very fast.”
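The "performance per watt" lever Rao describes is largely about precision: DL Boost accelerates int8 arithmetic, which lets a trained float32 model run with a quarter of the memory traffic. As a rough illustration of the idea, here is a minimal symmetric int8 quantization sketch (illustrative only; function names are invented for the example and this is not Intel's actual implementation):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0          # map largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage is 4x smaller than float32, and the round-trip error
# per weight is bounded by half a quantization step (scale / 2).
```

Real deployments typically quantize per-channel and calibrate activations too, but the size/precision trade-off is the same.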
Acquisitions bolster portfolio
One indicator of how seriously Intel is taking its role as a provider of AI processing in the data center can be found in its acquisition of Israel-based Habana Labs Ltd. for $2 billion in December. Habana Labs’ Goya AI Inference Processor is already supported by Facebook Inc.’s Glow machine-learning compiler.
Intel’s acquisition of Habana Labs followed other moves the company has made in the AI space since 2016, when it purchased Nervana Inc., where Rao served as co-founder and chief executive officer. In 2018, Intel bought Vertex.AI and its platform-agnostic AI model technology, and last year it open-sourced its deep neural network framework, nGraph.
In November, Intel introduced its Nervana Neural Network Processors for training and inference, designed to accelerate AI system deployment from cloud to edge.
“From its very inception, the machine was really meant to be something that recapitulated intelligence,” Rao said. “Everything we do is impacted by AI and will be in service of building better AI platforms for intelligence at the edge, intelligence in the cloud, and everything in between.”
Training neural networks
Building better AI platforms will require training deep neural networks to run more powerfully in today’s data centers. But for these networks to become truly effective, they must be able to generalize, understanding a range of possibilities while improving overall intelligence. Welcome to the world of the 16-bit brain floating-point format and generative adversarial networks, or GANs.
Intel will soon add support for the brain floating-point format, or “bfloat16,” in its Cooper Lake Xeon processors and Nervana-based training chips. AI researchers have found that bfloat16 works well across workloads and can be used for vision, speech and language applications, which explains why Intel is moving boldly down that path.
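What makes bfloat16 attractive for training is that it keeps float32's full 8-bit exponent (so dynamic range survives) and drops only mantissa bits. NumPy has no native bfloat16 type, but the format can be emulated by truncating the low 16 bits of each float32, as in this sketch (plain truncation is a simplification; real hardware rounds to nearest-even):

```python
import numpy as np

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Emulate bfloat16 by zeroing the low 16 mantissa bits of float32.

    The sign bit and 8-bit exponent are untouched, so the full float32
    dynamic range is preserved; only precision is lost.
    """
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([1.0, 3.14159265, 1e30], dtype=np.float32)
bf = to_bfloat16(x)
# 1.0 is exactly representable -> stays 1.0
# pi truncates to 3.140625 (~3 decimal digits of precision remain)
# 1e30 keeps its magnitude: same exponent range as float32
```

The appeal for deep learning is exactly this trade: gradients need range far more than they need precision, so halving the storage and bandwidth rarely hurts convergence.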
The format is also useful in helping move key network architectures forward, such as GANs. The generative adversarial network has been called “the most interesting idea in the last 10 years in machine learning” by Yann LeCun, director of AI research at Facebook.
“You can think of it as two competing sides of solving a problem,” Rao explained. “If you have two neural networks that are working against each other, one is generating stuff and the other one is asking if it’s fake or not. Eventually, you keep improving each other.”
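Rao's description maps directly onto the standard GAN training loop: a generator produces samples, a discriminator judges real versus fake, and each network's update works against the other's. The toy sketch below (illustrative only; the one-parameter-per-weight networks, hand-derived gradients, and hyperparameters are all invented for the example) shows the alternating updates on 1-D Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data: N(4, 1). Generator G(z) = a*z + c maps N(0, 1) noise
# toward it; discriminator D(x) = sigmoid(w*x + b) judges real vs fake.
a, c = 1.0, 0.0          # generator parameters
w, b = 0.1, 0.0          # discriminator parameters
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    fake = a * z + c

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    # Gradients of -[log D(real) + log(1 - D(fake))] w.r.t. w, b:
    gw = -np.mean((1 - d_real) * real) + np.mean(d_fake * fake)
    gb = -np.mean(1 - d_real) + np.mean(d_fake)
    w -= lr * gw
    b -= lr * gb

    # Generator step: push D(fake) -> 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * (a * z + c) + b)
    # Gradients of -log D(fake) w.r.t. a, c (chain rule through D):
    ga = -np.mean((1 - d_fake) * w * z)
    gc = -np.mean((1 - d_fake) * w)
    a -= lr * ga
    c -= lr * gc
# As the two sides "keep improving each other," the generator's offset c
# drifts from 0 toward the real data's mean of 4.
```

Real GANs replace the linear toy networks with deep ones and use automatic differentiation, but the adversarial structure, alternating discriminator and generator steps, is exactly this.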
Deepfakes remain a concern
Discussion of AI continues to center around the positive and negative. While GANs may indeed be instrumental in moving machine intelligence forward, they are also a key ingredient in creating “deepfakes,” which is raising alarm in some sectors of the tech community.
Deepfake technology has been used to create deceptive videos and nonconsensual pornography and even to disseminate fictitious news reports. One Forrester Research Inc. analyst has estimated that losses from deepfake scams will exceed $250 million in the coming year.
Despite those concerns, Rao believes that the good will overcome the bad.
“One radiologist plus AI equals 100 radiologists,” Rao said. “It solves problems that we have in healthcare today; that’s where we should be going with this. I look at AI as a way to push humanity to the next level.”
In an AI-driven world, what will humanity at the next level look like? The continued march toward autonomous driving has the potential to impact human lives in a major way.
The fastest-growing business for Intel on an annualized basis is Mobileye Technologies Ltd., a company it acquired for $15 billion two years ago. Mobileye makes autonomous vehicle technology and is focused on the robotaxi market.
Autonomous vehicles provide yet another example of how Intel is moving from a PC-centered company to a data-driven business, and self-driving cars will be an inevitable outcome of AI progress, according to Rao.
“Autonomous driving is a bit of a black box, and the number of situations one can incur on the road are almost limitless,” Rao said. “For a 16-year-old, we say go out and drive. And, eventually, they sort of learn it. The same thing is happening now for autonomous systems.”
Driving cars is an area that Rao knows quite well. When he’s not guiding Intel’s AI products group, the technologist races semi-professionally on the Ferrari automotive circuit.
And while he envisions a world where autonomous cars will become a normal part of daily life, Rao doesn’t see human-driven cars disappearing completely.
“Five to seven years from now, we will be using autonomy much more on prescribed routes,” Rao said. “It won’t be that it completely replaces a human driver even in that time frame because it’s a very hard problem to solve. It’s going to be a gentle evolution over the next 20 to 30 years.”