
The Next HoloLens Will Include A Deep Learning Accelerator

Source – tomshardware.com

Microsoft announced that the second generation of the HoloLens’ Holographic Processing Unit (HPU) will contain a deep learning accelerator.

Holographic Processing Unit

When Microsoft first unveiled the HoloLens, it said the headset would include a special kind of processor, called an HPU, to accelerate the kind of “holographic” content displayed by the HMD. The HPU is primarily responsible for processing the information coming from all the on-board sensors, including a custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera.

The first-generation HPU contained 24 digital signal processors (DSPs), an Atom processor, 1GB of DDR3 RAM, and 8MB of SRAM cache. The chip can sustain about one trillion floating-point operations per second (1 TFLOPS) while drawing under 10W of power, with roughly 40% of that power going to the Atom CPU.
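The quoted figures imply a rough efficiency number. A minimal back-of-the-envelope sketch, using only the values reported above (the exact split between DSPs and CPU is approximate):

```python
# Back-of-the-envelope efficiency for the first-gen HPU,
# using the figures quoted above (illustrative arithmetic only).

total_flops = 1e12       # ~1 trillion floating-point operations per second
total_power_w = 10.0     # "under 10W" of total chip power
atom_share = 0.40        # ~40% of that power drawn by the Atom CPU

hpu_power_w = total_power_w * (1 - atom_share)   # power left for the 24 DSPs
flops_per_watt = total_flops / total_power_w

print(f"Whole-chip efficiency: {flops_per_watt / 1e9:.0f} GFLOPS/W")
print(f"Power available to the 24 DSPs: {hpu_power_w:.1f} W")
```

At these numbers the chip lands around 100 GFLOPS per watt, which is why a smaller process node (discussed next) matters so much for a battery-powered headset.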

The first HPU was built on a 28nm planar process; if the next-generation HPU is built on a 14/16nm or smaller FinFET process, the performance increase could be significant. However, Microsoft has not yet revealed which process node the next-generation HPU will use.

Deep Learning Accelerator

What we do know so far about the second-gen HPU is that it will incorporate an accelerator for deep neural networks (DNNs). The accelerator is designed to work offline, running entirely on the HoloLens’ battery, which means it must be quite power-efficient while still delivering significant speedups for Microsoft’s machine learning code.

This would make the second-gen HoloLens part of the rising on-device machine learning trend, where devices run machine learning code natively rather than sending data over the network to servers hundreds or thousands of miles away. For a device such as the HoloLens, performing the deep learning processing in real time is critical, because the device can’t afford much latency between receiving sensor data and acting on it.
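The latency argument above can be made concrete with a simple frame-budget calculation. The numbers below are illustrative assumptions, not figures from the article: a 60 Hz display leaves under 17 ms per frame, which a cloud round trip alone can easily blow through, while on-device inference can fit inside the budget.

```python
# Illustrative real-time AR latency budget (assumed numbers, not from the article).

frame_budget_ms = 1000 / 60      # ~16.7 ms per frame on a 60 Hz display
cloud_round_trip_ms = 50         # an optimistic network round trip to a remote server
on_device_inference_ms = 5       # an assumed on-device DNN inference time

print(f"Frame budget: {frame_budget_ms:.1f} ms")
print(f"Cloud round trip alone exceeds the budget: {cloud_round_trip_ms > frame_budget_ms}")
print(f"On-device inference fits within the budget: {on_device_inference_ms < frame_budget_ms}")
```

Even under generous network assumptions, the round trip by itself misses the per-frame deadline, which is the core motivation for putting the DNN accelerator on the headset.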

As we’re talking about an augmented or “mixed reality” device, using an on-device deep learning accelerator also means you don’t have to send data about your surroundings to someone else’s servers. However, it remains to be seen whether Microsoft will decide to collect large amounts of data about what users are doing, as it has already done with Windows 10.
