NVIDIA keeps its foot on the gas in machine learning with lower-cost development hardware

This week at the annual Neural Information Processing Systems (NIPS) conference in Long Beach, NVIDIA CEO Jensen Huang surprised attendees and media with the announcement of a new graphics card targeted at artificial intelligence and machine learning developers. The “NVIDIA TITAN V” might look like a traditional graphics card used for gaming, but under the hood this new product offers a different level and type of performance.

The TITAN V uses NVIDIA's Volta architecture, previously available only in server-based designs. The card is priced at $3,000, exceptionally high for a PC add-in card, but its target audience is developers of upcoming machine learning applications who need the flexibility to design on their own computers and may not have the money or resources to dedicate to a large-scale server deployment.

NVIDIA has built a significant lead in the high-performance compute areas of artificial intelligence and machine learning. These are the workloads required for neural network processing, which powers all kinds of tasks we use today and will expand into the future. If you have benefited from facial recognition in your Google Photos library or used a portrait-mode photo feature on a smartphone with a single camera lens, you are taking advantage of machine learning. Driver-assist features like Autopilot on the Tesla Model S and Model X, and autonomous driving efforts from Google and others, rely on machine learning to build the intelligent systems that make them work.


Machine learning and artificial intelligence make up the fastest-growing part of NVIDIA's portfolio, and in its latest quarterly announcement the company highlighted adoption of its Volta architecture in the datacenters of Alibaba, Baidu, and Tencent. This is in addition to companies already using NVIDIA hardware for artificial intelligence, such as Amazon, Google, Facebook, and Microsoft.

While the architecture and hardware advantage NVIDIA created is a big part of why it has become the machine learning leader, it all started with outreach to developers and educational programs around general-purpose GPU computing and its CUDA programming model. By offering graphics-chip-based computing options beyond gaming and rendering earlier than any other technology company, NVIDIA has monopolized the attention and developer cycles needed to create a daunting wall for any other AI company to overcome.
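For readers unfamiliar with what that programming model looks like, the following is a minimal, illustrative CUDA sketch of general-purpose GPU computing: a kernel that adds two arrays across thousands of GPU threads. The kernel name, array size, and launch configuration are arbitrary choices for the example, not anything specific to NVIDIA's products.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative kernel: each GPU thread adds one pair of elements.
    // This is the kind of data-parallel, non-graphics work CUDA opened up.
    __global__ void vector_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                 // one million elements (arbitrary size)
        const size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);          // unified memory keeps the example short
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vector_add<<<blocks, threads>>>(a, b, c, n);   // launch across thousands of threads
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);           // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

The appeal for developers is that this same basic model scales from a graphics card on a desk to a rack of server GPUs, which is exactly the kind of familiarity NVIDIA is counting on.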

The release of the TITAN V continues that trend, putting the company's best-performing graphics chip for AI and machine learning on the open market at a relatively reasonable cost. This gives start-up companies and individual developers the ability to use NVIDIA hardware during software creation and design, which in turn creates familiarity with the platform that carries over into large-scale rollouts of any implementation. NVIDIA is breeding an army of developers who have grown up on its software and hardware and who will help it maintain a stronghold in this quickly scaling segment.

NVIDIA is a very confident company, but even Huang understands that it needs to maintain this ground-level support to fend off competitors in the space. AMD has recently made moves into the data center with its Vega architecture, and Intel launched its Nervana processor earlier this fall. Intel also recently hired AMD graphics chip leader Raja Koduri to build its own discrete graphics chips from the ground up.

The NVIDIA TITAN V uses the company's latest graphics architecture, called Volta, which is significantly different from previous designs. Volta is clearly focused on AI and machine learning workloads rather than gaming, though it can handle that task decently as well. Volta implements a new core, called a Tensor Core, custom built for the deep learning tasks that make up artificial intelligence; Google's custom-built TPU (Tensor Processing Unit), announced last year, dedicates silicon to tensor operations in a similar way. The memory subsystem on Volta is also drastically different, using second-generation high-bandwidth memory (HBM2) that offers the significant increase in bandwidth necessary for large AI and machine learning datasets.
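To give a sense of how developers reach those Tensor Cores, here is a sketch using CUDA's warp-level matrix (WMMA) API, which targets Volta's Tensor Cores for small matrix multiply-accumulate tiles. The 16x16x16 tile is the basic WMMA shape; the kernel and host code are illustrative rather than taken from any NVIDIA sample, and a real deep learning library would tile much larger matrices.

    #include <cstdio>
    #include <mma.h>
    #include <cuda_fp16.h>
    #include <cuda_runtime.h>
    using namespace nvcuda;

    // Illustrative kernel: one warp computes a 16x16 matrix multiply-accumulate
    // on a Volta Tensor Core. Inputs are half precision, accumulation is float.
    __global__ void tensor_core_mma(const half* a, const half* b, float* c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);               // start the accumulator at zero
        wmma::load_matrix_sync(a_frag, a, 16);           // load the 16x16 input tiles
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C on the Tensor Core
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

    int main() {
        half *a, *b; float *c;
        cudaMallocManaged(&a, 16 * 16 * sizeof(half));
        cudaMallocManaged(&b, 16 * 16 * sizeof(half));
        cudaMallocManaged(&c, 16 * 16 * sizeof(float));
        for (int i = 0; i < 16 * 16; ++i) {
            a[i] = __float2half(1.0f);
            b[i] = __float2half(1.0f);
        }

        tensor_core_mma<<<1, 32>>>(a, b, c);   // one full warp drives one Tensor Core tile
        cudaDeviceSynchronize();
        printf("c[0] = %f\n", c[0]);           // expect 16.0: a row of ones dotted with a column of ones
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

The key design point is mixed precision: inputs in half precision with accumulation in single precision, the trade-off Volta's Tensor Cores are built around. The code must be compiled for the Volta architecture (compute capability 7.0), and the kernel has to be launched with at least one full warp of 32 threads.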

Adopting a custom core design that is not based on traditional graphics chips indicates that NVIDIA is all-in on the machine learning roadmap and is willing to adapt as workloads and algorithms change. It also tells us that NVIDIA may be moving to a dual roadmap, with different chips built for machine learning and for traditional gaming and rendering product segments. If true, this could prove to be a big blow to a company like AMD that doesn't have the resources to fund R&D for two similarly significant projects.