Nvidia Boosts AI Chip Capabilities with H200


Nvidia has unveiled an upgrade to its premier artificial-intelligence (AI) chip: the H200, set to launch next year. The H200 will supersede the current H100, and its primary improvement is additional high-bandwidth memory, a crucial component that determines how quickly the chip can work through data.

High-Bandwidth Memory

The H200 carries 141 gigabytes of high-bandwidth memory, a significant jump from the 80 gigabytes in the H100. The added capacity lets the chip work through more data at once, improving the speed at which AI services can generate responses. Nvidia has not revealed, however, who is supplying the memory for the new chip.
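To see why the extra capacity matters for large language models, here is a rough, back-of-envelope sketch in Python. The model sizes are hypothetical examples, and the calculation covers model weights alone at 16-bit precision, ignoring the activation and key-value-cache memory that inference also consumes:

```python
# Rough sketch: do a model's FP16 weights fit in a single GPU's memory?
# Model sizes here are hypothetical examples; real deployments also need
# room for activations and the KV cache, which this ignores.

H100_MEMORY_GB = 80   # per the article
H200_MEMORY_GB = 141  # per the article

def weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory occupied by model weights, in gigabytes (FP16 = 2 bytes/param)."""
    return n_params * bytes_per_param / 1e9

for n_params in (7e9, 70e9):
    gb = weights_gb(n_params)
    print(f"{n_params / 1e9:.0f}B params -> {gb:.0f} GB of weights | "
          f"fits H100: {gb <= H100_MEMORY_GB} | fits H200: {gb <= H200_MEMORY_GB}")
```

On these rough figures, the weights of a 70-billion-parameter model squeeze into a single H200 but not a single H100, which illustrates the practical difference the larger memory pool makes.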

Nvidia, the market leader in AI chips, powers many generative AI services, including OpenAI's ChatGPT. These services stand to benefit from the faster processing the H200 offers.

Cloud Service Providers

The new chip is slated to roll out through major cloud providers: Nvidia announced that Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first to offer access to H200 chips. Specialty AI cloud providers CoreWeave, Lambda, and Vultr are also included in the initial rollout.

The AI chip boom has revived memory sales, a development acknowledged by industry players like South Korea's SK Hynix, a known Nvidia memory supplier. Micron Technology also announced in September its intention to become an Nvidia supplier.