Graphcore is claiming significant performance advantages for its second-generation IPU versus state-of-the-art Nvidia GPUs. However, Graphcore has put systems of different sizes head-to-head, saying it has instead compared the Nvidia product that’s closest in price. Is that comparing apples to oranges? Does it matter?

By publishing an array of in-house benchmark figures, British AI chip startup Graphcore has mounted a challenge against the market leader for AI acceleration in the data center, Nvidia. The ancient practice of specmanship, it seems, is alive and well in the AI accelerator chip industry.

Claims made by Graphcore include between 3.7x and 18x higher throughput for AI training and between 3.4x and 600x higher throughput for AI inference of various models compared to Nvidia GPUs.

Amongst other claims, Graphcore says its IPU-M2000 can achieve ResNet-50 training throughput of 4326 images/second (batch=1024), which according to the company is 2.6x better than the Nvidia A100. On ResNet-50 inference, the IPU-M2000 can process 9856 images/sec, which Graphcore says is 4.6x higher throughput than the Nvidia A100.

Graphcore’s IPU-M2000 (Image: Graphcore)

Graphcore has not been shy about going head-to-head with AI chip leader Nvidia in the past, but this latest announcement seems particularly bold.

The scale of the systems compared in Graphcore’s announcement seems inconsistent, but as with all performance benchmarks, the devil is in the details. The majority of Graphcore’s benchmarks compare the IPU-M2000, a system with four IPU-MK2 chips, against a single Nvidia A100 GPU. The company also compares its IPU-Pod64, a system with 64 chips, against one or two Nvidia DGX-A100 systems (8x or 16x A100 chips).

Graphcore benchmarked its 4-chip system against a single Nvidia GPU.

So how does Graphcore justify making performance comparisons between systems of different sizes?

“There are lots of variables you can normalize for when making a product comparison, but we don’t see number of chips being what customers care about. Our customers make a performance per dollar evaluation,” Chris Tunsley, director of product marketing at Graphcore, told EE Times.

“You can buy one A100 DGX-based product, or a DGX box with 8 chips. Our product has 4 MK2 IPUs (IPU-M2000) and this is the building block in sets of 4 up to an IPU-Pod64, which has 64 IPUs in it. There won’t always be a precise 1:1 correlation as it is not possible to compare ‘fractions’ of a system. The closest comparison for price and power consumption is 1x IPU-M2000 to 1x A100 DGX,” Tunsley added.

Nvidia disagrees. “Graphcore’s comparisons are apples versus oranges, in terms of the models, algorithms and system configurations used, and they lacked key details like accuracy the models were trained to,” Paresh Kharya, Nvidia’s senior director of product management for accelerated computing, told EE Times. “When compared using consistent methodology, Nvidia A100 offers much higher performance, versatility to run all AI models and a mature software stack so developers are productive from day one.”

Nvidia’s A100 GPU (Image: Nvidia)

“Benchmarking is nuanced and has many variables that can impact the performance and real customer experience,” said Kharya. “That’s why MLPerf was created, to enable apples to apples comparisons by standardizing the algorithms and measurement criteria, having peer reviews and making them representative of what customers run.”

Nvidia is a substantial contributor to the industry-wide independent AI benchmark, MLPerf, in terms of the number of scores submitted for both training and inference benchmarks. Nvidia pointed out that in the latest MLPerf round, 11 companies used Nvidia’s software stack to submit performance scores for their Nvidia-based systems. While these results effectively back up Nvidia’s own results, they also validate the maturity of Nvidia’s software stack and reflect its large community of developers.

Graphcore’s software stack, Poplar, is in version 1.4 and supports TensorFlow, PyTorch, ONNX and Alibaba’s Halo platform, with interfaces for PaddlePaddle and Jax on the roadmap.

“The Graphcore numbers are misleading,” said Kevin Krewell, principal analyst at Tirias Research. “Many companies self-publish benchmark and performance data, but those should always be viewed sceptically. The use of performance per dollar is not a good measure for AI systems purchases because there are many other factors in the cost of ownership. Often, performance per rack space is a critical factor.”
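The arithmetic behind the normalization dispute can be made concrete. The sketch below (Python) uses only the article’s ResNet-50 training figures; the single-A100 throughput is back-calculated from Graphcore’s own claimed ratio, not an independently measured number:

```python
# Graphcore's published ResNet-50 training claim (from the article).
ipu_m2000_imgs_per_sec = 4326.0   # IPU-M2000 (4 chips), batch=1024
claimed_speedup = 2.6             # Graphcore's claimed advantage vs. one A100

# Implied single-A100 throughput, derived from the claimed ratio
# (an assumption for illustration, not a measured figure).
a100_imgs_per_sec = ipu_m2000_imgs_per_sec / claimed_speedup

chips_per_ipu_m2000 = 4

# System-level speedup, the number Graphcore reports: 4 chips vs. 1 chip.
system_speedup = ipu_m2000_imgs_per_sec / a100_imgs_per_sec

# Per-chip speedup, one normalization Nvidia's objection points at.
per_chip_speedup = (ipu_m2000_imgs_per_sec / chips_per_ipu_m2000) / a100_imgs_per_sec

print(f"system: {system_speedup:.2f}x, per chip: {per_chip_speedup:.2f}x")
```

On these numbers, a 2.6x system-level win becomes roughly a 0.65x per-chip result, which is exactly why the choice of denominator (chips, dollars, watts or rack space) decides who “wins” the comparison.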