
NVIDIA Q2 FY2021 Financial Summary

NVIDIA DGX SUPERPOD SETS ALL 8 AT SCALE AI RECORDS
Under 18 Minutes To Train Each MLPerf Benchmark

[Bar chart: Time to Train (minutes, lower is better) for commercially available solutions across the eight MLPerf 0.7 benchmarks (Translation, Non-recurrent: Transformer; Translation, Recurrent: GNMT; Image Classification: ResNet-50 v1.5; Object Detection, Lightweight: SSD; NLP: BERT; Recommendation: DLRM; Object Detection, Heavyweight: Mask R-CNN; Reinforcement Learning: MiniGo), comparing NVIDIA A100, NVIDIA V100, Google TPUv3, and Huawei Ascend at max scale. X = no result submitted.]

MLPerf 0.7 performance comparison at max scale. Max scale used for NVIDIA A100, NVIDIA V100, TPUv3, and Huawei Ascend for all applicable benchmarks. | MLPerf ID at Scale: Transformer: 0.7-30, 0.7-52; GNMT: 0.7-34, 0.7-54; ResNet-50 v1.5: 0.7-37, 0.7-55, 0.7-1, 0.7-3; SSD: 0.7-33, 0.7-53; BERT: 0.7-38, 0.7-56, 0.7-1; DLRM: 0.7-17, 0.7-43; Mask R-CNN: 0.7-28, 0.7-48; MiniGo: 0.7-36, 0.7-51. | MLPerf name and logo are trademarks. See www.mlperf.org for more information.