NVIDIA Financial and Market Overview
Modern AI Is a Data Center Scale Computing Workload
Data centers are becoming AI factories: data as input, intelligence as output.

[Chart: AI training compute per model (petaFLOPs, log scale, 10^2 to 10^10) by year, 2011-2023. Models shown range from AlexNet (2012) through VGG-19, Seq2Seq, ResNet, Inception V3, Xception, DenseNet-201, ResNeXt, Transformer, ELMo, GPT-1, BERT Large, XLNet, Wav2Vec 2.0, MoCo ResNet-50, GPT-2, Microsoft T-NLG, GPT-3, Megatron-Turing NLG 530B, Chinchilla, BLOOM, and PaLM. Trend lines: before Transformers, training compute grew 8x every 2 years; with Transformers, 215x every 2 years.]
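The growth rates in the chart above can be made concrete with a little arithmetic. A minimal sketch, assuming the stated 2-year multiples compound smoothly, of the implied annual growth factor and doubling time (the function names are illustrative, not from the presentation):

```python
import math

def annual_factor(growth_per_2yrs: float) -> float:
    """Annualized growth factor implied by a stated 2-year multiple."""
    return math.sqrt(growth_per_2yrs)

def doubling_time_years(growth_per_2yrs: float) -> float:
    """Years for training compute to double at the stated rate."""
    return math.log(2) / math.log(annual_factor(growth_per_2yrs))

# Pre-Transformer trend: 8x per 2 years
print(f"{annual_factor(8):.2f}x/yr, doubles every {doubling_time_years(8):.2f} yrs")
# Transformer trend: 215x per 2 years
print(f"{annual_factor(215):.2f}x/yr, doubles every {doubling_time_years(215):.2f} yrs")
```

At 215x per 2 years, training compute doubles roughly every three months, which is why the chart's Transformer-era curve is so much steeper.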
AI Training Computational Requirements
Large Language Models, based on the Transformer architecture, are among today's most important advanced AI technologies, involving up to trillions of parameters that learn from text.

Developing them is an expensive, time-consuming process that demands deep technical expertise, distributed data center-scale infrastructure, and a full-stack accelerated computing approach.
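The cost claim above can be roughed out with a widely used back-of-envelope rule (an assumption on our part, not stated in the presentation): dense-Transformer training compute is approximately 6 FLOPs per parameter per training token. A minimal sketch:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the common 6*N*D heuristic
    (6 FLOPs per parameter per token for a dense Transformer)."""
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 175B-parameter model trained on 300B tokens.
flops = training_flops(175e9, 300e9)
print(f"{flops:.2e} total training FLOPs")  # 3.15e+23
```

A budget on the order of 10^23 FLOPs is far beyond any single machine, which is the point of the slide: training at this scale is a data-center-sized workload.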
Fueling Giant-Scale AI Infrastructure
NVIDIA compute & networking GPU | DPU | CPU