NVIDIA Investor Presentation Deck


Creator: NVIDIA
Category: Technology
Published: December 2023

Transcriptions

#1 NVIDIA Investor Presentation Q3 FY24 | November 27, 2023

#2 Except for the historical information contained herein, certain matters in this presentation including, but not limited to, statements as to: our financial position; our markets, market opportunity, demand and growth drivers; a broadening set of GPU-specialized CSPs; entering the holidays with our best-ever line-up for gamers and creators; generative AI emerging as the new "killer app" for high-performance PCs; being on track to exit the year at an annualized revenue run rate of $1 billion for our recurring software, support, and services offerings; AI emerging as a powerful demand driver for Professional Visualization; Foxconn incorporating Omniverse into its manufacturing process; our financial outlook, and expected tax rates for the fourth quarter of fiscal 2024; our expectations of sequential growth to be driven by Data Center, continued strong demand for compute and networking, and Gaming likely declining sequentially; the U.K. government building one of the world's fastest AI supercomputers; Jülich building its next-gen AI supercomputer; the combined AI compute capacity of all the supercomputers built on Grace Hopper across the U.S., EMEA and Japan next year; the benefits, impact, performance, features and availability of our products and technologies; the benefits, impact, features and timing of our collaborations or partnerships; NVIDIA accelerated computing being broadly recognized as the way to advance computing as Moore's law ends and AI lifts off; accelerated computing being needed to tackle the most impactful opportunities of our time; AI driving a platform shift from general purpose to accelerated computing, and enabling new, never-before-possible applications; the trillion dollars of installed global data center infrastructure transitioning to accelerated computing; broader enterprise adoption of AI and accelerated computing under way; AI and accelerated computing making possible the next big waves of autonomous machines and industrial digitalization; a rapidly growing universe of applications and industry innovation; AI's ability to augment creativity and productivity; generative AI as the most important computing platform of our generation; data centers becoming AI factories; full-stack and data center scale acceleration driving significant cost savings and workload scaling; the high ROI of high compute performance; our belief that every important company will run its own AI factories; our dividend program plan; AI factories expanding our market opportunity; our Automotive design win pipeline, ramp and production expectations; our aim to engage manufacturing suppliers and goal of effecting supplier adoption of science-based environmental targets by fiscal 2026; and our plan for 100% renewable electricity for our operations and data centers by fiscal 2025 and annually thereafter are forward-looking statements. These forward-looking statements and any other forward-looking statements that go beyond historical facts that are made in this presentation are subject to risks and uncertainties that may cause actual results to differ materially.
Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences and demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems and other factors.

NVIDIA has based these forward-looking statements largely on its current expectations and projections about future events and trends that it believes may affect its financial condition, results of operations, business strategy, short-term and long-term business operations and objectives, and financial needs. These forward-looking statements are subject to a number of risks and uncertainties, and you should not rely upon the forward-looking statements as predictions of future events. The future events and trends discussed in this presentation may not occur and actual results could differ materially and adversely from those anticipated or implied in the forward-looking statements. Although NVIDIA believes that the expectations reflected in the forward-looking statements are reasonable, the company cannot guarantee that future results, levels of activity, performance, achievements or events and circumstances reflected in the forward-looking statements will occur. Except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances. For a complete discussion of factors that could materially affect our financial results and operations, please refer to the reports we file from time to time with the SEC, including our most recent Annual Report on Form 10-K, Quarterly Reports on Form 10-Q, and Current Reports on Form 8-K. Copies of reports we file with the SEC are posted on our website and are available from NVIDIA without charge.

Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements within are not intended to be, and should not be interpreted as, a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

NVIDIA uses certain non-GAAP measures in this presentation including non-GAAP gross profit, non-GAAP gross margin, non-GAAP operating expenses, non-GAAP operating income, non-GAAP operating margin, non-GAAP net income, non-GAAP diluted earnings per share, and free cash flow. NVIDIA believes the presentation of its non-GAAP financial measures enhances investors' overall understanding of the company's historical financial performance. The presentation of the company's non-GAAP financial measures is not meant to be considered in isolation or as a substitute for the company's financial results prepared in accordance with GAAP, and the company's non-GAAP measures may be different from non-GAAP measures used by other companies.
Further information relevant to the interpretation of non-GAAP financial measures, and reconciliations of these non-GAAP financial measures to the most comparable GAAP measures, may be found in the slide titled "Reconciliation of Non-GAAP to GAAP Financial Measures".

#3 Contents
• Q3 FY24 Earnings Summary
• Key Announcements This Quarter
• NVIDIA Overview
• Financials
• Reconciliation of Non-GAAP to GAAP Financial Measures

#4 Q3 FY24 Earnings Summary

#5 Highlights: Record quarter driven by strong Data Center growth
• Total revenue up 206% Y/Y to $18.12B, well above outlook of $16.00B +/- 2%
• Data Center up 279% Y/Y to $14.51B; Gaming up 81% Y/Y to $2.86B
• Record Data Center revenue driven by continued ramp of the NVIDIA HGX platform and InfiniBand networking
• Consumer internet and enterprise companies drove exceptional sequential growth, outpacing total growth
• Strong demand from all hyperscale cloud service providers (CSPs), and a broadening set of GPU-specialized CSPs
• Inference is contributing significantly to NVIDIA Data Center demand as AI is now in full production
• Gaming growth reflects strong demand for GeForce RTX 40 Series GPUs for back-to-school and the holidays
• GeForce RTX available at price points as low as $299; entering the holidays with best-ever line-up for gamers and creators
• Gaming has doubled relative to pre-COVID levels even against the backdrop of lackluster PC market performance
• Gen AI emerging as new "killer app" for high-performance PCs; NVIDIA RTX is the natural platform for AI-application developers

#6 Financial Summary
Revenue ($M) and non-GAAP gross margin by quarter: Q3 FY23: $5,931 (56.1%) | Q4 FY23: $6,051 (66.1%) | Q1 FY24: $7,192 (66.8%) | Q2 FY24: $13,507 (71.2%) | Q3 FY24: $18,120 (75.0%)

GAAP (Q3 FY24 | Y/Y | Q/Q)
• Revenue: $18,120 | +206% | +34%
• Gross Margin: 74.0% | +20.4 pts | +3.9 pts
• Operating Income: $10,417 | +1,633% | +53%
• Net Income: $9,243 | +1,259% | +49%
• Diluted EPS: $3.71 | +1,274% | +50%
• Cash Flow from Ops: $7,333 | +1,771% | +16%

Non-GAAP (Q3 FY24 | Y/Y | Q/Q)
• Revenue: $18,120 | +206% | +34%
• Gross Margin: 75.0% | +18.9 pts | +3.8 pts
• Operating Income: $11,557 | +652% | +49%
• Net Income: $10,020 | +588% | +49%
• Diluted EPS: $4.02 | +593% | +49%
• Cash Flow from Ops: $7,333 | +1,771% | +16%

All dollar figures are in millions other than EPS. Refer to Appendix for reconciliation of Non-GAAP measures.
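The Y/Y and Q/Q growth rates quoted on these slides follow directly from the reported quarterly revenue. A minimal, purely illustrative Python sketch using the figures from the slide above (the helper function is ours, not NVIDIA's):

```python
# Illustrative only: recompute the headline growth rates from the quarterly
# revenue figures on the Financial Summary slide (in $ millions).
revenue = {
    "Q3 FY23": 5_931,
    "Q2 FY24": 13_507,
    "Q3 FY24": 18_120,
}

def pct_change(new: float, old: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

yoy = pct_change(revenue["Q3 FY24"], revenue["Q3 FY23"])
qoq = pct_change(revenue["Q3 FY24"], revenue["Q2 FY24"])
print(f"Y/Y: +{yoy:.0f}%  Q/Q: +{qoq:.0f}%")  # roughly +206% Y/Y and +34% Q/Q, as reported
```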
#7 Data Center
Revenue ($M): Q3 FY23: $3,833 | Q4 FY23: $3,616 | Q1 FY24: $4,284 | Q2 FY24: $10,323 | Q3 FY24: $14,514 (up 279% Y/Y and 41% Q/Q)
Highlights
• Data Center compute revenue quadrupled from last year; Networking revenue nearly tripled
• Strong, broad-based demand for NVIDIA accelerated computing fueled by investment in the buildout of infrastructure for LLMs, recommendation engines, and gen AI applications
• Networking business now exceeds a $10 billion annualized revenue run rate
• NVIDIA H100 Tensor Core GPU instances are now generally available in virtually every cloud, and are in high demand
• Vast majority of revenue driven by NVIDIA Hopper HGX, with a lower contribution from the prior-gen Ampere GPU architecture
• New L40S GPU began to ship; first revenue quarter for GH200
• On track to exit the year at an annualized revenue run rate of $1 billion for our recurring software, support, and services offerings

#8 Gaming
Revenue ($M): Q3 FY23: $1,574 | Q4 FY23: $1,831 | Q1 FY24: $2,240 | Q2 FY24: $2,486 | Q3 FY24: $2,856 (up 81% Y/Y and 15% Q/Q)
Highlights
• Strong demand in the important back-to-school shopping season
• The RTX ecosystem continues to grow; there are now over 475 RTX-enabled games and applications
• Released TensorRT-LLM for Windows, which speeds on-device LLM inference by up to 4X
• GeForce NOW surpassed 1,700 PC titles, including Alan Wake II, Baldur's Gate 3, Cyberpunk 2077: Phantom Liberty, and Starfield

#9 Professional Visualization
Revenue ($M): Q3 FY23: $200 | Q4 FY23: $226 | Q1 FY24: $295 | Q2 FY24: $379 | Q3 FY24: $416 (up 108% Y/Y and 10% Q/Q)
Highlights
• AI emerging as a powerful demand driver, including inference for AI imaging in healthcare and edge AI in smart spaces and the public sector
• Launched a new line of desktop workstations based on NVIDIA RTX Ada Lovelace generation GPUs and ConnectX SmartNICs
• Mercedes-Benz is using Omniverse-powered digital twins to plan, design, build and operate its manufacturing and assembly facilities
• Foxconn will incorporate Omniverse into its manufacturing process
• Announced two new Omniverse Cloud services on Microsoft Azure, for virtual factory simulation and autonomous vehicle simulation

#10 Automotive
Revenue ($M): Q3 FY23: $251 | Q4 FY23: $294 | Q1 FY24: $296 | Q2 FY24: $253 | Q3 FY24: $261 (up 4% Y/Y and 3% Q/Q)
Highlights
• Growth primarily driven by self-driving platforms based on the NVIDIA DRIVE Orin SoC and the ramp of AI cockpit solutions with global OEM customers
• Extended automotive partnership with Foxconn to include NVIDIA DRIVE Thor, our next-generation automotive SoC

#11 Sources & Uses of Cash
Cash Flow from Operations ($M): Q3 FY23: $392 | Q4 FY23: $2,249 | Q1 FY24: $2,911 | Q2 FY24: $6,348 | Q3 FY24: $7,333 (up 1,771% Y/Y and 16% Q/Q)
Highlights
• Y/Y and Q/Q growth primarily driven by higher revenue, partially offset by higher cash tax payments
• Utilized cash of $3.9 billion towards shareholder returns, including $3.8 billion in share repurchases and $99 million in cash dividends
• Invested $291M in capex (includes principal payments on PP&E)
• Ended the quarter with $18.3B in gross cash and $9.8B in debt; $8.5B in net cash (see the sketch below)
Gross cash is defined as cash/cash equivalents & marketable securities. Debt is defined as principal value of debt. Net cash is defined as gross cash less debt.
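A small illustrative check of the cash figures on this slide, using the slide's own definition of net cash (gross cash less debt); all amounts are taken from the bullets above:

```python
# Illustrative check of the Sources & Uses of Cash figures (per the slide).
gross_cash_b = 18.3   # $B: cash, cash equivalents & marketable securities
debt_b = 9.8          # $B: principal value of debt
net_cash_b = gross_cash_b - debt_b
print(f"Net cash: ${net_cash_b:.1f}B")  # $8.5B, matching the slide

repurchases_m = 3_800  # $M in share repurchases (approximate, per the slide)
dividends_m = 99       # $M in cash dividends
print(f"Shareholder returns: ${(repurchases_m + dividends_m) / 1000:.1f}B")  # ~$3.9B
```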
#12 Q4 FY24 Outlook
• Revenue: $20.0 billion, plus or minus 2%. Expect strong Q/Q growth to be driven by Data Center, with continued strong demand for both compute and networking. Gaming will likely decline Q/Q, as it is now more aligned with notebook seasonality
• Gross Margins: 74.5% GAAP and 75.5% non-GAAP, plus or minus 50 basis points
• Operating Expense: Approximately $3.17 billion GAAP and $2.20 billion non-GAAP
• Other Income & Expense: Income of approximately $200 million for GAAP and non-GAAP, excluding gains and losses on non-affiliated investments
• Tax Rate: 15.0% GAAP and non-GAAP, plus or minus 1%, excluding discrete items
Refer to Appendix for reconciliation of Non-GAAP measures.
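Purely as an illustration of how the outlook items above combine, the midpoint arithmetic below derives an implied operating income figure. This is our back-of-the-envelope sketch, not company guidance, and it ignores the stated ranges and any discrete items:

```python
# Illustrative midpoint arithmetic from the Q4 FY24 outlook items above
# (not guidance; ranges and discrete items are ignored).
revenue = 20.0                           # $B, outlook midpoint
gm_gaap, gm_non_gaap = 0.745, 0.755      # gross margin, midpoint
opex_gaap, opex_non_gaap = 3.17, 2.20    # $B operating expense

op_income_gaap = revenue * gm_gaap - opex_gaap
op_income_non_gaap = revenue * gm_non_gaap - opex_non_gaap
print(f"Implied GAAP operating income:     ~${op_income_gaap:.1f}B")      # ~$11.7B
print(f"Implied non-GAAP operating income: ~${op_income_non_gaap:.1f}B")  # ~$12.9B
```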
#13 Key Announcements This Quarter

#14 New TensorRT-LLM Software More Than Doubles Inference Performance
• NVIDIA developed TensorRT-LLM, an open-source software library that enables customers to more than double the inference performance of their GPUs
• TensorRT-LLM on H100 GPUs provides up to an 8X performance speedup compared to prior-generation A100 GPUs running GPT-J 6B without the software, along with a 5.3X reduction in TCO and a 5.6X reduction in energy costs
• With TensorRT-LLM for Windows, LLMs and generative AI applications can run up to 4X faster locally on PCs and workstations powered by NVIDIA GeForce RTX and NVIDIA RTX GPUs
• TensorRT-LLM for data centers is now publicly available; TensorRT-LLM for Windows is in beta
[Charts: "TensorRT-LLM Supercharges Hopper Performance. Software optimizations double leading performance." GPT-J 6B inference: A100 1X, H100 (August) 4X, H100 with TensorRT-LLM 8X. Llama 2 inference: A100 1X, H100 (August) 2.6X, H100 with TensorRT-LLM 4.6X. Text summarization, variable input/output length, CNN/DailyMail dataset; A100 FP16 PyTorch eager mode | H100 FP8 | H100 FP8 with TensorRT-LLM and in-flight batching]

#15 NVIDIA Partners With Foxconn to Build Factories and Systems for the AI Industrial Revolution
• Foxconn, the world's largest manufacturer, will integrate NVIDIA technology to develop "AI factories", a new class of data centers
• Based on the NVIDIA accelerated computing platform, including NVIDIA GH200 and NVIDIA AI Enterprise software, these AI factories will power a wide range of applications, including digitalization of manufacturing and inspection workflows, development of AI-powered EVs and robotics platforms, and a growing number of language-based generative AI services
• In addition: Foxconn Smart EV will be built on NVIDIA DRIVE Hyperion 9, the next-gen platform for autonomous automotive fleets, powered by NVIDIA DRIVE Thor, our future automotive SoC; Foxconn Smart Manufacturing robotic systems will be built on the NVIDIA Isaac autonomous mobile robot platform; Foxconn Smart City will incorporate the NVIDIA Metropolis intelligent video analytics platform
[Diagram: AI factory elements including data, NVIDIA AI, NVIDIA DRIVE, NVIDIA Orin and an AV fleet]
AI factories are a new class of data centers, optimized for refining data and training, inferencing, and generating AI

#16 NVIDIA Partners With India Tech Giants to Advance AI Across World's Most Populous Nation
• NVIDIA announced collaborations with Reliance Industries, Tata Group and Infosys to bring AI technology and skills to India
• With Reliance, the companies will work together to develop India's own foundation LLM, trained on India's diverse languages and tailored for generative AI applications, and to build supercomputing infrastructure to support the exponential computational demands of AI
• With Tata, the collaboration will bring a state-of-the-art AI supercomputer to provide infrastructure-as-a-service and a platform for AI services in India
• With Infosys, the partnership will bring the NVIDIA AI Enterprise ecosystem of models, tools, runtimes and GPU systems to drive productivity gains with generative AI applications and solutions; Infosys plans to set up an NVIDIA Center of Excellence where it will train and certify 50,000 of its employees on NVIDIA AI technology

#17 NVIDIA Sets New LLM Training Record With Largest MLPerf Submission Ever
• NVIDIA set six new performance records in this round, with the performance increase stemming from a combination of advances in software and scaled-up hardware
• 2.8X faster on generative AI: completing a training benchmark based on a GPT-3 model with 175 billion parameters trained on 1 billion tokens in just 3.9 minutes
• 1.6X faster on training recommender models; 1.8X faster on training computer vision models
• The GPT-3 benchmark ran on NVIDIA Eos, a new AI supercomputer powered by 10,752 H100 GPUs and NVIDIA Quantum-2 InfiniBand networking
• The 10,752 H100 GPUs far surpassed the scaling in AI training in June, when NVIDIA used 3,584 Hopper GPUs
• The 3X scaling in GPU count delivered a 2.8X scaling in performance, a 93% efficiency rate, thanks in part to software optimizations (see the check below)
• Microsoft Azure achieved similar results on a nearly identical cluster, demonstrating the efficiency of NVIDIA AI in public cloud deployments
Six New Performance Records: GPT-3 175B (1B tokens): 3.9 minutes, 2.8X faster | DLRM-dcnv2: 1 minute, 1.6X faster | RetinaNet: 55.2 seconds, 1.8X faster | Stable Diffusion: 2.5 minutes, new workload | BERT-Large: 7.2 seconds, 1.1X faster | 3D U-Net: 46 seconds, 1.07X faster
MLPerf™ Training v3.1. Results retrieved from www.mlperf.org on November 8, 2023. Format: Chip Count, MLPerf ID | GPT-3: 3584x 3.0-2003, 10752x 3.1-2007 | Stable Diffusion: 1024x 3.1-2050 | DLRMv2: 128x 3.0-2065, 128x 3.1-2051 | BERT-Large: 3072x 3.0-2001, 3472x 3.1-2053 | RetinaNet: 768x 3.0-2077, 2048x 3.1-2052 | 3D U-Net: 432x 3.0-2067, 768x 3.1-2064. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.
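The 93% scaling-efficiency figure on the MLPerf slide follows from the two submission sizes and the reported speed-up. A quick, illustrative check in Python:

```python
# Scaling efficiency of the GPT-3 175B MLPerf submissions, per the slide:
# the June round used 3,584 Hopper GPUs; this round used 10,752 H100 GPUs
# and completed the benchmark 2.8x faster.
gpus_june, gpus_now = 3_584, 10_752
speedup = 2.8

gpu_scaling = gpus_now / gpus_june   # 3.0x more GPUs
efficiency = speedup / gpu_scaling   # fraction of ideal linear scaling
print(f"GPU scaling: {gpu_scaling:.1f}x, efficiency: {efficiency:.0%}")  # 3.0x, ~93%
```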
#18 New NVIDIA HGX H200 Supercharges Hopper
• NVIDIA H200 is the first GPU to offer HBM3e: faster, larger memory to fuel the acceleration of generative AI and large language models, while advancing scientific computing for HPC workloads
• H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4X more bandwidth compared with its predecessor, the NVIDIA A100
• Boosts inference speed by up to 2X compared to H100 GPUs when handling LLMs such as Llama 2
• Microsoft announced plans to add the H200 to Azure next year for larger model inference with no increase in latency
• H200-powered systems from the world's leading server manufacturers and cloud service providers are expected to begin shipping in the second quarter of 2024

#19 Grace Hopper Gains Significant Traction with Supercomputing Customers
• Initial shipments to Los Alamos National Lab and the Swiss National Supercomputing Centre took place in the third quarter
• The U.K. government announced it will build one of the world's fastest AI supercomputers with almost 5.5K Grace Hopper Superchips
• German supercomputing center Jülich will build its next-gen AI supercomputer with close to 24K Grace Hopper Superchips and Quantum-2 InfiniBand; it will be the world's most powerful AI system with over 90 exaflops of AI performance, and marks the debut of a quad NVIDIA GH200 Grace Hopper Superchip node configuration
• Combined AI compute capacity of all the supercomputers built on Grace Hopper across the U.S., EMEA and Japan next year is estimated to exceed 200 exaflops
[Chart: Cumulative AI performance (exaflops of AI, 0 to 400) of NVIDIA-powered supercomputers, 2015 through 2025, including Piz Daint, Tsubame 3, Summit, Pangea, Selene, Perlmutter, Polaris, Leonardo, Eos, OFP-II, JCAHPC, Venado, Vista, Alps, Isambard-AI, Delta, JUPITER, Azure ND H100 v5 and MareNostrum 5]

#20 NVIDIA AI Foundry Service for Enterprises on Microsoft Azure
• Introduced a new NVIDIA AI foundry service for the development and tuning of custom generative AI enterprise applications, running on Microsoft Azure
• Customers can bring their domain knowledge and proprietary data, and we help them build their AI models using our AI expertise and software stack in our DGX Cloud AI factory, all with enterprise-grade security and support
• Businesses can deploy their customized models with the NVIDIA AI Enterprise software runtime to power generative AI applications such as intelligent search, summarization, and content generation
• Industry leaders SAP SE, Amdocs and Getty Images are among the first customers of the NVIDIA AI foundry service
[Diagram: "Create from Foundation Model", "Your Enterprise Model", "Run Anywhere"; elements include AI Foundations, NeMo, DGX Cloud and Microsoft Azure, NVIDIA AI Enterprise, RAG, a vector store, prompts and LLM agents]

#21 NVIDIA Spectrum-X Ethernet Networking Platform for AI Available Soon from Dell, HPE and Lenovo
• Purpose-built for gen AI, Spectrum-X offers enterprises a new class of Ethernet networking that can achieve 1.6X higher networking performance for AI communication versus traditional Ethernet offerings
• Dell, Hewlett Packard Enterprise and Lenovo will be the first to integrate NVIDIA Spectrum-X Ethernet networking technologies for AI into their server lineups
• New systems bring together Spectrum-X with NVIDIA GPUs, NVIDIA AI Enterprise software and NVIDIA AI Workbench software to provide enterprises the building blocks to transform their businesses with generative AI
• Available in the first quarter of next year

#22 NVIDIA Collaborates With Genentech to Accelerate Drug Discovery Using Generative AI
• Genentech is pioneering the use of generative AI to discover and develop new therapeutics and deliver treatments to patients more efficiently
• NVIDIA will work with Genentech to accelerate Genentech's proprietary algorithms on NVIDIA DGX Cloud
• Genentech plans to use NVIDIA BioNeMo to help accelerate and optimize its AI drug discovery platform
• NVIDIA plans to use insights learned from this collaboration to improve its BioNeMo platform
• BioNeMo is now generally available as a training service

#23 NVIDIA Overview

#24 Headquarters: Santa Clara, CA
NVIDIA pioneered accelerated computing to help solve impactful challenges classical computers cannot. A quarter of a century in the making, NVIDIA accelerated computing is broadly recognized as the way to advance computing as Moore's law ends and AI lifts off. NVIDIA's platform is installed in several hundred million computers, is available in every cloud and from every server maker, powers 76% of the TOP500 supercomputers, and boasts 4.5 million developers.

#25 NVIDIA's Accelerated Computing Platform: Full-stack innovation across silicon, systems and software
[Diagram: layers spanning AI application frameworks, platforms, acceleration libraries, cloud-to-edge and data center-to-robotic systems, and three chips (GPU, CPU, DPU); products shown include Modulus, MONAI, Riva, Maxine, NeMo, Merlin, Morpheus, Tokkio, Avatar, Parabricks, Sionna, DeepStream, Isaac, Metropolis, Holoscan, RTX, NVIDIA HPC, NVIDIA AI, NVIDIA Omniverse, cuNumeric, CV-CUDA, DOCA, cuOpt, cuQuantum, RAPIDS, Spark, cuDNN, cuGraph, TensorRT, Triton, Magnum IO, Aerial, JetPack, Flare, and DGX, HGX, EGX, OVX, AGX, IGX, DRIVE and SuperPOD systems]
• With nearly three decades of singular focus, NVIDIA is expert at accelerating software and scaling compute by a Million-X, going well beyond Moore's law
• Accelerated computing requires full-stack innovation, optimizing across every layer of computing from silicon and systems to software and algorithms, demanding deep understanding of the problem domain
• Our full-stack platforms (NVIDIA HPC, NVIDIA AI, and NVIDIA Omniverse) accelerate high performance computing, AI and industrial digitalization workloads
• We accelerate workloads at data center scale, across thousands of compute nodes, treating the network and storage as part of the computing fabric
• Our platform extends from the cloud and enterprise data centers to supercomputing centers, edge computing and PCs

#26 What Is Accelerated Computing? A full-stack approach: silicon, systems, software
Not just a superfast chip: accelerated computing is a full-stack combination of chip(s) with specialized processors, algorithms in acceleration libraries, and domain experts to refactor applications, all to speed up the compute-intensive parts of an application.
Amdahl's law: the overall system speed-up (S) gained by optimizing a single part of a system by a factor (s) is limited by the proportion of execution time of that part (p):
S = 1 / ((1 − p) + p/s)
For example (see the worked sketch below):
• If 90% of the runtime can be accelerated by 100X, the application is sped up 9X
• If 99% of the runtime can be accelerated by 100X, the application is sped up 50X
• If 80% of the runtime can be accelerated by 500X, or even 1000X, the application is sped up 5X
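The three examples above are simply Amdahl's law evaluated at different (p, s) pairs. A minimal, illustrative Python sketch of the formula as stated on the slide:

```python
# Amdahl's law: overall speed-up S when a fraction p of the runtime
# is accelerated by a factor s.
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

print(round(amdahl_speedup(0.90, 100), 2))   # 9.17  -> "sped up 9X"
print(round(amdahl_speedup(0.99, 100), 2))   # 50.25 -> "sped up 50X"
print(round(amdahl_speedup(0.80, 500), 2))   # 4.96  -> "sped up 5X"
print(round(amdahl_speedup(0.80, 1000), 2))  # 4.98  -> still about 5X at 1000X
```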
#27 Why Accelerated Computing? Advancing computing in the post-Moore's Law era
• Accelerated computing is needed to tackle the most impactful opportunities of our time, like AI, climate simulation, drug discovery, ray tracing, and robotics
• NVIDIA is uniquely dedicated to accelerated computing, working top-to-bottom, refactoring applications and creating new algorithms, and bottom-to-top, inventing new specialized processors like the RT Core and Tensor Core
"It's the end of Moore's Law as we know it." - John Hennessy, Oct 23, 2018
"Moore's Law is dead." - Jensen Huang, GTC 2013
[Chart: Trillions of operations per second (TOPS), log scale, 1980 through 2030; single-threaded CPU performance growth slowing from roughly 1.5X per year to 1.1X per year, while GPU computing performance grows roughly 2X per year, compounding to 1,000X in 10 years]

#28 Waves of Adoption of Accelerated Computing: A generational computing platform shift
Enterprise | Cloud Service Providers & Consumer Internet | Industrial Digitalization | Autonomous Vehicles & Robotics
• A new computing era has begun. Accelerated computing enabled the rise of AI, which is driving a platform shift from general purpose to accelerated computing and enabling new, never-before-possible applications
• The trillion dollars of installed global data center infrastructure will transition to accelerated computing to achieve better performance, energy efficiency and cost by an order of magnitude
• Hyperscale cloud service providers and consumer internet companies have been the early adopters of AI and accelerated computing, with broader enterprise adoption now under way
• AI and accelerated computing will also make possible the next big waves: autonomous machines and industrial digitalization

#29 Accelerated Computing for Every Wave
Enterprise | Cloud Service Providers & Consumer Internet | Industrial Digitalization | Autonomous Vehicles & Robotics
• NVIDIA Omniverse is a software platform for designing, building, and operating 3D and virtual world simulations. It harnesses the power of NVIDIA graphics and AI technologies and runs on NVIDIA-powered data centers and workstations
• NVIDIA DRIVE is a full-stack platform for autonomous vehicles (AV) that includes hardware for in-car compute, such as the Orin system-on-chip, and the full AV and AI cockpit software stack
• NVIDIA DGX Cloud is a cloud service that gives enterprises immediate access to the infrastructure and software needed to train advanced models for generative AI and other groundbreaking applications
• NVIDIA AI Enterprise is the operating system of AI, with enterprise-grade security, stability, manageability and support. It is available on all major CSPs and server OEMs and supports enterprise deployment of AI in production
• NVIDIA HGX is an AI supercomputing platform purpose-built for AI. It includes 8 NVIDIA GPUs, as well as interconnect and networking technologies, delivering order-of-magnitude performance speed-ups for AI over CPU servers. It is broadly available from all major server OEMs/ODMs.
NVIDIA DGX, an AI server based on the same architecture, along with NVIDIA AI software and support, is also available

#30 NVIDIA's Accelerated Computing Ecosystem
Developers: 1.8M (2020) to 4.5M (2023) | AI Startups: 6K (2020) to 15K (2023) | Cumulative CUDA Downloads: 20M (2020) to 48M (2023) | GPU-Accelerated Applications: 700 (2020) to 3,200 (2023)
• The NVIDIA accelerated computing platform has attracted the largest ecosystem of developers, supporting a rapidly growing universe of applications and industry innovation
• Developers can engage with NVIDIA through CUDA, our parallel computing programming model introduced in 2006, or at higher layers of the stack, including libraries, pre-trained AI models, SDKs and other development tools
• 300 libraries | 600 AI models | 100 updated in the last year

#31 NVIDIA's Multi-Sided Platform and Flywheel
[Diagram: the NVIDIA accelerated computing virtuous cycle linking the installed base, developers, end users, and cloud providers & OEMs through scale, R&D investment and speed-ups]
• The virtuous cycle of NVIDIA's accelerated computing starts with an installed base of several hundred million GPUs, all compatible with the CUDA programming model
• For developers: NVIDIA's one architecture and large installed base give developers' software the best performance and greatest reach
• For end users: NVIDIA is offered by virtually every computing provider and accelerates the most impactful applications from cloud to edge
• For cloud providers and OEMs: NVIDIA's rich suite of acceleration platforms lets partners build one offering to address large markets including media & entertainment, healthcare, transportation, energy, financial services, manufacturing, retail, and more
• For NVIDIA: deep engagement with developers, computing providers, and customers in diverse industries enables unmatched expertise, scale, and speed of innovation across the entire accelerated computing stack, propelling the flywheel

#32 Huge ROI from AI Driving a Powerful New Investment Cycle
• AI can augment creativity and productivity by orders of magnitude across industries
• Knowledge workers will use copilots based on large language models to generate documents, answer questions, or summarize missed meetings, emails and chats, adding hours of productivity per week
• Copilots specialized for fields such as software development, legal services or education can boost productivity by as much as 50%
• Social media, search and e-commerce apps are using deep recommenders to offer more relevant content and ads to their customers, increasing engagement and monetization
• Creators can generate stunning, photorealistic images with a single text prompt, compressing workflows that take days or weeks into minutes in industries from advertising to game development
• Call center agents augmented with AI chatbots can dramatically increase productivity and customer satisfaction
• Drug discovery, financial services, agriculture and food services, and climate forecasting are seeing order-of-magnitude workflow acceleration from AI
[Graphic panels with market-size callouts: AI copilots (over 1B knowledge workers); AI software development (30M software developers globally); legal services and education (1M legal professionals and 9M educators in the US); customer service with AI (15M call center agents globally); search & social media ($700B in digital advertising annually); AI content creation (50M creators globally); financial services (678B annual credit card transactions); drug discovery, agri-food and climate (10^18 molecules in chemical space, 40 exabytes of genome data, 1B people in agri-food worldwide, Earth-2 for km-scale simulation)]
Source: Goldman Sachs, Cowen, Statista, Capital One, Wall Street Journal, Resource Watch, NVIDIA internal analysis
