Oracle Cloud Infrastructure (OCI) is offering customers access to what it claims is the largest AI supercomputer in the cloud, with up to 131,072 NVIDIA Blackwell GPUs delivering 2.4 zettaFLOPS of peak performance.

 

OCI Supercluster includes OCI Compute Bare Metal, ultra-low-latency RoCEv2 networking with NVIDIA ConnectX-7 NICs and ConnectX-8 SuperNICs or NVIDIA Quantum-2 InfiniBand-based networking, and a choice of HPC storage.

 

OCI Superclusters can be ordered with OCI Compute powered by NVIDIA H100 or H200 Tensor Core GPUs or NVIDIA Blackwell GPUs. OCI Superclusters with H100 GPUs can scale up to 16,384 GPUs, with up to 65 exaFLOPS of peak performance and 13 Pb/s of aggregate network throughput.

 

 

OCI Superclusters with H200 GPUs will scale to 65,536 GPUs, with up to 260 exaFLOPS of peak performance and 52 Pb/s of aggregate network throughput, and will be available later this year.
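
As a rough sanity check, the aggregate figures quoted above can be divided back down to per-GPU numbers. The short Python sketch below does that arithmetic; the per-GPU values it prints are back-of-envelope estimates implied by the article's totals, not published per-GPU specifications.

```python
# Back-of-envelope per-GPU figures implied by the cluster totals quoted above.
# The totals (GPU counts, exaFLOPS, Pb/s) come from the article; the derived
# per-GPU numbers are illustrative only, not official specifications.

clusters = {
    # name: (gpu_count, peak_exaflops, aggregate_network_pb_per_s)
    "H100 Supercluster": (16_384, 65, 13),
    "H200 Supercluster": (65_536, 260, 52),
}

for name, (gpus, exaflops, pb_per_s) in clusters.items():
    per_gpu_pflops = exaflops * 1_000 / gpus       # 1 exaFLOPS = 1,000 petaFLOPS
    per_gpu_gbps = pb_per_s * 1_000_000 / gpus     # 1 Pb/s = 1,000,000 Gb/s
    print(f"{name}: ~{per_gpu_pflops:.1f} PFLOPS, ~{per_gpu_gbps:.0f} Gb/s per GPU")
```

In both cases the quoted totals work out to roughly 4 PFLOPS of peak compute and just under 800 Gb/s of aggregate network throughput per GPU.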

 

 

OCI Superclusters with NVIDIA GB200 NVL72 liquid-cooled bare-metal instances will use NVLink and NVLink Switch to enable up to 72 Blackwell GPUs to communicate with each other at an aggregate bandwidth of 129.6 TB/s in a single NVLink domain.
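
The per-GPU share of that NVLink domain follows from a simple division, sketched below; the 1.8 TB/s result matches the per-GPU bandwidth commonly cited for fifth-generation NVLink.

```python
# Per-GPU NVLink bandwidth implied by the NVL72 domain figure quoted above:
# 72 Blackwell GPUs sharing 129.6 TB/s of aggregate NVLink bandwidth.

gpus_per_domain = 72
aggregate_tb_per_s = 129.6

per_gpu_tb_per_s = aggregate_tb_per_s / gpus_per_domain
print(f"~{per_gpu_tb_per_s:.1f} TB/s of NVLink bandwidth per GPU")  # ~1.8 TB/s
```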

 

NVIDIA Blackwell GPUs, available in the first half of 2025, will use fifth-generation NVLink, NVLink Switch, and cluster networking to enable GPU-to-GPU communication in a single cluster.

Original article source:

Oracle offering 2.4 zettaFLOPS

FAQ

1. What is zettaFLOPS, and how does it relate to Oracle’s offering?

ZettaFLOPS refers to a unit of computing power, where one zettaFLOP equals one sextillion (10²¹) floating-point operations per second. Oracle’s offering of 2.4 zettaFLOPS represents a massive leap in computing capability, aimed at handling large-scale data processing, AI, and cloud workloads.
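
To put the headline figure in per-GPU terms, the minimal Python sketch below divides 2.4 zettaFLOPS across the up to 131,072 Blackwell GPUs mentioned above. The roughly 18 PFLOPS per GPU that falls out is an implied best-case peak derived from the article's totals (headline figures of this kind are typically quoted at the lowest supported numeric precision), not an official per-GPU specification.

```python
# Convert the headline 2.4 zettaFLOPS into per-GPU terms, assuming the full
# 131,072-GPU Blackwell configuration described in the article. The result is
# an implied best-case peak, not an official per-GPU specification.

ZETTA = 1e21   # 1 zettaFLOPS = 10**21 floating-point operations per second
PETA = 1e15    # 1 petaFLOPS  = 10**15 floating-point operations per second

total_flops = 2.4 * ZETTA
gpu_count = 131_072

per_gpu_pflops = total_flops / gpu_count / PETA
print(f"~{per_gpu_pflops:.1f} PFLOPS per GPU at peak")  # prints ~18.3 PFLOPS
```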

 

2. What industries can benefit from Oracle’s 2.4 zettaFLOPS performance?

Industries such as healthcare, finance, autonomous vehicles, AI research, and scientific simulations can greatly benefit. The ability to process massive datasets at unprecedented speeds will accelerate innovation in fields requiring heavy computational power.

 

3. How does Oracle achieve 2.4 zettaFLOPS?

Oracle achieves this peak figure by scaling its OCI Supercluster infrastructure: bare-metal compute with up to 131,072 NVIDIA Blackwell GPUs, ultra-low-latency RoCEv2 or NVIDIA Quantum-2 InfiniBand cluster networking, and high-performance storage, all designed to run massively distributed AI workloads.

 

4. How does Oracle’s 2.4 zettaFLOPS compare to other cloud providers?

Oracle’s 2.4 zettaFLOPS offering positions it among the top in the industry for cloud computing power. This compares favorably with other cloud providers that offer petascale or exascale computing capabilities, enabling Oracle to cater to the most demanding enterprise applications.

 

5. What are the potential use cases for Oracle’s zettaFLOPS computing power?

Potential use cases include large-scale machine learning training, real-time data analytics, simulation of complex scientific models (such as climate change), genomic sequencing, and other tasks that require significant computational resources.

 

6. Is Oracle’s 2.4 zettaFLOPS offering available to all customers?

While Oracle’s cloud services are generally available to enterprise customers, access to the full 2.4 zettaFLOPS capability may be reserved for select high-end users or specialized use cases, depending on pricing and infrastructure needs.

 

7. How does Oracle’s zettaFLOPS performance impact its position in the AI and cloud market?

Oracle’s offering of 2.4 zettaFLOPS boosts its competitiveness in the AI and cloud market, enabling the company to attract enterprises with heavy AI, machine learning, and data science workloads. This performance also reinforces Oracle’s reputation for offering cutting-edge infrastructure solutions.
