
HPE NVIDIA H100 - GPU computing processor

Product code: R9S41C | EAN/UPC: 0190017604312
€ 62,782.72 excl. VAT (€ 75,967.09 incl. 21% VAT)
NVIDIA H100 - GPU computing processor - NVIDIA H100 Tensor Core - 80 GB HBM3 - PCI Express 5.0 - fanless - for ProLiant DL380 Gen10 Plus, DL380A, DL385 Gen11
  • Transformational AI training
  • Real-time deep learning inference
  • Exascale high-performance computing
  • Accelerated data analytics
  • Enterprise-ready utilization
  • Built-in confidential computing
In stock: 0 unit(s)
Delivery time:

Description

Product features
  • Transformational AI training
    H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that provides up to 4X faster training over the prior generation for GPT-3 (175B) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; NDR Quantum-2 InfiniBand networking, which accelerates communication by every GPU across nodes; PCIe Gen5; and NVIDIA Magnum IO software delivers efficient scalability from small enterprise systems to massive, unified GPU clusters. Deploying H100 GPUs at data center scale delivers outstanding performance and brings the next generation of exascale high-performance computing (HPC) and trillion-parameter AI within the reach of all researchers. A short FP8 training sketch follows this feature list.
  • Real-time deep learning inference
    AI solves a wide array of business challenges using an equally wide array of neural networks. A great AI inference accelerator must deliver not only the highest performance but also the versatility to accelerate these networks. H100 extends NVIDIA's market leadership in inference with advancements that accelerate inference by up to 30X and deliver the lowest latency. Fourth-generation Tensor Cores speed up all precisions, including FP64, TF32, FP32, FP16, INT8, and now FP8, reducing memory usage and increasing performance while maintaining accuracy for LLMs.
  • Exascale high-performance computing
    The NVIDIA data center platform consistently delivers performance gains beyond Moore's Law. And H100's new breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world's most important challenges. H100 triples the floating-point operations per second (FLOPS) of double-precision Tensor Cores, delivering 60 teraflops of FP64 computing for HPC. AI-fused HPC applications can also leverage H100's TF32 precision to achieve one petaflop of throughput for single-precision matrix-multiply operations, with zero code changes. H100 also features new DPX instructions that deliver 7X higher performance over A100 and 40X speedups over CPUs on dynamic programming algorithms such as Smith-Waterman for DNA sequence alignment and protein alignment for protein structure prediction. A rough FP64 throughput check appears after this list.
  • Accelerated data analytics
    Data analytics often consumes the majority of time in AI application development. Since large datasets are scattered across multiple servers, scale-out solutions with commodity CPU-only servers get bogged down by a lack of scalable computing performance. Accelerated servers with H100 deliver the compute power - along with 3 terabytes per second (TB/s) of memory bandwidth per GPU and scalability with NVLink and NVSwitch - to tackle data analytics with high performance and scale to support massive datasets. Combined with NVIDIA Quantum-2 InfiniBand, Magnum IO software, GPU-accelerated Spark 3.0, and NVIDIA RAPIDS, the NVIDIA data center platform is uniquely able to accelerate these huge workloads with unparalleled levels of performance and efficiency. A brief RAPIDS cuDF example follows the list below.
  • Enterprise-ready utilization
    IT managers seek to maximize utilization (both peak and average) of compute resources in the data center. They often employ dynamic reconfiguration of compute to right-size resources for the workloads in use. Second-generation Multi-Instance GPU (MIG) technology in H100 maximizes the utilization of each GPU by securely partitioning it into as many as seven separate instances. With confidential computing support, H100 allows secure, end-to-end, multi-tenant usage, making it ideal for cloud service provider (CSP) environments. With MIG, infrastructure managers can standardize their GPU-accelerated infrastructure while retaining the flexibility to provision GPU resources at finer granularity, giving developers the right amount of secure, accelerated compute and making full use of every GPU. A short MIG inspection sketch follows this feature list.
  • Built-in confidential computing
    Traditional confidential computing solutions are CPU-based, which is too limited for compute-intensive workloads like AI and HPC. NVIDIA Confidential Computing is a built-in security feature of the NVIDIA Hopper architecture that makes H100 the world's first accelerator with confidential computing capabilities. Users can protect the confidentiality and integrity of their data and applications in use while accessing the unsurpassed acceleration of H100 GPUs. It creates a hardware-based trusted execution environment (TEE) that secures and isolates the entire workload running on a single H100 GPU, multiple H100 GPUs within a node, or individual MIG instances. GPU-accelerated applications can run unchanged within the TEE and don't have to be partitioned. Users can combine the power of NVIDIA software for AI and HPC with the security of a hardware root of trust offered by NVIDIA confidential computing.
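
The FP8 training path described under "Transformational AI training" is exposed in software through NVIDIA's Transformer Engine library. The following is a minimal sketch, assuming a single te.Linear layer and synthetic data purely for illustration; layer size, batch size, and optimizer are not part of the product data.

```python
# Minimal FP8 training sketch with NVIDIA Transformer Engine on a Hopper GPU.
# Layer size, batch size and optimizer choice are illustrative assumptions.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe; HYBRID uses E4M3 in the forward pass and
# E5M2 for gradients in the backward pass.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

model = te.Linear(4096, 4096, bias=True).cuda()   # one layer keeps the sketch short
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(64, 4096, device="cuda")          # synthetic input batch

# Matmuls inside this context run in FP8 on the H100 Tensor Cores.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = model(x)
    loss = y.float().pow(2).mean()                # dummy loss for illustration

loss.backward()
optimizer.step()
```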
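The 60 teraflops FP64 figure quoted under "Exascale high-performance computing" can be roughly sanity-checked with a plain double-precision matrix multiply. A sketch using CuPy follows; the matrix size is an arbitrary assumption, and the measured number depends on clocks, driver, and cuBLAS version rather than being a guaranteed result.

```python
# Sketch: rough FP64 GEMM throughput measurement with CuPy on an H100.
# Matrix size n is an arbitrary assumption; results vary with clocks/drivers.
import cupy as cp

n = 8192
a = cp.random.rand(n, n, dtype=cp.float64)
b = cp.random.rand(n, n, dtype=cp.float64)

_ = a @ b  # warm-up call so cuBLAS is initialised before timing

start, end = cp.cuda.Event(), cp.cuda.Event()
start.record()
c = a @ b
end.record()
end.synchronize()

ms = cp.cuda.get_elapsed_time(start, end)     # elapsed time in milliseconds
tflops = 2 * n**3 / (ms / 1e3) / 1e12         # ~2*n^3 FLOPs for an n x n matmul
print(f"FP64 GEMM: {ms:.1f} ms, roughly {tflops:.1f} TFLOPS")
```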
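For the "Accelerated data analytics" workflow, the RAPIDS stack mentioned above lets pandas-style operations run entirely in GPU memory. A small cuDF sketch follows; the file name and column names are made-up placeholders, not part of the product data.

```python
# Sketch: GPU-resident data analytics with RAPIDS cuDF.
# "transactions.csv", "customer_id" and "amount" are hypothetical placeholders.
import cudf

# Load the CSV straight into GPU memory (the H100's 80 GB of HBM3).
df = cudf.read_csv("transactions.csv")

# Group-by aggregation and sort execute on the GPU.
per_customer = (
    df.groupby("customer_id")["amount"]
      .sum()
      .sort_values(ascending=False)
)

print(per_customer.head(10))
```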
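MIG partitioning as described under "Enterprise-ready utilization" is normally configured by an administrator with nvidia-smi, but its state can also be inspected programmatically through NVML. Below is a sketch using the nvidia-ml-py bindings; device index 0 is an assumption, and the available MIG instances depend on how the GPU has been partitioned.

```python
# Sketch: inspect Multi-Instance GPU (MIG) state on an H100 via NVML
# (Python package "nvidia-ml-py", imported as pynvml). Device index 0 is assumed.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

# Walk the (up to seven) MIG instances exposed as separate compute devices.
for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
    try:
        mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
    except pynvml.NVMLError:
        continue  # this slot has no instance configured
    mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
    print(f"MIG instance {i}: {mem.total / 2**30:.1f} GiB of framebuffer")

pynvml.nvmlShutdown()
```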

Specifications

General
  Device Type: GPU computing processor - fanless
  Bus Type: PCI Express 5.0
  Graphics Engine: NVIDIA H100 Tensor Core
  Features: Multi-Instance GPU (MIG) technology, 51 TFLOPS peak single-precision floating-point performance
Memory
  Size: 80 GB
  Technology: HBM3
  Bandwidth: 2 TB/s
System Requirements
  OS Required: Ubuntu 20.04, SUSE Linux Enterprise Server 15 SP3, SUSE Linux Enterprise Server 15 SP4, Red Hat Enterprise Linux 9.0, Microsoft Windows Server 2022
Miscellaneous
  Width: 26.7 cm
  Depth: 11.1 cm
  Height: 3.47 cm
  Weight: 1.69 kg
Manufacturer Warranty
  Service & Support: Limited warranty - 3 years
Compatibility Information
  Designed For: HPE ProLiant DL380 Gen10 Plus, DL380A, DL385 Gen11

This data is supplied by 1WorldSync and matched through an automated process. Truedata is not responsible for errors in this data.

Related products

Origin Storage Rack slide rail kit

Product code: DELL-SR-R610
EAN/UPC: 5056006101109
Origin Storage - Rack slide rail kit - for Dell PowerEdge R610
  • Ball Bearing slide rails
  • Does not require the Dell Rail kit
  • Works with any hole type (round, threaded, square)
Delivery time: 1-6 weeks

€ 177.25

Dell 2/4-Post Static Rack Rails for 1U and 2U systems

Product code: 770-BBIF
EAN/UPC: 0884116288480
Dell 2/4-Post Static Rack Rails for 1U and 2U systems - Rack rail kit - for PowerEdge R210, R220, R310, R410, R415
  • Provides tool-less support for 4-post racks
  • Supports a wider variety of racks
Available immediately

€ 63.37

Rittal support roller for busbars (Auflagerolle für Stromschienen)

Product code: 4055714
EAN/UPC: 4028177922860
Rittal support roller for busbars
Delivery time: 1-6 weeks

€ 128.51

DIGITUS 254mm (10") Cable management with patchcord access hole

Product code: DN-10-ORG-1-2U-B
EAN/UPC: 4016032473992
DIGITUS DN-10-ORG-1-2U-B - Cable management panel with brush - rack mountable - black, RAL 9005 - 0.5U - 10"
  • Provides neat cable management
  • Slim design that takes up only 0.5 height units
  • Easy and quick to mount or dismount
Available immediately

€ 8.13
