NVIDIA Unveils Enterprise Reference Architectures for AI Factories

As the world transitions from general-purpose to accelerated computing, finding a path to building data center infrastructure at scale is becoming more important than ever. Enterprises must navigate uncharted waters when designing and deploying infrastructure to support these new AI workloads.

Constant advancements in model capabilities and software frameworks, along with the novelty of these workloads, mean best practices and standardized approaches are still in their infancy. This state of flux can make it difficult for enterprises to establish long-term strategies and invest in infrastructure with confidence.

To address these challenges, NVIDIA is unveiling Enterprise Reference Architectures (Enterprise RAs). These comprehensive blueprints help NVIDIA systems partners and joint customers build their own AI factories: high-performance, scalable and secure data centers for manufacturing intelligence.

Building AI Factories to Unlock Business Growth

NVIDIA Enterprise RAs help organizations avoid pitfalls when designing AI factories by providing full-stack hardware and software recommendations, and detailed guidance on optimal server, cluster and network configurations for modern AI workloads.

Enterprise RAs can reduce the time and cost of deploying AI infrastructure solutions by providing a streamlined approach for building flexible and cost-effective accelerated infrastructure, while ensuring compatibility and interoperability.

Each Enterprise RA includes recommendations for:

  • Accelerated infrastructure based on an optimized NVIDIA-Certified server configuration, featuring the latest NVIDIA GPUs, CPUs and networking technologies, tested and validated to deliver performance at scale.
  • AI-optimized networking with the NVIDIA Spectrum-X AI Ethernet platform and NVIDIA BlueField-3 DPUs to deliver peak network performance, along with guidance on optimal network configurations at multiple design points to address varying workload and scale requirements.
  • The NVIDIA AI Enterprise software platform for production AI, which includes NVIDIA NeMo and NVIDIA NIM microservices for easily building and deploying AI applications, and NVIDIA Base Command Manager Essentials for infrastructure provisioning, workload management and resource monitoring (see the usage sketch after this list).

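To make the software layer more concrete, the following is a minimal sketch of calling a deployed NIM large language model microservice through its OpenAI-compatible API from Python. The endpoint URL, model name and API key shown here are placeholder assumptions for illustration; actual values depend on which NIM is deployed and how the cluster is configured.

    # Minimal sketch: querying a locally deployed NVIDIA NIM microservice
    # via its OpenAI-compatible endpoint. URL, model and key are assumptions.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
        api_key="not-used",                   # local deployments typically ignore the key
    )

    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",   # example model; use the NIM you deployed
        messages=[{"role": "user", "content": "Summarize what an AI factory is in one sentence."}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)
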
Businesses that deploy AI workloads on partner solutions based upon Enterprise RAs, which are informed by NVIDIA's years of expertise in designing and building large-scale computing systems, will benefit from:

  • Accelerated time to market: By using NVIDIA’s structured approach and recommended designs, enterprises can deploy AI solutions faster, reducing the time to achieve business value.
  • Performance: Build upon tested and validated technologies with the confidence that AI workloads will run at peak performance.
  • Scalability and manageability: Grow AI infrastructure while incorporating design best practices that enable flexibility and scale and help ensure optimal network performance.
  • Security: Run workloads securely on AI infrastructure that’s engineered with zero trust in mind, supports confidential computing and is optimized for the latest cybersecurity AI innovations.
  • Reduced complexity: Accelerate deployment timelines, while avoiding design and planning pitfalls, through optimal server, cluster and network configurations for AI workloads.

Availability

Solutions based upon NVIDIA Enterprise RAs are available from NVIDIA’s global partners, including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro.

Learn more about NVIDIA-Certified Systems and NVIDIA Enterprise Reference Architectures.