Servers Designed for Single-Phase Immersion Cooling

These servers are designed specifically for immersion cooling and can be installed directly in GIGABYTE immersion cooling tanks or in mainstream cooling tanks on the market, eliminating the need for product compatibility testing and customization. These systematic modifications greatly reduce the time required to verify and adopt immersion cooling technology.

Designed to Ensure Steady Fluid Flow in the Tank

When a server is submerged in coolant, it is essential that the fluid circulates well, with warm fluid pumped out as cooler fluid is pumped in. Only by maintaining a stable coolant flow can the heat absorbed by the fluid be managed, and the system can automatically adjust the coolant flow rate to match operating needs. To ensure efficient heat removal, servers designed for single-phase immersion cooling must not only omit components that impede fluid flow, but also minimize dead space in the chassis where heat could accumulate. Based on the viscosity and specific heat capacity of the coolant, the server's heat dissipation rate must be continuously monitored and the flow adjusted to avoid problems. Servers developed for single-phase immersion cooling significantly reduce the barriers users may encounter when adopting a new technology, allowing them to focus on productivity.
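
To make the relationship between coolant properties and flow rate concrete, the short sketch below estimates the coolant flow needed to carry away a given heat load. It is a simplified illustration rather than GIGABYTE's actual control logic, and the heat load, fluid properties, and allowed temperature rise are assumed values.

```python
# Simplified sizing sketch: how much coolant flow is needed so that
# Q = m_dot * c_p * delta_T removes the server's heat load.
# All numeric inputs below are illustrative assumptions, not product data.

def required_flow_lpm(heat_load_w, specific_heat_j_per_kg_k,
                      density_kg_per_m3, allowed_delta_t_k):
    """Return the volumetric coolant flow rate in liters per minute."""
    mass_flow_kg_s = heat_load_w / (specific_heat_j_per_kg_k * allowed_delta_t_k)
    volume_flow_m3_s = mass_flow_kg_s / density_kg_per_m3
    return volume_flow_m3_s * 1000 * 60  # m^3/s -> L/min

# Example: a 2 kW node in a hypothetical dielectric fluid
# (c_p ~ 2100 J/(kg*K), density ~ 800 kg/m^3) with a 5 K allowed rise.
print(round(required_flow_lpm(2000, 2100, 800, 5), 1), "L/min")  # ~14.3 L/min
```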

Structural Integrity and Simplified Management

Servers are installed vertically in the tank and are lowered into or raised out of it, a consequence of the tank design optimized for single-phase immersion cooling. A server installed upright therefore needs rigid, firm structural integrity to ensure the chassis will not deform. The vertical orientation also means that cable routing and maintenance differ from traditional data centers. Therefore, we re-examined the design, strengthened the server chassis for immersion cooling, and rerouted the cables. For instance, the I/O ports are all arranged on the rear side facing up, and the network cables are rerouted to the sides so that maintenance is quick and easy. In addition, to make it more convenient for IT staff to install or remove a server, a bracket was added so that hooks can be used to lift the server.

Made with Materials Suitable for Various Coolants

Single-phase immersion cooling submerges the server in a cooling fluid that removes heat by coming into direct contact with the heat-generating components. GIGABYTE had to evaluate how the chemical composition of the coolant may affect the components, as well as how temperature and operating conditions affect the coolant. Through theory and experimental experience, we refined the design and will continue to explore materials suitable for immersion cooling.

TO15-Z40-IA01 Block Diagram

Powering the Next Generation of Server Architecture and Energy Efficiency

The path to AMD's 5nm 'Zen 4' architecture was paved with many successful generations of EPYC innovations and chiplet designs, and AMD EPYC 9004 Series processors continue this progression. Adding a host of new features to target a wide range of workloads, the new family of EPYC processors will deliver even better CPU performance and performance per watt, on a platform whose PCIe 5.0 lanes offer 2x the throughput of PCIe 4.0 and which supports 50% more memory channels. For this new platform, GIGABYTE has products ready to get the most out of EPYC-based systems, with support for fast PCIe Gen5 accelerators and Gen5 NVMe drives as well as high-performance DDR5 memory.

4th Gen AMD EPYC™ Processors for SP5 Socket

  • 5nm architecture: Compute density increased with more transistors packed in less space
  • 128 CPU cores: Zen 4 and Zen 4c cores dedicated to targeted workloads
  • Large L3 cache: Select CPUs have 3x or more L3 cache for technical computing
  • SP5 compatibility: All EPYC 9004 Series processors are supported on one platform
  • 12 memory channels: Memory capacity can achieve 6TB per socket
  • DDR5 memory: Increased memory throughput and higher DDR5 capacity per DIMM
  • PCIe 5.0 lanes: Increased I/O throughput, achieving 128GB/s bandwidth across PCIe x16 lanes
  • CXL 1.1+ support: Disaggregated compute architecture possible via Compute Express Link
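
As a quick sanity check on two of the figures above, the sketch below reproduces the arithmetic. The DIMM capacity, the DIMMs-per-channel count, and the treatment of the PCIe figure as combined bidirectional raw bandwidth are assumptions made for illustration, not platform guarantees.

```python
# Back-of-the-envelope checks for the memory and PCIe figures listed above.

# Memory: 12 DDR5 channels per socket, assuming 2 DIMMs per channel and 256GB DIMMs.
channels, dimms_per_channel, dimm_gb = 12, 2, 256
print("Memory per socket:", channels * dimms_per_channel * dimm_gb / 1024, "TB")  # 6.0 TB

# PCIe 5.0: 32 GT/s per lane, roughly 4 GB/s per lane per direction
# (ignoring 128b/130b encoding overhead); x16 lanes, both directions combined.
gb_per_lane_per_dir = 32 / 8
print("PCIe 5.0 x16:", gb_per_lane_per_dir * 16 * 2, "GB/s bidirectional")  # 128.0 GB/s
```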

GIGABYTE OCP ORV3 Compliant Solutions

GIGABYTE is an active member of the OCP (Open Compute Project), regularly attending the OCP's annual summits, continuously designing and releasing new compute, storage, and GPU server hardware based on the OCP Open Rack Standard specifications, and providing best-performing mezzanine cards for your OCP solution. GIGABYTE's latest OCP server product line is based on the OCP Open Rack V3 specification. The products are designed for a 21" OCP rack and feature a separate PSU system, with power supplied to each server node by a bus-bar system running along the rear of the rack.

Add-on Card
GPU Server
Compute Node
Node Tray
JBOD
OCP 21" Rack

GIGABYTE OCP ORV3 Compliant Solutions Advantages

High CPU Performance

Efficient Rack Density

  • Optimal design (2OU 2-node / 2OU 3-node) balances density against power consumption.

Energy Efficiency

Thermal Optimization

  • Rack and nodes developed with thermal optimization based on the cold aisle/hot aisle concept.
  • Reduced cooling power consumption.

Optimal Price

Greater Power Efficiency

  • Low PUE helps reduce data center operating expense (see the PUE sketch after this section).
  • Central power shelf design enhances power efficiency and optimizes power consumption.

Availability

Easy Maintenance

  • Easier maintenance from the front cold aisle instead of the hot aisle.
  • Tool-less design for easy replacement and repair.
  • Fewer PSUs in the whole rack to minimize maintenance effort.

Continuous Operation

Higher MTBF

  • Centralized power supplies and the removal of unnecessary components enhance MTBF (Mean Time Between Failures).
  • Reduces system downtime caused by component failure and minimizes maintenance effort.
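
To illustrate why a low PUE (Power Usage Effectiveness, the ratio of total facility power to IT equipment power) reduces operating expense, the sketch below compares the annual cost of non-IT overhead power at two PUE values. The IT load, PUE figures, and electricity price are purely illustrative assumptions.

```python
# Illustrative comparison of annual overhead (non-IT) energy cost at two PUE values.

def annual_overhead_cost(it_load_kw, pue, price_per_kwh):
    """Cost per year of the facility power consumed beyond the IT load itself."""
    overhead_kw = it_load_kw * (pue - 1)           # cooling, power conversion, etc.
    return overhead_kw * 24 * 365 * price_per_kwh  # kWh per year * price

# Hypothetical 100 kW IT load at $0.10 per kWh:
print(round(annual_overhead_cost(100, 1.5, 0.10)))  # 43800 USD/year
print(round(annual_overhead_cost(100, 1.2, 0.10)))  # 17520 USD/year
```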

Flexible Node Configuration

GIGABYTE’s OCP Open Rack Version 3 compliant solutions maintain the cost-efficient designs created in version 2, yet these new solutions deliver even more power to each node. The GIGABYTE TO23-BT0, a 2OU node tray, supports three nodes and up to six CPUs in a single tray. A similar node tray, the TO25-BT0, is designed for more PCIe expansion slots; each tray supports up to four dual-slot GPUs or eight full-height, full-length single-slot cards to meet growing HPC and AI needs in data centers.
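
As a rough way to reason about rack-level density with 2OU trays such as the TO23-BT0, the sketch below computes tray, node, and CPU counts from an assumed amount of usable rack space. The usable-OU figure is a hypothetical input, not part of the ORV3 specification or the product description above.

```python
# Rough rack-density estimate for 2OU node trays (illustrative only).

def rack_density(usable_ou, tray_ou=2, nodes_per_tray=3, cpus_per_tray=6):
    """Tray, node, and CPU counts for a rack, using TO23-BT0-style figures
    (2OU tray, 3 nodes, up to 6 CPUs) from the description above."""
    trays = usable_ou // tray_ou
    return {"trays": trays, "nodes": trays * nodes_per_tray, "cpus": trays * cpus_per_tray}

# Example with a hypothetical 40 OU of usable compute space:
print(rack_density(40))  # {'trays': 20, 'nodes': 60, 'cpus': 120}
```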