Cooling Trends in Data Center IT Enclosures: Liquid Cooling Takes the Lead

March 22, 2022

Market Insight

According to the International Energy Agency (IEA), data centers account for about 1% of global electricity demand, or roughly 200 terawatt-hours (TWh) per year.

That is a lot of electricity, and a lot of heat being generated in the process.

When a data center runs too hot, it suffers processing delays, latency, and potentially hardware failure. So, all of that heat requires some type of cooling. In fact, cooling accounts for around 40% of all the energy used in data centers, often surpassing the cost of powering the IT equipment itself.

Cooling grows more important as packing densities and processor capacities increase. Not long ago, a data center enclosure (the cabinet or rack) typically housed systems requiring 6-10 kW of incoming electrical power. Today, the demand for more computing power has pushed that to 15-30 kW per cabinet, with some predicting 50 kW per rack within the next five years.

Conventional means of handling the generated heat cannot keep up with this rapid densification. Removing heat with air, for instance, is low risk and convenient, yet it is expensive and may not handle the highest heat loads.
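To put rough numbers on why air struggles at higher densities, a simple sensible-heat estimate shows the airflow a rack would need. The physical constants are standard room-temperature values, and the 10 K intake-to-exhaust temperature rise is an illustrative assumption, not a figure from this article:

```python
# Sensible-heat estimate: Q = rho * cp * V_dot * dT  ->  V_dot = Q / (rho * cp * dT)
# Illustrative values: air density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K),
# allowable intake-to-exhaust rise ~10 K (all assumptions for this sketch).

RHO_AIR = 1.2          # kg/m^3
CP_AIR = 1005.0        # J/(kg*K)
M3S_TO_CFM = 2118.88   # cubic meters per second -> cubic feet per minute

def airflow_for_load(q_watts, delta_t_k=10.0):
    """Volumetric airflow (m^3/s) needed to remove q_watts at a given temperature rise."""
    return q_watts / (RHO_AIR * CP_AIR * delta_t_k)

for kw in (10, 30, 50):
    v = airflow_for_load(kw * 1000)
    print(f"{kw} kW rack -> {v:.2f} m^3/s (~{v * M3S_TO_CFM:,.0f} CFM)")
```

Under these assumptions, a 30 kW rack needs on the order of 2.5 m³/s (over 5,000 CFM) of cold air, which helps explain why air cooling becomes impractical as densities climb.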

Fortunately, technological advancements have made it possible to reduce costs while ensuring optimum server performance. Many data center IT managers have implemented more effective and scalable heat removal: liquid cooling.

Historical Water-Based Cooling

First, a quick look at the recent past. For decades, data centers used chilled water to cool the air being brought in by computer room air handlers (or CRAHs) and deliver the cold air to racks. Computer room air conditioners (or CRACs) worked on the same principle, but they were powered by a compressor that drew air across a refrigerant-filled cooling unit.

Raised floor cooling, another chilled-water method, forced cold air from a CRAH or CRAC into the space beneath the data center's raised floor. The cold air entered the room in front of the server intakes, passed through the servers, and, now heated, returned to the CRAC/CRAH to be cooled again.

None of these methods was very energy-efficient, but they were popular because the equipment was relatively inexpensive. In addition to requiring a great deal of power, air cooling can introduce both pollutants and condensation into the data center.

Today’s Liquid Cooling Technologies

Until recently, air-cooled options have been the most prevalent way to cool racks. With developments in liquid cooling, however, many data centers now enjoy far greater heat removal capacity than ambient air cooling can deliver. Water can carry roughly 3,500 times more heat than the same volume of air, opening up multiple options for data center deployment.
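That roughly 3,500× figure can be sanity-checked by comparing the volumetric heat capacity (density times specific heat) of the two fluids, using standard room-temperature property values (assumptions for this sketch, not figures from the article):

```python
# Rough check of the "water carries ~3,500x more heat than air" claim,
# comparing volumetric heat capacity (density * specific heat) per kelvin.
# Room-temperature property values (illustrative assumptions):
RHO_WATER, CP_WATER = 1000.0, 4186.0   # kg/m^3, J/(kg*K)
RHO_AIR, CP_AIR = 1.2, 1005.0          # kg/m^3, J/(kg*K)

ratio = (RHO_WATER * CP_WATER) / (RHO_AIR * CP_AIR)
print(f"Per unit volume, water absorbs ~{ratio:,.0f}x as much heat as air per kelvin")
```

The result lands near 3,500, consistent with the figure quoted above; the exact number shifts slightly with temperature and pressure.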

A data center’s specific installation location, application, operating environment, and other related factors determine which system, or combination of systems, is best for each data center.

Direct-to-Chip Cooling

This liquid cooling method uses pipes to deliver coolant to a cold plate mounted directly on a motherboard’s processors, dispersing their heat into a chilled-water loop.

Because direct-to-chip (DTC) cooling targets what generates the majority of heat in an individual appliance, the main processor, it is one of the most effective forms of cooling. The downside: only a portion of the server’s components is liquid-cooled, so fans are still needed.

A waterless variant swaps water for a dielectric fluid: two-phase liquid cooling (2PLC) relies on a highly efficient boiling-and-condensation cycle. The design is light and compact, moves large amounts of heat off chips and away from servers, and poses no risk of damaging IT hardware.

Closed Loop (In-Rack Cooling)

In a closed loop installation, the “loop” is the path of hot air: expelled at the rear of the IT equipment, drawn across heat exchangers to be cooled, then delivered to the front of the rack and back into the server intakes.

“Closed” means that no air — cold or hot — leaves the surrounding space, and cooling affects only the environment within the rack. Hot and cold air never mix and airflow paths are reduced, so efficiencies are improved and less energy is used on fans.

Closed loop enclosure-based cooling systems are often used where space is limited, in high-density zones within large data centers, and as a supplement to traditional cooling systems in high-density environments.

Close Coupled (In-Row Cooling)

This variation on the closed loop system uses comparable row-based heat exchangers. Server equipment receives cooled air from cooling units placed between server enclosures in the row.

So, air (cold and hot) is not contained within the enclosures; it flows into the IT space. However, the “cold aisle/hot aisle” orientation of the rows reduces air mixing, improving thermal balance in the IT space. Hot air expelled by the IT equipment is pulled from the hot aisle, cooled, and returned to the cold aisle and the components’ intakes.

In-row cooling can supplement existing CRAC/CRAH units, increasing the capacity to remove heat and improving thermal management. In-row units may also supplement raised-floor cooling or can be the primary cooling source on a slab floor. Additionally, combining in-row systems with aisle containment can further increase heat removal capacities.

Immersion Cooling

Immersion systems submerge the hardware itself in a bath of non-conductive, non-flammable dielectric fluid, which absorbs heat far more efficiently than air. In two-phase systems, the heated fluid vaporizes; the vapor then condenses and falls back into the bath to continue the cooling cycle.

Because server cooling fans are unnecessary, power consumption is significantly reduced compared to traditional methods. Available in single- and two-phase systems, immersion cooling allows for greater server densities within the same installation space, reducing total cost of ownership (TCO). 

The downsides of immersion cooling include the weight and footprint of the fluid-filled tanks, serviceability challenges, and the labor-intensive process of switching to another method (the coolant must first be removed from the servers).

Leverage the Insights of the Experts

Rack density and processor capacity continue to be pushed higher, putting pressure on IT facility managers to address the enormous amount of heat being generated. Fortunately, advancements in technology can reduce heat removal costs while ensuring optimum server performance.

However, partnering with an experienced IT enclosure manufacturer with multiple cooling options is the key to realizing those savings. Rittal’s Liquid Cooling Packages (LCPs) use intelligent control to precisely dissipate heat losses up to 60 kW per enclosure. With optimized operating costs, data centers can increase energy savings up to 50%.

Every member of the Rittal sales team is an expert in data center and Edge deployment needs and challenges: security/monitoring, safety, flexibility, modularity, scalability, and more. Together, we’ll evaluate your situation and help identify the IT enclosure solution to optimize efficiency and uptime, making your facility as future-proof as possible.