
The Future of Data Centers: Is Liquid Cooling The Best Option?

August 24, 2022 by Herb Villa


Downtime is the ultimate enemy for data centers, and one of that enemy’s most dangerous weapons is heat. As demands on data centers grow, instances of heat taking down a smooth-running facility only continue to increase. In other words, the heat is on.

The battle intensifies as IT components consume more power, with CPU thermal design power (TDP, the maximum heat a chip is designed to dissipate) expected to push 400 watts soon. As memory demands grow (terabytes per server) and modern IT workloads skyrocket, cooling server hardware with air alone is no longer viable. Thus – liquid cooling.

Liquid cooling is nothing new. Many IT professionals in traditional data centers have implemented it in some form for at least a decade. So, why is it becoming such a favorable option now? And – at the risk of opening an even greater debate – just what exactly are we talking about when we use the very broad term “Liquid Cooling”?

The conversation below uses the more commonly (and recently) accepted definition of liquid cooling: heat removal, using a liquid medium, at the chip, chassis or component level. This definition does not include row/room-based solutions that use air/water or air/refrigerant. Even though those use a “liquid,” they are separate systems, and separate conversations.

The “New Normal” Changed the Conversation About Liquid Cooling

More than two years into a global pandemic, a “new normal” has emerged. As in any industry, IT professionals have pivoted, learned and adapted to shifting business trends.

Trends force change in one way or another, and the data center space is no exception. What is clear at this point? Traditional methods (removing heat by mixing cold air with hot air to reach an appropriate temperature) are becoming extinct.

Real-world experience makes it clear: air-based cooling infrastructure has numerous problems — rising energy costs, high maintenance costs, space constraints, and hardware left vulnerable to pollutants. Most importantly, the fundamental limitations of air-based cooling cannot keep pace with high-density data center demands, for three specific reasons:

  1. Component Power — As mentioned, CPU TDP has risen significantly and continues to climb (400 watts may be commonplace by 2025), memory is exploding (servers with a terabyte of memory!), and graphics processing units (300-400 watts) along with AI workloads all generate enormous heat, pushing server utilization and the associated thermal loads to unprecedented levels. A rough back-of-envelope sketch of what this means per rack follows this list.
  2. Edge Computing — Deploying compute and network infrastructure as close as possible to where data is generated is the advantage of the Edge. Whether in a single standalone enclosure or a local spine data center cluster, data-heavy applications connected to high-performance machines demand continuous analytics and require significant processing power and cooling capacity.
  3. Regulatory Change and Sustainability — Possible future regulations involving power usage effectiveness (capping what data centers may consume relative to their IT load) continue the push toward “net zero,” with some targets set just two decades away. This moves the needle closer toward liquid cooling.
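
To put the component-power trend in perspective, here is a rough back-of-envelope sketch in Python. The TDP figures echo the trends above; everything else (CPU and GPU counts per server, the miscellaneous overhead, servers per rack) is an assumption for illustration, not data from any particular deployment.

```python
# Illustrative only: estimate per-rack heat load from assumed component counts.
# Virtually all electrical power a server draws leaves it as heat.

CPU_TDP_W = 400    # per the ~400 W CPU TDP trend cited above
GPU_TDP_W = 350    # mid-range of the 300-400 W GPU figure cited above
OTHER_W = 250      # assumed: memory, drives, NICs, fans per server

def server_heat_w(cpus: int = 2, gpus: int = 4) -> int:
    """Approximate heat output of one server, in watts."""
    return cpus * CPU_TDP_W + gpus * GPU_TDP_W + OTHER_W

SERVERS_PER_RACK = 20  # assumed dense deployment
rack_kw = SERVERS_PER_RACK * server_heat_w() / 1000
print(f"~{rack_kw:.0f} kW of heat per rack")  # ~49 kW
```

Nearly 50 kW in a single rack is far beyond what mixing hot and cold air can realistically remove.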

Barriers to Wider Data Center Liquid Cooling Deployment

With all the benefits being so obvious, why isn’t liquid cooling everywhere by now? Well, there are disadvantages to any system, and liquid cooling is no different.

Adapting to a new system is a big one. Many enterprise and commercial users are in data centers built in the past 10-15 years, and they cannot simply drop current cooling methods and instantly modify them for liquid cooling systems. From a sustainability, performance (CPUs, GPUs, etc.) and total cost of ownership standpoint, liquid cooling infrastructure is clearly the wise choice. It is simply difficult to scale out this “new” technology.

Fear and unfamiliarity form another barrier. Will the fear of a leak bringing down an entire rack (or several racks) scare off otherwise highly experienced IT professionals? Are plumbers now going to play a significant role in data center design and planning? Do the non-water liquids in some liquid cooling systems evaporate rather than damage components?

Questions and concerns abound, making adaptation over time the key. Changing an entire data center over to liquid cooling will likely take time, possibly years. Existing data centers must retrofit, while new high-density facilities can specify liquid cooling solutions from day one.

Take a look at this resource for advice specifically for colocation centers.

Liquid Cooling Options

Speaking of retrofitting to liquid cooling, it is possible to use a hybrid system in which fans and moving air assist the liquid cooling, which does the majority of the work. Not only does the air side serve as a backup in case of failure, it can also provide peace of mind as data center facility managers begin to adopt liquid cooling technology.

The term “liquid cooling” covers multiple methods of heat removal and climate control. In all of them, either chilled water or a refrigerant is the primary heat-removal medium at the component level. So, what are the options for full liquid cooling systems?

Immersion cooling uses vats of dielectric fluid, in either single-phase or two-phase immersion, to cool equipment. The advantage is that the system is built from the ground up as part of the data center’s design. There is no adapting to a legacy system, so there is added flexibility and improved (that is, lower) power usage effectiveness (PUE).
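
Since PUE comes up here, a quick worked example may help. PUE is total facility power divided by IT equipment power, so 1.0 is the theoretical ideal and lower is better. The facility figures below are assumptions for illustration only, not measurements from any real site.

```python
# Hedged illustration of how cutting cooling overhead improves PUE.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Assumed air-cooled baseline: 1,000 kW IT, 500 kW cooling, 100 kW other overhead.
print(f"Air-cooled (assumed): PUE {pue(1000, 500, 100):.2f}")  # 1.60
# Assumed immersion-cooled facility: far less fan and air-handler power needed.
print(f"Immersion (assumed):  PUE {pue(1000, 100, 100):.2f}")  # 1.20
```

Cooling overhead is the biggest lever on that ratio, which is exactly where liquid cooling acts.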

Direct-to-chip cooling (also called cold-plate cooling) is a more involved system — moving liquid to and from the cold plates — yet it can be retrofitted into existing data center infrastructure.

The closed-loop version of direct-to-chip cooling is built entirely into the rack and sealed. No foundational infrastructure is changed, so the footprint remains the same, and any possible leak is self-contained in that rack.

In an open-loop cooling system, a coolant distribution unit (CDU) pumps cold liquid into the servers and hot liquid out, recirculating the liquid through the loop.
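
Underneath any CDU sizing is the basic heat balance Q = ṁ · c_p · ΔT (heat carried equals mass flow times specific heat times temperature rise). A minimal sketch, assuming a 50 kW rack and a 10 K supply/return temperature difference, both illustrative values:

```python
# Minimal sketch of the liquid-loop heat balance Q = m_dot * c_p * dT.
# Rack load and temperature rise are assumed values for illustration.

WATER_CP = 4186.0   # specific heat of water, J/(kg*K)
WATER_RHO = 997.0   # density of water, kg/m^3

def flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Water flow (liters per minute) needed to carry heat_w at a delta_t_k rise."""
    kg_per_s = heat_w / (WATER_CP * delta_t_k)
    return kg_per_s / WATER_RHO * 1000 * 60

print(f"{flow_lpm(50_000, 10):.0f} L/min")  # ~72 L/min through the loop
```

Water’s high specific heat is the whole point: about 72 liters of water per minute carries the same 50 kW that would take on the order of 15,000 cubic meters of air per hour at the same temperature rise.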

Reasons to Use Liquid Cooling Vary. The Results Don’t.

The reasons a data center chooses liquid cooling vary depending on the user’s goals.

  • Is liquid cooling going to reduce our energy bills?
  • Is liquid cooling going to make us more sustainable?
  • Does liquid cooling look good on an ESG report?

Whatever the ultimate reasons for using liquid cooling, the results are impressive. A reduction in power use of up to 40%¹ even takes into account that liquid cooling requires its own motors, pumps, electronics and so on. That is a significant move toward meeting sustainability targets and handling today’s high-density demands.
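
To make the cited figure concrete, here is one hedged reading of it, applying the 40% cut to cooling power alone against the same assumed baseline as the PUE sketch above. All figures are illustrative, and the source may intend a different scope for the 40%.

```python
# Back-of-envelope: effect of a 40% cut in cooling power on total facility draw.
# Baseline figures are assumptions, reused from the PUE sketch above.

it_kw, cooling_kw, other_kw = 1000.0, 500.0, 100.0  # assumed air-cooled baseline
total_before = it_kw + cooling_kw + other_kw        # 1,600 kW

cooling_after = cooling_kw * (1 - 0.40)             # the cited 40% reduction
total_after = it_kw + cooling_after + other_kw      # 1,400 kW

saved = (total_before - total_after) / total_before
print(f"Facility power falls {saved:.1%}")          # 12.5%
```

Even under this conservative reading, the whole facility’s draw drops by an eighth, before counting any downstream savings.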

Each situation presents unique cooling challenges. Take a look at this resource and discover more insights & keys to tackling the complexity of IT equipment cooling at the Edge.

Looking for more information on the most efficient data center cooling options? Read our whitepaper, Rittal High Density — Cooled-by-ZutaCore. Click the link below to get your copy!

SOURCE: ¹ Data Center Dynamics, “Liquid cooling can cut power costs by 40 percent, states Iceotope CEO David Craig,” May 4, 2021


Categories: Data Center Solutions, Edge Computing


Written by Herb Villa

Rittal Sr. Applications Engineer
