7 Keys to Adding Server Rack Cooling No Matter Where Your Racks Are Going

October 22 2020

Market Insight

7 Keys Before Implementing Server Rack Cooling

How do you choose the right server rack cooling system for your IT deployment, whether in the data center, at the Edge, or anywhere in between? Here is a look at some of the most important factors that go into selecting the right solution for your needs. We are selecting seven (yes, seven) of the most critical considerations. There are others, and some will matter more on a case-by-case basis. For now, though, let's start with these.

1. Ensure Your Ability to Scale

You may feel confident that your cooling system is handling today’s densities...but what about tomorrow? When selecting a cooling system, it’s critical to think beyond the present and prepare to accommodate the densities you’ll undoubtedly manage in the future. The 2020 State of the Data Center report found that the average rack density increased by nearly an entire kilowatt (kW) from the previous year, jumping from 7.3 kW to 8.2 kW per rack. What might it be five or 10 years from now?
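As a back-of-the-envelope illustration, the one-year jump cited above can be extrapolated forward. This is a hypothetical projection, not a forecast; it simply assumes the same compound growth rate continues:

```python
# Hypothetical projection of average rack density, assuming the
# year-over-year growth from the 2020 State of the Data Center
# report (7.3 kW -> 8.2 kW) continues at the same compound rate.

def project_density(current_kw: float, annual_growth: float, years: int) -> float:
    """Compound the current per-rack density forward by `years` years."""
    return current_kw * (1 + annual_growth) ** years

growth = 8.2 / 7.3 - 1  # ~12.3% observed year-over-year increase

for years in (5, 10):
    print(f"{years} years out: {project_density(8.2, growth, years):.1f} kW per rack")
```

At that rate, the average rack would roughly double its density within six years, which is why scalability deserves a place at the top of this list.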

With server rack cooling systems, the simplest and quickest solution is to add more enclosures. Look for enclosures (sometimes called cabinets) that make it simple to expand capacity, with these two critical future-proofing features:

  • Ability to add equipment of different sizes (and to put power equipment in the same enclosure as IT equipment) to save valuable floor space
  • Ability to place equipment with different cooling needs within the same rack, helping to reduce cooling costs by eliminating the need for a separate system for each

Of course, the facility must have sufficient heat removal capacity. Adding enclosures for either more or more powerful devices will increase the overall thermal load. You must be sure that any new heat can be removed with the existing infrastructure.

2. Prepare to Control the Environment

The growing utilization of data to power Industry 4.0 and IIoT capabilities and the associated popularity of Edge deployments means more and more server racks are being placed in relatively small, out-of-the-way spaces never intended to protect IT equipment. Often, these spaces do not offer adequate climate control and are vulnerable to dust, debris and moisture.

Too often, IT managers assume that these deployments can be cooled using the building's own AC infrastructure. These systems were not designed to keep sensitive equipment cool, nor can they ensure optimum humidity levels or proper airflow.

For these deployments, consider enclosures that feature direct expansion cooling systems. The beauty of this technology is that often it requires no changes to the current infrastructure, and server rack cooling can be assigned to a single cabinet or to an entire small space, making this targeted method highly efficient.

Among the many benefits for Edge and similar deployments are features that make monitoring and management simple:

  • Precise temperature control able to maintain a set-point temperature even as heat loads vary
  • Serviceability and control features, including tool-less EC fan replacement, easy-access electrical connections, an electronic expansion valve that reacts immediately to changing thermal loads, and remote notification of all operational parameters, which together reduce downtime and optimize system operation
  • A monitoring system that allows users to remotely check climate conditions and status of any integrated systems and that sends warnings if conditions deteriorate

3. Plan to Accommodate Rising Thermal Loads

In order to identify the correct rack cooling solution for your application, you must first calculate the heat output of your equipment and the total thermal load of each enclosure.

As a rough calculation, assume that all the power consumed, whether the total facility IT load or the load per footprint, will essentially be converted to heat (remember: rough). This means the thermal output of IT equipment is approximately equal to its power consumption, both of which are expressed in watts. The level and type of cooling capable of addressing your needs depends on the total density within each rack.
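That rule of thumb is easy to sketch. The 3.412 factor is the standard conversion from watts to BTU/hr; the 8.2 kW example rack reuses the average cited earlier:

```python
# Rough thermal-load estimate, per the rule of thumb above: nearly all
# IT power draw ends up as heat. The 3.412 factor converts watts to BTU/hr.

WATTS_TO_BTU_HR = 3.412

def thermal_load(power_watts: float) -> dict:
    """Approximate heat output for a given IT power draw."""
    return {
        "heat_watts": power_watts,  # heat out ~= power in
        "heat_btu_hr": power_watts * WATTS_TO_BTU_HR,
    }

# Example: a rack drawing 8.2 kW (the 2020 average cited above)
load = thermal_load(8200)
print(f"{load['heat_watts'] / 1000:.1f} kW ≈ {load['heat_btu_hr']:,.0f} BTU/hr")
```

Summing this per-rack figure across every enclosure gives the total thermal load the facility's heat-removal infrastructure must handle.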

In the past, high density was anything over 10 kW per footprint. Today, a high density installation is one in which each cabinet consumes more than 20 kW. Thermal density can also be measured as the amount of energy consumed per square foot of floor space, typically expressed in watts per square foot. But be careful: it is difficult to assign a single value, say 150 W per square foot, that holds consistently across the entire space. The best approach is a combination of both methods: a total value for the space, with much higher densities (1,000 to 3,000 watts per square foot and above) assigned where needed.
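A minimal sketch of that combined approach might look like the following; the rack names, loads, and floor area are hypothetical:

```python
# Sketch of the combined approach described above: a baseline
# watts-per-square-foot figure for the whole space, plus per-rack
# checks to flag high-density footprints (>20 kW, per the text).
# All rack loads and the floor area below are illustrative values.

HIGH_DENSITY_KW = 20  # today's per-cabinet threshold

racks_kw = {"rack-A": 8.2, "rack-B": 22.5, "rack-C": 12.0}  # hypothetical loads
room_sq_ft = 400  # hypothetical white-space area

total_kw = sum(racks_kw.values())
room_density = total_kw * 1000 / room_sq_ft  # W per sq ft across the space

high_density = [name for name, kw in racks_kw.items() if kw > HIGH_DENSITY_KW]

print(f"Room average: {room_density:.0f} W/sq ft")
print(f"High-density racks needing targeted cooling: {high_density}")
```

The room average drives facility-level heat removal, while the flagged racks are the candidates for the targeted, close-coupled cooling discussed next.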

If the heat buildup within your enclosures tends to be rapid and concentrated, closed loop cooling may be your best bet, as these systems are designed to ensure maximum uptime in high heat areas and uncontrolled environments.

A closed loop system – working in tandem with a close coupled configuration – removes heat from inside an enclosure, preventing it from mixing with ambient air. Considered “in-row” cooling, these systems cool only the equipment in the enclosure, rather than having to contribute to overall row or room cooling.

Outside the data center, in many Edge applications where heat buildup can be both local and rapid, closed loop cooling will maintain the proper internal climate conditions regardless of the conditions outside the enclosure. This higher heat removal capacity allows for higher installation densities, thereby reducing the number of server enclosures required.

4. Refer to ASHRAE Guidelines

To ensure that your IT equipment (ITE) is being adequately cooled, you need to identify the correct set point temperature.

The TC 9.9 Technical Committee of The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes guidelines that recommend the best temperatures for reliable data center operation. According to ASHRAE, many data centers today run several degrees warmer than they did 10 or 15 years ago in order to save on cooling costs; for most classes of IT equipment (ASHRAE classes A1 to A4), the recommended temperature is now between 64 and 81°F, with relative humidity (RH) up to 60%. (Refer to the ASHRAE chart for recommended temperatures for different classes of equipment.)
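A simple set-point check against the range cited above could look like this sketch. Note the real ASHRAE envelopes also bound dew point and vary by equipment class, so treat this as illustrative only:

```python
# Illustrative check against the recommended envelope cited above
# (roughly 64-81°F for ASHRAE classes A1-A4, with RH up to 60%).
# Not a substitute for the ASHRAE charts, which also bound dew point
# and differ by equipment class.

def within_recommended(temp_f: float, rh_percent: float) -> bool:
    """Return True if inlet temperature and RH fall in the cited range."""
    return 64.0 <= temp_f <= 81.0 and 0.0 <= rh_percent <= 60.0

print(within_recommended(75.0, 45.0))  # warm-but-safe set point -> True
print(within_recommended(85.0, 45.0))  # too warm -> False
```

Running warmer within the recommended envelope is exactly how facilities capture the cooling savings the guidelines describe.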

5. Align with the Existing Infrastructure

The existing infrastructure of the deployment location affects the requirements of the server rack cooling system you choose. For example, if the data hall is designed with hot aisle/cold aisle containment, your options are broader than without this feature.

Unlike the traditional “no-containment” cooling method that maintains a target temperature while trying to minimize mixing of hot and cold air, today’s room-based cooling can use aisle containment to direct and separate airflow paths for maximum efficiency.

  • Cold-aisle containment (CAC) systems deliver cold air via the floor to IT equipment, picking up heat as it travels across the appliances. The warmed air is drawn through the back of the cabinet into the hot aisle or ceiling plenum, carried to the CRAC unit where it's cooled, and then returned to the floor plenum, where the cycle begins again
  • Hot-aisle containment (HAC) systems extract hot air coming from rack enclosures, pull it up into a ceiling plenum where pipes deliver it to a cooling unit and direct it back into the space as cooled air. HAC systems can be installed with or without a raised floor

Please note: Which aisle to contain continues to generate passionate, sometimes even heated, debates. Rittal endorses neither cold aisle nor hot aisle containment over the other. The cooling systems described in this article will work equally well with either configuration.

6. Review Your Options

Liquid cooling is widely seen as today’s most efficient method of removing heat from higher density installations regardless of where they are. By placing cooling closer to the sources of heat (targeting rows of racks or individual racks), liquid cooling ensures maximum uptime and makes it possible to cost-effectively accommodate higher-density installations. There are two basic types of liquid cooling available today:

  • Row-based cooling, sometimes referred to as in-line or in-row cooling, is typically more efficient than room cooling because cooling is focused on only a row of enclosures, making the airflow paths shorter and more clearly defined than room-based cooling. And, because shortened airflow path length reduces the fan power required of the computer room systems (CRAC/CRAH), efficiency is increased
  • Direct to chip (sometimes called direct-on-chip) liquid cooling uses flexible tubes to bring safe, non-flammable dielectric fluid directly to the processing chip. The fluid absorbs the heat by turning into vapor, which then carries the heat out of the IT equipment. This method can reduce cooling costs dramatically – a huge savings for any facility, considering that 40% of a typical data center’s power goes just to cooling.

7. Partner with Cooling Experts

Deploying a liquid server rack cooling solution can bring a new level of effectiveness to your operation and your equipment, and both row- and rack-based systems are suited to today’s high-density installations. To understand which type will be most efficient for you today and into the future, seek the guidance of an expert in enclosures and cooling systems to help you identify the right solution for your needs.

To learn more about liquid cooling options, download our guide Data Center Cooling: The Best Methods for Different Needs.