Electronic Products & Technology

Power considerations that dictate data centre design

By Vito Savino, wireline and data center segment leader for ABB Power Conversion   


Voltage, footprint and cooling

As global reliance on data centres grows, so too does their energy consumption. While estimates of data centres’ total worldwide energy consumption vary, it is generally agreed that the figure falls in the range of 1-2% of global energy use, a significant share by any measure. Whatever the actual number, improving efficiency by even a small margin would, at this scale, translate into a meaningful reduction in overall energy use. That’s good for the Earth and operators alike.

With concerns about sustainability and energy efficiency reaching a boiling point worldwide, engineers and power specialists are hard at work developing solutions to make more efficient data centres a reality. Often, these solutions present a paradox: Data centre designers and operators are looking for options that both boost power density and improve efficiency. It’s a complex and seemingly contradictory task, but success, it turns out, may hinge on broadening the conversation.


Power density is often viewed as a board-level concern, but that approach limits the potential for density and efficiency improvements at scale. Instead, the conversation must extend beyond boards and components to the power architecture of the facility. By taking a holistic view of power density in a data centre — from the utility entrance to the cabinet, the rack to the board — designers can take the necessary steps to mitigate energy loss across the operation.

Voltage and footprint: A delicate balance

When designing the power infrastructure for a data centre, efficiency and power density come down to three key factors: footprint, voltage, and cooling. Engineers must balance these factors as they push the limits of optimization to increase power density at both the board and facility levels. Success in this endeavour may hinge on designing for efficiency and density from day one.


Footprint, of course, often becomes a primary consideration, as it is relatively inflexible. A printed circuit board (PCB) has a limited amount of space, so making the most of it is crucial, especially as computing needs continue to increase with each technological advance. Engineers are tasked with packing as much computing capacity as possible onto their boards and into their applications, cognizant that any unused space on a PCB may be seen as wasted money given the high perceived cost per square inch. The result is limited space remaining for the required power components, which is one reason power density is at a premium: power needs increase, yet the allotted space on a board or in a system remains the same, or even shrinks. And given the cost and sheer amount of networking equipment in today’s advanced data centres, even minimal board-mounted power density improvements can have wide-reaching impacts.
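
To make that footprint pressure concrete, the short sketch below uses purely illustrative numbers (not drawn from any ABB design) to show how the power density demanded of board-mounted converters climbs when a board’s compute load grows but the area set aside for power conversion does not.

# Illustrative sketch (hypothetical numbers): a fixed PCB footprint drives
# up the power density required of the board-mounted converters.

def required_power_density(board_power_w, area_available_in2):
    """Watts per square inch the power stage must supply from its allotted area."""
    return board_power_w / area_available_in2

# Hypothetical case: a board's compute load grows from 300 W to 600 W,
# while the space left for power conversion stays fixed at 6 in^2.
for load_w in (300, 600):
    density = required_power_density(load_w, area_available_in2=6)
    print(f"{load_w} W load -> {density:.0f} W/in^2 from the power components")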

ABB’s robust line of dc-dc bus converters meets the needs of OEMs developing solutions for data centres. Source: ABB Power Conversion

From an efficiency perspective, one aspect to consider is the operating voltage on the board. Traditionally, the voltages used in data centres at the chip level and along the utility entrance have been on the lower end of the spectrum. Lower-voltage architectures require higher currents, and higher currents mean lower efficiency due to resistive (I²R) losses during energy transfer, especially if sufficient space cannot be afforded to mitigate the issue. To help, board designers are pushing higher voltages as close to the IT load as possible. Implementing higher-voltage board-level power architectures, such as 48V architectures, can help improve both efficiency and power density. The spacing required to avoid arcing at higher voltages, however, is a tradeoff that must be taken into consideration.
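
As a rough illustration of why voltage matters, the sketch below uses hypothetical numbers (a 1-kW load over a 5-mΩ distribution path) to show how conduction losses fall when the same power is delivered at 48V instead of 12V: current scales with 1/V, so I²R losses scale with 1/V².

# Minimal sketch of I^2*R conduction losses, with hypothetical numbers.
# For a fixed power delivered over a bus of resistance R, current scales
# as P/V, so resistive loss scales with 1/V^2.

def conduction_loss(power_w, bus_voltage_v, bus_resistance_ohm):
    current_a = power_w / bus_voltage_v          # I = P / V
    return current_a ** 2 * bus_resistance_ohm   # P_loss = I^2 * R

P_LOAD = 1000   # watts delivered to the IT load (hypothetical)
R_BUS = 0.005   # ohms of distribution resistance (hypothetical)

for v in (12, 48):
    loss = conduction_loss(P_LOAD, v, R_BUS)
    print(f"{v} V bus: {loss:.1f} W lost as heat")
# 12 V -> ~34.7 W; 48 V -> ~2.2 W: a 16x loss reduction for a 4x voltage increase.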

Beyond board-level power density, many of the principles we’ve discussed are also relevant at the cabinet and facility level. Keeping power density considerations in mind from the very beginning of the process can help ensure more efficient and cost-effective operation in the future.

Strategies like reducing the number of conversion steps required to provide the precise power needed for networking equipment, or implementing higher-voltage infrastructure as close to devices as possible, can help designers achieve better power usage effectiveness (PUE) throughout a facility. The complex relationships between these factors can make optimization challenging, but accounting for them during the initial stages of facility, rack, and board design can help keep operations efficient over the long term.
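
PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, so any overhead trimmed from conversion or cooling shows up directly in the metric. The sketch below uses hypothetical figures to illustrate the calculation; it is not based on measured facility data.

# Minimal PUE sketch with hypothetical numbers. PUE = total facility
# energy / IT equipment energy; an ideal facility approaches 1.0.

def pue(it_energy_kwh, cooling_kwh, conversion_loss_kwh, other_kwh):
    total = it_energy_kwh + cooling_kwh + conversion_loss_kwh + other_kwh
    return total / it_energy_kwh

# Before: multiple conversion steps and heavy cooling overhead.
print(f"Baseline PUE:  {pue(1000, 350, 80, 70):.2f}")   # ~1.50
# After: fewer conversion stages and higher-voltage distribution near the load.
print(f"Optimized PUE: {pue(1000, 250, 30, 50):.2f}")   # ~1.33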

The cooling paradox

The third factor noted above, cooling, presents another confounding element in data centre design. Done correctly, it can improve efficiency, protect equipment, and help boost performance. Done wrong? It can have serious (and costly) repercussions.

Incredibly, data centres use only approximately 50% of their energy to execute computing functions. Of the 50% used for computing, about 98% goes to supporting IT loads; the remaining 2% is shed as heat in the power conversion process. Of the other 50% of energy used, 25-40% is typically allocated to HVAC systems and computer room air conditioners (CRACs) that maintain the computer room environment.
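
A quick back-of-the-envelope sketch of that split, per 100 units of facility energy, is shown below. The figures are illustrative, and the 25-40% HVAC range is read here as a share of the non-computing half.

# Back-of-the-envelope sketch of the energy split described above,
# per 100 units of facility energy (illustrative figures only).

FACILITY_ENERGY = 100.0

compute_share   = 0.50 * FACILITY_ENERGY    # ~50% goes to computing functions
it_load         = 0.98 * compute_share      # ~98% of that supports IT loads
conversion_heat = compute_share - it_load   # ~2% shed as conversion heat

overhead = FACILITY_ENERGY - compute_share  # the other ~50%
hvac_low, hvac_high = 0.25 * overhead, 0.40 * overhead  # HVAC share, per the range above

print(f"IT load:           {it_load:.0f} units")
print(f"Conversion heat:   {conversion_heat:.0f} unit")
print(f"Non-compute share: {overhead:.0f} units, of which HVAC is roughly "
      f"{hvac_low:.1f}-{hvac_high:.1f} units")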

ABB’s Edge data centre power architecture is able to meet the demands of data centres. Source: ABB Power Conversion

The scale of the facility and operations exacerbates the issue, making cooling an even more significant consideration for larger, power-hungry data centres. In addition, because cooling elements generate heat of their own, a system that isn’t cooling equipment properly ends up contributing more heat to the overall equation. Building smaller facilities doesn’t solve the problem; packing more computing power into smaller spaces compounds it further, as proximity increases temperatures.

There are several options for cooling data centres, with air cooling, liquid cooling, and geophysical solutions being the most popular. Each has its own benefits and drawbacks. Deciding which is appropriate for a given build is a complex process that should take into consideration the size and location of the facility, its proximity to cold air or water sources, and the equipment within it. Data centre power designers and engineers can also benefit from keeping scalability in mind when comparing cooling methods to help ensure any choice they make can accommodate future plans for expansion.

The full picture

As power demands rise and energy use in data centres continues to climb to keep pace with evolving networking needs, it’s more critical than ever for power designers and data centre operators to view equipment and facilities through a holistic lens. Doing so can help them better tackle potential energy waste and inefficiencies across equipment and operations on the front end of data centre planning. By considering operations as a whole, designers may find ways to advance carbon-reduction initiatives, lower energy consumption, enjoy practical business benefits, and do their part to build a more sustainable future.

———————————-

ABB Power Conversion designs and manufactures power solutions for 5G, wireless, data centre, and industrial applications.

Story author: Vito Savino, wireline and data centre segment leader for ABB Power Conversion.

https://www.abbpowerconversion.com/
