
Data Center Cooling Methods Explained: Air Cooling vs Liquid Cooling

Modern data centers generate enormous amounts of heat. Every watt of electricity used by servers eventually becomes heat that must be removed to keep equipment operating reliably.

As computing power increases—especially with the rapid growth of artificial intelligence and GPU computing—the challenge of removing heat from server racks has become one of the most important engineering problems in data center design.

In this article, we explain the four primary data center cooling methods used in modern facilities, how they work, and why the industry is increasingly moving from traditional air cooling toward liquid-based solutions.

Data centers rely on several interconnected systems.
To understand how these systems work together, see our guide How Data Centers Work.

Why Data Center Cooling Is So Important

Data centers run continuously—24 hours a day, 7 days a week, 365 days a year. All the servers, networking equipment, and storage devices inside a data center produce heat during operation.

Historically, server racks produced relatively small amounts of heat. Typical racks operated in the range of:

  • 3–5 kilowatts per rack

Today, many modern data centers operate racks in the range of:

  • 10–20 kilowatts per rack

AI and GPU clusters are pushing densities even further, sometimes exceeding:

  • 50–100 kilowatts per rack

Because heat output increases with power consumption, cooling systems must evolve to keep up with these increasing thermal loads.

The Thermodynamic Challenge

The reason cooling is so difficult in high-density environments comes down to a basic thermodynamic principle:

Air has relatively low heat capacity compared to liquid.

This means that air cannot absorb and transport heat as efficiently as liquids such as water or specialized coolants.

As a result, traditional air cooling systems eventually reach physical limits when rack power densities become very high.

This is why the industry is gradually shifting toward liquid cooling technologies.
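To put numbers on this, here is a minimal back-of-the-envelope sketch using the steady-flow heat equation Q = m_dot × cp × ΔT. The 50 kW rack load, 10 K coolant temperature rise, and fluid properties are illustrative assumptions, not design values:

```python
# Rough comparison: volumetric flow needed to remove a given heat load
# with air vs. water, using Q = m_dot * cp * dT.
# Property values are approximate (sea-level air, ~25 C water).

HEAT_LOAD_W = 50_000   # 50 kW rack (hypothetical example)
DELTA_T_K = 10         # allowed coolant temperature rise

fluids = {
    "air":   {"cp": 1005, "rho": 1.2},    # J/(kg*K), kg/m^3
    "water": {"cp": 4180, "rho": 998},
}

for name, p in fluids.items():
    m_dot = HEAT_LOAD_W / (p["cp"] * DELTA_T_K)   # kg/s
    v_dot = m_dot / p["rho"]                      # m^3/s
    print(f"{name:5s}: {m_dot:6.2f} kg/s = {v_dot * 1000:8.2f} L/s")

# air  :   4.98 kg/s =  4145.94 L/s   (~8,800 CFM)
# water:   1.20 kg/s =     1.20 L/s   (~19 GPM)
```

Under these assumptions, water moves the same heat with roughly 1/3,500th the volumetric flow of air, which is why liquid cooling scales where air cannot.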

The Four Major Data Center Cooling Methods

There are four primary cooling strategies used in modern data centers:

  1. Room-Based Air Cooling
  2. Close-Coupled Air Cooling
  3. Direct-to-Chip Liquid Cooling
  4. Immersion Cooling

Each method represents a different approach to removing heat from servers.

Four major Data Center Cooling Methods including Air and Liquid Cooling

1. Room-Based Air Cooling

Room-based cooling is the traditional approach used in many data centers.

In this design, large cooling units known as CRAC (Computer Room Air Conditioners) or CRAH (Computer Room Air Handlers) supply cold air into the data center space.

Cold air is delivered through raised floor plenums and distributed through perforated floor tiles located in front of server racks.

The servers pull cold air through the front of the rack, where it absorbs heat from the equipment, and the heated air exits through the back of the rack.

To improve airflow management, racks are arranged in hot aisle and cold aisle configurations:

  • Cold aisles supply cool air to the front of the servers
  • Hot aisles collect the hot exhaust air

This layout helps prevent mixing between hot and cold air streams, improving cooling efficiency.

Hot and Cold Aisle Data Center Strategy

While room-based cooling works well for lower density environments, it becomes less efficient as rack power increases.

2. Close-Coupled Air Cooling

Close-coupled cooling systems bring the cooling equipment closer to the heat source.

Instead of relying solely on perimeter cooling units, these systems position cooling equipment directly within the server rows.

Common examples include:

  • In-row cooling units
  • Rear door heat exchangers

In-row units sit between server racks and pull hot air directly from the hot aisle, cool it, and discharge the cooled air back into the cold aisle.

Rear door heat exchangers mount on the back of server racks and remove heat immediately as air exits the servers.

Because the cooling source is located closer to the heat source, close-coupled systems reduce airflow losses and improve efficiency.

However, these systems still depend on moving air through the servers, which can become inefficient at very high rack densities.

In-row cooling unit positioned between two rows of server racks, discharging cold air into the cold aisle

3. Direct-to-Chip Liquid Cooling

Direct-to-chip liquid cooling removes heat directly from the most heat-intensive components inside the server.

Instead of relying on airflow, cold plates are mounted directly onto processors such as CPUs and GPUs.

Coolant circulates through these cold plates, absorbing heat and transporting it away from the server.

The heated liquid flows to a Cooling Distribution Unit (CDU) where the heat is transferred to the facility’s cooling system.

The facility then rejects this heat using equipment such as:

  • Cooling towers
  • Dry coolers
  • Heat exchangers

Liquid cooling is extremely effective because liquids can transport heat much more efficiently than air.

This allows data centers to support very high rack densities, which are becoming common in AI computing environments.

Direct to Chip Cooling in a Data Center using a Cooling Distribution Unit (CDU)

4. Immersion Cooling

Immersion cooling takes liquid cooling even further.

In this system, servers are submerged directly into a tank filled with a dielectric liquid that does not conduct electricity.

The liquid absorbs heat directly from the server components.

There are two common immersion approaches:

Single-phase immersion

The liquid absorbs heat and is pumped through a heat exchanger.

Two-phase immersion

The liquid boils at low temperature, absorbing heat through evaporation before condensing back into liquid.

Immersion cooling can support extremely high power densities and eliminates the need for large airflow systems.

However, it requires specialized server hardware and operational practices.

Data center immersion cooling tank with servers submerged in dielectric fluid

Air Cooling vs Liquid Cooling

Choosing the right cooling method depends largely on rack density.

Typical ranges include:

  • Low density (3–10 kW per rack): room-based air cooling
  • Medium density (10–25 kW per rack): close-coupled air cooling
  • High density (25–80 kW per rack): direct-to-chip liquid cooling
  • Extreme density (80 kW per rack and above): direct-to-chip liquid cooling or immersion cooling
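As a quick illustration, the rough breakpoints above can be captured in a simple selector. This hypothetical sketch mirrors the ranges in this article only; a real design decision weighs the additional factors listed below:

```python
# Illustrative helper mapping rack power density (kW) to the cooling
# methods described in this article. Breakpoints are this article's
# rough ranges, not an engineering standard.

def suggest_cooling(rack_kw: float) -> str:
    if rack_kw <= 10:
        return "Room-based air cooling"
    if rack_kw <= 25:
        return "Close-coupled air cooling"
    if rack_kw <= 80:
        return "Direct-to-chip liquid cooling"
    return "Direct-to-chip liquid or immersion cooling"

for kw in (5, 18, 60, 120):
    print(f"{kw:>4} kW/rack -> {suggest_cooling(kw)}")
```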

Other factors influencing cooling design include:

  • Energy costs
  • Climate conditions
  • Water availability
  • Redundancy requirements
  • Facility design constraints

The Future of Data Center Cooling

As computing power continues to increase, cooling systems must evolve to keep pace.

AI workloads are driving rack densities higher than ever before, forcing many operators to adopt liquid cooling technologies.

In the coming years, data centers will likely use a combination of cooling approaches depending on workload requirements and facility design.

Understanding these systems is essential for engineers, contractors, and IT professionals working with modern computing infrastructure.

Watch the Full Data Center Cooling Video

If you’d like to see a full visual explanation of these cooling methods, watch the video below.

This video is part of our Data Center Systems series, where we explain how modern data centers operate—from electrical power distribution to cooling infrastructure and redundancy strategies.

Explore the Full Data Center Series

You can watch the entire playlist here:

Data Center Systems Playlist

Videos in this series include:

  • How Data Centers Actually Work
  • Data Center Power Flow: From Utility Grid to Server Rack
  • Data Center Cooling Methods Explained
  • Data Center Redundancy (N, N+1, 2N)

Key Takeaways

  • Data centers produce massive amounts of heat that must be removed continuously.
  • Traditional air cooling works for lower-density racks.
  • Close-coupled air cooling improves efficiency by placing cooling near the heat source.
  • Liquid cooling removes heat directly from processors and supports higher densities.
  • Immersion cooling offers extremely high heat removal capability for specialized environments.

As computing continues to evolve, cooling technologies will remain one of the most critical aspects of modern data center design.

Data Center Engineering Series

This article is part of our Data Center Engineering Series, where we break down each major system in detail and explain how data centers are powered, cooled, and designed.

Data Center Power Flow: Utility to Server Rack Explained

Understanding data center power flow is critical for engineers, contractors, and facility designers working on mission-critical infrastructure. From the utility grid to the server rack, power passes through multiple layers of protection, transformation, conditioning, and distribution to ensure uptime and reliability.

Data centers rely on several interconnected systems.
To understand how these systems work together, see our guide How Data Centers Work.

Each component along that path exists for one reason: uptime.

This article walks step-by-step through the complete electrical path and explains the purpose of each major system along the way.

1. Utility Power Generation

Every data center begins as a customer of the electrical grid.

Electricity is generated at power plants — natural gas turbines, nuclear facilities, hydroelectric dams, wind farms, or solar arrays. The energy mix varies by region, but regardless of source, power must travel long distances before reaching the data center.

At this stage, the facility has no control. It depends entirely on grid stability.

2. High-Voltage Transmission: Efficiency Over Distance

To move electricity efficiently across long distances, utilities transmit power at very high voltage and low amperage.

Why?

Power loss in transmission lines is proportional to current squared (I²R losses). By increasing voltage, current decreases for the same power level. Lower current reduces line losses and allows smaller conductors relative to delivered capacity.

Transmission voltages may range from 69kV to 500kV depending on region and infrastructure.

Before reaching the facility, power is stepped down at regional substations and delivered to the data center campus at medium voltage.
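A simplified worked example makes the effect concrete. The 100 MW load and 10-ohm line resistance are assumed round numbers, and the single-conductor model ignores three-phase details, but the I² scaling is the point:

```python
# Simplified I^2*R loss comparison for the same delivered power at two
# transmission voltages. Single-conductor model with assumed values;
# real AC lines involve three phases and reactance, but the scaling holds.

POWER_W = 100e6        # 100 MW delivered (hypothetical)
LINE_RESISTANCE = 10   # ohms, total line resistance (assumed)

for volts in (69e3, 500e3):
    current = POWER_W / volts              # I = P / V
    loss_w = current**2 * LINE_RESISTANCE  # P_loss = I^2 * R
    print(f"{volts / 1e3:5.0f} kV: I = {current:6.0f} A, "
          f"loss = {loss_w / 1e6:5.2f} MW ({100 * loss_w / POWER_W:.2f}%)")

#    69 kV: I =   1449 A, loss = 21.00 MW (21.00%)
#   500 kV: I =    200 A, loss =  0.40 MW (0.40%)
```

Raising the voltage by a factor of about 7 cuts the loss by a factor of about 50, which is exactly the I² relationship at work.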

Data Center Electrical Power – From Utility to Server Racks

3. Service Entrance Switchgear

When power arrives on-site, it enters through service entrance switchgear.

This is the first major piece of electrical infrastructure inside the facility.

Service entrance switchgear:

  • Receives incoming medium-voltage utility power
  • Provides main overcurrent protection
  • Contains protective relays and metering
  • Segments downstream distribution
  • Allows isolation for maintenance

This equipment establishes the facility’s internal electrical control boundary.

From here forward, the data center manages its own reliability.

4. Transformers: Stepping Down Voltage

Utility power typically arrives at medium voltage — often between 12kV and 34.5kV in the United States.

Transformers step this down to low-voltage building distribution levels, commonly 480V.

The transformer performs two critical functions:

  1. Voltage conversion
  2. Electrical isolation

In many facilities, transformers are arranged to support redundancy and load balancing across multiple distribution paths.

5. Generator Paralleling Gear and Automatic Transfer Controls

Utility power is not guaranteed.

If a grid outage occurs, backup generators must take over.

In smaller installations, an Automatic Transfer Switch (ATS) detects utility loss and transfers load to generators.

In larger data centers, transfer logic is integrated into generator paralleling switchgear. This system:

  • Detects voltage abnormalities
  • Starts multiple generators
  • Synchronizes frequency and phase
  • Transfers load safely
  • Manages load sharing between units

This ensures a controlled transition from utility to generator power.

Data Center Electrical Power Diagram

6. Backup Generators and N+1 Redundancy

Backup generators provide full facility power during extended outages.

Most data centers use diesel or natural gas generator systems sized to carry the entire critical load.

Redundancy is key.

In an N+1 configuration, one additional generator is installed beyond what is required to carry the design load. If the facility requires N generators to operate, the +1 unit protects against a single generator failure.

An Uptime Tier II design includes redundant capacity components like extra generators but may not include fully redundant distribution paths.

The objective: no single equipment failure should cause downtime.
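As a simple illustration with assumed round numbers (a 10 MW critical load served by hypothetical 2.5 MW units):

```python
import math

# N+1 generator count for a given critical load (illustrative figures).
CRITICAL_LOAD_MW = 10.0
GENERATOR_MW = 2.5      # assumed unit rating

n = math.ceil(CRITICAL_LOAD_MW / GENERATOR_MW)  # N units carry the load
print(f"N = {n}, N+1 = {n + 1} generators installed")
# N = 4, N+1 = 5 generators installed
```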

7. UPS Systems: Bridging the Gap

Generators take seconds to start and stabilize.

Servers cannot tolerate even milliseconds of interruption.

The Uninterruptible Power Supply (UPS) bridges this gap.

A modern double-conversion UPS:

  • Converts incoming AC to DC
  • Charges batteries
  • Inverts DC back to clean AC output
  • Provides instantaneous ride-through power during transfer events

Historically, UPS systems relied on VRLA (valve-regulated lead-acid) batteries.

Today, high-density facilities increasingly use lithium-ion batteries because they offer:

  • Higher energy density
  • Reduced footprint
  • Longer lifespan
  • Lower maintenance requirements

UPS systems are commonly designed in modular N+1 configurations. If one UPS module fails, the remaining modules continue supporting the load.

Most systems also include static bypass and maintenance bypass capability to allow servicing without shutting down operations.
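A rough sizing sketch shows how ride-through time relates to battery capacity. Every figure here (load, ride-through minutes, efficiency, usable depth of discharge) is an illustrative assumption, not a design procedure:

```python
# Back-of-the-envelope UPS battery sizing for generator ride-through.

CRITICAL_LOAD_KW = 1000   # 1 MW of protected IT load (assumed)
RIDE_THROUGH_MIN = 5      # time to start and stabilize generators (assumed)
INVERTER_EFF = 0.90       # assumed conversion efficiency
USABLE_FRACTION = 0.80    # assumed usable depth of discharge

energy_kwh = CRITICAL_LOAD_KW * RIDE_THROUGH_MIN / 60
nameplate_kwh = energy_kwh / (INVERTER_EFF * USABLE_FRACTION)
print(f"delivered: {energy_kwh:.1f} kWh, nameplate: {nameplate_kwh:.1f} kWh")
# delivered: 83.3 kWh, nameplate: 115.7 kWh
```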

8. UPS Output Switchboards and Distribution Panels

After conditioning by the UPS, power flows into distribution switchboards.

These panels:

  • Provide breaker protection
  • Segment electrical feeders
  • Support maintenance isolation
  • Feed downstream distribution equipment

At this stage, power is clean, regulated, and protected.

9. Power Distribution Units (PDUs)

Power Distribution Units are typically located near the data hall.

PDUs often:

  • Step voltage from 480V down to 208V or 415V
  • Provide branch circuit protection
  • Monitor electrical loads
  • Distribute power to groups of racks

They serve as the transition between facility-level distribution and rack-level distribution.

10. Remote Power Panels (RPPs)

Remote Power Panels extend branch circuits deeper into the white space.

They provide:

  • Additional breaker capacity
  • Flexible layout configuration
  • Scalability for future expansion

RPPs reduce the need to return to main distribution panels when expanding rack density.

11. Rack Power Distribution Units (rPDUs)

Rack PDUs are mounted directly inside server cabinets.

They distribute electricity to individual servers and network devices.

Modern intelligent rPDUs provide:

  • Per-outlet monitoring
  • Remote switching capability
  • Load balancing data
  • Real-time power consumption metrics

This is the final stage of electrical distribution before energy reaches IT equipment.

12. Servers: Electrical Energy Becomes Heat

When electricity reaches the servers, it is converted into computational work.

Nearly all consumed electrical energy becomes heat.

Every kilowatt delivered must be removed by mechanical systems to maintain safe operating temperatures.

This is the direct relationship between electrical infrastructure and cooling design.

Electrical load equals thermal load.
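This one-to-one relationship is easy to quantify. Using the standard conversions (1 kW ≈ 3,412 BTU/hr; 1 ton of refrigeration = 12,000 BTU/hr), an assumed 800 kW IT load translates directly into cooling tonnage:

```python
# Electrical load equals thermal load: converting IT power draw into
# the units mechanical engineers size cooling plants in.
# Conversion factors are standard; the 800 kW load is an assumption.

IT_LOAD_KW = 800
BTU_PER_HR_PER_KW = 3412.14    # 1 kW = ~3,412 BTU/hr
BTU_PER_HR_PER_TON = 12_000    # 1 ton of refrigeration = 12,000 BTU/hr

btu_per_hr = IT_LOAD_KW * BTU_PER_HR_PER_KW
tons = btu_per_hr / BTU_PER_HR_PER_TON
print(f"{IT_LOAD_KW} kW of IT load = {btu_per_hr:,.0f} BTU/hr "
      f"= {tons:.0f} tons of cooling")
# 800 kW of IT load = 2,729,712 BTU/hr = 227 tons of cooling
```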

The Bigger Picture: Power and Uptime

From utility generation to rack-level distribution, the data center electrical system is built in layers:

  • Protection
  • Redundancy
  • Conditioning
  • Segmentation
  • Monitoring

Each layer reduces risk.

Each layer protects uptime.

Understanding this flow is critical for engineers, contractors, and estimators working on mission-critical projects.

In the next phase of the discussion, we follow that same energy — now as heat — into the cooling systems that keep the facility operational.

Data Center Engineering Series

This article is part of our Data Center Engineering Series, where we break down each major system in detail and explain how data centers are powered, cooled, and designed.

5 HVAC & Electrical Coordination Mistakes That Cause Change Orders

On many commercial construction projects, HVAC and electrical systems install exactly as shown on the drawings. Equipment is set. Conduit is run. Ductwork and piping are complete. Inspections pass.

Then startup begins — and systems don’t work together.

At that point, the issue usually isn’t installation quality.

It’s coordination.

HVAC electrical coordination problems rarely begin in the field. They start much earlier, during estimating and scope review, when responsibilities between trades are assumed instead of clearly defined.

Below are five of the most common HVAC and electrical coordination mistakes that cause RFIs, change orders, failed inspections, and delayed occupancy.

1. Duct-Mounted Smoke Detector Shutdown Wiring

Duct smoke detectors are physically installed in ductwork, so they are often assumed to belong to the mechanical contractor.

However, these devices:

  • Require electrical power
  • Must interface with the fire alarm system
  • Must shut down air-moving equipment during a smoke event

Duct mounted smoke detector and fire life safety connections

Three systems are involved:

  • Mechanical
  • Electrical
  • Fire alarm

Mechanical drawings usually show detector locations.
Electrical drawings may show only general power distribution.
Fire alarm drawings show monitoring points — but not always the shutdown wiring path.

The result?

The detector gets installed — but the shutdown sequence fails during testing.

Building codes typically reference the International Mechanical Code and NFPA 90A for smoke detection in air distribution systems. The coordination issue isn’t the code requirement — it’s making sure power, monitoring, and shutdown wiring responsibilities are clearly assigned before installation.

2. Missing Equipment Disconnect Switches (NEC 430.102(B))

The National Electrical Code requires a disconnecting means within sight of motor-driven equipment.

NEC Article 430.102(B) requires:

A disconnecting means located within sight from the motor and driven machinery location.

This applies to rooftop units, air handlers, exhaust fans, pumps, and similar equipment.

The coordination problem typically occurs because:

  • Mechanical drawings show the equipment.
  • Electrical drawings show feeders and panels.
  • Disconnects are not clearly identified.

NEC 430.102(B) Requires Disconnect Switches – Coordinate with HVAC and Electrical Contractor

Mechanical contractors assume the electrical contractor is providing it.

Electrical contractors may exclude it if it is not explicitly shown.

The issue often surfaces only when equipment arrives onsite and no disconnect has been installed — resulting in added cost and schedule impact.

Proper HVAC electrical coordination requires confirming:

  • Disconnect type
  • Amperage rating
  • Mounting location
  • Scope responsibility

Early clarification prevents expensive field corrections.

3. Missing Service Receptacles (NEC 210.63)

Maintenance personnel require power to service equipment safely.

NEC Article 210.63 requires:

A 125-volt, single-phase, 15- or 20-ampere receptacle outlet within 25 feet of HVAC equipment.

Despite this requirement, service receptacles are frequently:

  • Omitted from electrical plans
  • Assumed to be included
  • Not clearly assigned to a trade

NEC Convenience Outlet Requirements (NEC 210.63)

The problem is typically discovered during inspection or turnover.

Late installation may require:

  • Surface conduit
  • Additional roof penetrations
  • Coordination with finished architectural elements

For estimators, this is a critical preconstruction review item.

4. Control Wiring and Building Automation System (BAS) Interfaces

Modern HVAC systems rely heavily on controls and interlocks.

Typical scope boundaries include:

  • Electrical contractor provides power wiring
  • Mechanical contractor provides equipment
  • Controls contractor provides system logic

The gray area often includes:

  • Control transformers
  • Interlock wiring between systems
  • Control conduit installation
  • VFD communication wiring
  • Shutdown signals

Because equipment can power up successfully without these connections fully integrated, coordination issues often appear only during startup or commissioning.

At that point, multiple trades may need to return to complete work that was assumed to be included elsewhere.

Clear scope definition during estimating prevents these late-stage conflicts.

5. Fire Alarm Shutdown Interfaces and System Interlocks

Many HVAC systems are required to stop, start, or change operating mode in response to fire alarm signals.

Mechanical drawings may indicate shutdown intent.
Fire alarm drawings identify monitoring points.
Electrical drawings may not show the interconnecting wiring.

Fire Alarm Shutdown and System Interlocks – HVAC Coordination Issues

Common coordination gaps include:

  • Who provides the relay?
  • Who runs wiring to the equipment?
  • Who verifies shutdown sequence during testing?

Each trade may complete its individual scope — yet the system fails functional testing.

This is one of the most expensive coordination failures because it is typically discovered at commissioning.

The Root Cause of HVAC Electrical Coordination Failures

Across all five examples, the underlying issue is the same:

Drawings describe system intent.
Coordination defines execution.

The costliest problems between HVAC and electrical trades are rarely installation errors.

They are scope definition problems that begin during estimating and carry forward into construction.

By the time systems reach startup, the cost of correcting assumptions is significantly higher than identifying them during preconstruction.

How Contractors Prevent These Coordination Problems

Estimators and project managers should:

  • Identify inter-trade coordination items early
  • Qualify proposals clearly
  • Confirm code-required accessories
  • Clarify shutdown and interlock responsibilities
  • Review BAS and fire alarm interface details before bid

Small coordination gaps during estimating become large financial impacts during commissioning.

Final Thoughts

Most HVAC electrical coordination problems don’t begin in the field.

They begin when drawings are interpreted differently, or when responsibilities are assumed instead of clarified.

If you’ve ever had a project where equipment installed perfectly but failed startup due to wiring or shutdown issues, you’ve already experienced this firsthand.

The solution is not more field labor.

The solution is better scope definition.

If you want to strengthen your estimating and preconstruction workflow:

  • Explore our HVAC, Electrical, and Plumbing Estimating Spreadsheets
  • Download our Contractor Construction Forms
  • Check out our Online MEP Training Courses

These tools are designed specifically to help contractors prevent coordination mistakes before they become change orders.

Introduction to AI-Driven Predictive Maintenance

What if you could predict when your HVAC system or mechanical equipment is going to fail — before it happens? Less downtime, lower repair costs, and longer equipment life.

Welcome to the future of maintenance: AI-Driven Predictive Maintenance. In today’s article, we’ll break down what it is, why it matters to HVAC and MEP professionals, and why now is the perfect time to start paying attention.

What is Predictive Maintenance?

Traditionally, maintenance has been either reactive — fix it when it breaks — or preventive — fix it on a schedule whether it needs it or not.

Predictive Maintenance is smarter. It monitors your equipment’s real-time condition and predicts when maintenance is actually needed, based on actual wear and performance — not guesswork.

Where Does AI Come In?

Artificial Intelligence — or AI — takes predictive maintenance to the next level.

With AI, massive amounts of equipment data — like temperature, vibration, pressure, and energy use — are analyzed in real time.

AI can spot patterns and detect early warning signs of failures that humans might miss.

Why Is This Important for HVAC and MEP Systems?

In HVAC and MEP industries, downtime isn’t just expensive — it can be dangerous, especially in critical facilities like hospitals, data centers, or manufacturing plants.

With predictive maintenance powered by AI, you can:

  • Reduce unexpected downtime
  • Lower maintenance costs
  • Extend equipment life
  • Improve system reliability
  • Enhance safety and compliance

Is AI Just for Big Companies?

Not anymore.
While predictive maintenance started in industries like aerospace and heavy manufacturing, today’s tools — affordable sensors, cloud-based dashboards, even AI-as-a-service platforms — are making it accessible for contractors, engineers, and facilities of all sizes.

Whether you’re managing a few rooftop units or an entire mechanical plant, predictive maintenance can be within reach.

How Do You Get Started?

Getting started with AI-driven predictive maintenance involves three key steps:

  1. Capture the Right Data — from sensors or existing equipment.
  2. Analyze It — using software tools or platforms.
  3. Act on Insights — scheduling maintenance before failures occur.

But implementing a real-world system — choosing the right sensors, setting up dashboards, analyzing trends — takes a bit more know-how.
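To make the "analyze" step concrete, here is a minimal, hypothetical sketch that flags readings drifting outside a rolling mean ± 3-sigma band. Commercial platforms use far more sophisticated models; the simulated temperature data and thresholds here are purely illustrative:

```python
# Minimal sketch of the "analyze" step: flag sensor readings that
# deviate from a rolling baseline. The data below is fake; real
# predictive maintenance tools use far richer models.

from statistics import mean, stdev

def anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) where a reading breaks the rolling 3-sigma band."""
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            yield i, readings[i]

# Simulated bearing temperature (deg F): stable, then trending upward.
temps = [95 + 0.1 * (i % 5) for i in range(40)] + [97, 99, 102, 106]
for i, t in anomalies(temps):
    print(f"sample {i}: {t} F looks abnormal")
# Flags samples 40-43 as the temperature climbs away from its baseline.
```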

What’s Next?

If you want to dive deeper — to really understand how AI-driven predictive maintenance works and how you can apply it to HVAC, refrigeration, and MEP systems — this is just the beginning. We'll be covering AI-driven predictive maintenance, smart monitoring, and data-driven decision-making for HVAC and MEP systems in future videos. Subscribe and stay tuned as we continue exploring how AI is changing our industry.

Whether you’re an HVAC technician, engineer, project manager, estimator, or student entering the industry — this course is designed to give you the practical knowledge and skills to stay ahead.

Inside the course, you’ll learn:

  • How to apply these strategies across HVAC, refrigeration, and MEP systems.
  • How to identify the right data points for smarter maintenance.
  • How to set up an AI-powered dashboard — no coding required.
  • How to spot early failure indicators before breakdowns happen.