Modern data centers are designed around one fundamental requirement: they must remain operational at all times.
Servers inside data centers support cloud applications, financial systems, artificial intelligence platforms, and critical digital infrastructure used by businesses around the world. Even a short interruption in power or cooling can cause servers to shut down, which may result in significant financial losses or service disruptions.
Because of this, data centers are engineered with redundancy, which means installing additional infrastructure so that systems can continue operating even when equipment fails or requires maintenance.
In data center design, redundancy is commonly described using the terms N, N+1, and 2N. These terms are used to define the level of backup capacity installed within the facility’s electrical and cooling systems.
Understanding these redundancy strategies is essential for engineers, contractors, and facility operators working with modern data center infrastructure.
Data centers rely on several interconnected systems.
To understand how these systems work together, see our guide How Data Centers Work.
What Data Center Redundancy Means
Data center redundancy refers to the practice of installing extra infrastructure capacity to ensure continuous operation during equipment failures, maintenance events, or unexpected disruptions. (If you missed it, see How Data Centers Actually Work, the first video in our Data Center series.)
Redundancy is applied to multiple critical systems within a data center, including:
- Electrical power distribution
- Utility power connections
- Uninterruptible Power Supply (UPS) systems
- Backup generators
- Cooling systems
- Chilled water plants
- Pumps and cooling towers
- Air distribution units inside the data hall
By designing these systems with redundancy, data centers can achieve extremely high levels of uptime, often expressed as 99.999% availability, commonly referred to as “five nines” reliability.
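To make the “five nines” figure concrete, the downtime budget implied by an availability target can be computed directly. This is a simple illustration of the arithmetic, not a sizing tool:

```python
# Rough annual downtime budget implied by an availability target.
# "Five nines" (99.999%) allows only a few minutes of downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability: float) -> float:
    """Return the annual downtime budget, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    print(f"{label} ({availability:.5f}): "
          f"{downtime_minutes_per_year(availability):.1f} minutes/year")
```

Five nines works out to roughly 5.3 minutes of allowable downtime per year, which is why facilities targeting it cannot rely on unredundant (pure N) systems.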
Understanding the Meaning of “N”
The term N represents the minimum number of components required for the system to operate normally.
In other words, N is the amount of infrastructure needed to support the full operational load of the data center.
For example, imagine a data center requires three chillers to remove all the heat produced by the servers.
In this scenario:
N = 3 chillers
If all three chillers are operating, the cooling demand is satisfied. However, if one of those chillers fails, the facility no longer has enough cooling capacity to support the full load.

The same concept applies to electrical systems.
If a data center requires four UPS modules to support the electrical load of the servers, those four modules represent N capacity.
Operating at an N configuration means the system has no redundancy. If any component fails, the facility may experience reduced capacity or even downtime.
For this reason, most modern data centers do not operate at pure N capacity.
N+1 Redundancy
The most common redundancy level used in data centers is N+1.
In an N+1 configuration, the facility installs one additional component beyond what is required to support the load.
This extra component acts as a backup if one unit fails or must be taken offline for maintenance.
Cooling System Example
Assume a data center requires three chillers to meet its cooling demand.
An N+1 configuration would install:
3 chillers required (N)
+1 additional backup chiller
---------------------------
4 total chillers installed
Under normal operation, three chillers carry the load while the fourth unit remains available as a backup.

If one chiller fails, the backup chiller automatically starts and maintains full cooling capacity. See Data Center Cooling Systems Explained, the third video in our Data Center series.
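The sizing arithmetic above can be sketched in a few lines of Python. The load and per-chiller capacity figures here are illustrative assumptions, not values from a real facility:

```python
import math

def units_required(total_load_kw: float, unit_capacity_kw: float) -> int:
    """N: the minimum number of units needed to carry the full load."""
    return math.ceil(total_load_kw / unit_capacity_kw)

def n_plus_1(total_load_kw: float, unit_capacity_kw: float) -> int:
    """N+1: install one spare unit beyond the minimum."""
    return units_required(total_load_kw, unit_capacity_kw) + 1

# Illustrative example: 3,000 kW of cooling load served by 1,000 kW chillers.
n = units_required(3000, 1000)    # N = 3
installed = n_plus_1(3000, 1000)  # N + 1 = 4
print(f"N = {n} chillers, install {installed} for N+1")
```

The same two functions apply unchanged to UPS modules or any other component sized against a total load.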
Electrical System Example
Electrical infrastructure often follows the same design principle.
For example, a UPS system may require four modules to support the full IT load.
In an N+1 configuration, the facility installs:
4 UPS modules required
+1 redundant module
--------------------
5 total UPS modules
If one module fails or requires removal for maintenance, the remaining units continue to support the servers without interruption.
Because N+1 provides a good balance between reliability and cost, it is widely used in enterprise and colocation data centers. See Data Center Power Flow: From Utility Grid to Server Rack, the second video in our Data Center series.
2N Redundancy
A more advanced redundancy architecture is known as 2N redundancy.
In a 2N design, the entire system is fully duplicated.
Instead of adding one backup component, the facility installs two completely independent systems, each capable of supporting the full load by itself.
Cooling System Example
Suppose a facility requires three chillers to cool the data center.

A 2N design would install two separate chiller plants:
Plant A (N): 3 chillers
Plant B (N): 3 chillers
In this configuration, either plant can support the entire cooling demand.
If Plant A fails or requires maintenance, Plant B can continue operating without affecting the servers.
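A 2N design reduces to a simple capacity check: there must be two independent plants, and each one alone must cover the full load. The chiller capacities below are illustrative assumptions:

```python
def plant_capacity_kw(chiller_capacities_kw: list[float]) -> float:
    """Total cooling capacity of one independent plant."""
    return sum(chiller_capacities_kw)

def is_2n(plants: list[list[float]], total_load_kw: float) -> bool:
    """True when there are at least two plants and each one alone
    can carry the entire cooling load."""
    return len(plants) >= 2 and all(
        plant_capacity_kw(p) >= total_load_kw for p in plants
    )

# Illustrative: two plants of three 1,000 kW chillers, 3,000 kW total load.
plant_a = [1000, 1000, 1000]
plant_b = [1000, 1000, 1000]
print(is_2n([plant_a, plant_b], total_load_kw=3000))  # True
```

Note the contrast with N+1: losing one chiller in an N+1 plant removes the spare, while losing an entire plant in a 2N design still leaves full capacity online.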
Electrical System Example
Electrical infrastructure may also follow a 2N architecture.
A typical 2N electrical system might include:
- Two independent utility power feeds
- Two separate switchgear lineups
- Two UPS systems
- Two independent power distribution paths
- Dual power supplies on each server
Servers connect to both electrical paths so that if one path fails, the other path continues delivering power.
This design dramatically increases reliability and fault tolerance.

Redundancy in Data Center Cooling Systems
Cooling systems are one of the most critical components of data center infrastructure.
Servers generate large amounts of heat, and maintaining the correct operating temperature is essential to prevent hardware failures.
A typical data center cooling system may include redundancy at multiple levels, including:
- Chillers
- Cooling towers
- Chilled water pumps
- Condenser water pumps
- Computer Room Air Handlers (CRAH units)
- In-row cooling systems
For example, a chilled water plant may include:
4 chillers (N+1 configuration)
4 chilled water pumps
4 condenser water pumps
Inside the data hall, multiple cooling units distribute cold air to the server racks.
If one cooling unit fails, the remaining units increase airflow and maintain temperature control.
This layered redundancy helps ensure that a single equipment failure does not cause servers to overheat.
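The failover behavior inside the data hall can be sketched as a load redistribution: when one of the running cooling units fails, each survivor picks up an equal share of the airflow demand. The airflow figure below is an illustrative assumption:

```python
def per_unit_share(total_airflow_cfm: float, running_units: int) -> float:
    """Airflow each unit must deliver when the demand is shared equally."""
    if running_units < 1:
        raise ValueError("at least one unit must be running")
    return total_airflow_cfm / running_units

total = 120_000  # illustrative data hall airflow demand, in CFM
print(per_unit_share(total, 6))  # 20000.0 CFM per unit, all units healthy
print(per_unit_share(total, 5))  # 24000.0 CFM per unit after one failure
```

Designers size each unit so that the post-failure share still falls within its rated capacity, which is exactly what the +1 margin provides.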
Redundancy in Data Center Electrical Systems
Electrical infrastructure is another critical area where redundancy is essential.
Data centers typically receive power from the electrical grid, but they also include several layers of backup systems to maintain continuous operation.
These systems may include:
Utility Power Feeds
Many data centers receive electricity from two separate utility feeders, often from different substations. This allows the facility to continue receiving power if one feeder fails.
Backup Generators
Facilities commonly install diesel generators to provide long-duration backup power in the event of a utility outage.
Uninterruptible Power Supply (UPS)
UPS systems provide short-term battery backup power to maintain electrical supply while generators start and stabilize.
Power Distribution
Switchgear, power distribution units (PDUs), and busway systems distribute electrical power to the server racks.
Redundant electrical systems ensure that power remains available even if a component fails or requires maintenance.
Redundancy and Data Center Tier Classifications
Redundancy levels are closely related to data center tier ratings, which classify facilities based on reliability and fault tolerance.
The most widely recognized classification system was developed by the Uptime Institute.
Typical tier classifications include:
Tier I
- Basic infrastructure
- Little or no redundancy
Tier II
- Includes some redundant components
- Often uses N+1 redundancy
Tier III
- Concurrently maintainable infrastructure
- Maintenance can occur without shutting down systems
Tier IV
- Fully fault-tolerant systems
- Often uses 2N or similar architectures
Higher-tier facilities require more infrastructure investment but provide greater reliability.
Why Redundancy Is Critical for Data Centers
Redundancy is essential because failures are inevitable.
Equipment may fail due to:
- mechanical wear
- electrical faults
- cooling issues
- maintenance requirements
- external power outages
Data centers install redundant systems to ensure that when one component fails, another system immediately takes over.
This design philosophy allows modern data centers to deliver the extremely high uptime levels required by today’s digital economy.
Final Thoughts
Data center redundancy is one of the most important concepts in modern digital infrastructure design.
When you see terms such as N, N+1, and 2N, they describe how much backup capacity is installed to protect critical systems.
Both electrical infrastructure and cooling systems rely on redundancy strategies to maintain reliable operation and prevent downtime.
As demand for cloud computing, artificial intelligence, and large-scale data processing continues to grow, redundancy will remain a central design principle in the construction and operation of data centers around the world.
Common Redundancy Configurations in Data Centers
Common configurations include:
- N configuration
- N+1 configuration
- 2N configuration
- 2N+1 configuration
- Distributed redundant systems
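The configurations above differ only in how many units are installed on top of the baseline N. A compact way to compare them (distributed-redundant designs vary too much between facilities to reduce to a single formula, so they are omitted here):

```python
def installed_units(n: int, configuration: str) -> int:
    """Total units installed for a given redundancy configuration."""
    formulas = {
        "N": n,             # no redundancy
        "N+1": n + 1,       # one spare unit
        "2N": 2 * n,        # fully duplicated system
        "2N+1": 2 * n + 1,  # fully duplicated system plus one spare
    }
    return formulas[configuration]

# With N = 3 chillers:
for config in ("N", "N+1", "2N", "2N+1"):
    print(f"{config}: {installed_units(3, config)} chillers installed")
```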
Data Center Engineering Series
This article is the hub of our Data Center Educational Series, where we break down each major system in detail.
Currently Published
- How Data Centers Actually Work: An overview of how modern data centers operate, explaining the critical electrical, mechanical, and IT infrastructure required to keep servers running 24/7.
- Data Center Power Flow: From Utility Grid to Server Rack: Learn how electrical power travels from the utility grid through switchgear, UPS systems, generators, and distribution equipment before reaching server racks.
- Data Center Cooling Methods Explained: Learn how CRAC units, chilled water systems, and airflow management remove heat from server environments.
- Data Center Redundancy Explained (N, N+1, and 2N Systems): Understand how redundancy strategies like N, N+1, and 2N designs protect data centers from outages and ensure continuous operation.
- How Data Center Electrical Systems Work: Understand how data center electrical systems deliver continuous power using switchgear, UPS systems, generators, and redundancy design.
- How Data Center UPS Systems Work: Understand how UPS systems provide instant backup power and protect data centers from outages and power disruptions.
- Data Center Refrigerant Economizer: Discover how refrigerant economizer systems improve cooling efficiency by using outdoor conditions to reduce compressor operation and lower energy consumption.
- Data Center HVAC Systems

This article is part of our Data Center Engineering Series where we explain how data centers are powered, cooled, and designed.


