Data Centre HVAC: Uptime Tier I–IV, ASHRAE TC 9.9, Hot Aisle / Cold Aisle (Pillar)

A data centre’s HVAC system has one job: keep IT equipment within its manufacturer-specified thermal envelope, 24/7, with no failure window. ASHRAE TC 9.9 (Thermal Guidelines for Data Processing Environments) defines those envelopes. The Uptime Institute Tier Standard (Tiers I–IV) defines the redundancy required to deliver them continuously.

This pillar covers thermal envelope, hot aisle / cold aisle separation, cooling architecture options, and the PUE-driven trade-off that makes free cooling a no-brainer in cooler Indian climates.

ASHRAE TC 9.9 thermal envelope

ASHRAE TC 9.9 specifies operating temperature/humidity ranges for IT equipment:

The recommended envelope (the design target) is the same for all four classes: 18-27 °C dry bulb, 5.5-15 °C dewpoint, 60% RH max. The allowable envelope (worst-case ride-through) widens with class:

  • A1: 15-32 °C, 20% RH min, 80% RH max
  • A2: 10-35 °C, 20% RH min, 80% RH max
  • A3: 5-40 °C, 8% RH min, 85% RH max
  • A4: 5-45 °C, 8% RH min, 90% RH max

Most enterprise data centres design to Class A2 and hold cold aisles within the recommended band (typically 22 °C ± 1.5 °C).

The “allowable” envelope is what enables aggressive free cooling. A data centre designed with an airside economiser to use outdoor air up to 28-30 °C (within the Class A2 allowable range) can operate without mechanical cooling for roughly 3,000-5,000 hours/year in mild climates.
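
As a rough illustration of how the allowable envelope translates into economiser hours, here is a minimal Python sketch that counts the hours in a weather series falling inside the Class A2 allowable window quoted above. The weather samples are placeholders (substitute real hourly TMY/IMD data for an actual study), and dewpoint limits are omitted for simplicity.

```python
# Sketch: count hours where outdoor air sits inside the ASHRAE Class A2
# "allowable" window, i.e. hours an airside economiser could in principle
# carry the load. Dewpoint limits omitted for simplicity.

A2_ALLOWABLE = {"db_min": 10.0, "db_max": 35.0, "rh_min": 20.0, "rh_max": 80.0}

def a2_allowable(dry_bulb_c: float, rh_pct: float) -> bool:
    """True if this hour's outdoor air is inside the Class A2 allowable envelope."""
    return (A2_ALLOWABLE["db_min"] <= dry_bulb_c <= A2_ALLOWABLE["db_max"]
            and A2_ALLOWABLE["rh_min"] <= rh_pct <= A2_ALLOWABLE["rh_max"])

def free_cooling_hours(hourly_weather) -> int:
    """hourly_weather: iterable of (dry_bulb_c, rh_pct) tuples, one per hour."""
    return sum(1 for db, rh in hourly_weather if a2_allowable(db, rh))

# Placeholder data: three sample hours, not a real weather file.
sample = [(24.0, 55.0), (31.0, 85.0), (18.0, 40.0)]
print(free_cooling_hours(sample), "of", len(sample), "hours economiser-viable")
```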

Uptime Tier classification

Uptime Institute Tier Standard defines four classes:

  • Tier I (basic): single path, no redundancy. A single CRAH per zone; a cooling failure stops IT.
  • Tier II: single path with N+1 components. One spare CRAH; loss of one component is tolerated.
  • Tier III: multiple paths, concurrently maintainable. Two-path supply; any one CRAH can be taken down for maintenance without IT impact.
  • Tier IV: fault-tolerant (concurrent maintainability plus fault tolerance). All cooling components 2N (or 2N+1); a single fault causes no IT impact.

For a Tier III data centre, you typically have:

  • Twin chilled water plants (each capable of 100% duty)
  • Twin distribution risers
  • 2N cooling on each row

Capex for Tier IV is typically 1.5-2x Tier III. For most enterprise applications, Tier III is the sweet spot.
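
To make the N+1 / 2N distinction concrete, here is a small sketch (illustrative only, not a design tool) that works out how many units each topology installs for a given cooling duty and unit size; the 1,000 kW duty and 125 kW unit size are assumed figures.

```python
import math

def installed_units(required_kw: float, unit_kw: float, topology: str) -> int:
    """Number of installed units for a given redundancy topology.

    N    : just enough units to meet the duty
    N+1  : one spare component
    2N   : a full second path, each path able to carry 100% of the duty
    2N+1 : two full paths plus one spare
    """
    n = math.ceil(required_kw / unit_kw)   # units needed to meet the duty
    return {"N": n, "N+1": n + 1, "2N": 2 * n, "2N+1": 2 * n + 1}[topology]

# Example: 1,000 kW of cooling duty met with 125 kW units (figures illustrative).
for topo in ("N", "N+1", "2N", "2N+1"):
    count = installed_units(1000, 125, topo)
    print(f"{topo:>4}: {count:2d} units, {count * 125:5d} kW installed")
```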

Hot aisle / cold aisle architecture

The fundamental cooling architecture innovation of data centres is hot-aisle / cold-aisle separation. Rack rows are arranged so that:

  • Cool supply air enters the cold aisle (front of racks)
  • Hot exhaust air leaves the hot aisle (rear of racks)
  • No mixing of hot and cold air

Without separation, hot exhaust recirculates into rack inlets and return air reaches the CRAH as a blended, cooler stream; to keep rack inlets within limits the CRAH supply air temperature (SAT) must be pushed several degrees lower, so the CRAHs work harder and PUE rises.
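
A back-of-envelope way to see the effect: treat rack inlet air as a mass-weighted mix of CRAH supply and recirculated hot-aisle exhaust. The sketch below uses that simple mixing model with illustrative temperatures (27 °C inlet limit, 38 °C exhaust) to show how far the SAT must drop as the recirculation fraction grows.

```python
def required_sat(inlet_limit_c: float, exhaust_c: float, recirc_fraction: float) -> float:
    """Supply air temperature needed so the mixed rack-inlet air stays at its limit.

    Rack inlet air is modelled as a simple mass-weighted mix:
        T_inlet = (1 - f) * T_supply + f * T_exhaust
    Solving for T_supply gives the value below. Illustrative only.
    """
    f = recirc_fraction
    return (inlet_limit_c - f * exhaust_c) / (1.0 - f)

# Keep rack inlets at 27 degC with 38 degC hot-aisle exhaust (illustrative figures).
for f in (0.0, 0.1, 0.2, 0.3):
    print(f"recirculation {f:.0%}: SAT must be <= {required_sat(27.0, 38.0, f):.1f} degC")
```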

Open hot/cold aisle (basic)

Rows are simply arranged front-to-front and back-to-back. It is cheap, but it allows recirculation over the tops of racks and around the aisle ends.

Cold-aisle containment (CAC)

The cold aisle is enclosed with end-of-row doors and a roof; cool supply air is confined, and the rest of the room acts as the open hot-air return path.

Hot-aisle containment (HAC)

The hot aisle is enclosed and the CRAH return is ducted from the hot-aisle ceiling; cold supply air fills the main room.

CAC and HAC each reduce PUE by roughly 10-15% versus open aisles. HAC is slightly preferred for new builds because it raises the CRAH return temperature, which improves cooling efficiency.

Cooling architecture options

Architecture A: Perimeter CRAH (Computer Room Air Handler)

Floor-standing CRAH units around room perimeter. Cool air discharged into raised floor plenum, supplied through perforated tiles in cold aisle.

Pros: well-proven, easy to maintain.

Cons: long air path, fan energy ~12-15% of IT load, hard to handle high rack densities (>10 kW/rack).

Architecture B: In-row cooling

CRAH-style units placed at end of each row or between racks. Direct supply to cold aisle, direct return from hot aisle.

Pros: very short air path, fan energy ~6-8% of IT load, handles high density (15-30 kW/rack).

Cons: more units to maintain.

Architecture C: Rear-door heat exchanger (RDHX)

A liquid-cooled coil mounted on the rear door of each rack. The IT equipment’s own fans push exhaust air through the coil, so the rack appears “thermally neutral” to the room.

Pros: handles ultra-high density (30+ kW/rack), very efficient.

Cons: liquid-near-electronics risk, expensive.

Architecture D: Liquid cooling (direct to chip)

Cold plates mounted directly on CPUs/GPUs. Requires special hardware (only some servers support it).

Pros: density up to 100 kW/rack, lowest PUE.

Cons: hardware-specific, complex.

For typical Indian enterprise data centres at 5-10 kW/rack, perimeter CRAH or in-row cooling is appropriate; for high-performance computing at 15+ kW/rack, use in-row cooling or RDHX.
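
Using the fan-energy fractions quoted above (perimeter CRAH roughly 12-15% of IT load, in-row roughly 6-8%), the sketch below shows the absolute fan power each architecture implies at a given IT load. The mid-point fractions and the 1 MW load are assumptions for illustration.

```python
# Rough fan-power comparison using the fractions quoted in the text
# (mid-points of the stated ranges; illustrative only).
FAN_FRACTION = {
    "perimeter CRAH": 0.135,   # ~12-15% of IT load
    "in-row cooling": 0.07,    # ~6-8% of IT load
}

def fan_power_kw(it_load_kw: float, architecture: str) -> float:
    """Estimated air-moving power for the chosen architecture."""
    return it_load_kw * FAN_FRACTION[architecture]

it_load = 1000.0   # 1 MW IT load, as in the worked example further below
for arch in FAN_FRACTION:
    print(f"{arch}: ~{fan_power_kw(it_load, arch):.0f} kW of fan power")
```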

Free cooling

Outdoor air at the right conditions can replace mechanical cooling. Two methods:

Airside economiser

Outdoor air is filtered (and humidified if necessary) and supplied directly to the IT space.

Hours of free cooling viable per year (rough Indian estimates):

  • Bangalore: 4,500 hr/year (mild winter)
  • Mumbai: 1,500 hr/year (limited by humidity)
  • Delhi: 3,000 hr/year (cold winter, but high pollution may force filtration)
  • Chennai: 800 hr/year (limited)

Waterside economiser (plate HX between cooling tower and CHW)

Cooling tower water cools the chilled water through a plate heat exchanger, with the chiller bypassed. It works in any climate where the cooling tower can produce condenser water colder than the chilled water return.
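
A rough hour-by-hour viability test for that condition, sketched in Python: condenser water leaving the tower is approximated as wet bulb plus the tower approach, and the economiser is counted as viable when that water is still colder than the CHW return by at least the plate-HX approach. The approach values and the 18 °C CHW return are assumptions, not design data.

```python
def waterside_economiser_viable(wet_bulb_c: float,
                                chw_return_c: float,
                                tower_approach_k: float = 4.0,
                                hx_approach_k: float = 1.5) -> bool:
    """Rough viability test for one waterside-economiser hour.

    Condenser water leaving the tower is roughly wet bulb + tower approach;
    it must still be colder than the CHW return by at least the plate-HX
    approach to do useful work. Approach values are typical assumptions.
    """
    cw_supply = wet_bulb_c + tower_approach_k
    return cw_supply + hx_approach_k < chw_return_c

# Example: 18 degC CHW return (elevated-temperature design), various wet bulbs.
for wb in (8.0, 12.0, 16.0, 20.0):
    print(f"wet bulb {wb:4.1f} degC ->",
          "economiser" if waterside_economiser_viable(wb, 18.0) else "chiller")
```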

Hours per year:

  • Most Indian climates: 1,500-3,000 hr/year on waterside

PUE reduction from free cooling: 0.20-0.40 typical (e.g. PUE drops from 1.6 to 1.3).

PUE (Power Usage Effectiveness)

PUE = Total facility power / IT equipment power

Modern data centres target:

  • Tier I/II: PUE 1.7-2.0
  • Tier III: PUE 1.4-1.6
  • Tier IV: PUE 1.5-1.7 (some efficiency loss to redundancy)
  • Hyperscale (Google, Meta, AWS): PUE 1.1-1.2

For Indian enterprise: PUE 1.6-1.8 is typical, with PUE 1.4 achievable through aggressive free cooling and contained aisles.

Worked example: 1 MW Tier III data centre, Bangalore

Specifications:

  • 100 racks at 10 kW each = 1 MW IT load
  • Tier III concurrently maintainable
  • 22 °C cold aisle SAT, ~34 °C hot aisle return (≈12 K rise across the racks, within Class A2)
  • Hot aisle containment

Cooling design:

  • 2N CRAH (16 CRAH total: 8 duty + 8 spare for Tier III concurrent maintenance)
  • 2N chilled water plants (2 × 600 TR = 1,200 TR installed; one 600 TR plant on duty at a time)
  • Waterside economiser on each chilled water plant
  • 2N cooling towers (2 × 800 TR, sized for condenser heat rejection)

Sizing:

  • IT load 1 MW = 3.4 million BTU/h = 285 TR
  • CRAH cooling capacity 285 TR; 2N = 570 TR
  • Add ~5% for fan and pump heat pickup ≈ 300 TR design coil duty
  • Each chilled water plant rated 600 TR (one duty, one standby), a generous margin over the ~300 TR design duty (see the sketch below)
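
The conversions behind those figures, spelled out in a short sketch (1 kW = 3,412 BTU/h, 1 TR = 12,000 BTU/h; the 5% allowance and the 2N multiplier are the same assumptions as in the list above):

```python
# The sizing arithmetic from the list above, spelled out.
KW_TO_BTUH = 3412.0     # 1 kW = 3,412 BTU/h
TR_BTUH = 12000.0       # 1 TR (ton of refrigeration) = 12,000 BTU/h

it_load_kw = 1000.0                                  # 1 MW IT load
it_load_tr = it_load_kw * KW_TO_BTUH / TR_BTUH
print(f"IT load: {it_load_tr:.0f} TR")               # ~284 TR (rounded to 285 in the text)

design_coil_tr = it_load_tr * 1.05                   # +~5% fan and pump heat pickup
print(f"Design coil duty: {design_coil_tr:.0f} TR")  # ~299 TR (about 300 TR)

crah_2n_tr = it_load_tr * 2                          # 2N CRAH installed capacity
print(f"2N CRAH capacity: {crah_2n_tr:.0f} TR")      # ~569 TR (about 570 TR)
```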

PUE estimate:

  • IT: 1,000 kW
  • Chiller (compressor only, COP 4 at site conditions): 285 TR × 3.517 kW/TR ÷ 4 ≈ 250 kW
  • Cooling tower fan + pump: 60 kW
  • CRAH fans: 100 kW
  • Lighting + minor: 30 kW
  • Total: 1,440 kW
  • PUE = 1,440 / 1,000 = 1.44

With the waterside economiser running ~60% of the year (typical for Bangalore), the annual average PUE is ≈ 1.30, saving roughly ₹1.5 crore/year in energy versus a PUE 1.7 baseline.
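
The same arithmetic as a sketch, including the annual blend. The economiser-mode assumption (compressor off, fans and pumps still running) and the tariff (chosen so the result reproduces the ~₹1.5 crore figure above) are illustrative, not measured values.

```python
# PUE arithmetic from the worked example, plus a rough annual blend with
# the waterside economiser. Economiser-hour cooling power and tariff are
# illustrative assumptions, not measured data.

it_kw = 1000.0
mechanical = {"chiller": 250.0, "tower fan + pumps": 60.0,
              "CRAH fans": 100.0, "lighting + minor": 30.0}

pue_mech = (it_kw + sum(mechanical.values())) / it_kw
print(f"PUE on mechanical cooling: {pue_mech:.2f}")       # ~1.44

# Assume the compressor is off in economiser hours but fans/pumps still run.
econ_overhead = sum(mechanical.values()) - mechanical["chiller"]
pue_econ = (it_kw + econ_overhead) / it_kw                # ~1.19

econ_fraction = 0.60                                      # share of the year on economiser
pue_annual = econ_fraction * pue_econ + (1 - econ_fraction) * pue_mech
print(f"Annual blended PUE: {pue_annual:.2f}")            # ~1.29

# Energy saved vs a PUE 1.7 baseline; tariff assumed to match the article's figure.
tariff_inr_per_kwh = 4.3
saved_kwh = (1.7 - pue_annual) * it_kw * 8760
print(f"Saving vs PUE 1.7: ~Rs {saved_kwh * tariff_inr_per_kwh / 1e7:.1f} crore/year")
```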

Common data centre HVAC mistakes

1. No hot/cold aisle separation in retrofit installations. Open aisles mean recirculation, which forces the CRAH SAT 6-8 °C lower than otherwise needed, roughly a 15% PUE penalty.

2. CRAH capacity undersized for the actual rack density. A 5 kW/rack design works until racks are loaded to 8 kW; then come hot spots and thermal trips.

3. Free-cooling hours not utilised. The plant runs 6 °C CHW year-round, even when it is 18 °C outside, because the controls never enable the economiser.

4. No N+1 on critical components in a nominally Tier III facility. A single chiller fault takes the entire data centre down.

5. Humidity not controlled. The Class A2 recommended envelope calls for a 5.5-15 °C dewpoint; ignoring it invites condensation in the cold aisle and ESD events in dry winter conditions.

Quick checklist

  • [ ] Tier classification determined (I/II/III/IV)
  • [ ] Class A1-A4 thermal envelope target
  • [ ] Cold/hot aisle separation (CAC or HAC preferred)
  • [ ] Cooling architecture (perimeter / in-row / RDHX / liquid)
  • [ ] Free cooling integration (airside or waterside)
  • [ ] PUE target (1.4-1.6 for Tier III in Indian climate)
  • [ ] N+1 redundancy at chiller, CRAH, distribution
  • [ ] BMS with airflow monitoring at each rack inlet
  • [ ] Cold aisle SAT and dewpoint control
  • [ ] DR / failover strategy for HVAC

References: ASHRAE TC 9.9, Thermal Guidelines for Data Processing Environments, 5th ed. (2021); Uptime Institute, Tier Standard: Topology; ASHRAE Handbook: HVAC Applications (2023), Ch. 19, Data Processing and Communication Centres; EN 50600, Information Technology — Data Centre Facilities and Infrastructures; The Green Grid, PUE specification.
