
The data centre industry has shifted. At 50 MW and above, almost every new hyperscale build is an AI data centre — packed with GPU clusters (NVIDIA H100, H200, GB200) that demand 30–100+ kW per rack. Traditional air-cooling cannot keep pace; direct liquid cooling (DLC), rear-door heat exchangers (RDHx), and immersion cooling are now baseline architecture. That shift changes everything about commissioning: the test equipment, the load profiles, the thermal targets, and what happens when something goes wrong.

This guide is for engineers and project managers who need to commission those facilities. It covers the full L1–L5 acceptance framework (ASHRAE Guideline 0 / GB 50174), explains which load banks to use at which phase, and provides step-by-step procedures to validate GPU cluster cooling under full-load conditions. By the end you'll know how L1 factory acceptance connects to L5 integrated system testing, how to stress-test a liquid cooling loop at 50 MW, and what the final commissioning package must contain.

Short on time? Jump to the L1–L5 acceptance checklist.

50 MW AI Data Centre Commissioning — Step Summary

  1. L1 — Factory Acceptance (FAT): Witness critical equipment load test at manufacturer's site before shipment.
  2. L2 — Installation Qualification (IQ): Verify as-built installation against design drawings; complete pressure and insulation tests.
  3. L3 — Operational Qualification (OQ): Test each piece of equipment individually; confirm load bank communication handshake with CDU and DCIM.
  4. L4 — Component Integration (DCL series): Deploy DCL rack-mounted units at GPU rack positions; validate each CDU branch, PDU feeder, and cold-aisle thermal profile row by row.
  5. L5 — Integrated System Test (LCB series): Connect LCB central load banks to building manifold; apply 50 MW full-facility load; validate power, cooling tower, generator failover, and measure PUE.

1. What is data centre commissioning?

Data centre commissioning (Cx) is the systematic process of verifying that every building system — power, cooling, fire suppression, and monitoring — performs as designed, both individually and combined, under realistic operating conditions including full load and simulated fault scenarios.

For a 50 MW facility, commissioning isn't a single sign-off. It's a phased programme spanning weeks, documented to standards that satisfy insurance requirements, client SLAs, and regulators. The benchmark most commonly referenced in Asia-Pacific projects is ASHRAE Guideline 0: The Commissioning Process; in China the parallel standard is GB 50174-2017. Both use the same five-level escalation logic.

⚠ Why 50 MW commissioning matters — the numbers: At 50 MW, a 1% measurement error = 500 kW of unverified load — on the order of 50–100 conventional server racks, or 5–16 full GPU cluster racks at 30–100 kW each. For AI facilities, one missed thermal fault can trip an entire hall. Commissioning isn't optional; it's contractual risk management. A CDU branch that fails thermal balance under test will fail under production load, with no second chance once the hall is live.

2. The five acceptance levels: L1 – L5

Industry practice (ASHRAE Guideline 0, Uptime Institute M&V) divides data centre acceptance into five sequential levels. Each must be completed and signed off before the next begins. Skipping levels — especially skipping L3 single-unit verification before L4 integration — is the most common root cause of L5 failures.

| Level | Name | Scope | Location | Key test at 50 MW scale |
| --- | --- | --- | --- | --- |
| L1 | Factory Acceptance Test (FAT) | Individual critical equipment | Manufacturer's factory | 2,000 kVA generator load test; chiller COP verification; load bank calibration certificate |
| L2 | Installation Qualification (IQ) | Physical installation check | Site, pre-energisation | Equipment spec vs as-built drawing check; cable bend-radius inspection; pipe pressure test at 1.5× working pressure |
| L3 | Operational Qualification (OQ) | Single-unit functional test | Site, post-energisation | Switchgear trip/close sequence; pump jog test; UPS self-test; load bank communication handshake |
| L4 | Component Integration Test | Subsystem validation | Site | DCL rack-mounted units simulate individual server racks → validates each CDU branch, each PDU feeder, cold-aisle thermal profile row by row. Full chiller-loop thermal balance at 50% load. |
| L5 | Integrated System Test (IST) | Full-facility combined test | Site | LCB central load banks apply 50 MW whole-facility load → validates power infrastructure, cooling tower capacity, generator + ATS failover, N+X chiller/pump fault, 24 h sustained run, and PUE measurement. |

Common mistake: Many projects compress L3 and L4 into a single day to save schedule. In practice this means entering L5 with unverified individual components — which then fail under combined load, requiring a full L5 re-run. The time saved is never recovered.

3. Two load bank product lines for two test phases

A 50 MW data centre commissioning programme uses two fundamentally different types of liquid-cooled load bank, matched to the two most critical test phases. Confusing them — or using the wrong type at the wrong phase — is a common and costly mistake.

DCL series — rack-mounted liquid-cooled load bank (L4 focus)

The DCL series is a 1U/2U rack-mounted unit that installs directly into a server rack position. It simulates a high-power-density server — such as an NVIDIA H100 or H200 GPU node — by drawing both electrical power and chilled water from the CDU loop. This is the right tool for L4: Component Integration in AI data centres, where the goal is to validate each rack's power chain, each CDU branch, and the cold-aisle thermal profile before any GPU hardware arrives on site.

The DCL is built for:

  • DLC / cold plate validation — verify CDU interface response with a thermal load that mimics a GPU cluster rack
  • Immersion cooling CDU validation — DCL units at manifold drops confirm the immersion loop can sustain full heat rejection at rack level
  • GPU cluster power chain testing — placing DCL units at specific rack positions exercises each PDU branch circuit at densities representative of AI workloads
  • Rack inlet temperature profiling — granular placement produces a detailed cold-aisle temperature map across the full GPU row
  • Facilities with per-rack power density above 30 kW — the norm for AI training and inference clusters

DCL series at a glance:

  • Typical model: DCL-500 (500 kW)
  • Form factor: 1U / 2U rack-mounted
  • Heat rejection: Direct to CDU chilled water
  • Temperature accuracy: ±0.5 °C
  • Flow uniformity: ±1.5% all zones
  • Communication: TCP/IP + Modbus RTU
  • Parallel scale: Up to 168 units (84 MW)
  • Application phase: L3 OQ → L4 Component Integration

L4 test focus — GPU cluster commissioning: L4 validates individual subsystems in an AI data centre — the electrical feed to a GPU rack row, the CDU response to a realistic GPU-node thermal load, the cold-aisle thermal gradient. DCL units can be placed in any rack position at the exact density a GPU cluster requires, giving fine-grained control over which circuits and CDU branches are exercised before the first server is racked.

Risk-Free CDU Validation — No GPU Hardware Required

DCL rack-mounted load banks let you validate the entire CDU system 4+ weeks before GPU hardware arrives. Simulate NVIDIA H100/H200 thermal profiles at every rack position, calibrate CDU control loops, and certify per-row thermal balance — without a single GPU on site.

LCB series — central liquid-cooled load bank (L5 focus)

The LCB series is a floor-standing or rack-array central liquid-cooled load bank rated at 100–500 kW per unit. Rather than occupying a server rack slot, it connects directly to the facility's main manifold or CDU bus, applying load at whole-hall or whole-building level. For L5: Integrated System Test (IST) in AI data centres, this is the tool that proves the entire cooling plant — cooling towers, chillers, pumps, and CDU loops — can sustain a full GPU cluster workload across the entire facility simultaneously.

The LCB series is designed for:

  • Whole-AI-data-centre load simulation — one or more LCB units apply 100% IT load across an entire GPU facility from a central connection point
  • Cooling tower and chiller plant stress test — sustained full-load run at peak ambient temperature is the only way to confirm the heat rejection path keeps pace with a fully-loaded GPU cluster hall
  • Multi-hall integrated GPU facility testing — an LCB bank serving the manifold bus can simultaneously load multiple GPU halls, testing cross-hall power distribution and cooling coordination
  • Full-load PUE measurement for AI facilities — simultaneous IT load and facility power measurement produces an accurate PUE that reflects real cooling overhead of liquid-cooled GPU clusters

LCB series at a glance:

  • Typical models: LCB-100 / LCB-300 / LCB-500
  • Rated power: 100 / 300 / 500 kW per unit
  • Max parallel expansion: 10 MW+ (multi-unit)
  • Heat rejection: Building chilled water / cooling tower
  • Connection type: Central manifold / CDU bus (DN100+)
  • Load resolution: 1% step
  • Application phase: L5 IST / Full facility stress test
  • Key targets: Full-load + cooling tower + PUE

L5 test focus — full AI facility IST: L5 is the final, highest-stakes phase. The LCB series applies whole-facility load to stress-test every system simultaneously: power infrastructure, GPU cluster cooling chains, cooling towers, and site-wide N+X redundancy. L4 with DCL proves each GPU rack's CDU interface individually. L5 with LCB proves the entire facility works as one system — including external heat rejection, the single most common bottleneck in liquid-cooled AI data centres.

Whole-Facility IST — One Integrated Run

LCB central load banks connect to the building manifold to apply 50 MW of simultaneous load across the entire AI facility. One coordinated step-load profile validates power, cooling tower capacity, generator failover, and PUE — all in a single L5 run.

4. Electrical system testing

The electrical acceptance sequence runs in parallel with — but must be completed independently of — the cooling tests. Running both simultaneously before either has been individually verified risks cascading trips that make fault isolation nearly impossible.

4.1 Full-load thermal imaging test

Using load banks to simulate 50 MW of IT load, run the facility at 100% for a minimum of 4 hours. An infrared thermal scan of all cable joints, busbar connections, and transformer terminations must be performed at the 3-hour mark (when thermal equilibrium is reached). Hotspots more than 10 °C above ambient are a mandatory hold point.

Standard reference: NETA MTS (Maintenance Testing Specifications) categorises busbar temperature rises. Category 3 (>40 °C above ambient) requires immediate de-energisation and remediation.
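As a worked illustration of the two thresholds above (the 10 °C hold-point rule from the scan procedure and the 40 °C NETA Category 3 level), the snippet below triages a thermal-scan log. The point names, ambient value, and log format are illustrative assumptions, not the scan tool's actual export.

```python
# Hypothetical triage of a full-load thermal imaging scan.
# Thresholds from the procedure above: >10 degC rise over ambient = hold point,
# >40 degC rise = NETA MTS Category 3 (immediate de-energisation).

AMBIENT_C = 32.0  # ambient reference at the 3-hour mark (example value)

scan_points = [
    # (location, measured surface temperature in degC) -- example data
    ("Busbar joint B2-07", 44.5),
    ("Transformer TX-3 LV termination", 61.0),
    ("PDU-14 feeder lug", 78.2),
]

for location, temp_c in scan_points:
    rise = temp_c - AMBIENT_C
    if rise > 40.0:
        verdict = "CATEGORY 3: de-energise and remediate"
    elif rise > 10.0:
        verdict = "HOLD POINT: investigate before proceeding"
    else:
        verdict = "pass"
    print(f"{location}: {rise:+.1f} degC above ambient -> {verdict}")
```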

4.2 Transient load test (step load / step dump)

Simulate the load profile a real GPU cluster produces: rapid ramp-up at shift start, sustained peak, then sudden power-down. The test applies 0% → 50% → 100% step loads and observes the criteria below (a minimal scripted check for the first two follows the list):

  • UPS voltage hold-up: output voltage must remain within ±10% of nominal during the first 20 ms after step application
  • Generator dynamic response: voltage and frequency recover to within ±5% within 10 seconds of accepting the load step
  • PDU circuit breaker coordination: no nuisance trips during ramp; correct cascade trip sequence on deliberate fault injection
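A minimal sketch of how the UPS hold-up and generator recovery criteria can be checked against logged data. The sample traces and the 400 V / 50 Hz nominals are assumptions; real data would come from the power quality analyser export.

```python
# Pass/fail checks for the first two transient criteria above.
# Data format: lists of (time_since_step_s, value) -- illustrative only.

def ups_holdup_ok(samples, v_nominal=400.0, window_s=0.020, tol=0.10):
    """True if every voltage sample in the first 20 ms stays within +/-10% of nominal."""
    return all(abs(v - v_nominal) <= tol * v_nominal for t, v in samples if t <= window_s)

def generator_recovery_ok(freq_samples, f_nominal=50.0, settle_by_s=10.0, tol=0.05):
    """True if frequency is back within +/-5% of nominal from 10 s after the step onward."""
    tail = [f for t, f in freq_samples if t >= settle_by_s]
    return bool(tail) and all(abs(f - f_nominal) <= tol * f_nominal for f in tail)

# Example: a 0 -> 50% step applied at t = 0
ups_trace = [(0.000, 400.0), (0.005, 372.0), (0.010, 381.0), (0.020, 395.0)]
gen_trace = [(0.0, 50.0), (2.0, 47.9), (6.0, 49.1), (10.0, 49.8), (12.0, 50.0)]

print("UPS hold-up:", "PASS" if ups_holdup_ok(ups_trace) else "FAIL")
print("Generator recovery:", "PASS" if generator_recovery_ok(gen_trace) else "FAIL")
```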

4.3 ATS / STS transfer test

With the facility at 100% load, simulate a mains failure. The automatic transfer switch (ATS) must start the standby generator and transfer the full 50 MW load within the contractually specified time — typically 15 seconds for Tier III, 0 seconds (static transfer) for Tier IV. Log:

  • Generator start-to-accept-load time
  • Output voltage / frequency during transition
  • UPS battery discharge depth during the window
  • Auto-retransfer sequence when mains restores

5. GPU Cluster Cooling Verification: DLC & Immersion Loops

5.1 Hydronic balancing and air purge

Before any heat load is applied, the entire chilled water loop must be purged of air. Even a 0.5% volume air fraction can reduce local heat transfer efficiency by 15% or more, and will produce false high-temperature readings that mask real problems. Procedure (a scripted check for the step-3 tolerance follows the list):

  1. Start circulation pumps at 20% speed. Open auto-vent valves at all high points.
  2. Increase pump speed to 50%, then 100% in 10-minute increments. Monitor differential pressure (ΔP) across each zone.
  3. Adjust balancing valves until all zone ΔP values are within ±5% of design value (ASHRAE TC 9.9). Batterlution DCL units with internal pressure sensors enable real-time ΔP feedback, trimming to ±1.5%.
  4. Confirm zero air via stable ΔP readings (no oscillation) before proceeding.
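The step-3 tolerance check reduces to a simple per-zone comparison against design ΔP. A minimal sketch, with illustrative zone names and values (use ±5% for manual balancing, ±1.5% when trimming against DCL internal sensors):

```python
# Flag any zone whose measured delta-P deviates from design by more than
# the balancing tolerance. Zone names and readings are illustrative.

design_dp_kpa = {"Row A": 85.0, "Row B": 85.0, "Row C": 90.0}
measured_dp_kpa = {"Row A": 83.8, "Row B": 91.2, "Row C": 90.4}

def out_of_balance(design, measured, tolerance=0.05):
    """Return {zone: deviation_fraction} for zones outside the tolerance band."""
    return {
        zone: (measured[zone] - dp) / dp
        for zone, dp in design.items()
        if abs(measured[zone] - dp) / dp > tolerance
    }

for zone, dev in out_of_balance(design_dp_kpa, measured_dp_kpa).items():
    print(f"{zone}: {dev:+.1%} from design delta-P, adjust balancing valve")
```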

5.2 Thermal balance verification at full load

With DCL units running at design load per rack position, monitor:

  • Supply / return water temperature at each CDU — ΔT must match design (typically 10–15 °C)
  • Cold-aisle temperature at rack inlet — stay within ASHRAE A2 envelope (10–35 °C, 80% RH non-condensing)
  • Cooling tower or dry cooler outlet temperature — verified at L5 with LCB central load banks applying 100% facility load

During L5 (IST) with LCB central load banks, the cooling tower stress test runs at full facility load. This is the only test that confirms whether the external heat rejection path — cooling tower, dry cooler, or seawater system — can sustain 50 MW of continuous heat output at peak ambient design temperature.
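Both the per-CDU ΔT check and the facility-level heat-rejection figure come from the same relationship, Q = ṁ · cp · ΔT, which fixes the coolant flow each branch must deliver. A minimal calculation, assuming plain water as the coolant (glycol mixtures shift cp and density by a few percent):

```python
# Required coolant flow for a given heat load and supply/return delta-T,
# from Q = m_dot * cp * dT. Assumes plain water: cp ~4.18 kJ/kg.K, ~1000 kg/m3.

CP_KJ_PER_KG_K = 4.18
DENSITY_KG_PER_M3 = 1000.0

def required_flow_lpm(heat_load_kw: float, delta_t_k: float) -> float:
    """Volumetric flow (litres per minute) needed to reject heat_load_kw at delta_t_k."""
    mass_flow_kg_s = heat_load_kw / (CP_KJ_PER_KG_K * delta_t_k)
    return mass_flow_kg_s / DENSITY_KG_PER_M3 * 1000.0 * 60.0

# Examples at the 10-15 degC design delta-T range
print(f"{required_flow_lpm(500.0, 12.0):.0f} L/min")   # one 500 kW load, ~598 L/min
print(f"{required_flow_lpm(100.0, 15.0):.0f} L/min")   # one 100 kW rack, ~96 L/min
```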

5.3 N+X fault tolerance (chiller/pump failure simulation)

Deliberately shut down one chiller and one pump (simulating the design N+1 failure scenario). Remaining equipment must — through variable-speed drive adjustment — maintain total cooling capacity within 5% of set point without creating hot spots in any zone. If the design is N+2, repeat with two concurrent failures.
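A minimal sketch of how the N+1 result can be checked from recorded data: total delivered cooling within 5% of set point and every zone inlet still inside the ASHRAE A2 envelope. All values are illustrative.

```python
# Post-fault check after tripping one chiller and one pump (N+1 scenario).
# Capacity and zone readings are example values, not measured data.
SETPOINT_KW = 50_000.0
A2_MAX_INLET_C = 35.0

post_fault_capacity_kw = 48_600.0                      # summed from CDU heat meters
zone_inlet_c = {"Hall 1 Row A": 27.4, "Hall 1 Row B": 29.1, "Hall 2 Row A": 33.8}

capacity_ok = abs(post_fault_capacity_kw - SETPOINT_KW) / SETPOINT_KW <= 0.05
hot_zones = {z: t for z, t in zone_inlet_c.items() if t > A2_MAX_INLET_C}

print(f"Capacity within 5% of set point: {'PASS' if capacity_ok else 'FAIL'}")
print(f"Zones outside A2 envelope: {hot_zones or 'none'}")
```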

Hold point: Do not proceed to the blackout test until N+X cooling fault tolerance has passed. A mains failure combined with a cooling failure is a realistic combined-fault scenario; handle cooling degradation independently first.

6. ELV and DCIM integration testing

6.1 Monitoring accuracy verification

Cross-reference load bank power meter readings against DCIM-reported values for every active load bank. Discrepancies greater than ±2% require DCIM sensor recalibration. At 50 MW, a 2% error equals 1 MW of invisible load — which corrupts the final PUE measurement.
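A minimal sketch of that cross-check, and of how the verified IT load feeds the PUE calculation. Unit names and readings are illustrative.

```python
# Cross-check DCIM-reported power against load bank meters (+/-2% tolerance),
# then compute PUE from the 24 h IST energy totals. All values are examples.

load_bank_kw = {"DCL-017": 498.2, "DCL-018": 501.0, "DCL-019": 496.5}
dcim_kw      = {"DCL-017": 497.0, "DCL-018": 489.3, "DCL-019": 497.1}

for unit, ref_kw in load_bank_kw.items():
    error = (dcim_kw[unit] - ref_kw) / ref_kw
    if abs(error) > 0.02:
        print(f"{unit}: DCIM off by {error:+.2%}, recalibrate sensor")

# PUE = total facility energy / IT (load bank) energy over the 24 h run
facility_energy_mwh = 1_430.0   # utility metering over the run (example)
it_energy_mwh = 1_200.0         # 50 MW x 24 h, confirmed by load bank meters
print(f"Measured PUE: {facility_energy_mwh / it_energy_mwh:.3f}")   # ~1.19
```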

6.2 Fire suppression interlock test

Trigger a smoke detector in a representative server room zone. Verify within a defined response window:

  • Precision air conditioning (PAC) units shut down and fire dampers close
  • Gas suppression system enters countdown (but is inhibited from actual discharge during test)
  • DCIM raises a zone-level alarm and logs the event with timestamp
  • Load banks in the affected zone receive remote shutdown command via Modbus TCP

6.3 Leak detection propagation test

Apply a small controlled water drop to each leak-detection rope segment. Verify:

  • Local alarm activates within 5 seconds
  • DCIM alert propagates to operations terminal within 2 seconds of local alarm
  • Affected zone load banks receive automatic standby command

Batterlution DCL units integrate leak detection via Modbus TCP with Siemens Desigo CC, Schneider EcoStruxure, and Huawei NetEco out of the box.
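For reference, the zone-level standby command used in the interlock tests above can be exercised from the DCIM side with a single Modbus TCP write. The sketch below assumes pymodbus 3.x, and the register address and value convention are placeholders; the real map comes from the unit's Modbus documentation.

```python
# Hypothetical: command load banks in an affected zone to standby over Modbus TCP.
# Assumes pymodbus 3.x. HYPOTHETICAL_STANDBY_REG and the value convention are
# placeholders -- use the unit's published register map.
from pymodbus.client import ModbusTcpClient

HYPOTHETICAL_STANDBY_REG = 100   # placeholder address
STANDBY = 1

def command_zone_standby(hosts):
    for host in hosts:
        client = ModbusTcpClient(host, port=502)
        if not client.connect():
            print(f"{host}: no connection, flag as failed interlock test")
            continue
        result = client.write_register(HYPOTHETICAL_STANDBY_REG, STANDBY, slave=1)
        print(f"{host}: {'standby acknowledged' if not result.isError() else 'write failed'}")
        client.close()

command_zone_standby(["10.20.3.11", "10.20.3.12"])  # load banks in the affected zone
```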

7. IST (L5) step-by-step procedure

The Integrated System Test — L5 — is the final and most comprehensive phase of commissioning. While L4 validates subsystems using DCL rack-mounted units, L5 uses the LCB central load bank series to apply full-facility load across the entire data centre simultaneously: power infrastructure, cooling towers, chiller plants, and monitoring systems under combined stress.

The recommended L5 sequence follows four phases:

| Phase | Load level | Duration | Key pass criteria |
| --- | --- | --- | --- |
| Preparation | 0% (standby) | Day 1–3 | Load bank coverage ≥ 100% capacity; water circuit ΔP balanced; all DCIM points verified; zero leaks |
| Warm-up | 10–25% | Day 4, 2–4 h | System current stable; no vibration anomalies; no coolant seepage; DCIM matches load bank meters |
| Full load | 100% (24 h+) | Day 5–6 | Thermal equilibrium maintained; PUE measured; no alerts; ΔT within ±0.5 °C of design |
| Extreme / Blackout | 100% + mains cut | Day 7 | 100% load taken over by generator within 15 s; no UPS alarms; clean auto-retransfer; zero data loss |

Step-load profile during the full-load phase

Each plateau must be held until temperature and pressure readings are stable (< ±0.5 °C / ±0.5% ΔP over 5 minutes) before advancing; a scripted version of this stability gate follows the timeline:

  • T+00:00: 10% load — 5 MW. Pre-heat phase. Hold 30 min. Baseline all temperature and pressure readings.
  • T+00:30: 25% load — 12.5 MW. Hold 15 min. Confirm CDU pump speeds responding to load increase.
  • T+00:45: 50% load — 25 MW. Hold 15 min. Verify N+1 cooling maintains supply temperature within spec.
  • T+01:00: 75% load — 37.5 MW. Hold 15 min. Thermal imaging scan of all busbar connections.
  • T+01:15: 100% load — 50 MW. Sustained run ≥ 60 min. All systems recording. PUE calculation begins.
  • T+02:15: 110% surge — 55 MW. 5-minute overload spike. Verifies relay coordination and auto-reduction sequence.
  • T+02:20: Return to 100%. Sustain for remaining 24-hour run. Record final PUE at T+24:00.
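A scripted form of this profile, with the stability gate applied before each advance, might look like the sketch below. apply_load() and read_telemetry() stand in for the load bank control and DCIM/CDU polling interfaces, which are not specified here.

```python
# Sketch of the L5 step-load schedule with the stability gate: hold each plateau
# for its minimum duration, then advance only once temperature drift < 0.5 degC
# and delta-P drift < 0.5% over a 5-minute window.
import time

PROFILE = [  # (% load, minimum hold in minutes), from the timeline above
    (10, 30), (25, 15), (50, 15), (75, 15), (100, 60), (110, 5), (100, 24 * 60),
]

def stable(history, temp_band_c=0.5, dp_band_frac=0.005):
    """history: list of (temp_c, dp_kpa) samples spanning the last 5 minutes."""
    temps = [t for t, _ in history]
    dps = [p for _, p in history]
    return (max(temps) - min(temps)) <= temp_band_c and \
           (max(dps) - min(dps)) <= dp_band_frac * max(dps)

def run_profile(apply_load, read_telemetry, sample_s=30):
    """apply_load(pct) and read_telemetry() -> (temp_c, dp_kpa) are placeholders."""
    samples_per_window = 300 // sample_s          # 5-minute stability window
    for pct, hold_min in PROFILE:
        apply_load(pct)
        time.sleep(hold_min * 60)                 # minimum plateau duration
        window = []
        while len(window) < samples_per_window or not stable(window[-samples_per_window:]):
            window.append(read_telemetry())
            time.sleep(sample_s)
        print(f"{pct}% plateau stable, advancing to next step")
```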

Blackout test procedure

This is the final and most critical sub-test. Execute only after all previous phases have passed:

  1. Confirm all systems at 100% steady-state load. Notify all observers.
  2. Manually open the main utility incomer breaker. Log exact timestamp.
  3. Verify generator auto-start signal within 3 seconds of mains loss.
  4. Verify 100% load transfer to generator within contractual time (15 s for Tier III).
  5. Monitor for 30 minutes at full load on generator. Check fuel consumption rate vs design.
  6. Restore mains supply. Verify auto-retransfer and generator cooldown cycle.
  7. Log all DCIM events and timestamps. Any missed alarm or delayed response is a hold point.
Pass criteria: 100% of designed IT load maintained throughout transfer; no UPS bypass events; no cooling fault alarms; generator frequency recovered to 50 ± 0.5 Hz within 10 seconds.
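A minimal sketch of the timestamp checks behind those pass criteria. The event names and times are illustrative; real values come from the DCIM/SCADA event log.

```python
# Blackout test timing checks: generator start signal within 3 s of mains loss,
# full load transfer within the contractual 15 s (Tier III), and frequency back
# inside 50 +/- 0.5 Hz within 10 s of load acceptance. Times are example values.
events = {                      # seconds, relative to opening the utility incomer
    "mains_lost": 0.0,
    "generator_start_signal": 1.8,
    "full_load_on_generator": 12.4,
    "frequency_within_band": 19.6,   # first sample back inside 49.5-50.5 Hz
}

checks = [
    ("Generator start signal <= 3 s",
     events["generator_start_signal"] - events["mains_lost"] <= 3.0),
    ("Load transfer <= 15 s (Tier III)",
     events["full_load_on_generator"] - events["mains_lost"] <= 15.0),
    ("Frequency recovered within 10 s of load acceptance",
     events["frequency_within_band"] - events["full_load_on_generator"] <= 10.0),
]

for name, ok in checks:
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
```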

8. DCL vs LCB vs air-cooled: which load bank for which phase?

For a 50 MW hyperscale data centre, three load bank types are relevant — but only two belong in the programme:

| Criterion | DCL series (rack-mounted) | LCB series (central) | Air-cooled |
| --- | --- | --- | --- |
| Commissioning phase | L3 OQ → L4 Component Integration | L5 IST (full-facility) | Legacy halls only |
| Placement | Inside server rack (1U/2U) | Central manifold / CDU bus (floor-standing) | Floor-standing, outdoor |
| Test objective | Per-row CDU validation, cold-aisle thermal map, rack-level power chain | Whole-facility load, cooling tower stress, N+X, PUE | Basic full-load test in non-sealed environments |
| Heat rejection | Direct to rack CDU branch | To building chilled water / cooling tower | To room air (requires ducting) |
| Suitable for DLC / immersion | ✓ Direct CDU interface; mimics server load | ✓ Central CDU manifold test | ✗ Does not load liquid cooling |
| Per-rack density >30 kW | ✓ Any density (GPU halls) | ✓ Full building load | ✗ Limit ~10–15 kW/rack |
| Full-facility PUE test | ✗ Row-level only | ✓ Yes — simultaneous IT + facility power | ✗ No |
| Cooling tower stress test | ✗ Per-row only | ✓ Full 50 MW continuous | ✗ Not applicable |

9. Deliverables and acceptance sign-off

Commissioning isn't complete until documentation is delivered, reviewed, and signed. Minimum deliverables:

| Document | Content | Sign-off authority |
| --- | --- | --- |
| L1–L5 Acceptance Reports | Pass/fail for every test point; hold-point resolution records; deferred items register | CxA + Client |
| Thermal Imaging Scans | Infrared images of cable joints, busbars, transformer terminations at full load (4 h mark) | Elec. engineer + Client |
| Power Quality Report | THD, voltage waveform, power factor across all PDU feeds | Elec. engineer |
| Measured PUE Report | Annualised PUE based on 24-hour IST at full load | CxA + Client |
| Load Bank Data Export | Full CSV export of power, flow, inlet/outlet temp across entire test period | Commissioning team |
| Equipment Nameplate Register | Serial number, make, model, calibration cert for every piece of test equipment | Commissioning team |

Phased deployment strategy for AI data centres

For a 50 MW AI data centre project, the two product lines serve different phases — they're not substitutes:

| Phase | Test level | Equipment | Scale | Goal |
| --- | --- | --- | --- | --- |
| L3 → L4 | OQ / Component Integration | DCL series (500 kW rack-mounted) | 6–100 units (3–50 MW) | Validate GPU rack CDU branches, per-row power chain, cold-aisle thermal map |
| L5 (full IST) | Integrated System Test | LCB series (500 kW per unit) | 100–120 units (full 50 MW) | Full-facility load, cooling tower capacity, generator failover, PUE |

For L4 using DCL units: a typical progression for a GPU cluster hall is 20 units (10 MW) to validate one GPU row's CDU loop, then 50 units (25 MW) for half the hall, then all rows for full coverage. All DCL units are controlled as a single TCP/IP fleet — one operator, one dashboard, one report.
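A sketch of what "one operator, one dashboard" looks like in practice: polling each DCL unit's power, flow, and outlet temperature over Modbus TCP. It assumes pymodbus 3.x, and the register base address and 1:1 scaling are placeholders for the actual register map.

```python
# Hypothetical fleet poll: read power, flow and outlet temperature from every
# DCL unit over Modbus TCP. Assumes pymodbus 3.x; register layout is a placeholder.
from pymodbus.client import ModbusTcpClient

HYPOTHETICAL_TELEMETRY_BASE = 0   # placeholder: three consecutive holding registers

def poll_fleet(hosts):
    rows = []
    for host in hosts:
        client = ModbusTcpClient(host, port=502)
        if not client.connect():
            rows.append({"unit": host, "online": False})
            continue
        rr = client.read_holding_registers(HYPOTHETICAL_TELEMETRY_BASE, count=3, slave=1)
        if not rr.isError():
            power_kw, flow_lpm, outlet_c = rr.registers   # placeholder 1:1 scaling
            rows.append({"unit": host, "online": True, "power_kw": power_kw,
                         "flow_lpm": flow_lpm, "outlet_c": outlet_c})
        client.close()
    return rows

# Example: one row of 20 hypothetical DCL units
for row in poll_fleet([f"10.20.4.{i}" for i in range(11, 31)]):
    print(row)
```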

For L5 using LCB units: one or two LCB banks sized to the full 50 MW GPU cluster load connect to the central manifold. The LCB applies a coordinated step-load profile across the entire AI facility in a single integrated run.

L4 Rack‑level vs L5 Facility‑scale: Choosing the Right Load Bank

For most 50 MW GPU cluster projects, both DCL and LCB are required:

| Question | DCL Series (L4) | LCB Series (L5) |
| --- | --- | --- |
| Phase | L4 — Component Integration (rack-level) | L5 — IST (whole-facility) |
| Position | Rack-mounted (1U/2U, at GPU positions) | Central manifold / CDU bus (floor-standing) |
| Core value | Simulate GPU node thermal profile; validate CDU and power chain before GPUs arrive | Apply 50 MW whole-facility load; stress-test cooling tower, generator, measure PUE in one IST run |
| DLC / Immersion CDU | ✓ Direct CDU at rack level | ✓ Central manifold validation |
| When to deploy | After L3 OQ, 4+ weeks before GPU delivery | After L4 sign-off, before handover |
| Unit count (50 MW) | 20–100 DCL-500 units (10–50 MW) | 1–2 central banks (100–120 paralleled units) to 50 MW |

Ready to Plan Your 50 MW AI Data Centre Commissioning?

Batterlution provides DCL rack-mounted and LCB central liquid-cooled load banks for every phase of your AI data centre acceptance programme. Get a custom quotation tailored to your GPU cluster configuration.

Request a Custom Quote → Free consultation · 7–30 day delivery · Global shipping included

Common Questions

How many load banks are needed to commission a 50 MW AI data center?

Two types at two phases. L4 uses 20–100 DCL-500 rack-mounted units to validate each GPU rack's CDU branch and thermal profile. L5 uses one or two central LCB banks (around 100–120 paralleled LCB units of 100–500 kW each) connected to the building manifold for the full 50 MW facility stress test.

What is the difference between L4 and L5 commissioning for AI data centers?

L4 (Component Integration) uses DCL rack-mounted units to validate individual subsystems in isolation — one CDU branch, one PDU feeder, one cold-aisle zone at a time. L5 (Integrated System Test) uses LCB central load banks to apply 50 MW across all systems simultaneously: power, cooling, DCIM, and fire safety. L4 must pass before L5 begins.

What is direct liquid cooling (DLC) and immersion cooling acceptance testing for GPU clusters?

DLC acceptance testing verifies that the CDU-to-cold-plate loop maintains precise temperature and flow at 30–100+ kW per rack. Immersion cooling acceptance validates the full immersion CDU loop under sustained heat loads. DCL rack-mounted load banks simulate real GPU thermal profiles for both — before any GPU hardware arrives on site.

How long does 50 MW AI data center commissioning take?

4–8 weeks total. L3–L4 with DCL units: 2–4 weeks (per-row GPU cluster thermal validation). L5 IST with LCB central banks: 5–8 working days (connection, step-load, 24 h full-load run, blackout test, and reporting). L4 must be signed off before L5 starts.

Can liquid-cooled load banks validate DLC and immersion cooling CDU systems for GPU clusters?

Yes. DCL units communicate via TCP/IP + Modbus RTU and are tested with all major CDU brands (Stulz, Liebert, Huawei FusionDirect, Emerson). They accept the CDU temperature setpoint and return real-time flow and temperature data — enabling closed-loop verification for both DLC and immersion loops without any GPU hardware on site.

What documents must be delivered after 50 MW AI data center commissioning?

L1–L5 acceptance reports with hold-point records, thermal imaging scans at full load, power quality analysis (IEEE 519), measured PUE report, cold-aisle temperature profile map, and full load bank data export with calibration certificates.

Why are both the DCL series and the LCB series needed for AI facility commissioning?

They test different things at different phases. DCL (L4) validates each GPU rack individually — power chain, CDU branch, thermal profile. This requires rack-level placement that central equipment cannot provide. LCB (L5) applies whole-facility load to stress-test the cooling tower, generator, and N+X redundancy across the entire AI data center at once.

    Request a Commissioning Equipment Quote

    Tell us about your GPU cluster or hyperscale AI facility. Our engineering team will respond with a tailored proposal — including GPU rack density, DCL/LCB sizing, phased deployment plan, and indicative pricing — within 2 business days.

    Need custom testing parameters? Our engineering team responds within 24 hours.









