Sample Data Center Assessment Report


Month XX, XXXX

Table of Contents
Section 1 Executive Summary
Goal
Survey Process

Section 2 General Thermal Summary


Data Center Floor Plan
Equipment List
Tile Flow Examination
Thermal Conclusion
Equipment Expansion
Thermal Recommendations

Section 3 General Power Summary


Power Conclusion
PDU Tables
One Line Electrical Diagram

Section 4 Liebert Solutions

Section 5 Liebert Supporting Information

Section 6 Site Photos


Section 1

Executive Summary
The purpose of this report is not only to identify problems found at the Data
Center, but also for Liebert to recommend possible solutions to Prime
Account personnel that can then be considered to address current concerns
as well as future expansion issues.

Goal
The objectives of this study are to:
• Provide an on-site inspection of the Prime Account Data Center,
which is approximately 2,000 sq. ft.
• Collect information about the computer room environment, including
but not limited to: electrical systems, cooling equipment, and thermal
evaluations.
• Provide a floor plan showing the location of existing equipment,
server racks, airflow obstructions, etc.
• Provide a TileFlow report showing the airflow characteristics of the
space.
• Generate a written report documenting the findings.
• Review the report with Prime Account.

Survey Process
The Data Center Assessment tasks performed onsite include:
1. Site Survey
2. Power Environment Evaluation
• One line electrical diagram
• UPS, Gen Sets, and PDU data collection and measurement
• Single points of failure
• Harmonic distortion including voltage regulation and imbalance
3. Cooling Environment Evaluation
• Airflow and temperature measurements of thermal equipment
• CRAC unit performance
• Perforated floor tiles
• Cable floor openings
Improved cooling availability could take the form of changes to the current
configuration or the addition of more, or newer, cooling equipment. In several
areas it was noticed that the discharge from one rack was being drawn into the
inlet of another rack due to the orientation of the racks in one aisle versus
another. Liebert recommends that hot and cold aisles be utilized throughout
the data center, thus providing cool CRAC air to the server inlets. This is
good practice and will help eliminate hot spots in the data center and
improve server efficiency and uptime.

Nearly every rack in the data center had its own cable floor opening.
These raised-floor openings allowed not only cables to pass through but
also CRAC air. This presents a problem when trying to balance the air
within the data center so that more air can be directed to high-heat areas.
Cable openings in the raised floor should be sealed to redirect CRAC
air up through the perforated floor tiles located in the more effective
locations on the data center floor, thus improving cooling of the equipment.

Ceiling:
Above the ceiling it was noted that the walls extend up to the roof deck,
isolating the data center from other areas. The walls were drywall, but the
joints were not taped closed. Lighting fixtures, pipe penetrations through
the ceiling, and missing ceiling tiles result in the above-ceiling area and the
data center room influencing one another with regard to temperature and
humidity. A good vapor barrier would be necessary in this area to prevent
the migration of moisture from the outside and from other rooms within the
building into the above-ceiling area and ultimately into the data center
room, which has sensitive environmental parameters.

Sub Floor:
The floor under the raised tile was heavily loaded in areas with cables,
metal wire conduit, refrigerant piping, etc. These unseen objects restrict
airflow. Open wire cable trays were employed throughout the data center,
helping keep the cables neat and together. They were at different heights
under the floor, some tight to the raised floor and others mounted halfway
between the sub-floor and raised floor. Metal box conduits, 4" to 6"
square, were sitting on the concrete sub-floor, often underneath
inches of cabling and the wire cable trays. High concentrations of
obstructions in an under-floor area can reduce or block the airflow from
getting to sensitive heat-generating computer equipment. *Note that
abandoned cable spools were also found under the floor, which adds to
the under-floor air obstructions.
Pipe penetrations through the wall to the outside or other areas need to be
inspected and sealed to protect the vapor barrier of the space.

Raised Floor:
The raised floor was 18" high and consisted of 2'x2' floor tiles that were
labeled 1 through 25 on the room's long dimension and A through T on
the shorter dimension. The floor appeared to be in fair condition, with some
gaps between one tile and another. There were 36 perforated floor tiles
scattered throughout the data center, each with an approximate free opening
of 20%. It is estimated that some air leakage occurred through the joints
between floor tiles. Floor tiles that seal well allow more CRAC airflow
through perforated floor tiles that have been placed in critical locations
to more effectively cool sensitive heat-producing computer equipment.
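
For reference, the airflow figures in the equipment list are simply measured velocity multiplied by free opening area. The sketch below illustrates that arithmetic for one of the 2'x2', 20%-free-area tiles described above; the 250 FPM face velocity is a hypothetical reading, not a surveyed value.

```python
# Airflow through a raised-floor opening: CFM = velocity (FPM) x free area (sq ft).
# Tile geometry comes from the survey; the velocity reading below is hypothetical.

TILE_SIDE_FT = 2.0     # 2' x 2' floor tile
FREE_OPENING = 0.20    # approximate 20% free opening of the perforated tiles

def tile_free_area_sqft(side_ft=TILE_SIDE_FT, free_fraction=FREE_OPENING):
    """Free (open) area of one perforated tile, in square feet."""
    return side_ft * side_ft * free_fraction

def airflow_cfm(velocity_fpm, free_area_sqft):
    """Volumetric airflow through an opening, in cubic feet per minute."""
    return velocity_fpm * free_area_sqft

free_area = tile_free_area_sqft()  # 4 sq ft x 20% = 0.8 sq ft
print(f"Free area per perforated tile: {free_area:.2f} sq ft")
print(f"Airflow at a 250 FPM reading: {airflow_cfm(250, free_area):.0f} CFM")  # hypothetical velocity
```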

Air Conditioning:
Three 10-ton Liebert Deluxe downflow air-cooled units located inside the
data center provided humidity and temperature control for the space. All
units were in a call for cooling and humidification. Unit set points were 70
degrees F and 50% RH. All the air filters appeared to be in fair condition
except for unit A22, which had noticeable dirt collected on the filters. All
units were sitting on floor stands that were equipped with turning vanes.

Gaps in the raised floor were noted around the perimeter of the CRAC
units where the units meet the floor, resulting in air leakage. Sealing these
gaps would stop short-circuiting of the under-floor air back into the CRAC
unit intakes.

UPS:
A three-module Liebert Series 600 UPS located in a nearby room was
serving the data center and indicated a data center computer load of
125 kVA/120 kW. This equates to a 34.1-ton equipment heat load for the
data center on March 5th, 2005, which was approximately 76% of the
cooling capacity of the CRAC units.
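
The heat-load figure above follows from standard conversions (3,412 BTU/hr per kW and 12,000 BTU/hr per ton). A minimal sketch of the arithmetic, using the 45-ton combined CRAC capacity cited in the thermal conclusion of this report:

```python
# Convert the measured UPS output (kW) into an equivalent cooling load in tons and
# compare it against installed CRAC capacity. Conversion constants are standard;
# the 45-ton combined capacity is the figure cited in the thermal conclusion.

BTU_PER_KW_HR = 3412.0     # 1 kW of electrical load ~ 3,412 BTU/hr of heat
BTU_PER_TON_HR = 12000.0   # 1 ton of cooling = 12,000 BTU/hr

def kw_to_tons(load_kw):
    """Equivalent cooling load, in tons, for an electrical load in kW."""
    return load_kw * BTU_PER_KW_HR / BTU_PER_TON_HR

load_tons = kw_to_tons(120.0)   # 120 kW computer load -> ~34.1 tons
crac_capacity_tons = 45.0       # three CRAC units combined
print(f"Equipment heat load: {load_tons:.1f} tons")
print(f"Share of CRAC capacity: {load_tons / crac_capacity_tons:.0%}")  # ~76%
```
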
Customer Name: Liebert #1 Customer        Survey by: Chris West
Data Center        Date: 3/5/2006

Equipment list column key (left to right): Rack/Unit Number; Equip. Type; Equip. Make; Equip. Model; Capacity; Qty.; Floor Opening (Cable Opening L"xW", Area sq ft, Obstructions %, Free Area sq ft, Avg. Air Velocity at Free Opening FPM, CFM); Equip Airflow (Filter Actual Area sq ft, Filter Free Area sq ft, Average Velocity FPM, CFM); Inlet Temperature Readings (Low/Mid/High, deg F); Exhaust Temperature Readings (Low/Mid/High, deg F); Return Air Temp/Humidity at A/C Unit; A/C or Rack % Full; Go/No-Go & Equipment Comments - Air

Power & CRAC units
A3 PDU Liebert PDU# 3 1 0 0 0
A9 CRAC Liebert Deluxe FH125AUA10 10 Ton 1 0 n/a 16 12.8 740 9472 72F/47%RH 10 Ton SetPts = 70F+-2F & 50% +-2%
S/N 165745-002 0 0 75% Humid & 100% Cool
A22 CRAC Liebert Deluxe FH125AUA10 10 Ton 1 0 n/a 16 12.8 760 9728 74F/38%RH 10 Ton SetPts = 70F+-2F & 50% +-2%
S/N 165745-001 0 0 100% Humid & 100% Cool
A40 CRAC Liebert Deluxe FH125AUAAEI 10 Ton 1 0 16 12.8 745 9536 74F/43%RH 10 Ton SetPts = 70F+-2F & 50% +-2%
S/N 165745-003 0 0 Unit 100% humid & Cool, several loss
of pwr.and high temp. alarms
Total CRAC unit airflow: 28,736 CFM
Racks

E5 Rack IBM dciprod02 1 9 12 0.75 0.75 367 275 66 74.5 74 65 74 85 60 See Notes: 1
F5 Rack Amex Gateway/compac 1 12 12 1 0.4 0.6 400 240 66 77 76 65 70 86 70 See Notes: 1
G5 Rack Sun 1 6 14 0.5833 0.5833 402 235 66 77 74 67 72 84 80 See Notes: 1
H5 Rack IBM bk box 1 7 12 0.5833 0.5833 400 233 64.5 78 72 67 73 83 60 See Notes: 1
I5 Rack IBM LX2000 1 6 13 0.5417 0.5417 442 239 64 74 77 68 75 84 50 See Notes: 1
J5 Rack HP Disk Array SCSI 1004 1 12 6 0.5 0.5 500 250 64.5 73 73 67 73 83 75 See Notes: 1
K5 Rack HP Server 1 6 12 0.5 0.5 550 275 65 72 72 63 76 85 35 See Notes: 1
L5 Rack IBM LWTSFRSDEV 1 6 12 0.5 0.5 452 226 65 72 72 64 76 85 80 See Notes: 1
M5 Rack Compac LNTSBTS2 1 9 7 0.4375 0.4375 480 210 68 74 74 63 78 85 50 See Notes: 1
N5 Rack Compac LSQLHODI 1 6 12 0.5 0.5 451 226 68 73 78 64 74 84 40 See Notes: 1
O5 Rack Compac (HH rack 37) 1 6 12 0.5 0.5 452 226 68 72 77 65 73 85 25 See Notes: 1
P5 Rack Compac (HH rack 38) 1 9 10 0.625 0.625 386 241 68 70 76 66 74 84 30 See Notes: 1
Q5 Rack Compac (HH rack 39) 1 8 12 0.6667 0.6667 500 333 69 68 78 66 76 84 50 See Notes: 1
R5 Rack HP (HH rack 40) 1 7 12 0.5833 0.5833 400 233 69 69 78 65 75 85 80 See Notes: 1

E9 Rack HP (HH rack 41) 1 7 12 0.5833 0.5833 -50 -29 68 72 76 66 85 97 45 See Notes: 2; 1
F9 Rack Compac (HH rack 42) 1 13 11 0.9931 0.9931 200 199 68 71 74 68 83 96 15 See Notes: 2; 1
G9 Rack Compac (HH rack 43) 1 6 15 0.625 0.625 229 143 68 71 74 69 83 95 40 See Notes: 2; 1
H9 Rack Compac (HH rack 44) 1 12 8 0.6667 0.6667 341 227 61 74 80 69 82 92 50 See Notes: 2; 1
I9 Rack EMC2 1 12 8 0.6667 0.6667 342 228 61 74 80 70 78 88 60 See Notes: 1
J9 Rack EMC2 1 6 12 0.5 0.4 0.3 598 179 61 74 85 71 76 86 70 See Notes: 1
K9 Rack EMC2 1 12 8 0.6667 0.6667 341 227 59 76 81 71 72 80 80 See Notes: 1
L9 Rack Dell 1 6 12 0.5 0.5 400 200 60 73 79 71 74 81 100
M9 Rack EMC2 1 13 10 0.9028 0.9028 540 488 62 72 79 69 77 82 40 See Notes: 1
N9 Rack EMC2 1 6 15 0.625 0.625 300 188 63 73 78 68 80 81 90 See Notes: 1
O9 Rack EMC2 1 6 12 0.5 0.5 500 250 64 73 78 68 81 81 90 See Notes: 1
P9 Rack EMC2 1 7 12 0.5833 0.5833 400 233 65 75 78 67 79 80 90 See Notes: 1
Q9 Rack EMC2 1 6 13 0.5417 0.5417 442 239 67 79 76 69 76 78 100
R9 Rack EMC2 1 4 11 0.3056 0.1 0.275 422 116 66 78 72 71 75 73 75 See Notes: 1

E12 Rack Compac 1 4 12 0.3333 0.3333 -111 -37 69 68 73 75 85 96 100 See Notes: 2
F12 Rack Compac 1 6 12 0.5 0.3 0.35 338 118 69 68 73 74 84 95 100 See Notes: 2
G12 Rack Compac 1 4 12 0.3333 0.4 0.2 329 66 68 69 74 75 80 89 100 See Notes: 2
H12 Rack Compac 1 6 12 0.5 0.3 0.35 338 118 66 67 73 75 80 88 100 See Notes: 2
I12 Rack Compac 1 4"D 0.19 0.4 0.114 445 51 64 66 73 75 80 87 100
J12 Rack New rack 1 6 15 0.625 0.625 229 143 62 66 76 76 79 84 100
K12 Rack APC 1 6 16 0.6667 0.6667 199 133 64 74 84 62 68 76 25 See Notes: 1
L12 Rack APC 1 7 12 0.5833 0.5833 296 173 65 73 84 63 71 73 100
M12 Rack APC 1 7 12 0.5833 0.5833 302 176 65 72 83 63 73 76 75 See Notes: 1
N12 Rack APC 1 13 11 0.9931 0.9931 200 199 66 72 82 65 72 76 90 See Notes: 1
O12 Rack APC 1 6 15 0.625 0.625 229 143 68 74 82 65 72 75 90 See Notes: 1
P12 Rack APC 1 5 11 0.3819 0.3819 156 60 68 73 79 66 73 76 90 See Notes: 1
Q12 Rack APC 1 12 4 0.3333 0.4 0.2 422 84 68 72 77 67 72 74 100
R12 Rack APC 1 12 3 0.25 0.25 286 72 68 70 76 66 72 74 80 See Notes: 1

E16 Rack Dell 1550 Power Edge 1 4 12 0.3333 0.3333 -111 -37 64 74 81 75 84 95 100 See Notes: 2
F16 Rack Net Finity 1 13 10 0.9028 0.9028 300 271 67 70 80 74 84 94 100 See Notes: 2
G16 Rack e Server, Net Finity, Sun 1 13 10 0.9028 0.9028 384 347 67 71 80 73 82 88 100 See Notes: 2
H16 Rack TL Server e Server 1 13 10 0.9028 0.9028 266 240 67 70 78 71 82 88 100 See Notes: 2
I16 Rack Fuller Workstation rack 81 1 13 10 0.9028 0.9028 384 347 67 69 76 70 82 87 100
J16 Rack IBM monitor 1 6 10 0.4167 0.4167 450 188 68 69 78 67 79 80 100
K16 Rack IBM RPT Server 1 6 3 0.125 0.125 582 73 69 68 76 66 78 83 80 See Notes: 1
L16 Rack 6650 Dell 1 13 10 0.9028 0.9028 529 478 69 70 75 66 75 79 90 See Notes: 1
M16 Rack HP NetServer LXE Pro 1 12 6 0.5 0.5 501 251 71 74 79 66 70 75 100
N16 Rack HP NetServer LXE Pro 1 12 6 0.5 0.5 539 270 71 74 79 66 70 76 100
O16 Rack Sun 1 3 12 0.25 0.25 500 125 66 76 76 69 68 73 90 See Notes: 1
P16 Rack Sun 1 13 10 0.9028 0.9028 300 271 66 77 77 68 69 74 90 See Notes: 1
Q16 Rack Sun 1 13 10 0.9028 0.9028 384 347 67 74 78 66 67 73 100
R16 Rack IBM 1 3 18 0.375 0.1 0.3375 341 115 67 72 78 62 66 76 90 See Notes: 1

E19 Rack Sun 1 4 12 0.3333 0.3333 340 113 66 71 78 62 85 95 90 See Notes: 2; 1


F19 Rack IBM 1 18 18 2.25 2.25 252 567 65 71 77 63 84 94 90 See Notes: 2; 1
G19 Rack Sun 1 4 12 0.3333 0.3333 226 75 64 70 76 63 80 88 95 See Notes: 2; 1
H19 Rack Sun 1 4 12 0.3333 0.3333 153 51 65 71 76 65 80 88 100 See Notes: 2
I19 Rack IBM 1 4 12 0.3333 0.3333 340 113 65 72 77 65 80 87 100
J19 Rack APC 1 10 12 0.8333 0.8333 454 378 65 72 76 66 75 80 20 See Notes: 1
K19 Rack Sun 1 4 12 0.3333 0.3333 156 52 65 74 76 67 76 83 60 See Notes: 1
L19 Rack IBM 1 12 12 1 1 400 400 66 75 76 66 73 79 90 See Notes: 1
M19 Rack NCR 1 6 12 0.5 0.2 0.4 290 116 67 76 75 66 71 77 75 See Notes: 1
N19 Rack Sun 1 6 12 0.5 0.2 0.4 244 98 68 74 76 65 72 77 50 See Notes: 1
O19 Rack IBM 1 12 13 1.0833 1.0833 450 488 71 70 76 84 72 80 30 See Notes: 1
P19 Rack IBM 1 12 13 1.0833 1.0833 280 303 70 69 75 79 73 78 80 See Notes: 1
Q19 Rack Compac Proliant 1 12 9 0.75 0.75 400 300 71 70 76 84 72 80 85 See Notes: 1
R19 Rack Compac 1 18 18 2.25 2.25 300 675 65 75 75 66 72 77 75 See Notes: 1

E23 Rack Compac 1 7 9 0.4375 0.4375 150 66 63 70 79 64 85 96 80 See Notes: 2; 1


F23 Rack Compac 1 12 9 0.75 0.75 350 263 63 71 80 64 84 95 85 See Notes: 2; 1
G23 Rack Compac 1 3 12 0.25 0.25 380 95 63 71 80 67 80 89 80 See Notes: 2; 1
H23 Rack Compac Proliant 1 10 12 0.8333 0.8333 390 325 64 69 78 67 80 88 80 See Notes: 2; 1
I23 Rack Dell 1 6 8 0.3333 0.3333 544 181 65 67 76 67 80 87 15 See Notes: 1
J23 Rack Dell 1 6 8 0.3333 0.3333 516 172 66 67 76 67 79 84 5 See Notes: 1
K23 Rack Dell 1 9 12 0.75 0.75 485 364 66 68 76 68 68 76 30 See Notes: 1
L23 Rack Dell 1650 1 10 11 0.7639 0.7639 380 290 66 68 76 69 71 75 90 See Notes: 1
M23 Rack Dell 2650 1 11 13 0.9931 0.9931 290 288 67 69 73 69 73 76 95 See Notes: 1
N23 Rack Dell 2650 1 4 12 0.3333 0.3333 380 127 68 69 73 71 72 76 90 See Notes: 1
O23 Rack Dell 2651 1 12 12 1 1 469 469 68 69 73 71 72 75 60 See Notes: 1
P23 Rack Sun A1000 1 28 6 1.1667 1.1667 80 93 63 68 76 69 73 76 85 See Notes: 1
Q23 Rack Sun/IBM 1 28 10 1.9444 1.9444 360 700 63 69 73 69 72 74 60 See Notes: 1
R23 Rack Sun 1 21 10 1.4583 1.4583 350 510 63 69 73 71 72 74 70 See Notes: 1

Subtotal, rack cable floor opening airflow: 18,785 CFM
Perforated Tiles (tile location and measured airflow, CFM)
E7 -7
F7 190
G7 190
H7 194

J7 190
K7 271
L7 300
M7 360

O7 377
P7 339
Q7 301
R7 310

E14 165
F14 100
G14 150
H14 155

J14 189
K14 190
L14 185
M14 187

O14 225
P14 225
Q14 230
R14 226
E21 -7
F21 150
G21 180
H21 180

J21 250
K21 250
L21 259
M21 253

O21 340
P21 340
Q21 335
R21 340

Subtotal, perforated tile airflow: 8,112 CFM

Total airflow through floor openings and perforated tiles: 26,897 CFM

Notes: 1. Missing block-off plates
2. High temperatures at racks
Section 2 – TileFlow Study

TileFlow
Data Center – As Is Floor Plan
Data Center – 3D As Is View of Equipment Racks, CRAC Units,
PDUs, Floor Openings, and Perforated Floor Tiles
Air Flow in Data Center – As Is Floor Plan –
Perforated Floor Tiles and Cable Floor Openings

*Note - Under Floor Obstructions


*Note – CRAC Units with Turning Vanes
Data Center Under Floor Velocity Vector Plan – As Is Floor Plan

*Note – Under Floor Obstructions


*Note – CRAC Units with Turning Vanes
Data Center Air Flow from As Is Perforated Floor Tiles and Cable Floor Openings
Data Center Velocity/Pressure Plan with Unit A15 Off

*Note – Under Floor Obstructions


*Note – CRAC Units with Turning Vanes
Data Center Velocity/Pressure Plan with Unit A22 Off

*Note – Under Floor Obstructions


*Note – CRAC Units with Turning Vanes
Data Center Velocity/Pressure Plan with Unit A7 Off

*Note – Under Floor Obstructions


*Note – CRAC Units with Turning Vanes
Section 2
Thermal Assessment Conclusion

Liebert did not find any critical hot spots in the data center with the present
rack equipment load of 120 kW. At the current load, it would take
approximately 34.1 tons of cooling to sufficiently cool the Prime Account
Data Center. The three CRAC units serving the data center, when operating,
provided a combined cooling capacity of 45 tons at a room temperature of
70 degrees F. Based on the current equipment load, the 10.9 tons of reserve
cooling capacity would not allow any one CRAC unit to be shut down while
routine maintenance was being performed.
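
A minimal sketch of this redundancy check, assuming the per-unit capacity is the stated 45-ton combined total split evenly across the three units:

```python
# N+1 check: with one CRAC unit off for maintenance, does the remaining capacity
# still cover the equipment heat load? Figures are from this report; per-unit
# capacity is assumed to be the 45-ton combined total split evenly across 3 units.

TOTAL_CAPACITY_TONS = 45.0
UNIT_COUNT = 3
LOAD_TONS = 34.1

per_unit_tons = TOTAL_CAPACITY_TONS / UNIT_COUNT    # assumed 15 tons per unit
reserve_tons = TOTAL_CAPACITY_TONS - LOAD_TONS      # 10.9 tons in reserve
with_one_off = TOTAL_CAPACITY_TONS - per_unit_tons  # capacity with one unit down

print(f"Reserve capacity: {reserve_tons:.1f} tons (less than one {per_unit_tons:.0f}-ton unit)")
verdict = "adequate" if with_one_off >= LOAD_TONS else "insufficient"
print(f"Capacity with one unit off: {with_one_off:.1f} tons vs. {LOAD_TONS:.1f}-ton load -> {verdict}")
```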

Several areas were noticed to have temperatures at the tops of the racks
in the high 80s and 90s (degrees F). In these areas, some racks were not
in a hot/cold aisle configuration, and other racks were missing block-off
panels, allowing cold air to spill into the hot aisle without coming into
contact with the rack computer equipment. Under-floor obstructions were
present in several locations in the data center, including this area,
consisting of wire cabling trays, square metal conduit ducts, refrigerant
piping, and miscellaneous cabling, which we estimate have limited the
available free area under the floor for airflow by 30% to 45% in front of the
CRAC units. With some areas already experiencing high air temperatures,
the customer should be cautious about adding additional equipment or
new higher-heat-density equipment to the data center without ensuring
that appropriate airflow is available to these areas.

Numerous obstructions were present under the raised floor in higher-load
areas. This is typical of most data centers. However, care should be
taken to avoid airflow restrictions in the paths leading to critical load
areas. For the air to flow correctly and most effectively below the raised
tile, any obstructions located under the raised tiles should be kept at the
same level so that the airflow pattern is not disrupted.

Vapor barrier integrity of the data center is in question due to pipe
penetrations into the data center's raised floor and above-ceiling areas
that are not sealed correctly or fully. The ceiling of the data center did
not provide much of a vapor barrier due to missing ceiling panels and
unsealed pipe penetrations. The acoustical tile ceiling also did not appear
to offer much in the way of a vapor barrier, based on the material type and
the seal on the T-bar mounting grid.

There were several problem areas in the data center where heated air
from the racks was being discharged into an aisle only to be drawn into the
racks across the aisle for cooling of heat-sensitive computer equipment.
Block-off panels were also missing in most of the unpopulated racks.

Every rack in the data center also had its own cable floor opening, which
allowed not only the passage of cables through the floor but also the
escape of CRAC air.

Raised floor and drop ceiling openings and pipe penetrations into the data
center could also affect vapor barrier and temperature.

Section 2
Equipment Expansion
There appears to be space within the data center for expansion of
computer equipment and cooling equipment. Should additional computer
equipment be required in the future (i.e., more computers or the
replacement of existing equipment with higher-heat-producing equipment),
it is suggested that a more detailed examination of the data center cooling
capacity be made to ensure adequate cooling of the computer equipment.
At this time, all units are running to keep up with the equipment load, and
there is no redundancy for CRAC unit maintenance. It is recommended that
at least one more 10-ton unit be added to the data center to allow for
cooling redundancy and the additional cooling capacity needed to maintain
the correct data center environment while routine maintenance takes place.

Recommendations for Thermal Assessment

1. Incorporate into the data center Hot and Cold Aisles between
rack rows to ensure rack inlets take air from the cold aisle and
discharge heated air into the hot aisles
2. Cable floor openings need to be sealed including the area
around cables to stop the leakage of CRAC unit air
3. Provide block off panels to seal up partially full racks to
prevent the migration of cold aisle air through the racks to the
hot aisle
4. Provide weather stripping to data center doors to improve the
vapor barrier
5. Seal off gaps in the raised floor around the perimeter of the
CRAC units
6. Ensure wiring inside the racks is organized and out of the
airflow path to enable efficient cooling of rack computer
equipment
7. When possible, make sure under floor obstructions are kept at
one level throughout the data center to give CRAC airflow a
clear path to perforated tiles serving sensitive heat generating
computer equipment
8. Reroute the refrigerant piping in front of the CRAC units to
provide more free area for air to get out into the data center
9. Add one more 10 ton CRAC unit to the data center to provide
redundancy to allow for routine maintenance and emergency
shut downs of CRAC units
10. Review data center cooling capacity and additional rack
equipment load as new computer equipment is added to the
data center.
Section 3

General Summary – Power Assessment

Building General Observations:


This building was built in 1991 and has a modern electrical system. The
data center is located in a residential/light commercial area. Utility power
to the building comes from overhead lines through a wooded area and
eventually goes underground and into the building. The data center is
approximately 8000 square feet.

Data Center Utility Service:


The main utility transformer was not inspected during the audit. Prime
Account personnel were not certain of the capacity, but felt it was less than
1000 kVA. This may be of concern as the power draw of the computer
room increases.

Generators:
There are two 1000 kW Caterpillar generators on site. They operate
independently of each other. Each generator feeds one side of a 1600 A
automatic transfer switch. As such, only one generator can power building
loads at any given time. Liebert found this somewhat curious, as these
generators have the capability of being wired in parallel. Prime Account
has approximately 1000 kVA of available UPS power. A general rule of
thumb is that available generator power should be at least 150% of
available UPS power for optimal operation. Based on the current computer
room load (approximately 311 kVA/300 kW), the current configuration
should not be a problem.
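
A minimal sketch of that rule-of-thumb comparison, treating kVA and kW as roughly interchangeable for the purpose of the check and using the approximate load figure noted above:

```python
# Rule-of-thumb check: available generator power should be at least 150% of available
# UPS power. Only one 1000 kW generator can carry the load at a time; kVA and kW are
# treated as comparable here only for the purpose of this rough comparison.

GENERATOR_KW = 1000.0        # one generator on line at a time
UPS_AVAILABLE_KVA = 1000.0   # approximate available UPS power
CURRENT_LOAD_KW = 300.0      # approximate computer room load

print(f"Generator vs. available UPS power: {GENERATOR_KW / UPS_AVAILABLE_KVA:.0%} "
      "(guideline: at least 150%)")
print(f"Generator vs. current computer room load: {GENERATOR_KW / CURRENT_LOAD_KW:.0%} "
      "-> not a problem at today's load")
```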

Uninterruptible Power System:


Prime Account has a three-module 125 kVA/120 kW Liebert Series 600
UPS system. Based on the current load of 125 kVA/120 kW, two modules are
required to power the computer room; one module can be considered
redundant. The UPS modules were not inspected during the audit;
however, it was confirmed that the three modules were load sharing
properly and in overall good condition. The UPS modules and batteries
have been routinely serviced. There is a monitoring problem with UPS #3
that both Liebert and Prime Account are aware of and that will be rectified.
Power Distribution Units (PDU):
One PDU module is located on the computer room floor to reduce UPS
voltage and distribute power to equipment racks. Liebert opened this
module and recorded branch circuit current readings. The results are
tabulated later in this report. Branch current readings exceeding 80% of
circuit breaker ratings are flagged in red. Branch current readings
exceeding 60% of circuit breaker ratings are flagged in yellow. Several of
these are noted in the tables and should be addressed as soon as
possible.
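
A minimal sketch of the flagging thresholds used in the PDU tables; the breaker ratings and current readings shown are hypothetical examples, not values recorded during this audit:

```python
# Flag branch circuits by loading: over 80% of the breaker rating = red,
# over 60% = yellow. The breaker ratings and readings below are hypothetical
# examples; the surveyed values appear in the PDU tables later in this report.

def flag(reading_amps, breaker_rating_amps):
    """Return the color flag used in the PDU tables for one branch circuit."""
    loading = reading_amps / breaker_rating_amps
    if loading > 0.80:
        return "red"
    if loading > 0.60:
        return "yellow"
    return "ok"

samples = [(17.0, 20.0), (13.5, 20.0), (8.0, 20.0), (26.0, 30.0)]  # hypothetical (amps, rating)
for amps, rating in samples:
    print(f"{amps:>5.1f} A on a {rating:.0f} A breaker -> {flag(amps, rating)}")
```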

During the inspection, Liebert noticed that Prime Account was well on its
way to marking and cataloging all branch circuit breakers and the tile
positions (racks) at which they terminate. This process appears to be
60-70% complete. It appears that a standard 20 A or 30 A service will
eventually be provided to each rack.

The PDU front doors are kept locked. It is imperative that the local or
remote emergency power off (EPO) feature of the PDUs remain available
so power may be turned off in the event of an emergency.

Harmonic Distortion and Voltage Regulation and Imbalance:


During the audit, Liebert looked at the voltage distortion presented by the
UPS system. It remained in the 3%-4% range, which is normal and acceptable
for this type of UPS system. Voltage regulation remained constant, within
1%, and voltage imbalance was also within 1%. UPS system load currents
were balanced within 10% of each other.
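
A minimal sketch of how such imbalance figures are typically computed (maximum deviation from the average, divided by the average); the three-phase readings shown are hypothetical, not measurements from this audit:

```python
# Percent imbalance: maximum deviation from the average reading, divided by the
# average. The three-phase readings below are hypothetical; the audit only records
# that voltage imbalance stayed within 1% and load currents within 10%.

def percent_imbalance(readings):
    """Max deviation from the mean of the phase readings, as a percentage."""
    avg = sum(readings) / len(readings)
    return 100.0 * max(abs(r - avg) for r in readings) / avg

line_voltages = [480.0, 478.5, 481.0]    # hypothetical line-to-line volts
module_currents = [150.0, 158.0, 144.0]  # hypothetical module load currents, amps
print(f"Voltage imbalance: {percent_imbalance(line_voltages):.2f}%")
print(f"Current imbalance: {percent_imbalance(module_currents):.2f}%")
```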

Single Points of Failure:


This data center is very typical of a tier 2 design. While there is some built
in redundancy (within the UPS system), there are some single points of
failure. A failure of either automatic transfer switch (ATS) could prove to
be disastrous. It is always a good idea to have a “maintenance bypass”
scheme for ATS modules. This allows easy re-routing of utility or generator
power to the UPS in the event of ATS maintenance or failure. The power
distribution units can also be a single point of failure. The PDU has a main
input breaker and transformer that have no redundancy. Once again, a
failure is unlikely, but should a failure occur, a portion of the computer
room floor loads would be lost.
Power Conclusion

With the exceptions noted below, Liebert could not find anything
substantially wrong with this data center. Prime Account needs to
determine whether or not it makes sense to invest in infrastructure to
create a dual-bus, tier 3, or tier 4 design. No matter what decision is
reached, Liebert recommends the following be done to maintain the
highest level of reliability and availability.

1. Address the red and yellow flagged branch circuits noted in the
tabulated PDU data
2. Look into adding a maintenance bypass scheme for the ATS
modules
Section 4

Liebert Solutions:
Hot / Cold Aisle Solution:

Incorporate Hot and Cold Aisles into the data center between rack rows to
ensure rack inlets take air from the cold aisle and discharge heated air into
the hot aisles.

Cable floor openings need to be sealed, including the space around cables,
to stop the leakage of CRAC unit air.

Provide block-off panels to seal up partially full racks to prevent the
migration of cold aisle air through the racks to the hot aisle.

Re-route the refrigerant piping in front of the CRAC units to provide more
free area for air to get out into the data center.
Data Center Floor Plan – Liebert Recommendations
Data Center Under Floor Velocity Vector Plan – Modified Floor Plan
Data Center Air Flow from Modified Floor Plan

*Note – No turning vanes


*Note – Sealed cable openings
*Note – Re-routed refrigerant piping away from the CRAC unit fronts
*Note – Hot/Cold Aisle configuration
Air Flow in Data Center – Modified Floor Plan (CRAC units without turning vanes)

*Note – No turning vanes


*Note – Sealed cable openings
*Note – Re-routed refrigerant piping away from the CRAC unit fronts
Section 5 - Liebert Supporting Information

Walk-Through Data Center Information:

Room:
• Floor to ceiling height 10’-0”
• Raised floor height 18”
• Above ceiling plenum height if applicable >36”
• Room Temp Set Point and Humid Set Point 70F & 50% RH
• Lighting Load: Type of fixture(s) Fluorescent, 2’x4’
• Rack height Varying heights, 80”, 84”, etc…
• Room Vapor Barrier, good/bad: Poor! Acoustical tile lay-in
ceiling with a limited vapor barrier; active overhead branch supply
air system from the building A/C system serving the data center; pipe
penetrations into the Data Center from other building areas;
openings under the doors serving the Data Center.
• Under floor restrictions, describe percentage of depth blocked and
what blockage is. A few areas in the Data Center have limited
free area due to under floor cables, abandoned wire spools,
cable trays, and cable boxes. See floor plan for % of clear
space under floor due to obstructions.
• Perimeter leaks, (inadvertent holes or leaks around cables, pipes,
etc…) in raised floor or associated area that will break the coherence
of the under-floor plenum. Some piping passing from the data
center to other areas was not sealed; most of the penetrations seen
were insulated or fit tightly around the piping. No major issues here.
• Any under floor air diverters attached to Computer Room air
conditioners. None
• Type of perforated floor tile: 24"x24" perforated floor tiles w/o
dampers; 20% free area
• Estimate other floor openings including cataloging the cable
openings for each rack on equipment list or floor plan. See
equipment list and floor plan.
• Note how the customer labels his data center: coordinates; rack row
numbers; N,S,E,W designations; tile coordinates; columnar support
coordinates. Customer did use labels on the floor tiles: Room
Length - 1 through 25; and Room Width - A through T
• Major entities above the racks: Cable ducts used in the areas of
the cable racks in several places.
• Cable trays - estimate of cross-sectional area blockage: Cable
trays are being used under the floor throughout the data center.
See Under Floor Obstruction Plan.
• Earthquake protection - None
• Supply or return ducting – None
• Any major room anomalies (e.g., large windows, glass walls): None other
than the interior office area, which had windows and glass doors
[Drawing: Prime Account Data Center – Under Floor Obstruction Plan. Floor grid labeled A–T by 1–25; shaded percentages indicate the approximate free space at each floor tile (roughly +40% to +100%). Legend: rack locations, refrigerant piping, 6"x6" metal box conduit, 10-ton and 15-ton Deluxe CRAC units, PDU. Scale: 1 square = 2 sq ft.]

NOTES:
1. Cable trays are typically mounted in the middle of the raised floor and will accept 6" of cables.
2. Customer is straightening up cables in racks and has pushed excess cable into wire ways; terminal strips are being used under the floor and are obstructing the air flow under the racks.
3. Metal box conduit lays 1' off the concrete under-floor.
4. Percentages shown on the floor plan are approximate amounts of free space at a floor tile.
5. Yellow shaded area indicates the highest concentration of cables under the floor.
Section 6 – Site Photos
