
Data Center and Server Room Standards

Guideline
Purpose: 

The purpose of the Data Center and Server Room Standards is to describe the minimum requirements for designing, installing, securing, monitoring, maintaining, protecting, and decommissioning a data center or server room at the University of Kansas.

Applies to: 

University employees (faculty, staff, and student employees), students, and other covered individuals (e.g., University affiliates, vendors, independent contractors, etc.) in their access and usage of University technology resources during the course of conducting University business.

Campus: 
Lawrence
Policy Statement: 
  1. Physical Plant Layout and Management
    1. HVAC
      1. CRAC (Computer Room Air Conditioner) Units:
        1. Cooling and related equipment must be sized to account for:
          1. The size of the cooling load of all equipment.
          2. The size of the cooling load of the building (lighting, power equipment, personnel, building envelope).
          3. Oversizing to account for humidification effects.
          4. Oversizing to account for redundancy should a unit fail.
          5. Oversizing to account for appropriate future growth projections.
        2. All cooling equipment must be designed, installed, and maintained by qualified technicians in accordance with local and state codes. All cooling equipment must follow the vendor’s recommended maintenance schedule.
        3. Air filtration media should be installed at air intake points. Media should be replaced on a regular schedule based on the manufacturer recommended filter lifespan.
      2. Humidity/temperature control:
        1. Humidity and temperature must be maintained at a level that is compliant with the equipment installed on the data center floor.
        2. Humidity injection units must have separate drains and be fed by conditioned water.
      3. Cooling towers:
        1. Units must be maintained by qualified maintenance technicians following factory guidelines.
        2. Units must be in a secure mechanical yard.
        3. Units should be designed and installed to eliminate single points of failure.
        4. Tower restart after power failure must be automatic.
        5. Towers must have a redundant power source to allow time for a controlled shutdown of supported areas.
      4. Pump systems:
        1. Units must be located in a secure mechanical room.
        2. Units should be designed and installed to eliminate single points of failure.
        3. Pumps must restart automatically after a power failure.
        4. Pumps must have an emergency power source to allow time for a controlled shutdown of supported areas.
      5. Pipe system:
        1. Pipe must be constructed of high-quality, rust- and coolant-resistant material.
        2. Pipe loops must have valves in several locations that allow sections of the loop to be isolated without interruption to the rest of the loop.
        3. Pipe loops must have isolation valves for each CRAC unit.
      6. Air delivery and return management:
        1. Cold air delivery must be managed such that the required amount of air can be delivered to any necessary equipment location.
        2. Hot air return must be managed to extract air directly to CRAC units without mixing with cold air delivery.
      7. System monitoring:
        1. All infrastructure systems supporting machine space services must be monitored on a continual basis.
        2. Monitoring must be at a central location such as a Network Operations Center.
        3. Monitoring system must support a master reporting console that can also be accessed remotely (including history logs) and must notify support staff of alarms at central and remote sites.
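The monitoring requirements above can be illustrated with a minimal threshold check. This is a sketch only; the sensor names and alarm thresholds are hypothetical, and a production system would report to a central console such as a NOC and notify support staff:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str      # e.g., "CRAC-1 supply air temp" (illustrative name)
    value: float
    low: float       # alarm when value < low
    high: float      # alarm when value > high

def alarms(readings):
    """Return the readings that are out of range and should page support staff."""
    return [r for r in readings if not (r.low <= r.value <= r.high)]

# Illustrative readings; real thresholds would come from equipment vendor specs.
sample = [
    Reading("CRAC-1 supply air temp (F)", 68.0, 64.0, 80.0),   # in range
    Reading("Under-floor humidity (%RH)", 22.0, 40.0, 60.0),   # too dry
]
out_of_range = alarms(sample)   # only the humidity reading is flagged
```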
    2. Electrical Systems
      1. Main and step down transformers:
        1. Must be located in a secure mechanical room.
        2. Must have HVAC systems to support heat load and correct humidity levels for each unit.
        3. Must be maintained by a qualified technician to factory standards and be supportable by extended factory warranty.
      2. Main power control panel and PLC (Program Logic Control):
        1. Must be maintained by a qualified technician to factory standards.
        2. Must be located in a secure mechanical room.
        3. Must have HVAC systems to support heat load and correct humidity levels for each unit.
        4. Must have surge suppression sufficient to prevent large surges from damaging panels and equipment supported by panel.
        5. PLC must have password security.
        6. PLC must have UPS (Uninterruptible Power Supply) support for power failure.
      3. Motor control panels:
        1. All controls must have automatic restart after power failure.
        2. Must be maintained by a qualified technician to factory standards.
        3. Must be located in a secure mechanical room.
        4. Must have HVAC systems to support heat load and correct humidity levels for each unit.
      4. UPS systems:
        1. UPS systems in the data center must be sized to meet current and future needs, with sufficient battery backup to allow for a controlled shutdown of primary servers.
        2. UPS systems must be designed, installed, and maintained by authorized electricians and technicians and housed in a secure location. UPS systems must follow the manufacturer’s recommended maintenance schedule.
        3. UPS systems must have bypass capability to allow for periodic maintenance.
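As an illustration of the sizing requirement, the ride-through estimate below checks whether a battery bank can carry the load long enough for a controlled shutdown. The capacity, efficiency, and shutdown-window values are assumed for the example and are not part of this standard:

```python
def ups_runtime_minutes(battery_kwh: float, load_kw: float,
                        efficiency: float = 0.9) -> float:
    """Approximate battery ride-through time in minutes for a given IT load.

    battery_kwh: usable stored energy; efficiency: assumed conversion losses.
    """
    if load_kw <= 0:
        raise ValueError("load_kw must be positive")
    return battery_kwh * efficiency / load_kw * 60.0

# Example: 40 kWh of batteries carrying a 60 kW load
runtime = ups_runtime_minutes(40.0, 60.0)      # 36.0 minutes
SHUTDOWN_WINDOW_MIN = 15.0                     # assumed controlled-shutdown time
sufficient = runtime >= SHUTDOWN_WINDOW_MIN    # True for this example
```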
      5. Backup batteries:
        1. Must follow the manufacturer’s recommendations; the system must be of sufficient quality and capacity to ensure a long service life, thus limiting breaks in the battery strings.
        2. Must be located in secure area with proper ventilation as required.
        3. Must be installed and maintained by authorized technicians.
        4. Must be approved for use in computer equipment UPS systems.
      6. Sub-panels:
        1. Must be sized to meet current and future needs.
        2. Must be located in the data center to minimize power runs to desired equipment.
        3. Panel maps must be maintained to reflect their most current usage.
        4. Sub-panels must never be opened at the face plate by anyone other than qualified electricians.
        5. All materials must be at least three feet away from sub-panels.
      7. RPP (Remote Power Panel) units:
        1. Must be located to maximize ease of distribution to equipment.
        2. Must comply with BS/IEC/EN 60439-1.
      8. Power strips:
        1. Must be sized to meet the power requirements of the cabinet in which they are installed.
        2. Power receptacles for power strips must be installed by qualified electricians.
        3. Monitoring systems must be IP capable.
      9. Power cable layout:
        1. The power pathways must maintain a minimum separation from data cable pathway in accordance with ANSI/TIA-469-B Standards and the University of Kansas Design and Construction Standards Division 27 for Telecommunication Systems.
        2. Equipment power cables should be the minimum required length and slack/strain management must be employed.
        3. Cables must be arranged to minimize air flow disruptions.
      10. Grounding systems:
        1. All data center equipment must be grounded in compliance with state and local codes.
        2. Data center equipment grounds must be independent of all other building grounds (such as lightning protection systems).
        3. All metal objects must be bonded to ground including cabinets, racks, PDUs (Power Distribution Units), CRACs, cable pathway, and any raised floor systems.
        4. Ground resistance should be < 1 Ohm.
      11. Monitoring system:
        1. All electrical equipment must be monitored.
        2. Monitoring systems must be IP capable.
        3. System must have a central monitoring console located in an area such as a NOC (Network Operations Center) and be remotely accessible.
        4. System must be able to report alarms at the central and remote consoles by email and send recorded cell phone messages.
        5. Monitoring system must have analysis and reporting function.
        6. System must be able to retain log files of equipment performance and incident history.
      12. Generator management:
        1. Generators must be start-tested and run for at least one hour once a month.
        2. A full load test and switching test must be conducted at least yearly.
        3. Maintenance logs must be kept on all tests and reflect all maintenance performed.
        4. All maintenance must be performed by a qualified technician to factory specifications.
        5. Management must include remote alarm panel (enunciator panel).
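The monthly and yearly test intervals above lend themselves to a simple overdue check; the dates below are illustrative:

```python
from datetime import date, timedelta

def overdue_generator_tests(last_start_test: date,
                            last_full_load_test: date,
                            today: date) -> list:
    """Flag generator tests that have lapsed beyond the required intervals."""
    issues = []
    if today - last_start_test > timedelta(days=31):
        issues.append("monthly start/one-hour run test overdue")
    if today - last_full_load_test > timedelta(days=365):
        issues.append("yearly full-load and switching test overdue")
    return issues

# Example: start test 45 days ago, full-load test 100 days ago
today = date(2024, 6, 1)
issues = overdue_generator_tests(date(2024, 4, 17), date(2024, 2, 22), today)
# only the monthly start/run test is flagged as overdue
```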
      13. Maintenance and testing:
        1. All electrical system components should be regularly inspected.
        2. Main power switches, transformers, automatic transfer switches, and other major electrical system equipment must be maintained by qualified technicians per factory specifications and recommendations for service cycles.
    3. Access Control and Safety
      1. Door security:
        1. Door access control must be maintained 24/7 and should conform to ISO-27001 standards.
        2. An electronic access control system should be in place and log all access to secure data center areas.
        3. Access logs should be maintained for a minimum of one year or longer as specified by site security policy.
        4. Enforcement of strict policies and sign in/out logs is mandatory.
        5. Review of procedures and sign in/out logs must be done on a regular basis.
        6. Secured doors must fail open in a fire emergency.
      2. Video security:
        1. Allows for local and remote surveillance of secured and public spaces.
        2. Recording device (tape or hard disk) must be located in a secure area.
        3. Recording must be done on a regular basis to ensure proper operation of the video security system.
        4. All security recordings must be saved for no less than 30 days.
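The 30-day retention rule can be enforced with a small purge check: a recording may only be deleted once it is older than the minimum retention period. The file names and dates are illustrative:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # minimum retention required by this standard

def purgeable(recordings, now):
    """Return names of recordings old enough that deleting them would not
    violate the 30-day minimum retention requirement."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [name for name, recorded_at in recordings if recorded_at < cutoff]

now = datetime(2024, 6, 1)
recs = [("cam1-0401.mkv", datetime(2024, 4, 1)),   # 61 days old -> purgeable
        ("cam1-0520.mkv", datetime(2024, 5, 20))]  # 12 days old -> must keep
old = purgeable(recs, now)   # only the April recording is eligible
```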
      3. Granting security access:
        1. Data center locations must have a visitor/non-essential staff access policy.
        2. Access must only be granted to essential personnel.
        3. Visitors must be signed in and out and be supervised at all times.
        4. Visitor logs should be maintained for a minimum of one year or longer as specified by site security.
      4. Emergency procedures:
        1. All sites must maintain published emergency procedures that address:
          1. Emergency contact information
          2. Various emergency scenarios and the respective site’s planned responses
          3. Ongoing testing and staff awareness
      5. Fire alarm and suppression systems:
        1. Must be designed specifically for use in data centers.
        2. Must comply with all state and local building codes.
        3. Suppression systems must be designed to minimize risk of equipment damage.
        4. Suppression systems must be gas-based or dry pipe systems.
        5. Suppression system must minimize risk to building occupants.
        6. Must be maintained by qualified technicians.
    4. Raised Floor Systems
      1. Under floor space management:
        1. Must remain clean and corrosion free.
        2. Constant air pressure must be maintained.
        3. Must remain obstruction free for proper air flow.
      2. Cleaning:
        1. Must be done with vacuum cleaners equipped with HEPA/S-class filters.
        2. Must be done on a continual basis.
      3. Floor structure maintenance:
        1. Must be corrosion and rust free.
        2. Damaged pedestals, cross members, tiles, or missing fasteners must be replaced immediately to maintain floor integrity.
      4. Floor grounding:
        1. Must be separate from building ground.
        2. Must comply with all state and local codes.
    5. Server Cabinet Systems
      1. Cabinet standards:
        1. Data center rack enclosures must have 42U vendor-neutral mounting rails that are fully adjustable and compatible with all EIA-310 (Electronic Industries Alliance) compliant 19” equipment.
        2. Cabinets must have access points for power and data pathways at the top and bottom of the cabinet.
        3. The data center site must have a standardized set of cabinets tailored to the site’s specific needs.
      2. Cabinet layout:
        1. The cabinets will be configured in a standard hot aisle cold aisle configuration.
        2. The cold aisle edge of the equipment enclosures must line up with the edge of the floor tiles.
        3. Hot and cold aisles must be wide enough to ensure adequate access to equipment and safe staff work space.
        4. In cases where vented floor tiles alone are insufficient to meet the heat load for an area, additional cooling measures will be used.
        5. Blanking panels will be installed in any unused rack space to minimize cold/hot air mixing.
      3. Cabinet security:
        1. All cabinets must be lockable.
        2. All cabinets must reside in a secure area within the data center.
      4. Cabinet loading:
        1. Rack loading must not exceed the weight rated capacity for the location’s raised floor.
        2. Rack heat load must not exceed the cooling capacity of the location.
        3. Large servers and equipment must be installed at the bottom of the rack.
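A planned rack build can be checked against the floor and cooling limits above; the weights, heat loads, and capacity figures below are illustrative:

```python
def rack_load_ok(equipment, floor_limit_kg: float,
                 cooling_limit_kw: float) -> bool:
    """equipment: list of (weight_kg, heat_kw) per device in the planned rack.

    Returns True only if the rack stays within both the raised-floor weight
    rating and the location's cooling capacity.
    """
    total_kg = sum(kg for kg, _ in equipment)
    total_kw = sum(kw for _, kw in equipment)
    return total_kg <= floor_limit_kg and total_kw <= cooling_limit_kw

# Example rack: two storage arrays and a switch (illustrative figures)
plan = [(120.0, 1.2), (120.0, 1.2), (10.0, 0.3)]
ok = rack_load_ok(plan, floor_limit_kg=450.0, cooling_limit_kw=5.0)  # True
```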
    6. Cable Plant
      1. Overhead delivery system cable layout:
        1. The data room must have a system to support overhead delivery of data connections to the equipment cabinets.
        2. The data pathways must maintain a minimum separation from high voltage power and lighting in accordance with ANSI/TIA-469-B Standards (American National Standards Institute/Telecommunications Industry Association) and the University of Kansas Design and Construction Standards Division 27 for Telecommunication Systems.
      2. Fiber standards:
        1. New fiber installation purchases must be 50 micron OM4 Laser optimized fiber.
        2. All fiber installations must be labeled and comply with the KUIT Labeling Standard.
      3. Copper standards:
        1. Copper jumpers must be CAT6 with Booted RJ45 connectors.
        2. All copper data cables must be labeled and comply with the KUIT Labeling Standard.
      4. Grounding:
        1. All cabinets and cable delivery pathways must be grounded in compliance with the University of Kansas Design and Construction Standards Division 27 for Telecommunication Systems.
  2. Support Services
    1. Server Installation
      1. Power:
        1. Systems with redundant power supplies must have their power cords plugged into separate power sources.
        2. Power must be isolated from data cables.
        3. Power cords must be factory certified.
        4. Power cords must be clearly labeled and comply with the KUIT Labeling Standard.
      2. Rack space:
        1. Heavier equipment must be installed in the bottom half of the rack. 
        2. Equipment must be clearly labeled and comply with the KUIT Labeling Standard.
      3. Data connections:
        1. Cable must not exceed required length by more than one foot.
        2. Must be isolated from the system and rack power delivery system.
        3. Must be clearly labeled and comply with the KUIT Labeling Standard.
      4. Fiber connections:
        1. Fiber must not exceed required length by more than one meter.
        2. Must be clearly labeled and comply with the KUIT Labeling Standard.
        3. Must not exceed minimum bend radius as specified by the manufacturer.
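Label compliance can be checked automatically. The KUIT Labeling Standard’s actual format is not reproduced in this document, so the pattern below is a purely hypothetical example (building-rack-panel-port) of the kind of check a site could automate:

```python
import re

# Hypothetical label pattern: BLDG-Rnn-Pnn-nn (NOT the real KUIT format)
LABEL_RE = re.compile(r"^[A-Z]{2,4}-R\d{2}-P\d{2}-\d{2}$")

def label_ok(label: str) -> bool:
    """Return True if the label matches the assumed site labeling pattern."""
    return LABEL_RE.fullmatch(label) is not None

# e.g., label_ok("PCC-R04-P01-12") is True under this hypothetical pattern,
# while a lowercase or malformed label is rejected.
```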
    2. Network Layout
      1. Standard switch layout:
        1. All networking equipment will be installed by KUIT staff regardless of ownership.
        2. Switches must be installed in a fashion to minimize the length of data cables required to provision a data connection.
      2. Highly critical system switch layout and redundancy:
        1. In the case of highly critical systems where network path redundancy is required, the systems must have redundant data circuits that connect to separate switches.
        2. Redundant switches must be plugged into separate power strips.
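The redundancy requirements above (redundant circuits on separate switches, switches on separate power strips) can be verified with a small check; the switch and power-strip identifiers are illustrative:

```python
def redundancy_ok(circuits) -> bool:
    """circuits: list of (switch_id, power_strip_id), one tuple per data
    circuit of a single highly critical system."""
    switches = {sw for sw, _ in circuits}
    strips = {ps for _, ps in circuits}
    # at least two circuits, each on its own switch, on separate power strips
    return (len(circuits) >= 2
            and len(switches) == len(circuits)
            and len(strips) >= 2)

ok = redundancy_ok([("sw-a", "strip-1"), ("sw-b", "strip-2")])   # compliant
bad = redundancy_ok([("sw-a", "strip-1"), ("sw-b", "strip-1")])  # shared strip
```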
    3. Server Removal
      1. Removal from rack:
        1. All power, data circuits, management circuits, and fiber connections must be reclaimed and removed.
        2. All power cords, fiber and copper cables, and management system connection parts must be inspected and returned to inventory if in acceptable condition.
        3. All management and support software entries must be updated.
        4. Blanking panels must be installed in the vacated rack space.
        5. All servers and components must be labeled, inventoried, and properly bundled for delivery to owner or eWaste following established facility procedures.
      2. Documentation:
        1. An incident ticket documenting removal must be completed and approved before work begins.
        2. The asset database and all other records relating to this server must be updated to reflect the change.
    4. Emergency Response Management
      1. On call policy:
        1. A policy must be in place for each Data Center/Server Room defining staff call back requirements.
        2. Policy will include call back information for all support staff that might be needed to reach a solution.
        3. Policy will define call back authorization needed to request billable support.
      2. Emergency procedure maintenance:
        1. Policies and procedures must be developed to define areas of responsibility, interactions with vendors and other support teams, standard recovery methods, and problem documentation.
        2. Policies and procedures will be reviewed yearly.
        3. Policies and procedures will be stored in a central repository in the cloud, which can be accessed remotely.
      3. Emergency equipment management:
        1. Response kits must be available to support staff equipment and tool needs for each site.
        2. Kits will be stored on site in a secure location.
        3. A master list of all site kits and their locations will be created and a copy kept at each site.
    5. Procedure and Policy Development
      1. Process documentation development:
        1. Each site will have policies defining roles, responsibilities, and performance standards.
        2. Each site change will require a review and update of all documentation.
        3. Site Books will be developed for each site covering all tasks and responsibilities required to support that site. This will include all policies, site standards, and procedures.
      2. Review, update, and replacement of existing documentation:
        1. Policies and procedures will be reviewed and updated yearly.
        2. Each policy and procedure will have an author responsible for maintaining the documents.
    6. Management of Site Support Tools and Equipment
      1. Definition of equipment required:
        1. Each site will create an inventory of support equipment required for that site.
        2. Each site’s needs will be evaluated to determine if support equipment can be shared between sites.
      2. Storage, maintenance, and update of equipment:
        1. Procedures will be developed for maintenance of site equipment.
        2. Site support equipment needs will be reviewed yearly.
        3. Each site will have a defined area for storage of site equipment.
        4. A list of all sites and their equipment will be kept at each site to allow quick location of equipment that can be shifted in case of need.
Exclusions or Special Circumstances: 

Exceptions to these standards shall be allowed only if previously approved by the KU Information Technology Security Office and such approval documented and verified by the Chief Information Officer.

Consequences: 

Faculty, staff, and student employees who violate these University standards may be subject to disciplinary action for misconduct and/or performance based on the administrative process appropriate to their employment.

Students who violate these University standards may be subject to proceedings for non-academic misconduct based on their student status.

Faculty, staff, student employees, and students may also be subject to the discontinuance of specified information technology services based on standards violation.

Contact: 

Chief Information Officer
Price Computing Center
1001 Sunnyside Avenue
Lawrence, KS 66045
785-864-4999
kucio@ku.edu

Approved by: 
Chief Information Officer
Approved on: 
Thursday, December 10, 2009
Effective on: 
Thursday, December 10, 2009
Review Cycle: 
Annual (As Needed)
Background: 

The attached standards are designed to represent the baseline to be used by the Data Center and Server Rooms located on the Lawrence campus. While specific standards organizations are referenced for examples of best practices, it should be noted that site conditions, special requirements, and cost of modification will be taken into consideration when implementing the final configuration of a site. These standards will be regularly reviewed and updated based on new industry standards, new technology, and lessons learned.

Definitions: 

These definitions apply to these terms as they are used in this document.

CAT 6: Category 6 cable, commonly referred to as Cat-6, is a cable standard for Gigabit Ethernet and other network protocols that feature more stringent specifications for crosstalk and system noise.

CRAC: Computer room air conditioner

Data Center: A large facility designed to support large numbers of servers in a conditioned room. Data Centers are usually composed of a large number of racks (25 or more) and are staffed 24/7/365.

EMF: Electromagnetic fields

eWaste: Electronic equipment disposal service provided by Information Technology on the KU campus

HEPA/S: High Efficiency Particulate Air filters are used in vacuum cleaners in computer rooms to collect fine dust particles.

Hot/Cold Aisles: A method of arranging computer racks which focuses cold air delivery at the front intake of a rack and expels hot air at the back. Rack rows are arranged so the backs of rows face each other and hot air is collected above the row by a ceiling plenum, which returns the air to the CRAC unit directly. The fronts of the racks face each other in a row that has vented tiles in the raised floor to deliver cold air to the rack fronts from the CRAC units.

HVAC: Heating, ventilation, and air conditioning

IP: Internet Protocol; used in this document to indicate a device’s connection to the network.

Level I information: University Information with a high risk of significant financial loss, legal liability, public distrust, or harm if this data is disclosed.

Level II information: University Information with a moderate requirement for Confidentiality and/or moderate or limited risk of financial loss, legal liability, public distrust, or harm if this data is disclosed.

NOC: Network Operations Center is the location in the Data Center that is staffed 24/7/365 and monitors and responds to all incidents that affect service availability.

OM4: Fiber optic cable used to support high-speed communication in the 10 Gb/s range

PDU: Power distribution unit

PLC: Program Logic Control is a computer-based control system used to manage main power distribution switching panels.

Response Kit: A special tool kit used by Floor Space Planning technicians to support services on the Data Center Machine Room Floor.

Server Room: Typically a small conditioned space designed to support computing equipment. These are usually satellite processing centers supporting a specific department and not the entire enterprise. A server room at KU can also be defined as any room containing a server or servers critical to the support and operations of a unit or department and/or contains any Level I or II information as defined by the KU Data Classification and Handling Policy and/or Procedures Guide.

SLA/VRLA: Sealed Lead Acid/Valve Regulated Lead Acid are two types of batteries that are used to support Data Center machine rooms during loss of utility power. They are attached to a UPS system.

University Information: Data collected or managed by the University to support University activities. University Information may include records as well as other data and documents.

UPS: Uninterruptible Power Supply is a system used to condition utility power before it is fed to computer systems and provides power failure ride-thru when the main utility fails. These systems have a battery bank attached, which will provide a set number of minutes of ride-thru time. The UPS monitors the batteries and keeps them at full charge. It reports on power and battery problems.

Standards Organizations:

ANSI: American National Standards Institute

TIA: Telecommunications Industry Association

BS/IEC/EN: British Standards/International Electrotechnical Commission/European Standards.

The University of Kansas Design and Construction Standards Division 27 for Telecommunication Systems: Campus construction standards

EIA: Electronic Industries Alliance

IEEE: Institute of Electrical and Electronics Engineers

ISO: International Organization for Standardization

Keywords: 
data center, server room, server installation, server removal, physical plant, HVAC
Change History: 

02/15/2021: Updated guidelines to align with best practices, approved by CIO; policy formerly approved by Provost.
05/19/2015: Policy formatting cleanup (e.g., bolding, spacing).

Information Access & Technology Categories: 
Information Technology
Privacy & Security
