This article offers an overview of the key terminology, acronyms, abbreviations, and measurements used when discussing data centres, together with their definitions.
Basic data centre terms
Data centre: A facility that houses and manages computer systems, storage, and networking equipment to ensure the reliable operation and secure management of critical data and applications.
Cloud computing: The delivery of computing services over the internet, utilising data centres as the physical infrastructure that hosts and manages the required hardware and software resources.
Hyperscale: The infrastructure and processes needed in data centre environments to scale seamlessly from a small number of servers to thousands, commonly used in big data and cloud computing contexts.
Hyperscaler: A company or organisation that provides scalable cloud computing services by operating extensive data centre infrastructure capable of supporting vast numbers of servers and handling large-scale workloads. Hyperscalers typically offer services such as cloud storage, computing power, and networking at massive scales, catering to global demand. Examples of hyperscalers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
Disaster recovery: The process of resuming normal operations after a disaster by restoring access to data, hardware, networking equipment, software, power, and connectivity, often relying on additional data centre facilities to ensure the recovery and continuity of critical IT services.
Downtime: A period of time when systems are unavailable due to failures or maintenance, impacting service continuity and potentially disrupting business operations and client access to critical data.
Redundancy: The duplication of critical infrastructure components and systems to ensure backup and protection against downtime caused by failures, thereby maintaining continuous operation and service availability.
Resilience: The capacity of a data centre to swiftly recover and maintain operations despite equipment failures, power outages, or other disruptions, ensuring continuous service and minimal downtime.
Scalability: The ability to efficiently expand or contract resources such as computing power, storage, and networking to meet changing demands without disrupting operations.
Brownfield: The development or expansion of existing data centre facilities or infrastructure on previously used or occupied sites.
Greenfield: The building of a new facility from scratch on undeveloped land, allowing for the design and construction of a data centre tailored specifically to the operator’s requirements.
Types of data centre
Hyperscale Data Centre: A facility designed to accommodate extensive compute and network infrastructure, offering scalability and high-speed processing for large data volumes, enabling major companies like Amazon, Google, and Microsoft to efficiently deliver essential services to a global customer base.
Enterprise Data Centre: A data centre owned and operated by a private company, dedicated to processing internal data and hosting mission-critical applications, thereby supporting the organisation’s operational needs and ensuring data security and control.
Edge Data Centre: A smaller data centre facility, usually situated closer to the end customer. Demand for these facilities is often driven by a need for lower latency or data sovereignty.
Green Data Centre: Data centres constructed with a strong emphasis on energy efficiency, environmental impact, and sustainability by integrating advanced technologies and practices such as renewable energy use, optimised cooling systems, and green building standards to minimise energy consumption, reduce carbon footprints, and promote eco-friendly operations.
Intelligent Data Centre: A data centre that leverages AI, machine learning, and IoT devices to enhance operational efficiency and security. Overall performance is optimised through advanced automation and smart technologies, allowing for more proactive management and improved resource utilisation.
Software-Defined Data Centre (SDDC): A data centre in which networking, storage, computing power, and security are virtualised and managed through software, delivering these resources as on-demand services and enhancing flexibility, scalability, and efficiency.
Data centre services
Infrastructure as a Service (IaaS): Providing computer infrastructure—including virtualisation platforms, servers, software, data centre space, and network equipment—on a subscription basis, allowing clients to access and manage these resources as a fully outsourced service rather than investing in and maintaining their own hardware and infrastructure.
Data Centre as a Service (DCaaS): The delivery of off-site physical data centre facilities and infrastructure to clients, providing managed and scalable IT resources without the need for clients to own or maintain their own data centre infrastructure.
Colocation: The practice of housing multiple customers’ servers and other computing hardware within a single data centre facility, where each customer retains ownership of their equipment while sharing the facility’s infrastructure, such as power, cooling, and connectivity.
Private cloud (single-tenant): A cloud computing environment exclusively dedicated to a single organisation, providing customisable and secure access to computing resources, storage, and applications, which can be hosted within the organisation’s own data centres or by a third-party provider’s data centre, tailored to meet the organisation’s specific needs and compliance requirements.
Private cloud (multi-tenant): A cloud computing environment dedicated to a single organisation but hosted within a shared data centre infrastructure that serves multiple tenants, offering the benefits of privacy and customisation while leveraging shared resources to optimise cost and efficiency.
Public cloud: A computing environment where computing resources, such as servers, storage, and applications, are hosted and managed by third-party providers and made available to multiple organisations or individuals over the internet, offering scalability and cost-effectiveness without requiring users to manage the underlying infrastructure.
Hybrid Cloud: A hybrid cloud integrates both public and private clouds, enabling organisations to run workloads on public cloud infrastructure for scalability and cost efficiency, while managing sensitive or critical workloads on private clouds for enhanced security and control.
Managed Hosting: An IT provisioning model where a service provider leases dedicated servers and associated hardware to a single client, with the equipment housed and managed at the provider’s facility, offering the client hands-off management of the infrastructure.
DRaaS (Disaster Recovery as a Service): A managed service that provides continuous data protection by replicating data and workloads from the primary environment to a designated recovery site, enabling organisations to fail over and restore critical IT services quickly after a disruption without having to build and maintain a secondary data centre of their own.
Data centre measurements
Power Usage Effectiveness (PUE): A metric defined by the Green Grid that measures data centre efficiency by dividing the total energy consumed by the data centre, including both IT equipment and infrastructure, by the energy consumed solely by the IT computing equipment. A PUE of 1.0 would indicate perfect efficiency, with PUE values typically ranging between 1.2 and 2.0 for most data centres.
Data Centre Infrastructure Efficiency (DCIE): A measure of data centre efficiency calculated by dividing the power consumption of IT equipment by the total power consumption of the entire data centre, expressed as a percentage. It is the inverse of Power Usage Effectiveness (PUE), reflecting how effectively a data centre uses energy specifically for IT operations relative to overall energy use.
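As a worked illustration of the two metrics above, the minimal Python sketch below computes PUE and DCIE from annual energy figures; the facility and IT energy values are assumptions chosen purely for the example.

```python
def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    return total_facility_energy_kwh / it_energy_kwh

def dcie(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Data Centre Infrastructure Efficiency: the inverse of PUE, as a percentage."""
    return (it_energy_kwh / total_facility_energy_kwh) * 100

# Hypothetical annual figures for a mid-sized facility (assumed values).
total_energy = 15_000_000   # kWh consumed by the whole site
it_energy = 10_000_000      # kWh consumed by the IT equipment alone

print(f"PUE:  {pue(total_energy, it_energy):.2f}")    # 1.50
print(f"DCIE: {dcie(total_energy, it_energy):.1f}%")  # 66.7%
```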
Critical Load: The computer equipment and systems whose continuous operation is essential for business functions, typically supported by an uninterruptible power supply (UPS) to ensure consistent power and minimise downtime in case of power interruptions. Critical load is measured in watts (W) or kilowatts (kW), which quantify the amount of power required to keep essential computer equipment and systems operational.
Critical Cooling Load: The amount of cooling capacity required to maintain optimal operating temperatures for IT equipment and infrastructure to ensure reliable performance and prevent overheating. Typically measured in British Thermal Units per hour (BTU/hr) or kilowatts (kW). These units quantify the amount of thermal energy that needs to be removed by the cooling systems to maintain the optimal temperature and ensure the proper functioning of the IT equipment.
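Because the load figures above may be quoted in either kilowatts or BTU/hr, a quick conversion is often needed. The sketch below uses the standard factor of roughly 3,412 BTU/hr per kilowatt; the 250 kW critical load is an assumed example value.

```python
BTU_PER_HR_PER_KW = 3412.14  # standard conversion factor

def kw_to_btu_per_hr(kw: float) -> float:
    """Convert a load in kilowatts to British Thermal Units per hour."""
    return kw * BTU_PER_HR_PER_KW

# Assumed example: a data hall with a 250 kW critical IT load.
critical_load_kw = 250
print(f"{critical_load_kw} kW ≈ {kw_to_btu_per_hr(critical_load_kw):,.0f} BTU/hr")
# 250 kW ≈ 853,035 BTU/hr of heat that the cooling plant must remove
```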
Redundancy Levels (N+1, N+2, 2N): Redundancy levels are defined relative to the baseline “N”, representing the minimum number of independent resources required for system operation. In an N+1 configuration there is one additional backup resource; N+2 includes two backup resources; and 2N provides a fully duplicated second set, doubling the total resources available to the system.
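To make the N+1, N+2 and 2N definitions concrete, the short sketch below counts the units each scheme requires; the baseline of four UPS units is purely illustrative.

```python
def units_required(n: int, scheme: str) -> int:
    """Return the total number of units needed for a given redundancy scheme."""
    schemes = {
        "N": n,        # bare minimum, no redundancy
        "N+1": n + 1,  # one spare unit
        "N+2": n + 2,  # two spare units
        "2N": 2 * n,   # a fully duplicated second set
    }
    return schemes[scheme]

# Assumed baseline: four UPS units are needed to carry the critical load.
n = 4
for scheme in ("N", "N+1", "N+2", "2N"):
    print(f"{scheme}: {units_required(n, scheme)} units")
# N: 4, N+1: 5, N+2: 6, 2N: 8
```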
Nominal Cooling Capacity: The total cooling power of air conditioning equipment, encompassing both latent cooling (the removal of moisture from the air) and sensible cooling (the reduction of air temperature), usually expressed in units such as British Thermal Units per hour (BTU/hr) or kilowatts (kW).
Renewable Energy Credits (RECs): Certificates that certify the generation of a specific amount of renewable energy, such as one megawatt-hour (MWh). Data centres often purchase RECs to offset their energy consumption and support their sustainability goals by demonstrating their commitment to reducing their carbon footprint through the use of renewable energy sources.
Water Usage Effectiveness (WUE): A metric that helps data centres measure the amount of water used for cooling and other facility needs, typically expressed in litres or gallons per unit of IT equipment energy consumption (e.g., litres per kilowatt-hour or gallons per megawatt-hour). This measure is used to evaluate and manage the facility’s water consumption efficiency and environmental impact.
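As with PUE, WUE reduces to a simple ratio; the sketch below divides annual site water usage by annual IT energy consumption, using assumed figures for illustration.

```python
def wue(annual_water_litres: float, annual_it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water used per kWh of IT energy."""
    return annual_water_litres / annual_it_energy_kwh

# Assumed annual figures for the example.
water_litres = 18_000_000    # litres used for cooling and other facility needs
it_energy_kwh = 10_000_000   # kWh consumed by the IT equipment

print(f"WUE: {wue(water_litres, it_energy_kwh):.2f} L/kWh")  # 1.80 L/kWh
```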
Rack Cooling Index (RCI): A metric that measures how effectively equipment racks are cooled, typically by comparing rack air-intake temperatures against recommended industry temperature ranges.
Data centre tiering
Tier 1: A Tier 1 data centre, as defined by the Uptime Institute’s tier classification system, is a basic server room that adheres to general guidelines for computer system installations, providing 99.671% availability. It operates with a single, non-redundant distribution path and non-redundant capacity components, offering minimal protection against disruptions and downtime.
Tier 2: A Tier 2 data centre, according to the Uptime Institute’s tier classification system, meets all the requirements of Tier 1 and offers an improved availability guarantee of 99.741%. It includes redundant site infrastructure capacity components, providing enhanced reliability and protection against disruptions compared to Tier 1.
Tier 3: A Tier 3 data centre, as defined by the Uptime Institute, builds on the requirements of Tiers 1 and 2 by offering dual-powered IT equipment connected to multiple independent distribution paths, ensuring an increased availability of 99.982%. This setup provides enhanced reliability and fault tolerance, allowing for maintenance and upgrades without interrupting operations.
Tier 4: A Tier 4 data centre, according to the Uptime Institute’s tier classification, incorporates all components from the previous tiers and adds independently dual-powered cooling systems. It features fault-tolerant infrastructure with redundant distribution paths and the capability to store electrical power, ensuring a high level of reliability with a guaranteed availability of 99.995%.
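The availability percentages quoted for each tier translate directly into a maximum amount of downtime per year; the sketch below performs that conversion for the four tiers listed above.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours (ignoring leap years)

def annual_downtime_hours(availability_percent: float) -> float:
    """Maximum downtime per year implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}
for tier, availability in tiers.items():
    print(f"{tier}: up to {annual_downtime_hours(availability):.1f} hours of downtime per year")
# Tier 1 ≈ 28.8 h, Tier 2 ≈ 22.7 h, Tier 3 ≈ 1.6 h, Tier 4 ≈ 0.4 h
```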
Data centre infrastructure
Data Centre Shell: The physical building structure of a data centre that includes the walls, floors, roof, and basic infrastructure elements but lacks the internal technical systems such as IT equipment, cooling, power, and networking components. It provides the essential framework and environment for the installation and operation of these critical systems.
Data Hall: A dedicated area within a data centre where IT equipment, such as servers, storage systems, and networking devices, is housed and operated. It is designed to provide optimal conditions for equipment performance, including cooling, power supply, and security, and typically includes rows of racks or cabinets where the equipment is installed.
Main Distribution Area (MDA): The central space in a data centre where the structured cabling system is distributed. It typically houses the Main Distribution Frame (MDF), which includes core routers, core switches, UPS power, cooling systems, and manages incoming telecommunications and internet wiring, distributing it to various Intermediate Distribution Frames (IDFs).
Intermediate Distribution Frame: A room equipped with UPS power, cooling, and cable racks that manages and interconnects telecommunications and internet wiring between the Main Distribution Frame (MDF) and workstation devices.
Power Distribution Unit (PDU): A device equipped with multiple outlets designed to distribute electrical power to the equipment housed within a rack, ensuring efficient power management and distribution.
Cutout: An opening in a physical structure, such as a floor or wall, designed to facilitate the passage of cables, pipes, or other infrastructure components. It allows for the integration of essential systems and helps maintain organised and efficient use of space within the data centre.
Cabinet/rack: A structure designed to house and organise IT equipment, including servers, network devices, and other hardware. It provides physical support and efficient management of equipment, often incorporating features for cooling, power distribution, and cable management. In network environments, a rack may also house devices that combine hardware and software to deliver and manage shared services and resources.
Server Room: Dedicated space designed to house a high concentration of information technology equipment, such as servers, networking devices, and storage systems, with controlled conditions to ensure optimal performance, cooling, power supply, and security.
Uninterruptible Power Supply (UPS): A battery-powered device that provides immediate backup power to a computer system or other equipment when the primary power source, such as the utility main, fails. It ensures an instant or near-instant continuation of electrical current, protecting against power interruptions and allowing for safe shutdowns or transitions to alternative power sources.
Sub-floor: The open space located beneath a raised computer floor in a data centre. This area is typically used for routing and managing power cables, cooling ducts, and other infrastructure components, providing efficient access and organisation for essential systems.
Aisle: The open space between rows of racks in a data centre. Best practices involve arranging racks with consistent front-to-back orientation to create ‘cold’ and ‘hot’ aisles, optimising airflow and cooling efficiency.
Data centre cooling
Heating, ventilation, and air conditioning system (HVAC): A system comprising components that condition indoor air, including heating and cooling equipment, ducting, and related airflow devices, to regulate temperature, humidity, and air quality.
Computer Room Air Conditioner (CRAC): A cooling unit designed for data centres that uses a compressor to mechanically cool air, maintaining optimal temperature and humidity levels to ensure the reliable operation of IT equipment.
Computer Room Air Handler (CRAH): A cooling unit used in data centres that utilises chilled water to cool the air, providing temperature control and maintaining optimal conditions for IT equipment.
Fluid Cooler: Coils and fans that transfer heat from the interior environment to the outside, effectively cooling fluids or air by releasing thermal energy into the external environment.
In-row Cooling: Cooling systems positioned between racks in a data centre row that draw warm air from the hot aisle and deliver cool air directly to the cold aisle, minimising the air’s travel distance and improving cooling efficiency.
Cool Aisle: An aisle in a data centre where the fronts of racks face into the aisle, allowing chilled airflow to be directed into the aisle and efficiently enter the racks, optimising cooling performance.
Hot Aisle: An aisle in a data centre where the backs of racks face into the aisle, allowing heated exhaust air from the equipment to enter the aisle and be directed to the CRAC (Computer Room Air Conditioning) return vents for efficient cooling.
Data centre operations
Data Centre Infrastructure Management (DCIM): Software tools used to discover, monitor, and control the assets within a data centre, including both power and computing resources, to optimise operational efficiency and resource management.
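As a toy illustration of the kind of monitoring a DCIM tool performs, the sketch below checks measured rack power draw against rated capacity and flags racks above a threshold. The rack names, readings, and 80% threshold are all assumptions made for the example, not the interface of any real DCIM product.

```python
# Hypothetical inventory: rack name -> (measured draw in kW, rated capacity in kW).
racks = {
    "A01": (4.2, 6.0),
    "A02": (5.7, 6.0),
    "B01": (9.5, 10.0),
}

ALERT_THRESHOLD = 0.80  # flag racks drawing more than 80% of rated capacity

for name, (draw_kw, capacity_kw) in racks.items():
    utilisation = draw_kw / capacity_kw
    status = "ALERT" if utilisation > ALERT_THRESHOLD else "ok"
    print(f"Rack {name}: {draw_kw:.1f}/{capacity_kw:.1f} kW ({utilisation:.0%}) {status}")
# A01 is within limits; A02 and B01 are flagged at 95% utilisation.
```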
VMware Backup: Creating copies of data from virtual machines (VMs) in a VMware environment to safeguard against data loss. This process addresses the challenge of protecting virtualised systems and ensures data integrity and recoverability in case of failures or disasters.
Distribution: Process of routing electrical power to various locations. Outside a building, it involves transmitting power from the power plant through the grid to end users. Inside a building, distribution involves using feeders and circuits to deliver power to various devices and systems within the structure.
Root Cause Analysis (RCA): A systematic approach used to identify the fundamental causes of problems or events, aiming to address these underlying issues rather than just managing symptoms. It focuses on preventing future occurrences by addressing the root causes, rather than merely reacting to problems as they arise.
Root Cause Elimination (RCE): Process of addressing and removing the underlying causes of problems to prevent their recurrence, ensuring that the issues are fully resolved rather than just mitigating their symptoms.
Service Level Agreements (SLAs): Formal contracts between the data centre provider and clients that specify the expected standards for service delivery, including parameters such as uptime guarantees, response times for issue resolution, and maximum allowable downtime, ensuring clear expectations and accountability for performance and reliability.
Liquid Cooling: Cooling technology that uses a liquid to transfer and remove heat. In data centres, the two common methods for heat evacuation are chilled water (a type of liquid cooling) and refrigerant (direct expansion or DX cooling).
Latent Cooling: The portion of cooling capacity used to condense water vapour out of the air rather than to lower its temperature. Condensation releases energy and later evaporation absorbs the same amount, so if the condensed water is removed without evaporating in the same environment, the energy spent on condensation does not effectively contribute to cooling.
Advanced data centre terms
Content Delivery Network (CDN): A system of distributed servers strategically located across various data centres that cache and deliver web content to users from the nearest server, optimising performance, reducing latency, and balancing traffic load to enhance the efficiency and reliability of data delivery.
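To illustrate the “nearest server” idea in the CDN entry, the minimal sketch below picks the edge location with the lowest measured latency for a user; the location names and latency figures are assumptions for the example only.

```python
# Hypothetical round-trip latencies (ms) from one user to candidate edge locations.
edge_latencies_ms = {
    "london": 8,
    "frankfurt": 21,
    "new_york": 74,
    "singapore": 182,
}

def nearest_edge(latencies: dict[str, int]) -> str:
    """Return the edge location with the lowest latency for this user."""
    return min(latencies, key=latencies.get)

print(f"Serve content from: {nearest_edge(edge_latencies_ms)}")  # london
```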
Data Centre Bridging (DCB): A set of standards and technologies designed to enhance data centre network efficiency and performance by enabling the seamless integration and management of Ethernet networks across different data centre environments.
Data Centre Networking (DCN): The process of interconnecting all resources within a data centre, including servers, storage, and networking equipment, to enable seamless data flow, efficient resource utilisation, and reliable communication between systems.
Data Integrity: Data integrity ensures that digital information stored and processed within the data centre facility remains accurate, complete, and unaltered, while being protected from unauthorised access or modifications throughout its lifecycle.
Edge Computing: A distributed computing paradigm that places computations and data storage closer to the data sources, such as IoT devices or local sensors, to enhance response times, reduce latency, and conserve bandwidth by processing data locally rather than relying on a central data centre.
Virtualisation: The process of creating multiple virtual environments from a single physical server or storage device within the data centre, allowing for efficient allocation and management of resources, improved scalability, and enhanced flexibility by isolating and optimising hardware resources for various applications and workloads.