TEECOM engineers participated in a round table Q&A addressing the complexity of telecom design for hyperscale data centers.
Participants
Alex Cardiasmenos (AC), Tim Kuhlman (TK), Darrel Scobie (DS), Shaun Barnes (SB), Casey Wittkop, John Pedro, and Mike Candler (MC).
How do you approach the technology design of a hyperscale data center differently than a corporate data center?
Alex Cardiasmenos (AC): The size, quantity, and complexity of pathways are one major difference between hyperscale and corporate data centers. The sheer size of hyperscale data centers necessitates large quantities of cable trays, conduits, and vaults, oftentimes an order of magnitude greater than their corporate counterparts. Most hyperscale data centers are Class 4 data centers, meaning they are fault tolerant and have 2(N+1) redundancy. That redundancy extends to the cabling plant design, which drives up the size and number of pathway routes required to maintain it. High-strand-count fiber-optic cables and inter-building connectivity often necessitate a complex system of telecommunications vaults and duct banks. With all of this in mind, additional coordination and design are required with the owner and design team to ensure proper project delivery.
Tim Kuhlman (TK): Hyperscale data centers are typically purpose-built to be data centers, as opposed to a corporate data center that is fitted into a commercial office building. One aspect of going to hyperscale is how the facility will be underwritten (insured) and the impact this has on design. For a typical office space, an owner will work with their underwriter to insure the building, and the local building codes dictate the minimum requirements for building design and safety. An owner's requirements may exceed the Authority Having Jurisdiction (AHJ) minimums, but at a capital cost. Hyperscale facilities, with energy plants ranging from 25 to 40 megawatts, are more like industrial facilities and are often underwritten by Factory Mutual (FM Global). FM is a worldwide leader in underwriting industrial facilities and providing direction on mitigating risk. The FM directives can require a building to be designed beyond the minimum requirements set by the AHJ. This has to be taken into account by the design team at the beginning of the project. I have seen owners who don't underwrite with FM but still use their directives, as they are considered industry best practices for reducing risk.
Darrel Scobie (DS): Carrier design differs in a hyperscale data centre from a colocation or corporate data centre. Generally, with hyperscale builds, the design intent is for the carriers to bring their cables (copper or fibre) to two, three, or even four different access vaults or areas within the site boundaries, depending on the specific diversity the owner requires. Here the carrier cabling is terminated/spliced onto the site copper/fibre network ring and then routed to the data centre's Main Equipment Rooms (MERs) or main core rooms for connectivity. Colocation or corporate carrier connections differ; the design intent there is for the carriers to route the cabling directly into one or two Meet-Me Rooms (MMRs) or Points of Presence (POPs) within the building. Colocation centres generally have two diverse MMRs for the termination of carrier cabling; from there the connectivity is routed to the customer's core cabinet rows. It's not uncommon for carrier cabling to be presented within a designated rack that is shared with other customers in the building. In some cases, customers will request their own secure rack, but this generally depends on space allocations within the MMR.
Shaun Barnes (SB): The approach to design will fundamentally depend on the service offering being provided for that specific data centre function and the specific requirements that the location of the data centre will dictate. The philosophy behind the design approach will be similar for each. What needs to be considered is the differing needs of each variation with regard to location, size, pathways, connectivity, and the specific services required for operation. Understanding the client's needs for their market sector will help to shape the design and the elements required to produce not only a redundant and constructable design but one that is also operationally effective.
Does the size of a hyperscale data center require you to take a different approach to the design of the technology systems?
Casey Wittkop: At a high level, our approach to hyperscale data center technology systems design follows the same proven approach we apply to corporate data centers or other types of projects, but the extreme size and scale demand even greater attention to detail and coordination. The repetition of design elements within the equipment rack rows and the multiple tiers of cable tray in the large data halls and core technology rooms of hyperscale data centers has the potential to magnify even the smallest issues. Additional steps in our design approach are intended to expose and address any potential issues to ensure the design elements can be scaled up and effectively repeated thousands of times. For example, the design of a single data hall server row may require multiple design options and iterations, construction model walk-throughs with the construction team, and reviews of off-site mock-ups to perfect the design and demonstrate constructability before it is replicated across the data hall.
DS: There are many different approaches to consider when designing a hyperscale data centre, size being an important factor. One of these is the cabling itself, in particular cable lengths and sizes. Generally, in a colocation data centre the network/cabling infrastructure is local to the racks. In a hyperscale environment, many rooms, such as main equipment rooms (MERs), main distribution frames (MDFs), and intermediate distribution frames (IDFs), could be external to the data halls. Therefore, cable lengths need to be looked at carefully.
SB: Generally, the design approach will be the same for most hyperscale data centre projects, as the design tools and the way they are applied to produce an effective design will be consistent. The type of technology deployed within most hyperscale projects will be similar but has the potential to be upgraded frequently. It is important to ensure the design team is aware of these technologies and is capable of adapting to an ever-changing environment. Understanding how to manage and design a technology upgrade for legacy systems is essential for most data centre deployments, but even more so in hyperscale environments.
TK: Data centers are the ideal environment for using pre-terminated fiber and copper-trunk cables. These are cables that come from the factory with the cable terminations factory-installed and tested. They are typically laid in the cable tray from network-equipment rack to network-equipment rack. However, in a hyperscale data center, the termination locations may be in separate parts of the building. The pathways, wall penetrations, and firestop assemblies have to take into account the cable-end size of the pre-terminated cable assembly. In addition to this, there is the cumulative amount of cable slack to deal with. Pre-terminated cables are ordered slightly longer because getting the exact length is near impossible, and there is no solution for a cable that is too short. Even with accurate cable lengths modelled in CAD, there may be three to six feet of cable slack per cable trunk. In a hyperscale data center with hundreds of cables, this slack can take up a lot of space and has to be accounted for in the design.
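To put rough numbers on the slack problem described above, here is a minimal sketch; the trunk count, slack per trunk, and cable diameter are illustrative assumptions, not project values.

```python
import math

# Rough estimate of cumulative slack from pre-terminated trunk cables.
# All numbers below are illustrative assumptions, not values from the article.
TRUNK_COUNT = 400          # hypothetical number of pre-terminated trunks
SLACK_PER_TRUNK_FT = 4.5   # midpoint of the 3-6 ft slack range mentioned above
CABLE_DIAMETER_IN = 0.6    # assumed outer diameter of one trunk cable

total_slack_ft = TRUNK_COUNT * SLACK_PER_TRUNK_FT

# Cross-sectional area of one cable (square inches), then the volume the
# slack alone would occupy if neatly coiled (cubic feet).
cable_area_in2 = math.pi * (CABLE_DIAMETER_IN / 2) ** 2
slack_volume_ft3 = total_slack_ft * cable_area_in2 / 144  # 144 in^2 per ft^2

print(f"Total slack to store: {total_slack_ft:,.0f} ft")
print(f"Approximate volume occupied: {slack_volume_ft3:.1f} cubic feet")
```

Even with conservative assumptions, the slack quickly adds up to a length and volume that have to be planned for in the tray and slack-storage design.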
How does the size of the building and distance between the telecom rooms affect the choice in using multi-mode fiber, single-mode fiber, or twisted-pair copper cabling?
John Pedro: The shift from copper to multi-mode to single-mode fiber has been in progress for more than two decades. Today, in a hyperscale environment, copper cabling primarily supports the users (workstations and Wi-Fi) and the back-of-house system management connections (building management systems, electrical power management systems, lighting, etc.). The management of different media is also a consideration when it comes to physical space for pathways and its associated costs. This has also contributed to the shift to a single medium where possible, such as connections to the management ports of switches.
Multi-mode fiber still exists, but we find it in legacy environments not running the first-tier production network traffic. As with copper, multi-mode connections require a high strand count versus the two strands of single-mode. As we migrate up in throughput, the multi-mode path becomes unmanageable and no longer cost-effective due to the high count of strands and cables and the larger pathways they require.
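To illustrate that strand-count difference, the sketch below compares total fiber strands for the same number of uplinks using a parallel-optic multi-mode link versus a duplex single-mode link; the per-link fiber counts are standard for these optic types, while the uplink quantity is a made-up example.

```python
# Compare total fiber strands needed for the same number of uplinks
# using parallel multi-mode optics vs duplex single-mode optics.
# Per-link fiber counts are standard for these optic types; the uplink
# quantity is a hypothetical example.

FIBERS_PER_LINK = {
    "100GBASE-SR4 (multi-mode, parallel)": 8,   # 4 transmit + 4 receive fibers
    "100GBASE-LR4 (single-mode, duplex)": 2,    # 1 transmit + 1 receive fiber
}

UPLINKS_PER_ROW = 64   # hypothetical uplink count for one server row

for optic, fibers in FIBERS_PER_LINK.items():
    total = fibers * UPLINKS_PER_ROW
    print(f"{optic}: {total} strands for {UPLINKS_PER_ROW} uplinks")
```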
Large and hyperscale data centers have always pushed the limits of cabling. When the amount of cabling required for production connections exceeds what the equipment and the facility can manage, we migrate to the next medium. By continually pushing the limits, these data centers drive new technologies, drive down prices for second-generation technologies, and pull smaller and corporate data centers along with them.
DS: The distance between telecom rooms significantly affects the type of cable that is used. Copper cabling (depending on its category) can only run a certain length; in most cases, Category 6 (CAT6) cabling can only run 90 m, so the telecom rooms need to be within that distance. It is therefore important to undertake a distance study during the early stages of design. CAT6 is generally used for Wireless Access Point (WAP) cabling and top-of-rack switches, but even these are gradually being transitioned to fibre connections. Multi-mode fibre also has a shorter maximum run length than single-mode, which is why a lot of the big hyperscalers are primarily using single-mode for their infrastructure.
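A distance study of this kind can start as a simple check that every device location falls within the 90 m copper limit of a proposed telecom room. The sketch below shows the idea with made-up coordinates and an assumed routing allowance for non-straight tray paths.

```python
# Quick distance check: are all device locations within the 90 m copper
# horizontal limit of their nearest telecom room? Coordinates and the
# routing factor are illustrative assumptions, not real project data.
import math

COPPER_LIMIT_M = 90          # horizontal cabling limit cited above
ROUTING_FACTOR = 1.4         # assumed allowance for non-straight tray routing

telecom_rooms = {"IDF-1": (10, 20), "IDF-2": (120, 20)}   # (x, y) in metres
devices = {"WAP-101": (40, 55), "WAP-214": (60, 120)}

for name, (dx, dy) in devices.items():
    best_room, best_len = None, float("inf")
    for room, (rx, ry) in telecom_rooms.items():
        straight = math.hypot(dx - rx, dy - ry)
        routed = straight * ROUTING_FACTOR
        if routed < best_len:
            best_room, best_len = room, routed
    status = "OK" if best_len <= COPPER_LIMIT_M else "EXCEEDS LIMIT"
    print(f"{name}: {best_len:.0f} m via {best_room} -> {status}")
```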
TK: Due to the increased size of the building, assumptions we make for smaller data centers have to be re-examined for the hyperscale data center. For example, OM3 or OM4 multi-mode fiber is fairly standard for high-bandwidth circuits in a typical data center, but a hyperscale data center could be longer than two Walmart Supercenters placed end to end. At these building sizes, multi-mode fiber will exceed its distance limitations, and a single-mode fiber solution has to be considered.
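As a rough illustration of that media decision, the sketch below picks a medium for each link based on commonly published reach limits for 100 Gb/s short-reach and long-reach optics; the reach values are typical IEEE figures, and the link lengths are hypothetical.

```python
# Pick a fiber medium for each link based on commonly published reach
# limits for short-reach vs long-reach 100 Gb/s optics. The link list
# is a hypothetical example.

REACH_M = {
    "OM3 multi-mode (100GBASE-SR4)": 100,
    "OM4 multi-mode (100GBASE-SR4)": 150,
    "Single-mode (100GBASE-LR4)": 10_000,
}

links = {"MDF to IDF-A": 85, "MDF to IDF-B": 240, "MER to far data hall": 620}

for link, length in links.items():
    # First medium (in order of increasing cost/reach) that covers the run.
    options = [m for m, reach in REACH_M.items() if length <= reach]
    choice = options[0] if options else "no standard option"
    print(f"{link} ({length} m): {choice}")
```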
SB: Most hyperscale deployments will utilize single-mode fibre (SMF) for interconnectivity over multi-mode fibre. The core reasoning is based on a number of factors, but primarily, SMF has far lower attenuation and supports much longer transmission distances, which improves performance over the channel.
Mike Candler (MC): In my experience, multi-mode fiber is not being used in network designs as often, and single-mode seems to be the norm, so fiber-optic cabling distances are less of a concern. Copper distances are critical, though, especially with so many data connections in a building. Following industry best practices and conservative designs is the best way to keep costs down and reliability up.
Are there issues you have seen in the construction of such large facilities that have required you to change your approach to the technology systems design?
DS: One of the issues that has to be looked at closely is the ratio of cable size and weight to the type of containment used for the installation of the cables. Larger hyperscale facilities generally have larger-diameter fibre cables due to the requirement for more fibre cores; therefore weight and ceiling loads need to be assessed, along with a certain percentage of spare capacity for expansion cables and upgrades, when designing the appropriate cable containment.
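A simple way to sanity-check containment sizing against weight is sketched below; every number in it (cable count, cable weight, growth percentage, tray rating) is an illustrative assumption rather than a value from any standard or from this discussion.

```python
# Estimate cable tray weight loading including a spare-capacity allowance.
# All weights, counts, and percentages below are illustrative assumptions.

CABLES_DAY_ONE = 250            # hypothetical trunk count in one tray run
WEIGHT_PER_M_KG = 0.35          # assumed weight per metre of one fibre trunk
GROWTH_ALLOWANCE = 0.30         # assumed 30% spare capacity for expansion
TRAY_RATING_KG_PER_M = 120      # assumed rating of the tray and its supports

design_cables = CABLES_DAY_ONE * (1 + GROWTH_ALLOWANCE)
load_kg_per_m = design_cables * WEIGHT_PER_M_KG

print(f"Design load: {load_kg_per_m:.0f} kg/m "
      f"({design_cables:.0f} cables including growth)")
print("Within tray rating" if load_kg_per_m <= TRAY_RATING_KG_PER_M
      else "Exceeds tray rating - larger tray or more supports needed")
```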
TK: Yes, the size of the facility, the number of servers being connected, and the increased size of the network tying everything together have increased the amount of cabling being conveyed across the building and then concentrated at the network racks. This requires cable management to be taken to the next level. In a corporate data center, there may be one level of tray for point-to-point trunk cables and another level of tray for patch cords. In a hyperscale data center, the amount of cabling requires multiple levels of tray to manage and segregate the cables. This is necessary due to the sheer quantity of cables and to reduce the risk that an accident while working in one cable tray affects multiple levels of the network. I have seen hyperscale data centers with three to six levels of cable tray. This creates design challenges for routing the cable trays, tying into the appropriate network racks, firestopping through wall penetrations, and coordinating around other utilities wanting to route in the same spaces.
SB: One of the core fundamentals of producing a successful design is ensuring full integration within the project planning and design team. Too often the technology designer is brought onto the design team late, after the early design phases, and this has the potential to cause conflict in later stages. The technology designer should be working with the civil, structural, and architectural (CSA) designers, as well as the mechanical, electrical, and plumbing (MEP) designers, at an early stage to ensure adequate pathways and spaces are accounted for in the early design. This helps with planning applications and connectivity strategies, but it also ensures early coordination between disciplines, reducing potential design changes and rework later in the design program and ultimately saving valuable time and producing potential cost savings for the project in the long term.
MC: Larger construction projects typically require phased construction. Phased construction projects, if not appropriately managed, can cause issues with quality, damage to infrastructure, delays, changes, and rework. Designs that incorporate construction phasing require careful consideration of the cable plant to accommodate construction, commissioning, and turnover of spaces to the owner.