Using a careful methodology, we had the technologies assessed by 27 experts from Uptime Institute, 451 Research and the datacenter supplier sector, and by 600 end users, including C-level executives, IT and facilities managers, and datacenter design engineers. Interestingly, the experts and the users had similar (average) scores for some technologies, while for others there was a clear divergence. Both groups were aligned in their views of the most potentially disruptive technology: distributed resiliency – spreading critical workloads across many datacenters using networks, data replication and traffic switching to reduce the risk of failure.
The 451 Take
There is often no single answer when ranking the most disruptive technologies to come, but in our research, distributed resiliency came out on top. With distributed resiliency, physical resiliency effectively moves up to the IT level (but not the application level). Applications and data can be spread across racks, datacenters and regions, and may be blind to underlying datacenter or component failure – meaning that failures have little impact and can be allowed to happen. Even though some operators have used this technology for a decade or so, it is still nascent. Barriers include scale, increased complexity in IT and management software, and fit-for-purpose networking. But as our research shows, distributed resiliency continues to offer enormous promise to CIOs, and, if it is widely taken up, it has the potential to disrupt the ecosystem, suppliers and many operators.
451 Research uses 'disruptive' as a catch-all term that encompasses a range of technologies we view as particularly promising. Some have been on the market but have yet to be fully applied to the datacenter or have yet to make a sufficiently large impact on the market; others are still in development. A key criterion here is that, taken on their own or in association with others, the 10 technologies examined could significantly change the economics of the datacenter in terms of capital or operating costs. It is not our intention to categorically rank the technologies or to be exhaustive in our evaluations or inclusion standards, nor are the technologies examined here the only ones that are – or may be – disruptive. Instead, we provide a qualitative representation of the views of 451 Research's datacenter and enterprise IT analysts, experts at Uptime Institute (also part of The 451 Group), and select datacenter technology suppliers, as well as a broad range of users across the datacenter sector.
Assessing which technologies are likely to be disruptive is extremely challenging. Different domain experts are likely to have very different perspectives on the impact of various technologies. The same is true for operators and managers of datacenters, as well as others involved in the design, construction and operation of datacenters. Each disruptive technology was assessed against three top-line criteria, on a scale of 1-5:
- How big will the impact be?
- How fast will it happen?
- How likely is it that the technology will reach fruition and disrupt the market?
Top-rated disruptive technology
Distributed resiliency could result in much lighter-weight datacenters, with far less single-site redundancy, especially in the power infrastructure. It scored an average expert rating of 4.2 (out of a possible 5) for how fast it will happen, 3.6 for how likely and 4.0 for how big the impact will be. The disruption drivers for this technology are availability, efficiency and site-level redundancy. As long as at least three datacenters are involved and the networking has sufficient capacity and variety, then extremely high availability can be achieved. Distributed resiliency (whether public clouds are involved or not) means that once more than two datacenters are used, spare active capacity is spread among a number of datacenters. The need for high levels of redundancy in power, cooling and IT at each site may be reduced, potentially saving enormous sums.
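The capacity arithmetic behind that saving can be sketched as follows. This is our own illustration, not a formula from the report: with N active sites sharing a workload, each site only needs enough headroom for the remaining N-1 sites to absorb a single-site failure, so the total spare capacity shrinks as sites are added.

```python
# Illustrative sketch (our assumption, not a published model): per-site capacity
# required so that the loss of any ONE site leaves the rest able to carry the load.
def per_site_capacity(total_load: float, n_sites: int) -> float:
    """Capacity each site must provision so the remaining n-1 sites
    can absorb the full load after a single-site failure."""
    if n_sites < 2:
        raise ValueError("site-level resiliency needs at least two sites")
    return total_load / (n_sites - 1)

def spare_fraction(n_sites: int) -> float:
    """Fraction of total deployed capacity that sits spare in normal operation."""
    deployed = n_sites * per_site_capacity(1.0, n_sites)
    return (deployed - 1.0) / deployed

for n in (2, 3, 5):
    print(n, per_site_capacity(100.0, n), round(spare_fraction(n), 2))
# With 2 sites, 50% of deployed capacity is spare; with 5 sites, only 20% is.
```

This is why the text stresses "at least three datacenters": the economics improve sharply once spare capacity is spread over more than an active-passive pair.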
How fast will it be adopted? There are risks and limitations, and for some, there are too many barriers for the idea to be feasible at present. The questions are to what extent it will be more widely adopted and how disruptive it will be over time. It may be that, at least in the middle and core of the network, distributed resiliency adds a layer of flexibility and resiliency without disrupting the layers below. In practice, networks and datacenters will continue to use high-availability designs, but not everywhere. In some cases, decades of dependency on proven in-house engineering at a few closely controlled sites may need to be replaced by, or supplemented with, trust in a hybrid of new third-party services and technologies.
Other technologies assessed
Below is a brief description of some of the other technologies that were assessed (not including a few that were assessed but did not score high enough to be included). They are listed in random order; more details and their disruptive ratings can be found in our TBI report.
Chiller-free datacenters
Chiller-free datacenters, while a very small minority, are a reality today. It is within the reach of any operator to build an exceptionally energy- and cost-efficient yet mission-critical facility. However, wide operating temperature bands, a prerequisite for chiller-free datacenters in most climates, are often rejected out of fear for IT system health. Such fears are likely to diminish over time as operational confidence with wider bands grows.
Datacenter management as a service
New cloud services known as datacenter management as a service (DMaaS) could transform the way datacenters are managed and operated. DMaaS aggregates and analyzes large sets of anonymized monitored data about datacenter equipment and operational environments from many different facilities (customers); analysis is enhanced with machine learning and, over time, potentially also with deep-learning approaches.
Microgrids
A small number of datacenters are investing in or investigating microgrids as a way to gain energy independence from the utility grid. A microgrid controls localized energy sources and loads. It normally operates connected to and synchronously with the traditional centralized grid, but can disconnect and function autonomously (in 'island mode') as physical or economic conditions dictate. High costs, including for microgrid energy storage, are a common barrier.
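The switch between grid-connected and island mode can be sketched as a simple decision rule. This is our own simplified illustration, not how any real microgrid controller is implemented: real controllers weigh many more signals, but the physical-versus-economic distinction in the text maps directly onto two conditions.

```python
# Simplified illustration (our assumption, not a real controller): choose the
# microgrid's operating mode from grid availability and relative energy cost.
def microgrid_mode(grid_available: bool, grid_price_kwh: float,
                   local_cost_kwh: float) -> str:
    """Return 'island' or 'grid-connected' per physical and economic conditions."""
    if not grid_available:
        return "island"          # physical condition: an outage forces islanding
    if grid_price_kwh > local_cost_kwh:
        return "island"          # economic condition: local generation is cheaper
    return "grid-connected"

print(microgrid_mode(False, 0.08, 0.12))  # outage -> island
print(microgrid_mode(True, 0.18, 0.12))   # expensive grid power -> island
print(microgrid_mode(True, 0.08, 0.12))   # normal operation -> grid-connected
```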
Direct liquid cooling
Direct liquid cooling (DLC) is a method of heat dissipation where the processor or other components are close to or fully immersed in a liquid. The majority of DLC adoption to date has been in high-performance computing (HPC) facilities, but the improved server performance, lower cooling costs and higher rack densities that DLC can enable may be attractive as demand for these facilities increases.
Micro-modular embedded datacenters
Micro-modular datacenters (MMDCs) are a form of prefabricated modular datacenter in a self-contained cabinet that can be deployed in non-datacenter sites. The micro-modular embedded datacenter taps into existing building management systems that aggregate data streams from and control multiple domains of operation. Next-generation edge computing, including the Internet of Things, is expected to drive demand for MMDCs.
Open source infrastructure
Some of the largest datacenter operators are adopting open architectures, such as the Open Compute Project (OCP) and its counterpart Open19, in a bid to lower costs, standardize and simplify. Operators can realize material capex savings by sourcing bare-bones IT hardware, while significant benefits could also be derived at the facility layer from distributing UPSs. Barriers to adoption by enterprises include a need for large-scale orders, and a lack of support services.
Silicon photonics
Fiber optics has already started displacing copper in networks; with silicon photonics (SiPh), this has become possible in the server too, by placing fiber-optic links directly in semiconductor chips. SiPh involves transmitting light using semiconductor chips, but without the need for discrete optical devices, which leads to a step change in the economics of high-speed optical connections in datacenters. SiPh requires scale and a considerable up-front commitment to show its true potential, which means its technical and economic benefits may not be evident to mainstream buyers for some years.
Software-defined power
Software-defined power, like the broader software-defined datacenter, is about creating a layer of abstraction that makes it easier to continuously match power resources with changing datacenter needs. Significant cost savings can be achieved by treating power as a virtual resource that is managed through a control plane, and can be controlled, capped, used, stored or even sold to meet changing demands, service levels or policies.
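One control-plane decision such a layer might make can be sketched in a few lines. This is a hypothetical illustration of priority-based power capping, not any vendor's actual scheme: when facility draw would exceed the available budget, lower-priority racks are capped first.

```python
# Hypothetical sketch (our assumption, not a vendor implementation): allocate a
# facility power budget across racks, capping lower-priority racks first.
def apply_power_caps(racks, budget_kw):
    """racks: list of (name, demand_kw, priority) tuples, higher priority wins.
    Returns a per-rack allocation (kW) that never exceeds budget_kw in total."""
    allocation = {}
    remaining = budget_kw
    for name, demand, _priority in sorted(racks, key=lambda r: -r[2]):
        granted = min(demand, remaining)   # serve as much demand as budget allows
        allocation[name] = granted
        remaining -= granted
    return allocation

racks = [("web", 40, 2), ("batch", 30, 1), ("db", 50, 3)]
print(apply_power_caps(racks, 100))  # batch is capped to the 10 kW left over
```

The point of the abstraction is that policies like this can be changed in software, without touching the electrical infrastructure underneath.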
Storage-class memory
Storage-class memory (SCM), the holy grail of memory research for decades, is an emerging class of technologies with the ultimate objective of simplifying data access. SCM technologies serve processors with data and capture results at high speeds, and show little or no wear under heavy load. The long-term implications will be more profound than just speed: SCM will alter the way computers are designed, and lead to previously inconceivable levels of storage and memory efficiencies. It will also help datacenter managers significantly reduce the capital and operational burden of highly redundant facility infrastructures.
In the overall assessment of the above technologies, both the experts and users gave notably higher scores for 'how big' than for the other two categories. This suggests they have more faith in the ability of the technologies to disrupt than they do in the industry's ability to effectively bring them to market and enable widespread adoption. One notable difference between the two groups was that our panel of experts assigned higher scores for 'how fast' (which translated to a 'mid-term' future, or four to eight years) than our pool of users. This could be interpreted as experts having a higher level of confidence than users in the viability of all the technologies assessed to disrupt the datacenter industry. Only time will tell whether this confidence was insightful – or just optimistic.
Read the 'Disruptive Technologies in the Datacenter: 10 Technologies Driving a Wave of Change' report to learn more.
Rhonda Ascierto is Research Director for the Datacenter Technologies and Eco-Efficient IT Channel at 451 Research. She has spent more than 15 years at the crossroads of IT and business as an analyst, speaker, adviser and editor covering the technology and competitive forces that shape the global IT industry. Rhonda’s focus is on innovation and disruptive technologies in datacenters and critical infrastructure, including those that enable the efficient use of all resources.
Daniel Bizo is a Senior Analyst for the Datacenter Technologies Channel at 451 Research. His research focuses on advanced datacenter design, build and operations, such as prefabricated modular datacenters, highly efficient cooling and integrated facilities and IT management to achieve superior economics. Daniel is also a regular contributor to 451 Research's silicon and systems technology research in the Systems and Software Infrastructure Channel.
Rosanna Jimenez is a Research Associate at 451 Research. Prior to joining the analyst team, Rosanna worked with 451 Research sales supporting vendor and end-user research requests.