More computing is happening at the edge, where data is generated, but it is currently dwarfed by the huge growth in centralized datacenter capacity. The continued buildout and optimization of digital infrastructure, however, are also driving a renaissance of decentralized, distributed IT.

This anticipated wave of new edge capacity, still in its infancy, will differ in many ways from past edge buildouts, which were mostly IT siloes serving small-business applications and branch and departmental computing needs. Above all, upcoming edge computing installations will not be incidental or tactical in nature but devised as part of strategic multi-tier IT services architectures.

These next-generation edge deployments are opening up new opportunities for hardware and software vendors alike, as well as for service providers that can effectively take advantage of a distributed edge tier. (There were 44 M&A transactions involving edge computing technologies in 2015 and 2016, according to 451 Research's M&A KnowledgeBase.)

This report focuses on major use cases for edge datacenters. It is part of a series of Spotlight reports on edge datacenters, ahead of the publication of the forthcoming Technology and Business Insight report "Datacenters at the Edge."

The 451 Take

Some suppliers, ranging from Schneider Electric and Vertiv to Dell, Ericsson, Huawei and Nokia, expect that micromodular datacenters will play a leading role in meeting distributed edge demand. Others, such as Google, are less convinced and are focused on building networks and using capacity one step back from the edge – at the 'near edge.' Skeptics argue that edge datacenters will be relatively niche, citing CDNs, with their high data volumes, as the main use case. In most other cases, they say, data volumes and latency needs will not require a 'dedicated' edge datacenter. This view is reinforced by the buildout of new large and hyperscale datacenters, backed by reliable networks, which once established will bring the edge much closer to users and 'things.'

Both arguments are likely to bear out over time: the edge will be served by a mix of distinct and different datacenter types. They will range from hyperscale cloud and large colocation facilities that are sited near, or near enough to, the point of use to support many applications, to new micromodular datacenters at the edge, to smaller clusters of capacity that are not large or critical enough even to be described as datacenters.

Context

Edge computing is evolving rapidly and new use cases are emerging. While 451 Research is bullish on future demand for edge datacenters, we believe demand will be driven not by any one use case but by an aggregation of different use cases.

Currently, 451 Research identifies at least five major use-case categories, with considerable overlap between them in some areas. They fall loosely into two groups: established edge datacenter use cases and those that are emerging or are expected to emerge in the near to mid-term. The demand characteristics and the opportunity for technology and service providers in each of these categories will vary, as summarized in the table below.

Demand characteristics for major edge datacenter use cases

Edge datacenter use case | Probable demand characteristics | Opportunity for suppliers and service providers
Content distribution networks (CDNs) | High and established demand that will continue to grow | Existing growth opportunity
Local processing and storage | Largely established demand that will continue to grow | Upgrade opportunity: from server closets/rooms to micromodular datacenters/near edge
Off-cloud processing and storage | Relatively low and established demand that will remain niche | Limited existing opportunity
Internet of Things | Demand is limited today, but rapid growth is expected | New high-growth opportunity
Next-generation networks and distributed cloud | Virtually no demand today, but high growth is expected; the speed of deployment/adoption is unclear | Future growth opportunity that could be tied to the buildout of local network capacity

Source: 451 Research, 2017

Established edge use cases

As discussed, certain use cases for edge-of-network computing and storage capacity are already well established. They are described in detail below.

Content distribution networks

Distributing content to the edge will continue to be a major driver for existing and new edge datacenter capacity. Essentially dedicated private networks of thousands of specialist edge servers and storage, CDNs are designed to provide secure and timely caching and delivery of data. They exist to direct and deliver content speedily and efficiently to consumers in areas where content providers might not have a presence.
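
To make the caching behavior concrete, below is a minimal sketch of what an edge CDN node does in principle: serve content from a local cache when a fresh copy exists, and fetch from the distant origin only on a miss. The TTL value and the fetch_from_origin() stub are illustrative assumptions, not any provider's implementation.

```python
# Minimal sketch of edge caching: serve locally when fresh, otherwise fetch
# once from the distant origin and keep a copy for subsequent nearby users.
import time

CACHE_TTL_SECONDS = 3600.0           # hypothetical freshness window
cache = {}                           # maps url -> (time cached, body)

def fetch_from_origin(url: str) -> bytes:
    # Placeholder for an HTTP request back to the content provider's origin.
    return f"content of {url}".encode()

def get(url: str) -> bytes:
    """Return content for url, preferring the local edge cache."""
    entry = cache.get(url)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]              # cache hit: no trip back to the origin
    body = fetch_from_origin(url)    # cache miss: one origin fetch
    cache[url] = (time.time(), body)
    return body

get("https://example.com/video/segment-001.ts")  # miss: fetched from origin
get("https://example.com/video/segment-001.ts")  # hit: served from the edge
```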

CDNs have core (storage) and edge (distribution) datacenter requirements. Most CDNs lease space in colocation datacenters in urban areas near the point of use. 451 Research forecasts that the total worldwide market for CDN services will grow to $6.8bn by 2019, representing a CAGR of 14% over a five-year period. Growth will be driven by the traditional core business of 'bulk' delivery of video, file downloads and static content, as well as by developing areas such as cloud and network security services.

CDN providers come in a variety of forms:

  • Pure-play CDN providers are stand-alone companies that originally focused on content delivery but now also provide security and network services. Examples include Akamai, Limelight Networks, CDNetworks and Level 3 Communications.
  • Web hosting companies, cloud infrastructure providers and telcos operate their own CDNs but also license technology from pure plays. Examples include BT Group, Telefonica and Verizon.
  • Large internet companies use CDN providers, but a number also own their own CDNs, including AWS (CloudFront), Google (Cloud CDN) and Microsoft.
  • Some large content providers also operate CDNs. Netflix, for example, has its own CDN (in colocation sites) but also relies on AWS for its core datacenter services (it closed the last of its own datacenters in 2015).

Demand for new CDN datacenters will be met by privately owned micromodular datacenters and by colocation and public cloud facilities, often depending on the type of CDN provider or the particular situation.

Local processing and storage

Broadly speaking, the most common edge datacenters today are branch offices and server closets that have been deployed on an ad hoc basis to meet localized compute needs. According to 451 Research's Datacenter Market Size Forecast, there are more than two million server closets and more than one million server rooms worldwide. Server closets typically house between a few and a few dozen servers, while server rooms are dedicated computer rooms, typically within a larger office environment. Some are owned by small and medium-sized enterprises (SMEs, those with fewer than 1,000 employees) that rely on these sites for their business IT needs, while others are owned by large organizations, such as retailers, airports and manufacturers.

The trend is for most SME workloads to migrate to large service provider datacenters and public clouds running at hyperscale sites. We estimate three-quarters or more of these SME server closets could disappear.

Larger organizations, on the other hand, are likely to invest further in edge computing – for latency reasons, and for continuous operation in the event of a loss of connectivity to the central cloud caused by wide area network (WAN) issues. Many operational and industrial IoT applications involve high-sample-rate time-series data that has no immediate value in raw form but instead provides insights through long-term trend analysis. Local summarization of this data, which is then periodically forwarded to a centralized cloud, reduces cloud storage as well as bandwidth fees.
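
As an illustration of how local summarization reduces cloud storage and bandwidth fees, the sketch below buffers raw, high-frequency samples at the edge and periodically forwards only compact statistical summaries. The sensor name, window length and forward_to_cloud() stub are hypothetical, not drawn from any specific product.

```python
# Sketch of edge-side summarization: raw, high-frequency sensor samples are
# reduced to periodic statistical summaries before being forwarded upstream,
# cutting both cloud storage and backhaul bandwidth.
import statistics
import time
from collections import defaultdict

WINDOW_SECONDS = 300                 # hypothetical 5-minute summarization window

buffers = defaultdict(list)          # raw samples held locally, per sensor
window_start = time.time()

def ingest(sensor_id: str, value: float) -> None:
    """Buffer a raw sample at the edge instead of sending it upstream."""
    buffers[sensor_id].append(value)

def forward_to_cloud(summary: dict) -> None:
    # Placeholder: in practice this would batch and POST to a cloud endpoint.
    print(summary)

def summarize_and_forward() -> None:
    """Collapse each sensor's buffer into a compact summary and ship only that."""
    global window_start
    for sensor_id, samples in buffers.items():
        if not samples:
            continue
        forward_to_cloud({
            "sensor": sensor_id,
            "window_start": window_start,
            "count": len(samples),
            "min": min(samples),
            "max": max(samples),
            "mean": statistics.fmean(samples),
        })
        samples.clear()
    window_start = time.time()

# Example: 1Hz raw data becomes one record per sensor per summarization window.
for i in range(10):
    ingest("turbine-7/vibration", 0.01 * i)
summarize_and_forward()
```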

We anticipate that demand at these small localized sites will continue to grow and also become more strategic. They are likely to be upgraded from ad hoc installations into dedicated IT installations, with greater compute capacity as well as higher cooling and power redundancy. They are particularly well suited to self-contained micromodular datacenters – and for this reason can be considered an upgrade opportunity for datacenter technology suppliers.

Ownership of these small edge sites, at least in the short to mid-term, is likely to remain with large enterprises, although management of these distributed sites could be outsourced to service providers. In time, both ownership and management of enterprise microsites may be outsourced, with providers offering an all-in-one opex service-level agreement for branch and distributed computing that includes leased micromodular datacenter assets and their (remote) management. On-site moves and maintenance could also be part of the deal.

Off-cloud processing and storage

Certain applications must be localized because of specific latency requirements coupled with large compute capacity needs, or because of stringent security requirements. Test and development within high-performance computing (HPC) datacenters (above 20kW per rack), for example, can require low latency and significant storage. These environments may have private cloud components (highly virtualized and with self-service options), but the volume of data and the low-latency requirements mean they are unsuitable for shared wide-area connections or shared compute environments.

Another example would be classified military or other government workloads that dictate computing and storage isolated ('air-gapped') from the internet and other wide area networks for security reasons. As with HPC datacenters, these sites are located close to users, with tightly controlled access. Datacenter types for off-cloud processing and storage can vary from multi-megawatt installations to microsites.

Emerging edge datacenter use cases

Certain use cases for edge-of-network compute and storage capacity are not yet well established and can therefore be considered emerging, and thus new, market drivers. We discuss two of these use cases below.

Internet of Things (IoT)

The ongoing development of IoT is expected to significantly increase the volume of data and the speed at which it is produced. Edge computing in IoT refers to the generation of data by an edge device itself, be it a sensor in manufacturing equipment, a telematics component in a fleet vehicle or a smart utility meter. These devices range from the very simple, such as temperature and humidity sensors, to the very complex, with integrated compute power for local analytics and network connectivity at the edge sensor, such as on oil rigs.

IoT gateways aggregate sensor and smart-object data streams and provide a critical networking layer that enables local-area device data to connect directly to broader networks, such as the internet or other WANs. The myriad use cases inherent in IoT mean that there will be both inexpensive sensors and gateways that tunnel traffic back to private or public cloud services, including in colocation datacenters, for integration with other data and for ex post facto analysis, as well as more mission-critical, latency-sensitive applications that require a localized database and analysis.

Edge gateway-based analysis could become more important to reduce the backhaul bandwidth to remote cloud datacenters and the latency inherent in the round-trip time over WANs. Edge analytics and rule-based data filtering are key for latency-sensitive applications, such as autonomous vehicles.
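
The sketch below illustrates the kind of rule-based filtering a gateway might apply: critical readings trigger an immediate local action with no cloud round trip, anomalies are forwarded upstream for analysis, and routine readings are dropped to save backhaul bandwidth. The thresholds and the actuate/forward stubs are illustrative assumptions, not a specific vendor's API.

```python
# Illustrative gateway-side, rule-based filtering for latency-sensitive IoT data.
from dataclasses import dataclass

@dataclass
class Reading:
    device: str
    metric: str
    value: float

CRITICAL_TEMP_C = 95.0    # hypothetical threshold for immediate local action
REPORT_TEMP_C = 80.0      # hypothetical threshold for forwarding to the cloud

def actuate_locally(reading: Reading) -> None:
    print(f"LOCAL ACTION: shutting down {reading.device} at {reading.value} C")

def forward_to_cloud(reading: Reading) -> None:
    print(f"FORWARDED: {reading}")

def handle(reading: Reading) -> None:
    """Apply filtering rules at the gateway, closest to the device."""
    if reading.metric == "temperature_c":
        if reading.value >= CRITICAL_TEMP_C:
            actuate_locally(reading)   # no cloud round trip on the critical path
            forward_to_cloud(reading)  # still report the event for later analysis
        elif reading.value >= REPORT_TEMP_C:
            forward_to_cloud(reading)  # anomalous but not urgent
        # readings below REPORT_TEMP_C are dropped (or summarized locally)

for v in (72.0, 84.5, 97.2):
    handle(Reading("press-3", "temperature_c", v))
```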

The volume of data that IoT is expected to generate is still likely to create two levels of compute – edge and core processing – and will require different types of datacenters, ranging from micromodular datacenters to large colocation and cloud facilities.

Even at the edge, however, IoT datacenters will have various forms, ranging from existing facilities such as colocation and other service provider facilities, particularly in urban areas, to micromodular datacenters that are distributed and connected (and remotely managed).

Next-generation networks and distributed cloud

Network infrastructure has been characterized by slow, incremental change that is restricted by the cost, time and effort needed to deploy hardware. However, recent trends point to transformations in the network enabled by mobile and virtualization technologies, which will support and accelerate new edge-computing approaches. Software-defined networks also support the use of more off-the-shelf hardware and allow for rapid reconfiguration. New 5G base stations, for example, can be introduced in hours and configured in minutes. Edge datacenters may be the same.

Mobile edge computing (MEC), a developing standards effort to replicate cloud computing and application execution at the mobile network edge, is expected to enable ultra-low-latency applications for cellular networks. MEC (soon to be called multi-access edge computing) will provide a networking infrastructure where workloads can be shifted, from edge to near-edge to cloud (in the core layer), depending on latency and bandwidth requirements.
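
A simplified way to think about this workload shifting is as a placement decision: run each workload at the farthest (and typically cheapest and largest) tier that still meets its latency budget and backhaul constraints. The tier latencies and thresholds in the sketch below are illustrative assumptions, not figures from the MEC specifications.

```python
# Simplified sketch of the edge/near-edge/core placement decision implied by
# MEC-style architectures. Tier latencies and thresholds are assumptions.
from typing import NamedTuple

class Tier(NamedTuple):
    name: str
    round_trip_ms: float      # assumed typical round-trip latency to this tier
    backhaul_limited: bool    # whether traffic to it crosses constrained WAN links

TIERS = [  # ordered from farthest/cheapest to closest/most constrained
    Tier("core cloud", 60.0, True),
    Tier("near edge", 15.0, True),
    Tier("edge", 2.0, False),
]

def place(latency_budget_ms: float, heavy_backhaul: bool) -> str:
    """Return the farthest tier that meets the latency and backhaul needs."""
    for tier in TIERS:
        if tier.round_trip_ms <= latency_budget_ms and not (heavy_backhaul and tier.backhaul_limited):
            return tier.name
    return "edge"  # fall back to the closest tier

print(place(100.0, heavy_backhaul=False))  # -> core cloud (e.g. batch analytics)
print(place(20.0, heavy_backhaul=False))   # -> near edge (interactive apps)
print(place(5.0, heavy_backhaul=True))     # -> edge (ultra-low-latency, data-heavy)
```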

Some vendors are calling this federated decentralized approach 'distributed cloud.' (The concept of 'network slicing,' a combination of technologies including network function virtualization and software-defined networking, will also be a key enabler for distributed cloud.) While distributed cloud does not necessarily require next-generation network infrastructure, it is expected to be a significant enabler and driver. For the sake of simplicity, we have grouped the two together.

Next-generation networks and distributed cloud will require micromodular datacenters, including those colocated with cell towers. The form factors are likely to evolve over time, becoming ever smaller and more embedded in radio access network infrastructure, driven by the increased integration of processing and storage capacity into physical cell towers.

A broader, in-depth discussion will soon be published in the Technology & Business Insight (TBI) report, 'Datacenters at the Edge.' For more information about the drivers and barriers for micromodular datacenters, see the recent TBI report, 'Global Prefabricated Modular Datacenter Forecast 2016-2020: Steady as She Goes.'

Rhonda Ascierto
Research Director, Datacenter Technologies & Eco-Efficient IT

Rhonda Ascierto is Research Director for the Datacenter Technologies and Eco-Efficient IT Channel at 451 Research. She has spent more than 15 years at the crossroads of IT and business as an analyst, speaker, adviser and editor covering the technology and competitive forces that shape the global IT industry. Rhonda’s focus is on innovation and disruptive technologies in datacenters and critical infrastructure, including those that enable the efficient use of all resources.

Daniel Bizo
Senior Analyst, Datacenter Technologies

Daniel Bizo is a Senior Analyst for the Datacenter Technologies Channel at 451 Research. His research focuses on advanced datacenter design, build and operations, such as prefabricated modular datacenters, highly efficient cooling, and integrated facilities and IT management to achieve superior economics. Daniel is also a regular contributor to 451 Research's silicon and systems technology research in the Systems and Software Infrastructure Channel.

Andrew Donoghue
Principal Analyst

