The continued buildout and optimization of digital infrastructures to deliver workloads that are growing in number and sheer size (both compute requirements and data volume) is driving a renaissance of distributed IT capacity, 451 Research believes, potentially outweighing any consolidation of networked closets and server rooms. We are seeing multiple application types that show an affinity for an edge presence outside of core datacenter sites. This affinity is based on their data requirements, defined as a combination of latency, volume, and availability and reliability needs. To help give structure to the growing complexity of workload placement, we have devised a framework that evaluates, in broad strokes, the types of workloads that may benefit from an edge presence.

This is not to say that data is the sole determinant of where to run an application. There are multiple other factors that organizations will weigh when making such decisions, including compliance, security and manageability, and the cost of any changes to the software architecture (if needed). Our framework is designed to help businesses identify candidate workloads, ranging from Internet of Things (IoT) gateways to branch backup and recovery, virtual reality (VR) consumer applications and Industry 4.0.

The 451 Take

The reemergence of the edge is not inconsistent with the growing concentration of IT capacity in central sites and at the hands of IT service providers. On the contrary, it is a direct consequence of it: operators will maintain and even increase edge capacity closer to users and connected machines as data volumes keep growing and as the cost of subpar response times (let alone downtime) escalates. If there is a single underlying thread behind the anticipated wave of new edge capacity, it is data: speed of access, availability and protection. Workload requirements across these three vectors can vary considerably. While the datacenter a workload is best suited for will often be determined by location (where the facility is sited) as it relates to data requirements, other factors will come into play, including specific business needs.

The edge affinity framework


Much of the IT industry shares a strong consensus that most workloads are migrating to large datacenters and are ultimately headed to public clouds running out of hyperscale sites. This view is somewhat supported by sales dynamics at large IT and datacenter equipment vendors, which are reporting stagnant or eroding enterprise sales, offset to some degree by steadily growing sales to commercial datacenter service providers.

However, 451 Research believes most of the smaller datacenter sites will remain, albeit transformed. Even with the rapid increase in (wide area) network access bandwidth, realized data speeds remain insufficient to move large data sets around on demand. Network connections also generally lack the reliability and availability needed and, in most locations, cannot be depended on for critical applications. This means that demand for locally running applications and data stores remains, even in the face of the centralization of core systems. We anticipate public and private cloud-based digital services will be no different and, contrary to some views, will ultimately generate demand for even more edge capacity to optimize the delivery of cloud-originated services (including CDNs) as well as to collect data from users and the myriad connected machines.

451 Research views the edge as defined and driven by data requirements, whether speed of access, availability and reliability of access, rate of generation or security (or any combination thereof). The key technology component is the WAN, which largely defines these characteristics, far more so than relatively inexpensive compute or storage capacity. Scaling bandwidth or adding redundant WAN paths is typically much more difficult, slower and ultimately more expensive than upgrading computing and storage speeds and feeds. Improvements in latency, meanwhile, are limited by the laws of physics.

Through this lens, we have classified a select number of workload types by three criteria:

  • Latency tolerance: Applications differ greatly in how sensitive their performance is to speed of access to data.
  • Criticality: The availability and reliability requirements the application must meet to satisfy business objectives.
  • Data volume per site: Data that is aggregated, generated, processed or stored at a location.
Ours is not a vetted, scientific approach. Instead, our objective was to visualize in broad strokes different workloads by their affinity for an edge presence. We preselected 16 workloads that we felt were strong candidates for local presence, including those already considered established edge use cases as well as those expected to become so as part of emerging distributed IT infrastructures. The workloads that score very high in either latency or criticality requirements (that is, they have no tolerance for losing access to the data or application from a site) can be thought of as having hard (technical) requirements for an edge presence, while those that score lower on these measures but generate high volumes of data specific to a location (e.g., rich sensor data) have a soft (economic) preference for edge capacity.

We used scores from 1 to 6 that represent a logarithmic scale, as the steps between scores are typically an order of magnitude. For example, a score of 6 for latency requirements means sub-1ms latency, while a score of 5 maps to 1-10ms, and so on. For criticality, we used the amount of downtime in data access to or from the site that is still acceptable to the business; in a similar fashion, we scored 6 where no downtime is tolerable, 5 where a few seconds is acceptable, and 4 where a few hours is acceptable. The volume of data originating from, heading to or transiting through the site defines the size of the bubble, again using a factor-of-10 step change between categories.

Figure 1: Scoring values for an edge affinity

Source: 451 Research
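
To make the scoring mechanics concrete, the sketch below shows one way the mappings could be encoded. It is illustrative only: the band thresholds beyond the examples given above, the cut-off scores and the function names are our own assumptions, not part of 451 Research's published framework.

```python
import math

def latency_score(latency_ms: float) -> int:
    """Map a latency requirement (in milliseconds) to a 1-6 score.
    A score of 6 corresponds to sub-1ms and 5 to 1-10ms, as in the text;
    each further step relaxes the requirement by roughly a factor of 10."""
    if latency_ms < 1:
        return 6
    return max(1, 5 - int(math.log10(latency_ms)))

def criticality_score(tolerable_downtime_s: float) -> int:
    """Map tolerable downtime in data access to/from a site (in seconds)
    to a 1-6 score. Zero tolerance scores 6, a few seconds 5 and a few
    hours 4, as in the text; the lower bands are assumed for illustration."""
    if tolerable_downtime_s == 0:
        return 6
    if tolerable_downtime_s <= 10:               # a few seconds
        return 5
    if tolerable_downtime_s <= 4 * 3600:         # a few hours
        return 4
    if tolerable_downtime_s <= 24 * 3600:        # up to a day (assumed band)
        return 3
    if tolerable_downtime_s <= 7 * 24 * 3600:    # up to a week (assumed band)
        return 2
    return 1

def edge_affinity(latency: int, criticality: int, volume: int) -> str:
    """Rough classification following the framework's logic: very high
    latency or criticality scores imply a hard (technical) edge requirement,
    while lower scores combined with a large per-site data volume imply a
    soft (economic) preference. The cut-offs of 5 and 4 are assumptions."""
    if latency >= 5 or criticality >= 5:
        return "hard edge requirement"
    if volume >= 4:
        return "soft edge preference"
    return "weak edge affinity"

# Example: a workload needing ~5ms latency, tolerating ~10s of downtime,
# with a mid-sized data volume score of 3 (values are hypothetical).
print(edge_affinity(latency_score(5), criticality_score(10), 3))
```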


Putting it all together

Using this scoring system, we mapped out select edge workloads, weighting criticality and latency requirements and taking into account the relative volume of data per datacenter site (as illustrated by the size of the bubble in Figure 2).

Figure 2: Select workloads by their affinity for the edge

Source: 451 Research

How to read this chart:

  • Size of bubble: Volume of data per edge site
  • Latency sensitivity: 1 = low sensitivity; 6 = high sensitivity
  • Criticality: 1 = low criticality; 6 = high criticality
The upper and right sides of the graph represent workloads that require an edge presence. Workloads that are closer to the bottom left can still be strong edge use cases for economic reasons – lifting and shifting the data, for example, may be much more expensive due to bandwidth costs than having additional local storage and processing capacity in a self-contained cabinet.
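
As a companion illustration, the minimal sketch below shows how a bubble chart of this kind could be reproduced with matplotlib. The three workloads and their scores are placeholder values chosen for demonstration, not the figures 451 Research used in its analysis.

```python
# Minimal sketch of a Figure 2-style bubble chart (illustrative only).
# Workload names and scores below are hypothetical placeholders, not the
# values used in the 451 Research analysis.
import matplotlib.pyplot as plt

workloads = {
    # name: (latency score, criticality score, data volume per site in TB)
    "Workload A": (6, 5, 50),
    "Workload B": (3, 6, 5),
    "Workload C": (2, 2, 500),
}

fig, ax = plt.subplots()
for name, (lat, crit, vol_tb) in workloads.items():
    ax.scatter(lat, crit, s=vol_tb, alpha=0.5)  # bubble area tracks data volume
    ax.annotate(name, (lat, crit))

ax.set_xlabel("Latency sensitivity score (1 = low, 6 = high)")
ax.set_ylabel("Criticality score (1 = low, 6 = high)")
ax.set_xlim(0.5, 6.5)
ax.set_ylim(0.5, 6.5)
ax.set_title("Select workloads by their affinity for the edge (illustrative)")
plt.show()
```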

Broad descriptions of the workload samples that we preselected for this analysis are as follows:

  • Carrier NFV: Network functions virtualization is a major ongoing transformation away from fixed-function hardware toward software appliances running on industry-standard compute and storage equipment. As traffic volumes grow, these functions will require substantial IT resources at local central offices directly serving subscribers.
  • 5G cell processing: Emerging 5G traffic is expected to claim a much larger share of IP traffic compared with existing LTE networks. In addition to a massive increase in wireless bandwidth, 5G will require an all-IP, standard IT-based architecture that can also act as an active accelerator for CDNs by performing functions such as data caching and real-time transcoding of content to optimize network utilization.
  • IoT gateways: Most connected machines and sensors will need nearby data hubs to which they can send their data. Gateways (physical or virtual) will either live outside a datacenter-like environment or be integrated into an edge stack. The volume of real-time data flowing into IoT gateways will be massive, but edge analytics will limit the amount of on-site processing required.
  • IoT data aggregation: Gateways will talk to aggregation sites for data preprocessing and, in some cases, real-time analytics such as error detection or security monitoring, as well as optimization for transmission upstream toward core sites. Most installations will likely come in the form of ruggedized IT systems and won't require a conditioned environment, with only larger, data-rich sites having a micro-modular datacenter unit or server room.
  • Industry 4.0: Industry 4.0 is the jargon for next-generation manufacturing optimization based on rich data captured from myriad tools, labor, cameras and environmental sensors – as well as the control of such assets. Machine vision and machine learning will be key components.
  • Engineering AR: 451 Research believes augmented reality will be a new frontier for research and development, with engineers using it to project highly detailed models of real-world environments for visualization. This will require an immense amount of bandwidth and compute at low latency to handle interaction with human operators without delay.
  • Consumer VR/AR: Games, movies, theme parks and other entertainment services will make use of VR/AR technology that will require local IT capacity to deliver.
  • CDNs: These are key to the functioning of the modern internet, especially given the escalating demand for video and other bandwidth-hungry content.
  • Cloud gaming: To perform gaming calculations in a cloud infrastructure and stream graphics to users, ultralow latency between the two will be required.
  • Cloudlets: This is a term used to describe cloud providers' distributed edge capacity for data caching or low-latency compute close to users or things.
  • Imaging (e.g., medical, scientific): High-resolution imaging will increasingly be used in various areas, including medical screenings and diagnostics, generating large data sets that by law must be archived for decades. Even with a cloud-archive strategy, clinics will need local IT capacity to capture imaging data sets for analysis and storage management.
  • Remote data processing: Branch and departmental offices, retail, factories, remote industrial sites, etc., will retain local processing to support local people, devices and machinery even when the network is down. Security and compliance could be key requirements.
  • HPC: High-performance computing is the misfit of the datacenter industry, deviating from the norm in multiple aspects. Extreme data volumes and high-density requirements mean local capacity, which for many organizations remains the lowest-cost option in all but the largest megawatt-scale HPC installations, where power availability and cost are key.
  • Branch backup and recovery: Moving large amounts of noncritical but valuable data over a WAN is often cost-prohibitive, so local capacity is needed to perform data protection tasks, including backup management and data replication.
  • Hosted desktop: Customer service centers, shared service centers, financial institutions and government agencies are all large users of hosted desktop infrastructure to support higher availability and security requirements while lowering support costs. There are various implementation strategies that will differ in their network and compute requirements, but larger offices will likely justify local presence of the supporting server infrastructure.
  • CCTV and analytics: With the expected increase in the use of closed-circuit television coupled with high-definition cameras and advanced analytics, more processing capacity will be required. Cameras will get smarter (integrating more analytics and preprocessing functions) and will have more advanced features such as biometric and object identification, cross-comparison of multiple feeds and coordination of cameras – all of which will require additional IT capacity.
As discussed in a previous report, edge computing is evolving rapidly and new use cases will likely emerge. For a more in-depth analysis of edge datacenters, please see our recently published Technology & Business Insight report, 'Datacenters at the Edge.'

Rhonda Ascierto
Research Director, Datacenter Technologies & Eco-Efficient IT

Rhonda Ascierto is Research Director for the Datacenter Technologies and Eco-Efficient IT Channel at 451 Research. She has spent more than 15 years at the crossroads of IT and business as an analyst, speaker, adviser and editor covering the technology and competitive forces that shape the global IT industry. Rhonda’s focus is on innovation and disruptive technologies in datacenters and critical infrastructure, including those that enable the efficient use of all resources.

Daniel Bizo
Senior Analyst, Datacenter Technologies

Daniel Bizo is a Senior Analyst for the Datacenter Technologies Channel at 451 Research. His research focuses on advanced datacenter design, build and operations, such as prefabricated modular datacenters, highly efficient cooling, and integrated facilities and IT management to achieve superior economics. Daniel is also a regular contributor to 451 Research's silicon and systems technology research in the Systems and Software Infrastructure Channel.

Keith Dawson
Principal Analyst

Keith Dawson is a principal analyst in 451 Research's Customer Experience & Commerce practice, primarily covering marketing technology. Keith has been covering the intersection of communications and enterprise software for 25 years, mainly looking at how to influence and optimize the customer experience.
