The technology is not new – purveyors including Megaport and PacketFabric have been around for years (founded in 2013 and 2015, respectively) – but what is new is the number of datacenter providers that are now building services around platforms such as these or building their own platforms that emulate the functionality. At first we spoke about it as an extension of interconnection, but it's an evolution. SPI applies the technologies behind software-defined networking and network virtualization – the ability to alter the network on demand – to the interconnection ecosystems that many datacenter providers have built or are planning. The endgame is a fabric that connects many datacenters and makes them feel, to the end user, like one giant datacenter. It's not hard to envision SPI eventually giving enterprises on-demand access to every major public cloud.
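To make "alter the network on demand" concrete, the sketch below shows what ordering a virtual circuit through such a fabric could look like in Python. The endpoint, payload fields and token handling are invented for illustration; this is not any particular provider's API.

```python
# Hypothetical sketch: ordering an on-demand virtual circuit between two
# datacenter ports through a fabric API. The URL, payload fields and token
# are illustrative, not any real provider's interface.
import os
import requests

FABRIC_API = "https://api.example-fabric.net/v1"            # hypothetical endpoint
TOKEN = os.environ.get("FABRIC_API_TOKEN", "demo-token")    # provider-issued token

def create_virtual_circuit(a_end_port: str, z_end_port: str, mbps: int) -> str:
    """Request a point-to-point circuit between two fabric ports."""
    resp = requests.post(
        f"{FABRIC_API}/circuits",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "aEnd": a_end_port,     # physical port in datacenter A
            "zEnd": z_end_port,     # physical port in datacenter B
            "bandwidthMbps": mbps,  # resized in software, no new cabling
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["circuitId"]

# Two facilities start behaving like one: a circuit in minutes, not weeks.
print(create_virtual_circuit("port-nyc-07", "port-lon-12", mbps=500))
```

The point is the shape of the interaction: a new circuit or a bandwidth change becomes an API call rather than a cabling job.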
The 451 Take
It might seem early to call SPI a trend, but software programmability, whether in the form of SDN or network overlays, is the prevailing direction for all of networking. Inside the datacenter, we expect a future where network changes are realized through software and controlled by customers or even by applications without human intervention. It's only logical that the networking between datacenters should become similarly virtualized. SPI should become ubiquitous eventually, at least among the biggest players, but in the meantime, it will offer a competitive advantage to those who have it.
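As a toy illustration of changes made by applications without human intervention, the following sketch has an application resize its own circuit when utilization climbs. The telemetry and fabric calls are stubs standing in for whatever a real provider would expose.

```python
# Toy sketch of application-driven interconnection: an app watches its own
# link utilization and resizes the circuit itself. Both functions below are
# stubs; a real agent would read telemetry and call a provider API.
import random
import time

def link_utilization() -> float:
    """Stand-in for real telemetry (e.g., SNMP counters or flow data)."""
    return random.uniform(0.2, 1.0)

def resize_circuit(circuit_id: str, mbps: int) -> None:
    """Stand-in for a fabric API call that changes bandwidth in software."""
    print(f"circuit {circuit_id} resized to {mbps} Mbps")

bandwidth = 500
for _ in range(5):                   # a real agent would loop indefinitely
    if link_utilization() > 0.8:     # congested: scale up, no ticket filed
        bandwidth *= 2
        resize_circuit("vc-1234", bandwidth)
    time.sleep(1)
```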
There's a nuance here that warrants pointing out. Vendors and providers alike are today referring to this technology as SDN, but we'd suggest otherwise. Pure SDN is a system for making programmatic changes to a network to improve its performance, and it implies a central control mechanism that makes those changes, potentially on the fly. SPI is both a bit simpler than that and 'other than.' SPI may include networking and bandwidth, as SDN does; however, the focus is not necessarily on improving performance but rather on interconnecting two or more points. Beyond just networking, SPI includes connections between datacenters, businesses, clouds and who knows what future services. In that sense, it is also 'more than.'
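The distinction is easier to see side by side. In the self-contained sketch below, both "APIs" are invented stubs: the SDN controller programs how traffic flows through an existing network, while the SPI fabric establishes that two endpoints are connected at all.

```python
# Illustrative contrast between SDN and SPI. Both classes are invented
# stubs that only record intent; no real controller or fabric is involved.

class SdnController:
    """SDN: centrally programs how traffic moves through a network."""
    def push_flow_rule(self, switch: str, match: dict, action: str) -> None:
        print(f"[SDN] {switch}: if {match} then {action}")

class InterconnectFabric:
    """SPI: establishes that two endpoints are connected at all."""
    def connect(self, a_end: str, z_end: str, mbps: int) -> None:
        print(f"[SPI] circuit {a_end} <-> {z_end} at {mbps} Mbps")

SdnController().push_flow_rule(
    switch="leaf-3",
    match={"dst": "10.0.8.0/24"},
    action="forward:port-7",        # tune performance on an existing network
)
InterconnectFabric().connect(
    a_end="my-cage-dc4",
    z_end="cloud-onramp-us-east",   # a cloud, an IX, or another business
    mbps=1000,
)
```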
To date, we've seen a number of iterations on this idea. Epsilon, Megaport and PacketFabric have all implemented network virtualization or outright SDN. An early iteration of SPI, then, involves using these companies' services to give customers a point-and-click ability to create connections between datacenters or across regions. This tier of service gives customers access to all the major cloud providers in the world, a host of other potentially useful service providers and a good number of internet exchanges, which in turn provide access to network providers all over the globe. Connecting to services becomes seamless, even if the datacenter in question is in another metro, another state or (in some cases) another country.
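In API terms, this tier amounts to browsing a catalog of destinations and connecting to them from a single port. The catalog entries and connect() helper below are hypothetical, meant only to show the one-port-to-many-destinations shape of the service.

```python
# Sketch of the "point-and-click" tier as an API: browse a destination
# catalog, then connect to entries in other metros. The catalog contents
# and connect() call are invented for illustration.
CATALOG = [
    {"id": "cloud-onramp-lon", "type": "cloud", "metro": "London"},
    {"id": "ix-frankfurt",     "type": "ix",    "metro": "Frankfurt"},
    {"id": "carrier-sg-03",    "type": "telco", "metro": "Singapore"},
]

def connect(my_port: str, destination_id: str, mbps: int) -> None:
    """Stand-in for the provider call that builds the virtual connection."""
    print(f"{my_port} -> {destination_id} at {mbps} Mbps")

# From a single port in New York, reach services in three other countries:
for dest in CATALOG:
    connect("port-nyc-07", dest["id"], mbps=200)
```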
Another iteration of SPI involves datacenter operators building platforms themselves – Equinix being the canonical example due to its scale. The provider has an interesting mix of customers within its own datacenters and has built a marketplace for those customers to do business with one another. Taking this to the next level, Equinix has integrated cloud provider on-ramps (which may or may not be customers in the datacenter) and access to the various network providers in its carrier hotel facilities.
The main difference we see between this option and the former is that the foundation of Equinix's fabric, and others like it (providers including Colt, CoreSite, Cyxtera and Flexential have built or are building similar fabrics), is the nest of interconnections within the same building or perhaps the same metro. Whereas the Megaports of the world have done a good job assembling resources from all over the world, platforms like Equinix's started at a very local level and have now expanded globally. Equinix, specifically, is putting the finishing touches on its ECX Fabric, which will connect nearly all its datacenters globally and give Equinix customers a way to directly connect to one another. For workloads that require the absolute lowest of latencies, this could have big implications for performance. It's worth noting, however, that other workloads and situations may be perfectly satisfied by services offered through the likes of Megaport and PacketFabric. It depends on the situation and the application.
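Some rough arithmetic shows why proximity matters for those latency-sensitive workloads. Light in fiber covers roughly 200,000km per second (about 5 microseconds per kilometer, one way), so propagation delay alone separates in-building interconnection from long-haul circuits by several orders of magnitude. The distances below are illustrative, not measured paths.

```python
# Back-of-the-envelope propagation delay: light in fiber travels at roughly
# 200,000 km/s. This counts fiber distance only, ignoring equipment hops.
SPEED_IN_FIBER_KM_PER_S = 200_000

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds, fiber only."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

for label, km in [("same building", 0.2), ("same metro", 50),
                  ("cross-country", 4_000), ("transatlantic", 5_500)]:
    print(f"{label:>14}: ~{rtt_ms(km):.3f} ms RTT")
```

A same-building cross-connect sits in the microsecond range, while a transatlantic circuit carries tens of milliseconds of unavoidable delay, which is exactly why many workloads can live happily on a remote fabric while a few cannot.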
The last iteration we've seen is simply a combination of the two. Here, providers build upon what Megaport, PacketFabric or some other provider has done, and then add an overlay interface so that customers can also interconnect with other companies in the same building. To be frank, none of the 'build it yourself' options are particularly easy, which is perhaps why we haven't seen a broad implementation of any of this just yet.
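Architecturally, this combined model is a thin routing layer: same-building requests go to the operator's own overlay, and everything else is handed off to the wholesale fabric. The sketch below uses stubs for both backends to show the shape of that decision.

```python
# Sketch of the combined model: one front end that sends same-building
# requests to the operator's own overlay and everything else to a wholesale
# fabric (a Megaport/PacketFabric-style service). Both backends are stubs.
def connect_local_overlay(a: str, b: str) -> str:
    return f"overlay cross-connect {a} <-> {b} (same building)"

def connect_wholesale_fabric(a: str, b: str) -> str:
    return f"fabric circuit {a} <-> {b} (remote)"

def connect(a_end: dict, z_end: dict) -> str:
    """Route the request to whichever platform can satisfy it."""
    if a_end["site"] == z_end["site"]:
        return connect_local_overlay(a_end["port"], z_end["port"])
    return connect_wholesale_fabric(a_end["port"], z_end["port"])

print(connect({"site": "DC4", "port": "p1"}, {"site": "DC4", "port": "p9"}))
print(connect({"site": "DC4", "port": "p1"}, {"site": "LON1", "port": "p2"}))
```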
Where All This Is Going
At the heart of SPI is the general trend of the network becoming more fluid. We see this in SDN, in telcos' attempts at deploying network functions virtualization (NFV) and in the promises of 5G network slicing. So while SPI is a promising direction for the datacenter world, it's also arriving relatively late, considering how long SDN has been on the minds of other tech sectors.
Dan provides insight into the Multi-Tenant Datacenter (MTDC) market space. Dan is particularly focused on MTDCs that are trying to move up the stack to offer additional services beyond colocation and connectivity. These services may include disaster recovery, security, various forms of cloud and other managed services. He also assists the 451 Research Information Security group when their interests overlap.
Craig focuses on the confluence of CDNs, interconnect fabrics and cloud access. Craig has covered service-provider and enterprise networking since the dot-com bubble of 1999, including more than 10 years at Light Reading, where he covered broad topics including optical networking, routing and the then-new beat of software-defined networking. He also spent four years at SDxCentral, delving further into SDN, NFV and container technologies.