It's probably no surprise that analysts like to have buckets to put technologies, companies and markets into. It's a quick way to group things together in order to talk about them in a big-picture way. Given that, it's not surprising that sometimes we need a new bucket. Enter Software Programmable Interconnection (SPI).

The technology is not new – purveyors including Megaport and PacketFabric have been around for years (founded in 2013 and 2015, respectively) – but what is new is the number of datacenter providers building services on top of platforms such as these, or creating their own platforms with equivalent functionality. At first we spoke about it as an extension of interconnection, but it is better described as an evolution. SPI applies the technologies behind software-defined networking and network virtualization – the ability to alter the network on demand – to the interconnection ecosystems that many datacenter providers have built or are planning. The endgame is a fabric that connects many datacenters and makes them feel, to the end user, like one giant datacenter. It's not hard to envision SPI eventually giving enterprises on-demand access to every major public cloud.

The 451 Take

It might seem early to call SPI a trend, but software programmability, whether in the form of SDN or network overlays, is the prevailing direction for all of networking. Inside the datacenter, we expect a future where network changes are realized through software and controlled by customers or even by applications without human intervention. It's only logical that the networking between datacenters should become similarly virtualized. SPI should become ubiquitous eventually, at least among the biggest players, but in the meantime, it will offer a competitive advantage to those who have it.

There's a nuance here that warrants pointing out. Vendors and providers alike are today referring to this technology as SDN, but we'd suggest otherwise. Pure SDN is a system for making programmatic changes to a network in order to improve its performance, and it implies a central control mechanism that applies those changes, potentially on the fly. SPI is both a bit simpler than that and 'other than.' SPI may include networking and bandwidth, as SDN does; however, the focus is not necessarily on improving performance, but rather on interconnecting two or more points. And beyond networking, SPI includes connections between datacenters, businesses, clouds and future services yet to be defined. To that end, it is also 'more than.'

In the same way that cloud differs from hosted infrastructure, so SPI differs from interconnection. SPI is all about changing the end-user experience through automation. Connectivity that may have traditionally taken weeks (or longer) to establish can now be accomplished nearly instantly. This means less friction for customers and greater scalability for the datacenter operators.

To date, we've seen a number of iterations on this idea. Epsilon, Megaport and PacketFabric have all implemented network virtualization or outright SDN. An early iteration of SPI, then, involves using these companies' services to give customers a point-and-click ability to create connections between datacenters or across regions. This tier of service gives customers access to all the major cloud providers in the world, a host of other potentially useful service providers and a good number of internet exchanges, which in turn provide access to network providers all over the globe. Connecting to services becomes seamless, even if the datacenter in question is in another metro, another state or (in some cases) another country.
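To make the 'point-and-click' idea concrete, the same provisioning is typically also exposed programmatically. The sketch below is purely illustrative – the `VirtualCrossConnect` type, field names and endpoint path are our own inventions, not any provider's actual API – but it shows the shape of the interaction: a connection between two ports, at a chosen bandwidth, ordered in software rather than through a weeks-long manual process.

```python
from dataclasses import dataclass, asdict

# Hypothetical request model for a software-programmable interconnect.
# Real platforms (Megaport, PacketFabric, etc.) define their own schemas;
# this is only a sketch of the general shape of such an order.
@dataclass
class VirtualCrossConnect:
    a_end_port: str       # customer's port in datacenter A
    b_end_port: str       # target: another datacenter, cloud on-ramp or IX
    bandwidth_mbps: int   # provisioned rate, adjustable on demand
    term_months: int = 1  # short terms are part of the SPI value proposition

def build_order(vxc: VirtualCrossConnect) -> dict:
    """Serialize a connection request as it might be POSTed to a
    provider's provisioning API (endpoint path is invented)."""
    return {"endpoint": "/v1/connections", "method": "POST", "body": asdict(vxc)}

# Order a 500 Mbps virtual circuit from a colo port to a cloud on-ramp.
order = build_order(VirtualCrossConnect("dc1-port-42", "aws-onramp-us-east", 500))
```

The point of the exercise is the turnaround time: because the request is a software object rather than a cross-connect work order, the circuit can be turned up, resized or torn down in minutes.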

Another iteration of SPI involves datacenter operators building platforms themselves – Equinix being the canonical example due to its scale. The provider has an interesting mix of customers within its own datacenters and has built a marketplace for those customers to do business with one another. Taking this to the next level, Equinix has integrated cloud provider on-ramps (which may or may not be customers in the datacenter) and access to the various network providers in its carrier hotel facilities.

The main difference we see between this option and the former is the foundation: Equinix's fabric, and others like it (providers including Colt, CoreSite, Cyxtera and Flexential have built or are building similar fabrics), is rooted in the nest of interconnections within the same building or perhaps the same metro. Whereas the Megaports of the world have done a good job assembling resources from many markets, platforms like Equinix's started at a very local level and have since expanded globally. Equinix, specifically, is putting the finishing touches on its ECX Fabric, which will connect nearly all its datacenters globally and give Equinix customers a way to directly connect to one another. For workloads that require the absolute lowest of latencies, this could have big implications for performance. It's worth noting, however, that other workloads and situations may be perfectly satisfied by services offered through the likes of Megaport and PacketFabric. It depends on the situation and the application.

The last iteration we've seen is simply a combination of the two. Here, providers build upon what Megaport, PacketFabric or some other provider has done, and then add an overlay interface so that customers can also interconnect with other companies in the same building. To be frank, none of the 'build it yourself' options are particularly easy, which is perhaps why we haven't seen a broad implementation of any of this just yet.

Where All This Is Going

At the heart of SPI is the general trend of the network becoming more fluid. We see this in SDN, in telcos' attempts at deploying network functions virtualization (NFV) and in the promises of 5G network slicing. So while SPI is a promising direction for the datacenter world, it's also arriving relatively late, considering how long SDN has been on the minds of other tech sectors.

The reason it is arriving now is a shift in the mission of the datacenter operator. Whereas colocation was traditionally Job No. 1, that role is now being encroached upon by interconnection – largely due to the prospect of tapping vast new sectors of the enterprise market. As enterprises get more addicted to the cloud, a neutral datacenter becomes a hub for connecting to a variety of clouds, carriers and services – not to mention a place to host a private cloud, both for older enterprises that are decommissioning their own datacenters and for newer ones that never want to run one. In these scenarios, SPI acts as both chicken and egg: laying down automated fabric connectivity will enable the kinds of enterprise interconnection that datacenter operators would like to provide; conversely, the prospect of gaining this kind of business is prodding those operators to build programmable fabrics.

Another way to think of this is that the colocation business, like so many others, is being taken over by software. Now, that's not true in a literal sense – obviously cages, racks, servers, power supplies and physical storage all must be in place, not to mention the networking hardware and the cabling that the network runs over. But as network operations become software-driven, automated and programmable, it seems evident that the proposition behind colocation should change. That stationary rack in a corner of a big building can talk to the world more easily now, and it stands to reason that the datacenter operator should be able to find opportunity there.

We expect SPI to become nearly ubiquitous among the major datacenter operators. Major fabrics are completed or nearing completion already. Those operators that aren't able to build a fabric themselves, or that would like to supplement their own efforts in the meantime, can turn to Epsilon, Megaport or PacketFabric as partners. If enterprises are to buy into the promises of interconnection – using a carrier-neutral datacenter as their Grand Central Station for cloud and service-provider connectivity – they need to be given easy and flexible networking options. We like the term SPI because it can encompass both SDN and simpler network overlays; regardless of the mechanism, what matters is that the customer is able to light up connections with an ease that wasn't possible before.
Dan Thompson
Research Director, Datacenter Services & Infrastructure

Dan provides insight into the Multi-Tenant Datacenter (MTDC) market space. Dan is particularly focused on MTDCs that are trying to move up the stack to offer additional services beyond colocation and connectivity. These services may include disaster recovery, security, various forms of cloud and other managed services. He also assists the 451 Research Information Security group when their interests overlap.

Craig Matsumoto
Senior Analyst, Datacenter Networking

Craig focuses on the confluence of CDNs, interconnect fabrics and cloud access. Craig has covered service-provider and enterprise networking since the dot-com bubble of 1999, including more than 10 years at Light Reading, where he covered broad topics including optical networking, routing and the then-new beat of software-defined networking. He also spent four years at SDxCentral, delving further into SDN, NFV and container technologies.
