Software Definition

Along with the ever-evolving buzzwords in the technical sector, over the past few years the term “software-defined network” (SDN) has become widespread.

Wikipedia defines the term as follows[1]:

Software-defined networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of lower-level functionality.

At a high level this is a good description. However, if you walk around the many trade shows where exhibitors are using the term in their marketing, and pick up their literature, you will quickly discover that the commonality in meaning is skin-deep when it comes to technical implementation.

Some of the early SDN implementations I saw were essentially widely interfaced network management hubs, consolidating APIs into a single management tool. This allowed coordinated configuration of the existing appliances, very much in a Gen1 model, with the SDN tool itself being essentially the only software in the system. To me this looks like just a consolidated engine for an operations support system (OSS).

Perhaps in reaction to this, or perhaps simply because it was a bigger job to do, it was a little while before I saw SDN moving beyond configuration to include orchestration of “other” software activations within the networks. For me this is much more important than the earlier simple OSS tool. After all, most operators ran only a limited number of network operations centers (NOCs), so the cost of deploying an aggregated OSS control interface for their network was fairly constrained. Simply moving the vast array of control interfaces traditionally managed in a single NOC into a web browser, or at least a single virtual machine, was only solving a small problem.

A much bigger issue was ensuring that the network itself was being deployed in the most optimal way.

The approach to deploying functions such as proxy servers/security gateways - which require compute intelligence to operate - then moved to a Gen2 model (delivered as virtual machines). This meant that network operators could deploy services locally to their clients, using the NOC (or SDN controller) to configure the network and adding computers in many places to run the virtual machines. In doing so, they could deploy the proxy server/gateway application machine to a suitable point on the network for that client's traffic, and even scale up to multiple instances of the machine to cope with locales and volumes of use.

While the machines were still highly specific, verging on custom hardware, from the software's perspective the idea was to make all the machines look like a commodity off-the-shelf (COTS) computer.

Some of the infrastructure management tools of the mid-2000s had evolved, and in particular OpenStack was emerging as a free and open source favorite of many recognized cloud and network operators. OpenStack (much like VMware, Eucalyptus, or others) enables operators to deploy Infrastructure as a Service: through a web UI, a network of resources can be added, and virtual machines can be deployed onto that infrastructure.
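To make that concrete, here is a minimal sketch of the Gen2 idea using the openstacksdk Python library to boot a proxy-style virtual machine through an OpenStack endpoint. The cloud name, image, flavor, and network names are all my own placeholder assumptions, standing in for whatever a given operator has configured; scaling out for a busy locale is simply a matter of repeating the call.

```python
# A minimal sketch, assuming an OpenStack cloud named "operator-cloud"
# is defined in clouds.yaml; image/flavor/network names are placeholders.
import openstack

conn = openstack.connect(cloud="operator-cloud")

# Boot one proxy/gateway instance onto the pooled COTS infrastructure.
server = conn.create_server(
    name="client-a-proxy-01",
    image="ubuntu-22.04",      # assumed image name
    flavor="m1.small",         # assumed flavor name
    network="client-a-net",    # assumed tenant network
    wait=True,                 # block until the VM is ACTIVE
)
print(server.id, server.status)
```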

Within the OpenStack ecosystem sits OpenFlow. OpenFlow is recognized as a protocol standard for SDN. What this means is that, through controller software such as OpenDaylight, operators can reprogram layer-3 switches in software to secure a specific virtual network path around or through the wider Internet/IP network.
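As an illustration of what reprogramming a switch in software looks like in practice, the sketch below pushes a single flow rule through OpenDaylight's RESTCONF API, pinning traffic for a client's prefix to a chosen output port. The exact paths and payload shapes vary between OpenDaylight releases, and the controller address, switch identifier, prefix, and port number here are all assumptions for illustration.

```python
# A hedged sketch of programming one OpenFlow rule via OpenDaylight
# RESTCONF (classic config-datastore style; details vary by release).
import json
import requests

ODL = "http://controller.example.net:8181"          # assumed controller
URL = (f"{ODL}/restconf/config/opendaylight-inventory:nodes/"
       "node/openflow:1/table/0/flow/1")            # assumed switch id

flow = {
    "flow-node-inventory:flow": [{
        "id": "1",
        "table_id": 0,
        "priority": 200,
        "flow-name": "client-a-path",
        # Match IPv4 traffic destined for the client's prefix...
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 2048}},
            "ipv4-destination": "10.1.0.0/24",
        },
        # ...and pin it to the port that leads down the chosen path.
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0,
                "output-action": {"output-node-connector": "2"},
            }]},
        }]},
    }]
}

resp = requests.put(URL, data=json.dumps(flow),
                    headers={"Content-Type": "application/json"},
                    auth=("admin", "admin"))        # default ODL credentials
resp.raise_for_status()
```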

This combination, for me, produces a better picture of a software-defined network. While OpenFlow can determine where traffic is going to go, enabling a network operator to sell a software-switched “private network” to their customer, the operator can additionally deploy functions into the network. It may be a proxy/gateway to help the client control access to the “private network” - something that has typically interested Telcos because it creates the ability to introduce paywalls and revenue models - but it may also be any compute function that the hardware in the infrastructure can support.

If that infrastructure is COTS, then any function could be deployed by an operator to its client, within the client's own secure, QoS-managed “virtual” space inside the operator's network.
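The sketch below shows the other half of that picture, again with openstacksdk: carving out a client's own “virtual” space as a dedicated network and subnet, into which any supported function can then be deployed. The names and address range are illustrative, not taken from any real deployment.

```python
# A sketch, assuming the same "operator-cloud" as before; the network
# name and CIDR are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="operator-cloud")

# The client's private segment within the operator's network.
net = conn.network.create_network(name="client-a-net")
subnet = conn.network.create_subnet(
    name="client-a-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.1.0.0/24",
)
print(net.id, subnet.cidr)

# The earlier create_server() sketch could now target "client-a-net",
# placing the proxy/gateway (or any COTS-hosted function) in this space.
```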

The combined ability has caught the imagination of operators who have, until now, always had to “roll trucks” to install that kind of capability for any client. Now they can configure such a deployment from a computer screen. Further, they can not only deploy the network configuration but also define the whole network's functional capability, allowing them to rapidly deliver highly tailored solutions.

Naturally enough, when SDN combined with Infrastructure as a Service, new buzzwords emerged. While the original term is “virtual network function” (VNF), the common phrase became “network functions virtualization” (NFV) - particularly among the community close to the European Telecommunications Standards Institute (ETSI), which has, since 2012, been working hard to help create standards for its SDN/NFV initiative.[2]

The SDN/NFV project has been prominent at many Telco-focused events - I myself chaired one in 2015 in London. It was interesting to see the momentum that ETSI had brought to this area. Coming from the CDN space - where the problem of scaled-up distributed function has been central and inherent for two decades - it was interesting spending time with Telcos. There is a significant cultural gap between the two otherwise closely related groups.

The CDN culture is far more pragmatic. CDNs take a top-down view of the networks, and assume that they must bring to a “simple” network a layer of intelligence and function that the network on its own lacks. This has traditionally been achieved in a highly proprietary way. In some ways the CDNs have had more in common with clouds and hosting centers than networks. However, because their value proposition - bringing content to users at high quality and with reliability - revolved around distribution of the edge capability, the CDNs have naturally evolved strong network and service automation systems that enabled them to organize in much the same way that the SDN/NFV models were suggesting.

Obviously the CDNs were function/application specific: virtual proxies and private QoS-engineered VLANs were not core business to them. This in turn meant that the CDNs' OSS/orchestration systems were highly proprietary, bespoke, and geared toward their specific technology choices.

As the Telcos have seen the CDN sector become significant, they have all sought to deploy CDN within their own infrastructure. While this only offers a value proposition to publishers seeking to reach “on-net” customers within that operator's network, for large operators such as the former state Public Telephony Operators those customer bases are large enough that bringing video delivery in-house represents a significant business.

Yet the Telcos continue to move glacially, and - while this is a visceral comment - they look down on the CDNs as if their two decades of experience were at best “not relevant,” and in some cases, even more naively, they assume the CDNs simply do not know about telecoms.

Telcos have bought CDNs. In my opinion, unless that CDN is on-net, this is a daft thing to do for many reasons, and only looks good as Investor Relations PR. In practical terms it brings little real network optimization in-house. The CDNs, for their part, have made critical hires from the Telco community to try to bridge this gap. However, those Telco executives have really brought the cultural problem in-house into the CDNs, rather than helping the CDNs transform the Telcos and build a larger opportunity for both.

This means that the CDNs almost myopically missed the emergence of NFV/SDN until about mid-2015, while the Telcos have meanwhile tried to differentiate from the CDNs by building badly designed on-net CDNs that have often failed to justify their commercial commitments.

To be honest, I think it's all a bit of a mess. It is not helping large publishers and their partners (such as Apple TV) lay out the rights deals needed to migrate TV in its entirety to IP as fast as could be done if the cultural divides were not causing so many complications.

In summary, the technology works. But those managing the transition are surrounded by fiefdoms and misconceptions, and this is preventing proper dialogue between the experienced CDNs and the powerful Telcos.

At some point soon this will reach a watershed. I often joke that it will come when the over-50s retire and those who were brought up on the Internet alone take control.

  • [1] https://en.wikipedia.org/wiki/Software-defined_networking
  • [2] http://www.etsi.org/technologies-clusters/technologies/nfv
 