Service Velocity

(An earlier version of this chapter was published on www.StreamingMedia.com in 2016.1)

Interestingly, the term “service velocity” appears as far back as 2007 in the context of networks and telecoms, with citations of the term (from academic papers I have been unable to find) referenced in the cable industry's patents of early 2008.

The term's rapid emergence from that point can be seen on Google Trends.2

However, it is only within the past few years that I have become aware of the term in relation to service deployment for streaming media. There is a reason for this, and I will explain both the cause and the effect below. Indeed, service velocity is yet another string to the bow of my broader argument that we should anticipate significant macro change in the industry over the next two to five years. It also, from first-hand experience, underpins where my own company is finding a strong growth of interest and activity at the moment.

Let me first explain the service velocity concept by quoting a great brief from 2012 in which Carl Weinschenk, senior editor of BTReport, gave a good definition3:

Service velocity, as the name aptly implies, is the set of skills and infrastructure that enables service providers to offer the spectrum of sales, deployment, repair, upgrading and other requisite capabilities in a speedy manner.

The idea is fairly straightforward: Operators who anticipate where business will come from will be able to offer it more quickly.

The traditional approach seems reasonable: When a prospect materializes - either by contacting the operator or after being contacted by the MSO's sales staff - an assessment is done to determine if and how the business can be reached and if it makes sense for the operator to do so.

  • 1 http://www.streamingmediaglobal.com/Articles/ReadArticle.aspx?ArticleID=110848
  • 2 https://www.google.com/trends/explore#q=Service%20Velocity
  • 3 http://www.btreport.net/articles/2012/10/getting-up-to-speed-with-service-velocity.html


The problem with this approach grows as operators aim at bigger and more sophisticated potential customers. Since those prospects are bigger, they likely are being courted by other providers as well. If so, they most likely can provide services more quickly - i.e., with higher service velocity - than carriers that are starting from scratch and who also have to spend some time determining if they even want the business.

Now back in 2012 we were in the midst of the discovery of Gen2 virtualization. By this I mean that those network service operators that had traditionally thought only of building their networks from dedicated hardware building blocks (Gen1) were starting to accept that many elements of service could be abstracted from a common commercial off-the-shelf (COTS) underlying hardware environment - essentially x86 computers. The deployment of hosted servers was no longer a combined hardware and software responsibility, and Infrastructure as a Service (IaaS) providers were inviting operators to deploy their software services within managed, hosted networks of compute resources. This meant that in the event of a hardware failure, rather than losing continuity, the service operator could replicate a copy of the software from the failed machine (or another source) to a new machine, and all functionality would quickly be restored. Meanwhile the hardware operator could send out an engineer to replace the physical unit and commission it back into the pool of resources that the service operator could draw from as required.
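
The recovery pattern behind that continuity claim is simple enough to sketch in a few lines of Python. The iaas client, its launch/terminate calls, and the probe function below are illustrative placeholders for whatever a real IaaS provider exposes, not a specific API:

    import time

    def keep_service_alive(iaas, image_id, instance, probe, poll_seconds=30):
        """Gen2-style recovery: if the current machine fails its health probe,
        boot the same software image on a fresh host from the provider's pool."""
        while True:
            if not probe(instance):                 # hardware (or the service on it) has failed
                iaas.terminate(instance)            # hand the broken host back to the pool
                instance = iaas.launch(image_id)    # same image, new commodity machine
            time.sleep(poll_seconds)                # then keep watching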

In Gen1 physical appliances the underlying compute resource was tailored to the application running on that box; once the two were decoupled, the resulting model has been called “cloud” ever since.

What is key to note is that while various operational aspects of deploying services (obviously resilience and scaling) benefited from this type of virtualization, the time to implement network functions was also shortened considerably. There was no longer a need to “roll trucks” to install specialist appliances in remote locations on the network: the idea was that a “vanilla” cloud hardware resource was already available, and an application could be launched as quickly as the image of that computer could be uploaded to the hardware and booted up.

While for a large network operator jumping in and swapping all its dedicated routers for COTS servers running routing software has been somewhat scary, for small businesses wanting to create distributed networks of database servers, storage services, and so on, there were several advantages. First and foremost, they could reduce their trips to data centers - the overhead for a small business of traveling across a city or a continent to maintain hardware was effectively amortized into the service fees charged by the cloud operator. Second, the upward scale available to even a one-person business is vast; capacity becomes a function of the operating costs of a scaling (and hopefully profitable) business rather than of up-front investment, and that brings great opportunity to small businesses.

Third, and an even more important reason that cloud has been successful: while peaks of demand can be met, huge sunk capital no longer has to be invested in technology that is used only for those peaks. In nearly all businesses the daily ebb and flow of traffic through e-commerce platforms and the like means that the majority of fixed infrastructure remains unused for most of the time. The larger the scale of the enterprise, the greater the efficiency that moving infrastructure costs to a “use-based” model can bring.
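
A back-of-the-envelope calculation illustrates the point. All of the figures below are invented for the sake of the example; the shape of the result, not the numbers, is what matters:

    # Assumed workload: a 2-hour daily peak needing 100 servers, 10 on average otherwise.
    peak_servers, offpeak_servers = 100, 10
    owned_rate = 0.50   # assumed all-in cost per server-hour of fixed infrastructure
    cloud_rate = 0.80   # assumed (higher) pay-as-you-go rate per server-hour

    fixed_daily = peak_servers * 24 * owned_rate                          # sized for the peak, paid 24/7
    usage_daily = (peak_servers * 2 + offpeak_servers * 22) * cloud_rate  # pay only for hours used

    print(f"fixed: ${fixed_daily:.0f}/day vs use-based: ${usage_daily:.0f}/day")
    # fixed: $1200/day vs use-based: $336/day - and the gap widens as peaks get spikier.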

Within the Gen2 environment the complete application function of a traditional appliance was typically “imaged” and optimized a little to run on the higher-powered, abundantly available compute resources deployed in the cloud's data centers. Where traditionally a video encoder farm had to produce all the different formats of the video needed for each desired target device, now a single top-level mezzanine source could be submitted to the cloud infrastructure, which would activate as many servers (running the encoder image) as needed for all the formats and quickly deliver the service - with the advantage that you only need to turn on all the servers a few minutes before they are needed, let them boot, confirm they are ready, and away you go.

Once the encoding task is complete, you could turn off all the cloud servers, and in a pay-as-you-go IaaS cloud (such as AWS or Azure) you would also be turning off your costs until the next time they are needed. (Note that your traditional Gen1 infrastructure, even if turned off, would still be costing you warehousing or office space, security, the manpower to turn it back on when needed, and so on.)
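
That spin-up/encode/spin-down lifecycle can be sketched as follows. As before, the iaas client, the encoder_image, and the per-server encode() call are hypothetical stand-ins for whatever a given cloud encoding service actually exposes:

    def encode_mezzanine(iaas, encoder_image, mezzanine_url, renditions):
        """Launch one encoder per target rendition, wait for the jobs to finish,
        then release every server so the metered cost stops with the workload."""
        servers = [iaas.launch(encoder_image) for _ in renditions]
        try:
            jobs = [srv.encode(mezzanine_url, fmt)
                    for srv, fmt in zip(servers, renditions)]
            return [job.wait() for job in jobs]     # collect the finished outputs
        finally:
            for srv in servers:
                iaas.terminate(srv)                 # pay-as-you-go: billing ends here

    # e.g. encode_mezzanine(iaas, encoder_image, "mezzanine.mxf",
    #                       ["1080p-hls", "720p-hls", "480p-dash"])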

The Gen2 model, where images of the old appliances are run on ephemeral resources, is a great “Duplo-Lego” introduction to virtualization. Most engineers and architects now understand that trend. Indeed we have seen many engineers, who until four or five years ago had very little regard for distributed computing and virtualization, embrace the cloud wholeheartedly as they learn to scale their applications. What is remarkable here is that scale brings the ability for even a very small company to compete for opportunities with large heavily invested network operators.

To return quickly to Weinschenk's definition of service velocity:

Since those prospects are bigger, they likely are being courted by other providers as well. If so, they most likely can provide services more quickly - i.e., with higher service velocity - than carriers that are starting from scratch and who also have to spend some time determining if they even want the business.

The Gen2 model has given application developers a service velocity that Gen1 simply cannot provide. Small enterprises usually have much shorter decision-making cycles and greater agility in their leadership. Larger incumbent operators traditionally had a monopolistic advantage when deciding to offer a service like TV or voice online, and they would set their service velocity to almost completely de-risk even vast capital investment. They could dictate the pace of technological change. Now smaller, more agile start-ups can scale to deliver large subscriber services worldwide in a few moments. We have moved from an operator-driven technology landscape to a consumer-driven one. Only those enterprises that can keep up with constantly changing consumer demands can sustain their business. Agility has replaced monopoly.

Such vast change in service velocity, coupled with deregulation in the telecoms sector over the past 20 years, has created a vast thriving market for OTT online providers whose operations depend on money coming from the underlying Telco subscribers.

Back in the 1990s, and even the mid-2000s, Telcos were still very much expecting to provide the shopping portals, the walled-garden content models, and other managed services. These managed on-net services were nearly always so easy to switch away from (just a press of a remote control to reach equivalent OTT providers offering a better/cheaper/optimized service) that they had to leverage the operator's ownership of the subscriber's access network (offering higher resolution images, better variety, easier discovery, etc.), or else the subscriber could simply shop OTT.

With net neutrality there is cause for concern too. Although the general idea is that the market must be competitive, only one ISP serves any particular end user. That access circuit has additional physical installation costs, and the ISP, and the ISP alone, has the opportunity to offer “advanced managed network services” over that connection. This can be a headache both for the Telco that invested in infrastructure expecting to upsell managed services and provide its own shareholders an ROI, and for the regulators who would like the Telcos to continue to invest in developing access networks more widely but are wary of opening very unequal markets where only those with large capital funds can deploy managed services within the operator network footprints.

What does this all have to do with service velocity?

Let's recap:

  • We have Gen1 - The traditional network with intelligent appliances at the edges and a dumb pipe in the middle.
  • We have Gen2 - The virtualized appliance, where the application is no longer fixed permanently at the edges of the network. It can now be moved to the resource most suitable for the optimized delivery of the service, and the use of hardware resources is strictly tied to real demand for the delivery of service.

So what might Gen3 be?

While there are many applications that can run on networks, ranging from gaming to banking to TV, there are (currently) only subsets that can run within networks. Almost exclusively these are network protocols of one type or another.

Let's drill into some of the more familiar network applications, and then broaden into some applications that cross the line between the network and the pure application space.

The first of a couple of obvious ones is the Domain Name System (DNS), used all the time by everyone - it's what turns www.streamingmedia.com into a number that the routers can use to route a request.
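
That lookup is a one-line call in most languages; the Python below uses the standard library resolver to do exactly what is described above:

    import socket

    # Ask DNS to turn the hostname into the numeric address the routers work with.
    address = socket.gethostbyname("www.streamingmedia.com")
    print(address)   # prints whatever IPv4 address the name currently resolves to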

Second is the other obvious one: Internet Protocol - the very essence of how data gets from one place to another in a mixed mesh network of networks.

There are a couple of others too. MPLS is a way that IP can be deployed on various forms of fixed-line network. MPLS can be used by operators to designate priorities for traffic, ensuring that low-latency banking data can be shipped faster than a background software update, for example. MPEG-TS is one that is closer to the StreamingMedia audience, and it is just one of many layers of coordinated media and network protocols implemented at either end of a data transmission to ensure that a video signal can be transmitted and received.
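
To make the MPEG-TS example a little more concrete, the sketch below parses the fixed 4-byte header of a single 188-byte transport packet - the 0x47 sync byte and the 13-bit PID that tells a receiver which stream the payload belongs to:

    def parse_ts_header(packet: bytes):
        """Return (pid, payload_unit_start) from one 188-byte MPEG-TS packet."""
        if len(packet) != 188 or packet[0] != 0x47:   # every TS packet starts with sync byte 0x47
            raise ValueError("not an aligned MPEG-TS packet")
        payload_unit_start = bool(packet[1] & 0x40)   # set on the first packet of a new PES/PSI section
        pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet identifier
        return pid, payload_unit_start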

In the Gen1 and Gen2 world we have often taken blocks of these protocols (such as IP/HTTP/HLS) and, using our application-specific appliances or virtual appliances, processed a group of them to acquire a video signal, then passed the result along a “conveyor belt” of appliances that reprocess it until it is ultimately ready for the end user to consume.

That conveyor belt has become much more agile in the Gen2 world - we can daisy-chain a limitless number of virtual appliances together to translate almost any combination of protocol sources to any combination of output protocols - leading to a world of virtual encoding and transmuxing, etc.
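
Conceptually that conveyor belt is just function composition. The sketch below chains arbitrary per-stage functions into one pipeline; the stage names in the commented example (acquire, transcode, package) are placeholders, not real functions:

    from functools import reduce

    def pipeline(*stages):
        """Compose processing stages so the output of one feeds the next,
        exactly as daisy-chained virtual appliances do."""
        return lambda media: reduce(lambda data, stage: stage(data), stages, media)

    # workflow = pipeline(acquire_rtmp, transcode_h264, package_hls)   # placeholder stages
    # hls_output = workflow(source_url)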

But in pure computational terms these compute units/appliances are inefficient. If I want to “stack” three tiny network application protocols, and for some reason they need three separate Gen2 appliances, then I may need to boot three large operating systems on three compute resources to complete my task.

Gen3 computing architectures look beyond that model. They assume not only COTS hardware but also a common OS, live and running on the host computer. Instead of needing to provision new hardware and network addresses, distribute the OS image, boot it, clear security, and then launch the network application needed for the task, a Gen3 model expects to run the network application on the already available COTS resource, and to launch that application almost instantaneously - not least because it is significantly smaller than the “full Gen2” image previously used.
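
In practice Gen3 usually means containers sharing the host's already-running kernel. Assuming Docker is installed, the sketch below launches a throwaway container and times it; the alpine image and echo command are arbitrary examples chosen only to show how little has to boot:

    import subprocess, time

    start = time.monotonic()
    # No OS image to distribute or boot: the shared kernel is already running,
    # so only the small application image needs to be pulled (once) and started.
    subprocess.run(["docker", "run", "--rm", "alpine", "echo", "service up"], check=True)
    print(f"container ran and exited in {time.monotonic() - start:.2f}s")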

Historically we have seen what used to be data warehouses gradually convert to providing cloud hosting. Traditionally these data centers were large customers of the network operators - or in some cases valuable channels for selling to shared customers.

The network itself, however, has always had Gen1 architecture. Core exchanges and routing infrastructures terminated very specific network types, and this meant that the networks underpinning all these hosted services were very inflexible.

While the network interface cards themselves always require specific interfacing, the routing core that these interfaces connect to has, over the years, begun to look increasingly like a traditional COTS computer core. This is largely a testament to the progress of commodity chipsets: a £1m Cisco router of five years ago probably has significantly less processing power than a contemporary Dell desktop. Accordingly, those looking to increase service velocity in the currently Gen1 network operator space are at the moment focused directly on how the actual network itself can be virtualized.

Given that this means managing a distributed cloud of resources, and that Gen2 is already being eclipsed by more agile application developers who thirst for Gen3 architecture (because it is cheaper, more dynamic, and more scalable again than Gen2), these network operators are almost all exploring Gen3 virtualization models. They want maximum availability and maximum service velocity wherever they see an opportunity to offer a managed service, so that they can create and expand into that market, leveraging their ownership of the network before the regulator prevents them from doing so in order to protect the OTT players that compete in the same space but have no ability to manage the network.

What may emerge - and I believe this to be quite certain - is that operators will initially take advantage of this massively increased service velocity to “show the way” and to ensure the capability is well defined, and that they will then open up the ability to offer managed network services on-net to third parties, in a continuation of the IaaS model currently available only on COTS in data warehouses.

CDNs, IoT platforms, retail, and FinTech will all see advantages they can leverage within the managed services environment, and the ability to run applications deep within operator networks will create a new generation of network services - indeed the term “microservices” is appearing in this context more and more - many of which we cannot imagine today, but all of which will become available almost as soon as they are invented, at scale and with a quality of service and reliability that will surpass our expectations today.

So expect the very ground the streaming media sector is built on to start moving significantly over the next few years. Traditional alliances may change dramatically, some for the better and some for the worse. Operators may rapidly deploy their own CDN models, or they may even open up to invite third parties experienced with managed network microservices to come and evolve with them ....

Whatever the outcome, as they fully virtualize, this inexorable evolution will radically change the service velocity that the powerful network operators bring to market, and this will have deep and far-reaching effects.

 