Workflows

So far I have talked about high-level trends across the sector and expressed my own views and opinions on some of those trends. Then, based on where I have seen them emerging, I have conjectured about where the industry sector is heading.

Now we are going to turn our attention to the end-to-end production of video, and talk in some detail about aspects of the daisy chains of technologies that can be put together to create what are commonly termed “workflows.”

Obviously a good starting point is to agree what the term “workflow” actually means. As with all catchy terms in the IT sector, it sounds very specific, until you realize that different people are using the term to embrace slightly different things.

Let's take the Wikipedia definition as a starting point[1]:

A workflow consists of an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes that transform materials, provide services, or process information. It can be depicted as a sequence of operations, declared as work of a person or group, an organization of staff, or one or more simple or complex mechanisms.

From a more abstract or higher-level perspective, workflow may be considered a view or representation of real work. The flow being described may refer to a document, service or product that is being transferred from one step to another.

Workflows may be viewed as one fundamental building block to be combined with other parts of an organization's structure such as information technology, teams, projects and hierarchies.

“Business activity” is, for me, a central part of the Wikipedia description. Viewed cynically, the marketing brochures of many vendors show how their product can bring a company into competition with all the other customers of that same vendor. The sale of “product” solves a problem for the vendor alone - that of needing new customers. In my opinion, however, a “workflow” should not be tied to the current ability of a vendor to provide the same supported product to many clients; it should be tied to solving the specific business problems of each individual customer, particularly in a B2B environment where the customers are themselves seeking differentiation in their own B2C outputs.

In the case of a news agency, that “problem” may be the high-speed delivery of content captured “on location” in a remote region, and its conversion into a usable format for broadcast or online delivery.

In the case of a telemedicine company, it may be that a high-quality video picture needs to be delivered cheaply and securely to just one or two remote locations.

It is unlikely that the physical and link layers of these two customers will be similar, and the financial constraints and objectives will almost certainly be significantly different.

In this chapter I will refer back to these two parallel and yet very different workflows to explore the variance.

In the preceding chapter, in Sections 2.7 and 2.8, I discussed orchestration. In a “software-defined” model, orchestration and repeatability are central concepts. In the latest generation of distributed computing architectures, complex, multi-tenant workflows can be activated at will, and replaced instantly should they fail. This approach is leading to new design considerations, and it is opening up new business possibilities too.

In practice, this means that organizations often arrive on day one at the vendor's office with a loose definition of what they think they need, based on carrying a previous architecture paradigm into a new context. So today we still see many organizations attempting to virtualize a workflow identical to their traditional one, so as to validate that there are commercial motivations for “doing the same as they have always done” but “in a new way that is just as good but cheaper.” Many encounter resistance from the corporate/commercial leadership, who dismiss the work as “for technology's sake.” They also struggle to make the old model work on commercial off-the-shelf (COTS) hardware, with its weaker QoS guarantees, and on the “best-effort” service level agreements that underpin IP networks.

The fact is that simply moving into the cloud a series of black-box capabilities that have been scripted together using the black-box vendors' APIs may well be possible. In fact, at id3as, our earliest virtualizations were proof-of-concept work doing exactly this (to ensure we could test the end-to-end viability of virtualization and model some of the cloud economics). However, once resource (“cloud computing”) becomes cheap and essentially endless, and once that resource can be found in many locations on a network that can be created ad hoc, we can start to design differently.
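As a minimal sketch of what “designing differently” can mean in practice, the snippet below (Python, with entirely hypothetical step names and placeholder commands) declares a workflow as a repeatable set of processes and lets a tiny supervisor activate them and replace any element that fails, rather than nursing individual appliances. It is an illustration of the orchestration-and-repeatability principle discussed above, not a description of any particular product.

```python
import subprocess
import time

# A hypothetical workflow declared as a repeatable set of named steps.
# Both commands are placeholders for whatever ingest/package processes a
# real deployment would run; "packager" is not a real binary.
WORKFLOW = {
    "ingest": ["ffmpeg", "-i", "rtmp://source/live",
               "-c", "copy", "-f", "mpegts", "udp://127.0.0.1:9000"],
    "package": ["packager", "--in", "udp://127.0.0.1:9000",
                "--out", "/var/www/hls"],
}

def supervise(workflow):
    """Launch every step, then relaunch any step that exits.

    The workflow is data: a failed element is replaced, not repaired,
    which is the orchestration-and-repeatability principle in miniature.
    """
    procs = {name: subprocess.Popen(cmd) for name, cmd in workflow.items()}
    while True:
        for name, proc in procs.items():
            if proc.poll() is not None:  # the step has died
                print(f"{name} exited with code {proc.returncode}; relaunching")
                procs[name] = subprocess.Popen(workflow[name])
        time.sleep(1)

if __name__ == "__main__":
    supervise(WORKFLOW)
```

In a real deployment the supervisor itself would of course be supervised, and the steps would be containers or cloud instances rather than local processes, but the shape is the same: the workflow is declared once and its failed elements are simply replaced.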

Traditional architectures in the live video space have usually revolved around getting the source feed back to a central point at as high a quality as possible. That central point has a limited number of key appliances that can transform the media as required for output and can then deliver it to a distribution network, again at as high a quality as possible. The distribution network then has to commit to replicating that source across its own geographically distributed network and finally deliver it through access networks to end users.

Simply moving this workflow from appliances and “private” networks to COTS and IP networks may bring some benefits. These may include better cost efficiency: live networks are commonly used in an ad hoc way, and by virtualizing them they can be activated across the “cloud” on demand, and thus cost nothing when they are not being used. Such approaches may also provide better disaster recovery: a failed COTS machine is one of thousands in a cloud and can be replaced instantly, whereas a failed appliance may require a site visit and the installation of a physical replacement. However, the peace of mind that private networks provide to operators (who have not only evolved on those networks but often anchored their key guaranteed-delivery business case on them) is challenged by a move to IP networks, where overprovisioning and best effort are the only guarantees available.
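The cost-efficiency point can be made with a simple back-of-envelope comparison. The figures below are purely illustrative assumptions, not quoted prices: an always-on appliance carries its full annual cost whether or not it is in use, whereas an on-demand instance is billed only for the hours that ad hoc live events actually run.

```python
# Back-of-envelope comparison of an always-on appliance vs. on-demand cloud.
# All figures are illustrative assumptions, not real prices.

appliance_annual_cost = 30_000   # amortised capex + support + hosting (assumed)
cloud_hourly_rate     = 3.0      # equivalent instance cost per hour (assumed)

events_per_year = 150            # ad hoc live events per year (assumed)
hours_per_event = 4              # including setup and teardown (assumed)

cloud_annual_cost = events_per_year * hours_per_event * cloud_hourly_rate

print(f"Appliance (always on): {appliance_annual_cost:>8,.0f}")
print(f"Cloud (on demand):     {cloud_annual_cost:>8,.0f}")
# With these assumptions the on-demand model costs 1,800 per year:
# the saving comes entirely from paying nothing while idle.
```

The specific numbers matter less than the shape of the comparison: the more ad hoc the usage pattern, the more of an appliance's fixed cost is spent sitting idle, which is exactly the cost that the on-demand model removes.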

But a first question to ask is whether the network links and topology are still valid in the new compute paradigm.

Traditional broadcast architecture relies on backhaul to a central processing plant and onward contribution to a distribution network. In contrast, the end-to-end design principle[2] of IP networks removes complex compute functionality from within the network and places it at the edges. This means that in a perfect model the modern broadcast network should be architected in such a way that the source content is encoded and encrypted at source and multicast to all subscribed end users, and, if transcoding is required to suit an end user's bandwidth or device type, this happens at the edge of the network where the subscriber authenticates on the access network.

Such complete decentralization leaves the modern network simply orchestrating - moving the right functionality/capability to the right resource - and overseeing authentication. The video itself - the high-bandwidth data - takes a short route from the point of creation almost directly to the end users. There is no “central” broadcast facility. This leads to great scaling, resilience, and traffic optimization, and to reduced operational complexity.
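As a rough sketch of where the work sits in such a decentralized model, the snippet below (Python, with hypothetical subscriber, POP, and node names) reduces the orchestration decision to its essence: if the source format already suits the authenticated subscriber, the multicast stream is delivered untouched; otherwise a transcode is placed on an edge node at the subscriber's access POP. Nothing in the path resembles a central broadcast facility.

```python
from dataclasses import dataclass

# Hypothetical subscriber and edge-node records; in a real system these
# would come from the authentication and network-inventory services.
@dataclass
class Subscriber:
    user_id: str
    access_pop: str          # point of presence where the subscriber authenticates
    max_bitrate_kbps: int
    codec: str

@dataclass
class EdgeNode:
    pop: str
    free_cpu: int

# The mezzanine stream is encoded (and encrypted) once, at source.
MEZZANINE = {"codec": "h264", "bitrate_kbps": 20_000}

def place_transcode(sub: Subscriber, edges: list[EdgeNode]) -> str:
    """Place a transcode only where and when it is actually needed.

    If the source already suits the subscriber, the multicast stream is
    delivered untouched; otherwise the job lands on an edge node at the
    subscriber's access POP, never at a central facility.
    """
    if (sub.codec == MEZZANINE["codec"]
            and sub.max_bitrate_kbps >= MEZZANINE["bitrate_kbps"]):
        return "no transcode needed: deliver the multicast source directly"

    candidates = [e for e in edges if e.pop == sub.access_pop and e.free_cpu > 0]
    if not candidates:
        return "no capacity at this POP: fall back to a neighbouring edge"
    return f"transcode to {sub.codec} @ {sub.max_bitrate_kbps} kbps on edge {candidates[0].pop}"

# Example: a mobile subscriber authenticating at a hypothetical Manchester POP.
edges = [EdgeNode(pop="man-01", free_cpu=8), EdgeNode(pop="lon-03", free_cpu=2)]
sub = Subscriber(user_id="u42", access_pop="man-01", max_bitrate_kbps=3_000, codec="h264")
print(place_transcode(sub, edges))
```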

Yet such a model leaves many traditional broadcast operators uneasy. They have a culture of controlled network core operations. Ask any broadcaster for a tour of their facility and they will proudly show you around their data centers housing huge arrays of appliances. They will also show you their network operations center (NOC), extolling the virtue of having “amazing” connectivity centralized in that facility, and they will show you their master control room (MCR), where rows of operators oversee the various user interfaces of their different appliances and OSS systems on large screens that give the impression they are overseeing a moon landing.

It is a tough job to explain to operators that all the command and control can be available in a web browser or a smartphone app, that centralizing all the network links creates a single point of failure, and that the scripts controlling their appliances are prone to failing as soon as any of those appliances updates its software release.

However, what does change hearts and minds is when a large traditional broadcaster has a significant outage despite its huge investment in protecting against such a situation, while a much leaner, more agile modern broadcaster delivers much higher availability on off-the-shelf technology and with much lower fixed overheads. Once the finance director “gets it,” the culture has permission to change, and the long-term migration can begin.

Virtualizing a workflow for the sake of doing so is unlikely to bring any benefit. Once a business justification for software-defined workflows has been made, the technology should step up to meet that business's requirements. I often open conferences with the expression “cloud is not a technical term; it is an economic term.” And it is usually the cost saving made when infrastructure is not being used that brings the most benefit!

  • [1] https://en.wikipedia.org/wiki/Workflow
  • [2] https://en.wikipedia.org/wiki/End-to-end_principle
 