What is a CDN: A Simple Model

Setting the Scene for CDNs

CDNs are potentially a huge topic. I'll jump about a bit, using some analogy and a brief explanation of the key features of the formative landscape at the time CDNs emerged.

A CDN is, in logical terms, a very simple thing. It is a way to ensure that many users can consume the same content.

A traditional audio engineer would think of a CDN's function in much the same way they think of a distribution amplifier. There is a source “input” signal. There are then n channels of output signals replicating the input signal to n downstream receivers. For both upstream and downstream, some processing is needed to achieve this “splitting” of the source to many destinations.

In the analog world of distribution amplifiers, this processing is the essential “amplifier” stage. Without that processing, as you add more receivers the signal gets progressively divided between them and becomes very weak as the audience grows. In the same way, in the digital world of a CDN, the signal needs to be properly replicated to each user, and how this is done affects the quality and cost of the service that the CDN can deliver.
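To make the analogy concrete, here is a minimal Python sketch of that replication step. The receiver count, the queue-per-receiver model, and the segment names are purely illustrative assumptions, not a description of any real CDN; the point is simply that each receiver gets a full copy of every chunk rather than a share of a divided signal.

    # A toy "distribution amplifier" in the digital domain: one source stream is
    # actively replicated to n receivers rather than divided between them.
    from queue import Queue
    from typing import Iterable, List

    def replicate_stream(source: Iterable[bytes], receivers: List[Queue]) -> None:
        # Copy every chunk of the source to every receiver (amplify, don't divide).
        for chunk in source:
            for receiver in receivers:
                receiver.put(chunk)  # each receiver gets a full copy of the chunk

    if __name__ == "__main__":
        receivers = [Queue() for _ in range(3)]               # three hypothetical receivers
        source = (f"segment-{i}".encode() for i in range(5))  # five chunks of "content"
        replicate_stream(source, receivers)
        print([r.qsize() for r in receivers])                 # -> [5, 5, 5]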

If we head back to the early days of the web, we need to look at the networks as they operated at the time. In the late 1980s (1988 to be exact) the ITU had settled on regulating phone companies' international activities through their trade in “minutes” (explained in a moment). Information services were negotiated out of the regulations. They were deemed, at the time, to be niche services not requiring much capacity on the networks, and so not worth developing a billing system for.

Telcos had evolved to price their main commodity as “call minutes,” so their existing billing systems were effective points of contact between the Telco and the regulator to ensure regulatory compliance.

Unlike minutes billing, packet billing was complex. A minute refers to how long a private session between two ends of a network link is available. A packet of data moves across “public” third-party networks that have no billing relationship with anyone at either end of the route, only with their direct peers and transit providers. That traffic is measured either in Mbps, for a fixed amount of billing and capacity per month, or in MB, for a flexible amount of capacity and, accordingly, a variable amount of billing.
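To put the difference between those two measures into numbers, here is a purely hypothetical comparison; the prices and utilisation figures are assumptions of mine chosen only for the arithmetic, not real tariffs. It shows how a fixed per-Mbps commitment and flexible per-MB billing diverge as a link fills up.

    # Hypothetical comparison of the two billing measures: fixed Mbps per month
    # versus per-MB of data actually transferred. All rates are illustrative.
    MBPS_COMMIT = 100                # committed capacity in Mbps
    PRICE_PER_MBPS_MONTH = 20.0      # $ per Mbps per month (assumed)
    PRICE_PER_MB = 0.0005            # $ per MB transferred (assumed)
    SECONDS_PER_MONTH = 30 * 24 * 3600

    def fixed_capacity_cost() -> float:
        # Fixed billing: you pay for the committed pipe whether or not you fill it.
        return MBPS_COMMIT * PRICE_PER_MBPS_MONTH

    def per_mb_cost(average_utilisation: float) -> float:
        # Variable billing: cost tracks the MB actually transferred in the month.
        megabits = MBPS_COMMIT * average_utilisation * SECONDS_PER_MONTH
        return (megabits / 8) * PRICE_PER_MB

    for utilisation in (0.05, 0.25, 0.60):
        print(f"{utilisation:>4.0%} utilisation: fixed ${fixed_capacity_cost():,.0f} "
              f"vs per-MB ${per_mb_cost(utilisation):,.0f}")

On those assumed numbers, a lightly used link is far cheaper on per-MB billing, while a consistently busy one is cheaper on a fixed commitment, which is exactly the trade-off operators and, later, ISPs had to manage.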

For this reason it took until the Internet was well established before packet billing could emerge.

Telcos started to realize that international trunk routing over an IP pipe, where you could commit to a certain consistent level of versatile IP traffic, could be significantly cheaper than paying for 64 kbps “circuits” over which to run two channels of 8 kbps audio in a fixed ATM or frame-relay model. These circuits were ultimately shared with the operator's other clients, but the contention was calculated in minutes during which the circuit was reserved solely for the use of that operator.

Given that operators had always charged end users in minutes for their usage, this minutes market was very simple and very strong. However, the appearance of IP was highly disruptive.

Operators essentially resell “routes of minutes.” With IP appearing as a cost saving on their interconnections with destination operators, the Telcos worked hard to sell access to that cheaper IP route at a premium, still calculated in minutes. This in turn gave birth to the least-cost routing market, which dominated telecoms businesses in the mid- to late 1990s, for minutes in general and particularly as dial-up took off. Often many hours (many minutes) online on a dial-up connection translated into only a small amount of IP traffic actually passing over the interconnections from the access provider to the rest of the Internet; very few users in that era were heavy multimedia users. These dial-up ISP businesses sold at many times their value, which in turn focused the financial markets on the information services space.

Telcos realized that laying fiber was a good thing: either way, voice-focused ATM/frame-relay services were going to sell, and so were IP services, and funding was abundant. As they laid fiber, they supported subscriber retailers in creating “always on” dial-up services, not least to stabilize the wholesale market for minutes of dial-up connections and to shore up commitments to IP traffic.

Once the consumer market had discovered “flat-rate Internet,” the complexity of packet billing on a packet-by-packet, source-network-to-destination-network, transit-billed basis fell away (or latterly passed to OTT operators), and delivering large amounts of multimedia over the Internet became affordable for the general public.

The only problem was that, while huge funding had gone into fiber, it takes several years to lay transoceanic fiber. While this rollout was going on, the international network interconnects were filling up, and at the same time the consumer business model no longer scaled revenue with usage.

Usage was obviously running away with itself.

While consumers could now talk internationally 24/7 for no extra cost, and download or stream audio at good quality, the network operators, in ensuring their networks could deliver the service their customers demanded, saw costs scaling at their interconnects.

Smaller ISPs would buy their access to backbone Internet services on the basis of MB of data transferred each month. This meant that increased usage directly increased the cost of operations. Equally, those with traffic volumes significant and consistent enough to commit to fixed Mbps peering with a larger operator had to watch that commitment carefully: these connections were not cheap to establish, and they would still need a data transfer model to “burst onto” at a premium to ensure availability of service under peak loads.
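As a sketch of that dilemma, the following hypothetical calculation models a month's bill as a fixed commitment plus a per-MB premium on everything that bursts above it. The rates, the 5-minute sampling interval, and the toy traffic profile are all assumptions of mine, chosen only to illustrate the shape of the trade-off.

    # Hypothetical commit-plus-burst billing: a fixed Mbps commitment, with a
    # per-MB premium for traffic carried above that rate at peak times.
    PRICE_PER_COMMITTED_MBPS = 20.0   # $ per Mbps per month (assumed)
    BURST_PREMIUM_PER_MB = 0.001      # $ per MB above the commit (assumed)
    SAMPLE_SECONDS = 300              # traffic sampled in 5-minute intervals (assumed)

    def monthly_cost(commit_mbps: float, samples_mbps: list) -> float:
        cost = commit_mbps * PRICE_PER_COMMITTED_MBPS      # the fixed commitment
        for rate in samples_mbps:
            excess_mbps = max(0.0, rate - commit_mbps)     # anything above the commit
            excess_mb = excess_mbps * SAMPLE_SECONDS / 8   # Mb -> MB for that interval
            cost += excess_mb * BURST_PREMIUM_PER_MB
        return cost

    # A toy day of 288 five-minute samples: quiet overnight, a busy working day,
    # and a short, sharp evening peak.
    day = [20.0] * 202 + [80.0] * 72 + [300.0] * 14
    month = day * 30

    for commit in (20, 80, 300):
        print(f"commit {commit:>3} Mbps -> ${monthly_cost(commit, month):,.0f}/month")

On those assumed numbers, committing only to the quiet baseline leaves the ISP paying dearly for every busy hour, while committing to the rare peak pays for capacity that sits idle almost all month; the sweet spot lies in between, and it moves whenever the traffic profile does, which is why that commitment had to be watched so carefully.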

While on a domestic, country-by-country basis, settlement-free peering through (in the case of many locations in Europe) public peering exchanges such as the London Internet Exchange (LINX) softened the impact of direct-scaling costs, there was still a clear challenge to face as demand for the global resource of the Internet scaled up.

A few private peering network operators set up “backbone” IP networks - essentially peering networks with entry points in many cities, so a single connection to them would ensure availability in multiple locations. These were often themselves IP networks built on a variety of incumbent Telcos' fiber, bought in wholesale quantities. However, even though these operators could offer a highly tailored IP network service, and enable a publisher to guarantee that their webserver could be reached from many locations, the issue remained that all traffic had to transit many long network hauls to reach the webserver.

As such, all the potential global traffic that might hit (as a working model, not an actual example) CNN.COM would be reaching the CNN.COM webserver from all locations around the world - and that may be huge during a strong breaking news story, but the rest of the time it might be significantly lower. So should CNN.COM buy expensive burstable MB pipes, or put in place large fixed-capacity pipes to ensure that it can meet the peak demand and yet not simply pay for every byte of data that a consumer reads? Also, that topology made the CNN.COM webserver a single point of failure; never a good thing.

With web services becoming more critical to the dissemination of news and popular media, it was clear that some scaling architecture needed to be introduced to ensure that service would be delivered.

It was at this exact moment that the CDNs appeared.

 