
Wire

For venues that host many events, such as sports stadiums, it has been common to provision a permanent “facility line”: a loose term for a high-capacity private connection between the venue and a broadcast facility, capable of supporting (for example) a 270Mbps uncompressed video feed.

Once installed, a facility line may cost only $200 to $300 (indicative) to activate for the duration of a typical event webcast. By comparison, installation of a new private link may run to many $10,000s, which for one-off/ad hoc webcasts is typically out of scope.

Satellite links for TV outside broadcast are also a considerable financial commitment. Although the figures vary wildly, a typical satellite truck (with operator, satellite capacity, and a remote operator to oversee reception at the TV studio facility) could cost $2000 to $3000 per day or more.

For this reason the potential to use relatively cheap and abundant IP connections into many locations has been a key underlying driver in the adoption of IP webcasting. For those with more constrained budgets, who do not require prohibitively expensive TV-quality production, the lower link cost of using available IP/Internet connections has been a game changer.

Yet, that said, once a webcast goes live and is promoted to the event's audience, the brand equity that the clients are placing behind the webcast is proportionately the same regardless of the budget. For that reason expectation management has been key to helping them trust IP as a contribution method.

My first webcasts used PSTN telephone lines, which were typically limited to 56kbps but had a number of advantages over today's Internet connections. Most important, when we connected a dial-up modem to an ISP on a 56kbps telephone circuit, that circuit was ours alone for the duration of the session. This meant that our dial-up contribution feed had a fixed service level: if we had problems sending an audio feed over a dial-up connection, those problems lay not between us and the ISP, but between the ISP and its peers or the CDN we were contributing to. In today's broadband world these ISP interconnections, the peering with CDNs, and so on, are so massively overprovisioned (to deal with the vast amount of video data in use) that it is extremely unlikely any issue will arise with the contribution feed.

Over time we began to use ISDN lines. There were several advantages to ISDN. Most important, the dial-up process (that familiar “phrrrrrra ptwang ptwang” handshake made by dial-up modems) took around 8 to 10 seconds to complete, and only once it was complete could the applications (the encoder and packetizing software) handshake with the CDN again. Those 8 seconds could seem like a lifetime in the middle of a sports event. In contrast, ISDN lines connect almost instantly, so a dropped call might result in just a second or two of lost audio.

ISDN is still in widespread use in the radio and Internet radio sector, providing high-quality private links between studios, and so on.

As early broadband emerged offering 256kbps and 512kbps, the demand for live video came with it. But these broadband lines were based on ADSL technology, and this was unsuitable for contribution for a number of reasons:

  • The lines were asymmetrical - so, while you could download 512kbps, you might only be able to upload 128kbps.
  • The lines were contended - which is to say that while you could download a max throughput of 512kbps, this was only in short bursts, since you were sharing that 512kbps with up to 80 other people. For downloading your email over 20 seconds, the line would provide a burst of 512kbps. But if you tried to stream a 450kbps video for any length of time, the chances are that your neighbor would check his email, and for a period it would be impossible to throughput sufficient video data to keep your stream running smoothly (the sketch after this list illustrates the arithmetic).
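
As a rough illustration of why contention mattered, here is a minimal sketch, assuming an even share of the line among active users (real DSLAM scheduling was more complicated than this):

```python
# Rough model of a contended ADSL segment: the advertised downstream rate
# is shared among whoever is active at the same time. The even-share model
# here is an illustrative simplification.

LINE_RATE_KBPS = 512      # advertised downstream rate
CONTENTION_RATIO = 80     # up to 80 subscribers on the same segment

def worst_case_share_kbps(active_users: int) -> float:
    """Even share of the line rate among concurrently active users."""
    return LINE_RATE_KBPS / active_users

for active in (1, 2, 10, CONTENTION_RATIO):
    print(f"{active:>2} active users -> {worst_case_share_kbps(active):6.1f} kbps each")

# Even two or three active neighbors push the share below the 450 kbps a
# sustained video stream needs, while a 20-second email burst is unaffected.
```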

So, while the audience were connecting at 512kbps, in order to provide a usable video we had to bond together several ISDN lines. To do this, we needed bonding technology at both ends of the ISDN links, so we had to host equipment in a data center and in effect build our own small ISP service. By bonding six ISDN lines we could achieve a throughput of 384kbps, and after allowing around 10% for signaling and so on, we would then encode our contribution feed at around 320kbps. The hosted, remote end of the ISDN bonding system was itself connected to a 1Mbps Internet connection, and this then forwarded the contribution into the CDN.
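
The bonding arithmetic is easy to verify; a minimal sketch of the budget, using the figures from the text:

```python
# Throughput budget for the bonded-ISDN contribution described above.
CHANNEL_KBPS = 64           # one ISDN B-channel
CHANNELS = 6                # six bonded lines
SIGNALING_OVERHEAD = 0.10   # roughly 10% reserved for signaling and so on

raw_kbps = CHANNEL_KBPS * CHANNELS                 # 384 kbps aggregate
usable_kbps = raw_kbps * (1 - SIGNALING_OVERHEAD)  # ~346 kbps usable

print(f"aggregate: {raw_kbps} kbps, usable after overhead: {usable_kbps:.0f} kbps")
# Encoding at ~320 kbps sits safely below the usable rate, leaving headroom.
```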

Interestingly, at these contribution speeds, even contributions from the UK to CDN origins in the US would typically get through. There is a complexity in “long fat” Internet connections that use TCP, caused by the combination of TCP window size and latency: increased latency increased the probability of packet loss, and when TCP noted a lost packet, the entire subsequent “window” of data would be discarded as it arrived, while a request was made to resend everything from the lost packet onward. Back at the source this caused problems with buffering, and very often, if the buffer was not big enough to hold a few windows' worth of the stream, the stream would stutter and fail as that particular part of the stream was simply dumped.
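
That window-size/latency ceiling can be made concrete. A minimal sketch follows, assuming the classic 64KB maximum TCP window (i.e., no window scaling) and purely illustrative RTT figures:

```python
# A single TCP connection can have at most one receive window of data in
# flight per round trip, so its throughput is capped at window_size / RTT.

def tcp_throughput_cap_kbps(window_bytes: int, rtt_ms: float) -> float:
    """Maximum achievable throughput of one TCP connection, in kbps."""
    return (window_bytes * 8 / 1000.0) / (rtt_ms / 1000.0)

WINDOW_BYTES = 64 * 1024  # classic 64 KB maximum window, no window scaling

for rtt_ms in (10, 40, 100, 150):  # e.g. local ISP entry point vs. transatlantic
    cap = tcp_throughput_cap_kbps(WINDOW_BYTES, rtt_ms)
    print(f"RTT {rtt_ms:>3} ms -> cap {cap:8.0f} kbps")

# The cap is only part of the story: on a lossy long-haul path every lost
# packet forces the whole window to be resent from that point onward, so
# sustained goodput fell far below this ceiling.
```

This is also why the localized CDN entry points described next helped so much: a short first hop means a small RTT, and the CDN handles the long-haul window tuning internally.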

This problem stumped a number of webcasters for many years. Despite providing high-capacity links onsite, it seemed impossible to throughput high-quality webcasts all the way to the CDN origins in the US. However, as CDNs provided more localized entry points/origins nearer to the ISPs to which the webcasters were connecting, the CDNs could take on the complexity of internally window sizing the long-haul links to their other locations. This quietly but importantly ensured that it became possible to contribute reliably at speeds exceeding 400kbps, and by 2005 most clients jumped to demanding multiple-bitrate streams of 1.4Mbps, 700kbps, and 384kbps. So the days of bonded ISDN were superseded as a variety of new fixed-line services, such as SDSL (symmetrical DSL) and leased lines, became all the more common.

Now, over a decade later, with high-capacity FTTH connectivity coupled with better contention ratios, most IP lines are capable of providing decent bandwidth for contribution feeds, and I have produced many webcasts using just a domestic-grade broadband line, although I typically keep a good 20% overhead. Moreover, these days adaptive bitrate has superseded multiple bitrate.
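
To make that 20% rule of thumb concrete, here is a minimal sketch; the uplink figure is purely illustrative:

```python
# Rule-of-thumb headroom check for a contribution feed on a domestic line.

def max_contribution_kbps(measured_uplink_kbps: float, headroom: float = 0.20) -> float:
    """Highest total encode bitrate that still leaves the given headroom free."""
    return measured_uplink_kbps * (1 - headroom)

# Example: a measured 10 Mbps uplink leaves ~8 Mbps for the encoder; with
# adaptive bitrate this budget is shared across all the renditions produced.
print(f"{max_contribution_kbps(10_000):.0f} kbps available for the encoder")
```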

Where you can find a wire, you will remove a significant number of the variables that most of the radio-based links described below are exposed to. That said, it is important to understand the characteristics of the line, and to have a point of contact who can help if the line is not performing as expected. Your webcast will depend on it, and if the line is not entirely under your own control, it is key to know whom you can hold to account if needed.

 