A CDN (content delivery network) is a technology that appeared in the 1990s but became truly popular only after 2010, when web developers realized their dire need for speed and CDN providers lowered the price of CDN service considerably. Today, modern CDN solutions are among the most in-demand services on the IT market. A CDN is a real find for webmasters: it accelerates web pages, provides exceptional security and boosts SEO. But initially, networks did not have all these features.
CDN networks have gone through many changes and improvements over the years, and they are still evolving, since there is no limit to perfection. Today, a CDN is not just a caching option: a good CDN provides security measures, distributes traffic optimally and ensures global coverage. This article describes the development of CDN hosting from its start to the present day. It will help you understand how the technology worked in the beginning, and what was done to boost its usefulness and efficiency.
What will the future of content delivery networks be like?
- Historical perspective of CDN.
- About CDN 1.0.
- Extending CDN 1.0 demands.
- Google cache: a full-fledged alternative?
- The reaction of modern Telcos.
- CDN 2.0 is close to limits.
- CDN and business: recent trends and hot issues.
- Why do companies strive to create their own CDNs?
- Why do local operators play an important role?
- Financial aspects and Telcos’ response.
- What might the next generation of CDN look like?
Historical perspective of CDN
About CDN 1.0
The first generation of CDNs appeared more than 10 years ago to provide websites with better access bandwidth and improve Internet usage. That resulted in the development of content delivery networks that cached popular content on servers placed near the points of consumption to decrease latency and spread demand. Then intelligent routing and edge computation were integrated, and that triggered the growth of the best CDN companies, such as Limelight, Edgecast and Akamai. However, at this stage, the CDN was mostly an American phenomenon, thanks to the country's leadership in Internet development.
In comparison with simple caching, CDN 1.0 could deal with dynamic content and deliver static content better by "pushing" data to the caches in real time. Back in the 1990s, websites operated mostly with static content, and ISPs identified the elements that rarely changed and cached them on servers (the caching proxy was introduced then). CDN technology is more sophisticated: it uses more advanced algorithms and can adapt to changing network conditions, for instance congestion at a router or a link. CDNs are also more sophisticated in that the propagation of content through the network of servers is actively managed; cache-based and CDN-based technologies emerged from a lot of interaction and crossover.
Extending CDN 1.0 demands
In the early 2000s, websites' requirements increased: more storage capacity and bandwidth were demanded as video files became embedded in pages. That added more zeros to the data volumes, while the time available to serve content shrank (the maximum time to transfer one packet decreased from a few seconds to several milliseconds).
When the major OTT companies emerged, they offered on-demand content rather than live streaming; the latter appeared only recently, thanks to CDN streaming that still requires improvement (and will eventually lead us to CDN 3.0).
Deployment of on-demand video became another reason to use a content delivery network, and this is what drove the development of CDN 2.0. Initially, data was pre-recorded and sent on demand, and a video was just a big piece of data. The difference was that a user not willing to download a whole video file could stop viewing it at some point. But even on-demand video requires streaming, which triggered the development of CDN services that could optimize it. Some big enterprises like Netflix build their own CDNs, but the vast majority of OTT players use out-of-the-box infrastructure provided by CDN companies. These services deliver video bit by bit, so a user can watch only parts of a clip, and it does not have to be delivered entirely.
Then the demand for live video streaming came, and developers hit a dead end with conventional caching. Live video could not be cached like pre-recorded content, so the video CDN had to be modified to provide high bandwidth for point-to-point transfers between the server and viewers. However, creating and maintaining streaming feeds was technically challenging and very expensive. For conventional RTSP (Real Time Streaming Protocol) streaming, a cache does not really help. Back then, RTSP, with RTP (Real Time Protocol) carrying the media, was the most widespread approach to streaming video, and it worked well for distributing fragments of data to improve performance and reliability. But it turned out to have certain flaws: for instance, RTSP could be blocked by routers or firewalls.
HTTP has other advantages: it is a more standard solution and requires simpler client-side development. After Adobe implemented its own HTTP Dynamic Streaming protocol, HTTP streaming became widespread among many CDN providers. HTTP streaming involves server software that breaks video streams into small chunks saved as separate files, and delivers a "manifest file" that tells the media player how to assemble a stream from these chunks.
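The chunk-plus-manifest idea described above can be sketched in a few lines. This is a minimal illustration, not any provider's actual software: the 2 MiB chunk size, the file names and the simplified HLS-style playlist format are all assumptions for the example.

```python
# Sketch: split a video file into fixed-size chunks and emit a simple
# HLS-style manifest ("playlist") that a player could use to fetch them
# in order. Chunk size and file names are illustrative assumptions.
import os

CHUNK_SIZE = 2 * 1024 * 1024  # 2 MiB per chunk (illustrative)

def chunk_video(path: str, out_dir: str) -> list[str]:
    """Write the file at `path` as numbered chunk files; return their names."""
    os.makedirs(out_dir, exist_ok=True)
    names = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(CHUNK_SIZE)
            if not data:
                break
            name = f"segment_{index:05d}.ts"
            with open(os.path.join(out_dir, name), "wb") as dst:
                dst.write(data)
            names.append(name)
            index += 1
    return names

def write_manifest(names: list[str], out_dir: str) -> str:
    """Emit a simplified manifest listing the chunks in playback order."""
    manifest_path = os.path.join(out_dir, "playlist.m3u8")
    with open(manifest_path, "w") as f:
        f.write("#EXTM3U\n")
        for name in names:
            f.write(name + "\n")
        f.write("#EXT-X-ENDLIST\n")
    return manifest_path
```

Because every chunk is an ordinary file fetched over ordinary HTTP, any plain web server or cache can serve it, which is exactly why this approach spread so easily.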
Besides, this approach supports adaptive bitrate: it automatically sets the appropriate bitrate according to network conditions for a smooth user experience. Thus, every user gets the best video quality that the network and their device can sustain. Adaptive bitrate has become a milestone in the development of CDN delivery services: it enables a CDN to serve media streaming (including live content) by replicating data over edge cache servers, and a user requesting the stream is directed to the closest edge server. It goes without saying that such CDN technology turned out to be more cost-effective. With HTTP adaptive streaming, the edge server runs plain HTTP server software whose license is either free or cheaper than the license of Flash Media Streaming Server. That made the CDN cost of streaming media almost the same as that of a traditional caching CDN service.
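The core of the adaptive-bitrate decision can be sketched as follows: before fetching the next chunk, the player picks the highest encoded bitrate that fits the measured throughput, with a safety margin. The bitrate ladder and the 0.8 margin below are illustrative assumptions, not values from any specific player.

```python
# Sketch of adaptive bitrate selection: choose the highest rendition whose
# bitrate fits within a safety margin of the measured network throughput.
# The ladder (in kbit/s) and the 0.8 margin are illustrative assumptions.
BITRATE_LADDER = [400, 800, 1500, 3000, 6000]  # available renditions, kbit/s

def select_bitrate(measured_throughput_kbps: float, margin: float = 0.8) -> int:
    """Return the highest rendition at or below margin * throughput."""
    budget = measured_throughput_kbps * margin
    chosen = BITRATE_LADDER[0]  # always fall back to the lowest rendition
    for rate in BITRATE_LADDER:
        if rate <= budget:
            chosen = rate
    return chosen
```

Re-running this decision for every chunk is what lets the stream step down gracefully when the network degrades, instead of stalling to buffer.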
Google cache: a full-fledged alternative?
One option for webmasters to avoid using a content delivery network and reduce the cost of video delivery is to let Google deploy its cache in your network. Google has given YouTube almost perfect on-demand video access, and its network is bigger than even major global CDNs like Akamai's or Edgecast's. Before YouTube was introduced, caching tactics were relatively simple to define, because they could easily be applied to the most popular content. But services such as YouTube required more sophisticated strategies to serve a practically infinite catalog of content.
What Google offers is caching in ISP networks to reduce transit paths and offload traffic, so using it for video delivery is a reasonable option. When it comes to giants like YouTube, Google can apply proprietary formats and architectures to reach better results. However, deploying Google caching brings risks connected with the loss of control and ownership: Google may open the cache to some services even if the operator does not want that.
That leads us to the conclusion that sometimes even a cheap CDN is preferable to Google caching, as it gives better security and scalability.
The reaction of modern Telcos
Today, telecommunication companies deploy content delivery networks to relieve the stress on service delivery platforms. However, that does not solve the problem of last-mile bandwidth, so Telcos adopt various measures to boost bandwidth in the last mile. That involves building fiber: either all the way to the homes, as FTTH (Fiber To The Home), or as FTTC (Fiber To The Curb). It should be noted that, in fact, only a minority of homes will have direct access to fiber in the near future. Many years will pass before FTTH reaches almost 100% of homes, except in high-density population regions.
Thus, Telcos opt for deep-fiber strategies with short copper loop lengths, which allows boosting bandwidth up to 100 Mbps and higher and enables the transfer of multichannel HD services at 1080p quality or the higher resolutions that may appear over the next decade. For instance, Alcatel-Lucent introduced vectoring technology that improves copper performance by suppressing crosstalk interference between adjacent copper pairs. This approach to live streaming is limited to densely populated areas, and it is not cheap.
As mentioned, only a few IP operators are ready to share in-depth information, and they prefer to stay anonymous. Telecommunication companies have slightly different problems: for them, the main issue with CDN solutions is mobile networks, not just the fixed-line area. They note that in less developed markets 3G is only taking off now, but it is ahead of fixed broadband. In developed markets like France (where three quarters of the traffic is generated on mobile devices via WiFi or home networks), usage is driving the rollouts, not the technology. Thus, in the long-term perspective, global operators do not regard CDN networks and caching as an operational cost saving the way small companies do.
All in all, only 15% of the entire traffic is cached and saved, because content lifetime has shortened considerably. In developed countries, average traffic per visitor is 1.5 times higher than in most developing markets. Thus, simplifying delivery with CDN services is actually less cost-effective in developed markets, although that is where CDN hosting is mostly used.
One big operator has also revealed that although TV services are consumed and in demand across its whole market, OTT consumption is higher than expected, while viewing on a TV screen connected to the managed network is not so popular. Content delivery networks are created to solve this problem, which is why there is a dire need to deploy something here.
When IPTV was first introduced, operators optimized stream delivery by multicasting only the most popular channels. The list of channels changed over time, but their number steadily increased. In 2005, FastWeb introduced "intelligent multicast", and in the future CDN companies may borrow this idea. Operators are afraid of illegal P2P sharing, which is why deploying a scalable CDN service seems an obvious option. Slim Kachkachi, who worked on the content delivery network for SFR (a French Telco), claims that modern telecommunication companies will go much deeper into the network to expand their offerings.
What CDN providers opt for is increasing the number of PoPs (points of presence): every decent company offers tens or even hundreds of datacenters to deal with a growing volume of traffic. Uploading the same pieces of content to numerous caching servers in real time is a real challenge, but the sophisticated technologies used in CDNs allow for it. Most companies note that "Moore's Video law" holds: video traffic doubles every 18 months (or even every 12). In major fixed networks, traffic has grown by 25% over the past 5 years. Mobile traffic has increased by about 33%, and it seems to be growing faster.
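The "Moore's Video law" claim above is easy to turn into a projection. The sketch below assumes an arbitrary starting volume of 100 (in whatever units you like); only the doubling period comes from the text.

```python
# Project video traffic under "Moore's Video law": volume doubles every
# 18 months. The starting volume of 100 (arbitrary units) is an assumption
# used only to illustrate the growth curve.
def projected_traffic(months: int, start: float = 100.0,
                      doubling_months: float = 18.0) -> float:
    """Traffic volume after `months`, doubling every `doubling_months`."""
    return start * 2 ** (months / doubling_months)
```

Under an 18-month doubling period, traffic grows roughly tenfold every five years, which makes clear why providers keep adding PoPs.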
CDN 2.0 is close to limits
In recent years, it became evident that the second generation of CDN networks has certain architectural limitations. The main issue is end-to-end latency and exposure to events within the transmission path, such as congestion. The NBC and BBC streaming of the Olympic Games 2012 was harshly criticized by people who could not watch it, and this event highlighted the weak sides of current CDN hosting. The BBC used four types of streaming for different devices, and users were disappointed with the presentation on big screens: expectations were quite high, which made glitches even more noticeable. Most problems were caused by buffering.
Recent studies also show that traditional CDN streaming turns out to be more expensive than broadcast. Thus, Telcos should either partner with wired operators or wait for more effective CDN solutions.
CDN and business: recent trends and hot issues
Without a special technical solution, live OTT behaves like VoD in the network: it is unicast, with as many streams as there are users. That imposes a serious scalability limit. Note that CDNs do not just cache large files: content is divided into small chunks, and usually any single repeater stores only a few parts of a video stream. When operators use a CDN for streaming, content is only transferred to PoPs where users are active and requesting data. As soon as a second person starts watching the same live stream, the benefits of broadcast kick in, as if IP multicast were used.
Another reason why CDN and live content work well together is the unpredictability of live consumption compared with on-demand content: live content causes worse traffic spikes. And denying access to on-demand content is less serious than delaying the delivery of live content, for which users are much less tolerant.
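The "second viewer" effect described above can be sketched as a simple edge cache: the first request for a live chunk pulls it from the origin, and every later request for the same chunk is served locally. The class and method names below are illustrative, not any vendor's API.

```python
# Sketch of an edge PoP serving live-stream chunks: the first request for a
# chunk triggers exactly one fetch from the origin; every subsequent viewer
# of the same chunk is served from the local cache, so origin load stays
# flat no matter how many users watch. Names are illustrative assumptions.
class EdgeCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable: chunk_id -> bytes
        self._cache = {}                  # chunk_id -> cached bytes
        self.origin_fetches = 0           # counts hits on the origin server

    def get_chunk(self, chunk_id: str) -> bytes:
        if chunk_id not in self._cache:
            self._cache[chunk_id] = self._fetch(chunk_id)
            self.origin_fetches += 1
        return self._cache[chunk_id]
```

With this behavior, a PoP serving a thousand viewers of the same live chunk still costs one origin transfer, which is the multicast-like saving the paragraph describes.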
Why do companies strive to create their own CDNs?
No matter how many PoPs a CDN reseller or a CDN provider offers, clients have to use the infrastructure as it is: extra datacenters in certain regions and a lack of servers in the targeted areas. This is why operators that need better coverage in specific countries and regions start implementing their own CDN technology, and many of them move from caching static files to caching live TV and VoD for their own TV services. Both traditional and mobile CDN solutions are being developed.
Operators that want to ensure the best CDN experience for clients believe it is better to create an infrastructure tuned to their specific needs. The truth is that no third-party provider, even one with top CDN solutions, can be as reactive on the market as an operator that targets this very audience: local companies usually know better how to serve their customers properly.
Peering and the network edge are critical issues, and a CDN is a precious way to optimize expenses and boost agility. This drives many companies to form alliances to unite forces, provide the best service possible and offer an agreeable CDN cost: Orange & Akamai and Verizon & Edgecast are good examples.
Why do local operators play an important role?
Live OTT is considered a market-driven phenomenon, highly appreciated as a low-cost delivery technology. But in order to conquer the market, OTT providers should partner with traditional local operators, because those operators can provide many aspects of successful Pay TV that OTT technology does not offer. Local providers have such advantages as:
- Content localization;
- Installation and maintenance of devices;
- Local marketing and sales;
- Local call centers and service offices;
- Local data for marketing and advertisement.
Many operators do not want to depend on huge international suppliers, which is why they are searching for business opportunities to grow and support OTT services in the present environment.
Financial aspects and Telcos’ response
The Olympic Games 2012, successfully broadcast by the BBC, proved one thing: the available capacity was equal to the demand. That shows how cheaper and higher CDN capacity can trigger more demand in the future. However, as with many new things, live OTT streaming is a change resisted by telecommunication companies. Pay TV operators fear that an OTT business model based on ads and subscription revenue could prove quite successful, while content owners fear that their precious assets will be compromised by pirates in OTT. Meanwhile, CDNs will develop to the point where the distinction between managed and unmanaged networks becomes irrelevant. Comparing past and present CDNs, it becomes clear that raw bandwidth costs less and less.
What might the next generation of CDN look like?
Today, operators are not obsessed with closed IPTV networks: it is evident that the existing infrastructure should be used and developed in place. Considering all of the above, we suggest that a boost of capacity in CDN services would drive more OTT usage, and that improvements should be made in both the CDN network and the business model.
So far, live streaming has not caused too many problems for telecommunication companies, because their infrastructure is still capable of handling most of the load. But as more and more channels are added, and TV is used outside the home and becomes mobile, unicast traffic inevitably increases. While the second generation of video CDN proved much more effective than the first, CDN 3.0 promises to be a revolution: it is more than just a response to the lack of resources. Since many advanced operators are already trying to implement "Multicast To The Home" technology, CDN companies will opt for this approach as well.
Many industry giants reckon that standardization is a way for operators to cooperate and create a more efficient content delivery network: without common standards, how do operators know they are doing the same thing? One important point of CDN 3.0 will be the development of federated, united CDN storage and services. Many CDN services strive to cooperate on this problem to meet the threat of OTT companies, creating the fastest CDN with global coverage and the ability to deliver content anywhere. No matter how big or small a CDN provider is, many of them are open to CDN collaboration.
The choice of the best content delivery network depends on many criteria, including CDN pricing, devices, bearers, number of PoPs, etc. Whatever technology a CDN service uses, it is clear that modern networks are getting closer to their edge. The initiatives we've seen are mostly CDN solutions for storing on-demand content on servers, while not many of them can handle live OTT content.
Comparing the first and second generations of CDN, it becomes clear that the next generation of content delivery networks will be optimized to deliver dynamic content, and modern operators will be able to compete with major Telcos, expanding coverage not only on the Net, but over TV communications as well.