Mobile Broadband 2.0: The Top Disruptive Innovations

Summary: Key trends, tactics, and technologies for mobile broadband networks and services that will influence mid-term revenue opportunities, cost structures and competitive threats. Includes consideration of LTE, network sharing, WiFi, next-gen IP (EPC), small cells, CDNs, policy control, business model enablers and more. (March 2012, Executive Briefing Service, Future of the Networks Stream).



Below is an extract from this 44 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings and more on Facebook at the Silicon Valley (27-28 March) and London (12-13 June) New Digital Economics Brainstorms.




Introduction

Telco 2.0 has previously published a wide variety of documents and blog posts on mobile broadband topics – content delivery networks (CDNs), mobile CDNs, WiFi offloading, Public WiFi, network outsourcing (“‘Under-The-Floor’ (UTF) Players: threat or opportunity? ”) and so forth. Our conferences have featured speakers and panellists discussing operator data-plan pricing strategies, tablets, network policy and numerous other angles. We’ve also featured guest material such as Arete Research’s report LTE: Late, Tempting, and Elusive.

In our recent ‘Under the Floor (UTF) Players‘ Briefing we looked at strategies to deal with some of the challenges facing operators as a result of market structure and outsourcing.


This Executive Briefing is intended to complement and extend those efforts, looking specifically at those technical and business trends which are truly “disruptive”, either immediately or in the medium-term future. In essence, the document can be thought of as a checklist for strategists – pointing out key technologies or trends around mobile broadband networks and services that will influence mid-term revenue opportunities and threats. Some of those checklist items are relatively well-known, others more obscure but nonetheless important. What this document doesn’t cover are the more straightforward concepts around pricing, customer service, segmentation and so forth – all important to get right, but rarely disruptive in nature.

During 2012, Telco 2.0 will be rolling out a new MBB workshop concept, which will audit operators’ existing technology strategy and planning around mobile data services and infrastructure. This briefing document is a roundup of some of the critical issues we will be advising on, as well as our top-level thinking on the importance of each trend.

It starts by discussing some of the issues which determine the extent of any disruption:

  • Growth in mobile data usage – and whether the much-vaunted “tsunami” of traffic may be slowing down
  • The role of standardisation, and whether it is a facilitator or inhibitor of disruption
  • Whether the most important MBB disruptions are likely to be telco-driven, or will stem from other actors such as device suppliers, IT companies or Internet firms.

The report then drills into a few particular domains where technology is evolving, looking at some of the most interesting and far-reaching trends and innovations. These are split broadly between:

  • Network infrastructure evolution (radio and core)
  • Control and policy functions, and business-model enablers

It is not feasible for us to cover all these areas in huge depth in a briefing paper such as this. Some areas such as CDNs and LTE have already been subject to other Telco 2.0 analysis, and this will be linked to where appropriate. Instead, we have drilled down into certain aspects we feel are especially interesting, particularly where these are outside the mainstream of industry awareness and thinking – and tried to map technical evolution paths onto potential business model opportunities and threats.

This report cannot be truly exhaustive – it doesn’t look at the nitty-gritty of silicon components, or antenna design, for example. It also treads a fine line between technological accuracy and ease of understanding for the knowledgeable but business-focused reader. For more detail or clarification on any area, please get in touch with us – email contact@stlpartners.com or call +44 (0) 207 247 5003.

Telco-driven disruption vs. external trends

There are various potential sources of disruption for the mobile broadband marketplace:

  • New technologies and business models implemented by telcos, which increase revenues, decrease costs, improve performance or alter the competitive dynamics between service providers.
  • 3rd party developments that can either bolster or undermine the operators’ broadband strategies. This includes both direct MBB innovations (new uses of WiFi, for example) and bleed-over from adjacent marketplaces such as device creation or content/application provision.
  • External, non-technology effects such as changing regulation, economic backdrop or consumer behaviour.

The majority of this report covers “official” telco-centric innovations – LTE networks, new forms of policy control and so on.

External disruptions to monitor

But the most dangerous form of innovation comes from third parties, which can undermine assumptions about the ways mobile broadband can be used, introduce new mechanisms for arbitrage, or otherwise subvert operators’ pricing plans or network controls.

In the voice communications world, there are often regulations in place to protect service providers – such as bans on using “SIM boxes” to terminate calls and reduce interconnection payments. But in the data environment, it is far less obvious that such work-arounds can be deemed illegal, or even fall outside the scope of fair-usage conditions. That said, we have already seen some attempts by telcos to manage these effects – such as charging extra for “tethering” on smartphones.

It is not really possible to predict all disruptions of this type – such is the nature of innovation. But a few examples can help market participants gauge their level of awareness, as well as motivate ongoing “scanning” for new developments.

Some of the areas being followed by Telco 2.0 include:

  • Connection-sharing. This is where users link devices together locally, perhaps through WiFi or Bluetooth, and share multiple cellular data connections. It is essentially “multi-tethering” – for example, three smartphones discovering each other nearby, perhaps each with a different 3G/4G provider, and pooling their connections for shared use. From the user’s point of view it could improve effective coverage and maximum/average throughput speed. But from the operators’ view it would break the link between user identity and subscription, and essentially offload traffic from poor-quality networks on to better ones (a simple sketch of the mechanism follows this list).
  • SoftSIM or SIM-free wireless. Over the last five years, various attempts have been made to decouple mobile data connections from SIM-based authentication. In some ways this is not new – WiFi doesn’t need a SIM, it’s optional for WiMAX, and CDMA devices have typically been “hard-coded” to register only on a specific operator network. But the GSM/UMTS/LTE world has always relied on subscriber identification through a physical card. At one level, this is very good – SIMs are distributed easily and have enabled a successful prepay ecosystem to evolve. They provide operator control points and the ability to host secure applications on the card itself. However, the need to obtain a physical card restricts business models, especially for transient/temporary use such as a “one day pass”. The most dangerous potential change is a move to a “soft” SIM, embedded in the device software stack. Companies such as Apple have long dreamed of acting as a virtual network provider, brokering between the user and multiple networks. There is even a patent for encouraging per-call (or perhaps per-data-connection) bidding, with telcos competing head to head on price/quality grounds. Telco 2.0 views this type of least-cost routing as a major potential risk for operators, especially for mobile data – although it also potentially enables some new business models that have been difficult to achieve in the past.
  • Encryption. Many of the new business models and technology deployment plans of operators, vendors and standards bodies are predicated on analysing data flows. Deep packet inspection (DPI) is expected to be used to identify applications or traffic types, enabling differential treatment in the network, or different charging models to be employed. Yet this is rendered largely useless (or at least severely limited) when various types of encryption are used. Various content and application types already secure data in this way – content DRM, BlackBerry traffic, corporate VPN connections and so on. But increasingly, we will see major Internet companies such as Apple, Google, Facebook and Microsoft using such techniques, both for their own users’ security and because it hides precise indicators of usage from the network operators. If a future Android phone sends all its mobile data back via a VPN tunnel and breaks it out in Mountain View, California, operators will be unable to discern YouTube video from search or VoIP traffic. This is one of the reasons why application-based charging models – one- or two-sided – are difficult to implement.
  • Application evolution speed. One of the largest challenges for operators is the pace of change of mobile applications. The growing penetration of smartphones and appstores, and the ease of “viral” adoption of new services, cause a fundamental problem – applications emerge and evolve on a month-by-month or even week-by-week basis. This is faster than any realistic internal telco process for developing new pricing plans or changing network policies. Worse, the nature of “applications” is itself changing, with the advent of HTML5 web-apps and the ability to “mash up” multiple functions in one app “wrapper”. Is a YouTube video shared and embedded in a Facebook page a “video service”, or “social networking”?
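
As flagged above, a minimal sketch of the connection-sharing mechanism illustrates why it worries operators. All operator names and link speeds below are hypothetical, purely for illustration:

```python
# Hypothetical sketch of "multi-tethering": nearby devices pool their
# cellular links and split one download across them in proportion to
# each link's throughput. Link speeds here are illustrative only.

def pooled_download(links, total_mb):
    """links: list of (operator_name, throughput_mbps) tuples."""
    total_mbps = sum(mbps for _, mbps in links)
    # Proportional split: every link finishes its share at the same time.
    shares = {name: total_mb * mbps / total_mbps for name, mbps in links}
    seconds = total_mb * 8 / total_mbps
    return shares, seconds

links = [("operator_A", 2.0), ("operator_B", 6.0), ("operator_C", 4.0)]
shares, t = pooled_download(links, total_mb=60)
print(shares)   # {'operator_A': 10.0, 'operator_B': 30.0, 'operator_C': 20.0}
print(t, "s")   # 40.0 s pooled, vs. 240 s on operator_A's 2 Mbit/s link alone
```

Note that operator_B – the best network in this toy example – silently carries three times operator_A’s share: exactly the “offload from poor-quality networks on to better ones” effect described above, with no commercial relationship behind it.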

It is also important to recognise that certain procedures and technologies used in policy and traffic management will likely have unanticipated side-effects. Users, devices and applications are likely to respond to controls that limit their actions, while other developments may spontaneously produce “emergent behaviours”. For instance, there is a risk that too-strict data caps change usage models for smartphones, making users connect to the network only when absolutely necessary. This is likely to be at the same times and places when other users also feel it necessary, with the unfortunate implication that peaks of usage get “spikier” rather than being ironed out.

There is no easy answer to these types of external threat. Operator strategists and planners simply need to keep watch on emerging trends, and perhaps stress-test their assumptions and forecasts with market observers who keep tabs on such developments.

The mobile data explosion… or maybe not?

It is an undisputed fact that mobile data is growing exponentially around the world. Or is it?

A J-curve or an S-curve?

Telco 2.0 certainly thinks that growth in data usage is occurring, but is starting to see signs that the smooth curves that drive so many other decisions might not be so smooth – or so steep – after all. If this proves to be the case, it could be far more disruptive to operators and vendors than any of the individual technologies discussed later in the report. If operator strategists are not at least scenario-planning for lower data growth rates, they may find themselves in a very uncomfortable position in a year’s time.

In its most recent study of mobile operators’ traffic patterns, Ericsson concluded that Q2 2011 data growth was just 8% globally, quarter-on-quarter – a far cry from the 20%+ quarterly growth seen previously, and leaving a chart that looks distinctly like the beginning of an S-curve rather than a continued “hockey stick”. Given that the 8% includes a sizeable contribution from undoubted high-growth developing markets like China, it suggests that other markets are maturing quickly. (We are rather sceptical of Ericsson’s suggestion of seasonality in the data.) Other data points come from O2 in the UK, which appears to have had essentially zero traffic growth for the past few quarters, and Vodafone, which now cites European data traffic as growing more slowly (19% year-on-year) than its data revenues (21%). Our view is that current global growth is c.60-70% per annum – c.40% in mature markets and 100%+ in developing markets.
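
A quick compounding check puts those quarterly figures in annual terms (pure arithmetic, no data assumed), and lines up with the growth rates above:

```python
# Annualising quarter-on-quarter growth: (1 + q)^4 - 1
for q in (0.08, 0.20):
    print(f"{q:.0%} QoQ -> {(1 + q) ** 4 - 1:.0%} per year")
# 8% QoQ  -> 36% per year  (close to our c.40% mature-market estimate)
# 20% QoQ -> 107% per year (the earlier "hockey stick" regime)
```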

Figure 1 – Trends in European data usage


Now it is possible that various one-off factors are at play here – the shift from unlimited to tiered pricing plans, stronger enforcement of “fair-use” policies and the removal of particularly egregious heavy users. Certainly, other operators are still reporting strong growth in traffic levels. We may also see a resumption in growth, for example if cellular-connected tablets start to be used widely for streaming video.

But we should also consider the potential market disruption if the picture is less straightforward than the famous exponential charts. Even if the chart eventually looks like a two-stage S, or a “kinked” exponential, the intervening pause may have implications, much as a short recession does for an economy. Many of the technical and business model innovations of recent years have been responses to an expected continual upward spiral of demand – either controlling users’ access to network resources, pricing it more highly and with greater granularity, or building out extra capacity at a lower price. Even leaving aside the fact that raw, aggregated “traffic” levels are a poor indicator of cost or congestion, any interruption or slow-down of the growth will invalidate a lot of assumptions and plans.

Our view is that the scary forecasts of “explosions” and “tsunamis” have led virtually all parts of the industry to create solutions to the problem. We can probably list more than 20 approaches, most of them standalone “silos”.

Figure 2 – A plethora of mobile data traffic management solutions


What seems to have happened is that at least 10 of those approaches have worked – caps/tiers, video optimisation, WiFi offload, network densification and optimisation, collaboration with application firms to create “network-friendly” software and so forth. Taken collectively, there is actually a risk that they have worked “too well”, to the extent that some previous forecasts have turned into “self-denying prophecies”.

There is also another common forecasting problem at work – the assumption that later adopters of a technology will behave like earlier users. In many markets we are now reaching 30-50% smartphone penetration. That means that the most enthusiastic users are already connected, and we are left with those who are (largely) ambivalent and probably quite light users of data. That will bring the averages down, even if each individual user is still increasing their consumption over time (the simple cohort model below illustrates the effect). But even that assumption may be flawed, as caps have made people concentrate much more on their usage, offloading to WiFi and restricting their data flows. There is also some evidence that the growing number of free WiFi points is reducing laptop use of mobile data, which accounts for 70-80% of the total in some markets, while the much-hyped shift to tablets isn’t driving much extra mobile data, as most are WiFi-only.
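
A simple cohort model, with purely illustrative numbers, shows how the blended average can fall even while every individual’s consumption rises:

```python
# Illustrative cohort model: average usage per subscriber falls as
# lighter late adopters join, even though each cohort's own usage grows.
# All figures are hypothetical.
def blended_avg_gb(cohorts):
    users = sum(u for u, _ in cohorts)
    return sum(u * gb for u, gb in cohorts) / users

year_1 = [(10e6, 1.0)]                   # 10M enthusiasts at 1.0 GB/month
year_2 = [(10e6, 1.3),                   # enthusiasts now use 30% more...
          (15e6, 0.2)]                   # ...but 15M light users have joined
print(blended_avg_gb(year_1))            # 1.0 GB/user/month
print(round(blended_avg_gb(year_2), 2))  # 0.64 GB/user/month - average falls
```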

So has the industry over-reacted to the threat of a “capacity crunch”? What might be the implications?

The problem is that focusing on a single, narrow metric – “GB of data across the network” – ignores some important nuances and finer detail. From an economics standpoint, network costs tend to be driven by two main criteria:

  • Network coverage in terms of area or population
  • Network capacity at the busiest places/times

Coverage is therefore (generally) driven by factors other than data traffic volumes. Many cells have to be built and run anyway, irrespective of whether there’s actually much load – the operators all want to claim good footprints and may be subject to regulatory rollout requirements. Peak capacity in the most popular locations, however, is a different matter. That is where issues such as spectrum availability, cell site locations and the latest high-speed networks become much more important – and hence costs do indeed rise. However, it is far from obvious that the problems at those “busy hours” are always caused by “data hogs” rather than by sheer numbers of people each using a small amount of data. (There is also another issue around signalling traffic, discussed later.)

Yes, there is a generally positive correlation between network-wide volume growth and costs, but it is far from perfect, and certainly not a direct causal relationship.
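
A toy cost model makes the point concrete. All coefficients below are hypothetical; the structure, not the numbers, is what matters:

```python
# Toy model: network cost is driven by coverage (sites built regardless
# of load) plus busy-hour capacity - not by total monthly gigabytes.
# Coefficients are hypothetical, for illustration only ($M per year).
def annual_cost_musd(sites, busy_hour_gbps,
                     per_site=0.02, per_busy_gbps=1.0):
    return sites * per_site + busy_hour_gbps * per_busy_gbps

print(annual_cost_musd(10_000, 50))  # 250.0 - baseline
print(annual_cost_musd(10_000, 50))  # 250.0 - off-peak GB doubles: no change
print(annual_cost_musd(10_000, 60))  # 260.0 - a busier busy hour is what costs
```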

So let’s hypothesise briefly about what might occur if data traffic growth does tail off, at least in mature markets.

  • Delays to LTE rollout – if 3G networks are filling up less quickly than expected, the urgency of 4G deployment is reduced.
  • The focus of policy and pricing for mobile data may switch back to encouraging use rather than discouraging/controlling it. Capacity utilisation may become an important metric, given the high fixed costs and low marginal ones. Expect more loyalty-type schemes, plus various methods to drive more usage in quiet cells or off-peak times.
  • Regulators may start to take different views of traffic management or predicted spectrum requirements.
  • Prices for mobile data might start to fall again, after a period where we have seen them rise. Some operators might be tempted back to unlimited plans, for example if they offer “unlimited off-peak” or similar options.
  • Many of the more complex and commercially-risky approaches to tariffing mobile data might be deprioritised. For example, application-specific pricing involving packet-inspection and filtering might get pushed back down the agenda.
  • In some cases, we may even end up with overcapacity on cellular data networks – not to the degree we saw in fibre in 2001-2004, but there might still be an “overhang” in some places, especially if there are multiple 4G networks.
  • Steady growth of (say) 20-30% in peak data per annum should be manageable with current trends in price/performance improvement. It should be possible to deploy and run networks to meet that demand with a falling unit “production cost”, for example through the use of small cells (sketched below). That may reduce the pressure to fill the “revenue gap” on the infamous scissors-diagram chart.
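
A minimal sketch of that compounding effect, with purely hypothetical rates:

```python
# If busy-hour demand grows 25%/year while the unit cost of capacity
# falls 30%/year (both figures hypothetical), total cost declines.
demand, unit_cost = 1.0, 1.0
for year in range(1, 6):
    demand, unit_cost = demand * 1.25, unit_cost * 0.70
    print(year, round(demand * unit_cost, 2))
# Relative annual cost: 0.88, 0.77, 0.67, 0.59, 0.51
```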

Overall, it is still a little too early to declare shifting growth patterns for mobile data a “disruption”. There is a lack of clarity on what is happening, especially in terms of responses to the new controls, pricing and management technologies recently put in place. But operators need to watch extremely closely what is going on – and plan for multiple scenarios.

Specific recommendations will depend on an individual operator’s circumstances – user base, market maturity, spectrum assets, competition and so on. But broadly, we see three scenarios and implications for operators:

  • “All hands on deck!”: Continued strong growth (perhaps with a small “blip”) which maintains the pressure on networks, threatens congestion, and drives the need for additional capacity, spectrum and capex.
    • Operators should continue with current multiple strategies for dealing with data traffic – acquiring new spectrum, upgrading backhaul, exploring massive capacity enhancement with small cells and examining a variety of offload and optimisation techniques. Where possible, they should explore two-sided models for charging, and use advanced pricing, policy or segmentation techniques to rein in abusers and reward those customers and applications that are parsimonious with their data use. Vigorous lobbying will be needed to gain more spectrum, relax Net Neutrality rules and perhaps “tax” content/Internet companies for traffic injected onto networks.
  • “Panic over”: Moderating and patchy growth, which settles to a manageable rate – comparable with the patterns seen in the fixed broadband marketplace.
    • This will mean that operators can “relax” a little, with the respite in explosive growth meaning that the continued capex cycles should be more modest and predictable. Extension of today’s pricing and segmentation strategies should improve margins, with continued innovation in business models able to proceed without rush, and without risking confrontation with Internet/content companies over traffic management techniques. Focus can shift towards monetising customer insight, ensuring that LTE rollouts are strategic rather than tactical, and exploring new content and communications services that exploit the improving capabilities of the network.
  • “Hangover”: Growth flattens off rapidly, leaving operators with unused capacity and threatening brutal price competition between telcos.
    • This scenario could prove painful, reminiscent of the early-2000s experience in the fixed-broadband marketplace. Wholesale business models could help generate incremental traffic and revenue, while the emphasis will be on fixed-cost minimisation. Some operators will scale back 4G rollouts until cost and maturity pass the tipping-point for outright replacement of 3G. Restrictive policies on bandwidth use will be lifted, as operators compete to give customers the fastest / most-open access to the Internet on mobile devices. Consolidation – and perhaps bankruptcies – may ensue, as declining data prices coincide with substitution of the core voice and messaging business.

To read the note in full, including the following analysis…

  • Introduction
  • Telco-driven disruption vs. external trends
  • External disruptions to monitor
  • The mobile data explosion… or maybe not?
  • A J-curve or an S-curve?
  • Evolving the mobile network
  • Overview
  • LTE
  • Network sharing, wholesale and outsourcing
  • WiFi
  • Next-gen IP core networks (EPC)
  • Femtocells / small cells / “cloud RANs”
  • HetNets
  • Advanced offload: LIPA, SIPTO & others
  • Peer-to-peer connectivity
  • Self optimising networks (SON)
  • M2M-specific broadband innovations
  • Policy, control & business model enablers
  • The internal politics of mobile broadband & policy
  • Two sided business-model enablement
  • Congestion exposure
  • Mobile video networking and CDNs
  • Controlling signalling traffic
  • Device intelligence
  • Analytics & QoE awareness
  • Conclusions & recommendations
  • Index

…and the following figures…

  • Figure 1 – Trends in European data usage
  • Figure 2 – A plethora of mobile data traffic management solutions
  • Figure 3 – Not all operator WiFi is “offload” – other use cases include “onload”
  • Figure 4 – Internal ‘power tensions’ over managing mobile broadband
  • Figure 5 – How a congestion API could work
  • Figure 6 – Relative Maturity of MBB Management Solutions
  • Figure 7 – Laptops generate traffic volume, smartphones create signalling load
  • Figure 8 – Measuring Quality of Experience
  • Figure 9 – Summary of disruptive network innovations

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 44 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, geographies, people and products referenced: 3GPP, Aero2, Alcatel Lucent, AllJoyn, ALU, Amazon, Amdocs, Android, Apple, AT&T, ATIS, BBC, BlackBerry, Bridgewater, CarrierIQ, China, China Mobile, China Unicom, Clearwire, Conex, DoCoMo, Ericsson, Europe, EverythingEverywhere, Facebook, Femto Forum, FlashLinq, Free, Germany, Google, GSMA, H3G, Huawei, IETF, IMEI, IMSI, InterDigital, iPhones, Kenya, Kindle, Light Radio, LightSquared, Los Angeles, MBNL, Microsoft, Mobily, Netflix, NGMN, Norway, NSN, O2, WiFi, Openet, Qualcomm, Radisys, Russia, Saudi Arabia, SoftBank, Sony, Stoke, Telefonica, Telenor, Time Warner Cable, T-Mobile, UK, US, Verizon, Vita, Vodafone, WhatsApp, Yota, YouTube, ZTE.

Technologies and industry terms referenced: 2G, 3G, 4.5G, 4G, Adaptive bitrate streaming, ANDSF (Access Network Discovery and Selection Function), API, backhaul, Bluetooth, BSS, capacity crunch, capex, caps/tiers, CDMA, CDN, CDNs, Cloud RAN, content delivery networks (CDNs), Continuous Computing, Deep packet inspection (DPI), DPI, DRM, Encryption, Enhanced video, EPC, ePDG (Evolved Packet Data Gateway), Evolved Packet System, Femtocells, GGSN, GPS, GSM, Heterogeneous Network (HetNet), Heterogeneous Networks (HetNets), HLRs, hotspots, HSPA, HSS (Home Subscriber Server), HTML5, HTTP Live Streaming, IFOM (IP Flow Mobility and Seamless Offload), IMS, IPR, IPv4, IPv6, LIPA (Local IP Access), LTE, M2M, M2M network enhancements, metro-cells, MiFi, MIMO (multiple input, multiple output), MME (Mobility Management Entity), mobile CDNs, mobile data, MOSAP, MSISDN, MVNAs (mobile virtual network aggregators), MVNO, Net Neutrality, network outsourcing, Network sharing, Next-generation core networks, NFC, NodeBs, offload, OSS, outsourcing, P2P, Peer-to-peer connectivity, PGW (PDN Gateway), picocells, policy, Policy and Charging Rules Function (PCRF), Pre-cached video, pricing, Proximity networks, Public WiFi, QoE, QoS, RAN optimisation, RCS, remote radio heads, RFID, self-optimising network technology (SON), Self-optimising networks (SON), SGW (Serving Gateway), SIM-free wireless, single RANs, SIPTO (Selective IP Traffic Offload), SMS, SoftSIM, spectrum, super-femtos, Telco 2.0 Happy Pipe, Transparent optimisation, UMTS, ‘Under-The-Floor’ (UTF) Players, video optimisation, VoIP, VoLTE, VPN, White space, WiFi, WiFi Direct, WiFi offloading, WiMAX, WLAN.

CDNs 2.0: should telcos compete with Akamai?

Content Delivery Networks (CDNs) such as Akamai’s are used to improve the quality and reduce costs of delivering digital content at volume. What role should telcos now play in CDNs? (September 2011, Executive Briefing Service, Future of the Networks Stream).

Below is an extract from this 19 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.


Introduction

 

We’ve written about Akamai’s technology strategy for global CDN before, as a fine example of best practice in online video distribution and a case study in two-sided business models, to say nothing of being a company that knows how to work with the grain of the Internet. Recently, Akamai published a paper giving an overview of its network and how it works. It’s a great paper, if something of a serious read. Having read, enjoyed and digested it, we’ve distilled the main elements in the following analysis, and used that as a basis to look at telcos’ opportunities in the CDN market.

Related Telco 2.0 Research

In the strategy report Mobile, Fixed and Wholesale Broadband Business Models – Best Practice Innovation, ‘Telco 2.0′ Opportunities, Forecasts and Future Scenarios we examined a number of different options for telcos to reduce costs and improve the quality of content delivery, including Content Delivery Networks (CDNs).

This followed on from Future Broadband Business Models – Beyond Bundling: winning the new $250Bn delivery game in which we looked at long term trends in network architectures, including the continuing move of intelligence and storage towards the edge of the network. Most recently, in Broadband 2.0: Delivering Video and Mobile CDNs we looked at whether there is now a compelling need for Mobile CDNs, and if so, should operators partner with existing players or build / buy their own?

We’ll also be looking in depth at the opportunities in mobile CDNs at the EMEA Executive Brainstorm in London on 9-10th November 2011.

Why have a CDN anyway?

The basic CDN concept is simple. Rather than sending one copy of a video stream, software update or JavaScript library over the Internet to each user who wants it, the content is stored inside their service provider’s network, typically at the POP level in a fixed ISP.

That way, there are savings on interconnect traffic (whether in terms of paid-for transit, capex, or stress on peering relationships), and by locating the servers strategically, savings are also possible on internal backhaul traffic. Users and content providers benefit from lower latency, and therefore faster download times, snappier user interface response, and also from higher reliability because the content servers are no longer a single point of failure.

What can be done with content can also be done with code. As well as simple file servers and media streaming servers, application servers can be deployed in a CDN to bring the same benefits to Web applications. Because the content providers are customers of the CDN, it is also possible to apply content optimisation, with their agreement, at the time material is uploaded to the CDN. This makes it possible to save further traffic, and to avoid nasty accidents like this one.
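
At its heart, a CDN edge node is a cache in front of an origin server. The sketch below (Python, illustrative only – real CDNs add request routing, expiry, purging and much else) shows the basic mechanism:

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU edge cache: serve hits locally, fetch misses from origin."""
    def __init__(self, capacity_mb, fetch_from_origin):
        self.capacity_mb, self.used_mb = capacity_mb, 0.0
        self.store = OrderedDict()          # url -> (data, size_mb), LRU order
        self.fetch_from_origin = fetch_from_origin

    def get(self, url):
        if url in self.store:               # hit: no interconnect traffic
            self.store.move_to_end(url)
            return self.store[url][0]
        data, size_mb = self.fetch_from_origin(url)    # miss: one origin fetch
        while self.store and self.used_mb + size_mb > self.capacity_mb:
            _, (_, evicted_mb) = self.store.popitem(last=False)  # drop LRU
            self.used_mb -= evicted_mb
        self.store[url] = (data, size_mb)
        self.used_mb += size_mb
        return data
```

Every hit is traffic that never crosses the peering or transit links upstream of the cache – which is where the savings described above come from.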

Once the CDN servers are deployed, making the network efficient means filling them with the right content and siting them where they will actually be used. An important point about CDNs – and one that may play to telcos’ strengths – is that location matters.

Figure 1: With higher speeds, geography starts to dominate download times


Source: Akamai
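
The physics behind the table is TCP’s window mechanism: sustained throughput on a single connection is bounded by roughly window size divided by round-trip time, so as links get faster, distance (latency) becomes the binding constraint. A back-of-envelope sketch – the 64 KB window, the 100 Mbit/s link, the 1 GB file and the RTT values are all assumptions for illustration:

```python
# TCP throughput ceiling ~= window_size / round_trip_time. On fast links,
# RTT (i.e. distance) sets the download time. Window and RTTs assumed.
WINDOW_BITS = 64 * 1024 * 8                 # 64 KB receive window

def download_secs(file_gb, link_mbps, rtt_ms):
    tcp_cap_mbps = WINDOW_BITS / (rtt_ms / 1000) / 1e6
    return file_gb * 8000 / min(link_mbps, tcp_cap_mbps)

for rtt_ms in (5, 50, 200):                 # local / continental / transoceanic
    print(f"{rtt_ms:>3} ms RTT: {download_secs(1.0, 100, rtt_ms):,.0f} s")
# 5 ms: 80 s (link-limited); 50 ms: 763 s; 200 ms: 3,052 s (RTT-limited)
```

Moving the server from 200 ms to 5 ms away speeds up this (single-connection, no-window-scaling) download by nearly 40x without touching the access link – the essence of the CDN proposition.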

CDN Player Strategies

Market Overview

CDNs are a diverse group of businesses, with several major players – notably Akamai, the market leader, EdgeCast, and Limelight Networks, all of which are pure-play CDNs – and also a number of players that are part of either carriers or Web 2.0 majors. Level(3), which is widely expected to acquire the Limelight CDN, is better known as a massive Internet backbone operator. BT Group and Telefonica both have CDN products. On the other hand, Google, Amazon, and Microsoft operate their own, very substantial CDNs in support of their own businesses. Amazon also provides a basic CDN service to third parties. Beyond these, there are a substantial number of small players.

Akamai is by far the biggest; Arbor Networks estimated that it might account for as much as 15% of Internet traffic once the actual CDN traffic was counted in, while the top five CDNs accounted for 10% of inter-domain traffic. The gap between the two measurements is itself a testament to the effectiveness of CDN as a methodology: traffic served from caches inside eyeball networks never crosses inter-domain links, so it disappears from inter-domain statistics.

The impact of CDN

As an example of the benefits of their CDN, above and beyond ‘a better viewing experience’, Akamai claim that they can demonstrate a 15% increase in completed transactions on an e-commerce site by using their application acceleration product. This doesn’t seem out of court, as Amazon.com has cited similar numbers in the past, in their case by reducing the volume of data needed to deliver a given web page rather than by accelerating its delivery.

As a consequence of these benefits, and the predicted growth in internet traffic, Akamai expect traffic on their platform to reach levels equivalent to the throughput of a US national broadcast TV station within 2-5 years. In the fixed world, Akamai claims offload rates of as much as 90%. The Jetstream CDN blog points out that mobile operators might be able to offload as much as 65% of their traffic into the CDN. These numbers refer only to traffic sources that are customers of the CDN, but it ought to be obvious that offloading 90% of the YouTube or BBC iPlayer traffic is worth having.

In Broadband 2.0: Mobile CDNs and video distribution we looked at the early prospects for Mobile CDN, and indeed, Akamai’s own move into the mobile industry is only beginning. However, Telefonica recently announced that its internal, group-wide CDN has reached an initial capability, with service available in Europe and in Argentina. They intend to expand across their entire footprint. We are aware of at least one other mobile operator which is actively investing in CDN capabilities. The degree to which CDN capabilities can be integrated into mobile networks is dependent on the operator’s choice of network architecture, which we discuss later in this note.

It’s also worth noting that one of Akamai’s unique selling points is that it is very much a global operator. As usual, there’s a problem for operators, especially mobile operators, in that the big Internet platforms are global and operators are regional. Content owners can deal with one CDN for their services all around the world – they can’t deal with one telco. Also, big video sources like national TV broadcasters can usually deal with one ex-incumbent fixed operator and cover much of the market, but must deal with several mobile operators.

Application Delivery: the frontier of CDN

Akamai is already doing a lot of what we call “ADN” (Application-Delivery Networking) by analogy to CDN. In a CDN, content is served up near the network edge. In an ADN, applications are hosted in the same way in order to deliver them faster and more reliably. (Of course, the media server in a CDN node is itself a software application.) And the numbers we cited above regarding improved transaction completion rates are compelling.

However, we were a little underwhelmed by the details given of their Edge Computing product. It is restricted to J2EE and XSLT applications, and seems quite limited in the power and flexibility it offers compared to the state of the art in cloud computing. Google App Engine and Amazon EC2 look far more interesting from a developer point of view. Obviously, they’re going for a different market. But we heartily agree with Dan Rayburn that the future of CDN is application acceleration, and that this goes double for mobile with its relatively higher background levels of latency.

Interestingly, some of Akamai’s ADN customers aren’t actually distributing their code out to the ADN servers, but only making use of Akamai’s overlay network to route their traffic. Relatively small optimisations to the transport network can have significant benefits in business terms even before app servers are physically forward-deployed.

Other industry developments to watch

There are some shifts underway in the CDN landscape. Notably, as we mentioned earlier, there are rumours that Limelight Networks wants to exit the packet-pushing element of the business in favour of the media services side – ingestion, transcoding, reporting and analytics. The most likely route is probably a sale or joint venture with Level(3), whose massive network footprint gives it both the opportunity to do global CDNing and very good reasons to do so internally. Being a late entrant, Level(3) has been very aggressive on price in building up a customer base (you may remember its role in the great Comcast peering war). It will be a formidable competitor, and will probably want to move from macro-CDN to a more Akamai-like forward-deployed model.

To read the note in full, including the following additional analysis…

  • Akamai’s technology strategy for a global CDN
  • Can Telcos compete with CDN Players?
  • Potential Telco Leverage Points
  • Global vs. local CDN strategies
  • The ‘fat head’ of content is local
  • The challenges of scale and experience
  • Strategic Options for Telcos
  • Cooperating with Akamai
  • Partnering with a Vendor Network
  • Part of the global IT operation?
  • National-TV-centred CDNs
  • A specialist, wholesale CDN role for challengers?
  • Federated CDN
  • Conclusion

…and the following charts…

  • Figure 1: With higher speeds, geography starts to dominate download times
  • Figure 2: Akamai’s network architecture
  • Figure 3: Architectural options for CDN in 3GPP networks
  • Figure 4: Mapping CDN strategic options

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 19 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, people and products referenced: 3UK, Akamai, Alcatel-Lucent, Amazon, Arbor Networks, BBC, BBC iPlayer, BitTorrent, BT, Cisco, Dan Rayburn, EC2, EdgeCast, Ericsson, Google, GSM, Internet HSPA, Jetstream, Level(3), Limelight Networks, MBNL, Microsoft, Motorola, MOVE, Nokia Siemens Networks, Orange, TalkTalk, Telefonica, T-Mobile, Velocix, YouTube.

Technologies and industry terms referenced: 3GPP, ADSL, App Engine, backhaul, Carrier-Ethernet, Content Delivery Networks (CDNs), DNS, DOCSIS 3, edge computing, FTTx, GGSN, Gi interface, HFC, HSPA+, interconnect, IT, JavaScript, latency, LTE, Mobile CDNs, online, peering, POPs (Points of Presence), RNC, SQL, UMTS, VPN, WLAN.

Broadband 2.0: Mobile CDNs and video distribution

Summary: Content Delivery Networks (CDNs) are becoming familiar in the fixed broadband world as a means to improve the experience and reduce the costs of delivering bulky data like online video to end-users. Is there now a compelling need for their mobile equivalents, and if so, should operators partner with existing players or build / buy their own? (August 2011, Executive Briefing Service, Future of the Networks Stream).

Below is an extract from this 25 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can buy a Single User license for this report online here for £595 (+VAT) or subscribe here. For multiple user licenses, or to find out about interactive strategy workshops on this topic, please email contact@telco2.net or call +44 (0) 207 247 5003.


Introduction

As is widely documented, mobile networks are witnessing huge growth in the volumes of 3G/4G data traffic, primarily from laptops, smartphones and tablets. While Telco 2.0 is wary of some of the headline shock-statistics about forecast “exponential” growth, or “data tsunamis” driven by ravenous consumption of video applications, there is certainly a fast-growing appetite for use of mobile broadband.

That said, many of the actual problems of congestion today can be pinpointed either to a handful of busy cells at peak hour or, often, to the inability of the network to deal with the signalling load from chatty applications or “aggressive” devices, rather than the “tonnage” of traffic. Another large trend in mobile data is the use of transient, individual-centric flows from specific apps or communications tools such as social networking and messaging.

But “tonnage” is not completely irrelevant. Despite the diversity, there is still an inexorable rise in the use of mobile devices for “big chunks” of data, especially the special class of software commonly known as “content” – typically popular/curated standalone video clips or programmes, or streamed music. Images (especially those in web pages) and application files such as software updates fit into a similar group – sizeable lumps of data downloaded by many individuals across the operator’s network.

This one-to-many nature of most types of bulk content highlights inefficiencies in the way mobile networks operate. The same data chunks are downloaded time and again by users, typically going all the way from the public Internet, through the operator’s core network, eventually to the end user. Everyone loses in this scenario – the content publisher needs huge servers to dish up each download individually. The operator has to deal with transport and backhaul load from repeatedly sending the same content across its network (and IP transit from shipping it in from outside, especially over international links). Finally, the user has to deal with all the unpredictability and performance compromises involved in accessing the traffic across multiple intervening points – and ends up paying extra to support the operator’s heavier cost base.

In the fixed broadband world, many content companies have availed themselves of a group of specialist intermediaries called CDNs (content delivery networks). These firms on-board large volumes of the most important content served across the Internet, then deliver it “locally”, as near to the end user as possible – ideally from cached (pre-saved) copies. Often, the CDN operating companies have struck deals with the end-user-facing ISPs, which have been keen to host CDN servers in-house, since doing so reduces their IP interconnection costs and delivers a better user experience to their customers.

In the mobile industry, the use of CDNs is much less mature. Until relatively recently, the overall volumes of data didn’t really move the needle from the point of view of content firms, while operators’ radio-centric cost bases were also relatively immune from those issues as well. Optimising the “middle mile” for mobile data transport efficiency seemed far less of a concern than getting networks built out and handsets and apps perfected, or setting up policy and charging systems to parcel up broadband into tiered plans. Arguably, better-flowing data paths and video streams would only load the radio more heavily, just at a time when operators were having to compress video to limit congestion.

This is now changing significantly. With the rise in smartphone usage – and the expectations around tablets – Internet-based CDNs are pushing much more heavily to have their servers placed inside mobile networks. This is leading to a certain amount of introspection among the operators – do they really want to have Internet companies’ infrastructure inside their own networks, or could this be seen more as a Trojan Horse of some sort, simply accelerating the shift of content sales and delivery towards OTT-style models? Might it not be easier for operators to build internal CDN-type functions instead?

Some of the earlier approaches to video traffic management – especially so-called “optimisation” without the content companies’ permission or involvement – are becoming trickier with new video formats and greater scrutiny from a Net Neutrality standpoint. But CDNs by definition involve the publishers, so any necessary compression or other processing can potentially be applied collaboratively, rather than “transparently” and without their cooperation.

At the same time, many of the operators’ usual vendors are seeing this transition point as a chance to differentiate their new IP core network offerings, typically combining CDN capability into their routing/switching platforms, often alongside the optimisation functions as well. In common with other recent innovations from network equipment suppliers, there is a dangled promise of Telco 2.0-style revenues that could be derived from “upstream” players. In this case, there is a bit more easily-proved potential, since this would involve direct substitution of the existing revenues already derived from content companies, by the Internet CDN players such as Akamai and Limelight. This also holds the possibility of setting up a two-sided, content-charging business model that fits OK with rules on Net Neutrality – there are few complaints about existing CDNs except from ultra-purist Neutralists.

On the other hand, telco-owned CDNs have existed in the fixed broadband world for some time, with largely indifferent levels of success and adoption. There needs to be a very good reason for content companies to choose to deal with multiple national telcos, rather than simply take the easy route and choose a single global CDN provider.

So, the big question for telcos around CDNs at the moment is “should I build my own, or should I just permit Akamai and others to continue deploying servers into my network?” Linked to that question is what type of CDN operation an operator might choose to run in-house.

There are four main reasons why a mobile operator might want to build its own CDN:

  • To lower costs of network operation or upgrade, especially in radio network and backhaul, but also through the core and in IP transit.
  • To improve the user experience of video, web or applications, either in terms of data throughput or latency.
  • To derive incremental revenue from content or application providers.
  • For wider strategic or philosophical reasons about “keeping control over the content/apps value chain”.

This Analyst Note explores these issues in more detail, first giving some relevant contextual information on how CDNs work, especially in mobile.

What is a CDN?

The traditional model for Internet-based content access is straightforward – the user’s browser requests a piece of data (image, video, file or whatever) from a server, which then sends it back across the network, via a series of “hops” between different network nodes. The content typically crosses the boundaries between multiple service providers’ domains, before finally arriving at the user’s access provider’s network, flowing down over the fixed or mobile “last mile” to their device. In a mobile network, that also typically involves transiting the operator’s core network first, which has a variety of infrastructure (network elements) to control and charge for it.

A Content Delivery Network (CDN) is a system for serving Internet content from servers which are located “closer” to the end user either physically, or in terms of the network topology (number of hops). This can result in faster response times, higher overall performance, and potentially lower costs to all concerned.

In most cases in the past, CDNs have been run by specialist third-party providers, such as Akamai and Limelight. This document also considers the role of telcos running their own “on-net” CDNs.

CDNs can be thought of as analogous to the distribution of bulky physical goods – it would be inefficient for a manufacturer to ship all products to customers individually from a single huge central warehouse. Instead, it will set up regional logistics centres that can be more responsive – and, if appropriate, tailor the products or packaging to the needs of specific local markets.

As an example, there might be a million requests for a particular video stream from the BBC. Without using a CDN, the BBC would have to provide sufficient server capacity and bandwidth to handle them all. The company’s immediate downstream ISPs would have to carry this traffic to the Internet backbone, the backbone itself has to carry it, and finally the requesters’ ISPs’ access networks have to deliver it to the end-points. From a media-industry viewpoint, the source network (in this case the BBC) is generally called the “content network” or “hosting network”; the destination is termed an “eyeball network”.

In a CDN scenario, all the data for the video stream has to be transferred across the Internet just once for each participating network, when it is deployed to the downstream CDN servers and stored. After this point, it is only carried over the user-facing eyeball networks, not any others via the public Internet. This also means that the CDN servers may be located strategically within the eyeball networks, in order to use its resources more efficiently. For example, the eyeball network could place the CDN server on the downstream side of its most expensive link, so as to avoid carrying the video over it multiple times. In a mobile context, CDN servers could be used to avoid pushing large volumes of data through expensive core-network nodes repeatedly.
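
With hypothetical numbers, the scale of the saving is easy to see:

```python
# Hypothetical: 1M viewers of a 500 MB programme, spread across 50
# participating eyeball networks.
viewers, stream_mb, networks = 1_000_000, 500, 50
backbone_tb_no_cdn = viewers * stream_mb / 1e6   # every copy transits: 500 TB
backbone_tb_cdn = networks * stream_mb / 1e6     # one copy per network: 0.025 TB
print(backbone_tb_no_cdn, "TB vs", backbone_tb_cdn, "TB")
```

The content network and the backbone between them shed essentially all of the load; what remains is carried inside each eyeball network, where the operator can place the cache to protect its own most expensive links.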

When the video or other content is loaded into the CDN, other optimisations such as compression or transcoding into other formats can be applied if desired. There may also be various treatments relating to new forms of delivery such as HTTP streaming, where the video is broken up into “chunks” with several different sizes/resolutions. Collectively, these upfront processes are called “ingestion”.

Figure 1 – Content delivery with and without a CDN


Source: STL Partners / Telco 2.0

Value-added CDN services

It is important to recognise that the fixed-centric CDN business has increased massively in richness and competitiveness over time. Although some of the players have very clever architectures and IPR in the form of their algorithms and software techniques, the flexibility of modern IP networks has tended to erode some of the early advantages and margins. Shipping large volumes of content is now starting to become secondary to the provision of associated value-added functions and capabilities around that data. Additional services include:

  • Analytics and reporting
  • Advert insertion
  • Content ingestion and management
  • Application acceleration
  • Website security management
  • Software delivery
  • Consulting and professional services

It is no coincidence that the market leader, Akamai, now refers to itself as a “provider of cloud optimisation services” in its financial statements, rather than a CDN, with its business being driven by “trends in cloud computing, Internet security, mobile connectivity, and the proliferation of online video”. In particular, it has started refocusing away from dealing with “video tonnage” and towards application acceleration – for example, speeding up the load times of e-commerce sites, which has a measurable impact on the abandonment of purchasing visits. Akamai’s total revenues in 2010 were around $1bn, less than half of which came from “media and entertainment” – the traditional “content industries”. Its H1 2011 revenues were relatively disappointing, with growth coming from non-traditional markets such as enterprise and high-tech (e.g. software update delivery) rather than media.

This is a critically important consideration for operators that are looking to CDNs to provide them with sizeable uplifts in revenue from upstream customers. Telcos – especially in mobile – will need to invest in various additional capabilities as well as the “headline” video traffic management aspects of the system. They will need to optimise for network latency as well as throughput, for example – which will probably not have the cost-saving impacts expected from managing “data tonnage” more effectively.

Although in theory telcos’ other assets should help – for example mapping download analytics to more generalised customer data – this is likely to involve extra complexity with the IT side of the business. There will also be additional efforts around sales and marketing that go significantly beyond most mobile operators’ normal footprint into B2B business areas. There is also a risk that an analysis of bottlenecks for application delivery / acceleration ends up simply pointing the finger of blame at the network’s inadequacies in terms of coverage. Improving delivery speed, cost or latency is only valuable to an upstream customer if there is a reasonable likelihood of the end-user actually having connectivity in the first place.

Figure 2: Value-added CDN capabilities


Source: Alcatel-Lucent

Application acceleration

An increasingly important aspect of CDNs is their move beyond content/media distribution into a much wider area of “acceleration” and “cloud enablement”. As well as delivering large pieces of data efficiently (e.g. video), there is arguably more tangible value in delivering small pieces of data fast.

There are various manifestations of this, but a couple of good examples illustrate the general principles:

  • Many web transactions are abandoned because websites (or apps) seem “slow”. Few people would trust an airline’s e-commerce site, or a bank’s online interface, if they’ve had to wait impatiently for images and page elements to load, perhaps repeatedly hitting “refresh” on their browsers. Abandoned transactions can be directly linked to slow or unreliable response times – typically a function of congestion either at the server or various mid-way points in the connection. CDN-style hosting can accelerate the service measurably, leading to increased customer satisfaction and lower levels of abandonment.
  • Enterprise adoption of cloud computing is becoming exceptionally important, with both cost savings and performance enhancements promised by vendors. Sometimes, such platforms will involve hybrid clouds – a mixture of private (internal) and public (Internet) resources and connectivity. Where corporates are reliant on public Internet connectivity, they will want the fastest, most reliable service possible, especially in terms of round-trip latency. Many IT applications are designed to run on ultra-fast company private networks, with a lot of “hand-shaking” between the user’s PC and the server. This process is very latency-dependent, and as companies mobilise their applications, the additional round-trip times of cellular networks may cause significant problems.

Hosting applications at CDN-type cloud acceleration providers achieves much the same effect as for video – they can bring the application “closer”, with fewer hops between the origin server and the consumer. Additionally, the CDN is well-placed to offer additional value-adds such as firewalling and protection against denial-of-service attacks.
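
The latency arithmetic behind “hand-shaking” applications is simple but brutal. In the sketch below, the round-trip count and the RTT values are illustrative assumptions, not measurements:

```python
# Time spent purely waiting on round trips for a "chatty" transaction.
# Round-trip counts and RTT values are illustrative assumptions.
def chatter_secs(round_trips, rtt_ms):
    return round_trips * rtt_ms / 1000

for label, rtt_ms in [("corporate LAN", 2),
                      ("3G to a distant origin server", 200),
                      ("3G to a forward-deployed node", 60)]:
    print(f"{label}: {chatter_secs(40, rtt_ms)} s")
# 40 round trips: 0.08 s on a LAN, 8.0 s over 3G to a distant server,
# 2.4 s when the application is hosted closer to the user.
```

An application designed against LAN latencies can become unusable over cellular; forward-deploying it recovers much of the loss without redesigning the application itself.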

To read the 25 page note in full, including the following additional content…

  • How do CDNs fit with mobile networks?
  • Internet CDNs vs. operator CDNs
  • Why use an operator CDN?
  • Should delivery mean delivery?
  • Lessons from fixed operator CDNs
  • Mobile video: CDNs, offload & optimisation
  • CDNs, optimisation, proxies and DPI
  • The role of OVPs
  • Implementation and planning issues
  • Conclusion & recommendations

… and the following additional charts…

  • Figure 3 – Potential locations for CDN caches and nodes
  • Figure 4 – Distributed on-net CDNs can offer significant data transport savings
  • Figure 5 – The role of OVPs for different types of CDN player
  • Figure 6 – Summary of Risk / Benefits of Centralised vs. Distributed and ‘Off Net’ vs. ‘On-Net’ CDN Strategies

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 25 page report in PDF format here. Non-Members, please see here for how to subscribe, here to buy a single user license for £595 (+VAT), or for multi-user licenses and any other enquiries please email contact@telco2.net or call +44 (0) 207 247 5003.

Organisations and products referenced: 3GPP, Acision, Akamai, Alcatel-Lucent, Allot, Amazon Cloudfront, Apple’s Time Capsule, BBC, BrightCove, BT, Bytemobile, Cisco, Ericsson, Flash Networks, Huawei, iCloud, ISPs, iTunes, Juniper, Limelight, Netflix, Nokia Siemens Networks, Ooyala, OpenWave, Ortiva, Skype, smartphone, Stoke, tablets, TiVo, Vantrix, Velocix, Wholesale Content Connect, Yospace, YouTube.

Technologies and industry terms referenced: acceleration, advertising, APIs, backhaul, caching, CDN, cloud, distributed caches, DNS, Evolved Packet Core, eyeball network, femtocell, fixed broadband, GGSNs, HLS, HTTP streaming, ingestion, IP network, IPR, laptops, LIPA, LTE, macro-CDN, micro-CDN, middle mile, mobile, Net Neutrality, offload, optimisation, OTT, OVP, peering proxy, QoE, QoS, RNCs, SIPTO, video, video traffic management, WiFi, wireless.

Full Article: iFlood – How better mobile user interfaces demand Layer Zero openness

Networks guru Andrew Odlyzko recently estimated that a typical mobile user consumes the equivalent of 20MB of data a month for voice service, while T-Mobile Netherlands reports its iPhone users consuming 640MB of data a month; upgrading everyone to the Jesus Phone would therefore multiply the demand for IP bandwidth on cellular networks by a factor of around 30.

It has previously been estimated that major European cellular operators might be able to provide 500MB/user/month without another wave of network upgrades; if this calculation is at all typical, there is a substantial risk of an “iPlayer event” hitting cellular in the near future. Recap: when the BBC placed vast amounts of its content on the Internet through its iPlayer service, DSL traffic in the UK spiked; or rather, it didn’t just spike – the trend shifted permanently upwards.

That, of course, is much more worrying; because marginal costs are set by the capacity needed to handle the peaks, a rise in average traffic means a cost increase multiplied by the peak/mean ratio. An aggravating factor is the pricing structure for the BT Wholesale backhaul service – commitments come in 155Mbit/s increments, so if new peak demand just exceeded your existing commitment, you needed to buy a whole additional 155Mbit/s pipe. The impact on the UK unbundling/bitstream ISPs has been serious, and the sector remains in a critical condition.
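
The step-function this creates is easy to see in a short sketch (the traffic figures and the 5:1 peak/mean ratio are illustrative assumptions; only the 155Mbit/s increment comes from the discussion above):

    import math

    # Backhaul is bought in whole 155Mbit/s pipes sized to PEAK demand,
    # so a small rise in average traffic can force a whole extra pipe.
    def pipes_needed(mean_mbps, peak_to_mean, commit_mbps=155):
        peak_mbps = mean_mbps * peak_to_mean
        return math.ceil(peak_mbps / commit_mbps)

    print(pipes_needed(30, 5))   # 150Mbit/s peak -> 1 pipe
    print(pipes_needed(32, 5))   # 160Mbit/s peak -> 2 pipes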

Traditionally, a mobile base station was provisioned with 2 E-1 leased lines – 2×2 Mbit/s of capacity. Running that 4 Mbit/s flat out for a (28-day) month yields 9,676,800 Mbits; divide by 8 to convert to MB and you get roughly 1,181GB, or about 1.15TB, a month. That means a typical cell site could support at most 1,832 users’ activity at the 640MB level, or quite a lot fewer once the peak/mean issue is considered – typical ratios are 4:1 for GSM voice (458 users), but as high as 50:1 for IP (36!). Clearly, those operators who have had the foresight to pull fibre to their base stations and, especially, to acquire their own infrastructure will be at a major advantage.
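
As a sanity check, here is the same arithmetic in Python (the 28-day month is our reading of the figures; the small differences from the rounded numbers above come down to MB/GB conversion choices):

    # Capacity of a cell site backhauled by 2 x E-1 lines, assuming a
    # 28-day month and 640MB/user/month (the T-Mobile NL iPhone figure).
    backhaul_mbps = 2 * 2                       # two E-1s at 2Mbit/s each
    seconds = 28 * 24 * 3600                    # 2,419,200s in the month
    monthly_mb = backhaul_mbps * seconds / 8    # 1,209,600MB, ~1.15TB
    users_flat = monthly_mb / 640               # ~1,890 users, flat out
    print(users_flat, users_flat / 4, users_flat / 50)
    # -> roughly 1,890; ~470 at a 4:1 peak/mean; ~38 at 50:1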

The elements of traffic generation

The iPlayer event was an example of content push – what changed was the availability of a huge quantity of compelling content, which was also free. If Samsung’s recently announced video store takes off, that would be another example of content push. But content push is far from the only driver of traffic generation. It is important to realise that the Internet video market is a tightly-coupled system. The total user experience is made up of content, the user interface, feedback and discovery mechanisms, delivery over the network, and the business model. All of these are very closely related – if the product is heavily DRM-restricted, prettying up the front end doesn’t help.

It is characteristic of a coupled system that the slowest-changing factor is the main constraint, but the fastest-changing factor is the driver of change. In this case, the slowest-changing factor is the infrastructure, and within that, the digs and poles of layer zero. Even the copper changes faster than that. The fastest-changing factor is the user interface, which can be changed at will. Sociability, discovery and the like, which require serious software development, are in the middle, with issues like BT Wholesale pricing some way below.

There was not much special about the iPhone technically; the first ones were 2G devices in a 3G world, and good luck trying to pull 640MB a month over GPRS alone – is that even possible? Its integration with iTunes gave it access to content, but the cost issue meant that the bulk of the music on iPhones was probably downloaded over WLANs or sideloaded from a PC. But one thing it did do very well was the user interface; Apple exploited its historic speciality in industrial design and GUI design to the limit. Predictably, a lot of geeks and engineers scoffed at the gadget as an overdesigned bauble for big-kid hipsters; fools that we were.

But the core insight of the iPhone designers was to design for the Web and for rich media, probably helped by not having a telephony background. They therefore chose to cover as much of the form factor as possible with a high-quality screen, and worked from there. They also made some advances in the GUI (zooming, gesture recognition), but the much-talked-about browser was less sensational. (Like all versions of Safari, it is based on the open-source WebKit engine – itself a fork of the KHTML engine behind Konqueror – which also powers the Nokia browser.)

So we’re now beginning to see that changing the user interface can radically impact the engineering and economics of the network; and because it is a fast-changing element, it can do so faster than the network layer can react.

From receiving to sending

The Internet is a copying machine, they say; more to the point, it is usually a one-to-many medium that is experienced as a many-to-one medium. I draw content from many different sources according to my tastes, but each source is broadcasting itself to many readers. As a rule, people read more than they write, even if P2P distribution blurs this. One criticism of the iPhone is that it is optimised for passive consumption of content; some users report their uplink/downlink ratio changing dramatically after switching to the iPhone.

Looking at another online-video sensation that hammers the ISP economy, YouTube, it is quite clear that another driver of traffic is improved content ingestion. Since anything placed on the Web will be written relatively few times and read many times, there is a multiplier effect to anything that makes content easier to create, or at least to distribute.

YouTube’s innovation was threefold: it made it dramatically simpler to upload video to the Internet, to embed that video in other sites, and to popularise it through its social functions. The latter two features meant there was much more of an incentive to upload material in the first place, because it was more likely to get viewed.
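
The scale of that multiplier is easy to illustrate (every number below is an assumption for illustration, not YouTube data):

    # Write-once, read-many: easing upload is amplified by average view count.
    uploads_per_day = 100_000     # assumed clips ingested daily
    avg_views_per_clip = 150      # assumed lifetime views per clip
    clip_size_mb = 10             # assumed clip size
    upload_tb = uploads_per_day * clip_size_mb / 1e6
    viewing_tb = upload_tb * avg_views_per_clip
    print(f"{upload_tb:.0f}TB/day ingested -> {viewing_tb:.0f}TB/day delivered")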

Better user interfaces and social mechanisms for content creation, then, are potentially major drivers of change in your cost model. They can change very quickly, and their impact is multiplied. Already, I can uplink photos to Flickr faster from my Nokia E71 than from my DSL link; granted, this is because of the UK’s lamentable infrastructure, but it gives some idea of the possibilities. Perhaps that Samsung device with the mini-decks might be less silly than we thought?

Faster adaptation: considered helpful

As we were wondering what would happen to the cellular networks’ backhaul bills, and contemplating the wreck of the DSL unbundler/bitstream business model, we looked enviously across the Channel to Telco 2.0’s favourite ISP, Iliad. They have just announced another set of fantastic figures; their margins are 70%-80% where they have deployed fibre, and their agility in launching new services doesn’t need to be rehearsed again. They even built their own content-creation service, after all; no fear of the future there.

What makes the difference? Iliad has always been committed to investing in engineering and infrastructure, giving it the agility to match the speed of change the application layer can achieve. It has been determined to realise the OPEX and unbundling/wholesale savings from fibre deployment, and its results have demonstrated that those savings are real and sufficient to fund the deployment.

There is a crucial element, however, in their success; in France, access to duct and pole infrastructure is a regulated product, and major cities are more than keen on selling access to their own physical infrastructures – the sewers of Paris are the classic example. If you want to fix the ISP business model, fixing layer zero is the place to start, before the next fast-changing application knocks us back into the ditch.

Conclusions

  1. The ISP/telco market is a closely coupled system: An analysis in terms of differential rates of change shows that rapidly changing applications and user interfaces can have a seismic impact on slowly changing network operator business models
  2. The benefits of fibre are real: Iliad is showing that fibre deployment isn’t just nice to have, it’s saving the ISP business model
  3. Open access to infrastructure is vital: There is no contradiction between applications/VAS and layer zero – instead they go together. If you want fantastic new apps, pick up a shovel.

Full Article: Online Video Usage – YouTube thrashes iPlayer, but for how long?

Online Video consumption is booming. The good news is that clearer demand patterns are beginning to emerge which should help in capacity planning and improving the user experience; the bad news is that an overall economic model which works for all players in the value chain is about as clear as mud.

We previously analysed the effect of the launch of the BBC iPlayer on the ISP business model, but the truth is that, even in the UK, YouTube traffic still far outweighs the BBC iPlayer in the all-important peak-hour slot – even though the bitrate is far lower.

Looking at current usage data from a UK ISP, we can see that the number of concurrent YouTube users is roughly seven times that of the iPlayer. However, our analysis suggests that this situation is set to change quite dramatically as traditional broadcasters increase their presence online, with significant impact for all players. Here’s why:

Streaming Traffic Patterns

Our friends at Plusnet, a small UK ISP, have provided Telco 2.0 with their latest data on traffic patterns. The important measurement for ISPs is peak-hour load, as this determines variable-cost capacity requirements.

[Figure: iPlayer traffic over seven days (source: Plusnet)]

iPlayer accounts for around 7% of total bandwidth at peak hour. The peaks are quite variable and follow the hit shows: the availability of Dr Who episodes, or the latest in a long string of British losers at Wimbledon, increases traffic.

Included within the iPlayer’s 7% is the Flash-based streaming traffic. The Kontiki-P2P-based free-rental-download iPlayer traffic is instead included within general streaming volumes. That general streaming category accounts for 5% of total peak-hour traffic and includes such applications as Real Audio, iChat, Google Video, Joost, Squeezebox, Slingbox, Google Earth, Multicast, DAAP, Kontiki (4OD, SkyPlayer, iPlayer downloads), Quicktime, MS Streaming, Shoutcast, Coral Video, H.323 and IGMP.

The BBC are planning to introduce a “bookmarking” feature to the iPlayer which will allow pre-ordering of content and, hopefully, time-of-day-based delivery options. This is a win-win-win enhancement, and we can’t see any serious objection to implementing it: consumers get to view higher-quality video and have downloads happen when the traffic is not counted towards their allowance; ISPs benefit because it encourages non-peak-hour downloads; and the BBC benefits because it will potentially reduce their CDN costs.
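
A minimal sketch of how such time-of-day delivery could work, assuming a hypothetical off-peak window (the 02:00-06:00 window is our invention, not a BBC or ISP specification):

    from datetime import datetime, time

    OFF_PEAK_START, OFF_PEAK_END = time(2, 0), time(6, 0)   # assumed window

    def may_download(now: datetime) -> bool:
        """Only start transferring a bookmarked programme off-peak."""
        return OFF_PEAK_START <= now.time() <= OFF_PEAK_END

    print(may_download(datetime(2008, 7, 1, 3, 30)))   # True: off-peak
    print(may_download(datetime(2008, 7, 1, 20, 0)))   # False: peak time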

[Figure: YouTube traffic over seven days (source: Plusnet)]

YouTube traffic accounts for 17% of peak-hour usage – this despite YouTube streaming at around 200kbps compared with the iPlayer’s 500kbps. There are about seven times as many concurrent YouTube users as iPlayer users at peak hour. “Concurrent” is important here: YouTube viewers watch short clips, whereas iPlayer viewers watch broadcast-length shows.
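
That ratio falls straight out of the shares and bitrates just quoted, since concurrent viewers scale with traffic share divided by per-stream bitrate (rounded inputs, so the result is approximate):

    # Concurrent viewers ~ (share of peak traffic) / (per-stream bitrate).
    youtube = 0.17 / 200    # proportional to concurrent YouTube viewers
    iplayer = 0.07 / 500    # proportional to concurrent iPlayer viewers
    print(f"{youtube / iplayer:.1f}:1")   # ~6:1 on these rounded figures;
    # Plusnet's unrounded data puts it at roughly seven to one.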

P2P is declining in importance

The really interesting part of the Plusnet data is that peak-hour streaming, at around 30%, far outweighs P2P and Usenet traffic at around 10%. Admittedly, peak-hour P2P/Usenet traffic at Plusnet is probably far lower than at other ISPs, but it goes to show how ISPs can control their destiny and manage consumption through open and transparent traffic-shaping policies. Overall, P2P consumption is 26% of Plusnet traffic across a 24-hour window – the policies are obviously working, and people are doing their P2P and Usenet downloading when the network is not busy.

Quality and therefore bandwidth bound to increase

Both YouTube and the iPlayer are relatively low-bandwidth solutions compared to broadcast-quality shows in either SD (standard definition) or HD (high definition); however, applications are emerging which are real headache material for the ISPs.

The most interesting emerging application is the Move Networks media player. This player is already in use by Fox, ABC, ESPN, Discovery and Televisa, amongst others. In the UK it is currently used only by ChannelBee, a new online channel launched by Tim Lovejoy of Soccer AM fame.

The interesting part of the Move Networks technology is dynamic adjustment of the bitrate according to the quality of the connection. It also does not seem to suffer from the buffering “feature” that unfortunately seems to be part of the YouTube experience. Move Networks achieves this by installing a client in the form of a browser plug-in, which switches the video stream according to the state of the connection in much the same way as the TCP protocol adapts its sending rate. We have regularly streamed content at 1.5Mbps, which is good enough to view on a big widescreen TV and is indistinguishable to the naked eye from broadcast TV.
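
The underlying idea can be sketched in a few lines of Python (a hypothetical illustration of bitrate switching, not Move Networks’ actual algorithm; the renditions and safety margin are our assumptions):

    # Pick the highest-bitrate rendition the measured throughput can sustain,
    # backing off as the connection degrades - loosely analogous to the way
    # TCP adapts its sending rate.
    RENDITIONS_KBPS = [300, 700, 1500, 2500]    # assumed encodings of one show

    def choose_bitrate(throughput_kbps, safety=0.8):
        fitting = [r for r in RENDITIONS_KBPS if r <= throughput_kbps * safety]
        return max(fitting) if fitting else min(RENDITIONS_KBPS)

    throughput = 2200.0                          # kbit/s, initial estimate
    for chunk in range(4):
        print(f"chunk {chunk}: stream at {choose_bitrate(throughput)}kbps")
        throughput *= 0.6                        # simulate a worsening link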

Unlike Akamai’s platform, there is no secret sauce in the Move Networks technology, and we expect other media players to start offering similar features; after all, every content owner wants the best possible experience for viewers.

Clearing the rights

The amount of iPlayer content is also increasing: Wimbledon coverage was available for the first time, and the Beijing Olympics and the British Golf Open are coming up. We also expect that the BBC will eventually get permission to make content available outside the iPlayer’s seven-day window. Clearing the rights for the BBC’s vast archive will take many years, but slowly but surely more and more content will become available. The same is true for all the major broadcasters in the UK and the rest of the world.

YouTube to shrink in importance

It will be extremely interesting to see how YouTube responds to the challenge of the traditional broadcasters – personally, we can’t see a future in which YouTube’s market share is anywhere near its current level. We believe watching User Generated Content, free of copyright, will always be a niche market.

Online Video Distribution and the associated economics is a key area of study for the Telco 2.0 team.