The telecoms industry is embracing network virtualisation and software-defined networking, which are designed both to cut costs and to enable greater agility. Whilst most operators have focused on the operating and capital cost benefits of virtualisation, few have attempted to define the range of potential new services that could be enabled by these new technologies, and even fewer have attempted to forecast the associated revenue growth.
This report outlines:
Why and how network functions virtualisation (NFV), software defined networking (SDN) and distributed compute capabilities could generate new revenue growth for telcos.
The potential new services enabled by these technologies.
The revenue growth that a telco might hope to achieve.
This report does not discuss the cost, technical, organisational, market or regulatory challenges operators will need to overcome in making the transition to SDN and NFV. STL Partners (STL) also acknowledges that operators are still a long way from developing and launching some of the new services discussed in this paper, not least because they require capabilities that do not exist today. Nevertheless, by mapping the opportunity landscape for operators, this report should help to pave the way to fully capturing the transformative potential of SDN and NFV.
To sense-check our findings, STL has tested the proposed service concepts with the industry. The new services identified and modelled by STL were shared with approximately 25 telecoms operators. Hewlett Packard Enterprise (HPE) kindly commissioned and supported this research and testing programme.
However, STL wrote this report independently, and the views and conclusions contained herein are those of STL.
Introduction
The end of growth in telecoms…?
Most telecoms operators are facing significant competitive pressure from rival operators and players in adjacent sectors. Increased competition among telcos and Internet players has driven down voice and messaging revenues. Whilst demand for data services is increasing, STL forecasts that revenue growth in this segment will not offset the decline in voice and messaging revenue (see Figure 5).
Figure 5: Illustrative forecast: revenue decline for converged telco in advanced market
Source: STL Partners analysis
Figure 5 shows STL forecasts for revenues over a six-year horizon for an illustrative converged telco operating in an advanced market. The telco, its market characteristics and the modelling mechanics are described in detail later in this report.
We believe that existing ‘digital’ businesses (consumer digital services, such as IPTV, and managed services for enterprises) will not grow significantly on an organic basis over the next six years (unless operators are able to radically transform their business). Note that this forecast is for a converged telco (mobile and fixed) addressing both enterprise and consumer segments; we anticipate that revenues could decline more steeply for non-converged, consumer-only or enterprise-only players.
Given that telcos’ cost structures are quite rigid, with high capex and opex requirements to manage infrastructure, the ongoing decline in core service revenue will continue to put significant pressure on the core business. As revenues decline, margins fall and telcos’ ability to invest in innovation is curbed, making it even harder to find new sources of revenue.
New technologies can be a catalyst for telco transformation
However, STL believes that new technologies have the potential to both streamline the telco cost structure and spur growth. In particular, network functions virtualisation (NFV) and software-defined networking (SDN) offer many potential benefits for telcos.
Virtualisation has the potential to generate significant cost savings for telcos. Whilst the process of managing a transition to NFV and SDN may be fraught with challenges and be costly, it should eventually lead to:
A reduction in capex – NFV will lead to the adoption of generic commercial off-the-shelf (COTS) hardware. This hardware will be lower-cost, able to serve multiple functions and more readily re-usable. Furthermore, operators will be less tied to vendors’ proprietary platforms, as functions will be more openly interchangeable. This will increase competition in the hardware and software markets, leading to an overall reduction in capital investment.
A reduction in opex through automation – as services will be delivered via software, there will be less cost associated with the ongoing management and maintenance of the network infrastructure. The network will be more centrally managed, allowing more efficient sharing of resources such as space, power and cooling systems.
Improvements in product lifecycle management through more integrated development and operations (DevOps).
In addition to cost savings, virtualisation can also allow operators to become more agile. This agility arises from two factors:
The nature of the new infrastructure
The change in cost structure
As the new infrastructure will be software-centric, as opposed to hardware-centric, greater levels of automation will be possible. This new software-defined, programmable infrastructure could also increase flexibility in the creation, management and provisioning of services in a way that is not possible with today’s infrastructure, leading to greater agility.
Virtualisation will also change the telco cost structure, potentially allowing operators to be less risk-averse and thereby become more innovative. Figure 6 below shows how virtualisation can impact the operating model of a telco. Through virtualisation, an infrastructure player becomes more like a platform or product player, with less capital tied up in infrastructure (and the management of that infrastructure) and more available to spend on marketing and innovation.
Redefining the cost structure could help spur transformation across the business, as processes and culture begin to revolve less around fixed infrastructure investment and more around software and innovation.
Figure 6: Virtualisation can redefine the cost structure of a telco
Summary: Changing consumer behaviours and the transition to 4G are likely to bring about a fresh surge of video traffic on many networks. Fortunately, mobile content delivery networks (CDNs), which should deliver both better customer experience and lower costs, are now potentially an option for carriers using a combination of technical advances and new strategic approaches to network design. This briefing examines why, how, and what operators should do, and includes lessons from Akamai, Level 3, Amazon, and Google. (May 2013, Executive Briefing Service).
Introduction
Content delivery networks (CDNs) are by now a proven pattern for the efficient delivery of heavy content, such as video, and for better user experience in Web applications. Extensively deployed worldwide, they can be optimised to save bandwidth, to provide greater resilience, or to help scale up front-end applications. In the autumn of 2012, it was estimated that CDN providers accounted for 40% of the traffic entering residential ISP networks from the Internet core. This is likely to be an underestimate if anything, as a major use case for CDN is to reduce the volume of traffic that has to transit the Internet and to localise traffic within ISP networks. Craig Labovitz of DeepField Networks, formerly the head of Arbor’s ATLAS instrumentation project, estimates that between 35% and 45% of interdomain Internet traffic is accounted for by CDNs, rising to 60% for some smaller networks, and that 85% of this is video.
Figure 1: CDNs, the supertankers of the Internet, are growing
Source: DeepField, STL
In the past, we have argued that mobile networks could benefit from deploying CDN, both in order to provide CDN services to content providers and in order to reduce their Internet transit and internal backhaul costs. We have also looked at the question of whether telcos should try to compete with major Internet CDN providers directly. In this note, we will review the CDN business model and consider whether the time has come for mobile CDN, in the light of developments at the market leader, Akamai.
The CDN Business Model
Although CDNs account for a very large proportion of Internet traffic and are indispensable to many content and applications providers, they are relatively small businesses. Dan Rayburn of Frost & Sullivan estimates that the video CDN market, not counting services provided by telcos internally, is around $1bn annually. In 2011, Cisco put it at $2bn with a 20% CAGR.
This is largely because much of the economic value created by CDNs accrues to the operators in whose networks they deploy their servers, in the form of efficiency savings, and to the content providers, in the form of improved sales conversions, less downtime, savings on hosting and transit, and generally, as an improvement in the quality of their product. It’s possible to see this as a two-sided business model – although the effective customer is the content provider, whose decisions determine the results of competition, much of the economic value created accrues to the operator and the content provider’s customer.
On top of this, it’s often suggested that margins in the core CDN product, video delivery, are poor, and that it would be worth moving to supposedly more lucrative “media services” – products like transcoding (converting original video files into the various formats served out of the CDN for networks with more or less bandwidth, mobile versus fixed devices, Apple HLS versus Adobe Flash, etc.) and analytics aimed at content creators and rightsholders – or to lower-scale but higher-margin enterprise products. We are not necessarily convinced of this, and we discuss the point further later in this report. For the time being, note that it is relatively easy to enter the CDN market, and that it is influenced by Moore’s Law. Therefore, as with most electronic, computing, and telecoms products, there is structural pressure on prices.
The Problem: The Traffic Keeps Coming
A major 4G operator recently released data on the composition of traffic over their new network. As much as 40% of the total, it turned out, was music or video streaming. The great majority of this will attract precisely no revenue for the operator, unless by chance it turns out to represent the marginal byte that induces a user to spend money on out-of-bundle data. However, it all consumes spectrum and needs backhauling and therefore costs money.
The good news is that most, or even all, of this could potentially be distributed via a CDN, and in many cases probably will be distributed by a CDN as far as the mobile operator’s Internet point of presence. Some of this traffic will be uplink, a segment likely to grow fast with better radios and better device cameras, but there are technical options related to CDN that can benefit uplink applications as well.
Figure 2: Video, music, and photos are filling up a 4G mobile network
Source: EE, STL
Another 36.5% of the traffic is accounted for by Web browsing and e-mail. A large proportion of the Web activity could theoretically come from a CDN, too – even if the content itself has to be generated dynamically by application logic, things like images, fonts, and JavaScript libraries are a quick win in terms of performance. Estimates of how much Internet traffic in general could be served from a CDN range from 35% (AT&T) to 98% (Analysys Mason).
As 29% of their traffic originates from the top three point sources – YouTube, Facebook, and iTunes – it’s also observable that signing up a relatively small subset of content providers as customers will provide considerable benefit. All three use a CDN: Facebook and iTunes are customers of Akamai, while YouTube relies on Google’s own solution.
We can re-arrange the last chart to illustrate this more fully. (Note that Skype, as a peer-to-peer application that is also live, is unsuitable for CDN as usually understood.)
Figure 3: The top 9 CDN-able point sources represent 40% of EE’s traffic
Source: EE, STL
Looking further afield, the next chart shows the traffic breakdown by application from DeepField’s observations in North American ISP networks.
Figure 4: The Web giants ride on the CDNs
Source: DeepField
Clearly, the traffic sources and traffic types that are served from CDNs are both the heaviest to transport and also the ones that contribute most to the busy hour; note that these are peak measurements, and the total of the CDN traffic here (Netflix, YouTube, CDN other, Facebook) is substantially more than it is on average.
To read the report in full, including the following sections detailing additional analysis…
Akamai: the World’s No.1 CDN
Financial and KPI review
The Choice for CDN Customers: Akamai, Amazon, or DIY like Google?
CDN depth: the key question
CDN depth and mobile networks
Akamai’s guidelines for deployment
Why has mobile CDN’s time come?
What has held mobile CDN back?
But the world has changed…
…Networks are much less centralised…
…and IP penetrates much more deeply into the network
Licensed or Virtual CDN – a (relatively) new business model
SDN: a disruptive opportunity
So, why right now?
Conclusions
It may be time for telcos to move on mobile CDN
The CDN industry is exhibiting familiar category killer dynamics
Regional point sources remain important
CDN internals are changing the structure of the Internet
Recommendations for action
…and the following figures…
Figure 1: CDNs, the supertankers of the Internet, are growing
Figure 2: Video, music, and photos are filling up a 4G mobile network
Figure 3: The top 9 CDN-able point sources represent 40% of EE’s traffic
Summary: Key trends, tactics, and technologies for mobile broadband networks and services that will influence mid-term revenue opportunities, cost structures and competitive threats. Includes consideration of LTE, network sharing, WiFi, next-gen IP (EPC), small cells, CDNs, policy control, business model enablers and more. (March 2012, Executive Briefing Service, Future of the Networks Stream).
Below is an extract from this 44 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings and more on Facebook at the Silicon Valley (27-28 March) and London (12-13 June) New Digital Economics Brainstorms.
In our recent ‘Under the Floor (UTF) Players‘ Briefing we looked at strategies to deal with some of the challenges facing operators as a result of market structure and outsourcing.
This Executive Briefing is intended to complement and extend those efforts, looking specifically at those technical and business trends which are truly “disruptive”, either immediately or in the medium-term future. In essence, the document can be thought of as a checklist for strategists – pointing out key technologies or trends around mobile broadband networks and services that will influence mid-term revenue opportunities and threats. Some of those checklist items are relatively well-known, others more obscure but nonetheless important. What this document doesn’t cover is more straightforward concepts around pricing, customer service, segmentation and so forth – all important to get right, but rarely disruptive in nature.
During 2012, Telco 2.0 will be rolling out a new MBB workshop concept, which will audit operators’ existing technology strategy and planning around mobile data services and infrastructure. This briefing document is a roundup of some of the critical issues we will be advising on, as well as our top-level thinking on the importance of each trend.
It starts by discussing some of the issues which determine the extent of any disruption:
Growth in mobile data usage – and whether the much-vaunted “tsunami” of traffic may be slowing down
The role of standardisation, and whether it is a facilitator or inhibitor of disruption
Whether the most important MBB disruptions are likely to be telco-driven, or will stem from other actors such as device suppliers, IT companies or Internet firms.
The report then drills into a few particular domains where technology is evolving, looking at some of the most interesting and far-reaching trends and innovations. These are split broadly between:
Network infrastructure evolution (radio and core)
Control and policy functions, and business-model enablers
It is not feasible for us to cover all these areas in huge depth in a briefing paper such as this. Some areas such as CDNs and LTE have already been subject to other Telco 2.0 analysis, and this will be linked to where appropriate. Instead, we have drilled down into certain aspects we feel are especially interesting, particularly where these are outside the mainstream of industry awareness and thinking – and tried to map technical evolution paths onto potential business model opportunities and threats.
This report cannot be truly exhaustive – it doesn’t look at the nitty-gritty of silicon components, or antenna design, for example. It also treads a fine line between technological accuracy and ease-of-understanding for the knowledgeable but business-focused reader. For more detail or clarification on any area, please get in touch with us – email contact@stlpartners.com or call +44 (0) 207 247 5003.
Telco-driven disruption vs. external trends
There are various potential sources of disruption for the mobile broadband marketplace:
New technologies and business models implemented by telcos, which increase revenues, decrease costs, improve performance or alter the competitive dynamics between service providers.
3rd-party developments that can either bolster or undermine the operators’ broadband strategies. This includes both direct MBB innovations (new uses of WiFi, for example) and bleed-over from adjacent marketplaces such as device creation or content/application provision.
External, non-technology effects such as changing regulation, economic backdrop or consumer behaviour.
The majority of this report covers “official” telco-centric innovations – LTE networks, new forms of policy control and so on.
External disruptions to monitor
But the most dangerous form of innovation comes from third parties, which can undermine assumptions about the ways mobile broadband can be used, introduce new mechanisms for arbitrage, or otherwise subvert operators’ pricing plans and network controls.
In the voice communications world, there are often regulations in place to protect service providers – such as banning the use of “SIM boxes” to terminate calls and reduce interconnection payments. But in the data environment, it is far less obvious that such workarounds are illegal, or even outside the scope of fair-usage conditions. That said, we have already seen some attempts by telcos to manage these effects – such as charging extra for “tethering” on smartphones.
It is not really possible to predict all possible disruptions of this type – such is the nature of innovation. But by describing a few examples, market participants can gauge their level of awareness, as well as gain motivation for ongoing “scanning” of new developments.
Some of the areas being followed by Telco 2.0 include:
Connection-sharing. This is where users might link devices together locally, perhaps through WiFi or Bluetooth, and share multiple cellular data connections. This is essentially “multi-tethering” – for example, three smartphones discovering each other nearby, perhaps each with a different 3G/4G provider, and pooling their connections together for shared use (a minimal sketch of the pooling idea follows this list). From the user’s point of view it could improve effective coverage and maximum/average throughput speed. But from the operators’ view it would break the link between user identity and subscription, and essentially offload traffic from poor-quality networks on to better ones.
SoftSIM or SIM-free wireless. Over the last five years, various attempts have been made to decouple mobile data connections from SIM-based authentication. In some ways this is not new – WiFi doesn’t need a SIM, it’s optional for WiMAX, and CDMA devices have typically been “hard-coded” to register on a specific operator network. But the GSM/UMTS/LTE world has always relied on subscriber identification through a physical card. At one level, this is very good – SIMs are distributed easily and have enabled a successful prepay ecosystem to evolve. They provide operator control points and the ability to host secure applications on the card itself. However, the need to obtain a physical card restricts business models, especially for transient/temporary use such as a “one-day pass”. But the most dangerous potential change is a move to a “soft” SIM, embedded in the device software stack. Companies such as Apple have long dreamed of acting as a virtual network provider, brokering between user and multiple networks. There is even a patent for encouraging bidding per call (or perhaps per data connection), with telcos competing head-to-head on price/quality grounds. Telco 2.0 views this type of least-cost routing as a major potential risk for operators, especially for mobile data – although it could also enable some new business models that have been difficult to achieve in the past.
Encryption. Many of the new business models and technology deployment intentions of operators, vendors and standards bodies are predicated on analysing data flows. Deep packet inspection (DPI) is expected to be used to identify applications or traffic types, enabling differential treatment in the network, or different charging models to be employed. Yet this is rendered largely useless (or at least severely limited) when various types of encryption are used. Various content and application types already secure data in this way – content DRM, BlackBerry traffic, corporate VPN connections and so on. But increasingly, we will see major Internet companies such as Apple, Google, Facebook and Microsoft using such techniques, both for their own users’ security and because it hides precise indicators of usage from the network operators. If a future Android phone sends all its mobile data back via a VPN tunnel and breaks it out in Mountain View, California, operators will be unable to discern YouTube video from search or VoIP traffic. This is one of the reasons why application-based charging models – one- or two-sided – are difficult to implement.
Application evolution speed. One of the largest challenges for operators is the pace of change of mobile applications. The growing penetration of smartphones, appstores and ease of “viral” adoption of new services causes a fundamental problem – applications emerge and evolve on a month-by-month or even week-by-week basis. This is faster than any realistic internal telco processes for developing new pricing plans, or changing network policies. Worse, the nature of “applications” is itself changing, with the advent of HTML5 web-apps, and the ability to “mash up” multiple functions in one app “wrapper”. Is a YouTube video shared and embedded in a Facebook page a “video service”, or “social networking”?
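As a concrete illustration of the connection-sharing item above, here is a minimal sketch of how pooled “multi-tethering” might split a download across several nearby handsets’ links. It is purely hypothetical: the device names, throughput figures and chunk-based scheduling are all invented for illustration, not a description of any real product.

```python
# Hypothetical sketch of "multi-tethering": splitting one download
# across several pooled cellular links, weighted by measured speed.
from dataclasses import dataclass

@dataclass
class Link:
    owner: str    # which nearby handset contributes this connection
    mbps: float   # last measured downlink throughput

def schedule_chunks(links, n_chunks):
    """Assign download chunks to links in proportion to throughput."""
    total = sum(l.mbps for l in links)
    return {l.owner: round(n_chunks * l.mbps / total) for l in links}

links = [Link("phone-A", 4.0), Link("phone-B", 12.0), Link("phone-C", 8.0)]
print(schedule_chunks(links, 24))  # {'phone-A': 4, 'phone-B': 12, 'phone-C': 8}
```

The sketch shows the operator’s problem in miniature: traffic naturally flows towards whichever network is performing best, regardless of whose subscriber generated it.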
It is also really important to recognise that certain procedures and technologies used in policy and traffic management will likely have some unanticipated side-effects. Users, devices and applications are likely to respond to controls that limit their actions, while other developments may spontaneously produce “emergent behaviours”. For instance, there is a risk that too-strict data caps might change usage models for smartphones and make users connect to the network only when absolutely necessary. This is likely to be at the same times and places when other users also feel it necessary, with the unfortunate implication that peaks of usage get “spikier” rather than being ironed out.
There is no easy answer to addressing these types of external threat. Operator strategists and planners simply need to keep watch on emerging trends, and perhaps stress-test their assumptions and forecasts with market observers who keep tabs on such developments.
The mobile data explosion… or maybe not?
It is an undisputed fact that mobile data is growing exponentially around the world. Or is it?
A J-curve or an S-curve?
Telco 2.0 certainly thinks that growth in data usage is occurring, but is starting to see signs that the smooth curves that drive so many other decisions might not be so smooth – or so steep – after all. If this proves to be the case, it could be far more disruptive to operators and vendors than any of the individual technologies discussed later in the report. If operator strategists are not at least scenario-planning for lower data growth rates, they may find themselves in a very uncomfortable position in a year’s time.
In its most recent study of mobile operators’ traffic patterns, Ericsson concluded that Q2 2011 data growth was just 8% globally, quarter-on-quarter (roughly 36% annualised, if sustained), a far cry from the 20%+ quarterly growth seen previously, and leaving a chart that looks distinctly like the beginning of an S-curve rather than a continued “hockey stick”. Given that the 8% includes a sizeable contribution from undoubted high-growth developing markets like China, it suggests that other markets are maturing quickly. (We are rather sceptical of Ericsson’s suggestion of seasonality in the data.) Other data points come from O2 in the UK, which appears to have had essentially zero traffic growth for the past few quarters, or Vodafone, which now cites European data traffic to be growing more slowly (19% year-on-year) than its data revenues (21%). Our view is that current global growth is c.60-70%, c.40% in mature markets and 100%+ in developing markets.
Figure 1 – Trends in European data usage
Now it is possible that various one-off factors are at play here – the shift from unlimited to tiered pricing plans, the stronger enforcement of “fair-use” plans and the removal of particularly egregious heavy users. Certainly, other operators are still reporting strong growth in traffic levels. We may see a resumption in growth, for example if cellular-connected tablets start to be used widely for streaming video.
But we should also consider the potential market disruption, if the picture is less straightforward than the famous exponential charts. Even if the chart looks like a 2-stage S, or a “kinked” exponential, the gap may have implications, like a short recession in the economy. Many of the technical and business model innovations in recent years have been responses to the expected continual upward spiral of demand – either controlling users’ access to network resources, pricing it more highly and with greater granularity, or building out extra capacity at a lower price. Even leaving aside the fact that raw, aggregated “traffic” levels are a poor indicator of cost or congestion, any interruption or slow-down of the growth will invalidate a lot of assumptions and plans.
Our view is that the scary forecasts of “explosions” and “tsunamis” have led virtually all parts of the industry to create solutions to the problem. We can probably list more than 20 approaches, most of them standalone “silos”.
Figure 2 – A plethora of mobile data traffic management solutions
What seems to have happened is that at least 10 of those approaches have worked – caps/tiers, video optimisation, WiFi offload, network densification and optimisation, collaboration with application firms to create “network-friendly” software and so forth. Taken collectively, there is actually a risk that they have worked “too well”, to the extent that some previous forecasts have turned into “self-denying prophecies”.
There is also another common forecasting problem occurring – the assumption that later adopters of a technology will behave like earlier users. In many markets we are now reaching 30-50% smartphone penetration. That means that all the most enthusiastic users are already connected, and we’re left with those that are (largely) ambivalent and probably quite light users of data. That will bring the averages down, even if each individual user is still increasing their consumption over time. But even that assumption may be flawed, as caps have made people concentrate much more on their usage, offloading to WiFi and restricting their data flows. There is also some evidence that the growing number of free WiFi points is reducing laptop use of mobile data, which accounts for 70-80% of the total in some markets, while the much-hyped shift to tablets isn’t driving much extra mobile data as most are WiFi-only.
So has the industry over-reacted to the threat of a “capacity crunch”? What might be the implications?
The problem is that focusing on a single, narrow metric – “GB of data across the network” – ignores some important nuances and finer detail. From an economics standpoint, network costs tend to be driven by two main criteria:
Network coverage in terms of area or population
Network capacity at the busiest places/times
Coverage is (generally) therefore driven by factors other than data traffic volumes. Many cells have to be built and run anyway, irrespective of whether there’s actually much load – the operators all want to claim good footprints and may be subject to regulatory rollout requirements. Peak capacity in the most popular locations, however, is a different matter. That is where issues such as spectrum availability, cell site locations and the latest high-speed networks become much more important – and hence costs do indeed rise. However, it is far from obvious that the problems at those “busy hours” are always caused by “data hogs” rather than sheer numbers of people each using a small amount of data. (There is also another issue around signalling traffic, discussed later).
Yes, there is a generally positive correlation between network-wide volume growth and costs, but it is far from perfect, and certainly not a direct causal relationship.
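To make the coverage/capacity distinction concrete, here is a deliberately crude cost model. All the coefficients are invented for illustration; real network cost models are far richer.

```python
# Deliberately crude illustration: coverage cost scales with sites
# built, while capacity cost is driven by busy-hour demand, not by
# total monthly "tonnage". All figures are invented.
def annual_network_cost(sites, busy_hour_gbps,
                        cost_per_site=50_000, cost_per_busy_gbps=200_000):
    coverage = sites * cost_per_site          # built regardless of load
    capacity = busy_hour_gbps * cost_per_busy_gbps
    return coverage + capacity

# Doubling off-peak tonnage changes nothing in this model; only a
# higher busy hour moves the bill.
print(annual_network_cost(sites=10_000, busy_hour_gbps=400))  # 580,000,000
```

In a model like this, flattening the peak is worth far more than cutting aggregate volume – which is exactly why raw GB counts are a poor proxy for cost.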
So let’s hypothesise briefly about what might occur if data traffic growth does tail off, at least in mature markets.
Delays to LTE rollout – if 3G networks are filling up less quickly than expected, the urgency of 4G deployment is reduced.
The focus of policy and pricing for mobile data may switch back to encouraging use rather than discouraging/controlling it. Capacity utilisation may become an important metric, given the high fixed costs and low marginal ones. Expect more loyalty-type schemes, plus various methods to drive more usage in quiet cells or off-peak times.
Regulators may start to take different views of traffic management or predicted spectrum requirements.
Prices for mobile data might start to fall again, after a period where we have seen them rise. Some operators might be tempted back to unlimited plans, for example if they offer “unlimited off-peak” or similar options.
Many of the more complex and commercially-risky approaches to tariffing mobile data might be deprioritised. For example, application-specific pricing involving packet-inspection and filtering might get pushed back down the agenda.
In some cases, we may even end up with overcapacity on cellular data networks – not to the degree we saw in fibre in 2001-2004, but there might still be an “overhang” in some places, especially if there are multiple 4G networks.
Steady growth of (say) 20-30% peak data per annum should be manageable with the current trends in price/performance improvement. It should be possible to deploy and run networks to meet that demand with reducing unit “production cost”, for example through use of small cells. That may reduce the pressure to fill the “revenue gap” on the infamous scissors-diagram chart.
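As a rough check on that claim, with invented numbers: if peak traffic grows 25% a year while capacity price/performance improves 30% a year, the total cost of carrying the load actually falls.

```python
# Invented numbers only: peak traffic growing 25%/year against a
# 30%/year improvement in capacity price/performance.
traffic, unit_cost = 1.0, 1.0
for year in range(1, 6):
    traffic *= 1.25      # assumed annual peak-traffic growth
    unit_cost /= 1.30    # assumed annual price/performance gain
    print(f"year {year}: relative network cost {traffic * unit_cost:.2f}")
# year 5: relative network cost 0.82 - cheaper than today
```

Under these assumptions the “revenue gap” narrative weakens considerably; the sums only turn frightening when demand growth runs well ahead of the price/performance curve.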
Overall, it is still a little too early to declare shifting growth patterns for mobile data as a “disruption”. There is a lack of clarity on what is happening, especially in terms of responses to the new controls, pricing and management technologies put recently in place. But operators need to watch extremely closely what is going on – and plan for multiple scenarios.
Specific recommendations will depend on an individual operator’s circumstances – user base, market maturity, spectrum assets, competition and so on. But broadly, we see three scenarios and implications for operators:
“All hands on deck!”: Continued strong growth (perhaps with a small “blip”) which maintains the pressure on networks, threatens congestion, and drives the need for additional capacity, spectrum and capex.
Operators should continue with current multiple strategies for dealing with data traffic – acquiring new spectrum, upgrading backhaul, exploring massive capacity enhancement with small cells and examining a variety of offload and optimisation techniques. Where possible, they should explore two-sided models for charging and use advanced pricing, policy or segmentation techniques to rein in abusers and reward those customers and applications that are parsimonious with their data use. Vigorous lobbying activities will be needed, for gaining more spectrum, relaxing Net Neutrality rules and perhaps “taxing” content/Internet companies for traffic injected onto networks.
“Panic over”: Moderating and patchy growth, which settles to a manageable rate – comparable with the patterns seen in the fixed broadband marketplace
This will mean that operators can “relax” a little, with the respite in explosive growth meaning that the continued capex cycles should be more modest and predictable. Extension of today’s pricing and segmentation strategies should improve margins, with continued innovation in business models able to proceed without rush, and without risking confrontation with Internet/content companies over traffic management techniques. Focus can shift towards monetising customer insight, ensuring that LTE rollouts are strategic rather than tactical, and exploring new content and communications services that exploit the improving capabilities of the network.
“Hangover”: Growth flattens off rapidly, leaving operators with unused capacity and threatening brutal price competition between telcos.
This scenario could prove painful, reminiscent of early-2000s experience in the fixed-broadband marketplace. Wholesale business models could help generate incremental traffic and revenue, while the emphasis will be on fixed-cost minimisation. Some operators will scale back 4G rollouts until cost and maturity go past the tipping-point for outright replacement of 3G. Restrictive policies on bandwidth use will be lifted, as operators compete to give customers the fastest / most-open access to the Internet on mobile devices. Consolidation – and perhaps bankruptcies – may ensue as declining data prices coincide with substitution of core voice and messaging business.
To read the note in full, including the following analysis…
Introduction
Telco-driven disruption vs. external trends
External disruptions to monitor
The mobile data explosion… or maybe not?
A J-curve or an S-curve?
Evolving the mobile network
Overview
LTE
Network sharing, wholesale and outsourcing
WiFi
Next-gen IP core networks (EPC)
Femtocells / small cells / “cloud RANs”
HetNets
Advanced offload: LIPA, SIPTO & others
Peer-to-peer connectivity
Self optimising networks (SON)
M2M-specific broadband innovations
Policy, control & business model enablers
The internal politics of mobile broadband & policy
Two-sided business model enablement
Congestion exposure
Mobile video networking and CDNs
Controlling signalling traffic
Device intelligence
Analytics & QoE awareness
Conclusions & recommendations
Index
…and the following figures…
Figure 1 – Trends in European data usage
Figure 2 – A plethora of mobile data traffic management solutions
Figure 3 – Not all operator WiFi is “offload” – other use cases include “onload”
Figure 4 – Internal ‘power tensions’ over managing mobile broadband
Figure 5 – How a congestion API could work
Figure 6 – Relative Maturity of MBB Management Solutions
Figure 9 – Summary of disruptive network innovations
…Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 44 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.
CDN 2.0: Event Summary Analysis. A summary of the findings of the CDN 2.0 session, 10th November 2011, held in the Guoman Hotel, London
Part of the New Digital Economics Executive Brainstorm series, the CDN 2.0 session took place at the Guoman Hotel, London on 10th November and looked at the future of online video, both the star product telcos rely on for much of their revenue and the main driver of their costs.
Using a widely acclaimed interactive format called ‘Mindshare’, the event enabled specially-invited senior executives from across the communications, media, banking and technology sectors to discuss the field of content delivery networking and the digital logistics systems Netflix, YouTube and other online video providers rely on.
This note summarises some of the high-level findings and includes the verbatim output of the brainstorm.
Content Delivery Networks (CDNs) such as Akamai’s are used to improve the quality and reduce costs of delivering digital content at volume. What role should telcos now play in CDNs? (September 2011, Executive Briefing Service, Future of the Networks Stream).
Below is an extract from this 19 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.
Introduction
We’ve written about Akamai’s technology strategy for global CDN before as a fine example of the best practice in online video distribution and a case study in two-sided business models, to say nothing of being a company that knows how to work with the grain of the Internet. Recently, Akamai published a paper which gives an overview of its network and how it works. It’s a great paper, if something of a serious read. Having ourselves read, enjoyed and digested it, we’ve distilled the main elements in the following analysis, and used that as a basis to look at telcos’ opportunities in the CDN market.
Related Telco 2.0 Research
In the strategy report Mobile, Fixed and Wholesale Broadband Business Models – Best Practice Innovation, ‘Telco 2.0’ Opportunities, Forecasts and Future Scenarios we examined a number of different options for telcos to reduce costs and improve the quality of content delivery, including Content Delivery Networks (CDNs).
This followed on from Future Broadband Business Models – Beyond Bundling: winning the new $250Bn delivery game in which we looked at long term trends in network architectures, including the continuing move of intelligence and storage towards the edge of the network. Most recently, in Broadband 2.0: Delivering Video and Mobile CDNs we looked at whether there is now a compelling need for Mobile CDNs, and if so, should operators partner with existing players or build / buy their own?
We’ll also be looking in depth at the opportunities in mobile CDNs at the EMEA Executive Brainstorm in London on 9-10th November 2011.
Why have a CDN anyway?
The basic CDN concept is simple. Rather than sending one copy of a video stream, software update or JavaScript library over the Internet to each user who wants it, the content is stored inside their service provider’s network, typically at the POP level in a fixed ISP.
That way, there are savings on interconnect traffic (whether in terms of paid-for transit, capex, or stress on peering relationships), and by locating the servers strategically, savings are also possible on internal backhaul traffic. Users and content providers benefit from lower latency, and therefore faster download times, snappier user interface response, and also from higher reliability because the content servers are no longer a single point of failure.
What can be done with content can also be done with code. As well as simple file servers and media streaming servers, application servers can be deployed in a CDN in order to bring the same benefits to Web applications. Because the content providers are customers of the CDN, it is also possible to apply content optimisation with their agreement at the time the content is uploaded to the CDN. This makes it possible to save further traffic, and to avoid nasty accidents.
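The core caching mechanic described above is simple enough to show in a few lines. This is a toy sketch, not any vendor’s implementation; the URL and request count are invented.

```python
# Toy sketch of the CDN caching idea: pay the interconnect cost once,
# then serve every subsequent request from inside the ISP's network.
class EdgeCache:
    def __init__(self):
        self.store = {}                  # url -> cached content
        self.hits = self.misses = 0

    def fetch_from_origin(self, url):
        self.misses += 1                 # one trip across paid transit
        return f"<content of {url}>"

    def get(self, url):
        if url not in self.store:
            self.store[url] = self.fetch_from_origin(url)
        else:
            self.hits += 1               # served locally, no transit
        return self.store[url]

cache = EdgeCache()
for _ in range(1000):
    cache.get("http://example.com/popular-video.mp4")
print(cache.hits, cache.misses)          # 999 hits, 1 miss
```

A thousand viewers of the same clip cost the ISP one transit fetch rather than a thousand – the CDN economic argument in miniature.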
Once the CDN servers are deployed, they need to be filled up with content and used effectively to make the network efficient – which means locating them in the right places. An important point about a CDN, and one that may play to telcos’ strengths, is that location matters.
Figure 1: With higher speeds, geography starts to dominate download times
Source: Akamai
CDN Player Strategies
Market Overview
CDNs are a diverse group of businesses, with several major players – notably Akamai, the market leader, EdgeCast, and Limelight Networks, all of which are pure-play CDNs – and also a number of players that are part of either carriers or Web 2.0 majors. Level(3), which is widely expected to acquire the Limelight CDN, is better known as a massive Internet backbone operator. BT Group and Telefonica both have CDN products. On the other hand, Google, Amazon, and Microsoft operate their own, very substantial CDNs in support of their own businesses. Amazon also provides a basic CDN service to third parties. Beyond these, there are a substantial number of small players.
Akamai is by far the biggest; Arbor Networks estimated that it might account for as much as 15% of Internet traffic once the actual CDN traffic was counted in, while the top five CDNs accounted for 10% of inter-domain traffic. The distinction is itself a testament to the effectiveness of CDN as a methodology.
The impact of CDN
As an example of the benefits of their CDN, above and beyond ‘a better viewing experience’, Akamai claim that they can demonstrate a 15% increase in completed transactions on an e-commerce site by using their application acceleration product. This doesn’t seem out of court, as Amazon.com has cited similar numbers in the past, in their case by reducing the volume of data needed to deliver a given web page rather than by accelerating its delivery.
As a consequence of these benefits, and the predicted growth in internet traffic, Akamai expect traffic on their platform to reach levels equivalent to the throughput of a US national broadcast TV station within 2-5 years. In the fixed world, Akamai claims offload rates of as much as 90%. The Jetstream CDN blog points out that mobile operators might be able to offload as much as 65% of their traffic into the CDN. These numbers refer only to traffic sources that are customers of the CDN, but it ought to be obvious that offloading 90% of the YouTube or BBC iPlayer traffic is worth having.
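To put the offload claims above into perspective, a back-of-envelope calculation. Every number here is an assumption chosen for illustration, not measured data.

```python
# Back-of-envelope illustration of CDN offload economics.
# All inputs are assumptions, not measurements.
monthly_inbound_gb = 1_000_000   # hypothetical ISP transit volume
cdn_customer_share = 0.50        # share of traffic from CDN customers
offload_rate = 0.90              # fixed-line offload rate quoted above

gb_kept_on_net = monthly_inbound_gb * cdn_customer_share * offload_rate
print(f"Transit avoided: {gb_kept_on_net:,.0f} GB/month")  # 450,000 GB/month
```

Even at a few cents per GB of transit, savings on that scale explain why eyeball networks have historically welcomed CDN servers in-house.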
In Broadband 2.0: Mobile CDNs and video distribution we looked at the early prospects for Mobile CDN, and indeed, Akamai’s own move into the mobile industry is only beginning. However, Telefonica recently announced that its internal, group-wide CDN has reached an initial capability, with service available in Europe and in Argentina. They intend to expand across their entire footprint. We are aware of at least one other mobile operator which is actively investing in CDN capabilities. The degree to which CDN capabilities can be integrated into mobile networks is dependent on the operator’s choice of network architecture, which we discuss later in this note.
It’s also worth noting that one of Akamai’s unique selling points is that it is very much a global operator. As usual, there’s a problem for operators, especially mobile operators, in that the big Internet platforms are global and operators are regional. Content owners can deal with one CDN for their services all around the world – they can’t deal with one telco. Also, big video sources like national TV broadcasters can usually deal with one ex-incumbent fixed operator and cover much of the market, but must deal with several mobile operators.
Application Delivery: the frontier of CDN
Akamai is already doing a lot of what we call “ADN” (Application-Delivery Networking) by analogy to CDN. In a CDN, content is served up near the network edge. In an ADN, applications are hosted in the same way in order to deliver them faster and more reliably. (Of course, the media server in a CDN node is itself a software application.) And the numbers we cited above regarding improved transaction completion rates are compelling.
However, we were a little under-whelmed by the details given of their Edge Computing product. It is restricted to J2EE and XSLT applications, and it seems quite limited in the power and flexibility it offers compared to the state of the art in cloud computing. Google App Engine and Amazon EC2 look far more interesting from a developer point of view. Obviously, they’re going for a different market. But we heartily agree with Dan Rayburn that the future of CDN is applications acceleration, and that this goes double for mobile with its relatively higher background levels of latency.
Interestingly, some of Akamai’s ADN customers aren’t actually distributing their code out to the ADN servers, but only making use of Akamai’s overlay network to route their traffic. Relatively small optimisations to the transport network can have significant benefits in business terms even before app servers are physically forward-deployed.
Other industry developments to watch
There are some shifts underway in the CDN landscape. Notably, as we mentioned earlier, there are rumours that Limelight Networks wants to exit the packet-pushing element of the business in favour of the media services side – ingestion, transcoding, reporting and analytics. The most likely route is probably a sale or joint venture with Level(3). Their massive network footprint gives them both the opportunity to do global CDNing, and also very good reasons to do so internally. Being a late entrant, they have been very aggressive on price in building up a customer base (you may remember their role in the great Comcast peering war). They will be a formidable competitor and will probably want to move from macro-CDN to a more Akamai-like forward-deployed model.
To read the note in full, including the following additional analysis…
Akamai’s technology strategy for a global CDN
Can Telcos compete with CDN Players?
Potential Telco Leverage Points
Global vs. local CDN strategies
The ‘fat head’ of content is local
The challenges of scale and experience
Strategic Options for Telcos
Cooperating with Akamai
Partnering with a Vendor Network
Part of the global IT operation?
National-TV-centred CDNs
A specialist, wholesale CDN role for challengers?
Federated CDN
Conclusion
…and the following charts…
Figure 1: With higher speeds, geography starts to dominate download times
Figure 2: Akamai’s network architecture
Figure 3: Architectural options for CDN in 3GPP networks
Figure 4: Mapping CDN strategic options
…Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 19 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.
Organisations, people and products referenced: 3UK, Akamai, Alcatel-Lucent, Amazon, Arbor Networks, BBC, BBC iPlayer, BitTorrent, BT, Cisco, Dan Rayburn, EC2, EdgeCast, Ericsson, Google, GSM, Internet HSPA, Jetstream, Level(3), Limelight Networks, MBNL, Microsoft, Motorola, MOVE, Nokia Siemens Networks, Orange, TalkTalk, Telefonica, T-Mobile, Velocix, YouTube.
Technologies and industry terms referenced: 3GPP, ADSL, App Engine, backhaul, Carrier-Ethernet, Content Delivery Networks (CDNs), DNS, DOCSIS 3, edge computing, FTTx, GGSN, Gi interface, HFC, HSPA+, interconnect, IT, JavaScript, latency, LTE, Mobile CDNs, online, peering, POPs (Points of Presence), RNC, SQL, UMTS, VPN, WLAN.
Summary: Content Delivery Networks (CDNs) are becoming familiar in the fixed broadband world as a means to improve the experience and reduce the costs of delivering bulky data like online video to end-users. Is there now a compelling need for their mobile equivalents, and if so, should operators partner with existing players or build / buy their own? (August 2011, Executive Briefing Service, Future of the Networks Stream).
Below is an extract from this 25 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can buy a Single User license for this report online here for £595 (+VAT) or subscribe here. For multiple user licenses, or to find out about interactive strategy workshops on this topic, please email contact@telco2.net or call +44 (0) 207 247 5003.
Introduction
As is widely documented, mobile networks are witnessing huge growth in the volumes of 3G/4G data traffic, primarily from laptops, smartphones and tablets. While Telco 2.0 is wary of some of the headline shock-statistics about forecast “exponential” growth, or “data tsunamis” driven by ravenous consumption of video applications, there is certainly a fast-growing appetite for use of mobile broadband.
That said, many of the actual problems of congestion today can be pinpointed either to a handful of busy cells at peak hour – or, often, the inability of the network to deal with the signalling load from chatty applications or “aggressive” devices, rather than the “tonnage” of traffic. Another large trend in mobile data is the use of transient, individual-centric flows from specific apps or communications tools such as social networking and messaging.
But “tonnage” is not completely irrelevant. Despite the diversity, there is still an inexorable rise in the use of mobile devices for “big chunks” of data, especially the special class of data commonly known as “content” – typically popular/curated standalone video clips or programmes, or streamed music. Images (especially those in web pages) and application files such as software updates fit into a similar group – sizeable lumps of data downloaded by many individuals across the operator’s network.
This one-to-many nature of most types of bulk content highlights inefficiencies in the way mobile networks operate. The same data chunks are downloaded time and again by users, typically going all the way from the public Internet, through the operator’s core network, eventually to the end user. Everyone loses in this scenario – the content publisher needs huge servers to dish up each download individually. The operator has to deal with transport and backhaul load from repeatedly sending the same content across its network (and IP transit from shipping it in from outside, especially over international links). Finally, the user has to deal with all the unpredictability and performance compromises involved in accessing the traffic across multiple intervening points – and ends up paying extra to support the operator’s heavier cost base.
In the fixed broadband world, many content companies have availed themselves of a group of specialist intermediaries called CDNs (content delivery networks). These firms on-board large volumes of the most important content served across the Internet, before dropping it “locally” as near to the end user as possible – if possible, served up from cached (pre-saved) copies. Often, the CDN operating companies have struck deals with the end-user facing ISPs, which have often been keen to host their servers in-house, as they have been able to reduce their IP interconnection costs and deliver better user experience to their customers.
In the mobile industry, the use of CDNs is much less mature. Until relatively recently, the overall volumes of data didn’t really move the needle from the point of view of content firms, while operators’ radio-centric cost bases were also relatively immune from those issues as well. Optimising the “middle mile” for mobile data transport efficiency seemed far less of a concern than getting networks built out and handsets and apps perfected, or setting up policy and charging systems to parcel up broadband into tiered plans. Arguably, better-flowing data paths and video streams would only load the radio more heavily, just at a time when operators were having to compress video to limit congestion.
This is now changing significantly. With the rise in smartphone usage – and the expectations around tablets – Internet-based CDNs are pushing much more heavily to have their servers placed inside mobile networks. This is leading to a certain amount of introspection among the operators – do they really want to have Internet companies’ infrastructure inside their own networks, or could this be seen more as a Trojan Horse of some sort, simply accelerating the shift of content sales and delivery towards OTT-style models? Might it not be easier for operators to build internal CDN-type functions instead?
Some of the earlier approaches to video traffic management – especially so-called “optimisation” without the content companies’ permission or involvement – are becoming trickier with new video formats and more scrutiny from a Net Neutrality standpoint. But CDNs by definition involve the publishers, so any necessary compression or other processing can potentially be done collaboratively, rather than “transparently” and without cooperation.
At the same time, many of the operators’ usual vendors are seeing this transition point as a chance to differentiate their new IP core network offerings, typically combining CDN capability into their routing/switching platforms, often alongside the optimisation functions as well. In common with other recent innovations from network equipment suppliers, there is a dangled promise of Telco 2.0-style revenues that could be derived from “upstream” players. In this case, there is a bit more easily-proved potential, since this would involve direct substitution of the existing revenues already derived from content companies, by the Internet CDN players such as Akamai and Limelight. This also holds the possibility of setting up a two-sided, content-charging business model that fits OK with rules on Net Neutrality – there are few complaints about existing CDNs except from ultra-purist Neutralists.
On the other hand, telco-owned CDNs have existed in the fixed broadband world for some time, with largely indifferent levels of success and adoption. There needs to be a very good reason for content companies to choose to deal with multiple national telcos, rather than simply take the easy route and choose a single global CDN provider.
So, the big question for telcos around CDNs at the moment is “should I build my own, or should I just permit Akamai and others to continue deploying servers into my network?” Linked to that question is what type of CDN operation an operator might choose to run in-house.
There are four main reasons why a mobile operator might want to build its own CDN:
To lower costs of network operation or upgrade, especially in radio network and backhaul, but also through the core and in IP transit.
To improve the user experience of video, web or applications, either in terms of data throughput or latency.
To derive incremental revenue from content or application providers.
For wider strategic or philosophical reasons about “keeping control over the content/apps value chain”.
This Analyst Note explores these issues in more detail, first giving some relevant contextual information on how CDNs work, especially in mobile.
What is a CDN?
The traditional model for Internet-based content access is straightforward – the user’s browser requests a piece of data (image, video, file or whatever) from a server, which then sends it back across the network, via a series of “hops” between different network nodes. The content typically crosses the boundaries between multiple service providers’ domains, before finally arriving at the user’s access provider’s network, flowing down over the fixed or mobile “last mile” to their device. In a mobile network, that also typically involves transiting the operator’s core network first, which has a variety of infrastructure (network elements) to control and charge for it.
A Content Delivery Network (CDN) is a system for serving Internet content from servers which are located “closer” to the end user either physically, or in terms of the network topology (number of hops). This can result in faster response times, higher overall performance, and potentially lower costs to all concerned.
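As a minimal illustration of the principle, the sketch below simply picks the edge server with the fewest hops to the user and estimates the resulting round-trip improvement. The node names, hop counts and per-hop latencies are hypothetical assumptions, not any particular CDN’s logic:

    # Minimal sketch: choose the "closest" CDN edge by hop count.
    # All nodes, hop counts and per-hop latencies are illustrative assumptions.
    ORIGIN_HOPS = 14          # a typical-looking path across several provider domains
    PER_HOP_LATENCY_MS = 5    # crude per-hop cost, for illustration only

    edges = {"edge-london": 3, "edge-paris": 6, "edge-frankfurt": 8}

    def pick_edge(candidates):
        """Return the edge server with the fewest hops to the user."""
        return min(candidates, key=candidates.get)

    best = pick_edge(edges)
    print(f"Serve from {best}: ~{edges[best] * PER_HOP_LATENCY_MS} ms round trip, "
          f"vs ~{ORIGIN_HOPS * PER_HOP_LATENCY_MS} ms from the origin")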
In most cases in the past, CDNs have been run by specialist third-party providers, such as Akamai and Limelight. This document also considers the role of telcos running their own “on-net” CDNs.
CDNs can be thought of as analogous to the distribution of bulky physical goods – it would be inefficient for a manufacturer to ship all products to customers individually from a single huge central warehouse. Instead, it will set up regional logistics centres that can be more responsive – and, if appropriate, tailor the products or packaging to the needs of specific local markets.
As an example, there might be a million requests for a particular video stream from the BBC. Without using a CDN, the BBC would have to provide sufficient server capacity and bandwidth to handle them all. The company’s immediate downstream ISPs would have to carry this traffic to the Internet backbone, the backbone itself would have to carry it, and finally the requesters’ ISPs’ access networks would have to deliver it to the end-points. From a media-industry viewpoint, the source network (in this case the BBC) is generally called the “content network” or “hosting network”; the destination is termed an “eyeball network”.
In a CDN scenario, all the data for the video stream has to be transferred across the Internet just once for each participating network, when it is deployed to the downstream CDN servers and stored. After this point, it is only carried over the user-facing eyeball networks, not any others via the public Internet. This also means that the CDN servers may be located strategically within the eyeball networks, in order to use those networks’ resources more efficiently. For example, the eyeball network could place the CDN server on the downstream side of its most expensive link, so as to avoid carrying the video over it multiple times. In a mobile context, CDN servers could be used to avoid pushing large volumes of data through expensive core-network nodes repeatedly.
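To put rough numbers on the BBC example above, the following sketch compares the traffic that the hosting network and backbone must carry with and without a CDN (all inputs are illustrative assumptions, not BBC or ISP figures):

    # Illustrative transit arithmetic for the example above (assumed figures).
    requests = 1_000_000       # requests for one video stream
    stream_mb = 200            # size of the stream in megabytes
    eyeball_networks = 50      # participating destination ISPs

    # Without a CDN the origin and backbone carry every copy; with one,
    # they carry a single ingest copy per participating eyeball network.
    without_cdn_tb = requests * stream_mb / 1_000_000        # MB -> TB
    with_cdn_tb = eyeball_networks * stream_mb / 1_000_000   # MB -> TB

    print(f"Origin/backbone traffic without CDN: {without_cdn_tb:,.0f} TB")
    print(f"Origin/backbone traffic with CDN:    {with_cdn_tb:,.2f} TB")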
When the video or other content is loaded into the CDN, other optimisations such as compression or transcoding into other formats can be applied if desired. There may also be various treatments relating to new forms of delivery such as HTTP streaming, where the video is broken up into “chunks” with several different sizes/resolutions. Collectively, these upfront processes are called “ingestion”.
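A toy sketch of the chunking step of ingestion, using generic logic rather than any specific HTTP streaming standard’s container or manifest format:

    # Toy "ingestion" sketch: split a source file into fixed-size chunks and
    # record a manifest of chunk names per rendition. Illustrative only; a real
    # system would also transcode each rendition to its target resolution.
    import os

    def ingest(path, chunk_bytes=2_000_000, renditions=("1080p", "720p", "360p")):
        size = os.path.getsize(path)
        n_chunks = -(-size // chunk_bytes)   # ceiling division
        return {r: [f"{r}/chunk_{i:05d}.ts" for i in range(n_chunks)]
                for r in renditions}

    # manifest = ingest("episode.mp4")   # hypothetical source file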
Figure 1 – Content delivery with and without a CDN
Source: STL Partners / Telco 2.0
Value-added CDN services
It is important to recognise that the fixed-centric CDN business has increased massively in richness and competition over time. Although some of the players have very clever architectures and IPR in the form of their algorithms and software techniques, the flexibility of modern IP networks has tended to erode some of the early advantages and margins. Shipping large volumes of content is now starting to become secondary to the provision of associated value-added functions and capabilities around that data. Additional services include:
Analytics and reporting
Advert insertion
Content ingestion and management
Application acceleration
Website security management
Software delivery
Consulting and professional services
It is no coincidence that the market leader, Akamai, now refers to itself as a “provider of cloud optimisation services” in its financial statements, rather than a CDN, with its business being driven by “trends in cloud computing, Internet security, mobile connectivity, and the proliferation of online video”. In particular, it has started refocusing away from dealing with “video tonnage”, and towards application acceleration – for example, speeding up the load times of e-commerce sites, which has a measurable impact on the abandonment of purchasing visits. Akamai’s total revenues in 2010 were around $1bn, less than half of which came from “media and entertainment” – the traditional “content industries”. Its H1 2011 revenues were relatively disappointing, with growth coming from non-traditional markets such as enterprise and high-tech (e.g. software update delivery) rather than media.
This is a critically important consideration for operators that are looking to CDNs to provide them with sizeable uplifts in revenue from upstream customers. Telcos – especially in mobile – will need to invest in various additional capabilities as well as the “headline” video traffic management aspects of the system. They will need to optimise for network latency as well as throughput, for example – which will probably not have the cost-saving impacts expected from managing “data tonnage” more effectively.
Although in theory telcos’ other assets should help – for example mapping download analytics to more generalised customer data – this is likely to involve extra complexity with the IT side of the business. There will also be additional efforts around sales and marketing that go significantly beyond most mobile operators’ normal footprint into B2B business areas. There is also a risk that an analysis of bottlenecks for application delivery / acceleration ends up simply pointing the finger of blame at the network’s inadequacies in terms of coverage. Improving delivery speed, cost or latency is only valuable to an upstream customer if there is a reasonable likelihood of the end-user actually having connectivity in the first place.
Figure 2: Value-added CDN capabilities
Source: Alcatel-Lucent
Application acceleration
An increasingly important aspect of CDNs is their move beyond content/media distribution into a much wider area of “acceleration” and “cloud enablement”. As well as delivering large pieces of data efficiently (e.g. video), there is arguably more tangible value in delivering small pieces of data fast.
There are various manifestations of this, but a couple of good examples illustrate the general principles:
Many web transactions are abandoned because websites (or apps) seem “slow”. Few people would trust an airline’s e-commerce site, or a bank’s online interface, if they’ve had to wait impatiently for images and page elements to load, perhaps repeatedly hitting “refresh” on their browsers. Abandoned transactions can be directly linked to slow or unreliable response times – typically a function of congestion either at the server or various mid-way points in the connection. CDN-style hosting can accelerate the service measurably, leading to increased customer satisfaction and lower levels of abandonment.
Enterprise adoption of cloud computing is becoming exceptionally important, with both cost savings and performance enhancements promised by vendors. Sometimes, such platforms will involve hybrid clouds – a mixture of private (internal) and public (Internet) resources and connectivity. Where corporates are reliant on public Internet connectivity, they may well want to ensure as fast and reliable a service as possible, especially in terms of round-trip latency. Many IT applications are designed to be run on ultra-fast company private networks, with a lot of “hand-shaking” between the user’s PC and the server. This process is very latency-dependent, and especially as companies also mobilise their applications, the additional overhead time in cellular networks may otherwise cause significant problems.
Hosting applications at CDN-type cloud acceleration providers achieves much the same effect as for video – they can bring the application “closer”, with fewer hops between the origin server and the consumer. Additionally, the CDN is well-placed to offer additional value-adds such as firewalling and protection against denial-of-service attacks.
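A hedged back-of-envelope calculation shows why such “chatty” applications suffer disproportionately on high-latency links; the round-trip count and latencies below are assumptions for illustration:

    # Back-of-envelope: time spent purely on protocol round trips. Assumed figures.
    round_trips = 40   # handshakes in a hypothetical "chatty" enterprise transaction

    for label, rtt_ms in [("corporate LAN", 2),
                          ("CDN edge via fixed ISP", 25),
                          ("distant origin via 3G", 250)]:
        print(f"{label:>25}: {round_trips * rtt_ms / 1000:.1f} s of round-trip wait")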
To read the 25-page note in full, including the following additional content…
How do CDNs fit with mobile networks?
Internet CDNs vs. operator CDNs
Why use an operator CDN?
Should delivery mean delivery?
Lessons from fixed operator CDNs
Mobile video: CDNs, offload & optimisation
CDNs, optimisation, proxies and DPI
The role of OVPs
Implementation and planning issues
Conclusion & recommendations
… and the following additional charts…
Figure 3 – Potential locations for CDN caches and nodes
Figure 4 – Distributed on-net CDNs can offer significant data transport savings
Figure 5 – The role of OVPs for different types of CDN player
Figure 6 – Summary of Risk / Benefits of Centralised vs. Distributed and ‘Off Net’ vs. ‘On-Net’ CDN Strategies
……Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 25 page report in PDF format here. Non-Members, please see here for how to subscribe, here to buy a single user license for £595 (+VAT), or for multi-user licenses and any other enquiries please email contact@telco2.net or call +44 (0) 207 247 5003.
Summary: Telenor’s new ‘Mobile Business Network’ integrates SMEs’ mobile and fixed phone systems via managed APIs, providing added functionality and delivering greater business efficiency. It uses a ‘two-sided’ business model strategy and targets the market via developers.
The enterprise is the key field for new forms of voice and messaging; it’s where the social and economic value of bits exceeds their quantity by the greatest margin, and where the problems of bad voice & messaging are most severe.
People spend hours answering phone calls and typing information into computers – calls they take from people sitting behind computers that are internetworked with the ones they sit behind. Quite often, the answer is to send the caller on to someone else. Meanwhile, other people struggle to avoid calls from enterprises.
It’s got to change, and here’s a start: Mobilt Bedriftsnett or the ‘Mobile Business Network’ from Telenor.
‘Telenor 2.0’
Telenor are a large Norwegian integrated telecoms operator, and a pioneer and early adopter of some Telco 2.0 ideas. As long ago as 2001, their head of strategy, Lars Godell, was working on an early implementation of some of the ideas we’ve been promoting. They also have an active ‘Telenor 2.0’ strategic transformation programme.
Content Provider Access – CPA – established a standard interface for the ingestion, delivery, billing, and settlement of mobile content of any description that would be delivered to Telenor subscribers, and was the first service of this kind to share revenue from content sales with third parties and to interwork with other mobile and fixed line operators, years before the iPhone or even NTT’s pioneering i-Mode. Later, they added a developer sandbox (Playground) as well.
So, what would they do when they encountered the need for better voice & messaging? The importance of this line of business, and its focus on enterprises, has been part and parcel of Telco 2.0 since its inception (here’s a note on “digital workers” from the spring of 2007, and another on better telephony from the same period), and we’ve only become more convinced of its importance as a wave of disruptive innovators have entered the field.
We spoke to Telenor’s product manager for charging APIs, Elisabeth Falck, and strategy director Frank Elter; they describe Mobilt Bedriftsnett (MB) as “our latest move towards Telco 2.0”.
Voice 2.0: despite the changing value proposition…
In the Voice & Messaging 2.0 strategy report, we identified a fundamental shift in the value proposition of telephony; in the past, telephony was scarce relative to labour. That stopped being true between 1986 and 2001 in the US, when the price per minute of telephony fell below that of people’s time (the exact crossover points are 1986 for unskilled workers and landline calls, 1998 for graduates and mobile calls, and finally 2001 for unskilled workers and mobile calls).
Now, telephony is relatively plentiful; this is why there are now call-centre help desks and repair centres rather than service engineers and local repair shops. It’s no longer worth employing workers to avoid telephone calls; rather, it’s worth delivering services to the customer by phone rather than having a field sales or service force. The chart below visualises this relationship.
…and changing position in the value chain…
We also identified two other major trends in voice – commoditisation and fragmentation.
Voice is increasingly commoditised – that is to say, it’s a bulk product, cheap, and largely homogeneous. These are also the classic conditions of a product in perfect competition; despite the name and the ideological baggage, this isn’t a good thing, as in this situation economic theory predicts that profit margins will be competed away down to the absolute minimum required to keep the participants from giving up.
The provision of Voice is also increasingly fragmented and diverse – there are more and more producers, and more and more different applications, networks, and hardware devices incorporate some form of telephony. For example, games consoles like the Xbox have a voice chat capability, and CRM systems like Salesforce.com can be integrated with click-to-call services.
As a result, there’s less and less value in the telephone call itself – the period between the ringing tone and the click, when the circuit is established and bearer traffic is flowing. This bit is now cheap or free, and although Skype hasn’t eaten the world as it seemed it might in 2005, this is largely because the industry has reacted by bundling – i.e. slashing prices. Of course, neither the disruptors nor the traditional telcos can base a business on a permanent price war – eventually, prices go to zero. We’ve seen the results of this; several VoIP carriers whose business was based on offering the same features as the PSTN, but cheaper, have already gone under.
The outlook of Telco 2.0 Executive Brainstorm delegates as far back as 2007 demonstrates the widespread acceptance of these trends in the industry, and the increasing proliferation of diverse means of delivering voice, as shown in the following chart.
… Voice is still the biggest game in Telcotown…
So why bother with voice? The short answer is that there are three communications products the public gladly pays for – voice, SMS, and IP access.
Telenor’s CPA, one of the most successful and longest-running mobile content plays, is proud of $100m in revenues. In comparison, the business voice market in Norway is NOK6.9bn – $1.22bn. Even in 10 years’ time, voice will comprise the bulk of Telco revenue streams. However grim the prospects, defending Voice is only optional in the sense that survival is optional.
Moreover, the emergence of the first wave of Internet voice players – Skype, Vonage, etc. – and the subsequent fight-back by operators demonstrates that there is still much scope for innovation in voice and messaging, and that the option of better voice and messaging is still open.
…although the rules are changing…
Specifically, the possible zone of value is now adjacent to the call – features like presence-and-availability, dynamic call routing, speech-to-text, collaboration, history, and integration with the field of CEBP (Communications-Enabled Business Processes). There may also be some scope for improving the bearer quality – HD voice is currently gaining buzz – although the challenge there is that the Internet Players can use better voice codecs as well (Skype already does).
…and the Enterprise market is where the smart money is
The crucial market for better voice & messaging is the enterprise, because that’s where the money is. Nowhere else does the economic value of bits exceed their quantity and cost so much.
For large enterprises, the answer will almost certainly come from custom developments. They are already extensive users of VoIP internally, and increasingly externally as well. They tend to have large customised IT and unified communications installations, and the money and infrastructure to either do their own development or hire software/systems integration firms to do it for them. The appropriate telco play is something like BT Global Services – the systems integration/managed services wing of BT.
But using the toolkit of Voice 2.0 is technically challenging. It’s been said that free software is usually only free if you value your time at zero; small and medium-sized businesses can never afford to do that.
Mobilt Bedriftsnett (MB) is Telenor’s response to this situation, aimed at Small and Medium Enterprises (SMEs). Its primary benefit is to improve business efficiency by extending the functions of an internal PBX and/or unified communications system to include all the company’s mobile phones.
Telenor’s internal business modelling estimates the cost of CRM failures – missed appointments, rework of mistakes, complaints, lost sales – to a potential SME customer at between $500 and $2,000 a year. This is the economic ‘friction’ that the product is designed to address.
The Core Product is based on Telenor APIs…
The product is based on a suite of APIs into Telenor infrastructure, one of which replicates a hosted IP-PBX (i.e. IP Centrex) solution. It’s aimed at SMEs, and in particular, at integrating with their existing PBX, unified communications, and CRM installations. There’s a browser-based end-user interface, which lets non-technical customers manage their services.
There is also considerable scope for further development, and MB also provides four other APIs, which provide a click-to-call capability, bulk or programmatic SMS, location information, and “Status Push”. This last one provides information on whether a user is currently in coverage, power level, bandwidth, etc, and will be extended to carry presence-and-availability information and integrate with groupware and CRM systems in Q1 2010.
…and integrated with PBX/UC Vendor Client Solutions
Extensive work has been carried out with PBX/UC vendors, notably Alcatel-Lucent and Microsoft, to ensure integration. For example, one of the current use cases for the click-to-call API permits a user to launch a conference call from within MS Outlook or a CRM application. The voice switch receives an event from the SOAP API, initiates a call to the user’s mobile device, then bridges in the target number.
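The flow described above might be orchestrated roughly as in the sketch below. All class, function and parameter names are invented for illustration; Telenor’s actual SOAP contract is not reproduced in this note:

    # Hypothetical click-to-call orchestration mirroring the flow in the text:
    # the switch first rings the initiating user's mobile, then bridges the target.
    from dataclasses import dataclass

    @dataclass
    class CallSession:
        msisdn: str
        answered: bool = True          # stubbed: assume the user picks up

    class VoiceSwitchStub:
        def call(self, msisdn):
            print(f"Ringing user's mobile {msisdn}...")
            return CallSession(msisdn)
        def bridge(self, session, target):
            print(f"Bridging {session.msisdn} to {target}")

    def click_to_call(switch, user_msisdn, target_number):
        session = switch.call(user_msisdn)        # 1. ring the user's mobile
        if not session.answered:
            raise RuntimeError("user did not answer")
        switch.bridge(session, target_number)     # 2. bridge in the target number
        return session

    # e.g. launched from an Outlook or CRM plug-in:
    click_to_call(VoiceSwitchStub(), "+47 9999 9999", "+47 2200 0000")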
The ‘two-sided’ Enterprise ‘App Store’
MB is also the gateway to a business-focused app store, which markets the work of third-party software developers using the MB API to their base of SME customers. This element qualifies it as a two-sided business model. Telenor is thereby facilitating trade that wouldn’t otherwise occur, by sharing revenue from its customers with upstream producers and also by bringing SMEs that might not otherwise attract any interest from the developer community into contact with it. Developers either pay per use or receive a 70% revenue share depending on the APIs in use.
Telenor are using the existing infrastructure created for CPA to pay out the revenue share and carry out the digital logistics, and targeting the developer community they’re already building under their iLabs project. So far, third-party applications include integration with Microsoft’s Office Communication Server line of products, integration with Alcatel-Lucent and some other proprietary IP-PBXs, and a mobile-based CRM solution, WebOfficeOne.
Route to Market: Enterprise ICT Specialists
In a twist on the two-sided business model, MB services are primarily marketed to systems integrators, independent software developers, and CRM and IP telephony vendors, who act as a channel to market for core Telenor products such as voice, messaging, presence & availability, and location. This differs quite sharply from their experience with CPA, whose business is dominated by content providers.
Pricing is based on a freemium model; some API usage is free, businesses that choose to use the CPA payments system pay through the revenue sharing mechanism, and ones that don’t but do use the APIs heavily pay by usage.
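A minimal sketch of that three-way pricing logic, under stated assumptions: the free-tier threshold and per-call rate below are invented, and only the 70% developer share comes from the text:

    # Sketch of the freemium logic described above. Thresholds and rates are
    # assumptions; the 70% developer revenue share is stated in the text.
    FREE_CALLS = 1_000          # assumed free-tier API calls per month
    USAGE_RATE = 0.002          # assumed price per call beyond the free tier
    DEVELOPER_SHARE = 0.70      # developer's share of CPA payment revenue

    def monthly_charge(api_calls, cpa_revenue=0.0, uses_cpa_payments=False):
        if uses_cpa_payments:
            # negative value = payout: the developer receives 70% of revenue
            return -DEVELOPER_SHARE * cpa_revenue
        return max(0, api_calls - FREE_CALLS) * USAGE_RATE

    print(monthly_charge(500))                    # light use: free
    print(monthly_charge(50_000))                 # heavy use: pays by usage
    print(monthly_charge(50_000, 1_000.0, True))  # CPA user: receives a payout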
Technical Architecture: migrating to industry standards
Telco 2.0 has previously articulated the seven questions concept – seven key customer questions that can be answered using a telecoms operator’s data assets, as shown in the following diagram.
Telenor’s API layer consists of Simple Object Access Protocol (SOAP) and Web service interfaces between the customer needs on the left of the diagram, and a bank of service gateways which communicate with various elements in the core network on the right.
At the moment, the click-to-call and status push interfaces are implemented using the Computer-Supported Telecommunications Applications (CSTA) standard, in order to integrate more easily with the Alcatel-Lucent range of PBXs. So far, they don’t implement Parlay-X (or OneAPI, as the GSMA calls it), but they intend to migrate to the standard in the future. As with Microsoft OCS, Asterisk, and much else, the industry-standard IETF SIP is used for the core voice, messaging, and availability functions.
Early days, high hopes…
Telenor is unwilling to describe what it would consider to constitute success with Mobilt Bedriftsnett; however, they do say that they expect it to be a “great source of income”. MB has only been live since June 2009, and traffic to CPA inevitably dwarfs that to the MB APIs at present.
…and part of a bigger strategic plan
Mobilt Bedriftsnett makes up the Voice & Messaging 2.0 element of Telenor’s transformation towards Telco 2.0. The other components of ‘Telenor 2.0’ are:
CPA, the platform enabling 3rd party mobile content transactions
the iLabs/Playground developer community
increasing strategic interest in M2M applications
a recently launched Content Delivery Network (or CDN – a subject gaining salience again, after the recent Arbor Networks study that showed CDNs accounting for 10-15% of global Internet traffic)
Mobile Payments, Money transfer and Banking at Grameenphone in Bangladesh.
Lessons from Telenor 2.0
With Mobilt Bedriftsnett, Telenor has carried on its tradition of pioneering Telco 2.0-style business model innovations, though it is relatively early to judge the success of the ‘Telenor 2.0’ strategy.
At this stage of market development, Telenor’s approach offers some important lessons to other industry players.
1) They are taking serious steps to create and try ‘two-sided’ telecoms business models.
2) The repeated mentions of CPA’s role in MB point to an important truth about Telco 2.0 – the elements of it are mutually supporting. It becomes dramatically easier to create a developer community, bill for sender-pays data, operate an app store, etc, if you already have an effective payments and revenue-sharing solution. Similarly, an effective identification/authorisation capability underlies billing and payments. Telenor understands and is acting on this network principle.
NB A full PDF copy of this briefing can be downloaded here.
This special Executive Briefing report summarises the brainstorming output from the Content Distribution 2.0 (Broadband Video) section of the 6th Telco 2.0 Executive Brainstorm, held on 6-7 May in Nice, France, with over 200 senior participants from across the Telecoms, Media and Technology sectors. See: www.telco2.net/event/may2009.
It forms part of our effort to stimulate a structured, ongoing debate within the context of our ‘Telco 2.0’ business model framework (see www.telco2research.com).
Each section of the Executive Brainstorm involved short stimulus presentations from leading figures in the industry, group brainstorming using our ‘Mindshare’ interactive technology and method, a panel discussion, and a vote on the best industry strategy for moving forward.
There are 5 other reports in this post-event series, covering the other sections of the event: Retail Services 2.0, Enterprise Services 2.0, Piloting 2.0, Technical Architecture 2.0, and APIs 2.0. In addition there will be an overall ‘Executive Summary’ report highlighting the overall messages from the event.
Each report contains:
Our independent summary of some of the key points from the stimulus presentations
An analysis of the brainstorming output, including a large selection of verbatim comments
The ‘next steps’ vote by the participants
Our conclusions of the key lessons learnt and our suggestions for industry next steps.
The brainstorm method generated many questions in real-time. Some were covered at the event itself and others we have responded to in each report. In addition we have asked the presenters and other experts to respond to some more specific points.
Background to this report
The demand for internet video is exploding. This is putting significant stress on the current fixed and mobile distribution business model. Infrastructure investments and operating costs required to meet demand are growing faster than revenues. The strategic choices facing operators are to charge consumers more when they expect to pay less, to risk upsetting content providers and users by throttling bandwidth, or to unlock new revenues to support investment and cover operating costs by creating new valuable digital distribution services for the video content industry.
Brainstorm Topics
A summary of the new Telco 2.0 Online Video Market Study: Options and Opportunities for Distributors in a time of massive disruption.
What are the most valuable new digital distribution services that telcos could create?
What is the business model for these services – who are the potential buyers and what are the priority opportunity areas?
What progress has been made in new business models for video distribution – including FTTH deployment, content-delivery networking, and P2P?
Preliminary results of the UK cross-carrier trial of sender-pays data
How the TM Forum’s IPSphere programme can support video distribution
Stimulus Presenters and Panellists
Richard D. Titus, Controller, Future Media, BBC
Trudy Norris-Grey, MD Transformation and Strategy, BT Wholesale
Scott Shoaf, Director, Strategy and Planning, Juniper Networks
Ibrahim Gedeon, CTO, Telus
Andrew Bud, Chairman, Mobile Entertainment Forum
Alan Patrick, Associate, Telco 2.0 Initiative
Facilitator
Simon Torrance, CEO, Telco 2.0 Initiative
Analysts
Chris Barraclough, Managing Director, Telco 2.0 Initiative
Dean Bubley, Senior Associate, Telco 2.0 Initiative
Alex Harrowell, Analyst, Telco 2.0 Initiative
Stimulus Presentation Summaries
Content Distribution 2.0
Scott Shoaf, Director, Strategy and Planning, Juniper Networks opened the session with a comparison of the telecoms industry’s response to massive volumes of video and that of the US cable operators. He pointed out that the cable companies’ raison d’être was to deliver vast amounts of video; therefore their experience should be worth something.
The first question, however, was to define the problem. Was the problem the customer, in which case the answer would be to meter, throttle, and cap bandwidth usage? If we decided this was the solution, though, the industry would be in the position of selling broadband connections and then trying to discourage its customers from using them!
Or was the problem not one of cost, but one of revenue? Networks cost money; the cloud is not actually a cloud, but is made up of cables, trenches, data centres and machines. Surely there wouldn’t be a problem if revenues rose with higher usage? In that case, we ought to be looking at usage-based pricing, but also at alternative business models – like advertising and the two-sided business model.
Or is it an engineering problem? It’s not theoretically impossible to put in bigger pipes until all the HD video from everyone can reach everyone else without contention – but in practice there is always some degree of oversubscription. What if we focused on specific sources of content? Define a standard of user experience, train the users to that, and work backwards?
If it is an engineering problem, the first step is to reduce the problem set. The long tail obviously isn’t the problem; it’s too long, as has been pointed out, and doesn’t account for very much traffic. It’s the ‘big head’ or ‘short tail’ stuff that is the heart of the problem: we need to deal with this short tail of big traffic generators. We need a CDN or something similar to deliver it.
On cable, the customers are paying for premium content – essentially movies and TV – and the content providers are paying for distribution. We need to escape from the strict distinctions between Internet, IPTV, and broadcast. After all, despite the alarming figures for people leaving cable, many of them are leaving existing cable connections to take a higher grade of service. Consider Comcast’s Fancast – focused on users, not lines, with an integrated social-recommendation system, it integrates traditional cable with subscription video. Remember that broadcast is a really great way to deliver!
Advertising – at the moment, content owners are getting 90% of the ad money.
Getting away from this requires us to standardise the technology and the operational and commercial practices involved. The cable industry is facing this with the SCTE130 and Advanced Advertising 1.0 standards, which provide for fine-grained ad insertion and reporting. We need to blur the definition of TV advertising – the market is much bigger if you include Internet and TV ads together. Further, 20,000 subscribers to IPTV aren’t interesting to anyone – we need to attack this across the industry and learn how to treat the customer as an asset.
The Future of Online Video, 6 months on
Alan Patrick, Associate, Telco 2.0 updated the conference on how things had changed since he introduced the “Pirate World” concept from our Online Video Distribution strategy report at the last Telco 2.0 event. The Pirate World scenario, he said, had set in much faster and more intensely than we had expected, and was working in synergy with the economic crisis.
Richard Titus, Controller, Future Media, BBC: “I have no problem with carriers making money, in fact, I pay over the odds for a 50Mbit/s link, but the real difference is between a model that creates opportunities for the public and one which constrains them.”
Ad revenues were falling; video traffic still soaring; rights-holders’ reaction had been even more aggressive than we had expected, but there was little evidence that it was doing any good. Entire categories of content were in crisis.
On the other hand, the first stirrings of the eventual “New Players Emerge” scenario were also observable; note the success of Apple in creating a complete, integrated content distribution and application development ecosystem around its mobile devices.
The importance of CPE is only increasing; especially with the proliferation of devices capable of media playback (or recording) and interacting with Internet resources. There’s a need for a secure gateway to help manage all the gadgets and deliver content efficiently. Similarly, CDNs are only becoming more central – there is no shortage of bandwidth, but only various bottlenecks. It’s possible that this layer of the industry may become a copyright policing point.
We think new forms of CPE and CDNs are happening now; efforts to police copyright in the network are in the near future; VAS platforms are the next wave after that, and then customer data will become a major line of business.
Most of all, time is flying by, and the overleveraged, or undercapitalised, are being eaten first.
The Content Delivery Framework
Ibrahim Gedeon, CTO, Telus introduced some lessons from Telus’s experience deploying both on-demand bandwidth and developer APIs. Telcos aren’t good at content, he said; instead, we need to be the smartest pipe and make use of our trusted relationship with customers, built up over the last 150 years.
We’re working in an environment where cash is scarce and expensive, and pricing is a zero- or even negative-sum game; impossible to raise prices, and hard to cut without furthering the price war. So what should we be doing? A few years ago the buzzword was SDP; now it’s CDN. We’d better learn what those actually mean!
Trudy Norris-Grey, Managing Director, BT Wholesale: “There is no capacity problem in the core, but there is to the consumer – and three bad experiences means the end of an application or service for that individual user.”
Anyway, we’re both a mobile and fixed operator and ISP, and we’ve got an IPTV network. We’ve learned the hard way that technology isn’t our place in the value chain. When we got the first IPTV system from Microsoft, it used 2,500 servers and far, far too much power. So we’re moving to a CDF (Content Delivery Framework) – which looks a lot like a SDP. Have the vendors just changed the labels on these charts?
So why do we want this? So we can charge for bandwidth, of course; if it was free, we wouldn’t care! But we’re making around $10bn in revenues and spending 20% of that in CAPEX. We need a business case for this continued investment.
We need the CDF to help us to dynamically manage the delivery and charging process for content. There was lots of goodness in IMS, the buzzword of five years ago, and in SDPs. But in the end it’s the APIs that matter. And we like standards because we’re not very big. So, we want to use TM Forum’s IPSphere to extend the CDF and SDF; after all, in roaming we apply different rate cards dynamically and settle transactions, so why not here too, for video or data? I’d happily pay five bucks for good 3G video interconnection.
And we need to do this for developer platforms too, which is why we’re supporting the OneAPI reference architecture. To sum up, let’s not forget subscriber identity, online charging – we’ve got to make money – the need for policy management because not all users are equal, and QoS for a differentiated user experience.
Sender-Pays Data in Practice
Andrew Bud, Chairman, MEF gave an update on the trial of sender-pays data he announced at the last event. This is no longer theoretical, he said; it’s functioning, just with a restricted feature set. Retail-only Internet has just about worked so far, because end users pay for access through their subscriptions and the services themselves are free at the point of use. Video breaks this, he said; it will be impossible to be comprehensive, meaningful, and sustainable.
You can’t, he said, put a meaningful customer warning that covers all the possible prices you might encounter due to carrier policy with your content; and everyone is scared of huge bills after the WAP experience. Further, look at the history of post offices, telegraphy and telephony – it’s been sender-pays since the 1850s. Similarly, Amazon.com is sender-pays, as is Akamai.
Hence we need sending-party-pays data – that way, we can have truly free ads: not ones where the poor end user ends up paying the delivery cost!
Our trial: we have relationships with carriers making up 85% of the UK market. We have contracts, priced per-MB of data, with them. And we have four customers – Jamster, who brought you the Crazy Frog; Shorts; THMBNLS, who produce mobisodes promoting public health; and Creative North – mobile games as a gift from the government. Of course, without sender-pays this is impossible.
We’ve discovered that the carriers have no idea how much data costs; wholesale pricing has some very interesting consequences. Notably the prices are being set too high. Real costs and real prices mean that quality of experience is a real issue; it’s a very complicated system to get right. The positive sign, and ringing endorsement for the trial, is that some carriers are including sender-pays revenue in their budgets now!
Participant Feedback
Introduction
The business of video is a prime battleground for Telco 2.0 strategies. It represents the heaviest data flows, the cornerstone of triple/quad-play bundling, powerful entrenched interests from broadcasters and content owners, and a plethora of regulators and industry bodies. For many people, it lies at the heart of home-based service provision and entertainment, as well as encroaching on the mobile space. The growth of P2P and other illegal or semi-legal download mechanisms puts pressure on network capacity – and invites controversial measures around protecting content rights and Net Neutrality.
In theory, operators ought to be able to monetise video traffic, even if they don’t own or aggregate content themselves. There should be options for advertising, prioritised traffic or blended services – but these are all highly dependent on not just capable infrastructure, but realistic business models. Operators also need to find a way to counter the ‘Network Neutrality’ lobbyists who are confounding the real issue (access to the internet for all service providers on a ‘best efforts’ basis) with spurious arguments that operators should not be able to offer premium services, such as QoS and identity, to customers that want to pay for them. Telco 2.0 would argue that the right to offer (and the right to buy) a better service is a cornerstone of capitalism and something that is available in every other industry. Telecoms should be no different. Of course, it remains up to the operators to develop services that customers are willing to pay more for…
A common theme in the discussion was “tempus fugit” – time flies. The pace of evolution has been staggering, especially in Internet video distribution – IPTV, YouTube, iPlayer, Hulu, Qik, P2P, mashups and so forth. Telcos do not have the luxury of time for extended pilot projects or grandiose collaborations that take years to come to fruition.
With this timing issue in mind, the feedback from the audience was collected in three categories, although here the output has been aggregated thematically, as follows:
STOP – What should we stop doing?
START – What should we start doing?
DO MORE – What things should we do more of?
Feedback: STOP the current business model
There was broad agreement that the current model is unsustainable, especially given the demands that “heavy” content like video traffic places on the network…
· [Stop] giving customers bandwidth for free [#5]
· Stop complex pricing models for end-user [#9]
· Stop investing so much in sustaining old order [#18]
· Stop charging mobile subscribers on a per megabyte basis. [#37]
· Current peering agreement/ip neutrality is not sustainable. [#41]
· [Stop] assuming things are free. [#48]
· [Stop] lowering prices for unlimited data. [#61]
· Have to develop more models for upstream charging for data rather than just flat rate to subscribers. [#11]
· Build rational pricing segmentation for data to monetize both sides of the value chain with focus on premium value items. [#32]
Feedback: Transparency and pricing
… with many people suggesting that Telcos first need to educate users and service providers about the “true cost” of transporting data… although whether they actually know the answer themselves is another question, as it is as much an issue of accounting practices as of network architecture.
· Make the service providers aware of the cost they generate to carriers. [#31]
· Make pricing transparency for consumers a must. [#10]
· Mobile operators start being honest with themselves about the true cost of data before they invest in LTE. [#7]
· When resources are limited, then rationing is necessary. Net Neutrality will not work. Today people pay for water in regions where it is limited in supply. Its use is abused when there are no limits. [#17]
· Start being transparent in data charges, it will all stay or fall with cost transparency. [#12]
· You can help people understand usage charges, with meters or regular updates, requires education for a behavioural change, easier for fixed than mobile. [#14]
· Service providers need to have a more honest dialogue with subscribers and give them confidence to use services [#57]
· As an industry we must invest more in educating the market about network economics, end-users as well as service providers. [#58]
· Start charging subscribers flat rate data fee rather than per megabyte. [#46]
Feedback: Sender-pays data
Andrew Bud’s concept of “sender pays data”, in which a content provider bundles in the notional cost of data transport into the download price for the consumer, generated both enthusiasm and concerns (although very little outright disagreement). Telco 2.0 agrees with the fundamental ‘elegance’ of the notion, but thinks that there are significant practical, regulatory and technical issues that need to be resolved. In particular, the delivery of “monolithic” chunks of content like movies may be limited, especially in mobile networks where data traffic is dominated by PCs with mobile broadband, usually conducting a wide variety of two-way applications like social networking.
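A rough illustration of the price sensitivity involved (the per-MB wholesale prices below are assumptions, though comment #20 below notes that a video on iTunes retails at £1.89 including data):

    # Rough sender-pays arithmetic: delivery cost the content provider absorbs.
    # Per-MB wholesale prices and content sizes are illustrative assumptions.
    items = {"3-min mobisode (15 MB)": 15,
             "music video (50 MB)": 50,
             "full movie (700 MB)": 700}

    for price_per_mb in (0.05, 0.01, 0.001):   # GBP per MB
        print(f"\nAt £{price_per_mb}/MB:")
        for name, mb in items.items():
            print(f"  {name}: delivery cost £{mb * price_per_mb:.2f}")

At the higher prices, long-form content is clearly uneconomic against a £1.89 retail price, which is exactly the point the qualified-support comments below make.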
Positive
· Sender pays is the only sane model. [#6]
· Do sender pays on both ‘sides’ consumer as well…gives ‘control’ and clarity to user. [#54]
· Sender Pays is one specific example of a much larger category of 3rd-party pays data, which also includes venue owners (e.g. hotels or restaurants), advertisers/sponsors (‘thanks for flying Virgin, we’re giving you 10MB free as a thank-you’), software developers, government (e.g. ‘benefit’ data for the unemployed etc) etc. The opportunity for Telcos may be much larger from upstream players outside the content industry [#73]
· We already do sender pays on our mobile portal – on behalf of all partner content providers including Napster mobile. [#77]
· Change the current peering model into an end to end sender pay model where all carriers in the chain receive the appropriate allocation of the sender pay revenue in order to guarantee the QoS for the end user. [#63]
· Focus on the money flows e.g. confirm the sender pays model. [#19]
Qualified Support/Implementation concerns
· Business models on sender pays, but including the fact, that roaming is needed, data costs will be quite different across mobile carriers and the aggregators costs and agreements are based on the current carriers. These things need to be solved first [#26]
· Sender pays is good but needs the option of ‘only deliver via WiFi or femtocell when the user gets home’ at 1/100th the cost of ‘deliver immediately via 3G macro network’. [#15]
· Who pays for AJAX browsers proactively downloading stuff in the background without explicit user request? [#64]
· Be realistic about sender pays data. It will not take off if it is not standard across the market, and the data prices currently break the content business model – you have to compare to the next alternative. A video on iTunes costs 1.89 GBP including data… Operators should either take a long term view or forget about it. [#20]
· Sender-pays data can be used to do anything the eco-system needs, including quality/HD. It doesn’t yet today only because the carriers don’t know how to provide those. [#44]
· Sender pays works for big monolithic chunks like songs or videos. But doesn’t work for mash up or communications content/data like Facebook (my Facebook page has 30 components from different providers – are you going to bill all of them separately?) [#53]
· mBlox: more or less like a free-call number. doesn’t guarantee quality/HD [#8]
Sceptical
· Stop sender pays because user is inundated with spam. [#23]
o Re 23: At least the sender is charged for the delivery. I do not want to pay for your SPAM! [#30]
Feedback: QoS
A fair amount of the discussion revolved around the thorny issues of capacity, congestion, prioritisation and QoS, although some participants felt this distracted a little from the “bigger picture” of integrated business models.
· Part of bandwidth is dedicated to high quality contents (paid for). Rest is shared/best effort. [#27]
· Start annotating the network, by installing the equivalent of gas meters at all points across the network, in order that they truly understand the nature of traffic passing over the network – to implement QoS. [#56]
o Re: 56 – that’s fine in the fixed world or mobile core, but it doesn’t work in the radio network. Managing QoS in mobile is difficult when you have annoying things like concrete walls and metallised reflective windows in the way [#75]
· [Stop] being telecom focused and move more towards solutions. It is more than bandwidth. [#25]
· Stop pretending that mobile QoS is important, as coverage is still the gating factor for user experience. There’s no point offering 99.9% reliability when you only have 70% coverage, especially indoors [#29]
· Start preparing for a world of fewer, but converged fixed-mobile networks that are shared between operators. In this world there will need to be dynamic model of allocating and charging for network capacity. [#67]
· We need applications that are more aware of network capacity, congestion, cost and quality – and which alter their behaviour to optimise for the conditions at any point in time e.g. with different codec’s or frame rate or image size. The intelligence to do this is in the device, not the network. [#68]
o Re: 68, is it really in the CPE? If the buffering of the content is close at the terminal, perhaps, otherwise there is no jitter guarantee. [#78]
§ Re 78 – depends on the situation, and download vs. streaming etc. Forget the word ‘terminal’, it’s 1980s speak, if you have a sufficiently smart endpoint you can manage this – hence PCs being fine for buffering YouTube or i-Player etc, and some of the video players auto-sensing network conditions [#81]
· QoE – for residential cannot fully support devices which are not managed for streamed content. [#71]
· Presumably CDNs and caching have a bit of a problem with customised content, e.g. with inserted/overlaid personalised adverts in a video stream? [#76]
Feedback: platforms, APIs, and infrastructure
However, the network and device architecture is only part of the issue. It is clear that video distribution fits centrally within the wider platform problems of APIs and OSS/BSS architecture, which span the overall Telco 2.0 reach of a given operator.
· Too much focus on investment in the network, where is the innovation in enterprise software innovation to support the network? [#70]
· For operator to open up access to the business assets in a consistent manner to innovative. Intermediaries who can harmonise APIs across a national or global marketplace. [#13]
· The BSS back office; billing, etc will not support robust interactive media for the most part. [#22]
· Let content providers come directly to Telcos to avoid a middle layer (aggregators) to take the profit. This requires collaboration and standardization among Telco’s for the technical interfaces and payment models. [#28]
· More analysis on length of time and cost of managing billing vendor for support of 2-sided business model. Prohibitively expensive in back office to take risks. Why? [#65]
· It doesn’t matter how strong the network is if you can’t monetize it on the back end OSS/BSS. [#40]
Feedback: Business models for video
Irrespective of the technical issues, or specific point commercial innovations like sender pays, there are also assorted problems in managing ecosystem dynamics, or more generalised business models for online video or IPTV. A significant part of the session’s feedback explored the concerns and possible solutions – with the “elephant in the room” of Net Neutrality lurking on the sidelines.
· Open up to lower cost lower risk trials to see what does and doesn’t work. [#35]
· Real multi quality services in order to monetize high quality services. [#36]
· Transform net neutrality issues into a fair policy approach… meaning that you cannot have equal treatment when some parties abuse the openness. [#39]
o Re 39: I want QoE for content I want to see. Part of this is from speed of access. Net Neutrality comes from the Best Effort and let is fight out in the scarce network. I.e. I do not get the QoE for all the other rubbish in the network. [#69]
· Why not bundling VAS with content transportation to ease migration from a free world to a pay for value world? [#43]
· Do more collaborative models which incorporate the entire value chain. [#55]
· Service providers start partnering to resell long tail content from platform providers with big catalogues. [#59]
· [Start to] combine down- and up-stream models in content. Especially starts get paid to deliver long tail content. [#60]
· Start thinking longer term instead of short term profit, to create a new ecosystem that is bigger and healthier. [#62]
· Exploit better the business models between content providers and carriers. [#16]
· Adapt price to quality of service. [#21]
· Put more attention on quality of end user experience. [#24]
· I am prepared to pay a higher retail DSL subscription if I get a higher quality of experience. – not just monthly download limits. [#38]
· maximize revenues based on typical Telco capabilities (billing, delivery, assurance on million of customers) [#50]
· Need a deeper understanding of consumer demand which can then be aggregated by the operator (not content aggregators), providing feedback to content producers/owners and then syndicated as premium content to end-users. It comes down to operators understanding that the real value lays in their user data not their pipes! [#52]
· On our fixed network, DSL resellers pay for the access and for the bandwidth used – this corresponds to the sender pays model; due to rising bandwidth demand the charge for the resellers continuously increases. so we have to adapt bandwidth tariffs every year in order not to suffocate our DSL resellers. Among them are also companies offering TV streaming. [#82]
· More settlement free peering with content/app suppliers – make the origination point blazingly fast and close to zero cost. rather focus on charging for content distribution towards the edge of the access network (smart caching, torrent seeds, multicast nodes etc) [#74]
Feedback: Others
In addition to these central themes, the session’s participants also offered a variety of other comments concerning regulatory issues, industry collaboration, consumer issues and other non-video services like SMS.
· Start addressing customer data privacy issues now, before it’s too late and there is a backlash from subscribers and the media. [#42]
· Consolidating forums and industry bodies so we end up with one practical solution. [#45]
· Identifying what an operator has potential to be of use for to content SP other than a pipe. [#49]
· Getting regulators to stimulate competition by enforcing structural separation – unbundle at layer 1, bring in agile players with low operating cost. Let customers vote with their money – focus on deliverable the fastest basic IP pipe at a reasonable price. If the basic price point is reasonable customers will be glad to pay for extra services – either sender or receiver based. [#72]
· IPTV <> Internet TV. In IPTV the Telco chooses my content, Internet TV I choose. [#79]
· Put attention on creating industry collaboration models. [#47]
· Stop milking the SMS cash cow and stop worrying about cannibalising it, otherwise today’s rip-off mobile data services will never take off. [#33]
· SMS combined with the web is going to play a big role in the future, maybe bigger that the role it played in the past. Twitter is just the first of a wave of SMS based social media and comms applications for people. [#51]
Participants ‘Next Steps’ Vote
Participants were then asked: Which of the following do we need to understand better in the next 6 months?
Is there really a capacity problem, and what is the nature of it?
How to tackle the net neutrality debate and develop an acceptable QOS solution for video?
Is there a long term future for IPTV?
How to take on the iPhone regarding mobile video?
More aggressive piloting / roll-out of sender party pays data?
Lessons learnt & next steps
The vote itself reflects the nature of the discussions and debates at the event: there are lots of issues and things that the industry is not yet clear on that need to be ironed out. The world is changing fast and how we overcome issues and exploit opportunities is still hazy. And all the time, there is a concern that the speed of change could overtake existing players (including Telcos and ISPs)!
However, there does now seem to be greater clarity on several issues with participants becoming increasingly keen to see the industry tackle the business model issue of flat-rate pricing to consumers and little revenue being attached to the distribution of content (particularly bandwidth hungry video). Overall, most seem to agree that:
1. End users like simple pricing models (hence success of flat rate) but that some ‘heavy users’ will require a variable rate pricing scheme to cover the demands they make;
2. Bandwidth is not free and costs to Telcos and ISPs will continue to rise as video traffic grows;
3. Asking those sending digital goods to pay for the distribution cost is sensible…;
4. …but plenty of work needs to be done on the practicalities of the sender-pays model before it can be widely adopted across fixed and mobile;
5. Operators need to develop a suite of value-added products and services for those sending digital goods over their networks so they can charge incremental revenues that will enable continued network investment;
6. Those pushing the ‘network neutrality’ issue are (deliberately or otherwise) causing confusion over such differential pricing which creates PR and regulatory risks for operators that need to be addressed.
There are clearly details to be ironed out – and probably experiments in pricing and charging to be done. Andrew Bud’s sending-party-pays model (and many others, it must be added, have suggested something similar) may work, or it may not – but this is an area where experiments need to be tried. The idea of “educating” upstream users is euphemistic – they are well aware of the benefits they are currently accruing, which is why the Net Neutrality debate is being deliberately muddied. Distributors need to be working on disentangling the bits that can be free from those that must pay to ride, not letting anyone get a free ride.
As can be seen in the responses, there is also a growing realisation that the Telco has to understand and deal with the issues of the overall value chain, end-to-end, not just the section under its direct control, if it wishes to add value over and above being a bit pipe. This is essentially moving towards a solution of the “Quality of Service” issue – they need to decide how much of the solution is capacity increase, how much is traffic management, and how much is customer expectation management.
Alan Patrick, Telco 2.0: “98.7% of users don’t have an iPhone, but 98% of mobile developers code for it because it has an integrated end-to-end experience, rather than a content model based on starving in a garage.”
The “Tempus Fugit” point is well made too – the Telco 2.0 participants are moving towards an answer, but it is not clear that the same urgency is being seen among wider Telco management.
Two areas were skimmed through a little too quickly in the feedback:
Managing a way through the ‘Pirate World’ environment
The economic crisis has helped in that it has reduced the amount of venture capital and other risk equity going into funding plays that need not make revenue, never mind profit. In our view this means that the game will resolve into a battle of deep pockets to fund the early businesses. Incumbents typically suffer from higher cost bases and higher hurdle rates for new ventures. New players typically have less revenue, but lower cost structures. For existing Telcos this means using existing assets as effectively as possible and we suggest a more consolidated approach from operators and associated forums and industry bodies so the industry ends up with one practical solution. This is particularly important when initially tackling the ‘Network Neutrality’ issue and securing customer and regulatory support for differential pricing policies.
Adopting a policing role, particularly in the short-term during Pirate World, may be valuable for operators. Telco 2.0 believes the real value is in managing the supply of content from companies (rather than end users) and ensuring that content is legal (paid for!).
What sort of video solution should Telcos develop?
The temptation for operators to push IPTV is huge – it offers, in theory, steady revenues and control of the set-top box. Unfortunately, all the projected growth is expected to be in Web TV, delivered to PCs or TVs (or both). Providing a suite of value-added distribution services is perhaps a more lucrative strategy for operators:
Operators must better understand the needs of upstream segments and individual customers (media owners, aggregators, broadcasters, retailers, games providers, social networks, etc.) and develop propositions for value-added services in response to these. Managing end user data is likely to be important here. As one participant put it:
“We need a deeper understanding of consumer demand which can then be aggregated by the operator (not content aggregators), providing feedback to content producers/owners and then syndicated as premium content to end-users. It comes down to operators understanding that the real value lies in their user data, not their pipes!” [#52]
Customer privacy will clearly be an issue if operators develop solutions for upstream customers that involve managing data flows between the two sides of the platform. End users want to know what upstream customers are providing, how they can pay and whether the provider is trusted; the provider needs to be able to identify and authenticate the customer, as well as understand what content they want and how they want to pay for it. Opt-in is one solution, but building scale through opt-in is complex and time-consuming, so operators need to explore ways of protecting data while using it to add value to transactions over the network.
The UK’s largest broadcaster, the BBC, finally launched its online video streaming and download service on Christmas Day. Plusnet, a small ISP owned by BT, has provided a preliminary analysis of the resulting traffic, and the results should send shivers down the spine of any ISP currently offering an unlimited “all-you-can-eat” service.
The iPlayer service is basically a 7-day catch-up service which enables people who missed and didn’t record a broadcast to watch the programme at their leisure on a PC connected to the internet. The iPlayer differs from other internet-based video services in several key respects:
It is funded by the £135.50 annual licence fee which pays for the majority of BBC activities.
The BBC collected 25.1m licence fees in 2006/7. No advertising is required for the iPlayer business model to work.
It is heavily promoted on the BBC broadcast TV channels. The BBC had a 42.6% share of overall UK viewing in 2006/7, so a lot of people already knew about the iPlayer within a month of launch.
It is a high-quality service, designed for watching whole programmes rather than for consuming small vignettes.
This is in sharp contrast to the current #1 streaming site, YouTube.
A massive rise in costs
The key outputs from the Plusnet data are that in January:
more customers are streaming;
streamers are streaming more; and, most importantly,
peak usage is being pushed up.
For Plusnet, this equates to total streaming costs rising to £51.7k/month from £17.2k/month, or to 18.3p/user from 6.1p/user – a 200% cost increase in just the first MONTH of the service. If we assume that the Plusnet base of 282k customers is a representative sample of the whole UK internet universe, then we can draw some interesting conclusions about the overall impact of the iPlayer. Across the whole UK IPstream base of 8.5m, the introduction of the iPlayer would equate to an increase in costs to roughly £1.5m/month in January, from £0.5m/month.
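For readers who want to check the arithmetic, here is a minimal sketch of the extrapolation. The only inputs are the Plusnet figures quoted above; treating the Plusnet base as representative of all 8.5m IPstream users is the modelling assumption, not a measured fact.

    # Sketch of the Plusnet-to-UK-IPstream extrapolation (figures as quoted above).
    PLUSNET_USERS = 282_000        # Plusnet broadband base
    COST_BEFORE = 17_200.0         # streaming cost before iPlayer launch (GBP/month)
    COST_JAN = 51_700.0            # streaming cost, January (GBP/month)
    IPSTREAM_USERS = 8_500_000     # whole UK IPstream base

    per_user_before = COST_BEFORE / PLUSNET_USERS   # ~6.1p/user
    per_user_jan = COST_JAN / PLUSNET_USERS         # ~18.3p/user

    # Assume the Plusnet base is representative of the whole IPstream base.
    uk_before = per_user_before * IPSTREAM_USERS    # ~GBP 0.5m/month
    uk_jan = per_user_jan * IPSTREAM_USERS          # ~GBP 1.5m/month

    print(f"per user: {per_user_before * 100:.1f}p -> {per_user_jan * 100:.1f}p")
    print(f"UK IPstream: GBP {uk_before / 1e6:.1f}m -> GBP {uk_jan / 1e6:.1f}m per month")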
Despite access unbundling, ‘middle mile’ costs remain a key bottleneck
IPstream is a wholesale product from BT, with BT responsible for transit of the data from the customer’s home to an interconnect point of the ISP’s choice. The ISP pays for bandwidth capacity at the point of interconnect, and BT Retail acts like an external ISP in this functionally separated model. The overall effect of the iPlayer on BT’s IPstream-based customers is roughly neutral, with the increase in revenues at wholesale (external base of 4.2m customers) being offset by the increase in costs at BT Retail (total base of 4.2m customers). Of course, this assumes no bandwidth overages at BT Retail, which is probably not the case, as both BT and Plusnet have bandwidth caps. In effect, incremental cost for ISPs using the IPstream product is incurred by ordering extra BT IPstream pipes, which come in 155Mbit/s chunks: the ISP’s option is either to allow a degradation in performance or to order more capacity.
Time to buy more pipes
We tested the bandwidth profile using Wireshark while watching a 59-minute documentary celebrating the 50-year anniversary of Sputnik, via both streaming and P2P. The streaming traffic is easy to analyse as it comes through on port 1935, the port used by Flash for streaming. A jitter-free screening ran on average at around 0.5Mbit/s. Given the 155Mbit/s ordering slice, this means only around 300 people need to be watching the iPlayer at the same time (peak = 8pm-10pm) to fill a pipe. Seeing that IPstream customers are aggregated across the UK to a single point, a lot of ISPs will be thinking about ordering extra capacity. The BBC also offers a P2P download of higher quality than the stream: we managed to download the 500MB file in just over 20 minutes at an average speed of 3.5Mbit/s. The total traffic (including overhead) was 231MB for the streaming and 544MB for the P2P delivery.
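The rates quoted above can be sanity-checked from the measured transfer totals. A minimal sketch (the 21-minute P2P duration is our reading of “just over 20 minutes”):

    # Back-of-envelope check on the Wireshark measurements quoted above.
    def mbit_per_s(total_mbytes: float, minutes: float) -> float:
        """Average throughput in Mbit/s for a transfer of total_mbytes over minutes."""
        return total_mbytes * 8 / (minutes * 60)

    stream_rate = mbit_per_s(231, 59)   # ~0.52 Mbit/s, matching the ~0.5 observed
    p2p_rate = mbit_per_s(544, 21)      # ~3.5 Mbit/s for the P2P download

    # Concurrent jitter-free streams that fit in one 155Mbit/s IPstream pipe:
    viewers_per_pipe = 155 / stream_rate    # ~300
    print(f"stream: {stream_rate:.2f} Mbit/s, P2P: {p2p_rate:.2f} Mbit/s")
    print(f"~{viewers_per_pipe:.0f} simultaneous viewers fill a 155Mbit/s pipe")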
Full unbundling still leaves ISPs at the mercy of backhaul costs
The story for facilities-based LLU (Local Loop Unbundling) players, which account for another 3.7m UK broadband customers, is slightly different, as it depends completely on network design and the distribution of the base across the exchanges. Telco 2.0 market intelligence says that some unbundlers have ordered 1Gbit/s links for their backhaul and should be unaffected, at least in the short term. However, some unbundlers have only ordered 100Mbit/s links and could be in deep trouble, with people really noticing the difference in experience at peak hours: the average speed for someone just browsing and doing email is quite low compared to someone sat back watching videos stream. The only real option for these unbundlers is to order extra capacity on their backhaul links, which could be extremely expensive.
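Expressed as concurrent streams, the gap between the two backhaul choices is stark. A quick sketch using the roughly 0.5Mbit/s per-stream figure measured above (and ignoring all other traffic on the link, which flatters it):

    # Concurrent ~0.5Mbit/s iPlayer streams each backhaul size can carry.
    STREAM_RATE = 0.5    # Mbit/s per jitter-free stream

    for link_mbit in (100, 1_000):
        concurrent = link_mbit / STREAM_RATE
        print(f"{link_mbit} Mbit/s backhaul saturates at ~{concurrent:.0f} streams")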
Cable companies understand sending telly over wires
The story for Virgin Media, the main UK cable operator with 3.3m broadband subscribers, again depends on network design – this time on the load on the UBR (Universal Broadband Router) within each network segment. Virgin Media has a special angle here: the iPlayer will be coming to its video-on-demand service in the spring, which we assume will take a lot of load off its IP network. The Virgin VoD service runs on dedicated bandwidth within the network and allows the content to be watched on a TV rather than a PC – a big bonus for Virgin Media subscribers.
Modelling the cost impact
For both cable and LLU players the cost profile is radically different from that of IPstream players, and it is not a trivial task to calculate the impact. However, we can extrapolate the Plusnet traffic figures to gauge the effect in volumes of data. We have modelled four scenarios: usage staying the same as in January 2008 (an average of 19 min/month/user), then rising to 1 hour/month, 1 hour/week and 1 hour/day. For the IPstream industry, based solely on Plusnet cost assumptions, these would give cost increases of £1,035k/month, £3,243k/month, £14,053k/month and £98,638k/month respectively – assuming, of course, that the IPstream base stays the same (and that ISPs don’t all just go bust straight away!). Across the whole of the UK ISP industry, the corresponding increases in traffic are 1,166, 3,655, 15,837 and 111,161 TB/month. That’s a lot of data. The obvious conclusion is that ISP pricing will need to rise and extra capacity will need to be added. The data reinforces the belief expressed in our recent Broadband Report that “Video will kill the ISP star”. The problem with the current ISP model is that it is like an all-you-can-eat buffet, where one in ten customers eats all the food, one in a hundred takes his chair home too, and one in a thousand unscrews all the fixtures and fittings and loads them into a van as well.
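The scenario figures follow from scaling the January baseline linearly with viewing minutes. A minimal sketch of that scaling is below; it reproduces the quoted numbers to within about 1%, the small residual presumably being rounding inside the source model.

    # Linear scaling of the January 2008 baseline with viewing minutes.
    BASE_MINUTES = 19          # average viewing, Jan 2008 (min/user/month)
    BASE_COST_K = 1_035        # IPstream industry cost increase (GBP k/month)
    BASE_TRAFFIC_TB = 1_166    # whole-industry traffic increase (TB/month)

    scenarios = {
        "Jan 2008 (19 min/month)": 19,
        "1 hour/month": 60,
        "1 hour/week": 60 * 52 / 12,    # ~260 min/month
        "1 hour/day": 60 * 365 / 12,    # ~1,825 min/month
    }

    for name, minutes in scenarios.items():
        scale = minutes / BASE_MINUTES
        print(f"{name}: GBP {BASE_COST_K * scale:,.0f}k/month, "
              f"{BASE_TRAFFIC_TB * scale:,.0f} TB/month")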
A trigger for industry structural change?
An interesting corollary to the increase in costs for ISPs is that we believe the iPlayer will actually speed up consolidation across the industry and make life for smaller ISPs even more difficult than it is today. Additionally, because of the high bandwidth needs of the iPlayer, the long copper lengths in rural England and the lack of cable or LLU competition to the IPstream product, we believe the iPlayer will widen the digital divide between rural and suburban UK. The iPlayer also poses an interesting question for the legion of UK small businesses who rely on broadband and yet don’t have a full set of telecommunications skills: what do they do about the employee who wants to eat lunch at their desk whilst watching last night’s episode of top soap EastEnders?
Time to stop the game of ‘pass the distribution cost parcel’
The BBC is actually in quite a difficult situation, especially as publicity starts to mount over the coming months, with users breaking their bandwidth limits and more and more starting to get charged for overages. UK licence payers expect that they have paid for both content and distribution when they hand over their £135.50. In 2006/7, the BBC paid £99.7m for distributing its broadcast TV signal, £42.6m for its radio signal and only £8.8m for its online content – out of total licence fee income of £3.2bn. For context, the 1 hour/day scenario modelled above implies industry-wide incremental costs of roughly £1.2bn a year, dwarfing that £8.8m online distribution budget. I would suggest that the easiest way for the BBC to escape the iPlayer conundrum is to pay an equitable fee to the ISPs for distributing its content, with the ISP plan then coming with unlimited BBC content, possibly at a small retail mark-up. The alternative of traffic-shaping your users to death doesn’t seem like a great way of creating high customer satisfaction. The old media saying sums up the situation quite nicely:
“If content is King, then distribution is King Kong”
[Ed – to participate in the debate on sustainable business models in the telecoms-media-tech space, do come to the Telco 2.0 ‘Executive Brainstorm’ on 16-17 April in London.]