Network convergence: How to deliver a seamless experience

Operators need to adapt to the changing connectivity demands post-COVID-19

The global dependency on consistent high-performance connectivity has recently come to the fore as the COVID-19 outbreak has transformed many of the remaining non-digital tasks into online activities.

The typical patterns of networking have broken down and a ‘new normal’, albeit possibly a somewhat transitory one, is emerging. The recovery of the global economy will depend on governments, healthcare providers, businesses and their employees robustly communicating and gaining uninhibited access to content and cloud through their service providers – at any time of day, from any location and on any device.

Reliable connectivity is a critical commodity. Network usage patterns have shifted towards the home and remote working. Locations that previously saw light usage now face high demand. Conversely, many business locations no longer need such high capacity. Utilisation is not expected to return to pre-COVID-19 patterns either, as people and businesses adapt to new daily routines – at least for some time.

The strategies with which telcos started the year have of course been disrupted, with resources diverted away from strategic objectives to deal with a new mandate – keep the country connected. In the short term, the focus has shifted to one which is more tactical – ensuring customer satisfaction through a reliable and adaptable service with rapid response to issues. In the long term, however, the objectives for capacity and coverage remain. Telcos are still required to reach national targets for a minimum connection quality in rural areas, whilst delivering high bandwidth service demands in hotspot locations (although these hotspot locations might now change).

Of course, modern networks are designed with scalability and adaptability in mind – some recent deployments from new disruptors (such as Rakuten) demonstrate the power of virtualisation and automation in that process, particularly when it comes to the radio access network (RAN). In many legacy networks, however, one area which is not able to adapt fast enough is the physical access. Limits on spectrum, coverage (indoors and outdoors) and the speed at which physical infrastructure can be installed or updated become a bottleneck in the adaptation process. New initiatives to meet home working demand through an accelerated fibre rollout are happening, but they tend to come at great cost.

Network convergence is a concept which can provide a quick and convenient way to address this need for improved coverage, speed and reliability in the access network, without the need to install or upgrade last mile infrastructure. By definition, it is the coming-together of multiple network assets, as part of a transformation to one intelligent network which can efficiently provide customers with a single, unified, high-quality experience at any time, in any place.

It has already attracted interest and is finding an initial following. A few telcos have used it to provide better home broadband. Internet content and cloud service providers are interested, as it adds resilience to the mobile user experience, and enterprises are interested in utilising multiple lower cost commodity backhauls – the combination of which benefits from inherent protection against costly network outages.

Enter your details below to request an extract of the report

Network convergence helps create an adaptable and resilient last mile

Most telcos already have the facility to connect with their customers via multiple means, providing mobile, fixed line and public Wi-Fi connectivity to those in their coverage footprint. The strategy has been to convert individual ‘pure’ mobile or fixed customers into households. The expectation is that this increases revenue through bundling and loyalty, whilst adding some friction to churning away – a concept which has been termed ‘convergence’. Although the customer may see one converged telco through brand, billing and customer support, the delivery of a consistent user experience across all modes of network access has been lacking and awkward. In the end, it is customer dissatisfaction which drives churn, so delivering a consistent user experience is important.

Convergence is a term used to mean many different things, from a single bill for all household connectivity, to modernising multiple core networks into a single efficient core. While most telcos have so far been concentrating on increasing operational efficiency, increasing customer loyalty/NPS and decreasing churn through some initial aspects of convergence, some are now looking into network convergence – where multiple access technologies (4G, 5G, Wi-Fi, fixed line) can be used together to deliver a resilient, optimised and consistent network quality and coverage.

Overview of convergence

Source: STL Partners

As an overarching concept, network convergence introduces more flexibility into the access layer. It allows a single converged core network to utilise and aggregate whichever last mile connectivity options are most suited to the environment. Some examples are:

  • Hybrid Access: DSL and 4G macro network used together to provide extra speed and fallback reliability in hybrid fixed/mobile home gateways (sketched in the example after this list).
  • Cell Densification: 5G and Wi-Fi small cells jointly providing short range capacity to augment the macro network in dense urban areas.
  • Fixed Wireless Access: using cellular as a fibre alternative in challenging areas.
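
As a concrete illustration of the hybrid access case, here is a minimal Python sketch – invented logic, not a real gateway implementation; standardised approaches include Multipath TCP and the Broadband Forum’s TR-348 hybrid access architecture – of a gateway splitting traffic across a DSL and a 4G link in proportion to their capacity, with automatic failover:

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    capacity_mbps: float
    up: bool = True

def pick_link(links, counters):
    """Pick the live link furthest behind its capacity-weighted fair share
    of packets sent so far; if one link fails, the other takes everything."""
    live = [l for l in links if l.up]
    if not live:
        raise RuntimeError("no access link available")
    total = sum(l.capacity_mbps for l in live)
    return min(live, key=lambda l: counters[l.name] * total / l.capacity_mbps)

links = [Link("dsl", 20.0), Link("lte", 30.0)]
counters = {"dsl": 0, "lte": 0}
for _ in range(100):
    counters[pick_link(links, counters).name] += 1
print(counters)            # ~40/60 split, matching the 20:30 capacity ratio

links[0].up = False        # the DSL line drops...
assert pick_link(links, counters).name == "lte"  # ...all traffic fails over to 4G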

The ability to combine various network accesses is attractive as an option for improving adaptability, resilience and speed. Strategically, putting such flexibility in place can support future growth and customer retention with the added advantage of improving operational efficiency. Tactically, it enables an ability to quickly adapt resources to short-term changes in demand. COVID-19 has been a clear example of this need.

Table of Contents

  • Executive Summary
    • Convergence and network convergence
    • Near-term benefits of network convergence
    • Strategic benefits of network convergence
    • Balancing the benefits of convergence and divergence
    • A three-step plan
  • Introduction
    • The changing environment
    • Network convergence: The adaptable and resilient last mile
    • Anticipated benefits to telcos
    • Challenges and opposing forces
  • The evolution to network convergence
    • Everyone is combining networks
    • Converging telco networks
    • Telco adoption so far
  • Strategy, tactics and hurdles
    • The time is right for adaptability
    • Tactical motivators
    • Increasing the relationship with the customer
    • Modernisation and efficiency – remaining competitive
    • Hurdles from within the telco ecosystem
    • Risk or opportunity? Innovation above-the-core
  • Conclusion
    • A three-step plan
  • Index


The changing consumer landscape: Telco strategies for success

Winning in the evolving “in home” consumer market

COVID-19 is accelerating significant and lasting changes in consumer behaviours as the majority of the population is implored to stay at home. As a result, most people now work remotely and stay connected with colleagues, friends, and family via video conferencing. Consumer broadband and telco core services are therefore in extremely high demand; despite the higher burden on the network, consumers have high expectations of, and dependencies on, quality connectivity.

Furthermore, we found that people of all ages (including non-digital natives) are becoming more technically aware. This means they may be willing to purchase more services beyond core connectivity from their broadband provider. At the same time, their expectations on performance are rising. Consumers have a better understanding of the products on offer and, for example, expect Wi-Fi to deliver quoted broadband speeds throughout the house and not just in proximity to the router.

As a result of this changing landscape, there are opportunities, but also challenges that operators must overcome to better address consumers, stay relevant in the market, and win “in the home”.

This report looks at the different strategies telcos can pursue to win “in the home” and address the changing demands of consumers. It draws on an interview programme with eight operators, as well as a survey of more than 1,100 consumers globally. As well as canvassing consumers’ high-level views of telcos and their services, the survey explores consumer willingness to buy cybersecurity services from telcos in some depth.

Enter your details below to download an extract of the report

With increasing technical maturity comes an increasingly demanding market

Consumers are increasing in technical maturity

The consumer market as a whole is becoming much more digital. Over the past decade there has been a big shift towards online and self-service models for B2C services (e.g. ecommerce, online banking, automated chatbots, video streaming). This reflects the advent of the Coordination Age – connecting people to machines, information, and things – and the growing technical maturity of the consumer market.

COVID-19 has been a recent, but significant, driver in pushing consumers towards a more digital age, forcing the use of video conferencing and contactless interactions. Even people who are not considered digitally native are becoming increasingly tech savvy and tech capable customers.

Cisco forecasts that, between 2018 and 2023, the share of the global population using the Internet will increase from 51% to 66%. It has also forecast an increase in data volumes per capita per month from 1.5GB in 2017 to 9.7GB in 2022. Depending on the roll-out of 5G in different markets, this number may increase significantly as demand for mobile data rises to meet the potential increases in supply.
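
As a rough check of what that implies (our arithmetic, not Cisco’s), growing from 1.5GB to 9.7GB per month over the five years from 2017 to 2022 requires a compound annual growth rate of roughly 45%:

\mathrm{CAGR} = \left(\frac{9.7}{1.5}\right)^{1/5} - 1 \approx 0.45 = 45\%\ \text{per year}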

Furthermore, in our survey of 1,100+ consumers globally, 33% of respondents considered themselves avid users and 51% considered themselves moderate users of technology. Only 16% of the population felt they were light users, using technology only when essential for a limited number of use cases and needing significant support when purchasing and implementing new technology-based solutions.

Though this did not vary significantly by region or existing spend, it did vary (as would be expected) by age – 51% of respondents aged between 25 and 30 considered themselves avid users of technology, while only 18% of respondents over 50 said the same. Nevertheless, even within the 50+ segment, 55% considered themselves moderate users of technology.

Self-proclaimed technical maturity varies significantly by age

Source: STL Partners consumer survey analysis (n=1,131)

The growing technical maturity of consumers suggests a larger slice of the market will be ready and willing to adopt digital solutions from a telco, providing an opportunity for potential growth in the consumer market.

Consumers have higher expectations on telco services

Coupled with the increasing technical maturity comes an increase in consumer expectations. This makes the increasing technical maturity a double-edged sword – more consumers will be ready to adopt more digital solutions but, with a better understanding of what’s on offer, they can also be more picky about what they receive and more demanding about the performance levels that can be achieved.

An example of this is home broadband. It is no longer sufficient to deliver quoted throughput speeds only in proximity to the router. A good Wi-Fi connection must now permeate the whole house, so that high-quality video content and video calls can be streamed from any room without any drop in quality or connection. It must also be able to handle an increasing number of connected devices – Cisco forecasts an increase from a global average of 1.2 to 1.6 connections per person between 2018 and 2023.

Consumers are also becoming increasingly impatient. In all walks of life, whether it be dating, technology or experiences, consumers want instant gratification. Additionally, with the faster network speeds of 4G+, fibre, and eventually 5G, consumers want (and are used to) continuous video feeds, seamless streaming, and near instant downloads – buffering should be a thing of the past.

One of our interviewees, a Northern European operator, commented: “Consumers are not willing to wait, they want everything here, now, immediately. Whether it is web browsing or video conferencing or video streaming, consumers are increasingly impatient”.

However, these demands extend beyond telco core services and connectivity. In the context of digital maturity, a Mediterranean operator noted “There is increasing demand for more specialized services…there is more of a demand on value-added, rather than core, services”.

This presents new challenges and opportunities for operators seeking growth “in the home”. Telcos need to find a way to address these changing demands to stay relevant and be successful in the consumer market.

Table of Contents

  • Executive summary
  • Introduction
  • Growing demand for core broadband and value-added services
    • COVID-19 is driving significant, and likely lasting, change
    • With increasing technical maturity comes an increasingly demanding market
  • Telcos need new ways to stay relevant in B2C
    • The consumer market is both diverse and difficult to segment
    • Should telcos be looking beyond the triple play?
  • How can telcos differentiate in the consumer market?
    • Differentiate through price
    • Differentiate through new products beyond connectivity
    • Differentiate through reliability of service
  • Conclusions and key recommendations
  • Appendices
    • Appendix 1: Consumer segments used in the survey
    • Appendix 2: Cybersecurity product bundles used in the conjoint analysis

Request STL research insights overview pack

Consumer Wi-Fi: Faster, smarter and near-impossible to replace

Introduction

This briefing, part of the Network Futures and (Re-)Connecting with Consumers research streams, examines the connectivity and network options for the home – especially looking at the role of Wi-Fi (and its newest evolution, Wi-Fi 6) within the home and other consumer spaces, as a platform for connecting smartphones, PCs, IoT devices, and entertainment/media systems.

It builds on our January 2019 report exploring how telcos could play a coordination role in the smart home market (Can telcos create a compelling smart home?), which focused on security and remote management of assets in the home.

This report focuses primarily on developed markets (and China) in which most homes have a fixed-line connection. In developing countries where fixed-lines are scarce, Wi-Fi also plays an important background role, albeit within the constraints imposed by the more limited bandwidth available via cellular or fixed wireless connections to the Internet.

In developed markets, homes now commonly have between five and 20 Wi-Fi enabled endpoints. 

Enter your details below to request an extract of the report


Wi-Fi is a core consumer service

As discussed in this report, STL does not believe that 5G poses any general threat to the dominant use of Wi-Fi in homes. This document does not look in depth at trends in either enterprise Wi-Fi, or public hotspots – although in the latter case, cellular substitution is more of a genuine issue.

For the residential consumer market, readers should first be aware that Wi-Fi remains incredibly important even for “non-smart” homes. It is important to look at this space through the lens of normal broadband and ISP service delivery, even without connecting new consumer products and services. A sizeable part of both broadband customer satisfaction and complaints/support issues stems from the quality and manageability of residential Wi-Fi.

This year is the 20th anniversary of consumer Wi-Fi, kickstarted by Apple’s introduction of the AirPort access-point (AP) in 1999. Since then, Wi-Fi has grown to encompass over 30 billion cumulative shipped devices, notably including virtually every PC and phone in use today. Over four billion Wi-Fi products are shipped annually, with over 13 billion in regular use.[1] It has evolved in speed, features and maturity – and is often seen by consumers as being synonymous with Internet connectivity itself.

It’s also about to evolve again, packaging a set of changes into a new specification named ‘Wi-Fi 6’.

While a large part of Wi-Fi’s early success can be attributed to its use in enterprises, or through “hotspots” in public spaces like cafes and hotels, the real core of its adoption has been for residential use. The bulk of Internet access delivered in-home travels its last few metres over Wi-Fi – even for products like televisions. Many notebook PCs no longer have an Ethernet port for a wired connection.

Wi-Fi has a huge economic impact for users, SPs and industry

The global value of Wi-Fi at the advent of Wi-Fi 6

Source: Wi-Fi Alliance, ValueOfWiFi.com

Telcos and Wi-Fi

While telcos have always been wary of Wi-Fi’s substitutional role vs. cellular in public spaces, within the home the majority of operators view it as a huge positive – and even a source of new revenue and differentiation.

All fixed/cable operators are advocates of home Wi-Fi, as it allows more data usage, from more devices, increasing the value of both Internet connectivity and “on-network” services such as IPTV and IP-based PSTN telephony. As this report discusses, Wi-Fi (sometimes combined with Bluetooth or other short-range wireless technologies) can help telcos connect new IoT systems and participate in their ecosystems, such as eHealth, smart metering, security and more. Some operators are directly monetising “premium Wi-Fi” products or using them to encourage customers to upgrade to higher-ARPU bundles.

While mobile operators sometimes dislike third-party Wi-Fi for its ability to “break out” data locally, rather than routing traffic through their cores (and billing engines), they nevertheless appreciate its ability to support Wi-Fi calling to extend voice telephony to rooms lacking good coverage. They also usually like the (network-driven or user-initiated) means to offload wireless data that could be expensive to serve to users through walls from outdoor macro cell-sites. With 5G, this comes even further to the fore, as most of the early spectrum bands, such as 3.5GHz or 24-28GHz, will struggle with in-building penetration. We can also expect the majority of fixed-wireless access 5G to marry an external- (or window-) mounted antenna to an indoor Wi-Fi AP for final connection to most devices.

About half of all IP traffic across all devices is delivered via Wi-Fi

Proportion of telecoms traffic delivered by Wi-Fi, forecast 2019 to 2022

*Wireless traffic includes Wi-Fi and mobile. Source: Cisco VNI Global IP Traffic Forecast, 2017-2022

In the rest of this report we discuss telcos’ love/hate relationship with Wi-Fi, including why the newest generation is a game changer for smart homes and the technology’s relationship with 4G/5G and IoT.

Contents:

  • Executive Summary
  • Introduction
  • Part of the broader battle for home/consumer services
  • Unlicensed spectrum – why it matters
  • What’s in a name? Why Wi-Fi 6 is important
  • Wi-Fi and telcos: A complex relationship
  • Telco residential Wi-Fi evangelists
  • Wi-Fi technology evolution
  • Whole-home Wi-Fi: A game-changer
  • New revenue for telcos?
  • Is Wi-Fi threatened by 4G/5G?
  • Wi-Fi and IoT
  • Competition vs. Bluetooth, Zigbee & Z-Wave
  • Competition vs. cellular and LPWA?
  • The vendor / internet space
  • Arrival of the major technology firms
  • Beyond connectivity: New use-cases for Wi-Fi
  • Conclusions and recommendations
  • Recommendations for fixed and cable operators / ISPs
  • Recommendations for mobile operators
  • Recommendations for regulators and policymakers

Figures:

  1. Consumer Wi-Fi is a new control-point for smart home connections
  2. Wi-Fi has a huge economic impact for users, SPs and industry
  3. About half of all IP traffic, across all devices, is delivered via Wi-Fi
  4. Simpler, more consumer-friendly branding for Wi-Fi
  5. What’s new with Wi-Fi 6 / 802.11ax?
  6. Wi-Fi is a double-edged sword for telcos; better for fixed ISPs than MNOs
  7. There are multiple determinants of good home broadband experience
  8. Some broadband operators market their service based on Wi-Fi performance
  9. MU-MIMO enables gigabit speeds for Wi-Fi
  10. Wi-Fi companion apps are becoming commonplace
  11. Mesh networks can provide a connectivity backbone for smart homes
  12. In-home Wi-Fi boosters or mesh improve satisfaction significantly
  13. KPN’s Wi-Fi tuner app enables optimal coverage & performance
  14. Some telcos & ISPs are using mesh Wi-Fi to offer QoS/coverage guarantees
  15. Whole-home Wi-Fi offers better indoor awareness than cellular
  16. Huawei’s 5G home FWA blends an outdoor mmWave unit with indoor Wi-Fi
  17. Consumer Wi-Fi is a new control-point for smart home connections
  18. Wi-Fi silicon specialists sometimes work directly with telcos
  19. Software, cloud and security capabilities are likely to be exploited by CSP Wi-Fi in future
  20. Motion-detection is one of the most intriguing future Wi-Fi capabilities
  21. Wi-Fi plus voice integration will accelerate with the Amazon/eero acquisition

[1] Source: Wi-Fi Alliance

Keywords, companies and technologies referenced: Wi-Fi 6, 5G, cellular, fixed wireless access (FWA), KPN, BT, Bluetooth, Zigbee, LPWA, IoT, smart home, Amazon, Cisco, Apple.



Why fibre is on fire again

Introduction

Fibre to the home is growing at a near-explosive rate

Every company faces the problems of mature markets, disappointing revenues and tough decisions on investment. Everyone agrees that fibre delivers the best network experience, but until recently most companies rejected fibre as too costly.

Now, 15 of the world’s largest phone companies have decided fibre to the home is a solution. Why are so many now investing so heavily?

Here are some highlight statistics:

  • On 26th July 2018, AT&T announced it will pass 5 million locations with fibre to the home in the next 12 months, after reaching 3 million new locations in the last year.[1] Fibre is now a proven money-maker for the US giant, bringing new customers every quarter.
  • Telefónica Spain has passed 20 million premises – over 70% of the addressable population – and continues at 2 million a year.
  • Telefónica Brazil is going from 7 million in 2018 to 10 million in 2020.
  • China’s three giants have 344 million locations connected.[2]
  • Worldwide FTTH connections grew 23% between Q1 2017 and Q1 2018.[3]
  • In June 2018, China Mobile added 4.63 million broadband customers, nearly all FTTH.[4]
  • European FTTH growth in 2017 was 20%.[5]
  • In India, Mukesh Ambani intends to connect 50 million homes at Reliance Jio.[6]

Enter your details below to request an extract of the report


Even the most reluctant carriers are now building, including Deutsche Telekom and British Telecom. In 2015, BT Openreach CTO Peter Bell said FTTH was “impossible” for Britain because it was too expensive.[7] Now, BT is hiring 3,500 engineers to connect 3 million premises, with 10 million more homes under consideration.[8]

Credit Suisse believes that for an incumbent, “The cost of building fibre is less than the cost of not building fibre.”

Contents:

  • Executive Summary
  • Introduction
  • Fibre to the home is growing at a near-explosive rate
  • Why the change?
  • Strategies of leading companies
  • Frontrunners
  • Moving toward rapid growth
  • Relative newcomer
  • The newly converted
  • Alternate carriers
  • Naysayers
  • U.S. regionals: CenturyLink, Frontier and Windstream
  • The Asian pioneers
  • Two technologies to consider
  • Ten-gigabit equipment
  • G.fast
  • The hard question: How many will decide to go wireless only?

Figures:

  • Figure 1: Paris area fibre coverage – Orange has covered most of the capital
  • Figure 2: European fibre growth
  • Figure 3: Top five European incumbents, stock price July 2016 – July 2018
  • Figure 4: DT CEO Tim Höttges and Bavarian Prime Minister Dr. Markus Söder announce a deal to fibre nearly all of Bavaria, part financed by the government

[1] https://www.fastnet.news/index.php/11-fib/715-at-t-fiber-run-rate-going-from-3m-to-5m-year

[2] https://www.fastnet.news/index.php/8-fnn/713-china-1-1b-4g-400m-broadband-328m-fibre-home-rapid-growth

[3] http://point-topic.com/free-analysis/world-broadband-statistics-q1-2018/

[4] https://www.chinamobileltd.com/en/ir/operation_m.php

[5] http://www.ftthcouncil.eu/documents/PressReleases/2018/PR%20Market%20Panorama%20-%2015-02-2018-%20FINAL.pdf

[6] https://www.fastnet.news/index.php/11-fib/703-india-unreal-jio-wants-50m-ftth-in-1100-cities

[7] G.fast Summit May 2015

[8] https://www.theguardian.com/business/2018/feb/01/bt-openreach-hire-3000-engineers-drive-to-fill-broadband-not-spots


5G: The spectrum game is changing – but how to play?

Introduction

Why does spectrum matter?

Radio spectrum is a key “raw material” for mobile networks, together with evolution of the transmission technology itself, and the availability of suitable cell-site locations. The more spectrum is made available for telcos, the more capacity there is overall for current and future mobile networks. The ability to provide good coverage is also determined largely by spectrum allocations.

Within the industry, we are accustomed to costly auction processes, as telcos battle for tranches of frequencies to add capacity, or support new generations of technology. In contrast, despite the huge costs to telcos of spectrum allocations, most people have very little awareness of which bands their phones support, other than perhaps that they can use ‘mobile/cellular’ and Wi-Fi.

Most people, even in the telecoms industry, don’t grasp the significance of particular numbers of MHz or GHz involved (Hz = number of cycles per second, measured in millions or billions). And that is just the tip of the jargon and acronym iceberg – a full discussion of mobile RAN (radio access network) technology involves different sorts of modulation, multiple antennas, propagation metrics, path loss (in decibels, dB) and so forth.
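
To give a feel for why the particular numbers matter, here is a short illustrative calculation (ours, not from the report) using the textbook free-space path loss formula – the same 1km link budget gets markedly harsher as the carrier frequency rises:

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: FSPL(dB) = 20log10(d_km) + 20log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for f_mhz in (700, 3500, 28000):  # low-band, mid-band and mmWave carriers
    print(f"{f_mhz / 1000:>5.1f} GHz over 1 km: {fspl_db(1.0, f_mhz):5.1f} dB")

# Every 10x in frequency adds ~20 dB of loss, before walls and foliage -
# one reason mmWave bands need dense small cells and struggle indoors.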

Yet as 5G pulls into view, it is critical to understand the process by which new frequencies will be released by governments, or old ones re-used by the mobile industry. To deliver the much-promised peak speeds and enhanced coverage of 5G, big chunks of frequencies are needed. Yet spectrum has many other uses besides public mobile networks, and battles will be fierce about any reallocations of incumbent users’ rights. The broadcast industry (especially TV), satellite operators, government departments (notably defence), scientific research communities and many other constituencies are involved here. In addition, there are growing demands for more bandwidth for unlicensed usage (as used for WiFi, Bluetooth and other low-power IoT networks such as SigFox).

Multiple big industries – usually referred to by the mobile community as “verticals” – are flexing their own muscles as well. Energy, transport, Internet, manufacturing, public safety and other sectors all see the benefits of wireless connectivity – but don’t necessarily want to involve mobile operators, nor subscribe to their preferred specifications and standards. Many have huge budgets, a deep legacy of systems-building and are hiring mobile specialists.

Lastly, parts of the technology industry are advocates of more nuanced approaches to spectrum management. Rather than dedicate bands to single companies, across whole countries or regions, they would rather develop mechanisms for sharing spectrum – either on a geographic basis, or by allowing some form of “peaceful coexistence” where different users’ radios behave nicely together, instead of creating interference. In theory, this could improve the efficient use of spectrum – but it adds complexity, and perhaps introduces so much extra competition that willingness to invest suffers.
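
A toy sketch (all names and numbers invented; loosely inspired by database-driven schemes such as TV white space and CBRS) shows the geographic flavour of this idea – a coordinator grants a channel only where the request falls outside an incumbent’s protection zone:

import math

# Registered incumbents: channel, planar position (km) and protection radius (km)
incumbents = [
    {"chan": 40, "pos": (0.0, 0.0), "protect_km": 10.0},  # e.g. a coastal radar
]

def grant(chan, pos):
    """Grant a channel unless the requester sits inside the protection
    zone of an incumbent already using that channel."""
    for inc in incumbents:
        if inc["chan"] == chan and math.dist(inc["pos"], pos) < inc["protect_km"]:
            return False
    return True

print(grant(40, (3.0, 4.0)))    # False: 5 km from the radar, inside its zone
print(grant(40, (30.0, 40.0)))  # True: 50 km away, coexistence is fine
print(grant(41, (3.0, 4.0)))    # True: a different channel, so no conflict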

Which bands are made available for 5G, on what timescales, in what type of “chunks”, and the authorisation / licensing schemes involved, all define the potential opportunity for operators in 5G – as well as the risks of disruption, and (for some) how large the window is to fully-monetise 4G investments.

The whole area is a minefield to understand – it brings together the hardest parts of wireless technology to grasp, along with impenetrable legal processes, and labyrinthine politics at national and international levels. And ideally, consideration of end-user needs and economic/social outputs should somehow be layered on top as well.

Who are the stakeholders for spectrum?

At first sight, it might seem that spectrum allocations for mobile networks ought to be a comparatively simple affair, with governments deciding on tranches of frequencies and an appropriate auction process. MNOs can bid for their desired bands, and then deploy networks (and, perhaps, gripe about the costs afterwards).

The reality is much more complex. A later section describes some of the international bureaucracy involved in defining appropriate bands, which can then be doled out by governments (assuming they don’t decide to act unilaterally). But even before that, it is important to consider which organisations want to get involved in the decision process – and their motivations, whether for 5G or other issues that are closer to their own priorities, which intersect with it.

Governments have a broad set of drivers and priorities to reconcile – technological evolution of the economy as a whole, the desire for a competitive telecoms industry, exports, auction receipts – and the protection of other spectrum user groups such as defence, transport and public safety. Different branches of government and the public administration have differing views, and there may sometimes be tussles between the executive branch and various regulators.

Much the same is true at regional levels, especially in Europe, where there are often disagreements between the European Commission, the European Parliament, the regulators’ groups and 28 different EU nations’ parliaments (plus another 23 non-EU nations).

Even within the telecoms industry there are differences of opinion – some operators see 5G as an urgent strategic priority that can help differentiation and reduce the costs of existing infrastructure deployments. Others are still in the process of rolling out 4G networks and want to ensure that those investments continue to have relevance. There are variations in how much credence is assigned to the projections of IoT growth – and even there, whether there needs to be breathing room for 4G cellular types such as NB-IoT, which is yet to be deployed despite its putative replacement being discussed already.

The net result is many rounds of research, debate, consultation, disagreement and (eventually) compromise. Yet in many ways, 5G is different from 3G and 4G, especially because many new sectors are directly involved in helping define the use-cases and requirements. In many ways, telecoms is now “too important to be left to the telcos”, and many other voices will therefore need to be heard.

 

Contents:

  • Executive Summary
  • Introduction
  • Why does spectrum matter?
  • Who are the stakeholders for spectrum?
  • Spectrum vs. business models
  • Does 5G need spectrum harmonisation as much as 4G?
  • Spectrum authorisation types & processes
  • Licensed, unlicensed and shared spectrum
  • Why is ITU involved, and what is IMT spectrum?
  • Key bands for 5G
  • Overview
  • 5G Phase 1: just more of the same?
  • mmWave beckons – the high bands >6GHz
  • Conclusions

 

Figures:

  • Figure 1 – 5G spectrum has multiple stakeholders with differing priorities
  • Figure 2 – Multi-band support has improved hugely since early 4G phones
  • Figure 3 – A potential 5G deployment & standardisation timeline
  • Figure 4 – ITU timeline for 5G spectrum harmonisation, 2014-2020
  • Figure 5 – High mmWave frequencies (e.g. 28GHz) don’t go through solid walls
  • Figure 6 – mmWave brings new technology and design challenges

eSIM: How Much Should Operators Worry?

What is eSIM? Or RSP?

There is a lot of confusion around what eSIM actually means. While the “e” is often just assumed to stand for “embedded”, this is only half the story – and one which various people in the industry are trying to change.

In theory the term “eSIM” refers only to the functionality of “remote provisioning”; that is, the ability to download an operator profile to an in-market SIM (and also potentially switch between profiles or delete them). This contrasts with the traditional methods of pre-provisioning specific, fixed profiles into SIMs during manufacture. Most SIMs today have a particular operator’s identity and encryption credentials set at the factory. This is true of both the familiar removable SIM cards used in mobile phones, and the “soldered-in” form used in some M2M devices.

In other words, the original “e” was a poor choice – it was intended to stand for “enhanced”, “electronic” or just imply “new and online” like eCommerce or eGovernment. In fact, the first use in 2011 was for eUICC – the snappier term eSIM only emerged a couple of years later. UICCs (Universal Integrated Circuit Cards) are the smart-card chips themselves, that are used both in SIMs and other applications, for example, bank, transport and access-security cards. Embedded, solderable SIMs have existed for certain M2M uses since 2010.

In an attempt to separate out the “form factor” (removable vs. embedded) aspect from the capability (remote vs. factory provisioned), the term RSP sometimes gets used, standing for Remote SIM Provisioning. This is the title of GSMA’s current standard. But unsurprisingly, the nicer term eSIM is hard to dislodge in observers’ consciousness, so it is likely to stick around. Most now think of eSIMs as having both the remote-provisioning function and an embedded non-removable form-factor. In theory, we might even get remote-provisioning for removable SIMs (the 2014 Apple SIM was a non-standard version of this).
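
As a mental model of the difference, here is a minimal sketch with invented types – the real interfaces are defined in the GSMA RSP specifications (SGP.02 for M2M, SGP.22 for consumer) and live inside tamper-resistant hardware. An eUICC is essentially a secure container that can receive, switch between and delete operator profiles after the device has shipped:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OperatorProfile:
    iccid: str          # profile identifier
    operator: str       # e.g. "MNO-A"
    credentials: bytes  # secret keys; in reality these never leave secure hardware

@dataclass
class EUICC:
    eid: str                                      # permanent identity of the chip
    profiles: dict = field(default_factory=dict)
    enabled: Optional[str] = None                 # at most one active profile

    def download(self, profile: OperatorProfile) -> None:
        # A factory-provisioned SIM gets exactly one profile, once;
        # an eUICC can do this repeatedly, in the field.
        self.profiles[profile.iccid] = profile

    def enable(self, iccid: str) -> None:
        self.enabled = iccid

    def delete(self, iccid: str) -> None:
        if self.enabled == iccid:
            self.enabled = None
        del self.profiles[iccid]

sim = EUICC(eid="89049032000000000000000000000001")
sim.download(OperatorProfile("8944-profile-A", "MNO-A", b"..."))
sim.download(OperatorProfile("8944-profile-B", "MNO-B", b"..."))
sim.enable("8944-profile-B")  # switch operator without touching the hardware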

Figure 1: What does eSIM actually mean?


Source: Disruptive Analysis

This picture is further muddied by different sets of GSMA standards for M2M and consumer use-cases at present, where the latter involves some way for the end-user to choose which profiles to download and when to activate them – for example, linking a new cellular tablet to an existing data-plan. This is different to a connected car or an industrial M2M use-case, where the manufacturer designs in the connectivity, and perhaps needs to manage whole “fleets” of eSIMs together. The GSMA M2M version of the standards was first released in 2013, and the first consumer specifications were only released in 2016. Both are being enhanced over time, and there are intentions to develop a converged M2M/consumer specification, probably in H2 2017.

eSIMs vs. soft-SIMs / vSIMs

This is another area of confusion – some people confuse eSIMs with the concept of a “soft-SIM” (also called virtual SIMs/vSIMs). These have been discussed for years as a possible option for replacing physical SIM chips entirely, whether remotely provisioned, removable/soldered or not. They use purely software-based security credentials and certificates, which could be based in the “secure zone” of some mobile processors.

However, the mobile industry has strongly pushed-back on the Soft-SIM concept and standardisation, for both security reasons and also (implicit) commercial concerns. Despite this we are aware of at least two Asian handset vendors that have recently started using virtual SIMs for roaming applications.

For now, soft-SIMs appear to be far from the standards agenda, although there is definitely renewed interest. They also require a secondary market in “profiles”, which is at a very early stage and not receiving much industry attention at the moment. STL thinks that there is a possibility that we could see a future standardised version of soft-SIMs and the associated value-chain and controls, but it will take a lot of convincing for the telco industry (and especially GSMA) to push for it. It might get another nudge from Apple (which indirectly catalysed the whole eSIM movement with a 2010 patent), but as discussed later that seems improbable in the short term.

Multi-IMSI: How does multi-IMSI work?

It should also be noted that multi-IMSI (International Mobile Subscriber Identity) SIMs are yet another category here. Already used in various niches, these allow a single operator profile to be associated with multiple phone numbers – for example in different geographies. Combined with licences in different countries or multiple MVNO arrangements, this allows various clever business models, but anchored in one central operator’s system. Multi-local operators such as Truphone exploit this, as does Google in its Fi service, which blends the T-Mobile US and Sprint networks together. It is theoretically possible to blend multi-IMSI functionality with eSIM remote-provisioning, as sketched below.
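
A minimal sketch of that idea (illustrative only – the MCC/MNC prefixes below are real-world network codes, but the IMSIs and the selection logic are invented):

# One operator profile, several subscriber identities (IMSIs): the SIM
# picks a "local" identity for the visited country, while billing and
# control stay anchored in the home operator's systems.
PROFILE_IMSIS = {
    "GB": "234150000000001",   # 234-15 = a UK network code
    "US": "310260000000001",   # 310-260 = a US network code
}
DEFAULT_IMSI = "234150000000001"

def select_imsi(visited_country: str) -> str:
    return PROFILE_IMSIS.get(visited_country, DEFAULT_IMSI)

print(select_imsi("US"))  # attach in the US with the US identity
print(select_imsi("FR"))  # no local identity, so roam on the default IMSI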

eSIM use cases and what stakeholders hope to gain

There are two sets of use-cases and related stakeholder groups for eSIMs:

  • Devices that already use cellular radios and SIMs today. This group can be sub-divided into:
    • Mobile phones
    • M2M uses (e.g. connected cars and industrial modules)
    • Connected devices such as tablets, PC dongles and portable WiFi hotspots.
  • Devices that do not have cellular connectivity currently; this covers a huge potential range of IoT devices.

Broadly speaking, it is hoped that eSIM will improve the return on investment and/or efficiency of existing cellular devices and services, or help justify and enable the inclusion of cellular connections in new ones. Replacing existing SIMs is (theoretically) made easier by scrutinising existing channels and business processes and improving them – while new markets (again theoretically) offer win-win scenarios where there is no threat of disruption to existing business models.

Different stakeholders want different benefits from eSIMs. Mobile operators want:

  • Lower costs for procuring and distributing SIMs.
  • Increased revenue from adding more cellular devices and related services, which can be done incrementally with an eSIM, e.g. IoT connectivity and management.
  • Better functionality and security compared to competing non-cellular technologies.
  • Limited risk of disintermediation, increased churn or OEMs acting as gatekeepers.

And device manufacturers want:

  • To reduce their “bill of material” (BoM) costs and number of design compromises compared to existing removable SIMs
  • To sell more phones and other connected devices
  • To provide better user experience, especially compared to competing OEMs / ecosystems
  • To create additional revenue streams related to service connectivity
  • To upgrade existing embedded (but non-programmable) soldered SIMs for M2M

The truth, however, is more complex than that – there needs to be clear proof that eSIM improves existing devices’ costs or associated revenues, without introducing extra complexity or risk. And new device categories need to justify the addition of the (expensive, power-consuming) radio itself, as well as choosing SIM vs. eSIM for authentication. In both cases, the needs and benefits for cellular operators and device OEMs (plus their users and channels) must coincide.

There are also many other constituencies involved here: niche service providers of many types, network equipment and software suppliers, IoT specialists, chipset companies, enterprises and their technology suppliers, industry associations, SIM suppliers and so forth. In each case there are both incumbents, and smaller innovators/disruptors trying to find a viable commercial position.

This brings in many “ifs” and “buts” that need to be addressed.

Contents

  • Executive Summary
  • Introduction: What is eSIM? Or RSP?
  • Not a Soft-SIM, or multi-IMSI
  • What do stakeholders hope to gain?
  • A million practical problems
  • So where does eSIM make sense?
  • Phones or just IoT?
  • Forecasts for eSIM
  • Conclusion 

 

Figures:

  • Figure 1: What does eSIM actually mean?
  • Figure 2: eSIM standardisation & industry initiatives timeline
  • Figure 3: eSIM shipment forecasts, by device category, 2016-2021

5G: How Will It Play Out?

Introduction: Different visions of 5G

The ‘idealists’ and the ‘pragmatists’

In the last 18 months, several different visions of 5G have emerged.

One is the vision espoused by the major R&D collaborations, academics, standardisation groups, the European Union, and some operators. This is the one with the flying robots, self-driving cars, and fully automated factories whose internal networks are provided entirely by ultra-low latency critical communications profiles within the cellular network. The simplest way to describe its aims would be to say that they intend to create a genuinely universal mobile telecommunications system serving everything from 8K streaming video for football crowds, through basic (defined as 50Mbps) fixed-wireless coverage for low-ARPU developing markets, to low-rate and ultra-low power but massive-scale M2M, with the same radio waveform, backed by a single universal virtualised core network “sliced” between use-cases. This slide, from Samsung’s Raj Gawera, sums it up – 5G is meant to maximise all eight factors labelled on the vertices of the chart.

Figure 1: 5G, the vision: one radio for everything

Source: Samsung, 3G & 4G Wireless Blog

Most of its backers – the idealist group – are in no hurry, targeting 2020 at the earliest for the standard to be complete, and deployment to begin sometime after that. There are some recent signs of increasing urgency – and certainly various early demonstrations – although that is perhaps a response to the sense of movement elsewhere in the industry.

The other vision is the one backed in 3GPP (the main standards body for 5G) by an alliance of semiconductor companies – including Intel, Samsung, ARM, Qualcomm, and Mediatek – but also Nokia Networks and some carriers, notably Verizon Wireless. This vision is much more radio-centric, being focused on the so-called 5G New Radio (NR) element of the project, and centred on delivering ultra-high capacity mobile broadband. It differs significantly from the idealists’ on timing – the pragmatist group wants to have real deployments by 2018 or even earlier, and is willing (even keen) to take an IETF-like approach where the standards process ratifies the results of “rough consensus and running code”.

Carriers’ interests fall between the two poles. In general, operators’ contributions to the process focus on the three Cs – capacity, cost, and carbon dioxide – but they also usually have a special interest of their own. This might be network virtualisation and slicing for converged operators with significant cloud and enterprise interests, low-latency or massive-scale M2M for operators with major industrial customers, or low-cost mobile broadband for operators with emerging market opcos.

The summer and especially September 2016’s CTIA Mobility conference also pointed towards some players in the middle – AT&T is juggling its focus on its ECOMP NFV mega-project, with worries that Verizon will force its hand on 5G the same way it did with 4G. It would be in the idealist group if it could align 5G radio deployment and NFV perfectly, but it is probably aware of the gulf widening rather than narrowing between the two. Ericsson is pushing for 5G incrementalism (and minimising the risk of carriers switching vendors at a later date) with its “Plug-In” strategy for specific bits of functionality.

Dino Flore of Qualcomm, the chairman of 3GPP RAN (RAN = radio access network), has chosen to compromise by taking forward the core enhanced mobile broadband (eMBB) elements for what is now being called “Phase 1”, but also cherry-picking two of the future use cases – “massive” M2M, and “critical” communications. These last two differ in that the first is optimised for scalability and power saving, and the second is optimised for quality-of-service control (or PPP for Priority, Precedence, and Pre-emption in 3GPP terminology), reliable delivery, and very low latency. As the low-cost use case is essentially eMBB in low-band spectrum, with a less dense network and a high degree of automation, this choice covers carriers’ expressed needs rather well, at least in principle. In practice, the three have very different levels of commercial urgency.
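
The tension between the three use cases is easiest to see side by side. A small sketch of their headline design targets – indicative figures of the kind quoted in ITU IMT-2020 discussions, not 3GPP-normative values – makes the point that no single radio configuration optimises all three at once:

# Indicative (non-normative) targets for the three Phase 1 service families.
PHASE1_TARGETS = {
    "eMBB":           {"headline": "multi-Gbps peak rates",
                       "user_plane_latency": "~4 ms",
                       "optimised_for": "throughput and spectral efficiency"},
    "massive M2M":    {"headline": "~1,000,000 devices per km2",
                       "user_plane_latency": "relaxed (seconds are fine)",
                       "optimised_for": "scale and 10-year battery life"},
    "critical comms": {"headline": "99.999% delivery reliability",
                       "user_plane_latency": "~1 ms",
                       "optimised_for": "PPP and ultra-low latency"},
}

for name, t in PHASE1_TARGETS.items():
    print(f"{name}: {t['headline']}; latency {t['user_plane_latency']} ({t['optimised_for']})")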

Implicitly, of course, the other, more futuristic use cases (such as self-driving cars) have been relegated to “Phase 2”. As Phase 2 is expected to be delivered after 2020, or in other words, on the original timetable, this means that Phase 1 has indeed accelerated significantly. Delays in some of the more futuristic applications may not be a major worry to many people – self-driving cars probably have more regulatory obstacles than technical ones, while Vehicle to Vehicle (V2V) communications seems to be less of a priority for the automotive industry than many assert. A recent survey by Ericsson[1] suggested that better mapping and navigation is more important than “platooning” vehicles (grouping them together on the highway in platoons, which increases the capacity of the highway) as a driver of next-gen mobile capabilities.

3GPP’s current timeline foresees issuing the Technical Report (TR) detailing the requirements for the New Radio standard at the RAN (Radio Access Network) 73 meeting next month, and finalising a Non-Standalone version of the New Radio standard at RAN 78 in December 2017, with the complete NR specification being frozen by the TSG (Technical Specifications Group) 80 meeting in June 2018, in time to be included in 3GPP Release 15. (In itself this is a significant hurry-up – the original plan was for 5G to wait for R16.) This spec would include all three major use cases, support for both <6GHz and millimetre wave spectrum, and both Non-Standalone and Standalone.

Importantly, if both Non-Standalone and the features common to it and Standalone standards are ready by the end of 2017, we will be very close to a product that could be deployed in a ‘pragmatist’ scenario even ahead of the standards process. This seems to be what VZW, Nokia, Ericsson, and others are hoping for – especially for fixed-5G. The December 2017 meeting is an especially important juncture as it will be a joint meeting of both TSG and RAN. AT&T has also called for a speeding-up of standardisation[2].

The problem, however, is that it may be difficult to reconcile the technical requirements of all three in one new radio, especially as the new radio must also be extensible to deal with the many different use cases of Phase 2, and must work both with the 4G core network as “anchor” in Non-Standalone and with the new 5G core when that arrives, in Standalone.

Also, radio development is forging ahead of both core development and spectrum policy. Phase 1 5G is focused on the bands below 6GHz, but radio vendors have been demonstrating systems working in the 15, 28, 60, and 73GHz bands – for instance Samsung and T-Mobile working on 28GHz[3]. The US FCC especially has moved very rapidly to make this spectrum available, while the 3GPP work item for millimetre wave isn’t meant to report before 2017 – and with harmonisation and allocation only scheduled for discussion at ITU’s 2019 World Radiocommunication Conference (WRC-19).

The upshot is that the March 2017 TSG 75 meeting is a critical decision point. Among much else it will have to confirm the future timeline and make a decision on whether or not the Non-Standalone (sometimes abbreviated to NSA) version of the New Radio will be ready by TSG/RAN 78 in December. The following 3GPP graphic summarises the timeline.


[1] https://www.ericsson.com/se/news/2039614

[2] http://www.fiercewireless.com/tech/at-t-s-keathley-5g-standards-should-be-released-2017-not-2018

[3] http://www.fiercewireless.com/tech/t-mobile-samsung-plan-5g-trials-using-pre-commercial-systems-at-28-ghz

 

Contents:

  • Executive Summary
  • Introduction: Different visions of 5G
  • One Network to Rule Them All: Can it Happen?
  • Network slicing: a nice theory, but work needed…
  • Difficulty versus Urgency: understanding opportunities and blockers for 5G
  • Business drivers of the timeline: both artificial and real
  • Internet-Agility Driving Progress
  • How big is the mission critical IoT opportunity?
  • Conclusions

 

Figures:

  • Figure 1: 5G, the vision: one radio for everything
  • Figure 2: The New Radio standardisation timeline, as of June 2016
  • Figure 3: An example frame structure, showing the cost of critical comms
  • Figure 4: LTE RAN protocols desperately need simplicity
  • Figure 5: Moving the Internet/RAN boundary may be problematic, but the ultra-low latency targets demand it
  • Figure 6: Easy versus urgent
  • Figure 7: A summary of key opportunities and barriers in 5G

MWC 2016: 5G and Wireless Networks

Getting Serious About 5G

MWC 2016 saw intense hype about 5G. This is typical for the run-up to a new “G”, but at least this year there was much less of the waffle about it being “a behaviour”, a “special generation”, the “last G”, or a “state of mind”. Instead, there was much more concrete activity from all stakeholders, including operators, technology vendors and standards bodies.

Nokia CEO Rajeev Suri, notably, set a 2017 target for 5G deployment to begin, which has been taken up by carriers including Verizon Wireless. This is still controversial, but the major barriers seem to be around standardisation and spectrum, rather than the technology. Most vendors had a demonstration of 5G in some form, although the emphasis and timeframes varied. However, the general theme is that even the 2018-2019 timeframe set by the Korean operators may now be overtaken by events.

An important theme at the show was that expectations for 5G have been revised:

  • They have been revised up, when it comes to the potential of future radio technology, which is seen as being capable of delivering a useful version of 5G much faster;
  • They have been revised down, when it comes to some of the more science-fictional visions of ‘one network to cover every imaginable use case’. 5G is likely to be focused on mobile broadband plus a couple of other IoT options.

This is in part thanks to a strong operator voice on 5G, coordinated through the Next Generation Mobile Networks Alliance (NGMN), reaching the standardisation process in 3GPP. It is also due to a strong presence by the silicon vendors in the standards process, which is important given the concentration of the device market into relatively few system-on-chip and even fewer RF component manufacturers.

Context: 3GPP 5G RAN Meeting Set the Scene for Faster Development

To understand the shift at MWC, it is useful to revisit what operators and vendors were focusing on at the September 2015 3GPP 5G RAN meeting in Phoenix. Operator concerns from the sessions can be summed up as the three Cs – cost (reducing total cost of ownership), capacity (more of it, specifically enhanced mobile broadband data and supporting massive numbers of IoT device connections), and carbon dioxide (less of it, through using less energy).

At that key meeting, most operators clearly wanted the three Cs, and most also highlighted a particular interest in one or another of the 5G benefit areas. Orange was interested in driving low-cost mobile broadband for its African operations. Deutsche Telekom was keen on network slicing and virtualisation for its enterprise customers. Verizon Wireless wanted more speed above all, to maintain its premium carrier status in the rich US cellular market. Vodafone was interested in the IoT/M2M aspects as a new growth opportunity.

This was reflected in operator views on timing of 5G standardisation and commercialisation. The more value a particular operator placed on capacity, the sooner they wanted “early 5G” and the more focused the specs would have to be, putting off the more visionary elements (device-to-device, no-cells networks, etc.) to a second phase.

A strong alliance between the silicon vendors – Qualcomm, Samsung, Mediatek, ARM, and Intel – and key network vendors, notably Nokia, emerged to push for an early 5G standardisation focused on a new radio access technology. This standard would be used in the context of existing 4G networks before the new 5G core network arrives, and begins to deliver on the three Cs. On the other side of the discussion, Huawei (which was still talking about 5G in 2020 at MWC) was keen to keep the big expansive vision of an all-purpose multiservice 5G network alive, and to eke out 4G with incremental updates (LTE-A Pro) in the meantime.

Dino Flore, the Qualcomm executive who chairs 3GPP RAN, compromised by going for the early 5G radio access but keeping two of the special requests – for “massive” IoT and for “mission-critical” IoT – on the programme, while accepting continuing development of LTE as LTE-A Pro.

 

Contents:

  • Executive Summary
  • Getting Serious About 5G
  • Context: 3GPP 5G RAN Meeting Set the Scene for Faster Development
  • MWC showed the early 5G camp is getting stronger
  • A special relationship: Nokia, Qualcomm, Intel
  • Conclusions

How BT beat Apple and Google over 5 years

BT Group outperformed Apple and Google

Over the last five years, the share price of BT Group, the UK’s ex-incumbent telecoms operator, has outperformed those of Apple and Google, as well as a raft of other telecoms shares. The following chart shows BT’s share price in red and Apple’s in blue for comparison.

Figure 1:  BT’s Share Price over 5 Years

Source: www.stockcharts.com

Now of course, over a longer period, Apple and Google have raced way ahead of BT in terms of market capitalisation, with Apple worth $654bn and Google $429bn, compared to BT’s £35bn (c. $53bn).

And, with any such analysis, where you start the comparison matters. Nonetheless, BT’s share price performance during this period has been pretty impressive – and it has delivered dividends too.

The total shareholder returns (capital growth plus all dividends) of shares in BT bought in September 2010 are over 200% despite its revenues going down in the period.

So what has happened at BT, then?

Sound basic financials despite falling revenues

Over this 5-year period, BT’s total revenues fell by 12%. However, over the same period BT managed to grow EBITDA from £5.9bn to £6.3bn – an impressive margin expansion. This clearly cannot go on for ever (a company cannot endlessly shrink its way to higher profits), but it has contributed to positive capital markets sentiment.
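
The scale of that margin expansion can be derived from the two quoted figures alone (our arithmetic, using only the 12% revenue decline and the EBITDA growth):

\frac{\text{margin}_{2014/15}}{\text{margin}_{2010/11}} = \frac{6.3 / (0.88R)}{5.9 / R} = \frac{6.3}{5.9 \times 0.88} \approx 1.21

where R is 2010/11 revenue – i.e. the EBITDA margin rose by roughly a fifth in relative terms even as the top line shrank.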

Figure 2: BT Group Revenue and EBITDA 2010/11 – 2014/15


Source: BT company accounts, STL Partners

BT pays off its debts

BT has also managed to reduce its debt significantly, from £8.8bn to £5.1bn over this period.

Figure 3: BT has reduced its debts by more than a third (£billions)

 

Source: BT company accounts, STL Partners

Margin expansion and debt reduction suggest good financial management, but this does not explain the dramatic growth in firm value (market capitalisation plus net debt) from just over £20bn in March 2011 to circa £40bn today (based on a mid-September 2015 share price).
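As a quick sanity check, the firm value arithmetic can be reproduced directly from the figures quoted above – a minimal sketch using this note’s rounded numbers rather than BT’s audited accounts:

```python
# Firm value = market capitalisation + net debt.
# Figures are the rounded ones quoted in this note, in GBP billions.

def firm_value(market_cap_bn, net_debt_bn):
    return market_cap_bn + net_debt_bn

fv_2015 = firm_value(market_cap_bn=35.0, net_debt_bn=5.1)
print(f"Mid-September 2015 firm value: ~£{fv_2015:.0f}bn")  # ~£40bn, as quoted

# The March 2011 figure of just over £20bn, with £8.8bn of net debt,
# implies a market capitalisation of roughly £11-12bn at that point.
print(f"Implied March 2011 market cap: ~£{20.0 - 8.8:.1f}bn")
```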

Figure 4: BT Group’s Firm Value has doubled in 5 Years

Source: BT company accounts, STL Partners

  • Introduction: BT’s Share Price Miracle
  • So what has happened at BT, then?
  • Sound basic financials despite falling revenues
  • Paying off its debts
  • BT Sport: a phenomenal halo effect?
  • Will BT Sport continue to shine?
  • Take-Outs from BT’s Success

 

  • Figure 1: BT’s Share Price over 5 Years
  • Figure 2: 5-Year Total Shareholder Returns Vs Revenue Growth for leading telecoms players
  • Figure 3: BT Group Revenue and EBITDA 2010/11-2014/15
  • Figure 4: BT has reduced its debts by more than a third (£billions)
  • Figure 5: BT Group’s Firm Value has doubled in 5 Years
  • Figure 6: BT Group has improved key market valuation ratios
  • Figure 7: BT ‘broadband and TV’ compared to BT Consumer Division
  • Figure 8: Comparing Firm Values / Revenue Ratios
  • Figure 9: BT Sport’s impact on broadband

Gigabit Cable Attacks This Year

Introduction

Since at least May 2014 and our Triple Play in the USA Executive Briefing, we have been warning that the cable industry’s continuous improvement of its DOCSIS 3 technology threatens fixed operators with a succession of dramatic speed jumps that are relatively cheap in CAPEX terms. Gigabit chipsets have been available for some time, so the actual timing of the roll-out is set by cable operators’ commercial choices rather than by the technology.

With the arrival of DOCSIS 3.1, multi-gigabit cable has also become available. As a result, cable operators have become the best-value providers in the broadband mass market: in the Triple Play briefing we found they were typically the cheapest in terms of price per megabit in the most common speed tiers, which at the time were between 50 and 100Mbps. They were sometimes also the leaders for outright speed, and this has had an effect. In Q3 2014, for the first time, Comcast had more high-speed Internet subscribers than TV subscribers, on a comparable basis. Furthermore, in Europe, cable industry revenues grew 4.6% in 2014 while the TV component grew only 1.8%. In other words, cable operators are now broadband operators above all.
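The “best value” point is easy to test for any given market: divide each tier’s monthly price by its headline downstream speed. A minimal sketch, with purely illustrative prices and speeds rather than any operator’s actual tariffs:

```python
# Price-per-megabit comparison across broadband tiers.
# Prices and speeds are illustrative placeholders, not real tariffs.

tiers = [
    # (provider type, downstream Mbps, monthly price in USD)
    ("Cable",       105, 66.00),
    ("Telco VDSL",   45, 55.00),
    ("Telco fibre", 100, 75.00),
]

for name, mbps, price in tiers:
    print(f"{name:12s} {mbps:4d} Mbps  ${price / mbps:.2f}/Mbps")
```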

Figure 1: Comcast now has more broadband than TV customers

Source: STL Partners, Comcast Q1 2015 trending schedule 

In the December 2014 Will AT&T shed copper, fibre-up, or buy more content – and what are the lessons? Executive Briefing, we covered the impact on AT&T’s consumer wireline business, and pointed out that its strategy of concentrating on content rather than broadband had not really delivered. In the context of ever more competition from streaming video, it was necessary to have an outstanding broadband product before trying to add content revenues – something its DSL infrastructure couldn’t deliver against cable or fibre competitors. The cable competition concentrated on winning whole households’ spending with broadband, with content as an upsell, and has undermined the wireline base to the point where AT&T might well exit a large proportion of it or perhaps sell off the division, refocusing on wireless, DirecTV satellite TV, and enterprise. At the moment, Comcast sees about 2 broadband net-adds for each triple-play net-add, although the increasing numbers of business ISP customers complicate the picture.

Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups

Source: STL, Comcast Q1 trending schedule

Since Christmas, the trend has picked up speed. Comcast announced a 2Gbps deployment to 1.5 million homes in the Atlanta metropolitan area, with a national deployment to follow. Time Warner Cable has announced a wave of upgrades in Charlotte, North Carolina that takes its current 30Mbps tier to 200Mbps and its 50Mbps tier to 300Mbps, after Google Fiber announced plans to deploy in the area. In the UK, Virgin Media users have been reporting unusually high speeds, apparently because the operator is trialling a 300Mbps speed tier, not long after it upgraded 50Mbps users to 152Mbps.

It is very much worth noting that these deployments are at scale. The Comcast and TWC rollouts are in the millions of premises. When the Virgin Media one reaches production status, it will be multi-million too. Vodafone-owned KDG in Germany is currently deploying 200Mbps, and it will likely go further as soon as it feels the need from a tactical point of view. This is the advantage of an upgrade path that doesn’t require much trenching. Not only can the upgrades be incremental and continuous, they can also be deployed at scale without enormous disruption.

Technology is driving the cable surge

This year’s CES saw the announcement, by Broadcom, of a new system-on-a-chip (SoC) for cable modems/STBs that integrates the new DOCSIS 3.1 cable standard. This provides for even higher speeds, theoretically up to 7Gbps downlink, while still providing a broadcast path for pure TV. The SoC also, however, includes a WLAN radio with the newest 802.11ac technology, including beamforming and 4×4 multiple-input and multiple-output (MIMO), which is rated for gigabit speeds in the local network.
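The multi-gigabit headline figures follow fairly directly from the DOCSIS 3.1 physical layer: wide OFDM downstream channels (up to 192MHz) carrying up to 4096-QAM, i.e. 12 bits per symbol. A back-of-envelope sketch – note that the flat 20% overhead allowance is our own simplifying assumption, not the standard’s exact FEC/framing figure:

```python
# Rough DOCSIS 3.1 downstream capacity estimate.
# Assumes 192 MHz OFDM channels at 4096-QAM (12 bits/symbol) and a
# flat 20% FEC/framing allowance (a simplification, not the spec value).

CHANNEL_HZ = 192e6
BITS_PER_SYMBOL = 12   # 4096-QAM
OVERHEAD = 0.20

def usable_gbps_per_channel():
    raw = CHANNEL_HZ * BITS_PER_SYMBOL        # ~2.3 Gbps raw per channel
    return raw * (1 - OVERHEAD) / 1e9

per_ch = usable_gbps_per_channel()
for n in range(1, 5):
    print(f"{n} channel(s): ~{n * per_ch:.1f} Gbps")
# Bonding several such channels across the plant is what produces the
# multi-gigabit headline figures quoted for DOCSIS 3.1.
```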

Even taking into account the usual level of exaggeration, this is an impressive package, offering telco-hammering broadband speeds, support for broadcast TV, and in-home distribution at speeds that can keep up with 4K streaming video. These are the SoCs that Comcast will be using for its gigabit cable rollouts. STMicroelectronics demonstrated its own multigigabit solution at CES, and although Intel has yet to show a DOCSIS 3.1 SoC, the most recent version of its Puma platform offers up to 1.6Gbps in a DOCSIS 3 network. DOCSIS 3 and 3.1 are designed to be interoperable, so this product has a future even after the head-ends are upgraded.

Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset

Source: RCRWireless 

With multiple chipset vendors shipping products, CableLabs running regular interoperability tests, and large regional deployments beginning, we conclude that the big cable upgrade is now here. Even if cable operators succeed in virtualising their set-top box software, the customer-end modem and WiFi router cannot be provided from the cloud. It is important to realise that FTTH operators can upgrade in a similarly painless way by replacing their optical network terminals (ONTs), whereas DSL operators need to replace infrastructure. Also, ONTs are often independent of the WLAN router or other customer equipment, so the upgrade won’t necessarily improve the WiFi.

WiFi is also getting a major upgrade

The Broadcom device is so significant, though, because of the very strong WiFi support built in alongside the cable modem. Like the cable industry, the WiFi ecosystem has succeeded in keeping up a steady cycle of continuous, usually backwards-compatible improvements, from 802.11b through to 802.11ac, thanks to a major standards effort, the scale that Intel’s and Apple’s support provides, and its relatively light intellectual property encumbrance.

802.11ac adds a number of advanced radio features – notably multi-user MIMO, beamforming, and higher-density modulation – that are only expected to arrive in the cellular world as part of 5G some time after 2020, as well as incremental improvements over 802.11n such as additional MIMO streams, wider channels, and 5GHz spectrum by default. As a result, the industry refers to it as “gigabit WiFi”, although the gigabit is a per-station rather than per-user throughput.
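The “gigabit” arithmetic is straightforward to reconstruct from the per-stream building blocks. A simplified sketch of peak PHY rates (real-world throughput is typically well under half of these figures):

```python
# Peak 802.11ac PHY rates built up from the per-stream figure:
# 433.3 Mbps per spatial stream at 80 MHz (256-QAM 5/6, short guard interval).

PER_STREAM_80MHZ_MBPS = 433.3

def phy_rate_mbps(streams, channel_mhz=80):
    # Valid for 80 and 160 MHz: a 160 MHz channel doubles the rate.
    return PER_STREAM_80MHZ_MBPS * streams * (channel_mhz / 80)

print(phy_rate_mbps(streams=3))   # ~1300 Mbps: the common "1.3 Gbps" figure
print(phy_rate_mbps(streams=4))   # ~1733 Mbps: per station, not per user
```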

The standard has been settled since January 2014, and support has been available in most flagship-class devices and laptop chipsets since then, so this is now a reality. The upgrade of the cable networks to 802.11ac WiFi backed by DOCSIS 3.1 will have major strategic consequences for telcos, as it enables the cable operators and any strategic partners of theirs to compete even harder in the fixed broadband business and also launch a WiFi-plus-MVNO mobile service at the same time. The beamforming element of 802.11ac should help them support higher user densities, as it exploits the spatial diversity among different stations to reduce interference. Cablevision already launched a mobile service just before Christmas. We know Comcast is planning to launch one sometime this year, as it has been hiring a variety of mobile professionals quite aggressively. And, of course, the CableWiFi roaming alliance greatly facilitates scaling up such a service. The economics of a mini-carrier, as we pointed out in the Google MVNO: What’s Behind It and What Are the Implications? Executive Briefing, hinge on how much traffic can be offloaded to WiFi or small cells.

Figure 4: Modelling a mini-carrier shows that the WiFi is critical

Source: STL Partners

Traffic carried on WiFi costs nothing in terms of spectrum and much less in terms of CAPEX (due to the lower intellectual property tax and the very high production runs of WiFi equipment). In a cable context, it will often be backhauled in the spare capacity of the fixed access network, and therefore will account for very little additional cost on this score. As a result, the percentage of data traffic transferred to WiFi, or absorbed by it, is a crucial variable. KDDI, for example, carries 57% of its mobile data traffic on WiFi and hopes to reach 65% by the end of this year. Increasing the fraction from 30% to 57% roughly halved their CAPEX on LTE.
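A deliberately crude model shows why the offload fraction matters so much: if capacity-driven RAN CAPEX scales with the share of traffic left on the cellular network, then moving from 30% to 57% offload cuts the cellular-carried share from 70% to 43% of total traffic. The linear-scaling assumption is ours and ignores fixed coverage costs:

```python
# Crude offload economics: assume capacity-driven RAN CAPEX scales
# linearly with the traffic share the cellular network must carry.
# Ignores fixed coverage costs, so it is only indicative.

def cellular_capex_index(offload_fraction):
    return 1.0 - offload_fraction

before = cellular_capex_index(0.30)   # 70% of traffic stays on cellular
after = cellular_capex_index(0.57)    # 43% of traffic stays on cellular
print(f"CAPEX ratio after vs before: {after / before:.2f}")  # ~0.61

# KDDI's reported "roughly halved" is better than this linear estimate,
# plausibly because offload relieves the costliest peak-hour cells most.
```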

A major regulatory issue at the moment is the deployment of LTE-LAA (Licensed-Assisted Access), which aggregates unlicensed radio spectrum with a channel from licensed spectrum in order to increase the available bandwidth. The 5GHz WiFi band is the most likely candidate for this, as it is widely available, contains a lot of capacity, and is well-supported in hardware.

We should expect the cable industry to push back very hard against efforts to rush deployment of LTE-LAA cellular networks through the regulatory process, as they have a great deal to lose if the cellular networks start to take up a large proportion of the 5GHz band. From their point of view, a major purpose of LTE-LAA might be to occupy the 5GHz and deny it to their WiFi operations.

  • Executive Summary
  • Introduction
  • Technology is driving the cable surge
  • WiFi is also getting a major upgrade
  • Wholesale and enterprise markets are threatened as well
  • The Cable Surge Is Disrupting Wireline
  • Conclusions
  • STL Partners and Telco 2.0: Change the Game 
  • Figure 1: Comcast now has more broadband than TV customers
  • Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups
  • Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset
  • Figure 4: Modelling a mini-carrier shows that the WiFi is critical
  • Figure 5: Comcast’s growth is mostly driven by business services and broadband
  • Figure 6: Comcast Business is its growth star, with a 27% CAGR
  • Figure 7: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 8: 3 major cable operators’ business services are now close to AT&T or Verizon’s scale
  • Figure 9: Summary of gigabit deployments
  • Figure 10: CAPEX as a % of revenue has been falling for some time…

 

Will AT&T shed copper, fibre-up, or buy more content – and what are the lessons?

Looking Back to 2012

In version 1.0 of the Telco 2.0 Transformation Index, we identified a number of key strategic issues at AT&T that would mark it in the years to come. Specifically, we noted that the US wireless segment, AT&T Mobility, had been very strong, powered by iPhone data plans, that by contrast the consumer wireline segment, Home Solutions, had been rather weak, and that the enterprise segment, Business Solutions, faced a massive “crossing the chasm” challenge as its highly valuable customers began a technology transition that exposed them to new competitors, such as cloud computing providers, cable operators, and dark-fibre owners.

Figure 1: AT&T revenues by reporting segment, 2012 and 2014

Source: Telco 2.0 Transformation Index

We noted that the wireless segment, though strong, was behind its great rival Verizon Wireless for 4G coverage and capacity, and that the future of the consumer wireline segment was dependent on a big strategic bet on IPTV content, delivered over VDSL (aka “fibre to the cabinet”).

In Business Solutions, newer products like cloud, M2M services, Voice 2.0, and various value-added networking services, grouped in “Strategic Business Services”, had to scale up and take over from traditional ones like wholesale circuit voice and Centrex, IP transit, classic managed hosting, and T-carriers, before too many customers went missing. The following chart shows the growth rates in each of the reporting segments over the last two years.

Figure 2: Revenue growth by reporting segment, 2-year CAGR

Source: Telco 2.0 Transformation Index

Out of the three major segments – wireless, consumer wireline, and business solutions – we can see that wireless is performing acceptably (although growth has slowed down), business solutions is in the grip of its transition, and wireline is just about growing. Because wireless is such a big segment (see Figure 1), it contributes a disproportionate amount of the company’s top-line growth. Figure 3 shows revenue in the wireline segment as an index, with Q2 2011 set to 100.

Figure 3: Wireline overall is barely growing…

Source: Telco 2.0 Transformation Index

Back in 2012, we summed up the consumer wireline strategy as being all about VDSL and TV. The combination, plus voice, makes up the product line known as U-Verse, which we covered in the Telco 2.0 Transformation Index. We were distinctly sceptical, essentially because we believe that broadband is now the key product in the triple-play and the one that sells the other elements. With cable operators routinely offering 100Mbps, and upgrades all the way to gigabit speeds in the pipeline, we found it hard to believe that a DSL network with “up to” 45Mbps maximum would keep up.

 

  • Executive Summary
  • Contents
  • Looking Back to 2012
  • The View in 2014
  • The DirecTV Filing
  • Getting out of consumer wireline
  • The business customers: jewel in the crown of wireline
  • Conclusion

 

  • Figure 1: AT&T revenues by reporting segment, 2012 and 2014
  • Figure 2: Revenue growth by reporting segment, 2-year CAGR
  • Figure 3: Wireline overall is barely growing…
  • Figure 4: It’s been a struggle for all fixed operators to retain customers – except high-speed cablecos Comcast and Charter
  • Figure 5: AT&T is 5th for ARPU, by a distance
  • Figure 6: AT&T’s consumer wireline ARPU is growing, but it is only just enough to avoid falling further behind
  • Figure 7: U-Verse content sales may have peaked
  • Figure 8: For the most important speed band, the cable option is a better deal
  • Figure 9: Revenue – only cablecos left alive…
  • Figure 10: Broadband “drives” bundles…
  • Figure 11: …or do bundles drive broadband?

Triple-Play in the USA: Infrastructure Pays Off

Introduction

In this note, we compare the recent performance of three US fixed operators who have adopted contrasting strategies and technology choices, AT&T, Verizon, and Comcast. We specifically focus on their NGA (Next-Generation Access) triple-play products, for the excellent reason that they themselves focus on these to the extent of increasingly abandoning the subscriber base outside their footprints. We characterise these strategies, attempt to estimate typical subscriber bundles, discuss their future options, and review the situation in the light of a “Deep Value” framework.

A Case Study in Deep Value: The Lessons from Apple and Samsung

Deep value strategies concentrate on developing assets that will be difficult for any plausible competitor to replicate, in as many layers of the value chain as possible. A current example is the way Apple and Samsung – rather than Nokia, HTC, or even Google – came to dominate the smartphone market.

It is now well known that Apple, despite its image as a design-focused company whose products are put together by outsourcers, has invested heavily in manufacturing throughout the iOS era. Although the first-generation iPhone was largely assembled from other vendors’ proprietary parts, in many ways it should be considered a large-scale pilot project. Starting with the iPhone 3GS, the proportion of Apple’s own content in the devices rose sharply, thanks to the acquisition of PA Semi, but also to heavy investment in the supply chain.

Not only did Apple design and pilot-produce many of the components it wanted, it bought them from suppliers in advance to lock up the supply. It also bought the machine tools the suppliers would need, often long in advance, for the same reason. But this wasn’t just a tactical effort to deny componentry to its competitors. It was also a strategic effort to create manufacturing capacity.

In pre-paying for large quantities of components, Apple provides its suppliers with the capital they need to build new facilities. In pre-paying for the machine tools that will go in them, they finance the machine tool manufacturers and enjoy a say in their development plans, thus ensuring the availability of the right machinery. They even invent tools themselves and then get them manufactured for the future use of their suppliers.

Samsung is of course both Apple’s biggest competitor and its biggest supplier. It combines these roles precisely because it is a huge manufacturer of electronic components. Concentrating on its manufacturing supply chain both enables it to produce excellent hardware, and also to hedge the success or failure of the devices by selling componentry to the competition. As with Apple, doing this is very expensive and demands skills that are both in short supply, and sometimes also hard to define. Much of the deep value embedded in Apple and Samsung’s supply chains will be the tacit knowledge gained from learning by doing that is now concentrated in their people.

The key insight for both companies is that industrial and user-experience design is highly replicable, and patent protection is relatively weak. The same is true of software. Apple had a deeply traumatic experience with the famous Look and Feel lawsuit against Microsoft, and some people have suggested that the supply-chain strategy was deliberately intended to prevent something similar happening again.

Certainly, the shift to this strategy coincides with the launch of Android, which Steve Jobs at least perceived as a “stolen product”. Arguably, Jobs repeated Apple’s response to Microsoft Windows, suing everyone in sight, with about as much success, whereas Tim Cook in his role as the hardware engineering and then supply-chain chief adopted a new strategy, developing an industrial capability that would be very hard to replicate, by design.

Three Operators, Three Strategies

AT&T

The biggest issue any fixed operator has faced since the great challenges of privatisation, divestment, and deregulation in the 1980s is that of managing the transition from a business that basically provides voice on a copper access network to one that basically provides Internet service on a co-ax, fibre, or possibly wireless access network. This, at least, has been clear for many years.

AT&T is the original telco – at least, AT&T likes to be seen that way, as shown by their decision to reclaim the iconic NYSE ticker symbol “T”. That obscures, however, how much has changed since the divestment and the extremely expensive process of mergers and acquisitions that patched the current version of the company together. The bit examined here is the AT&T Home Solutions division, which owns the fixed-line ex-incumbent business, also known as the merged BellSouth and SBC businesses.

AT&T, like all the world’s incumbents, deployed ADSL at the turn of the 2000s, thus getting into the ISP business. Unlike most world incumbents, in 2005 it got a huge regulatory boost in the form of the Martin FCC’s Comcast decision, which declared that broadband Internet service was not a telecommunications service for regulatory purposes. This permitted US fixed operators to take back the Internet business they had been losing to independent ISPs. As such, they were able to cope with the transition while concentrating on the big-glamour areas of M&A and wireless.

As the 2000s advanced, it became obvious that AT&T needed to look at the next move beyond DSL service. The option taken was what became U-Verse, a triple-play product which consists of:

  • Either ADSL, ADSL2+, or VDSL, depending on copper run length and line quality
  • Plus IPTV
  • And traditional telephony carried over IP.

This represents a minimal approach to the transition – the network upgrade requires new equipment in the local exchanges, or Central Offices in US terms, and in street cabinets, but it does not require the replacement of the access link, nor any trenching.

This minimisation of capital investment is especially important, as it was also decided that U-Verse would not deploy into areas where the copper might need investment to carry it. These networks would eventually, it was hoped, be either sold or closed and replaced by wireless service. U-Verse was therefore, for AT&T, in part a means of disposing of regulatory requirements.

It was also important that the system closely coupled the regulated domain of voice with the unregulated (or at least only potentially regulated) domain of Internet service, and the differently regulated domain of content. In many ways, U-Verse can be seen as a content-first strategy: it is TV that was expected to be the primary replacement for the dwindling fixed voice revenues. Figure 1 shows the importance of content to AT&T vividly.

Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast

Source: Telco 2.0 Transformation Index

This sounds like one of the telecoms-as-media strategies of the late 1990s. However, it should be clearly distinguished from, say, BT’s drive to acquire exclusive sports content and to build up a brand identity as a “channel”. U-Verse does not market itself as a “TV channel” and does not buy exclusive content – rather, it is a channel in the literal sense, a distributor through which TV is sold. We will see why in the next section.

The US TV Market

It is well worth remembering that TV is a deeply national industry. Steve Jobs famously described it as “balkanised” and as a result didn’t want to take part. Most metrics vary dramatically across national borders, as do qualitative observations of structure. (Some countries have a big public sector broadcaster, like the BBC or indeed Al-Jazeera, to give a basic example.) Countries with low pay-TV penetration can be seen as ones that offer greater opportunities, it being usually easier to expand the customer base than to win share from the competition (a “blue ocean” versus a “red sea” strategy).

However, it is also true that pay-TV in general is an easier sell in a market where most TV viewers already pay for TV. It is very hard to convince people to pay for a product they can obtain free.

In the US, there is a long-standing culture of pay-TV, originally with cable operators and more recently with satellite (DISH and DirecTV), IPTV or telco-delivered TV (AT&T U-Verse and Verizon FiOS), and subscription OTT (Netflix and Hulu). It is also a market characterised by heavy TV usage (the average household has 2.8 TVs). Of the 114.2 million homes (96.7% of all homes) receiving TV, according to Nielsen, some 97 million receive pay-TV via cable, satellite, or IPTV – a penetration rate of 85%. This is the largest and richest pay-TV market in the world.

In this sense, it ought to be a good prospect for TV in general, with the caveat that a “Sky Sports” or “BT Sport” strategy based on content exclusive to a distributor is unlikely to work. This is because typically, US TV content is sold relatively openly in the wholesale market, and in many cases, there are regulatory requirements that it must be provided to any distributor (TV affiliate, cable operator, or telco) that asks for it, and even that distributors must carry certain channels.

Rightsholders have backed a strategy based on distribution over one based on exclusivity, on the principle that the customer should be given as many opportunities as possible to buy the content. This also serves the interests of advertisers, who by definition want access to as many consumers as possible. Hollywood has always aimed to open new releases on as many cinema screens as possible, and it is the movie industry’s skills, traditions, and prejudices that shaped this market.

As a result, it is relatively easy for distributors to acquire content, but difficult for them to generate differentiation by monopolising exclusive content. In this model, differentiation tends to accrue to rightsholders, not distributors. For example, although HBO maintains the status of being a premium provider of content, consumers can buy it from any of AT&T, Verizon, Comcast, any other cable operator, satellite, or direct from HBO via an OTT option.

However, pay-TV penetration is high enough that any new entrant (such as the two telcos) is committed to winning share from other providers, the hard way. It is worth pointing out that the US satellite operators DISH and DirecTV concentrated on rural customers who aren’t served by the cable MSOs. At the time, their TV needs weren’t served by the telcos either. As such, they were essentially greenfield deployments, the first pay-TV propositions in their markets.

The biggest change in US TV in recent times has been the emergence of major new distributors, the two RBOCs and a range of Web-based over-the-top independents. Figure 2 summarises the situation going into 2013.

Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale

Source: Telco 2.0 Transformation Index

The two biggest classes of distributors saw either a marginal loss of subscribers (the cablecos) or a marginal gain (satellite). The two groups of (relatively) new entrants, as you’d expect, saw much more growth. However, the OTT players are both bigger and much faster growing than the two telco players. It is worth pointing out that this mostly represents additional TV consumption, typically, people who already buy pay-TV adding a Netflix subscription. “Cord cutting” – replacing a primary TV subscription entirely – remains rare. In some ways, U-Verse can be seen as an effort to do something similar, upselling content to existing subscribers.

Competing for the Whole Bundle – Comcast and the Cable Industry

So how is this option doing? The following chart, Figure 3, shows that in terms of overall service ARPU, AT&T’s fixed strategy is delivering inferior results to those of its main competitors.

Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up

Source: Telco 2.0 Transformation Index

The interesting point here is that Time Warner Cable is doing less well than some of its cable industry peers. Comcast, the biggest, claims a $159 monthly ARPU for triple-play customers, and it probably has a higher density of triple-players than the telcos. More representatively, it also quotes a figure of $134 monthly average revenue per customer relationship, including single- and double-play customers; we have used this figure throughout this note. TWC, in general, is more content-focused and less broadband-focused than Comcast, having taken much longer to roll out DOCSIS 3.0. But is that important? After all, aren’t cable operators all about TV? Figure 4 shows clearly that broadband and voice are now just as important to cable operators as they are to telcos. The distinction is increasingly just a historical quirk.

Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators

Source: NCTA data, STL Partners

As we have seen, TV in the USA is not a differentiator because everyone’s got it. Further, it’s a product that doesn’t bring differentiation but does bring costs, as the rightsholders exact their share of the selling price. Broadband and voice are different – they are, in a sense, products the operator makes in-house. Most have to buy the tools (except Free.fr which has developed its own), but in any case the operator has to do that to carry the TV.

The differential growth rates in Figure 4 represent a substantial change in the ISP industry. Traditionally, the Internet engineering community tended to look down on cable operators as glorified TV distribution systems. This is no longer the case.

In the late 2000s, cable operators concentrated on improving their speeds and increasing their capacity. They also pressed their vendors and standardisation forums to practice continuous improvement, creating a regular upgrade cycle for DOCSIS firmware and silicon that lets them stay one (or more) jumps ahead of the DSL industry. Some of them also invested in their core IP networking and in providing a deeper and richer variety of connectivity products for SMB, enterprise, and wholesale customers.

Comcast is the classic example of this. It is a major supplier of mobile backhaul, high-speed Internet service (and also VoIP) for small businesses, and a major actor in the Internet peering ecosystem. An important metric of this change is that since 2009, it has transitioned from being a downlink-heavy eyeball network to being a balanced peer that serves about as much traffic outbound as it receives inbound.

The key insight here is that, especially in an environment like the US where xDSL unbundling isn’t available, if you win a customer for broadband, you generally also get the whole bundle. TV is a valuable bonus, but it’s not differentiating enough to win the whole of the subscriber’s fixed telecoms spend – or to retain it, in the presence of competitors with their own infrastructure. It’s also of relatively little interest to business customers, who tend to be high-value customers.

 

  • Executive Summary
  • Introduction
  • A Case Study in Deep Value: The Lessons from Apple and Samsung
  • Three Operators, Three Strategies
  • AT&T
  • The US TV Market
  • Competing for the Whole Bundle – Comcast and the Cable Industry
  • Competing for the Whole Bundle II: Verizon
  • Scoring the three strategies – who’s winning the whole bundles?
  • SMBs and the role of voice
  • Looking ahead
  • Planning for a Future: What’s Up Cable’s Sleeve?
  • Conclusions

 

  • Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast
  • Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale
  • Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up
  • Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators
  • Figure 5: Comcast has the best pricing per megabit at typical service levels
  • Figure 6: Verizon is ahead, but only marginally, on uplink pricing per megabit
  • Figure 7: FCC data shows that it’s the cablecos, and FiOS, who under-promise and over-deliver when it comes to broadband
  • Figure 8: Speed sells at Verizon
  • Figure 9: Comcast and Verizon at parity on price per megabit
  • Figure 10: Typical bundles for three operators. Verizon FiOS leads the way
  • Figure 12: The impact of learning by doing on FTTH deployment costs during the peak roll-out phase

LTE: APAC and US ‘Leading The Experience’

Summary: LTE is gaining traction in Asia Pacific and the US, despite challenges with spectrum, voice, and handsets. In South Korea, for example, penetration is expected to exceed 50% within 18 months. Our report on the lessons learned at the 2012 NGMN conference. (July 2012, Executive Briefing Service, Future of the Networks Stream).


Below is an extract from this 14 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks here. Non-members can subscribe here and for this and other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

We will be looking further at the role of LTE as an element of the strategic transformation of the telco industry at the invitation only Executive Brainstorms in Dubai (November 6-7, 2012), Singapore (4-5 December, 2012), Silicon Valley (19-20 March 2013), and London (23-24 April, 2013). Email contact@stlpartners.com or call +44 (0) 207 243 5003 to find out more.


Taking the pulse of LTE

Introduction – NGMN 2012

In June, Telco 2.0 attended the main annual conference of NGMN in San Francisco. NGMN is the “Next Generation Mobile Network” alliance, the industry group tasked with defining the requirements for 4G networks and beyond. (It is then up to 3GPP and – historically at least – other standards bodies to define the actual technologies which meet those requirements.) Set up in 2006, it evaluated a number of candidate technologies and eventually settled on LTE as its preferred “next-gen” technology, after a brief flirtation with WiMAX.

The conference was an interesting mix of American and Asian companies, operators (with quite a CTO-heavy representation), major vendors and some niche technology specialists. Coincidentally, the event also took place at the same time as Apple’s flagship annual developer conference at the Moscone Center across the road.

Although it was primarily about current LTE networks, quite a lot of the features planned for the next stage, “LTE-Advanced”, were discussed too, as well as updates on the roles of HSPA+ and WiFi. Some of the material was outside Telco 2.0’s normal beat (for example, the innards of base station antennas), but there were also quite a lot of references to evolving broadband business models, APIs and the broader Internet value chain.

Key Take-Outs

In some countries, LTE adoption is happening very quickly – in fact, faster than expected. This is impressive, and a testament to the NGMN process and 3GPP getting the basic radio technology standards right. However, rollout and uptake are very patchy, especially outside the US, Korea and Japan. There are still problems around the fragmentation of suitable spectrum bands, expensive devices, supporting IT systems and the thorny issue of how to deal with voice. In addition, many operators’ capex budgets are being constrained by macroeconomic uncertainty. What also seems true is that LTE has not (yet) resulted in any substantive new telco business models, although there is clearly a lot of work behind the scenes on APIs and new pricing and data-bundling approaches.

We are also impressed by the continued focus of the NGMN itself on further evolution of 4G+ networks, in resolving the outstanding technical issues (e.g. helping to drive towards multiband-capable devices, working on mobilised versions of adaptive video streaming), continuing the evolution to ever-better network speeds and efficiencies, and helping to minimise operators’ capex and opex through programmes such as SON (self-optimising networks).

NGMN: the engine of broadband wireless innovation

LTE adoption: accelerating – but patchy

One key conclusion from the event was the surprisingly rapid switch-over of users from 3G to 4G where it is available, especially with a decent range of handsets and aggressive marketing. In particular, US, South Korean and Japanese operators are leading the way. The US probably has the largest absolute number of subscribers – almost certainly more than 10m by the end of Q2 2012 (Verizon had 8m by end-Q1, with MetroPCS and AT&T also having launched). But in terms of penetration, South Korea looks set to be the prize-winner. SK Telecom already has more than 3m subscribers, and is expecting 6m by the end of the year. More meaningfully, the various Korean presenters at the event seemed to agree that LTE penetration could reach 50% of mobile users by the end of next year. NTT DoCoMo’s LTE service (branded Xi) is also accelerating rapidly, recently crossing the 3m user threshold, with a broad range of LTE smartphones coming out this summer in an attempt to take the wind out of Softbank’s iPhone hegemony.

Figure 1: South Korea will have 30m LTE subs at end-2013, vs 49m population

Source: Samsung Electronics

This growth is not really being mirrored elsewhere, however. At the end of Q1, TeliaSonera had just 100k subscribers (mostly USB dongles) across a 7-country footprint of LTE networks, despite being the first to launch at the end of 2009. This probably reflects the fact that smartphones suitable for European frequency bands (and supporting voice) have been slow in arriving, something that should change rapidly from now onwards. It is also notable that TeliaSonera has attempted to position LTE as a premium, higher-priced option compared to 3G, while operators such as Verizon have really just used 4G as a marketing ploy, offering faster speeds as a counter to AT&T – and also perhaps to give Android devices an edge against the more expensive-to-subsidise iPhone.

Once European and Chinese markets really start to market LTE smartphones in anger (which will likely be around the 2012 Xmas season), we should see another ramp-up in demand – although that will partly be determined by whether the next iPhone (likely due around September-October) finally supports LTE or not.

To read the note in full, including the following sections detailing support for the analysis…

  • New business models, or more of the same?
  • Are the new models working?
  • Wholesale LTE
  • Other hurdles for LTE
  • Spectrum fragmentation blues
  • Handsets and spectrum
  • Roaming and spectrum
  • But what about voice and messaging?
  • HetNets & WiFi – part of “Next-gen networks” or not?
  • LTE Apps?
  • Conclusions

…and the following figures…

  • Figure 1: South Korea will have 30m LTE subs at end-2013, vs 49m population
  • Figure 2 – Juniper: exposing network APIs to apps
  • Figure 3 – Yota is wholesaling LTE capacity, while acting as a 2G/3G MVNO
  • Figure 4 – A compelling argument to replace old public-safety radios with LTE
  • Figure 5 – NTT DoCoMo made a colourful argument about LTE spectrum fragmentation

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 14 page report in PDF format here. Non-members, please subscribe here. For this or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

‘Under-The-Floor’ (UTF) Players: threat or opportunity?

Introduction

The ‘smart pipe’ imperative

In some quarters of the telecoms industry, the received wisdom is that the network itself is merely an undifferentiated “pipe”, providing commodity connectivity, especially for data services. The value, many assert, is in providing higher-tier services, content and applications, either to end-users, or as value-added B2B services to other parties. The Telco 2.0 view is subtly different. We maintain that:

  1. Increasingly, valuable services will be provided by third parties, but operators can provide a few end-user services themselves. They will, for example, continue to offer voice and messaging services for the foreseeable future.
  2. Operators still have an opportunity to offer enabling services to ‘upstream’ service providers, such as personalisation and targeting (of marketing and services) via use of their customer data, payments, identity and authentication, and customer care.
  3. Even if operators fail (or choose not to pursue) options 1 and 2 above, the network must be ‘smart’ and all operators will pursue at least a ‘smart network’ or ‘Happy Pipe’ strategy. This will enable operators to achieve three things:
  • To ensure that data is transported efficiently so that capital and operating costs are minimised and the Internet and other networks remain cheap methods of distribution.
  • To improve user experience by matching the performance of the network to the nature of the application or service being used – or indeed vice versa, adapting the application to the actual constraints of the network. ‘Best efforts’ is fine for asynchronous communication, such as email or text, but unacceptable for traditional voice telephony. A video call or streamed movie could exploit guaranteed bandwidth if possible / available, or else they could self-optimise to conditions of network congestion or poor coverage, if well-understood. Other services have different criteria – for example, real-time gaming demands ultra-low latency, while corporate applications may demand the most secure and reliable path through the network.
  • To charge appropriately for access to and/or use of the network. It is becoming increasingly clear that the Telco 1.0 business model – charging the end-user per minute or per megabyte – is under pressure as new business models for the distribution of content and transportation of data are developed. Operators will need to be capable of charging different players – end-users, service providers, third parties (such as advertisers) – on a real-time basis for provision of broadband and maybe various types or tiers of quality of service (QoS). They may also need to offer SLAs (service level agreements), monitor and report actual “as-experienced” quality metrics or expose information about network congestion and availability. (A toy illustration of such class-based treatment and charging follows this list.)
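As promised above, here is a toy illustration of class-based treatment and charging. The classes, treatments and billing assignments are entirely hypothetical; a real policy and charging system is far richer and operates in real time:

```python
# Hypothetical mapping from application class to network treatment and
# charged party. Illustrative only; the classes and values are invented.

POLICY = {
    # class:        (latency target, bandwidth,               charged party)
    "voice":        ("low",          "guaranteed",            "end-user"),
    "video_stream": ("medium",       "guaranteed if offered", "service provider"),
    "gaming":       ("ultra-low",    "best-effort",           "end-user"),
    "messaging":    ("relaxed",      "best-effort",           "end-user"),
    "corporate":    ("medium",       "guaranteed (SLA)",      "enterprise"),
}

for app, (latency, bandwidth, payer) in POLICY.items():
    print(f"{app:12s} latency={latency:9s} "
          f"bandwidth={bandwidth:21s} billed to {payer}")
```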

Under the floor players threaten control (and smartness)

Either through deliberate actions such as outsourcing, or through external agency (Government, greenfield competition, etc.), we see the network part of the telco universe suffering from a creeping loss of control and ownership. There is a steady move towards outsourced networks, as they are shared, or built around the concepts of open access and wholesale. While this would be fine if the telcos themselves remained in control of this trend (we see significant opportunities in wholesale and infrastructure services), in many cases the opposite is occurring. Telcos are losing control, and in our view losing influence, over their core asset – the network. They are worrying so much about competing with so-called OTT providers that they are missing the threat from below.

At the point at which many operators, at least in Europe and North America, are seeing the services opportunity ebb away, and are becoming ever more dependent on new models of data connectivity provision, they are potentially cutting off (or being cut off from) one of their real differentiators.

Given the uncertainties around both fixed and mobile broadband business models, it is sensible for operators to retain as many business model options as possible. Operators are battling with significant commercial and technical questions such as:

  • Can upstream monetisation really work?
  • Will regulators permit priority services under Net Neutrality regulations?
  • What forms of network policy and traffic management are practical, realistic and responsive?

Answers to these and other questions remain opaque. However, it is clear that many of the potential future business models will require networks to be physically or logically re-engineered, and flexible back-office functions, like billing and OSS, to be closely integrated with the network.

Outsourcing networks to third-party vendors, particularly when such a network is shared with other operators, is dangerous in these circumstances. Partners that today agree on the principles for network-sharing may have very different strategic views and goals in two years’ time, especially given the unknown use-cases for new technologies like LTE.

This report considers all these issues and gives guidance to operators who may not have considered all the various ways in which network control is being eroded, from Government-run networks through to outsourcing services from the larger equipment providers.

Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators

Source: STL Partners

Industry structure is being reshaped

Over the last year, Telco 2.0 has updated its overall map of the telecom industry, to reflect ongoing dynamics seen in both fixed and mobile arenas. In our strategic research reports on Broadband Business Models, and the Roadmap for Telco 2.0 Operators, we have explored the emergence of various new “buckets” of opportunity, such as verticalised service offerings, two-sided opportunities and enhanced variants of traditional retail propositions.

In parallel to this, we’ve also looked again at some changes in the traditional wholesale and infrastructure layers of the telecoms industry. Historically, this has largely comprised basic capacity resale and some “behind the scenes” use of carrier-to-carrier services (roaming hubs, satellite / sub-oceanic transit etc).

Figure 2 – Telco 1.0 Wholesale & Infrastructure structure

Source: STL Partners

Contents

  • Revising & extending the industry map
  • ‘Network Infrastructure Services’ or UTF?
  • UTF market drivers
  • Implications of the growing trend in ‘under-the-floor’ network service providers
  • Networks must be smart and controlling them is smart too
  • No such thing as a dumb network
  • Controlling the network will remain a key competitive advantage
  • UTF enablers: LTE, WiFi & carrier ethernet
  • UTF players could reduce network flexibility and control for operators
  • The dangers of ceding control to third-parties
  • No single answer for all operators but ‘outsourcer beware’
  • Network outsourcing & the changing face of major vendors
  • Why become an under-the-floor player?
  • Categorising under-the-floor services
  • Pure under-the-floor: the outsourced network
  • Under-the-floor ‘lite’: bilateral or multilateral network-sharing
  • Selective under-the-floor: Commercial open-access/wholesale networks
  • Mandated under-the-floor: Government networks
  • Summary categorisation of under-the-floor services
  • Next steps for operators
  • Build scale and a more sophisticated partnership approach
  • Final thoughts
  • Index

 

  • Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators
  • Figure 2 – Telco 1.0 Wholesale & Infrastructure structure
  • Figure 3 – The battle over infrastructure services is intensifying
  • Figure 4 – Examples of network-sharing arrangements
  • Figure 5 – Examples of Government-run/influenced networks
  • Figure 6 – Four under-the-floor service categories
  • Figure 7: The need for operator collaboration & co-opetition strategies

Broadband 2.0: Mobile CDNs and video distribution

Summary: Content Delivery Networks (CDNs) are becoming familiar in the fixed broadband world as a means to improve the experience and reduce the costs of delivering bulky data like online video to end-users. Is there now a compelling need for their mobile equivalents, and if so, should operators partner with existing players or build / buy their own? (August 2011, Executive Briefing Service, Future of the Networks Stream).

Below is an extract from this 25 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can buy a Single User license for this report online here for £595 (+VAT) or subscribe here. For multiple user licenses, or to find out about interactive strategy workshops on this topic, please email contact@telco2.net or call +44 (0) 207 247 5003.


Introduction

As is widely documented, mobile networks are witnessing huge growth in the volumes of 3G/4G data traffic, primarily from laptops, smartphones and tablets. While Telco 2.0 is wary of some of the headline shock-statistics about forecast “exponential” growth, or “data tsunamis” driven by ravenous consumption of video applications, there is certainly a fast-growing appetite for use of mobile broadband.

That said, many of the actual problems of congestion today can be pinpointed either to a handful of busy cells at peak hour or, often, to the network’s inability to deal with the signalling load from chatty applications or “aggressive” devices, rather than to the “tonnage” of traffic. Another large trend in mobile data is the use of transient, individual-centric flows from specific apps or communications tools such as social networking and messaging.

But “tonnage” is not completely irrelevant. Despite the diversity, there is still an inexorable rise in the use of mobile devices for “big chunks” of data, especially the special class of software commonly known as “content” – typically popular/curated standalone video clips or programmes, or streamed music. Images (especially those in web pages) and application files such as software updates fit into a similar group – sizeable lumps of data downloaded by many individuals across the operator’s network.

This one-to-many nature of most types of bulk content highlights inefficiencies in the way mobile networks operate. The same data chunks are downloaded time and again by users, typically going all the way from the public Internet, through the operator’s core network, eventually to the end user. Everyone loses in this scenario – the content publisher needs huge servers to dish up each download individually. The operator has to deal with transport and backhaul load from repeatedly sending the same content across its network (and IP transit from shipping it in from outside, especially over international links). Finally, the user has to deal with all the unpredictability and performance compromises involved in accessing the traffic across multiple intervening points – and ends up paying extra to support the operator’s heavier cost base.

In the fixed broadband world, many content companies have availed themselves of a group of specialist intermediaries called CDNs (content delivery networks). These firms on-board large volumes of the most important content served across the Internet, before dropping it “locally” as near to the end user as possible – if possible, served up from cached (pre-saved) copies. Often, the CDN operating companies have struck deals with the end-user facing ISPs, which have often been keen to host their servers in-house, as they have been able to reduce their IP interconnection costs and deliver better user experience to their customers.

In the mobile industry, the use of CDNs is much less mature. Until relatively recently, the overall volumes of data didn’t really move the needle from the point of view of content firms, while operators’ radio-centric cost bases were also relatively immune from those issues as well. Optimising the “middle mile” for mobile data transport efficiency seemed far less of a concern than getting networks built out and handsets and apps perfected, or setting up policy and charging systems to parcel up broadband into tiered plans. Arguably, better-flowing data paths and video streams would only load the radio more heavily, just at a time when operators were having to compress video to limit congestion.

This is now changing significantly. With the rise in smartphone usage – and the expectations around tablets – Internet-based CDNs are pushing much more heavily to have their servers placed inside mobile networks. This is leading to a certain amount of introspection among the operators – do they really want to have Internet companies’ infrastructure inside their own networks, or could this be seen more as a Trojan Horse of some sort, simply accelerating the shift of content sales and delivery towards OTT-style models? Might it not be easier for operators to build internal CDN-type functions instead?

Some of the earlier approaches to video traffic management – especially so-called “optimisation” undertaken without the content companies’ permission or involvement – are becoming trickier with new video formats and more scrutiny from a Net Neutrality standpoint. But CDNs by definition involve the publishers, so any necessary compression or other processing can potentially be done collaboratively, rather than “transparently” and without cooperation.

At the same time, many of the operators’ usual vendors see this transition point as a chance to differentiate their new IP core network offerings, typically combining CDN capability into their routing/switching platforms, often alongside the optimisation functions as well. In common with other recent innovations from network equipment suppliers, there is a dangled promise of Telco 2.0-style revenues that could be derived from “upstream” players. In this case the potential is a little more easily proved, since it would involve directly substituting for revenues that content companies already pay to Internet CDN players such as Akamai and Limelight. It also holds out the possibility of a two-sided, content-charging business model that sits comfortably within Net Neutrality rules – there are few complaints about existing CDNs except from ultra-purist Neutralists.

On the other hand, telco-owned CDNs have existed in the fixed broadband world for some time, with largely indifferent levels of success and adoption. There needs to be a very good reason for content companies to choose to deal with multiple national telcos, rather than simply take the easy route and choose a single global CDN provider.

So, the big question for telcos around CDNs at the moment is “should I build my own, or should I just permit Akamai and others to continue deploying servers into my network?” Linked to that question is what type of CDN operation an operator might choose to run in-house.

There are four main reasons why a mobile operator might want to build its own CDN:

  • To lower costs of network operation or upgrade, especially in radio network and backhaul, but also through the core and in IP transit.
  • To improve the user experience of video, web or applications, either in terms of data throughput or latency.
  • To derive incremental revenue from content or application providers.
  • For wider strategic or philosophical reasons about “keeping control over the content/apps value chain”.

This Analyst Note explores these issues in more detail, first giving some relevant contextual information on how CDNs work, especially in mobile.

What is a CDN?

The traditional model for Internet-based content access is straightforward – the user’s browser requests a piece of data (image, video, file or whatever) from a server, which then sends it back across the network, via a series of “hops” between different network nodes. The content typically crosses the boundaries between multiple service providers’ domains, before finally arriving at the user’s access provider’s network, flowing down over the fixed or mobile “last mile” to their device. In a mobile network, that also typically involves transiting the operator’s core network first, which has a variety of infrastructure (network elements) to control and charge for it.

A Content Delivery Network (CDN) is a system for serving Internet content from servers which are located “closer” to the end user either physically, or in terms of the network topology (number of hops). This can result in faster response times, higher overall performance, and potentially lower costs to all concerned.

In most cases in the past, CDNs have been run by specialist third-party providers, such as Akamai and Limelight. This document also considers the role of telcos running their own “on-net” CDNs.

CDNs can be thought of as analogous to the distribution of bulky physical goods – it would be inefficient for a manufacturer to ship all products to customers individually from a single huge central warehouse. Instead, it will set up regional logistics centres that can be more responsive – and, if appropriate, tailor the products or packaging to the needs of specific local markets.

As an example, there might be a million requests for a particular video stream from the BBC. Without using a CDN, the BBC would have to provide sufficient server capacity and bandwidth to handle them all. The company’s immediate downstream ISPs would have to carry this traffic to the Internet backbone, the backbone itself has to carry it, and finally the requesters’ ISPs’ access networks have to deliver it to the end-points. From a media-industry viewpoint, the source network (in this case the BBC) is generally called the “content network” or “hosting network”; the destination is termed an “eyeball network”.

In a CDN scenario, all the data for the video stream has to be transferred across the Internet just once for each participating network, when it is deployed to the downstream CDN servers and stored. After this point, it is only carried over the user-facing eyeball networks, not any others via the public Internet. This also means that the CDN servers may be located strategically within the eyeball networks, in order to use its resources more efficiently. For example, the eyeball network could place the CDN server on the downstream side of its most expensive link, so as to avoid carrying the video over it multiple times. In a mobile context, CDN servers could be used to avoid pushing large volumes of data through expensive core-network nodes repeatedly.
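To make the economics concrete, here is a minimal sketch (in Python; the names are our own illustrative inventions, not any vendor’s implementation) of an edge cache that fetches each item over the costly upstream link at most once, however many users request it:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy CDN edge cache: fetches each item over the costly upstream
    link at most once while it stays cached (LRU eviction)."""

    def __init__(self, origin_fetch, capacity=1000):
        self.origin_fetch = origin_fetch   # callable simulating the upstream link
        self.capacity = capacity
        self.store = OrderedDict()         # url -> content, in LRU order
        self.upstream_requests = 0

    def get(self, url):
        if url in self.store:
            self.store.move_to_end(url)    # mark as recently used
            return self.store[url]         # served locally: no upstream traffic
        self.upstream_requests += 1        # cache miss: one costly fetch
        content = self.origin_fetch(url)
        self.store[url] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False) # evict the least recently used item
        return content

# A million requests for one popular video cross the upstream link only once:
cache = EdgeCache(origin_fetch=lambda url: f"<bytes of {url}>")
for _ in range(1_000_000):
    cache.get("bbc/video/stream-1")
print(cache.upstream_requests)  # -> 1
```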

When the video or other content is loaded into the CDN, other optimisations such as compression or transcoding into other formats can be applied if desired. There may also be various treatments relating to new forms of delivery such as HTTP streaming, where the video is broken up into “chunks” encoded at several different sizes/resolutions. Collectively, these upfront processes are called “ingestion”.
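A minimal sketch of that ingestion step – purely illustrative, with bitrates and chunk length as our own assumptions rather than figures from any real deployment – might look like this:

```python
def ingest(duration_s, bitrates_kbps=(400, 1200, 3500), chunk_s=10):
    """Toy ingestion: split one video into fixed-length chunks,
    pre-encoded at several bitrates for HTTP adaptive streaming."""
    manifest = []
    n_chunks = -(-duration_s // chunk_s)   # ceiling division
    for kbps in bitrates_kbps:
        for i in range(n_chunks):
            manifest.append({
                "chunk": i,
                "start_s": i * chunk_s,
                "bitrate_kbps": kbps,
                "approx_bytes": kbps * 1000 // 8 * chunk_s,
            })
    return manifest

# A 60-second clip yields 6 chunks at each of the 3 bitrates:
entries = ingest(60)
print(len(entries))  # -> 18
```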

Figure 1 – Content delivery with and without a CDN


Source: STL Partners / Telco 2.0

Value-added CDN services

It is important to recognise that the fixed-centric CDN business has increased massively in richness and competition over time. Although some of the players have very clever architectures and IPR in the forms of their algorithms and software techniques, the flexibility of modern IP networks has tended to erode away some of the early advantages and margins. Shipping large volumes of content is now starting to become secondary to the provision of associated value-added functions and capabilities around that data. Additional services include:

  • Analytics and reporting
  • Advert insertion
  • Content ingestion and management
  • Application acceleration
  • Website security management
  • Software delivery
  • Consulting and professional services

It is no coincidence that the market leader, Akamai, now refers to itself as “provider of cloud optimisation services” in its financial statements, rather than a CDN, with its business being driven by “trends in cloud computing, Internet security, mobile connectivity, and the proliferation of online video”. In particular, it has started refocusing away from dealing with “video tonnage”, and towards application acceleration – for example, speeding up the load times of e-commerce sites, which has a measurable impact on abandonment of purchasing visits. Akamai’s total revenues in 2010 were around $1bn, less than half of which came from “media and entertainment” – the traditional “content industries”. Its H1 2011 revenues were relatively disappointing, with growth coming from non-traditional markets such as enterprise and high-tech (eg software update delivery) rather than media.

This is a critically important consideration for operators that are looking to CDNs to provide them with sizeable uplifts in revenue from upstream customers. Telcos – especially in mobile – will need to invest in various additional capabilities as well as the “headline” video traffic management aspects of the system. They will need to optimise for network latency as well as throughput, for example – which will probably not have the cost-saving impacts expected from managing “data tonnage” more effectively.

Although in theory telcos’ other assets should help – for example mapping download analytics to more generalised customer data – this is likely to involve extra complexity with the IT side of the business. There will also be additional efforts around sales and marketing that go significantly beyond most mobile operators’ normal footprint into B2B business areas. There is also a risk that an analysis of bottlenecks for application delivery / acceleration ends up simply pointing the finger of blame at the network’s inadequacies in terms of coverage. Improving delivery speed, cost or latency is only valuable to an upstream customer if there is a reasonable likelihood of the end-user actually having connectivity in the first place.

Figure 2: Value-added CDN capabilities


Source: Alcatel-Lucent

Application acceleration

An increasingly important aspect of CDNs is their move beyond content/media distribution into a much wider area of “acceleration” and “cloud enablement”. As well as delivering large pieces of data efficiently (e.g. video), there is arguably more tangible value in delivering small pieces of data fast.

There are various manifestations of this, but a couple of good examples illustrate the general principles:

  • Many web transactions are abandoned because websites (or apps) seem “slow”. Few people would trust an airline’s e-commerce site, or a bank’s online interface, if they’ve had to wait impatiently for images and page elements to load, perhaps repeatedly hitting “refresh” on their browsers. Abandoned transactions can be directly linked to slow or unreliable response times – typically a function of congestion either at the server or various mid-way points in the connection. CDN-style hosting can accelerate the service measurably, leading to increased customer satisfaction and lower levels of abandonment.
  • Enterprise adoption of cloud computing is becoming exceptionally important, with both cost savings and performance enhancements promised by vendors. Sometimes, such platforms will involve hybrid clouds – a mixture of private (internal) and public (Internet) resources and connectivity. Where corporates are reliant on public Internet connectivity, they may well want to ensure as fast and reliable a service as possible, especially in terms of round-trip latency. Many IT applications are designed to be run on ultra-fast company private networks, with a lot of “hand-shaking” between the user’s PC and the server. This process is very latency-dependent, and especially as companies also mobilise their applications, the additional overhead time in cellular networks may otherwise cause significant problems.

Hosting applications at CDN-type cloud acceleration providers achieves much the same effect as for video – they can bring the application “closer”, with fewer hops between the origin server and the consumer. Additionally, the CDN is well-placed to offer additional value-adds such as firewalling and protection against denial-of-service attacks.
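The latency arithmetic behind “acceleration” is worth making explicit. The toy model below (illustrative figures only, not measurements) shows why a chatty application designed for a 2ms LAN degrades badly over a 200ms cellular path, and how an edge node that cuts the round-trip time recovers much of the loss:

```python
def transaction_time_ms(round_trips, rtt_ms, server_time_ms=50):
    """Total time for a 'chatty' transaction dominated by round trips."""
    return round_trips * rtt_ms + server_time_ms

# An app designed for a fast LAN (40 round trips) moved onto cellular:
for label, rtt in [("LAN", 2), ("distant server, 3G", 200),
                   ("CDN edge node, 3G", 60)]:
    print(label, transaction_time_ms(40, rtt), "ms")
# LAN                  130 ms
# distant server, 3G  8050 ms
# CDN edge node, 3G   2450 ms
```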

To read the 25 page note in full, including the following additional content…

  • How do CDNs fit with mobile networks?
  • Internet CDNs vs. operator CDNs
  • Why use an operator CDN?
  • Should delivery mean delivery?
  • Lessons from fixed operator CDNs
  • Mobile video: CDNs, offload & optimisation
  • CDNs, optimisation, proxies and DPI
  • The role of OVPs
  • Implementation and planning issues
  • Conclusion & recommendations

… and the following additional charts…

  • Figure 3 – Potential locations for CDN caches and nodes
  • Figure 4 – Distributed on-net CDNs can offer significant data transport savings
  • Figure 5 – The role of OVPs for different types of CDN player
  • Figure 6 – Summary of Risk / Benefits of Centralised vs. Distributed and ‘Off Net’ vs. ‘On-Net’ CDN Strategies

……Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 25 page report in PDF format here. Non-Members, please see here for how to subscribe, here to buy a single user license for £595 (+VAT), or for multi-user licenses and any other enquiries please email contact@telco2.net or call +44 (0) 207 247 5003.

Organisations and products referenced: 3GPP, Acision, Akamai, Alcatel-Lucent, Allot, Amazon Cloudfront, Apple’s Time Capsule, BBC, BrightCove, BT, Bytemobile, Cisco, Ericsson, Flash Networks, Huawei, iCloud, ISPs, iTunes, Juniper, Limelight, Netflix, Nokia Siemens Networks, Ooyala, OpenWave, Ortiva, Skype, smartphone, Stoke, tablets, TiVo, Vantrix, Velocix, Wholesale Content Connect, Yospace, YouTube.

Technologies and industry terms referenced: acceleration, advertising, APIs, backhaul, caching, CDN, cloud, distributed caches, DNS, Evolved Packet Core, eyeball network, femtocell, fixed broadband, GGSNs, HLS, HTTP streaming, ingestion, IP network, IPR, laptops, LIPA, LTE, macro-CDN, micro-CDN, middle mile, mobile, Net Neutrality, offload, optimisation, OTT, OVP, peering proxy, QoE, QoS, RNCs, SIPTO, video, video traffic management, WiFi, wireless.

Net Neutrality 2.0: Don’t Block the Pipe, Lubricate the Market

Summary: ‘Net Neutrality’ has gathered increasing momentum as a market issue, with AT&T, Verizon, major European telcos, Google and others all making their points in advance of the Ofcom, EC, and FCC consultation processes. This is Telco 2.0’s input, analysis and recommendations. (September 2010, Foundation 2.0, Executive Briefing Service, Future of the Networks Stream).

NB A PDF copy of this 17 page document can be downloaded in full here. We’ll also be discussing this at the Telco 2.0 Executive Brainstorms. Email contact@telco2.net or call +44 (0) 207 247 5003 to find out more.

Overview

In this paper, Telco 2.0 recommends that the appropriate general response to concerns over ‘Net Neutrality’ is to make it easier for customers to understand what they should expect, and what they actually get, from their broadband service, rather than impose strict technical rules or regulation about how ISPs should manage their networks.

In this article we describe in detail why, and provide recommendations for how.

NB We would like to express our thanks to Dean Bubley of Disruptive Analysis, who has worked closely with our team to develop this paper.

Analysis of Net Neutrality Issues

‘Net Neutrality’ = Self-Interest (Poorly Disguised)

‘Net Neutrality’ is an issue manufactured and amplified by lobbyists on behalf of competing commercial interests. Much of the debate on the issue has become distracting and artificial, as the ‘noise’ of self-interested opinion has become much louder than the ‘signal’ of potential resolutions.

The libertarian ideal that the title implies is a clever piece of PR, manipulating ideas of freedom of access to information and freedom from interference. For the most part, this is far from the reality of the motives of the players engaged in the debate.

Additionally, the ‘public’ net neutrality debate is being driven by tech-savvy early adopters whose views and ‘use cases’ are not statistically representative of the overall internet population.

This collection of factors has created a strange landscape of idealist and specialised viewpoints congregating around the industry lobbyists’ various positions.

However, behind the scenes, the big commercial players are becoming increasingly tense, and we have recently experienced a marked reluctance from senior telco executives to comment on the issue in public.

Our position is that, beyond the hyperbole, the fair and proper management of contention between Internet Applications and ‘Specialised Services’ is important in the interests of consumers and the potential creation of new business models.

What, exactly, is the ‘problem’ and for whom?

Rapidly increasing use of the Internet and Specialised Services, particularly bandwidth hungry applications like online video, is causing (or, at least, will in theory cause) increasing contention in parts of the network.

The primary concerns currently expressed by net neutrality activists are that some consumers will receive a service whose delivery has been covertly manipulated by an external party, in this case their ISP. Similarly, some application and service providers fear that their services are, or will consequently be, discriminated against by telcos.

Some telcos think that certain other large and bandwidth-hungry applications are receiving a ‘free ride’ on their networks, and their corporate owners consequently receiving the benefits of expensive network investments without contribution. As a consequence, ISPs argue that they should be entitled to unilaterally constrain certain types of applications unless application providers pay for the additional bandwidth.

It’s a Commercial Issue, not a Moral Issue

One of the areas of obfuscation in the ‘Net Neutrality’ debate is the confusion between two sets of issues in the debate: ‘moral and legal’ and ‘commercial’.

Moral & legal issues include matters such as ‘freedom of expression’ and the right to unfettered internet access, the treatment of pirated content, and censorship of extreme religious or pornographic materials. We regard these as matters for the law of the jurisdiction where the service is consumed or produced; they have in some places become entangled in the ‘Net Neutrality’ debate, but should not be its focus.

The commercial issue is whether operators should be regulated in how they prioritise traffic from one commercial application over another without the user’s knowledge.

What causes this problem?

Contention can arise at different points between the service or application and the user, for example:

  • Caused by bulk traffic from users and applications in the ‘core network’ beyond the local exchange, (akin to the general slowing of Internet applications in the evening in Europe due to greater local and U.S. usage at that time);
  • Between applications on a bandwidth restricted local access route (e.g. ADSL over a copper pair, mobile broadband).

As a service may originate from, and be delivered to, anywhere globally, the first kind of contention can only truly be managed if there is either a) an Internet-wide standard for prioritising different types of traffic, or b) a specific overlay network for that service which bypasses the internet to a certain ‘outer’ point in the network closer to the consumer, such as a local exchange. This latter class of service delivery may be accompanied by a connection between the exchange and the end-user that is not over the internet – and this is the case in most IPTV services.

To alleviate issues of contention, various ‘Traffic Management’ strategies are available to operators, as shown in the following diagram, with increasingly controversial types of intervention to the right.

Figure 1 – Ofcom’s Traffic Management Continuum


Source: Ofcom

Is It Really a Problem?

Operators already apply traffic management techniques. An example was given by 3UK’s Director of Network Strategy at the recent Broadband Stakeholder Group (BSG) event in London, who explained that at peak times in the busiest cells, 3 limits SS7 signalling and P2P traffic. These categories were selected because they are essentially ‘background’ applications that have little impact on the consumer’s experience, and it was important to keep latency down so that more interactive applications like web browsing functioned well. A major ‘use case’ for 3UK was identifying which cells needed investment.

In 3UK’s case, there was perhaps surprisingly more signalling traffic than there was P2P. Though this is a mobile peculiarity, it illustrates that assumptions about traffic management problems can often be wrong, and it is important that decisions are taken on the basis of data rather than prejudice.

While there are vociferous campaigners and powerful commercial interests at stake, it is fair to say that the streets are not full of angry consumers waving banners reading ‘Hands off my YouTube’ and knocking on the doors of telcos’ HQs. A quick and entirely non-representative survey of Telco 2.0’s non-technical relatives-of-choice revealed complete ignorance of, and lack of interest in, the subject. This does not necessarily mean that there is not, or could not be, a problem, and it is possible that consumers could unwittingly suffer. On balance, though, Telco 2.0 has not yet seen significant evidence of a market failure, and we believe that the mechanisms of the market are the best means of managing potential conflict.

A case of ‘Terminological Inexactitude’

We broadly agree with Alex Blowers of Ofcom, who said at the recent BSG conference that ‘80% of the net neutrality debate is in the definition’.

First, the term ‘Net Neutrality’ does not actually distinguish which services it refers to – does ‘Net’ mean ‘The Internet’, ‘The Network’, or something else? To most it is taken to mean ‘The Internet’, so what is ‘The Internet’? Despite the initial sense that the answer to this question seems completely obvious, a short conversation within or outside the industry will reveal an enormous range of definitions. The IT Director will give you a different answer from your non-technical relatives and friends.

These ambiguities have the straightforward consequence that the term ‘Net Neutrality’ can be used to mean whatever the user wants, and its use is therefore generally a guarantee of mindless circular arguments and confusion. In other words: perfect conditions for lobbyists with partial views.

For most people, ‘the internet’ is “everything I can get or do when my computer or phone is connected online”. A consumer with such a view probably has a broadband line and an internet service and is among those, in theory at least, most in need of protection from unscrupulous policy management that might favour one form of online traffic over another without their knowledge or control. It is their understanding and expectation of what they have bought against the reality of what they get that we see as the key in this matter.

In this paper, we discuss two classes of services that can be delivered via a broadband access line.

1. Access to ‘The Internet’ (note capitalisation), which means being able to see and interact with the full range of websites, applications and services that are legitimate and publicly available. We set out some guiding principles below on a tighter definition of what services described as ‘The Internet’ should deliver.

2. ‘Specialised Services’ are other services that use a broadband line, that often connect to a device other than a PC (e.g. IPTV via set-top boxes, smart meters, RIM’s Blackberry Exchange Server (BES)) or a service that may be connected to a PC but via a VPN, such as corporate video conferencing, Cloud or Enterprise VOIP solutions.

While ‘Specialised Services’ are not by our definition pure Internet services, they can also have an effect in certain circumstances on the provision of ‘The Internet’ to an end-user where they share parts of the connection that are in contention. Additionally, there can be contention between services on ‘The Internet’ from multiple users or applications connected via a common router.

Additionally, fixed and mobile communications present different contexts for the services, with different potential mechanisms for control and management. Mobile services have the particular difference that, other than signalling, there is no connection between device and the network when data services are not being used.

The Internet: ‘Appellation Controlee’?

One possible mechanism to improve consumer understanding and standards of marketing services is to introduce a framework for defining more tightly services sold as “Internet Access”.
In our view, services sold as ‘The Internet’ should:

  • Provide access to all legitimate online services using the ‘public’ internet;
  • Perform within certain bounds of service performance as marketed (e.g. speed, latency);
  • Be subject to the minimum necessary ‘traffic management’ by the ISP, which should only be permissible in specific instances, such as contention (e.g. peak hour use);
  • Aim to maintain consistent delivery of all services in line with reasonable customer expectation and best possible customer experience (as exemplified in a ‘code of best practice’);
  • Provide published and accessible performance measures against ‘best practice’ standards.

Where a customer has paid extra for a Specialised Service, e.g. IPTV, it is reasonable to give that service priority to pre-agreed limits while in use.

The point of defining such an experience would be to give consumers a reference point, or perhaps a ‘Kitemark’, to assure them of the nature of the service they are buying. In instances where the service sold is less than that defined, the service would need to be identified, e.g. a ‘Limited Internet Access Service’.

The Internet isn’t really ‘Neutral’

To understand the limitations and possible advantages of ‘traffic management’, and put this into context, it is worth briefly reviewing some of the other ways in which customer and service experiences vary.

Different Services Work in Different Ways
Many Internet services already use complex mechanisms to optimise their delivery to the end-user. For example:

  • Google has built a huge Content Delivery Network, using fibre to speed communications between data centres, dedicated delivery of traffic to international peering points, and equipment at ISPs for expediting caching and content delivery, to ensure that its content is delivered more rapidly to the outer edges of the network;
  • BBC News Player uses an Akamai Content Delivery Network (CDN) similarly;
  • Skype delivers its traffic more effectively by optimising its route through the peer-to-peer network.

Equally, most ISPs are able to ‘tune’ their data services to better match the characteristics of their own network. Although these assets are only available to the services that pay for, own, or create them, none of these techniques actively slows any other service. Indeed, in theory, by creating or using new, uncongested routes, they free capacity for other services, so the whole network benefits.

Consumer Experiences are Different too
Today’s consumer experience of ISP services varies widely on local factors. Two neighbours (who happen to be on different nodes) could, in theory, get a very different user experience from the same ISP depending on factors such as:

  • Local congestion (service node contention, loading, router and backhaul capacity);
  • Quality and length of local loop (including customer internal wiring);
  • Physical signal interference at the MDF (potentially a big issue where there is lots of ULL);
  • Time of day;
  • Router make and settings (particularly relating to QOS, security).

These factors will, in many cases, massively outweigh performance variation experienced from possible ‘traffic management’ by ISPs.

Internet Protocols try to be ‘Fair’

The Internet runs using a set of data traffic rules or protocols which determine how different pieces of data reach their destinations. These protocols, e.g. TCP/IP, OSPF, BGP, are designed to ensure that traffic from different sources is transmitted with equal priority and efficiency.
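TCP’s fairness comes from its additive-increase / multiplicative-decrease (AIMD) behaviour, which the deliberately simplified sketch below illustrates: two flows sharing one link converge towards equal shares regardless of their starting rates. (This is a toy model of the principle, not a faithful TCP implementation.)

```python
def aimd(rate_a, rate_b, capacity=100.0, rounds=200):
    """Toy AIMD: each flow adds 1 unit of rate per round, and both
    halve their rates when the shared link becomes congested."""
    for _ in range(rounds):
        rate_a += 1.0
        rate_b += 1.0
        if rate_a + rate_b > capacity:   # link congested: both back off
            rate_a /= 2.0
            rate_b /= 2.0
    return rate_a, rate_b

# Two flows that start wildly unequal end up with near-equal shares:
print(aimd(90.0, 1.0))
```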

Further Technical Fixes Are Possible

Network congestion is not an issue that appeared overnight with the FCC’s 2008 Comcast decision. In fact, the Internet engineering community has been grappling with it, with some success, since the near-disaster of the late 1980s that led to the introduction of congestion control mechanisms in TCP.

Much more recently, the popular BitTorrent file sharing protocol, frequently criticised for getting around TCP’s congestion control, has been adapted to provide application-level congestion control. The P4P protocol, created by researchers at Yale and tested by Verizon and Telefonica, provides a means for P2P systems and service provider networks to cooperate better. However, it remains essentially unused.

A further consideration is that it is necessary to be realistic about what can be expected – we have heard the benefits from traffic-shaping cited as an extension of around 10% in the upgrade cycle in the best-case scenario.

It’s Complex, not Neutral

It is therefore simply not the case that all Internet services progress from point of origin somewhere in the cloud of cyberspace to the end-users via a random and polite system. There are assets that are not equally shared, significant local variations, and there are complex rules and standards.

‘The Internet’ is a highly complex structure with many competing mechanisms of delivery, and this is one of its great strengths – the multiplicity of routes and mechanisms creates a resilient and continually evolving and improving system. But it is not ‘neutral’, although many of its core functions (such as congestion control) are explicitly designed to be fair.

Don’t Block the Pipes, Lubricate the Market

In principle, Telco 2.0 endorses developments that support new business models, but also believes that the rights of end-users should be appropriately protected. They have, after all, already paid for the service, and having done so should have the right to access the services they believe they have paid for within the bounds of legality.

In terms of how to achieve this balance, it’s very difficult to measure and police service levels, and we believe that simply mandating traffic management solutions alone is impractical.

Moreover, we think that creating a fair and efficient market is a better mechanism than any form of regulation on the methods that operators use to prioritise services.

Empower the Customer

There are three basic ways of creating and fulfilling expectations fairly, and empowering end-customers to make better decisions on which service they choose.

  1. Improving Transparency – being clear and honest about what the customer can expect from their service in terms of performance, and making sure that any traffic management approaches are clearly communicated.
  2. Enabling DIY Service Management – some customers, particularly corporate clients and advanced users, are able and can be expected to manage significant components of their Internet services. For example, mechanisms already exist to flag classes of traffic as priority, and many types of CPE are capable of doing so. It’s necessary, however, that the service provider’s routers honour the attribute in question and that users are aware of it (a minimal sketch of such flagging follows this list). Many customers would need support to manage this effectively, and this could be a role for 3rd parties in the market, though it is unlikely that this alone will result in fairness for all users.
  3. Establishing Protection – for many customers, DIY Service Management is neither interesting nor possible, and we argue that a degree of protection is desirable by defining fair rules or ‘best practice’ for traffic management.
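To illustrate the ‘flagging’ mentioned in point 2 above, the sketch below marks a socket’s traffic with a DiffServ code point (DSCP), the standard IP-layer priority attribute. As noted, the mark only matters if the service provider’s routers actually honour it:

```python
import socket

# DSCP "Expedited Forwarding" (EF, RFC 3246) = 46; the IP TOS byte
# carries the DSCP in its top six bits, hence the shift by 2.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Supported on platforms that expose IP_TOS (e.g. Linux):
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Packets sent from this socket now carry the EF mark; routers that
# honour DiffServ may queue them ahead of best-effort traffic.
sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))
```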

Not all customers are alike

‘Net Neutrality’ or any form of management of contention is not an issue for corporate customers, most of whom have the ability to configure their IP services at their will. For example, a financial services trader is likely to prioritise Bloomberg and trading services above all other services. This is not a new concept, as telcos have been offering managed data services (priority etc) to enterprise customers for years over their data connections and private IP infrastructure.

Some more advanced consumer users can also prioritise their own services. Some can alter the traffic management rules in their routers as described above. However, these customers are certainly in the minority of Innovators and Early Adopters. Innovation in user-experience design could change this to a degree, especially if customers have a reason to engage rather than being asked to do their service provider’s bottom line a favour.

The issue of unmanaged contention is therefore likely to affect the mass market, but is only likely to arise in certain circumstances. To illustrate this we have selected a number of specific scenarios or use cases in which we will show how we believe the principles we advocate should be applied. But first, what are our principles?

Lubricate the Market

There are broadly three regulatory market approaches.

  1. ‘Do nothing’ – the argument for this is that there is no evidence of market failure, and that regulating the service is therefore unnecessary and moreover difficult to do. We have some sympathy for this position, but believe that in practice some degree of direction is needed, as recommended below.
  2. ‘Regulate the Market’ – so that telcos can do what they like with the traffic, but customers can choose between suppliers on the basis of clear information about their practices and performance. A pure version of this approach would involve the specification of better consumer information at point of sale and published APIs on congestion.
  3. ‘Regulate the Method’ – with hard rules on traffic management rather than on how the services are sold and presented. The ‘hard’ approach is potentially best suited to markets that are insufficiently competitive / open. This method is difficult to police as services blur, and ‘the game’ then becomes being categorised as one type of service while acting as another.

Telco 2.0 advocates a hybrid approach that promotes market transparency and liquidity to empower customers in their choices of ISP and services, including:

  • Guidelines for operators on ‘best practice in traffic management’, which in general would recommend that operators should follow the principle of “minimum intervention”;
  • Published assessments of how each operator meets these guidelines, making it straightforward for customers to understand operators’ performance on these issues.

The criteria of the assessment would include the actual performance of the operator against claimed performance (e.g. speed, latency), and whether they adhere to the ‘Code of Best Practice’.

How might it work?

The communication of this assessment could be as simple as a ‘traffic light’ style indicator, where a full Internet service meeting best practice and consistently achieving say 90% of claimed performance would be ‘Green’, while services meeting lower standards / adherence or failing to report adequately would be signalled ‘Amber’ or ‘Red’. The principles used by the operator should also be published, though utilising this step on its own would run the risk of the “Licence Agreement” problem for software – which is that no-one reads them.
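As an illustration of how mechanical such an assessment could be, the sketch below rates a service from measured versus claimed performance. The 90% Green threshold comes from the text above; the 75% Amber threshold and the example inputs are our own placeholders:

```python
def traffic_light(measured, claimed, follows_code_of_practice, reports_adequately):
    """Toy 'traffic light' rating of an ISP's Internet Access service."""
    if not reports_adequately:
        return "Red"
    achievement = measured / claimed          # e.g. measured vs claimed Mbps
    if follows_code_of_practice and achievement >= 0.90:
        return "Green"
    if achievement >= 0.75:                   # assumed Amber threshold
        return "Amber"
    return "Red"

print(traffic_light(measured=7.4, claimed=8.0,
                    follows_code_of_practice=True, reports_adequately=True))
# -> "Green" (92.5% of claimed performance, code of practice followed)
```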

We’ll be refining our guidelines, and our thoughts on how an indicator or other system might work, by working through the specific ‘Use Cases’ outlined below. In the meantime, we recommend the suggestions made by long-time Telco 2.0 Associate Dean Bubley in Disruptive Analysis’s ‘Draft Code of Conduct for Policy Management and Net Neutrality’.

It is our view that as long as telcos are forced to be open, regulators (and consumer bodies) can question or, ultimately, regulate for or against behaviours that could be beneficial or damaging.

The Role of the Regulator

We believe that the roles of the regulator(s) should be to:

  • Develop an agreed code of best practice with industry collaboration;
  • Agree, collect and publish measures of performance against the code;
  • Make it as easy as possible to switch providers by reducing the ‘hassle factor’ of clumsy processes, and by releasing consumers from onerous contractual obligations in instances of non-compliance with the code or performance at a ‘Red’ standard;
  • Monitor, publicise, and police market performance in line with appropriate regulatory compliance procedures.

We draw a parallel with what the UK regulator, Ofcom, used to do for telephony:

  • Force all providers with over a certain market share to report key performance metrics;
  • Publish these (ideally on the web, real time and by postcode);
  • Make it as easy to switch providers as possible;
  • Continuously review the set of performance metrics collected and published.

New ‘Enhanced Service’ Business Models?

Additionally, we see the following possible theoretical service layers within an Internet Service that could be used to create new business models:

  1. ‘Best efforts’ – e.g. ‘We try our best to deliver all of your broadband services at maximum speed and performance, but some services may take priority at certain times of the day in order to cope with network demands. The services will not cease to work but you may experience temporarily degraded performance.’
  2. ‘Protected’ – akin to an ambulance lane (e.g. health or SmartGrid applications – packets that are always delivered; these could be low or high bandwidth, e.g. a video health app, but the principle of priority stands for both).
  3. ‘Enhanced Service’ – e.g. a TV service that the customer has paid for (e.g. IPTV), or that an upstream provider will (or might) pay extra to have delivered with an assured higher degree of quality.

One possibility that we will be exploring is whether it could be possible to create an ‘On-demand Enhanced Service’. For example, to deliver a better video streaming experience, the video provider pays for their traffic to take priority over other services, with the express consent of the customer. This may be achieved by adding a message to the Enhanced Service, e.g. ‘Click here to use our Enhanced Video Service, where we’ll pay to get your video to you quicker. This may degrade the service to other applications currently active on your broadband line while you are using the Enhanced Video Service’.

We have long thought that there is scope for innovation in service design and pricing – for example, rather than offering a (supposed) continuous 8Mbps throughput (which most UK operators can’t actually support and have no intention of supporting), why not offer a lower average rate and the option to “burst” up to high speed when required? ISPs actually sell each other bandwidth on similar terms, so there is no reason why this should be impossible.
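That ‘lower average rate plus bursts’ idea corresponds to a token-bucket policer, a standard mechanism for enforcing an average rate while permitting short bursts. A minimal sketch follows, with illustrative figures only – e.g. a hypothetical service sold as ‘2Mbps average, with bursts’:

```python
class TokenBucket:
    """Token-bucket policer: enforces an average rate (fill_rate)
    while allowing bursts of up to `depth` units at full line speed."""

    def __init__(self, fill_rate_mbit_s=2.0, depth_mbit=16.0):
        self.fill_rate = fill_rate_mbit_s
        self.depth = depth_mbit
        self.tokens = depth_mbit          # start with a full burst allowance

    def tick(self, seconds):
        """Accrue burst credit over time, up to the bucket depth."""
        self.tokens = min(self.depth, self.tokens + self.fill_rate * seconds)

    def try_send(self, mbit):
        """Send immediately if enough credit remains; otherwise defer/shape."""
        if mbit <= self.tokens:
            self.tokens -= mbit
            return True
        return False

bucket = TokenBucket()         # sold as "2Mbps average, with bursts"
print(bucket.try_send(16.0))   # -> True: a short high-speed burst
print(bucket.try_send(1.0))    # -> False: burst credit exhausted
bucket.tick(8)                 # 8 seconds elapse at the 2Mbps average rate
print(bucket.try_send(16.0))   # -> True again
```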

Example Scenarios / Use Cases

We’ve identified a number of specific scenarios which we will be researching and developing ‘Use Cases’ to illustrate how these principles would apply. Each of these cases is intended to illustrate different aspects of how a service should be sold to, and managed by / for the customer to ensure that expectations are set and met and that consumers are protected appropriately.
Fixed ‘Use Cases’

  1. Contention between Internet services over an ADSL line on a copper pair, e.g. Dad is editing a website, Daughter is watching YouTube videos, with a SmartGrid meter in operation over a shared wireless router. This is interesting because of the limited bandwidth on the ADSL line, plus consideration of the SmartGrid monitoring as a ‘Specialised Service’, and potentially also as a ‘Protected Service’ in our exploratory classification of potential service classes.
  2. Contention between Internet and Specialised Services over an ADSL line on a copper pair, e.g. Dad is streaming an HD video on the internet, daughter is watching IPTV. This is interesting because of the limited bandwidth on the ADSL line and the additional factor of the IPTV service over the broadband connection. Unlike a DOCSIS 3 cable link, where the CATV service is additional to the Internet service and in fact can be used to offload applications like iPlayer, the DSL environment means that “specialised services” will contend with public Internet service.
  3. Managed vs Unmanaged Femtocells over an ADSL connection. An Unmanaged Femtocell is e.g. a Sprint Femtocell over an AT&T ADSL connection, where the Femtocell is treated purely as another source of IP traffic. A Managed Femtocell is e.g. a Softbank Femtocell operating on a Softbank ADSL line, using techniques such as improved synchronisation with the network to produce a better service. An examination of alternative approaches to managing Femtocell traffic is interesting: 1) because a Femtocell inherently involves a combination of mobile and fixed traffic over different networks, so it draws out fixed/mobile issues; and 2) because it is useful to work through how a Managed Femtocell Use Case might work within the market approach we’ve defined.
  4. A comparison of a home worker using videoconferencing with remote colleagues in two scenarios: one using VPN software and a configured router; the second using Skype with no local configuration. The objective here is to explore the relative difference in the quality of user experience as an illustration of what is possible in an advanced-user ‘DIY’ management scenario.
  5. The ‘Use Case’ of an ‘On-demand Enhanced Service’ for a professional web video-cast, with the consumer experience as outlined above. The idea here is that the user grants temporary permission to the video provider and the network to temporarily provide an ‘Enhanced Service’. The role of this ‘Use Case’ is to explore how and whether a ‘sender pays’ model could be implemented, both technically and commercially, in a way that respects consumer concerns.
  6. HDTV to the living room TV. This is interesting because the huge bandwidth requirements needed to deliver HDTV are far beyond those originally envisaged and have a potentially significant impact on network costs. Would user expectations of such a service permit e.g. buffering to deliver it without extra cost, or might this also enable a legitimate ‘two-sided’ sender pays model where the upstream customer (e.g. the media provider) pays?

Mobile ‘Use Case’

  1. VOIP over mobile. Is it right that VOIP over mobile networks should be treated differently from how it is over fixed networks?

Telco 2.0’s Position Vs the Rest

There is reasonably common ground between most analysts and commentators on the need for more transparency in Internet Access service definition, performance and management standards, though there is little agreement yet on the ways in which this transparency might be achieved.

The area which is most contentious is the notion of ‘non-discrimination’ – that is, of allowing ISPs to prioritise one form or source of traffic over another. AT&T is firmly in favour of ‘paid prioritisation’, whereas the Google/Verizon proposal is not, saying that ‘wireline broadband providers would not be able to discriminate against or prioritize lawful Internet content, applications or services in a way that causes harm to users or competition’.

Interestingly, in the Norwegian Government’s Net Neutrality Guidelines issued in 2009, provision is made that allows operators to manage traffic in certain circumstances in order to protect the network and other services.

Free Press are a US activist movement who champion ‘Net Neutrality’. While we are in accord with their desire for freedom of speech, and understand the imperative to create a more level playing field for media in the US, our position is not aligned in terms of enshrining total neutrality globally by regulation.

In terms of the regulators’ positions, the UK’s Ofcom is tentatively against ‘ex-ante’ regulation, whereas the FCC seems to favour non-discrimination as a principle. The FCC is also asking whether mobile and fixed are different – we say they are, although as the example of 3UK shows, the differences may not be the ones you expect. Ofcom is also already looking at how it might make switching easier for customers.

We also note that US-based commentators generally see less competition on fixed internet services than in Europe, and less mobile broadband options for customers. Our position is that local competitive conditions are a relevant consideration in these matters, albeit that the starting point should be to regulate the market as described before considering a stronger stance on intervention in circumstances of low local competition.

Conclusion & Recommendations

‘Net Neutrality’ is largely a clever but distracting lobbyists’ ploy that has gathered enormous momentum on the hype circuit. The debate does create a possible opportunity to market and measure broadband services better, and that’s no bad thing for customers. There may also be opportunities to create new business models, but there’s still work to be done to assess if these are material.

‘Lubricate the Market’

1. “Internet Access” should be more tightly defined to mean a service that:

  • Provides access to all legitimate online services using the ‘public’ internet;
  • Performs within certain bounds of service performance as marketed (e.g. speed, latency);
  • Is subject to the minimum necessary ‘traffic management’ by the ISP, which should only be permissible in specific instances, such as contention (e.g. peak hour use);
  • Maintains consistent delivery of all services in line with reasonable customer expectation and best possible customer experience (as exemplified in a ‘code of best practice’);
  • Provides published and accessible performance measures against ‘best practice’ standards.

2. Where a customer has paid extra for a ‘Specialised Service’, e.g. IPTV, it is reasonable to give that service priority to agreed limits while in use. Services not meeting these criteria should be named, e.g. “Limited Internet Access”.

3. ISPs should be:

  • Able to do ‘what they need’ in terms of traffic management to deliver an effective service but that they must be open and transparent about it;
  • Realistic about the likely limits to possible benefits from traffic-shaping.

4. The roles of the regulator are to:

  • Develop an agreed code of best practice with industry collaboration;
  • Agree, collect and publish measures of performance against the code;
  • Ensure sufficient competition and ease of switching in the market;
  • Monitor, publicise, and police market performance in line with appropriate regulatory compliance procedures.

We have also outlined:

  • Principles for a code of best practice;
  • A simple ‘traffic light’ system that might be used to signal quality and compliance levels;
  • ‘Use Cases’ for further analysis to help refine the recommended ‘Code of Practice’ and its implementation, including exploration of an ‘On-Demand Enhanced Service’ that could potentially enable new business models within the framework outlined.

 

Full Report – Entertainment 2.0: New Sources of Revenue for Telcos?

Summary: Telco assets and capabilities could be used much more to help Film, TV and Gaming companies optimize their beleaguered business model. An extract from our new 38 page Executive Briefing report examining the opportunities for ‘Hollywood’ and telcos.

 

NB A PDF of this 38 page report can be downloaded here.

Executive Summary

Based on output from the Telco 2.0 Initiative’s 1st Hollywood-Telco International Executive Brainstorm held in Los Angeles in May 2010 and subsequent research and analysis, this Executive Briefing provides an introduction to new opportunities for strategic collaboration between content owners and telcos to address some of the fundamental challenges to their mutual business models caused by the growth of online and digital entertainment content.

To help frame our analysis, we have identified four new business approaches that are being adopted by media services providers. These both undermine traditional value chains and stimulate the creation of new business models. We characterise them as:

  1. “Content anywhere” – extending DSAT/MSO subscription services onto multiple devices eg SkyPlayer, TV Anywhere, Netflix/LoveFilm
  2. “Content storefront” – integrating shops onto specific devices and the web. eg Apple iTunes, Amazon, Tesco
  3. “Recreating TV channels through online portals” – controlling consumption with new online portals eg BBC iPlayer, Hulu, YouTube
  4. “Content storage” – providing digital lockers for storing & playback of personal content collections eg Tivo, UltraViolet (formerly DECE)/KeyChest

To thrive in this environment, and counter the continuing threat of piracy, content owners need to create new functionality, experiences and commercial models which are flexible and relevant to a fast moving market.

Our study shows that Telco assets are, theoretically at least, ideally suited to enable these requirements and that strategic collaboration between telcos and content owners could open up new markets for both parties:

  • New distribution channels for content: Telcos building online storefront propositions more easily, with reduced risk and lower costs, based on digital locker propositions like Keychest and UltraViolet;
  • Improved TV experiences: developing services for mobile screens that complement those on the primary viewing screen;
  • Direct-to-consumer engagement for content owners: studios taking advantage of unique telco enabling capabilities for payments, customer care, and customer data for marketing and CRM to engage with consumers in new ways;
  • Operational cost reduction for Studios and Broadcasters: Telco cloud-based services to optimise activities such as content storage, distribution and archive digitisation.

To realise these opportunities both parties – telcos and content owners – need to re-appraise their understanding of the value that each can offer the other.

For telcos, rather than just creating bespoke ‘enterprise ICT solutions’ for the media industry – which tends to be the current approach – long term, strategic value will come from creating interoperable platforms that provide content owners with ‘plug and play’ telco capabilities and enabling services.

For content owners, telcos should be seen as much more than just alternative sales channels to cable.

There is a finite window of opportunity for content owners and telcos to establish places in the new content ecosystems that are developing fast before major Internet players – Apple, Google – and new players use their skills and market positions to dominate online markets. Speedy collaborative action between telcos and studios is required.

In this Executive Briefing, we concentrate on the US market as it is both the largest in the world and the one that most influences the development of professional video content, and the UK, as the largest in Europe.

The developments in both are indicative of the types of changes that are facing all markets, although the exact opportunities and challenges are influenced by the existing make up of the video entertainment market in each country and the specific regulatory environment.

This report is part of an ongoing, integrated programme of research and events by the Telco 2.0 Initiative to foster productive collaboration on new business models in the global digital entertainment marketplace.

Sizing the opportunity

The online entertainment opportunity is often talked down by both telcos and media companies. It is, after all, just a small percentage of the current consumer and advertising spend. Examples are easily cited that diminish the value of the opportunity: global Mobile TV revenues (revenues not profits) don’t reach $1bn; on demand represents just 2% of total TV revenues; online film (rental and download) does slightly better but hasn’t yet reached 5% of filmed entertainment revenues.

Individually, these are not the sort of figures that are going to get telco execs bouncing with enthusiasm, but collectively (as illustrated below) the annual digital entertainment market reached revenues of around $55bn in 2009 and has a growth rate approaching 20%. The growth rate looks even better if you discount digital magazine and newspaper ad revenue, both of which are declining. So, even today, digital entertainment is a market that equates to 83% of Vodafone Group’s 2009/10 revenue, and it is experiencing the kind of growth that the mobile industry was once famed for. Realistically, the telco share will remain small for some time, but as a growth market it cannot be ignored.

Table 1: Global Value of Digital Entertainment by Content Type 2009

Digital Content Type            Revenue (US$ bn)    % increase year-on-year
Video on Demand                 4.09                11.6
Pay-per-view TV                 4.56                -1.9
Mobile TV                       0.99                7.2
Online and Mobile TV Ads        2.95                17.5
Digital Music Distribution      8.1                 29.3
Online Film Rental              4.2                 25.8
Digital Film Downloads          0.59                49.3
Online Games                    11.63               21.3
Wireless Games                  7.31                18.2
Online Game and in-game Ads     1.55                16.2
Consumer Magazine Digital Ads   1.31                -0.2
Newspaper Digital Ads           5.48                -5.6
Electronic Book Publishing      1.79                50.4
Total                           54.55               Average: 18.39

Source: Telco 2.0 Initiative and PricewaterhouseCoopers Global Entertainment and Media Outlook: 2010-2014

An important factor to note here is the continued growth of digital music distribution revenue. Music can easily be discounted as a medium that has already moved online, but a year-on-year revenue increase approaching 30% shows it still has plenty of room to grow. This is the key point: digital entertainment is a growth opportunity for the next decade. And when you look at the size of the physical and broadcast markets currently served by these digital alternatives, it is high growth potential on top of a market that is already of considerable size, as illustrated in the table below:

Segment                                     Revenue 2009 ($bn)
Television subscriptions and licence fees   185.9
TV Advertising                              148.56
Recorded Music                              26.37
Filmed Entertainment                        85.14
Newspapers                                  154.88
Trade publishing                            148.11
Book publishing                             108.2
Total                                       857.16

Source: Telco 2.0 Initiative and PricewaterhouseCoopers Global Entertainment and Media Outlook: 2010-2014

Once you take out the existing online spend and those elements, such as movie theatre revenues, that won’t move online, the current addressable market is in the region of $700bn a year.

Again, to put that in context, at our recent Best Practice Live! online conference and exposition, Anthony Hill of Nokia Siemens Networks valued the Web Services 2.0 market at $1 trillion.

What is more, there is evidence to suggest that as well as the substitution of digital online for physical and broadcast formats, the virtual world is also bringing additional viewers and potentially additional revenue with it. An extract from Nielsen’s A2/M2 Three Screen Report, presented at our 1st Hollywood-Telco Executive Brainstorm, suggests that while online video viewing in the US grew 12% year-on-year and mobile viewing grew 57%, this was not at the cost of TV viewing in the home, which also grew, if only very marginally, by 0.5%.

It is not surprising therefore that content owners are looking to take advantage of the shifts in the market and move up the value chain to take a greater share of the revenues, and that telcos also want to play a part in a significant growth market. Indeed, the motivations pushing both groups towards digital and online entertainment are truly compelling.

In the next two sections we examine these in more detail.

Telcos: Let us entertain you

Telcos are keen to build a bigger role in online entertainment for four reasons:

  • Entertainment provides the kind of eye catching and compelling content that broadband networks, both fixed and mobile, were built for
  • Broadband networks will carry the traffic irrespective of the role of the telco, so it’s strategically important to play in a part of the business that accounts for a majority of their total data traffic
    (We estimate that online video makes up one-third of consumer internet traffic today and that this could grow more than ten times by 2013 to account for over 90% of consumer traffic overall. That makes it vitally important that telcos understand and maximise the opportunities associated with video and although not all video will be entertainment and not all entertainment is video, video entertainment is a major market driver. For more on broadband data trends, see our latest Broadband Strategy Report)
  • Entertainment is a growth market and the type of opportunity that can help build a Telco 2.0 business that, in conjunction with others, could re-ignite the interest of the financial markets in telecoms as a growth stock

  • It is a defensive play. Cable and DSAT providers are bundling communications services – broadband and telephony – into their service offering, eating into the customer bases of telcos. While telcos are still receiving revenue from these through wholesale, they are losing the direct link to customers and the associated customer information, both of which are integral to the ability of telcos to build effective two-sided business models

Downstream opportunity and challenges

Today, the vast majority of telcos are concentrating their activities in the entertainment arena in downstream activities – in IPTV and mobile TV, backed in some instances by a web TV offering as well. The primary success factors are, as might be expected, coverage/reach, quality of service and of course the appeal of the content. These are pre-requisites for success but there are no universally applicable targets by which we can judge success as so much depends on the competitive landscape within each market.

For example, mobile TV is bigger in China and India than in Western Europe and North America, despite the late entry of 3G systems in China and the very recent 3G spectrum auction in India, which has kept connection speeds low. Furthermore, TV is far from ubiquitous, covering about 75% of the world’s population, while Internet penetration sits at around 25%, and as low as 12% in developing markets, according to the ITU.

So why is mobile TV getting better take-up on 2.5G in India than on 3G and 3.5G in mature markets? The simple answer can be found in the penetration levels of the alternatives. Mobile simply has better reach than alternative transmission systems in emerging markets; the same factors influence mature markets, with different results.

In developed markets, telcos are becoming part of the entertainment value chain as competitors to cable and DSAT, but primarily with IPTV as opposed to mobile which, with a few exceptions, is coming more through content-specific apps than general services.

In the US, Comcast’s COO, Steve Burke recently cited telcos along with other cable providers and DSAT service providers as the company’s competition. Telcos have many options of how to enter the market but the default seems to be to think firstly, if not only, of full IPTV services or mobile TV, driven primarily by the desire to defend their communications markets.

Telcos are competing with TV cable and satellite companies for the home market on two fronts. Firstly to deliver high speed broadband connectivity and secondly to offer TV services. The problem for telcos is that IPTV, their TV service offering, has barely scratched the surface despite recent rapid growth.

According to the Broadband Forum, global IPTV subscriptions grew 46% year-on-year in the first quarter of 2010. This equates to 11.4 million new IPTV subscribers – the most rapid growth in any 12-month period yet recorded – and a global total of 36.3 million IPTV subscribers as at March 31st 2010. To put that number in some sort of context, according to Nielsen there are 286 million TV viewers (not subscriptions) in the US alone.

The US IPTV market broke the 6 million subscriber mark in the first quarter of 2010 and is growing fast, but Europe is taking to the technology faster. France tops the IPTV charts with just over 9 million users – perhaps no surprise given the weakness of its cable and satellite TV markets. Conversely, the UK, which has decent broadband penetration (ranking 6th worldwide), doesn’t even register in the top ten for IPTV. Market entry is tough in the UK, with BSkyB and Virgin Media dominating the pay TV market and, in BSkyB’s case, tying up the premium content.

In many countries, telcos have also struggled to do the deals with studios and TV networks that will secure them the most compelling content and are therefore struggling to compete with cable and satellite services.

IPTV realities

In the US, AT&T’s U-verse and Verizon’s FiOS IPTV services have made some inroads. FiOS had 3 million subscribers at the end of Q1 2010, according to the company, which also claims the service is available to 12.6 million premises, or 28.8% of Verizon’s footprint. Its TV subscriber base had increased 46% by the end of 2009, while the major cable companies saw their shares drop by one or two percent – but in absolute numbers cable remains dominant. Conversely, cable companies have seen their share of broadband connectivity rise at the telcos’ cost.

Yet for telcos, the investments required to increase speed and capacity through fibre are high. Fibre certainly represents the future for connectivity, but its deployment is a long process and building complete end-to-end IPTV services will not make sense for every area in every country. Indeed, even within the US, Verizon is concentrating on core states and has sold its local wireline operations in 16 states to concentrate on building its fibre business where it is strongest. And fibre rollout is just the start.

Becoming a TV service provider is not straightforward, and becoming a differentiated TV service provider is even more challenging. In addition to the technical connectivity, it requires deals to be made with networks to show their programming and, if real differentiation is to be achieved, deals also have to be brokered with studios, production companies and other owners of content, such as the governing bodies of sports, to secure broadcasting rights. Then it requires the development of an easy-to-use guide and the ability to at least keep up with the technical developments with which established TV providers are differentiating themselves – HD, 3D and integration with web features, such as social networking sites and delivery across multiple screens. This is a significant undertaking.

Many telcos are recognising that they cannot play every role in the video distribution value chain in every market. Indeed, even in France, which we’ve established has receptive market conditions for IPTV, Orange has pulled out of competing in the sports and film genres that so often dictate the success of paid-for TV services – and France is not alone.

At our 1st Hollywood-Telco Executive Brainstorm and at the 9th Telco 2.0 Executive Brainstorm, Telecom Italia’s representatives reiterated the company’s belief that IPTV was primarily a defensive play, designed to protect broadband revenues and that entertainment-related revenues would come instead from using telco assets and telco-powered capabilities to build new services around TV.

For Telecom Italia this is primarily an upstream play based around QoE, CRM, billing and customer data, and it also believes a business can be built around context and targeted advertising for free content on three screens. Building such functionality – linking consumer information and data about the environment to supplement TV services themselves and offering similar functionality to third parties – is a core strategic decision for Telecom Italia, more about which can be seen in Antonio Pavolini’s presentation on our Best Practice Live! event site.

The upstream potential for telcos is something we shall return to later, but we also believe that by working with, rather than in competition with, studios and other content owners, telcos can become involved faster and more effectively in delivering entertainment services to their customers.

In addition, we believe an alternative and more accessible downstream opportunity exists based on digital rights lockers.

Digital Lockers: an alternative downstream option

Digital rights lockers are virtual content libraries hosted in the cloud that allow consumers to build collections of content that are not tied to a physical format or device. There are a number of digital locker developments for online video, most notably Disney’s KeyChest and UltraViolet, the new brand for the Digital Entertainment Content Ecosystem (DECE), a cross-industry development.

These essentially mean that a consumer can buy a piece of content once and then view it on any device at any time. For example, if a consumer buys Avatar on Blu-ray disc, they would be able to register the disc into an UltraViolet-powered rights locker at the time of purchase or at a later date. Once registered, the digital proof of purchase is held in the cloud, and media service providers – cable companies, telcos etc – can access that information in the consumer’s rights locker to confirm that they can deliver the content to the consumer.

There are two important points here. The first is that, from a consumer point of view, it gives them what they want in the form of a one-time purchase for multiple formats; the second is that it breaks the tie between the device and the content. Content, in the form of rights, is held in the cloud and is delivered in the appropriate form for any supported and registered device.

UltraViolet works using a network-based authentication service and account management hub from Neustar that allows consumers to log in and access the digital entertainment they have rights to. The system authenticates rights to view content from multiple services on multiple devices, and manages the content and the registration of devices in consumer accounts.

It means that content doesn’t keep a user locked to a particular device manufacturer in the way that, for example, iTunes content ties users to the iPhone or iPad. If a consumer switches to a different device, assuming it is also supported, their content remains viewable.

Furthermore, the fact that UltraViolet has multiple content owners committed – namely Fox Entertainment Group, NBC Universal, Paramount, Sony (which chairs UltraViolet) and Warner Brothers – makes it a strong proposition: a competing offer that is multi-device but limited to the content of a single owner is unlikely to succeed.

What this means for telcos is that an ecosystem is being built that they can simply tap into. A telco could use an UltraViolet-provided API to build access into its own customer offerings, whether that be IPTV, mobile TV or a web-enabled storefront. It can query the rights locker in the cloud, see what a consumer is entitled to view, and deliver the video to the consumer in the right format for the device and complete with the necessary rights attached. An indication of how this flow works is illustrated in the diagram below, and sketched in code after it.

Figure 1: Building the Digital Locker Proposition

Source: Telco 2.0 Initiative
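The same flow can be sketched in code. This is purely illustrative: UltraViolet’s actual API is not documented here, so the endpoint, field names and token handling below are all our assumptions.

```python
# Hypothetical sketch of a telco storefront querying a cloud rights locker
# before delivering a title. All endpoint and field names are invented for
# illustration; they do not describe the real UltraViolet API.
import requests

LOCKER_API = "https://rights-locker.example.com/v1"  # placeholder endpoint

def fetch_entitlements(user_token: str) -> list:
    """Ask the locker which titles this authenticated user holds rights to."""
    resp = requests.get(f"{LOCKER_API}/entitlements",
                        headers={"Authorization": f"Bearer {user_token}"},
                        timeout=5)
    resp.raise_for_status()
    return resp.json()["entitlements"]

def deliver(title_id: str, device_profile: str, user_token: str) -> str:
    """Return a stream URL in the right format for the device, if entitled."""
    owned = {e["title_id"] for e in fetch_entitlements(user_token)}
    if title_id not in owned:
        raise PermissionError("no rights registered for this title")
    # The telco side then picks a rendition matching the registered device.
    resp = requests.post(f"{LOCKER_API}/delivery",
                         json={"title_id": title_id, "profile": device_profile},
                         headers={"Authorization": f"Bearer {user_token}"},
                         timeout=5)
    resp.raise_for_status()
    return resp.json()["stream_url"]
```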

The key point here for telcos is that such an approach overcomes the issues associated with scale and potentially with international distribution deals as well.

Both of these were uppermost in the minds of telco execs at the 1st Hollywood-Telco Executive Brainstorm, who constantly reiterated their frustration at not being able to get the deals they need to give their customers the content they demand, because of high minimum guarantees (‘minimers’) and a mismatch between the internal operations and practices of telcos and studios. (See Telco 2.0 in Hollywood: There’s Gold in Them Thar Hills).

Collective activities such as UltraViolet are by definition more difficult to develop than those from an individual company, as they require the balancing of individual priorities and goals with those of the group. However, as we will discuss in more detail later, the desire of studios to compete with other groups acting as the retail and distribution point makes working together more desirable.

It is also important to recognise that the strategic choices made by media companies have an impact on the options available to telcos. When media companies work together and create a new ecosystem, the value of upstream services from a single telco diminishes, while telcos’ ability to enter as downstream players increases, as the open nature of an UltraViolet-like ecosystem allows them to grow incrementally. Should telcos want to play on the upstream side in such communities, then they too need to act collectively.

Telcos’ missed opportunity

Telcos have been notable by their absence from the development of UltraViolet, which means many of the upstream capabilities they could have offered – authentication, billing, formatting for different devices – are being developed, at least in the first instance, in other ways. However, there could be potential for these to develop over time by building stronger relationships with the UltraViolet ecosystem, particularly around mobile device support. The formatting of content for specific devices is part of the basic operator function, and this expertise could be highly valuable to the UltraViolet community; but to be offered as an upstream service, rather than a downstream differentiator, it needs to be a collective proposition from the operator community, not piecemeal.

However, if telcos are being challenged to develop new models to engage their customers, then content owners are even more so.

UltraViolet and other digital rights locker solutions open an alternative downstream opportunity for telcos, particularly those that don’t have the market scale to compete with other pay TV services. It is a middle ground that enables telcos to build an entertainment portfolio while overcoming some of the challenges of building an entirely new business.

Content’s business model crunch

Once upon a time there were only two ways to get content to a large number of people: broadcast, or a delivery chain made up of distributors, aggregators and retailers. The Internet changed that, and high-speed broadband access changes it again, as even HD video can now be delivered to individuals over the Internet, opening up new competition to those traditional channels.

All of these bring new challenges to the existing models of traditional media companies, which are both being challenged and pursuing new opportunities over the short to medium term (1-3 years), as summarised in the table below.

Film Studios
  • Upstream business model: None
  • Downstream business model: Revenue share from the various sales windows – movie theatre, DVD sales, DVD rentals, pay TV, free-to-air TV – with the respective distributor/retailer
  • Upstream challengers: None
  • Downstream challengers: New online rental and sales channels, eg Netflix, LoveFilm, iTunes, plus free and pirated alternatives
  • New business opportunities: Downstream – sell direct to consumers, retaining all revenue, and expand the service offering with merchandising upsell etc. Upstream – create an upstream advertising business

Free-to-air TV Broadcasters
  • Upstream business model: Selling advertising inventory; public/government funding; syndication
  • Downstream business model: –
  • Upstream challengers: Fragmentation of peak audiences; Google TV and online player services which undermine/destroy the value of advertising
  • Downstream challengers: –
  • New business opportunities: Upstream – greater distribution and lifespan delivering more ad value/opportunities; greater value to advertisers based on better measurement; greater targeting and personalisation; instant purchase opportunity

Pay TV Broadcasters
  • Upstream business model: Selling advertising inventory; syndication
  • Downstream business model: Pay TV services to consumers
  • Upstream challengers: Google TV and online player services which undermine/destroy the value of advertising
  • Downstream challengers: New sellers of TV content – Netflix, iTunes; new distributors of TV content – Hulu; new connected TV propositions – particularly Google TV
  • New business opportunities: Downstream – maximise the value of content with pre-broadcast promotion and post-broadcast access. Upstream – greater distribution and lifespan delivering more ad value/opportunities; greater value to advertisers based on better measurement; greater targeting and personalisation; instant purchase opportunity

Games Publishers
  • Upstream business model: Licensing of IP to third parties, eg films, TV, books, comics etc; ad-funded apps
  • Downstream business model: Largest share of product sales revenue, with the total shared with distributor and retailer plus a licence fee payable to platforms; in-play features
  • Upstream challengers: Piracy
  • Downstream challengers: Spiralling costs of production undermining profitability; online distribution’s potential to open the market and reduce the power and value of the publisher role; piracy
  • New business opportunities: Online potential to break the links between the platform and the game and gain a larger share of ‘hit’ game revenues; growth and monetisation of casual and mobile games

Independent Games Developers
  • Upstream business model: Commissions from publishers, platforms and studios; ad-funded apps
  • Downstream business model: Revenue from product sales shared with distributor and retailer plus a licence fee payable to platforms
  • Upstream challengers: Spiralling costs of production undermining profitability; piracy
  • Downstream challengers: Spiralling costs of production undermining profitability
  • New business opportunities: Online potential to break the links between the platform and the game and gain a larger share of ‘hit’ game revenues; growth and monetisation of casual and mobile games

Newspaper/Magazine Publishers
  • Upstream business model: Selling advertising inventory
  • Downstream business model: Revenue share with distributors and retailers
  • Upstream challengers: Proliferation of online publications hitting subscriber bases and diluting value to advertisers; free is the dominant model
  • Downstream challengers: Online proliferation of publications; user-generated publications, blogs etc; readership dropping
  • New business opportunities: Online provides the opportunity for instant access to news and views – faster turnover and more inventory; potential to leverage the back catalogue and increase the life/value of old articles

Book Publishers
  • Upstream business model: None
  • Downstream business model: Revenue share with distributors and retailers
  • Upstream challengers: None
  • Downstream challengers: Online cutting the price of the product and therefore the revenue to be shared
  • New business opportunities: Potential to go direct to consumers and dramatically lower production costs

Music Labels
  • Upstream business model: Licensing model to third parties
  • Downstream business model: Revenue share with artists, distributors and retailers
  • Upstream challengers: None
  • Downstream challengers: Piracy; free and low-cost online models taking too much revenue out of the value chain to sustain it
  • New business opportunities: Have to reinvent the business model as the opportunity has already been missed

Source: Telco 2.0 Initiative

For some entertainment sectors, especially the music labels, a major battle if not the entire war has been lost, and the fear of following in their footsteps keeps the minds of TV and film studios, as well as publishers, focused on the possible threats to their own revenue streams.

For other content owners, such as game developers, the opportunities look to outweigh the threats, as their position in the value chain is currently limited by the strength of the platforms and publishers. Indeed, by examining the games market we can see some of the opportunities that are developing for content owners to usurp failing business models and engage more directly with their customers in many more ways.

Games search for better business model

The online games market is currently the most valuable of the online content businesses, but this is predominantly made up of massively multiplayer online games, such as World of Warcraft, as well as casual games, and not of the blockbuster platform games that permeate the market. There has long been a feeling amongst developers, and even some of the smaller publishers, that the business model is broken, stacking the odds against developers.

Developers, whether in-house with publishers or independent, face burgeoning costs caused by the fragmentation of platforms and the increasing reliance on mega-hit games. These cost more and more due to competitive pressure driving the complexity and sophistication of the games themselves, plus licensing fees paid for IP and to console manufacturers – which, bizarrely, are paid according to the number of games produced, not sold. Currently, average development costs range from $15 million for two SKUs (the unit attached to each hardware version) to $30 million for all SKUs, and the figures keep rising. For example, according to Blitz Games, it cost $40 million to develop Red Dead Redemption, the current number one best-selling game.

It is not surprising, therefore, that casual and social games, which typically have six-month development cycles and cost between $30,000 and $300,000, and mobile games, which cost even less – generally in the range of $5,000 to $20,000 per title – are attracting more of the time and energy of independent developers. What has been missing in the past is an effective route to market but, with social networking sites for casual games and app stores for mobile games, these titles are now reaching mass audiences with ease, and both channels bring new business models to the games industry.

The mobile model is a simple storefront revenue-share one, which typically delivers 70% of revenue back to the developer. The casual gaming and social networking tie-up follows a freemium model, whereby the basic game is played for free and higher levels, additional features and tools can be purchased. Both of these represent potential channels for ‘hardcore’ games to follow if – and it’s a big if – the experience of the console can be replicated online.
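As a rough illustration of how the two models compare, the sketch below works through the arithmetic; the prices, player numbers and conversion rate are invented for the example and are not market data.

```python
# Illustrative comparison of the storefront and freemium models described
# above. All figures are invented example inputs, not industry benchmarks.
def storefront_revenue(price: float, units: int, dev_share: float = 0.70) -> float:
    """Storefront model: the developer keeps a fixed share of every sale."""
    return price * units * dev_share

def freemium_revenue(players: int, conversion: float, arppu: float) -> float:
    """Freemium model: only converting players pay, for levels/features/tools."""
    return players * conversion * arppu

print(storefront_revenue(4.99, 100_000))        # ~349,300 (70% of ~$499k gross)
print(freemium_revenue(1_000_000, 0.03, 12.0))  # 360,000 (3% pay $12 on average)
```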
OnLive has become one of the first to explore the potential of bringing the console experience online, launching the first version of its cloud-based games service on June 17, 2010; we will follow up on this development, and on the business model alternatives and opportunities, in an Analyst Note later in the year.

However, while the games market faces up to its own business model challenge, it, like many other content areas, should not be viewed in isolation. Games consoles provide more functionality than just playing games: they also offer Internet access, apps and voice, and are capable of playing DVDs. Meanwhile, games also have the potential to be accessed via other entertainment channels, and this is indicative of the blurring boundaries that have emerged across the entire entertainment arena.

Blurring boundaries

Today, we have global media companies, such as News Corp, that own film and TV studios, book and newspaper publishers, TV networks, and DSAT and cable service providers.

This convergence is a long-standing trend which has also thrown up some very notable failures, such as AOL Time Warner. However, the synergies across vertical entertainment sectors mean that companies continue to strive for the ultimate media company. Indeed, the online environment only encourages this, as content type is no longer tied to delivery medium. Furthermore, other companies are moving up and down the value chain. In fact the landscape is more complicated, with three distinct but sometimes related trends emerging:

  • The value chain is breaking up with existing players given the opportunity to collapse it down to fewer elements and take a larger share for themselves
  • New players are disrupting the ecosystem with new business models that challenge the value of the existing chain just as it is being reconfigured
  • New functionality, such as integrated apps or extension to the mobile screen, is providing further ways to differentiate services

Each of these alone would be considered an industry challenge; combined, they set up a bloody battlefield. What is more, the battle is not limited to a single content type. The distinctions between platforms are also blurring: you can get TV through your games console, films on TV from your pay TV service, and TV through the Internet – and together these forces are totally disrupting the value chain.

These forces have also prompted the emergence of four different business approaches as follows:

  • “Content anywhere” – extending DSAT/MSO subscription services onto multiple devices, eg SkyPlayer, TV Anywhere, Netflix/LoveFilm
  • “Content storefront” – integrating shops onto specific devices and the web, eg Apple iTunes, Amazon, Tesco
  • “Recreating TV channels through online portals” – controlling consumption with new online portals, eg BBC iPlayer, Hulu, YouTube
  • “Content storage” – providing digital lockers for storing and playback of personal content collections, eg TiVo, UltraViolet/KeyChest

These are not mutually exclusive; they are new approaches that media service providers are using alone or in combination to create new business models, disrupting the traditional value chain and building new ones.

Disrupting and rebuilding the value chain

Using the example of video distribution, we can see more clearly how things are changing. In the traditional value chain for video distribution (illustrated below), a consumer purchases video content from a retailer, who in turn is supplied by a wholesale distributor who bought from the content creators, and watches it on a device bought specifically for the purpose – TV, video/DVD/Blu-ray player, games console, hi-fi, etc.

Figure 2: Video Content Value Chain

Each role has a distinct group of companies that serve it, and these are highly competitive amongst themselves; but as distribution and delivery move online, the two end points are moving towards the middle to gain greater control over the market and a larger slice of the pie.

The online digital value chain has the same functions to deliver, but which companies should provide them is anything but clear, as each player has a shot at cutting some of the others out of the chain completely. The obvious point of collapse is between the distribution and retail elements, and this is where the likes of Amazon’s on-demand service come in. However, it is not the only convergence point, and content owners should be defining the online value chain to position themselves at the centre of it, not taking what they are given.

In the physical world, selling content straight to the consumer had been a logistical impossibility for content owners other than TV broadcasters, but with digital a direct channel to the consumer becomes a distinct possibility. What we now have is the possibility of combining distribution and content creation. For content creators this means serving customers directly through on-demand services over the Internet.

All the major film studios have their own on-demand services and provide early access to on-demand content through them. For example, Universal launches films on its site the same day they are released on DVD. This demonstrates that although the sale of DVDs is the most lucrative of the second rights windows, a direct online sales channel is even better value for the studios.

However, while a direct sales channel is good for studios, the highly lucrative DVD/Blu-ray retail market is being threatened from all sides – by subscription rental, by VoD download and streaming, and by pirates – all of whom are targeting the online opportunities opened up alongside the physical windows. Studios are therefore attempting both to maximise their opportunity in the new channel and to protect their traditional revenue streams. It is a decidedly difficult path to tread, and the winners and losers will inevitably be defined by their ability to pull on the vital parts of the value chain.

Taking each link in the chain in turn, we can see how the four new business approaches are being created and how telcos could contribute to their development.

Content Creation

The major studios have professional content ownership in the bag, at least for the time being. There is little doubt that while user-generated content (UGC) creates an interesting new dynamic, especially for news services, and cheaper, more accessible production and online distribution theoretically open up greater opportunities for independent productions, neither is close to mounting a serious challenge in reality.

There have been a few examples of this in music, where independent artists have used the Internet to distribute their work and social networks to market it, thus cutting out the need for a music label. Given that the music companies’ experience with online distribution is the one cited by all content owners as the model to avoid, this is worth watching.

Perhaps a stronger play is to draw on new interactive capabilities to supplement the content creation process, helping to define plot lines and characters. This is an area that has already seen some collaboration between telcos and studios, for example when Sprint worked with WPP’s Mindshare and NBC to create a new character for Heroes. This was in fact conceived as a development for advertising rather than creative content, but it establishes a precedent for engagement with a show’s audience.

These are both developments that demand tracking but they’re not the most pressing issue at the moment. That comes from the convergence of the online distribution and retail elements of the video value chain.

Distribution and Retail

The distinguishing line between retailers and distributors is fading in the digital world, as getting content from content owner to retailer is a simple electronic process. As a result no distributor is required and the content owner can go directly to the retailer which, if independent of the studios, can also act as an aggregator. So the more likely structure is that content owners will become their own retailers, competing with independent aggregator-retailers that provide a one-stop shop for content from multiple media companies.

This might appear nothing but positive, but without the distributor or the requirement for a physical outlet, retailers have the ability to completely reset the pricing model according to their vastly reduced cost base. Content owners have little or no influence over the prices retailers choose to set once the retailer, or indeed the renter, has purchased the selling rights. That has already set a challenge to content owners, as we have seen with Apple iTunes and music, and increasingly with TV and film content. The $0.99 film may appear a good deal to the consumer, but is it enough to sustain investment in new film development – and once price expectations are set, can they be increased?

Studios are, not surprisingly, anxious to avoid having pricing dictated to them by a single dominant online retailer, but to compete effectively they need to differentiate their online stores – through content, of course, but also through the ease of use and relevance of the service.

Unlike telcos, or even cable companies and DSAT service providers, content owners have access to the content with which to differentiate their online service from competitors. They are of course limited to their own content but, with a limited number of top studios and content creators, consumers would be able to find their way to the content they liked most – just as they find their way to the content they want on different TV channels. But content alone does not a service make, and this is the steep learning curve that content owners are embarking on.

A full service requires effective content delivery; quick, easy and secure payment/billing facilities; a slick search and recommendation engine; easy links and access to additional services or more video content; and efficient customer care. None of these are core competencies for film and TV studios, or even games developers and publishers. However, they are core assets of telcos – the same telcos that are struggling to get the right level of premium content from the studios. There is a natural synergy here, even more so when you take into consideration the motivations for telcos to get into the entertainment arena.

To this end we have identified core telco assets that could aid that differentiation. These are:

  • Interactive Marketing and CRM – telcos are capable of, and willing to, share the information they hold on customers and their behaviours, leveraging both personal and contextual data to create valuable information services
  • Customer Care – a core telco competency that is completely lacking from the skill set of media companies
  • Three Screen Delivery – telcos have vast experience in identifying devices, OS and software platforms, detecting software configurations and radio connectivity, and transcoding to deliver content in the right format for the device (a minimal sketch of this follows below)
  • Direct Payments – through the telco bill, one-click secure and easy payments are possible, overcoming one of the major obstacles identified as putting off prospective customers; this includes the ability to detect account status, billing and payments capabilities
  • Identity-based Content Delivery – the telco’s ability to identify customers and deliver across platforms provides another way to build a content-anywhere business

(For more, see How to Out-Apple Apple).
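To make the Three Screen Delivery asset concrete, here is a minimal sketch of a device-aware delivery decision. It is illustrative only: the profiles, bitrates and user-agent rules below are our assumptions, not any operator’s actual stack.

```python
# A minimal sketch of three-screen delivery: classify the requesting device,
# then pick a matching encode profile. Real operators use commercial device
# databases rather than user-agent sniffing; everything here is illustrative.
PROFILES = {
    "tv":     {"container": "mpegts", "max_kbps": 8000, "resolution": "1080p"},
    "pc":     {"container": "mp4",    "max_kbps": 4000, "resolution": "720p"},
    "mobile": {"container": "mp4",    "max_kbps": 1200, "resolution": "480p"},
}

def classify_device(user_agent: str) -> str:
    """Crude classification by user agent; a stand-in for device detection."""
    ua = user_agent.lower()
    if "iphone" in ua or "android" in ua:
        return "mobile"
    if "smart-tv" in ua or "settopbox" in ua:
        return "tv"
    return "pc"

def delivery_profile(user_agent: str) -> dict:
    """Return the encode profile to transcode/deliver against."""
    return PROFILES[classify_device(user_agent)]

print(delivery_profile("Mozilla/5.0 (iPhone; CPU iPhone OS 4_0 ...)"))
# -> {'container': 'mp4', 'max_kbps': 1200, 'resolution': '480p'}
```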

It will also be important for studios to act quickly, as hardware vendors have watched the Apple phenomenon and are looking to get a slice of the pie – meaning there is far more than one new entrant.

New roles – device vendors sell content

Entertainment device manufacturers rely on new formats and technologies linked to content to sell new hardware – eg high-resolution and 3D films, TV and games. That is how their business model works, but online also gives them the potential to move up the value chain and into the retail sector, grabbing a share of the revenue as Apple has done. It is a process that has been exemplified in the mobile industry: with apps and app stores, device manufacturers can build communities and develop the potential for three revenue streams – hardware sales, revenue share on app sales and even software licensing to app developers.

There are two converging trends involved here. On the one hand, entertainment and communications devices are becoming one and the same thing; on the other, devices are moving up the value chain and vendors are looking to gain revenue from outside the highly competitive consumer electronics markets.

For example, Sony has built an Internet community around the PS3, using it as the backbone for its connected TV network. It is looking to gain a recurring revenue stream to sit alongside the one-off hardware revenues of the TVs themselves. TV replacement cycles are around 7-8 years and, while the speed of innovation is increasing – 3D is coming relatively quickly on the heels of HD, compared with the moves from colour to remote control to digital, for instance – it is still a more uncertain business and one more influenced by macroeconomic trends than pay TV services.

Sony is particularly interesting because its previously separate divisions serving the consumer electronics and entertainment markets are combining their assets to mount a significant challenge to the value chain. For example, in 2008, the Sony Pictures film Hancock was offered to US owners of its Bravia TV sets just a few days after the cinema launch and before the DVD went on sale, completely usurping the windowed release process that sees films follow a well-defined path from cinema to DVD/Blu-ray sale, then to rental, followed by pay TV and finally free-to-air TV.

The Hancock experiment has not been repeated; instead Sony has done deals with other online distributors for film on its Bravia sets, such as with LoveFilm in the UK. But it does show the potential if, like Sony, a company can control both ends of the value chain. However, the new competition does not end there.

New players – TV changing times

Continuing with the TV example, we can see that traditional TV broadcasters look at web distribution as an opportunity to:

  • Extend their reach beyond the limitations of the time-specific schedule
  • Diversify their offering by creating additional services and interactivity around broadcast services

On the flip side, though, the web is also a threat, as it takes away the broadcaster’s iron control over the distribution channel. YouTube has proved to be a key battleground: broadcast TV programmes that were once posted for free within minutes now have a fee attached, and a raft of country-specific as well as general providers of managed solutions has emerged as quality becomes a greater differentiator.

Taking this a stage further are the Internet players, such as Apple and Google, which have both released TV-specific services that extend their Internet expertise and services to the TV screen. Most significantly, these also extend their respective business models to the TV screen. How these fit into and challenge the overall connected TV market is illustrated in the table below, which features a number of players trying out new business models in attempts to gain a greater share of the market, from the US, the biggest market for video entertainment, and the UK, Europe’s largest market.

Table 4: Who’s Doing What in Connected TV

Pay to view TV services

Amazon
  • Product: Online retail community – streaming and download
  • Business model: Retail model for sale and rental. New business approach: Content Storefront
  • Strengths: Strong online retail brand and community; cloud-based storage and access
  • Weaknesses: Complex rights management; weak mobile/portable service

Apple
  • Product: Apple TV box plugs into any HD TV to provide the Apple interface to content through the App Store and iTunes, with Mac-like computer navigation. Available now. Rumours persist about a $30 per month subscription service, but Apple has not yet been able to secure the content deals to make this happen
  • Business model: Hardware sales and 30% revenue share of all film and TV downloads. New business approach: Content Storefront & Content Anywhere
  • Strengths: Apple brand; leverages existing content environment; iPhone and iPad for mobile reach and ads
  • Weaknesses: On-demand service only, no scheduled broadcast; depends on the content it can get – this has held up the subscription-based monthly TV service

Google TV
  • Product: Box from Logitech or embedded in new Sony TVs. Leverages Google search capabilities and the Android developer community. To launch Autumn 2010
  • Business model: Revenue share from Android Market and extension of AdWords from the Internet to TV. New business approach: Content Channel & Content Anywhere
  • Strengths: Google brand; search engine; Android app developer community and mobile reach
  • Weaknesses: Scale – hardware has to be bought; access to content, currently blocked by four major broadcasters in the US; dependence on hardware vendors and content owners

IPTV

Verizon FiOS
  • Product: IPTV service with hundreds of channels, VoD and PVR
  • Business model: Triple-play service – bundled with broadband and telephony; defensive activity to protect comms revenues. New business approach: Content Anywhere
  • Strengths: Triple play – control of both the Internet and TV channels into the home; control over the delivery pipe
  • Weaknesses: Scale; lack of exclusive access to premium content and content differentiation

Sky Player (UK)
  • Product: Web TV player behind a pay wall giving subscribers the same service as they get through the satellite TV service
  • Business model: Watch-anywhere value-add; defensive play against web TV players. New business approach: Content Anywhere
  • Strengths: Leverages existing premium content deals and original content – especially sports; extensive back catalogue
  • Weaknesses: Ability to control the quality of the experience

NetFlix/LoveFilm
  • Product: Postal and online rental service
  • Business model: Subscription model for rental that breaks the pay-per-view rental model. New business approach: Content Anywhere & Content Channel
  • Strengths: Subscription model attractive to consumers wanting a set cost; the model naturally migrates online and gets stronger with lower distribution costs
  • Weaknesses: At the mercy of content owners wanting to protect revenue from other windows; no control over the pipe

Cable – TV Everywhere
  • Product: In beta testing on Comcast’s Fancast. Led by Comcast and Time Warner, this provides online access to their cable content behind a pay wall
  • Business model: Watch-anywhere value-add to prop up premium subscription packages; defensive play against web TV players such as Hulu. New business approach: Content Anywhere
  • Strengths: Leverages existing premium content deals and original content; extensive back catalogue; ability to control the quality of the experience
  • Weaknesses: Late to the game; limited to those who already have a cable subscription – no online-only business model

Hybrid Pay and Free

Hulu Plus
  • Product: Launched June 29, 2010. $10 a month subscription premium service for near real-time access to TV content on all screens, with pick-up-and-play handover across all of them
  • Business model: Freemium model building value-add on top of the free Hulu service, adding subscription revenue to advertising. New business approach: Content Channel
  • Strengths: Access to premium new and catalogued content from its parent companies; two-sided business model; three-screen delivery
  • Weaknesses: Access to content beyond that of its founders; no control of pipe QoS – possible victim of throttling

YouTube
  • Product: Free-to-view video upload for UGC, plus advertising-supported professional content. Experimenting with paid-for content
  • Business model: Ad-funded. New business approach: Content Anywhere
  • Strengths: Brand, scale and reach; Google technical and financial backing
  • Weaknesses: High costs of storage; lack of advertising placement

Free to view web TV services

Free to Air – Project Canvas (UK)
  • Product: Extension of Freeview (all free-to-air digital channels) to the web with full player capabilities
  • Business model: Ad-funded. New business approach: Content Anywhere
  • Strengths: Reach – accessible by anyone with web access
  • Weaknesses: Player functions, such as fast forward, undermine the value of advertising; late to the game; no control over delivery QoS – possible victim of blocking/throttling

Hulu
  • Product: Free access to a range of Flash-based streamed TV and Internet video. Scheduling based on …
  • Business model: Ad-funded, although the intention to introduce some paid services has been announced. New business approach: Content Anywhere
  • Strengths: Free access limited only by Internet connections; access to premium new and catalogued content from its parent companies
  • Weaknesses: No control over the quality of the connection – possible victim of throttling; access to premium content from other media companies; non-sustainable business model for the most valuable content

Source: Telco 2.0 Initiative

Apple goes it alone

Apple TV is predominantly a pay-as-you-go model based on iTunes; it is a virtual retailer and has pioneered the content storefront model. Interestingly, TV is the first Apple business not to be built around hardware, suggesting that the company believes its online store is strong enough to carry its own business. However, Steve Jobs has referred to Apple TV as a project that will remain a hobby for some time to come. Despite this, Apple still has strong influence over the online video market: its pricing policies for video on iTunes reset pricing levels, and its stand on not supporting Flash is not just a technology decision based on the ‘buggy’ nature of Flash, as described by Jobs.

Flash fire

Certainly Flash is quite heavy and would slow the iPhone down, so Apple is protecting its user experience; but it is also protecting both its upstream and downstream business models for professionally created video content.

The vast majority of online video services, from Hulu to Netflix, LoveFilm and the TV network players, are Flash-based. This means they can be viewed via a browser on any device that supports Flash. By taking Flash out of the equation, video consumers on the iPhone are left with two choices: get a different device that supports Flash, or purchase video content through iTunes – and that means more revenue for Apple. Apple talks about HTML5 as the natural replacement for Flash, but this is still a new technology and Apple will look to maximise its position for as long as possible.

Video service providers and content owners also have two choices: build a second site that works with HTML5, as the BBC has for the iPlayer; or create apps for the devices, as Netflix, LoveFilm and Hulu have. The latter again works well for Apple, as the apps strengthen the consumer’s tie to Apple hardware – should the consumer want to move to, say, an Android device, they’d lose the app – so it reinforces the company’s hardware business model. Furthermore, where applicable, Apple also gets its 30% of the app revenue and, of course, SDK revenue.

The decision not to support Flash may have something to do with technology and protecting the user experience but it clearly also has everything to do with reinforcing the strength of Apple’s existing business models.

As with all their other services, Apple and Google are taking fundamentally different approaches: Apple is expanding its closed iTunes environment, while Google is all open. Apple is betting on taking a share of distribution/retail revenue; Google on turning TV into another, potentially huge, extension of its advertising platform.

The big question, therefore, is what this means for the broadcasters’ advertising model: if Google’s version, which is performance-based and uses real rather than predicted data, is available on the TV screen, won’t the value of TV advertising decrease?

In Apple’s case, it establishes a direct relationship with the customer, while Google’s play is purely upstream, extending its reach and delivering a new audience to its advertisers.

Apple also has another and, for it, more significant objective: selling new hardware. While its famous App Store brought in $1 billion in its first complete year of activity, 30% of which went to Apple, that is put into perspective by the company’s total revenues of $13.50 billion and net quarterly profit of $3.07 billion for the second quarter of 2010.

Connected TVs could represent another diversification for a company that has been so successful at moving into the mobile device market and, just as in mobile, Apple is looking to leverage the tight integration of its content marketplace to differentiate its hardware. This has been the primary driver of Apple’s business model, but the relatively low retail price of the box and the lack of any real design differentiation mean that, for TV, Apple will be looking to create value from the service on the big screen and combine this with tight integration on the portable ones – the iPod, iPhone and iPad.

Google, on the other hand, is working in conjunction with hardware companies – most notably Sony for the Internet-enabled TV end product, and Intel to get the Google functionality embedded in chipsets. And Google’s objectives are not small: a spokesman for the company has been quoted as saying that it aims to have as big an impact on the TV industry as smartphones have had on the mobile world.

Differentiation options

In essence, all the forces converging on TV are competing for the eyeballs of the consumer, and the way they are doing this is to offer the consumer more choice: more content on more devices to be watched at more times. The consumer utopia, it seems, is to be able to pick exactly what to watch, when to watch it and on what screen; in other words, the complete antithesis of broadcast TV, where you get what you’re given, when it’s broadcast, and always on the same type of screen.

There are, however, problems with this utopia, and the companies that solve them most effectively will turn out to be the long-term winners. As we are starting to establish, there are four key points of competition:

  • The range of premium content
  • Integration of Internet apps to enhance the viewing experience
  • The user interface/guide for finding the content
  • Extension to the mobile screen for true anywhere, anytime viewing and enhancement of the big-screen viewing experience

This is not to say that these are the only ways to compete, merely that they are the primary ones at the moment. Of these, only the appeal of content is proven; consumer demand for, and willingness to commit dollars to, the other three are as yet unproven.

Companies are, however, beginning to place their bets, and while some are predictable – such as Google looking to leverage its search expertise for content discovery and its Android app development community to bring Internet-style services to TV programming – a lot more remain unclear. In particular, the ability of content owners without an established distribution channel to draw upon to go directly to consumers is in question.

Interaction with Internet applications

For Google, the key is a Software Development Kit (SDK) that will help independent content providers develop widgets to access its platform and content, and to participate in Google’s advertising revenue-sharing programme, similar to AdSense in desktop apps.

The Google TV proposition is Android-based to draw on the rapidly growing global developer community to create new and innovative ways for TV viewers to interact with their TVs. Facebook and Twitter widgets would provide easy-to-use chat facilities around key programming, for example. They are already in use, and linking them in real time with the programme screen itself adds a further level of social networking. Beyond that, the possibilities are almost limitless, but platform owners must understand what they are doing and the possible impact on existing revenue streams – particularly advertising – as they could find their business model undermined.

For example, at a recent conference in London, an ITV representative speaking about Project Canvas stated that widgets and other apps would be uploaded onto the platform for free. Now imagine if an existing advertiser – say Betfair, which advertises alongside sports events – launched a widget that enabled live betting on the event being broadcast. Why would it pay for advertising in a prime spot when it can launch the widget for free?
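As a purely hypothetical sketch – no Google TV or Project Canvas API is implied, and every name below is invented – a broadcast-linked widget might be structured around timeline cues like this:

```python
# Hypothetical second-screen widget that surfaces prompts (chat, polls, live
# betting) at cue points in a broadcast. Illustrative only; not a real SDK.
import time
from dataclasses import dataclass

@dataclass
class Cue:
    offset_s: int   # seconds into the programme
    prompt: str     # what the widget surfaces at that moment

CUES = [
    Cue(60,   "Kick-off: open live chat"),
    Cue(1800, "Odds update: show live-betting panel"),
    Cue(2700, "Half-time: show viewer poll"),
]

def run_widget(programme_start: float) -> None:
    """Fire each cue when the broadcast reaches its offset."""
    for cue in CUES:
        delay = programme_start + cue.offset_s - time.time()
        if delay > 0:
            time.sleep(delay)
        print(cue.prompt)  # a real widget would render an overlay here

# run_widget(time.time())  # start alongside the live broadcast
```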

So the big Internet companies are coming to the TV party. They bring with them their own business models and these will compete with and impact on those of established TV studios, networks and distributors.

Free to air broadcasting is a simple one-sided upstream business selling advertising timeslots, whereas pay TV has a two-sided business model which adds consumer subscriptions to the revenue pot. A third pay-per-view element also exists for some pay TV services but revenues from these have declined over recent years. Following these developments through, we can see some major changes on the horizon.

Impact on advertising

TV accounts for the largest single proportion of advertising spend. Globally, it secures 36.6% of total ad revenues, according to PwC, and although spending on TV advertising has fallen over the last two years, it remains a default choice for buyers. It’s the IBM decision – no one gets fired for making the obvious choice. But if the TV world is changing, isn’t advertising going to follow suit?

Initially at least, research suggests that the answer is no. The $56 billion US TV advertising segment is expected to recover, growing by 9.8% during 2010, thus erasing last year’s losses and returning the sector to 2006-2008 levels, according to the Magna Global Advertising Forecast, April 13, 2010.

At the 1st Hollywood-Telco Executive Brainstorm, executives were at pains to point out how TV still dominates viewing consumption patterns and delivers a better reaction to advertisements, and is therefore seen as more persuasive than radio, print or online. According to the TVB/Nielsen Media Research Custom Survey 2008, cited by one of the speakers, 70% of adults believe that TV adverts are more persuasive than adverts in other media. It would seem that the TV/advertising love affair is set fair. However, we believe that the building trend towards on-demand viewing is challenging this, particularly for free-to-air broadcast services.

If we look at online advertising, we can see that it currently accounts for 12% of marketing budgets, versus the 34% of their time that users spend online. However, tying strategy to a delivery medium rather than a business model is a fundamental mistake. It’s not about online versus cable versus broadcast, but live versus on-demand. This is the change that undermines the very foundations of TV: broadcast and scheduling.

Scheduling, search and discovery

Of all the new functions hitting the market, it is on-demand that is making the greatest impact at this stage. At the 9th Telco 2.0 Executive Brainstorm, we asked the audience whether discovery would become the new search. The results were negative, but it was the wrong question for entertainment, and particularly for TV. The bigger question is whether discovery can become the new scheduling.

Broadband connectivity means that anyone can potentially watch what they want, when they want it and where they want to. In theory, therefore, individuals can do the job of the schedulers of TV networks and channels and personalise it for themselves. This is especially true for the mass of archived material that is the mainstay of a host of cable and DSAT pay TV channels. And the usage figures seem to confirm that consumers’ desire to control their own viewing is an irreversible trend, not a short-term fad.

A clear trend has developed around time-shifting, whether through VoD, on-demand or delayed TV; watching when the viewer wants, not when the scheduler says, is proving ever more popular. According to ComScore research on US viewing behaviour, 55% of viewers now watch original programming at a time other than when it is scheduled. Furthermore, although it is more pronounced in the younger demographic groups, it is also permeating the viewing habits of the over-50s and even over-60s, with 43% of over-65s watching programmes after they have aired. In short: everyone is time-shifting.

However, a completely unstructured service where consumers search for the content they want relies on them knowing what that is. TV is generally regarded as a ‘lean back’ experience: it doesn’t require a huge amount of concentration, unlike, say, video games, and as such consumers don’t want to, and won’t, spend protracted periods of time searching for something – especially if they don’t know what they are searching for. The default, therefore, becomes to stick with what you know. Just as many of us do with music, our viewing tastes could get stuck in time, and we would miss out on new and different content.

We therefore have two contradictory user behaviours driving service requirements for on-demand video: on the one hand, consumers want to choose their viewing, not have it thrust upon them; on the other, they don’t want to make a huge effort to find it.

The viewing guide is one of the most criticised parts of cable, DSAT and IPTV services. Often slow and clunky, the guides, which list the schedules for each channel, struggle to deliver the large amount of information they hold in a meaningful and useful way. If we then take out the timings, searching becomes even more difficult. More sophisticated ways of discovering content are needed – encompassing search, recommendation and some form of default scheduling – and models for this are appearing.

Recommendation engines, such as those used by Amazon or Apple’s Genius, which use observed tastes to make suggestions for future purchases, are now well-established practice in online retail (the principle is sketched below).
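The underlying idea is simple enough to sketch. The example below is a bare-bones co-occurrence recommender – our illustration of the observed-tastes principle, with no claim that Amazon or Genius works this way internally.

```python
# Minimal co-occurrence recommender: suggest titles that other viewers with
# overlapping histories also watched. Illustrative of the principle only.
from collections import Counter

def recommend(histories, watched, top_n=5):
    """histories: list of sets of titles; watched: this viewer's titles."""
    co_counts = Counter()
    for history in histories:
        overlap = history & watched
        if not overlap:
            continue
        # Each title seen alongside ones this viewer likes gets votes.
        for title in history - watched:
            co_counts[title] += len(overlap)
    return [title for title, _ in co_counts.most_common(top_n)]

histories = [{"Heroes", "Lost"}, {"Heroes", "Lost", "24"}, {"24", "Mad Men"}]
print(recommend(histories, {"Heroes"}))  # ['Lost', '24'] by co-occurrence
```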

However, there is potential to take this a stage further. For example, the online radio service Last.fm offers playlists based on the popularity of tracks as a default, as well as search and recommendation. This is interesting primarily because it has the effect of increasing the amount and variety of music listened to by consumers. This may seem a nice, fluffy feature, but it is important for the discovery and support of new music talent, and therefore the continued life of the industry.

Last.fm has flaws. It runs two business models: a free, ad-funded one in the US and UK, and a subscription model in all other territories. This, the company says, is because it doesn’t have the sales capabilities necessary to run an effective ad sales operation outside its core territories. Which model will dominate in the long term is still uncertain, but the value of an effective recommendation engine that acts as a scheduler is not.

As one speaker said at the 9th Telco 2.0 Executive Brainstorm in London, the opinions that influence his viewing habits are those of British actor/comedian Stephen Fry and his mate Dave. No scheduler can replicate that, but an effective mash-up of recommendations – perhaps through Facebook, Twitter and a purpose-built recommendation engine such as Amazon uses – could.

Pay TV services increased choice dramatically over free-to-air alone and split the ad revenue, taking the number of channels from single figures to hundreds. Now imagine what happens when everyone has an individual channel.

The impact is already being felt on advertising revenue, as these predictions for the US market from Magna Global reflect.

Table 5: US Ad Revenue According to Platform

Platform          2009 ($m)     Est. 2010 ($m)    Est. 2011 ($m)    % Difference (2010-11)
Digital Online    $22,843.7     $24,611.7         $26,792.0         +8.9%
Cable TV          $20,148.8     $21,491.7         $22,477.3         +4.6%
Broadcast TV      $27,789.3     $29,047.9         $27,384.0         -5.7%

Source: Magna Global data presented at 1st Hollywood-Telco Executive Brainstorm, May 2010
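A quick calculation (ours, not Magna Global’s) confirms that the % Difference column reflects the change from the 2010 to the 2011 estimates:

```python
# Recompute the year-on-year changes implied by Table 5 (2010 -> 2011).
platforms = {
    "Digital Online": (24_611.7, 26_792.0),
    "Cable TV":       (21_491.7, 22_477.3),
    "Broadcast TV":   (29_047.9, 27_384.0),
}
for name, (est_2010, est_2011) in platforms.items():
    print(f"{name}: {100 * (est_2011 - est_2010) / est_2010:+.1f}%")
# Digital Online: +8.9%, Cable TV: +4.6%, Broadcast TV: -5.7%
```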

It does, however, mean that advertisers can target far more effectively and finally put to bed John Wanamaker’s infamous adage: “I know 50% of my advertising is wasted, I just don’t know which half.”

However, there are problems with this vision as well.

On the revenue side, it requires new ways to measure viewing of shows and ads, while in terms of the user experience, it is essential to find an effective way for users to find what they want and discover what they will like but don’t yet know about.

Basically, this is a data-crunching business. It requires personal data, usage information, content metadata, device preferences and more to be combined to create valuable information. Telcos, with their knowledge of the customer, device and environment, have high value here. Even more valuable is the willingness of telcos to share this data with content owners, unlike many of the alternatives.

Extension to the mobile screen

Beyond time- and place-shifting, the other major area of competition and development is the ability to deliver content across multiple screens. In many ways it is a development of the same trend: consumers do not want to be restricted in where, when and how they view. The seamless shift from one screen to another would, for example, allow a consumer to move from TV to mobile and then to PC as they commute to work.

These pause-and-pick-up services already exist. Netflix supports this, and it is a central part of the cable industry’s TV Everywhere initiative. Meanwhile, the ability to buy once and view on any screen is the functionality at the heart of rights locker propositions such as Disney’s KeyChest or the UltraViolet initiative mentioned earlier.

However, we believe that this alone is only part of the story.

User behaviour suggests that consumers actually like to use more than one screen to interact with content. For example, it’s all very well getting detailed stats to accompany a baseball game through a multi-screen view on IPTV, say, but this interferes with the primary viewing function. As ESPN has discovered, a more effective approach is to provide the stats to a mobile device through an app, giving the consumer the ability to choose when and how they look up the info. Add in a chat facility, and watching the ball game at home becomes as social an activity as seeing it live.

Getting the right platform for the right content is not a trivial matter. It is often assumed that the more intense the experience – the more it consumes the viewer – the more valuable it is. Furthermore, the value is assumed to increase as the connection gets faster, as this allows greater intensity. This is not necessarily the case.

Some ‘lean forward’ activities can benefit from greater speeds. For example, many online games will improve with faster response times, and these are highly immersive activities. However, not all activities require, or always benefit from, greater immersion. 3D TV has provoked great debate along these lines, as the most intense experience is seen to require greater engagement and brain activity, making it a more complete viewing experience but also a less social one.

Sociability is both a sought-after and valuable feature, and is not necessarily driven by high-immersion, high-speed experiences. It’s about getting the right device for the experience, related to the user’s activity. And that requires information about what screens the user has, and how, where and when they use them. This is the kind of new user data that telcos can collect and use without drawing heavily on complicated legacy BSS and OSS systems.

So, in summary, we have a situation in which online video distribution is a reality and growing at a fearsome pace, but the business models and value chain are far from clear. Indeed, the stress points in existing models, for both studios and telcos, are more obvious than the revenue opportunities. Both are areas in which telcos are well placed to play an important part.

The next section therefore looks at how the assets and developments of the content and telco industries can be combined for the benefit of both.

Telco and Media Collaboration

The options for telcos are many, and the approach taken depends on the structure of the entertainment industry in their country and on their own set-up. Among the considerations they must assess are their willingness to collaborate, the assets and skills available to them, and the regulatory environment. As we’ve already established, for some telcos IPTV is a viable option; for others, mobile as a mainline channel is also worth pursuing, as it is the most ubiquitous option.

In addition, a third downstream option is emerging: for telcos to be an access provider to digital rights lockers.

On the upstream side, the opportunities are many, but they are far less defined, and their development seems stuck in an endless chicken-and-egg situation in which each side waits for the other to define the services required. To help move this discussion on, and to define some near-term opportunities, we have narrowed the list of possibilities by examining what media companies are looking for from telcos.

Where to Start

Surprisingly, combating piracy, which gains so many headlines, is not uppermost in the list of priorities, according to our research. Instead, the issues that dominate the thoughts of media executives are primarily those we have outlined earlier, namely the ability to differentiate by delivering across any screen, enhancing the user experience and improving content discovery, as illustrated in the table below. Underlying that is the desire to redefine and build the value of the upstream side of the entertainment business, i.e. advertising.

Table 6: Importance of addressing issues facing media companies rated 1-5*

*Where 1 is of no importance and 5 is critical
Source: 1st Hollywood-Telco Executive Brainstorm, Santa Monica

However, along with piracy, the area in which telcos and entertainment distributors are most likely to interact is in conflict over the quality of the pipe they are receiving.

QoS, QoE and throttling back

Video is all about the viewing experience, so anything that influences that experience is of vital importance to content owners, and to the retailer/distributor where this is a third party such as Hulu, Netflix or LoveFilm. As these media service providers have no ability to control the pipe, they use adaptive rate video technology to sense the bandwidth available and deliver the quality of video the connection can deal with effectively.

Adaptive rate technology works up to a point, as it means they can deliver the best possible video for the bandwidth available at any given time. However, the underlying transmission speed and quality remain beyond the control of the media service provider, so even where better connectivity is possible, it is not necessarily being delivered. Furthermore, within the confines of the thorny net neutrality debate, the throttling of service types, and in some instances of specific services, is happening to the detriment of online video services. For example, in the UK, LoveFilm has examples of customers with 20Mbit/s connections unable to get a satisfactory service even though other video streaming services, including the BBC iPlayer, work perfectly well.
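To make the mechanism concrete, here is a minimal Python sketch of the rendition-selection logic an adaptive rate client typically applies; the bitrate ladder and safety margin are illustrative assumptions, not any particular service’s values.

```python
# Minimal sketch of adaptive bitrate (ABR) rendition selection. The
# rendition ladder and safety margin below are illustrative assumptions.

RENDITIONS_KBPS = [400, 1000, 2500, 5000, 8000]  # hypothetical bitrate ladder
SAFETY_MARGIN = 0.8  # use only ~80% of measured throughput, to absorb jitter


def estimate_throughput_kbps(segment_bytes: int, download_seconds: float) -> float:
    """Throughput observed while fetching the last video segment."""
    return (segment_bytes * 8 / 1000) / download_seconds


def select_rendition(measured_kbps: float) -> int:
    """Pick the highest rendition that fits within the safe bandwidth budget."""
    budget = measured_kbps * SAFETY_MARGIN
    viable = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(viable) if viable else RENDITIONS_KBPS[0]  # fall back to lowest


# Example: a 2 MB segment downloaded in 4 seconds ~= 4000 kbps observed,
# so the client steps down to the 2500 kbps rendition rather than 5000.
if __name__ == "__main__":
    kbps = estimate_throughput_kbps(2_000_000, 4.0)
    print(select_rendition(kbps))  # -> 2500
```

This is why the viewing experience degrades gracefully rather than stalling: the client keeps trading picture quality against whatever bandwidth the pipe actually delivers.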

Telcos want a share of the video revenue being generated over their networks, and in throttling, deliberately reducing the speed of connections, they have a stick to beat media service providers with, should they wish to use it and be allowed to do so. However, just like DRM, throttling is a negative activity and will serve no positive purpose: consumers are just as likely to move ISP if their services don’t work as they are to move media service provider. If they want certain content, they will find a way to get it, and don’t be surprised if such throttling pushes more consumers to Pirate Bay and its like, where the additional wait to download rather than stream will be tolerated and the content comes free.
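For readers less familiar with the mechanics being debated, the sketch below shows a token bucket, one common way a network element caps a traffic class at a target rate; the rate and burst figures are illustrative assumptions, not any operator’s actual policy.

```python
import time


class TokenBucket:
    """Token-bucket shaper: a classic mechanism for capping a traffic
    class at a target rate (here expressed in bytes per second)."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # packet is delayed or dropped: the flow is throttled


# Example: cap a hypothetical "video streaming" class at ~1 Mbit/s.
shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=250_000)
```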

So is there a better relationship to be had?

Charging media service providers for QoS/QoE SLAs would be the first choice of telcos, and while our research suggests that telcos believe this to be more of a possibility now than a year ago, our view is that the market will need to see consistent failures before it becomes a capability media service providers will pay for en masse. Therefore, if telcos can’t charge upstream players for a guaranteed pipe, at least in the short term, they need to look at what other telco assets can offer media companies.

In our analyst note, ‘How to Out-Apple Apple’, we identified a series of telco assets that could be valuable to media companies that sell, or intend to sell, their content directly to customers through online outlets. These assets, and how they compare with competitors’, are summarised in the table below.

Table 7: Telcos Offer Unrivalled Asset Combination

|                     | Payments                    | Content Delivery           | User Experience – 3-screen | Interactive Marketing/CRM | Customer Care |
| Apple               | Yes                         | Yes                        | No                         | No*                       | No            |
| Amazon              | Yes                         | Yes                        | No                         | No*                       | No            |
| Netflix             | Yes                         | Yes                        | No                         | No                        | No            |
| Cable Cos/Satellite | Yes                         | Yes                        | Partially                  | Partially                 | Yes           |
| Other enablers      | Banks, credit cards, PayPal | e.g. Akamai, L3, Limelight | –                          | Marketing/ad agencies     | Outsourcers   |
| Telcos              | Yes                         | Yes                        | Yes (converged telcos)     | Yes                       | Yes           |

*Apple and Amazon have interactive marketing and CRM functions but do not pass data on to content owners.
Source: Telco 2.0 Initiative

At our first Hollywood-Telco Executive Brainstorm, the value of these telco assets and other capabilities was discussed and rated. All were recognised as offering possible value to studios in the development of their own services. As the graph below shows, it was the functions nearest to the consumer that rated highest.

Table 8: Telco Capabilities Rated (1-5*) According to Their Perceived Value to Content Owners

*Where 1 is not valuable at all and 5 the most valuable
Source: 1st Hollywood-Telco Executive Brainstorm, Santa Monica

Again, the value of owning network infrastructure is recognised: hosting content locally and ensuring its effective delivery are natural roles for telcos to take on. Usage and access distribution is the next highest rated capability, and the expectation from studios and media companies is that telcos should enter the market as media service providers in some form or another; the downstream market should not be ignored. That said, the assets a stage further removed from consumers were also rated as valuable.

Identification and authentication, payments, decision support and data mining (listed under the heading of Interactive Marketing/CRM in table 7) and content protection are all rated as useful. However, there is a caveat: what media companies want is complete solutions, not raw data or APIs that they then have to build services around.

Over the next 12 months, telcos need to develop complete solutions that meet the needs of media companies. Simply saying that they have the assets and capabilities is not enough. The speed with which the market is changing also means that solutions need to be developed fast, as the window of opportunity for media companies to gain a more powerful position in the ecosystem will be relatively short. New and powerful players are entering the market and putting pressure on established price paradigms. Once changed, these are difficult, if not impossible, to change back until serious failures appear in the market. The next one to two years are therefore vital.

From a telco point of view, this makes playing upstream difficult if they have not already begun to develop solutions that can be packaged for media companies. Downstream opportunities are in some ways less time-sensitive, but the current market flux offers opportunities to establish a position that will be harder to reach once the market is more stable.

Strategic choices

Telcos therefore have a series of strategic decisions to make about how and where they play in the entertainment market. We have developed a structure of generic strategy choices based on the willingness and ability of telcos to move on and off their own network, and on whether they intend to offer an end-to-end solution to consumers or to play a specific and limited role in the ecosystem. The overall strategies these choices create are illustrated in figure 3 below.

Figure 3: Generic Two-Sided Business Model Strategies

Source: Telco 2.0 Initiative

Taking this a stage further, we can map this general theory onto the specific choices facing telcos in the entertainment market to create a framework with four different approaches to telco involvement in the entertainment ecosystem. As the market is developing, on-net activities are providing the greatest opportunities, as illustrated in figure 4 below.

Figure 4: Entertainment-specific Business Model Strategy Choices

Source: Telco 2.0 Initiative

Entertainment is an increasingly complicated market, with collapsing value chains, new entrants and new technologies that allow established players to compete in different ways. Having a clear idea of where telcos fit into this dynamic market structure is important, and there are roles for telcos both in supporting media companies in their attempts to go directly to consumers and in going directly to consumers themselves. A second phase, supporting multiple third-party platforms and media service providers, may emerge, but this is a stage further removed, as these currently compete with what telcos and media companies are trying to do themselves.

Taking a look at the bottom-left quadrant – enabling media companies to develop a direct sales channel – in more detail, it is possible to identify a range of ways in which to do this. Firstly, there is a range of activities and assets that can be deployed, as we have already outlined above; secondly, there is a range of ways in which to utilise those assets.

Using Telco 2.0’s established gold analogy, we can see that telcos have the opportunity to be more or less involved: at one end of the scale, offering what they have as raw data to media companies and letting them do with it as they wish, right through to offering a complete end-to-end service at the other.

Examples of the types of entertainment-orientated services telcos can offer at each stage are illustrated below.

Figure 5: Possible Telco Roles in Entertainment Industry

Telco 2.0 research suggests media companies put the greatest value on telco assets that are packaged for them as services. As illustrated above, this means a managed one-click payment capability, not an API on top of which they have to build their own payment service. What is more, they want things that have been proven to work, either for other industries or for the telco itself. Eating your own dog food is not just a sound bite to be trotted out at conferences; it is an essential tactic if telcos are going to gain credibility as upstream suppliers to the entertainment industry.
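To illustrate the difference between a raw asset and a packaged service, the sketch below contrasts the two; every class, method and identifier is hypothetical, invented purely for illustration.

```python
# Illustrative contrast between a raw enabler and a packaged service.
# Every class, method and field here is hypothetical, not a real telco API.

class RawChargingApi:
    """Raw asset: the media company must build the purchase flow,
    receipts and entitlements on top of this single call itself."""

    def charge_subscriber(self, msisdn: str, amount_cents: int) -> str:
        print(f"charged {amount_cents} cents to {msisdn}")
        return "tx-0001"  # transaction reference


class OneClickPurchaseService:
    """Packaged solution: the telco wraps charging and entitlement
    into one managed capability the media company simply calls."""

    def __init__(self, charging: RawChargingApi):
        self.charging = charging
        self.entitlements: dict[str, set[str]] = {}

    def purchase(self, msisdn: str, content_id: str, price_cents: int) -> str:
        tx = self.charging.charge_subscriber(msisdn, price_cents)
        self.entitlements.setdefault(msisdn, set()).add(content_id)  # grant access
        return tx


# Usage: the media company makes one call and the whole journey is handled.
service = OneClickPurchaseService(RawChargingApi())
service.purchase("+447700900123", "movie-42", 399)
```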

Fortunately, this is something the telecoms industry is recognising, as a vote at the recent Telco 2.0 Executive Brainstorm in London demonstrated: to take full advantage of the customer data they hold, telcos must first find ways to use it effectively themselves. During the session focused on the use and monetisation of customer data, the 180-strong group of executives was asked to rank, from 1 to 4, the importance of different strategies for beginning to use customer data in the short term (the next 12 months); using it for their own purposes ranked highest.

The telco two-sided business model for entertainment

There is little doubt that telcos have a strong strategic interest in the entertainment industry, both for its growth opportunity and for its impact on their existing businesses. However, they are entering a market that is itself in flux, and the opportunities are neither well defined nor static. The market is constantly changing, and so are the possible roles and activities open to telcos. Under such circumstances a two-sided approach to the business makes even more sense, giving more choice and opportunity to build revenue, as illustrated below.

Figure 6: Telco two-sided business model for entertainment

Source: Telco 2.0 Initiative

However, the extent to which a specific telco gets involved in the upstream and downstream opportunities will depend on its specific market conditions, together with its skills and assets. These must also be mapped onto the needs of media companies, as telcos must be aware of what the market is looking for.

Over the coming months we will look at each of the downstream models in detail, examining the conditions that make each play viable, together with the tactics that make each downstream strategy effective.

In addition to these we will develop use cases and identify case studies that help define realistic opportunities for upstream services.

Conclusions

  1. The transmission of entertainment online represents a significant growth market for at least the next decade. The addressable market is in the region of $700bn a year and growing, although turning all entertainment into digital/online forms will take more than a decade, if it happens at all
  2. Content owners are being challenged by changing patterns of distribution, retail and viewing, and are looking to secure their place in new value chains that at least protect, if not increase, their existing revenue streams
  3. Although the risks to content owners vary across genres, a major set of opportunities centres on the ability of content owners to go directly to consumers, which requires a skill set outside the core competencies of media companies. Telcos have many, if not all, of these capabilities
  4. A second set of opportunities exists for media companies to work together to build ecosystems around their content that compete favourably with the offerings of new internet players; UltraViolet, for example, could compete with Apple iTunes. These approaches create new and better downstream opportunities for telcos, as they lower the barriers for telcos to enter the entertainment delivery business and allow them to build the business incrementally
  5. Complete end-to-end services in the form of IPTV require significant scale and huge investment. Their success depends on achieving both, and they are heavily affected by the maturity of competing solutions such as cable and DSAT. Therefore, while prospects look good for telcos with large market shares operating in markets with weak pay-TV competition, the majority of telcos will find differentiation through IPTV difficult; its deployment is therefore best seen as a defensive tactic against telephony and broadband service provision by TV service providers
  6. Other downstream plays are possible for telcos, including:

    • Maximising the mobile opportunity by using mobile to enhance experiences on other screens, not just as another delivery channel. Telcos and content owners need to collaborate to develop value-added services that consumers will pay for
    • Operating as effective storefronts for entertainment content. Telcos have the customer scale, billing, customer information and CRM capabilities that make them ideal retailers, but to be effective they must offer content owners more than existing online retailers such as Apple and Amazon do. Of particular importance here is the willingness and ability of telcos to pass data about customer activity back to content owners
    • The storefront proposition is strengthened further by developments such as UltraViolet, which would enable telcos to build propositions while reducing the risk for both content owners and telcos
  7. Telcos have a range of core assets that could enhance the ability of content owners to move up the value chain and go directly to consumers, and thereby gain a greater share of the entertainment revenue pie
  8. No other player has the range of enabling capabilities telcos have. However, the telco USP lies in combining these enablers, and telcos must look at packaging them together as plug-and-play services for content owners. These need to apply across content categories (and possibly extend to other vertical markets) so that telcos can turn a niche vertical service into a broader and more valuable opportunity
  9. There is a finite window of opportunity for content owners and telcos to establish places in the new content ecosystems that are developing, before major internet players – Apple, Google – and new entrants use their skills and market positions to dominate those ecosystems. Speedy collaborative action between telcos and studios is required