The European Telecoms market in 2020, Report 1: Evaluating 10 forces of change

Introduction

Telecoms – the times they are a changin’

The global telecoms market is experiencing change at an unprecedented pace. As recently as 2012, few would have predicted that consumer voice and messaging would be effectively ‘given away’ with data packages in 2015. Yet today, the shift towards data as the ‘valuable’ part of the mobile bundle has been made in many European markets and, although many operators still allocate a large proportion of revenue to voice and messaging, the value proposition is clearly now ‘data-led’.

Europe, in particular, is facing great uncertainty

While returns on investment have steadily declined in European telecoms, the market has remained structurally fragmented with a large number of disparate players – fixed-only; mobile-only; converged; wholesalers; enterprise-only; content-oriented players (cablecos); and so forth. Operators have generally continued to make steady economic returns for investors and have been considered ‘defensive stocks’ by the capital markets owing to an ability to generate strong dividend yields and withstand economic downturns (although Telefonica’s woes in Spain attest to the limits of the telco business model’s resilience to recession).

But the forces of change in Europe are growing and, as a company’s ‘Safe Harbor’ statement would put it, ‘past performance does not guarantee future results’. Strategists are puzzling over what the European telecoms industry might look like in 2020 (and how that might affect their own companies) given the broad range of forces being exerted on it in 2015.

STL Partners believes there are 12 questions that need to be addressed when considering what the European telecoms market might look like in 2020:

  1. How will regulation of national markets and the wider European Union progress?
  2. How will government policies and the new EC Digital Directive impact telecoms?
  3. How will competition among traditional telecoms players develop?
  4. How strong will new competitors be and how will they compete with operators?
  5. What is the revenue and margin outlook for telecoms core services?
  6. Will new technologies such as NFV, SDN and eSIM have a positive or negative effect on operators?
  7. How will the capital markets’ attitude towards telecoms operators change and how much capital will be available for investment by operators?
  8. How will the attitudes and behaviours of customers – consumer and enterprise – evolve and what bearing might this have on operators’ business models?
  9. How will the vision and aspirations of telecoms senior managers play out – will digital services become a greater focus or will the ‘data pipe’ model prevail? How important will content be for operators? What will be the relative importance of fixed vs mobile, consumer vs enterprise?
  10. Will telcos be able to develop the skills, assets and partnerships required to pursue a services strategy successfully or will capabilities fall short of aspirations?
  11. What M&A strategy will telco management pursue to support their strategies: buying other telcos vs buying into adjacent industries? Focus on existing countries only vs moves into other countries or even a pan-European play?
  12. How effective will the industry be in reducing its cost base – capex and opex – relative to the new competitors such as the internet players in consumer services and IT players in enterprise services?

Providing clear answers to each of these 12 questions, and to their combined effect on the industry, is extremely challenging because:

  • Some forces are, to some extent at least, controllable by operators whereas other forces are largely outside their control;
  • Although some forces are reasonably well-established, many others are new and/or changing rapidly;
  • Establishing the interplay between forces and the ‘net effect’ of them together is complicated because some tend to create a domino effect (e.g. greater competition tends to result in lower revenues and margins which, in turn, means less capital being available for investment in networks and services) whereas other forces can negate each other (e.g. the margin impact of lower core service revenues could be – at least partially – offset by a lower cost base achieved through NFV).

The role of this report

In essence, strategists (and investors) are finding it very difficult to understand the many and varied forces affecting the telecoms industry (this report) and to predict the structure of, and returns from, the European telecoms market in 2020 (Report 2). This, in turn, makes it challenging to determine how operators should seek to compete in the future (the focus of an STL Partners report in July, Four strategic pathways to Telco 2.0).

In summary, the European Telecoms market in 2020 reports therefore seek to:

  • Identify the key forces of change in Europe and provide a useful means of classifying them within a simple and logical 2×2 framework (this report);
  • Help readers refine their thoughts on how Europe might develop by outlining four alternative ‘futures’ that are both sufficiently different from each other to be meaningful and internally consistent enough to be realistic (Report 2);
  • Provide a ‘prediction’ for the future European telecoms market based on the responses to two ‘wisdom of crowds’ votes conducted at a recent STL Partners event for senior managers from European telcos, plus STL Partners’ own viewpoint (Report 2).

  • Executive Summary
  • Introduction
  • Telecoms – the times they are a changin’
  • Europe, in particular, is facing great uncertainty
  • The role of this report
  • Understanding and classifying the forces of change
  • External (market) forces
  • Internal (telco) forces
  • Summary: The impact of internal and external forces over the next 5 years
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: O2’s SIM-only pay monthly tariffs – many with unlimited voice and messaging bundled in
  • Figure 2: A framework for classifying telco market forces: internal and external
  • Figure 3: Telefonica dividend yield vs Spanish 10-year bond yield
  • Figure 4: Customer attitudes to European telecoms brands – 2003 vs 2015
  • Figure 5: Summarising the key skills, partnerships, assets and culture needed to realise ambitions
  • Figure 6: SMS Price vs. penetration of Top OTT messaging apps in 2012
  • Figure 7: Summary of how internal and external forces could develop in the next 5 years

Key Questions for The Future of the Network, Part 2: Forthcoming Disruptions

We recently published a report, Key Questions for The Future of the Network, Part 1: The Business Case, exploring the drivers for network investment.  In this follow-up report, we expand the coverage into two separate areas through which we explore 5 key questions:

Disruptive network technologies

  1. Virtualisation & the software telco – how far, how fast?
  2. What is the path to 5G? And what will it be used for?
  3. What is the role of WiFi & other wireless technologies?

External changes

  1. What are the impacts of government & regulation on the network?
  2. How will the vendor landscape change & what are the implications of this?

In the extract below, we outline the context for the first area – disruptive network technologies – and explore the rationales and processes associated with virtualisation (Question 1).

Critical network-technology disruptions

This section covers three huge questions which should be at the top of any CSP CTO’s mind – and those of many other executives as well. These are strategically important technology shifts that have the potential to “change the game” in the longer term. While two of them are “wireless” in nature, they also impact fixed/fibre/cable domains, both through integration and potential substitution. They will also have financial knock-on effects – directly on capex/opex costs, and indirectly on the services enabled and the revenues they generate.

This is not intended as a round-up of every important trend across the technology spectrum. Clearly, there are many other evolutions occurring in device design, IoT, software engineering, optical networking and semiconductor development. These will all intersect in some ways with telcos, but they are so many “logical hops” away from the process of actually building and running networks that they don’t really fit into this document easily. (Although they do appear in contexts such as drivers of desirable 5G network capabilities.)

Instead, the focus once again is on unanswered questions that link innovation with “disruption” of how networks are conceived and deployed. As described below, network virtualisation has huge and diverse impacts across the CSP universe. 5G, too, will likely represent a large leap from today’s 4G architecture. This is very different to changes which are mostly incremental.

The mobile and software focus of this section is deliberate. Fixed-network technologies – fast-evolving though they are – generally do not today cause “disruption” in a technical sense. As the name suggests, the newest cable-industry standard, DOCSIS 3.1, is an evolution of 3.0, not a revolution. There is no 4.0 on the drawing-board yet. But the relative ease of upgrade to “gigabit cable” may unleash more market-related disruptions, as telcos feel the need to play catch-up with their rivals’ swiftly escalating headline speeds.

Fibre technologies also tend to be comparatively incremental, rather than driving (or enabling) massive organisational and competitive shifts. In fixed networks there are other important drivers – competition, network unbundling, 4K television, OTT-style video and so on – as well as important roles for virtualisation, which covers both mobile and fixed domains. For markets with high use of residential “OTT video” services such as Netflix – especially in 4K variants – the push to gigabit-range speeds may be faster than expected. This will also have knock-on impacts on the continued improvement of WiFi, as it defends against ever-faster cellular networks. Indeed, faster gigabit cable and FTTH networks will be necessary to provide backhaul for 4.5G and 5G cellular networks, both for normal cell-towers and the expected rapid growth of small-cells.

The questions covered in more depth here examine:

  • Virtualisation & the “software telco”: How fast will SDN and NFV appear in commercial networks, and how broad are their impacts in both medium and longer terms? 
  • What is the path from 4G to 5G? This is a less obvious question than it might appear, as we do not yet even have agreed definitions of what we want “5G” to do, let alone defined standards to do it.
  • What is the role of WiFi and other wireless technologies? 

All of these intersect, and have inter-dependencies. For instance, 5G networks are likely to embrace SDN/NFV as a core component, and also perhaps form an “umbrella” over other low-power wireless networks.

A fourth “critical” question would have been to consider security technology and processes. Clearly, the future network is going to face continued challenges from hackers and maybe even cyber-warfare, for which we will need to prepare. However, that is in many ways a broader set of questions that cuts across all the others – virtualisation will bring its own security dilemmas, as (no doubt) will 5G. WiFi already does. It is certainly a critical area that bears consideration at a strategic level within CSPs, although it is not addressed here as a specific “question”. It is also a huge and complex area that deserves separate study.

Non-disruptive network technologies

As well as being prepared to exploit truly disruptive innovations, the industry also needs to get better at spotting non-disruptive ones that are doomed to failure, and abandoning them before they incur too much cost or distraction. The telecoms sector has a long way to go before it embraces the start-up mentality of “failing fast” – there are too many hypothetical “standards” gathering dust on a metaphorical shelf, and never being deployed despite a huge amount of work. Sometimes they get shoe-horned into new architectures, as a way to breathe life into them – but that often just encumbers shiny new technologies with the failures of the past.

For example, over the past 10+ years, the telecom industry has been pitching IMS (IP Multimedia Subsystem) as the future platform for interoperating services. It is finally gaining some adoption, but essentially only as a way to implement VoIP versions of the phone system – and even then, with huge increases in complexity and often higher costs. It is not “disruptive” except insofar as it sucks huge amounts of resources and management attention away from other possible sources of genuine innovation. Few developers care about it, and the “technology politics” behind it have contributed to the industry’s problems, not the solutions. While there is growth in the deployment of IMS (e.g. as a basis for VoLTE – voice over LTE – or fixed-line VoIP), it is primarily an extra cost, rather than a source of new revenue or competitive advantage. It might help telcos reduce costs by retiring old equipment or reclaiming spectrum for re-use, but that seems to be the limit of its utility and opportunity.

Figure 1: IMS-based services (mostly VoIP) are evolutionary not disruptive

Source: Disruptive Analysis

A common theme in recent years has been for individual point solutions for technical standards to seem elegant “in isolation”, but to fail to take account of the wider market context. Real-world “offload” of mobile data traffic to WiFi and femtocells has been minimal, because of various practical and commercial constraints – many of which were predictable. Self-optimising networks (where radio components configure, provision and diagnose themselves automatically) suffered from apathy among vendors – as well as fears from operator staff that they might make themselves redundant. A whole slew of attempts at integrating WiFi with cellular have also had minimal impact, because they ignored the existence of private WiFi and user behaviour. Some of these are now making a return, engineered into more holistic solutions like HetNets and SDN. Telco execs need to ensure that their representatives on standards bodies, or industry fora, are able to make pragmatic decisions with multiple contributory inputs, rather than always pursuing “engineering purity”.

Virtualisation & the “software telco” – how far, how fast?

Spurred by rapid advances in standardised computing products and cloud platforms, the idea of virtualisation is now almost ubiquitous across the telecom sector. Yet the specialised nature of network equipment means that “switching to the cloud” is a lot more complicated than is the case for enterprise IT. But change is happening – the industry is now slowly moving from inflexible, non-scalable network elements or technology sub-systems, to ones which are programmable, run on commercial hardware, and can “spin up” or down in terms of capacity. We are still comparatively early in this new cycle, but the trend now appears to be inexorable. It is being driven both by what is becoming possible, and by the threats posed by other denizens of the “cloud universe” migrating towards the telecoms industry and threatening to replace aspects of it unilaterally.

Two acronyms cover the main developments:

  • Software-defined networks (SDN) change the basic network “plumbing” – rather than hugely complex switches and routers transmitting and processing data streams individually, SDN puts a central “controller” function in charge of simpler, more flexible boxes. These can be updated more easily, have new network-processing capabilities enabled, and allow (hopefully) for better reliability and lower costs (a conceptual sketch follows this list).
  • Network function virtualisation (NFV) is less about the “big iron” parts of the network, instead focusing on the myriad of other smaller units needed to do more specific tasks relating to control, security, optimisation and so forth. It allows these supporting functions to be re-cast in software, running as apps on standard servers, rather than needing a variety of separate custom-built boxes and chips.
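As a purely conceptual illustration of that control/data-plane split, the sketch below separates a central controller, which computes forwarding decisions, from simple switches that merely apply the flow rules pushed to them. The types and names are hypothetical and hugely simplified compared with real controllers and protocols such as OpenFlow.

```typescript
// Conceptual SDN sketch: a central controller computes forwarding decisions
// and pushes them as flow rules to simple switches, which only match and
// forward. Types and names are hypothetical; real SDN uses protocols such as
// OpenFlow and far richer match criteria than a single destination key.

interface FlowRule {
  dst: string;      // destination key, e.g. a subnet label like "site-B"
  outPort: number;  // switch port to forward matching traffic to
}

class SimpleSwitch {
  private table = new Map<string, number>();
  constructor(public readonly id: string) {}

  install(rule: FlowRule): void {            // rules come only from the controller
    this.table.set(rule.dst, rule.outPort);
  }

  forward(dst: string): number | undefined { // data-plane behaviour: match and forward
    return this.table.get(dst);
  }
}

class Controller {
  constructor(private switches: SimpleSwitch[]) {}

  // Centralised decision: compute a path and push per-switch rules for it.
  provisionPath(dst: string, portsBySwitch: Record<string, number>): void {
    for (const sw of this.switches) {
      const port = portsBySwitch[sw.id];
      if (port !== undefined) sw.install({ dst, outPort: port });
    }
  }
}

// Example: steer traffic for "site-B" through two switches.
const s1 = new SimpleSwitch("s1");
const s2 = new SimpleSwitch("s2");
new Controller([s1, s2]).provisionPath("site-B", { s1: 3, s2: 1 });
console.log(s1.forward("site-B"), s2.forward("site-B")); // -> 3 1
```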

Figure 2: ETSI’s vision for NFV

Source: ETSI & STL Partners

And while a lot of focus has been placed on operators’ own data-centres and “data-plane” boxes such as routers and assorted traffic-processing “middle-boxes”, that is not the whole story. Virtualisation also extends to the other elements of telco kit: “control-plane” elements used to oversee the network and internal signalling, billing and OSS systems, and even bits of the access and radio network. Tying them all together – and managing the new virtual components – brings new challenges in “orchestration”.
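To make the “spin up or down” and orchestration ideas above concrete, here is a minimal, hypothetical sketch of the kind of scaling decision an NFV orchestrator might make: watch the load on a pool of identical virtualised-function instances and add or remove instances to keep utilisation inside a target band. The names and thresholds are illustrative assumptions, not part of any real MANO product or ETSI interface.

```typescript
// Minimal sketch of an NFV-style scaling decision (illustrative only).
// "VnfPool" is a hypothetical pool of identical virtualised network function
// instances; real orchestrators expose far richer models and APIs.

interface VnfPool {
  name: string;
  instances: number;        // currently running instances
  minInstances: number;     // floor, e.g. for redundancy
  maxInstances: number;     // ceiling, e.g. licence or capacity limit
  loadPerInstance: number;  // measured utilisation, 0.0–1.0
}

// Decide how many instances the pool should have so that per-instance
// utilisation falls back inside the [low, high] target band.
function desiredInstances(pool: VnfPool, low = 0.4, high = 0.75): number {
  const totalLoad = pool.loadPerInstance * pool.instances;
  let target = pool.instances;
  if (pool.loadPerInstance > high) {
    target = Math.ceil(totalLoad / high);               // scale out
  } else if (pool.loadPerInstance < low) {
    target = Math.max(Math.ceil(totalLoad / low), 1);   // scale in
  }
  return Math.min(Math.max(target, pool.minInstances), pool.maxInstances);
}

// Example: a hypothetical virtual EPC gateway pool running hot at 90% per instance.
const vEpc: VnfPool = {
  name: "vEPC-gateway",
  instances: 4,
  minInstances: 2,
  maxInstances: 10,
  loadPerInstance: 0.9,
};

console.log(`${vEpc.name}: scale from ${vEpc.instances} to ${desiredInstances(vEpc)} instances`);
// -> vEPC-gateway: scale from 4 to 5 instances
```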

But this raises a number of critical subsidiary questions.

  • Executive Summary
  • Introduction
  • Does the network matter? And will it face “disruption”?
  • Raising questions
  • Overview: Which disruptions are next?
  • Critical network-technology disruptions
  • Non-disruptive network technologies
  • Virtualisation & the “software telco” – how far, how fast?
  • What is the path to 5G? And what will it be used for?
  • What is the role of WiFi & other wireless technologies?
  • What else needs to happen?
  • What are the impacts of government & regulation?
  • Will the vendor landscape shift?
  • Conclusions & Other Questions
  • STL Partners and Telco 2.0: Change the Game
  • Figure 1: New services are both network-integrated & independent
  • Figure 2: IMS-based services (mostly VoIP) are evolutionary not disruptive
  • Figure 3: ETSI’s vision for NFV
  • Figure 4: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 5: Virtualisation roadmap: Telefonica
  • Figure 6: 5G timeline & top-level uses
  • Figure 7: Suggested example 5G use-cases
  • Figure 8: 5G architecture will probably be virtualised from Day 1
  • Figure 9: Key 5G Research Initiatives
  • Figure 10: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 11: Proliferating wireless options for IoT
  • Figure 12: Forthcoming IoT-related wireless technologies
  • Figure 13: London bus with free WiFi sponsored by ice-cream company
  • Figure 14: Vendor landscape in turmoil as IT & network domains merge

 

Key Questions for NextGen Broadband Part 1: The Business Case

Introduction

It’s almost a cliché to talk about “the future of the network” in telecoms. We all know that broadband and network infrastructure is a never-ending continuum that evolves over time – its “future” is continually being invented and reinvented. We also all know that no two networks are identical, and that despite standardisation there are always specific differences, because countries, regulations, user-bases and legacies all vary widely.

But at the same time, the network clearly matters still – perhaps more than it has for the last two decades of rapid growth in telephony and SMS services, which are now dissipating rapidly in value. While there are certainly large swathes of the telecom sector benefiting from content provision, commerce and other “application-layer” activities, it is also true that the bulk of users’ perceived value is in connectivity to the Internet, IPTV and enterprise networks.

The big question is whether CSPs can continue to convert that perceived value from users into actual value for the bottom-line, given the costs and complexities involved in building and running networks. That is the paradox.

While the future will continue to feature a broader set of content/application revenue streams for telcos, it will also need not just to support more and faster data connections, but to cope with a set of new challenges and opportunities. Top of the list is support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. There is a significant chance that many of these will not involve connection via the “public Internet”, and therefore a possibility that new forms of connectivity proposition will evolve – faster or lower-powered networks, or perhaps even the semi-mythical “QoS” which, if not paid for directly, could be integrated into compelling packages and data-service bundles. There is also the potential for “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network, “orchestrated” appropriately by the operator. But does this add more value than investing in more web/OTT-style applications and services, de-coupled from the network?

Again, this raises questions about technology, business models – and the practicalities of making it happen.

This plays directly into the concept of the revenue “hunger gap” we have analysed for the past two years – without ever-better (but more efficient) networks, the telecom industry is going to get further squeezed. While service innovation is utterly essential, it also seems to be slow-moving and patchy. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Depending on the trajectory of regulatory change, we may also see more consolidation among parts of the service provider industry, more quad-play networks, more sharing and wholesale models.

We also see communications networks and applications permeating deeper into society and government. There is a sense among some policymakers that “telecoms is too important to leave up to the telcos”, with initiatives like Smart Cities and public-safety networks often becoming decoupled from the mainstream of service providers. There is an expectation that technology – and by extension, networks – will enable better economies, improved healthcare and education, safer and more efficient transport, mechanisms for combatting crime and climate change, and new industries and jobs, even as old ones become automated and robotised.

Figure 1 – New services are both network-integrated & independent

 

Source: STL Partners

And all of this generates yet more uncertainty, with yet more questions – some about the innovations needed to support these new visions, but also whether they can be brought to market profitably, given the starting-point we find ourselves at, with fragmented (yet growing) competition, regulatory uncertainty, political interference – and often, internal cultural barriers within the CSPs themselves. Can these be overcome?

A common theme from the section above is “Questions”. This document – and a forthcoming “sequel” – is intended to group, lay out and introduce the most important ones. Most observers tend to focus on just a few areas of uncertainty, but in setting up the next year or so of detailed research, Telco 2.0 wants to list and articulate all of the hottest issues in full. Only once they are collated can we start to work out the priorities – and inter-dependencies.

Our belief is that all of the detailed questions on “Future Networks” can, in fact, be tied back to one of two broader, over-arching themes:

  • What are the business cases and operational needs for future network investment?
  • Which disruptions (technological or other) are expected in the future?

The business case theme is covered in this document. It combines future costs (spectrum, 4G/5G/fibre deployments, network-sharing, virtualisation, BSS/OSS transformation etc.) and revenues (data connectivity, content, network-integrated service offerings, new Telco 2.0-style services and so on). It also encompasses what is essential to make the evolution achievable, in terms of organisational and cultural transformation within telcos.

A separate Telco 2.0 document, to be published in coming weeks, will cover the various forthcoming disruptions. These are expected to include new network technologies that will ultimately coalesce to form 5G mobile and new low-power wireless, as well as FTTx and DOCSIS cable evolution. In addition, virtualisation in both NFV and SDN guises will be hugely transformative.

There is also a growing link between mobile and fixed domains, reflected in quad-play propositions, industry consolidation, and the growth of small-cells and WiFi with fixed-line backhaul. In addition, to support future service innovation, there need to be adequate platforms for both internal and external developers, as well as a meaningful strategy for voice/video which fits with both network and end-user trends. Beyond the technical, additional disruption will be delivered by regulatory change (for example on spectrum and neutrality), and also a reshaped vendor landscape.

The remainder of this report lays out the first five of the Top 10 most important questions for the Future Network. We can’t give definitive analyses, explanations or “answers” in a report of this length – and indeed, many of them are moving targets anyway. But by taking a holistic approach to laying out each question properly – where it comes from, and what the “moving parts” are – we help to define the landscape. The objective is to help management teams apply those same filters to their own organisations, understand how costs can be controlled and revenues garnered, see where consolidation and regulatory change might help or hinder, and deal with users’ and governments’ increasing expectations.

The 10 Questions also lay the ground for our new Future Network research stream, forthcoming publications and comment/opinion.

Overview: what is the business case for Future Networks?

As later sections of both this document and the second in the series cover, there are various upcoming technical innovations in the networking pipeline. Numerous advanced radio technologies underpin 4.5G and 5G, there is ongoing work to improve fibre and DSL/cable broadband, virtualisation promises much greater flexibility in carrier infrastructure and service enablement, and so on. But all those advances are predicated on either (ideally) more revenues, or at least reduced costs to deploy and operate. All require economic justification for investment to occur.

This is at the core of the Future Networks dilemma for operators – what is the business case for ongoing investment? How can the executives, boards of directors and investors be assured of returns? We all know about the ongoing shift of business & society online, the moves towards smarter cities and national infrastructure, changes in entertainment and communication preferences and, of course, the Internet of Things – but how much benefit and value might accrue to CSPs? And is that value driven by network investments, or should telecom companies re-focus their investments and recruitment on software, content and the cloud?

This is not a straightforward question. There are many in the industry who assert that “the network is the key differentiator & source of value”, while others counter that it is a commodity and that “the real value is in the services”.

What is clear is that better/faster networks will be needed in any case, to achieve some of the lofty goals that are being suggested for the future. However, it is far from clear how much of the overall value-chain profit can be captured from just owning the basic machinery – recent years have shown a rapid de-coupling of network and service, apart from a few areas.

In the past, networks largely defined the services offered – most notably broadband access, phone calls and SMS, as well as cable TV and IPTV. But with the ubiquitous rise of Internet access and service platforms/gateways, an ever-increasing amount of service “logic” is located on the web, or in the cloud – not enshrined in the network itself. This is an important distinction – some services are abstracted and designed to be accessed from any network, while others are intimately linked to the infrastructure.

Over the last decade, the prevailing shift has been towards network-independent services. In many ways “the web has won”. This trend may potentially reverse in future, though, as servers and virtualised, distributed cloud capabilities get pushed down into localised network elements. That, however, brings its own new complexities, uncertainties and challenges – it is a brave (or foolhardy) telco CEO who would bet the company on new in-network service offers alone. We will also see API platforms expose network “capabilities” to the web/cloud – for example, W3C is working on standards to allow web developers to gain insights into network congestion, or users’ data-plans.
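One related effort is the draft Network Information API, which already lets a web page read a coarse description of its current connection. The snippet below is a minimal illustration, assuming a browser that exposes navigator.connection; the small interface declared here is a simplification of the draft specification, and support varies by browser.

```typescript
// Minimal sketch: reading coarse connection hints exposed by the (draft)
// W3C Network Information API. Support and exact fields vary by browser,
// so everything is treated as optional.

interface NetworkInformationLike {
  effectiveType?: string;   // e.g. "slow-2g", "2g", "3g", "4g"
  downlink?: number;        // estimated downlink bandwidth, Mbit/s
  rtt?: number;             // estimated round-trip time, ms
  saveData?: boolean;       // user has requested reduced data usage
  addEventListener?: (type: "change", listener: () => void) => void;
}

function describeConnection(): string {
  const conn = (navigator as any).connection as NetworkInformationLike | undefined;
  if (!conn) {
    return "Network Information API not available in this browser";
  }
  return `type=${conn.effectiveType ?? "unknown"}, ` +
         `downlink≈${conn.downlink ?? "?"} Mbit/s, ` +
         `rtt≈${conn.rtt ?? "?"} ms, saveData=${conn.saveData ?? false}`;
}

// A web app could use these hints to pick a video bitrate or defer
// large downloads when the user is on a constrained connection.
console.log(describeConnection());

const connection = (navigator as any).connection as NetworkInformationLike | undefined;
connection?.addEventListener?.("change", () => console.log(describeConnection()));
```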

But currently, the trend is for broadband access and (most) services to be de-coupled. Nonetheless, some operators seem to have made clever pricing, distribution and marketing decisions (supported by local market conditions and/or regulation) that make bundles desirable.

US operators, for example, have generally fared better than European CSPs, in what should have been comparably-mature markets. But was that due to a faster shift to 4G networks? Or other factors, such as European telecom fragmentation and sub-scale national markets, economic pressures, or perhaps a different legacy base? Did the broad European adoption of pre-paid (and often low-ARPU) mobile subscriptions make it harder to justify investments on the basis of future cashflows – or was it more about the early insistence that 2.6GHz was going to be the main “4G band”, with its limitations later coming back to bite people? It is hard to tease apart the technology issues from the commercial ones.

Similar differences apply in the fixed-broadband world. Why has adoption and typical speed varied so much? Why have some markets preferred cable to DSL? Why are fibre deployments patchy and very nation-specific? Is it about the technology involved – or the economy, topography, government policies, or the shape of the TV/broadcast sector?

Understanding these issues – and, once again, articulating the questions properly – is core to understanding the future for CSPs’ networks. We are in the middle of 4G rollout in most countries, with operators looking at the early requirements for 5G. SDN and NFV are looking important – but their exact purpose, value and timing still remain murky, despite the clear promises. Can fibre rollouts – FTTC or FTTH – still be justified in a world where TV/video spend is shifting away from linear programming and towards online services such as Netflix?

Given all these uncertainties, it may be that network investments slow down – or else that consolidation, government subsidy or other top-level initiatives are needed to stimulate them. On the other hand, it could be the case that reduced capex and opex – perhaps through outsourcing, sharing or software-based platforms, or even open-source technology – make the numbers work out well, even for raw connectivity. Certainly, the last few years have seen rising expenditure by end-users on mobile broadband, even if it has also contributed to the erosion of legacy services such as telephony and SMS, by enabling more modern/cheaper rivals. We have also seen a shift to lower-cost network equipment and software suppliers, and an emphasis on “off the shelf” components, or open interfaces, to reduce lock-in and encourage competition.

The following sub-sections each frame a top-level, critical question relating to the business case for Future Networks:

  • Will networks support genuinely new services & enablers/APIs, or just faster/more-granular Internet access?
  • Speed, coverage, performance/QoS… what actually generates network value? And does this derive from customer satisfaction, new use-cases, or other sources?
  • Does quad-play and fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation and culture: what needs to change to support future network investments?

 

  • Executive Summary
  • Introduction
  • Overview: what is the business case for Future Networks?
  • Supporting new services or just faster Internet?
  • Speed, coverage, quality…what is most valuable?
  • Does quad-play & fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation & culture: what changes?
  • Conclusions

 

  • Figure 1 – New services are both network-integrated & independent
  • Figure 2 – Mobile data device & business model evolution
  • Figure 3 – Some new services are directly enabled by network capabilities
  • Figure 4 – Network investments ultimately need to map onto customers’ goals
  • Figure 5 – Customers put a priority on improving indoor/fixed connectivity
  • Figure 6 – Notional “coverage” does not mean enough capacity for all apps
  • Figure 7 – Different operator teams have differing visions of the future
  • Figure 8 – “Software telcos” may emulate IT’s “DevOps” organisational dynamic

 

BT/EE: Huge Regulatory Headache and Trigger for European Transformation

UK Cellular: The Context

The UK is a high-penetration market (134%), and has for the most part been considered a high-competition one, with 5 MNOs and numerous resellers/MVNOs. However, since the Free.fr and T-Mobile USA price disruptions, the UK has ceased to be one of the cheaper markets among rich countries and now seems a little expensive by French standards, while the EE joint venture effectively means a move down from 5 operators to 4. There has been considerable concern that a price disruption was in the offing since BT acquired 2.6GHz spectrum, perhaps via a “Free-style” BT deployment, or alternatively via BT leasing the spectrum to a third party, possibly Virgin Media or TalkTalk. However, it is not as obvious that there is a big target for price disruption as it was in France pre-Free or the US pre-T-Mobile, as Figure 1 shows. The UK operators are only slightly dearer than the French average, with one exception, and the market is more competitive.

Figure 1: The UK is a slightly dearer cellular market than France

Source: STL Partners, themobileworld.com

The following chart summarises the current status of the operators.

Figure 2: UK mobile market overview, 2012-2014

Source: Company Accounts, STL Partners analysis

One reason to pick EE over O2 is immediately clear – EE has substantially better ARPU, is increasing it, and is at least holding onto customers. A deeper look into the company shows that the 4G network is just recruiting customers fast enough to compensate for churn away from the two legacy networks. Overall, the market is just growing.

Figure 3: UK cellular subscriber growth, 2012-2014

Source: Company Accounts, STL Partners analysis

O2 is the cheapest of the four 4G operators and is discounting hard to win share. Meanwhile, Vodafone UK starts to look like a squeezed third operator, losing customers and ARPU at the same time, and fourth operator 3UK looks remarkably strong. In terms of profitability, Figure 4 shows that Vodafone is just managing to hold its margins, while O2 is growing at constant margins, EE is improving its margins, and 3UK is powering ahead, improving its margins, ARPU, and subscriber base at the same time.

Figure 4: 3UK is a remarkably strong fourth operator

Source: Company Accounts, STL Partners analysis

 

  • UK Cellular: The Context
  • Meanwhile, in the Retail ISP Market
  • The Business Case for BT+EE
  • An affordable deal?
  • Valuation and leverage
  • Synergy: operational cost savings
  • Synergy: marketing, customer data and cross-sales
  • Synergy: quad-play revenue
  • Can a BT-EE merger be acceptable to the Regulator?
  • The Spectrum Position
  • The Vertical Integration Problem
  • The Move towards Convergence and the Fixed Squeeze
  • Potential Scenarios
  • Conclusion: big bets, tests, and signals
  • BT: betting big
  • The market: three big decisions
  • The regulator and the regulatory environment: a big test
  • Sending important signals

 

  • Figure 1: The UK is a slightly dearer cellular market than France
  • Figure 2: UK mobile market overview, 2012-2014
  • Figure 3: UK cellular subscriber growth, 2012-2014
  • Figure 4: 3UK is a remarkably strong fourth operator
  • Figure 5: UK consumer wireline overview
  • Figure 6: FTTC is mostly benefiting the “major independent” ISPs
  • Figure 7: BT Sport has peaked as a driver of broadband net-adds, but the football rights bills keep coming
  • Figure 8: Content costs are eating around 70% of wholesale fibre revenue at BT
  • Figure 9: BT Sport’s impact on its market valuation
  • Figure 10: BT-EE would blow through the 2013 regulatory cap on spectrum allocations, but not the proposed cap post-2.3/3.4GHz auctions
  • Figure 11: Although BT-EE is just compliant with the 2.3/3.4GHz cap, it looks suspiciously dominant
  • Figure 12: Fibre-rich MNOs break away from the herd of mediocrity in Europe
  • Figure 13: Vodafone – light on fibre across the EU

Will AT&T shed copper, fibre-up, or buy more content – and what are the lessons?

Looking Back to 2012

In version 1.0 of the Telco 2.0 Transformation Index, we identified a number of key strategic issues at AT&T that would shape it in the years to come. Specifically, we noted that the US wireless segment, AT&T Mobility, had been very strong, powered by iPhone data plans; that, by contrast, the consumer wireline segment, Home Solutions, had been rather weak; and that the enterprise segment, Business Solutions, faced a massive “crossing the chasm” challenge as its highly valuable customers began a technology transition that exposed them to new competitors, such as cloud computing providers, cable operators, and dark-fibre owners.

Figure 1: AT&T revenues by reporting segment, 2012 and 2014


Source: Telco 2.0 Transformation Index

We noted that the wireless segment, though strong, was behind its great rival Verizon Wireless for 4G coverage and capacity, and that the future of the consumer wireline segment was dependent on a big strategic bet on IPTV content, delivered over VDSL (aka “fibre to the cabinet”).

In Business Solutions, newer products like cloud, M2M services, Voice 2.0, and various value-added networking services, grouped in “Strategic Business Services”, had to scale up and take over from traditional ones like wholesale circuit voice and Centrex, IP transit, classic managed hosting, and T-carriers, before too many customers went missing. The following chart shows the growth rates in each of the reporting segments over the last two years.

Figure 2: Revenue growth by reporting segment, 2-year CAGR


Source: Telco 2.0 Transformation Index

Out of the three major segments – wireless, consumer wireline, and business solutions – we can see that wireless is performing acceptably (although growth has slowed down), business solutions is in the grip of its transition, and wireline is just about growing. Because wireless is such a big segment (see Figure 1), it contributes a disproportionate amount to the company’s top-line growth. Figure 3 shows revenue in the wireline segment as an index with Q2 2011 set to 100.

Figure 3: Wireline overall is barely growing…


 Source: Telco 2.0 Transformation Index

Back in 2012, we summed up the consumer wireline strategy as being all about VDSL and TV. The combination, plus voice, makes up the product line known as U-Verse, which we covered in the Telco 2.0 Transformation Index. We were distinctly sceptical, essentially because we believe that broadband is now the key product in the triple-play and the one that sells the other elements. With cable operators routinely offering 100Mbps, and upgrades all the way to gigabit speeds in the pipeline, we found it hard to believe that a DSL network with “up to” 45Mbps maximum would keep up.

 

  • Executive Summary
  • Contents
  • Looking Back to 2012
  • The View in 2014
  • The DirecTV Filing
  • Getting out of consumer wireline
  • The business customers: jewel in the crown of wireline
  • Conclusion

 

  • Figure 1: AT&T revenues by reporting segment, 2012 and 2014
  • Figure 2: Revenue growth by reporting segment, 2-year CAGR
  • Figure 3: Wireline overall is barely growing…
  • Figure 4: It’s been a struggle for all fixed operators to retain customers – except high-speed cablecos Comcast and Charter
  • Figure 5: AT&T is 5th for ARPU, by a distance
  • Figure 6: AT&T’s consumer wireline ARPU is growing, but it is only just enough to avoid falling further behind
  • Figure 7: U-Verse content sales may have peaked
  • Figure 8: For the most important speed band, the cable option is a better deal
  • Figure 9: Revenue – only cablecos left alive…
  • Figure 10: Broadband “drives” bundles…
  • Figure 11: …or do bundles drive broadband?

Triple-Play in the USA: Infrastructure Pays Off

Introduction

In this note, we compare the recent performance of three US fixed operators who have adopted contrasting strategies and technology choices, AT&T, Verizon, and Comcast. We specifically focus on their NGA (Next-Generation Access) triple-play products, for the excellent reason that they themselves focus on these to the extent of increasingly abandoning the subscriber base outside their footprints. We characterise these strategies, attempt to estimate typical subscriber bundles, discuss their future options, and review the situation in the light of a “Deep Value” framework.

A Case Study in Deep Value: The Lessons from Apple and Samsung

Deep value strategies concentrate on developing assets that will be difficult for any plausible competitor to replicate, in as many layers of the value chain as possible. A current example is the way Apple and Samsung – rather than Nokia, HTC, or even Google – came to dominate the smartphone market.

It is now well known that Apple, despite its image as a design-focused company whose products are put together by outsourcers, has invested heavily in manufacturing throughout the iOS era. Although the first-generation iPhone was largely assembled from third-party components, in many ways it should be considered a large-scale pilot project. Starting with the iPhone 3GS, the proportion of Apple’s own content in the devices rose sharply, thanks to the acquisition of PA Semi, but also to heavy investment in the supply chain.

Not only did Apple design and pilot-produce many of the components it wanted, it bought them from suppliers in advance to lock up the supply. It also bought the machine tools the suppliers would need, often long in advance, for the same reason. But this wasn’t just a tactical effort to deny componentry to its competitors. It was also a strategic effort to create manufacturing capacity.

In pre-paying for large quantities of components, Apple provides its suppliers with the capital they need to build new facilities. In pre-paying for the machine tools that will go in them, they finance the machine tool manufacturers and enjoy a say in their development plans, thus ensuring the availability of the right machinery. They even invent tools themselves and then get them manufactured for the future use of their suppliers.

Samsung is of course both Apple’s biggest competitor and its biggest supplier. It combines these roles precisely because it is a huge manufacturer of electronic components. Concentrating on its manufacturing supply chain enables it both to produce excellent hardware and to hedge the success or failure of its own devices by selling componentry to the competition. As with Apple, doing this is very expensive and demands skills that are both in short supply and sometimes hard to define. Much of the deep value embedded in Apple’s and Samsung’s supply chains is the tacit knowledge, gained from learning by doing, that is now concentrated in their people.

The key insight for both companies is that industrial and user-experience design is highly replicable, and patent protection is relatively weak. The same is true of software. Apple had a deeply traumatic experience with the famous Look and Feel lawsuit against Microsoft, and some people have suggested that the supply-chain strategy was deliberately intended to prevent something similar happening again.

Certainly, the shift to this strategy coincides with the launch of Android, which Steve Jobs at least perceived as a “stolen product”. Arguably, Jobs repeated Apple’s response to Microsoft Windows – suing everyone in sight, with about as much success – whereas Tim Cook, in his role as the hardware engineering and then supply-chain chief, adopted a new strategy: developing an industrial capability that would, by design, be very hard to replicate.

Three Operators, Three Strategies

AT&T

The biggest issue any fixed operator has faced since the great challenges of privatisation, divestment, and deregulation in the 1980s is that of managing the transition from a business that basically provides voice on a copper access network to one that basically provides Internet service on a co-ax, fibre, or possibly wireless access network. This, at least, has been clear for many years.

AT&T is the original telco – at least, AT&T likes to be seen that way, as shown by their decision to reclaim the iconic NYSE ticker symbol “T”. That obscures, however, how much has changed since the divestment and the extremely expensive process of mergers and acquisitions that patched the current version of the company together. The bit examined here is the AT&T Home Solutions division, which owns the fixed-line ex-incumbent business, also known as the merged BellSouth and SBC businesses.

AT&T, like all the world’s incumbents, deployed ADSL at the turn of the 2000s, thus getting into the ISP business. Unlike most world incumbents, in 2005 it got a huge regulatory boost in the form of the Martin FCC’s Comcast decision, which declared that broadband Internet service was not a telecommunications service for regulatory purposes. This permitted US fixed operators to take back the Internet business they had been losing to independent ISPs. As such, they were able to cope with the transition while concentrating on the big-glamour areas of M&A and wireless.

As the 2000s advanced, it became obvious that AT&T needed to look at the next move beyond DSL service. The option taken was what became U-Verse, a triple-play product which consists of:

  • Either ADSL, ADSL2+, or VDSL, depending on copper run length and line quality
  • Plus IPTV
  • And traditional telephony carried over IP.

This represents a minimal approach to the transition – the network upgrade requires new equipment in the local exchanges, or Central Offices in US terms, and in street cabinets, but it does not require the replacement of the access link, nor any trenching.

This minimisation of capital investment is especially important, as it was also decided that U-Verse would not deploy into areas where the copper might need investment to carry it. These networks would eventually, it was hoped, be either sold or closed and replaced by wireless service. U-Verse was therefore, for AT&T, in part a means of disposing of regulatory requirements.

It was also important that the system closely coupled the regulated domain of voice with the unregulated, or at least only potentially regulated, domain of Internet service and the either unregulated or differently regulated domain of content. In many ways, U-Verse can be seen as a content first strategy. It’s TV that is expected to be the primary replacement for the dwindling fixed voice revenues. Figure 1 shows the importance of content to AT&T vividly.

Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast


Source: Telco 2.0 Transformation Index

This sounds like one of the telecoms-as-media strategies of the late 1990s. However, it should be clearly distinguished from, say, BT’s drive to acquire exclusive sports content and to build up a brand identity as a “channel”. U-Verse does not market itself as a “TV channel” and does not buy exclusive content – rather, it is a channel in the literal sense, a distributor through which TV is sold. We will see why in the next section.

The US TV Market

It is well worth remembering that TV is a deeply national industry. Steve Jobs famously described it as “balkanised” and as a result didn’t want to take part. Most metrics vary dramatically across national borders, as do qualitative observations of structure. (Some countries have a big public-sector broadcaster, like the BBC or indeed Al-Jazeera, to give a basic example.) Countries with low pay-TV penetration can be seen as ones that offer greater opportunities, since it is usually easier to expand the customer base than to win share from the competition (a “blue ocean” versus a “red ocean” strategy).

However, it is also true that pay-TV in general is an easier sell in a market where most TV viewers already pay for TV. It is very hard to convince people to pay for a product they can obtain free.

In the US, there is a long-standing culture of pay-TV, originally with cable operators and more recently with satellite (DISH and DirecTV), IPTV or telco-delivered TV (AT&T U-Verse and Verizon FiOS), and subscription OTT (Netflix and Hulu). It is also a market characterised by heavy TV usage (an average household has 2.8 TVs). Out of the 114.2 million homes (96.7% of all homes) receiving TV, according to Nielsen, there are some 97 million receiving pay-TV via cable, satellite, or IPTV, a penetration rate of 85%. This is the largest and richest pay-TV market in the world.
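As a quick sanity check, the figures quoted above hang together arithmetically; the short calculation below simply re-derives the totals from the numbers in the paragraph (no additional data is assumed).

```typescript
// Sanity check of the US pay-TV figures quoted above (all in millions of homes).
const tvHomes = 114.2;          // homes receiving TV, per Nielsen
const tvHomeShare = 0.967;      // share of all homes that receive TV
const payTvHomes = 97;          // homes with cable, satellite or IPTV pay-TV

const totalHomes = tvHomes / tvHomeShare;        // ≈ 118.1m US homes in total
const payTvPenetration = payTvHomes / tvHomes;   // ≈ 0.849, i.e. the ~85% cited

console.log(`Total US homes: ~${totalHomes.toFixed(1)}m`);
console.log(`Pay-TV penetration of TV homes: ${(payTvPenetration * 100).toFixed(1)}%`);
```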

In this sense, it ought to be a good prospect for TV in general, with the caveat that a “Sky Sports” or “BT Sport” strategy based on content exclusive to a distributor is unlikely to work. This is because US TV content is typically sold relatively openly in the wholesale market and, in many cases, there are regulatory requirements that it must be provided to any distributor (TV affiliate, cable operator, or telco) that asks for it – and even that distributors must carry certain channels.

Rightsholders have backed a strategy based on distribution over one based on exclusivity, on the principle that the customer should be given as many opportunities as possible to buy the content. This also serves the interests of advertisers, who by definition want access to as many consumers as possible. Hollywood has always aimed to open new releases on as many cinema screens as possible, and it is the movie industry’s skills, traditions, and prejudices that shaped this market.

As a result, it is relatively easy for distributors to acquire content, but difficult for them to generate differentiation by monopolising exclusive content. In this model, differentiation tends to accrue to rightsholders, not distributors. For example, although HBO maintains the status of being a premium provider of content, consumers can buy it from any of AT&T, Verizon, Comcast, any other cable operator, satellite, or direct from HBO via an OTT option.

However, pay-TV penetration is high enough that any new entrant (such as the two telcos) is committed to winning share from other providers, the hard way. It is worth pointing out that the US satellite operators DISH and DirecTV concentrated on rural customers who aren’t served by the cable MSOs. At the time, their TV needs weren’t served by the telcos either. As such, they were essentially greenfield deployments, the first pay-TV propositions in their markets.

The biggest change in US TV in recent times has been the emergence of major new distributors, the two RBOCs and a range of Web-based over-the-top independents. Figure 2 summarises the situation going into 2013.

Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale


Source: Telco 2.0 Transformation Index

The two biggest classes of distributors saw either a marginal loss of subscribers (the cablecos) or a marginal gain (satellite). The two groups of (relatively) new entrants, as you’d expect, saw much more growth. However, the OTT players are both bigger and much faster-growing than the two telco players. It is worth pointing out that this mostly represents additional TV consumption – typically, people who already buy pay-TV adding a Netflix subscription. “Cord cutting” – replacing a primary TV subscription entirely – remains rare. In some ways, U-Verse can be seen as an effort to do something similar, upselling content to existing subscribers.

Competing for the Whole Bundle – Comcast and the Cable Industry

So how is this option doing? The following chart, Figure 3, shows that in terms of overall service ARPU, AT&T’s fixed strategy is delivering inferior results to those of its main competitors.

Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up


Source: Telco 2.0 Transformation Index

The interesting point here is that Time Warner Cable is doing less well than some of its cable industry peers. Comcast, the biggest, claims a $159 monthly ARPU for triple-play customers, and it probably has a higher density of triple-players than the telcos. More representatively, they also quote a figure of $134 monthly average revenue per customer relationship, including single- and double-play customers. We have used this figure throughout this note. TWC, in general, is more content-focused and less broadband-focused than Comcast, having taken much longer to roll out DOCSIS 3.0. But is that important? After all, aren’t cable operators all about TV? Figure 4 shows clearly that broadband and voice are now just as important to cable operators as they are to telcos. The distinction is increasingly just a historical quirk.

Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators

Source: NCTA data, STL Partners

As we have seen, TV in the USA is not a differentiator because everyone’s got it. Further, it’s a product that doesn’t bring differentiation but does bring costs, as the rightsholders exact their share of the selling price. Broadband and voice are different – they are, in a sense, products the operator makes in-house. Most have to buy the tools (except Free.fr which has developed its own), but in any case the operator has to do that to carry the TV.

The differential growth rates in Figure 4 represent a substantial change in the ISP industry. Traditionally, the Internet engineering community tended to look down on cable operators as glorified TV distribution systems. This is no longer the case.

In the late 2000s, cable operators concentrated on improving their speeds and increasing their capacity. They also pressed their vendors and standardisation forums to practise continuous improvement, creating a regular upgrade cycle for DOCSIS firmware and silicon that lets them stay one (or more) jumps ahead of the DSL industry. Some of them also invested in their core IP networking and in providing a deeper and richer variety of connectivity products for SMB, enterprise, and wholesale customers.

Comcast is the classic example of this. It is a major supplier of mobile backhaul, high-speed Internet service (and also VoIP) for small businesses, and a major actor in the Internet peering ecosystem. An important metric of this change is that since 2009, it has transitioned from being a downlink-heavy eyeball network to being a balanced peer that serves about as much traffic outbound as it receives inbound.

The key insight here is that, especially in an environment like the US where xDSL unbundling isn’t available, if you win a customer for broadband, you generally also get the whole bundle. TV is a valuable bonus, but it’s not differentiating enough to win the whole of the subscriber’s fixed telecoms spend – or to retain it, in the presence of competitors with their own infrastructure. It’s also of relatively little interest to business customers, who tend to be high-value customers.

 

  • Executive Summary
  • Introduction
  • A Case Study in Deep Value: The Lessons from Apple and Samsung
  • Three Operators, Three Strategies
  • AT&T
  • The US TV Market
  • Competing for the Whole Bundle – Comcast and the Cable Industry
  • Competing for the Whole Bundle II: Verizon
  • Scoring the three strategies – who’s winning the whole bundles?
  • SMBs and the role of voice
  • Looking ahead
  • Planning for a Future: What’s Up Cable’s Sleeve?
  • Conclusions

 

  • Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast
  • Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale
  • Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up
  • Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators
  • Figure 5: Comcast has the best pricing per megabit at typical service levels
  • Figure 6: Verizon is ahead, but only marginally, on uplink pricing per megabit
  • Figure 7: FCC data shows that it’s the cablecos, and FiOS, who under-promise and over-deliver when it comes to broadband
  • Figure 7: Speed sells at Verizon
  • Figure 8: Comcast and Verizon at parity on price per megabit
  • Figure 9: Typical bundles for three operators. Verizon FiOS leads the way
  • Figure 12: The impact of learning by doing on FTTH deployment costs during the peak roll-out phase