Gigabit Cable Attacks This Year

Introduction

Since at least May 2014 and the Triple Play in the USA Executive Briefing, we have been warning that the cable industry’s continuous improvement of its DOCSIS 3 technology threatens fixed operators with a succession of relatively cheap (in CAPEX terms) but dramatic speed jumps. Gigabit chipsets have been available for some time, so the actual timing of the roll-out is set by cable operators’ commercial choices.

With the arrival of DOCSIS 3.1, multi-gigabit cable has also become available. As a result, cable operators have become the best value providers in the broadband mass markets: typically, we found in the Triple Play briefing, they were the cheapest in terms of price/megabit in the most common speed tiers, at the time between 50 and 100Mbps. They were sometimes also the leaders for outright speed, and this has had an effect. In Q3 2014, for the first time, Comcast had more high-speed Internet subscribers than it had TV subscribers, on a comparable basis. Furthermore, in Europe, cable industry revenues grew 4.6% in 2014 while the TV component grew 1.8%. In other words, cable operators are now broadband operators above all.

Figure 1: Comcast now has more broadband than TV customers

Source: STL Partners, Comcast Q1 2015 trending schedule 

In the December 2014 Will AT&T shed copper, fibre-up, or buy more content – and what are the lessons? Executive Briefing, we covered the impact on AT&T’s consumer wireline business, and pointed out that its strategy of concentrating on content rather than broadband has not really delivered. In the context of ever more competition from streaming video, it was necessary to have an outstanding broadband product before trying to add content revenues. This was something AT&T’s DSL infrastructure could not deliver against cable or fibre competitors. The cable competition concentrated on winning whole households’ spending with broadband, with content as an upsell, and has undermined the wireline base to the point where AT&T might well exit a large proportion of it, or perhaps sell off the division altogether, refocusing on wireless, DirecTV satellite TV, and enterprise. At the moment, Comcast sees about 2 broadband net-adds for each triple-play net-add, although the increasing numbers of business ISP customers complicate the picture.

Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups

Source: STL, Comcast Q1 trending schedule

Since Christmas, the trend has picked up speed. Comcast announced a 2Gbps deployment to 1.5 million homes in the Atlanta metropolitan area, with a national deployment to follow. Time Warner Cable has announced a wave of upgrades in Charlotte, North Carolina that ups their current 30Mbps tier to 200Mbps and their 50Mbps tier to 300Mbps, after Google Fiber announced plans to deploy in the area. In the UK, Virgin Media users have been reporting unusually high speeds, apparently because the operator is trialling a 300Mbps speed tier, not long after it upgraded 50Mbps users to 152Mbps.

It is very much worth noting that these deployments are at scale. The Comcast and TWC rollouts are in the millions of premises. When the Virgin Media one reaches production status, it will be multi-million too. Vodafone-owned KDG in Germany is currently deploying 200Mbps, and it will likely go further as soon as it feels the need from a tactical point of view. This is the advantage of an upgrade path that doesn’t require much trenching. Not only can the upgrades be incremental and continuous, they can also be deployed at scale without enormous disruption.

Technology is driving the cable surge

This year’s CES saw the announcement, by Broadcom, of a new system-on-a-chip (SoC) for cable modems/STBs that integrates the new DOCSIS 3.1 cable standard. This provides even higher speeds, theoretically up to 7Gbps downstream, while still providing a broadcast path for pure TV. The SoC also, however, includes a WLAN radio with the newest 802.11ac technology, including beamforming and 4×4 multiple-input and multiple-output (MIMO), which is rated for gigabit speeds in the local network.
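That 7Gbps headline can be sanity-checked from DOCSIS 3.1’s OFDM parameters. The sketch below uses illustrative figures (channel width, 4096-QAM, an approximate LDPC code rate and a lumped overhead allowance), not exact spec values:

```python
# Back-of-the-envelope DOCSIS 3.1 downstream throughput estimate.
# All parameters are illustrative assumptions, not exact spec values.

def docsis31_channel_gbps(channel_mhz=192,      # one OFDM channel
                          bits_per_symbol=12,   # 4096-QAM
                          code_rate=0.89,       # approximate LDPC code rate
                          overhead=0.15):       # pilots, guard intervals, framing
    """Approximate usable throughput of one DOCSIS 3.1 OFDM channel, in Gbps."""
    raw = channel_mhz * 1e6 * bits_per_symbol * code_rate
    return raw * (1 - overhead) / 1e9

per_channel = docsis31_channel_gbps()   # roughly 1.7 Gbps per 192MHz channel
total = 4 * per_channel                 # bonding four such channels
print(f"{per_channel:.2f} Gbps per channel, {total:.1f} Gbps over four channels")
```

On these assumptions, bonding four wide OFDM channels lands in the region of the multi-gigabit headline figures, which is why such tiers are plausible without new outside plant.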

Even taking into account the usual level of exaggeration, this is an impressive package, offering telco-hammering broadband speeds, support for broadcast TV, and in-home distribution at speeds that can keep up with 4K streaming video. These are the SoCs that Comcast will be using for its gigabit cable rollouts. STMicroelectronics demonstrated its own multigigabit solution at CES, and although Intel has yet to show a DOCSIS 3.1 SoC, the most recent version of its Puma platform offers up to 1.6Gbps in a DOCSIS 3 network. DOCSIS 3 and 3.1 are designed to be interoperable, so this product has a future even after the head-ends are upgraded.

Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset

Source: RCRWireless 

With multiple chipset vendors shipping products, CableLabs running regular interoperability tests, and large regional deployments beginning, we conclude that the big cable upgrade is now here. Even if cable operators succeed in virtualising their set-top box software, neither the customer-end modem nor the WiFi router can be provided from the cloud. It’s important to realise that FTTH operators can upgrade in a similarly painless way by replacing their optical network terminals (ONTs), but DSL operators need to replace infrastructure. Also, ONTs are often independent of the WLAN router or other customer equipment, so the upgrade won’t necessarily improve the WiFi.

WiFi is also getting a major upgrade

The Broadcom device is so significant, though, because of the very strong WiFi support built in alongside the cable modem. Like the cable industry, the WiFi ecosystem has succeeded in keeping up a steady cycle of continuous, usually backwards-compatible improvements, from 802.11b through to 802.11ac, thanks to a major standards effort, the scale that Intel’s and Apple’s support brings, and its relatively light intellectual property encumbrance.

802.11ac adds a number of advanced radio features, notably multiple-user MIMO, beamforming, and higher-density modulation, that are only expected to arrive in the cellular network as part of 5G some time after 2020, as well as some incremental improvements over 802.11n, like additional MIMO streams, wider channels, and 5GHz spectrum by default. As a result, the industry refers to it as “gigabit WiFi”, although the gigabit is a per-station rather than per-user throughput.
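The “gigabit” figure follows directly from the 802.11ac PHY arithmetic. A minimal sketch of the standard rate calculation, using the top 80MHz configuration (256-QAM, rate-5/6 coding, short guard interval):

```python
# 802.11ac PHY rate: data subcarriers x bits/symbol x code rate
# x spatial streams / OFDM symbol duration.

def vht_rate_mbps(streams,
                  data_subcarriers=234,   # 80 MHz channel
                  bits_per_symbol=8,      # 256-QAM (MCS 9)
                  code_rate=5/6,
                  symbol_us=3.6):         # short guard interval
    """Peak 802.11ac PHY rate in Mbps for a given number of spatial streams."""
    bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * code_rate * streams
    return bits_per_ofdm_symbol / symbol_us  # bits per microsecond == Mbps

print(vht_rate_mbps(1))  # ~433 Mbps per stream
print(vht_rate_mbps(4))  # ~1733 Mbps with 4x4 MIMO: the "gigabit WiFi" figure
```

Note that these are peak PHY rates per station; real-world application throughput is substantially lower, and the gigabit figure assumes the full four spatial streams that devices like the Broadcom SoC support.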

The standard has been settled since January 2014, and support has been available in most flagship-class devices and laptop chipsets since then, so this is now a reality. The upgrade of the cable networks to 802.11ac WiFi backed with DOCSIS 3.1 will have major strategic consequences for telcos, as it enables the cable operators and any strategic partners of theirs to go in even harder on the fixed broadband business and to launch a WiFi-plus-MVNO mobile service at the same time. The beamforming element of 802.11ac should help them support higher user densities, as it makes use of the spatial diversity among different stations to reduce interference. Cablevision already launched a mobile service just before Christmas. We know Comcast is planning to launch one sometime this year, as it has been hiring a variety of mobile professionals quite aggressively. And, of course, the CableWiFi roaming alliance greatly facilitates scaling up such a service. The economics of a mini-carrier, as we pointed out in the Google MVNO: What’s Behind It and What Are the Implications? Executive Briefing, hinge on how much traffic can be offloaded to WiFi or small cells.

Figure 4: Modelling a mini-carrier shows that the WiFi is critical

Source: STL Partners

Traffic carried on WiFi costs nothing in terms of spectrum and much less in terms of CAPEX (due to the lower intellectual property tax and the very high production runs of WiFi equipment). In a cable context, it will often be backhauled in the spare capacity of the fixed access network, and therefore will account for very little additional cost on this score. As a result, the percentage of data traffic transferred to WiFi, or absorbed by it, is a crucial variable. KDDI, for example, carries 57% of its mobile data traffic on WiFi and hopes to reach 65% by the end of this year. Increasing the fraction from 30% to 57% roughly halved their CAPEX on LTE.
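The sensitivity to the offload fraction can be illustrated with a crude model: if cellular capacity requirements (and hence radio CAPEX) scale roughly with the traffic the cellular network actually carries, the effect of moving from 30% to 57% offload is easy to compute. This is a deliberately simplified sketch, not KDDI’s actual cost model:

```python
# Crude sensitivity of cellular capacity needs to the WiFi offload fraction.
# Assumes radio CAPEX scales linearly with cellular-carried traffic, which is a
# simplification: real networks have fixed costs and lumpy capacity builds.

def cellular_share(offload_fraction):
    """Fraction of total mobile data traffic the cellular network must carry."""
    return 1.0 - offload_fraction

before = cellular_share(0.30)   # 70% of traffic on cellular
after = cellular_share(0.57)    # 43% of traffic on cellular
print(f"Cellular load ratio after the shift: {after / before:.2f}")
```

On this linear view, lifting offload from 30% to 57% cuts cellular-carried traffic by nearly 40%. That the reported CAPEX saving was closer to a half suggests the real relationship is steeper than linear at the margin, for instance because offloaded traffic is concentrated at the most congested sites.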

A major regulatory issue at the moment is the deployment of LTE-LAA (Licensed-Assisted Access), which aggregates unlicensed radio spectrum with a channel from licensed spectrum in order to increase the available bandwidth. The 5GHz WiFi band is the most likely candidate for this, as it is widely available, contains a lot of capacity, and is well-supported in hardware.

We should expect the cable industry to push back very hard against efforts to rush deployment of LTE-LAA cellular networks through the regulatory process, as they have a great deal to lose if the cellular networks start to take up a large proportion of the 5GHz band. From their point of view, a major purpose of LTE-LAA might be to occupy the 5GHz and deny it to their WiFi operations.

  • Executive Summary
  • Introduction
  • Technology is driving the cable surge
  • WiFi is also getting a major upgrade
  • Wholesale and enterprise markets are threatened as well
  • The Cable Surge Is Disrupting Wireline
  • Conclusions
  • STL Partners and Telco 2.0: Change the Game 
  • Figure 1: Comcast now has more broadband than TV customers
  • Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups
  • Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset
  • Figure 4: Modelling a mini-carrier shows that the WiFi is critical
  • Figure 5: Comcast’s growth is mostly driven by business services and broadband
  • Figure 6: Comcast Business is its growth star with a 27% CAGR
  • Figure 7: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 8: 3 major cable operators’ business services are now close to AT&T or Verizon’s scale
  • Figure 9: Summary of gigabit deployments
  • Figure 10: CAPEX as a % of revenue has been falling for some time…


Key Questions for The Future of the Network, Part 2: Forthcoming Disruptions

We recently published a report, Key Questions for The Future of the Network, Part 1: The Business Case, exploring the drivers for network investment. In this follow-up report, we expand the coverage into two separate areas through which we explore five key questions:

Disruptive network technologies

  1. Virtualisation & the software telco – how far, how fast?
  2. What is the path to 5G? And what will it be used for?
  3. What is the role of WiFi & other wireless technologies?

External changes

  1. What are the impacts of government & regulation on the network?
  2. How will the vendor landscape change & what are the implications of this?

In the extract below, we outline the context for the first area – disruptive network technologies – and explore the rationales and processes associated with virtualisation (Question 1).

Critical network-technology disruptions

This section covers three huge questions which should be at the top of any CTO’s mind in a CSP – and those of many other executives as well. These are strategically-important technology shifts that have the potential to “change the game” in the longer term. While two of them are “wireless” in nature, they also impact fixed/fibre/cable domains, both through integration and potential substitution. These will also have knock-on effects in financial terms – directly in terms of capex/opex costs, or indirectly in terms of services enabled and revenues.

This is not intended as a round-up of every important trend across the technology spectrum. Clearly, there are many other evolutions occurring in device design, IoT, software engineering, optical networking and semiconductor development. These will all intersect in some ways with telcos, but they are so many “logical hops” away from the process of actually building and running networks that they don’t really fit into this document easily (although they do appear in contexts such as drivers of desirable 5G network capabilities).

Instead, the focus once again is on unanswered questions that link innovation with “disruption” of how networks are conceived and deployed. As described below, network virtualisation has huge and diverse impacts across the CSP universe. 5G will likely represent a large break from today’s 4G architecture, too. This is very different from changes which are mostly incremental.

The mobile and software focus of this section is deliberate. Fixed-network technologies – fast-evolving though they are – generally do not today cause “disruption” in a technical sense. As the name suggests, the current newest cable-industry standard, DOCSIS3.1, is an evolution of 3.0, not a revolution. There is no 4.0 on the drawing-boards, yet. But the relative ease of upgrade to “gigabit cable” may unleash more market-related disruptions, as telcos feel the need to play catch-up with their rivals’ swiftly-escalating headline speeds.

Fibre technologies also tend to be comparatively incremental, rather than driving (or enabling) massive organisational and competitive shifts. In fixed networks there are other important drivers – competition, network unbundling, 4K television, OTT-style video and so on – as well as important roles for virtualisation, which covers both mobile and fixed domains. For markets with high use of residential “OTT video” services such as Netflix – especially in 4K variants – the push to gigabit-range speeds may be faster than expected. This will also have knock-on impacts on the continued improvement of WiFi, defending against ever-faster cellular networks. Indeed, faster gigabit cable and FTTH networks will be necessary to provide backhaul for 4.5G and 5G cellular networks, both for normal cell-towers and the expected rapid growth of small-cells.

The questions covered in more depth here examine:

  • Virtualisation & the “software telco”: How fast will SDN and NFV appear in commercial networks, and how broad are their impacts in both medium and longer terms? 
  2. What is the path from 4G to 5G? This is a less obvious question than it might appear, as we do not yet even have agreed definitions of what we want “5G” to do, let alone defined standards to do it.
  • What is the role of WiFi and other wireless technologies? 

All of these intersect, and have inter-dependencies. For instance, 5G networks are likely to embrace SDN/NFV as a core component, and also perhaps form an “umbrella” over other low-power wireless networks.

A fourth “critical” question would have been to consider security technology and processes. Clearly, the future network is going to face continued challenges from hackers and maybe even cyber-warfare, against which we will need to prepare. However, that is in many ways a broader set of questions that actually reflect on all the others – virtualisation will bring its own security dilemmas, as (no doubt) will 5G. WiFi already does. It is certainly a critical area that bears consideration at a strategic level within CSPs, although it is not addressed here as a specific “question”. It is also a huge and complex area that deserves separate study.

Non-disruptive network technologies

As well as being prepared to exploit truly disruptive innovations, the industry also needs to get better at spotting non-disruptive ones that are doomed to failure, and abandoning them before they incur too much cost or distraction. The telecoms sector has a long way to go before it embraces the start-up mentality of “failing fast” – there are too many hypothetical “standards” gathering dust on a metaphorical shelf, and never being deployed despite a huge amount of work. Sometimes they get shoe-horned into new architectures, as a way to breathe life into them – but that often just encumbers shiny new technologies with the failures of the past.

For example, over the past 10+ years, the telecom industry has been pitching IMS (IP Multimedia Subsystem) as the future platform for interoperating services. It is finally gaining some adoption, but essentially only as a way to implement VoIP versions of the phone system – and even then, with huge increases in complexity and often higher costs. It is not “disruptive” except insofar as it sucks huge amounts of resources and management attention away from other possible sources of genuine innovation. Few developers care about it, and the “technology politics” behind it have helped contribute to the industry’s problems, not the solutions. While there is growth in the deployment of IMS (e.g. as a basis for VoLTE – voice over LTE – or fixed-line VoIP) it is primarily an extra cost, rather than a source of new revenue or competitive advantage. It might help telcos reduce costs by retiring old equipment or reclaiming spectrum for re-use, but that seems to be the limit of its utility and opportunity.

Figure 1: IMS-based services (mostly VoIP) are evolutionary not disruptive

Source: Disruptive Analysis

A common theme in recent years has been for individual point solutions for technical standards to seem elegant “in isolation”, but actually fail to take account of the wider market context. Real-world “offload” of mobile data traffic to WiFi and femtocells has been minimal, because of various practical and commercial constraints – many of which have been predictable. Self-optimising networks (where radio components configure, provision and diagnose themselves automatically) suffered from vendor apathy, as well as fears from operator staff that they might make themselves redundant. A whole slew of attempts at integrating WiFi with cellular have also had minimal impact, because they ignored the existence of private WiFi and user behaviour. Some of these are now making a return, engineered into more holistic solutions like HetNets and SDN. Telco execs need to ensure that their representatives on standards bodies, or industry fora, are able to make pragmatic decisions with multiple contributory inputs, rather than always pursue “engineering purity”.

Virtualisation & the “software telco” – how far, how fast?

Spurred by rapid advances in standardised computing products and cloud platforms, the idea of virtualisation is now almost ubiquitous across the telecom sector. Yet the specialised nature of network equipment means that “switching to the cloud” is a lot more complicated than is the case for enterprise IT. But change is happening – the industry is now slowly moving from inflexible, non-scalable network elements or technology sub-systems, to ones which are programmable, run on commercial hardware, and can “spin up” or down in terms of capacity. We are still comparatively early in this new cycle, but the trend now appears to be inexorable. It is being driven both by what is becoming possible – and also by the threats posed by other denizens of the “cloud universe” migrating towards the telecoms industry and threatening to replace aspects of it unilaterally.

Two acronyms cover the main developments:

  • Software-defined networks (SDN) change the basic network “plumbing”: rather than hugely complex switches and routers transmitting and processing data streams individually, SDN puts a central “controller” function in charge of more flexible boxes. These can be updated more easily, have new network-processing capabilities enabled, and allow (hopefully) for better reliability and lower costs.
  • Network function virtualisation (NFV) is less about the “big iron” parts of the network, instead focusing on the myriad of other smaller units needed to do more specific tasks relating to control, security, optimisation and so forth. It allows these supporting functions to be re-cast in software, running as apps on standard servers, rather than needing a variety of separate custom-built boxes and chips.
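The control/data-plane split behind SDN can be made concrete in a few lines: a dumb forwarding table on each switch, and a central controller holding the global policy that programs switches on a table miss. All the names below are invented for illustration – there is no real controller API behind them:

```python
# Toy illustration of the SDN control/data-plane split.
# Class and field names are invented for illustration only.

class Controller:
    """Control plane: holds the global routing policy, programs switches on demand."""
    def __init__(self, routes):
        self.routes = routes          # dst -> output port: the "global view"

    def decide(self, dst):
        return self.routes.get(dst, "drop")

class Switch:
    """Data plane: forwards packets by table lookup, punts misses to the controller."""
    def __init__(self, controller):
        self.flow_table = {}          # match (dst) -> action (output port)
        self.controller = controller

    def handle(self, packet):
        dst = packet["dst"]
        if dst not in self.flow_table:
            # Table miss: ask the controller, then cache the installed rule.
            self.flow_table[dst] = self.controller.decide(dst)
        return self.flow_table[dst]

ctrl = Controller({"10.0.0.2": "port1", "10.0.0.3": "port2"})
sw = Switch(ctrl)
print(sw.handle({"dst": "10.0.0.2"}))  # miss: controller installs rule, forwards
print(sw.handle({"dst": "10.0.0.2"}))  # hit: answered from the local flow table
```

The point of the pattern is that policy lives in one place: change the controller’s view and every switch picks up the new behaviour on its next miss, instead of each box being reconfigured individually.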

Figure 2: ETSI’s vision for NFV

Source: ETSI & STL Partners

And while a lot of focus has been placed on operators’ own data-centres and “data-plane” boxes like routers and assorted traffic-processing “middle-boxes”, that is not the whole story. Virtualisation also extends to the other elements of telco kit: “control-plane” elements used to oversee the network and internal signalling, billing and OSS systems, and even bits of the access and radio network. Tying them all together, and managing the new virtual components, brings new challenges in “orchestration”.

But this raises a number of critical subsidiary questions.

  • Executive Summary
  • Introduction
  • Does the network matter? And will it face “disruption”?
  • Raising questions
  • Overview: Which disruptions are next?
  • Critical network-technology disruptions
  • Non-disruptive network technologies
  • Virtualisation & the “software telco” – how far, how fast?
  • What is the path to 5G? And what will it be used for?
  • What is the role of WiFi & other wireless technologies?
  • What else needs to happen?
  • What are the impacts of government & regulation?
  • Will the vendor landscape shift?
  • Conclusions & Other Questions
  • STL Partners and Telco 2.0: Change the Game
  • Figure 1: New services are both network-integrated & independent
  • Figure 2: IMS-based services (mostly VoIP) are evolutionary not disruptive
  • Figure 3: ETSI’s vision for NFV
  • Figure 4: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 5: Virtualisation roadmap: Telefonica
  • Figure 6: 5G timeline & top-level uses
  • Figure 7: Suggested example 5G use-cases
  • Figure 8: 5G architecture will probably be virtualised from Day 1
  • Figure 9: Key 5G Research Initiatives
  • Figure 10: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 11: Proliferating wireless options for IoT
  • Figure 12: Forthcoming IoT-related wireless technologies
  • Figure 13: London bus with free WiFi sponsored by ice-cream company
  • Figure 14: Vendor landscape in turmoil as IT & network domains merge