How 5G is Disrupting Cloud and Network Strategy Today

5G – cutting through the hype

As with 3G and 4G, the approach of 5G has been heralded by vast quantities of debate and hyperbole. We contemplated reviewing some of the more outlandish statements we’ve seen and heard, but for the sake of brevity we’ll concentrate in this report on the genuine progress that has occurred.

A stronger definition: a collection of related technologies

Let’s start by defining terms. For us, 5G is a collection of related technologies that will eventually be incorporated in a 3GPP standard replacing the current LTE-A. NGMN, the forum meant to coordinate the mobile operators’ requirements vis-à-vis the vendors, recently issued a useful document setting out the technologies it wants to see in the eventual solution, or at least considered in the standards process.

Incremental progress: ‘4.5G’

For a start, NGMN includes a variety of incremental improvements that promise substantially more capacity. These include higher-order modulation, developing the carrier-aggregation features in LTE-A to share spectrum between cells as well as within them, and improving interference coordination between cells. These are uncontroversial and are very likely to be deployed as incremental upgrades to existing LTE networks long before 5G is rolled out or even finished. This is what some vendors, notably Huawei, refer to as 4.5G.

Better antennas, beamforming, etc.

More excitingly, NGMN envisages some advanced radio features. These include beamforming, in which the shape of the radio beam between a base station and a mobile station is adjusted, taking advantage of the diversity of users in space to re-use the available radio spectrum more intensively, and both multi-user and massive MIMO (Multiple Input/Multiple Output). Massive MIMO simply means using many more antennas – the latest equipment currently uses 8 transmitter and 8 receiver antennas (8T8R), whereas 5G might use 64. Multi-user MIMO uses the variety of antennas to serve more users concurrently, rather than just serving them faster individually. These promise quite dramatic capacity gains, at the cost of more computationally intensive software-defined radio systems and more complex antenna designs. Although they are cutting-edge, it’s worth pointing out that 802.11ac Wave 2 WiFi devices shipping now have these features, and the WiFi ecosystem is likely to hold a lead in them for some considerable time.
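
To put rough numbers on the multiplexing gain, here is a minimal sketch using the idealised Shannon capacity of parallel spatial streams; the antenna counts come from the text, but the 20dB SNR and the assumption of independent, equal-quality streams are ours, and real-world gains are substantially lower:

```python
import math

# Idealised spatial multiplexing: each MIMO stream behaves as an independent
# Shannon channel at the same SNR. Real links lose much of this to antenna
# correlation, channel-estimation overhead and unequal per-stream SNR.

def mimo_capacity_bps_per_hz(streams: int, snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)
    return streams * math.log2(1 + snr)

for antennas in (2, 8, 64):  # 8T8R is today's high end; 64 is mooted for 5G
    cap = mimo_capacity_bps_per_hz(antennas, snr_db=20)  # 20dB SNR assumed
    print(f"{antennas:2d} streams -> {cap:6.1f} bps/Hz")
#  2 streams ->   13.3 bps/Hz
#  8 streams ->   53.3 bps/Hz
# 64 streams ->  426.1 bps/Hz
```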

New spectrum

NGMN also sees evolution towards 5G in terms of spectrum. We can divide this into a conservative and a radical phase: in the first, conservative phase, 5G is expected to start using bands below 6GHz, while in the second, radical phase, the centimetre/millimetre-wave bands up to and above 30GHz are in discussion. These promise vastly more bandwidth, but as usual will demand a higher density of smaller cells and lower transmitter power levels. It’s worth pointing out that it’s still unclear whether 6GHz will make the agenda for this year’s WRC-15 conference, and 60GHz may or may not be taken up at WRC-19 in 2019, so spectrum policy is on the critical path for the whole 5G project.

Full duplex radio – doubling capacity in one stroke

Moving on, we come to some much more radical proposals and exotic technologies. 5G may use the emerging technology of full-duplex radio, which leverages advances in hardware signal processing to cancel self-interference and make it possible for radio devices to send and receive at the same time on the same frequency – something hitherto thought impossible, as self-interference has been a fundamental constraint in radio. This area has seen a lot of progress recently and is moving from academic research towards industrial status. If it works, it promises to double the capacity provided by all the other technologies combined.

A new, flatter network architecture?

A major redesign of the network architecture is being studied, and this is highly controversial. A new architecture would likely be much “flatter”, with fewer levels of abstraction (such as the encapsulation of Internet traffic in the GTP protocol) and fewer centralised functions. That would be a very radical break with the GSM-inspired practice that worked in 2G and 3G, and in an adapted form in 4G. However, the very demanding latency targets we will discuss in a moment will be very difficult to satisfy with a centralised architecture.

Content-centric networking

Finally, serious consideration is being given to what the NGMN calls information-based networking, better known to the wider community as name-based networking, named-data networking, or content-centric networking, as TCP-Reno pioneer Van Jacobson called it when he introduced the concept in a now-classic lecture. The idea here is that the Internet currently works by mapping content to domain names to machines. In content-centric networking, users request an item of content, uniquely identified by a name, and the network finds the nearest source for it, thus keeping traffic localised and facilitating scalable, distributed systems. This would represent a radical break with both GSM-inspired and most Internet practice, and is currently very much a research project. However, code does exist and has even been implemented on the OpenFlow SDN platform, and IETF standardisation is under way.
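
As a toy illustration of the request-by-name idea (our own sketch, not the actual NDN or CCNx protocol, whose real machinery involves interest packets, pending-interest tables and forwarding strategies):

```python
# Each node is a cache holding named content; a request names the data and is
# answered by the nearest node that has a copy, rather than by a fixed host.
NETWORK = {
    "cell-site-cache": {"/bbc/news/video123": "<video segment>"},
    "metro-pop":       {"/bbc/news/video123": "<video segment>",
                        "/cdn/app.js": "<script>"},
    "origin-server":   {"/bbc/news/video123": "<video segment>",
                        "/cdn/app.js": "<script>"},
}
HOPS = {"cell-site-cache": 1, "metro-pop": 3, "origin-server": 12}  # from user

def fetch(name: str) -> str:
    """Serve a named object from the nearest cache that holds it."""
    holders = [node for node, store in NETWORK.items() if name in store]
    nearest = min(holders, key=HOPS.get)
    print(f"{name} served from {nearest} ({HOPS[nearest]} hop(s) away)")
    return NETWORK[nearest][name]

fetch("/bbc/news/video123")  # nearest copy: cell-site-cache, 1 hop
fetch("/cdn/app.js")         # nearest copy: metro-pop, 3 hops
```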

The mother of all stretch targets

5G is already a term associated with implausibly grand theoretical maxima, like every G before it. However, the NGMN has the advantage of being a body that serves first of all the interests of the operators – the customers – rather than the vendors. Its expectations are therefore substantially more interesting than some of the vendors’ propaganda material. It has also recently started to reach out to other stakeholders, such as manufacturing companies involved in the Internet of Things.

Reading the NGMN document raises some interesting issues about the definition of 5G. Rather than set targets in an absolute sense, it puts forward parameters for a wide range of different use cases. A common criticism of the 5G project is that it is over-ambitious in trying to serve, for example, low-bandwidth, ultra-low-power M2M monitoring networks and ultra-HD multicast video streaming with the same network. The range of use cases and performance requirements NGMN has defined is so diverse that they might indeed be served by different radio interfaces within a 5G infrastructure, or even by fully independent radio networks. Whether 5G ends up as “one radio network to rule them all”, an interconnection standard for several radically different systems, or something in between (for example, a radio standard with options, or a common core network and specialised radios) is very much up for debate.

In terms of speed, NGMN is looking for 50Mbps user throughput “everywhere”, with half that speed available on the uplink. Success is defined here at the 95th percentile, so this means 50Mbps at 95% geographical coverage, 95% of the time. This should support handoff at speeds up to 120km/h. In terms of density, it should support 100 users/sq km in rural areas and 400 in suburban areas, with 10 and 20Gbps/sq km capacity respectively. This seems to be intended as the baseline cellular service in the 5G context.

In the urban core, downlink of 300Mbps and uplink of 50Mbps are required, with 100km/h handoff and up to 2,500 concurrent users per square kilometre. Note that the density targets are per-operator, so that would be 10,000 concurrent users/sq km where four MNOs are present. Capacity of 750Gbps/sq km downlink and 125Gbps/sq km uplink is required.

An extreme high-density scenario is included as “broadband in a crowd”. This requires the same speeds as the “50Mbps everywhere” scenario, with vastly greater density (150,000 concurrent users/sq km, or 30,000 “per stadium”) and commensurately higher capacity. However, the capacity planning assumes that this use case is uplink-heavy – 7.5Tbps/sq km uplink compared to 3.75Tbps downlink. That’s a lot of selfies, even in 4K! The fast handoff requirement, though, is relaxed to support only pedestrian speeds.
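
Dividing the quoted area capacities by the user densities recovers the per-user rates, which is a useful sanity check on the NGMN numbers; a minimal sketch (note the rural case carries 2× headroom over the 50Mbps headline, and the crowd case is flipped towards the uplink):

```python
# name: (concurrent users per sq km, downlink Gbps/sq km, uplink Gbps/sq km)
scenarios = {
    "50Mbps everywhere (rural)":  (100, 10, None),
    "50Mbps everywhere (suburb)": (400, 20, None),
    "urban core (per operator)":  (2_500, 750, 125),
    "broadband in a crowd":       (150_000, 3_750, 7_500),
}

for name, (users, dl_gbps, ul_gbps) in scenarios.items():
    line = f"{name:27s} downlink {dl_gbps * 1000 / users:6.1f} Mbps/user"
    if ul_gbps is not None:
        line += f", uplink {ul_gbps * 1000 / users:5.1f} Mbps/user"
    print(line)
# 50Mbps everywhere (rural)   downlink  100.0 Mbps/user
# 50Mbps everywhere (suburb)  downlink   50.0 Mbps/user
# urban core (per operator)   downlink  300.0 Mbps/user, uplink  50.0 Mbps/user
# broadband in a crowd        downlink   25.0 Mbps/user, uplink  50.0 Mbps/user
```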

There is also a femtocell/WLAN-like scenario for indoor and enterprise networks, which pushes speed and capacity to their limits, with 1Gbps downlink and 500Mbps uplink, 75,000 concurrent users/sq km (or 75 users per 1,000 square metres of floor space), and no significant mobility. Finally, there is an “ultra-low cost broadband” requirement with 10Mbps symmetrical, 16 concurrent users/sq km, 16Mbps/sq km capacity, and 50km/h handoff. (There are also some niche cases, such as broadcast, in-car, and aeronautical applications, which we propose to gloss over for now.)

Clearly, the solution will have to be either very flexible or a federation of very different networks with dramatically different radio properties. It would, for example, probably be possible to aggregate the “50Mbps everywhere” and ultra-low cost solutions – arguably the low-cost option is just the 50Mbps option done on the cheap, with fewer sites and low-band spectrum. The “broadband in a crowd” option might be an alternative operating mode for the “urban core” option, turning off handoff, pulling in more aggregated spectrum, and reallocating downlink and uplink channels or timeslots. But this does begin to look like at least three networks.

Latency: the X factor

Another big stretch, and perhaps the most controversial issue here, is the latency requirement. NGMN draws a clear distinction between what it calls end-to-end latency, aka the familiar round-trip time measurement from the Internet, and user-plane latency, defined thus:

Measures the time it takes to transfer a small data packet from user terminal to the Layer 2 / Layer 3 interface of the 5G system destination node, plus the equivalent time needed to carry the response back.

That is to say, the user-plane latency measures how long it takes the 5G network, strictly speaking, to respond to user requests, and how long it takes for packets to traverse it. NGMN points out that the two metrics are equivalent if the target server is located within the 5G network. NGMN defines both using small packets, and therefore negligible serialisation delay, and assuming zero processing delay at the target server. The target is 10ms end-to-end, 1ms for special use cases requiring low latency, or 50ms end-to-end for the “ultra-low cost broadband” use case. The low-latency use cases tend to be things like communication between connected cars, which will probably fall under the direct device-to-device (D2D) element of 5G, but some vendors nevertheless seem to think the target applies to infrastructure as well as D2D. In any case, this requirement should be read as one for which the 5G user-plane latency is the relevant metric.

This last target is arguably the biggest stretch of all, but also perhaps the most valuable.

The lower bound on any measurement of latency is very simple – it’s the time it takes to physically reach the target server at the speed of light. Latency is therefore intimately connected with distance. Latency is also intimately connected with speed – protocols like TCP use it to determine how many bytes they can risk “in flight” before getting an acknowledgement, and hence how much useful throughput can be derived from a given theoretical bandwidth. Also, at faster data rates, more of the total time it takes to deliver something is taken up by latency rather than transfer.
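
A minimal sketch of that speed–latency coupling via the bandwidth-delay product (the 64KB window and the RTT values are illustrative choices of ours, not NGMN figures):

```python
# TCP cannot deliver more than one window of unacknowledged data per round
# trip, so throughput is capped at window / RTT regardless of link bandwidth.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # the classic 64KB window, without window scaling
for rtt_ms in (1, 10, 50):
    mbps = max_tcp_throughput_mbps(WINDOW, rtt_ms)
    print(f"RTT {rtt_ms:2d}ms -> at most {mbps:6.1f} Mbps")
# RTT  1ms -> at most  524.3 Mbps
# RTT 10ms -> at most   52.4 Mbps
# RTT 50ms -> at most   10.5 Mbps
```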

And the way we build applications now tends to make latency, and especially the variance in latency known as jitter, more important. In order to handle the scale demanded by the global Internet, it is usually necessary to scale out by breaking up the load across many, many servers. In order to make this work, it is usually also necessary to disaggregate the application itself into numerous, specialised, and independent microservices. (We strongly recommend Mary Poppendieck’s presentation at the link.)

The result of this is that a popular app or Web page might involve calls to dozens or hundreds of different services. Google.com includes 31 HTTP requests these days, and Amazon.com 190. If the variation in latency is not carefully controlled, it becomes statistically more likely than not that a typical user will encounter at least one server’s 99th-percentile performance. (eBay tries to identify users getting slow service and serve them a deliberately cut-down version of the site – see slide 17 here.)
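
A quick back-of-envelope check of that claim, assuming the backend calls are independent and each has the same 1-in-100 chance of a slow response:

```python
# With n independent calls, the chance that at least one hits a server's
# 99th-percentile (worst 1%) response time is 1 - 0.99**n.

def p_at_least_one_slow(n_calls: int, quantile: float = 0.99) -> float:
    return 1 - quantile ** n_calls

for n in (31, 100, 190):  # 31 ~ Google.com, 190 ~ Amazon.com, per the text
    print(f"{n:3d} calls -> {p_at_least_one_slow(n):4.0%} chance of a tail hit")
#  31 calls ->  27% chance of a tail hit
# 100 calls ->  63% chance of a tail hit
# 190 calls ->  85% chance of a tail hit
```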

We discuss this in depth in a Telco 2.0 Blog entry here.

Latency: the challenge of distance

It’s worth pointing out here that the 5G targets can literally be translated into kilometres. The rule of thumb for speed-of-light delay is 4.9 microseconds for each kilometre of fibre with a refractive index of 1.47. 1ms – 1,000 microseconds – therefore equals about 204km in a straight line, assuming no routing delay. A response back is needed too, so divide that distance in half. As a result, in order to comply with the NGMN 5G requirements, all the network functions required to process a data call must be physically located within 100km, i.e. 1ms, of the user. And if the end-to-end requirement is taken seriously, the applications or content users want must also be hosted within 1,000km, i.e. 10ms, of the user. (In practice, there will be some delay contributed by serialisation, routing, and processing at the target server, so this would actually be somewhat more demanding.)
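
The contours quoted above follow directly from that rule of thumb; a minimal sketch (the report rounds the 102km and 1,020km results down to 100km and 1,000km):

```python
# One-way delay in fibre is ~4.9 microseconds/km (refractive index ~1.47);
# a round trip spends half the latency budget in each direction.
US_PER_KM = 4.9

def max_server_distance_km(latency_budget_ms: float) -> float:
    one_way_us = latency_budget_ms * 1000 / 2
    return one_way_us / US_PER_KM

for budget_ms in (1, 5, 10):  # NGMN user-plane and end-to-end targets
    km = max_server_distance_km(budget_ms)
    print(f"{budget_ms:2d}ms budget -> server within ~{km:,.0f}km")
#  1ms budget -> server within ~102km
#  5ms budget -> server within ~510km
# 10ms budget -> server within ~1,020km
```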

To achieve this, the architecture of 5G networks will need to change quite dramatically. Centralisation suddenly looks like the enemy, and middleboxes providing video optimisation, deep packet inspection, policy enforcement, and the like will have no place. At the same time, protocol designers will have to think seriously about localising traffic – this is where the content-centric networking concept comes in. Given the number of interested parties in the subject overall, it is likely that there will be a significant period of ‘horse-trading’ over the detail.

It will also require nothing less than a CDN and data-centre revolution. Content, apps, or commerce hosted within this 1,000km contour will have a very substantial competitive advantage over sites that don’t move their hosting strategy to take advantage of lower latency. Telecoms operators, by the same token, will have to radically decentralise their networks to get their systems within the 100km contour. Sites that move closer still, to the 5ms/500km contour or beyond, will benefit even more. The idea of centralising everything into shared services and global cloud platforms suddenly looks dated. So might the enormous hyperscale data centres one day look like the IT equivalent of sprawling, gas-guzzling suburbia? And will mobile operators become a key actor in the data-centre economy?

  • Executive Summary
  • Introduction
  • 5G – cutting through the hype
  • A stronger definition: a collection of related technologies
  • The mother of all stretch targets
  • Latency: the X factor
  • Latency: the challenge of distance
  • The economic value of snappier networks
  • Only Half The Application Latency Comes from the Network
  • Disrupt the cloud
  • The cloud is the data centre
  • Have the biggest data centres stopped getting bigger?
  • Mobile Edge Computing: moving the servers to the people
  • Conclusions and recommendations
  • Regulatory and political impact: the Opportunity and the Threat
  • Telco-Cloud or Multi-Cloud?
  • 5G vs C-RAN
  • Shaping the 5G backhaul network
  • Gigabit WiFi: the bear may blow first
  • Distributed systems: it’s everyone’s future


  • Figure 1: Latency = money in search
  • Figure 2: Latency = money in retailing
  • Figure 3: Latency = money in financial services
  • Figure 4: Networking accounts for 40-60 per cent of Facebook’s load times
  • Figure 5: A data centre module
  • Figure 6: Hyperscale data centre evolution, 1999-2015
  • Figure 7: Hyperscale data centre evolution 2. Power density
  • Figure 8: Only Facebook is pushing on with ever bigger data centres
  • Figure 9: Equinix – satisfied with 40k sq ft
  • Figure 10: ETSI architecture for Mobile Edge Computing


Gigabit Cable Attacks This Year

Introduction

Since at least May 2014, when we published the Triple Play in the USA Executive Briefing, we have been warning that the cable industry’s continuous improvement of its DOCSIS 3 technology threatens fixed operators with a succession of relatively cheap (in CAPEX terms) but dramatic speed jumps. Gigabit chipsets have been available for some time, so the actual timing of the roll-out is set by cable operators’ commercial choices.

With the arrival of DOCSIS 3.1, multi-gigabit cable has also become available. As a result, cable operators have become the best value providers in the broadband mass markets: typically, we found in the Triple Play briefing, they were the cheapest in terms of price/megabit in the most common speed tiers, at the time between 50 and 100Mbps. They were sometimes also the leaders for outright speed, and this has had an effect. In Q3 2014, for the first time, Comcast had more high-speed Internet subscribers than it had TV subscribers, on a comparable basis. Furthermore, in Europe, cable industry revenues grew 4.6% in 2014 while the TV component grew 1.8%. In other words, cable operators are now broadband operators above all.

Figure 1: Comcast now has more broadband than TV customers

Source: STL Partners, Comcast Q1 2015 trending schedule 

In the December 2014 Executive Briefing, Will AT&T shed copper, fibre-up, or buy more content – and what are the lessons?, we covered the impact on AT&T’s consumer wireline business, and pointed out that its strategy of concentrating on content as opposed to broadband has not really delivered. In the context of ever more competition from streaming video, it was necessary to have an outstanding broadband product before trying to add content revenues – something AT&T’s DSL infrastructure couldn’t deliver against cable or fibre competitors. The cable competition concentrated on winning whole households’ spending with broadband, with content as an upsell, and has undermined the wireline base to the point where AT&T might well exit a large proportion of it or perhaps sell off the division, refocusing on wireless, DirecTV satellite TV, and enterprise. At the moment, Comcast sees about two broadband net-adds for each triple-play net-add, although the increasing numbers of business ISP customers complicate the picture.

Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups

Source: STL, Comcast Q1 trending schedule

Since Christmas, the trend has picked up speed. Comcast announced a 2Gbps deployment to 1.5 million homes in the Atlanta metropolitan area, with a national deployment to follow. Time Warner Cable has announced a wave of upgrades in Charlotte, North Carolina, that raises its current 30Mbps tier to 200Mbps and its 50Mbps tier to 300Mbps, after Google Fiber announced plans to deploy in the area. In the UK, Virgin Media users have been reporting unusually high speeds, apparently because the operator is trialling a 300Mbps speed tier, not long after it upgraded 50Mbps users to 152Mbps.

It is very much worth noting that these deployments are at scale. The Comcast and TWC rollouts are in the millions of premises. When the Virgin Media one reaches production status, it will be multi-million too. Vodafone-owned KDG in Germany is currently deploying 200Mbps, and it will likely go further as soon as it feels the need from a tactical point of view. This is the advantage of an upgrade path that doesn’t require much trenching. Not only can the upgrades be incremental and continuous, they can also be deployed at scale without enormous disruption.

Technology is driving the cable surge

This year’s CES saw the announcement by Broadcom of a new system-on-a-chip (SoC) for cable modems/STBs that integrates the new DOCSIS 3.1 cable standard. This provides for even higher speeds, theoretically up to 7Gbps downlink, while still providing a broadcast path for pure TV. The SoC also includes a WLAN radio with the newest 802.11ac technology, including beamforming and 4×4 multiple-input/multiple-output (MIMO), which is rated for gigabit speeds in the local network.

Even taking into account the usual level of exaggeration, this is an impressive package, offering telco-hammering broadband speeds, support for broadcast TV, and in-home distribution at speeds that can keep up with 4K streaming video. These are the SoCs that Comcast will be using for its gigabit cable rollouts. STMicroelectronics demonstrated its own multigigabit solution at CES, and although Intel has yet to show a DOCSIS 3.1 SoC, the most recent version of its Puma platform offers up to 1.6Gbps in a DOCSIS 3 network. DOCSIS 3 and 3.1 are designed to be interoperable, so this product has a future even after the head-ends are upgraded.

Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset

Source: RCRWireless 

With multiple chipset vendors shipping products, CableLabs running regular interoperability tests, and large regional deployments beginning, we conclude that the big cable upgrade is now here. Even if cable operators succeed in virtualising their set-top box software, you can’t provide the customer-end modem or the WiFi router from the cloud. It’s important to realise that FTTH operators can upgrade in a similarly painless way by replacing their optical network terminals (ONTs), but DSL operators need to replace infrastructure. Also, ONTs are often independent of the WLAN router or other customer equipment, so the upgrade won’t necessarily improve the WiFi.

WiFi is also getting a major upgrade

The Broadcom device is so significant, though, because of the very strong WiFi support built in alongside the cable modem. Like the cable industry, the WiFi ecosystem has succeeded in keeping up a steady cycle of continuous improvements that are usually backwards compatible, from 802.11b through to 802.11ac, thanks to a major standards effort, the scale that Intel’s and Apple’s support provides, and its relatively light intellectual-property encumbrance.

802.11ac adds a number of advanced radio features – notably multi-user MIMO, beamforming, and higher-density modulation – that are only expected to arrive in the cellular network as part of 5G some time after 2020, as well as some incremental improvements over 802.11n, like additional MIMO streams, wider channels, and 5GHz spectrum by default. As a result, the industry refers to it as “gigabit WiFi”, although the gigabit is a per-station rather than per-user throughput.

The standard has been settled since January 2014, and support has been available in most flagship-class devices and laptop chipsets since then, so this is now a reality. The upgrade of the cable networks to 802.11ac WiFi backed with DOCSIS 3.1 will have major strategic consequences for telcos, as it enables the cable operators, and any strategic partners of theirs, to go in even harder on the fixed broadband business and also launch a WiFi-plus-MVNO mobile service at the same time. The beamforming element of 802.11ac should help them support higher user densities, as it makes use of the spatial diversity among different stations to reduce interference. Cablevision already launched a mobile service just before Christmas. We know Comcast is planning to launch one sometime this year, as it has been hiring a variety of mobile professionals quite aggressively. And, of course, the CableWiFi roaming alliance greatly facilitates scaling up such a service. The economics of a mini-carrier, as we pointed out in the Google MVNO: What’s Behind It and What Are the Implications? Executive Briefing, hinge on how much traffic can be offloaded to WiFi or small cells.

Figure 4: Modelling a mini-carrier shows that the WiFi is critical

Source: STL Partners

Traffic carried on WiFi costs nothing in terms of spectrum and much less in terms of CAPEX (due to the lower intellectual-property tax and the very high production runs of WiFi equipment). In a cable context, it will often be backhauled in the spare capacity of the fixed access network, and will therefore account for very little additional cost on this score. As a result, the percentage of data traffic transferred to WiFi, or absorbed by it, is a crucial variable. KDDI, for example, carries 57% of its mobile data traffic on WiFi and hopes to reach 65% by the end of this year. Increasing the fraction from 30% to 57% roughly halved its CAPEX on LTE.
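
As a deliberately crude sketch of why the offload fraction matters so much (the per-GB unit costs are invented for illustration; STL’s actual mini-carrier model is more elaborate):

```python
# Blended delivery cost falls steeply as traffic shifts to WiFi, because WiFi
# uses unlicensed spectrum, cheap equipment and often spare fixed backhaul.
WIFI_COST_PER_GB = 0.02      # assumed: backhaul over existing fixed access
CELLULAR_COST_PER_GB = 0.20  # assumed: licensed spectrum plus macro capex

def blended_cost_per_gb(wifi_fraction: float) -> float:
    return (wifi_fraction * WIFI_COST_PER_GB
            + (1 - wifi_fraction) * CELLULAR_COST_PER_GB)

for f in (0.30, 0.57, 0.65):  # KDDI's past, current and target offload levels
    print(f"offload {f:.0%} -> blended cost ${blended_cost_per_gb(f):.3f}/GB")
# offload 30% -> blended cost $0.146/GB
# offload 57% -> blended cost $0.097/GB
# offload 65% -> blended cost $0.083/GB
```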

A major regulatory issue at the moment is the deployment of LTE-LAA (Licensed-Assisted Access), which aggregates unlicensed radio spectrum with a channel from licensed spectrum in order to increase the available bandwidth. The 5GHz WiFi band is the most likely candidate for this, as it is widely available, contains a lot of capacity, and is well-supported in hardware.

We should expect the cable industry to push back very hard against efforts to rush deployment of LTE-LAA cellular networks through the regulatory process, as it has a great deal to lose if the cellular networks start to take up a large proportion of the 5GHz band. From the cable operators’ point of view, a major purpose of LTE-LAA might be to occupy the 5GHz band and deny it to their WiFi operations.

  • Executive Summary
  • Introduction
  • Technology is driving the cable surge
  • WiFi is also getting a major upgrade
  • Wholesale and enterprise markets are threatened as well
  • The Cable Surge Is Disrupting Wireline
  • Conclusions
  • STL Partners and Telco 2.0: Change the Game 
  • Figure 1: Comcast now has more broadband than TV customers
  • Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups
  • Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset
  • Figure 4: Modelling a mini-carrier shows that the WiFi is critical
  • Figure 5: Comcast’s growth is mostly driven by business services and broadband
  • Figure 6: Comcast Business is its growth star, with a 27% CAGR
  • Figure 7: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 8: 3 major cable operators’ business services are now close to AT&T or Verizon’s scale
  • Figure 9: Summary of gigabit deployments
  • Figure 10: CAPEX as a % of revenue has been falling for some time…


Key Questions for The Future of the Network, Part 2: Forthcoming Disruptions

We recently published a report, Key Questions for The Future of the Network, Part 1: The Business Case, exploring the drivers for network investment. In this follow-up report, we expand the coverage into two separate areas through which we explore 5 key questions:

Disruptive network technologies

  1. Virtualisation & the software telco – how far, how fast?
  2. What is the path to 5G? And what will it be used for?
  3. What is the role of WiFi & other wireless technologies?

External changes

  1. What are the impacts of government & regulation on the network?
  2. How will the vendor landscape change & what are the implications of this?

In the extract below, we outline the context for the first area – disruptive network technologies – and explore the rationales and processes associated with virtualisation (Question 1).

Critical network-technology disruptions

This section covers three huge questions which should be at the top of any CTO’s mind in a CSP – and those of many other executives as well. These are strategically-important technology shifts that have the potential to “change the game” in the longer term. While two of them are “wireless” in nature, they also impact fixed/fibre/cable domains, both through integration and potential substitution. These will also have knock-on effects in financial terms – directly in terms of capex/opex costs, or indirectly in terms of services enabled and revenues.

This is not intended as a round-up of every important trend across the technology spectrum. Clearly, there are many other evolutions occurring in device design, IoT, software engineering, optical networking and semiconductor development. These will all intersect in some ways with telcos, but they are so many “logical hops” away from the process of actually building and running networks that they don’t really fit into this document easily. (They do appear in contexts such as drivers of desirable 5G network capabilities.)

Instead, the focus once again is on unanswered questions that link innovation with “disruption” of how networks are conceived and deployed. As described below, network virtualisation has huge and diverse impacts across the CSP universe. 5G, too, will likely represent a large break from today’s 4G architecture. This is very different from changes which are mostly incremental.

The mobile and software focus of this section is deliberate. Fixed-network technologies – fast-evolving though they are – generally do not today cause “disruption” in a technical sense. As the name suggests, the current newest cable-industry standard, DOCSIS3.1, is an evolution of 3.0, not a revolution. There is no 4.0 on the drawing-boards, yet. But the relative ease of upgrade to “gigabit cable” may unleash more market-related disruptions, as telcos feel the need to play catch-up with their rivals’ swiftly-escalating headline speeds.

Fibre technologies also tend to be comparatively incremental, rather than driving (or enabling) massive organisational and competitive shifts. In fixed networks there are other important drivers – competition, network unbundling, 4K television, OTT-style video and so on – as well as important roles for virtualisation, which covers both mobile and fixed domains. For markets with high use of residential “OTT video” services such as Netflix – especially in 4K variants – the push to gigabit-range speeds may be faster than expected. This will also have knock-on impacts on the continued improvement of WiFi, defending against ever-faster cellular networks. Indeed, faster gigabit cable and FTTH networks will be necessary to provide backhaul for 4.5G and 5G cellular networks, both for normal cell-towers and the expected rapid growth of small cells.

The questions covered in more depth here examine:

  • Virtualisation & the “software telco”: How fast will SDN and NFV appear in commercial networks, and how broad are their impacts in both medium and longer terms? 
  • What is the path from 4G to 5G? This is a less obvious question than it might appear, as we do not yet even have agreed definitions of what we want “5G” to do, let alone defined standards to do it.
  • What is the role of WiFi and other wireless technologies? 

All of these intersect, and have inter-dependencies. For instance, 5G networks are likely to embrace SDN/NFV as a core component, and also perhaps form an “umbrella” over other low-power wireless networks.

A fourth “critical” question would have been to consider security technology and processes. Clearly, the future network is going to face continued challenges from hackers and maybe even cyber-warfare, for which we will need to prepare. However, that is in many ways a broader set of questions that reflects on all the others – virtualisation will bring its own security dilemmas, as (no doubt) will 5G. WiFi already does. It is certainly a critical area that bears consideration at a strategic level within CSPs, although it is not addressed here as a specific “question”. It is also a huge and complex area that deserves separate study.

Non-disruptive network technologies

As well as being prepared to exploit truly disruptive innovations, the industry also needs to get better at spotting non-disruptive ones that are doomed to failure, and abandoning them before they incur too much cost or distraction. The telecoms sector has a long way to go before it embraces the start-up mentality of “failing fast” – there are too many hypothetical “standards” gathering dust on a metaphorical shelf, and never being deployed despite a huge amount of work. Sometimes they get shoe-horned into new architectures, as a way to breathe life into them – but that often just encumbers shiny new technologies with the failures of the past.

For example, over the past 10+ years, the telecom industry has been pitching IMS (IP Multimedia Subsystem) as the future platform for interoperating services. It is finally gaining some adoption, but essentially only as a way to implement VoIP versions of the phone system – and even then, with huge increases in complexity and often higher costs. It is not “disruptive” except insofar as it sucks huge amounts of resources and management attention away from other possible sources of genuine innovation. Few developers care about it, and the “technology politics” behind it have contributed to the industry’s problems, not the solutions. While there is growth in the deployment of IMS (e.g. as a basis for VoLTE – voice over LTE – or fixed-line VoIP), it is primarily an extra cost, rather than a source of new revenue or competitive advantage. It might help telcos reduce costs by retiring old equipment or reclaiming spectrum for re-use, but that seems to be the limit of its utility and opportunity.

Figure 1: IMS-based services (mostly VoIP) are evolutionary not disruptive

Source: Disruptive Analysis

A common theme in recent years has been for individual point solutions or technical standards to seem elegant “in isolation”, but fail to take account of the wider market context. Real-world “offload” of mobile data traffic to WiFi and femtocells has been minimal, because of various practical and commercial constraints – many of which were predictable. Self-optimising networks (in which radio components configure, provision and diagnose themselves automatically) suffered from vendor apathy – as well as fears from operator staff that they might make themselves redundant. A whole slew of attempts at integrating WiFi with cellular have also had minimal impact, because they ignored the existence of private WiFi and user behaviour. Some of these are now making a return, engineered into more holistic solutions like HetNets and SDN. Telco execs need to ensure that their representatives on standards bodies, or industry fora, are able to make pragmatic decisions with multiple contributory inputs, rather than always pursue “engineering purity”.

Virtualisation & the “software telco” – how far, how fast?

Spurred by rapid advances in standardised computing products and cloud platforms, the idea of virtualisation is now almost ubiquitous across the telecom sector. Yet the specialised nature of network equipment means that “switching to the cloud” is a lot more complicated than it is for enterprise IT. But change is happening – the industry is now slowly moving from inflexible, non-scalable network elements and technology sub-systems to ones which are programmable, run on commercial hardware, and can “spin up” or down in capacity. We are still comparatively early in this new cycle, but the trend now appears inexorable. It is being driven both by what is becoming possible, and by the threats posed by other denizens of the “cloud universe” migrating towards the telecoms industry and threatening to replace aspects of it unilaterally.

Two acronyms cover the main developments:

  • Software-defined networks (SDN) change the basic network “plumbing” – rather than hugely complex switches and routers each transmitting and processing data streams individually, SDN puts a central “controller” function in charge of simpler, more flexible boxes (see the toy sketch after this list). These can be updated more easily, have new network-processing capabilities enabled, and allow (hopefully) for better reliability and lower costs.
  • Network function virtualisation (NFV) is less about the “big iron” parts of the network, instead focusing on the myriad of other smaller units needed to do more specific tasks relating to control, security, optimisation and so forth. It allows these supporting functions to be re-cast in software, running as apps on standard servers, rather than needing a variety of separate custom-built boxes and chips.
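
A toy sketch of the SDN control/data-plane split described above (our own illustration in plain Python, not OpenFlow’s real API): the controller holds the global view and pushes forwarding rules down into simple switches.

```python
class Switch:
    """Data plane: a dumb box that forwards according to its flow table."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table: dict[str, str] = {}  # match -> action

    def install_rule(self, match: str, action: str) -> None:
        self.flow_table[match] = action  # pushed down by the controller

    def forward(self, match: str) -> str:
        # On a table miss, a real switch would punt the packet to the controller.
        return self.flow_table.get(match, "send-to-controller")

class Controller:
    """Control plane: computes paths centrally, programs every switch."""
    def __init__(self, switches: list[Switch]):
        self.switches = switches

    def set_path(self, match: str, out_ports: dict[str, int]) -> None:
        for sw in self.switches:
            sw.install_rule(match, f"out-port-{out_ports[sw.name]}")

edge, core = Switch("edge-1"), Switch("core-1")
ctrl = Controller([edge, core])
ctrl.set_path("10.0.0.0/8", {"edge-1": 3, "core-1": 7})
print(edge.forward("10.0.0.0/8"))      # out-port-3
print(core.forward("192.168.0.0/16"))  # send-to-controller (no rule yet)
```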

Figure 2: ETSI’s vision for NFV

Source: ETSI & STL Partners

And while a lot of focus has been placed on operators’ own data centres and “data-plane” boxes like routers and assorted traffic-processing “middle-boxes”, that is not the whole story. Virtualisation also extends to the other elements of telco kit: “control-plane” elements used to oversee the network and internal signalling, billing and OSS systems, and even parts of the access and radio network. Tying them all together – and managing the new virtual components – brings new challenges in “orchestration”.

But this raises a number of critical subsidiary questions.

  • Executive Summary
  • Introduction
  • Does the network matter? And will it face “disruption”?
  • Raising questions
  • Overview: Which disruptions are next?
  • Critical network-technology disruptions
  • Non-disruptive network technologies
  • Virtualisation & the “software telco” – how far, how fast?
  • What is the path to 5G? And what will it be used for?
  • What is the role of WiFi & other wireless technologies?
  • What else needs to happen?
  • What are the impacts of government & regulation?
  • Will the vendor landscape shift?
  • Conclusions & Other Questions
  • STL Partners and Telco 2.0: Change the Game
  • Figure 1: New services are both network-integrated & independent
  • Figure 2: IMS-based services (mostly VoIP) are evolutionary not disruptive
  • Figure 3: ETSI’s vision for NFV
  • Figure 4: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 5: Virtualisation roadmap: Telefonica
  • Figure 6: 5G timeline & top-level uses
  • Figure 7: Suggested example 5G use-cases
  • Figure 8: 5G architecture will probably be virtualised from Day 1
  • Figure 9: Key 5G Research Initiatives
  • Figure 10: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 11: Proliferating wireless options for IoT
  • Figure 12: Forthcoming IoT-related wireless technologies
  • Figure 13: London bus with free WiFi sponsored by ice-cream company
  • Figure 14: Vendor landscape in turmoil as IT & network domains merge


The Future Value of Voice and Messaging

Background – ‘Voice and Messaging 2.0’

This is the latest report in our analysis of developments and strategies in the field of voice and messaging services over the past seven years. In 2007/8 we predicted the current decline in telco-provided services in Voice & Messaging 2.0 “What to learn from – and how to compete with – Internet Communications Services”, further articulated strategic options in Dealing with the ‘Disruptors’: Google, Apple, Facebook, Microsoft/Skype and Amazon in 2011, and more recently published initial forecasts in European Mobile: The Future’s not Bright, it’s Brutal. We have also looked in depth at enterprise communications opportunities, for example in Enterprise Voice 2.0: Ecosystem, Species and Strategies, and at trends in consumer behaviour, for example in The Digital Generation: Introducing the Participation Imperative Framework. For more on these reports and all of our other research on this subject, please see here.

The New Report


This report provides an independent and holistic view of the voice and messaging market, looking in detail at trends, drivers and detailed forecasts, the latest developments, and the opportunities for all players involved. The analysis will save valuable time, effort and money by providing more realistic forecasts of future potential, and a fast track to developing and/or benchmarking a leading-edge strategy and approach in digital communications. It contains:

  • Our independent, external market-level forecasts of voice and messaging in 9 selected markets (US, Canada, France, Germany, Spain, UK, Italy, Singapore, Taiwan).
  • Best practice and leading-edge strategies in the design and delivery of new voice and messaging services (leading to higher customer satisfaction and lower churn).
  • The factors that will drive best and worst case performance.
  • The intentions, strategies, strengths and weaknesses of formerly adjacent players now taking an active role in the V&M market (e.g. Microsoft)
  • Case studies of Enterprise Voice applications including Twilio and Unified Communications solutions such as Microsoft Office 365
  • Case studies of Telco OTT consumer voice and messaging services, such as Telefonica’s TuGo
  • Lessons from case studies of leading-edge new voice and messaging applications globally, such as WhatsApp, KakaoTalk and other so-called ‘Over The Top’ (OTT) players


It comprises an 18-page executive summary, 260 pages and 163 figures – full details below. Prices on application – please email contact@telco2.net or call +44 (0) 207 247 5003.

Benefits of the Report to Telcos, Technology Companies and Partners, and Investors


For a telco, this strategy report:

  • Describes and analyses the strategies that can make the difference between best and worst case performance, worth $80bn (or +/-20% of revenues) in the 9 markets we analysed.
  • Externally benchmarks internal revenue forecasts for voice and messaging, leading to more realistic assumptions, targets, decisions, and better alignment of internal (e.g. board) and external (e.g. shareholder) expectations, and thereby potentially saving money and improving contributions.
  • Can help improve decisions on voice and messaging services investments, and provides valuable insight into the design of effective and attractive new services.
  • Enables more informed decisions on partner vs competitor status of non-traditional players in the V&M space with new business models, and thereby produce better / more sustainable future strategies.
  • Evaluates the attractiveness of developing and/or providing partner Unified Communication services in the Enterprise market, and ‘Telco OTT’ services for consumers.
  • Shows how to create a valuable and realistic new role for voice and messaging services in its portfolio, and thereby optimise its returns on assets and capabilities.


For other players including technology and Internet companies, and telco technology vendors

  • The report provides independent market insight on how telcos and other players will be seeking to optimise multi-billion-dollar revenues from voice and messaging, including new revenue streams in some areas.
  • As a potential partner, the report will provide a fast-track to guide product and business development decisions to meet the needs of telcos (and others).
  • As a potential competitor, the report will save time and improve the quality of competitor insight by giving strategic insights into the objectives and strategies that telcos will be pursuing.


For investors, it will:

  • Improve investment decisions and strategies for returning shareholder value, by improving the quality of insight on forecasts and the outlook for telcos and other technology players active in voice and messaging.
  • Save vital time and effort by accelerating decision making and investment decisions.
  • Help them better understand and evaluate the needs, goals and key strategies of key telcos and their partners / competitors.


The Future Value of Voice: Report Content Summary

  • Executive Summary. (18 pages outlining the opportunity and key strategic options)
  • Introduction. Disruption and transformation, voice vs. telephony, and scope.
  • The Transition in User Behaviour. Global psychological, social, pricing and segment drivers, and the changing needs of consumer and enterprise markets.
  • What now makes a winning Value Proposition? The fall of telephony, the value of time vs telephony, presence, Online Service Provider (OSP) competition, operators’ responses, free telco offerings, re-imaging customer service, voice developers, the changing telephony business model.
  • Market Trends and other Forecast Drivers. Model and forecast methodology and assumptions, general observations and drivers, ‘Peak Telephony/SMS’, fragmentation, macro-economic issues, competitive and regulatory pressures, handset subsidies.
  • Country-by-Country Analysis. Overview of national markets. Forecast and analysis of: UK, Germany, France, Italy, Spain, Taiwan, Singapore, Canada, US, other markets, summary and conclusions.
  • Technology: Products and Vendors’ Approaches. Unified Communications, Microsoft Office 365, Skype, Cisco, Google, WebRTC, Rich Communication Services (RCS), Broadsoft, Twilio, Tropo, Voxeo, Hypervoice, Calltrunk, operator voice and messaging services, summary and conclusions.
  • Telco Case Studies. Vodafone 360, One Net and RED, Telefonica Digital, Tu Me, Tu Go, Bluvia and AT&T.
  • Summary and Conclusions. Consumer, enterprise, technology and Telco OTT.

Europe’s brutal future: Vodafone and Telefonica hit hard

Introduction


Even in the UK and Germany, the markets with the brightest future, STL Partners forecasts declines of 19% and 20% respectively in mobile core services (voice, messaging and data) revenues by 2020. The UK has less far to fall simply because the market has already contracted over the last 2-3 years, whereas the German market has continued to grow.

We forecast a decline of 34% in France over the same period.

In Italy and, in particular, Spain we forecast a brutal decline of 47% and 61% respectively. Overall, STL Partners anticipates a reduction of 36% or €30 billion in core mobile service revenues by 2020. This equates to around €50 billion for Europe as a whole.


Like the medical profession, we don’t always like being correct when our diagnoses are pessimistic. So it is with some regret that we note that our forecasts are being borne out by the latest reports from southern Europe. Vodafone has been forced into a loss for H1 2012, after it wrote down the value of its Spanish and Italian OpCos by £5.9bn. Here’s why:

[Figure: eurobloodbath.png]

The writedown is of course non-cash, and those of us who remember Chris Gent’s Vodafone will be familiar with the sensation. But the reasons for it could not be more real. Service revenue has fallen sickeningly, down 7.9% across Europe, 1.4% across the group.

Vodafone has enjoyed a decent performance from its assets in Africa, Asia, Turkey, and the Pacific, and a hefty dividend from Verizon Wireless. It is the performance in Europe which is dreadful, and the situation in southern Europe is especially bad.

For while service revenue in Germany was up 1.8%, it was down a staggering 12.8% in both Spain and Italy. And margins were sacrificed for volume; EBITDA was down 16.6% in Italy, and 13.8% in “Other Southern Europe”, that is to say mostly Greece and Portugal. Even the UK saw service revenues fall 2.1%, while the Netherlands was down 1.9%. Vodafone’s investments across Europe seem to have landed in an arc of austerity running from the Norwegian Sea to the Aegean, the long way around.

Vodafone’s enterprise line of business has helped the Italian division defy gravity for a while. Until recently, One Net was racking up the same 6% growth rates in Italy that it saw in Germany and contributing substantially to service revenue, even though the wider business was shrinking. In Q2, service revenue in Italy was down 4.1%, but enterprise was up 5.8%.

But strategy inevitably beats tactics. Tellingly, the half-year statement from Vodafone management went a little coy about enterprise’s performance. Numbers are only given for Germany and Turkey, and for group-wide One Net seats. They are good, but you wonder about the numbers that aren’t given. We are told that One Net is “performing well” in Italy, but that’s not a number.

Meanwhile, Telefonica saw its European revenues fall 6.4% year-on-year. The problem is in Spain, where the fall was 12.9%. Mobile was worse still, with revenues thumped downwards by 16.2%.

The damage, for both carriers, is concentrated in mobility, in southern Europe, and in voice and messaging. Telefonica blames termination rate cuts (as does Vodafone – both carriers are big enough that they tend to terminate more calls from other carriers than they pay out on), but this isn’t really going to wash. As Vodafone’s own statement makes clear, MTRs are coming down everywhere. And Telefonica’s wireline revenues were horrible, too, down 9.6%.

But the biggest hit to revenue for Vodafone was in messaging, and then in voice. Data revenue is growing. In the half-year to 30th September 2011, Vodafone.es subscribers generated £156 million in messaging revenues. In the corresponding half this year, it was £99 million. Part of this is accounted for by movement in the euro-sterling exchange rate, so Vodafone reports it as a 30% hit to messaging and a 20% hit to voice. Italy saw an 11.4% hit to messaging and a 16% hit to voice. The upshot for Vodafone is a 29.7% cut to the division’s operating profits. Brutal indeed.

Obviously, a lot of this is being driven by the European economic crisis. It is more than telling that Vodafone’s German and Turkish operations are powering ahead, while it’s not just the Mediterranean economies under the European Union’s “troika” management (EC, ECB and IMF) that are suffering. The UK, under its own voluntary austerity plan, was down 2.1% for Telefonica, and the Netherlands, having gone from being the keenest pupil in the class to another austerity case in the space of one unexpectedly bad budget, is off 1.9%. Even if you file Turkey under “emerging market”, the comparison between the Mediterranean disaster area, the OK-ish position in North-Western Europe, and the impressive (£2.4bn) dividend from Verizon Wireless in the States is compelling.

But disruption is a fact. We should not expect that things will snap back as soon as the macro-economy takes a turn for the better. One of the reasons for our grim prediction was that as well as weak economies, the Southern European markets exhibited surprisingly high prices for mobile service.

The impact of the crisis is likely to permanently reset customer behaviour, technology adoption, and price expectations. The Southern price premium is likely to be permanently eroded, whether by price war or by regulatory action. Customers are observably changing their behaviour in order to counter-optimise the carriers’ tariff plans.

Vodafone observes plummeting messaging revenues, poor voice revenues, and heavy customer-retention spending, specifically on handset subsidies for smartphones. In fact, Vodafone admits that it has tried to phase out subsidy in Spain and been forced to turn back. This suggests that customers are becoming very much more aware of the high margin on SMS, are rationing it, and are deliberately pressing for any kind of smartphone in order to make use of alternatives to SMS. Once they are hooked on WhatsApp, they are unlikely to go back to carrier messaging even if the economy looks up.

Another customer optimisation Vodafone encounters is that customers love their integrated fixed/mobile plans. Unfortunately, this may mean they are shifting data traffic off the cellular network in the home zone and onto WLAN. Further, as Vodafone is a DSL unbundler, the margin consequences of moving revenue this way may not be so great. In Italy, although the integrated tariffs sold well, a “fall in the non-ULL customer base” is blamed for a 5.6% drop in fixed service revenue. Are the customers fleeing the reseller lines because Vodafone can’t match TI or Fastweb’s pricing, or is it that the regulatory position means margins on unbundled lines are worse?

Vodafone’s response to all this is its RED tariff plan. This essentially represents a Telco 2.0 Happy Pipe strategy, providing unlimited voice and messaging in order to slow down the adoption of alternative communications, and setting data bundles at levels intended to be above the expected monthly usage, so the subscribers feel able to use them, but not far enough above it that the bandwidth-hog psychology takes hold.

[Figure: vf-red.png]

With regard to devices, RED offers three options with tiered pricing: SIM-only, basic smartphone, and iPhone. The idea is to make the subsidy costs more evident to the customer, to slow down the replacement cycle on flagship smartphones via SIM-only, and to channel the smartphone hunters into the cheaper devices. Overall, the point is to drive data and smartphone adoption down the diffusion curve, so as to help the transition from a metered, voice-centric to a data-centric business model.

The CEO, Vittorio Colao, says as much:

The reason why the whole industry is on a difficult trend… is because we historically priced voice really high and data really low.

Vodafone’s competitors face a serious challenge. They are typically still very dependent on prepaid voice minutes, a market which is suffering. Even in Northern Europe, it’s off 10%. Telcos loved PAYG because everything in it is incremental. Now, the challenge is how to create a RED-like tariff for the PAYG market.

[Figure: Euro Voice Brutal Image 2 Chart Euro 5 Oct 2012.png]

Those in North and South America, MENA and Asia-Pacific may be looking at Europe and breathing a sigh of relief. But don’t fool yourself. SMS revenues in the US are down for the first time, driven by volume and price declines. One rather worrying outcome of last week’s Digital Arabia event was that operators in the region seem to be under the impression that the decline for them is still several years out, and destined to be a relatively gentle softening of the market. There’s more here on our initial take on what they need to do to avoid complacency and start building new business models more quickly.

LTE: APAC and US ‘Leading The Experience’

Summary: LTE is gaining traction in Asia Pacific and the US, despite challenges with spectrum, voice, and handsets. In South Korea, for example, penetration is expected to exceed 50% within 18 months. Our report on the lessons learned at the 2012 NGMN conference. (July 2012, Executive Briefing Service, Future of the Networks Stream).


Below is an extract from this 14-page Telco 2.0 Report, which can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks here. Non-members can subscribe here, and for this and other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

We will be looking further at the role of LTE as an element of the strategic transformation of the telco industry at the invitation only Executive Brainstorms in Dubai (November 6-7, 2012), Singapore (4-5 December, 2012), Silicon Valley (19-20 March 2013), and London (23-24 April, 2013). Email contact@stlpartners.com or call +44 (0) 207 243 5003 to find out more.


Taking the pulse of LTE

Introduction – NGMN 2012

In June, Telco 2.0 attended NGMN’s main annual conference in San Francisco. NGMN is the “Next Generation Mobile Networks” alliance, the industry group tasked with defining the requirements for 4G networks and beyond. (It is then up to 3GPP and – historically at least – other standards bodies to define the actual technologies which meet those requirements.) Set up in 2006, it evaluated a number of candidate technologies, eventually settling on LTE as its preferred “next-gen” technology after a brief flirtation with WiMAX.

The conference was an interesting mix of American and Asian companies, operators (with quite a CTO-heavy representation), major vendors and some niche technology specialists. Coincidentally, the event also took place at the same time as Apple’s flagship annual developer conference at the Moscone Center across the road.

Although it was primarily about current LTE networks, quite a lot of the features that will appear in the next stage, “LTE-Advanced”, were discussed too, as well as updates on the roles of HSPA+ and WiFi. Some of the material was outside Telco 2.0’s normal beat (for example, the innards of base-station antennas), but there were also quite a lot of references to evolving broadband business models, APIs and the broader Internet value chain.

Key Take-Outs

In some countries, LTE adoption is happening very quickly – in fact, faster than expected. This is impressive, and a testament to the NGMN process and 3GPP getting the basic radio technology standards right. However, rollout and uptake are very patchy, especially outside the US, Korea and Japan. There are still problems around the fragmentation of suitable spectrum bands, expensive devices, supporting IT systems and the thorny issue of how to deal with voice. In addition, many operators’ capex budgets are being constrained by macroeconomic uncertainty. What also seems true is that LTE has not (yet) resulted in any substantive new telco business models, although there is clearly a lot of work behind the scenes on APIs and new pricing and data-bundling approaches.

We are also impressed by the continued focus of the NGMN itself on further evolution of 4G+ networks, in resolving the outstanding technical issues (e.g. helping to drive towards multiband-capable devices, working on mobilised versions of adaptive video streaming), continuing the evolution to ever-better network speeds and efficiencies, and helping to minimise operators’ capex and opex through programmes such as SON (self-optimising networks).

NGMN: the engine of broadband wireless innovation

LTE adoption: accelerating – but patchy

One key conclusion from the event was the surprisingly rapid switch-over of users from 3G to 4G where it is available, especially with a decent range of handsets and aggressive marketing. In particular, US, South Korean and Japanese operators are leading the way. The US probably has the largest absolute number of subscribers – almost certainly more than 10m by the end of Q2 2012 (Verizon had 8m by end-Q1, with MetroPCS and AT&T also having launched). But in terms of penetration, it looks like South Korea is going to be the prize-winner. SK Telecom already has more than 3m subscribers, and is expecting 6m by the end of the year. More meaningfully, the various Korean presenters at the event seemed to agree that LTE penetration could be as high as 50% of mobile users by the end of next year. NTT DoCoMo’s LTE service (branded Xi) is also accelerating rapidly, recently crossing the 3m-user threshold, with a broad range of LTE smartphones coming out this summer in an attempt to take the wind out of Softbank’s iPhone hegemony.

Figure 1: South Korea will have 30m LTE subs at end-2013, vs 49m population

Source: Samsung Electronics

This growth is not really being mirrored elsewhere, however. At the end of Q1, TeliaSonera had just 100k subscribers (mostly USB dongles) across a 7-country footprint of LTE networks, despite being the first to launch at the end of 2009. This probably reflects the fact that smartphones suitable for European frequency bands (and supporting voice) have been slow in arriving, something that should change rapidly from now onwards. It is also notable that TeliaSonera has attempted to position LTE as a premium, higher-priced option compared to 3G, while operators such as Verizon have really just used 4G as a marketing ploy, offering faster speeds as a counter to AT&T – and also perhaps to give Android devices an edge against the more expensive-to-subsidise iPhone.

Once European and Chinese markets really start to market LTE smartphones in anger (which will likely be around the 2012 Xmas season), we should see another ramp-up in demand – although that will partly be determined by whether the next iPhone (likely due around September-October) finally supports LTE or not.

To read the note in full, including the following sections detailing support for the analysis…

  • New business models, or more of the same?
  • Are the new models working?
  • Wholesale LTE
  • Other hurdles for LTE
  • Spectrum fragmentation blues
  • Handsets and spectrum
  • Roaming and spectrum
  • But what about voice and messaging?
  • HetNets & WiFi – part of “Next-gen networks” or not?
  • LTE Apps?
  • Conclusions

…and the following figures…

  • Figure 1: South Korea will have 30m LTE subs at end-2013, vs 49m population
  • Figure 2 – Juniper: exposing network APIs to apps
  • Figure 3 – Yota is wholesaling LTE capacity, while acting as a 2G/3G MVNO
  • Figure 4 – A compelling argument to replace old public-safety radios with LTE
  • Figure 5 – NTT DoCoMo made a colourful argument about LTE spectrum fragmentation

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 14-page report in PDF format here. Non-Members, please subscribe here. For this or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.