Telco digital twins: Cool tech or real value?

Definition of a digital twin

Digital twin is a familiar term with a well-known definition in industrial settings. In a telco setting, however, it is useful to define what a digital twin is and how it differs from a standard piece of modelling. This research discusses the definition of a digital twin and concludes with a detailed taxonomy.

An archetypical digital twin:

  • models a single entity/system (for example, a cell site).
  • creates a digital representation of this entity/system, which can be either a physical object, process, organisation, person or abstraction (details of the cell-site topology or the part numbers of components that make up the site).
  • has exactly one twin per thing (each cell site can be modelled separately).
  • updates (either continuously, intermittently or as needed) to mirror the current state of this thing (for example, the cell site’s current performance given customer behaviour).

In addition:

  • multiple digital twins can be aggregated to form a composite view (the impact of network changes on cell sites in an area).
  • the data coming into the digital twin can drive various types of analytics (typically digital simulations and models) within the twin itself – or could transit from one or multiple digital twins to a third-party application (for example, capacity management analytics).
  • the resulting analysis has a range of immediate uses, such as feeding into downstream actuators, or it can be stored for future use, for instance mimicking scenarios for testing without affecting any live applications.
  • a digital twin is directly linked to the original, which means it can enable a two-way interaction. Not only can a twin allow others to read its own data, but it can transmit questions or commands back to the original asset.
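
To make the archetypical pattern above concrete, the following minimal Python sketch models a cell-site twin with a mirrored state, a two-way command channel back to the asset, and a simple composite view across several twins. The class, field and function names are illustrative assumptions for this report, not part of any specific digital twin product or standard.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class CellSiteTwin:
    """Illustrative digital twin of a single cell site (names are hypothetical)."""
    site_id: str
    topology: Dict[str, str] = field(default_factory=dict)    # e.g. part numbers, antenna layout
    state: Dict[str, float] = field(default_factory=dict)     # latest performance metrics
    command_channel: Callable[[str, dict], None] = lambda command, args: None  # link back to the asset

    def update_from_telemetry(self, metrics: Dict[str, float]) -> None:
        """Mirror the current state of the physical site (continuous or intermittent updates)."""
        self.state.update(metrics)

    def send_command(self, command: str, args: dict) -> None:
        """Two-way interaction: push a question or command back to the original asset."""
        self.command_channel(command, args)


def composite_view(twins: List[CellSiteTwin], metric: str) -> float:
    """Aggregate many twins into a composite view, e.g. average load across an area."""
    values = [t.state.get(metric, 0.0) for t in twins]
    return sum(values) / len(values) if values else 0.0


# Example: two cell-site twins feeding an area-level view
site_a, site_b = CellSiteTwin("site-A"), CellSiteTwin("site-B")
site_a.update_from_telemetry({"prb_utilisation": 0.62})
site_b.update_from_telemetry({"prb_utilisation": 0.48})
print(round(composite_view([site_a, site_b], "prb_utilisation"), 2))  # 0.55
```

A production twin would of course sit on live telemetry feeds and a real southbound interface to the site, rather than in-memory dictionaries, but the basic shape – mirrored state, aggregation and a channel back to the asset – is the same.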


What is the purpose of a digital twin?

This research uses the phrase “archetypical twin” to describe the most mature twin category, which can be found in manufacturing, construction, maintenance and operations environments. These have existed at different levels of sophistication for the last 10 years or so and are expected to be widely available and mature within the next five years. Their main purpose is to act as a proxy for an asset, so that applications wanting data about the asset can connect to the digital twin rather than having to connect directly with the asset itself. In these environments, digital twins tend to be deployed for expensive and complex equipment, such as jet engines, which needs to operate efficiently and without significant downtime. In telcos, the most immediate use case for an archetypical twin is to model the cell tower and its associated Radio Access Network (RAN) electronics and supporting equipment.

The adoption of digital twins should be seen as an evolution from today’s AI models


*See report for detailed graphic.

Source: STL Partners

 

At the other end of the maturity curve from the archetypical twin is the “digital twin of the organisation” (DTO). This is a virtual model of a department, business unit, organisation or whole enterprise that management can use to support specific financial or other decision-making processes. It uses the same design pattern and thinking as a twin of a physical object, but brings in a variety of operational or contextual data to model a “non-physical” thing. In interviews for this research, the consensus was that DTOs are not an initial priority for telcos and, indeed, it is not conceptually clear whether their benefits make them a must-have for telcos in the mid-term either.

As the telecoms industry is still in the exploratory and trial phase with digital twins, a series of initial deployments raises a somewhat semantic question: is a digital representation of an asset (for example, a network function) or a system (for example, a core network) really a digital twin, or simply an organic development of the AI models that telcos have used for some time? Referring to this as the “digital twin/model” continuum, the graphic above compares the characteristics of an archetypical twin with those of a typical model.

The most important takeaway from this graphic is the set of factors on the right-hand side that make a digital twin potentially much more complex and resource-hungry than a model. How important it is to distinguish an archetypical twin from a hybrid digital twin/model may come down to “marketing creep”, where deployments tend to be described as digital twins whether or not they exhibit many of the features of the archetypical twin. This creep will be exacerbated by telcos’ needs, which are not primarily focused on emulating physical assets such as engines or robots but on monitoring complex processes (for example, networks), whose individual assets (for example, network functions, physical equipment) may not need as much detailed monitoring as individual components in an aircraft engine. As a result, the telecoms industry could deploy digital twin/models far more extensively than full digital twins.

Table of contents

  • Executive Summary
    • Choosing where to start
    • Complexity: The biggest short-term barrier
    • Building an early-days digital twin portfolio
  • Introduction
    • Definition of a digital twin
    • What is the purpose of a digital twin?
    • A digital twin taxonomy
  • Planning a digital twin deployment
    • Network testing
    • Radio and network planning
    • Cell site management
    • KPIs for network management
    • Fraud prediction
    • Product catalogue
    • Digital twins within partner ecosystems
    • Digital twins of services
    • Data for customer digital twins
    • Customer experience messaging
    • Vertical-specific digital twins
  • Drivers and barriers to uptake of digital twins
    • Drivers
    • Barriers
  • Conclusion: Creating a digital twin strategy
    • Immediate strategy for day 1 deployment
    • Long-term strategy

Related research


Telco Cloud: Why it hasn’t delivered, and what must change for 5G

Related Webinar – 5G Telco Clouds: Where we are and where we are headed

This research report will be expanded upon in our upcoming webinar 5G Telco Clouds: Where we are and where we are headed. In this webinar we will argue that 5G will only pay off if telcos find a way to make telco clouds work. We will look to address the following key questions:

  • Why have telcos struggled to realise the telco cloud promise?
  • What do telcos need to do to unlock the key benefits?
  • Why is now the time for telcos to try again?

Join us on April 8th 16:00 – 17:00 GMT by using this registration link.

Telco cloud: big promises, undelivered

A network running in the cloud

Back in the early 2010s, the idea that a telecoms operator could run its network in the cloud was earth-shattering. Telecoms networks were complicated and highly bespoke, and therefore expensive to build and operate. What if we could find a way to run networks on common, shared resources – like the cloud computing companies do with IT applications? This would be beneficial in a whole host of ways, mostly related to flexibility and efficiency. The industry was sold.

In 2012, ETSI started the ball rolling when it unveiled the Network Functions Virtualisation (NFV) whitepaper, which borrowed the IT world’s concept of server-virtualisation and gave it a networking spin. Network functions would cease to be tied to dedicated pieces of equipment, and instead would run inside “virtual machines” (VMs) hosted on generic computing equipment. In essence, network functions would become software apps, known as virtual network functions (VNFs).

Because the software (the VNF) is not tied to hardware, operators would have much more flexibility over how their network is deployed. As long as we figure out a suitable way to control and configure the apps, we should be able to scale deployments up and down to meet requirements at a given time. And as long as we have enough high-volume servers, switches and storage devices connected together, it’s as simple as spinning up a new instance of the VNF – much simpler than before, when we needed to procure and deploy dedicated pieces of equipment with hefty price tags attached.
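
As a very rough illustration of that elasticity, the sketch below scales a VNF out and in against utilisation thresholds. The orchestrator client, the VNF name and the thresholds are all assumptions made for illustration – this is not a real NFV MANO or vendor API.

```python
class Orchestrator:
    """Hypothetical stand-in for an NFV orchestrator; tracks instance counts only."""
    def __init__(self) -> None:
        self.instances = 1

    def spin_up(self, vnf_name: str) -> None:
        self.instances += 1
        print(f"Spinning up a new instance of {vnf_name} (now {self.instances})")

    def scale_in(self, vnf_name: str) -> None:
        if self.instances > 1:
            self.instances -= 1
            print(f"Terminating one instance of {vnf_name} (now {self.instances})")


def autoscale(orch: Orchestrator, vnf_name: str, cpu_utilisation: float,
              scale_out_at: float = 0.8, scale_in_at: float = 0.3) -> None:
    """Scale a VNF out or in to match demand at a given time (thresholds are illustrative)."""
    if cpu_utilisation > scale_out_at:
        orch.spin_up(vnf_name)
    elif cpu_utilisation < scale_in_at:
        orch.scale_in(vnf_name)


# Example: utilisation samples arriving from a monitoring system
orch = Orchestrator()
for sample in (0.55, 0.85, 0.92, 0.25):
    autoscale(orch, "vEPC-user-plane", sample)
```

The point of the sketch is simply that scaling becomes a software decision against shared infrastructure, rather than a procurement exercise for dedicated equipment.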

An additional benefit of moving to a software model is that operators have a far greater degree of control than before over where network functions physically reside. NFV infrastructure can directly replace old-school networking equipment in the operator’s central offices and points of presence, but the software can in theory run anywhere – in the operator’s private centralised data centre, in a datacentre managed by someone else, or even in a public hyperscale cloud. With a bit of re-engineering, it would be possible to distribute resources throughout a network, perhaps placing traffic-intensive user functions in a hub closer to the user, so that less traffic needs to go back and forth to the central control point. The key is that operators are free to choose, and shift workloads around, dependent on what they need to achieve.

The telco cloud promise

Somewhere along the way, we began talking about the telco cloud. This is a term that means many things to many people. At its most basic level, it refers specifically to the data centre resources supporting a carrier-grade telecoms network: hardware and software infrastructure, with NFV as the underlying technology. But over time, the term has also come to be associated with cloud business practices – that is to say, the innovation-focussed business model of successful cloud computing companies.

Figure 2: Telco cloud defined: New technology and new ways of working

Telco cloud: Virtualised & programmable infrastructure together with cloud business practices

Source: STL Partners

In this model, telco infrastructure becomes a flexible technology platform which can be leveraged to enable new ways of working across an operator’s business. Operations become easier to automate. Product development and testing becomes more straightforward – and can happen more quickly than before. With less need for high capital spend on equipment, there is more potential for shorter, success-based funding cycles which promote innovation.

Much has been written about the vast potential of such a telco cloud, by analysts and marketers alike. Indeed, STL Partners has been partial to the same. For this reason, we will avoid a thorough investigation here. Instead, we will use a simplified framework which covers the four major buckets of value which telco cloud is supposed to help us unlock:

Figure 3: The telco cloud promise: Major buckets of value to be unlocked

Four buckets of value from telco cloud: Openness; Flexibility, visibility & control; Performance at scale; Agile service introduction

Source: STL Partners

These four buckets cover the most commonly-cited expectations of telcos moving to the cloud. Swallowed within them all, to some extent, is a fifth expectation: cost savings, which have been promised as a side-effect. These expectations have their origin in what the analyst and vendor community has promised – and so, in theory, they should be realistic and achievable.

The less-exciting reality

At STL Partners, we track the progress of telco cloud primarily through our NFV Deployment Tracker, a comprehensive database of live deployments of telco cloud technologies (NFV, SDN and beyond) in telecoms networks across the planet. The emphasis is on live rather than those running in testbeds or as proofs of concept, since we believe this is a fairer reflection of how mature the industry really is in this regard.

What we find is that, after a slow start, telcos have really taken to telco cloud since 2017, with a surge in deployments:

Figure 4: Total live deployments of telco cloud technology, 2015-2019
Includes NFVi, VNF, SDN deployments running in live production networks, globally

Telco cloud deployments have risen substantially over the past few years

Source: STL Partners NFV Deployment Tracker

All of the major operator groups around the world are now running telco clouds, as well as a significant long tail of smaller players. As we have explained previously, the primary driving force in that surge has been the move to virtualise mobile core networks in response to data traffic growth, and in preparation for roll-out of 5G networks. To date, most of it is based on NFV: taking existing physical core network functions (components of the Evolved Packet Core or the IP Multimedia Subsystem, in most cases) and running them in virtual machines. No operator has completely decommissioned legacy network infrastructure, but in many cases these deployments are already very ambitious, supporting 50% or more of a mobile operator’s total network traffic.

Yet, despite a surge in deployments, operators we work with are increasingly frustrated with the results. The technology works, but we are a long way from unlocking the value promised in Figure 3. Solutions to date are far from open and vendor-neutral. The ability to monitor, optimise and modify systems is far from ubiquitous. Performance is acceptable, but nothing to write home about, and not yet proven at mass scale. Examples of truly innovative services built on telco cloud platforms are few and far between.

We are continually asked: will telco cloud really deliver? And what needs to change for that to happen?

The problem: flawed approaches to deployment

Learning from those on the front line

The STL Partners hypothesis is that telco cloud, in and of itself, is not the problem. From a theoretical standpoint, there is no reason that virtualised and programmable network and IT infrastructure cannot be a platform for delivering the telco cloud promise. Instead, we believe that the reason it has not yet delivered is linked to how the technology has been deployed, both in terms of the technical architecture, and how the telco has organised itself to operate it.

To test this hypothesis, we conducted primary research with fifteen telecoms operators at different stages in their telco cloud journey. We asked them about their deployments to date, how they have been delivered, the challenges encountered, how successful they have been, and how they see things unfolding in the future.

Our sample includes individuals leading telco cloud deployment at a range of mobile, fixed and converged network operators of all shapes and sizes, and in all regions of the world. Titles vary widely, but include Chief Technology Officers, Heads of Technology Exploration and Chief Network Architects. Our criteria were that individuals needed to be knee-deep in their organisation’s NFV deployments, not just from a strategic standpoint, but also close to the operational complexities of making it happen.

What we found is that most telco cloud deployments to date fall into two categories, driven by the operator’s starting point in making the decision to proceed:

Figure 5: Two starting points for deploying telco cloud

Function-first "we need to virtualise XYZ" vs platform-first "we want to build a cloud platform"

Source: STL Partners

The operators we spoke to were split between these two camps. What we found is that the starting points greatly affect how the technology is deployed. In the coming pages, we will explain both in more detail.

Table of contents

  • Executive Summary
  • Telco cloud: big promises, undelivered
    • A network running in the cloud
    • The telco cloud promise
    • The less-exciting reality
  • The problem: flawed approaches to deployment
    • Learning from those on the front line
    • A function-first approach to telco cloud
    • A platform-first approach to telco cloud
  • The solution: change, collaboration and integration
    • Multi-vendor telco cloud is preferred
    • The internal transformation problem
    • The need to foster collaboration and integration
    • Standards versus blueprints
    • Insufficient management and orchestration solutions
    • Vendor partnerships and pre-integration
  • Conclusions: A better telco cloud is possible, and 5G makes it an urgent priority

Facebook’s Telecom Infra Project: What is it good for?

Introduction

In early 2016, Facebook launched the Telecom Infra Project (TIP). It was set up as an open industry initiative, to reduce costs in creating telecoms network equipment, and associated processes and operations, primarily through open-source concepts applied to network hardware, interfaces and related software.

One of the key objectives was to split existing proprietary vendor “black boxes” (such as cellular base stations, or optical multiplexers) into sub-components with standard interfaces. This should enable competition for each constituent part, and allow the creation of lower-cost “white box” designs from a wider range of suppliers than today’s typical oligopoly. Critically, this is expected to enable much-broader adoption of networks in developing markets, where costs – especially for radio networks – remain too high for full deployments. Other outcomes may be around cheaper 5G infrastructure, or specialised networks for indoor use or vertical niches.

TIP’s emergence parallels a variety of open-source initiatives elsewhere in telecoms, notably ONAP – the merger of two NFV projects being developed by AT&T (ECOMP) and the Linux Foundation (Open-O). It also parallels many other approaches to improving network affordability for developing markets.

TIP got early support from a number of operators (including SK Telecom, Deutsche Telekom, BT/EE and Globe), hosting/cloud players like Equinix and Bandwidth, semiconductor suppliers including Intel, and various (mostly radio-oriented) network vendors like Radisys, Vanu, IP Access, Quortus and – conspicuously – Nokia. It has subsequently expanded its project scope, governance structure and member base, with projects on optical transmission and core-network functions as well as cellular radios.

More recently, it has signalled that not all its output will be open-source, but that it will also support RAND (reasonable and non-discriminatory) intellectual property rights (IPR) licensing as well. This reflected push-back from some vendors on completely relinquishing revenues from their (R&D-heavy) IPR. While services, integration and maintenance offered around open-source projects have potential, it is less clear that they will attract early-stage investment necessary for continued deep innovation in cutting-edge network technology.

At first sight, it is not obvious why Facebook should be the leading light here. But contrary to popular belief, Facebook – like Google and Amazon and Alibaba – is not really just a “web” company. They all design or build physical hardware as well – servers, network gear, storage, chips, data-centres and so on. They all optimise the entire computing / network chain to serve their needs, with as much efficiency as possible in terms of power consumption, physical space requirements and so on. They all have huge hardware teams and commit substantial R&D resources to the messy, expensive business of inventing new kit. Facebook in particular has set up Internet.org to help get millions online in the developing world, and is still working on its Aquila communications drones. It also set up OCP (the Open Compute Project) as a very successful open-source project for data-centre design; in many ways TIP is OCP’s newer and more telco-oriented cousin.

Many in the telecom industry often overlook the fact that their Internet peers now have more true “technology” investment – and especially networking innovation – than most operators. Some operators – notably DT and SKT – are pushing back against the vendor “establishment”, which they see as stifling network innovation by continuing to push monolithic, proprietary black boxes.

Contents:

  • Executive Summary
  • Introduction
  • What does Open-Source mean, applied to hardware?
  • Focus areas for TIP
  • Overview
  • Voyager
  • OpenCellular
  • Strategic considerations and implications
  • Operator involvement with TIP
  • A different IPR model to other open-source domains
  • Fit with other Facebook initiatives
  • Who are the winners?
  • Who are the losers?
  • Conclusions and Recommendations

Figures:

  • Figure 1: A core TIP philosophy is “unbundling” components of vendor “black boxes”
  • Figure 2: OpenCellular functional architecture and external design
  • Figure 3: SKT sees open-source, including TIP, as fundamental to 5G

Mobile/Multi-Access Edge Computing: How can telcos monetise this cloud?

Introduction

A formal definition of MEC is that it enables IT, NFV and cloud-computing capabilities within the access network, in close proximity to subscribers. Those edge-based capabilities can be provided to internal network functions, in-house applications run by the operator, or potentially third-party partners / developers.

There has long been a vision in the telecoms industry to put computing functions at local sites. In fixed networks, operators have often worked with CDN and other partners on distributed network capabilities, for example. In mobile, various attempts have been made to put computing or storage functions alongside base stations – both big “macro” cells and in-building small/pico-cells. Part of the hope has been the creation of services tailored to a particular geography or building.

But besides content caching, none of these historic concepts and initiatives has gained much traction. It turns out that “location-specific” services can be easily delivered from central facilities, as long as the endpoint knows its own location (e.g. using GPS) and communicates this to the server.

This is now starting to change. In the last three years, various market and technical trends have re-established the desire for localised computing. Standards have started to evolve, and early examples have emerged. Multiple groups of stakeholders – telcos and their network vendors, application developers, cloud providers, IoT specialists and various others – have (broadly) aligned to drive the emergence of edge/fog computing. While there are numerous competing architectures and philosophies, there is clearly some scope for telco-oriented approaches.

While the origins of MEC (and the original “M”) come from the mobile industry, driven by visions of IoT, NFV and network-slicing, the pitch has become more nuanced, and now embraces fixed/cable networks as well – hence the renaming to “multi-access”.

Figure 1: A taxonomy of mobile edge computing

Source: IEEE Conference Paper, Ahmed & Ahmed, https://www.researchgate.net/publication/285765997

Background market drivers for MEC

Before discussing specific technologies and use-cases for MEC, it is important to contextualise some other trends in telecoms that are helping build a foundation for it:

  • Telcos need to reduce costs & increase revenues: This is a bit “obvious” but bears repeating. Most initiatives around telco cloud and virtualisation are driven by these two fundamental economic drivers. Here, they relate to a desire to (a) reduce network capex/opex by shifting from proprietary boxes to standardised servers, and (b) increase “programmability” of the network to host new functions and services, and allow them to be deployed/updated/scaled rapidly. These underpin broader trends in NFV and SDN, and, indirectly, MEC and edge-computing.
  • New telco services may be inherently “edge-oriented”: IoT, 5G, vertical enterprise applications, plus new consumer services like IPTV also fit into both the virtualisation story and the need for distributed capabilities. For example, industrial IoT connectivity may need realtime control functions for machinery, housed extremely close by, for millisecond (or less) latency. Connected vehicles may need roadside infrastructure. Enterprises might demand on-premise secure data storage, even for cloud-delivered services, for compliance reasons. Various forms of AI (such as machine vision and deep learning) involve particular needs and new ways of handling data.
  • The “edge” has its own context data: Some applications are not just latency-sensitive in terms of response between user and server, but also need other local, fast-changing data such as cell congestion or radio-interference metrics. Going all the way to a platform in the core of the network, to query that status, may take longer than it takes the status to change. The length of the “control loop” may mean that old/wrong contextual data is given, and the wrong action taken by the application. Locally-delivered information, via “edge APIs”, could be more timely (a rough illustration of this control-loop effect follows after this list).
  • Not all virtual functions can be hosted centrally: While a lot of the discussion around NFV involves consolidated data-centres and the “telco cloud”, this does not apply to all network functions. Certain things can indeed be centralised (e.g. billing systems, border/gateway functions between core network and public Internet), but other things make more sense to distribute. For example, Virtual CPE (customer premises equipment) and CDN caches need to be nearer to the edge of the network, as do some 5G functions such as mobility management. No telco wants to transport millions of separate video streams to homes, all the way from one central facility, for instance.
  • There will therefore be localised telco compute sites anyway: Since some telco network functions have to be located in a distributed fashion, there will need to be some data-centres either at aggregation points / central offices or final delivery nodes (base stations, street cabinets etc.). Given this requirement, it is understandable that vendors and operators are looking at ways to extend such sites from the “necessary” to the “possible” – such as creating more generalised APIs for a broader base of developers.
  • Radio virtualisation is slightly different to NFV/SDN: While most virtualisation focus in telecoms goes into developments in the core network, or routers/switches, various other relevant changes are taking place. In particular, the concept of C-RAN (cloud-RAN) has taken hold in recent years, where traditional mobile base stations (usually called eNodeBs) are sometimes being split into the electronics “baseband” units (BBUs) and the actual radio transmit/receive components, called the remote “radio head”, RRH. The BBUs from a number of eNodeBs can be clustered together at one site (sometimes called a “hotel”), with fibre “front-haul” connecting the RRHs. This improves the efficiency of both power and space utilisation, and also means the BBUs can be combined and virtualised – and perhaps have extra compute functions added.
  • Property business interests: Telcos have often sold or rented physical space in their facilities – colocation of equipment racks for competitive carriers, or servers in hosting sites and data-centres. In turn, they also rely on renting space for their own infrastructure, especially for siting mobile cell-towers on roofs or walls. This two-way trade continues today – and the idea of mobile edge computing as a way to sell “virtual” space in distributed compute facilities maps well to this philosophy.
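
To put a rough number on the control-loop point above, the sketch below compares how stale a fast-changing radio metric could already be by the time a response returns from a central platform versus a nearby edge API. The round-trip times and the 10 ms change interval are purely illustrative assumptions, not measured values.

```python
def staleness(round_trip_ms: float, change_interval_ms: float) -> float:
    """How many times the metric may have changed before the answer reaches the application."""
    return round_trip_ms / change_interval_ms


# Illustrative assumption: cell congestion / interference metrics change roughly every 10 ms
CHANGE_INTERVAL_MS = 10.0
for label, rtt_ms in (("central platform", 60.0), ("edge API", 2.0)):
    print(f"{label}: {rtt_ms} ms control loop -> metric may have changed "
          f"~{staleness(rtt_ms, CHANGE_INTERVAL_MS):.1f} times before the response arrives")
```

Under these assumed numbers, a query to a central platform returns an answer that is several metric-changes out of date, whereas an edge API can respond within a single change interval – which is the essence of the “edge APIs” argument.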

Contents:

  • Executive Summary
  • Introduction
  • Background market drivers for MEC
  • Why Edge Computing matters
  • The ever-wider definition of “Edge”
  • Wider market trends in edge-computing
  • Use-cases & deployment scenarios for MEC
  • Horizontal use-cases
  • Addressing vertical markets – the hard realities
  • MEC involves extra costs as well as revenues
  • Current status & direction of MEC
  • Standards path and operator involvement
  • Integration challenges
  • Conclusions & Recommendations

Figures:

  • Figure 1: A taxonomy of mobile edge computing
  • Figure 2: Even within “low latency” there are many different sets of requirements
  • Figure 3: The “network edge” is only a slice of the overall cloud/computing space
  • Figure 4: Telcos can implement MEC at various points in their infrastructure
  • Figure 5: Networks, Cloud and IoT all have different starting-points for the edge
  • Figure 6: Network-centric use-cases for MEC suggested by ETSI
  • Figure 7: MEC needs to integrate well with many adjacent technologies and trends

VoLTE: Voice beyond the phone call?

Introduction

Telephony is still necessary in the 4G era

Some people in the telecom industry believe that “voice is dead” – or, at least, that traditional phone calls are dying off. Famously, many younger mobile users eschew standalone realtime communications, instead preferring messaging loaded with images and emoji, via apps such as Facebook Messenger and WeChat, or those embedded e.g. in online gaming applications. At the other end of the spectrum, various forms of video-based communications are important, such as SnapChat’s disappearing video stories, as well as other services such as Skype and FaceTime.

Even for basic calling-type access, WhatsApp and Viber have grown huge, while assorted enterprise UC/UCaaS services such as Skype for Business and RingCentral are often “owning” the business customer base. Other instances of voice (and messaging and video) are appearing as secondary features “inside” other applications – games, social networks, enterprise collaboration, mobile apps and more – often enabled by the WebRTC standard and assorted platforms-as-a-service.

Smartphones and the advent of 4G have accelerated all these trends – although 3G networks have seen them as well, especially for messaging in developing markets. Yet despite the broad uptake of Internet-based messaging and voice/video applications, it is still important for mobile operators to provide “boring old phone calls” for mobile handset subscribers, not least in order to enable “ubiquitous connection” to friends, family and businesses – plus also emergency calls. Plenty of businesses still rely on the phone – and normal phone numbers as identifiers – from banks to doctors’ practices. Many of the VoIP services can “fall back” to normal telephony, or dial out (or in) from the traditional telco network. Many license terms mandate provision of voice capability.

This is true for both fixed and mobile users – and despite the threat of reaching “peak telephony”, there is a long and mostly-stable tail of calling that won’t be displaced for years, if ever.

Figure 1: Various markets are beyond “peak telephony” despite lower call costs

Source: Disruptive Analysis, National Regulators

In other words, even if usage and revenues are falling, telcos – and especially mobile operators – need to keep Alexander Graham Bell’s 140-year legacy alive. If the network transitions to 4G and all-IP, then the telephony service needs to do so as well – ideally with feature-parity and conformance to all the legacy laws and regulation.

(As a quick aside, it is worth noting that telephony is only one sort of “voice communication”, although people often use the terms synonymously. Other voice use-cases vary from conferencing, push-to-talk, audio captioning for the blind, voice-assistants like Siri and Alexa, karaoke, secure encrypted calls and even medical-diagnostics apps that monitor breathing noise. We discuss the relevance of non-telephony voice services for telcos later in this report).

4G phone calls: what are the options?

  • CSFB (Circuit-Switched Fallback): The connection temporarily drops from 4G, down to 3G or 2G. This enables a traditional non-IP (CS – circuit-switched) call to be made or received on a 4G phone. This is the way most LTE subscribers access telephony today.
  • VoLTE: This is a “pure” 4G phone call, made using the phone’s in-built dialler, the cellular IP connection and tightly-managed connectivity with prioritisation of voice packets, to ensure good QoS. It hooks into the telco’s IMS core network, from where it can either be directly connected to the other party (end-to-end over IP), go via a transit provider or exchange, or else it can interwork with the historic circuit-based phone network.
  • App-based calling: This involves making a VoIP call over the normal, best-efforts data connection. The function could be provided by a telco itself (eg Reliance Jio’s 4GVoice app), an enterprise UC provider, or an Internet application like Skype or Viber. Increasingly, these applications are also integrated into the phone’s “native dialler” interface and can share call-logs and other functions. [Note – STL’s Future of The Network research stream does not use the pejorative, obsolete and inaccurate term “OTT”.]

None of these three options is perfect.

Content:

  • Executive Summary
  • Introduction
  • Telephony is still necessary in the 4G era
  • 4G phone calls: what are the options?
  • The history of VoLTE
  • The Good, the Bad & the Ugly
  • The motivations for VoLTE deployment
  • The problems for VoLTE deployment?
  • Industry politics
  • Market Status & Forecasts
  • Business & Strategic Implications
  • Is VoLTE really just “ToLTE”?
  • Link to NFV & Cloud
  • GSMA Universal Profile: Heaven or Hell for Telcos?
  • Do telcos have a role in video communications?
  • Intersection with enterprise voice
  • Conclusions
  • Recommendations

Figures:

  • Figure 1: Various markets are beyond “peak telephony” despite lower call costs
  • Figure 2: VoLTE, mobile VoIP & LTE timeline
  • Figure 3: VoLTE coverage is often deployed progressively
  • Figure 4: LTE subscribers, by voice technology, 2009-2021

5G: The spectrum game is changing – but how to play?

Introduction

Why does spectrum matter?

Radio spectrum is a key “raw material” for mobile networks, together with evolution of the transmission technology itself, and the availability of suitable cell-site locations. The more spectrum is made available for telcos, the more capacity there is overall for current and future mobile networks. The ability to provide good coverage is also determined largely by spectrum allocations.
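
As a back-of-envelope illustration of the link between bandwidth and capacity, the sketch below multiplies carrier bandwidth by an assumed average spectral efficiency of 4 bit/s/Hz. Both the efficiency figure and the example carrier widths are illustrative assumptions, not operator or standards data.

```python
def cell_throughput_mbps(bandwidth_mhz: float, spectral_efficiency_bps_per_hz: float = 4.0) -> float:
    """Rough cell throughput: bandwidth (Hz) x average spectral efficiency (bit/s/Hz)."""
    return bandwidth_mhz * 1e6 * spectral_efficiency_bps_per_hz / 1e6


# Example carrier widths: a typical 4G carrier, a wide sub-6 GHz 5G carrier, a mmWave carrier
for bw_mhz in (20, 100, 400):
    print(f"{bw_mhz} MHz -> ~{cell_throughput_mbps(bw_mhz):.0f} Mbit/s per cell")
```

The precise numbers matter far less than the shape of the relationship: for a given radio technology, capacity scales broadly with the amount of spectrum an operator can access, which is why spectrum allocation is so fiercely contested.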

Within the industry, we are accustomed to costly auction processes, as telcos battle for tranches of frequencies to add capacity, or support new generations of technology. In contrast, despite the huge costs to telcos of different spectrum allocations, most people have very little awareness of which bands their phones support, other than perhaps that they can use ‘mobile/cellular’ and WiFi.

Most people, even in the telecoms industry, don’t grasp the significance of particular numbers of MHz or GHz involved (Hz = number of cycles per second, measured in millions or billions). And that is just the tip of the jargon and acronym iceberg – a full discussion of mobile RAN (radio access network) technology involves different sorts of modulation, multiple antennas, propagation metrics, path loss (in decibels, dB) and so forth.

Yet as 5G pulls into view, it is critical to understand the process by which new frequencies will be released by governments, or old ones re-used by the mobile industry. To deliver the much-promised peak speeds and enhanced coverage of 5G, big chunks of frequencies are needed. Yet spectrum has many other uses besides public mobile networks, and battles will be fierce about any reallocations of incumbent users’ rights. The broadcast industry (especially TV), satellite operators, government departments (notably defence), scientific research communities and many other constituencies are involved here. In addition, there are growing demands for more bandwidth for unlicensed usage (as used for WiFi, Bluetooth and other low-power IoT networks such as SigFox).

Multiple big industries – usually referred to by the mobile community as “verticals” – are flexing their own muscles as well. Energy, transport, Internet, manufacturing, public safety and other sectors all see the benefits of wireless connectivity – but don’t necessarily want to involve mobile operators, nor subscribe to their preferred specifications and standards. Many have huge budgets, a deep legacy of systems-building and are hiring mobile specialists.

Lastly, parts of the technology industry are advocates of more nuanced approaches to spectrum management. Rather than dedicate bands to single companies, across whole countries or regions, they would rather develop mechanisms for sharing spectrum – either on a geographic basis, or by allowing some form of “peaceful coexistence” where different users’ radios behave nicely together, instead of creating interference. In theory, this could improve the efficient use of spectrum – but it adds complexity, and perhaps introduces so much extra competition that willingness to invest suffers.

Which bands are made available for 5G, on what timescales, in what type of “chunks”, and the authorisation / licensing schemes involved, all define the potential opportunity for operators in 5G – as well as the risks of disruption, and (for some) how large the window is to fully-monetise 4G investments.

The whole area is a minefield to understand – it brings together the hardest parts of wireless technology to grasp, along with impenetrable legal processes, and labyrinthine politics at national and international levels. And ideally, it should somehow be possible to layer on consideration of end-user needs and economic/social outputs as well.

Who are the stakeholders for spectrum?

At first sight, it might seem that spectrum allocations for mobile networks ought to be a comparatively simple affair, with governments deciding on tranches of frequencies and an appropriate auction process. MNOs can bid for their desired bands, and then deploy networks (and, perhaps, gripe about the costs afterwards).

The reality is much more complex. A later section describes some of the international bureaucracy involved in defining appropriate bands, which can then be doled out by governments (assuming they don’t decide to act unilaterally). But even before that, it is important to consider which organisations want to get involved in the decision process – and their motivations, whether for 5G or other issues that are closer to their own priorities, which intersect with it.

Governments have a broad set of drivers and priorities to reconcile – technological evolution of the economy as a whole, the desire for a competitive telecoms industry, exports, auction receipts – and the protection of other spectrum user groups such as defence, transport and public safety. Different branches of government and the public administration have differing views, and there may sometimes be tussles between the executive branch and various regulators.

Much the same is true at regional levels, especially in Europe, where there are often disagreements between the European Commission, the European Parliament, the regulators’ groups and the parliaments of 28 different EU nations (plus another 23 non-EU nations).

Even within the telecoms industry there are differences of opinion – some operators see 5G as an urgent strategic priority, that can help differentiation and reduce costs of existing infrastructure deployments. Others are still in the process of rolling out 4G networks and want to ensure that those investments continue to have relevance. There are variations in how much credence is assigned to the projections of IoT growth – and even there, whether there needs to be breathing room for 4G cellular types such as NB-IoT, which is yet to be deployed despite its putative replacement being discussed already.

The net result is many rounds of research, debate, consultation, disagreement and (eventually) compromise. Yet in many ways, 5G is different from 3G and 4G, especially because many new sectors are directly involved in helping define the use-cases and requirements. In many ways, telecoms is now “too important to be left to the telcos”, and many other voices will therefore need to be heard.

 

  • Executive Summary
  • Introduction
  • Why does spectrum matter?
  • Who are the stakeholders for spectrum?
  • Spectrum vs. business models
  • Does 5G need spectrum harmonisation as much as 4G?
  • Spectrum authorisation types & processes
  • Licensed, unlicensed and shared spectrum
  • Why is ITU involved, and what is IMT spectrum?
  • Key bands for 5G
  • Overview
  • 5G Phase 1: just more of the same?
  • mmWave beckons – the high bands >6GHz
  • Conclusions

 

  • Figure 1 – 5G spectrum has multiple stakeholders with differing priorities
  • Figure 2 – Multi-band support has improved hugely since early 4G phones
  • Figure 3 – A potential 5G deployment & standardisation timeline
  • Figure 4 – ITU timeline for 5G spectrum harmonisation, 2014-2020
  • Figure 5 – High mmWave frequencies (e.g. 28GHz) don’t go through solid walls
  • Figure 6 – mmWave brings new technology and design challenges

5G: How Will It Play Out?

Introduction: Different visions of 5G

The ‘idealists’ and the ‘pragmatists’

In the last 18 months, several different visions of 5G have emerged.

One is the vision espoused by the major R&D collaborations, academics, standardisation groups, the European Union, and some operators. This is the one with the flying robots, self-driving cars, and fully automated factories whose internal networks are provided entirely by ultra-low latency critical communications profiles within the cellular network. The simplest way to describe its aims would be to say that they intend to create a genuinely universal mobile telecommunications system serving everything from 8K streaming video for football crowds, through basic (defined as 50Mbps) fixed-wireless coverage for low-ARPU developing markets, to low-rate and ultra-low power but massive-scale M2M, with the same radio waveform, backed by a single universal virtualised core network “sliced” between use-cases. This slide, from Samsung’s Raj Gawera, sums it up – 5G is meant to maximise all eight factors labelled on the vertices of the chart.

Figure 1: 5G, the vision: one radio for everything

Source: Samsung, 3G & 4G Wireless Blog

Most of its backers – the idealist group – are in no hurry, targeting 2020 at the earliest for the standard to be complete, and deployment to begin sometime after that. There are some recent signs of increasing urgency – and certainly various early demonstrations – although that is perhaps a response to the sense of movement elsewhere in the industry.

The other vision is the one backed in 3GPP (the main standards body for 5G) by an alliance of semiconductor companies – including Intel, Samsung, ARM, Qualcomm, and Mediatek – but also Nokia Networks and some carriers, notably Verizon Wireless. This vision is much more radio-centric, being focused on the so-called 5G New Radio (NR) element of the project, and centred on delivering ultra-high capacity mobile broadband. It differs significantly from the idealists’ on timing – the pragmatist group wants to have real deployments by 2018 or even earlier, and is willing (even keen) to take an IETF-like approach where the standards process ratifies the results of “rough consensus and running code”.

Carriers’ interests fall between the two poles. In general, operators’ contributions to the process focus on the three Cs – capacity, cost, and carbon dioxide – but they also usually have a special interest of their own. This might be network virtualisation and slicing for converged operators with significant cloud and enterprise interests, low-latency or massive-scale M2M for operators with major industrial customers, or low-cost mobile broadband for operators with emerging market opcos.

The summer and especially September 2016’s CTIA Mobility conference also pointed towards some players in the middle – AT&T is juggling its focus on its ECOMP NFV mega-project, with worries that Verizon will force its hand on 5G the same way it did with 4G. It would be in the idealist group if it could align 5G radio deployment and NFV perfectly, but it is probably aware of the gulf widening rather than narrowing between the two. Ericsson is pushing for 5G incrementalism (and minimising the risk of carriers switching vendors at a later date) with its “Plug-In” strategy for specific bits of functionality.

Dino Flore of Qualcomm, the chairman of 3GPP RAN (RAN = radio access network), has chosen to compromise by taking forward the core enhanced mobile broadband (eMBB) elements for what is now being called “Phase 1”, but also cherry-picking two of the future use cases – “massive” M2M, and “critical” communications. These last two differ in that the first is optimised for scalability and power saving, and the second is optimised for quality-of-service control (or PPP for Priority, Precedence, and Pre-emption in 3GPP terminology), reliable delivery, and very low latency. As the low-cost use case is essentially eMBB in low-band spectrum, with a less dense network and a high degree of automation, this choice covers carriers’ expressed needs rather well, at least in principle. In practice, the three have very different levels of commercial urgency.

Implicitly, of course, the other, more futuristic use cases (such as self-driving cars) have been relegated to “Phase 2”. As Phase 2 is expected to be delivered after 2020, or in other words, on the original timetable, this means that Phase 1 has indeed accelerated significantly. Delays in some of the more futuristic applications may not be a major worry to many people – self-driving cars probably have more regulatory obstacles than technical ones, while Vehicle to Vehicle (V2V) communications seems to be less of a priority for the automotive industry than many assert. A recent survey by Ericsson[1] suggested that better mapping and navigation is more important than “platooning” vehicles (grouping them together on the highway in platoons, which increases the capacity of the highway) as a driver of next-gen mobile capabilities.

3GPP’s current timeline foresees issuing the Technical Report (TR) detailing the requirements for the New Radio standard at the RAN (Radio Access Network) 73 meeting next month, and finalising a Non-Standalone version of the New Radio standard at RAN 78 in December 2017, with the complete NR specification being frozen by the TSG (Technical Specifications Group) 80 meeting in June 2018, in time to be included in 3GPP Release 15. (In itself this is a significant hurry-up on the original 5G timeline.) This spec would include all three major use cases, support for both <6GHz and millimetre wave spectrum, and both Non-Standalone and Standalone.

Importantly, if the Non-Standalone standard and the features it shares with Standalone are ready by the end of 2017, we will be very close to a product that could be deployed in a ‘pragmatist’ scenario even ahead of the full standards process. This seems to be what VZW, Nokia, Ericsson, and others are hoping for – especially for fixed-5G. The December 2017 meeting is an especially important juncture as it will be a joint meeting of both TSG and RAN. AT&T has also called for a speeding-up of standardisation[2].

The problem, however, is that it may be difficult to reconcile the technical requirements of all three in one new radio, especially as the new radio must also be extensible to deal with the many different use cases of Phase 2, and must work both with the 4G core network as “anchor” in Non-Standalone and with the new 5G core when that arrives, in Standalone.

Also, radio development is forging ahead of both core development and spectrum policy. Phase 1 5G is focused on the bands below 6GHz, but radio vendors have been demonstrating systems working in the 15, 28, 60, and 73GHz bands – for instance Samsung and T-Mobile working on 28GHz[3]. The US FCC especially has moved very rapidly to make this spectrum available, while the 3GPP work item for millimetre wave isn’t meant to report before 2017 – and with harmonisation and allocation only scheduled for discussion at ITU’s 2019 World Radio Congress.

The upshot is that the March 2017 TSG 75 meeting is a critical decision point. Among much else it will have to confirm the future timeline and make a decision on whether or not the Non-Standalone (sometimes abbreviated to NSA) version of the New Radio will be ready by TSG/RAN 78 in December. The following 3GPP graphic summarises the timeline.


[1] https://www.ericsson.com/se/news/2039614

[2] http://www.fiercewireless.com/tech/at-t-s-keathley-5g-standards-should-be-released-2017-not-2018

[3] http://www.fiercewireless.com/tech/t-mobile-samsung-plan-5g-trials-using-pre-commercial-systems-at-28-ghz

 

  • Executive Summary
  • Introduction: Different visions of 5G
  • One Network to Rule Them All: Can it Happen?
  • Network slicing: a nice theory, but work needed…
  • Difficulty versus Urgency: understanding opportunities and blockers for 5G
  • Business drivers of the timeline: both artificial and real
  • Internet-Agility Driving Progress
  • How big is the mission critical IoT opportunity?
  • Conclusions

 

  • Figure 1: 5G, the vision: one radio for everything
  • Figure 2: The New Radio standardisation timeline, as of June 2016
  • Figure 3: An example frame structure, showing the cost of critical comms
  • Figure 4: LTE RAN protocols desperately need simplicity
  • Figure 5: Moving the Internet/RAN boundary may be problematic, but the ultra-low latency targets demand it
  • Figure 6: Easy versus urgent
  • Figure 7: A summary of key opportunities and barriers in 5G

MWC 2016: 5G and Wireless Networks

Getting Serious About 5G

MWC 2016 saw intense hype about 5G. This is typical for the run-up to a new “G”, but at least this year there was much less of the waffle about it being “a behaviour”, a “special generation”, the “last G”, or a “state of mind”. Instead, there was much more concrete activity from all stakeholders, including operators, technology vendors and standards bodies.

Nokia CEO Rajeev Suri, notably, set a 2017 target for 5G deployment to begin, which has been taken up by carriers including Verizon Wireless. This is still controversial, but the major barriers seem to be around standardisation and spectrum, rather than the technology. Most vendors had a demonstration of 5G in some form, although the emphasis and timeframes varied. However, the general theme is that even the 2018-2019 timeframe set by the Korean operators may now be overtaken by events.

An important theme at the show was that expectations for 5G have been revised:

  • They have been revised up, when it comes to the potential of future radio technology, which is seen as being capable of delivering a useful version of 5G much faster;
  • They have been revised down, when it comes to some of the more science-fictional visions of ‘one network to cover every imaginable use case’. 5G is likely to be focused on mobile broadband plus a couple of other IoT options.

This is in part thanks to a strong operator voice on 5G, coordinated through the Next Generation Mobile Networks Alliance (NGMN), reaching the standardisation process in 3GPP. It is also due to a strong presence by the silicon vendors in the standards process, which is important given the concentration of the device market into relatively few system-on-chip and even fewer RF component manufacturers.

Context: 3GPP 5G RAN Meeting Set the Scene for Faster Development

To understand the shift at MWC, it is useful to revisit what operators and vendors were focusing on at the September 2015 3GPP 5G RAN meeting in Phoenix. Operator concerns from the sessions can be summed up as the three Cs – cost (reducing total cost of ownership), capacity (more of it, specifically enhanced mobile broadband data and supporting massive numbers of IoT device connections), and carbon dioxide (less of it, through using less energy).

At that key meeting, most operators clearly wanted the three Cs, and most also highlighted a particular interest in one or another of the 5G benefit areas. Orange was interested in driving low-cost mobile broadband for its African operations. Deutsche Telekom was keen on network slicing and virtualisation for its enterprise customers. Verizon Wireless wanted more speed above all, to maintain its premium carrier status in the rich US cellular market. Vodafone was interested in the IoT/M2M aspects as a new growth opportunity.

This was reflected in operator views on timing of 5G standardisation and commercialisation. The more value a particular operator placed on capacity, the sooner they wanted “early 5G” and the more focused the specs would have to be, putting off the more visionary elements (device-to-device, no-cells networks, etc.) to a second phase.

A strong alliance between the silicon vendors – Qualcomm, Samsung, Mediatek, ARM, and Intel – and key network vendors, notably Nokia, emerged to push for an early 5G standardisation focused on a new radio access technology. This standard would be used in the context of existing 4G networks before the new 5G core network arrives, and begins to deliver on the three Cs. On the other side of the discussion, Huawei (which was still talking about 5G in 2020 at MWC) was keen to keep the big expansive vision of an all-purpose multiservice 5G network alive, and to eke out 4G with incremental updates (LTE-A Pro) in the meantime.

Dino Flore, the Qualcomm executive who chairs 3GPP RAN, compromised by going for the early 5G radio access but keeping two of the special requests – for “massive” IoT and for “mission-critical” IoT – on the programme, while accepting continuing development of LTE as LTE-A Pro.

 

  • Executive Summary
  • Getting Serious About 5G
  • Context: 3GPP 5G RAN Meeting Set the Scene for Faster Development
  • MWC showed the early 5G camp is getting stronger
  • A special relationship: Nokia, Qualcomm, Intel
  • Conclusions

The Open Source Telco: Taking Control of Destiny

Preface

This report examines the approaches to open source software – broadly, software for which the source code is freely available for use, subject to certain licensing conditions – of telecoms operators globally. Several factors have come together in recent years to make the role of open source software an important and dynamic area of debate for operators, including:

  • Technological Progress: Advances in core networking technologies, especially network functions virtualisation (NFV) and software-defined networking (SDN), are closely associated with open source software and initiatives, such as OPNFV and OpenDaylight. Many operators are actively participating in these initiatives, as well as trialling their software and, in some cases, moving them into production. This represents a fundamental shift away from the industry’s traditional, proprietary, vendor-procured model.
    • Why are we now seeing more open source activities around core communications technologies?
  • Financial Pressure: However, over-the-top (OTT) disintermediation, regulation and adverse macroeconomic conditions have led to reduced core communications revenues for operators in developed and emerging markets alike. As a result, operators are exploring opportunities to move beyond their core infrastructure business and compete in the more software-centric services layer.
    • How do the Internet players use open source software, and what are the lessons for operators?
  • The Need for Agility: In general, there is recognition within the telecoms industry that operators need to become more ‘agile’ if they are to succeed in the new, rapidly-changing ICT world, and greater use of open source software is seen by many as a key enabler of this transformation.
    • How can the use of open source software increase operator agility?

The answers to these questions, and more, are the topic of this report, which is sponsored by Dialogic and independently produced by STL Partners. The report draws on a series of 21 interviews conducted by STL Partners with senior technologists, strategists and product managers from telecoms operators globally.

Figure 1: Split of Interviewees by Business Area

Source: STL Partners

Introduction

Open source is less optional than it once was – even for Apple and Microsoft

From the audience’s point of view, the most important announcement at Apple’s Worldwide Developers Conference (WWDC) this year was not the new versions of iOS and OS X, or even its Spotify-challenging Apple Music service. Instead, it was the announcement that Apple’s highly popular programming language ‘Swift’ was to be made open source, where open source software is broadly defined as software for which the source code is freely available for use – subject to certain licensing conditions.

On one level, this represents a clever strategy for engaging developers. Open source software uptake has increased rapidly over the last 15 years, most famously embodied by the Linux operating system (OS), and with it developers have demonstrated a growing preference for open source tools and platforms. Since Apple has generally pushed developers towards its own proprietary development tools, and away from third-party ones (such as Adobe Flash), this is significant in itself.

An indication of open source’s growth can be found in OS market shares for consumer electronics devices. As Figure 2 below shows, Android (open source) had a 49% share of shipments in 2014; if we include the various other open source OSes in ‘other’, this rises to more than 50%.

Figure 2: Share of consumer electronics shipments* by OS, 2014

Source: Gartner
* Includes smartphones, tablets, laptops and desktop PCs

However, one of the components being open sourced is Swift’s (hitherto proprietary) compiler – the program that translates written code into an executable that a computer system can run. The implication is that, in theory, we could even see Swift applications running on non-Apple devices in the future. In other words, Apple believes the risk of Swift being used on Android is outweighed by the reward of engaging with the developer community through open source.

Whilst some technology companies, especially the likes of Facebook, Google and Netflix, are well known for their activities in open source, Apple is famous for its proprietary approach to both hardware and software. This, combined with similar moves by Microsoft (which open sourced its .NET framework in 2014), suggests that open source is now less optional than it once was.

Open source is both an old and a new concept for operators

At first glance, open source also appears to now be less optional for telecoms operators, who traditionally procure proprietary software (and hardware) from third-party vendors. Whilst many (but not all) operators have been using open source software for some time, such as Linux and various open source databases in the IT domain (e.g. MySQL), we have in the last 2-3 years seen a step-change in operator interest in open source across multiple domains. The following quote, taken directly from the interviews, summarises the situation nicely:

“Open source is both an old and a new project for many operators: old in the sense that we have been using Linux, FreeBSD, and others for a number of years; new in the sense that open source is moving out of the IT domain and towards new areas of the industry.” 

AT&T, for example, has been speaking widely about its ‘Domain 2.0’ programme. Domain 2.0 aims to transform AT&T’s technical infrastructure to incorporate network functions virtualisation (NFV) and software-defined networking (SDN), to mandate a higher degree of interoperability, and to broaden the range of alternative suppliers available across its core business. By 2020, AT&T hopes to virtualise 75% of its network functions, and it sees open source as accounting for up to 50% of this. AT&T, like many other operators, is also a member of various recently formed initiatives and foundations around NFV and SDN, such as OPNFV – Figure 3 below lists some of these.

Figure 3: OPNFV Platinum Members

Source: OPNFV website

However, based on publicly-available information, other operators might appear to have lesser ambitions in this space. As ever, the situation is more complex than it first appears: other operators do have significant ambitions in open source and, despite the headlines NFV and SDN draw, there are many other business areas in which open source is playing (or will play) an important role. Figure 4 below includes three quotes from the interviews which highlight this broad spectrum of opinion:

Figure 4: Different attitudes of operators to open source – selected interview quotes

Source: STL Partners interviews

Key Questions to be Addressed

There are therefore many questions to be addressed concerning operator attitudes to open source software, its adoption by area of business, and more:

  1. What is open source software, what are its major initiatives, and who uses it most widely today?
  2. What are the most important advantages and disadvantages of open source software? 
  3. To what extent are telecoms operators using open source software today? Why, and where?
  4. What are the key barriers to operator adoption of open source software?
  5. Prospects: How will this situation change?

These are now addressed in turn.

  • Preface
  • Executive Summary
  • Introduction
  • Open source is less optional than it once was – even for Apple and Microsoft
  • Open source is both an old and a new concept for operators
  • Key Questions to be Addressed
  • Understanding Open Source Software
  • The Theory: Freely available, licensed source code
  • The Industry: Dominated by key initiatives and contributors
  • Research Findings: Evaluating Open Source
  • Open source has both advantages and disadvantages
  • Debunking Myths: Open source’s performance and security
  • Where are telcos using open source today?
  • Transformation of telcos’ service portfolios is making open source more relevant than ever…
  • … and three key factors determine where operators are using open source software today
  • Open Source Adoption: Business Critical vs. Service Area
  • Barriers to Telco Adoption of Open Source
  • Two ‘external’ barriers by the industry’s nature
  • Three ‘internal’ barriers which can (and must) change
  • Prospects and Recommendations
  • Prospects: An open source evolution, not revolution
  • Open Source, Transformation, and Six Key Recommendations
  • About STL Partners and Telco 2.0
  • About Dialogic

 

  • Figure 1: Split of Interviewees by Business Area
  • Figure 2: Share of consumer electronics shipments* by OS, 2014
  • Figure 3: OPNFV Platinum Members
  • Figure 4: Different attitudes of operators to open source – selected interview quotes
  • Figure 5: The Open IT Ecosystem (incl. key industry bodies)
  • Figure 6: Three Forms of Governance in Open Source Software Projects
  • Figure 7: Three Classes of Open Source Software License
  • Figure 8: Web Server Share of Active Sites by Developer, 2000-2015
  • Figure 9: Leading software companies vs. Red Hat, market capitalisation, Oct. 2015
  • Figure 10: The Key Advantages and Disadvantages of Open Source Software
  • Figure 11: How Google Works – Failing Well
  • Figure 12: Performance gains from an open source activation (OSS) platform
  • Figure 13: Intel Hardware Performance, 2010-13
  • Figure 14: Open source is more likely to be found today in areas which are…
  • Figure 15: Framework mapping current telco uptake of open source software
  • Figure 16: Five key barriers to telco adoption of open source software
  • Figure 17: % of employees with ‘software’ in their LinkedIn job title, Oct. 2015
  • Figure 18: ‘Waterfall’ and ‘Agile’ Software Development Methodologies Compared
  • Figure 19: Four key cultural attributes for successful telco transformation

Huawei’s choice: 5G visionary, price warrior or customer champion?

Introduction: Huawei H1s

Huawei’s H1 2015 results caused something of a stir, as they seemed to promise a new cycle of rapid growth at the No.2 infrastructure vendor. The headline figure was that revenue for H1 was up 30% year-on-year – somewhat surprising, as LTE infrastructure spending was thought to have passed its peak in much of the world. In context, Huawei’s revenue has grown at a 16% CAGR since 2010, while its operating profits have grown at 2%, implying very significant erosion of margins as the infrastructure business commoditises. Operating margins were in the region of 17-18% in 2010, before falling to 10-12% in 2012-2014.

Figure 1 – If Huawei’s H2 delivers as promised, it may have broken out of the commoditisation trap… for now

Source: STL Partners, Huawei press releases 

Our estimates in Figure 1 use the averages for the last four years to project two full-year scenarios. If the first, ‘2015E’, is delivered, it would take Huawei’s profitability back to 2010 levels and nearly double its operating profit. The second, ‘Alternate 2015E’, assumes a similar performance to last year’s, in which the second half of the year disappoints on profitability. In this case, the full-year margin would be closer to 12% than 18%, and all the growth would come from volume. The H1 announcement promises margins of 18% for 2015, which would therefore mean a very successful year indeed if they were delivered in H2. For the last few years, Huawei’s H2 revenue has been rather higher than H1 – on average by about 10% over 2011-2014. You might expect this in a growing business, but profitability is much more erratic.
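For readers who want to reproduce the arithmetic behind these two scenarios, the short sketch below rolls an H1 result forward into a full-year margin under a strong and a weak H2. The inputs are illustrative placeholders rather than Huawei’s reported figures; only the ~10% H2 revenue uplift and the 18% versus ~12% outcomes mirror the discussion above.

```python
# Illustrative sketch only: placeholder inputs, not Huawei's reported figures.
# It mirrors the logic of the '2015E' (strong H2) and 'Alternate 2015E' (weak H2) estimates.

def full_year_margin(h1_revenue, h1_margin, h2_margin, h2_uplift=0.10):
    """Full-year operating margin, assuming H2 revenue runs ~10% above H1."""
    h2_revenue = h1_revenue * (1 + h2_uplift)
    profit = h1_revenue * h1_margin + h2_revenue * h2_margin
    return profit / (h1_revenue + h2_revenue)

h1_rev, h1_margin = 100.0, 0.18  # indexed H1 revenue and an assumed H1 margin
print(f"Strong H2 (18% margin): {full_year_margin(h1_rev, h1_margin, 0.18):.1%}")  # ~18% full year
print(f"Weak H2 (7% margin):    {full_year_margin(h1_rev, h1_margin, 0.07):.1%}")  # ~12% full year
```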

For reference, Figure 2 shows that the relationship between H1 and H2 profitability has varied significantly from year to year. While in 2012 and 2013 Huawei’s operating profits in H2 were higher than in H1, in 2011 and 2014, its H2 operating profits were much less than in H1. 2015E shows the scenario needed to deliver the 18% annual margin target; Alternate 2015E shows a scenario where H2 is relatively weak, in line with last year.

Figure 2 – Huawei’s H1 and H2 Profits have varied significantly year on year

Source: STL Partners, Huawei press releases 

Huawei’s annual report hints at some reasons for the weak H2 2014, notably poor sales in North America, stockbuilding ahead of major Chinese investment (inventory rose sharply through 2014), and the launch of the Honor low-cost device brand. However, although North American wireless investment was indeed low at the time, North America has never been a core market for Huawei, and Chinese carriers were spending heavily. It is plausible that adding a lot of very cheap devices would weigh on the company’s profitability. As we will see, though, there are reasons to think Huawei might not have captured full value from strong carrier spending in this timeframe.

In any event, to hit Huawei’s ambitious 2015 target, it will need a great H2 2015 to follow from its strong H1. It hasn’t performed this particular ‘double’ for the last four years, so it will certainly be an achievement to do it in 2015. And if it does, how is the market looking for 2016 and beyond?

Where are we in the infrastructure cycle?

As Huawei is still primarily an infrastructure vendor, its business is closely coupled to operators’ CAPEX plans. In theory, these plans are cyclical, driven by the ever-present urge to upgrade technology and build out networks. On one hand, technology drivers (new standards, higher-quality displays and camera sensors) and user behaviour (the secular growth in data traffic) push operators to invest. On the other, financial imperatives (to derive as much margin from depreciating assets as possible) encourage operators to resist spending and sweat the assets.

Sometimes, the technology drivers get the upper hand; sometimes, the financial constraints. Therefore, the operator tends to “flip” between a high-investment and a low-investment state. Because operators compete, this behaviour may become synchronised within markets, causing large geographies to exhibit an infrastructure spending cycle.

In practice, other factors work against these cyclical forces. There are ‘bottlenecks’ in integration and in scaling resources up and down, and businesses generally prefer to keep expenditure as flat as possible to reduce variation and the resulting ‘surprises’ for their shareholders. Even so, there is some ongoing variation in CAPEX levels in every market, as we examine in the following sections.

North America: operators take a breather before 5G

In North America, the tipping point from sweating the assets to investment seems to have been reached roughly in 2011-2012, when the major carriers began a cycle of heavy investment in LTE infrastructure. This investment peaked in 2014. Recently, AT&T told its shareholders to expect significantly lower CAPEX over the next few years, and in fact the actual numbers so far this year are substantially lower than the guidance of around 14-15% of revenue. Excluding the Mexican acquisitions, CAPEX/Revenue has been running at around 13% since Q3 2014. From Q2 2013 to the end of Q2 2014, AT&T spent on average $5.7bn a quarter with the vendors. Since then, the average is $4.4bn, so AT&T has cut its quarterly CAPEX by more than a fifth.
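As a quick sanity check, the sketch below recomputes that reduction from the rounded quarterly averages quoted above; the exact percentage depends on the unrounded quarterly series.

```python
# Back-of-the-envelope check using the rounded averages quoted in the text;
# the precise figure depends on the unrounded quarterly CAPEX series.
avg_q2_2013_to_q2_2014 = 5.7  # $bn per quarter spent with vendors
avg_since_q3_2014 = 4.4       # $bn per quarter since then
cut = 1 - avg_since_q3_2014 / avg_q2_2013_to_q2_2014
print(f"Quarterly CAPEX down by roughly {cut:.0%}")  # ~23% on these rounded inputs
```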

Figure 3 – AT&T’s LTE investment cycle looks over.

Source: STL Partners, Huawei press releases

During 2013, AT&T, Sprint, and VZW were all in a higher spending phase, as Figure 4 below shows. Since then, AT&T and Sprint have backed off considerably. However, despite its problems, Sprint does seem to be starting another round of investment, VZW has started to invest again, and T-Mobile is rising gradually. We can therefore say that the investment pause in North America is overhyped, but it does exist – compare the first half of 2013, when AT&T, Sprint, and T-Mobile were all near the top of the cycle while VZW was dead on the average.

Figure 4 – The investment cycle in North America.

Source: STL Partners

The pattern is somewhat clearer in terms of CAPEX as a percentage of revenue, shown in Figure 5. In late 2012 and most of 2013, AT&T, Sprint, and T-Mobile were all near the top of their historic ranges for CAPEX as a percentage of their revenue. Now, only Sprint is really pushing hard.

Figure 5 – Spot the outlier. Sprint is the only US MNO still investing heavily

Source: STL Partners, company filings

If there is cyclicality, it is most visible here in Sprint’s numbers, and the cycle is actually pretty short – the peak-to-trough time seems to be about a year, so the whole cycle takes about two years to run. That suggests that if there is a more general cyclical uptick, it should come around H1 2016, with the one after that arriving nicely on time for early 5G implementations in 2018.

  • Executive Summary
  • Introduction: Huawei H1s
  • Where are we in the infrastructure cycle?
  • North America: operators take a breather before 5G
  • Europe: are we seeing a return to growth?
  • China: full steam ahead under “special action mobilisation”
  • The infrastructure market is changing
  • Commoditisation on a historic scale
  • And Huawei is no longer the price leader
  • The China Mobile supercontract: a highly political event
  • Conclusion: don’t expect a surge in infrastructure profitability
  • Huawei’s 5G Strategy and the Standards Process
  • Huawei’s approach to 5G
  • What do operators want from 5G?
  • In search of consensus: 3GPP leans towards a simpler “early 5G” solution
  • Conclusions
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In Q2, the Euro-5 out-invested Chinese operators for the first time
  • Figure 2: If Huawei’s H2 delivers as promised, it may have broken out of the commoditisation trap for now
  • Figure 3: Huawei’s H1 and H2 Profits have varied significantly year on year
  • Figure 4: AT&T’s LTE investment cycle looks over.
  • Figure 5: The investment cycle in North America.
  • Figure 6: Spot the outlier. Sprint is the only US MNO still investing heavily
  • Figure 7: 3 of the Euro-5 carriers are beginning to invest again
  • Figure 8: European investment levels are not as far behind as you might think
  • Figure 9: Chinese CAPEX/Revenue levels have been 10 percent higher than US or European ones – but this may be changing
  • Figure 10: Chinese infrastructure spending was taking a breather too, until Xi’s intervention
  • Figure 11: Chinese MNOs are investing heavily
  • Figure 12: LTE deployments have grown 100x while prices have fallen even more
  • Figure 13: As usual, Huawei is very much committed to a single radio solution
  • Figure 14: Huawei wants most 5G features in R15 by H2 2018
  • Figure 15: Huawei only supports priority for MBB very weakly and emphasises R16 and beyond
  • Figure 16: Chinese operators, Alcatel-Lucent, ZTE, and academic researchers disagree with Huawei
  • Figure 17: Orange’s view of 5G: distinctly practical
  • Figure 18: Telefonica is really quite sceptical about much of the 5G technology base
  • Figure 19: Qualcomm sees R15 as a bolt-on new radio in an LTE het-net
  • Figure 20: 3GPP RAN chairman Dino Flore says “yes” to prioritisation
  • Figure 21: Working as a group, the operators were slightly more ambitious
  • Figure 22: The vendors are very broadband-focused
  • Figure 23: Vodafone and Huawei