Network use metrics: Good versus easy and why it matters

Introduction

Telecoms, like much of the business world, often revolves around measurements, metrics and KPIs. Whether these relate to coverage of networks, net-adds and churn rates of subscribers, or financial metrics such as ARPU, there is a plethora of numerical measures to track.

They are used to determine shifts in performance over time, or benchmark between different companies and countries. Regulators and investors scrutinise the historical data and may set quantitative targets as part of policy or investment criteria.

This report explores the nature of such metrics, how they are (mis)used and how the telecoms sector – and especially its government and regulatory agencies – can refocus on good (i.e., useful, accurate and meaningful) data rather than over-simplistic or just easy-to-collect statistics.

The discussion primarily focuses on those metrics that relate to overall industry trends or sector performance, rather than individual companies’ sales and infrastructure – although many datasets are built by collating multiple companies’ individual data submissions. It considers mechanisms to balance the common “data asymmetry” between internal telco management KPIs and metrics available to outsiders such as policymakers.

A poor metric often has huge inertia and high switching costs. The phenomenon of historical accidents leading to entrenched, long-lasting effects is known as “path dependence”, and telecoms reflects this as much as many other sub-sectors of the economy. There are many old-fashioned metrics that are no longer fit for purpose, and even some new ones that are badly conceived. They often lead to poor regulatory decisions, poor optimisation and investment approaches by service providers, flawed incentives and large tranches of self-congratulatory overhype.

An important question is why some less-than-perfect metrics such as ARPU still have utility – and how and where to continue using them, with awareness of their limitations – or modify them slightly to reflect market reality. Sometimes maintaining continuity and comparability of statistics over time is important. Conversely, other old metrics such as “minutes” of voice telephony actually do more harm than good and should be retired or replaced.
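To make ARPU’s limitations concrete, here is a minimal sketch (all segment names and figures are invented for illustration): dividing total revenue by total connections produces a blended headline number that may represent none of the underlying customer segments, which is one reason the metric needs careful handling as low-revenue IoT connections grow.

```python
# Hypothetical illustration: blended ARPU can mask very different
# underlying segments (all figures below are invented for the example).
segments = {
    "consumer_prepaid":  {"revenue": 120_000, "subscribers": 40_000},
    "consumer_postpaid": {"revenue": 900_000, "subscribers": 30_000},
    "iot_connections":   {"revenue": 15_000,  "subscribers": 50_000},
}

def arpu(revenue: float, subscribers: int) -> float:
    """Monthly average revenue per user: total revenue / subscriber count."""
    return revenue / subscribers

total_rev = sum(s["revenue"] for s in segments.values())
total_subs = sum(s["subscribers"] for s in segments.values())

print(f"Blended ARPU: {arpu(total_rev, total_subs):.3f}")   # 8.625
for name, s in segments.items():
    print(f"  {name}: {arpu(s['revenue'], s['subscribers']):.2f}")
```

In this toy dataset the blended figure (8.625) sits far from every segment’s actual ARPU (3.00, 30.00 and 0.30), so the headline metric falls as IoT connections are added even though no individual segment has deteriorated.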

Enter your details below to download an extract of the report

Looking beyond operator KPIs

Throughout the report, we make a semantic distinction between industry-wide metrics and telco KPIs. KPIs are typically generated for specific individual companies, rather than aggregated across a sector. And while both KPIs and metrics can be retrospective or set as goals, metrics can also be forecast, especially where they link operational data to other underlying variables, such as population, geographic areas or demand (rather than supply).

STL Partners has previously published work on telcos’ external KPIs, including discussion of the focus on “defensive” statistics on core connectivity, “progressive” numbers on new revenue-generating opportunities, and socially oriented datasets on environmental, social and governance (ESG) issues and staffing. See the figure below.

Types of internal KPIs found in major telcos

Source: STL Partners

Policymakers need metrics

The telecoms policy realm spans everything from national broadband plans to spectrum allocations, decisions about mergers and competition, net neutrality, cybersecurity, citizen inclusion and climate/energy goals. All of them use metrics either during policy development and debate, or as goalposts for quantifying electoral pledges or making regional/international comparisons.

And it is here that an informational battleground lies.

There are usually multiple stakeholder groups in these situations, whether it is incumbents vs. new entrants, tech #1 vs. tech #2, consumers vs. companies, merger proponents vs. critics, or just between different political or ideological tribes and the numerous industry organisations and lobbying institutions that surround them. Everyone involved wants data points that make themselves look good and which allow them to argue for more favourable treatment or more funding.

The underlying driver here is policy rather than performance.

Data asymmetry

A major problem that emerges here is data asymmetry. There is a huge gulf between the operational internal KPIs used by telcos, and those that are typically publicised in corporate reports and presentations or made available in filings to regulators. Automation and analytics technologies generate ever more granular data from networks’ performance and customers’ usage of, and payment for, their services – but these do not get disseminated widely.

Thus, policymakers and regulators often lack the detailed and disaggregated primary information and data resources available to large companies’ internal reporting functions. They typically need to mandate specific (comparable) data releases via operators’ license terms or rely on third-party inputs from sources such as trade associations, vendor analysis, end-user surveys or consultants.

 

Table of contents

  • Executive Summary
    • Key recommendations
    • Next steps
  • Introduction
    • Key metrics overview
    • KPIs vs. metrics: What’s in a name?
    • Who uses telco metrics and why?
    • Data used in policy-making and regulation
    • Metrics and KPIs enshrined in standards
    • Why some stakeholders love “old” metrics
    • Granularity
  • Coverage, deployment and adoption
    • Mobile network coverage
    • Fixed network deployment/coverage
  • Usage, speed and traffic metrics
    • Voice minutes and messages
    • Data traffic volumes
    • Network latency
  • Financial metrics
    • Revenue and ARPU
    • Capex
  • Future trends and innovation in metrics
    • The impact of changing telecom industry structure
    • Why applications matter: FWA, AR/VR, P5G, V2X, etc
    • New sources of data and measurements
  • Conclusion and recommendations
    • Recommendations for regulators and policymakers
    • Recommendations for fixed and cable operators
    • Recommendations for mobile operators
    • Recommendations for telecoms vendors
    • Recommendations for content, cloud and application providers
    • Recommendations for investors and consultants
  • Appendix
    • Key historical metrics: Overview
    • How telecoms data is generated
  • Index


Telco Cloud Deployment Tracker: Is 5G SA getting real?

5G SA core: Will 2H23 finally see momentum?

At the end of 2021, we predicted that 5G SA core deployments would significantly accelerate in 2022, but they did not. There were 21 launches of converged 5G NSA/SA or pure 5G SA cores in 2022, against 18 in 2021. In the January 2023 update of our tracker, when we reviewed telco cloud activity for 2022, we shifted all the outstanding deployments once expected in 2022 to 2023. Some of these deployments had been announced for over two years and this made 2023 look as if it might become the year of 5G SA.

Now at the half-way point in 2023, there have been only seven 5G SA (including converged 5G NSA/SA) core deployments so far:

  • Although few in number, these deployments are significant either by their scale (Reliance Jio in India) or by virtue of the importance of the operators involved: E& (introduced in the UAE in March); and Vodafone (in the UK in June).
  • And for Orange, which is engaged in 5G SA deployments across its entire European footprint, the launch of a first country (Spain in February 2023) is encouraging progress.

But it is legitimate to ask whether the remaining 30 5G SA launches that we still have pending for 2023 are likely to take place in the remaining six months (as our Tracker currently reflects). Or will they in fact trickle in over the next few years or even not happen at all?

Global deployments of 5G core by type, 2018–2024

Source: STL Partners

 


Why have SA 5GC deployments gone off track?

Our September 2022 report 5G standalone (SA) core: Why and how telcos should keep going provided some pointers as to why operators are slow in jumping to 5G SA. These remain valid today:

  1. 5G SA requires significant investment, for which (in some markets at least) there is no clear ROI because the use cases that would leverage 5G SA capabilities (in terms of latency, bandwidth or high volume of connections) are yet to emerge, both on the consumer and the enterprise fronts, as are the ways to monetise them.
  2. Many operators are still weighing up their strategy for partnering with the hyperscale cloud providers. In particular, this relates to the role of public cloud as an infrastructure platform for 5G SA deployments and the role hyperscaler infrastructure can play in accelerating SA network coverage.
  3. Some of the leading operators that are yet to launch SA are also among the main supporters of open RAN and/or are engaged in fibre rollout projects: those conflicting investment requirements may create delays and a need for phasing in some of the rollouts.

To fully exploit 5G SA requires an organisational evolution within telcos. To reap its benefits as both a pure connectivity enabler and as a platform for innovative services, telcos need to undergo an evolution in their processes and organisations to support cloud practices and operations. This doesn’t happen overnight.

In APAC where SA is steaming ahead, greater telco ambition and strong state support have spurred deployments

One way to address the question of stalled 5G SA deployments is to examine what has driven the deployments that have taken place. Will the use cases involved there drive a bigger wave of deployments globally?

While there have been 13 5G SA (including converged 5G NSA/SA) core deployments in Europe, 31 have taken place in APAC. They involve the leading operators in China, Japan, the Philippines, Singapore, South Korea and Taiwan. The roll-outs support bandwidth-hungry consumer use cases such as gaming, AR/VR, HD/4K content streaming and VoNR. Some operators, such as NTT Docomo, SK Telecom and the Chinese players, have made SA available to support a limited number of private networking and industrial IoT use cases. Factors driving these deployments include:

  • State support or mandates for 5G SA (China and South Korea)
  • Consumer enthusiasm for and early adoption of 5G, with the SA version offering tangible performance gains over 4G
  • Rich ecosystem of local device manufacturers and app developers, and a commitment by operators to invest in new use cases and services
  • Ability to offload ‘power users’ of bandwidth-hungry, latency-critical services off the 4G and 5G NSA network and willingness from those users to pay a premium for these benefits (the three Chinese operators have seen modest ARPU increases between 2020 and 2022 of between 2.5% and 5.2% per annum)
  • Pre-existing local and metro fibre, supporting 5G SA backhaul.

Effective deployments of 5G SA and converged 5G NSA/SA cores by region, 2019-23

Source: STL Partners

 

Table of Contents

  • Executive summary
  • Deep dive: Is 5G SA getting real?
  • Regional overview
  • Operator view
  • Vendor view

Related research


Telco Cloud Deployment Tracker: Will vRAN eclipse pure open RAN?

Is vRAN good enough for now?

In this October 2022 update to STL Partners’ Telco Cloud Deployment Tracker, we present data and analysis on progress with deployments of vRAN and open RAN. It is fair to say that open RAN (virtualised AND disaggregated RAN) deployments have not happened at the pace that STL Partners and many others had forecast. In parallel, some very significant deployments and developments are occurring with vRAN (virtualised NOT disaggregated RAN). Is open RAN a networking ideal that is not yet, or never will be, deployed in its purest form?

In our Telco Cloud Deployment Tracker, we track deployments of three types of virtualised RAN:

  1. Open RAN / O-RAN: Open, disaggregated, virtualised / cloud-native, with baseband unit (BU) functions distributed between a Central Unit (CU: control plane functions) and a Distributed Unit (DU: data plane functions)
  2. vRAN: Virtualised and distributed CU/DU, with open interfaces but implemented as an integrated, single-vendor platform
  3. Cloud RAN (C-RAN): Single-vendor, virtualised / centralised BU, or CU only, with proprietary / closed interfaces

Cloud RAN is the most limited form of virtualised RAN: it is based on porting part or all of the functionality of the legacy, appliance-based BU into a Virtual Machine (VM). vRAN and open RAN are much more significant, in both technology and business-model terms, breaking open all parts of the RAN to more competition and opportunities for innovation. They are also cloud-native functions (CNFs) rather than VM-based.


2022 was meant to be the breakthrough year for open RAN: what happened?

  • Of the eight deployments of open RAN we were expecting to go live in 2022 (shown in the chart below), only three had done so by the time of writing.
  • Two of these were on the same network: Altiostar and Mavenir RAN platforms at DISH. The other was a converged Parallel Wireless 2G / 3G RAN deployment for Orange Central African Republic.
  • This is hardly the wave of 5G open RAN, macro-network roll-outs that the likes of Deutsche Telekom, Orange, Telefónica and Vodafone originally committed to for 2022. What has gone wrong?
  • Open RAN has come up against a number of thorny technological and operational challenges, which are well known to open RAN watchers:
    • integration challenges and costs
    • hardware performance and optimisation
    • immature ecosystem and unclear lines of accountability when things go wrong
    • unproven at scale, and absence of economies of scale
    • energy efficiency shortcomings
    • need to transform the operating model and processes
    • pressured 5G deployment and Huawei replacement timelines
    • absence of mature, open, horizontal telco cloud platforms supporting CNFs.
  • Over and above these factors, open RAN is arguably not essential for most of the 5G use cases it was expected to support.
  • This can be gauged by looking at some of the many open RAN trials that have not yet resulted in commercial deployments.

Global deployments of C-RAN, vRAN and open RAN, 2016 to 2023


Source: STL Partners

Previous telco cloud tracker releases and related research


Telco Cloud Deployment Tracker: 5G core deep dive

Deep dive: 5G core deployments 

In this July 2022 update to STL Partners’ Telco Cloud Deployment Tracker, we present granular information on 5G core launches. They fall into three categories:

  • 5G Non-standalone core (5G NSA core) deployments: The 5G NSA core (agreed as part of 3GPP Release 15 in December 2017) involves using a virtualised and upgraded version of the existing 4G core (or EPC) to support 5G New Radio (NR) wireless transmission in tandem with existing LTE services. This was the first form of 5G to be launched and still accounts for 75% of all 5G core network deployments in our Tracker.
  • 5G Standalone core (5G SA core) deployments: The SA core is a completely new and 5G-only core. It has a simplified, cloud-native and distributed architecture, and is designed to support services and functions such as network slicing, Ultra-Reliable Low-Latency Communications (URLLC) and enhanced Machine-Type Communications (eMTC, i.e. massive IoT). Our Tracker indicates that the upcoming wave of 5G core deployments in 2022 and 2023 will be mostly 5G SA core.
  • Converged 5G NSA/SA core deployments: this is when a dual-mode NSA and SA platform is deployed; in most cases, the NSA core results from the upgrade of an existing LTE core (EPC) to support 5G signalling and radio. The principle behind a converged NSA/SA core is the ability to orchestrate different combinations of containerised network functions, and automatically and dynamically flip over from an NSA to an SA configuration, in tandem – for example – with other features and services such as Dynamic Spectrum Sharing and the needs of different network slices. For this reason, launching a converged NSA/SA platform is a marker of a more cloud-native approach in comparison with a simple 5G NSA launch. Ericsson is the most commonly found vendor for this type of platform with a handful coming from Huawei, Samsung and WorkingGroupTwo. Albeit interesting, converged 5G NSA/SA core deployments remain a minority (7% of all 5G core deployments over the 2018-2023 period) and most of our commentary will therefore focus on 5G NSA and 5G SA core launches.
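The three-way classification above lends itself to simple tallies of the kind the Tracker reports (e.g. NSA’s 75% share). A minimal sketch, with invented operators and years standing in for real tracker records:

```python
from collections import Counter

# Hypothetical, simplified tracker records (operator names and years are
# invented for illustration); each deployment carries one of the three
# core types described above.
deployments = [
    {"operator": "Operator A", "year": 2019, "core": "5G NSA"},
    {"operator": "Operator B", "year": 2020, "core": "5G NSA"},
    {"operator": "Operator C", "year": 2020, "core": "5G NSA"},
    {"operator": "Operator D", "year": 2021, "core": "5G SA"},
    {"operator": "Operator E", "year": 2022, "core": "Converged 5G NSA/SA"},
]

# Count deployments per core type and derive each type's share.
counts = Counter(d["core"] for d in deployments)
shares = {core: n / len(deployments) for core, n in counts.items()}
print(counts.most_common())  # NSA leads in this toy dataset
```

The same tally, run per year instead of over the whole dataset, is what reveals the migration from NSA towards SA launches discussed below.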


75% of 5G cores are still Non-standalone (NSA)

Global 5G core deployments by type, 2018–23

  • There is renewed activity this year in 5G core launches: the total number of 5G core deployments so far in 2022 (effective and in progress) stands at 49, above the 47 logged in the whole of 2021. At the very least, total 5G core deployments in 2022 will land between the 2021 level and the 2020 peak (97).
  • 5G in whichever form now exists in most places where it was both in demand and affordable; but there remain large economies where it is yet to be launched: Turkey, Russia and most notably India. It also remains to be launched in most of Africa.
  • In countries with 5G, the next phase of launches, which will see the migration of NSA to SA cores, has yet to take place on a significant scale.
  • To date, 75% of all 5G cores are NSA. However, 5G SA will outstrip NSA in terms of deployments in 2022 and represent 24 of the 49 launches this year, or 34 if one includes converged NSA/SA cores as part of the total.
  • All but one of the 5G launches announced for 2023 are standalone; they all involve Tier-1 MNOs including Orange (in its European footprint involving Ericsson and Nokia), NTT Docomo in Japan and Verizon in the US.

The upcoming wave of SA core (and open / vRAN) represents an evolution towards cloud-native

  • Cloud-native functions (CNFs) are software designed from the ground up for deployment and operation in the cloud, with:
    • Portability across any hardware infrastructure or virtualisation platform
    • Modularity and openness, with components from multiple vendors able to be flexibly swapped in and out based on a shared set of compute and OS resources, and open APIs (in particular, via software ‘containers’)
    • Automated orchestration and lifecycle management, with individual micro-services (software sub-components) able to be independently modified / upgraded, and automatically re-orchestrated and service-chained based on a persistent, API-based, ‘declarative’ framework (one which states the desired outcome, with the service chain organising itself to deliver the outcome in the most efficient way)
    • Compute, resource and software efficiency: as a concomitant of the automated, lean and logically optimal characteristics described above, CNFs are more efficient (both functionally and in terms of operating costs) and consume fewer compute and energy resources
    • Scalability and flexibility, as individual functions (for example, distributed user plane functions in 5G networks) can be scaled up or down instantly and dynamically in response to overall traffic flows or the needs of individual services
    • Programmability, as network functions are now entirely based on software components that can be programmed and combined in a highly flexible manner in accordance with the needs of individual services and use contexts, via open APIs.
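The ‘declarative’ principle mentioned above can be illustrated with a toy reconciler: the operator states a desired outcome, and the system works out the actions needed to reach it (function names and replica counts below are invented, not real 5G core APIs):

```python
# Toy illustration of a declarative framework: state the desired outcome,
# and a reconciler derives the actions needed to close the gap.
# Function names and replica counts are hypothetical.
desired = {"upf_replicas": 4, "smf_replicas": 2}
actual  = {"upf_replicas": 2, "smf_replicas": 2}

def reconcile(desired: dict, actual: dict):
    """Yield (function, delta) scaling actions that move actual to desired."""
    for fn, want in desired.items():
        have = actual.get(fn, 0)
        if want != have:
            yield (fn, want - have)

print(list(reconcile(desired, actual)))  # → [('upf_replicas', 2)]
```

Real orchestrators (e.g. Kubernetes controllers) apply the same loop continuously, which is what allows user plane functions to scale dynamically with traffic as described above.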

Previous telco cloud tracker releases and related research

Each new release of the tracker is global, but is accompanied by an analytical report which focusses on trends in given regions from time to time:


Private networks: Lessons so far and what next

The private networks market is rapidly developing

Businesses across a range of sectors are exploring the benefits of private networks in supporting their connected operations. However, there are considerable variations between national markets, reflecting spectrum and other regulatory actions, as well as industrial structure and other local factors. US, Germany, UK, Japan and the Nordics are among the leading markets.

Enterprises’ adoption of digitalisation and automation programmes is growing across various industries. The demand from enterprises stems from their need for customised networks to meet their vertical-specific connectivity requirements – as well as more basic considerations of coverage and cost of public networks, or alternative wireless technologies.

On the supply side, the development in cellular standards, including the virtualisation of the RAN and core elements, the availability of edge computing, and cloud management solutions, as well as the changing spectrum regulations are making private networks more accessible for enterprises. That said, many recently deployed private cellular networks still use “traditional” integrated small cells, or major vendors’ bundled solutions – especially in conservative sectors such as utilities and public safety.

Many new players are entering the market through different vertical and horizontal approaches and either competing or collaborating with traditional telcos. Traditional telcos, new telcos (mainly building private networks or offering network services), and other stakeholders are all exploring strategies to engage with the market and assessing the opportunities across the value chain as private network adoption increases.

Following up on our 2019 report Private and vertical cellular networks: Threats and opportunities, we explore the recent developments in the private network market, regulatory activities and policy around local and shared spectrum, and the different deployment approaches and business cases. In this report we address several interdependent elements of the private networks landscape.


What is a private network?

A private network leverages dedicated resources such as infrastructure and spectrum to provide precise coverage and capacity to specific devices and user groups. The network can be as small as a single radio cell covering a single campus or a location such as a manufacturing site (or even a single airplane), or it can span across a wider geographical area such as a nationwide railway network or regional utility grids.

Private networks is an umbrella term that can include different LAN (or WAN) connectivity options such as Wi-Fi and LPWAN. More commonly, however, the term is associated with private cellular networks based on 3GPP mobile technologies, i.e. LTE or 5G New Radio (NR).

Private networks are also different from in-building densification solutions such as small cells and DAS, which extend the coverage of the public network or strengthen its capacity indoors or in highly dense locations. These solutions are still part of the public network and do not support customised control over local network access or other characteristics. In future, some may support local private networks as well as public MNOs’ services.

Besides dedicated coverage and capacity, private networks can be customised in other aspects such as security, latency and integration with the enterprise internal systems to meet business specific requirements in ways that best effort public networks cannot.

Unlike public networks, private networks are not available to the public through commercially available devices and SIM cards. The network owner or operator controls the authorisation and the access to the network for permissioned devices and users. These definitions blur somewhat if the network is run by a “community” such as a municipality.
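The permissioned-access model described above boils down to an owner-managed allowlist. A minimal sketch, not a real 3GPP implementation (the IMSIs below use the test-range country code 999 and are hypothetical):

```python
# Minimal sketch of private-network access control: only devices whose
# identifiers appear on an owner-managed allowlist are admitted, unlike a
# public network's open commercial subscription model.
# IMSIs are hypothetical, using the reserved test MCC 999.
ALLOWED_IMSIS = {"999990000000001", "999990000000002"}

def admit(imsi: str) -> bool:
    """Return True only for devices the network owner has authorised."""
    return imsi in ALLOWED_IMSIS

print(admit("999990000000001"))  # True: a permissioned device
print(admit("310150123456789"))  # False: an ordinary public-network SIM
```

In a real deployment this decision sits in the network’s subscriber database and authentication functions rather than application code, but the control point is the same: the network owner, not a retail SIM purchase, determines who connects.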

Typically, devices will not work outside the boundaries of their private network. That is a requirement in many use cases, such as manufacturing, where devices are not expected to continue functioning outside the premise. However, in a few areas, such as logistics, solutions can include the use of dual-SIM devices for both public and private networks or the use of other wide area technologies such as TETRA for voice. Moreover, agreements with public networks to enable roaming can be activated to support certain service continuity outside the private network boundaries.

While the technology and market are still developing, several terms are used interchangeably to describe 3GPP private networks, such as dedicated networks, standalone networks, campus networks, local networks, vertical mobile networks and non-public networks (NPN), as defined by the 3GPP.

The emergence of new telcos

Many telcos are not ready to support private network demands from enterprises at scale because they lack sufficient resources and expertise. Also, some enterprises might be reluctant to work with telcos for various reasons, including concerns over traditional telcos’ abilities in vertical markets and a desire to control costs. This gap is already catalysing the emergence of new types of mobile network service providers, as opposed to traditional MNOs that operate national or regional public mobile networks.

These players essentially carry out the same roles as traditional MNOs in configuring the network, provisioning the service, and maintaining the private network infrastructure. Some of them may also have access to spectrum and buy network equipment and technologies directly from network equipment vendors. In addition to “new telcos” or “new operators”, other terms have been used to describe these players, such as specialist operators and alternative operators. Throughout this report, we will use new telcos or specialist operators when describing these players collectively, and traditional/public operators when referring to a typical wide-area national mobile network provider. New players can be divided into the following categories:

Possible private networks service providers


Source: STL Partners

Table of contents

  • Executive Summary
    • What next
    • Trends and recommendations for telcos, vendors, enterprises and policymakers
  • Introduction
  • Types of private network operators
    • What is a private network?
    • The emergence of new telcos
  • How various stakeholders are approaching the market
    • Technology development: Choosing between LTE and 5G
    • Private network technology vendors
    • Regional overview
    • Vertical overview
    • Mergers and acquisitions activities
  • The development of spectrum regulations
    • Unlicensed spectrum for LTE and 5G is an attractive option, but it remains limited
    • The rise of local spectrum licensing threatens some telcos
    • …but there is no one-size fits all in local spectrum licensing
    • How local spectrum licensing shapes the market and enterprise adoption
    • Recommendations for different stakeholders
  • Assessing the approaches to network implementation
    • Private network deployment models
    • Business models and roles for telcos
  • Conclusion and recommendations
  • Index
  • Appendix 1: Examples of private networks deployments in 2020–2021


Why the consumer IoT is stuck in the slow lane

A slow start for NB-IoT and LTE-M

For telcos around the world, the Internet of Things (IoT) has long represented one of the most promising growth opportunities. Yet for most telcos, the IoT still only accounts for a low single digit percentage of their overall revenue. One of the stumbling blocks has been relatively low demand for IoT solutions in the consumer market. This report considers why that is and whether low cost connectivity technologies specifically-designed for the IoT (such as NB-IoT and LTE-M) will ultimately change this dynamic.

NB-IoT and LTE-M are often referred to as Massive IoT technologies because they are designed to support large numbers of connections, which periodically transmit small amounts of data. They can be distinguished from broadband IoT connections, which carry more demanding applications, such as video content, and critical IoT connections that need to be always available and ultra-reliable.

The initial standards for both technologies were completed by 3GPP in 2016, but adoption has been relatively modest. This report considers the key B2C and B2B2C use cases for Massive IoT technologies and the prospects for widespread adoption. It also outlines how NB-IoT and LTE-M are evolving and the implications for telcos’ strategies.

This builds on previous STL Partners’ research, including LPWA: Which way to go for IoT? and Can telcos create a compelling smart home?. The LPWA report explained why IoT networks need to be considered across multiple generations, including coverage, reliability, power consumption, range and bandwidth. Cellular technologies tend to be best suited to wide area applications for which very reliable connectivity is required (see Figure below).

IoT networks should be considered across multiple dimensions

Source: Disruptive Analysis

 


The smart home report outlined how consumers could use both cellular and short-range connectivity to bolster security, improve energy efficiency, charge electric cars and increasingly automate appliances. One of the biggest underlying drivers in the smart home sector is peace of mind – householders want to protect their properties and their assets, as rising population growth and inequality fuel fear of crime.

That report contended that householders might be prepared to pay for a simple and integrated way to monitor and remotely control all their assets, from door locks and televisions to solar panels and vehicles.  Ideally, a dashboard would show the status and location of everything an individual cares about. Such a dashboard could show the energy usage and running cost of each appliance in real-time, giving householders fingertip control over their possessions. They could use the resulting information to help them source appropriate insurance and utility supply.

Indeed, STL Partners believes telcos have a broad opportunity to help coordinate better use of the world’s resources and assets, as outlined in the report: The Coordination Age: A third age of telecoms. Reliable and ubiquitous connectivity is a key enabler of the emerging sharing economy in which people use digital technologies to easily rent the use of assets, such as properties and vehicles, to others. The data collected by connected appliances and sensors could be used to help safeguard a property against misuse and source appropriate insurance covering third party rentals.

Do consumers need Massive IoT?

Whereas some IoT applications, such as connected security cameras and drones, require high-speed and very responsive connectivity, most do not. Connected devices that are designed to collect and relay small amounts of data, such as location, temperature, power consumption or movement, don’t need a high-speed connection.

To support these devices, the cellular industry has developed two key technologies – LTE-M (LTE for Machines, sometimes referred to as Cat M) and NB-IoT (Narrowband IoT). In theory, they can be deployed through a straightforward upgrade to existing LTE base stations. Although these technologies don’t offer the capacity, throughput or responsiveness of conventional LTE, they do support the low power wide area connectivity required for what is known as Massive IoT – the deployment of large numbers of low cost sensors and actuators.

For mobile operators, the deployment of NB-IoT and LTE-M can be quite straightforward. If they have relatively modern LTE base stations, then NB-IoT can be enabled via a software upgrade. If their existing LTE network is reasonably dense, there is no need to deploy additional sites – NB-IoT, and to a lesser extent LTE-M, are designed to penetrate deep inside buildings. Still, individual base stations may need to be optimised on a site-by-site basis to ensure that they get the full benefit of NB-IoT’s low power levels, according to a report by The Mobile Network, which notes that operators also need to invest in systems that can provide third parties with visibility and control of IoT devices, usage and costs.
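The low-power claim above can be illustrated with a back-of-envelope battery model: a device that sleeps most of the time and transmits in short bursts can last years on a small battery. Every figure in this sketch (battery size, currents, burst length, reporting rate) is an illustrative assumption, not measured NB-IoT module data; real values vary by module, coverage class and network configuration.

```python
def battery_life_years(battery_mah=5000,       # assumed: two AA-size lithium cells
                       sleep_ua=5,             # assumed deep-sleep current (microamps)
                       tx_ma=220, tx_s=5,      # assumed transmit burst: current (mA), length (s)
                       reports_per_day=4):
    """Estimate battery life in years from the average daily charge drawn."""
    tx_mah_per_day = tx_ma * tx_s * reports_per_day / 3600  # burst consumption, mAh/day
    sleep_mah_per_day = sleep_ua / 1000 * 24                # sleep consumption, mAh/day
    return battery_mah / (tx_mah_per_day + sleep_mah_per_day) / 365

print(f"{battery_life_years():.1f} years")
```

With these assumptions the model lands near the ten-year battery life often quoted for LPWA devices; doubling the reporting rate roughly halves it, which is why Massive IoT devices report infrequently and spend most of their life asleep.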

There are a number of potential use cases for Massive IoT in the consumer market:

  • Asset tracking: pets, bikes, scooters, vehicles, keys, wallets, passports, phones, laptops, tablets etc.
  • Vulnerable person tracking: children and the elderly
  • Health wearables: wristbands, smart watches
  • Metering and monitoring: power, water, garden
  • Alarms and security: smoke alarms, carbon monoxide, intrusion
  • Digital homes: automation of temperature and lighting in line with occupancy

In the rest of this report we consider the key drivers and barriers to take-up of NB-IoT and LTE-M for these consumer use cases.

Table of Contents

  • Executive Summary
  • Introduction
  • Do consumers need Massive IoT?
    • The role of eSIMs
    • Takeaways
  • Market trends
    • IoT revenues: Small, but growing
  • Consumer use cases for cellular IoT
    • Amazon’s consumer IoT play
    • Asset tracking: Demand is growing
    • Connecting e-bikes and scooters
    • Slow progress in healthcare
    • Smart metering gains momentum
    • Supporting micro-generation and storage
    • Digital buildings: A regulatory play?
    • Managing household appliances
  • Technological advances
    • Network coverage
  • Conclusions: Strategic implications for telcos

 

Enter your details below to request an extract of the report

Driving the agility flywheel: the stepwise journey to agile

Agility is front of mind, now more than ever

Telecoms operators today face an increasingly challenging market, with pressure coming from new non-telco competitors, from the unfamiliar B2B2X business models that emerge from new enterprise opportunities across industries, and from the need to make significant investments in 5G. As the industry undergoes these changes, operators are considering how best to realise commercial opportunities, particularly in enterprise markets, through the new types of value-added services and capabilities that 5G can bring.

However, operators need to react not just to known near-term opportunities as they arise, but also to ready themselves for opportunities that are still being imagined. Amid such uncertainty, agility – with the quick responsiveness and unified focus it implies – is integral to an operator’s continued success and its ability to capitalise on these opportunities.

Traditional linear supply models are now being complemented by more interconnected ecosystems of customers and partners. Innovation of products and services is a primary function of these decentralised supply models. Ecosystems allow the disparate needs of participants to be met through highly configurable assets rather than waiting for a centralised player to understand the complete picture. This emphasises the importance of programmability in maximising the value returned on your assets, both in end-to-end solutions you deliver, and in those where you are providing a component of another party’s system. The need for agility has never been stronger, and this has accelerated transformation initiatives within operators in recent years.


Concepts of agility have crystallised in meaning

In 2015, STL Partners published a report on ‘The Agile Operator: 5 key ways to meet the agility challenge’, exploring the concept and characteristics of operator agility, including what it means to operators, key areas of agility and the challenges in the agile transformation. Today, the definition of agility remains as broad as in 2015 but many concepts of agility have crystallised through wider acceptance of the importance of the construct across different parts of the organisation.

Agility today is a pervasive philosophy of incremental innovation, learned from software development, that emphasises both speed of innovation at scale and carrier-grade resilience. This is achieved through cloud native modular architectures and practices such as sprints, DevOps and continuous integration and continuous delivery (CI/CD) – occurring in a virtuous cycle we call the agility flywheel.

The Agility Flywheel


Source: STL Partners

Six years ago, operators were largely looking to borrow only certain elements of cloud native for adoption in specific pockets within the organisation, such as IT. Now, the cloud model is more widely embraced across the business and telcos profess ambitions to become software-centric companies.

Same problem, different constraints

Cloud native is the most fundamental version of the componentised cloud software vision, and progress towards this ideal of agility has been heavily constrained by operators’ underlying capabilities. In 2015, operators were just starting to embark on their network virtualisation journeys, held back by barriers such as siloed legacy IT stacks, inelastic infrastructure and architecture-constrained software lifecycles. Though these barriers remain a challenge for many, the operators at the forefront – now unhindered by these basic constraints – have been driving a general acceleration towards agility organisation-wide, and now face new challenges around the unknown requirements of future capabilities.

With 5G, the network itself is designed as cloud native from the ground up, as are the leading edge of enterprise applications recently deployed by operators, alleviating by design some of the constraints on operators’ ability to become more agile. Uncertainty around what future opportunities will look like, and how to support them, requires agility to run deep into all of an operator’s processes and capabilities. Though there is a vast raft of opportunities that do not need cloud native, the market is ultimately evolving in this direction, and operators should benchmark their ambitions against the leading edge, with a plan to get there incrementally. This report looks to address the following key question:

Given the flexibility and driving force that 5G provides, how can operators take advantage of recent enablers to drive greater agility and thrive in the current pace of change?


 

Table of Contents

    • Executive Summary
    • Agility is front of mind, now more than ever
      • Concepts of agility have crystallised in meaning
      • Same problem, different constraints
    • Ambitions to be a software-centric business
      • Cloudification is supporting the need for agility
      • A balance between seemingly opposing concepts
    • You are only as agile as your slowest limb
      • Agility is achieved stepwise across three fronts
      • Agile IT and networks in the decoupled model
      • Renewed need for orchestration that is dynamic
      • Enabling and monetising telco capabilities
      • Creating momentum for the agility flywheel
    • Recommendations and conclusions

SK Telecom: Lessons in 5G, AI, and adjacent market growth

SK Telecom’s strategy

SK Telecom is the largest mobile operator in South Korea, with a 42% share of the mobile market, and is also a major fixed broadband operator. Its growth strategy is focused on 5G, AI and a small number of related business areas where it sees the potential for revenue to replace that lost from its core mobile business.

By developing applications based on 5G and AI, it hopes to create additional revenue streams both for its mobile business and for new areas, as it has done in smart home and is starting to do for a variety of smart business applications. In 5G it is placing an emphasis on indoor coverage and edge computing as a basis for vertical industry applications. Its AI business is centred on NUGU, a smart speaker and a platform for business applications.

Its other main areas of business focus are media, security, ecommerce and mobility, but it is also active in other fields including healthcare and gaming.

The company takes an active role internationally in standards organisations and commercially, both in its own right and through many partnerships with other industry players.

It is a subsidiary of SK Group, one of the largest chaebols in Korea, which has interests in energy and oil. Chaebols are large family-controlled conglomerates which display a high level and concentration of management power and control. The ownership structures of chaebols are often complex, owing to the many crossholdings between companies owned by chaebols and by family members. SK Telecom uses its connections within SK Group to set up ‘friendly user’ trials of new services, such as edge and AI.

While the largest part of the business remains in mobile telecoms, SK Telecom also owns a number of subsidiaries, mostly active in its main business areas, for example:

  • SK Broadband, which provides fixed broadband (ADSL and wireless), IPTV and mobile OTT services
  • ADT Caps, a security business
  • IDQ, which specialises in quantum cryptography (security)
  • 11st, an open market platform for ecommerce
  • SK Hynix, which manufactures memory semiconductors

Few of the subsidiaries are owned outright by SKT; it believes the presence of other shareholders can provide a useful source of further investment and, in some cases, expertise.

SKT was originally the mobile arm of KT, the national operator. It was privatised soon after establishing a cellular mobile network and subsequently acquired by SK Group, a major chaebol with interests in energy and oil, which now has a 27% shareholding. The government pension service owns an 11% share in SKT, Citibank 10%, and 9% is held by SKT itself. The chairman of SK Group has a personal holding in SK Telecom.

Following this introduction, the report comprises three main sections:

  • SK Telecom’s business strategy: range of activities, services, promotions, alliances, joint ventures, investments, which covers:
    • Mobile 5G, Edge and vertical industry applications, 6G
    • AI and applications, including NUGU and Smart Homes
    • New strategic business areas, comprising Media, Security, eCommerce, and other areas such as mobility
  • Business performance
  • Industrial and national context.


Overview of SKT’s activities

Network coverage

SK Telecom has been one of the earliest and most active telcos to deploy a 5G network. It initially created 70 5G clusters in key commercial districts and densely populated areas to ensure a level of coverage suitable for augmented reality (AR) and virtual reality (VR) and plans to increase the number to 240 in 2020. It has paid particular attention to mobile (or multi-access) edge computing (MEC) applications for different vertical industry sectors and plans to build 5G MEC centres in 12 different locations across Korea. For its nationwide 5G Edge cloud service it is working with AWS and Microsoft.

In recognition of the constraints imposed by the spectrum used by 5G, it is also working to ensure good indoor 5G coverage in some 2,000 buildings – including airports, department stores and large shopping malls as well as small-to-medium-sized buildings – using distributed antenna systems (DAS) or its in-house developed indoor 5G repeaters. It is working with Deutsche Telekom on trials of the repeaters in Germany. In addition, it has already initiated activities in 6G, an indication of the seriousness with which it is addressing the mobile market.

NUGU, the AI platform

SKT launched its own AI-driven smart speaker, NUGU, in 2016/17, and is using it to support consumer applications such as Smart Home and IPTV. There are now eight versions of NUGU for consumers, and it also serves as a platform for other applications. More recently, SKT has developed several NUGU/AI applications for businesses and civil authorities in conjunction with 5G deployments. It also has an AI-based network management system named Tango.

Although NUGU initially performed well in the market, it seems likely that the subsequent launch of smart speakers by major global players such as Amazon and Google has had a strong negative impact on the product’s recent growth. The absence of published data supports this view, since the company often only reports good news, unless required by law. SK Telecom has responded by developing variants of NUGU for children and other specialist markets and making use of the NUGU AI platform for a variety of smart applications. In the absence of published information, it is not possible to form a view on the success of the NUGU variants, although the intent appears to be to attract young users and build on their brand loyalty.

It has offered smart home products and services since 2015/16. Its smart home portfolio has continually developed in conjunction with an increasing range of partners and is widely recognised as one of the two most comprehensive offerings globally, the other being Deutsche Telekom’s Qivicon. The service appears to be most successful in penetrating the new-build market through property developers.

NUGU is also an AI platform, which is used to support business applications. SK Telecom has also supported the SK Group by providing new AI/5G solutions and opening APIs to other subsidiaries including SK Hynix. Within the SK Group, SK Planet, a subsidiary of SK Telecom, is active in internet platform development and offers development of applications based on NUGU as a service.

Smart solutions for enterprises

SKT continues to experiment with and trial new applications which build on its 5G and AI applications for individuals (B2C), businesses and the public sector. During 2019 it established B2B applications, making use of 5G, on-prem edge computing, and AI, including:

  • Smart factory (real-time process control and quality control)
  • Smart distribution and robot control
  • Smart office (security/access control, virtual docking, AR/VR conferencing)
  • Smart hospital (NUGU for voice commands by patients, AR-based indoor navigation, facial recognition for medical workers to improve security, and a possible use of quantum cryptography in the hospital network)
  • Smart cities, e.g. an intelligent transportation system in Seoul, with links to vehicles via 5G, or SK Telecom’s T-Map navigation service for non-5G users.

It is too early to judge whether these B2B smart applications are a success, and we will continue to monitor progress.

Acquisition strategy

SK Telecom has been growing these new business areas over the past few years, both organically and by acquisition. Its entry into the security business has been entirely by acquisition, buying in new revenue to compensate for that lost in the core mobile business. It is too early to assess what the ongoing impact and success of these businesses will be as part of SK Telecom.

Acquisitions in general have a mixed record of success. SK Telecom’s usual approach – acquiring a controlling interest and investing in its acquisitions while keeping them as separate businesses – is one which, together with the right management approach from the parent, often causes the least disruption to the acquired business and therefore increases the likelihood of longer-term success. It also allows for investment from other sources, reducing the cost and risk to SK Telecom as the acquiring company. As a counterpoint, however, M&A in this style does little to change practices in the rest of the business.

However, it has also shown willingness to change its position as and when appropriate, either by sale, or by a change in investment strategy. For example, through its subsidiary SK Planet, it acquired Shopkick, a shopping loyalty rewards business in 2014, but sold it in 2019, for the price it paid for it. It took a different approach to its activity in quantum technologies, originally set up in-house in 2011, which it rolled into IDQ following its acquisition in 2018.

SKT has also recently entered into partnerships and agreements concerning the following areas of business:

 

Table of Contents

  • Executive Summary
  • Introduction and overview
    • Overview of SKT’s activities
  • Business strategy and structure
    • Strategy and lessons
    • 5G deployment
    • Vertical industry applications
    • AI
    • SK Telecom ‘New Business’ and other areas
  • Business performance
    • Financial results
    • Competitive environment
  • Industry and national context
    • International context


Fixed wireless access growth: To 20% homes by 2025

Download the additional file on the left for the PPT chart pack accompanying this report

Fixed wireless access growth forecast

Fixed Wireless Access (FWA) networks use a wireless “last mile” link for the final connection of a broadband service to homes and businesses, rather than a copper, fibre or coaxial cable into the building. Provided mostly by WISPs (Wireless Internet Service Providers) or mobile network operators (MNOs), these services come in a wide range of speeds, prices and technology architectures.

Some FWA services are just a short “drop” from a nearby pole or fibre-fed hub, while others can work over distances of several kilometres or more in rural and remote areas, sometimes with base station sites backhauled by additional wireless links. WISPs can either be independent specialists, or traditional fixed/cable operators extending reach into areas they cannot economically cover with wired broadband.
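The range claims above can be sanity-checked with a free-space path loss calculation. This sketch ignores terrain, foliage and fading (real link budgets allow for far more loss), so it is an optimistic bound, but it shows why the lower frequency bands favoured by rural WISPs carry much further than mid-band or mmWave for the same link budget. The band choices are illustrative.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Loss over a 5 km link at three illustrative bands
for f_mhz in (900, 3500, 28000):
    print(f"{f_mhz} MHz over 5 km: {fspl_db(5, f_mhz):.0f} dB free-space loss")
```

Each step up in frequency costs roughly 10-20 dB at the same distance, which must be recovered through antenna gain, transmit power or shorter range.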

There is a fair amount of definitional vagueness about FWA. The most expansive definitions include cheap mobile hotspots (“Mi-Fi” devices) used in homes, or various types of enterprise IoT gateway, both of which could easily be classified in other market segments. Most service providers don’t give separate breakouts of deployments, while regulators and other industry bodies report patchy and largely inconsistent data.

Our view is that FWA is firstly about providing permanent broadband access to a specific location or premises. Primarily, this is for residential wireless access to the Internet and sometimes typical telco-provided services such as IPTV and voice telephony. In a business context, there may be a mix of wireless Internet access and connectivity to corporate networks such as VPNs, again provided to a specific location or building.

A subset of FWA relates to M2M usage, for instance private networks run by utility companies for controlling grid assets in the field. These are typically not Internet-connected at all, and so don’t fit most observers’ general definition of “broadband access”.

Usually, FWA is marketed as a specific service and package by a network provider, typically including the terminal equipment (“CPE” – customer premise equipment), rather than allowing the user to “bring their own” device. That said, lower-end (especially 4G) offers may be SIM-only deals intended to be used with generic (and unmanaged) portable hotspots.

There are some examples of private network FWA, such as a large caravan or trailer park with wireless access provided from a central point, and perhaps in future municipal or enterprise cellular networks giving fixed access to particular tenant structures on-site – for instance to hangars at an airport.


FWA today

Today, FWA accounts for perhaps 8-9% of broadband connections globally, although this share varies significantly by definition, country and region. There are various use cases (see below), but generally FWA is deployed in areas without good fixed broadband options, or by mobile-only operators trying to add an additional fixed revenue stream where they have spare capacity.
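The headline forecast in this report's title implies a brisk growth rate. Taking 8.5% as a midpoint of the 8-9% range, and treating share of connections as a rough proxy for share of homes, a quick calculation (illustrative only, not STL Partners' forecasting methodology) gives the compound annual growth in share needed to reach 20% by 2025:

```python
# Share of broadband connections today (assumed midpoint of the 8-9% range)
start_share = 0.085
end_share = 0.20     # the 2025 forecast in this report's title
years = 5

cagr = (end_share / start_share) ** (1 / years) - 1
print(f"Implied growth in FWA share: {cagr:.1%} per year")
```

That works out to a little under 19% per year: sustained, rapid share gains rather than incremental drift, which is why the 4G/5G-driven changes discussed below matter.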

Fixed wireless internet access fits specific sectors and uses, rather than the overall market

FWA Use Cases

Source: STL Partners

FWA has traditionally been used in sparsely populated rural areas, where the economics of fixed broadband are untenable, especially in developing markets without existing fibre transport to towns and villages, or even copper in residential areas. Such networks have typically used unlicensed frequency bands, as there is limited interference – and little financial justification for expensive spectrum purchases. In most cases, such deployments use proprietary variants of Wi-Fi, or its ill-fated 2010-era sibling WiMAX.

Increasingly however, FWA is being used in more urban settings, and in more developed market scenarios – for example during the phase-out of older xDSL broadband, or in places with limited or no competition between fixed-network providers. Some cellular networks primarily intended for mobile broadband (MBB) have been used for fixed usage as well, especially if spare capacity has been available. 4G has already catalysed rapid growth of FWA in numerous markets, such as South Africa, Japan, Sri Lanka, Italy and the Philippines – and 5G is likely to make a further big difference in coming years. These mostly rely on licensed spectrum, typically the national bands owned by major MNOs. In some cases, specific bands are used for FWA use, rather than sharing with normal mobile broadband. This allows appropriate “dimensioning” of network elements, and clearer cost-accounting for management.

Historically, most FWA has required an external antenna and professional installation on each individual house, although it also gets deployed for multi-dwelling units (MDUs, i.e. apartment blocks) as well as some non-residential premises like shops and schools. More recently, self-installed indoor CPE with varying levels of price and sophistication has helped broaden the market, enabling customers to get terminals at retail stores or delivered direct to their home for immediate use.

Looking forward, the arrival of 5G mass-market equipment and larger swathes of mmWave and new mid-band spectrum – both licensed and unlicensed – is changing the landscape again, with the potential for fibre-rivalling speeds, sometimes at gigabit-grade.


Table of contents

  • Executive Summary
  • Introduction
    • FWA today
    • Universal broadband as a goal
    • What’s changed in recent years?
    • What’s changed because of the pandemic?
  • The FWA market and use cases
    • Niche or mainstream? National or local?
    • Targeting key applications / user groups
  • FWA technology evolution
    • A broad array of options
    • Wi-Fi, WiMAX and close relatives
    • Using a mobile-primary network for FWA
    • 4G and 5G for WISPs
    • Other FWA options
    • Customer premise equipment: indoor or outdoor?
    • Spectrum implications and options
  • The new FWA value chain
    • Can MNOs use FWA to enter the fixed broadband market?
    • Reinventing the WISPs
    • Other value chain participants
    • Is satellite a rival waiting in the wings?
  • Commercial models and packages
    • Typical pricing and packages
    • Example FWA operators and plans
  • STL’s FWA market forecasts
    • Quantitative market sizing and forecast
    • High level market forecast
  • Conclusions
    • What will 5G deliver – and when and where?
  • Index

Open RAN: What should telcos do?


Related webinar: Open RAN: What should telcos do?

In this webinar STL Partners addressed the three most important sub-components of Open RAN (open-RAN, vRAN and C-RAN) and how they interact to enable a new, virtualized, less vendor-dominated RAN ecosystem. The webinar covered:

* Why Open RAN matters – and why it will be about 4G (not 5G) in the short term
* Data-led overview of existing Open RAN initiatives and challenges
* Our recommended deployment strategies for operators
* What the vendors are up to – and how we expect that to change

Date: Tuesday 4th August 2020
Time: 4pm GMT

Access the video recording and presentation slides


For the report chart pack download the additional file on the left

What is the open RAN and why does it matter?

‘The open RAN’ encompasses a group of technological approaches that are designed to make the radio access network (RAN) more cost effective and flexible. It involves a shift away from traditional, proprietary radio hardware and network architectures, driven by single vendors, towards new, virtualised platforms and a more open vendor ecosystem.

Legacy RAN: single-vendor and inflexible

The traditional, legacy radio access network (RAN) uses dedicated hardware to deliver the baseband function (modulation and management of the frequency range used for cellular network transmission), along with proprietary interfaces (typically based on the Common Public Radio Interface (CPRI) standard) for the fronthaul from the baseband unit (BBU) to the remote radio unit (RRU) at the top of the transmitter mast.

Figure 1: Legacy RAN architecture

Source: STL Partners

This means that, typically, telcos have needed to buy the baseband and the radio from a single vendor, with the market presently dominated largely by the ‘big three’ (Ericsson, Huawei and Nokia), together with a smaller market share for Samsung and ZTE.
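The cost of this tightly coupled design shows up in the fronthaul itself. A back-of-envelope calculation for the commonly cited LTE 20 MHz CPRI case (sample rate and bit widths per the CPRI specification; overheads approximate) illustrates why fibre-grade fronthaul is needed for every radio:

```python
def cpri_rate_gbps(sample_rate_msps=30.72,  # LTE 20 MHz I/Q sample rate
                   bits_per_sample=15,      # per I and per Q component
                   antennas=2,
                   control_overhead=16/15,  # CPRI control words
                   line_coding=10/8):       # 8b/10b line encoding
    """Approximate CPRI fronthaul line rate in Gbps."""
    payload_bps = sample_rate_msps * 1e6 * 2 * bits_per_sample * antennas
    return payload_bps * control_overhead * line_coding / 1e9

print(f"{cpri_rate_gbps():.2f} Gbps")
```

Roughly 2.5 Gbps for just two antennas at 20 MHz. The rate scales linearly with antennas and bandwidth, which is one reason CPRI becomes untenable for massive-MIMO 5G and why open fronthaul interfaces adopt different functional splits.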

The architecture of the legacy RAN – with BBUs typically but not always at every cell site – has many limitations:

  • It is resource-intensive and energy-inefficient – employing a mass of redundant equipment operating at well below capacity most of the time, while consuming a lot of power
  • It is expensive, as telcos are obliged to purchase and operate a large inventory of physical kit from a limited number of suppliers, which keeps the prices high
  • It is inflexible, as telcos are unable to deploy to new and varied sites – e.g. macro-cells, small cells and micro-cells with different radios and frequency ranges – in an agile and cost-effective manner
  • It is more costly to manage and maintain, as there is less automation and more physical kit to support, requiring personnel to be sent out to remote sites
  • It is not very programmable to support the varied frequency, latency and bandwidth demands of different services.


Moving to the open RAN: C-RAN, vRAN and open-RAN

There are now many distinct technologies and standards emerging in the radio access space that involve a shift away from traditional, proprietary radio hardware and network architectures, driven by single vendors, towards new, virtualised platforms and a more open vendor ecosystem.

We have adopted ‘the open RAN’ as an umbrella term which encompasses all of these technologies. Together, they are expected to make the RAN more cost effective and flexible. The three most important sub-components of the open RAN are C-RAN, vRAN and open-RAN.

Centralised RAN (C-RAN), also known as cloud RAN, involves distributing and centralising the baseband functionality across different telco edge, aggregation and core locations, and in the telco cloud, so that baseband processing for multiple sites can be carried out in different locations, nearer or further to the end user.

This enables more effective control and programming of capacity, latency, spectrum usage and service quality, including in support of 5G core-enabled technologies and services such as network slicing, URLLC, etc. In particular, baseband functionality can be split between more centralised sites (central baseband units – CU) and more distributed sites (distributed unit – DU) in much the same way, and for a similar purpose, as the split between centralised control planes and distributed user planes in the mobile core, as illustrated below:

Figure 2: Centralised RAN (C-RAN) architecture

Cloud RAN architecture

Source: STL Partners

Virtual RAN (vRAN) involves virtualising (and now also containerising) the BBU so that it is run as software on generic hardware (General Purpose Processing – GPP) platforms. This enables the baseband software and hardware, and even different components of them, to be supplied by different vendors.

Figure 3: Virtual RAN (vRAN) architecture

vRAN architecture

Source: STL Partners

Open-RAN – note the hyphenation – involves replacing the vendor-proprietary interfaces between the BBU and the RRU with open standards. This enables BBUs (and parts thereof) from one or multiple vendors to interoperate with radios from other vendors, resulting in a fully disaggregated RAN:

Figure 4: Open-RAN architecture

Open-RAN architecture

Source: STL Partners

 

RAN terminology: clearing up confusion

You will have noticed that the technologies above have similar-sounding names and overlapping definitions. To add to potential confusion, they are often deployed together.

Figure 5: The open RAN Venn – How C-RAN, vRAN and open-RAN fit together

Open-RAN venn: open-RAN inside vRAN inside C-RAN

Source: STL Partners

As the above diagram illustrates, all forms of the open RAN involve C-RAN, but only a subset of C-RAN involves virtualisation of the baseband function (vRAN); and only a subset of vRAN involves disaggregation of the BBU and RRU (open-RAN).

To help eliminate ambiguity we are adopting the typographical convention ‘open-RAN’ to convey the narrower meaning: disaggregation of the BBU and RRU facilitated by open interfaces. Similarly, where we are dealing with deployments or architectures that involve vRAN and / or cloud RAN but not open-RAN in the narrower sense, we refer to those examples as ‘vRAN’ or ‘C-RAN’ as appropriate.
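The nesting described above (open-RAN implies vRAN, which implies C-RAN) can be captured as a small classification sketch. The function and field names are ours, purely illustrative:

```python
def classify_ran(centralised, virtualised_bbu, open_interfaces):
    """Return the narrowest label that fits, per the open RAN Venn diagram."""
    if open_interfaces:
        # Per the Venn: open-RAN is a subset of vRAN, which is a subset of C-RAN
        assert virtualised_bbu and centralised, "open-RAN implies vRAN and C-RAN"
        return "open-RAN"
    if virtualised_bbu:
        assert centralised, "vRAN implies C-RAN"
        return "vRAN"
    if centralised:
        return "C-RAN"
    return "legacy RAN"

print(classify_ran(True, True, False))  # prints "vRAN"
```

A deployment that centralises and virtualises the baseband but keeps proprietary BBU-RRU interfaces is thus ‘vRAN’ in our terminology, not ‘open-RAN’.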

In the coming pages, we will investigate why open RAN matters, what telcos are doing about it – and what they should do next.

Table of contents

  • Executive summary
  • What is the open RAN and why does it matter?
    • Legacy RAN: single-vendor and inflexible
    • The open RAN: disaggregated and flexible
    • Terminology, initiatives & standards: clearing up confusion
  • What are the opportunities for open RAN?
    • Deployment in macro networks
    • Deployment in greenfield networks
    • Deployment in geographically-dispersed/under-served areas
    • Deployment to support consolidation of radio generations
    • Deployment to support capacity and coverage build-out
    • Deployment to support private and neutral host networks
  • How have operators deployed open RAN?
    • What are the operators doing?
    • How successful have deployments been?
  • How are vendors approaching open RAN?
    • Challenger RAN vendors: pushing for a revolution
    • Incumbent RAN vendors: resisting the open RAN
    • Are incumbent vendors taking the right approach?
  • How should operators do open RAN?
    • Step 1: Define the roadmap
    • Step 2: Implement
    • Step 3: Measure success
  • Conclusions
    • What next?


5G: Bridging hype, reality and future promises

The 5G situation seems paradoxical

People in China and South Korea are buying 5G phones by the million, far more than initially expected, yet many western telcos are moving cautiously. Will your company also find demand? What’s the smart strategy while uncertainty remains? What actions are needed to lead in the 5G era? What questions must be answered?

New data requires new thinking. STL Partners 5G strategies: Lessons from the early movers presented the situation in late 2019, and in What will make or break 5G growth? we outlined the key drivers and inhibitors for 5G growth. This follow-on report addresses what needs to happen next.

The report is informed by talks with executives of over three dozen companies and email contacts with many more, including 21 of the first 24 telcos who have deployed. This report covers considerations for the next three years (2020–2023) based on what we know today.

“Seize the 5G opportunity” says Ke Ruiwen, Chairman, China Telecom, and Chinese reports claimed 14 million sales by the end of 2019. Korea announced two million subscribers in July 2019 and approached five million by December 2019. By early 2020, the Korean carriers were confident that 30% of the market would be using 5G by the end of 2020. In the US, Verizon is selling 5G phones even in areas without 5G services. With nine phone makers competing for market share, handset prices in China are US$285–$500 and falling, so the price barrier seems to be coming down fast.

Yet in many other markets, operators’ progress is significantly more tentative. So what is going on, and what should you do about it?

Enter your details below to request an extract of the report

5G technology works OK

22 of the first 24 operators to deploy are using mid-band radio frequencies.

Vodafone UK claims “5G will work at average speeds of 150–200 Mbps.” Speeds are typically 100 to 500 Mbps, rarely a gigabit. Latency is about 30 milliseconds, only about a third better than decent 4G. Mid-band reach is excellent. Sprint has demonstrated that simply upgrading existing base stations can provide substantial coverage.

5G has a draft business case now: people want to buy 5G phones. New use cases are mostly years away, but the prospect of better mobile broadband is winning customers. The costs of radios, backhaul, and core are falling as five system vendors – Ericsson, Huawei, Nokia, Samsung, and ZTE – fight for market share. They’ve shipped over 600,000 radios. Many newcomers are gaining traction: for example, Altiostar won a large contract from Rakuten, and Mavenir is in trials with DT.

The high cost of 5G networks is an outdated myth. DT, Orange, Verizon, and AT&T are building 5G while cutting or keeping capex flat. Sprint’s results suggest a smart build can quickly reach half the country without a large increase in capital spending. Instead, the issue for operators is that it requires new spending with uncertain returns.

The technology works, mostly. Mid-band is performing as expected, with typical speeds of 100–500Mbps outdoors, though indoor performance is less certain. mmWave performance indoors is badly degraded. Some SDN, NFV, and other automation tools have reached the field, but many features of SDN and NFV are not yet ready. 5G upstream is in limited use; many carriers are combining 5G downstream with 4G upstream for now. Meanwhile, each 5G base station currently requires much more power than a 4G base station, which drives up opex. Dynamic spectrum sharing, which allows 5G to share unneeded 4G spectrum, is still in test.

So what should companies do? The next sections review go-to-market lessons, status on forward-looking applications, and technical considerations.

Early go-to-market lessons

Don’t oversell 5G

The continuing publicity for 5G is proving powerful, but variable. Because some customers are already convinced they want 5G, marketing and advertising do not always need to emphasise the value of 5G. For those customers, make clear why your company’s offering is the best compared to rivals’. However, the draw of 5G is not universal. Many remain sceptical, especially if their past experience with 4G has been lacklustre. They – and also a minority swayed by alarmist anti-5G rhetoric – will need far more nuanced and persuasive marketing.

Operators should be wary of overclaiming. 5G speed, although impressive, currently has few practical applications that don’t already work well over decent 4G. Fixed home broadband is a possible exception here. As the objective advantages of 5G in the near future are likely to be limited, operators should not hype features that are unrealistic today, no matter how glamorous. If you don’t have concrete selling propositions, do image advertising or use happy customer testimonials.

Table of Contents

  • Executive Summary
  • Introduction
    • 5G technology works OK
  • Early go-to-market lessons
    • Don’t oversell 5G
    • Price to match the experience
    • Deliver a valuable product
    • Concerns about new competition
    • Prepare for possible demand increases
    • The interdependencies of edge and 5G
  • Potential new applications
    • Large now and likely to grow in the 5G era
    • Near-term applications with possible major impact for 5G
    • Mid- and long-term 5G demand drivers
  • Technology choices, in summary
    • Backhaul and transport networks
    • When will 5G SA cores be needed (or available)?
    • 5G security? Nothing is perfect
    • Telco cloud: NFV, SDN, cloud native cores, and beyond
    • AI and automation in 5G
    • Power and heat

Enter your details below to request an extract of the report

Vendors vs. telcos? New plays in enterprise managed services

Digital transformation is reshaping vendors’ and telcos’ offer to enterprises

What does ‘digital transformation’ mean?

The enterprise market for telecoms vendors and operators is being radically reshaped by digital transformation. This transformation is taking place across all industry verticals, not just the telecoms sector, whose digital transformation – desirable or actual – STL Partners has forensically mapped out for several years now.

The term ‘digital transformation’ is so familiar that it breeds contempt in some quarters. Consequently, it is worth taking a while to refresh our thinking on what ‘digital transformation’ actually means. This will in turn help explain how the digital needs and practices of enterprises are impacting on vendors and telcos alike.

The digitisation of enterprises across all sectors can be described as part of a more general social, economic and technological evolution toward ever more far-reaching use of software-, computing- and IP-based modes of: interacting with customers and suppliers; communicating; networking; collaborating; distributing and accessing media content; producing, marketing and selling goods and services; consuming and purchasing those goods and services; and managing money flows across the economy. Indeed, one definition of the term ‘digital’ in this more general sense could simply be ‘software-, computing- and IP-driven or -enabled’.

For the telecoms industry, the digitisation of society and technology in this sense has meant, among other things, the decline of voice (fixed and mobile) as the primary communications service, although it is still the single largest contributor to turnover for many telcos. Voice mediates an ‘analogue’ economy and way of working, in the sense that voice is a form of ‘physical’ communication between two or more persons. In addition, the communications activity (i.e. the actual telephone conversation to discuss project issues) is a process separate from the other work tasks, in different physical locations, that it helps to co-ordinate. By contrast, in an online collaboration session, the communications activity and the work activity are combined in a shared virtual space: the digital service allows for greater integration and synchronisation of tasks previously carried out by physical means, in separate locations, and in a less inherently co-ordinated manner.

Similarly, data in the ATM and Frame Relay era was mainly a means to transport a certain volume of information or files from one work place to another, without joining those work places together as one: the work places remained separate, both physically and in terms of the processes and work activities associated with them. The traditional telecoms network itself reflected the physical economy and processes that it enabled: comprising massive hardware and equipment stacks responsible for shifting huge volumes of voice signals and data packets (so called on the analogy of postal packets) from one physical location to another.

By contrast, with the advent of the digital (software-, computing- and IP-enabled) society and economy, the value carried by communications infrastructure has increasingly shifted from voice and data (as ‘physical’ signals and packets) to that of new modes of always-on, virtual interconnectedness and interactivity that tend towards the goal of eliminating or transcending the physical separation and discontinuity of people, work processes and things.

Examples of this digital transformation of communications, and associated experiences of work and life, could include:

  • As stated above, simple voice communications, in both business and personal life, have been increasingly superseded by ‘real-time’ or near-real-time, one-to-one or one-to-many exchange and sharing of text and audio-visual content across modes of communication such as instant messaging, unified communications (UC), social media (including increasingly in the work place) or collaborative applications enabling simultaneous, multi-party reviewing and editing of documents and files
  • Similarly, location-to-location file transfers in support of discrete, geographically separated business processes are being replaced by centralised storage and processing of, and access to, enterprise data and applications in the cloud
  • These trends mean that, in theory, people can collaborate and ‘meet’ with each other from any location in the world, and the digital service constitutes the virtual activity and medium through which that collaboration takes place
  • Similarly, with the Internet of Things (IoT), physical objects, devices, processes and phenomena generate data that can be transmitted and analysed in ‘real time’, triggering rapid responses and actions directed towards those physical objects and processes based on application logic and machine learning – resulting in more efficient, integrated processes and physical events meeting the needs of businesses and people. In other words, the IoT effectively involves digitising the physical world: disparate physical processes, and the action of diverse physical things and devices, are brought together by software logic and computing around human goals and needs.

‘Virtualisation’ effectively means ‘digital optimisation’

In addition to the cloud and IoT, one of the main effects of enterprise digital transformation on the communications infrastructure has of course been Network Functions Virtualisation (NFV) and Software-Defined Networking (SDN). NFV – the replacement of network functionality previously associated with dedicated hardware appliances by software running on standard compute devices – could also simply be described as the digitisation of telecoms infrastructure: the transformation of networks into software-, computing- and IP-driven (digital) systems that are capable of supporting the functionality underpinning the virtual / digital economy.

This functionality includes things like ultrafast, reliable, scalable and secure routing, processing, analysis and storage of massive but also highly variable data flows across network domains and on a global scale – supporting business processes ranging from ‘mere’ communications and collaboration to co-ordination and management of large-scale critical services, multi-national enterprises, government functions, and complex industrial processes. And meanwhile, the physical, Layer-1 elements of the network have also to become lightning-fast to deliver the massive, ‘real-time’ data flows on which the digital systems and services depend.

Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos

Virtualisation frees vendors from ‘operator lock-in’

Virtualisation has generally been touted as a necessary means for telcos to adapt their networks to support the digital service demands of their customers and, in the enterprise market, to support those customers’ own digital transformations. It has also been advocated as a means for telcos to free themselves from so-called ‘vendor lock-in’: dependency on their network hardware suppliers for maintenance and upgrades to equipment capacity or functionality to support service growth or new product development.

From the other side of the coin, virtualisation could also be seen as a means for vendors to free themselves from ‘operator lock-in’: a dependency on telcos as the primary market for their networking equipment and technology. That is to say, the same dynamic of social and enterprise digitisation, discussed above, has driven vendors to virtualise their own product and service offerings, and to move away from the old business model, which could be described as follows:

  • telcos and their implementation partners purchase hardware from the vendor
  • deploy it at the enterprise customer
  • and then own the business relationship with the enterprise and hold the responsibility for managing the services

By contrast, once the service-enabling technology is based on software and standard compute hardware, this creates opportunities for vendors to market their technology direct to enterprise customers, with which they can in theory take over the supplier-customer relationship.

Of course, many enterprises have continued to own and operate their own private networks and networking equipment, generally supplied to them by vendors. Therefore, vendors marketing their products and services direct to enterprises is not a radical innovation in itself. However, the digitisation / virtualisation of networking technology and of enterprise networks is creating a new competitive dynamic placing vendors in a position to ‘win back’ direct relationships to enterprise customers that they have been serving through the mediation of telcos.

Virtualisation changes the competitive dynamic

Contents:

  • Executive Summary: Digital transformation is changing the rules of the game
  • Digital transformation is reshaping vendors’ and telcos’ offer to enterprises
  • What does ‘digital transformation’ mean?
  • ‘Virtualisation’ effectively means ‘digital optimisation’
  • Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos
  • Vendors and telcos: the business models are changing
  • New vendor plays in enterprise networking: four vendor business models
  • Vendor plays: Nokia, Ericsson, Cisco and IBM
  • Ericsson: changing the bet from telcos to enterprises – and back again?
  • Cisco: Betting on enterprises – while operators need to speed up
  • IBM: Transformation involves not just doing different things but doing things differently
  • Conclusion: Vendors as ‘co-Operators’, ‘co-opetors’ or ‘co-opters’ – but can telcos still set the agenda?
  • How should telcos play it? Four recommendations

Figures:

  • Figure 1: Virtualisation changes the competitive dynamic
  • Figure 2: The telco as primary channel for vendors
  • Figure 3: New direct-to-enterprise opportunities for vendors
  • Figure 4: Vendors as both technology supplier and OTT / operator-type managed services provider
  • Figure 5: Vendors as digital service creators, with telcos as connectivity providers and digital service enablers
  • Figure 6: Vendors as digital service enablers, with telcos as digital service creators / providers
  • Figure 7: Vendor manages communications / networking as part of overall digital transformation focus
  • Figure 8: Nokia as technology supplier and ‘operator-type’ managed services provider
  • Figure 9: Nokia’s cloud-native core network blueprint
  • Figure 10: Nokia WING value chain
  • Figure 11: Ericsson’s model for telcos’ roles in the IoT ecosystem
  • Figure 12: Ericsson generates the value whether operators provide connectivity only or also market the service
  • Figure 13: IBM’s model for telcos as digital service enablers or providers – or both

The ‘Agile Operator’: 5 Key Ways to Meet the Agility Challenge

Understanding Agility

What does ‘Agility’ mean? 

A number of business strategies and industries spring to mind when considering the term ‘agility’ but the telecoms industry is not front and centre… 

Agility describes the ability to change direction and move at speed, whilst maintaining control and balance. This innate flexibility and adaptability aptly describes an athlete, a boxer or a cheetah, yet this description can be (and is) readily applied in a business context. Whilst the telecoms industry is not usually referenced as a model of agility (and is often described as the opposite), a number of business strategies and industries have adopted more ‘agile’ approaches, attempting to simultaneously reduce inefficiencies, maximise the deployment of resources, learn through testing and stimulate innovation. It is worthwhile recapping some of the key ‘agile’ approaches as they inform our and the interviewees’ vision of agility for the telecoms operator.

When introduced, these approaches have helped redefine their respective industries. One of the first business strategies to popularise a more ‘agile’ approach was the famous ‘lean production’ and related ‘just-in-time’ methodology, principally developed by Toyota in the mid-20th century. Toyota placed its focus on reducing waste and streamlining the production process with the mindset of “only what is needed, when it is needed, and in the amount needed,” reshaping the manufacturing industry.

The methodology that perhaps springs to many people’s minds when they hear the word agility is ‘agile software development’. This methodology relies on iterative cycles of rapid prototyping followed by customer validation with increasing cross-functional involvement to develop software products that are tested, evolved and improved repeatedly throughout the development process. This iterative and continuous improvement directly contrasts the waterfall development model where a scripted user acceptance testing phase typically occurs towards the end of the process. The agile approach to development speeds up the process and results in software that meets the end users’ needs more effectively due to continual testing throughout the process.

Figure 5: Agile Software Development

Source: Marinertek.com

More recently, the ‘lean startup’ methodology has become increasingly popular as an innovation strategy. Like agile development, it focuses on iterative testing, applied to business hypotheses and new products rather than software. Through iterative testing and learning, a startup is able to better understand and meet the needs of its users or customers, reducing the inherent risk of failure whilst keeping the required investment to a minimum. The success of high-tech startups has popularised this approach; however, the key principles and lessons are applicable not only to startups but also to established companies.

Despite the fact that (most of) these methodologies or philosophies have existed for a long time, they have not been adopted consistently across all industries. The digital or internet industry was built on these ‘agile’ principles, whereas the telecoms industry has sought to emulate this by adopting agile models and methods. Of course these two industries differ in nature and there will inevitably be constraints that affect the ability to be agile across different industries (e.g. the long planning and investment cycles required to build network infrastructure) yet these principles can broadly be applied more universally, underwriting a more effective way of working.

This report highlights the benefits and challenges of becoming more ‘agile’ and sets out the operator’s perspective of ‘agility’ across a number of key domains. This vision of the ‘Agile Operator’ was captured through 29 interviews with senior telecoms executives and is supplemented by STL analysis and research.

Barriers to (telco) agility 

…The telecoms industry is hindered by legacy systems, rigid organisational structures and cultural issues…

It is well known that the telecoms industry is hampered by legacy systems: systems that may have been originally deployed 5–20 years ago and are functionally limited. Coordinating across these legacy systems impedes a telco’s ability to innovate and customise product offerings, or to obtain a complete view of customers. In addition to legacy system challenges, interview participants outlined a number of other key barriers to becoming more agile. Three principal barriers emerged:

  1. Legacy systems
  2. Mindset & Culture
  3. Organisational Structure & Internal Processes

Legacy Systems 

One of the main barriers to achieving greater agility (and the one most often voiced by interviewees) is legacy systems. Dealing with legacy IT systems and technology can be very cumbersome and time-consuming, as they are typically not built to be developed further in an agile way. Even seemingly simple change requests end up in development queues that stretch out many months (often years). Operators therefore remain locked in to the same limited core capabilities and options, which in turn stymies innovation and agility.

The inability to modify a process, a pricing plan or to easily on/off-board a 3rd-party product has significant ramifications for how agile a company can be. It can directly limit innovation within the product development process and indirectly diminish employees’ appetite for innovation.

It is often the case that operators are forced to find ‘workarounds’ to launch new products and services. These workarounds can be practical and innovative, yet they are often crude manipulations of the existing capabilities. They are therefore limited in terms of what they can do and of the information that can be captured for reporting and for learning in new product development. They may also create additional technical challenges when migrating the ‘workaround’ product or service to a new system.

Figure 6: What’s Stopping Telco Agility?

Source: STL Partners

Mindset & Culture

The historic (incumbent) telco culture, born out of public sector ownership, is the opposite of an ‘agile’ mindset. It put in place rigid controls and structures, diluted accountability and stymied enthusiasm for innovation – the model was built to maintain and scale the status quo. For a long time the industry invested in the technology and capabilities aligned to this approach, with notable success. As technology advanced (e.g. ever-improving feature phones and mobile data) this approach served telcos well, enhancing their offerings, which in turn further entrenched this mindset and culture. However, as technology has advanced even further (e.g. the internet, smartphones), this focus on proven development models has left telcos slow to address key opportunities in the digital and mobile internet ecosystems. They now face a marketplace of thriving competition, constant disruption and rapid technological advancement.

This classic telco mindset also emphasised “technical” product development and specifications rather than the user experience. It was (and still is) commonplace for telcos to invest heavily upfront in the creation of relatively untested products and services and then to let the product run its course, rather than alter and improve it throughout its life.

Whilst this mindset has changed or is changing across the industry, interviewees felt that the mindset and culture has still not moved far enough. Indeed many respondents indicated that this was still the main barrier to agility. Generally they felt that telcos did not operate with a mindset that was conducive to agile practices and this contributed to their inability to compete effectively against the internet players and to provide the levels of service that customers are beginning to expect. 

Organisational Structure & Internal Processes

Organisational structure and internal processes are closely linked to the overall culture and mindset of an organisation, so it is no surprise that interviewees also noted this aspect as a key barrier to agility. Interviewees felt that the typical (functionally-orientated) organisational structure hinders their companies’ ability to be agile: there is a team for sales, a team for marketing, a team for product development, a network team, a billing team, a provisioning team, an IT team, a customer care team, a legal team, a security team, a privacy team, several compliance teams, etc. This functional set-up, whilst useful for ramping up and managing an established product, clearly hinders a more agile approach to developing new products and services through understanding customer needs and testing adoption and behaviour. With this set-up, no one in particular has a full overview of the whole process, and no one is therefore able to understand the different dimensions, constraints, usage and experience of the product or service.

Furthermore, having these discrete teams makes it hard to collaborate efficiently – each team’s focus is to complete their own tasks, not to work collaboratively. Indeed some of the interviewees blamed the organisational structure for creating a layer of ‘middle management’ that does not have a clear understanding of the commercial pressures facing the organisation, a route to address potential opportunities nor an incentive to work outside their teams. This leads to teams working in silos and to a lack of information sharing across the organisation.

A rigid mindset begets a rigid organisational structure which in turn leads to the entrenchment of inflexible internal processes. Interviewees saw internal processes as a key barrier, indicating that within their organisation and across the industry in general internal decision-making is too slow and bureaucratic.

 

Interviewees noted that there were too many checks and processes to go through when making decisions and often new ideas or opportunities fell outside the scope of priority activities. Interviewees highlighted project management planning as an example of the lack of agility; most telcos operate against 1-2 year project plans (with associated budgeting). Typically the budget is locked in for the year (or longer), preventing the re-allocation of financing towards an opportunity that arises during this period. This inflexibility prevents telcos from quickly capitalising on potential opportunities and from (re-)allocating resources more efficiently.

  • Executive Summary
  • Understanding Agility
  • What does ‘Agility’ mean?
  • Barriers to (telco) agility
  • “Agility” is an aspiration that resonates with operators
  • Where is it important to be agile?
  • The Telco Agility Framework
  • Organisational Agility
  • The Agile Organisation
  • Recommended Actions: Becoming the ‘Agile’ Organisation
  • Network Agility
  • A Flexible & Scalable Virtualised Network
  • Recommended Actions: The Journey to the ‘Agile Network’
  • Service Agility
  • Fast & Reactive New Service Creation & Modification
  • Recommended Actions: Developing More-relevant Services at Faster Timescales
  • Customer Agility
  • Understand and Make it Easy for your Customers
  • Recommended Actions: Understand your Customers and Empower them to Manage & Customise their Own Service
  • Partnering Agility
  • Open and Ready for Partnering
  • Recommended Actions: Become an Effective Partner
  • Conclusion

 

  • Figure 1: Regional & Functional Breakdown of Interviewees
  • Figure 2: The Barriers to Telco Agility
  • Figure 3: The Telco Agility Framework
  • Figure 4: The Agile Organisation
  • Figure 5: Agile Software Development
  • Figure 6: What’s Stopping Telco Agility?
  • Figure 7: The Importance of Agility
  • Figure 8: The Drivers & Barriers of Agility
  • Figure 9: The Telco Agility Framework
  • Figure 10: The Agile Organisation
  • Figure 11: Organisational Structure: Functional vs. Customer-Segmented
  • Figure 12: How Google Works – Small, Open Teams
  • Figure 13: How Google Works – Failing Well
  • Figure 14: NFV managed by SDN
  • Figure 15: Using Big Data Analytics to Predictively Cache Content
  • Figure 16: Three Steps to Network Agility
  • Figure 17: Launch with the Minimum Viable Proposition – Gmail
  • Figure 18: The Key Components of Customer Agility
  • Figure 19: Using Network Analytics to Prioritise High Value Applications
  • Figure 20: Knowing When to Partner
  • Figure 21: The Telco Agility Framework

The Internet of Things: Impact on M2M, where it’s going, and what to do about it?

Introduction

From RFID in the supply chain to M2M today

The ‘Internet of Things’ first appeared as a marketing term in 1999 when it was applied to improved supply-chain strategies, leveraging the then hot-topics of RFID and the Internet.

Industrial engineers planned to use miniaturised RFID tags to track many different types of asset, especially relatively low-cost ones. However, the tags’ dependency on accessible RFID readers constrained their zonal range. This also confined many such applications to the enterprise sector, within a well-defined geographic footprint.

Modern versions of RFID labelling have expanded the addressable market through barcode and digital watermarking approaches, for example, while mobile has largely removed the zonal constraint. In fact, mobile’s economies of scale have ushered in a relatively low-cost technology building block in the form of radio modules with local processing capability. These modules allow machines and sensors to be monitored and remotely managed over mobile networks. This is essentially the M2M market today.

M2M remained a specialist, enterprise-sector application for a long time. It relied on niche systems integration and hardware development companies, often delivering one-off or small-scale deployments. For many years, growth in the M2M market did not meet expectations, and this is visible in analyst forecasts, which repeatedly time-shifted the adoption curve. Figure 1 below, for example, illustrates successive M2M forecasts from the 2005–08 period (before M2M began to take off) as analysts tried to predict when M2M module shipment volumes would breach the 100m units/year hurdle:

Figure 1: Historical analyst forecasts of annual M2M module shipment volumes

Source: STL Partners, More With Mobile

Although the potential of remote connectivity was recognised, it did not become a high-volume market until the GSMA brought about an alignment of interests, across mobile operators, chip- and module-vendors, and enterprise users by targeting mobile applications in adjacent markets.

The GSMA’s original Embedded Mobile market development campaign made the case that connecting devices and sensors to (Internet) applications would drive significant new use cases and sources of value. However, in order to supply economically viable connected devices, the cost of embedding connectivity had to drop. This meant:

  • Educating the market about new opportunities in order to stimulate latent demand
  • Streamlining design practices to eliminate many layers of implementation costs
  • Promoting adoption in high-volume markets such as automotive, consumer health and smart utilities, for example, to drive economies of scale in the same manner that led to the mass-adoption of mobile phones

The late 2000s proved to be a turning point for M2M, with the market now achieving scale (c. 189m connections globally as of January 2014) and growing at an impressive rate (c. 40% per annum).

From M2M to the Internet of Things?

Over the past 5 years, companies such as Cisco, Ericsson and Huawei have begun promoting radically different market visions to those of ‘traditional M2M’. These include the ‘Internet of Everything’ (that’s Cisco), a ‘Networked Society’ with 50 billion cellular devices (that’s Ericsson), and a ‘Cellular IoT’ with 100 billion devices (that’s Huawei).

Figure 2: Ericsson’s Promise: 50 billion connected ‘things’ by 2020

Source: Ericsson

Ericsson’s calculation builds on the idea that there will be 3 billion “middle class consumers”, each with 10 M2M devices, plus personal smartphones, industrial, and enterprise devices. In promoting such visions, the different market evangelists have shifted market terminology away from M2M and towards the Internet of Things (‘IoT’).
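Ericsson's arithmetic can be checked with a back-of-the-envelope sketch. The 30 billion consumer-device core follows directly from the figures above; the lumping of the remaining 20 billion into smartphones, industrial and enterprise devices is our reading of the claim, not Ericsson's published breakdown:

```python
# Back-of-the-envelope check of the '50 billion by 2020' vision.
middle_class_consumers = 3e9     # Ericsson's assumption
devices_per_consumer = 10        # M2M devices per middle-class consumer

consumer_m2m = middle_class_consumers * devices_per_consumer
other = 50e9 - consumer_m2m      # smartphones, industrial, enterprise (implied)

print(f"Consumer M2M: {consumer_m2m/1e9:.0f}bn, other: {other/1e9:.0f}bn")
```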

The transition towards IoT has also had consequences beyond terminology. Whereas M2M applications were previously associated with internal-to-business, operational improvements, IoT offers far more external market prospects. In other words, connected devices allow a company to interact with its customers beyond its strict operational boundaries. In addition, standalone products can now deliver one or more connected services: for example, a connected bus can report on its mechanical status, for maintenance purposes, as well as its location to deliver a higher quality, transit service.

Another consequence of the rise of IoT relates to the way that projects are evaluated. In the past, M2M applications tended to be justified on RoI criteria. Nowadays, there is a broader commercial recognition that IoT opens up new avenues of innovation, efficiency gains and alternative sources of revenue: it was this recognition, for example, that drove Google's $3.2 billion acquisition of Nest (see the Connected Home EB).

In contrast to RFID, the M2M market required companies in different parts of the value chain to share a common vision of a lower-cost, higher-volume future across many different industry verticals. The approach that allowed the mobile industry to scale the M2M market now needs to adapt to an IoT world. Before examining what these changes imply, let us first review the M2M market today, how M2M service providers have adapted their business models, and where this positions them for future IoT opportunities.

M2M Today: Geographies, Verticals and New Business Models

Headline: M2M is now an important growth area for MNOs

The M2M market has now evolved into a high volume and highly competitive business, with leading telecoms operators and other service providers (so-called ‘M2M MVNOs’ e.g. KORE, Wyless) providing millions of cellular (and fixed) M2M connections across numerous verticals and applications.

Specifically, 428 MNOs were offering M2M services across 187 countries by January 2014 – 40% of mobile network operators – and providing 189 million cellular connections. The GSMA estimates the number of global connections to be growing by about 40% per annum. Figure 3 below shows that as of Q4 2013 China Mobile was the largest player by connections (32 million), with AT&T second largest but only half the size.
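The headline growth rate compounds quickly. As a rough illustration (a naive constant-rate extrapolation on our part, not a GSMA forecast), holding 40% per annum steady would carry the January 2014 base past one billion connections within five years:

```python
# Simple compound projection of the Jan 2014 connection base at the
# GSMA's reported growth rate (our extrapolation, not a GSMA forecast).
base_connections_m = 189   # cellular M2M connections, millions, Jan 2014
annual_growth = 0.40

projection = {}
for year in range(1, 6):
    projection[2014 + year] = base_connections_m * (1 + annual_growth) ** year

for year, conns in projection.items():
    print(f"Jan {year}: ~{conns:.0f}m connections")
```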

Figure 3: Selected leading service providers by cellular M2M connections, Q4 2013

 

Source: Various, including GSMA and company accounts, STL Partners, More With Mobile

Unsurprisingly, these millions of connections have also translated into material revenues for service providers. Although MNOs typically do not report M2M revenues (and many do not even report connections), Verizon disclosed on its most recent earnings call that it generated $586m in ‘M2M and telematics’ revenues during 2014, up 47% year-on-year. Analysis from the Telco 2.0 Transformation Index also estimates that Vodafone Group generated $420m in M2M revenues during its 2013/14 (March to March) financial year.

However, these numbers need to be put in context: whilst $500m growing 40% YoY is encouraging, this still represents only a small percentage of these telcos’ revenues – c. 0.5% in the case of Vodafone, for example.

Figure 4: Vodafone Group enterprise revenues, implied forecast, FY 2012-18

 

Source: Company accounts, STL Partners, More With Mobile

Figure 4 takes the breakdown of Vodafone’s enterprise line of business that the company provided during 2013 and grows each segment at the rate at which Vodafone forecasts the market (within its footprint) to grow over the next five years – 20% YoY revenue growth for M2M, for example. Whilst only indicative, Figure 4 demonstrates that telcos need to sustain high levels of growth over the medium to long term, and offer complementary value-added services, if M2M is to have a significant impact on their headline revenues.
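The mechanics behind Figure 4 can be sketched for the M2M line alone. The $420m base is the Transformation Index estimate quoted above, and applying Vodafone's 20% market growth forecast uniformly each year is a simplification of ours:

```python
# Indicative recreation of the Figure 4 approach for M2M only: grow the
# estimated FY2013/14 base at Vodafone's forecast market growth rate.
revenue_m = 420.0          # estimated M2M revenue, $m, FY2013/14
market_growth = 0.20       # Vodafone's forecast, % p.a.

for fy in ("2014/15", "2015/16", "2016/17", "2017/18"):
    revenue_m *= 1 + market_growth
    print(f"FY{fy}: ~${revenue_m:.0f}m")
```

Even four years of sustained 20% growth only roughly doubles the line, which is why the report argues that complementary value-added services are needed for a material impact on headline revenues.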

To do this, telcos essentially have three ways to refine or change their business model:

  1. Improve their existing M2M operations: e.g. new organisational structures and processes
  2. Move into new areas of M2M: e.g. expansion along the value chain; new verticals/geographies
  3. Explore the Internet of Things: e.g. new service innovation across verticals and including consumer-intensive segments (e.g. the connected home)

To provide further context, the following section examines where M2M has focused to date (geographically and by vertical). This is followed by an analysis of specific telco activities in 1, 2 and 3.

 

  • Executive Summary
  • Introduction
  • From RFID in the supply chain to M2M today
  • From M2M to the Internet of Things?
  • M2M Today: Geographies, Verticals and New Business Models
  • Headline: M2M is now an important growth area for MNOs
  • In-depth: M2M is being driven by specific geographies and verticals
  • New Business Models: Value network innovation and new service offerings
  • The Emerging IoT: Outsiders are raising the opportunity stakes
  • The business models and profitability potentials of M2M and IoT are radically different
  • IoT shifts the focus from devices and connectivity to data and its use in applications
  • New service opportunities drive IoT value chain innovation
  • New entrants recognise the IoT-M2M distinction
  • IoT is not the end-game
  • ‘Digital’ and IoT convergence will drive further innovation and new business models
  • Implications for Operators
  • About STL Partners and Telco 2.0: Change the Game
  • About More With Mobile

 

  • Figure 1: Historical analyst forecasts of annual M2M module shipment volumes
  • Figure 2: Ericsson’s Promise: 50 billion connected ‘things’ by 2020
  • Figure 3: Selected leading service providers by cellular M2M connections, Q4 2013
  • Figure 4: Vodafone Group enterprise revenues, implied forecast, FY 2012-18
  • Figure 5: M2M market penetration vs. growth by geographic region
  • Figure 6: Vodafone Group organisational chart highlighting Telco 2.0 activity areas
  • Figure 7: Vodafone’s central M2M unit is structured across five areas
  • Figure 8: The M2M Value Chain
  • Figure 9: ‘New entrant’ investments outstripped those of M2M incumbents in 2014
  • Figure 10: Characterising the difference between M2M and IoT across six domains
  • Figure 11: New business models to enable cross-silo IoT services
  • Figure 12: ‘Digital’ and IoT convergence

 

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in network virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). The two are often mentioned in the same breath and are related, but they are different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre Wide Area Networks: Urs Hoelzle of Google, discussing Google’s deployment and findings at the OpenNet Summit in early 2012, claimed 60% to 70% better utilisation of its data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old and yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally the telecoms and datacomms equipment market is worth over $180bn and has been dominated by 5 vendors with around 50% of the market split between them.

Innovation and competition in the networking market have been lacking, with very few major innovations in the last 12 years: the industry has focused on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised the IT computing markets, namely cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a perspective of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.
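The control/forwarding split can be caricatured in a few lines of Python. This is purely conceptual: it models no real protocol (such as OpenFlow) or controller API, and the class and method names are our own:

```python
# Conceptual sketch of the SDN split: a central controller with a
# network-wide view computes forwarding rules and pushes them down to
# 'dumb' switches, which only match and forward.

class Switch:
    """A forwarding element with no routing intelligence of its own."""
    def __init__(self):
        self.flow_table = {}            # match (destination) -> action (port)

    def install(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Holds the whole-network view and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def program(self, routes):
        for sw_name, rules in routes.items():
            for dst, port in rules.items():
                self.switches[sw_name].install(dst, port)

switches = {"s1": Switch(), "s2": Switch()}
Controller(switches).program({"s1": {"10.0.0.2": 2}, "s2": {"10.0.0.2": 1}})
print(switches["s1"].forward("10.0.0.2"))  # 2
```

The switches here are deliberately cheap and stateless beyond their flow tables; all path computation lives in the controller, which is the essence of the cost argument above.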

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions; however, while SDN does this for an entire network of things, NFV focusses specifically on network functions like routing, firewalls, load balancing, CPE etc., and looks to leverage developments in Commercial Off The Shelf (COTS) hardware such as generic server platforms utilising multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and the performance of these devices has risen exponentially. Figure 1 clearly demonstrates that in just three years (2010–2013) a tenfold increase in packet-processing (data plane) performance was achieved. Generally, CPU performance has been tracking Moore’s law, which originally stated that the number of components in an integrated circuit would double every two years. If the number of components is related to performance, the same can be said about CPU performance. For example, Intel will ship its latest processor family in the second half of 2015 with up to 72 individual CPU cores, compared to the four or six used in 2010–2013.
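As a simple illustration of the doubling logic (a pure extrapolation on our part; core counts are only a proxy for Moore's component counts, and many-core parts like the 72-core 2015 family outpace it):

```python
# Pure Moore's-law extrapolation: doubling every two years from a
# 4-core 2010 baseline. Illustrative only - cores stand in for
# component counts, and real product lines do not track this exactly.
cores, year = 4, 2010
while year < 2016:
    year += 2
    cores *= 2
    print(f"{year}: {cores} cores")
```

Straight doubling only reaches 32 cores by 2016, so the 72-core part cited above reflects a deliberate shift to many-core designs rather than Moore's law alone.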

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012 as an Industry Specification Group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative – not just from a cost perspective, but more importantly in what it means for telcos and service providers to be able to develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle the issues of performance, management and orchestration, proofs of concept, reference architecture and so on; externally, organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do it very well – switching or routing packets, acting as a network firewall, and so on. But as each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. By running network functions on a COTS-based platform, most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings, they are all about time to market, being able to respond quickly to market demands and in essence becoming more agile.

The real benefits

If the real benefits of NFV are not just about cost savings and are about agility, how is this delivered? Agility comes from a number of different aspects, for example the ability to orchestrate a number of VNFs and the network to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long term success of NFV. MANO provides automation and provisioning and will interface with existing provisioning and billing platforms such as existing OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects, chains of services and via external APIs allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed as part of the service. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s STB connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components are used as required – for example, Deep Packet Inspection and load balancing. Equally, as the user accesses other services such as media, Internet and voice, different VNFs can be brought into play, such as the SBC and network storage.
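The per-session composition described above can be sketched as a toy chain. The function names and session structure here are illustrative, not drawn from any real MANO or VNF API:

```python
# Toy model of the Figure 2 idea: VNFs are composed per session, and
# transient functions (e.g. authentication) drop out of the chain after
# running once, while persistent ones (e.g. firewall) stay for the
# session's lifetime. All names are invented for illustration.

def authenticate(session):
    session["authenticated"] = True     # runs once at set-up, then leaves
    return session

def firewall(session):
    session.setdefault("applied", []).append("firewall")
    return session

def dpi(session):
    session.setdefault("applied", []).append("dpi")
    return session

def build_chain(session):
    session = authenticate(session)     # transient: used and discarded
    for vnf in (firewall, dpi):         # persistent for this session
        session = vnf(session)
    return session

session = build_chain({"user": "stb-123"})
print(session["applied"])  # ['firewall', 'dpi']
```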

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Facing Up to the Software-Defined Operator

Introduction

At this year’s Mobile World Congress, the GSMA’s eccentric decision to split the event between the Fira Gran Via (the “new Fira”, as everyone refers to it) and the Fira Montjuic (the “old Fira”, as everyone refers to it) was a better one than it looked. If you took the special MWC shuttle bus from the main event over to the developer track at the old Fira, you crossed a culture gap that is widening, not closing. The very fact that the developers were accommodated separately hints at this, but it was the content of the sessions that brought it home. At the main site, it was impressive and forward-thinking to say you had an app, and a big deal to launch a new Web site; at the developer track, presenters would start up a Web service during their own talk to demonstrate their point.

There has always been a cultural rift between the “netheads” and the “bellheads”, of which this is just the latest manifestation. But the content of the main event tended to suggest that this is an increasingly serious problem. Everywhere, we saw evidence that core telecoms infrastructure is becoming software. Major operators are moving towards this now. For example, AT&T used the event to announce that it had signed up Software Defined Networking (SDN) specialists Tail-f and Metaswitch Networks for its next round of upgrades, while Deutsche Telekom’s TeraStream architecture is built on it.

This is not just about the overused three letter acronyms like “SDN and NFV” (Network Function Virtualisation – see our whitepaper on the subject here), nor about the duelling standards groups like OpenFlow, OpenDaylight etc., with their tendency to use the word “open” all the more the less open they actually are. It is a deeper transformation that will affect the device, the core network, the radio access network (RAN), the Operations Support Systems (OSS), the data centres, and the ownership structure of the industry. It will change the products we sell, the processes by which we deliver them, and the skills we require.

In the future, operators will be divided into providers of the platform for software-defined network services and consumers of the platform. Platform consumers, which will include MVNOs, operators, enterprises, SMBs, and perhaps even individual power users, will expect a degree of fine-grained control over network resources that amounts to specifying your own mobile network. Rather than trying to make a unitary public network provide all the potential options as network services, we should look at how we can provide the impression of one network per customer, just as virtualisation gives the impression of one computer per user.

To summarise, it is no longer enough to boast that your network can give the customer an API. Future operators should be able to provision a virtual network through the API. AT&T, for example, aims to provide a “user-defined network cloud”.

Elements of the Software-Defined Future

We see five major trends leading towards the overall picture of the ‘software defined operator’ – an operator whose boundaries and structure can be set and controlled through software.

1: Core network functions get deployed further and further forwards

Because core network functions like the Mobile Switching Centre (MSC) and Home Subscriber Server (HSS) can now be implemented in software on commodity hardware, they no longer have to be tied to major vendors’ equipment deployed in centralised facilities. This frees them to migrate towards the edge of the network, providing for more efficient use of transmission links, lower latency, and putting more features under the control of the customer.

Network architecture diagrams often show a boundary between “the Internet” and an “other network”. This is called the ‘Gi interface’ in 3G networks (and ‘SGi’ in 4G). Today, the “other network” is usually itself an IP-based network, making this distinction simply that between a carrier’s private network and the Internet core. Moving network functions forwards towards the edge also moves this boundary forwards, making it possible for Internet services like content-delivery networking or applications acceleration to advance closer to the user.

Increasingly, the network edge is a node supporting multiple software applications, some of which will be operated by the carrier, some by third-party services like – say – Akamai, and some by the carrier’s customers.

2: Access network functions get deployed further and further back

A parallel development to the emergence of integrated small cells/servers is the virtualisation and centralisation of functions traditionally found at the edge of the network. One example is so-called Cloud RAN or C-RAN technology in the mobile context, where the radio basebands are implemented as software and deployed as virtual machines running on a server somewhere convenient. This requires high capacity, low latency connectivity from this site to the antennas – typically fibre – and this is now being termed “fronthaul” by analogy to backhaul.

Another example is the virtualised Optical Line Terminal (OLT) some vendors offer in the context of fixed Fibre to the home (FTTH) deployments. In these, the network element that terminates the line from the user’s premises has been converted into software and centralised as a group of virtual machines. Still another would be the increasingly common “virtual Set Top Box (STB)” in cable networks, where the TV functions (electronic programming guide, stop/rewind/restart, time-shifting) associated with the STB are actually provided remotely by the network.

In this case, the degree of virtualisation, centralisation, and multiplexing can be very high, as latency and synchronisation are less of a problem. The functions could actually move all the way out of the operator network, off to a public cloud like Amazon EC2 – this is in fact how Netflix does it.

3: Some business support and applications functions are moving right out of the network entirely

If Netflix can deliver the world’s premier TV/video STB experience out of Amazon EC2, there is surely a strong case to look again at which applications should be delivered on-premises, in the private cloud, or moved into a public cloud. As explained later in this note, the distinctions between on-premises, forward-deployed, private cloud, and public cloud are themselves being eroded. At the strategic level, we anticipate pressure for more outsourcing and more hosted services.

4: Routers and switches are software, too

In the core of the network, the routers that link all this stuff together are also turning into software. This is the domain of true SDN – basically, the effort to replace relatively smart routers with much cheaper switches whose forwarding rules are generated in software by a much smarter controller node. This is well reported elsewhere, but it is necessary to take note of it. In the mobile context, we also see this in the increasing prevalence of virtualised solutions for the LTE Evolved Packet Core (EPC), Mobility Management Entity (MME), etc.

5: Wherever it is, software increasingly looks like the cloud

Virtualisation – the approach of configuring groups of computers to work like one big ‘virtual computer’ – is a key trend. Even when, as with the network devices, software is running on a dedicated machine, it will be increasingly found running in its own virtual machine. This helps with management and security, and most of all, with resource sharing and scalability. For example, the virtual baseband might have VMs for each of 2G, 3G, and 4G. If the capacity requirements are small, many different sites might share a physical machine. If large, one site might be running on several machines.

This has important implications, because it also makes sharing among users easier. Those users could be different functions, or different cell sites, but they could also be customers or other operators. It is no accident that NEC’s first virtualised product, announced at MWC, is a complete MVNO solution. It has never been as easy to provide more of your carrier needs yourself, and it will only get easier.

The following Huawei slide (from their Carrier Business Group CTO, Sanqi Li) gives a good visual overview of a software-defined network.

Figure 1: An architecture overview for a software-defined operator

Source: Huawei

 

  • The Challenges of the Software-Defined Operator
  • Three Vendors and the Software-Defined Operator
  • Ericsson
  • Huawei
  • Cisco Systems
  • The Changing Role of the Vendors
  • Who Benefits?
  • Who Loses?
  • Conclusions
  • Platform provider or platform consumer
  • Define your network sharing strategy
  • Challenge the coding cultural cringe

 

  • Figure 1: An architecture overview for a software-defined operator
  • Figure 2: A catalogue for everything
  • Figure 3: Ericsson shares (part of) the vision
  • Figure 4: Huawei: “DevOps for carriers”
  • Figure 5: Cisco aims to dominate the software-defined “Internet of Everything”