The private networks ecosystem: Who to partner with

Introduction

Enterprises are considering implementing private networks to improve their operational efficiency and enable new use cases. Stakeholders on the supply side, including traditional and new service providers and technology vendors, are looking for routes to enter and compete in the private networks market. The graphic below shows the current number of deployments by sector.

Private networks deployments by sector, 2021 to March 2023


Source: STL Partners Private Networks Global Insight Tool

With many players in the market, it can be difficult for new and existing players in the private networks value chain to navigate and determine the best strategy and position to take. Infrastructure and service providers are simultaneously trying to carve out their core positions and expand their solutions to capture the largest possible share of the opportunity. For enterprises, this is creating a confusing market, where it is difficult to assess which partners to work with. This can also create barriers to growth in the market as enterprise customers hold back on investing in private networks until the ecosystem and value chain are more structured. So there is an incentive for providers to be clearer on their roles.

Therefore, it is important for all stakeholders and providers to have a clear and comprehensive understanding of the ecosystem in order to:

  • Gain clarity on where a given company’s strength is in the market, and how to stand out and differentiate. For example, which vertical or layer of the value chain to target.
  • Develop a clear view of gaps in existing solutions and emerging challengers.
  • Identify good partnering or acquisition targets and opportunities.



Recent news coverage has highlighted the number of collaborations taking place. This is not surprising: deploying and running private networks requires a degree of collaboration, given the complexity of private networks solutions compared with public wireless networks and legacy enterprise connectivity solutions.

By leveraging these partnerships, stakeholders can strengthen their offerings in the market. They can provide tailored solutions that address the specific challenges and requirements of their target vertical markets, ultimately helping them to stand out in a crowded market and achieve success in the private network industry.

In addition, stakeholders must consider several factors when choosing private network partners, such as their level of expertise in their sector, their record of project delivery and their customer base.

This report examines the private networks ecosystem, focusing on the collaborative efforts required to build and harness their potential. We begin by exploring the diverse ecosystem surrounding private networks, providing insights into their lifecycle. We examine the various solutions available in the market, including managed service offerings and the ever-increasing popularity of hyperscaler-supported as-a-service solutions. Finally, we shed light on the significance of strategic partnerships and current trends shaping the private network landscape. By addressing growth barriers and embracing collaboration, organisations can navigate this dynamic environment with confidence.

In line with our continued focus on private networks, this report builds on the knowledge presented in our Private Networks Insight Service, where we provide in-depth analysis of the market through research reports and related products.

Building a private network requires coordination between multiple parties

Private networks share aspects of both public wireless networks and enterprise LANs. Deploying and managing them therefore presents a unique set of requirements that relates to both types of network. No single provider can currently address all of these requirements independently. The success of private networks delivery depends on solving the challenges associated with these differences through effective ecosystem partnerships (see graphic below).

Requirements for private networks solutions and services


Source: STL Partners

 

Table of contents

  • Executive Summary
    • Market opportunities
    • Recommendations for stakeholders
  • Introduction
    • Building a private network requires coordination between multiple parties
  • The private network ecosystem
    • Overview of the private network lifecycle
  • Types of private network solutions available in the market
    • Enterprise co-developed solutions
    • Managed service solutions
    • Hyperscaler-supported as-a-service solutions
  • Where to partner in private networks
    • Key trends underpinning a shifting ecosystem
    • Partnership strategies: How they can address growth barriers
  • Conclusion


Telco digital twins: Cool tech or real value?

Definition of a digital twin

Digital twin is a familiar term with a well-known definition in industrial settings. However, in a telco setting it is useful to define what it is and how it differs from a standard piece of modelling. This research discusses the definition of a digital twin and concludes with a detailed taxonomy.

An archetypical digital twin:

  • models a single entity/system (for example, a cell site).
  • creates a digital representation of this entity/system, which can be either a physical object, process, organisation, person or abstraction (details of the cell-site topology or the part numbers of components that make up the site).
  • has exactly one twin per thing (each cell site can be modelled separately).
  • updates (either continuously, intermittently or as needed) to mirror the current state of this thing. For example, the cell site’s current performance given customer behaviour.

In addition:

  • multiple digital twins can be aggregated to form a composite view (the impact of network changes on cell sites in an area).
  • the data coming into the digital twin can drive various types of analytics (typically digital simulations and models) within the twin itself – or could transit from one or multiple digital twins to a third-party application (for example, capacity management analytics).
  • the resulting analysis has a range of immediate uses, such as feeding into downstream actuators, or it can be stored for future use, for instance mimicking scenarios for testing without affecting any live applications.
  • a digital twin is directly linked to the original, which means it can enable a two-way interaction. Not only can a twin allow others to read its own data, but it can transmit questions or commands back to the original asset.
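
As an illustration, the properties above can be sketched in a few lines of code. This is a minimal, hypothetical model – all class and function names are our own, not a real telco API – showing one twin per asset, state updates, a composite view across twins, and the two-way command link.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class CellSiteTwin:
    """Minimal sketch of an archetypical digital twin of a cell site.

    One twin instance mirrors exactly one physical site; the names and
    fields here are illustrative, not a real telco API.
    """
    site_id: str
    state: dict = field(default_factory=dict)        # mirrored state (topology, KPIs, part numbers)
    command_channel: Callable[[dict], None] = print  # two-way link back to the physical asset

    def ingest(self, telemetry: dict) -> None:
        """Update the twin (continuously, intermittently or as needed)."""
        self.state.update(telemetry)

    def read(self, key: str) -> Any:
        """Applications query the twin instead of the asset itself."""
        return self.state.get(key)

    def command(self, instruction: dict) -> None:
        """Two-way interaction: send a command back to the original asset."""
        self.command_channel({"site": self.site_id, **instruction})

def aggregate(twins: list["CellSiteTwin"], key: str) -> float:
    """Composite view across twins, e.g. average load for sites in an area."""
    values = [t.read(key) for t in twins if t.read(key) is not None]
    return sum(values) / len(values) if values else 0.0
```

For example, aggregating the mirrored load of several site twins gives the kind of area-level composite view described above, without touching the live sites.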


What is the purpose of a digital twin?

This research uses the phrase “archetypical twin” to describe the most mature twin category, which can be found in manufacturing, operations, construction, maintenance and operating environments. These have been around at different levels of sophistication for the last 10 years or so and are expected to be widely available and mature in the next five years. Their main purpose is to act as a proxy for an asset, so that applications wanting data about the asset can connect to the digital twin rather than directly to the asset itself. In these environments, digital twins tend to be deployed for expensive and complex equipment that needs to operate efficiently and without significant downtime – for example, jet engines. In telecoms, the most immediate use case for an archetypical twin is to model the cell tower and the associated Radio Access Network (RAN) electronics and supporting equipment.

The adoption of digital twins should be seen as an evolution from today’s AI models


*See report for detailed graphic.

Source: STL Partners

 

At the other end of the maturity curve from the archetypical twin is the “digital twin of the organisation” (DTO). This is a virtual model of a department, business unit, organisation or whole enterprise that management can use to support specific financial or other decision-making processes. It uses the same design pattern and thinking as a twin of a physical object, but brings in a variety of operational or contextual data to model a “non-physical” thing. In interviews for this research, the consensus was that these were not an initial priority for telcos and, indeed, it was not conceptually clear whether the benefits make them a must-have for telcos in the mid-term either.

As the telecoms industry is still in the exploratory and trial phase with digital twins, a series of initial deployments raises a somewhat semantic question: is a digital representation of an asset (for example, a network function) or a system (for example, a core network) really a digital twin, or just an organic development of the AI models that telcos have used for some time? Referring to this as the “digital twin/model” continuum, the graphic above shows the characteristics of an archetypical twin compared with those of a typical model.

The most important takeaway from this graphic is the set of factors on the right-hand side that make a digital twin potentially much more complex and resource-hungry than a model. How important it is to distinguish an archetypical twin from a hybrid digital twin/model may come down to “marketing creep”, where deployments tend to be described as digital twins whether or not they exhibit many features of the archetypical twin. This creep will be exacerbated by telcos’ needs, which are focused not primarily on emulating physical assets such as engines or robots, but on monitoring complex processes (for example, networks) whose individual assets (for example, network functions, physical equipment) may not need as much detailed monitoring as the individual components of an aircraft engine. As a result, the telecoms industry could deploy digital twin/models far more extensively than full digital twins.

Table of contents

  • Executive Summary
    • Choosing where to start
    • Complexity: The biggest short-term barrier
    • Building an early-days digital twin portfolio
  • Introduction
    • Definition of a digital twin
    • What is the purpose of a digital twin?
    • A digital twin taxonomy
  • Planning a digital twin deployment
    • Network testing
    • Radio and network planning
    • Cell site management
    • KPIs for network management
    • Fraud prediction
    • Product catalogue
    • Digital twins within partner ecosystems
    • Digital twins of services
    • Data for customer digital twins
    • Customer experience messaging
    • Vertical-specific digital twins
  • Drivers and barriers to uptake of digital twins
    • Drivers
    • Barriers
  • Conclusion: Creating a digital twin strategy
    • Immediate strategy for day 1 deployment
    • Long-term strategy

Related research


Telco Cloud Deployment Tracker: Deploying NFs on public cloud without losing control

In this update, we present a review of telco cloud deployments for the whole of 2022 and discuss trends that will shape the year ahead. Fewer deployments than expected were completed in 2022. The main reason for this was a delay in previously announced 5G Standalone (SA) core roll-outs, for reasons we have analysed in a previous report. However, we expect these deployments to be largely completed in 2023. 

We also review deployments of NFs on the public cloud in 2022. While few in number, they are significant in scope, and illustrate ways in which telcos of different types can deploy NFs on public cloud while retaining control over the management and ongoing development of those NFs.


CNFs on the public cloud: Recent deployments illustrate how to avoid hyperscaler lock-in

Few telcos have yet deployed critical network functions on the hyperscale cloud, as discussed in this report. However, significant new deployments did go live in 2022, as did tests and pilots, involving all three hyperscalers:​

Recent deployments and trials of CNFs on public cloud

Source: STL Partners

In our recently published Telco Cloud Manifesto 2.0, we argued that telcos thinking of outsourcing telco cloud (i.e. both VNFs/CNFs and cloud infrastructure) to hyperscalers should not do so as a simple alternative to evolving their own software development skills and cloud operational processes. In order to avoid a potentially crippling dependency on their hyperscaler partners, it is essential for operators to maintain control over the development and orchestration of their critical NFs and cloud infrastructure while delivering services across a combination of the private cloud and potentially multiple public clouds. In contrast to a simple outsourcing model, the deployments on public cloud in 2022 reflect different modes of exploiting the resources and potential of the cloud while maintaining control over NF development and potential MEC use cases. The telcos involved retain control because only specific parts of the cloud stack are handed over to the hyperscale platform; and, within that, the telcos also retain control over variable elements such as orchestration, NF development, physical infrastructure or the virtualisation layer.

In this report, we discuss the models which the telcos above have followed to migrate their network workloads onto the public cloud and how this move fits their overall virtualisation strategies.

Previous telco cloud tracker releases and related research


VNFs on public cloud: Opportunity, not threat

VNF deployments on the hyperscale cloud are just beginning

Numerous collaboration agreements between hyperscalers and leading telcos, but few live VNF deployments to date

The past three years have seen many major telcos concluding collaboration agreements with the leading hyperscalers. These have involved one or more of five business models for the telco-hyperscaler relationship that we discussed in a previous report, and which are illustrated below:

Five business models for telco-hyperscaler partnerships

Source: STL Partners

In this report, we focus more narrowly on the deployment, delivery and operation by and to telcos of virtualised and cloud-native network functions (VNFs / CNFs) over the hyperscale public cloud. To date, there have been few instances of telcos delivering live, commercial services on the public network via VNFs hosted on the public cloud. STL Partners’ Telco Cloud Deployment Tracker contains eight examples of this, as illustrated below:

Major telcos deploying VNFs in the public cloud

Source: STL Partners


Telcos are looking to generate returns from their telco cloud investments and maintain control over their ‘core business’

The telcos in the above table are all of comparable stature and ambition to the likes of AT&T and DISH in the realm of telco cloud but have a diametrically opposite stance when it comes to VNF deployment on public cloud. They have decided against large-scale public cloud deployments for a variety of reasons, including:

  • They have invested a considerable amount of money, time and human resources in their private cloud deployments, and they want and need to utilise the asset and generate the RoI.
  • Related to this, they have generated a large amount of intellectual property (IP) as a result of their DIY cloud- and VNF-development work. Clearly, they wish to realise the business benefits they sought to achieve through these efforts, such as cost and resource efficiencies, automation gains, enhanced flexibility and agility, and opportunities for both connectivity and edge compute service innovation. Apart from the opportunity cost of not realising these gains, it is demoralising for some CTO departments to contemplate surrendering the fruit of this effort in favour of a hyperscaler’s comparable cloud infrastructure, orchestration and management tools.
  • In addition, telcos have an opportunity to monetise that IP by marketing it to other telcos. The Rakuten Communications Platform (RCP) marketed by Rakuten Symphony is an example of this: effectively, a telco providing a telco cloud platform on an NFaaS basis to third-party operators or enterprises – in competition with similar offerings that might be developed by hyperscalers. Accordingly, RCP will be hosted over private cloud facilities, not public cloud. But in theory, there is no reason why RCP could not in future be delivered over public cloud. In this case, Rakuten would be acting like any other vendor adapting its solutions to the hyperscale cloud.
  • Also in theory, telcos could offer their private telco clouds as a platform, or as a wholesale or on-demand service, for third parties to source and run their own network functions (i.e. these would be hosted on the wholesale provider’s facilities, in contrast to the RCP, which is hosted on the client telco’s facilities). This would be a logical fit for telcos such as BT or Deutsche Telekom, which still operate as their respective countries’ communications backbone and primary wholesale providers.

BT and Deutsche Telekom have also been among the telcos that have been most visibly hostile to the idea of running NFs powering their own public, mass-market services on the public and hyperscale cloud. And for most operators, this is the main concern making them cautious about deploying VNFs on the public cloud, let alone sourcing them from the cloud on an NFaaS basis: that this would be making the ‘core’ telco business and asset – the network – dependent on the technology roadmaps, operational competence and business priorities of the hyperscalers.

Table of contents

  • Executive Summary
  • Introduction: VNF deployments on the hyperscale cloud are just beginning
    • Numerous collaboration agreements between hyperscalers and leading telcos, but few live VNF deployments to date
    • DISH and AT&T: AWS vs Azure; vendor-supported vs DIY; NaaCP vs net compute
  • Other DIY or vendor-supported best-of-breed players are not hosting VNFs on public cloud
    • Telcos are looking to generate returns from their telco cloud investments and maintain control over their ‘core business’
    • The reluctance to deploy VNFs on the cloud reflects a persistent, legacy concept of the telco
  • But NaaCP will drive more VNF deployments on public cloud, and opportunities for telcos
    • Multiple models for NaaCP present prospects for greater integration of cloud-native networks and public cloud
  • Conclusion: Convergence of network and cloud is inevitable – but not telcos’ defeat
  • Appendix

Related Research

 


Forecasting capacity of network edge computing

We have updated this forecast. Check the latest report here

Telco edge build has been slower than expected

Telecoms operators have been planning the deployment of edge computing sites for at least the last three years.

Initially, the premise of (mobile) edge computing was to take advantage of the prime real estate telecoms operators had. Mobile operators, in particular, had undergone a process of evolving their network facilities from sites which housed purpose-built networking equipment to data centres as they adopted virtualisation. The consolidation of networking equipment meant there would be spare capacity in these data centres that could easily host applications for enterprises and developers.

That evolution has now been accelerated by the advent of 5G, a mobile generation built on a software-based architecture and IT principles. The result will be a proliferation of edge data centres that will be used for radio access network and core network hardware and software.

However, the reality is that it has taken time for telcos to deploy these sites. There are multiple reasons for this:

  1. Cost: There is a cost to renovate an existing telco site and ensure it meets requirements common for world-class data centres.
  2. Demand: Telcos are hesitant to take on the risk of building out the infrastructure until they are certain of the demand for these data centres.
  3. 5G roll-out: Mobile operators have been prioritising their 5G RAN roll-out in the last two years, over the investment in edge data centres.
  4. Partnership decisions: The discussion around who to partner with to build the edge data centres has become more complicated because of the number of partners vying for the role and the entrance of new players (e.g. hyperscalers), which has slowed down decision-making.


Early adopters have taken significant strides in their edge strategy in 2021

2020 and 2021 have been seen as inflection points as a number of leading telecoms operators have launched edge sites: e.g. AT&T, Verizon, Cox Communications, SK Telecom and Vodafone. Arguably, this was triggered by AWS announcing partnerships on AWS Wavelength with four telecoms operators in November 2019, with more recently announced (e.g. Telstra in 2021).

Going forward, key questions remain on the trajectory of telco edge build:

  • How many edge data centres will telcos build and make available for consumer/enterprise applications?
  • How much capacity of telco edge computing will there be globally?
  • How much of telco edge computing will be used for distributed core network functions vs. consumer/enterprise applications?
  • What proportion of telco edge data centre capacity will be taken up by hyperscalers’ platforms?

This report seeks to forecast the capacity at telecoms operators’ edge data centres until 2025 and provide clarity on the nature and location of these sites. In other words, how many sites and servers will be available for running applications and where will these sites be located, both physically and logically in the telecoms operators’ networks.

Before reading this report, we would recommend reading STL Partners’ previous publications on telco edge computing, which provide context for some of the key themes addressed.

The report focuses on network edge computing sites

Edge computing comprises a spectrum of potential locations and technologies designed to bring processing power closer to the end-device and the source of data, outside of a central data centre or cloud. This report focuses on forecasting capacity at the network edge – i.e. edge computing at edge data centres owned (and usually operated) by telecoms operators.

The initial version of the forecast models capacity at these sites for non-RAN workloads. In other words, processing for enterprise or consumer applications and the distributed core network functions required to support them. Future versions of the forecast will expand to RAN.
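
The shape of such a forecast can be illustrated with a simple calculation: sites per network tier multiplied by servers per site, split between distributed core network functions and edge applications. All tier names and figures below are hypothetical placeholders, not numbers from the forecast.

```python
# Illustrative capacity arithmetic for a network-edge forecast: number of
# edge sites per tier x servers per site, with an assumed split between
# edge applications and distributed core network functions.
# All tier names and figures are hypothetical, not STL forecast data.

EDGE_TIERS = {
    # tier: (number of sites, servers per site)
    "transport aggregation": (400, 20),
    "metro data centre":     (60, 100),
}

APP_SHARE = 0.7  # assumed share of capacity available to applications;
                 # the remainder hosts distributed core network functions

def total_servers() -> int:
    """Total servers across all network-edge sites."""
    return sum(sites * per_site for sites, per_site in EDGE_TIERS.values())

def application_servers() -> float:
    """Servers available for consumer/enterprise edge applications."""
    return total_servers() * APP_SHARE
```

With these placeholder figures, 14,000 servers would be deployed in total, of which 9,800 would be free for applications; the forecast itself replaces each assumption with per-country, per-tier estimates.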

Forecast scope in terms of edge locations and workload types

The report covers two out of three scenarios for building the network edge

Table of contents

  • Executive summary
  • Introduction
  • There are 3 key factors determining telco edge data centre build out
  • Logically, most network edge will be in the transport aggregation layer
  • Geographically, we will see a shift in the concentration of network edge data centres
  • The limited capacity at network edge DCs will largely be used for edge applications
  • Most telecoms operators are taking a hybrid approach to building their edge
  • Conclusions and next steps
  • Appendix: Methodology


Why and how to go telco cloud native: AT&T, DISH and Rakuten

The telco business is being disaggregated

Telcos are facing a situation in which the elements that have traditionally made up and produced their core business are being ‘disaggregated’: broken up into their component parts and recombined in different ways, while some of the elements of the telco business are increasingly being provided by players from other industry verticals.

By the same token, telcos face the pressure – and the opportunity – to combine connectivity with other capabilities as part of new vertical-specific offerings.

Telco disaggregation primarily affects three interrelated aspects of the telco business:

  1. Technology:
    • ‘Vertical’ disaggregation: separating out of network functions previously delivered by dedicated, physical equipment into software running on commodity computing hardware (NFV, virtualisation)
    • ‘Horizontal’ disaggregation: breaking up of network functions themselves into their component parts – at both the software and hardware levels; and re-engineering, recombining and redistributing of those component parts (geographically and architecturally) to meet the needs of new use cases. In respect of software, this typically involves cloud-native network functions (CNFs) and containerisation
    • Open RAN is an example of both types of disaggregation: vertical disaggregation through separation of baseband processing software and hardware; and horizontal disaggregation by breaking out the baseband function into centralised and distributed units (CU and DU), along with a separate, programmable controller (RAN Intelligent Controller, or RIC), where all of these can in theory be provided by different vendors, and interface with radios that can also be provided by third-party vendors.
  2. Organisational structure and operating model: Breaking up of organisational hierarchies, departmental siloes, and waterfall development processes focused on the core connectivity business. As telcos face the need to develop new vertical- and client-specific services and use cases beyond the increasingly commoditised, low-margin connectivity business, these structures are being – or need to be – replaced by more multi-disciplinary teams taking end-to-end responsibility for product development and operations (e.g. DevOps), go-to-market, profitability, and technology.

Transformation from the vertical telco to the disaggregated telco

3. Value chain and business model: Breaking up of the traditional model whereby telcos owned – or at least had end-to-end operational oversight over – the entire network and service value chain. This is not to deny that telcos have always relied on third party-owned or outsourced infrastructure and services, such as wholesale networks, interconnect services or vendor outsourcing. However, these discrete elements have always been welded into an end-to-end, network-based services offering under the auspices of the telco’s BSS and OSS. These ensured that the telco took overall responsibility for end-to-end service design, delivery, assurance and billing.

    • The theory behind this traditional model is that all the customer’s connectivity needs should be met by leveraging the end-to-end telco network / service offering. In practice, the end-to-end characteristics have not always been fully controlled or owned by the service provider.
    • In the new, further disaggregated value chain, different parts of the now more software-, IT- and cloud-based technology stack are increasingly provided by other types of player, including from other industry verticals. Telcos must compete to play within these new markets, and have no automatic right to deliver even just the connectivity elements.

All of these aspects of disaggregation can be seen as manifestations of a fundamental shift where telecoms is evolving from a utility communications and connectivity business to a component of distributed computing. The core business of telecoms is becoming the processing and delivery of distributed computing workloads, and the enablement of ubiquitous computing.


Telco disaggregation is a by-product of computerisation

Telco industry disaggregation is part of a broader evolution in the domains of technology, business, the economy and society: ‘computerisation’. Computing analyses and breaks up material processes and systems into a set of logical and functional sub-components, enabling processes and products to be re-engineered, optimised, recombined in different ways, managed, and executed more efficiently and automatically.

In essence, ‘telco disaggregation’ is a term that describes a moment in time at which telecoms technology, organisations, value chains and processes are being broken up into their component parts and re-engineered, under the impact of computerisation and its synonyms: digitisation, softwarisation, virtualisation and cloud.

This is part of a new wave of societal computerisation / digitisation, which at STL Partners we call the Coordination Age. At a high level, this can be described as ‘cross-domain computerisation’: separating out processes, services and functions from multiple areas of technology, the economy and society – and optimising, recombining and automating them (i.e. coordinating them), so that they can better deliver on social, economic and environmental needs and goals. In other words, this enables scarce resources to be used more efficiently and sustainably in pursuit of individual and social needs.

NFV has computerised the network; telco cloud native subordinates it to computing

In respect of the telecoms industry in particular, one could argue that the first wave of virtualisation (NFV and SDN), which unfolded during the 2010s, represented the computerisation and digitisation of telecoms networking. The focus of this was internal to the telecoms industry in the first instance, rather than connected to other social and technology domains and goals. It was about taking legacy, physical networking processes and functions, and redesigning and reimplementing them in software.

Then, the second wave of virtualisation (cloud-native – which is happening now) is what enables telecoms networking to play a part in the second wave of societal computerisation more broadly (the Coordination Age). This is because the different layers and elements of telecoms networks (services, network functions and infrastructure) are redefined, instantiated in software, broken up into their component parts, redistributed (logically and physically), and reassembled as a function of an increasing variety of cross-domain and cross-vertical use cases that are enabled and delivered, ultimately, by computerisation. Telecoms is disaggregated by, subordinated to, and defined and controlled by computing.

In summary, we can say that telecoms networks and operations are going through disaggregation now because this forms part of a broader societal transformation in which physical processes, functions and systems are being brought under the control of computing / IT, in pursuit of broader human, societal, economic and environmental goals.

In practice, this also means that telcos are facing increasing competition from many new types of actor, such as:

  • Computing, IT and cloud players
  • More specialist and agile networking providers
  • Vertical-market actors delivering connectivity in support of vertical-specific, Coordination Age use cases.

 

Table of contents

  • Executive Summary
    • Three critical success factors for Coordination Age telcos
    • What capabilities will remain distinctively ‘telco’?
    • Our take on three pioneering cloud-native telcos
  • Introduction
    • The telco business is being disaggregated
    • Telco disaggregation is a by-product of computerisation
  • The disaggregated telco landscape: Where’s the value for telcos?
    • Is there anything left that is distinctively ‘telco’?
    • The ‘core’ telecoms business has evolved from delivering ubiquitous communications to enabling ubiquitous computing
    • Six telco-specific roles for telecoms remain in play
  • Radical telco disaggregation in action: AT&T, DISH and Rakuten
    • Servco, netco or infraco – or a patchwork of all three?
    • AT&T Network Cloud sell-off: Desperation or strategic acuity?
    • DISH Networks: Building the hyperscale network
    • Rakuten Mobile: Ecommerce platform turned cloud-native telco, turned telco cloud platform provider
  • Conclusion

Enter your details below to request an extract of the report

O-RAN: What is it worth?

Introducing STL Partners’ O-RAN Market Forecast

This capex forecast is STL Partners’ first attempt at estimating the value of the O-RAN market.

  • This is STL Partners’ first O-RAN market value forecast
  • It is based on analysis of telco RAN capex and projected investment pathways for O-RAN
  • The assumptions are informed by public announcements, private discussions and the opinions of our Telco Cloud team
  • We look forward to developing it further based on client feedback


What is O-RAN?

We define O-RAN as virtualised, disaggregated, open-interface architectures.

  • Our O-RAN capex forecasts cover virtualised, disaggregated, open-interface architectures in the Radio Access Network
  • They do not include vRAN-only deployments, or deployments that are O-RAN compliant but delivered by a single vendor

O-RAN definition open RAN

O-RAN will account for 76% of active RAN capex by 2030

As mobile operators upgrade their 4G networks and invest in new 5G infrastructure, they can continue purchasing single vendor legacy RAN equipment or opt for multi-vendor open-standard O-RAN solutions.

Each telco will determine its O-RAN roadmap based on its specific circumstances (footprint, network evolution, rural coverage, regulatory pressure, etc.). For the purpose of this top-level O-RAN capex forecast, STL has defined four broad pathways for transitioning from legacy RAN/vRAN to O-RAN and categorised each of the top 40 mobile operators into one of the pathways, based on their announced or suspected O-RAN strategy.

Through telcos’ projected mobile capex and the pathway categorisation, we estimate that by 2026 annual sales of O-RAN active network elements (including equipment and software) will reach USD12 billion, or 21% of all active RAN capex (excluding passive infrastructure). By 2030, these will reach USD43 billion and 76%, respectively.

Total annual O-RAN capex spend
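As a sanity check, the implied size of the total active RAN market can be backed out from the figures quoted above. The capex and share numbers come from the forecast; the arithmetic below is merely illustrative:

```python
# Implied total active RAN capex, from the O-RAN figures quoted above.
# Values are USD billions (equipment and software, excluding passive infra).
oran_capex = {2026: 12.0, 2030: 43.0}   # annual O-RAN sales
oran_share = {2026: 0.21, 2030: 0.76}   # O-RAN share of active RAN capex

for year in sorted(oran_capex):
    implied_total = oran_capex[year] / oran_share[year]
    print(f"{year}: implied total active RAN capex = USD {implied_total:.0f}bn")
```

Both years imply a total active RAN market of roughly USD 57 billion a year; in other words, the forecast growth reflects O-RAN taking share of a broadly flat market rather than the market itself expanding.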

Table of contents

  • Executive summary
    • O-RAN forecast 2020-2030
    • Brownfield vs greenfield
    • Four migration pathways
  • Modelling assumptions
  • Migration pathways
    • Committed O-RAN-philes
    • NEP-otists
    • Leap-froggers
    • Industrial O-RAN
  • Next steps

 


AI is starting to pay: Time to scale adoption

Download the additional file on the left for the PPT chart pack accompanying this report

AI adoption yields positive results

Over the last five years, telcos have made measurable progress in AI adoption, and it is starting to pay off. Compared with other industries, telcos have become adept at handling large data sets and implementing automation. The industry has gone from not knowing where or how to implement AI to having developed and implemented hundreds of AI and automation applications for network operations, fraud prevention, customer channel management, and sales and marketing. We have discussed these use cases, operator strategies and opportunities in detail in previous reports.

For the more advanced telcos, the challenge is no longer setting up data management platforms and systems and identifying promising use cases for AI and automation, but overcoming the organisational and cultural barriers to becoming truly data-centric in mindset, processes and operations. A significant part of this challenge is disseminating AI adoption, expertise and associated skills to the wider organisation, beyond a centralised AI team. The benchmark for success here is not other telcos, or companies in other industries with large legacy and physical assets, but digital- and cloud-native companies that have been established with a data-centric mindset and practices from the start. This includes global technology companies like Microsoft, Google and Amazon, which increasingly see telecoms operators as customers, or perhaps even competitors one day. It also includes greenfield players such as Rakuten, Jio and DISH, which have more modern networks and fewer ingrained legacy processes and cultural practices to overcome.


Telecoms has a high AI adoption rate compared with other industries

AI pays off

Source: McKinsey

In this report, we assess several telcos’ approach to AI and the results they have achieved so far, and draw some lessons on what kind of strategy and ambition leads to better results. In the second section of the report, we explore in more detail the concrete steps telcos can take to help accelerate and scale the use of AI and automation across the organisation, in the hopes of becoming more data-driven businesses.

While not all telcos have an ambition to drive new revenue growth by developing their own IP in AI to form the basis of new enterprise or consumer services, all operators will need AI to permeate their internal processes to compete effectively in the long term. Therefore, whatever the level of ambition, disseminating fundamental AI and data skills across the organisation is crucial to long-term success. STL Partners believes that the sooner telcos can master these skills, the higher their chances of successfully applying them to drive innovation both in core connectivity and in new services higher up the value chain.

Contents

  • Executive Summary
  • Introduction
  • Developing an AI strategy: What is it for?
    • Telefónica: From AURA and LUCA to Telefónica Tech
    • Vodafone: An efficiency focused strategy
    • Elisa: A vertical application approach
    • Takeaways: Comparing three approaches
  • AI maturity progression
    • Adopt big data analytics: The basic building blocks
    • Creating a centralised AI unit
    • Creating a new business unit
    • Disseminating AI across the organisation
  • Using partnerships to accelerate and scale AI
    • O2 and Cardinality
    • AT&T Acumos
  • Conclusion and recommendations
  • Index


Open RAN: What should telcos do?

————————————————————————————————————–

Related webinar: Open RAN: What should telcos do?

In this webinar STL Partners addressed the three most important sub-components of Open RAN (open-RAN, vRAN and C-RAN) and how they interact to enable a new, virtualized, less vendor-dominated RAN ecosystem. The webinar covered:

* Why Open RAN matters – and why it will be about 4G (not 5G) in the short term
* Data-led overview of existing Open RAN initiatives and challenges
* Our recommended deployment strategies for operators
* What the vendors are up to – and how we expect that to change

Date: Tuesday 4th August 2020
Time: 4pm GMT

Access the video recording and presentation slides

————————————————————————————————————————————————————————-

For the report chart pack download the additional file on the left

What is the open RAN and why does it matter?

‘The open RAN’ encompasses a group of technological approaches designed to make the radio access network (RAN) more cost-effective and flexible. It involves a shift away from traditional, proprietary radio hardware and network architectures, driven by single vendors, towards new, virtualised platforms and a more open vendor ecosystem.

Legacy RAN: single-vendor and inflexible

The traditional, legacy radio access network (RAN) uses dedicated hardware to deliver the baseband function (modulation and management of the frequency range used for cellular network transmission), along with proprietary interfaces (typically based on the Common Public Radio Interface (CPRI) standard) for the fronthaul from the baseband unit (BBU) to the remote radio unit (RRU) at the top of the transmitter mast.

Figure 1: Legacy RAN architecture

Source: STL Partners

This means that, typically, telcos have needed to buy the baseband and the radio from a single vendor, with the market presently dominated largely by the ‘big three’ (Ericsson, Huawei and Nokia), together with a smaller market share for Samsung and ZTE.

The architecture of the legacy RAN – with BBUs typically but not always at every cell site – has many limitations:

  • It is resource-intensive and energy-inefficient – employing a mass of redundant equipment operating at well below capacity most of the time, while consuming a lot of power
  • It is expensive, as telcos are obliged to purchase and operate a large inventory of physical kit from a limited number of suppliers, which keeps the prices high
  • It is inflexible, as telcos are unable to deploy to new and varied sites – e.g. macro-cells, small cells and micro-cells with different radios and frequency ranges – in an agile and cost-effective manner
  • It is more costly to manage and maintain, as there is less automation and more physical kit to support, requiring personnel to be sent out to remote sites
  • It is not very programmable to support the varied frequency, latency and bandwidth demands of different services.


Moving to the open RAN: C-RAN, vRAN and open-RAN

There are now many distinct technologies and standards emerging in the radio access space that involve a shift away from traditional, proprietary radio hardware and network architectures, driven by single vendors, towards new, virtualised platforms and a more open vendor ecosystem.

We have adopted ‘the open RAN’ as an umbrella term which encompasses all of these technologies. Together, they are expected to make the RAN more cost effective and flexible. The three most important sub-components of the open RAN are C-RAN, vRAN and open-RAN.

Centralised RAN (C-RAN), also known as cloud RAN, involves distributing and centralising the baseband functionality across different telco edge, aggregation and core locations, and in the telco cloud, so that baseband processing for multiple sites can be carried out in different locations, nearer to or further from the end user.

This enables more effective control and programming of capacity, latency, spectrum usage and service quality, including in support of 5G core-enabled technologies and services such as network slicing, URLLC, etc. In particular, baseband functionality can be split between more centralised sites (centralised units – CUs) and more distributed sites (distributed units – DUs) in much the same way, and for a similar purpose, as the split between centralised control planes and distributed user planes in the mobile core, as illustrated below:

Figure 2: Centralised RAN (C-RAN) architecture

Cloud RAN architecture

Source: STL Partners
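The placement flexibility this gives operators can be illustrated with a toy sketch. The site names and latency figures below are our own assumptions, purely for illustration, not STL Partners data:

```python
# Toy model: choose the most centralised CU location whose round-trip
# latency to the distributed unit still meets a service's latency budget.
SITES_MS = {              # site -> assumed round-trip latency in ms
    "cell-site": 0.1,
    "edge-dc": 1.0,
    "regional-dc": 5.0,
    "central-dc": 15.0,
}

def place_cu(latency_budget_ms: float) -> str:
    """Return the most centralised viable site (highest latency within budget)."""
    viable = {site: ms for site, ms in SITES_MS.items() if ms <= latency_budget_ms}
    return max(viable, key=viable.get)

print(place_cu(2.0))    # a tight, URLLC-style budget keeps the CU near the edge
print(place_cu(20.0))   # a relaxed budget allows full centralisation
```

The point is the trade-off itself: tighter latency budgets push baseband processing towards the user, while relaxed budgets allow it to be pooled centrally for efficiency.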

Virtual RAN (vRAN) involves virtualising (and now also containerising) the BBU so that it is run as software on generic hardware (General Purpose Processing – GPP) platforms. This enables the baseband software and hardware, and even different components of them, to be supplied by different vendors.

Figure 3: Virtual RAN (vRAN) architecture

vRAN architecture

Source: STL Partners

Open-RAN (note the hyphenation) involves replacing the vendor-proprietary interfaces between the BBU and the RRU with open standards. This enables BBUs (and parts thereof) from one or multiple vendors to interoperate with radios from other vendors, resulting in a fully disaggregated RAN:

Figure 4: Open-RAN architecture

Open-RAN architecture

Source: STL Partners

 

RAN terminology: clearing up confusion

You will have noticed that the technologies above have similar-sounding names and overlapping definitions. To add to potential confusion, they are often deployed together.

Figure 5: The open RAN Venn – How C-RAN, vRAN and open-RAN fit together

Open-RAN venn: open-RAN inside vRAN inside C-RAN

Source: STL Partners

As the above diagram illustrates, all forms of the open RAN involve C-RAN, but only a subset of C-RAN involves virtualisation of the baseband function (vRAN); and only a subset of vRAN involves disaggregation of the BBU and RRU (open-RAN).

To help eliminate ambiguity we are adopting the typographical convention ‘open-RAN’ to convey the narrower meaning: disaggregation of the BBU and RRU facilitated by open interfaces. Similarly, where we are dealing with deployments or architectures that involve vRAN and / or cloud RAN but not open-RAN in the narrower sense, we refer to those examples as ‘vRAN’ or ‘C-RAN’ as appropriate.
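The nesting between these terms can also be expressed as a short sketch. This is illustrative only; the flags and function are our own, not an STL Partners artefact:

```python
def classify_ran(centralised: bool, virtualised: bool, open_interfaces: bool) -> str:
    """Classify a RAN deployment per the Venn in Figure 5:
    open-RAN is a subset of vRAN, which is a subset of C-RAN."""
    if centralised and virtualised and open_interfaces:
        return "open-RAN"   # BBU and RRU disaggregated over open interfaces
    if centralised and virtualised:
        return "vRAN"       # virtualised baseband, proprietary fronthaul
    if centralised:
        return "C-RAN"      # centralised baseband on dedicated hardware
    return "legacy RAN"     # single-vendor kit at the cell site

print(classify_ran(True, True, False))   # vRAN
```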

In the coming pages, we will investigate why open RAN matters, what telcos are doing about it – and what they should do next.

Table of contents

  • Executive summary
  • What is the open RAN and why does it matter?
    • Legacy RAN: single-vendor and inflexible
    • The open RAN: disaggregated and flexible
    • Terminology, initiatives & standards: clearing up confusion
  • What are the opportunities for open RAN?
    • Deployment in macro networks
    • Deployment in greenfield networks
    • Deployment in geographically-dispersed/under-served areas
    • Deployment to support consolidation of radio generations
    • Deployment to support capacity and coverage build-out
    • Deployment to support private and neutral host networks
  • How have operators deployed open RAN?
    • What are the operators doing?
    • How successful have deployments been?
  • How are vendors approaching open RAN?
    • Challenger RAN vendors: pushing for a revolution
    • Incumbent RAN vendors: resisting the open RAN
    • Are incumbent vendors taking the right approach?
  • How should operators do open RAN?
    • Step 1: Define the roadmap
    • Step 2: Implement
    • Step 3: Measure success
  • Conclusions
    • What next?


5G: Bridging hype, reality and future promises

The 5G situation seems paradoxical

People in China and South Korea are buying 5G phones by the million, far more than initially expected, yet many western telcos are moving cautiously. Will your company also find demand? What’s the smart strategy while uncertainty remains? What actions are needed to lead in the 5G era? What questions must be answered?

New data requires new thinking. STL Partners 5G strategies: Lessons from the early movers presented the situation in late 2019, and in What will make or break 5G growth? we outlined the key drivers and inhibitors for 5G growth. This follow on report addresses what needs to happen next.

The report is informed by talks with executives of over three dozen companies and email contacts with many more, including 21 of the first 24 telcos who have deployed. This report covers considerations for the next three years (2020–2023) based on what we know today.

“Seize the 5G opportunity,” says Ke Ruiwen, Chairman, China Telecom, and Chinese reports claimed 14 million sales by the end of 2019. Korea announced two million subscribers in July 2019 and approached five million by December 2019. By early 2020, the Korean carriers were confident that 30% of the market would be using 5G by the end of 2020. In the US, Verizon is selling 5G phones even in areas without 5G services. With nine phone makers looking for market share, the price in China is US$285–$500 and falling, so the handset price barrier seems to be coming down fast.

Yet in many other markets, operators’ progress is significantly more tentative. So what is going on, and what should you do about it?


5G technology works OK

22 of the first 24 operators to deploy are using mid-band radio frequencies.

Vodafone UK claims “5G will work at average speeds of 150–200 Mbps.” Speeds are typically 100 to 500 Mbps, rarely a gigabit. Latency is about 30 milliseconds, only about a third better than decent 4G. Mid-band reach is excellent. Sprint has demonstrated that simply upgrading existing base stations can provide substantial coverage.

5G has a draft business case now: people want to buy 5G phones. New use cases are mostly years away but the prospect of better mobile broadband is winning customers. The costs of radios, backhaul, and core are falling as five system vendors – Ericsson, Huawei, Nokia, Samsung, and ZTE – fight for market share. They’ve shipped over 600,000 radios. Many newcomers are gaining traction, for example Altiostar won a large contract from Rakuten and Mavenir is in trials with DT.

The high cost of 5G networks is an outdated myth. DT, Orange, Verizon, and AT&T are building 5G while cutting or keeping capex flat. Sprint’s results suggest a smart build can quickly reach half the country without a large increase in capital spending. Instead, the issue for operators is that it requires new spending with uncertain returns.

The technology works, mostly. Mid-band is performing as expected, with typical speeds of 100–500Mbps outdoors, though indoor performance is less clear. mmWave performance indoors is badly degraded. Some SDN, NFV and other automation tools have reached the field. However, 5G upstream is in limited use: many carriers are combining 5G downstream with 4G upstream for now. Each base station also currently requires much more power than a 4G base station, which leads to high opex. Dynamic spectrum sharing, which allows 5G to share unneeded 4G spectrum, is still in test, and many features of SDN and NFV are not yet ready.

So what should companies do? The next sections review go-to-market lessons, status on forward-looking applications, and technical considerations.

Early go-to-market lessons

Don’t oversell 5G

The continuing publicity for 5G is proving powerful, but variable. Because some customers are already convinced they want 5G, marketing and advertising do not always need to emphasise the value of 5G. For those customers, make clear why your company’s offering is the best compared to rivals’. However, the draw of 5G is not universal. Many remain sceptical, especially if their past experience with 4G has been lacklustre. They – and also a minority swayed by alarmist anti-5G rhetoric – will need far more nuanced and persuasive marketing.

Operators should be wary of overclaiming. 5G speed, although impressive, currently has few practical applications that don’t already work well over decent 4G. Fixed home broadband is a possible exception here. As the objective advantages of 5G in the near future are likely to be limited, operators should not hype features that are unrealistic today, no matter how glamorous. If you don’t have concrete selling propositions, do image advertising or use happy customer testimonials.

Table of Contents

  • Executive Summary
  • Introduction
    • 5G technology works OK
  • Early go-to-market lessons
    • Don’t oversell 5G
    • Price to match the experience
    • Deliver a valuable product
    • Concerns about new competition
    • Prepare for possible demand increases
    • The interdependencies of edge and 5G
  • Potential new applications
    • Large now and likely to grow in the 5G era
    • Near-term applications with possible major impact for 5G
    • Mid- and long-term 5G demand drivers
  • Technology choices, in summary
    • Backhaul and transport networks
    • When will 5G SA cores be needed (or available)?
    • 5G security? Nothing is perfect
    • Telco cloud: NFV, SDN, cloud native cores, and beyond
    • AI and automation in 5G
    • Power and heat


Telco Cloud: Why it hasn’t delivered, and what must change for 5G

Related Webinar – 5G Telco Clouds: Where we are and where we are headed

This research report will be expanded upon in our upcoming webinar, 5G Telco Clouds: Where we are and where we are headed. In this webinar we will argue that 5G will only pay if telcos find a way to make telco clouds work. We will address the following key questions:

  • Why have telcos struggled to realise the telco cloud promise?
  • What do telcos need to do to unlock the key benefits?
  • Why is now the time for telcos to try again?

Join us on April 8th 16:00 – 17:00 GMT by using this registration link.

Telco cloud: big promises, undelivered

A network running in the cloud

Back in the early 2010s, the idea that a telecoms operator could run its network in the cloud was earth-shattering. Telecoms networks were complicated and highly bespoke, and therefore expensive to build and operate. What if we could find a way to run networks on common, shared resources, as cloud computing companies do with IT applications? This would be beneficial in a whole host of ways, mostly related to flexibility and efficiency. The industry was sold.

In 2012, ETSI started the ball rolling when it unveiled the Network Functions Virtualisation (NFV) whitepaper, which borrowed the IT world’s concept of server-virtualisation and gave it a networking spin. Network functions would cease to be tied to dedicated pieces of equipment, and instead would run inside “virtual machines” (VMs) hosted on generic computing equipment. In essence, network functions would become software apps, known as virtual network functions (VNFs).

Because the software (the VNF) is not tied to hardware, operators would have much more flexibility over how their network is deployed. As long as we figure out a suitable way to control and configure the apps, we should be able to scale deployments up and down to meet requirements at a given time. And as long as we have enough high-volume servers, switches and storage devices connected together, it’s as simple as spinning up a new instance of the VNF – much simpler than before, when we needed to procure and deploy dedicated pieces of equipment with hefty price tags attached.
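The elasticity argument above reduces to a toy calculation (the capacity figure and function below are ours, purely for illustration): once a network function is software, adding capacity means adding instances rather than procuring appliances.

```python
import math

def required_instances(load_gbps: float, capacity_per_vnf_gbps: float = 10.0) -> int:
    """VNF instances needed to carry the current load (assumed per-VNF capacity)."""
    return max(1, math.ceil(load_gbps / capacity_per_vnf_gbps))

# Demand rises from 35 to 62 Gbps: spin up two more instances, no new hardware.
print(required_instances(35.0), "->", required_instances(62.0))
```

Scaling down works the same way in reverse, which is what makes the capacity of a virtualised network function an operational parameter rather than a procurement decision.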

An additional benefit of moving to a software model is that operators have a far greater degree of control than before over where network functions physically reside. NFV infrastructure can directly replace old-school networking equipment in the operator’s central offices and points of presence, but the software can in theory run anywhere – in the operator’s private centralised data centre, in a datacentre managed by someone else, or even in a public hyperscale cloud. With a bit of re-engineering, it would be possible to distribute resources throughout a network, perhaps placing traffic-intensive user functions in a hub closer to the user, so that less traffic needs to go back and forth to the central control point. The key is that operators are free to choose, and shift workloads around, dependent on what they need to achieve.

The telco cloud promise

Somewhere along the way, we began talking about the telco cloud. This is a term that means many things to many people. At its most basic level, it refers specifically to the data centre resources supporting a carrier-grade telecoms network: hardware and software infrastructure, with NFV as the underlying technology. But over time, the term has also come to be associated with cloud business practices – that is to say, the innovation-focussed business model of successful cloud computing companies.

Figure 2: Telco cloud defined: New technology and new ways of working

Telco cloud: Virtualised & programmable infrastructure together with cloud business practices

Source: STL Partners

In this model, telco infrastructure becomes a flexible technology platform which can be leveraged to enable new ways of working across an operator’s business. Operations become easier to automate. Product development and testing becomes more straightforward – and can happen more quickly than before. With less need for high capital spend on equipment, there is more potential for shorter, success-based funding cycles which promote innovation.

Much has been written about the vast potential of such a telco cloud, by analysts and marketers alike. Indeed, STL Partners has contributed its share. For this reason, we will not repeat a thorough investigation here. Instead, we will use a simplified framework covering the four major buckets of value that telco cloud is supposed to help unlock:

Figure 3: The telco cloud promise: Major buckets of value to be unlocked

Four buckets of value from telco cloud: Openness; Flexibility, visibility & control; Performance at scale; Agile service introduction

Source: STL Partners

These four buckets cover the most commonly-cited expectations of telcos moving to the cloud. Swallowed within them all, to some extent, is a fifth expectation: cost savings, which have been promised as a side-effect. These expectations have their origin in what the analyst and vendor community has promised – and so, in theory, they should be realistic and achievable.

The less-exciting reality

At STL Partners, we track the progress of telco cloud primarily through our NFV Deployment Tracker, a comprehensive database of live deployments of telco cloud technologies (NFV, SDN and beyond) in telecoms networks across the planet. The emphasis is on live rather than those running in testbeds or as proofs of concept, since we believe this is a fairer reflection of how mature the industry really is in this regard.

What we find is that, after a slow start, telcos have really taken to telco cloud, with a surge in deployments since 2017:

Figure 4: Total live deployments of telco cloud technology, 2015-2019
Includes NFVi, VNF, SDN deployments running in live production networks, globally

Telco cloud deployments have risen substantially over the past few years

Source: STL Partners NFV Deployment Tracker

All of the major operator groups around the world are now running telco clouds, as well as a significant long tail of smaller players. As we have explained previously, the primary driving force in that surge has been the move to virtualise mobile core networks in response to data traffic growth, and in preparation for roll-out of 5G networks. To date, most of it is based on NFV: taking existing physical core network functions (components of the Evolved Packet Core or the IP Multimedia Subsystem, in most cases) and running them in virtual machines. No operator has completely decommissioned legacy network infrastructure, but in many cases these deployments are already very ambitious, supporting 50% or more of a mobile operator’s total network traffic.

Yet, despite this surge in deployments, operators we work with are increasingly frustrated by the results. The technology works, but we are a long way from unlocking the value promised in Figure 3. Solutions to date are far from open and vendor-neutral. The ability to monitor, optimise and modify systems is far from ubiquitous. Performance is acceptable, but nothing to write home about, and not yet proven at mass scale. Examples of truly innovative services built on telco cloud platforms are few and far between.

We are continually asked: will telco cloud really deliver? And what needs to change for that to happen?

The problem: flawed approaches to deployment

Learning from those on the front line

The STL Partners hypothesis is that telco cloud, in and of itself, is not the problem. From a theoretical standpoint, there is no reason that virtualised and programmable network and IT infrastructure cannot be a platform for delivering the telco cloud promise. Instead, we believe that the reason it has not yet delivered is linked to how the technology has been deployed, both in terms of the technical architecture, and how the telco has organised itself to operate it.

To test this hypothesis, we conducted primary research with fifteen telecoms operators at different stages in their telco cloud journey. We asked them about their deployments to date, how they have been delivered, the challenges encountered, how successful they have been, and how they see things unfolding in the future.

Our sample includes individuals leading telco cloud deployment at a range of mobile, fixed and converged network operators of all shapes and sizes, and in all regions of the world. Titles vary widely, but include Chief Technology Officers, Heads of Technology Exploration and Chief Network Architects. Our criteria were that individuals needed to be knee-deep in their organisation’s NFV deployments, not just from a strategic standpoint, but also close to the operational complexities of making it happen.

What we found is that most telco cloud deployments to date fall into two categories, driven by the operator’s starting point in making the decision to proceed:

Figure 5: Two starting points for deploying telco cloud

Function-first "we need to virtualise XYZ" vs platform-first "we want to build a cloud platform"

Source: STL Partners

The operators we spoke to were split between these two camps, and we found that the starting point greatly affects how the technology is deployed. In the coming pages, we will explain both in more detail.

Table of contents

  • Executive Summary
  • Telco cloud: big promises, undelivered
    • A network running in the cloud
    • The telco cloud promise
    • The less-exciting reality
  • The problem: flawed approaches to deployment
    • Learning from those on the front line
    • A function-first approach to telco cloud
    • A platform-first approach to telco cloud
  • The solution: change, collaboration and integration
    • Multi-vendor telco cloud is preferred
    • The internal transformation problem
    • The need to foster collaboration and integration
    • Standards versus blueprints
    • Insufficient management and orchestration solutions
    • Vendor partnerships and pre-integration
  • Conclusions: A better telco cloud is possible, and 5G makes it an urgent priority