Why telcos need to capture the edge opportunity now

This report is based on an interview programme that STL Partners conducted in July and August 2021. The programme comprised 17 interviews: 13 with operators and 4 with enterprises. More information about the telecoms interviewees can be found below.

Figure 1: Interviewee profiles across telcos and enterprises

We asked operators and enterprises about the role of edge computing within their organisations, as well as their overall technology strategy. We investigated the key use cases they were exploring, their view on ecosystem partnerships and their vertical targeting strategy to understand: How can operators capture the edge opportunity?

Table of Contents

  • Executive Summary
    • Telco edge computing is now
    • Key takeaway: telcos must take a pragmatic approach going forward
  • Preface
  • Introduction
  • Edge progress to date: laying the groundwork
    • Building the 5G foundation has been the priority
    • Private cellular network solutions are gaining traction
    • Operators have been undergoing an education process
    • Clarity around use cases has allowed telcos to more actively engage the ecosystem
  • The inflection point: how to capture demand for edge computing through focused strategies
    • Vertical strategy
    • Horizontal strategy
  • Partnerships must underpin any successful edge strategy
  • Conclusion

Further reading

STL Partners has an extensive catalogue of edge research, which can be found on our Edge Computing hub. These reports provide an overview of edge, an examination of the telco opportunities and challenges in pursuing edge, and the role of 5G in edge. We recommend reading the following articles and reports first to provide sufficient context.

SK Telecom’s journey in commercialising 5G

SK Telecom (SKT), Verizon and Telstra were among the first operators in the world to commercialise 5G networks. SK Telecom and Verizon launched broadband-based propositions in 2018, but it was only in 2019, when 5G smartphones became available, that consumer, business and enterprise customers were really able to experience the networks.

Part 1 of our 3-part series looks at SKT’s journey and how its propositions have developed from launch to the present. It includes an analysis of both consumer and business offerings promoted on SKT’s website to identify the revenue streams that 5G is supporting now – as opposed to revenues that new 5G use cases might deliver in future.

At launch, SKT introduced 5G-specific tariffs that coupled large data allowances with unique apps and services designed to encourage data consumption and demonstrate the advantages of 5G access. 5G plans were more expensive than 4G plans, but the price per MB of 5G data was lower than for 4G, to tempt customers to make the switch.
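The tariff logic can be made concrete with a quick sketch. The figures below are purely hypothetical, chosen only to illustrate how a pricier plan can still carry a lower per-unit data price; they are not SKT’s actual tariffs:

```python
# Hypothetical tariffs, for illustration only (NOT SKT's actual pricing):
# a 5G plan can cost more overall yet charge far less per GB of data,
# provided the data allowance grows much faster than the headline price.
plan_4g = {"price_krw": 50_000, "data_gb": 9}
plan_5g = {"price_krw": 75_000, "data_gb": 150}

def price_per_gb(plan: dict) -> float:
    """Headline monthly price divided by the monthly data allowance."""
    return plan["price_krw"] / plan["data_gb"]

assert plan_5g["price_krw"] > plan_4g["price_krw"]    # 5G plan costs more...
assert price_per_gb(plan_5g) < price_per_gb(plan_4g)  # ...but data is cheaper per GB
```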

SKT’s well-documented approach to 5G has been regarded as inspirational by other telcos, though many consider a similar approach out of reach (e.g. coverage issues may limit their ability to charge a premium, or they may lack 5G value-adding services).

This report examines the market factors that have enabled and constrained SKT’s 5G actions, as it moves to deliver propositions for audiences beyond the early adopters and heavy data users. It identifies lessons in the commercialisation of 5G for those operators that are on their own 5G journeys and those that have yet to start.

5G performance to date

This analysis is based on the latest data available as we went to press in March 2021.

There were 10.9 million 5G subscribers in South Korea at end-November 2020 (15.5% of the total 70.5 million mobile subscriptions in the market, according to the Ministry of Science and ICT) and network coverage is reported to be more than 90% of the population (a figure that was already quoted in March 2020). Subscriber numbers grew by nearly one million in November 2020, boosted by the introduction of the iPhone 12, which sold 600K units that month.

SKT’s share of 5G subscribers was 46% (5.05 million) in November, to which SKT added a further 400K+ in December, reaching 5.48 million by the end of 2020.

The telco took just four and a half months to reach one million 5G subscribers following launch, significantly faster than 4G, which took eight months to reach the same milestone after its commercial launch in 2011.
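As a quick cross-check, the percentages quoted above are internally consistent with the absolute subscriber numbers reported for end-November 2020:

```python
# Cross-check the figures quoted above (Ministry of Science and ICT /
# SK Telecom, end-November 2020), as reported in this analysis.
total_mobile_subs = 70.5e6   # total mobile subscriptions in South Korea
korea_5g_subs = 10.9e6       # 5G subscribers nationwide
skt_5g_subs = 5.05e6         # SKT's 5G subscribers

penetration_5g = korea_5g_subs / total_mobile_subs  # ~0.155 (the quoted 15.5%)
skt_share = skt_5g_subs / korea_5g_subs             # ~0.463 (the quoted 46%)

print(f"5G penetration: {penetration_5g:.1%}")
print(f"SKT share of 5G: {skt_share:.1%}")
```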

SKT quarterly 5G subscriber numbers (millions)

Source: STL Partners, SK Telecom

SKT credits 5G subscriber growth for its 2.8% MNO revenue increase in the year to December 2020; however, the impact on ARPU is less clear. An initial increase in overall ARPU followed the introduction of higher-priced 5G plans at launch, but ARPU has since fallen back slightly, possibly due to the economic effects of COVID-19.

SKT total ARPU trend following 5G launch

Source: STL Partners

In its 2020 year-end earnings call, SKT reported that it was top of the leader board in South Korea’s three customer satisfaction surveys and in the 5G quality assessment by the Ministry of Science and ICT.

As a cautionary note, Hong Jung-min of the ruling Democratic Party reported that 500K 5G users had switched to 4G LTE during August 2020 due to network issues, including limited coverage and slower-than-expected speeds. It is unclear how SKT was affected by this.

 

Table of Contents

  • Executive Summary
    • Recommendations
    • Next steps
  • Introduction
  • 5G performance to date
  • Details of launch
  • Consumer propositions
    • At launch
    • …And now
  • Business and enterprise propositions
    • At launch
    • …And now
  • Analysis of 5G market development
    • What next?
    • mmWave
  • Conclusion
  • Appendix 1

Growing B2B revenues from edge: Five new telco services

Edge computing has sparked significant interest from telcos

Edge computing brings cloud capabilities such as data processing and storage closer to the end user, device, or the source of data. There are two main opportunity areas for telcos in edge computing. Firstly, telcos have an opportunity to provide edge computing via edge data centres at sites on the telecoms network – network edge, sometimes referred to as multi-access edge computing. Secondly, telcos can offer edge-enabled services through compute platforms at the customer premises – on-premise edge.

Although there is an opportunity for telcos to offer new services and an enhanced customer experience to their consumer customer base, much of the edge computing opportunity for telcos is in the B2B segment. We have covered the general strategy operators are taking for edge computing in our previous report Telco edge computing: What’s the operator strategy? and through insights on our Edge Hub. Within enterprise, edge offers a chance for operators to move beyond offering connectivity services and extend into the platform and application space.

However, the market is still young; enterprises are still at an early stage of understanding the potential benefits of edge computing. There is limited availability of network edges; telcos are still deploying sites and few have begun to offer mechanisms to access the edge compute infrastructure within them. As a result, developers are only just starting to build applications to leverage this new infrastructure.

Telcos are still grappling with defining the opportunity. Since adoption is so nascent, many feel that they are not able to prove the commercial case to unlock significant investment. Some operators are pushing ahead by building out edge infrastructure, securing partnerships and launching edge computing services. Nonetheless, even these operators are keeping an open mind to edge and waiting to see what unfolds as the market matures. What is clear is that, with the hyperscalers and others moving into the edge, telcos are increasingly keen to capitalise on the edge opportunity and solidify their position in the market before it’s too late.

The sweet spot opportunity for edge is highly dependent on telcos’ starting points: some have existing capabilities within B2B networking and cloud, partnerships, and strong customer relationships. But for other telcos, the B2B business is at a very early stage. Meanwhile, edge infrastructure build differs across telcos, with some choosing to partner with hyperscalers to create the hardware and software stack within edge data centres while others are opting to build their own stack.

It is therefore critical for telcos to:

  1. Assess whether they can leverage existing B2B services, customers and partners versus where they need to invest to fill the gaps
  2. Understand which factors may affect how successful they are in offering new edge services
  3. Prioritise which services they could offer to B2B customers

In this report, we focus on answering the following questions:

Which B2B services can edge computing add value to? And how ready are telcos to take new edge services to market?

In order to better understand how operators are thinking about edge services and what they are looking to offer today, we interviewed eight technology and strategy leaders working in operators primarily across Europe.

To ensure an open and candid dialogue, we have anonymised their contributions. We would like to take the opportunity to thank those who participated in this research. A summary of the interviewee profiles is provided in the Appendix.

Telcos’ B2B businesses today

As consumer revenues come under increasing pressure, operators are looking to their B2B businesses to provide a new source of revenue growth. The maturity of these businesses varies widely: some have a limited offering focused primarily on phones, SIMs and basic connectivity (particularly mobile-only telcos, e.g. Three UK), while others provide full vertical applications or take on the role of systems integrator (often incumbents or telcos with fixed networks, e.g. DTAG, Vodafone). Many telcos are looking to take on more of the latter role by expanding their B2B offerings and increasing their foothold in the value chain, e.g. by offering managed services. Particularly with the arrival of 5G, they see greater potential to grow revenues through B2B services than through B2C.

Maturity levels of telcos’ B2B business

Table of contents

  • Executive Summary
  • Introduction
  • Strategic principles for B2B telco edge
    • Telcos’ B2B businesses today
    • Three telco strategies for B2B edge
    • On-premise edge and network edge are separate opportunities
    • Telcos are open to partnering with the hyperscalers for edge
  • Five types of B2B edge services
    • Edge-to-cloud networking
    • Private edge infrastructure
    • Network edge platforms
    • Multi-edge and cloud orchestration
    • Vertical solutions
  • Evaluating the opportunity: How should telcos prioritise?
    • It’s not just about technology
    • However, significant value creation does not come easy
    • Telcos should consider new business models to ensure success
  • Next steps for telcos in building B2B edge services
    • Prioritise services to monetise edge
    • Evaluate the role of partners
    • Work closely with customers given that edge is still nascent
  • Appendix
    • Interviewee overview
  • Index

SK Telecom: Lessons in 5G, AI, and adjacent market growth

SK Telecom’s strategy

SK Telecom is the largest mobile operator in South Korea, with a 42% share of the mobile market, and is also a major fixed broadband operator. Its growth strategy is focused on 5G, AI and a small number of related business areas where it sees the potential for new revenue to replace that lost from its core mobile business.

By developing applications based on 5G and AI, it hopes to create additional revenue streams both for its mobile business and for new areas, as it has done in smart home and is starting to do for a variety of smart business applications. In 5G, it is placing an emphasis on indoor coverage and edge computing as a basis for vertical industry applications. Its AI business is centred on NUGU, a smart speaker that also serves as a platform for business applications.

Its other main areas of business focus are media, security, ecommerce and mobility, but it is also active in other fields including healthcare and gaming.

The company takes an active role internationally in standards organisations and commercially, both in its own right and through many partnerships with other industry players.

It is a subsidiary of SK Group, one of the largest chaebols in Korea, which has interests in energy and oil. Chaebols are large family-controlled conglomerates characterised by a high concentration of management power and control. Their ownership structures are often complex owing to the many crossholdings between companies owned by the chaebols themselves and by family members. SK Telecom uses its connections within SK Group to set up ‘friendly user’ trials of new services, such as edge and AI.

While the largest part of the business remains in mobile telecoms, SK Telecom also owns a number of subsidiaries, mostly active in its main business areas, for example:

  • SK Broadband, which provides fixed broadband (ADSL and wireless), IPTV and mobile OTT services
  • ADT Caps, a security business
  • IDQ, which specialises in quantum cryptography (security)
  • 11st, an open market platform for ecommerce
  • SK Hynix, which manufactures memory semiconductors

Few of the subsidiaries are owned outright by SKT; it believes the presence of other shareholders can provide a useful source of further investment and, in some cases, expertise.

SKT was originally the mobile arm of KT, the national operator. It was privatised soon after establishing a cellular mobile network and subsequently acquired by SK Group, a major chaebol with interests in energy and oil, which now has a 27% shareholding. The government pension service owns an 11% share in SKT, Citibank 10%, and 9% is held by SKT itself. The chairman of SK Group has a personal holding in SK Telecom.

Following this introduction, the report comprises three main sections:

  • SK Telecom’s business strategy: range of activities, services, promotions, alliances, joint ventures, investments, which covers:
    • Mobile 5G, Edge and vertical industry applications, 6G
    • AI and applications, including NUGU and Smart Homes
    • New strategic business areas, comprising Media, Security, eCommerce, and other areas such as mobility
  • Business performance
  • Industrial and national context.

Overview of SKT’s activities

Network coverage

SK Telecom has been one of the earliest and most active telcos to deploy a 5G network. It initially created 70 5G clusters in key commercial districts and densely populated areas to ensure a level of coverage suitable for augmented reality (AR) and virtual reality (VR) and plans to increase the number to 240 in 2020. It has paid particular attention to mobile (or multi-access) edge computing (MEC) applications for different vertical industry sectors and plans to build 5G MEC centres in 12 different locations across Korea. For its nationwide 5G Edge cloud service it is working with AWS and Microsoft.

In recognition of the constraints imposed by the spectrum used for 5G, it is also working to ensure good indoor 5G coverage in some 2,000 buildings, including airports, department stores and large shopping malls as well as small-to-medium-sized buildings, using distributed antenna systems (DAS) or its in-house-developed indoor 5G repeaters. It is also working with Deutsche Telekom on trials of the repeaters in Germany. In addition, it has already initiated activities in 6G, an indication of the seriousness with which it is addressing the mobile market.

NUGU, the AI platform

SKT launched its own AI-driven smart speaker, NUGU, in 2016/7, and is using it to support consumer applications such as smart home and IPTV. There are now eight consumer versions of NUGU, and it also serves as a platform for other applications. More recently, SKT has developed several NUGU/AI applications for businesses and civil authorities in conjunction with 5G deployments. It also has an AI-based network management system named Tango.

Although NUGU initially performed well in the market, the subsequent launch of smart speakers by major global players such as Amazon and Google appears to have had a strong negative impact on the product’s recent growth. The absence of published data supports this view, since the company tends to report only good news unless required by law. SK Telecom has responded by developing variants of NUGU for children and other specialist markets, and by making use of the NUGU AI platform for a variety of smart applications. In the absence of published information, it is not possible to form a view on the success of these variants, although the intent appears to be to attract young users and build on their brand loyalty.

SKT has offered smart home products and services since 2015/6. Its smart home portfolio has continually developed in conjunction with an increasing range of partners and is widely recognised as one of the two most comprehensive offerings globally, the other being Deutsche Telekom’s Qivicon. The service appears to be most successful in penetrating the new-build market through property developers.

NUGU is also an AI platform, which is used to support business applications. SK Telecom has also supported the SK Group by providing new AI/5G solutions and opening APIs to other subsidiaries including SK Hynix. Within the SK Group, SK Planet, a subsidiary of SK Telecom, is active in internet platform development and offers development of applications based on NUGU as a service.

Smart solutions for enterprises

SKT continues to experiment with and trial new applications that build on its 5G and AI capabilities for individuals (B2C), businesses and the public sector. During 2019 it established B2B applications making use of 5G, on-prem edge computing and AI, including:

  • Smart factory (real-time process control and quality control)
  • Smart distribution and robot control
  • Smart office (security/access control, virtual docking, AR/VR conferencing)
  • Smart hospital (NUGU for voice commands for patients, AR-based indoor navigation, facial recognition for medical workers to improve security, and investigating possible use of quantum cryptography in the hospital network)
  • Smart cities, e.g. an intelligent transportation system in Seoul, with links to vehicles via 5G, or SK Telecom’s T-Map navigation service for non-5G users.

It is too early to judge whether these B2B smart applications are a success, and we will continue to monitor progress.

Acquisition strategy

SK Telecom has been growing these new business areas over the past few years, both organically and by acquisition. Its entry into the security business has been entirely by acquisition, through which it has bought new revenue to compensate for that lost in the core mobile business. It is too early to assess the ongoing impact and success of these businesses as part of SK Telecom.

Acquisitions in general have a mixed record of success. SK Telecom’s usual approach of acquiring a controlling interest and investing in its acquisitions while keeping them as separate businesses is one which, together with the right management approach from the parent, tends to minimise disruption to the acquired business and therefore increases the likelihood of longer-term success. It also allows for investment from other sources, reducing the cost and risk to SK Telecom as the acquirer. As a counterpoint, however, M&A in this style does not help change practices in the rest of the business.

However, it has also shown willingness to change its position when appropriate, either by sale or by a change in investment strategy. For example, through its subsidiary SK Planet it acquired Shopkick, a shopping loyalty rewards business, in 2014, but sold it in 2019 for the price it had paid. It took a different approach with its quantum technologies activity, originally set up in-house in 2011, which it rolled into IDQ following that acquisition in 2018.

SKT has also recently entered into partnerships and agreements concerning the following areas of business:

 

Table of Contents

  • Executive Summary
  • Introduction and overview
    • Overview of SKT’s activities
  • Business strategy and structure
    • Strategy and lessons
    • 5G deployment
    • Vertical industry applications
    • AI
    • SK Telecom ‘New Business’ and other areas
  • Business performance
    • Financial results
    • Competitive environment
  • Industry and national context
    • International context

The future of assurance: How to deliver quality of service at the edge

Why does edge assurance matter?

The assurance of telecoms networks is one of the most important application areas for analytics, automation and AI (A3) across telcos’ operations. In a previous report estimating the potential value of A3 across telcos’ core business, including networks, customer channels, sales and marketing, we estimated that service assurance accounts for nearly 10% of the total potential value of A3 (see the report A3 for telcos: Mapping the financial value). The only area of greater combined value was resource management across telcos’ existing networks and planned deployments.

Within service assurance, the biggest value buckets are self-healing networks, impact on customer experience and churn, and dynamic SLA management. This estimate was developed through a bottom-up analysis of specific applications for automation, analytics and AI within each segment, and their potential to deliver cost savings or revenue uplift for an average-sized telecoms operator (see the original report for the full methodology).
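The bottom-up method can be sketched as follows. The bucket names come from the report, but every figure below is a placeholder invented for illustration; see the original report for STL Partners’ actual methodology and estimates:

```python
# Illustrative bottom-up sizing of A3 value in service assurance.
# The value buckets are those named above; ALL figures are invented
# placeholders for a hypothetical average-sized operator, NOT STL
# Partners' estimates.
value_buckets = {
    # bucket: (annual cost saving, annual revenue uplift), US$ millions
    "self-healing networks":                (30.0, 5.0),
    "customer experience and churn impact": (10.0, 15.0),
    "dynamic SLA management":               (5.0, 10.0),
}

def total_value(buckets: dict) -> float:
    """Sum cost savings and revenue uplift across all assurance applications."""
    return sum(saving + uplift for saving, uplift in buckets.values())

print(f"Total A3 value in service assurance: US${total_value(value_buckets):.0f}m")
```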

Breakdown of the value of A3 in service assurance, US$ millions

Source: STL Partners, Charlotte Patrick Consult

While this previous research demonstrates there is significant value for telcos in improving assurance on their legacy networks, over the next five years edge assurance will become an increasingly important topic for operators.

What we mean by edge assurance is the new capabilities operators will require to enable visibility across much more distributed, cloud-based networks, and monitoring of a wider and more dynamic range of services and devices, in order to deliver high quality experience and self-healing networks. This need is driven by operators’ accelerating adoption of virtualisation and software-defined networking, for example with increasing experimentation and excitement around open RAN, as well as some operators’ ambitions to play a significant role in the edge computing market (see our report Telco edge computing: How to partner with hyperscalers for analysis of telcos’ ambitions in edge computing).

To give an idea of the scale of the challenge ahead of operators in assuring increasingly distributed network functions and infrastructure, STL Partners expects a Tier-1 operator to deploy more than 8,000 edge servers to support virtual RAN by 2025 (see Building telco edge infrastructure: MEC, private LTE and vRAN for the full forecasts).

Forecast of Tier 1 operator edge servers by domain

Source: STL Partners

Given this dramatic shift in network operations, without new edge assurance capabilities:

  • A telco will not be able to understand where issues are occurring across the (virtualised) network and the underlying infrastructure, and diagnose the root cause
  • The promises of cost saving and better customer experience from self-healing networks will not be fully realised in next-generation networks
  • Potential revenue generators such as network slicing and URLLC will be of limited value to customers if the telco can’t offer sufficient SLAs on reliability, latency and visibility
  • It will not be possible to make promises to ecosystem partners around service quality.

Despite the significant number of unknowns in the future of telco activities around 5G, IoT and edge computing, this research ventures a framework to allow telcos to plan for their future service assurance needs. The first section describes the drivers affecting telcos’ decision-making around the types of assurance they need at the edge. The second sets out the products and capabilities that will be required, and the types of assurance products that telcos could create and monetise.

Table of contents

  • Executive Summary
    • The three main telco strategies in edge assurance
    • What exactly do telcos need to assure?
  • Why edge assurance matters
  • Factors affecting edge assurance development
    • What are telcos measuring?
    • Internal assurance applications
    • Location of measurement and analysis
    • Ownership status of equipment and assets being assured
    • Requirements of external assurance users
    • Requirements from specific applications
    • Telco business model
  • The status of edge assurance and recommendations for telcos
    • Edge assurance vendors
    • Telco assurance products
  • Appendix

Telco edge computing: How to partner with hyperscalers

Edge computing is getting real

Hyperscalers such as Amazon, Microsoft and Google are rapidly increasing their presence in the edge computing market by launching dedicated products, establishing partnerships with telcos on 5G edge infrastructure and embedding their platforms into operators’ infrastructure.

Many telecoms operators, which need cloud infrastructure and platform support to run their edge services, have welcomed the partnership opportunity. However, they have yet to develop clear strategies on how to use these partnerships to establish a stronger proposition in the edge market, move up the value chain and play a role beyond hosting infrastructure and delivering connectivity. Operators that miss out on the partnership opportunity, or fail to fully utilise it to develop and differentiate their capabilities and resources, risk being reduced to connectivity providers with a limited role in the edge market, or being late to the game.

Edge computing or multi-access edge computing (MEC) enables processing data closer to the end user or device (i.e. the source of data), on physical compute infrastructure that is positioned on the spectrum between the device and the internet or hyperscale cloud.

Telco edge computing is generally defined as distributed compute managed by a telecoms operator. This includes running workloads on customer premises as well as at locations within the operator’s network. Caching and processing data closer to the customer allows both operators and their customers to benefit from reduced backhaul traffic and costs. Depending on where the computing resources reside, edge computing can be broadly divided into:

  • Network edge, which includes sites or points of presence (PoPs) owned by a telecoms operator, such as base stations, central offices and other aggregation points on the access and/or core network.
  • On-premise edge where the computing resources reside at the customer side, e.g. in a gateway on-site, an on-premises data centre, etc. As a result, customers retain their sensitive data on-premise and enjoy other flexibility and elasticity benefits brought by edge computing.
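The two deployment models can be expressed as a minimal taxonomy. This is only an illustrative sketch; the class and site names below are our own invention, not terms from the report:

```python
from dataclasses import dataclass
from enum import Enum

class EdgeTier(Enum):
    """Where the compute resides, per the two-way split described above."""
    NETWORK_EDGE = "network edge"        # telco-owned sites/PoPs: base stations,
                                         # central offices, aggregation points
    ON_PREMISE_EDGE = "on-premise edge"  # customer side: on-site gateway,
                                         # on-premises data centre

@dataclass
class EdgeSite:
    name: str       # hypothetical site identifier
    tier: EdgeTier

# Hypothetical example sites, classified by tier
sites = [
    EdgeSite("central-office-01", EdgeTier.NETWORK_EDGE),
    EdgeSite("factory-gateway-07", EdgeTier.ON_PREMISE_EDGE),
]
network_edge_sites = [s for s in sites if s.tier is EdgeTier.NETWORK_EDGE]
```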

Our overview on edge computing definitions, network structure, market opportunities and business models can be found in our previous report Telco Edge Computing: What’s the operator strategy?

The edge computing opportunity for operators and hyperscalers

Many operators see edge computing as a good opportunity to leverage their existing assets and resources to innovate and move up the value chain. They aim to expand their services and revenue beyond connectivity and enter the platform and application space. By deploying computing resources at the network edge, operators can offer infrastructure-as-a-service and alternative applications and solutions for enterprises. Edge computing, as a distributed compute structure and an extension of the cloud, also supports the operators’ own journey towards virtualising the network and running internal operations more efficiently.

Cloud hyperscalers, especially the biggest three – Amazon Web Services (AWS), Microsoft Azure and Google – are at the forefront of the edge computing market. In recent years, they have worked to extend their influence beyond their public clouds and move the data acquisition point closer to physical devices, including efforts to integrate their stacks into IoT devices and network gateways and to support private and hybrid cloud deployments. More recently, hyperscalers have taken another step towards customers at the edge by launching platforms dedicated to telecoms networks and enabling integration with 5G networks. The latest of these products include Wavelength from AWS, Azure Edge Zones from Microsoft and Anthos for Telecom from Google Cloud. Details on these products are provided later in the report.

From competition to coopetition

Both hyperscalers and telcos are among the top contenders to lead the edge market. However, each lacks a significant piece of the stack that the other has: the cloud platform in the operators’ case, and the physical locations in the hyperscalers’ case. Initially, operators and hyperscalers were seen as competitors racing to enter the market through different approaches. This rivalry contributed to the emergence of new types of stakeholder, including independent mini data centre providers such as Vapor IO and EdgeConnex, and platform start-ups such as MobiledgeX and Ori Industries.

However, operators acknowledge that even if they do own the edge clouds, these still need to be supported by hyperscaler clouds to create a distributed cloud. To fuel the edge market and build its momentum, operators will, for the most part, work with the cloud providers. Partnerships between operators and hyperscalers are starting to take shape and define the market, affecting short- and long-term edge computing strategies for operators, hyperscalers and other players alike.

Figure 1: Major telco-hyperscaler edge partnerships

Source: STL Partners analysis

What does it mean for telcos?

Going to market alone is not an attractive option for either operators or hyperscalers at the moment, given the high investment requirement without a guaranteed return. The partnerships between two of the biggest forces in the market will provide the necessary push for the use cases to be developed and enterprise adoption to be accelerated. However, as markets grow and change, so do the stakeholders’ strategies and relationships between them.

Since the emergence of cloud computing and the development of the digital technologies market, operators have faced tough competition from internet players, including hyperscalers, who have managed to remain agile while building a sustained appetite for innovation and market disruption. Edge computing is no exception: hyperscalers are moving rapidly to define and own the biggest share of the edge market.

Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market while partnering with hyperscalers in edge computing.

Table of contents

  • Executive Summary
    • Operators’ roles along the edge computing value chain
    • Building a bigger ecosystem and pushing market adoption
    • How partnerships can shape the market
    • What next?
  • Introduction
    • The edge computing opportunity for operators and hyperscalers
    • From competition to coopetition
    • What does it mean for telcos?
  • Overview of the telco-hyperscalers partnerships
    • Explaining the major roles required to enable edge services
    • The hyperscaler-telco edge commercial model
  • Hyperscalers’ edge strategies
    • Overview of hyperscalers’ solutions and activities at the edge
    • Hyperscalers’ approach to edge sites and infrastructure acquisition
  • Operators’ edge strategies and their roles in the partnerships
    • Examples of operators’ edge computing activities
    • Telcos’ approach to integrating edge platforms
  • Conclusion
    • Infrastructure strategy
    • Platform strategy
    • Verticals and ecosystem building strategy

 



Building telco edge infrastructure: MEC, Private LTE & VRAN

Reality check: edge computing is not yet mature, and much is still to be decided

Edge computing is still a maturing domain. STL Partners has written extensively on the topic of edge computing over the last four years. Within that timeframe, we have seen significant change in terminology, attitudes and approaches to the topic from telecoms and adjacent industries. Plans for building telco edge infrastructure have also evolved.

Within the past twelve months, we’ve seen high-profile partnerships between hyperscale cloud providers (Amazon Web Services, Microsoft and Google) and telecoms operators that are likely to catalyse the industry and accelerate the route to market. We’ve also seen early movers within the industry (such as SK Telecom) developing MEC platforms to enable access to their edge infrastructure.

In the course of this report, we will highlight which domains will drive early adoption for edge, and the potential roll out we could see over the next 5 years if operators move to capitalise on the opportunity. However, to start, it is important to evaluate the situation today.

Commercial deployments of edge computing are rare, and most operators are still in the exploration phase. Many have not committed, and will not commit, to rolling out edge infrastructure until they have seen evidence from early movers that it is a genuine opportunity for the industry. For others, the prospect of additional capex investment in edge infrastructure, on top of their 5G roll-out plans, is a difficult commitment to make.

Where is “the edge”?

There is no one clear definition of edge computing. Depending on the world you are coming from (Telco? Application developer? Data centre operator? Cloud provider? etc.), you are likely to define it differently. In practice, we know that even within these organisations there are differences between technical and commercial teams around the concept and terminology used to describe “the edge”.

For the purposes of this paper, we will discuss edge computing primarily from the perspective of a telecoms operator. As such, we focus on edge infrastructure that will be rolled out within operators’ network infrastructure or that they will play a role in connecting. This may mean adding servers to an existing technical space (such as a central office), or investing in new micro data centres. The servers may be bought, installed and managed by the telco itself, or by a third party, but in all cases the real estate (the physical location, together with power and cooling) is owned either by the telecoms operator or by the enterprise buying an edge-enabled solution.

Operators have a range of options for where and how they might develop edge computing sites. The graphic below maps some of the potential physical locations for an edge site. In this report, STL Partners forecasts edge infrastructure deployments between 2020 and 2024, by type of operator, use-case domain, edge location and type of computing.

There is a spectrum of edge infrastructure in which telcos may invest

Source: STL Partners

This paper primarily draws on discussions with operators and others within the edge ecosystem conducted between February and March 2020. We interviewed a range of operators, and a range of job roles within them, to gain a snapshot of the existing attitudes and ambitions within the industry to shape our understanding of how telcos are likely to build out edge infrastructure.



Table of Contents

  • Executive Summary
  • Preface
  • Reality check: edge computing is not yet mature, and much is still to be decided
    • Reality #1: Organisationally, operators are still divided
    • Reality #2: The edge ecosystem is evolving fast
    • Reality #3: Operators are trying to predict, respond to and figure out what the “new normal” will be post COVID-19
  • Edge computing: key terms and definitions
    • Where is “the edge”?
    • What applications & use cases will run at edge sites?
    • What is inside a telco edge site?
  • How edge will play out: 5-year evolution
    • Modelling exercise: converting hype into numbers
    • Our findings: edge deployments won’t be very “edgy” in 2024
    • Short-term adoption of vRAN is the driving factor
    • New revenues from MEC remain a longer-term opportunity
    • Short-term adoption is focused on efficient operations, but revenue opportunity has not been dismissed
  • Addressing the edge opportunity: operators can be more than infrastructure providers
  • Conclusions: practical recommendations for operators

Telco edge computing: What’s the operator strategy?


Edge computing can help telcos to move up the value chain

The edge computing market and the technologies enabling it are rapidly developing and attracting new players, providing new opportunities to enterprises and service providers. Telco operators are eyeing the market and looking to leverage the technology to move up the value chain and generate more revenue from their networks and services. Edge computing also represents an opportunity for telcos to extend their role beyond offering connectivity services and move into the platform and the application space.

However, operators will face tough competition from other market players such as cloud providers, who are moving rapidly to define and own the biggest share of the edge market. Industrial solution providers, such as Bosch and Siemens, are similarly investing in their own edge services. Telcos are also dealing with technical and business challenges as they venture into the new market, while trying to position themselves and define their strategies accordingly.

Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market.

Following this introduction, we focus on:

  1. Edge terminology and structure, explaining common terms used within the edge computing context, where the edge resides, and the role of edge computing in 5G.
  2. An overview of the edge computing market, describing different types of stakeholders, current telecoms operators’ deployments and plans, competition from hyperscale cloud providers and the current investment and consolidation trends.
  3. Telcos’ challenges in addressing the edge opportunity: the technical, organisational and commercial challenges given the market context.
  4. Potential use cases and business models for operators, also exploring possible scenarios of how the market is going to develop and operators’ likely positioning.
  5. A set of recommendations for operators that are building their strategy for the edge.


What is edge computing and where exactly is the edge?

Edge computing brings cloud services and capabilities including computing, storage and networking physically closer to the end-user by locating them on more widely distributed compute infrastructure, typically at smaller sites.

One could argue that edge computing has existed for some time – local infrastructure has been used for compute and storage, be it end-devices, gateways or on-premises data centres. However, edge computing, or edge cloud, refers to bringing the flexibility and openness of cloud-native infrastructure to that local infrastructure.

In contrast to hyperscale cloud computing, where all data is sent to central locations to be processed and stored, edge computing processes data locally. This aims to reduce the time and save the bandwidth needed to send and receive data between applications and the cloud, improving the performance of both the network and the applications. This does not mean that edge computing is an alternative to cloud computing. Rather, it is an evolutionary step that complements the current cloud computing infrastructure and offers more flexibility in executing and delivering applications.
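The bandwidth saving can be made concrete with a back-of-envelope calculation. Here is a minimal sketch, assuming a video-analytics workload with illustrative (not measured) figures:

```python
# Rough bandwidth arithmetic illustrating the "save bandwidth" point above.
# All figures are assumptions for the sake of the example, not measured data.

frames_per_day = 30 * 60 * 60 * 24   # a camera streaming at 30 frames/second
raw_frame_kb = 200                   # assumed size of one compressed frame

# Sending everything to a central cloud:
raw_to_cloud_gb = frames_per_day * raw_frame_kb / 1e6

# Pre-processing at an edge node and uploading only the ~0.5% of frames
# assumed to contain an event of interest:
edge_to_cloud_gb = raw_to_cloud_gb * 0.005

print(f"central: {raw_to_cloud_gb:.1f} GB/day, via edge: {edge_to_cloud_gb:.1f} GB/day")
```

Even with generous assumptions, local filtering at the edge cuts the traffic sent upstream by two orders of magnitude, which is the economic core of the argument above.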

Edge computing offers mobile operators several opportunities such as:

  • Differentiating service offerings using edge capabilities
  • Providing new applications and solutions using edge capabilities
  • Enabling customers and partners to leverage the distributed computing network in application development
  • Improving network performance and achieving efficiencies / cost savings

As edge computing technologies and definitions are still evolving, different terms are sometimes used interchangeably or have been associated with a certain type of stakeholder. For example, mobile edge computing is often used within the mobile network context and has evolved into multi-access edge computing (MEC) – adopted by the European Telecommunications Standards Institute (ETSI) – to include fixed and converged network edge computing scenarios. Fog computing is also often compared to edge computing; the former includes running intelligence on the end-device and is more IoT focused.

These are some of the key terms that need to be identified when discussing edge computing:

  • Network edge refers to edge compute locations that are at sites or points of presence (PoPs) owned by a telecoms operator, for example at a central office in the mobile network or at an ISP’s node.
  • Telco edge cloud is mainly defined as distributed compute managed by a telco. This includes running workloads on customer premises equipment (CPE) at customers’ sites, as well as at locations within the operator network such as base stations, central offices and other aggregation points in the access and/or core network. Caching and processing data closer to the customer allows both operators and their customers to benefit from reduced backhaul traffic and costs.
  • On-premise edge computing refers to computing resources residing on the customer side, e.g. in an on-site gateway, an on-premises data centre, etc. As a result, customers retain their sensitive data on-premise while enjoying the other flexibility and elasticity benefits brought by edge computing.
  • Edge cloud is used to describe the virtualised infrastructure available at the edge. It creates a distributed version of the cloud with some flexibility and scalability at the edge. This flexibility allows it to have the capacity to handle sudden surges in workloads from unplanned activities, unlike static on-premise servers. Figure 1 shows the differences between these terms.

Figure 1: Edge computing types


Source: STL Partners

Network infrastructure and how the edge relates to 5G

Discussions of edge computing strategies and the edge market are often linked to 5G. Both technologies have the overlapping goals of improving performance and throughput and reducing latency for applications such as AR/VR, autonomous vehicles and IoT. 5G improves speed by increasing spectral efficiency, offering the potential of much higher speeds than 4G. Edge computing, on the other hand, reduces latency by allocating resources closer to the application, shortening the time required for data processing. When combined, edge and 5G can help to achieve round-trip latency below 10 milliseconds.
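The sub-10 millisecond claim can be sanity-checked with a simple round-trip budget. All figures below are assumptions for illustration, not measurements:

```python
# Illustrative round-trip latency budget (all numbers assumed, in milliseconds).

def round_trip_ms(radio_ms, transport_ms, processing_ms):
    """Total round trip: radio access + transport to the compute site + processing."""
    return radio_ms + transport_ms + processing_ms

# 4G to a distant central cloud: slow radio AND a long transport path
lte_central = round_trip_ms(radio_ms=30, transport_ms=40, processing_ms=10)  # 80 ms

# 5G to the same central cloud: the radio improves, the transport path does not
nr_central = round_trip_ms(radio_ms=4, transport_ms=40, processing_ms=10)    # 54 ms

# 5G to an edge node a few hops away: only now does the total drop below 10 ms
nr_edge = round_trip_ms(radio_ms=4, transport_ms=2, processing_ms=3)         # 9 ms
```

The point of the sketch: improving the radio alone still leaves a long transport path to a central cloud; only moving the compute closer as well brings the total under 10 milliseconds.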

While 5G deployment is yet to accelerate and reach ubiquitous coverage, the edge can be utilised in some places to reduce latency where needed. There are two reasons why the edge will be part of 5G:

  • First, it has been included in the 5G standards (3GPP Release 15) to enable ultra-low latency, which cannot be achieved through improvements in the radio interface alone.
  • Second, operators are in general taking a slow and gradual approach to 5G deployment which means that 5G coverage alone will not provide a big incentive for developers to drive the application market. Edge can be used to fill the network gaps to stimulate the application market growth.

The network edge can be used for applications that need coverage (i.e. accessible anywhere) and can be moved across different edge locations to scale capacity up or down as required. Where an operator decides to establish an edge node depends on:

  • Application latency needs. Some applications, such as streamed virtual reality or mission-critical applications, will require locations close enough to their users to enable sub-50 millisecond latency.
  • Current network topology. Based on the operators’ network topology, there will be selected locations that can meet the edge latency requirements for the specific application under consideration in terms of the number of hops and the part of the network it resides in.
  • Virtualisation roadmap. The operator needs to consider its virtualisation roadmap and where data centre facilities are planned to be built to support future network functions.
  • Site and maintenance costs. The economies of scale enjoyed by cloud computing may diminish as sites proliferate at the edge: there is a significant difference between maintaining one or two large data centres and maintaining hundreds across a country.
  • Site availability. Some operators’ edge compute deployment plans assume the nodes reside in the same facilities as those which host their NFV infrastructure. However, many telcos are still in the process of renovating these locations to turn them into (mini) data centres so aren’t yet ready.
  • Site ownership. Sometimes the preferred edge location is within sites that the operators have limited control over, whether that is in the customer premise or within the network. For example, in the US, the cell towers are owned by tower operators such as Crown Castle, American Tower and SBA Communications.
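The considerations above amount to a screening exercise over candidate locations. Here is a minimal sketch, with hypothetical sites, latency estimates and readiness flags:

```python
# Screening candidate edge locations against an application's latency target.
# Site names, latency estimates and readiness flags are all hypothetical.

candidate_sites = [
    {"name": "base station",   "est_rtt_ms": 5,  "ready": False},  # not yet a (mini) data centre
    {"name": "central office", "est_rtt_ms": 15, "ready": True},
    {"name": "regional DC",    "est_rtt_ms": 35, "ready": True},
    {"name": "national DC",    "est_rtt_ms": 70, "ready": True},
]

def viable_sites(sites, max_rtt_ms):
    """Keep sites that meet the latency target and are ready to host workloads."""
    return [s["name"] for s in sites if s["est_rtt_ms"] <= max_rtt_ms and s["ready"]]

# A sub-50 ms application (e.g. streamed VR) rules out the national data centre,
# and the base station is excluded until it is renovated for hosting.
print(viable_sites(candidate_sites, max_rtt_ms=50))
```

On these assumptions, only the central office and regional data centre survive the screen, which is how latency needs, readiness and topology interact in practice.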

The potential locations for edge nodes can be mapped across the mobile network in four levels as shown in Figure 2.

Figure 2: Possible locations for edge computing


Source: STL Partners

Table of Contents

  • Executive Summary
    • Recommendations for telco operators at the edge
    • Four key use cases for operators
    • Edge computing players are tackling market fragmentation with strategic partnerships
    • What next?
  • Table of Figures
  • Introduction
  • Definitions of edge computing terms and key components
    • What is edge computing and where exactly is the edge?
    • Network infrastructure and how the edge relates to 5G
  • Market overview and opportunities
    • The value chain and the types of stakeholders
    • Hyperscale cloud provider activities at the edge
    • Telco initiatives, pilots and plans
    • Investment and merger and acquisition trends in edge computing
  • Use cases and business models for telcos
    • Telco edge computing use cases
    • Vertical opportunities
    • Roles and business models for telcos
  • Telcos’ challenges at the edge
  • Scenarios for network edge infrastructure development
  • Recommendation
  • Index


Cloud gaming: What’s the telco play?


Drivers for cloud gaming services

Although many people still think of PlayStation and Xbox when they think about gaming, the console market represents only a third of the global games market. From its arcade and console-based beginnings, the gaming industry has come a long way. Over the past 20 years, one of the most significant market trends has been the growth of casual gaming. Whereas hardcore gamers are passionate about frequent play and will pay more to play premium games, casual gamers play to pass the time. With the rapid adoption over the past decade of smartphones capable of supporting gaming applications, the population of casual and occasional gamers has risen dramatically.

This trend has seen the advent of free-to-play business models for games, further expanding the industry’s reach. In our earlier report, STL estimated that 45% of the population in the U.S. are either casual gamers (between 2 and 5 hours a week) or occasional gamers (up to 2 hours a week). By contrast, we estimated that hardcore gamers (more than 15 hours a week) make up 5% of the U.S. population, while regular players (5 to 15 hours a week) account for a further 15% of the population.

The expansion in the number of players is driving interest in ‘cloud gaming’. Instead of games running on a console or PC, cloud gaming involves streaming games onto a device from remote servers. The game itself is stored and run on remote compute, with the results live-streamed to the player’s device. This has the important advantage of eliminating the need for players to purchase dedicated gaming hardware: the quality of the internet connection becomes the most important contributor to the gaming experience. While this type of gaming is still in its infancy, and faces a number of challenges, many companies are now entering the cloud gaming fold in an effort to capitalise on the new opportunity.

5G can support cloud gaming traffic growth

Cloud gaming requires not just high bandwidth and low latency, but also a stable connection with consistently low latency (i.e. minimal jitter). In theory, 5G promises to deliver stable, ultra-low latency. In practice, an enormous amount of infrastructure investment will be required for a fully loaded 5G network to perform as well as end-to-end fibre. 5G networks operating in the lower frequency bands would likely buckle under the load if many gamers in a cell each needed a continuous 25Mbps stream. While 5G in millimetre-wave spectrum would have more capacity, it would require small cells and other mechanisms to ensure indoor penetration, given the spectrum is short-range and can be blocked by obstacles such as walls.
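A back-of-envelope check makes the capacity concern concrete. The cell capacities below are illustrative assumptions, not vendor specifications:

```python
# How many concurrent cloud-gaming streams can one cell carry?
# Cell capacities are illustrative assumptions, not vendor figures.

STREAM_MBPS = 25  # continuous per-gamer stream assumed in the text

def max_concurrent_streams(cell_capacity_mbps, stream_mbps=STREAM_MBPS):
    """Integer number of full-rate streams a cell of given capacity can sustain."""
    return cell_capacity_mbps // stream_mbps

low_band = max_concurrent_streams(250)   # assumed low-band 5G cell -> 10 gamers
mmwave = max_concurrent_streams(3000)    # assumed mmWave small cell -> 120 gamers
```

On these assumptions, a low-band cell saturates with roughly ten concurrent gamers, which is why mmWave capacity (with its indoor-penetration caveats) matters for cloud gaming.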


A complicated ecosystem

As explained in our earlier report, Cloud gaming: New opportunities for telcos?, the cloud gaming ecosystem is beginning to take shape. This is being accelerated by the growing availability of fibre and high-speed broadband, which is now being augmented by 5G and, in some cases, edge data centres. Early movers in cloud gaming are offering a range of services, from gaming rigs to game development platforms and cloud computing infrastructure, or an amalgamation of these.

One of the main attractions of cloud gaming is the potential hardware savings for gamers. High-end PC gaming can be an extremely expensive hobby: gaming PCs range from £500 for the very cheapest to over £5,000 at the very top end. They also require frequent hardware upgrades to meet the increasing processing demands of new gaming titles. With cloud gaming, gamers can access the latest graphics processing units at a much lower cost.

By some estimates, cloud gaming could deliver a high-end gaming environment at a quarter of the cost of a traditional console-based approach, as it would eliminate the need for retailing, packaging and delivering hardware and software to consumers, while also tapping the economies of scale inherent in the cloud. However, in STL Partners’ view that is a best-case scenario and a 50% reduction in costs is probably more realistic.

STL Partners believes adoption of cloud gaming will be gradual and piecemeal for the next few years, as console gamers work their way through another generation of consoles and casual gamers are reluctant to commit to a monthly subscription. However, from 2022, adoption is likely to grow rapidly as cloud gaming propositions improve.

At this stage, it is not yet clear who will dominate the value chain, if anyone. Will the “hyperscalers” be successful in creating a ‘Netflix’ for games? Google is certainly trying to do this with its Stadia platform, which has yet to gain any real traction, due to both its limited games library and its perceived technological immaturity. The established players in the games industry, such as EA, Microsoft (Xbox) and Sony (PlayStation), have launched cloud gaming offerings, or are, at least, in the process of doing so. Some telcos, such as Deutsche Telekom and Sunrise, are developing their own cloud gaming services, while SK Telecom is partnering with Microsoft.

What telcos can learn from Shadow’s cloud gaming proposition

The rest of this report explores the business models being pursued by cloud gaming providers. Specifically, it looks at cloud gaming company Shadow and how it fits into the wider ecosystem, before evaluating how its distinct approach compares with that of the major players in online entertainment, such as Sony and Google. The second half of the report considers the implications for telcos.

Table of Contents

  • Executive Summary
  • Introduction
  • Cloud gaming: a complicated ecosystem
    • The battle of the business models
    • The economics of cloud gaming and pricing models
    • Content offering will trump price
    • Cloud gaming is well positioned for casual gamers
    • The future cloud gaming landscape
  • 5G and fixed wireless
  • The role of edge computing
  • How and where can telcos add value?
  • Conclusions


Telco edge computing: Turning vision into practice

The emerging opportunity for edge compute

There is ongoing interest in the telecoms industry in edge computing. The key rationale is that telcos – through their distributed network assets – are in a unique position to push workloads closer to devices, reducing latency and/or the volume of data sent to the cloud, thereby enabling new experiences and use cases while enhancing existing ones.

After years of centralising workloads in the public cloud, complementary demand is emerging for more distributed compute. This is good news for telcos, as it suggests the time is ripe for them to turn their ambition to edge computing. By exploiting their own connectivity, unique network APIs and existing distributed real estate, telcos are well placed to play a strong role in distributed and edge computing ecosystems.

Telcos’ excitement around edge is fuelled by new differentiation and revenue opportunities in the dynamic application-developer ecosystem, which has hitherto been dominated by ever more sophisticated and technically advanced public clouds. Furthermore, underlying trends in cloud computing are increasingly promising for distributed (edge) computing:

  • Hybrid and multi-cloud models and technologies will continue to facilitate more distributed compute scenarios beyond hyperscale-only and on-premise-only.
  • Lightweight compute models will enable the deployment of cloud-workloads on a smaller footprint (e.g. train AI models in the cloud and execute them at the edge, such as in a smartphone or a connected car). For example, containers and “serverless” compute models make it possible to run workloads more efficiently and elastically than virtual machines.
  • The adoption of more platform-agnostic deployment models (such as containers) will facilitate the shifting and moving of workloads within distributed and edge cloud environments.
  • Proliferation of edge gateways and IoT devices will drive processing and analytics outside the datacentre and closer to the customer (premises).
  • Regarding security, a more distributed computing model is well-suited to defending against certain types of attacks (e.g. DDOS). Furthermore, if/when breaches do occur, these can be quarantined to an edge “cloudlet”, limiting the potential damage and undermining the economics of an attack.
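The “train in the cloud, execute at the edge” pattern described in the bullets above can be sketched with a toy model standing in for a real AI workload; every figure and name here is illustrative:

```python
import pickle

# --- central cloud: fit a trivial linear "model" y = a*x + b on training data ---
xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]
n = len(xs)
a = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
b = (sum(ys) - a * sum(xs)) / n
artifact = pickle.dumps({"a": a, "b": b})   # small artifact shipped to the edge

# --- edge site (or device): load the artifact and run inference locally ---
model = pickle.loads(artifact)
predict = lambda x: model["a"] * x + model["b"]
print(predict(10))  # no round trip to the central cloud per inference
```

The artifact shipped to the edge is small and self-contained, so inference runs locally without a per-request round trip to the central cloud.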

Our findings in this report are informed by a research programme STL Partners has conducted since January 2018, supported by and in cooperation with Aricent. For this research, STL Partners interviewed both telcos and technology companies globally about their views on and current efforts related to edge computing. The research forms part of STL Partners’ ongoing research work and consulting assignments around the telco edge cloud.

Key questions arising for telcos

Notwithstanding the strategic opportunity, telcos face some big questions in formulating edge initiatives. These include:

“What is the business case for telco edge – where is the money?”

“Will massive demand for low-latency compute drive demand from core/central to edge compute?”

“How can we compete with the big cloud players – won’t they expand and control the edge too?”

“How should we play in Enterprise edge – should we offer edge services on customer premises?”

“How can we architect and charge for different edge services – those requiring expensive, specialised hardware for accelerated computing to process machine learning/AI workloads?”

“What edge services should we offer and through what distribution channels?”

These are (real examples of) questions that telcos must address in defining and delivering edge services. This report provides a framework to tackle these (and other) questions in a structured way. We will revisit these questions (and the answers) throughout the report.

Edge computing: Five viable telco business models


This report has been produced independently by STL Partners, in co-operation with Hewlett Packard Enterprise and Intel.

Introduction

The idea behind Multi-Access Edge Computing (MEC) is to make compute and storage capabilities available to customers at the edge of communications networks. This will mean that workloads and applications are closer to customers, potentially enhancing experiences and enabling new services and offers. As we have discussed in our recent report, there is much excitement within telcos around this concept:

  • MEC promises to enable a plethora of vertical and horizontal use cases (e.g. leveraging low latency), implying significant commercial opportunities. This is critical as the whole industry is trying to uncover new sources of revenue, ideally where operators may be able to build a sustainable advantage.
  • MEC should also theoretically fit with telcos’ 5G and SDN/NFV deployments, which will run certain virtualised network functions in a distributed way, including at the edge of networks. In turn, MEC potentially benefits from the capabilities of a virtualised network to extract the full potential of distributed computing.

Figure 1: Defining MEC

Source: STL Partners

However, despite the excitement around the potentially transformative impact of MEC on telcos, viable commercial models that leverage MEC remain unclear and undefined. As an added complication, a diverse ecosystem is emerging around edge computing – of which telcos’ MEC is only one part.

From this, the following key questions emerge:

  • Which business models will allow telcos to realise the various potential MEC use cases in a commercially viable way?
  • What are the right MEC business models for which telco?
  • What is needed for success? What are the challenges?

Contents:

  • Preface
  • Introduction
  • The emerging edge computing ecosystem
  • Telcos’ MEC opportunity
  • Hyperscale cloud providers are an added complication for telcos
  • How should telcos position themselves?
  • 5 telco business models for MEC
  • Business model 1: Dedicated edge hosting
  • Business model 2: Edge IaaS/PaaS/NaaS
  • Business model 3: Systems integration
  • Business model 4: B2B2X solutions
  • Business model 5: End-to-end consumer retail applications
  • Mapping use cases to business models
  • Some business models will require a long-term view on the investment
  • Which business models are right for which operator and which operator division?
  • Conclusion

Figures:

  • Figure 1: Defining MEC
  • Figure 2: MEC potential benefits
  • Figure 3: Microsoft’s new mantra – “Intelligent Cloud, Intelligent Edge”
  • Figure 4: STL Partners has identified 5 telco business models for MEC
  • Figure 5: The dedicated edge hosting value
  • Figure 6: Quantified example – Dedicated edge hosting
  • Figure 7: The Edge IaaS/PaaS/NaaS value chain
  • Figure 8: Quantified example – Edge IaaS/PaaS/NaaS
  • Figure 9: The SI value chain
  • Figure 10: Quantified example – Systems integration
  • Figure 11: The B2B2X solutions value chain
  • Figure 12: Quantified example – B2B2x solutions
  • Figure 13: Graphical representation of the end-to-end consumer retail applications business model
  • Figure 14: Quantified example – End-to-end consumer retail applications
  • Figure 15: Mapping MEC business models to possible use cases
  • Figure 16: High IRR correlates with low terminal value
  • Figure 17: Telcos need patience for edge-enabled consumer applications to become profitable (breakeven only in year 5)
  • Figure 18: The characteristics and skills required of the MEC operator depend on the business models

How 5G is Disrupting Cloud and Network Strategy Today

5G – cutting through the hype

As with 3G and 4G, the approach of 5G has been heralded by vast quantities of debate and hyperbole. We contemplated reviewing some of the more outlandish statements we’ve seen and heard, but for the sake of brevity and progress we’ll concentrate in this report on the genuine progress that has also occurred.

A stronger definition: a collection of related technologies

Let’s start by defining terms. For us, 5G is a collection of related technologies that will eventually be incorporated in a 3GPP standard replacing the current LTE-A. NGMN, the forum that is meant to coordinate the mobile operators’ requirements vis-à-vis the vendors, recently issued a useful document setting out what technologies they wanted to see in the eventual solution or at least have considered in the standards process.

Incremental progress: ‘4.5G’

For a start, NGMN includes a variety of incremental improvements that promise substantially more capacity. These are things like higher modulation, developing the carrier-aggregation features in LTE-A to share spectrum between cells as well as within them, and improving interference coordination between cells. These are uncontroversial and are very likely to be deployed as incremental upgrades to existing LTE networks long before 5G is rolled out or even finished. This is what some vendors, notably Huawei, refer to as 4.5G.

Better antennas, beamforming, etc.

More excitingly, NGMN envisages some advanced radio features. These include beamforming, in which the shape of the radio beam between a base station and a mobile station is adjusted, taking advantage of the diversity of users in space to re-use the available radio spectrum more intensely, and both multi-user and massive MIMO (Multiple Input/Multiple Output). Massive MIMO simply means using many more antennas – at the moment the latest equipment uses 8 transmitter and 8 receiver antennas (8T*8R), whereas 5G might use 64. Multi-user MIMO uses the variety of antennas to serve more users concurrently, rather than just serving them faster individually. These promise quite dramatic capacity gains, at the cost of more computationally intensive software-defined radio systems and more complex antenna designs. Although they are cutting-edge, it’s worth pointing out that 802.11ac Wave 2 WiFi devices shipping now have these features, and it is likely that the WiFi ecosystem will hold a lead in these for some considerable length of time.

New spectrum

NGMN also sees evolution towards 5G in terms of spectrum. We can divide this into a conservative and a radical phase – in the first, conservative phase, 5G is expected to start using bands below 6GHz, while in the second, radical phase, the centimetre/millimetre-wave bands up to and above 30GHz are in discussion. These promise vastly more bandwidth, but as usual will demand a higher density of smaller cells and lower transmitter power levels. It’s worth pointing out that it’s still unclear whether 6GHz will make the agenda for this year’s WRC-15 conference, and 60GHz may or may not be taken up in 2019 at WRC-19, so spectrum policy is a critical path for the whole project of 5G.

Full duplex radio – doubling capacity in one stroke

Moving on, we come to some much more radical proposals and exotic technologies. 5G may use the emerging technology of full-duplex radio, which leverages advances in hardware signal processing to get rid of self-interference and make it possible for radio devices to send and receive at the same time on the same frequency, something hitherto thought impossible because self-interference is a fundamental problem in radio. This area has seen a lot of progress recently and is moving from an academic research project towards industrial status. If it works, it promises to double the capacity provided by all the other technologies together.

A new, flatter network architecture?

A major redesign of the network architecture is being studied. This is highly controversial. A new architecture would likely be much “flatter”, with fewer levels of abstraction (such as the encapsulation of Internet traffic in the GTP protocol) or centralised functions. This would be a very radical break with the GSM-inspired practice that worked in 2G, 3G, and in an adapted form in 4G. However, the very demanding latency targets we will discuss in a moment will be very difficult to satisfy with a centralised architecture.

Content-centric networking

Finally, serious consideration is being given to what the NGMN calls information-based networking, better known to the wider community as either name-based networking, named-data networking, or content-centric networking, as TCP-Reno inventor Van Jacobson called it when he introduced the concept in a now-classic lecture. The idea here is that the Internet currently works by mapping content to domain names to machines. In content-centric networking, users request some item of content, uniquely identified by a name, and the network finds the nearest source for it, thus keeping traffic localised and facilitating scalable, distributed systems. This would represent a radical break with both GSM-inspired and most Internet practice, and is currently very much a research project. However, code does exist and has even been implemented using the OpenFlow SDN platform, and IETF standardisation is under way.

The mother of all stretch targets

5G is already a term associated with implausibly grand theoretical maxima, like every G before it. However, the NGMN has the advantage that it is a body that serves first of all the interests of the operators, who are the vendors’ customers, rather than the vendors themselves. Its expectations are therefore substantially more interesting than some of the vendors’ propaganda material. It has also recently started to reach out to other stakeholders, such as manufacturing companies involved in the Internet of Things.

Reading the NGMN document raises some interesting issues about the definition of 5G. Rather than set targets in an absolute sense, it puts forward parameters for a wide range of different use cases. A common criticism of the 5G project is that it is over-ambitious in trying to serve, for example, low bandwidth ultra-low power M2M monitoring networks and ultra-HD multicast video streaming with the same network. The range of use cases and performance requirements NGMN has defined are so diverse they might indeed be served by different radio interfaces within a 5G infrastructure, or even by fully independent radio networks. Whether 5G ends up as “one radio network to rule them all”, an interconnection standard for several radically different systems, or something in between (for example, a radio standard with options, or a common core network and specialised radios) is very much up for debate.

In terms of speed, NGMN is looking for 50Mbps user throughput “everywhere”, with half that speed available uplink. Success is defined here at the 95th percentile, so this means 50Mbps to 95% geographical coverage, 95% of the time. This should support handoff up to 120km/h. In terms of density, this should support 100 users/square kilometre in rural areas and 400 in suburban areas, with 10 and 20 Gbps/square km capacity respectively. This seems to be intended as the baseline cellular service in the 5G context.

In the urban core, downlink of 300Mbps and uplink of 50Mbps is required, with 100km/h handoff, and up to 2,500 concurrent users per square kilometre. Note that the density targets are per-operator, so that would be 10,000 concurrent users/sq km when four MNOs are present. Capacity of 750Gbps/sq km downlink and 125Gbps/sq km uplink is required.

An extreme high-density scenario is included as “broadband in a crowd”. This requires the same speeds as the “50Mbps anywhere” scenario, with vastly greater density (150,000 concurrent users/sq km or 30,000 “per stadium”) and commensurately higher capacity. However, the capacity planning assumes that this use case is uplink-heavy – 7.5Tbps/sq km uplink compared to 3.75Tbps downlink. That’s a lot of selfies, even in 4K! The fast handoff requirement, though, is relaxed to support only pedestrian speeds.

There is also a femtocell/WLAN-like scenario for indoor and enterprise networks, which pushes speed and capacity to their limits, with 1Gbps downlink and 500Mbps uplink, 75,000 concurrent users/sq km or 75 users per 1000 square metres of floor space, and no significant mobility. Finally, there is an “ultra-low cost broadband” requirement with 10Mbps symmetrical, 16 concurrent users and 16Mbps/sq km, and 50km/h handoff. (There are also some niche cases, such as broadcast, in-car, and aeronautical applications, which we propose to gloss over for now.)

Clearly, the solution will have to either be very flexible, or else be a federation of very different networks with dramatically different radio properties. It would, for example, probably be possible to aggregate the 50Mbps everywhere and ultra-low cost solutions – arguably the low-cost option is just the 50Mbps option done on the cheap, with fewer sites and low-band spectrum. The “broadband in a crowd” option might be an alternative operating mode for the “urban core” option, turning off handoff, pulling in more aggregated spectrum, and reallocating downlink and uplink channels or timeslots. But this does begin to look like at least three networks.

Latency: the X factor

Another big stretch, and perhaps the most controversial issue here, is the latency requirement. NGMN draws a clear distinction between what it calls end-to-end latency, aka the familiar round-trip time measurement from the Internet, and user-plane latency, defined thus:

Measures the time it takes to transfer a small data packet from user terminal to the Layer 2 / Layer 3 interface of the 5G system destination node, plus the equivalent time needed to carry the response back.

That is to say, the user-plane latency is a measurement of how long it takes the 5G network, strictly speaking, to respond to user requests, and how long it takes for packets to traverse it. NGMN points out that the two metrics are equivalent if the target server is located within the 5G network. NGMN defines both using small packets, and therefore negligible serialisation delay, and assuming zero processing delay at the target server. The target is 10ms end-to-end, 1ms for special use cases requiring low latency, or 50ms end-to-end for the “ultra-low cost broadband” use case. The low-latency use cases tend to be things like communication between connected cars, which will probably fall under the direct device-to-device (D2D) element of 5G, but nevertheless some vendors seem to think it refers to infrastructure as well as D2D. Therefore, this requirement should be read as one for which the 5G user plane latency is the relevant metric.

This last target is arguably the biggest stretch of all, but also perhaps the most valuable.

The lower bound on any measurement of latency is very simple – it’s the time it takes to physically reach the target server at the speed of light. Latency is therefore intimately connected with distance. Latency is also intimately connected with speed – protocols like TCP use it to determine how many bytes they can risk “in flight” before getting an acknowledgement, and hence how much useful throughput can be derived from a given theoretical bandwidth. Also, with faster data rates, more of the total time it takes to deliver something is taken up by latency rather than transfer.
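
The link between latency and throughput can be made concrete with a back-of-the-envelope sketch. The window size and RTT values below are illustrative assumptions, not figures from the NGMN document; real TCP stacks scale their windows dynamically, but the one-window-per-round-trip bound still shows why halving latency can matter as much as doubling bandwidth.

```python
# Back-of-the-envelope: round-trip time caps TCP throughput, because at
# most one window of data can be "in flight" per round trip.

def max_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Upper bound on TCP throughput: one window delivered per RTT."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # a classic 64KB receive window (illustrative)

for rtt in (1, 10, 50, 100):
    print(f"RTT {rtt:>3}ms -> at most {max_throughput_mbps(WINDOW, rtt):7.1f} Mbps")
```

With a fixed 64KB window, a 1ms RTT allows roughly 50 times the throughput of a 50ms RTT over the very same physical link.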

And the way we build applications now tends to make latency, and especially the variance in latency known as jitter, more important. In order to handle the scale demanded by the global Internet, it is usually necessary to scale out by breaking up the load across many, many servers. In order to make this work, it is usually also necessary to disaggregate the application itself into numerous, specialised, and independent microservices. (We strongly recommend Mary Poppendieck’s presentation at the link.)

The result of this is that a popular app or Web page might involve calls to dozens or hundreds of different services. Google.com includes 31 HTTP requests these days and Amazon.com 190. If the variation in latency is not carefully controlled, it becomes statistically more likely than not that a typical user will encounter at least one server’s 99th percentile performance. (eBay tries to identify users getting slow service and serve them a deliberately cut-down version of the site – see slide 17 here.)
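
The tail-latency effect follows directly from the arithmetic. Assuming the service calls are independent (a simplification), the chance that at least one of them lands in a server’s slowest 1% grows quickly with fan-out, as this sketch shows for the request counts quoted above:

```python
# If a page fans out to n independent service calls, the probability that
# at least one call hits a server's worst 1% is 1 - 0.99^n.

def p_hit_slow_tail(n_calls: int, tail: float = 0.01) -> float:
    """Probability that at least one of n_calls lands in the slow tail."""
    return 1 - (1 - tail) ** n_calls

for n in (1, 31, 190):  # 31 and 190 are the request counts quoted above
    print(f"{n:>3} calls -> {p_hit_slow_tail(n):.0%} chance of a "
          f"99th-percentile response")
```

At 190 calls the probability is around 85%, so the “slow tail” is the typical experience rather than the exception, which is why variance in latency matters as much as the median.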

We discuss this in depth in a Telco 2.0 Blog entry here.

Latency: the challenge of distance

It’s worth pointing out here that the 5G targets can literally be translated into kilometres. The rule of thumb for speed-of-light delay is 4.9 microseconds for each kilometre of fibre with a refractive index of 1.47. 1ms (1,000 microseconds) therefore equals about 204km in a straight line, assuming no routing delay. A response back is needed too, so divide that distance in half. As a result, in order to be compliant with the NGMN 5G requirements, all the network functions required to process a data call must be physically located within 100km, i.e. 1ms, of the user. And if the end-to-end requirement is taken seriously, the applications or content that users want must also be hosted within 1,000km, i.e. 10ms, of the user. (In practice, there will be some delay contributed by serialisation, routing, and processing at the target server, so this would actually be somewhat more demanding.)
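
The conversion from latency budget to distance is simple enough to verify directly. This sketch applies the 4.9µs/km rule of thumb from the paragraph above, halving each budget because the signal must travel out and back:

```python
# Translate latency budgets into fibre distance, using the ~4.9us/km
# rule of thumb for light in fibre (refractive index ~1.47).

US_PER_KM = 4.9  # one-way propagation delay per km of fibre

def max_radius_km(latency_budget_ms: float) -> float:
    """One-way reach when the round trip must fit inside the budget."""
    one_way_us = (latency_budget_ms * 1000) / 2  # half the budget each way
    return one_way_us / US_PER_KM

print(f"1ms budget  -> {max_radius_km(1):.0f} km")   # ~102km: network functions
print(f"10ms budget -> {max_radius_km(10):.0f} km")  # ~1020km: content and apps
```

This reproduces the 100km and 1,000km contours in the text (to within rounding), before allowing anything for serialisation, routing, or server processing, which only shrink the usable radius further.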

To achieve this, the architecture of 5G networks will need to change quite dramatically. Centralisation suddenly looks like the enemy, and middleboxes providing video optimisation, deep packet inspection, policy enforcement, and the like will have no place. At the same time, protocol designers will have to think seriously about localising traffic – this is where the content-centric networking concept comes in. Given the number of interested parties in the subject overall, it is likely that there will be a significant period of ‘horse-trading’ over the detail.

It will also need nothing more or less than a CDN and data-centre revolution. Content, apps, or commerce hosted within this 1,000km contour will have a very substantial competitive advantage over sites that don’t move their hosting strategy to take advantage of lower latency. Telecoms operators, by the same token, will have to radically decentralise their networks to get their systems within the 100km contour. Sites that move closer in still, to the 5ms/500km contour or beyond, will gain an additional edge. The idea of centralising everything into shared services and global cloud platforms suddenly looks dated. So might the enormous hyperscale data centres one day look like the IT equivalent of sprawling, gas-guzzling suburbia? And will mobile operators become a key actor in the data-centre economy?

  • Executive Summary
  • Introduction
  • 5G – cutting through the hype
  • A stronger definition: a collection of related technologies
  • The mother of all stretch targets
  • Latency: the X factor
  • Latency: the challenge of distance
  • The economic value of snappier networks
  • Only Half The Application Latency Comes from the Network
  • Disrupt the cloud
  • The cloud is the data centre
  • Have the biggest data centres stopped getting bigger?
  • Mobile Edge Computing: moving the servers to the people
  • Conclusions and recommendations
  • Regulatory and political impact: the Opportunity and the Threat
  • Telco-Cloud or Multi-Cloud?
  • 5G vs C-RAN
  • Shaping the 5G backhaul network
  • Gigabit WiFi: the bear may blow first
  • Distributed systems: it’s everyone’s future


  • Figure 1: Latency = money in search
  • Figure 2: Latency = money in retailing
  • Figure 3: Latency = money in financial services
  • Figure 4: Networking accounts for 40-60 per cent of Facebook’s load times
  • Figure 5: A data centre module
  • Figure 6: Hyperscale data centre evolution, 1999-2015
  • Figure 7: Hyperscale data centre evolution 2. Power density
  • Figure 8: Only Facebook is pushing on with ever bigger data centres
  • Figure 9: Equinix – satisfied with 40k sq ft
  • Figure 10: ETSI architecture for Mobile Edge Computing