MWC 2023: You are now in a new industry

The birth of a new sector: “Connected Technologies”

Mobile World Congress (MWC) is the world’s biggest showcase for the mobile telecoms industry. MWC 2023 marked the second year back to full scale after COVID disruptions. With 88k visitors, 2,400 exhibitors and 1,000 speakers, it did not quite reach pre-COVID heights, but it remained an event of enormous scale. Notably, 56% of visitors came from industries adjacent to the core mobile ecosystem, reflecting STL’s view that we are now in a new industry with a diverse range of players delivering connected technologies.

At such scale, it can be difficult to pick out the significant messages from the noise. STL’s research team attended the event in full force, and we each focused on a specific topic. In this report we distil what we saw at MWC 2023 and what we think it means for telecoms operators, technology companies and new players entering the industry.


STL Partners research team at MWC 2023


The diversity of companies attending and of applications demonstrated at MWC23 illustrated that the business being conducted is no longer the delivery of mobile communications. It is addressing a broader goal that we’ve described as the Coordination Age. This is the use of connected technologies to help a wide range of customers make better use of their resources.

The centrality of the GSMA Open Gateway announcement in discussions was one harbinger of the new model. The point of the APIs is to enable other players to access and use telecoms resources more automatically and rapidly, rather than through lengthy and complex bespoke processes. It starts to open many new business model opportunities across the economy. To steal the words of John Antanaitis, VP Global Portfolio Marketing at Vonage, APIs are “a small key to a big door”.
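
To make this concrete, here is a minimal Python sketch of the kind of call a developer might make against an operator-exposed network API, requesting a temporary quality-of-service boost for a single device. It is an illustrative sketch only: the endpoint path, payload fields and token handling are assumptions for this example, not the published Open Gateway / CAMARA specification.

```python
# Illustrative only: the base URL, resource path and payload fields below are
# assumptions for demonstration, not the published Open Gateway / CAMARA spec.
import requests

API_BASE = "https://api.example-operator.com/qod/v0"   # hypothetical operator gateway
ACCESS_TOKEN = "..."  # obtained via the operator's OAuth2 flow (not shown here)

def request_qos_session(device_ip: str, profile: str, duration_s: int) -> dict:
    """Ask the network to apply a quality-on-demand profile to one device."""
    payload = {
        "device": {"ipv4Address": device_ip},
        "qosProfile": profile,      # e.g. a low-latency profile name
        "duration": duration_s,     # seconds the boosted session should last
    }
    resp = requests.post(
        f"{API_BASE}/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()              # session id and status returned by the operator

if __name__ == "__main__":
    session = request_qos_session("203.0.113.42", "low-latency", 600)
    print("QoD session created:", session)
```

The point is less the specific call than the pattern: a third party consumes a network capability programmatically, in seconds, instead of negotiating a bespoke integration with each operator.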

Other examples from MWC 2023 underlining the transition of “telecommunications” to a sector with new boundaries and new functions include:

  • The centrality of ecosystems and partnerships, which fundamentally serve to connect different parts of the technology value chain.
  • The importance of sustainability to the industry’s agenda. This is about careful and efficient use of resources within the industry and enabling customers to connect their own technologies to optimise energy consumption and their uses of other scarce resources such as land, water and carbon.
  • An increasing interest and experimentation with the metaverse, which uses connected technologies (AR/VR, high speed data, sometimes edge resources) to deliver a newly visceral experience to its users, in turn delivering other benefits, such as more engaging entertainment (better use of leisure time and attention), and more compelling training experiences (e.g. delivering more realistic and lifelike emergency training scenarios).
  • A primary purpose of telco cloud is to break out the functions and technologies within the operator and network domains. It makes individual processes, assets and functions programmable – again, linking them with signals from other parts of the ecosystem, whether from an external customer or partner or from internal users.
  • The growing dialogues around edge computing and private networks – evolving ways for enterprise customers to take control of all or part of their connected technologies.
  • The importance of AI and automation, both within operators and across the market. The nature of automation is to connect one technology or data source to another. An action in one place is triggered by a signal from another.

Many of these connecting technologies are still relatively nascent and incomplete at this stage. They do not yet deliver the experiences or economics that will ultimately make them successful. However, what they collectively reveal is that the underlying drive to connect technologies to make better use of resources is like a form of economic gravity. In the same way that water will always run downhill, so will the market evolve towards optimising the use of resources through connecting technologies.

Table of contents

  • Executive Summary
    • The birth of a new sector: ‘Connected technologies’
    • Old gripes remain
    • So what if you are in a new industry?
    • You might like it
    • How to go from telco to connected techco
    • Next steps
  • Introduction
  • Strategy: Does the industry know where it’s going?
    • Where will the money come from?
    • Telcos still demanding their “fair share”, but what’s fair, or constructive?
    • Hope for the future
  • Transformation leadership: Ecosystem practices
    • Current drivers for ecosystem thinking
    • Barriers to wider and less linear ecosystem practices
    • Conclusion
  • Energy crisis sparks efficiency drive
    • Innovation is happening around energy
    • Orange looks to change consumer behaviour
    • Moves on measuring enablement effects
    • Key takeaways
  • Telco Cloud: Open RAN is important
    • Brownfield open RAN deployments at scale in 2024-25
    • Acceleration is key for vRAN workloads on COTS hardware
    • Energy efficiency is a key use case of open RAN and vRAN
    • Other business
    • Conclusion
  • Consumer: Where are telcos currently focused?
    • Staying relevant: Metaverse returns
    • Consumer revenue opportunities: Commerce and finance
    • Customer engagement: Utilising AI
  • Enterprise: Are telcos really ready for new business models?
    • Metaverse for enterprise: Pure hype?
    • Network APIs: The tech is progressing
    • …But commercial value is still unclear
    • Final takeaways
  • Private networks: Coming over the hype curve
    • A fragmented but dynamic ecosystem
    • A push for mid-market adoption
    • Finding the right sector and the right business case
  • Edge computing: Entering the next phase
    • Telcos are looking for ways to monetise edge
    • Edge computing and private networks – a winning combination?
    • Network APIs take centre stage
    • Final thoughts
  • AI and automation: Opening up access to operational data
    • Gathering up of end-to-end data across multiple-domains
    • Support for network automations
    • Data for external use
    • Key takeaways


The Telco Cloud Manifesto 2.0

Nearly two years on from our first Telco Cloud Manifesto published in March 2021, we are even more convinced that going through the pain of learning how to orchestrate and manage network workloads in a cloud-native environment is essential for telcos to successfully create new business models, such as Network-as-a-Service in support of edge compute applications.

Since the first Manifesto, hyperscalers have emerged as powerful partners and enablers for telcos’ technology transformation. But telcos that simply outsource to hyperscalers the delivery and management of their telco cloud, and of the multi-vendor, virtualised network functions that run on it, will never realise the true potential of telco cloudification. By contrast, evolving and maintaining an ability to orchestrate and manage multi-vendor, virtualised network functions end-to-end across distributed, multi-domain and multi-vendor infrastructure represents a vital control point that telcos should not surrender to the hyperscalers and vendors. Doing so could relegate telcos to a role as mere physical connectivity and infrastructure providers helping to deliver services developed, marketed and monetised by others.

In short, operators must take on the ‘workload’ of transforming into and acting as cloud-centric organisations before they shift their ‘workloads’ to the hyperscale cloud. In this updated Manifesto, we outline why, and what telcos at different stages of maturity should prioritise.

Two developments have taken place since the publication of our first manifesto that have changed the terms on which telcos are addressing network cloudification:

  • Hyperscale cloud providers have increasingly developed capabilities and commercial offers in the area of telco cloud. To telcos uncertain about the strategic and financial implications of the next phase of their investments, the hyperscalers appear to offer a shortcut to telco cloud: the possibility of avoiding the hard yards of developing a private telco cloud, and of evolving the internal skills and processes for deploying and managing multi-vendor VNFs / CNFs over it. Instead, the hyperscalers offer the prospect of getting telco cloud and VNFs / CNFs on an ‘as-a-Service’ basis – fundamentally like any other cloud service.
  • In April 2021, DISH announced it would build its greenfield 5G network with AWS providing much of the virtual infrastructure layer and all of the physical cloud infrastructure. In June 2021, AT&T sold its private telco cloud platform to Microsoft Azure. In both instances, the telcos involved are now deploying mobile core network functions (and, in DISH’s case, all of the software-based functions of its network) on a hyperscale cloud. These events appear superficially to set an example validating the idea of outsourcing telco cloud to the hyperscalers. After all, AT&T had previously been a champion of the DIY approach to telco cloud but now looked as though it had thrown in the towel and gone all in, outsourcing its cloud to Azure.

Two main questions arise from these developments, which we address in detail in this second Manifesto:

  • Should telcos that have embarked, or are embarking, on a Pathway 2 strategy outsource their telco cloud infrastructure and procure their critical network functions – in whole or in part – from one or more hyperscalers, on an as-a-Service basis?
  • What is the broader significance of AT&T’s and DISH’s moves? Do they represent the logical culmination of telco cloudification and, if so, what are the technological and business-model characteristics of the ‘infrastructure-independent, cloud-native telco’, as we define this new Pathway 4? Finally, is this a model that all Pathway 3 players – and even all telcos per se – should ultimately seek to emulate?

In this second Manifesto, we also propose an updated version of our pathways, which describe the network cloudification strategies through which different sizes and types of telco can implement telco cloud. We now have four pathways (we had three in the original Manifesto), as illustrated in the figure below.

The four telco cloud deployment pathways in STL’s Telco Cloud Manifesto 2.0

Source: STL Partners, 2023


Table of contents

  • Executive Summary
    • Recommendations
  • Pathway 1: No way back
    • Two constituencies at operators: Cloud sceptics and cloud advocates
  • Pathway 2: Hyperscalers – friend or foe?
    • Cloud-native network functions are a vital control point telcos must not relinquish
  • Pathway 3: Build own telco cloud competencies before deploying on public cloud
    • AT&T and DISH are important proof points but not applicable to the industry as a whole
    • But telcos will not realise the full benefits of telco cloud unless they, too, become software and cloud businesses
  • Pathway 4: The path to Network-as-a-Service
    • Pathway 4 networks will enable Network-as-a-Service
  • Conclusion: Mastery of cloud-native is key for telcos to create value in the Coordination Age


5G standalone (SA) core: Why and how telcos should keep going

Major 5G Standalone deployments are experiencing delays…

There is a widespread opinion among telco industry watchers that deployments of the 5G Standalone (SA) core are taking longer than originally expected. It is certainly the case that some of the world’s leading operators, and telco cloud innovators, are taking their time over these deployments, as illustrated below:

  • AT&T: Has no current, publicly announced deadline for launching its 5G SA core, which was originally expected to be deployed in mid-2021.
  • Deutsche Telekom: Launched an SA core in Germany on a trial basis in September 2022, having previously acknowledged that SA was taking longer than originally expected. In Europe, the only other opco that is advancing towards commercial deployment is Magenta Telekom in Austria. In 2021, the company cited various delay factors, such as 5G SA not being technically mature enough to fulfil customers’ expectations (on speed and latency), and a lack of consumer devices supporting 5G SA.
  • Rakuten Mobile: Was expected to launch an SA core co-developed with NEC in 2021. But at the time of writing, this had still not launched.
  • SK Telecom: Was originally expected to launch a Samsung-provided SA core in 2020. However, in November 2021, it was announced that SK Telecom would deploy an Ericsson converged Non-standalone (NSA) / SA core. At the time of writing, this had still not taken place.
  • Telefónica: Has carried out extensive tests and pilots of 5G SA to support different use cases but has no publicly announced timetable for launching the technology commercially.
  • Verizon: Originally planned to launch its SA core at the end of 2021. But this was pushed back to 2022; and recent pronouncements by the company indicate a launch of commercial services over the SA core only in 2023.
  • Vodafone: Has launched SA in Germany only, not in any of its other markets; and even then, nationwide SA coverage is not expected until 2025. An SA core is, however, expected to be launched in Portugal in the near future, although no definite deadline has been announced. A ‘commercial pilot’ in three UK cities, launched in June 2021, had still not resulted in a full commercial deployment by the time of writing.

…but other MNOs are making rapid progress

In contrast to the above catalogue of delay, several other leading operators have made considerable progress with their standalone deployments:

  • DISH: Launched its SA core- and open RAN-based network in the US, operated entirely over the AWS cloud, in May 2022. The initial population coverage of the network was required to be 20%. This is supposed to rise to 70% by June 2023.
  • Orange: Proceeding with a Europe-wide roll-out, with six markets expected to go live with SA cores in 2023.
  • Saudi Telecom Company (STC): Has launched SA services in two international markets, Kuwait (May 2021) and Bahrain (May 2022). Preparations for a launch in Saudi Arabia were ongoing at the time of writing.
  • Telekom Austria Group (A1): Rolling out SA cores across four markets in Central Europe (Bulgaria, Croatia, Serbia and Slovenia), although no announcement has been made regarding a similar deployment in its home market of Austria. In June 2022, A1 also carried out a PoC of end-to-end, SA core-enabled network slicing, in partnership with Amdocs.
  • T-Mobile US: Has reportedly migrated all of its mobile broadband traffic over to its SA core, which was launched back in 2020. It also launched one of the world’s first voice-over-New Radio (VoNR) services, run over the SA core, in parts of two cities in June 2022.
  • Zain (Kuwait): Launched SA in Saudi Arabia in February 2022, while a deployment in its home market was ongoing at the time of writing.
  • There are also a number of trials, and prospective and actual deployments, of SA cores over the public cloud in Europe. These are serving the macro network, not edge or private-networking use cases. The most notable examples include Magenta Telekom (Deutsche Telekom’s Austrian subsidiary, partnering with Google Cloud); Swisscom (partnering with AWS); and Working Group Two (wgtwo), a Cisco and Telenor spin-off, which offers a multi-tenant, cloud-native 5G core delivered to third-party MNOs and MVNOs via the AWS cloud.
  • The three established Chinese MNOs are all making rapid progress with their 5G SA roll-outs, having launched in either 2020 (China Telecom and China Unicom) or 2021 (China Mobile). The country’s newly launched, fourth national player, Broadnet, is also rolling out SA. However, it is not publicly known what share of the country’s reported 848 million-odd 5G subscribers (at March 2022) were connected to SA cores.
  • At least eight other APAC operators had launched 5G SA-based services by July 2022, including KT in South Korea, NTT Docomo and SoftBank in Japan and Smart in the Philippines.


Many standalone deployments in the offing – but few fixed deadlines

So, 5G standalone deployments are definitely a mixed bag: leading operators in APAC, Europe, the Middle East and North America are deploying and have launched at scale, while other leading players in the same regions have delayed launches, including some of the telcos that have helped drive telco cloud as a whole over the past few years, e.g. AT&T, Deutsche Telekom, Rakuten, Telefónica and Vodafone.

In the July 2022 update to our Telco Cloud Deployment Tracker, which contained a ‘deep dive’ on 5G core roll-outs, we presented an optimistic picture of 5G SA deployments. We pointed out that the number of SA and converged NSA / SA cores we expected to be launched in 2022 outnumbered the total of NSA deployments. However, as illustrated in the figure below, SA and converged NSA / SA cores are still the minority of all 5G cores (29% in total).

We should also point out that some of the SA and converged NSA / SA deployments shown in the figure below are still in progress, and some will continue to be so in 2023. In other words, the launch of these core networks has been announced and we have therefore logged them in our tracker, but we expect the corresponding deployments to be completed in the remainder of 2022 or in 2023, based on the typical gap between when deployments are publicly announced and the time it normally takes to complete them. If, however, more of these predicted deployments are delayed, as per the roll-outs of some of the leading players listed above, then we will need to revise down our 2022 and 2023 totals.

Global 5G core networks by type, 2018 to 2023

 

Source: STL Partners

Table of contents

  • Executive Summary
  • Introduction
    • Major 5G Standalone deployments are experiencing delays
    • …but other MNOs are making rapid progress
    • Many SA deployments in the offing – but few fixed deadlines
  • What is holding up deployments?
    • Mass-market use cases are not yet mature
    • Enterprise use cases exploiting an SA core are not established
    • Business model and ROI uncertainty for 5G SA
    • Uncertainty about the role of hyperscalers
    • Coordination of investments in 5G SA with those in open RAN
    • MNO process and organisation must evolve to exploit 5G SA
  • 5G SA progress will unlock opportunities
    • Build out coverage to improve ‘commodity’ services
    • Be first to roll out 5G SA in the national market
    • For brownfield deployments, incrementally evolve towards SA
    • Greenfield deployments
    • Carefully elaborate deployment models on hyperscale cloud
    • Work through process and organisational change
  • Conclusion: 5G SA will enable transformation

Related research

Previous STL Partners reports aligned to this topic include:

  • Telco Cloud Deployment Tracker: 5G core deep dive
  • Telco cloud: short-term pain, long-term gain
  • Telco Cloud Deployment Tracker: 5G standalone and RAN


VNFs on public cloud: Opportunity, not threat

VNF deployments on the hyperscale cloud are just beginning

Numerous collaboration agreements between hyperscalers and leading telcos, but few live VNF deployments to date

The past three years have seen many major telcos concluding collaboration agreements with the leading hyperscalers. These have involved one or more of five business models for the telco-hyperscaler relationship that we discussed in a previous report, and which are illustrated below:

Five business models for telco-hyperscaler partnerships

Source: STL Partners

In this report, we focus more narrowly on the deployment, delivery and operation by and to telcos of virtualised and cloud-native network functions (VNFs / CNFs) over the hyperscale public cloud. To date, there have been few instances of telcos delivering live, commercial services on the public network via VNFs hosted on the public cloud. STL Partners’ Telco Cloud Deployment Tracker contains eight examples of this, as illustrated below:

Major telcos deploying VNFs in the public cloud

Source: STL Partners


Telcos are looking to generate returns from their telco cloud investments and maintain control over their ‘core business’

The telcos in the above table are all of comparable stature and ambition to the likes of AT&T and DISH in the realm of telco cloud but have a diametrically opposite stance when it comes to VNF deployment on public cloud. They have decided against large-scale public cloud deployments for a variety of reasons, including:

  • They have invested a considerable amount of money, time and human resources in their private cloud deployments, and they want and need to utilise the asset and generate the RoI.
  • Related to this, they have generated a large amount of intellectual property (IP) as a result of their DIY cloud- and VNF-development work. Clearly, they wish to realise the business benefits they sought to achieve through these efforts, such as cost and resource efficiencies, automation gains, enhanced flexibility and agility, and opportunities for both connectivity and edge compute service innovation. Apart from the opportunity cost of not realising these gains, it is demoralising for some CTO departments to contemplate surrendering the fruit of this effort in favour of a hyperscaler’s comparable cloud infrastructure, orchestration and management tools.
  • In addition, telcos have an opportunity to monetise that IP by marketing it to other telcos. The Rakuten Communications Platform (RCP) marketed by Rakuten Symphony is an example of this: effectively, a telco providing a telco cloud platform on an NFaaS basis to third-party operators or enterprises, in competition with similar offerings that might be developed by hyperscalers. Accordingly, RCP will be hosted over private cloud facilities, not public cloud. But in theory, there is no reason why RCP could not in future be delivered over public cloud. In this case, Rakuten would be acting like any other vendor adapting its solutions to the hyperscale cloud.
  • In theory also, telcos could offer their private telco clouds as a platform, or as a wholesale or on-demand service, for third parties to source and run their own network functions (i.e. these would be hosted on the wholesale provider’s facilities, in contrast to the RCP, which is hosted on the client telco’s facilities). This would be a logical fit for telcos such as BT or Deutsche Telekom, which still operate as their respective countries’ communications backbone provider and primary wholesale provider.

BT and Deutsche Telekom have also been among the telcos most visibly hostile to the idea of running the NFs powering their own public, mass-market services on the hyperscale public cloud. And for most operators, this is the main concern making them cautious about deploying VNFs on the public cloud, let alone sourcing them from the cloud on an NFaaS basis: the fear that doing so would make the ‘core’ telco business and asset – the network – dependent on the technology roadmaps, operational competence and business priorities of the hyperscalers.

Table of contents

  • Executive Summary
  • Introduction: VNF deployments on the hyperscale cloud are just beginning
    • Numerous collaboration agreements between hyperscalers and leading telcos, but few live VNF deployments to date
    • DISH and AT&T: AWS vs Azure; vendor-supported vs DIY; NaaCP vs net compute
  • Other DIY or vendor-supported best-of-breed players are not hosting VNFs on public cloud
    • Telcos are looking to generate returns from their telco cloud investments and maintain control over their ‘core business’
    • The reluctance to deploy VNFs on the cloud reflects a persistent, legacy concept of the telco
  • But NaaCP will drive more VNF deployments on public cloud, and opportunities for telcos
    • Multiple models for NaaCP present prospects for greater integration of cloud-native networks and public cloud
  • Conclusion: Convergence of network and cloud is inevitable – but not telcos’ defeat
  • Appendix


How telcos can make the world a safer place

Telecoms networks can support public safety

In the wake of the pandemic and multiple natural disasters, such as fires and flooding, both policymakers and people in general are placing a greater focus on preserving health and ensuring public safety. This report begins by explaining the concept of a digital nervous system – large numbers of connected sensors that can monitor events in real time and thereby alert organisations and individuals to imminent threats to their health and safety.

With the advent of 5G, STL Partners believes telcos have a broad opportunity to help coordinate better use of the world’s resources and assets, as outlined in the report: The Coordination Age: A third age of telecoms. The application of reliable and ubiquitous connectivity to enable governments, companies and individuals to live in a safer world is one way in which operators can contribute to the Coordination Age.


The chapters in this report consider the potential to use the data collected by telecoms networks to help counter the health and safety threats posed by:

  • Environmental factors, such as air pollution and high levels of pollen
  • Natural disasters, such as wildfires, flooding and earthquakes
  • Infectious diseases
  • Violence, such as riots and shooting incidents
  • Accidents on roads, rivers and coastlines

In each case, the report considers how to harness new data collected by connected sensors, cameras and other monitors, in addition to data already captured by mobile networks (showing where people are and where they are moving to).  It also identifies who telcos will need to work with to develop and deploy such solutions, while discussing potential revenue streams.  In most cases, the report includes short case studies describing how telcos are trialling or deploying actual solutions, generally in partnership with other stakeholders.

The final chapter focuses on the role of telcos – the assets and the capabilities they have to improve health and safety.

It builds on previous STL Partners research in this area.

Managing an unstable world

Prior to the damage wrought by the pandemic, the world was gradually becoming a safer place for human beings. Global life expectancy has been rising steadily for many decades and the UN expects that trend to continue, albeit at a slower pace. That implies the world is safer than it was in the twentieth century and people are healthier than they used to be.

Global gains in life expectancy are slowing down


Source: United Nations – World Population Prospects

But a succession of pandemics, more extreme weather events and rising pollution may yet reverse these positive trends. Indeed, many people now feel that they live in an increasingly unstable and dangerous world. Air pollution and over-crowding are worsening the health impact of respiratory conditions and infections, such as SARS-CoV-2. As climate change accelerates, experts expect an increase in flash flooding, wildfires, drought and intense heat. As extreme weather impacts the food and water supplies, civil unrest and even armed conflict could follow. In the modern world, the four horsemen of the apocalypse might symbolize infectious disease, extreme weather, pollution and violence.

As the human race grapples with these challenges, there is growing interest in services and technologies that could make the world a safer and healthier place. That demand is apparent both among individuals (hence the strong sales of wearable fitness monitors) and among public sector bodies, whose interest in environmental and crowd monitoring solutions is rising.

As prevention is better than cure, both citizens and organisations are looking for early warning systems that can help them prepare for threats and take mitigating actions. For example, an individual with an underlying health condition could benefit from a service that warns them when they are approaching an area with poor air quality or large numbers of densely packed people. Similarly, a municipality would welcome a solution that alerts it when large numbers of people are gathering in a public space, or when drains are close to being blocked or are overflowing. The development of these kinds of early warning systems would involve tracking both events and people in real time to detect patterns that signal a potential hazard or disruption, such as a riot or flooding.

Advances in artificial intelligence (AI), as well as the falling cost of cameras and other sensors, together with the rollout of increasingly dense telecoms networks, could make such systems viable. For example, a camera mounted on a lamppost could use image and audio recognition technologies to detect when a crowd is gathering in the locality, a gun has been fired, a drain has been flooded or an accident has occurred.

Many connected sensors and cameras, of course, won’t be in a fixed location – they will be attached to drones, vehicles and even bicycles, to support use cases where mobility will enhance the service. Such use cases could include air quality monitoring, wildfire and flooding surveillance, and search and rescue.

Marty Sprinzen, CEO of Vantiq (a provider of event-driven, real-time collaborative applications) believes telecoms companies are best positioned to create a “global digital nervous system” as they have the networks and managed service capabilities to scale these applications for broad deployment. “Secure and reliable connectivity and networking (increasingly on ultrafast 5G networks) are just the beginning in terms of the value telcos can bring,” he wrote in an article for Forbes, published in November 2020. “They can lead on the provisioning and management of the literally billions of IoT devices — cameras, wearables and sensors of all types — that are integral to real-time systems. They can aggregate and analyze the massive amount of data that these systems generate and share insights with their customers. And they can bring together the software providers and integrators and various other parties that will be necessary to build, maintain and run such sophisticated systems.”

Sprinzen regards multi-access edge computing, or MEC, as the key to unlocking this market. He describes MEC as a new, distributed architecture that pushes compute and cloud-like capabilities out of data centres and the cloud to the edge of the network — closer to end-users and billions of IoT devices. This enables the filtering and processing of data at the edge in near real-time, to enable a rapid response to critical events.
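
As a simple illustration of that pattern, the sketch below shows an edge node filtering raw sensor readings locally and forwarding only threshold-breaching events upstream. It is a hypothetical, minimal example (the endpoint, sensor driver and threshold are invented for illustration), not a description of Vantiq’s platform or any specific telco MEC deployment.

```python
# Hypothetical sketch of edge-side filtering: process readings locally and only
# forward events that breach a threshold, keeping backhaul and central load low.
import time
import random
import requests

ALERT_ENDPOINT = "https://alerts.example-city.gov/api/events"  # invented endpoint
PM25_THRESHOLD = 75.0   # illustrative air-quality trigger, in µg/m³

def read_sensor() -> float:
    """Stand-in for a real air-quality sensor driver."""
    return random.uniform(10.0, 120.0)

def run_edge_loop(poll_interval_s: float = 5.0) -> None:
    while True:
        pm25 = read_sensor()
        if pm25 > PM25_THRESHOLD:
            # Only the events that matter leave the edge node.
            requests.post(
                ALERT_ENDPOINT,
                json={"sensor": "lamppost-17", "metric": "pm2.5", "value": pm25},
                timeout=5,
            )
        time.sleep(poll_interval_s)

if __name__ == "__main__":
    run_edge_loop()
```

The design choice is the essence of MEC as described above: raw data is processed close to where it is generated, and only actionable signals travel onwards to central systems.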

This kind of digital nervous system could help curb the adverse impact of future pandemics. “I believe smart building applications will help companies monitor for and manage symptom detection, physical distancing, contact tracing, access management, safety compliance and asset tracking in the workplace,” Sprinzen wrote. “Real-time traffic monitoring will ease urban congestion and reduce the number and severity of accidents. Monitoring and management of water supplies, electrical grids and public transportation will safeguard us against equipment failures or attacks by bad actors. Environmental applications will provide early warnings of floods or wildfires. Food distribution and waste management applications will help us make more of our precious resources.”

Vantiq says one of its telco customers is implementing AI-enabled cameras, IoT sensors, location data and other technologies to monitor various aspects of its new headquarters building. Sprinzen didn’t identify the telco, but added that it is the lead technology partner for a city that’s implementing a spectrum of smart city solutions to improve mobility, reduce congestion and strengthen disaster prevention.

Table of contents

  • Executive Summary
  • Introduction
  • Managing an unstable world
  • Monitoring air quality
    • Exploiting existing cellular infrastructure
    • Is mobile network data enough?
    • Smart lampposts to play a broad role
    • The economics of connecting environmental sensors
    • Sensors in the sky
  • Natural disasters
    • Spotting wildfires early
    • Earthquake alert systems
    • Crowdsourcing data
    • Infectious diseases
  • On street security
  • Conclusions – the opportunities for telcos
    • Ecosystem coordination – kickstarting the market
    • Devices – finding the right locations
    • Network – reliable, low cost connectivity
    • Data platform
    • Applications
  • Index

 

 


Why telcos need to capture the edge opportunity now


This report is based on an interview programme that STL Partners conducted in the months of July and August 2021. The interview programme consisted of 17 interviews: 13 with operators and 4 with enterprises. More information about the telecoms interviewees can be found below.

Figure 1: Interviewee profiles across telcos and enterprises

We asked operators and enterprises about the role of edge computing within their organisations, as well as their overall technology strategy. We investigated the key use cases they were exploring, their view on ecosystem partnerships and their vertical targeting strategy to understand: How can operators capture the edge opportunity?

Table of Contents

  • Executive Summary
    • Telco edge computing is now
    • Key takeaway: telcos must take a pragmatic approach going forward
  • Preface
  • Introduction
  • Edge progress to date: laying the groundwork
    • Building the 5G foundation has been the priority
    • Private cellular network solutions are gaining traction
    • Operators have been undergoing an education process
    • Clarity around use cases has allowed telcos to more actively engage the ecosystem
  • The inflection point: how to capture demand for edge computing through focused strategies
    • Vertical strategy
    • Horizontal strategy
  • Partnerships must underpin any successful edge strategy
  • Conclusion

Further reading

STL Partners has an extensive catalogue of edge research, which can be found on our Edge Computing hub. These reports provide an overview of edge, an examination of the telco opportunities and challenges in pursuing edge, and the role of 5G in edge. We recommend reading them first to provide sufficient context for this report.

SK Telecom’s journey in commercialising 5G

SK Telecom (SKT), Verizon and Telstra were among the first in the world to commence the commercialisation of 5G networks. SK Telecom and Verizon launched broadband-based propositions in 2018, but it was only in 2019, when 5G smartphones became available, that consumer, business and enterprise customers were really able to experience the networks.

Part 1 of our 3-part series looks at SKT’s journey and how its propositions have developed from when 5G was launched to the current time. It includes an analysis of both consumer and business offerings promoted on SKT’s website to identify the revenues streams that 5G is supporting now – as opposed to revenues that new 5G use cases might deliver in future.


At launch, SKT introduced 5G-specific tariffs that coupled large data allowances with unique apps and services designed to drive data consumption and demonstrate the advantages of 5G access. 5G plans were more expensive than 4G plans, but the price of 5G data per MB was lower than that for 4G, to tempt customers to make the switch.

SKT’s well-documented approach to 5G has been regarded as inspirational by other telcos, though many consider a similar approach out of reach (for example, coverage issues may limit their ability to charge a premium, or value-adding 5G services may be lacking).

This report examines the market factors that have enabled and constrained SKT’s 5G actions, as it moves to deliver propositions for audiences beyond the early adopters and heavy data users. It identifies lessons in the commercialisation of 5G for those operators that are on their own 5G journeys and those that have yet to start.

5G performance to date

This analysis is based on the latest data available as we went to press in March 2021.

There were 10.9 million 5G subscribers in South Korea at end-November 2020 (15.5% of the total 70.5 million mobile subscriptions in the market, according to the Ministry of Science and ICT) and network coverage is reported to be more than 90% of the population (a figure that was already quoted in March 2020). Subscriber numbers grew by nearly one million in November 2020, boosted by the introduction of the iPhone 12, which sold 600K units that month.

SKT’s share of 5G subscribers was 46% (5.05 million) in November, to which SKT added a further 400K+ in December, reaching 5.48 million by the end of 2020.

The telco took just four and a half months to reach one million 5G subscribers following launch, significantly less than it had taken with 4G, which had attained the same milestone in eight months following 4G’s commercial launch in 2011.

SKT quarterly 5G subscriber numbers (millions)


Source: STL Partners, SK Telecom

SKT credits 5G subscriber growth for its 2.8% MNO revenue increase in the year to December 2020; however, the impact on ARPU is less clear. An initial increase in overall ARPU followed the introduction of higher-priced 5G plans at launch, but ARPU has fallen back slightly since then, possibly due to COVID-19 economic factors.

SKT total ARPU trend following 5G launch


Source: STL Partners

In its 2020 year-end earnings call, SKT reported that it was top of the leader board in South Korea’s three customer satisfaction surveys and in the 5G quality assessment by the Ministry of Science and ICT.

As a cautionary note, Hong Jung-min of the ruling Democratic Party reported that 500K 5G users had switched to 4G LTE during August 2020 due to network issues, including limited coverage and slower-than-expected speeds. It is unclear how SKT was affected by this.

 

Table of Contents

  • Executive Summary
    • Recommendations
    • Next steps
  • Introduction
  • 5G performance to date
  • Details of launch
  • Consumer propositions
    • At launch
    • …And now
  • Business and enterprise propositions
    • At launch
    • …And now
  • Analysis of 5G market development
    • What next?
    • mmWave
  • Conclusion
  • Appendix 1


Growing B2B revenues from edge: Five new telco services


Edge computing has sparked significant interest from telcos

Edge computing brings cloud capabilities such as data processing and storage closer to the end user, device, or the source of data. There are two main opportunity areas for telcos in edge computing. Firstly, telcos have an opportunity to provide edge computing via edge data centres at sites on the telecoms network – network edge, sometimes referred to as multi-access edge computing. Secondly, telcos can offer edge-enabled services through compute platforms at the customer premises – on-premise edge.

Although there is an opportunity for telcos to offer new services and an enhanced customer experience to their consumer customer base, much of the edge computing opportunity for telcos is in the B2B segment. We have covered the general strategy operators are taking for edge computing in our previous report Telco edge computing: What’s the operator strategy? and through insights on our Edge Hub. Within enterprise, edge offers a chance for operators to move beyond offering connectivity services and extend into the platform and application space.

However, the market is still young; enterprises are still at an early stage of understanding the potential benefits of edge computing. There is limited availability of network edges; telcos are still deploying sites and few have begun to offer mechanisms to access the edge compute infrastructure within them. As a result, developers are only just starting to build applications to leverage this new infrastructure.


Telcos are still grappling with defining the opportunity. Since adoption is so nascent, many feel that they are not able to prove the commercial case to unlock significant investment. Some operators are pushing ahead by building out edge infrastructure, securing partnerships and launching edge computing services. Nonetheless, even these operators are keeping an open mind to edge and waiting to see what unfolds as the market matures. What is clear is that, with the hyperscalers and others moving into the edge, telcos are increasingly keen to capitalise on the edge opportunity and solidify their position in the market before it’s too late.

The sweet spot opportunity for edge is highly dependent on telcos’ starting points: some have existing capabilities within B2B networking and cloud, partnerships, and strong customer relationships. But for other telcos, the B2B business is at a very early stage. Meanwhile, edge infrastructure build differs across telcos, with some choosing to partner with hyperscalers to create the hardware and software stack within edge data centres while others are opting to build their own stack.

It is therefore critical for telcos to:

  1. Assess whether they can leverage existing B2B services, customers and partners versus where they need to invest to fill the gaps
  2. Understand which factors may affect how successful they are in offering new edge services
  3. Prioritise which services they could offer to B2B customers

In this report, we focus on answering the following questions:

Which B2B services can edge computing add value to? And how ready are telcos to take new edge services to market?

In order to better understand how operators are thinking about edge services and what they are looking to offer today, we interviewed eight technology and strategy leaders working in operators primarily across Europe.

To ensure an open and candid dialogue, we have anonymised their contributions. We would like to take the opportunity to thank those who participated in this research. A summary of the interviewee profiles is provided in the Appendix.

Telcos’ B2B businesses today

As consumer revenues come under increasing pressure, operators are looking to their B2B businesses to provide a new source of revenue growth. The maturity of their B2B businesses today varies from those who have a limited offering focussed primarily on phones, SIMs and basic connectivity (particularly mobile-only telcos, e.g. Three UK), to those who are providing full vertical applications or taking on the role of systems integrator (often incumbents or telcos with fixed networks, e.g. DTAG, Vodafone). Many telcos are looking for opportunities to take on more of the latter role, by expanding their B2B offerings and increasing their foothold in the value chain, e.g. by offering managed services. Particularly with the arrival of 5G, they see greater potential to grow revenues through B2B services compared with B2C.

Maturity levels of telcos’ B2B business

Table of contents

  • Executive Summary
  • Introduction
  • Strategic principles for B2B telco edge
    • Telcos’ B2B businesses today
    • Three telco strategies for B2B edge
    • On-premise edge and network edge are separate opportunities
    • Telcos are open to partnering with the hyperscalers for edge
  • Five types of B2B edge services
    • Edge-to-cloud networking
    • Private edge infrastructure
    • Network edge platforms
    • Multi-edge and cloud orchestration
    • Vertical solutions
  • Evaluating the opportunity: How should telcos prioritise?
    • It’s not just about technology
    • However, significant value creation does not come easy
    • Telcos should consider new business models to ensure success
  • Next steps for telcos in building B2B edge services
    • Prioritise services to monetise edge
    • Evaluate the role of partners
    • Work closely with customers given that edge is still nascent
  • Appendix
    • Interviewee overview
  • Index


SK Telecom: Lessons in 5G, AI, and adjacent market growth

SK Telecom’s strategy

SK Telecom is the largest mobile operator in South Korea with a 42% share of the mobile market and is also a major fixed broadband operator. Its growth strategy is focused on 5G, AI and a small number of related business areas where it sees the potential for revenue to replace that lost from its core mobile business.

By developing applications based on 5G and AI, it hopes to create additional revenue streams both for its mobile business and for new areas, as it has done in smart home and is starting to do for a variety of smart business applications. In 5G it is placing an emphasis on indoor coverage and edge computing as a basis for vertical industry applications. Its AI business is centred around NUGU, a smart speaker and a platform for business applications.

Its other main areas of business focus are media, security, ecommerce and mobility, but it is also active in other fields including healthcare and gaming.

The company takes an active role internationally in standards organisations and commercially, both in its own right and through many partnerships with other industry players.

It is a subsidiary of SK Group, one of the largest chaebols in Korea, which has interests in energy and oil. Chaebols are large family-controlled conglomerates which display a high level and concentration of management power and control. The ownership structures of chaebols are often complex owing to the many crossholdings between companies owned by chaebols and by family members. SK Telecom uses its connections within SK Group to set up ‘friendly user’ trials of new services, such as edge and AI.

While the largest part of the business remains in mobile telecoms, SK Telecom also owns a number of subsidiaries, mostly active in its main business areas, for example:

  • SK Broadband, which provides fixed broadband (ADSL and wireless), IPTV and mobile OTT services
  • ADT Caps, a security business
  • IDQ, which specialises in quantum cryptography (security)
  • 11st, an open market platform for ecommerce
  • SK Hynix, which manufactures memory semiconductors

Few of the subsidiaries are owned outright by SKT; it believes the presence of other shareholders can provide a useful source of further investment and, in some cases, expertise.

SKT was originally the mobile arm of KT, the national operator. It was privatised soon after establishing a cellular mobile network and subsequently acquired by SK Group, a major chaebol with interests in energy and oil, which now has a 27% shareholding. The government pension service owns an 11% share in SKT, Citibank 10%, and 9% is held by SKT itself. The chairman of SK Group has a personal holding in SK Telecom.

Following this introduction, the report comprises three main sections:

  • SK Telecom’s business strategy: range of activities, services, promotions, alliances, joint ventures, investments, which covers:
    • Mobile 5G, Edge and vertical industry applications, 6G
    • AI and applications, including NUGU and Smart Homes
    • New strategic business areas, comprising Media, Security, eCommerce, and other areas such as mobility
  • Business performance
  • Industrial and national context.


Overview of SKT’s activities

Network coverage

SK Telecom has been one of the earliest and most active telcos to deploy a 5G network. It initially created 70 5G clusters in key commercial districts and densely populated areas to ensure a level of coverage suitable for augmented reality (AR) and virtual reality (VR) and plans to increase the number to 240 in 2020. It has paid particular attention to mobile (or multi-access) edge computing (MEC) applications for different vertical industry sectors and plans to build 5G MEC centres in 12 different locations across Korea. For its nationwide 5G Edge cloud service it is working with AWS and Microsoft.

In recognition of the constraints imposed by the spectrum used by 5G, it is also working on ensuring good indoor 5G coverage in some 2,000 buildings, including airports, department stores and large shopping malls as well as small-to-medium-sized buildings, using distributed antenna systems (DAS) or its in-house developed indoor 5G repeaters. It is also working with Deutsche Telekom on trials of the repeaters in Germany. In addition, it has already initiated activities in 6G, an indication of the seriousness with which it is addressing the mobile market.

NUGU, the AI platform

It launched its own AI-driven smart speaker, NUGU, in 2016/17, which SKT is using to support consumer applications such as Smart Home and IPTV. There are now eight versions of NUGU for consumers, and it also serves as a platform for other applications. More recently it has developed several NUGU/AI applications for businesses and civil authorities in conjunction with 5G deployments. It also has an AI-based network management system named Tango.

Although NUGU initially performed well in the market, it seems likely that the subsequent launch of smart speakers by major global players such as Amazon and Google has had a strong negative impact on the product’s recent growth. The absence of published data supports this view, since the company often only reports good news, unless required by law. SK Telecom has responded by developing variants of NUGU for children and other specialist markets and making use of the NUGU AI platform for a variety of smart applications. In the absence of published information, it is not possible to form a view on the success of the NUGU variants, although the intent appears to be to attract young users and build on their brand loyalty.

It has offered smart home products and services since 2015/16. Its smart home portfolio has continually developed in conjunction with an increasing range of partners and is widely recognised as one of the two most comprehensive offerings globally, the other being Deutsche Telekom’s Qivicon. The service appears to be most successful in penetrating the new-build market through property developers.

NUGU is also an AI platform, which is used to support business applications. SK Telecom has also supported the SK Group by providing new AI/5G solutions and opening APIs to other subsidiaries including SK Hynix. Within the SK Group, SK Planet, a subsidiary of SK Telecom, is active in internet platform development and offers development of applications based on NUGU as a service.

Smart solutions for enterprises

SKT continues to experiment with and trial new applications which build on its 5G and AI applications for individuals (B2C), businesses and the public sector. During 2019 it established B2B applications, making use of 5G, on-prem edge computing, and AI, including:

  • Smart factory (real-time process control and quality control)
  • Smart distribution and robot control
  • Smart office (security/access control, virtual docking, AR/VR conferencing)
  • Smart hospital (NUGU for voice command for patients, AR-based indoor navigation, facial recognition technology for medical workers to improve security, and investigating possible use of quantum cryptography in the hospital network)
  • Smart cities, e.g. an intelligent transportation system in Seoul, with links to vehicles via 5G or SK Telecom’s T-Map navigation service for non-5G users.

It is too early to judge whether these B2B smart applications are a success, and we will continue to monitor progress.

Acquisition strategy

SK Telecom has been growing these new business areas over the past few years, both organically and by acquisition. Its entry into the security business has been entirely by acquisition, through which it has bought new revenue to compensate for that lost in the core mobile business. It is too early to assess what the ongoing impact and success of these businesses will be as part of SK Telecom.

Acquisitions in general have a mixed record of success. SK Telecom’s usual approach of acquiring a controlling interest and investing in its acquisitions, but keeping them as separate businesses, is one which often, together with the right management approach from the parent, causes the least disruption to the acquired business and therefore increases the likelihood of longer-term success. It also allows for investment from other sources, reducing the cost and risk to SK Telecom as the acquiring company. Yet as a counterpoint to this, M&A in this style doesn’t help change practices in the rest of the business.

However, it has also shown willingness to change its position as and when appropriate, either by sale, or by a change in investment strategy. For example, through its subsidiary SK Planet, it acquired Shopkick, a shopping loyalty rewards business in 2014, but sold it in 2019, for the price it paid for it. It took a different approach to its activity in quantum technologies, originally set up in-house in 2011, which it rolled into IDQ following its acquisition in 2018.

SKT has also recently entered into partnerships and agreements concerning a number of further areas of business.

 

Table of Contents

  • Executive Summary
  • Introduction and overview
    • Overview of SKT’s activities
  • Business strategy and structure
    • Strategy and lessons
    • 5G deployment
    • Vertical industry applications
    • AI
    • SK Telecom ‘New Business’ and other areas
  • Business performance
    • Financial results
    • Competitive environment
  • Industry and national context
    • International context


The future of assurance: How to deliver quality of service at the edge

Why does edge assurance matter?

The assurance of telecoms networks is one of the most important application areas for analytics, automation and AI (A3) across telcos’ operations. In a previous report estimating the potential value of A3 across telcos’ core business, including networks, customer channels, sales and marketing, we estimated that service assurance accounts for nearly 10% of the total potential value of A3 (see the report A3 for telcos: Mapping the financial value). The only area of greater combined value was resource management across telcos’ existing networks and planned deployments.

Within service assurance, the biggest value buckets are self-healing networks, impact on customer experience and churn, and dynamic SLA management. This estimate was developed through a bottom-up analysis of specific applications for automation, analytics and AI within each segment, and their potential to deliver cost savings or revenue uplift for an average-sized telecoms operator (see the original report for the full methodology).

Breakdown of the value of A3 in service assurance, US$ millions


Source: STL Partners, Charlotte Patrick Consult

While this previous research demonstrates there is significant value for telcos in improving assurance on their legacy networks, over the next five years edge assurance will become an increasingly important topic for operators.

By edge assurance we mean the new capabilities operators will require to gain visibility across much more distributed, cloud-based networks and to monitor a wider and more dynamic range of services and devices, in order to deliver a high-quality experience and self-healing networks. This need is driven by operators' accelerating adoption of virtualisation and software-defined networking, for example with increasing experimentation and excitement around open RAN, as well as some operators' ambitions to play a significant role in the edge computing market (see our report Telco edge computing: How to partner with hyperscalers for analysis of telcos' ambitions in edge computing).

To give an idea of the scale of the challenge ahead of operators in assuring increasingly distributed network functions and infrastructure, STL Partners expects a Tier-1 operator will deploy more than 8,000 edge servers to support virtual RAN by 2025 (see Building telco edge infrastructure: MEC, private LTE and vRAN for the full forecasts).

Forecast of Tier 1 operator edge servers by domain

Forecast of Tier-1 operator edge servers by domain

Source: STL Partners

Given this dramatic shift in network operations, without new edge assurance capabilities:

  • A telco will not be able to understand where issues are occurring across the (virtualised) network and the underlying infrastructure, and diagnose the root cause
  • The promises of cost saving and better customer experience from self-healing networks will not be fully realised in next-generation networks
  • Potential revenue generators such as network slicing and URLLC will be of limited value to customers if the telco can’t offer sufficient SLAs on reliability, latency and visibility
  • It will not be possible to make promises to ecosystem partners around service quality.

Despite the significant number of unknowns in the future of telco activities around 5G, IoT and edge computing, this research ventures a framework to allow telcos to plan for their future service assurance needs. The first section describes the drivers affecting telcos' decision-making around the types of assurance they need at the edge. The second sets out the products and capabilities that will be required, and the types of assurance products that telcos could create and monetise.

Enter your details below to request an extract of the report

Table of contents

  • Executive Summary
    • The three main telco strategies in edge assurance
    • What exactly do telcos need to assure?
  • Why edge assurance matters
  • Factors affecting edge assurance development
    • What are telcos measuring?
    • Internal assurance applications
    • Location of measurement and analysis
    • Ownership status of equipment and assets being assured
    • Requirements of external assurance users
    • Requirements from specific applications
    • Telco business model
  • The status of edge assurance and recommendations for telcos
    • Edge assurance vendors
    • Telco assurance products
  • Appendix

Enter your details below to request an extract of the report

Telco edge computing: How to partner with hyperscalers

Edge computing is getting real

Hyperscalers such as Amazon, Microsoft and Google are rapidly increasing their presence in the edge computing market by launching dedicated products, establishing partnerships with telcos on 5G edge infrastructure and embedding their platforms into operators’ infrastructure.

Many telecoms operators, who need cloud infrastructure and platform support to run their edge services, have welcomed the partnership opportunity. However, they are yet to develop clear strategies on how to use these partnerships to establish a stronger proposition in the edge market, move up the value chain and play a role beyond hosting infrastructure and delivering connectivity. Operators that miss out on the partnership opportunity, or fail to fully utilise it to develop and differentiate their capabilities and resources, risk being reduced to connectivity providers with a limited role in the edge market, or simply being late to the game.

Edge computing or multi-access edge computing (MEC) enables processing data closer to the end user or device (i.e. the source of data), on physical compute infrastructure that is positioned on the spectrum between the device and the internet or hyperscale cloud.

Telco edge computing is mainly defined as distributed compute managed by a telecoms operator. This includes running workloads on customer premises as well as at locations within the operator network. One of the reasons for caching and processing data closer to the customer is that it allows both operators and their customers to benefit from reduced backhaul traffic and costs. Depending on where the computing resources reside, edge computing can be broadly divided into:

  • Network edge, which includes sites or points of presence (PoPs) owned by a telecoms operator, such as base stations, central offices and other aggregation points on the access and/or core network.
  • On-premise edge, where the computing resources reside at the customer side, e.g. in an on-site gateway or an on-premises data centre. As a result, customers retain their sensitive data on-premise and enjoy the other flexibility and elasticity benefits of edge computing.

Our overview of edge computing definitions, network structure, market opportunities and business models can be found in our previous report Telco Edge Computing: What's the operator strategy?

The edge computing opportunity for operators and hyperscalers

Many operators are looking at edge computing as a good opportunity to leverage their existing assets and resources to innovate and move up the value chain. They aim to expand their services and revenue beyond connectivity and enter the platform and application space. By deploying computing resources at the network edge, operators can offer infrastructure-as-a-service and alternative applications and solutions for enterprises. Edge computing, as a distributed compute structure and an extension of the cloud, also supports operators' own journey towards virtualising the network and running internal operations more efficiently.

Cloud hyperscalers, especially the biggest three – Amazon Web Services (AWS), Microsoft Azure and Google – are at the forefront of the edge computing market. In recent years, they have made efforts to spread their influence beyond their public clouds and have moved the data acquisition point closer to physical devices. These efforts include integrating their stacks into IoT devices and network gateways, as well as supporting private and hybrid cloud deployments. More recently, hyperscalers took another step towards customers at the edge by launching platforms dedicated to telecoms networks and enabling integration with 5G networks. The latest of these products include Wavelength from AWS, Azure Edge Zones from Microsoft and Anthos for Telecom from Google Cloud. Details on these products are provided later in the report.

Enter your details below to request an extract of the report

From competition to coopetition

Both hyperscalers and telcos are among the top contenders to lead the edge market. However, each stakeholder lacks a significant piece of the stack that the other has: the cloud platform for operators, and the physical locations for hyperscalers. Initially, operators and hyperscalers were seen as competitors racing to enter the market through different approaches. The race has also seen the emergence of new types of stakeholders, including independent mini data centre providers such as Vapor IO and EdgeConneX, and platform start-ups such as MobiledgeX and Ori Industries.

However, operators acknowledge that even if they do own the edge clouds, these still need to be supported by hyperscaler clouds to create a distributed cloud. To fuel the edge market and build its momentum, operators will, for the most part, work with the cloud providers. Partnerships between operators and hyperscalers are starting to take place and shape the market, affecting the short- and long-term edge computing strategies of operators, hyperscalers and other players in the market.

Figure 1: Major telco-hyperscaler edge partnerships

Major telco-hyperscaler partnerships

Source: STL Partners analysis

What does it mean for telcos?

Going to market alone is not an attractive option for either operators or hyperscalers at the moment, given the high investment requirement without a guaranteed return. The partnerships between two of the biggest forces in the market will provide the necessary push for the use cases to be developed and enterprise adoption to be accelerated. However, as markets grow and change, so do the stakeholders’ strategies and relationships between them.

Since the emergence of cloud computing and the development of the digital technologies market, operators have faced tough competition from internet players, including hyperscalers, who have managed to remain agile while sustaining an appetite for innovation and market disruption. Edge computing is no exception: hyperscalers are moving rapidly to define and own the biggest share of the edge market.

Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market while partnering with hyperscalers in edge computing.

Table of contents

  • Executive Summary
    • Operators’ roles along the edge computing value chain
    • Building a bigger ecosystem and pushing market adoption
    • How partnerships can shape the market
    • What next?
  • Introduction
    • The edge computing opportunity for operators and hyperscalers
    • From competition to coopetition
    • What does it mean for telcos?
  • Overview of the telco-hyperscalers partnerships
    • Explaining the major roles required to enable edge services
    • The hyperscaler-telco edge commercial model
  • Hyperscalers’ edge strategies
    • Overview of hyperscalers’ solutions and activities at the edge
    • Hyperscalers' approach to edge sites and infrastructure acquisition
  • Operators’ edge strategies and their roles in the partnerships
    • Examples of operators’ edge computing activities
    • Telcos’ approach to integrating edge platforms
  • Conclusion
    • Infrastructure strategy
    • Platform strategy
    • Verticals and ecosystem building strategy

 

Enter your details below to request an extract of the report

Building telco edge infrastructure: MEC, private LTE and vRAN

Reality check: edge computing is not yet mature, and much is still to be decided

Edge computing is still a maturing domain. STL Partners has written extensively on the topic of edge computing over the last 4 years. Within that timeframe, we have seen significant change in terminology, attitudes and approaches from telecoms and adjacent industries to the topic area.  Plans for building telco edge infrastructure have also evolved.

Within the past twelve months, we've seen high-profile partnerships between hyperscale cloud providers (Amazon Web Services, Microsoft and Google) and telecoms operators that are likely to catalyse the industry and accelerate the route to market. We've also seen early movers within the industry (such as SK Telecom) developing MEC platforms to enable access to their edge infrastructure.

In the course of this report, we will highlight which domains will drive early adoption for edge, and the potential roll out we could see over the next 5 years if operators move to capitalise on the opportunity. However, to start, it is important to evaluate the situation today.

Commercial deployments of edge computing are rare, and most operators are still in the exploration phase. Many have not committed, and will not commit, to rolling out edge infrastructure until they have seen evidence from early movers that it is a genuine opportunity for the industry. For even more, additional capex investment in edge infrastructure, on top of their 5G rollout plans, is a difficult commitment to make.

Where is “the edge”?

There is no one clear definition of edge computing. Depending on the world you are coming from (Telco? Application developer? Data centre operator? Cloud provider? etc.), you are likely to define it differently. In practice, we know that even within these organisations there are differences between technical and commercial teams around the concept and terminology used to describe “the edge”.

For the purposes of this paper, we will be discussing edge computing primarily from the perspective of a telecoms operator. As such, we'll be focusing on edge infrastructure that will be rolled out within operators' network infrastructure or that they will play a role in connecting. This may equate to adding additional servers into an existing technical space (such as a central office), or it may mean investing in new micro data centres. The servers may be bought, installed and managed by the telco itself, or this could be done by a third party, but in all cases the real estate (i.e. the physical location as well as power and cooling) is owned either by the telecoms operator or by the enterprise buying an edge-enabled solution.

Enter your details below to request an extract of the report

Operators have a range of options for where and how they might develop edge computing sites. The graphic below starts to map some of the potential physical locations for an edge site. In this report, STL Partners forecasts edge infrastructure deployments between 2020 and 2024, by type of operator, use-case domain, edge location and type of computing.

There is a spectrum of edge infrastructure in which telcos may invest

mapping edge infrastructure investment

Source: STL Partners

This paper primarily draws on discussions with operators and others within the edge ecosystem conducted between February and March 2020. We interviewed a range of operators, and a range of job roles within them, to gain a snapshot of the existing attitudes and ambitions within the industry to shape our understanding of how telcos are likely to build out edge infrastructure.

Table of Contents

  • Executive Summary
  • Preface
  • Reality check: edge computing is not yet mature, and much is still to be decided
    • Reality #1: Organisationally, operators are still divided
    • Reality #2: The edge ecosystem is evolving fast
    • Reality #3: Operators are trying to predict, respond to and figure out what the “new normal” will be post COVID-19
  • Edge computing: key terms and definitions
    • Where is “the edge”?
    • What applications & use cases will run at edge sites?
    • What is inside a telco edge site?
  • How edge will play out: 5-year evolution
    • Modelling exercise: converting hype into numbers
    • Our findings: edge deployments won’t be very “edgy” in 2024
    • Short-term adoption of vRAN is the driving factor
    • New revenues from MEC remain a longer-term opportunity
    • Short-term adoption is focused on efficient operations, but revenue opportunity has not been dismissed
  • Addressing the edge opportunity: operators can be more than infrastructure providers
  • Conclusions: practical recommendations for operators

Enter your details below to request an extract of the report

Telco edge computing: What is the operator strategy?

To access the report chart pack in PPT download the additional file on the left

Edge computing can help telcos to move up the value chain

The edge computing market and the technologies enabling it are rapidly developing and attracting new players, providing new opportunities to enterprises and service providers. Telco operators are eyeing the market and looking to leverage the technology to move up the value chain and generate more revenue from their networks and services. Edge computing also represents an opportunity for telcos to extend their role beyond offering connectivity services and move into the platform and the application space.

However, operators will face tough competition from other market players such as cloud providers, who are moving rapidly to define and own the biggest share of the edge market, and industrial solution providers, such as Bosch and Siemens, who are similarly investing in their own edge services. Telcos are also dealing with technical and business challenges as they venture into the new market, trying to position themselves and identify their strategies accordingly.

Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market.

Following this introduction, we focus on:

  1. Edge terminology and structure, explaining common terms used within the edge computing context, where the edge resides, and the role of edge computing in 5G.
  2. An overview of the edge computing market, describing different types of stakeholders, current telecoms operators’ deployments and plans, competition from hyperscale cloud providers and the current investment and consolidation trends.
  3. Telcos' challenges in addressing the edge opportunity: the technical, organisational and commercial challenges given the market dynamics.
  4. Potential use cases and business models for operators, also exploring possible scenarios of how the market is going to develop and operators’ likely positioning.
  5. A set of recommendations for operators that are building their strategy for the edge.

Enter your details below to request an extract of the report

What is edge computing and where exactly is the edge?

Edge computing brings cloud services and capabilities including computing, storage and networking physically closer to the end-user by locating them on more widely distributed compute infrastructure, typically at smaller sites.

One could argue that edge computing has existed for some time – local infrastructure has been used for compute and storage, be it end-devices, gateways or on-premises data centres. However, edge computing, or edge cloud, refers to bringing the flexibility and openness of cloud-native infrastructure to that local infrastructure.

In contrast to hyperscale cloud computing, where all data is sent to central locations to be processed and stored, edge computing processes data locally, aiming to reduce the time and bandwidth needed to send and receive data between applications and the cloud, which improves the performance of both the network and the applications. This does not mean that edge computing is an alternative to cloud computing. It is rather an evolutionary step that complements the current cloud computing infrastructure and offers more flexibility in executing and delivering applications.

Edge computing offers mobile operators several opportunities such as:

  • Differentiating service offerings using edge capabilities
  • Providing new applications and solutions using edge capabilities
  • Enabling customers and partners to leverage the distributed computing network in application development
  • Improving network performance and achieving efficiencies / cost savings

As edge computing technologies and definitions are still evolving, different terms are sometimes used interchangeably or have been associated with a certain type of stakeholder. For example, mobile edge computing is often used within the mobile network context and has evolved into multi-access edge computing (MEC) – adopted by the European Telecommunications Standards Institute (ETSI) – to include fixed and converged network edge computing scenarios. Fog computing is also often compared to edge computing; the former includes running intelligence on the end-device and is more IoT focused.

These are some of the key terms that need to be defined when discussing edge computing:

  • Network edge refers to edge compute locations that are at sites or points of presence (PoPs) owned by a telecoms operator, for example at a central office in the mobile network or at an ISP’s node.
  • Telco edge cloud is mainly defined as distributed compute managed by a telco. This includes running workloads on customer premises equipment (CPE) at customers' sites as well as at locations within the operator network, such as base stations, central offices and other aggregation points on the access and/or core network. One of the reasons for caching and processing data closer to the customer is that it allows both operators and their customers to benefit from reduced backhaul traffic and costs.
  • On-premise edge computing refers to computing resources that reside at the customer side, e.g. in an on-site gateway or an on-premises data centre. As a result, customers retain their sensitive data on-premise and enjoy the other flexibility and elasticity benefits of edge computing.
  • Edge cloud is used to describe the virtualised infrastructure available at the edge. It creates a distributed version of the cloud, with some of the cloud's flexibility and scalability at the edge. This flexibility gives it the capacity to handle sudden surges in workloads from unplanned activities, unlike static on-premise servers. Figure 1 shows the differences between these terms.

Figure 1: Edge computing types

definition of edge computing

Source: STL Partners

Network infrastructure and how the edge relates to 5G

Discussions of edge computing strategies and the market are often linked to 5G. Both technologies have overlapping goals of improving performance and throughput and reducing latency for applications such as AR/VR, autonomous vehicles and IoT. 5G improves speed by increasing spectral efficiency, offering the potential of much higher speeds than 4G. Edge computing, on the other hand, reduces latency by shortening the time required for data processing, allocating resources closer to the application. When combined, edge and 5G can help to achieve round-trip latency below 10 milliseconds.
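To make the arithmetic behind that sub-10 millisecond figure concrete, the sketch below adds up an illustrative latency budget. All of the inputs (radio round trip, fibre distance, processing time) are assumptions chosen for illustration, not measurements; only the roughly 4.9 microseconds per kilometre of fibre is a standard rule of thumb.

```python
# Illustrative round-trip latency budget for 5G + edge (all inputs are
# assumptions for the sake of the arithmetic, not measured values).

FIBRE_US_PER_KM = 4.9  # speed-of-light delay in fibre, ~4.9 microseconds per km

def round_trip_ms(radio_ms, fibre_km_one_way, processing_ms):
    """Radio access round trip + fibre there-and-back + server processing."""
    transport_ms = 2 * fibre_km_one_way * FIBRE_US_PER_KM / 1000
    return radio_ms + transport_ms + processing_ms

# 5G radio round trip ~4 ms, edge site 50 km away, 2 ms of server processing
print(round_trip_ms(radio_ms=4, fibre_km_one_way=50, processing_ms=2))    # ~6.5 ms
# The same application served from a centralised data centre 800 km away
print(round_trip_ms(radio_ms=4, fibre_km_one_way=800, processing_ms=2))   # ~13.8 ms
```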

While 5G deployment is yet to accelerate and reach ubiquitous coverage, the edge can be utilised in some places to reduce latency where needed. There are two reasons why the edge will be part of 5G:

  • First, it has been included in the 5G standards (3GPP Release 15) to enable ultra-low latency, which will not be achieved by improvements in the radio interface alone.
  • Second, operators are in general taking a slow and gradual approach to 5G deployment, which means that 5G coverage alone will not provide a big incentive for developers to drive the application market. Edge can be used to fill the network gaps and stimulate growth in the application market.

The network edge can be used for applications that need coverage (i.e. accessible anywhere) and can be moved across different edge locations to scale capacity up or down as required. Where an operator decides to establish an edge node depends on:

  • Application latency needs. Some applications, such as streamed virtual reality or mission-critical applications, will require locations close enough to their users to enable sub-50 millisecond latency (a simple way of screening candidate sites against such a budget is sketched after this list).
  • Current network topology. Based on the operator's network topology, there will be selected locations that can meet the edge latency requirements for the specific application under consideration, in terms of the number of hops and the part of the network it resides in.
  • Virtualisation roadmap. The operator needs to consider its virtualisation roadmap and where data centre facilities are planned to be built to support future network functions.
  • Site and maintenance costs. The economies of scale of cloud computing may diminish as sites proliferate at the edge; for example, there is a significant difference between maintaining one or two large data centres and maintaining hundreds across a country.
  • Site availability. Some operators’ edge compute deployment plans assume the nodes reside in the same facilities as those which host their NFV infrastructure. However, many telcos are still in the process of renovating these locations to turn them into (mini) data centres so aren’t yet ready.
  • Site ownership. Sometimes the preferred edge location is within sites that the operators have limited control over, whether that is in the customer premise or within the network. For example, in the US, the cell towers are owned by tower operators such as Crown Castle, American Tower and SBA Communications.
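As flagged in the latency criterion above, the latency requirement can be turned into a simple screen on candidate sites. The sketch below is a minimal illustration: the site names, fibre distances, hop counts and per-hop, radio and processing delays are all hypothetical, and the 4.9 microseconds-per-kilometre figure is the usual fibre rule of thumb.

```python
# Minimal sketch of screening candidate edge locations against an application's
# round-trip latency budget. All site data and delay figures are hypothetical.

FIBRE_US_PER_KM = 4.9

candidate_sites = [
    {"name": "central office",       "fibre_km": 40,  "hops": 3},
    {"name": "aggregation point",    "fibre_km": 120, "hops": 5},
    {"name": "regional data centre", "fibre_km": 600, "hops": 8},
]

def round_trip_ms(site, per_hop_ms=0.1, radio_ms=10, processing_ms=5):
    transport_ms = 2 * site["fibre_km"] * FIBRE_US_PER_KM / 1000  # there and back
    routing_ms = 2 * site["hops"] * per_hop_ms
    return radio_ms + transport_ms + routing_ms + processing_ms

budget_ms = 50  # e.g. streamed VR or another latency-sensitive application
viable = [s["name"] for s in candidate_sites if round_trip_ms(s) <= budget_ms]
print(viable)  # all three sites pass at 50 ms; a 20 ms budget would exclude the last
```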

The potential locations for edge nodes can be mapped across the mobile network in four levels as shown in Figure 2.

Figure 2: Possible locations for edge computing

edge computing locations

Source: STL Partners

Table of Contents

  • Executive Summary
    • Recommendations for telco operators at the edge
    • Four key use cases for operators
    • Edge computing players are tackling market fragmentation with strategic partnerships
    • What next?
  • Table of Figures
  • Introduction
  • Definitions of edge computing terms and key components
    • What is edge computing and where exactly is the edge?
    • Network infrastructure and how the edge relates to 5G
  • Market overview and opportunities
    • The value chain and the types of stakeholders
    • Hyperscale cloud provider activities at the edge
    • Telco initiatives, pilots and plans
    • Investment and merger and acquisition trends in edge computing
  • Use cases and business models for telcos
    • Telco edge computing use cases
    • Vertical opportunities
    • Roles and business models for telcos
  • Telcos’ challenges at the edge
  • Scenarios for network edge infrastructure development
  • Recommendation
  • Index

Enter your details below to request an extract of the report

Cloud gaming: What is the telco play?

To access the report chart pack in PPT download the additional file on the left

Drivers for cloud gaming services

Although many people still think of PlayStation and Xbox when they think about gaming, the console market represents only a third of the global games market. From its arcade and console-based beginnings, the gaming industry has come a long way. Over the past 20 years, one of the most significant market trends has been the growth of casual gaming. Whereas hardcore gamers are passionate about frequent play and will pay more to play premium games, casual gamers play to pass the time. With the rapid adoption of smartphones capable of supporting gaming applications over the past decade, the population of casual and occasional gamers has risen dramatically.

This trend has seen the advent of free-to-play business models for games, further expanding the industry’s reach. In our earlier report, STL estimated that 45% of the population in the U.S. are either casual gamers (between 2 and 5 hours a week) or occasional gamers (up to 2 hours a week). By contrast, we estimated that hardcore gamers (more than 15 hours a week) make up 5% of the U.S. population, while regular players (5 to 15 hours a week) account for a further 15% of the population.

The expansion in the number of players is driving interest in ‘cloud gaming’. Instead of games running on a console or PC, cloud gaming involves streaming games onto a device from remote servers. The actual game is stored and run on remote compute infrastructure, with the results live-streamed to the player’s device. This has the important advantage of eliminating the need for players to purchase dedicated gaming hardware. Instead, the quality of the internet connection becomes the most important contributor to the gaming experience. While this type of gaming is still in its infancy, and faces a number of challenges, many companies are now entering the cloud gaming fold in an effort to capitalise on the new opportunity.

5G can support cloud gaming traffic growth

Cloud gaming requires not just high bandwidth and low latency, but also a stable connection and consistently low latency (i.e. minimal jitter). In theory, 5G promises to deliver stable ultra-low latency. In practice, an enormous amount of infrastructure investment will be required to enable a fully loaded 5G network to perform as well as end-to-end fibre. 5G networks operating in the lower frequency bands would likely buckle under the load if many gamers in a cell each needed a continuous 25Mbps stream. While 5G in millimetre-wave spectrum would have more capacity, it would require small cells and other mechanisms to ensure indoor penetration, given the spectrum is short range and can be blocked by obstacles such as walls.
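A rough capacity calculation shows why. The usable per-cell capacity figure below is an assumption chosen purely for illustration; the 25Mbps per stream comes from the scenario described above.

```python
# Rough illustration of how quickly continuous game streams consume a cell.
# The usable cell capacity is an assumed figure, purely for illustration.

stream_mbps = 25            # continuous downlink assumed per cloud gaming session
cell_capacity_mbps = 400    # assumed usable downlink capacity of a low-band 5G cell

print(cell_capacity_mbps // stream_mbps)  # 16 concurrent sessions saturate the cell
```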

Enter your details below to request an extract of the report

A complicated ecosystem

As explained in our earlier report, Cloud gaming: New opportunities for telcos?, the cloud gaming ecosystem is beginning to take shape. This is being accelerated by the growing availability of fibre and high-speed broadband, which is now being augmented by 5G and, in some cases, edge data centres. Early movers in cloud gaming are offering a range of services, from gaming rigs to game development platforms and cloud computing infrastructure, or an amalgamation of these.

One of the main attractions of cloud gaming is the potential hardware saving for gamers. High-end PC gaming can be an extremely expensive hobby: gaming PCs range from £500 for the very cheapest to over £5,000 for the very top end. They also require frequent hardware upgrades in order to meet the increasing processing demands of new gaming titles. With cloud gaming, gamers can access the latest graphics processing units at a much lower cost.

By some estimates, cloud gaming could deliver a high-end gaming environment at a quarter of the cost of a traditional console-based approach, as it would eliminate the need for retailing, packaging and delivering hardware and software to consumers, while also tapping the economies of scale inherent in the cloud. However, in STL Partners’ view that is a best-case scenario and a 50% reduction in costs is probably more realistic.
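A back-of-the-envelope comparison illustrates where a roughly 50% saving could come from. Every price below is hypothetical and chosen only to show the shape of the calculation, not to reproduce the report's modelling.

```python
# Hypothetical five-year comparison of gaming hardware spend vs a cloud
# gaming subscription. All prices are illustrative assumptions.

years = 5
gaming_pc = 1500               # mid-range gaming PC (hypothetical, GBP)
upgrades = 500                 # GPU/storage upgrades over the period (hypothetical)
cloud_sub_per_month = 15       # cloud gaming subscription (hypothetical)

traditional = gaming_pc + upgrades            # 2000
cloud = years * 12 * cloud_sub_per_month      # 900
print(round(cloud / traditional, 2))          # 0.45, i.e. roughly half the cost
```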

STL Partners believes adoption of cloud gaming will be gradual and piecemeal for the next few years, as console gamers work their way through another generation of consoles and casual gamers are reluctant to commit to a monthly subscription. However, from 2022, adoption is likely to grow rapidly as cloud gaming propositions improve.

At this stage, it is not yet clear who will dominate the value chain, if anyone. Will the “hyperscalers” be successful in creating a ‘Netflix’ for games? Google is certainly trying to do this with its Stadia platform, which has yet to gain any real traction, due to both its limited games library and its perceived technological immaturity. The established players in the games industry, such as EA, Microsoft (Xbox) and Sony (PlayStation), have launched cloud gaming offerings, or are, at least, in the process of doing so. Some telcos, such as Deutsche Telekom and Sunrise, are developing their own cloud gaming services, while SK Telecom is partnering with Microsoft.

What telcos can learn from Shadow’s cloud gaming proposition

The rest of this report explores the business models being pursued by cloud gaming providers. Specifically, it looks at cloud gaming company Shadow and how it fits into the wider ecosystem, before evaluating how its distinct approach compares with that of the major players in online entertainment, such as Sony and Google. The second half of the report considers the implications for telcos.

Table of Contents

  • Executive Summary
  • Introduction
  • Cloud gaming: a complicated ecosystem
    • The battle of the business models
    • The economics of cloud gaming and pricing models
    • Content offering will trump price
    • Cloud gaming is well positioned for casual gamers
    • The future cloud gaming landscape
  • 5G and fixed wireless
  • The role of edge computing
  • How and where can telcos add value?
  • Conclusions

Enter your details below to request an extract of the report

Telco edge computing: Turning vision into practice

The emerging opportunity for edge compute

There is ongoing interest in the telecoms industry in edge computing. The key rationale is that telcos – through their distributed network assets – are in a unique position to push workloads closer to devices, reducing latency and/or the volume of data sent to the cloud, thereby enabling new experiences and use cases while enhancing existing ones.

After years of centralising workloads in the public cloud, complementary demand is emerging for more distributed compute. This is good news for telcos: the time is ripe for them to turn their ambitions to edge computing. By exploiting their connectivity, network APIs and existing distributed real estate, telcos are well placed to play a strong role in distributed and edge computing ecosystems.

Telcos' excitement around edge is fuelled by new differentiation and revenue opportunities in the dynamic application developer ecosystem, which has hitherto been dominated by ever more sophisticated and technically advanced public clouds, as well as by early proofs-of-concept (POCs). Furthermore, underlying trends in cloud computing are increasingly promising for distributed (edge) computing:

  • Hybrid and multi-cloud models and technologies will continue to facilitate more distributed compute scenarios beyond hyperscale-only and on-premise-only.
  • Lightweight compute models will enable the deployment of cloud workloads on a smaller footprint (e.g. training AI models in the cloud and executing them at the edge, such as in a smartphone or a connected car; a minimal sketch of this pattern follows this list). For example, containers and “serverless” compute models make it possible to run workloads more efficiently and elastically than virtual machines.
  • The adoption of more platform-agnostic deployment models (such as containers) will facilitate the shifting and moving of workloads within distributed and edge cloud environments.
  • Proliferation of edge gateways and IoT devices will drive processing and analytics outside the datacentre and closer to the customer (premises).
  • Regarding security, a more distributed computing model is well suited to defending against certain types of attack (e.g. DDoS). Furthermore, if and when breaches do occur, they can be quarantined to an edge “cloudlet”, limiting the potential damage and undermining the economics of an attack.
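As referenced in the list above, the “train centrally, execute at the edge” pattern can be illustrated with a minimal sketch. scikit-learn and pickle are used here purely for illustration; the data, model choice and hand-off mechanism are hypothetical stand-ins for a real pipeline.

```python
# Minimal sketch of "train in the cloud, execute at the edge": a model is
# trained centrally, serialised, and only lightweight inference runs at the
# edge. Data, model choice and transport are illustrative assumptions.
import pickle
from sklearn.linear_model import LogisticRegression

# --- central / cloud side: train on (toy) historical data ---
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)
blob = pickle.dumps(model)                  # shipped to the edge site or device

# --- edge side: load the trained model and score local data ---
edge_model = pickle.loads(blob)
print(edge_model.predict([[0.85, 0.7]]))    # inference runs locally, near the data
```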

Our findings in this report are informed by a research programme STL Partners has conducted since January 2018, supported by and in cooperation with Aricent. For this research, STL Partners has interviewed both telcos and technology companies globally about their views on and current efforts related to edge computing. The research forms part of STL Partners' ongoing research work and consulting assignments around telco edge cloud.

Enter your details below to request an extract of the report

Key questions arising for telcos

Notwithstanding the strategic opportunity, telcos face some big questions in formulating edge initiatives. These include:

“What is the business case for telco edge – where is the money?”

“Will massive demand for low-latency compute drive demand from core/central to edge compute?”

“How can we compete with the big cloud players – won’t they expand and control the edge too?”

“How should we play in Enterprise edge – should we offer edge services on customer premises?”

“How can we architect and charge for different edge services – those requiring expensive, specialised hardware for accelerated computing to process machine learning/AI workloads?”

“What edge services should we offer and through what distribution channels?”

These are (real examples of) questions that telcos must address in defining and delivering edge services. This report provides a framework to tackle these (and other) questions in a structured way. We will revisit these questions (and the answers) throughout the report.

Enter your details below to request an extract of the report

Edge computing: Five viable telco business models

If you don’t subscribe to our research yet, you can download the free report as part of our sample report series.

This report has been produced independently by STL Partners, in co-operation with Hewlett Packard Enterprise and Intel.

Introduction

The idea behind Multi-Access Edge Computing (MEC) is to make compute and storage capabilities available to customers at the edge of communications networks. This will mean that workloads and applications are closer to customers, potentially enhancing experiences and enabling new services and offers. As we have discussed in our recent report, there is much excitement within telcos around this concept:

  • MEC promises to enable a plethora of vertical and horizontal use cases (e.g. leveraging low latency), implying significant commercial opportunities. This is critical as the whole industry is trying to uncover new sources of revenue, ideally where operators may be able to build a sustainable advantage.
  • MEC should also theoretically fit with telcos’ 5G and SDN/NFV deployments, which will run certain virtualised network functions in a distributed way, including at the edge of networks. In turn, MEC potentially benefits from the capabilities of a virtualised network to extract the full potential of distributed computing.

Figure 1: Defining MEC

Source: STL Partners

However, despite the excitement around the potentially transformative impact of MEC on telcos, viable commercial models that leverage MEC remain unclear and undefined. As an added complication, a diverse ecosystem around edge computing is emerging – of which telcos' MEC is only one part.

From this, the following key questions emerge:

  • Which business models will allow telcos to realise the various potential MEC use cases in a commercially viable way?
  • What are the right MEC business models for which telco?
  • What is needed for success? What are the challenges?

Contents:

  • Preface
  • Introduction
  • The emerging edge computing ecosystem
  • Telcos’ MEC opportunity
  • Hyperscale cloud providers are an added complication for telcos
  • How should telcos position themselves?
  • 5 telco business models for MEC
  • Business model 1: Dedicated edge hosting
  • Business model 2: Edge IaaS/PaaS/NaaS
  • Business model 3: Systems integration
  • Business model 4: B2B2X solutions
  • Business model 5: End-to-end consumer retail applications
  • Mapping use cases to business models
  • Some business models will require a long-term view on the investment
  • Which business models are right for which operator and which operator division?
  • Conclusion

Figures:

  • Figure 1: Defining MEC
  • Figure 2: MEC potential benefits
  • Figure 3: Microsoft’s new mantra – “Intelligent Cloud, Intelligent Edge”
  • Figure 4: STL Partners has identified 5 telco business models for MEC
  • Figure 5: The dedicated edge hosting value chain
  • Figure 6: Quantified example – Dedicated edge hosting
  • Figure 7: The Edge IaaS/PaaS/NaaS value chain
  • Figure 8: Quantified example – Edge IaaS/PaaS/NaaS
  • Figure 9: The SI value chain
  • Figure 10: Quantified example – Systems integration
  • Figure 11: The B2B2X solutions value chain
  • Figure 12: Quantified example – B2B2X solutions
  • Figure 13: Graphical representation of the end-to-end consumer retail applications business model
  • Figure 14: Quantified example – End-to-end consumer retail applications
  • Figure 15: Mapping MEC business models to possible use cases
  • Figure 16: High IRR correlates with low terminal value
  • Figure 17: Telcos need patience for edge-enabled consumer applications to become profitable (breakeven only in year 5)
  • Figure 18: The characteristics and skills required of the MEC operator depend on the business models

How 5G is Disrupting Cloud and Network Strategy Today

5G – cutting through the hype

As with 3G and 4G, the approach of 5G has been heralded by vast quantities of debate and hyperbole. We contemplated reviewing some of the more outlandish statements we’ve seen and heard, but for the sake of brevity and progress we’ll concentrate in this report on the genuine progress that has also occurred.

A stronger definition: a collection of related technologies

Let’s start by defining terms. For us, 5G is a collection of related technologies that will eventually be incorporated in a 3GPP standard replacing the current LTE-A. NGMN, the forum that is meant to coordinate the mobile operators’ requirements vis-à-vis the vendors, recently issued a useful document setting out the technologies it wants to see in the eventual solution, or at least considered in the standards process.

Incremental progress: ‘4.5G’

For a start, NGMN includes a variety of incremental improvements that promise substantially more capacity. These are things like higher-order modulation, developing the carrier-aggregation features in LTE-A to share spectrum between cells as well as within them, and improving interference coordination between cells. These are uncontroversial and are very likely to be deployed as incremental upgrades to existing LTE networks long before 5G is rolled out or even finished. This is what some vendors, notably Huawei, refer to as 4.5G.

Better antennas, beamforming, etc.

More excitingly, NGMN envisages some advanced radio features. These include beamforming, in which the shape of the radio beam between a base station and a mobile station is adjusted, taking advantage of the diversity of users in space to re-use the available radio spectrum more intensely, and both multi-user and massive MIMO (Multiple Input/Multiple Output). Massive MIMO simply means using many more antennas – at the moment the latest equipment uses 8 transmitter and 8 receiver antennas (8T*8R), whereas 5G might use 64. Multi-user MIMO uses the variety of antennas to serve more users concurrently, rather than just serving them faster individually. These promise quite dramatic capacity gains, at the cost of more computationally intensive software-defined radio systems and more complex antenna designs. Although they are cutting-edge, it’s worth pointing out that 802.11ac Wave 2 WiFi devices shipping now have these features, and it is likely that the WiFi ecosystem will hold a lead in them for some considerable time.

New spectrum

NGMN also sees evolution towards 5G in terms of spectrum. We can divide this into a conservative and a radical phase – in the first, conservative phase, 5G is expected to start using bands below 6GHz, while in the second, radical phase, the centimetre/millimetre-wave bands up to and above 30GHz are in discussion. These promise vastly more bandwidth, but as usual will demand a higher density of smaller cells and lower transmitter power levels. It’s worth pointing out that it’s still unclear whether 6GHz will make the agenda for this year’s WRC-15 conference, and 60GHz may or may not be taken up in 2019 at WRC-19, so spectrum policy is a critical path for the whole project of 5G.

Full duplex radio – doubling capacity in one stroke

Moving on, we come to some much more radical proposals and exotic technologies. 5G may use the emerging technology of full-duplex radio, which leverages advances in hardware signal processing to get rid of self-interference and make it possible for radio devices to send and receive at the same time on the same frequency, something hitherto thought impossible and a fundamental issue in radio. This area has seen a lot of progress recently and is moving from an academic research project towards industrial status. If it works, it promises to double the capacity provided by all the other technologies together.

A new, flatter network architecture?

A major redesign of the network architecture is being studied. This is highly controversial. A new architecture would likely be much “flatter”, with fewer levels of abstraction (such as the encapsulation of Internet traffic in the GTP protocol) or centralised functions. This, however, would be a very radical break with the GSM-inspired practice that worked in 2G and 3G, and in an adapted form in 4G. At the same time, the very demanding latency targets we will discuss in a moment will be very difficult to satisfy with a centralised architecture.

Content-centric networking

Finally, serious consideration is being given to what the NGMN calls information-based networking, better known to the wider community as name-based networking, named-data networking, or content-centric networking, as TCP-Reno inventor Van Jacobson called it when he introduced the concept in a now-classic lecture. The idea is that the Internet currently works by mapping content to domain names to machines. In content-centric networking, users request an item of content, uniquely identified by a name, and the network finds the nearest source for it, thus keeping traffic localised and facilitating scalable, distributed systems. This would represent a radical break with both GSM-inspired and most Internet practice, and is currently very much a research project. However, code does exist and has even been implemented using the OpenFlow NFV platform, and IETF standardisation is under way.

The mother of all stretch targets

5G is already a term associated with implausibly grand theoretical maxima, like every G before it. However, the NGMN has the advantage that it is a body that serves first of all the interests of the operators (the vendors’ customers), rather than the vendors. Its expectations are therefore substantially more interesting than some of the vendors’ propaganda material. It has also recently started to reach out to other stakeholders, such as manufacturing companies involved in the Internet of Things.

Reading the NGMN document raises some interesting issues about the definition of 5G. Rather than set targets in an absolute sense, it puts forward parameters for a wide range of different use cases. A common criticism of the 5G project is that it is over-ambitious in trying to serve, for example, low-bandwidth, ultra-low-power M2M monitoring networks and ultra-HD multicast video streaming with the same network. The range of use cases and performance requirements NGMN has defined is so diverse that they might indeed be served by different radio interfaces within a 5G infrastructure, or even by fully independent radio networks. Whether 5G ends up as “one radio network to rule them all”, an interconnection standard for several radically different systems, or something in between (for example, a radio standard with options, or a common core network and specialised radios) is very much up for debate.

In terms of speed, NGMN is looking for 50Mbps user throughput “everywhere”, with half that speed available uplink. Success is defined here at the 95th percentile, so this means 50Mbps to 95% geographical coverage, 95% of the time. This should support handoff up to 120Km/h. In terms of density, this should support 100 users/square kilometre in rural areas and 400 in suburban areas, with 10 and 20 Gbps/square km capacity respectively. This seems to be intended as the baseline cellular service in the 5G context.

In the urban core, downlink of 300Mbps and uplink of 50Mbps is required, with 100Km/h handoff, and up to 2,500 concurrent users per square kilometre. Note that the density targets are per-operator, so that would be 10,000 concurrent users/sq km when four MNOs are present. Capacity of 750Gbps/sq km downlink and 125Gbps/sq km uplink is required.

An extreme high-density scenario is included as “broadband in a crowd”. This requires the same speeds as the “50Mbps anywhere” scenario, with vastly greater density (150,000 concurrent users/sq km or 30,000 “per stadium”) and commensurately higher capacity. However, the capacity planning assumes that this use case is uplink-heavy – 7.5Tbps/sq km uplink compared to 3.75Tbps downlink. That’s a lot of selfies, even in 4K! The fast handoff requirement, though, is relaxed to support only pedestrian speeds.

There is also a femtocell/WLAN-like scenario for indoor and enterprise networks, which pushes speed and capacity to their limits, with 1Gbps downlink and 500Mbps uplink, 75,000 concurrent users/sq km or 75 users per 1000 square metres of floor space, and no significant mobility. Finally, there is an “ultra-low cost broadband” requirement with 10Mbps symmetrical, 16 concurrent users and 16Mbps/sq km, and 50Km/h handoff. (There are also some niche cases, such as broadcast, in-car, and aeronautical applications, which we propose to gloss over for now.)
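The area-capacity targets quoted above follow directly from user density multiplied by per-user throughput, as a quick consistency check on the urban-core figures shows:

```python
# Consistency check on the NGMN urban-core figures quoted above:
# area capacity = concurrent user density x per-user rate.

users_per_sq_km = 2500
downlink_mbps, uplink_mbps = 300, 50

print(users_per_sq_km * downlink_mbps / 1000)   # 750.0 Gbps/sq km downlink
print(users_per_sq_km * uplink_mbps / 1000)     # 125.0 Gbps/sq km uplink
print(4 * users_per_sq_km)                      # 10000 users/sq km with four MNOs
```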

Clearly, the solution will have to either be very flexible, or else be a federation of very different networks with dramatically different radio properties. It would, for example, probably be possible to aggregate the 50Mbps everywhere and ultra-low cost solutions – arguably the low-cost option is just the 50Mbps option done on the cheap, with fewer sites and low-band spectrum. The “broadband in a crowd” option might be an alternative operating mode for the “urban core” option, turning off handoff, pulling in more aggregated spectrum, and reallocating downlink and uplink channels or timeslots. But this does begin to look like at least three networks.

Latency: the X factor

Another big stretch, and perhaps the most controversial issue here, is the latency requirement. NGMN draws a clear distinction between what it calls end-to-end latency, aka the familiar round-trip time measurement from the Internet, and user-plane latency, defined thus:

Measures the time it takes to transfer a small data packet from user terminal to the Layer 2 / Layer 3 interface of the 5G system destination node, plus the equivalent time needed to carry the response back.

That is to say, the user-plane latency is a measurement of how long it takes the 5G network, strictly speaking, to respond to user requests, and how long it takes for packets to traverse it. NGMN points out that the two metrics are equivalent if the target server is located within the 5G network. NGMN defines both using small packets, and therefore negligible serialisation delay, and assuming zero processing delay at the target server. The target is 10ms end-to-end, 1ms for special use cases requiring low latency, or 50ms end-to-end for the “ultra-low cost broadband” use case. The low-latency use cases tend to be things like communication between connected cars, which will probably fall under the direct device-to-device (D2D) element of 5G, but nevertheless some vendors seem to think it refers to infrastructure as well as D2D. Therefore, this requirement should be read as one for which the 5G user plane latency is the relevant metric.

This last target is arguably the biggest stretch of all, but also perhaps the most valuable.

The lower bound on any measurement of latency is very simple – it’s the time it takes to physically reach the target server at the speed of light. Latency is therefore intimately connected with distance. Latency is also intimately connected with speed – protocols like TCP use it to determine how many bytes it can risk “in flight” before getting an acknowledgement, and hence how much useful throughput can be derived from a given theoretical bandwidth. Also, with faster data rates, more of the total time it takes to deliver something is taken up by latency rather than transfer.
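The link between latency and achievable throughput can be made concrete with the classic window-size calculation. The 64KB window below is just an illustrative, unscaled TCP window; modern stacks with window scaling can do better, but the relationship holds.

```python
# TCP throughput is bounded by window size / round-trip time, so the same
# window delivers very different speeds at different latencies. The 64 KB
# window is illustrative; window scaling allows larger windows in practice.

window_bytes = 64 * 1024

for rtt_ms in (5, 25, 100):
    throughput_mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
    print(rtt_ms, "ms ->", round(throughput_mbps, 1), "Mbps")
# 5 ms   -> ~104.9 Mbps
# 25 ms  -> ~21.0 Mbps
# 100 ms -> ~5.2 Mbps
```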

And the way we build applications now tends to make latency, and especially the variance in latency known as jitter, more important. In order to handle the scale demanded by the global Internet, it is usually necessary to scale out by breaking up the load across many, many servers. In order to make this work, it is usually also necessary to disaggregate the application itself into numerous, specialised, and independent microservices. (We strongly recommend Mary Poppendieck’s presentation at the link.)

The result of this is that a popular app or web page might involve calls to dozens or hundreds of different services. Google.com includes 31 HTTP requests these days and Amazon.com 190. If the variation in latency is not carefully controlled, it becomes statistically more likely than not that a typical user will encounter at least one server’s 99th-percentile performance. (eBay tries to identify users getting slow service and serve them a deliberately cut-down version of the site – see slide 17 here.)
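The “more likely than not” claim is simple probability, assuming each request independently has a 1% chance of hitting a server’s 99th-percentile latency (the simplification the argument above relies on):

```python
# Probability that at least one of N independent requests hits a server's
# 99th-percentile latency: 1 - 0.99^N.

for n in (31, 190):   # request counts quoted above for Google.com and Amazon.com
    print(n, round(1 - 0.99 ** n, 2))
# 31  -> ~0.27
# 190 -> ~0.85, i.e. "more likely than not" by a wide margin
```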

We discuss this in depth in a Telco 2.0 Blog entry here.

Latency: the challenge of distance

It’s worth pointing out here that the 5G targets can literally be translated into kilometres. The rule of thumb for speed-of-light delay is 4.9 microseconds for each kilometre of fibre with a refractive index of 1.47. 1ms – 1000 microseconds – equals about 204km in a straight line, assuming no routing delay. A response back is needed too, so divide that distance in half. As a result, in order to be compliant with the NGMN 5G requirements, all the network functions required to process a data call must be physically located within 100km, i.e. 1ms, of the user. And if the end-to-end requirement is taken seriously, the applications or content that users want must also be hosted within 1000km, i.e. 10ms, of the user. (In practice, there will be some delay contributed by serialisation, routing, and processing at the target server, so this would actually be somewhat more demanding.)
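The same rule of thumb turns any latency budget into a reach contour, as a short sketch shows:

```python
# Converting round-trip latency budgets into one-way fibre reach using the
# 4.9 microseconds-per-kilometre rule of thumb (straight line, no routing delay).

US_PER_KM = 4.9

def one_way_reach_km(round_trip_ms):
    return (round_trip_ms * 1000 / 2) / US_PER_KM  # halve for the return leg

for budget_ms in (1, 5, 10):
    print(budget_ms, "ms ->", round(one_way_reach_km(budget_ms)), "km")
# 1 ms  -> ~102 km  (network functions within ~100 km of the user)
# 5 ms  -> ~510 km
# 10 ms -> ~1020 km (content and applications within ~1000 km of the user)
```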

To achieve this, the architecture of 5G networks will need to change quite dramatically. Centralisation suddenly looks like the enemy, and middleboxes providing video optimisation, deep packet inspection, policy enforcement, and the like will have no place. At the same time, protocol designers will have to think seriously about localising traffic – this is where the content-centric networking concept comes in. Given the number of interested parties in the subject overall, it is likely that there will be a significant period of ‘horse-trading’ over the detail.

It will also need nothing more or less than a CDN and data-centre revolution. Content, apps, or commerce hosted within this 1000km contour will have a very substantial competitive advantage over sites that don’t move their hosting strategy to take advantage of lower latency. Telecoms operators, by the same token, will have to radically decentralise their networks to get their systems within the 100km contour. Content, apps, or commerce sites that move closer still, to the 5ms/500km contour or nearer, will benefit further. The idea of centralising everything into shared services and global cloud platforms suddenly looks dated. So might the enormous hyperscale data centres one day look like the IT equivalent of sprawling, gas-guzzling suburbia? And will mobile operators become a key actor in the data-centre economy?

  • Executive Summary
  • Introduction
  • 5G – cutting through the hype
  • A stronger definition: a collection of related technologies
  • The mother of all stretch targets
  • Latency: the X factor
  • Latency: the challenge of distance
  • The economic value of snappier networks
  • Only Half The Application Latency Comes from the Network
  • Disrupt the cloud
  • The cloud is the data centre
  • Have the biggest data centres stopped getting bigger?
  • Mobile Edge Computing: moving the servers to the people
  • Conclusions and recommendations
  • Regulatory and political impact: the Opportunity and the Threat
  • Telco-Cloud or Multi-Cloud?
  • 5G vs C-RAN
  • Shaping the 5G backhaul network
  • Gigabit WiFi: the bear may blow first
  • Distributed systems: it’s everyone’s future

 

  • Figure 1: Latency = money in search
  • Figure 2: Latency = money in retailing
  • Figure 3: Latency = money in financial services
  • Figure 4: Networking accounts for 40-60 per cent of Facebook’s load times
  • Figure 5: A data centre module
  • Figure 6: Hyperscale data centre evolution, 1999-2015
  • Figure 7: Hyperscale data centre evolution 2. Power density
  • Figure 8: Only Facebook is pushing on with ever bigger data centres
  • Figure 9: Equinix – satisfied with 40k sq ft
  • Figure 10: ETSI architecture for Mobile Edge Computing