Telco Cloud Deployment Tracker: 5G core deep dive

Deep dive: 5G core deployments 

In this July 2022 update to STL Partners’ Telco Cloud Deployment Tracker, we present granular information on 5G core launches. They fall into three categories:

  • 5G Non-standalone core (5G NSA core) deployments: The 5G NSA core (agreed as part of 3GPP Release 15 in December 2017) involves using a virtualised and upgraded version of the existing 4G core (or EPC) to support 5G New Radio (NR) wireless transmission in tandem with existing LTE services. This was the first form of 5G to be launched and still accounts for 75% of all 5G core network deployments in our Tracker.
  • 5G Standalone core (5G SA core) deployments: The SA core is a completely new and 5G-only core. It has a simplified, cloud-native and distributed architecture, and is designed to support services and functions such as network slicing, Ultra-Reliable Low-Latency Communications (URLLC) and enhanced Machine-Type Communications (eMTC, i.e. massive IoT). Our Tracker indicates that the upcoming wave of 5G core deployments in 2022 and 2023 will be mostly 5G SA core.
  • Converged 5G NSA/SA core deployments: This is when a dual-mode NSA and SA platform is deployed; in most cases, the NSA core results from the upgrade of an existing LTE core (EPC) to support 5G signalling and radio. The principle behind a converged NSA/SA core is the ability to orchestrate different combinations of containerised network functions, and to flip over automatically and dynamically from an NSA to an SA configuration, in tandem – for example – with other features and services such as Dynamic Spectrum Sharing and the needs of different network slices. For this reason, launching a converged NSA/SA platform is a marker of a more cloud-native approach than a simple 5G NSA launch. Ericsson is the most common vendor for this type of platform, with a handful of deployments coming from Huawei, Samsung and WorkingGroupTwo. Although interesting, converged 5G NSA/SA core deployments remain a minority (7% of all 5G core deployments over the 2018–2023 period), so most of our commentary will focus on 5G NSA and 5G SA core launches.

Enter your details below to request an extract of the report

75% of 5G cores are still Non-standalone (NSA)

Global 5G core deployments by type, 2018–23

  • There is renewed activity in 5G core launches this year: the total number of 5G core deployments so far in 2022 (effective and in progress) stands at 49, above the 47 logged in the whole of 2021. At the very least, total 5G deployments in 2022 will settle between the 2021 level and the 2020 peak of 97.
  • 5G in some form now exists in most places where it is both in demand and affordable, but there remain large economies where it has yet to be launched: Turkey, Russia and, most notably, India. It also has yet to be launched in most of Africa.
  • In countries with 5G, the next phase of launches, which will see the migration of NSA to SA cores, has yet to take place on a significant scale.
  • To date, 75% of all 5G cores are NSA. However, 5G SA will outstrip NSA in terms of deployments in 2022 and represent 24 of the 49 launches this year, or 34 if one includes converged NSA/SA cores as part of the total.
  • All but one of the 5G launches announced for 2023 are standalone; they all involve Tier-1 MNOs including Orange (in its European footprint involving Ericsson and Nokia), NTT Docomo in Japan and Verizon in the US.

The upcoming wave of SA core (and open / vRAN) represents an evolution towards cloud-native

  • Cloud-native functions or CNFs are software designed from the ground up for deployment and operation in the cloud, with:
    • Portability across any hardware infrastructure or virtualisation platform
    • Modularity and openness, with components from multiple vendors able to be flexibly swapped in and out based on a shared set of compute and OS resources, and open APIs (in particular, via software ‘containers’)
    • Automated orchestration and lifecycle management, with individual micro-services (software sub-components) able to be independently modified / upgraded, and automatically re-orchestrated and service-chained based on a persistent, API-based, ‘declarative’ framework (one which states the desired outcome, with the service chain organising itself to deliver the outcome in the most efficient way)
    • Compute, resource and software efficiency: as a concomitant of the automated, lean and logically optimal characteristics described above, CNFs are more efficient (both functionally and in terms of operating costs) and consume fewer compute and energy resources
    • Scalability and flexibility, as individual functions (for example, distributed user plane functions in 5G networks) can be scaled up or down instantly and dynamically in response to overall traffic flows or the needs of individual services
    • Programmability, as network functions are now entirely based on software components that can be programmed and combined in a highly flexible manner in accordance with the needs of individual services and use contexts, via open APIs.
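
The ‘declarative’ orchestration model described above can be sketched in a few lines of code. The following is a minimal illustration of the reconciliation pattern only – all names are hypothetical, not any real orchestrator’s API: the operator declares a desired replica count per network function, and a control loop computes the actions needed to converge the running state towards that declaration.

```python
# Minimal sketch of declarative reconciliation: desired state is declared,
# and a control loop works out how to converge the actual state towards it.
# Function names and network-function labels are illustrative only.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the scaling actions needed to move `actual` towards `desired`."""
    actions = []
    for func, want in desired.items():
        have = actual.get(func, 0)
        if have < want:
            actions.append(("scale-up", func, want - have))
        elif have > want:
            actions.append(("scale-down", func, have - want))
    return actions

# Declared state: e.g. a distributed user plane function and a control function.
desired_state = {"upf-edge": 4, "amf": 2}
actual_state = {"upf-edge": 2, "amf": 3}

for action in reconcile(desired_state, actual_state):
    print(action)
```

A real cloud-native orchestrator applies these actions, re-observes the cluster and repeats, so the service chain continuously organises itself around the declared outcome rather than following a hand-written sequence of imperative steps.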

Previous telco cloud tracker releases and related research

Each new release of the tracker is global, but is accompanied by an analytical report which, from time to time, focusses on trends in particular regions:

Private networks: Lessons so far and what next

The private networks market is rapidly developing

Businesses across a range of sectors are exploring the benefits of private networks in supporting their connected operations. However, there are considerable variations between national markets, reflecting spectrum and other regulatory actions, as well as industrial structure and other local factors. The US, Germany, the UK, Japan and the Nordics are among the leading markets.

Enterprises’ adoption of digitalisation and automation programmes is growing across various industries. The demand from enterprises stems from their need for customised networks to meet their vertical-specific connectivity requirements – as well as more basic considerations of coverage and cost of public networks, or alternative wireless technologies.

On the supply side, the development in cellular standards, including the virtualisation of the RAN and core elements, the availability of edge computing, and cloud management solutions, as well as the changing spectrum regulations are making private networks more accessible for enterprises. That said, many recently deployed private cellular networks still use “traditional” integrated small cells, or major vendors’ bundled solutions – especially in conservative sectors such as utilities and public safety.

Many new players are entering the market through different vertical and horizontal approaches and either competing or collaborating with traditional telcos. Traditional telcos, new telcos (mainly building private networks or offering network services), and other stakeholders are all exploring strategies to engage with the market and assessing the opportunities across the value chain as private network adoption increases.

Following up on our 2019 report, Private and vertical cellular networks: Threats and opportunities, we explore the recent developments in the private network market, regulatory activities and policy around local and shared spectrum, and the different deployment approaches and business cases. In this report we address several interdependent elements of the private networks landscape.

What is a private network?

A private network leverages dedicated resources such as infrastructure and spectrum to provide precise coverage and capacity to specific devices and user groups. The network can be as small as a single radio cell covering a single campus or a location such as a manufacturing site (or even a single airplane), or it can span across a wider geographical area such as a nationwide railway network or regional utility grids.

Private networks is an umbrella term that can include different LAN (or WAN) connectivity options such as Wi-Fi and LPWAN. More commonly, however, the term is associated with private cellular networks based on 3GPP mobile technologies, i.e. LTE or 5G New Radio (NR).

Private networks are also different from in-building densification solutions such as small cells and DAS, which extend the coverage of the public network or strengthen its capacity indoors or in highly dense locations. These solutions are still part of the public network and do not support customised control over local network access or other characteristics. In future, some may support local private networks as well as public MNOs’ services.

Besides dedicated coverage and capacity, private networks can be customised in other respects, such as security, latency and integration with enterprises’ internal systems, to meet business-specific requirements in ways that best-effort public networks cannot.

Unlike public networks, private networks are not available to the public through commercially available devices and SIM cards. The network owner or operator controls the authorisation and the access to the network for permissioned devices and users. These definitions blur somewhat if the network is run by a “community” such as a municipality.
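
This permissioned-access model can be sketched in a few lines. The example below is purely illustrative – not a real 5G core API – and the device identifiers use the 999/99 test-network range rather than any live operator’s numbering:

```python
# Illustrative sketch of private-network access control: only devices
# provisioned by the network owner are admitted, unlike a public network
# that serves any commercially available SIM. Hypothetical identifiers.

PROVISIONED_IMSIS = {"999990000000001", "999990000000002"}

def attach_request(imsi: str) -> str:
    """Accept or reject a device attach based on the owner's allowlist."""
    if imsi in PROVISIONED_IMSIS:
        return "ATTACH_ACCEPT"
    return "ATTACH_REJECT"

print(attach_request("999990000000001"))  # provisioned device: accepted
print(attach_request("310150123456789"))  # public-network SIM: rejected
```

In a real deployment this decision sits in the core’s subscriber database and authentication functions, and the same principle extends to per-device policies on latency, slicing and which internal systems a device may reach.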

Typically, devices will not work outside the boundaries of their private network. That is a requirement in many use cases, such as manufacturing, where devices are not expected to continue functioning outside the premises. However, in a few areas, such as logistics, solutions can include the use of dual-SIM devices for both public and private networks, or the use of other wide area technologies such as TETRA for voice. Moreover, roaming agreements with public networks can be activated to support a degree of service continuity outside the private network boundaries.

While the technology and market are still developing, several terms are used interchangeably to describe 3GPP private networks, such as dedicated networks, standalone networks, campus networks, local networks, vertical mobile networks and non-public networks (NPN), as defined by the 3GPP.

The emergence of new telcos

Many telcos are not ready to support private network demands from enterprises at scale because they lack sufficient resources and expertise. Some enterprises may also be reluctant to work with telcos for various reasons, including concerns over traditional telcos’ abilities in vertical markets and a desire to control costs. This gap is already catalysing the emergence of new types of mobile network service provider, as opposed to traditional MNOs that operate national or regional public mobile networks.

These players essentially carry out the same roles as traditional MNOs in configuring the network, provisioning the service, and maintaining the private network infrastructure. Some of them may also have access to spectrum and buy network equipment and technologies directly from network equipment vendors. In addition to “new telcos” or “new operators”, other terms have been used to describe these players, such as specialist operators and alternative operators. Throughout this report, we will use new telcos or specialist operators when describing these players collectively, and traditional/public operators when referring to a typical national wide area mobile network provider. New players can be divided into the following categories:

Possible private network service providers

Source: STL Partners

Table of Contents

  • Executive Summary
    • What next
    • Trends and recommendations for telcos, vendors, enterprises and policymakers
  • Introduction
  • Types of private network operators
    • What is a private network?
    • The emergence of new telcos
  • How various stakeholders are approaching the market
    • Technology development: Choosing between LTE and 5G
    • Private network technology vendors
    • Regional overview
    • Vertical overview
    • Mergers and acquisitions activities
  • The development of spectrum regulations
    • Unlicensed spectrum for LTE and 5G is an attractive option, but it remains limited
    • The rise of local spectrum licensing threatens some telcos
    • …but there is no one-size-fits-all in local spectrum licensing
    • How local spectrum licensing shapes the market and enterprise adoption
    • Recommendations for different stakeholders
  • Assessing the approaches to network implementation
    • Private network deployment models
    • Business models and roles for telcos
  • Conclusion and recommendations
  • Index
  • Appendix 1: Examples of private network deployments in 2020–21

Why the consumer IoT is stuck in the slow lane

A slow start for NB-IoT and LTE-M

For telcos around the world, the Internet of Things (IoT) has long represented one of the most promising growth opportunities. Yet for most telcos, the IoT still only accounts for a low single-digit percentage of their overall revenue. One of the stumbling blocks has been relatively low demand for IoT solutions in the consumer market. This report considers why that is, and whether low-cost connectivity technologies specifically designed for the IoT (such as NB-IoT and LTE-M) will ultimately change this dynamic.

NB-IoT and LTE-M are often referred to as Massive IoT technologies because they are designed to support large numbers of connections, which periodically transmit small amounts of data. They can be distinguished from broadband IoT connections, which carry more demanding applications, such as video content, and critical IoT connections that need to be always available and ultra-reliable.

The initial standards for both technologies were completed by 3GPP in 2016, but adoption has been relatively modest. This report considers the key B2C and B2B2C use cases for Massive IoT technologies and the prospects for widespread adoption. It also outlines how NB-IoT and LTE-M are evolving and the implications for telcos’ strategies.

This builds on previous STL Partners research, including LPWA: Which way to go for IoT? and Can telcos create a compelling smart home? The LPWA report explained why IoT networks need to be considered across multiple dimensions, including coverage, reliability, power consumption, range and bandwidth. Cellular technologies tend to be best suited to wide area applications for which very reliable connectivity is required (see Figure below).

IoT networks should be considered across multiple dimensions

Source: Disruptive Analysis


The smart home report outlined how consumers could use both cellular and short-range connectivity to bolster security, improve energy efficiency, charge electric cars and increasingly automate appliances. One of the biggest underlying drivers in the smart home sector is peace of mind – householders want to protect their properties and their assets, as rising population growth and inequality fuel fear of crime.

That report contended that householders might be prepared to pay for a simple and integrated way to monitor and remotely control all their assets, from door locks and televisions to solar panels and vehicles.  Ideally, a dashboard would show the status and location of everything an individual cares about. Such a dashboard could show the energy usage and running cost of each appliance in real-time, giving householders fingertip control over their possessions. They could use the resulting information to help them source appropriate insurance and utility supply.

Indeed, STL Partners believes telcos have a broad opportunity to help coordinate better use of the world’s resources and assets, as outlined in the report: The Coordination Age: A third age of telecoms. Reliable and ubiquitous connectivity is a key enabler of the emerging sharing economy in which people use digital technologies to easily rent the use of assets, such as properties and vehicles, to others. The data collected by connected appliances and sensors could be used to help safeguard a property against misuse and source appropriate insurance covering third party rentals.

Do consumers need Massive IoT?

Whereas some IoT applications, such as connected security cameras and drones, require high-speed and very responsive connectivity, most do not. Connected devices that are designed to collect and relay small amounts of data, such as location, temperature, power consumption or movement, don’t need a high-speed connection.

To support these devices, the cellular industry has developed two key technologies – LTE-M (LTE for Machines, sometimes referred to as Cat M) and NB-IoT (Narrowband IoT). In theory, they can be deployed through a straightforward upgrade to existing LTE base stations. Although these technologies don’t offer the capacity, throughput or responsiveness of conventional LTE, they do support the low power wide area connectivity required for what is known as Massive IoT – the deployment of large numbers of low cost sensors and actuators.
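
To illustrate just how small these payloads can be, a complete sensor report (device ID, position, temperature, battery level) fits in 15 bytes using a compact binary encoding. The format below is hypothetical, for illustration only – it is not a standardised NB-IoT or LTE-M payload:

```python
import struct

# Hypothetical compact sensor report: 4-byte device ID, two 4-byte floats
# for latitude/longitude, 2-byte temperature in centi-degrees Celsius,
# 1-byte battery percentage. Big-endian, no padding: 15 bytes in total.
REPORT_FORMAT = ">IffhB"

def encode_report(device_id, lat, lon, temp_c, battery_pct):
    """Pack one sensor reading into a 15-byte binary payload."""
    return struct.pack(REPORT_FORMAT, device_id, lat, lon,
                       int(temp_c * 100), battery_pct)

payload = encode_report(42, 51.5072, -0.1276, 21.5, 87)
print(len(payload))  # 15 bytes -- easily carried in a single uplink message
```

A fleet of such devices reporting every few minutes generates only kilobytes per day each, which is why Massive IoT networks prioritise coverage, capacity and power efficiency over throughput.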

For mobile operators, the deployment of NB-IoT and LTE-M can be quite straightforward. If they have relatively modern LTE base stations, then NB-IoT can be enabled via a software upgrade. If their existing LTE network is reasonably dense, there is no need to deploy additional sites – NB-IoT, and to a lesser extent LTE-M, are designed to penetrate deep inside buildings. Still, individual base stations may need to be optimised on a site-by-site basis to ensure that they get the full benefit of NB-IoT’s low power levels, according to a report by The Mobile Network, which notes that operators also need to invest in systems that can provide third parties with visibility and control of IoT devices, usage and costs.

There are a number of potential use cases for Massive IoT in the consumer market:

  • Asset tracking: pets, bikes, scooters, vehicles, keys, wallets, passports, phones, laptops, tablets, etc.
  • Vulnerable person tracking: children and the elderly
  • Health wearables: wristbands, smart watches
  • Metering and monitoring: power, water, garden
  • Alarms and security: smoke alarms, carbon monoxide, intrusion
  • Digital homes: automation of temperature and lighting in line with occupancy

In the rest of this report we consider the key drivers and barriers to take-up of NB-IoT and LTE-M for these consumer use cases.

Table of Contents

  • Executive Summary
  • Introduction
  • Do consumers need Massive IoT?
    • The role of eSIMs
    • Takeaways
  • Market trends
    • IoT revenues: Small, but growing
  • Consumer use cases for cellular IoT
    • Amazon’s consumer IoT play
    • Asset tracking: Demand is growing
    • Connecting e-bikes and scooters
    • Slow progress in healthcare
    • Smart metering gains momentum
    • Supporting micro-generation and storage
    • Digital buildings: A regulatory play?
    • Managing household appliances
  • Technological advances
    • Network coverage
  • Conclusions: Strategic implications for telcos


Driving the agility flywheel: the stepwise journey to agile

Agility is front of mind, now more than ever

Telecoms operators today face an increasingly challenging market, with pressure coming from new non-telco competitors, the demands of unfamiliar B2B2X business models emerging from new enterprise opportunities across industries, and the need to make significant investments in 5G. As the telecoms industry undergoes these changes, operators are considering how best to realise commercial opportunities, particularly in enterprise markets, through new types of value-added services and capabilities that 5G can bring.

However, operators need to be able to react not just to known near-term opportunities as they arise, but also to ready themselves for opportunities that are still being imagined. With such uncertainty, agility – with the quick responsiveness and unified focus it implies – is integral to an operator’s continued success and its ability to capitalise on these opportunities.

Traditional linear supply models are now being complemented by more interconnected ecosystems of customers and partners. Innovation of products and services is a primary function of these decentralised supply models. Ecosystems allow the disparate needs of participants to be met through highly configurable assets rather than waiting for a centralised player to understand the complete picture. This emphasises the importance of programmability in maximising the value returned on your assets, both in end-to-end solutions you deliver, and in those where you are providing a component of another party’s system. The need for agility has never been stronger, and this has accelerated transformation initiatives within operators in recent years.

Concepts of agility have crystallised in meaning

In 2015, STL Partners published a report on ‘The Agile Operator: 5 key ways to meet the agility challenge’, exploring the concept and characteristics of operator agility, including what it means to operators, key areas of agility and the challenges in the agile transformation. Today, the definition of agility remains as broad as in 2015 but many concepts of agility have crystallised through wider acceptance of the importance of the construct across different parts of the organisation.

Agility today is a pervasive philosophy of incremental innovation learned from software development that emphasises both speed of innovation at scale and carrier-grade resilience. This is achieved through cloud native modular architectures and practices such as sprints, DevOps and continuous integration and continuous delivery (CI/CD) – occurring in a virtuous cycle we call the agility flywheel.

The Agility Flywheel

Source: STL Partners

Six years ago, operators were largely looking to borrow only certain elements of cloud native for adoption in specific pockets within the organisation, such as IT. Now, the cloud model is more widely embraced across the business and telcos profess ambitions to become software-centric companies.

Same problem, different constraints

Cloud native is the most fundamental version of the componentised cloud software vision, and progress towards this ideal of agility has been heavily constrained by operators’ underlying capabilities. In 2015, operators were just starting to embark on their network virtualisation journeys, with barriers such as siloed legacy IT stacks, inelastic infrastructures and architecture-constrained software lifecycles. Though these barriers continue to be a challenge for many, the operators at the forefront – now unhindered by these basic constraints – have been driving a resurgence and general acceleration towards agility organisation-wide, facing new challenges around the unknowns underpinning the requirements of future capabilities.

With 5G, the network itself is designed as cloud native from the ground up, as are the leading edge of enterprise applications recently deployed by operators, alleviating by design some of the constraints on operators’ ability to become more agile. Uncertainty around what future opportunities will look like, and how to support them, requires agility to run deep into all of an operator’s processes and capabilities. Though there is a vast raft of other opportunities that do not need cloud native, the market is ultimately evolving in this direction, and operators should benchmark their ambitions on the leading edge, with a plan to get there incrementally. This report looks to address the following key question:

Given the flexibility and driving force that 5G provides, how can operators take advantage of recent enablers to drive greater agility and thrive in the current pace of change?


Table of Contents

  • Executive Summary
  • Agility is front of mind, now more than ever
    • Concepts of agility have crystallised in meaning
    • Same problem, different constraints
  • Ambitions to be a software-centric business
    • Cloudification is supporting the need for agility
    • A balance between seemingly opposing concepts
  • You are only as agile as your slowest limb
    • Agility is achieved stepwise across three fronts
    • Agile IT and networks in the decoupled model
    • Renewed need for orchestration that is dynamic
    • Enabling and monetising telco capabilities
    • Creating momentum for the agility flywheel
  • Recommendations and conclusions

SK Telecom: Lessons in 5G, AI, and adjacent market growth

SK Telecom’s strategy

SK Telecom is the largest mobile operator in South Korea, with a 42% share of the mobile market, and is also a major fixed broadband operator. Its growth strategy is focused on 5G, AI and a small number of related business areas where it sees the potential for revenue to replace that lost from its core mobile business.

By developing applications based on 5G and AI, it hopes to create additional revenue streams both for its mobile business and for new areas, as it has done in smart home and is starting to do for a variety of smart business applications. In 5G it is placing an emphasis on indoor coverage and edge computing as a basis for vertical industry applications. Its AI business is centred on NUGU, a smart speaker and a platform for business applications.

Its other main areas of business focus are media, security, ecommerce and mobility, but it is also active in other fields including healthcare and gaming.

The company takes an active role internationally in standards organisations and commercially, both in its own right and through many partnerships with other industry players.

It is a subsidiary of SK Group, one of the largest chaebols in Korea, which has interests in energy and oil. Chaebols are large family-controlled conglomerates which display a high level and concentration of management power and control. The ownership structures of chaebols are often complex, owing to the many crossholdings between companies owned by chaebols and by family members. SK Telecom uses its connections within SK Group to set up ‘friendly user’ trials of new services, such as edge and AI.

While the largest part of the business remains in mobile telecoms, SK Telecom also owns a number of subsidiaries, mostly active in its main business areas, for example:

  • SK Broadband, which provides fixed broadband (ADSL and wireless), IPTV and mobile OTT services
  • ADT Caps, a security business
  • IDQ, which specialises in quantum cryptography (security)
  • 11st, an open market platform for ecommerce
  • SK Hynix, which manufactures memory semiconductors

Few of the subsidiaries are owned outright by SKT; it believes the presence of other shareholders can provide a useful source of further investment and, in some cases, expertise.

SKT was originally the mobile arm of KT, the national operator. It was privatised soon after establishing a cellular mobile network and subsequently acquired by SK Group, a major chaebol with interests in energy and oil, which now has a 27% shareholding. The government pension service owns an 11% share in SKT, Citibank 10%, and 9% is held by SKT itself. The chairman of SK Group has a personal holding in SK Telecom.

Following this introduction, the report comprises three main sections:

  • SK Telecom’s business strategy: range of activities, services, promotions, alliances, joint ventures, investments, which covers:
    • Mobile 5G, Edge and vertical industry applications, 6G
    • AI and applications, including NUGU and Smart Homes
    • New strategic business areas, comprising Media, Security, eCommerce, and other areas such as mobility
  • Business performance
  • Industrial and national context.

Overview of SKT’s activities

Network coverage

SK Telecom has been one of the earliest and most active telcos to deploy a 5G network. It initially created 70 5G clusters in key commercial districts and densely populated areas to ensure a level of coverage suitable for augmented reality (AR) and virtual reality (VR) and plans to increase the number to 240 in 2020. It has paid particular attention to mobile (or multi-access) edge computing (MEC) applications for different vertical industry sectors and plans to build 5G MEC centres in 12 different locations across Korea. For its nationwide 5G Edge cloud service it is working with AWS and Microsoft.

In recognition of the constraints imposed by the spectrum used by 5G, it is also working on ensuring good indoor 5G coverage in some 2,000 buildings, including airports, department stores and large shopping malls as well as small-to-medium-sized buildings using distributed antenna systems (DAS) or its in-house developed indoor 5G repeaters. It also is working with Deutsche Telekom on trials of the repeaters in Germany. In addition, it has already initiated activities in 6G, an indication of the seriousness with which it is addressing the mobile market.

NUGU, the AI platform

SKT launched its own AI-driven smart speaker, NUGU, in 2016/17, and is using it to support consumer applications such as Smart Home and IPTV. There are now eight versions of NUGU for consumers, and it also serves as a platform for other applications. More recently, SKT has developed several NUGU/AI applications for businesses and civil authorities in conjunction with 5G deployments. It also has an AI-based network management system named Tango.

Although NUGU initially performed well in the market, it seems likely that the subsequent launch of smart speakers by major global players such as Amazon and Google has had a strong negative impact on the product’s recent growth. The absence of published data supports this view, since the company often only reports good news, unless required by law. SK Telecom has responded by developing variants of NUGU for children and other specialist markets and making use of the NUGU AI platform for a variety of smart applications. In the absence of published information, it is not possible to form a view on the success of the NUGU variants, although the intent appears to be to attract young users and build on their brand loyalty.

SKT has offered smart home products and services since 2015/16. Its smart home portfolio has continually developed in conjunction with an increasing range of partners, and is widely recognised as one of the two most comprehensive offerings globally, the other being Deutsche Telekom’s Qivicon. The service appears to be most successful in penetrating the new-build market through property developers.

NUGU also serves as an AI platform supporting business applications. SK Telecom has supported the SK Group by providing new AI/5G solutions and opening APIs to other subsidiaries, including SK Hynix. Within the SK Group, SK Planet, a subsidiary of SK Telecom, is active in internet platform development and offers development of NUGU-based applications as a service.

Smart solutions for enterprises

SKT continues to experiment with and trial new applications which build on its 5G and AI capabilities for consumers (B2C), businesses and the public sector. During 2019 it established B2B applications making use of 5G, on-premises edge computing and AI, including:

  • Smart factory (real-time process control and quality control)
  • Smart distribution and robot control
  • Smart office (security/access control, virtual docking, AR/VR conferencing)
  • Smart hospital (NUGU for voice commands for patients, AR-based indoor navigation, facial recognition for medical workers to improve security, and investigation of the possible use of quantum cryptography in the hospital network)
  • Smart cities, e.g. an intelligent transportation system in Seoul, with links to vehicles via 5G, or via SK Telecom’s T-Map navigation service for non-5G users.

It is too early to judge whether these B2B smart applications are a success, and we will continue to monitor progress.

Acquisition strategy

SK Telecom has been growing these new business areas over the past few years, both organically and by acquisition. Its entry into the security business has been entirely by acquisition, effectively buying new revenue to compensate for revenue lost in the core mobile business. It is too early to assess the ongoing impact and success of these businesses as part of SK Telecom.

Acquisitions in general have a mixed record of success. SK Telecom’s usual approach of acquiring a controlling interest and investing in its acquisitions, while keeping them as separate businesses, tends (given the right management approach from the parent) to cause the least disruption to the acquired business, and therefore increases the likelihood of longer-term success. It also allows for investment from other sources, reducing the cost and risk to SK Telecom as the acquiring company. As a counterpoint, however, M&A in this style does little to change practices in the rest of the business.

However, it has also shown willingness to change its position as and when appropriate, either by sale or by a change in investment strategy. For example, through its subsidiary SK Planet, it acquired Shopkick, a shopping loyalty rewards business, in 2014, but sold it in 2019 for the price it paid. It took a different approach with quantum technologies, an activity originally set up in-house in 2011, which it rolled into IDQ following that company’s acquisition in 2018.

SKT has also recently entered into partnerships and agreements concerning the following areas of business:

 

Table of Contents

  • Executive Summary
  • Introduction and overview
    • Overview of SKT’s activities
  • Business strategy and structure
    • Strategy and lessons
    • 5G deployment
    • Vertical industry applications
    • AI
    • SK Telecom ‘New Business’ and other areas
  • Business performance
    • Financial results
    • Competitive environment
  • Industry and national context
    • International context

Enter your details below to request an extract of the report

Fixed wireless access growth: To 20% homes by 2025


Fixed wireless access growth forecast

Fixed Wireless Access (FWA) networks use a wireless “last mile” link for the final connection of a broadband service to homes and businesses, rather than a copper, fibre or coaxial cable into the building. Provided mostly by WISPs (Wireless Internet Service Providers) or mobile network operators (MNOs), these services come in a wide range of speeds, prices and technology architectures.

Some FWA services are just a short “drop” from a nearby pole or fibre-fed hub, while others can work over distances of several kilometres or more in rural and remote areas, sometimes with base station sites backhauled by additional wireless links. WISPs can either be independent specialists, or traditional fixed/cable operators extending reach into areas they cannot economically cover with wired broadband.

There is a fair amount of definitional vagueness about FWA. The most expansive definitions include cheap mobile hotspots (“Mi-Fi” devices) used in homes, or various types of enterprise IoT gateway, both of which could easily be classified in other market segments. Most service providers don’t give separate breakouts of deployments, while regulators and other industry bodies report patchy and largely inconsistent data.

Our view is that FWA is firstly about providing permanent broadband access to a specific location or premises. Primarily, this is for residential wireless access to the Internet and sometimes typical telco-provided services such as IPTV and voice telephony. In a business context, there may be a mix of wireless Internet access and connectivity to corporate networks such as VPNs, again provided to a specific location or building.

A subset of FWA relates to M2M usage, for instance private networks run by utility companies for controlling grid assets in the field. These are typically not Internet-connected at all, and so don’t fit most observers’ general definition of “broadband access”.

FWA is usually marketed as a specific service and package by a network provider, typically including the terminal equipment (“CPE” – customer premises equipment), rather than allowing the user to “bring their own” device. That said, lower-end (especially 4G) offers may be SIM-only deals intended to be used with generic (and unmanaged) portable hotspots.
There are some examples of private network FWA, such as a large caravan or trailer park with wireless access provided from a central point, and perhaps in future municipal or enterprise cellular networks giving fixed access to particular tenant structures on-site – for instance to hangars at an airport.


FWA today

Today, fixed-wireless access (FWA) is used for perhaps 8-9% of broadband connections globally, although this varies significantly by definition, country and region. There are various use cases (see below), but generally FWA is deployed in areas without good fixed broadband options, or by mobile-only operators trying to add an additional fixed revenue stream, where they have spare capacity.

Fixed wireless internet access fits specific sectors and uses, rather than the overall market

FWA Use Cases

Source: STL Partners

FWA has traditionally been used in sparsely populated rural areas, where the economics of fixed broadband are untenable, especially in developing markets without existing fibre transport to towns and villages, or even copper in residential areas. Such networks have typically used unlicensed frequency bands, as there is limited interference – and little financial justification for expensive spectrum purchases. In most cases, such deployments use proprietary variants of Wi-Fi, or its ill-fated 2010-era sibling WiMAX.

Increasingly however, FWA is being used in more urban settings, and in more developed market scenarios – for example during the phase-out of older xDSL broadband, or in places with limited or no competition between fixed-network providers. Some cellular networks primarily intended for mobile broadband (MBB) have been used for fixed usage as well, especially if spare capacity has been available. 4G has already catalysed rapid growth of FWA in numerous markets, such as South Africa, Japan, Sri Lanka, Italy and the Philippines – and 5G is likely to make a further big difference in coming years. These mostly rely on licensed spectrum, typically the national bands owned by major MNOs. In some cases, specific bands are used for FWA use, rather than sharing with normal mobile broadband. This allows appropriate “dimensioning” of network elements, and clearer cost-accounting for management.

Historically, most FWA has required an external antenna and professional installation on each individual house, although it also gets deployed for multi-dwelling units (MDUs, i.e. apartment blocks) as well as some non-residential premises like shops and schools. More recently, self-installed indoor CPE with varying levels of price and sophistication has helped broaden the market, enabling customers to get terminals at retail stores or delivered direct to their home for immediate use.

Looking forward, the arrival of 5G mass-market equipment and larger swathes of mmWave and new mid-band spectrum – both licensed and unlicensed – is changing the landscape again, with the potential for fibre-rivalling speeds, sometimes at gigabit-grade.


Table of contents

  • Executive Summary
  • Introduction
    • FWA today
    • Universal broadband as a goal
    • What’s changed in recent years?
    • What’s changed because of the pandemic?
  • The FWA market and use cases
    • Niche or mainstream? National or local?
    • Targeting key applications / user groups
  • FWA technology evolution
    • A broad array of options
    • Wi-Fi, WiMAX and close relatives
    • Using a mobile-primary network for FWA
    • 4G and 5G for WISPs
    • Other FWA options
    • Customer premise equipment: indoor or outdoor?
    • Spectrum implications and options
  • The new FWA value chain
    • Can MNOs use FWA to enter the fixed broadband market?
    • Reinventing the WISPs
    • Other value chain participants
    • Is satellite a rival waiting in the wings?
  • Commercial models and packages
    • Typical pricing and packages
    • Example FWA operators and plans
  • STL’s FWA market forecasts
    • Quantitative market sizing and forecast
    • High level market forecast
  • Conclusions
    • What will 5G deliver – and when and where?
  • Index

Open RAN: What should telcos do?


Related webinar: Open RAN: What should telcos do?

In this webinar STL Partners addressed the three most important sub-components of Open RAN (open-RAN, vRAN and C-RAN) and how they interact to enable a new, virtualized, less vendor-dominated RAN ecosystem. The webinar covered:

  • Why Open RAN matters – and why it will be about 4G (not 5G) in the short term
  • Data-led overview of existing Open RAN initiatives and challenges
  • Our recommended deployment strategies for operators
  • What the vendors are up to – and how we expect that to change

Date: Tuesday 4th August 2020
Time: 4pm GMT

Access the video recording and presentation slides


What is the open RAN and why does it matter?

The ‘open RAN’ encompasses a group of technological approaches designed to make the radio access network (RAN) more cost-effective and flexible. It involves a shift away from traditional, proprietary radio hardware and network architectures, driven by single vendors, towards new, virtualised platforms and a more open vendor ecosystem.

Legacy RAN: single-vendor and inflexible

The traditional, legacy radio access network (RAN) uses dedicated hardware to deliver the baseband function (modulation and management of the frequency range used for cellular network transmission), along with proprietary interfaces (typically based on the Common Public Radio Interface (CPRI) standard) for the fronthaul from the baseband unit (BBU) to the remote radio unit (RRU) at the top of the transmitter mast.

Figure 1: Legacy RAN architecture

Source: STL Partners

This means that telcos have typically needed to buy the baseband and the radio from a single vendor, with the market presently dominated by the ‘big three’ (Ericsson, Huawei and Nokia), alongside smaller market shares for Samsung and ZTE.

The architecture of the legacy RAN – with BBUs typically but not always at every cell site – has many limitations:

  • It is resource-intensive and energy-inefficient – employing a mass of redundant equipment operating at well below capacity most of the time, while consuming a lot of power
  • It is expensive, as telcos are obliged to purchase and operate a large inventory of physical kit from a limited number of suppliers, which keeps the prices high
  • It is inflexible, as telcos are unable to deploy to new and varied sites – e.g. macro-cells, small cells and micro-cells with different radios and frequency ranges – in an agile and cost-effective manner
  • It is more costly to manage and maintain, as there is less automation and more physical kit to support, requiring personnel to be sent out to remote sites
  • It is not very programmable to support the varied frequency, latency and bandwidth demands of different services.


Moving to the open RAN: C-RAN, vRAN and open-RAN

There are now many distinct technologies and standards emerging in the radio access space that involve a shift away from traditional, proprietary radio hardware and network architectures, driven by single vendors, towards new, virtualised platforms and a more open vendor ecosystem.

We have adopted ‘the open RAN’ as an umbrella term which encompasses all of these technologies. Together, they are expected to make the RAN more cost effective and flexible. The three most important sub-components of the open RAN are C-RAN, vRAN and open-RAN.

Centralised RAN (C-RAN), also known as cloud RAN, involves distributing and centralising the baseband functionality across different telco edge, aggregation and core locations, and in the telco cloud, so that baseband processing for multiple sites can be carried out in different locations, nearer to or further from the end user.

This enables more effective control and programming of capacity, latency, spectrum usage and service quality, including in support of 5G core-enabled technologies and services such as network slicing, URLLC, etc. In particular, baseband functionality can be split between more centralised sites (the centralised unit, or CU) and more distributed sites (the distributed unit, or DU), in much the same way, and for a similar purpose, as the split between centralised control planes and distributed user planes in the mobile core, as illustrated below:

Figure 2: Centralised RAN (C-RAN) architecture


Source: STL Partners

Virtual RAN (vRAN) involves virtualising (and now also containerising) the BBU so that it is run as software on generic hardware (General Purpose Processing – GPP) platforms. This enables the baseband software and hardware, and even different components of them, to be supplied by different vendors.

Figure 3: Virtual RAN (vRAN) architecture


Source: STL Partners

Open-RAN (note the hyphenation) involves replacing the vendor-proprietary interfaces between the BBU and the RRU with open standards. This enables BBUs (and parts thereof) from one or multiple vendors to interoperate with radios from other vendors, resulting in a fully disaggregated RAN:

Figure 4: Open-RAN architecture


Source: STL Partners

 

RAN terminology: clearing up confusion

You will have noticed that the technologies above have similar-sounding names and overlapping definitions. To add to potential confusion, they are often deployed together.

Figure 5: The open RAN Venn – How C-RAN, vRAN and open-RAN fit together

Open-RAN venn: open-RAN inside vRAN inside C-RAN

Source: STL Partners

As the above diagram illustrates, all forms of the open RAN involve C-RAN, but only a subset of C-RAN involves virtualisation of the baseband function (vRAN); and only a subset of vRAN involves disaggregation of the BBU and RRU (open-RAN).

To help eliminate ambiguity we are adopting the typographical convention ‘open-RAN’ to convey the narrower meaning: disaggregation of the BBU and RRU facilitated by open interfaces. Similarly, where we are dealing with deployments or architectures that involve vRAN and / or cloud RAN but not open-RAN in the narrower sense, we refer to those examples as ‘vRAN’ or ‘C-RAN’ as appropriate.
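These subset relationships lend themselves to a simple decision rule: test for the narrowest category first. The Python sketch below illustrates the convention; `RanDeployment` and `classify` are hypothetical names for illustration only, not part of any real O-RAN tooling.

```python
from dataclasses import dataclass

@dataclass
class RanDeployment:
    """Properties of a RAN deployment, following the terminology above."""
    centralised_baseband: bool  # baseband processing pooled away from cell sites (C-RAN)
    virtualised_baseband: bool  # BBU runs as software on generic (GPP) hardware (vRAN)
    open_fronthaul: bool        # open BBU-RRU interfaces replace proprietary ones (open-RAN)

def classify(d: RanDeployment) -> str:
    """Return the narrowest applicable label: open-RAN is a subset of vRAN,
    which is in turn a subset of C-RAN."""
    if d.centralised_baseband and d.virtualised_baseband and d.open_fronthaul:
        return "open-RAN"
    if d.centralised_baseband and d.virtualised_baseband:
        return "vRAN"
    if d.centralised_baseband:
        return "C-RAN"
    return "legacy RAN"
```

For example, a deployment with centralised and virtualised baseband but proprietary fronthaul classifies as ‘vRAN’, while one lacking centralisation altogether falls outside the open RAN entirely.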

In the coming pages, we will investigate why open RAN matters, what telcos are doing about it – and what they should do next.

Table of contents

  • Executive summary
  • What is the open RAN and why does it matter?
    • Legacy RAN: single-vendor and inflexible
    • The open RAN: disaggregated and flexible
    • Terminology, initiatives & standards: clearing up confusion
  • What are the opportunities for open RAN?
    • Deployment in macro networks
    • Deployment in greenfield networks
    • Deployment in geographically-dispersed/under-served areas
    • Deployment to support consolidation of radio generations
    • Deployment to support capacity and coverage build-out
    • Deployment to support private and neutral host networks
  • How have operators deployed open RAN?
    • What are the operators doing?
    • How successful have deployments been?
  • How are vendors approaching open RAN?
    • Challenger RAN vendors: pushing for a revolution
    • Incumbent RAN vendors: resisting the open RAN
    • Are incumbent vendors taking the right approach?
  • How should operators do open RAN?
    • Step 1: Define the roadmap
    • Step 2: Implement
    • Step 3: Measure success
  • Conclusions
    • What next?


5G: Bridging hype, reality and future promises

The 5G situation seems paradoxical

People in China and South Korea are buying 5G phones by the million, far more than initially expected, yet many western telcos are moving cautiously. Will your company also find demand? What’s the smart strategy while uncertainty remains? What actions are needed to lead in the 5G era? What questions must be answered?

New data requires new thinking. STL Partners’ 5G strategies: Lessons from the early movers presented the situation in late 2019, and in What will make or break 5G growth? we outlined the key drivers and inhibitors for 5G growth. This follow-on report addresses what needs to happen next.

The report is informed by talks with executives of over three dozen companies and email contacts with many more, including 21 of the first 24 telcos to have deployed. It covers considerations for the next three years (2020–2023), based on what we know today.

“Seize the 5G opportunity” says Ke Ruiwen, Chairman, China Telecom, and Chinese reports claimed 14 million sales by the end of 2019. Korea announced two million subscribers in July 2019 and approached five million by December 2019. By early 2020, the Korean carriers were confident that 30% of the market would be using 5G by the end of 2020. In the US, Verizon is selling 5G phones even in areas without 5G service. With nine phone makers competing for market share, the price in China is US$285–500 and falling, so the handset price barrier seems to be coming down fast.

Yet in many other markets, operators’ progress is significantly more tentative. So what is going on, and what should you do about it?


5G technology works OK

22 of the first 24 operators to deploy are using mid-band radio frequencies.

Vodafone UK claims “5G will work at average speeds of 150–200 Mbps.” Speeds are typically 100 to 500 Mbps, rarely a gigabit. Latency is about 30 milliseconds, only about a third better than decent 4G. Mid-band reach is excellent. Sprint has demonstrated that simply upgrading existing base stations can provide substantial coverage.

5G has a draft business case now: people want to buy 5G phones. New use cases are mostly years away but the prospect of better mobile broadband is winning customers. The costs of radios, backhaul, and core are falling as five system vendors – Ericsson, Huawei, Nokia, Samsung, and ZTE – fight for market share. They’ve shipped over 600,000 radios. Many newcomers are gaining traction, for example Altiostar won a large contract from Rakuten and Mavenir is in trials with DT.

The high cost of 5G networks is an outdated myth. DT, Orange, Verizon, and AT&T are building 5G while cutting or keeping capex flat. Sprint’s results suggest a smart build can quickly reach half the country without a large increase in capital spending. Instead, the issue for operators is that it requires new spending with uncertain returns.

The technology works, mostly. Mid-band is performing as expected, with typical speeds of 100–500Mbps outdoors, though indoor performance is less clear as yet, and mmWave performance indoors is badly degraded. Some SDN, NFV and other automation tools have reached the field. However, 5G upstream is in limited use: many carriers are combining 5G downstream with 4G upstream for now. Each base station also currently requires much more power than a 4G base station, which leads to high opex. Dynamic spectrum sharing, which allows 5G to share unneeded 4G spectrum, is still in test, and many features of SDN and NFV are not yet ready.

So what should companies do? The next sections review go-to-market lessons, status on forward-looking applications, and technical considerations.

Early go-to-market lessons

Don’t oversell 5G

The continuing publicity for 5G is proving powerful, but variable. Because some customers are already convinced they want 5G, marketing and advertising do not always need to emphasise the value of 5G. For those customers, make clear why your company’s offering is the best compared to rivals’. However, the draw of 5G is not universal. Many remain sceptical, especially if their past experience with 4G has been lacklustre. They – and also a minority swayed by alarmist anti-5G rhetoric – will need far more nuanced and persuasive marketing.

Operators should be wary of overclaiming. 5G speed, although impressive, currently has few practical applications that don’t already work well over decent 4G. Fixed home broadband is a possible exception here. As the objective advantages of 5G in the near future are likely to be limited, operators should not hype features that are unrealistic today, no matter how glamorous. If you don’t have concrete selling propositions, do image advertising or use happy customer testimonials.

Table of Contents

  • Executive Summary
  • Introduction
    • 5G technology works OK
  • Early go-to-market lessons
    • Don’t oversell 5G
    • Price to match the experience
    • Deliver a valuable product
    • Concerns about new competition
    • Prepare for possible demand increases
    • The interdependencies of edge and 5G
  • Potential new applications
    • Large now and likely to grow in the 5G era
    • Near-term applications with possible major impact for 5G
    • Mid- and long-term 5G demand drivers
  • Technology choices, in summary
    • Backhaul and transport networks
    • When will 5G SA cores be needed (or available)?
    • 5G security? Nothing is perfect
    • Telco cloud: NFV, SDN, cloud native cores, and beyond
    • AI and automation in 5G
    • Power and heat


Vendors vs. telcos? New plays in enterprise managed services

Digital transformation is reshaping vendors’ and telcos’ offer to enterprises

What does ‘digital transformation’ mean?

The enterprise market for telecoms vendors and operators is being radically reshaped by digital transformation. This transformation is taking place across all industry verticals, not just the telecoms sector, whose digital transformation – desirable or actual – STL Partners has forensically mapped out for several years now.

The term ‘digital transformation’ is so familiar that it breeds contempt in some quarters. Consequently, it is worth taking a while to refresh our thinking on what ‘digital transformation’ actually means. This will in turn help explain how the digital needs and practices of enterprises are impacting on vendors and telcos alike.

The digitisation of enterprises across all sectors can be described as part of a more general social, economic and technological evolution toward ever more far-reaching use of software-, computing- and IP-based modes of: interacting with customers and suppliers; communicating; networking; collaborating; distributing and accessing media content; producing, marketing and selling goods and services; consuming and purchasing those goods and services; and managing money flows across the economy. Indeed, one definition of the term ‘digital’ in this more general sense could simply be ‘software-, computing- and IP-driven or -enabled’.

For the telecoms industry, the digitisation of society and technology in this sense has meant, among other things, the decline of voice (fixed and mobile) as the primary communications service, although it is still the single largest contributor to turnover for many telcos. Voice mediates an ‘analogue’ economy and way of working in the sense that the voice is a form of ‘physical’ communication between two or more persons. In addition, the activity and means of communication (i.e. the actual telephone conversation to discuss project issues) is a separate process and work task from other work tasks, in different physical locations, that it helps to co-ordinate. By contrast, in an online collaboration session, the communications activity and the work activity are combined in a shared virtual space: the digital service allows for greater integration and synchronisation of tasks previously carried out by physical means, in separate locations, and in a less inherently co-ordinated manner.

Similarly, data in the ATM and Frame Relay era was mainly a means to transport a certain volume of information or files from one work place to another, without joining those work places together as one: the work places remained separate, both physically and in terms of the processes and work activities associated with them. The traditional telecoms network itself reflected the physical economy and processes that it enabled: comprising massive hardware and equipment stacks responsible for shifting huge volumes of voice signals and data packets (so called on the analogy of postal packets) from one physical location to another.

By contrast, with the advent of the digital (software-, computing- and IP-enabled) society and economy, the value carried by communications infrastructure has increasingly shifted from voice and data (as ‘physical’ signals and packets) to that of new modes of always-on, virtual interconnectedness and interactivity that tend towards the goal of eliminating or transcending the physical separation and discontinuity of people, work processes and things.

Examples of this digital transformation of communications, and associated experiences of work and life, could include:

  • As stated above, simple voice communications, in both business and personal life, have been increasingly superseded by ‘real-time’ or near-real-time, one-to-one or one-to-many exchange and sharing of text and audio-visual content across modes of communication such as instant messaging, unified communications (UC), social media (including increasingly in the work place) or collaborative applications enabling simultaneous, multi-party reviewing and editing of documents and files
  • Similarly, location-to-location file transfers in support of discrete, geographically separated business processes are being replaced by centralised storage and processing of, and access to, enterprise data and applications in the cloud
  • These trends mean that, in theory, people can collaborate and ‘meet’ with each other from any location in the world, and the digital service constitutes the virtual activity and medium through which that collaboration takes place
  • Similarly, with the Internet of Things (IoT), physical objects, devices, processes and phenomena generate data that can be transmitted and analysed in ‘real time’, triggering rapid responses and actions directed towards those physical objects and processes based on application logic and machine learning – resulting in more efficient, integrated processes and physical events meeting the needs of businesses and people. In other words, the IoT effectively involves digitising the physical world: disparate physical processes, and the action of diverse physical things and devices, are brought together by software logic and computing around human goals and needs.

‘Virtualisation’ effectively means ‘digital optimisation’

In addition to the cloud and IoT, one of the main effects of enterprise digital transformation on communications infrastructure has of course been Network Functions Virtualisation (NFV) and Software-Defined Networking (SDN). NFV – the replacement of network functionality previously associated with dedicated hardware appliances by software running on standard compute devices – could also simply be described as the digitisation of telecoms infrastructure: the transformation of networks into software-, computing- and IP-driven (digital) systems that are capable of supporting the functionality underpinning the virtual / digital economy.

This functionality includes things like ultrafast, reliable, scalable and secure routing, processing, analysis and storage of massive but also highly variable data flows across network domains and on a global scale – supporting business processes ranging from ‘mere’ communications and collaboration to co-ordination and management of large-scale critical services, multi-national enterprises, government functions, and complex industrial processes. And meanwhile, the physical, Layer-1 elements of the network have also to become lightning-fast to deliver the massive, ‘real-time’ data flows on which the digital systems and services depend.

Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos

Virtualisation frees vendors from ‘operator lock-in’

Virtualisation has generally been touted as a necessary means for telcos to adapt their networks to support the digital service demands of their customers and, in the enterprise market, to support those customers’ own digital transformations. It has also been advocated as a means for telcos to free themselves from so-called ‘vendor lock-in’: dependency on their network hardware suppliers for maintenance and upgrades to equipment capacity or functionality to support service growth or new product development.

From the other side of the coin, virtualisation could also be seen as a means for vendors to free themselves from ‘operator lock-in’: a dependency on telcos as the primary market for their networking equipment and technology. That is to say, the same dynamic of social and enterprise digitisation, discussed above, has driven vendors to virtualise their own product and service offerings, and to move away from the old business model, which could be described as follows:

  • telcos and their implementation partners purchase hardware from the vendor
  • deploy it at the enterprise customer
  • then own the business relationship with the enterprise and hold responsibility for managing the services

By contrast, once the service-enabling technology is based on software and standard compute hardware, this creates opportunities for vendors to market their technology direct to enterprise customers, with which they can in theory take over the supplier-customer relationship.

Of course, many enterprises have continued to own and operate their own private networks and networking equipment, generally supplied to them by vendors. Therefore, vendors marketing their products and services direct to enterprises is not a radical innovation in itself. However, the digitisation / virtualisation of networking technology and of enterprise networks is creating a new competitive dynamic placing vendors in a position to ‘win back’ direct relationships to enterprise customers that they have been serving through the mediation of telcos.

Virtualisation changes the competitive dynamic


Contents:

  • Executive Summary: Digital transformation is changing the rules of the game
  • Digital transformation is reshaping vendors’ and telcos’ offer to enterprises
  • What does ‘digital transformation’ mean?
  • ‘Virtualisation’ effectively means ‘digital optimisation’
  • Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos
  • Vendors and telcos: the business models are changing
  • New vendor plays in enterprise networking: four vendor business models
  • Vendor plays: Nokia, Ericsson, Cisco and IBM
  • Ericsson: changing the bet from telcos to enterprises – and back again?
  • Cisco: Betting on enterprises – while operators need to speed up
  • IBM: Transformation involves not just doing different things but doing things differently
  • Conclusion: Vendors as ‘co-Operators’, ‘co-opetors’ or ‘co-opters’ – but can telcos still set the agenda?
  • How should telcos play it? Four recommendations

Figures:

  • Figure 1: Virtualisation changes the competitive dynamic
  • Figure 2: The telco as primary channel for vendors
  • Figure 3: New direct-to-enterprise opportunities for vendors
  • Figure 4: Vendors as both technology supplier and OTT / operator-type managed services provider
  • Figure 5: Vendors as digital service creators, with telcos as connectivity providers and digital service enablers
  • Figure 6: Vendors as digital service enablers, with telcos as digital service creators / providers
  • Figure 7: Vendor manages communications / networking as part of overall digital transformation focus
  • Figure 8: Nokia as technology supplier and ‘operator-type’ managed services provider
  • Figure 9: Nokia’s cloud-native core network blueprint
  • Figure 10: Nokia WING value chain
  • Figure 11: Ericsson’s model for telcos’ roles in the IoT ecosystem
  • Figure 12: Ericsson generates the value whether operators provide connectivity only or also market the service
  • Figure 13: IBM’s model for telcos as digital service enablers or providers – or both

The ‘Agile Operator’: 5 Key Ways to Meet the Agility Challenge

Understanding Agility

What does ‘Agility’ mean? 

A number of business strategies and industries spring to mind when considering the term ‘agility’ but the telecoms industry is not front and centre… 

Agility describes the ability to change direction and move at speed, whilst maintaining control and balance. This innate flexibility and adaptability aptly describes an athlete, a boxer or a cheetah, yet the description can be (and is) readily applied in a business context. Whilst the telecoms industry is not usually referenced as a model of agility (and is often described as the opposite), a number of business strategies and industries have adopted more ‘agile’ approaches, attempting simultaneously to reduce inefficiencies, maximise the deployment of resources, learn through testing and stimulate innovation. It is worthwhile recapping some of the key ‘agile’ approaches, as they inform our and the interviewees’ vision of agility for the telecoms operator.

When introduced, these approaches helped redefine their respective industries. One of the first business strategies to popularise a more ‘agile’ approach was the famous ‘lean production’ methodology and the related ‘just-in-time’ methods, principally developed by Toyota in the mid-20th century. Toyota focused on reducing waste and streamlining the production process with the mindset of “only what is needed, when it is needed, and in the amount needed,” reshaping the manufacturing industry.

The methodology that perhaps springs to many people’s minds when they hear the word agility is ‘agile software development’. This methodology relies on iterative cycles of rapid prototyping followed by customer validation, with increasing cross-functional involvement, to develop software products that are tested, evolved and improved repeatedly throughout the development process. This iterative, continuous improvement contrasts directly with the waterfall development model, where a scripted user acceptance testing phase typically occurs towards the end of the process. The agile approach speeds up development and, thanks to continual testing throughout the process, results in software that meets end users’ needs more effectively.

Figure 5: Agile Software Development

Source: Marinertek.com

More recently, the ‘lean startup’ methodology has become increasingly popular as an innovation strategy. Like agile development, it focuses on iterative testing – replacing the testing of software with the testing of business hypotheses and new products. Through iterative testing and learning, a startup is able to better understand and meet the needs of its users or customers, reducing the inherent risk of failure whilst keeping the required investment to a minimum. The success of high-tech startups has popularised this approach; however, the key principles and lessons are applicable not only to startups but also to established companies.

Despite the fact that (most of) these methodologies or philosophies have existed for a long time, they have not been adopted consistently across all industries. The digital or internet industry was built on these ‘agile’ principles, and the telecoms industry has sought to emulate it by adopting agile models and methods. Of course, these two industries differ in nature, and there will inevitably be constraints that affect the ability to be agile in different industries (e.g. the long planning and investment cycles required to build network infrastructure), yet these principles can broadly be applied more universally, underpinning a more effective way of working.

This report highlights the benefits and challenges of becoming more ‘agile’ and sets out the operator’s perspective of ‘agility’ across a number of key domains. This vision of the ‘Agile Operator’ was captured through 29 interviews with senior telecoms executives and is supplemented by STL analysis and research.

Barriers to (telco) agility 

…The telecoms industry is hindered by legacy systems, rigid organisational structures and cultural issues…

It is well known that the telecoms industry is hampered by legacy systems: systems that may originally have been deployed 5-20 years ago and are functionally limited. Coordinating across these legacy systems impedes a telco’s ability to innovate and customise product offerings, or to obtain a complete view of customers. In addition to legacy system challenges, interview participants outlined a number of other key barriers to becoming more agile. Three principal barriers emerged:

  1. Legacy systems
  2. Mindset & Culture
  3. Organisational Structure & Internal Processes

Legacy Systems 

One of the main barriers to achieving greater agility – and the one most often voiced by interviewees – is legacy systems. Dealing with legacy IT systems and technology can be very cumbersome and time-consuming, as they are typically not built to be developed further in an agile way. Even seemingly simple change requests end up in development queues that stretch out many months (often years). Operators therefore remain locked in to the same limited core capabilities and options, which in turn stymies innovation and agility.

The inability to modify a process or a pricing plan, or to easily on-/off-board a third-party product, has significant ramifications for how agile a company can be. It can directly limit innovation within the product development process and indirectly diminish employees’ appetite for innovation.

Operators are often forced to find ‘workarounds’ to launch new products and services. These workarounds can be practical and innovative, yet they are often crude manipulations of existing capabilities. They are therefore limited in terms of what they can do and of the information that can be captured for reporting and for learning for new product development. They may also create additional technical challenges when the ‘workaround’ product or service has to be migrated to a new system.

Figure 6: What’s Stopping Telco Agility?

Source: STL Partners

Mindset & Culture

The historic (incumbent) telco culture, born out of public-sector ownership, is the opposite of an ‘agile’ mindset. It is one that put in place rigid controls and structures, diluted accountability and stymied enthusiasm for innovation – the model was built to maintain and scale the status quo. For a long time the industry invested in the technology and capabilities aligned to this approach, with notable success. As technology advanced (e.g. ever-improving feature phones and mobile data), this approach served telcos well, enhancing their offerings, which in turn further entrenched this mindset and culture. However, as technology has advanced even further (e.g. the internet, smartphones), this focus on proven development models has resulted in telcos becoming slow to address key opportunities in the digital and mobile internet ecosystems. They now face a marketplace of thriving competition, constant disruption and rapid technological advancement.

This classic telco mindset is also one that emphasised ‘technical’ product development and specifications rather than the user experience. It was (and still is) commonplace for telcos to invest heavily upfront in the creation of relatively untested products and services, and then to let the product run its course rather than alter and improve it throughout its life.

Whilst this mindset has changed or is changing across the industry, interviewees felt that the mindset and culture has still not moved far enough. Indeed many respondents indicated that this was still the main barrier to agility. Generally they felt that telcos did not operate with a mindset that was conducive to agile practices and this contributed to their inability to compete effectively against the internet players and to provide the levels of service that customers are beginning to expect. 

Organisational Structure & Internal Processes

Organisational structure and internal processes are closely linked to the overall culture and mindset of an organisation, so it is no surprise that interviewees also noted this as a key barrier to agility. Interviewees felt that the typical (functionally-orientated) organisational structure hinders their companies’ ability to be agile: there is a team for sales, a team for marketing, a team for product development, a network team, a billing team, a provisioning team, an IT team, a customer care team, a legal team, a security team, a privacy team, several compliance teams etc. This functional set-up, whilst useful for ramping-up and managing an established product, clearly hinders a more agile approach to developing new products and services through understanding customer needs and testing adoption/behaviour. With this set-up, no-one in particular has a full overview of the whole process, and is therefore unable to understand the different dimensions, constraints, usage and experience of the product/service.

Furthermore, having these discrete teams makes it hard to collaborate efficiently – each team’s focus is to complete its own tasks, not to work collaboratively. Indeed, some of the interviewees blamed the organisational structure for creating a layer of ‘middle management’ that does not have a clear understanding of the commercial pressures facing the organisation, a route to address potential opportunities, or an incentive to work outside their teams. This leads to teams working in silos and to a lack of information sharing across the organisation.

A rigid mindset begets a rigid organisational structure which in turn leads to the entrenchment of inflexible internal processes. Interviewees saw internal processes as a key barrier, indicating that within their organisation and across the industry in general internal decision-making is too slow and bureaucratic.

 

Interviewees noted that there were too many checks and processes to go through when making decisions and often new ideas or opportunities fell outside the scope of priority activities. Interviewees highlighted project management planning as an example of the lack of agility; most telcos operate against 1-2 year project plans (with associated budgeting). Typically the budget is locked in for the year (or longer), preventing the re-allocation of financing towards an opportunity that arises during this period. This inflexibility prevents telcos from quickly capitalising on potential opportunities and from (re-)allocating resources more efficiently.

  • Executive Summary
  • Understanding Agility
  • What does ‘Agility’ mean?
  • Barriers to (telco) agility
  • “Agility” is an aspiration that resonates with operators
  • Where is it important to be agile?
  • The Telco Agility Framework
  • Organisational Agility
  • The Agile Organisation
  • Recommended Actions: Becoming the ‘Agile’ Organisation
  • Network Agility
  • A Flexible & Scalable Virtualised Network
  • Recommended Actions: The Journey to the ‘Agile Network’
  • Service Agility
  • Fast & Reactive New Service Creation & Modification
  • Recommended Actions: Developing More-relevant Services at Faster Timescales
  • Customer Agility
  • Understand and Make it Easy for your Customers
  • Recommended Actions: Understand your Customers and Empower them to Manage & Customise their Own Service
  • Partnering Agility
  • Open and Ready for Partnering
  • Recommended Actions: Become an Effective Partner
  • Conclusion

 

  • Figure 1: Regional & Functional Breakdown of Interviewees
  • Figure 2: The Barriers to Telco Agility
  • Figure 3: The Telco Agility Framework
  • Figure 4: The Agile Organisation
  • Figure 5: Agile Software Development
  • Figure 6: What’s Stopping Telco Agility?
  • Figure 7: The Importance of Agility
  • Figure 8: The Drivers & Barriers of Agility
  • Figure 9: The Telco Agility Framework
  • Figure 10: The Agile Organisation
  • Figure 11: Organisational Structure: Functional vs. Customer-Segmented
  • Figure 12: How Google Works – Small, Open Teams
  • Figure 13: How Google Works – Failing Well
  • Figure 14: NFV managed by SDN
  • Figure 15: Using Big Data Analytics to Predictively Cache Content
  • Figure 16: Three Steps to Network Agility
  • Figure 17: Launch with the Minimum Viable Proposition – Gmail
  • Figure 18: The Key Components of Customer Agility
  • Figure 19: Using Network Analytics to Prioritise High Value Applications
  • Figure 20: Knowing When to Partner
  • Figure 21: The Telco Agility Framework

The Internet of Things: Impact on M2M, where it’s going, and what to do about it?

Introduction

From RFID in the supply chain to M2M today

The ‘Internet of Things’ first appeared as a marketing term in 1999 when it was applied to improved supply-chain strategies, leveraging the then hot-topics of RFID and the Internet.

Industrial engineers planned to use miniaturised RFID tags to track many different types of asset, especially relatively low-cost ones. However, their dependency on accessible RFID readers constrained their zonal range. This also confined many such applications to the enterprise sector and to a well-defined geographic footprint.

Modern versions of RFID labelling have expanded the addressable market through barcode and digital watermarking approaches, for example, while mobile has largely removed the zonal constraint. In fact, mobile’s economies of scale have ushered in a relatively low-cost technology building block in the form of radio modules with local processing capability. These modules allow machines and sensors to be monitored and remotely managed over mobile networks. This is essentially the M2M market today.

M2M remained a specialist, enterprise-sector application for a long time. It relied on niche systems-integration and hardware-development companies, often delivering one-off or small-scale deployments. For many years, growth in the M2M market did not meet expectations for faster adoption, and this is visible in analyst forecasts, which repeatedly time-shifted the adoption curve. Figure 1 below, for example, illustrates successive M2M forecasts for the 2005-08 period (before M2M began to take off), as analysts tried to forecast when M2M module shipment volumes would breach the 100m units/year hurdle:

Figure 1: Historical analyst forecasts of annual M2M module shipment volumes

Source: STL Partners, More With Mobile

Although the potential of remote connectivity was recognised, it did not become a high-volume market until the GSMA brought about an alignment of interests across mobile operators, chip and module vendors, and enterprise users by targeting mobile applications in adjacent markets.

The GSMA’s original Embedded Mobile market development campaign made the case that connecting devices and sensors to (Internet) applications would drive significant new use cases and sources of value. However, in order to supply economically viable connected devices, the cost of embedding connectivity had to drop. This meant:

  • Educating the market about new opportunities in order to stimulate latent demand
  • Streamlining design practices to eliminate many layers of implementation costs
  • Promoting adoption in high-volume markets such as automotive, consumer health and smart utilities, for example, to drive economies of scale in the same manner that led to the mass-adoption of mobile phones

The late 2000s proved to be a turning point for M2M, with the market now achieving scale (c. 189m connections globally as of January 2014) and growing at an impressive rate (c. 40% per annum).

From M2M to the Internet of Things?

Over the past 5 years, companies such as Cisco, Ericsson and Huawei have begun promoting radically different market visions to those of ‘traditional M2M’. These include the ‘Internet of Everything’ (that’s Cisco), a ‘Networked Society’ with 50 billion cellular devices (that’s Ericsson), and a ‘Cellular IoT’ with 100 billion devices (that’s Huawei).

Figure 2: Ericsson’s Promise: 50 billion connected ‘things’ by 2020

Source: Ericsson

Ericsson’s calculation builds on the idea that there will be 3 billion “middle class consumers”, each with 10 M2M devices, plus personal smartphones, industrial, and enterprise devices. In promoting such visions, the different market evangelists have shifted market terminology away from M2M and towards the Internet of Things (‘IoT’).
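The stated component of Ericsson's calculation – 3 billion middle-class consumers with 10 M2M devices each – accounts for 30 billion of the headline figure; the remaining c. 20 billion is attributed to smartphones and industrial and enterprise devices, without a published breakdown. A quick sketch of the arithmetic:

```python
# Rough reconstruction of Ericsson's 50 billion figure.
# The 3bn consumers x 10 devices component is stated by Ericsson; the
# residual is simply whatever remains of the 50bn headline number.
middle_class_consumers = 3_000_000_000
devices_per_consumer = 10

consumer_m2m = middle_class_consumers * devices_per_consumer   # 30bn
other_devices = 50_000_000_000 - consumer_m2m                  # 20bn residual

print(f"Consumer M2M devices: {consumer_m2m / 1e9:.0f}bn")
print(f"Smartphones, industrial and enterprise (residual): {other_devices / 1e9:.0f}bn")
```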

The transition towards IoT has also had consequences beyond terminology. Whereas M2M applications were previously associated with internal-to-business operational improvements, IoT offers far more external market prospects. In other words, connected devices allow a company to interact with its customers beyond its strict operational boundaries. In addition, standalone products can now deliver one or more connected services: for example, a connected bus can report on its mechanical status, for maintenance purposes, as well as its location, to deliver a higher-quality transit service.

Another consequence of the rise of IoT relates to the way that projects are evaluated. In the past, M2M applications tended to be justified on RoI criteria. Nowadays, there is a broader, commercial recognition that IoT opens up new avenues of innovation, efficiency gains and alternative sources of revenue: it was this recognition, for example, that drove Google’s $3.2 billion valuation of Nest (see the Connected Home EB).

In contrast to RFID, the M2M market required companies in different parts of the value chain to share a common vision of a lower-cost, higher-volume future across many different industry verticals. The approach through which the mobile industry scaled the M2M market now needs to adjust to an IoT world. Before examining what these changes imply, let us first review the M2M market today, how M2M service providers have adapted their business models, and where this positions them for future IoT opportunities.

M2M Today: Geographies, Verticals and New Business Models

Headline: M2M is now an important growth area for MNOs

The M2M market has now evolved into a high volume and highly competitive business, with leading telecoms operators and other service providers (so-called ‘M2M MVNOs’ e.g. KORE, Wyless) providing millions of cellular (and fixed) M2M connections across numerous verticals and applications.

Specifically, 428 MNOs were offering M2M services across 187 countries by January 2014 – 40% of mobile network operators – and providing 189 million cellular connections. The GSMA estimates the number of global connections to be growing by about 40% per annum. Figure 3 below shows that as of Q4 2013 China Mobile was the largest player by connections (32 million), with AT&T second largest but only half the size.
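These figures can be cross-checked against each other: if the 428 MNOs offering M2M services represent 40% of all mobile network operators, the implied global population is roughly 1,070 MNOs. A quick check:

```python
# Sanity-checking the stated M2M adoption figures: 428 MNOs offering
# M2M services is said to be 40% of all mobile network operators.
m2m_mnos = 428
share_of_all_mnos = 0.40

total_mnos = m2m_mnos / share_of_all_mnos
print(f"Implied total MNOs worldwide: {total_mnos:.0f}")  # ~1070
```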

Figure 3: Selected leading service providers by cellular M2M connections, Q4 2013

 

Source: Various, including GSMA and company accounts, STL Partners, More With Mobile

Unsurprisingly, these millions of connections have also translated into material revenues for service providers. Although MNOs typically do not report M2M revenues (and many do not even report connections), Verizon reported $586m in ‘M2M and telematics’ revenues for 2014 in its most recent earnings call, growing 47% year-on-year. Moreover, analysis from the Telco 2.0 Transformation Index estimates that Vodafone Group generated $420m in revenues from M2M during its 2013/14 financial year (ending March 2014).

However, these numbers need to be put in context: whilst c. $500m growing 40% YoY is encouraging, this still represents only a small percentage of these telcos’ revenues – c. 0.5% in the case of Vodafone, for example.

Figure 4: Vodafone Group enterprise revenues, implied forecast, FY 2012-18

 

Source: Company accounts, STL Partners, More With Mobile

Figure 4 uses data provided by Vodafone during 2013 on the breakdown of its enterprise line of business, and grows these at the rates at which Vodafone forecasts the market (within its footprint) to grow over the next five years – 20% YoY revenue growth for M2M, for example. Whilst only indicative, Figure 4 demonstrates that telcos need to sustain high levels of growth over the medium- to long-term, and offer complementary value-added services, if M2M is to have a significant impact on their headline revenues.
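To see what a sustained 20% YoY growth rate implies, compounding the estimated c. $420m FY 2013/14 base forward five years yields only around $1bn – illustrating why complementary value-added services are needed for M2M to move the needle on headline revenues. A minimal sketch (figures indicative):

```python
# Indicative compounding of Vodafone's estimated M2M revenue at the
# 20% YoY growth rate cited alongside Figure 4 (all figures illustrative).
base_revenue_musd = 420   # estimated FY 2013/14 M2M revenue, $m
growth_rate = 0.20

revenue = base_revenue_musd
for year in range(1, 6):          # five years forward
    revenue *= 1 + growth_rate
    print(f"Year {year}: ${revenue:,.0f}m")
```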

To do this, telcos essentially have three ways to refine or change their business model:

  1. Improve their existing M2M operations: e.g. new organisational structures and processes
  2. Move into new areas of M2M: e.g. expansion along the value chain; new verticals/geographies
  3. Explore the Internet of Things: e.g. new service innovation across verticals and including consumer-intensive segments (e.g. the connected home)

To provide further context, the following section examines where M2M has focused to date (geographically and by vertical). This is followed by an analysis of specific telco activities in 1, 2 and 3.

 

  • Executive Summary
  • Introduction
  • From RFID in the supply chain to M2M today
  • From M2M to the Internet of Things?
  • M2M Today: Geographies, Verticals and New Business Models
  • Headline: M2M is now an important growth area for MNOs
  • In-depth: M2M is being driven by specific geographies and verticals
  • New Business Models: Value network innovation and new service offerings
  • The Emerging IoT: Outsiders are raising the opportunity stakes
  • The business models and profitability potentials of M2M and IoT are radically different
  • IoT shifts the focus from devices and connectivity to data and its use in applications
  • New service opportunities drive IoT value chain innovation
  • New entrants recognise the IoT-M2M distinction
  • IoT is not the end-game
  • ‘Digital’ and IoT convergence will drive further innovation and new business models
  • Implications for Operators
  • About STL Partners and Telco 2.0: Change the Game
  • About More With Mobile

 

  • Figure 1: Historical analyst forecasts of annual M2M module shipment volumes
  • Figure 2: Ericsson’s Promise: 50 billion connected ‘things’ by 2020
  • Figure 3: Selected leading service providers by cellular M2M connections, Q4 2013
  • Figure 4: Vodafone Group enterprise revenues, implied forecast, FY 2012-18
  • Figure 5: M2M market penetration vs. growth by geographic region
  • Figure 6: Vodafone Group organisational chart highlighting Telco 2.0 activity areas
  • Figure 7: Vodafone’s central M2M unit is structured across five areas
  • Figure 8: The M2M Value Chain
  • Figure 9: ‘New entrant’ investments outstripped those of M2M incumbents in 2014
  • Figure 10: Characterising the difference between M2M and IoT across six domains
  • Figure 11: New business models to enable cross-silo IoT services
  • Figure 12: ‘Digital’ and IoT convergence

 

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in Network Virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often used in the same breath, and are related but different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre Wide Area Networks: Urs Hoelzle of Google discussed Google’s deployment and findings at the Open Networking Summit in early 2012, and Google claims to get 60% to 70% better utilisation out of its data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old and yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally the telecoms and datacomms equipment market is worth over $180bn and has been dominated by 5 vendors with around 50% of the market split between them.

Innovation and competition in the networking market have been lacking: there have been very few major innovations in the last 12 years, the industry has focussed on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised the IT computing markets – namely cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a perspective of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.
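The separation of intelligence from forwarding can be sketched in a few lines: a controller with a view of the whole topology computes paths centrally, then pushes simple match/next-hop rules to otherwise ‘dumb’ switches. This is an illustrative toy, not OpenFlow or any real controller API:

```python
from collections import deque

# Toy SDN controller: a global topology view, central path computation,
# and per-switch forwarding rules derived from the computed path.
topology = {           # adjacency list: switch -> neighbours
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(graph, src, dst):
    """BFS over the controller's global view of the network."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def install_flow_rules(graph, src, dst):
    """Turn a centrally computed path into per-switch match/next-hop rules."""
    path = shortest_path(graph, src, dst)
    return {hop: {"match": dst, "next_hop": nxt}
            for hop, nxt in zip(path, path[1:])}

rules = install_flow_rules(topology, "s1", "s4")
print(rules)   # s1 forwards traffic for s4 via s2, and so on down the path
```

Each switch then only matches and forwards; all route decisions live in the controller, which is what allows the individual components to be simpler and cheaper.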

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data-forwarding functions; however, while SDN does this for an entire network, NFV focusses specifically on network functions such as routing, firewalls, load balancing and CPE, and looks to leverage developments in Commercial Off-The-Shelf (COTS) hardware such as generic server platforms utilising multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data-plane performance of COTS-based devices, and the performance of these devices has risen dramatically. Figure 1 demonstrates that in just three years (2010-2013) a tenfold increase in packet-processing (data-plane) performance was achieved. More generally, CPU performance has been tracking Moore’s law, which originally stated that the number of components in an integrated circuit would double every two years; to the extent that component count translates into performance, the same can be said of CPU performance. For example, the processor family Intel will ship in the second half of 2015 could have up to 72 individual CPU cores, compared with the four or six used in 2010-2013.
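As a rough sanity check on these figures: Moore's law alone (components doubling every two years) predicts only about a 2.8x gain over a three-year window such as 2010-2013, so the tenfold data-plane improvement must also reflect software-level optimisation of packet handling, not just transistor scaling. A minimal sketch of the arithmetic:

```python
# Moore's law as cited: component counts double roughly every two years.
def moores_law_multiplier(years, doubling_period=2):
    """Implied component-count multiplier over a given period."""
    return 2 ** (years / doubling_period)

# Over the 2010-2013 window in Figure 1 (3 years), scaling alone
# predicts roughly a 2.8x increase...
print(moores_law_multiplier(3))   # ~2.83
# ...well short of the tenfold data-plane gain actually achieved.
```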

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition into the networking market.

Since its inception in 2012, running as an industry specification group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative – not just from a cost perspective, but more importantly in what it means for telcos and service providers to be able to develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle the issues of performance, management & orchestration, proof of concept, reference architecture etc. and externally organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do that thing very well, switching or routing packets, acting as a network firewall etc. But as each is dedicated to a particular task and has its own user interface, things can get a little complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application and reuse is sometimes difficult resulting in expensive obsolescence. By running network functions on a COTS based platform most of these issues go away resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings: they are about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV are not just about cost savings and are about agility, how is this delivered? Agility comes from a number of different aspects, for example the ability to orchestrate a number of VNFs and the network to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. It provides automation and provisioning, and will interface with existing provisioning and billing platforms such as the OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects and chains of services, and, via external APIs, will allow applications to request and control the creation of specific services.
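The reuse-and-compose pattern described here can be sketched conceptually in a few lines of Python. This is a toy under assumed names – `VNFCatalogue`, `Orchestrator` and the `oss_hook` callback are all hypothetical, not drawn from any real MANO implementation or the ETSI interfaces:

```python
# Conceptual sketch of the MANO role: an orchestrator instantiates
# reusable VNFs from a catalogue, composes them into a service chain,
# and keeps OSS/BSS in step via a notification hook.
# All names here are illustrative, not real MANO APIs.

class VNFCatalogue:
    """Reusable VNF templates, registered once and instantiated many times."""
    def __init__(self):
        self._templates = {}

    def register(self, name, factory):
        self._templates[name] = factory

    def instantiate(self, name):
        return self._templates[name]()


class Orchestrator:
    def __init__(self, catalogue, oss_hook):
        self.catalogue = catalogue
        self.oss_hook = oss_hook  # e.g. provisioning/billing notification

    def create_service(self, customer, vnf_names):
        """External API entry point: build a chain of VNFs for a customer."""
        chain = [self.catalogue.instantiate(n) for n in vnf_names]
        self.oss_hook(customer, vnf_names)  # notify provisioning/billing
        return chain
```

An external application would call `create_service` through the exposed API; the hook stands in for the interface to existing provisioning and billing platforms.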

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The chain is made up of a number of VNFs that are used as required and discarded when no longer needed. For example, the Broadband Remote Access Server (BRAS) becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user's set-top box (STB) connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components – such as Deep Packet Inspection (DPI) and load balancing – are used as required. Equally, as the user accesses other services such as media, Internet and voice, different VNFs, such as a Session Border Controller (SBC) or network storage, can be brought into play.
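The chain dynamics described above – components joining the chain for the phase of the session that needs them and dropping out afterwards – can be sketched as a toy in Python. The function names are illustrative only, not real VNF products:

```python
# Toy model of a dynamic VNF service chain: each VNF is a function
# applied to the session, and the chain's membership changes per phase.

def authenticate(session):
    # Used once at connection time, then drops out of the chain.
    session["authenticated"] = session.get("account_valid", False)
    return session

def firewall(session):
    # Stays in the chain for the duration of the connection.
    session.setdefault("checks", []).append("firewall")
    return session

def deep_packet_inspection(session):
    # Brought into play only when required.
    session.setdefault("checks", []).append("dpi")
    return session

def run_chain(session, vnfs):
    for vnf in vnfs:
        session = vnf(session)
    return session

# Connection setup: authentication is in the chain, then discarded.
session = run_chain({"account_valid": True}, [authenticate, firewall])
# Steady state: authentication is gone; DPI is added on demand.
session = run_chain(session, [firewall, deep_packet_inspection])
```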

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Facing Up to the Software-Defined Operator

Introduction

At this year’s Mobile World Congress, the GSMA’s eccentric decision to split the event between the Fira Gran Via (the “new Fira”, as everyone refers to it) and the Fira Montjuic (the “old Fira”, as everyone refers to it) was a better one than it looked. If you took the special MWC shuttle bus from the main event over to the developer track at the old Fira, you crossed a culture gap that is widening, not closing. The very fact that the developers were accommodated separately hints at this, but it was the content of the sessions that brought it home. At the main site, it was impressive and forward-thinking to say you had an app, and a big deal to launch a new Web site; at the developer track, presenters would start up a Web service during their own talk to demonstrate their point.

There has always been a cultural rift between the “netheads” and the “bellheads”, of which this is just the latest manifestation. But the content of the main event tended to suggest that this is an increasingly serious problem. Everywhere, we saw evidence that core telecoms infrastructure is becoming software. Major operators are moving towards this now. For example, AT&T used the event to announce that it had signed up Software-Defined Networking (SDN) specialists Tail-F and Metaswitch Networks for its next round of upgrades, while Deutsche Telekom’s Terastream architecture is built on SDN.

This is not just about the overused three letter acronyms like “SDN and NFV” (Network Function Virtualisation – see our whitepaper on the subject here), nor about the duelling standards groups like OpenFlow, OpenDaylight etc., with their tendency to use the word “open” all the more the less open they actually are. It is a deeper transformation that will affect the device, the core network, the radio access network (RAN), the Operations Support Systems (OSS), the data centres, and the ownership structure of the industry. It will change the products we sell, the processes by which we deliver them, and the skills we require.

In the future, operators will be divided into providers of the platform for software-defined network services and consumers of the platform. Platform consumers, which will include MVNOs, operators, enterprises, SMBs, and perhaps even individual power users, will expect a degree of fine-grained control over network resources that amounts to specifying your own mobile network. Rather than trying to make a unitary public network provide all the potential options as network services, we should look at how we can provide the impression of one network per customer, just as virtualisation gives the impression of one computer per user.

To summarise, it is no longer enough to boast that your network can give the customer an API. Future operators should be able to provision a virtual network through the API. AT&T, for example, aims to provide a “user-defined network cloud”.

Elements of the Software-Defined Future

We see five major trends leading towards the overall picture of the ‘software defined operator’ – an operator whose boundaries and structure can be set and controlled through software.

1: Core network functions get deployed further and further forwards

Because core network functions like the Mobile Switching Centre (MSC) and Home Subscriber Server (HSS) can now be implemented in software on commodity hardware, they no longer have to be tied to major vendors’ equipment deployed in centralised facilities. This frees them to migrate towards the edge of the network, providing for more efficient use of transmission links, lower latency, and putting more features under the control of the customer.

Network architecture diagrams often show a boundary between “the Internet” and an “other network”. This is known as the Gi interface in 3G networks and the SGi interface in 4G. Today, the “other network” is usually itself an IP-based network, making this distinction simply that between a carrier’s private network and the Internet core. Moving network functions forwards towards the edge also moves this boundary forwards, making it possible for Internet services like content-delivery networking or application acceleration to advance closer to the user.

Increasingly, the network edge is a node supporting multiple software applications, some of which will be operated by the carrier, some by third-party services like – say – Akamai, and some by the carrier’s customers.

2: Access network functions get deployed further and further back

A parallel development to the emergence of integrated small cells/servers is the virtualisation and centralisation of functions traditionally found at the edge of the network. One example is so-called Cloud RAN or C-RAN technology in the mobile context, where the radio basebands are implemented as software and deployed as virtual machines running on a server somewhere convenient. This requires high capacity, low latency connectivity from this site to the antennas – typically fibre – and this is now being termed “fronthaul” by analogy to backhaul.

Another example is the virtualised Optical Line Terminal (OLT) some vendors offer in the context of fixed fibre-to-the-home (FTTH) deployments. Here, the network element that terminates the line from the user’s premises has been converted into software and centralised as a group of virtual machines. Still another is the increasingly common “virtual set-top box (STB)” in cable networks, where the TV functions (electronic programming guide, stop/rewind/restart, time-shifting) associated with the STB are actually provided remotely by the network.

In this case, the degree of virtualisation, centralisation, and multiplexing can be very high, as latency and synchronisation are less of a problem. The functions could actually move all the way out of the operator network, off to a public cloud like Amazon EC2 – this is in fact how Netflix does it.

3: Some business support and applications functions are moving right out of the network entirely

If Netflix can deliver the world’s premier TV/video STB experience out of Amazon EC2, there is surely a strong case to look again at which applications should be delivered on-premises, in the private cloud, or moved into a public cloud. As explained later in this note, the distinctions between on-premises, forward-deployed, private cloud, and public cloud are themselves being eroded. At the strategic level, we anticipate pressure for more outsourcing and more hosted services.

4: Routers and switches are software, too

In the core of the network, the routers that link all this together are also turning into software. This is the domain of true SDN – basically, the effort to replace relatively smart routers with much cheaper switches whose forwarding rules are generated in software by a much smarter controller node. This is well reported elsewhere, but it is necessary to take note of it. In the mobile context, we also see it in the increasing prevalence of virtualised solutions for the LTE Evolved Packet Core (EPC), Mobility Management Entity (MME), etc.
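The router/switch split can be sketched as follows. This is a conceptual toy under hypothetical names, not OpenFlow or any real controller API: the switch only matches packets against installed rules, while the controller computes those rules from a global view of the network.

```python
# Sketch of the SDN split: forwarding decisions are computed centrally
# and pushed down to simple switches as match/action rules.
# Purely illustrative; not a real SDN protocol or controller.

class Switch:
    """A 'dumb' switch: it only matches traffic against installed rules."""
    def __init__(self):
        self.flow_table = {}  # destination -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Unknown destinations are punted to the controller.
        return self.flow_table.get(dst, "send-to-controller")


class Controller:
    """The 'smart' node: generates rules from a global topology view."""
    def __init__(self, topology):
        self.topology = topology  # switch id -> {destination: port}

    def program(self, switch, switch_id):
        for dst, port in self.topology[switch_id].items():
            switch.install_rule(dst, port)
```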

5: Wherever it is, software increasingly looks like the cloud

Virtualisation – decoupling software from the physical hardware it runs on, so that one machine can host many independent ‘virtual machines’, or many machines can be pooled to act as one – is a key trend. Even when, as with network devices, software is running on a dedicated machine, it will increasingly be found running in its own virtual machine. This helps with management and security and, most of all, with resource sharing and scalability. For example, a virtual baseband might have VMs for each of 2G, 3G, and 4G. If the capacity requirements are small, many different sites might share a physical machine; if large, one site might run on several machines.
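The sharing argument – many small sites on one machine, one large site spread across several – amounts to a packing problem, sketched below with purely illustrative numbers (no real baseband capacity figures are implied):

```python
# Toy placement of cell-site baseband loads onto servers of fixed
# capacity: small sites share a machine, and a site whose load exceeds
# one machine spills across several. Greedy first-fit, for illustration.

def place_sites(site_loads, server_capacity):
    """Return a list of servers, each a list of (site, share-of-load)."""
    servers = [[]]
    free = server_capacity
    for site, load in site_loads:
        while load > 0:
            if free == 0:
                servers.append([])       # bring up another server
                free = server_capacity
            share = min(load, free)      # fit as much as will go
            servers[-1].append((site, share))
            free -= share
            load -= share
    return servers
```

With capacity 10, sites A (load 2) and B (load 3) share the first server with part of a large site C (load 12), whose remainder spills onto a second server.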

This has important implications, because it also makes sharing among users easier. Those users could be different functions, or different cell sites, but they could also be customers or other operators. It is no accident that NEC’s first virtualised product, announced at MWC, is a complete MVNO solution. It has never been as easy to provide more of your carrier needs yourself, and it will only get easier.

The following Huawei slide (from their Carrier Business Group CTO, Sanqi Li) gives a good visual overview of a software-defined network.

Figure 1: An architecture overview for a software-defined operator

Source: Huawei

 

  • The Challenges of the Software-Defined Operator
  • Three Vendors and the Software-Defined Operator
  • Ericsson
  • Huawei
  • Cisco Systems
  • The Changing Role of the Vendors
  • Who Benefits?
  • Who Loses?
  • Conclusions
  • Platform provider or platform consumer
  • Define your network sharing strategy
  • Challenge the coding cultural cringe

 

  • Figure 1: An architecture overview for a software-defined operator
  • Figure 2: A catalogue for everything
  • Figure 3: Ericsson shares (part of) the vision
  • Figure 4: Huawei: “DevOps for carriers”
  • Figure 5: Cisco aims to dominate the software-defined “Internet of Everything”

Cloud 2.0: the fight for the next wave of customers

Summary: The fight for the Cloud Services market is about to move into new segments and territories. In the build-up to the launch of our new strategy report, ‘Telco strategies in the Cloud’, we review perspectives on this shared at the 2012 EMEA and Silicon Valley Executive Brainstorms by strategists from major telcos and tech players, including: Orange, Telefonica, Verizon, Vodafone, Amazon, Bain, Cisco, and Ericsson (September 2012, Executive Briefing Service, Cloud & Enterprise ICT Stream).

Below is an extract from this 33 page Telco 2.0 Briefing Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here. Non-members can subscribe here and for this and other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Introduction

As part of the New Digital Economics Executive Brainstorm series, future strategies in Cloud Services were explored at the New Digital Economics Silicon Valley event at the Marriott Hotel, San Francisco, on the 27th March, 2012, and the second EMEA Cloud 2.0 event at the Grange St. Pauls Hotel on the 13th June 2012.

At the events, over 200 specially-invited senior executives from across the communications, media, retail, finance and technology sectors looked at how to make money from cloud services and the role and strategies of telcos in this industry, using a widely acclaimed interactive format called ‘Mindshare’.

This briefing summarises key points, participant votes, and our high-level take-outs from across the events, and focuses on the common theme that the cloud market is evolving to address new customers, and the consequence of this change on strategy and implementation. We are also publishing a comprehensive report on Cloud 2.0: Telco Strategies in the Cloud.


Executive Summary

The end of the beginning

The first phase of enterprise cloud services has been dominated by the ‘big tech’ and web players like Amazon, Google, and Microsoft, who have developed highly sophisticated cloud operations at enormous scale. The customers in this first round are the classic ‘early adopters’ of enterprise ICT – players with a high proportion of IT genes in their corporate DNA such as Netflix, NASA, Silicon Valley start ups, some of the world’s largest industrial and marketing companies, and the IT industry itself. There is little doubt that these leading customers and major suppliers will retain their leading edge status in the market.

The next phase of cloud market development is the move into new segments in the broader market. Participants at the EMEA brainstorm thought that a combination of new customers and new propositions would drive the most growth in the next 3 years.

UK Services Revenues: Actual and Forecast (index)

These new segments comprise both industries and companies outside the early adopters in developed markets, and companies in new territories in emerging and developing markets. These customers are typically less technology oriented, more focused on business requirements, and need a combination of de-mystification of cloud and support to develop and run such systems.

Closer to the customer

There are opportunities for telcos in this evolving landscape. While the major players’ scale will be hard to beat, there are opportunities in the new segments in being ‘closer to the customer’. This involves telcos leveraging potential advantages of:

  • existing customer relationships, existing enterprise IT assets, and channels to markets (where they exist);
  • geographical proximity, where telcos can build, locate and connect more directly to overcome data sovereignty and latency issues.

Offering unique, differentiated services

Telcos should also be able to leverage existing assets and capabilities through APIs in the cloud to create distinctive offerings to enterprise and SME customers:

  • Network assets will enable better management of cloud services by allowing greater control of the network components;
  • Data assets will enable a wider range of potential applications for cloud services that use telco data (such as identification services);
  • And communications assets (such as APIs to voice and messaging) will allow communications services to be built in to cloud applications.

Next steps for telcos

  • Telcos need to move fast to leverage their existing relationships with customers both large and small and optimise their cloud offerings in line with new trends in the enterprise ICT market, such as bring-your-own-device (BYOD).
  • Customers are increasingly looking to outsource business processes to cut costs, and telcos are well-placed to take advantage of this opportunity.
  • Telcos need to continue to partner with independent software vendors, in order to build new products and services. Telcos should also focus on tight integration between their core services and cloud services or cloud service providers (either delivered by themselves or by third parties.) During the events, we saw examples from Vodafone, Verizon and Orange amongst others.
  • Telcos should also look at the opportunity to act as cloud service brokers. For example, delivering a mash up of Google Apps, Workday and other services that are tightly integrated with telco products, such as billing, support, voice and data services. The telco could ensure that the applications work well together and deliver a fully supported, managed and billed suite of products.
  • Identity management and security also came through as strong themes and there is a natural role for telcos to play here. Telcos already have a trusted billing relationship and hold personal customer information. Extending this capability to offer pre-population of forms, acting as an authentication broker on behalf of other services and integrating information about location and context through APIs would represent additional business and revenue generating opportunities.
  • Most telcos are already exploring opportunities to exploit APIs, which will enable them to start offering network-as-a-service, voice-as-a-service, device management, billing integration and other services. Depending on platform and network capability, there are literally hundreds of APIs that telcos could offer to external developers. These APIs could be used to develop applications that are integrated with telcos’ network product or service, which in turn makes the telco more relevant to their customers.

We will be exploring these strategies in depth in Cloud 2.0: Telco Strategies in the Cloud and at the invitation only New Digital Economics Executive Brainstorms in Digital Arabia in Dubai, 6-7 November, and Digital Asia in Singapore, 3-5 December, 2012.

Key questions explored at the brainstorms and in this briefing:

  • How will the Cloud Services market evolve?
  • Which customer and service segments are growing fastest (IaaS, PaaS, SaaS)?
  • What are the critical success factors to market adoption?
  • Who will be the leading players, and how will it impact different sectors?
  • What are the telcos’ strengths and who are the most advanced telcos today?
  • Which aspects of the cloud services market should they pursue first?
  • Where should telcos compete with IT companies and where should they cooperate?
  • What must telcos do to secure their share of the cloud and how much time do they have?

Stimulus Speakers/Panelists

Telcos

  • Peter Martin, Head of Strategy, Cloud Computing, Orange Group
  • Moisés Navarro Marín, Director, Strategy Global Cloud Services, Telefonica Digital
  • Alex Jinivizian, Head of Enterprise Strategy, Verizon Enterprise Solutions
  • Robert Brace, Head of Cloud Services, Vodafone Group

Technology Companies

  • Mohan Sadashiva, VP & GM, Cloud Services, Aepona
  • Gustavo Reyna, Solutions Marketing Manager, Aepona
  • Iain Gavin, Head of EMEA Web Services, Amazon
  • Pat Adamiak, Senior Director, Cloud Solutions, Cisco
  • Charles J. Meyers, President, Equinix Americas
  • Arun Bhikshesvaran, CMO, Ericsson
  • John Zanni, VP of Service Provider Marketing & Alliances, Parallels

Consulting & Industry Analysis

  • Chris Brahm, Partner, Head of Americas Technology Practices, Bain
  • Andrew Collinson, Research Director, STL Partners

With thanks to our Silicon Valley 2012 event sponsors and partners:

Silicon Valley 2012 Event Sponsors

And our EMEA 2012 event sponsors:

EMEA 2012 Event Sponsors

To read the note in full, including the following sections detailing support for the analysis…

  • Round 2 of the Cloud Fight
  • Selling to new customers
  • What channels are needed?
  • How will telcos perform in cloud?
  • With which services will telcos succeed?
  • How can telcos differentiate?
  • Comments on telcos’ role, objectives and opportunities
  • Four telcos’ perspectives
  • Telefonica Digital – focusing on business requirements
  • Verizon – Cloud as a key Platform
  • Orange Business Services – communications related cloud
  • Vodafone – future cloud vision
  • Techco’s Perspectives
  • Amazon – A history of Amazon Web Services (AWS)
  • Cisco – a world of many clouds
  • Ericsson – the networked society and telco cloud
  • Aepona – Cloud Brokerage & ‘Network as a Service’ (NaaS)
  • The Telco 2.0™ Initiative

…and the following figures…

  • Figure 1 – Bain forecasts for business cloud market size
  • Figure 2 – Key barriers to cloud adoption
  • Figure 3 – Identifying the cloud growth markets
  • Figure 4 – Requirements for success
  • Figure 5 – New customers to drive cloud growth
  • Figure 6 – How to increase revenues from cloud services
  • Figure 7 – How to move cloud services forward
  • Figure 8 – Enterprise cloud channels
  • Figure 9 – Small businesses cloud channels
  • Figure 10 – Vote on Telco Cloud Market Share
  • Figure 11 – Telcos’ top differentiators in the cloud
  • Figure 12 – The global reach of Orange Business
  • Figure 13 – The telco as an intermediary
  • Figure 14 – Vodafone’s vision of the cloud
  • Figure 15 – Amazon Web Services’ cloud infrastructure
  • Figure 16 – Cisco’s world of many clouds
  • Figure 17 – Cloud traffic in the data centre
  • Figure 18 – Ericsson’s vision for telco cloud
  • Figure 19 – Summary of Ericsson cloud functions
  • Figure 20 – Aepona Cloud Services Broker
  • Figure 21 – How to deliver network-enhanced cloud services

Members of the Telco 2.0 Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can download the full 33-page report in PDF format here. Non-members, please subscribe here. For this or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Companies and technologies covered: Telefonica, Vodafone, Verizon, Orange, Cloud, Amazon, Google, Ericsson, Cisco, Aepona, Equinix, Parallels, Bain, Telco 2.0, IaaS, PaaS, SaaS, private cloud, public cloud, telecom, strategy, innovation, ICT, enterprise.

Mobile Broadband 2.0: The Top Disruptive Innovations

Summary: Key trends, tactics, and technologies for mobile broadband networks and services that will influence mid-term revenue opportunities, cost structures and competitive threats. Includes consideration of LTE, network sharing, WiFi, next-gen IP (EPC), small cells, CDNs, policy control, business model enablers and more. (March 2012, Executive Briefing Service, Future of the Networks Stream).

Trends in European data usage


Below is an extract from this 44 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings and more on Facebook at the Silicon Valley (27-28 March) and London (12-13 June) New Digital Economics Brainstorms.


Introduction

Telco 2.0 has previously published a wide variety of documents and blog posts on mobile broadband topics – content delivery networks (CDNs), mobile CDNs, WiFi offloading, Public WiFi, network outsourcing (“‘Under-The-Floor’ (UTF) Players: threat or opportunity?”) and so forth. Our conferences have featured speakers and panellists discussing operator data-plan pricing strategies, tablets, network policy and numerous other angles. We’ve also featured guest material such as Arete Research’s report LTE: Late, Tempting, and Elusive.

In our recent ‘Under the Floor (UTF) Players’ briefing, we looked at strategies to deal with some of the challenges facing operators as a result of market structure and outsourcing.


This Executive Briefing is intended to complement and extend those efforts, looking specifically at those technical and business trends which are truly “disruptive”, either immediately or in the medium-term future. In essence, the document can be thought of as a checklist for strategists – pointing out key technologies or trends around mobile broadband networks and services that will influence mid-term revenue opportunities and threats. Some of those checklist items are relatively well-known, others more obscure but nonetheless important. What this document doesn’t cover is more straightforward concepts around pricing, customer service, segmentation and so forth – all important to get right, but rarely disruptive in nature.

During 2012, Telco 2.0 will be rolling out a new MBB workshop concept, which will audit operators’ existing technology strategy and planning around mobile data services and infrastructure. This briefing document is a roundup of some of the critical issues we will be advising on, as well as our top-level thinking on the importance of each trend.

It starts by discussing some of the issues which determine the extent of any disruption:

  • Growth in mobile data usage – and whether the much-vaunted “tsunami” of traffic may be slowing down
  • The role of standardisation, and whether it is a facilitator or inhibitor of disruption
  • Whether the most important MBB disruptions are likely to be telco-driven, or will stem from other actors such as device suppliers, IT companies or Internet firms.

The report then drills into a few particular domains where technology is evolving, looking at some of the most interesting and far-reaching trends and innovations. These are split broadly between:

  • Network infrastructure evolution (radio and core)
  • Control and policy functions, and business-model enablers

It is not feasible for us to cover all these areas in huge depth in a briefing paper such as this. Some areas such as CDNs and LTE have already been subject to other Telco 2.0 analysis, and this will be linked to where appropriate. Instead, we have drilled down into certain aspects we feel are especially interesting, particularly where these are outside the mainstream of industry awareness and thinking – and tried to map technical evolution paths onto potential business model opportunities and threats.

This report cannot be truly exhaustive – it doesn’t look at the nitty-gritty of silicon components, or antenna design, for example. It also treads a fine line between technological accuracy and ease-of-understanding for the knowledgeable but business-focused reader. For more detail or clarification on any area, please get in touch with us – email contact@stlpartners.com or call +44 (0) 207 247 5003.

Telco-driven disruption vs. external trends

There are various potential sources of disruption for the mobile broadband marketplace:

  • New technologies and business models implemented by telcos, which increase revenues, decrease costs, improve performance or alter the competitive dynamics between service providers.
  • 3rd party developments that can either bolster or undermine the operators’ broadband strategies. This includes both direct MBB innovations (new uses of WiFi, for example), or bleed-over from adjacent related marketplaces such as device creation or content/application provision.
  • External, non-technology effects such as changing regulation, economic backdrop or consumer behaviour.

The majority of this report covers “official” telco-centric innovations – LTE networks, new forms of policy control, and so on.

External disruptions to monitor

But the most dangerous form of innovation is that from third parties, which can undermine assumptions about the ways mobile broadband can be used, introduce new mechanisms for arbitrage, or otherwise subvert operators’ pricing plans or network controls.

In the voice communications world, there are often regulations in place to protect service providers – such as bans on the use of “SIM boxes” to terminate calls and reduce interconnection payments. But in the data environment, it is far less obvious that such work-arounds are illegal, or even outside the scope of fair-usage conditions. That said, we have already seen some attempts by telcos to manage these effects – such as charging extra for “tethering” on smartphones.

It is not really possible to predict all possible disruptions of this type – such is the nature of innovation. But by describing a few examples, market participants can gauge their level of awareness, as well as gain motivation for ongoing “scanning” of new developments.

Some of the areas being followed by Telco 2.0 include:

  • Connection-sharing. This is where users might link devices together locally, perhaps through WiFi or Bluetooth, and share multiple cellular data connections. This is essentially “multi-tethering” – for example, 3 smartphones discovering each other nearby, perhaps each with a different 3G/4G provider, and pooling their connections together for shared use. From the user’s point of view it could improve effective coverage and maximum/average throughput speed. But from the operators’ view it would break the link between user identity and subscription, and essentially offload traffic from poor-quality networks on to better ones.
  • SoftSIM or SIM-free wireless. Over the last five years, various attempts have been made to decouple mobile data connections from SIM-based authentication. In some ways this is not new – WiFi doesn’t need a SIM, it is optional for WiMAX, and CDMA devices have typically been “hard-coded” to register on a specific operator network. But the GSM/UMTS/LTE world has always relied on subscriber identification through a physical card. At one level, this is very good: SIMs are distributed easily and have enabled a successful prepay ecosystem to evolve. They provide operator control points and the ability to host secure applications on the card itself. However, the need to obtain a physical card restricts business models, especially for transient or temporary use such as a “one-day pass”. But the most dangerous potential change is a move to a “soft” SIM, embedded in the device software stack. Companies such as Apple have long dreamed of acting as a virtual network provider, brokering between the user and multiple networks. There is even a patent for encouraging per-call (or perhaps per-data-connection) bidding, with telcos competing head to head on price/quality grounds. Telco 2.0 views this type of least-cost routing as a major potential risk for operators, especially for mobile data – although it could also enable some new business models that have been difficult to achieve in the past.
  • Encryption. Many of the new business models and technology deployment intentions of operators, vendors and standards bodies are predicated on analysing data flows. Deep packet inspection (DPI) is expected to be used to identify applications or traffic types, enabling differential treatment in the network, or different charging models to be employed. Yet this is rendered largely useless (or at least severely limited) when various types of encryption are used. Various content and application types already secure data in this way – content DRM, BlackBerry traffic, corporate VPN connections and so on. But increasingly, we will see major Internet companies such as Apple, Google, Facebook and Microsoft using such techniques, both for their own users’ security and because it hides precise indicators of usage from the network operators. If a future Android phone sends all its mobile data back via a VPN tunnel and breaks it out in Mountain View, California, operators will be unable to discern YouTube video from search or VoIP traffic. This is one of the reasons why application-based charging models – one- or two-sided – are difficult to implement.
  • Application evolution speed. One of the largest challenges for operators is the pace of change of mobile applications. The growing penetration of smartphones, appstores and ease of “viral” adoption of new services causes a fundamental problem – applications emerge and evolve on a month-by-month or even week-by-week basis. This is faster than any realistic internal telco processes for developing new pricing plans, or changing network policies. Worse, the nature of “applications” is itself changing, with the advent of HTML5 web-apps, and the ability to “mash up” multiple functions in one app “wrapper”. Is a YouTube video shared and embedded in a Facebook page a “video service”, or “social networking”?

It is also important to recognise that certain procedures and technologies used in policy and traffic management will likely have unanticipated side-effects. Users, devices and applications are likely to respond to controls that limit their actions, while other developments may spontaneously produce “emergent behaviours”. For instance, there is a risk that over-strict data caps change usage models for smartphones, making users connect to the network only when absolutely necessary. This is likely to be at the same times and places when other users also feel it necessary, with the unfortunate implication that peaks of usage get “spikier” rather than being ironed out.

There is no easy answer to these types of external threat. Operator strategists and planners simply need to keep watch on emerging trends, and perhaps stress-test their assumptions and forecasts with market observers who keep tabs on such developments.

The mobile data explosion… or maybe not?

It is an undisputed fact that mobile data is growing exponentially around the world. Or is it?

A J-curve or an S-curve?

Telco 2.0 certainly thinks that growth in data usage is occurring, but is starting to see signs that the smooth curves that drive so many other decisions might not be so smooth – or so steep – after all. If this proves to be the case, it could be far more disruptive to operators and vendors than any of the individual technologies discussed later in the report. If operator strategists are not at least scenario-planning for lower data growth rates, they may find themselves in a very uncomfortable position in a year’s time.

In its most recent study of mobile operators’ traffic patterns, Ericsson concluded that Q2 2011 data growth was just 8% globally, quarter-on-quarter – a far cry from the 20%+ growth rates seen previously, and leaving a chart that looks distinctly like the beginning of an S-curve rather than a continued “hockey stick”. Given that the 8% includes a sizeable contribution from undoubted high-growth developing markets like China, it suggests that other markets are maturing quickly. (We are rather sceptical of Ericsson’s suggestion of seasonality in the data.) Other data points come from O2 in the UK, which appears to have had essentially zero traffic growth for the past few quarters, and Vodafone, which now cites European data traffic growing more slowly (19% year-on-year) than its data revenues (21%). Our view is that current global growth is c.60-70%, c.40% in mature markets and 100%+ in developing markets.
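To see why a drop to 8% quarter-on-quarter matters so much, it helps to compound the rates quoted above over a full year (the quarterly figures are from the text; the arithmetic below is our own illustration):

```python
# Quarter-on-quarter growth rates compound to very different annual
# run-rates. 8% QoQ (Ericsson's Q2 2011 figure) versus the 20%+ QoQ
# seen previously.

def annualised(qoq: float) -> float:
    """Compound a quarter-on-quarter growth rate over four quarters."""
    return (1 + qoq) ** 4 - 1

print(f"8% QoQ  -> {annualised(0.08):.0%} per year")   # roughly 36% per year
print(f"20% QoQ -> {annualised(0.20):.0%} per year")   # roughly 107% per year
```

In other words, sustained 8% quarterly growth implies an annual run-rate in the mid-30s percent – consistent with our c.40% estimate for mature markets, and nowhere near a doubling every year.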

Figure 1 – Trends in European data usage


Now it is possible that various one-off factors are at play here – the shift from unlimited to tiered pricing plans, the stronger enforcement of “fair-use” policies and the removal of particularly egregious heavy users. Certainly, other operators are still reporting strong growth in traffic levels. We may see a resumption in growth, for example if cellular-connected tablets start to be used widely for streaming video.

But we should also consider the potential market disruption if the picture is less straightforward than the famous exponential charts suggest. Even if the chart looks like a two-stage S, or a “kinked” exponential, the gap may have implications, much as a short recession does for an economy. Many of the technical and business model innovations of recent years have been responses to an expected continual upward spiral of demand – either controlling users’ access to network resources, pricing it more highly and with greater granularity, or building out extra capacity at a lower price. Even leaving aside the fact that raw, aggregated “traffic” levels are a poor indicator of cost or congestion, any interruption or slow-down of the growth will invalidate a lot of assumptions and plans.

Our view is that the scary forecasts of “explosions” and “tsunamis” have led virtually all parts of the industry to create solutions to the problem. We can probably list more than 20 approaches, most of them standalone “silos”.

Figure 2 – A plethora of mobile data traffic management solutions


What seems to have happened is that at least 10 of those approaches have worked – caps/tiers, video optimisation, WiFi offload, network densification and optimisation, collaboration with application firms to create “network-friendly” software, and so forth. Taken collectively, there is actually a risk that they have worked “too well”, to the extent that some previous forecasts have turned into “self-denying prophecies”.

There is also another common forecasting problem occurring – the assumption that later adopters of a technology will behave like earlier users. In many markets we are now reaching 30-50% smartphone penetration. That means that all the most enthusiastic users are already connected, and we’re left with those that are (largely) ambivalent and probably quite light users of data. That will bring the averages down, even if each individual user is still increasing their consumption over time. But even that assumption may be flawed, as caps have made people concentrate much more on their usage, offloading to WiFi and restricting their data flows. There is also some evidence that the growing number of free WiFi points is reducing laptop use of mobile data, which accounts for 70-80% of the total in some markets, while the much-hyped shift to tablets isn’t driving much extra mobile data as most are WiFi-only.
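The later-adopter effect is easy to demonstrate with a toy cohort model – all figures below are invented purely for illustration, not taken from any operator data:

```python
# Toy illustration: average usage per subscriber can fall even while
# every individual subscriber's usage rises, because later adopters
# start from a much lower base. All numbers here are invented.

early = {"users": 10_000_000, "gb_per_month": 1.0}   # enthusiast cohort
late  = {"users": 10_000_000, "gb_per_month": 0.2}   # ambivalent cohort

def avg(cohorts):
    """Blended average GB per subscriber across cohorts."""
    total_gb = sum(c["users"] * c["gb_per_month"] for c in cohorts)
    total_users = sum(c["users"] for c in cohorts)
    return total_gb / total_users

year1 = avg([early])   # before the light users join: 1.0 GB per user

# A year on: every individual grows usage 20%, but the light cohort
# has now joined the base.
grown = [{"users": c["users"], "gb_per_month": c["gb_per_month"] * 1.2}
         for c in (early, late)]
year2 = avg(grown)

print(f"Year 1 average: {year1:.2f} GB; Year 2 average: {year2:.2f} GB")
```

The blended average falls from 1.0 to 0.72 GB per subscriber even though every individual user grew consumption by 20% – exactly the dilution effect that flatters early forecasts and then disappoints them.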

So has the industry over-reacted to the threat of a “capacity crunch”? What might be the implications?

The problem is that focusing on a single, narrow metric – “GB of data across the network” – ignores some important nuances and finer detail. From an economics standpoint, network costs tend to be driven by two main criteria:

  • Network coverage in terms of area or population
  • Network capacity at the busiest places/times

Coverage is (generally) therefore driven by factors other than data traffic volumes. Many cells have to be built and run anyway, irrespective of whether there’s actually much load – the operators all want to claim good footprints and may be subject to regulatory rollout requirements. Peak capacity in the most popular locations, however, is a different matter. That is where issues such as spectrum availability, cell site locations and the latest high-speed networks become much more important – and hence costs do indeed rise. However, it is far from obvious that the problems at those “busy hours” are always caused by “data hogs” rather than sheer numbers of people each using a small amount of data. (There is also another issue around signalling traffic, discussed later). 

Yes, there is a generally positive correlation between network-wide volume growth and costs, but it is far from perfect, and certainly not a direct causal relationship.

So let’s hypothesise briefly about what might occur if data traffic growth does tail off, at least in mature markets.

  • Delays to LTE rollout – if 3G networks are filling up less quickly than expected, the urgency of 4G deployment is reduced.
  • The focus of policy and pricing for mobile data may switch back to encouraging use rather than discouraging/controlling it. Capacity utilisation may become an important metric, given the high fixed costs and low marginal ones. Expect more loyalty-type schemes, plus various methods to drive more usage in quiet cells or off-peak times.
  • Regulators may start to take different views of traffic management or predicted spectrum requirements.
  • Prices for mobile data might start to fall again, after a period where we have seen them rise. Some operators might be tempted back to unlimited plans, for example if they offer “unlimited off-peak” or similar options.
  • Many of the more complex and commercially-risky approaches to tariffing mobile data might be deprioritised. For example, application-specific pricing involving packet-inspection and filtering might get pushed back down the agenda.
  • In some cases, we may even end up with overcapacity on cellular data networks – not to the degree we saw in fibre in 2001-2004, but there might still be an “overhang” in some places, especially if there are multiple 4G networks.
  • Steady growth of (say) 20-30% peak data per annum should be manageable with the current trends in price/performance improvement. It should be possible to deploy and run networks to meet that demand with reducing unit “production cost”, for example through use of small cells. That may reduce the pressure to fill the “revenue gap” on the infamous scissors-diagram chart.
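A back-of-envelope sketch of that last point: if peak demand grows at a steady 20-30% a year while the unit “production cost” per GB falls at a comparable rate, total cost of carriage stays roughly flat. The parameters below are illustrative assumptions, not figures from this report:

```python
# Sketch: steady demand growth offset by falling unit cost.
# If peak data grows ~25% a year while cost per GB delivered falls
# ~20% a year (small cells, better spectral efficiency), the total
# cost of carriage barely moves. All numbers are illustrative.

demand_growth = 0.25      # assumed annual growth in peak data carried
unit_cost_decline = 0.20  # assumed annual fall in cost per GB delivered

demand, unit_cost = 100.0, 1.0   # index values in year 0
for year in range(1, 6):
    demand *= 1 + demand_growth
    unit_cost *= 1 - unit_cost_decline
    total = demand * unit_cost
    print(f"Year {year}: demand index {demand:6.1f}, "
          f"total cost index {total:6.1f}")
```

With these assumptions the total cost index stays at 100 throughout, because 1.25 × 0.80 = 1.00: demand growth and unit-cost decline exactly cancel. The real question for operators is which side of that balance their own growth and cost curves fall on.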

Overall, it is still a little too early to declare shifting growth patterns for mobile data a “disruption”. There is a lack of clarity on what is happening, especially in terms of responses to the new controls, pricing and management technologies recently put in place. But operators need to watch extremely closely what is going on – and plan for multiple scenarios.

Specific recommendations will depend on an individual operator’s circumstances – user base, market maturity, spectrum assets, competition and so on. But broadly, we see three scenarios and implications for operators:

  • “All hands on deck!”: Continued strong growth (perhaps with a small “blip”) which maintains the pressure on networks, threatens congestion, and drives the need for additional capacity, spectrum and capex.
    • Operators should continue with current multiple strategies for dealing with data traffic – acquiring new spectrum, upgrading backhaul, exploring massive capacity enhancement with small cells and examining a variety of offload and optimisation techniques. Where possible, they should explore two-sided models for charging and use advanced pricing, policy or segmentation techniques to rein in abusers and reward those customers and applications that are parsimonious with their data use. Vigorous lobbying activities will be needed, for gaining more spectrum, relaxing Net Neutrality rules and perhaps “taxing” content/Internet companies for traffic injected onto networks.
  • “Panic over”: Moderating and patchy growth, which settles to a manageable rate – comparable with the patterns seen in the fixed broadband marketplace.
    • This will mean that operators can “relax” a little, with the respite in explosive growth meaning that the continued capex cycles should be more modest and predictable. Extension of today’s pricing and segmentation strategies should improve margins, with continued innovation in business models able to proceed without rush, and without risking confrontation with Internet/content companies over traffic management techniques. Focus can shift towards monetising customer insight, ensuring that LTE rollouts are strategic rather than tactical, and exploring new content and communications services that exploit the improving capabilities of the network.
  • “Hangover”: Growth flattens off rapidly, leaving operators with unused capacity and threatening brutal price competition between telcos.
    • This scenario could prove painful, reminiscent of the early-2000s experience in the fixed-broadband marketplace. Wholesale business models could help generate incremental traffic and revenue, while the emphasis will be on fixed-cost minimisation. Some operators will scale back 4G rollouts until cost and maturity pass the tipping-point for outright replacement of 3G. Restrictive policies on bandwidth use will be lifted, as operators compete to give customers the fastest / most-open access to the Internet on mobile devices. Consolidation – and perhaps bankruptcies – may ensue, as declining data prices coincide with substitution of the core voice and messaging business.

To read the note in full, including the following analysis…

  • Introduction
  • Telco-driven disruption vs. external trends
  • External disruptions to monitor
  • The mobile data explosion… or maybe not?
  • A J-curve or an S-curve?
  • Evolving the mobile network
  • Overview
  • LTE
  • Network sharing, wholesale and outsourcing
  • WiFi
  • Next-gen IP core networks (EPC)
  • Femtocells / small cells / “cloud RANs”
  • HetNets
  • Advanced offload: LIPA, SIPTO & others
  • Peer-to-peer connectivity
  • Self optimising networks (SON)
  • M2M-specific broadband innovations
  • Policy, control & business model enablers
  • The internal politics of mobile broadband & policy
  • Two sided business-model enablement
  • Congestion exposure
  • Mobile video networking and CDNs
  • Controlling signalling traffic
  • Device intelligence
  • Analytics & QoE awareness
  • Conclusions & recommendations
  • Index

…and the following figures…

  • Figure 1 – Trends in European data usage
  • Figure 2 – A plethora of mobile data traffic management solutions
  • Figure 3 – Not all operator WiFi is “offload” – other use cases include “onload”
  • Figure 4 – Internal ‘power tensions’ over managing mobile broadband
  • Figure 5 – How a congestion API could work
  • Figure 6 – Relative Maturity of MBB Management Solutions
  • Figure 7 – Laptops generate traffic volume, smartphones create signalling load
  • Figure 8 – Measuring Quality of Experience
  • Figure 9 – Summary of disruptive network innovations

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 44-page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, geographies, people and products referenced: 3GPP, Aero2, Alcatel Lucent, AllJoyn, ALU, Amazon, Amdocs, Android, Apple, AT&T, ATIS, BBC, BlackBerry, Bridgewater, CarrierIQ, China, China Mobile, China Unicom, Clearwire, Conex, DoCoMo, Ericsson, Europe, EverythingEverywhere, Facebook, Femto Forum, FlashLinq, Free, Germany, Google, GSMA, H3G, Huawei, IETF, IMEI, IMSI, InterDigital, iPhones, Kenya, Kindle, Light Radio, LightSquared, Los Angeles, MBNL, Microsoft, Mobily, Netflix, NGMN, Norway, NSN, O2, WiFi, Openet, Qualcomm, Radisys, Russia, Saudi Arabia, SoftBank, Sony, Stoke, Telefonica, Telenor, Time Warner Cable, T-Mobile, UK, US, Verizon, Vita, Vodafone, WhatsApp, Yota, YouTube, ZTE.

Technologies and industry terms referenced: 2G, 3G, 4.5G, 4G, Adaptive bitrate streaming, ANDSF (Access Network Discovery and Selection Function), API, backhaul, Bluetooth, BSS, capacity crunch, capex, caps/tiers, CDMA, CDN, CDNs, Cloud RAN, content delivery networks (CDNs), Continuous Computing, Deep packet inspection (DPI), DPI, DRM, Encryption, Enhanced video, EPC, ePDG (Evolved Packet Data Gateway), Evolved Packet System, Femtocells, GGSN, GPS, GSM, Heterogeneous Network (HetNet), Heterogeneous Networks (HetNets), HLRs, hotspots, HSPA, HSS (Home Subscriber Server), HTML5, HTTP Live Streaming, IFOM (IP Flow Mobility and Seamless Offload), IMS, IPR, IPv4, IPv6, LIPA (Local IP Access), LTE, M2M, M2M network enhancements, metro-cells, MiFi, MIMO (multiple-input, multiple-output), MME (Mobility Management Entity), mobile CDNs, mobile data, MOSAP, MSISDN, MVNAs (mobile virtual network aggregators), MVNO, Net Neutrality, network outsourcing, Network sharing, Next-generation core networks, NFC, NodeBs, offload, OSS, outsourcing, P2P, Peer-to-peer connectivity, PGW (PDN Gateway), picocells, policy, Policy and Charging Rules Function (PCRF), Pre-cached video, pricing, Proximity networks, Public WiFi, QoE, QoS, RAN optimisation, RCS, remote radio heads, RFID, self-optimising network technology (SON), Self-optimising networks (SON), SGW (Serving Gateway), SIM-free wireless, single RANs, SIPTO (Selective IP Traffic Offload), SMS, SoftSIM, spectrum, super-femtos, Telco 2.0 Happy Pipe, Transparent optimisation, UMTS, ‘Under-The-Floor’ (UTF) Players, video optimisation, VoIP, VoLTE, VPN, White space, WiFi, WiFi Direct, WiFi offloading, WiMAX, WLAN.

‘Under-The-Floor’ (UTF) Players: threat or opportunity?

Introduction

The ‘smart pipe’ imperative

In some quarters of the telecoms industry, the received wisdom is that the network itself is merely an undifferentiated “pipe”, providing commodity connectivity, especially for data services. The value, many assert, is in providing higher-tier services, content and applications, either to end-users, or as value-added B2B services to other parties. The Telco 2.0 view is subtly different. We maintain that:

  1. Increasingly, valuable services will be provided by third parties, but operators can still provide a few end-user services themselves. They will, for example, continue to offer voice and messaging services for the foreseeable future.
  2. Operators still have an opportunity to offer enabling services to ‘upstream’ service providers, such as personalisation and targeting (of marketing and services) via use of their customer data, payments, identity and authentication, and customer care.
  3. Even if operators fail (or choose not to pursue) options 1 and 2 above, the network must be ‘smart’ and all operators will pursue at least a ‘smart network’ or ‘Happy Pipe’ strategy. This will enable operators to achieve three things.
  • To ensure that data is transported efficiently so that capital and operating costs are minimised and the Internet and other networks remain cheap methods of distribution.
  • To improve user experience by matching the performance of the network to the nature of the application or service being used – or indeed vice versa, adapting the application to the actual constraints of the network. ‘Best efforts’ is fine for asynchronous communication, such as email or text, but unacceptable for traditional voice telephony. A video call or streamed movie could exploit guaranteed bandwidth if possible / available, or else they could self-optimise to conditions of network congestion or poor coverage, if well-understood. Other services have different criteria – for example, real-time gaming demands ultra-low latency, while corporate applications may demand the most secure and reliable path through the network.
  • To charge appropriately for access to and/or use of the network. It is becoming increasingly clear that the Telco 1.0 business model – that of charging the end-user per minute or per Megabyte – is under pressure as new business models for the distribution of content and transportation of data are being developed. Operators will need to be capable of charging different players – end-users, service providers, third-parties (such as advertisers) – on a real-time basis for provision of broadband and maybe various types or tiers of quality of service (QoS). They may also need to offer SLAs (service level agreements), monitor and report actual “as-experienced” quality metrics or expose information about network congestion and availability.

Under the floor players threaten control (and smartness)

Either through deliberate actions such as outsourcing, or through external agency (Government, greenfield competition, etc.), we see the network part of the telco universe suffering a creeping loss of control and ownership. There is a steady move towards outsourced networks, as they are shared, or built around the concepts of open-access and wholesale. While this would be fine if the telcos themselves remained in control of this trend (we see significant opportunities in wholesale and infrastructure services), in many cases the opposite is occurring. Telcos are losing control, and in our view losing influence over their core asset – the network. They are worrying so much about competing with so-called OTT providers that they are missing the threat from below.

At the point at which many operators, at least in Europe and North America, are seeing the services opportunity ebb away, and ever-greater dependency on new models of data connectivity provision, they are potentially cutting off (or being cut off from) one of their real differentiators.

Given the uncertainties around both fixed and mobile broadband business models, it is sensible for operators to retain as many business model options as possible. Operators are battling with significant commercial and technical questions such as:

  • Can upstream monetisation really work?
  • Will regulators permit priority services under Net Neutrality regulations?
  • What forms of network policy and traffic management are practical, realistic and responsive?

Answers to these and other questions remain opaque. However, it is clear that many of the potential future business models will require networks to be physically or logically re-engineered, and flexible back-office functions, like billing and OSS, to be closely integrated with the network.

Outsourcing networks to third-party vendors, particularly when such a network is shared with other operators, is dangerous in these circumstances. Partners that today agree on the principles for network-sharing may have very different strategic views and goals in two years’ time, especially given the unknown use-cases for new technologies like LTE.

This report considers all these issues and gives guidance to operators who may not have considered all the various ways in which network control is being eroded, from Government-run networks through to outsourcing services from the larger equipment providers.

Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators

Source: STL Partners

Industry structure is being reshaped

Over the last year, Telco 2.0 has updated its overall map of the telecom industry, to reflect ongoing dynamics seen in both fixed and mobile arenas. In our strategic research reports on Broadband Business Models, and the Roadmap for Telco 2.0 Operators, we have explored the emergence of various new “buckets” of opportunity, such as verticalised service offerings, two-sided opportunities and enhanced variants of traditional retail propositions.

In parallel to this, we’ve also looked again at some changes in the traditional wholesale and infrastructure layers of the telecoms industry. Historically, this has largely comprised basic capacity resale and some “behind the scenes” use of carrier’s-carrier services (roaming hubs, satellite / sub-oceanic transit, etc.).

Figure 2 – Telco 1.0 Wholesale & Infrastructure structure


Source: STL Partners

Content

  • Revising & extending the industry map
  • ‘Network Infrastructure Services’ or UTF?
  • UTF market drivers
  • Implications of the growing trend in ‘under-the-floor’ network service providers
  • Networks must be smart and controlling them is smart too
  • No such thing as a dumb network
  • Controlling the network will remain a key competitive advantage
  • UTF enablers: LTE, WiFi & carrier ethernet
  • UTF players could reduce network flexibility and control for operators
  • The dangers of ceding control to third-parties
  • No single answer for all operators but ‘outsourcer beware’
  • Network outsourcing & the changing face of major vendors
  • Why become an under-the-floor player?
  • Categorising under-the-floor services
  • Pure under-the-floor: the outsourced network
  • Under-the-floor ‘lite’: bilateral or multilateral network-sharing
  • Selective under-the-floor: Commercial open-access/wholesale networks
  • Mandated under-the-floor: Government networks
  • Summary categorisation of under-the-floor services
  • Next steps for operators
  • Build scale and a more sophisticated partnership approach
  • Final thoughts
  • Index

 

  • Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators
  • Figure 2 – Telco 1.0 Wholesale & Infrastructure structure
  • Figure 3 – The battle over infrastructure services is intensifying
  • Figure 4 – Examples of network-sharing arrangements
  • Figure 5 – Examples of Government-run/influenced networks
  • Figure 6 – Four under-the-floor service categories
  • Figure 7 – The need for operator collaboration & co-opetition strategies