Network-as-a-service: APIs, AI and the open cloud

NaaS is a cloud-native opportunity

Network virtualisation and disaggregation are creating opportunities that are broadly categorised as Network as a Service (NaaS). This concept has been around since the early 2010s, when the project to virtualise telecoms networks began. In other words, it is an idea that is native to telco cloud and a natural by-product of virtualising network functions. Some of the goals of network functions virtualisation implied NaaS. These were to enable networking capabilities to be:

  • Spun up and activated whenever required to meet user demand
  • Scaled up and out dynamically to provide greater capacity, bandwidth and reliability, along with lower latencies, whenever and wherever required
  • Programmable and instructible by operators, third parties such as application developers, and customers, including via APIs (see below)
  • Defined and managed centrally, through software, independently of the underlying network technologies and domains (for example, through software-defined networking [SDN], typically in SD-WAN platforms)
  • Made able – in the 5G era – to support multiple, parallel virtual networks running over the same physical core and access networks, for example in network slicing


The role of network slicing relates to a distinction between the NaaS discussion at the present time and previous iterations of the idea in the earlier phases of the telco industry’s cloud evolution. Previously, NaaS referred to services that depended either on the enhanced scalability enabled by virtualised network functions or on SDN control over traffic flows. Earlier NaaS services included:

  • On-demand activation, or scaling up or down, of dedicated Ethernet links or broadband access
  • Flexible, rapid deployment of enterprise network services using Virtualised Network Functions (VNFs) hosted on universal customer premises equipment (uCPE)
  • SD-WAN, involving on-demand creation and centralised, SDN-based management of WAN services, via a software overlay, across multiple physical network types and domains

Current thinking around NaaS is directed towards the opportunities resulting from enabling the largely virtualised functions of the telco network to be programmed and customised around the requirements of applications of different types, typically via APIs. This is an opportunity linked to other technology trends such as edge computing, IoT and the emergence of cloud-native networks and functions. Here, it is not just the standard attributes of rigid VNFs that can be scaled or controlled via the service, but the fundamental building blocks of the network – from core to access – that can be re-programmed, modified or swapped out altogether. The ultimate logic of this is to allow an almost unlimited number of virtual networks to be created and run across a single cloud-managed, physical network.

Many of the commercial and technological challenges and opportunities from network APIs were discussed in our recent report, Network APIs: Driving new revenue streams for telcos. Our research shows that APIs represent a substantial opportunity for telcos, with the revenue opportunity created by the top 11 mobile network APIs forecast to reach over $22 billion by 2028 (see graphic below).

Mobile network API revenue opportunity, 2022-2028, worldwide


Source: STL Partners, TELUS

These APIs comprise network information APIs providing real-time information about the network (such as performance, hyper-precise location and device status) and network configuration APIs, which instruct the network (for example, quality-of-service on-demand, slice configuration and device onboarding).
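To make this concrete, the sketch below shows how an application might consume a network information API to check whether a device is currently reachable. The endpoint, payload fields and token handling are illustrative assumptions, loosely modelled on CAMARA-style device status APIs rather than any particular operator’s published interface.

```python
# Illustrative sketch only: querying a network information API for device status.
# The base URL, request fields and auth flow are assumptions for illustration,
# not a specific operator's documented interface.
import requests

API_BASE = "https://api.example-operator.com/device-status/v1"  # hypothetical endpoint
TOKEN = "..."  # access token obtained via the operator's OAuth2 flow

def get_device_connectivity(phone_number: str) -> dict:
    """Ask the network whether a device is currently reachable, and how."""
    response = requests.post(
        f"{API_BASE}/connectivity",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"device": {"phoneNumber": phone_number}},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"connectivityStatus": "CONNECTED_DATA"}

if __name__ == "__main__":
    print(get_device_connectivity("+14155550100"))
```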

NaaS is also an opportunity for non-telcos

Our forecast is, however, beset by a great deal of uncertainty. Firstly, the business model for these sorts of network API is still highly unclear – for example, how much application developers will actually be prepared to pay for network access via this route. This depends on operators being able to establish a clear value proposition for their APIs, i.e. that they give access to capabilities that clearly enhance the functionality of applications or are indeed essential to their performance. Secondly, operators would need to assert themselves as the primary, even exclusive, providers of access to these capabilities.

Table of contents

  • Executive Summary
    • NaaS is a major opportunity for telcos and non-telcos alike
    • NaaS 2.0 will be delivered across an open telco cloud
    • Recommendation: NaaS 2.0 is a long-term but fast-evolving opportunity and telcos need to pick a strategy
    • Three NaaS business models: Co-creator, Distributor and Aggregator
  • NaaS is a cloud-native opportunity
  • NaaS is also an opportunity for non-telcos
  • AI-driven automation and cloud-native software could bypass telco APIs
    • Cloud-native and AI are made for each other
    • AI-based NaaS will enable a new breed of automation-enabling, edge compute applications
    • NaaS 2.0 threatens a “Wild West” of networking
    • NaaS will drive a restructuring of the telecoms industry as a whole: How should telcos play?
  • Three NaaS 2.0 business models for the telco: Co-creator, distributor and aggregator
    • Business model 1: Enabler and co-creator of NaaS 2.0 services
    • Business model 2: Physical distributor of NaaS 2.0 services
    • Business model 3: NaaS aggregator
  • Conclusion: NaaS is a significant opportunity — but not just for telcos



Network APIs: Driving new revenue streams for telcos

Network APIs promise new revenues for telcos

Since 2020 there has been a resurgent interest in applications interfacing with the network they run over. The exponential increase in the number of connected devices and complex traffic, particularly video, is exerting pressure on network resources. Applications must become more aware of network and edge compute resource availability to meet increasingly stringent customer requirements as well as energy efficiency targets – for example, by prioritising critical applications. Multi-access edge computing (MEC) allows data to be collected and processed closer to the customer (more information on edge computing is available on our Edge hub).

STL Partners forecasts the revenue opportunity created by mobile network APIs to reach over $20 billion by 2028 (the full version of this report provides a breakdown of the opportunity for the top 11 network APIs), as well as enabling powerful new applications that leverage programmable, cloud-native networks.

Increased network programmability will enable developers to build applications that require guaranteed connection speed and bandwidth, giving users/providers the option to pay a premium for network resource when and where they need it. The network APIs fuelling this market fall into two broad categories:

  • Network information APIs: Basic network APIs that provide real-time information about the network will reach extremely high volumes over the next decade. These will gradually be consolidated into the core network offering as a hygiene factor for all operators. Examples include network performance (information only), hyper-precise location, real-time device status, etc.
  • Network configuration APIs: APIs that instruct the network will not reach the same volume of usage, instead offering a premium service to a smaller pool of users wanting to define their network environment. Examples of these APIs include quality-of-service on-demand, slice configuration and device onboarding. These APIs offer a longer-term monetisation opportunity for operators, although there is little visibility around what developers and enterprises will pay for these services (e.g., pay per use vs. monthly subscription, etc.); a minimal sketch of such a call follows after this list.
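As a minimal illustration of a configuration-style call, the sketch below requests a temporary quality-of-service boost for one device. The endpoint, profile names and field names are assumptions in the spirit of CAMARA-style quality-on-demand work, not a specific operator’s published API.

```python
# Illustrative sketch only: a network configuration API call that asks the
# network to apply a QoS profile between a device and an application server.
# Endpoint, profile identifiers and fields are assumptions for illustration.
import requests

API_BASE = "https://api.example-operator.com/qod/v1"  # hypothetical endpoint
TOKEN = "..."  # operator-issued access token

def create_qos_session(phone_number: str, app_server_ip: str,
                       profile: str = "LOW_LATENCY", duration_s: int = 600) -> dict:
    """Request a time-limited QoS session for one device/server pair."""
    response = requests.post(
        f"{API_BASE}/sessions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "device": {"phoneNumber": phone_number},
            "applicationServer": {"ipv4Address": app_server_ip},
            "qosProfile": profile,   # named profile mapped to network policy
            "duration": duration_s,  # seconds the boost should last
        },
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # includes a session id used to extend or delete the boost
```

Whether a call like this is billed per session or bundled into a subscription is exactly the open commercial question noted above.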

In this report, we explore the work that is currently happening to develop network APIs from a technical and commercial point of view, surveying the telecoms industry consortia that are proactively building the technical and commercial tools to make network-as-a-service a revenue-driving success.


Two API domains: The macro network and MEC

MEC APIs control both the compute and networking elements at the edge. Where a telco operates and manages the edge site, these APIs fall within its remit. In some instances, however, MEC APIs may define edge or cloud compute that is not operated by the telco. We therefore do not consider all MEC APIs to fall under the umbrella of network APIs (see figure below).

MEC APIs vs. Network APIs

Source: STL Partners

A MEC API is a set of programming interfaces that allow developers to access and utilise the resources of mobile edge computing platforms. These resources include computing power, storage, and network connectivity, and can be used to run applications, services, and tasks at the edge of the network, closer to the end users. MEC APIs can provide a way to offload workloads from the cloud to the edge, reducing latency and improving the performance of applications and services. CSPs must make a strategic decision on where to focus their development: general network APIs (quality-on-demand, location, etc.) or MEC APIs (edge node discovery, intent-based workload placement, etc.).
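As a purely illustrative sketch, the snippet below shows how a developer might use a hypothetical MEC API to discover the nearest edge zone for a device and place a workload there. Every endpoint and field is invented to make the idea concrete; real MEC platforms expose their own, differing interfaces.

```python
# Hypothetical MEC API sketch: edge node discovery plus intent-based workload
# placement. All URLs and fields are invented for illustration.
import requests

MEC_API = "https://mec.example-operator.com/v1"  # hypothetical endpoint
TOKEN = "..."

def closest_edge_zone(device_ip: str) -> str:
    """Return the identifier of the edge zone with lowest latency to the device."""
    r = requests.get(
        f"{MEC_API}/zones",
        params={"deviceIp": device_ip, "sortBy": "latency"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    r.raise_for_status()
    return r.json()["zones"][0]["zoneId"]

def deploy_workload(zone_id: str, image: str) -> dict:
    """Ask the platform to run a container image in the chosen edge zone."""
    r = requests.post(
        f"{MEC_API}/zones/{zone_id}/workloads",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"containerImage": image, "replicas": 1},
        timeout=5,
    )
    r.raise_for_status()
    return r.json()

# Usage: place a latency-sensitive service next to the user.
# deploy_workload(closest_edge_zone("203.0.113.10"), "registry.example.com/inference:1.2")
```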

Need for reliable, real-time connectivity across a wide area will drive demand

Based on our interviews with application developers, we developed a framework to assess the types of use cases network APIs are best suited to enable. This framework sets out the network API opportunity across two dimensions:

  • The geographic nature of the use case: Local area vs. wide-area use cases. This influences the type of edge that is likely to be used, with local-area use cases leveraging the on-premise edge and wide-area use cases better suited to the network edge.
  • Need for real-time vs. non-real-time insight and response: This depends on the mission criticality of the use case or the need from the application point of view to be dynamic (i.e., adapt to changing circumstances to maintain a consistent or enhanced customer experience).

As network operators, telcos’ primary value-add is the ability to provide quality connectivity. Application developers leverage awareness of the network throughout their development process, and the ability to define the network environment enables use cases which require constant, ultra-reliable connectivity (see figure below).

Importance of connectivity features for developers

Source: STL Partners Survey (December 2022), n=101

Table of Contents

  • Executive Summary
  • Network APIs promise new revenues for telcos
    • Two API domains: The macro network and MEC
    • Need for reliable, real-time connectivity across a wide area will drive demand
    • Layers of API needed to translate network complexity into valuable network functions
    • Cross-telco collaboration and engagement of developers
    • Each industry forum focuses on specific layers of the API value chain
  • Operators must leverage multiple distribution channels for network APIs
    • Failure to standardise quickly allows other distribution channels to achieve greater scale
    • Operators must engage the developer community to play an aggregator role
  • Challenges and barriers: What needs to change
  • Conclusion
  • Appendix
    • Understanding the fundamentals of APIs
    • What are network APIs and what has changed?



Telco ecosystems: How to make them work

The ecosystem business framework

The success of large businesses such as Microsoft, Amazon and Google, as well as digital disrupters like Airbnb and Uber, is widely attributed to their adoption of platform-enabled ecosystem business frameworks. It is this ecosystem approach that helped them to scale quickly, innovate and unlock value in opportunity areas where vertically integrated businesses, or those with a linear value chain, would have struggled. Internet-enabled digital opportunity areas tend to be unsuited to traditional business frameworks, which depend on having the time and the ability to anticipate needs, plan and execute accordingly.

As businesses in the telecommunications sector and beyond try to emulate the success of these companies and their ecosystem approach, it is necessary to clarify what is meant by the term “ecosystem” and how it can provide a framework for organising business.

The word “ecosystem” is borrowed from biology. It refers to a community of organisms – of any number of species – living within a defined physical environment.

A biological ecosystem

The components of a biological ecosystem

Source: STL Partners

A business ecosystem can therefore be thought of as a community of stakeholders (of different types) that exist within a defined business environment. The environment of a business ecosystem can be small or large.  This is also true in biology, where both a tree and a rainforest can equally be considered ecosystem environments.

The number of organisms within a biological community is dynamic. They coexist with others and are interdependent within the community and the environment. Environmental resources (i.e. energy and matter) flow through the system efficiently. This is how the ecosystem works.

Companies that adopt an ecosystem business framework identify a community of stakeholders to help them address an opportunity area, or drive business in that space. They then create a business environment (e.g. platforms, rules) to organise economic activity among those communities.  The environment integrates community activities in a complementary way. This model is consistent with STL Partners’ vision for a Coordination Age, where desired outcomes are delivered to customers by multiple parties acting together.


Characteristics of business ecosystems that work

Google, for example, adopted an ecosystem approach to tackle the search opportunity. Its search engine platform provides the environment for an external stakeholder community of businesses to reach consumers as they navigate the internet, based on what those consumers are looking for.

  • Google does not directly participate in the business-consumer transaction, but its platform reduces friction for participants (providing a good customer experience) and captures information on the exchange.

While Google leverages a technical platform, this is not a requirement for an ecosystem framework. Nespresso built an ecosystem around its patented coffee pod. It needed to establish a user-base for the pods, so it developed a business environment that included licensing arrangements for coffee machine manufacturers.  In addition, it provided support for high-end homeware retailers to supply these machines to end-users. It also created the online Nespresso Club for coffee aficionados to maintain demand for its product (a previous vertically integrated strategy to address this premium coffee-drinking niche had failed).

Ecosystem relevance for telcos

Telcos are exploring new opportunities for revenue. In many of these opportunities, the needs of the customer are evolving or changeable, budgets are tight, and time-to-market is critical. Planning and executing traditional business frameworks can be difficult under these circumstances, so ecosystem business frameworks are understandably of interest.

Traditional business frameworks require companies to match their internal strengths and capabilities to those required to address an opportunity. An ecosystem framework requires companies to consider where those strengths and capabilities are (i.e. external stakeholder communities). An ecosystem orchestrator then creates an environment in which the stakeholders contribute their respective value to meet that end. Additional end-user value may also be derived by supporting stakeholder communities whose products and services use, or are used with, the end-product or service of the ecosystem (e.g. the availability of third-party App Store apps adds value for end customers and drives demand for high-end Apple iPhones). It requires “outside-in” strategic thinking that goes beyond the bounds of the company – or even the industry (i.e. who has the assets and capabilities, who/what will support demand from end-users).

Many companies have rushed to implement ecosystem business frameworks, but have not attained the success of Microsoft, Amazon or Google, or, in the telco arena, M-Pesa. Telcos require an understanding of the rationale behind ecosystem business frameworks, what makes them work and how this has played out in other telco ecosystem implementations. As a result, they should be better able to determine whether to leverage this approach more widely.

Table of Contents

  • Executive Summary
  • The ecosystem business framework
  • Why ecosystem business frameworks?
    • Benefits of ecosystem business frameworks
  • Identifying ecosystem business frameworks
  • Telco experience with ecosystem frameworks
    • AT&T Community
    • Deutsche Telekom Qivicon
    • Telecom Infra Project (TIP)
    • GSMA Mobile Connect
    • Android
    • Lessons from telco experience
  • Criteria for successful ecosystem businesses
    • “Destination” status
    • Strong assets and capabilities to share
    • Dynamic strategy
    • Deep end-user knowledge
    • Participant stakeholder experience excellence
    • Continuous innovation
    • Conclusions
  • Next steps
    • Index


Fighting the fakes: How telcos can help

Internet platforms need a frictionless solution to fight the fakes

On the Internet, the old adage that nobody knows you are a dog can still ring true. All of the major Internet platforms, with the partial exception of Apple, are fighting frauds and fakes. That’s generally because these platforms either allow users to remain anonymous or use lax authentication systems that prioritise ease of use over rigour. Some people then use the cloak of anonymity in many different ways, such as writing glowing reviews of products they have never used on Amazon (in return for a payment) or enthusiastic reviews of restaurants owned by friends on Tripadvisor. Even the platforms that require users to register financial details are open to abuse. There have been reports of multiple scams on eBay, while regulators have alleged there has been widespread sharing of Uber accounts among drivers in London and other cities.

At the same time, Facebook/WhatsApp, Google/YouTube, Twitter and other social media services are experiencing a deluge of fake news, some of which can be very damaging for society. There has been a mountain of misinformation relating to COVID-19 circulating on social media, such as the notion that if you can hold your breath for 10 seconds, you don’t have the virus. Fake news is alleged to have distorted the outcome of the U.S. presidential election and the Brexit referendum in the U.K.

In essence, the popularity of the major Internet platforms has made them a target for unscrupulous people who want to propagate their world views, promote their products and services, discredit rivals and have ulterior (and potentially criminal) motives for participating in the gig economy.

Although all the leading Internet platforms use tools and reporting mechanisms to combat misuse, they are still beset with problems. In reality, these platforms are walking a tightrope – if they make authentication procedures too cumbersome, they risk losing users to rival platforms, while also incurring additional costs. But if they allow a free-for-all in which anonymity reigns, they risk a major loss of trust in their services.

In STL Partners’ view, the best way to walk this tightrope is to use invisible authentication – the background monitoring of behavioural data to detect suspicious activities. In other words, you keep the Internet platform very open and easy-to-use, but algorithms process the incoming data and learn to detect the patterns that signal potential frauds or fakes. If this idea were taken to an extreme, online interactions and transactions could become completely frictionless. Rather than asking a person to enter a username and password to access a service, they can be identified through the device they are using, their location, the pattern of keystrokes and which features they access once they are logged in. However, the effectiveness of such systems depends heavily on the quality and quantity of the data they are fed.
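The toy sketch below illustrates the principle: a login attempt is scored from passive signals (device familiarity, location, typing cadence) and only escalated to an explicit challenge when the score is high. The weights and thresholds are illustrative assumptions; a production system would learn them from labelled data rather than hard-code them.

```python
# Toy sketch of invisible authentication: score a login attempt from passive
# behavioural signals instead of challenging every user. Weights and thresholds
# are illustrative only.
from dataclasses import dataclass

@dataclass
class LoginSignals:
    known_device: bool             # device fingerprint previously seen for this account
    distance_from_usual_km: float  # distance from the user's normal locations
    keystroke_interval_ms: float   # mean inter-key interval during this login
    usual_keystroke_ms: float      # historical mean for this user

def risk_score(s: LoginSignals) -> float:
    """Return a 0..1 score; higher means more likely to be fraudulent."""
    score = 0.0
    if not s.known_device:
        score += 0.4
    if s.distance_from_usual_km > 500:
        score += 0.3
    if abs(s.keystroke_interval_ms - s.usual_keystroke_ms) > 80:
        score += 0.3  # large deviation from the user's normal typing rhythm
    return min(score, 1.0)

# Step up to an explicit challenge only when the passive score is high.
attempt = LoginSignals(known_device=False, distance_from_usual_km=1200,
                       keystroke_interval_ms=210, usual_keystroke_ms=120)
if risk_score(attempt) > 0.6:
    print("Require additional verification")
```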

In come telcos

This report explores how telcos could use their existing systems and data to help the major Internet companies to build better systems to protect the integrity of their platforms.

It also considers the extent to which telcos will need to work together to effectively fight fraud, just as they do to combat telecoms-related fraud and prevent stolen phones from being used across networks. For most use cases, the telcos in each national market will generally need to provide a common gateway through which a third party could check attributes of the user of a specific mobile phone number. As they plot their way out of the current pandemic, governments are increasingly likely to call for such gateways to help them track the spread of COVID-19 and identify people who may have become infected.


Using big data to combat fraud

In the financial services sector, artificial intelligence (AI) is now widely used to help detect potentially fraudulent financial transactions. Learning from real-world examples, neural networks can detect the behavioural patterns associated with fraud and how they are changing over time. They can then create a dynamic set of thresholds that can be used to trigger alarms, which could prompt a bank to decline a transaction.

In a white paper published in 2019, IBM claimed its AI and cognitive solutions are having a major impact on transaction monitoring and payment fraud modelling. In one of several case studies, the paper describes how the National Payment Switch in France (STET) is using behavioural information to reduce fraud losses by US$100 million annually. Owned by a consortium of financial institutions, STET processes more than 30 billion credit and debit card, cross-border, domestic and on-us payments annually.

STET now assesses the fraud risk for every authorisation request in real time. The white paper says IBM’s Safer Payments system generates a risk score, which is then passed to banks, issuers and acquirers, which combine it with customer information to make a decision on whether to clear or decline the transaction. IBM claims the system can process up to 1,200 transactions per second, and can compute a risk score in less than 10 milliseconds. While STET itself doesn’t have any customer data or data from other payment channels, the IBM system looks across all transactions, countrywide, as well as creating “deep behavioural profiles for millions of cards and merchants.”

Telcos, or at least the connectivity they provide, are also helping banks combat fraud. If they think a transaction is suspicious, banks will increasingly send a text message or call a customer’s phone to check whether they have actually initiated the transaction. Now, some telcos, such as O2 in the UK, are making this process more robust by enabling banks to check whether the user’s SIM card has been swapped between devices recently or if any call diverts are active – criminals sometimes pose as a specific customer to request a new SIM. All calls and texts to the number are then routed to the SIM in the fraudster’s control, enabling them to activate codes or authorisations needed for online bank transfers, such as one-time PINs or passwords.
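The sketch below illustrates how a bank might consume such a check before approving a risky transfer. The endpoint and field names are hypothetical, in the spirit of the SIM-swap and call-divert checks described above rather than any operator’s documented API.

```python
# Illustrative sketch: ask the operator whether a customer's SIM was swapped
# recently before approving a high-risk transfer. Endpoint and fields are
# hypothetical; the response timestamp is assumed to be ISO 8601 with a UTC offset.
import requests
from datetime import datetime, timedelta, timezone

OPERATOR_API = "https://api.example-operator.com/fraud-signals/v1"  # hypothetical
TOKEN = "..."

def sim_swap_recent(msisdn: str, max_age_hours: int = 72) -> bool:
    """True if the SIM behind this number changed within max_age_hours."""
    r = requests.post(
        f"{OPERATOR_API}/sim-swap/check",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"phoneNumber": msisdn},
        timeout=5,
    )
    r.raise_for_status()
    swapped_at = datetime.fromisoformat(r.json()["latestSimChange"])
    return datetime.now(timezone.utc) - swapped_at < timedelta(hours=max_age_hours)

# A bank might decline the transfer, or step up authentication, if this returns True.
```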

As described below, this is one of the use cases supported by Mobile Connect, a specification developed by the GSMA, to enable mobile operators to take a consistent approach to providing third parties with identification, authentication and attribute-sharing services. The idea behind Mobile Connect is that a third party, such as a bank, can access these services regardless of which operator their customer subscribes to.

Adapting telco authentication for Amazon, Uber and Airbnb

Telcos could also provide Internet platforms, such as Amazon, Uber and Airbnb, with identification, authentication and attribute-sharing services that will help to shore up trust in their services. Building on their nascent anti-fraud offerings for the financial services industry, telcos could act as intermediaries, authenticating specific attributes of an individual without actually sharing personal data with the platform.

STL Partners has identified four broad data sets telcos could use to help combat fraud:

  1. Account activity – checking which individual owns which SIM card and that the SIM hasn’t been swapped recently;
  2. Movement patterns – tracking where people are and where they travel frequently to help identify if they are who they say they are;
  3. Contact patterns – establishing which individuals come into contact with each other regularly;
  4. Spending patterns – monitoring how much money an individual spends on telecoms services.

Table of contents

  • Executive Summary
  • Introduction
  • Using big data to combat fraud
    • Account activity
    • Movement patterns
    • Contact patterns
    • Spending patterns
    • Caveats and considerations
  • Limited progress so far
    • Patchy adoption of Mobile Connect
    • Mobile identification in the UK
    • Turkcell employs machine learning
  • Big Internet use cases
    • Amazon – grappling with fake product reviews
    • Facebook and eBay – also need to clamp down
    • Google Maps and Tripadvisor – targets for fake reviews
    • Uber – serious safety concerns
    • Airbnb – balancing the interests of hosts and guests
  • Conclusions
  • Index


Innovation Leaders: A Surprisingly Successful Telco API Programme

Introduction

The value of APIs

Application programming interfaces (APIs) are a central part of the mobile and cloud-based app economy. On the web, APIs serve to connect back-end and front-end applications (and their data) to one another. While often treated as a technical topic, APIs also have tremendous economic value. This was illustrated very recently when Oracle sued Google for copyright infringement over the use of Oracle-owned Java APIs during the development of Google’s Android operating system. Even though Google won the case, Oracle’s quest for around $9 billion showed the huge potential value associated with widely-adopted APIs.

The API challenge facing telcos…

For telcos, APIs represent an opportunity to monetise their unique network and IT assets by making them available to third parties. This is particularly important in the context of declining ‘core’ revenues caused by cloud and content providers bypassing telco services. This so-called “over the top” (OTT) threat forces telcos both to partner with third parties and to create their own competing offerings in order to dampen the decline in revenues and profits. With mobile app ecosystems maturing and, increasingly, extending beyond smartphones into wearables, cars, TVs, virtual reality, productivity devices and so forth, telcos need to embrace these developments to avoid being a ‘plain vanilla’ connectivity provider – a low-margin, low-growth business.

However, thriving in this co-opetitive environment is challenging for telcos because major digital players such as Google, Amazon, Netflix and Baidu, and a raft of smaller developers, have an operating model and culture of agility and fast innovation. Telcos need to become easier to collaborate with, and a systematic approach to API management and API exposure should be central to any telco partnership strategy and wider ‘transformation programme’.

…and Dialog’s best-practice approach

In this report, we will analyse how Dialog, Sri Lanka’s largest operator, has adopted a two-pronged API implementation strategy. Dialog has systematically exposed APIs:

  1. Externally in order to monetise in partnership with third-parties;
  2. Internally in order to foster agile service creation and reduce operational costs.

STL Partners believes that this two-pronged strategy has been instrumental in Dialog’s API success and that other operators should explore a similar strategy when seeking to launch or expand their API activities.

Dialog Axiata has steadily increased the number of API calls (indexed)

Source: Dialog Axiata

In this report, we will first cover the core lessons that can be drawn from Dialog’s approach and success and then we will outline in detail how Dialog’s Group CIO and Axiata Digital’s CTO, Anthony Rodrigo, and his team implemented APIs within the company and, subsequently, the wider Axiata Group.

 

  • Executive summary
  • Introduction
  • The value of APIs
  • The API challenge facing telcos…
  • …and Dialog’s best-practice approach
  • 5 key ‘telco API programme’ lessons
  • Background: What are APIs and why are they relevant to telcos?
  • API basics
  • API growth
  • The telecoms industry’s API track record is underwhelming
  • The Dialog API Programme (DAP)
  • Overview
  • Ideamart: A flexible approach to long-tail developer engagement
  • Axiata MIFE – building a multipurpose API platform
  • Drinking your own champagne: Dialog’s use of APIs internally
  • Expanding MIFE across Axiata opcos and beyond
  • Conclusion and outlook

 

  • Figure 1: APIs link backend infrastructure with applications
  • Figure 2: The explosive growth of open APIs
  • Figure 3: How a REST API works its magic
  • Figure 4: DAP service layers
  • Figure 5: Five APIs are available for Idea Pro apps
  • Figure 6: Idea Apps – pre-configured API templates
  • Figure 7: Ideadroid/Apptizer allows restaurants to specify food items they want to offer through the app
  • Figure 8: Ideamart’s developer engagement stats compare favourably to AT&T, Orange, and Vodafone
  • Figure 9: Steady increase in the number of API calls (indexed)
  • Figure 10: Dialog Allapps on Android
  • Figure 11: Ideabiz API platform for enterprise third-parties
  • Figure 12: Dialog Selfcare app user interface
  • Figure 13: Dialog Selfcare app functions – share in total number of hits
  • Figure 14: Apple App Store – Dialog Selfcare app ratings
  • Figure 15: Google Play Store – Dialog Selfcare app ratings
  • Figure 16: MIFE enables the creation of a variety of digital services – both internally and externally

How 5G is Disrupting Cloud and Network Strategy Today

5G – cutting through the hype

As with 3G and 4G, the approach of 5G has been heralded by vast quantities of debate and hyperbole. We contemplated reviewing some of the more outlandish statements we’ve seen and heard, but for the sake of brevity and progress we’ll concentrate in this report on the genuine progress that has also occurred.

A stronger definition: a collection of related technologies

Let’s start by defining terms. For us, 5G is a collection of related technologies that will eventually be incorporated in a 3GPP standard replacing the current LTE-A. NGMN, the forum that is meant to coordinate the mobile operators’ requirements vis-à-vis the vendors, recently issued a useful document setting out what technologies they wanted to see in the eventual solution or at least have considered in the standards process.

Incremental progress: ‘4.5G’

For a start, NGMN includes a variety of incremental improvements that promise substantially more capacity. These are things like higher modulation, developing the carrier-aggregation features in LTE-A to share spectrum between cells as well as within them, and improving interference coordination between cells. These are uncontroversial and are very likely to be deployed as incremental upgrades to existing LTE networks long before 5G is rolled out or even finished. This is what some vendors, notably Huawei, refer to as 4.5G.

Better antennas, beamforming, etc.

More excitingly, NGMN envisages some advanced radio features. These include beamforming, in which the shape of the radio beam between a base station and a mobile station is adjusted, taking advantage of the diversity of users in space to re-use the available radio spectrum more intensely, and both multi-user and massive MIMO (Multiple Input/Multiple Output). Massive MIMO simply means using many more antennas – at the moment the latest equipment uses 8 transmitter and 8 receiver antennas (8T*8R), whereas 5G might use 64. Multi-user MIMO uses the variety of antennas to serve more users concurrently, rather than just serving them faster individually. These promise quite dramatic capacity gains, at the cost of more computationally intensive software-defined radio systems and more complex antenna designs. Although they are cutting-edge, it’s worth pointing out that 802.11ac Wave 2 WiFi devices shipping now have these features, and it is likely that the WiFi ecosystem will hold a lead in these for some considerable length of time.

New spectrum

NGMN also sees evolution towards 5G in terms of spectrum. We can divide this into a conservative and a radical phase – in the first, conservative phase, 5G is expected to start using bands below 6GHz, while in the second, radical phase, the centimetre/millimetre-wave bands up to and above 30GHz are in discussion. These promise vastly more bandwidth, but as usual will demand a higher density of smaller cells and lower transmitter power levels. It’s worth pointing out that it’s still unclear whether 6GHz will make the agenda for this year’s WRC-15 conference, and 60GHz may or may not be taken up in 2019 at WRC-19, so spectrum policy is a critical path for the whole project of 5G.

Full duplex radio – doubling capacity in one stroke

Moving on, we come to some much more radical proposals and exotic technologies. 5G may use the emerging technology of full-duplex radio, which leverages advances in hardware signal processing to get rid of self-interference and make it possible for radio devices to send and receive at the same time on the same frequency, something hitherto thought impossible and a fundamental issue in radio. This area has seen a lot of progress recently and is moving from an academic research project towards industrial status. If it works, it promises to double the capacity provided by all the other technologies together.

A new, flatter network architecture?

A major redesign of the network architecture is being studied. This is highly controversial. A new architecture would likely be much “flatter” with fewer levels of abstraction (such as the encapsulation of Internet traffic in the GTP protocol) or centralised functions. This, however, would be a very radical break with the GSM-inspired practice that worked in 2G, 3G, and in an adapted form in 4G. However, the very demanding latency targets we will discuss in a moment will be very difficult to satisfy with a centralised architecture.

Content-centric networking

Finally, serious consideration is being given to what the NGMN calls information-based networking, better known to the wider community as either name-based networking, named-data networking, or content-centric networking, as TCP-Reno inventor Van Jacobson called it when he introduced the concept in a now-classic lecture. The idea here is that the Internet currently works by mapping content to domain names to machines. In content-centric networking, users request some item of content, uniquely identified by a name, and the network finds the nearest source for it, thus keeping traffic localised and facilitating scalable, distributed systems. This would represent a radical break with both GSM-inspired and most Internet practice, and is currently very much a research project. However, code does exist and has even been implemented on OpenFlow-based SDN platforms, and IETF standardisation is under way.

The mother of all stretch targets

5G is already a term associated with implausibly grand theoretical maxima, like every G before it. However, the NGMN has the advantage that it is a body that serves first of all the interests of the operators – the vendors’ customers – rather than the vendors themselves. Its expectations are therefore substantially more interesting than some of the vendors’ propaganda material. It has also recently started to reach out to other stakeholders, such as manufacturing companies involved in the Internet of Things.

Reading the NGMN document raises some interesting issues about the definition of 5G. Rather than set targets in an absolute sense, it puts forward parameters for a wide range of different use cases. A common criticism of the 5G project is that it is over-ambitious in trying to serve, for example, low bandwidth ultra-low power M2M monitoring networks and ultra-HD multicast video streaming with the same network. The range of use cases and performance requirements NGMN has defined is so diverse they might indeed be served by different radio interfaces within a 5G infrastructure, or even by fully independent radio networks. Whether 5G ends up as “one radio network to rule them all”, an interconnection standard for several radically different systems, or something in between (for example, a radio standard with options, or a common core network and specialised radios) is very much up for debate.

In terms of speed, NGMN is looking for 50Mbps user throughput “everywhere”, with half that speed available uplink. Success is defined here at the 95th percentile, so this means 50Mbps to 95% geographical coverage, 95% of the time. This should support handoff up to 120Km/h. In terms of density, this should support 100 users/square kilometre in rural areas and 400 in suburban areas, with 10 and 20 Gbps/square km capacity respectively. This seems to be intended as the baseline cellular service in the 5G context.

In the urban core, downlink of 300Mbps and uplink of 50Mbps is required, with 100Km/h handoff, and up to 2,500 concurrent users per square kilometre. Note that the density targets are per-operator, so that would be 10,000 concurrent users/sq km when four MNOs are present. Capacity of 750Gbps/sq km downlink and 125Gbps/sq km uplink is required.

An extreme high-density scenario is included as “broadband in a crowd”. This requires the same speeds as the “50Mbps anywhere” scenario, with vastly greater density (150,000 concurrent users/sq km or 30,000 “per stadium”) and commensurately higher capacity. However, the capacity planning assumes that this use case is uplink-heavy – 7.5Tbps/sq km uplink compared to 3.75Tbps downlink. That’s a lot of selfies, even in 4K! The fast handoff requirement, though, is relaxed to support only pedestrian speeds.

There is also a femtocell/WLAN-like scenario for indoor and enterprise networks, which pushes speed and capacity to their limits, with 1Gbps downlink and 500Mbps uplink, 75,000 concurrent users/sq km or 75 users per 1000 square metres of floor space, and no significant mobility. Finally, there is an “ultra-low cost broadband” requirement with 10Mbps symmetrical, 16 concurrent users and 16Mbps/sq km, and 50Km/h handoff. (There are also some niche cases, such as broadcast, in-car, and aeronautical applications, which we propose to gloss over for now.)

Clearly, the solution will have to either be very flexible, or else be a federation of very different networks with dramatically different radio properties. It would, for example, probably be possible to aggregate the 50Mbps everywhere and ultra-low cost solutions – arguably the low-cost option is just the 50Mbps option done on the cheap, with fewer sites and low-band spectrum. The “broadband in a crowd” option might be an alternative operating mode for the “urban core” option, turning off handoff, pulling in more aggregated spectrum, and reallocating downlink and uplink channels or timeslots. But this does begin to look like at least three networks.

Latency: the X factor

Another big stretch, and perhaps the most controversial issue here, is the latency requirement. NGMN draws a clear distinction between what it calls end-to-end latency, aka the familiar round-trip time measurement from the Internet, and user-plane latency, defined thus:

Measures the time it takes to transfer a small data packet from user terminal to the Layer 2 / Layer 3 interface of the 5G system destination node, plus the equivalent time needed to carry the response back.

That is to say, the user-plane latency is a measurement of how long it takes the 5G network, strictly speaking, to respond to user requests, and how long it takes for packets to traverse it. NGMN points out that the two metrics are equivalent if the target server is located within the 5G network. NGMN defines both using small packets, and therefore negligible serialisation delay, and assuming zero processing delay at the target server. The target is 10ms end-to-end, 1ms for special use cases requiring low latency, or 50ms end-to-end for the “ultra-low cost broadband” use case. The low-latency use cases tend to be things like communication between connected cars, which will probably fall under the direct device-to-device (D2D) element of 5G, but nevertheless some vendors seem to think it refers to infrastructure as well as D2D. Therefore, this requirement should be read as one for which the 5G user plane latency is the relevant metric.

This last target is arguably the biggest stretch of all, but also perhaps the most valuable.

The lower bound on any measurement of latency is very simple – it’s the time it takes to physically reach the target server at the speed of light. Latency is therefore intimately connected with distance. Latency is also intimately connected with speed – protocols like TCP use it to determine how many bytes it can risk “in flight” before getting an acknowledgement, and hence how much useful throughput can be derived from a given theoretical bandwidth. Also, with faster data rates, more of the total time it takes to deliver something is taken up by latency rather than transfer.
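A quick worked example makes the link between round-trip time and achievable throughput concrete, using the standard bandwidth-delay relationship: a sender with a fixed window can have at most that window unacknowledged at any moment, so throughput is capped at window size divided by RTT.

```python
# Throughput ceiling imposed by latency for a classic 64 KB TCP window
# (i.e. no window scaling); throughput <= window / round-trip time.
window_bytes = 65_535
for rtt_ms in (10, 50, 100):
    max_mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:>3} ms -> at most ~{max_mbps:.1f} Mbit/s")
# RTT  10 ms -> at most ~52.4 Mbit/s
# RTT  50 ms -> at most ~10.5 Mbit/s
# RTT 100 ms -> at most ~5.2 Mbit/s
```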

And the way we build applications now tends to make latency, and especially the variance in latency known as jitter, more important. In order to handle the scale demanded by the global Internet, it is usually necessary to scale out by breaking up the load across many, many servers. In order to make this work, it is usually also necessary to disaggregate the application itself into numerous, specialised, and independent microservices. (We strongly recommend Mary Poppendieck’s presentation at the link.)

The result of this is that a popular app or Web page might involve calls to dozens to hundreds of different services. Google.com includes 31 HTTP requests these days and Amazon.com 190. If the variation in latency is not carefully controlled, it becomes statistically more likely than not that a typical user will encounter at least one server’s 99th percentile performance. (eBay tries to identify users getting slow service and serves them a deliberately cut-down version of the site – see slide 17 here.)
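The arithmetic behind that claim is straightforward, assuming the sub-requests behave independently:

```python
# Probability that at least one of n backend calls hits its own 99th-percentile
# latency, assuming independence between calls.
for n in (31, 100, 190):  # 31 ~ Google.com, 190 ~ Amazon.com, per the figures above
    p_slow = 1 - 0.99 ** n
    print(f"{n:>3} requests -> {p_slow:.0%} chance of at least one slow call")
# 31 requests -> 27%, 100 requests -> 63%, 190 requests -> 85%
```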

We discuss this in depth in a Telco 2.0 Blog entry here.

Latency: the challenge of distance

It’s worth pointing out here that the 5G targets can literally be translated into kilometres. The rule of thumb for speed-of-light delay is 4.9 microseconds for each kilometre of fibre with a refractive index of 1.47. 1ms – 1000 microseconds – equals about 204km in a straight line, assuming no routing delay. A response back is needed too, so divide that distance in half. As a result, in order to be compliant with the NGMN 5G requirements, all the network functions required to process a data call must be physically located within 100km, i.e. 1ms, of the user. And if the end-to-end requirement is taken seriously, the applications or content that users want must also be hosted within 1000km, i.e. 10ms, of the user. (In practice, there will be some delay contributed by serialisation, routing, and processing at the target server, so this would actually be somewhat more demanding.)
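The same rule of thumb can be expressed as a short calculation that reproduces the 100km and 1,000km contours quoted above:

```python
# Turning a round-trip latency budget into a distance budget over fibre:
# ~4.9 microseconds per km (refractive index ~1.47), halved for the return leg.
US_PER_KM_FIBRE = 4.9

def max_one_way_km(budget_ms: float) -> float:
    """Furthest a server can sit (in fibre km) within a round-trip latency budget."""
    return budget_ms * 1000 / US_PER_KM_FIBRE / 2

for budget_ms in (1, 5, 10):
    print(f"{budget_ms} ms round trip -> within ~{max_one_way_km(budget_ms):.0f} km")
# 1 ms -> ~102 km, 5 ms -> ~510 km, 10 ms -> ~1020 km (ignoring routing and processing delay)
```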

To achieve this, the architecture of 5G networks will need to change quite dramatically. Centralisation suddenly looks like the enemy, and middleboxes providing video optimisation, deep packet inspection, policy enforcement, and the like will have no place. At the same time, protocol designers will have to think seriously about localising traffic – this is where the content-centric networking concept comes in. Given the number of interested parties in the subject overall, it is likely that there will be a significant period of ‘horse-trading’ over the detail.

It will also need nothing more or less than a CDN and data-centre revolution. Content, apps, or commerce hosted within this 1000km contour will have a very substantial competitive advantage over those sites that don’t move their hosting strategy to take advantage of lower latency. Telecoms operators, by the same token, will have to radically decentralise their networks to get their systems within the 100km contour. Those content, apps, or commerce sites that move closer in still, to the 5ms/500km contour or further, will benefit further. The idea of centralising everything into shared services and global cloud platforms suddenly looks dated. So might the enormous hyperscale data centres one day look like the IT equivalent of sprawling, gas-guzzling suburbia? And will mobile operators become a key actor in the data-centre economy?

  • Executive Summary
  • Introduction
  • 5G – cutting through the hype
  • A stronger definition: a collection of related technologies
  • The mother of all stretch targets
  • Latency: the X factor
  • Latency: the challenge of distance
  • The economic value of snappier networks
  • Only Half The Application Latency Comes from the Network
  • Disrupt the cloud
  • The cloud is the data centre
  • Have the biggest data centres stopped getting bigger?
  • Mobile Edge Computing: moving the servers to the people
  • Conclusions and recommendations
  • Regulatory and political impact: the Opportunity and the Threat
  • Telco-Cloud or Multi-Cloud?
  • 5G vs C-RAN
  • Shaping the 5G backhaul network
  • Gigabit WiFi: the bear may blow first
  • Distributed systems: it’s everyone’s future

 

  • Figure 1: Latency = money in search
  • Figure 2: Latency = money in retailing
  • Figure 3: Latency = money in financial services
  • Figure 4: Networking accounts for 40-60 per cent of Facebook’s load times
  • Figure 5: A data centre module
  • Figure 6: Hyperscale data centre evolution, 1999-2015
  • Figure 7: Hyperscale data centre evolution 2. Power density
  • Figure 8: Only Facebook is pushing on with ever bigger data centres
  • Figure 9: Equinix – satisfied with 40k sq ft
  • Figure 10: ETSI architecture for Mobile Edge Computing

 

Facing Up to the Software-Defined Operator

Introduction

At this year’s Mobile World Congress, the GSMA’s eccentric decision to split the event between the Fira Gran Via (the “new Fira”, as everyone refers to it) and the Fira Montjuic (the “old Fira”, as everyone refers to it) was a better one than it looked. If you took the special MWC shuttle bus from the main event over to the developer track at the old Fira, you crossed a culture gap that is widening, not closing. The very fact that the developers were accommodated separately hints at this, but it was the content of the sessions that brought it home. At the main site, it was impressive and forward-thinking to say you had an app, and a big deal to launch a new Web site; at the developer track, presenters would start up a Web service during their own talk to demonstrate their point.

There has always been a cultural rift between the “netheads” and the “bellheads”, of which this is just the latest manifestation. But the content of the main event tended to suggest that this is an increasingly serious problem. Everywhere, we saw evidence that core telecoms infrastructure is becoming software. Major operators are moving towards this now. For example, AT&T used the event to announce that it had signed up Software Defined Networks (SDN) specialists Tail-F and Metaswitch Networks for its next round of upgrades, while Deutsche Telekom’s Terastream architecture is built on it.

This is not just about the overused three letter acronyms like “SDN and NFV” (Network Function Virtualisation – see our whitepaper on the subject here), nor about the duelling standards groups like OpenFlow, OpenDaylight etc., with their tendency to use the word “open” all the more the less open they actually are. It is a deeper transformation that will affect the device, the core network, the radio access network (RAN), the Operations Support Systems (OSS), the data centres, and the ownership structure of the industry. It will change the products we sell, the processes by which we deliver them, and the skills we require.

In the future, operators will be divided into providers of the platform for software-defined network services and consumers of the platform. Platform consumers, which will include MVNOs, operators, enterprises, SMBs, and perhaps even individual power users, will expect a degree of fine-grained control over network resources that amounts to specifying your own mobile network. Rather than trying to make a unitary public network provide all the potential options as network services, we should look at how we can provide the impression of one network per customer, just as virtualisation gives the impression of one computer per user.

To summarise, it is no longer enough to boast that your network can give the customer an API. Future operators should be able to provision a virtual network through the API. AT&T, for example, aims to provide a “user-defined network cloud”.
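To give a flavour of what provisioning a virtual network through an API could mean in practice, the sketch below shows the kind of request a platform consumer (an MVNO, enterprise or power user) might submit. Every endpoint and field is invented for illustration; this is not AT&T’s interface or any standardised one.

```python
# Purely illustrative sketch of a "network as a platform" provisioning call.
# Endpoint and fields are invented; they do not correspond to any real API.
import requests

PLATFORM_API = "https://naas.example-operator.com/v1"  # hypothetical
TOKEN = "..."

virtual_network_spec = {
    "name": "factory-floor-net",
    "coverage": ["site-munich-01"],           # where the virtual network should exist
    "maxDownlinkMbps": 300,
    "maxUplinkMbps": 50,
    "latencyClass": "low",                    # maps to scheduling and QoS policy
    "isolation": "dedicated-core-functions",  # degree of separation from public traffic
}

r = requests.post(
    f"{PLATFORM_API}/virtual-networks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=virtual_network_spec,
    timeout=10,
)
r.raise_for_status()
print("Provisioned virtual network:", r.json().get("id"))
```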

Elements of the Software-Defined Future

We see five major trends leading towards the overall picture of the ‘software defined operator’ – an operator whose boundaries and structure can be set and controlled through software.

1: Core network functions get deployed further and further forwards

Because core network functions like the Mobile Switching Centre (MSC) and Home Subscriber Server (HSS) can now be implemented in software on commodity hardware, they no longer have to be tied to major vendors’ equipment deployed in centralised facilities. This frees them to migrate towards the edge of the network, providing for more efficient use of transmission links, lower latency, and putting more features under the control of the customer.

Network architecture diagrams often show a boundary between “the Internet” and an “other network”. This is the ‘Gi’ interface in 3G networks (and ‘SGi’ in 4G). Today, the “other network” is usually itself an IP-based network, making this distinction simply that between a carrier’s private network and the Internet core. Moving network functions forwards towards the edge also moves this boundary forwards, making it possible for Internet services like content-delivery networking or applications acceleration to advance closer to the user.

Increasingly, the network edge is a node supporting multiple software applications, some of which will be operated by the carrier, some by third-party services like – say – Akamai, and some by the carrier’s customers.

2: Access network functions get deployed further and further back

A parallel development to the emergence of integrated small cells/servers is the virtualisation and centralisation of functions traditionally found at the edge of the network. One example is so-called Cloud RAN or C-RAN technology in the mobile context, where the radio basebands are implemented as software and deployed as virtual machines running on a server somewhere convenient. This requires high capacity, low latency connectivity from this site to the antennas – typically fibre – and this is now being termed “fronthaul” by analogy to backhaul.

Another example is the virtualised Optical Line Terminal (OLT) some vendors offer in the context of fixed Fibre to the home (FTTH) deployments. In these, the network element that terminates the line from the user’s premises has been converted into software and centralised as a group of virtual machines. Still another would be the increasingly common “virtual Set Top Box (STB)” in cable networks, where the TV functions (electronic programming guide, stop/rewind/restart, time-shifting) associated with the STB are actually provided remotely by the network.

In this case, the degree of virtualisation, centralisation, and multiplexing can be very high, as latency and synchronisation are less of a problem. The functions could actually move all the way out of the operator network, off to a public cloud like Amazon EC2 – this is in fact how Netflix does it.

3: Some business support and applications functions are moving right out of the network entirely

If Netflix can deliver the world’s premier TV/video STB experience out of Amazon EC2, there is surely a strong case to look again at which applications should be delivered on-premises, in the private cloud, or moved into a public cloud. As explained later in this note, the distinctions between on-premises, forward-deployed, private cloud, and public cloud are themselves being eroded. At the strategic level, we anticipate pressure for more outsourcing and more hosted services.

4: Routers and switches are software, too

In the core of the network, the routers that link all this stuff together are also turning into software. This is the domain of true SDN – basically, the effort to substitute relatively smart routers with much cheaper switches whose forwarding rules are generated in software by a much smarter controller node. This is well reported elsewhere, but it is necessary to take note of it. In the mobile context, we also see this in the increasing prevalence of virtualised solutions for the LTE Enhanced Packet Core (EPC), Mobility Management Entity (MME), etc.

5: Wherever it is, software increasingly looks like the cloud

Virtualisation – running software in virtual machines that are decoupled from the underlying hardware – is a key trend. Even when, as with the network devices, software is running on a dedicated machine, it will be increasingly found running in its own virtual machine. This helps with management and security, and most of all, with resource sharing and scalability. For example, the virtual baseband might have VMs for each of 2G, 3G, and 4G. If the capacity requirements are small, many different sites might share a physical machine. If large, one site might be running on several machines.

This has important implications, because it also makes sharing among users easier. Those users could be different functions, or different cell sites, but they could also be customers or other operators. It is no accident that NEC’s first virtualised product, announced at MWC, is a complete MVNO solution. It has never been as easy to provide more of your carrier needs yourself, and it will only get easier.

The following Huawei slide (from their Carrier Business Group CTO, Sanqi Li) gives a good visual overview of a software-defined network.

Figure 1: An architecture overview for a software-defined operator

Source: Huawei

 

  • The Challenges of the Software-Defined Operator
  • Three Vendors and the Software-Defined Operator
  • Ericsson
  • Huawei
  • Cisco Systems
  • The Changing Role of the Vendors
  • Who Benefits?
  • Who Loses?
  • Conclusions
  • Platform provider or platform consumer
  • Define your network sharing strategy
  • Challenge the coding cultural cringe

 

  • Figure 1: An architecture overview for a software-defined operator
  • Figure 2: A catalogue for everything
  • Figure 3: Ericsson shares (part of) the vision
  • Figure 4: Huawei: “DevOps for carriers”
  • Figure 5: Cisco aims to dominate the software-defined “Internet of Everything”

Software Defined People: How it Shapes Strategy (and us)

Introduction: software’s defining influence

Our knowledge, employment opportunities, work itself, healthcare, potential partners, purchases from properties to groceries, and much else can now be delivered or managed via software and mobile apps.

So are we all becoming increasingly ‘Software Defined’? It’s a question stimulated in part by producing research on ‘Software Defined Networks (SDN): A Potential Game Changer’ and Enterprise Mobility, by this video from McKinsey and Eric Schmidt, Google’s Executive Chairman, and by a number of observations throughout the past year, particularly at this and last year’s Mobile World Congress (MWC).

But is software really the key?

The rapid adoption of smartphones and tablets, enabled by ever faster networks, is perhaps the most visible and tangible phenomenon in the market. Less visible but equally significant is the huge growth in ‘big data’ – the use of massive computing power to process types and volumes of data that were previously inaccessible – as well as in ‘small data’, the increasing use of more personalised datasets.

However, what is now fuelling these trends is that many core life and business tools are software of one form or another – programmes and ‘apps’ that create economic value, utility, fun or efficiency. Software is now the driving force; the evolving data and hardware are, respectively, by-products and enablers of the applications.

Software: your virtual extra hand

In effect, mobile software is the latest great tool in humanity’s evolutionary path. With nearly a quarter of the world’s population using a smartphone, the human race has never had so much computing power by its side in every moment of everyday life. Many feature phones also possess significant processing power, and the extraordinary reach of mobile can now deliver highly innovative solutions like mobile money transfer even in markets with relatively underdeveloped financial service infrastructure.

How we are educated, employed and cared for are all starting to change with the growing power of mobile technologies, and will all change further and with increasing pace in the next phase of the mobile revolution. Knowing how to get the best from this world is now a key life skill.

The way that software is used is changing and will change further. While mobile apps have become a mainstream consumer phenomenon in many markets in the last few years, the application of mobile, personalised technologies is also changing education, health, employment, and the very fabric of our social lives. For example:

  • Back at MWC 2013 we saw the following fascinating video from Ericsson, part of its ‘Networked Society’ vision, of why education has evolved as it has (to mass-produce workers for factories) and what the possibilities are with advanced technology – well worth a few minutes of your time whether you have kids or not.
  • We also saw this education demo video from a Singapore school from Qualcomm, based on the creative use of phones in all aspects of schooling in the WE Learn project.
  • There are now a growing number of eHealth applications (heart rate, blood pressure, stroke and outpatient care), and productivity apps and outreach of CRM applications like Salesforce into the mobile employment context are having an increasingly massive impact.
  • While originally a ‘fixed’ phenomenon, the way we meet and find partners has seen massive change in recent years. For example, in the US, 17% of recent marriages and 20% of ‘committed relationships’ started in the $1Bn online dating world – another world which is now increasingly going mobile.

The growing sophistication in human-software interactivity

Horace Dediu pointed out at a previous Brainstorm that the disruptive jumps in mobile handset technology have come from changes in the user interface – most recently in the touch-screen revolution accompanying smartphones and tablets.

And the way in which we interact with the software will continue to evolve, from the touch screens of smartphones, through voice activation, gesture recognition, retina tracking, on-body devices like watches, in-body sensors in the blood and digestive system, and even potentially by monitoring brainwaves, as illustrated in the demonstration from Samsung labs shown in Figure 1.

Figure 1: Software that reads your mind?

Source: Samsung Labs

Clearly, some of these techniques are still at an early stage of development. It is a hard call as to which will be the one to trigger the next major wave of innovation (e.g. see Facebook’s acquisition of Oculus Rift), as there are so many factors that influence the likely take-up of new technologies, from price through user experience to social acceptance.

Exploring and enhancing the senses

Interactive goggles / glasses such as Google Glass have now been around for over a year, and AR applications that overlay information from the virtual world onto images of the real world continue to evolve.

Search is also becoming a visual science – innovations such as Cortexica recognise everyday objects (cereal packets, cars, signs, advertisements, stills from a film, etc.) and return information on how and where you can buy the related items. While it works from a smartphone today, it makes it possible to imagine a world where you open the kitchen cupboard and tell your glasses what items you want to re-order.

Screens will be in increasing abundance, able to interact with passers-by on the street or with you in your home or car. What will be on these screens could be anything that is on any of your existing screens or more – communication, information, entertainment, advertising – whatever the world can imagine.

Segmented by OS?

But is it really possible to define a person by the software they use? There is certainly an ‘a priori’ segmentation originating from device makers’ own targeting and positioning:

  • Apple’s brand and design ethos have held consistently strong appeal for upmarket, creative users. In contrast, Blackberry for a long time held a strong appeal in the enterprise segment, albeit significantly weakened in the last few years.
  • It is perhaps slightly harder to label Android users, now the largest group of smartphone users. However, the openness of the software leads to freedom, bringing with it a plurality of applications and widgets, some security issues, and perhaps a greater emphasis on ‘work it out for yourself’.
  • Microsoft, once ubiquitous through its domination of the PC universe, now finds itself a challenger in the world of mobiles and tablets; despite gradually improving sales and reported gains in OS experience and design, it has yet to find a clear identity, other than perhaps now being the domain of those willing to try something different. While Microsoft still has a strong hand in the software world through its evolving Office applications, these are not yet hugely mobile-friendly, and this is creating a niche for new players, such as Evernote and others, that take a more focused ‘mobile first’ approach.

Other segments

From a research perspective, there are many other approaches to thinking about what defines different types of user. For example:

  • In adoption, diffusion models such as the Bass Diffusion Model segment users into e.g. Innovators, Early Adopters, Mass Market, Laggards;
  • Segments based on attitudes to usage, e.g. Lovers, Haters, Functional Users, Social Users, Cost Conscious, etc.;
  • Approaches to privacy and the use of personal data, e.g. Pragmatic, Passive, Paranoid.

It is tempting to hypothesise that there could be meta-segments combining these and other behavioural distinctions (e.g. you might theorise that there would be more ‘haters’ among the ‘laggards’ and the ‘paranoids’ than the ‘innovators’ and ‘pragmatics’), and there may indeed be underlying psychological drivers such as extraversion that drive people to use certain applications (e.g. personal communications) more.

However, other than anecdotal observations, we don’t currently have the data to explore or prove this. This knowledge may of course exist within the research and insight departments of major players and we’d welcome any insight that our partners and readers can contribute (please email contact@telco2.net if so).

Hypothesis: a ‘software fingerprint’?

The collection of apps and software each person uses, and how they use them, could be seen as a software fingerprint – a unique combination of tools showing interests, activities and preferences.

Human beings are complex creatures, and it may be a stretch to say a person could truly be defined by the software they use. However, there is a degree of cause and effect with software. Once you have the ability to use it, it changes what you can achieve. So while the software you use may not totally define you, it will play an increasing role in shaping you, and may ultimately form a distinctive part of your identity.

For example, Minecraft is a phenomenally successful and addictive game. If you haven’t seen it, imagine interactive digital Lego (or watch the intro video here). Children and adults all over the world play on it, make YouTube films about their creations, and share knowledge and stories from it as with any game.

To be really good at it, and to add enhanced features, players install ‘mods’ – essentially software upgrades, requiring the use of quite sophisticated codes and procedures, and the understanding of numerous file types and locations. So through this one game, ten-year-old kids are developing creative, social and IT skills, as well as exploring and creating new identities for themselves.

Figure 2: Minecraft – building, killing ‘creepers’ and coding by a kid near you


Source: Planetminecraft.com

But who is in charge – you or the software?

There are also two broad schools of thought in advanced IT design. One is that IT should augment human abilities and its application should always be controlled by its users. The other is the idea that IT can assist people by providing recommendations and suggestions that are outside the control of the user. An example of this second approach is Google showing you targeted ads based on your search history.

Being properly aware of this will become increasingly important to individuals’ freedom from unrecognised manipulation. Just as it pays to know that embarrassing photos on Facebook will be seen by prospective employers, knowing who is pulling your data strings will be increasingly important to controlling one’s own destiny in the future.

Back to the law of the Jungle?

Many of the opportunities and abilities conferred by software seem perhaps trivial or entertaining. But some will ultimately confer advantages on their users over those who do not possess the extra information, gain those extra moments, or learn that extra winning idea. The questions are: which will you use well; and which will you enable others to use? The answer to the first may reflect your personal success, and the second that of your business.

So while it used to be that your genetics, parents, and education most strongly steered your path, now how you take advantage of the increasingly mobile cyber-world will be a key additional competitive asset. It’s increasingly what you use and how you use it (as well as who you know, of course) that will count.

And for businesses, competing in an ever more resource-constrained world, the effective use of software to track and manage activities and assets, and to give insight into underlying trends and ways to improve performance, is an increasingly critical competence. Importantly for telcos and other ICT providers, it’s one that is enabled and enhanced by cloud, big data, and mobile.

The Software as a Service (SaaS) application Salesforce is an excellent case in point. It brings instantaneous data on customers and business operations to managers’ and employees’ fingertips on any device. This can confer huge advantages over businesses without such capabilities.

Figure 3: Salesforce delivers big data and cloud to mobile


Source: Powerbrokersoftware.com

 

  • Executive Summary: the key role of mobile
  • Why aren’t telcos more involved?
  • Revenue Declines + Skills Shortage = Digital Hunger Gap
  • What should businesses do about it?
  • All Businesses
  • Technology Businesses and Enablers
  • Telcos
  • Next steps for STL Partners and Telco 2.0

 

  • Figure 1: Software that reads your mind?
  • Figure 2: Minecraft – building, killing ‘creepers’ and coding by a kid near you
  • Figure 3: Salesforce delivers big data and cloud to mobile
  • Figure 4: The Digital Hunger Gap for Telcos
  • Figure 5: Telcos need Software Skills to deliver a ‘Telco 2.0 Service Provider’ Strategy
  • Figure 6: The GSMA’s Vision 2020

Communications Services: What now makes a winning value proposition?

Introduction

This is an extract of two sections of the latest Telco 2.0 Strategy Report The Future Value of Voice and Messaging for members of the premium Telco 2.0 Executive Briefing Service.

The full report:

  • Shows how telcos can slow the decline of voice and messaging revenues and build new communications services to maximise revenues and relevance with both consumer and enterprise customers.
  • Includes detailed forecasts for 9 markets, in which the total decline is forecast between -25% and -46% on a $375bn base between 2012 and 2018, giving telcos an $80bn opportunity to fight for.
  • Shows impacts and implications for other technology players including vendors and partners, and general lessons for competing with disruptive players in all markets.
  • Looks at the impact of so-called OTT competition, market trends and drivers, bundling strategies, operators developing their own Telco-OTT apps, advanced Enterprise Communications services, and the opportunities to exploit new standards such as RCS, WebRTC and VoLTE.

The Transition in User Behaviour

A global change in user behaviour

In November 2012 we published European Mobile: The Future’s not Bright, it’s Brutal. Very soon after its publication, we issued an update in the light of results from Vodafone and Telefonica that suggested its predictions were being borne out much faster than we had expected.

Essentially, the macro-economic challenges faced by operators in southern Europe are catalysing the processes of change we identify in the industry more broadly.

This should not be seen as a “Club Med problem”. Vodafone reported a 2.7% drop in service revenue in the Netherlands, driven by customers reducing their out-of-bundle spending. This sensitivity and awareness of how close users are getting to their monthly bundle allowances is probably a good predictor of willingness to adopt new voice and messaging applications, i.e. if a user is regularly using more minutes or texts than are included in their service bundle, they will start to look for free or lower cost alternatives. KPN Mobile has already experienced a “WhatsApp shock” to its messaging revenues. Even in Vodafone Germany, voice revenues were down 6.1% and messaging 3.7%. Although enterprise and wholesale business were strong, prepaid lost enough revenue to leave the company only barely ahead. This suggests that the sizable low-wage segment of the German labour market is under macro-economic stress, and a shock is coming.

The problem is global. For example, at the 2013 Mobile World Congress the CEO of KT Corp described voice revenues as “collapsing” and stated that, as a result, revenues from their fixed operation had halved in two years. His counterpart at Turk Telekom asserted that “voice is dead”.

The combination of technological and macro-economic challenge results in disruptive, rather than linear change. For example, Spanish subscribers who adopt WhatsApp to substitute expensive operator messaging (and indeed voice) with relatively cheap data because they are struggling financially have no particular reason to return when the recovery eventually arrives.

Price is not the only issue

It is also worth noting that price is not the whole problem. Back at MWC 2013, the CEO of Viber, an OTT voice and messaging provider, claimed that the app has the highest penetration in Monaco, where over 94% of the population use Viber every day. Monaco is hardly short of money, and it is also a market where the incumbent operator bundles unlimited SMS – though we suspect these statistics slightly stretch the definition of ‘population’, as many French subscribers use Monaco SIM cards. However, once adoption takes off it will be driven by social factors (the dynamics of innovation diffusion) and by competition on features.

Differential psychological and social advantages of communications media

The interaction styles and use cases of new voice and messaging apps that users have adopted are frequently quite different from the ones telecoms operators have imagined. Between them, telcos have done little more than add mobility to telephony during the last 100 years. However, because of the Internet and the growth of the smartphone, users now have many more ways to communicate and interact than just calling one another.

SMS (telcos’ only other mass ‘hit’ product after voice) and MMS are “fire-and-forget” – messages are independent of each other, and transported on a store-and-forward basis. Most IM applications are either conversation-based, with messages organised in threads, or stream-based, with users releasing messages on a broadcast or publish-subscribe basis. They often also have a notion of groups, communities, or topics. In getting used to these and internalising their shortcuts, netiquette, and style, customers are becoming socialised into these applications, which makes the return of telcos as the messaging platform leaders with Rich Communication Services (RCS) less and less likely. Figure 1 illustrates graphically some important psychological and social benefits of four different forms of communication.

Figure 1:  Psychological and social advantages of voice, SMS, IM, and Social Media


Source: STL Partners

The different benefits can clearly be seen. Taking voice as an example, we can see that a voice call could be a private conversation, a conference call, or even part of a webinar. Typically, voice calls are 1 to 1, single instance, and with little presence information conveyed (engaged tone or voicemail to others). By their very nature, voice calls are real time and have a high time commitment along with the need to pay attention to the entire conversation. Whilst not as strong as video or face to face communication, a voice call can communicate high emotion and of course is audio.

SMS has very different advantages. The majority of SMS sent are private, 1 to 1 conversations, and are not thread based. They are not real time, have no presence information, and require low time commitment; because of this they typically demand minimal attention, and while it is possible to use a wide array of emoticons or smileys, these are not the same as voice or pictures. Even though some applications are starting to blur the line with voice memos, today SMS messaging is a visual experience.

Instant messaging, whether enterprise or consumer, offers a richer experience than SMS. It can include presence, it is often thread based, and can include pictures, audio, videos, and real time picture or video sharing. Social takes the communications experience a step further than IM, and many of the applications such as Facebook Messenger, LINE, KakaoTalk, and WhatsApp are exploiting the capabilities of these communications mechanisms to disrupt existing or traditional channels.
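For readers who want the structural distinctions above made concrete, the sketch below (plain Python, with illustrative field and topic names only – not any real messaging protocol) contrasts a standalone “fire-and-forget” message with thread-based messages and a minimal publish-subscribe stream of the kind IM and social applications use.

```python
# Sketch of the structural differences described above: an SMS stands alone,
# an IM message hangs off a conversation thread, and a stream-style message is
# published once to a topic and delivered to every subscriber.
# All class, field and topic names are illustrative only.

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class SMS:                      # independent of every other message
    sender: str
    recipient: str
    body: str

@dataclass
class IMMessage:                # organised into a conversation thread
    sender: str
    thread_id: str              # ties related messages together
    body: str

class Stream:
    """Minimal publish-subscribe model: publish once, deliver to all subscribers."""
    def __init__(self) -> None:
        self.subscribers = defaultdict(list)    # topic -> list of inboxes

    def subscribe(self, topic: str, inbox: list) -> None:
        self.subscribers[topic].append(inbox)

    def publish(self, topic: str, message: IMMessage) -> None:
        for inbox in self.subscribers[topic]:
            inbox.append(message)

if __name__ == "__main__":
    note = SMS("alice", "bob", "Running late")          # fire-and-forget
    alice_inbox, bob_inbox = [], []
    stream = Stream()
    stream.subscribe("weekend-plans", alice_inbox)
    stream.subscribe("weekend-plans", bob_inbox)
    stream.publish("weekend-plans",
                   IMMessage("carol", "weekend-plans", "BBQ on Saturday?"))
    print(len(alice_inbox), len(bob_inbox))             # 1 1 – one publish, two deliveries
```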

Voice calls, whether telephony or ‘OTT’, continue to possess their original benefits. But now, people are learning to use other forms of communication that better fit the psychological and social advantages that they seek in different contexts. We consider these changes to be permanent and ongoing shifts in customer behaviour towards more effective applications, and there will doubtless be more – which is both a threat and an opportunity for telcos and others.

The applicable model of how these shifts transpire is probably a Bass diffusion process, where innovators enter a market early and are followed by imitators as the mass majority; subsequently, the innovators migrate to a new technology or service, and the cycle continues.
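For those who want to see the mechanics, a minimal numerical sketch of a Bass-style diffusion is shown below; the coefficients p (innovation) and q (imitation) are illustrative values we have chosen for the example, not figures fitted to any particular market.

```python
# Minimal numerical sketch of a Bass diffusion process: "innovators" adopt at
# rate p regardless of others, "imitators" adopt at rate q in proportion to
# how many have already adopted. Coefficients are illustrative, not fitted.

def bass_adoption(p=0.03, q=0.4, periods=20):
    """Return the cumulative adopted fraction F for each period."""
    F, history = 0.0, []
    for _ in range(periods):
        new_adopters = (p + q * F) * (1.0 - F)   # Bass hazard x remaining market
        F += new_adopters
        history.append(F)
    return history

if __name__ == "__main__":
    for t, share in enumerate(bass_adoption(), start=1):
        print(f"period {t:2d}: {share:5.1%} adopted")
```

Plotting the output gives the familiar S-curve: slow early take-up driven by p, rapid growth once imitation (q) dominates, then saturation as the remaining market shrinks.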

One of the best predictors of churn is knowing a churner, and it is to be expected that users of WhatsApp, Vine, etc. will take their friends with them. Economic pain will both accelerate the diffusion process and also spread it deeper into the population, as we have seen in South Korea with KakaoTalk.

High-margin segments are more at risk

Generally, all these effects are concentrated and emphasised in the segments that are traditionally unusually profitable, as this is where users stand to gain most from the price arbitrage. A finding from European Mobile: The Future’s not Bright, it’s Brutal and borne out by the research carried out for this report is that prices in Southern Europe were historically high, offering better margins to operators than elsewhere in Europe. Similarly, international and roaming calls are preferentially affected – although international minutes of use continue to grow near their historic average rates, all of this and more accrues to Skype, Google, and others. Roaming, despite regulatory efforts, remains expensive and a target for disruptors. It is telling that Truphone, a subject of our 2008 voice report, has transitioned from being a company that competed with generic mobile voice to being one that targets roaming.

 

  • Consumers: enjoying the fragmentation
  • Enterprises: in search of integration
  • What now makes a winning value proposition?
  • The fall of telephony
  • Talk may be cheap, but time is not
  • The increasing importance of “presence”
  • The competition from Online Service Providers
  • Operators’ responses
  • Free telco & other low-cost voice providers
  • Meeting Enterprise customer needs
  • Re-imagining customer service
  • Telco attempts to meet changing needs
  • Voice Developers – new opportunities
  • Into the Hunger Gap
  • Summary: the changing telephony business model
  • Conclusions
  • STL Partners and the Telco 2.0™ Initiative

 

  • Figure 1:  Psychological and social advantages of voice, SMS, IM, and Social Media
  • Figure 2: Ideal Enterprise mobile call routing scenario
  • Figure 3: Mobile Clients used to bypass high mobile call charges
  • Figure 4: Call Screening Options
  • Figure 5: Mobile device user context and data source
  • Figure 6: Typical business user modalities
  • Figure 7:  OSPs are pursuing platform strategies
  • Figure 8: Subscriber growth of KakaoTalk
  • Figure 9: Average monthly minutes of use by market
  • Figure 10: Key features of Voice and Messaging platforms
  • Figure 11: Average user screen time Facebook vs. WhatsApp  (per month)
  • Figure 12: Disruptive price competition also comes from operators
  • Figure 13: The hunger gap in music

Digital Commerce 2.0: New $50bn Disruptive Opportunities for Telcos, Banks and Technology Players

Introduction – Digital Commerce 2.0

Digital commerce is centred on the better use of the vast amounts of data created and captured in the digital world. Businesses want to use this data to make better strategic and operational decisions, and to trade more efficiently and effectively, while consumers want more convenience, better service, greater value and personalised offerings. To address these needs, Internet and technology players, payment networks, banks and telcos are vying to become digital commerce intermediaries and win a share of the tens of billions of dollars that merchants and brands spend finding and serving customers.

Mobile commerce is frequently considered in isolation from other aspects of digital commerce, yet it should be seen as a springboard to a wider digital commerce proposition based on an enduring and trusted relationship with consumers. Moreover, there are major potential benefits to giving individuals direct control over the vast amount of personal data their smartphones are generating.

We have been developing strategies in these fields for a number of years, including our engagement with the World Economic Forum’s (WEF) Rethinking Personal Data project, and ongoing research into user data and privacy, digital money and payments, and digital advertising and marketing.

This report brings all of these themes together and is the first comprehensive strategic playbook on how smartphones and authenticated personal data can be combined to deliver a compelling digital commerce proposition for both merchants and consumers. It will save customers valuable time, effort and money by providing a fast-track to developing and / or benchmarking a leading edge strategy and approach in the fast-evolving new world of digital commerce.

Benefits of the Report to Telcos, Other Players, Investors and Merchants


For telcos, this strategy report:

  • Shows how to evaluate and implement a comprehensive and successful digital commerce strategy worth up to c.$50bn (5% of core revenues in 5 years)
  • Saves time and money by providing a fast-track for decision making and an outline business case
  • Rapidly challenges / validates existing strategy and services against relevant ‘best in class’, including their peers, ‘OTT players’ and other leading edge players.


For other players including Internet companies, technology vendors, banks and payment networks:

  • The report provides independent market insight on how telcos and other players will be seeking to generate $ multi-billion revenues from digital commerce
  • As a potential partner, the report will provide a fast-track to guide product and business development decisions to meet the needs of telcos (and others) that will need to make commensurate investment in technologies and partnerships to achieve their value creation goals
  • As a potential competitor, the report will save time and improve the quality of competitor insight by giving a detailed and independent picture of the rationale and strategic approach you and your competitors will need to take


For merchants building digital commerce strategies, it will:

 

  • Help to improve revenue outlook, return on investment and shareholder value by improving the quality of insight to strategic decisions, opportunities and threats lying ahead in digital commerce
  • Save vital time and effort by accelerating internal decision making and speed to market


For investors, it will:

  • Improve investment decisions and strategies returning shareholder value by improving the quality of insight on the outlook of telcos and other digital commerce players
  • Save vital time and effort by accelerating decision making and investment decisions
  • Help them better understand and evaluate the needs, goals and key strategies of key telcos and their partners / competitors

Digital Commerce 2.0: Report Content Summary

  • Executive Summary. (9 pages outlining the opportunity and key strategic options)
  • Strategy. The shape and scope of the opportunities, the convergence of personal data, mobile, digital payments and advertising, and personal cloud. The importance of giving consumers control, and the nature of the opportunity, including Amazon and Vodafone case studies.
  • The Marketplace. Cultural, commercial and regulatory factors, and strategies of the market leading players. Further analysis of Google, Facebook, Apple, eBay and PayPal, telco and financial services market plays.
  • The Value Proposition. How to build attractive customer propositions in mobile commerce and personal cloud. Solutions for banked and unbanked markets, including how to address consumers and merchants.
  • The Internal Value Network. The need for change in organisational structure in telcos and banks, including an analysis of Telefonica and Vodafone case studies.
  • The External Value Network. Where to collaborate, partner and compete in the value chain – working with telcos, retailers, banks and payment networks. Building platforms and relationships with Internet players. Case studies include Weve, Isis, and the Merchant Customer Exchange.
  • Technology. Making appropriate use of personal data in different contexts. Tools for merchants and point-of-sale transactions. Building a flexible, user-friendly digital wallet.
  • Finance. Potential revenue streams from mobile commerce, personal cloud, raw big data, professional services, and internal use.
  • Appendix – the cutting edge. An analysis of fourteen best practice and potentially disruptive plays in various areas of the market.

 

Mobile Broadband 2.0: The Top Disruptive Innovations

Summary: Key trends, tactics, and technologies for mobile broadband networks and services that will influence mid-term revenue opportunities, cost structures and competitive threats. Includes consideration of LTE, network sharing, WiFi, next-gen IP (EPC), small cells, CDNs, policy control, business model enablers and more. (March 2012, Executive Briefing Service, Future of the Networks Stream).



Below is an extract from this 44 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings and more on Facebook at the Silicon Valley (27-28 March) and London (12-13 June) New Digital Economics Brainstorms.




Introduction

Telco 2.0 has previously published a wide variety of documents and blog posts on mobile broadband topics – content delivery networks (CDNs), mobile CDNs, WiFi offloading, Public WiFi, network outsourcing (“‘Under-The-Floor’ (UTF) Players: threat or opportunity? ”) and so forth. Our conferences have featured speakers and panellists discussing operator data-plan pricing strategies, tablets, network policy and numerous other angles. We’ve also featured guest material such as Arete Research’s report LTE: Late, Tempting, and Elusive.

In our recent ‘Under the Floor (UTF) Players‘ Briefing we looked at strategies to deal with some of the challenges facing operators as a result of market structure and outsourcing.


This Executive Briefing is intended to complement and extend those efforts, looking specifically at those technical and business trends which are truly “disruptive”, either immediately or in the medium-term future. In essence, the document can be thought of as a checklist for strategists – pointing out key technologies or trends around mobile broadband networks and services that will influence mid-term revenue opportunities and threats. Some of those checklist items are relatively well-known, others more obscure but nonetheless important. What this document doesn’t cover are the more straightforward concepts around pricing, customer service, segmentation and so forth – all important to get right, but rarely disruptive in nature.

During 2012, Telco 2.0 will be rolling out a new MBB workshop concept, which will audit operators’ existing technology strategy and planning around mobile data services and infrastructure. This briefing document is a roundup of some of the critical issues we will be advising on, as well as our top-level thinking on the importance of each trend.

It starts by discussing some of the issues which determine the extent of any disruption:

  • Growth in mobile data usage – and whether the much-vaunted “tsunami” of traffic may be slowing down
  • The role of standardisation, and whether it is a facilitator or inhibitor of disruption
  • Whether the most important MBB disruptions are likely to be telco-driven, or will stem from other actors such as device suppliers, IT companies or Internet firms.

The report then drills into a few particular domains where technology is evolving, looking at some of the most interesting and far-reaching trends and innovations. These are split broadly between:

  • Network infrastructure evolution (radio and core)
  • Control and policy functions, and business-model enablers

It is not feasible for us to cover all these areas in huge depth in a briefing paper such as this. Some areas such as CDNs and LTE have already been subject to other Telco 2.0 analysis, and this will be linked to where appropriate. Instead, we have drilled down into certain aspects we feel are especially interesting, particularly where these are outside the mainstream of industry awareness and thinking – and tried to map technical evolution paths onto potential business model opportunities and threats.

This report cannot be truly exhaustive – it doesn’t look at the nitty-gritty of silicon components, or antenna design, for example. It also treads a fine line between technological accuracy and ease-of-understanding for the knowledgeable but business-focused reader. For more detail or clarification on any area, please get in touch with us – email contact@stlpartners.com or call +44 (0) 207 247 5003.

Telco-driven disruption vs. external trends

There are various potential sources of disruption for the mobile broadband marketplace:

  • New technologies and business models implemented by telcos, which increase revenues, decrease costs, improve performance or alter the competitive dynamics between service providers.
  • 3rd party developments that can either bolster or undermine the operators’ broadband strategies. This includes both direct MBB innovations (new uses of WiFi, for example) and bleed-over from adjacent marketplaces such as device creation or content/application provision.
  • External, non-technology effects such as changing regulation, economic backdrop or consumer behaviour.

The majority of this report covers “official” telco-centric innovations – LTE networks, new forms of policy control and so on.

External disruptions to monitor

But the most dangerous form of innovation is that from third parties, which can undermine assumptions about the ways mobile broadband can be used, introduce new mechanisms for arbitrage, or somehow subvert operators’ pricing plans or network controls.

In the voice communications world, there are often regulations in place to protect service providers – such as banning the use of “SIM boxes” to terminate calls and reduce interconnection payments. But in the data environment, it is far less obvious that many work-arounds can either be seen as illegal, or even outside the scope of fair-usage conditions. That said, we have already seen some attempts by telcos to manage these effects – such as charging extra for “tethering” on smartphones.

It is not really possible to predict all possible disruptions of this type – such is the nature of innovation. But by describing a few examples, market participants can gauge their level of awareness, as well as gain motivation for ongoing “scanning” of new developments.

Some of the areas being followed by Telco 2.0 include:

  • Connection-sharing. This is where users might link devices together locally, perhaps through WiFi or Bluetooth, and share multiple cellular data connections. This is essentially “multi-tethering” – for example, 3 smartphones discovering each other nearby, perhaps each with a different 3G/4G provider, and pooling their connections together for shared use. From the user’s point of view it could improve effective coverage and maximum/average throughput speed. But from the operators’ view it would break the link between user identity and subscription, and essentially offload traffic from poor-quality networks on to better ones.
  • SoftSIM or SIM-free wireless. Over the last five years, various attempts have been made to decouple mobile data connections from SIM-based authentication. In some ways this is not new – WiFi doesn’t need a SIM, while it’s optional for WiMAX, and CDMA devices have typically been “hard-coded” to just register on a specific operator network. But the GSM/UMTS/LTE world has always relied on subscriber identification through a physical card. At one level, it is very good – SIMs are distributed easily and have enabled a successful prepay ecosystem to evolve. They provide operator control points and the ability to host secure applications on the card itself. However, the need to obtain a physical card restricts business models, especially for transient/temporary use such as a “one day pass”. But the most dangerous potential change is a move to a “soft” SIM, embedded in the device software stack. Companies such as Apple have long dreamed of acting as a virtual network provider, brokering between user and multiple networks. There is even a patent for encouraging bidding per-call (or perhaps per data-connection) with telcos competing head to head on price/quality grounds. Telco 2.0 views this type of least-cost routing as a major potential risk for operators, especially for mobile data – although it possibly also enables some new business models that have been difficult to achieve in the past.
  • Encryption. Various of the new business models and technology deployment intentions of operators, vendors and standards bodies are predicated on analysing data flows. Deep packet inspection (DPI) is expected to be used to identify applications or traffic types, enabling differential treatment in the network, or different charging models to be employed. Yet this is rendered largely useless (or at least severely limited) when various types of encryption are used. Various content and application types already secure data in this way – content DRM, BlackBerry traffic, corporate VPN connections and so on. But increasingly, we will see major Internet companies such as Apple, Google, Facebook and Microsoft using such techniques both for their own users’ security and because it hides precise indicators of usage from the network operators. If a future Android phone sends all its mobile data back via a VPN tunnel and breaks it out in Mountain View, California, operators will be unable to discern YouTube video from search or VoIP traffic. This is one of the reasons why application-based charging models – one- or two-sided – are difficult to implement.
  • Application evolution speed. One of the largest challenges for operators is the pace of change of mobile applications. The growing penetration of smartphones, appstores and ease of “viral” adoption of new services causes a fundamental problem – applications emerge and evolve on a month-by-month or even week-by-week basis. This is faster than any realistic internal telco processes for developing new pricing plans, or changing network policies. Worse, the nature of “applications” is itself changing, with the advent of HTML5 web-apps, and the ability to “mash up” multiple functions in one app “wrapper”. Is a YouTube video shared and embedded in a Facebook page a “video service”, or “social networking”?

It is also really important to recognise that certain procedures and technologies used in policy and traffic management will likely have some unanticipated side-effects. Users, devices and applications are likely to respond to controls that limit their actions, while other developments may result in “emergent behaviours” spontaneously. For instance, there is a risk that too-strict data caps might change usage models for smartphones and make users just connect to the network when absolutely necessary. This is likely to be at the same times and places when other users also feel it necessary, with the unfortunate implication that peaks of usage get “spikier” rather than being ironed-out.

There is no easy answer to addressing these types of external threats. Operator strategists and planners simply need to keep watch on emerging trends, and perhaps stress-test their assumptions and forecasts with market observers who keep tabs on such developments.

The mobile data explosion… or maybe not?

It is an undisputed fact that mobile data is growing exponentially around the world. Or is it?

A J-curve or an S-curve?

Telco 2.0 certainly thinks that growth in data usage is occurring, but is starting to see signs that the smooth curves that drive so many other decisions might not be so smooth – or so steep – after all. If this proves to be the case, it could be far more disruptive to operators and vendors than any of the individual technologies discussed later in the report. If operator strategists are not at least scenario-planning for lower data growth rates, they may find themselves in a very uncomfortable position in a year’s time.

In its most recent study of mobile operators’ traffic patterns, Ericsson concluded that Q2 2011 data growth was just 8% globally, quarter-on-quarter, a far cry from the 20%+ growth rates seen previously, and leaving a chart that looks distinctly like the beginning of an S-curve rather than a continued “hockey stick”. Given that the 8% includes a sizeable contribution from undoubtedly high-growth developing markets like China, it suggests that other markets are maturing quickly. (We are rather sceptical of Ericsson’s suggestion of seasonality in the data.) Other data points come from O2 in the UK, which appears to have had essentially zero traffic growth for the past few quarters, or Vodafone, which now cites European data traffic as growing more slowly (19% year-on-year) than its data revenues (21%). Our view is that current global growth is c.60-70%, c.40% in mature markets and 100%+ in developing markets.
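To put quarterly and annual figures on a comparable footing, compounding is the relevant arithmetic. The short calculation below assumes, purely for illustration, that a given quarter-on-quarter rate is sustained for four consecutive quarters.

```python
# Compounding a quarter-on-quarter growth rate into an annualised rate,
# assuming (for illustration only) the quarterly rate holds for four quarters.

def annualised(qoq_rate):
    return (1.0 + qoq_rate) ** 4 - 1.0

for qoq in (0.08, 0.20):
    print(f"{qoq:.0%} per quarter ≈ {annualised(qoq):.0%} per year")
# 8% per quarter ≈ 36% per year; 20% per quarter ≈ 107% per year
```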

Figure 1 – Trends in European data usage


Now it is possible that various one-off factors are at play here – the shift from unlimited to tiered pricing plans, the stronger enforcement of “fair-use” plans and the removal of particularly egregious heavy users. Certainly, other operators are still reporting strong growth in traffic levels. We may see a resumption in growth, for example if cellular-connected tablets start to be used widely for streaming video.

But we should also consider the potential market disruption, if the picture is less straightforward than the famous exponential charts. Even if the chart looks like a 2-stage S, or a “kinked” exponential, the gap may have implications, like a short recession in the economy. Many of the technical and business model innovations in recent years have been responses to the expected continual upward spiral of demand – either controlling users’ access to network resources, pricing it more highly and with greater granularity, or building out extra capacity at a lower price. Even leaving aside the fact that raw, aggregated “traffic” levels are a poor indicator of cost or congestion, any interruption or slow-down of the growth will invalidate a lot of assumptions and plans.

Our view is that the scary forecasts of “explosions” and “tsunamis” have led virtually all parts of the industry to create solutions to the problem. We can probably list more than 20 approaches, most of them standalone “silos”.

Figure 2 – A plethora of mobile data traffic management solutions


What seems to have happened is that at least 10 of those approaches have worked – caps/tiers, video optimisation, WiFi offload, network densification and optimisation, collaboration with application firms to create “network-friendly” software and so forth. Taken collectively, there is actually a risk that they have worked “too well”, to the extent that some previous forecasts have turned into “self-denying prophecies”.

There is also another common forecasting problem occurring – the assumption that later adopters of a technology will behave like earlier users. In many markets we are now reaching 30-50% smartphone penetration. That means that all the most enthusiastic users are already connected, and we’re left with those that are (largely) ambivalent and probably quite light users of data. That will bring the averages down, even if each individual user is still increasing their consumption over time. But even that assumption may be flawed, as caps have made people concentrate much more on their usage, offloading to WiFi and restricting their data flows. There is some evidence that the growing number of free WiFi points is also reducing laptop use of mobile data, which accounts for 70-80% of the total in some markets, while the much-hyped shift to tablets isn’t driving much extra mobile data as most are WiFi-only.

So has the industry over-reacted to the threat of a “capacity crunch”? What might be the implications?

The problem is that focusing on a single, narrow metric “GB of data across the network” ignores some important nuances and finer detail. From an economics standpoint, network costs tend to be driven by two main criteria:

  • Network coverage in terms of area or population
  • Network capacity at the busiest places/times

Coverage is (generally) therefore driven by factors other than data traffic volumes. Many cells have to be built and run anyway, irrespective of whether there’s actually much load – the operators all want to claim good footprints and may be subject to regulatory rollout requirements. Peak capacity in the most popular locations, however, is a different matter. That is where issues such as spectrum availability, cell site locations and the latest high-speed networks become much more important – and hence costs do indeed rise. However, it is far from obvious that the problems at those “busy hours” are always caused by “data hogs” rather than sheer numbers of people each using a small amount of data. (There is also another issue around signalling traffic, discussed later). 
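As a rough illustration of why sheer numbers of light users can matter as much as a few “hogs”, the back-of-envelope calculation below converts busy-hour usage per subscriber into an average offered load for a single cell; all figures are invented purely for illustration.

```python
# Back-of-envelope busy-hour load: many light users vs a few heavy users.
# All figures are invented purely for illustration.

def avg_load_mbps(users, mb_per_user_busy_hour):
    """Average offered load (Mbit/s) in the busy hour for one cell."""
    total_megabits = users * mb_per_user_busy_hour * 8   # MB -> Mbit
    return total_megabits / 3600                          # spread over one hour

light_crowd = avg_load_mbps(users=500, mb_per_user_busy_hour=30)   # ~33 Mbit/s
few_hogs = avg_load_mbps(users=5, mb_per_user_busy_hour=600)       # ~7 Mbit/s
print(f"500 light users: {light_crowd:.0f} Mbit/s; 5 heavy users: {few_hogs:.0f} Mbit/s")
```

On these illustrative numbers, a crowd of light users loads the busy-hour cell several times more heavily than a handful of heavy users – which is the point made above about congestion being a function of place, time and user density rather than raw network-wide volume.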

Yes, there is a generally positive correlation between network-wide volume growth and costs, but it is far from perfect, and certainly not a direct causal relationship.

So let’s hypothesise briefly about what might occur if data traffic growth does tail off, at least in mature markets.

  • Delays to LTE rollout – if 3G networks are filling up less quickly than expected, the urgency of 4G deployment is reduced.
  • The focus of policy and pricing for mobile data may switch back to encouraging use rather than discouraging/controlling it. Capacity utilisation may become an important metric, given the high fixed costs and low marginal ones. Expect more loyalty-type schemes, plus various methods to drive more usage in quiet cells or off-peak times.
  • Regulators may start to take different views of traffic management or predicted spectrum requirements.
  • Prices for mobile data might start to fall again, after a period where we have seen them rise. Some operators might be tempted back to unlimited plans, for example if they offer “unlimited off-peak” or similar options.
  • Many of the more complex and commercially-risky approaches to tariffing mobile data might be deprioritised. For example, application-specific pricing involving packet-inspection and filtering might get pushed back down the agenda.
  • In some cases, we may even end up with overcapacity on cellular data networks – not to the degree we saw in fibre in 2001-2004, but there might still be an “overhang” in some places, especially if there are multiple 4G networks.
  • Steady growth of (say) 20-30% peak data per annum should be manageable with the current trends in price/performance improvement. It should be possible to deploy and run networks to meet that demand with reducing unit “production cost”, for example through use of small cells. That may reduce the pressure to fill the “revenue gap” on the infamous scissors-diagram chart.

Overall, it is still a little too early to declare shifting growth patterns for mobile data as a “disruption”. There is a lack of clarity on what is happening, especially in terms of responses to the new controls, pricing and management technologies put recently in place. But operators need to watch extremely closely what is going on – and plan for multiple scenarios.

Specific recommendations will depend on an individual operator’s circumstances – user base, market maturity, spectrum assets, competition and so on. But broadly, we see three scenarios and implications for operators:

  • “All hands on deck!”: Continued strong growth (perhaps with a small “blip”) which maintains the pressure on networks, threatens congestion, and drives the need for additional capacity, spectrum and capex.
    • Operators should continue with current multiple strategies for dealing with data traffic – acquiring new spectrum, upgrading backhaul, exploring massive capacity enhancement with small cells and examining a variety of offload and optimisation techniques. Where possible, they should explore two-sided models for charging and use advanced pricing, policy or segmentation techniques to rein in abusers and reward those customers and applications that are parsimonious with their data use. Vigorous lobbying activities will be needed, for gaining more spectrum, relaxing Net Neutrality rules and perhaps “taxing” content/Internet companies for traffic injected onto networks.
  • “Panic over”: Moderating and patchy growth, which settles to a manageable rate – comparable with the patterns seen in the fixed broadband marketplace
    • This will mean that operators can “relax” a little, with the respite in explosive growth meaning that the continued capex cycles should be more modest and predictable. Extension of today’s pricing and segmentation strategies should improve margins, with continued innovation in business models able to proceed without rush, and without risking confrontation with Internet/content companies over traffic management techniques. Focus can shift towards monetising customer insight, ensuring that LTE rollouts are strategic rather than tactical, and exploring new content and communications services that exploit the improving capabilities of the network.
  • “Hangover”: Growth flattens off rapidly, leaving operators with unused capacity and threatening brutal price competition between telcos.
    • This scenario could prove painful, reminiscent of early-2000s experience in the fixed-broadband marketplace. Wholesale business models could help generate incremental traffic and revenue, while the emphasis will be on fixed-cost minimisation. Some operators will scale back 4G rollouts until cost and maturity go past the tipping-point for outright replacement of 3G. Restrictive policies on bandwidth use will be lifted, as operators compete to give customers the fastest / most-open access to the Internet on mobile devices. Consolidation – and perhaps bankruptcies – may ensue, as declining data prices may coincide with substitution of core voice and messaging business.

To read the note in full, including the following analysis…

  • Introduction
  • Telco-driven disruption vs. external trends
  • External disruptions to monitor
  • The mobile data explosion… or maybe not?
  • A J-curve or an S-curve?
  • Evolving the mobile network
  • Overview
  • LTE
  • Network sharing, wholesale and outsourcing
  • WiFi
  • Next-gen IP core networks (EPC)
  • Femtocells / small cells / “cloud RANs”
  • HetNets
  • Advanced offload: LIPA, SIPTO & others
  • Peer-to-peer connectivity
  • Self optimising networks (SON)
  • M2M-specific broadband innovations
  • Policy, control & business model enablers
  • The internal politics of mobile broadband & policy
  • Two sided business-model enablement
  • Congestion exposure
  • Mobile video networking and CDNs
  • Controlling signalling traffic
  • Device intelligence
  • Analytics & QoE awareness
  • Conclusions & recommendations
  • Index

…and the following figures…

  • Figure 1 – Trends in European data usage
  • Figure 2 – A plethora of mobile data traffic management solutions
  • Figure 3 – Not all operator WiFi is “offload” – other use cases include “onload”
  • Figure 4 – Internal ‘power tensions’ over managing mobile broadband
  • Figure 5 – How a congestion API could work
  • Figure 6 – Relative Maturity of MBB Management Solutions
  • Figure 7 – Laptops generate traffic volume, smartphones create signalling load
  • Figure 8 – Measuring Quality of Experience
  • Figure 9 – Summary of disruptive network innovations

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 44 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, geographies, people and products referenced: 3GPP, Aero2, Alcatel Lucent, AllJoyn, ALU, Amazon, Amdocs, Android, Apple, AT&T, ATIS, BBC, BlackBerry, Bridgewater, CarrierIQ, China, China Mobile, China Unicom, Clearwire, Conex, DoCoMo, Ericsson, Europe, EverythingEverywhere, Facebook, Femto Forum, FlashLinq, Free, Germany, Google, GSMA, H3G, Huawei, IETF, IMEI, IMSI, InterDigital, iPhones, Kenya, Kindle, Light Radio, LightSquared, Los Angeles, MBNL, Microsoft, Mobily, Netflix, NGMN, Norway, NSN, O2, WiFi, Openet, Qualcomm, Radisys, Russia, Saudi Arabia, SoftBank, Sony, Stoke, Telefonica, Telenor, Time Warner Cable, T-Mobile, UK, US, Verizon, Vita, Vodafone, WhatsApp, Yota, YouTube, ZTE.

Technologies and industry terms referenced: 2G, 3G, 4.5G, 4G, Adaptive bitrate streaming, ANDSF (Access Network Discovery and Selection Function), API, backhaul, Bluetooth, BSS, capacity crunch, capex, caps/tiers, CDMA, CDN, CDNs, Cloud RAN, content delivery networks (CDNs), Continuous Computing, Deep packet inspection (DPI), DPI, DRM, Encryption, Enhanced video, EPC, ePDG (Evolved Packet Data Gateway), Evolved Packet System, Femtocells, GGSN, GPS, GSM, Heterogeneous Network (HetNet), Heterogeneous Networks (HetNets), HLRs, hotspots, HSPA, HSS (Home Subscriber Server), HTML5, HTTP Live Streaming, IFOM (IP Flow Mobility and Seamless Offload), IMS, IPR, IPv4, IPv6, LIPA (Local IP Access), LTE, M2M, M2M network enhancements, metro-cells, MiFi, MIMO (multiple in, multiple out), MME (Mobility Management Entity), mobile CDNs, mobile data, MOSAP, MSISDN, MVNAs (mobile virtual network aggregators), MVNO, Net Neutrality, network outsourcing, Network sharing, Next-generation core networks, NFC, NodeBs, offload, OSS, outsourcing, P2P, Peer-to-peer connectivity, PGW (PDN Gateway), picocells, policy, Policy and Charging Rules Function (PCRF), Pre-cached video, pricing, Proximity networks, Public WiFi, QoE, QoS, RAN optimisation, RCS, remote radio heads, RFID, self-optimising network technology (SON), Self-optimising networks (SON), SGW (Serving Gateway), SIM-free wireless, single RANs, SIPTO (Selective IP Traffic Offload), SMS, SoftSIM, spectrum, super-femtos, Telco 2.0 Happy Pipe, Transparent optimisation, UMTS, ‘Under-The-Floor’ (UTF) Players, video optimisation, VoIP, VoLTE, VPN, White space, WiFi, WiFi Direct, WiFi offloading, WiMAX, WLAN.