Network use metrics: Good versus easy and why it matters

Introduction

Telecoms, like much of the business world, often revolves around measurements, metrics and KPIs. Whether these relate to coverage of networks, net-adds and churn rates of subscribers, or financial metrics such as ARPU, there is a plethora of numerical measures to track.

They are used to determine shifts in performance over time, or benchmark between different companies and countries. Regulators and investors scrutinise the historical data and may set quantitative targets as part of policy or investment criteria.

This report explores the nature of such metrics, how they are (mis)used and how the telecoms sector – and especially its government and regulatory agencies – can refocus on good (i.e., useful, accurate and meaningful) data rather than over-simplistic or just easy-to-collect statistics.

The discussion primarily focuses on those metrics that relate to overall industry trends or sector performance, rather than individual companies’ sales and infrastructure – although many datasets are built by collating multiple companies’ individual data submissions. It considers mechanisms to balance the common “data asymmetry” between internal telco management KPIs and metrics available to outsiders such as policymakers.

A poor metric often has huge inertia and high switching costs. The phenomenon of historical accidents leading to entrenched, long-lasting effects is known as “path dependence”, and telecoms exhibits it just as many other sub-sectors of the economy do. Many old-fashioned metrics are no longer really fit for purpose, and even some new ones are badly conceived. They often lead to poor regulatory decisions, poor optimisation and investment approaches by service providers, flawed incentives and large tranches of self-congratulatory overhype.

An important question is why some less-than-perfect metrics such as ARPU still have utility – and how and where to continue using them, with awareness of their limitations – or modify them slightly to reflect market reality. Sometimes maintaining continuity and comparability of statistics over time is important. Conversely, other old metrics such as “minutes” of voice telephony actually do more harm than good and should be retired or replaced.
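As a purely illustrative sketch (all figures hypothetical), the snippet below shows one reason why ARPU needs careful interpretation: blended ARPU is simply total revenue divided by total subscribers, so it can fall sharply when low-revenue connections such as IoT SIMs are added to the base, even while revenue per traditional subscriber stays flat.

```python
# Illustrative only: hypothetical figures showing how blended ARPU
# (total revenue / total subscribers) can fall even when revenue per
# "traditional" subscriber is unchanged, simply because low-revenue
# IoT SIMs have been added to the subscriber base.

def arpu(revenue: float, subscribers: int) -> float:
    """Average revenue per user for a single period."""
    return revenue / subscribers

# Year 1: 10m consumer subscribers generating 200m in revenue (ARPU = 20)
consumer_subs, consumer_revenue = 10_000_000, 200_000_000
# Year 2: consumer base and revenue unchanged, plus 5m IoT SIMs at ~1 each
iot_subs, iot_revenue = 5_000_000, 5_000_000

blended_y1 = arpu(consumer_revenue, consumer_subs)
blended_y2 = arpu(consumer_revenue + iot_revenue, consumer_subs + iot_subs)

print(f"Blended ARPU, year 1: {blended_y1:.2f}")   # 20.00
print(f"Blended ARPU, year 2: {blended_y2:.2f}")   # ~13.67
```

The headline metric drops by roughly a third even though nothing has deteriorated in the underlying consumer business, which is why segment-level or per-access-type reporting is often more meaningful than a single blended figure.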


Looking beyond operator KPIs

Throughout the report, we make a semantic distinction between industry-wide metrics and telco KPIs. KPIs are typically generated for specific individual companies, rather than aggregated across a sector. And while both KPIs and metrics can be retrospective or set as goals, metrics can also be forecast, especially where they link operational data to other underlying variables, such as population, geographic areas or demand (rather than supply).

STL Partners has previously published work on telcos’ external KPIs, including discussion of the focus on “defensive” statistics on core connectivity, “progressive” numbers on new revenue-generating opportunities, and socially-oriented datasets on environmental, social and governance (ESG) issues and staffing. See the figure below.

Types of internal KPIs found in major telcos

Source: STL Partners

Policymakers need metrics

The telecoms policy realm spans everything from national broadband plans to spectrum allocations, decisions about mergers and competition, net neutrality, cybersecurity, citizen inclusion and climate/energy goals. All of them use metrics either during policy development and debate, or as goalposts for quantifying electoral pledges or making regional/international comparisons.

And it is here that an informational battleground lies.

There are usually multiple stakeholder groups in these situations, whether it is incumbents vs. new entrants, tech #1 vs. tech #2, consumers vs. companies, merger proponents vs. critics, or just between different political or ideological tribes and the numerous industry organisations and lobbying institutions that surround them. Everyone involved wants data points that make themselves look good and which allow them to argue for more favourable treatment or more funding.

The underlying driver here is policy rather than performance.

Data asymmetry

A major problem that emerges here is data asymmetry. There is a huge gulf between the operational internal KPIs used by telcos, and those that are typically publicised in corporate reports and presentations or made available in filings to regulators. Automation and analytics technologies generate ever more granular data from networks’ performance and customers’ usage of, and payment for, their services – but these do not get disseminated widely.

Thus, policymakers and regulators often lack the detailed and disaggregated primary information and data resources available to large companies’ internal reporting functions. They typically need to mandate specific (comparable) data releases via operators’ license terms or rely on third-party inputs from sources such as trade associations, vendor analysis, end-user surveys or consultants.

 

Table of contents

  • Executive Summary
    • Key recommendations
    • Next steps
  • Introduction
    • Key metrics overview
    • KPIs vs. metrics: What’s in a name?
    • Who uses telco metrics and why?
    • Data used in policy-making and regulation
    • Metrics and KPIs enshrined in standards
    • Why some stakeholders love “old” metrics
    • Granularity
  • Coverage, deployment and adoption
    • Mobile network coverage
    • Fixed network deployment/coverage
  • Usage, speed and traffic metrics
    • Voice minutes and messages
    • Data traffic volumes
    • Network latency
  • Financial metrics
    • Revenue and ARPU
    • Capex
  • Future trends and innovation in metrics
    • The impact of changing telecom industry structure
    • Why applications matter: FWA, AR/VR, P5G, V2X, etc
    • New sources of data and measurements
  • Conclusion and recommendations
    • Recommendations for regulators and policymakers
    • Recommendations for fixed and cable operators
    • Recommendations for mobile operators
    • Recommendations for telecoms vendors
    • Recommendations for content, cloud and application providers
    • Recommendations for investors and consultants
  • Appendix
    • Key historical metrics: Overview
    • How telecoms data is generated
  • Index


Pursuing hyperscale economics

The promise of hyperscale economics

Managing demands and disruption

As telecoms operators move to more advanced, data-intensive services enabled by 5G and fibre to the x (FTTx), alongside other value-added services, they are looking to build the capabilities to support the growing demands on the network. However, in most cases operators are expanding those capabilities in ways that cause their costs to rise in line with them.


This is becoming an increasingly pressing issue given the commoditisation of traditional connectivity services and changing competitive dynamics from within and outside the telecoms industry. Telcos are facing stagnating or declining ARPUs as price becomes the main competitive weapon and differentiation between connectivity services diminishes.

The competitive landscape within the telecoms industry is also becoming much more dynamic, as operators adopt cloud-native technologies from a new ecosystem of vendors at differing rates. At the same time, the rate of innovation is accelerating and revenue share is being eroded by the emergence of new competitors, including:

  • Greenfield operators like DISH and Rakuten;
  • More software-centric digital enterprise service providers that provide advanced innovative applications and services;
  • Content and SaaS players and the hyperscale cloud providers, such as AWS, Microsoft and Google, as well as the likes of Netflix and Disney.

We are in another transition period in the telco space. We’ve made a lot of mess in the past, but now everyone is talking about cloud-native and containers which gives us an opportunity to start over based on the lessons we’ve learned.

VP Cloudified Production, European converged operator 1

Even incumbents and established challengers in more closed and stable markets, where connectivity revenues are still growing, risk complacency. Markets with limited historic competition and high barriers to entry can be prone to major systemic shocks or sudden, unexpected changes in the market environment, such as shifts in government policy, new 5G entrants or regulatory changes that mandate structural separation.

Source: Company accounts, stock market data; STL Partners analysis. Note: the telecoms industry data covers 165 global telecoms operators.

Telecoms industry seeking hyperscaler growth

The telecoms industry’s traditional response to threats has been to invest in better networks in order to differentiate, but networks have become increasingly commoditised. Telcos can no longer extract value from services that run exclusively on telecoms networks. In other words, the defensive moat has been breached: owning fibre or spectrum is no longer sufficient to provide an advantage. Value has shifted from capital expenditure to the network-independent services that run over networks. The capital markets therefore believe it is the service innovators – content and SaaS players and internet giants such as Amazon, Microsoft or Apple – that will capture future revenue and profit growth, rather than telecoms operators. However, with 5G, edge computing and the telco cloud, there has been a resurgence of interest in closer integration between applications and the networks they run over, leveraging greater network intelligence and insight to deliver enhanced outcomes.

Defining telcos’ roles in the Coordination Age

Given that the need for connectivity is not going away but the value is not going to grow, telcos are now faced with the challenge of figuring out what their new role and purpose is within the Coordination Age, and how they can leverage their capabilities to provide unique value in a more ecosystem-centric B2B2X environment.

Success in the Coordination Age requires more from the network than ever before, with a greater need for applications to interface and integrate with the networks they run over, and to serve not only customers but also new types of partners. This requires not only a move to more flexible, cost-effective and scalable networks and operations, but also delivering value higher up the value chain to enable further differentiation and growth.

Telcos can either define themselves as a retail business selling mobile and last mile connectivity, or figure out how to work more closely with demanding partners and customers to provide greater value. It is not just about scale or volume, but about the competitive environment. At the end of the day, telcos need to prepare for the capabilities to do innovative things like dynamic slicing.

Group Executive, Product and Technology, Asia Pacific operator

Responding to the pace of change

The introduction of cloud-native technologies and the promise of software-centric networking have the potential to (again) significantly disrupt the market and change the pace of innovation. For example, the hyperscale cloud providers have already disrupted the IT industry and are seen simultaneously as a threat, as potential partners and as a model for operators to emulate. More significantly, they have achieved substantial growth while maintaining their agile operations, culture and mindset.

With the hyperscalers now seeking to play a bigger role in the network, many operators are looking to understand how they should respond to this change of pace, or risk being relegated to the role of connectivity provider, or ‘dumb pipe’.

Our report seeks to address the following key question:

Can telecoms operators realistically pursue hyperscale economics by adopting some of the hyperscaler technologies and practices, and if so, how?

Our findings in this report are based on an interview programme with 14 key leaders from telecoms operators globally, conducted from June to August 2021. Our participant group spans different regions, operator types and roles within the organisation.


Fibre for 5G and edge: Who does it and how to build it?

Opportunities for fibre network operators

4G/5G densification and the growth in edge end points will place fresh demands on telecoms network infrastructure to deliver high bandwidth connections to new locations. Many of these will be sites on the streets of urban centres without existing connections, where installation of new fibre cables is costly. This will require careful planning and optimum selection of existing infrastructure to minimise costs and strengthen the business cases for fibre deployment.

While much of the growth in deployment of small cells and edge end points will be on private sites, their deployment in public areas, in support of public network services, will pose specific challenges in providing the high-bandwidth connectivity required. This includes backhaul from cell sites and edge end points to the fibre transport network, as well as any fronthaul needs for new open RAN deployments, from baseband equipment to radio units and antennas. In almost all cases this will entail installing new fibre in areas where laying new duct is at its most expensive, although in a few cases fixed point-to-point radio links could be deployed instead.


Global deployments of small cells and non-telco edge end points in public areas

Source: Small Cell Forum, STL research and analysis

In addition, operators of 5G small cells and public cloud edge sites will require access to fibre links for backhaul to their core networks to provide the high bandwidths required. In some cases, they may need multiple fibres, especially if diverse paths are needed for security and resilience purposes.

Many newer networks have been built for a specific purpose, such as residential or business FTTP. Others are trunk routes connecting large businesses and data centres, and may serve local, regional, national or international areas. In addition, changing regulations have encouraged the creation of new businesses such as neutral hosts (also called “open access” wholesale fibre providers) and, as a result, the supply side of the market comprises an increasing variety of players. If this pattern of purpose-specific builds were to continue, it would very likely prove uneconomic to build dedicated networks for some applications, such as small cell densification or some standalone edge applications.

However, provided build quality meets the required standard and costs can be contained, there is no reason why networks deployed to address one market cannot be extended and repurposed to serve others. For new fibre builds being planned, it is also important to consider these new FTTx opportunities upfront and in some detail, rather than as an afterthought or a throw-away bullet point on investor slide decks.

This report looks at the opportunities these developments offer to fibre network operators and considers the business cases that need to be made. It looks at the means and scope for minimising costs necessary to profitably satisfy the widest range of needs.

The fibre market is changing

Demand for FTTH/P has largely been satisfied in many countries, and even in slower markets such as the UK and Germany the bulk of the network is expected to be in place for most urban premises by 2025/26, at least on a “homes passed” basis, if not actually connected.

By contrast, the requirement for higher-bandwidth connectivity to mobile base stations being upgraded from 3G to 4G and 5G is current and ongoing. Demand for links to the small cells needed to support 5G densification, standalone edge and smart city applications is only just beginning to appear and is likely to develop significantly over the next 10 years or more. In future, high-speed broadband links will be required to support an increasing range of applications for different organisations: for example, autonomous and semi-autonomous vehicle (V2X) applications operated by government or city authorities.

Both densification and edge will need local connections for fronthaul and backhaul as well as longer connections to provide backhaul to the core network. Building from scratch is expensive owing to the high costs associated with digging in the public highway, especially in urban centres. Digging can be complex, depending on the surfaces and buried services encountered, and extensions after the initial main build can be very expensive.

Laying fibre and ducts is a long-term investment that can usually be amortised over 15 to 20 years. Nevertheless, network operators need to be sure of a good return on their investment and therefore need to find ways to minimise costs while maximising revenues. In markets with multiple players, potential acquisition targets will also want to underscore their valuations by maximising their addressable market while reducing any post-merger remedial or expansion costs. Good planning, including watching for new opportunities and trends and making smart use of existing assets to minimise costs, can help ensure this; a rough illustration of the amortisation arithmetic follows the bullet points below.

  • Serving multiple markets through good forecasting and planning can help maximise revenues.
  • Operators and others can make use of various infrastructure assets to reduce costs, including incumbents’ physical duct/pole infrastructure, sewers, disused water and hydraulic pipes, neutral hosts’ networks, council ducts and traffic management ducts. Obviously these will not extend everywhere that fibre is required, but they can make a meaningful contribution in many situations.
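
As a rough, purely illustrative sketch of the amortisation arithmetic mentioned above (all cost figures hypothetical), the snippet below spreads an upfront per-premise build cost over a 15-20 year life to give an indicative annual cost per premise passed, and shows how much reusing existing infrastructure can change the picture.

```python
# Illustrative only: hypothetical build costs showing the effect of
# straight-line amortisation over a 15-20 year asset life, and of reusing
# existing ducts versus digging new trenches in the public highway.

def annualised_cost_per_premise(capex_per_premise: float, amortisation_years: int) -> float:
    """Straight-line amortisation of the upfront build cost, per premise passed."""
    return capex_per_premise / amortisation_years

# Hypothetical capex per urban premise passed
scenarios = {"reusing existing ducts/poles": 600, "new dig in public highway": 1_500}

for label, capex in scenarios.items():
    for years in (15, 20):
        annual = annualised_cost_per_premise(capex, years)
        print(f"{label}, {years}-year amortisation: ~{annual:.0f} per premise per year")
```

Even before financing costs and opex are added, the gap between the two scenarios illustrates why smart reuse of existing assets features so heavily in the build options discussed later in the report.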

The remaining sections of this report examine in more detail the specific opportunities that densification of mobile base stations and the growth of edge computing offer to fixed network operators. They cover:

  • Market demand, including drivers of demand, and end users’ and the industry’s needs and options
  • The changing supply side and regulation
  • Technologies, build options and costs
  • How to maximise revenues and returns on investment.

Table of Contents

  • Executive Summary
  • Introduction
    • The fibre market is changing
  • Small cell and edge: Demand
    • Demand for small cells
    • Demand for edge end points
  • Small cell and edge: Supply
    • The changing network supply structure
  • Build options
    • Pros and cons of seven build options
  • How do they compare on costs?
  • Impact of regulation and policy
  • How to mitigate unforeseen costs
  • The business case
  • Conclusions
  • Index


Net Neutrality 2021: IoT, NFV and 5G ready?

Introduction

It’s been a while since STL Partners last tackled the thorny issue of Net Neutrality. In our 2010 report Net Neutrality 2.0: Don’t Block the Pipe, Lubricate the Market we made a number of recommendations, including that a clear distinction should be established between ‘Internet Access’ and ‘Specialised Services’, and that operators should be allowed to manage traffic within reasonable limits providing their policies and practices were transparent and reported.

Perhaps unsurprisingly, the decade-long legal and regulatory wrangling is still rumbling on, albeit with rather more detail and nuance than in the past. Some countries have now implemented laws with varying severity, while other regulators have been more advisory in their rules. The US, in particular, has been mired in debate about the process and authority of the FCC in regulating Internet matters, but the current administration and courts have leaned towards legislating for neutrality, against (most) telcos’ wishes. The political dimension is never far away from the argument, especially given the global rise of anti-establishment movements and parties.

Some topics have risen in importance (such as where zero-rating fits in), while others now seem largely settled (outright blocking of legal content/apps is widely rejected). In contrast, discussion and exploration of “sender-pays” or “sponsored” data appears to have reduced, apart from niches and trials (such as AT&T’s sponsored data initiative), as it is both technically hard to implement and suffers from near-zero “willingness to pay” among the suggested customers. Some more-authoritarian countries have implemented their own “national firewalls”, which block specific classes of applications or particular companies’ services – but this is somewhat distinct from the commercial, telco-specific view of traffic management.

In general, the focus of the Net Neutrality debate is shifting to pricing issues, often in conjunction with the influence/openness of major web and app “platform players” such as Facebook or Google. Some telco advocates have opportunistically tried to link Net Neutrality to claimed concerns over “Platform Neutrality”, although that discussion is now largely separate and focused more on bundling and privacy concerns.

At the same time, there is still some interest in differential treatment of Internet traffic in terms of Quality of Service (QoS) – and also, a debate about what should be considered “the Internet” vs. “an internet”. The term “specialised services” crops up in various regulatory instruments, notably in the EU – although its precise definition remains fluid. In particular, the rise of mobile broadband for IoT use-cases, and especially the focus on low-latency and critical-communications uses in future 5G standards, almost mandate the requirement for non-neutrality, at some levels at least. It is much less-likely that “paid prioritisation” will ever extend to mainstream web-access or mobile app data. Large-scale video streaming services such as Netflix are perhaps still a grey area for some regulatory intervention, given the impact they have on overall network loads. At present, the only commercial arrangements are understood to be in CDNs, or paid-peering deals, which are (strictly speaking) nothing to do with Net Neutrality per most definitions. We may even see pressure for regulators to limit fees charged for Internet interconnect and peering.

This report first looks at the changing focus of the debate, then examines the underlying technical and industry drivers that are behind the scenes. It then covers developments in major countries and regions, before giving recommendations for various stakeholders.

STL Partners is also preparing a broader research piece on overall regulatory trends, to be published in the next few months as part of its Executive Briefing Service.

What has changed?

Where have we come from?

If we wind the clock back a few years, the Net Neutrality debate was quite different. Around 2012/13, the typical talking-points were subjects such as:

  • Whether mobile operators could block messaging apps like WhatsApp, VoIP services like Skype, or somehow charge those types of providers for network access / interconnection.
  • If fixed-line broadband providers could offer “fast lanes” for Netflix or YouTube traffic, often conflating arguments about access-network links with core-network peering capacity.
  • Rhetoric about the so-called “sender-pays” concept, with some lobbying for introducing settlements for data traffic that were reminiscent of telephony’s called / caller model.
  • Using DPI (deep packet inspection) to discriminate between applications and charge for “a la carte” Internet access plans at a granular level (e.g. per hour of video watched, or per social network used).
  • The application of “two-sided business models”, with Internet companies paying for data capacity and/or quality on behalf of end-users.

Since then, many things have changed. Specific countries’ and regions’ laws will be discussed in the next section, but the last four years have seen major developments in the Netherlands, the US, Brazil, the EU and elsewhere.

At one level, the regulatory and political shifts can be attributed to the huge rise in the number of lobby groups on both Internet and telecom sides of the Neutrality debate. However, the most notable shift has been the emergence of consumer-centric pro-Neutrality groups, such as Access Now, EDRi and EFF, along with widely-viewed celebrity input from the likes of comedian John Oliver. This has undoubtedly led to the balance of political pressure shifting from large companies’ lawyers towards (sometimes slogan-led) campaigning from the general public.

But there have also been changes in the background trends of the Internet itself, telecom business models, and consumers’ and application developers’ behaviour. (The key technology changes are outlined in the section after this one.) Various experiments and trials have been run, with a mix of successes and failures.

Another important background trend has been the unstoppable momentum of particular apps and content services, on both fixed and mobile networks. Telcos are now aware that they are likely to be judged on how well Facebook or Spotify or WeChat or Netflix perform – so they are much less-inclined to indulge in regulatory grand-standing about having such companies “pay for the infrastructure” or be blocked. Essentially, there is tacit recognition that access to these applications is why customers are paying for broadband in the first place.

These considerations have shifted the debate in many important areas, making some of the earlier ideas unworkable, while other areas have come to the fore. Two themes stand out:

  • Zero-rating
  • Specialised services

Contents:

  • Executive summary
  • Contents
  • Introduction
  • What has changed?
  • Where have we come from?
  • Zero-rating as a battleground
  • Specialised services & QoS
  • Technology evolution impacting Neutrality debate
  • Current status
  • US
  • EU
  • India
  • Brazil
  • Other countries
  • Conclusions
  • Recommendations

Connectivity for telco IoT / M2M: Are LPWAN & WiFi strategically important?

Introduction

5G, WiFi, GPRS, NB-IoT, LTE-M & LTE Categories 1 & 0, SigFox, Bluetooth, LoRa, Weightless-N & Weightless-P, ZigBee, EC-GSM, Ingenu, Z-Wave, Nwave, various satellite standards, optical/laser connections and more… the list of current or proposed wireless network technologies for the “Internet of Things” seems to be growing longer by the day. Some are long-range, some short. Some high power/bandwidth, some low. Some are standardised, some proprietary. And while most devices will have some form of wireless connection, there are certain categories that will use fibre or other fixed-network interfaces.

There is no “one-size fits all”, although some hope that 5G will ultimately become an “umbrella” for many of them, in the 2020 time-frame and beyond. But telcos, especially mobile operators, need to consider which they will support in the shorter-term horizon, and for which M2M/IoT use-cases. That universe is itself expanding too, with new IoT products and systems being conceived daily, spanning everything from hobbyists’ drones to industrial robots. All require some sort of connectivity, but the range of costs, data capabilities and robustness varies hugely.

Two overriding sets of questions emerge:

  • What are the business cases for deploying IoT-centric networks – and are they dependent on offering higher-level management or vertical solutions as well? Is offering connectivity – even at very low prices/margins – essential for telcos to ensure relevance and differentiate against IoT market participants?
  • What are the longer-term strategic issues around telcos supporting and deploying proprietary or non-3GPP networking technologies? Is the diversity a sensible way to address short-term IoT opportunities, or does it risk further undermining the future primacy of telco-centric standards and business models? Either way telcos need to decide how much energy they wish to expend, before they embrace the inevitability of alternative competing networks in this space.

This report specifically covers IoT-centric network connectivity. It fits into Telco 2.0’s Future of the Network research stream, and also intersects with our other ongoing work on IoT/M2M applications, including verticals such as the connected car, connected home and smart cities. It focuses primarily on new network types, rather than marketing/bundling approaches for existing services.

The Executive Briefing report IoT – Impact on M2M, Endgame and Implications from March 2015 outlined three strategic areas of M2M business model innovation for telcos:

  • Improve existing M2M operations: Dedicated M2M business units structured around priority verticals with dedicated resources. Such units allow telcos to tailor their business approach and avoid being constrained by traditional strategies that are better suited to mobile handset offerings.
  • Move into new areas of M2M: Expansion along the value chain through both acquisitions and partnerships, and the formation of M2M operator ‘alliances.’
  • Explore the Internet of Things: Many telcos have been active in the connected home e.g. AT&T Digital Life. However, outsiders are raising the connected home (and IoT) opportunity stakes: Google, for example, acquired Nest for $3.2 billion in 2014.
Figure 2: The M2M Value Chain

 

Source: STL Partners, More With Mobile

In the nine months since that report was published, a number of important trends have emerged in the M2M / IoT space:

  • A growing focus on the value of the “industrial Internet”, where sensors and actuators are embedded into offices, factories, agriculture, vehicles, cities and other locations. New use-cases and applications abound on both near- and far-term horizons.
  • A polarisation in discussion between ultra-fast/critical IoT (e.g. for vehicle-to-vehicle control) vs. low-power/cost IoT (e.g. distributed environmental sensors with 10-year battery life). 2015 discussion of IoT connectivity has been dominated by futuristic visions of 5G, or faster-than-expected deployment of LPWANs (low-power wide-area networks), especially based on new platforms such as SigFox or LoRa Alliance.
  • Comparatively slow emergence of dedicated individual connections for consumer IoT devices such as watches / wearables. With the exception of connected cars, most mainstream products connect via local “capillary” networks (e.g. Bluetooth and WiFi) to smartphones or home gateways acting as hubs, or a variety of corporate network platforms. The arrival of embedded SIMs might eventually lead to more individually-connected devices, but this has not materialised in volume yet.
  • Continued entry, investment and evolution of a broad range of major companies and start-ups, often with vastly different goals, incumbencies and competencies to telcos. Google, IBM, Cisco, GE, Intel, utility firms, vehicle suppliers and 1000s of others are trying to carve out roles in the value chain.
  • Growing impatience among some in the telecom industry with the pace of standardisation for some IoT-centric developments. A number of operators have looked outside the traditional cellular industry suppliers and technologies, eager to capitalise on short-term growth especially in LPWAN and in-building local connectivity. In response, vendors including Huawei, Ericsson and Qualcomm have stepped up their pace, although fully-standardised solutions are still some way off.

Connectivity in the wider M2M/IoT context

It is not always clear what the difference is between M2M and IoT, especially at a connectivity level. The terms now tend to be used synonymously, although the latter is definitely newer and “cooler”. Various vendors have their own spin on this – Cisco’s “Internet of Everything” and Ericsson’s “Networked Society”, for example. It is also a little unclear where the IoT part ends and the equally vague term “networked services” begins. It is important to recognise, too, that a sizeable part of the future IoT technology universe will not be based on “services” at all – and “user-owned” devices and systems are much harder for telcos to monetise.

An example might be a government encouraging adoption of electric vehicles. Cars and charging points are “things” which require data connections. At one level, an IoT application may simply guide drivers to their closest available power-source, but a higher-level “societal” application will collate data from both the IoT network and other sources. Thus data might also flow from bus and train networks, as well as traffic sensors, pollution monitors and even fitness trackers for walking and cycling, to see overall shifts in transport habits and help “nudge” commuters’ behaviour through pricing or other measures. In that context, the precise networks used to connect to the end-points become obscured in the other layers of software and service – although they remain essential building blocks.

Figure 3: Characterising the difference between M2M and IoT across six domains

Source: STL Partners, More With Mobile

(Note: the Future of the Network research stream generally avoids using vague and loaded terms like “digital” and “OTT”. While concise, we believe they are often used in ways that guide readers’ thinking in wrong or unhelpful directions. Words and analogies are important: they can lead or mislead, often sub-consciously.)

Often, it seems that the word “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Some may actually involve analogue technology as well as digital. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords, are becoming important.

Figure 4: ‘Digital’ and IoT convergence

Source: STL Partners, More With Mobile

We will also likely see many hybrid solutions emerging, for example where dedicated devices are combined with smartphones/PCs for particular functions. Thus a “digital home” service may link alarms, heating sensors, power meters and other connections via a central hub/console – but also send alerts and data to a smartphone app. It is already quite common for consumer/business drones to be controlled via a smartphone or tablet.

In terms of connectivity, it is also worth noting that “M2M” generally just refers to the use of conventional cellular modems and networks – especially 2G/3G. IoT expands this considerably: as well as future 5G networks and technologies specifically designed with new use-cases in mind, we are also seeing the emergence of a huge range of dedicated 4G variants, plus new purpose-designed LPWAN platforms. IoT also intersects with the growing range of local/capillary network technologies – which are often overlooked in conventional discussions about M2M.

Figure 5: Selected Internet of Things service areas

Source: STL Partners

The larger the number…

…the less relevance and meaning it has. We often hear of an emerging world of 20bn, 50bn, even trillions of devices being “networked”. While making for good headlines and press-releases, such numbers can be distracting.

While we will definitely be living in a transformed world, with electronics around us all the time – sensors, displays, microphones and so on – that does not easily translate into opportunities for telecom operators. The correct role for such data and forecasts is in the context of a particular addressable opportunity – otherwise one risks counting toasters, alongside sensors in nuclear power stations. As such, this report does not attempt to compete in counting “things” with other analyst firms, although references are made to approximate volumes.

For example, consider a typical large, modern building. It’s common to have temperature sensors, CCTV cameras, alarms for fire and intrusion, access control, ventilation, elevators and so forth. There will be an internal phone system, probably LAN ports at desks and WiFi throughout. In future it may have environmental sensors, smart electricity systems, charging points for electric vehicles, digital advertising boards and more. Yet the main impact on the telecom industry is just a larger Internet connection, and perhaps some dedicated lines for safety-critical systems like the fire alarm. There may well be 1,000 or 10,000 connected “things”, and yet for a cellular operator the building is more likely to be a future driver of cost (e.g. for in-building radio coverage for occupants’ phones) rather than extra IoT revenue. Few of the building’s new “things” will have SIM cards and service-based radio connections in any case – most will link into the fixed infrastructure in some way.

One also has to doubt some of the predicted numbers – there is considerable vagueness and hand-waving inherent in the forecasts. If a car in 2020 has 10 smart sub-systems, and 100 sensors reporting data, does that count as 1, 10 or 100 “things” connected? Is the key criterion that smart appliances in a connected home are bought individually – and therefore might be equipped with individual wide-area network connections? When such data points are then multiplied-up to give traffic forecasts, there are multiple layers of possible mathematical error.

This highlights the IoT quantification dilemma – everyone focuses on the big numbers, many of which are simple spreadsheet extrapolations, made without much consideration of the individual use-cases. And the larger the headline number, the less-likely the individual end-points will be directly addressed by telcos.
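
To make the counting problem concrete, here is a trivial, purely illustrative calculation (all figures hypothetical): the same vehicle fleet produces headline “connected things” totals two orders of magnitude apart, depending solely on whether one counts vehicles, sub-systems or sensors.

```python
# Illustrative only: hypothetical fleet figures showing how the counting
# convention (vehicle, sub-system or sensor) moves a headline "connected
# things" forecast by two orders of magnitude, before any real analysis of
# the addressable opportunity has been done.

vehicles = 100_000_000           # hypothetical global fleet of connected cars
subsystems_per_vehicle = 10      # e.g. powertrain, telematics, infotainment...
sensors_per_vehicle = 100

for convention, per_vehicle in [("per vehicle", 1),
                                ("per sub-system", subsystems_per_vehicle),
                                ("per sensor", sensors_per_vehicle)]:
    print(f"Counting {convention}: {vehicles * per_vehicle:,} 'things'")

# The same cars yield 100 million, 1 billion or 10 billion "things" -
# and any traffic or revenue forecast built on top inherits that ambiguity.
```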

 

  • Executive Summary
  • Introduction
  • Connectivity in the wider M2M/IoT context
  • The larger the number…
  • The IoT network technology landscape
  • Overview – it’s not all cellular
  • The emergence of LPWANs & telcos’ involvement
  • The capillarity paradox: ARPU vs. addressability
  • Where does WiFi fit?
  • What will the impact of 5G be?
  • Other technology considerations
  • Strategic considerations
  • Can telcos compete in IoT without connectivity?
  • Investment vs. service offer
  • Regulatory considerations
  • Are 3GPP technologies being undermined?
  • Risks & threats
  • Conclusion

 

  • Figure 1: Telcos can only fully monetise “things” they can identify uniquely
  • Figure 2: The M2M Value Chain
  • Figure 3: Characterising the difference between M2M and IoT across six domains
  • Figure 4: ‘Digital’ and IoT convergence
  • Figure 5: Selected Internet of Things service areas
  • Figure 6: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 7: Wide-area IoT-related wireless technologies
  • Figure 8: Selected telco involvement with LPWAN
  • Figure 9: Telcos need to consider capillary networks pragmatically
  • Figure 10: Major telco types mapped to relevant IoT network strategies

Do network investments drive creation & sale of truly novel services?

Introduction

History: The network is the service

Before looking at how current network investments might drive future generations of telco-delivered services, it is worth considering some of the history, and examining how we got where we are today.

Most obviously, the original network build-outs were synonymous with the services they were designed to support. Both fixed and mobile operators started life as “phone networks”, with analogue or electro-mechanical switches. (Their earlier predecessors were designed to serve telegraphy and paging, respectively.) Cable operators began as conduits for analogue TV signals. These evolved to support digital switches of various types, as well as using IP connections internally.

From the 1980s onwards, it was hoped that future generations of telecom services would be enabled by, and delivered from, the network itself – hence acronyms like ISDN (Integrated Services Digital Network) and IN (Intelligent Network).

But the earliest signs that “digital services” might come from outside the telecom network were evident even at that point. Large companies built up private networks to support their own phone systems (PBXs). Various 3rd-party “value-added networks” (VAN) and “electronic data interchange” (EDI) services emerged in industries such as the automotive sector, finance and airlines. And from the early 1990s, consumers started to get access to bulletin boards and early online services like AOL and CompuServe, accessed using dial-up modems.

And then, around 1994, the first mass-market web browsers were introduced, and the model of Internet access and ISPs took off, initially with narrowband connections using modems, but then swiftly evolving to ADSL-based broadband. From the mid-1990s onwards, the bulk of new consumer “digital services” were web-based, or used other Internet protocols such as email and private messaging. At the same time, businesses evolved their own private data networks (using telco “pipes” such as leased lines, frame relay and the like), supporting their growing client/server computing and networked-application needs.

Figure 1: In recent years, most digital services have been “non-network” based

Source: STL Partners

For fixed broadband, Internet access and corporate data connections have mostly dominated ever since, with rare exceptions such as Centrex phone and web-hosting services for businesses, or alarm-monitoring for consumers. The first VoIP-based carrier telephony service only emerged in 2003, and uptake has been slow and patchy – there is still a dominance of old, circuit-based fixed phone connections in many countries.

More recently, a few more “fixed network-integrated” offers have evolved – cloud platforms for businesses’ voice, UC and SaaS applications, content delivery networks, and assorted consumer-oriented entertainment/IPTV platforms. And in the last couple of years, operators have started to use their broadband access for a wider array of offers such as home-automation, or “on-boarding” Internet content sources into set-top box platforms.

The mobile world started evolving later – mainstream cellular adoption only really started around 1995. In the mobile world, most services prior to 2005 were either integrated directly into the network (e.g. telephony, SMS, MMS) or provided by operators through dedicated service delivery platforms (e.g. DoCoMo iMode, and Verizon’s BREW store). Some early digital services such as custom ringtones were available via 3rd-party channels, but even they were typically charged and delivered via SMS. The “mobile Internet” between 1999-2004 was delivered via specialised WAP gateways and servers, implemented in carrier networks. The huge 3G spectrum licence awards around 2000-2002 were made on the assumption that telcos would continue to act as creators or gatekeepers for the majority of mobile-delivered services.

It was only around 2005-6 that “full Internet access” started to become available for mobile users, both for those with early smartphones such as Nokia/Symbian devices, and via (quite expensive) external modems for laptops. In 2007 we saw two game-changers emerge – the first-generation Apple iPhone, and Huawei’s USB 3G modem. Both catalysed the wide adoption of the consumer “data plan” – hitherto almost unknown. By 2010, there were virtually no new network-based services, while the “app economy” and “vanilla” Internet access started to dominate mobile users’ behaviour and spending. Even non-Internet mobile services such as BlackBerry BES were offered via alternative non-telco infrastructure.

Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7

Source: Disruptive Analysis

By 2013, there had still been very few successful mobile digital-services offers that were actually anchored in cellular operators’ infrastructure. There had been a few positive signs in the M2M sphere and in wholesaled SMS APIs, but other integrated propositions such as mobile network-based TV had largely failed. Once again the transition to IP-based carrier telephony has been slow – VoLTE is gaining grudging acceptance more from necessity than desire, while “official” telco messaging services like RCS have been abject failures. Neither can be described as “digital innovation”, either – there is little new in them.

The last two years, however, have seen the emergence of some “green shoots” for mobile services. Some new partnering / charging models have borne fruit, with zero-rated content/apps becoming quite prevalent, and a handful of developer platforms finally starting to gain traction, offering network-based features such as location awareness. Various M2M sectors, such as automotive connectivity and some smart metering, have evolved. But the bulk of mobile “digital services” have been geared around iOS and Android apps, anchored in the cloud rather than telcos’ networks.

So in 2015, we are in a situation where the majority of “cool” or “corporate” services in both mobile and fixed worlds owe little to “the network” beyond fast IP connectivity: the feared, mythical (and factually incorrect) “dumb pipe”. Connected “general-purpose” devices like PCs and smartphones are optimised for service delivery via the web and mobile apps. Broadband-connected TVs are partly used for operator-provided IPTV, but also for so-called “OTT” services such as Netflix.

And future networks and novel services? As discussed below, there are some positive signs stemming from virtualisation and some new organisational trends at operators to encourage innovative services – but it is not yet clear that they will be enough to overcome the open Internet’s sustained momentum.

What are so-called “digital services”?

It is impossible to visit a telecoms conference, or read a vendor press-release, without being bombarded by the word “digital” in a telecom context. Digital services, digital platforms, digital partnerships, digital agencies, digital processes, digital transformation – and so on.

It seems that despite the first digital telephone exchanges being installed in the 1980s and digital computing being de rigueur since the 1950s, the telecoms industry’s marketing people have decided that 2015 is when the transition really occurs. But when the chaff is stripped away, what does it really mean, especially in the context of service innovation and the network?

Often, it seems that “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords, are becoming important.

And in other instances the “digital” phrases relate to relatively normal services – but deployed and managed in a much more efficient and automated fashion. This is quite important, as a lot of older services still rely on “analogue” processes – manual configuration, physical “truck rolls” to install and commission, and high “touch” from sales or technical support people to sell and operate, rather than self-provisioning and self-care through a web portal. Here, the correct term is perhaps “digital transformation” (or even more prosaically simply “automation”), representing a mix of updated IP-based networks, and more modern and flexible OSS/BSS systems to drive and bill them.

STL identifies three separate mechanisms by which network investments can impact creation and delivery of services:

  • New networks directly enable the supply of wholly new services. For example, some IoT services or mobile gaming applications would be impossible without low-latency 4G/5G connections, more comprehensive coverage, or automated provisioning systems.
  • Network investment changes the economics of existing services, for example by removing costly manual processes, or radically reducing the cost of service delivery (e.g. fibre backhaul to cell sites).
  • Network investment occurs hand-in-hand with other changes, thus indirectly helping drive new service evolution – such as development of “partner on-boarding” capabilities or API platforms, which themselves require network “hooks”.

While the future will involve a broader set of content/application revenue streams for telcos, it will also need to support more, faster and differentiated types of data connections. Top of the “opportunity list” is the support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. Many of these will not involve connection via the “public Internet” and therefore there is a possibility for new forms of connectivity proposition or business model – faster- or lower-powered networks, or perhaps even the much-discussed but rarely-seen monetisation of “QoS” (Quality of Service). Even if not paid for directly, QoS could perhaps be integrated into compelling packages and data-service bundles.

There is also the potential for more “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network and “orchestrated” appropriately by the operator. (We covered this area in depth in the recent Telco 2.0 brief on Mobile Edge Computing: How 5G is Disrupting Cloud and Network Strategy Today.)

In other words, virtualisation and the “software network” might allow truly new services, not just providing existing services more easily. That said, even if the answer is that the network could make a large-enough difference, there are still many extra questions about timelines, technology choices, business models, competitive and regulatory dynamics – and the practicalities and risks of making it happen.

Part of the complexity is that many of these putative new services will face additional sources of competition and/or substitution by other means. A designer of a new communications service or application has many choices about how to turn the concept into reality. Basing network investments on specific predictions of narrow services therefore carries a huge amount of risk, unless those services are agreed clearly upfront.

But there is also another latent truth here: without ever-better (and more efficient) networks, the telecom industry is going to get further squeezed anyway. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Businesses and governments will look to manage their networking and communications costs – and may get access to dark fibre or spectrum to build their own networks, if commercial services don’t continue to improve in terms of price-performance. New connectivity options are springing up too, from WiFi to drones to device-to-device connections.

In other words: some network investment will be “table stakes” for telcos, irrespective of any new digital services. In many senses, the new propositions are “upside” rather than the fundamental basis justifying capex.

 

  • Executive Summary
  • Introduction
  • History: The network is the service
  • What are so-called “digital services”?
  • Service categories
  • Network domains
  • Enabler, pre-requisite or inhibitor?
  • Overview
  • Virtualisation
  • Agility & service enablement
  • More than just the network: lead actor & supporting cast
  • Case-studies, examples & counter-examples
  • Successful network-based novel services
  • Network-driven services: learning from past failures
  • The mobile network paradox
  • Conclusion: Services, agility & the network
  • How do so-called “digital” services link to the network?
  • Which network domains can make a difference?
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In recent years, most digital services have been “non-network” based
  • Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7
  • Figure 3: Network spend both “enables” & “prevents inhibition” of new services
  • Figure 4: Virtualisation brings classic telco “Network” & “IT” functions together
  • Figure 5: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 6: Service agility is multi-faceted. Network agility is a core element
  • Figure 7: Using Big Data Analytics to Predictively Cache Content
  • Figure 8: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 9: Mapping network investment areas to service opportunities

Key Questions for NextGen Broadband Part 1: The Business Case

Introduction

It’s almost a cliché to talk about “the future of the network” in telecoms. We all know that broadband and network infrastructure is a never-ending continuum that evolves over time – its “future” is continually being invented and reinvented. We also all know that no two networks are identical, and that despite standardisation there are always specific differences, because countries, regulations, user-bases and legacies all vary widely.

But at the same time, the network clearly matters still – perhaps more than it has for the last two decades of rapid growth in telephony and SMS services, which are now dissipating rapidly in value. While there are certainly large swathes of the telecom sector benefiting from content provision, commerce and other “application-layer” activities, it is also true that the bulk of users’ perceived value is in connectivity to the Internet, IPTV and enterprise networks.

The big question is whether CSPs can continue to convert that perceived value from users into actual value for the bottom-line, given the costs and complexities involved in building and running networks. That is the paradox.

While the future will continue to feature a broader set of content/application revenue streams for telcos, it will also need to support not just more and faster data connections but also to cope with a set of new challenges and opportunities. Top of the list is support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. There is a significant chance that many of these will not involve connection via the “public Internet”, and therefore there is a possibility of new forms of connectivity proposition evolving – faster- or lower-powered networks, or perhaps even the semi-mythical “QoS”, which, if not paid for directly, could perhaps be integrated into compelling packages and data-service bundles. There is also the potential for “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network and “orchestrated” appropriately by the operator. But does this add more value than investing in more web/OTT-style applications and services, de-coupled from the network?

Again, this raises questions about technology, business models – and the practicalities of making it happen.

This plays directly into the concept of the revenue “hunger gap” we have analysed for the past two years – without ever-better (but more efficient) networks, the telecom industry is going to get further squeezed. While service innovation is utterly essential, it also seems to be slow-moving and patchy. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Depending on the trajectory of regulatory change, we may also see more consolidation among parts of the service provider industry, more quad-play networks, more sharing and wholesale models.

We also see communications networks and applications permeating deeper into society and government. There is a sense among some policymakers that “telecoms is too important to leave up to the telcos”, with initiatives like Smart Cities and public-safety networks often becoming decoupled from the mainstream of service providers. There is an expectation that technology – and by extension, networks – will enable better economies, improved healthcare and education, safer and more efficient transport, mechanisms for combatting crime and climate change, and new industries and jobs, even as old ones become automated and robotised.

Figure 1 – New services are both network-integrated & independent

 

Source: STL Partners

And all of this generates yet more uncertainty, with yet more questions – some about the innovations needed to support these new visions, but also whether they can be brought to market profitably, given the starting-point we find ourselves at, with fragmented (yet growing) competition, regulatory uncertainty, political interference – and often, internal cultural barriers within the CSPs themselves. Can these be overcome?

A common theme from the section above is “Questions”. This document – and a forthcoming “sequel” – is intended to group, lay out and introduce the most important ones. Most observers tend to focus on just a few areas of uncertainty, but in setting up the next year or so of detailed research, Telco 2.0 wants to list and articulate all of the hottest issues in full. Only once they are collated can we start to work out the priorities – and inter-dependencies.

Our belief is that all of the detailed questions on “Future Networks” can, in fact, be tied back to one of two broader, overarching themes:

  • What are the business cases and operational needs for future network investment?
  • Which disruptions (technological or other) are expected in the future?

The business case theme is covered in this document. It combines future costs (spectrum, 4G/5G/fibre deployments, network-sharing, virtualisation, BSS/OSS transformation etc.) and revenues (data connectivity, content, network-integrated service offerings, new Telco 2.0-style services and so on). It also encompasses what is essential to make the evolution achievable, in terms of organisational and cultural transformation within telcos.

A separate Telco 2.0 document, to be published in coming weeks, will cover the various forthcoming disruptions. These are expected to include new network technologies that will ultimately coalesce to form 5G mobile and new low-power wireless, as well as FTTx and DOCSIS cable evolution. In addition, virtualisation in both NFV and SDN guises will be hugely transformative.

There is also a growing link between mobile and fixed domains, reflected in quad-play propositions, industry consolidation, and the growth of small-cells and WiFi with fixed-line backhaul. In addition, to support future service innovation, there need to be adequate platforms for both internal and external developers, as well as a meaningful strategy for voice/video which fits with both network and end-user trends. Beyond the technical, additional disruption will be delivered by regulatory change (for example on spectrum and neutrality), and also a reshaped vendor landscape.

The remainder of this report lays out the first five of the Top 10 most important questions for the Future Network. We can’t give definitive analyses, explanations or “answers” in a report of this length – and indeed, many of them are moving targets anyway. But by taking a holistic approach to laying out each question properly – where it comes from, and what the “moving parts” are – we help to define the landscape. The objective is to help management teams apply those same filters to their own organisations, understand how costs can be controlled and revenues garnered, see where consolidation and regulatory change might help or hinder, and deal with users’ and governments’ increasing expectations.

The 10 Questions also lay the ground for our new Future Network research stream, forthcoming publications and comment/opinion.

Overview: what is the business case for Future Networks?

As later sections of both this document and the second in the series cover, there are various upcoming technical innovations in the networking pipeline. Numerous advanced radio technologies underpin 4.5G and 5G, there is ongoing work to improve fibre and DSL/cable broadband, virtualisation promises much greater flexibility in carrier infrastructure and service enablement, and so on. But all those advances are predicated on either (ideally) more revenues, or at least reduced costs to deploy and operate. All require economic justification for investment to occur.

This is at the core of the Future Networks dilemma for operators – what is the business case for ongoing investment? How can the executives, boards of directors and investors be assured of returns? We all know about the ongoing shift of business & society online, the moves towards smarter cities and national infrastructure, changes in entertainment and communication preferences and, of course, the Internet of Things – but how much benefit and value might accrue to CSPs? And is that value driven by network investments, or should telecom companies re-focus their investments and recruitment on software, content and the cloud?

This is not a straightforward question. There are many in the industry who assert that “the network is the key differentiator & source of value”, while others counter that it is a commodity and that “the real value is in the services”.

What is clear is that better/faster networks will be needed in any case, to achieve some of the lofty goals that are being suggested for the future. However, it is far from clear how much of the overall value-chain profit can be captured from just owning the basic machinery – recent years have shown a rapid de-coupling of network and service, apart from a few areas.

In the past, networks largely defined the services offered – most notably broadband access, phone calls and SMS, as well as cable TV and IPTV. But with the ubiquitous rise of Internet access and service platforms/gateways, an ever-increasing amount of service “logic” is located on the web, or in the cloud – not enshrined in the network itself. This is an important distinction – some services are abstracted and designed to be accessed from any network, while others are intimately linked to the infrastructure.

Over the last decade, the prevailing shift has been towards network-independent services. In many ways “the web has won”. Potentially this trend may reverse in future, though, as servers and virtualised, distributed cloud capabilities get pushed down into localised network elements. That, however, brings its own new complexities, uncertainties and challenges – it is a brave (or foolhardy) telco CEO who would bet the company on new in-network service offers alone. We will also see API platforms expose network “capabilities” to the web/cloud – for example, W3C is working on standards to allow web developers to gain insights into network congestion, or users’ data-plans.
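To make that last point concrete, the short sketch below shows how a web application might read the connection hints a browser exposes. It is a minimal illustration only, assuming a Chromium-style implementation of the draft Network Information API (navigator.connection, with its effectiveType, downlink, rtt and saveData properties); availability varies by browser, and the describeNetwork helper is purely illustrative rather than part of any operator platform.

    // Minimal sketch: reading network hints via the draft Network Information API.
    // Assumes a browser that exposes navigator.connection (e.g. Chromium-based);
    // the API is not in the standard TypeScript DOM typings, hence the 'any' cast.
    const connection = (navigator as any).connection;

    function describeNetwork(): string {
      if (!connection) {
        return "No network information exposed by this browser.";
      }
      // effectiveType: rough class of connection ('slow-2g' | '2g' | '3g' | '4g')
      // downlink: estimated bandwidth in Mbit/s; rtt: estimated round-trip time in ms
      // saveData: true if the user has asked for reduced data usage (a data-plan hint)
      return `type=${connection.effectiveType}, downlink=${connection.downlink}Mbps, ` +
             `rtt=${connection.rtt}ms, saveData=${connection.saveData}`;
    }

    // React to changes, e.g. to lower video bitrate when conditions degrade.
    connection?.addEventListener("change", () => {
      console.log("Network conditions changed:", describeNetwork());
    });

    console.log(describeNetwork());

An application-layer video player could use hints like these to pre-emptively adjust quality – the kind of network-to-web “capability exposure” referred to above.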

But currently, the trend is for broadband access and (most) services to be de-coupled. Nonetheless, some operators have been able to make clever pricing, distribution and marketing decisions (supported by local market conditions and/or regulation) that make bundles desirable.

US operators, for example, have generally fared better than European CSPs, in what should have been comparably-mature markets. But was that due to a faster shift to 4G networks? Or other factors, such as European telecom fragmentation and sub-scale national markets, economic pressures, or perhaps a different legacy base? Did the broad European adoption of pre-paid (and often low-ARPU) mobile subscriptions make it harder to justify investments on the basis of future cashflows – or was it more about the early insistence that 2.6GHz was going to be the main “4G band”, with its limitations later coming back to bite people? It is hard to tease apart the technology issues from the commercial ones.

Similar differences apply in the fixed-broadband world. Why has adoption and typical speed varied so much? Why have some markets preferred cable to DSL? Why are fibre deployments patchy and very nation-specific? Is it about the technology involved – or the economy, topography, government policies, or the shape of the TV/broadcast sector?

Understanding these issues – and, once again, articulating the questions properly – is core to understanding the future for CSPs’ networks. We are in the middle of 4G rollout in most countries, with operators looking at the early requirements for 5G. SDN and NFV are looking important – but their exact purpose, value and timing still remain murky, despite the clear promises. Can fibre rollouts – FTTC or FTTH – still be justified in a world where TV/video spend is shifting away from linear programming and towards online services such as Netflix?

Given all these uncertainties, it may be that network investments slow down – or else consolidation, government subsidy or other top-level initiatives will be needed to stimulate them. On the other hand, it could be that reduced capex and opex – perhaps through outsourcing, sharing or software-based platforms, or even open-source technology – make the numbers work out well, even for raw connectivity. Certainly, the last few years have seen rising expenditure by end-users on mobile broadband, even as it has contributed to the erosion of legacy services such as telephony and SMS by enabling more modern/cheaper rivals. We have also seen a shift to lower-cost network equipment and software suppliers, and an emphasis on “off-the-shelf” components, or open interfaces, to reduce lock-in and encourage competition.

The following sub-sections each frame a top-level, critical question relating to the business case for Future Networks:

  • Will networks support genuinely new services & enablers/APIs, or just faster/more-granular Internet access?
  • Speed, coverage, performance/QoS… what actually generates network value? And does this derive from customer satisfaction, new use-cases, or other sources?
  • Does quad-play and fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation and culture: what needs to change to support future network investments?

 

  • Executive Summary
  • Introduction
  • Overview: what is the business case for Future Networks?
  • Supporting new services or just faster Internet?
  • Speed, coverage, quality…what is most valuable?
  • Does quad-play & fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation & culture: what changes?
  • Conclusions

 

  • Figure 1 – New services are both network-integrated & independent
  • Figure 2 – Mobile data device & business model evolution
  • Figure 3 – Some new services are directly enabled by network capabilities
  • Figure 4 – Network investments ultimately need to map onto customers’ goals
  • Figure 5 – Customers put a priority on improving indoor/fixed connectivity
  • Figure 6 – Notional “coverage” does not mean enough capacity for all apps
  • Figure 7 – Different operator teams have differing visions of the future
  • Figure 8 – “Software telcos” may emulate IT’s “DevOps” organisational dynamic

 

Free-T-Mobile: Disruptive Revolution or a Bridge Too Far?

Free’s Bid for T-Mobile USA 

The future of the US market and its 3rd and 4th operators has been a long-running saga. The market, the world’s richest, remains dominated by the duopoly of AT&T and Verizon Wireless. It was long expected that Softbank’s acquisition of Sprint would herald disruption, but in the event, T-Mobile was simply quicker to the punch.

Since the launch of T-Mobile’s “uncarrier” price-war strategy, we have identified signs of a “Free Mobile-like” disruption event, for example, substantial net-adds for the disruptor, falling ARPUs, a shakeout of MVNOs and minor operators, and increased industry-wide subscriber growth. However, other key indicators like a rapid move towards profitability by the disruptor are not yet in evidence, and rather than industry-wide deflation, we observe divergence, with Verizon Wireless increasing its ARPU, revenues, and margins, while AT&T’s are flat, Sprint’s flat to falling, and T-Mobile’s plunging.

This data is summarised in Figure 1.

Figure 1: Revenue and margins in the US. The duopoly is still very much with us

 

Source: STL Partners, company filings

Compare and contrast Figure 2, which shows the fully developed disruption in France. 

 

Figure 2: Fully-developed disruption. Revenue and margins in France

 

Source: STL Partners, company filings

T-Mobile: the state of play in Q2 2014

When reading Figure 1, you should note that T-Mobile’s Q2 2014 accounts contain a negative expense item of $747m, reflecting a spectrum swap with Verizon Wireless, which flatters their margin. Without it, the operating margin would be 2.99%, about a third of Sprint’s. Poor as this is, it is at least positive territory, after a Q1 in which T-Mobile lost money. It is not quite true to say that T-Mobile only made it to profitability thanks to the one-off spectrum deal: excluding it, the carrier would still have made $215m in operating income in Q2, a $243m swing from the $28m net loss in Q1. This is explained by a $223m narrowing of T-Mobile’s losses on device sales, as shown in Figure 3, and may explain why the earnings release emphasises adjusted EBITDA rather than profits, despite it being a positive quarter.

Figure 3: T-Mobile’s return to underlying profitability – caused by moderating its smartphone bonanza somewhat

Source: STL Partners, company filings
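As a rough sanity check on the Q2 2014 figures quoted above, the sketch below reproduces the arithmetic: stripping out the $747m spectrum-swap item leaves the quoted $215m of underlying operating income, a $243m swing from the $28m Q1 net loss, and the revenue base is inferred from the quoted 2.99% margin rather than being a separately sourced data point.

    // Sanity-check of the T-Mobile US Q2 2014 figures quoted in the text (all $m).
    const spectrumSwapGain = 747;        // one-off negative expense item that flatters the margin
    const underlyingOpIncomeQ2 = 215;    // operating income excluding the spectrum swap
    const reportedOpIncomeQ2 = underlyingOpIncomeQ2 + spectrumSwapGain;  // ≈ 962 as reported
    const q1NetLoss = -28;               // the text compares this net loss with Q2 operating income
    const swing = underlyingOpIncomeQ2 - q1NetLoss;                      // 215 - (-28) = 243
    const underlyingMargin = 0.0299;     // quoted operating margin excluding the swap
    const impliedRevenue = underlyingOpIncomeQ2 / underlyingMargin;      // ≈ 7,190, i.e. ~$7.2bn

    console.log({ reportedOpIncomeQ2, swing, impliedRevenue });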

T-Mobile management likes to cite its ABPU (Average Billings per User) metric in preference to ARPU; ABPU includes the hire-purchase charges on device sales under its quick-upgrade plans. However, as Figure 4 shows, this is less exciting than it sounds. The T-Mobile management story is that as service prices, and hence ARPU, fall in order to bring in net-adds, payments for device sales “decoupled” from service plans will rise and take up the slack. They are, so far, only just doing so. Given that T-Mobile is losing money on device pricing, this is no surprise.
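To illustrate the mechanism – with purely hypothetical numbers, not T-Mobile’s actual figures – ABPU simply adds monthly equipment-instalment billings back on top of service ARPU, so a falling service ARPU can look flat on an ABPU basis even when the underlying devices are sold at a loss.

    // Hypothetical illustration of ARPU vs ABPU (illustrative numbers only).
    interface SubscriberMonth {
      serviceRevenue: number;          // $ per subscriber, billed for the service plan
      deviceInstalmentBilling: number; // $ per subscriber, billed under equipment plans
    }

    function arpu(s: SubscriberMonth): number {
      return s.serviceRevenue;
    }

    function abpu(s: SubscriberMonth): number {
      // ABPU = service ARPU + equipment-instalment billings per user
      return s.serviceRevenue + s.deviceInstalmentBilling;
    }

    const lastYear: SubscriberMonth = { serviceRevenue: 54, deviceInstalmentBilling: 6 };
    const thisYear: SubscriberMonth = { serviceRevenue: 49, deviceInstalmentBilling: 11 };

    // ARPU falls (54 -> 49) while ABPU stays flat (60 -> 60): device billings "take up
    // the slack", but only help profitability if they at least cover device costs.
    console.log(arpu(lastYear), abpu(lastYear), arpu(thisYear), abpu(thisYear));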

 

  • Executive Summary
  • Free’s Bid for T-Mobile USA
  • T-Mobile: the state of play in Q2 2014
  • Free-Mobile: the financials
  • Indicators of a successful LBO
  • Free.fr: a modus operandi for disruption
  • Surprise and audacity
  • Simple products
  • The technical edge
  • Obstacles to the Free modus operandi
  • Spectrum
  • Fixed-mobile synergy
  • Regulation
  • Summary
  • Two strategic options
  • Hypothesis one: change the circumstances via a strategic deal with the cablecos
  • Hypothesis two: 80s retro LBO
  • Problems that bite whichever option is taken
  • The other shareholders
  • Free’s management capacity and experience
  • Conclusion

 

  • Figure 1: Revenue and margins in the US. The duopoly is still very much with us
  • Figure 2: Fully-developed disruption. Revenue and margins in France
  • Figure 3: T-Mobile’s return to underlying profitability – caused by moderating its smartphone bonanza somewhat
  • Figure 4: Postpaid ARPU falling steadily, while ABPU just about keeps up
  • Figure 5: T-Mobile’s supposed “decoupling” of devices from service has extended $3.5bn of credit to its customers, rising at $1bn/quarter
  • Figure 6: Free’s valuation of T-Mobile is at the top end of a rising trend
  • Figure 7: Example LBO
  • Figure 8: Free-T-Mobile in the context of notable leveraged buyouts
  • Figure 9: Free Mobile’s progress towards profitability has been even more impressive than its subscriber growth