Convergence, coexistence or competition: How will 5G and Wi-Fi 6 interact?

Introduction: Wi-Fi vs cellular

The debate around Wi-Fi and cellular convergence is not new. However, the introduction of the next generation of wireless technologies, Wi-Fi 6 and 5G, has reignited it. Further impetus for discussion has been provided by industry bodies, including the Wi-Fi Alliance, IEEE, Wireless Broadband Alliance (WBA), Next Generation Mobile Networks Alliance (NGMN) and 3GPP, which are developing standards to enable convergence between 5G and Wi-Fi.

5G, introduced by 3GPP’s release 15 in 2018, and deployed internationally by telecoms operators since 2019, is considered a significant upgrade to 4G and LTE. Its improved capabilities, such as increased speed, coverage, reliability and security, promise to enable a host of new use cases across a wide range of industries.

Simultaneously, Wi-Fi has evolved into its sixth generation, with Wi-Fi 6 technology emerging in 2019. This generation can provide speeds around 40% higher than its predecessor, as well as improved visibility and transparency for better network control and management. Some of the key enhancements are detailed in the figure below.

Figure 1: There are a number of key differences between next generation Wi-Fi and cellular connectivity


Source: STL Partners

The market context for convergence

Industry bodies have been promoting convergence

The Wireless Broadband Alliance (WBA) and the Next Generation Mobile Networks Alliance (NGMN) produced a joint report in 2021 promoting future convergence between Wi-Fi and 5G. The report highlights the merits of convergence, noting a number of use cases and verticals that stand to benefit from closer alignment between the two technologies.

Further, 3GPP has increasingly included standards that enable convergence between Wi-Fi and cellular with each new release. Release 8 introduced the ‘access network discovery and selection function’ (ANDSF), which allowed user equipment to discover non-3GPP access networks, including Wi-Fi. In 2018, release 15 added optional access to native 5G services via these non-3GPP access networks. Most recently, release 16 introduced ‘access traffic steering, splitting and switching’ (ATSSS), allowing connectivity over both 3GPP and non-3GPP access networks simultaneously, which is a key enabler of the resilience model of convergence.

Similarly, the IEEE, sponsored by the Wi-Fi Alliance, has been discussing potential pathways to convergence for a number of years. However, these bodies are less vocal about future convergence possibilities, likely given Wi-Fi’s current dominance in the provision of enterprise wireless connectivity.
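To illustrate the steering idea behind ATSSS, the sketch below shows a simple ‘active-standby’ policy in which traffic prefers Wi-Fi and falls back to 5G when the Wi-Fi link degrades. This is purely illustrative: the link names, threshold and decision logic are our own assumptions, not a 3GPP-defined API.

```python
# Illustrative "active-standby" steering rule in the spirit of ATSSS:
# traffic prefers one access (e.g. Wi-Fi) and switches to the other
# (e.g. 5G) when the preferred link degrades. Thresholds and link names
# are assumptions for illustration, not 3GPP-defined values.

from dataclasses import dataclass

@dataclass
class AccessLink:
    name: str           # e.g. "wifi" or "5g"
    available: bool     # link currently usable
    rssi_dbm: float     # measured signal strength

def steer(preferred: AccessLink, fallback: AccessLink,
          rssi_floor_dbm: float = -75.0) -> str:
    """Return the name of the access network that should carry traffic."""
    if preferred.available and preferred.rssi_dbm >= rssi_floor_dbm:
        return preferred.name
    if fallback.available:
        return fallback.name
    return preferred.name if preferred.available else "none"

wifi = AccessLink("wifi", available=True, rssi_dbm=-82.0)  # degraded link
nr5g = AccessLink("5g", available=True, rssi_dbm=-70.0)
print(steer(wifi, nr5g))  # prints "5g": Wi-Fi is below the floor
```

In a real ATSSS deployment the equivalent decision is taken per traffic flow, with rules provisioned by the 5G core, rather than by a single device-side function as sketched here.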

Spectrum auctions

The possibility of convergence has been further supported in recent years by releases of spectrum in the 6GHz band for unlicensed use in the USA, UK, South Korea and other major markets. Spectrum in the same 6GHz range can also be used to support 5G connectivity, in addition to the existing 5GHz band. The ability to share the same spectrum could, in theory, promote closer coupling of 5G and Wi-Fi. However, given the similar propagation characteristics of the two technologies in this band, it remains to be seen whether the increasing availability of spectrum will push convergence forward.
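The ‘similar propagation characteristics’ point can be checked with a first-order free-space path loss (FSPL) comparison: at the same distance, moving from the 5GHz band to the 6GHz band adds under 2dB of loss. The sketch below is a simplified free-space calculation that ignores walls and other obstructions; the 50-metre distance is an assumed example.

```python
# Free-space path loss (FSPL) comparison for the 5 GHz and 6 GHz bands.
# FSPL(dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55
# First-order only: real indoor propagation adds wall and obstruction
# losses, but the frequency-driven difference stays small.

import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

d = 50.0  # metres: an assumed indoor/campus range
loss_5ghz = fspl_db(d, 5.0e9)
loss_6ghz = fspl_db(d, 6.0e9)
print(f"5 GHz: {loss_5ghz:.1f} dB, 6 GHz: {loss_6ghz:.1f} dB, "
      f"delta: {loss_6ghz - loss_5ghz:.2f} dB")  # delta is ~1.58 dB
```

The delta depends only on the frequency ratio (20·log10(6/5) ≈ 1.58dB), which is why coverage planning for the two bands is broadly comparable.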

There is a disconnect between theory and practice

While standards define what is possible, the purpose of industry bodies is to be future-focused, paving the way for the rest of the ecosystem to follow. What is possible in theory must be supported in practice, and the supply-side ecosystem, including network operators, system integrators (SIs), network equipment providers (NEPs) and hardware manufacturers, has a role to play if convergence is to become more widespread.

Similarly, for devices to access converged networks, they must be equipped with both 5G and Wi-Fi chips. While mobile phones support both connectivity types, the vast majority of connected devices that enterprises deploy are Wi-Fi only. Until 5G chips or modules become more widely available, and are used in a greater number of devices, convergence will likely remain confined to specific use cases: for example, those that depend on the mobility afforded by seamlessly ‘switching over’ from Wi-Fi to mobile, or highly mission-critical use cases in verticals such as manufacturing that can justify the investment in (private) 5G as a back-up to Wi-Fi. We discuss both of these use cases in more detail in the report. The full ecosystem must ultimately work in concert for convergence to become a realistic possibility for a larger number of enterprises.

 

Table of Contents

  • Executive Summary
    • Convergence is still immature on both the demand and supply sides
    • What do we mean by co-existence, convergence and competition?
  • Preface
  • Introduction
  • The market context for convergence
    • Industry bodies have been promoting convergence
    • Spectrum auctions
    • There is a disconnect between theory and practice
    • There are two key use cases for convergence
  • A future trend towards convergence is still immature
    • Regional differences in the maturity of 5G
    • Inconsistent definitions
    • Who manages convergence?
  • It is still too early to see high levels of demand for convergence from enterprise customers
    • Wi-Fi is the incumbent, 5G must overcome a number of barriers before it can become a genuine partner or alternative
    • Decisions regarding convergence are driven by industry characteristics
    • Supply side players must educate enterprise customers about convergence (if they believe it is beneficial to the enterprise)
  • Conclusion

Related research

Private and vertical cellular networks: Threats and opportunities


5G is catalysing demand for customisation

The arrival of 5G has catalysed a huge amount of interest in enterprise, government and “vertical” use-cases for cellular networks. Cellular technology is becoming ever more important and applicable for businesses, for diverse use-cases from factory automation, to better hospitality guest-services, to replacement of legacy two-way radios.

Some of this fits in with STL’s view of the Coordination Age, and the shift towards connectivity becoming part of wider, society-level or economy-level applications and solutions. However, in many ways it is more of an evolution of traditional enterprise use of private wireless solutions, but updated with newer and more-performant 5G radios. The future battleground is whether such coordination requires external services (and thus SPs), or whether the capabilities are best-delivered in-house on private networks.

For various reasons of cost, performance, accountability or guaranteed coverage, there is a drive towards greater customisation and control, often beyond that currently deliverable by traditional MNOs.

However, there is significant confusion between three things:

  • Mobile network services and applications sold to, or used by, industrial and enterprise customers
  • Mobile networks optimised, extended or virtualised for industrial and enterprise requirements
  • Mobile networks built exclusively for, or owned by, industrial companies and other enterprises

This report is a joint exercise between STL Partners and affiliate Disruptive Analysis, which has covered this sector in depth for almost 20 years. Its founder Dean Bubley runs workshops on private cellular and neutral-host networks, as well as undertaking private projects and speaking engagements advising operators, vendors, regulators and investors on business models, spectrum policy and market dynamics.

Enter your details below to request an extract of the report


What is a private mobile network?

This report primarily focuses on the third category – private mobile networks – although there is some overlap with the second, especially when techniques like network-slicing enter the discussion. There are different models of “private” too – from completely standalone networks that are entirely isolated from public mobile networks, to ones which use some dedicated infrastructure / management, alongside shared radio- or core-network elements provided by an MNO. They can be nationwide networks (for example, for utility grids), or highly localised, such as to a factory or hotel.

There are also various hybrids and nuances of all of this, such as private networks where certain functions are installed by, outsourced to, or managed by, telcos. It may be possible for users or devices to roam between private and public networks, for instance when a truck leaves a logistics facility with a local private network, and switches to the telco while it’s on the road.

Various government bodies – ranging from police forces to local council authorities – are also interested in creating private or shared 4G / 5G networks. Over the next 3-4 years, we can expect a wide diversity of approaches, and some very vague and fluid definitions from the industry.

Three building blocks for private networks

There are three main enablers (and numerous secondary drivers) behind the private network concept:

  • Availability of spectrum
  • Small cells and distributed radios
  • The move from 4G to 5G

A critical element in this is access to suitable spectrum for creating private networks. In recent years, many governments and regulatory authorities have started to make localised mobile licences available, suitable for covering enterprise sites, or wider areas such as cities. While private Wi-Fi and other networks have long been created with (free) unlicensed spectrum, this does not give the protections against contention and interference that more formal licensing enables. Other localised spectrum licences have been given for point-to-point fixed links, temporary outside broadcast & events, or other purposes – but not cellular networks for normal mobile users. There are also discussions ongoing about making more national or wide-area spectrum available, suitable for mobile use in certain specialised verticals such as utilities.

Small cells and other types of enterprise-grade radio access network (RAN) equipment are critical building blocks for private mobile infrastructure, particularly indoors or on small and medium campus sites. They need to be low-cost, easy to install and operate, and ideally integrated with other IT and networking systems. While small cells have been around for 20 years or more, they have often been hard to deploy and manage. We are also seeing further innovation around distributed/cloud RAN, which further increases the options for campus and in-building coverage systems.

5G – or more accurately the 5G era – changes the game in a number of ways. Firstly, IoT use-cases are becoming far more important, especially as analogue equipment and business processes become more connected and intelligent. Secondly, 5G brings new technical challenges, especially around the use of higher-frequency spectrum that struggles to go through walls – which highlights the paradox of telcos providing public network services on private property. Finally, with the advent of cloud-based and virtualised functions such as core networks, it is becoming easier to deploy and operate smaller infrastructures.

Some of the specialised skills needed to build and run cellular networks can be reduced with automation, although this remains a significant obstacle for enterprises. This will drive significant demand for new tiers and types of managed services provider for private cellular – some of which will be satisfied by telcos, but which will also be targeted by many others, from towercos to systems integrators to cloud/internet players.

It is worth stressing that this concept is not new. Private cellular networks have existed in small niches for 10-20 years. Railways have a dedicated version of 2G called GSM-R. Military squads and disaster-response teams can carry small localised base stations and controllers in their vehicles or even backpacks. Remote mines or oil-exploration sites have private wireless networks of various types. The author of this report first saw cellular small cells in 2000, and worked on projects around enterprise adoption of private 2G as early as 2005.

Private and vertical cellular networks: Threats and opportunities aims to clarify the concept of “private” networks. It explores the domain of business-focused cellular networks, where the enterprise has some degree of ownership or control over the infrastructure – and, sometimes, the radio network itself. The report then sets out the motivations and use cases for private networks, as well as the challenges and obstacles faced.

For details and inquiries, please see deanbubley.com or @disruptivedean on Twitter.

Table of contents

  • Executive Summary
  • Introduction
    • Public vs. non-public networks
    • Private network vs. private MVNO vs. slices
  • Motivations & use-cases for private networks
    • Business drivers for private cellular
    • Technical use-cases for private cellular
    • Industrial sites & IIoT
    • Enterprise/public in-building coverage
    • Neutral host networks (NHN)
    • Fixed 4G / 5G networks
  • Regulatory & spectrum issues
    • Other regulatory considerations
  • Building private networks – technology
    • Architectural choices, technology standards & industry bodies
  • The emerging private networks value chain
  • Conclusions & Recommendations
    • How large is the private network opportunity?
    • Challenges and obstacles for private networks
    • What is the implication for traditional telcos and MNOs?
    • Telcos’ relationship to project scope


Telco apps: What works?

Introduction

Part of STL Partners’ (Re)connecting with Consumers stream, this report analyses a selection of successful mobile apps run by telcos or their subsidiaries. It explains why mobile apps will continue to play a major role in the digital economy for the foreseeable future before considering the factors that have made particular telco apps successful. Most of the apps considered in the report are from Asia, primarily because operators in that region have typically been more aggressive in pursuing the digital services market than their counterparts elsewhere. Note, the list of apps analysed in this report is far from exhaustive – there are other successful telco-run apps on the market.

The ultimate goal of this report is to explain how apps can engage customers and give telcos greater traction with consumers. Although many apps are rarely used and quickly discarded, the most popular apps, such as Instagram, Spotify and YouTube, have become an integral part of the daily lives of hundreds of millions of people. Some apps, such as Uber and Google Maps, regularly provide people with services and/or information that make their lives much easier – getting a taxi or navigating through an unfamiliar city is now much easier than it used to be. Indeed, a well-designed app dedicated to a specific service can deliver both relevance and revenues.

This report builds on previous STL research, notably:

Can Netflix and Spotify make the leap to the top tier?

AI in customer services: It’s not all about chatbots

AI on the Smartphone: What telcos should do 


Why apps matter for telcos

Telcos’ most successful digital services, notably SMS, pre-date the smartphone app era. Even more recent triumphs, such as M-Pesa, the groundbreaking mobile money service in Kenya, were originally designed to work on feature phones. Many similar services, such as MTN Money and Orange Money, aimed at the large numbers of people without bank accounts in Africa and developing Asia, continue to be accessed largely through text-based menus via the SIM toolkit.

But the widespread adoption of smartphones in developed and developing markets alike means that telcos everywhere need to ensure all the consumer services they offer can be accessed via well-designed and intuitive apps with graphical user interfaces. By the end of 2017, there were 4.3 billion smartphones in use worldwide, according to Ericsson’s estimates. Moreover, smartphone adoption continues to rise rapidly, particularly in Africa, India and other developing countries. Ericsson reckons the number of smartphone subscriptions will reach 7.2 billion in 2023 (see Figure 3).

Figure 3: The number of smartphones in use is rising steadily across the world


Source: Ericsson Mobility Report, June 2018

Subscriptions associated with smartphones now account for around 60% of all mobile phone subscriptions, according to Ericsson, which says that 85% of all mobile phones sold in the first quarter of 2018 were smartphones.
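The Ericsson figures cited above (4.3 billion smartphones in use at the end of 2017, 7.2 billion smartphone subscriptions forecast for 2023) imply annual growth of roughly 9%, as the quick calculation below shows. Note the two figures are not strictly like-for-like, since one counts devices in use and the other subscriptions.

```python
# Implied compound annual growth rate (CAGR) of Ericsson's smartphone
# forecast: 4.3 billion at end-2017 to 7.2 billion forecast for 2023,
# i.e. six years of growth.

start, end, years = 4.3, 7.2, 6
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 9% per year
```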

With smartphones the default handset for people in developed markets and many developing markets, apps have become a major medium for interactions between consumers and service providers across the economy. Now approximately ten years old, the so-called app economy is worth tens of billions of dollars per annum.

Although there has been a backlash, as people’s smartphones get clogged up with apps, the sector still has considerable momentum.

The most popular apps, such as Uber and Amazon Shopping, combine ease of access (straightforward authentication), with ease-of-use and ease-of-payment, enabling them to attract tens of millions of users.

With some justification, proponents contend that apps will continue to be one of the main drivers of the digital economy for the foreseeable future. The broader app economy will be worth $6.3 trillion by 2021, up from $1.3 trillion in 2016, according to App Annie. Note, those figures include in-app ads and mobile commerce, as well as the revenues generated through app stores. In other words, this is the total value of the business conducted via apps, rather than the revenue accrued by app stores and developers. This dramatic forecast assumes the ongoing shift of physical transactions to the mobile medium continues apace: App Annie expects the value of mobile commerce transactions to rise from $344 per user in 2016 to $946 by 2021.
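Those App Annie forecasts imply very steep compound growth. A quick check of the implied annual rates over the 2016 to 2021 period:

```python
# Implied compound annual growth rates (CAGR) of the App Annie forecasts
# quoted above, each over the five years from 2016 to 2021.

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

app_economy = cagr(1.3, 6.3, 5)        # $1.3tn -> $6.3tn total app economy
commerce_per_user = cagr(344, 946, 5)  # $344 -> $946 m-commerce per user
print(f"App economy: {app_economy:.0%}/yr, "
      f"commerce per user: {commerce_per_user:.0%}/yr")
```

The total app economy figure implies around 37% growth per year, and per-user mobile commerce around 22% per year, which underlines how aggressive these forecasts were.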

Although most of the leading apps are free, many do generate a subscription fee or one-off sales. Annual consumer spending in app stores is set to rise 18% between 2016 and 2021 to reach $139 billion worldwide, according to specialist app analytics firm App Annie, which also forecasts the total time spent in apps will grow to 3.5 trillion hours in 2021, up from 1.6 trillion in 2016.

In reality, some of these aggressive forecasts may prove to be too bullish, as consumers begin to make greater use of messaging services and voice-activated speakers to interact with local merchants and purchase digital content and services.  Even so, it is clear that the leading mobile apps will continue to be a major consumer engagement tool for many brands and merchants well into the next decade. In some cases, such as Spotify or the fitness app Strava, the user has typically put significant effort into creating a personalised experience, helping to cement their loyalty.

In developed countries, some telcos, notably AT&T and Verizon, have belatedly and expensively acquired a major presence in the app economy by buying leading digital content producers and service providers. With the $85.4 billion acquisition of Time Warner, AT&T is now the owner of HBO Now, which was the third highest app by consumer spend in the US in 2017, according to App Annie. HBO Now also ranked fifth in Mexico and eighth in the world on this measure. Having acquired Yahoo! and AOL, and their apps, over the past few years, Verizon ranked eighth among companies in terms of downloads in the US in 2017.

The delicate transition from SIM toolkit to app

But expensive acquisitions are not the only way into the app economy. For telcos that have developed consumer services from the ground up, the rise of the smartphone offers opportunities to provide much richer functionality and a more intuitive interface, as well as cross-selling and up-selling. In Kenya, Safaricom has been expanding the mobile money transfer service M-Pesa into a much broader financial services proposition, while prodding users to switch from the SIM toolkit to the app, which can properly highlight M-Pesa’s wider proposition. At the same time, the telco has integrated M-Pesa into its customer service app, mySafaricom, helping it to promote its broader telecoms offering to frequent users of its mobile money services.

However, Safaricom is well aware that it needs to tread cautiously, continuing to cater for those customers who are comfortable with the SIM toolkit experience. Its softly-softly approach is to reassure Kenyans that they can always fall back on the SIM toolkit, if they don’t like the app.  In a Safaricom-sponsored article from August 2017, Emmanuel Chenze wrote the following on the online site, Android Kenya:

“For over a year now, Safaricom has had the mySafaricom application available on the Google Play Store for users to be able to better manage the services they receive from the telecommunications company. However, it wasn’t until March this year when the application was updated to include M-PESA.

“With M-PESA finally integrated, the over 1 million smartphone users can now take full advantage and transact even faster thanks to the app. While good ol’ SIM toolkit still works wonders and remains a good backup option when you’re not connected to the internet or when the mySafaricom app is acting up, using the application, which has since been updated to reflect Safaricom’s recent rebranding, is way better than using the otherwise cumbersome SIM toolkit.”

If they can make their apps straightforward and easily accessible, Africa’s telcos could still become major players in the app economy – as Figure 4 indicates, the number of smartphones in use in sub-Saharan Africa could double between now and 2023. That gives telcos a major opportunity to promote their apps to first-time smartphone users as they buy their new handsets. Pan-Africa operator MTN is pursuing this strategy with its MTN Game+, Music+ and video apps (see Figure 4).

Figure 4: MTN is pushing its entertainment apps to new smartphone users


Source: MTN interim results presentation for the six months ended June 2018

In Asia, some telcos have successfully developed widely used apps from scratch, notably in the customer care space, as explained in the next section (continued in full report).

Table of Contents

  • Executive Summary
  • Introduction
  • Why apps matter
  • The delicate transition from SIM toolkit to app
  • Telcos can build on customer care
  • My AIS – a top ten app in Thailand
  • Takeaways
  • Information apps have traction
  • Call management apps prove popular in South Korea
  • T Map in top ten apps in South Korea
  • Takeaways
  • Telcos’ entertainment apps go regional
  • PCCW’s Viu plays in sixteen markets
  • Liberty Global
  • Takeaways
  • Turkcell: Using apps to up engagement
  • Competitive in communications
  • Takeaways

Table of Figures

  • Figure 1: Alternative routes for telcos to build out their app proposition
  • Figure 2: Overview of the telco-owned apps covered in this report
  • Figure 3: The number of smartphones in use is rising steadily across the world
  • Figure 4: MTN is pushing its entertainment apps to new smartphone users
  • Figure 5: My AIS supports payments and loyalty points, as well as usage monitoring
  • Figure 6: The True iService app has a clear and straightforward graphic interface
  • Figure 7: True Digital’s app portfolio covers everything from coffee to communications
  • Figure 8: WhoWho helps user manage incoming calls on phones and wearables
  • Figure 9: SK Telecom’s T map app for public transport covers trains, buses and taxis
  • Figure 10: KKBOX Claims Strong Customer Base Among iPhone Users
  • Figure 11: Turkcell’s broad portfolio of apps covers content and communications
  • Figure 12: Turkcell’s BiP Messenger is designed to be fun
  • Figure 13: Turkcell is focused on how much time customers spend in its apps
  • Figure 14: Turkcell’s foreign subsidiaries are much smaller than its domestic operation


AI on the Smartphone: What telcos should do

Introduction

Following huge advances in machine learning and the falling cost of cloud storage over the last several years, artificial intelligence (AI) technologies are now affordable and accessible to almost any company. The next stage of the AI race is bringing neural networks to mobile devices. This will radically change the way people use smartphones, as voice assistants morph into proactive virtual assistants and augmented reality is integrated into everyday activities, in turn changing the way smartphones use telecoms networks.

Besides implications for data traffic, easy access to machine learning through APIs and software development kits gives telcos an opportunity to improve their smartphone apps, communications services, entertainment and financial services, by customising offers to individual customer preferences.

The leading consumer-facing AI developers – Google, Apple, Facebook and Amazon – are in an arms race to attract developers and partners to their platforms, in order to further refine their algorithms with more data on user behaviours. There may be opportunities for telcos to share their data with one of these players to develop better AI models, but any partnership must be carefully weighed, as all four AI players are eyeing up communications as a valuable addition to their arsenal.

In this report we explore how Google, Apple, Facebook and Amazon are adapting their AI models for smartphones, how this will change usage patterns and consumer expectations, and what this means for telcos. It is the first in a series of reports exploring what AI means for telcos and how they can leverage it to improve their services, network operations and customer experience.

Contents:

  • Executive Summary
  • Smartphones are the key to more personalised services
  • Implications for telcos
  • Introduction
  • Defining artificial intelligence
  • Moving AI from the cloud to smartphones
  • Why move AI to the smartphone?
  • How to move AI to the smartphone?
  • How much machine learning can smartphones really handle?
  • Our smartphones ‘know’ a lot about us
  • Smartphone sensors and the data they mine
  • What services will all this data power?
  • The privacy question – balancing on-device and the cloud
  • SWOT Analysis: Google, Apple, Facebook and Amazon
  • Implications for telcos

Figures:

  • Figure 1: How smartphones can use and improve AI models
  • Figure 2: Explaining artificial intelligence terminology
  • Figure 3: How machine learning algorithms see images
  • Figure 4: How smartphones can use and improve AI models
  • Figure 5: Google Translate works in real-time through smartphone cameras
  • Figure 6: Google Lens in action
  • Figure 7: AR applications of Facebook’s image segmentation technology
  • Figure 8: Comparison of the leading voice assistants
  • Figure 9: Explanation of Federated Learning

Sense check: Can data growth save telco revenues?

Introduction

A recent STL Partners report – Which operator growth strategies will remain viable in 2017 and beyond? – looked at the growth strategies of 68 operator groups, and identified eight different growth strategies employed across this sample. The eighth strategy was to rely on mobile data growth to reverse the decline in revenues once the decline in voice and messaging revenues is complete. In the previous report, we argued that data revenue growth would not rapidly counterbalance the losses of voice and messaging, due to the forces outlined in Figure 2 below:

Figure 2: Trust in the increasing value of (and spend in) broadband data 

Source: STL Partners

In that report, we showed a number of examples, including NTT Docomo in Japan, which has been experiencing voice and messaging declines for longer than any other telco we are aware of, and the UK market, which is competitive with relatively good availability of market data (see Figure 3):

Figure 3: STL Partners can find no evidence of long term revenue growth driven by increased mobile broadband demand in mature markets (outside duopolies)

Source: Company accounts, STL Partners

Despite the clarity of our own convictions on this matter, we are aware that some telcos are growing their revenues, and also that a minority of our clients (perhaps one in ten based on a number of informal surveys we have run in workshops etc.) believe that data could start to regrow the market in certain conditions.

Given how attractive this idea is to the industry, and how difficult and lengthy the path of transformation and creating digital services is proving for telcos, we decided that it would be useful to revisit our assertions, to dig deeper to see what signs of growth we could find and what might be learned from them. This report contains our findings from this further analysis.

Background: The telco ‘hunger gap’

This decline is not a new story, and STL Partners has been warning about this phenomenon and the need for business model change since 2006.

Back in 2013, STL Partners estimated that digital business would need to represent 25+% of telco revenue by 2020 to avoid long-term industry decline. However, to date we have not taken the view that data revenues will grow enough to make up for the decline in traditional services, meaning that the “hunger gap” will not be filled this way (see Figure 4).

Figure 4: The telco ‘hunger gap’ between the decline in traditional and data revenues

Source: STL Partners

However, making the transition to new business models is challenging for telcos, who have traditionally relied on an infrastructure-based business model. Digital businesses are very different, and the astronomical growth in demand for mobile data services over the past decade is placing severe strain on networks and resources.

We have argued that telcos now need to make a fundamental shift from their traditional infrastructure-based business model to a complex amalgam of infrastructure, platform, and product innovation businesses.

By contrast, growing data revenues would be an innately attractive prospect for the telecoms industry: it would not require all the hard work, risk, change and investment of transformation. Hard-pressed executives would love nothing better than for the ‘do little’ strategy to work out. It’s an idea that can easily find traction and supporters.

But is it a realistic prospect to grow data revenues faster than voice and messaging are shrinking?

To sense-check our original assertion that data will not grow overall revenues, this report takes a fresh look at the available evidence. We picked six different telcos that appear to exhibit representative or outlier strategies, to see whether there are currently grounds to change our view that data revenue growth will not expand the overall telecoms market.

Content:

  • Executive Summary
  • Introduction
  • Background: the telco ‘hunger gap’
  • Methodology
  • Review of global trends in data growth
  • The explosion in mobile data growth
  • The link between data consumption and ARPU
  • The rise of 4G
  • Data tariff bundles increase in volume
  • Mobile data offloading
  • Multiplay bundling and the fixed network advantage
  • International data roaming
  • Zero rating and net neutrality
  • Case studies – different data strategies
  • Four data growth strategies
  • The traditional growth model
  • The disruptor/challenger model
  • The innovator model
  • The OTT proposition
  • Case studies comparison: Investment vs risk in summary
  • Case study: Innovator: DNA (Finland)
  • Case study: Disruptor/Innovator: T-Mobile US
  • Case study: Super-disruptor: Reliance Jio (India)
  • Case study: Disruptor: Free (France)
  • Case study: Traditional/Innovator: Vodafone UK
  • Case study: Traditional: Cosmote (Greece)
  • Conclusions
  • Telcos need to seek fresh business models
  • Network investment will need to be even more intelligently targeted than with 3G/4G
  • New growth opportunities are emerging
  • A little thoughtful innovation goes a long way
  • Recommendations

Figures:

  • Figure 1: Trust in the increasing value of (and spend) in broadband data
  • Figure 2: Trust in the increasing value of (and spend) in broadband data
  • Figure 3: STL Partners can find no evidence of long-term revenue growth driven by increased mobile broadband demand in mature markets (outside duopolies)
  • Figure 4: The telco “hunger gap” between the decline in traditional and data revenues
  • Figure 5: Cisco global data growth 2016-2021
  • Figure 6: Total estimated UK mobile retail revenues
  • Figure 7: SMS and MMS sent in the UK, 2007-2015
  • Figure 8: Selected telco data growth strategies
  • Figure 9: Analysis of mobile operator growth strategies
  • Figure 10: DNA revenues and churn 2012-2016
  • Figure 11: DNA mobile data growth 2010-2016
  • Figure 12: DNA mobile data growth forecast
  • Figure 13: USA average monthly data use, 2010-2015
  • Figure 14: Deutsche Telekom non-voice % of ARPU, 2009-2016
  • Figure 15: T-Mobile US total revenues and non-voice ARPU, 2009-2016
  • Figure 16: Reliance Jio subscription growth
  • Figure 17: Free Mobile 4G subscriptions and 4G data, 2015-2016
  • Figure 18: Iliad Free revenue growth 2012-2016
  • Figure 19: France average mobile data use per SIM, 2009-2015
  • Figure 20: France mobile value added service revenues, 2009-2015
  • Figure 21: Vodafone UK data use and total mobile ARPU, 2011-2016
  • Figure 22: UK mobile retail ARPU, 2010-2016
  • Figure 23: UK estimated mobile retail revenues, 2010-2015
  • Figure 24: Vodafone UK total mobile revenue 2013-2016
  • Figure 25: Greece data use and total mobile revenues

MobiNEX: The Mobile Network Experience Index, H1 2016

Executive Summary

In response to customers’ growing usage of mobile data and applications, in April 2016 STL Partners developed MobiNEX: The Mobile Network Experience Index, which ranks mobile network operators by key measures relating to customer experience. To do this, we benchmark mobile operators’ network speed and reliability, allowing individual operators to see how they are performing in relation to the competition in an objective and quantitative manner.

Operators are assigned an individual MobiNEX score out of 100 based on their performance across four measures that STL Partners believes to be core drivers of customer app experience: download speed, average latency, error rate and latency consistency (the proportion of app requests that take longer than 500ms to fulfil).
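As a rough sketch of how a composite score of this kind can be computed, each measure can be mapped onto a 0–25 subscore between benchmark bounds and the four subscores summed to a total out of 100. The report’s actual benchmarks are set in its methodology appendix; every bound below is an invented placeholder for illustration only:

```python
# Hypothetical sketch of a MobiNEX-style composite: each of the four measures
# is mapped onto a 0-25 subscore between "worst" and "best" benchmark bounds,
# then summed to a total out of 100. All benchmark values are invented here.

def subscore(value, worst, best, max_points=25):
    """Linearly interpolate a raw measure between benchmark bounds, clamped."""
    if worst > best:  # lower raw value is better (latency, errors, slow requests)
        value, worst, best = -value, -worst, -best
    frac = (value - worst) / (best - worst)
    return max_points * min(max(frac, 0.0), 1.0)

def mobinex_score(download_mbps, latency_ms, errors_per_10k, slow_request_pct):
    return round(
        subscore(download_mbps, worst=1, best=20)        # download speed
        + subscore(latency_ms, worst=600, best=100)      # average latency
        + subscore(errors_per_10k, worst=200, best=0)    # error rate
        + subscore(slow_request_pct, worst=40, best=0)   # latency consistency
    )

# A fast, reliable network maxes out; a poor one bottoms out:
# mobinex_score(20, 100, 0, 0)  -> 100
# mobinex_score(1, 600, 200, 40) -> 0
```

The sketch only illustrates the shape of the calculation; the real index’s benchmark-setting and any weighting are as described in Appendix 1 of the report.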

Our partner Apteligent has provided us with the raw data for three of the four measures, based on billions of requests made from tens of thousands of applications used by hundreds of millions of users in H1 2016. While our April report focused on the top three or four operators in just seven Western markets, this report covers 80 operators drawn from 25 markets spread across the globe.

The top ten operators were from Japan, France, the UK and Canada:

  • Softbank JP scores highest on the MobiNEX for H1 2016, with high scores across all measures and a total score of 85 out of 100.
  • Close behind are Bouygues FR (80) and Free FR (79), which came first and second respectively in the Q4 2015 rankings. Both achieve high scores for error rate, latency consistency and average latency, but are slightly let down by download speed.
  • The top six is completed by NTT DoCoMo JP (78), Orange FR (75) and au (KDDI) JP (71).
  • Slightly behind are Vodafone UK (65), EE UK (64), SFR FR (63), O2 UK (62) and Rogers CA (62). Except in the case of Rogers, which scores similarly on all measures, these operators are let down by substantially worse download speeds.

The bottom ten operators all score a total of 16 or lower out of 100, suggesting a materially worse customer app experience.

  • Trailing the pack with scores of 1 or 2 across all four measures were Etisalat EG (4), Vodafone EG (4), Smart PH (5) and Globe PH (5).
  • Beeline RU (11) and Malaysian operators U Mobile MY (9) and Digi MY (9) also fare poorly, but benefit from slightly higher latency consistency scores. Slightly better overall, but still achieving minimum scores of 1 for download speed and average latency, are Maxis MY (14) and MTN ZA (12).

Overall, the extreme difference between the top and bottom of the table highlights a vast inequality in network quality and customer experience across the planet. Customer app experience depends to a large degree on where one lives. However, our analysis shows that while economic prosperity does in general lead to a more advanced mobile experience, as you might expect, it does not guarantee it. Norway, Sweden, Singapore and the US are examples of high-income countries with lower MobiNEX scores than might be expected against the global picture. STL Partners will do further analysis to uncover more on the drivers of differentiation between markets and the players within them.

 

MobiNEX H1 2016 – included markets

MobiNEX H1 2016 – operator scores

 Source: Apteligent, OpenSignal, STL Partners analysis

 

  • About MobiNEX
  • Changes for H1 2016
  • MobiNEX H1 2016: results
  • The winners: top ten operators
  • The losers: bottom ten operators
  • The surprises: operators where you wouldn’t expect them
  • MobiNEX by market
  • MobiNEX H1 2016: segmentation
  • MobiNEX H1 2016: Raw data
  • Error rate
  • Latency consistency
  • Download speed
  • Average latency
  • Appendix 1: Methodology and source data
  • Latency, latency consistency and error rate: Apteligent
  • Download speed: OpenSignal
  • Converting raw data into MobiNEX scores
  • Setting the benchmarks
  • Why measure customer experience through app performance?
  • Appendix 2: Country profiles
  • Country profile: Australia
  • Country profile: Brazil
  • Country profile: Canada
  • Country profile: China
  • Country profile: Colombia
  • Country profile: Egypt
  • Country profile: France
  • Country profile: Germany
  • Country profile: Italy
  • Country profile: Japan
  • Country profile: Malaysia
  • Country profile: Mexico
  • Country profile: New Zealand
  • Country profile: Norway
  • Country profile: Philippines
  • Country profile: Russia
  • Country profile: Saudi Arabia
  • Country profile: Singapore
  • Country profile: South Africa
  • Country profile: Spain
  • Country profile: United Arab Emirates
  • Country profile: United Kingdom
  • Country profile: United States
  • Country profile: Vietnam

 

  • Figure 1: MobiNEX scoring breakdown, benchmarks and raw data used
  • Figure 2: MobiNEX H1 2016 – included markets
  • Figure 3: MobiNEX H1 2016 – operator scores breakdown (top half)
  • Figure 4: MobiNEX H1 2016 – operator scores breakdown (bottom half)
  • Figure 5: MobiNEX H1 2016 – average scores by country
  • Figure 6: MobiNEX segmentation dimensions
  • Figure 7: MobiNEX segmentation – network speed vs reliability
  • Figure 8: MobiNEX segmentation – network speed vs reliability – average by market
  • Figure 9: MobiNEX vs GDP per capita – H1 2016
  • Figure 10: MobiNEX vs smartphone penetration – H1 2016
  • Figure 11: Error rate per 10,000 requests, H1 2016 – average by country
  • Figure 12: Error rate per 10,000 requests, H1 2016 (top half)
  • Figure 13: Error rate per 10,000 requests, H1 2016 (bottom half)
  • Figure 14: Requests with total roundtrip latency > 500ms (%), H1 2016 – average by country
  • Figure 15: Requests with total roundtrip latency > 500ms (%), H1 2016 (top half)
  • Figure 16: Requests with total roundtrip latency > 500ms (%), H1 2016 (bottom half)
  • Figure 17: Average weighted download speed (Mbps), H1 2016 – average by country
  • Figure 18: Average weighted download speed (Mbps), H1 2016 (top half)
  • Figure 19: Average weighted download speed (Mbps), H1 2016 (bottom half)
  • Figure 20: Average total roundtrip latency (ms), H1 2016 – average by country
  • Figure 21: Average total roundtrip latency (ms), H1 2016 (top half)
  • Figure 22: Average total roundtrip latency (ms), H1 2016 (bottom half)
  • Figure 23: Benchmarks and raw data used

Digital Health: How Can Telcos Compete with Google, Apple and Microsoft?

Introduction

With the ever-increasing amount of data collected by smartphones, fitness monitors and smart watches, telcos and other digital players are exploring opportunities to create value from consumers’ ability to capture data on many aspects of their own health and physical activity. Connected devices leverage inbuilt sensors and associated apps to collect data about users’ activities, location and habits.

New health-focused platforms are emerging that use the data collected by sensors to advise individual users on how to improve their health (e.g. a reminder to stand up every 60 minutes), while enhancing their ability to share data meaningfully with healthcare providers, whether in-person or remotely. This market has thus far been led by the major Internet and device players, but telecoms operators may be able to act as distributors, enablers/integrators, and, in some cases, even providers of consumer health and wellness apps (e.g., Telefonica’s Saluspot).

High level drivers for the market

At a macro level, there are a number of factors driving digital healthcare.  These include:

  • Population ageing – The number of people globally who are aged over 65 is expected to triple over the next 30 years, and this will create unprecedented demand for healthcare.
  • Rising costs of healthcare provision globally – Serving an ageing population, the global increase in lifestyle and chronic diseases, and rising underlying costs are pushing up healthcare spending – while at the same time, due to economic pressures, there are more limited funds available to pay for this.
  • Limited supply of trained clinicians – Policy issues and changes in job and lifestyle preferences are limiting both educational capacity and ability to recruit and retain appropriately trained healthcare staff in most markets.
  • Shift in funding policy – In many countries, funding for healthcare is shifting away from being based on reimbursement-for-events (e.g., a practice or hospital is paid for every patient visit, for each patient they register, for each vaccination administered), to a greater emphasis on ‘value-based care’ – reimbursement based on successful patient health outcomes.
  • Increased focus on prevention in healthcare provision – in some cases funding is starting to be provided for preventative population health measures, such as weight-loss or quit-smoking programmes.
  • Development of personalised medicine – Personalised medicine is beginning to gain significant attention. It involves the delivery of more effective personalised treatments (and potentially drugs) based on an individual’s specific genomic characteristics, supported by advances in genotyping and analytics, and by ongoing analysis of individual and population health data.
  • Consumerisation of healthcare – There is a general trend for patients – or rather, consumers – to take more responsibility for their own health and their own healthcare, and to demand always-on access both to healthcare and to their own health information, at a level of engagement they choose.

The macro trends above are unlikely to disappear or diminish in the short-to-medium term; and providers, policymakers and payers are struggling to cope as healthcare systems increasingly fall short of both targets and patients’ expectations.

Digital healthcare will play a key role in addressing the challenges these trends present. It promises better use and sharing of data, analytics offering deep insight into health trends for individuals and across the wider population, and the potential for greater convenience, efficacy and reach in healthcare provision.

While many (if not most) of the opportunities around digital health will centre on advances in healthcare providers’ ICT systems, there is significant interest in how consumer wellness and fitness apps and devices will contribute to the digital health ecosystem. Consumer digital health and wellness is particularly relevant to two of the trends above: consumerisation of healthcare, and the shift to prevention as a focus of both healthcare providers and payers.

Fitness trackers and smartwatches, and the associated apps for these devices, as well as wellness and fitness apps for smartphone users, could open up new revenue streams for some service providers, as well as a vast amount of personal data that could feed into both medical records and analytics initiatives. The increasing use of online resources by consumers for both health information and consultation, as well as cloud-based storage of and access to their own health data, also creates opportunities to make more timely and effective healthcare interventions.  For telcos, the question is where and how they can play effectively in this market.

Market Trends and Overview

The digital healthcare market is both very large and very diverse. Digital technologies can be applied in many different segments of the healthcare market (see figure below), both to improve efficiency and enable the development of new services, such as automated monitoring of chronic conditions.

The different segments of the digital healthcare market

Source: STL Partners based on categories identified by Venture Scanner

The various segments in Figure 1 are defined as below:

Wellness

  • Mobile fitness and health apps enable consumers to monitor how much exercise they are doing, how much sleep they are getting, their diet and other aspects of their lifestyle.
  • Wearable devices, such as smart watches and fitness bands, are equipped with sensors that collect the data used by fitness and health apps.
  • Electronic health records are a digital record of data and information about an individual’s health, typically collating clinical data from multiple sources and healthcare providers.

Information

  • Services search are digital portals and directories that help individuals find out healthcare information and identify potential service providers.
  • Online health sites and communities provide consumers with information and discussion forums.
  • Healthcare marketing refers to digital activities by healthcare providers to attract people to use their services.

Interactions

  • Payments and insurance – digital apps and services that enable consumers to pay for healthcare or insurance.
  • Patient engagement refers to digital mechanisms, such as apps, through which healthcare providers can interact with the individuals using their services.
  • Doctor networks are online services that enable clinicians to interact with each other and exchange information and advice.

Research

  • Population health management refers to the use of digital tools by clinicians to capture data about groups of patients or individuals that can then be used to inform treatment.
  • Genomics: An individual’s genetic code can be collated in a digital form so it can be used to understand their likely susceptibility to specific conditions and treatments.
  • Medical big data involves capturing and analysing large volumes of data from multiple sources to help identify patterns in the progression of specific illnesses and the effectiveness of particular treatment combinations.

In-hospital care

  • Electronic medical records: A digital version of a hospital or clinic’s records of a specific patient. Unlike electronic health records, electronic medical records aren’t designed to be portable across different healthcare providers.
  • Clinical admin: The use of digital technologies to improve the efficiency of healthcare facilities.
  • Robotics: The use of digital machines to perform specific healthcare tasks, such as transporting medicines or spoon-feeding a patient.

In-home care

  • Digital medical devices: All kinds of medical devices, from thermometers to stethoscopes to glucosometers to sophisticated MRI and medical imaging equipment, are increasingly able to capture and transfer data in a digital form.
  • Remote monitoring involves the use of connected sensors to regularly capture and transmit information on a patient’s health. Such tools can be used to help monitor the condition of people with chronic diseases, such as diabetes.
  • Telehealth refers to patient-clinician consultations via a telephone, chat or video call.

The wellness opportunity

This report focuses primarily on the ‘wellness’ segment (highlighted in the figure below), which is experiencing major disruption as a result of devices, apps and services being launched by Apple, Google and Microsoft, but it also touches on some of these players’ activities in other segments.

This report focuses on wellness, which is undergoing major disruption

Source: STL Partners based on categories identified by Venture Scanner

 

  • Executive summary
  • Introduction
  • High level drivers for the market
  • Market Trends and Overview
  • Market size and trends: smartwatches will overtake fitness bands
  • Health app usage has doubled in two years in the U.S.
  • Are consumers really interested in the ‘quantified self’?
  • Barriers and constraining factors for consumer digital health
  • Disruption in Consumer Digital Wellness
  • Case studies: Google, Apple and Microsoft
  • Google: leveraging Android and analytics capabilities
  • Apple: more than the Watch…
  • Microsoft: an innovative but schizophrenic approach
  • Telco Opportunities in Consumer Health
  • Recommendations for telcos

 

  • Figure 1: The different segments of the digital healthcare market
  • Figure 2: This report focuses on wellness, which is undergoing major disruption
  • Figure 3: Consumer digital health and wellness: leading products and services, 2016
  • Figure 4: Wearable Shipments by Type of Device, 2015-2020
  • Figure 5: Wearable OS Worldwide Market Share, 2015 and 2019
  • Figure 6: Take-up of different types of health apps in the U.S. market (2016)
  • Figure 7: % of health wearable and app users willing to share data US market (2016)
  • Figure 8: Elements of the ‘quantified self’, as envisioned by Orange
  • Figure 9: Less than two-thirds of US wearable buyers wear their acquisition long-term
  • Figure 10: Google Consumer Health and Fitness Initiatives
  • Figure 11: Snapshot of Google Fit User Interface, 2016
  • Figure 12: Google/Alphabet’s areas of focus in the digital healthcare market
  • Figure 13: Apple’s Key Digital Health and Wellness Initiatives
  • Figure 14: Apple Health app interface and dashboard
  • Figure 15: Apple’s ResearchKit-based EpiWatch App
  • Figure 16: Apple’s current areas of focus in the digital healthcare market
  • Figure 17: Microsoft Consumer Fitness/Wellness Device Initiatives
  • Figure 18: Microsoft Health can integrate data from a range of fitness trackers
  • Figure 19: Microsoft Consumer Fitness/Wellness Applications and Services
  • Figure 20: The MDLive Telehealth Proposition, August 2016
  • Figure 21: Microsoft’s areas of focus in the digital healthcare market
  • Figure 22: Telefónica’s Saluspot: Interactive online doctor consultations on-demand

Net Neutrality 2021: IoT, NFV and 5G ready?

Introduction

It’s been a while since STL Partners last tackled the thorny issue of Net Neutrality. In our 2010 report Net Neutrality 2.0: Don’t Block the Pipe, Lubricate the Market we made a number of recommendations, including that a clear distinction should be established between ‘Internet Access’ and ‘Specialised Services’, and that operators should be allowed to manage traffic within reasonable limits providing their policies and practices were transparent and reported.

Perhaps unsurprisingly, the decade-long legal and regulatory wrangling is still rumbling on, albeit with rather more detail and nuance than in the past. Some countries have now implemented laws with varying severity, while other regulators have been more advisory in their rules. The US, in particular, has been mired in debate about the process and authority of the FCC in regulating Internet matters, but the current administration and courts have leaned towards legislating for neutrality, against (most) telcos’ wishes. The political dimension is never far away from the argument, especially given the global rise of anti-establishment movements and parties.

Some topics have risen in importance (such as where zero-rating fits in), while others seem to have been mostly-agreed (outright blocking of legal content/apps is now widely dismissed by most). In contrast, discussion and exploration of “sender-pays” or “sponsored” data appears to have reduced, apart from niches and trials (such as AT&T’s sponsored data initiative), as it is both technically hard to implement and suffers from near-zero “willingness to pay” by suggested customers. Some more-authoritarian countries have implemented their own “national firewalls”, which block specific classes of applications, or particular companies’ services – but this is somewhat distinct from the commercial, telco-specific view of traffic management.

In general, the focus of the Net Neutrality debate is shifting to pricing issues, often in conjunction with the influence/openness of major web and app “platform players” such as Facebook or Google. Some telco advocates have opportunistically tried to link Net Neutrality to claimed concerns over “Platform Neutrality”, although that discussion is now largely separate and focused more on bundling and privacy concerns.

At the same time, there is still some interest in differential treatment of Internet traffic in terms of Quality of Service (QoS) – and also, a debate about what should be considered “the Internet” vs. “an internet”. The term “specialised services” crops up in various regulatory instruments, notably in the EU – although its precise definition remains fluid. In particular, the rise of mobile broadband for IoT use-cases, and especially the focus on low-latency and critical-communications uses in future 5G standards, almost mandate the requirement for non-neutrality, at some levels at least. It is much less-likely that “paid prioritisation” will ever extend to mainstream web-access or mobile app data. Large-scale video streaming services such as Netflix are perhaps still a grey area for some regulatory intervention, given the impact they have on overall network loads. At present, the only commercial arrangements are understood to be in CDNs, or paid-peering deals, which are (strictly speaking) nothing to do with Net Neutrality per most definitions. We may even see pressure for regulators to limit fees charged for Internet interconnect and peering.

This report first looks at the changing focus of the debate, then examines the underlying technical and industry drivers that are behind the scenes. It then covers developments in major countries and regions, before giving recommendations for various stakeholders.

STL Partners is also preparing a broader research piece on overall regulatory trends, to be published in the next few months as part of its Executive Briefing Service.

What has changed?

Where have we come from?

If we wind the clock back a few years, the Net Neutrality debate was quite different. Around 2012/13, the typical talking-points were subjects such as:

  • Whether mobile operators could block messaging apps like WhatsApp, VoIP services like Skype, or somehow charge those types of providers for network access / interconnection.
  • If fixed-line broadband providers could offer “fast lanes” for Netflix or YouTube traffic, often conflating arguments about access-network links with core-network peering capacity.
  • Rhetoric about the so-called “sender-pays” concept, with some lobbying for introducing settlements for data traffic that were reminiscent of telephony’s called / caller model.
  • Using DPI (deep packet inspection) to discriminate between applications and charge for “a la carte” Internet access plans, at a granular level (e.g. per hour of view watched, or per social-network used).
  • The application of “two-sided business models”, with Internet companies paying for data capacity and/or quality on behalf of end-users.

Since then, many things have changed. Specific countries’ and regions’ laws will be discussed in the next section, but the last four years have seen major developments in the Netherlands, the US, Brazil, the EU and elsewhere.

At one level, the regulatory and political shifts can be attributed to the huge rise in the number of lobby groups on both Internet and telecom sides of the Neutrality debate. However, the most notable shift has been the emergence of consumer-centric pro-Neutrality groups, such as Access Now, EDRi and EFF, along with widely-viewed celebrity input from the likes of comedian John Oliver. This has undoubtedly led to the balance of political pressure shifting from large companies’ lawyers towards (sometimes slogan-led) campaigning from the general public.

But there have also been changes in the background trends of the Internet itself, telecom business models, and consumers’ and application developers’ behaviour. (The key technology changes are outlined in the section after this one). Various experiments and trials have been tried, with a mix of successes and failures.

Another important background trend has been the unstoppable momentum of particular apps and content services, on both fixed and mobile networks. Telcos are now aware that they are likely to be judged on how well Facebook or Spotify or WeChat or Netflix perform – so they are much less-inclined to indulge in regulatory grand-standing about having such companies “pay for the infrastructure” or be blocked. Essentially, there is tacit recognition that access to these applications is why customers are paying for broadband in the first place.

These considerations have shifted the debate in many important areas, making some of the earlier ideas unworkable, while other areas have come to the fore. Two themes stand out:

  • Zero-rating
  • Specialised services

Content:

  • Executive summary
  • Contents
  • Introduction
  • What has changed?
  • Where have we come from?
  • Zero-rating as a battleground
  • Specialised services & QoS
  • Technology evolution impacting Neutrality debate
  • Current status
  • US
  • EU
  • India
  • Brazil
  • Other countries
  • Conclusions
  • Recommendations

MobiNEX: The Mobile Network Experience Index, Q4 2015

Executive Summary

In response to customers’ growing usage of mobile data and applications, STL Partners has developed MobiNEX: The Mobile Network Experience Index, which benchmarks mobile operators’ network speed and reliability by measuring the consumer app experience, and allows individual players to see how they are performing relative to the competition in an objective and quantitative manner.

We assign operators an individual MobiNEX score based on their performance across four measures that are core drivers of customer app experience: download speed; average latency; error rate; latency consistency (the percentage of app requests that take longer than 500ms to fulfil). Apteligent has provided us with the raw data for three out of four of the measures based on billions of requests made from tens of thousands of applications used by hundreds of millions of users in Q4 2015. We plan to expand the index to cover other operators and to track performance over time with twice-yearly updates.

Encouragingly, MobiNEX scores are positively correlated with customer satisfaction in the UK and the US, suggesting that a better mobile app experience contributes to customer satisfaction.
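The kind of positive association described here is typically summarised with Pearson’s correlation coefficient. As a minimal sketch, the paired values below are invented for demonstration and are not the report’s actual data:

```python
# Illustrative check of the relationship described above: Pearson's r between
# MobiNEX scores and a customer-satisfaction index. The paired values are
# invented for demonstration and are not the report's actual data.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

mobinex = [76, 73, 70, 65, 61, 45, 30]       # hypothetical operator scores
satisfaction = [80, 78, 75, 72, 70, 66, 60]  # hypothetical satisfaction index
r = pearson_r(mobinex, satisfaction)         # close to +1: scores move together
```

A value of r near +1 would indicate the strong positive relationship the chart below depicts; correlation alone does not establish that network experience causes the satisfaction difference.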

The top five performers across twenty-seven operators in seven countries in Europe and North America (Canada, France, Germany, Italy, Spain, UK, US) were all from France and the UK, suggesting a high degree of competition in these markets as operators strive to improve relative to their peers:

  • Bouygues Telecom in France scores highest on the MobiNEX for Q4 2015 with consistently high scores across all four measures and a total score of 76 out of 100.
  • It is closely followed by two other French operators. Free, the late entrant to the market, which started operations in 2012, scores 73. Orange, the former national incumbent, is slightly let down by the number of app errors experienced by users but achieves a healthy overall score of 70.
  • The top five is completed by two UK operators: EE (65) and O2 (61), with similar scores to the three French operators for everything except download speed, which was substantially worse.

The bottom five operators have scores suggesting a materially worse customer app experience and we suggest that management focuses on improvements across all four measures to strengthen their customer relationships and competitive position. This applies particularly to:

  • E-Plus in Germany (now part of Telefónica’s O2 network but identified separately by Apteligent).
  • Wind in Italy, which is particularly let down by latency consistency and download speed.
  • Telefónica’s Movistar, the Spanish market share leader.
  • Sprint in the US with middle-ranking average latency and latency consistency but, like other US operators, poor scores on error rate and download speed.
  • 3 Italy, principally a result of its low latency consistency score.

Surprisingly, given the extensive deployment of 4G networks there, the US operators perform poorly and are providing an underwhelming customer app experience:

  • The best-performing US operator, T-Mobile, scores only 45 – a full 31 points below Bouygues Telecom and 4 points below the median operator.
  • All the US operators perform very poorly on error rate and, although 74% of app requests in the US were made on LTE in Q4 2015, no US player scores highly on download speed.

MobiNEX scores – Q4 2015

 Source: Apteligent, OpenSignal, STL Partners analysis

MobiNEX vs Customer Satisfaction

Source: ACSI, NCSI-UK, STL Partners

 

  • Introduction
  • Mobile app performance is dependent on more than network speed
  • App performance as a measure of customer experience
  • MobiNEX: The Mobile Network Experience Index
  • Methodology and key terms
  • MobiNEX Q4 2015 Results: Top 5, bottom 5, surprises
  • MobiNEX is correlated with customer satisfaction
  • Segmenting operators by network customer experience
  • Error rate
  • Quantitative analysis
  • Key findings
  • Latency consistency: Requests with latency over 500ms
  • Quantitative analysis
  • Key findings
  • Download speed
  • Quantitative analysis
  • Key findings
  • Average latency
  • Quantitative analysis
  • Key findings
  • Appendix: Source data and methodology
  • STL Partners and Telco 2.0: Change the Game
  • About Apteligent

 

  • MobiNEX scores – Q4 2015
  • MobiNEX vs Customer Satisfaction
  • Figure 1: MobiNEX – scoring methodology
  • Figure 2: MobiNEX scores – Q4 2015
  • Figure 3: Customer Satisfaction vs MobiNEX, 2015
  • Figure 4: MobiNEX operator segmentation – network speed vs network reliability
  • Figure 5: MobiNEX operator segmentation – with total scores
  • Figure 6: Major Western markets – error rate per 10,000 requests
  • Figure 7: Major Western markets – average error rate per 10,000 requests
  • Figure 8: Major Western operators – percentage of requests with total roundtrip latency greater than 500ms
  • Figure 9: Major Western markets – average percentage of requests with total roundtrip latency greater than 500ms
  • Figure 10: Major Western operators – average weighted download speed across 3G and 4G networks (Mbps)
  • Figure 11: Major European markets – average weighted download speed (Mbps)
  • Figure 12: Major Western markets – percentage of requests made on 3G and LTE
  • Figure 13: Download speed vs Percentage of LTE requests
  • Figure 14: Major Western operators – average total roundtrip latency (ms)
  • Figure 15: Major Western markets – average total roundtrip latency (ms)
  • Figure 16: MobiNEX benchmarks

Innovation Leaders: Iliad – A Disruptive Operator Tackles The Cloud

Introduction

To understand how disruptive Iliad’s approach to cloud services is, it is useful to consider it within the wider context of operator cloud services and technology strategies.

Although telecoms operators have often talked a good game when it comes to offering enterprise cloud services, most have found it challenging to compete with the major dedicated and Internet-focused cloud providers like Rackspace, Google, Microsoft and, most of all, Amazon Web Services. Smaller altnets and challenger mobile operators – and even smaller incumbents – have struggled to find enough scale, while even huge operators like Telefonica or Verizon have largely failed to differentiate themselves from the competition. Further, the success of the software and Internet services cloud providers in building hyperscale infrastructure has highlighted a skills gap between telcos and these competitors in the data centre. Although telcos are meant to be infrastructure businesses, their record here has largely been poor.

In our earlier 2012 Strategy Report Cloud 2.0: Telco Strategies in the Cloud, we pointed to differentiation as the biggest single challenge for telco cloud services. The report argued that the more telcos bought into pre-packaged technology solutions from vendors like VMWare, the less control over the future development path of their software they would have, and the more difficult it would be for them to differentiate effectively. We show the distinction in Figure 1 (see the Technology section of the heatmap). Relying heavily on third-party proprietary technology solutions for cloud would give telcos a structural disadvantage relative to the major non-telco cloud players, who either develop their own, or contribute to fast-evolving open-source projects.

We also observed in that report that nearly all the operators we evaluated who were making any effort to compete in Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS), had opted to resell VMWare technology.

Looking back from 2016, we observe that the operators who went down this route – Verizon is a prime example – have not succeeded in the cloud. The ones that chose to own their technology, building the skills base internally by contributing to the key open-source projects, like AT&T (with its commitment to the OpenStack solution), or who became a preferred regional partner for the major cloud providers (like Telstra), have done much better.

Figure 1: Telco strategies in the cloud, 2012 – most providers go with VMWare-based solutions

Source: STL Partners, Cloud 2.0 Strategy Report

AT&T’s strategy of using the transition to cloud to take control of its own technology, move forward on the SDN/NFV tech transition, and re-organise its product line around its customers’ needs, has helped to set its revenue from strategic business services powering ahead of its key competitor, Verizon, as Figure 2 shows.

Figure 2: Getting the cloud right pays off at AT&T Strategic Business Services

Source: STL Partners

The above is the opening of the report’s introduction, which goes on to outline our views on the cloud market and reprise telcos’ opportunity and progress in it. To access the other 23 pages of this 26-page Telco 2.0 Report, including…

  • Executive Summary
  • Introduction
  • Iliad: A Champion Disruptor
  • Cloud at Iliad
  • Responding to cloud market disruption: Iliad draws on its hi-lo segmentation experience
  • Scaleway: Address the start-ups and scale-ups
  • Dedibox Power 8: doubling down on the high end
  • Nodebox: build-your-own network switches
  • Financial impact for Iliad
  • Conclusions

…and the following report figures…

  • Figure 1: Telco strategies in the cloud, 2012 – most providers go with VMWare-based solutions
  • Figure 2: Getting the cloud right pays off at AT&T Strategic Business Services
  • Figure 3: AWS is not just a price leader
  • Figure 4: STL Partners’ cloud adoption forecast
  • Figure 5: Free Mobile’s growth repeatedly surprises on the upside
  • Figure 6: Free Mobile’s 4G build overtakes SFR
  • Figure 7: Free Mobile is a top scorer on our network quality metrics
  • Figure 8: Free Mobile’s customer satisfaction ratings are excellent
  • Figure 9: Specs for ‘extreme performance’ Dedibox server models
  • Figure 10: The C1 ‘Pimouss’ microserver
  • Figure 11: 18 C1s close-packed in a standard server blade
  • Figure 12: Scaleway Hosted C1 Server Pricing
  • Figure 13: The case for more POWER8: IBM POWER8 vs Intel x86 E5
  • Figure 14: A Nodebox, Free’s internally developed network switch
  • Figure 15: A useful business, if no AWS

Connectivity for telco IoT / M2M: Are LPWAN & WiFi strategically important?

Introduction

5G, WiFi, GPRS, NB-IoT, LTE-M & LTE Categories 1 & 0, SigFox, Bluetooth, LoRa, Weightless-N & Weightless-P, ZigBee, EC-GSM, Ingenu, Z-Wave, Nwave, various satellite standards, optical/laser connections and more… the list of current or proposed wireless network technologies for the “Internet of Things” seems to be growing longer by the day. Some are long-range, some short. Some high power/bandwidth, some low. Some are standardised, some proprietary. And while most devices will have some form of wireless connection, there are certain categories that will use fibre or other fixed-network interfaces.

There is no “one-size fits all”, although some hope that 5G will ultimately become an “umbrella” for many of them, in the 2020 time-frame and beyond. But telcos, especially mobile operators, need to consider which they will support in the shorter-term horizon, and for which M2M/IoT use-cases. That universe is itself expanding too, with new IoT products and systems being conceived daily, spanning everything from hobbyists’ drones to industrial robots. All require some sort of connectivity, but the range of costs, data capabilities and robustness varies hugely.

Two overriding sets of questions emerge:

  • What are the business cases for deploying IoT-centric networks – and are they dependent on offering higher-level management or vertical solutions as well? Is offering connectivity – even at very low prices/margins – essential for telcos to ensure relevance and differentiate against IoT market participants?
  • What are the longer-term strategic issues around telcos supporting and deploying proprietary or non-3GPP networking technologies? Is the diversity a sensible way to address short-term IoT opportunities, or does it risk further undermining the future primacy of telco-centric standards and business models? Either way telcos need to decide how much energy they wish to expend, before they embrace the inevitability of alternative competing networks in this space.

This report specifically covers IoT-centric network connectivity. It fits into Telco 2.0’s Future of the Network research stream, and also intersects with our other ongoing work on IoT/M2M applications, including verticals such as the connected car, connected home and smart cities. It focuses primarily on new network types, rather than marketing/bundling approaches for existing services.

The Executive Briefing report IoT – Impact on M2M, Endgame and Implications from March 2015 outlined three strategic areas of M2M business model innovation for telcos:

  • Improve existing M2M operations: Dedicated M2M business units structured around priority verticals with dedicated resources. Such units allow telcos to tailor their business approach and avoid being constrained by traditional strategies that are better suited to mobile handset offerings.
  • Move into new areas of M2M: Expansion along the value chain through both acquisitions and partnerships, and the formation of M2M operator ‘alliances.’
  • Explore the Internet of Things: Many telcos have been active in the connected home e.g. AT&T Digital Life. However, outsiders are raising the connected home (and IoT) opportunity stakes: Google, for example, acquired Nest for $3.2 billion in 2014.
Figure 2: The M2M Value Chain

 

Source: STL Partners, More With Mobile

In the 9 months since that report was published, a number of important trends have occurred in the M2M / IoT space:

  • A growing focus on the value of the “industrial Internet”, where sensors and actuators are embedded into offices, factories, agriculture, vehicles, cities and other locations. New use-cases and applications abound on both near- and far-term horizons.
  • A polarisation in discussion between ultra-fast/critical IoT (e.g. for vehicle-to-vehicle control) vs. low-power/cost IoT (e.g. distributed environmental sensors with 10-year battery life). 2015 discussion of IoT connectivity has been dominated by futuristic visions of 5G, or faster-than-expected deployment of LPWANs (low-power wide-area networks), especially based on new platforms such as SigFox or LoRa Alliance.
  • Comparatively slow emergence of dedicated individual connections for consumer IoT devices such as watches / wearables. With the exception of connected cars, most mainstream products connect via local “capillary” networks (e.g. Bluetooth and WiFi) to smartphones or home gateways acting as hubs, or a variety of corporate network platforms. The arrival of embedded SIMs might eventually lead to more individually-connected devices, but this has not materialised in volume yet.
  • Continued entry, investment and evolution of a broad range of major companies and start-ups, often with vastly different goals, incumbencies and competencies to telcos. Google, IBM, Cisco, GE, Intel, utility firms, vehicle suppliers and 1000s of others are trying to carve out roles in the value chain.
  • Growing impatience among some in the telecom industry with the pace of standardisation for some IoT-centric developments. A number of operators have looked outside the traditional cellular industry suppliers and technologies, eager to capitalise on short-term growth especially in LPWAN and in-building local connectivity. In response, vendors including Huawei, Ericsson and Qualcomm have stepped up their pace, although fully-standardised solutions are still some way off.

Connectivity in the wider M2M/IoT context

It is not always clear what the difference is between M2M and IoT, especially at a connectivity level. They now tend to be used synonymously, although the latter is definitely newer and “cooler”. Various vendors have their own spin on this – Cisco’s “Internet of Everything”, and Ericsson’s “Networked Society”, for example. It is also a little unclear where the IoT part ends, and the equally vague term “networked services” begins. It is also important to recognise that a sizeable part of the future IoT technology universe will not be based on “services” at all, although “user-owned” devices and systems are much harder for telcos to monetise.

An example might be a government encouraging adoption of electric vehicles. Cars and charging points are “things” which require data connections. At one level, an IoT application may simply guide drivers to their closest available power-source, but a higher-level “societal” application will collate data from both the IoT network and other sources. Thus data might also flow from bus and train networks, as well as traffic sensors, pollution monitors and even fitness trackers for walking and cycling, to see overall shifts in transport habits and help “nudge” commuters’ behaviour through pricing or other measures. In that context, the precise networks used to connect to the end-points become obscured in the other layers of software and service – although they remain essential building blocks.

Figure 3: Characterising the difference between M2M and IoT across six domains

Source: STL Partners, More With Mobile

(Note: the Future of Network research stream generally avoids using vague and loaded terms like “digital” and “OTT”. While concise, we believe they are often used in ways that guide readers’ thinking in wrong or unhelpful directions. Words and analogies are important: they can lead or mislead, often sub-consciously).

Often, it seems that the word “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Some may actually involve analogue technology as well as digital. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords are becoming important.

Figure 4: ‘Digital’ and IoT convergence

Source: STL Partners, More With Mobile

We will also likely see many hybrid solutions emerging, for example where dedicated devices are combined with smartphones/PCs for particular functions. Thus a “digital home” service may link alarms, heating sensors, power meters and other connections via a central hub/console – but also send alerts and data to a smartphone app. It is already quite common for consumer/business drones to be controlled via a smartphone or tablet.

In terms of connectivity, it is also worth noting that “M2M” generally just refers to the use of conventional cellular modems and networks – especially 2G/3G. IoT expands this considerably – as well as future 5G networks and technologies being specifically designed with new use-cases in mind, we are also seeing the emergence of a huge range of dedicated 4G variants, plus new purpose-designed LPWAN platforms. IoT also intersects with the growing range of local/capillary[1] network technologies – which are often overlooked in conventional discussions about M2M.

Figure 5: Selected Internet of Things service areas

Source: STL Partners

The larger the number…

…the less relevance and meaning it has. We often hear of an emerging world of 20bn, 50bn, even trillions of devices being “networked”. While making for good headlines and press-releases, such numbers can be distracting.

While we will definitely be living in a transformed world, with electronics around us all the time – sensors, displays, microphones and so on – that does not easily translate into opportunities for telecom operators. The correct role for such data and forecasts is in the context of a particular addressable opportunity – otherwise one risks counting toasters alongside sensors in nuclear power stations. As such, this report does not attempt to compete in counting “things” with other analyst firms, although references are made to approximate volumes.

For example, consider a typical large, modern building. It’s common to have temperature sensors, CCTV cameras, alarms for fire and intrusion, access control, ventilation, elevators and so forth. There will be an internal phone system, probably LAN ports at desks and WiFi throughout. In future it may have environmental sensors, smart electricity systems, charging points for electric vehicles, digital advertising boards and more. Yet the main impact on the telecom industry is just a larger Internet connection, and perhaps some dedicated lines for safety-critical systems like the fire alarm. There may well be 1,000 or 10,000 connected “things”, and yet for a cellular operator the building is more likely to be a future driver of cost (e.g. for in-building radio coverage for occupants’ phones) rather than extra IoT revenue. Few of the building’s new “things” will have SIM cards and service-based radio connections in any case – most will link into the fixed infrastructure in some way.

One also has to doubt some of the predicted numbers – there is considerable vagueness and hand-waving inherent in the forecasts. If a car in 2020 has 10 smart sub-systems, and 100 sensors reporting data, does that count as 1, 10 or 100 “things” connected? Is the key criterion that smart appliances in a connected home are bought individually – and therefore might be equipped with individual wide-area network connections? When such data points are then multiplied-up to give traffic forecasts, there are multiple layers of possible mathematical error.
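The compounding of uncertain assumptions described above can be made concrete with a short sketch. The input figures below are purely hypothetical, chosen only to show how multiplying loosely known factors widens the error range; they are not forecasts:

```python
# Illustrative sketch: how multiplied-up assumptions compound into wide
# error ranges in "things" forecasts. All input figures are hypothetical.

def forecast_range(factors):
    """Each factor is (low, best_guess, high); multiply through each scenario."""
    low = best = high = 1.0
    for lo, mid, hi in factors:
        low *= lo
        best *= mid
        high *= hi
    return low, best, high

# Hypothetical inputs: connected cars, "things" counted per car,
# and messages per thing per day -- each only known to within a factor.
factors = [
    (50e6, 100e6, 200e6),   # cars on the road
    (1, 10, 100),           # is a car 1, 10 or 100 "things"?
    (10, 100, 1000),        # messages per thing per day
]

low, best, high = forecast_range(factors)
print(f"low {low:.1e}, best guess {best:.1e}, high {high:.1e}")
# The high scenario exceeds the low one by more than four orders of
# magnitude -- the "multiple layers of possible mathematical error".
```

Three factors, each uncertain by roughly an order of magnitude, are enough to make the headline number almost arbitrary.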

This highlights the IoT quantification dilemma – everyone focuses on the big numbers, many of which are simple spreadsheet extrapolations, made without much consideration of the individual use-cases. And the larger the headline number, the less-likely the individual end-points will be directly addressed by telcos.

 

  • Executive Summary
  • Introduction
  • Connectivity in the wider M2M/IoT context
  • The larger the number…
  • The IoT network technology landscape
  • Overview – it’s not all cellular
  • The emergence of LPWANs & telcos’ involvement
  • The capillarity paradox: ARPU vs. addressability
  • Where does WiFi fit?
  • What will the impact of 5G be?
  • Other technology considerations
  • Strategic considerations
  • Can telcos compete in IoT without connectivity?
  • Investment vs. service offer
  • Regulatory considerations
  • Are 3GPP technologies being undermined?
  • Risks & threats
  • Conclusion

 

  • Figure 1: Telcos can only fully monetise “things” they can identify uniquely
  • Figure 2: The M2M Value Chain
  • Figure 3: Characterising the difference between M2M and IoT across six domains
  • Figure 4: ‘Digital’ and IoT convergence
  • Figure 5: Selected Internet of Things service areas
  • Figure 6: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 7: Wide-area IoT-related wireless technologies
  • Figure 8: Selected telco involvement with LPWAN
  • Figure 9: Telcos need to consider capillary networks pragmatically
  • Figure 10: Major telco types mapped to relevant IoT network strategies

Connecting Brands with Customers: How leading operators are building sustainable advertising businesses

Executive Summary

2015 marked the turning point at which internet access on mobile devices exceeded desktops and laptops combined for the first time and, worldwide, digital advertising has followed the audience migration from desktop to smartphone and tablet. A new ecosystem has evolved to service the needs of the mobile advertising industry. Ad exchanges and ad networks have adapted to facilitate access by brands to an ever-wider range of content on multiple devices, whilst DMPs (Data Management Platforms) and DSPs & SSPs (Demand Side and Supply Side Platforms respectively) are fuelling the growth of ‘programmatic buying’ by enabling the flow of data within the ecosystem.

There is an opportunity for telcos to establish a sustainable and profitable role as an enabler within this rapidly developing market

Advertising should be an important diversification strategy for telcos as income from core communications continues to decline: they can make use of existing assets (e.g. audience reach, inventory, data) whilst maintaining subscriber trust. Telecoms operators’ ability to use their own customers’ data (with consent) to improve their own service offerings is a key advantage that provides a strong basis for developing advertising and marketing solutions for third parties.

Walking in the footsteps of giants does not kill the opportunity for telcos

Facebook and Google will represent more than half of the $69 billion worldwide mobile-advertising market in 2015. This dominance has led some operators to question whether they can build a viable advertising business. However, STL Partners believes that there has never been a better time for many operators to consider ramping up their efforts to secure a sustainable practice by leveraging the value of their own customer data. In fact, many telcos are actively working with OTT players such as Google and Facebook to assist them in understanding territory-specific mobile behaviour.

Three telcos lead the way in advertising – Sprint, Turkcell and SingTel – and provide important lessons for others

In the main body of this report, STL Partners identifies the role that each telco has chosen to perform within the advertising ecosystem, assesses their strategy and execution, and identifies the core reasons for their success. The three case studies display several common characteristics and point to six Key Success Factors (KSFs) for a telco advertising business. The first – a ‘start-up mindset’ – is a pre-requisite for establishing such a business; the other five are core actions and capabilities which mutually strengthen each other to produce a ‘flywheel’ that drives growth (see Figure 1). Whether your organisation is just embarking on the advertising journey, has tried to build an advertising business and withdrawn or, indeed, is well on the way to building a successful one, we outline how to deliver the following six KSFs in the downloadable report:

  1. How to secure senior management support
  2. How to develop a semi-independent organisation with advertising skills and a start-up culture
  3. How to build or buy best-in-class technical capability and continuously improve
  4. Demand-side: How to build value for subscribers
  5. Supply-side: How to build value for media buyers and sellers
  6. How to pursue opportunities to scale aggressively
Figure 1: The Telco Advertising Business Flywheel

Why now is the right time for telcos to take a more prominent role within mobile advertising

After years of hype, mobile advertising is now starting to mature in terms of technical solutions, business models, and customer acceptance. The catalyst for this growing awareness of the potential of mobile advertising is the increasing demand for first-party (own customer) data to personalize and contextualize marketing communications both within telcos and more widely among enterprises as a way of improving on coarse-grained segmentation. Telcos hold more and better data than most organisations and have wonderful distribution networks (the network itself) for managing information flows, as well as delivering marketing messages and services.

 

For those within and outside telcos that are developing marketing and advertising solutions, we would love to hear your stories and facilitate discussions with your peers, so please do get in touch: contact@stlpartners.com

 

  • Executive Summary
  • Introduction
  • Why is advertising important for Telcos?
  • Walking in the footsteps of Giants?
  • Case study 1: Sprint
  • Summary: Reasons for Sprint’s success
  • A track record in innovation
  • Making data matter
  • How successful is Sprint’s strategy?
  • What does the future hold for Sprint?
  • Case study 2: Turkcell
  • Summary: Reasons for Turkcell’s success
  • A heritage in mobile marketing
  • Retaining control, enabling access
  • Co-opetition from a position of strength
  • How successful is Turkcell’s strategy?
  • What does the future hold for Turkcell?
  • Case study 3: SingTel
  • Summary: Reasons for SingTel’s success
  • Assembling a digital marketing capability through acquisition
  • Retaining revenue within the value chain
  • Providing technology at scale
  • How successful is SingTel’s strategy?
  • What does the future hold for SingTel?
  • Conclusion and recommendations

 

  • Figure 1: The Telco Advertising Business Flywheel
  • Figure 2: Time Spent per Adult per Day with Digital Media, USA, 2008-2015
  • Figure 3: Mobile Internet Ad Spending, Worldwide, 2013 – 2019
  • Figure 4: Mobile Marketing Ecosystem (extract)
  • Figure 5: The “Wheel of Commerce”
  • Figure 6: The Digital Gameboard – an OTT view of the world
  • Figure 6: Sprint’s data asset overview
  • Figure 7: Sprint’s role in the mobile advertising ecosystem
  • Figure 9: Top App Widget
  • Figure 10: Visual voicemail
  • Figure 11: Turkcell’s role in the mobile advertising ecosystem
  • Figure 12: Turkcell’s mobile marketing solution portfolio
  • Figure 13: Turkcell’s permission database overview
  • Figure 14: SingTel’s role in the mobile advertising ecosystem
  • Figure 15: SingTel’s digital portfolio prioritisation
  • Figure 16: The role of first-party data
  • Figure 17: The promise of first-party data
  • Figure 18: The Telco Advertising Business Flywheel

Mobile app latency in Europe: French operators lead; Italian & Spanish lag

Latency as a proxy for customer app experience

Latency is a measure of the time taken for a packet of data to travel from one designated point to another. The complication comes in defining the start and end point. For an operator seeking to measure its network latency, it might measure only the transmission time across its network.

However, to objectively measure customer app experience, it is better to measure the time it takes from the moment the user takes an action, such as pressing a button on a mobile device, to receiving a response – in effect, a packet arriving back and being processed by the application at the device.

This ‘total roundtrip latency’ time is what is measured by our partner, Crittercism, via embedded code within applications themselves on an aggregated and anonymised basis. Put simply, total roundtrip latency is the best measure of customer experience because it encompasses the total ‘wait time’ for a customer, not just a portion of the multi-stage journey.
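In principle, measuring total roundtrip latency at the application layer is just a matter of timing from the moment an action fires to the moment its response has been processed. The sketch below illustrates the idea only; it is not Crittercism’s implementation, and `fake_request` is a hypothetical stand-in for a real network call:

```python
# Minimal sketch of application-layer roundtrip latency measurement:
# time from action start to response fully processed.
import time

def measure_roundtrip_ms(action):
    """Run `action` (any callable that performs a request and processes
    its response) and return (result, wall-clock latency in milliseconds)."""
    start = time.perf_counter()
    result = action()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Hypothetical usage: wrap a request/parse step as an embedded SDK might.
def fake_request():
    time.sleep(0.05)        # stand-in for network transit + processing
    return {"status": 200}

result, latency_ms = measure_roundtrip_ms(fake_request)
print(f"roundtrip: {latency_ms:.0f} ms")
```

Because the timer brackets the whole call, it captures the full ‘wait time’ – radio, network, server and client processing together – rather than any single leg of the journey.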

Latency is becoming increasingly important

Broadband speeds tend to attract the most attention in the press and in operator advertising, and speed does of course impact downloads and streaming experiences. But total roundtrip latency has a bigger impact on many digital user experiences than speed. This is because of the way that applications are built.

In modern Web applications, the business logic is parcelled out into independent ‘microservices’ and their responses are re-assembled by the client to produce the overall digital user experience. Each HTTP request is often quite small, and although an overall onscreen action can be composed of a number of requests of varying sizes, broadband speed is often less of a factor than latency – the time to send and receive each request. See Appendix 2: Why latency is important, for a more detailed explanation of why latency is such an important driver of customer app experience.
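A back-of-envelope model makes the point. This is our illustration, not the report’s methodology, and the request counts, sizes and speeds are assumed values; it treats requests as sequential, the worst case for latency:

```python
# Crude model of an onscreen action composed of many small requests:
# each request pays one roundtrip latency plus its transfer time.

def action_time_ms(n_requests, bytes_per_request, latency_ms, mbps):
    """Total time (ms) for n sequential requests, given roundtrip latency
    in ms and link speed in Mbps."""
    transfer_ms = (bytes_per_request * 8) / (mbps * 1e6) * 1000.0
    return n_requests * (latency_ms + transfer_ms)

# 20 small (5 KB) requests behind one screen tap:
base = action_time_ms(20, 5_000, latency_ms=300, mbps=10)
faster_net = action_time_ms(20, 5_000, latency_ms=300, mbps=20)  # double speed
lower_lat = action_time_ms(20, 5_000, latency_ms=150, mbps=10)   # half latency
print(f"base {base:.0f} ms, 2x bandwidth {faster_net:.0f} ms, "
      f"half latency {lower_lat:.0f} ms")
```

With small payloads, doubling bandwidth shaves only a few milliseconds off each request, while halving latency nearly halves the total wait – which is why latency dominates the experience of microservice-heavy apps.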

The value of using actual application latency data

As we have already explained, STL Partners prefers to use total roundtrip latency as an indicator of customer app experience as it measures the time that a customer waits for a response following an action. STL Partners believes that Crittercism data reflects actual usage in each market because it operates within apps – in hundreds of thousands of apps that people use in the Apple App Store and in Google Play. This is quite a different approach from that of other players, which require users to download a specific app that then ‘pings’ a server and awaits a response. This latter approach has a couple of limitations:

1. Although there have been several million downloads of the OpenSignal and Actual Experience apps, this doesn’t get anywhere near the number of people that have downloaded apps containing the Crittercism measurement code.

2. Because the Crittercism code is embedded within apps, it directly measures the latency experienced by users when using those apps. A dedicated measurement app fails to do this. It could be argued that a dedicated app gives the ‘cleanest’ reading – it isn’t affected by variations in app design, for example. This is true, but STL Partners believes that by aggregating the data across apps such variation is removed and a representative picture of total roundtrip latency revealed. Crittercism data can also show more granular data. For example, although we haven’t shown it in this report, Crittercism data can show latency performance by application type – e.g. Entertainment, Shopping, and so forth – based on the categorisation of apps used by Google and Apple in their app stores.

A key premise of this analysis is that, because operators’ customer bases are similar within and across markets, the profile of app usage (and therefore latency) is similar from one operator to the next. The latency differences between operators are, therefore, down to the performance of the operator.

Why it isn’t enough to measure average latency

It is often said that averages hide disparities in data, and this is particularly true for latency and for customer experience. This is best illustrated with an example. In Figure 2 we show the distribution of latencies for two operators. Operator A has lots of very fast requests and a long tail of requests with high latencies.

Operator B has far fewer fast requests but a much shorter tail of poor-performing latencies. The chart clearly shows that operator B has a much higher percentage of requests with a satisfactory latency, even though its average latency is worse than operator A’s (318ms vs 314ms). Essentially, operator A is let down by its slowest requests – those that prevent an application from completing a task for a customer.

This is why in this report we focus on average latency AND, critically, on the percentage of requests that are deemed ‘unsatisfactory’ from a customer experience perspective.
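The effect is easy to reproduce with a short sketch. The latency distributions below are illustrative numbers of our own, not the Figure 2 data, but they show the same pattern: a worse average paired with a better tail:

```python
# Illustrative: an operator with a slightly worse average latency can
# still deliver far fewer 'unsatisfactory' (>500ms) requests.

def mean_and_tail_share(latencies_ms, cutoff_ms=500):
    """Return (average latency, share of requests above the cutoff)."""
    mean = sum(latencies_ms) / len(latencies_ms)
    tail = sum(1 for x in latencies_ms if x > cutoff_ms) / len(latencies_ms)
    return mean, tail

# Operator A: many very fast requests but a long tail of slow ones.
op_a = [100] * 70 + [200] * 10 + [1200] * 20
# Operator B: fewer fast requests but a much shorter tail.
op_b = [250] * 60 + [400] * 30 + [700] * 10

mean_a, tail_a = mean_and_tail_share(op_a)
mean_b, tail_b = mean_and_tail_share(op_b)
print(f"A: mean {mean_a:.0f} ms, {tail_a:.0%} over 500 ms")
print(f"B: mean {mean_b:.0f} ms, {tail_b:.0%} over 500 ms")
# B's average is higher, yet only half as many of its requests
# breach the 500ms satisfaction threshold.
```

This is exactly why a single average can mislead: the mean is pulled around by a handful of very slow requests, while the tail share tracks how often customers actually wait too long.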

Using latency as a measure of performance for customers

500ms as a key performance cut-off

‘Good’ roundtrip latency is somewhat subjective, and there is evidence that experience declines in a linear fashion as latency increases – people incrementally drop off the site. However, we have picked 500ms (or half a second) as a measure of unsatisfactory performance as we believe that a delay of more than this is likely to impact mobile users negatively (expectations on the ‘fixed’ internet are higher). User interface research from as far back as 1968 suggests that anything below 100ms is perceived as “instant”, although more recent work on gamers suggests that even lower is usually better, and delay starts to become intrusive after 200-300ms. Google experiments from 2009 suggest that a lasting effect – users continued to see the site as “slow” for several weeks – kicked in above 400ms.

Percentage of app requests with total roundtrip latency above 500ms – markets

Five key markets in Europe: France, Germany, Italy, Spain and the UK.

This first report looks at five key markets in Europe: France, Germany, Italy, Spain and the UK. We explore overall performance for Europe by comparing the relative performance of each country, and then dive into the performance of operators within each country.

We intend to publish further reports in this series, looking at performance in other regions – North America, the Middle East and Asia, for example. This first report is intended to provide a ‘taster’ for readers, and STL Partners would welcome feedback on the additional insights readers would value, such as latency performance by:

  • Operating system – Android vs Apple
  • Specific device – e.g. Samsung S6 vs iPhone 6
  • App category – e.g. shopping, games, etc.
  • Specific countries
  • Historical trends

Based on this feedback, STL Partners and Crittercism will explore whether it is valuable to provide specific total roundtrip latency measurement products.

Contents

  • Latency as a proxy for customer app experience
  • ‘Total roundtrip latency’ is the best measure for customer ‘app experience’
  • Latency is becoming increasingly important
  • STL Partners’ approach
  • Europe: UK, Germany, France, Italy, Spain
  • Quantitative Analysis
  • Key findings
  • UK: EE, O2, Vodafone, 3
  • Quantitative Analysis
  • Key findings
  • Germany: T-Mobile, Vodafone, e-Plus, O2
  • Quantitative Analysis
  • Key findings
  • France: Orange, SFR, Bouygues Télécom, Free
  • Quantitative Analysis
  • Key findings
  • Italy: TIM, Vodafone, Wind, 3
  • Quantitative Analysis
  • Key findings
  • Spain: Movistar, Vodafone, Orange, Yoigo
  • Quantitative Analysis
  • Key findings
  • About STL Partners and Telco 2.0
  • About Crittercism
  • Appendix 1: Defining latency
  • Appendix 2: Why latency is important

 

  • Figure 1: Total roundtrip latency – reflecting a user’s ‘wait time’
  • Figure 2: Why a worse average latency can result in higher customer satisfaction
  • Figure 3: Major European markets – average total roundtrip latency (ms)
  • Figure 4: Major European markets – percentage of requests above 500ms
  • Figure 5: The location of Google and Amazon’s European data centres favours operators in France, UK and Germany
  • Figure 6: European operators – average total roundtrip latency (ms)
  • Figure 7: European operators – percentage of requests with latency over 500ms
  • Figure 8: Customer app experience is likely to be particularly poor at 3 Italy, Movistar (Spain) and Telecom Italia
  • Figure 9: UK Operators – average latency (ms)
  • Figure 10: UK operators – percentage of requests with latency over 500ms
  • Figure 11: German Operators – average latency (ms)
  • Figure 12: German operators – percentage of requests with latency over 500ms
  • Figure 13: French Operators – average latency (ms)
  • Figure 14: French operators – percentage of requests with latency over 500ms
  • Figure 15: Italian Operators – average latency (ms)
  • Figure 16: Italian operators – percentage of requests with latency over 500ms
  • Figure 17: Spanish Operators – average latency (ms)
  • Figure 18: Spanish operators – percentage of requests with latency over 500ms
  • Figure 19: Breakdown of HTTP requests in facebook.com, by type and size

Do network investments drive creation & sale of truly novel services?

Introduction

History: The network is the service

Before looking at how current network investments might drive future generations of telco-delivered services, it is worth considering some of the history, and examining how we got where we are today.

Most obviously, the original network build-outs were synonymous with the services they were designed to support. Both fixed and mobile operators started life as “phone networks”, with analogue or electro-mechanical switches. (Their earlier ancestors served the telegraph and paging networks, respectively.) Cable operators began as conduits for analogue TV signals. These evolved to support digital switches of various types, as well as using IP connections internally.

From the 1980s onwards, it was hoped that future generations of telecom services would be enabled by, and delivered from, the network itself – hence acronyms like ISDN (Integrated Services Digital Network) and IN (Intelligent Network).

But the earliest signs that “digital services” might come from outside the telecom network were evident even at that point. Large companies built up private networks to support their own phone systems (PBXs). Various 3rd-party “value-added networks” (VAN) and “electronic data interchange” (EDI) services emerged in industries such as the automotive sector, finance and airlines. And from the early 1990s, consumers started to get access to bulletin boards and early online services like AOL and CompuServe, accessed using dial-up modems.

And then, around 1994, the first mass-market web browsers were introduced, and the model of Internet access and ISPs took off, initially with narrowband connections using modems, but then swiftly evolving to ADSL-based broadband. From then onwards, the bulk of new consumer “digital services” were web-based, or used other Internet protocols such as email and private messaging. At the same time, businesses evolved their own private data networks (using telco “pipes” such as leased-lines, frame-relay and the like), supporting their growing client/server computing and networked-application needs.

Figure 1: In recent years, most digital services have been “non-network” based

Source: STL Partners

For fixed broadband, Internet access and corporate data connections have mostly dominated ever since, with rare exceptions such as Centrex phone and web-hosting services for businesses, or alarm-monitoring for consumers. The first VoIP-based carrier telephony service only emerged in 2003, and uptake has been slow and patchy – there is still a dominance of old, circuit-based fixed phone connections in many countries.

More recently, a few more “fixed network-integrated” offers have evolved – cloud platforms for businesses’ voice, UC and SaaS applications, content delivery networks, and assorted consumer-oriented entertainment/IPTV platforms. And in the last couple of years, operators have started to use their broadband access for a wider array of offers such as home-automation, or “on-boarding” Internet content sources into set-top box platforms.

The mobile world started evolving later – mainstream cellular adoption only really started around 1995. In the mobile world, most services prior to 2005 were either integrated directly into the network (e.g. telephony, SMS, MMS) or provided by operators through dedicated service delivery platforms (e.g. DoCoMo iMode, and Verizon’s BREW store). Some early digital services such as custom ringtones were available via 3rd-party channels, but even they were typically charged and delivered via SMS. The “mobile Internet” between 1999-2004 was delivered via specialised WAP gateways and servers, implemented in carrier networks. The huge 3G spectrum licence awards around 2000-2002 were made on the assumption that telcos would continue to act as creators or gatekeepers for the majority of mobile-delivered services.

It was only around 2005-6 that “full Internet access” started to become available for mobile users, both for those with early smartphones such as Nokia/Symbian devices, and via (quite expensive) external modems for laptops. In 2007 we saw two game-changers emerge – the first-generation Apple iPhone, and Huawei’s USB 3G modem. Both catalysed the wide adoption of the consumer “data plan” – hitherto almost unknown. By 2010, there were virtually no new network-based services, while the “app economy” and “vanilla” Internet access started to dominate mobile users’ behaviour and spending. Even non-Internet mobile services such as BlackBerry BES were offered via alternative non-telco infrastructure.

Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7

Source: Disruptive Analysis

By 2013, there had still been very few successful mobile digital-services offers that were actually anchored in cellular operators’ infrastructure. There have been a few positive signs in the M2M sphere and wholesaled SMS APIs, but other integrated propositions such as mobile network-based TV have largely failed. Once again the transition to IP-based carrier telephony has been slow – VoLTE is gaining grudging acceptance more from necessity than desire, while “official” telco messaging services like RCS have been abject failures. Neither can be described as “digital innovation”, either – there is little new in them.

The last two years, however, have seen the emergence of some “green shoots” for mobile services. Some new partnering / charging models have borne fruit, with zero-rated content/apps becoming quite prevalent, and a handful of developer platforms finally starting to gain traction, offering network-based features such as location awareness. Various M2M sectors, such as automotive connectivity and smart metering, have evolved. But the bulk of mobile “digital services” have been geared around iOS and Android apps, anchored in the cloud rather than telcos’ networks.

So in 2015, we are in a situation where the majority of “cool” or “corporate” services in both mobile and fixed worlds owe little to “the network” beyond fast IP connectivity: the feared mythical (and factually-incorrect) “dumb pipe”. Connected “general-purpose” devices like PCs and smartphones are optimised for service delivery via the web and mobile apps. Broadband-connected TVs are partly used for operator-provided IPTV, but also for so-called “OTT” services such as Netflix.

And future networks and novel services? As discussed below, there are some positive signs stemming from virtualisation and some new organisational trends at operators to encourage innovative services – but it is not yet clear that they will be enough to overcome the open Internet’s sustained momentum.

What are so-called “digital services”?

It is impossible to visit a telecoms conference, or read a vendor press-release, without being bombarded by the word “digital” in a telecom context. Digital services, digital platforms, digital partnerships, digital agencies, digital processes, digital transformation – and so on.

It seems that despite the first digital telephone exchanges being installed in the 1980s, and digital computing being de rigueur since the 1950s, the telecoms industry’s marketing people have decided that 2015 is when the transition really occurs. But when the chaff is stripped away, what does it really mean, especially in the context of service innovation and the network?

Often, it seems that “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords, are becoming important.

And in other instances the “digital” phrases relate to relatively normal services – but deployed and managed in a much more efficient and automated fashion. This is quite important, as a lot of older services still rely on “analogue” processes – manual configuration, physical “truck rolls” to install and commission, and high “touch” from sales or technical support people to sell and operate, rather than self-provisioning and self-care through a web portal. Here, the correct term is perhaps “digital transformation” (or even more prosaically simply “automation”), representing a mix of updated IP-based networks, and more modern and flexible OSS/BSS systems to drive and bill them.

STL identifies three separate mechanisms by which network investments can impact creation and delivery of services:

  • New networks directly enable the supply of wholly new services. For example, some IoT services or mobile gaming applications would be impossible without low-latency 4G/5G connections, more comprehensive coverage, or automated provisioning systems.
  • Network investment changes the economics of existing services, for example by removing costly manual processes, or radically reducing the cost of service delivery (e.g. fibre backhaul to cell sites)
  • Network investment occurs hand-in-hand with other changes, thus indirectly helping drive new service evolution – such as development of “partner on-boarding” capabilities or API platforms, which themselves require network “hooks”.

While the future will involve a broader set of content/application revenue streams for telcos, it will also need to support more, faster and differentiated types of data connections. Top of the “opportunity list” is the support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. Many of these will not involve connection via the “public Internet” and therefore there is a possibility for new forms of connectivity proposition or business model – faster- or lower-powered networks, or perhaps even the much-discussed but rarely-seen monetisation of “QoS” (Quality of Service). Even if not paid for directly, QoS could perhaps be integrated into compelling packages and data-service bundles.

There is also the potential for more “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network and “orchestrated” appropriately by the operator. (We covered this area in depth in the recent Telco 2.0 brief Mobile Edge Computing: How 5G is Disrupting Cloud and Network Strategy Today.)

In other words, virtualisation and the “software network” might allow truly new services, not just providing existing services more easily. That said, even if the answer is that the network could make a large-enough difference, there are still many extra questions about timelines, technology choices, business models, competitive and regulatory dynamics – and the practicalities and risks of making it happen.

Part of the complexity is that many of these putative new services will face additional sources of competition and/or substitution by other means. A designer of a new communications service or application has many choices about how to turn the concept into reality. Basing network investments on specific predictions of narrow services has a huge amount of risk, unless they are agreed clearly upfront.

But there is also another latent truth here: without ever-better (and more efficient) networks, the telecom industry is going to get further squeezed anyway. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Businesses and governments will look to manage their networking and communications costs – and may get access to dark fibre or spectrum to build their own networks, if commercial services don’t continue to improve in terms of price-performance. New connectivity options are springing up too, from WiFi to drones to device-to-device connections.

In other words: some network investment will be “table stakes” for telcos, irrespective of any new digital services. In many senses, the new propositions are “upside” rather than the fundamental basis justifying capex.

 

  • Executive Summary
  • Introduction
  • History: The network is the service
  • What are so-called “digital services”?
  • Service categories
  • Network domains
  • Enabler, pre-requisite or inhibitor?
  • Overview
  • Virtualisation
  • Agility & service enablement
  • More than just the network: lead actor & supporting cast
  • Case-studies, examples & counter-examples
  • Successful network-based novel services
  • Network-driven services: learning from past failures
  • The mobile network paradox
  • Conclusion: Services, agility & the network
  • How do so-called “digital” services link to the network?
  • Which network domains can make a difference?
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In recent years, most digital services have been “non-network” based
  • Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7
  • Figure 3: Network spend both “enables” & “prevents inhibition” of new services
  • Figure 4: Virtualisation brings classic telco “Network” & “IT” functions together
  • Figure 5: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 6: Service agility is multi-faceted. Network agility is a core element
  • Figure 7: Using Big Data Analytics to Predictively Cache Content
  • Figure 8: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 9: Mapping network investment areas to service opportunities

Google’s MVNO: What’s Behind it and What are the Implications?

Google’s core business is under pressure

Google, the undisputed leader in online advertising and tech industry icon, has more problems than you might think. The grand narrative is captured in the following chart, showing basic annual financial metrics for Google, Inc. between 2009 and 2014.

Figure 1: Google’s margins have eroded substantially over time

Source: STL Partners, Google 10-K filing

This is essentially the classic problem of commoditisation. The IT industry has been structurally deflationary throughout its existence, which has always posed problems for its biggest successes – how do you maintain profitability in a business where prices only ever fall? Google is growing in terms of volume, but its margins are sliding, and as a result, profitability is growing much more slowly than revenue. Since 2010, the operating margin has shrunk from around 35% to around 25%, a period during which a major competitor emerged (Facebook) and Google initiated a variety of major investments, research projects, and flirted with manufacturing hardware (through the Motorola acquisition).
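The squeeze is simple arithmetic. A sketch with rounded, illustrative figures (USD billions, not Google’s exact reported numbers) shows how a ten-point margin slide leaves profit growing at roughly half the rate of revenue:

```python
# Illustrative round numbers in USD billions -- not exact 10-K figures.
revenue_2010, margin_2010 = 29.0, 0.35
revenue_2014, margin_2014 = 66.0, 0.25

profit_2010 = revenue_2010 * margin_2010   # ~10.2
profit_2014 = revenue_2014 * margin_2014   # 16.5

revenue_growth = revenue_2014 / revenue_2010 - 1
profit_growth = profit_2014 / profit_2010 - 1

print(f"Revenue growth 2010-14: {revenue_growth:.0%}")   # ~128%
print(f"Profit growth 2010-14:  {profit_growth:.0%}")    # ~63%
```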

And it could get worse. In its most recent 10-K filing, Google says: “We anticipate downward pressure on our operating margin in the future.” It cites increasing competition and increased expenditures, while noting that it is becoming more reliant on lower margin products: “The margin on the sale of digital content and apps, advertising revenues from mobile devices and newer advertising formats are generally less than the margin on revenues we generate from advertising on our websites on traditional formats.”

Google remains massively dependent on a commoditising advertising business

Google is very, very dependent on selling advertising for revenue. It does earn some revenue from content, but most of this is generated from the ContentID program, which places adverts on copyrighted material and shares revenue with the rightsholder, and therefore amounts to much the same thing. Over the past two years, Google has actually become more advert-dominated, as Figure 2 shows. Advertising revenues are not only vastly greater than non-advertising revenues, they are growing much faster and increasing as a share of the total. Over-reliance on the fickle and fast-changing advertising market is obviously risky. Also, while ad brokering is considered a high-margin business, Google’s margins are now at the same level as AT&T’s.

Figure 2: Not only is Google overwhelmingly dependent on advertising, advertising revenue is growing faster than non-advertising

Source: STL Partners, Google 10-K

The growth rate of non-advertising revenue at Google has slowed sharply since last year. It is now growing more slowly than either advertising on Google properties, or in the Google affiliate network (see Figure 3).

Figure 3: Google’s new-line businesses are growing slower than the core business

Source: STL Partners, Google 10-K

At the same time, the balance has shifted a little between Google’s own properties (such as Google.com) and its affiliate network. Historically, more and more Google revenue has come from its own inventory and less from placing ads on partner sites. Costs arise from the affiliate network because Google pays out revenue share to the partner sites, known as traffic-acquisition costs or TACs. Own-account ad inventory, however, isn’t free – Google has to create products to place advertising in, and this causes it to incur R&D expenditures.

In a real sense, R&D is the equivalent of TAC for the 60-odd per cent of Google’s business that occurs on its own web sites. Google’s engineering excellence, and perhaps economies of scale, mean that generating ad inventory via product creation might be a better deal than paying out revenue share to hordes of bloggers or app developers, and Figure 4 shows this is indeed the case. R&D makes up a much smaller percentage of revenue from Google properties than TAC does of revenue from the affiliate network.

Figure 4: R&D is a more efficient means of generating ad inventory than affiliate payouts

Source: STL Partners, Google 10-K

Note that although TAC might well be rising, the spike for Q4 2014 is probably a seasonal effect – Q4 is a quarter in which a lot of adverts get clicked across the web.
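The comparison reduces to two ratios. A sketch with hypothetical figures (USD billions; illustrative, not taken from the 10-K):

```python
# Hypothetical figures, USD billions -- for illustration, not from the 10-K.
own_sites_revenue = 40.0
rd_spend = 8.0             # R&D: the 'cost of inventory' for own properties
network_revenue = 13.0
tac_paid = 9.0             # traffic-acquisition costs paid to affiliates

rd_ratio = rd_spend / own_sites_revenue    # 0.20 -> 20% of own-site revenue
tac_ratio = tac_paid / network_revenue     # ~0.69 -> ~69% of network revenue

print(f"R&D as % of own-site revenue:  {rd_ratio:.0%}")
print(f"TAC as % of affiliate revenue: {tac_ratio:.0%}")
```

On figures of this shape, every dollar of affiliate revenue costs far more to acquire than a dollar of own-property revenue costs to enable – the pattern Figure 4 describes.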

 

  • Executive Summary
  • Google’s core business is under pressure
  • Google remains massively dependent on a commoditising advertising business
  • Google spends far more on R&D and capex than Apple
  • But while costs soar, Google ad pricing is falling
  • Google also has very high running costs
  • The threats from Facebook and Apple are real
  • Google MVNO: a strategic initiative
  • What do you need to make a mini-carrier?
  • The Google MVNO will launch into a state of price war
  • How low could the Google MVNO’s prices be?
  • Google’s MVNO: The Strategic Rationale
  • Option 1: Ads
  • Option 2: Straightforward carrier business model
  • Option 3: Android-style strategic initiative vs MNOs
  • Option 4: Anti-Apple virus, 2.0
  • Conclusions

 

  • Figure 1: Google’s margins have eroded substantially over time
  • Figure 2: Not only is Google overwhelmingly dependent on advertising, advertising revenue is growing faster than non-advertising
  • Figure 3: Growth in Google’s new-line businesses is now slower than in the core business
  • Figure 4: R&D is a more efficient means of generating ad inventory than affiliate payouts
  • Figure 5: Google spends a lot of money on research
  • Figure 6: Proportionately, Google research spending is even higher
  • Figure 7: Google’s dollar capex is almost identical to vastly bigger Apple’s
  • Figure 8: Google is startlingly capex-intensive compared to Apple, especially for an ad broker versus a global manufacturing titan
  • Figure 9: Google’s ad pricing is declining, and volume growth paused for most of 2014
  • Figure 10: Google is a more expensive company to run than Apple
  • Figure 11: The aircraft hangar Google leases from NASA
  • Figure 12: Facebook is pursuing quality over quantity in ad placement
  • Figure 13: Facebook is gradually closing the gap on Google in digital advertising
  • Figure 14: Despite a huge revenue quarter, Facebook’s Q4 saw a sharp hit to margin
  • Figure 15: Facebook’s margin hit is explained by the rise in R&D spending
  • Figure 16: Apple’s triumph – a terrible Q4 for the Android ecosystem
  • Figure 17: Price disruption in France and in the United States
  • Figure 18: Price disruption in the US – this is only the beginning
  • Figure 19: Defending AT&T and Verizon Wireless’ ARPU comes at a price
  • Figure 20: Modelling the service price of a mini-carrier
  • Figure 21: A high WiFi offload rate could make Google’s pricing aggressive
  • Figure 22: Handset subsidies are alive and well at T-Mobile

 

Facebook: Telcos’ New Best Friend?

How Facebook is changing

A history of adaptation

One of the things that sets Facebook apart from its largely defunct predecessors, such as MySpace, GeoCities and Friends Reunited, is its ability to adapt to the evolution of the Internet and consumer behaviour. In its decade-long history, Facebook has evolved from a text-heavy, PC-based experience used by American students into a world-leading digital communications and commerce platform used by people of all ages. The basic student matchmaking service Zuckerberg and his fellow Harvard students created in 2004 now matches buyers and sellers in competition with Google, Amazon and eBay (see Figure 1).

Figure 1: From student matchmaking service to a leading digital commerce platform

Source: Zuckerberg’s Facebook page and Facebook investor relations

Launched in early 2004, Facebook initially served as a relatively basic directory with photos and limited communications functionality for Harvard students only. In the spring of 2004, it began to expand to other universities, supported by seed funding from Peter Thiel (co-founder of Paypal). In September 2005, Facebook was opened up to the employees of some technology companies, including Apple and Microsoft. By the end of 2005, it had reached five million users.

Accel Partners invested US$12.7 million in the company in May 2005 and Greylock Partners and others followed this up with another US$27.5 million in March 2006. The additional investment enabled Facebook to expand rapidly. During 2006, it added the hugely popular newsfeed and the share functions and opened up the registration process to anyone. By December 2006, Facebook had 12 million users.

The Facebook Platform was launched in 2007, enabling affiliate sites and developers to interact and create applications for the social network. In a far-sighted move, Microsoft invested US$240 million in October 2007, taking a 1.6% stake and valuing Facebook at US$15 billion. By August 2008, Facebook had 100 million users.

Achieving the 100 million user milestone appears to have given Facebook ‘critical mass’ because at that point growth accelerated dramatically. The company doubled its user base to 200 million in nine months (May 2009) and has continued to grow at a similar rate since then.

As usage continued to grow rapidly, it became increasingly clear that Facebook could erode Google’s dominant position in the Internet advertising market. In June 2011, Google launched the Google+ social network – the latest move in a series of efforts by the search giant to weaken Facebook’s dominance of the social networking market. But, like its predecessors, Google+ has had little impact on Facebook.

2012-2013 – the paranoid years

Although Facebook shrugged off the challenge from Google+, the rapid rise of the mobile Internet did cause the social network to wobble in 2012. The service, which had been designed for use on desktop PCs, didn’t work so well on mobile devices, both in terms of providing a compelling user experience and achieving monetisation. Realising Facebook could be disrupted by the rise of the mobile Internet, Zuckerberg belatedly called a mass staff meeting and announced a “mobile first” strategy in early 2012.

In an IPO filing in February 2012, Facebook acknowledged it wasn’t sure it could effectively monetize mobile usage without alienating users. “Growth in use of Facebook through our mobile products, where we do not currently display ads, as a substitute for use on personal computers may negatively affect our revenue and financial results,” it duly noted in the filing.

Although usage of Facebook continued to rise on both the desktop and the mobile, there was increasing speculation that it could be superseded by a more mobile-friendly service, such as fast-growing photo-sharing service Instagram. Zuckerberg’s reaction was to buy Instagram for US$1 billion in April 2012 (a bargain compared with the $21 billion plus Facebook paid for WhatsApp less than two years later).

Moreover, Facebook did figure out how to monetise its mobile usage. Cautiously at first, it began embedding adverts into consumers’ newsfeeds, so that they were difficult to ignore. Although Facebook and some commentators worried that consumers would find these adverts annoying, the newsfeed ads have proven to be highly effective and Facebook continued to grow. In October 2012, now a public company, Facebook triumphantly announced it had one billion active users, with 604 million of them using the mobile site.

Even so, Facebook spent much of 2013 tinkering and experimenting with changes to the user experience. For example, it altered the design of the newsfeed making the images bigger and adding in new features. But some commentators complained that the changes made the site more complicated and confusing, rather than simplifying it for mobile users equipped with a relatively small screen. In April 2013, Facebook tried a different tack, launching Facebook Home, a user interface layer for Android-compatible phones that provides a replacement home screen.

And Zuckerberg continued to worry about upstart mobile-orientated competitors. In November 2013, a number of news outlets reported that Facebook offered to buy Snapchat, which enables users to send messages that disappear after a set period, for US$3 billion. But the offer was turned down.

A few months later, Facebook announced it was acquiring the popular mobile messaging app WhatsApp for what amounted to more than US$21 billion at the time of completion.

In 2014 – going on the offensive

By acquiring WhatsApp at great expense, Facebook alleviated immediate concerns that the social network could be dislodged by another disruptor, freeing up Zuckerberg to turn his attention to new technologies and new markets. The acquisition also put to rest investors’ immediate fears that Facebook could be superseded by a more fashionable, dedicated mobile service, pushing up the share price (see the section on Facebook’s valuation). In May 2014, Facebook wrong-footed many industry watchers and some of its rivals by announcing it had agreed to acquire Oculus VR, Inc., a leading virtual reality company, for US$2 billion in cash and stock.

Zuckerberg has since described the WhatsApp and Oculus acquisitions as “big bets on the next generation of communication and computing platforms.” And Facebook is also investing heavily in organic expansion, increasing its headcount by 45% in 2014, while opening another data center in Altoona, Iowa.

Zuckerberg also continues to devote time and attention to Internet.org, a multi-company initiative to bring free basic Internet services to people who aren’t connected. Announced in August 2013, Internet.org has since launched free basic internet services in six developing countries. For example, in February 2015, Facebook and Reliance Communications launched Internet.org in India. As a result, Reliance customers in six Indian states (Tamil Nadu, Maharashtra, Andhra Pradesh, Gujarat, Kerala, and Telangana) now have access to about 40 services covering news, maternal health, travel, local jobs, sports, communication and local government information.

Zuckerberg said that more than 150 million people now have the option to connect to the internet using Internet.org, and that the initiative had so far connected seven million people who previously had no internet access. “2015 is going to be an important year for our long term plans,” he noted.

The Facebook exception – no fear, more freedom

Although it is now listed, Facebook is clearly not a typical public company. Its massive lead in the social networking market has given it an unusual degree of freedom. Zuckerberg has a controlling stake in the social network (he is able to exercise voting rights with respect to a majority of the voting power of the outstanding capital stock) and the self-confidence to ignore any grumblings on Wall Street. Facebook is able to make acquisitions most other companies couldn’t contemplate and can continue to put Zuckerberg’s long-term objectives ahead of those of short-term shareholders. Like Amazon, Facebook frequently reminds investors that it isn’t trying to maximise short-term profitability. And unlike Amazon, Facebook may not even be trying to maximize long-term profitability.

On Facebook’s quarterly earnings calls, Zuckerberg likes to talk about Facebook’s broad, long-term aims, without explaining clearly how fulfilling these objectives will make the company money. “In the next decade, Facebook is focused on our mission to connect the entire world, welcoming billions of people to our community and connecting many more people to the internet through Internet.org (see Figure 2),” he said in the January 2015 earnings call. “Similar to our transition to mobile over the last couple of years, now we want to really focus on serving everyone in the world.”

Figure 2: Zuckerberg is pushing hard for the provision of basic Internet services

 

Source: Facebook.com

Not all of the company’s investors are entirely comfortable with this mission. On that earnings call, one analyst asked Zuckerberg: “Mark, I think during your remarks in every earnings call, you talk to your investors for a considerable amount of time about Facebook’s efforts to connect the world, and specifically about Internet.org which suggest you think this is important to investors. Can you clarify why you think this matters to investors?”

Zuckerberg’s response: “It matters to the kind of investors that we want to have, because we are really a mission-focused company. We wake up every day and make decisions because we want to help connect the world. That’s what we’re doing here.

“Part of the subtext of your question is that, yes, if we were only focused on making money, we might put all of our energy on just increasing ads to people in the US and the other most developed countries. But that’s not the only thing that we care about here.

“I do think that over the long term, that focusing on helping connect everyone will be a good business opportunity for us, as well. We may not be able to tell you exactly how many years that’s going to happen in. But as these countries get more connected, the economies grow, the ad markets grow, and if Facebook and the other services in our community, or the number one, and number two, three, four, five services that people are using, then over time we will be compensated for some of the value that we’ve provided. This is why we’re here. We’re here because our mission is to connect the world. I just think it’s really important that investors know that.”

Takeaways

Facebook may be a public company, but it doesn’t worry much about shareholders’ short-term aspirations. It often behaves like a private company that is focused first and foremost on fulfilling the goals of its founder. It is clear Zuckerberg is playing the long game. But it isn’t clear what yardsticks he is using to measure success. Although Zuckerberg knows Facebook needs to be profitable enough to ensure investors’ continued support, his primary goal may be to bring hundreds of millions more people online and secure his place in history. There is a danger that Zuckerberg’s focus on connecting people in Africa and developing Asia means that there won’t be sufficient top management attention on the multi-faceted digital commerce struggle with Google in North America and Western Europe.

Financials and business model

Network effects still strong

Within that wider mission to connect the world, Facebook continues to do a great job of connecting people to Facebook. Fuelled by network effects, Facebook says that 1.39 billion people now use Facebook each month (see Figure 3) and 890 million people use the service daily, an increase of 165 million monthly active users and 133 million daily active users in 2014. In developed markets, many consumers use Facebook as a primary medium for communications, relying on it to send messages, organize events and relay their news. As a result, in parts of Europe and North America, adults without a Facebook account are increasingly considered eccentric.

Figure 3: Facebook’s user base continues to grow rapidly

Source: Facebook and STL Partners analysis

Having said that, some active users are clearly more active and valuable than others. In a regulatory filing, Facebook admits that some active users may, in fact, be bots: “Some of our metrics have also been affected by applications on certain mobile devices that automatically contact our servers for regular updates with no user action involved, and this activity can cause our system to count the user associated with such a device as an active user on the day such contact occurs. The impact of this automatic activity on our metrics varied by geography because mobile usage varies in different regions of the world.”

This automatic polling of Facebook’s servers by mobile devices makes it difficult to judge the true value of the social network’s user base. Anecdotal evidence suggests many people with Facebook profiles are kept active on Facebook primarily by their smartphone apps, rather than because they are actively choosing to use the service. Still, Facebook would argue that these people are seeing the notifications on their mobile devices and are, therefore, at least partially engaged.

 

 

Key Questions for NextGen Broadband Part 1: The Business Case

Introduction

It’s almost a cliché to talk about “the future of the network” in telecoms. We all know that broadband and network infrastructure is a never-ending continuum that evolves over time – its “future” is continually being invented and reinvented. We also all know that no two networks are identical, and that despite standardisation there are always specific differences, because countries, regulations, user-bases and legacies all vary widely.

But at the same time, the network clearly still matters – perhaps more than at any point during the last two decades of rapid growth in telephony and SMS services, whose value is now dissipating rapidly. While there are certainly large swathes of the telecom sector benefiting from content provision, commerce and other “application-layer” activities, the bulk of the value users perceive lies in connectivity to the Internet, IPTV and enterprise networks.

The big question is whether CSPs can continue to convert that perceived value from users into actual value for the bottom-line, given the costs and complexities involved in building and running networks. That is the paradox.

While the future will continue to feature a broader set of content/application revenue streams for telcos, it will also need not just to support more and faster data connections, but to cope with a set of new challenges and opportunities. Top of the list is support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. There is a significant chance that many of these will not involve connection via the “public Internet”, so new forms of connectivity proposition may evolve – faster or lower-powered networks, or perhaps even the semi-mythical “QoS”, which, if not paid for directly, could perhaps be integrated into compelling packages and data-service bundles. There is also the potential for “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network, “orchestrated” appropriately by the operator. But does this add more value than investing in more web/OTT-style applications and services, de-coupled from the network?

Again, this raises questions about technology, business models – and the practicalities of making it happen.

This plays directly into the concept of the revenue “hunger gap” we have analysed for the past two years – without ever-better (but more efficient) networks, the telecom industry is going to get further squeezed. While service innovation is utterly essential, it also seems to be slow-moving and patchy. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Depending on the trajectory of regulatory change, we may also see more consolidation among parts of the service provider industry, more quad-play networks, more sharing and wholesale models.

We also see communications networks and applications permeating deeper into society and government. There is a sense among some policymakers that “telecoms is too important to leave up to the telcos”, with initiatives like Smart Cities and public-safety networks often becoming decoupled from the mainstream of service providers. There is an expectation that technology – and by extension, networks – will enable better economies, improved healthcare and education, safer and more efficient transport, mechanisms for combatting crime and climate change, and new industries and jobs, even as old ones become automated and robotised.

Figure 1 – New services are both network-integrated & independent

 

Source: STL Partners

And all of this generates yet more uncertainty, with yet more questions – some about the innovations needed to support these new visions, but also whether they can be brought to market profitably, given the starting-point we find ourselves at, with fragmented (yet growing) competition, regulatory uncertainty, political interference – and often, internal cultural barriers within the CSPs themselves. Can these be overcome?

A common theme from the section above is “questions”. This document – and a forthcoming “sequel” – is intended to group, lay out and introduce the most important ones. Most observers tend to focus on just a few areas of uncertainty, but in setting up the next year or so of detailed research, Telco 2.0 wants to fully list and articulate all of the hottest issues. Only once they are collated can we start to work out the priorities – and inter-dependencies.

Our belief is that all of the detailed questions on “Future Networks” can, in fact, be tied back to one of two broader, overarching themes:

  • What are the business cases and operational needs for future network investment?
  • Which disruptions (technological or other) are expected in the future?

The business case theme is covered in this document. It combines future costs (spectrum, 4G/5G/fibre deployments, network-sharing, virtualisation, BSS/OSS transformation etc.) and revenues (data connectivity, content, network-integrated service offerings, new Telco 2.0-style services and so on). It also encompasses what is essential to make the evolution achievable, in terms of organisational and cultural transformation within telcos.

A separate Telco 2.0 document, to be published in coming weeks, will cover the various forthcoming disruptions. These are expected to include new network technologies that will ultimately coalesce to form 5G mobile and new low-power wireless, as well as FTTx and DOCSIS cable evolution. In addition, virtualisation in both NFV and SDN guises will be hugely transformative.

There is also a growing link between mobile and fixed domains, reflected in quad-play propositions, industry consolidation, and the growth of small-cells and Wi-Fi with fixed-line backhaul. In addition, to support future service innovation, there need to be adequate platforms for both internal and external developers, as well as a meaningful strategy for voice/video which fits with both network and end-user trends. Beyond the technical, additional disruption will be delivered by regulatory change (for example on spectrum and neutrality), and also a reshaped vendor landscape.

The remainder of this report lays out the first five of the Top 10 most important questions for the Future Network. We can’t give definitive analyses, explanations or “answers” in a report of this length – and indeed, many of them are moving targets anyway. But by taking a holistic approach to laying out each question properly – where it comes from, and what the “moving parts” are – we help to define the landscape. The objective is to help management teams apply those same filters to their own organisations, understand how costs can be controlled and revenues garnered, see where consolidation and regulatory change might help or hinder, and deal with users’ and governments’ increasing expectations.

The 10 Questions also lay the ground for our new Future Network research stream, forthcoming publications and comment/opinion.

Overview: what is the business case for Future Networks?

As later sections of both this document and the second in the series cover, there are various upcoming technical innovations in the networking pipeline. Numerous advanced radio technologies underpin 4.5G and 5G, there is ongoing work to improve fibre and DSL/cable broadband, virtualisation promises much greater flexibility in carrier infrastructure and service enablement, and so on. But all those advances are predicated on either (ideally) more revenues, or at least reduced costs to deploy and operate. All require economic justification for investment to occur.

This is at the core of the Future Networks dilemma for operators – what is the business case for ongoing investment? How can the executives, boards of directors and investors be assured of returns? We all know about the ongoing shift of business & society online, the moves towards smarter cities and national infrastructure, changes in entertainment and communication preferences and, of course, the Internet of Things – but how much benefit and value might accrue to CSPs? And is that value driven by network investments, or should telecom companies re-focus their investments and recruitment on software, content and the cloud?

This is not a straightforward question. There are many in the industry that assert that “the network is the key differentiator & source of value”, while others counter that it is a commodity and that “the real value is in the services”.

What is clear is that better/faster networks will be needed in any case, to achieve some of the lofty goals that are being suggested for the future. However, it is far from clear how much of the overall value-chain profit can be captured from just owning the basic machinery – recent years have shown a rapid de-coupling of network and service, apart from a few areas.

In the past, networks largely defined the services offered – most notably broadband access, phone calls and SMS, as well as cable TV and IPTV. But with the ubiquitous rise of Internet access and service platforms/gateways, an ever-increasing amount of service “logic” is located on the web, or in the cloud – not enshrined in the network itself. This is an important distinction – some services are abstracted and designed to be accessed from any network, while others are intimately linked to the infrastructure.

Over the last decade, the prevailing shift has been towards network-independent services. In many ways “the web has won”. This trend may reverse in future, though, as servers and virtualised, distributed cloud capabilities get pushed down into localised network elements. That, however, brings its own new complexities, uncertainties and challenges – it is a brave (or foolhardy) telco CEO that would bet the company on new in-network service offers alone. We will also see API platforms expose network “capabilities” to the web/cloud – for example, W3C is working on standards to allow web developers to gain insights into network congestion, or users’ data-plans.
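The W3C work referred to above can be illustrated with the draft Network Information API (`navigator.connection`), which exposes coarse hints about the user's link to web code. The sketch below is a minimal, hedged example – the API is a draft, browser support varies, and the helper name `getNetworkHints` is our own illustration, not part of any standard:

```javascript
// Sketch: reading coarse network hints via the draft W3C Network
// Information API (navigator.connection). Support varies by browser,
// so the function feature-detects and degrades gracefully.
// Passing `nav` as a parameter makes the helper testable outside a browser.
function getNetworkHints(nav) {
  const conn = nav && nav.connection;
  if (!conn) {
    return { supported: false };
  }
  return {
    supported: true,
    effectiveType: conn.effectiveType, // coarse class, e.g. "4g" or "3g"
    downlinkMbps: conn.downlink,       // estimated downlink bandwidth (Mbps)
    rttMs: conn.rtt,                   // estimated round-trip time (ms)
    saveData: conn.saveData            // user has opted for reduced data use
  };
}
```

A site might use such hints to pick a lower video bitrate on a constrained link, or to skip speculative prefetching when the user's data-plan preference (`saveData`) is set – exactly the kind of network insight the paragraph above anticipates.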

But currently, the trend is for broadband access and (most) services to be de-coupled. Nonetheless, some operators seem to have been able to make clever pricing, distribution and marketing decisions (supported by local market conditions and/or regulation) to enable bundles to be made desirable.

US operators, for example, have generally fared better than European CSPs, in what should have been comparably-mature markets. But was that due to a faster shift to 4G networks? Or other factors, such as European telecom fragmentation and sub-scale national markets, economic pressures, or perhaps a different legacy base? Did the broad European adoption of pre-paid (and often low-ARPU) mobile subscriptions make it harder to justify investments on the basis of future cashflows – or was it more about the early insistence that 2.6GHz was going to be the main “4G band”, with its limitations later coming back to bite people? It is hard to tease apart the technology issues from the commercial ones.

Similar differences apply in the fixed-broadband world. Why has adoption and typical speed varied so much? Why have some markets preferred cable to DSL? Why are fibre deployments patchy and very nation-specific? Is it about the technology involved – or the economy, topography, government policies, or the shape of the TV/broadcast sector?

Understanding these issues – and, once again, articulating the questions properly – is core to understanding the future for CSPs’ networks. We are in the middle of 4G rollout in most countries, with operators looking at the early requirements for 5G. SDN and NFV are looking important – but their exact purpose, value and timing still remain murky, despite the clear promises. Can fibre rollouts – FTTC or FTTH – still be justified in a world where TV/video spend is shifting away from linear programming and towards online services such as Netflix?

Given all these uncertainties, it may be that either network investments get slowed down – or else consolidation, government subsidy or other top-level initiatives are needed to stimulate them. On the other hand, it could be the case that reduced capex and opex – perhaps through outsourcing, sharing or software-based platforms, or even open-source technology – make the numbers work out well, even for raw connectivity. Certainly, the last few years have seen rising expenditure by end-users on mobile broadband, even if it has also contributed to the erosion of legacy services such as telephony and SMS, by enabling more modern/cheaper rivals. We have also seen a shift to lower-cost network equipment and software suppliers, and an emphasis on “off the shelf” components, or open interfaces, to reduce lock-in and encourage competition.

The following sub-sections each frame a top-level, critical question relating to the business case for Future Networks:

  • Will networks support genuinely new services & enablers/APIs, or just faster/more-granular Internet access?
  • Speed, coverage, performance/QoS… what actually generates network value? And does this derive from customer satisfaction, new use-cases, or other sources?
  • Does quad-play and fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation and culture: what needs to change to support future network investments?

 
