Telco edge computing: What is the operator strategy?


Edge computing can help telcos to move up the value chain

The edge computing market and the technologies enabling it are rapidly developing and attracting new players, providing new opportunities to enterprises and service providers. Telco operators are eyeing the market and looking to leverage the technology to move up the value chain and generate more revenue from their networks and services. Edge computing also represents an opportunity for telcos to extend their role beyond offering connectivity services and move into the platform and the application space.

However, operators will face tough competition from other market players, such as cloud providers, which are moving rapidly to define and own the biggest share of the edge market. Industrial solution providers, such as Bosch and Siemens, are similarly investing in their own edge services. Telcos are also dealing with technical and business challenges as they venture into the new market, trying to position themselves and identify their strategies accordingly.

Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market.

Following this introduction, we focus on:

  1. Edge terminology and structure, explaining common terms used within the edge computing context, where the edge resides, and the role of edge computing in 5G.
  2. An overview of the edge computing market, describing different types of stakeholders, current telecoms operators’ deployments and plans, competition from hyperscale cloud providers and the current investment and consolidation trends.
  3. Telcos’ challenges in addressing the edge opportunity: technical, organisational and commercial challenges, given the market dynamics.
  4. Potential use cases and business models for operators, also exploring possible scenarios of how the market is going to develop and operators’ likely positioning.
  5. A set of recommendations for operators that are building their strategy for the edge.


What is edge computing and where exactly is the edge?

Edge computing brings cloud services and capabilities including computing, storage and networking physically closer to the end-user by locating them on more widely distributed compute infrastructure, typically at smaller sites.

One could argue that edge computing has existed for some time – local infrastructure has been used for compute and storage, be it end-devices, gateways or on-premises data centres. However, edge computing, or edge cloud, refers to bringing the flexibility and openness of cloud-native infrastructure to that local infrastructure.

In contrast to hyperscale cloud computing, where all the data is sent to central locations to be processed and stored, edge computing’s local processing aims to reduce the time and bandwidth needed to send and receive data between applications and the cloud, which improves the performance of the network and the applications. This does not mean that edge computing is an alternative to cloud computing. It is rather an evolutionary step that complements the current cloud computing infrastructure and offers more flexibility in executing and delivering applications.
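To make this concrete, the minimal sketch below (hypothetical code and figures, not from the report) aggregates raw telemetry at an edge node so that only a compact summary crosses the network to the central cloud:

```python
# Hypothetical sketch: pre-process telemetry at the edge, send a summary.

def summarise_at_edge(readings):
    """Reduce a batch of raw sensor readings to a small summary payload."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [20.0 + 0.01 * i for i in range(10_000)]  # raw local telemetry batch
summary = summarise_at_edge(raw)                # processed at the edge node

# Only four numbers travel to the central cloud instead of 10,000 readings:
# the application still gets its analytics, but with less time on the wire
# and far less bandwidth consumed.
print(summary)
```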

Edge computing offers mobile operators several opportunities such as:

  • Differentiating service offerings using edge capabilities
  • Providing new applications and solutions using edge capabilities
  • Enabling customers and partners to leverage the distributed computing network in application development
  • Improving network performance and achieving efficiencies / cost savings

As edge computing technologies and definitions are still evolving, different terms are sometimes used interchangeably or have been associated with a certain type of stakeholder. For example, mobile edge computing is often used within the mobile network context and has evolved into multi-access edge computing (MEC) – adopted by the European Telecommunications Standards Institute (ETSI) – to include fixed and converged network edge computing scenarios. Fog computing is also often compared to edge computing; the former includes running intelligence on the end-device and is more IoT focused.

These are some of the key terms to understand when discussing edge computing:

  • Network edge refers to edge compute locations that are at sites or points of presence (PoPs) owned by a telecoms operator, for example at a central office in the mobile network or at an ISP’s node.
  • Telco edge cloud is mainly defined as distributed compute managed by a telco. This includes running workloads on customer premises equipment (CPE) at customers’ sites as well as at locations within the operator network, such as base stations, central offices and other aggregation points in the access and/or core network. Caching and processing data closer to the customer allows both operators and their customers to benefit from reduced backhaul traffic and costs (a short sketch after Figure 1 illustrates the saving).
  • On-premise edge computing refers to computing resources residing on the customer side, e.g. in an on-site gateway or an on-premises data centre. As a result, customers retain their sensitive data on-premises and enjoy the flexibility and elasticity benefits brought by edge computing.
  • Edge cloud is used to describe the virtualised infrastructure available at the edge. It creates a distributed version of the cloud, with some of its flexibility and scalability at the edge. This flexibility gives it the capacity to handle sudden surges in workloads from unplanned activities, unlike static on-premises servers. Figure 1 shows the differences between these terms.

Figure 1: Edge computing types


Source: STL Partners
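As a back-of-envelope illustration of the backhaul saving referenced in the telco edge cloud definition above (all figures invented for the sketch):

```python
# Invented figures: traffic served from an edge cache never crosses backhaul.

daily_demand_gb = 5_000   # assumed daily traffic demand behind one edge site
cache_hit_ratio = 0.6     # assumed share of requests served from the edge cache

backhaul_gb = daily_demand_gb * (1 - cache_hit_ratio)
print(f"Backhaul carries {backhaul_gb:,.0f} GB/day instead of {daily_demand_gb:,} GB/day")
# -> Backhaul carries 2,000 GB/day instead of 5,000 GB/day
```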

Network infrastructure and how the edge relates to 5G

Discussions of edge computing strategies and the market are often linked to 5G. Both technologies have overlapping goals of improving performance and throughput and reducing latency for applications such as AR/VR, autonomous vehicles and IoT. 5G improves speed by increasing spectral efficiency, offering the potential of much higher speeds than 4G. Edge computing, on the other hand, reduces latency by allocating resources closer to the application, shortening the path data must travel for processing. When combined, edge and 5G can help to achieve round-trip latency below 10 milliseconds.
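A rough latency budget shows why proximity matters when the two are combined. The sketch below uses assumed figures (fibre propagation of roughly 5 µs per km, illustrative radio and processing times), not measurements:

```python
# Assumed-figure sketch of a round-trip latency budget.

def round_trip_ms(radio_ms, distance_km, processing_ms):
    """Radio access + two-way fibre propagation (~5 us/km) + server time."""
    propagation_ms = 2 * distance_km * 0.005
    return radio_ms + propagation_ms + processing_ms

# 4G to a distant central cloud vs. 5G to a nearby edge site
print(round_trip_ms(radio_ms=20, distance_km=1000, processing_ms=10))  # 40.0 ms
print(round_trip_ms(radio_ms=4, distance_km=50, processing_ms=2))      # 6.5 ms
```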

While 5G deployment is yet to accelerate and reach ubiquitous coverage, the edge can be utilised in some places to reduce latency where needed. There are two reasons why the edge will be part of 5G:

  • First, it has been included in the 5G standards (3GPP Release 15) to enable ultra-low latency, which cannot be achieved through improvements in the radio interface alone.
  • Second, operators are in general taking a slow and gradual approach to 5G deployment which means that 5G coverage alone will not provide a big incentive for developers to drive the application market. Edge can be used to fill the network gaps to stimulate the application market growth.

The network edge can be used for applications that need coverage (i.e. accessible anywhere) and can be moved across different edge locations to scale capacity up or down as required. Where an operator decides to establish an edge node depends on several factors (a sketch after this list shows how they might be combined to shortlist sites):

  • Application latency needs. Some applications, such as streaming virtual reality or mission-critical applications, will require locations close enough to their users to enable sub-50 millisecond latency.
  • Current network topology. Based on the operators’ network topology, there will be selected locations that can meet the edge latency requirements for the specific application under consideration in terms of the number of hops and the part of the network it resides in.
  • Virtualisation roadmap. The operator needs to consider its virtualisation roadmap and where data centre facilities are planned to be built to support future network functions.
  • Site and maintenance costs. The cloud computing economies of scale may diminish as the number of sites proliferates at the edge; for example, there is a significant difference between maintaining one or two large data centres and maintaining hundreds across a country.
  • Site availability. Some operators’ edge compute deployment plans assume the nodes reside in the same facilities as those which host their NFV infrastructure. However, many telcos are still in the process of renovating these locations to turn them into (mini) data centres so aren’t yet ready.
  • Site ownership. Sometimes the preferred edge location is within sites that the operator has limited control over, whether on the customer premises or within the network. For example, in the US, cell towers are owned by tower operators such as Crown Castle, American Tower and SBA Communications.
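A minimal sketch of how these criteria might be combined, with hypothetical sites, fields and thresholds (none of them from the report):

```python
# Hypothetical data: shortlist edge sites for a sub-10 ms application.

sites = [
    {"name": "central office A", "latency_ms": 8,  "dc_ready": True,  "controlled": True},
    {"name": "cell tower B",     "latency_ms": 3,  "dc_ready": False, "controlled": False},
    {"name": "regional DC C",    "latency_ms": 25, "dc_ready": True,  "controlled": True},
]

def shortlist(sites, max_latency_ms):
    """Keep sites that meet the latency need, have been renovated into
    (mini) data centres, and are under the operator's control."""
    return [
        s["name"] for s in sites
        if s["latency_ms"] <= max_latency_ms and s["dc_ready"] and s["controlled"]
    ]

print(shortlist(sites, max_latency_ms=10))  # ['central office A']
```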

The potential locations for edge nodes can be mapped across the mobile network in four levels as shown in Figure 2.

Figure 2: Possible locations for edge computing


Source: STL Partners

Table of Contents

  • Executive Summary
    • Recommendations for telco operators at the edge
    • Four key use cases for operators
    • Edge computing players are tackling market fragmentation with strategic partnerships
    • What next?
  • Table of Figures
  • Introduction
  • Definitions of edge computing terms and key components
    • What is edge computing and where exactly is the edge?
    • Network infrastructure and how the edge relates to 5G
  • Market overview and opportunities
    • The value chain and the types of stakeholders
    • Hyperscale cloud provider activities at the edge
    • Telco initiatives, pilots and plans
    • Investment and merger and acquisition trends in edge computing
  • Use cases and business models for telcos
    • Telco edge computing use cases
    • Vertical opportunities
    • Roles and business models for telcos
  • Telcos’ challenges at the edge
  • Scenarios for network edge infrastructure development
  • Recommendation
  • Index


Digital Health: How Can Telcos Compete with Google, Apple and Microsoft?

Introduction

With the ever-increasing amount of data collected by smartphones, fitness monitors and smart watches, telcos and other digital players are exploring opportunities to create value from consumers’ ability to capture data on many aspects of their own health and physical activity. Connected devices leverage inbuilt sensors and associated apps to collect data about users’ activities, location and habits.

New health-focused platforms are emerging that use the data collected by sensors to advise individual users on how to improve their health (e.g. a reminder to stand up every 60 minutes), while enhancing their ability to share data meaningfully with healthcare providers, whether in-person or remotely. This market has thus far been led by the major Internet and device players, but telecoms operators may be able to act as distributors, enablers/integrators, and, in some cases, even providers of consumer health and wellness apps (e.g., Telefonica’s Saluspot).

High level drivers for the market

At a macro level, there are a number of factors driving digital healthcare.  These include:

  • Population ageing – The number of people globally who are aged over 65 is expected to triple over the next 30 years, and this will create unprecedented demand for healthcare.
  • Rising costs of healthcare provision globally – Serving an ageing population, the global increase in lifestyle and chronic diseases, and rising underlying costs are all pushing up healthcare spending – while at the same time, economic pressures mean more limited funds are available to pay for it.
  • Limited supply of trained clinicians – Policy issues and changes in job and lifestyle preferences are limiting both educational capacity and ability to recruit and retain appropriately trained healthcare staff in most markets.
  • Shift in funding policy – In many countries, funding for healthcare is shifting away from being based on reimbursement-for-events (e.g., a practice or hospital is paid for every patient visit, for each patient they register, for each vaccination administered), to a greater emphasis on ‘value-based care’ – reimbursement based on successful patient health outcomes.
  • Increased focus on prevention in healthcare provision – In some cases funding is starting to be provided for preventative population health measures, such as weight-loss or quit-smoking programmes.
  • Development of personalised medicine – Personalised medicine is beginning to gain significant attention. It involves the delivery of more effective personalised treatments (and potentially drugs) based on an individual’s specific genomic characteristics, supported by advances in genotyping and analytics, and by ongoing analysis of individual and population health data.
  • Consumerisation of healthcare – There is a general trend for patients – or rather, consumers – to take more responsibility for their own health and their own healthcare, and to demand always-on access both to healthcare and to their own health information, at a level of engagement they choose.

The macro trends above are unlikely to disappear or diminish in the short-to-medium term; and providers, policymakers and payers are struggling to cope as healthcare systems increasingly fall short of both targets and patients’ expectations.

Digital healthcare will play a key role in addressing the challenges these trends present. It promises better use and sharing of data, analytics offering deep insight into health trends for individuals and across the wider population, and greater convenience, efficacy and reach of healthcare provision.

While many (if not most) of the opportunities around digital health will centre on advances in healthcare providers’ ICT systems, there is significant interest in how consumer wellness and fitness apps and devices will contribute to the digital health ecosystem. Consumer digital health and wellness is particularly relevant to two of the trends above: consumerisation of healthcare, and the shift to prevention as a focus of both healthcare providers and payers.

Fitness trackers and smartwatches, and the associated apps for these devices, as well as wellness and fitness apps for smartphone users, could open up new revenue streams for some service providers, as well as a vast amount of personal data that could feed into both medical records and analytics initiatives. The increasing use of online resources by consumers for both health information and consultation, as well as cloud-based storage of and access to their own health data, also creates opportunities to make more timely and effective healthcare interventions.  For telcos, the question is where and how they can play effectively in this market.

Market Trends and Overview

The digital healthcare market is both very large and very diverse. Digital technologies can be applied in many different segments of the healthcare market (see figure below), both to improve efficiency and enable the development of new services, such as automated monitoring of chronic conditions.

The different segments of the digital healthcare market

Source: STL Partners based on categories identified by Venture Scanner

The various segments in Figure 1 are defined below:

Wellness

  • Mobile fitness and health apps enable consumers to monitor how much exercise they are doing, how much sleep they are getting, their diet and other aspects of their lifestyle.
  • Wearable devices, such as smart watches and fitness bands, are equipped with sensors that collect the data used by fitness and health apps.
  • Electronic health records are a digital record of data and information about an individual’s health, typically collating clinical data from multiple sources and healthcare providers.

Information

  • Services search refers to digital portals and directories that help individuals find healthcare information and identify potential service providers.
  • Online health sites and communities provide consumers with information and discussion forums.
  • Healthcare marketing refers to digital activities by healthcare providers to attract people to use their services.

Interactions

  • Payments and insurance – digital apps and services that enable consumers to pay for healthcare or insurance.
  • Patient engagement refers to digital mechanisms, such as apps, through which healthcare providers can interact with the individuals using their services.
  • Doctor networks are online services that enable clinicians to interact with each other and exchange information and advice.

Research

  • Population health management refers to the use of digital tools by clinicians to capture data about groups of patients or individuals that can then be used to inform treatment.
  • Genomics: An individual’s genetic code can be collated in a digital form so it can be used to understand their likely susceptibility to specific conditions and treatments.
  • Medical big data involves capturing and analysing large volumes of data from multiple sources to help identify patterns in the progression of specific illnesses and the effectiveness of particular treatment combinations.

In-hospital care

  • Electronic medical records: A digital version of a hospital or clinic’s records of a specific patient. Unlike electronic health records, electronic medical records aren’t designed to be portable across different healthcare providers.
  • Clinical admin: The use of digital technologies to improve the efficiency of healthcare facilities.
  • Robotics: The use of digital machines to perform specific healthcare tasks, such as transporting medicines or spoon-feeding a patient.

In-home care

  • Digital medical devices: All kinds of medical devices, from thermometers to stethoscopes to glucometers to sophisticated MRI and medical imaging equipment, are increasingly able to capture and transfer data in a digital form.
  • Remote monitoring involves the use of connected sensors to regularly capture and transmit information on a patient’s health. Such tools can be used to help monitor the condition of people with chronic diseases, such as diabetes.
  • Telehealth refers to patient-clinician consultations via a telephone, chat or video call.

The wellness opportunity

This report focuses primarily on the ‘wellness’ segment (highlighted in the figure below), which is experiencing major disruption as a result of devices, apps and services being launched by Apple, Google and Microsoft, but it also touches on some of these players’ activities in other segments.

This report focuses on wellness, which is undergoing major disruption

Source: STL Partners based on categories identified by Venture Scanner

 

  • Executive summary
  • Introduction
  • High level drivers for the market
  • Market Trends and Overview
  • Market size and trends: smartwatches will overtake fitness bands
  • Health app usage has doubled in two years in the U.S.
  • Are consumers really interested in the ‘quantified self’?
  • Barriers and constraining factors for consumer digital health
  • Disruption in Consumer Digital Wellness
  • Case studies: Google, Apple and Microsoft
  • Google: leveraging Android and analytics capabilities
  • Apple: more than the Watch…
  • Microsoft: an innovative but schizophrenic approach
  • Telco Opportunities in Consumer Health
  • Recommendations for telcos

 

  • Figure 1: The different segments of the digital healthcare market
  • Figure 2: This report focuses on wellness, which is undergoing major disruption
  • Figure 3: Consumer digital health and wellness: leading products and services, 2016
  • Figure 4: Wearable Shipments by Type of Device, 2015-2020
  • Figure 5: Wearable OS Worldwide Market Share, 2015 and 2019
  • Figure 6: Take-up of different types of health apps in the U.S. market (2016)
  • Figure 7: % of health wearable and app users willing to share data US market (2016)
  • Figure 8: Elements of the ‘quantified self’, as envisioned by Orange
  • Figure 9: Less than two-thirds of US wearable buyers wear their acquisition long-term
  • Figure 10: Google Consumer Health and Fitness Initiatives
  • Figure 11: Snapshot of Google Fit User Interface, 2016
  • Figure 12: Google/Alphabet’s areas of focus in the digital healthcare market
  • Figure 13: Apple’s Key Digital Health and Wellness Initiatives
  • Figure 14: Apple Health app interface and dashboard
  • Figure 15: Apple’s ResearchKit-based EpiWatch App
  • Figure 16: Apple’s current areas of focus in the digital healthcare market
  • Figure 17: Microsoft Consumer Fitness/Wellness Device Initiatives
  • Figure 18: Microsoft Health can integrate data from a range of fitness trackers
  • Figure 19: Microsoft Consumer Fitness/Wellness Applications and Services
  • Figure 20: The MDLive Telehealth Proposition, August 2016
  • Figure 21: Microsoft’s areas of focus in the digital healthcare market
  • Figure 22: Telefónica’s Saluspot: Interactive online doctor consultations on-demand

Net Neutrality 2021: IoT, NFV and 5G ready?

Introduction

It’s been a while since STL Partners last tackled the thorny issue of Net Neutrality. In our 2010 report Net Neutrality 2.0: Don’t Block the Pipe, Lubricate the Market we made a number of recommendations, including that a clear distinction should be established between ‘Internet Access’ and ‘Specialised Services’, and that operators should be allowed to manage traffic within reasonable limits providing their policies and practices were transparent and reported.

Perhaps unsurprisingly, the decade-long legal and regulatory wrangling is still rumbling on, albeit with rather more detail and nuance than in the past. Some countries have now implemented laws with varying severity, while other regulators have been more advisory in their rules. The US, in particular, has been mired in debate about the process and authority of the FCC in regulating Internet matters, but the current administration and courts have leaned towards legislating for neutrality, against (most) telcos’ wishes. The political dimension is never far away from the argument, especially given the global rise of anti-establishment movements and parties.

Some topics have risen in importance (such as where zero-rating fits in), while others seem to have been mostly-agreed (outright blocking of legal content/apps is now widely dismissed by most). In contrast, discussion and exploration of “sender-pays” or “sponsored” data appears to have reduced, apart from niches and trials (such as AT&T’s sponsored data initiative), as it is both technically hard to implement and suffers from near-zero “willingness to pay” by suggested customers. Some more-authoritarian countries have implemented their own “national firewalls”, which block specific classes of applications, or particular companies’ services – but this is somewhat distinct from the commercial, telco-specific view of traffic management.

In general, the focus of the Net Neutrality debate is shifting to pricing issues, often in conjunction with the influence/openness of major web and app “platform players” such as Facebook or Google. Some telco advocates have opportunistically tried to link Net Neutrality to claimed concerns over “Platform Neutrality”, although that discussion is now largely separate and focused more on bundling and privacy concerns.

At the same time, there is still some interest in differential treatment of Internet traffic in terms of Quality of Service (QoS) – and also, a debate about what should be considered “the Internet” vs. “an internet”. The term “specialised services” crops up in various regulatory instruments, notably in the EU – although its precise definition remains fluid. In particular, the rise of mobile broadband for IoT use-cases, and especially the focus on low-latency and critical-communications uses in future 5G standards, almost mandate the requirement for non-neutrality, at some levels at least. It is much less-likely that “paid prioritisation” will ever extend to mainstream web-access or mobile app data. Large-scale video streaming services such as Netflix are perhaps still a grey area for some regulatory intervention, given the impact they have on overall network loads. At present, the only commercial arrangements are understood to be in CDNs, or paid-peering deals, which are (strictly speaking) nothing to do with Net Neutrality per most definitions. We may even see pressure for regulators to limit fees charged for Internet interconnect and peering.

This report first looks at the changing focus of the debate, then examines the underlying technical and industry drivers that are behind the scenes. It then covers developments in major countries and regions, before giving recommendations for various stakeholders.

STL Partners is also preparing a broader research piece on overall regulatory trends, to be published in the next few months as part of its Executive Briefing Service.

What has changed?

Where have we come from?

If we wind the clock back a few years, the Net Neutrality debate was quite different. Around 2012/13, the typical talking-points were subjects such as:

  • Whether mobile operators could block messaging apps like WhatsApp, VoIP services like Skype, or somehow charge those types of providers for network access / interconnection.
  • If fixed-line broadband providers could offer “fast lanes” for Netflix or YouTube traffic, often conflating arguments about access-network links with core-network peering capacity.
  • Rhetoric about the so-called “sender-pays” concept, with some lobbying for introducing settlements for data traffic that were reminiscent of telephony’s called / caller model.
  • Using DPI (deep packet inspection) to discriminate between applications and charge for “a la carte” Internet access plans, at a granular level (e.g. per hour of video watched, or per social network used).
  • The application of “two-sided business models”, with Internet companies paying for data capacity and/or quality on behalf of end-users.

Since then, many things have changed. Specific countries’ and regions’ laws will be discussed in the next section, but the last four years have seen major developments in the Netherlands, the US, Brazil, the EU and elsewhere.

At one level, the regulatory and political shifts can be attributed to the huge rise in the number of lobby groups on both Internet and telecom sides of the Neutrality debate. However, the most notable shift has been the emergence of consumer-centric pro-Neutrality groups, such as Access Now, EDRi and EFF, along with widely-viewed celebrity input from the likes of comedian John Oliver. This has undoubtedly led to the balance of political pressure shifting from large companies’ lawyers towards (sometimes slogan-led) campaigning from the general public.

But there have also been changes in the background trends of the Internet itself, telecom business models, and consumers’ and application developers’ behaviour. (The key technology changes are outlined in the section after this one). Various experiments and trials have been tried, with a mix of successes and failures.

Another important background trend has been the unstoppable momentum of particular apps and content services, on both fixed and mobile networks. Telcos are now aware that they are likely to be judged on how well Facebook or Spotify or WeChat or Netflix perform – so they are much less-inclined to indulge in regulatory grand-standing about having such companies “pay for the infrastructure” or be blocked. Essentially, there is tacit recognition that access to these applications is why customers are paying for broadband in the first place.

These considerations have shifted the debate in many important areas, making some of the earlier ideas unworkable, while other areas have come to the fore. Two themes stand out:

  • Zero-rating
  • Specialised services

Table of Contents

  • Executive summary
  • Contents
  • Introduction
  • What has changed?
  • Where have we come from?
  • Zero-rating as a battleground
  • Specialised services & QoS
  • Technology evolution impacting Neutrality debate
  • Current status
  • US
  • EU
  • India
  • Brazil
  • Other countries
  • Conclusions
  • Recommendations

SD-WAN: New Enterprise Opportunity for Telcos, or a Threat to MPLS, SDN & NFV?

Rapid growth in SD-WAN networks

Software-defined Wide Area Networks (SD-WAN) have catapulted to prominence in the enterprise networking world in the last 12 months. They allow businesses to manage their connections between sites, data-centres, the Internet and external cloud services much more cost-effectively and flexibly than in the past.

Driven by the growth of enterprise demand for access to cloud applications, and businesses’ desire to control WAN costs, various start-ups and existing network-optimisation vendors have catalysed SD-WAN’s emergence. Its rapid growth as a new “intermediary” layer in the network has the potential to disrupt telcos’ enterprise aspirations, especially around NFV/SDN.

In essence, SD-WAN allows the creation of an “OTT intelligent network infrastructure”, as an overlay on top of one or more providers’ physical connections. SD-WANs allow combinations of multiple types of access network – and multiple network providers. This can improve the QoS, reliability and security of corporate networks in certain areas, while simultaneously reducing costs. SD-WANs also enable greater flexibility and agility in allocating enterprise network resources.

Why SD-WAN is, at least in part, a threat

However, SD-WAN potentially poses major risks to traditional telcos’ enterprise offerings. It allows enterprise customers to deploy least-cost-routing more easily, or highest-quality-routing, by arbitraging differences in price or performance between multiple providers. It enables high-margin MPLS connections to be (at least partly) replaced with commodity Internet connectivity. And it reduces loyalty / lock-in by establishing an “abstraction” layer above the network, controlled by in-house IT teams or competing managed service providers.
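A minimal sketch of this arbitrage logic, with invented link metrics, costs and policies, might route each application over the cheapest link that still meets its quality requirements:

```python
# Invented link metrics: per-application least-cost routing with a quality floor.

links = [
    {"name": "MPLS",      "latency_ms": 20, "loss_pct": 0.01, "cost": 10.0},
    {"name": "broadband", "latency_ms": 35, "loss_pct": 0.50, "cost": 1.0},
    {"name": "LTE",       "latency_ms": 60, "loss_pct": 1.00, "cost": 3.0},
]

def pick_link(max_latency_ms, max_loss_pct):
    """Cheapest link that still satisfies the application's quality policy."""
    candidates = [
        l for l in links
        if l["latency_ms"] <= max_latency_ms and l["loss_pct"] <= max_loss_pct
    ]
    return min(candidates, key=lambda l: l["cost"])["name"]

print(pick_link(max_latency_ms=25, max_loss_pct=0.1))   # voice -> 'MPLS'
print(pick_link(max_latency_ms=100, max_loss_pct=2.0))  # bulk backup -> 'broadband'
```

Only traffic that genuinely needs MPLS-grade quality stays on the expensive link; everything else falls to commodity Internet connectivity, which is exactly the substitution threat described above.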

SD-WAN has another, medium-term, set of implications for telcos, when considered through the lens of the emerging world of NFV/SDN and “telco cloud” – a topic on which STL Partners has written widely. By disconnecting the physical provision of corporate networks and a business’s data/application assets or clouds, SD-WAN may make it harder for telcos to move up the value chain in serving enterprise customers. Capabilities such as security systems, or unified communications services, may become associated with the SD-WAN, rather than the underlying connection(s); and would thus be provisioned by the SD-WAN provider, rather than by the telco that is providing basic connectivity.

In other words, SD-WAN represents three distinct threats for telcos:

  • Potential reduction in MPLS & other WAN services revenues
  • Potential reduction in today’s enterprise solution value-adds such as UCaaS & managed security services
  • Potential restriction of future telco enterprise SDN/NFV services opportunities to basic Network as a Service (NaaS) offers, with lower scope for upsell.

The current global market for WAN services is $60-100bn annually, depending on how it is defined; therefore, any risk of significant change is central to many operators’ strategic concerns.
Table of Contents

  • Executive Summary
  • Introduction
  • Background: Enterprise WANs
  • Shifting trends in WAN usage
  • The rise of SD-WAN
  • Overview – the holy grail of ‘good/fast/cheap’ in the WAN
  • SD-WAN technology and use-cases
  • SD-WAN vendors include start-ups and established enterprise market players
  • The role of service providers in SD-WAN
  • Bundling hosted voice/UCaaS and SD-WAN
  • Telcos Should take a Proactive Approach to SD-WAN
  • SD-WAN vs. SDN & NFV: Timing and Positioning
  • Future of SD-WAN and Recommendations
  • Recommendations

 

  • Figure 1: SD-WAN architecture example
  • Figure 2: SD-WAN & NaaS may help telcos maintain revenues in enterprise WAN
  • Figure 3: SD-WAN may reduce telco opportunities for SDN/NFV/cloud services
  • Figure 4: Different paths for SD-WAN service offer provision & procurement

Connectivity for telco IoT / M2M: Are LPWAN & WiFi strategically important?

Introduction

5G, WiFi, GPRS, NB-IoT, LTE-M & LTE Categories 1 & 0, SigFox, Bluetooth, LoRa, Weightless-N & Weightless-P, ZigBee, EC-GSM, Ingenu, Z-Wave, Nwave, various satellite standards, optical/laser connections and more… the list of current or proposed wireless network technologies for the “Internet of Things” seems to be growing longer by the day. Some are long-range, some short. Some high power/bandwidth, some low. Some are standardised, some proprietary. And while most devices will have some form of wireless connection, there are certain categories that will use fibre or other fixed-network interfaces.

There is no “one-size fits all”, although some hope that 5G will ultimately become an “umbrella” for many of them, in the 2020 time-frame and beyond. But telcos, especially mobile operators, need to consider which they will support in the shorter-term horizon, and for which M2M/IoT use-cases. That universe is itself expanding too, with new IoT products and systems being conceived daily, spanning everything from hobbyists’ drones to industrial robots. All require some sort of connectivity, but the range of costs, data capabilities and robustness varies hugely.

Two overriding sets of questions emerge:

  • What are the business cases for deploying IoT-centric networks – and are they dependent on offering higher-level management or vertical solutions as well? Is offering connectivity – even at very low prices/margins – essential for telcos to ensure relevance and differentiate against IoT market participants?
  • What are the longer-term strategic issues around telcos supporting and deploying proprietary or non-3GPP networking technologies? Is the diversity a sensible way to address short-term IoT opportunities, or does it risk further undermining the future primacy of telco-centric standards and business models? Either way telcos need to decide how much energy they wish to expend, before they embrace the inevitability of alternative competing networks in this space.

This report specifically covers IoT-centric network connectivity. It fits into Telco 2.0’s Future of the Network research stream, and also intersects with our other ongoing work on IoT/M2M applications, including verticals such as the connected car, connected home and smart cities. It focuses primarily on new network types, rather than marketing/bundling approaches for existing services.

The Executive Briefing report IoT – Impact on M2M, Endgame and Implications from March 2015 outlined three strategic areas of M2M business model innovation for telcos:

  • Improve existing M2M operations: Dedicated M2M business units structured around priority verticals with dedicated resources. Such units allow telcos to tailor their business approach and avoid being constrained by traditional strategies that are better suited to mobile handset offerings.
  • Move into new areas of M2M: Expansion along the value chain through both acquisitions and partnerships, and the formation of M2M operator ‘alliances.’
  • Explore the Internet of Things: Many telcos have been active in the connected home e.g. AT&T Digital Life. However, outsiders are raising the connected home (and IoT) opportunity stakes: Google, for example, acquired Nest for $3.2 billion in 2014.
Figure 2: The M2M Value Chain

 

Source: STL Partners, More With Mobile

In the 9 months since that report was published, a number of important trends have occurred in the M2M / IoT space:

  • A growing focus on the value of the “industrial Internet”, where sensors and actuators are embedded into offices, factories, agriculture, vehicles, cities and other locations. New use-cases and applications abound on both near- and far-term horizons.
  • A polarisation in discussion between ultra-fast/critical IoT (e.g. for vehicle-to-vehicle control) vs. low-power/cost IoT (e.g. distributed environmental sensors with 10-year battery life). 2015 discussion of IoT connectivity has been dominated by futuristic visions of 5G, or faster-than-expected deployment of LPWANs (low-power wide-area networks), especially based on new platforms such as SigFox or LoRa Alliance.
  • Comparatively slow emergence of dedicated individual connections for consumer IoT devices such as watches / wearables. With the exception of connected cars, most mainstream products connect via local “capillary” networks (e.g. Bluetooth and WiFi) to smartphones or home gateways acting as hubs, or a variety of corporate network platforms. The arrival of embedded SIMs might eventually lead to more individually-connected devices, but this has not materialised in volume yet.
  • Continued entry, investment and evolution of a broad range of major companies and start-ups, often with vastly different goals, incumbencies and competencies to telcos. Google, IBM, Cisco, GE, Intel, utility firms, vehicle suppliers and 1000s of others are trying to carve out roles in the value chain.
  • Growing impatience among some in the telecom industry with the pace of standardisation for some IoT-centric developments. A number of operators have looked outside the traditional cellular industry suppliers and technologies, eager to capitalise on short-term growth especially in LPWAN and in-building local connectivity. In response, vendors including Huawei, Ericsson and Qualcomm have stepped up their pace, although fully-standardised solutions are still some way off.

Connectivity in the wider M2M/IoT context

It is not always clear what the difference is between M2M and IoT, especially at a connectivity level. They now tend to be used synonymously, although the latter is definitely newer and “cooler”. Various vendors have their own spin on this – Cisco’s “Internet of Everything”, and Ericsson’s “Networked Society”, for example. It is also a little unclear where the IoT part ends, and the equally vague term “networked services” begins. It is also important to recognise that a sizeable part of the future IoT technology universe will not be based on “services” at all, although “user-owned” devices and systems are much harder for telcos to monetise.

An example might be a government encouraging adoption of electric vehicles. Cars and charging points are “things” which require data connections. At one level, an IoT application may simply guide drivers to their closest available power-source, but a higher-level “societal” application will collate data from both the IoT network and other sources. Thus data might also flow from bus and train networks, as well as traffic sensors, pollution monitors and even fitness trackers for walking and cycling, to see overall shifts in transport habits and help “nudge” commuters’ behaviour through pricing or other measures. In that context, the precise networks used to connect to the end-points become obscured in the other layers of software and service – although they remain essential building blocks.

Figure 3: Characterising the difference between M2M and IoT across six domains

Source: STL Partners, More With Mobile

(Note: the Future of Network research stream generally avoids using vague and loaded terms like “digital” and “OTT”. While concise, we believe they are often used in ways that guide readers’ thinking in wrong or unhelpful directions. Words and analogies are important: they can lead or mislead, often sub-consciously).

Often, it seems that the word “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Some may actually involve analogue technology as well as digital. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords are becoming important.

Figure 4: ‘Digital’ and IoT convergence

Source: STL Partners, More With Mobile

We will also likely see many hybrid solutions emerging, for example where dedicated devices are combined with smartphones/PCs for particular functions. Thus a “digital home” service may link alarms, heating sensors, power meters and other connections via a central hub/console – but also send alerts and data to a smartphone app. It is already quite common for consumer/business drones to be controlled via a smartphone or tablet.

In terms of connectivity, it is also worth noting that “M2M” generally just refers to the use of conventional cellular modems and networks – especially 2G/3G. IoT expands this considerably – as well as future 5G networks and technologies being specifically designed with new use-cases in mind, we are also seeing the emergence of a huge range of dedicated 4G variants, plus new purpose-designed LPWAN platforms. IoT also intersects with the growing range of local/capillary[1] network technologies – which are often overlooked in conventional discussions about M2M.

Figure 5: Selected Internet of Things service areas

Source: STL Partners

The larger the number…

…the less relevance and meaning it has. We often hear of an emerging world of 20bn, 50bn, even trillions of devices being “networked”. While making for good headlines and press-releases, such numbers can be distracting.

While we will definitely be living in a transformed world, with electronics around us all the time – sensors, displays, microphones and so on – that does not easily translate into opportunities for telecom operators. The correct role for such data and forecasts is in the context of a particular addressable opportunity – otherwise one risks counting toasters, alongside sensors in nuclear power stations. As such, this report does not attempt to compete in counting “things” with other analyst firms, although references are made to approximate volumes.

For example, consider a typical large, modern building. It’s common to have temperature sensors, CCTV cameras, alarms for fire and intrusion, access control, ventilation, elevators and so forth. There will be an internal phone system, probably LAN ports at desks and WiFi throughout. In future it may have environmental sensors, smart electricity systems, charging points for electric vehicles, digital advertising boards and more. Yet the main impact on the telecom industry is just a larger Internet connection, and perhaps some dedicated lines for safety-critical systems like the fire alarm. There may well be 1,000 or 10,000 connected “things”, and yet for a cellular operator the building is more likely to be a future driver of cost (e.g. for in-building radio coverage for occupants’ phones) rather than extra IoT revenue. Few of the building’s new “things” will have SIM cards and service-based radio connections in any case – most will link into the fixed infrastructure in some way.

One also has to doubt some of the predicted numbers – there is considerable vagueness and hand-waving inherent in the forecasts. If a car in 2020 has 10 smart sub-systems, and 100 sensors reporting data, does that count as 1, 10 or 100 “things” connected? Is the key criterion that smart appliances in a connected home are bought individually – and therefore might be equipped with individual wide-area network connections? When such data points are then multiplied-up to give traffic forecasts, there are multiple layers of possible mathematical error.
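A toy calculation, with every figure invented purely for illustration, shows how the counting assumption alone swings the totals by two orders of magnitude:

```python
# Invented figures: the same car fleet counted three different ways.

cars = 100_000_000            # hypothetical connected-car fleet
msgs_per_thing_per_day = 100  # assumed reporting rate
bytes_per_msg = 200           # assumed message size

for things_per_car in (1, 10, 100):  # count the car, its sub-systems, or its sensors?
    things = cars * things_per_car
    gb_per_day = things * msgs_per_thing_per_day * bytes_per_msg / 1e9
    print(f"{things_per_car:>3} per car -> {things:.0e} things, {gb_per_day:,.0f} GB/day")
```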

This highlights the IoT quantification dilemma – everyone focuses on the big numbers, many of which are simple spreadsheet extrapolations, made without much consideration of the individual use-cases. And the larger the headline number, the less-likely the individual end-points will be directly addressed by telcos.

 

  • Executive Summary
  • Introduction
  • Connectivity in the wider M2M/IoT context
  • The larger the number…
  • The IoT network technology landscape
  • Overview – it’s not all cellular
  • The emergence of LPWANs & telcos’ involvement
  • The capillarity paradox: ARPU vs. addressability
  • Where does WiFi fit?
  • What will the impact of 5G be?
  • Other technology considerations
  • Strategic considerations
  • Can telcos compete in IoT without connectivity?
  • Investment vs. service offer
  • Regulatory considerations
  • Are 3GPP technologies being undermined?
  • Risks & threats
  • Conclusion

 

  • Figure 1: Telcos can only fully monetise “things” they can identify uniquely
  • Figure 2: The M2M Value Chain
  • Figure 3: Characterising the difference between M2M and IoT across six domains
  • Figure 4: ‘Digital’ and IoT convergence
  • Figure 5: Selected Internet of Things service areas
  • Figure 6: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 7: Wide-area IoT-related wireless technologies
  • Figure 8: Selected telco involvement with LPWAN
  • Figure 9: Telcos need to consider capillary networks pragmatically
  • Figure 10: Major telco types mapped to relevant IoT network strategies

Do network investments drive creation & sale of truly novel services?

Introduction

History: The network is the service

Before looking at how current network investments might drive future generations of telco-delivered services, it is worth considering some of the history, and examining how we got where we are today.

Most obviously, the original network build-outs were synonymous with the services they were designed to support. Both fixed and mobile operators started life as “phone networks”, with analogue or electro-mechanical switches. (Their predecessors were designed to serve telegraphy and paging, respectively.) Cable operators began as conduits for analogue TV signals. These evolved to support digital switches of various types, as well as using IP connections internally.

From the 1980s onwards, it was hoped that future generations of telecom services would be enabled by, and delivered from, the network itself – hence acronyms like ISDN (Integrated Services Digital Network) and IN (Intelligent Network).

But the earliest signs that “digital services” might come from outside the telecom network were evident even at that point. Large companies built up private networks to support their own phone systems (PBXs). Various 3rd-party “value-added networks” (VAN) and “electronic data interchange” (EDI) services emerged in industries such as the automotive sector, finance and airlines. And from the early 1990s, consumers started to get access to bulletin boards and early online services like AOL and CompuServe, accessed using dial-up modems.

And then, around 1994, the first web browsers were introduced, and the model of Internet access and ISPs took off, initially with narrowband connections using modems, but then swiftly evolving to ADSL-based broadband. From then onwards, the bulk of new consumer “digital services” were web-based, or used other Internet protocols such as email and private messaging. At the same time, businesses evolved their own private data networks (using telco “pipes” such as leased-lines, frame-relay and the like), supporting their growing client/server computing and networked-application needs.

Figure 1: In recent years, most digital services have been “non-network” based

Source: STL Partners

For fixed broadband, Internet access and corporate data connections have mostly dominated ever since, with rare exceptions such as Centrex phone and web-hosting services for businesses, or alarm-monitoring for consumers. The first VoIP-based carrier telephony service only emerged in 2003, and uptake has been slow and patchy – there is still a dominance of old, circuit-based fixed phone connections in many countries.

More recently, a few more “fixed network-integrated” offers have evolved – cloud platforms for businesses’ voice, UC and SaaS applications, content delivery networks, and assorted consumer-oriented entertainment/IPTV platforms. And in the last couple of years, operators have started to use their broadband access for a wider array of offers such as home-automation, or “on-boarding” Internet content sources into set-top box platforms.

The mobile world started evolving later – mainstream cellular adoption only really started around 1995. In the mobile world, most services prior to 2005 were either integrated directly into the network (e.g. telephony, SMS, MMS) or provided by operators through dedicated service delivery platforms (e.g. DoCoMo iMode, and Verizon’s BREW store). Some early digital services such as custom ringtones were available via 3rd-party channels, but even they were typically charged and delivered via SMS. The “mobile Internet” between 1999-2004 was delivered via specialised WAP gateways and servers, implemented in carrier networks. The huge 3G spectrum licence awards around 2000-2002 were made on the assumption that telcos would continue to act as creators or gatekeepers for the majority of mobile-delivered services.

It was only around 2005-6 that “full Internet access” started to become available for mobile users, both for those with early smartphones such as Nokia/Symbian devices, and via (quite expensive) external modems for laptops. In 2007 we saw two game-changers emerge – the first-generation Apple iPhone, and Huawei’s USB 3G modem. Both catalysed the wide adoption of the consumer “data plan”- hitherto almost unknown. By 2010, there were virtually no new network-based services, while the “app economy” and “vanilla” Internet access started to dominate mobile users’ behaviour and spending. Even non-Internet mobile services such as BlackBerry BES were offered via alternative non-telco infrastructure.

Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7

Source: Disruptive Analysis

By 2013, there had still been very few successful mobile digital-services offers that were actually anchored in cellular operators’ infrastructure. There have been a few positive signs in the M2M sphere and wholesaled SMS APIs, but other integrated propositions such as mobile network-based TV have largely failed. Once again the transition to IP-based carrier telephony has been slow – VoLTE is gaining grudging acceptance more from necessity than desire, while “official” telco messaging services like RCS have been abject failures. Neither can be described as “digital innovation”, either – there is little new in them.

The last two years, however, have seen the emergence of some “green shoots” for mobile services. Some new partnering / charging models have borne fruit, with zero-rated content/apps becoming quite prevalent, and a handful of developer platforms finally starting to gain traction, offering network-based features such as location awareness. Various M2M sectors, such as automotive connectivity and some smart metering, have evolved. But the bulk of mobile “digital services” have been geared around iOS and Android apps, anchored in the cloud, rather than telcos’ networks.

So in 2015, we are currently in a situation where the majority of “cool” or “corporate” services in both mobile and fixed worlds owe little to “the network” beyond fast IP connectivity: the feared mythical (and factually-incorrect) “dumb pipe”. Connected “general-purpose” devices like PCs and smartphones are optimised for service delivery via the web and mobile apps. Broadband-connected TVs are partly used for operator-provided IPTV, but also for so-called “OTT” services such as Netflix.

And future networks and novel services? As discussed below, there are some positive signs stemming from virtualisation and some new organisational trends at operators to encourage innovative services – but it is not yet clear that they will be enough to overcome the open Internet’s sustained momentum.

What are so-called “digital services”?

It is impossible to visit a telecoms conference, or read a vendor press-release, without being bombarded by the word “digital” in a telecom context. Digital services, digital platforms, digital partnerships, digital agencies, digital processes, digital transformation – and so on.

It seems that despite the first digital telephone exchanges being installed in the 1980s and digital computing being de rigueur since the 1950s, the telecoms industry’s marketing people have decided that 2015 is when the transition really occurs. But when the chaff is stripped away, what does it really mean, especially in the context of service innovation and the network?

Often, it seems that “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords are becoming important.

And in other instances the “digital” phrases relate to relatively normal services – but deployed and managed in a much more efficient and automated fashion. This is quite important, as a lot of older services still rely on “analogue” processes – manual configuration, physical “truck rolls” to install and commission, and high “touch” from sales or technical support people to sell and operate, rather than self-provisioning and self-care through a web portal. Here, the correct term is perhaps “digital transformation” (or even more prosaically simply “automation”), representing a mix of updated IP-based networks, and more modern and flexible OSS/BSS systems to drive and bill them.

STL identifies three separate mechanisms by which network investments can impact creation and delivery of services:

  • New networks directly enable the supply of wholly new services. For example, some IoT services or mobile gaming applications would be impossible without low-latency 4G/5G connections, more comprehensive coverage, or automated provisioning systems.
  • Network investment changes the economics of existing services, for example by removing costly manual processes, or radically reducing the cost of service delivery (e.g. fibre backhaul to cell sites).
  • Network investment occurs hand-in-hand with other changes, thus indirectly helping drive new service evolution – such as development of “partner on-boarding” capabilities or API platforms, which themselves require network “hooks”.

While the future will involve a broader set of content/application revenue streams for telcos, it will also need to support more, faster and differentiated types of data connections. Top of the “opportunity list” is the support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. Many of these will not involve connection via the “public Internet” and therefore there is a possibility for new forms of connectivity proposition or business model – faster- or lower-powered networks, or perhaps even the much-discussed but rarely-seen monetisation of “QoS” (Quality of Service). Even if not paid for directly, QoS could perhaps be integrated into compelling packages and data-service bundles.

There is also the potential for more “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network and “orchestrated” appropriately by the operator. (We covered this area in depth in the recent Telco 2.0 brief on Mobile Edge Computing: How 5G is Disrupting Cloud and Network Strategy Today.)

In other words, virtualisation and the “software network” might allow truly new services, not just providing existing services more easily. That said, even if the answer is that the network could make a large-enough difference, there are still many extra questions about timelines, technology choices, business models, competitive and regulatory dynamics – and the practicalities and risks of making it happen.

Part of the complexity is that many of these putative new services will face additional sources of competition and/or substitution by other means. A designer of a new communications service or application has many choices about how to turn the concept into reality. Basing network investments on specific predictions of narrow services therefore carries a huge amount of risk, unless those services are clearly agreed upfront.

But there is also another latent truth here: without ever-better (and more efficient) networks, the telecom industry is going to get further squeezed anyway. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Businesses and governments will look to manage their networking and communications costs – and may get access to dark fibre or spectrum to build their own networks, if commercial services don’t continue to improve in terms of price-performance. New connectivity options are springing up too, from WiFi to drones to device-to-device connections.

In other words: some network investment will be “table stakes” for telcos, irrespective of any new digital services. In many senses, the new propositions are “upside” rather than the fundamental basis justifying capex.

 

  • Executive Summary
  • Introduction
  • History: The network is the service
  • What are so-called “digital services”?
  • Service categories
  • Network domains
  • Enabler, pre-requisite or inhibitor?
  • Overview
  • Virtualisation
  • Agility & service enablement
  • More than just the network: lead actor & supporting cast
  • Case-studies, examples & counter-examples
  • Successful network-based novel services
  • Network-driven services: learning from past failures
  • The mobile network paradox
  • Conclusion: Services, agility & the network
  • How do so-called “digital” services link to the network?
  • Which network domains can make a difference?
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In recent years, most digital services have been “non-network” based
  • Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7
  • Figure 3: Network spend both “enables” & “prevents inhibition” of new services
  • Figure 4: Virtualisation brings classic telco “Network” & “IT” functions together
  • Figure 5: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 6: Service agility is multi-faceted. Network agility is a core element
  • Figure 7: Using Big Data Analytics to Predictively Cache Content
  • Figure 8: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 9: Mapping network investment areas to service opportunities

‘Under-The-Floor’ (UTF) Players: threat or opportunity?

Introduction

The ‘smart pipe’ imperative

In some quarters of the telecoms industry, the received wisdom is that the network itself is merely an undifferentiated “pipe”, providing commodity connectivity, especially for data services. The value, many assert, is in providing higher-tier services, content and applications, either to end-users, or as value-added B2B services to other parties. The Telco 2.0 view is subtly different. We maintain that:

  1. Increasingly, valuable services will be provided by third parties, but operators can still provide a few end-user services themselves. They will, for example, continue to offer voice and messaging services for the foreseeable future.
  2. Operators still have an opportunity to offer enabling services to ‘upstream’ service providers, such as personalisation and targeting (of marketing and services) using their customer data, payments, identity and authentication, and customer care.
  3. Even if operators fail (or choose not to pursue) options 1 and 2 above, the network must be ‘smart’ and all operators will pursue at least a ‘smart network’ or ‘Happy Pipe’ strategy. This will enable operators to achieve three things:
  • To ensure that data is transported efficiently so that capital and operating costs are minimised and the Internet and other networks remain cheap methods of distribution.
  • To improve user experience by matching the performance of the network to the nature of the application or service being used – or indeed vice versa, adapting the application to the actual constraints of the network. ‘Best efforts’ is fine for asynchronous communication, such as email or text, but unacceptable for traditional voice telephony. A video call or streamed movie could exploit guaranteed bandwidth if possible / available, or else they could self-optimise to conditions of network congestion or poor coverage, if well-understood. Other services have different criteria – for example, real-time gaming demands ultra-low latency, while corporate applications may demand the most secure and reliable path through the network.
  • To charge appropriately for access to and/or use of the network. It is becoming increasingly clear that the Telco 1.0 business model – that of charging the end-user per minute or per Megabyte – is under pressure as new business models for the distribution of content and transportation of data are being developed. Operators will need to be capable of charging different players – end-users, service providers, third-parties (such as advertisers) – on a real-time basis for provision of broadband and maybe various types or tiers of quality of service (QoS). They may also need to offer SLAs (service level agreements), monitor and report actual “as-experienced” quality metrics, or expose information about network congestion and availability. (A toy sketch of QoS classes and tiered charging follows this list.)
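
To make the last two bullets concrete, here is a toy sketch that maps services to QoS classes and charges different payers per class. Every class, tariff and number is invented for illustration; a production BSS would be vastly more sophisticated.

```python
# Toy illustration: map services to QoS classes, then charge different
# parties per class. All classes, parties and prices are invented.
QOS = {
    "email":  {"class": "best_effort", "max_latency_ms": None},
    "voice":  {"class": "guaranteed",  "max_latency_ms": 150},
    "gaming": {"class": "low_latency", "max_latency_ms": 30},
}

TARIFF_PER_GB = {  # payer -> QoS class -> price per GB
    "end_user": {"best_effort": 0.01, "guaranteed": 0.05, "low_latency": 0.08},
    "upstream": {"best_effort": 0.00, "guaranteed": 0.10, "low_latency": 0.15},
}

def charge(payer: str, service: str, gigabytes: float) -> float:
    qos_class = QOS[service]["class"]
    return round(TARIFF_PER_GB[payer][qos_class] * gigabytes, 2)

print(charge("end_user", "gaming", 2.0))   # -> 0.16
print(charge("upstream", "voice", 10.0))   # -> 1.0
```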

Under the floor players threaten control (and smartness)

Either through deliberate actions such as outsourcing, or through external agency (Government, greenfield competition, etc.), we see the network part of the telco universe suffering from a creeping loss of control and ownership. There is a steady move towards outsourced networks, as they are shared, or built around the concepts of open access and wholesale. While this would be fine if the telcos themselves remained in control of this trend (we see significant opportunities in wholesale and infrastructure services), in many cases the opposite is occurring. Telcos are losing control, and in our view losing influence, over their core asset – the network. They are worrying so much about competing with so-called OTT providers that they are missing the threat from below.

At the point at which many operators, at least in Europe and North America, are seeing the services opportunity ebb away, and ever-greater dependency on new models of data connectivity provision, they are potentially cutting off (or being cut off from) one of their real differentiators.

Given the uncertainties around both fixed and mobile broadband business models, it is sensible for operators to retain as many business model options as possible. Operators are battling with significant commercial and technical questions such as:

  • Can upstream monetisation really work?
  • Will regulators permit priority services under Net Neutrality regulations?
  • What forms of network policy and traffic management are practical, realistic and responsive?

Answers to these and other questions remain opaque. However, it is clear that many of the potential future business models will require networks to be physically or logically re-engineered, and flexible back-office functions, like billing and OSS, to be closely integrated with the network.

Outsourcing networks to third-party vendors, particularly when such a network is shared with other operators, is dangerous in these circumstances. Partners that today agree on the principles for network-sharing may have very different strategic views and goals in two years’ time, especially given the unknown use-cases for new technologies like LTE.

This report considers all these issues and gives guidance to operators who may not have considered all the various ways in which network control is being eroded, from Government-run networks through to outsourcing services from the larger equipment providers.

Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators

Source: STL Partners

Industry structure is being reshaped

Over the last year, Telco 2.0 has updated its overall map of the telecom industry, to reflect ongoing dynamics seen in both fixed and mobile arenas. In our strategic research reports on Broadband Business Models, and the Roadmap for Telco 2.0 Operators, we have explored the emergence of various new “buckets” of opportunity, such as verticalised service offerings, two-sided opportunities and enhanced variants of traditional retail propositions.

In parallel to this, we’ve also looked again at some changes in the traditional wholesale and infrastructure layers of the telecoms industry. Historically, this has largely comprised basic capacity resale and some “behind the scenes” use of carrier’s-carrier services (roaming hubs, satellite / sub-oceanic transit, etc.).

Figure 2 – Telco 1.0 Wholesale & Infrastructure structure


Source: STL Partners

Contents

  • Revising & extending the industry map
  • ‘Network Infrastructure Services’ or UTF?
  • UTF market drivers
  • Implications of the growing trend in ‘under-the-floor’ network service providers
  • Networks must be smart and controlling them is smart too
  • No such thing as a dumb network
  • Controlling the network will remain a key competitive advantage
  • UTF enablers: LTE, WiFi & carrier ethernet
  • UTF players could reduce network flexibility and control for operators
  • The dangers of ceding control to third-parties
  • No single answer for all operators but ‘outsourcer beware’
  • Network outsourcing & the changing face of major vendors
  • Why become an under-the-floor player?
  • Categorising under-the-floor services
  • Pure under-the-floor: the outsourced network
  • Under-the-floor ‘lite’: bilateral or multilateral network-sharing
  • Selective under-the-floor: Commercial open-access/wholesale networks
  • Mandated under-the-floor: Government networks
  • Summary categorisation of under-the-floor services
  • Next steps for operators
  • Build scale and a more sophisticated partnership approach
  • Final thoughts
  • Index

 

  • Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators
  • Figure 2 – Telco 1.0 Wholesale & Infrastructure structure
  • Figure 3 – The battle over infrastructure services is intensifying
  • Figure 4 – Examples of network-sharing arrangements
  • Figure 5 – Examples of Government-run/influenced networks
  • Figure 6 – Four under-the-floor service categories
  • Figure 7: The need for operator collaboration & co-opetition strategies

Full Article: Nokia’s Strange Services Strategy – Lessons from Apple iPhone and RIM

The profuse proliferation of poorly integrated projects suggests either – if we’re being charitable – a deliberate policy of experimenting with many different ideas, or else – if we’re not – the absence of a coherent strategy.

Clearly Nokia is aware of the secular tendency in all information technology fields for value to migrate towards software, and specifically towards applications. Equally clearly, they have the money, scale, and competence to deliver major projects in this field. However, so far they have failed to make services into a meaningful line of business, and even the well-developed software ecosystem hasn’t seen a major hit like the iPhone and its associated app store.

Nokia Services: project proliferator

So far, the Services division in its various incarnations has brought forward Club Nokia, the Nokia Game, Forum Nokia, Symbian Developer Network, WidSets, Nokia Download!, MOSH, Nokia Comes With Music, Nokia Music Store, N-Gage, Ovi, Mail on Ovi, Contacts on Ovi, Ovi Store… it’s a lot of brands for one company, and that’s not even an exhaustive list. They’ve further acquired Intellisync, Sega.com, Loudeye, Twango, Enpocket, Oz Communications, Gate5, Starfish Software, Navteq and Avvenu since 2005 – an average of just over two service acquisitions a year. Further, despite the decision to integrate all (or most) services into Ovi, there are still five different functional silos inside the Services division.

The great bulk of applications or services available or proposed for mobile devices fall into two categories – social or media. Under social we’re grouping anything that is primarily about communications; under media we’re grouping video, music, games, and content in general. Obviously there is a significant overlap. This is driven by fundamentals; no-one is likely to want to do computationally intensive graphics editing, CAD, or heavy data analysis on a mobile, run a database server on one, or play high-grade full-3D games. Batteries, CPU limitations, and most of all, form factor limitations see to that. And on the other side, communication is a fundamental human need, so there is demand pull as well as constraint push. As we pointed out back in the autumn of 2007, communication, not content, is king.

Aims

In trying to get user adoption of its applications and services, Nokia is pursuing two aims. One is to create products that will help to ship more Nokia devices, and in particular higher-value N- or E-series devices rather than featurephones; the other is a longer-range hope to create a new business in its own right, which will probably be monetised through subscriptions, advertising, or transactions. This latter aim is much further off than the first, and is affected by the operators’ suspicion of any activity that seems to rival their treasured billing relationship. For example, although quick signup and data import are crucial to deploying a social application, Nokia probably wouldn’t get away with automatically enrolling all users in its services – the operators likely wouldn’t wear it.

Historical lessons

There have been several historical examples of similar business models, in which sales of devices are driven by a social network. However, the common factor is that success has always come from facilitating existing social networks rather than trying to create new ones. This is also true of the networks themselves; if new ones emerge, it’s usually as an epiphenomenon of generally reduced friction. Some examples:

  1. Telephony itself: nobody subscribed in order to join the telephone community, they subscribed to talk to the people they wanted to talk to anyway.
  2. GSM: the unique selling point was that the people who might want to talk to you could reach you anywhere, and PSTN interworking was crucial.
  3. RIM’s BlackBerry: early BlackBerries weren’t that impressive as such, but they provided access to the social value of your e-mail workflow and groupware anywhere. Remember, the only really valuable IM user base is the 17 million Lotus Notes Sametime users.
  4. 3’s INQ: the Global Mobile Award-winning handset is really a hardware representation of the user’s virtual presence. Hutchison isn’t interested in trying to make people join Club Hutch or use 3Book; they’re interested in helping their users manage their social networks and charging for the privilege.

So it’s unlikely that trying to recruit users into Nokia-specific communities is at all sensible. Nobody likes vendor lock-in. And, if your product is really good, why restrict it to Nokia hardware users? As far as Web applications go, of course, there’s absolutely no reason why other devices shouldn’t be allowed to play. But this fundamental issue – that no-one organises their life around their friends’ choice of device vendor, or of mobile operator – would tend to explain why there have been so many service launches, mergers, and shutdowns. Nokia is trying to find the answer by trial and error, but it’s looking in the wrong place. There is some evidence, however, that they are looking more at facilitating other social applications, though this is subject to negotiation with the operators.

The operator relationship – root of the problem

One of the reasons is the conflict with operators mentioned above. Nokia’s efforts to build a Nokia-only community mirror the telco fascination with the billing relationship. Telcos tend to imagine that being a customer of Telco X is enough to constitute a substantial social and emotional link; Nokia is apparently working on the assumption that being a customer of Nokia is sufficient to make you more like other Nokia customers than everyone else. So both parties are trying to “own the customer”, when in fact this is probably pointless, and they are succeeding in spoiling each other’s plans. Although telcos like to imagine they have a unique relationship with their subscribers, they in fact know surprisingly little about them, and carriers tend to be very unpopular with the public. Who wants to have a relationship with the Big Expensive Phone Company anyway? Both parties need to rethink their approach to sociability.

What would a Telco 2.0 take on this look like?

First of all, the operator needs to realise that the subscribers don’t love them for themselves; it was the connectivity they were after all along! Tears! Secondly, Nokia needs to drop the fantasy of recruiting users into a vendor-specific Nokiasphere. It won’t work. Instead, both ought to be looking at how they can contribute to other people’s processes. If Nokia can come up with a better service offering, very well – let them use the telco API suite. In fact, perhaps the model should be flipped, and instead of telcos marketing Nokia devices as a bundled add-in with their service, Nokia ought to be marketing its devices (and services) with connectivity and much else bundled into the upfront price, with the telcos getting their share through richer wholesale mechanisms and platform services.

Consider the iPhone. Setting aside the industrial design and GUI for a moment – I dare you! you can do it! – its key features were integration with iTunes (i.e. with content); a developer platform that offered good APIs and documentation, but also a route to market for developers and an easy way for users to discover, buy, and install their products; and an internal business model that sweetened the deal for the operators by offering them exclusivity and a share of the revenue. Everyone still loves the iPhone, everyone still hates AT&T, but would AT&T ever consider not renewing the contract with Apple? They’re stealing our customers’ hearts! Of course not.

Apple succeeded in improving the following processes for two out of three key customer groups:

  1. Users: Acquiring and managing music and video across multiple devices.
  2. Users: Discovering, installing, and sharing mobile applications
  3. Developers: Deploying and selling mobile applications

And as two-sidedness would suggest, they offered the remaining group a share of revenue. The rest is history; the iPhone has become the main driver of growth and profitability at Apple, more than one billion application downloads have been served by the App Store, etc., etc.

Conclusions: turn to small business?

So far, however, Nokia’s approach has mirrored the worst aspects of telcos’ attitude to their subscribers; a combination of possessiveness and indifference. They want to own the customer; they don’t know how or why. It might be more defensible if there was any sign that Nokia is serious about making money from services; that, of course, is poison to the operators and is therefore permanently delayed. Similarly, Nokia would like to have the sort of brand loyalty Apple enjoys and to build the sort of integrated user experience Apple specialises in, but it is paranoid about the operators. The result is essentially an Apple strategy, but not quite.

What else could they try? Consider Nokia Life Tools, the package of information services for farmers and small businesses they are building for the developing world. One thing that Nokia’s services strategy has so far lacked is engagement with enterprises; it’s all been about swapping photos and music and status updates. Although Nokia makes great business-class gadgets, and they provide a lot of useful enablers (multiple e-mail boxes, support for different push e-mail systems, VPN clients, screen output, printer support), there’s a hole shaped like work in their services offering. RIM has been much better here, working together with IBM and Salesforce.com to expand the range of enterprise applications they can mobilise.

Life Tools, however, shows a possible opportunity – it’s all right catering to companies who already have complex workflow systems, but who’s serving the ones that don’t have the scale to invest there? None of the vendors are addressing this, and neither are the telcos. It fits a whole succession of Telco 2.0 principles – focus on enterprises, look for areas where there’s a big difference between the value of bits and their quantity, and work hard at improving wholesale.

It’s almost certainly a better idea than trying to be Apple, but not quite.

Next Steps for Nokia and telcos

  • It is unlikely that “Nokia users” are a valid community

  • Really successful social hardware facilitates existing social networks

  • Nokia’s problems are significantly explained by their difficult relationship with operators

  • Nokia’s emerging-market Life Tools package might be more of an example than they think

  • A Telco 2.0 approach would emphasise small businesses, offer bundled connectivity, and deal with the operators through better wholesale

Full Article: Nokia and Symbian – Missing an Opportunity?

The recent purchase of Symbian by Nokia highlights the tensions around running a consortium-owned platform business. Obviously, Nokia believes that making the software royalty-free and open source is the key to future mass adoption. The team at Telco 2.0 disagrees: we believe the creation of the Symbian Foundation will cure none of the governance or product issues going forward. While Nokia has been busy buying Symbian, the competition has moved on and offers a lot more than purely handset features; Symbian isn’t strong in the really important bits of the mobile jigsaw, those that generate the real value for the end-consumer, developer or mobile operator.

In this article, we look at the operating performance of Symbian. In a second article we examine the “openness” of Symbian going forward, since “open” remains such a talisman of business model success.

Background

Symbian’s core product is a piece of software code that the user doesn’t interact with directly — it’s low-level operating system code to deal with key presses, screen display, and controlling the radio. Unlike Windows (but rather like Unix) there are three competing user interfaces built on this common foundation: Nokia’s Series 60 (S60), Sony Ericsson’s UIQ, and DoCoMo’s MOAP. Smartphones haven’t taken the world by storm yet, but Symbian is the dominant smartphone platform, and thus is well positioned to trickle down to lower-end handsets over time. What might be relevant to 100m handsets this year could be a billion handsets in two or three years from now. As we saw on the PC with Windows, the character of the handset operating system is critical to who makes money out of the mobile ecosystem.

The “what” of the deal is simple enough — Nokia spent a sum of money equivalent to two years’ licence fees buying out the other shareholders in Symbian, before staving off general horror from other vendors by promising to convert the firm into an open-source foundation like the ones behind Mozilla, Apache and many other open-source projects. The “how” is pretty simple, too. Nokia is going to chip in its proprietary S60, and assign the S60 developers to work on Symbian Foundation projects.

Shareholding Structure

The generic problem with a consortium is that its members are typically not equal, and almost certainly have different objectives. This has always been the case with Symbian.

It is worth examining the final shareholder structure which has been stable since July 2004: Nokia – 47.9%, Ericsson – 15.6%, SonyEricsson – 13.1%, Panasonic – 10.5%, Siemens – 8.4% and Samsung – 4.5%. At the bottom of the article we have listed the key corporate events in Symbian history and the changes in shareholding.

It is interesting to note that: Siemens is out of the handset business, Panasonic doesn’t produce Symbian handsets (it uses LiMo), Ericsson only produces handsets indirectly through SonyEricsson, and Samsung is notably permissive towards handset operating systems.

SonyEricsson has been committed to Symbian at the top end of its range, although recently it has been adding Windows Mobile for its Xperia range targeted at corporates.

Nokia seems all but fully committed, though it has recently purchased Trolltech — a notable fan of Linux and the developer of Qt.

The tensions within the shareholders seem obvious: Siemens was probably in the consortium for pure financial return, whereas for Nokia it was a key component of industrial strategy and cost base for its high-end products. The other shareholders were somewhere in between those extremes. The added variable was that Samsung, Nokia’s strongest competitor, seemed hardly committed to the product.

It is easy to hypothesise that the software roadmap and licence pricing for Symbian were difficult to agree, and that was before the user-interface angle (see below).

Ongoing Business Model

Going forward, Nokia has solved the argument over licence pricing — it is free. Whether this will be passed on to consumers in the form of lower handset prices is open to debate. After all, Nokia somehow has to recover the cost of an additional 1,000 personnel on its payroll. For SonyEricsson, with its recent profit warning, any improvement in margin will be appreciated, but this doesn’t necessarily mean a reduction in pricing.

It also seems obvious that Nokia will control the software roadmap going forward. Handset vendors using Symbian will be faced with three options: free-ride on Nokia; pick and choose components and differentiate with self-built components; or pick another OS.

We think that the chosen licence (Eclipse — described in more detail in the next article), the history of Symbian user interfaces, and the dominance of Nokia all point towards other handset vendors producing their own flavours of Symbian going forward.

Competition

Nokia may have bought Symbian, even without competitive pressures, purely to reduce its own royalties. However, the competitive environment adds an additional dimension to the decision.

RIM and Microsoft are extremely strong in the corporate space, and both excel in two areas where Symbian is currently extremely weak — synchronising with messaging and with calendaring services.

Apple has also raised the bar in usability. This is something where Symbian has stayed clear, but is certainly not one of the strengths of S60, the Nokia front end. The wife of one of our team — tech-savvy, tri-lingual, with a PhD in molecular biology — couldn’t work out how to change the ringtone, and not for lack of trying. What do you mean it’s not under ‘settings’? Some unkind tongues have even speculated that the S60 user interface was inspired by an Enigma Machine stolen to order by Nokia executives.

Qualcomm is rarely mentioned when phone operating systems are talked about, and that is because it takes a completely different approach. Qualcomm’s BREW would be better classified as a content delivery system, and it is gaining traction in Europe. Two really innovative handsets of last year, the O2 Cocoon and the 3 Skypephone, were both based upon Qualcomm software. Qualcomm’s differentiator is that it is not a consumer brand and develops solutions in partnership with operators.

The RIM, Microsoft, Apple and Qualcomm solutions share one thing in common: they incorporate network elements which deliver services.

Nokia is of course moving into back-end solutions through its embryonic Ovi services. And this may be the major point about Symbian: it is only one, albeit important, piece of the jigsaw. Meanwhile, as we’ve written before, Ovi remains obsessed with information and entertainment services, neglecting the network side of the core voice and messaging service. Contrast this with Apple’s first advance, Visual Voicemail.

As James Balsillie, CEO of RIM, said this week: “The sector is shifting rapidly. The middle part is hollowing — there are cheap, cheap, cheap phones and then it is smartphones to a connected platform.”

Key Symbian Dates

  • June 1998 – Launch, with Psion owning 40%, Nokia 30% and Ericsson 30%.
  • Oct 1998 – Motorola joins the consortium.
  • Jan 1999 – Symbian acquires Ronneby Labs from Ericsson, and with it the original UIQ team and codebase.
  • Mar 1999 – DoCoMo partnership.
  • May 1999 – Panasonic joins the consortium. Equity stakes now: Psion – 28%, Nokia / Ericsson / Motorola – 21% each, Panasonic – 9%.
  • Jan 2002 – Funding round of £20.75m. SonyEricsson takes up Ericsson’s rights.
  • Jun 2002 – Siemens joins the consortium with £14.25m for 5%. Implied value £285m.
  • Feb 2003 – Samsung joins the consortium with £17m for 5%. Implied value £340m.
  • Aug 2003 – Five-year anniversary: original consortium members can now sell. Motorola sells its stake to Nokia and Psion for £57m. Implied value £300m.
  • Feb 2004 – Original founder Psion decides to sell out, announcing the sale of its 31.7% for £135.5m, with part of the payment dependent on future royalties. Implied value £427m. Nokia would have > 50% control. David Potter of Psion says total investment in Symbian was £35m to date, so £135.5m represents a good return.
  • July 2004 – Pre-emption of the Psion stake by Panasonic, SonyEricsson and Siemens. An additional rights issue of £50m is taken up by Panasonic, SonyEricsson, Siemens and Nokia. New shareholding structure: Nokia – 47.9%, Ericsson – 15.6%, SonyEricsson – 13.1%, Panasonic – 10.5%, Siemens – 8.4% and Samsung – 4.5%. The shareholders agree to raise the cost base to c. £100m per annum and headcount to c. 1,200.
  • Feb 2007 – Agreement to sell UIQ to SonyEricsson for £7.1m.
  • June 2008 – Nokia buys the rest of Symbian at an implied value of €850m (£673m), with approximate payouts of: Ericsson – £105m, SonyEricsson – £88.2m, Panasonic – £70.7m, Siemens – £56.5m and Samsung – £30.3m. Note that Symbian had net cash of €182m; the €262m quoted by Nokia is the net price paid to buy out the consortium, not the value of the company.
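
As a footnote, the “implied value” and payout figures quoted above are simple pro-rata calculations (price paid divided by stake bought, or stake times total value), as a few lines verify:

```python
# Sanity-checking the "implied value" figures above: price paid / stake bought.
deals = [
    ("Siemens, Jun 2002",    14.25, 0.05),   # -> ~£285m
    ("Samsung, Feb 2003",    17.0,  0.05),   # -> ~£340m
    ("Psion exit, Feb 2004", 135.5, 0.317),  # -> ~£427m
]
for name, price_m, stake in deals:
    print(f"{name}: implied value ~£{price_m / stake:.0f}m")

# And the June 2008 payouts are simply each stake times the £673m total:
for holder, stake in [("Ericsson", 0.156), ("SonyEricsson", 0.131),
                      ("Panasonic", 0.105), ("Siemens", 0.084),
                      ("Samsung", 0.045)]:
    print(f"{holder}: ~£{stake * 673:.1f}m")
```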