Fibre for 5G and edge: Who does it and how to build it?

Opportunities for fibre network operators

4G/5G densification and the growth in edge end points will place fresh demands on telecoms network infrastructure to deliver high bandwidth connections to new locations. Many of these will be sites on the streets of urban centres without existing connections, where installation of new fibre cables is costly. This will require careful planning and optimum selection of existing infrastructure to minimise costs and strengthen the business cases for fibre deployment.

While much of the growth in deployment of small cells and edge end points will be on private sites, their deployment in public areas, in support of public network services, will pose specific challenges to providing the high-bandwidth connectivity required. This includes both backhaul from cell sites and edge end points to the fibre transport network, plus any fronthaul needs for new open RAN deployments, from baseband equipment to radio units and antennas. In almost all cases this will entail installing new fibre in areas where laying a new duct is at its most expensive, although in a few cases fixed point-to-point radio links could be deployed instead.

Enter your details below to request an extract of the report

Global deployments of small cells and non-telco edge end points in public areas

Source: Small Cell Forum, STL research and analysis

In addition, operators of 5G small cells and public cloud edge sites will require access to fibre links for backhaul to their core networks to provide the high bandwidths required. In some cases, they may need multiple fibres, especially if diverse paths are needed for security and resilience purposes.

Many newer networks have been built for a specific purpose, such as residential or business FTTP. Others are trunk routes to connect large businesses and data centres, and may serve local, regional, national or international areas. In addition, changing regulations have encouraged the creation of new businesses such as neutral hosts (also called “open access” for wholesale fibre) and, as a result, the supply side of the market is composed of an increasing variety of players. If this pattern were to continue, then it would very likely prove uneconomic to build dedicated networks for some applications, such as small cell densification or some standalone edge applications.

However, provided build qualities meet the required standard and costs can be contained, there is no reason why networks deployed to address one market cannot be extended and repurposed to serve others. For new fibre builds being planned, it is also important to consider these new FTTX opportunities upfront and in some detail, rather than as an afterthought or a throw-away bullet point on investor slide decks.

This report looks at the opportunities these developments offer to fibre network operators and considers the business cases that need to be made. It examines the means and scope for minimising the costs necessary to profitably satisfy the widest range of needs.

The fibre market is changing

Demand for FTTH/P has been largely satisfied in many countries, and even in slower markets such as the UK and Germany, the bulk of the network is expected to be in place by 2025/6 for most urban premises, at least on the basis of “homes passed”, if not actually connected.

By contrast, the requirement for higher-bandwidth connectivity for mobile base stations being upgraded from 3G to 4G and 5G is current and ongoing. Demand for links to small cells needed to support 5G densification, standalone edge, and smart city applications is only just beginning to appear and is likely to develop significantly over the next 10 years or more. In future, high-speed broadband links will be required to support an increasing range of applications for different organisations: for example, autonomous and semi-autonomous vehicle (V2X) applications operated by government or city authorities.

Both densification and edge will need local connections for fronthaul and backhaul as well as longer connections to provide backhaul to the core network. Building from scratch is expensive owing to the high costs associated with digging in the public highway, especially in urban centres. Digging can be complex, depending on the surfaces and buried services encountered, and extensions after the initial main build can be very expensive.

Laying fibre and ducts is a long-term investment that can usually be amortised over 15 to 20 years. Nevertheless, network operators need to be sure of a good return on their investment and therefore need to find ways to minimise costs while maximising revenues. In markets with multiple players, there will also be a desire by potential acquisition targets to underscore their valuations by maximising their addressable market while reducing any post-merger remedial or expansion costs. Good planning, including watching for new opportunities and trends and the smart use of existing assets to minimise costs, can help ensure this.

  • Serving multiple markets through good forecasting and planning can help maximise revenues.
  • Operators and others can make use of various infrastructure assets to reduce costs, including incumbents’ physical duct/pole infrastructure, sewers, disused water and hydraulic pipes, neutral hosts’ networks, council ducts, and traffic management ducts. Obviously these will not extend everywhere that fibre is required, but they can make a meaningful contribution in many situations.

The remaining sections of this report examine in more detail the specific opportunities offered to fixed network operators by densification of mobile base stations and the growth of edge computing. It covers:

  • Market demand, including drivers of demand, and end users’ and the industry’s needs and options
  • The changing supply side and regulation
  • Technologies, build options and costs
  • How to maximise revenues and returns on investment.

Table of Contents

  • Executive Summary
  • Introduction
    • The fibre market is changing
  • Small cell and edge: Demand
    • Demand for small cells
    • Demand for edge end points
  • Small cell and edge: Supply
    • The changing network supply structure
  • Build options
    • Pros and cons of seven building options
  • How do they compare on costs?
  • Impact of regulation and policy
  • How to mitigate unforeseen costs
  • The business case
  • Conclusions
  • Index



How can telcos be loved?

Why should telcos care about being a ‘loved brand’?

If you are from an engineering or financial background, it can be tempting to look at branding and think it is a trivial or ‘soft’ aspect of business. This is valid in the sense that perceptions are inherently subjective, but this subjectivity does not mean that such perceptions are unimportant. People respond very strongly and instinctively to emotional stimuli. These responses are deep in our nature. We have evolved to quickly learn the characteristics of things that we want to repeat; the things we like. This extends to social behaviours too: Who do we want to be with, and be seen to be with? Which ‘tribe’ are we in, and who do we associate with?

Businesses have learnt a lot about this, because it has proved hugely valuable to the best practitioners, and the study and practices of marketing, advertising and branding have developed significantly in the past seventy years as a result. To be a ‘loved brand’ is a shorthand description of the ideal state.

What is a loved brand and what are the advantages?

Loved brands create strong emotional bonds with their customers, through a set of values and beliefs that customers can identify with and incorporate into their daily lives. In theory, businesses with loved brands have a range of advantages over others, which over time create significant financial benefits.

Business advantages for loved brands

Source: STL Partners


  1. Loved brands can charge a premium over their competitors, as consumers pay less attention to the price of products sold by the loved brand. Apple iPhones are generally more expensive than competitors’ phones with similar feature sets. However, many Apple customers remain loyal, with the status of owning the latest iPhone outweighing the additional cost.
  2. The emotional bonds with loved brands can become so robust that their customers do not consider competitors and forcefully defend the brand. Customers are even willing to forgive the brand for making some mistakes. In 2010, Ferrari recalled more than one thousand 458 Italia cars after reports that a design fault could cause them to catch fire. Despite the obvious negative publicity, which would have had catastrophic consequences for many manufacturers, Ferrari’s strong emotional connection with its customers protected its position in the luxury car market.
  3. Customers become valuable promoters of loved brands on their social networks, pushing the benefits and encouraging others to join. Tesla provides a great illustration of this advantage: many of its customers are not only delighted with their new electric vehicle, but are also strong advocates in persuading friends and family to purchase a Tesla for themselves.
  4. Loved brands attract the best talent, which helps the business to sustain its success.


Table of Contents

  • Executive Summary
  • Loved brands
    • Why should telcos care about being a ‘loved brand’?
    • What is a loved brand and what are the advantages?
  • Challenges for telcos in being a loved brand
    • How are telcos viewed by their customers?
    • Why do telcos find it hard to be loved?
  • Common telco strategies that have had limited success to date
    • Focus on having the best network
    • Offering the lowest prices in the market
    • Differentiating on customer relationship
    • Offering content bundles
    • Launching new service innovation and diversification strategies
  • What strategies could telcos adopt to succeed going forward?
  • Case study 1: TELUS brand positioning
  • Case study 2: o2 Priority Moments
  • Case study 3: MTN – sustainable economic value
  • Case study 4: Telstra Health
  • Deep dive: What learnings can be drawn from successful strategies adopted by Orange
    • What has Orange done?
    • What has been the impact on Orange’s results?
    • How has strategy contributed to Orange being a loved brand?
    • What lessons are there for other operators?
  • How do others develop and sustain “the love”?
  • Recommendations for being a loved brand in the new era for telecoms
  • Index


Edge computing market sizing forecast

Introducing STL Partners’ edge computing market sizing forecast

This report presents the key findings of STL Partners’ new demand forecast model for edge computing services. Its purpose is to:

  • Assess the demand from 20 use cases which currently rely on edge or will require edge to fully develop;
  • Identify the total revenue across the value chain: hardware, connectivity, application, edge infrastructure (network and on-premise), and integration and support;
  • Output a full set of results for over 180 countries over the 2020–2030 period per use case and per vertical.

This report is accompanied by a dashboard which presents a summary of our model output and the associated graphics for the world’s regions and for 20 major markets. The dashboard also presents the full revenue output for the 180+ countries.

Download the accompanying spreadsheet (Edge Insights subscribers only)


Edge computing addressable revenue will reach US$543 billion by 2030

High-level findings from the model indicate that:

  • The growth in the number of connected devices, as well as the need for higher levels of automation, operational efficiency and cost reduction, will drive the adoption of edge computing across many use cases and verticals over the next 10 years. This will result in increased spend across the value chain.
  • The total edge computing addressable market will grow from US$10 billion in 2020 to US$543 billion in 2030 at a CAGR of 49% over the 10-year period.
  • The total value chain breaks down into five main components: hardware, connectivity, application, integration and support, and edge infrastructure, which includes both on-premise edge and network edge.
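As a quick sanity check, the headline growth rate can be reproduced from the figures above. The sketch below takes the US$10bn (2020) and US$543bn (2030) values from the bullet points and computes the implied compound annual growth rate:

```python
# Illustrative arithmetic only: start/end values are the headline
# figures quoted in the text (US$ billions), over a 10-year period.
start_bn, end_bn, years = 10, 543, 10

# CAGR = (end / start)^(1/years) - 1
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # rounds to the ~49% quoted above
```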

Total edge computing addressable revenue


Source: STL Partners

Table of contents

  • Executive Summary
  • Methodology
  • Revenue by value chain component
  • Revenue by use case
  • Revenue by vertical
  • Revenue by region
  • Appendix

For more information on STL Partners’ edge-related services, please go to our Edge Insights Service page.

The new forecast is intended to complement our related edge computing research.


Telco edge computing: What is the operator strategy?


Edge computing can help telcos to move up the value chain

The edge computing market and the technologies enabling it are rapidly developing and attracting new players, providing new opportunities to enterprises and service providers. Telco operators are eyeing the market and looking to leverage the technology to move up the value chain and generate more revenue from their networks and services. Edge computing also represents an opportunity for telcos to extend their role beyond offering connectivity services and move into the platform and the application space.

However, operators will face tough competition from other market players such as cloud providers, who are moving rapidly to define and own the biggest share of the edge market. Industrial solution providers, such as Bosch and Siemens, are similarly investing in their own edge services. Telcos are also dealing with technical and business challenges as they venture into the new market, trying to position themselves and identify their strategies accordingly.

Telcos that fail to develop a strategic approach to the edge could risk losing their share of the growing market as non-telco first movers continue to develop the technology and dictate the market dynamics. This report looks into what telcos should consider regarding their edge strategies and what roles they can play in the market.

Following this introduction, we focus on:

  1. Edge terminology and structure, explaining common terms used within the edge computing context, where the edge resides, and the role of edge computing in 5G.
  2. An overview of the edge computing market, describing different types of stakeholders, current telecoms operators’ deployments and plans, competition from hyperscale cloud providers and the current investment and consolidation trends.
  3. Telcos’ challenges in addressing the edge opportunity: the technical, organisational and commercial challenges given the state of the market.
  4. Potential use cases and business models for operators, also exploring possible scenarios of how the market is going to develop and operators’ likely positioning.
  5. A set of recommendations for operators that are building their strategy for the edge.


What is edge computing and where exactly is the edge?

Edge computing brings cloud services and capabilities including computing, storage and networking physically closer to the end-user by locating them on more widely distributed compute infrastructure, typically at smaller sites.

One could argue that edge computing has existed for some time – local infrastructure has been used for compute and storage, be it end-devices, gateways or on-premises data centres. However, edge computing, or edge cloud, refers to bringing the flexibility and openness of cloud-native infrastructure to that local infrastructure.

In contrast to hyperscale cloud computing, where all the data is sent to central locations to be processed and stored, edge computing processes data locally, aiming to reduce the time and bandwidth needed to send and receive data between applications and the cloud, which improves the performance of the network and the applications. This does not mean that edge computing is an alternative to cloud computing. Rather, it is an evolutionary step that complements the current cloud computing infrastructure and offers more flexibility in executing and delivering applications.

Edge computing offers mobile operators several opportunities such as:

  • Differentiating service offerings using edge capabilities
  • Providing new applications and solutions using edge capabilities
  • Enabling customers and partners to leverage the distributed computing network in application development
  • Improving network performance and achieving efficiencies / cost savings

As edge computing technologies and definitions are still evolving, different terms are sometimes used interchangeably or have been associated with a certain type of stakeholder. For example, mobile edge computing is often used within the mobile network context and has evolved into multi-access edge computing (MEC) – adopted by the European Telecommunications Standards Institute (ETSI) – to include fixed and converged network edge computing scenarios. Fog computing is also often compared to edge computing; the former includes running intelligence on the end-device and is more IoT focused.

These are some of the key terms that need to be identified when discussing edge computing:

  • Network edge refers to edge compute locations that are at sites or points of presence (PoPs) owned by a telecoms operator, for example at a central office in the mobile network or at an ISP’s node.
  • Telco edge cloud is mainly defined as distributed compute managed by a telco. This includes running workloads on customer premises equipment (CPE) at customers’ sites as well as at locations within the operator network, such as base stations, central offices and other aggregation points in the access and/or core network. Caching and processing data closer to the customer allows both the operators and their customers to enjoy the benefit of reduced backhaul traffic and costs.
  • On-premise edge computing refers to the computing resources that are residing at the customer side, e.g. in a gateway on-site, an on-premises data centre, etc. As a result, customers retain their sensitive data on-premise and enjoy other flexibility and elasticity benefits brought by edge computing.
  • Edge cloud is used to describe the virtualised infrastructure available at the edge. It creates a distributed version of the cloud with some flexibility and scalability at the edge. This flexibility allows it to have the capacity to handle sudden surges in workloads from unplanned activities, unlike static on-premise servers. Figure 1 shows the differences between these terms.

Figure 1: Edge computing types

definition of edge computing

Source: STL Partners

Network infrastructure and how the edge relates to 5G

Discussions of edge computing strategies and the edge market are often linked to 5G. Both technologies have the overlapping goals of improving performance and throughput and reducing latency for applications such as AR/VR, autonomous vehicles and IoT. 5G improves speed by increasing spectral efficiency, offering the potential of much higher speeds than 4G. Edge computing, on the other hand, reduces latency by shortening the time required for data processing, allocating resources closer to the application. When combined, edge and 5G can help to achieve round-trip latency below 10 milliseconds.

While 5G deployment is yet to accelerate and reach ubiquitous coverage, the edge can be utilised in some places to reduce latency where needed. There are two reasons why the edge will be part of 5G:

  • First, it has been included in the 5G standards (3GPP Release 15) to enable the ultra-low latency that will not be achieved by improvements in the radio interface alone.
  • Second, operators are in general taking a slow and gradual approach to 5G deployment which means that 5G coverage alone will not provide a big incentive for developers to drive the application market. Edge can be used to fill the network gaps to stimulate the application market growth.

The network edge can be used for applications that need coverage (i.e. accessible anywhere) and can be moved across different edge locations to scale capacity up or down as required. Where an operator decides to establish an edge node depends on:

  • Application latency needs. Some applications, such as streaming virtual reality or mission-critical applications, will require locations close enough to their users to enable sub-50 millisecond latency.
  • Current network topology. Based on the operators’ network topology, there will be selected locations that can meet the edge latency requirements for the specific application under consideration in terms of the number of hops and the part of the network it resides in.
  • Virtualisation roadmap. The operator needs to consider its virtualisation roadmap and where data centre facilities are planned to be built to support the future network.
  • Site and maintenance costs. The economies of scale of cloud computing may diminish as the number of sites proliferates at the edge; for example, there is a significant difference between maintaining one or two large data centres and maintaining hundreds across the country.
  • Site availability. Some operators’ edge compute deployment plans assume the nodes will reside in the same facilities as those which host their NFV infrastructure. However, many telcos are still in the process of renovating these locations to turn them into (mini) data centres, so they are not yet ready.
  • Site ownership. Sometimes the preferred edge location is within sites that the operators have limited control over, whether that is in the customer premise or within the network. For example, in the US, the cell towers are owned by tower operators such as Crown Castle, American Tower and SBA Communications.

The potential locations for edge nodes can be mapped across the mobile network in four levels as shown in Figure 2.

Figure 2: Possible locations for edge computing


Source: STL Partners

Table of Contents

  • Executive Summary
    • Recommendations for telco operators at the edge
    • Four key use cases for operators
    • Edge computing players are tackling market fragmentation with strategic partnerships
    • What next?
  • Table of Figures
  • Introduction
  • Definitions of edge computing terms and key components
    • What is edge computing and where exactly is the edge?
    • Network infrastructure and how the edge relates to 5G
  • Market overview and opportunities
    • The value chain and the types of stakeholders
    • Hyperscale cloud provider activities at the edge
    • Telco initiatives, pilots and plans
    • Investment and merger and acquisition trends in edge computing
  • Use cases and business models for telcos
    • Telco edge computing use cases
    • Vertical opportunities
    • Roles and business models for telcos
  • Telcos’ challenges at the edge
  • Scenarios for network edge infrastructure development
  • Recommendation
  • Index


Lag Kills! How App Latency Wrecks Customer Experience

Executive Summary

  • STL Partners’ analysis shows that while latency and app errors are only weakly correlated across the whole of Europe, once outlying operators (SFR, Wind and those in Germany) are removed, there is a strong positive correlation between the two: as latency increases so do app errors.
  • Intuitively, this makes sense: apps ‘time out’ waiting for responses causing errors and crashes.
  • Latency and app errors both negatively affect customer experience – customers are more likely to abandon apps as responsiveness and error rates increase:
    • 48% of users would uninstall or stop using an app if it regularly ran slowly.
    • 53% of users would uninstall or stop using an app if it regularly crashed, stopped responding or had errors.
  • Historically, customers have tended to hold the app developer responsible for errors (55% of users blame the app for problems and only 22% the mobile operator). However, mobile operators have a significant impact on how quickly an app runs and how likely it is to experience an error, and, as understanding of the operators’ role grows, users may well use this as a criterion when selecting their mobile service provider.
  • Performance among Europe’s operators for app latency and errors varies widely:
    • The worst-performing operator in Europe (3 Italy) experiences over three times the proportion of requests with poor latency compared to the best performer (Bouygues Telecom).
    • The worst-performing operator in Europe (O2 Germany) produces over twice the number of app errors of the best performer (Bouygues Telecom again).
  • Improving customer experience is rapidly becoming a mantra of operators globally, and for several players (in Europe at least) improving latency performance and reducing app errors caused by latency and other factors should be a key priority. Without improvement, poor-performing operators will find themselves at a disadvantage and may struggle to retain existing customers and recruit new ones.

Introduction

Key objectives

Network latency is a key driver of user experience. In applications as diverse as e-commerce, VoIP, gaming, video or audio content delivery, search, online advertising, financial services, and the Internet of Things, increased latency has a direct and negative impact on customers. With higher latency, customers fail to complete tasks, leave applications, or experience application errors. This, in turn, results in poorer core business KPIs for the application provider – lower ratings, fewer subscribers, or reduced advertising fees.

As we showed in a recent report titled Mobile app latency in Europe: French operators lead; Italian & Spanish lag, with the modern Internet dominated by flows of small packets on fast networks, latency accounts for the biggest share of total load times and tends to determine the actual data transfer rates users see. And, as web and mobile applications increasingly consist of large numbers of requests to independent ‘microservices’, jitter – the variation in latency – becomes a more significant threat to the consumer experience. Furthermore, we benchmarked major European mobile network operators (MNOs) on average latency and the rate of unacceptably high-latency events (over 500ms).

In this second report on latency, which again uses data provided by app analytics specialist Apteligent (formerly Crittercism), we look at the rate of app errors – evidently, something that could not impact user experience more directly – and its correlation with both latency and the rate of unacceptably high-latency events. We explore how often apps fail across the same set of MNOs, test whether latency is a driver of app errors, and then conclude whether or not our theory that it is a real driver of consumer experience is correct.

Source data and methodology

Our partner, Apteligent, collects a wide variety of analytics data from thousands of mobile apps used by hundreds of millions of people around the world in their everyday lives and work. To date, the primary purpose of the data has been to help app developers make better apps. We are now working with Apteligent to produce further insights from the data to serve the global community of mobile operators.

This data-set includes the average network latency experienced at the application layer, the percentage of network requests above 500ms round-trip time, the 5th and 95th percentiles, and the rate of application errors. All of these data points are useful in trying to understand the overall experience of customers using their mobile apps, and in particular the delays and problems they’ve experienced such as long screen wait times and applications failing to work.
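The KPIs in this data set can be illustrated with a short sketch. The request times below are invented for illustration (not Apteligent data), and the 500ms threshold is the one used in the report:

```python
# Sketch: summarising a batch of app-layer round-trip times (ms) into
# the KPIs described above. Sample values are purely illustrative.
from statistics import mean, quantiles

def latency_kpis(rtts_ms, slow_threshold_ms=500):
    """Average latency, share of 'unacceptably slow' requests,
    and the 5th/95th percentiles for one batch of requests."""
    cuts = quantiles(rtts_ms, n=100)  # 99 percentile cut points
    return {
        "avg_ms": mean(rtts_ms),
        "share_over_threshold": sum(r > slow_threshold_ms for r in rtts_ms) / len(rtts_ms),
        "p5_ms": cuts[4],    # 5th percentile
        "p95_ms": cuts[94],  # 95th percentile
    }

# Illustrative batch: mostly fast requests with a slow tail
sample = [80, 90, 100, 110, 120, 130, 140, 150, 160, 170,
          180, 190, 200, 220, 240, 260, 300, 400, 600, 800]
kpis = latency_kpis(sample)
```

Here the two requests over 500ms give a 10% "app-lag" rate, even though the average sits well below the threshold, which is why the report treats the slow tail, not the mean, as the KPI that matters.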

We showed in the previous report how the longest round-trip delays or ‘app-lags’ (i.e. those over 500ms) are the most important KPI to look at when trying to understand customer experience. This is firstly because people really notice individual delays of this length. For people used to high speed broadband, it’s like going back to narrowband internet – it seems incredibly slow!

Importantly though, in modern apps, the distribution of delays is even more significant, as each app or web page typically makes multiple requests over the internet before it can load fully – and each of these requests will suffer some form of delay or latency.

A detailed explanation of this and of the collection methodology is available in the first report.

The Impact of latency on app errors

First glance: a positive correlation overall, but a weak one

The following chart shows the error rate per 10,000 app requests, plotted against the percentage of requests over 500ms round-trip time, by carrier. Each dot represents a week’s performance; we’ve looked at 12 weeks of data from 20 operators, from the week beginning 3 August 2015 to the week beginning 19 October 2015. Our hypothesis is that the more requests with unacceptable latency there are, the more app errors occur, because apps ‘time out’ or key requests are not fulfilled in time, causing an app error or, worse, a crash.

Figure 1: Latency and errors for the top 20 European MNOs over the last 12-weeks appear correlated, but there are some important outliers

Source: STL Partners, Apteligent

At first glance, there appears to be only a weak positive relationship between latency and error rates, but there does seem to be a natural grouping between the two hand-drawn dotted lines on the chart, with the weeks above the upper boundary (potentially) being outliers, in which at least one other factor is driving application errors up.

The lower boundary seems to represent the underlying rate of app errors that occur when there are no latency issues (between 20 and 50 errors per 10,000 requests), plus an increasing error rate as higher latency kicks in. For example, when 10% of requests experience latency above 500ms, the minimum error rate is around 30 per 10,000 requests, rising to 50 at the 35% mark.

Table of Contents

  • Executive Summary
  • Introduction
  • Key objectives
  • Source data and methodology
  • The Impact of Latency on App Errors
  • First glance: a positive correlation overall, but a weak one
  • Outliers are specific countries and operators
  • Strong positive correlation between latency and app errors once outliers are excluded
  • App Errors: The Impact on Customer Experience
  • Latency and errors – both bad for the customer
  • Appendix: Country Analysis
  • France: A Clear Relationship
  • The UK: Strong Latency-Error Correlation
  • Spain: A mixed picture, but latency is still predictive of app errors
  • Italy: Wind is a super-outlier
  • Germany: Nothing but Outliers?
  • STL Partners and Telco 2.0: Change the Game
  • About Apteligent (formerly Crittercism)


  • Figure 1: Latency and errors for the top 20 European MNOs over the last 12-weeks appear correlated, but there are some important outliers
  • Figure 2: 12-week average latency and app error performance by operator
  • Figure 3: After excluding the key outliers, high-latency events explain 75% of the app error rate across Europe’s top 20 operators
  • Figure 4: Expected number of errors when loading 20 web pages of Amazon
  • Figure 5: France shows both the best performers, and a very clear relationship between latency and app errors
  • Figure 6: The latency-error correlation is strongest in the UK
  • Figure 7: High variation in latency complicates the picture, but a third of app error variation is still driven by latency
  • Figure 8: Wind complicates the picture, but the trend is still there
  • Figure 9: Germany – is there any trend at all?
  • Figure 10: The source of the outliers – Germany in August

Mobile app latency in Europe: French operators lead; Italian & Spanish lag

Latency as a proxy for customer app experience

Latency is a measure of the time taken for a packet of data to travel from one designated point to another. The complication comes in defining the start and end point. For an operator seeking to measure its network latency, it might measure only the transmission time across its network.

However, to objectively measure customer app experience, it is better to measure the time it takes from the moment the user takes an action, such as pressing a button on a mobile device, to receiving a response – in effect, a packet arriving back and being processed by the application at the device.

This ‘total roundtrip latency’ time is what is measured by our partner, Crittercism, via embedded code within applications themselves on an aggregated and anonymised basis. Put simply, total roundtrip latency is the best measure of customer experience because it encompasses the total ‘wait time’ for a customer, not just a portion of the multi-stage journey.
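Crittercism's embedded measurement code is proprietary, but conceptually it amounts to timing each request from the moment it is dispatched to the moment its response has been processed. A minimal sketch of the idea (all names below are illustrative, not Crittercism's actual API):

```python
import time
from statistics import mean

class LatencyRecorder:
    """Records total roundtrip latency: from the moment a request is
    issued to the moment its response has been processed."""
    def __init__(self):
        self.samples_ms = []

    def timed(self, request_fn, *args, **kwargs):
        start = time.perf_counter()
        response = request_fn(*args, **kwargs)   # network + server time
        self.samples_ms.append((time.perf_counter() - start) * 1000)
        return response

recorder = LatencyRecorder()

def fake_api_call():
    time.sleep(0.05)          # stand-in for a real HTTP request
    return {"status": "ok"}

recorder.timed(fake_api_call)
print(f"mean roundtrip latency: {mean(recorder.samples_ms):.0f}ms")
```

In a real deployment the wrapper would sit around the app's HTTP client, and samples would be aggregated and anonymised before leaving the device.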

Latency is becoming increasingly important

Broadband speeds tend to attract most attention in the press and in operator advertising, and speed does of course impact downloads and streaming experiences. But total roundtrip latency has a bigger impact on many user digital experiences than speed. This is because of the way that applications are built.

In modern Web applications, the business logic is parcelled out into independent ‘microservices’ and their responses re-assembled by the client to produce the overall digital user experience. Each HTTP request is often quite small, although an overall onscreen action can be composed of a number of requests of varying sizes, so broadband speed is often less of a factor than latency – the time to send and receive each request. See Appendix 2: Why latency is important, for a more detailed explanation of why latency is such an important driver of customer app experience.
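The effect is easy to see with rough, invented numbers: for an onscreen action built from many small requests, the roundtrip-latency term dwarfs the data-transfer term.

```python
# Illustrative arithmetic (invented figures): 30 requests of ~5KB each over
# a 20Mbit/s link, with a 300ms total roundtrip latency per request.
# Assume the requests are serialised into 5 sequential batches of 6
# parallel requests.
requests = 30
bytes_per_request = 5 * 1024
bandwidth_bps = 20_000_000 / 8      # 20 Mbit/s in bytes per second
rtt_s = 0.300
sequential_batches = 5

transfer_time = requests * bytes_per_request / bandwidth_bps
latency_time = sequential_batches * rtt_s

print(f"transfer time: {transfer_time*1000:.0f}ms")   # 61ms
print(f"latency time:  {latency_time*1000:.0f}ms")    # 1500ms
```

Even with these conservative assumptions, waiting on roundtrips accounts for well over 90% of the total, which is why halving bandwidth barely matters while halving latency transforms the experience.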

The value of using actual application latency data

As we have already explained, STL Partners prefers to use total roundtrip latency as an indicator of customer app experience as it measures the time that a customer waits for a response following an action. STL Partners believes that Crittercism data reflects actual usage in each market because it operates within apps – in hundreds of thousands of apps that people use in the Apple App Store and in Google Play. This is a quite different approach from that of other players, which require users to download a specific app that then ‘pings’ a server and awaits a response. This latter approach has a couple of limitations:

1. Although there have been several million downloads of the OpenSignal and Actual Experience apps, this doesn’t get anywhere near the number of people that have downloaded apps containing the Crittercism measurement code.

2. Because the Crittercism code is embedded within apps, it directly measures the latency experienced by users when using those apps. A dedicated measurement app fails to do this. It could be argued that a dedicated app gives the ‘cleanest’ reading – it isn’t affected by variations in app design, for example. This is true, but STL Partners believes that by aggregating the data across apps such variation is removed and a representative picture of total roundtrip latency revealed. Crittercism can also provide more granular data. For example, although we haven’t shown it in this report, Crittercism data can show latency performance by application type – e.g. Entertainment, Shopping, and so forth – based on the categorisation of apps used by Google and Apple in their app stores.

A key premise of this analysis is that, because operators’ customer bases are similar within and across markets, the profile of app usage (and therefore latency) is similar from one operator to the next. The latency differences between operators are, therefore, down to the performance of the operator.

Why it isn’t enough to measure average latency

It is often said that averages hide disparities in data, and this is particularly true for latency and customer experience. This is best illustrated with an example. In Figure 2 we show the distribution of latencies for two operators. Operator A has lots of very fast requests and a long tail of requests with high latencies.

Operator B has far fewer fast requests but a much shorter tail of poor-performing latencies. The chart clearly shows that operator B has a much higher percentage of requests with a satisfactory latency even though its average latency is worse than operator A’s (318ms vs 314ms). Essentially, operator A is let down by its slowest requests – those that prevent an application from completing a task for a customer.

This is why in this report we focus on average latency AND, critically, on the percentage of requests that are deemed ‘unsatisfactory’ from a customer experience perspective.
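The Operator A/B example can be reproduced with toy distributions (invented to match the averages quoted above, not the report's underlying data):

```python
from statistics import mean

# Operator A: many very fast requests plus a long tail of slow ones
operator_a = [150] * 900 + [1790] * 100      # mean = 314ms
# Operator B: uniformly moderate requests, no long tail
operator_b = [318] * 1000                    # mean = 318ms

def pct_unsatisfactory(latencies_ms, threshold_ms=500):
    """Share of requests (in %) above the 500ms 'unsatisfactory' cut-off."""
    return 100 * sum(l > threshold_ms for l in latencies_ms) / len(latencies_ms)

print(mean(operator_a), pct_unsatisfactory(operator_a))  # 314 10.0
print(mean(operator_b), pct_unsatisfactory(operator_b))  # 318 0.0
```

Despite its worse average, Operator B leaves no customer waiting over half a second, which is why the report tracks the tail as well as the mean.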

Using latency as a measure of performance for customers

500ms as a key performance cut-off

‘Good’ roundtrip latency is somewhat subjective, and there is evidence that experience declines in a linear fashion as latency increases – people incrementally drop off the site. However, we have picked 500ms (half a second) as the threshold for unsatisfactory performance, as we believe that a delay of more than this is likely to impact mobile users negatively (expectations on the ‘fixed’ internet are higher). User interface research from as far back as 1968 suggests that anything below 100ms is perceived as “instant”, although more recent work on gamers suggests that even lower is usually better, and delay starts to become intrusive after 200-300ms. Google experiments from 2009 suggest that a lasting effect – users continued to see the site as “slow” for several weeks – kicked in above 400ms.

Percentage of app requests with total roundtrip latency above 500ms – markets

Five key markets in Europe: France, Germany, Italy, Spain and the UK.

This first report looks at five key markets in Europe: France, Germany, Italy, Spain and the UK. We explore performance overall for Europe by comparing the relative performance of each country and then dive into the performance of operators within each country.

We intend to publish other reports in this series, looking at performance in other regions – North America, the Middle East and Asia, for example. This first report is intended to provide a ‘taster’ to readers, and STL Partners would like feedback on additional insight that readers would welcome, such as latency performance by:

  • Operating system – Android vs Apple
  • Specific device – e.g. Samsung S6 vs iPhone 6
  • App category – e.g. shopping, games, etc.
  • Specific countries
  • Historical trends

Based on this feedback, STL Partners and Crittercism will explore whether it is valuable to provide specific total roundtrip latency measurement products.

Contents

  • Latency as a proxy for customer app experience
  • ‘Total roundtrip latency’ is the best measure for customer ‘app experience’
  • Latency is becoming increasingly important
  • STL Partners’ approach
  • Europe: UK, Germany, France, Italy, Spain
  • Quantitative Analysis
  • Key findings
  • UK: EE, O2, Vodafone, 3
  • Quantitative Analysis
  • Key findings
  • Germany: T-Mobile, Vodafone, e-Plus, O2
  • Quantitative Analysis
  • Key findings
  • France: Orange, SFR, Bouygues Télécom, Free
  • Quantitative Analysis
  • Key findings
  • Italy: TIM, Vodafone, Wind, 3
  • Quantitative Analysis
  • Key findings
  • Spain: Movistar, Vodafone, Orange, Yoigo
  • Quantitative Analysis
  • Key findings
  • About STL Partners and Telco 2.0
  • About Crittercism
  • Appendix 1: Defining latency
  • Appendix 2: Why latency is important

 

  • Figure 1: Total roundtrip latency – reflecting a user’s ‘wait time’
  • Figure 2: Why a worse average latency can result in higher customer satisfaction
  • Figure 3: Major European markets – average total roundtrip latency (ms)
  • Figure 4: Major European markets – percentage of requests above 500ms
  • Figure 5: The location of Google and Amazon’s European data centres favours operators in France, UK and Germany
  • Figure 6: European operators – average total roundtrip latency (ms)
  • Figure 7: European operators – percentage of requests with latency over 500ms
  • Figure 8: Customer app experience is likely to be particularly poor at 3 Italy, Movistar (Spain) and Telecom Italia
  • Figure 9: UK Operators – average latency (ms)
  • Figure 10: UK operators – percentage of requests with latency over 500ms
  • Figure 11: German Operators – average latency (ms)
  • Figure 12: German operators – percentage of requests with latency over 500ms
  • Figure 13: French Operators – average latency (ms)
  • Figure 14: French operators – percentage of requests with latency over 500ms
  • Figure 15: Italian Operators – average latency (ms)
  • Figure 16: Italian operators – percentage of requests with latency over 500ms
  • Figure 17: Spanish Operators – average latency (ms)
  • Figure 18: Spanish operators – percentage of requests with latency over 500ms
  • Figure 19: Breakdown of HTTP requests in facebook.com, by type and size

BT/EE: Huge Regulatory Headache and Trigger for European Transformation

UK Cellular: The Context

The UK is a high-penetration market (134%), and has for the most part been considered a high-competition one, with 5 MNOs and numerous resellers/MVNOs. However, since the Free.fr and T-Mobile USA price disruptions, the UK has ceased to be one of the cheaper markets among rich countries and now seems a little expensive by French standards, while the EE joint venture effectively means a move down from 5 operators to 4. There has been considerable concern that a price disruption was in the offing since BT acquired 2.6GHz spectrum, perhaps via a “Free-style” BT deployment, or alternatively via BT leasing the spectrum to a third party, possibly Virgin Media or TalkTalk. However, it is not as obvious that there is a big target for price disruption as it was in France pre-Free or the US pre-T-Mobile, as Figure 1 shows. The UK operators are only slightly dearer than the French average, with one exception, and the market is more competitive.

Figure 1: The UK is a slightly dearer cellular market than France

Source: STL Partners, themobileworld.com

The following chart summarises the current status of the operators.

Figure 2: UK mobile market overview, 2012-2014

Source: Company Accounts, STL Partners analysis

One reason to pick EE over O2 is immediately clear – EE has substantially better ARPU, is increasing it, and is at least holding onto customers. A deeper look into the company shows that the 4G network is just recruiting customers fast enough to compensate for churn away from the two legacy networks. Overall, the market is just growing.

Figure 3: UK cellular subscriber growth, 2012-2014

Source: Company Accounts, STL Partners analysis

O2 is the cheapest of the four 4G operators and is discounting hard to win share. Meanwhile, Vodafone UK starts to look like a squeezed third operator, losing customers and ARPU at the same time, and fourth operator 3UK looks remarkably strong. In terms of profitability, Figure 4 shows that Vodafone is just managing to hold its margins, while O2 is growing at constant margins, EE is improving its margins, and 3UK is powering ahead, improving its margins, ARPU, and subscriber base at the same time.

Figure 4: 3UK is a remarkably strong fourth operator

Source: Company Accounts, STL Partners analysis

 

  • UK Cellular: The Context
  • Meanwhile, in the Retail ISP Market
  • The Business Case for BT+EE
  • An affordable deal?
  • Valuation and leverage
  • Synergy: operational cost savings
  • Synergy: marketing, customer data and cross-sales
  • Synergy: quad-play revenue
  • Can a BT-EE merger be acceptable to the Regulator?
  • The Spectrum Position
  • The Vertical Integration Problem
  • The Move towards Convergence and the Fixed Squeeze Potential Scenarios
  • Conclusion: big bets, tests, and signals
  • BT: betting big
  • The market: three big decisions
  • The regulator and the regulatory environment: a big test
  • Sending important signals

 

  • Figure 1: The UK is a slightly dearer cellular market than France
  • Figure 2: UK mobile market overview, 2012-2014
  • Figure 3: UK cellular subscriber growth, 2012-2014
  • Figure 4: 3UK is a remarkably strong fourth operator
  • Figure 5: UK consumer wireline overview
  • Figure 6: FTTC is mostly benefiting the “major independent” ISPs
  • Figure 7: BT Sport has peaked as a driver of broadband net-adds, but the football rights bills keep coming
  • Figure 8: Content costs are eating around 70% of wholesale fibre revenue at BT
  • Figure 9: BT Sport’s impact on its market valuation
  • Figure 10: BT-EE would blow through the 2013 regulatory cap on spectrum allocations, but not the proposed cap post-2.3/3.4GHz auctions
  • Figure 11: Although BT-EE is just compliant with the 2.3/3.4GHz cap, it looks suspiciously dominant
  • Figure 12: Fibre-rich MNOs break away from the herd of mediocrity in Europe
  • Figure 13: Vodafone – light on fibre across the EU

Telco 1.0: Death Slide Starts in Europe

Telefonica results confirm that global telecoms revenue decline is on the way

Very weak Q1 2014 results from Telefonica and other European players 

Telefonica’s efforts to transition to a new Telco 2.0 business model are well-regarded at STL Partners.  The company, together with SingTel, topped our recent Telco 2.0 Transformation Index which explored six major Communication Service Providers (AT&T, Verizon, Telefonica, SingTel, Vodafone and Ooredoo) in depth to determine their relative strengths and weaknesses and provide specific recommendations for them, their partners and the industry overall.

But Telefonica’s Q1 2014 results were even worse than recent ones from two other European players, Deutsche Telekom and Orange, which both posted revenue declines of 4%.  Telefonica’s Group revenue came in at €12.2 billion, down 12% on Q1 2013.  Part of this was a result of the disposal of the Czech subsidiary and weaker currencies in Latin America, where around 50% of revenue is generated.  Nevertheless, the negative trend for Telefonica and other European players is clear.

As the first chart in Figure 1 shows, Telefonica’s revenues have followed a gentle parabola over the last eight years.  They rose from 2006 to 2010, reaching a peak in Q4 of that year, before declining steadily to leave the company in Q1 2014 back where it started in Q1 2006.

The second chart, however, adds more insight.  It shows the year-on-year percentage growth or decline in revenue for each quarter.  It is clear that between 2006 and 2008 revenue growth was already slowing down and, following the 2008 economic crisis in which Spain (which generates around a quarter of Telefonica’s revenue) was hit particularly hard, the company’s revenue declined in 2009.  The economic recovery that followed enabled Telefonica to report growth again in 2010 and 2011 before the underlying structural challenges of the telecoms industry – the decline of voice and messaging – kicked in, resulting in revenue decline since 2012.

Figure 1: Telefonica’s growth and decline over the last 8 years

Telco 2.0 Telefonica Group Revenue

Source: Telefonica, STL Partners analysis

One thing is clear: the only way is down for most CSPs and for the industry overall

The biggest concern for Telefonica and something that STL Partners believes will be replicated in other CSPs over the next few years is the accelerating nature of the decline since the peak.  It seems clear that Telco 1.0 revenues are not going to decline in a steady fashion but, once they reach a tipping point, to tumble away quickly as:

  • Substitute voice and messaging products and alternate forms of communication scale;
  • CSPs fight hard to maintain customers, revenue and share in voice, messaging and data products, via attractive bundles.

The results of the European CSPs confirm STL Partners’ belief that the outlook for the global industry in the next few years is negative overall.  It is clear that telecoms industry maturity is at different stages globally:

  • Europe: in decline
  • US: still growing but very close to the peak
  • Africa, Middle East, Latin America: slowing growth but still 2(?) years before peak
  • Asia: mixed, some markets growing, others in decline

Given these different mixes, STL Partners reaffirms its 2012 forecast that the industry overall will contract by up to 10% between 2013 and 2017 as core Telco 1.0 service revenue decline accelerates and more and more countries get beyond the peak.  This is illustrated for the mobile industry in Figure 2, below.

Figure 2: Near-term global telecoms decline is assured; longer-term growth is dependent on management actions now

Global mobile telecoms revenue

Source: STL Partners

Upturn in telecoms industry fortunes after 2016 dependent on current activities

If the downturn to 2016 is a virtual certainty, the shape of the recovery beyond this, which STL Partners (tentatively) forecasts, is not. The industry’s fortunes could be much better or worse than the forecast owing to the importance of transformation activities which all players (CSPs, Network Equipment Providers, IT players, etc.) need to make now.

The growth of what we have termed Human Data (personal data for consumers and business customers, including some aspects of Enterprise Mobility), Non-Human Data (connection of devices and applications – Internet of Things, Machine2Machine, Infrastructure as a Service, and some Enterprise Mobility) and Digital Services (end-user and B2B2X enabling applications and services) requires CSPs and their partners to develop new skills, assets, partnerships, customer relationships and operating and financial models – a new business model.

As IBM found in moving from being a hardware manufacturer to a services player during the 1990s, transforming the business model is hard.  IBM came very close to bankruptcy in the early 1990s before disrupting itself and re-emerging as a dominant force in recent years.  CSPs and NEPs, in particular, are now seeking to do the same and must act decisively from 2013-2016 if they are to enjoy a rebirth rather than continued and sustained decline.

Facing Up to the Software-Defined Operator

Introduction

At this year’s Mobile World Congress, the GSMA’s eccentric decision to split the event between the Fira Gran Via (the “new Fira”, as everyone refers to it) and the Fira Montjuic (the “old Fira”, as everyone refers to it) was a better one than it looked. If you took the special MWC shuttle bus from the main event over to the developer track at the old Fira, you crossed a culture gap that is widening, not closing. The very fact that the developers were accommodated separately hints at this, but it was the content of the sessions that brought it home. At the main site, it was impressive and forward-thinking to say you had an app, and a big deal to launch a new Web site; at the developer track, presenters would start up a Web service during their own talk to demonstrate their point.

There has always been a cultural rift between the “netheads” and the “bellheads”, of which this is just the latest manifestation. But the content of the main event tended to suggest that this is an increasingly serious problem. Everywhere, we saw evidence that core telecoms infrastructure is becoming software. Major operators are moving towards this now. For example, AT&T used the event to announce that it had signed up Software Defined Networks (SDN) specialists Tail-F and Metaswitch Networks for its next round of upgrades, while Deutsche Telekom’s Terastream architecture is built on it.

This is not just about the overused three letter acronyms like “SDN and NFV” (Network Function Virtualisation – see our whitepaper on the subject here), nor about the duelling standards groups like OpenFlow, OpenDaylight etc., with their tendency to use the word “open” all the more the less open they actually are. It is a deeper transformation that will affect the device, the core network, the radio access network (RAN), the Operations Support Systems (OSS), the data centres, and the ownership structure of the industry. It will change the products we sell, the processes by which we deliver them, and the skills we require.

In the future, operators will be divided into providers of the platform for software-defined network services and consumers of the platform. Platform consumers, which will include MVNOs, operators, enterprises, SMBs, and perhaps even individual power users, will expect a degree of fine-grained control over network resources that amounts to specifying your own mobile network. Rather than trying to make a unitary public network provide all the potential options as network services, we should look at how we can provide the impression of one network per customer, just as virtualisation gives the impression of one computer per user.

To summarise, it is no longer enough to boast that your network can give the customer an API. Future operators should be able to provision a virtual network through the API. AT&T, for example, aims to provide a “user-defined network cloud”.

Elements of the Software-Defined Future

We see five major trends leading towards the overall picture of the ‘software defined operator’ – an operator whose boundaries and structure can be set and controlled through software.

1: Core network functions get deployed further and further forwards

Because core network functions like the Mobile Switching Centre (MSC) and Home Subscriber Server (HSS) can now be implemented in software on commodity hardware, they no longer have to be tied to major vendors’ equipment deployed in centralised facilities. This frees them to migrate towards the edge of the network, providing for more efficient use of transmission links, lower latency, and putting more features under the control of the customer.

Network architecture diagrams often show a boundary between “the Internet” and an “other network”. This is called the ‘Gi interface’ in 3G networks (and ‘SGi’ in 4G). Today, the “other network” is usually itself an IP-based network, making this distinction simply that between a carrier’s private network and the Internet core. Moving network functions forwards towards the edge also moves this boundary forwards, making it possible for Internet services like content-delivery networking or application acceleration to advance closer to the user.

Increasingly, the network edge is a node supporting multiple software applications, some of which will be operated by the carrier, some by third-party services like – say – Akamai, and some by the carrier’s customers.

2: Access network functions get deployed further and further back

A parallel development to the emergence of integrated small cells/servers is the virtualisation and centralisation of functions traditionally found at the edge of the network. One example is so-called Cloud RAN or C-RAN technology in the mobile context, where the radio basebands are implemented as software and deployed as virtual machines running on a server somewhere convenient. This requires high capacity, low latency connectivity from this site to the antennas – typically fibre – and this is now being termed “fronthaul” by analogy to backhaul.

Another example is the virtualised Optical Line Terminal (OLT) some vendors offer in the context of fixed Fibre to the home (FTTH) deployments. In these, the network element that terminates the line from the user’s premises has been converted into software and centralised as a group of virtual machines. Still another would be the increasingly common “virtual Set Top Box (STB)” in cable networks, where the TV functions (electronic programming guide, stop/rewind/restart, time-shifting) associated with the STB are actually provided remotely by the network.

In this case, the degree of virtualisation, centralisation, and multiplexing can be very high, as latency and synchronisation are less of a problem. The functions could actually move all the way out of the operator network, off to a public cloud like Amazon EC2 – this is in fact how Netflix does it.

3: Some business support and applications functions are moving right out of the network entirely

If Netflix can deliver the world’s premier TV/video STB experience out of Amazon EC2, there is surely a strong case to look again at which applications should be delivered on-premises, in the private cloud, or moved into a public cloud. As explained later in this note, the distinctions between on-premises, forward-deployed, private cloud, and public cloud are themselves being eroded. At the strategic level, we anticipate pressure for more outsourcing and more hosted services.

4: Routers and switches are software, too

In the core of the network, the routers that link all this together are also turning into software. This is the domain of true SDN – basically, the effort to replace relatively smart routers with much cheaper switches whose forwarding rules are generated in software by a much smarter controller node. This is well reported elsewhere, but it is necessary to take note of it. In the mobile context, we also see this in the increasing prevalence of virtualised solutions for the LTE Evolved Packet Core (EPC), Mobility Management Entity (MME), etc.

5: Wherever it is, software increasingly looks like the cloud

Virtualisation – the approach of configuring groups of computers to work like one big ‘virtual computer’ – is a key trend. Even when, as with the network devices, software is running on a dedicated machine, it will be increasingly found running in its own virtual machine. This helps with management and security, and most of all, with resource sharing and scalability. For example, the virtual baseband might have VMs for each of 2G, 3G, and 4G. If the capacity requirements are small, many different sites might share a physical machine. If large, one site might be running on several machines.

This has important implications, because it also makes sharing among users easier. Those users could be different functions, or different cell sites, but they could also be customers or other operators. It is no accident that NEC’s first virtualised product, announced at MWC, is a complete MVNO solution. It has never been as easy to provide more of your carrier needs yourself, and it will only get easier.

The following Huawei slide (from their Carrier Business Group CTO, Sanqi Li) gives a good visual overview of a software-defined network.

Figure 1: An architecture overview for a software-defined operator

Source: Huawei

 

  • The Challenges of the Software-Defined Operator
  • Three Vendors and the Software-Defined Operator
  • Ericsson
  • Huawei
  • Cisco Systems
  • The Changing Role of the Vendors
  • Who Benefits?
  • Who Loses?
  • Conclusions
  • Platform provider or platform consumer
  • Define your network sharing strategy
  • Challenge the coding cultural cringe

 

  • Figure 1: An architecture overview for a software-defined operator
  • Figure 2: A catalogue for everything
  • Figure 3: Ericsson shares (part of) the vision
  • Figure 4: Huawei: “DevOps for carriers”
  • Figure 5: Cisco aims to dominate the software-defined “Internet of Everything”

Facebook Home: what is the impact?

 

Summary: Facebook has launched ‘Facebook Home’, technically a shell around the Android OS, that in theory creates valuable new advertising inventory on the screens of users’ phones. What will its impact be in practice for Facebook, and on Google, mobile operators, and other device manufacturers? (April 2013, Foundation 2.0, Executive Briefing Service, Dealing with Disruption Stream.)


Introduction

On April 4th 2013, Mark Zuckerberg, Facebook’s CEO, launched a new mobile service named Facebook Home. In this executive briefing we examine the new service especially in regard to the impact on Facebook and other players in the mobile value chain.

What is Facebook Home?

Facebook has essentially rewritten the Android user experience, giving its own services prominence. Technically, it is a shell around Android. In computing circles, this is nothing new: the first version of Windows was effectively a shell above MS-DOS, and most versions of open-source Linux have various shells that can be installed.

Facebook Home consists of three main features:

‘Coverfeed’

Figure 1 – Facebook Home ‘Coverfeed’

Source: Facebook Home Marketing Material

This feature turns the phone’s home screen into a Facebook news feed, continually updated as friends advertise their status and advertisers promote their wares. Interestingly, none of the images shows the signal strength, battery life and network operator indicators featured on traditional phones.

Chat Heads

Figure 2 – Facebook Home ‘Chat Heads’

Source: Facebook Home Marketing Material

The unimaginatively named ‘Chat Heads’ is basically a messaging service with very similar features to iMessage. Chat Head-to-Chat Head messages are sent over the Facebook network free of charge, and if a user is not on Chat Heads then an SMS message is sent. Presumably, at some date in the future, this feature will be integrated with the desktop version of Facebook, probably with voice-calling features. Basically, it is a competitor to both traditional MNO voice and messaging services and OTT players such as WhatsApp.

App Launcher

Figure 3 – Facebook Home ‘App Launcher’

Source: Facebook Home Marketing Material

The ‘App Launcher’ feature is pretty self-explanatory and provides access to non-Facebook services. The feature is neither earth-shattering in its beauty nor its UI innovation, but Facebook has chosen this approach for a reason.

The advantage for Facebook of App Launcher is that it can collect more data on other companies’ applications, even those where the users do not use Facebook Login.

Facebook’s strategy

Strategic context

Our consistent view of Facebook is that justifying a sky-high valuation is its biggest problem. Significant actual or realistically anticipated revenue growth is essential to support even our maximum valuation of US$30bn. Facebook’s current enterprise value (EV) is US$54bn, calculated from a market capitalisation of US$64bn less US$10bn in cash. Nothing has substantially changed to alter our view, and therefore we still believe Facebook is overvalued.
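The EV arithmetic quoted above is simply market capitalisation less net cash:

```python
# Enterprise value as quoted in the text (all figures in US$bn)
market_cap_bn = 64
cash_bn = 10
enterprise_value_bn = market_cap_bn - cash_bn
print(enterprise_value_bn)  # 54
```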

In our view the development of Facebook Home is effectively an admission that a mobile application alone will not deliver enough revenues. The stagnation of its share price indicates that the stock market is not really convinced at the moment by Facebook’s prospects.

Figure 4 – Facebook Vs Google Valuations: Facebook and Google Share Price (April 2013)

Source: Bigcharts.com

Mark Zuckerberg said in the Facebook Home launch event that Android users spent on average 23% of their time using the Facebook application. At first glance, this appears to be quite a large figure, and it deserves a little attention:

  • Does this figure include the huge base of Chinese Android users where Facebook is banned or is it just a USA figure? 
  • How does Facebook know time spent on other applications?
  • Is it actual traffic or service based? Does Facebook include more traditional phone applications such as voice and messaging in the figure? 

Despite these uncertainties, Facebook, with its Facebook Home service, is effectively making the phone available to advertisers 100% of the time, thereby vastly increasing its inventory.

The value of this inventory is a completely different matter. Increasing supply without an associated increase in demand from brands will only depress unit pricing. Increased demand will only be brought about when the effectiveness of the advertising is proven to the brands. For Facebook, and the nascent mobile advertising industry overall, this is the greatest challenge: proving the effectiveness of mobile advertising to brands so that demand sharply increases.

Distribution

The other side of the equation is distribution – how can Facebook Home gather as many users as possible? We see three possible answers:

  1. By preloading on certain handsets. One of the launch partners is HTC, and the Facebook Home shell will be available on some of their models in the USA on AT&T and in parts of Europe on the Orange network. This is the tried and tested ‘slowly but surely’ approach to distribution: convince OEMs and MNOs that the Facebook approach adds value and let them bundle the service with hardware and access packages. 
  2. By making the Facebook Home application available in the various app stores. At the launch, Facebook indicated that the application would be rolled out gradually on a device-by-device basis. This is a major problem with Android fragmentation, because developers effectively have to customise each application for each individual device. Data from Google indicates the level of fragmentation: it shows two axes of fragmentation, Android version and screen size. But there is an additional axis, which is the specific OEM APIs that vary by manufacturer and device. Of course, the user also has to be convinced to download the application. 
  3. By making Facebook Home the only way to access Facebook on Android. This was not mentioned at launch, but it is the fastest way to ensure adoption. The big risk, of course, is that users do not like Facebook Home and prefer the old application. The more casual the Facebook user, the higher the risk of them disliking the persistent nature of Coverfeed; Facebook therefore risks alienating these users and driving them to consider alternative social networks. 

Overall, it is our view that Facebook has taken a conservative approach to distribution, but if the data from early adopters is positive then Facebook could shift to the far more aggressive third option.

Privacy concerns: a big issue or not?

Facebook was noticeably silent at the launch event about what data it would be collecting from the service and adding to its social graph or customer profiles. There are significant privacy concerns about Facebook in some parts of the market, as illustrated by Om Malik, for example. Facebook Home only strengthens the need for transparency around personal data, which we will be exploring further at the EMEA Executive Brainstorm, 5-6 June 2013 in London.

Our view is that Facebook’s current privacy strategy of “do it now, ask permission later” is fatally flawed and unsustainable. We are already seeing competitors in the marketplace, especially Apple and Microsoft, adopting different stances, and the regulators are taking an interest. Change is coming to the “Wild West” of internet privacy, and both Google and Facebook may not like the new sheriffs.

Brendan Lynch, Chief Privacy Officer, Microsoft:

“Because consumers are telling us they care a lot about privacy, there are market forces at play. And we will see a lot more innovation in the privacy space. Our marketing campaign has become an evolution of that – consumers are telling us they are concerned about how data is being used online.” (See here.)

Neelie Kroes, Vice President of the European Commission:

“because of the high value attached to privacy, we are less shocked by default settings that are restrictive than by those which are wide open – especially as regards more vulnerable users. In line with this, we are working with industry to improve the ways default privacy settings can protect children.” (See here.)

To read the note in full, including the following sections detailing additional analysis…

  • How many users can Facebook Home acquire?
  • Cheap and Cheerful is a good way to experiment
  • Impact on other players
  • Google: how to manage the threat to Android?
  • Device Manufacturers: more difficult questions to address
  • Operators: must accelerate mobile advertising plans
  • Conclusions

…and the following figures…

  • Figure 1 – Facebook Home ‘Coverfeed’
  • Figure 2 – Facebook Home ‘Chat Heads’
  • Figure 3 – Facebook Home ‘Applauncher’
  • Figure 4 – Facebook Vs Google Valuations
  • Figure 5 – Facebook Active Users
  • Figure 6 – Facebook Mobile Users – distribution by OS
  • Figure 7 – The Rise and Fall of HTC Revenues

 

Serving the Digital Generation

Report Summary: This 120 page Strategy Report focuses on the ‘Digital Generation’ – the cohort which has grown up with new applications and technologies – whose behaviour will ultimately drive the future shape of the Telco business.

The report is a ‘must read’ for CxOs, strategists and product managers seeking to evolve telcos to succeed with the next generation.


This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this Strategy Report, which can be downloaded in full in PDF format by members of the Executive Briefing Service here.

For more on any of these services, please email contact@telco2.net or call +44 (0) 207 247 5003.

The Needs Gap – a strategic threat to Telcos

The report shows that there is a deep disconnect between Telcos and the Digital Generation.

The Digital Generation wants:

  • Communication to be free
  • To express identity and content
  • To move seamlessly between media
  • To connect with their social groups
  • New applications, fast

Telcos want:

  • ARPU
  • To connect calls and lines
  • To control as much as possible
  • To minimize capital investment
  • Years to develop new products and services

The Digital Generation has integrated some technologies and applications into their lives and rapidly discarded the rest – those that don’t fit. Many other applications and services familiar to our readers (Facebook, QQ, Apple, Google, etc.) now serve some of the needs that Telcos alone used to serve.

Telcos have generally been slow to produce services that meet the needs and expectations of these customers. Unchecked, this will ultimately lead to the disintermediation of the telcos from their ultimate source of value – their customers. This is a strategic threat not just for the youth segment but ultimately across all generations. This report outlines the threat, the urgent need for change, and a framework to support that change.

Report – Key Points

  • Definition of the digital generation – youth-oriented but aging fast
  • Key digital generation needs and behaviours – the need for participation
  • Drivers of service value for these customers – supporting interaction and self-expression
  • A new approach to product development – the Customer Participation Framework
  • The economics of end user participation – driving ROI from customer interactions
  • User participation and the two-sided business model – kicking off Telco 2.0 strategies
  • Social forces shaping young people’s actions – a risk culture
  • Age, gender and national variations in the Digital Generation – similarities and differences
  • Attitudes to technology – only a means to an end

 

Overview of the Customer Participation Framework

Fig 1 Overview of the Customer Participation Framework

Fit with Telco 2.0 Business Model Innovation Strategies

In previous STL Partners’ reports the focus has been on how Telco assets could be used to open new revenue streams from upstream service providers wanting to interact with end-users. Reports such as the 2-Sided Telecoms Market Opportunity have focused on the business opportunity for operators to reduce digital friction and protect themselves from over-the-top providers eager to circumvent the operator and gain access to the end-user directly.

In this report we shift the focus to examine end-users and their behaviours, explaining how:

  1. Operators can improve their retail offering to these customers by better meeting their needs;
  2. Operators can increase the value of their assets by better engaging with these customers and, in so doing, how they can enhance the value of the two-sided business model.

Serving the Digital Generation focuses on why and how young people are adopting digital and communication technologies into their lives. By doing this, STL Partners can help Telco industry management better anticipate, and respond to, the main drivers and unmet needs of tomorrow’s Telco 2.0 customer. What we may regard as quirky segmented behaviour today (blogging, twittering, social networking, for example) is, in fact, mass consumer behaviour tomorrow. Here STL Partners gives an insight into mass-market behaviours for a new breed of customer, which will shape the future of the communications and media sectors.

This report explains this behaviour and explores how the desire to participate represents a new opportunity for Telco value creation. To realise this opportunity, we have developed a new framework for future product development and services, The Customer Participation Framework (CPF). Developed initially as a template for validating new service or application ideas, the CPF is a tool that can be used to support different phases of the product or service innovation process:

  1. At concept initiation, to validate ideas against customer needs;

  2. During the development and trial phase, to ensure usability issues are properly addressed;

  3. In the execution phase, as a means of feedback iteration and a measurement of success.

The CPF framework can help operators increase the value of the Telco Value-Added Services platform and lead to entirely new ways of defining, evaluating, developing and marketing Telco services (retail) to both upstream service providers/partners and end users. We believe that the Customer Participation Framework represents an opportunity for operators to increase the value of their platforms and retail strategies, and thus help to realise the $375 billion two-sided business opportunity outlined in the 2-Sided Telecoms Market Opportunity and the Future Broadband Business Model reports.

Who is this report for?

The report is for senior (CxO) decision-makers and business strategists, product managers, strategic sales, business development and marketing professionals acting in the following types of organisations:

  • Fixed & Mobile Operators – to set and drive product development and strategy.
  • Vendors & Business Partners – to understand customer need and develop winning customer propositions.
  • Regulators and Standards Bodies – to inform strategy and policy making.

Strategists and CxOs in IT and Investment Companies may also find this report useful to understand the future landscape of the Telecoms and related industries, and to help to spot likely winning and losing investment and operational strategies in the market.

Key Questions Answered

  • What is driving the behaviour of the digital generation and what does this segment value in products and services?
  • Which companies are best meeting the needs of these customers? What can operators learn from them?
  • What is the short and longer term benefit to operators of meeting these needs?
  • How should operators and vendors go about developing products and services that achieve this?

Background – The need for a new innovation process in telecoms

During the period of rapid growth when markets were emerging, the process of product or service development for Telcos was driven by a focus on network roll-out, capacity issues, spectrum licences, supply chains, vendors, traffic forecasting, the regulatory environment and so on.

This was understandable. Uptake of Telco services was rapid and the challenges of meeting demand immense. Innovation was predominantly in hardware, which required long development cycles, massive investments and a stable regulatory environment. Everything was tested to destruction to ensure robustness and the ability to scale. The industry thrived, driven by some outstanding innovations in core networks, capacity handling etc.

Today, however, as markets mature and become saturated, this approach to innovation has run its course.

Increasingly, core propositions and networks are being commoditised and new services are being developed and delivered by others over the Telco infrastructure. Operators are under increased pressure to:

  1. Hold onto market share (or put more negatively, prevent churn) as an overriding consideration. Operators strive to increase customer retention and ‘stickiness’ on existing core services;
  2. Find new revenue streams – outside of the core personal communications services.

But building a stronger customer experience and innovating in new spheres requires a shift in focus from being Telco-centric to customer-centric. Placing end-user engagement and participation at the forefront of what Telcos do requires a cultural revolution.

It means a change in processes and the revaluation of core assets. This report focuses on the areas of innovation operators should pursue in their existing retail operations, as well as on the core enabling services that form a cornerstone of future business models.

A move from Telco-centric to customer-centric innovation

Fig 2 Telco-centric vs. Customer-centric

Case Studies, Companies and Services


Detailed Case Studies:
Blyk, Buongiorno, Cartoon Doll Emporium, Facebook, Maplestory, Mo1, Mobagetown, Puppyred, QQ.

Companies and Organisations Covered:
Amazon, Blyk, Buongiorno, Cartoon Doll Emporium, Ebay, Facebook, Firefox, LinkedIn, Livejournal, Maplestory, Mo1, Mobagetown, O2, Orange, Puppyred, QQ, Skype, Xanga, YouTube, Zygo

Summary of Contents

  • Introduction
  • Executive summary
  • Defining the Digital Generation
  • A Framework for Future Service and Product Development
  • Kids and Communication
  • The Changing Contours of Childhood
  • Digital differences: Age, gender & nation
  • Making technology their own

The Research Process

We interviewed senior marketing and product development executives in a dozen operators to fully understand how the current innovation process is managed and what evaluation criteria are adopted when developing potential new propositions, products and services. This helped us to identify the shortcomings of current innovation approaches, rooted in a tradition of network deployment and subscriber acquisition.

For our other stream of research, we drew on the extensive body of existing industry and academic research into young people’s use of digital communications technology and their adoption of social software. We looked at what they are doing with technology and how adoption has occurred (including exploring nine case study examples).

Research Format

  • 120+ page manuscript document

This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this Strategy Report, which can be downloaded in full in PDF format by members of the Executive Briefing Service here. To order or find out more please email contact@telco2.net or call +44 (0) 207 247 5003.

Full Article: Nokia’s Strange Services Strategy – Lessons from Apple iPhone and RIM

The profuse proliferation of poorly integrated projects suggests either – if we’re being charitable – a deliberate policy of experimenting with many different ideas, or else – if we’re not – the absence of a coherent strategy.

Clearly Nokia is aware of the secular tendency in all information technology fields for value to migrate towards software, and specifically towards applications. Equally clearly, they have the money, scale, and competence to deliver major projects in this field. However, so far they have failed to make services into a meaningful line of business, and even the well-developed software ecosystem hasn’t seen a major hit like the iPhone and its associated app store.

Nokia Services: project proliferator

So far, the Services division in its various incarnations has brought forward Club Nokia, the Nokia Game, Forum Nokia, Symbian Developer Network, WidSets, Nokia Download!, MOSH, Nokia Comes With Music, Nokia Music Store, N-Gage, Ovi, Mail on Ovi, Contacts on Ovi, Ovi Store… it’s a lot of brands for one company, and that’s not even an exhaustive list. They’ve further acquired Intellisync, Sega.com, Loudeye, Twango, Enpocket, Oz Communications, Gate5, Starfish Software, Navteq and Avvenu since 2005 – an average of just over two service acquisitions a year. Further, despite the decision to integrate all (or most) services into Ovi, there are still five different functional silos inside the Services division.

The great bulk of applications or services available or proposed for mobile devices fall into two categories – social or media. Under social we’re grouping anything that is primarily about communications; under media we’re grouping video, music, games, and content in general. Obviously there is a significant overlap. This is driven by fundamentals; no-one is likely to want to do computationally intensive graphics editing, CAD, or heavy data analysis on a mobile, run a database server on one, or play high-grade full-3D games. Batteries, CPU limitations, and most of all, form factor limitations see to that. And on the other side, communication is a fundamental human need, so there is demand pull as well as constraint push. As we pointed out back in the autumn of 2007, communication, not content, is king.

Aims

In trying to get user adoption of its applications and services, Nokia is pursuing two aims – one is to create products that will help to ship more Nokia devices, and to ship higher-value N- or E-series devices rather than featurephones; the other is a longer-range hope to create a new business in its own right, which will probably be monetised through subscriptions, advertising, or transactions. This latter aim is much further off than the first, and is affected by the operators’ suspicion of any activity that seems to rival their treasured billing relationship. For example, although quick signup and data import are crucial to deploying a social application, Nokia probably wouldn’t get away with automatically enrolling all users in its services – the operators likely wouldn’t wear it.

Historical lessons

There have been several historical examples of similar business models, in which sales of devices are driven by a social network. However, the common factor is that success has always come from facilitating existing social networks rather than trying to create new ones. This is also true of the networks themselves; if new ones emerge, it’s usually as an epiphenomenon of generally reduced friction. Some examples:

  1. Telephony itself: nobody subscribed in order to join the telephone community, they subscribed to talk to the people they wanted to talk to anyway.
  2. GSM: the unique selling point was that the people who might want to talk to you could reach you anywhere, and PSTN interworking was crucial.
  3. RIM’s BlackBerry: early BlackBerries weren’t that impressive as such, but they provided access to the social value of your e-mail workflow and groupware anywhere. Remember, the only really valuable IM user base is the 17 million Lotus Notes Sametime users.
  4. 3’s INQ: the Global Mobile Award-winning handset is really a hardware representation of the user’s virtual presence. Hutchison isn’t interested in trying to make people join Club Hutch or use 3Book; they’re interested in helping their users manage their social networks and charging for the privilege. 

So it’s unlikely that trying to recruit users into Nokia-specific communities is at all sensible. Nobody likes vendor lock-in. And, if your product is really good, why restrict it to Nokia hardware users? As far as Web applications go, of course, there’s absolutely no reason why other devices shouldn’t be allowed to play. But this fundamental issue – that no-one organises their lives around their friends’ or the friends’ mobile operators’ choices of device vendor – would tend to explain why there have been so many service launches, mergers, and shutdowns. Nokia is trying to find the answer by trial and error, but it’s looking in the wrong place. There is some evidence, however, that they are looking more at facilitating other social applications, but this is subject to negotiation with the operators.

The operator relationship – root of the problem

One of the reasons is the conflict with operators mentioned above. Nokia’s efforts to build a Nokia-only community mirror the telco fascination with the billing relationship. Telcos tend to imagine that being a customer of Telco X is enough to constitute a substantial social and emotional link; Nokia is apparently working on the assumption that being a customer of Nokia is sufficient to make you more like other Nokia customers than everyone else. So both parties are trying to “own the customer”, when in fact this is probably pointless, and they are succeeding in spoiling each other’s plans. Although telcos like to imagine they have a unique relationship with their subscribers, they in fact know surprisingly little about them, and carriers tend to be very unpopular with the public. Who wants to have a relationship with the Big Expensive Phone Company anyway? Both parties need to rethink their approach to sociability.

What would a Telco 2.0 take on this look like?

First of all, the operator needs to realise that the subscribers don’t love them for themselves; it was the connectivity they were after all along! Tears! Secondly, Nokia needs to drop the fantasy of recruiting users into a vendor-specific Nokiasphere. It won’t work. Instead, both ought to be looking at how they can contribute to other people’s processes. If Nokia can come up with a better service offering, very well – let them use the telco API suite. In fact, perhaps the model should be flipped, and instead of telcos marketing Nokia devices as a bundled add-in with their service, Nokia ought to be marketing its devices (and services) with connectivity and much else bundled into the upfront price, with the telcos getting their share through richer wholesale mechanisms and platform services.

Consider the iPhone. Setting aside the industrial design and GUI for a moment – I dare you! You can do it! – its key features were integration with iTunes (i.e. with content); a developer platform that offered good APIs and documentation, but also a route to market for the developers and an easy way for users to discover, buy, and install their products; and an internal business model that sweetened the deal for the operators by offering them exclusivity and a share of the revenue. Everyone still loves the iPhone, everyone still hates AT&T, but would AT&T ever consider not renewing the contract with Apple? They’re stealing our customers’ hearts! Of course not.

Apple succeeded in improving the following processes for two out of three key customer groups:

  1. Users: Acquiring and managing music and video across multiple devices.
  2. Users: Discovering, installing, and sharing mobile applications
  3. Developers: Deploying and selling mobile applications

And as two-sidedness would suggest, they offered the remaining group a share of revenue. The rest is history: the iPhone has become the main driver of growth and profitability at Apple, more than one billion application downloads have been served from the App Store, and so on.

Conclusions: turn to small business?

So far, however, Nokia’s approach has mirrored the worst aspects of telcos’ attitude to their subscribers; a combination of possessiveness and indifference. They want to own the customer; they don’t know how or why. It might be more defensible if there was any sign that Nokia is serious about making money from services; that, of course, is poison to the operators and is therefore permanently delayed. Similarly, Nokia would like to have the sort of brand loyalty Apple enjoys and to build the sort of integrated user experience Apple specialises in, but it is paranoid about the operators. The result is essentially an Apple strategy, but not quite.

What else could they try? Consider Nokia Life Tools, the package of information services for farmers and small businesses they are building for the developing world. One thing that Nokia’s services strategy has so far lacked is engagement with enterprises; it’s all been about swapping photos and music and status updates. Although Nokia makes great business-class gadgets, and they provide a lot of useful enablers (multiple e-mail boxes, support for different push e-mail systems, VPN clients, screen output, printer support), there’s a hole shaped like work in their services offering. RIM has been much better here, working together with IBM and Salesforce.com to expand the range of enterprise applications they can mobilise.

Life Tools, however, shows a possible opportunity – it’s all right catering to companies who already have complex workflow systems, but who’s serving the ones that don’t have the scale to invest there? None of the vendors are addressing this, and neither are the telcos. It fits a whole succession of Telco 2.0 principles – focus on enterprises, look for areas where there’s a big difference between the value of bits and their quantity, and work hard at improving wholesale.

It’s almost certainly a better idea than trying to be Apple, but not quite.

Next Steps for Nokia and telcos

  • It is unlikely that “Nokia users” are a valid community

  • Really successful social hardware facilitates existing social networks

  • Nokia’s problems are significantly explained by their difficult relationship with operators

  • Nokia’s emerging-market Life Tools package might be more of an example than they think

  • A Telco 2.0 approach would emphasise small businesses, offer bundled connectivity, and deal with the operators through better wholesale

Beyond Bundling: Growth Strategies for Fixed and Mobile Broadband – “Winning the $250Bn delivery game”

Summary: This report examines future retail and wholesale business models for fixed and mobile operators offering high speed packet data services. This includes – but is not limited to – providing Internet access.

The report charts the next 10 years for fixed and mobile telecoms network operators as the viability of the current broadband business model is threatened by intense competition and falling prices in maturing markets, changing usage patterns, and the adoption of new technologies. The report identifies and profiles a new $250Bn content delivery market opportunity. (April 2008)



Future Broadband Business Models Series


  • Report Summary
  • Key Points
  • Who is this report for?
  • Business Context – The Changing Face of Broadband Distribution
  • Key Questions Answered
  • Case Studies, Companies, Services, Technologies & Applications Covered
  • Forecasts Included
  • Summary of Contents
  • Pricing and User Licenses
  • Customer Workshops
  • Team Biographies
  • Fit with other Broadband Reports
  • Other Reports

This study is supported by BT, GSM Association, the Broadband Stakeholder Group, the TeleManagement Forum, and Telecom TV.

Report Abstract

Intense competition and falling prices in maturing markets, coupled with the challenges presented by changing usage patterns and the adoption of new technologies, are all starting to threaten the viability of the current broadband business model.

This report reviews the pain points in current operational scenarios, case studies of successful strategies and emerging new entrants, and profiles the key threats and future opportunities to the industry. It outlines a number of key steps to develop business models that can be viable in the evolving marketplace, and touches on the future of core Voice & Messaging revenues, Video Distribution, P2P technologies, the Next Generation Network, E-commerce Value Added Services, and more. The report identifies and profiles a new $250Bn market opportunity.

Key Points

  • Pain points in current operational scenarios.
  • Case studies of successful strategies and emerging new entrants.
  • Threats and future opportunities to the industry.
  • Steps to develop business models that can be viable in the evolving marketplace.
  • The future of Voice, Video Distribution, P2P technologies, the Next Generation Network, E-commerce Value Added Services, and more.
  • New propositions, channels and partners for telco operators, cablecos, ISPs, NEPs, Device Manufacturers, Investors, and Public Policy bodies.
  • Scopes an attractive new $250Bn market opportunity.
  • Short, medium and long term actions required.

 

Who is this report for?

The report is for senior (CxO) decision-makers and business strategists setting business strategy, and for product managers, technologists, and strategic sales, business development and marketing professionals acting in the broadband arena in the following types of organisations:

  • Fixed & Mobile Broadband Operators – to set and drive strategy.
  • Vendors & Business Partners – to understand customer need and develop winning customer propositions.
  • Regulators & Industry Standards bodies – to inform policy making and strategy.

 

Strategists and CxOs in Media and Investment Companies may also find this report useful to understand the future landscape of the broadband industry, and to help to spot likely winning and losing investment and operational strategies in the market.

Business Context – The Changing Face of Broadband Distribution

The chart below shows how the telecoms industry today offers two dominant types of distribution systems for content and services, with a third set now emerging.

  1. Vertically integrated networks, like the Public Switched Telephone Network, its mobile equivalent, Next Generation Network replacements for these, and SMS messaging (“PSTN & SMSC”). Here, a dedicated network integrates connectivity, service and payment.

  2. Internet access, where connectivity, services and payment are all separate (“Broadband Internet”).

  3. In the future there will be a wide range of new business and payment models which assemble devices, applications, content and connectivity in new technical and economic ways (“Other”). Wholesale markets will evolve greatly to support this. This original hypothesis, affirmed by our proprietary market research, is explored in depth in this report.

This study looks at the impact of this significant change on the business models of those in the broadband value chain.

Key Questions Answered

This report uniquely answers 3 key questions:

  1. “What are the business models for fixed and mobile broadband voice, video and data access over the next 5-10 years” – how will these revenue streams evolve for telcos and cablecos?

  2. “What are the future wholesale and retail business models” – managing costs and revenues by learning from outside the telecoms industry.

  3. “How to rejuvenate broadband growth strategies” – what are the new propositions, channels and partners for telco operators, cablecos, ISPs, NEPs, Device Manufacturers, Investors, and Public Policy bodies.

In addition, to help operators and vendors maximise future opportunities from broadband-based services the following questions are also addressed:

  • What are the key pain points and problems in the current Broadband Service Provider (BSP) business model?

  • What are the limitations of reliance on voice and video cross-subsidy?

  • What are new potential upstream and downstream revenue models?

  • Who puts money into BSPs today, and how does it get re-allocated?

  • Who makes the margins today and why?

  • What are the drivers of economic activity inside and outside the network?

  • What are the competing fixed and mobile distribution systems and their relationship to services?

  • What lessons about wholesale/network business models can we learn from outside of telecoms?

  • How long are vertically-integrated service models likely to survive? What are the opportunities for new entrants?

  • What are the most successful players doing to combine multiple distribution systems to support the customer experience?

  • What are the lessons from dead or dying distribution systems (ATM, ISDN, MMS)?

  • How much value will flow through new broadband distribution channels?

  • How to improve core Voice and Video services?

  • Which network ownership models will be most effective?

  • What are the economics of QoS, and how to create better alternatives?

  • What are the trends in traffic shaping and throttling?

  • What is the potential for new wholesale intermediaries to grow beyond providing backbone and interconnect peering for access networks?

  • What are the practical issues in taking new business models to market in a highly regulated and politicised industry?

Case Studies, Companies and Services, and Technologies & Applications Covered

Case Studies: Akamai, BT 21CN, BT Vision, e-TopUps, Iliad, Janet(UK), Joost, Kontiki, Limelight, LINX, Sky Anytime.

Companies and Services Covered: 3 UK, Akamai, Amazon, Amazon Kindle, Apple, Apple iPhone, Apple iTV, ASUS, AT&T, AT&T/Bell Labs, BBC, Blackberry, Blockbuster, Blyk, BSkyB, Carphone Warehouse, Cinema Paradiso, Cisco, Dell, Deutsche Telekom, Direct Connect, Disney, DoCoMo, DoCoMo iMode, Easyjet, Ericsson, France Telecom, Freebox, Gillette, Google, Google Phone, Hutchison 3, Intel, Liberty Global, Link, Livebox, Lovefilm, Lucasfilms, Maxjet, Microsoft, Motorola, Motorola Tetra, Moviebank, MSN, My Moviestream, Myspace, Netflix, News Corp, Nextel, Nokia Ovi, Pixar, Qualcomm, Ryanair, Scientific Atlanta, Setanta, Sky+, Skype, Slingbox, Sprint PCS, Swedish Metro, Swisscom Hotspots, Tandberg, Tesco Mobile, The Economist, Tracfone, TV Perso, Verizon FIOS, Verizon Wireless, Virgin, Wall Street Journal, Walmart, Yahoo!, YouTube.

Technologies & Applications Covered: Broadband, Broadband Video, Broadband Voice, Cable, CDMA, CDNs, Deep Packet Inspection, DSL, Edge-Caching, Ethernet/ATM unbundling, Fax, Femtocell, FON, GSM, HDD, IMS, Internet Video, IP, IP Multicast, IP Stream, IPTV, ISDN, Linksys, Linux, MMS, Mobile TV, Muni Nets, MVNO, Mxit, Netgear, OpenID, OPLANs, P2P, PAN, Peak Shaving, PSMN, PSMs, PSTN, Telex, Traffic Shaping, VoD, VOIP, VPN, Wifi, WiMax, WLAN.

Forecasts Included

For 2006-2017: Wholesale and Retail BSP revenues by Fixed and Mobile Access, TV, Data, Voice & Messaging across 12 Western European and North American markets.

Summary of Contents

Introduction

Executive summary

Background to this Telco 2.0 research project

Part 1: The business model

  • A framework for business model innovation
  • Business model change in the airline industry
  • Applying the framework to telecoms business models


Part 2: Broadband service provider industry review

  • ISP industry
  • Entertainment market
  • Voice and messaging
  • Business model issues


Part 3: Wholesale and network business models beyond telecoms

  • Container shipping
  • Automatic teller machines in the UK
  • Power and energy distribution


Part 4: Competing distribution systems – theory and practice

  • Broadband as a distribution system
  • Drivers of vertical integration

Part 5: Emerging and declining distribution systems

  • CDNs: A freight service for the digital world
  • Vertical distribution systems
  • Hybrid distribution system case studies
  • Lessons from other delivery systems
  • Conclusions


Part 6: Survey results

  • Broadband video – is internet video a threat or an opportunity?
  • Broadband voice – which companies will prevail?
  • The network – what does the internet carry today?
  • E-Commerce value-added services
  • The wholesale market
  • The retail market
  • Case studies
  • Winners and losers

Part 7: Future broadband revenue models and scenarios

  • BSP market sizing
  • Wholesale market opportunity


Part 8: Conclusions

  • Beyond bundling: the quest for a new business model
  • Respondent views
  • Recommendations


Appendices

  • Research methodology and respondent profile
  • Glossary

This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this strategy report, which members of the Executive Briefing Service can download in full in PDF format here. To order or find out more, please email contact@telco2.net or call +44 (0) 207 247 5003.

 

Telcos’ Role in the Advertising Value Chain

Summary: A report identifying how to build a valuable new business model and customer base.


Background

Fixed and mobile voice and data revenues are in free-fall in most European and North American markets. Since the 3G auctions at the turn of the century, content has been considered the key future growth area for operators in the consumer segment. However, excluding SMS, the only material content revenues for telcos to date have come from movement into adjacent markets – particularly acquisitions in the cable and media sectors.

Through advertising, operators have a potential opportunity to:

  • Reduce the price of content and services to end-users;
  • Increase the volume of available content and services; and
  • Provide value to the advertising community.

 

To achieve this they must contribute to the development of a differentiated new advertising channel in which users are provided with a portfolio of content and services supported by contextually-relevant advertising.

Operators have an opportunity both to provide their own advertising-funded services as well as become an enabler to the advertising community by helping advertisers interact more effectively with their targets (who may or may not be Telco customers). In this report, we examine both of these opportunities in both the fixed and mobile markets. We explore in detail what advertisers and users really want and the opportunities available to operators to carve out a valuable role in meeting those needs.

Key Questions Answered

This report seeks to help operators and vendors maximise future advertising-funded service opportunities by answering the following questions:

  • What is the rationale for advertising-funded services?
  • When will the market take off and how big will it get?
  • How can operators prevent cannibalising existing revenue streams?
  • What are the needs of the advertising community?
  • How should operators work with Internet enablers (e.g. Google), content providers (e.g. Sony) and aggregators (e.g. Motricity)?
  • What implementation issues need to be resolved?
  • What are the options available to operators to add value and what is the best option available?
  • What are the key factors for success?
  • What value is there in opening-up Telco assets (open APIs etc.)?
  • What can be learned from market leaders in advertising-funded services?
  • What are the attitudes of operators, internet enablers, content providers and aggregators to the market and how to be successful in it?
  • What needs to be done to develop the market and generate near-term benefits?

Contents

  • Executive Summary
  • Background and Key Issues to Date
  • Growing Pressure on the Existing Operator Business Model
  • Content Delivery: Not a Panacea
  • Advertising-Funded Services: Tried and Tested in Adjacent Markets
  • Telcos’ Role in Advertising: Market Scope
  • Activity from Operators to Date
  • Advertising-Funded Services – Threat or Opportunity?
  • The risk of cannibalising existing revenues
  • Internet players – partner or competitor?
  • Show me the Money! – How big could the market be?
  • Understanding the Advertising-Funded Value Chain
  • Value Chain players in Internet Advertising
  • What do Advertisers *really* want?
  • Options for the Operator to add Value
  • Key Skills and Assets Required
  • Issues to resolve
  • Operator role: The Devil in the Detail
  • Who to Partner with and How
  • Meeting Advertiser and Customer Needs:
  • Return on Investment
  • Customer attention & interaction
  • Performance measurement
  • Ubiquity
  • Legal and Regulatory Issues
  • Learning from Web 2.0
  • Content and Communications: Two sides of the same coin
  • Social Networking Communities and Advertising
  • Case studies:
  • Learning from the Master: Google and the Art of Ad-Funding
  • Accelerating the need for Advertising Revenues: The X-Series from 3
  • The Whole Hog: Blyk’s Advertising-Funded MVNO
  • Delivering an Open Platform: Amazon
  • Views from the Industry – new primary research by STL Partners
  • Action steps & Conclusions

This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this strategy report, which members of the Executive Briefing Service can download in full in PDF format here. To order or find out more, please email contact@telco2.net or call +44 (0) 207 247 5003.