How video analytics can kickstart the edge opportunity for telcos

Processing video is a key use for edge computing

In our analysis and sizing of the edge market, STL Partners found that video processing will be a strong driver of edge capacity and revenues. A huge quantity of visual data is captured every day through many different processes, and the majority of the information extracted from it is straightforward (such as “how busy is this road?”). It is therefore highly inefficient to send the whole data stream to the core of the network; it is much better to process it near the point of origin and save the cost, energy and time of shipping it back and forth. Hence video analytics is a key use case for edge computing.
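
As a concrete illustration of this pattern, here is a minimal sketch (ours, not from the report) that processes frames on an edge device and forwards only a tiny summary upstream. It assumes OpenCV is available; the camera URL and the uplink function are hypothetical placeholders.

```python
# A minimal sketch of edge video analytics, assuming OpenCV
# (pip install opencv-python). The heavy per-frame work happens locally;
# only a few bytes of results ever leave the site.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()  # crude stand-in for a real model

def count_moving_objects(frame) -> int:
    """Count large moving blobs via background subtraction."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) > 500)

def send_to_cloud(payload: dict) -> None:
    """Hypothetical uplink: ships a compact summary, not the raw video stream."""
    print(payload)  # in practice, an MQTT/HTTP call to a central collector

capture = cv2.VideoCapture("rtsp://camera.example/stream")  # hypothetical camera
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    send_to_cloud({"road": "A1", "vehicles": count_moving_objects(frame)})
capture.release()
```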


The edge market is evolving rapidly

Edge computing is an exciting opportunity. The market is evolving rapidly and, although still fairly nascent today, is expected to scale significantly over the next 2-3 years. STL Partners has estimated that the total edge computing addressable market was worth $10bn in 2020 and will grow to $534bn in 2030, an implied compound annual growth rate of roughly 49%. This growth is driven by the increasing number of connected devices and the rising adoption of IoT, Industry 4.0 and digital transformation solutions. While cloud adoption continues to grow in parallel, there are cases where the increasingly stringent connectivity demands of new and advanced use cases cannot be met by cloud or central data centres, or where sending data to the cloud is too costly. Edge answers this problem, offering an alternative with lower latency, reduced backhaul and greater reliability. For the many enterprises that are adopting a hybrid and multi-cloud strategy – strategically distributing their data across different clouds and locations – running workloads at the edge is a natural next step.

Developments in the technologies enabling edge computing are also contributing to market growth. For example, the increased agility of virtualised and 5G networks enables the migration of workloads from the cloud to the edge. Compute is also developing, becoming more lightweight, efficient, and powerful. These more capable devices can run workloads and perform operations that were not previously possible at the edge.

Defining different types of edge

Edge computing brings processing capabilities closer to the end user or end device. The compute infrastructure is therefore more distributed, and typically sits at smaller sites. This differs from traditional on-premises compute (which is monolithic or based on proprietary hardware) because it utilises the flexibility and openness of cloud-native infrastructure, e.g. highly scalable Kubernetes clusters.
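
As a minimal sketch of what this looks like in practice (our illustration, assuming the official Kubernetes Python client and an invented "site" node label), a containerised workload can be pinned to a specific edge location at deploy time:

```python
# Sketch: deploy a workload to a specific edge site by node label.
# Assumes `pip install kubernetes` and kubeconfig access to the cluster;
# the "site" label and image name are illustrative, not from the report.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="edge-workload"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "edge-workload"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-workload"}),
            spec=client.V1PodSpec(
                node_selector={"site": "central-office-1"},  # pin to one edge site
                containers=[
                    client.V1Container(name="app", image="registry.example/app:latest"),
                ],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```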

The location of the edge may be defined as anywhere between an end device and a point on the periphery of the core network. We have outlined the key types of edge computing and where they are located in the figure below.

The types of edge computing

It should be noted that although moving compute to the edge can be considered an alternative to cloud, edge computing also complements cloud computing and drives adoption, since data that is processed or filtered at the edge can ultimately be sent to the cloud for longer term storage or collation and analysis.

Telcos must identify which area of the edge market to focus on

For operators looking to move beyond connectivity and offer vertical solutions, edge is an opportunity to differentiate by incorporating their edge capabilities into those solutions. If successful, this could generate significant revenue, since the applications and platforms layer is where most of the revenue from edge resides. In fact, by 2030, 70% of the addressable revenue for edge will come from the application layer, with only 9% from pure connectivity. The remaining 21% represents the value of hardware, edge infrastructure and platforms, integration, and managed services.

Realistically, operators will not have the resources and management bandwidth to develop solutions for several use cases and verticals. They must therefore focus on key customers in one or two segments, understand their particular business needs, and deliver that value in concert with specific partners in their ecosystem. As it relates to MEC, most operators are selecting key partners for each of the services they offer – broadcast video, immersive AR/VR experiences, crowd analytics, gaming, etc.

When selecting the best area to focus on, telcos should weigh up the attractiveness of the market (including the size of the opportunity, how mature the opportunity is, and the need for edge) against their ability to compete.

Value of edge use cases (by size of total addressable market by 2030)

Source: STL Partners – Edge computing market sizing forecast

We assessed the market attractiveness of the top use cases that are expected to drive adoption of edge over the coming years, some of which are shown in the figure above. This revealed that the use cases representing the largest opportunities in 2030 include edge CDN, cloud gaming, connected car driver assistance and video analytics. Of these, video analytics is the most mature opportunity and therefore represents a highly attractive proposition for CSPs.

Table of Contents

  • Executive Summary
  • Introduction
    • Processing video is a key use for edge computing
    • The edge market is evolving rapidly
    • Defining different types of edge
    • Telcos must identify which area of the edge market to focus on
  • Video analytics is a large and growing market
    • The market for edge-enabled video analytics will be worth $75bn by 2030
  • Edge computing changes the game and plays to operator strengths
    • What is the role of 5G?
  • Security is the largest growth area and operators have skills and assets in this
    • Video analytics for security will increasingly rely on the network edge
  • There is empirical evidence from early movers that telcos can be successful in this space
    • What are telcos doing today?
    • Telcos can front end-to-end video analytics solutions
    • It is important to maintain openness
    • Conquering the video analytics opportunity will open doors for telcos
  • Conclusion
  • Index


 

Building telco edge infrastructure: MEC, Private LTE and VRAN

Reality check: edge computing is not yet mature, and much is still to be decided

Edge computing is still a maturing domain. STL Partners has written extensively on the topic of edge computing over the last 4 years. Within that timeframe, we have seen significant change in terminology, attitudes and approaches from telecoms and adjacent industries to the topic area.  Plans for building telco edge infrastructure have also evolved.

Within the past twelve months, we’ve seen high-profile partnerships between hyperscale cloud providers (Amazon Web Services, Microsoft and Google) and telecoms operators that are likely to catalyse the industry and accelerate the route to market. We’ve also seen early movers within the industry (such as SK Telecom) developing MEC platforms to enable access to their edge infrastructure.

In the course of this report, we will highlight which domains will drive early adoption for edge, and the potential roll out we could see over the next 5 years if operators move to capitalise on the opportunity. However, to start, it is important to evaluate the situation today.

Commercial deployments of edge computing are rare, and most operators are still in the exploration phase. Many have not committed, and will not commit, to rolling out edge infrastructure until they have seen evidence from early movers that it is a genuine opportunity for the industry. For many more, the idea of additional capex investment in edge infrastructure, on top of their 5G rollout plans, is a difficult commitment to make.

Where is “the edge”?

There is no one clear definition of edge computing. Depending on the world you are coming from (Telco? Application developer? Data centre operator? Cloud provider? etc.), you are likely to define it differently. In practice, we know that even within these organisations there are differences between technical and commercial teams around the concept and terminology used to describe “the edge”.

For the purposes of this paper, we will be discussing edge computing primarily from the perspective of a telecoms operator. As such, we’ll be focusing on edge infrastructure that will be rolled out within their network infrastructure or that they will play a role in connecting. This may equate to adding additional servers into an existing technical space (such as a Central Office), or it may mean investing in new micro data centres. The servers may be bought, installed and managed by the telco itself, or this could be done by a third party, but in all cases the real estate (i.e. the physical location as well as power and cooling) is owned either by the telecoms operator or by the enterprise that is buying an edge-enabled solution.


Operators have a range of options for where and how they might develop edge computing sites. The graphic below maps some of the potential physical locations for an edge site. In this report, STL Partners forecasts edge infrastructure deployments between 2020 and 2024, by type of operator, use-case domain, edge location and type of computing.

There is a spectrum of edge infrastructure in which telcos may invest

Source: STL Partners

This paper primarily draws on discussions with operators and others within the edge ecosystem conducted between February and March 2020. We interviewed a range of operators, and a range of job roles within them, to gain a snapshot of the existing attitudes and ambitions within the industry to shape our understanding of how telcos are likely to build out edge infrastructure.

Table of Contents

  • Executive Summary
  • Preface
  • Reality check: edge computing is not yet mature, and much is still to be decided
    • Reality #1: Organisationally, operators are still divided
    • Reality #2: The edge ecosystem is evolving fast
    • Reality #3: Operators are trying to predict, respond to and figure out what the “new normal” will be post COVID-19
  • Edge computing: key terms and definitions
    • Where is “the edge”?
    • What applications & use cases will run at edge sites?
    • What is inside a telco edge site?
  • How edge will play out: 5-year evolution
    • Modelling exercise: converting hype into numbers
    • Our findings: edge deployments won’t be very “edgy” in 2024
    • Short-term adoption of vRAN is the driving factor
    • New revenues from MEC remain a longer-term opportunity
    • Short-term adoption is focused on efficient operations, but revenue opportunity has not been dismissed
  • Addressing the edge opportunity: operators can be more than infrastructure providers
  • Conclusions: practical recommendations for operators


Indoor wireless: A new frontier for IoT and 5G

Introduction to Indoor Wireless

A very large part of the usage of mobile devices – and mobile and other wireless networks – is indoors. Estimates vary but perhaps 70-80% of all wireless data is used while fixed or “nomadic”, inside a building. However, the availability and quality of indoor wireless connections (of all types) varies hugely. This impacts users, network operators, businesses and, ultimately, governments and society.

Whether the use-case is watching a YouTube video on a tablet from a sofa, booking an Uber from a phone in a company’s reception, or controlling a moving robot in a factory, the telecoms industry needs to give much more thought to the user-requirements, technologies and obstacles involved. This is becoming ever more critical as sensitive IoT applications emerge, which are dependent on good connectivity – and which don’t have the flexibility of humans. A sensor or piece of machinery cannot move and stand by a window for a better signal – and may well be in parts of a building that are inaccessible to both humans and many radio transmissions.

While mobile operators and other wireless service providers have important roles to play here, they cannot do everything, everywhere. They do not have the resources, and may lack site access. Planning, deploying and maintaining indoor coverage can be costly.

Indeed, the growing importance and complexity is such that a lot of indoor wireless infrastructure is owned by the building or user themselves – which then brings in further considerations for policymakers about spectrum, competition and more. There is a huge upsurge of interest in both improved Wi-Fi, and deployments of private cellular networks indoors, as some organisations recognise connectivity as so strategically-important they wish to control it directly, rather than relying on service providers. Various new classes of SP are emerging too, focused on particular verticals or use-cases.

In the home, wireless networks are also becoming a battleground for “ecosystem leverage”. Fixed and cable networks want to improve their existing Wi-Fi footprint to give “whole home” coverage worthy of gigabit fibre or cable connections. Cellular providers are hoping to swing some residential customers to mobile-only subscriptions. And technology firms like Google see home Wi-Fi as a pivotal element to anchor other smart-home services.

Large enterprise and “campus” sites like hospitals, chemical plants, airports, hotels and shopping malls each have complex on-site wireless characteristics and requirements. No two are alike – but all are increasingly dependent on wireless connections for employees, visitors and machines. Again, traditional “outdoors” cellular service providers are not always best-placed to deliver this – but often, neither is anyone else. New skills and deployment models are needed, ideally backed with more cost-effective (and future-proofed) technology and tools.

In essence, there is a conflict between “public network service” and “private property” when it comes to wireless connectivity. For the fixed network, there is a well-defined “demarcation point” where a cable enters the building, and ownership and responsibilities switch from telco to building owner or end-user. For wireless, that demarcation is much harder to institutionalise, as signals propagate through walls and windows, often in unpredictable and variable fashion. Some large buildings even have their own local cellular base stations, and dedicated systems to “pipe the signal through the building” (distributed antenna systems, DAS).

Where is indoor coverage required?

There are numerous sub-divisions of “indoors”, each of which brings its own challenges, opportunities and market dynamics:

• Residential properties: houses & apartment blocks
• Enterprise “carpeted offices”, either owned/occupied, or multi-tenant
• Public buildings, where visitors are more numerous than staff (e.g. shopping malls, sports stadia, schools), and which may also have companies as tenants or concessions.
• Inside vehicles (trains, buses, boats, etc.) and across transport networks like metro systems or inside tunnels
• Industrial sites such as factories or oil refineries, which may blend “indoors” with “onsite”

In addition to these broad categories are assorted other niches, plus overlaps between the sectors. There are also other dimensions around scale of building, single-occupant vs. shared tenancy, whether the majority of “users” are humans or IoT devices, and so on.

In a nutshell: indoor wireless is complex, heterogeneous, multi-stakeholder and often expensive to deal with. It is no wonder that most mobile operators – and most regulators – focus on outdoor, wide-area networks both for investment, and for license rules on coverage. It is unreasonable to force a telco to provide coverage that reaches a subterranean, concrete-and-steel bank vault, when their engineers wouldn’t even be allowed access to it.

How much of a problem is indoor coverage?

Anecdotally, many locations have problems with indoor coverage – cellular networks are patchy, Wi-Fi can be cumbersome to access and slow, and GPS satellite location signals don’t work without line-of-sight to several satellites. We have all complained about poor connectivity in our homes or offices, or about needing to stand next to a window. With growing dependency on mobile devices, plus the advent of IoT devices everywhere for increasingly important applications, good wireless connectivity is becoming more essential.

Yet hard data about indoor wireless coverage is also very patchy. UK regulator Ofcom is one of the few that report on the availability and usability of cellular signals, and few regulators (Japan’s is another) enforce it as part of spectrum licences. Fairly clearly, it is hard to measure: operators cannot do systematic “drive tests” indoors, while on-device measurements usually cannot determine whether the device is inside or outside without being invasive of the user’s privacy. Most operators and regulators estimate coverage, based on some samples plus knowledge of outdoor signal strength and typical building construction practices. The accuracy of these estimates (and how current their assumptions are) is highly questionable.

Indoor coverage data is hard to find

Contents:

  • Executive Summary
  • Likely outcomes
  • What telcos need to do
  • Introduction to Indoor Wireless
  • Overview
  • Where is indoor coverage required?
  • How much of a problem is indoor coverage?
  • The key science lesson of indoor coverage
  • The economics of indoor wireless
  • Not just cellular coverage indoors
  • Yet more complications are on the horizon…
  • The role of regulators and policymakers
  • Systems and stakeholders for indoor wireless
  • Technical approaches to indoor wireless
  • Stakeholders for indoor wireless
  • Home networking: is Mesh Wi-Fi the answer?
  • Is outside-in cellular good enough for the home on its own?
  • Home Wi-Fi has complexities and challenges
  • Wi-Fi innovations will perpetuate its dominance
  • Enterprise/public buildings and the rise of private cellular and neutral host models
  • Who pays?
  • Single-operator vs. multi-operator: enabling “neutral hosts”
  • Industrial sites and IoT
  • Conclusions
  • Can technology solve MNOs’ “indoor problem”?
  • Recommendations

Figures:

  • Indoor coverage data is hard to find
  • Insulation impacts indoor penetration significantly
  • 3.5GHz 5G might give acceptable indoor coverage
  • Indoor wireless costs and revenues
  • In-Building Wireless face a dynamic backdrop
  • Key indoor wireless architectures
  • Different building types, different stakeholders
  • Whole-home meshes allow Wi-Fi to reach all corners of the building
  • Commercial premises now find good wireless essential
  • Neutral Hosts can offer multi-network coverage to smaller sites than DAS
  • Every industrial sector has unique requirements for wireless

How to build an open source telco – and why?


Introduction: Why an open source telecom?

Commercial pressures and technological opportunities

For telcos in many markets, declining revenues are a harsh reality. Price competition is placing telcos under pressure to reduce capital spending and operating costs.

At the same time, from a technological point of view, the rise of cloud-based solutions has raised the possibility of re-engineering telco operations to be run with virtualised and open sourced software on low cost, general purpose hardware.

Indeed, rather than pursuing the traditional technological model, i.e. licensing proprietary solutions from the mainstream telecoms vendors (e.g. Ericsson, Huawei, Amdocs, etc.), telcos can increasingly:

  1. Progressively outsource the entire technological infrastructure to a vendor;
  2. Acquire software with programmability and openness features: application programming interfaces (APIs) can make it easier to program telecommunications infrastructure.

The second option promises to enable telcos to achieve their long-standing goals of decreasing the time-to-market of new solutions, while further reducing their dependence on vendors.

Greater adoption of general IT-based tools and solutions also:

  • Allows flexibility in using the existing infrastructure
  • Optimises and reuses the existing resources
  • Enables integration between operations and the network
  • And offers the possibility to make greater use of the data that telcos have traditionally collected for the purpose of providing communications services.


In an increasingly squeezed commercial context, the licensing fees applied by traditional vendors for telecommunication solutions start to seem unrealistic, and the lack of flexibility poses serious issues for operators looking to push towards a more modern infrastructure. Moreover, the potential availability of competitive open source solutions provides an alternative that challenges the traditional model of making large investments in proprietary software, and dependence on a small number of vendors.

Established telecommunications vendors and/or new aggressive ones may also propose new business models (e.g., share of investments, partnership and the like), which could be attractive for some telcos.

In any case, operators should explore and evaluate the possibility of moving forward with a new approach based on the extensive usage of open source software.

This report builds on STL Partners’ 2015 report, The Open Source Telco: Taking Control of Destiny, which looked at how widespread use of open source software is an important enabler of agility and innovation in many of the world’s leading internet and IT players. Yet while many telcos then said they craved agility, only a minority used open source to best effect.

In that 2015 report, we examined the barriers and drivers, and outlined six steps for telcos to safely embrace this key enabler of transformation and innovation:

  1. Increase usage of open source software: Overall, operators should look to increase their usage of open source software across their entire organisation due to its numerous strengths. It must, therefore, be consistently and fairly evaluated alongside proprietary alternatives. However, open source software also has disadvantages, dependencies, and hidden costs (such as internally-resourced maintenance and support), so it should not be considered an end in itself.
  2. Increase contributions to open source initiatives: Operators should also look to increase their level of contribution to open source initiatives so that they can both push key industry initiatives forward (e.g. OPNFV and NFV) and have more influence over the direction these take.
  3. Associate open source with wider transformation efforts: Successful open source adoption is both an enabler and symptom of operators’ broader transformation efforts, and should be recognised as such. It is more than simply a ‘technical fix’.
  4. Bring in new skills: To make effective use of open source software, operators need to acquire new software development skills and resources – likely from outside the telecoms industry.
  5. … but bring the whole organisation along too: Employees across numerous functional areas (not just IT) need to have experience with, or an understanding of, open source software – as well as senior management. This should ideally be managed by a dedicated team.
  6. New organisational processes: Specific changes also need to be made in certain functional areas, such as procurement, legal, marketing, compliance and risk management, so that their processes can effectively support increased open source software adoption.

This report goes beyond those recommendations to explore the changing models of IT delivery open to telcos and how they could go about adopting open source solutions. In particular, it outlines the different implementation phases required to build an open source telco, before considering two scenarios – the greenfield model and the brownfield model. The final section of the report draws conclusions and makes recommendations.

Why choose to build an open source telecom now?

Since STL Partners published its first report on open source software in telecoms in 2015, the case for embracing open source software has strengthened further. There are three broad trends that are creating a favourable market context for open source software.

Digitisation – the transition to providing products and services via digital channels and media. This may sometimes involve the delivery of the product, such as music, movies and books, in a digital form, rather than a physical form.

Virtualisation – executing software on virtualised platforms running on general-purpose hardware located in the cloud, rather than purpose-built hardware on premises. Virtualisation allows a better reuse of large servers by decoupling the relationship of one service to one server. Moreover, cloudification of these services means they can be made available to any connected device on a full-time basis.

Softwarisation – the redefinition of products and services through software. This is an extension of digitisation: for example, the digitisation of music has allowed the creation of new services and propositions (e.g. Spotify). The same goes for the movie industry (e.g. Netflix) and the transformation of the book industry (e.g. ebooks) and newspapers. This paradigm is based on:

  • The ability to digitise the information (transformation of the analogue into a digital signal).
  • Availability of large software platforms offering relevant processing, storage and communications capabilities.
  • The definition of open and reusable application programming interfaces (APIs) which allow processes formerly ‘trapped’ within proprietary systems to be managed or enhanced with other information and by other systems.

These three trends have started a revolution that is transforming other industries, such as travel agencies (e.g. Booking.com), hotels (e.g. Airbnb) and taxis (e.g. Uber). Softwarisation is also now impacting other traditional industries, such as manufacturing (e.g. Industry 4.0) and, inevitably, telecommunications.

Softwarisation in telecommunications amounts to the use of virtualisation, cloud computing, open APIs and programmable communication resources to transform the current network architecture. Software is playing a key role in enabling new services and functions, better customer experience, leaner and faster processes, faster introduction of innovation, and usually lower costs and prices. The softwarisation trend is very apparent in the widespread interest in two emerging technologies: network function virtualization (NFV) and software defined networking (SDN).

The likely impact of this technological transformation is huge: flexibility in service delivery, cost reduction, quicker time to market, higher personalisation of services and solutions, differentiation from competition and more. We have outlined some key telco NFV/SDN strategies in the report Telco NFV & SDN Deployment Strategies: Six Emerging Segments.

What is open source software?

A generally accepted open source definition is difficult to achieve because of different perspectives and some philosophical differences within the open source community.

One of the most high-profile definitions is that of the Open Source Initiative, which states the need to have access to the source code, the possibility to modify and redistribute it, and non-discriminatory clauses against persons, groups or ‘fields of endeavour’ (for instance, usage for commercial versus academic purposes) and others.

For the purpose of this report, STL defines open source software as follows:

▪ Open source software is a specific type of software for which the original source code is made freely available and may be redistributed and modified. This software is usually made available and maintained by specialised communities of developers that support new versions and ensure some form of backward compatibility.

Open source can help to enable softwarisation. For example, it was instrumental in moving the web server sector from proprietary solutions to a common software platform (the LAMP stack) based on the Linux operating system, the Apache HTTP Server, the MySQL database and the PHP programming language. All of these components are available as open source: anyone can freely acquire the source code, modify it and use it, and under many licences modifications and improvements must be contributed back to the development community.

One of the earliest and most high profile examples of open source software was the Linux operating system, a Unix-like operating system developed under the model of free and open source software development and distribution.

Open source for telecoms: Benefits and barriers

The benefits of using open source for telecoms

As discussed in our earlier report, The Open Source Telco: Taking Control of Destiny, the adoption and usage of open source solutions are being driven by business and technological needs. Ideally, the adoption and exploitation of open source will be part of a broader transformation programme designed to deliver the specific operator’s strategic goals.

Operators implementing open source solutions today tend to do so in conjunction with the deployment of network function virtualization (NFV) and software defined networking (SDN), which will play an important role for the definition and consolidation of the future 5G architectures.

However, as Figure 1 shows, transformation programmes can face formidable obstacles, particularly where a cultural change and new skills are required.

Benefits of transformation and related obstacles

The following strategic forces are driving interest in open source approaches among telecoms operators:

Reduce infrastructure costs. Telcos naturally want to minimise investment in new technologies and reduce infrastructure maintenance costs. Open source solutions seem to provide a way to do this by reducing license fees paid to solution vendors under the traditional software procurement model. As open source software usually runs on general-purpose hardware, it could also cut the capital and maintenance costs of the telco’s computing infrastructure. In addition, the current trend towards virtualisation and SDN should enable a shift to more programmable and flexible communications platforms. Today, open source solutions are primarily addressing the core network (e.g., virtualisation of evolved packet core), which accounts for a fraction of the investment made in the access infrastructure (fibre deployment, antenna installation, and so forth). However, in time open source solutions could also play a major role in the access network (e.g., open base stations and others): an agile and well-formed software architecture should make it possible to progressively introduce new software-based solutions into access infrastructure.

Mitigate vendor lock-in. Major vendors have been the traditional enablers of new services and new network deployments. Moreover, to minimise risks, telco managers tend to prefer to adopt consolidated solutions from a single vendor. This approach has several consequences:

  • Telcos don’t tend to introduce innovative new solutions developed in-house.
  • As a result, the network is not fully leveraged as a differentiator, and can become the full care and responsibility of a vendor.
  • The internal innovation capabilities of a telco have effectively been displaced in favour of those of the vendor.

This has led to the “ossification” of much telecoms infrastructure and the inability to deliver differentiated offerings that can’t easily be replicated by competitors. Introducing open source solutions could be a means to lessen telcos’ dependence on specific vendors and increase internal innovation capabilities.

Enabling new services. The new services telcos introduce in their networks are essentially the same across many operators because the developers of these new services and features are a small set of consolidated vendors that offer the same portfolio to all the industry. However, a programmable platform could enable a telco to govern and orchestrate their network resources and become the “master of the service”, i.e., the operator could quickly create, customise and personalise new functions and services in an independent way and offer them to their customers. This capability could help telcos enter adjacent markets, such as entertainment and financial services, as well as defend their core communications and connectivity markets. In essence, employing an open source platform could give a telco a competitive advantage.

Faster innovation cycles. Depending on a vendor makes the telco dependent on its roadmap and schedule, and on the obsolescence and substitution of existing technologies. The use of outdated technologies has a huge impact on a telco’s ability to offer new solutions in a timely fashion. An open source approach offers the possibility to upgrade and improve the existing platform (or to move to totally new technologies) without too many constraints posed by the “reference vendor”. This ability could be essential to acquiring and maintaining a technological advantage over competitors. Telcos need to clearly identify the benefits of this change, which represent the reasons – the “why” – for softwarisation.

Complete contents of how to build an open source telecom report:

  • Executive Summary
  • Introduction: why open source?
  • Commercial pressures and technological opportunities
  • Open Source: Why Now?
  • What is open source software?
  • Open source: benefits and barriers
  • The benefits of using open source
  • Overcoming the barriers to using open source
  • Choosing the right path to open source
  • Selecting the right IT delivery model
  • Choosing the right model for the right scenario
  • Weighing the cost of open source
  • Which telcos are using open source today?
  • How can you build an open source telco?
  • Greenfield model
  • Brownfield model
  • Conclusions and recommendations
  • Controversial and challenging, yet often compelling
  • Recommendations for different kinds of telcos

Figures:

  • Figure 1: Illustrative open source costs versus a proprietary approach
  • Figure 2: Benefits of transformation and the related obstacles
  • Figure 3: The key barriers in the path of a shift to open source
  • Figure 4: Shaping an initial strategy for the adoption of open source solutions
  • Figure 5: A new open source component in an existing infrastructure
  • Figure 6: Different kinds of telcos need to select different delivery models
  • Figure 7: Illustrative estimate of Open Source costs versus a proprietary approach

4G Roll Out Analysis: Winning Strategies and 5G Implications

Identifying & Analysing Key Operators

In search of best practice in 4G deployment, we first had to pick out the operators that did best on quantitative metrics, before we could drill down qualitatively to investigate why. We screened all 40 MNOs that had launched 4G to date in the BRICS countries, the United States, the top five European markets, Japan, Taiwan, and South Korea, on the following indicators:

  • Monthly headline ARPU, converted to US dollars
  • Market share by subscribers
  • Quarterly net-adds
  • 4G adoption, % of the subscriber base
  • EBITDA margin %

Where possible, we also collected information on network density (i.e. subscribers per cell), and on spectrum holdings. In Figure 1, we plot EBITDA margin against the change in market share in percentage points since Q4 2012, sizing the bubbles by US dollar monthly ARPU. The axes are set to the average values for each metric.

Figure 1: 8 out of 40 MNOs made the cut for further analysis 

 

Source: STL Partners, themobileworld.com, company filings

The top-right quadrant shows those operators that are above average both on improving margins and on gaining share. We picked the operators that got into this quadrant – above-average EBITDA margin and positive share growth – for at least two quarters, with a positive trend, for further research (a sketch of this screen follows the list below). Those are:

  • Chunghwa Telecom
  • Free Mobile
  • Verizon Wireless
  • AT&T Mobility
  • Wind
  • Bharti Airtel
  • 3UK
  • MTS
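
As referenced above, here is a minimal sketch of that screening step in pandas, with hypothetical data. The real screen covered 40 MNOs and also required a positive trend, which is omitted here for brevity; the column names and figures are our assumptions, not the actual dataset.

```python
import pandas as pd

# Hypothetical quarterly observations for two operators (the real screen used 40 MNOs).
df = pd.DataFrame({
    "operator":        ["OpA", "OpA", "OpB", "OpB"],
    "ebitda_margin":   [0.34, 0.36, 0.22, 0.21],
    "share_change_pp": [0.4,  0.6,  -0.2, -0.1],   # share change vs Q4 2012, pp
})

avg_margin = df["ebitda_margin"].mean()  # axes are set to the sample averages

def passes_screen(g: pd.DataFrame) -> bool:
    # Top-right quadrant: above-average margin AND positive share growth,
    # sustained for at least two quarters.
    top_right = (g["ebitda_margin"] > avg_margin) & (g["share_change_pp"] > 0)
    return top_right.sum() >= 2

shortlist = [op for op, g in df.groupby("operator") if passes_screen(g)]
print(shortlist)  # ['OpA']
```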

We expected to find that those operators who chose market share first, initiating the price disruptions in the US and in France, would have sacrificed margin as they chased share. An example would be T-Mobile USA, additionally marked in purple on the chart. This is essentially the scenario allegedly needing “market repair”, which is dear to the hearts of European telco lobbyists. However, as Figure 2 shows, we found something very different: profitability is actually gradually increasing with subscriber growth, but only for the top-performing operators. Again, bubbles are scaled to ARPU.

 

Table of Contents

  • Executive Summary*
  • Identifying & Analysing Key Operators
  • A Design for Success*
  • Parameters*
  • Commercial Options*
  • What Strategy Did the Top Eight Adopt?*
  • Conclusions on the Network for the top Eight*
  • Conclusions on the Commercial Strategy*
  • Getting It Wrong*
  • Operator Case Studies
  • Conclusions and recommendations*

(* = not shown here)

 

Figures:

  • Figure 1: 8 out of 40 MNOs made the cut for further analysis
  • Figure 2: The fastest-growing 4G operators are either holding or gradually increasing their EBITDA margins*
  • Figure 3: Scale helps, but less than you might think*
  • Figure 4: A strategy matrix for 4G operators*
  • Figure 5: An introduction to carrier aggregation*
  • Figure 6: Parameters of our 8 leading 4G deployers*
  • Figure 7: MTS holds onto margins as data volumes surge*
  • Figure 8: Wind’s data revenue gains now offset losses from voice entirely, at 37% margins*
  • Figure 9: VZW’s service margin soars despite the price disruption*
  • Figure 10: AT&T service margins are also high and rising*
  • Figure 11: Service margins are rising strongly at 3UK*
  • Figure 12: Six operators who are struggling to escape the lower-left quadrant*
  • Figure 13: Sprint and T-Mobile are playing the same game but only one is winning*
  • Figure 14: Sprint toned down the smartphone bonanza in 2015*
  • Figure 15: Vodafone’s European OpCos are improving, but it’s been a hard road*
  • Figure 16: Vodafone Germany’s turnaround plan – 1800MHz plus backhaul*
  • Figure 17: Project Spring still hasn’t filled the fibre gap*
  • Figure 18: Free, despite being the smallest and latest to start of the French MNOs, had an outstanding score on our latency index*
  • Figure 19: T-Mobile USA’s latency performance is market-leading on a blended 3G/4G basis*
  • Figure 20: T-Mobile USA generates fewer high latency events than any US operator*

(* = not shown here)

Problem: Telecoms technology inhibits operator business model change (Part 1)

Introduction

Everyone loves to moan about telcos

‘I just can’t seem to get anything done, it is like running through treacle.’

‘We gave up trying to partner with operators – they are too slow.’

‘Why are telcos unable to make the most basic improvements in their service offerings?’

‘They are called operators for a reason: they operate networks. But they can’t innovate and don’t know the first thing about marketing or customer service.’

Anyone within the telecoms industry will have heard these or similar expressions of dissatisfaction from colleagues, partners and customers.  It seems that despite providing the connectivity and communications services that have truly changed the world in the last 20 years, operators are unloved.  Everyone, and I think we are all guilty of this, feels that operators could do so much better.  There is a feeling that these huge organisations are almost wilfully seeking to be slow and inflexible – as if there is malice in the way they do business.

But the telecoms industry employs millions of people globally. It pays quite well and so attracts talent. Many, for example, have already enjoyed success in other industries. But nobody has yet, it seems, been able to make a telco, let alone the industry, fast, agile, and innovative.

Why not?

A structural problem

In this report, we argue that nobody is at fault for the perceived woes of telecoms operators.  Indeed, the difficulty the industry is facing in changing its business model is a result of financial and operational processes that have been adopted and refined over years in response to investor requirements and regulation.  In turn, investors and regulators have created such requirements as a result of technological constraints that have applied, even with ongoing improvements, to fixed and mobile telecommunications for decades. In essence, operators are constrained by the very structures that were put in place to ensure their success.

So should we give up?

If the limitations of telecoms operators are structural, then it is easy to assume that change and development are impossible. Certainly, sceptics have plenty of empirical evidence for this view. But as we outline in this report, and will cover in more detail in a follow-up to be published in early February 2016 (Answer: How 5G + Cloud + NFV can create the ‘agile telco’), changes in technology should have a profound impact on telecoms operators’ ability to become more flexible and innovative, and so thrive in the fast-paced digital world.

Customer satisfaction is proving elusive in mature markets

Telecoms operators perform materially worse on customer service than other players in the US and UK

Improving customer experience has become something of a mantra within telecoms in the last few years. Many operators use Net Promoter Scores (NPS) as a way of measuring their performance, and the concept of ‘putting the customer first’ has gained in popularity as the industry has matured and new customers have become harder to find. Yet customer satisfaction remains low.

The American Customer Satisfaction Index (ACSI) publishes annual figures for customer satisfaction based on extensive consumer surveys. Telecommunications companies consistently come out towards the bottom of the range, scoring 65-70 out of 100. By contrast, internet and content players such as Amazon, Google, Apple and Netflix have much more satisfied customers and score 80+ – see Figure 1.

Figure 1: Customers are generally dissatisfied with telecoms companies

 

Source: American Customer Satisfaction index (http://www.theacsi.org/the-american-customer-satisfaction-index); STL Partners analysis

The story in the UK is similar. The UK Customer Satisfaction Index, using a similar methodology to its US counterpart, places the Telecommunications and Media industry as the second-worst performer across 13 industry sectors, scoring 71.7 in 2015 compared to a UK average of 76.2 and the best-performing sector, Non-food Retail, on 81.6.

Poor customer services scores are a lead indicator for poor financial performance

Most concerning for the telecoms industry is the work that ACSI has undertaken showing that customer satisfaction is linked to the financial performance of the overall economy and the performance of individual sectors and companies. The organisation states:

  • Customer satisfaction is a leading indicator of company financial performance. Stocks of companies with high ACSI scores tend to do better than those of companies with low scores.
  • Changes in customer satisfaction affect the general willingness of households to buy. As such, price-adjusted ACSI is a leading indicator of consumer spending growth and has accounted for more of the variation in future spending growth than any other single factor.

Source: American Customer Satisfaction index (http://www.theacsi.org/about-acsi/key-acsi-findings)  

In other words, consistently poor performance by all major players in the telecoms industry in the US and UK suggests aspirations of growth may be wildly optimistic. Put simply, why would customers buy more services from companies they don’t like? This bodes ill for the financial performance of telecoms operators going forward.

Senior managers within telecoms know this. They want to improve customer satisfaction by offering new and better services and customer care. But change has proved incredibly difficult, and other more agile players always seem to beat operators to the punch. The next section shows why.

 

Table of Contents

  • Introduction
  • Everyone loves to moan about telcos
  • A structural problem
  • So should we give up?
  • Customer satisfaction is proving elusive in mature markets
  • Telecoms operators perform materially worse on customer service than other players in the US and UK
  • Poor customer services scores are a lead indicator for poor financial performance
  • ‘One-function’ telecommunications technology stymies innovation and growth
  • Telecoms has always been an ‘infrastructure play’
  • …which means inflexibility and lack of innovation is hard-wired into the operating model
  • Why ‘Telco 2.0’ is so important for operators
  • Telco 2.0 aspirations remain thwarted
  • Technology can truly ‘change the game’ for operators

 

Figures:

  • Figure 1: Customers are generally dissatisfied with telecoms companies
  • Figure 2: Historically, capital deployment has driven telecoms revenue
  • Figure 3: Financial & operational metrics for Infrastructure player (Vodafone) vs Platform (Google) & Product Innovator (Unilever)

Telcos’ Last Chance in Cloud? New $18bn Sovereign Cloud Opportunity

Preface

As we predicted in our 2012 report Cloud 2.0: Telco Strategies in the Cloud, operators have struggled to provide generically competitive cloud services, with those looking to provide infrastructure-as-a-service (IaaS) losing out to the larger hyperscale players (e.g. Amazon Web Services, Microsoft Azure). The majority of telcos have therefore reduced their focus and ambition within cloud (infrastructure) services over the last few years.

However, recent legal and market developments and the emergence of new technologies are changing the cloud delivery model. The rescinding of the US-EU Safe Harbour agreement and the sovereign data trustee solution launched by Microsoft & Deutsche Telekom have put a spotlight on the need for sovereign cloud solutions that are better equipped to protect data. Operators are well-positioned to deliver and support these solutions but will need to act fast to ensure their role in the value chain.

Furthermore, new technologies (e.g. 5G, SDN/NFV) and requirements (e.g. low latency) may lead to the decentralisation of the current hyperscale data centre model, moving more computing power to the edge of the network (see How 5G is Disrupting Cloud and Network Strategy Today). This change in the architecture may lead to a long-term advantage for telcos.

In order to better understand data sovereignty requirements around the world and the potential opportunity for ‘sovereign’ cloud services, STL Partners (STL) conducted industry research. This research consisted of c.30 interviews with software-as-a-service (SaaS) providers, software companies, enterprises, public sector bodies, telecom operators and cloud service providers. This report presents and discusses the findings of this research.

The research programme was sponsored by Ericsson. This report and analysis was independently produced by STL Partners.

Introduction: The Return of Telco Cloud…

The telecoms industry has been undergoing a transformation process for much of the last decade. The threat from new players has marginalised the core communications business, and operators have looked to gain traction and grow revenues through the provision of new services in adjacent areas, one such area being cloud computing.

Cloud computing has ripped through the traditional IT infrastructure model, providing greater flexibility, enabling the pooling of resources and potentially reducing both capex and opex. This new delivery model has led to the development of new services and business models (e.g. ‘as-a-service’ models), disrupting how individuals consume services and how organisations do business.

The rise of cloud computing is a trend set to continue; indeed, STL Partners forecast that cloud IT infrastructure spending will equal spend on traditional IT infrastructure by 2020 (Figure 3).

Figure 3: Cloud IT infrastructure is rapidly gaining on traditional IT infrastructure

Source: IDC base figures; STL Partners analysis

Telcos have not remained oblivious to this industry transformation. Some (principally fixed-line) operators have a legacy providing IT outsourcing services and have looked to build on this footing, providing and managing infrastructure for cloud services, whilst others have partnered with cloud software providers to deliver new services to customers.

So far, operators’ experiences offering cloud services have been mixed, with operators typically finding more success through partnerships. Rather than attempting to build their own cloud solutions, operators have typically partnered with SaaS providers, such as Microsoft (e.g. Office 365) and Google (e.g. Google Apps for Work), acting as resellers of the software and potentially creating appealing bundles for enterprise customers.

On the other hand, telcos attempting to provide IaaS, which one might intuitively think is more closely aligned to a telco’s core capabilities, have typically found that they cannot compete head-on with the larger IaaS providers (e.g. Amazon Web Services). Simply speaking, it has become a game of scale, with single operators or even telco groups unable to match the resources and investment of the hyperscale players. Indeed, in our November 2014 report, Cloud: What is the role of telcos in cloud services in 2015?, we highlighted the challenge telcos face in competing against the larger IaaS players:

“Pushing for pureplay IaaS solutions (Compute, Memory, Storage etc) is not going to be a sensible option for the majority of telcos. As an example of how hard it is to compete here, RackSpace came from a managed hosting/colocation background and moved into IaaS, even collaborating on a virtualisation initiative that became OpenStack. But earlier in 2014, after spending less on IaaS investment than Microsoft or Google spend on infrastructure in a quarter, it announced it was going to refocus its efforts on its earlier product success with managed hosting and colocation because it was more able to differentiate itself from the other vendors who have significantly lower pricing.”

Telcos competing in infrastructure have therefore typically shifted their focus away from public cloud IaaS (competing against the larger providers) towards private cloud infrastructure and traditional managed hosting services. Despite this mixed performance with IaaS – albeit with exceptions in regions where the big IaaS players are not well established and where telcos can differentiate their offering (e.g. Telstra) – a sizeable opportunity perhaps remains, particularly as telcos begin to transform their networks.

This transformation involves the virtualisation of the network, embracing software defined-networking (SDN) and network functions virtualisation (NFV). As operators harness the power of these new technologies and associated business practices they will develop and implement the infrastructure, software and capabilities to deliver more advanced services through more efficient, automated and programmable networks. Operators in turn will be able to draw on these assets and associated skills to improve how they run and manage their cloud infrastructure.

Furthermore, as the industry develops and implements more advanced networks (i.e. 5G), there exists a potential advantage for telco infrastructure services due to the need for more localised delivery of service. The Next Generation Mobile Networks (NGMN) Alliance highlights that 5G should provide, “much greater throughput, much lower latency, ultra-high reliability, much higher connectivity density, and higher mobility range.”

STL Partners laid out a potential vision for 5G and network transformation in the report How 5G is Disrupting Cloud and Network Strategy Today. To summarise, the latency targets (how long it takes the network to respond to user requests) for 5G are very demanding: 10ms end-to-end, 1ms for special use cases requiring ultra-low latency, or 50ms end-to-end for the “ultra-low cost broadband” use case. One example where low latency could be critical is communication between self-driving cars.

In order to meet these lofty latency requirements, the current delivery model may need to be rethought. Latency is limited by the time it takes a signal to travel to the server and back at the speed of light; latency is therefore inherently linked to distance. In the 5G report, we explored the impact of these latency targets on the required distance of servers from users:

“The rule of thumb for speed-of-light delay is 4.9 microseconds for each kilometre of fibre with a refractive index of 1.47. 1ms – 1000 microseconds – equals about 204km in a straight line, assuming no routing delay. A response back is needed too, so divide that distance in half. As a result, in order to be compliant with the NGMN 5G requirements, all the network functions required to process a data call must be physically located within 100km, i.e. 1ms, of the user. And if the end-to-end requirement is taken seriously, the applications or content that they want must also be hosted within 1000km, i.e. 10ms, of the user. (In practice, there will be some delay contributed by serialisation, routing, and processing at the target server, so this would actually be somewhat more demanding.)”
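
The distance arithmetic in that quote can be reproduced in a few lines. This sketch simply restates the quoted rule of thumb (~4.9 microseconds of fibre delay per km); it is our illustration, not additional analysis from the report.

```python
# Rule of thumb from the quote: ~4.9 microseconds of one-way fibre delay per km.
US_PER_KM = 4.9

def max_server_distance_km(latency_budget_ms: float) -> float:
    """Furthest a server can sit within a round-trip latency budget."""
    one_way_us = latency_budget_ms * 1000 / 2  # halve: request out, response back
    return one_way_us / US_PER_KM

print(max_server_distance_km(1))   # ~102 km  -> the "within 100km" contour for 1ms
print(max_server_distance_km(10))  # ~1020 km -> the ~1000km contour for 10ms end-to-end
```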

To deliver these latency requirements a radical change to the architecture of the network is needed as well as a change in how compute and storage infrastructure is managed. Content and applications that are within the 100km contour will have a competitive advantage over those that don’t take account of latency. The impact of this could lead to the decentralisation of the current hyperscale data centre model, moving more computing power to the edge of the network. This change in the architecture and delivery model may lend telcos an advantage in the infrastructure marketplace.

Figure 4: Shifting the balance in favour of more localised infrastructure

Source: STL Partners

Whilst telcos will not wrest control of the infrastructure marketplace overnight, they should, as they embark on their transformation process, look to make inroads towards this vision. Indeed, there are current market challenges that telcos could immediately address (and are addressing) through their localised infrastructure, creating a stepped, phased approach towards the future vision of a localised cloud delivery model.

Into this rapidly evolving context steps the long-standing challenge of data sovereignty. Data sovereignty requirements are regulations that consider the implications of the geographical location of data and place restrictions on the movement of certain types of data across borders. The recent ruling rescinding the US-EU Safe Harbour Agreement has put a spotlight on the issues of data privacy and data sovereignty, and new approaches taken by technology players (e.g. Microsoft’s decision to create a German sovereign version of Azure) highlight that this is a problem that needs to be – and is being – solved. Operators are natural candidates to play a role here and should look to better understand how they can form part of the value chain in the provision of locally trusted IaaS solutions.

This report analyses data sovereignty requirements around the world and explores the potential opportunity for ‘sovereign’ cloud services as a further ‘nudge’ towards a more localised cloud delivery model.

 

Table of Contents

  • Preface
  • Executive Summary
  • The Return of Telco Cloud…
  • Understanding Data Sovereignty
  • Which Sectors Have the Strongest Sovereignty Requirements?
  • A Range of (Cloud) Solutions can Address Sovereignty Needs
  • 75% of Interviewees were Interested in Sovereign Cloud Solutions
  • Where is Data Sovereignty Important?
  • How could this Evolve?
  • Market Sizing: Sovereign Cloud could be Worth between $7-18bn in 2020
  • Why Telcos are Well Positioned to Address the ‘Sovereign’ Opportunity
  • Conclusions

 

Figures:

  • Figure 1: A shift in the cloud delivery model may be occurring
  • Figure 2: Sovereign cloud has the potential to represent over X% of the cloud infrastructure marketplace
  • Figure 3: Cloud IT infrastructure is rapidly gaining on traditional IT infrastructure
  • Figure 4: Shifting the balance in favour of more localised infrastructure
  • Figure 5: How much data does Facebook store about you?
  • Figure 6: STL Industry Research Programme – Breakdown of interviewees
  • Figure 7: The significant majority of interviewees have encountered sovereignty requirements
  • Figure 8: More-regulated sectors are more likely to encounter restrictions
  • Figure 9: Infrastructure Deployment Models
  • Figure 10: The applicability of cloud deployment models to meet sovereignty requirements
  • Figure 11: The majority of Interviewees saw demand for sovereign cloud
  • Figure 12: More strictly regulated sectors are more interested in sovereign cloud solutions
  • Figure 13: Indicative map of data sovereignty requirements across the globe
  • Figure 14: Overview of data sovereignty requirements across regions
  • Figure 15: The rise of IoT could lead to increased demand for sovereign cloud
  • Figure 16: Sovereign cloud could be worth between $7-18bn in 2020
  • Figure 17: North America represents the biggest market for sovereign cloud
  • Figure 18: Sovereign cloud in the Middle East & Africa potentially represents the greatest proportion of cloud infrastructure spending
  • Figure 19: Government represents the largest market for sovereign cloud for existing services and Healthcare for sovereign cloud incl. IoT services
  • Figure 20: Healthcare is the largest sector for sovereign cloud as a percentage of spend on IT infrastructure

Huawei’s choice: 5G visionary, price warrior or customer champion?

Introduction: Huawei H1s

Huawei’s H1 2015 results caused something of a stir, as they seemed to promise a new cycle of rapid growth at the No.2 infrastructure vendor. The headline figure was that revenue for H1 was up 30% year-on-year – somewhat surprising, as LTE infrastructure spending was thought to have passed its peak in much of the world. In context, Huawei’s revenue has grown at a 16% CAGR since 2010, while its operating profits have grown at 2%, implying very significant erosion of margins as the infrastructure business commoditises. Operating margins were in the region of 17-18% in 2010, before falling to 10-12% in 2012-2014.
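Those two growth rates imply the margin squeeze directly: if revenue compounds at 16% a year while operating profit compounds at only 2%, the margin shrinks each year by the ratio of the two growth factors. A quick, illustrative check (taking 17.5% as the midpoint of the 2010 margin range quoted above):

```python
# Consistency check on the margin-erosion claim: revenue compounding at 16%
# against operating profit compounding at 2% shrinks the margin every year.
# 17.5% is the midpoint of the 17-18% margin range quoted for 2010.

margin_2010 = 0.175
ratio = 1.02 / 1.16   # profit growth factor relative to revenue growth factor

for year in range(2010, 2015):
    implied = margin_2010 * ratio ** (year - 2010)
    print(f"{year}: implied operating margin {implied:.1%}")

# Prints ~10.5% for 2014, consistent with the 10-12% observed in 2012-2014.
```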

Figure 1 – If Huawei’s H2 delivers as promised, it may have broken out of the commoditisation trap… for now

Source: STL Partners, Huawei press releases 

Our estimate, in Figure 1, uses the averages for the last 4 years to show two estimates for the full-year numbers. If the first, ‘2015E’, is delivered, this would take Huawei’s profitability back to the levels of 2010 and nearly double its operating profit. The second estimate ‘Alternate 2015E’, assumes a similar performance to last year’s, in which the second half of the year disappoints in terms of profitability. In this case, full-year margin would be closer to 12% rather than 18% and all the growth would be coming from volume. The H1 announcement promises margins for 2015 of 18%, which would therefore mean a very successful year indeed if they were delivered in H2. For the last few years, Huawei’s H2 revenue has been rather higher than H1, on average by about 10% for 2011-2014. You might expect this in a growing business, but profitability is much more erratic.
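For readers who want to reproduce the shape of these two scenarios, the sketch below shows the arithmetic, using only the relationships quoted above; the H1 revenue figure is a placeholder, not Huawei’s reported number.

```python
# Minimal sketch of how the two full-year scenarios can be constructed.
# revenue_h1 is a placeholder, not Huawei's reported figure; the ~10% H2
# uplift and the 18% / 12% margin scenarios come from the discussion above.

revenue_h1 = 100.0               # hypothetical H1 revenue, arbitrary units
revenue_h2 = revenue_h1 * 1.10   # H2 has averaged ~10% above H1 (2011-2014)
revenue_fy = revenue_h1 + revenue_h2

profit_2015e = revenue_fy * 0.18  # "2015E": the promised 18% full-year margin
profit_alt   = revenue_fy * 0.12  # "Alternate 2015E": weak H2, ~12% margin

print(f"Full-year revenue: {revenue_fy:.1f}")
print(f"2015E profit: {profit_2015e:.1f} vs Alternate 2015E: {profit_alt:.1f}")
```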

For reference, Figure 2 shows that the relationship between H1 and H2 profitability has varied significantly from year to year. While in 2012 and 2013 Huawei’s operating profits in H2 were higher than in H1, in 2011 and 2014, its H2 operating profits were much less than in H1. 2015E shows the scenario needed to deliver the 18% annual margin target; Alternate 2015E shows a scenario where H2 is relatively weak, in line with last year.

Figure 2 – Huawei’s H1 and H2 Profits have varied significantly year on year

Source: STL Partners, Huawei press releases 

Huawei’s annual report hints at some reasons for the weak H2 2014, notably poor sales in North America, stockbuilding ahead of major Chinese investment (inventory rose sharply through 2014), and the launch of the Honor low-cost device brand. However, although North American wireless investment was in fact low at the time, it’s never been a core market for Huawei, and Chinese carriers were spending heavily. It is plausible that adding a lot of very cheap devices would weigh on the company’s profitability. As we will see, though, there are reasons to think Huawei might not have got full value from strong carrier spending in this timeframe.

In any event, to hit its ambitious 2015 target, Huawei will need a great H2 2015 to follow its strong H1. It hasn’t performed this particular ‘double’ in the last four years, so it will certainly be an achievement to do it in 2015. And if it does, how is the market looking for 2016 and beyond?

Where are we in the infrastructure cycle?

As Huawei is still primarily an infrastructure vendor, its business is closely coupled to operators’ CAPEX plans. In theory, these plans are cyclical, driven by the ever-present urge to upgrade technology and build out networks. The theory goes that, on one hand, technology drivers (new standards, higher-quality displays and camera sensors) and user behaviour (the secular growth in data traffic) drive operators to invest. On the other, financial imperatives (to derive as much margin from depreciating assets as possible) encourage operators to resist spending and sweat the assets.

Sometimes, the technology drivers get the upper hand; sometimes, the financial constraints. Therefore, the operator tends to “flip” between a high-investment and a low-investment state. Because operators compete, this behaviour may become synchronised within markets, causing large geographies to exhibit an infrastructure spending cycle.

In practice, there are other factors that militate against the cyclical forces. There are ‘bottlenecks’ in integration and in scaling resources up and down, and businesses generally prefer to keep expenditure as flat as possible to reduce variations and the resulting ‘surprises’ for their shareholders. Even so, there is some ongoing variation in levels of CAPEX investment in every market, as we examine in the following sections.

North America: operators take a breather before 5G

In North America, the tipping-point from sweating the assets to investment seems to have been reached roughly in 2011-2012, when the major carriers began a cycle of heavy investment in LTE infrastructure. This investment peaked in 2014. Recently, AT&T told its shareholders to expect significantly lower CAPEX over the next few years, and in fact the actual numbers so far this year are substantially lower than the guidance of around 14-15% of revenue. Excluding the Mexican acquisitions, CAPEX/Revenue has been running at around 13% since Q3 2014. From Q2 2013 to the end of Q2 2014, AT&T spent on average $5.7bn a quarter with the vendors. Since then, the average is $4.4bn, so AT&T has cut its quarterly CAPEX by more than a fifth.

Figure 3 – AT&T’s LTE investment cycle looks over.

Source: STL Partners, Huawei press releases

During 2013, AT&T, Sprint, and VZW were all in a higher spending phase, as Figure 3 shows. Since then, AT&T and Sprint have backed off considerably. However, despite its problems, Sprint does seem to be starting another round of investment, and VZW has started to invest again, while T-Mobile is rising gradually. We can therefore say that the investment pause in North America is overhyped, but does exist – compare the first half of 2013, when AT&T, Sprint, and T-Mobile were all near the top of the cycle while VZW was dead on the average.

Figure 4 – The investment cycle in North America.

Source: STL Partners

The pattern is somewhat clearer in terms of CAPEX as a percentage of revenue, shown in Figure 5. In late 2012 and most of 2013, AT&T, Sprint, and T-Mobile were all near the top of their historic ranges for CAPEX as a percentage of their revenue. Now, only Sprint is really pushing hard.

Figure 5 – Spot the outlier. Sprint is the only US MNO still investing heavily

Source: STL Partners, company filings

If there is cyclicality it is most visible here in Sprint’s numbers, and the cycle is actually pretty short – the peak-to-trough time seems to be about a year, so the whole cycle takes about two years to run. That suggests that if there is a more general cyclical uptick, it should be around H1 2016, and the one after that nicely on time for early 5G implementations in 2018.

  • Executive Summary
  • Introduction: Huawei H1s
  • Where are we in the infrastructure cycle?
  • North America: operators take a breather before 5G
  • Europe: are we seeing a return to growth?
  • China: full steam ahead under “special action mobilisation”
  • The infrastructure market is changing
  • Commoditisation on a historic scale
  • And Huawei is no longer the price leader
  • The China Mobile supercontract: a highly political event
  • Conclusion: don’t expect a surge in infrastructure profitability
  • Huawei’s 5G Strategy and the Standards Process
  • Huawei’s approach to 5G
  • What do operators want from 5G?
  • In search of consensus: 3GPP leans towards a simpler “early 5G” solution
  • Conclusions
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In Q2, the Euro-5 out-invested Chinese operators for the first time
  • Figure 2: If Huawei’s H2 delivers as promised, it may have broken out of the commoditisation trap for now
  • Figure 3: Huawei’s H1 and H2 Profits have varied significantly year on year
  • Figure 4: AT&T’s LTE investment cycle looks over.
  • Figure 5: The investment cycle in North America.
  • Figure 6: Spot the outlier. Sprint is the only US MNO still investing heavily
  • Figure 7: 3 of the Euro-5 carriers are beginning to invest again
  • Figure 8: European investment levels are not as far behind as you might think
  • Figure 9: Chinese CAPEX/Revenue levels have been 10 percent higher than US or European ones – but this may be changing
  • Figure 10: Chinese infrastructure spending was taking a breather too, until Xi’s intervention
  • Figure 11: Chinese MNOs are investing heavily
  • Figure 12: LTE deployments have grown 100x while prices have fallen even more
  • Figure 13: As usual, Huawei is very much committed to a single radio solution
  • Figure 14: Huawei wants most 5G features in R15 by H2 2018
  • Figure 15: Huawei only supports priority for MBB very weakly and emphasises R16 and beyond
  • Figure 16: Chinese operators, Alcatel-Lucent, ZTE, and academic researchers disagree with Huawei
  • Figure 17: Orange’s view of 5G: distinctly practical
  • Figure 18: Telefonica is really quite sceptical about much of the 5G technology base
  • Figure 19: Qualcomm sees R15 as a bolt-on new radio in an LTE het-net
  • Figure 20: 3GPP RAN chairman Dino Flores says “yes” to prioritisation
  • Figure 21: Working as a group, the operators were slightly more ambitious
  • Figure 22: The vendors are very broadband-focused
  • Figure 23: Vodafone and Huawei

Triple-Play in the USA: Infrastructure Pays Off

Introduction

In this note, we compare the recent performance of three US fixed operators who have adopted contrasting strategies and technology choices, AT&T, Verizon, and Comcast. We specifically focus on their NGA (Next-Generation Access) triple-play products, for the excellent reason that they themselves focus on these to the extent of increasingly abandoning the subscriber base outside their footprints. We characterise these strategies, attempt to estimate typical subscriber bundles, discuss their future options, and review the situation in the light of a “Deep Value” framework.

A Case Study in Deep Value: The Lessons from Apple and Samsung

Deep value strategies concentrate on developing assets that will be difficult for any plausible competitor to replicate, in as many layers of the value chain as possible. A current example is the way Apple and Samsung – rather than Nokia, HTC, or even Google – came to dominate the smartphone market.

It is now well known that Apple, despite its image as a design-focused company whose products are put together by outsourcers, has invested heavily in manufacturing throughout the iOS era. Although the first-generation iPhone was largely assembled from off-the-shelf parts, in many ways it should be considered a large-scale pilot project. Starting with the iPhone 3GS, the proportion of Apple’s own content in the devices rose sharply, thanks to the acquisition of PA Semiconductor, but also to heavy investment in the supply chain.

Not only did Apple design and pilot-produce many of the components it wanted, it bought them from suppliers in advance to lock up the supply. It also bought the machine tools the suppliers would need, often long in advance. But this wasn’t just a tactical effort to deny componentry to its competitors: it was also a strategic effort to create manufacturing capacity.

In pre-paying for large quantities of components, Apple provides its suppliers with the capital they need to build new facilities. In pre-paying for the machine tools that will go in them, they finance the machine tool manufacturers and enjoy a say in their development plans, thus ensuring the availability of the right machinery. They even invent tools themselves and then get them manufactured for the future use of their suppliers.

Samsung is of course both Apple’s biggest competitor and its biggest supplier. It combines these roles precisely because it is a huge manufacturer of electronic components. Concentrating on its manufacturing supply chain both enables it to produce excellent hardware, and also to hedge the success or failure of the devices by selling componentry to the competition. As with Apple, doing this is very expensive and demands skills that are both in short supply, and sometimes also hard to define. Much of the deep value embedded in Apple and Samsung’s supply chains will be the tacit knowledge gained from learning by doing that is now concentrated in their people.

The key insight for both companies is that industrial and user-experience design is highly replicable, and patent protection is relatively weak. The same is true of software. Apple had a deeply traumatic experience with the famous Look and Feel lawsuit against Microsoft, and some people have suggested that the supply-chain strategy was deliberately intended to prevent something similar happening again.

Certainly, the shift to this strategy coincides with the launch of Android, which Steve Jobs at least perceived as a “stolen product”. Arguably, Jobs repeated Apple’s response to Microsoft Windows, suing everyone in sight, with about as much success, whereas Tim Cook in his role as the hardware engineering and then supply-chain chief adopted a new strategy, developing an industrial capability that would be very hard to replicate, by design.

Three Operators, Three Strategies

AT&T

The biggest issue any fixed operator has faced since the great challenges of privatisation, divestment, and deregulation in the 1980s is that of managing the transition from a business that basically provides voice on a copper access network to one that basically provides Internet service on a co-ax, fibre, or possibly wireless access network. This, at least, has been clear for many years.

AT&T is the original telco – at least, AT&T likes to be seen that way, as shown by their decision to reclaim the iconic NYSE ticker symbol “T”. That obscures, however, how much has changed since the divestment and the extremely expensive process of mergers and acquisitions that patched the current version of the company together. The bit examined here is the AT&T Home Solutions division, which owns the fixed-line ex-incumbent business, also known as the merged BellSouth and SBC businesses.

AT&T, like all the world’s incumbents, deployed ADSL at the turn of the 2000s, thus getting into the ISP business. Unlike most world incumbents, in 2005 it got a huge regulatory boost in the form of the Martin FCC’s Comcast decision, which declared that broadband Internet service was not a telecommunications service for regulatory purposes. This permitted US fixed operators to take back the Internet business they had been losing to independent ISPs. As such, they were able to cope with the transition while concentrating on the big-glamour areas of M&A and wireless.

As the 2000s advanced, it became obvious that AT&T needed to look at the next move beyond DSL service. The option taken was what became U-Verse, a triple-play product which consists of:

  • Either ADSL, ADSL2+, or VDSL, depending on copper run length and line quality
  • Plus IPTV
  • And traditional telephony carried over IP.

This represents a minimal approach to the transition – the network upgrade requires new equipment in the local exchanges, or Central Offices in US terms, and in street cabinets, but it does not require the replacement of the access link, nor any trenching.

This minimisation of capital investment is especially important, as it was also decided that U-Verse would not deploy into areas where the copper might need investment to carry it. These networks would eventually, it was hoped, be either sold or closed and replaced by wireless service. U-Verse was therefore, for AT&T, in part a means of disposing of regulatory requirements.

It was also important that the system closely coupled the regulated domain of voice with the unregulated, or at least only potentially regulated, domain of Internet service and the either unregulated or differently regulated domain of content. In many ways, U-Verse can be seen as a content first strategy. It’s TV that is expected to be the primary replacement for the dwindling fixed voice revenues. Figure 1 shows the importance of content to AT&T vividly.

Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast


Source: Telco 2.0 Transformation Index

This sounds like one of the telecoms-as-media strategies of the late 1990s. However, it should be clearly distinguished from, say, BT’s drive to acquire exclusive sports content and to build up a brand identity as a “channel”. U-Verse does not market itself as a “TV channel” and does not buy exclusive content – rather, it is a channel in the literal sense, a distributor through which TV is sold. We will see why in the next section.

The US TV Market

It is well worth remembering that TV is a deeply national industry. Steve Jobs famously described it as “balkanised” and as a result didn’t want to take part. Most metrics vary dramatically across national borders, as do qualitative observations of structure. (Some countries have a big public sector broadcaster, like the BBC or indeed Al-Jazeera, to give a basic example.) Countries with low pay-TV penetration can be seen as ones that offer greater opportunities, it being usually easier to expand the customer base than to win share from the competition (a “blue ocean” versus a “red sea” strategy).

However, it is also true that pay-TV in general is an easier sell in a market where most TV viewers already pay for TV. It is very hard to convince people to pay for a product they can obtain free.

In the US, there is a long-standing culture of pay-TV, originally with cable operators and more recently with satellite (DISH and DirecTV), IPTV or telco-delivered TV (AT&T U-Verse and Verizon FiOS), and subscription OTT (Netflix and Hulu). It is also a market characterised by heavy TV usage (an average household has 2.8 TVs). Out of the 114.2 million homes (96.7% of all homes) receiving TV, according to Nielsen, there are some 97 million receiving pay-TV via cable, satellite, or IPTV, a penetration rate of 85%. This is the largest and richest pay-TV market in the world.

In this sense, it ought to be a good prospect for TV in general, with the caveat that a “Sky Sports” or “BT Sport” strategy based on content exclusive to a distributor is unlikely to work. This is because typically, US TV content is sold relatively openly in the wholesale market, and in many cases, there are regulatory requirements that it must be provided to any distributor (TV affiliate, cable operator, or telco) that asks for it, and even that distributors must carry certain channels.

Rightsholders have backed a strategy based on distribution over one based on exclusivity, on the principle that the customer should be given as many opportunities as possible to buy the content. This also serves the interests of advertisers, who by definition want access to as many consumers as possible. Hollywood has always aimed to open new releases on as many cinema screens as possible, and it is the movie industry’s skills, traditions, and prejudices that shaped this market.

As a result, it is relatively easy for distributors to acquire content, but difficult for them to generate differentiation by monopolising exclusive content. In this model, differentiation tends to accrue to rightsholders, not distributors. For example, although HBO maintains the status of being a premium provider of content, consumers can buy it from any of AT&T, Verizon, Comcast, any other cable operator, satellite, or direct from HBO via an OTT option.

However, pay-TV penetration is high enough that any new entrant (such as the two telcos) is committed to winning share from other providers, the hard way. It is worth pointing out that the US satellite operators DISH and DirecTV concentrated on rural customers who aren’t served by the cable MSOs. At the time, their TV needs weren’t served by the telcos either. As such, they were essentially greenfield deployments, the first pay-TV propositions in their markets.

The biggest change in US TV in recent times has been the emergence of major new distributors, the two RBOCs and a range of Web-based over-the-top independents. Figure 2 summarises the situation going into 2013.

Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale


Source: Telco 2.0 Transformation Index

The two biggest classes of distributors saw either a marginal loss of subscribers (the cablecos) or a marginal gain (satellite). The two groups of (relatively) new entrants, as you’d expect, saw much more growth. However, the OTT players are both bigger and much faster growing than the two telco players. It is worth pointing out that this mostly represents additional TV consumption, typically, people who already buy pay-TV adding a Netflix subscription. “Cord cutting” – replacing a primary TV subscription entirely – remains rare. In some ways, U-Verse can be seen as an effort to do something similar, upselling content to existing subscribers.

Competing for the Whole Bundle – Comcast and the Cable Industry

So how is this option doing? The following chart, Figure 3, shows that in terms of overall service ARPU, AT&T’s fixed strategy is delivering inferior results to those of its main competitors.

Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up


Source: Telco 2.0 Transformation Index

The interesting point here is that Time Warner Cable is doing less well than some of its cable-industry peers. Comcast, the biggest, claims a $159 monthly ARPU for triple-play customers, and it probably has a higher density of triple-players than the telcos. More representatively, it also quotes a figure of $134 monthly average revenue per customer relationship, including single- and double-play customers; we have used this figure throughout this note. TWC, in general, is more content-focused and less broadband-focused than Comcast, having taken much longer to roll out DOCSIS 3.0. But is that important? After all, aren’t cable operators all about TV? Figure 4 shows clearly that broadband and voice are now just as important to cable operators as they are to telcos. The distinction is increasingly just a historical quirk.

Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators

Source: NCTA data, STL Partners

As we have seen, TV in the USA is not a differentiator, because everyone’s got it. Further, it’s a product that doesn’t bring differentiation but does bring costs, as the rightsholders exact their share of the selling price. Broadband and voice are different – they are, in a sense, products the operator makes in-house. Most operators have to buy the equipment to produce them (except Free.fr, which has developed its own), but the operator has to do that anyway to carry the TV.

The differential growth rates in Figure 4 represent a substantial change in the ISP industry. Traditionally, the Internet engineering community tended to look down on cable operators as glorified TV distribution systems. This is no longer the case.

In the late 2000s, cable operators concentrated on improving their speeds and increasing their capacity. They also pressed their vendors and standardisation forums to practise continuous improvement, creating a regular upgrade cycle for DOCSIS firmware and silicon that lets them stay one (or more) jumps ahead of the DSL industry. Some of them also invested in their core IP networking and in providing a deeper and richer variety of connectivity products for SMB, enterprise, and wholesale customers.

Comcast is the classic example of this. It is a major supplier of mobile backhaul, high-speed Internet service (and also VoIP) for small businesses, and a major actor in the Internet peering ecosystem. An important metric of this change is that since 2009, it has transitioned from being a downlink-heavy eyeball network to being a balanced peer that serves about as much traffic outbound as it receives inbound.

The key insight here is that, especially in an environment like the US where xDSL unbundling isn’t available, if you win a customer for broadband, you generally also get the whole bundle. TV is a valuable bonus, but it’s not differentiating enough to win the whole of the subscriber’s fixed telecoms spend – or to retain it, in the presence of competitors with their own infrastructure. It’s also of relatively little interest to business customers, who tend to be high-value customers.

 

  • Executive Summary
  • Introduction
  • A Case Study in Deep Value: The Lessons from Apple and Samsung
  • Three Operators, Three Strategies
  • AT&T
  • The US TV Market
  • Competing for the Whole Bundle – Comcast and the Cable Industry
  • Competing for the Whole Bundle II: Verizon
  • Scoring the three strategies – who’s winning the whole bundles?
  • SMBs and the role of voice
  • Looking ahead
  • Planning for a Future: What’s Up Cable’s Sleeve?
  • Conclusions

 

  • Figure 1: U-Verse TV sales account for the largest chunk of Telco 2.0 revenue at AT&T, although M2M is growing fast
  • Figure 2: OTT video providers beat telcos, cablecos, and satellite for subscriber growth, at scale
  • Figure 3: Cable operators lead the way on ARPU. Verizon, with FiOS, is keeping up
  • Figure 4: Non-video revenues – i.e. Internet service and voice – are the driver of growth for US cable operators
  • Figure 5: Comcast has the best pricing per megabit at typical service levels
  • Figure 6: Verizon is ahead, but only marginally, on uplink pricing per megabit
  • Figure 7: FCC data shows that it’s the cablecos, and FiOS, who under-promise and over-deliver when it comes to broadband
  • Figure 7: Speed sells at Verizon
  • Figure 8: Comcast and Verizon at parity on price per megabit
  • Figure 9: Typical bundles for three operators. Verizon FiOS leads the way
  • Figure 12: The impact of learning by doing on FTTH deployment costs during the peak roll-out phase

Customer Experience: Is it Time for the Mobile CDN?

Summary: Changing consumer behaviours and the transition to 4G are likely to bring about a fresh surge of video traffic on many networks. Fortunately, mobile content delivery networks (CDNs), which should deliver both better customer experience and lower costs, are now potentially an option for carriers using a combination of technical advances and new strategic approaches to network design. This briefing examines why, how, and what operators should do, and includes lessons from Akamai, Level 3, Amazon, and Google. (May 2013, Executive Briefing Service)

Introduction

Content delivery networks (CDNs) are by now a proven pattern for the efficient delivery of heavy content, such as video, and for better user experience in Web applications. Extensively deployed worldwide, they can be optimised to save bandwidth, to provide greater resilience, or to help scale up front-end applications. In the autumn of 2012, it was estimated that CDN providers accounted for 40% of the traffic entering residential ISP networks from the Internet core. This is likely to be an underestimate if anything, as a major use case for CDN is to reduce the volume of traffic that has to transit the Internet and to localise traffic within ISP networks. Craig Labovitz of DeepField Networks, formerly the head of Arbor’s ATLAS instrumentation project, estimates that 35-45% of interdomain Internet traffic is accounted for by CDNs, rising to 60% for some smaller networks, and that 85% of this traffic is video.

Figure 1: CDNs, the supertankers of the Internet, are growing

Source: DeepField, STL

In the past, we have argued that mobile networks could benefit from deploying CDN, both in order to provide CDN services to content providers and in order to reduce their Internet transit and internal backhaul costs. We have also looked at the question of whether telcos should try to compete with major Internet CDN providers directly. In this note, we will review the CDN business model and consider whether the time has come for mobile CDN, in the light of developments at the market leader, Akamai.

The CDN Business Model

Although CDNs account for a very large proportion of Internet traffic and are indispensable to many content and applications providers, they are relatively small businesses. Dan Rayburn of Frost & Sullivan estimates that the video CDN market, not counting services provided by telcos internally, is around $1bn annually. In 2011, Cisco put it at $2bn with a 20% CAGR.

This is largely because much of the economic value created by CDNs accrues to the operators in whose networks they deploy their servers, in the form of efficiency savings, and to the content providers, in the form of improved sales conversions, less downtime, savings on hosting and transit, and generally, as an improvement in the quality of their product. It’s possible to see this as a two-sided business model – although the effective customer is the content provider, whose decisions determine the results of competition, much of the economic value created accrues to the operator and the content provider’s customer.

On top of this, it is often suggested that margins in the core CDN product, video delivery, are poor, and that it would be worth moving to supposedly more lucrative “media services” – products like transcoding (converting original video files into the various formats served out of the CDN for networks with more or less bandwidth, mobile versus fixed devices, Apple HLS versus Adobe Flash, etc.) and analytics aimed at content creators and rightsholders – or to lower-scale but higher-margin enterprise products. We are not necessarily convinced of this, and we will discuss the point further on page 9. For the time being, note that it is relatively easy to enter the CDN market, and that it is influenced by Moore’s Law. Therefore, as with most electronic, computing, and telecoms products, there is structural pressure on prices.

The Problem: The Traffic Keeps Coming

A major 4G operator recently released data on the composition of traffic over their new network. As much as 40% of the total, it turned out, was music or video streaming. The great majority of this will attract precisely no revenue for the operator, unless by chance it turns out to represent the marginal byte that induces a user to spend money on out-of-bundle data. However, it all consumes spectrum and needs backhauling and therefore costs money.

The good news is that most, or even all, of this could potentially be distributed via a CDN, and in many cases probably will be distributed by a CDN as far as the mobile operator’s Internet point of presence. Some of this traffic will be uplink, a segment likely to grow fast with better radios and better device cameras, but there are technical options related to CDN that can benefit uplink applications as well.

Figure 2: Video, music, and photos are filling up a 4G mobile network


Source: EE, STL

Another 36.5% of the traffic is accounted for by Web browsing and e-mail. A large proportion of the Web activity could theoretically come from a CDN, too – even if the content itself has to be generated dynamically by application logic, things like images, fonts, and JavaScript libraries are a quick win in terms of performance. Estimates of how much Internet traffic in general could be served from a CDN range from 35% (AT&T) to 98% (Analysys Mason).
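To make concrete why static page assets are a “quick win”, the toy sketch below shows the core mechanism a CDN edge node applies to them: fetch from the origin once, then serve locally until a time-to-live expires. It is a deliberately minimal illustration, not any vendor’s implementation; the class, URLs and TTL value are invented for the example.

```python
import time

# Toy CDN edge cache: static assets (images, fonts, JS libraries) are fetched
# from the origin once, then served locally until their TTL expires.
# Purely illustrative; names, URLs and TTLs are invented for the example.

class EdgeCache:
    def __init__(self, ttl_seconds: float = 3600):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (body, expiry_timestamp)

    def fetch_from_origin(self, url: str) -> bytes:
        # Stand-in for a real origin fetch over the long-haul network.
        return f"<contents of {url}>".encode()

    def get(self, url: str) -> bytes:
        cached = self.store.get(url)
        if cached and cached[1] > time.time():
            return cached[0]                    # hit: no transit/backhaul used
        body = self.fetch_from_origin(url)      # miss: one origin round trip
        self.store[url] = (body, time.time() + self.ttl)
        return body

edge = EdgeCache()
edge.get("https://example.com/static/app.js")  # miss -> fetched from origin
edge.get("https://example.com/static/app.js")  # hit  -> served at the edge
```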

As 29% of their traffic originates from the top three point sources – YouTube, Facebook, and iTunes – it’s also observable that signing up a relatively small subset of content providers as customers will provide considerable benefit. All three use a CDN: Facebook and iTunes are customers of Akamai, while YouTube relies on Google’s own solution.

We can re-arrange the last chart to illustrate this more fully. (Note that Skype, as a peer-to-peer application that is also live, is unsuitable for CDN as usually understood.)

Figure 3: The top 9 CDN-able point sources represent 40% of EE’s traffic

Source: EE, STL

Looking further afield, the next chart shows the traffic breakdown by application from DeepField’s observations in North American ISP networks.

Figure 4: The Web giants ride on the CDNs

Source: DeepField

Clearly, the traffic sources and traffic types that are served from CDNs are both the heaviest to transport and also the ones that contribute most to the busy hour; note that these are peak measurements, and the total of the CDN traffic here (Netflix, YouTube, CDN other, Facebook) is substantially more than it is on average.

To read the report in full, including the following sections detailing additional analysis…

  • Akamai: the World’s No.1 CDN
  • Financial and KPI review
  • The Choice for CDN Customers: Akamai, Amazon, or DIY like Google?
  • CDN depth: the key question
  • CDN depth and mobile networks
  • Akamai’s guidelines for deployment
  • Why has mobile CDN’s time come?
  • What has held mobile CDN back?
  • But the world has changed…
  • …Networks are much less centralised…
  • …and IP penetrates much more deeply into the network
  • Licensed or Virtual CDN – a (relatively) new business model
  • SDN: a disruptive opportunity
  • So, why right now?
  • Conclusions
  • It may be time for telcos to move on mobile CDN
  • The CDN industry is exhibiting familiar category killer dynamics
  • Regional point sources remain important
  • CDN internals are changing the structure of the Internet
  • Recommendations for action

…and the following figures…

  • Figure 1: CDNs, the supertankers of the Internet, are growing
  • Figure 2: Video, music, and photos are filling up a 4G mobile network
  • Figure 3: The top 9 CDN-able point sources represent 40% of EE’s traffic
  • Figure 4: The Web giants ride on the CDNs
  • Figure 5: Akamai’s revenues by line of business
  • Figure 6: Observed traffic share for major CDNs

 

Dealing with the ‘Disruptors’: Google, Apple, Facebook, Microsoft/Skype and Amazon (Updated Extract)

Executive Summary (Extract)

This report analyses the strategies behind the success of Amazon, Apple, Facebook, Google and Skype, before going on to consider the key risks they face and how telcos and their partners should deal with these highly-disruptive Internet giants.

As the global economy increasingly goes digital, these five companies are using the Internet to create global brands with much broader followings than those of the traditional telecoms elite, such as Vodafone, AT&T and Nokia. However, the five have markedly different business models that offer important insights into how to create world-beating companies in the digital economy:

  • Amazon: Amazon’s business-to-business Marketplace and Cloud offerings are text-book examples of how to repurpose assets and infrastructure developed to serve consumers to open up new upstream markets. As the digital economy goes mobile, Amazon’s highly-efficient two-sided commerce platform is enabling it to compete effectively with rivals that control the leading smartphone and tablet platforms – Apple and Google.
  • Apple: Apple has demonstrated that, with enough vision and staying power, an individual company can single-handedly build an entire ecosystem. By combining intuitive and very desirable products, with a highly-standardised platform for software developers, Apple has managed to create an overall customer experience that is significantly better than that offered by more open ecosystems. But Apple’s strategy depends heavily on it continuing to produce the very best devices on the market, which will be difficult to sustain over the long-term.
  • Facebook: A compelling example of how to build a business on network effects. It took Facebook four years of hard work to reach a tipping point of 100 million users, but the social networking service has been growing easily and rapidly ever since. Facebook has the potential to attract 1.4 billion users worldwide, but only if it continues to sidestep rising privacy concerns, consumer fatigue or a sudden shift to a more fashionable service.
  • Google: The search giant’s virtuous circle keeps on spinning to great effect – Google develops scores of free, and often-compelling, Internet services, software platforms and apps, which attract consumers and advertisers, enabling it to create yet more free services. But Google’s acquisition of Motorola Mobility risks destabilising the Android ecosystem on which a big chunk of its future growth depends.
  • Skype: Like Facebook and Google, Skype sought users first and revenues second. By creating a low-cost, yet feature-rich, product, Skype has attracted more than 660 million users and created sufficient strategic value to persuade Microsoft to hand over $8.5bn. Skype’s share of telephony traffic is rising inexorably, but Google and Apple may go to great lengths to prevent a Microsoft asset gaining a dominant position in peer-to-peer communications.

The strategic challenge

There is a clear and growing risk that consumers’ fixation on the products and services provided by the five leading disruptors could leave telcos providing commoditised connectivity and struggling to make a respectable return on their massive investment in network infrastructure and spectrum.

In developed countries, telcos’ longstanding cash-cows – mobile voice calls and SMS – are already being undermined by Internet-based alternatives offered by Skype, Google, Facebook and others. Competition from these services could see telcos lose as much as one third of their messaging and voice revenues within five years (see Figure 1) based on projections from our global survey, carried out in September 2011.

Figure 1 – The potential combined impact of the disruptors on telcos’ core services


Source: Telco 2.0 online survey, September 2011, 301 respondents

Moreover, most individual telcos lack the scale and the software savvy to compete effectively in other key emerging mobile Internet segments, such as local search, location-based services, digital content, apps distribution/retailing and social-networking.

The challenge for telecoms and media companies is to figure out how to deal with the Internet giants in a strategic manner that both protects their core revenues and enables them to expand into new markets. Realistically, that means a complex, and sometimes nuanced, co-opetition strategy, which we characterise as the “Great Game”.

In Figure 3 below, we’ve mapped the players’ roles and objectives against the markets they operate in, giving an indication of the potential market revenue at stake, and telcos’ generic strategies.

Figure 3 – The Great Game – Positions, Roles and Strategies


Our in-depth analysis, presented in this report, describes the ‘Great Game’ and the strategies that we recommend telcos and others can adopt in summary and in detail. [END OF FIRST EXTRACT]

Report contents

  • Executive Summary [5 pages – including partial extract above]
  • Key Recommendations for telcos and others [20 pages]
  • Introduction [10 pages – including further extract below]


The report then contains c.50-page sections with detailed analysis of objectives, business model, strategy, and options for co-opetition for:

  • Google
  • Apple
  • Facebook
  • Microsoft/Skype
  • Amazon

Followed by:

  • Conclusions and recommendations [10 pages]
  • Index

The report includes 124 charts and tables.

The rest of this page comprises an extract from the report’s introduction, covering the ‘new world order’, investor views, the impact of disruptors on telcos, and how telcos are currently fighting back (including pricing, RCS and WAC), and further details of the report’s contents. 

 

Introduction

The new world order

The onward march of the Internet into daily life, aided and abetted by the phenomenal demand for smartphones since the launch of the first iPhone in 2007, has created a new world order in the telecoms, media and technology (TMT) industry.

Apple, Google and Facebook are making their way to the top of that order, pushing aside some of the world’s biggest telcos, equipment makers and media companies. This trio, together with Amazon and Skype (soon to be a unit of Microsoft), are fundamentally changing consumers’ behaviour and dismantling longstanding TMT value chains, while opening up new markets and building new ecosystems.

Supported by hundreds of thousands of software developers, Apple, Google and Facebook’s platforms are fuelling innovation in consumer and, increasingly, business services on both the fixed and mobile Internet. Amazon has set the benchmark for online retailing and cloud computing services, while Skype is reinventing telephony, using IP technology to provide compelling new functionality and features, as well as low-cost calls.

On their current trajectory, these five companies are set to suck much of the value out of the telecoms services market, substituting relatively expensive and traditional voice and messaging services with low-cost, feature-rich alternatives and leaving telcos simply providing data connectivity. At the same time, Apple, Amazon, Google and Facebook have become major conduits for software applications, games, music and other digital content, rewriting the rules of engagement for the media industry.

In a Telco 2.0 online survey of industry executives conducted in September 2011, respondents said they expect Apple, Google, Facebook and Skype together to have a major impact on telcos’ voice and messaging revenues in the next three to five years. Although these declines will be partially compensated for by rising revenues from mobile data services, respondents also anticipate that telcos will see a major rise in data carriage costs (see Figure 1 – The potential combined impact of the disruptors on telcos’ core services).

In essence, we consider Amazon, Apple, Facebook, Google and Skype-Microsoft to be the most disruptive players in the TMT ecosystem right now and, to keep this report manageable, we have focused on these five giants. Still, we acknowledge that other companies, such as RIM, Twitter and Baidu, are also shaping consumers’ online behaviour and we will cover these players in more depth in future research.

The Internet is, of course, evolving rapidly and we fully expect new disruptors to emerge, taking advantage of the so-called Social, Local, Mobile (SoLoMo) forces, sweeping through the TMT landscape. At the same time, the big five will surely disrupt each other. Google is increasingly in head-to-head competition with Facebook, as well as Microsoft, in the online advertising market, while squaring up to Apple and Microsoft in the smartphone platform segment. In the digital entertainment space, Amazon and Google are trying to challenge Apple’s supremacy, while also attacking the cloud services market.

Investor trust

Unlike telcos, the disruptors are generally growing quickly and are under little, or no, pressure from shareholders to pay dividends. That means they can accumulate large war chests and reinvest their profits in new staff, R&D, more data centres and acquisitions without any major constraints. Investors’ confidence and trust enables the disruptors to spend money freely, keep innovating and outflank dividend-paying telcos, media companies and telecoms equipment suppliers.

By contrast, investors generally don’t expect telcos to reinvest all their profits in their businesses, as they don’t believe telcos can earn a sufficiently high return on capital. Figure 16 shows the dividend yields of the leading telcos (marked in blue). Of the disruptors, only Microsoft (marked in green) pays a dividend to shareholders.

Figure 16: Investors expect dividends, not growth, from telcos


Source: Google Finance 2/9/2011

The top telcos’ turnover and net income are comparable, or superior, to those of the leading disruptors, but this isn’t reflected in their respective market capitalisations. AT&T’s turnover is approximately four times that of Google and its net income twice as great, yet the two companies’ market capitalisations are similar. Even accounting for their different capital structures, investors clearly expect Google to grow much faster than AT&T and syphon off more of the value in the TMT sector.

More broadly, the disparity between the market capitalisations of the leading disruptors and those of the leading telcos suggests that investors expect Apple, Microsoft and Google’s revenues and profits to keep rising, while they believe telcos’ will be stable or go into decline. Figure 17 shows how the market capitalisation of the disruptors (marked in green) compares with that of the most valuable telcos (marked in blue) at the beginning of September 2011.

Figure 17: Investors value the disruptors highly


Source: Google Finance 2/9/2011 (Facebook valued at $66bn based on the IPG sale in August 2011)

Impact of disruptors on telcos

It has taken longer than many commentators expected, but Internet-based messaging and social networking services are finally eroding telcos’ SMS revenues in developed markets. KPN, for example, has admitted that smartphones equipped with data communications apps (and WhatsApp in particular) are impacting its voice and SMS revenues in its consumer wireless business in its home market of The Netherlands (see Figure 18). Reporting its Q2 2011 results, KPN said that changing consumer behaviour cut its consumer wireless service revenues in Holland by 2% year-on-year.

Figure 18: KPN reveals falling SMS usage


Source: KPN Q2 results

In the second quarter, Vodafone also reported a fall in messaging revenue in Spain and southern Africa, while Orange saw its average revenue per user from data and SMS services fall in Poland.

How telcos are fighting back

Big bundles

Carefully-designed bundles are the most common tactic telcos are using to try and protect their voice and messaging business. Most postpaid monthly contracts now come with hundreds of SMS messages and voice minutes, along with a limited volume of data, bundled into the overall tariff package. This mix encourages consumers to keep using the telcos’ voice and SMS services, which they are paying for anyway, rather than having Skype or another VOIP service soak up their precious data allowance.

To further deter usage of VOIP services, KPN and some other telcos are also creating tiered data tariffs offering different throughput speeds. The lower-priced tariffs tend to have slow uplink speeds, making them unsuitable for VOIP (see Figure 19 below). If consumers want to use VOIP, they will need to purchase a higher-priced data tariff, earning the telco back the lost voice revenue.

Figure 19: How KPN is trying to defend its revenues


Source: KPN’s Q2 results presentation
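To make the uplink point concrete, here is a rough check of whether a given uplink tier can carry a standard VOIP stream. The codec arithmetic is the usual textbook approximation for G.711 at 20ms packetisation (64kbps of audio becomes roughly 80kbps at the IP layer once RTP, UDP and IP headers are added); the two tariff tiers are hypothetical examples, not KPN’s actual speeds.

```python
# Rough check of whether a tariff tier's uplink can carry a VoIP call.
# Codec maths is the standard approximation; the tier speeds are hypothetical.

PACKETS_PER_SECOND = 50        # 20 ms packetisation
HEADER_BYTES = 12 + 8 + 20     # RTP + UDP + IPv4 headers per packet

def voip_uplink_kbps(codec_kbps: float) -> float:
    """IP-layer bandwidth for one voice stream (excludes link-layer overhead)."""
    payload_bytes = (codec_kbps * 1000 / 8) / PACKETS_PER_SECOND
    return (payload_bytes + HEADER_BYTES) * PACKETS_PER_SECOND * 8 / 1000

needed = voip_uplink_kbps(64)  # G.711: ~80 kbps at the IP layer

for tier_name, uplink_kbps in [("budget tier", 64), ("premium tier", 512)]:
    verdict = "feasible" if uplink_kbps >= needed else "impractical"
    print(f"{tier_name}: {uplink_kbps} kbps uplink -> VoIP {verdict}")
```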

Of course, such tactics can be undermined by competition – if one mobile operator in a market begins offering generous data-only tariffs, consumers may well gravitate towards that operator, forcing the others to adjust their tariff plans.

Moreover, bundling voice, SMS and data will generally only work for contract customers. Prepaid customers, who only want to pay for what they use, are naturally charged for each minute of calls they make and each message they send. These customers, therefore, have a stronger financial incentive to find a free WiFi network and use that to send messages via Facebook or make calls via Skype.

The Rich Communications Suite (RCS)

To fend off the threat posed by Skype, Facebook, Google and Apple’s multimedia communications services, telcos are also trying to improve their own voice and messaging offerings. Overseen by mobile operator trade association the GSMA, the Rich Communications Suite is a set of standards and protocols designed to enable mobile phones to exchange presence information, instant messages, live video footage and files across any mobile network.

In an echo of social networks, the GSMA says RCS will enable consumers to create their own personal community and share content in real time using their mobile device.

From a technical perspective, RCS uses the Session Initiation Protocol (SIP) to manage presence information and relay real-time information to the consumer about which service features they can use with a specific contact. The actual RCS services are carried over an IP-Multimedia Subsystem (IMS), which telcos are using to support a shift to all-IP fixed and mobile networks.
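For a flavour of the SIP mechanics involved, the sketch below assembles a bare-bones SIP OPTIONS request of the kind used in capability discovery. It is a heavily simplified illustration: the addresses, branch and tag values, and the feature tag are placeholders, and a real RCS client sends far richer requests via the operator’s IMS core.

```python
# Bare-bones SIP OPTIONS request of the kind used for capability discovery.
# Addresses, branch/tag values and the feature tag are illustrative placeholders;
# a real client includes many more headers and sends this via the IMS core.

def build_options_request(from_uri: str, to_uri: str) -> str:
    lines = [
        f"OPTIONS {to_uri} SIP/2.0",
        "Via: SIP/2.0/TCP client.example.net;branch=z9hG4bKexample",  # placeholder
        "Max-Forwards: 70",
        f"From: <{from_uri}>;tag=1234",  # placeholder tag
        f"To: <{to_uri}>",
        "Call-ID: example-call-id@client.example.net",
        "CSeq: 1 OPTIONS",
        f"Contact: <{from_uri}>;+g.example.capability",  # hypothetical feature tag
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines)

print(build_options_request("sip:alice@operator.example", "sip:bob@operator.example"))
```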

Deutsche Telekom, Orange, Telecom Italia, Telefonica and Vodafone have publicly committed to deploying RCS services, indicating that the concept has momentum in Europe, in particular. The GSMA says that interoperable RCS services will initially be launched by these operators in Spain, Germany, France and Italy in late 2011 and 2012. [NB We’ll be discussing RCSe with some of the operators at our EMEA event in London in November 2011.]

In theory, at least, RCS will have some advantages over many of the communications services offered by the disruptors. Firstly, it will be interoperable across networks, so you’ll be able to reach people using different service providers. Secondly, the GSMA says RCS service features will be automatically available on mobile devices from late 2011 without the need to download and install software or create an account (by contrast, Apple’s iMessage service, for example, will only be installed on Apple devices).

But questions remain over whether RCS devices will arrive in commercial quantities fast enough, whether RCS services will be priced in an attractive way and will be packaged and marketed effectively. Moreover, it isn’t yet clear whether IMS will be able to handle the huge signalling load that would arise from widespread usage of RCS.

Internet messaging protocols, such as XMPP, require the data channel to remain active continuously. Tearing down and reconnecting generates lots of signalling traffic, but the alternative – maintaining a packet data session – will quickly drain the device’s battery.

By 2012, Facebook and Skype may be even more entrenched than they are today, and their fans may see no need to use telcos’ RCS services.

Competing head-on

Some of the largest mobile operators have tried, and mostly failed, to take on the disruptors at their own game. Vodafone 360, for example, was Vodafone’s much-promoted but ultimately unsuccessful €500 million attempt to insert itself between its customers and social networking and messaging services from the likes of Facebook, Windows Live, Google and Twitter.

As well as aggregating contacts and feeds from several social networks, Vodafone 360 also served as a gateway to the telco’s app and music store. But most Vodafone customers didn’t appear to see the need to have an aggregator sit between them and their Facebook feed. During 2011, the service was stripped back to be just the app and music store. In essence, Vodafone 360 didn’t add enough value to what the disruptors are already offering. We understand, from discussions with executives at Vodafone, that the service is now being mothballed.

A small number of large telcos, mostly in emerging markets where smartphones are not yet commonplace, have successfully built up a portfolio of value-added consumer services that go far beyond voice and messaging. One of the best examples is China Mobile, which claims more than 82 million users for its Fetion instant messaging service, for example (see Figure 20 – China Mobile’s Internet Services).

Figure 20 – China Mobile’s Internet Services


Source: China Mobile’s Q2 2011 results

However, it remains to be seen whether China Mobile will be able to continue to attract so many customers for its (mostly paid-for) Internet services once smartphones with full web access go mass-market in China, making it easier for consumers to access third-parties’ services, such as the popular QQ social network.

Some telcos have tried to compete with the disruptors by buying innovative start-ups. A good example is Telefonica’s acquisition of VOIP provider Jajah for US$207 million in January 2010. Telefonica has since used Jajah’s systems and expertise to launch low-cost international calling services in competition with Skype and companies offering calling cards. Telefonica expects Jajah’s products to generate $280 million of revenue in 2011, primarily from low-cost international calls offered by its German and UK mobile businesses, according to a report in the FT.

The Wholesale Applications Community (WAC)

Concerned about their growing dependence on the leading smartphone platforms, such as Android and Apple’s iOS, many of the world’s leading telcos have banded together to form the Wholesale Applications Community (WAC).

WAC’s goal is to create a platform developers can use to create apps that will run across different device operating systems, while tapping the capabilities of telcos’ networks and messaging and billing systems.

At the Mobile World Congress in February 2011, WAC said that China Mobile, MTS, Orange, Smart, Telefónica, Telenor, Verizon and Vodafone are “connected to the WAC platform”, while adding that Samsung and LG will ensure “that all devices produced by the two companies that are capable of supporting the WAC runtime will do so.”

It also announced the availability of the WAC 2.0 specification, which supports HTML5 web applications, while WAC 3.0, which is designed to enable developers to tap network assets, such as in-app billing and user authentication, is scheduled to be available in September 2011.

Ericsson, the leading supplier of mobile networks, is a particularly active supporter of WAC, which also counts Alcatel-Lucent, Huawei, LG Electronics, Qualcomm, Research in Motion, Samsung and ZTE among its members.

In theory, at least, apps developers should also throw their weight behind WAC, which promises the so far unrealised dream of “write once, run anywhere.” But, in reality, games developers, in particular, will probably still want to build specific apps for specific platforms, to give their software a performance and functionality edge over rivals.

Still, the ultimate success or failure of WAC will likely depend on how enthusiastically Apple and Google, in particular, embrace HTML5 and actively support it in their respective smartphone platforms. We discuss this question further in the Apple and Google chapters of this report.

Summarising telcos’ current response to the disruptors

 

Telcos, and their close allies in the equipment market, are clearly alert to the threat posed by the major disruptors, but they have yet to develop a comprehensive game plan that will enable them to protect their voice and messaging revenue, while expanding into new markets.

Collective activities, such as RCS and WAC, are certainly necessary and worthwhile, but are not enough. Telcos, and companies across the broader TMT ecosystem, also need to adapt their individual strategies to the rise of Amazon, Apple, Facebook, Google and Skype-Microsoft. This report is designed to help them do that.

[END OF EXTRACT]

 

‘Under-The-Floor’ (UTF) Players: threat or opportunity?

Introduction

The ‘smart pipe’ imperative

In some quarters of the telecoms industry, the received wisdom is that the network itself is merely an undifferentiated “pipe”, providing commodity connectivity, especially for data services. The value, many assert, is in providing higher-tier services, content and applications, either to end-users, or as value-added B2B services to other parties. The Telco 2.0 view is subtly different. We maintain that:

  1. Increasingly, valuable services will be provided by third parties, but operators can provide a few end-user services themselves. They will, for example, continue to offer voice and messaging services for the foreseeable future.
  2. Operators still have an opportunity to offer enabling services to ‘upstream’ service providers, such as personalisation and targeting (of marketing and services) using their customer data, payments, identity and authentication, and customer care.
  3. Even if operators fail at (or choose not to pursue) options 1 and 2 above, the network must be ‘smart’, and all operators will pursue at least a ‘smart network’ or ‘Happy Pipe’ strategy. This will enable operators to achieve three things:
  • To ensure that data is transported efficiently so that capital and operating costs are minimised and the Internet and other networks remain cheap methods of distribution.
  • To improve user experience by matching the performance of the network to the nature of the application or service being used – or indeed vice versa, adapting the application to the actual constraints of the network. ‘Best efforts’ is fine for asynchronous communication, such as email or text, but unacceptable for traditional voice telephony. A video call or streamed movie could exploit guaranteed bandwidth where available, or else self-optimise to conditions of network congestion or poor coverage, if these are well understood. Other services have different criteria – for example, real-time gaming demands ultra-low latency, while corporate applications may demand the most secure and reliable path through the network.
  • To charge appropriately for access to and/or use of the network. It is becoming increasingly clear that the Telco 1.0 business model – that of charging the end-user per minute or per megabyte – is under pressure as new business models for the distribution of content and transportation of data are being developed. Operators will need to be capable of charging different players – end-users, service providers, third-parties (such as advertisers) – on a real-time basis for provision of broadband and maybe various types or tiers of quality of service (QoS); a toy sketch of such multi-party charging follows this list. They may also need to offer SLAs (service level agreements), monitor and report actual “as-experienced” quality metrics or expose information about network congestion and availability.
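To make the idea of charging different players for different QoS tiers concrete, here is a minimal sketch in Python. It is purely illustrative: the payer types, tier names and rates are invented for this example and are not drawn from the report.

```python
# Toy real-time charging across payer types and QoS tiers.
# All payer names, tier names and rates are invented for illustration.
RATE_CARD = {
    # (payer, qos_tier): price in $ per gigabyte carried
    ("end_user", "best_effort"): 0.50,
    ("advertiser", "best_effort"): 0.10,
    ("service_provider", "guaranteed_bandwidth"): 2.00,
    ("service_provider", "low_latency"): 3.50,
}

def charge(payer: str, qos_tier: str, gigabytes: float) -> float:
    """Price a single usage event; unknown payer/tier combinations are rejected."""
    try:
        rate = RATE_CARD[(payer, qos_tier)]
    except KeyError:
        raise ValueError(f"no tariff for {payer!r} at tier {qos_tier!r}")
    return rate * gigabytes

# A video service pays for guaranteed bandwidth on behalf of its viewers,
# while an advertiser pays a small fee for best-effort delivery.
print(charge("service_provider", "guaranteed_bandwidth", 1.5))  # 3.0
print(charge("advertiser", "best_effort", 1.5))                 # 0.15
```

The rate card keyed on (payer, tier) captures the essential shift: the same traffic can be monetised differently depending on who is paying and what service level they have bought.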

‘Under-the-floor’ players threaten control (and smartness)

Either through deliberate actions such as outsourcing, or through external agency (government intervention, greenfield competition, etc.), we see the network part of the telco universe suffering a creeping loss of control and ownership. There is a steady move towards outsourced networks, as they are shared or built around the concepts of open access and wholesale. While this would be fine if the telcos themselves remained in control of the trend (we see significant opportunities in wholesale and infrastructure services), in many cases the opposite is occurring. Telcos are losing control and, in our view, losing influence over their core asset – the network. They are worrying so much about competing with so-called OTT providers that they are missing the threat from below.

At the point at which many operators, at least in Europe and North America, are seeing the services opportunity ebb away, and an ever-greater dependency on new models of data connectivity provision, they are potentially cutting off (or being cut off from) one of their real differentiators.

Given the uncertainties around both fixed and mobile broadband business models, it is sensible for operators to retain as many business model options as possible. Operators are battling with significant commercial and technical questions, such as:

  • Can upstream monetisation really work?
  • Will regulators permit priority services under Net Neutrality regulations?
  • What forms of network policy and traffic management are practical, realistic and responsive?

Answers to these and other questions remain opaque. However, it is clear that many of the potential future business models will require networks to be physically or logically re-engineered, and flexible back-office functions, such as billing and OSS, to be closely integrated with the network.

Outsourcing networks to third-party vendors is dangerous in these circumstances, particularly when the network is shared with other operators. Partners that today agree on the principles for network-sharing may have very different strategic views and goals in two years’ time, especially given the unknown use-cases for new technologies like LTE.

This report considers all these issues and gives guidance to operators who may not have considered all the various ways in which network control is being eroded, from Government-run networks through to outsourcing services from the larger equipment providers.

Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators

Source: STL Partners

Industry structure is being reshaped

Over the last year, Telco 2.0 has updated its overall map of the telecoms industry to reflect ongoing dynamics in both the fixed and mobile arenas. In our strategic research reports on Broadband Business Models, and the Roadmap for Telco 2.0 Operators, we have explored the emergence of various new “buckets” of opportunity, such as verticalised service offerings, two-sided opportunities and enhanced variants of traditional retail propositions.

In parallel, we have also looked again at some changes in the traditional wholesale and infrastructure layers of the telecoms industry. Historically, these have largely comprised basic capacity resale and some “behind the scenes” use of carrier’s-carrier services (roaming hubs, satellite / sub-oceanic transit, etc.).

Figure 2 – Telco 1.0 Wholesale & Infrastructure structure

Source: STL Partners

Contents

  • Revising & extending the industry map
  • ‘Network Infrastructure Services’ or UTF?
  • UTF market drivers
  • Implications of the growing trend in ‘under-the-floor’ network service providers
  • Networks must be smart and controlling them is smart too
  • No such thing as a dumb network
  • Controlling the network will remain a key competitive advantage
  • UTF enablers: LTE, WiFi & carrier ethernet
  • UTF players could reduce network flexibility and control for operators
  • The dangers of ceding control to third-parties
  • No single answer for all operators but ‘outsourcer beware’
  • Network outsourcing & the changing face of major vendors
  • Why become an under-the-floor player?
  • Categorising under-the-floor services
  • Pure under-the-floor: the outsourced network
  • Under-the-floor ‘lite’: bilateral or multilateral network-sharing
  • Selective under-the-floor: Commercial open-access/wholesale networks
  • Mandated under-the-floor: Government networks
  • Summary categorisation of under-the-floor services
  • Next steps for operators
  • Build scale and a more sophisticated partnership approach
  • Final thoughts
  • Index

 

  • Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators
  • Figure 2 – Telco 1.0 Wholesale & Infrastructure structure
  • Figure 3 – The battle over infrastructure services is intensifying
  • Figure 4 – Examples of network-sharing arrangements
  • Figure 5 – Examples of Government-run/influenced networks
  • Figure 6 – Four under-the-floor service categories
  • Figure 7 – The need for operator collaboration & co-opetition strategies

Cloud 2.0: Telcos to Grow Revenues 900% by 2014

Summary: Telcos should grow Cloud Services revenues nine-fold and triple their overall market share in the next three years according to delegates at the May 2011 EMEA Executive Brainstorm. But which are the best opportunities and strategies? (June 2011, Executive Briefing Service, Cloud & Enterprise ICT Stream)

NB Members can download a PDF of this Analyst Note in full here. Cloud Services will also feature at the Best Practice Live! Free global virtual event on 28-29 June 2011.


Introduction

STL Partners’ New Digital Economics Executive Brainstorm & Developer Forum EMEA took place from 11-13 May in London. The event brought together 250 execs from across the telecoms, media and technology sectors to take part in 6 co-located interactive events: the Telco 2.0, Digital Entertainment 2.0, Mobile Apps 2.0, M2M 2.0 and Personal Data 2.0 Executive Brainstorms, and an evening AppCircus developer forum.

Building on output from the last Telco 2.0 events and new analysis from the Telco 2.0 Initiative – including the new strategy report ‘The Roadmap to New Telco 2.0 Business Models’ – the Telco 2.0 Executive Brainstorm explored latest thinking and practice in growing the value of telecoms in the evolving digital economy.

This document gives an overview of the output from the Cloud session of the Telco 2.0 stream.

Companies referenced: Aepona, Amazon Web Services, Apple, AT&T, Bain, BT, CenturyLink, Cisco, Dropbox, Embarq, Equinix, Flexible 4 Business, Force.com, Google Apps, HP, IBM, Intuit, Microsoft, Neustar, Orange, Qwest, Salesforce.com, SAP, Savvis, Swisscom, Terremark, T-Systems, Verizon, Webex, VMware.

Business Models and Technologies covered: cloud services, Enterprise Private Cloud (EPC), Virtual Private Cloud (VPC), Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

Cloud Market Overview: 25% CAGR to 2013

Today, telcos have around a 5% share of nearly $20Bn p.a. cloud services revenue, with a 25% compound annual growth rate (CAGR) forecast to 2013. Most market forecasts suggest that the total cloud services market will reach c.$45-50Bn revenue by 2013/2014, including the Bain forecast previewed at the Americas Telco 2.0 Brainstorm in April 2011.

At the EMEA brainstorm, delegates were presented with an overview of the component cloud markets and examples of different cloud services approaches, and were then asked for their views on what share telcos could take of cloud revenues in each. In aggregate, delegates’ views amounted to telcos taking around 18% of cloud services revenue by the end of the next three years.

Applying these views to an extrapolated ‘mid-point’ forecast of the Cloud market in 2014 implies that telcos will take just under $9Bn of Cloud revenue by 2014, increasing today’s c.$1Bn share nine-fold; the arithmetic is sketched below. [NB More detailed methodology and sources are in the full paper available to members here.]
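As a quick check on that headline figure, here is the arithmetic; the mid-point market size below is an assumption for illustration.

```python
market_2014 = 47.5  # assumed mid-point of the c.$45-50Bn forecasts, in $Bn
telco_share = 0.18  # delegates' aggregate view of the telco share
telco_revenue = telco_share * market_2014
print(f"${telco_revenue:.1f}Bn")  # ~$8.6Bn: just under $9Bn, ~9x today's c.$1Bn
```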

Figure 1 – Cloud Services Market Forecast & Players

Source: Telco 2.0 Presentation

Although already a multi-$Bn market, there is still a reasonable degree of uncertainty and variance in Cloud forecasts, as might be expected in a still-maturing market. The market could therefore turn out a lot higher – or perhaps lower, especially if the consequences of the recent Amazon AWS outage significantly reduce CIOs’ appetite for Cloud.

The potential for c.30% IT cost savings and speed-to-market benefits from telcos implementing Cloud internally, previously shown in Cisco’s case study, was highlighted but not explored in depth at this session.

Which cloud markets should telcos target?

Figure 2 – Cloud Services – Telco Positioning

Source: Cisco/Orange Presentation, 13th Telco 2.0 Executive Brainstorm, London, May 2011

An interesting feature of the debate was which areas telcos would be most successful in, and the timing of market-entry strategies. Orange and Cisco argued that ‘Virtual Private Cloud’, although neither the largest area nor the one predicted to grow fastest, should be the first market for some telcos to address, as it appeals to telcos’ strong ‘trust’ credentials with CIOs and builds on ‘managed services’ enterprise IT sales and delivery capabilities.

Orange described its ‘Flexible 4 Business’ value proposition, delivered in partnership with Cisco and built on VMware virtualisation and EMC storage. Although Orange could not give performance metrics at this early stage, it described strong demand and claimed satisfaction with progress to date.

Aepona described a Platform-as-a-Service (PaaS) concept, due to launch shortly with Neustar, that aggregates telco APIs to enable the rapid creation and marketing of new enterprise services.

Figure 3 – Aepona / Neustar ‘Intelligent Cloud’ PaaS Concept

In this instance, the cloud component makes the service more flexible, cheaper and easier to deliver than a traditional IT structure. This type of concept is sometimes described as a ‘mobile cloud’ because many of the interesting uses relate to mobile applications, and are not reliant on the continuous, high-grade mobile connectivity required for, say, IaaS: rather, they can make use of bursts of connectivity to validate identities and perform similar tasks via APIs ‘in the cloud’, as the sketch below illustrates.
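The following is a minimal sketch of what calling such an aggregated telco API might look like from a developer’s perspective. The endpoint, payload and response fields are entirely hypothetical and are not Aepona’s or Neustar’s actual interface.

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical aggregated telco-API endpoint -- illustrative only.
API_URL = "https://api.example-telco-cloud.com/v1/identity/verify"

def verify_subscriber(msisdn: str, api_key: str) -> bool:
    """One short, bursty call asking the network to confirm a subscriber's
    identity -- no continuous connectivity is needed, unlike IaaS."""
    resp = requests.post(
        API_URL,
        json={"msisdn": msisdn},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=5,  # tolerate flaky mobile links by failing fast
    )
    resp.raise_for_status()
    return resp.json().get("verified", False)

# Example use: gate a purchase on network-verified identity.
# if verify_subscriber("+447700900123", api_key="..."): complete_sale()
```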

To read the rest of this Analyst Note, containing…

  • Forecasts of telco share of cloud by VPC, IaaS, PaaS and SaaS
  • Telco 2.0 take-outs and next steps
  • And detailed Brainstorm delegate feedback

Members of the Telco 2.0™ Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can access and download a PDF of the full report here. Non-Members, please see here for how to subscribe. Alternatively, please email contact@telco2.net or call +44 (0) 207 247 5003 for further details.

Cloud 2.0: What are the Telco Opportunities?

Summary: Telco 2.0’s analysis of operators’ potential role and opportunity in ‘Cloud Services’, a set of new business model opportunities that are still at an early stage of development – although players such as Amazon have already blazed a substantial trail. (December 2010, Executive Briefing Service, Cloud & Enterprise ICT Stream & Foundation 2.0)

  • Below is an extract from this Telco 2.0 Report. The report can be downloaded in full PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here.
  • Additionally, to give an introduction to the principles of Telco 2.0 and digital business model innovation, we now offer for download a small selection of free Telco 2.0 Briefing reports (including this one) and a growing collection of what we think are the best 3rd party ‘white papers’. To access these reports you will need to become a Foundation 2.0 member. To do this, use the promotional code FOUNDATION2 in the box provided on the sign-up page here. NB By signing up to this service you give consent to us passing your contact details to the owners / creators of any 3rd party reports you download. Your Foundation 2.0 member details will allow you to access the reports shown here only, and once registered, you will be able to download the report here.
  • See also the videos from IBM on what telcos need to do, and Oracle on the range of Cloud Services, and the Telco 2.0 Analyst Note describing Americas and EMEA Telco 2.0 Executive Brainstorm delegates’ views of the Cloud Services Opportunity for telcos.
  • We’ll also be discussing Cloud 2.0 at the Silicon Valley (27-28 March) and London (12-13 June) Executive Brainstorms.
  • To access reports from the full Telco 2.0 Executive Briefing service, or to submit whitepapers for review for inclusion in this service, please email contact@telco2.net or call +44 (0) 207 247 5003.


The Cloud: What Is It?

Apart from being the leading buzzword in the enterprise half of the IT industry for the last few years, what is this thing called “Cloud”? Specifically, how does it differ from traditional server co-location, or indeed time-sharing on mainframes as we did in the 1970s? These are all variations on the theme of computing power being supplied from a remote machine shared with other users, rather than from PCs or servers deployed on-site.

Two useful definitions were voiced at the 11th Telco 2.0 EMEA Executive Brainstorm in November 2010:

  • “A standardised IT Capability delivered in a pay-per-use, self-service way.” Stephan Haddinger, Chief Architect Cloud Computing, Orange – citing a definition by Forrester.
  • “STEAM – A Self-Service, multi-Tenanted, Elastic, broad Access, and Metered IT Service.” Neil Sholay, VP Cloud and Comms, EMEA, Oracle.

The definition of Cloud has been rendered significantly more complicated by the hype around “cloud” and the resultant tendency to use it for almost anything that is network resident. For a start, it’s unhelpful to describe anything that includes a Web site as “cloud computing”. A good way to further understand ‘Cloud Services’ is to look at the classic products in the market.

The most successful of these, Amazon’s S3 and EC2, provide low-level access to computing resources – disk storage, in S3, and general-purpose CPU in EC2. This differs from an ASP (Application Service Provider) or Web 2.0 product in that what is provided isn’t any particular application, but rather something close to the services of a general purpose computer. It differs from traditional hosting in that what is provided is not access to one particular physical machine, but to a virtual machine environment running on many physical servers in a data-centre infrastructure, which is probably itself distributed over multiple locations. The cloud operator handles the administration of the actual servers, the data centres and internal networks, and the virtualisation software used to provide the virtual machines.

Varying degrees of user control over the system are available. A major marketing point, however, is that the user doesn’t need to worry about system administration – it can be abstracted out as in the cloud graphic that is used to symbolise the Internet on architecture diagrams. This tension between computing provided “like electricity” and the desire for more fine-grained control is an important theme. Nobody wants to specify how their electricity is routed through the grid, although increasing numbers of customers want to buy renewable power – but it is much more common for businesses (starting at surprisingly small scale) to have their own Internet routing policies.

So, for example, although Amazon’s cloud services are delivered from their global data centre infrastructure, it’s possible to specify where EC2 instances run to a continental scale. This provides for compliance with data protection law as well as for performance optimisation. Several major providers, notably Rackspace, BT Global Services, and IBM, offer “private cloud” services which represent a halfway house between hosting/managed service and fully virtualised cloud computing. And some explicit cloud products, such as Google’s App Engine, provide an application environment with only limited low-level access, as a rapid-prototyping tool for developers.
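As a concrete illustration of this mix of abstraction and coarse-grained location control, below is a minimal sketch using boto3, Amazon’s current Python SDK (which post-dates this report); the bucket name and region are assumptions for the example.

```python
import boto3  # AWS SDK for Python: pip install boto3

# Pin storage to a region, e.g. to keep EU customer data inside the EU
# for data-protection compliance; the exact servers remain invisible.
REGION = "eu-west-1"
s3 = boto3.client("s3", region_name=REGION)

# Create a bucket constrained to that region (name is illustrative).
s3.create_bucket(
    Bucket="example-eu-customer-data",
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Store and retrieve an object without ever naming a physical machine.
s3.put_object(Bucket="example-eu-customer-data", Key="record.txt", Body=b"hello")
obj = s3.get_object(Bucket="example-eu-customer-data", Key="record.txt")
print(obj["Body"].read())
```

The same pattern holds for EC2: an instance type and a region are specified, but never one particular server.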

The Cloud: Why Is It?

Back at the November 2009 Telco 2.0 Executive Brainstorm in Orlando, Joe Weinman of AT&T presented an argument that cloud computing is “a mathematical inevitability”. His fundamental point is worth expanding on. For many cloud use cases, the decision between moving into the cloud and using a traditional fleet of hosted servers is essentially a rent-vs-buy calculus. Weinman’s point was that once you acquire servers, whether you own them and co-locate or rent them from a hosting provider, you are committed to that quantity of computing capacity whether you use it or not. Scaling up presents some problems, but it is not that difficult to co-locate more 1U servers. What is really problematic is scaling down.

Cloud computing services address this by basically providing volume pricing for general-purpose computing – you pay for what you use. It therefore has an advantage when there are compute-intensive tasks with a highly skewed traffic distribution, in a temporary deployment, or in a rapid-prototyping project. However, problems arise when there is a need for capacity on permanent standby, or serious issues of data security, business continuity, service assurance, and the like. These are also typical rent-vs-buy issues.
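Weinman’s rent-vs-buy point can be made concrete with a toy calculation; all prices and utilisation figures below are invented for illustration.

```python
# Toy rent-vs-buy comparison; every number here is invented for illustration.
OWNED_COST_PER_SERVER_MONTH = 300.0  # amortised hardware + co-location, paid 24/7
CLOUD_COST_PER_SERVER_HOUR = 0.60    # on-demand rate, paid only while running
HOURS_PER_MONTH = 730

def monthly_cost(peak_servers: int, avg_utilisation: float) -> tuple[float, float]:
    """Owned capacity must be sized for the peak; cloud is billed on actual use."""
    owned = peak_servers * OWNED_COST_PER_SERVER_MONTH
    cloud = peak_servers * avg_utilisation * HOURS_PER_MONTH * CLOUD_COST_PER_SERVER_HOUR
    return owned, cloud

# Highly skewed load: a 100-server peak, but only 10% average utilisation.
print(monthly_cost(peak_servers=100, avg_utilisation=0.10))  # (30000.0, 4380.0)

# Steady load: the same peak run almost flat-out.
print(monthly_cost(peak_servers=100, avg_utilisation=0.90))  # (30000.0, 39420.0)
```

With the skewed load, renting costs a fraction of owning; run the same capacity flat-out and ownership wins – exactly the trade-off described above.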

Another reason to move to the cloud is that providing high-availability computing is expensive and difficult. Cloud computing providers’ core business is supporting large numbers of customers’ business-critical applications – it might make sense to pass this task to a specialist. Also, their typical architecture, using virtualisation across large numbers of PC-servers to achieve high availability in the manner popularised by Google, doesn’t make sense except on a scale big enough to provide a significant margin of redundancy in the hardware and in the data centre infrastructure.

Why Not the Cloud?

The key objections to the cloud are centred around trust – one benefit of spreading computing across many servers in many locations is that this reduces the risk of hardware and/or connectivity failure. However, the problem with moving your infrastructure into a multi-tenant platform is of course that it’s another way of saying that you’ve created a new, enormous single point of commercial and/or software failure. It’s also true that the more critical and complex the functions that are moved into cloud infrastructure, and the more demanding the contractual terms that result, the more problematic it becomes to manage the relationship. (Neil Lock, IT Services Director at BT Global Services, contributed an excellent presentation on this theme at the 9th Telco 2.0 Executive Brainstorm.) At some point, the additional costs of managing the outsourcer relationship intersect with the higher costs of owning the infrastructure and internalising the contract. One option involves spending more money on engineers, the other, spending more money on lawyers.

Similar problems exist with regard to information security – a malicious actor who gains access to administrative features of the cloud solution has enormous opportunities to cause trouble, and the scaling features of the cloud mean that it is highly attractive to spammers and denial-of-service attackers. Nothing else offers them quite as much power.

Also, as many cloud systems make a virtue of the fact that the user doesn’t need to know much about the physical infrastructure, it may be very difficult to guarantee compliance with privacy and other legislation. Financial and other standards sometimes mandate specific cryptographic, electronic, and physical security measures. It is quite possible that users of the major clouds would be unable to say in which jurisdiction their customers’ personal data is stored. Providers may consider this a feature, but whether it is acceptable depends heavily on the nature of your business.

From a provider perspective, the chief problem with the cloud is commoditisation. At present, major clouds are the cheapest way bar none to buy computing power. However, the very nature of a multi-tenant platform demands significant capital investment to deliver the reliability and availability the customers expect. The temptation will always be there to oversubscribe the available capacity – until the first big outage. A capital intensive, very high volume, and low price business is the classic case of a commodity – many operators would argue that this is precisely what they’re trying to get away from. Expect vigorous competition, low margins, and significant CAPEX requirements.

To download a full PDF of this article, covering…

  • What’s in it for Telcos?
  • Conclusions and Recommendations

…Members of the Telco 2.0™ Executive Briefing Subscription Service and the Cloud & Enterprise ICT Stream can read the Executive Summary and download the full report in PDF format here. Non-Members, please email contact@telco2.net or call +44 (0) 207 247 5003 for further details.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables:

Cloud Services 2.0: Clearing Fog, Sunshine Forecast, say Telco 2.0 Delegates

Summary: The early stage of development of the market means there is some confusion about the telco Cloud opportunity, yet clarity is starting to emerge, and the concept of ‘Network-as-a-Service’ found particular favour with Telco 2.0 delegates at our October 2010 Americas and November 2010 EMEA Telco 2.0 Executive Brainstorms. (December 2010, Executive Briefing Service, Cloud & Enterprise ICT Stream)

The full 15-page PDF report is available for members of the Executive Briefing Service and Cloud and Enterprise ICT Stream here. For membership details please see here, or to join, email contact@telco2.net or call +44 (0) 207 247 5003. Cloud Services will also feature at Best Practice Live!, Feb 2-3 2011, and the 2011 Telco 2.0 Executive Brainstorms.

Executive Summary

Clearing Fog

Cloud concepts can sometimes seem as baffling and as nebulous as their namesakes. However, at the recent Telco 2.0 Executive Brainstorms (Americas in October 2010 and EMEA in November 2010), stimulus presentations by IBM, Oracle, FT-Orange Group, Deutsche Telekom, Intel, Salesforce.com, Cisco and BT-Ribbit, together with delegate discussions, really brought the Cloud Services opportunities to life.

While it was generally agreed that precise definitions delineating the many possible varieties of the service are not always useful, how operators can make money from the services does matter, and there was at least consensus on this.

Sunshine Forecast: A Significant Opportunity…

IBM identified an $88.5Bn opportunity in the Cloud over the next 5 years, the majority of which is applicable to telcos, although the share that will end up in the telco industry might be as much as 70% or as little as 30%, depending on how operators go about it (video here).

According to Cisco, there is a $44Bn telco opportunity in Cloud Services by 2014, supported by the evidence of 30%+ enterprise IT cost savings and productivity gains that resulted from Cisco’s own comprehensive internal adoption of cloud services (video here). We see this estimate as reasonably consistent with IBM’s.

Oracle also brought the range of opportunities to life with seven contrasting real-life case studies (video here).

Ribbit, AT&T, and Salesforce.com also supported the viability of Cloud Services, arguing that concerns over trust and privacy are gradually being allayed. Intel argued that Network-as-a-Service (NaaS) is emerging as a cloud opportunity alongside Enterprise and Public Clouds, and that by combining NaaS with telcos’ influence over devices and device computing power, telcos can be a major player in a new ‘Pervasive Computing’ environment. EMEA delegates also viewed Network-as-a-Service as the most attractive opportunity.

Figure 1 – Delegates favoured ‘Network-as-a-Service’ among the Cloud opportunities

Source: Telco 2.0 Delegate Vote, 11th Brainstorm, EMEA, Nov 2010.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables:

Cloud 2.0: What Should Telcos Do? IBM’s View

Summary: IBM say that telcos are well positioned to provide cloud services, and forecast an $89Bn opportunity over 5 years globally. Video presentation and slides (members only) including forecast, case studies, and lessons for future competitiveness.

Cloud Services will also feature at Best Practice Live!, Feb 2-3 2011, and the 2011 Telco 2.0 Executive Brainstorms.

 

At the 11th EMEA Telco 2.0 Brainstorm, November 2010, Craig Wilson, VP, IBM Global Telecoms Industry, said that:

  • Cloud Services represent an $89Bn opportunity over 5 years;
  • Telcos / Service Providers are “well positioned” to compete in Cloud Services;
  • Security remains the CIO’s biggest question mark, but one that telcos can help with.

He also outlined two APAC telco Cloud case studies.

Members of the Telco 2.0 Executive Briefing Service and the Cloud and Enterprise ICT Stream can also download Craig’s presentation here (for membership details please see here, or to join, email contact@telco2.net or call +44 (0) 207 247 5003).

See also videos by Oracle describing a range of cloud case studies, Cisco on the market opportunity and their own case study of Cloud benefits, and Telco 2.0’s Analyst Note on the Cloud Opportunity.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables: