The Telco Cloud Manifesto 2.0

Nearly two years on from our first Telco Cloud Manifesto, published in March 2021, we are even more convinced that going through the pain of learning how to orchestrate and manage network workloads in a cloud-native environment is essential if telcos are to create new business models, such as Network-as-a-Service in support of edge compute applications.

Since the first Manifesto, hyperscalers have emerged as powerful partners and enablers for telcos’ technology transformation. But telcos that simply outsource to hyperscalers the delivery and management of their telco cloud, and of the multi-vendor, virtualised network functions that run on it, will never realise the true potential of telco cloudification. By contrast, evolving and maintaining an ability to orchestrate and manage multi-vendor, virtualised network functions end-to-end across distributed, multi-domain and multi-vendor infrastructure represents a vital control point that telcos should not surrender to the hyperscalers and vendors. Doing so could relegate telcos to a role as mere physical connectivity and infrastructure providers helping to deliver services developed, marketed and monetised by others.

In short, operators must take on the ‘workload’ of transforming into and acting as cloud-centric organisations before they shift their ‘workloads’ to the hyperscale cloud. In this updated Manifesto, we outline why, and what telcos at different stages of maturity should prioritise.

Two developments have taken place since the publication of our first manifesto that have changed the terms on which telcos are addressing network cloudification:

  • Hyperscale cloud providers have increasingly developed capabilities and commercial offers in the area of telco cloud. To telcos uncertain about the strategy and financial implications of the next phase of their investments, the hyperscalers appear to offer a shortcut to telco cloud: the possibility of avoiding the hard yards of developing a private telco cloud, and of evolving the internal skills and processes for deploying and managing multi-vendor VNFs / CNFs on it. Instead, the hyperscalers offer the prospect of consuming telco cloud and VNFs / CNFs on an ‘as-a-Service’ basis – fundamentally like any other cloud service.
  • In April 2021, DISH announced it would build its greenfield 5G network with AWS providing much of the virtual infrastructure layer and all of the physical cloud infrastructure. In June 2021, AT&T sold its private telco cloud platform to Microsoft Azure. In both instances, the telcos involved are now deploying mobile core network functions – and, in DISH’s case, all of the software-based functions of its network – on a hyperscale cloud. These events appear superficially to set an example validating the idea of outsourcing telco cloud to the hyperscalers. After all, AT&T had previously been a champion of the DIY approach to telco cloud but now looked as though it had thrown in the towel and gone all in, sourcing its cloud from Azure.

Two main questions arise from these developments, which we address in detail in this second Manifesto:

  • Should telcos that have embarked, or are embarking, on a Pathway 2 strategy outsource their telco cloud infrastructure and procure their critical network functions – in whole or in part – from one or more hyperscalers, on an as-a-Service basis?
  • What is the broader significance of AT&T’s and DISH’s moves? Do they represent the logical culmination of telco cloudification and, if so, what are the technological and business-model characteristics of the ‘infrastructure-independent, cloud-native telco’, as we define this new Pathway 4? Finally, is this a model that all Pathway 3 players – and even all telcos per se – should ultimately seek to emulate?

In this second Manifesto, we also propose an updated version of our pathways, which describe the network cloudification strategies available to different sizes and types of telco. We now define four pathways (the original Manifesto had three), as illustrated in the figure below.

The four telco cloud deployment pathways in STL’s Telco Cloud Manifesto 2.0

Source: STL Partners, 2023



Table of contents

  • Executive Summary
    • Recommendations
  • Pathway 1: No way back
    • Two constituencies at operators: Cloud sceptics and cloud advocates
  • Pathway 2: Hyperscalers – friend or foe?
    • Cloud-native network functions are a vital control point telcos must not relinquish
  • Pathway 3: Build own telco cloud competencies before deploying on public cloud
    • AT&T and DISH are important proof points but not applicable to the industry as a whole
    • But telcos will not realise the full benefits of telco cloud unless they, too, become software and cloud businesses
  • Pathway 4: The path to Network-as-a-Service
    • Pathway 4 networks will enable Network-as-a-Service
  • Conclusion: Mastery of cloud-native is key for telcos to create value in the Coordination Age


Data-driven telecoms: navigating regulations

Regulation has a significant impact on global communications markets

Telco relationships with telecoms regulators, and with the governments that influence them, are very important. For data-driven telecoms, telcos must now also understand the regulation of digital markets, and how different types of data are treated, stored and transferred around the world. Data-driven telecoms is an essential part of telecoms growth strategy: the massive growth enjoyed by the global tech giants, in contrast with the stagnation of the telecoms industry, is a significant lure for telcos to harness data and become digital businesses themselves. Of course, this necessitates complying with digital regulations, and understanding their direction.

Additionally, by participating in digital markets, and by digitising their own systems, telcos are necessarily working with, and sometimes competing against, the global digital giants, for whom this legislation is essential to their ongoing business practices. Political reaction against some practices of these digital giants is leading to toughened stances on digital regulation around the world, and to a tarnished public perception.

Most businesses are affected by digital regulation to some extent, but those most deeply embedded in digital markets feel it most, especially the hyperscalers. What do Google, Meta, Microsoft et al. need to do differently as digital regulations evolve and new standards come into play? And for telcos, beyond compliance, are there opportunities presented by new digital regulations? How can telcos and the digital giants evolve their relationships with the entities that regulate them? Can they ultimately work together to create a better future based on the Coordination Age vision, or will they remain adversaries, with lines drawn between profit and public good?

What is digital regulation?

The report covers two important aspects of digital regulation for telecoms players – data governance and digital market regulations.

It does not cover a third theme in digital regulation – the regulation of potentially harmful content and the responsibilities of digital platforms in this regard. This is a complex and far-reaching issue, affecting global trade agreements, sparking philosophical debates and creating some tricky public relations challenges for digital platform providers. For the purposes of this report, however, we set this issue aside and focus on data governance and the regulation of digital markets, which have the most direct relevance to telcos.

Data governance is a large topic, covering the treatment, storage and transfer of all kinds of data. Different national and regional regulatory bodies may take different approaches to data governance rules, depending broadly on where they strike the balance between prioritising security, privacy and the rights of the individual, and enabling the free flow of data needed to fuel the growth of digital industries.

Regulation around data governance also naturally splits into two areas, one concerning personal data and the other concerning industrial data, with greater regulatory scrutiny focused on the former. The regulation of these two types of data is necessarily different, because concerns about privacy only really apply to data that can be associated with individual people, although there may still be requirements around security and fair access for industrial data. Examples of data governance regulation are the EU’s General Data Protection Regulation (GDPR) concerning personal data and the Data Act concerning industrial data, or the Data Privacy and Protection Act in the US. All of these examples are discussed in greater detail in the main body of the report.


Significant types of digital regulation

Source: STL Partners

Regulation specific to policing digital markets has emerged where regulatory bodies decide that general competition law is not sufficient to serve digital markets, and that more specific and tailored rules or remedies are needed. Like other forms of competition law, this regulation aims to promote fair and open competition and to curb market participants deemed to possess significant market power. Regulations of this nature are always somewhat controversial, because the exact boundaries of what constitutes significant market power have to be defined, and can be argued to be arbitrary or incorrectly drawn. Examples of this type of regulation, discussed in depth later in the report, are the Digital Markets Act in the EU and the Innovation and Choice Online Act in the US.

A global perspective

The market for digital services is by its nature global. Digital giants like Google, Meta, Amazon and Apple offer a wide variety of digital services, both B2B and B2C, all over the world. Those services are provisioned using storage, compute power and even human workforces that may or may not be located in the country, or even the region, in which the service is consumed. Digital regulations, especially those concerning data governance, are therefore globally significant.

A global market

Source: STL Partners

This report places significant focus on the regulatory agendas of the European Union and the United States, because these are two of the most significant and influential global powers in setting trends in digital regulation. This significance derives partly from market size: in a global market such as that for digital services, regulations that cover a large number of potential customers carry more weight, and the European Union has a population of roughly 447 million, while the US has around 332 million. The US also maintains its significant role in setting the digital regulatory agenda by actively seeking influence and leadership, while the EU has gained influence by being one of the most proactive, and stringent, regulatory bodies in the world.

Table of Contents

  • Executive Summary
  • Introduction
  • Important trends in data governance regulation
    • Regulation of the processing, storage and use of personal data
    • Regulation of industrial data
  • Regulation of digital markets
    • The Digital Markets Act: Governing digital monopolies
    • The US approach to digital market regulation
  • A global perspective – how EU and US digital regulation trends are spreading around the world
    • The Globalisation of the EU Regulation: The Brussels Effect
    • Digital Economy Governance in the US Foreign Policy
    • Digital in the EU-US Transatlantic Relationship
    • A Patchwork of Digital Agreements in Asia
    • A New Global Framework on Cross-Border Data Flows
  • Conclusion
    • Advice for Telcos


VNFs on public cloud: Opportunity, not threat

VNF deployments on the hyperscale cloud are just beginning

Numerous collaboration agreements between hyperscalers and leading telcos, but few live VNF deployments to date

The past three years have seen many major telcos conclude collaboration agreements with the leading hyperscalers. These have involved one or more of five business models for the telco-hyperscaler relationship, which we discussed in a previous report and which are illustrated below:

Five business models for telco-hyperscaler partnerships

Source: STL Partners

In this report, we focus more narrowly on the deployment, delivery and operation of virtualised and cloud-native network functions (VNFs / CNFs) by and for telcos on the hyperscale public cloud. To date, there have been few instances of telcos delivering live, commercial services on the public network via VNFs hosted on the public cloud. STL Partners’ Telco Cloud Deployment Tracker contains eight examples, as illustrated below:

Major telcos deploying VNFs in the public cloud

Source: STL Partners


Telcos are looking to generate returns from their telco cloud investments and maintain control over their ‘core business’

The telcos in the above table are all of comparable stature and ambition to the likes of AT&T and DISH in the realm of telco cloud, but take a diametrically opposite stance on VNF deployment on public cloud. They have decided against large-scale public cloud deployments for a variety of reasons, including:

  • They have invested a considerable amount of money, time and human resources in their private cloud deployments, and they want and need to utilise the asset and generate a return on that investment.
  • Related to this, they have generated a large amount of intellectual property (IP) through their DIY cloud- and VNF-development work. Clearly, they wish to realise the business benefits they sought from these efforts, such as cost and resource efficiencies, automation gains, enhanced flexibility and agility, and opportunities for both connectivity and edge compute service innovation. Apart from the opportunity cost of not realising these gains, it is demoralising for some CTO departments to contemplate surrendering the fruits of this effort in favour of a hyperscaler’s comparable cloud infrastructure, orchestration and management tools.
  • In addition, telcos have an opportunity to monetise that IP by marketing it to other telcos. The Rakuten Communications Platform (RCP) marketed by Rakuten Symphony is an example of this: effectively, a telco providing a telco cloud platform on a Network-Functions-as-a-Service (NFaaS) basis to third-party operators or enterprises, in competition with similar offerings that might be developed by hyperscalers. Accordingly, RCP will be hosted on private cloud facilities, not public cloud. But in theory there is no reason why RCP could not in future be delivered over public cloud; in that case, Rakuten would be acting like any other vendor adapting its solutions to the hyperscale cloud.
  • Also in theory, telcos could offer their private telco clouds as a platform, or as a wholesale or on-demand service, for third parties to source and run their own network functions (i.e. these would be hosted on the wholesale provider’s facilities, in contrast to RCP, which is hosted on the client telco’s facilities). This would be a logical fit for telcos such as BT or Deutsche Telekom, which still operate as their respective countries’ communications backbone providers and primary wholesale providers.

BT and Deutsche Telekom have also been among the telcos most visibly hostile to the idea of running the NFs powering their own public, mass-market services on the hyperscale public cloud. And for most operators, this is the main concern making them cautious about deploying VNFs on the public cloud, let alone sourcing them from the cloud on an NFaaS basis: doing so would make the ‘core’ telco business and asset – the network – dependent on the technology roadmaps, operational competence and business priorities of the hyperscalers.

Table of contents

  • Executive Summary
  • Introduction: VNF deployments on the hyperscale cloud are just beginning
    • Numerous collaboration agreements between hyperscalers and leading telcos, but few live VNF deployments to date
    • DISH and AT&T: AWS vs Azure; vendor-supported vs DIY; NaaCP vs net compute
  • Other DIY or vendor-supported best-of-breed players are not hosting VNFs on public cloud
    • Telcos are looking to generate returns from their telco cloud investments and maintain control over their ‘core business’
    • The reluctance to deploy VNFs on the cloud reflects a persistent, legacy concept of the telco
  • But NaaCP will drive more VNF deployments on public cloud, and opportunities for telcos
    • Multiple models for NaaCP present prospects for greater integration of cloud-native networks and public cloud
  • Conclusion: Convergence of network and cloud is inevitable – but not telcos’ defeat
  • Appendix


Telco roadmap to net-zero carbon emissions: Why, when and how

Telcos’ role in reducing carbon emissions

There are over eighty telecoms operators globally that each turn over $1 billion or more every year. As major companies, service providers (SPs) have a role to play in reducing global carbon emissions. So far, they have been behind the curve: in the Corporate Knights Global 100 of the world’s most sustainable corporations, only five are telcos (BT, KPN, Cogeco, Telus and StarHub), and none of them is in the top 30.

In this report, we explore the aims, visions and priorities of SPs in their journey to become more sustainable companies. More specifically, we have sought to understand the practical steps they are undertaking to reduce their carbon footprints. This includes discovering how they define, prioritise and drive initiatives as well as the governance and reporting used to determine their progress to ‘net-zero’.

Each SP’s journey is unique; we have explored how regional and market influences affect that journey, and how different personas and influencers within the SP approach this topic. To do this, we spoke to 40 individuals at SPs globally. Interviewees ranged from corporate social responsibility (CSR) representatives to those responsible for the SP’s technology and enterprise strategies. This report reflects the strategies and ambitions we learnt about during these conversations.


This report is informed by interviews with SPs globally

What do we mean by scope 1, 2 and 3?

Before diving in further, it’s important to align on the key terminology that all major SPs are drawing on to evaluate and report their sustainability efforts: in particular, how they disclose and commit to reducing their greenhouse gas emissions.

SPs divide their carbon emissions into scope 1, 2 and 3 – scope 3 is by far the most significant

For most SPs, scope 1 (e.g. emissions from the fleet of vehicles used to install equipment or perform maintenance tasks on base stations) and scope 2 (e.g. the electricity they purchase to run their networks) make up less than 20% of their overall footprint. These emissions can be recorded and reported accurately, and there are established methodologies for doing so.

Scope 3, however, is where 80%+ of SP carbon emissions come from. This is because it captures the impact of the SP’s whole supply chain, e.g. the carbon emissions released in manufacturing the network equipment that SPs deploy. It also includes the carbon emissions arising from supplying customers with the products and services that an SP sells, e.g. from shipping and decommissioning consumer handsets or servers provided to enterprise customers.
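As a minimal illustration of how these categories combine, the sketch below tallies a hypothetical SP’s footprint by scope. The figures are invented for the example; only the scope definitions and the 20% / 80%+ rule of thumb come from the text above.

```python
# Hypothetical emissions inventory for an illustrative SP, in ktCO2e.
# The numbers are made up; only the scope categories follow the report.
inventory = {
    "scope 1 (fleet, facilities)": 120,
    "scope 2 (purchased electricity)": 480,
    "scope 3 (supply chain, sold products)": 2900,
}

total = sum(inventory.values())
for scope, ktco2e in inventory.items():
    print(f"{scope}: {ktco2e} ktCO2e ({ktco2e / total:.0%} of total)")

# Output matches the rule of thumb: scopes 1 and 2 together come to
# about 17% of this footprint, scope 3 to about 83%.
```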

Table of Contents

  • Executive Summary
  • Table of Figures
  • Introduction
    • What do we mean by scope 1, 2 and 3?
    • Where are SPs in their sustainability journey?
    • How does this differ by region?
    • What’s covered in the rest of the report?
  • Procurement and sustainable supply chain
    • Scope 1, 2 and 3: Where are procurement teams focused
    • Current priorities
    • Regional nuances
    • Best and next practices
  • Networking
  • IT and facilities
  • Enterprise products and services
  • Key recommendations and conclusion


The Telco Cloud Manifesto


Telco cloud: A key enabler of the Coordination Age

The Coordination Age is coming

As we have set out in our company manifesto, STL Partners believes that we are entering a new ‘Coordination Age’, in which technological developments will enable governments, enterprises and consumers to coordinate their activities more effectively than ever before. The results of better and faster coordination will be game-changing for society, as resources are distributed and used more efficiently, leading to substantial social, economic and health benefits.

A critical component of the Coordination Age is the universal availability of flexible, fast, reliable, low-latency networks that support a myriad of applications which, in turn, enable a complex array of communications, decisions, transactions and processes to be completed quickly and, in many cases, automatically, without human intervention. The network remains key: unless it is fit for purpose, matching demand and supply in real time is impossible.

How telecoms can define a new role

Historically, telecoms networks have been created using specialist, dedicated (proprietary) hardware and software. This has ensured that networks are reliable and secure, but it has also stymied innovation from operators and from third parties, who have found leveraging network capabilities challenging. Indeed, innovation accelerated with the arrival of the Internet, which enabled services to be decoupled from the network and run ‘over the top’.

But the Coordination Age requires more from the network than ever before – applications require the network to be flexible, accessible and support a range of technical and commercial options. Applications cannot run independently of the network but need to integrate with it. The network must be able to impart actionable insights and flex its speed, bandwidth, latency, security, business model and countless other variables quickly and autonomously to meet the needs of applications using it.

Telco cloud – the move to a network built on common off-the-shelf hardware and flexible interoperable software from best-of-breed suppliers that runs wherever it is needed – is the enabler of this future.


Table of Contents

  • Executive Summary
  • Telco cloud: A key enabler of the Coordination Age
    • The Coordination Age is coming
    • How telecoms can define a new role
  • Telco cloud: The growth enabler for the telecoms industry
    • Telecoms revenue growth has stalled, traffic has not
    • Telco cloud: A new approach to the network
    • …a fundamental shift in what it means to be an operator
    • …and the driver of future telecoms differentiation and growth
  • Realising the telco cloud vision
    • Moving to telco cloud is challenging
    • Different operator segments will take different paths

Network convergence: How to deliver a seamless experience

Operators need to adapt to the changing connectivity demands post-COVID19

The global dependency on consistent high-performance connectivity has recently come to the fore as the COVID-19 outbreak has transformed many of the remaining non-digital tasks into online activities.

The typical patterns of networking have broken and a ‘new normal’, albeit possibly a somewhat transitory one, is emerging. The recovery of the global economy will depend on governments, healthcare providers, businesses and their employees robustly communicating and gaining uninhibited access to content and cloud through their service providers – at any time of day, from any location and on any device.

Reliable connectivity is a critical commodity. Network usage patterns have shifted towards home and remote working. Locations that previously saw light usage now face high demand; conversely, many business locations no longer need such high capacity. Nor is utilisation expected to return to pre-COVID-19 patterns, as people and businesses adapt to new daily routines – at least for some time.

The strategies with which telcos started the year have, of course, been disrupted, with resources diverted from strategic objectives to deal with a new mandate: keep the country connected. In the short term, the focus has shifted to the more tactical goal of ensuring customer satisfaction through a reliable and adaptable service with rapid response to issues. In the long term, however, the objectives for capacity and coverage remain. Telcos are still required to reach national targets for minimum connection quality in rural areas, whilst delivering high-bandwidth services in hotspot locations (although these hotspots might now change).

Of course, modern networks are designed with scalability and adaptability in mind – some recent deployments by new disruptors (such as Rakuten) demonstrate the power of virtualisation and automation in that process, particularly in the radio access network (RAN). In many legacy networks, however, one area that cannot adapt fast enough is the physical access network. Limits on spectrum, coverage (indoors and outdoors) and the speed at which physical infrastructure can be installed or updated become a bottleneck in the adaptation process. New initiatives to meet home-working demand through accelerated fibre rollout are happening, but they tend to come at great cost.

Network convergence can provide a quick and convenient way to address this need for improved coverage, speed and reliability in the access network, without the need to install or upgrade last-mile infrastructure. By definition, it is the coming together of multiple network assets, as part of a transformation to one intelligent network that can efficiently provide customers with a single, unified, high-quality experience at any time, in any place.

It has already attracted interest and is finding an initial following. A few telcos have used it to provide better home broadband. Internet content and cloud service providers are interested because it adds resilience to the mobile user experience, and enterprises are interested in utilising multiple lower-cost commodity backhauls, a combination that benefits from inherent protection against costly network outages.


Network convergence helps create an adaptable and resilient last mile

Most telcos already have the facility to connect with their customers via multiple means, providing mobile, fixed-line and public Wi-Fi connectivity to those in their coverage footprint. The strategy has been to convert individual ‘pure’ mobile or fixed customers into households, in the expectation that this increases revenue through bundling and loyalty while adding some friction to churn – a concept that has been termed ‘convergence’. But although the customer may see one converged telco through brand, billing and customer support, delivery of a consistent user experience across all modes of network access has been lacking and awkward. In the end, it is customer dissatisfaction that drives churn, so delivering a consistent user experience is important.

Convergence is a term used to mean many different things, from a single bill for all household connectivity to the modernisation of multiple core networks into a single efficient core. While most telcos have so far concentrated on increasing operational efficiency, raising customer loyalty/NPS and decreasing churn through these initial aspects of convergence, some are now looking at network convergence – where multiple access technologies (4G, 5G, Wi-Fi, fixed line) are used together to deliver resilient, optimised and consistent network quality and coverage.

Overview of convergence

Source: STL Partners

As an overarching concept, network convergence introduces more flexibility into the access layer. It allows a single converged core network to utilise and aggregate whichever last-mile connectivity options are best suited to the environment. Some examples, with an illustrative sketch after the list, are:

  • Hybrid Access: DSL and 4G macro network used together to provide extra speed and fallback reliability in hybrid fixed/mobile home gateways.
  • Cell Densification: 5G and Wi-Fi small cells jointly providing short range capacity to augment the macro network in dense urban areas.
  • Fixed Wireless Access: using cellular as a fibre alternative in challenging areas.
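The snippet below is a minimal sketch of that aggregation idea, assuming a gateway that simply fills demand from the highest-capacity available links. The link names, capacities and selection rule are illustrative assumptions, not a description of any real hybrid-access implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessLink:
    name: str              # e.g. "DSL", "4G macro", "5G small cell", "Wi-Fi"
    capacity_mbps: float   # assumed usable capacity of the link
    available: bool        # e.g. False if the DSL line is down

def aggregate(links: list[AccessLink], demand_mbps: float) -> list[AccessLink]:
    """Pick available links, highest capacity first, until demand is met.

    A real converged gateway would also weigh latency, cost and policy;
    this sketch captures only the capacity-plus-fallback idea.
    """
    chosen: list[AccessLink] = []
    remaining = demand_mbps
    for link in sorted(links, key=lambda l: l.capacity_mbps, reverse=True):
        if link.available and remaining > 0:
            chosen.append(link)
            remaining -= link.capacity_mbps
    return chosen

# Hybrid Access example: DSL plus 4G macro fallback in a home gateway.
links = [AccessLink("DSL", 40, True), AccessLink("4G macro", 25, True)]
print([l.name for l in aggregate(links, demand_mbps=60)])  # ['DSL', '4G macro']
```

If the DSL line fails, the same selection falls back to the 4G link alone – the resilience benefit described above.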

The ability to combine various access networks is attractive as a way of improving adaptability, resilience and speed. Strategically, putting such flexibility in place can support future growth and customer retention, with the added advantage of improving operational efficiency. Tactically, it enables resources to be adapted quickly to short-term changes in demand – COVID-19 has been a clear example of this need.

Table of Contents

  • Executive Summary
    • Convergence and network convergence
    • Near-term benefits of network convergence
    • Strategic benefits of network convergence
    • Balancing the benefits of convergence and divergence
    • A three-step plan
  • Introduction
    • The changing environment
    • Network convergence: The adaptable and resilient last mile
    • Anticipated benefits to telcos
    • Challenges and opposing forces
  • The evolution to network convergence
    • Everyone is combining networks
    • Converging telco networks
    • Telco adoption so far
  • Strategy, tactics and hurdles
    • The time is right for adaptability
    • Tactical motivators
    • Increasing the relationship with the customer
    • Modernisation and efficiency – remaining competitive
    • Hurdles from within the telco ecosystem
    • Risk or opportunity? Innovation above-the-core
  • Conclusion
    • A three-step plan
  • Index


VEON – Transition from telco to consumer IP communications platform

Introduction to Veon

Geographical footprint and brands

Veon came into being at the start of 2017 as a rebrand of VimpelCom. The Amsterdam-based telco was founded in its current form in 2009, when shareholders Telenor and Alfa agreed to merge their assets in VimpelCom and Ukraine’s Kyivstar to create VimpelCom Ltd.

Veon is among the world’s 10 largest communications network operators by subscribers, with around 235 million customers in 13 countries (see Figure 1).

Figure 1: Veon’s geographical footprint (September 2017)

Source: Veon, STL Partners

The telco operates a number of brands across its geographical footprint (see Figure 2).

Figure 2: Veon’s brands (September 2017)

Source: Veon, STL Partners

Veon’s largest market is Russia, where it has over 58 million mobile subscribers, 24% of its global total. Pakistan and Bangladesh are its next-largest markets by subscribers, while it has over 30 million customers in Italy under its Wind Tre brand, a joint venture with CK Hutchison (see Figure 3).

Figure 3: Veon mobile customers by region, H2 2017 (millions)

Source: Veon, STL Partners

A brief history of Veon

  • 1992: Veon began life as Russian operator PJSC VimpelCom.
  • 2009: VimpelCom Ltd. founded as Telenor and Alfa Group (Altimo) agree to merge their assets in VimpelCom (Russia and CIS) and Ukraine’s Kyivstar.
  • 2010: VimpelCom acquires Orascom Telecom Holding (operating in Pakistan, Bangladesh and Algeria) and Wind Italy from Egypt’s Naguib Sawiris.
  • 2017: VimpelCom Ltd. rebrands as Veon.


The somewhat unusual development of both Veon’s shareholder structure and its geographical footprint means the telco faces some unique challenges, but it has also enabled a degree of flexibility in the company’s path to transformation.

Veon’s shareholder structure – an enabler of transformation

At the time of writing, Veon is 47.9%-owned (common and voting shares) by Alfa (via investment vehicle LetterOne), and 19.7% by Norway’s Telenor (with the remaining 32.4% split between free float and minority shareholders).

This structure means that the company is less beholden to dividend-hungry shareholders, allowing the telco to align more easily than many of its contemporaries. This extra “breathing space” also allows change to occur faster, with fewer levels of managerial approval required, and the board of directors has given its backing to Veon’s transformation journey, offering full “top-down support”. Nevertheless, there is some doubt about how the transformation plans will be greeted at local OpCo level, and the group faces some serious cultural challenges in this area.

Faced with lacklustre organic growth and headwinds from currency devaluations in its former Soviet markets, Veon has chosen, in the words of CEO Jean-Yves Charlier, to “disrupt itself from within”.

Reversing the revenue decline

Speaking at Veon’s rebrand in February 2017, CEO Charlier described how the telco sector has been backed into a corner by aggressive, disruptive start-ups like Skype and WhatsApp, meaning the industry now needs to reinvent itself and find new paths to growth.

The company began by improving its capital structure, in part through the consolidation of operations in two of its largest markets: the merger of Mobilink and Warid to form Jazz in Pakistan, and the formation of the Wind Tre joint venture from Wind Italy and CK Hutchison’s Tre (3).

Veon states that it has realigned its corporate culture and values, introduced a robust control and compliance framework, and significantly cut its cost base; the operator returned to positive revenue and EBITDA growth in the second quarter of 2017.

Contents:

  • Executive Summary 
  • Introduction to Veon
  • Veon’s digital strategy
  • What are the strengths of Veon’s offering?
  • What must Veon do to succeed?
  • Will Veon make it work?
  • Introduction
  • Introduction to Veon
  • The path to total transformation
  • Veon’s digital strategy
  • Reinvent customer experience
  • Network virtualisation
  • The product
  • An omni-channel platform
  • The strengths of the holistic platform
  • Can Veon’s consumer IP communications proposition succeed? 
  • Can Veon beat the GAFA and Chinese giants to the market?
  • What must Veon do to succeed?
  • Conclusions

Figures:

  • Figure 1: Veon’s geographical footprint (September 2017)
  • Figure 2: Veon’s brands (September 2017)
  • Figure 3: Veon mobile customers by region, H2 2017 (millions)
  • Figure 4: Veon revenue and EBITDA, Q4 2015-Q2 2017 ($ billion)
  • Figure 5: Veon’s transformation from telco to tech company
  • Figure 6: Penetration of leading social networks in Russia (2016)
  • Figure 7: Veon IT stack scope of responsibilities
  • Figure 8: VEON app screenshots – an IP communications platform
  • Figure 9: Veon app access requirements
  • Figure 10: Comparison of consumer IP communications plays
  • Figure 11: Veon – a SWOT analysis


The Devil’s Advocate: SDN / NFV can never work, and here’s why!

Introduction

The Advocatus Diaboli (Latin for Devil’s Advocate) was formerly an official position within the Catholic Church: one who “argued against the canonization (sainthood) of a candidate in order to uncover any character flaws or misrepresentation of evidence favouring canonization”.

In common parlance, the term “devil’s advocate” describes someone who, given a certain point of view, takes a position they do not necessarily agree with (or simply an alternative position from the accepted norm), for the sake of debate or to explore the thought further.

SDN / NFV runs into problems: a ‘devil’s advocate’ assessment

The telco industry’s drive toward Network Functions Virtualization (NFV) got going in a major way in 2014, with high expectations that the technology – along with its sister technology SDN (Software-Defined Networking) – would revolutionize operators’ abilities to deliver innovative communications and digital services, and transform the ways in which these services can be purchased and consumed.

Unsurprisingly, as with so many of these ‘revolutions’, early optimism has now given way to the realization that full-scope NFV deployment will be complex, time-consuming and expensive. Meanwhile, it has become apparent that the technology may not transform telcos’ operations and financial fortunes as much as originally expected.

The following is a presentation of the case against SDN / NFV from the perspective of the ‘devil’s advocate’. It combines the types of criticism that have been voiced in recent times, taken to the extreme so as to represent a ‘damning’ indictment of the industry effort around these technologies. This is not the official view of STL Partners, but rather an attempt to explore the limits of the skeptical position.

We will respond to each of the devil’s advocate’s arguments in turn in the second half of this report; and, in keeping with good analytical practice, we will endeavor to present a balanced synthesis at the end.

‘It’ll never work’: the devil’s advocate speaks

And here’s why:

1. Questionable financial and operational benefits:

Will NFV ever deliver any real cost savings or capacity gains? Operators that have launched NFV-based services have not yet provided any hard evidence of notable reductions in opex and capex from the technology, or any evidence that the data-carrying capacity, performance or flexibility of their networks has significantly improved.

Operators talk a good talk, but where is the actual financial and operating data that supports the NFV business case? Are they refusing to disclose the figures because they are in fact negative or inconclusive? And if this is so, how can we have any confidence that NFV and SDN will deliver anything like the long-term cost and performance benefits that have been touted for them?

 

  • Executive Summary
  • Introduction
  • SDN / NFV runs into problems: a ‘devil’s advocate’ assessment
  • ‘It’ll never work’: the devil’s advocate speaks
  • 1. Questionable financial and operational benefits
  • 2. Wasted investments and built-in obsolescence
  • 3. Depreciation losses
  • 4. Difficulties in testing and deploying
  • 5. Telco cloud or pie in the sky?
  • 6. Losing focus on competitors because of focusing on networks
  • 7. Change the culture and get agile?
  • 8. It’s too complicated
  • The case for the defense
  • 1. Clear financial and operational benefits
  • 2. Strong short-term investment and business case
  • 3. Different depreciation and valuation models apply to virtualized assets
  • 4. Short-term pain for long-term gains
  • 5. Don’t cloud your vision of the technological future
  • 6. Telcos can compete in the present while building the future
  • 7. Operators both can and must transform their culture and skills base to become more agile
  • 8. It may be complicated, but is that a reason not to attempt it?
  • A balanced view of NFV: ‘making a virtual out of necessity’ without making NFV a virtue in itself

MobiNEX: The Mobile Network Experience Index, H1 2016

Executive Summary

In response to customers’ growing usage of mobile data and applications, in April 2016 STL Partners developed MobiNEX: The Mobile Network Experience Index, which ranks mobile network operators on key measures of customer experience. To do this, we benchmark mobile operators’ network speed and reliability, allowing individual operators to see how they are performing relative to the competition in an objective and quantitative manner.

Operators are assigned an individual MobiNEX score out of 100 based on their performance across four measures that STL Partners believes to be core drivers of customer app experience: download speed, average latency, error rate and latency consistency (the proportion of app requests that take longer than 500ms to fulfil).

Our partner Apteligent has provided us with the raw data for three of the four measures, based on billions of requests made from tens of thousands of applications used by hundreds of millions of users in H1 2016. While our April report focused on the top three or four operators in just seven Western markets, this report covers 80 operators drawn from 25 markets spread across the globe in the first six months of 2016.
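The exact benchmarks and conversion rules are set out in the report’s methodology appendix, so the snippet below is only a sketch of the general shape of such an index: each of the four measures is normalised to a 0–25 sub-score against a pair of benchmark values, and the sub-scores are summed to give a score out of 100. The equal weighting and the benchmark numbers here are assumptions invented for illustration, not the published methodology.

```python
def subscore(value: float, worst: float, best: float) -> float:
    """Linearly map a raw measure onto 0-25, clamped at the benchmarks.

    Works in both directions: pass worst > best for measures where
    lower is better (latency, error rate, slow-request share).
    """
    frac = (value - worst) / (best - worst)
    return max(0.0, min(25.0, 25.0 * frac))

def mobinex_style_score(download_mbps, latency_ms, errors_per_10k, slow_pct):
    # Benchmark pairs below are illustrative assumptions only.
    return round(
        subscore(download_mbps, worst=0, best=20)      # download speed
        + subscore(latency_ms, worst=1000, best=100)   # average latency
        + subscore(errors_per_10k, worst=200, best=0)  # error rate
        + subscore(slow_pct, worst=50, best=0)         # % requests > 500ms
    )

# A fast, reliable network scores near the top of the 0-100 range.
print(mobinex_style_score(15, 250, 20, 5))  # 85 with these benchmarks
```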

The top ten operators were from Japan, France, the UK and Canada:

  • Softbank JP scores highest on the MobiNEX for H1 2016, with high scores across all measures and a total score of 85 out of 100.
  • Close behind are Bouygues FR (80) and Free FR (79), which came first and second respectively in the Q4 2015 rankings. Both achieve high scores for error rate, latency consistency and average latency, but are slightly let down by download speed.
  • The top six is completed by NTT DoCoMo JP (78), Orange FR (75) and au (KDDI) JP (71).
  • Slightly behind are Vodafone UK (65), EE UK (64), SFR FR (63), O2 UK (62) and Rogers CA (62). Except in the case of Rogers, which scores similarly on all measures, these operators are let down by substantially worse download speeds.

The bottom ten operators all score 16 or lower out of 100, suggesting a materially worse customer app experience.

  • Trailing the pack, with scores of 1 or 2 across all four measures, were Etisalat EG (4), Vodafone EG (4), Smart PH (5) and Globe PH (5).
  • Beeline RU (11) and Malaysian operators U Mobile MY (9) and Digi MY (9) also fare poorly, but benefit from slightly higher latency consistency scores. Slightly better overall, but still scoring the minimum of 1 for download speed and average latency, are Maxis MY (14) and MTN ZA (12).

Overall, the extreme difference between the top and bottom of the table highlights a vast inequality in network customer experience across the planet: customer app experience depends to a large degree on where one lives. However, our analysis shows that while economic prosperity does in general lead to a more advanced mobile experience, as you might expect, it does not guarantee it. Norway, Sweden, Singapore and the US are examples of high-income markets with lower MobiNEX scores than might be expected against the global picture. STL Partners will do further analysis to uncover more about the drivers of differentiation between markets and the players within them.

 

MobiNEX H1 2016 – included markets

MobiNEX H1 2016 – operator scores

Source: Apteligent, OpenSignal, STL Partners analysis

 

  • About MobiNEX
  • Changes for H1 2016
  • MobiNEX H1 2016: results
  • The winners: top ten operators
  • The losers: bottom ten operators
  • The surprises: operators where you wouldn’t expect them
  • MobiNEX by market
  • MobiNEX H1 2016: segmentation
  • MobiNEX H1 2016: Raw data
  • Error rate
  • Latency consistency
  • Download speed
  • Average latency
  • Appendix 1: Methodology and source data
  • Latency, latency consistency and error rate: Apteligent
  • Download speed: OpenSignal
  • Converting raw data into MobiNEX scores
  • Setting the benchmarks
  • Why measure customer experience through app performance?
  • Appendix 2: Country profiles
  • Country profile: Australia
  • Country profile: Brazil
  • Country profile: Canada
  • Country profile: China
  • Country profile: Colombia
  • Country profile: Egypt
  • Country profile: France
  • Country profile: Germany
  • Country profile: Italy
  • Country profile: Japan
  • Country profile: Malaysia
  • Country profile: Mexico
  • Country profile: New Zealand
  • Country profile: Norway
  • Country profile: Philippines
  • Country profile: Russia
  • Country profile: Saudi Arabia
  • Country profile: Singapore
  • Country profile: South Africa
  • Country profile: Spain
  • Country profile: United Arab Emirates
  • Country profile: United Kingdom
  • Country profile: United States
  • Country profile: Vietnam

 

  • Figure 1: MobiNEX scoring breakdown, benchmarks and raw data used
  • Figure 2: MobiNEX H1 2016 – included markets
  • Figure 3: MobiNEX H1 2016 – operator scores breakdown (top half)
  • Figure 4: MobiNEX H1 2016 – operator scores breakdown (bottom half)
  • Figure 5: MobiNEX H1 2016 – average scores by country
  • Figure 6: MobiNEX segmentation dimensions
  • Figure 7: MobiNEX segmentation – network speed vs reliability
  • Figure 8: MobiNEX segmentation – network speed vs reliability – average by market
  • Figure 9: MobiNEX vs GDP per capita – H1 2016
  • Figure 10: MobiNEX vs smartphone penetration – H1 2016
  • Figure 11: Error rate per 10,000 requests, H1 2016 – average by country
  • Figure 12: Error rate per 10,000 requests, H1 2016 (top half)
  • Figure 13: Error rate per 10,000 requests, H1 2016 (bottom half)
  • Figure 14: Requests with total roundtrip latency > 500ms (%), H1 2016 – average by country
  • Figure 15: Requests with total roundtrip latency > 500ms (%), H1 2016 (top half)
  • Figure 16: Requests with total roundtrip latency > 500ms (%), H1 2016 (bottom half)
  • Figure 17: Average weighted download speed (Mbps), H1 2016 – average by country
  • Figure 18: Average weighted download speed (Mbps), H1 2016 (top half)
  • Figure 19: Average weighted download speed (Mbps), H1 2016 (bottom half)
  • Figure 20: Average total roundtrip latency (ms), H1 2016 – average by country
  • Figure 21: Average total roundtrip latency (ms), H1 2016 (top half)
  • Figure 22: Average total roundtrip latency (ms), H1 2016 (bottom half)
  • Figure 23: Benchmarks and raw data used

Net Neutrality 2021: IoT, NFV and 5G ready?

Introduction

It’s been a while since STL Partners last tackled the thorny issue of Net Neutrality. In our 2010 report Net Neutrality 2.0: Don’t Block the Pipe, Lubricate the Market, we made a number of recommendations, including that a clear distinction should be established between ‘Internet Access’ and ‘Specialised Services’, and that operators should be allowed to manage traffic within reasonable limits, provided their policies and practices were transparent and reported.

Perhaps unsurprisingly, the decade-long legal and regulatory wrangling still rumbles on, albeit with rather more detail and nuance than in the past. Some countries have now implemented laws of varying severity, while other regulators have been more advisory in their rules. The US, in particular, has been mired in debate about the process and authority of the FCC in regulating Internet matters, but the current administration and courts have leaned towards legislating for neutrality, against (most) telcos’ wishes. The political dimension is never far from the argument, especially given the global rise of anti-establishment movements and parties.

Some topics have risen in importance (such as where zero-rating fits in), while others seem to have been mostly settled (outright blocking of legal content/apps is now widely dismissed). In contrast, discussion and exploration of “sender-pays” or “sponsored” data appears to have receded, apart from niches and trials (such as AT&T’s sponsored data initiative), as it is both technically hard to implement and suffers from near-zero “willingness to pay” among its suggested customers. Some more authoritarian countries have implemented their own “national firewalls”, which block specific classes of applications or particular companies’ services – but this is somewhat distinct from the commercial, telco-specific view of traffic management.

In general, the focus of the Net Neutrality debate is shifting to pricing issues, often in conjunction with the influence/openness of major web and app “platform players” such as Facebook or Google. Some telco advocates have opportunistically tried to link Net Neutrality to claimed concerns over “Platform Neutrality”, although that discussion is now largely separate and focused more on bundling and privacy concerns.

At the same time, there is still some interest in differential treatment of Internet traffic in terms of Quality of Service (QoS) – and also a debate about what should be considered “the Internet” vs. “an internet”. The term “specialised services” crops up in various regulatory instruments, notably in the EU – although its precise definition remains fluid. In particular, the rise of mobile broadband for IoT use-cases, and especially the focus on low-latency and critical-communications uses in future 5G standards, almost mandates some level of non-neutrality. It is much less likely that “paid prioritisation” will ever extend to mainstream web access or mobile app data. Large-scale video streaming services such as Netflix are perhaps still a grey area for some regulatory intervention, given the impact they have on overall network loads. At present, the only commercial arrangements are understood to be in CDNs, or paid-peering deals, which are (strictly speaking) nothing to do with Net Neutrality per most definitions. We may even see pressure for regulators to limit the fees charged for Internet interconnect and peering.

This report first looks at the changing focus of the debate, then examines the underlying technical and industry drivers that are behind the scenes. It then covers developments in major countries and regions, before giving recommendations for various stakeholders.

STL Partners is also preparing a broader research piece on overall regulatory trends, to be published in the next few months as part of its Executive Briefing Service.

What has changed?

Where have we come from?

If we wind the clock back a few years, the Net Neutrality debate was quite different. Around 2012/13, the typical talking-points were subjects such as:

  • Whether mobile operators could block messaging apps like WhatsApp, VoIP services like Skype, or somehow charge those types of providers for network access / interconnection.
  • If fixed-line broadband providers could offer “fast lanes” for Netflix or YouTube traffic, often conflating arguments about access-network links with core-network peering capacity.
  • Rhetoric about the so-called “sender-pays” concept, with some lobbying for introducing settlements for data traffic that were reminiscent of telephony’s called / caller model.
  • Using DPI (deep packet inspection) to discriminate between applications and charge for “a la carte” Internet access plans, at a granular level (e.g. per hour of view watched, or per social-network used).
  • The application of “two-sided business models”, with Internet companies paying for data capacity and/or quality on behalf of end-users.

Since then, many things have changed. Specific countries’ and regions’ laws will be discussed in the next section, but the last four years have seen major developments in the Netherlands, the US, Brazil, the EU and elsewhere.

At one level, the regulatory and political shifts can be attributed to the huge rise in the number of lobby groups on both Internet and telecom sides of the Neutrality debate. However, the most notable shift has been the emergence of consumer-centric pro-Neutrality groups, such as Access Now, EDRi and EFF, along with widely-viewed celebrity input from the likes of comedian John Oliver. This has undoubtedly led to the balance of political pressure shifting from large companies’ lawyers towards (sometimes slogan-led) campaigning from the general public.

But there have also been changes in the background trends of the Internet itself, telecom business models, and consumers’ and application developers’ behaviour. (The key technology changes are outlined in the section after this one). Various experiments and trials have been tried, with a mix of successes and failures.

Another important background trend has been the unstoppable momentum of particular apps and content services, on both fixed and mobile networks. Telcos are now aware that they are likely to be judged on how well Facebook or Spotify or WeChat or Netflix perform – so they are much less inclined to indulge in regulatory grandstanding about having such companies “pay for the infrastructure” or be blocked. Essentially, there is tacit recognition that access to these applications is why customers are paying for broadband in the first place.

These considerations have shifted the debate in many important areas, making some of the earlier ideas unworkable, while other areas have come to the fore. Two themes stand out:

  • Zero-rating
  • Specialised services

Content:

  • Executive summary
  • Contents
  • Introduction
  • What has changed?
  • Where have we come from?
  • Zero-rating as a battleground
  • Specialised services & QoS
  • Technology evolution impacting Neutrality debate
  • Current status
  • US
  • EU
  • India
  • Brazil
  • Other countries
  • Conclusions
  • Recommendations

MobiNEX: The Mobile Network Experience Index, Q4 2015

Executive Summary

In response to customers’ growing usage of mobile data and applications, STL Partners has developed MobiNEX: The Mobile Network Experience Index, which benchmarks mobile operators’ network speed and reliability by measuring the consumer app experience, and allows individual players to see how they are performing in relation to the competition in an objective and quantitative manner.

We assign operators an individual MobiNEX score based on their performance across four measures that are core drivers of customer app experience: download speed, average latency, error rate and latency consistency (the percentage of app requests that take longer than 500ms to fulfil). Apteligent has provided us with the raw data for three of the four measures, based on billions of requests made from tens of thousands of applications used by hundreds of millions of users in Q4 2015. We plan to expand the index to cover other operators and to track performance over time with twice-yearly updates.

Encouragingly, MobiNEX scores are positively correlated with customer satisfaction in the UK and the US, suggesting that a better mobile app experience contributes to customer satisfaction.

The top five performers across the twenty-seven operators in seven European and North American countries (Canada, France, Germany, Italy, Spain, the UK and the US) were all from France and the UK, suggesting a high degree of competition in these markets as operators strive to improve relative to their peers:

  • Bouygues Telecom in France scores highest on the MobiNEX for Q4 2015 with consistently high scores across all four measures and a total score of 76 out of 100.
  • It is closely followed by two other French operators. Free, the late entrant to the market, which started operations in 2012, scores 73. Orange, the former national incumbent, is slightly let down by the number of app errors experienced by users but achieves a healthy overall score of 70.
  • The top five is completed by two UK operators, EE (65) and O2 (61), whose scores are similar to those of the three French operators for everything except download speed, which was substantially worse.

The bottom five operators have scores suggesting a materially worse customer app experience, and we suggest that their management teams focus on improvements across all four measures to strengthen their customer relationships and competitive position. This applies particularly to:

  • E-Plus in Germany (now part of Telefónica’s O2 network but identified separately by Apteligent).
  • Wind in Italy, which is particularly let down by latency consistency and download speed.
  • Telefónica’s Movistar, the Spanish market share leader.
  • Sprint in the US with middle-ranking average latency and latency consistency but, like other US operators, poor scores on error rate and download speed.
  • 3 Italy, principally a result of its low latency consistency score.

Surprisingly, given the extensive deployment of 4G networks there, the US operators perform poorly and are providing an underwhelming customer app experience:

  • The best-performing US operator, T-Mobile, scores only 45 – a full 31 points below Bouygues Telecom and 4 points below the median operator.
  • All the US operators perform very poorly on error rate and, although 74% of app requests in the US were made on LTE in Q4 2015, no US player scores highly on download speed.

MobiNEX scores – Q4 2015

 Source: Apteligent, OpenSignal, STL Partners analysis

MobiNEX vs Customer Satisfaction

Source: ACSI, NCSI-UK, STL Partners

 

  • Introduction
  • Mobile app performance is dependent on more than network speed
  • App performance as a measure of customer experience
  • MobiNEX: The Mobile Network Experience Index
  • Methodology and key terms
  • MobiNEX Q4 2015 Results: Top 5, bottom 5, surprises
  • MobiNEX is correlated with customer satisfaction
  • Segmenting operators by network customer experience
  • Error rate
  • Quantitative analysis
  • Key findings
  • Latency consistency: Requests with latency over 500ms
  • Quantitative analysis
  • Key findings
  • Download speed
  • Quantitative analysis
  • Key findings
  • Average latency
  • Quantitative analysis
  • Key findings
  • Appendix: Source data and methodology
  • STL Partners and Telco 2.0: Change the Game
  • About Apteligent

 

  • MobiNEX scores – Q4 2015
  • MobiNEX vs Customer Satisfaction
  • Figure 1: MobiNEX – scoring methodology
  • Figure 2: MobiNEX scores – Q4 2015
  • Figure 3: Customer Satisfaction vs MobiNEX, 2015
  • Figure 4: MobiNEX operator segmentation – network speed vs network reliability
  • Figure 5: MobiNEX operator segmentation – with total scores
  • Figure 6: Major Western markets – error rate per 10,000 requests
  • Figure 7: Major Western markets – average error rate per 10,000 requests
  • Figure 8: Major Western operators – percentage of requests with total roundtrip latency greater than 500ms
  • Figure 9: Major Western markets – average percentage of requests with total roundtrip latency greater than 500ms
  • Figure 10: Major Western operators – average weighted download speed across 3G and 4G networks (Mbps)
  • Figure 11: Major European markets – average weighted download speed (Mbps)
  • Figure 12: Major Western markets – percentage of requests made on 3G and LTE
  • Figure 13: Download speed vs Percentage of LTE requests
  • Figure 14: Major Western operators – average total roundtrip latency (ms)
  • Figure 15: Major Western markets – average total roundtrip latency (ms)
  • Figure 16: MobiNEX benchmarks

Connectivity for telco IoT / M2M: Are LPWAN & WiFi strategically important?

Introduction

5G, WiFi, GPRS, NB-IoT, LTE-M & LTE Categories 1 & 0, SigFox, Bluetooth, LoRa, Weightless-N & Weightless-P, ZigBee, EC-GSM, Ingenu, Z-Wave, Nwave, various satellite standards, optical/laser connections and more… the list of current or proposed wireless network technologies for the “Internet of Things” seems to be growing longer by the day. Some are long-range, some short. Some high power/bandwidth, some low. Some are standardised, some proprietary. And while most devices will have some form of wireless connection, there are certain categories that will use fibre or other fixed-network interfaces.

There is no “one-size-fits-all”, although some hope that 5G will ultimately become an “umbrella” for many of them, in the 2020 time-frame and beyond. But telcos, especially mobile operators, need to consider which they will support in the shorter-term horizon, and for which M2M/IoT use-cases. That universe is itself expanding too, with new IoT products and systems being conceived daily, spanning everything from hobbyists’ drones to industrial robots. All require some sort of connectivity, but the range of costs, data capabilities and robustness varies hugely.

Two overriding sets of questions emerge:

  • What are the business cases for deploying IoT-centric networks – and are they dependent on offering higher-level management or vertical solutions as well? Is offering connectivity – even at very low prices/margins – essential for telcos to ensure relevance and differentiate against IoT market participants?
  • What are the longer-term strategic issues around telcos supporting and deploying proprietary or non-3GPP networking technologies? Is the diversity a sensible way to address short-term IoT opportunities, or does it risk further undermining the future primacy of telco-centric standards and business models? Either way, telcos need to decide how much energy they wish to expend before embracing the inevitability of alternative competing networks in this space.

This report specifically covers IoT-centric network connectivity. It fits into Telco 2.0’s Future of the Network research stream, and also intersects with our other ongoing work on IoT/M2M applications, including verticals such as the connected car, connected home and smart cities. It focuses primarily on new network types, rather than marketing/bundling approaches for existing services.

The Executive Briefing report IoT – Impact on M2M, Endgame and Implications from March 2015 outlined three strategic areas of M2M business model innovation for telcos:

  • Improve existing M2M operations: Dedicated M2M business units structured around priority verticals with dedicated resources. Such units allow telcos to tailor their business approach and avoid being constrained by traditional strategies that are better suited to mobile handset offerings.
  • Move into new areas of M2M: Expansion along the value chain through both acquisitions and partnerships, and the formation of M2M operator ‘alliances.’
  • Explore the Internet of Things: Many telcos have been active in the connected home e.g. AT&T Digital Life. However, outsiders are raising the connected home (and IoT) opportunity stakes: Google, for example, acquired Nest for $3.2 billion in 2014.
Figure 2: The M2M Value Chain

 

Source: STL Partners, More With Mobile

In the nine months since that report was published, a number of important trends have emerged in the M2M / IoT space:

  • A growing focus on the value of the “industrial Internet”, where sensors and actuators are embedded into offices, factories, agriculture, vehicles, cities and other locations. New use-cases and applications abound on both near- and far-term horizons.
  • A polarisation in discussion between ultra-fast/critical IoT (e.g. for vehicle-to-vehicle control) vs. low-power/cost IoT (e.g. distributed environmental sensors with 10-year battery life). 2015 discussion of IoT connectivity has been dominated by futuristic visions of 5G, or faster-than-expected deployment of LPWANs (low-power wide-area networks), especially based on new platforms such as SigFox or LoRa Alliance.
  • Comparatively slow emergence of dedicated individual connections for consumer IoT devices such as watches / wearables. With the exception of connected cars, most mainstream products connect via local “capillary” networks (e.g. Bluetooth and WiFi) to smartphones or home gateways acting as hubs, or a variety of corporate network platforms. The arrival of embedded SIMs might eventually lead to more individually-connected devices, but this has not materialised in volume yet.
  • Continued entry, investment and evolution of a broad range of major companies and start-ups, often with vastly different goals, incumbencies and competencies to telcos. Google, IBM, Cisco, GE, Intel, utility firms, vehicle suppliers and 1000s of others are trying to carve out roles in the value chain.
  • Growing impatience among some in the telecom industry with the pace of standardisation for some IoT-centric developments. A number of operators have looked outside the traditional cellular industry suppliers and technologies, eager to capitalise on short-term growth especially in LPWAN and in-building local connectivity. In response, vendors including Huawei, Ericsson and Qualcomm have stepped up their pace, although fully-standardised solutions are still some way off.

Connectivity in the wider M2M/IoT context

It is not always clear what the difference is between M2M and IoT, especially at a connectivity level. They now tend to be used synonymously, although the latter is definitely newer and “cooler”. Various vendors have their own spin on this – Cisco’s “Internet of Everything”, and Ericsson’s “Networked Society”, for example. It is also a little unclear where the IoT part ends and the equally vague term “networked services” begins. And it is important to recognise that a sizeable part of the future IoT technology universe will not be based on “services” at all – “user-owned” devices and systems are much harder for telcos to monetise.

An example might be a government encouraging adoption of electric vehicles. Cars and charging points are “things” which require data connections. At one level, an IoT application may simply guide drivers to their closest available power-source, but a higher-level “societal” application will collate data from both the IoT network and other sources. Thus data might also flow from bus and train networks, as well as traffic sensors, pollution monitors and even fitness trackers for walking and cycling, to see overall shifts in transport habits and help “nudge” commuters’ behaviour through pricing or other measures. In that context, the precise networks used to connect to the end-points become obscured in the other layers of software and service – although they remain essential building blocks.

Figure 3: Characterising the difference between M2M and IoT across six domains

Source: STL Partners, More With Mobile

(Note: the Future of the Network research stream generally avoids using vague and loaded terms like “digital” and “OTT”. While concise, we believe they are often used in ways that guide readers’ thinking in wrong or unhelpful directions. Words and analogies are important: they can lead or mislead, often subconsciously).

Often, it seems that the word “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Some may actually involve analogue technology as well as digital. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords are becoming important.

Figure 4: ‘Digital’ and IoT convergence

Source: STL Partners, More With Mobile

We will also likely see many hybrid solutions emerging, for example where dedicated devices are combined with smartphones/PCs for particular functions. Thus a “digital home” service may link alarms, heating sensors, power meters and other connections via a central hub/console – but also send alerts and data to a smartphone app. It is already quite common for consumer/business drones to be controlled via a smartphone or tablet.

In terms of connectivity, it is also worth noting that “M2M” generally just refers to the use of conventional cellular modems and networks – especially 2G/3G. IoT expands this considerably – as well as future 5G networks and technologies being specifically designed with new use-cases in mind, we are also seeing the emergence of a huge range of dedicated 4G variants, plus new purpose-designed LPWAN platforms. IoT also intersects with the growing range of local/capillary[1] network technologies – which are often overlooked in conventional discussions about M2M.

Figure 5: Selected Internet of Things service areas

Source: STL Partners

The larger the number…

…the less relevance and meaning it has. We often hear of an emerging world of 20bn, 50bn, even trillions of devices being “networked”. While making for good headlines and press-releases, such numbers can be distracting.

While we will definitely be living in a transformed world, with electronics around us all the time – sensors, displays, microphones and so on – that does not easily translate into opportunities for telecom operators. The correct role for such data and forecasts is in the context of a particular addressable opportunity – otherwise one risks counting toasters, alongside sensors in nuclear power stations. As such, this report does not attempt to compete in counting “things” with other analyst firms, although references are made to approximate volumes.

For example, consider a typical large, modern building. It’s common to have temperature sensors, CCTV cameras, alarms for fire and intrusion, access control, ventilation, elevators and so forth. There will be an internal phone system, probably LAN ports at desks and WiFi throughout. In future it may have environmental sensors, smart electricity systems, charging points for electric vehicles, digital advertising boards and more. Yet the main impact on the telecom industry is just a larger Internet connection, and perhaps some dedicated lines for safety-critical systems like the fire alarm. There may well be 1,000 or 10,000 connected “things”, and yet for a cellular operator the building is more likely to be a future driver of cost (e.g. for in-building radio coverage for occupants’ phones) rather than extra IoT revenue. Few of the building’s new “things” will have SIM cards and service-based radio connections in any case – most will link into the fixed infrastructure in some way.

One also has to doubt some of the predicted numbers – there is considerable vagueness and hand-waving inherent in the forecasts. If a car in 2020 has 10 smart sub-systems and 100 sensors reporting data, does that count as 1, 10 or 100 “things” connected? Is the key criterion that smart appliances in a connected home are bought individually – and therefore might be equipped with individual wide-area network connections? When such data points are then multiplied up to give traffic forecasts, there are multiple layers of possible mathematical error.

This highlights the IoT quantification dilemma – everyone focuses on the big numbers, many of which are simple spreadsheet extrapolations, made without much consideration of the individual use-cases. And the larger the headline number, the less-likely the individual end-points will be directly addressed by telcos.

 

  • Executive Summary
  • Introduction
  • Connectivity in the wider M2M/IoT context
  • The larger the number…
  • The IoT network technology landscape
  • Overview – it’s not all cellular
  • The emergence of LPWANs & telcos’ involvement
  • The capillarity paradox: ARPU vs. addressability
  • Where does WiFi fit?
  • What will the impact of 5G be?
  • Other technology considerations
  • Strategic considerations
  • Can telcos compete in IoT without connectivity?
  • Investment vs. service offer
  • Regulatory considerations
  • Are 3GPP technologies being undermined?
  • Risks & threats
  • Conclusion

 

  • Figure 1: Telcos can only fully monetise “things” they can identify uniquely
  • Figure 2: The M2M Value Chain
  • Figure 3: Characterising the difference between M2M and IoT across six domains
  • Figure 4: ‘Digital’ and IoT convergence
  • Figure 5: Selected Internet of Things service areas
  • Figure 6: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 7: Wide-area IoT-related wireless technologies
  • Figure 8: Selected telco involvement with LPWAN
  • Figure 9: Telcos need to consider capillary networks pragmatically
  • Figure 10: Major telco types mapped to relevant IoT network strategies

Do network investments drive creation & sale of truly novel services?

Introduction

History: The network is the service

Before looking at how current network investments might drive future generations of telco-delivered services, it is worth considering some of the history, and examining how we got where we are today.

Most obviously, the original network build-outs were synonymous with the services they were designed to support. Both fixed and mobile operators started life as “phone networks”, with analogue or electro-mechanical switches. (Their earlier predecessors were built to serve the telegraph and pagers, respectively.) Cable operators began as conduits for analogue TV signals. These evolved to support digital switches of various types, as well as using IP connections internally.

From the 1980s onwards, it was hoped that future generations of telecom services would be enabled by, and delivered from, the network itself – hence acronyms like ISDN (Integrated Services Digital Network) and IN (Intelligent Network).

But the earliest signs that “digital services” might come from outside the telecom network were evident even at that point. Large companies built up private networks to support their own phone systems (PBXs). Various 3rd-party “value-added networks” (VAN) and “electronic data interchange” (EDI) services emerged in industries such as the automotive sector, finance and airlines. And from the early 1990s, consumers started to get access to bulletin boards and early online services like AOL and CompuServe, accessed using dial-up modems.

And then, around 1994, the first web browsers were introduced, and the model of Internet access and ISPs took off, initially with narrowband connections using modems, but then swiftly evolving to ADSL-based broadband. From then onwards, the bulk of new consumer “digital services” were web-based, or used other Internet protocols such as email and private messaging. At the same time, businesses evolved their own private data networks (using telco “pipes” such as leased-lines, frame-relay and the like), supporting their growing client/server computing and networked-application needs.

Figure 1: In recent years, most digital services have been “non-network” based

Source: STL Partners

For fixed broadband, Internet access and corporate data connections have mostly dominated ever since, with rare exceptions such as Centrex phone and web-hosting services for businesses, or alarm-monitoring for consumers. The first VoIP-based carrier telephony service only emerged in 2003, and uptake has been slow and patchy – there is still a dominance of old, circuit-based fixed phone connections in many countries.

More recently, a few more “fixed network-integrated” offers have evolved – cloud platforms for businesses’ voice, UC and SaaS applications, content delivery networks, and assorted consumer-oriented entertainment/IPTV platforms. And in the last couple of years, operators have started to use their broadband access for a wider array of offers such as home-automation, or “on-boarding” Internet content sources into set-top box platforms.

The mobile world started evolving later – mainstream cellular adoption only really started around 1995. Most mobile services prior to 2005 were either integrated directly into the network (e.g. telephony, SMS, MMS) or provided by operators through dedicated service delivery platforms (e.g. DoCoMo iMode and Verizon’s BREW store). Some early digital services such as custom ringtones were available via 3rd-party channels, but even they were typically charged and delivered via SMS. The “mobile Internet” between 1999 and 2004 was delivered via specialised WAP gateways and servers, implemented in carrier networks. The huge 3G spectrum licence awards around 2000-2002 were made on the assumption that telcos would continue to act as creators or gatekeepers for the majority of mobile-delivered services.

It was only around 2005-6 that “full Internet access” started to become available for mobile users, both for those with early smartphones such as Nokia/Symbian devices, and via (quite expensive) external modems for laptops. In 2007 we saw two game-changers emerge – the first-generation Apple iPhone, and Huawei’s USB 3G modem. Both catalysed the wide adoption of the consumer “data plan” – hitherto almost unknown. By 2010, there were virtually no new network-based services, while the “app economy” and “vanilla” Internet access started to dominate mobile users’ behaviour and spending. Even non-Internet mobile services such as BlackBerry BES were offered via alternative non-telco infrastructure.

Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7

Source: Disruptive Analysis

By 2013, there had still been very few successful mobile digital-services offers actually anchored in cellular operators’ infrastructure. There were a few positive signs in the M2M sphere and in wholesale SMS APIs, but other integrated propositions such as mobile network-based TV largely failed. Once again the transition to IP-based carrier telephony has been slow – VoLTE is gaining grudging acceptance more from necessity than desire, while “official” telco messaging services like RCS have been abject failures. Neither can be described as “digital innovation”, either – there is little new in them.

The last two years, however, have seen the emergence of some “green shoots” for mobile services. Some new partnering / charging models have borne fruit, with zero-rated content/apps becoming quite prevalent, and a handful of developer platforms finally starting to gain traction, offering network-based features such as location awareness. Various M2M sectors, such as automotive connectivity and some smart metering, have evolved. But the bulk of mobile “digital services” have been geared around iOS and Android apps, anchored in the cloud rather than telcos’ networks.

So in 2015, the majority of “cool” or “corporate” services in both the mobile and fixed worlds owe little to “the network” beyond fast IP connectivity: the feared (and factually incorrect) myth of the “dumb pipe”. Connected “general-purpose” devices like PCs and smartphones are optimised for service delivery via the web and mobile apps. Broadband-connected TVs are partly used for operator-provided IPTV, but also for so-called “OTT” services such as Netflix.

And future networks and novel services? As discussed below, there are some positive signs stemming from virtualisation and some new organisational trends at operators to encourage innovative services – but it is not yet clear that they will be enough to overcome the open Internet’s sustained momentum.

What are so-called “digital services”?

It is impossible to visit a telecoms conference, or read a vendor press-release, without being bombarded by the word “digital” in a telecom context. Digital services, digital platforms, digital partnerships, digital agencies, digital processes, digital transformation – and so on.

It seems that despite the first digital telephone exchanges being installed in the 1980s and digital computing being de rigueur since the 1950s, the telecoms industry’s marketing people have decided that 2015 is when the transition really occurs. But when the chaff is stripped away, what does it really mean, especially in the context of service innovation and the network?

Often, it seems that “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords are becoming important.

And in other instances the “digital” phrases relate to relatively normal services – but deployed and managed in a much more efficient and automated fashion. This is quite important, as a lot of older services still rely on “analogue” processes – manual configuration, physical “truck rolls” to install and commission, and high “touch” from sales or technical support people to sell and operate, rather than self-provisioning and self-care through a web portal. Here, the correct term is perhaps “digital transformation” (or even more prosaically simply “automation”), representing a mix of updated IP-based networks, and more modern and flexible OSS/BSS systems to drive and bill them.

STL identifies three separate mechanisms by which network investments can impact creation and delivery of services:

  • New networks directly enable the supply of wholly new services. For example, some IoT services or mobile gaming applications would be impossible without low-latency 4G/5G connections, more comprehensive coverage, or automated provisioning systems.
  • Network investment changes the economics of existing services, for example by removing costly manual processes, or radically reducing the cost of service delivery (e.g. fibre backhaul to cell sites)
  • Network investment occurs hand-in-hand with other changes, thus indirectly helping drive new service evolution – such as development of “partner on-boarding” capabilities or API platforms, which themselves require network “hooks”.

While the future will involve a broader set of content/application revenue streams for telcos, it will also need to support more, faster and differentiated types of data connections. Top of the “opportunity list” is the support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. Many of these will not involve connection via the “public Internet” and therefore there is a possibility for new forms of connectivity proposition or business model – faster- or lower-powered networks, or perhaps even the much-discussed but rarely-seen monetisation of “QoS” (Quality of Service). Even if not paid for directly, QoS could perhaps be integrated into compelling packages and data-service bundles.

There is also the potential for more “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network and “orchestrated” appropriately by the operator. (We covered this area in depth in the recent Telco 2.0 brief on Mobile Edge Computing, How 5G is Disrupting Cloud and Network Strategy Today.)

In other words, virtualisation and the “software network” might allow truly new services, not just providing existing services more easily. That said, even if the answer is that the network could make a large-enough difference, there are still many extra questions about timelines, technology choices, business models, competitive and regulatory dynamics – and the practicalities and risks of making it happen.

Part of the complexity is that many of these putative new services will face additional sources of competition and/or substitution by other means. A designer of a new communications service or application has many choices about how to turn the concept into reality. Basing network investments on specific predictions of narrow services therefore carries a huge amount of risk, unless those services are agreed clearly upfront.

But there is also another latent truth here: without ever-better (and more efficient) networks, the telecom industry is going to get further squeezed anyway. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Businesses and governments will look to manage their networking and communications costs – and may get access to dark fibre or spectrum to build their own networks, if commercial services don’t continue to improve in terms of price-performance. New connectivity options are springing up too, from WiFi to drones to device-to-device connections.

In other words: some network investment will be “table stakes” for telcos, irrespective of any new digital services. In many senses, the new propositions are “upside” rather than the fundamental basis justifying capex.

 

  • Executive Summary
  • Introduction
  • History: The network is the service
  • What are so-called “digital services”?
  • Service categories
  • Network domains
  • Enabler, pre-requisite or inhibitor?
  • Overview
  • Virtualisation
  • Agility & service enablement
  • More than just the network: lead actor & supporting cast
  • Case-studies, examples & counter-examples
  • Successful network-based novel services
  • Network-driven services: learning from past failures
  • The mobile network paradox
  • Conclusion: Services, agility & the network
  • How do so-called “digital” services link to the network?
  • Which network domains can make a difference?
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In recent years, most digital services have been “non-network” based
  • Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7
  • Figure 3: Network spend both “enables” & “prevents inhibition” of new services
  • Figure 4: Virtualisation brings classic telco “Network” & “IT” functions together
  • Figure 5: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 6: Service agility is multi-faceted. Network agility is a core element
  • Figure 7: Using Big Data Analytics to Predictively Cache Content
  • Figure 8: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 9: Mapping network investment areas to service opportunities

How to be Agile: Agility by Design and Information Intensity

Background: The Telco 2.0 Agility Challenge

Agility is a highly desirable capability for telecoms operators seeking to compete and succeed in their core businesses and the digital economy in general. In our latest industry research, we found that most telco executives that responded rated their organisations as ‘moderately agile’, and identified a number of practical steps that telco management could and should take to improve agility.

The Definition and Value of Agility

In the Telco 2.0 Agility Challenge, STL Partners first researched with 29 senior telecoms operator executives a framework to define agility in the industry’s own terms, and then gathered quantitative input to benchmark the industry’s agility from 74 further executives via an online self-diagnosis tool. The analysis in this report examines the aggregate quantitative input of those executives.

The Telco 2.0 Agility framework comprises the five agility domains illustrated below.

Figure 4: The Telco 2.0 Agility Framework

Source: STL Partners, The ‘Agile Operator’: 5 Key Ways to Meet the Agility Challenge

  • Organisational Agility: Establish a more agile culture and mindset, allowing you to move at faster speeds and to innovate more effectively
  • Network Agility: Embrace new networking technologies/approaches to ensure that you provide the best experience for customers and manage your resources and investment more efficiently
  • Service Agility: Develop the capability to create products and services in a much more iterative manner, resulting in products that are developed faster, with less investment and better serve customer needs
  • Customer Agility: Provide customers with the tools to manage their service and use analytics to gain insight into customer behaviour to develop and refine services
  • Partnering Agility: Become a more effective partner by developing the right skills to understand and assess potential partnerships and ensure that the right processes/technologies are in place to make partnering as easy as possible

A key finding of the first stage was that all of the executives we spoke to considered achieving agility as very important or critical to their organisations’ success, as exemplified by this quote.

“It is fundamental to be agile. For me it is much more important than being lean – it is more than just efficiency.”

European Telco CTO

This research project was kindly sponsored by Ericsson. STL Partners independently created the methodology, questions, findings, analysis and conclusions.

Purpose of this report

This report details:

  • The headline findings of the Telco 2.0 Agility Challenge
  • The category winners
  • What are the lessons revealed about telco agility overall?
  • What do telcos need to address to improve their overall agility?
  • What can others do to help?

Key Findings

The Majority of Operators were ‘Moderately Agile’

Just over two thirds of respondents achieved a total score between 50% and 75%. All twenty questions had four choices, ordered from least to most agile, so a score in this range means that on most questions these respondents chose the second or third option. The mean score achieved was 63% and the median 61%. This shows that most telcos believe they have some way to go before they could realistically consider themselves truly Agile by the definition set out in the benchmark.
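As a rough illustration of the arithmetic – assuming each option maps to evenly spaced points and the total is expressed as a share of the maximum, which is our reading rather than STL’s published normalisation – a respondent alternating between the second and third options lands squarely in that band:

```python
# Sketch of converting a 20-question, 4-option self-assessment into a
# percentage score. The points mapping is an assumption, not STL's method.

def agility_percentage(answers):
    """answers: 20 integers, 1 (Not Agile) .. 4 (Strongly Agile)."""
    assert len(answers) == 20 and all(1 <= a <= 4 for a in answers)
    return 100 * sum(answers) / (4 * len(answers))

print(agility_percentage([2, 3] * 10))  # 62.5 - close to the 63% mean reported
```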

Figure 5: Distribution of Total Agility Scores

Source: STL Partners Telco 2.0 Agility Challenge, n =74

Agility Champions

A further part of the Agility Challenge was to identify Agility Champions, who were recognised through Agility Domain Awards at TM Forum Live! in Nice in June. The winners of these prizes were additionally interviewed by STL Partners to check the evidence of their claims, and the winners were:

  • Telus, which won the Customer Agility Challenge Award. Telus adopted a Customer First initiative across the whole organization; this commitment to customers has led to both a significant increase in the ‘likelihood to recommend’ metric and a substantial reduction in customer complaints.
  • Zain Jordan, which won the Service Agility Challenge. Zain Jordan has achieved the speed and flexibility needed to differentiate itself in the marketplace through deployment of state-of-the-art, real time service enablement platforms and solutions. These are managed and operated by professional, specialized, and qualified teams, and are driving an increase in profitability and customer satisfaction.
  • Telecom Italia Digital Solutions (TIDS), which won the Partnering Agility Challenge. TIDS has partnered effectively to deliver innovative digital services, including establishing and launching an IoT platform from scratch within six months. It is also developing and coordinating the entire digital presence at Expo Milan 2015.

Network Agility is hardest to achieve

Most respondents scored lower on Network Agility than on the other domains. We believe this is partly because the network criteria were harder to achieve (e.g. configuring networks in real time), but also because achieving meaningful agility in a network is, as a rule, harder than in the other areas.

Figure 6: Average Score by Agility Domain

Note: The maximum score was 4 and the minimum 1, with 4 = Strongly Agile, 3 = Mostly Agile, 2 = Somewhat Agile, and 1 = Not Agile.

Source: STL Partners, n = 74

Next Section: Looking Deeper

 

  • Executive Summary
  • Introduction
  • Background: The Telco 2.0 Agility Challenge
  • Purpose of this report
  • Key Findings
  • The Majority of Operators were ‘Moderately Agile’
  • Agility Champions
  • Network Agility is hardest to achieve
  • Looking Deeper
  • Organisational Agility: ‘Mindset’ is not enough
  • Information Agility is an important factor
  • If you had to choose One Metric that Matters (OMTM) it would be…
  • Conclusions

 

  • Figure 1: The Telco 2.0 Agility Framework
  • Figure 2: Respondents can be grouped into 3 types based on the level and nature of their organisational agility
  • Figure 3: Information Agility Sub-Segments
  • Figure 4: The Telco 2.0 Agility Framework
  • Figure 5: Distribution of Total Agility Scores
  • Figure 6: Average Score by Agility Domain
  • Figure 7: We were surprised that Organisational Agility was not a stronger indicator of Total Agility
  • Figure 8: Differences in Responses to Organisational Agility Questions
  • Figure 9: Organisational Agility a priori Segments and Scores
  • Figure 10: ‘Agile by Design’ Organisations Scored higher than others
  • Figure 11: Defining Information Agility Segments
  • Figure 12: The Information Agile Segment scored higher than the others

How 5G is Disrupting Cloud and Network Strategy Today

5G – cutting through the hype

As with 3G and 4G, the approach of 5G has been heralded by vast quantities of debate and hyperbole. We contemplated reviewing some of the more outlandish statements we’ve seen and heard, but for the sake of brevity and progress we’ll concentrate in this report on the genuine progress that has also occurred.

A stronger definition: a collection of related technologies

Let’s start by defining terms. For us, 5G is a collection of related technologies that will eventually be incorporated in a 3GPP standard replacing the current LTE-A. NGMN, the forum that coordinates the mobile operators’ requirements vis-à-vis the vendors, recently issued a useful document setting out which technologies it wants to see in the eventual solution, or at least have considered in the standards process.

Incremental progress: ‘4.5G’

For a start, NGMN includes a variety of incremental improvements that promise substantially more capacity. These are things like higher modulation, developing the carrier-aggregation features in LTE-A to share spectrum between cells as well as within them, and improving interference coordination between cells. These are uncontroversial and are very likely to be deployed as incremental upgrades to existing LTE networks long before 5G is rolled out or even finished. This is what some vendors, notably Huawei, refer to as 4.5G.

Better antennas, beamforming, etc.

More excitingly, NGMN envisages some advanced radio features. These include beamforming, in which the shape of the radio beam between a base station and a mobile station is adjusted, taking advantage of the diversity of users in space to re-use the available radio spectrum more intensely, and both multi-user and massive MIMO (Multiple Input/Multiple Output). Massive MIMO simply means using many more antennas – at the moment the latest equipment uses 8 transmitter and 8 receiver antennas (8T*8R), whereas 5G might use 64. Multi-user MIMO uses the variety of antennas to serve more users concurrently, rather than just serving them faster individually. These promise quite dramatic capacity gains, at the cost of more computationally intensive software-defined radio systems and more complex antenna designs. Although they are cutting-edge, it’s worth pointing out that 802.11ac Wave 2 WiFi devices shipping now have these features, and it is likely that the WiFi ecosystem will hold a lead in them for some considerable time.

New spectrum

NGMN also sees evolution towards 5G in terms of spectrum. We can divide this into a conservative and a radical phase – in the first, conservative phase, 5G is expected to start using bands below 6GHz, while in the second, radical phase, the centimetre/millimetre-wave bands up to and above 30GHz are in discussion. These promise vastly more bandwidth, but as usual will demand a higher density of smaller cells and lower transmitter power levels. It’s worth pointing out that it’s still unclear whether 6GHz will make the agenda for this year’s WRC-15 conference, and 60GHz may or may not be taken up in 2019 at WRC-19, so spectrum policy is a critical path for the whole project of 5G.

Full duplex radio – doubling capacity in one stroke

Moving on, we come to some much more radical proposals and exotic technologies. 5G may use the emerging technology of full-duplex radio, which leverages advances in hardware signal processing to get rid of self-interference and make it possible for radio devices to send and receive at the same time on the same frequency, something hitherto thought impossible and a fundamental issue in radio. This area has seen a lot of progress recently and is moving from an academic research project towards industrial status. If it works, it promises to double the capacity provided by all the other technologies together.

A new, flatter network architecture?

A major redesign of the network architecture is being studied. This is highly controversial. A new architecture would likely be much “flatter”, with fewer levels of abstraction (such as the encapsulation of Internet traffic in the GTP protocol) or centralised functions. That would be a very radical break with the GSM-inspired practice that worked in 2G and 3G, and in an adapted form in 4G. However, the very demanding latency targets we will discuss in a moment will be very difficult to satisfy with a centralised architecture.

Content-centric networking

Finally, serious consideration is being given to what the NGMN calls information-based networking, better known to the wider community as name-based networking, named-data networking, or content-centric networking, as TCP-Reno inventor Van Jacobson called it when he introduced the concept in a now-classic lecture. The idea here is that the Internet currently works by mapping content to domain names to machines. In content-centric networking, users request an item of content, uniquely identified by a name, and the network finds the nearest source for it, thus keeping traffic localised and facilitating scalable, distributed systems. This would represent a radical break with both GSM-inspired and most Internet practice, and is currently very much a research project. However, code does exist and has even been implemented using the OpenFlow NFV platform, and IETF standardisation is under way.

The mother of all stretch targets

5G is already a term associated with implausibly grand theoretical maxima, like every G before it. However, the NGMN has the advantage of being a body that serves first of all the interests of the operators – the vendors’ customers – rather than the vendors. Its expectations are therefore substantially more interesting than some of the vendors’ propaganda material. It has also recently started to reach out to other stakeholders, such as manufacturing companies involved in the Internet of Things.

Reading the NGMN document raises some interesting issues about the definition of 5G. Rather than set targets in an absolute sense, it puts forward parameters for a wide range of different use cases. A common criticism of the 5G project is that it is over-ambitious in trying to serve, for example, low bandwidth ultra-low power M2M monitoring networks and ultra-HD multicast video streaming with the same network. The range of use cases and performance requirements NGMN has defined are so diverse they might indeed be served by different radio interfaces within a 5G infrastructure, or even by fully independent radio networks. Whether 5G ends up as “one radio network to rule them all”, an interconnection standard for several radically different systems, or something in between (for example, a radio standard with options, or a common core network and specialised radios) is very much up for debate.

In terms of speed, NGMN is looking for 50Mbps user throughput “everywhere”, with half that speed available uplink. Success is defined here at the 95th percentile, so this means 50Mbps to 95% geographical coverage, 95% of the time. This should support handoff up to 120Km/h. In terms of density, this should support 100 users/square kilometre in rural areas and 400 in suburban areas, with 10 and 20 Gbps/square km capacity respectively. This seems to be intended as the baseline cellular service in the 5G context.

In the urban core, downlink of 300Mbps and uplink of 50Mbps is required, with 100Km/h handoff, and up to 2,500 concurrent users per square kilometre. Note that the density targets are per-operator, so that would be 10,000 concurrent users/sq km when four MNOs are present. Capacity of 750Gbps/sq km downlink and 125Gbps/sq km uplink is required.

An extreme high-density scenario is included as “broadband in a crowd”. This requires the same speeds as the “50Mbps anywhere” scenario, with vastly greater density (150,000 concurrent users/sq km or 30,000 “per stadium”) and commensurately higher capacity. However, the capacity planning assumes that this use case is uplink-heavy – 7.5Tbps/sq km uplink compared to 3.75Tbps downlink. That’s a lot of selfies, even in 4K! The fast handoff requirement, though, is relaxed to support only pedestrian speeds.

There is also a femtocell/WLAN-like scenario for indoor and enterprise networks, which pushes speed and capacity to their limits, with 1Gbps downlink and 500Mbps uplink, 75,000 concurrent users/sq km or 75 users per 1000 square metres of floor space, and no significant mobility. Finally, there is an “ultra-low cost broadband” requirement with 10Mbps symmetrical, 16 concurrent users and 16Mbps/sq km, and 50Km/h handoff. (There are also some niche cases, such as broadcast, in-car, and aeronautical applications, which we propose to gloss over for now.)
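Note that the density and capacity targets above are linked by simple arithmetic: area capacity is concurrent-user density multiplied by per-user throughput. A quick sketch to sanity-check the per-operator figures (the “crowd” numbers imply the baseline downlink and uplink rates are simply swapped):

```python
# Cross-checking the NGMN per-operator targets quoted above:
# area capacity = concurrent-user density x per-user throughput.

def area_capacity_gbps(users_per_sq_km, per_user_mbps):
    return users_per_sq_km * per_user_mbps / 1000  # Gbps per square km

print(area_capacity_gbps(2500, 300))   # urban core downlink: 750.0 Gbps/sq km
print(area_capacity_gbps(2500, 50))    # urban core uplink:   125.0 Gbps/sq km
print(area_capacity_gbps(400, 50))     # suburban downlink:    20.0 Gbps/sq km
print(area_capacity_gbps(150000, 50))  # crowd uplink: 7500 Gbps = 7.5 Tbps/sq km
```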

Clearly, the solution will have to either be very flexible, or else be a federation of very different networks with dramatically different radio properties. It would, for example, probably be possible to aggregate the 50Mbps everywhere and ultra-low cost solutions – arguably the low-cost option is just the 50Mbps option done on the cheap, with fewer sites and low-band spectrum. The “broadband in a crowd” option might be an alternative operating mode for the “urban core” option, turning off handoff, pulling in more aggregated spectrum, and reallocating downlink and uplink channels or timeslots. But this does begin to look like at least three networks.

Latency: the X factor

Another big stretch, and perhaps the most controversial issue here, is the latency requirement. NGMN draws a clear distinction between what it calls end-to-end latency, aka the familiar round-trip time measurement from the Internet, and user-plane latency, defined thus:

Measures the time it takes to transfer a small data packet from user terminal to the Layer 2 / Layer 3 interface of the 5G system destination node, plus the equivalent time needed to carry the response back.

That is to say, the user-plane latency is a measurement of how long it takes the 5G network, strictly speaking, to respond to user requests, and how long it takes for packets to traverse it. NGMN points out that the two metrics are equivalent if the target server is located within the 5G network. NGMN defines both using small packets, and therefore negligible serialisation delay, and assuming zero processing delay at the target server. The target is 10ms end-to-end, 1ms for special use cases requiring low latency, or 50ms end-to-end for the “ultra-low cost broadband” use case. The low-latency use cases tend to be things like communication between connected cars, which will probably fall under the direct device-to-device (D2D) element of 5G, but nevertheless some vendors seem to think it refers to infrastructure as well as D2D. Therefore, this requirement should be read as one for which the 5G user plane latency is the relevant metric.

This last target is arguably the biggest stretch of all, but also perhaps the most valuable.

The lower bound on any measurement of latency is very simple – it’s the time it takes to physically reach the target server at the speed of light. Latency is therefore intimately connected with distance. Latency is also intimately connected with speed – protocols like TCP use it to determine how many bytes they can risk “in flight” before getting an acknowledgement, and hence how much useful throughput can be derived from a given theoretical bandwidth. Also, with faster data rates, more of the total time it takes to deliver something is taken up by latency rather than transfer.
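To make that concrete: with a fixed amount of unacknowledged data in flight, TCP throughput is bounded by the window size divided by the round-trip time, so cutting latency raises the ceiling directly. A minimal sketch, with an illustrative window size rather than a measurement:

```python
# Bandwidth-delay product: with a fixed window, TCP throughput <= window / RTT.

def max_throughput_mbps(window_bytes, rtt_ms):
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # a classic 64KiB receive window
for rtt_ms in (100, 10, 1):
    print(f"RTT {rtt_ms:>3}ms -> at most {max_throughput_mbps(WINDOW, rtt_ms):.0f} Mbps")
# RTT 100ms -> ~5 Mbps; RTT 10ms -> ~52 Mbps; RTT 1ms -> ~524 Mbps
```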

And the way we build applications now tends to make latency, and especially the variance in latency known as jitter, more important. In order to handle the scale demanded by the global Internet, it is usually necessary to scale out by breaking up the load across many, many servers. In order to make this work, it is usually also necessary to disaggregate the application itself into numerous, specialised, and independent microservices. (We strongly recommend Mary Poppendieck’s presentation at the link.)

The result of this is that a popular app or Web page might involve calls to dozens or hundreds of different services. Google.com includes 31 HTTP requests these days and Amazon.com 190. If the variation in latency is not carefully controlled, it becomes statistically more likely than not that a typical user will encounter at least one server’s 99th percentile performance. (eBay tries to identify users getting slow service and serve them a deliberately cut-down version of the site – see slide 17 here.)
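The underlying probability is straightforward: if a page triggers n independent back-end calls and each hits its 99th-percentile slow path 1% of the time, the chance of at least one slow call is 1 − 0.99^n. A quick sketch (the independence assumption is ours):

```python
# Probability that a page view hits at least one 99th-percentile response,
# assuming n independent calls, each slow with probability 0.01.

def p_at_least_one_slow(n_calls, p_slow=0.01):
    return 1 - (1 - p_slow) ** n_calls

for n in (31, 69, 190):  # Google.com, the break-even point, Amazon.com
    print(f"{n:>3} calls -> {p_at_least_one_slow(n):.0%}")
# 31 calls -> 27%; 69 calls -> 50% ("more likely than not"); 190 calls -> 85%
```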

We discuss this in depth in a Telco 2.0 Blog entry here.

Latency: the challenge of distance

It’s worth pointing out here that the 5G targets can literally be translated into kilometres. The rule of thumb for speed-of-light delay is 4.9 microseconds for each kilometre of fibre with a refractive index of 1.47. 1ms – 1000 microseconds – equals about 204km in a straight line, assuming no routing delay. A response back is needed too, so divide that distance in half. As a result, in order to be compliant with the NGMN 5G requirements, all the network functions required to process a data call must be physically located within 100km, i.e. 1ms, of the user. And if the end-to-end requirement is taken seriously, the applications or content that users want must also be hosted within 1000km, i.e. 10ms, of the user. (In practice, there will be some delay contributed by serialisation, routing, and processing at the target server, so this would actually be somewhat more demanding.)
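The contour arithmetic is easy to reproduce: halve the latency budget for the round trip, then divide by the per-kilometre fibre delay. A minimal sketch using the rule of thumb above:

```python
# Translate a round-trip latency budget into a straight-line fibre distance.

US_PER_KM = 4.9  # one-way delay per km of fibre (refractive index ~1.47)

def contour_km(budget_ms):
    one_way_us = budget_ms * 1000 / 2  # half the budget for each direction
    return one_way_us / US_PER_KM

for budget_ms in (1, 10):
    print(f"{budget_ms:>2}ms budget -> within ~{contour_km(budget_ms):.0f}km")
# 1ms -> ~102km (the "100km contour"); 10ms -> ~1020km (the "1000km contour")
```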

To achieve this, the architecture of 5G networks will need to change quite dramatically. Centralisation suddenly looks like the enemy, and middleboxes providing video optimisation, deep packet inspection, policy enforcement, and the like will have no place. At the same time, protocol designers will have to think seriously about localising traffic – this is where the content-centric networking concept comes in. Given the number of interested parties in the subject overall, it is likely that there will be a significant period of ‘horse-trading’ over the detail.

It will also need nothing more or less than a CDN and data-centre revolution. Content, apps, or commerce hosted within this 1000km contour will have a very substantial competitive advantage over sites that don’t move their hosting strategy to take advantage of lower latency. Telecoms operators, by the same token, will have to radically decentralise their networks to get their systems within the 100km contour. Content, app, or commerce sites that move closer still, to the 5ms/500km contour or nearer, will benefit further. The idea of centralising everything into shared services and global cloud platforms suddenly looks dated. So might the enormous hyperscale data centres one day look like the IT equivalent of sprawling, gas-guzzling suburbia? And will mobile operators become a key actor in the data-centre economy?

  • Executive Summary
  • Introduction
  • 5G – cutting through the hype
  • A stronger definition: a collection of related technologies
  • The mother of all stretch targets
  • Latency: the X factor
  • Latency: the challenge of distance
  • The economic value of snappier networks
  • Only Half The Application Latency Comes from the Network
  • Disrupt the cloud
  • The cloud is the data centre
  • Have the biggest data centres stopped getting bigger?
  • Mobile Edge Computing: moving the servers to the people
  • Conclusions and recommendations
  • Regulatory and political impact: the Opportunity and the Threat
  • Telco-Cloud or Multi-Cloud?
  • 5G vs C-RAN
  • Shaping the 5G backhaul network
  • Gigabit WiFi: the bear may blow first
  • Distributed systems: it’s everyone’s future

 

  • Figure 1: Latency = money in search
  • Figure 2: Latency = money in retailing
  • Figure 3: Latency = money in financial services
  • Figure 4: Networking accounts for 40-60 per cent of Facebook’s load times
  • Figure 5: A data centre module
  • Figure 6: Hyperscale data centre evolution, 1999-2015
  • Figure 7: Hyperscale data centre evolution 2. Power density
  • Figure 8: Only Facebook is pushing on with ever bigger data centres
  • Figure 9: Equinix – satisfied with 40k sq ft
  • Figure 10: ETSI architecture for Mobile Edge Computing

 

Gigabit Cable Attacks This Year

Introduction

Since at least May 2014 and the Triple Play in the USA Executive Briefing, we have been warning that the cable industry’s continuous improvement of its DOCSIS 3 technology threatens fixed operators with a succession of relatively cheap (in terms of CAPEX) but dramatic speed jumps. Gigabit chipsets have been available for some time, so the actual timing of the roll-out is set by cable operators’ commercial choices.

With the arrival of DOCSIS 3.1, multi-gigabit cable has also become available. As a result, cable operators have become the best value providers in the broadband mass markets: typically, we found in the Triple Play briefing, they were the cheapest in terms of price/megabit in the most common speed tiers, at the time between 50 and 100Mbps. They were sometimes also the leaders for outright speed, and this has had an effect. In Q3 2014, for the first time, Comcast had more high-speed Internet subscribers than it had TV subscribers, on a comparable basis. Furthermore, in Europe, cable industry revenues grew 4.6% in 2014 while the TV component grew 1.8%. In other words, cable operators are now broadband operators above all.

Figure 1: Comcast now has more broadband than TV customers

Source: STL Partners, Comcast Q1 2015 trending schedule 

In the December 2014 Executive Briefing Will AT&T shed copper, fibre-up, or buy more content – and what are the lessons?, we covered the impact on AT&T’s consumer wireline business, and pointed out that its strategy of concentrating on content as opposed to broadband has not really delivered. In the context of ever more competition from streaming video, it was necessary to have an outstanding broadband product before trying to add content revenues – something its DSL infrastructure couldn’t deliver against cable or fibre competitors. The cable competition concentrated on winning whole households’ spending with broadband, with content as an upsell, and has undermined the wireline base to the point where AT&T might well exit a large proportion of it or perhaps sell off the division, refocusing on wireless, DirecTV satellite TV, and enterprise. At the moment, Comcast sees about two broadband net-adds for each triple-play net-add, although the increasing numbers of business ISP customers complicate the picture.

Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups

Source: STL, Comcast Q1 trending schedule

Since Christmas, the trend has picked up speed. Comcast announced a 2Gbps deployment to 1.5 million homes in the Atlanta metropolitan area, with a national deployment to follow. Time Warner Cable has announced a wave of upgrades in Charlotte, North Carolina that ups their current 30Mbps tier to 200Mbps and their 50Mbps tier to 300Mbps, after Google Fiber announced plans to deploy in the area. In the UK, Virgin Media users have been reporting unusually high speeds, apparently because the operator is trialling a 300Mbps speed tier, not long after it upgraded 50Mbps users to 152Mbps.

It is very much worth noting that these deployments are at scale. The Comcast and TWC rollouts are in the millions of premises. When the Virgin Media one reaches production status, it will be multi-million too. Vodafone-owned KDG in Germany is currently deploying 200Mbps, and it will likely go further as soon as it feels the need from a tactical point of view. This is the advantage of an upgrade path that doesn’t require much trenching. Not only can the upgrades be incremental and continuous, they can also be deployed at scale without enormous disruption.

Technology is driving the cable surge

This year’s CES saw the announcement, by Broadcom, of a new system-on-a-chip (SoC) for cable modems/STBs that integrates the new DOCSIS 3.1 cable standard. This provides for even higher speeds, theoretically up to 7Gbps downlink, while still providing a broadcast path for pure TV. The SoC also, however, includes a WLAN radio with the newest 802.11ac technology, including beamforming and 4×4 multiple-input and multiple-output (MIMO), which is rated for gigabit speeds in the local network.

Even taking into account the usual level of exaggeration, this is an impressive package, offering telco-hammering broadband speeds, support for broadcast TV, and in-home distribution at speeds that can keep up with 4K streaming video. These are the SoCs that Comcast will be using for its gigabit cable rollouts. STMicroelectronics demonstrated its own multigigabit solution at CES, and although Intel has yet to show a DOCSIS 3.1 SoC, the most recent version of its Puma platform offers up to 1.6Gbps in a DOCSIS 3 network. DOCSIS 3 and 3.1 are designed to be interoperable, so this product has a future even after the head-ends are upgraded.
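
To see where multi-gigabit headline figures of this kind come from, a back-of-envelope calculation helps. The sketch below takes the 192MHz OFDM channels and 4096-QAM modulation from the DOCSIS 3.1 specification; the overhead factor and the four-channel bonding plan are purely our own illustrative assumptions, not Broadcom’s published configuration.

```python
# Back-of-envelope DOCSIS 3.1 downstream estimate.
# Channel width and modulation order are from the DOCSIS 3.1 spec;
# the overhead factor and bonding plan are illustrative assumptions.
from math import log2

channel_width_hz = 192e6      # max DOCSIS 3.1 OFDM channel width
bits_per_symbol = log2(4096)  # 4096-QAM -> 12 bits per symbol
efficiency = 0.82             # assumed FEC + framing overhead factor
bonded_channels = 4           # assumed channel-bonding plan

per_channel_bps = channel_width_hz * bits_per_symbol * efficiency
total_bps = per_channel_bps * bonded_channels

print(f"Per channel: {per_channel_bps / 1e9:.2f} Gbps")                  # ~1.89 Gbps
print(f"{bonded_channels} bonded channels: {total_bps / 1e9:.2f} Gbps")  # ~7.6 Gbps
```

Four such channels get within sight of the 7Gbps headline, which is why the claim, while theoretical, is not mere marketing.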

Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset

Source: RCRWireless 

With multiple chipset vendors shipping products, CableLabs running regular interoperability tests, and large regional deployments beginning, we conclude that the big cable upgrade is now here. Even if cable operators succeed in virtualising their set-top box software, neither the customer-end modem nor the WiFi router can be provided from the cloud. It’s important to realise that FTTH operators can upgrade in a similarly painless way by replacing their optical network terminals (ONTs), but DSL operators need to replace infrastructure. Also, ONTs are often independent of the WLAN router or other customer equipment, so the upgrade won’t necessarily improve the WiFi.

WiFi is also getting a major upgrade

The Broadcom device is so significant, though, because of the very strong WiFi support built in alongside the cable modem. Like the cable industry, the WiFi ecosystem has succeeded in keeping up a steady cycle of continuous improvements that are usually backwards compatible, from 802.11b through to 802.11ac, thanks to a major standards effort, the scale that Intel’s and Apple’s support gives it, and its relatively light intellectual property encumbrance.

802.11ac adds a number of advanced radio features, notably multiple-user MIMO, beamforming, and higher-density modulation, that are only expected to arrive in the cellular network as part of 5G some time after 2020, as well as some incremental improvements over 802.11n, like additional MIMO streams, wider channels, and 5GHz spectrum by default. As a result, the industry refers to it as “gigabit WiFi”, although the gigabit is a per-station rather than per-user throughput.
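
The “gigabit” label is easy to check against the standard’s own parameters. The sketch below reproduces the textbook 802.11ac PHY-rate calculation for an 80MHz channel with 256-QAM, the highest coding rate, and a short guard interval; real-world throughput will be considerably lower once MAC overhead and radio conditions are taken into account.

```python
# 802.11ac per-station PHY rate: standard parameters for an 80MHz channel.
data_subcarriers = 234    # data-bearing OFDM subcarriers at 80 MHz
bits_per_subcarrier = 8   # 256-QAM
coding_rate = 5 / 6       # highest 802.11ac coding rate (MCS 9)
symbol_time_s = 3.6e-6    # OFDM symbol incl. 400 ns short guard interval
spatial_streams = 4       # 4x4 MIMO, as in the Broadcom SoC

per_stream_bps = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time_s
total_bps = per_stream_bps * spatial_streams

print(f"Per stream: {per_stream_bps / 1e6:.0f} Mbps")  # ~433 Mbps
print(f"4 streams:  {total_bps / 1e6:.0f} Mbps")       # ~1733 Mbps
```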

The standard has been settled since January 2014, and support has been available in most flagship-class devices and laptop chipsets since then, so this is now a reality. The upgrade of the cable networks to 802.11ac WiFi backed with DOCSIS 3.1 will have major strategic consequences for telcos, as it enables the cable operators and any strategic partners of theirs to go in even harder on the fixed broadband business and also launch a WiFi-plus-MVNO mobile service at the same time. The beamforming element of 802.11ac should help them to support higher user densities, as it makes use of the spatial diversity among different stations to reduce interference. Cablevision already launched a mobile service just before Christmas. We know Comcast is planning to launch one sometime this year, as it has been hiring a variety of mobile professionals quite aggressively. And, of course, the CableWiFi roaming alliance greatly facilitates scaling up such a service. The economics of a mini-carrier, as we pointed out in the Google MVNO: What’s Behind It and What Are the Implications? Executive Briefing, hinge on how much traffic can be offloaded to WiFi or small cells.

Figure 4: Modelling a mini-carrier shows that the WiFi is critical

Source: STL Partners

Traffic carried on WiFi costs nothing in terms of spectrum and much less in terms of CAPEX (due to the lower intellectual-property tax and the very high production runs of WiFi equipment). In a cable context, it will often be backhauled in the spare capacity of the fixed access network, and will therefore add very little cost on this score. As a result, the percentage of data traffic transferred to WiFi, or absorbed by it, is a crucial variable. KDDI, for example, carries 57% of its mobile data traffic on WiFi and hopes to reach 65% by the end of this year. Increasing the fraction from 30% to 57% roughly halved its CAPEX on LTE.
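
A minimal sketch of the arithmetic involved, under the deliberately naive assumption (ours) that cellular RAN CAPEX scales linearly with the traffic the cellular network must carry, while WiFi-carried traffic rides on existing fixed-access capacity at negligible marginal cost:

```python
# Naive, first-order mini-carrier model: cellular CAPEX is assumed
# proportional to the share of traffic that stays on the cellular network.
def cellular_capex_index(wifi_fraction: float) -> float:
    """Relative CAPEX requirement, indexed to carrying 100% on cellular."""
    return 1.0 - wifi_fraction

before = cellular_capex_index(0.30)  # 30% of traffic offloaded to WiFi
after = cellular_capex_index(0.57)   # KDDI's reported 57% offload

print(f"CAPEX index at 30% offload: {before:.2f}")  # 0.70
print(f"CAPEX index at 57% offload: {after:.2f}")   # 0.43
print(f"Reduction: {1 - after / before:.0%}")       # ~39%
```

The linear model yields a reduction of roughly 39% rather than the halving KDDI reports, which suggests the real relationship is steeper: busy-hour peaks and site-count thresholds plausibly make CAPEX respond non-linearly to offload.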

A major regulatory issue at the moment is the deployment of LTE-LAA (Licensed-Assisted Access), which aggregates unlicensed radio spectrum with a channel from licensed spectrum in order to increase the available bandwidth. The 5GHz WiFi band is the most likely candidate for this, as it is widely available, contains a lot of capacity, and is well-supported in hardware.

We should expect the cable industry to push back very hard against efforts to rush deployment of LTE-LAA cellular networks through the regulatory process, as they have a great deal to lose if the cellular networks start to take up a large proportion of the 5GHz band. From their point of view, a major purpose of LTE-LAA might be to occupy the 5GHz and deny it to their WiFi operations.


Key Questions for The Future of the Network, Part 2: Forthcoming Disruptions

We recently published a report, Key Questions for The Future of the Network, Part 1: The Business Case, exploring the drivers for network investment. In this follow-up report, we expand the coverage into two separate areas, through which we explore five key questions:

Disruptive network technologies

  1. Virtualisation & the software telco – how far, how fast?
  2. What is the path to 5G? And what will it be used for?
  3. What is the role of WiFi & other wireless technologies?

External changes

  4. What are the impacts of government & regulation on the network?
  5. How will the vendor landscape change & what are the implications of this?

In the extract below, we outline the context for the first area – disruptive network technologies – and explore the rationales and processes associated with virtualisation (Question 1).

Critical network-technology disruptions

This section covers three huge questions which should be at the top of any CTO’s mind in a CSP – and those of many other executives as well. These are strategically-important technology shifts that have the potential to “change the game” in the longer term. While two of them are “wireless” in nature, they also impact fixed/fibre/cable domains, both through integration and potential substitution. These will also have knock-on effects in financial terms – directly in terms of capex/opex costs, or indirectly in terms of services enabled and revenues.

This is not intended as a round-up of every important trend across the technology spectrum. Clearly, there are many other evolutions occurring in device design, IoT, software engineering, optical networking and semiconductor development. These will all intersect in some ways with telcos, but they are so many “logical hops” away from the process of actually building and running networks that they don’t really fit into this document easily. (Although they do appear in contexts such as drivers of desirable 5G network capabilities.)

Instead, the focus once again is on unanswered questions that link innovation with “disruption” of how networks are conceived and deployed. As described below, network virtualisation has huge and diverse impacts across the CSP universe. 5G will likely represent a large break from today’s 4G architecture, too. This is very different from changes which are mostly incremental.

The mobile and software focus of this section is deliberate. Fixed-network technologies – fast-evolving though they are – generally do not today cause “disruption” in a technical sense. As the name suggests, the current newest cable-industry standard, DOCSIS3.1, is an evolution of 3.0, not a revolution. There is no 4.0 on the drawing-boards, yet. But the relative ease of upgrade to “gigabit cable” may unleash more market-related disruptions, as telcos feel the need to play catch-up with their rivals’ swiftly-escalating headline speeds.

Fibre technologies also tend to be comparatively incremental, rather than driving (or enabling) massive organisational and competitive shifts. In fixed networks there are other important drivers – competition, network unbundling, 4K television, OTT-style video and so on – as well as important roles for virtualisation, which covers both mobile and fixed domains. For markets with high use of residential “OTT video” services such as Netflix – especially in 4K variants – the push to gigabit-range speeds may be faster than expected. This will also have knock-on impacts on the continued improvement of WiFi as it defends against ever-faster cellular networks. Indeed, faster gigabit cable and FTTH networks will be necessary to provide backhaul for 4.5G and 5G cellular networks, both for normal cell-towers and the expected rapid growth of small-cells.

The questions covered in more depth here examine:

  • Virtualisation & the “software telco”: How fast will SDN and NFV appear in commercial networks, and how broad are their impacts in both the medium and longer term?
  • What is the path from 4G to 5G? This is a less obvious question than it might appear, as we do not yet even have agreed definitions of what we want “5G” to do, let alone defined standards to do it.
  • What is the role of WiFi and other wireless technologies?

All of these intersect, and have inter-dependencies. For instance, 5G networks are likely to embrace SDN/NFV as a core component, and also perhaps form an “umbrella” over other low-power wireless networks.

A fourth “critical” question would have been to consider security technology and processes. Clearly, the future network is going to face continued challenges from hackers and perhaps even cyber-warfare, for which the industry will need to prepare. However, that is in many ways a broader set of questions that reflects on all the others – virtualisation will bring its own security dilemmas, as (no doubt) will 5G. WiFi already does. It is certainly a critical area that bears consideration at a strategic level within CSPs, although it is not addressed here as a specific “question”. It is also a huge and complex area that deserves separate study.

Non-disruptive network technologies

As well as being prepared to exploit truly disruptive innovations, the industry also needs to get better at spotting non-disruptive ones that are doomed to failure, and abandoning them before they incur too much cost or distraction. The telecoms sector has a long way to go before it embraces the start-up mentality of “failing fast” – there are too many hypothetical “standards” gathering dust on a metaphorical shelf, and never being deployed despite a huge amount of work. Sometimes they get shoe-horned into new architectures, as a way to breathe life into them – but that often just encumbers shiny new technologies with the failures of the past.

For example, over the past 10+ years, the telecom industry has been pitching IMS (IP Multimedia Subsystem) as the future platform for interoperating services. It is finally gaining some adoption, but essentially only as a way to implement VoIP versions of the phone system – and even then, with huge increases in complexity and often higher costs. It is not “disruptive” except insofar as it sucks huge amounts of resources and management attention away from other possible sources of genuine innovation. Few developers care about it, and the “technology politics” behind it have contributed to the industry’s problems, not the solutions. While there is growth in the deployment of IMS (e.g. as a basis for VoLTE – voice over LTE – or fixed-line VoIP), it is primarily an extra cost, rather than a source of new revenue or competitive advantage. It might help telcos reduce costs by retiring old equipment or reclaiming spectrum for re-use, but that seems to be the limit of its utility and opportunity.

Figure 1: IMS-based services (mostly VoIP) are evolutionary not disruptive

Source: Disruptive Analysis

A common theme in recent years has been for individual point solutions for technical standards to seem elegant “in isolation”, but actually fail to take account of the wider market context. Real-world “offload” of mobile data traffic to WiFi and femtocells has been minimal, because of various practical and commercial constraints – many of which were predictable. Self-optimising networks (where radio components configure, provision and diagnose themselves automatically) suffered from apathy among vendors – as well as fears from operator staff that they might make themselves redundant. A whole slew of attempts at integrating WiFi with cellular have also had minimal impact, because they ignored the existence of private WiFi and user behaviour. Some of these are now making a return, engineered into more holistic solutions like HetNets and SDN. Telco execs need to ensure that their representatives on standards bodies, or industry fora, are able to make pragmatic decisions with multiple contributory inputs, rather than always pursuing “engineering purity”.

Virtualisation & the “software telco” – how far, how fast?

Spurred by rapid advances in standardised computing products and cloud platforms, the idea of virtualisation is now almost ubiquitous across the telecom sector. Yet the specialised nature of network equipment means that “switching to the cloud” is a lot more complicated than is the case for enterprise IT. But change is happening – the industry is now slowly moving from inflexible, non-scalable network elements or technology sub-systems, to ones which are programmable, running on commercial hardware, and able to “spin up” or down in terms of capacity. We are still comparatively early in this new cycle, but the trend now appears to be inexorable. It is being driven both by what is becoming possible, and by the threats posed by other denizens of the “cloud universe” migrating towards the telecoms industry and threatening to replace parts of it unilaterally.

Two acronyms cover the main developments:

  • Software-defined networks (SDN) change the basic network “plumbing”: rather than hugely complex switches and routers transmitting and processing data streams individually, SDN puts a central “controller” function in charge of simpler, more flexible boxes. These can be updated more easily, have new network-processing capabilities enabled, and allow (hopefully) for better reliability and lower costs. (A toy sketch of this controller/switch split follows these definitions.)
  • Network function virtualisation (NFV) is less about the “big iron” parts of the network, instead focusing on the myriad of other smaller units needed to do more specific tasks relating to control, security, optimisation and so forth. It allows these supporting functions to be re-cast in software, running as apps on standard servers, rather than needing a variety of separate custom-built boxes and chips.
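
The sketch below is our own illustration, not any real controller API. It captures the division of labour: the controller holds the policy, while the switches merely match packets against the rules it installs, punting anything unrecognised back to the controller.

```python
# Toy SDN model: a central controller installs match-action rules into
# simple switches, which do no decision-making of their own.
class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table = []  # ordered list of (match, action) rules

    def install_rule(self, match: dict, action: str):
        self.flow_table.append((match, action))

    def forward(self, packet: dict) -> str:
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # no matching rule: punt upstream

class Controller:
    """The central brain: pushes policy out to every switch it manages."""
    def __init__(self, switches: list):
        self.switches = switches

    def apply_policy(self):
        for sw in self.switches:
            sw.install_rule({"dst_port": 80}, "route_via_dpi")  # steer web traffic
            sw.install_rule({"vlan": 100}, "drop")              # block one VLAN

edge = Switch("edge-1")
Controller([edge]).apply_policy()
print(edge.forward({"dst_port": 80, "vlan": 1}))  # -> route_via_dpi
print(edge.forward({"dst_port": 22, "vlan": 1}))  # -> send_to_controller
```

Changing network behaviour then means changing controller software, not replacing boxes – which is precisely why the approach promises lower costs and faster change.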

Figure 2: ETSI’s vision for NFV

Source: ETSI & STL Partners

And while a lot of focus has been placed on operators’ own data-centres and “data-plane” boxes like routers and assorted traffic-processing “middle-boxes”, that is not the whole story. Virtualisation also extends to the other elements of telco kit: “control-plane” elements used to oversee the network and internal signalling, billing and OSS systems, and even bits of the access and radio network. Tying them all together – and managing the new virtual components – brings new challenges in “orchestration”.
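
The “spin up or down” idea is essentially the cloud autoscaling pattern applied to network functions. The sketch below is our own illustration of that pattern; the per-instance capacity and headroom figures are invented for the example.

```python
# Minimal autoscaling sketch for a virtualised network function (VNF):
# an orchestrator sizes the instance pool to offered load plus headroom.
import math

CAPACITY_PER_INSTANCE = 10_000  # assumed sessions one VNF instance handles
HEADROOM = 1.25                 # assumed over-provisioning factor

def instances_needed(offered_sessions: int) -> int:
    return max(1, math.ceil(offered_sessions * HEADROOM / CAPACITY_PER_INSTANCE))

running = 1
for hour, load in enumerate([4_000, 18_000, 55_000, 30_000, 7_000]):
    target = instances_needed(load)
    verb = "scale up" if target > running else "scale down" if target < running else "hold"
    print(f"hour {hour}: load={load:>6} -> {target} instance(s) ({verb})")
    running = target
```

The hard part in telco networks is not this arithmetic but everything around it: state migration between instances, latency budgets, and coordinating scaling decisions across vendors’ components – in other words, orchestration.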

But this raises a number of critical subsidiary questions.
