Telco roadmap to net-zero carbon emissions: Why, when and how

Telcos’ role in reducing carbon emissions

There are over eighty telecoms operators globally that turn over $1 billion or more in revenues every year. As major companies, service providers (SPs) have a role to play in reducing global carbon emissions. So far, they have been behind the curve: in the Corporate Knights Global 100 of the world’s most sustainable corporations, only five are telcos (BT, KPN, Cogeco, Telus and StarHub), and none features in the top 30.

In this report, we explore the aims, visions and priorities of SPs in their journey to become more sustainable companies. More specifically, we have sought to understand the practical steps they are taking to reduce their carbon footprints. This includes how they define, prioritise and drive initiatives, as well as the governance and reporting used to track their progress towards ‘net-zero’.

Each SP’s journey is unique; we’ve explored how regional and market influences affect that journey and how different personas and influencers within the SP approach this topic. To do this, we have spoken to 40 individuals at SPs globally. Interviewees ranged from corporate social responsibility (CSR) representatives to those responsible for the SP’s technology and enterprise strategies. This report reflects the strategies and ambitions we learnt about during these conversations.


What do we mean by scope 1, 2 and 3?

Before diving in further, it’s important to align on the key terminology that all major SPs are drawing on to evaluate and report their sustainability efforts: in particular, how they disclose and commit to reducing their greenhouse gas emissions.

SPs divide their carbon emissions into scope 1, 2 and 3 – scope 3 is by far the most significant

For most SPs, scope 1 (e.g. emissions from the fleet of vehicles used to install equipment or perform maintenance tasks on base stations) and scope 2 (e.g. the electricity they purchase to run their networks) make up less than 20% of their overall footprint. These emissions can be recorded and reported on accurately, and there are established methodologies for doing so.

Scope 3, however, is where 80%+ of SP carbon emissions come from. This is because it captures the impact of the SP’s whole supply chain, e.g. the carbon emissions released from manufacturing the network equipment that they deploy. It also includes the carbon emissions arising from supplying customers with products and services that an SP sells, e.g. from shipping and de-commissioning consumer handsets or servers provided to enterprise customers.
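To make the split concrete, here is a minimal sketch that tallies an SP's emissions by scope and computes each scope's share of the total footprint. The tonnage figures are invented for illustration only and are not drawn from any operator's disclosures.

```python
# Illustrative only: the tonnage figures below are invented, not taken from any SP's reporting.
emissions_tco2e = {
    "scope_1": 120_000,    # e.g. fuel burned by the operator's own vehicle fleet and generators
    "scope_2": 580_000,    # e.g. purchased electricity used to run the network
    "scope_3": 3_400_000,  # e.g. supply chain, plus use and disposal of products sold
}

total = sum(emissions_tco2e.values())
for scope, tonnes in emissions_tco2e.items():
    print(f"{scope}: {tonnes:,} tCO2e ({tonnes / total:.0%} of footprint)")

# With these example numbers, scope 3 accounts for roughly 83% of the total,
# consistent with the 80%+ share described above.
```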

Table of Contents

  • Executive Summary
  • Table of Figures
  • Introduction
    • What do we mean by scope 1, 2 and 3?
    • Where are SPs in their sustainability journey?
    • How does this differ by region?
    • What’s covered in the rest of the report?
  • Procurement and sustainable supply chain
    • Scope 1, 2 and 3: Where are procurement teams focused
    • Current priorities
    • Regional nuances
    • Best and next practices
  • Networking
  • IT and facilities
  • Enterprise products and services
  • Key recommendations and conclusion


The Telco Cloud Manifesto

Telco cloud: A key enabler of the Coordination Age

The Coordination Age is coming

As we have set out in our company manifesto, STL Partners believes that we are entering a new ‘Coordination Age’ in which technological developments will enable governments, enterprises, and consumers to coordinate their activities more effectively than ever before. The results of better and faster coordination will be game-changing for society, as resources are distributed and used more effectively, leading to substantial social, economic and health benefits.

A critical component of the Coordination Age is the universal availability of flexible, fast, reliable, low-latency networks that support a myriad of applications which, in turn, enable a complex array of communications, decisions, transactions and processes to be completed quickly and, in many cases, automatically without human intervention. The network remains key: without a network that is fit for purpose, matching supply and demand in real time is impossible.


How telecoms can define a new role

Historically, telecoms networks have been built using specialist, dedicated (proprietary) hardware and software. This has ensured networks are reliable and secure, but it has also stymied innovation, since both operators and third parties have found it challenging to leverage network capabilities. Indeed, innovation accelerated with the arrival of the Internet, which enabled services to be decoupled from the network and run ‘over the top’.

But the Coordination Age requires more from the network than ever before – applications require the network to be flexible, accessible and support a range of technical and commercial options. Applications cannot run independently of the network but need to integrate with it. The network must be able to impart actionable insights and flex its speed, bandwidth, latency, security, business model and countless other variables quickly and autonomously to meet the needs of applications using it.

Telco cloud – the move to a network built on common off-the-shelf hardware and flexible interoperable software from best-of-breed suppliers that runs wherever it is needed – is the enabler of this future.

 

Table of Contents

  • Executive Summary
  • Telco cloud: A key enabler of the Coordination Age
    • The Coordination Age is coming
    • How telecoms can define a new role
  • Telco cloud: The growth enabler for the telecoms industry
    • Telecoms revenue growth has stalled, traffic has not
    • Telco cloud: A new approach to the network
    • …a fundamental shift in what it means to be an operator
    • …and the driver of future telecoms differentiation and growth
  • Realising the telco cloud vision
    • Moving to telco cloud is challenging
    • Different operator segments will take different paths


Network convergence: How to deliver a seamless experience

Operators need to adapt to the changing connectivity demands post-COVID-19

The global dependency on consistent high-performance connectivity has recently come to the fore as the COVID-19 outbreak has transformed many of the remaining non-digital tasks into online activities.

The typical patterns of networking have broken and a ‘new normal’, albeit possibly a somewhat transitory one, is emerging. The recovery of the global economy will depend on governments, healthcare providers, businesses and their employees robustly communicating and gaining uninhibited access to content and cloud through their service providers – at any time of day, from any location and on any device.

Reliable connectivity is a critical commodity. Network usage patterns have shifted more towards the home and remote working. Locations that previously saw light usage now have high demand. Conversely, many business locations no longer need such high capacity. Nor is utilisation expected to return to pre-COVID-19 patterns, as people and businesses adapt to new daily routines – at least for some time.

The strategies with which telcos started the year have of course been disrupted with resources diverted away from strategic objectives to deal with a new mandate – keep the country connected. In the short-term, the focus has shifted to one which is more tactical – ensuring customer satisfaction through a reliable and adaptable service with rapid response to issues. In the long-term, however, the objectives for capacity and coverage remain. Telcos are still required to reach national targets for a minimum connection quality in rural areas, whilst delivering high bandwidth service demands in hotspot locations (although these hotspot locations might now change).

Of course, modern networks are designed with scalability and adaptability in mind – some recent deployments from new disruptors (such as Rakuten) demonstrate the power of virtualisation and automation in that process, particularly when it comes to the radio access network (RAN). In many legacy networks, however, one area which is not able to adapt fast enough is the physical access. Limits on spectrum, coverage (indoors and outdoors) and the speed at which physical infrastructure can be installed or updated become a bottleneck in the adaptation process. New initiatives to meet home working demand through an accelerated fibre rollout are happening, but they tend to come at great cost.

Network convergence is a concept which can provide a quick and convenient way to address this need for improved coverage, speed and reliability in the access network, without the need to install or upgrade last-mile infrastructure. By definition, it is the coming together of multiple network assets, as part of a transformation to one intelligent network that can efficiently provide customers with a single, unified, high-quality experience at any time, in any place.

It has already attracted interest and is finding an initial following. A few telcos have used it to provide better home broadband. Internet content and cloud service providers are interested, as it adds resilience to the mobile user experience, and enterprises are interested in utilising multiple lower-cost commodity backhauls – a combination that benefits from inherent protection against costly network outages.

Network convergence helps create an adaptable and resilient last mile

Most telcos already have the facility to connect with their customers via multiple means, providing mobile, fixed-line and public Wi-Fi connectivity to those in their coverage footprint. The strategy has been to convert individual ‘pure’ mobile or fixed customers into households. The expectation is that this increases revenue through bundling and loyalty while adding friction to churn – a concept which has been termed ‘convergence’. Although the customer may see one converged telco through brand, billing and customer support, the delivery of a consistent user experience across all modes of network access has been lacking and awkward. In the end, it is customer dissatisfaction which drives churn, so delivering a consistent user experience is important.

Convergence is a term used to mean many different things, from a single bill for all household connectivity, to modernising multiple core networks into a single efficient core. While most telcos have so far been concentrating on increasing operational efficiency, increasing customer loyalty/NPS and decreasing churn through some initial aspects of convergence, some are now looking into network convergence – where multiple access technologies (4G, 5G, Wi-Fi, fixed line) can be used together to deliver a resilient, optimised and consistent network quality and coverage.

Overview of convergence

Source: STL Partners

As an overarching concept, network convergence introduces more flexibility into the access layer. It allows a single converged core network to utilise and aggregate whichever last mile connectivity options are most suited to the environment. Some examples are:

  • Hybrid Access: DSL and 4G macro network used together to provide extra speed and fallback reliability in hybrid fixed/mobile home gateways (a simple sketch of this case follows the list).
  • Cell Densification: 5G and Wi-Fi small cells jointly providing short range capacity to augment the macro network in dense urban areas.
  • Fixed Wireless Access: using cellular as a fibre alternative in challenging areas.
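As a purely illustrative sketch of the hybrid-access case above, the Python below shows a naive gateway policy that bonds a DSL line with a 4G link for extra throughput and falls back to whichever path is still up. Real hybrid gateways rely on standardised multipath and bonding mechanisms rather than logic this simple, and the class and function names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AccessLink:
    name: str
    up: bool
    downlink_mbps: float

def plan_capacity(dsl: AccessLink, cellular: AccessLink) -> tuple[list[str], float]:
    """Naive hybrid-access policy: bond both links when available, else fall back to the survivor."""
    active = [link for link in (dsl, cellular) if link.up]
    return [link.name for link in active], sum(link.downlink_mbps for link in active)

# Example: the DSL line suffers an outage, so the gateway keeps the household online over 4G.
links, capacity = plan_capacity(
    AccessLink("DSL", up=False, downlink_mbps=40.0),
    AccessLink("4G macro", up=True, downlink_mbps=25.0),
)
print(links, capacity)  # ['4G macro'] 25.0
```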

The ability to combine various network accesses is attractive as an option for improving adaptability, resilience and speed. Strategically, putting such flexibility in place can support future growth and customer retention with the added advantage of improving operational efficiency. Tactically, it enables an ability to quickly adapt resources to short-term changes in demand. COVID-19 has been a clear example of this need.

Table of Contents

  • Executive Summary
    • Convergence and network convergence
    • Near-term benefits of network convergence
    • Strategic benefits of network convergence
    • Balancing the benefits of convergence and divergence
    • A three-step plan
  • Introduction
    • The changing environment
    • Network convergence: The adaptable and resilient last mile
    • Anticipated benefits to telcos
    • Challenges and opposing forces
  • The evolution to network convergence
    • Everyone is combining networks
    • Converging telco networks
    • Telco adoption so far
  • Strategy, tactics and hurdles
    • The time is right for adaptability
    • Tactical motivators
    • Increasing the relationship with the customer
    • Modernisation and efficiency – remaining competitive
    • Hurdles from within the telco ecosystem
    • Risk or opportunity? Innovation above-the-core
  • Conclusion
    • A three-step plan
  • Index


VEON – Transition from telco to consumer IP communications platform

Introduction to Veon

Geographical footprint and brands

Veon came into being at the start of 2017 as a rebrand of VimpelCom. The Amsterdam-based telco was founded in its current form in 2009, when shareholders Telenor and Alfa agreed to merge their assets in VimpelCom and Ukraine’s Kyivstar to create VimpelCom Ltd.

Veon is among the world’s 10 largest communications network operators by subscription, with around 235 million customers in 13 countries (see Figure 1).

Figure 1: Veon’s geographical footprint (September 2017)

Source: Veon, STL Partners

The telco operates a number of brands across its geographical footprint (see Figure 2).

Figure 2: Veon’s brands (September 2017)

Source: Veon, STL Partners

Veon’s largest market is Russia, where it has over 58 million mobile subscribers, making up 24% of its global total. Pakistan and Bangladesh are its next-largest markets by subscribers, while it has over 30 million customers in Italy under its Wind Tre brand, a joint venture with CK Hutchison (see Figure 3).

Figure 3: Veon mobile customers by region, H2 2017 (millions)

Source: Veon, STL Partners

A brief history of Veon

  • 1992: Veon began life as Russian operator PJSC VimpelCom.
  • 2009: VimpelCom Ltd. founded as Telenor and Alfa Group (Altimo) agree to merge their assets in VimpelCom (Russia and CIS) and Ukraine (Kyivstar).
  • 2010: VimpelCom acquires Orascom Telecom Holding (operating in Pakistan, Bangladesh, Algeria) and Wind Italy from Egypt’s Naguib Sawiris.
  • 2017: VimpelCom Ltd. rebrands as Veon.


The somewhat unusual development of both Veon’s shareholder structure and geographical footprint means the telco faces some unique challenges, but has also enabled a degree of flexibility in the company’s path to transformation.

Veon’s shareholder structure – an enabler of transformation

At the time of writing, Veon is 47.9%-owned (common and voting shares) by Alfa (via investment vehicle LetterOne), and 19.7% by Norway’s Telenor (with the remaining 32.4% split between free float and minority shareholders).

This structure means that the company is less beholden to dividend-hungry shareholders, giving the telco more freedom to align around its strategy than many of its contemporaries. This extra “breathing space” also allows change to occur faster, with fewer levels of managerial approval required, while the board of directors has given its backing to Veon’s transformation journey, offering full “top-down support”. Nevertheless, there is some doubt about how the transformation plans will be greeted at local OpCo level, and the group faces some serious cultural challenges in this area.

Faced with lacklustre organic growth and headwinds from currency devaluations in its former Soviet markets, Veon has chosen, in the words of CEO Jean-Yves Charlier, to “disrupt itself from within”.

Reversing the revenue decline

Speaking at Veon’s rebrand in February 2017, CEO Charlier spoke of how the telco sector has been backed into a corner by aggressive disruptive start-ups like Skype and WhatsApp, meaning the industry now needs to reinvent itself and find new paths to growth.

The company began by improving its capital structure, in part through the consolidation of operations in two of its largest markets, with the mergers of Mobilink and Warid to form Jazz in Pakistan, and the formation of joint venture Wind Tre from Wind Italy and CK Hutchison’s Tre (3).

Veon states it has realigned its corporate culture and values, introduced a robust control and compliance framework, and significantly cut its cost base, and the operator returned to positive revenue and EBITDA growth in the second quarter of 2017.

Contents:

  • Executive Summary 
  • Introduction to Veon
  • Veon’s digital strategy
  • What are the strengths of Veon’s offering?
  • What must Veon do to succeed?
  • Will Veon make it work?
  • Introduction
  • Introduction to Veon
  • The path to total transformation
  • Veon’s digital strategy
  • Reinvent customer experience
  • Network virtualisation
  • The product
  • An omni-channel platform
  • The strengths of the holistic platform
  • Can Veon’s consumer IP communications proposition succeed? 
  • Can Veon beat the GAFA and Chinese giants to the market?
  • What must Veon do to succeed?
  • Conclusions

Figures:

  • Figure 1: Veon’s geographical footprint (September 2017)
  • Figure 2: Veon’s brands (September 2017)
  • Figure 3: Veon mobile customers by region, H2 2017 (millions)
  • Figure 4: Veon revenue and EBITDA, Q4 2015-Q2 2017 ($ billion)
  • Figure 5: Veon’s transformation from telco to tech company
  • Figure 6: Penetration of leading social networks in Russia (2016)
  • Figure 7: Veon IT stack scope of responsibilities
  • Figure 8: VEON app screenshots – an IP communications platform
  • Figure 9: Veon app access requirements
  • Figure 10: Comparison of consumer IP communications plays
  • Figure 11: Veon – a SWOT analysis

The Devil’s Advocate: SDN / NFV can never work, and here’s why!

Introduction

The Advocatus Diaboli (Latin for Devil’s Advocate) was formerly an official position within the Catholic Church: one who “argued against the canonization (sainthood) of a candidate in order to uncover any character flaws or misrepresentation of evidence favouring canonization”.

In common parlance, the term “devil’s advocate” describes someone who, given a certain point of view, takes a position they do not necessarily agree with (or simply an alternative position from the accepted norm), for the sake of debate or to explore the thought further.

SDN / NFV runs into problems: a ‘devil’s advocate’ assessment

The telco industry’s drive toward Network Functions Virtualization (NFV) got going in a major way in 2014, with high expectations that the technology – along with its sister technology SDN (Software-Defined Networking) – would revolutionize operators’ abilities to deliver innovative communications and digital services, and transform the ways in which these services can be purchased and consumed.

Unsurprisingly, as with so many of these ‘revolutions’, early optimism has now given way to the realization that full-scope NFV deployment will be complex, time-consuming and expensive. Meanwhile, it has become apparent that the technology may not transform telcos’ operations and financial fortunes as much as originally expected.

The following is a presentation of the case against SDN / NFV from the perspective of the ‘devil’s advocate’. It is a combination of the types of criticism that have been voiced in recent times, but taken to the extreme so as to represent a ‘damning’ indictment of the industry effort around these technologies. This is not the official view of STL Partners but rather an attempt to explore the limits of the skeptical position.

We will respond to each of the devil’s advocate’s arguments in turn in the second half of this report; and, in keeping with good analytical practice, we will endeavor to present a balanced synthesis at the end.

‘It’ll never work’: the devil’s advocate speaks

And here’s why:

1. Questionable financial and operational benefits:

Will NFV ever deliver any real cost savings or capacity gains? Operators that have launched NFV-based services have not yet provided any hard evidence that they have achieved notable reductions in their opex and capex on the basis of the technology, or any evidence that the data-carrying capacity, performance or flexibility of their networks have significantly improved.

Operators talk a good talk, but where is the actual financial and operating data that supports the NFV business case? Are they refusing to disclose the figures because they are in fact negative or inconclusive? And if this is so, how can we have any confidence that NFV and SDN will deliver anything like the long-term cost and performance benefits that have been touted for them?

 

  • Executive Summary
  • Introduction
  • SDN / NFV runs into problems: a ‘devil’s advocate’ assessment
  • ‘It’ll never work’: the devil’s advocate speaks
  • 1. Questionable financial and operational benefits
  • 2. Wasted investments and built-in obsolescence
  • 3. Depreciation losses
  • 4. Difficulties in testing and deploying
  • 5. Telco cloud or pie in the sky?
  • 6. Losing focus on competitors by focusing on networks
  • 7. Change the culture and get agile?
  • 8. It’s too complicated
  • The case for the defense
  • 1. Clear financial and operational benefits
  • 2. Strong short-term investment and business case
  • 3. Different depreciation and valuation models apply to virtualized assets
  • 4. Short-term pain for long-term gains
  • 5. Don’t cloud your vision of the technological future
  • 6. Telcos can compete in the present while building the future
  • 7. Operators both can and must transform their culture and skills base to become more agile
  • 8. It may be complicated, but is that a reason not to attempt it?
  • A balanced view of NFV: ‘making a virtual out of necessity’ without making NFV a virtue in itself

MobiNEX: The Mobile Network Experience Index, H1 2016

Executive Summary

In response to customers’ growing usage of mobile data and applications, in April 2016 STL Partners developed MobiNEX: The Mobile Network Experience Index, which ranks mobile network operators by key measures relating to customer experience. To do this, we benchmark mobile operators’ network speed and reliability, allowing individual operators to see how they are performing in relation to the competition in an objective and quantitative manner.

Operators are assigned an individual MobiNEX score out of 100 based on their performance across four measures that STL Partners believes to be core drivers of customer app experience: download speed, average latency, error rate and latency consistency (the proportion of app requests that take longer than 500ms to fulfil).
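The full methodology and benchmark values are described in the appendix of the report; as a rough illustration of the general approach, the sketch below scales each of the four raw measures against an assumed benchmark range and sums four equal-weighted sub-scores into a total out of 100. The benchmark ranges and example inputs are placeholders, not STL Partners’ actual figures.

```python
# Illustrative sketch of a MobiNEX-style composite score.
# Benchmark ranges are assumed placeholders, NOT the actual MobiNEX benchmarks.
BENCHMARKS = {
    # measure: (worst, best) -- the ordering encodes whether higher or lower is better
    "download_speed_mbps": (1.0, 20.0),
    "avg_latency_ms": (800.0, 100.0),
    "errors_per_10k_requests": (200.0, 10.0),
    "pct_requests_over_500ms": (60.0, 5.0),
}

def sub_score(value: float, worst: float, best: float) -> float:
    """Scale a raw measurement to a 0-25 sub-score, clamped to the benchmark range."""
    fraction = (value - worst) / (best - worst)
    return 25.0 * min(max(fraction, 0.0), 1.0)

def mobinex_style_score(measures: dict[str, float]) -> float:
    return sum(
        sub_score(measures[name], worst, best)
        for name, (worst, best) in BENCHMARKS.items()
    )

# Example operator: strong reliability, middling download speed -> total score out of 100.
print(round(mobinex_style_score({
    "download_speed_mbps": 8.0,
    "avg_latency_ms": 250.0,
    "errors_per_10k_requests": 40.0,
    "pct_requests_over_500ms": 12.0,
})))
```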

Our partner Apteligent has provided us with the raw data for three out of the four measures, based on billions of requests made from tens of thousands of applications used by hundreds of millions of users in H1 2016. While our April report focused on the top three or four operators in just seven Western markets, this report covers 80 operators drawn from 25 markets spread across the globe in the first half of 2016.

The top ten operators were from Japan, France, the UK and Canada:

  • Softbank JP scores highest on the MobiNEX for H1 2016, with high scores across all measures and a total score of 85 out of 100.
  • Close behind are Bouygues FR (80) and Free FR (79), which came first and second respectively in the Q4 2015 rankings. Both achieve high scores for error rate, latency consistency and average latency, but are slightly let down by download speed.
  • The top six is completed by NTT DoCoMo JP (78), Orange FR (75) and au (KDDI) JP (71).
  • Slightly behind are Vodafone UK (65), EE UK (64), SFR FR (63), O2 UK (62) and Rogers CA (62). Except in the case of Rogers, who score similarly on all measures, these operators are let down by substantially worse download speeds.

The bottom ten operators all score a total of 16 or lower out of 100, suggesting a materially worse customer app experience.

  • Trailing the pack with scores of 1 or 2 across all four measures were Etisalat EG (4), Vodafone EG (4), Smart PH (5) and Globe PH (5).
  • Beeline RU (11) and Malaysian operators U Mobile MY (9) and Digi MY (9) also fare poorly, but benefit from slightly higher latency consistency scores. Slightly better overall, but still achieving minimum scores of 1 for download speed and average latency, are Maxis MY (14) and MTN ZA (12).

Overall, the extreme difference between the top and bottom of the table highlights a vast inequality in network quality and customer experience across the planet. Customer app experience depends to a large degree on where one lives. However, our analysis shows that while economic prosperity does in general lead to a more advanced mobile experience, as you might expect, it does not guarantee it. Norway, Sweden, Singapore and the US are examples of high-income countries with lower MobiNEX scores than might be expected against the global picture. STL Partners will do further analysis to uncover more on the drivers of differentiation between markets and the players within them.

 

MobiNEX H1 2016 – included markets

MobiNEX H1 2016 – operator scores

 Source: Apteligent, OpenSignal, STL Partners analysis

 

  • About MobiNEX
  • Changes for H1 2016
  • MobiNEX H1 2016: results
  • The winners: top ten operators
  • The losers: bottom ten operators
  • The surprises: operators where you wouldn’t expect them
  • MobiNEX by market
  • MobiNEX H1 2016: segmentation
  • MobiNEX H1 2016: Raw data
  • Error rate
  • Latency consistency
  • Download speed
  • Average latency
  • Appendix 1: Methodology and source data
  • Latency, latency consistency and error rate: Apteligent
  • Download speed: OpenSignal
  • Converting raw data into MobiNEX scores
  • Setting the benchmarks
  • Why measure customer experience through app performance?
  • Appendix 2: Country profiles
  • Country profile: Australia
  • Country profile: Brazil
  • Country profile: Canada
  • Country profile: China
  • Country profile: Colombia
  • Country profile: Egypt
  • Country profile: France
  • Country profile: Germany
  • Country profile: Italy
  • Country profile: Japan
  • Country profile: Malaysia
  • Country profile: Mexico
  • Country profile: New Zealand
  • Country profile: Norway
  • Country profile: Philippines
  • Country profile: Russia
  • Country profile: Saudi Arabia
  • Country profile: Singapore
  • Country profile: South Africa
  • Country profile: Spain
  • Country profile: United Arab Emirates
  • Country profile: United Kingdom
  • Country profile: United States
  • Country profile: Vietnam

 

  • Figure 1: MobiNEX scoring breakdown, benchmarks and raw data used
  • Figure 2: MobiNEX H1 2016 – included markets
  • Figure 3: MobiNEX H1 2016 – operator scores breakdown (top half)
  • Figure 4: MobiNEX H1 2016 – operator scores breakdown (bottom half)
  • Figure 5: MobiNEX H1 2016 – average scores by country
  • Figure 6: MobiNEX segmentation dimensions
  • Figure 7: MobiNEX segmentation – network speed vs reliability
  • Figure 8: MobiNEX segmentation – network speed vs reliability – average by market
  • Figure 9: MobiNEX vs GDP per capita – H1 2016
  • Figure 10: MobiNEX vs smartphone penetration – H1 2016
  • Figure 11: Error rate per 10,000 requests, H1 2016 – average by country
  • Figure 12: Error rate per 10,000 requests, H1 2016 (top half)
  • Figure 13: Error rate per 10,000 requests, H1 2016 (bottom half)
  • Figure 14: Requests with total roundtrip latency > 500ms (%), H1 2016 – average by country
  • Figure 15: Requests with total roundtrip latency > 500ms (%), H1 2016 (top half)
  • Figure 16: Requests with total roundtrip latency > 500ms (%), H1 2016 (bottom half)
  • Figure 17: Average weighted download speed (Mbps), H1 2016 – average by country
  • Figure 18: Average weighted download speed (Mbps), H1 2016 (top half)
  • Figure 19: Average weighted download speed (Mbps), H1 2016 (bottom half)
  • Figure 20: Average total roundtrip latency (ms), H1 2016 – average by country
  • Figure 21: Average total roundtrip latency (ms), H1 2016 (top half)
  • Figure 22: Average total roundtrip latency (ms), H1 2016 (bottom half)
  • Figure 23: Benchmarks and raw data used

Net Neutrality 2021: IoT, NFV and 5G ready?

Introduction

It’s been a while since STL Partners last tackled the thorny issue of Net Neutrality. In our 2010 report Net Neutrality 2.0: Don’t Block the Pipe, Lubricate the Market we made a number of recommendations, including that a clear distinction should be established between ‘Internet Access’ and ‘Specialised Services’, and that operators should be allowed to manage traffic within reasonable limits providing their policies and practices were transparent and reported.

Perhaps unsurprisingly, the decade-long legal and regulatory wrangling is still rumbling on, albeit with rather more detail and nuance than in the past. Some countries have now implemented laws with varying severity, while other regulators have been more advisory in their rules. The US, in particular, has been mired in debate about the process and authority of the FCC in regulating Internet matters, but the current administration and courts have leaned towards legislating for neutrality, against (most) telcos’ wishes. The political dimension is never far away from the argument, especially given the global rise of anti-establishment movements and parties.

Some topics have risen in importance (such as where zero-rating fits in), while others seem to have been mostly agreed (outright blocking of legal content/apps is now widely dismissed). In contrast, discussion and exploration of “sender-pays” or “sponsored” data appears to have diminished, apart from niches and trials (such as AT&T’s sponsored data initiative), as it is both technically hard to implement and suffers from near-zero “willingness to pay” among its intended customers. Some more authoritarian countries have implemented their own “national firewalls”, which block specific classes of applications or particular companies’ services – but this is somewhat distinct from the commercial, telco-specific view of traffic management.

In general, the focus of the Net Neutrality debate is shifting to pricing issues, often in conjunction with the influence/openness of major web and app “platform players” such as Facebook or Google. Some telco advocates have opportunistically tried to link Net Neutrality to claimed concerns over “Platform Neutrality”, although that discussion is now largely separate and focused more on bundling and privacy concerns.

At the same time, there is still some interest in differential treatment of Internet traffic in terms of Quality of Service (QoS) – and also, a debate about what should be considered “the Internet” vs. “an internet”. The term “specialised services” crops up in various regulatory instruments, notably in the EU – although its precise definition remains fluid. In particular, the rise of mobile broadband for IoT use-cases, and especially the focus on low-latency and critical-communications uses in future 5G standards, almost mandate the requirement for non-neutrality, at some levels at least. It is much less-likely that “paid prioritisation” will ever extend to mainstream web-access or mobile app data. Large-scale video streaming services such as Netflix are perhaps still a grey area for some regulatory intervention, given the impact they have on overall network loads. At present, the only commercial arrangements are understood to be in CDNs, or paid-peering deals, which are (strictly speaking) nothing to do with Net Neutrality per most definitions. We may even see pressure for regulators to limit fees charged for Internet interconnect and peering.

This report first looks at the changing focus of the debate, then examines the underlying technical and industry drivers that are behind the scenes. It then covers developments in major countries and regions, before giving recommendations for various stakeholders.

STL Partners is also preparing a broader research piece on overall regulatory trends, to be published in the next few months as part of its Executive Briefing Service.

What has changed?

Where have we come from?

If we wind the clock back a few years, the Net Neutrality debate was quite different. Around 2012/13, the typical talking-points were subjects such as:

  • Whether mobile operators could block messaging apps like WhatsApp, VoIP services like Skype, or somehow charge those types of providers for network access / interconnection.
  • If fixed-line broadband providers could offer “fast lanes” for Netflix or YouTube traffic, often conflating arguments about access-network links with core-network peering capacity.
  • Rhetoric about the so-called “sender-pays” concept, with some lobbying for introducing settlements for data traffic that were reminiscent of telephony’s called / caller model.
  • Using DPI (deep packet inspection) to discriminate between applications and charge for “a la carte” Internet access plans, at a granular level (e.g. per hour of video watched, or per social network used).
  • The application of “two-sided business models”, with Internet companies paying for data capacity and/or quality on behalf of end-users.

Since then, many things have changed. Specific countries’ and regions’ laws will be discussed in the next section, but the last four years have seen major developments in the Netherlands, the US, Brazil, the EU and elsewhere.

At one level, the regulatory and political shifts can be attributed to the huge rise in the number of lobby groups on both Internet and telecom sides of the Neutrality debate. However, the most notable shift has been the emergence of consumer-centric pro-Neutrality groups, such as Access Now, EDRi and EFF, along with widely-viewed celebrity input from the likes of comedian John Oliver. This has undoubtedly led to the balance of political pressure shifting from large companies’ lawyers towards (sometimes slogan-led) campaigning from the general public.

But there have also been changes in the background trends of the Internet itself, telecom business models, and consumers’ and application developers’ behaviour. (The key technology changes are outlined in the section after this one). Various experiments and trials have been tried, with a mix of successes and failures.

Another important background trend has been the unstoppable momentum of particular apps and content services, on both fixed and mobile networks. Telcos are now aware that they are likely to be judged on how well Facebook or Spotify or WeChat or Netflix perform – so they are much less-inclined to indulge in regulatory grand-standing about having such companies “pay for the infrastructure” or be blocked. Essentially, there is tacit recognition that access to these applications is why customers are paying for broadband in the first place.

These considerations have shifted the debate in many important areas, making some of the earlier ideas unworkable, while other areas have come to the fore. Two themes stand out:

  • Zero-rating
  • Specialised services

Content:

  • Executive summary
  • Contents
  • Introduction
  • What has changed?
  • Where have we come from?
  • Zero-rating as a battleground
  • Specialised services & QoS
  • Technology evolution impacting Neutrality debate
  • Current status
  • US
  • EU
  • India
  • Brazil
  • Other countries
  • Conclusions
  • Recommendations

MobiNEX: The Mobile Network Experience Index, Q4 2015

Executive Summary

In response to customers’ growing usage of mobile data and applications, STL Partners has developed MobiNEX: The Mobile Network Experience Index, which benchmarks mobile operators’ network speed and reliability by measuring the consumer app experience, and allows individual players to see how they are performing in relation to the competition in an objective and quantitative manner.

We assign operators an individual MobiNEX score based on their performance across four measures that are core drivers of customer app experience: download speed; average latency; error rate; latency consistency (the percentage of app requests that take longer than 500ms to fulfil). Apteligent has provided us with the raw data for three out of four of the measures based on billions of requests made from tens of thousands of applications used by hundreds of millions of users in Q4 2015. We plan to expand the index to cover other operators and to track performance over time with twice-yearly updates.

Encouragingly, MobiNEX scores are positively correlated with customer satisfaction in the UK and the US, suggesting that a better mobile app experience contributes to customer satisfaction.

The top five performers across twenty-seven operators in seven countries in Europe and North America (Canada, France, Germany, Italy, Spain, UK, US) were all from France and the UK, suggesting a high degree of competition in these markets as operators strive to improve relative to peers:

  • Bouygues Telecom in France scores highest on the MobiNEX for Q4 2015 with consistently high scores across all four measures and a total score of 76 out of 100.
  • It is closely followed by two other French operators. Free, the late entrant to the market, which started operations in 2012, scores 73. Orange, the former national incumbent, is slightly let down by the number of app errors experienced by users but achieves a healthy overall score of 70.
  • The top five is completed by two UK operators: EE (65) and O2 (61), with similar scores to the three French operators for everything except download speed, which was substantially worse.

The bottom five operators have scores suggesting a materially worse customer app experience and we suggest that management focuses on improvements across all four measures to strengthen their customer relationships and competitive position. This applies particularly to:

  • E-Plus in Germany (now part of Telefónica’s O2 network but identified separately by Apteligent).
  • Wind in Italy, which is particularly let down by latency consistency and download speed.
  • Telefónica’s Movistar, the Spanish market share leader.
  • Sprint in the US with middle-ranking average latency and latency consistency but, like other US operators, poor scores on error rate and download speed.
  • 3 Italy, principally a result of its low latency consistency score.

Surprisingly, given the extensive deployment of 4G networks there, the US operators perform poorly and are providing an underwhelming customer app experience:

  • The best-performing US operator, T-Mobile, scores only 45 – a full 31 points below Bouygues Telecom and 4 points below the median operator.
  • All the US operators perform very poorly on error rate and, although 74% of app requests in the US were made on LTE in Q4 2015, no US player scores highly on download speed.

MobiNEX scores – Q4 2015

 Source: Apteligent, OpenSignal, STL Partners analysis

MobiNEX vs Customer Satisfaction

Source: ACSI, NCSI-UK, STL Partners

 

  • Introduction
  • Mobile app performance is dependent on more than network speed
  • App performance as a measure of customer experience
  • MobiNEX: The Mobile Network Experience Index
  • Methodology and key terms
  • MobiNEX Q4 2015 Results: Top 5, bottom 5, surprises
  • MobiNEX is correlated with customer satisfaction
  • Segmenting operators by network customer experience
  • Error rate
  • Quantitative analysis
  • Key findings
  • Latency consistency: Requests with latency over 500ms
  • Quantitative analysis
  • Key findings
  • Download speed
  • Quantitative analysis
  • Key findings
  • Average latency
  • Quantitative analysis
  • Key findings
  • Appendix: Source data and methodology
  • STL Partners and Telco 2.0: Change the Game
  • About Apteligent

 

  • MobiNEX scores – Q4 2015
  • MobiNEX vs Customer Satisfaction
  • Figure 1: MobiNEX – scoring methodology
  • Figure 2: MobiNEX scores – Q4 2015
  • Figure 3: Customer Satisfaction vs MobiNEX, 2015
  • Figure 4: MobiNEX operator segmentation – network speed vs network reliability
  • Figure 5: MobiNEX operator segmentation – with total scores
  • Figure 6: Major Western markets – error rate per 10,000 requests
  • Figure 7: Major Western markets – average error rate per 10,000 requests
  • Figure 8: Major Western operators – percentage of requests with total roundtrip latency greater than 500ms
  • Figure 9: Major Western markets – average percentage of requests with total roundtrip latency greater than 500ms
  • Figure 10: Major Western operators – average weighted download speed across 3G and 4G networks (Mbps)
  • Figure 11: Major European markets – average weighted download speed (Mbps)
  • Figure 12: Major Western markets – percentage of requests made on 3G and LTE
  • Figure 13: Download speed vs Percentage of LTE requests
  • Figure 14: Major Western operators – average total roundtrip latency (ms)
  • Figure 15: Major Western markets – average total roundtrip latency (ms)
  • Figure 16: MobiNEX benchmarks

Connectivity for telco IoT / M2M: Are LPWAN & WiFi strategically important?

Introduction

5G, WiFi, GPRS, NB-IoT, LTE-M & LTE Categories 1 & 0, SigFox, Bluetooth, LoRa, Weightless-N & Weightless-P, ZigBee, EC-GSM, Ingenu, Z-Wave, Nwave, various satellite standards, optical/laser connections and more – the list of current or proposed wireless network technologies for the “Internet of Things” seems to be growing longer by the day. Some are long-range, some short. Some high power/bandwidth, some low. Some are standardised, some proprietary. And while most devices will have some form of wireless connection, there are certain categories that will use fibre or other fixed-network interfaces.

There is no “one-size fits all”, although some hope that 5G will ultimately become an “umbrella” for many of them, in the 2020 time-frame and beyond. But telcos, especially mobile operators, need to consider which they will support in the shorter-term horizon, and for which M2M/IoT use-cases. That universe is itself expanding too, with new IoT products and systems being conceived daily, spanning everything from hobbyists’ drones to industrial robots. All require some sort of connectivity, but the range of costs, data capabilities and robustness varies hugely.

Two over-riding question themes emerge:

  • What are the business cases for deploying IoT-centric networks – and are they dependent on offering higher-level management or vertical solutions as well? Is offering connectivity – even at very low prices/margins – essential for telcos to ensure relevance and differentiate against IoT market participants?
  • What are the longer-term strategic issues around telcos supporting and deploying proprietary or non-3GPP networking technologies? Is the diversity a sensible way to address short-term IoT opportunities, or does it risk further undermining the future primacy of telco-centric standards and business models? Either way telcos need to decide how much energy they wish to expend, before they embrace the inevitability of alternative competing networks in this space.

This report specifically covers IoT-centric network connectivity. It fits into Telco 2.0’s Future of the Network research stream, and also intersects with our other ongoing work on IoT/M2M applications, including verticals such as the connected car, connected home and smart cities. It focuses primarily on new network types, rather than marketing/bundling approaches for existing services.

The Executive Briefing report IoT – Impact on M2M, Endgame and Implications from March 2015 outlined three strategic areas of M2M business model innovation for telcos:

  • Improve existing M2M operations: Dedicated M2M business units structured around priority verticals with dedicated resources. Such units allow telcos to tailor their business approach and avoid being constrained by traditional strategies that are better suited to mobile handset offerings.
  • Move into new areas of M2M: Expansion along the value chain through both acquisitions and partnerships, and the formation of M2M operator ‘alliances.’
  • Explore the Internet of Things: Many telcos have been active in the connected home e.g. AT&T Digital Life. However, outsiders are raising the connected home (and IoT) opportunity stakes: Google, for example, acquired Nest for $3.2 billion in 2014.
Figure 2: The M2M Value Chain

 

Source: STL Partners, More With Mobile

In the nine months since that report was published, a number of important trends have emerged in the M2M / IoT space:

  • A growing focus on the value of the “industrial Internet”, where sensors and actuators are embedded into offices, factories, agriculture, vehicles, cities and other locations. New use-cases and applications abound on both near- and far-term horizons.
  • A polarisation in discussion between ultra-fast/critical IoT (e.g. for vehicle-to-vehicle control) vs. low-power/cost IoT (e.g. distributed environmental sensors with 10-year battery life). 2015 discussion of IoT connectivity has been dominated by futuristic visions of 5G, or faster-than-expected deployment of LPWANs (low-power wide-area networks), especially based on new platforms such as SigFox or LoRa Alliance.
  • Comparatively slow emergence of dedicated individual connections for consumer IoT devices such as watches / wearables. With the exception of connected cars, most mainstream products connect via local “capillary” networks (e.g. Bluetooth and WiFi) to smartphones or home gateways acting as hubs, or a variety of corporate network platforms. The arrival of embedded SIMs might eventually lead to more individually-connected devices, but this has not materialised in volume yet.
  • Continued entry, investment and evolution of a broad range of major companies and start-ups, often with vastly different goals, incumbencies and competencies to telcos. Google, IBM, Cisco, GE, Intel, utility firms, vehicle suppliers and 1000s of others are trying to carve out roles in the value chain.
  • Growing impatience among some in the telecom industry with the pace of standardisation for some IoT-centric developments. A number of operators have looked outside the traditional cellular industry suppliers and technologies, eager to capitalise on short-term growth especially in LPWAN and in-building local connectivity. In response, vendors including Huawei, Ericsson and Qualcomm have stepped up their pace, although fully-standardised solutions are still some way off.

Connectivity in the wider M2M/IoT context

It is not always clear what the difference is between M2M and IoT, especially at a connectivity level. They now tend to be used synonymously, although the latter is definitely newer and “cooler”. Various vendors have their own spin on this – Cisco’s “Internet of Everything”, and Ericsson’s “Networked Society”, for example. It is also a little unclear where the IoT part ends, and the equally vague term “networked services” begins. It is also important to recognise that a sizeable part of the future IoT technology universe will not be based on “services” at all, although “user-owned” devices and systems are much harder for telcos to monetise.

An example might be a government encouraging adoption of electric vehicles. Cars and charging points are “things” which require data connections. At one level, an IoT application may simply guide drivers to their closest available power-source, but a higher-level “societal” application will collate data from both the IoT network and other sources. Thus data might also flow from bus and train networks, as well as traffic sensors, pollution monitors and even fitness trackers for walking and cycling, to see overall shifts in transport habits and help “nudge” commuters’ behaviour through pricing or other measures. In that context, the precise networks used to connect to the end-points become obscured in the other layers of software and service – although they remain essential building blocks.

Figure 3: Characterising the difference between M2M and IoT across six domains

Source: STL Partners, More With Mobile

(Note: the Future of the Network research stream generally avoids using vague and loaded terms like “digital” and “OTT”. While concise, we believe they are often used in ways that guide readers’ thinking in wrong or unhelpful directions. Words and analogies are important: they can lead or mislead, often subconsciously).

Often, it seems that the word “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Some may actually involve analogue technology as well as digital. Hybrids, where apps use telco network-delivered ingredients (via APIs) such as identity or one-time SMS passwords, are becoming important.

Figure 4: ‘Digital’ and IoT convergence

Source: STL Partners, More With Mobile

We will also likely see many hybrid solutions emerging, for example where dedicated devices are combined with smartphones/PCs for particular functions. Thus a “digital home” service may link alarms, heating sensors, power meters and other connections via a central hub/console – but also send alerts and data to a smartphone app. It is already quite common for consumer/business drones to be controlled via a smartphone or tablet.

In terms of connectivity, it is also worth noting that “M2M” generally just refers to the use of conventional cellular modems and networks – especially 2G/3G. IoT expands this considerably – as well as future 5G networks and technologies being specifically designed with new use-cases in mind, we are also seeing the emergence of a huge range of dedicated 4G variants, plus new purpose-designed LPWAN platforms. IoT also intersects with the growing range of local/capillary[1] network technologies – which are often overlooked in conventional discussions about M2M.

Figure 5: Selected Internet of Things service areas

Source: STL Partners

The larger the number…

…the less relevance and meaning it has. We often hear of an emerging world of 20bn, 50bn, even trillions of devices being “networked”. While making for good headlines and press-releases, such numbers can be distracting.

While we will definitely be living in a transformed world, with electronics around us all the time – sensors, displays, microphones and so on – that does not easily translate into opportunities for telecom operators. The correct role for such data and forecasts is in the context of a particular addressable opportunity – otherwise one risks counting toasters, alongside sensors in nuclear power stations. As such, this report does not attempt to compete in counting “things” with other analyst firms, although references are made to approximate volumes.

For example, consider a typical large, modern building. It’s common to have temperature sensors, CCTV cameras, alarms for fire and intrusion, access control, ventilation, elevators and so forth. There will be an internal phone system, probably LAN ports at desks and WiFi throughout. In future it may have environmental sensors, smart electricity systems, charging points for electric vehicles, digital advertising boards and more. Yet the main impact on the telecom industry is just a larger Internet connection, and perhaps some dedicated lines for safety-critical systems like the fire alarm. There may well be 1,000 or 10,000 connected “things”, and yet for a cellular operator the building is more likely to be a future driver of cost (e.g. for in-building radio coverage for occupants’ phones) rather than extra IoT revenue. Few of the building’s new “things” will have SIM cards and service-based radio connections in any case – most will link into the fixed infrastructure in some way.

One also has to doubt some of the predicted numbers – there is considerable vagueness and hand-waving inherent in the forecasts. If a car in 2020 has 10 smart sub-systems, and 100 sensors reporting data, does that count as 1, 10 or 100 “things” connected? Is the key criterion that smart appliances in a connected home are bought individually – and therefore might be equipped with individual wide-area network connections? When such data points are then multiplied-up to give traffic forecasts, there are multiple layers of possible mathematical error.
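To see how sensitive the headline figures are to such counting choices, a toy calculation (all numbers invented) multiplies the same hypothetical car fleet through three different definitions of a “thing”:

```python
# Toy illustration of forecast sensitivity: every figure here is an invented assumption.
cars = 100_000_000                       # hypothetical connected-car fleet
things_per_car = {"whole car": 1, "smart sub-system": 10, "individual sensor": 100}

for definition, count in things_per_car.items():
    print(f"Counting each {definition}: {cars * count:,} connected 'things'")

# The same fleet yields 100 million, 1 billion or 10 billion "things",
# depending purely on the definition chosen -- before any traffic assumptions are layered on top.
```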

This highlights the IoT quantification dilemma – everyone focuses on the big numbers, many of which are simple spreadsheet extrapolations, made without much consideration of the individual use-cases. And the larger the headline number, the less-likely the individual end-points will be directly addressed by telcos.

 

  • Executive Summary
  • Introduction
  • Connectivity in the wider M2M/IoT context
  • The larger the number…
  • The IoT network technology landscape
  • Overview – it’s not all cellular
  • The emergence of LPWANs & telcos’ involvement
  • The capillarity paradox: ARPU vs. addressability
  • Where does WiFi fit?
  • What will the impact of 5G be?
  • Other technology considerations
  • Strategic considerations
  • Can telcos compete in IoT without connectivity?
  • Investment vs. service offer
  • Regulatory considerations
  • Are 3GPP technologies being undermined?
  • Risks & threats
  • Conclusion

 

  • Figure 1: Telcos can only fully monetise “things” they can identify uniquely
  • Figure 2: The M2M Value Chain
  • Figure 3: Characterising the difference between M2M and IoT across six domains
  • Figure 4: ‘Digital’ and IoT convergence
  • Figure 5: Selected Internet of Things service areas
  • Figure 6: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 7: Wide-area IoT-related wireless technologies
  • Figure 8: Selected telco involvement with LPWAN
  • Figure 9: Telcos need to consider capillary networks pragmatically
  • Figure 10: Major telco types mapped to relevant IoT network strategies

Do network investments drive creation & sale of truly novel services?

Introduction

History: The network is the service

Before looking at how current network investments might drive future generations of telco-delivered services, it is worth considering some of the history, and examining how we got where we are today.

Most obviously, the original network build-outs were synonymous with the services they were designed to support. Both fixed and mobile operators started life as “phone networks”, with analogue or electro-mechanical switches. (Their predecessors were designed to serve telegraph and paging services, respectively). Cable operators began as conduits for analogue TV signals. These evolved to support digital switches of various types, as well as using IP connections internally.

From the 1980s onwards, it was hoped that future generations of telecom services would be enabled by, and delivered from, the network itself – hence acronyms like ISDN (Integrated Services Digital Network) and IN (Intelligent Network).

But the earliest signs that “digital services” might come from outside the telecom network were evident even at that point. Large companies built up private networks to support their own phone systems (PBXs). Various 3rd-party “value-added networks” (VAN) and “electronic data interchange” (EDI) services emerged in industries such as the automotive sector, finance and airlines. And from the early 1990s, consumers started to get access to bulletin boards and early online services like AOL and CompuServe, accessed using dial-up modems.

And then, around 1994, the first mainstream web browsers were introduced, and the model of Internet access and ISPs took off, initially with narrowband connections using modems, but then swiftly evolving to ADSL-based broadband. From the mid-1990s onwards, the bulk of new consumer “digital services” were web-based, or used other Internet protocols such as email and private messaging. At the same time, businesses evolved their own private data networks (using telco “pipes” such as leased lines, frame relay and the like), supporting their growing client/server computing and networked-application needs.

Figure 1: In recent years, most digital services have been “non-network” based

Source: STL Partners

For fixed broadband, Internet access and corporate data connections have mostly dominated ever since, with rare exceptions such as Centrex phone and web-hosting services for businesses, or alarm-monitoring for consumers. The first VoIP-based carrier telephony service only emerged in 2003, and uptake has been slow and patchy – there is still a dominance of old, circuit-based fixed phone connections in many countries.

More recently, a few more “fixed network-integrated” offers have evolved – cloud platforms for businesses’ voice, UC and SaaS applications, content delivery networks, and assorted consumer-oriented entertainment/IPTV platforms. And in the last couple of years, operators have started to use their broadband access for a wider array of offers such as home-automation, or “on-boarding” Internet content sources into set-top box platforms.

The mobile world started evolving later – mainstream cellular adoption only really started around 1995. In the mobile world, most services prior to 2005 were either integrated directly into the network (e.g. telephony, SMS, MMS) or provided by operators through dedicated service delivery platforms (e.g. DoCoMo iMode, and Verizon’s BREW store). Some early digital services such as custom ringtones were available via 3rd-party channels, but even they were typically charged and delivered via SMS. The “mobile Internet” between 1999-2004 was delivered via specialised WAP gateways and servers, implemented in carrier networks. The huge 3G spectrum licence awards around 2000-2002 were made on the assumption that telcos would continue to act as creators or gatekeepers for the majority of mobile-delivered services.

It was only around 2005-6 that “full Internet access” started to become available for mobile users, both for those with early smartphones such as Nokia/Symbian devices, and via (quite expensive) external modems for laptops. In 2007 we saw two game-changers emerge – the first-generation Apple iPhone, and Huawei’s USB 3G modem. Both catalysed the wide adoption of the consumer “data plan” – hitherto almost unknown. By 2010, there were virtually no new network-based services, while the “app economy” and “vanilla” Internet access started to dominate mobile users’ behaviour and spending. Even non-Internet mobile services such as BlackBerry BES were offered via alternative non-telco infrastructure.

Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7

Source: Disruptive Analysis

By 2013, there had still been very few successful mobile digital-services offers that were actually anchored in cellular operators’ infrastructure. There had been a few positive signs in the M2M sphere and in wholesale SMS APIs, but other integrated propositions such as mobile network-based TV have largely failed. Once again, the transition to IP-based carrier telephony has been slow – VoLTE is gaining grudging acceptance more from necessity than desire, while “official” telco messaging services like RCS have been abject failures. Neither can be described as “digital innovation”, either – there is little new in them.

The last two years, however, have seen the emergence of some “green shoots” for mobile services. Some new partnering / charging models have borne fruit, with zero-rated content/apps becoming quite prevalent, and a handful of developer platforms finally starting to gain traction, offering network-based features such as location awareness. Various M2M sectors, such as automotive connectivity and some smart metering, have evolved. But the bulk of mobile “digital services” have been geared around iOS and Android apps, anchored in the cloud, rather than telcos’ networks.

So in 2015, we are in a situation where the majority of “cool” or “corporate” services in both mobile and fixed worlds owe little to “the network” beyond fast IP connectivity: the feared, mythical (and factually incorrect) “dumb pipe”. Connected “general-purpose” devices like PCs and smartphones are optimised for service delivery via the web and mobile apps. Broadband-connected TVs are partly used for operator-provided IPTV, but also for so-called “OTT” services such as Netflix.

And future networks and novel services? As discussed below, there are some positive signs stemming from virtualisation and some new organisational trends at operators to encourage innovative services – but it is not yet clear that they will be enough to overcome the open Internet’s sustained momentum.

What are so-called “digital services”?

It is impossible to visit a telecoms conference, or read a vendor press-release, without being bombarded by the word “digital” in a telecom context. Digital services, digital platforms, digital partnerships, digital agencies, digital processes, digital transformation – and so on.

It seems that despite the first digital telephone exchanges being installed in the 1980s and digital computing being de rigueur since the 1950s, the telecoms industry’s marketing people have decided that 2015 is when the transition really occurs. But when the chaff is stripped away, what does it really mean, especially in the context of service innovation and the network?

Often, it seems that “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Hybrids, where apps use telco network-delivered ingredients (via APIs) such as identity or one-time SMS passwords, are becoming important.

And in other instances the “digital” phrases relate to relatively normal services – but deployed and managed in a much more efficient and automated fashion. This is quite important, as a lot of older services still rely on “analogue” processes – manual configuration, physical “truck rolls” to install and commission, and high “touch” from sales or technical support people to sell and operate, rather than self-provisioning and self-care through a web portal. Here, the correct term is perhaps “digital transformation” (or even more prosaically simply “automation”), representing a mix of updated IP-based networks, and more modern and flexible OSS/BSS systems to drive and bill them.

STL identifies three separate mechanisms by which network investments can impact creation and delivery of services:

  • New networks directly enable the supply of wholly new services. For example, some IoT services or mobile gaming applications would be impossible without low-latency 4G/5G connections, more comprehensive coverage, or automated provisioning systems.
  • Network investment changes the economics of existing services, for example by removing costly manual processes, or radically reducing the cost of service delivery (e.g. fibre backhaul to cell sites).
  • Network investment occurs hand-in-hand with other changes, thus indirectly helping drive new service evolution – such as development of “partner on-boarding” capabilities or API platforms, which themselves require network “hooks”.

While the future will involve a broader set of content/application revenue streams for telcos, it will also need to support more, faster and differentiated types of data connections. Top of the “opportunity list” is the support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. Many of these will not involve connection via the “public Internet” and therefore there is a possibility for new forms of connectivity proposition or business model – faster- or lower-powered networks, or perhaps even the much-discussed but rarely-seen monetisation of “QoS” (Quality of Service). Even if not paid for directly, QoS could perhaps be integrated into compelling packages and data-service bundles.

There is also the potential for more “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network, “orchestrated” appropriately by the operator. (We covered this area in depth in the recent Telco 2.0 brief on Mobile Edge Computing, How 5G is Disrupting Cloud and Network Strategy Today.)

In other words, virtualisation and the “software network” might allow truly new services, not just providing existing services more easily. That said, even if the answer is that the network could make a large-enough difference, there are still many extra questions about timelines, technology choices, business models, competitive and regulatory dynamics – and the practicalities and risks of making it happen.

Part of the complexity is that many of these putative new services will face additional sources of competition and/or substitution by other means. A designer of a new communications service or application has many choices about how to turn the concept into reality. Basing network investments on specific predictions of narrow services therefore carries a huge amount of risk, unless those services are agreed clearly upfront.

But there is also another latent truth here: without ever-better (and more efficient) networks, the telecom industry is going to get further squeezed anyway. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Businesses and governments will look to manage their networking and communications costs – and may get access to dark fibre or spectrum to build their own networks, if commercial services don’t continue to improve in terms of price-performance. New connectivity options are springing up too, from WiFi to drones to device-to-device connections.

In other words: some network investment will be “table stakes” for telcos, irrespective of any new digital services. In many senses, the new propositions are “upside” rather than the fundamental basis justifying capex.

 

  • Executive Summary
  • Introduction
  • History: The network is the service
  • What are so-called “digital services”?
  • Service categories
  • Network domains
  • Enabler, pre-requisite or inhibitor?
  • Overview
  • Virtualisation
  • Agility & service enablement
  • More than just the network: lead actor & supporting cast
  • Case-studies, examples & counter-examples
  • Successful network-based novel services
  • Network-driven services: learning from past failures
  • The mobile network paradox
  • Conclusion: Services, agility & the network
  • How do so-called “digital” services link to the network?
  • Which network domains can make a difference?
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In recent years, most digital services have been “non-network” based
  • Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7
  • Figure 3: Network spend both “enables” & “prevents inhibition” of new services
  • Figure 4: Virtualisation brings classic telco “Network” & “IT” functions together
  • Figure 5: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 6: Service agility is multi-faceted. Network agility is a core element
  • Figure 7: Using Big Data Analytics to Predictively Cache Content
  • Figure 8: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 9: Mapping network investment areas to service opportunities

How to be Agile: Agility by Design and Information Intensity

Background: The Telco 2.0 Agility Challenge

Agility is a highly desirable capability for telecoms operators seeking to compete and succeed in their core businesses and the digital economy in general. In our latest industry research, we found that most telco executives who responded rated their organisations as ‘moderately agile’, and identified a number of practical steps that telco management could and should take to improve agility.

The Definition and Value of Agility

In the Telco 2.0 Agility Challenge, STL Partners first worked with 29 senior telecoms operator executives to develop a framework defining agility in the industry’s own terms, and then gathered quantitative input from a further 74 executives via an online self-diagnosis tool to benchmark the industry’s agility. The analysis in this report examines the aggregate quantitative input of those executives.

The Telco 2.0 Agility framework comprises the five agility domains illustrated below.

Figure 4: The Telco 2.0 Agility Framework

Source: STL Partners, The ‘Agile Operator’: 5 Key Ways to Meet the Agility Challenge

  • Organisational Agility: Establish a more agile culture and mindset, allowing you to move at faster speeds and to innovate more effectively
  • Network Agility: Embrace new networking technologies/approaches to ensure that you provide the best experience for customers and manage your resources and investment more efficiently
  • Service Agility: Develop the capability to create products and services in a much more iterative manner, resulting in products that are developed faster, with less investment and better serve customer needs
  • Customer Agility: Provide customers with the tools to manage their service and use analytics to gain insight into customer behaviour to develop and refine services
  • Partnering Agility: Become a more effective partner by developing the right skills to understand and assess potential partnerships and ensure that the right processes/technologies are in place to make partnering as easy as possible

A key finding of the first stage was that all of the executives we spoke to considered achieving agility as very important or critical to their organisations’ success, as exemplified by this quote.

“It is fundamental to be agile. For me it is much more important than being lean – it is more than just efficiency.”

European Telco CTO

This research project was kindly sponsored by Ericsson. STL Partners independently created the methodology, questions, findings, analysis and conclusions.

Purpose of this report

This report details:

  • The headline findings of the Telco 2.0 Agility Challenge
  • The category winners
  • What are the lessons revealed about telco agility overall?
  • What do telcos need to address to improve their overall agility?
  • What can others do to help?

Key Findings

The Majority of Operators were ‘Moderately Agile’

Just over two thirds of respondents achieved a total score of between 50% and 75%. All twenty questions offered four choices, ordered from least to most agile, so a score in this range means that for most questions these respondents chose the second or third option. The mean score achieved was 63% and the median 61%. This shows that most telcos believe they have some way to go before they would realistically consider themselves truly Agile by the definition set out in the benchmark.
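For illustration, here is our reading of how the benchmark scoring works (an assumption based on the description above, not the published methodology): each answer is scored from 1 (Not Agile) to 4 (Strongly Agile) and the total is expressed as a percentage of the maximum possible score.

```python
# Sketch of the implied scoring scheme: twenty questions, each answered 1-4,
# expressed as a percentage of the maximum (so option 2 ~ 50%, option 3 ~ 75%).
def agility_score(answers: list[int]) -> float:
    return sum(answers) / (4 * len(answers)) * 100

mostly_second_and_third = [3] * 12 + [2] * 8   # a typical "moderately agile" respondent
print(f"{agility_score(mostly_second_and_third):.0f}%")   # -> 65%
```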

Figure 5: Distribution of Total Agility Scores

Source: STL Partners Telco 2.0 Agility Challenge, n =74

Agility Champions

A further part of the Agility Challenge was to identify Agility Champions, who were recognised through Agility Domain Awards at TM Forum Live! in Nice in June. The winners of these prizes were additionally interviewed by STL Partners to verify the evidence behind their claims. They were:

  • Telus, which won the Customer Agility Challenge Award. Telus adopted a Customer First initiative across the whole organisation; this commitment to customers has led to both a significant increase in the ‘likelihood to recommend’ metric and a substantial reduction in customer complaints.
  • Zain Jordan, which won the Service Agility Challenge. Zain Jordan has achieved the speed and flexibility needed to differentiate itself in the marketplace through deployment of state-of-the-art, real time service enablement platforms and solutions. These are managed and operated by professional, specialized, and qualified teams, and are driving an increase in profitability and customer satisfaction.
  • Telecom Italia Digital Solutions (TIDS), which won the Partnering Agility Challenge. TIDS has partnered effectively to deliver innovative digital services, including establishing and launching an IoT platform from scratch within six months. It is also developing and coordinating all of the digital presence at Expo Milan 2015.

Network Agility is hardest to achieve

Most respondents scored lower on Network Agility than on the other domains. We believe this is partly because the network criteria were harder to achieve (e.g. configuring networks in real time), but also because achieving meaningful agility in a network is, as a rule, harder than in the other areas.

Figure 6: Average Score by Agility Domain

Note: The maximum score was 4 and the minimum 1, with 4 = Strongly Agile, 3 = Mostly Agile, 2 = Somewhat Agile, and 1 = Not Agile.

Source: STL Partners, n = 74

Next Section: Looking Deeper

 

  • Executive Summary
  • Introduction
  • Background: The Telco 2.0 Agility Challenge
  • Purpose of this report
  • Key Findings
  • The Majority of Operators were ‘Moderately Agile’
  • Agility Champions
  • Network Agility is hardest to achieve
  • Looking Deeper
  • Organisational Agility: ‘Mindset’ is not enough
  • Information Agility is an important factor
  • If you had to choose One Metric that Matters (OMTM) it would be…
  • Conclusions

 

  • Figure 1: The Telco 2.0 Agility Framework
  • Figure 2: Respondents can be grouped into 3 types based on the level and nature of their organisational agility
  • Figure 3: Information Agility Sub-Segments
  • Figure 4: The Telco 2.0 Agility Framework
  • Figure 5: Distribution of Total Agility Scores
  • Figure 6: Average Score by Agility Domain
  • Figure 7: We were surprised that Organisational Agility was not a stronger indicator of Total Agility
  • Figure 8: Differences in Responses to Organisational Agility Questions
  • Figure 9: Organisational Agility a priori Segments and Scores
  • Figure 10: ‘Agile by Design’ Organisations Scored higher than others
  • Figure 11: Defining Information Agility Segments
  • Figure 12: The Information Agile Segment scored higher than the others

How 5G is Disrupting Cloud and Network Strategy Today

5G – cutting through the hype

As with 3G and 4G, the approach of 5G has been heralded by vast quantities of debate and hyperbole. We contemplated reviewing some of the more outlandish statements we’ve seen and heard, but for the sake of brevity and progress we’ll concentrate in this report on the genuine progress that has also occurred.

A stronger definition: a collection of related technologies

Let’s start by defining terms. For us, 5G is a collection of related technologies that will eventually be incorporated into a 3GPP standard replacing the current LTE-A. NGMN, the forum that is meant to coordinate the mobile operators’ requirements vis-à-vis the vendors, recently issued a useful document setting out which technologies it wants to see in the eventual solution, or at least considered in the standards process.

Incremental progress: ‘4.5G’

For a start, NGMN includes a variety of incremental improvements that promise substantially more capacity. These include higher-order modulation, developing the carrier-aggregation features in LTE-A to share spectrum between cells as well as within them, and improving interference coordination between cells. These are uncontroversial and are very likely to be deployed as incremental upgrades to existing LTE networks long before 5G is rolled out or even finished. This is what some vendors, notably Huawei, refer to as 4.5G.

Better antennas, beamforming, etc.

More excitingly, NGMN envisages some advanced radio features. These include beamforming, in which the shape of the radio beam between a base station and a mobile station is adjusted, taking advantage of the diversity of users in space to re-use the available radio spectrum more intensely, and both multi-user and massive MIMO (Multiple Input/Multiple Output). Massive MIMO simply means using many more antennas – at the moment the latest equipment uses 8 transmitter and 8 receiver antennas (8T*8R), whereas 5G might use 64. Multi-user MIMO uses the variety of antennas to serve more users concurrently, rather than just serving them faster individually. These promise quite dramatic capacity gains, at the cost of more computationally intensive software-defined radio systems and more complex antenna designs. Although they are cutting-edge, it’s worth pointing out that 802.11ac Wave 2 WiFi devices shipping now have these features, and it is likely that the WiFi ecosystem will hold a lead in them for some considerable time.
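To give a rough sense of why more antennas matter, the sketch below applies the textbook idealised MIMO scaling, in which spectral efficiency grows with the number of spatial streams (the smaller of the transmit and receive antenna counts). The 15dB SNR and the rich-scattering assumption are ours, purely for illustration; real massive-MIMO gains depend heavily on user numbers, channel conditions and hardware.

```python
from math import log2

def mimo_capacity_bps_per_hz(n_tx: int, n_rx: int, snr_linear: float) -> float:
    """Idealised spectral efficiency (bps/Hz): each spatial stream is assumed
    to see roughly the same SNR - a theoretical upper bound, not a field result."""
    return min(n_tx, n_rx) * log2(1 + snr_linear)

snr = 10 ** (15 / 10)  # assume 15 dB SNR, illustrative only
for antennas in (8, 64):
    print(f"{antennas}x{antennas} MIMO: ~{mimo_capacity_bps_per_hz(antennas, antennas, snr):.0f} bps/Hz")
# 8x8 -> ~40 bps/Hz, 64x64 -> ~322 bps/Hz
```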

New spectrum

NGMN also sees evolution towards 5G in terms of spectrum. We can divide this into a conservative and a radical phase – in the first, conservative phase, 5G is expected to start using bands below 6GHz, while in the second, radical phase, the centimetre/millimetre-wave bands up to and above 30GHz are in discussion. These promise vastly more bandwidth, but as usual will demand a higher density of smaller cells and lower transmitter power levels. It’s worth pointing out that it’s still unclear whether 6GHz will make the agenda for this year’s WRC-15 conference, and 60GHz may or may not be taken up in 2019 at WRC-19, so spectrum policy is a critical path for the whole project of 5G.

Full duplex radio – doubling capacity in one stroke

Moving on, we come to some much more radical proposals and exotic technologies. 5G may use the emerging technology of full-duplex radio, which leverages advances in hardware signal processing to remove self-interference and make it possible for radio devices to send and receive at the same time on the same frequency – something hitherto thought impossible, and a fundamental constraint in radio. This area has seen a lot of progress recently and is moving from academic research towards industrial status. If it works, it promises to double the capacity provided by all the other technologies together.

A new, flatter network architecture?

A major redesign of the network architecture is being studied. This is highly controversial. A new architecture would likely be much “flatter”, with fewer levels of abstraction (such as the encapsulation of Internet traffic in the GTP protocol) or centralised functions. That would be a very radical break with the GSM-inspired practice that worked in 2G, 3G, and, in an adapted form, in 4G. However, the very demanding latency targets we will discuss in a moment will be very difficult to satisfy with a centralised architecture.

Content-centric networking

Finally, serious consideration is being given to what the NGMN calls information-based networking, better known to the wider community as name-based networking, named-data networking, or content-centric networking, as TCP-Reno pioneer Van Jacobson called it when he introduced the concept in a now-classic lecture. The idea here is that the Internet currently works by mapping content to domain names and then to machines. In content-centric networking, users request an item of content, uniquely identified by a name, and the network finds the nearest source for it, thus keeping traffic localised and facilitating scalable, distributed systems. This would represent a radical break with both GSM-inspired and most Internet practice, and is currently very much a research project. However, code does exist and has even been implemented on the OpenFlow SDN platform, and IETF standardisation is under way.
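As a purely conceptual illustration of the name-based idea – not the CCNx/NDN wire protocol or any real implementation, and with all node names and distances invented – the toy sketch below resolves a content name to the nearest node holding a copy, rather than resolving a host name first as today’s Internet does.

```python
# Toy name-based forwarding: ask for content by name, get it from the nearest copy.
caches = {
    "edge-pop-london": {"/video/highlights/segment-42"},
    "edge-pop-paris":  set(),
    "origin-server":   {"/video/highlights/segment-42", "/video/highlights/segment-43"},
}
distance_ms = {"edge-pop-london": 2, "edge-pop-paris": 9, "origin-server": 35}

def fetch(content_name: str) -> str:
    """Return the nearest node that holds the named content."""
    holders = [node for node, contents in caches.items() if content_name in contents]
    if not holders:
        raise LookupError(f"no copy of {content_name} found")
    return min(holders, key=distance_ms.__getitem__)

print(fetch("/video/highlights/segment-42"))  # -> edge-pop-london (traffic stays local)
print(fetch("/video/highlights/segment-43"))  # -> origin-server (only source available)
```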

The mother of all stretch targets

5G is already a term associated with implausibly grand theoretical maxima, like every G before it. However, the NGMN has the advantage that it is a body that serves first of all the interests of the operators – the vendors’ customers – rather than the vendors themselves. Its expectations are therefore substantially more interesting than some of the vendors’ propaganda material. It has also recently started to reach out to other stakeholders, such as manufacturing companies involved in the Internet of Things.

Reading the NGMN document raises some interesting issues about the definition of 5G. Rather than set targets in an absolute sense, it puts forward parameters for a wide range of different use cases. A common criticism of the 5G project is that it is over-ambitious in trying to serve, for example, low-bandwidth ultra-low-power M2M monitoring networks and ultra-HD multicast video streaming with the same network. The range of use cases and performance requirements NGMN has defined is so diverse that they might indeed be served by different radio interfaces within a 5G infrastructure, or even by fully independent radio networks. Whether 5G ends up as “one radio network to rule them all”, an interconnection standard for several radically different systems, or something in between (for example, a radio standard with options, or a common core network and specialised radios) is very much up for debate.

In terms of speed, NGMN is looking for 50Mbps user throughput “everywhere”, with half that speed available uplink. Success is defined here at the 95th percentile, so this means 50Mbps to 95% geographical coverage, 95% of the time. This should support handoff up to 120Km/h. In terms of density, this should support 100 users/square kilometre in rural areas and 400 in suburban areas, with 10 and 20 Gbps/square km capacity respectively. This seems to be intended as the baseline cellular service in the 5G context.

In the urban core, downlink of 300Mbps and uplink of 50Mbps is required, with 100Km/h handoff, and up to 2,500 concurrent users per square kilometre. Note that the density targets are per-operator, so that would be 10,000 concurrent users/sq km when four MNOs are present. Capacity of 750Gbps/sq km downlink and 125Gbps/sq km uplink is required.

An extreme high-density scenario is included as “broadband in a crowd”. This requires the same speeds as the “50Mbps anywhere” scenario, with vastly greater density (150,000 concurrent users/sq km or 30,000 “per stadium”) and commensurately higher capacity. However, the capacity planning assumes that this use case is uplink-heavy – 7.5Tbps/sq km uplink compared to 3.75Tbps downlink. That’s a lot of selfies, even in 4K! The fast handoff requirement, though, is relaxed to support only pedestrian speeds.

There is also a femtocell/WLAN-like scenario for indoor and enterprise networks, which pushes speed and capacity to their limits, with 1Gbps downlink and 500Mbps uplink, 75,000 concurrent users/sq km or 75 users per 1000 square metres of floor space, and no significant mobility. Finally, there is an “ultra-low cost broadband” requirement with 10Mbps symmetrical, 16 concurrent users and 16Mbps/sq km, and 50Km/h handoff. (There are also some niche cases, such as broadcast, in-car, and aeronautical applications, which we propose to gloss over for now.)
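A quick cross-check of these figures (the arithmetic is ours; the capacity and density numbers are NGMN’s, downlink and per operator): dividing the quoted area capacity by the quoted concurrent-user density gives the per-user headroom, which lines up with the 50Mbps and 300Mbps headline targets.

```python
# Per-concurrent-user downlink headroom implied by the NGMN figures quoted above.
scenarios = {
    "50Mbps everywhere (rural)":    (10e9,  100),    # 10 Gbps/sq km, 100 users/sq km
    "50Mbps everywhere (suburban)": (20e9,  400),
    "Urban core":                   (750e9, 2_500),
}
for name, (capacity_bps, users_per_sqkm) in scenarios.items():
    print(f"{name:30s} ~{capacity_bps / users_per_sqkm / 1e6:.0f} Mbps per concurrent user")
# rural ~100 Mbps, suburban ~50 Mbps, urban core ~300 Mbps
```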

Clearly, the solution will have to either be very flexible, or else be a federation of very different networks with dramatically different radio properties. It would, for example, probably be possible to aggregate the 50Mbps everywhere and ultra-low cost solutions – arguably the low-cost option is just the 50Mbps option done on the cheap, with fewer sites and low-band spectrum. The “broadband in a crowd” option might be an alternative operating mode for the “urban core” option, turning off handoff, pulling in more aggregated spectrum, and reallocating downlink and uplink channels or timeslots. But this does begin to look like at least three networks.

Latency: the X factor

Another big stretch, and perhaps the most controversial issue here, is the latency requirement. NGMN draws a clear distinction between what it calls end-to-end latency, aka the familiar round-trip time measurement from the Internet, and user-plane latency, defined thus:

Measures the time it takes to transfer a small data packet from user terminal to the Layer 2 / Layer 3 interface of the 5G system destination node, plus the equivalent time needed to carry the response back.

That is to say, the user-plane latency is a measurement of how long it takes the 5G network, strictly speaking, to respond to user requests, and how long it takes for packets to traverse it. NGMN points out that the two metrics are equivalent if the target server is located within the 5G network. NGMN defines both using small packets, and therefore negligible serialisation delay, and assuming zero processing delay at the target server. The target is 10ms end-to-end, 1ms for special use cases requiring low latency, or 50ms end-to-end for the “ultra-low cost broadband” use case. The low-latency use cases tend to be things like communication between connected cars, which will probably fall under the direct device-to-device (D2D) element of 5G, but nevertheless some vendors seem to think it refers to infrastructure as well as D2D. Therefore, this requirement should be read as one for which the 5G user plane latency is the relevant metric.

This last target is arguably the biggest stretch of all, but also perhaps the most valuable.

The lower bound on any measurement of latency is very simple – it’s the time it takes to physically reach the target server at the speed of light. Latency is therefore intimately connected with distance. Latency is also intimately connected with speed – protocols like TCP use it to determine how many bytes they can risk “in flight” before getting an acknowledgement, and hence how much useful throughput can be derived from a given theoretical bandwidth. Also, with faster data rates, more of the total time it takes to deliver something is taken up by latency rather than transfer.
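A minimal sketch of that bandwidth-delay relationship, assuming a fixed 64KB window purely for illustration (modern stacks scale their windows, but the principle is the same): halving the round-trip time roughly doubles the throughput a single flow can achieve.

```python
# Throughput ceiling of a single flow: window_size / round-trip time.
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

window = 64 * 1024  # a conservative 64 KB window, for illustration only
for rtt in (1, 10, 50):
    print(f"RTT {rtt:>2} ms -> at most ~{max_throughput_mbps(window, rtt):.0f} Mbps per flow")
# 1 ms -> ~524 Mbps, 10 ms -> ~52 Mbps, 50 ms -> ~10 Mbps
```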

And the way we build applications now tends to make latency, and especially the variance in latency known as jitter, more important. In order to handle the scale demanded by the global Internet, it is usually necessary to scale out by breaking up the load across many, many servers. In order to make this work, it is usually also necessary to disaggregate the application itself into numerous, specialised, and independent microservices. (We strongly recommend Mary Poppendieck’s presentation at the link.)

The result of this is that a popular app or Web page might involve calls to dozens or hundreds of different services. Google.com includes 31 HTTP requests these days, and Amazon.com 190. If the variation in latency is not carefully controlled, it becomes statistically more likely than not that a typical user will encounter at least one server’s 99th-percentile performance. (eBay tries to identify users getting slow service and serve them a deliberately cut-down version of the site – see slide 17 here.)
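The arithmetic behind that statement is simple, if we assume the back-end calls behave independently (an idealisation): with the request counts quoted above, the chance of hitting at least one 99th-percentile response passes 50% once a page involves roughly 70 calls.

```python
# Probability that a page made of n independent back-end calls hits at least
# one server's 99th-percentile ("slow") response.
def p_hit_slow_tail(n_calls: int, tail_quantile: float = 0.99) -> float:
    return 1 - tail_quantile ** n_calls

for n in (31, 190):   # request counts quoted above for Google.com and Amazon.com
    print(f"{n:>3} calls -> {p_hit_slow_tail(n):.0%} chance of at least one p99 response")
# 31 calls -> ~27%, 190 calls -> ~85%
```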

We discuss this in depth in a Telco 2.0 Blog entry here.

Latency: the challenge of distance

It’s worth pointing out here that the 5G targets can literally be translated into kilometres. The rule of thumb for speed-of-light delay is 4.9 microseconds for each kilometre of fibre with a refractive index of 1.47. 1ms – 1,000 microseconds – equals about 204km in a straight line, assuming no routing delay. A response back is needed too, so divide that distance in half. As a result, in order to be compliant with the NGMN 5G requirements, all the network functions required to process a data call must be physically located within 100km, i.e. 1ms, of the user. And if the end-to-end requirement is taken seriously, the applications or content that users want must also be hosted within 1,000km, i.e. 10ms, of the user. (In practice, there will be some delay contributed by serialisation, routing, and processing at the target server, so this would actually be somewhat more demanding.)
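Expressed as a back-of-envelope calculation (the same 4.9 microseconds-per-kilometre rule of thumb, ignoring routing, serialisation and server processing delay):

```python
# Translate a round-trip latency budget into a maximum one-way fibre distance.
US_PER_KM_FIBRE = 4.9

def max_distance_km(round_trip_budget_ms: float) -> float:
    one_way_us = (round_trip_budget_ms * 1000) / 2   # half the budget each way
    return one_way_us / US_PER_KM_FIBRE

for budget_ms, label in [(1, "1ms user-plane budget"), (10, "10ms end-to-end budget")]:
    print(f"{label}: functions/content must sit within ~{max_distance_km(budget_ms):.0f} km")
# 1 ms -> ~102 km, 10 ms -> ~1020 km
```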

To achieve this, the architecture of 5G networks will need to change quite dramatically. Centralisation suddenly looks like the enemy, and middleboxes providing video optimisation, deep packet inspection, policy enforcement, and the like will have no place. At the same time, protocol designers will have to think seriously about localising traffic – this is where the content-centric networking concept comes in. Given the number of interested parties in the subject overall, it is likely that there will be a significant period of ‘horse-trading’ over the detail.

It will also need nothing more or less than a CDN and data-centre revolution. Content, apps, or commerce hosted within this 1,000km contour will have a very substantial competitive advantage over sites that don’t move their hosting strategy to take advantage of lower latency. Telecoms operators, by the same token, will have to radically decentralise their networks to get their systems within the 100km contour. Content, apps, or commerce sites that move closer still – to the 5ms/500km contour or nearer – will benefit even more. The idea of centralising everything into shared services and global cloud platforms suddenly looks dated. So might the enormous hyperscale data centres one day look like the IT equivalent of sprawling, gas-guzzling suburbia? And will mobile operators become a key actor in the data-centre economy?

  • Executive Summary
  • Introduction
  • 5G – cutting through the hype
  • A stronger definition: a collection of related technologies
  • The mother of all stretch targets
  • Latency: the X factor
  • Latency: the challenge of distance
  • The economic value of snappier networks
  • Only Half The Application Latency Comes from the Network
  • Disrupt the cloud
  • The cloud is the data centre
  • Have the biggest data centres stopped getting bigger?
  • Mobile Edge Computing: moving the servers to the people
  • Conclusions and recommendations
  • Regulatory and political impact: the Opportunity and the Threat
  • Telco-Cloud or Multi-Cloud?
  • 5G vs C-RAN
  • Shaping the 5G backhaul network
  • Gigabit WiFi: the bear may blow first
  • Distributed systems: it’s everyone’s future

 

  • Figure 1: Latency = money in search
  • Figure 2: Latency = money in retailing
  • Figure 3: Latency = money in financial services
  • Figure 4: Networking accounts for 40-60 per cent of Facebook’s load times
  • Figure 5: A data centre module
  • Figure 6: Hyperscale data centre evolution, 1999-2015
  • Figure 7: Hyperscale data centre evolution 2. Power density
  • Figure 8: Only Facebook is pushing on with ever bigger data centres
  • Figure 9: Equinix – satisfied with 40k sq ft
  • Figure 10: ETSI architecture for Mobile Edge Computing

 

Gigabit Cable Attacks This Year

Introduction

Since at least May 2014 and the Triple Play in the USA Executive Briefing, we have been warning that the cable industry’s continuous improvement of its DOCSIS 3 technology threatens fixed operators with a succession of relatively cheap (in terms of CAPEX) but dramatic speed jumps. Gigabit chipsets have been available for some time, so the actual timing of the roll-out is set by cable operators’ commercial choices.

With the arrival of DOCSIS 3.1, multi-gigabit cable has also become available. As a result, cable operators have become the best value providers in the broadband mass markets: typically, we found in the Triple Play briefing, they were the cheapest in terms of price/megabit in the most common speed tiers, at the time between 50 and 100Mbps. They were sometimes also the leaders for outright speed, and this has had an effect. In Q3 2014, for the first time, Comcast had more high-speed Internet subscribers than it had TV subscribers, on a comparable basis. Furthermore, in Europe, cable industry revenues grew 4.6% in 2014 while the TV component grew 1.8%. In other words, cable operators are now broadband operators above all.

Figure 1: Comcast now has more broadband than TV customers

Source: STL Partners, Comcast Q1 2015 trending schedule 

In the December 2014 Will AT&T shed copper, fibre-up, or buy more content – and what are the lessons? Executive Briefing, we covered the impact on AT&T’s consumer wireline business, and pointed out that its strategy of concentrating on content as opposed to broadband has not really delivered. In the context of ever more competition from streaming video, it was necessary to have an outstanding broadband product before trying to add content revenues – something AT&T’s DSL infrastructure couldn’t deliver against cable or fibre competitors. The cable competition concentrated on winning whole households’ spending with broadband, with content as an upsell, and has undermined the wireline base to the point where AT&T might well exit a large proportion of it or perhaps sell off the division, refocusing on wireless, DirecTV satellite TV, and enterprise. At the moment, Comcast sees about two broadband net-adds for each triple-play net-add, although the increasing numbers of business ISP customers complicate the picture.

Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups

Source: STL, Comcast Q1 trending schedule

Since Christmas, the trend has picked up speed. Comcast announced a 2Gbps deployment to 1.5 million homes in the Atlanta metropolitan area, with a national deployment to follow. Time Warner Cable has announced a wave of upgrades in Charlotte, North Carolina that ups their current 30Mbps tier to 200Mbps and their 50Mbps tier to 300Mbps, after Google Fiber announced plans to deploy in the area. In the UK, Virgin Media users have been reporting unusually high speeds, apparently because the operator is trialling a 300Mbps speed tier, not long after it upgraded 50Mbps users to 152Mbps.

It is very much worth noting that these deployments are at scale. The Comcast and TWC rollouts are in the millions of premises. When the Virgin Media one reaches production status, it will be multi-million too. Vodafone-owned KDG in Germany is currently deploying 200Mbps, and it will likely go further as soon as it feels the need from a tactical point of view. This is the advantage of an upgrade path that doesn’t require much trenching. Not only can the upgrades be incremental and continuous, they can also be deployed at scale without enormous disruption.

Technology is driving the cable surge

This year’s CES saw the announcement, by Broadcom, of a new system-on-a-chip (SoC) for cable modems/STBs that integrates the new DOCSIS 3.1 cable standard. This provides for even higher speeds, theoretically up to 7Gbps downlink, while still providing a broadcast path for pure TV. The SoC also, however, includes a WLAN radio with the newest 802.11ac technology, including beamforming and 4×4 multiple-input and multiple-output (MIMO), which is rated for gigabit speeds in the local network.

Even taking into account the usual level of exaggeration, this is an impressive package, offering telco-hammering broadband speeds, support for broadcast TV, and in-home distribution at speeds that can keep up with 4K streaming video. These are the SoCs that Comcast will be using for its gigabit cable rollouts. STMicroelectronics demonstrated its own multigigabit solution at CES, and although Intel has yet to show a DOCSIS 3.1 SoC, the most recent version of its Puma platform offers up to 1.6Gbps in a DOCSIS 3 network. DOCSIS 3 and 3.1 are designed to be interoperable, so this product has a future even after the head-ends are upgraded.

Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset

Source: RCRWireless 

With multiple chipset vendors shipping products, CableLabs running regular interoperability tests, and large regional deployments beginning, we conclude that the big cable upgrade is now here. Even if cable operators succeed in virtualising their set-top box software, you can’t provide the customer-end modem or the WiFi router from the cloud. It’s important to realise that FTTH operators can upgrade in a similarly painless way by replacing their optical network terminals (ONTs), but DSL operators need to replace infrastructure. Also, ONTs are often independent from the WLAN router or other customer equipment, so the upgrade won’t necessarily improve the WiFi.

WiFi is also getting a major upgrade

The Broadcom device is so significant, though, because of the very strong WiFi support built in with the cable modem. Like the cable industry, the WiFi ecosystem has succeeded in keeping up a steady cycle of continuous improvements that are usually backwards compatible, from 802.11b through to 802.11ac, thanks to a major standards effort, the scale that Intel and Apple’s support gives us, and its relatively light intellectual property encumbrance.

802.11ac adds a number of advanced radio features, notably multiple-user MIMO, beamforming, and higher-density modulation, that are only expected to arrive in the cellular network as part of 5G some time after 2020, as well as some incremental improvements over 802.11n, like additional MIMO streams, wider channels, and 5GHz spectrum by default. As a result, the industry refers to it as “gigabit WiFi”, although the gigabit is a per-station rather than per-user throughput.

The standard has been settled since January 2014, and support has been available in most flagship-class devices and laptop chipsets since then, so this is now a reality. The upgrade of the cable networks to 802.11ac WiFi backed with DOCSIS 3.1 will have major strategic consequences for telcos, as it enables the cable operators and any strategic partners of theirs to go in even harder on the fixed broadband business and also launch a WiFi-plus-MVNO mobile service at the same time. The beamforming element of 802.11ac should help them support higher user densities, as it makes use of the spatial diversity among different stations to reduce interference. Cablevision already launched a mobile service just before Christmas. We know Comcast is planning to launch one sometime this year, as it has been hiring a variety of mobile professionals quite aggressively. And, of course, the CableWiFi roaming alliance greatly facilitates scaling up such a service. The economics of a mini-carrier, as we pointed out in the Google MVNO: What’s Behind It and What Are the Implications? Executive Briefing, hinge on how much traffic can be offloaded to WiFi or small cells.

Figure 4: Modelling a mini-carrier shows that the WiFi is critical

Source: STL Partners

Traffic carried on WiFi costs nothing in terms of spectrum and much less in terms of CAPEX (due to the lower intellectual-property tax and the very high production runs of WiFi equipment). In a cable context, it will often be backhauled in the spare capacity of the fixed access network, and therefore will account for very little additional cost on this score. As a result, the percentage of data traffic transferred to WiFi, or absorbed by it, is a crucial variable. KDDI, for example, carries 57% of its mobile data traffic on WiFi and hopes to reach 65% by the end of this year. Increasing the fraction from 30% to 57% roughly halved its CAPEX on LTE.
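A simple sensitivity sketch makes the point, using an invented total-demand figure and assuming that radio capex scales roughly in line with the cellular traffic actually carried – an illustrative assumption, not KDDI’s or any operator’s actual cost model.

```python
# How much traffic the cellular network must carry as the WiFi offload fraction rises.
total_demand_pb_per_month = 100  # hypothetical total mobile data demand

for offload_fraction in (0.30, 0.57, 0.65):
    cellular_load = total_demand_pb_per_month * (1 - offload_fraction)
    print(f"offload {offload_fraction:.0%}: cellular network carries {cellular_load:.0f} PB/month")
# 30% -> 70 PB, 57% -> 43 PB, 65% -> 35 PB: moving from 30% to 57% offload cuts the
# cellular load by roughly 40%, directionally consistent with the capex effect noted above.
```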

A major regulatory issue at the moment is the deployment of LTE-LAA (Licensed-Assisted Access), which aggregates unlicensed radio spectrum with a channel from licensed spectrum in order to increase the available bandwidth. The 5GHz WiFi band is the most likely candidate for this, as it is widely available, contains a lot of capacity, and is well-supported in hardware.

We should expect the cable industry to push back very hard against efforts to rush deployment of LTE-LAA cellular networks through the regulatory process, as they have a great deal to lose if the cellular networks start to take up a large proportion of the 5GHz band. From their point of view, a major purpose of LTE-LAA might be to occupy the 5GHz and deny it to their WiFi operations.

  • Executive Summary
  • Introduction
  • Technology is driving the cable surge
  • WiFi is also getting a major upgrade
  • Wholesale and enterprise markets are threatened as well
  • The Cable Surge Is Disrupting Wireline
  • Conclusions
  • STL Partners and Telco 2.0: Change the Game 
  • Figure 1: Comcast now has more broadband than TV customers
  • Figure 2: Sell the broadband and you get the whole bundle. About half Comcast’s broadband growth is associated with triple-play signups
  • Figure 3: This is your enemy. Broadcom’s DOCSIS3.1/802.11ac chipset
  • Figure 4: Modelling a mini-carrier shows that the WiFi is critical
  • Figure 5: Comcast’s growth is mostly driven by business services and broadband
  • Figure 6: Comcast Business is its growth star, with a 27% CAGR
  • Figure 7: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 8: 3 major cable operators’ business services are now close to AT&T or Verizon’s scale
  • Figure 9: Summary of gigabit deployments
  • Figure 10: CAPEX as a % of revenue has been falling for some time…

 

Key Questions for The Future of the Network, Part 2: Forthcoming Disruptions

We recently published a report, Key Questions for The Future of the Network, Part 1: The Business Case, exploring the drivers for network investment. In this follow-up report, we expand the coverage into two separate areas through which we explore five key questions:

Disruptive network technologies

  1. Virtualisation & the software telco – how far, how fast?
  2. What is the path to 5G? And what will it be used for?
  3. What is the role of WiFi & other wireless technologies?

External changes

  1. What are the impacts of government & regulation on the network?
  2. How will the vendor landscape change & what are the implications of this?

In the extract below, we outline the context for the first area – disruptive network technologies – and explore the rationales and processes associated with virtualisation (Question 1).

Critical network-technology disruptions

This section covers three huge questions which should be at the top of any CTO’s mind in a CSP – and those of many other executives as well. These are strategically-important technology shifts that have the potential to “change the game” in the longer term. While two of them are “wireless” in nature, they also impact fixed/fibre/cable domains, both through integration and potential substitution. These will also have knock-on effects in financial terms – directly in terms of capex/opex costs, or indirectly in terms of services enabled and revenues.

This is not intended as a round-up of every important trend across the technology spectrum. Clearly, there are many other evolutions occurring in device design, IoT, software-engineering, optical networking and semiconductor development. These will all intersect in some ways with telcos, but there are so many “logical hops” away from the process of actually building and running networks, that they don’t really fit into this document easily. (Although they do appear in contexts such as drivers of desirable 5G network capabilities).

Instead, the focus once again is on unanswered questions that link innovation with “disruption” of how networks are conceived and deployed. As described below, network virtualisation has huge and diverse impacts across the CSP universe, and 5G will likely represent a major break from today’s 4G architecture too. This is very different to changes which are mostly incremental.

The mobile and software focus of this section is deliberate. Fixed-network technologies – fast-evolving though they are – generally do not today cause “disruption” in a technical sense. As the name suggests, the current newest cable-industry standard, DOCSIS3.1, is an evolution of 3.0, not a revolution. There is no 4.0 on the drawing-boards, yet. But the relative ease of upgrade to “gigabit cable” may unleash more market-related disruptions, as telcos feel the need to play catch-up with their rivals’ swiftly-escalating headline speeds.

Fibre technologies also tend to be comparatively incremental, rather than driving (or enabling) massive organisational and competitive shifts. In fixed networks there are other important drivers – competition, network unbundling, 4K television, OTT-style video and so on – as well as important roles for virtualisation, which covers both mobile and fixed domains. For markets with high use of residential “OTT video” services such as Netflix – especially in 4K variants – the push to gigabit-range speeds may be faster than expected. This will also have knock-on impacts on the continued improvement of WiFi, defending its position against ever-faster cellular networks. Indeed, faster gigabit cable and FTTH networks will be necessary to provide backhaul for 4.5G and 5G cellular networks, both for normal cell towers and the expected rapid growth of small cells.

The questions covered in more depth here examine:

  • Virtualisation & the “software telco”: How fast will SDN and NFV appear in commercial networks, and how broad are their impacts in both medium and longer terms? 
  • What is the path from 4G to 5G? This is a less obvious question than it might appear, as we do not yet even have agreed definitions of what we want “5G” to do, let alone defined standards to do it.
  • What is the role of WiFi and other wireless technologies? 

All of these intersect, and have inter-dependencies. For instance, 5G networks are likely to embrace SDN/NFV as a core component, and also perhaps form an “umbrella” over other low-power wireless networks.

A fourth “critical” question would have been to consider security technology and processes. Clearly, the future network is going to face continued challenges from hackers and maybe even cyber-warfare, against which we will need to prepare. However, that is in many ways a broader set of questions that actually reflect on all the others – virtualisation will bring its own security dilemmas, as (no doubt) will 5G. WiFi already does. It is certainly a critical area that bears consideration at a strategic level within CSPs, although it is not addressed here as a specific “question”. It is also a huge and complex area that deserves separate study.

Non-disruptive network technologies

As well as being prepared to exploit truly disruptive innovations, the industry also needs to get better at spotting non-disruptive ones that are doomed to failure, and abandoning them before they incur too much cost or distraction. The telecoms sector has a long way to go before it embraces the start-up mentality of “failing fast” – there are too many hypothetical “standards” gathering dust on a metaphorical shelf, and never being deployed despite a huge amount of work. Sometimes they get shoe-horned into new architectures, as a way to breathe life into them – but that often just encumbers shiny new technologies with the failures of the past.

For example, over the past 10+ years, the telecom industry has been pitching IMS (IP Multimedia Subsystem) as the future platform for interoperating services. It is finally gaining some adoption, but essentially only as a way to implement VoIP versions of the phone system – and even then, with huge increases in complexity and often higher costs. It is not “disruptive”, except insofar as it sucks huge amounts of resources and management attention away from other possible sources of genuine innovation. Few developers care about it, and the “technology politics” behind it have contributed to the industry’s problems, not the solutions. While there is growth in the deployment of IMS (e.g. as a basis for VoLTE – voice on LTE – or fixed-line VoIP), it is primarily an extra cost, rather than a source of new revenue or competitive advantage. It might help telcos reduce costs by retiring old equipment or reclaiming spectrum for re-use, but that seems to be the limit of its utility and opportunity.

Figure 1: IMS-based services (mostly VoIP) are evolutionary not disruptive

Source: Disruptive Analysis

A common theme in recent years has been for individual point solutions for technical standards to seem elegant “in isolation”, but to fail to take account of the wider market context. Real-world “offload” of mobile data traffic to WiFi and femtocells has been minimal, because of various practical and commercial constraints – many of which were predictable. Self-optimising networks (where radio components configure, provision and diagnose themselves automatically) suffered from apathy among vendors – as well as fears from operator staff that they might make themselves redundant. A whole slew of attempts at integrating WiFi with cellular have also had minimal impact, because they ignored the existence of private WiFi and user behaviour. Some of these are now making a return, engineered into more holistic solutions like HetNets and SDN. Telco execs need to ensure that their representatives on standards bodies, or industry fora, are able to make pragmatic decisions with multiple contributory inputs, rather than always pursuing “engineering purity”.

Virtualisation & the “software telco” – how far, how fast?

Spurred by rapid advances in standardised computing products and cloud platforms, the idea of virtualisation is now almost ubiquitous across the telecom sector. Yet the specialised nature of network equipment means that “switching to the cloud” is a lot more complicated than is the case for enterprise IT. But change is happening – the industry is now slowly moving from inflexible, non-scalable network elements or technology sub-systems, to ones which are programmable, running on commercial hardware, and which can “spin up” or down in terms of capacity. We are still comparatively early in this new cycle, but the trend now appears to be inexorable. It is being driven both by what is becoming possible, and by the threats posed by other denizens of the “cloud universe” migrating towards the telecoms industry and threatening to replace aspects of it unilaterally.

Two acronyms cover the main developments:

  • Software-defined networks (SDN) change the basic network “plumbing” – rather than hugely-complex switches and routers, transmitting and processing data streams individually, SDN puts a central “controller” function in charge of more flexible boxes. These can be updated more easily, have new network-processing capabilities enabled, and allow (hopefully) for better reliability and lower costs.
  • Network function virtualisation (NFV) is less about the “big iron” parts of the network, instead focusing on the myriad of other smaller units needed to do more specific tasks relating to control, security, optimisation and so forth. It allows these supporting functions to be re-cast in software, running as apps on standard servers, rather than needing a variety of separate custom-built boxes and chips.

Figure 2: ETSI’s vision for NFV

Source: ETSI & STL Partners

And while a lot of focus has been placed on operators’ own data centres and “data-plane” boxes such as routers and assorted traffic-processing “middle-boxes”, that is not the whole story. Virtualisation also extends to the other elements of telco kit: “control-plane” elements used to oversee the network and internal signalling, billing and OSS systems, and even parts of the access and radio network. Tying them all together – and managing the new virtual components – brings new challenges in “orchestration”.

But this raises a number of critical subsidiary questions.

  • Executive Summary
  • Introduction
  • Does the network matter? And will it face “disruption”?
  • Raising questions
  • Overview: Which disruptions are next?
  • Critical network-technology disruptions
  • Non-disruptive network technologies
  • Virtualisation & the “software telco” – how far, how fast?
  • What is the path to 5G? And what will it be used for?
  • What is the role of WiFi & other wireless technologies?
  • What else needs to happen?
  • What are the impacts of government & regulation?
  • Will the vendor landscape shift?
  • Conclusions & Other Questions
  • STL Partners and Telco 2.0: Change the Game
  • Figure 1: New services are both network-integrated & independent
  • Figure 2: IMS-based services (mostly VoIP) are evolutionary not disruptive
  • Figure 3: ETSI’s vision for NFV
  • Figure 4: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 5: Virtualisation roadmap: Telefonica
  • Figure 6: 5G timeline & top-level uses
  • Figure 7: Suggested example 5G use-cases
  • Figure 8: 5G architecture will probably be virtualised from Day 1
  • Figure 9: Key 5G Research Initiatives
  • Figure 10: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 11: Proliferating wireless options for IoT
  • Figure 12: Forthcoming IoT-related wireless technologies
  • Figure 13: London bus with free WiFi sponsored by ice-cream company
  • Figure 14: Vendor landscape in turmoil as IT & network domains merge

 

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in network virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often mentioned in the same breath and are related, but they are different.

Software Defined Networking has been around as a concept since 2008 and saw its initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped achieve better utilisation of data centre operations and of data centre wide area networks: Urs Hoelzle of Google discussed Google’s deployment and findings at the OpenNet summit in early 2012, and Google claims to get 60% to 70% better utilisation out of its data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.
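
To put the utilisation point in context, the rough back-of-the-envelope calculation below (with entirely hypothetical utilisation and traffic figures, not Google’s or any operator’s) shows why higher average utilisation translates into less provisioned capacity, and hence lower cost, for the same traffic.

```python
# Rough illustration with hypothetical numbers: if SDN-style traffic engineering
# raises average link utilisation, the capacity that must be provisioned to
# carry the same traffic falls, and so does the associated capex.
baseline_utilisation = 0.40   # assumed pre-SDN average WAN utilisation
improved_utilisation = 0.65   # assumed post-SDN utilisation (~60% better)
traffic_gbps = 1_000          # hypothetical traffic to be carried

capacity_before = traffic_gbps / baseline_utilisation
capacity_after = traffic_gbps / improved_utilisation
saving = 1 - capacity_after / capacity_before

print(f"Provisioned capacity before: {capacity_before:,.0f} Gbps")
print(f"Provisioned capacity after:  {capacity_after:,.0f} Gbps")
print(f"Capacity saving:             {saving:.0%}")   # roughly 38% in this example
```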

NFV – Network Functions Virtualisation – is just over two years old, yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally, the telecoms and datacomms equipment market is worth over $180bn and has been dominated by five vendors who split around 50% of the market between them.

Innovation and competition in the networking market have been lacking: there have been very few major innovations in the last 12 years, the industry has focused on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock this steady ship by bringing to the networking market the same technologies that revolutionised IT: cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a view of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.
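
To make the idea of a single controller with a network-wide view more concrete, the toy sketch below (illustrative only; it does not reflect OpenFlow or any real controller’s API, and the switch names are invented) computes a path over a global topology and pushes simple match-action entries to each switch along the route.

```python
from collections import deque

# Illustrative only: a toy "SDN controller" with a global view of the topology.
TOPOLOGY = {                      # switch -> neighbouring switches
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def shortest_path(src, dst):
    """Breadth-first search over the controller's network-wide view."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def install_flow(flow_id, src_switch, dst_switch):
    """Push a simple match-action entry to every switch on the computed path."""
    path = shortest_path(src_switch, dst_switch)
    flow_tables = {}
    for hop, next_hop in zip(path, path[1:]):
        flow_tables[hop] = {"match": flow_id, "action": f"forward to {next_hop}"}
    flow_tables[path[-1]] = {"match": flow_id, "action": "deliver locally"}
    return flow_tables

print(install_flow("customer-42", "s1", "s4"))
```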

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data-forwarding functions; however, while SDN does this for an entire network, NFV focuses specifically on network functions such as routing, firewalls, load balancing and CPE, and looks to leverage developments in commercial off-the-shelf (COTS) hardware, such as generic server platforms using multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data-plane performance of COTS-based devices, and that performance has risen exponentially. Figure 1 shows that in just three years (2010 – 2013) a tenfold increase in packet-processing (data-plane) performance was achieved. More generally, CPU performance has been tracking Moore’s law, which holds that the number of components in an integrated circuit doubles roughly every two years; if component count is a reasonable proxy for performance, the same can be said of CPU performance. For example, the processor family Intel will ship in the second half of 2015 could have up to 72 individual CPU cores, compared with the four or six used in 2010-2013.
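
As a quick sanity check on that arithmetic, the snippet below (illustrative only, with an assumed 2010 core count) applies a doubling-every-two-years assumption; the 72-core part cited above implies even faster scaling for that particular many-core product line.

```python
# Illustrative only: project a quantity that doubles every two years.
def project(value, start_year, end_year, doubling_period_years=2):
    """Return the value after compound doubling over the elapsed years."""
    return value * 2 ** ((end_year - start_year) / doubling_period_years)

cores_2010 = 6   # assumed high-end server core count in 2010
print(f"Projected cores in 2015: {project(cores_2010, 2010, 2015):.0f}")   # ~34
```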

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012 as an industry specification group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative, not just from a cost perspective, but more importantly for what it enables telcos and service providers to do: develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle issues such as performance, management and orchestration, proofs of concept and the reference architecture, while externally, organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do it very well: switching or routing packets, acting as a network firewall, and so on. But because each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. Running network functions on a COTS-based platform makes most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just cost savings: they are about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV lie in agility rather than cost savings alone, how is that agility delivered? It comes from a number of different capabilities, for example the ability to orchestrate a number of VNFs, together with the network, to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. It provides automation and provisioning, and will interface with existing provisioning and billing platforms such as the incumbent OSS/BSS. It will allow the use and reuse of VNFs, networking objects and chains of services and, via external APIs, will allow applications to request and control the creation of specific services.
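
As a purely illustrative sketch of the kind of input an orchestrator consumes (this is not the ETSI MANO information model, and the field names are invented for the example), a service could be described declaratively and handed to the orchestration layer to instantiate:

```python
# Hypothetical, simplified service descriptor; real NFV orchestrators use much
# richer models (e.g. ETSI descriptor templates). This is only illustrative.
residential_broadband_service = {
    "service_id": "residential-broadband-basic",
    "vnf_chain": [
        {"vnf": "bras",           "flavour": "small",  "scaling": {"min": 1, "max": 4}},
        {"vnf": "authentication", "flavour": "small",  "transient": True},
        {"vnf": "firewall",       "flavour": "medium"},
        {"vnf": "dpi",            "flavour": "medium", "optional": True},
    ],
    "sla": {"max_latency_ms": 20, "availability": "99.99%"},
}

def instantiate(descriptor):
    """Stand-in for the orchestrator: walk the chain and 'deploy' each VNF."""
    for vnf in descriptor["vnf_chain"]:
        print(f"Deploying {vnf['vnf']} ({vnf['flavour']}) ...")

instantiate(residential_broadband_service)
```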

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed as part of the service. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s set-top box connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components, for example Deep Packet Inspection and load balancing, are used as required. Equally, as the user accesses other services such as media, Internet and voice, different VNFs, such as an SBC and network storage, can be brought into play.
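
The dynamic behaviour described above can be sketched in a few lines (purely illustrative; in practice service chaining is handled by the orchestrator and data-plane traffic steering, not application code): some VNFs are applied once at session set-up and then released, while others persist for the life of the connection.

```python
# Toy model of the service chain described above: transient VNFs are used once
# and released; persistent VNFs apply to traffic for the whole session.
def authentication(session):
    session["authenticated"] = True                          # checks the account, then drops out
    return session

def firewall(session):
    session.setdefault("applied", []).append("firewall")     # persists for the session
    return session

def deep_packet_inspection(session):
    session.setdefault("applied", []).append("dpi")          # used as required
    return session

def run_chain(session, transient_vnfs, persistent_vnfs):
    for vnf in transient_vnfs:        # applied once at session set-up
        session = vnf(session)
    for vnf in persistent_vnfs:       # applied to traffic for the session
        session = vnf(session)
    return session

session = run_chain({"subscriber": "stb-1234"},
                    transient_vnfs=[authentication],
                    persistent_vnfs=[firewall, deep_packet_inspection])
print(session)
```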

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Key Questions for NextGen Broadband Part 1: The Business Case

Introduction

It’s almost a cliché to talk about “the future of the network” in telecoms. We all know that broadband and network infrastructure is a never-ending continuum that evolves over time – its “future” is continually being invented and reinvented. We also all know that no two networks are identical, and that despite standardisation there are always specific differences, because countries, regulations, user-bases and legacies all vary widely.

But at the same time, the network clearly matters still – perhaps more than it has for the last two decades of rapid growth in telephony and SMS services, which are now dissipating rapidly in value. While there are certainly large swathes of the telecom sector benefiting from content provision, commerce and other “application-layer” activities, it is also true that the bulk of users’ perceived value is in connectivity to the Internet, IPTV and enterprise networks.

The big question is whether CSPs can continue to convert that perceived value from users into actual value for the bottom-line, given the costs and complexities involved in building and running networks. That is the paradox.

While the future will continue to feature a broader set of content/application revenue streams for telcos, it will also need to support not just more and faster data connections, but also to cope with a set of new challenges and opportunities. Top of the list is support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. There is a significant chance that many of these will not involve connection via the “public Internet”, so there is a possibility of new forms of connectivity proposition evolving: faster or lower-power networks, or perhaps even the semi-mythical “QoS” which, if not paid for directly, could perhaps be integrated into compelling packages and data-service bundles. There is also the potential for “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network, “orchestrated” appropriately by the operator. But does this add more value than investing in more web/OTT-style applications and services, de-coupled from the network?

Again, this raises questions about technology, business models – and the practicalities of making it happen.

This plays directly into the concept of the revenue “hunger gap” we have analysed for the past two years – without ever-better (but more efficient) networks, the telecom industry is going to get further squeezed. While service innovation is utterly essential, it also seems to be slow-moving and patchy. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Depending on the trajectory of regulatory change, we may also see more consolidation among parts of the service provider industry, more quad-play networks, more sharing and wholesale models.

We also see communications networks and applications permeating deeper into society and government. There is a sense among some policymakers that “telecoms is too important to leave up to the telcos”, with initiatives like Smart Cities and public-safety networks often becoming decoupled from the mainstream of service providers. There is an expectation that technology – and by extension, networks – will enable better economies, improved healthcare and education, safer and more efficient transport, mechanisms for combatting crime and climate change, and new industries and jobs, even as old ones become automated and robotised.

Figure 1 – New services are both network-integrated & independent

Source: STL Partners

And all of this generates yet more uncertainty, with yet more questions – some about the innovations needed to support these new visions, but also whether they can be brought to market profitably, given the starting-point we find ourselves at, with fragmented (yet growing) competition, regulatory uncertainty, political interference – and often, internal cultural barriers within the CSPs themselves. Can these be overcome?

A common theme from the section above is “Questions”. This document – and a forthcoming “sequel” – is intended to group, lay out and introduce the most important ones. Most observers tend to focus on just a few areas of uncertainty, but in setting up the next year or so of detailed research, Telco 2.0 wants to list and articulate all of the hottest issues in full. Only once they are collated can we start to work out the priorities – and the inter-dependencies.

Our belief is that all of the detailed questions on “Future Networks” can, in fact, be tied back to one of two broader, overarching themes:

  • What are the business cases and operational needs for future network investment?
  • Which disruptions (technological or other) are expected in the future?

The business case theme is covered in this document. It combines future costs (spectrum, 4G/5G/fibre deployments, network-sharing, virtualisation, BSS/OSS transformation etc.) and revenues (data connectivity, content, network-integrated service offerings, new Telco 2.0-style services and so on). It also encompasses what is essential to make the evolution achievable, in terms of organisational and cultural transformation within telcos.

A separate Telco 2.0 document, to be published in coming weeks, will cover the various forthcoming disruptions. These are expected to include new network technologies that will ultimately coalesce to form 5G mobile and new low-power wireless, as well as FTTx and DOCSIS cable evolution. In addition, virtualisation in both NFV and SDN guises will be hugely transformative.

There is also a growing link between mobile and fixed domains, reflected in quad-play propositions, industry consolidation, and the growth of small-cells and WiFi with fixed-line backhaul. In addition, to support future service innovation, there need to be adequate platforms for both internal and external developers, as well as a meaningful strategy for voice/video which fits with both network and end-user trends. Beyond the technical, additional disruption will be delivered by regulatory change (for example on spectrum and neutrality), and also a reshaped vendor landscape.

The remainder of this report lays out the first five of the Top 10 most important questions for the Future Network. We can’t give definitive analyses, explanations or “answers” in a report of this length – and indeed, many of them are moving targets anyway. But by taking a holistic approach to laying out each question properly – where it comes from, and what the “moving parts” are – we help to define the landscape. The objective is to help management teams apply those same filters to their own organisations, understand how costs can be controlled and revenues garnered, see where consolidation and regulatory change might help or hinder, and deal with users’ and governments’ increasing expectations.

The 10 Questions also lay the ground for our new Future Network research stream, forthcoming publications and comment/opinion.

Overview: what is the business case for Future Networks?

As later sections of both this document and the second in the series cover, there are various upcoming technical innovations in the networking pipeline. Numerous advanced radio technologies underpin 4.5G and 5G, there is ongoing work to improve fibre and DSL/cable broadband, virtualisation promises much greater flexibility in carrier infrastructure and service enablement, and so on. But all those advances are predicated on either (ideally) more revenues, or at least reduced costs to deploy and operate. All require economic justification for investment to occur.

This is at the core of the Future Networks dilemma for operators – what is the business case for ongoing investment? How can the executives, boards of directors and investors be assured of returns? We all know about the ongoing shift of business & society online, the moves towards smarter cities and national infrastructure, changes in entertainment and communication preferences and, of course, the Internet of Things – but how much benefit and value might accrue to CSPs? And is that value driven by network investments, or should telecom companies re-focus their investments and recruitment on software, content and the cloud?

This is not a straightforward question. There are many in the industry that assert that “the network is the key differentiator & source of value”, while others counter that it is a commodity and that “the real value is in the services”.

What is clear is that better/faster networks will be needed in any case, to achieve some of the lofty goals that are being suggested for the future. However, it is far from clear how much of the overall value-chain profit can be captured from just owning the basic machinery – recent years have shown a rapid de-coupling of network and service, apart from a few areas.

In the past, networks largely defined the services offered – most notably broadband access, phone calls and SMS, as well as cable TV and IPTV. But with the ubiquitous rise of Internet access and service platforms/gateways, an ever-increasing amount of service “logic” is located on the web, or in the cloud – not enshrined in the network itself. This is an important distinction – some services are abstracted and designed to be accessed from any network, while others are intimately linked to the infrastructure.

Over the last decade, the prevailing shift has been towards network-independent services. In many ways “the web has won”. Potentially this trend may reverse in future, though, as servers and virtualised, distributed cloud capabilities get pushed down into localised network elements. That, however, brings its own new complexities, uncertainties and challenges – it is a brave (or foolhardy) telco CEO that would bet the company on new in-network service offers alone. We will also see API platforms expose network “capabilities” to the web/cloud – for example, W3C is working on standards to allow web developers to gain insights into network congestion, or users’ data-plans.

But currently, the trend is for broadband access and (most) services to be de-coupled. Nonetheless, some operators seem to have been able to make clever pricing, distribution and marketing decisions (supported by local market conditions and/or regulation) to enable bundles to be made desirable.

US operators, for example, have generally fared better than European CSPs, in what should have been comparably-mature markets. But was that due to a faster shift to 4G networks? Or other factors, such as European telecom fragmentation and sub-scale national markets, economic pressures, or perhaps a different legacy base? Did the broad European adoption of pre-paid (and often low-ARPU) mobile subscriptions make it harder to justify investments on the basis of future cashflows – or was it more about the early insistence that 2.6GHz was going to be the main “4G band”, with its limitations later coming back to bite people? It is hard to tease apart the technology issues from the commercial ones.

Similar differences apply in the fixed-broadband world. Why has adoption and typical speed varied so much? Why have some markets preferred cable to DSL? Why are fibre deployments patchy and very nation-specific? Is it about the technology involved – or the economy, topography, government policies, or the shape of the TV/broadcast sector?

Understanding these issues – and, once again, articulating the questions properly – is core to understanding the future for CSPs’ networks. We are in the middle of 4G rollout in most countries, with operators looking at the early requirements for 5G. SDN and NFV are looking important – but their exact purpose, value and timing still remain murky, despite the clear promises. Can fibre rollouts – FTTC or FTTH – still be justified in a world where TV/video spend is shifting away from linear programming and towards online services such as Netflix?

Given all these uncertainties, it may be that network investments get slowed down – or else that consolidation, government subsidy or other top-level initiatives are needed to stimulate them. On the other hand, it could be that reduced capex and opex – perhaps through outsourcing, sharing, software-based platforms or even open-source technology – make the numbers work out well, even for raw connectivity. Certainly, the last few years have seen rising expenditure by end-users on mobile broadband, even if it has also contributed to the erosion of legacy services such as telephony and SMS by enabling more modern/cheaper rivals. We have also seen a shift to lower-cost network equipment and software suppliers, and an emphasis on “off the shelf” components, or open interfaces, to reduce lock-in and encourage competition.

The following sub-sections each frame a top-level, critical question relating to the business case for Future Networks:

  • Will networks support genuinely new services & enablers/APIs, or just faster/more-granular Internet access?
  • Speed, coverage, performance/QoS… what actually generates network value? And does this derive from customer satisfaction, new use-cases, or other sources?
  • Does quad-play and fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation and culture: what needs to change to support future network investments?

 

  • Executive Summary
  • Introduction
  • Overview: what is the business case for Future Networks?
  • Supporting new services or just faster Internet?
  • Speed, coverage, quality…what is most valuable?
  • Does quad-play & fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation & culture: what changes?
  • Conclusions

 

  • Figure 1 – New services are both network-integrated & independent
  • Figure 2 – Mobile data device & business model evolution
  • Figure 3 – Some new services are directly enabled by network capabilities
  • Figure 4 – Network investments ultimately need to map onto customers’ goals
  • Figure 5 – Customers put a priority on improving indoor/fixed connectivity
  • Figure 6 – Notional “coverage” does not mean enough capacity for all apps
  • Figure 7 – Different operator teams have differing visions of the future
  • Figure 8 – “Software telcos” may emulate IT’s “DevOps” organisational dynamic

 

Winning Strategies: Differentiated Mobile Data Services

Introduction

Verizon’s performance in the US

Our work on the US cellular market – for example, in the Disruptive Strategy: “Uncarrier” T-Mobile vs VZW, AT&T, and Free.fr and Free-T-Mobile: Disruptive Revolution or a Bridge Too Far? Executive Briefings – has identified that US carrier strategies are diverging. The signature of a price-disruption event we identified with regard to France was that industry-wide ARPU was falling, subscriber growth was unexpectedly strong (amounting to a substantial increase in penetration), and there was a shakeout of minor operators and MVNOs.

Although there are strong signs of a price war – for example, falling ARPU industry-wide, resumed subscriber growth, minor operators exiting, and subscriber-acquisition initiatives such as those at T-Mobile USA, worth as much as $400-600 in handset subsidy and service credit – it seems that Verizon Wireless is succeeding while staying out of the mire, while T-Mobile, Sprint, and minor operators are plunged into it, and AT&T may be going that way too. Figure 1 shows monthly ARPU, converted to Euros for comparison purposes.

Figure 1: Strategic divergence in the US

Source: STL Partners, themobileworld.com

We can also look at this in terms of subscribers and in terms of profitability, bringing in the cost side. The following chart, Figure 2, plots margins against subscriber growth, with the bubbles set proportional to ARPU. The base year 2011 is set to 100 and the axes are set to the average values. We’ve named the four quadrants that result appropriately.

Figure 2: Four carriers, four fates

Source: STL Partners
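
For readers who want to apply the same screen to their own data, a minimal sketch follows; the operator names and figures are entirely hypothetical and are not the data behind Figure 2.

```python
import matplotlib.pyplot as plt

# Entirely hypothetical data, indexed to 2011 = 100, purely to illustrate the
# margin-vs-subscriber-growth "screen" with bubble size proportional to ARPU.
operators = {
    #            (subscriber index, margin index, ARPU index)
    "Carrier A": (115, 112, 108),
    "Carrier B": (112, 96, 95),
    "Carrier C": (122, 88, 80),
    "Carrier D": (101, 90, 92),
}

subs = [v[0] for v in operators.values()]
marg = [v[1] for v in operators.values()]
arpu = [v[2] for v in operators.values()]

fig, ax = plt.subplots()
ax.scatter(subs, marg, s=[a * 5 for a in arpu], alpha=0.5)   # bubble size ~ ARPU
for name, (x, y, _) in operators.items():
    ax.annotate(name, (x, y))

# The axes cross at the average values, creating the four quadrants.
ax.axvline(sum(subs) / len(subs), color="grey", linestyle="--")
ax.axhline(sum(marg) / len(marg), color="grey", linestyle="--")
ax.set_xlabel("Subscribers (2011 = 100)")
ax.set_ylabel("Margin (2011 = 100)")
plt.show()
```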

Clearly, you’d want to be in the top-right, top-performer quadrant, showing subscriber growth and growing profitability. Ideally, you’d also want to be growing ARPU. Verizon Wireless is achieving all three, moving steadily north-east and climbing the ARPU curve.

At the same time, AT&T is gradually being drawn into the price war, getting closer to the lower-right “volume first” quadrant. Deep within that one, we find T-Mobile, which slid from a defensive crouch in the upper-left into the hopeless lower-left zone and then escaped via its price-slashing strategy. (Note that the last lot of T-Mobile USA results were artificially improved by a one-off spectrum swap.) And Sprint is thrashing around, losing profitability and going nowhere fast.

The usual description for VZW’s success is “network differentiation”. They’re just better than the rest, and as a result they’re reaping the benefits. (ABI, for example, reckons that they’re the world’s second most profitable operator on a per-subscriber basis  and the world’s most profitable in absolute terms.) We can restate this in economic terms, saying that they are the most efficient producer of mobile service capacity. This productive capacity can be used either to cut prices and gain share, or to increase quality (for example, data rates, geographic coverage, and voice mean-opinion score) at higher prices. This leads us to an important conclusion: network differentiation is primarily a cost concept, not a price concept.
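
A rough worked example (with hypothetical unit costs and prices, not VZW’s actuals) illustrates why a lower cost of producing capacity can be spent either on price or on quality.

```python
# Hypothetical illustration of "network differentiation is a cost concept":
# a more efficient producer of capacity can cut price or add quality
# and still earn at least its rival's margin per gigabyte.
efficient_cost_per_gb = 1.00   # assumed unit cost for the efficient operator
laggard_cost_per_gb   = 1.50   # assumed unit cost for a less efficient rival
market_price_per_gb   = 2.00   # assumed prevailing retail price

efficient_margin = market_price_per_gb - efficient_cost_per_gb   # 1.00
laggard_margin   = market_price_per_gb - laggard_cost_per_gb     # 0.50

# Option 1: undercut the rival's breakeven price and still earn a margin.
price_headroom = laggard_cost_per_gb - efficient_cost_per_gb
# Option 2: reinvest the cost advantage in quality (coverage, speed) at the same price.
quality_budget_per_gb = efficient_margin - laggard_margin

print(f"Headroom to cut price:        ${price_headroom:.2f}/GB")
print(f"Headroom to spend on quality: ${quality_budget_per_gb:.2f}/GB")
```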

If there are technical or operational choices that make network differentiation possible, they can be deployed anywhere. It’s also possible, though, that VZW is benefiting from structural factors, perhaps its ex-incumbent status, or its strong position in the market for backbone and backhaul fibre, or perhaps just its scale (although in that case, why is AT&T doing so much worse?). And another possibility often mooted is that the US is somehow a better kind of mobile market. Less competitive (although this doesn’t necessarily show up in metrics like the Herfindahl index of concentration), supposedly less regulated, and undoubtedly more profitable, it’s often held up by European operators as an example. Give us the terms, they argue, and we will catch up to the US in LTE deployment.

As a result, it is often argued in lobbying circles that European markets are “too competitive” or in need of “market repair”, and therefore, the argument runs, the regulator ought to turn a blind eye to more consolidation or at least accept a hollowing out of national operating companies. More formally, the prices (i.e. ARPUs) prevailing do not provide a sufficient margin over operators’ fixed costs to fund discretionary investment. If this was true, we would expect to find little scope for successful differentiation in Europe.

Further, if the “incumbent advantage” story was true of VZW over and above the strategic moves that it has made, we might expect to find that ex-incumbent, converged operators were pulling into the lead across Europe, benefiting from their wealth of access and backhaul assets. In this note, we will try to test these statements, and then assess what the answer might be.

How do European Operators compare?

We selected a clutch of European mobile operators and applied the same screen to identify what might be happening. In doing so we chose to review the UK, German, French, Swedish, and Italian markets jointly with the US, in an effort to avoid a purely European crisis-driven comparison.

Figure 3: Applying the screen to European carriers


Source: STL Partners

Our first observation is that the difference between European and American carriers has been more about subscriber growth than about profitability. The axes are set to the same values as in Figure 2, and the data points are concentrated to their left (showing less subscriber growth in Europe) rather than below them (which would show less profitability growth).

Our second observation is that yes, there certainly are operators who are delivering differentiated performance in the EU. But they’re not the ones you might expect. Although the big converged incumbents, like T-Mobile Germany, have strong margins, they’re not increasing them and on the whole their performance is average only. Nor is scale a panacea, which brings us to our next observation.

Our third observation is that something is visible at this level that isn’t in the US: major opcos that are shrinking. Vodafone, not a company that is short of scale, gets no fewer than three of its OpCos into the lower-left quadrant. We might say that Vodafone Italy was bound to suffer in the context of the Italian macro-economy, as was TIM, but Vodafone UK is in there, and Vodafone Germany is moving steadily further left and down.

And our fourth observation is the opposite: significant growth. Hutchison’s UK OpCo, 3UK, shows strong performance growth, despite being a fourth operator with no fixed assets and starting with LTE after first-mover EE. Its sibling 3 Sweden is also doing well, while even 3 Italy was climbing up until the last quarter and remains a valid price warrior. They are joined in the power quadrant with VZW by Telenor’s Swedish OpCo, Telia Mobile, and O2 UK (in the last two cases, only marginally). EE, for its part, has only marginally gained subscribers, but it has strongly increased its margins, and it may yet make it.

But if you want really dramatic success, or if you doubt that Hutchison could do it, what about Free? The answer is that they’re literally off the chart. In Figure 4, we add Free Mobile, but we can only plot the first few quarters. (Interestingly, since then, Free seems to be targeting a mobile EBITDA margin of exactly 9%.)

The distinction here is between the pure-play, T-Mobile-like price warriors in the lower-right quadrant, who are sacrificing profitability for growth, and the group we’ve identified, who are improving their margins even as they gain subscribers. This is the signature of significant operational improvement: an operator that can move traffic more efficiently than its competitors. Because the data traffic keeps coming, growing at the typical 40% annual clip, any operator has to keep improving just to survive. It is therefore the pace of improvement that marks operational excellence, not improvement per se.
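
The arithmetic behind that statement is stark; a short illustration (assuming the 40% annual growth rate quoted above) shows how quickly unit costs must fall just to keep total network cost flat.

```python
# Illustrative arithmetic: at ~40% annual traffic growth, an operator must cut
# its unit cost of carrying data sharply just to hold total network cost constant.
annual_traffic_growth = 0.40
years = 5

traffic_multiple = (1 + annual_traffic_growth) ** years
required_unit_cost = 1 / traffic_multiple    # to keep total cost flat

print(f"Traffic after {years} years: {traffic_multiple:.1f}x today's level")
print(f"Unit cost must fall to {required_unit_cost:.0%} of today's level "
      "just to keep total cost flat")
```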

Figure 4: Free Mobile, a disruptive force that’s literally off the charts


Source: STL Partners

We can also look at this at the level of the major multinational groups. Again, Free’s very success presents a problem for clarity in this analysis – even as part of a virtual group of independents, the ‘Indies’ in Figure 5, it is difficult to visualise. T-Mobile USA’s savage price cutting, though, gets averaged out, and the inclusion of EE boosts the result for Orange and DTAG. It also becomes apparent that the “market repair” story has a problem, in that there isn’t a major group committed to hard discounting. But Hutchison’s, Telenor’s and Free’s excellence, and Vodafone’s pain, stand out.

Figure 5: The differences are if anything more pronounced within Europe at the level of the major multinationals


Source: STL Partners

In the rest of this report we analyse why and how these operators (3UK, Telenor Sweden and Free Mobile) are managing to achieve such differentiated performance, identify the common themes in their strategic approaches, draw out the lessons from comparison with their peers, and set out the important wider consequences for the market.

 

  • Executive Summary
  • Introduction
  • Applying the Screen to European Mobile
  • Case study 1: Vodafone vs. 3UK
  • 3UK has substantially more spectrum per subscriber than Vodafone
  • 3UK has much more fibre-optic backhaul than Vodafone
  • How 3UK prices its service
  • Case study 2: Sweden – Telenor and its competitors
  • The network sharing issue
  • Telenor Sweden: heavy on the 1800MHz
  • Telenor Sweden was an early adopter of Gigabit Ethernet backhaul
  • How Telenor prices its service
  • Case study 3: Free Mobile
  • Free: a narrow sliver of spectrum, or is it?
  • Free Mobile: backhaul excellence through extreme fixed-mobile integration
  • Free: the ultimate in simple pricing
  • Discussion
  • IP networking metrics: not yet predictive of operator performance
  • Network sharing does not obviate differentiation
  • What is Vodafone’s strategy for fibre in the backhaul?
  • Conclusions

 

  • Figure 1: Strategic divergence in the US
  • Figure 2: Four carriers, four fates
  • Figure 3: Applying the screen to European carriers
  • Figure 4: Free Mobile, a disruptive force that’s literally off the charts
  • Figure 5: The differences are if anything more pronounced within Europe at the level of the major multinationals
  • Figure 6: Although Vodafone UK and O2 UK share a physical network, O2 is heading for VZW-like territory while VF UK is going nowhere fast
  • Figure 7: Strategic divergence in the UK
  • Figure 8: 3UK, also something of an ARPU star
  • Figure 9: 3UK is very different from Hutchison in Italy or even Sweden
  • Figure 10: 3UK has more spectrum on a per-subscriber basis than Vodafone
  • Figure 11: Vodafone’s backhaul upgrades are essentially microwave; 3UK’s are fibre
  • Figure 12: 3 Europe is more than coping with surging data traffic
  • Figure 13: 3UK service pricing
  • Figure 14: The Swedish market shows a clear winner…
  • Figure 15: Telenor.se is leading on all measures
  • Figure 16: How Swedish network sharing works
  • Figure 17: Network sharing does not equal identical performance in the UK
  • Figure 18: Although extensive network sharing complicates the picture, Telenor Sweden has a strong position, especially in the key 1800MHz band
  • Figure 19: If the customers want more data, why not sell them more data?
  • Figure 20: Free Mobile, network differentiator?
  • Figure 21: Free Mobile, the price leader as always
  • Figure 22: Free Mobile succeeds with remarkably little spectrum, until you look at the allocations that are actually relevant to its network
  • Figure 23: Free’s fixed-line network plans
  • Figure 24: Free leverages its FTTH for outstanding backhaul density
  • Figure 25: Free: value on 3G, bumper bundler on 4G
  • Figure 26: The carrier with the most IPv4 addresses per subscriber is…
  • Figure 27: AS_PATH length – not particularly predictive either
  • Figure 28: The buzzword count. “Fibre” beats “backhaul” as a concern
  • Figure 29: Are Project Spring’s targets slipping?