Cyber security: What will consumers pay for?

More connected lives, more cyber risks

The extent to which people live their lives online today can be summed up in LocaliQ’s internet minute statistics. Nine million searches happen on Google every minute. Facebook is the world’s third most visited website, with three billion monthly active users spending 38 minutes per day on the site and clicking on an average of 12 ads per month. Some 251 million apps are downloaded per day, and more than six million people are shopping online every minute, with $4,722 spent every second on Amazon.

STL Partners highlighted the growing dominance of Wi-Fi in the home in Consumer Wi-Fi: Faster, smarter and near-impossible to replace, and the operator strategies to improve Wi-Fi experience with smart Wi-Fi apps and partnerships with value-add players such as Plume. Connectivity in the home has become even more important since the COVID-19 pandemic, as customers took on entertainment subscriptions (TV and gaming) and added smart TVs, cameras, doorbells, lights, and speakers (with voice assistants) to their homes. According to Plume, smartphones (including “guest” phones) are the most prevalent devices in the home, with an average of six per household. This is followed by computers (2.6 per household), tablets (1.3), smart TVs (1.1) and set-top boxes (1).

The graphic below highlights the growth in smart home IoT devices between the first half of 2021 and 2022, with 55% more cameras, 43% more doorbells, and 25% more smart bulbs as customers invest in making their homes more comfortable and secure. The average number of connected devices across Plume’s customer base of 41 million homes grew to 17.1 in the first half of 2022, up from 15.5 in the first half of 2021. This figure is likely higher than the average household, as those with more devices are more likely to want a premium smart home Wi-Fi management set-up, but it is still indicative of growth trends.

Growth in devices between H1 2021 and H1 2022


Source: Plume smart home market report – August 2022

With 40% of EU workers switching to working from home during COVID-19, the take-up of digital technology has had a permanent effect on everyday life. IoT devices and digital technologies are projected to become increasingly embedded in daily life in the coming years. Estimates of the number of connected devices by 2025 range from 25 billion (GSMA) to 42 billion (IDC). The increasing volume and wide range of connected devices, with varying hardware and software standards, increases the attack surface for malicious actors, who can inflict significant emotional and financial damage on consumers, their families and their employers.


A complex cybersecurity threat landscape

Cybersecurity Ventures – a leading researcher on the global cyber economy and publisher of Cybercrime Magazine – estimates that organisations suffered a ransomware attack every 11 seconds in 2021, and forecasts that an attack on a consumer or business will happen every two seconds by 2031. The majority of cybercrimes are believed to go unreported by victims due to embarrassment, potential reputational harm and a perception that legal authorities cannot help. Even in a gaming community, a micropayment of less than $1 for a prize or item that never appears may go unreported because of the low cost of the transaction, but such scams can be very lucrative for cybercriminals if enough gamers fall victim to the trick.
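Translating these attack frequencies into annual volumes makes the scale concrete. A rough back-of-the-envelope sketch using the per-second figures above:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def attacks_per_year(seconds_between_attacks: float) -> int:
    """Convert an 'attack every N seconds' rate into an annual attack count."""
    return round(SECONDS_PER_YEAR / seconds_between_attacks)

print(attacks_per_year(11))  # 2021 ransomware rate: ~2.87 million attacks a year
print(attacks_per_year(2))   # 2031 forecast rate: ~15.8 million attacks a year
```

In other words, the forecast implies a more than five-fold increase in attack volume over the decade.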

Cybersecurity Ventures forecasts that this rising tide of global cybercrime will inflict damages of $10.5 trillion annually by 2025. The firm highlights that, if measured as a country, cybercrime would have the third-largest GDP after the USA and China.

The European Union Agency for Cybersecurity, ENISA, reports on the current cyber threats facing European consumers and businesses. In its latest 2022 threat landscape report (covering July 2021 to June 2022) it identified the eight prime threats shown in the graphic below. These include:

  • Ransomware, where bad actors take control of an organisation’s or individual’s assets and demand a ransom in exchange for their return and for the confidentiality of the information. An attack could involve locking out the user or encrypting, deleting or stealing the data. The most common attack vectors are phishing e-mails and brute-force attacks on Remote Desktop Protocol (RDP). Cybersecurity Ventures estimates ransomware will cost victims $265bn annually by 2031.
  • Malware is commonly defined as “software, firmware or code intended to perform a malicious unauthorised process that will have an adverse impact on the confidentiality, integrity, or availability of a system”. Malware comes in the form of a virus, worm, trojan or other software code that can negatively affect a host computer or mobile device; spyware and adware are considered subsets of this category. Malware can allow actors to take remote control of a system, deploy skimmers, steal information, or enlist devices into botnets that carry out attacks such as distributed denial of service (DDoS). According to ENISA, malware attacks rose in 2022 after a decline in the previous reporting period (2020 and 2021). The decline had been linked to increased working from home during the pandemic; while the rise could be attributed to workers returning to the office, ENISA also points out that there has simply been more malware in circulation.

One of the best-known malware threats is Pegasus, spyware delivered via a WhatsApp exploit that can affect both iPhone and Android phones and can be used to access messages, photos and e-mails, record calls and activate the microphone.

  • Most mobile malware comes from malicious applications downloaded and installed by users. In 2021 fake ad blockers or adware were common on Android. These ad-blocking apps, typically downloaded from third-party app stores and online forums, can request extensive permissions when being installed.

ENISA reported a rise in malware from crypto-jacking (the unauthorised use of devices to mine cryptocurrency – described further below) and IoT malware. In the first six months of 2022, the malware attack volume on IoT was higher than recorded over the previous four years, with Mirai botnets responsible for most (seven million) attacks. ENISA reported that in 2021 and 2022 the most common IoT targets were networking devices such as Netgear (DGN), D-Link (HNAP) and Dasan (GPON).

  • In 2021 Flubot (a banking trojan delivered via fake SMS messages claiming to be from banks or government organisations) was a prevalent form of mobile malware and lured many Android phone customers into downloading nefarious applications.

ENISA Threat Landscape 2022 – prime threats


Source: ENISA Threat Landscape report 2022

  • Social engineering attacks target weaknesses in human behaviour, where bad actors exploit an individual’s trust in communication and in their online habits. These attacks consistently rank high according to ENISA. The most common threat vectors for social engineering attacks include phishing, spear-phishing (targeting specific individuals or businesses), whaling (targeting individuals in high positions such as executives and politicians), smishing (SMS-based phishing), vishing (voice-call phishing, where sensitive information is given over the phone), business e-mail compromise (BEC) and spam. ENISA reported that phishing was the most common vector for initial access in 2022, a rise attributed to more advanced and sophisticated phishing practices, user fatigue, and more targeted, context-based phishing.
    • E-mail may be used by bad actors to carry out man-in-the-middle attacks, effectively eavesdropping on users: an innocent-looking link gives the attacker access to an e-mail account, allowing them to intercept messages between two people and steal data. A man-in-the-middle attack can also take place over an unsecured Wi-Fi network, where the attacker intercepts data transmitted from a user’s device over the network.
  • Threats against data refer to data breaches or leaks of sensitive, confidential or protected information to bad actors / hackers, occurring through cyberattacks, insider jobs, or unintentional loss or exposure of data. This includes data theft or identity theft, where personally identifiable information (PII) is stolen and used to impersonate an individual; it also usually results in hack attempts on personal online accounts as well as spam e-mail, spam calls and SMS. Customers can check whether their personal data has been exposed on the dark web due to a breach using the free online service Have I Been Pwned. Similar resources are also offered by consumer cyber safety players.
  • Threats against availability occur when users of a system or service cannot access the relevant data from that service or system. This is most commonly achieved through distributed denial-of-service (DDoS) attacks, which prevent users from accessing a website or system by overloading the website or network with requests, resulting in degraded service performance, loss of data and outages. The attack has been in use for over 20 years, with many criminals using it to extort ransoms from organisations, and it is increasingly being used as part of state-sponsored attacks. ENISA highlighted that traditional DDoS attacks are increasingly moving towards mobile networks and IoT, where (IoT) devices have limited resources and poor security protection. Threats against the availability of the internet were cited in the context of the Russian invasion of Ukraine, where access to the internet and websites has been curtailed in certain occupied cities: captured internet infrastructure has been used to re-route internet traffic over Russian networks, censor (Western) websites and shut down Ukrainian mobile networks.
  • Disinformation includes the creation and sharing of false information, usually via social media. In recent years a number of websites and digital platforms have presented false or erroneous information to serve a particular agenda, and these sites are generally amplified through sharing on social media channels. ENISA pointed to the war between Russia and Ukraine as one example of current disinformation targeting people’s perception of the status of the war. Wrong and purposely falsified information can also be mistakenly shared, which is where the distinction between misinformation and disinformation comes in: misinformation is the unintentional sharing or reporting of inaccurate information in good faith, while disinformation is an intentional attack in which false or misleading information is deliberately created and shared.
  • Supply-chain attacks refer to the targeting of individuals’, groups’ or organisations’ hardware and software resources, including cloud storage, web applications, online stores and management software. A supply-chain attack is usually a combination of at least two attacks: the first on the supplier, to access its assets, followed by attacks on the supplier’s own network of customers and suppliers. The most recent high-profile example was the SolarWinds attack in 2020.
    • Cryptojacking, or hidden crypto-mining, occurs when a hacker secretly uses a victim’s computing power to generate cryptocurrency after the victim mistakenly and unwittingly downloads malicious software. Cryptocurrency is popular with attackers due to its anonymity and its use as payment in ransomware attacks. Crypto-crime – i.e. crimes involving cryptocurrencies – is predicted to cost the global economy $30bn in 2025 according to Cybersecurity Ventures, while Chainalysis estimated that crypto-scams (i.e. rug pulls on fake crypto projects) generated revenue of more than $7.7bn in 2021, making them one of the largest types of cryptocurrency-based crime.
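As an illustration of how breach-exposure checks such as Have I Been Pwned can work without revealing the secret being checked, its companion Pwned Passwords API uses a k-anonymity scheme: the client sends only the first five characters of the password’s SHA-1 hash and matches the returned hash suffixes locally. A minimal sketch (the endpoint URL is the service’s documented public one; error handling is omitted):

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how often the password appears in known breaches (0 if absent)."""
    prefix, suffix = sha1_prefix_suffix(password)
    # Only the 5-character hash prefix ever leaves the machine (k-anonymity).
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

Because only the five-character prefix is transmitted, the service never learns which password (or hash) was actually being checked.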

Attacks affecting customers’ identity, privacy, and financial and emotional wellbeing

Threats such as ransomware, malware, phishing, man-in-the-middle and social engineering have given rise to fears of identity theft and financial losses as a result of hacked bank, e-mail and social media accounts. In the US, for example, the Identity Theft Resource Center (ITRC) reported a sharp rise (1,000% in a year) in social media account takeovers, with criminals not only taking over existing bank accounts but also setting up new bank and credit accounts using information stolen in data breaches and phishing attacks. In a snap survey of 97 people who contacted the ITRC over a social media account takeover, 66% reported strong emotional reactions to losing access to their social media account.

Snap Survey of social media account takeover victims in 2021


Source: Identity Theft Resource Center

Table of Contents

  • Executive Summary
    • The threat landscape in an increasingly connected life
    • How to build successful cyber security services
    • A digital life security opportunity
  • More connected lives, more cyber risks
    • A complex cybersecurity threat landscape
    • Are consumers willing to pay for cybersecurity?
  • Operator cybersecurity propositions
    • Vodafone’s Secure Net
    • Telia Security package
    • Telefónica – Secure Connection
    • NOS Portugal
    • MEO Portugal
    • Safe Net
    • Deutsche Telekom
    • AT&T USA
    • Comcast
    • MTS Russia
    • SmarTone Hong Kong
    • A1 Austria
  • Conclusions



Telco Cloud Deployment Tracker: Will vRAN eclipse pure open RAN?

Is vRAN good enough for now?

In this October 2022 update to STL Partners’ Telco Cloud Deployment Tracker, we present data and analysis on progress with deployments of vRAN and open RAN. It is fair to say that open RAN (virtualised AND disaggregated RAN) deployments have not happened at the pace that STL Partners and many others had forecast. In parallel, some very significant deployments and developments are occurring with vRAN (virtualised NOT disaggregated RAN). Is open RAN a networking ideal that is not yet, or never will be, deployed in its purest form?

In our Telco Cloud Deployment Tracker, we track deployments of three types of virtualised RAN:

  1. Open RAN / O-RAN: Open, disaggregated, virtualised / cloud-native, with baseband (BU) functions distributed between a Central Unit (CU: control plane functions) and Distributed Unit (DU: data plane functions)
  2. vRAN: Virtualised and distributed CU/DU, with open interfaces but implemented as an integrated, single-vendor platform
  3. Cloud RAN (C-RAN): Single-vendor, virtualised / centralised BU, or CU only, with proprietary / closed interfaces

Cloud RAN is the most limited form of virtualised RAN: it is based on porting part or all of the functionality of the legacy, appliance-based BU into a Virtual Machine (VM). vRAN and open RAN are much more significant, in both technology and business-model terms, breaking open all parts of the RAN to more competition and opportunities for innovation. They are also cloud-native functions (CNFs) rather than VM-based.
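The distinctions between the three categories can be captured in a couple of attributes per deployment type. The sketch below is our own illustrative encoding, not an industry schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RanType:
    name: str
    virtualised: bool      # baseband functions run as software (VM or cloud-native)
    disaggregated: bool    # CU/DU split open to multi-vendor implementation
    open_interfaces: bool  # interfaces published rather than proprietary / closed

RAN_TYPES = [
    RanType("Open RAN / O-RAN", True, True, True),
    RanType("vRAN", True, False, True),               # open interfaces, single-vendor
    RanType("Cloud RAN (C-RAN)", True, False, False),
]

def openness_rank(t: RanType) -> int:
    """Simple score: one point per 'open' attribute beyond virtualisation."""
    return int(t.disaggregated) + int(t.open_interfaces)

most_open = max(RAN_TYPES, key=openness_rank)
print(most_open.name)  # → Open RAN / O-RAN
```

All three types share virtualisation; what separates them is how far the RAN is broken open to multiple vendors.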


2022 was meant to be the breakthrough year for open RAN: what happened?

  • Of the eight deployments of open RAN we were expecting to go live in 2022 (shown in the chart below), only three had done so by the time of writing.
  • Two of these were on the same network: Altiostar and Mavenir RAN platforms at DISH. The other was a converged Parallel Wireless 2G / 3G RAN deployment for Orange Central African Republic.
  • This is hardly the wave of 5G open RAN, macro-network roll-outs that the likes of Deutsche Telekom, Orange, Telefónica and Vodafone originally committed to for 2022. What has gone wrong?
  • Open RAN has come up against a number of thorny technological and operational challenges, which are well known to open RAN watchers:
    • integration challenges and costs
    • hardware performance and optimisation
    • immature ecosystem and unclear lines of accountability when things go wrong
    • unproven at scale, and absence of economies of scale
    • energy efficiency shortcomings
    • need to transform the operating model and processes
    • pressured 5G deployment and Huawei replacement timelines
    • absence of mature, open, horizontal telco cloud platforms supporting CNFs.
  • Over and above these factors, open RAN is arguably not essential for most of the 5G use cases it was expected to support.
  • This can be gauged by looking at some of the many open RAN trials that have not yet resulted in commercial deployments.

Global deployments of C-RAN, vRAN and open RAN, 2016 to 2023


Source: STL Partners



Delivering on SD-WAN: How to choose the right partners

SD-WAN has been made in North America…

65% of the North American operators featured in our Telco Cloud Tracker had deployed SD-WAN by the end of 2020

By contrast, 49% of Asia-Pacific-based telcos had launched SD-WAN in their region by the same time, and 44% of European telcos were offering SD-WAN within Europe

As this market matures, operators that are new to it, or seeking to expand their services internationally, should choose an SD-WAN platform that will enable them to differentiate in their local markets or play to their strengths.



Challenges for telcos considering introducing SD-WAN

  1. Lack of relevant skills or experience: telcos worry about the risks of ‘outsourcing’ a significant part of their WAN services, operations and infrastructure to an SD-WAN vendor, and about integration with BSS / OSS, etc.
    • Leading SD-WAN vendors collaborate closely with telcos to facilitate integration of their platforms with telcos’ networks and services
    • SD-WAN platforms provide management interfaces that are easy for non-technical staff to operate, and offer visibility into application workflows and network KPIs
  2. How to differentiate SD-WAN service: how to offer USPs for the local market and differentiate from competitors
    • Ensure you choose an SD-WAN platform that suits the key needs of your customer base (see competitive analysis in next section)
    • Differentiation can also be achieved through the services telcos and vendors offer around SD-WAN products, e.g. good local market and language support
  3. Absence of appropriate infrastructure, facilities and networks: e.g. lack of fixed broadband networks; insufficient SD-WAN platform support for LTE / 5G
    • Many SD-WAN platforms offer LTE and 5G connectivity mainly as a back-up to IP-MPLS and fixed broadband. But many telcos, especially in emerging markets, serve enterprise sites through FWA. How well do platforms support this?
    • Many SD-WAN platforms rely on redundant connectivity to cloud-based hubs: are these always available for telcos serving remote areas?
  4. Risk of cannibalising enterprise revenues and compromising ROI from existing products and assets: e.g. IP-MPLS; IP-VPN; dedicated Internet; etc.
    • Telcos can offer different classes of SD-WAN at different price points, inc. overlay-only services to clients that want them
    • SD-WAN now seen as a value-add to IP-MPLS, for which a premium can be charged: can be integrated with telcos’ managed services offerings

How to assess the different SD-WAN platforms?


Source: STL Partners

The rest of this report includes a competitive analysis of key SD-WAN platform players and how they can enable telcos to meet enterprise customer needs and future-proof their SD-WAN investments.

Table of Contents

  • Executive Summary
  • What are the challenges to introducing SD-WAN
  • Assessing different SD-WAN platforms
    • Cisco
    • VMware
    • Fortinet
    • Versa Networks
    • Palo Alto
    • Silver Peak
    • Juniper
    • Aryaka
  • A framework for selecting and implementing SD-WAN platforms



ngena SD-WAN: scaling innovation through partnership

Introducing ngena

This report focusses on ngena, a multi-operator alliance founded in 2016, which offers multi-national networking services aimed at enterprise customers. ngena is interesting to STL Partners for several reasons:

First, it represents a real, commercialised example of operators working together, across borders and boundaries, to a common goal – a key part of our Coordination Age vision.

Second, ngena’s SDN product is an example of a new service which was designed around a strong, customer-centric proposition, with a strong emphasis on partnership and shared vision – an alternative articulation, if you like, of Elisa’s cultural strategy.

Third, it was born out of Deutsche Telekom, the world’s sixth-largest telecoms group by revenue, which operates in more than fifty countries. This makes it a great case study of an established operator innovating new enterprise services.

And lastly, it is a unique example of a telco and technology company (in this case Cisco) coming together in a mutually beneficial creative partnership, rather than settling into traditional buyer-supplier roles.

Over the coming pages, we will explore ngena’s proposition to customers, how it has achieved what it has to date, and to what extent it has made a measurable impact on the companies that make up the alliance. The report explains STL Partners’ independent view, informed by conversations with Marcus Hacke, Founder and Managing Director, as well as others across the industry.


Shifting enterprise needs

Enterprises throughout the world are rapidly digitising their operations, and in large part, that involves the move to a ‘multicloud’ environment, where applications and data are hosted in a complex ecosystem of private data centres, campus sites, public clouds, and so on.

Digital enterprises need to ensure that data and applications are accessible from any location, at any time, from any device, and any network, reliably and without headaches. A large enterprise such as a retail bank might have physical branches located all over the place – and the same data needs to be accessible from any branch.

Traditionally, this sort of connectivity was achieved over the wide area network (WAN), with enterprises investing in private networks (often virtual private networks) to ensure that data remained secure and reliably accessible. Traditional WAN architectures work well – but they are not known for flexibility of the sort required to support a multicloud set-up. The network topology is often static, requiring manual intervention to deploy and change, and in our fast-changing world, this becomes a bottleneck. Enterprises are still faced with several challenges:

Key enterprise networking challenges

Source: STL Partners, SD-WAN mini series

The rise of SD-WAN: 2014 to present

This is where, somewhere around 2014, software-defined WAN (SD-WAN) came on the scene. SD-WAN improves on traditional WAN by applying the principles of software-defined networking (SDN). Networking hardware is managed with a software-based controller that can be hosted in the cloud, which opens up a realm of possibilities for automation, smart traffic routing, optimisation, and so on – which makes managing a multicloud set-up a whole lot easier.

As a result, enterprises have adopted SD-WAN at a phenomenal pace, and over the past five years telecoms operators and other service providers worldwide have rushed to add it to their managed services portfolio, to the extent that it has become a mainstream enterprise service:

Live deployments of SD-WAN platforms by telcos, 2014-20 (global)

Source: STL Partners NFV Deployment Tracker
Includes only production deployments; excludes proof of concepts and pilots
Includes four planned/pending deployments expected to complete in 2020

The explosion of deployments between 2016 and 2019 had many contributing factors. It was around this time that vendor offerings in the space became mature enough for the long tail of service providers to adopt more or less off-the-shelf. The technology had also begun to be seen as a “no-brainer” upgrade on existing enterprise connectivity solutions, and was therefore in heavy demand; many telcos used it as a natural upsell to their broader suite of enterprise connectivity solutions.

The challenge of building a connectivity platform

While SD-WAN has gained significant traction, it is not a straightforward addition to an operator’s enterprise service portfolio – nor is it a golden ticket in and of itself.

First, it is no longer enough to offer SD-WAN alone. The trend – based on demand – is for it to be offered alongside a portfolio of other SDN-based cloud connectivity services, over an automated platform that enables customers to pick and choose predefined services, and quickly deploy and adapt networks without the effort and time needed for bespoke customer deployments. The need this addresses is obvious, but the barrier to entry in building such a platform is a big challenge for many operators – particularly mid-size and smaller telcos.

Second, there is the economic challenge of scaling a platform while remaining profitable. Platform-based services require continuous updating and innovation, and it is questionable whether many telecoms operators have the financial strength to do so – a challenge faced by nearly all IT cloud platforms.

Last – and by no means least – is the challenge of scaling across geographies. In a single-country scenario, where most operators (at least in developed markets) will already have the fixed network infrastructure in place to cover all of a potential customer’s branch locations, SD-WAN works well. It is difficult, from a service provider’s perspective, to manage network domains and services across the whole enterprise (#6 above) if that enterprise has locations outside of the geographic bounds of the service provider’s own network infrastructure. There are ways around this – including routing traffic over the public Internet or other operators’ networks – but from a customer point of view this is less than ideal, as it adds complexity and limits flexibility in the solution they are paying for.

There is a need, then, for a connectivity platform “with a passport”: that can cross borders between operators, networks and markets without issue. ngena, or the Next Generation Enterprise Network Alliance, aims to address this need.

Table of Contents

  • Executive summary
    • What is ngena?
    • Why does ngena matter?
    • Has ngena been successful?
    • What does ngena teach us about successful telco innovation?
    • What does this mean for other telcos?
    • What next?
  • Introduction
  • Context: Enterprise needs and SD-WAN
    • Shifting enterprise needs
    • The rise of SD-WAN: 2014 to present
    • The challenge of building a connectivity platform
  • ngena: Enterprise connectivity with a passport
    • A man with a vision
    • The ngena proposition
  • How successful has ngena been?
    • Growth in alliance membership
    • Growth in ngena itself
    • Making money for the partners
  • What does ngena teach us about successful innovation culture in telecoms?
    • Context: the need to disrupt and adapt in telecoms
    • Lessons from ngena
  • What does this mean for other telcos?
      • Consider how you support innovation
      • Consider how you partner for mutual benefit
      • What next?


NFV Deployment Tracker: North American data and trends

Introduction

NFV in North America – how is virtualisation moving forward in telcos against global benchmarks?

Welcome to the sixth edition of the ‘NFV Deployment Tracker’

This report is the sixth analytical report in the NFV Deployment Tracker series and is intended as an accompaniment to the updated Tracker Excel spreadsheet.

This extended update covers seven months of deployments worldwide, from October 2018 to April 2019. The update also includes an improved spreadsheet format: a more user-friendly, clearer lay-out and a regional toggle in the ‘Aggregate data by region’ worksheet, which provides much quicker access to the data on each region separately.

The present analytical report provides an update on deployments and trends in the North American market (US, Canada and the Caribbean) since the last report focusing on that region (December 2017).

Scope, definitions and importance of the data

We include in the Tracker only verified, live deployments of NFV or SDN technology powering commercial services. The information is taken mainly from public-domain sources, such as press releases by operators or vendors, or reports in reputable trade media. However, a small portion of the data also derives from confidential conversations we have had with telcos. In these instances, the deployments are included in the aggregate, anonymised worksheets in the spreadsheet, but not in the detailed dataset listing deployments by operator and geography, and by vendor where known.

Our definition of a ‘deployment’, including how we break deployments down into their component parts, is provided in the ‘Explanatory notes’ worksheet, in the accompanying Excel document.

NFV in North America in global context

We have gathered data on 120 live, commercial deployments of NFV and SDN in North America between 2011 and April 2019. These were completed by 33 mainly Tier-One telcos and telco group subsidiaries: 24 based in the US, four in Canada, one in the Caribbean, three European (Colt, T-Mobile and Vodafone), and one Latin American (América Móvil). The data includes information on 217 known Virtual Network Functions (VNFs), functional sub-components and supporting infrastructure elements that have formed part of these deployments.

This makes North America the third-largest NFV/SDN market worldwide, as is illustrated by the comparison with other regions in the chart below.

Total NFV/SDN deployments by region, 2011 to April 2019


Source: STL Partners

Deployments of NFV in North America account for around 24% of the global total of 486 live deployments (or 492 deployments counting deployments spanning multiple regions as one deployment for each region). Europe is very marginally ahead on 163 deployments versus 161 for Asia-Pacific: both equating to around 33% of the total.
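The regional shares quoted above follow directly from the deployment counts. A quick sanity check, using 492 as the total (i.e. counting multi-region deployments once per region):

```python
TOTAL = 492  # deployments spanning multiple regions counted once per region
regions = {"North America": 120, "Europe": 163, "Asia-Pacific": 161}

for name, count in regions.items():
    print(f"{name}: {count / TOTAL:.1%}")
# North America: 24.4%, Europe: 33.1%, Asia-Pacific: 32.7%
```

Europe and Asia-Pacific each account for roughly a third of deployments, with North America’s 120 making up around a quarter.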

The NFV North America Deployment Tracker contains the following data, to May 2019:

  • Global aggregate data
  • Deployments by primary purpose
  • Leading VNFs and functional components
  • Leading operators
  • Leading vendors
  • Leading vendors by primary purpose
  • Above data points broken down by region:
      • North America
      • Asia-Pacific
      • Europe
      • Latin America
      • Middle East
      • Africa
  • Detailed dataset on individual deployments

 

Contents of the accompanying analytical report:

  • Executive Summary
  • Introduction
  • Welcome to the sixth edition of the ‘NFV Deployment Tracker’
  • Scope, definitions and importance of the data
  • Analysis of NFV in North America
  • The North American market in global context
  • SD-WAN and core network functions are the leading categories
  • 5G is driving core network virtualisation
  • Vendor trends: Open source and operator self-builds outpace vendors
  • Operator trends: Verizon and AT&T are the clear leaders
  • Conclusion: Slow-down in enterprise platform deployments while 5G provides new impetus

Vendors vs. telcos? New plays in enterprise managed services

Digital transformation is reshaping vendors’ and telcos’ offer to enterprises

What does ‘digital transformation’ mean?

The enterprise market for telecoms vendors and operators is being radically reshaped by digital transformation. This transformation is taking place across all industry verticals, not just the telecoms sector, whose digital transformation – desirable or actual – STL Partners has forensically mapped out for several years now.

The term ‘digital transformation’ is so familiar that it breeds contempt in some quarters. Consequently, it is worth taking a while to refresh our thinking on what ‘digital transformation’ actually means. This will in turn help explain how the digital needs and practices of enterprises are impacting on vendors and telcos alike.

The digitisation of enterprises across all sectors can be described as part of a more general social, economic and technological evolution toward ever more far-reaching use of software-, computing- and IP-based modes of: interacting with customers and suppliers; communicating; networking; collaborating; distributing and accessing media content; producing, marketing and selling goods and services; consuming and purchasing those goods and services; and managing money flows across the economy. Indeed, one definition of the term ‘digital’ in this more general sense could simply be ‘software-, computing- and IP-driven or -enabled’.

For the telecoms industry, the digitisation of society and technology in this sense has meant, among other things, the decline of voice (fixed and mobile) as the primary communications service, although it is still the single largest contributor to turnover for many telcos. Voice mediates an ‘analogue’ economy and way of working in the sense that voice is a form of ‘physical’ communication between two or more persons. In addition, the activity and means of communication (i.e. the actual telephone conversation to discuss project issues) is a process and work task separate from the other work tasks, in different physical locations, that it helps to co-ordinate. By contrast, in an online collaboration session, the communications activity and the work activity are combined in a shared virtual space: the digital service allows for greater integration and synchronisation of tasks previously carried out by physical means, in separate locations, and in a less inherently co-ordinated manner.

Similarly, data in the ATM and Frame Relay era was mainly a means to transport a certain volume of information or files from one workplace to another, without joining those workplaces together as one: the workplaces remained separate, both physically and in terms of the processes and work activities associated with them. The traditional telecoms network itself reflected the physical economy and processes that it enabled: comprising massive hardware and equipment stacks responsible for shifting huge volumes of voice signals and data packets (so called on the analogy of postal packets) from one physical location to another.

By contrast, with the advent of the digital (software-, computing- and IP-enabled) society and economy, the value carried by communications infrastructure has increasingly shifted from voice and data (as ‘physical’ signals and packets) to that of new modes of always-on, virtual interconnectedness and interactivity that tend towards the goal of eliminating or transcending the physical separation and discontinuity of people, work processes and things.

Examples of this digital transformation of communications, and associated experiences of work and life, could include:

  • As stated above, simple voice communications, in both business and personal life, have been increasingly superseded by ‘real-time’ or near-real-time, one-to-one or one-to-many exchange and sharing of text and audio-visual content across modes of communication such as instant messaging, unified communications (UC), social media (including, increasingly, in the workplace) or collaborative applications enabling simultaneous, multi-party reviewing and editing of documents and files
  • Similarly, location-to-location file transfers in support of discrete, geographically separated business processes are being replaced by centralised storage and processing of, and access to, enterprise data and applications in the cloud
  • These trends mean that, in theory, people can collaborate and ‘meet’ with each other from any location in the world, and the digital service constitutes the virtual activity and medium through which that collaboration takes place
  • Similarly, with the Internet of Things (IoT), physical objects, devices, processes and phenomena generate data that can be transmitted and analysed in ‘real time’, triggering rapid responses and actions directed towards those physical objects and processes based on application logic and machine learning – resulting in more efficient, integrated processes and physical events meeting the needs of businesses and people. In other words, the IoT effectively involves digitising the physical world: disparate physical processes, and the action of diverse physical things and devices, are brought together by software logic and computing around human goals and needs.

‘Virtualisation’ effectively means ‘digital optimisation’

In addition to the cloud and IoT, one of the main effects of enterprise digital transformation on the communications infrastructure has of course been Network Functions Virtualisation (NFV) and Software-Defined Networking (SDN). NFV – the replacement of network functionality previously associated with dedicated hardware appliances by software running on standard compute devices – could also simply be described as the digitisation of telecoms infrastructure: the transformation of networks into software-, computing- and IP-driven (digital) systems that are capable of supporting the functionality underpinning the virtual / digital economy.

This functionality includes things like ultrafast, reliable, scalable and secure routing, processing, analysis and storage of massive but also highly variable data flows across network domains and on a global scale – supporting business processes ranging from ‘mere’ communications and collaboration to co-ordination and management of large-scale critical services, multi-national enterprises, government functions, and complex industrial processes. And meanwhile, the physical, Layer-1 elements of the network have also to become lightning-fast to deliver the massive, ‘real-time’ data flows on which the digital systems and services depend.

Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos

Virtualisation frees vendors from ‘operator lock-in’

Virtualisation has generally been touted as a necessary means for telcos to adapt their networks to support the digital service demands of their customers and, in the enterprise market, to support those customers’ own digital transformations. It has also been advocated as a means for telcos to free themselves from so-called ‘vendor lock-in’: dependency on their network hardware suppliers for maintenance and upgrades to equipment capacity or functionality to support service growth or new product development.

From the other side of the coin, virtualisation could also be seen as a means for vendors to free themselves from ‘operator lock-in’: a dependency on telcos as the primary market for their networking equipment and technology. That is to say, the same dynamic of social and enterprise digitisation, discussed above, has driven vendors to virtualise their own product and service offerings, and to move away from the old business model, which could be described as follows:

  • telcos and their implementation partners purchase hardware from the vendor
  • deploy it at the enterprise customer
  • and then own the business relationship with the enterprise and hold the responsibility for managing the services

By contrast, once the service-enabling technology is based on software and standard compute hardware, vendors have the opportunity to market their technology direct to enterprise customers and, in theory, to take over the supplier-customer relationship.

Of course, many enterprises have continued to own and operate their own private networks and networking equipment, generally supplied to them by vendors. Therefore, vendors marketing their products and services direct to enterprises is not a radical innovation in itself. However, the digitisation / virtualisation of networking technology and of enterprise networks is creating a new competitive dynamic placing vendors in a position to ‘win back’ direct relationships to enterprise customers that they have been serving through the mediation of telcos.

Virtualisation changes the competitive dynamic

Contents:

  • Executive Summary: Digital transformation is changing the rules of the game
  • Digital transformation is reshaping vendors’ and telcos’ offer to enterprises
  • What does ‘digital transformation’ mean?
  • ‘Virtualisation’ effectively means ‘digital optimisation’
  • Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos
  • Vendors and telcos: the business models are changing
  • New vendor plays in enterprise networking: four vendor business models
  • Vendor plays: Nokia, Ericsson, Cisco and IBM
  • Ericsson: changing the bet from telcos to enterprises – and back again?
  • Cisco: Betting on enterprises – while operators need to speed up
  • IBM: Transformation involves not just doing different things but doing things differently
  • Conclusion: Vendors as ‘co-Operators’, ‘co-opetors’ or ‘co-opters’ – but can telcos still set the agenda?
  • How should telcos play it? Four recommendations

Figures:

  • Figure 1: Virtualisation changes the competitive dynamic
  • Figure 2: The telco as primary channel for vendors
  • Figure 3: New direct-to-enterprise opportunities for vendors
  • Figure 4: Vendors as both technology supplier and OTT / operator-type managed services provider
  • Figure 5: Vendors as digital service creators, with telcos as connectivity providers and digital service enablers
  • Figure 6: Vendors as digital service enablers, with telcos as digital service creators / providers
  • Figure 7: Vendor manages communications / networking as part of overall digital transformation focus
  • Figure 8: Nokia as technology supplier and ‘operator-type’ managed services provider
  • Figure 9: Nokia’s cloud-native core network blueprint
  • Figure 10: Nokia WING value chain
  • Figure 11: Ericsson’s model for telcos’ roles in the IoT ecosystem
  • Figure 12: Ericsson generates the value whether operators provide connectivity only or also market the service
  • Figure 13: IBM’s model for telcos as digital service enablers or providers – or both

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in Network Virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often used in the same breath, and are related but different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre wide area networks. Urs Hoelzle of Google can be seen discussing Google’s deployment and findings here at the OpenNet Summit in early 2012, and Google claim to be able to get 60% to 70% better utilisation out of their data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old, and yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally, the telecoms and datacomms equipment market is worth over $180bn and has been dominated by five vendors, who between them hold around 50% of the market.

Innovation and competition in the networking market have been lacking, with very few major innovations in the last 12 years: the industry has focussed on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised the IT computing markets: cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a perspective of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.
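As a concrete, heavily simplified illustration of that separation, the sketch below plays the role of the central controller: it holds the whole topology and computes each switch’s next hop, leaving the switches with nothing to decide. The switch names and data structures are invented for illustration and do not correspond to any real SDN controller API.

```python
from collections import deque

# Toy topology: the controller's global view of switch-to-switch links.
TOPOLOGY = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def compute_forwarding(dst: str) -> dict:
    """BFS from the destination: each switch learns its next hop towards dst.

    This is the centralised step SDN adds: one controller with the whole
    topology computes the rules; the switches merely apply them.
    """
    next_hop = {dst: None}
    queue = deque([dst])
    while queue:
        node = queue.popleft()
        for neigh in TOPOLOGY[node]:
            if neigh not in next_hop:
                next_hop[neigh] = node  # forward towards dst via `node`
                queue.append(neigh)
    return next_hop

# "Push" flow rules to each switch for traffic destined to s4.
rules = compute_forwarding("s4")
print(rules["s1"])  # prints "s2": s1 reaches s4 via s2
```

In a real deployment the controller would install these rules over a southbound protocol such as OpenFlow; the point here is only that the path logic lives in one place with a whole-network view.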

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions; however, while SDN does this for an entire network, NFV focusses specifically on network functions like routing, firewalls, load balancing, CPE etc., and looks to leverage developments in Commercial Off-The-Shelf (COTS) hardware such as generic server platforms utilising multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and the performance of these devices has risen dramatically. Figure 1 shows that in just three years (2010–2013) a tenfold increase in packet processing (data plane) performance was achieved. More generally, CPU performance has been tracking Moore’s law, which originally stated that the number of components in an integrated circuit would double every two years; if the number of components is related to performance, the same can be said about CPU performance. For example, Intel will ship its latest processor family in the second half of 2015 with up to 72 individual CPU cores, compared to the four or six used in 2010–2013.

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica
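To put rough numbers on the Moore’s law point above, a naive projection that doubles a count every two years can be compared against the figures in the text (a sketch, with illustrative starting values):

```python
def doubled(start_count: int, start_year: int, year: int, period: int = 2) -> int:
    """Project a count that doubles every `period` years (naive Moore's-law-style)."""
    return start_count * 2 ** ((year - start_year) // period)

# Starting from the ~4 cores typical of a 2010 server CPU:
projection = {year: doubled(4, 2010, year) for year in range(2010, 2016)}
print(projection)  # {2010: 4, 2011: 4, 2012: 8, 2013: 8, 2014: 16, 2015: 16}
```

Simple doubling from four cores in 2010 gives only 16 by 2015, so a 72-core part in 2015 reflects growth in parallelism well beyond a per-chip core doubling – consistent with the tenfold data-plane gain in Figure 1 coming from software and platform optimisation as well as raw silicon.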

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012, running as an Industry Specification Group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative – not just from a cost perspective, but more importantly in what it means for telcos and service providers being able to develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle the issues of performance, management and orchestration, proofs of concept, reference architecture and so on, while externally, organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do that thing very well: switching or routing packets, acting as a network firewall, and so on. But as each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. By running network functions on a COTS-based platform, most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings: they are all about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV are not just about cost savings and are about agility, how is this delivered? Agility comes from a number of different aspects, for example the ability to orchestrate a number of VNFs and the network to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. MANO provides automation and provisioning, and will interface with existing provisioning and billing platforms such as OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects and chains of services, and via external APIs will allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed as part of the service. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s STB connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components are used as required, for example Deep Packet Inspection and load balancing. Equally, as the user accesses other services such as media, Internet and voice, different VNFs can be brought into play, such as the SBC and network storage.
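The dynamic chain described above can be modelled in a few lines. The VNF names (vAuth, vBRAS, vFirewall) and the chain mechanics here are illustrative only, not a real orchestration interface:

```python
# Illustrative model of a per-subscriber service chain: transient VNFs
# (like authentication) are used once and then dropped from the chain.

class ServiceChain:
    def __init__(self):
        self.vnfs = []

    def add(self, name, transient=False):
        self.vnfs.append({"name": name, "transient": transient})

    def process(self, packet):
        """Pass a packet through each VNF in order; transient VNFs
        leave the chain after the first packet they handle."""
        for vnf in list(self.vnfs):
            packet = f"{packet}|{vnf['name']}"
            if vnf["transient"]:
                self.vnfs.remove(vnf)
        return packet

chain = ServiceChain()
chain.add("vAuth", transient=True)   # checks the account, then drops out
chain.add("vBRAS")
chain.add("vFirewall")

print(chain.process("pkt1"))  # pkt1|vAuth|vBRAS|vFirewall
print(chain.process("pkt2"))  # pkt2|vBRAS|vFirewall -- auth already done
```

A real MANO stack would instantiate and tear down these functions as virtual machines or containers rather than Python objects, but the lifecycle logic is the same.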

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Facing Up to the Software-Defined Operator

Introduction

At this year’s Mobile World Congress, the GSMA’s eccentric decision to split the event between the Fira Gran Via (the “new Fira”, as everyone refers to it) and the Fira Montjuic (the “old Fira”, as everyone refers to it) was a better one than it looked. If you took the special MWC shuttle bus from the main event over to the developer track at the old Fira, you crossed a culture gap that is widening, not closing. The very fact that the developers were accommodated separately hints at this, but it was the content of the sessions that brought it home. At the main site, it was impressive and forward-thinking to say you had an app, and a big deal to launch a new Web site; at the developer track, presenters would start up a Web service during their own talk to demonstrate their point.

There has always been a cultural rift between the “netheads” and the “bellheads”, of which this is just the latest manifestation. But the content of the main event tended to suggest that this is an increasingly serious problem. Everywhere, we saw evidence that core telecoms infrastructure is becoming software. Major operators are moving towards this now. For example, AT&T used the event to announce that it had signed up Software Defined Networking (SDN) specialists Tail-F and Metaswitch Networks for its next round of upgrades, while Deutsche Telekom’s Terastream architecture is built on it.

This is not just about the overused three-letter acronyms like “SDN and NFV” (Network Function Virtualisation – see our whitepaper on the subject here), nor about the duelling standards groups like OpenFlow, OpenDaylight etc., with their tendency to use the word “open” all the more the less open they actually are. It is a deeper transformation that will affect the device, the core network, the radio access network (RAN), the Operations Support Systems (OSS), the data centres, and the ownership structure of the industry. It will change the products we sell, the processes by which we deliver them, and the skills we require.

In the future, operators will be divided into providers of the platform for software-defined network services and consumers of the platform. Platform consumers, which will include MVNOs, operators, enterprises, SMBs, and perhaps even individual power users, will expect a degree of fine-grained control over network resources that amounts to specifying your own mobile network. Rather than trying to make a unitary public network provide all the potential options as network services, we should look at how we can provide the impression of one network per customer, just as virtualisation gives the impression of one computer per user.

To summarise, it is no longer enough to boast that your network can give the customer an API. Future operators should be able to provision a virtual network through the API. AT&T, for example, aims to provide a “user-defined network cloud”.
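What “provisioning a virtual network through the API” might look like from the tenant’s side can be sketched as a request body. The endpoint, field names and parameters here are entirely hypothetical, invented for illustration:

```python
# Hypothetical sketch of a "virtual network per customer" provisioning request.

import json

def build_network_request(customer, sites, bandwidth_mbps, functions):
    """Assemble the JSON body a tenant might POST to an operator's
    provisioning API to specify its own virtual network."""
    return json.dumps({
        "customer": customer,
        "sites": sites,
        "bandwidth_mbps": bandwidth_mbps,
        # Ordered list of virtualised functions to chain into the network
        "service_chain": functions,
    }, indent=2)

body = build_network_request(
    customer="acme-corp",
    sites=["london", "frankfurt"],
    bandwidth_mbps=500,
    functions=["firewall", "wan-accelerator"],
)
print(body)
```

The significant shift is not the JSON itself but what it implies: the platform consumer specifies topology, capacity and chained functions declaratively, and the operator’s software-defined platform realises it.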

Elements of the Software-Defined Future

We see five major trends leading towards the overall picture of the ‘software defined operator’ – an operator whose boundaries and structure can be set and controlled through software.

1: Core network functions get deployed further and further forwards

Because core network functions like the Mobile Switching Centre (MSC) and Home Subscriber Server (HSS) can now be implemented in software on commodity hardware, they no longer have to be tied to major vendors’ equipment deployed in centralised facilities. This frees them to migrate towards the edge of the network, providing for more efficient use of transmission links, lower latency, and putting more features under the control of the customer.

Network architecture diagrams often show a boundary between “the Internet” and an “other network”. This is called the ‘Gi interface’ in 3G networks (‘SGi’ in 4G). Today, the “other network” is usually itself an IP-based network, making this distinction simply that between a carrier’s private network and the Internet core. Moving network functions forwards towards the edge also moves this boundary forwards, making it possible for Internet services like content-delivery networking or applications acceleration to advance closer to the user.

Increasingly, the network edge is a node supporting multiple software applications, some of which will be operated by the carrier, some by third-party services like – say – Akamai, and some by the carrier’s customers.

2: Access network functions get deployed further and further back

A parallel development to the emergence of integrated small cells/servers is the virtualisation and centralisation of functions traditionally found at the edge of the network. One example is so-called Cloud RAN or C-RAN technology in the mobile context, where the radio basebands are implemented as software and deployed as virtual machines running on a server somewhere convenient. This requires high capacity, low latency connectivity from this site to the antennas – typically fibre – and this is now being termed “fronthaul” by analogy to backhaul.

Another example is the virtualised Optical Line Terminal (OLT) some vendors offer in the context of fixed Fibre to the home (FTTH) deployments. In these, the network element that terminates the line from the user’s premises has been converted into software and centralised as a group of virtual machines. Still another would be the increasingly common “virtual Set Top Box (STB)” in cable networks, where the TV functions (electronic programming guide, stop/rewind/restart, time-shifting) associated with the STB are actually provided remotely by the network.

In this case, the degree of virtualisation, centralisation, and multiplexing can be very high, as latency and synchronisation are less of a problem. The functions could actually move all the way out of the operator network, off to a public cloud like Amazon EC2 – this is in fact how Netflix does it.

3: Some business support and applications functions are moving right out of the network entirely

If Netflix can deliver the world’s premier TV/video STB experience out of Amazon EC2, there is surely a strong case to look again at which applications should be delivered on-premises, in the private cloud, or moved into a public cloud. As explained later in this note, the distinctions between on-premises, forward-deployed, private cloud, and public cloud are themselves being eroded. At the strategic level, we anticipate pressure for more outsourcing and more hosted services.

4: Routers and switches are software, too

In the core of the network, the routers that link all this stuff together are also turning into software. This is the domain of true SDN – basically, the effort to replace relatively smart routers with much cheaper switches whose forwarding rules are generated in software by a much smarter controller node. This is well reported elsewhere, but it is necessary to take note of it. In the mobile context, we also see this in the increasing prevalence of virtualised solutions for the LTE Evolved Packet Core (EPC), Mobility Management Entity (MME), etc.

5: Wherever it is, software increasingly looks like the cloud

Virtualisation – the approach of configuring groups of computers to work like one big ‘virtual computer’ – is a key trend. Even when, as with the network devices, software is running on a dedicated machine, it will be increasingly found running in its own virtual machine. This helps with management and security, and most of all, with resource sharing and scalability. For example, the virtual baseband might have VMs for each of 2G, 3G, and 4G. If the capacity requirements are small, many different sites might share a physical machine. If large, one site might be running on several machines.

This has important implications, because it also makes sharing among users easier. Those users could be different functions, or different cell sites, but they could also be customers or other operators. It is no accident that NEC’s first virtualised product, announced at MWC, is a complete MVNO solution. It has never been as easy to provide more of your carrier needs yourself, and it will only get easier.
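The resource-sharing point above – many small sites packed onto one shared physical machine – can be illustrated with a toy first-fit placement. Capacity units, site names and the placement policy are all invented for illustration:

```python
# Sketch of the sharing logic described above: small cell-site baseband VMs
# packed onto shared servers using first-fit placement.

SERVER_CAPACITY = 10  # illustrative capacity units per physical server

def place(vms):
    """First-fit placement: each VM (name, load) goes on the first server
    with room; a new server is opened when none fits."""
    servers = []  # each entry is [used_capacity, [vm names]]
    for name, load in vms:
        for srv in servers:
            if srv[0] + load <= SERVER_CAPACITY:
                srv[0] += load
                srv[1].append(name)
                break
        else:
            servers.append([load, [name]])
    return servers

small_sites = [(f"site{i}-4G", 2) for i in range(1, 6)]  # five small sites
placement = place(small_sites)
print(len(placement))  # prints 1: all five sites share one server
```

A large site would simply appear as several VMs spread across machines; either way, the scheduler, not the hardware, decides who shares with whom – which is what makes sharing across functions, sites, or even other operators straightforward.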

The following Huawei slide (from their Carrier Business Group CTO, Sanqi Li) gives a good visual overview of a software-defined network.

Figure 1: An architecture overview for a software-defined operator

Source: Huawei

 

  • The Challenges of the Software-Defined Operator
  • Three Vendors and the Software-Defined Operator
  • Ericsson
  • Huawei
  • Cisco Systems
  • The Changing Role of the Vendors
  • Who Benefits?
  • Who Loses?
  • Conclusions
  • Platform provider or platform consumer
  • Define your network sharing strategy
  • Challenge the coding cultural cringe

 

  • Figure 1: An architecture overview for a software-defined operator
  • Figure 2: A catalogue for everything
  • Figure 3: Ericsson shares (part of) the vision
  • Figure 4: Huawei: “DevOps for carriers”
  • Figure 5: Cisco aims to dominate the software-defined “Internet of Everything”

Cisco, Microsoft, Google, AT&T, Telefonica, et al: the disruptive battle for value in communications

Technology: Products and Vendors’ Approaches

There are many vendors and products in the voice/telephony arena. Some started as pure voice products or solutions, like Cisco Call Manager, while others, such as Microsoft Office 365, started as an office productivity suite to which voice and presence became a natural extension, and then later a central part of the core product functionality. We have included details on RCS; however, RCS is not globally available and is limited in its functionality compared to some of the other products listed here.

Unified Communications

Unified Communications (UC) is not a standard; there are many different interpretations, but there is a general consensus about what it means – the unification of voice, video, messaging, presence, conferencing, and collaboration into a simple integrated user experience.

UC is an important technology for enterprise customers: it brings mobility and agility to an organisation, improves communication and collaboration, adds a social element, and lowers costs by reducing the need for office space and for multiple disparate communications systems, each with its own management and control systems. UC can also be delivered as a cloud service, known as UCaaS. Leading providers are Microsoft, Google, and Cisco. Other players include IBM, 8X8, and a number of smaller vendors, as well as telco equipment manufacturers such as Ericsson. We have covered some of the leading solutions in this report, and there are definite opportunities for telcos to collaborate with these vendors, adding integration with core services such as telephony and mobile data, as well as customer support and billing.

There are several elements an enterprise should consider if a UC solution is to be successful:

  • Fixed voice functions and needs (including PBX) and integration into a UC solution
  • Mobile voice – billing, call routing, integration with fixed and UC solutions
  • Desktop and mobile video calling
  • Collaboration tools (conferencing, video conferencing, desktop integration, desktop sharing etc.)
  • Desktop integration – how does the solution integrate with core productivity tools (Microsoft Office, Google Apps, OpenOffice, etc.)?
  • PC and mobile clients – can a mobile user participate in a video conference or share files?
  • Instant messaging and social integration
  • How the user is able to interact with the system and how intuitive it is to use. This is sometimes called the user experience and is probably the most important aspect, as a good user experience promotes efficiency and end user satisfaction

From the user perspective, it would be desirable for the solution to include the basic elements shown in Figure 1.

Figure 1: Basic user needs from Unified Communications

Source: STL Partners

Historically, enterprise communications has been an area where telcos supply the enterprise – delivering voice end points (E.164 phone numbers and mobile devices), voice termination, and outgoing voice and data services.

Organisational voice communications (i.e. internal calling) has been an area of strength for companies like Cisco, Avaya, Nortel and others that have delivered on-premise solutions which offer sophisticated voice and video services. These have grown over the years to provide Instant Messaging (IM), desktop collaboration tools, and presence capabilities. PC clients often replace fixed phones, adding functionality, and can be used when out of the office. What these systems have lacked is deep integration with desktop office suites such as Microsoft Office, Google Apps, and Lotus Notes. Plug-ins or other tools can be used to integrate presence and voice, but the user experience is usually a compromise as different vendors are involved.

The big software vendors have also been active, with Microsoft and IBM adding video and telephony features, and Google building telephony and conferencing into its growing portfolio. Microsoft also acquired Skype and has delivered on its promise to integrate Skype with Lync. Meanwhile, Google has made a number of acquisitions in the video and voice arena like ON2, Global IP Solutions, and Grand Central. The technology from ON2 allows video to be compressed and sent over an Internet connection. Google is pushing the products from ON2 to be integrated into one of the next major disruptors – WebRTC.

Microsoft began including voice capability with its release of Office Communications Server (OCS) in 2007. An OCS user could send instant messages, make a voice call, or place a video call to another OCS user or group of users. Presence was directly integrated with Outlook and a separate product – Office Live Meeting – was used to collaborate. Although OCS included some Private Branch eXchange (PBX) features, few enterprises regarded it as having enough features or capability to replace existing systems from the likes of Cisco. With Office 365, Microsoft stepped up the game, adding a new user interface, enhanced telephony features, integrated collaboration, and multiple methods of deployment using Microsoft’s cloud, on premise, and service provider deployments.

 

  • Technology: Products and Vendors’ Approaches
  • Unified Communications
  • Microsoft Office 365 – building on enterprise software strengths
  • Skype – the popular international behemoth
  • Cisco – the incumbent enterprise giant
  • Google – everything browser-based
  • WebRTC – a major disruptive opportunity
  • Rich Communication Service (RCS) – too little too late?
  • Broadsoft – neat web integration
  • Twilio – integrate voice and SMS into applications
  • Tropo – telephony integration technology leader
  • Voxeo – a pathfinder in integration
  • Hypervoice –make voice a native web object
  • Calltrunk – makes calls searchable
  • Operator Voice and Messaging Services
  • Section Summary
  • Telco Case Studies
  • Vodafone – 360, One Net and RED
  • Telefonica – Digital, Tu Me, Tu Go, BlueVia, Free Wi-Fi
  • AT&T – VoIP, UC, Tropo, Watson
  • Section Summary
  • STL Partners and the Telco 2.0™ Initiative

 

  • Figure 1: Basic user needs from Unified Communications
  • Figure 2: Microsoft Lync 2013 client
  • Figure 3: Microsoft Lync telephony integration options
  • Figure 4: International Telephone and Skype Traffic 2005-2012
  • Figure 5: The Skype effect on international traffic
  • Figure 6: Voice call charging in USA
  • Figure 7: Google Voice call charging in USA
  • Figure 8: Google Voice call charging in Europe
  • Figure 9: Google outbound call rates
  • Figure 10: Calliflower beta support for WebRTC
  • Figure 11: Active individual user base for WebRTC, millions
  • Figure 12: Battery life compared for different services
  • Figure 13: Vodafone One Net Express call routing
  • Figure 14: Vodafone One Net Business Call routing
  • Figure 15: Enterprise is a significant part of Vodafone group revenue
  • Figure 16: Vodafone Red Bundles
  • Figure 17: Telefonica: Market Positioning Map, Q4 2012
  • Figure 18: US market in transition towards greater competition
  • Figure 19: Voice ARPU at AT&T, fixed and mobile
  • Figure 20: Industry Value is Concentrated at the Interfaces
  • Figure 21: Telco 2.0™ ‘two-sided’ telecoms business model

Communications Services: What now makes a winning value proposition?

Introduction

This is an extract of two sections of the latest Telco 2.0 Strategy Report The Future Value of Voice and Messaging for members of the premium Telco 2.0 Executive Briefing Service.

The full report:

  • Shows how telcos can slow the decline of voice and messaging revenues and build new communications services to maximise revenues and relevance with both consumer and enterprise customers.
  • Includes detailed forecasts for 9 markets, in which the total decline is forecast between -25% and -46% on a $375bn base between 2012 and 2018, giving telcos an $80bn opportunity to fight for.
  • Shows impacts and implications for other technology players including vendors and partners, and general lessons for competing with disruptive players in all markets.
  • Looks at the impact of so-called OTT competition, market trends and drivers, bundling strategies, operators developing their own Telco-OTT apps, advanced Enterprise Communications services, and the opportunities to exploit new standards such as RCS, WebRTC and VoLTE.

The Transition in User Behaviour

A global change in user behaviour

In November 2012 we published European Mobile: The Future’s not Bright, it’s Brutal. Very soon after its publication, we issued an update in light of results from Vodafone and Telefonica that suggested its predictions were being borne out much faster than we had expected.

Essentially, the macro-economic challenges faced by operators in southern Europe are catalysing the processes of change we identify in the industry more broadly.

This should not be seen as a “Club Med problem”. Vodafone reported a 2.7% drop in service revenue in the Netherlands, driven by customers reducing their out-of-bundle spending. This sensitivity and awareness of how close users are getting to their monthly bundle allowances is probably a good predictor of willingness to adopt new voice and messaging applications, i.e. if a user is regularly using more minutes or texts than are included in their service bundle, they will start to look for free or lower cost alternatives. KPN Mobile has already experienced a “WhatsApp shock” to its messaging revenues. Even in Vodafone Germany, voice revenues were down 6.1% and messaging 3.7%. Although enterprise and wholesale business were strong, prepaid lost enough revenue to leave the company only barely ahead. This suggests that the sizable low-wage segment of the German labour market is under macro-economic stress, and a shock is coming.

The problem is global, for example, at the 2013 Mobile World Congress, the CEO of KT Corp described voice revenues as “collapsing” and stated that as a result, revenues from their fixed operation had halved in two years. His counterpart at Turk Telekom asserted that “voice is dead”.

The combination of technological and macro-economic challenge results in disruptive, rather than linear change. For example, Spanish subscribers who adopt WhatsApp to substitute expensive operator messaging (and indeed voice) with relatively cheap data because they are struggling financially have no particular reason to return when the recovery eventually arrives.

Price is not the only issue

Also, it is worth noting that price is not the whole problem. Back at MWC 2013, the CEO of Viber, an OTT voice and messaging provider, claimed that the app has the highest penetration in Monaco, where over 94% of the population use Viber every day. Not only is Monaco somewhere not short of money, but it is also a market where the incumbent operator bundles unlimited SMS, though we feel that these statistics might slightly stretch the definition of population as there are many French subscribers using Monaco SIM cards. However, once adoption takes off it will be driven by social factors (the dynamics of innovation diffusion) and by competition on features.

Differential psychological and social advantages of communications media

The interaction styles and use cases of the new voice and messaging apps that users have adopted are frequently quite different from those imagined by telecoms operators. Between them, telcos have done little more than add mobility to telephony during the last 100 years. However, because of the Internet and the growth of the smartphone, users now have many more ways to communicate and interact than just calling one another.

SMS (only telcos’ second mass ‘hit’ product after voice) and MMS are “fire-and-forget” – messages are independent of each other, and transported on a store-and-forward basis. Most IM applications are either conversation-based, with messages being organised in threads, or else stream-based, with users releasing messages on a broadcast or publish-subscribe basis. They often also have a notion of groups, communities, or topics. In getting used to these and internalising their shortcuts, netiquette, and style, customers are becoming socialised into these applications, which will render the return of telcos as the messaging platform leaders with Rich Communication Service (RCS) less and less likely. Figure 1 illustrates graphically some important psychological and social benefits of four different forms of communication.

Figure 1:  Psychological and social advantages of voice, SMS, IM, and Social Media


Source: STL Partners

The different benefits can clearly be seen. Taking voice as an example, a voice call could be a private conversation, a conference call, or even part of a webinar. Typically, voice calls are 1 to 1, single instance, and convey little presence information (an engaged tone or voicemail to others). By their very nature, voice calls are real time and demand a high time commitment, along with the need to pay attention to the entire conversation. Whilst not as strong as video or face-to-face communication, a voice call can convey high emotion and is, of course, an audio medium.

SMS has very different advantages. The majority of SMS messages sent are private, 1 to 1 conversations, and are not thread based. They are not real time, carry no presence information, and require a low time commitment; because of this they typically demand minimal attention, and while it is possible to use a wide array of emoticons or smileys, these are not the same as voice or pictures. Even though some applications are starting to blur the line with voice memos, today SMS messaging is a visual experience.

Instant messaging, whether enterprise or consumer, offers a richer experience than SMS. It can include presence, it is often thread based, and can include pictures, audio, videos, and real time picture or video sharing. Social takes the communications experience a step further than IM, and many of the applications such as Facebook Messenger, LINE, KakaoTalk, and WhatsApp are exploiting the capabilities of these communications mechanisms to disrupt existing or traditional channels.

Voice calls, whether telephony or ‘OTT’, continue to possess their original benefits. But now, people are learning to use other forms of communication that better fit the psychological and social advantages that they seek in different contexts. We consider these changes to be permanent and ongoing shifts in customer behaviour towards more effective applications, and there will doubtless be more – which is both a threat and an opportunity for telcos and others.

The applicable model of how these shifts transpire is probably a Bass diffusion process, where innovators enter a market early and are followed by imitators as the mass majority. Subsequently, the innovators then migrate to a new technology or service, and the cycle continues.
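The Bass process described above can be sketched in a few lines of code. This is an illustrative simulation only: the coefficient of innovation (p), coefficient of imitation (q) and market size used here are hypothetical values chosen to show the characteristic S-curve, not estimates from our forecasts.

```python
# Illustrative Bass diffusion simulation: innovators (coefficient p) adopt
# independently, while imitators (coefficient q) adopt in proportion to how
# many people have already adopted. All parameter values are hypothetical.

def bass_adoption(p, q, m, periods):
    """Return cumulative adopters at the end of each period for a market of size m."""
    cumulative = 0.0
    series = []
    for _ in range(periods):
        # New adopters this period: innovation term plus imitation term,
        # applied to the remaining (not-yet-adopting) population.
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        series.append(cumulative)
    return series

# Hypothetical market of 10 million users over 20 periods
curve = bass_adoption(p=0.03, q=0.38, m=10_000_000, periods=20)
print(f"Adoption after 5 periods:  {curve[4]:,.0f}")
print(f"Adoption after 20 periods: {curve[19]:,.0f}")
```

The imitation term is what makes the "knowing a churner" effect discussed below so powerful: each adopter increases the adoption rate among the remainder.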

One of the best predictors of churn is knowing a churner, and it is to be expected that users of WhatsApp, Vine, etc. will take their friends with them. Economic pain will both accelerate the diffusion process and also spread it deeper into the population, as we have seen in South Korea with KakaoTalk.

High-margin segments are more at risk

Generally, all these effects are concentrated and emphasised in the segments that are traditionally unusually profitable, as this is where users stand to gain most from the price arbitrage. A finding from European Mobile: The Future’s not Bright, it’s Brutal and borne out by the research carried out for this report is that prices in Southern Europe were historically high, offering better margins to operators than elsewhere in Europe. Similarly, international and roaming calls are preferentially affected – although international minutes of use continue to grow near their historic average rates, all of this and more accrues to Skype, Google, and others. Roaming, despite regulatory efforts, remains expensive and a target for disruptors. It is telling that Truphone, a subject of our 2008 voice report, has transitioned from being a company that competed with generic mobile voice to being one that targets roaming.

 

  • Consumers: enjoying the fragmentation
  • Enterprises: in search of integration
  • What now makes a winning value proposition?
  • The fall of telephony
  • Talk may be cheap, but time is not
  • The increasing importance of “presence”
  • The competition from Online Service Providers
  • Operators’ responses
  • Free telco & other low-cost voice providers
  • Meeting Enterprise customer needs
  • Re-imagining customer service
  • Telco attempts to meet changing needs
  • Voice Developers – new opportunities
  • Into the Hunger Gap
  • Summary: the changing telephony business model
  • Conclusions
  • STL Partners and the Telco 2.0™ Initiative

 

  • Figure 1:  Psychological and social advantages of voice, SMS, IM, and Social Media
  • Figure 2: Ideal Enterprise mobile call routing scenario
  • Figure 3: Mobile Clients used to bypass high mobile call charges
  • Figure 4: Call Screening Options
  • Figure 5: Mobile device user context and data source
  • Figure 6: Typical business user modalities
  • Figure 7:  OSPs are pursuing platform strategies
  • Figure 8: Subscriber growth of KakaoTalk
  • Figure 9: Average monthly minutes of use by market
  • Figure 10: Key features of Voice and Messaging platforms
  • Figure 11: Average user screen time Facebook vs. WhatsApp  (per month)
  • Figure 12: Disruptive price competition also comes from operators
  • Figure 13: The hunger gap in music

The Future Value of Voice and Messaging

Background – ‘Voice and Messaging 2.0’

This is the latest report in our analysis of developments and strategies in the field of voice and messaging services over the past seven years. In 2007/8 we predicted the current decline in telco provided services in Voice & Messaging 2.0 “What to learn from – and how to compete with – Internet Communications Services”, further articulated strategic options in Dealing with the ‘Disruptors’: Google, Apple, Facebook, Microsoft/Skype and Amazon in 2011, and more recently published initial forecasts in European Mobile: The Future’s not Bright, it’s Brutal. We have also looked in depth at enterprise communications opportunities, for example in Enterprise Voice 2.0: Ecosystem, Species and Strategies, and trends in consumer behaviour, for example in The Digital Generation: Introducing the Participation Imperative Framework.  For more on these reports and all of our other research on this subject please see here.

The New Report


This report provides an independent and holistic view of the voice and messaging market, looking in detail at trends, drivers and detailed forecasts, the latest developments, and the opportunities for all players involved. The analysis will save valuable time, effort and money by providing more realistic forecasts of future potential, and a fast-track to developing and/or benchmarking a leading-edge strategy and approach in digital communications. It contains:

  • Our independent, external market-level forecasts of voice and messaging in 9 selected markets (US, Canada, France, Germany, Spain, UK, Italy, Singapore, Taiwan).
  • Best practice and leading-edge strategies in the design and delivery of new voice and messaging services (leading to higher customer satisfaction and lower churn).
  • The factors that will drive best and worst case performance.
  • The intentions, strategies, strengths and weaknesses of formerly adjacent players now taking an active role in the V&M market (e.g. Microsoft)
  • Case studies of Enterprise Voice applications including Twilio and Unified Communications solutions such as Microsoft Office 365
  • Case studies of Telco OTT Consumer Voice and Messaging services such as Telefonica’s TuGo
  • Lessons from case studies of leading-edge new voice and messaging applications globally, such as WhatsApp, KakaoTalk and other so-called ‘Over The Top’ (OTT) players


It comprises an 18-page executive summary, 260 pages and 163 figures – full details below. Prices on application – please email contact@telco2.net or call +44 (0) 207 247 5003.

Benefits of the Report to Telcos, Technology Companies and Partners, and Investors


For a telco, this strategy report:

  • Describes and analyses the strategies that can make the difference between best and worst case performance, worth $80bn (or +/-20% revenues) in the 9 markets we analysed.
  • Externally benchmarks internal revenue forecasts for voice and messaging, leading to more realistic assumptions, targets, decisions, and better alignment of internal (e.g. board) and external (e.g. shareholder) expectations, and thereby potentially saving money and improving contributions.
  • Can help improve decisions on voice and messaging services investments, and provides valuable insight into the design of effective and attractive new services.
  • Enables more informed decisions on partner vs competitor status of non-traditional players in the V&M space with new business models, and thereby produce better / more sustainable future strategies.
  • Evaluates the attractiveness of developing and/or providing partner Unified Communication services in the Enterprise market, and ‘Telco OTT’ services for consumers.
  • Shows how to create a valuable and realistic new role for Voice and Messaging services in its portfolio, and thereby optimise its returns on assets and capabilities


For other players, including technology and Internet companies, and telco technology vendors:

  • The report provides independent market insight on how telcos and other players will be seeking to optimise $ multi-billion revenues from voice and messaging, including new revenue streams in some areas.
  • As a potential partner, the report will provide a fast-track to guide product and business development decisions to meet the needs of telcos (and others).
  • As a potential competitor, the report will save time and improve the quality of competitor insight by giving strategic insights into the objectives and strategies that telcos will be pursuing.


For investors, it will:

  • Improve investment decisions and strategies returning shareholder value by improving the quality of insight on forecasts and the outlook for telcos and other technology players active in voice and messaging.
  • Save vital time and effort by accelerating decision making and investment decisions.
  • Help them better understand and evaluate the needs, goals and key strategies of key telcos and their partners / competitors


The Future Value of Voice: Report Content Summary

  • Executive Summary. (18 pages outlining the opportunity and key strategic options)
  • Introduction. Disruption and transformation, voice vs. telephony, and scope.
  • The Transition in User Behaviour. Global psychological, social, pricing and segment drivers, and the changing needs of consumer and enterprise markets.
  • What now makes a winning Value Proposition? The fall of telephony, the value of time vs telephony, presence, Online Service Provider (OSP) competition, operators’ responses, free telco offerings, re-imaging customer service, voice developers, the changing telephony business model.
  • Market Trends and other Forecast Drivers. Model and forecast methodology and assumptions, general observations and drivers, ‘Peak Telephony/SMS’, fragmentation, macro-economic issues, competitive and regulatory pressures, handset subsidies.
  • Country-by-Country Analysis. Overview of national markets. Forecast and analysis of: UK, Germany, France, Italy, Spain, Taiwan, Singapore, Canada, US, other markets, summary and conclusions.
  • Technology: Products and Vendors’ Approaches. Unified Communications. Microsoft Office 365, Skype, Cisco, Google, WebRTC, Rich Communications Service (RCS), Broadsoft, Twilio, Tropo, Voxeo, Hypervoice, Calltrunk, Operator voice and messaging services, summary and conclusions.
  • Telco Case Studies. Vodafone 360, One Net and RED, Telefonica Digital, Tu Me, Tu Go, BlueVia and AT&T.
  • Summary and Conclusions. Consumer, enterprise, technology and Telco OTT.

Software Defined Networking (SDN): A Potential ‘Game Changer’

Summary: Software Defined Networking is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks. This briefing analyses its potential benefits and use cases, outlines strategic scenarios and key action plans for telcos, summarises key vendor positions, and explains why it is so important for both the telco and vendor communities to adopt and exploit SDN capabilities now. (May 2013, Executive Briefing Service, Cloud & Enterprise ICT Stream, Future of the Network Stream)

Figure 1 – Potential Telco SDN/NFV Deployment Phases

Source: STL Partners

Introduction

Software Defined Networking or SDN is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks.

With SDN, networks no longer need to be point to point connections between operational centres; rather the network becomes a programmable fabric that can be manipulated in real time to meet the needs of the applications and systems that sit on top of it. SDN allows networks to operate more efficiently in the data centre as a LAN and potentially also in Wide Area Networks (WANs).

SDN is new and, like any new technology, this means that there is a degree of hype and a lot of market activity:

  • Venture capitalists are on the lookout for new opportunities;
  • There are plenty of start-ups all with “the next big thing”;
  • Incumbents are looking to quickly acquire new skills through acquisition;
  • And, not surprisingly, there is a degree of “SDN washing”, where existing products get a makeover or a software upgrade and are suddenly SDN compliant.

However, there still isn’t widespread clarity about what SDN is and how it might be used outside of vendor papers and marketing materials, and there are plenty of important questions to be answered. For example:

  • SDN is open to interpretation and is not an industry standard, so what is it?
  • Is it better than what we have today?
  • What are the implications for your business, whether telcos, or vendors?
  • Could it simply be just a passing fad that will fade into the networking archives like IP Switching or X.25 and can you afford to ignore it?
  • What will be the impact on LAN and WAN design and for that matter data centres, telcos and enterprise customers? Could it be a threat to service providers?
  • Could we see a future where networking equipment becomes commoditised just like server hardware?
  • Will standards prevail?

Vendors are to a degree adding to the confusion. For example, Cisco argues that it already has an SDN-capable product portfolio with Cisco One. It says that its solution is more capable than solutions dominated by open-source based products, because these have limited functionality.

This executive briefing will explain what SDN is, why it is different to traditional networking, look at the emerging market with some likely use cases and then look at the implications and benefits for service providers and vendors.

How and why has SDN evolved?

SDN has been developed in response to the fact that basic networking hasn’t really evolved much over the last 30 plus years, and that new capabilities are required to further the development of virtualised computing to bring innovation and new business opportunities. From a business perspective the networking market is a prime candidate for disruption:

  • It is a mature market that has evolved steadily for many years
  • There are relatively few leading players who have a dominant market position
  • Technology developments have generally focused on speed rather than cost reduction or innovation
  • Low cost silicon is available to compete with custom chips developed by the market leaders
  • There is a wealth of open source software plus plenty of low cost general purpose computing hardware on which to run it
  • Until SDN, no one really took a clean slate view on what might be possible

New features and capabilities have been added to traditional equipment, but they have tended to bloat the software, increasing the cost of both purchasing and operating the devices. Nevertheless, IP networking as we know it has performed the task of connecting two end points very well; it has been able to support the explosion of growth required by the Internet, and by mobile and mass computing in general.

Traditionally each element in the network (typically a switch or a router) builds up a network map and makes routing decisions based on communication with its immediate neighbours. Once a connection through the network has been established, packets follow the same route for the duration of the connection. Voice, data and video have differing delivery requirements with respect to delay, jitter and latency, but in traditional networks there is no overall picture of the network – no single entity responsible for route planning, or ensuring that traffic is optimised, managed or even flows over the most appropriate path to suit its needs.

One of the significant things about SDN is that it removes the autonomy of individual networking elements: they no longer make their own routing decisions. The responsibility for establishing paths through the network, and for their control and routing, is placed in the hands of one or more central network controllers. The controller is able to see the network as a complete entity and manage its traffic flows, routing, policies and quality of service, in essence treating the network as a fabric and then attempting to get maximum utilisation from that fabric. SDN controllers generally offer external interfaces through which applications can control and set up network paths.
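The central-controller principle can be illustrated with a toy sketch: a single entity holds the whole topology, computes an end-to-end path, and derives per-switch forwarding entries from it. The switch names and link costs here are invented for illustration, and real controllers and southbound protocols such as OpenFlow are far richer than this.

```python
import heapq

# Toy "controller's-eye view" of a network: an adjacency list mapping each
# switch to its neighbours and link costs. Names and costs are hypothetical.
topology = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1, "s4": 5},
    "s3": {"s1": 4, "s2": 1, "s4": 1},
    "s4": {"s2": 5, "s3": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over the controller's global view of the network."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, link_cost in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
    return None

path = shortest_path(topology, "s1", "s4")
# The controller translates the path into a next-hop entry for each switch
flow_entries = {hop: nxt for hop, nxt in zip(path, path[1:])}
print(path)          # -> ['s1', 's2', 's3', 's4']
print(flow_entries)  # -> {'s1': 's2', 's2': 's3', 's3': 's4'}
```

The key contrast with traditional networking is that no switch here computed anything: the path and all forwarding state came from one entity with a complete picture of the fabric.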

There has been a growing demand to make networks programmable by external applications – data centres and virtual computing are clear examples of where it would be desirable to deploy not just the virtual computing environment, but all the associated networking functions and network infrastructure, from a single console. With no common control point, the only way of providing interfaces to external systems and applications is to place agents in the networking devices and ask external systems to manage each device individually. This kind of architecture has difficulty scaling, creates a lot of control traffic that reduces overall efficiency, and may end up with multiple applications trying to control the same entity; it is therefore fraught with problems.

Network Functions Virtualisation (NFV)

It is worth noting that a complementary initiative to SDN, called Network Functions Virtualisation (NFV), was started in 2012. The initiative was launched by the European Telecommunications Standards Institute (ETSI) to take functions that sit on dedicated hardware – load balancers, firewalls, routers and other network devices – and run them on virtualised hardware platforms, lowering capex, extending their useful life and reducing operating expenditure. You can read more about NFV later in the report on page 20.
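As a toy illustration of the NFV idea, the sketch below implements a load balancer – a function traditionally sold as a dedicated appliance – as a few lines of software that could run on a commodity, virtualised server. The backend addresses are hypothetical, and a production virtual network function would of course handle real packets, health checks and failover.

```python
import itertools

# A minimal software load balancer: the sort of function NFV moves off
# dedicated appliances and onto virtualised commodity hardware.
class VirtualLoadBalancer:
    def __init__(self, backends):
        # Round-robin over the configured backend servers
        self._cycle = itertools.cycle(backends)

    def route(self, request):
        """Assign the next backend to this request."""
        backend = next(self._cycle)
        return f"{request} -> {backend}"

# Hypothetical backend pool
vlb = VirtualLoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for i in range(4):
    print(vlb.route(f"request-{i}"))
```

Because the function is just software, it can be scaled, upgraded or relocated like any other virtual machine – which is precisely the capex and opex argument ETSI makes for NFV.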

In contrast, SDN makes it possible to program or change the network to meet a specific, time-dependent need and to establish end-to-end connections that meet specific criteria. The SDN controller holds a map of the current network state and of the requests that external applications are making on the network; this makes it easier to get the best use from the network at any given moment, carry out meaningful traffic engineering, and work more effectively with virtual computing environments.

What is driving the move to SDN?

The Internet and the world of IP communications have seen continuous development over the last 40 years. There has been huge innovation and strict control of standards through the Internet Engineering Task Force (IETF). Because of the ad-hoc nature of its development, there are many different functions catering for all sorts of use cases. Some overlap, some are obsolete, but all still have to be supported and more are being added all the time. This means that the devices that control IP networks and connect to the networks must understand a minimum subset of functions in order to communicate with each other successfully. This adds complexity and cost because every element in the network has to be able to process or understand these rules.

But the system works, and it works well. For example, when we open a web browser and start a session to stlpartners.com, our browser and PC initially have no knowledge of how to reach STL's web server, yet usually within half a second or so the STL Partners website appears. What actually happens can be seen in Figure 2. Our PC uses a variety of protocols to connect first to a gateway (1) on our network and then to a public name server (2 & 3) in order to look up the IP address for stlpartners.com. The PC then opens a connection to that address (4) and assumes that the network will route packets of information to and from the destination server. The process is much the same whether using public WANs or private Local Area Networks.

Figure 2 – Process of connecting to an Internet web address

Source: STL Partners
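The lookup-then-connect sequence described above can be sketched with Python's standard library. The OS resolver performs the gateway and name-server steps (1–3) behind a single call; `socket.create_connection` would then perform step 4. The `localhost` lookup is used here purely as an offline-safe illustration in place of stlpartners.com.

```python
import socket

def resolve(host: str) -> str:
    """Steps 2 & 3: ask the configured resolver for the host's IP address."""
    info = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the first element of sockaddr is the IP address.
    return info[0][4][0]

# Step 4 would then be: socket.create_connection((resolve(host), 80)),
# after which the network is trusted to route packets in both directions.
print("localhost ->", resolve("localhost"))
```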

The Internet is also highly resilient: it was developed to survive a variety of network outages, including the complete loss of sub-networks. Popular myth has it that the US Department of Defence wanted it to be able to survive a nuclear attack; while it probably could, nuclear survivability was not a design goal. The Internet can route around failed networking elements, and it does this by giving network devices the autonomy to make their own decisions about the state of the network and how to get data from one point to any other.

While this autonomy is of great value in unreliable networks – which is what the Internet looked like during its evolution in the late 1970s and early 1980s – today's networks comprise far more robust elements and more reliable links. The upshot is that networks typically operate at a sub-optimal level: unless there is an outage, routes and traffic paths are mostly static and last for the duration of the connection. If an outage occurs, the routers in the network decide amongst themselves how best to re-route the traffic, each making its own decisions about traffic flow and prioritisation based on its individual view of the network. In fact, most routers and switches are not aware of the network in its entirety, only of the adjacent devices they are connected to and of the information those devices pass on about the networks and devices they in turn are connected to. It can therefore take some time for a network to re-converge and stabilise, as we saw in the Internet outages that affected Amazon, Facebook, Google and Dropbox last October.

The diagram in Figure 3 shows a simple router network. Router A knows about the networks on routers B and C because it is connected directly to them and they have informed A about their networks. B and C have also informed A that they can reach the networks and devices on router D. You can see from this model that there is no overall picture of the network and that no single device is able to make network-wide decisions. To connect a device on a network attached to A to a device on a network attached to D, A must make a decision based solely on what B or C tell it.

Figure 3 – Simple router network

Source: STL Partners
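A minimal sketch of this neighbour-only view, using hypothetical link costs for the four routers in Figure 3: A cannot see the whole topology, so it simply combines its own link costs with whatever B and C advertise about reaching D (classic distance-vector behaviour).

```python
# Hypothetical link costs for the Figure 3 topology (illustrative only).
link_cost = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 4, ("C", "D"): 1}

# What B and C advertise to A: their own cost to reach D's networks.
advertised = {"B": link_cost[("B", "D")], "C": link_cost[("C", "D")]}

def next_hop_from_a(dest_costs):
    """A's purely local decision: own link cost + neighbour's advertised cost."""
    return min(dest_costs, key=lambda n: link_cost[("A", n)] + dest_costs[n])

print(next_hop_from_a(advertised))  # prints: C  (cost 2+1=3 beats 1+4=5)
```

If the B–D link cost changed, A would only learn of it once B re-advertised, which is why convergence after a failure takes time.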

This model makes it difficult to build large data centres with thousands of Virtual Machines (VMs) and to offer customers dynamic service creation, because the network only understands physical devices and does not easily allow each VM to have its own range of IP addresses and other IP services. Ideally, you would configure a complete virtual system – virtual machines, load balancing, security, network control elements and network configuration – from a single management console, with these abstract functions then mapped to physical computing and networking resources. VMware has coined the term 'Software Defined Data Centre' (SDDC) to describe a system that allows all of these elements and more to be controlled by a single suite of management software.

Moreover, returning to the fact that every networking device needs to understand a raft of Internet Requests For Comments (RFCs): all the code supporting these RFCs in switches and routers costs money. High-performance processing systems and memory are required in traditional routers and switches in order to inspect and process traffic, even in MPLS networks. Cisco IOS supports over 600 RFCs and other standards. This adds to cost, complexity, compatibility issues, future obsolescence and power/cooling needs.

SDN takes a fresh approach to building networks based on the technologies available today: it places the intelligence centrally on scalable compute platforms and leaves the switches and routers as relatively simple packet-forwarding engines. The control platforms still have to support all the standards, but the platforms the controllers run on are far more powerful than the processors in traditional networking devices and, more importantly, the controllers can manage the network as a single fabric rather than each element making its own potentially sub-optimal decisions.
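By contrast with the neighbour-only decisions above, a controller holding the full topology map can compute end-to-end paths globally. A minimal sketch of that controller-side computation, reusing the illustrative four-router topology with hypothetical costs:

```python
import heapq

# The controller's complete view of the network (illustrative costs).
topology = {
    "A": {"B": 1, "C": 2},
    "B": {"A": 1, "D": 4},
    "C": {"A": 2, "D": 1},
    "D": {"B": 4, "C": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: the cheapest route as seen from the controller."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph[node].items():
            if nbr not in visited:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

print(shortest_path(topology, "A", "D"))  # prints: (3, ['A', 'C', 'D'])
```

In an OpenFlow deployment, the controller would then push the corresponding flow entries to A and C, rather than leaving the devices to converge on a route themselves.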

As one proof point that SDN works, in early 2012 Google announced that it had migrated its live data centres to a Software Defined Network using switches it designed and built from off-the-shelf silicon, with OpenFlow as the control path to a Google-designed controller. Google claims many benefits from this system, including better utilisation of its compute power. At the time, Google stated it would have liked to purchase OpenFlow-compliant switches, but none were available that suited its needs. Since then, new vendors such as Big Switch Networks and Pica8 have entered the market, delivering relatively low-cost OpenFlow-compliant switches.

To read the Software Defined Networking report in full, including the following sections detailing additional analysis…

  • Executive Summary including detailed recommendations for telcos and vendors
  • Introduction (reproduced above)
  • How and why has SDN evolved? (reproduced above)
  • What is driving the move to SDN? (reproduced above)
  • SDN: Definitions and Advantages
  • What is OpenFlow?
  • SDN Control Platforms
  • SDN advantages
  • Market Forecast
  • STL Partners’ Definition of SDN
  • SDN use cases
  • Network Functions Virtualisation
  • What are the implications for telcos?
  • Telcos’ strategic options
  • Telco Action Plans
  • What should telcos be doing now?
  • Vendor Support for OpenFlow
  • Big Switch Networks
  • Cisco
  • Citrix
  • Ericsson
  • FlowForwarding
  • HP
  • IBM
  • Nicira
  • OpenDaylight Project
  • Open Networking Foundation
  • Open vSwitch (OVS)
  • Pertino
  • Pica8
  • Plexxi
  • Tellabs
  • Conclusions & Recommendations

…and the following figures…

  • Figure 1 – Potential Telco SDN/NFV Deployment Phases
  • Figure 2 – Process of connecting to an Internet web address
  • Figure 3 – Simple router network
  • Figure 4 – Traditional Switches with combined Control/Data Planes
  • Figure 5 – SDN approach with separate control and data planes
  • Figure 6 – ETSI’s vision for Network Functions Virtualisation
  • Figure 7 – Network Functions Virtualised and managed by SDN
  • Figure 8 – Network Functions Virtualisation relationship with SDN
  • Table 1 – Telco SDN Strategies
  • Figure 9 – Potential Telco SDN/NFV Deployment Phases
  • Figure 10 – SDN used to apply policy to Internet traffic
  • Figure 11 – SDN Congestion Control Application

 

Cloud 2.0: the fight for the next wave of customers

Summary: The fight for the Cloud Services market is about to move into new segments and territories. In the build-up to the launch of our new strategy report, ‘Telco strategies in the Cloud’, we review perspectives on this shared at the 2012 EMEA and Silicon Valley Executive Brainstorms by strategists from major telcos and tech players, including: Orange, Telefonica, Verizon, Vodafone, Amazon, Bain, Cisco, and Ericsson (September 2012, Executive Briefing Service, Cloud & Enterprise ICT Stream).

Below is an extract from this 33 page Telco 2.0 Briefing Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here. Non-members can subscribe here and for this and other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Introduction

As part of the New Digital Economics Executive Brainstorm series, future strategies in Cloud Services were explored at the New Digital Economics Silicon Valley event at the Marriott Hotel, San Francisco, on the 27th March, 2012, and the second EMEA Cloud 2.0 event at the Grange St. Pauls Hotel on the 13th June 2012.

At the events, over 200 specially-invited senior executives from across the communications, media, retail, finance and technology sectors looked at how to make money from cloud services and the role and strategies of telcos in this industry, using a widely acclaimed interactive format called ‘Mindshare’.

This briefing summarises key points, participant votes, and our high-level take-outs from across the events, and focuses on the common theme that the cloud market is evolving to address new customers, and the consequence of this change on strategy and implementation. We are also publishing a comprehensive report on Cloud 2.0: Telco Strategies in the Cloud.




Executive Summary

The end of the beginning

The first phase of enterprise cloud services has been dominated by the ‘big tech’ and web players like Amazon, Google, and Microsoft, who have developed highly sophisticated cloud operations at enormous scale. The customers in this first round are the classic ‘early adopters’ of enterprise ICT – players with a high proportion of IT genes in their corporate DNA such as Netflix, NASA, Silicon Valley start-ups, some of the world’s largest industrial and marketing companies, and the IT industry itself. There is little doubt that these leading customers and major suppliers will retain their leading-edge status in the market.

The next phase of cloud market development is the move into new segments in the broader market. Participants at the EMEA brainstorm thought that a combination of new customers and new propositions would drive the most growth in the next 3 years.

UK Services Revenues: Actual and Forecast (index)

These new segments comprise both industries and companies outside the early adopters in developed markets, and companies in new territories in emerging and developing markets. These customers are typically less technology oriented, more focused on business requirements, and need a combination of de-mystification of cloud and support to develop and run such systems.

Closer to the customer

There are opportunities for telcos in this evolving landscape. While the major players’ scale will be hard to beat, there are opportunities in the new segments in being ‘closer to the customer’. This involves telcos leveraging potential advantages of:

  • existing customer relationships, existing enterprise IT assets, and channels to markets (where they exist);
  • geographical proximity, where telcos can build, locate and connect more directly to overcome data sovereignty and latency issues.

Offering unique, differentiated services

Telcos should also be able to leverage existing assets and capabilities through APIs in the cloud to create distinctive offerings to enterprise and SME customers:

  • Network assets will enable better management of cloud services by allowing greater control of the network components;
  • Data assets will enable a wider range of potential applications for cloud services that use telco data (such as identification services);
  • And communications assets (such as APIs to voice and messaging) will allow communications services to be built in to cloud applications.

Next steps for telcos

  • Telcos need to move fast to leverage their existing relationships with customers both large and small and optimise their cloud offerings in line with new trends in the enterprise ICT market, such as bring-your-own-device (BYOD).
  • Customers are increasingly looking to outsource business processes to cut costs, and telcos are well-placed to take advantage of this opportunity.
  • Telcos need to continue to partner with independent software vendors, in order to build new products and services. Telcos should also focus on tight integration between their core services and cloud services or cloud service providers (either delivered by themselves or by third parties.) During the events, we saw examples from Vodafone, Verizon and Orange amongst others.
  • Telcos should also look at the opportunity to act as cloud service brokers. For example, delivering a mash up of Google Apps, Workday and other services that are tightly integrated with telco products, such as billing, support, voice and data services. The telco could ensure that the applications work well together and deliver a fully supported, managed and billed suite of products.
  • Identity management and security also came through as strong themes and there is a natural role for telcos to play here. Telcos already have a trusted billing relationship and hold personal customer information. Extending this capability to offer pre-population of forms, acting as an authentication broker on behalf of other services and integrating information about location and context through APIs would represent additional business and revenue generating opportunities.
  • Most telcos are already exploring opportunities to exploit APIs, which will enable them to start offering network-as-a-service, voice-as-a-service, device management, billing integration and other services. Depending on platform and network capability, there are literally hundreds of APIs that telcos could offer to external developers. These APIs could be used to develop applications that are integrated with telcos’ network product or service, which in turn makes the telco more relevant to their customers.

We will be exploring these strategies in depth in Cloud 2.0: Telco Strategies in the Cloud and at the invitation only New Digital Economics Executive Brainstorms in Digital Arabia in Dubai, 6-7 November, and Digital Asia in Singapore, 3-5 December, 2012.

Key questions explored at the brainstorms and in this briefing:

  • How will the Cloud Services market evolve?
  • Which customer and service segments are growing fastest (IaaS, PaaS, SaaS)?
  • What are the critical success factors to market adoption?
  • Who will be the leading players, and how will it impact different sectors?
  • What are the telcos’ strengths and who are the most advanced telcos today?
  • Which aspects of the cloud services market should they pursue first?
  • Where should telcos compete with IT companies and where should they cooperate?
  • What must telcos do to secure their share of the cloud and how much time do they have?

Stimulus Speakers/Panelists

Telcos

  • Peter Martin, Head of Strategy, Cloud Computing, Orange Group
  • Moisés Navarro Marín, Director, Strategy Global Cloud Services, Telefonica Digital
  • Alex Jinivizian, Head of Enterprise Strategy, Verizon Enterprise Solutions
  • Robert Brace, Head of Cloud Services, Vodafone Group

Technology Companies

  • Mohan Sadashiva, VP & GM, Cloud Services, Aepona
  • Gustavo Reyna, Solutions Marketing Manager, Aepona
  • Iain Gavin, Head of EMEA Web Services, Amazon
  • Pat Adamiak, Senior Director, Cloud Solutions, Cisco
  • Charles J. Meyers, President, Equinix Americas
  • Arun Bhikshesvaran, CMO, Ericsson
  • John Zanni, VP of Service Provider Marketing & Alliances, Parallels

Consulting & Industry Analysis

  • Chris Brahm, Partner, Head of Americas Technology Practices, Bain
  • Andrew Collinson, Research Director, STL Partners

With thanks to our Silicon Valley 2012 event sponsors and partners:

Silicon Valley 2012 Event Sponsors

And our EMEA 2012 event sponsors:

EMEA 2012 Event Sponsors

To read the note in full, including the following sections detailing support for the analysis…

  • Round 2 of the Cloud Fight
  • Selling to new customers
  • What channels are needed?
  • How will telcos perform in cloud?
  • With which services will telcos succeed?
  • How can telcos differentiate?
  • Comments on telcos’ role, objectives and opportunities
  • Four telcos’ perspectives
  • Telefonica Digital – focusing on business requirements
  • Verizon – Cloud as a key Platform
  • Orange Business Services – communications related cloud
  • Vodafone – future cloud vision
  • Techco’s Perspectives
  • Amazon – A history of Amazon Web Services (AWS)
  • Cisco – a world of many clouds
  • Ericsson – the networked society and telco cloud
  • Aepona – Cloud Brokerage & ‘Network as a Service’ (NaaS)
  • The Telco 2.0™ Initiative

…and the following figures…

  • Figure 1 – Bain forecasts for business cloud market size
  • Figure 2 – Key barriers to cloud adoption
  • Figure 3 – Identifying the cloud growth markets
  • Figure 4 – Requirements for success
  • Figure 5 – New customers to drive cloud growth
  • Figure 6 – How to increase revenues from cloud services
  • Figure 7 – How to move cloud services forward
  • Figure 8 – Enterprise cloud channels
  • Figure 9 – Small businesses cloud channels
  • Figure 10 – Vote on Telco Cloud Market Share
  • Figure 11 – Telcos’ top differentiators in the cloud
  • Figure 12 – The global reach of Orange Business
  • Figure 13 – The telco as an intermediary
  • Figure 14 – Vodafone’s vision of the cloud
  • Figure 15 – Amazon Web Services’ cloud infrastructure
  • Figure 16 – Cisco’s world of many clouds
  • Figure 17 – Cloud traffic in the data centre
  • Figure 18 – Ericsson’s vision for telco cloud
  • Figure 19 – Summary of Ericsson cloud functions
  • Figure 20 – Aepona Cloud Services Broker
  • Figure 21 – How to deliver network-enhanced cloud services

Members of the Telco 2.0 Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can download the full 33 page report in PDF format here. Non-Members, please subscribe here. For this or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Companies and technologies covered: Telefonica, Vodafone, Verizon, Orange, Cloud, Amazon, Google, Ericsson, Cisco, Aepona, Equinix, Parallels, Bain, Telco 2.0, IaaS, PaaS, SaaS, private cloud, public cloud, telecom, strategy, innovation, ICT, enterprise.

Mobile TV: going ‘Round The Side’ of telco networks?

Summary: Dyle TV, a new mobile TV broadcast network (supported by Fox), was presented at the Silicon Valley Brainstorm against the backdrop of Cisco’s VNI (Visual Networking Index) research on forecast growth in mobile video traffic. It was argued that Dyle’s model can both take the pressure off mobile operator data capacity by taking video traffic ‘round the side’ and make good use of TV Broadcasters’ spectrum. Could this model work, not only in the US but elsewhere around the world? (May 2012, Executive Briefing Service)

Dyle Mobile TV Image Telco 2.0


Below is an extract from this 19 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service here. Non-members can subscribe here, buy a Single User license for this report online here for £595 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings at the London (12-13 June) New Digital Economics Brainstorm.




Background

This is an extract of the analysis of a session at the Digital Entertainment 2.0 stream of the Silicon Valley New Digital Economics Executive Brainstorm, that took place on the 28th March, 2012. Using a widely acclaimed interactive format called ‘Mindshare’, the Digital Entertainment 2.0 stream enabled specially-invited senior executives from across the communications, entertainment and technology sectors to discuss and explore key strategic issues on the theme of ‘New Business Models in a multi-screen, 3D/HD, mobile world’. Presentations from the event can be found here and further STL Partners research on entertainment and content can be found here.

Mobile Video: How to Reduce Complexity

The hypothesis explored at this session (one of three) was that content owners and carriers want to deliver live video content to their customers but face significant barriers: hundreds of device types, various network conditions, bandwidth congestion, hundreds of simultaneous sessions, and a painful workflow.

Key questions:

  • How are new devices, formats and enabling technologies improving the situation?
  • What use cases are most compelling – for different markets, in different geographies?
  • What are the viable cost models? 
 
Presenters and Panellists:
  • Chris Osika, Senior Director IBSG, Cisco Systems presented an overview of Cisco’s VNI study of the future impact of video on communications networks;
  • Erik Moreno, SVP, Corporate Development, Fox Networks Group, presented on Dyle TV, a new innovation in Mobile TV in the US market; 
  • Andre James, Partner, Media Practice, Bain, also joined the panel.

The session was hosted and moderated by Andrew Collinson, Research Director, STL Partners. This Briefing summarises some of the high-level findings and includes the verbatim output of the brainstorm.

Stimulus presentations

Cisco’s VNI Study

Opening the session, Chris Osika, Senior Director IBSG, Cisco Systems, gave some background to mobile broadband data growth and especially video traffic, citing Cisco’s own VNI (Visual Networking Index) forecasts. As well as a summary of top-level findings below, here is a video of his presentation in full.

 

He covered changing end-user behaviours and business models in the TV and video sectors, citing tablets, multi-tasking and “TV Everywhere” services as catalysts of change.

Figure 1 – Cisco VNI forecast growth of mobile data traffic

Mobile TV 'Round the Side' Telco 2.0 image

Source: Cisco
[Note: STL Partners will shortly be issuing its own analysis of the new Cisco VNI mobile data forecasts]

Counter-intuitively, he disagreed with part of the central notion of “Social TV”, stating that while consumers might use two devices simultaneously, it will likely be for two different experiences, not a single converged one. He also touched on the risks of video “breaking the network”, and subtly introduced the idea of using WiFi for offload, suggesting that this might be part of a service provider’s arsenal (rather than driven by the user, as is currently typical).

Figure 2 – Adoption of tablets & other examples of new consumer behaviour

Mobile TV 'Round the Side' Telco 2.0 image Fig 2

Source: Cisco

Dyle TV

Next, Erik Moreno, SVP Corporate Development, Fox Networks Group, introduced Dyle, a new partnership for mobile TV which Fox is working on with partners such as Comcast/Xfinity. (Currently, five of the top seven US broadcast networks are participating; ABC and CBS are not at present.) Dyle intends to use existing broadcast technology to deliver live TV to mobile devices, including tablets and automotive screens. In essence, this is another attempt to create a mobilised version of broadcast (the technology is called ATSC-MH), complete with new chipsets to be included in handsets, and apps to decrypt and play back content.

Figure 3 – An introduction to the Dyle mobile TV business model & technology

Mobile TV 'Round the Side' Telco 2.0 image Fig 3

Source: Fox Networks

However, unlike previous misadventures in mobile TV (think DVB-H in Europe, and Qualcomm’s MediaFlo network in the US), this time Dyle may be able to exploit a changed consumer mindset about on-the-go content (e.g. on tablets), coupled with economics different from 3G/4G usage – i.e. no data caps – as well as smarter and more user-friendly devices. Also, Dyle will initially be free-to-air, rather than demanding upfront monthly subscriptions, which have proven a major obstacle for occasional users.

He discussed the complexities of getting the service to market, juggling 11 different partnerships, cutting deals with content publishers, obtaining the first ATSC-MH integrated handset (from Samsung), starting build-out in 32 initial markets, gaining a distribution deal with MetroPCS and outlining its future roadmap such as an iPad antenna accessory from Belkin.

Figure 4 – Dyle mobile TV form-factors

Mobile TV 'Round the Side' Telco 2.0 image Fig 4

Source: Dyle

He sees four potential future revenue streams:

  • Direct to consumer, which he thinks is “hard”
  • Wrapped up into MVPD services from cable companies wanting to offer TV Everywhere propositions
  • Targeted advertising – potentially location-based as well as individualised
  • Distributed as an add-on to telcos’ voice and data plans

Figure 5 – Dyle has multiple business & distribution models

Mobile TV 'Round the Side' Telco 2.0 image Fig 5

Source: Fox Networks

Mr Moreno said that for mobile, “IP networks don’t scale” – especially for multiple viewers of live TV in the same location.

As part of the business rationale for Dyle, STL Partners thinks that it could help the TV industry justify continued ownership of spectrum in the face of a concerted effort by the telecoms industry to push regulators to repurpose it for mobile broadband.

To read this report in full, including…

  • Background
  • Mobile Video: How to Reduce Complexity
  • Stimulus presentations
  • Cisco’s VNI Study
  • Dyle TV
  • Panel Discussion & Delegate Input
  • Audience Q&As on presentations
  • Panel Discussion
  • Will Dyle work in the US and elsewhere? (Votes by region)
  • Verbatim delegate questions
  • What are the compelling mobile device video use cases? 
  • Conclusions and next steps
  • Key takeaways
  • Next steps

… and the following figures….

  • Figure 1 – Cisco VNI forecast growth of mobile data traffic
  • Figure 2 – Adoption of tablets & other examples of new consumer behaviour
  • Figure 3 – An introduction to the Dyle mobile TV business model & technology
  • Figure 4 – Dyle mobile TV form-factors
  • Figure 5 – Dyle has multiple business & distribution models
  • Figure 6 – Vote on Dyle model in the US
  • Figure 7 – Vote on Dyle model in Europe
  • Figure 8 – Vote on Dyle model in Asia

Members of the Telco 2.0 Executive Briefing Subscription Service can download the full 24 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £595 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Key terms referenced: Cisco, Dyle, Mobile TV, mobile operators, telcos, US, Europe, Asia, MediaFlo, VNI.

Customer Experience 2.0: Hosted Unicomms for Business – a major opportunity (Cisco Presentation)

Customer Experience 2.0: Hosted Unicomms for Business, presentation by Fabio Gori, Head of SP Marketing, EMEA, Cisco Systems, sizing the opportunity of business communications in the cloud. Presented at the EMEA Brainstorm, November 2011. (November 2011, Executive Briefing Service, Cloud & Enterprise ICT Stream)

Download presentation here.

Links here for more on New Digital Economics brainstorms and Cloud 2.0 research, or call +44 (0) 207 247 5003.

Video here:

Example slide from the presentation:

Understanding SMBs and enterprises' needs

Cloud 2.0: don’t blow it, telcos

Summary: enterprise cloud computing services need great connectivity to work, but there are opportunities for telcos to participate beyond connectivity. What are the opportunities, how are telcos approaching them, and what are the key strategies? Includes forecasts for telcos’ shares of VPC, IaaS, PaaS and SaaS. (September 2011, Executive Briefing Service, Cloud & Enterprise ICT Stream)

Below is an extract from this 28 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

To share this article easily, please click:


Introduction

In our previous analyses Cloud 2.0: What are the Telco Opportunities? and Cloud 2.0: Telcos to grow Revenues 900% by 2014 we’ve looked broadly at the growing cloud market opportunity for telcos. This new report takes this analysis forward, looking in detail at the service definitions, market forecasts and the industry’s confidence in them, and actual and potential strategies for telcos.

We’ll also be looking in depth at the opportunities in cloud services in the Cloud 2.0: Transforming technology, media and telecoms at the EMEA Executive Brainstorm in London on Thursday 10th November 2011.

The Cloud Market

Cloud computing represents the next wave of IT. Almost all organisations are saying that they will adopt cloud computing to a greater or lesser extent, across all segments and sizes. Consequently, we believe that there exists a large opportunity for telcos if they move quickly enough to take advantage of it.

Total market cloud forecasts – variation and uncertainty

In order to understand where the best opportunities are and how telcos can best use their particular strengths to take advantage of them, we need to examine the size of the opportunity and understand which areas of cloud computing are most likely to offer the best returns.

Predictions for the size and growth of the cloud computing market are very diverse:

  • Merrill Lynch has previously offered the most optimistic estimate: $160 billion by the end of 2011 (The Cloud Wars: $100+ billion at stake, May 2008)
  • Gartner predicted expenditure of $150.1 billion by 2013 (Gartner forecast, March 2009)
  • IDC predicts annual cloud services revenues of $55.5 billion by 2014 (IDC report, June 2010)
  • Cisco has estimated the cloud market at $43 billion by 2013 (STL Partners video, October 2010)
  • Bain expects spending to grow fivefold from $30 billion in 2011 to $150 billion by 2020 (The Five Faces of the Cloud, 2011)
  • IBM’s Market Insights Cloud Phase 2 assessment of September 2011 sizes the cloud market at $88.5bn by 2015
  • Research by AMI Partners suggests that SMBs’ share of cloud spend will approach $100 billion by 2014 – over 60% of the total (World Wide Cloud Services Study, December 2010)
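As a worked check on one of the figures above, Bain’s “fivefold from $30 billion in 2011 to $150 billion by 2020” implies a compound annual growth rate of roughly 19.6% over the nine-year period:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# Bain's forecast: $30bn (2011) to $150bn (2020), i.e. nine years of growth.
growth = cagr(30, 150, 2020 - 2011)
print(f"Implied CAGR: {growth:.1%}")  # prints: Implied CAGR: 19.6%
```

The same simple formula makes it easy to compare the other forecasts on a like-for-like basis despite their differing end dates.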

Figure 1 – Cloud services market forecast comparisons

Cloud 2.0 Industry Forecast Comparisons Bain, Gartner, IDC, Cisco Sept 2011 Telco 2.0

Source: Bain, Cap Gemini, Cisco, Gartner, IBM, IDC, Merrill Lynch

Whichever way you look at it, the volume of spending on cloud computing is high and growing. But why are there such large variations in the estimates of that growth?

There is a clear correlation between the report dates and the market forecast sizes. Two of the forecasts – from Merrill Lynch and Gartner – are well over two years old, and are likely to have drawn conclusions from data gathered before the 2008 recession started to bite. Both are almost certainly over-optimistic as a result, and are included as an indication of the historic uncertainty in Cloud forecasts rather than criticism of the forecasters.

More generally, while each forecaster will be using different assumptions and extrapolation techniques, the variation is also likely to reflect the immaturity of the cloud services market: there exists little historical data from which to extrapolate the future, and little experience of what kinds of growth rates the market will experience. For example, well-known inhibitors to the adoption of cloud, such as security and control, have yet to be resolved by cloud service providers to the point where enterprise customers are willing to commit a substantial volume of their IT spending.

Additionally, the larger the organisation, the slower the adoption of cloud computing is likely to be: moving to a new computing model means changing fundamental IT architectures, and for large enterprises this will be a process undertaken over years. It is hard to be precise about the degree to which these factors will inhibit the growth of cloud acceptance.

As a result, in a world where economic uncertainty seems unlikely to disappear in the short to medium term, it would be unwise to assume a high level of accuracy for market sizing predictions, although the general upward trend is very clear.

Cloud service types

Cloud computing services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

Figure 2 – Cloud service layer definitions


Source: STL Partners/Telco 2.0

Of the forecasts available, we prefer Bain’s near term forecast because: 1) it is based on their independent Cloud ‘Center of Excellence’ work; 2) it is relatively recent; and 3) it has clear and meaningful categories and definitions.
The following figure summarises Bain’s current market forecast, split by cloud service type.

Figure 3 – Cloud services: market forecast and current players


Currently, telcos have around a 5% share of the c.$20 billion annual cloud services revenue, with 25% CAGR forecast to 2013.
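As a rough check on these figures, compounding the report’s c.$20 billion base at the forecast 25% CAGR is straightforward. A back-of-envelope sketch (the 2011 base value is approximate, per the text):

```python
# Back-of-envelope projection of the c.$20bn cloud services market at the
# forecast 25% CAGR. The 2011 base figure is approximate.
def project(base, cagr, years):
    """Compound `base` at growth rate `cagr` for `years` years."""
    return base * (1 + cagr) ** years

market_2011 = 20.0                           # $bn, approximate
for years_out in (1, 2):
    size = project(market_2011, 0.25, years_out)
    print(2011 + years_out, round(size, 2))  # -> 2012 25.0 / 2013 31.25
```

On these assumptions the market passes $30bn by 2013, broadly in line with the forecast comparisons in Figure 1.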

At the May 2011 EMEA Telco 2.0 Executive Brainstorm, we used these forecasts as a base to explore market views on the various cloud markets. There were c.200 senior executives at the brainstorm from industries across Telecoms, Media and Technology (TMT) and, following detailed presentations on Cloud Services, they were asked highly structured questions to ascertain their views on the likelihood of telco success in addressing each service.

Infrastructure as a Service (IaaS)

IaaS consists of cloud-based, usually virtualised servers, networking, and storage, which the customer is free to manage as they need. Billing is typically on a utility computing model: the more of each that you use, the more you pay. IaaS is the largest of the three main segments: Bain forecasts it to be worth around $3.5 billion in 2011, with 45% CAGR forecast. The market leader is Amazon with about an 18% share. Other players include IBM and Rackspace. Telcos currently have about 20% of this market – Qwest/Savvis/Equinix, and Verizon/Terremark.

Respondents at the EMEA Telco 2.0 Brainstorm estimated that telcos could take an average share of 25% of this market. The distribution was reasonably broad, with the vast majority in the 11-40% range.

Figure 4 – IaaS – Telco market share forecasts


Source: EMEA Telco 2.0 Executive Brainstorm delegate vote, May 2011

To read the note in full, including the following additional analysis…

  • Virtual Private Cloud (VPC)
  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Hybrid Cloud
  • Cloud Service Brokerage
  • Overall telco cloud market projections by type, including forecast uncertainties
  • Challenges for telcos
  • Which areas should telcos target?
  • Telcos’ advantages
  • IaaS, PaaS, or SaaS?
  • Developing other segments
  • What needs to change?
  • How can telcos deliver?
  • Telcos’ key strengths
  • Key strategy variables
  • Next Steps

…and the following charts…

  • Figure 1 – Cloud services market forecast comparisons
  • Figure 2 – Cloud service layer definitions
  • Figure 3 – Cloud services: market forecast and current players
  • Figure 4 – IaaS – Telco market share forecasts
  • Figure 5 – VPC – Telco market share forecasts
  • Figure 6 – SaaS – Telco market share forecasts
  • Figure 7 – PaaS – Telco market share forecasts
  • Figure 8 – Total telco cloud market size and share estimates – 2014
  • Figure 9 – Uncertainty in forecast by service
  • Figure 10 – Telco cloud strengths
  • Figure 11 – Cloud services timeline vs. profitability schematic
  • Figure 12 – Telcos’ financial stability

Members of the Telco 2.0 Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can download the full 28 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, people and products referenced: Aepona, Amazon, AMI Partners, Bain, BT, CenturyLink, CENX, Cisco, CloudStack, Deutsche Telekom, EC2, Elastic Compute Cloud (EC2), EMC, Equinix, Flexible 4 Business, Force.com, Forrester, France Telecom, Gartner, Google App Engine, Google Docs, IBM, IDC, Intuit, Java, Merrill Lynch, Microsoft, Microsoft Office 365, MySQL, Neustar, NTT, OneVoice, OpenStack, Oracle, Orange, Peartree, Qwest, Rackspace, Red Hat, Renub Research, Sage, Salesforce.com, Savvis, Telstra, Terremark, T-Systems, Verizon, VMware, Vodafone, Webex.

Technologies and industry terms referenced: Azure, Carrier Ethernet, Cloud computing, cloud service providers, Cloud Services, Communications as a Service, compliance, Connectivity, control, forecast, Global reach, Hybrid Cloud, Infrastructure as a Service (IaaS), IT, Mobile Cloud, network, online, Platform as a Service (PaaS), Reliability, resellers, security, SMB, Software as a Service (SaaS), storage, telcos, telecoms, strategy, innovation, transformation, unified communications, video, virtualisation, Virtual Private Cloud (VPC), VPN.

Broadband 2.0: Mobile CDNs and video distribution

Summary: Content Delivery Networks (CDNs) are becoming familiar in the fixed broadband world as a means to improve the experience and reduce the costs of delivering bulky data like online video to end-users. Is there now a compelling need for their mobile equivalents, and if so, should operators partner with existing players or build / buy their own? (August 2011, Executive Briefing Service, Future of the Networks Stream).

Below is an extract from this 25 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can buy a Single User license for this report online here for £595 (+VAT) or subscribe here. For multiple user licenses, or to find out about interactive strategy workshops on this topic, please email contact@telco2.net or call +44 (0) 207 247 5003.


Introduction

As is widely documented, mobile networks are witnessing huge growth in the volumes of 3G/4G data traffic, primarily from laptops, smartphones and tablets. While Telco 2.0 is wary of some of the headline shock-statistics about forecast “exponential” growth, or “data tsunamis” driven by ravenous consumption of video applications, there is certainly a fast-growing appetite for use of mobile broadband.

That said, many of the actual problems of congestion today can be pinpointed either to a handful of busy cells at peak hour – or, often, the inability of the network to deal with the signalling load from chatty applications or “aggressive” devices, rather than the “tonnage” of traffic. Another large trend in mobile data is the use of transient, individual-centric flows from specific apps or communications tools such as social networking and messaging.

But “tonnage” is not completely irrelevant. Despite the diversity, there is still an inexorable rise in the use of mobile devices for “big chunks” of data, especially the special class of software commonly known as “content” – typically popular/curated standalone video clips or programmes, or streamed music. Images (especially those in web pages) and application files such as software updates fit into a similar group – sizeable lumps of data downloaded by many individuals across the operator’s network.

This one-to-many nature of most types of bulk content highlights inefficiencies in the way mobile networks operate. The same data chunks are downloaded time and again by users, typically going all the way from the public Internet, through the operator’s core network, eventually to the end user. Everyone loses in this scenario – the content publisher needs huge servers to dish up each download individually. The operator has to deal with transport and backhaul load from repeatedly sending the same content across its network (and IP transit from shipping it in from outside, especially over international links). Finally, the user has to deal with all the unpredictability and performance compromises involved in accessing the traffic across multiple intervening points – and ends up paying extra to support the operator’s heavier cost base.

In the fixed broadband world, many content companies have availed themselves of a group of specialist intermediaries called CDNs (content delivery networks). These firms on-board large volumes of the most important content served across the Internet, before dropping it “locally” as near to the end user as possible – if possible, served up from cached (pre-saved) copies. Often, the CDN operating companies have struck deals with the end-user facing ISPs, which have often been keen to host their servers in-house, as they have been able to reduce their IP interconnection costs and deliver better user experience to their customers.

In the mobile industry, the use of CDNs is much less mature. Until relatively recently, the overall volumes of data didn’t really move the needle from the point of view of content firms, while operators’ radio-centric cost bases were also relatively immune from those issues. Optimising the “middle mile” for mobile data transport efficiency seemed far less of a concern than getting networks built out and handsets and apps perfected, or setting up policy and charging systems to parcel up broadband into tiered plans. Arguably, better-flowing data paths and video streams would only load the radio more heavily, just at a time when operators were having to compress video to limit congestion.

This is now changing significantly. With the rise in smartphone usage – and the expectations around tablets – Internet-based CDNs are pushing much more heavily to have their servers placed inside mobile networks. This is leading to a certain amount of introspection among the operators – do they really want to have Internet companies’ infrastructure inside their own networks, or could this be seen more as a Trojan Horse of some sort, simply accelerating the shift of content sales and delivery towards OTT-style models? Might it not be easier for operators to build internal CDN-type functions instead?

Some of the earlier approaches to video traffic management – especially so-called “optimisation” without the content companies’ permission or involvement – are becoming trickier with new video formats and more scrutiny from a Net Neutrality standpoint. But CDNs by definition involve the publishers, so any necessary compression or other processing can be done collaboratively, rather than “transparently” and without cooperation or consent.

At the same time, many of the operators’ usual vendors are seeing this transition point as a chance to differentiate their new IP core network offerings, typically combining CDN capability into their routing/switching platforms, often alongside the optimisation functions as well. In common with other recent innovations from network equipment suppliers, there is a dangled promise of Telco 2.0-style revenues that could be derived from “upstream” players. In this case, there is a bit more easily-proved potential, since this would involve direct substitution of the existing revenues already derived from content companies, by the Internet CDN players such as Akamai and Limelight. This also holds the possibility of setting up a two-sided, content-charging business model that fits OK with rules on Net Neutrality – there are few complaints about existing CDNs except from ultra-purist Neutralists.

On the other hand, telco-owned CDNs have existed in the fixed broadband world for some time, with largely indifferent levels of success and adoption. There needs to be a very good reason for content companies to choose to deal with multiple national telcos, rather than simply take the easy route and choose a single global CDN provider.

So, the big question for telcos around CDNs at the moment is “should I build my own, or should I just permit Akamai and others to continue deploying servers into my network?” Linked to that question is what type of CDN operation an operator might choose to run in-house.

There are four main reasons why a mobile operator might want to build its own CDN:

  • To lower costs of network operation or upgrade, especially in radio network and backhaul, but also through the core and in IP transit.
  • To improve the user experience of video, web or applications, either in terms of data throughput or latency.
  • To derive incremental revenue from content or application providers.
  • For wider strategic or philosophical reasons about “keeping control over the content/apps value chain”

This Analyst Note explores these issues in more details, first giving some relevant contextual information on how CDNs work, especially in mobile.

What is a CDN?

The traditional model for Internet-based content access is straightforward – the user’s browser requests a piece of data (image, video, file or whatever) from a server, which then sends it back across the network, via a series of “hops” between different network nodes. The content typically crosses the boundaries between multiple service providers’ domains, before finally arriving at the user’s access provider’s network, flowing down over the fixed or mobile “last mile” to their device. In a mobile network, that also typically involves transiting the operator’s core network first, which has a variety of infrastructure (network elements) to control and charge for it.

A Content Delivery Network (CDN) is a system for serving Internet content from servers which are located “closer” to the end user either physically, or in terms of the network topology (number of hops). This can result in faster response times, higher overall performance, and potentially lower costs to all concerned.

In most cases in the past, CDNs have been run by specialist third-party providers, such as Akamai and Limelight. This document also considers the role of telcos running their own “on-net” CDNs.

CDNs can be thought of as analogous to the distribution of bulky physical goods – it would be inefficient for a manufacturer to ship all products to customers individually from a single huge central warehouse. Instead, it will set up regional logistics centres that can be more responsive – and, if appropriate, tailor the products or packaging to the needs of specific local markets.

As an example, there might be a million requests for a particular video stream from the BBC. Without using a CDN, the BBC would have to provide sufficient server capacity and bandwidth to handle them all. The company’s immediate downstream ISPs would have to carry this traffic to the Internet backbone, the backbone itself has to carry it, and finally the requesters’ ISPs’ access networks have to deliver it to the end-points. From a media-industry viewpoint, the source network (in this case the BBC) is generally called the “content network” or “hosting network”; the destination is termed an “eyeball network”.

In a CDN scenario, all the data for the video stream has to be transferred across the Internet just once for each participating network, when it is deployed to the downstream CDN servers and stored. After this point, it is only carried over the user-facing eyeball networks, not over any other networks via the public Internet. This also means that the CDN servers may be located strategically within the eyeball networks, in order to use their resources more efficiently. For example, the eyeball network could place the CDN server on the downstream side of its most expensive link, so as to avoid carrying the video over it multiple times. In a mobile context, CDN servers could be used to avoid pushing large volumes of data through expensive core-network nodes repeatedly.
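The economics described above can be illustrated with a minimal sketch (Python, purely illustrative – the class, names and figures are hypothetical, not any vendor’s API): the origin is contacted once per piece of content per cache, and every subsequent request is served locally within the eyeball network.

```python
# Minimal illustration of the caching idea above: the first request for a
# piece of content triggers one fetch from the origin ("content network");
# all later requests are served from the local cache in the eyeball network.
class EdgeCache:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch   # callable: content_id -> bytes
        self.store = {}                    # content_id -> cached bytes
        self.origin_fetches = 0            # trips back to the origin

    def get(self, content_id):
        if content_id not in self.store:   # cache miss: one origin trip
            self.store[content_id] = self.origin_fetch(content_id)
            self.origin_fetches += 1
        return self.store[content_id]      # cache hit: served locally

def origin(content_id):                    # stand-in for the hosting network
    return b"stream-bytes-for-" + content_id.encode()

cache = EdgeCache(origin)
for _ in range(1000):                      # a thousand user requests...
    cache.get("popular-video")
print(cache.origin_fetches)                # -> 1: the origin was hit once
```

However many users request the stream, the expensive upstream links carry it only once per cache; everything else stays on the eyeball network’s own infrastructure.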

When the video or other content is loaded into the CDN, other optimisations such as compression or transcoding into other formats can be applied if desired. There may also be various treatments relating to new forms of delivery such as HTTP streaming, where the video is broken up into “chunks” with several different sizes/resolutions. Collectively, these upfront processes are called “ingestion”.
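The “chunking” part of ingestion can be sketched in a few lines (Python, illustrative only – real packagers segment by playback duration and emit several resolutions, as noted above; this shows only the splitting idea):

```python
# Sketch of the "chunking" step in ingestion for HTTP streaming: the content
# is cut into fixed-size segments that can be cached and fetched
# independently. Segment size here is arbitrary and illustrative.
def chunk(data: bytes, size: int) -> list:
    """Split `data` into consecutive segments of at most `size` bytes."""
    return [data[i:i + size] for i in range(0, len(data), size)]

video = bytes(10_000)             # stand-in for an encoded video
segments = chunk(video, 4096)
print(len(segments))              # -> 3 (4096 + 4096 + 1808 bytes)
```

Each segment can then be cached, replaced, or served at a different resolution independently of the rest of the stream.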

Figure 1 – Content delivery with and without a CDN


Source: STL Partners / Telco 2.0

Value-added CDN services

It is important to recognise that the fixed-centric CDN business has increased massively in richness and competition over time. Although some of the players have very clever architectures and IPR in the forms of their algorithms and software techniques, the flexibility of modern IP networks has tended to erode away some of the early advantages and margins. Shipping large volumes of content is now starting to become secondary to the provision of associated value-added functions and capabilities around that data. Additional services include:

  • Analytics and reporting
  • Advert insertion
  • Content ingestion and management
  • Application acceleration
  • Website security management
  • Software delivery
  • Consulting and professional services

It is no coincidence that the market leader, Akamai, now refers to itself as a “provider of cloud optimisation services” in its financial statements, rather than a CDN, with its business being driven by “trends in cloud computing, Internet security, mobile connectivity, and the proliferation of online video”. In particular, it has started refocusing away from dealing with “video tonnage”, and towards application acceleration – for example, speeding up the load times of e-commerce sites, which has a measurable impact on the abandonment of purchasing visits. Akamai’s total revenues in 2010 were around $1bn, less than half of which came from “media and entertainment” – the traditional “content industries”. Its H1 2011 revenues were relatively disappointing, with growth coming from non-traditional markets such as enterprise and high-tech (e.g. software update delivery) rather than media.

This is a critically important consideration for operators that are looking to CDNs to provide them with sizeable uplifts in revenue from upstream customers. Telcos – especially in mobile – will need to invest in various additional capabilities as well as the “headline” video traffic management aspects of the system. They will need to optimise for network latency as well as throughput, for example – which will probably not have the cost-saving impacts expected from managing “data tonnage” more effectively.

Although in theory telcos’ other assets should help – for example mapping download analytics to more generalised customer data – this is likely to involve extra complexity with the IT side of the business. There will also be additional efforts around sales and marketing that go significantly beyond most mobile operators’ normal footprint into B2B business areas. There is also a risk that an analysis of bottlenecks for application delivery / acceleration ends up simply pointing the finger of blame at the network’s inadequacies in terms of coverage. Improving delivery speed, cost or latency is only valuable to an upstream customer if there is a reasonable likelihood of the end-user actually having connectivity in the first place.

Figure 2: Value-added CDN capabilities


Source: Alcatel-Lucent

Application acceleration

An increasingly important aspect of CDNs is their move beyond content/media distribution into a much wider area of “acceleration” and “cloud enablement”. As well as delivering large pieces of data efficiently (e.g. video), there is arguably more tangible value in delivering small pieces of data fast.

There are various manifestations of this, but a couple of good examples illustrate the general principles:

  • Many web transactions are abandoned because websites (or apps) seem “slow”. Few people would trust an airline’s e-commerce site, or a bank’s online interface, if they’ve had to wait impatiently for images and page elements to load, perhaps repeatedly hitting “refresh” on their browsers. Abandoned transactions can be directly linked to slow or unreliable response times – typically a function of congestion either at the server or various mid-way points in the connection. CDN-style hosting can accelerate the service measurably, leading to increased customer satisfaction and lower levels of abandonment.
  • Enterprise adoption of cloud computing is becoming exceptionally important, with both cost savings and performance enhancements promised by vendors. Sometimes, such platforms will involve hybrid clouds – a mixture of private (internal) and public (Internet) resources and connectivity. Where corporates are reliant on public Internet connectivity, they may well want to ensure as fast and reliable a service as possible, especially in terms of round-trip latency. Many IT applications are designed to be run on ultra-fast company private networks, with a lot of “hand-shaking” between the user’s PC and the server. This process is very latency-dependent, and as companies mobilise their applications, the additional overhead of cellular networks may otherwise cause significant problems.

Hosting applications at CDN-type cloud acceleration providers achieves much the same effect as for video – they can bring the application “closer”, with fewer hops between the origin server and the consumer. Additionally, the CDN is well-placed to offer additional value-adds such as firewalling and protection against denial-of-service attacks.
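The latency point above is easy to quantify with a back-of-envelope sketch (all numbers are illustrative assumptions: a “chatty” application making 40 sequential round trips, with hypothetical RTTs for a distant origin over cellular versus a nearby CDN-hosted node):

```python
# Rough model of a "chatty" application: total wait is dominated by
# round_trips * RTT, so cutting the RTT (hosting the application closer)
# helps far more than adding raw bandwidth. All numbers are illustrative.
def transaction_time_ms(round_trips, rtt_ms):
    """Total sequential hand-shaking time for one transaction."""
    return round_trips * rtt_ms

HANDSHAKES = 40                                  # sequential round trips
print(transaction_time_ms(HANDSHAKES, 120))      # distant origin: 4800 ms
print(transaction_time_ms(HANDSHAKES, 30))       # CDN-hosted nearby: 1200 ms
```

Under these assumptions, quartering the round-trip time quarters the user-perceived wait, which is why fewer hops matter so much more for hand-shake-heavy applications than for bulk downloads.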

To read the 25 page note in full, including the following additional content…

  • How do CDNs fit with mobile networks?
  • Internet CDNs vs. operator CDNs
  • Why use an operator CDN?
  • Should delivery mean delivery?
  • Lessons from fixed operator CDNs
  • Mobile video: CDNs, offload & optimisation
  • CDNs, optimisation, proxies and DPI
  • The role of OVPs
  • Implementation and planning issues
  • Conclusion & recommendations

… and the following additional charts…

  • Figure 3 – Potential locations for CDN caches and nodes
  • Figure 4 – Distributed on-net CDNs can offer significant data transport savings
  • Figure 5 – The role of OVPs for different types of CDN player
  • Figure 6 – Summary of Risk / Benefits of Centralised vs. Distributed and ‘Off Net’ vs. ‘On-Net’ CDN Strategies

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 25 page report in PDF format here. Non-Members, please see here for how to subscribe, here to buy a single user license for £595 (+VAT), or for multi-user licenses and any other enquiries please email contact@telco2.net or call +44 (0) 207 247 5003.

Organisations and products referenced: 3GPP, Acision, Akamai, Alcatel-Lucent, Allot, Amazon Cloudfront, Apple’s Time Capsule, BBC, BrightCove, BT, Bytemobile, Cisco, Ericsson, Flash Networks, Huawei, iCloud, ISPs, iTunes, Juniper, Limelight, Netflix, Nokia Siemens Networks, Ooyala, OpenWave, Ortiva, Skype, smartphone, Stoke, tablets, TiVo, Vantrix, Velocix, Wholesale Content Connect, Yospace, YouTube.

Technologies and industry terms referenced: acceleration, advertising, APIs, backhaul, caching, CDN, cloud, distributed caches, DNS, Evolved Packet Core, eyeball network, femtocell, fixed broadband, GGSNs, HLS, HTTP streaming, ingestion, IP network, IPR, laptops, LIPA, LTE, macro-CDN, micro-CDN, middle mile, mobile, Net Neutrality, offload, optimisation, OTT, OVP, peering proxy, QoE, QoS, RNCs, SIPTO, video, video traffic management, WiFi, wireless.