Three new telco business models: Soft-net, Cloud-net, Compute-net

Introduction

This report outlines three new telecoms business models that build on previous research, in which we set out our vision of an emerging third age of telecoms: the Coordination Age. This is based on a global need to improve the efficiency of resource utilisation, which is manifesting among industries and individuals as a desire to “make the world work better”. We discuss this concept in detail in the following reports:

We believe that three new business models for telcos are emerging as part of the Coordination Age.

  • The Soft-Net: the core business remains connectivity, but the softwarisation of the network through SDN / NFV enables the network to adapt and scale to support new, advanced connectivity services. This includes third-party digital and networked-compute services that depend on the physical network connectivity the Soft-Net provides.
  • The Cloud-Net: also connectivity-focused, but with the production, delivery and consumption of services increasingly effected via the cloud (i.e. cloud-native). SDN and virtualisation enable capacity and services to be spun up, managed and delivered on demand over any physical network and device.
  • The Compute-Net: the core business is to provide distributed, networked, compute- and software-based services, often for specific enterprise verticals. These depend on SDN and NFV to deliver the ultra-fast, low-latency compute, throughput and routing capabilities required.

The three new models represent distinct strategic options for telcos looking to either: optimise and evolve their existing connectivity business; create new value from cloud-based, ‘horizontal’ platforms; or expand into new vertical markets – or a combination of all three approaches. This is illustrated here:

Interdependence between the three future telco business models

Source: STL Partners

In other words:

  • The Soft-Net operates the physical and virtualised infrastructure that delivers flexible, advanced connectivity in support of Cloud-Net and Compute-Net services (as well as legacy communications and connectivity services, delivered in a more scalable and cost-effective way)
  • The Cloud-Net delivers flexible, on-demand connectivity over hybrid infrastructure (including that owned by multiple Soft-Nets) in support of the increasingly complex and variable networking requirements of globally distributed, digital enterprises
  • The Compute-Net delivers vertically focused, compute-enabled processes and outcomes across all areas of industry and society. In doing so, it relies on networking and cloud platform services supplied by the Soft-Net and Cloud-Net, which may or may not be vertically integrated as part of its own organisation.


The three telecoms business models link to NFV / SDN strategies

One of the distinguishing features of these models is the mode of telco engagement in NFV and SDN that potentially drives each of them. In previous analyses, we have identified three pathways towards NFV and SDN deployment. This is how they link to the three business models:

Figure 1: The three future telco business models and corresponding NFV pathways

Source: STL Partners, NFV / SDN deployment pathways: Three telco futures

In the rest of this report, we define these telecoms business models in more detail and illustrate how they present a pragmatic framework for telcos to focus their technology investments and develop valuable new Coordination Age services.

Contents:

  • Executive Summary
  • Introduction
  • Three telco futures and Telco 2.0
  • Chapter 1: Three telecoms business models for the Coordination Age
  • Three new business models: but why ‘telco’?
  • Business model analysis: Telcos’ vs competitors’ strengths
  • Relationship between the Soft-Net, Cloud-Net and Compute-Net business models
  • Chapter 2: Roles of the Soft-Net, Cloud-Net and Compute-Net in a ‘driverless car-as-a-service’ ecosystem
  • A driverless car-as-a-service business involves coordination of data, processes and events across a broad supply chain
  • Soft-Nets provide the mainly wireless connectivity
  • Cloud-Nets provide the hybrid, on-demand wide-area networking
  • Compute-Nets design and coordinate the ecosystem
  • Conclusions
  • The Coordination Age: A new purpose for telecoms, and three models for realising it
  • Key takeaways for telcos

Figures:

  1. The three future telco business models and corresponding NFV pathways
  2. The Telco 2.0 infrastructure and service stack
  3. Interdependence between the three future telco business models
  4. Two examples of the three new business models
  5. The three new business models overview
  6. Telcos face some fierce competition as they move up the stack
  7. Telco expansion across the three business models
  8. Advantages and disadvantages of vertical integration
  9. Mapping the Soft-Net, Cloud-Net and Compute-Net roles in a driverless car environment
  10. Types of data and corresponding compute-based services in a driverless car-as-a-service ecosystem


Vendors vs. telcos? New plays in enterprise managed services

Digital transformation is reshaping vendors’ and telcos’ offer to enterprises

What does ‘digital transformation’ mean?

The enterprise market for telecoms vendors and operators is being radically reshaped by digital transformation. This transformation is taking place across all industry verticals, not just the telecoms sector, whose digital transformation – desirable or actual – STL Partners has forensically mapped out for several years now.

The term ‘digital transformation’ is so familiar that it breeds contempt in some quarters. Consequently, it is worth taking a while to refresh our thinking on what ‘digital transformation’ actually means. This will in turn help explain how the digital needs and practices of enterprises are impacting on vendors and telcos alike.

The digitisation of enterprises across all sectors can be described as part of a more general social, economic and technological evolution toward ever more far-reaching use of software-, computing- and IP-based modes of: interacting with customers and suppliers; communicating; networking; collaborating; distributing and accessing media content; producing, marketing and selling goods and services; consuming and purchasing those goods and services; and managing money flows across the economy. Indeed, one definition of the term ‘digital’ in this more general sense could simply be ‘software-, computing- and IP-driven or -enabled’.

For the telecoms industry, the digitisation of society and technology in this sense has meant, among other things, the decline of voice (fixed and mobile) as the primary communications service, although it is still the single largest contributor to turnover for many telcos. Voice mediates an ‘analogue’ economy and way of working in the sense that the voice is a form of ‘physical’ communication between two or more persons. In addition, the activity and means of communication (i.e. the actual telephone conversation to discuss project issues) is a separate process and work task from other work tasks, in different physical locations, that it helps to co-ordinate. By contrast, in an online collaboration session, the communications activity and the work activity are combined in a shared virtual space: the digital service allows for greater integration and synchronisation of tasks previously carried out by physical means, in separate locations, and in a less inherently co-ordinated manner.

Similarly, data in the ATM and Frame Relay era was mainly a means to transport a certain volume of information or files from one work place to another, without joining those work places together as one: the work places remained separate, both physically and in terms of the processes and work activities associated with them. The traditional telecoms network itself reflected the physical economy and processes that it enabled: comprising massive hardware and equipment stacks responsible for shifting huge volumes of voice signals and data packets (so called on the analogy of postal packets) from one physical location to another.

By contrast, with the advent of the digital (software-, computing- and IP-enabled) society and economy, the value carried by communications infrastructure has increasingly shifted from voice and data (as ‘physical’ signals and packets) to that of new modes of always-on, virtual interconnectedness and interactivity that tend towards the goal of eliminating or transcending the physical separation and discontinuity of people, work processes and things.

Examples of this digital transformation of communications, and associated experiences of work and life, could include:

  • As stated above, simple voice communications, in both business and personal life, have been increasingly superseded by ‘real-time’ or near-real-time, one-to-one or one-to-many exchange and sharing of text and audio-visual content across modes of communication such as instant messaging, unified communications (UC), social media (including increasingly in the work place) or collaborative applications enabling simultaneous, multi-party reviewing and editing of documents and files
  • Similarly, location-to-location file transfers in support of discrete, geographically separated business processes are being replaced by centralised storage and processing of, and access to, enterprise data and applications in the cloud
  • These trends mean that, in theory, people can collaborate and ‘meet’ with each other from any location in the world, and the digital service constitutes the virtual activity and medium through which that collaboration takes place
  • Similarly, with the Internet of Things (IoT), physical objects, devices, processes and phenomena generate data that can be transmitted and analysed in ‘real time’, triggering rapid responses and actions directed towards those physical objects and processes based on application logic and machine learning – resulting in more efficient, integrated processes and physical events meeting the needs of businesses and people. In other words, the IoT effectively involves digitising the physical world: disparate physical processes, and the action of diverse physical things and devices, are brought together by software logic and computing around human goals and needs.
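The IoT pattern described in the last bullet — physical phenomena generating data, application logic analysing it in ‘real time’, and rapid responses directed back at the physical things — can be sketched as a minimal sense-analyse-act loop. All names here (`analyse`, `act`, `THRESHOLD`, the device IDs) are hypothetical illustrations, not part of any real system:

```python
# Minimal sketch of the IoT sense-analyse-act loop described above.
# Names (analyse, act, THRESHOLD, device IDs) are illustrative only.

THRESHOLD = 75.0  # e.g. a temperature limit in degrees Celsius


def analyse(reading: float) -> bool:
    """Application logic: decide whether a physical response is needed."""
    return reading > THRESHOLD


def act(device_id: str) -> str:
    """Trigger a rapid response directed back at the physical device."""
    return f"cooling engaged on {device_id}"


def process(stream):
    """Digitise physical events: turn raw readings into coordinated actions."""
    actions = []
    for device_id, reading in stream:
        if analyse(reading):
            actions.append(act(device_id))
    return actions


readings = [("pump-1", 71.2), ("pump-2", 80.5), ("pump-1", 76.0)]
print(process(readings))  # only readings above the threshold trigger action
```

In a real deployment the `analyse` step might be a machine-learning model and the `act` step an actuator command, but the coordination pattern — disparate physical events brought together by software logic around a goal — is the same.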

‘Virtualisation’ effectively means ‘digital optimisation’

In addition to the cloud and IoT, one of the main effects of enterprise digital transformation on the communications infrastructure has of course been Network Functions Virtualisation (NFV) and Software-Defined Networking (SDN). NFV – the replacement of network functionality previously associated with dedicated hardware appliances by software running on standard compute devices – could also simply be described as the digitisation of telecoms infrastructure: the transformation of networks into software-, computing- and IP-driven (digital) systems that are capable of supporting the functionality underpinning the virtual / digital economy.

This functionality includes things like ultrafast, reliable, scalable and secure routing, processing, analysis and storage of massive but also highly variable data flows across network domains and on a global scale – supporting business processes ranging from ‘mere’ communications and collaboration to co-ordination and management of large-scale critical services, multi-national enterprises, government functions, and complex industrial processes. Meanwhile, the physical, Layer-1 elements of the network also have to become lightning-fast to deliver the massive, ‘real-time’ data flows on which the digital systems and services depend.

Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos

Virtualisation frees vendors from ‘operator lock-in’

Virtualisation has generally been touted as a necessary means for telcos to adapt their networks to support the digital service demands of their customers and, in the enterprise market, to support those customers’ own digital transformations. It has also been advocated as a means for telcos to free themselves from so-called ‘vendor lock-in’: dependency on their network hardware suppliers for maintenance and upgrades to equipment capacity or functionality to support service growth or new product development.

Looked at from the other side of the coin, virtualisation could also be seen as a means for vendors to free themselves from ‘operator lock-in’: a dependency on telcos as the primary market for their networking equipment and technology. That is to say, the same dynamic of social and enterprise digitisation, discussed above, has driven vendors to virtualise their own product and service offerings, and to move away from the old business model, which could be described as follows:

  • telcos and their implementation partners purchase hardware from the vendor
  • deploy it at the enterprise customer
  • and then own the business relationship with the enterprise and hold the responsibility for managing the services

By contrast, once the service-enabling technology is based on software and standard compute hardware, this creates opportunities for vendors to market their technology direct to enterprise customers, with which they can in theory take over the supplier-customer relationship.

Of course, many enterprises have continued to own and operate their own private networks and networking equipment, generally supplied to them by vendors. Therefore, vendors marketing their products and services direct to enterprises is not a radical innovation in itself. However, the digitisation / virtualisation of networking technology and of enterprise networks is creating a new competitive dynamic placing vendors in a position to ‘win back’ direct relationships to enterprise customers that they have been serving through the mediation of telcos.

Virtualisation changes the competitive dynamic


Contents:

  • Executive Summary: Digital transformation is changing the rules of the game
  • Digital transformation is reshaping vendors’ and telcos’ offer to enterprises
  • What does ‘digital transformation’ mean?
  • ‘Virtualisation’ effectively means ‘digital optimisation’
  • Virtualisation creates opportunities for vendors to act like Internet players, OTT service providers and telcos
  • Vendors and telcos: the business models are changing
  • New vendor plays in enterprise networking: four vendor business models
  • Vendor plays: Nokia, Ericsson, Cisco and IBM
  • Ericsson: changing the bet from telcos to enterprises – and back again?
  • Cisco: Betting on enterprises – while operators need to speed up
  • IBM: Transformation involves not just doing different things but doing things differently
  • Conclusion: Vendors as ‘co-Operators’, ‘co-opetors’ or ‘co-opters’ – but can telcos still set the agenda?
  • How should telcos play it? Four recommendations

Figures:

  • Figure 1: Virtualisation changes the competitive dynamic
  • Figure 2: The telco as primary channel for vendors
  • Figure 3: New direct-to-enterprise opportunities for vendors
  • Figure 4: Vendors as both technology supplier and OTT / operator-type managed services provider
  • Figure 5: Vendors as digital service creators, with telcos as connectivity providers and digital service enablers
  • Figure 6: Vendors as digital service enablers, with telcos as digital service creators / providers
  • Figure 7: Vendor manages communications / networking as part of overall digital transformation focus
  • Figure 8: Nokia as technology supplier and ‘operator-type’ managed services provider
  • Figure 9: Nokia’s cloud-native core network blueprint
  • Figure 10: Nokia WING value chain
  • Figure 11: Ericsson’s model for telcos’ roles in the IoT ecosystem
  • Figure 12: Ericsson generates the value whether operators provide connectivity only or also market the service
  • Figure 13: IBM’s model for telcos as digital service enablers or providers – or both

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in Network Virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often used in the same breath, and are related but different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre Wide Area Networks. Urs Hoelzle of Google discussed Google’s deployment and findings at the Open Networking Summit in early 2012, where Google claimed to get 60% to 70% better utilisation out of its data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old, and yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally, the telecoms and datacomms equipment market is worth over $180bn and has been dominated by five vendors, with around 50% of the market split between them.

Innovation and competition in the networking market have been lacking, with very few major innovations in the last 12 years: the industry has focussed on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting are quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised the IT computing markets, namely cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a perspective of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on ‘open’ standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.
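The split described above — dumb forwarding elements, one controller with a whole-network view — can be sketched in a few lines. This is an illustrative toy, not any real controller’s API: the `Controller` class, switch names and flow-table shape are all invented for the example.

```python
# Illustrative sketch of the SDN split: switches hold no routing
# intelligence; a central controller with a global topology view
# computes paths and installs per-switch forwarding rules.
from collections import deque


class Controller:
    def __init__(self, links):
        # Build the global topology view from a list of (a, b) links.
        self.adj = {}
        for a, b in links:
            self.adj.setdefault(a, []).append(b)
            self.adj.setdefault(b, []).append(a)
        # Each 'switch' is just a passive table: destination -> next hop.
        self.flow_tables = {n: {} for n in self.adj}

    def shortest_path(self, src, dst):
        # Breadth-first search over the complete network map.
        prev, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nxt in self.adj[node]:
                if nxt not in prev:
                    prev[nxt] = node
                    queue.append(nxt)
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = prev[node]
        return path[::-1]

    def install_flow(self, src, dst):
        # Push a (destination -> next hop) rule into every switch on the path.
        path = self.shortest_path(src, dst)
        for here, nxt in zip(path, path[1:]):
            self.flow_tables[here][dst] = nxt
        return path


ctrl = Controller([("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")])
print(ctrl.install_flow("s1", "s3"))  # → ['s1', 's2', 's3']
```

The point of the sketch is that no switch ever runs a routing protocol: path computation happens once, centrally, and the devices simply look up the rules the controller installed.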

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions. However, while SDN does this for an entire network, NFV focusses specifically on network functions like routing, firewalls, load balancing and CPE, and looks to leverage developments in Commercial Off-The-Shelf (COTS) hardware such as generic server platforms utilising multi-core CPUs.

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and the performance of these devices has risen dramatically. Figure 1 clearly demonstrates that in just three years (2010 – 2013) a tenfold increase in packet processing (data plane) performance was achieved. Generally, CPU performance has been tracking Moore’s law, which originally stated that the number of components in an integrated circuit would double every two years; if the number of components is related to performance, the same can be said about CPU performance. For example, Intel will ship its latest processor family in the second half of 2015 with up to 72 individual CPU cores, compared to the four or six used in 2010 – 2013.

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica
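A quick back-of-the-envelope check puts the tenfold figure in context: Moore’s law (doubling every two years) would predict only around a 2.8x gain over the same 2010 – 2013 window, so the data-plane improvement clearly outpaced the general CPU trend.

```python
# Moore's law predicts doubling every two years; over three years that is
# 2^(3/2) ≈ 2.8x — well short of the tenfold data-plane gain reported.
years = 3
moores_factor = 2 ** (years / 2)
print(round(moores_factor, 1))  # → 2.8
```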

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012 as an industry specification group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative, not just from a cost perspective but, more importantly, for what it means to telcos and service providers in being able to develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle issues such as performance, management and orchestration, proof of concept and reference architecture, while externally organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do that thing very well – switching or routing packets, acting as a network firewall, and so on. But as each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. By running network functions on a COTS-based platform, most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings, they are all about time to market, being able to respond quickly to market demands and in essence becoming more agile.

The real benefits

If the real benefits of NFV are not just about cost savings but about agility, how is this delivered? Agility comes from a number of different aspects, for example the ability to orchestrate a number of VNFs and the network to deliver a suite or chain of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long term success of NFV. MANO provides automation and provisioning and will interface with existing provisioning and billing platforms such as existing OSS/BSS. MANO will allow the use and reuse of VNFs, networking objects, chains of services and via external APIs allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed as part of the service. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s STB connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components are used as required, for example Deep Packet Inspection and load balancing. Equally, as the user accesses other services such as media, Internet and voice services, different VNFs can be brought into play, such as SBC and Network Storage.
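The dynamic described above — VNFs inserted into, and dropped from, a per-user chain on demand — can be illustrated with plain functions standing in for VNFs. This is a toy sketch of the pattern, not of any real MANO implementation; the VNF names and packet/session shapes are invented:

```python
# Sketch of a per-user service chain: each VNF is ordinary software that
# can be composed, reused and removed on demand. All names illustrative.

def auth_vnf(packet, session):
    # Authentication runs once per session, then drops out of the chain.
    session["authenticated"] = True
    return packet


def firewall_vnf(packet, session):
    # The firewall stays in the chain for the duration of the connection.
    if packet.get("port") not in (80, 443):
        return None  # drop the packet
    return packet


def dpi_vnf(packet, session):
    # Deep Packet Inspection, brought into play only when required.
    packet["inspected"] = True
    return packet


def run_chain(chain, packet, session):
    """Pass the packet through each VNF in turn; None means dropped."""
    for vnf in chain:
        packet = vnf(packet, session)
        if packet is None:
            return None
    return packet


session = {}
chain = [auth_vnf, firewall_vnf, dpi_vnf]
first = run_chain(chain, {"port": 443}, session)   # authenticated, inspected
chain.remove(auth_vnf)                             # auth drops out of the chain
blocked = run_chain(chain, {"port": 23}, session)  # firewall drops telnet
```

Because each VNF is just software, reordering or removing a function is a list operation rather than a hardware change — which is the agility argument made in the surrounding text.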

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

Contents:

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

Figures:

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Software Defined Networking (SDN): A Potential ‘Game Changer’

Summary: Software Defined Networking is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks. This briefing analyses its potential benefits and use cases, outlines strategic scenarios and key action plans for telcos, summarises key vendor positions, and explains why it is so important for both the telco and vendor communities to adopt and exploit SDN capabilities now. (May 2013, Executive Briefing Service, Cloud & Enterprise ICT Stream, Future of the Network Stream)

Figure 1 – Potential Telco SDN/NFV Deployment Phases

Source: STL Partners

Introduction

Software Defined Networking or SDN is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks.

With SDN, networks no longer need to be point to point connections between operational centres; rather the network becomes a programmable fabric that can be manipulated in real time to meet the needs of the applications and systems that sit on top of it. SDN allows networks to operate more efficiently in the data centre as a LAN and potentially also in Wide Area Networks (WANs).

SDN is new and, like any new technology, this means that there is a degree of hype and a lot of market activity:

  • Venture capitalists are on the lookout for new opportunities;
  • There are plenty of start-ups all with “the next big thing”;
  • Incumbents are looking to quickly acquire new skills through acquisition;
  • And not surprisingly there is a degree of SDN “Washing” where existing products get a makeover or a software upgrade and are suddenly SDN compliant.

However, there still isn’t widespread clarity about what SDN is and how it might be used outside of vendor papers and marketing materials, and there are plenty of important questions to be answered. For example:

  • SDN is open to interpretation and is not an industry standard, so what is it?
  • Is it better than what we have today?
  • What are the implications for your business, whether telcos, or vendors?
  • Could it simply be just a passing fad that will fade into the networking archives like IP Switching or X.25 and can you afford to ignore it?
  • What will be the impact on LAN and WAN design and for that matter data centres, telcos and enterprise customers? Could it be a threat to service providers?
  • Could we see a future where networking equipment becomes commoditised just like server hardware?
  • Will standards prevail?

Vendors are to a degree adding to the confusion. For example, Cisco argues that it already has an SDN-capable product portfolio with Cisco ONE. It says that its solution is more capable than solutions dominated by open-source based products, because these have limited functionality.

This executive briefing will explain what SDN is, why it is different to traditional networking, look at the emerging market with some likely use cases and then look at the implications and benefits for service providers and vendors.

How and why has SDN evolved?

SDN has been developed in response to the fact that basic networking hasn’t really evolved much over the last 30 plus years, and that new capabilities are required to further the development of virtualised computing to bring innovation and new business opportunities. From a business perspective the networking market is a prime candidate for disruption:

  • It is a mature market that has evolved steadily for many years
  • There are relatively few leading players who have a dominant market position
  • Technology developments have generally focussed on speed rather than cost reduction or innovation
  • Low cost silicon is available to compete with custom chips developed by the market leaders
  • There is a wealth of open source software plus plenty of low cost general purpose computing hardware on which to run it
  • Until SDN, no one really took a clean slate view on what might be possible

New features and capabilities have been added to traditional equipment, but they have tended to bloat the software content, increasing the cost of both purchasing and operating the devices. Nevertheless, IP networking as we know it has performed the task of connecting two end points very well; it has been able to support the explosion of growth required by the Internet, and of mobile and mass computing in general.

Traditionally, each element in the network (typically a switch or a router) builds up a network map and makes routing decisions based on communication with its immediate neighbours. Once a connection through the network has been established, packets follow the same route for the duration of the connection. Voice, data and video have differing delivery requirements with respect to latency and jitter, but in traditional networks there is no overall picture of the network – no single entity responsible for route planning, or for ensuring that traffic is optimised, managed or even flows over the most appropriate path for its needs.

One of the significant things about SDN is that it removes the autonomy of individual networking elements, taking away their ability to make routing decisions. The responsibility for establishing paths through the network, and for their control and routing, is placed in the hands of one or more central network controllers. The controller can see the network as a complete entity and manage its traffic flows, routing, policies and quality of service – in essence treating the network as a fabric and attempting to get maximum utilisation from that fabric. SDN controllers generally offer external interfaces through which applications can control and set up network paths.

There has been growing demand to make networks programmable by external applications – data centres and virtual computing are clear examples of where it would be desirable to deploy not just the virtual computing environment, but all the associated networking functions and network infrastructure, from a single console. With no common control point, the only way of providing interfaces to external systems and applications is to place agents in the networking devices and ask external systems to manage each device individually. This kind of architecture scales poorly, generates control traffic that reduces overall efficiency, and risks multiple applications trying to control the same entity – it is therefore fraught with problems.
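The controller-centric model described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's API: the class, the rule format and the topology are invented for the example. The controller holds the complete network map, computes a path centrally, and installs a forwarding rule on each "dumb" switch along it.

```python
# Toy SDN controller sketch (illustrative only, not a real controller API).
from collections import deque

class Controller:
    def __init__(self):
        self.links = {}           # switch -> set of neighbouring switches
        self.flow_tables = {}     # switch -> list of installed flow rules

    def add_link(self, a, b):
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def shortest_path(self, src, dst):
        # Breadth-first search over the controller's *global* network map –
        # something no individual switch in a traditional network can do.
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in self.links.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    def provision(self, src, dst, match):
        # Northbound request: an application asks for an end-to-end path;
        # the controller installs a forwarding rule on every hop.
        path = self.shortest_path(src, dst)
        if path is None:
            raise ValueError("no path available")
        for hop, nxt in zip(path, path[1:]):
            self.flow_tables.setdefault(hop, []).append(
                {"match": match, "forward_to": nxt})
        return path

ctl = Controller()
for a, b in [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]:
    ctl.add_link(a, b)
print(ctl.provision("A", "D", {"dst_ip": "10.0.0.7"}))  # e.g. ['A', 'B', 'D']
```

The switches keep no routing intelligence of their own: they only match incoming packets against the rules the controller has pushed down.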

Network Functions Virtualisation (NFV)

It is worth noting that a complementary initiative to SDN, called Network Functions Virtualisation (NFV), was started in 2012 by the European Telecommunications Standards Institute (ETSI). Its aim is to take functions that sit on dedicated hardware – load balancers, firewalls, routers and other network devices – and run them on virtualised hardware platforms, lowering capex, extending the equipment's useful life and reducing operating expenditure. You can read more about NFV later in the report on page 20.

In contrast, SDN makes it possible to program or change the network to meet a specific, time-dependent need and to establish end-to-end connections that meet specific criteria. The SDN controller holds a map of the current network state and of the requests that external applications are making on the network; this makes it easier to get the best use from the network at any given moment, carry out meaningful traffic engineering and work more effectively with virtual computing environments.

What is driving the move to SDN?

The Internet and the world of IP communications have seen continuous development over the last 40 years, with huge innovation and strict control of standards through the Internet Engineering Task Force (IETF). Because of the ad-hoc nature of this development, there are many different functions catering for all sorts of use cases. Some overlap, some are obsolete, but all still have to be supported, and more are being added all the time. The devices that control and connect to IP networks must therefore understand a minimum subset of these functions in order to communicate with each other successfully. This adds complexity and cost, because every element in the network has to be able to process or understand these rules.

But the system works, and it works well. For example, when we open a web browser and a session to stlpartners.com, our browser and PC initially have no knowledge of how to reach STL's web server, yet within half a second or so the STL Partners web site usually appears. What actually happens can be seen in Figure 2. Our PC uses a variety of protocols to connect first to a gateway (1) on our network and then to a public name server (2 & 3) in order to look up the stlpartners.com IP address. The PC then opens a connection to that address (4) and assumes that the network will route packets of information to and from the destination server. The process is much the same whether using public WANs or private local area networks.

Figure 2 – Process of connecting to an Internet web address

Source: STL Partners
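The sequence in Figure 2 can be mimicked with a small sketch. The name server here is a stub dictionary and the IP address is purely illustrative (not STL's real one); with real connectivity, step 4 would be a call such as `socket.create_connection((ip, 80))`.

```python
# Toy walk-through of the steps in Figure 2 (gateway, DNS lookup, connect),
# with a stubbed name server instead of real network calls.
NAME_SERVER = {"stlpartners.com": "104.26.6.218"}   # illustrative address only

def resolve(hostname):
    # Steps 2 & 3: query a public name server for the IP address.
    ip = NAME_SERVER.get(hostname)
    if ip is None:
        raise LookupError(f"no record for {hostname}")
    return ip

def connect(hostname):
    # Step 1 is implicit: packets leave via the default gateway.
    ip = resolve(hostname)
    # Step 4: open a TCP connection and trust the network to route
    # the packets – the PC never learns the path they take.
    return f"TCP connection to {ip}:80"

print(connect("stlpartners.com"))  # prints "TCP connection to 104.26.6.218:80"
```

The point of the sketch is the division of labour: the end host only resolves a name and hands packets to its gateway; everything about the route is decided inside the network.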

The Internet is also highly resilient; it was developed to survive a variety of network outages including the complete loss of sub networks. Popular myth has it that the US Department of Defence wanted it to be able to survive a nuclear attack, but while it probably could, nuclear survivability wasn’t a design goal. The Internet has the ability to route around failed networking elements and it does this by giving network devices the autonomy to make their own decisions about the state of the network and how to get data from one point to any other.

While this is of great value in unreliable networks – which is what the Internet looked like during its evolution in the late 1970s and early 1980s – today's networks comprise far more robust elements and more reliable links. The upshot is that networks typically operate at a sub-optimal level: unless there is an outage, routes and traffic paths are mostly static and persist for the duration of a connection. If an outage occurs, the routers in the network decide amongst themselves how best to re-route the traffic, each making its own decisions about traffic flow and prioritisation based on its individual view of the network. In fact, most routers and switches are not aware of the network in its entirety – just the adjacent devices they are connected to, and the information those devices pass on about the networks and devices they in turn are connected to. It can therefore take some time for a converged network to stabilise, as we saw in the Internet outages that affected Amazon, Facebook, Google and Dropbox last October.

The diagram in Figure 3 shows a simple router network. Router A knows about the networks on routers B and C because it is connected directly to them and they have informed A about their networks. B and C have also informed A that they can reach the networks and devices on router D. You can see from this model that there is no overall picture of the network, and no one device is able to make network-wide decisions. In order to connect a device on a network attached to A to a device on a network attached to D, A must make a decision based on what B or C tell it.

Figure 3 – Simple router network

Source: STL Partners
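The neighbour-by-neighbour learning in Figure 3 can be illustrated with a simplified, distance-vector-style sketch (invented for this example; real routing protocols are considerably more involved). Each router starts knowing only itself, and repeatedly imports its neighbours' tables, adding one hop.

```python
# Simplified sketch of Figure 3's routers learning routes only from their
# neighbours: each router advertises what it can reach, neighbours add a hop.
routers = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
tables = {r: {r: (0, r)} for r in routers}   # dest -> (hop count, next hop)

def exchange():
    # One round of advertisements; returns True if any table changed.
    changed = False
    for r, neighbours in routers.items():
        for n in neighbours:
            for dest, (cost, _) in list(tables[n].items()):
                if dest not in tables[r] or cost + 1 < tables[r][dest][0]:
                    tables[r][dest] = (cost + 1, n)   # learned via neighbour n
                    changed = True
    return changed

while exchange():       # repeat until no router learns anything new
    pass
print(tables["A"]["D"])  # A reaches D in 2 hops, via B or C
```

Note what is missing: no router ever sees the whole topology. A only knows that B and C each claim to reach D, which is exactly the limitation the central SDN controller removes.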

This model makes it difficult to build large data centres with thousands of Virtual Machines (VMs) and offer customers dynamic service creation, when the network only understands physical devices and does not easily allow each VM to have its own range of IP addresses and other IP services. Ideally you would configure a complete virtual system – virtual machines, load balancing, security, network control elements and network configuration – from a single management console, with these abstract functions then mapped onto physical computing and networking resources. VMWare has coined the term 'Software Defined Data Centre' (SDDC) to describe a system that allows all of these elements and more to be controlled by a single suite of management software.

Moreover, returning to the fact that every networking device needs to understand a raft of Internet Requests For Comments (RFCs): all the clever code supporting these RFCs in switches and routers costs money. High-performance processing and memory are required in traditional routers and switches in order to inspect and process traffic, even in MPLS networks. Cisco IOS supports over 600 RFCs and other standards. This adds cost, complexity, compatibility burdens, future obsolescence and power/cooling needs.

SDN takes a fresh approach to building networks based on the technologies available today: it places the intelligence centrally, on scalable compute platforms, and leaves the switches and routers as relatively dumb packet-forwarding engines. The control platforms still have to support all the standards, but they run on hardware far more powerful than the processors in traditional networking devices and, more importantly, the controllers can manage the network as a fabric rather than each element making its own potentially sub-optimal decisions.

As one proof point that SDN works: in early 2012 Google announced that it had migrated its live data centres to a Software Defined Network, using switches it designed and built from off-the-shelf silicon, with OpenFlow as the control path to a Google-designed controller. Google claims many benefits, including better utilisation of its compute power, since implementing this system. At the time, Google stated it would have liked to purchase OpenFlow-compliant switches, but none were available that suited its needs. Since then, new vendors such as Big Switch Networks and Pica8 have entered the market, delivering relatively low-cost OpenFlow-compliant switches.

To read the Software Defined Networking report in full, including the following sections detailing additional analysis…

  • Executive Summary including detailed recommendations for telcos and vendors
  • Introduction (reproduced above)
  • How and why has SDN evolved? (reproduced above)
  • What is driving the move to SDN? (reproduced above)
  • SDN: Definitions and Advantages
  • What is OpenFlow?
  • SDN Control Platforms
  • SDN advantages
  • Market Forecast
  • STL Partners’ Definition of SDN
  • SDN use cases
  • Network Functions Virtualisation
  • What are the implications for telcos?
  • Telcos’ strategic options
  • Telco Action Plans
  • What should telcos be doing now?
  • Vendor Support for OpenFlow
  • Big Switch Networks
  • Cisco
  • Citrix
  • Ericsson
  • FlowForwarding
  • HP
  • IBM
  • Nicira
  • OpenDaylight Project
  • Open Networking Foundation
  • Open vSwitch (OVS)
  • Pertino
  • Pica8
  • Plexxi
  • Tellabs
  • Conclusions & Recommendations

…and the following figures…

  • Figure 1 – Potential Telco SDN/NFV Deployment Phases
  • Figure 2 – Process of connecting to an Internet web address
  • Figure 3 – Simple router network
  • Figure 4 – Traditional Switches with combined Control/Data Planes
  • Figure 5 – SDN approach with separate control and data planes
  • Figure 6 – ETSI’s vision for Network Functions Virtualisation
  • Figure 7 – Network Functions Virtualised and managed by SDN
  • Figure 8 – Network Functions Virtualisation relationship with SDN
  • Table 1 – Telco SDN Strategies
  • Figure 9 – Potential Telco SDN/NFV Deployment Phases
  • Figure 10 – SDN used to apply policy to Internet traffic
  • Figure 11 – SDN Congestion Control Application

 

Cloud 2.0: Telstra, Singtel, China Mobile Strategies

Summary: In this extract from our forthcoming report ‘Cloud 2.0: Telco Strategies in the Cloud’ we outline the key components of Telstra, Singtel and China Mobile’s cloud strategies, and how they compare to the major ‘Big Technology’ players (such as Microsoft, VMWare, IBM and HP) and ‘Web Giants’ such as Google and Amazon. (November 2012, Executive Briefing Service, Cloud & Enterprise ICT Stream.)

Below is an extract from this 14 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here. Non-members can subscribe here; for other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

We’ll also be discussing our findings at the New Digital Economics Brainstorms in Singapore (3-5 December, 2012).




 

Introduction

This is an edited extract of Cloud 2.0: Telco Strategies in the Cloud, a new Telco 2.0 Strategy Report to be published next week. The report examines the evolution of cloud services and the current opportunities for vendors and telcos in the cloud market, plus a penetrating analysis of the positioning telcos need to adopt in order to take advantage of the global $200Bn cloud services market opportunity.

The report shows how CSPs can create sustainable, differentiated positions in enterprise cloud. It contains a concise and comprehensive analysis of key vendor and telco strategies, market forecasts (including our own for both the market and telcos), and key technologies.

Led by Robert Brace (formerly Global Head of Cloud Services for Vodafone), it leverages the knowledge and experience of the Telco 2.0 analyst team, senior global brainstorm participants, and targeted industry research and interviews. Robert will also be presenting at Digital Asia, 4-5 December 2012, Singapore.

Methodology

In the full report, we reviewed both telcos and technology companies using a list of 30 criteria organised in six groups (Market, Vision, Finance, Proposition, Value Network, and Technology). We aimed to cover their objectives, strategy, market areas addressed, target customers, proposition strategy, routes to market, operational approach, buy / build partner approach, and technology choices.

We based our analysis on a combination of desk research, expert interviews, and output from our Executive Brainstorms.

Among the leading cloud technology companies we identify two groups, which we characterise as “Big Tech” and the “Web Giants”. The first of these are the traditional enterprise IT vendors, while the second are the players originating in the consumer web 2.0 space (hence the name).

  • Big Tech: Microsoft (Azure), Google (Dev & Enterprise), VMWare, Parallels, Rackspace, HP, IBM.
  • Web Giants: Microsoft (Office 365), Amazon, Google (Apps & Consumer), Salesforce, Akamai.

In the report and our analyses below, we use averages for each of these groups to give a key comparator for telco strategies. The full strategy report contains individual analyses for each of these companies and the following telcos: AT&T, Orange, Telefonica, Deutsche Telekom, Vodafone, Verizon, China Telecom, SFR, Belgacom, Elisa, Telenor, Telstra, BT, Cable and Wireless.

Summary

The ‘heatmap’ table below shows the summary results of a 4-box scoring against our key criteria for the four APAC telcos’ enterprise cloud product intentions (i.e. what they intend to do in the market), where 1 (light blue) is weakest and 4 (bright red) strongest.

Figure 1: Cloud ‘heatmap’ for selected APAC telcos
Cloud APAC Heatmap
Source: STL Partners / Telco 2.0

The full report contains similar tables and comparisons for capabilities, and uses these results to compare telco-to-vendor and telco-to-telco strategies where they compete in the same markets.

In this briefing we summarise results for Telstra, Singtel, China Mobile, and China Telecom.

Telstra – building regional leadership

 

Operating in the somewhat special circumstances of Australia, Telstra is pursuing both an SMB SaaS strategy (typical of mobile operators) and an enterprise IaaS strategy (see Figure 2). Under the first, it resells a suite of business applications centred on Microsoft Office 365, for which it has exclusivity in Australia.

Under the second, it is trying to develop a cloud computing business out of its managed hosting business. VMWare is the main technology provider, with some Microsoft Hyper-V. Unlike many telcos, Telstra benefits from the fact that the major IaaS players are only just beginning to develop data centres in Australia, and therefore cloud applications hosted with Amazon etc. are subject to a considerable latency penalty.

 

Figure 2: Telstra: A local leader

Cloud Telstra Radar Map

Source: STL Partners / Telco 2.0

However, data sovereignty concerns in Australia will force other cloud providers to develop at least some presence if they wish to address a variety of important markets (finance, government, and perhaps even mining), and this will eventually bring greater competition.

So far, Telstra has a web portal for the reseller SaaS products, and relies on a mixture of its direct sales force and a partnership with Accenture as a channel for IaaS.

Figure 3: Telstra benefits from geography

Telstra Cloud Radar Map 2

Source: STL Partners / Telco 2.0

To read the note in full, including the following analysis…

  • Introduction
  • Methodology
  • Summary
  • Telstra – building regional leadership
  • SingTel – aiming to be a regional hub
  • China Mobile – the Great Cloud?
  • China Telecom – making a start
  • Conclusions
  • Next steps

…and the following figures…

  • Figure 1: Cloud ‘heatmap’ for selected APAC telcos
  • Figure 2: Telstra: A local leader
  • Figure 3: Telstra benefits from geography
  • Figure 4: SingTel’s strategy is typical, but well executed
  • Figure 5: China Mobile: A less average telco
  • Figure 6: China Mobile has a distinctly different technology strategy
  • Figure 7: China Mobile has some key differentiators (“spikes”) versus its rivals
  • Figure 8: Comparing the APAC Giants
  • Figure 9: Cluster analysis: Telco operators

 

Members of the Telco 2.0 Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can download the full 14 page report in PDF format here. Non-Members, please subscribe here or email contact@telco2.net / call +44 (0) 207 247 5003.

 

Technologies and industry terms referenced: strategy, cloud, business model, APAC, Singtel, Telstra, China Mobile, China Telecom, VMWare, Amazon, Google, IBM, HP.

The Cloud 2.0 Programme

This research report is a part of the ‘Cloud 2.0’ programme. The report was independently commissioned, written, edited and produced by STL Partners.

The Cloud 2.0 programme is a new initiative that brings together STL Partners’ research and senior thought-leaders and decision makers in the fast evolving Cloud ecosystem to develop new propositions and new partnerships. We’d like to thank the sponsors of the programme listed below for their support. To find out more or to join the Cloud 2.0 programme, please email contact@telco2.net or call +44 (0) 207 247 5003.

Stratus Partners:

Cordys Logo

 

Cloud 2.0: don’t blow it, telcos

Summary: enterprise cloud computing services need great connectivity to work, but there are opportunities for telcos to participate beyond the connectivity. What are the opportunities, how are telcos approaching them, and what are the key strategies? Includes forecasts for telcos’ shares of VPC, IaaS, PaaS and SaaS. (September 2011, Executive Briefing Service, Cloud & Enterprise ICT Stream)

Below is an extract from this 28 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.


Introduction

In our previous analyses Cloud 2.0: What are the Telco Opportunities? and Cloud 2.0: Telcos to grow Revenues 900% by 2014 we’ve looked broadly at the growing cloud market opportunity for telcos. This new report takes this analysis forward, looking in detail at the service definitions, market forecasts and the industry’s confidence in them, and actual and potential strategies for telcos.

We’ll also be looking in depth at the opportunities in cloud services in the Cloud 2.0: Transforming technology, media and telecoms at the EMEA Executive Brainstorm in London on Thursday 10th November 2011.

The Cloud Market

Cloud computing represents the next wave of IT. Almost all organisations are saying that they will adopt cloud computing to a greater or lesser extent, across all segments and sizes. Consequently, we believe that there exists a large opportunity for telcos if they move quickly enough to take advantage of it.

Total market cloud forecasts – variation and uncertainty

In order to understand where the best opportunities are, and how telcos can best use their particular strengths to take advantage of them, we need to examine the size of that opportunity and understand which areas of cloud computing are most likely to offer the best returns.

Predictions for the size and growth of the cloud computing market are very diverse:

  • Merrill Lynch has previously offered the most optimistic estimate: $160 billion by the end of 2011 (The Cloud Wars: $100+ billion at stake, May 2008)
  • Gartner predicted expenditure of $150.1 billion by 2013 (Gartner forecast, March 2009)
  • IDC predicts annual cloud services revenues of $55.5 billion by 2014 (IDC report, June 2010)
  • Cisco has estimated the cloud market at $43 billion by 2013 (STL Partners video, October 2010)
  • Bain expects spending to grow fivefold from $30 billion in 2011 to $150 billion by 2020 (The Five Faces of the Cloud, 2011)
  • IBM’s Market Insights Cloud Phase 2 assessment of September 2011 sizes the cloud market at $88.5bn by 2015
  • Research by AMI Partners suggests that SMBs’ spend will approach $100 billion by 2014 – over 60% of the total (World Wide Cloud Services Study, December 2010)

Figure 1 – Cloud services market forecast comparisons

Cloud 2.0 Industry Forecast Comparisons Bain, Gartner, IDC, Cisco Sept 2011 Telco 2.0

Source: Bain, Cap Gemini, Cisco, Gartner, IBM, IDC, Merrill Lynch
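The growth rates implicit in these forecasts can be recomputed from the quoted figures. The Bain numbers come straight from the bullet above; the 2011 baseline used for the IBM check is an assumption (we borrow Bain's $30 billion figure, since IBM's own baseline is not quoted here).

```python
# Back-of-the-envelope check of the growth rates implied by two of the
# forecasts above. Figures in $bn; the IBM 2011 baseline is an assumption.
def implied_cagr(start, end, years):
    """Compound annual growth rate implied by start -> end over `years`."""
    return (end / start) ** (1 / years) - 1

# Bain: $30bn (2011) growing fivefold to $150bn (2020)
print(f"Bain 2011-2020: {implied_cagr(30, 150, 9):.1%}")   # ~19.6%
# IBM: market reaching $88.5bn by 2015, from an assumed ~$30bn in 2011
print(f"IBM 2011-2015:  {implied_cagr(30, 88.5, 4):.1%}")  # ~31.1%
```

Even between two recent forecasts, the implied annual growth rates differ by more than ten percentage points, which underlines the uncertainty discussed below.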

Whichever way you look at it, the volume of spending on cloud computing is high and growing. But why are there such large variations in the estimates of that growth?

There is a clear correlation between the report dates and the market forecast sizes. Two of the forecasts – from Merrill Lynch and Gartner – are well over two years old, and are likely to have drawn conclusions from data gathered before the 2008 recession started to bite. Both are almost certainly over-optimistic as a result, and are included as an indication of the historic uncertainty in Cloud forecasts rather than criticism of the forecasters.

More generally, while each forecaster will be using different assumptions and extrapolation techniques, the variation is also likely to reflect the immaturity of the cloud services market: there is little historical data from which to extrapolate, and little experience of what kinds of growth rates the market will see. For example, well-known inhibitors to cloud adoption, such as security and control, have yet to be resolved by cloud service providers to the point where enterprise customers are willing to commit a substantial volume of their IT spending.

Additionally, the larger the organisation, the slower the adoption of cloud computing is likely to be: moving to a new computing model involves changing fundamental IT architectures, a process undertaken over years. It is hard to be precise about the degree to which these factors will inhibit the growth of cloud adoption.

As a result, in a world where economic uncertainty seems unlikely to disappear in the short to medium term, it would be unwise to assume a high level of accuracy for market sizing predictions, although the general upward trend is very clear.

Cloud service types

Cloud computing services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

Figure 2 – Cloud service layer definitions

Cloud 2.0 Service Types vs. layers Telco 2.0 Sept 2011

Source: STL Partners/Telco 2.0

Of the forecasts available, we prefer Bain’s near term forecast because: 1) it is based on their independent Cloud ‘Center of Excellence’ work; 2) it is relatively recent, and 3) it has clear and meaningful categories and definitions.
The following figure summarises Bain’s current market forecast, split by cloud service type.

Figure 3 – Cloud services: market forecast and current players

Cloud 2.0 Forecast growth by service type Sep 2011 Telco 2.0

Currently, telcos have around a 5% share of the c.$20 billion annual cloud services revenue, with 25% CAGR forecast to 2013.

At the May 2011 EMEA Telco 2.0 Executive Brainstorm, we used these forecasts as a base to explore market views on the various cloud markets. There were c.200 senior executives at the brainstorm from industries across Telecoms, Media and Technology (TMT) and, following detailed presentations on Cloud Services, they were asked highly structured questions to ascertain their views on the likelihood of telco success in addressing each service.

Infrastructure as a Service (IaaS)

IaaS consists of cloud-based, usually virtualised, servers, networking and storage, which the customer is free to manage as they need. Billing is typically on a utility computing model: the more of each that you use, the more you pay. Bain forecasts IaaS, the largest of the three main segments, to be worth around $3.5 billion in 2011, with 45% CAGR forecast. The market leader is Amazon with about an 18% share; other players include IBM and Rackspace. Telcos currently have about 20% of this market – Qwest/Savvis/Equinix, and Verizon/Terremark.

Respondents at the EMEA Telco 2.0 Brainstorm estimated that telcos could take an average share of 25% of this market. The distribution was reasonably broad, with the vast majority in the 11-40% range.

Figure 4 – IaaS – Telco market share forecasts

Cloud 2.0 IaaS Telco Forecasts Sept 2011 Telco 2.0

Source: EMEA Telco 2.0 Executive Brainstorm delegate vote, May 2011

To read the note in full, including the following additional analysis…

  • Virtual Private Cloud (VPC)
  • Software as a Service (SaaS)
  • Platform as a Service (PaaS)
  • Hybrid Cloud
  • Cloud Service Brokerage
  • Overall telco cloud market projections by type, including forecast uncertainties
  • Challenges for telcos
  • Which areas should telcos target?
  • Telcos’ advantages
  • IaaS, PaaS, or SaaS?
  • Developing other segments
  • What needs to change?
  • How can telcos deliver?
  • Telcos’ key strengths
  • Key strategy variables
  • Next Steps

…and the following charts…

  • Figure 1 – Cloud services market forecast comparisons
  • Figure 2 – Cloud service layer definitions
  • Figure 3 – Cloud services: market forecast and current players
  • Figure 4 – IaaS – Telco market share forecasts
  • Figure 5 – VPC – Telco market share forecasts
  • Figure 6 – SaaS – Telco market share forecasts
  • Figure 7 – PaaS – Telco market share forecasts
  • Figure 8 – Total telco cloud market size and share estimates – 2014
  • Figure 9 – Uncertainty in forecast by service
  • Figure 10 – Telco cloud strengths
  • Figure 11 – Cloud services timeline vs. profitability schematic
  • Figure 12 – Telcos’ financial stability

Members of the Telco 2.0 Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can download the full 28 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, people and products referenced: Aepona, Amazon, AMI Partners, Bain, BT, CenturyLink, CENX, Cisco, CloudStack, Deutsche Telekom, EC2, Elastic Compute Cloud (EC2), EMC, Equinix, Flexible 4 Business, Force.com, Forrester, France Telecom, Gartner, Google App Engine, Google Docs, IBM, IDC, Intuit, Java, Merrill Lynch, Microsoft, Microsoft Office 365, MySQL, Neustar, NTT, OneVoice, OpenStack, Oracle, Orange, Peartree, Qwest, Rackspace, Red Hat, Renub Research, Sage, Salesforce.com, Savvis, Telstra, Terremark, T-Systems, Verizon, VMware, Vodafone, Webex.

Technologies and industry terms referenced: Azure, Carrier Ethernet, Cloud computing, cloud service providers, Cloud Services, Communications as a Service, compliance, Connectivity, control, forecast, Global reach, Hybrid Cloud, Infrastructure as a Service (IaaS), IT, Mobile Cloud, network, online, Platform as a Service (PaaS), Reliability, resellers, security, SMB, Software as a Service (SaaS), storage, telcos, telecoms, strategy, innovation, transformation, unified communications, video, virtualisation, Virtual Private Cloud (VPC), VPN.

Cloud 2.0: Telcos to grow Revenues 900% by 2014

Summary: Telcos should grow Cloud Services revenues nine-fold and triple their overall market share in the next three years according to delegates at the May 2011 EMEA Executive Brainstorm. But which are the best opportunities and strategies? (June 2011, Executive Briefing Service, Cloud & Enterprise ICT Stream)

NB Members can download a PDF of this Analyst Note in full here. Cloud Services will also feature at the Best Practice Live! Free global virtual event on 28-29 June 2011.


Introduction

STL Partners’ New Digital Economics Executive Brainstorm & Developer Forum EMEA took place from 11-13 May in London. The event brought together 250 execs from across the telecoms, media and technology sectors to take part in 6 co-located interactive events: the Telco 2.0, Digital Entertainment 2.0, Mobile Apps 2.0, M2M 2.0 and Personal Data 2.0 Executive Brainstorms, and an evening AppCircus developer forum.

Building on output from the last Telco 2.0 events and new analysis from the Telco 2.0 Initiative – including the new strategy report ‘The Roadmap to New Telco 2.0 Business Models’ – the Telco 2.0 Executive Brainstorm explored latest thinking and practice in growing the value of telecoms in the evolving digital economy.

This document gives an overview of the output from the Cloud session of the Telco 2.0 stream.

Companies referenced: Aepona, Amazon Web Services, Apple, AT&T, Bain, BT, Centurylink, Cisco, Dropbox, Embarq, Equinix, Flexible 4 Business, Force.com, Google Apps, HP, IBM, Intuit, Microsoft, Neustar, Orange, Qwest, Salesforce.com, SAP, Savvis, Swisscom, Terremark, T-Systems, Verizon, Webex, VMWare.

Business Models and Technologies covered: cloud services, Enterprise Private Cloud (EPC), Virtual Private Cloud (VPC), Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

Cloud Market Overview: 25% CAGR to 2013

Today, Telcos have around a 5% share of nearly $20Bn p.a. cloud services revenue, with 25% compound annual growth rate (CAGR) forecast to 2013. Most market forecasts are that the total cloud services market will reach c.$45-50Bn revenue by 2013 / 2014, including the Bain forecast previewed at the Americas Telco 2.0 Brainstorm in April 2011.

At the EMEA brainstorm, delegates were presented with an overview of the component cloud markets and examples of different cloud services approaches, and were then asked for their views on what share telcos could take of cloud revenues in each. In total, delegates’ views amounted to telcos taking in the region of 18% of cloud services revenue by the end of the next three years.

Applying these views to an extrapolated ‘mid-point’ forecast of the cloud market in 2014 implies that telcos will take just under $9Bn revenue from cloud by 2014, increasing today’s c.$1Bn share nine-fold. [NB More detailed methodology and sources are in the full paper, available to members here.]
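The nine-fold arithmetic can be reproduced directly. The c.$50Bn 2014 market mid-point is our extrapolated assumption (consistent with the c.$45-50Bn forecasts quoted above); the 18% share is the delegates' aggregate view.

```python
# Reproducing the arithmetic behind the nine-fold claim.
market_2014 = 50.0   # $bn, extrapolated mid-point forecast (assumption)
telco_share = 0.18   # delegates' aggregate view of telcos' 2014 share
telco_today = 1.0    # $bn, i.e. c.5% of today's c.$20bn market

telco_2014 = market_2014 * telco_share
print(f"Telco cloud revenue 2014: ${telco_2014:.0f}bn "
      f"({telco_2014 / telco_today:.0f}x today's c.${telco_today:.0f}bn)")
```

With these inputs the projection lands at $9bn, nine times today's c.$1bn, matching the figure in the text.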

Figure 1 – Cloud Services Market Forecast & Players

Cloud 2.0 Forecast 2014 - Telco 2.0

Source: Telco 2.0 Presentation

Although already a multi-$Bn market, there is still a reasonable degree of uncertainty and variance in Cloud forecasts, as might be expected in a still-maturing market. The total could therefore be a lot higher – or perhaps lower, especially if the consequences of the recent Amazon AWS outage significantly reduce CIOs’ appetite for Cloud.

The potential for c.30% IT cost savings and speed-to-market benefits achievable by telcos implementing Cloud internally, previously shown in Cisco’s case study, was highlighted but not explored in depth at this session.

Which cloud markets should telcos target?

Figure 2 – Cloud Services – Telco Positioning

Cloud 2.0 Market Positioning - Telco 2.0

Source: Cisco/Orange Presentation, 13th Telco 2.0 Executive Brainstorm, London, May 2011

An interesting feature of the debate was which areas telcos would be most successful in, and the timing of market entry strategies. Orange and Cisco argued that ‘Virtual Private Cloud’, although neither the largest nor predicted to be the fastest-growing area, should be the first market for some telcos to address, appealing to some telcos’ strong ‘trust’ credentials with CIOs and building on ‘managed services’ enterprise IT sales and delivery capabilities.

Orange described its ‘Flexible 4 Business’ value proposition, built in partnership with Cisco, with VMware virtualisation and EMC2 storage. Although it could not give performance metrics at this early stage, Orange described strong demand and claimed satisfaction with progress to date.

Aepona described a Platform-as-a-Service (PaaS) concept that they are launching shortly with Neustar that aggregates telco APIs to enable the rapid creation and marketing of new enterprise services.

Figure 3 – Aepona / Neustar ‘Intelligent Cloud’ PaaS Concept

Cloud 2.0 - Intelligent Cloud PaaS Concept - Telco 2.0

In this instance, the cloud component makes the service more flexible, cheaper and easier to deliver than a traditional IT structure. This type of concept is sometimes described as a ‘mobile cloud’ because many of the interesting uses relate to mobile applications. These do not rely on the continuous high-grade mobile connectivity required for, e.g., IaaS: rather, they can make use of bursts of connectivity to validate identities etc. via APIs ‘in the cloud’.

To read the rest of this Analyst Note, containing…

  • Forecasts of telco share of cloud by VPC, IaaS, PaaS and SaaS
  • Telco 2.0 take-outs and next steps
  • And detailed Brainstorm delegate feedback

Members of the Telco 2.0™ Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can access and download a PDF of the full report here. Non-Members, please see here for how to subscribe. Alternatively, please email contact@telco2.net or call +44 (0) 207 247 5003 for further details.

The Roadmap to New Telco 2.0 Business Models

$375Bn per annum Growth or Brutal Retrenchment? Which route will Telcos take?

Over the last three years, the Telco 2.0 Initiative has identified new business model growth opportunities for telcos of $375Bn p.a. in mature markets alone (see the ‘$125Bn Telco 2.0 ‘Two-Sided’ Market Opportunity’ and ‘New Mobile, Fixed and Wholesale Broadband Business Models’ Strategy Reports). In that time, most of the major operators have started to integrate elements of Telco 2.0 thinking into their strategic plans and some have begun to communicate these to investors.

But, as they struggle with the harsh realities of the seismic shift from being predominantly voice-centric to data-centric businesses, telcos now find themselves:

  • Facing rapidly changing consumer behaviours and powerful new types of competitors;
  • Investing heavily in infrastructure, without a clear payback;
  • Operating under less benign regulatory environments, which constrain their actions;
  • Being milked for dividends by shareholders, unable to invest in innovation.

As a result, far from realising the innovative growth potential we identified, many telcos around the world seem unable to make the bold moves needed to make their business models sustainable. This leaves them facing retrenchment and, potentially, ultimately utility status, while other players in the digital economy prosper.

In our new 284 page strategy report – ‘The Roadmap to Telco 2.0 Business Models’ – we describe the transformational path the telecoms industry needs to take to carve out a more valuable role in the evolving ‘digital economy’. Based on the output from 5 intensive senior executive ‘brainstorms’ attended by over 1000 industry leaders, detailed analysis of the needs of ‘upstream’ industries and ‘downstream’ end users markets, and with the input from members and partners of the Telco 2.0 Initiative from across the world, the report specifically describes:

  • A new ‘Telco 2.0 Opportunity Framework’ for planning revenue growth;
  • The critical changes needed to telco innovation processes;
  • The strategic priorities and options for different types of telcos in different markets;
  • Best practice case studies of business model innovation.

The ‘Roadmap’ Report Builds on Telco 2.0’s Original ‘Two-Sided’ Telecoms Business Model

Updated Telco 2.0 Industry Framework

Source: The Roadmap to New Telco 2.0 Business Models

 

Who should read this report

The report is for strategy decision makers and influencers across the TMT (Telecoms, Media and Technology) sector: in particular, CxOs, Strategists, Technologists, Marketers, Product Managers, and Legal and Regulatory leaders in telecoms operators, vendors, consultants, and analyst companies. It will also be valuable to those managing or considering medium- to long-term investment in telecoms and adjacent industries, and to regulators and legislators.

It provides fresh, creative ideas to:

Grow revenues beyond current projections by:

  • Protecting revenues from existing customers;
  • Extending services to new customers;
  • Generating new service offerings and revenues.

Stay relevant with customers through:

  • A broader range of services and offers;
  • More personalised services;
  • Greater interaction with customers.

Evolve business models by:

  • Moving from a one-sided to a two-sided business model;
  • Generating cross-platform network effects – between service providers and customers;
  • Exploiting existing latent assets, skills and relationships.


The Six Telco 2.0 Opportunity Areas

Six Telco 2.0 Opportunity Types

Source: The Roadmap to New Telco 2.0 Business Models

What are the Key Questions the Report Answers?

For Telcos:

  • Where should your company be investing for growth?
  • What is ‘best practice’ in telecoms Telco 2.0 business model innovation and how does your company compare to it?
  • Which additional strategies should you consider, and which should you avoid?
  • What are the key emerging trends to monitor?
  • What actions are required in the areas of value proposition, technology, value / partner network, and finances?

For Vendors and Partners:

  • How to segment telecoms operators?
  • How well does your offering support Telco 2.0 strategies and transformation needs in your key customers?
  • What are the most attractive new areas in which you could support telcos in business model innovation?

For Investors and Regulators:

  • What are and will be the main new categories of telcos/CSPs?
  • What are the principal opportunity areas for operators?
  • What are and will be operators’ main strategic considerations with respect to new business models?
  • What are the major regulatory considerations of new business models?
  • What are the main advantages and disadvantages that telcos have in each opportunity area?

Contents

  • Executive Summary & Introduction
  • Pressures on Operators
  • The new Telco 2.0 Framework
  • Principles of Innovation and Services Delivery
  • – Strategic Positioning
  • – Design
  • – Development and delivery
  • Categorising telcos
  • Category 1: Leading international operators
  • Category 2: Regional leaders
  • Category 3: Wholesale and business-focused telcos
  • Category 4: Challengers & disruptors
  • Category 5: Smaller national leaders
  • Conclusions and Recommendations

 

Cloud 2.0: What are the Telco Opportunities?

Summary: Telco 2.0’s analysis of operators’ potential role and opportunity in ‘Cloud Services’, a set of new business model opportunities that are still in an early stage of development – although players such as Amazon have already blazed a substantial trail. (December 2010, Executive Briefing Service, Cloud & Enterprise ICT Stream & Foundation 2.0)

  • Below is an extract from this Telco 2.0 Report. The report can be downloaded in full PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here.
  • Additionally, to give an introduction to the principles of Telco 2.0 and digital business model innovation, we now offer for download a small selection of free Telco 2.0 Briefing reports (including this one) and a growing collection of what we think are the best 3rd party ‘white papers’. To access these reports you will need to become a Foundation 2.0 member. To do this, use the promotional code FOUNDATION2 in the box provided on the sign-up page here. NB By signing up to this service you give consent to us passing your contact details to the owners / creators of any 3rd party reports you download. Your Foundation 2.0 member details will allow you to access the reports shown here only, and once registered, you will be able to download the report here.
  • See also the videos from IBM on what telcos need to do, and Oracle on the range of Cloud Services, and the Telco 2.0 Analyst Note describing Americas and EMEA Telco 2.0 Executive Brainstorm delegates’ views of the Cloud Services Opportunity for telcos.
  • We’ll also be discussing Cloud 2.0 at the Silicon Valley (27-28 March) and London (12-13 June) Executive Brainstorms.
  • To access reports from the full Telco 2.0 Executive Briefing service, or to submit whitepapers for review for inclusion in this service, please email contact@telco2.net or call +44 (0) 207 247 5003.


 

The Cloud: What Is It?

Apart from being the leading buzzword in the enterprise half of the IT industry for the last few years, what is this thing called “Cloud”? Specifically, how does it differ from traditional server co-location, or indeed time-sharing on mainframes as we did in the 1970s? These are all variations on the theme of computing power being supplied from a remote machine shared with other users, rather than from PCs or servers deployed on-site.

Two useful definitions were voiced at the 11th Telco 2.0 EMEA Executive Brainstorm in November 2010:

  • “A standardised IT Capability delivered in a pay-per-use, self-service way.” Stephan Haddinger, Chief Architect Cloud Computing, Orange – citing a definition by Forrester.
  • “STEAM – A Self-Service, multi-Tenanted, Elastic, broad Access, and Metered IT Service.” Neil Sholay, VP Cloud and Comms, EMEA, Oracle.

The definition of Cloud has been rendered significantly more complicated by the hype around “cloud” and the resultant tendency to use it for almost anything that is network resident. For a start, it’s unhelpful to describe anything that includes a Web site as “cloud computing”. A good way to further understand ‘Cloud Services’ is to look at the classic products in the market.

The most successful of these, Amazon’s S3 and EC2, provide low-level access to computing resources – disk storage, in S3, and general-purpose CPU in EC2. This differs from an ASP (Application Service Provider) or Web 2.0 product in that what is provided isn’t any particular application, but rather something close to the services of a general purpose computer. It differs from traditional hosting in that what is provided is not access to one particular physical machine, but to a virtual machine environment running on many physical servers in a data-centre infrastructure, which is probably itself distributed over multiple locations. The cloud operator handles the administration of the actual servers, the data centres and internal networks, and the virtualisation software used to provide the virtual machines.

Varying degrees of user control over the system are available. A major marketing point, however, is that the user doesn’t need to worry about system administration – it can be abstracted out as in the cloud graphic that is used to symbolise the Internet on architecture diagrams. This tension between computing provided “like electricity” and the desire for more fine-grained control is an important theme. Nobody wants to specify how their electricity is routed through the grid, although increasing numbers of customers want to buy renewable power – but it is much more common for businesses (starting at surprisingly small scale) to have their own Internet routing policies.

So, for example, although Amazon’s cloud services are delivered from their global data centre infrastructure, it’s possible to specify where EC2 instances run to a continental scale. This provides for compliance with data protection law as well as for performance optimisation. Several major providers, notably Rackspace, BT Global Services, and IBM, offer “private cloud” services which represent a halfway house between hosting/managed service and fully virtualised cloud computing. And some explicit cloud products, such as Google’s App Engine, provide an application environment with only limited low-level access, as a rapid-prototyping tool for developers.

The Cloud: Why Is It?

Back at the November 2009 Telco 2.0 Executive Brainstorm in Orlando, Joe Weinman of AT&T presented an argument that cloud computing is “a mathematical inevitability”. His fundamental point is worth expanding on. For many cloud use cases, the decision between moving into the cloud and using a traditional fleet of hosted servers is essentially a rent-vs-buy calculus. Weinman’s point was that once you acquire servers, whether you own them and co-locate or rent them from a hosting provider, you are committed to acquiring that quantity of computing capacity whether you use it or not. Scaling up presents some problems, but it is not that difficult to co-locate more 1U racks. What is really problematic is scaling down.

Cloud computing services address this by basically providing volume pricing for general-purpose computing – you pay for what you use. It therefore has an advantage when there are compute-intensive tasks with a highly skewed traffic distribution, in a temporary deployment, or in a rapid-prototyping project. However, problems arise when there is a need for capacity on permanent standby, or serious issues of data security, business continuity, service assurance, and the like. These are also typical rent-vs-buy issues.
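Weinman’s rent-vs-buy point can be illustrated with a toy cost model. All rates and the utilisation figure below are invented for illustration (they are not from the presentation), but the structure captures the argument: owned capacity is sized and paid for at peak, while cloud capacity is paid for only when used, at a higher unit rate.

```python
# Toy rent-vs-buy model for owned capacity vs on-demand cloud.
# All rates and the utilisation figure are illustrative assumptions.

owned_rate = 0.10   # effective cost per owned/co-located server-hour ($)
cloud_rate = 0.30   # on-demand cloud rate per server-hour ($, typically higher)

peak_servers = 100          # capacity needed to meet peak demand
hours = 24 * 365            # one year
utilisation = 0.20          # skewed traffic: average load is 20% of peak

# Owned capacity is paid for around the clock, used or not.
owned_total = peak_servers * hours * owned_rate
# Cloud capacity is paid for only when actually used.
cloud_total = peak_servers * utilisation * hours * cloud_rate

print(round(owned_total))   # → 87600
print(round(cloud_total))   # → 52560
# Cloud wins whenever average utilisation is below owned_rate / cloud_rate.
print(round(owned_rate / cloud_rate, 2))  # → 0.33
```

The break-even line is the ratio of the two unit rates: with a highly skewed traffic distribution (low average utilisation), paying the cloud premium beats owning peak capacity; with steady, permanent load, ownership wins – which is exactly the rent-vs-buy calculus described above.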

Another reason to move to the cloud is that providing high-availability computing is expensive and difficult. Cloud computing providers’ core business is supporting large numbers of customers’ business-critical applications – it might make sense to pass this task to a specialist. Also, their typical architecture, using virtualisation across large numbers of PC-servers to achieve high availability in the manner popularised by Google, doesn’t make sense except on a scale big enough to provide a significant margin of redundancy in the hardware and in the data centre infrastructure.

Why Not the Cloud?

The key objections to the cloud are centred around trust – one benefit of spreading computing across many servers in many locations is that this reduces the risk of hardware and/or connectivity failure. However, the problem with moving your infrastructure into a multi-tenant platform is of course that it’s another way of saying that you’ve created a new, enormous single point of commercial and/or software failure. It’s also true that the more critical and complex the functions that are moved into cloud infrastructure, and the more demanding the contractual terms that result, the more problematic it becomes to manage the relationship. (Neil Lock, IT Services Director at BT Global Services, contributed an excellent presentation on this theme at the 9th Telco 2.0 Executive Brainstorm.) At some point, the additional costs of managing the outsourcer relationship intersect with the higher costs of owning the infrastructure and internalising the contract. One option involves spending more money on engineers, the other, spending more money on lawyers.

Similar problems exist with regard to information security – a malicious actor who gains access to administrative features of the cloud solution has enormous opportunities to cause trouble, and the scaling features of the cloud mean that it is highly attractive to spammers and denial-of-service attackers. Nothing else offers them quite as much power.

Also, as many cloud systems make a virtue of the fact that the user doesn’t need to know much about the physical infrastructure, it may be very difficult to guarantee compliance with privacy and other legislation. Financial and other standards sometimes mandate specific cryptographic, electronic, and physical security measures. It is quite possible that customers of major clouds would be unable to say in which jurisdiction their users’ personal data is stored. Providers may consider this a feature, but whether it is acceptable depends heavily on the nature of your business.

From a provider perspective, the chief problem with the cloud is commoditisation. At present, major clouds are the cheapest way bar none to buy computing power. However, the very nature of a multi-tenant platform demands significant capital investment to deliver the reliability and availability the customers expect. The temptation will always be there to oversubscribe the available capacity – until the first big outage. A capital intensive, very high volume, and low price business is the classic case of a commodity – many operators would argue that this is precisely what they’re trying to get away from. Expect vigorous competition, low margins, and significant CAPEX requirements.

To download a full PDF of this article, covering…

  • What’s in it for Telcos?
  • Conclusions and Recommendations

…Members of the Telco 2.0™ Executive Briefing Subscription Service and the Cloud & Enterprise ICT Stream can read the Executive Summary and download the full report in PDF format here. Non-Members, please email contact@telco2.net or call +44 (0) 207 247 5003 for further details.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables:

Cloud Services 2.0: Clearing Fog, Sunshine Forecast, say Telco 2.0 Delegates

Summary: the early stage of development of the market means there is some confusion on the telco Cloud opportunity, yet clarity is starting to emerge, and the concept of ‘Network-as-a-Service’ found particular favour with Telco 2.0 delegates at our October 2010 Americas and November 2010 EMEA Telco 2.0 Executive Brainstorms. (December 2010, Executive Briefing Service, Cloud & Enterprise ICT Stream)

The full 15 page PDF report is available for members of the Executive Briefing Service and Cloud and Enterprise ICT Stream here. For membership details please see here, or to join, email contact@telco2.net or call +44 (0) 207 247 5003. Cloud Services will also feature at Best Practice Live!, Feb 2-3 2011, and the 2011 Telco 2.0 Executive Brainstorms.

Executive Summary

Clearing Fog

Cloud concepts can sometimes seem as baffling and as nebulous as their namesakes. However, at the recent Telco 2.0 Executive Brainstorms (Americas in October 2010 and EMEA in November 2010), stimulus presentations by IBM, Oracle, FT-Orange Group, Deutsche Telekom, Intel, Salesforce.com, Cisco and BT-Ribbit, together with delegate discussions, really brought the Cloud Services opportunities to life.

While it was generally agreed that the precise definitions delineating the many possible varieties of the service are not always useful, it does matter how operators can make money from the services, and there was at least consensus on this.

Sunshine Forecast: A Significant Opportunity…

IBM identified an $88.5Bn opportunity in the Cloud over the next 5 years, the majority of which is applicable to telcos, although the share that will end up in the telco industry might be as much as 70% or as little as 30%, depending on how operators go about it (video here).

According to Cisco, there is a $44Bn telco opportunity in Cloud Services by 2014, supported by the evidence of 30%+ enterprise IT cost savings and productivity gains that resulted from Cisco’s own comprehensive internal adoption of cloud services (video here). We see this estimate as reasonably consistent with IBM’s.

Oracle also brought the range of opportunities to life with seven contrasting real-life case studies (video here).

Ribbit, AT&T, and Salesforce.com also supported the viability of Cloud Services, arguing that concerns over trust and privacy are gradually being allayed. Intel argued that Network as a Service (NaaS) is emerging as a cloud opportunity alongside Enterprise and Public Clouds, and that by combining NaaS with the telco influence over devices and device computing power, telcos can be a major player in a new ‘Pervasive Computing’ environment. EMEA delegates also viewed Network-as-a-Service as the most attractive opportunity.

Fig 1 – Delegates Favoured ‘Network-as-a-Service’ of the Cloud Opportunities

Telco 2.0 Delegates Cloud Vote, Nov 2010

Source: Telco 2.0 Delegate Vote, 11th Brainstorm, EMEA, Nov 2010.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables:

Cloud 2.0: What Should Telcos do? IBM’s View

Summary: IBM say that telcos are well positioned to provide cloud services, and forecast an $89Bn opportunity over 5 years globally. Video presentation and slides (members only) including forecast, case studies, and lessons for future competitiveness.

Cloud Services will also feature at Best Practice Live!, Feb 2-3 2011, and the 2011 Telco 2.0 Executive Brainstorms.

 

At the 11th EMEA Telco 2.0 Brainstorm, November 2010, Craig Wilson, VP, IBM Global Telecoms Industry, said that:

  • Cloud Services represent an $89Bn opportunity in 5 years;
  • Telcos / Service Providers are “well positioned” to compete in Cloud Services;
  • Security remains the CIO’s biggest question mark, but one that telcos can help with;
  • He also outlined two APAC telco Cloud case studies.

Members of the Telco 2.0 Executive Briefing Service and the Cloud and Enterprise ICT Stream can also download Craig’s presentation here (for membership details please see here, or to join, email contact@telco2.net or call +44 (0) 207 247 5003).

See also videos by Oracle describing a range of cloud case studies, Cisco on the market opportunity and their own case study of Cloud benefits, and Telco 2.0’s Analyst Note on the Cloud Opportunity.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables: