Telcos are facing difficult choices about whether and how to invest in new technologies, how to cut costs, and how to create new services, either to pair with their core network services or to broaden their customer bases beyond connectivity users.
Through the Telco 2.0 vision (our shorthand for ‘what a future telco should look like’), STL Partners has long argued that telcos need to make fundamental changes to their business models in response to the commoditisation of connectivity and the ‘softwarisation’ of all industries, including telecoms. At the very least this means digitalising operations to become more data-centric and efficient in the way they deliver connectivity. But to generate significant new revenue growth, we still believe telcos need to look beyond connectivity and develop (or acquire) new product and service offerings.
The original Telco 2.0 two-sided business model
Source: STL Partners
Since 2011, a handful of telcos have made significant investments into areas beyond connectivity that fall into these categories. For example:
NTT Docomo has continued to expand its ‘dmarket’ consumer loyalty scheme, media and sports content, and payment services, which together accounted for nearly 20% of total revenues in FY2017.
Singtel acquired digital advertising provider Amobee in 2012, followed by several more acquisitions in the same area to build an end-to-end digital marketing platform. Its digital services accounted for more than 10% of quarterly revenues by December 2017, making digital its fourth-largest revenue segment, ahead of voice.
TELUS first acquired a health IT company in 2008, and has since expanded its reach and range of services to become Canada’s largest provider of health IT solutions, such as a nationwide e-prescription system. Based on a case study we did on TELUS, we estimate its health solutions accounted for at least 7% of total revenues by 2017.
However, these telcos are the exception rather than the rule. Over the last decade, most telcos have failed to build a significant revenue stream beyond their core services.
While many telcos remain cautious or even sceptical about their ability to generate significant revenue from non-connectivity based products and services, “digitalising” operations has become a widespread approach to sustain margins as revenue growth has slowed.
In Figure 3 we illustrate these as the two ‘digital dimensions’ along which telcos can drive change. Most telcos are prioritising an infrastructure play, few are putting significant resources into product innovation, and only a small number have the ability to do both.
Digitalising telecoms operations: Reduction of capex and opex by reducing complexity and automating processes, and improving customer experience
Developing new services: This falls into two categories, shown on the right-hand side of Figure 3:
Product innovation: New services that are independent from the network, in which case digitalising telecoms operations is only moderately important
Platform (& product): New services that are strongly integrated with the network and therefore require the network to be opened up and digitalised
Few telcos are putting real resources into product & platform innovation
Source: STL Partners
Four developments driving our Telco 2.0 update
AI and automation technology is ready to deploy at scale. AI is no longer an over-hyped ideal – machine and deep learning techniques are proven to deliver faster and more accurate decision-making for repetitive and data-intensive tasks, regardless of the type of data (numerical, audio, images, etc.). This has the potential to transform all areas of operators’ businesses.
We live and work in a world of ecosystems. Few services are completely self-sufficient and independent from everything else; rather, they enable, complement and/or augment other services. Telcos must accept that they are not immune to this trend simply because connectivity is one of the key enablers of content, cloud and IoT ecosystems (see Figure 4).
Software-defined networks and 5G are coming. This is happening at a different pace in different markets, but over the next five to ten years these technologies will drastically change the ‘thing’ that telcos operate: the ‘network’ will become another cloud service, with many operational functions instantiated in near real-time in hardware at the network edge, so never even reaching a centralised cloud. So telcos need to become more proficient in software and computing, and they should think of themselves as cloud service providers that operate in partnership with many other players to deliver end-users a complete service.
As other industries go through their own digital transformations, the connectivity and IT needs of enterprises have become much more complex and industry specific. This means the one-size-fits-all approach does not apply for operators or for their enterprise customers in any sector.
Telcos and connectivity are not a central pillar, but an enabler in a much richer ecosystem
Source: STL Partners
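The first of these developments – automated decision-making over repetitive, data-intensive tasks – can be illustrated with a deliberately simple sketch. The KPI numbers and the threshold below are our own illustrative assumptions, not taken from any operator deployment; production systems use trained machine- or deep-learning models, but the operational pattern (score each sample, act automatically on outliers) is the same.

```python
import statistics

def flag_anomalies(samples, threshold=2.5):
    """Flag KPI samples that deviate strongly from the overall baseline.

    A stand-in for the ML-driven network automation discussed above:
    the same loop could call a trained model instead of a z-score test.
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [i for i, x in enumerate(samples)
            if stdev and abs(x - mean) / stdev > threshold]

# Synthetic cell-traffic KPI (illustrative numbers only): a steady
# baseline around 100 with one obvious spike at index 5.
traffic = [101, 99, 100, 102, 98, 250, 100, 101, 99, 100]
print(flag_anomalies(traffic))  # → [5]
```

The point is not the statistics but the automation: once a rule (or model) is in place, thousands of cells can be monitored and acted on without human review of each one.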
We are updating the Telco 2.0 Vision in light of these realities. Previously, we proposed six opportunity areas for new revenue growth, and expected large, proactive telcos to be able to address many of them. But telcos have been slow to change, margins are tighter now, implementing NFV/SDN is hard, and software skills are necessary for success in any vertical. So telcos can no longer hope to do it all, and must choose where to place their bets. As NTT Docomo, Singtel and TELUS show, success also takes time, so telcos need to choose and commit to a strategy now for long-term success.
Time to update Telco 2.0
Four developments driving our Telco 2.0 update
Analysing the current market state
Options for the future
If connectivity won’t drive growth, do telcos’ network strategies matter?
Imagining the future telecoms stack
Figure 1: The telco stack
Figure 2: The original Telco 2.0 two-sided business model
Figure 3: Few telcos are putting real resources into product & platform innovation
Figure 4: Telcos and connectivity are not a central pillar, but an enabler in a much richer ecosystem
Figure 5: The network cloud platform within the telco stack
Figure 6: Steps to becoming a cloud platform
Figure 7: Horizontal specialisation within the telco stack
Figure 8: Vertical specialisation within the telco stack
Figure 9: Enterprise verticals
Figure 10: Consumer services and applications
Figure 11: Network technology company versus lean network operator
Figure 12: Example of a fixed telco stack
Figure 13: Example of a telco IoT stack
Figure 14: Example of a lean network operator stack
In early 2016, Facebook launched the Telecom Infra Project (TIP). It was set up as an open industry initiative, to reduce costs in creating telecoms network equipment, and associated processes and operations, primarily through open-source concepts applied to network hardware, interfaces and related software.
One of the key objectives was to split existing proprietary vendor “black boxes” (such as cellular base stations, or optical multiplexers) into sub-components with standard interfaces. This should enable competition for each constituent part, and allow the creation of lower-cost “white box” designs from a wider range of suppliers than today’s typical oligopoly. Critically, this is expected to enable much broader adoption of networks in developing markets, where costs – especially for radio networks – remain too high for full deployments. Other outcomes may be around cheaper 5G infrastructure, or specialised networks for indoor use or vertical niches.
TIP’s emergence parallels a variety of open-source initiatives elsewhere in telecoms, notably ONAP – the merger of two NFV projects: AT&T’s ECOMP and the Linux Foundation’s Open-O. It also parallels many other approaches to improving network affordability for developing markets.
TIP got early support from a number of operators (including SK Telecom, Deutsche Telekom, BT/EE and Globe), hosting/cloud players like Equinix and Bandwidth, semiconductor suppliers including Intel, and various (mostly radio-oriented) network vendors like Radisys, Vanu, IP Access, Quortus and – conspicuously – Nokia. It has subsequently expanded its project scope, governance structure and member base, with projects on optical transmission and core-network functions as well as cellular radios.
More recently, it has signalled that not all its output will be open-source: it will also support RAND (reasonable and non-discriminatory) intellectual property rights (IPR) licensing. This reflected push-back from some vendors against completely relinquishing revenues from their (R&D-heavy) IPR. While services, integration and maintenance offered around open-source projects have potential, it is less clear that they will attract the early-stage investment necessary for continued deep innovation in cutting-edge network technology.
At first sight, it is not obvious why Facebook should be the leading light here. But contrary to popular belief, Facebook – like Google, Amazon and Alibaba – is not really just a “web” company. They all design or build physical hardware as well – servers, network gear, storage, chips, data-centres and so on. They all optimise the entire computing / network chain to serve their needs, with as much efficiency as possible in terms of power consumption, physical space requirements and so on. They all have huge hardware teams and commit substantial R&D resources to the messy, expensive business of inventing new kit. Facebook in particular has set up Internet.org to help get millions online in the developing world, and is still working on its Aquila communications drones. It also set up OCP (the Open Compute Project) as a very successful open-source project for data-centre design; in many ways TIP is OCP’s newer and more telco-oriented cousin.
Many in the telecom industry overlook the fact that their Internet peers now make more true “technology” investment – and especially networking innovation – than most operators. Some operators – notably DT and SKT – are pushing back against the vendor “establishment”, which they see as stifling network innovation by continuing to push monolithic, proprietary black boxes.
What does Open-Source mean, applied to hardware?
Focus areas for TIP
Strategic considerations and implications
Operator involvement with TIP
A different IPR model to other open-source domains
Fit with other Facebook initiatives
Who are the winners?
Who are the losers?
Conclusions and Recommendations
Figure 1: A core TIP philosophy is “unbundling” components of vendor “black boxes”
Figure 2: OpenCellular functional architecture and external design
Figure 3: SKT sees open-source, including TIP, as fundamental to 5G
A formal definition of MEC is that it enables IT, NFV and cloud-computing capabilities within the access network, in close proximity to subscribers. Those edge-based capabilities can be provided to internal network functions, in-house applications run by the operator, or potentially third-party partners / developers.
There has long been a vision in the telecoms industry to put computing functions at local sites. In fixed networks, operators have often worked with CDN and other partners on distributed network capabilities, for example. In mobile, various attempts have been made to put computing or storage functions alongside base stations – both big “macro” cells and in-building small/pico-cells. Part of the hope has been the creation of services tailored to a particular geography or building.
But besides content-caching, none of these historic concepts and initiatives has gained much traction. It turns out that “location-specific” services can easily be delivered from central facilities, as long as the endpoint knows its own location (e.g. using GPS) and communicates this to the server.
This is now starting to change. In the last three years, various market and technical trends have re-established the desire for localised computing. Standards have started to evolve, and early examples have emerged. Multiple groups of stakeholders – telcos and their network vendors, application developers, cloud providers, IoT specialists and various others – have (broadly) aligned to drive the emergence of edge/fog computing. While there are numerous competing architectures and philosophies, there is clearly some scope for telco-oriented approaches.
While the origins of MEC (and the original “M”) come from the mobile industry, driven by visions of IoT, NFV and network-slicing, the pitch has become more nuanced, and now embraces fixed/cable networks as well – hence the renaming to “multi-access”.
Before discussing specific technologies and use-cases for MEC, it is important to contextualise some other trends in telecoms that are helping build a foundation for it:
Telcos need to reduce costs & increase revenues: This is a bit “obvious” but bears repeating. Most initiatives around telco cloud and virtualisation are driven by these two fundamental economic drivers. Here, they relate to a desire to (a) reduce network capex/opex by shifting from proprietary boxes to standardised servers, and (b) increase “programmability” of the network to host new functions and services, and allow them to be deployed/updated/scaled rapidly. These underpin broader trends in NFV and SDN, and then indirectly to MEC and edge-computing.
New telco services may be inherently “edge-oriented”: IoT, 5G, vertical enterprise applications and new consumer services like IPTV all fit into both the virtualisation story and the need for distributed capabilities. For example, industrial IoT connectivity may need realtime control functions for machinery, housed extremely close by, for millisecond (or less) latency. Connected vehicles may need roadside infrastructure. Enterprises might demand on-premise secure data storage, even for cloud-delivered services, for compliance reasons. Various forms of AI (such as machine vision and deep learning) involve particular needs and new ways of handling data.
The “edge” has its own context data: Some applications are not just latency-sensitive in terms of response between user and server, but also need other local, fast-changing data such as cell congestion or radio-interference metrics. Going all the way to a platform in the core of the network, to query that status, may take longer than it takes the status to change. The length of the “control loop” may mean that old/wrong contextual data is given, and the wrong action taken by the application. Locally-delivered information, via “edge APIs” could be more timely.
Not all virtual functions can be hosted centrally: While a lot of the discussion around NFV involves consolidated data-centres and the “telco cloud”, this does not apply to all network functions. Certain things can indeed be centralised (e.g. billing systems, border/gateway functions between core network and public Internet), but other things make more sense to distribute. For example, Virtual CPE (customer premises equipment) and CDN caches need to be nearer to the edge of the network, as do some 5G functions such as mobility management. No telco wants to transport millions of separate video streams to homes, all the way from one central facility, for instance.
There will therefore be localised telco compute sites anyway: Since some telco network functions have to be located in a distributed fashion, there will need to be some data-centres either at aggregation points / central offices or final delivery nodes (base stations, street cabinets etc.). Given this requirement, it is understandable that vendors and operators are looking at ways to extend such sites from the “necessary” to the “possible” – such as creating more generalised APIs for a broader base of developers.
Radio virtualisation is slightly different to NFV/SDN: While most virtualisation focus in telecoms goes into developments in the core network, or routers/switches, various other relevant changes are taking place. In particular, the concept of C-RAN (cloud-RAN) has taken hold in recent years, where traditional mobile base stations (usually called eNodeBs) are sometimes being split into the electronic “baseband” units (BBUs) and the actual radio transmit/receive components, called remote “radio heads” (RRHs). The BBUs from a number of eNodeBs can be clustered together at one site (sometimes called a “hotel”), with fibre “front-haul” connecting the RRHs. This improves the efficiency of both power and space utilisation, and also means the BBUs can be combined and virtualised – and perhaps have extra compute functions added.
Property business interests: Telcos have often sold or rented physical space in their facilities – colocation of equipment racks for competitive carriers, or servers in hosting sites and data-centres. In turn, they also rely on renting space for their own infrastructure, especially for siting mobile cell-towers on roofs or walls. This two-way trade continues today – and the idea of mobile edge computing as a way to sell “virtual” space in distributed compute facilities maps well to this philosophy.
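The “control loop” argument in the list above can be made concrete with a back-of-envelope sketch. The latency and update-interval figures below are illustrative assumptions, not measurements: the point is only that when a metric changes faster than the round trip to query it, the answer arrives already out of date.

```python
def stale_fraction(round_trip_ms, update_period_ms):
    """Approximate chance that a queried metric (e.g. cell congestion)
    has already changed by the time the answer arrives.

    Crude model: the metric refreshes every update_period_ms, so the
    data ages by the full round trip before the application can act.
    """
    return min(1.0, round_trip_ms / update_period_ms)

# Illustrative assumptions: radio metrics refreshed every 10 ms;
# ~40 ms round trip to a centralised platform vs ~2 ms to an edge API.
print(stale_fraction(40, 10))  # → 1.0 : the central answer is always stale
print(stale_fraction(2, 10))   # → 0.2 : the edge answer is usually fresh
```

Under these assumed numbers, only a locally-delivered “edge API” can return radio context that is still valid when it is used – which is the case for MEC made in the bullet above.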
Background market drivers for MEC
Why Edge Computing matters
The ever-wider definition of “Edge”
Wider market trends in edge-computing
Use-cases & deployment scenarios for MEC
Addressing vertical markets – the hard realities
MEC involves extra costs as well as revenues
Current status & direction of MEC
Standards path and operator involvement
Conclusions & Recommendations
Figure 1: A taxonomy of mobile edge computing
Figure 2: Even within “low latency” there are many different sets of requirements
Figure 3: The “network edge” is only a slice of the overall cloud/computing space
Figure 4: Telcos can implement MEC at various points in their infrastructure
Figure 5: Networks, Cloud and IoT all have different starting-points for the edge
Figure 6: Network-centric use-cases for MEC suggested by ETSI
Figure 7: MEC needs to integrate well with many adjacent technologies and trends
Some people in the telecom industry believe that “voice is dead” – or, at least, that traditional phone calls are dying off. Famously, many younger mobile users eschew standalone realtime communications, instead preferring messaging loaded with images and emoji, via apps such as Facebook Messenger and WeChat, or those embedded e.g. in online gaming applications. At the other end of the spectrum, various forms of video-based communications are important, such as Snapchat’s disappearing video stories, as well as other services such as Skype and FaceTime.
Even for basic calling-type access, WhatsApp and Viber have grown huge, while assorted enterprise UC/UCaaS services such as Skype for Business and RingCentral are often “owning” the business customer base. Other instances of voice (and messaging and video) are appearing as secondary features “inside” other applications – games, social networks, enterprise collaboration, mobile apps and more – often enabled by the WebRTC standard and assorted platforms-as-a-service.
Smartphones and the advent of 4G have accelerated all these trends – although 3G networks have seen them as well, especially for messaging in developing markets. Yet despite the broad uptake of Internet-based messaging and voice/video applications, it is still important for mobile operators to provide “boring old phone calls” for mobile handset subscribers, not least in order to enable “ubiquitous connection” to friends, family and businesses – plus also emergency calls. Plenty of businesses still rely on the phone – and normal phone numbers as identifiers – from banks to doctors’ practices. Many of the VoIP services can “fall back” to normal telephony, or dial out (or in) from the traditional telco network. Many licence terms mandate provision of voice capability.
This is true for both fixed and mobile users – and despite the threat of reaching “peak telephony”, there is a long and mostly-stable tail of calling that won’t be displaced for years, if ever.
Figure 1: Various markets are beyond “peak telephony” despite lower call costs
Source: Disruptive Analysis, National Regulators
In other words, even if usage and revenues are falling, telcos – and especially mobile operators – need to keep Alexander Graham Bell’s 140-year legacy alive. If the network transitions to 4G and all-IP, then the telephony service needs to do so as well – ideally with feature-parity and conformance to all the legacy laws and regulations.
(As a quick aside, it is worth noting that telephony is only one sort of “voice communication”, although people often use the terms synonymously. Other voice use-cases range from conferencing, push-to-talk, audio captioning for the blind, voice assistants like Siri and Alexa, karaoke and secure encrypted calls, to medical-diagnostics apps that monitor breathing noise. We discuss the relevance of non-telephony voice services for telcos later in this report.)

4G phone calls: what are the options?
CSFB (Circuit-Switched Fallback): The connection temporarily drops from 4G, down to 3G or 2G. This enables a traditional non-IP (CS – circuit-switched) call to be made or received on a 4G phone. This is the way most LTE subscribers access telephony today.
VoLTE: This is a “pure” 4G phone call, made using the phone’s in-built dialler, the cellular IP connection and tightly-managed connectivity with prioritisation of voice packets, to ensure good QoS. It hooks into the telco’s IMS core network, from where it can either be directly connected to the other party (end-to-end over IP), go via a transit provider or exchange, or else it can interwork with the historic circuit-based phone network.
App-based calling: This involves making a VoIP call over the normal, best-efforts, data connection. The function could be provided by a telco itself (eg Reliance Jio’s 4GVoice app), an enterprise UC provider, or an Internet application like Skype or Viber. Increasingly, these applications are also integrated into phones’ native dialler interfaces and can share call-logs and other functions. [Note – STL’s Future of The Network research stream does not use the pejorative, obsolete and inaccurate term “OTT”.]
None of these three options is perfect.
Telephony is still necessary in the 4G era
4G phone calls: what are the options?
The history of VoLTE
The Good, the Bad & the Ugly
The motivations for VoLTE deployment
The problems for VoLTE deployment?
Market Status & Forecasts
Business & Strategic Implications
Is VoLTE really just “ToLTE”?
Link to NFV & Cloud
GSMA Universal Profile: Heaven or Hell for Telcos?
Do telcos have a role in video communications?
Intersection with enterprise voice
Figure 1: Various markets are beyond “peak telephony” despite lower call costs
Figure 2: VoLTE, mobile VoIP & LTE timeline
Figure 3: VoLTE coverage is often deployed progressively
Figure 4: LTE subscribers, by voice technology, 2009-2021
Radio spectrum is a key “raw material” for mobile networks, together with evolution of the transmission technology itself, and the availability of suitable cell-site locations. The more spectrum is made available for telcos, the more capacity there is overall for current and future mobile networks. The ability to provide good coverage is also determined largely by spectrum allocations.
Within the industry, we are accustomed to costly auction processes, as telcos battle for tranches of frequencies to add capacity, or support new generations of technology. In contrast, despite the huge costs telcos incur for spectrum allocations, most people have very little awareness of which bands their phones support, other than perhaps that they can use ‘mobile/cellular’ and WiFi.
Most people, even in the telecoms industry, don’t grasp the significance of the particular numbers of MHz or GHz involved (hertz, Hz = cycles per second; MHz and GHz denote millions and billions of cycles). And that is just the tip of the jargon and acronym iceberg – a full discussion of mobile RAN (radio access network) technology involves different sorts of modulation, multiple antennas, propagation metrics, path loss (in decibels, dB) and so forth.
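To make the “path loss” jargon concrete: the standard free-space path loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) − 147.55 (d in metres, f in Hz), shows why the high bands discussed later for 5G are so much harder to use. The distances and frequencies below are illustrative choices on our part.

```python
import math

def fspl_db(freq_hz, distance_m):
    """Free-space path loss in dB (distance in metres, frequency in Hz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Over the same 100 m, a 28 GHz mmWave signal loses ~32 dB more than a
# 700 MHz signal -- over a thousandfold difference in linear power terms --
# before walls, rain or foliage are even considered.
print(round(fspl_db(700e6, 100), 1))  # → 69.4 (dB)
print(round(fspl_db(28e9, 100), 1))   # → 101.4 (dB)
```

This single calculation underlies much of the policy debate: low bands carry far, so everyone wants them; high bands offer huge capacity, but only over short, line-of-sight-ish links.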
Yet as 5G pulls into view, it is critical to understand the process by which new frequencies will be released by governments, or old ones re-used by the mobile industry. To deliver the much-promised peak speeds and enhanced coverage of 5G, big chunks of frequencies are needed. Yet spectrum has many other uses besides public mobile networks, and battles will be fierce about any reallocations of incumbent users’ rights. The broadcast industry (especially TV), satellite operators, government departments (notably defence), scientific research communities and many other constituencies are involved here. In addition, there are growing demands for more bandwidth for unlicensed usage (as used for WiFi, Bluetooth and other low-power IoT networks such as SigFox).
Multiple big industries – usually referred to by the mobile community as “verticals” – are flexing their own muscles as well. Energy, transport, Internet, manufacturing, public safety and other sectors all see the benefits of wireless connectivity – but don’t necessarily want to involve mobile operators, nor subscribe to their preferred specifications and standards. Many have huge budgets, a deep legacy of systems-building and are hiring mobile specialists.
Lastly, parts of the technology industry are advocates of more nuanced approaches to spectrum management. Rather than dedicate bands to single companies, across whole countries or regions, they would rather develop mechanisms for sharing spectrum – either on a geographic basis, or by allowing some form of “peaceful coexistence” where different users’ radios behave nicely together, instead of creating interference. In theory, this could improve the efficient use of spectrum – but it adds complexity, and perhaps introduces so much extra competition that willingness to invest suffers.
Which bands are made available for 5G, on what timescales, in what type of “chunks”, and the authorisation / licensing schemes involved, all define the potential opportunity for operators in 5G – as well as the risks of disruption, and (for some) how large the window is to fully-monetise 4G investments.
The whole area is a minefield to understand – it brings together the hardest parts of wireless technology to grasp, along with impenetrable legal processes, and labyrinthine politics at national and international levels. And ideally, one would somehow layer on consideration of end-user needs, and economic/social outcomes as well.
Who are the stakeholders for spectrum?
At first sight, it might seem that spectrum allocations for mobile networks ought to be a comparatively simple affair, with governments deciding on tranches of frequencies and an appropriate auction process. MNOs can bid for their desired bands, and then deploy networks (and, perhaps, gripe about the costs afterwards).
The reality is much more complex. A later section describes some of the international bureaucracy involved in defining appropriate bands, which can then be doled out by governments (assuming they don’t decide to act unilaterally). But even before that, it is important to consider which organisations want to get involved in the decision process – and their motivations, whether for 5G or other issues that are closer to their own priorities, which intersect with it.
Governments have a broad set of drivers and priorities to reconcile – technological evolution of the economy as a whole, the desire for a competitive telecoms industry, exports, auction receipts – and the protection of other spectrum user groups such as defence, transport and public safety. Different branches of government and the public administration have differing views, and there may sometimes be tussles between the executive branch and various regulators.
Much the same is true at regional levels, especially in Europe, where there are often disagreements between the European Commission, the European Parliament, the regulators’ groups and the parliaments of 28 EU nations (plus another 23 non-EU nations).
Even within the telecoms industry there are differences of opinion – some operators see 5G as an urgent strategic priority that can help differentiation and reduce the costs of existing infrastructure deployments. Others are still rolling out 4G networks and want to ensure those investments continue to have relevance. There are variations in how much credence is assigned to projections of IoT growth – and even there, whether breathing room is needed for 4G cellular IoT types such as NB-IoT, which is yet to be deployed even as its putative replacement is already under discussion.
The net result is many rounds of research, debate, consultation, disagreement and (eventually) compromise. Yet in many ways, 5G is different from 3G and 4G, especially because many new sectors are directly involved in helping define the use-cases and requirements. In many ways, telecoms is now “too important to be left to the telcos”, and many other voices will therefore need to be heard.
Why does spectrum matter?
Who are the stakeholders for spectrum?
Spectrum vs. business models
Does 5G need spectrum harmonisation as much as 4G?
Spectrum authorisation types & processes
Licensed, unlicensed and shared spectrum
Why is ITU involved, and what is IMT spectrum?
Key bands for 5G
5G Phase 1: just more of the same?
mmWave beckons – the high bands >6GHz
Figure 1 – 5G spectrum has multiple stakeholders with differing priorities
Figure 2 – Multi-band support has improved hugely since early 4G phones
Figure 3 – A potential 5G deployment & standardisation timeline
Figure 4 – ITU timeline for 5G spectrum harmonisation, 2014-2020
Figure 5 – High mmWave frequencies (e.g. 28GHz) don’t go through solid walls
Figure 6 – mmWave brings new technology and design challenges
It is fair to say that telcos have had only mixed success in financial services. While certain operators have done very well in recent years providing mobile money services, there have also been many telco incursions into financial services that have not paid off. On the other hand, there have been many instances of successful disruption in financial services – even technology-led digital disruption. PayPal is the foremost example of a digital business that originally found a niche doing something that banks had made quite laborious – online payments for goods between private individuals – and making it easier. But these disruptions have, to date, been limited and individual. Why, then, should telcos pay attention now?
In the last two years, the wider landscape of financial services has begun to change, as the established players have faced disruption on multiple fronts from a large number of new businesses. This has become known as fintech, and interest and investment are taking off:
Figure 1: Google Trends search on ‘fintech’, 2011 – 2016
Source: Google Trends
Fintech therefore represents a potentially huge shift in the status quo in financial services: this short report provides an overview of this shift. STL Partners will follow up with a report that considers options for telecoms operators, and makes some strategic recommendations.
Disrupting the Financial Services Industry
Why fintech’s time has come
The state of the ecosystem: investment is accelerating
Key Capabilities and Service Areas
Fintech specific capabilities: doing the same, but differently
Fintech service areas: Diverse and developing
The Future of Fintech
…but there are uncertainties around the future evolution
The uncertainties could still play out well for start-ups
Conclusion and Outlook
Figure 1: Google Trends search on ‘fintech’, 2011 – 2016
Figure 2: Fintech companies are disrupting financial services
Figure 3: Global Investment in Fintech
Figure 4: VC-backed Investment in Fintech, by Region
Figure 5: A framework for understanding fintech
Figure 6: Fintech start-ups within each service area
In the last 18 months, several different visions of 5G have emerged.
One is the vision espoused by the major R&D collaborations, academics, standardisation groups, the European Union, and some operators. This is the one with the flying robots, self-driving cars, and fully automated factories whose internal networks are provided entirely by ultra-low latency critical communications profiles within the cellular network. The simplest way to describe its aims would be to say that they intend to create a genuinely universal mobile telecommunications system serving everything from 8K streaming video for football crowds, through basic (defined as 50Mbps) fixed-wireless coverage for low-ARPU developing markets, to low-rate and ultra-low power but massive-scale M2M, with the same radio waveform, backed by a single universal virtualised core network “sliced” between use-cases. This slide, from Samsung’s Raj Gawera, sums it up – 5G is meant to maximise all eight factors labelled on the vertices of the chart.
Figure 1: 5G, the vision: one radio for everything
Source: Samsung, 3G & 4G Wireless Blog
Most of its backers – the idealist group – are in no hurry, targeting 2020 at the earliest for the standard to be complete, and deployment to begin sometime after that. There are some recent signs of increasing urgency – and certainly various early demonstrations – although that is perhaps a response to the sense of movement elsewhere in the industry.
The other vision is the one backed in 3GPP (the main standards body for 5G) by an alliance of semiconductor companies – including Intel, Samsung, ARM, Qualcomm, and Mediatek – but also Nokia Networks and some carriers, notably Verizon Wireless. This vision is much more radio-centric, being focused on the so-called 5G New Radio (NR) element of the project, and centred on delivering ultra-high capacity mobile broadband. It differs significantly from the idealists’ on timing – the pragmatist group wants to have real deployments by 2018 or even earlier, and is willing (even keen) to take an IETF-like approach where the standards process ratifies the results of “rough consensus and running code”.
Carriers’ interests fall between the two poles. In general, operators’ contributions to the process focus on the three Cs – capacity, cost, and carbon dioxide – but they also usually have a special interest of their own. This might be network virtualisation and slicing for converged operators with significant cloud and enterprise interests, low-latency or massive-scale M2M for operators with major industrial customers, or low-cost mobile broadband for operators with emerging market opcos.
The summer and especially September 2016’s CTIA Mobility conference also pointed towards some players in the middle – AT&T is juggling its focus on its ECOMP NFV mega-project, with worries that Verizon will force its hand on 5G the same way it did with 4G. It would be in the idealist group if it could align 5G radio deployment and NFV perfectly, but it is probably aware of the gulf widening rather than narrowing between the two. Ericsson is pushing for 5G incrementalism (and minimising the risk of carriers switching vendors at a later date) with its “Plug-In” strategy for specific bits of functionality.
Dino Flore of Qualcomm, the chairman of 3GPP RAN (RAN = radio access network), has chosen to compromise by taking forward the core enhanced mobile broadband (eMBB) elements for what is now being called “Phase 1”, while also cherry-picking two of the future use cases – “massive” M2M and “critical” communications. These last two differ in that the first is optimised for scalability and power saving, while the second is optimised for quality-of-service control (or PPP, for Priority, Precedence, and Pre-emption, in 3GPP terminology), reliable delivery, and very low latency. As the low-cost use case is essentially eMBB in low-band spectrum, with a less dense network and a high degree of automation, this choice covers carriers’ expressed needs rather well, at least in principle. In practice, the three have very different levels of commercial urgency.
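The “PPP” idea – letting critical traffic jump the queue ahead of lower-priority traffic – can be illustrated with a toy pre-emptive priority queue. This is a sketch of the concept only, not the 3GPP scheduling algorithm; the class and message names are invented for the example.

```python
import heapq

# Illustrative only: a toy pre-emptive priority queue, not the 3GPP scheduler.
# Lower number = higher priority; a counter breaks ties in arrival order.
class PreemptiveQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0

    def submit(self, priority, item):
        heapq.heappush(self._heap, (priority, self._counter, item))
        self._counter += 1

    def next_to_serve(self):
        # The highest-priority item is always served first, "pre-empting"
        # lower-priority traffic that arrived earlier.
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PreemptiveQueue()
q.submit(5, "bulk M2M report")
q.submit(5, "video chunk")
q.submit(1, "critical alarm")   # arrives last, served first
print(q.next_to_serve())        # -> critical alarm
```

In a real network the scheduler also has to honour latency budgets and reliability targets, but the ordering principle is the same.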
Implicitly, of course, the other, more futuristic use cases (such as self-driving cars) have been relegated to “Phase 2”. As Phase 2 is expected to be delivered after 2020, or in other words, on the original timetable, this means that Phase 1 has indeed accelerated significantly. Delays in some of the more futuristic applications may not be a major worry to many people – self-driving cars probably have more regulatory obstacles than technical ones, while Vehicle to Vehicle (V2V) communications seems to be less of a priority for the automotive industry than many assert. A recent survey by Ericsson suggested that better mapping and navigation is more important than “platooning” vehicles (grouping them together on the highway in platoons, which increases the capacity of the highway) as a driver of next-gen mobile capabilities.
3GPP’s current timeline foresees issuing the Technical Report (TR) detailing the requirements for the New Radio standard at the RAN 73 meeting next month, finalising a Non-Standalone version of the New Radio standard at RAN 78 in December 2017, and freezing the complete NR specification by the TSG (Technical Specifications Group) 80 meeting in June 2018, in time to be included in 3GPP Release 15. (In itself this is a significant hurry-up – the original plan was for 5G to wait for Release 16.) This spec would include all three major use cases, support for both sub-6GHz and millimetre wave spectrum, and both Non-Standalone and Standalone operation.
Importantly, if the Non-Standalone standard, and the features it shares with Standalone, are ready by the end of 2017, we will be very close to a product that could be deployed in a ‘pragmatist’ scenario even ahead of the standards process. This seems to be what VZW, Nokia, Ericsson, and others are hoping for – especially for fixed-5G. The December 2017 meeting is an especially important juncture, as it will be a joint meeting of both TSG and RAN. AT&T has also called for a speeding-up of standardisation.
The problem, however, is that it may be difficult to reconcile the technical requirements of all three in one new radio, especially as the new radio must also be extensible to deal with the many different use cases of Phase 2, and must work both with the 4G core network as “anchor” in Non-Standalone and with the new 5G core when that arrives, in Standalone.
Also, radio development is forging ahead of both core development and spectrum policy. Phase 1 5G is focused on the bands below 6GHz, but radio vendors have been demonstrating systems working in the 15, 28, 60, and 73GHz bands – for instance Samsung and T-Mobile working on 28GHz. The US FCC in particular has moved very rapidly to make this spectrum available, while the 3GPP work item for millimetre wave isn’t meant to report before 2017 – and harmonisation and allocation are only scheduled for discussion at the ITU’s 2019 World Radiocommunication Conference.
The upshot is that the March 2017 TSG 75 meeting is a critical decision point. Among much else it will have to confirm the future timeline and make a decision on whether or not the Non-Standalone (sometimes abbreviated to NSA) version of the New Radio will be ready by TSG/RAN 78 in December. The following 3GPP graphic summarises the timeline.
STL Partners developed our comprehensive ‘forward-view scenarios’ on the evolving cloud services market, and the role of telcos within this market, back in 2012. Times have certainly moved on. In 2016, the cloud has become an established part of the IT industry. The key cloud providers – Amazon.com, Microsoft, Google, Facebook – are seeing dramatic revenue growth and (at least in Amazon Web Services’ case) unexpectedly strong margins in the 25-30% range.
Estimates of server shipments and revenue suggest that, so far, the growth of the cloud is a blue-ocean phenomenon. In other words, rather than cloud services supplanting on-premises data centres, the market for computing power is growing fast enough that the cloud is mostly additional to them. Enterprises’ consumption of computing has risen dramatically as its price has fallen – and cloud is the preferred delivery method for this additional capacity.
Since our last major cloud report in 2012, there have been some major shifts in the market.
Public cloud – think Amazon Elastic Compute Cloud (EC2) – has grown enormously, and to some extent subsumed part of the private cloud segment, as the public clouds have added more and more features. For example, Amazon EC2 offers “Reserved Instances”, rather like a dedicated server – these “allow you to reserve Amazon EC2 computing capacity for 1 or 3 years, in exchange for a significant discount (up to 75%) compared to On-Demand instance pricing”. EC2 also offers extensive “virtual private cloud” support, as does Microsoft Azure. This support has essentially put an end to the virtual private cloud as an industry segment.
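As a rough illustration of the reserved-versus-on-demand trade-off described in the quote above, the sketch below computes the utilisation level at which a commitment starts to pay off. The hourly prices are invented for the example, not actual AWS rates.

```python
# Illustrative only: prices below are hypothetical, not current AWS rates.
on_demand_per_hr = 0.10          # pay-as-you-go price per instance-hour
reserved_per_hr  = 0.04          # effective hourly rate after a 1-year commitment
hours_per_year   = 24 * 365

def annual_cost(utilisation):
    """Annual cost of covering `utilisation` (0..1) of the year's hours each way."""
    used = hours_per_year * utilisation
    return {
        "on_demand": on_demand_per_hr * used,
        # A reservation is paid for every hour, whether used or not.
        "reserved": reserved_per_hr * hours_per_year,
    }

# Break-even: reserving wins once utilisation exceeds the price ratio.
break_even = reserved_per_hr / on_demand_per_hr
print(f"Reserving pays off above {break_even:.0%} utilisation")  # 40%
```

The general point is that reservations convert a variable cost into a fixed one, which suits steady workloads – exactly the segment dedicated servers used to serve.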
Platform-as-a-service (PaaS) has, as we predicted, become less important compared with infrastructure-as-a-service (IaaS), as the latter has added more and more PaaS-like convenience.
Traditional managed-hosting providers, for their part, have begun to deliver their services in a “cloud-like”, programmatic, on-demand fashion, via the so-called “bare metal cloud”. Iliad’s Scaleway product is a notable example here.
Meanwhile, enterprise IT departments that choose to retain their own infrastructure are increasingly likely to do so by creating their own private clouds. Open-source software like OpenStack, open hardware like the Open Compute Project, and open protocols like OpenFlow make this an increasingly attractive option.
The upshot for telcos has in general been pretty bleak. In the volume-dominated public cloud market, they’ve failed to achieve significant scale; while the various niche cloud services markets have largely either been subsumed by the public cloud, or been served better by the open-source ecosystem. Telcos’ focus on enterprise cloud and (in most cases) on reselling VMWare’s technology as their core PaaS offering has rendered them vulnerable to severe competition. Enterprises could serve themselves better thanks to open source, while the public clouds’ engineering excellence and use of open source projects has allowed them to progress faster and address developers’ (the key buyers’) needs better.
However, as we discuss below, the big four cloud companies still only account for about half the total spending. The niche opportunities in cloud remain very real, and there are still potential opportunities for telcos who offer compelling technical and product differentiation.
STL’s cloud scenarios from 2012, revisited
In 2012, STL Partners identified three scenarios for the future of cloud, in our market overview report.
“Menacing Stormcloud”: this scenario essentially envisioned a world in which hyperscale data centre infrastructure just kept getting better. As a result, the cloud majors would eventually take over, probably also cannibalising the on-premises and private cloud markets. This would require cloud customers to bite the bullet and trust the cloud, whatever security and privacy issues might arise. Prices, but also margins, would be hammered into the ground by sheer scale economics. In “Menacing Stormcloud”, AWS and its rivals would dominate the cloud market, and little would be left in terms of telco opportunities.
“Cloudburst”: our second scenario postulated that the cloud was a technology bubble and the bubble would do what all bubbles do – burst. Some triggering event – perhaps a security crisis, or a major cloud customer deciding to scale out – would bring home the downside risks to the investing public and the customer base. Investors would dump the sector, bankruptcies would ensue, and interest would move on, whether to a new generation of on-premises solutions or to a revived interest in P2P systems. In “Cloudburst”, both the cloud and the data centre in its current form would end up being much less relevant, and cloud opportunities for telcos (as well as other players) would accordingly be very limited.
“Cloud Layers”: this scenario foresaw a division between a hard core of hyperscale public cloud providers – dominated by AWS and its closest competitors – and a periphery of special-purpose, regional, private, and otherwise differentiated cloud providers. This latter group would include telcos, CDNs, software-as-a-service providers, and enterprise in-house IT departments. We noted that this was the option that had the best chance of offering telcos a significant opportunity to address the cloud market.
Looking at the market in 2016, “Cloud Layers” has turned out to be closest to the current reality. The cloud has certainly not burst, as our second scenario postulated. As for the first scenario, “Menacing Stormcloud”, the public cloud majors have indeed become very dominant, but the price collapse this scenario envisioned has not ensued. Even the price leader, AWS, has returned only about half the cost savings derived from technical advances (what we would call the annual ‘Moore’s law increment’) to its customers through its pricing, capturing the rest into margin.
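The margin mechanics described here can be sketched numerically. The starting price and cost, the 25% annual cost decline, and the 50% pass-through rate below are assumptions made for the illustration, not AWS’s actual figures; the point is simply that passing through only half the saving makes margins expand year on year.

```python
# Illustrative only: the 25% annual cost decline and 50% pass-through rate
# are assumptions for the sketch, not AWS's actual figures.
cost, price = 100.0, 120.0       # unit cost and unit price at year 0
cost_decline = 0.25              # annual 'Moore's law increment'
pass_through = 0.5               # share of the saving returned as price cuts

for year in range(1, 4):
    saving = cost * cost_decline
    cost -= saving
    price -= saving * pass_through   # the customer sees only half the saving
    margin = (price - cost) / price
    print(f"year {year}: price={price:.1f} cost={cost:.1f} margin={margin:.0%}")
```

Under these assumptions the gross margin climbs from roughly 30% after one year to over 50% after three, even though headline prices fall every year.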
Further, although there have been exits from the market, the exiting providers have not been niche cloud providers or traditional managed hosting providers. Rather, we have seen exits by players who have made unsuccessful attempts to compete in hyperscale. HP’s closure of its Helion Public Cloud product, Facebook’s closure of its Parse mobile developer PaaS, and the resounding lack of results for Verizon’s $1.4bn spent on Terremark, are cases in point.
Looking at the operators who managed to find a niche in the “Cloud Layers” scenario – such as AT&T, Telstra, or Iliad – an important common factor has been their commitment to owning their technology and building in-house expertise, and using this to differentiate themselves from “big cloud”. AT&T’s network-integrated cloud strategy is driven by both using open-source software as far as possible, and investing in the key open-source projects by contributing code back to them. Iliad introduced the first full bare-metal cloud, using a highly innovative ARM-based microserver it developed in-house. Telstra is bringing much more engineering back in-house, in support of its distinctive role as the preferred partner for all the major clouds in Australia.
Competitive Developments in Cloud Services, 2012-2016
Understanding the strategies of the non-telco cloud players
Most Telcos’ Cloud Initiatives Haven’t Worked
The Dash-for-Scale failed (because it wasn’t ‘hyperscale’)
Only the disruptors made any money
Too little investment in cloud innovation resources, and too much belief in marketing reach as a differentiator
Cloud innovation is demanding: the case of AT&T
Cloud 2.0 Scenarios 2016-2020
Scenario 1: Cumulonimbus – tech and Internet players’ global cloud oligopoly
Scenario 2: Cirro-cumulus – a core of big cloud players, plus specialists and DIY enterprises
Scenario 3: Disruptive 5G lightning storm fuses the Cloud with the Network
Figure 1: 2016 Forecasts of cloud market size through 2020
Figure 2: Forecasting the adoption of cloud
Figure 3: Our revised cloud services spending forecast: still a near-trillion dollar opportunity, even though IT spending slows
Figure 4: Our forecast in context
Figure 5: Public IaaS leads the way, with AWS and Microsoft
Figure 6: IaaS is forecast to grow as a share of the total Cloud opportunity
Figure 7: All the profit at Amazon is in AWS
Figure 8: Moore’s law runs ahead of AWS pricing, and Amazon grows margins
Figure 9: Cloud is the new driver of growth at Microsoft
Figure 10: Google is still the fourth company in the cloud
Figure 11: AT&T’s cloud line-item is pulling further and further ahead of Verizon’s
Figure 12: STL world cloud spending forecast (recap)
Figure 13: Driver/indicator/barrier matrix for Cloud 2.0 scenarios
To understand how disruptive Iliad’s approach to cloud services is, it is useful to consider it within the wider context of operator cloud services and technology strategies.
Although telecoms operators have often talked a good game when it comes to offering enterprise cloud services, most have found it challenging to compete with the major dedicated and Internet-focused cloud providers like Rackspace, Google, Microsoft, and most of all, Amazon Web Services. Smaller altnets and challenger mobile operators – and even smaller incumbents – have struggled to find enough scale, while even huge operators like Telefonica or Verizon have largely failed to differentiate themselves from the competition. Further, the success of the software and Internet services cloud providers in building hyperscale infrastructure has highlighted a skills gap between telcos and these competitors in the data centre. Although telcos are meant to be infrastructure businesses, their showing on this has largely been rather poor.
In our earlier 2012 Strategy Report Cloud 2.0: Telco Strategies in the Cloud, we pointed to differentiation as the biggest single challenge for telco cloud services. The report argued that the more telcos bought into pre-packaged technology solutions from vendors like VMWare, the less control over the future development path of their software they would have, and the more difficult it would be for them to differentiate effectively. We show the distinction in Figure 1 (see the Technology section of the heatmap). Relying heavily on third-party proprietary technology solutions for cloud would give telcos a structural disadvantage relative to the major non-telco cloud players, who either develop their own, or contribute to fast-evolving open-source projects.
We also observed in that report that nearly all the operators we evaluated who were making any effort to compete in Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) had opted to resell VMWare technology.
Looking back from 2016, we observe that the operators who went down this route – Verizon is a prime example – have not succeeded in the cloud. The ones that chose to own their technology, building the skills base internally by contributing to the key open-source projects, like AT&T (with its commitment to the OpenStack solution), or who became a preferred regional partner for the major cloud providers (like Telstra), have done much better.
Figure 1: Telco strategies in the cloud, 2012 – most providers go with VMWare-based solutions
Source: STL Partners, Cloud 2.0 Strategy Report
AT&T’s strategy of using the transition to cloud to take control of its own technology, move forward on the SDN/NFV transition, and re-organise its product line around its customers’ needs has helped its revenue from strategic business services power ahead of its key competitor, Verizon, as Figure 2 shows.
Figure 2: Getting the cloud right pays off at AT&T Strategic Business Services
Source: STL Partners
The above is the opening of the report’s introduction, which goes on to outline our views on the cloud market and reprise telcos’ opportunity and progress in it. To access the other 23 pages of this 26-page Telco 2.0 Report, including…
Iliad: A Champion Disruptor
Cloud at Iliad
Responding to cloud market disruption: Iliad draws on its hi-lo segmentation experience
Scaleway: Address the start-ups and scale-ups
Dedibox Power 8: doubling down on the high end
Nodebox: build-your-own network switches
Financial impact for Iliad
…and the following report figures…
Figure 1: Telco strategies in the cloud, 2012 – most providers go with VMWare-based solutions
Figure 2: Getting the cloud right pays off at AT&T Strategic Business Services
Figure 3: AWS is not just a price leader
Figure 4: STL Partners’ cloud adoption forecast
Figure 5: Free Mobile’s growth repeatedly surprises on the upside
Figure 6: Free Mobile’s 4G build overtakes SFR
Figure 7: Free Mobile is a top scorer on our network quality metrics
Figure 8: Free Mobile’s customer satisfaction ratings are excellent
Figure 9: Specs for ‘extreme performance’ Dedibox server models
Figure 10: The C1 ‘Pimouss’ microserver
Figure 11: 18 C1s close-packed in a standard server blade
Figure 12: Scaleway Hosted C1 Server Pricing
Figure 13: The case for more POWER8: IBM POWER8 vs Intel x86 E5
Figure 14: A Nodebox, Free’s internally developed network switch
In this briefing, we analyse the bewildering array of technologies being deployed in the ongoing mobile marketing and commerce land-grab. With different digital commerce brokers backing different technologies, confusion reigns among merchants and consumers, holding back uptake. Moreover, the technological fragmentation is limiting economies of scale, keeping costs too high.
This paper is designed to help telcos and other digital commerce players make the right technological bets. Will bricks and mortar merchants embrace NFC or Bluetooth Low Energy or cloud-based solutions? If NFC does take off, will SIM cards or trusted execution environments be used to secure services? Should digital commerce brokers use SMS, in-app notifications or IP-based messaging services to interact with consumers?
STL defines Digital Commerce 2.0 as the use of new digital and mobile technologies to bring buyers and sellers together more efficiently and effectively (see Digital Commerce 2.0: New $Bn Disruptive Opportunities for Telcos, Banks and Technology Players). Fast growing adoption of mobile, social and local services is opening up opportunities to provide consumers with highly-relevant advertising and marketing services, underpinned by secure and easy-to-use payment services. By giving people easy access to information, vouchers, loyalty points and electronic payment services, smartphones can be used to make shopping in bricks and mortar stores as interactive as shopping through web sites and mobile apps.
This executive briefing weighs the pros and cons of the different technologies being used to enable mobile commerce and identifies the likely winners and losers.
A new dawn for digital commerce
This section explains the driving forces behind the mobile commerce land-grab and the associated technology battle.
Digital commerce is evolving fast, moving out of the home and the office and onto the street and into the store. The advent of mass-market smartphones with touchscreens, full Internet browsers and an array of feature-rich apps is turning out to be a game changer that profoundly impacts the way in which people and businesses buy and sell. As they move around, many consumers now use smartphones to access social, local and mobile (SoLoMo) digital services and make smarter purchase decisions. As they shop, they can easily canvass opinion via Facebook, read product reviews on Amazon or compare prices across multiple stores. In developed markets, this phenomenon is now well established. Two thirds of 400 Americans surveyed in November 2013 reported that they used smartphones in stores to compare prices, look for offers or deals, consult friends and search for product reviews.
At the same time, the combination of Internet and mobile technologies, embodied in the smartphone, is enabling businesses to adopt new forms of digital marketing, retailing and payments that could dramatically improve their efficiency and effectiveness. Smartphones, and the data they generate, can be used to optimise and enable every part of the ‘wheel of commerce’ (see Figure 4).
Figure 4: The elements that make up the wheel of commerce
Source: STL Partners
The extensive data being generated by smartphones can give companies real-time information on where their customers are and what they are doing. That data can be used to improve merchants’ marketing, advertising, stock management, fulfilment and customer care. For example, a smartphone’s sensors can detect how fast the device is moving and in what direction, so a merchant could see whether a potential customer is driving or walking past their store.
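A merchant-side system might infer the travel mode from reported speed with a simple rule of thumb. The thresholds below are illustrative assumptions, not a production classifier, which would also use accelerometer and location history data.

```python
# Illustrative only: thresholds are rough assumptions, not a production model.
def movement_mode(speed_kmh: float) -> str:
    """Crude guess at how a device is moving, from its reported speed."""
    if speed_kmh < 0.5:
        return "stationary"
    if speed_kmh < 7:
        return "walking"
    return "driving"

print(movement_mode(4))    # walking: perhaps worth showing an in-store offer
print(movement_mode(45))   # driving: probably not
```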
Marketing that makes use of real-time smartphone data should also be more effective than other forms of digital marketing. In theory, at least, targeting marketing at consumers in the right geography at a specific time should be far more effective than simply displaying adverts to anyone who conducts an Internet search using a specific term.
Similarly, local businesses should find targeted vouchers, promotions and information, delivered via smartphones, much more effective than junk mail at engaging customers and potential customers. Instead of paying someone to put paper-based vouchers through the letterbox of every house in the neighbourhood, an Indian restaurant could, for example, send digital vouchers to the handsets of anyone who has said they are interested in Indian food as they arrive at the local train station between 7pm and 9pm. As it can be precisely targeted and timed, mobile marketing should achieve a much higher return on investment (ROI) than a traditional analogue approach.
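The restaurant example amounts to a simple targeting rule: interest, proximity and time window must all match before a voucher is sent. A minimal sketch, with field names and thresholds invented for the illustration:

```python
from datetime import time

# Illustrative only: a toy targeting rule for the Indian-restaurant example;
# field names and the 200m geofence are invented for the sketch.
def should_send_voucher(customer, now, station_geofence_m=200):
    return (
        "indian_food" in customer["interests"]
        and customer["distance_from_station_m"] <= station_geofence_m
        and time(19, 0) <= now <= time(21, 0)   # the 7pm-9pm window
    )

commuter = {"interests": {"indian_food"}, "distance_from_station_m": 50}
print(should_send_voucher(commuter, time(19, 30)))   # True
print(should_send_voucher(commuter, time(12, 0)))    # False
```

In practice the rule would also need opt-in consent and frequency capping, but the core of precisely targeted and timed marketing is a conjunction of conditions like this one.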
In our recent Strategy Report, STL Partners argued that the disruption in the digital commerce market has opened up two major opportunities for telcos:
Real-time commerce enablement: The use of mobile technologies and services to optimise all aspects of commerce. For example, mobile networks can deliver precisely targeted and timely marketing and advertising to consumers’ smartphones, tablets, computers and televisions.
Personal cloud: Acting as a trusted custodian for individuals’ data and an intermediary between individuals and organisations, providing authentication services, digital lockers and other services that reduce the risk and friction in everyday interactions. An early example of this kind of service is the financial services web site Mint.com (profiled in the appendix of this report). As personal cloud services provide personalised recommendations based on individuals’ authorised data, they could potentially engage much more deeply with consumers than the generalised decision-support services, such as Google, TripAdvisor, moneysavingexpert.com and comparethemarket.com, in widespread use today.
These two opportunities are inter-related and could be combined in a single platform. In both cases, the telco is acting as a broker – matching buyers and sellers as efficiently as possible, competing with incumbent digital commerce brokers, such as Google, Amazon, eBay and Apple. The Strategy Report explains in detail how telcos could pursue these opportunities and potentially compete with the giant Internet players that dominate digital commerce today.
For most telcos, the best approach is to start with mobile commerce, where they have the strongest strategic position, and then use the resulting data, customer relationships and trusted brand to expand into personal cloud services, which will require high levels of investment. This is essentially NTT DOCOMO’s strategy.
However, in the mobile commerce market, telcos are having to compete with Internet players, banks, payment networks and other companies in land-grab mode – racing to sign up merchants and consumers for platforms that could enable them to secure a pivotal (and potentially lucrative) position in the fast growing mobile commerce market. Amazon, for example, is pursuing this market through its Amazon Local service, which emails offers from local merchants to consumers in specific geographic areas.
Moreover, a bewildering array of technologies is being used to pursue this land-grab, creating confusion for merchants and consumers, while fuelling fragmentation and limiting economies of scale.
In this paper, we weigh the pros and cons of the different technologies being used in each segment of the wheel of commerce, before identifying the most likely winners and losers. Note, the appendix of the Strategy Report profiles many of the key innovators in this space, such as Placecast, Shopkick and Square.
What’s at stake
This section considers the relative importance of the different segments of the wheel of commerce and explains why the key technological battles are taking place in the promote and transact segments.
Carving up the wheel of commerce
STL Partners’ recent Strategy Report models in detail the potential revenues telcos could earn from pursuing the real-time commerce and personal cloud opportunities. That is beyond the scope of this technology-focused paper, but suffice it to say that the digital commerce market is large and growing rapidly: merchants and brands spend hundreds of billions of dollars across the various elements of the wheel of commerce. In the U.S., the direct marketing market alone is worth about $155 billion per annum, according to the Direct Marketing Association. In 2012, $62 billion of that total was spent on digital marketing, while about $93 billion was spent on traditional direct mail.
In the context of the STL Wheel of Commerce (see Figure 4), the promote segment (ads, direct marketing and coupons) is the most valuable of the six segments. Our analysis of middle-income markets for clients suggests that the promote segment accounts for approximately 40% of the value in the wheel of digital commerce today, while the transact segment (payments) accounts for 20% and planning (market research etc.) for 16% (see Figure 5). These estimates draw on data released by WPP and American Express.
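To put rough dollar figures on those shares, the sketch below applies them to a hypothetical total; the $500bn figure is an assumption for illustration only, not an STL estimate.

```python
# Illustrative only: the total is hypothetical; the shares are those quoted above.
total_spend_bn = 500   # assumed total 'wheel of commerce' spend, $bn
shares = {"promote": 0.40, "transact": 0.20, "planning": 0.16}

for segment, share in shares.items():
    print(f"{segment}: ${total_spend_bn * share:.0f}bn")

# The remaining 24% is spread across the other three segments of the wheel.
```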
Note that payments itself is a low-margin business – American Express estimates that merchants in the U.S. spend four to five times as much on marketing activities, such as loyalty programmes and offers, as they do on payments.
Figure 5: The relative size of the segments of the wheel of commerce
Source: STL Partners
A new dawn for digital commerce
What’s at stake
Carving up the wheel of commerce
The importance of tracking transactions
It’s all about data
Different industries, different strategies
Tough technology choices
Likely winners and losers
The commercial implications
About STL Partners
Figure 1: App notifications are in pole position in the promotion segment
Figure 2: There isn’t a perfect point of sale solution
Figure 3: Different tech adoption scenarios and their commercial implications
Figure 4: The elements that make up the wheel of commerce
Figure 5: The relative size of the segments of the wheel of commerce
Figure 6: Examples of financial services-led digital wallets
Figure 7: Examples of Mobile-centric wallets in the U.S.
Figure 8: The mobile commerce strategy of leading Internet players
Figure 9: Telcos can combine data from different domains
Figure 10: How to reach consumers: The technology options
Figure 11: Balancing cost and consumer experience
Figure 12: An example of an easy-to-use tool for merchants
Figure 13: Drag and drop marketing collateral into Google Wallet
Figure 14: Contrasting a secure element with host-based card emulation
Figure 15: There isn’t a perfect point of sale solution
Figure 16: The proportion of mobile transactions to be enabled by NFC in 2017
Figure 17: Integrated platforms and point solutions both come with risks attached
Figure 18: Different tech adoption scenarios and their commercial implications
Summary: 150 senior execs from Vodafone, Telefonica, Etisalat, Ooredoo (formerly Qtel), Axiata and Singtel supported our technology survey for the Telco 2.0 Transformation Index. This analysis of the results includes findings on prioritisation, alignment, accountability, speed of change, skills, partners, projects and approaches to transformation. It shows that there are common issues around urgency, accountability and skills, and interesting differences in priorities and overall approach to technology as an enabler of transformation. (November 2013, Executive Briefing Service, Transformation Stream.)
Below are a brief extract and detailed contents from a 29 page Telco 2.0 Briefing Report that can be downloaded in full in Powerpoint slideshow format by members of the Premium Telco 2.0 Executive Briefing service and the Telco 2.0 Transformation stream here.
This report is an extract from the overall analysis for the Telco 2.0 Transformation Index, a new service from Telco 2.0 Research. Non-members can find out more about subscribing to the Briefing Service here and the Transformation Index here. There will be a world first preview of the Telco 2.0 Transformation Index at our Digital Arabia Executive Brainstorm in Dubai on 11-13th November 2013. To find out more about any of these services please email email@example.com or call +44 (0) 207 247 5003.
One component of our analysis has been a survey of 150 senior execs on the reality of developing and implementing technology strategy in their organisations, and the results are now available to download to members of the Telco 2.0 Executive Briefing Service.
The report’s highly graphical and interactive Powerpoint show format makes it extremely easy to digest and reach valuable insights quickly
The structure of the analysis allows the reader to rapidly and concisely assimilate the complex similarities and differences between players
It is underpinned with detailed and sourced numerical and qualitative data
Example charts from the report
The report analyses similarities and differences in priorities across the six players.
It also assesses the skills profiles of the players against different strategic areas.
To access the contents of the report, including…
Introduction and Methodology
Background – the Telco 2.0 Transformation Index
Drivers of network and IT projects
Degree of challenge of ‘Transformation’ by operator
Priority areas for Transformation by operator
What are the preferred project approaches for transformation?
Alignment of technology and commercial priorities
Accountability for leveraging and generating value from technology projects
IT Skills – ‘Telco 1.0’ vs ‘Telco 2.0’
Nature of strategic partnerships by operator
Technology project life-cycles by operator
Groupings by attitude to technology as a driver of success
Priority areas for technological improvement or transformation
…Members of the Telco 2.0 Executive Briefing Subscription Service and the Telco 2.0 Transformation stream can download the full 29 page report in interactive Powerpoint slideshow format here. Non-Members, please subscribe here. For other enquiries, please email firstname.lastname@example.org / call +44 (0) 207 247 5003.
Summary: Software Defined Networking is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks. This briefing analyses its potential benefits and use cases, outlines strategic scenarios and key action plans for telcos, summarises key vendor positions, and why it is so important for both the telco and vendor communities to adopt and exploit SDN capabilities now. (May 2013, Executive Briefing Service, Cloud & Enterprise ICT Stream, Future of the Network Stream).
Software Defined Networking or SDN is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks.
With SDN, networks no longer need to be point-to-point connections between operational centres; rather, the network becomes a programmable fabric that can be manipulated in real time to meet the needs of the applications and systems that sit on top of it. SDN allows networks to operate more efficiently in the data centre as a LAN, and potentially also in Wide Area Networks (WANs).
SDN is new and, like any new technology, this means that there is a degree of hype and a lot of market activity:
Venture capitalists are on the lookout for new opportunities;
There are plenty of start-ups all with “the next big thing”;
Incumbents are looking to quickly acquire new skills through acquisition;
And, not surprisingly, there is a degree of ‘SDN washing’, where existing products get a makeover or a software upgrade and are suddenly SDN-compliant.
However, outside of vendor papers and marketing materials there still isn’t widespread clarity about what SDN is and how it might be used, and there are plenty of important questions to be answered. For example:
SDN is open to interpretation and is not an industry standard, so what is it?
Is it better than what we have today?
What are the implications for your business, whether telcos, or vendors?
Could it simply be just a passing fad that will fade into the networking archives like IP Switching or X.25 and can you afford to ignore it?
What will be the impact on LAN and WAN design and for that matter data centres, telcos and enterprise customers? Could it be a threat to service providers?
Could we see a future where networking equipment becomes commoditised just like server hardware?
Will standards prevail?
Vendors are to a degree adding to the confusion. For example, Cisco argues that it already has an SDN-capable product portfolio with Cisco One. It says that its solution is more capable than solutions dominated by open-source based products, because these have limited functionality.
This executive briefing will explain what SDN is, why it is different to traditional networking, look at the emerging market with some likely use cases and then look at the implications and benefits for service providers and vendors.
How and why has SDN evolved?
SDN has been developed in response to the fact that basic networking hasn’t really evolved much over the last 30 plus years, and that new capabilities are required to further the development of virtualised computing to bring innovation and new business opportunities. From a business perspective the networking market is a prime candidate for disruption:
It is a mature market that has evolved steadily for many years
There are relatively few leading players who have a dominant market position
Technology developments have generally focused on speed rather than cost reduction or innovation
Low cost silicon is available to compete with custom chips developed by the market leaders
There is a wealth of open source software plus plenty of low cost general purpose computing hardware on which to run it
Until SDN, no one really took a clean slate view on what might be possible
New features and capabilities have been added to traditional equipment, but have tended to bloat the software, increasing the cost of both purchasing and operating the devices. Nevertheless, IP networking as we know it has performed the task of connecting two end points very well; it has been able to support the explosion of growth required by the Internet and by mobile and mass computing in general.
Traditionally each element in the network (typically a switch or a router) builds up a network map and makes routing decisions based on communication with its immediate neighbours. Once a connection through the network has been established, packets follow the same route for the duration of the connection. Voice, data and video have differing delivery requirements with respect to delay, jitter and latency, but in traditional networks there is no overall picture of the network – no single entity responsible for route planning, or ensuring that traffic is optimised, managed or even flows over the most appropriate path to suit its needs.
One of the significant things about SDN is that it removes the autonomy of individual networking elements to make routing decisions. Responsibility for establishing paths through the network, and for their control and routing, is placed in the hands of one or more central network controllers. The controller is able to see the network as a complete entity and manage its traffic flows, routing, policies and quality of service, in essence treating the network as a fabric and then attempting to get maximum utilisation from that fabric. SDN controllers generally offer external interfaces through which external applications can control and set up network paths.
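As a toy illustration of that difference, the sketch below (a hypothetical four-node topology; Python used purely for illustration, not any real controller API) shows what a central controller can do that no individual element can: compute an end-to-end path over the complete graph before installing any forwarding state.

```python
import heapq

# The central controller's view: the complete topology as a weighted graph.
TOPOLOGY = {
    "A": {"B": 1, "C": 2},
    "B": {"A": 1, "D": 2},
    "C": {"A": 2, "D": 1},
    "D": {"B": 2, "C": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra over the controller's global map -- something no single
    traditional router can do, since each only knows its neighbours."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

cost, path = shortest_path(TOPOLOGY, "A", "D")
print(cost, path)  # -> 3 ['A', 'B', 'D']
```

Having computed the path centrally, the controller would then push the corresponding forwarding entries down to each switch along it.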
There has been a growing demand to make networks programmable by external applications; data centres and virtual computing are clear examples of where it would be desirable to deploy not just the virtual computing environment but all the associated networking functions and network infrastructure from a single console. With no common control point, the only way of providing interfaces to external systems and applications is to place agents in the networking devices and ask external systems to manage each device individually. This kind of architecture scales poorly, creates control traffic that reduces overall efficiency, and risks multiple applications trying to control the same entity; it is therefore fraught with problems.
Network Functions Virtualisation (NFV)
It is worth noting that a complementary initiative called Network Functions Virtualisation (NFV) was started in 2012 by the European Telecommunications Standards Institute (ETSI). Its aim is to take functions that sit on dedicated hardware, such as load balancers, firewalls and routers, and run them on virtualised hardware platforms, lowering capex, extending their useful life and reducing operating expenditure. You can read more about NFV later in the report on page 20.
In contrast, SDN makes it possible to program or change the network to meet a specific time-dependent need and to establish end-to-end connections that meet specific criteria. The SDN controller holds a map of the current network state and of the requests that external applications are making on the network; this makes it easier to get the best use from the network at any given moment, carry out meaningful traffic engineering and work more effectively with virtual computing environments.
What is driving the move to SDN?
The Internet and the world of IP communications have seen continuous development over the last 40 years. There has been huge innovation and strict control of standards through the Internet Engineering Task Force (IETF). Because of the ad-hoc nature of its development, there are many different functions catering for all sorts of use cases. Some overlap, some are obsolete, but all still have to be supported and more are being added all the time. This means that the devices that control IP networks and connect to the networks must understand a minimum subset of functions in order to communicate with each other successfully. This adds complexity and cost because every element in the network has to be able to process or understand these rules.
But the system works and it works well. For example, when we open a web browser and a session to stlpartners.com, initially our browser and our PC have no knowledge of how to get to STL’s web server, but usually within half a second or so the STL Partners web site appears. What actually happens can be seen in Figure 2. Our PC uses a variety of protocols to connect first to a gateway (1) on our network and then to a public name server (2 & 3) in order to query the stlpartners.com IP address. The PC then opens a connection to that address (4) and assumes that the network will route packets of information to and from the destination server. The process is much the same whether using public WANs or private Local Area Networks.
Figure 2 – Process of connecting to an Internet web address
Source: STL Partners
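The lookup-then-connect sequence described above can be reproduced with a few lines of Python standard-library code (a simplified sketch: the operating system’s resolver and the network carry out the gateway, name-server and routing steps on our behalf):

```python
import socket

def resolve(host):
    """Steps 2 & 3: the stub resolver queries a name server (or the
    local hosts file) and returns the host's IP address."""
    return socket.getaddrinfo(host, None)[0][4][0]

def connect(host, port=443, timeout=5):
    """Step 4: open a TCP connection to the resolved address and assume
    the network will route packets to and from the destination server."""
    return socket.create_connection((resolve(host), port), timeout=timeout)

# e.g. connect("stlpartners.com") -- the whole exchange, resolution
# included, normally completes within half a second or so.
```

Note that nothing here tells the network *how* to route the packets; the application simply trusts the routers along the way to make those decisions themselves.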
The Internet is also highly resilient; it was developed to survive a variety of network outages including the complete loss of sub networks. Popular myth has it that the US Department of Defence wanted it to be able to survive a nuclear attack, but while it probably could, nuclear survivability wasn’t a design goal. The Internet has the ability to route around failed networking elements and it does this by giving network devices the autonomy to make their own decisions about the state of the network and how to get data from one point to any other.
While this autonomy is of great value in unreliable networks, which is what the Internet looked like during its evolution in the late 1970s and early 1980s, today’s networks comprise far more robust elements and more reliable links. The upshot is that networks typically operate at a sub-optimum level: unless there is a network outage, routes and traffic paths are mostly static and last for the duration of the connection. If an outage occurs, the routers in the network decide amongst themselves how best to re-route the traffic, each making its own decisions about traffic flow and prioritisation given its individual view of the network. In fact, most routers and switches are not aware of the network in its entirety, just the adjacent devices they are connected to and the information those neighbours pass on about the networks and devices they in turn are connected to. It can therefore take some time for a converged network to stabilise, as we saw in the Internet outages that affected Amazon, Facebook, Google and Dropbox last October.
The diagram in Figure 3 shows a simple router network. Router A knows about the networks on routers B and C because it is connected directly to them and they have informed A about their networks. B and C have also informed A that they can reach the networks and devices on router D. You can see from this model that there is no overall picture of the network and no single device is able to make network-wide decisions. In order to connect a device on a network attached to A to a device on a network attached to D, A must make a decision based on what B or C tell it.
Figure 3 – Simple router network
Source: STL Partners
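The neighbour-by-neighbour learning described above can be sketched as a toy distance-vector exchange (hypothetical router and network names matching the simple four-router model; an illustration only, not any specific routing protocol). A only ever learns how to reach D’s networks from what B and C advertise:

```python
# Each router starts out knowing only its directly attached network;
# routes to everything else must be learned from neighbours.
tables = {
    "A": {"net-A": ("direct", 0)},
    "B": {"net-B": ("direct", 0)},
    "C": {"net-C": ("direct", 0)},
    "D": {"net-D": ("direct", 0)},
}
links = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

def exchange_once(tables, links):
    """One round of advertisements: each router tells its neighbours which
    networks it can reach, and each neighbour keeps the cheapest next hop
    it has heard of. No router ever sees the whole map."""
    updates = {}
    for a, b in links:
        for src, dst in ((a, b), (b, a)):
            for net, (_, hops) in tables[src].items():
                current = updates.get(dst, {}).get(
                    net, tables[dst].get(net, (None, float("inf"))))
                if hops + 1 < current[1]:
                    updates.setdefault(dst, {})[net] = (src, hops + 1)
    for router, learned in updates.items():
        tables[router].update(learned)

exchange_once(tables, links)  # A learns net-B and net-C from its neighbours
exchange_once(tables, links)  # only now does A hear about net-D, second-hand
print(tables["A"]["net-D"])   # ('B', 2) -- learned via B, never from D directly
```

The two rounds needed before A can reach net-D are a miniature version of the convergence delay described above: information about distant parts of the network only propagates one hop per exchange.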
This model makes it difficult to build large data centres with thousands of Virtual Machines (VMs) and offer customers dynamic service creation, when the network only understands physical devices and does not easily allow each VM to have its own range of IP addresses and other IP services. Ideally you would configure a complete virtual system consisting of virtual machines, load balancing, security, network control elements and network configuration from a single management console, with these abstract functions then mapped to physical computing and networking resources. VMware has coined the term ‘Software Defined Data Centre’ (SDDC) to describe a system that allows all of these elements and more to be controlled by a single suite of management software.
Moreover, returning to the fact that every networking device needs to understand a raft of Internet Requests For Comments (RFCs): all the clever code supporting these RFCs in switches and routers costs money. High-performance processing and memory are required in traditional routers and switches in order to inspect and process traffic, even in MPLS networks. Cisco IOS supports over 600 RFCs and other standards. This adds to cost, complexity, compatibility issues, future obsolescence and power/cooling needs.
SDN takes a fresh approach to building networks based on the technologies available today: it places the intelligence centrally on scalable compute platforms and leaves the switches and routers as relatively dumb packet-forwarding engines. The control platforms still have to support all the standards, but they run on hardware far more powerful than the processors in traditional networking devices and, more importantly, the controllers can manage the network as a fabric rather than letting each element make its own potentially sub-optimum decisions.
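That division of labour can be sketched as a match-action table in the spirit of OpenFlow (heavily simplified; the field names and rule format here are illustrative, not the real OpenFlow wire format). The switch does nothing but look up rules the controller installed and apply their actions, punting anything it does not recognise back to the controller:

```python
# A minimal match-action flow table: the controller installs rules,
# the switch just matches incoming packets against them and forwards.

class Switch:
    def __init__(self):
        self.flow_table = []  # (match_dict, action) pairs, controller-installed

    def install_rule(self, match, action):
        """Called by the (remote) controller over the control channel."""
        self.flow_table.append((match, action))

    def handle_packet(self, packet):
        """The data path: first matching rule wins; unknown traffic is
        punted to the controller, which decides and installs a new rule."""
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return ("send-to-controller",)

sw = Switch()
sw.install_rule({"dst_ip": "10.0.0.7"}, ("output", "port-3"))
print(sw.handle_packet({"dst_ip": "10.0.0.7", "src_ip": "10.0.0.1"}))  # ('output', 'port-3')
print(sw.handle_packet({"dst_ip": "10.0.0.9"}))  # ('send-to-controller',)
```

All the standards-heavy decision logic lives behind `install_rule`, on the controller; the per-packet path is a simple table lookup, which is what makes commodity forwarding silicon viable.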
As one proof point that SDN works, in early 2012 Google announced that it had migrated its live data centres to a Software Defined Network using switches it designed and developed using off-the-shelf silicon and OpenFlow for the control path to a Google-designed Controller. Google claims many benefits including better utilisation of its compute power after implementing this system. At the time Google stated it would have liked to have been able to purchase OpenFlow-compliant switches but none were available that suited its needs. Since then, new vendors have entered the market such as BigSwitch and Pica8, delivering relatively low cost OpenFlow-compliant switches.
To read the Software Defined Networking report in full, including the following sections detailing additional analysis…
Executive Summary including detailed recommendations for telcos and vendors
Introduction (reproduced above)
How and why has SDN evolved? (reproduced above)
What is driving the move to SDN? (reproduced above)
CDN 2.0: Event Summary Analysis. A summary of the findings of the CDN 2.0 session, 10th November 2011, held in the Guoman Hotel, London
Part of the New Digital Economics Executive Brainstorm series, the CDN 2.0 session took place at the Guoman Hotel, London on the 10th November and looked at the future of online video, both the star product telcos rely on for much of their revenue and the main driver of their costs.
Using a widely acclaimed interactive format called ‘Mindshare’, the event enabled specially-invited senior executives from across the communications, media, banking and technology sectors to discuss the field of content delivery networking and the digital logistics systems Netflix, YouTube and other online video providers rely on.
This note summarises some of the high-level findings and includes the verbatim output of the brainstorm.
M2M 2.0: Event Summary Analysis: A summary of the findings of the M2M 2.0 session, 10th November 2011, held in the Guoman Hotel, London
Part of the New Digital Economics Executive Brainstorm series, the M2M 2.0 session took place at the Guoman Hotel, London on the 10th November and reviewed real-world experience with M2M projects from operators and other actors.
Using a widely acclaimed interactive format called ‘Mindshare’, the event enabled specially-invited senior executives from across the communications, energy and technology sectors to discuss these projects.
This note summarises some of the high-level findings and includes the verbatim output of the brainstorm.
Report Summary: This 120 page Strategy Report focuses on the ‘Digital Generation’ – the cohort which has grown up with new applications and technologies – whose behaviour will ultimately drive the future shape of the Telco business.
The report is a ‘must read’ for CxOs, strategists and product managers seeking to evolve telcos to succeed with the next generation.
This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this Strategy Report that can be downloaded in full in PDF format by members of the Executive Briefing Service here.
For more on any of these services, please email email@example.com / call +44 (0) 207 247 5003
The Needs Gap – a strategic threat to Telcos
The report shows that there is a deep disconnect between Telcos and the Digital Generation.
The Digital Generation wants:
Communication to be free
To express identity and content
To move seamlessly between media
To connect with their social groups
New applications, fast
Telcos, in contrast, offer:
Connected calls and lines
Control over as much as possible
Minimized capital investment
Years to develop new products and services
The Digital Generation has integrated some technologies and applications with their lives and discarded the rest – those that don’t fit – rapidly. Many other applications and services familiar to our readers (Facebook, QQ, Apple, Google, etc) now serve some of the needs that Telcos alone used to serve.
Telcos have generally been slow to produce services that meet the needs and expectations of these customers. Unchecked, this will ultimately lead to the disintermediation of the telcos from their ultimate source of value – their customers. This is a strategic threat not just for the youth segment but ultimately across all generations. This report outlines the threat, the urgent need for change, and a framework to support that change.
Report – Key Points
Definition of the digital generation – youth-oriented but aging fast
Key digital generation needs and behaviours – the need for participation
Drivers of service value for these customers – supporting interaction and self-expression
A new approach to product development – the Customer Participation Framework
The economics of end user participation – driving ROI from customer interactions
User participation and the two-sided business model – kicking off Telco 2.0 strategies
Social forces shaping young people’s actions – a risk culture
Age, gender and national variations in the Digital Generation – similarities and differences
Attitudes to technology – only a means to an end
Overview of the Customer Participation Framework
Fit with Telco 2.0 Business Model Innovation Strategies
In previous STL Partners’ reports the focus has been on how Telco assets could be used to open new revenue streams from upstream service providers wanting to interact with end-users. Reports such as the 2-Sided Telecoms Market Opportunity have focused upon the business opportunity of how operators could reduce digital friction and protect themselves from over-the-top providers eager to circumvent the operator and gain access to the end-user directly.
In this report we shift the focus to examine end-users and their behaviours, explaining how:
Operators can improve their retail offering to these customers by better meeting their needs;
Operators can increase the value of their assets by better engaging with these customers and, in so doing, how they can enhance the value of the two-sided business model.
Serving the Digital Generation focuses on why and how young people are adopting digital and communication technologies into their lives. By doing this, STL Partners can help Telco industry management better anticipate, and respond to, the main drivers and unmet needs of tomorrow’s Telco 2.0 customer. What we may regard as quirky segmented behaviour today (blogging, twittering, social networking, for example) is, in fact, mass consumer behaviour tomorrow. Here STL Partners gives an insight into mass-market behaviours for a new breed of customer, which will shape the future of the communications and media sectors.
This report explains this behaviour and explores how the desire to participate represents a new opportunity for Telco value creation. To realise this opportunity, we have developed a new framework for future product development and services, The Customer Participation Framework (CPF). Developed initially as a template for validating new service or application ideas, the CPF is a tool that can be used to support different phases of the product or service innovation process:
At concept initiation, to validate ideas against customer needs;
During the development and trial phase, to ensure usability issues are properly addressed;
In the execution phase, as a means of feedback iteration and a measurement of success.
The CPF framework can help operators increase the value of the Telco Value-Added Services platform and lead to entirely new ways of defining, evaluating, developing and marketing Telco services (retail) to both upstream service providers/partners and end users. We believe that The Customer Participation Framework represents an opportunity for operators to increase the value of their platforms and retail strategies and thus help to realise the $375 billion two-sided business opportunity outlined in the 2-Sided Telecoms Market Opportunity and the Future Broadband Business Model reports.
Who is this report for?
The report is for senior (CxO) decision-makers and business strategists, product managers, strategic sales, business development and marketing professionals acting in the following types of organisations:
Fixed & Mobile Operators – to set and drive product development and strategy.
Vendors & Business Partners – to understand customer need and develop winning customer propositions.
Regulators and Standards Bodies – to inform strategy and policy making.
Strategists and CxOs in IT and Investment Companies may also find this report useful to understand the future landscape of the Telecoms and related industries, and to help to spot likely winning and losing investment and operational strategies in the market.
Key Questions Answered
What is driving the behaviour of the digital generation and what does this segment value in products and services?
Which companies are best meeting the needs of these customers? What can operators learn from them?
What is the short and longer term benefit to operators of meeting these needs?
How should operators and vendors go about developing products and services that achieve this?
Background – The need for a new innovation process in telecoms
During the period of rapid growth when markets were emerging, the process of product or service development for Telcos was driven by a focus on network roll out, capacity issues, spectrum licences, supply chains, vendors, traffic forming, the regulatory environment and so on.
This was understandable. Uptake of Telco services was rapid and the challenges of meeting demand immense. Innovation was predominantly in hardware, which required long development cycles, massive investments and a stable regulatory environment. Everything was tested to destruction to ensure robustness and the ability to scale. The industry thrived, driven by some outstanding innovations in core networks, capacity handling etc.
Today, however, as markets mature and become saturated, this approach to innovation has run its course.
Increasingly, core propositions and networks are being commoditised and new services are being developed and delivered by others over the Telco infrastructure. Operators are under increased pressure to:
Hold onto market share (or put more negatively, prevent churn) as an overriding consideration. Operators strive to increase customer retention and ‘stickiness’ on existing core services;
Find new revenue streams – outside of the core personal communications services.
But building a stronger customer experience and innovating in new spheres requires a shift in focus from being Telco-centric to customer-centric. Placing end-user engagement and participation at the forefront of what Telcos do requires a cultural revolution.
It means a change in processes and the revaluation of core assets. This report focuses on what areas of innovation operators should seek to focus on in their existing retail operations, as well as the core enabling services that form a cornerstone of the future business models.
A move from Telco-centric to customer-centric innovation
A Framework for Future Service and Product Development
Kids and Communication
The Changing Contours of Childhood
Digital differences: Age, gender & nation
Making technology their own
The Research Process
We interviewed senior marketing and product development executives in a dozen operators to fully understand how the current innovation process is managed and what evaluation criteria are adopted when developing potential new propositions, products and services. This helped us to identify the shortcomings of current innovation approaches, rooted in a tradition of network deployment and subscriber acquisition.
For our other stream of research, we drew on the extensive body of existing industry and academic research into young people’s use of digital communications technology and their adoption of social software. We looked at what they are doing with technology and how adoption has occurred (including exploring nine case study examples).
120+ page manuscript document
This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this strategy Report that can be downloaded in full in PDF format by members of the executive Briefing Service here. To order or find out more please email firstname.lastname@example.org or call +44 (0) 207 247 5003.
NB A full PDF copy of this briefing can be downloaded here.
This special Executive Briefing report summarises the brainstorming output from the Technical Architecture 2.0 section of the 6th Telco 2.0 Executive Brainstorm, held on 6-7 May in Nice, France, with over 200 senior participants from across the Telecoms, Media and Technology sectors. See: www.telco2.net/event/may2009.
It forms part of our effort to stimulate a structured, ongoing debate within the context of our ‘Telco 2.0’ business model framework (see www.telco2research.com).
Each section of the Executive Brainstorm involved short stimulus presentations from leading figures in the industry, group brainstorming using our ‘Mindshare’ interactive technology and method, a panel discussion, and a vote on the best industry strategy for moving forward.
There are six other reports in this post-event series, covering the other sections of the event: Retail Services 2.0, Content Distribution 2.0, Enterprise Services 2.0, Piloting 2.0, Open APIs 2.0, and Devices 2.0. In addition there will be an overall ‘Executive Summary’ report highlighting the overall messages from the event.
Each report contains:
Our independent summary of some of the key points from the stimulus presentations
An analysis of the brainstorming output, including a large selection of verbatim comments
The ‘next steps’ vote by the participants
Our conclusions of the key lessons learnt and our suggestions for industry next steps.
The brainstorm method generated many questions in real-time. Some were covered at the event itself and others we have responded to in each report. In addition we have asked the presenters and other experts to respond to some more specific points.
Background to this report
The implementation of new ‘Two-Sided’ Telecoms Business Models has major consequences on telco network architecture. Perhaps most importantly, data from separate internal silos needs to be aggregated and synthesised to provide valuable information on a real-time basis. Key process interfaces that enable new services must be made available to external parties securely and on-demand. Network and IT functions must start collaborating and function as a single entity. Operators need to migrate to a workable architecture quickly and efficiently; vendors have to support this direction with relevant new product offerings and strategies.
What are the implications of adopting 2-sided business models on telco technical architecture?
What does the roadmap to a Telco 2.0 architecture look like?
As the network becomes more intelligent to support smart phones and App Stores, what are the most important investments for telcos?
What are the priority areas for transformation to enable new services?
Why are user profiles so important for telcos?
Stimulus Presenters and Panellists
Werner Vogels, CTO, Amazon.com
Richard Mishra, Director, Strategy and Standards, Amdocs
Alireza Mahmoodshahi, CTO, COLT
Paul Magelli, Head, Subscriber Data Management, Nokia Siemens Networks
Michel Burger, Head, Service Architecture, Vodafone Group
Thomas Rambold, CEO, DESS, Associate, Telco 2.0
Chris Barraclough, Managing Director, Telco 2.0 Initiative
Dean Bubley, Senior Associate, Telco 2.0 Initiative
Alex Harrowell, Analyst, Telco 2.0 Initiative
Stimulus Presentation Summaries
Technical Architecture 2.0
Thomas Rambold, Associate, Telco 2.0, presented on the end of clearly defined services – we now face thousands of segments, and no distinction between ‘just voice’ and ‘just data’. Broadband means that I can simultaneously be an Amazon user, a father, and many other things. Distinctions between enterprise and consumer services, private and public, have changed.
This implies much greater complexity in the customer relationships; the Over-The-Top (OTT) players have struggled to get their arms around the total relationships. Carriers have identified Fort Knox on the diagram (customer and network data management) as somewhere they can excel.
The walled garden is no longer sustainable. OTT offerings are already growing fast and, when they start using Telco APIs in earnest, this will accelerate. We need to link these services with customers – the carriers are the only actor capable of acting as a broker between the API users on the one hand, and the full suite of customer data (residing in ‘Fort Knox’) on the other. It is vital to be on-demand, if not necessarily truly real time. You can’t make people fill in forms to start a service – authentication, sign-on and billing and payment need to be automated.
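The automated authentication, sign-on and billing flow described above can be sketched as a small toy in Python. This is purely illustrative: all class and field names (`CustomerVault`, `ServiceBroker`, and so on) are our own invention, not a real operator API.

```python
# Illustrative sketch only: a minimal "broker" that lets a third-party
# service activate for a subscriber with no form-filling, because the
# network already knows who the subscriber is.
import secrets
import time

class CustomerVault:
    """Stand-in for the operator's 'Fort Knox' of customer data."""
    def __init__(self):
        self._profiles = {}          # msisdn -> profile dict
        self._billing_events = []    # (msisdn, service, amount, timestamp)

    def register(self, msisdn, profile):
        self._profiles[msisdn] = profile

    def profile(self, msisdn):
        return self._profiles[msisdn]

    def bill(self, msisdn, service, amount):
        self._billing_events.append((msisdn, service, amount, time.time()))

class ServiceBroker:
    """Brokers on-demand sign-on: authenticates the subscriber from network
    identity, issues a session token, and raises a billing event in one step."""
    def __init__(self, vault):
        self.vault = vault

    def activate(self, msisdn, service, price):
        profile = self.vault.profile(msisdn)   # no form: identity comes from the network
        token = secrets.token_hex(8)           # session token handed to the third party
        self.vault.bill(msisdn, service, price)
        return {"token": token, "language": profile.get("language", "en")}

vault = CustomerVault()
vault.register("+447700900001", {"language": "de", "segment": "consumer"})
broker = ServiceBroker(vault)
session = broker.activate("+447700900001", "music-store", 1.99)
print(session["language"])  # de
```

The point of the sketch is that activation, authentication and billing happen in a single on-demand call, which only the operator – sitting between the API user and the customer data – can provide.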
Paul Magelli, Head, Subscriber Data Management, NSN: “Would you be happy letting a Telco do data mining on you that you wouldn’t tolerate from the Government?”
Currently we are struggling to bridge the gap between the network and the OSS/BSS systems. We simply can’t get the on-demand response we need. So now we’re merging telco and IT organisations together, analysing the telco and IT environments, trying to get them to work together, in concert, and react faster.
The service delivery environment: a year ago, this would have been a big SDP from a traditional telco vendor. But now the services have moved out of the network – into the cloud, into mash-up environments, onto native applications on devices. It’s increasingly challenging to make these services useful and timely and to guarantee security and privacy across all these domains.
Service Value Management, Amdocs’ vision for next-generation services
Richard Mishra, Director, Strategy and Standards, Amdocs, said we are living in interesting times: We face sophisticated customers, exotic delivery platforms, and a recession! Perhaps we need to revisit core values, get back in the box and think more deeply. Revisit the core strengths and disciplines of operating a carrier-grade telecoms network (but not retain the bad attributes).
There is constant talk of service, even by the TM Forum back-office people, who are never seen from outside the Telco. But the next step after resource deployment is, inescapably, fulfilment. And with all this talk of service, what about the shareholders? Finally, having created the service, spent the capex, and deployed, you need assurance – monitoring service performance against the performance you offered the customers.
We’re working with ever-increasing interdependence between infrastructure, service, consumer and enterprise applications, and devices. This needs diagnostics and monitoring for all these levels, and careful management of the consumer experience.
Our Full Service Provider Integration Framework provides tools for continuous improvement as defined in the left half of eTOM; covered by contracts designed to match. In our deployment in Atlanta, we used this and special tools we developed for the carrier Ethernet network. We didn’t try to recreate the special capabilities there; but we did subject it to the traditional disciplines of managing a carrier network.
As a rule, Telcos no longer customise stuff; we can’t make our own SDH management tools any more, notably for reasons of intellectual property. So it’s now a question of assembling agile value chains from many other vendors, components and sources.
Building Trusted Relationships
Paul Magelli, Head, Subscriber Data Management, Nokia Siemens Networks said that customer data plays a key role in Telco 2.0…but does it exist? Is there enough available in our networks?
Nokia Siemens is working on the following assumptions:
1. A multitude of business models;
2. Broadband connectivity everywhere;
3. 5 billion subscribers;
4. Applications all migrated onto the Internet.
This implies that successful applications and services will be information- and subscriber-centric.
Richard Mishra, Head, OSS Strategy and Standards, Amdocs: “Data will become a treasured part of the business model.”
We really need rich profile information – profiles are what we say about ourselves. It’s clear that there is enough information, but can we get at it? 76% of respondents in our survey think it’s the most important issue; 86% think it’s important for network development.
Consider a use case in banking. To improve contact with customers, we need to know things like: is the customer available? Interested? Is it a good time to reach me? Is this the best way to reach me? Is this the right language? For this, we need the ability to do real-time subscriber profiling as well as historical data analysis.
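The availability-and-channel decision in this use case can be illustrated with a short sketch combining a real-time presence signal with historical profile data. The field names below are hypothetical assumptions for illustration, not NSN's actual data model.

```python
# Hypothetical sketch of the banking use case: decide whether and how
# to contact a customer, using real-time presence plus historical profile.
def best_contact(profile, presence):
    """Return (channel, language), or None if now is a bad time to call."""
    if not presence.get("reachable") or presence.get("in_call"):
        return None                      # not available right now
    hour = presence.get("local_hour", 12)
    if not 9 <= hour <= 20:              # preferred hours from historical data
        return None
    channel = profile.get("preferred_channel", "sms")
    return channel, profile.get("language", "en")

profile = {"preferred_channel": "push", "language": "fr"}
print(best_contact(profile, {"reachable": True, "local_hour": 14}))  # ('push', 'fr')
print(best_contact(profile, {"reachable": True, "in_call": True}))   # None
```

Even this toy needs both inputs: the presence data must be real-time (is the customer in a call right now?), while the channel and language preferences come from historical analysis.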
But it’s more complicated than that. Current data isn’t enough – time-series data is really important. It is surprising that operators recognise this but have done very little to solve it: only 14% have real-time data analysis. Identity, too, is complicated – people have multiple devices, multiple SIMs, and multiple identities. Privacy is another big issue. Permission is frequently abused; there is a huge generation gap in attitudes to what constitutes privacy, and legislation varies widely between jurisdictions (and usually lags the market).
If we could provide a single point of access for managing your identity…huge opportunities await. But it’s crucial to resolve the privacy issue by giving customers control of their own data.
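A minimal sketch of that single point of access, assuming a hypothetical consent model in which the customer explicitly grants each third party access to individual profile fields (names and structure are ours, not a vendor's):

```python
# Sketch of "customers control their own data": every third-party read
# of the profile must pass a customer-granted permission check.
class IdentityBroker:
    def __init__(self):
        self._profiles = {}   # user -> {field: value}
        self._grants = {}     # (user, party) -> set of permitted fields

    def store(self, user, **fields):
        self._profiles.setdefault(user, {}).update(fields)

    def grant(self, user, party, *fields):
        """Customer grants `party` access to the named fields."""
        self._grants.setdefault((user, party), set()).update(fields)

    def revoke(self, user, party):
        """Customer withdraws all consent for `party` in one call."""
        self._grants.pop((user, party), None)

    def read(self, user, party, field):
        if field not in self._grants.get((user, party), set()):
            raise PermissionError(f"{party} has no consent for {field}")
        return self._profiles[user][field]

broker = IdentityBroker()
broker.store("alice", age_band="25-34", city="London")
broker.grant("alice", "advertiser-x", "age_band")
print(broker.read("alice", "advertiser-x", "age_band"))  # 25-34
```

The design choice that matters is that consent is per-party and per-field, and revocable at any time – the precondition the session identified for customers trusting any such single point of access.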
Building the business-grade cloud
Alireza Mahmoodshahi, CTO, COLT, asked: how much do our customers know about cloud computing? Not much, he said – we’ve done well in clouding their minds. He gave a short briefing on COLT: its origins in the City of London, its European fibre network, its large enterprise customers.
There is a framework for ICT: the Telco at the bottom, providing dark fibre, bulk data and voice. Then, above that, data centres and IT infrastructure such as hosting, co-location, and network operation. Then a vast range of task-specific applications runs on top of that.
Not many operators are enamoured with their position at the bottom of the stack…
…on the other hand, it’s hard to find anyone who can replace the things that Telcos can do. Most clouds don’t offer any kind of SLA, so critical transactional services can’t use them. Traditional clouds are on the left of the diagram, similar to IT infrastructure. Operators don’t want to be penned into the low-value bottom right: they need to push up and left to escape. Meanwhile, clouds tend to have no SLA for the segment from the cloud to the end-user, and not necessarily between the enterprise and the cloud: there are too many participants to provide an SLA covering the whole thing. So the opportunity for the Telco cloud is to provide end-to-end SLAs.
The crucial development to enable business-grade clouds is to virtualise all elements of the system. Then, control priority for applications running on them. This is achieved by queuing them through a COLT-patented policy scheduler; this despatches tasks to a pool of virtualised servers, themselves providing a pool of threads.
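COLT's policy scheduler is patented and its internals were not disclosed, but the general idea – queue tasks by policy priority, then despatch them in order to a pool of virtualised workers – can be sketched as follows (an assumption-laden toy, not COLT's design):

```python
# Illustrative priority-based despatcher: SLA-critical tasks (lower
# priority number) are run before best-effort ones.
import heapq

class PolicyScheduler:
    """Queues tasks by policy priority; lower number = more critical."""
    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker keeps FIFO order within a priority level

    def submit(self, task, priority):
        heapq.heappush(self._heap, (priority, self._seq, task))
        self._seq += 1

    def run_all(self):
        """Despatch queued tasks in priority order; return their results."""
        results = []
        while self._heap:
            _, _, task = heapq.heappop(self._heap)
            results.append(task())
        return results

sched = PolicyScheduler()
sched.submit(lambda: "bulk-backup", priority=9)    # best-effort workload
sched.submit(lambda: "trading-feed", priority=1)   # SLA-critical workload
print(sched.run_all())  # ['trading-feed', 'bulk-backup']
```

In a real deployment each popped task would be handed to a virtualised server from a pool rather than run inline; the point here is only the policy-ordered despatch.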
Then we set up the API/SDK third-party access to the platform to empower the applications developers. If you want to do this you need to control QoS and also application-layer dispatching across the entire system; carriers are probably the only actors who can offer this.
The fragmented and sometimes opposing views of the audience in the feedback (and voting) are not new or surprising. They are a key indication for service providers and operators that changes are overdue and must be taken seriously. They also demonstrate the complexity of current siloed approaches, and the inability of any single company to change this. Despite this complexity, operators will have to focus investment on strategically important projects, particularly during these difficult financial times. Such an approach will give them the chance to reduce complexity and operating costs. Most importantly, it will enable operators to produce agile technical architectures with the flexibility required to meet customer demands.
Feedback: grounds for optimism
The technical architecture session produced significant optimism from the audience…
· Promise of new future. [#5]
· Some real applications of Telco 2.0 model. [#6]
· Good ideas for large enterprise and government segments. [#12]
· Triggers my thinking and confirms my assumption. [#15]
Feedback: …and cynicism
…albeit tempered with a degree of cynicism about the validity of the examples cited.
· We don’t know when we will get there. [#9]
· Still industry jargon driven. [#17]
· All seemed to be a bit ‘defensive’ of legacy models and were more oriented to Telco 1.0. [#18]
o Re 18 good point. 90% of large carrier environments could not move to Telco 2.0 due to restrictive contracting structures with their existing OSS vendors. To add to the pain, they usually don’t have their source code so they don’t really control their destiny (especially if outsourced). [#43]
· Mostly still talk, no examples on the ground yet. [#20]
· Still very much 1.0 and technology driven, to view user and real new business model. [#44]
· Unfortunately human beings are not compliant with marketers’ Telco use cases ;-). [#46]
o 46; Telco use cases are uninformed by real data. [#49]
· Would it be better to define what to do and where to go, before arguing potential obstacles and regulatory matters, before the idea has started? [#62]
Feedback: Customer data issues
In particular, the subscriber data management model seems promising – if trust issues can be resolved. However, there were a lot of questions wondering whether Telcos’ use of customer data would be as accomplished – or as consumer-friendly – as that of Internet-based players such as Amazon. There also appears to be a debate brewing over whether or not Telcos’ customer data is “better” than that of web players.
· Single customer data ‘vault’ from Nokia/Siemens is a great idea; doubtful any carrier brand has the consumer trust to pull this off-more likely for VeriSign, Symantec or someone else. [#25]
o 25 – I agree. There also needs to be reciprocity. I want to be able to authenticate on an operator network with my Facebook ID or similar – or with another operator’s identity. [#33]
· Very illustrative presentation about profile, profiling and identity. [#39]
· Nokia Siemens idea seems good, but how can they guarantee that the customer data go to a trustworthy person? The bank manager of today could be the disgruntled laid-off employee of tomorrow selling the data illegally to somebody else. For my part, I rather trust my voice mail box for everybody to leave a message. [#48]
o 48 no business has succeeded by arguing that innovation was dangerous to its established business paradigm. [#51]
· Give customers even little incentives and they willingly hand over their data. [#63]
· In France this week a guy spent a night in jail for receiving an SMS whose content was considered a ‘security threat’ …trust????? [#60]
· Any examples of where operators have offered services using the subscriber profile mgt capabilities discussed by NSN? [#10]
· In the NSN subscriber profile example, to what extent is the subscriber data exposed to 3rd parties, and by which standardised approach? [#35]
· For the NSN concept, do we first need a regulatory framework for ‘identity portability’ so that we can churn ID providers? [#37]
· Why as a user can I not hold my data and share it with who I want, if it is so valuable why don’t I charge for it as well? [#47]
o Re: 47 THAT IS THE IDEA! The idea of the 2 sided biz model (IMHO) is that your data is valuable. You should sell it by allowing the operator to monetize it with 3rd parties then ‘pay’ you through reduced service charges and greater service coverage/offerings. [#55]
o 47, in a sense you do. You get better products on Amazon by giving product feedback; on Blyk you get free minutes and data in exchange for insight and data. Payment from users is not always monetary (it is sometimes data and information) and, likewise, payment to users for their ‘data’ will not be monetary but through increased value. [#61]
· Further to Werner’s question of who owns the data — who do operators give access to sensitive data? Would customers like it if they know that the carrier may allow their BSS vendors to manage call information or data event records? Do carriers have access to the source code to verify integrity of mission critical apps? [#52]
· Private information is valuable. Why not share the value of private data with the consumer? [#56]
· Today – what percentage of the data Amazon collects is used to improve user experience and increase engagement + purchasing? [#59]
· Access to data for Telco 2.0 is regional/legal/societal dependent. Operators need a good PR effort to allow opt in for customers to fuel the two sided revenue model. [#66]
· Maybe there could be a bureau operator that can abstract the customer and all related data on behalf of the operators … where people that are willing to be Telco2.0 involved can opt in. [#68]
· Bravo Werner. Telcos need to think about the customer first. What Telcos facilitate the option for customers to send data to 3rd party for monetization opportunities? [#69]
· Isn’t Telco 3.0 a free connection to the network in exchange for shared consumer behaviour data? [#71]
o 71 could see Google trying this in 5 years time if the cost of Telco network is driven down sufficiently. [#75]
· People are scared by poor data sunk into Telcos that they have no control over. When is data opening up to cleansing going to happen? [#73]
o Re: 73 the data is already there. [#76]
Feedback: the cloud
There was plenty of interest in the COLT presentation on the Cloud…. but also plenty of cynicism again
· Colt just showed what Telco 2.0 is all about. That model enables innovation thru cost reduction and state of the art architectures. Why don’t operators run their back office in a similar approach? [#13]
· Is Colt’s cloud real yet, and has COLT sold anything yet? [#16]
· Bravo Colt. If it exists it is a model as it should be. They sign up for the SLA and they lay forth a bed of enablement. [#19]
· If Colt has that architecture in operation, they should be running the back office for other Telco operators. [#34]
· Relative to Colt’s view – what about Salesforce.com who seems to have pulled this off on their own. [#29]
· Does COLT use its own cloud services? [#38]
· Where is the incentive for existing OSS vendors, who have long-term managed services contracts, to innovate and move to Telco 2.0 architectures (e.g. SaaS) like Colt’s? [#28]
Feedback: COLT vs Amazon
… although whether COLT’s vision (or Amdocs’) is as advanced as that presented by Amazon was doubted
· Amazon ‘eats’ their own cooking (their cloud) and COLT? [#14]
· What is Colt’s business model (price structure) and how does this differ from Amazon’s? [#8]
· How does Werner really feel about legacy telecom operator back office Telco environments? [#30]
· How does Amdocs view Colt’s model? [#23]
· How would Amdocs view Colt and Amazon models? Seems much difference with their approach. [#7]
Feedback: Carrier-grade – what is it worth?
But do end users really think “carrier grade” is as important as the operators do? Will they pay a premium for it?
· I’ll believe in ‘carrier grade’ when enterprises can get SLAs for mobile coverage. [#45]
· Is carrier grade not shifting now that many users have several alternative means to perform a communication? [#57]
· Seems the industry still has major problems in freeing itself from network drives. [#22]
· Carrier interoperability and collaboration could solve many network issues. [#24]
· From what I can gather there has been talk about all this for some time. Based on this the Telcos have failed to transform while other companies have. Surely Telcos will change too late to take any true advantage of the possible opportunities that exist today and other companies will take advantage. Why don’t Telcos focus on their core competence – the network? [#36]
o 36, for the simple reason that the value of the network itself is in continuous decline. Got to look elsewhere – make money because they own network not through the network itself. [#42]
· These arguments are the same ‘net head’ vs. ‘bell head’ arguments that occurred in the late 90’s. How do we resolve the issues? Telcos clearly are behind and are insisting on making everything ‘carrier grade’ at extraordinary costs? Doesn’t seem very 2.0, does it? [#41]
Feedback: crucial enablers
There were also various other comments about the enablers of the two-sided platform, and the consequences of personal device proliferation:
· Cascading SLA’s is old stuff, nothing new. [#50]
· XaaS is very valid a concept, and so is cloud computing/storage/network, but these are two different things – XaaS may or may not be using cloud principles and technologies. [#65]
· Simplification of capabilities available for third parties. [#27]
· What is the highest priority issue for Telco 2.0 enabling architectures that does not exist today? [#21]
· OPEN standards vs. licensed software? [#32]
· 5B users… 50B devices… customer data from all these!! [#26]
· How many people have more than one mobile device? 5 is extreme. [#31]
o re31 – in Europe, probably about 1.4 mobile devices per person, and >2 in Italy. Verizon in the US has suggested that ultimately 4+ is not implausible. [#40]
Participants’ “Next Steps” Vote
Participants were asked: “How well does current industry activity around technical architectures support the development of Telco 2.0 business models?”
Very well – the industry is shaping up well for delivery and new business models.
Good start but more needs to be done – major building blocks are missing.
Lost cause – the industry will never deploy the capabilities for new business models.
Lessons learnt & next steps
Since the development of broadband access, the Internet world has recognised that customers can have many, dramatically different roles and attributes, needing specific functionality, preferences, and user profiles. Operators are in a unique position in that they have a fuller picture of customers than any single website or retailer or service provider. Several have already recognised this, and a number of vendors are offering scalable platforms which claim to be in line with the current EU legislation on data protection.
Marc Davis, Chief Scientist, Yahoo! Mobile: “Data is to the information economy as money is to the economy. But there is a missing infrastructure – because there’s no user interface for this data. And what is the equivalent of a bank for this data – who looks after it?”
But as well as user profile data, the 2-sided business model requires on-demand response from the network infrastructure. It will not matter whether it is the network or OSS/BSS/IT element that is breaking down – customers won’t care, they will just find the situation unacceptable. Both the network and IT elements must work together to deliver this. Operators are moving in that direction organisationally and structurally.
Telco 2.0 expects that this will result in new implementations of control & monitoring systems such as Resource & Service Control Systems (RSC). As services are the key business drivers, the opening up of the walled gardens is changing service delivery platforms quite rapidly: most new applications are centred on app stores, mash-up environments, XaaS environments, smartphone Web browsers, and so on, which do not demand a traditional SDP or SDF. In addition, enabling services are becoming an essential element in operators’ core products. These enabling services will, in the future, allow operators to monetize their network assets.
These enabling services need a framework which is highly flexible, agile and responsive, and integrated with the features defined by NGMN. While not all of these points are implemented yet, there is increasing understanding among operators, upstream service providers, and regulators that this new phase, opened up through the 2-sided business model, represents a historic opportunity for all members of this ecosystem.
Marc Davis, Chief Scientist, Yahoo! Mobile: ”What if we had new, industry standard terms of service under which users owned their data?”
Before the technical details can be finalised, of course, business models need to be scoped. However, the major technical areas discussed above are focal points for technology development. In the short term, Telcos should:
Build up a logical semantic database as preparation for database integration;
Include migration from 2G and 3G and backwards compatibility in LTE tenders;
Prepare a user profile database;
Reduce the number of OSS/BSS systems;
Develop real-time responsiveness in OSS/BSS systems;
Separate the control and data planes, separate services from transport;
Implement and deploy an RSC system as a multivendor abstraction layer.
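The last of these steps – an RSC system acting as a multivendor abstraction layer – might look roughly like this sketch, in which operator code programs against one interface and per-vendor adapters translate to each vendor's own provisioning calls. All class names and vendor behaviours are hypothetical.

```python
# Hedged sketch of an RSC as a multivendor abstraction layer.
from abc import ABC, abstractmethod

class ResourceController(ABC):
    """The single interface the OSS/BSS layer programs against."""
    @abstractmethod
    def set_bandwidth(self, subscriber: str, mbps: int) -> str: ...

class VendorAAdapter(ResourceController):
    def set_bandwidth(self, subscriber, mbps):
        # would call vendor A's proprietary provisioning API here
        return f"A:{subscriber}@{mbps}Mbps"

class VendorBAdapter(ResourceController):
    def set_bandwidth(self, subscriber, mbps):
        # vendor B's kit works in kbps, say; the adapter hides that
        return f"B:{subscriber}@{mbps * 1000}kbps"

class RSC:
    """Routes each request to whichever vendor's equipment serves the subscriber."""
    def __init__(self, homes):
        self._homes = homes   # subscriber -> adapter

    def set_bandwidth(self, subscriber, mbps):
        return self._homes[subscriber].set_bandwidth(subscriber, mbps)

rsc = RSC({"sub1": VendorAAdapter(), "sub2": VendorBAdapter()})
print(rsc.set_bandwidth("sub1", 50))   # A:sub1@50Mbps
print(rsc.set_bandwidth("sub2", 20))   # B:sub2@20000kbps
```

The value of the pattern is that adding a third vendor means writing one more adapter, not changing any OSS/BSS code – which is what makes the layer "multivendor".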
In the longer term, operators will need to:
Integrate the network and IT elements of the on-demand infrastructure;
Set up a full user profile with privacy protection and more granular information;
Integrate provisioning, activation, network and bandwidth management, and policy enforcement;
Recognise that Web-based service environments will overtake the SDP;
Develop a collaborative approach to multi-vendor app stores.
Options and Opportunities for Distributors in a time of massive disruption
Summary: As online video challenges traditional distribution models, both old and new suppliers are pushing into the value chain in the hope of grabbing a share of the emerging global market. But how will the market develop and which companies will be the ultimate winners?
STL Partners has analysed the potential of online video, identified possible market winners and losers, and set out three interlocking scenarios depicting the evolution of the market. In each scenario, the role of distributors is examined, possible threats and opportunities revealed, and strategic options are discussed. (March 2009)
To share this article easily, please click:
This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this strategy report, which can be downloaded in full in PDF format by members of the Executive Briefing Service here.
For more on any of these services, please email email@example.com / call +44 (0) 207 247 5003.
Market background, size and dynamics
Differences in, and lessons from, different geographies
Analysis of prospects by content type: movies, sport, music, adult and user-generated
Hulu Vs YouTube: Comparative business model analysis
Market forecasts for revenues related to online and mobile video
Evolving market scenarios
Positioning to maintain / develop advantages in scenarios
Recommends specific short-, medium- and long-term actions for moving forward
Who is this report for?
The study is an invaluable guide to managers across the TV and video value chain who are seeking insight into how the online market will develop and the opportunities and threats it presents.
CxOs, Strategists, Product Managers, Investors, and Operational Managers in Telecoms Operators, Broadband Service Providers and ISPs, Media Companies, Content Aggregators and Creators.
Key Questions Answered
How will the online video market develop and what are the implications for value chain players?
Are there historical lessons (from cinema and TV) from which to learn?
Which content categories will be most affected by the shift online?
What is the best strategy for distributors and aggregators to maximise chances of success?
Background – Online Video: the Growing Bulge in the Fat Pipe
All recent data point towards video being the fastest growing segment of all internet traffic and the trend looks set to continue for the foreseeable future. This is true whichever metric is used: absolute number of viewers, total time spent viewing, data traffic volumes.
Growth is not limited to any one content category: adult, sports, movies and music are all rapidly moving online. The internet has also led to a completely new category, User-Generated Content – home movies have moved out of the privacy of the living room and are becoming more and more professional.
Growth is also not limited to a specific geography: the movement online is a worldwide phenomenon. The internet has no respect for traditional geographies and boundaries.
Overall, the evidence points towards a future where the internet will be a critical distribution channel for all forms of video.
The New Distribution is disruptive and no longer centrally controlled
Innovation in Video Distribution is nothing new and over the last century we have seen cinema, broadcast networks and physical media creating temporary shocks to older methods of distributing content – but the older methods survive.
However, there is only a certain amount of time in the day available for entertainment in general and watching video specifically. Legacy distribution channels are understandably worried about whether video online will be additive to or cannibalise their audiences, and our survey respondents largely share this view.
More Growth + Less Control = More Unpredictability
Positively, individuals have generated their own content and made it available to the world. Negatively, some individuals have used interactivity to distribute content without regard for the rights of the copyright holders. Copyright holders have struggled to enforce their rights. Illegal distribution of content not only threatens the absolute value of content, but has led to unpopular and complicated mechanisms to protect content.
The absolute volume growth has also placed the internet access providers under severe strain: attempting to increase prices to compensate for the growth in traffic and gain extra revenue through developing additional services is proving very difficult.
These forces have generated a considerable amount of experimentation in the market especially in the area of pricing models: subscription, pay-as-you-go, advertising funded, bundles with other distribution channels and offset/subsidy – all exist in a variety of forms.
How & why is the current model broken?
The net result is that the video market is in a state of flux and increasing tension as key players explore their positions. Will order emerge from the chaos? What form will this new order take? What will be the impact on the existing players in the video value chain? And will powerful new players emerge?
How can it be fixed?
We believe that Video Distribution on the internet will reshape the value chain and the current forces point towards great uncertainty in the short term. In these circumstances, the key step is to explore possible future scenarios to assess their viability and robustness in the face of change.
Case Studies, Companies and Services, and Technologies & Applications Covered
Case Studies: Apple, Hulu, Phreadz, YouTube.
Companies and Organisations Covered: 3 UK, AllOfMP3.com, Amazon, AOL Music, Apple, Babelgum, Barnes & Noble, BBC, BBC iPlayer, Bebo, Bit Torrent, Black Arrow, BlipTV, Blockbuster, BT, BT Openreach, BT Vision, Comscore, Del.icio.us, Deutsche Telecom, Deutsches Forschungsnetz (DFN), Diggnation, Digital Entertainment Content Ecosystem (DECE), eMarketer, EMI, European Union, Eurosat, Facebook, Flickr, Forbes, Frost & Sullivan, Gartner, Google, Hanaro, Hitwise, Hulu, iBall, IBM, Imagenio, International Movie Database (IMDB), Joost, KDDI, Korea Times, KT+A94, Lenovo, London Business School, MGM, Mobilkom Austria, Mobuzz, MP3Sparks, MSN Music, MTV, MySpace, Napster, National Information Society Agency (NISA), NBC, Net Asia Research, Netflix, NewTeeVee, NicoNicoDouga, Nielsen SoundScan, Nintendo, Now, NTT DoCoMo, Ofcom, Orange, Phorm, Phreadz, Powercomm, Qik, Recording Industry Association of America (RIAA), Revision 3, Screen Digest, Seesmic, Seskimo, Silicon Valley Insider, Sky, Softbank, Sony, The Guardian, T-Mobile, Tremor Media, UK Football Premier League, Verizon, Video Egg, Virgin Media, Vivid, Walmart, Web Marketing Guide, Wikipedia, World Intellectual Property Organisation (WIPO), Yahoo, YouPorn, YouTube.
Technologies & Applications Covered: 3G, 3GP, AAC, Adobe Flash, AMR, Android, Apple Quicktime, Apple TV, AVI, Batrest, BBC iPlayer, Beacon, Betamax, Broadband, CD, Cinema, DivX, DOCSIS 2.0, DOCSIS 3.0, DRM, DSL, DVD, Ethernet to the home, Fibre to the home, Final Cut HD/Pro/Studio, FLV, FON WLAN, Fring, GIF, H.264, H.264/AVC, HSDPA, iDVD, iMovie, Iobi, IP, iPhone, iPod, IPTV, iTunes, JPEG, Linux, MOV, MP3, MP4, MPEG, MPEG-2 SD, MPEG-4, NVOD, OGG, P2P, PAL, PNG, PopTab, RM, RMVB, Scopitones, Sky +, Slingbox, Soundies, TiVo, TV, VCR, VHS, Video over IP, VOB, VOD, WiFi, W-LAN, WMV, XviD.
Markets Covered and Forecasts Included
Markets Covered: Global, US, Canada, UK, France, Germany, Italy, Hungary, Spain, Sweden, Finland, Japan, South Korea.
Forecasts Included: Online Video Vs Cinema & TV 2012, Global TV, Video and Cinema to 2018, Online Video Subscription and Advertising Revenues, Pro-Tail content advertising forecasts, Mobile TV and Video 2013.
Summary of Contents
Part 1: Online video – the situation today
Part 2: Future scenarios
Part 3: Evolution of specific media genres
Part 4: Mobile evolution
Part 5: Geographical differences
The Research Process
The research evaluates the likelihood of three scenarios – Old Order Restored, Pirate World and New Players Emerge – each of which paints a picture of the future entertainment industry in terms of technology developments, consumer behaviour, and service uptake and usage.
The research is based on comprehensive literature reviews, industry research and interviews with key staff from relevant organizations that shed insight on the needs and dynamics of the key players. Key Case Studies bring the story to life and provide a context for both successes and failures. An economic model of the resultant value chain is produced for each of the scenarios with analytical commentary.
130+ page manuscript document
This report is now available to members of our Telco 2.0 Research Executive Briefing Service. Below is an introductory extract and list of contents from this strategy report, which can be downloaded in full in PDF format by members of the Executive Briefing Service here. To order or find out more please email firstname.lastname@example.org, call +44 (0) 207 247 5003.