How to build an open source telco – and why?

If you don’t subscribe to our research yet, you can download the free report as part of our sample report series.

Introduction: Why an open source telecom?

Commercial pressures and technological opportunities

For telcos in many markets, declining revenues are a harsh reality. Price competition is placing telcos under pressure to reduce capital spending and operating costs.

At the same time, from a technological point of view, the rise of cloud-based solutions has raised the possibility of re-engineering telco operations to run on virtualised, open source software and low-cost, general-purpose hardware.

Indeed, rather than pursuing the traditional technological model, i.e. licensing proprietary solutions from the mainstream telecoms vendors (e.g. Ericsson, Huawei, Amdocs, etc.), telcos can increasingly:

  1. Progressively outsource the entire technological infrastructure to a vendor;
  2. Acquire software with programmability and openness features: application programming interfaces (APIs) can make it easier to program telecommunications infrastructure.

The second option promises to enable telcos to achieve their long-standing goals of decreasing the time-to-market of new solutions, while further reducing their dependence on vendors.

Greater adoption of general IT-based tools and solutions also:

  • Allows flexibility in using the existing infrastructure
  • Optimises and reuses existing resources
  • Enables integration between operations and the network
  • Offers the possibility to make greater use of the data that telcos have traditionally collected for the purpose of providing communications services.


In an increasingly squeezed commercial context, the licensing fees charged by traditional vendors for telecommunications solutions start to look unsustainable, and the lack of flexibility poses serious issues for operators looking to move towards a more modern infrastructure. Moreover, the potential availability of competitive open source solutions provides an alternative that challenges the traditional model of making large investments in proprietary software and depending on a small number of vendors.

Established telecommunications vendors, and aggressive new entrants, may also propose new business models (e.g. shared investment, partnerships and the like), which could be attractive for some telcos.

In any case, operators should explore and evaluate the possibility of moving forward with a new approach based on the extensive usage of open source software.

This report builds on STL Partners’ 2015 report, The Open Source Telco: Taking Control of Destiny, which looked at how widespread use of open source software is an important enabler of agility and innovation in many of the world’s leading internet and IT players. Yet while many telcos then said they craved agility, only a minority were using open source to best effect.

In that 2015 report, we examined the barriers and drivers, and outlined six steps for telcos to safely embrace this key enabler of transformation and innovation:

  1. Increase usage of open source software: Overall, operators should look to increase their usage of open source software across their entire organisation due to its numerous strengths. It must, therefore, be consistently and fairly evaluated alongside proprietary alternatives. However, open source software also has disadvantages, dependencies, and hidden costs (such as internally-resourced maintenance and support), so it should not be considered an end in itself.
  2. Increase contributions to open source initiatives: Operators should also look to increase their level of contribution to open source initiatives so that they can both push key industry initiatives forward (e.g. OPNFV and NFV) and have more influence over the direction these take.
  3. Associate open source with wider transformation efforts: Successful open source adoption is both an enabler and symptom of operators’ broader transformation efforts, and should be recognised as such. It is more than simply a ‘technical fix’.
  4. Bring in new skills: To make effective use of open source software, operators need to acquire new software development skills and resources – likely from outside the telecoms industry.
  5. … but bring the whole organisation along too: Employees across numerous functional areas (not just IT), as well as senior management, need to have experience with, or an understanding of, open source software. This should ideally be managed by a dedicated team.
  6. New organisational processes: Specific changes also need to be made in certain functional areas, such as procurement, legal, marketing, compliance and risk management, so that their processes can effectively support increased open source software adoption.

This report goes beyond those recommendations to explore the changing models of IT delivery open to telcos and how they could go about adopting open source solutions. In particular, it outlines the different implementation phases required to build an open source telco, before considering two scenarios – the greenfield model and the brownfield model. The final section of the report draws conclusions and makes recommendations.

Why choose to build an open source telecom now?

Since STL Partners published its first report on open source software in telecoms in 2015, the case for embracing open source software has strengthened further. There are three broad trends that are creating a favourable market context for open source software.

Digitisation – the transition to providing products and services via digital channels and media. This may sometimes involve the delivery of the product, such as music, movies and books, in a digital form, rather than a physical form.

Virtualisation – executing software on virtualised platforms running on general-purpose hardware located in the cloud, rather than on purpose-built hardware on premises. Virtualisation allows better use of large servers by decoupling services from dedicated servers. Moreover, cloudification of these services means they can be made available to any connected device on a full-time basis.

Softwarisation – the redefinition of products and services through software. This is an extension of digitisation: for example, the digitisation of music has allowed the creation of new services and propositions (e.g. Spotify). The same goes for the movie industry (e.g. Netflix), and the transformation of the book industry (e.g. ebooks) and newspapers. This paradigm is based on:

  • The ability to digitise information (the transformation of an analogue signal into a digital one).
  • Availability of large software platforms offering relevant processing, storage and communications capabilities.
  • The definition of open and reusable application programming interfaces (APIs) which allow processes formerly ‘trapped’ within proprietary systems to be managed or enhanced with other information and by other systems.

These three features have started a revolution that is transforming other industries: travel agencies (e.g. Booking.com), hotels (e.g. Airbnb), and taxis (e.g. Uber). Softwarisation is also now impacting other traditional industries, such as manufacturing (e.g. Industry 4.0) and, of course, telecommunications.

Softwarisation in telecommunications amounts to the use of virtualisation, cloud computing, open APIs and programmable communication resources to transform the current network architecture. Software is playing a key role in enabling new services and functions, better customer experience, leaner and faster processes, faster introduction of innovation, and usually lower costs and prices. The softwarisation trend is very apparent in the widespread interest in two emerging technologies: network function virtualization (NFV) and software defined networking (SDN).

The likely impact of this technological transformation is huge: flexibility in service delivery, cost reduction, quicker time to market, higher personalisation of services and solutions, differentiation from competition and more. We have outlined some key telco NFV/SDN strategies in the report Telco NFV & SDN Deployment Strategies: Six Emerging Segments.

What is open source software?

A generally accepted open source definition is difficult to achieve because of different perspectives and some philosophical differences within the open source community.

One of the most high-profile definitions is that of the Open Source Initiative, which states the need to have access to the source code, the possibility to modify and redistribute it, and non-discriminatory clauses against persons, groups or ‘fields of endeavour’ (for instance, usage for commercial versus academic purposes) and others.

For the purpose of this report, STL defines open source software as follows:

▪ Open source software is a specific type of software for which the original source code is made freely available and may be redistributed and modified. This software is usually made available and maintained by specialised communities of developers that support new versions and ensure some form of backward compatibility.

Open source can help to enable softwarisation. For example, it played a major part in moving the web server sector from proprietary solutions to a common software platform (known as LAMP) based on the Linux operating system, the Apache HTTP Server, the MySQL database and the PHP programming language. All of these components are made available as open source, which essentially means that people can freely acquire the source code, modify it and use it. Depending on the licence, modifications and improvements may also need to be contributed back to the development community.

One of the earliest and most high profile examples of open source software was the Linux operating system, a Unix-like operating system developed under the model of free and open source software development and distribution.

Open source for telecoms: Benefits and barriers

The benefits of using open source for telecoms

As discussed in our earlier report, The Open Source Telco: Taking Control of Destiny, the adoption and usage of open source solutions are being driven by business and technological needs. Ideally, the adoption and exploitation of open source will be part of a broader transformation programme designed to deliver the specific operator’s strategic goals.

Operators implementing open source solutions today tend to do so in conjunction with the deployment of network function virtualization (NFV) and software defined networking (SDN), which will play an important role for the definition and consolidation of the future 5G architectures.

However, as Figure 1 shows, transformation programmes can face formidable obstacles, particularly where a cultural change and new skills are required.

Benefits of transformation and related obstacles

The following strategic forces are driving interest in open source approaches among telecoms operators:

Reduce infrastructure costs. Telcos naturally want to minimise investment in new technologies and reduce infrastructure maintenance costs. Open source solutions seem to provide a way to do this by reducing license fees paid to solution vendors under the traditional software procurement model. As open source software usually runs on general-purpose hardware, it could also cut the capital and maintenance costs of the telco’s computing infrastructure. In addition, the current trend towards virtualisation and SDN should enable a shift to more programmable and flexible communications platforms. Today, open source solutions are primarily addressing the core network (e.g., virtualisation of evolved packet core), which accounts for a fraction of the investment made in the access infrastructure (fibre deployment, antenna installation, and so forth). However, in time open source solutions could also play a major role in the access network (e.g., open base stations and others): an agile and well-formed software architecture should make it possible to progressively introduce new software-based solutions into access infrastructure.

Mitigate vendor lock-in. Major vendors have been the traditional enablers of new services and new network deployments. Moreover, to minimise risks, telco managers tend to prefer to adopt consolidated solutions from a single vendor. This approach has several consequences:

  • Telcos don’t tend to introduce innovative new solutions developed in-house.
  • As a result, the network is not fully leveraged as a differentiator, and effectively becomes the care and responsibility of a vendor.
  • The internal innovation capabilities of a telco have effectively been displaced in favour of those of the vendor.

This has led to the “ossification” of much telecoms infrastructure and the inability to deliver differentiated offerings that can’t easily be replicated by competitors. Introducing open source solutions could be a means to lessen telcos’ dependence on specific vendors and increase internal innovation capabilities.

Enabling new services. The new services telcos introduce in their networks are essentially the same across many operators, because the developers of these new services and features are a small set of consolidated vendors that offer the same portfolio to the whole industry. However, a programmable platform could enable a telco to govern and orchestrate its network resources and become the “master of the service”, i.e., the operator could quickly create, customise and personalise new functions and services in an independent way and offer them to its customers. This capability could help telcos enter adjacent markets, such as entertainment and financial services, as well as defend their core communications and connectivity markets. In essence, employing an open source platform could give a telco a competitive advantage.

Faster innovation cycles. Relying on a vendor ties a telco to that vendor’s roadmap and schedule, and to the obsolescence and substitution cycles of existing technologies. The use of outdated technologies has a huge impact on a telco’s ability to offer new solutions in a timely fashion. An open source approach offers the possibility to upgrade and improve the existing platform (or to move to entirely new technologies) without too many constraints imposed by the “reference vendor”. This ability could be essential to acquiring and maintaining a technological advantage over competitors. Telcos need to clearly identify the benefits of this change, which represent the reasons, the “why”, for softwarisation.

Complete contents of the How to build an open source telco report:

  • Executive Summary
  • Introduction: why open source?
  • Commercial pressures and technological opportunities
  • Open Source: Why Now?
  • What is open source software?
  • Open source: benefits and barriers
  • The benefits of using open source
  • Overcoming the barriers to using open source
  • Choosing the right path to open source
  • Selecting the right IT delivery model
  • Choosing the right model for the right scenario
  • Weighing the cost of open source
  • Which telcos are using open source today?
  • How can you build an open source telco?
  • Greenfield model
  • Brownfield model
  • Conclusions and recommendations
  • Controversial and challenging, yet often compelling
  • Recommendations for different kinds of telcos

Figures:

  • Figure 1: Illustrative open source costs versus a proprietary approach
  • Figure 2: Benefits of transformation and the related obstacles
  • Figure 3: The key barriers in the path of a shift to open source
  • Figure 4: Shaping an initial strategy for the adoption of open source solutions
  • Figure 5: A new open source component in an existing infrastructure
  • Figure 6: Different kinds of telcos need to select different delivery models
  • Figure 7: Illustrative estimate of Open Source costs versus a proprietary approach

The Open Source Telco: Taking Control of Destiny

Preface

This report examines the approaches to open source software – broadly, software for which the source code is freely available for use, subject to certain licensing conditions – of telecoms operators globally. Several factors have come together in recent years to make the role of open source software an important and dynamic area of debate for operators, including:

  • Technological Progress: Advances in core networking technologies, especially network functions virtualisation (NFV) and software-defined networking (SDN), are closely associated with open source software and initiatives, such as OPNFV and OpenDaylight. Many operators are actively participating in these initiatives, as well as trialling their software and, in some cases, moving them into production. This represents a fundamental shift away from the industry’s traditional, proprietary, vendor-procured model.
    • Why are we now seeing more open source activities around core communications technologies?
  • Financial Pressure: Over-the-top (OTT) disintermediation, regulation and adverse macroeconomic conditions have led to reduced core communications revenues for operators in developed and emerging markets alike. As a result, operators are exploring opportunities to move beyond their core infrastructure business and compete in the more software-centric services layer.
    • How do the Internet players use open source software, and what are the lessons for operators?
  • The Need for Agility: In general, there is recognition within the telecoms industry that operators need to become more ‘agile’ if they are to succeed in the new, rapidly-changing ICT world, and greater use of open source software is seen by many as a key enabler of this transformation.
    • How can the use of open source software increase operator agility?

The answers to these questions, and more, are the topic of this report, which is sponsored by Dialogic and independently produced by STL Partners. The report draws on a series of 21 interviews conducted by STL Partners with senior technologists, strategists and product managers from telecoms operators globally.

Figure 1: Split of Interviewees by Business Area

Source: STL Partners

Introduction

Open source is less optional than it once was – even for Apple and Microsoft

From the audience’s point of view, the most important announcement at Apple’s Worldwide Developer Conference (WWDC) this year was not the new versions of iOS and OS X, or even its Spotify-challenging Apple Music service. Instead, it was the announcement that Apple’s highly popular programming language ‘Swift’ was to be made open source, where open source software is broadly defined as software for which the source code is freely available for use – subject to certain licensing conditions.

On one level, therefore, this represents a clever engagement strategy with developers. Open source software uptake has increased rapidly during the last 15 years, most famously embodied by the Linux operating system (OS), and with this developers have demonstrated a growing preference for open source tools and platforms. Since Apple has generally pushed developers towards proprietary development tools, and away from third-party ones (such as Adobe Flash), this is significant in itself.

An indication of open source’s growth can be found in OS market shares for consumer electronics devices. As Figure 2 below shows, Android (open source) had a 49% share of shipments in 2014; if we include the various other open source OSs in ‘other’, this increases to more than 50%.

Figure 2: Share of consumer electronics shipments* by OS, 2014

Source: Gartner
* Includes smartphones, tablets, laptops and desktop PCs

However, one of the components being open sourced is Swift’s (proprietary) compiler – a program that translates written code into an executable program that a computer system understands. The implication of this is that, in theory, we could even see Swift applications running on non-Apple devices in the future. In other words, Apple believes the risk of Swift being used on Android is outweighed by the reward of engaging with the developer community through open source.

Whilst some technology companies, especially the likes of Facebook, Google and Netflix, are well known for their activities in open source, Apple is a company famous for its proprietary approach to both hardware and software. This, combined with similar activities by Microsoft (which open sourced its .NET framework in 2014), suggests that open source is now less optional than it once was.

Open source is both an old and a new concept for operators

At first glance, open source also appears to now be less optional for telecoms operators, who traditionally procure proprietary software (and hardware) from third-party vendors. Whilst many (but not all) operators have been using open source software for some time, such as Linux and various open source databases in the IT domain (e.g. MySQL), we have in the last 2-3 years seen a step-change in operator interest in open source across multiple domains. The following quote, taken directly from the interviews, summarises the situation nicely:

“Open source is both an old and a new project for many operators: old in the sense that we have been using Linux, FreeBSD, and others for a number of years; new in the sense that open source is moving out of the IT domain and towards new areas of the industry.” 

AT&T, for example, has been speaking widely about its ‘Domain 2.0’ programme. Domain 2.0 aims to transform AT&T’s technical infrastructure to incorporate network functions virtualisation (NFV) and software-defined networking (SDN), to mandate a higher degree of interoperability, and to broaden the range of alternative suppliers available across its core business. By 2020, AT&T hopes to virtualise 75% of its network functions, and it sees open source as accounting for up to 50% of this. AT&T, like many other operators, is also a member of various recently-formed initiatives and foundations around NFV and SDN, such as OPNFV – Figure 3 lists some of these members below.

Figure 3: OPNFV Platinum Members

Source: OPNFV website

However, based on publicly-available information, other operators might appear to have lesser ambitions in this space. As ever, the situation is more complex than it first appears: other operators do have significant ambitions in open source and, despite the headlines NFV and SDN draw, there are many other business areas in which open source is playing (or will play) an important role. Figure 4 below includes three quotes from the interviews which highlight this broad spectrum of opinion:

Figure 4: Different attitudes of operators to open source – selected interview quotes

Source: STL Partners interviews

Key Questions to be Addressed

We therefore have many questions which need to be addressed concerning operator attitudes to open source software, adoption (by area of business), and more:

  1. What is open source software, what are its major initiatives, and who uses it most widely today?
  2. What are the most important advantages and disadvantages of open source software? 
  3. To what extent are telecoms operators using open source software today? Why, and where?
  4. What are the key barriers to operator adoption of open source software?
  5. Prospects: How will this situation change?

These are now addressed in turn.

  • Preface
  • Executive Summary
  • Introduction
  • Open source is less optional than it once was – even for Apple and Microsoft
  • Open source is both an old and a new concept for operators
  • Key Questions to be Addressed
  • Understanding Open Source Software
  • The Theory: Freely available, licensed source code
  • The Industry: Dominated by key initiatives and contributors
  • Research Findings: Evaluating Open Source
  • Open source has both advantages and disadvantages
  • Debunking Myths: Open source’s performance and security
  • Where are telcos using open source today?
  • Transformation of telcos’ service portfolios is making open source more relevant than ever…
  • … and three key factors determine where operators are using open source software today
  • Open Source Adoption: Business Critical vs. Service Area
  • Barriers to Telco Adoption of Open Source
  • Two ‘external’ barriers by the industry’s nature
  • Three ‘internal’ barriers which can (and must) change
  • Prospects and Recommendations
  • Prospects: An open source evolution, not revolution
  • Open Source, Transformation, and Six Key Recommendations
  • About STL Partners and Telco 2.0
  • About Dialogic

 

  • Figure 1: Split of Interviewees by Business Area
  • Figure 2: Share of consumer electronics shipments* by OS, 2014
  • Figure 3: OPNFV Platinum Members
  • Figure 4: Different attitudes of operators to open source – selected interview quotes
  • Figure 5: The Open IT Ecosystem (incl. key industry bodies)
  • Figure 6: Three Forms of Governance in Open Source Software Projects
  • Figure 7: Three Classes of Open Source Software License
  • Figure 8: Web Server Share of Active Sites by Developer, 2000-2015
  • Figure 9: Leading software companies vs. Red Hat, market capitalisation, Oct. 2015
  • Figure 10: The Key Advantages and Disadvantages of Open Source Software
  • Figure 11: How Google Works – Failing Well
  • Figure 12: Performance gains from an open source activation (OSS) platform
  • Figure 13: Intel Hardware Performance, 2010-13
  • Figure 14: Open source is more likely to be found today in areas which are…
  • Figure 15: Framework mapping current telco uptake of open source software
  • Figure 16: Five key barriers to telco adoption of open source software
  • Figure 17: % of employees with ‘software’ in their LinkedIn job title, Oct. 2015
  • Figure 18: ‘Waterfall’ and ‘Agile’ Software Development Methodologies Compared
  • Figure 19: Four key cultural attributes for successful telco transformation

NFV: Great Promises, but How to Deliver?

Introduction

What’s the fuss about NFV?

Today, it seems that suddenly everything has become virtual: there are virtual machines, virtual LANs, virtual networks, virtual network interfaces, virtual switches, virtual routers and virtual functions. The two most recent and highly visible developments in Network Virtualisation are Software Defined Networking (SDN) and Network Functions Virtualisation (NFV). They are often used in the same breath, and are related but different.

Software Defined Networking has been around as a concept since 2008 and has seen initial deployments in data centres as a local area networking technology. According to early adopters such as Google, SDN has helped to achieve better utilisation of data centre operations and of data centre Wide Area Networks. Urs Hoelzle of Google discussed Google’s deployment and findings at the OpenNet summit in early 2012, where Google claimed to be able to get 60% to 70% better utilisation out of its data centre WAN. Given the cost of deploying and maintaining service provider networks, this could represent significant cost savings if service providers can replicate these results.

NFV – Network Functions Virtualisation – is just over two years old, yet it is already being deployed in service provider networks and has had a major impact on the networking vendor landscape. Globally, the telecoms and datacomms equipment market is worth over $180bn and has been dominated by five vendors who split around 50% of the market between them.

Innovation and competition in the networking market have been lacking, with very few major innovations in the last 12 years: the industry has focussed on capacity and speed rather than anything radically new, and start-ups that do come up with something interesting get quickly swallowed up by the established vendors. NFV has started to rock the steady ship by bringing to the networking market the same technologies that revolutionised the IT computing market: cloud computing, low-cost off-the-shelf hardware, open source and virtualisation.

Software Defined Networking (SDN)

Conventionally, networks have been built using devices that make autonomous decisions about how the network operates and how traffic flows. SDN offers new, more flexible and efficient ways to design, test, build and operate IP networks by separating the intelligence from the networking device and placing it in a single controller with a perspective of the entire network. Taking the ‘intelligence’ out of many individual components also means that it is possible to build and buy those components for less, thus reducing some costs in the network. Building on open standards should make it possible to select best-in-class vendors for different components in the network, introducing innovation and competitiveness.
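As a rough sketch of what ‘placing the intelligence in a single controller’ looks like from an application’s point of view, the Python fragment below describes a forwarding rule as data and hands it to a central controller over a northbound REST interface, rather than configuring each device individually. The controller address, URL path and payload fields are illustrative assumptions, not the API of any particular SDN product.

    # Illustrative sketch only: the controller address, path and payload schema
    # are hypothetical, not the northbound API of any specific SDN controller.
    import requests

    CONTROLLER = "http://sdn-controller.example.net:8181"   # assumed address

    # Describe the desired forwarding behaviour as data ...
    flow_rule = {
        "id": "block-telnet-from-guest-vlan",
        "match": {"vlan": 300, "ip_proto": "tcp", "tcp_dst": 23},
        "action": "drop",
        "priority": 500,
    }

    # ... and hand it to the controller, which programs every affected switch
    # on our behalf instead of us configuring each box separately.
    resp = requests.put(f"{CONTROLLER}/api/flows/{flow_rule['id']}",
                        json=flow_rule, timeout=5)
    resp.raise_for_status()
    print("Flow accepted by controller:", resp.status_code)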

SDN started out as a data centre technology aimed at making life easier for operators and designers to build and operate large scale data centre operations. However, it has moved into the Wide Area Network and as we shall see, it is already being deployed by telcos and service providers.

Network Functions Virtualisation (NFV)

Like SDN, NFV splits the control functions from the data forwarding functions. However, while SDN does this for an entire network, NFV focusses specifically on network functions such as routing, firewalls, load balancing and CPE, and looks to leverage developments in Commercial Off-The-Shelf (COTS) hardware, such as generic server platforms utilising multi-core CPUs.
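To make the idea of a network function as replaceable software concrete, here is a deliberately minimal Python sketch of a firewall-style function: it inspects packet metadata and decides whether to forward or drop. A real VNF would live in a high-performance data plane; this only illustrates that the function itself is ordinary software running on general-purpose hardware, and the policy values are invented for the example.

    # Minimal illustration of a 'network function as software' (not a real VNF).
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int
        protocol: str            # e.g. "tcp" or "udp"

    BLOCKED_PORTS = {23, 445}    # illustrative firewall policy
    ALLOWED_PROTOCOLS = {"tcp", "udp"}

    def firewall(pkt: Packet) -> bool:
        """Return True to forward the packet, False to drop it."""
        if pkt.protocol not in ALLOWED_PROTOCOLS:
            return False
        if pkt.dst_port in BLOCKED_PORTS:
            return False
        return True

    for p in (Packet("10.0.0.5", "192.0.2.10", 443, "tcp"),
              Packet("10.0.0.6", "192.0.2.10", 23, "tcp")):
        print(p.dst_port, "forward" if firewall(p) else "drop")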

The performance of a device like a router is critical to the overall performance of a network. Historically the only way to get this performance was to develop custom Integrated Circuits (ICs) such as Application Specific Integrated Circuits (ASICs) and build these into a device along with some intelligence to handle things like route acquisition, human interfaces and management. While off the shelf processors were good enough to handle the control plane of a device (route acquisition, human interface etc.), they typically did not have the ability to process data packets fast enough to build a viable device.

But things have moved on rapidly. Vendors like Intel have put specific focus on improving the data plane performance of COTS-based devices, and the performance of these devices has risen sharply. Figure 1 shows that in just three years (2010 – 2013) a tenfold increase in packet processing (data plane) performance was achieved. More generally, CPU performance has been tracking Moore’s law, which states that the number of components in an integrated circuit doubles roughly every two years; to the extent that component count is related to performance, the same can be said of CPU performance. For example, the processor family Intel will ship in the second half of 2015 could have up to 72 individual CPU cores, compared with the four or six used in 2010-2013.

Figure 1 – Intel Hardware performance

Source: ETSI & Telefonica
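As a back-of-the-envelope check on the figures quoted above (the numbers below simply restate those in the text, not new measurements), the short calculation contrasts what component doubling alone would predict over three years with the reported data plane and core count gains; the gap reflects the data plane optimisation work described above as much as raw silicon scaling.

    # Back-of-the-envelope restatement of the figures quoted in the text.
    transistor_growth = 2 ** (3 / 2)   # doubling every 2 years, over the 3 years 2010-2013
    print(f"Moore's-law component growth over 3 years: ~{transistor_growth:.1f}x")  # ~2.8x

    packet_rate_gain = 10              # tenfold data-plane gain reported for 2010-2013
    print(f"Reported packet-processing gain: {packet_rate_gain}x")

    core_count_gain = 72 / 6           # up to 72 cores in 2015 vs the six used in 2010-2013
    print(f"Core-count gain: {core_count_gain:.0f}x")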

NFV was started by the telco industry to leverage the capability of COTS-based devices to reduce the cost of networking equipment and, more importantly, to introduce innovation and more competition to the networking market.

Since its inception in 2012, run as an industry specification group within ETSI (the European Telecommunications Standards Institute), NFV has proven to be a valuable initiative, not just from a cost perspective but, more importantly, for what it means to telcos and service providers: the ability to develop, test and launch new services quickly and efficiently.

ETSI set up a number of work streams to tackle issues such as performance, management and orchestration, proofs of concept and reference architecture; externally, organisations like OPNFV (Open Platform for NFV) have brought together a number of vendors and interested parties.

Why do we need NFV? What we already have works!

NFV came into being to solve a number of problems. Dedicated appliances from the big networking vendors typically do one thing and do it very well: switching or routing packets, acting as a network firewall, and so on. But as each is dedicated to a particular task and has its own user interface, things can get complicated when there are hundreds of different devices to manage and staff to keep trained and updated. Devices also tend to be used for one specific application, and reuse is sometimes difficult, resulting in expensive obsolescence. Running network functions on a COTS-based platform makes most of these issues go away, resulting in:

  • Lower operating costs (some claim up to 80% less)
  • Faster time to market
  • Better integration between network functions
  • The ability to rapidly develop, test, deploy and iterate a new product
  • Lower risk associated with new product development
  • The ability to rapidly respond to market changes leading to greater agility
  • Less complex operations and better customer relations

And the real benefits are not just in the area of cost savings: they are about time to market, being able to respond quickly to market demands and, in essence, becoming more agile.

The real benefits

If the real benefits of NFV are not just about cost savings but about agility, how is this agility delivered? It comes from a number of different aspects, for example the ability to orchestrate a number of VNFs and the network to deliver a suite, or chain, of network functions for an individual user or application. This has been the focus of the ETSI Management and Orchestration (MANO) workstream.

MANO will be crucial to the long-term success of NFV. MANO provides automation and provisioning, and will interface with existing provisioning and billing platforms such as today’s OSS/BSS. It will allow the use and reuse of VNFs, networking objects and chains of services, and, via external APIs, will allow applications to request and control the creation of specific services.

Figure 2 – Orchestration of Virtual Network Functions

Source: STL Partners

Figure 2 shows a hypothetical service chain created for a residential user accessing a network server. The service chain is made up of a number of VNFs that are used as required and then discarded when no longer needed as part of the service. For example, the Broadband Remote Access Server becomes a VNF running on a common platform rather than a dedicated hardware appliance. As the user’s set-top box connects to the network, the authentication component checks that the user is valid and has a current account, but drops out of the chain once this function has been performed. The firewall is used for the duration of the connection, and other components, such as Deep Packet Inspection and load balancing, are used as required. Equally, as the user accesses other services, such as media, internet and voice, different VNFs can be brought into play, such as a session border controller (SBC) and network storage.
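One way to picture the orchestration described above is as an ordered, per-subscriber list of VNFs that the orchestrator spins up and tears down as the session progresses. The Python sketch below models Figure 2’s chain as a simple data structure; the names and lifetimes are illustrative only and not drawn from any MANO implementation.

    # Illustrative model of the per-subscriber service chain in Figure 2
    # (a toy data structure, not a MANO implementation).
    RESIDENTIAL_CHAIN = [
        {"vnf": "bras",           "lifetime": "session"},     # virtualised Broadband Remote Access Server
        {"vnf": "authentication", "lifetime": "setup-only"},  # drops out once the subscriber is validated
        {"vnf": "firewall",       "lifetime": "session"},     # used for the duration of the connection
        {"vnf": "dpi",            "lifetime": "on-demand"},   # Deep Packet Inspection, used as required
        {"vnf": "load-balancer",  "lifetime": "on-demand"},
    ]

    def active_vnfs(chain, phase):
        """Return the VNFs an orchestrator would keep running in a given phase."""
        wanted = {"setup":        {"setup-only", "session"},
                  "steady-state": {"session", "on-demand"}}[phase]
        return [hop["vnf"] for hop in chain if hop["lifetime"] in wanted]

    print(active_vnfs(RESIDENTIAL_CHAIN, "setup"))          # authentication present at connection time
    print(active_vnfs(RESIDENTIAL_CHAIN, "steady-state"))   # authentication has dropped out of the chain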

Sounds great, but is it real, is anyone doing anything useful?

The short answer is yes: there are live deployments of NFV in many service provider networks, and NFV is having a real impact on costs and time to market, as detailed in this report. For example:

  • Vodafone Spain’s Lowi MVNO
  • Telefonica’s vCPE trial
  • AT&T Domain 2.0 (see pages 22 – 23 for more on these examples)

 

  • Executive Summary
  • Introduction
  • WTF – what’s the fuss about NFV?
  • Software Defined Networking (SDN)
  • Network Functions Virtualisation (NFV)
  • Why do we need NFV? What we already have works!
  • The real benefits
  • Sounds great, but is it real, is anyone doing anything useful?
  • The Industry Landscape of NFV
  • Where did NFV come from?
  • Any drawbacks?
  • Open Platform for NFV – OPNFV
  • Proprietary NFV platforms
  • NFV market size
  • SDN and NFV – what’s the difference?
  • Management and Orchestration (MANO)
  • What are the leading players doing?
  • NFV – Telco examples
  • NFV Vendors Overview
  • Analysis: the key challenges
  • Does it really work well enough?
  • Open Platforms vs. Walled Gardens
  • How to transition?
  • It’s not if, but when
  • Conclusions and recommendations
  • Appendices – NFV Reference architecture

 

  • Figure 1 – Intel Hardware performance
  • Figure 2 – Orchestration of Virtual Network Functions
  • Figure 3 – ETSI’s vision for Network Functions Virtualisation
  • Figure 4 – Typical Network device showing control and data planes
  • Figure 5 – Metaswitch SBC performance running on 8 x CPU Cores
  • Figure 6 – OPNFV Membership
  • Figure 7 – Intel OPNFV reference stack and platform
  • Figure 8 – Telecom equipment vendor market shares
  • Figure 9 – Autonomy Routing
  • Figure 10 – SDN Control of network topology
  • Figure 11 – ETSI reference architecture shown overlaid with functional layers
  • Figure 12 – Virtual switch conceptualised

 

Software Defined People: How it Shapes Strategy (and us)

Introduction: software’s defining influence

Our knowledge, employment opportunities, work itself, healthcare, potential partners, purchases from properties to groceries, and much else can now be delivered or managed via software and mobile apps.

So are we all becoming increasingly ‘Software Defined’? It’s a question that has been stimulated in part by producing research on ‘Software Defined Networks (SDN): A Potential Game Changer’ and Enterprise Mobility, by a video from McKinsey and Eric Schmidt, Google’s Executive Chairman, and by a number of observations throughout the past year, particularly at this and last year’s Mobile World Congress (MWC).

But is software really the key?

The rapid adoption of smartphones and tablets, enabled by ever faster networks, is perhaps the most visible and tangible phenomenon in the market. Less visible but equally significant is the huge growth in ‘big data’ – the use of massive computing power to process types and volumes of data that were previously inaccessible, as well as ‘small data’ – the increasing use of more personalised datasets.

However, what is now fuelling these trends is that many core life and business tools are now software of some form or another: programmes and ‘apps’ that create economic value, utility, fun or efficiency. Software is now the driving force, and the evolving data and hardware are, respectively, by-products and enablers of the applications.

Software: your virtual extra hand

In effect, mobile software is the latest great tool in humanity’s evolutionary path. With nearly a quarter of the world’s population using a smartphone, the human race has never had so much computing power by its side in every moment of everyday life. Many feature phones also possess significant processing power, and the extraordinary reach of mobile can now deliver highly innovative solutions like mobile money transfer even in markets with relatively underdeveloped financial service infrastructure.

How we are educated, employed and cared for are all starting to change with the growing power of mobile technologies, and will all change further and with increasing pace in the next phase of the mobile revolution. Knowing how to get the best from this world is now a key life skill.

The way that software is used is changing and will change further. While mobile apps have become a mainstream consumer phenomenon in many markets in the last few years, the application of mobile, personalised technologies is also changing education, health, employment, and the very fabric of our social lives. For example:

  • Back at MWC 2013 we saw the following fascinating video from Ericsson as part of its ‘Networked Society’ vision of why education has evolved as it has (to mass-produce workers to work in factories), and what the possibilities are with advanced technology, which is well worth a few minutes of your time whether you have kids or not.
  • We also saw this education demo video from a Singapore school from Qualcomm, based on the creative use of phones in all aspects of schooling in the WE Learn project.
  • There are now a growing number of eHealth applications (heart rate, blood pressure, stroke and outpatient care), and productivity apps and outreach of CRM applications like Salesforce into the mobile employment context are having an increasingly massive impact.
  • While originally a ‘fixed’ phenomenon, the way we meet and find partners has seen a massive change in recent years. For example, in the US, 17% of recent marriages and 20% of ‘committed relationships’ started in the $1Bn online dating world – another world which is now increasingly going mobile.

The growing sophistication in human-software interactivity

Horace Dediu pointed out at a previous Brainstorm that the disruptive jumps in mobile handset technology have come from changes in the user interface – most recently in the touch-screen revolution accompanying smartphones and tablets.

And the way in which we interact with the software will continue to evolve, from the touch screens of smartphones, through voice activation, gesture recognition, retina tracking, on-body devices like watches, in-body sensors in the blood and digestive system, and even potentially by monitoring brainwaves, as illustrated in the demonstration from Samsung labs shown in Figure 1.

Figure 1: Software that reads your mind?

Source: Samsung Labs

Clearly, some of these techniques are still at an early stage of development. It is a hard call as to which will be the one to trigger the next major wave of innovation (e.g. see Facebook’s acquisition of Oculus Rift), as there are so many factors that influence the likely take-up of new technologies, from price through user experience to social acceptance.

Exploring and enhancing the senses

Interactive goggles / glasses such as Google Glass have now been around for over a year, and AR applications that overlay information from the virtual world onto images of the real world continue to evolve.

Search is also becoming a visual science – innovations such as Cortexica recognise everyday objects (cereal packets, cars, signs, advertisements, stills from a film, etc.) and return information on how and where you can buy the related items. While it works from a smartphone today, it makes it possible to imagine a world where you open the kitchen cupboard and tell your glasses what items you want to re-order.

Screens will be in increasing abundance, able to interact with passers-by on the street or with you in your home or car. What will be on these screens could be anything that is on any of your existing screens or more – communication, information, entertainment, advertising – whatever the world can imagine.

Segmented by OS?

But is it really possible to define a person by the software they use? There is certainly an ‘a priori’ segmentation originating from device makers’ segmentation and positioning:

  • Apple’s brand and design ethos have held consistently strong appeal for upmarket, creative users. In contrast, Blackberry for a long time held a strong appeal in the enterprise segment, albeit significantly weakened in the last few years.
  • It is perhaps slightly harder to label Android users, now the largest group of smartphone users. However, the openness of the software leads to freedom, bringing with it a plurality of applications and widgets, some security issues, and perhaps a greater emphasis on ‘work it out for yourself’.
  • Microsoft, once ubiquitous through its domination of the PC universe, now finds itself a challenger in the world of mobiles and tablets and, despite gradually improving sales and a well-received OS experience and design, has yet to find a clear identity, other than perhaps now being the domain of those willing to try something different. While Microsoft still has a strong hand in the software world through its evolving Office applications, these are not yet hugely mobile-friendly, and this is creating a niche for new players, such as Evernote and others, that have a more focused ‘mobile first’ approach.

Other segments

From a research perspective, there are many other approaches to thinking about what defines different types of user. For example:

  • In adoption, the Bass Diffusion Model segments users into, e.g., Innovators, Early Adopters, Mass Market and Laggards;
  • Segments based on attitudes to usage, e.g. Lovers, Haters, Functional Users, Social Users, Cost Conscious, etc.;
  • Approaches to privacy and the use of personal data, e.g. Pragmatic, Passive, Paranoid.

It is tempting to hypothesise that there could be meta-segments combining these and other behavioural distinctions (e.g. you might theorise that there would be more ‘haters’ among the ‘laggards’ and the ‘paranoids’ than the ‘innovators’ and ‘pragmatics’), and there may indeed be underlying psychological drivers such as extraversion that drive people to use certain applications (e.g. personal communications) more.

However, other than anecdotal observations, we don’t currently have the data to explore or prove this. This knowledge may of course exist within the research and insight departments of major players and we’d welcome any insight that our partners and readers can contribute (please email contact@telco2.net if so).

Hypothesis: a ‘software fingerprint’?

The collection of apps and software each person uses, and how they use them, could be seen as a software fingerprint – a unique combination of tools showing interests, activities and preferences.

Human beings are complex creatures, and it may be a stretch to say a person could truly be defined by the software they use. However, there is a degree of cause and effect with software. Once you have the ability to use it, it changes what you can achieve. So while the software you use may not totally define you, it will play an increasing role in shaping you, and may ultimately form a distinctive part of your identity.

For example, Minecraft is a phenomenally successful and addictive game. If you haven’t seen it, imagine interactive digital Lego (or watch the intro video here). Children and adults all over the world play on it, make YouTube films about their creations, and share knowledge and stories from it as with any game.

To be really good at it, and to add enhanced features, players install ‘mods’ – essentially software upgrades requiring the use of quite sophisticated code and procedures, and an understanding of numerous file types and locations. So through this one game, ten-year-old kids are developing creative, social and IT skills, as well as exploring and creating new identities for themselves.

Figure 2: Minecraft – building, killing ‘creepers’ and coding by a kid near you


Source: Planetminecraft.com

But who is in charge – you or the software?

There are also two broad schools of thought in advanced IT design. One is that IT should augment human abilities and its application should always be controlled by its users. The other is the idea that IT can assist people by providing recommendations and suggestions that are outside the control of the user. An example of this second approach is Google showing you targeted ads based on your search history.

Being properly aware of this will become increasingly important to individuals’ freedom from unrecognised manipulation. Just as it pays to know that embarrassing photos on Facebook will be seen by prospective employers, knowing who is pulling your data strings will be increasingly important to controlling one’s own destiny in the future.

Back to the law of the Jungle?

Many of the opportunities and abilities conferred by software seem perhaps trivial or entertaining. But some will ultimately confer advantages on their users over those who do not possess the extra information, gain those extra moments, or learn that extra winning idea. The questions are: which will you use well; and which will you enable others to use? The answer to the first may reflect your personal success, and the second that of your business.

So while it used to be that your genetics, parents, and education most strongly steered your path, now how you take advantage of the increasingly mobile cyber-world will be a key additional competitive asset. It’s increasingly what you use and how you use it (as well as who you know, of course) that will count.

And for businesses, competing in an ever more resource constrained world, the effective use of software to track and manage activities and assets, and give insight to underlying trends and ways to improve performance, is an increasingly critical competence. Importantly for telcos and other ICT providers, it’s one that is enabled and enhanced by cloud, big data, and mobile.

The Software as a Service (SaaS) application Salesforce is an excellent case in point. It brings instantaneous data on customers and business operations to managers’ and employees’ fingertips on any device. This can confer huge advantages over businesses without such capabilities.

Figure 3: Salesforce delivers big data and cloud to mobile


Source: Powerbrokersoftware.com

 

  • Executive Summary: the key role of mobile
  • Why aren’t telcos more involved?
  • Revenue Declines + Skills Shortage = Digital Hunger Gap
  • What should businesses do about it?
  • All Businesses
  • Technology Businesses and Enablers
  • Telcos
  • Next steps for STL Partners and Telco 2.0

 

  • Figure 1: Software that reads your mind?
  • Figure 2: Minecraft – building, killing ‘creepers’ and coding by a kid near you
  • Figure 3: Salesforce delivers big data and cloud to mobile
  • Figure 4: The Digital Hunger Gap for Telcos
  • Figure 5: Telcos need Software Skills to deliver a ‘Telco 2.0 Service Provider’ Strategy
  • Figure 6: The GSMA’s Vision 2020

Software Defined Networking (SDN): A Potential ‘Game Changer’

Summary: Software Defined Networking is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks. This briefing analyses its potential benefits and use cases, outlines strategic scenarios and key action plans for telcos, summarises key vendor positions, and explains why it is so important for both the telco and vendor communities to adopt and exploit SDN capabilities now. (May 2013, Executive Briefing Service, Cloud & Enterprise ICT Stream, Future of the Network Stream)

Figure 1 – Potential Telco SDN/NFV Deployment Phases

Source: STL Partners

Introduction

Software Defined Networking or SDN is a technological approach to designing and managing networks that has the potential to increase operator agility, lower costs, and disrupt the vendor landscape. Its initial impact has been within leading-edge data centres, but it also has the potential to spread into many other network areas, including core public telecoms networks.

With SDN, networks no longer need to be point to point connections between operational centres; rather the network becomes a programmable fabric that can be manipulated in real time to meet the needs of the applications and systems that sit on top of it. SDN allows networks to operate more efficiently in the data centre as a LAN and potentially also in Wide Area Networks (WANs).

SDN is new and, like any new technology, this means that there is a degree of hype and a lot of market activity:

  • Venture capitalists are on the lookout for new opportunities;
  • There are plenty of start-ups all with “the next big thing”;
  • Incumbents are looking to quickly acquire new skills through acquisition;
  • And not surprisingly there is a degree of SDN “Washing” where existing products get a makeover or a software upgrade and are suddenly SDN compliant.

However, there still isn’t widespread clarity about what SDN is and how it might be used outside of vendor papers and marketing materials, and there are plenty of important questions to be answered. For example:

  • SDN is open to interpretation and is not an industry standard, so what is it?
  • Is it better than what we have today?
  • What are the implications for your business, whether telcos, or vendors?
  • Could it simply be a passing fad that will fade into the networking archives like IP Switching or X.25, and can you afford to ignore it?
  • What will be the impact on LAN and WAN design and for that matter data centres, telcos and enterprise customers? Could it be a threat to service providers?
  • Could we see a future where networking equipment becomes commoditised just like server hardware?
  • Will standards prevail?

Vendors are to a degree adding to the confusion. For example, Cisco argues that it already has an SDN-capable product portfolio with Cisco One. It says that its solution is more capable than solutions dominated by open-source based products, because these have limited functionality.

This executive briefing will explain what SDN is, why it is different to traditional networking, look at the emerging market with some likely use cases and then look at the implications and benefits for service providers and vendors.

How and why has SDN evolved?

SDN has been developed in response to the fact that basic networking hasn’t really evolved much over the last 30 plus years, and that new capabilities are required to further the development of virtualised computing to bring innovation and new business opportunities. From a business perspective the networking market is a prime candidate for disruption:

  • It is a mature market that has evolved steadily for many years
  • There are relatively few leading players who have a dominant market position
  • Technology developments have generally focussed on speed rather than cost reduction or innovation
  • Low cost silicon is available to compete with custom chips developed by the market leaders
  • There is a wealth of open source software plus plenty of low cost general purpose computing hardware on which to run it
  • Until SDN, no one really took a clean slate view on what might be possible

New features and capabilities have been added to traditional equipment, but they have tended to bloat the software content, increasing the cost of both purchasing and operating the devices. Nevertheless, IP networking as we know it has performed the task of connecting two end points very well; it has been able to support the explosion of growth required by the Internet and by mobile and mass computing in general.

Traditionally each element in the network (typically a switch or a router) builds up a network map and makes routing decisions based on communication with its immediate neighbours. Once a connection through the network has been established, packets follow the same route for the duration of the connection. Voice, data and video have differing delivery requirements with respect to delay, jitter and latency, but in traditional networks there is no overall picture of the network – no single entity responsible for route planning, or ensuring that traffic is optimised, managed or even flows over the most appropriate path to suit its needs.

One of the significant things about SDN is that it takes away the independence, or autonomy, of every networking element, removing its ability to make network routing decisions. The responsibility for establishing paths through the network, their control and their routing is placed in the hands of one or more central network controllers. The controller is able to see the network as a complete entity and manage its traffic flows, routing, policies and quality of service, in essence treating the network as a fabric and then attempting to get maximum utilisation from that fabric. SDN controllers generally offer external interfaces through which external applications can control and set up network paths.
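Because the controller holds the complete topology, route planning becomes an ordinary graph computation performed in one place rather than an emergent property of per-device decisions. The Python sketch below illustrates the principle with a toy topology and standard shortest-path logic; the node names and link weights are invented for the example.

    # Toy illustration of centralised path computation over a controller's global view.
    import heapq

    # The controller's map of the network; weights might represent delay or load.
    TOPOLOGY = {
        "edge-A": {"agg-1": 1, "agg-2": 4},
        "agg-1":  {"core-1": 2, "agg-2": 1},
        "agg-2":  {"core-1": 1},
        "core-1": {},
    }

    def shortest_path(graph, src, dst):
        """Dijkstra's algorithm over the controller's complete network view."""
        queue, visited = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, weight in graph.get(node, {}).items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
        return None

    print(shortest_path(TOPOLOGY, "edge-A", "core-1"))   # cost 3 (two equal-cost routes exist)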

There has been a growing demand to make networks programmable by external applications: data centres and virtual computing are clear examples of where it would be desirable to deploy not just the virtual computing environment, but all the associated networking functions and network infrastructure, from a single console. With no common control point, the only way of providing interfaces to external systems and applications is to place agents in the networking devices and ask external systems to manage each networking device. This kind of architecture has difficulty scaling, creates lots of control traffic that reduces overall efficiency, and may end up with multiple applications trying to control the same entity; it is therefore fraught with problems.

Network Functions Virtualisation (NFV)

It is worth noting that a complementary initiative to SDN, Network Functions Virtualisation (NFV), was started in 2012 by the European Telecommunications Standards Institute (ETSI). Its aim is to take functions that currently sit on dedicated hardware – load balancers, firewalls, routers and other network devices – and run them on virtualised hardware platforms, lowering capex, extending the useful life of equipment and reducing operating expenditure. You can read more about NFV later in the report on page 20.

In contrast, SDN makes it possible to program or change the network to meet a specific, time-dependent need and to establish end-to-end connections that meet specific criteria. The SDN controller holds a map of the current network state and of the requests that external applications are making on the network. This makes it easier to get the best use from the network at any given moment, carry out meaningful traffic engineering, and work more effectively with virtual computing environments.
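As a purely illustrative sketch of that kind of programmability – the controller URL, endpoint and payload fields below are hypothetical, not taken from any particular controller’s API – an external application might request a constrained, time-limited path along these lines:

```python
# Hypothetical sketch of a northbound request to an SDN controller.
# The endpoint, payload fields and controller URL are illustrative only --
# real controllers each define their own northbound APIs.
import json
import urllib.request

path_request = {
    "source": "10.0.1.5",
    "destination": "10.0.9.20",
    "constraints": {"max_latency_ms": 30, "min_bandwidth_mbps": 100},
    "valid_until": "2013-05-31T18:00:00Z",   # time-dependent: tear the path down afterwards
}

req = urllib.request.Request(
    "http://sdn-controller.example.net/api/paths",   # hypothetical URL
    data=json.dumps(path_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)   # would return the provisioned path ID
```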

What is driving the move to SDN?

The Internet and the world of IP communications have seen continuous development over the last 40 years. There has been huge innovation and strict control of standards through the Internet Engineering Task Force (IETF). Because of the ad-hoc nature of its development, there are many different functions catering for all sorts of use cases. Some overlap, some are obsolete, but all still have to be supported and more are being added all the time. This means that the devices that control IP networks and connect to the networks must understand a minimum subset of functions in order to communicate with each other successfully. This adds complexity and cost because every element in the network has to be able to process or understand these rules.

But the system works and it works well. For example, when we open a web browser and a session to stlpartners.com, our browser and our PC initially have no knowledge of how to get to STL’s web server. But usually within half a second or so the STL Partners web site appears. What actually happens can be seen in Figure 2. Our PC uses a variety of protocols to connect first to a gateway (1) on our network and then to a public name server (2 & 3) in order to look up the IP address for stlpartners.com. The PC then opens a connection to that address (4) and assumes that the network will route packets of information to and from the destination server. The process is much the same whether using public WANs or private local area networks; a rough code sketch after the figure walks through the same steps.

Figure 2 – Process of connecting to an Internet web address

Source STL Partners
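For readers who want to see those steps in code, the rough sketch below (assuming Python and outbound network access) performs the name lookup and opens the connection, leaving the routing of the packets entirely to the network:

```python
# Rough sketch of steps (2)-(4) from the figure: resolve the name via DNS,
# then open a TCP connection and let the network route the packets.
import socket

host = "stlpartners.com"
addr = socket.gethostbyname(host)        # (2)&(3): query a name server for the IP address
print(f"{host} resolves to {addr}")

with socket.create_connection((addr, 80), timeout=5) as conn:   # (4): connect
    conn.sendall(f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    print(conn.recv(200).decode(errors="replace"))
```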

The Internet is also highly resilient; it was developed to survive a variety of network outages including the complete loss of sub networks. Popular myth has it that the US Department of Defence wanted it to be able to survive a nuclear attack, but while it probably could, nuclear survivability wasn’t a design goal. The Internet has the ability to route around failed networking elements and it does this by giving network devices the autonomy to make their own decisions about the state of the network and how to get data from one point to any other.

While this was of great value in unreliable networks – which is what the Internet looked like during its evolution in the late 1970s and early 1980s – today’s networks comprise far more robust elements and more reliable links. The upshot is that networks typically operate at a sub-optimal level: unless there is an outage, routes and traffic paths are mostly static and last for the duration of the connection. If an outage occurs, the routers in the network decide amongst themselves how best to re-route the traffic, each making its own decisions about traffic flow and prioritisation given its individual view of the network. In fact, most routers and switches are not aware of the network in its entirety, just the adjacent devices they are connected to and the information those devices pass on about the networks and devices they in turn are connected to. It can therefore take some time for a converged network to stabilise, as we saw in the Internet outages that affected Amazon, Facebook, Google and Dropbox last October.

The diagram in Figure 3 shows a simple router network. Router A knows about the networks on routers B and C because it is connected directly to them and they have informed A about their networks. B and C have also informed A that they can reach the networks or devices on router D. You can see from this model that there is no overall picture of the network and no single device is able to make network-wide decisions. In order to connect a device on a network attached to A to a device on a network attached to D, A must make a decision based on what B or C tell it (the toy sketch after the figure illustrates this neighbour-by-neighbour learning).

Figure 3 – Simple router network

Source STL Partners
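The toy sketch below (an invented topology, loosely in the spirit of distance-vector routing, and not taken from the report) shows this neighbour-by-neighbour learning: A ends up with a route to D only because B and C advertised one, and never sees the network as a whole.

```python
# Toy illustration of neighbour-by-neighbour route learning: A only knows what
# B and C tell it, so it picks a next hop for D's networks without any
# network-wide view. Topology and hop counts are invented for the example.
adjacent = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

# Routing tables: destination -> (next_hop, hop_count). Each router starts
# knowing only itself, then repeatedly learns from its neighbours' tables.
tables = {r: {r: (r, 0)} for r in adjacent}

for _ in range(len(adjacent)):          # enough rounds for this small network
    for router, neighbours in adjacent.items():
        for n in neighbours:
            for dest, (_, hops) in tables[n].items():
                if dest not in tables[router] or hops + 1 < tables[router][dest][1]:
                    tables[router][dest] = (n, hops + 1)

print(tables["A"])   # A reaches D via B or C (2 hops), whichever it heard from first
```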

This model makes it difficult to build large data centres with thousands of Virtual Machines (VMs) and offer customers dynamic service creation when the network only understands physical devices and does not easily allow each VM to have its own range of IP addresses and other IP services. Ideally you would configure a complete virtual system – virtual machines, load balancing, security, network control elements and network configuration – from a single management console, and these abstract functions would then be mapped to physical computing and networking resources. VMware has coined the term ‘Software Defined Data Centre’ (SDDC) to describe a system that allows all of these elements and more to be controlled by a single suite of management software.

Moreover, returning to the fact that every networking device needs to understand a raft of Internet Requests For Comments (RFCs), all the clever code supporting these RFCs in switches and routers costs money. High-performance processors and memory are required in traditional routers and switches in order to inspect and process traffic, even in MPLS networks. Cisco IOS supports over 600 RFCs and other standards. This adds cost and complexity, creates compatibility and obsolescence risks, and increases power and cooling requirements.

SDN takes a fresh approach to building networks based on the technologies available today: it places the intelligence centrally on scalable compute platforms and leaves the switches and routers as relatively dumb packet-forwarding engines. The control platforms still have to support all the standards, but they run on hardware far more powerful than the processors in traditional networking devices and, more importantly, the controllers can manage the network as a fabric rather than each element making its own potentially sub-optimal decisions.
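A minimal sketch of the ‘dumb packet-forwarding engine’ idea follows – the match fields and actions are simplified stand-ins, not real OpenFlow structures: the switch simply matches packets against controller-installed rules and applies the corresponding action.

```python
# Sketch of a dumb forwarding engine: match packets against controller-installed
# rules and apply the action. Field names are simplified; real OpenFlow match
# fields and actions are richer than this.
FLOW_TABLE = [
    {"match": {"dst_ip": "10.0.9.20", "ip_proto": 6}, "action": ("output", "port3"), "priority": 200},
    {"match": {},                                     "action": ("send_to_controller",), "priority": 0},
]

def forward(packet):
    """Return the action of the highest-priority rule whose match fields all agree."""
    for rule in sorted(FLOW_TABLE, key=lambda r: -r["priority"]):
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]

print(forward({"dst_ip": "10.0.9.20", "ip_proto": 6, "src_ip": "10.0.1.5"}))
print(forward({"dst_ip": "192.0.2.1", "ip_proto": 17}))   # no specific rule -> ask the controller
```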

As one proof point that SDN works, in early 2012 Google announced that it had migrated its live data centre network to a Software Defined Network, using switches it designed itself from off-the-shelf silicon, with OpenFlow as the control path to a Google-designed controller. Google claims many benefits from the system, including better utilisation of its compute power. At the time, Google stated it would have liked to purchase OpenFlow-compliant switches, but none were available that suited its needs. Since then, new vendors such as Big Switch and Pica8 have entered the market, delivering relatively low-cost OpenFlow-compliant switches.

To read the Software Defined Networking report in full, including the following sections detailing additional analysis…

  • Executive Summary including detailed recommendations for telcos and vendors
  • Introduction (reproduced above)
  • How and why has SDN evolved? (reproduced above)
  • What is driving the move to SDN? (reproduced above)
  • SDN: Definitions and Advantages
  • What is OpenFlow?
  • SDN Control Platforms
  • SDN advantages
  • Market Forecast
  • STL Partners’ Definition of SDN
  • SDN use cases
  • Network Functions Virtualisation
  • What are the implications for telcos?
  • Telcos’ strategic options
  • Telco Action Plans
  • What should telcos be doing now?
  • Vendor Support for OpenFlow
  • Big Switch Networks
  • Cisco
  • Citrix
  • Ericsson
  • FlowForwarding
  • HP
  • IBM
  • Nicira
  • OpenDaylight Project
  • Open Networking Foundation
  • Open vSwitch (OVS)
  • Pertino
  • Pica8
  • Plexxi
  • Tellabs
  • Conclusions & Recommendations

…and the following figures…

  • Figure 1 – Potential Telco SDN/NFV Deployment Phases
  • Figure 2 – Process of connecting to an Internet web address
  • Figure 3 – Simple router network
  • Figure 4 – Traditional Switches with combined Control/Data Planes
  • Figure 5 – SDN approach with separate control and data planes
  • Figure 6 – ETSI’s vision for Network Functions Virtualisation
  • Figure 7 – Network Functions Virtualised and managed by SDN
  • Figure 8 – Network Functions Virtualisation relationship with SDN
  • Table 1 – Telco SDN Strategies
  • Figure 9 – Potential Telco SDN/NFV Deployment Phases
  • Figure 10 – SDN used to apply policy to Internet traffic
  • Figure 11 – SDN Congestion Control Application

 

Strategy 2.0: Google’s Strategic Identity Crisis

Summary: Google’s shares have made little headway recently despite its dominance in search and advertising, and it faces increasing regulatory threats in this area. It either needs to find new sources of value growth or start paying out dividends, like Microsoft, Apple (or indeed, a telco). Overall, this is resulting in something of a strategic identity crisis. A review of Google’s strategy and implications for Telcos. (March 2012, Executive Briefing Service, Dealing with Disruption Stream).



Below is an extract from this 24 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and the Telco 2.0 Dealing with Disruption Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £595 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings and more on Google at the Silicon Valley (27-28 March) and London (12-13 June) New Digital Economics Brainstorms.


Executive Summary

Google appears to be suffering from a strategic identity crisis. It is the giant of search advertising but it also now owns a handset maker, fibre projects, an increasingly fragmented mobile operating system, a social network of questionable success, and a driverless car programme (among other things). It has a great reputation for innovation and creativity, but risks losing direction and value by trying to focus on too many strategies and initiatives.

We believe that Google needs to stop trying to copy what Apple and Facebook are doing, de-prioritise its ‘Hail Mary’ hunt for a strategy (e.g. driverless cars), and continue to build new solutions that better serve the customers who are already willing to pay – namely, advertisers.

It is our view that the companies who have created most value in the market have done so by solving a customer problem really well. Apple’s recent success derives from creating a simpler and more beautiful way (platform + products) for people to manage their digital lives. People pay because it’s appealing and it works.

Google initially solved the problem of how people find relevant information online and then, critically, how to use this to help advertisers win more customers. It does this so well that Google’s $37bn revenues continue to grow at a double-digit pace, and there’s plenty of headroom in the market for now. While the TV strategy may not yet be paying off, it would seem sensible to keep working at it to try to keep extending the reach of Google’s platform.

While Android keeps Google in the mobile game to a degree, and has certainly helped to constrain certain rivals, we think Google should cast a hard eye over its other competing and distracting activities: Motorola, Payments, Google +, Driverless Cars etc. Its management team should look at the size of the opportunity, the strength of the competition, and their ability to execute in each. 

Pruning the projects might lose Google an adversary or two, and could also afford some reward to shareholders. After all, even Apple has recently decided to pay back some cash to investors.

This may be very difficult for Google’s current leadership. Larry Page seems to have the restless instincts of the stereotypical Valley venture capitalist, hunting the latest ideas, and constantly trying to create the next big beautiful thing. The trouble is that this is Google in 2012, not 1995, and it looks to us at least that a degree of ‘sticking to the knitting’ within Google’s huge, profitable and growing search advertising business may be a better bet than the highly speculative (and expensive) ‘Hail Mary’ strategy route. 

This may sound surprising coming from us, the inveterate fans of innovation at Telco 2.0, so we’d like to point out some important differences between the situations that Google and the telcos are in:

  • Google’s core markets are growing, not flat or shrinking, and are at a different life-stage to the telecoms market;
  • Google is global, rather than being confined to any given geography. There are many opportunities still out there.
  • We are not saying that Google should stop innovating, but we are saying it should focus its innovative energy more clearly on activities that grow the core business.

Introduction

In January this year, Google achieved a first – it missed the consensus forecast for its quarterly earnings. There is of course no magic in the consensus, which is an average of highly conventionalised guesses from a bunch of City analysts, but it is as good a moment as ever to review Google’s strategic position. If you bought Google stock at the beginning, you may not need to read this, as you’re probably very rich (the return since then is of the order of 400%). The entirety of this return, however, is accounted for by the 2004-2007 bull run. On a five-year basis, Google stock is ahead 30%, which sounds pretty impressive (a 6% annual return), but again, all the growth is accounted for by the last surge upwards over the summer of 2007. The peak was achieved on the 2nd of November, 2007. 
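As a quick sanity check on those return figures (a rough sketch using only the 30% five-year number quoted above):

```python
# Rough check of the five-year return figure quoted above.
five_year_return = 0.30
simple_annual = five_year_return / 5                      # ~6% per year, simple average
compound_annual = (1 + five_year_return) ** (1 / 5) - 1   # ~5.4% per year, compounded
print(f"simple: {simple_annual:.1%}  compound: {compound_annual:.1%}")
```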

As this chart shows, Google stock is still down about 9% from the peak, and perhaps more importantly, its path tracks Microsoft very closely indeed. Plus Microsoft investors get a dividend, whereas Google investors do not.

Figure 1: Google, Microsoft 2.0?

Source: Google Finance

Larry Page is reported to have said that Google is no longer a “search company”. He says its model is now to

“invent wild things that will help humanity, get them adopted by users, profit, and then use the corporate structure to keep inventing new things.”

No longer a search company? Take a look at the revenues. Out of Google’s $37.9bn in revenues in 2011, $36bn came from advertising, aka the flip side of Google Search. Despite a whole string of mammoth product launches since 2007, Google’s business is essentially what it was in 2007 – a massive search-based advertising machine.

Google’s Challenges

Our last Google coverage – Android: An Anti-Apple Virus? and the Dealing with the Disruptors Strategy Report – suggested that the search giant was suffering from a lack of direction, although some of this was accounted for by a deliberate policy of experimenting and shutting down failed initiatives.

Since then, Google has launched Google +, closed Google Buzz, and closed Google Wave while releasing it into a second life as an open-source project. It has been involved in major litigation over patents and in regulatory inquiries. It has seen an enormous boom in Android shipments but not necessarily much revenue. It is about to become a major hardware manufacturer by acquiring Motorola. And it has embarked on extensive changes to the core search product and to company-wide UI design.

In this note, we will explore Google’s activities since our last note, summarise key threats to the business and strategies to counter them, and consider if a bearish view of the company is appropriate.

We’ve found it convenient to organise Google’s business  into several themed groups as follows:

1: Questionable Victories

Pyrrhic victory is defined as a victory so costly it is indistinguishable from defeat. Although there is nothing so bad at Google, it seems to have a knack of creating products that are hugely successful without necessarily generating cash. Android is exhibit A. 

The obvious point here is surging, soaring growth – forecasts for Android shipments have repeatedly been made, beaten on the upside, adjusted upwards, and then beaten again. Android has hugely expanded the market for smartphones overall, caused seismic change in the vendor industry, and triggered an intellectual property war. It has found its way into an awe-inspiring variety of devices and device classes.

But questions are still hanging over how much actual money is involved. During the Q4 results call, a figure for “mobile” revenues of $2.5bn was quoted. This turns out to consist of advertising served to browsers that present a mobile device user-agent string. However, Google lawyer Susan Creighton is on record as saying  that 66% of Google mobile web traffic originates from Apple iOS devices. It is hard to see how this can be accounted for as Android revenue.

Further, the much-trailed “fragmentation” began in 2011 with a vengeance. “Forkdroids”, devices using an operating system based on Android but extensively adapted (“forked” from the main development line), appeared in China and elsewhere. Amazon’s Kindle Fire tablet is an example closer to home.

And the intellectual property fights with Oracle, Apple, and others are a constant source of disruption and a potentially sizable leakage of revenue. In so far as Google’s motivation in acquiring Motorola Mobility was to get hold of its patent portfolio, this has already involved very large sums of money. Another counter-strategy is the partnership with Intel and Lenovo to produce x86-based Android devices, which cannot be cheap either and will probably mean even more fragmentation.

This is not the only example, though – think of Google Books, an extremely expensive product which caused a great deal of litigation, eventually got its way (although not all the issues are resolved), and is now an excellent free tool for searching in old books but no kind of profit centre. Moreover, Google’s patented automatic scanning has the unfortunate feature of pulling in marginalia, etc. from the original text that its rivals (such as Amazon Kindle) don’t.

Further, Google has recently been trying to monetise one of its classic products, the Google Maps API that essentially started the Web 2.0 phenomenon, with the result that several heavy users (notably Apple and Foursquare) have migrated to the free OpenStreetMap project and its OpenLayers API.

2: Telco-isation

Like a telco, Google is dependent on one key source of revenue that cross-subsidises the rest of the company – search-based advertising. 

Figure 2: Google’s advertising revenues cascade into all other divisions


[NB TAC = Traffic Acquisition Cost, CoNR = Cost of Net Revenues]

Having proven to be a category killer for search and advertising across the  whole of the Internet, the twins (search and ads) are hugely critical for Google and also for millions of web sites, content creators, and applications developers. As a result, just like a telco, they are increasingly subject to regulation and political risk. 

Google search rankings have always been subject to an arms race between the black art of search-engine optimisation and Google engineers’ efforts to ensure the integrity of their results, but the whole issue has taken a more serious twist with the arrival of a Federal Trade Commission inquiry into Google’s business practices. The potential problems were dramatised by the so-called “white lady from Google”  incident at Google Kenya, where Google employees scraped a rival directory website’s customers and cold-called them, misrepresenting their competitors’ services, and further by the $500 million online pharmacy settlement. Similarly, the case of the Spanish camp site that wants to be disassociated from horrific photographs of a disaster demonstrates both that there is a demand for regulation and that sooner or later, a regulator or legislator will be tempted to supply it.

The decision to stream Google search quality meetings online should be seen in this light, as an effort to cover this political flank.

As well as the FTC, there is also substantial regulatory risk in the EU. The European Commission, in giving permission for the Motorola acquisition, also stated that it would consider further transactions involving Google and Motorola’s intellectual property on a case-by-case basis. To put it another way, after the Motorola deal, the Commission has set up a Google Alert for M&A activity involving Google.

3: Look & Feel Problems

Google is in the process of a far-reaching refresh of its user interfaces, graphic design, and core search product. The new look affects Search, GMail, and Google + so far, but is presumably going to roll out across the entire company. At the same time, they have begun to integrate Google + content into the search results.

This is, unsurprisingly, controversial and has attracted much criticism, so far only from the early adopter crowd. There is a need for real data to evaluate it. However, there are some reasons to think that Search is looking in the wrong place.

Since the major release codenamed Caffeine in 2008, Google Search engineers have been optimising the system for speed and for first-hit relevance, while also indexing rapidly-changing content faster by redesigning the process of “spidering” web sites to work in parallel. Since then, Google Instant has further concentrated on speed to the first result. In the Q4 results, it was suggested that mobile users are less valuable to Google than desktop ones. One reason for this may be that “obvious” search – Wikipedia in the first two hits – is well served by mobile apps. Some users find that Google’s “deep web” search has suffered.

Under “Google and your world”, recommendations drawn from Google + are being injected into search results. This is especially controversial for a mixture of privacy and user-experience reasons. Danny Sullivan’s SearchEngineLand, for example, argues that it harms relevance without adding enough private results to be of value. Further, doubt has been cast on Google’s numbers regarding the new policy of integrating Google accounts into G+ and G+ content into search.

Another cogent criticism is that it introduces an element of personality that will render regulatory issues more troublesome. When Google’s results were visibly the output of an algorithm, it was easier for Google to claim that they were the work of impartial machines. If they are given agency and associated with individuals, it may be harder to deny that there is an element of editorial judgment – and hence the possibility of bias – involved.

Social search has been repeatedly mooted since the mid-2000s as the next-big-thing, but it seems hard to implement. Yahoo!, Facebook, and several others have tried and failed.

Figure 3: Google + on Google Trends: fading into the noise?

Source: Google Trends

It is possible that Google may have a structural weakness in design as opposed to engineering (which is as excellent as ever). This may explain why a succession of design-focused initiatives have failed – Wave and Buzz have been shut down, Google TV hasn’t gained traction (there are fewer than one million active devices), and feedback on the developer APIs is poor.

4: Palpable Project Proliferation

Google’s tendency to launch new products is as intimidating as ever. However, there is a strong argument that its tireless creativity lacks focus, and the hit-rate is worryingly low. Does Google really need two cut-down OSs for ultra-mobile devices? It has both Android and ChromeOS, and while the first was intended for mobile phones and the second for netbooks, you can now buy a netbook-like (but rather more powerful) Asus PC that runs Android. Further, Google supports a third operating system for its own internal purposes – the highly customised version of Linux that powers the Google Platform – and could be said to support a fourth, as it pays the Mozilla Foundation substantial amounts of money under the terms of their distribution agreement and its Boot to Gecko project is essentially a mobile OS. IBM also supported four operating systems at its historic peak in the 1980s.

Also, does Google really need to operate an FTTH network, or own a smartphone vendor? The Larry Page quote we opened with tends to suggest that Google’s historical tendency to do experiments is at work, but both Google’s revenue raisers (Ads and YouTube, which from an economic point of view is part of the advertising business) date from the first three years as a public company. The only real hit Google has had for some time is Android, and as we have seen, it’s not clear that it makes serious money.

Google Wallet, for example, was launched with a blaze of publicity, but failed to attract support from either the financial or the telecoms industry, rather like its predecessor Google Checkout. It also failed to gain user adoption, but it has this in common with all NFC-based payments initiatives. Recently, a major security bug was discovered, and key staff have been leaving steadily, including the head of consumer payments. Another shutdown is probably on the cards. 

Meanwhile, a whole range of minor applications has been shuttered.

Another heavily hyped project which does not seem to be gaining traction is the Chromebook, the hardware-as-a-service IT offering aimed at enterprises. This has been criticised on the basis that its $28/seat/month pricing is actually rather high: over a typical three-year depreciation cycle for IT equipment, it is on a par with Apple laptops, and comes with the restriction that all applications must work in a web browser on netbook-class hardware. Google management has been promoting small contract wins in US school districts. Meanwhile, it is frequently observed that Google’s own PC fleet consists mostly of Apple hardware. If Google won’t use them itself, why should any other enterprise IT shop do so? The Google Search meeting linked above contains 2 Lenovo ThinkPads and 13 Apple MacBooks of various models and zero Chromebooks, while none other than Eric Schmidt used a Mac for his MWC 2012 keynote. Traditionally, Google insisted on “dogfooding” its products by using them internally.
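A back-of-the-envelope check of that comparison (pricing as quoted above; the depreciation period is the typical three-year assumption):

```python
# Quick check of the Chromebook-vs-laptop comparison above (illustrative only).
monthly_per_seat = 28
months = 36                       # typical three-year IT depreciation cycle
print(f"Total per seat: ${monthly_per_seat * months}")   # $1,008 -- premium-laptop territory
```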

The Google Fibre project in Kansas City, for its part, has been struggling with regulatory problems related to its access to city-owned civil infrastructure. Kansas City’s utility poles have reserved areas for different services, for example telecoms and electrical power. Google was given the concession to string the fibre in the more spacious electrical section – however, this requires high voltage electricians rather than telecoms installers to do the job and costs substantially more. Google has been trying to change the terms, and use the telecoms section, but (unsurprisingly) local cable and Bell operators are objecting. As with the muni-WLAN projects of the mid-2000s, the abortive attempt to market the Nexus One without the carriers, and Google Voice, Google has had to learn the hard way that telecoms is difficult.

And while all this has been going on, you might wonder where Google Enterprise 2.0 or Google Ads 2.0 are.

5: Google Play – a Collection of Challenges?

Google recently announced its “new ecosystem”, Google Play. This consists of what was historically known as the Android Market, plus Google Books, Google Music, and the web-based elements of Google Wallet (aka Google Checkout). All of these products are more or less challenged. Although the Android Market has been a success in distributing apps to the growing fleets of Android devices, it continues to contain an unusually high percentage of free apps, developer payouts tend to be lower than on its rivals, and it has had repeated problems with malware. Google Books has been an expensive hobby, involving substantial engineering work and litigation, and seems unlikely to be a profit centre. Google Music – as opposed to YouTube – is also no great success, and it is worth asking why both projects continue.

However, it will be the existing manager of Google Music who takes charge, with Android Market management moving out. It is worth noting that in fact there were two heads of the Android Market – Eric Chu for developer relations and David Conway for product management. This is not ideal in itself.

Further, an effort is being made to force app developers to use the ex-Google Checkout system for in-app billing. This obviously reflects an increased concern for monetisation, but it also suggests a degree of “arguing with the customers”.

To read the note in full, including the following additional analysis…

  • On the Other Hand…
  • Strengths of the Core Business
  • “Apple vs. Google”
  • Content acquisition
  • Summary Key Product Review
  • Search & Advertising
  • YouTube and Google TV
  • Communications Products
  • Android
  • Enterprise
  • Developer Products
  • Summary: Google Dashboard
  • Conclusion
  • Recommendations for Operators
  • The Telco 2.0™ Initiative
  • Index

…and the following figures…

  • Figure 1: Google, Microsoft 2.0?
  • Figure 2: Google’s advertising revenues cascade into all other divisions
  • Figure 3: Google + on Google Trends: fading into the noise?
  • Figure 4: Google’s Diverse Advertiser Base
  • Figure 5: Google’s Content Acquisition. 2008-2009, the missing data point
  • Figure 6: Google Product Dashboard

Members of the Telco 2.0 Executive Briefing Subscription Service and the Telco 2.0 Dealing with Disruption Stream can download the full 24 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £595 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, geographies, people and products referenced: AdSense, AdWords, Amazon, Android, Apple, Asus, AT&T, Australia, BBVA, Bell Labs, Boot to Gecko, Caffeine, CES, China, Chromebook, ChromeOS, ContentID, David Conway, Eric Chu, Eric Schmidt, European Commission, Facebook, Federal Trade Commission, GMail, Google, Google +, Google Books, Google Buzz, Google Checkout, Google Maps, Google Music, Google Play, Google TV, Google Voice, Google Wave, GSM, IBM, Intel, Kenya, Keyhole Software, Kindle Fire, Larry Page, Lenovo, Linux, MacBooks, Microsoft, Motorola, Mozilla Foundation, Netflix, Nexus, Office 365, OneNet, OpenLayers API, OpenStreetMap, Oracle, Susan Creighton, ThinkPads, VMWare, Vodafone, Western Electric, Wikipedia, Yahoo!, Your World, YouTube, Zynga

Technologies and industry terms referenced: advertisers, API, content acquisition costs, driverless car, Fibre, Forkdroids, M&A, mobile apps, muni-WLAN, NFC, Search, smart TV, spectrum, UI, VoIP, Wallet

Cloud 2.0: Telcos to grow Revenues 900% by 2014

Summary: Telcos should grow Cloud Services revenues nine-fold and triple their overall market share in the next three years according to delegates at the May 2011 EMEA Executive Brainstorm. But which are the best opportunities and strategies? (June 2011, Executive Briefing Service, Cloud & Enterprise ICT Stream)

NB Members can download a PDF of this Analyst Note in full here. Cloud Services will also feature at the Best Practice Live! Free global virtual event on 28-29 June 2011.


Introduction

STL Partners’ New Digital Economics Executive Brainstorm & Developer Forum EMEA took place from 11-13 May in London. The event brought together 250 execs from across the telecoms, media and technology sectors to take part in 6 co-located interactive events: the Telco 2.0, Digital Entertainment 2.0, Mobile Apps 2.0, M2M 2.0 and Personal Data 2.0 Executive Brainstorms, and an evening AppCircus developer forum.

Building on output from the last Telco 2.0 events and new analysis from the Telco 2.0 Initiative – including the new strategy report ‘The Roadmap to New Telco 2.0 Business Models’ – the Telco 2.0 Executive Brainstorm explored latest thinking and practice in growing the value of telecoms in the evolving digital economy.

This document gives an overview of the output from the Cloud session of the Telco 2.0 stream.

Companies referenced: Aepona, Amazon Web Services, Apple, AT&T, Bain, BT, CenturyLink, Cisco, Dropbox, Embarq, Equinix, Flexible 4 Business, Force.com, Google Apps, HP, IBM, Intuit, Microsoft, Neustar, Orange, Qwest, Salesforce.com, SAP, Savvis, Swisscom, Terremark, T-Systems, Verizon, Webex, VMware.

Business Models and Technologies covered: cloud services, Enterprise Private Cloud (EPC), Virtual Private Cloud (VPC), Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

Cloud Market Overview: 25% CAGR to 2013

Today, Telcos have around a 5% share of nearly $20Bn p.a. cloud services revenue, with 25% compound annual growth rate (CAGR) forecast to 2013. Most market forecasts are that the total cloud services market will reach c.$45-50Bn revenue by 2013 / 2014, including the Bain forecast previewed at the Americas Telco 2.0 Brainstorm in April 2011.

At the EMEA brainstorm, delegates were presented with an overview of the component cloud markets and examples of different cloud services approaches, and were then asked for their views on what share telcos could take of cloud revenues in each. In total, delegates’ views amounted to telcos taking in the region of 18% of cloud services revenue by the end of the next three years.

Applying these views to an extrapolated ‘mid-point’ forecast of the cloud market in 2014 implies that telcos will take just under $9bn of revenue from cloud by 2014, increasing today’s c.$1bn share nine-fold. [NB More detailed methodology and sources are in the full paper available to members here.]
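A rough arithmetic check of those delegate numbers, taking the upper end of the mid-point forecast quoted above (figures are approximate):

```python
# Quick arithmetic check of the delegate estimates quoted above.
market_2014 = 50e9          # c.$45-50bn cloud services market by 2013/14
telco_share_2014 = 0.18     # delegates' aggregate view of telco share
telco_today = 1e9           # c.$1bn telco cloud revenue today (~5% of ~$20bn)

telco_2014 = market_2014 * telco_share_2014
print(f"Implied telco cloud revenue in 2014: ${telco_2014/1e9:.0f}bn "
      f"({telco_2014/telco_today:.0f}x today's c.$1bn)")
```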

Figure 1 – Cloud Services Market Forecast & Players


Source: Telco 2.0 Presentation

Although cloud is already a multi-$bn market, there is still a reasonable degree of uncertainty and variance in forecasts, as might be expected in a still-maturing market, so the total could be a lot higher – or perhaps lower, especially if the consequences of the recent Amazon AWS outage significantly reduce CIOs’ appetite for Cloud.

The potential for c.30% IT cost savings and speed-to-market benefits that telcos can achieve by implementing cloud internally – previously shown by Cisco’s case study – was highlighted but not explored in depth at this session.

Which cloud markets should telcos target?

Figure 2 – Cloud Services – Telco Positioning


Source: Cisco/Orange Presentation, 13th Telco 2.0 Executive Brainstorm, London, May 2011

An interesting feature of the debate was which areas telcos would be most successful in, and the timing of market entry strategies. Orange and Cisco argued that ‘Virtual Private Cloud’, although neither the largest nor predicted to be the fastest-growing area, should be the first market for some telcos to address, as it appeals to telcos’ strong ‘trust’ credentials with CIOs and builds on ‘managed services’ enterprise IT sales and delivery capabilities.

Orange described its ‘Flexible 4 Business’ value proposition, delivered in partnership with Cisco, VMware virtualisation and EMC2 storage; although it could not give any performance metrics at this early stage, it described strong demand and claimed satisfaction with progress to date.

Aepona described a Platform-as-a-Service (PaaS) concept that they are launching shortly with Neustar that aggregates telco APIs to enable the rapid creation and marketing of new enterprise services.

Figure 3 – Aepona / Neustar ‘Intelligent Cloud’ PaaS Concept


In this instance, the cloud component makes the service more flexible, cheaper and easier to deliver than a traditional IT structure. This type of concept is sometimes described as a ‘mobile cloud’ because many of the interesting uses relate to mobile applications and do not rely on the continuous, high-grade mobile connectivity required for, say, IaaS: rather, they can make use of bursts of connectivity to validate identities and the like via APIs ‘in the cloud’.

To read the rest of this Analyst Note, containing…

  • Forecasts of telco share of cloud by VPC, IaaS, PaaS and SaaS
  • Telco 2.0 take-outs and next steps
  • And detailed Brainstorm delegate feedback

Members of the Telco 2.0™ Executive Briefing Subscription Service and the Cloud and Enterprise ICT Stream can access and download a PDF of the full report here. Non-Members, please see here for how to subscribe. Alternatively, please email contact@telco2.net or call +44 (0) 207 247 5003 for further details.

Cloud 2.0: What are the Telco Opportunities?

Summary: Telco 2.0’s analysis of operators’ potential role and opportunity in ‘Cloud Services’, a set of new business model opportunities that are still at an early stage of development – although players such as Amazon have already blazed a substantial trail. (December 2010, Executive Briefing Service, Cloud & Enterprise ICT Stream & Foundation 2.0)

  • Below is an extract from this Telco 2.0 Report. The report can be downloaded in full PDF format by members of the Telco 2.0 Executive Briefing service and the Cloud and Enterprise ICT Stream here.
  • Additionally, to give an introduction to the principles of Telco 2.0 and digital business model innovation, we now offer for download a small selection of free Telco 2.0 Briefing reports (including this one) and a growing collection of what we think are the best 3rd party ‘white papers’. To access these reports you will need to become a Foundation 2.0 member. To do this, use the promotional code FOUNDATION2 in the box provided on the sign-up page here. NB By signing up to this service you give consent to us passing your contact details to the owners / creators of any 3rd party reports you download. Your Foundation 2.0 member details will allow you to access the reports shown here only, and once registered, you will be able to download the report here.
  • See also the videos from IBM on what telcos need to do, and Oracle on the range of Cloud Services, and the Telco 2.0 Analyst Note describing Americas and EMEA Telco 2.0 Executive Brainstorm delegates’ views of the Cloud Services Opportunity for telcos.
  • We’ll also be discussing Cloud 2.0 at the Silicon Valley (27-28 March) and London (12-13 June) Executive Brainstorms.
  • To access reports from the full Telco 2.0 Executive Briefing service, or to submit whitepapers for review for inclusion in this service, please email contact@telco2.net or call +44 (0) 207 247 5003.


 

The Cloud: What Is It?

Apart from being the leading buzzword in the enterprise half of the IT industry for the last few years, what is this thing called “Cloud”? Specifically, how does it differ from traditional server co-location, or indeed time-sharing on mainframes as we did in the 1970s? These are all variations on the theme of computing power being supplied from a remote machine shared with other users, rather than from PCs or servers deployed on-site.

Two useful definitions were voiced at the 11th Telco 2.0 EMEA Executive Brainstorm in November 2010:

  • “A standardised IT Capability delivered in a pay-per-use, self-service way.” Stephan Haddinger, Chief Architect Cloud Computing, Orange – citing a definition by Forrester.
  • “STEAM – A Self-Service, multi-Tenanted, Elastic, broad Access, and Metered IT Service.” Neil Sholay, VP Cloud and Comms, EMEA, Oracle.

The definition of Cloud has been rendered significantly more complicated by the hype around “cloud” and the resultant tendency to use it for almost anything that is network resident. For a start, it’s unhelpful to describe anything that includes a Web site as “cloud computing”. A good way to further understand ‘Cloud Services’ is to look at the classic products in the market.

The most successful of these, Amazon’s S3 and EC2, provide low-level access to computing resources – disk storage, in S3, and general-purpose CPU in EC2. This differs from an ASP (Application Service Provider) or Web 2.0 product in that what is provided isn’t any particular application, but rather something close to the services of a general purpose computer. It differs from traditional hosting in that what is provided is not access to one particular physical machine, but to a virtual machine environment running on many physical servers in a data-centre infrastructure, which is probably itself distributed over multiple locations. The cloud operator handles the administration of the actual servers, the data centres and internal networks, and the virtualisation software used to provide the virtual machines.

Varying degrees of user control over the system are available. A major marketing point, however, is that the user doesn’t need to worry about system administration – it can be abstracted out as in the cloud graphic that is used to symbolise the Internet on architecture diagrams. This tension between computing provided “like electricity” and the desire for more fine-grained control is an important theme. Nobody wants to specify how their electricity is routed through the grid, although increasing numbers of customers want to buy renewable power – but it is much more common for businesses (starting at surprisingly small scale) to have their own Internet routing policies.

So, for example, although Amazon’s cloud services are delivered from their global data centre infrastructure, it’s possible to specify where EC2 instances run to a continental scale. This provides for compliance with data protection law as well as for performance optimisation. Several major providers, notably Rackspace, BT Global Services, and IBM, offer “private cloud” services which represent a halfway house between hosting/managed service and fully virtualised cloud computing. And some explicit cloud products, such as Google’s App Engine, provide an application environment with only limited low-level access, as a rapid-prototyping tool for developers.

The Cloud: Why Is It?

Back at the November 2009 Telco 2.0 Executive Brainstorm in Orlando, Joe Weinman of AT&T presented an argument that cloud computing is “a mathematical inevitability”. His fundamental point is worth expanding on. For many cloud use cases, the decision between moving into the cloud and using a traditional fleet of hosted servers is essentially a rent-vs-buy calculus. Weinman’s point was that once you acquire servers, whether you own them and co-locate or rent them from a hosting provider, you are committed to acquiring that quantity of computing capacity whether you use it or not. Scaling up presents some problems, but it is not that difficult to co-locate more 1U racks. What is really problematic is scaling down.

Cloud computing services address this by essentially providing volume pricing for general-purpose computing – you pay for what you use. The cloud therefore has an advantage when there are compute-intensive tasks with a highly skewed traffic distribution, in a temporary deployment, or in a rapid-prototyping project. However, problems arise when there is a need for capacity on permanent standby, or serious issues of data security, business continuity, service assurance, and the like. These are also typical rent-vs-buy issues.
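A toy illustration of that rent-vs-buy calculus follows; the load profile and unit prices are invented for the example, not taken from Weinman’s presentation.

```python
# Illustrative rent-vs-buy comparison under a skewed load profile.
# All prices and the load profile are invented for the example.
hourly_demand = [2, 2, 2, 2, 3, 5, 20, 60, 20, 5, 3, 2]   # servers needed per period
own_cost_per_server_period = 1.0     # owned/co-located: you pay for peak capacity all the time
cloud_cost_per_server_period = 3.0   # cloud: higher unit price, but pay only for what you use

owned = max(hourly_demand) * own_cost_per_server_period * len(hourly_demand)
cloud = sum(hourly_demand) * cloud_cost_per_server_period
print(f"Owned (sized for peak): {owned:.0f}   Cloud (pay per use): {cloud:.0f}")
# With this skewed profile the cloud wins; with flat, predictable demand the owned fleet would.
```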

Another reason to move to the cloud is that providing high-availability computing is expensive and difficult. Cloud computing providers’ core business is supporting large numbers of customers’ business-critical applications – it might make sense to pass this task to a specialist. Also, their typical architecture, using virtualisation across large numbers of PC-servers to achieve high availability in the manner popularised by Google, doesn’t make sense except on a scale big enough to provide a significant margin of redundancy in the hardware and in the data centre infrastructure.

Why Not the Cloud?

The key objections to the cloud are centred around trust – one benefit of spreading computing across many servers in many locations is that this reduces the risk of hardware and/or connectivity failure. However, the problem with moving your infrastructure into a multi-tenant platform is of course that it’s another way of saying that you’ve created a new, enormous single point of commercial and/or software failure. It’s also true that the more critical and complex the functions that are moved into cloud infrastructure, and the more demanding the contractual terms that result, the more problematic it becomes to manage the relationship. (Neil Lock, IT Services Director at BT Global Services, contributed an excellent presentation on this theme at the 9th Telco 2.0 Executive Brainstorm.) At some point, the additional costs of managing the outsourcer relationship intersect with the higher costs of owning the infrastructure and internalising the contract. One option involves spending more money on engineers, the other, spending more money on lawyers.

Similar problems exist with regard to information security – a malicious actor who gains access to administrative features of the cloud solution has enormous opportunities to cause trouble, and the scaling features of the cloud mean that it is highly attractive to spammers and denial-of-service attackers. Nothing else offers them quite as much power.

Also, as many cloud systems make a virtue of the fact that the user doesn’t need to know much about the physical infrastructure, it may be very difficult to guarantee compliance with privacy and other legislation. Financial and other standards sometimes mandate specific cryptographic, electronic, and physical security measures. It is quite possible that the users of major clouds would be unable to say in which jurisdiction users’ personal data is stored. They may consider this a feature, but this is highly dependent on the nature of your business.

From a provider perspective, the chief problem with the cloud is commoditisation. At present, major clouds are the cheapest way bar none to buy computing power. However, the very nature of a multi-tenant platform demands significant capital investment to deliver the reliability and availability the customers expect. The temptation will always be there to oversubscribe the available capacity – until the first big outage. A capital intensive, very high volume, and low price business is the classic case of a commodity – many operators would argue that this is precisely what they’re trying to get away from. Expect vigorous competition, low margins, and significant CAPEX requirements.

To download a full PDF of this article, covering…

  • What’s in it for Telcos?
  • Conclusions and Recommendations

…Members of the Telco 2.0™ Executive Briefing Subscription Service and the Cloud & Enterprise ICT Stream can read the Executive Summary and download the full report in PDF format here. Non-Members, please email contact@telco2.net or call +44 (0) 207 247 5003 for further details.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables:

Cloud Services 2.0: Clearing Fog, Sunshine Forecast, say Telco 2.0 Delegates

Summary: The early stage of development of the market means there is some confusion about the telco Cloud opportunity, yet clarity is starting to emerge, and the concept of ‘Network-as-a-Service’ found particular favour with Telco 2.0 delegates at our October 2010 Americas and November 2010 EMEA Telco 2.0 Executive Brainstorms. (December 2010, Executive Briefing Service, Cloud & Enterprise ICT Stream)

The full 15-page PDF report is available for members of the Executive Briefing Service and Cloud and Enterprise ICT Stream here. For membership details please see here, or to join, email contact@telco2.net or call +44 (0) 207 247 5003. Cloud Services will also feature at Best Practice Live!, Feb 2-3 2011, and the 2011 Telco 2.0 Executive Brainstorms.

Executive Summary

Clearing Fog

Cloud concepts can sometimes seem as baffling and as nebulous as their namesakes. However, at the recent Telco 2.0 Executive Brainstorms (Americas in October 2010 and EMEA in November 2010), stimulus presentations by IBM, Oracle, FT-Orange Group, Deutsche Telekom, Intel, Salesforce.com, Cisco and BT-Ribbit, together with delegate discussions, really brought the Cloud Services opportunities to life.

While it was generally agreed that the precise definitions delineating the many possible varieties of the service are not always useful, it does matter how operators can make money from the services, and there was at least consensus on this.

Sunshine Forecast: A Significant Opportunity…

IBM identified an $88.5Bn opportunity in the Cloud over the next 5 years, the majority of which is applicable to telcos, although the share that will end up in the telco industry might be as much as 70% or as little as 30%, depending on how operators go about it (video here).

According to Cisco, there is a $44Bn telco opportunity in Cloud Services by 2014, supported by the evidence of 30%+ enterprise IT cost savings and productivity gains that resulted from Cisco’s own comprehensive internal adoption of cloud services (video here). We see this estimate as reasonably consistent with IBM’s.

Oracle also brought the range of opportunities to life with seven contrasting real-life case studies (video here).

Ribbit, AT&T, and Salesforce.com also supported the viability of Cloud Services, arguing that concerns over trust and privacy are gradually being allayed. Intel argued that Network as a Service (NaaS) is emerging as a cloud opportunity alongside Enterprise and Public Clouds, and that by combining NaaS with the telco influence over devices and device computing power, telcos can be a major player in a new ‘Pervasive Computing’ environment. EMEA delegates also viewed Network-as-a-Service as the most attractive opportunity.

Fig 1 – Delegates Favoured ‘Network-as-a-Service’ of the Cloud Opportunities


Source: Telco 2.0 Delegate Vote, 11th Brainstorm, EMEA, Nov 2010.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables:

Cloud 2.0: What Should Telcos do? IBM’s View

Summary: IBM say that telcos are well positioned to provide cloud services, and forecast an $89Bn opportunity over 5 years globally. Video presentation and slides (members only) including forecast, case studies, and lessons for future competitiveness.

Cloud Services will also feature at Best Practice Live!, Feb 2-3 2011, and the 2011 Telco 2.0 Executive Brainstorms.

 

At the 11th EMEA Telco 2.0 Brainstorm, November 2010, Craig Wilson, VP, IBM Global Telecoms Industry, said that:

  • Cloud Services represent an $89Bn opportunity in 5 years;
  • Telcos / Service Providers are “well positioned” to compete in Cloud Services;
  • Security remains the CIO’s biggest question mark, but one that telcos can help with;
  • He also outlined two APAC telco Cloud case studies.

Members of the Telco 2.0 Executive Briefing Service and the Cloud and Enterprise ICT Stream can also download Craig’s presentation here (for membership details please see here, or to join, email contact@telco2.net or call +44 (0) 207 247 5003).

See also videos by Oracle describing a range of cloud case studies, Cisco on the market opportunity and their own case study of Cloud benefits, and Telco 2.0’s Analyst Note on the Cloud Opportunity.

Telco 2.0 Next Steps

Objectives:

  • To continue to analyse and refine the role of telcos in Cloud Services, and how to monetise them;
  • To find and communicate new case studies and use cases in this field.

Deliverables:

 

Full Article: Mobile Software Platforms – Rapid Consolidation is Forecast

Summary: New analysis suggests that only three or four mobile handset software platforms will remain by 2012.

This is a Guest Briefing from Arete Research, a Telco 2.0™ partner specialising in investment analysis.

The views in this article are not intended to constitute investment advice from Telco 2.0™ or STL Partners. We are reprinting Arete’s Analysis to give our customers some additional insight into how some Investors see the Telecoms Market.

Mobile Software Home Truths

Wireless Devices


Amidst all the swirl of excitement around mobile software, some dull realities are setting in.  As the barn gets crowded with ever more exotic breeds (in alphabetical order: Android, Apple OSX, Blackberry, LiMo, Maemo, Moblin, Symbian, WebOS, WindowsMobile), there is a growing risk of fragmentation and consumer confusion.  We see some unglamorous “home truths” about mobile software getting lost in the weeds.

 

Few, if any, vendors make money from mobile software.  Microsoft makes $160 of gross profit per PC while mobile software is moving royalty-free. The few pure plays (like Opera) rely on sales of services around their software.  Mobile software only gets leverage from related services (often a single one).  These must be tightly linked to devices, e.g., e-mail (Blackberry), e-books (Kindle), music (iTunes) or gaming (XBoxLive), with resulting communities controlled by their choice of software; few services work equally well on all devices (e.g., search, YouTube).

 

AppStores are not (yet) content stores. OEMs must link themselves with cloud services (like Motorola’s new BLUR platform) or offer their own (e.g., iTunes, Ovi, etc.).  Individual developers find it hard to make money through AppStores: if even one were making $10m in sales, it would be widely publicised.  Exclusive or “sponsored” applications like navigation or content-like games should fare much better.

 

We see room for only three to four platforms by 2012.  The pace of innovation, R&D cost, and need for customisation (for hardware, operators and languages) invites consolidation.  Supporting OEMs and reaching out to developers is costly and labour-intensive; only over time might HTML5 browsers supplant device-specific applications.  No platform is so productised as to simply hand over to licensees (be they OEMs or operators).

 

Every smartphone will support one (or more) AppStores.  We do not know how many services or what content AppStores 2.0 might offer, or how they will be made relevant to consumers.  The most popular applications should work on every smartphone, even as some devices (like INQ) are optimised for versions of Facebook, Amazon, Twitter, Skype and other popular digital brands and services. AppStores may help OEMs build relationships with users of those services, though both vendors and operators will try to control billing.

 

All phones are becoming smart.  So-called smartphones get attention as a growth segment in a declining handset market, but “dumbphones” (using proprietary software like Nokia’s S40, Samsung SHP/TouchWiz or LG’s S-Class) are getting more sophisticated.  The costs of the two are converging. Featurephones will soon also support AppStores and Internet services.

 

Table 1: Platform Penetration

 

Platform            ’09E     ’11E     Comment
Symb. v9.3+/S60     ~110m    ~240m    S60 goes mid-range
Apple OSX           ~30m     ~120m    Incl. iPod Touch
B’berry OS 4.5+     ~40m     ~80m     Doubling OS base
Android             <10m     ~80m     From >10 OEMs
WinMo 6+            ~10m     ~50m     Transition to Win7
Palm WebOS          <5m      ~15m     Limits w/o licensing
LiMo                <5m      ~20m     Platform for LCHs

Source: Arete Research estimates.

 

Hard Graft

The costs of developing and maintaining complex software platforms are increasing.  There are no shortcuts to the sheer volume of work, especially in building on legacy code bases, supporting operator requirements, or developing language packs.  Every platform faces significant roadmap issues. Some handset OEMs are building adaptation layers to port a range of applications to their own branded UIs.  Handling streaming media or financial transactions needs additional processing power to deal with security and viruses; supporting the multi-core chipsets that provide it requires software re-writes and poses power-management challenges (tripling or quadrupling processing will drain batteries faster).

 

We long predicted video would become as ubiquitous as voice, i.e., with devices designed around handling video traffic.  There is a wide range of solutions for coping with streaming video, including in software (i.e., Flash or Silverlight) rather than via hardware optimisations. Apple has patented technology around adaptive bit rate codecs to handle streaming in its forthcoming iPhones.  All platforms need to support over-the-air (OTA) updates, embrace graphics-rich applications, handle HD content, and comply with an array of USB drivers and accessories.
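
Whatever the specifics of Apple’s patented approach (which we have not seen), the adaptive bit rate idea itself is simple: the client measures recent throughput and switches to the highest rendition it can sustain with some headroom. A generic, hypothetical Java sketch of that selection logic:

```java
public class AdaptiveBitrateSketch {
    // Available renditions of the same stream, in kbit/s (illustrative values only).
    private static final int[] RENDITIONS_KBPS = {200, 400, 800, 1500, 3000};

    // Keep ~20% headroom so a small dip in throughput does not stall playback.
    private static final double SAFETY_MARGIN = 0.8;

    /** Pick the highest rendition the measured throughput can sustain. */
    static int chooseRendition(double measuredThroughputKbps) {
        double budget = measuredThroughputKbps * SAFETY_MARGIN;
        int chosen = RENDITIONS_KBPS[0]; // always fall back to the lowest rendition
        for (int bitrate : RENDITIONS_KBPS) {
            if (bitrate <= budget) {
                chosen = bitrate;
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        System.out.println(chooseRendition(1000)); // prints 800
        System.out.println(chooseRendition(250));  // prints 200
    }
}
```

A real player would re-run this decision every few seconds as conditions change, which is where the extra processing and battery cost comes in.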

 

It is also not clear whether application downloads are a novelty or a mass market phenomenon. Discovery and recommendation engines need to be improved on most platforms, and marketing must focus on what applications offer. The gap between legacy platforms and an over-the-air customisable user experience is a wide one, and will not be resolved by AppStores, fresh UIs, or moves to go open source. Widget and webkit technologies could bring similar UXs across multiple devices.  Most developers will not need access to lower layers or need to optimise applications for specific hardware.  Over time, HTML5 browsers could supplant device-specific applications (e.g., GMail runs on an iPhone as a web application, as does WebOutlook on Android), but OEMs are unlikely to embrace this approach.  This also does nothing to extend billing or allow for collection of detailed customer analytics.
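
The attraction of the browser route is that one deployment reaches every device with a capable browser; only the presentation adapts, not the code base. As a minimal, hypothetical sketch (using only the JDK’s built-in HTTP server, not any OEM or operator API), the same HTML5 page below would be served unchanged to an iPhone, an Android device or a desktop:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class WebAppSketch {
    // One HTML5 page for all devices; layout adapts via the viewport meta tag
    // and CSS rather than via per-platform native code.
    private static final String PAGE =
            "<!DOCTYPE html><html><head>" +
            "<meta name=\"viewport\" content=\"width=device-width\">" +
            "<title>Demo</title></head>" +
            "<body><h1>Same app, any smartphone browser</h1></body></html>";

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", (HttpExchange exchange) -> {
            byte[] body = PAGE.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // browse to http://localhost:8080/ from any device on the network
    }
}
```

As the paragraph above notes, this route does nothing by itself for billing or detailed customer analytics; those would have to be bolted on separately.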

 

At the same time, operators’ selection criteria are moving from form factors to user experiences.  Operator UX teams now number in the 100s of staff, even if they take a fragmented approach: Vodafone-subsidised devices currently support Android Market, Blackberry AppsWorld, OviStore and the iPhone AppStore, while Vodafone also runs its own developer programme (Betavine). Few telcos develop native applications; most use ones that run in Java, Webkit, Widgets, etc. Only a few (e.g., Verizon Wireless) offer a customised UI.

 

While Apple and Google get the most attention (as pioneers of the AppStore concept, and for providing a shop-front for the open source community), Nokia and Microsoft have pivotal roles to play.  Both offer unprecedented scale (in handsets and computing software), even if both are fast followers.  We do not see Nokia’s commitment to Ovi or Symbian wavering. Though Microsoft’s successive versions of WindowsMobile failed to get traction beyond 10-15m units p.a., we expect a renewed push around Windows7 in 2H10. The MSFT/Yahoo search deal could be a blueprint for closer collaboration with Nokia. With its resources (a $9.5bn R&D budget) and assets (enterprise installed base, XBox, HotMail, and Bing), Microsoft could offer handset OEMs revenue share deals. LGE already committed to ship 50+ Windows models by 2012.

 

Figure 1: Product Differentiation?

Source: Arete Research.

Content, Not Applications

An AppStore is not a content store, yet.  The next battle will be to add intelligence and filtering to AppStores, and tightly integrate content with platforms (as with iTunes, Kindle, Zune HD, or Comes With Music).  There are limits to how many applications consumers are likely to use, whereas there is a wide range of content to access via mobile devices.  To handle this, mobile devices also need integration with home CE/PC products. Samsung, for one, aims to provide “three screen” offerings spanning TVs, PCs, cameras, and handsets. There will be efforts by Sony, Apple, Samsung and others to make a single harmonised software platform that spans a wide range of video-capable devices.

 

Figure 2: Putting Software at the Centre of a CE “User Experience”


Source:  Arete Research.

 

With multi-radio (e.g., 3G, WiFi and Bluetooth) integration and voice recognition, mobile devices could become a control point to reach “virtualised” content.  This is a longer-term “cloud computing” angle to mobile software, handling access to and storage of personal content.  OEMs will need to offer tight integration with cloud services, or offer their own “stores of content.”

 

Apple and Google designed platforms with PCs in mind, and drew developers from the vastly larger desktop world.  They benefit from programming in AJAX, whereas Symbian uses a range of older object-oriented languages.  Yet in both handset and PC worlds, OEMs, not developers, create devices.  They are the gatekeepers for software and AppStores, managing the flow of any OTA updates that might alter the UX.  Adobe has provided a good model, with regular updates of its popular Flash and Acrobat software.  Yet user expectations of handset stability will get re-set if devices regularly need updates like PCs do.

Too Much Choice?

The number of companies vying to become the platform of choice is staggering, and itself a problem. Beyond the ones we discuss below, we can add Intel (with its Moblin effort), Palm’s WebOS (which remains device-specific) and the range of Linux variants (like the Nokia-sponsored Maemo, LiMO, and components developed under the OMTP).  The latter shows how limited group initiatives have been: OMTP involves VOD, TMOB, TI, TEF, AT&T, and others, but all of these compete for exclusivity with operator-subsidised devices that will never be OMTP-compliant.  None of the above options are yet mass market (i.e., likely to top 10m+ units in ’10).  Just to confuse matters further, there are other applications environments (e.g., BREW) as well as “component” vendors like Opera, Access, and Adobe.  We look at leading platforms below:

 

Apple’s OSX

Apple excelled at innovating around the UX and using animation to mask some of the iPhone’s early weaknesses (lack of multi-threading, slow image processing).  Apple’s marketing anticipated the market’s direction with its focus on applications, and Apple’s PA Semi unit will help it be first to market with multi-core processing (supporting streaming video).  Apple is still attracting developers with the clarity and simplicity of its SDK, and by testing and proving each layer of the stack via its PC products.  We expect OSX to be extended to CE products, and also for Apple to bring AppStores to the PC.

Google’s Android

For a two-year-old platform, Android has won ample OEM support, following up its G1 (a.k.a. the Android Developer Phone) with the subsequent Cupcake (v1.5) and Éclair (v2.0) releases. Android aims to be binary forward compatible, i.e., existing applications written for G1s will run on new devices without modifications.  Developers create Android Virtual Devices with the SDK to run applications for a range of devices. Development and emulator debug time is far shorter on Android than on Symbian.
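
For a sense of what “binary forward compatible” means in practice: a minimal Activity built against an early API level is expected to install and run unchanged on later devices. The sketch below is a generic, hypothetical “hello world” in Java, of the kind developers would first exercise on an Android Virtual Device in the SDK emulator before touching real handsets; it is not tied to any particular device or release.

```java
// Minimal Android Activity, built against the published SDK rather than device internals.
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Build the UI in code to keep the sketch self-contained
        // (a real application would normally inflate an XML layout).
        TextView view = new TextView(this);
        view.setText("Hello from an early-API Activity");
        setContentView(view);
    }
}
```

Because the application depends only on the published SDK, the same binary should, in principle, run on any compliant Android handset, which is the forward compatibility the platform promises.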

Despite OEM support, Android’s governance remains fuzzy.  Android is open-source licensed, but not an open source project: a small (~300 staff) team controls the developer ecosystem and Android Market distribution. Google has not productised the source code or offered post-sales software management tools, and has limited support for operator-compliant packs, libraries of hardware drivers, and language variants. Some developers say Android is slow to respond to change requests and to accept code modifications.  In exchange for access to Android Market, Google requires OEMs to bundle Google Apps and supply usage analytics from devices.  One key commercialisation partner, WindRiver, was bought by Intel, while another, Teleca, started an Android Feature Club to resolve common integration issues.  Android’s end-game is unclear: is it a hedge against Microsoft or Apple controlling end-devices?  A Trojan Horse for Google services?  Or will it become an independent company with license fees?  If operators don’t need devices “with Google,” then Android may fragment into many custom UIs.

 

Nokia’s Symbian

After a decade under a shifting set of parents, the rump of Symbian was bought by Nokia and made an open source project, including Nokia’s own S60 UI.  Symbian/S60 was initially developed for phone functions, and saw limited traction for downloads under cumbersome tree and branch menu structures. Many developers feel Nokia/Symbian offers too many choices (native Symbian code, J2ME, FlashLite, Web runtime and Python), each with limitations and compatibility issues. The S60 browser is based on webkit, but lacks HTML5 support.  Nokia’s decision to open source Symbian/S60 has stalled its development, as Symbian re-writes and tests third-party software in its 40m line code base.  It will be difficult to make major improvements to Symbian (i.e., to support multi-core processors) during this process.

 

When Nokia ships Direct UI in mid ’10, Symbian will effectively break its backwards compatibility.  Whether it also moves to a completely new release (v.10 from v.9.6) is still open. This may alienate developers that have to re-develop for a new platform and comply with Nokia’s new Direct UI (based on Qt).  They also must resolve whether Symbian Horizon is sufficient as a publishing tool, or whether Nokia can get other OEMs to use OviStore, which still lags rivals on many fronts.  Nokia hopes Symbian will present a credible alternative to Android in mid-2010, when it is fully open source/EPL licensed, with Nokia assuring a large market.

 

Microsoft’s Windows

Windows Mobile 6.5 traced a long evolution from the Pocket PC OS, but still uses an older WinCE 5 kernel.  Microsoft recognised its failings by bringing in new management for Mobile, acquiring Danger (designers of the Sidekick device), and engaging LG as a mass market OEM alongside long-term supporter HTC.  We see 6.5 as simply a stopgap solution until Microsoft brings the innovation seen with its ZuneHD UI and leaner Win7 platforms to mobile.  Microsoft is also offering its software in a reference design called Pink, and may tweak its long-held license fee model with PC-like terms (rebates, discounts and marketing support). This may gain traction among Chinese OEMs, after Taiwanese and US OEMs failed to ramp WinMo to volume.  It is too early to rule out a now-dormant Microsoft, given its scale in computing and revival with Win7.

RIM’s Blackberry OS

In a world moving more “open,” RIM keeps its OS development in-house, stressing the need for security and compression. Yet RIM must evolve the BlackBerry’s UI and bring more developers to its AppsWorld platform, as well as open up its charging model beyond PayPal to embrace operator billing. BlackBerry’s application environment works on a J2ME framework with proprietary extensions, which adds fragmentation and compatibility issues. However, the security and bandwidth compression so valued by enterprises may limit performance for consumers, as applications traverse RIM’s NOCs via its proprietary browser.  RIM’s premium pricing still relies on its messaging franchise, which faces challenges from ActiveSync and efforts to bring push e-mail to mass market price levels. Rivals may not match Blackberry’s UX, but some segments may care less about RIM’s security and delivery than about the price of handsets.  While RIM stresses incremental upgrades for its AppsWorld, we hear it is undertaking an extensive OS re-write to support new multi-core chipsets.
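
For context on the J2ME point: a plain MIDlet like the hypothetical sketch below uses only the standard javax.microedition classes and so runs across MIDP handsets, whereas anything exploiting RIM’s proprietary extensions ties the application to BlackBerry devices; that split is where the fragmentation and compatibility issues arise.

```java
// Minimal J2ME MIDlet using only standard MIDP classes (no RIM extensions).
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

public class HelloMidlet extends MIDlet {
    protected void startApp() {
        Form form = new Form("Hello");
        form.append("Running on the common J2ME/MIDP base.");
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() {
        // Nothing to release in this sketch.
    }

    protected void destroyApp(boolean unconditional) {
        // Nothing to clean up in this sketch.
    }
}
```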

 

Down to Earth

This space gets too much attention for the revenue it directly generates.  Mobile software is a means to an end, and the end is selling devices and Internet services.  The cost of development will narrow the number of platforms by 2013, but not before the sheer number of options bewilders consumers who know about them and frustrates others wishing to get simple access to specific content. Given how rapidly key hardware costs are falling, and how sophisticated mid-range software platforms are becoming, all phones will become smartphones of some sort. Who wants to own a dumbphone?

 

AppStores will evolve to offer a range of content and services, with a major battle brewing over billing and data on consumer usage.  Every device will support some AppStores and work with a range of Internet brands and services.  Some content will be packaged and tightly linked to specific devices.  The Holy Grail in all this mobile software will be its extension to ranges of other CE products.  There is ample reason to scoff at the hype around mobile software — for its marginal economics and inevitable fragmentation — but no doubt as to its future role as a control point for more valuable content and Internet-based services and brands.

 

 

 

IMPORTANT DISCLOSURES

 

For important disclosure information regarding the companies in this report, please call +44 (0)207 959 1300, or send an email to michael.pizzi@arete.net.

 

This publication was produced by Arete Research Services LLP (“Arete”) and is distributed in the US by Arete Research, LLC (“Arete LLC”).

 

Arete’s Rating System. Long (L), Neutral (N), Short (S). Analysts recommend stocks as Long or Short for inclusion in Arete Best Ideas, a monthly publication consisting of the firm’s highest conviction recommendations.  Being assigned a Long or Short rating is determined by a stock’s absolute return potential and other factors, which may include share liquidity, debt refinancing, estimate risk, economic outlook of principal countries of operation, or other company or industry considerations.   Any stock not assigned a Long or Short rating for inclusion in Arete Best Ideas is deemed to be Neutral.  A stock’s return potential represents the difference between the current stock price and the target price.

 

Arete’s Recommendation Distribution.  As of 30 June 2009, research analysts at Arete have recommended 16.9% of issuers covered with Long (Buy) ratings, 21.1% with Short (Sell) ratings, with the remaining 62.0% (which are not included in Arete Best Ideas) deemed Neutral.  A list of all stocks in each coverage group can be found at www.arete.net.

 

Required Disclosures.  Analyst Certification: the research analyst(s) whose name(s) appear(s) on the front cover of this report certify that: all of the views expressed in this report accurately reflect their personal views about the subject company or companies and its or their securities, and that no part of their compensation was, is, or will be, directly or indirectly, related to the specific recommendations or views expressed in this report.

 

Research Disclosures.  Arete Research Services LLP (“Arete”) provides investment advice for eligible counterparties and professional clients. Arete receives no compensation from the companies its analysts cover, does no investment banking, market making, money management or proprietary trading, derives no compensation from these activities and will not engage in these activities or receive compensation for these activities in the future. Arete’s analysts are based in London, authorized and regulated by the UK’s Financial Services Authority (“FSA”); they are not registered as research analysts with FINRA. Additionally, Arete’s analysts are not associated persons and therefore are not subject to Rule 2711 restrictions on communications with a subject company, public appearances and trading securities held by a research analyst account. Arete restricts the distribution of its research services to approved persons only.

 

Reports are prepared for non-private customers using sources believed to be wholly reliable and accurate but which cannot be warranted as to accuracy or completeness.  Opinions held are subject to change without prior notice.  No Arete director, employee or representative accepts liability for any loss arising from the use of any advice provided.  Please see www.arete.net for details of any interests held by Arete representatives in securities discussed and for our conflicts of interest policy.

 

 

© Arete Research Services LLP 2009.  All rights reserved.  No part of this report may be reproduced or distributed in any manner without Arete’s written permission.  Arete specifically prohibits the re-distribution of this report and accepts no liability for the actions of third parties in this respect.

 

Arete Research Services LLP, 27 St John’s Lane, London, EC1M 4BU, Tel: +44 (0)20 7959 1300

Registered in England: Number OC303210

Registered Office: Fairfax House, 15 Fulwood Place, London WC1V 6AY

Arete Research Services LLP is authorized and regulated by the Financial Services Authority

 

US Distribution Disclosures.  Distribution in the United States is through Arete Research, LLC (“Arete LLC”), a wholly owned subsidiary of Arete, registered as a broker-dealer with the Financial Industry Regulatory Authority (FINRA). Arete LLC is registered for the purpose of distributing third-party research. It employs no analysts and conducts no equity research. Additionally, Arete LLC conducts no investment banking, market making, money management or proprietary trading, derives no compensation from these activities and will not engage in these activities or receive compensation for these activities in the future. Arete LLC accepts responsibility for the content of this report.

 

Section 28(e) Safe Harbor.  Arete LLC has entered into commission sharing agreements with a number of broker-dealers pursuant to which Arete LLC is involved in “effecting” trades on behalf of its clients by agreeing with the other broker-dealer that Arete LLC will monitor and respond to customer comments concerning the trading process, which is one of the four minimum functions listed by the Securities and Exchange Commission in its latest guidance on client commission practices under Section 28(e).  Arete LLC encourages its clients to contact Anthony W. Graziano, III (+1 617 357 4800 or anthony.graziano@arete.net) with any comments or concerns they may have concerning the trading process.

 

Arete Research LLC, 3 Post Office Square, 7th Floor, Boston, MA 02109, Tel: +1 617 357 4800

 

Full Article: Nokia’s Strange Services Strategy – Lessons from Apple iPhone and RIM

The profuse proliferation of poorly integrated projects suggests either – if we’re being charitable – a deliberate policy of experimenting with many different ideas, or else – if we’re not – the absence of a coherent strategy.

Clearly Nokia is aware of the secular tendency in all information technology fields for value to migrate towards software, and specifically towards applications. Equally clearly, they have the money, scale, and competence to deliver major projects in this field. However, so far they have failed to make services into a meaningful line of business, and even their well-developed software ecosystem hasn’t produced a major hit like the iPhone and its associated app store.

Nokia Services: project proliferator

So far, the Services division in its various incarnations has brought forward Club Nokia, the Nokia Game, Forum Nokia, Symbian Developer Network, WidSets, Nokia Download!, MOSH, Nokia Comes With Music, Nokia Music Store, N-Gage, Ovi, Mail on Ovi, Contacts on Ovi, Ovi Store…it’s a lot of brands for one company, and that’s not even an exhaustive list. Nokia has also acquired Intellisync, Sega.com, Loudeye, Twango, Enpocket, Oz Communications, Gate5, Starfish Software, Navteq and Avvenu since 2005 – an average of just over two services acquisitions a year. Moreover, despite the decision to integrate all (or most) services into Ovi, there are still five different functional silos inside the Services division.

The great bulk of applications or services available or proposed for mobile devices fall into two categories – social or media. Under social we’re grouping anything that is primarily about communications; under media we’re grouping video, music, games, and content in general. Obviously there is a significant overlap. This is driven by fundamentals; no-one is likely to want to do computationally intensive graphics editing, CAD, or heavy data analysis on a mobile, run a database server on one, or play high-grade full-3D games. Batteries, CPU limitations, and most of all, form factor limitations see to that. And on the other side, communication is a fundamental human need, so there is demand pull as well as constraint push. As we pointed out back in the autumn of 2007, communication, not content, is king.

Aims

In trying to get user adoption of its applications and services, Nokia is pursuing two aims – one is to create products that will help to ship more Nokia devices, and to ship higher-value N- or E-series devices rather than featurephones; the other is a longer-range hope to create a new business in its own right, which will probably be monetised through subscriptions, advertising, or transactions. This latter aim is much further off than the first, and is affected by the operators’ suspicion of any activity that seems to rival their treasured billing relationship. For example, although quick signup and data import are crucial to deploying a social application, Nokia probably wouldn’t get away with automatically enrolling all users in its services – the operators likely wouldn’t wear it.

Historical lessons

There have been several historical examples of similar business models, in which sales of devices are driven by a social network. However, the common factor is that success has always come from facilitating existing social networks rather than trying to create new ones. This is also true of the networks themselves; if new ones emerge, it’s usually as an epi-phenomenon of generally reduced friction. Some examples:

  1. Telephony itself: nobody subscribed in order to join the telephone community, they subscribed to talk to the people they wanted to talk to anyway.
  2. GSM: the unique selling point was that the people who might want to talk to you could reach you anywhere, and PSTN interworking was crucial.
  3. RIM’s BlackBerry: early BlackBerries weren’t that impressive as such, but they provided access to the social value of your e-mail workflow and groupware anywhere. Remember, the only really valuable IM user base is the 17 million Lotus Notes Sametime users.
  4. 3’s INQ: the Global Mobile Award-winning handset is really a hardware representation of the user’s virtual presence. Hutchison isn’t interested in trying to make people join Club Hutch or use 3Book; they’re interested in helping their users manage their social networks and charging for the privilege.

So it’s unlikely that trying to recruit users into Nokia-specific communities is at all sensible. Nobody likes vendor lock-in. And, if your product is really good, why restrict it to Nokia hardware users? As far as Web applications go, of course, there’s absolutely no reason why other devices shouldn’t be allowed to play. But this fundamental issue – that no-one organises their life around which device vendor their friends, or their friends’ mobile operators, have chosen – would tend to explain why there have been so many service launches, mergers, and shutdowns. Nokia is trying to find the answer by trial and error, but it’s looking in the wrong place. There is some evidence, however, that they are looking more at facilitating other social applications, but this is subject to negotiation with the operators.

The operator relationship – root of the problem

One of the reasons is the conflict with operators mentioned above. Nokia’s efforts to build a Nokia-only community mirror the telco fascination with the billing relationship. Telcos tend to imagine that being a customer of Telco X is enough to constitute a substantial social and emotional link; Nokia is apparently working on the assumption that being a customer of Nokia is sufficient to make you more like other Nokia customers than everyone else. So both parties are trying to “own the customer”, when in fact this is probably pointless, and they are succeeding in spoiling each other’s plans. Although telcos like to imagine they have a unique relationship with their subscribers, they in fact know surprisingly little about them, and carriers tend to be very unpopular with the public. Who wants to have a relationship with the Big Expensive Phone Company anyway? Both parties need to rethink their approach to sociability.

What would a Telco 2.0 take on this look like?

First of all, the operator needs to realise that the subscribers don’t love them for themselves; it was the connectivity they were after all along! Tears! Secondly, Nokia needs to drop the fantasy of recruiting users into a vendor-specific Nokiasphere. It won’t work. Instead, both ought to be looking at how they can contribute to other people’s processes. If Nokia can come up with a better service offering, very well – let them use the telco API suite. In fact, perhaps the model should be flipped, and instead of telcos marketing Nokia devices as a bundled add-in with their service, Nokia ought to be marketing its devices (and services) with connectivity and much else bundled into the upfront price, with the telcos getting their share through richer wholesale mechanisms and platform services.
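
What “use the telco API suite” might look like in practice: the sketch below is entirely hypothetical (the endpoint, parameters and credential are invented for illustration; no specific operator exposes exactly this interface), but it shows the shape of the wholesale/platform model, where a third party such as Nokia triggers a network capability – say operator billing – over a plain HTTP API and the telco takes its share behind the scenes.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TelcoApiSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical operator charging endpoint -- illustrative only.
        URL endpoint = new URL("https://api.example-operator.com/v1/charge");

        // Charge a (reserved, fictional) subscriber number a small amount for an app purchase.
        String payload = "msisdn=447700900123&amountPence=299&description=App+purchase";

        HttpURLConnection connection = (HttpURLConnection) endpoint.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        connection.setRequestProperty("Authorization", "Bearer DUMMY_API_KEY"); // placeholder credential

        try (OutputStream out = connection.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // A real integration would parse the operator's response and reconcile
        // the charge against a wholesale revenue-share agreement.
        System.out.println("Operator responded: HTTP " + connection.getResponseCode());
    }
}
```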

Consider the iPhone. Leaving aside the industrial design and GUI for a moment – I dare you! You can do it! – its key features were integration with iTunes (i.e. with content); a developer platform that offered good APIs and documentation, plus a route to market for developers and an easy way for users to discover, buy, and install their products; and an internal business model that sweetened the deal for the operators by offering them exclusivity and a share of the revenue. Everyone still loves the iPhone, everyone still hates AT&T, but would AT&T ever consider not renewing the contract with Apple? They’re stealing our customers’ hearts! Of course not.

Apple succeeded in improving the following processes for two out of three key customer groups:

  1. Users: Acquiring and managing music and video across multiple devices.
  2. Users: Discovering, installing, and sharing mobile applications
  3. Developers: Deploying and selling mobile applications

And as two-sidedness would suggest, they offered the remaining group a share of revenue. The rest is history; the iPhone has become the main driver of growth and profitability at Apple, more than one billion application downloads have been served by the App Store, etc, etc.

Conclusions: turn to small business?

So far, however, Nokia’s approach has mirrored the worst aspects of telcos’ attitude to their subscribers; a combination of possessiveness and indifference. They want to own the customer; they don’t know how or why. It might be more defensible if there was any sign that Nokia is serious about making money from services; that, of course, is poison to the operators and is therefore permanently delayed. Similarly, Nokia would like to have the sort of brand loyalty Apple enjoys and to build the sort of integrated user experience Apple specialises in, but it is paranoid about the operators. The result is essentially an Apple strategy, but not quite.

What else could they try? Consider Nokia Life Tools, the package of information services for farmers and small businesses they are building for the developing world. One thing that Nokia’s services strategy has so far lacked is engagement with enterprises; it’s all been about swapping photos and music and status updates. Although Nokia makes great business-class gadgets, and they provide a lot of useful enablers (multiple e-mail boxes, support for different push e-mail systems, VPN clients, screen output, printer support), there’s a hole shaped like work in their services offering. RIM has been much better here, working together with IBM and Salesforce.com to expand the range of enterprise applications they can mobilise.

Life Tools, however, shows a possible opportunity – it’s all right catering to companies who already have complex workflow systems, but who’s serving the ones that don’t have the scale to invest there? None of the vendors are addressing this, and neither are the telcos. It fits a whole succession of Telco 2.0 principles – focus on enterprises, look for areas where there’s a big difference between the value of bits and their quantity, and work hard at improving wholesale.

It’s almost certainly a better idea than trying to be Apple, but not quite.

Next Steps for Nokia and telcos

  • It is unlikely that ”Nokia users” are a valid community

  • Really successful social hardware facilitates existing social networks

  • Nokia’s problems are significantly explained by their difficult relationship with operators

  • Nokia’s emerging-market Life Tools package might be more of an example than they think

  • A Telco 2.0 approach would emphasise small businesses, offer bundled connectivity, and deal with the operators through better wholesale

Full Article: LiMo – The Tortoise picks up Momentum

Mobile Linux foundation LiMo‘s presence at the Mobile World Congress was impressive. DoCoMo demonstrated a series of handsets built on the OS, and LG & Samsung showed a series of reference implementations. But more impressive than the actual and reference handsets were the toolkits launched by Access & Azingo.

We believe that LiMo has an important role to play in the mobile ecosystem, and the platform is so compelling that over time more and more handsets based upon the OS will find their way into consumers’ hands. So why is LiMo different and important?

In a nutshell, it is not owned by anyone and is not being driven forward by any one member. Symbian and Android may also be open source, but no-one has any serious doubt about who is paying for the majority of the resources and therefore whose business model, consciously or sub-consciously, they could favour. The LiMo founder members were split evenly between operators (DoCoMo, Vodafone and Orange) and consumer electronics companies (NEC, Panasonic & Samsung). Since then several other operators, handset makers, chip makers and software vendors have joined. The current board contains a representative sample of organisations across the mobile value chain.

LiMo as the Unifying Entity

The current handset OS market reminds us very much of the days when the computing industry shifted from proprietary operating systems to various mutations of Unix. Over time, more and more companies moved away from proprietary extensions and moved them into full open source. Unix was broken down into a core kernel, various drivers, a large body of middleware and a smattering of user interfaces. Value shifted to the applications and services. Today, as open source has matured, each company can decide which parts of Unix to push resources onto and develop further, and which parts to include in its own distribution.

Figure 2: LiMo Architecture

The reason that Unix developed this way is pure economics – it is just too expensive for many companies to build and maintain their own flavours of operating systems. In fact, only two mainstream companies can currently afford to build their own – Microsoft and Apple – and the house of Apple is built upon Unix foundations anyway. Today, we are seeing the same dynamics in the mobile space, and it is only a question of time before more and more companies shift resources away from internal projects and onto open-source ones. LiMo is the perfect home for coordinating this open-source effort – especially if the LiMo Foundation allows the suppliers of code the freedom to develop their own roadmaps according to areas of perceived value and weakness.

 LiMo should be really promiscuous to succeed

In June 2008, LiMo merged with the LiPS foundation – great news. It is pointless and wasteful to have two foundations doing more or less the same thing, one from a silicon viewpoint and the other from an operator viewpoint. Just before Barcelona, LiMo endorsed the OMTP BONDI specification and announced that it expects future LiMo handsets using a web runtime to support the BONDI specification. Again, great news. It is pointless to redo specification work, perhaps with a slightly different angle. These types of actions are critical to the success of LiMo – embracing the work done by others and implementing it in an open-source manner, available to all.

Compelling base for Application Innovation

The real problem with developing mobile applications today is the porting cost of supporting the wide array of operating systems. LiMo offers the opportunity to radically reduce this cost. This is going to become critical for the next generation of wirelessly connected devices, whether machine-2-machine, general consumer devices or niche applications serving vertical industries. For the general consumer market, the key is to get handsets to the consumers. DoCoMo has done a great job of driving LiMo-based handsets into the Japanese market. 2009 needs to be the year that European (e.g. Vodafone) or US (e.g. Verizon) operators deploy handsets in other markets.

It is also vital that operators make some of their internal capabilities available for direct use by LiMo handsets and allow coupling to externally developed applications. These assets are not just the standard network services, but also internal service delivery platform capabilities. This adds benefits on top of the cost advantage that LiMo will ultimately have over the other handset operating systems. As in the computing world before, over time value will move away from hardware and operating systems towards applications and services. It is no accident that both Nokia and Google are moving into mobile services as a future growth area. The operators need an independent operating system to hold back their advance onto traditional operator turf.

In summary:

We feel that as complexity increases in the mobile world, the economics of LiMo will become more favourable. It is only a matter of time before LiMo’s market share starts to increase – the only question is the timeframe. Crucially, LiMo is well placed to get the buy-in of the most important stakeholders – operators. Operators are to mobile devices as content creators were to VHS; how well would the iPhone have done without AT&T?

Plus:

  • following the same path as the evolution of the computing industry
  • broad and growing industry support

Minus:

  • not yet reached critical mass
  • economic incentives for application developers are still vague

Interesting:

  • commoditisation of the hardware and operating system layers – value moving towards applications and services
  • a way for operators to counter the growing strength of Apple, Nokia & Google.

Questions:

  • how can operators add their assets to make the operating system more compelling?
  • how can the barriers of intellectual property ownership be overcome?