Will web 3.0 change the role of telcos?

Introduction

Over the past 12 months or so, the notion that the Internet is about to see another paradigm shift has received a lot of airtime. Amid all the dissatisfaction with the way the Internet works today, the concept of a web 3.0 is gaining traction. At a very basic level, web 3.0 is about using blockchains (distributed ledgers) to bring about the decentralisation of computing power, resources, data and rewards.

STL Partners has written extensively about the emergence of blockchains and the opportunities they present for telcos. But this report takes a different perspective – it considers whether blockchains and the decentralisation they embody will fix the public Internet’s flaws and usher in a new era of competition and innovation. It also explores the potential role of telcos in reinventing the web in this way and whether it is in their interests to support the web 3.0 movement or protect the status quo.

Our landmark report The Coordination Age: A third age of telecoms explained how reliable and ubiquitous connectivity can enable companies and consumers to use digital technologies to efficiently allocate and source assets and resources. In the case of web 3.0, telcos could develop solutions and services that help bridge the gap between the fully decentralised vision of libertarians and governments’ desire to retain control and regulate the digital world.

As it considers the opportunities for telcos, this report draws on the experiences and actions of Deutsche Telekom, Telefónica and Vodafone. It also builds on previous STL Partners reports.


What do we mean by web 3.0?

The term web 3.0 is widely used to refer to the next step change in the evolution of the Internet. For some stakeholders, it is about the integration of the physical world and the digital world through the expansion of the Internet of Things, the widespread use of digital twins and augmented reality and virtual reality. This concept, which involves the capture and the processing of vast amounts of real-time, real-world data, is sometimes known as the spatial web.

While recognising the emergence of a spatial web, Nokia, for example, has defined web 3.0 as a “visually dynamic smart web” that harnesses artificial intelligence (AI) and machine learning (ML). It describes web 3.0 as an evolution of a “semantic web” with the capacity to understand knowledge and data. Nokia believes that greater interconnectivity between machine-readable data and support for the evolution of AI and ML across “a distributed web” could remake ecommerce entirely.

Note that some of these concepts have been discussed for more than a decade. The Economist wrote about the semantic web in 2008, noting then that some people were trying to rebrand it as web 3.0.

Today, the term web 3.0 is most widely used as a shorthand for a redistribution of power and data – the idea of decentralising the computation behind Internet services and the rewards that then ensue. Instead of being delivered primarily by major tech platforms, web 3.0 services would be delivered by widely-distributed computers owned by many different parties acting in concert and in line with specific protocols. These parties would be rewarded for the work that their computers do.

This report will focus primarily on the latter definition. However, the different web 3.0 concepts can be linked. Some commentators would argue that the vibrancy and ultimate success of the spatial web will depend on decentralisation. That’s because processing the real-world data captured by a spatial web could confer extraordinary power on the centralised Internet platforms involved. Indeed, Deloitte has made that link (see graphic below).

In fact, one of the main drivers of the web 3.0 movement is a sense that a small number of tech platforms have too much power on today’s Internet. The contention is that the current web 2.0 model reinforces this position of dominance by funnelling more and more data through their servers, enabling them to stay ahead of competitors. For web 3.0 proponents, the remedy is to redistribute these data flows across many thousands of different computers owned by different entities. This is typically accomplished using what are known as decentralised apps (dapps) running on a distributed ledger (often referred to as a blockchain), in which many different computers store the code and then record each related interaction/transaction.
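
To make those mechanics concrete, here is a minimal Python sketch (our own illustration, not any production blockchain) of the core idea: several independent computers each hold a full, hash-linked copy of the same ledger, so a record tampered with on one machine is exposed both by its broken hash links and by disagreement with the other copies.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Node:
    """One of many independent computers holding a full copy of the ledger."""
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "tx": "genesis"}]

    def append(self, tx: str):
        prev = self.chain[-1]
        self.chain.append({"index": prev["index"] + 1,
                           "prev": block_hash(prev),
                           "tx": tx})

    def is_valid(self) -> bool:
        # Every block must reference the hash of the block before it.
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

# Three independent parties record the same transactions.
nodes = [Node() for _ in range(3)]
for tx in ["alice->bob:5", "bob->carol:2"]:
    for node in nodes:
        node.append(tx)

# Tampering with one copy breaks its hash links and its agreement with the rest.
nodes[0].chain[1]["tx"] = "alice->mallory:5"
print([n.is_valid() for n in nodes])    # [False, True, True]
```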

The spatial web and web 3.0 – two sides of the same coin?


Source: Deloitte

For many commentators, distributed ledgers are at the heart of web 3.0 because they enable the categorisation and storage of data without the need for any central points of control. In an article it published online, Nokia predicted that new application providers would displace today’s tech giants with a highly distributed infrastructure in which users own and control their own data. “Where the platform economy gave birth to companies like Uber, Airbnb, Upwork, and Alibaba, web 3.0 technology is driving a new era in social organization,” Nokia argues. “Leveraging the convergence of AI, 5G telecommunications, and blockchain, the future of work in the post-COVID era is set to look very different from what we’re used to. As web 3.0 introduces a new information and communications infrastructure, it will drive new forms of distributed social organisation…Change at this scale could prove extremely challenging to established organisations, but many will adapt and prosper.”

Nokia appears to have published that article in March 2021, but the changes it predicted are likely to happen gradually over an extended period. Distributed ledgers or blockchains are far from mature and many of their flaws are still being addressed. But there is a growing consensus that they will play a significant role in the future of the Internet.

Nokia itself is hoping that the web 3.0 movement will lead to rising demand for programmable networks that developers can harness to support decentralised services and apps. In June 2022, the company published a podcast in which Jitin Bhandari, CTO of Cloud and Network Services at Nokia, discusses the concept of “network as code”, by which he means the creation of a persona of the network that can be programmed by ecosystem developers and technology application partners “in domains of enterprise, in domains of web 2.0 and web 3.0 technologies, in domains of industry 4.0 applications, in scenarios of operational technology (OT) applications.” Nokia envisions that 5G networks will be able to participate in what it calls distributed service chains – the interlinking of multiple service providers to create new value.
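
Nokia has not published a concrete “network as code” interface, so the snippet below is a purely hypothetical sketch of what the idea could look like to a developer: requesting a low-latency network slice for an application through a REST-style API and attaching devices to it. Every URL, endpoint and field name here is invented for illustration.

```python
import requests

# Hypothetical example only: treat the operator's network as a programmable
# resource by requesting a low-latency slice and binding devices to it.
BASE = "https://api.example-operator.com/network/v1"   # invented endpoint

resp = requests.post(f"{BASE}/slices",
                     json={"profile": "low-latency", "max_latency_ms": 20},
                     timeout=10)
slice_id = resp.json()["slice_id"]                     # invented response field

requests.post(f"{BASE}/slices/{slice_id}/devices",
              json={"device_ids": ["dev-001", "dev-002"]},
              timeout=10)
```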

Although blockchains are widely associated with Bitcoin, they can enable much more than crypto-currencies. As a distributed computer, a blockchain can be used for multiple purposes – it can store the number of tokens in a wallet, the terms of a self-executing contract, or the code for a decentralised app.
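
As a rough illustration of the “self-executing contract” idea, the toy Python sketch below (ours, not a real smart-contract platform) locks tokens in an escrow whose own code, rather than a middleman, decides when they are released:

```python
class EscrowContract:
    """Toy self-executing agreement: tokens are locked in the contract and
    released to the payee only when the agreed condition is reported."""
    def __init__(self, ledger, payer, payee, amount, condition):
        self.ledger, self.payee, self.condition = ledger, payee, condition
        ledger[payer] -= amount        # tokens move from the payer's wallet...
        self.escrowed = amount         # ...into the contract's custody

    def report(self, event: str):
        # The contract's code, not an intermediary, decides the outcome.
        if self.escrowed and event == self.condition:
            self.ledger[self.payee] += self.escrowed
            self.escrowed = 0

ledger = {"alice": 10, "bob": 0}       # token balances in two wallets
deal = EscrowContract(ledger, "alice", "bob", 5, "goods-delivered")
deal.report("goods-delivered")
print(ledger)                          # {'alice': 5, 'bob': 5}
```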

As early as 2014, Gavin Wood, a co-founder of the popular Ethereum blockchain, laid out a vision that web 3.0 would enable users to exchange money and information on the web without employing a middleman, such as a bank or a tech company. As a result, people would have more control over their data and be able to sell it if they choose.

Today, Ethereum is one of the most widely used (and trusted) blockchains. It bills itself as a permissionless blockchain, which means no one controls access to the service – there are no gatekeepers.

Still, as the Ethereum website acknowledges, there are several disadvantages to web 3.0 decentralisation, as well as advantages. The graphic below, which draws on Ethereum’s views and STL analysis, summarises these pros and cons.

Table of Contents

  • Executive Summary
    • Three ways in which telcos can support web 3.0
    • Challenges facing web 3.0
  • Introduction
  • What do we mean by web 3.0?
    • Transparency versus privacy
    • The money and motivations behind web 3.0
    • Can content also be unbundled?
    • Smart contracts and automatic outcomes
    • Will we see decentralised autonomous organisations?
    • Who controls the user experience?
    • Web 3.0 development on the rise
  • The case against web 3.0
    • Are blockchains really the way forward?
    • Missteps and malign forces
  • Ironing out the wrinkles in blockchains
  • Could and should telcos help build web 3.0?
    • Validating blockchains
    • Telefónica: An interface to blockchains
    • Vodafone: Combining blockchains with the IoT
  • Conclusions


IoT and blockchain: There’s substance behind the hype

Introduction

There is currently a lot of market speculation about blockchain and its possible use-cases, including how it can be used in the IoT ecosystem.

This short report identifies three different reasons why blockchain is an attractive technology to use in IoT solutions, and how blockchain can help operators move up the IoT value chain by enabling new business models.

This report leverages research from recent STL publications.



The IoT ecosystem is evolving rapidly, and we are moving towards a hyper-connected and automated future…


Source: STL Partners

This future vision won’t be possible unless IoT devices from different networks can share data securely. There are three things that make blockchain an attractive technology to help overcome this challenge and enable IoT ecosystems:

  1. It creates tamper-proof audit trails (see the sketch after this list)
  2. It enables a distributed operating model
  3. It is open-source
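
As a minimal sketch of the first property (our own illustration, with invented device data), the snippet below hash-chains IoT events so that any later edit to a recorded reading breaks the chain and is detected:

```python
import hashlib
import json

def entry_hash(event: dict, prev: str) -> str:
    """Hash an event together with the hash of everything before it."""
    return hashlib.sha256(json.dumps({"event": event, "prev": prev},
                                     sort_keys=True).encode()).hexdigest()

def record(trail: list, event: dict):
    """Append an event, linking it to the previous entry's hash."""
    prev = trail[-1]["hash"] if trail else "0" * 64
    trail.append({"event": event, "prev": prev, "hash": entry_hash(event, prev)})

def verify(trail: list) -> bool:
    """Recompute every link; a tampered event breaks the chain."""
    prev = "0" * 64
    for e in trail:
        if e["prev"] != prev or e["hash"] != entry_hash(e["event"], prev):
            return False
        prev = e["hash"]
    return True

trail = []
record(trail, {"device": "meter-7", "reading": 42.1})
record(trail, {"device": "meter-7", "reading": 43.0})
print(verify(trail))                   # True
trail[0]["event"]["reading"] = 10.0    # tamper with history
print(verify(trail))                   # False
```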

Contents:

  • Introduction
  • IoT is not a quick win for operators
  • Can blockchain help?
  • The IoT ecosystem is evolving rapidly…
  • The future vision won’t be possible unless IoT devices from different networks can share data securely
  • Application 1: Enhancing IoT device security
  • Use-case 1: Protecting IoT devices with blockchain and biometric data
  • Use-case 2: Preventing losses in the global freight and logistics industry
  • Application 2: Enabling self-managing device-to-device networks
  • Use-case 1: Enabling device-to-device payments
  • Use-case 2: Granting location-access through smart locks
  • Use-case 3: Enabling the ‘sharing economy’
  • Blockchain is not a silver bullet
  • Blockchain in operator IoT strategies


How to build an open source telco – and why?

If you don’t subscribe to our research yet, you can download the free report as part of our sample report series.

Introduction: Why an open source telecom?

Commercial pressures and technological opportunities

For telcos in many markets, declining revenues are a harsh reality. Price competition is placing telcos under pressure to reduce capital spending and operating costs.

At the same time, from a technological point of view, the rise of cloud-based solutions has raised the possibility of re-engineering telco operations to run virtualised, open source software on low-cost, general-purpose hardware.

Indeed, rather than pursuing the traditional technological model, i.e. licensing proprietary solutions from the mainstream telecoms vendors (e.g. Ericsson, Huawei or Amdocs), telcos can increasingly:

  1. Progressively outsource the entire technological infrastructure to a vendor;
  2. Acquire software with programmability and openness features: application programming interfaces (APIs) can make it easier to program telecommunications infrastructure.

The second option promises to enable telcos to achieve their long-standing goals of decreasing the time-to-market of new solutions, while further reducing their dependence on vendors.

Greater adoption of general IT-based tools and solutions also:

  • Allows flexibility in using the existing infrastructure
  • Optimises and reuses the existing resources
  • Enables integration between operations and the network
  • And offers the possibility to make greater use of the data that telcos have traditionally collected for the purpose of providing communications services.


In an increasingly squeezed commercial context, the licensing fees charged by traditional vendors for telecommunications solutions increasingly look unsustainable, and their lack of flexibility poses serious issues for operators looking to move towards a more modern infrastructure. Moreover, the potential availability of competitive open source solutions provides an alternative that challenges the traditional model of making large investments in proprietary software, and dependence on a small number of vendors.

Established telecommunications vendors, and aggressive new entrants, may also propose new business models (e.g. shared investments or partnerships), which could be attractive for some telcos.

In any case, operators should explore and evaluate the possibility of moving forward with a new approach based on the extensive usage of open source software.

This report builds on STL Partners’ 2015 report, The Open Source Telco: Taking Control of Destiny, which looked at how widespread use of open source software is an important enabler of agility and innovation in many of the world’s leading internet and IT players. Yet while many telcos said at the time that they craved agility, only a minority were using open source to best effect.

In that 2015 report, we examined the barriers and drivers, and outlined six steps for telcos to safely embrace this key enabler of transformation and innovation:

  1. Increase usage of open source software: Overall, operators should look to increase their usage of open source software across their entire organisation due to its numerous strengths. It must, therefore, be consistently and fairly evaluated alongside proprietary alternatives. However, open source software also has disadvantages, dependencies, and hidden costs (such as internally-resourced maintenance and support), so it should not be considered an end in itself.
  2. Increase contributions to open source initiatives: Operators should also look to increase their level of contribution to open source initiatives so that they can both push key industry initiatives forward (e.g. OPNFV and NFV) and have more influence over the direction these take.
  3. Associate open source with wider transformation efforts: Successful open source adoption is both an enabler and symptom of operators’ broader transformation efforts, and should be recognised as such. It is more than simply a ‘technical fix’.
  4. Bring in new skills: To make effective use of open source software, operators need to acquire new software development skills and resources – likely from outside the telecoms industry.
  5. … but bring the whole organisation along too: Employees across numerous functional areas (not just IT) need to have experience with, or an understanding of, open source software – as well as senior management. This should ideally be managed by a dedicated team.
  6. New organisational processes: Specific changes also need to be made in certain functional areas, such as procurement, legal, marketing, compliance and risk management, so that their processes can effectively support increased open source software adoption.

This report goes beyond those recommendations to explore the changing models of IT delivery open to telcos and how they could go about adopting open source solutions. In particular, it outlines the different implementation phases required to build an open source telco, before considering two scenarios – the greenfield model and the brownfield model. The final section of the report draws conclusions and makes recommendations.

Why choose to build an open source telecom now?

Since STL Partners published its first report on open source software in telecoms in 2015, the case for embracing open source software has strengthened further. There are three broad trends that are creating a favourable market context for open source software.

Digitisation – the transition to providing products and services via digital channels and media. This may sometimes involve the delivery of the product, such as music, movies and books, in a digital form, rather than a physical form.

Virtualisation – executing software on virtualised platforms running on general-purpose hardware located in the cloud, rather than on purpose-built hardware on premises. Virtualisation allows better reuse of large servers by decoupling the one-to-one relationship between a service and a server. Moreover, cloudification of these services means they can be made available to any connected device on a full-time basis.

Softwarisation – the redefinition of products and services through software. This is an extension of digitisation: the digitisation of music, for example, has allowed the creation of new services and propositions (e.g. Spotify). The same goes for the movie industry (e.g. Netflix) and the transformation of the book industry (e.g. ebooks) and newspapers. This paradigm is based on:

  • The ability to digitise the information (transformation of the analogue into a digital signal).
  • Availability of large software platforms offering relevant processing, storage and communications capabilities.
  • The definition of open and reusable application programming interfaces (APIs) which allow processes formerly ‘trapped’ within proprietary systems to be managed or enhanced with other information and by other systems.

These three trends have started a revolution that is transforming other industries, such as travel agencies (e.g. Booking.com), hospitality (e.g. Airbnb) and taxis (e.g. Uber). Softwarisation is also now affecting other traditional industries, such as manufacturing (e.g. Industry 4.0) and, inevitably, telecommunications.

Softwarisation in telecommunications amounts to the use of virtualisation, cloud computing, open APIs and programmable communication resources to transform the current network architecture. Software is playing a key role in enabling new services and functions, better customer experience, leaner and faster processes, faster introduction of innovation, and usually lower costs and prices. The softwarisation trend is very apparent in the widespread interest in two emerging technologies: network function virtualization (NFV) and software defined networking (SDN).

The likely impact of this technological transformation is huge: flexibility in service delivery, cost reduction, quicker time to market, higher personalisation of services and solutions, differentiation from competition and more. We have outlined some key telco NFV/SDN strategies in the report Telco NFV & SDN Deployment Strategies: Six Emerging Segments.

What is open source software?

A generally accepted open source definition is difficult to achieve because of different perspectives and some philosophical differences within the open source community.

One of the most high-profile definitions is that of the Open Source Initiative, which states the need to have access to the source code, the possibility to modify and redistribute it, and non-discriminatory clauses against persons, groups or ‘fields of endeavour’ (for instance, usage for commercial versus academic purposes) and others.

For the purpose of this report, STL defines open source software as follows:

• Open source software is a specific type of software for which the original source code is made freely available and may be redistributed and modified. This software is usually made available and maintained by specialised communities of developers that support new versions and ensure some form of backward compatibility.

Open source can help to enable softwarisation. As an example, it has greatly helped the move from proprietary solutions in the web server sector to a common software platform (known as LAMP) based on the Linux operating system, the Apache HTTP server, the MySQL database and the PHP programming language. All these components are made available as open source, which essentially means that people can freely acquire the source code, modify it and use it. Under copyleft licences, such as the GPL, modifications and improvements must be returned to the development community.

One of the earliest and most high profile examples of open source software was the Linux operating system, a Unix-like operating system developed under the model of free and open source software development and distribution.

Open source for telecoms: Benefits and barriers

The benefits of using open source for telecoms

As discussed in our earlier report, The Open Source Telco: Taking Control of Destiny, the adoption and usage of open source solutions are being driven by business and technological needs. Ideally, the adoption and exploitation of open source will be part of a broader transformation programme designed to deliver the specific operator’s strategic goals.

Operators implementing open source solutions today tend to do so in conjunction with the deployment of network function virtualization (NFV) and software defined networking (SDN), which will play an important role for the definition and consolidation of the future 5G architectures.

However, as Figure 1 shows, transformation programmes can face formidable obstacles, particularly where a cultural change and new skills are required.

Benefits of transformation and related obstacles

The following strategic forces are driving interest in open source approaches among telecoms operators:

Reduce infrastructure costs. Telcos naturally want to minimise investment in new technologies and reduce infrastructure maintenance costs. Open source solutions seem to provide a way to do this by reducing license fees paid to solution vendors under the traditional software procurement model. As open source software usually runs on general-purpose hardware, it could also cut the capital and maintenance costs of the telco’s computing infrastructure. In addition, the current trend towards virtualisation and SDN should enable a shift to more programmable and flexible communications platforms. Today, open source solutions are primarily addressing the core network (e.g., virtualisation of evolved packet core), which accounts for a fraction of the investment made in the access infrastructure (fibre deployment, antenna installation, and so forth). However, in time open source solutions could also play a major role in the access network (e.g., open base stations and others): an agile and well-formed software architecture should make it possible to progressively introduce new software-based solutions into access infrastructure.

Mitigate vendor lock-in. Major vendors have been the traditional enablers of new services and new network deployments. Moreover, to minimise risks, telco managers tend to prefer to adopt consolidated solutions from a single vendor. This approach has several consequences:

  • Telcos don’t tend to introduce innovative new solutions developed in-house.
  • As a result, the network is not fully leveraged as a differentiator, and effectively becomes the vendor’s responsibility.
  • The internal innovation capabilities of a telco have effectively been displaced in favour of those of the vendor.

This has led to the “ossification” of much telecoms infrastructure and the inability to deliver differentiated offerings that can’t easily be replicated by competitors. Introducing open source solutions could be a means to lessen telcos’ dependence on specific vendors and increase internal innovation capabilities.

Enabling new services. The new services telcos introduce in their networks are essentially the same across many operators because the developers of these new services and features are a small set of consolidated vendors that offer the same portfolio to the whole industry. However, a programmable platform could enable a telco to govern and orchestrate its network resources and become the “master of the service”, i.e., the operator could quickly create, customise and personalise new functions and services in an independent way and offer them to its customers. This capability could help telcos enter adjacent markets, such as entertainment and financial services, as well as defend their core communications and connectivity markets. In essence, employing an open source platform could give a telco a competitive advantage.

Faster innovation cycles. Depending on a single vendor ties the telco to its roadmap and schedule, and to the obsolescence and substitution of existing technologies. The use of outdated technologies has a huge impact on a telco’s ability to offer new solutions in a timely fashion. An open source approach offers the possibility to upgrade and improve the existing platform (or to move to totally new technologies) without too many constraints posed by the “reference vendor”. This ability could be essential to acquiring and maintaining a technological advantage over competitors. Telcos need to clearly identify the benefits of this change, which represent the “why” of softwarisation.

Complete contents of the How to build an open source telco report:

  • Executive Summary
  • Introduction: why open source?
  • Commercial pressures and technological opportunities
  • Open Source: Why Now?
  • What is open source software?
  • Open source: benefits and barriers
  • The benefits of using open source
  • Overcoming the barriers to using open source
  • Choosing the right path to open source
  • Selecting the right IT delivery model
  • Choosing the right model for the right scenario
  • Weighing the cost of open source
  • Which telcos are using open source today?
  • How can you build an open source telco?
  • Greenfield model
  • Brownfield model
  • Conclusions and recommendations
  • Controversial and challenging, yet often compelling
  • Recommendations for different kinds of telcos

Figures:

  • Figure 1: Illustrative open source costs versus a proprietary approach
  • Figure 2: Benefits of transformation and the related obstacles
  • Figure 3: The key barriers in the path of a shift to open source
  • Figure 4: Shaping an initial strategy for the adoption of open source solutions
  • Figure 5: A new open source component in an existing infrastructure
  • Figure 6: Different kinds of telcos need to select different delivery models
  • Figure 7: Illustrative estimate of Open Source costs versus a proprietary approach

MWC 2016: The Cloud/NFV Transformation Needle Moves

Enter the open-source software leaders: IT takes telco cloud seriously

One of the most important trends from MWC 2016 was the increased presence, engagement, and enthusiasm of the key open-source software vendors. Companies like Red Hat, IBM, Canonical, HP Enterprise, and Intel are the biggest contributors of code, next to independent developers, to the key open-source projects like OpenStack, OPNFV, and Linux itself. Their growing engagement in telecoms software is a major positive for the prospects of NFV/SDN and telco re-engagement in cloud.

OpenStack, the open-source cloud operating system, is emerging as the key platform for telco cloud and also for NFV implementations. Figure 1, taken from the official OpenStack statistics tracker at Stackalytics.com, shows contributions to the current release of OpenStack by organisational affiliation and by module; this highlights both which companies are contributing heavily to OpenStack development, and which modules are attracting the most development effort.

AT&T’s specialist partner Mirantis shows up as a leading contributor of code for OpenStack, some of which we believe is developed inside AT&T Shannon Labs. Tellingly, among OpenStack modules, the single biggest focus area is Neutron, the module which takes care of OpenStack’s networking functions. Anything NFV-related will tend to end up here.

Figure 1: The contributor ecosystem for OpenStack (% of commits, bug fixes, and reviews by company and module)

Source: Stackalytics

 

  • Executive Summary
  • Enter the open-source software leaders: IT takes telco cloud seriously
  • And (some) telcos get serious about software
  • Open-source development is influencing the standards process
  • The cloud is the network is the cloud
  • Nokia and Intel: ever closer union?

 

  • Figure 1: The contributor ecosystem for OpenStack (% of commits, bug fixes, and reviews by company and module)
  • Figure 2: Mirantis contributes more to OpenStack networking than Red Hat or Cisco (% of commits, bug fixes, and reviews by company, for networking module)
  • Figure 3: Mirantis (and therefore AT&T) drive the key Fuel project forwards

The Open Source Telco: Taking Control of Destiny

Preface

This report examines the approaches to open source software – broadly, software for which the source code is freely available for use, subject to certain licensing conditions – of telecoms operators globally. Several factors have come together in recent years to make the role of open source software an important and dynamic area of debate for operators, including:

  • Technological Progress: Advances in core networking technologies, especially network functions virtualisation (NFV) and software-defined networking (SDN), are closely associated with open source software and initiatives, such as OPNFV and OpenDaylight. Many operators are actively participating in these initiatives, as well as trialling their software and, in some cases, moving them into production. This represents a fundamental shift away from the industry’s traditional, proprietary, vendor-procured model.
    • Why are we now seeing more open source activities around core communications technologies?
  • Financial Pressure: However, over-the-top (OTT) disintermediation, regulation and adverse macroeconomic conditions have led to reduced core communications revenues for operators in developed and emerging markets alike. As a result, operators are exploring opportunities to move away from their core infrastructure business, and compete in the more software-centric services layer.
    • How do the Internet players use open source software, and what are the lessons for operators?
  • The Need for Agility: In general, there is recognition within the telecoms industry that operators need to become more ‘agile’ if they are to succeed in the new, rapidly-changing ICT world, and greater use of open source software is seen by many as a key enabler of this transformation.
    • How can the use of open source software increase operator agility?

The answers to these questions, and more, are the topic of this report, which is sponsored by Dialogic and independently produced by STL Partners. The report draws on a series of 21 interviews conducted by STL Partners with senior technologists, strategists and product managers from telecoms operators globally.

Figure 1: Split of Interviewees by Business Area

Source: STL Partners

Introduction

Open source is less optional than it once was – even for Apple and Microsoft

From the audience’s point of view, the most important announcement at Apple’s Worldwide Developer Conference (WWDC) this year was not the new versions of iOS and OS X, or even its Spotify-challenging Apple Music service. Instead, it was the announcement that Apple’s highly popular programming language ‘Swift’ was to be made open source, where open source software is broadly defined as software for which the source code is freely available for use – subject to certain licensing conditions.

On one level, therefore, this represents a clever engagement strategy with developers. Open source software uptake has increased rapidly during the last 15 years, most famously embodied by the Linux operating system (OS), and with this developers have demonstrated a growing preference for open source tools and platforms. Since Apple has generally pushed developers towards proprietary development tools, and away from third-party ones (such as Adobe Flash), this is significant in itself.

An indication of open source’s growth can be found in OS market shares in consumer electronics devices. As Figure 2 below shows, Android (open source) had a 49% share of shipments in 2014; if we include the various other open source OSes in ‘other’, this increases to more than 50%.

Figure 2: Share of consumer electronics shipments* by OS, 2014

Source: Gartner
* Includes smartphones, tablets, laptops and desktop PCs

However, one of the components being open sourced is Swift’s (hitherto proprietary) compiler – a program that translates written code into an executable program that a computer system understands. The implication of this is that, in theory, we could even see Swift applications running on non-Apple devices in the future. In other words, Apple believes the risk of Swift being used on Android is outweighed by the reward of engaging with the developer community through open source.

Whilst some technology companies, especially the likes of Facebook, Google and Netflix, are well known for their activities in open source, Apple is a company famous for its proprietary approach to both hardware and software. This, combined with similar activities by Microsoft (which open sourced its .NET framework in 2014), suggests that open source is now less optional than it once was.

Open source is both an old and a new concept for operators

At first glance, open source also appears to now be less optional for telecoms operators, who traditionally procure proprietary software (and hardware) from third-party vendors. Whilst many (but not all) operators have been using open source software for some time, such as Linux and various open source databases in the IT domain (e.g. MySQL), we have in the last 2-3 years seen a step-change in operator interest in open source across multiple domains. The following quote, taken directly from the interviews, summarises the situation nicely:

“Open source is both an old and a new project for many operators: old in the sense that we have been using Linux, FreeBSD, and others for a number of years; new in the sense that open source is moving out of the IT domain and towards new areas of the industry.” 

AT&T, for example, has been speaking widely about its ‘Domain 2.0’ programme, which aims to transform AT&T’s technical infrastructure to incorporate network functions virtualisation (NFV) and software-defined networking (SDN), to mandate a higher degree of interoperability, and to broaden the range of alternative suppliers available across its core business. By 2020, AT&T hopes to virtualise 75% of its network functions, and it sees open source as accounting for up to 50% of this. AT&T, like many other operators, is also a member of various recently-formed initiatives and foundations around NFV and SDN, such as OPNFV – Figure 3 lists some below.

Figure 3: OPNFV Platinum Members

Source: OPNFV website

However, based on publicly-available information, other operators might appear to have lesser ambitions in this space. As ever, the situation is more complex than it first appears: other operators do have significant ambitions in open source and, despite the headlines NFV and SDN draw, there are many other business areas in which open source is playing (or will play) an important role. Figure 4 below includes three quotes from the interviews which highlight this broad spectrum of opinion:

Figure 4: Different attitudes of operators to open source – selected interview quotes

Source: STL Partners interviews

Key Questions to be Addressed

We therefore have many questions which need to be addressed concerning operator attitudes to open source software, adoption (by area of business), and more:

  1. What is open source software, what are its major initiatives, and who uses it most widely today?
  2. What are the most important advantages and disadvantages of open source software? 
  3. To what extent are telecoms operators using open source software today? Why, and where?
  4. What are the key barriers to operator adoption of open source software?
  5. Prospects: How will this situation change?

These are now addressed in turn.

  • Preface
  • Executive Summary
  • Introduction
  • Open source is less optional than it once was – even for Apple and Microsoft
  • Open source is both an old and a new concept for operators
  • Key Questions to be Addressed
  • Understanding Open Source Software
  • The Theory: Freely available, licensed source code
  • The Industry: Dominated by key initiatives and contributors
  • Research Findings: Evaluating Open Source
  • Open source has both advantages and disadvantages
  • Debunking Myths: Open source’s performance and security
  • Where are telcos using open source today?
  • Transformation of telcos’ service portfolios is making open source more relevant than ever…
  • … and three key factors determine where operators are using open source software today
  • Open Source Adoption: Business Critical vs. Service Area
  • Barriers to Telco Adoption of Open Source
  • Two ‘external’ barriers by the industry’s nature
  • Three ‘internal’ barriers which can (and must) change
  • Prospects and Recommendations
  • Prospects: An open source evolution, not revolution
  • Open Source, Transformation, and Six Key Recommendations
  • About STL Partners and Telco 2.0
  • About Dialogic

 

  • Figure 1: Split of Interviewees by Business Area
  • Figure 2: Share of consumer electronics shipments* by OS, 2014
  • Figure 3: OPNFV Platinum Members
  • Figure 4: Different attitudes of operators to open source – selected interview quotes
  • Figure 5: The Open IT Ecosystem (incl. key industry bodies)
  • Figure 6: Three Forms of Governance in Open Source Software Projects
  • Figure 7: Three Classes of Open Source Software License
  • Figure 8: Web Server Share of Active Sites by Developer, 2000-2015
  • Figure 9: Leading software companies vs. Red Hat, market capitalisation, Oct. 2015
  • Figure 10: The Key Advantages and Disadvantages of Open Source Software
  • Figure 11: How Google Works – Failing Well
  • Figure 12: Performance gains from an open source activation (OSS) platform
  • Figure 13: Intel Hardware Performance, 2010-13
  • Figure 14: Open source is more likely to be found today in areas which are…
  • Figure 15: Framework mapping current telco uptake of open source software
  • Figure 16: Five key barriers to telco adoption of open source software
  • Figure 17: % of employees with ‘software’ in their LinkedIn job title, Oct. 2015
  • Figure 18: ‘Waterfall’ and ‘Agile’ Software Development Methodologies Compared
  • Figure 19: Four key cultural attributes for successful telco transformation

Full Article: Aligning M2M with Telco 2.0 Strategies

Summary: A review of Telenor, Jasper Wireless and KPN’s approaches to M2M, examining how M2M strategy needs to fit with an operator’s future broadband business model strategy. (October 2010)


M2M: escaping the cottage industry

The M2M (Machine-to-Machine) market, also known as “Embedded Mobile”, has frequently been touted as a major source of future growth for the industry. Verizon Wireless, for example, has set a target of 400% mobile penetration, implying three embedded devices for each individual subscriber. However, it is widely considered that this market is cursed by potential – success always seems to be five years away. At this Spring’s Telco 2.0 Executive Brainstorm, delegates described it as being “sub-scale” and a “cottage industry”.

 

Specific technical, operational, and economic issues have driven this initial fragmentation. M2M is characterised by diversity – this is inevitable, as there are thousands of business processes in each of tens of thousands of sub-sectors across the industrial economy. As well as the highly tailored nature of the applications, there is considerable diversity in hardware and software products, and new products will have to coexist with many legacy systems. These many diverse but necessary combinations have provided fertile ground for the separate ‘cottage industries’.

 

As a result, it is difficult to build scale, despite the large total market size. Also, the high degree of specialisation in each sub-market acts as a barrier to entry. Volume is critical, as typical ARPUs for embedded devices are only a fraction of those we have come to expect from human subscribers. This also implies that efficiency and project execution are extremely important – there is little margin for error. Finally, with so much specialisation at both the application and device ends of the equation, it is hard to see if and where there is much room for generic functionality in the middle.

Special Technical and Operational Challenges

The technical problems are challenging. M2M applications are frequently safety-critical, operations-critical, or both. This sets a high bar in terms of availability and reliability. They often have to operate in difficult environments. Information security issues will be problematic, and new technologies such as the “Internet of things”/ubiquitous computing will make new demands in terms of disclosure that contradict efforts to secure the system. An increasingly common requirement is for embedded devices to communicate directly and to self-organise – in the past, M2M systems have typically used a client-server architecture and guaranteed security by isolating their communications networks from the wider world. The security requirements of a peer-to-peer, internetworked M2M system are qualitatively different to those of traditional Supervisory Control and Data Acquisition (SCADA) systems.

 

One of the reasons for customer interest in self-organising systems is that M2M projects often involve large numbers of endpoints, which may be difficult to access once deployed, and the costs of managing the system can be very high. How are the devices deployed, activated, maintained, and withdrawn? How does the system authenticate them? Can a new device appearing on the scene be automatically detected, authenticated, and connected? A related problem is that devices are commonly integrated in industrial assets that have much longer design lives than typical cellular electronics; computers are typically depreciated over 3 years, but machine tools, vehicles, and other plant may have a design life of 30 years or more.

This implies that the M2M element must be repeatedly upgraded during its lifetime, and if possible, this should happen without a truckroll. (The asset, after all, may be an offshore wind turbine, in which case no-one will be able to visit it without using a helicopter.) This also requires that upgrades can be rolled back in the event they go wrong.

The Permanent Legacy Environment

We’ve already noted that there is a great variety of possible device classes and vendors, and that new deployments will have to co-exist with legacy systems. In fact, given the disparity between their upgrade cycles and the design lives of the assets they monitor, it’s more accurate to say that these devices will exist in a permanent legacy environment.

Solution: The Importance of System Assurance

Given the complex needs of M2M applications, just providing GPRS connectivity and modules will not be enough. Neither is there any reason to think operators will be better than anyone else at developing industrial process control or management-information systems. However, look again at the issues we’ve just discussed – they cluster around what might be termed “system assurance”. Whatever the application or the vendor, customers will need to be able to activate, deactivate, identify, authenticate, read out, locate, command, update, and roll back their fleet of embedded devices. It is almost certainly best that device designers decide what interfaces their product will have as extensions to a standard management protocol. This also implies that the common standard will need to include a function to read out what extensions are available on a given device. The similarities with the well-known SNMP (Simple Network Management Protocol) and with USSD are extensive.

 

These are the problems we need to solve. Are there technology strategies and business models that operators can use to profit by solving them?

We have encountered a number of examples of how operators and others have answered this question.

Three Operators’ Approaches

1. Telenor: Comprehensive Platform

Telenor Objects is a platform for handling the management, systems administration, information assurance, and applications development of large, distributed M2M systems. The core of the product is an open-source software application developed in-house at Telenor. Commercially, Objects is offered as a managed service hosted in Telenor’s data centres, either with or without Telenor data network capacity. This represents concentration on the system assurance problems we discussed above, with a further concern for rapid applications development and direct device-to-device communication.

2. Jasper: Connectivity Broker

Several companies – notably Jasper Wireless, Wyless plc., and Telenor’s own Connexion division – specialise in providing connectivity for M2M applications as a managed service. Various implementations exist, but a typical one is a data-only MVNO with either wholesale or roaming relationships to multiple physical operators. As well as acting as a broker in wholesale data service, they may also provide some specialised BSS-OSS features for M2M work, thus tackling part of the problems given above.

3. KPN: M2M Happy Pipe

KPN (an investor in Jasper Wireless) has recently announced that it intends to deploy a CDMA450 network in the Netherlands exclusively for M2M applications. Although this is a significant CAPEX commitment to the low-margin connectivity element of the M2M market, it may be a valid option. Operating at 450MHz, as opposed to 900/1800/1900MHz GSM or 2100MHz UMTS, provides much better building penetration and coverage at the cost of reduced capacity. The majority of M2M applications are low bandwidth, many of them will be placed inside buildings or other radio-absorbing structures, and the low ARPUs imply that cost minimisation will be significantly more important than capacity. Where suitable spectrum is available, and a major launch customer – for example, a smart grid project – exists to justify initial expenditure, such a multi-tenant data network may be an attractive opportunity. However, this assumes that the service-enablement element of the product is provided by someone else – which may be the profitable element.

 

Finally, Verizon Wireless’s Open Development Initiative, rather than being a product, is a standardisation initiative intended to increase the variety of devices available for M2M implementers by speeding up the process of homologating (the official term) new modules. The intention is to create a catalogue of devices whose characteristics can be trusted and whose control interfaces are published. This is not a lucrative business, but something like it is necessary to facilitate the development of M2M hardware and software.

Horizontal Enablers

These propositions have in common that they each represent a different horizontal element of the total M2M system-of-systems – whether it’s the device-management and applications layer, as in Telenor Objects, a data-only MVNO such as Connexion or Wyless, or a radio network like KPN’s, it’s a feature or capability that is shared between different vertical silos and between multiple customers.

 

In developing horizontal enabler capabilities, operators need to consider how to both drive the development and growth of what is effectively a new market, and ensure that they are adding value and getting paid for it. There is a natural tension between these objectives.

 

The tension is between providing a compelling opportunity to potential ecosystem partners (and in particular, offering them low-cost access to a large potential market) and securing a clear role for providers to extract value (in particular, through differentiation).

 

Tensions between operators and users

Linux: a case study

To explore operator options, we have looked to the experience of Linux. This is an example of how the demands of a highly diverse user base can be tackled through horizontalisation, modular design, and open source development. Since the 1990s, the operating system has come to include a huge diversity of specialised variants, known as distributions. These consist of elements that are common to all Linux systems – such as one of the various kernels which provide the core operating system functions, the common API, drivers for different hardware devices, and a subset of a wide range of software libraries that provide key utility programs – and modules specific to the distribution, that implement its special features.

 

For example, Red Hat Enterprise Linux and OpenSUSE are enterprise-optimised distributions, CentOS is frequently used for Asterisk and other VoIP applications, Ubuntu is a consumer distribution (which itself has several specialised variants such as Edubuntu for educational applications), Android is a mobile-optimised distribution, Slackware and Debian exist for hardcore developers, Quagga and Zebra are optimised for use as software-based IP routers, and Wind River produces ultra-low-power systems for embedded use.

 

In fact, it’s probably easier to illustrate this than it is to describe it. The following diagram illustrates the growing diversity of the Linux family.

The evolution of Linux distributions over time

The reason why this has been a) possible and b) tolerable is the horizontalised, open-source, and modular nature of Linux. It could easily have been far too difficult to do a version for minimal industrial devices, another for desktop PCs, and yet another for supercomputers. Or the effort to do so could have created a morass of highly incompatible subprojects.

 

In creating a specialised distribution (or ‘distro’), it’s possible to rely on the existing features that span the various distributions and deal with the requirements they have in common. Similarly, a major improvement in one of those core features has a natural source of scale, and will tend to attract community involvement in its maintenance, as all the other distros will rely on it. This structure both supports specialisation and innovation, and helps to scale up support for the features everyone uses.

 

 

The Linux kernel – horizontal specialisation in action

 

 

To recap, we think that M2M devices may be a little like this – very different, but relying on at least a subset of the features in a common specification. The Linux analogy is especially important given that a lot of them are likely to use some sort of embedded Linux platform. Minimal common features are likely to cluster around:

  • Activation/Deactivation – initial switch-on of a device, provisioning it with connectivity, and eventually switching it off
  • Authentication – checking if this is the device it should be
  • Update/Rollback – updating the software and firmware on a device, and reversing this if it goes wrong
  • Device Discovery – detecting the presence of new devices
  • State Readout – getting the current values for whichever parameters the device is monitoring
  • Location – where is the device?
  • Device Status – is it working?
  • Generic Event Notification parameters – providing for notifications to and from devices that are specified by the user

This list is likely to be extended by device implementers and software developers with device- and application-specific commands and data formats, so there will also need to be a function to get a device’s interfaces and capabilities. Technically, this has considerable commonality with formats like USB, SNMP (Simple Network Management Protocol), SyncML, etc. – it’s possible that these might be implemented as extensions to one of these protocols.
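
To make the shape of such a common management protocol concrete, here is a minimal sketch (our own, not any published standard) of the interface described above, including the capability read-out that lets generic tools discover device-specific extensions:

```python
from abc import ABC, abstractmethod

class ManagedDevice(ABC):
    """Minimal common management surface for an M2M device.
    Illustrative only; the method names are ours, not a standard's."""

    @abstractmethod
    def activate(self) -> None: ...            # provision connectivity
    @abstractmethod
    def deactivate(self) -> None: ...          # end-of-life the device
    @abstractmethod
    def authenticate(self, credential: bytes) -> bool: ...
    @abstractmethod
    def update(self, firmware: bytes) -> None: ...
    @abstractmethod
    def rollback(self) -> None: ...            # reverse a failed update
    @abstractmethod
    def read_state(self) -> dict: ...          # current monitored parameters
    @abstractmethod
    def locate(self) -> tuple: ...             # where is the device?
    @abstractmethod
    def status(self) -> str: ...               # is it working?

    def capabilities(self) -> list:
        """Read out the extensions a device adds beyond the common standard."""
        return sorted(set(dir(self)) - set(dir(ManagedDevice)))
```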

 

For our purposes, it’s more interesting to note that these functions have a lot in common with telcos’ own competences with regard to device management, activation, OSS/BSS, and the SIM/USIM. Operators in general, and specifically mobile operators, already have to detect, authenticate, provision, update, and eventually end-of-life a diverse variety of mobile devices. As much of this as possible must happen over-the-air and automatically.

 

It is worth noting that Telstra recently announced their move into the M2M market. Although they are doing so with Jasper Wireless as a partner, the product (Telstra M2M Wireless Control Centre) is a Web-based self-service application for customers to activate and administer their own devices.

The commercial strategies of Linux vendors

Returning to the IT world, it’s worth asking “how do the Linux vendors make money?” After all, their product is at least theoretically free. We see three options.
  • Option 1 – Red Hat, Novell

Both of these major IT companies maintain their own Linux distribution (RHEL and OpenSUSE respectively, two of the most common enterprise distros) and are very significant contributors of code to the core development process. They also develop much application-layer software.
 

As well as releasing the source code, though, they also offer paid-for, shrink-wrapped versions of the distributions, often including added extras, and custom installations for major enterprise projects.
 

Typically, a large part of the commercial offering in such a deal consists of the vendor providing technical support, from first line up to systems integration and custom software development, and consulting services over the lifetime of the product. It has been remarked that Linux is only free if you value your own time at zero – this business model externalises the maintenance costs and makes them into a commercial product that supports the “free” element.

  • Option 2 – IBM

Although IBM has long had its own proprietary Unix-like operating system, through the 2000s it has become an ever more significant Linux company – the only enterprise that could claim to be bigger would be Google. Essentially, they use it as just another software option for their IT consulting and managed services operation to sell, with the considerable advantages of no upstream licence costs, very broad compatibility, and maximum scope for custom development. In return, IBM contributes significant resources to Linux, and to other open-source projects, notably OpenOffice.

  • Option 3 – Rackspace

And, of course, one way to make money from Linux is good old-fashioned hosting – they call it the cloud these days. Basically, this option captures any sort of managed-service offering that uses it as a core enabler, or even as the product itself.

 

The big divide between the options, in the end, is the cost of entry and the form it takes. If you aim to tackle Option 1, there is no substitute for very significant investment in technical R&D, at least to the level of Objects. Building up the team, the infrastructure, and significant original technology is the entry ticket. Operators aren’t – with some honourable exceptions – the greatest at internal innovation, so beware.

 

Telenor: flexibility through integration of multiple strategies

With Objects, Telenor has chosen this daring course. However, they have also hedged their bets between the Red Hat/Novell model and the managed-service model, by integrating elements of options 1 and 3. Objects is technically an open-source software play, and commercially/operationally a hosted service based in their existing data centre infrastructure. Its business model is solidly based on usage-based subscription.

 

This doesn’t mean, however, that they couldn’t flex to a different model in markets where they don’t have telco infrastructure – offering technical support and consulting to third-party implementers of the software would be an option, and so would rolling it into a broader systems-integration/consulting offering. In this way, horizontalisation offers flexibility.

 

Option 2, of course, demands a significant specialisation in IT, SI, and related trades. This is probably achievable for those operators, like BT and DTAG, who maintain a significant IT services line of business. Otherwise, this would require a major investment and a risky change of focus.

Connectivity: needs a launch customer…

Option 3 – pure-play connectivity – is a commodity business in a sector where ARPU is typically low. However, oil is also a commodity, and nobody thinks that’s not a good business to be in. Two crucial elements for success will be operations excellence – customers will demand high availability, while low ARPU will constrain operators to an obsessive focus on cost – and volume. It will be vital to acquire a major launch customer to get the business off the ground. A smart grid project, for example, would be ideal. Once there, you can sell the remaining capacity to as many other customers as you can drum up.

 

Existing operators, like KPN, will have the enormous advantage of being able to re-use their existing physical footprint of cell sites, power, and backhaul, adding a radio network better suited to cheap coverage, building penetration, and relatively low bandwidth requirements – CDMA450, say, or WiMAX at low frequencies.

Conclusion: M2M must fit into a total strategy

In conclusion, the future M2M market tends to map onto other ideas about the future of operators. We identified three key strategies in the Future Broadband Business Models strategy report, and they have significant relevance here.

 

“Telco 2.0”, with its aim to be a highly agile development
platform, is likely to look to the software-led Option 1, and perhaps consider partnering
with a suitable player. They might license the Objects brand and make use of the
source code, or else come to an arrangement with Telenor to bring the product
to their customers as a managed service.

 

The wholesale-led, cost-focused “Happy Pipe” and its close cousin, the “Government Department”, are likely to take their specialisation in cheap, reliable connectivity into a string of new vertical markets, picking appropriate technology and looking for opportunities in the upcoming spectrum auctions.

 

“Device Specialists”, with their deep concern for finding the right content, brands, and channels to market, are likely to pick Option 2 – if they have a business like T-Systems or BT Global Services, they’ll integrate it; otherwise they’ll partner with an IT player.

Telco 2.0 Further Reading

If you found this article interesting, you may also be interested in Enterprise 2.0 – Machine-to-Machine Opening for Business, a report of the M2M session at the last Telco 2.0 Executive Brainstorm, and M2M / Embedded Market Overview, Healthcare Focus, and Strategic Options, our Executive Briefing on the M2M market and the healthcare industry vertical.

Full Article: LiMo – The Tortoise picks up Momentum

Mobile Linux foundation LiMo’s presence at the Mobile World Congress was impressive. DoCoMo demonstrated a series of handsets built on the OS, and LG and Samsung showed a series of reference implementations. But more impressive than the actual and reference handsets were the toolkits launched by Access and Azingo.

We believe that LiMo has an important role to play in the mobile ecosystem, and the platform is so compelling that over time more and more handsets based upon the OS will find their way into consumers’ hands. So why is LiMo different and important?

In a nutshell, it is not owned by anyone and is not being driven forward by any one member. Symbian and Android may also be open-source, but no one has any serious doubt about who is paying for the majority of the resources in each case, and therefore whose business model they might – consciously or subconsciously – favour. The LiMo founder members were split evenly between operators (DoCoMo, Vodafone and Orange) and consumer electronics companies (NEC, Panasonic and Samsung). Since then several other operators, handset makers, chip makers and software vendors have joined. The current board contains a representative sample of organisations across the mobile value chain.

LiMo as the Unifying Entity

The current handset OS market reminds us very much of the days when the computing industry shifted from proprietary operating systems to various mutations of Unix. Over time, more and more companies moved away from proprietary extensions and moved their code into full open source. Unix was broken down into a core kernel, various drivers, a vast body of middleware and a smattering of user interfaces. Value shifted to the applications and services. Today, with open source mature, each company can decide which bits of Unix it wants to put development resources into and which bits it wants to include in its own distribution.

Figure 2: LiMo Architecture

The reason that Unix developed this way is pure economics – it is just too expensive for many companies to build and maintain their own flavours of operating system. In fact, there are currently only two mainstream companies who can afford to build their own – Microsoft and Apple – and the house of Apple is built upon Unix foundations anyway. Today, we are seeing the same dynamics in the mobile space, and it is only a question of time before more and more companies shift resources away from internal projects and onto open-source ones. LiMo is the perfect home for coordinating this open-source effort – especially if the LiMo Foundation allows the suppliers of code the freedom to develop their own roadmaps according to areas of perceived value and weakness.

LiMo should be really promiscuous to succeed

In June 2008, LiMo merged with the LiPS Foundation – great news. It is pointless and wasteful to have two foundations doing more or less the same thing, one from a silicon viewpoint and the other from an operator viewpoint. Just before Barcelona, LiMo endorsed the OMTP BONDI specification and announced that it expects future LiMo handsets using a web runtime to support it. Again, great news: it is pointless to redo specification work, perhaps with a slightly different angle. Actions like these are critical to the success of LiMo – embracing the work done by others and implementing it in an open-source manner, available to all.

Compelling base for Application Innovation

The real problem with developing mobile applications today is the porting cost of supporting the wide array of operating systems. LiMo offers the opportunity to radically reduce this cost. This is going to become critical for the next generation of wirelessly connected devices, whether machine-to-machine, general consumer devices or niche applications serving vertical industries. For the general consumer market, the key is to get handsets to the consumers. DoCoMo has done a great job of driving LiMo-based handsets into the Japanese market. 2009 needs to be the year that a European operator (e.g. Vodafone) or a US operator (e.g. Verizon) deploys handsets in other markets.

It is also vital that operators make available some of their internal capabilities for use directly by LiMo handsets and allow coupling to externally developed applications. These assets are not just the standard network services, but also internal service delivery platform capabilities. This adds to the cost advantage that LiMo will ultimately have over the other handset operating systems. As in the computing world before, over time value will move away from hardware and operating systems towards applications and services. It is no accident that both Nokia and Google are moving into mobile services as a future growth area. The operators need an independent operating system to hold back their advance onto traditional operator turf.

In summary:

We feel that as complexity increases in the mobile world, the economics of LiMo will become more favourable. LiMo’s market share will start to increase – the only question is the timeframe. Crucially, LiMo is well placed to get the buy-in of the most important stakeholders: operators. Operators are to mobile devices as content creators were to VHS; how well would the iPhone have done without AT&T?

Plus:

  • following the same path as the evolution of the computing industry
  • broad and growing industry support

Minus:

  • not yet reached critical mass
  • economic incentives for application developers are still vague

Interesting:

  • commoditisation of the hardware and operating system layers – value moving towards applications and services
  • a way for operators to counter the growing strength of Apple, Nokia & Google.

Questions:

  • how can operators add their assets to make the operating system more compelling?
  • how can the barriers of intellectual property ownership be overcome?

Full Article: Nokia and Symbian – Missing an Opportunity?

The recent purchase of Symbian by Nokia highlights the tensions around running a consortium-owned platform business. Obviously, Nokia believes that making the software royalty-free and open source is the key to future mass adoption. The team at Telco 2.0 disagrees: we believe the creation of the Symbian Foundation will cure none of the governance or product issues going forward. Meanwhile, the competition has moved on and offers a lot more than pure handset features, and Symbian isn’t strong in the really important bits of the mobile jigsaw – the bits that generate the real value for the end-consumer, the developer and the mobile operator.

In this article, we look at the operating performance of Symbian. In a second article we examine the “openness” of Symbian going forward, since “open” remains such a talisman of business model success.

Background

Symbian’s core product is a piece of software code that the user doesn’t interact with directly — it’s low-level operating system code to deal with key presses, screen display, and controlling the radio. Unlike Windows (but rather like Unix) there are three competing user interfaces built on this common foundation: Nokia’s Series 60 (S60), Sony Ericsson’s UIQ, and DoCoMo’s MOAP. Smartphones haven’t taken the world by storm yet, but Symbian is the dominant smartphone platform, and thus is well positioned to trickle down to lower-end handsets over time. What might be relevant to 100m handsets this year could be a billion handsets two or three years from now. As we saw on the PC with Windows, the character of the handset operating system is critical to who makes money out of the mobile ecosystem.

The “what” of the deal is simple enough — Nokia spent a sum of money equivalent to two years’ licence fees buying out the other shareholders in Symbian, before staving off general horror from other vendors by promising to convert the firm into an open-source foundation like the ones behind Mozilla, Apache and many other open-source projects. The “how” is pretty simple, too. Nokia is going to chip in its proprietary S60, and assign the S60 developers to work on Symbian Foundation projects.

Shareholding Structure

The generic problem with a consortium is that the members are typically not all equal, and they almost certainly have different objectives. This has always been the case with Symbian.

It is worth examining the final shareholder structure, which has been stable since July 2004: Nokia – 47.9%, Ericsson – 15.6%, SonyEricsson – 13.1%, Panasonic – 10.5%, Siemens – 8.4% and Samsung – 4.5%. At the bottom of the article we have listed the key corporate events in Symbian’s history and the changes in shareholding.

It is interesting to note that Siemens is out of the handset business; Panasonic doesn’t produce Symbian handsets (it uses LiMo); Ericsson only produces handsets indirectly through SonyEricsson; and Samsung notably spreads its bets across handset operating systems.

SonyEricsson has been committed to Symbian at the top end of its range, although it has recently been adding Windows Mobile for its Xperia range targeted at corporates.

Nokia seems the most committed, though it has recently purchased Trolltech — a notable fan of Linux and the developer of Qt.

The tensions among the shareholders seem obvious: Siemens was probably in the consortium for pure financial return, whereas for Nokia it was a key component of industrial strategy and of the cost base for its high-end products. The other shareholders were somewhere in between those extremes. The added variable was that Samsung, Nokia’s strongest competitor, seemed hardly committed to the product.

It is easy to form a hypothesis that the software roadmap and licence pricing for Symbian were difficult to agree – and that was before the user interface angle (see below).

Ongoing Business Model

Going forward, Nokia has solved the argument over licence pricing — it is free. Whether this is passed on to consumers in the form of lower handset prices is open to debate; after all, Nokia somehow has to recover the cost of an additional 1,000 personnel on its payroll. For SonyEricsson, with its recent profit warning, any improvement in margin will be appreciated, but this doesn’t necessarily mean a reduction in pricing.

It also seems obvious that Nokia will control the software roadmap going forward. It seems to us that handset makers using Symbian will be faced with three options: free-ride on Nokia; pick and choose components and differentiate with self-built components; or pick another OS.

We think that the chosen licence (Eclipse — described in more detail in the next article), the history of Symbian user interfaces, and the dominance of Nokia all point towards other handset makers producing their own flavours of Symbian going forward.

Competition

Even without competitive pressures, Nokia may have bought Symbian purely to reduce its own royalty payments. However, the competitive environment adds an additional dimension to the decision.

RIM and Microsoft are extremely strong in the corporate space, and both excel at something Symbian is currently extremely weak in — synchronising with messaging and calendaring services.

Apple has also raised the bar in usability. This is an area Symbian itself has largely stayed clear of, and it is certainly not one of the strengths of S60, the Nokia front end. The wife of one of our team — tech-savvy, tri-lingual, with a PhD in molecular biology — couldn’t work out how to change the ringtone, and not for lack of trying. What do you mean it’s not under ‘settings’? Some unkind tongues have even speculated that the S60 user interface was inspired by an Enigma Machine stolen to order by Nokia executives.

Qualcomm is rarely mentioned when phone operating systems are talked about, because it takes a completely different approach. Qualcomm’s BREW would be better classified as a content delivery system, and it is gaining traction in Europe. Two really innovative handsets of last year, the O2 Cocoon and the 3 Skypephone, were both based upon Qualcomm software. Qualcomm’s differentiator is that it is not a consumer brand and develops solutions in partnership with operators.

The RIM, Microsoft, Apple and Qualcomm solutions share one thing in common: they incorporate network elements which deliver services.

Nokia is of course moving into back-end solutions through its embryonic Ovi services. And this may be the major point about Symbian: it is only one piece, albeit an important one, of the jigsaw. Meanwhile, as we’ve written before, Ovi remains obsessed with information and entertainment services, neglecting the network side of the core voice and messaging service. Contrast this with Apple’s early move on Visual Voicemail.

As Jim Balsillie, co-CEO of RIM, said this week: “The sector is shifting rapidly. The middle part is hollowing — there are cheap, cheap, cheap phones and then it is smartphones to a connected platform.”

Key Symbian Dates

June 1998 – Launch, with Psion owning 40%, Nokia 30% and Ericsson 30%.

Oct 1998 – Motorola joins the consortium.

Jan 1999 – Symbian acquires Ronneby Labs from Ericsson and with it the original UIQ team & codebase.

Mar 1999 – DoCoMo partnership

May 1999 – Panasonic joins the consortium. Equity stakes now: Psion – 28%, Nokia / Ericsson / Motorola – 21% each, Panasonic – 9%.

Jan 2002 – Funding round of £20.75m. SonyEricsson takes up Ericsson’s rights.

Jun 2002 – Siemens joins the consortium with £14.25m for 5%. Implied value £285m.

Feb 2003 – Samsung joins the consortium with £17m for 5%. Implied value £340m.

Aug 2003 – Fifth anniversary: original consortium members can now sell. Motorola sells its stake for £57m to Nokia and Psion. Implied value £300m.

Feb 2004 – Original founder Psion decides to sell out, announcing the sale of its 31.7% for £135.5m, with part of the payment dependent on future royalties. Implied value £427m. Nokia would have > 50% control. David Potter of Psion says total investment in Symbian was £35m to date, so £135.5m represents a good return.

July 2004 – Preemption of Psion Stake by Panasonic, SonyEricsson & Siemens. Additional Rights issue of £50m taken up by Panasonic, SonyEricsson, Siemens & Nokia. New Shareholding structure: Nokia – 47.9%, Ericsson – 15.6%, SonyEricsson – 13.1%, Panasonic – 10.5%, Siemens – 8.4% and Samsung – 4.5%.

The shareholders also agree to raise the cost base to c. £100m per annum and the headcount to c. 1,200.

Feb 2007 – Agree to sell UIQ to SonyEricsson for £7.1m.

June 2008 – Nokia buys the rest of Symbian at an implied value of €850m (£673m), with approximate payouts of: Ericsson – £105m, SonyEricsson – £88.2m, Panasonic – £70.7m, Siemens – £56.5m and Samsung – £30.3m. Note that Symbian had net cash of €182m. The €262m price quoted by Nokia is the net price paid to buy out the consortium, not the value of the company. (A quick check of this arithmetic follows below.)
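
For readers who want to verify the valuation arithmetic, the short Python sketch below recomputes the implied values and payouts from the stakes and prices quoted above. All inputs are this article’s own figures; the reconciliation of the €262m net price (gross consideration for the 52.1% Nokia did not already own, less Symbian’s net cash) is our assumed reading of Nokia’s statement, not a published breakdown.

    # Quick consistency check of the implied-value arithmetic in the
    # key dates above. Inputs are figures quoted in this article; the
    # EUR262m reconciliation (gross consideration minus Symbian's net
    # cash) is an assumption, not a figure published by Nokia.

    def implied_value(price_paid: float, stake: float) -> float:
        # Paying `price_paid` for a fractional `stake` implies this
        # valuation for the whole company.
        return price_paid / stake

    print(implied_value(14.25, 0.05))    # Siemens, Jun 2002: ~GBP285m
    print(implied_value(17.0, 0.05))     # Samsung, Feb 2003: ~GBP340m
    print(implied_value(135.5, 0.317))   # Psion,   Feb 2004: ~GBP427m

    # June 2008: Nokia (47.9%) buys the remaining 52.1% at an implied
    # value of EUR850m, quoted as GBP673m.
    stakes = {"Ericsson": 0.156, "SonyEricsson": 0.131,
              "Panasonic": 0.105, "Siemens": 0.084, "Samsung": 0.045}
    for name, stake in stakes.items():
        # Reproduces the quoted payouts of GBP105m / 88.2m / 70.7m /
        # 56.5m / 30.3m (to rounding).
        print(name, round(673 * stake, 1))

    gross = 850 * sum(stakes.values())   # ~EUR443m paid to the consortium
    print(round(gross - 182))            # ~EUR261m after deducting the
                                         # EUR182m net cash -- close to
                                         # the EUR262m Nokia quoted

On these assumptions the numbers hang together: each “implied value” is simply the price paid divided by the stake acquired, and the quoted payouts are each seller’s stake multiplied by the £673m total.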