Edge computing market sizing forecast: Second release

This is the second release of STL Partners’ edge computing revenue forecast.

In this release, we update the forecast and include regional edge infrastructure.

The edge computing market continues to attract different types of players, including telcos, hyperscalers, data centre operators and enterprise connectivity providers. The varying requirements across verticals, business sizes and use cases create an opportunity that can accommodate all of these players. However, it is important for any edge provider to understand how to position its service and which areas of the market to pursue, both vertically and horizontally.

Through quantitative analysis, this report aims to help telcos and others to identify where opportunities lie. This report presents the key findings of STL Partners’ demand forecast model for edge computing services. Its purpose is to:

  • Assess the demand from 20 use cases which currently rely on edge or will require edge to fully develop;
  • Identify the total revenue across the value chain: device, connectivity, application, edge infrastructure (regional, network and on-premise), and integration and support;
  • Output a full set of results for over 90 countries over the 2020–2030 period per use case and per vertical.

This report is accompanied by a dashboard which presents a summary of our model output and the associated graphics for the world’s regions and for 20 major markets. The dashboard also presents the full revenue output for the 97 countries.


Edge computing addressable revenue will reach US$445 billion by 2030

High-level findings from the model indicate that:

  • The total edge computing addressable market will grow from US$9 billion in 2020 to US$445 billion in 2030, at a CAGR of 48% over the 10-year period (the short calculation after this list shows how the CAGR is derived).
  • We now forecast regional edge in addition to network and on-prem edge. Regional edge refers to local edge data centres that are outside the telecoms operators’ network. Examples of these include internet exchange data centres, small data centres in Tier 2/3 cities, AWS Local Zones, etc.
  • The vertical opportunities in on-prem and distributed edge are quite different. Telcos and other providers that are looking into the various types of infrastructure to offer edge services should evaluate these differences and assess their own capabilities and willingness to compete in these verticals.
  • The growth in the number of connected devices, as well as the need for higher levels of automation, operational efficiency and cost reduction, will drive the adoption of edge computing across many use cases and verticals over the next 10 years. This will result in increased spend across the value chain.
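
As a quick sanity check (not part of the model itself), the headline CAGR can be reproduced from the 2020 and 2030 values alone. A minimal Python sketch, using only the figures quoted above:

```python
# Quick check of the headline CAGR from the start and end values quoted above.
start_value = 9     # addressable market in 2020, US$ billion
end_value = 445     # addressable market in 2030, US$ billion
years = 10          # 2020 to 2030

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~47.7%, i.e. roughly the 48% quoted
```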


Total edge computing addressable revenue 2020–2030

This forecast is part of our Edge Insights Service.


Edge computing market sizing forecast

We have updated this forecast; the latest release is summarised above.

Introducing STL Partners’ edge computing market sizing forecast

This report presents the key findings of STL Partners’ new demand forecast model for edge computing services. Its purpose is to:

  • Assess the demand from 20 use cases which currently rely on edge or will require edge to fully develop;
  • Identify the total revenue across the value chain: hardware, connectivity, application, edge infrastructure (network and on-premise), and integration and support;
  • Output a full set of results for over 180 countries over the 2020–2030 period per use case and per vertical.

This report is accompanied by a dashboard which presents a summary of our model output and the associated graphics for the world’s regions and for 20 major markets. The dashboard also presents the full revenue output for the 180+ countries.


Edge computing addressable revenue will reach US$543 billion by 2030

High-level findings from the model indicate that:

  • The growth in the number of connected devices, as well as the need for higher levels of automation, operational efficiency and cost reduction, will drive the adoption of edge computing across many use cases and verticals over the next 10 years. This will result in increased spend across the value chain.
  • The total edge computing addressable market will grow from US$10 billion in 2020 to US$543 billion in 2030 at a CAGR of 49% over the 10-year period.
  • The total value chain breaks into five main components: hardware, connectivity, application, integration and support, and edge infrastructure (which includes both on-prem edge and network edge).

Total edge computing addressable revenue

Source: STL Partners

Table of contents

  • Executive Summary
  • Methodology
  • Revenue by value chain component
  • Revenue by use case
  • Revenue by vertical
  • Revenue by region
  • Appendix

For more information on STL Partners’ edge-related services, please go to our Edge Insights Service page.

The new forecast is intended to complement our other edge computing research.


AI on the Smartphone: What telcos should do

Introduction

Following huge advances in machine learning and the falling cost of cloud storage over the last several years, artificial intelligence (AI) technologies are now affordable and accessible to almost any company. The next stage of the AI race is bringing neural networks to mobile devices. This will radically change the way people use smartphones, as voice assistants morph into proactive virtual assistants and augmented reality is integrated into everyday activities, in turn changing the way smartphones use telecoms networks.

Besides implications for data traffic, easy access to machine learning through APIs and software development kits gives telcos an opportunity to improve their smartphone apps, communications services, entertainment and financial services, by customising offers to individual customer preferences.
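
To make the point about machine learning APIs and SDKs concrete, the sketch below shows what running a small neural network on a handset might look like with TensorFlow Lite, one widely used on-device inference runtime. This is an illustration under our own assumptions: the report does not prescribe a particular SDK, and the model file and input data are hypothetical.

```python
# Illustrative only: on-device inference with TensorFlow Lite.
# "assistant_intent_model.tflite" is a hypothetical, pre-trained model that
# would in practice be bundled with the app and run on the handset itself.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="assistant_intent_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A dummy feature vector standing in for locally collected usage/sensor data.
features = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], features)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("Predicted intent scores:", scores)
```

Because the data never has to leave the device for inference, this pattern also underpins the on-device versus cloud privacy trade-offs and the federated learning approach discussed later in the report.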

The leading consumer-facing AI developers – Google, Apple, Facebook and Amazon – are in an arms race to attract developers and partners to their platforms, in order to further refine their algorithms with more data on user behaviours. There may be opportunities for telcos to share their data with one of these players to develop better AI models, but any partnership must be carefully weighed, as all four AI players are eyeing up communications as a valuable addition to their arsenal.

In this report we explore how Google, Apple, Facebook and Amazon are adapting their AI models for smartphones, how this will change usage patterns and consumer expectations, and what this means for telcos. It is the first in a series of reports exploring what AI means for telcos and how they can leverage it to improve their services, network operations and customer experience.

Contents:

  • Executive Summary
  • Smartphones are the key to more personalised services
  • Implications for telcos
  • Introduction
  • Defining artificial intelligence
  • Moving AI from the cloud to smartphones
  • Why move AI to the smartphone?
  • How to move AI to the smartphone?
  • How much machine learning can smartphones really handle?
  • Our smartphones ‘know’ a lot about us
  • Smartphone sensors and the data they mine
  • What services will all this data power?
  • The privacy question – balancing on-device and the cloud
  • SWOT Analysis: Google, Apple, Facebook and Amazon
  • Implications for telcos

Figures:

  • Figure 1: How smartphones can use and improve AI models
  • Figure 2: Explaining artificial intelligence terminology
  • Figure 3: How machine learning algorithms see images
  • Figure 4: How smartphones can use and improve AI models
  • Figure 5: Google Translate works in real-time through smartphone cameras
  • Figure 6: Google Lens in action
  • Figure 7: AR applications of Facebook’s image segmentation technology
  • Figure 8: Comparison of the leading voice assistants
  • Figure 9: Explanation of Federated Learning

How to build an open source telco – and why?

If you don’t subscribe to our research yet, you can download the free report as part of our sample report series.

Introduction: Why an open source telecom?

Commercial pressures and technological opportunities

For telcos in many markets, declining revenues are a harsh reality. Price competition is placing telcos under pressure to reduce capital spending and operating costs.

At the same time, from a technological point of view, the rise of cloud-based solutions has raised the possibility of re-engineering telco operations to run on virtualised, open source software and low-cost, general-purpose hardware.

Indeed, rather than pursuing the traditional technological model, i.e. licensing proprietary solutions from the mainstream telecoms vendors (e.g. Ericsson, Huawei, Amdocs, etc.), telcos can increasingly:

  1. Progressively outsource the entire technological infrastructure to a vendor;
  2. Acquire software with programmability and openness features: application programming interfaces (APIs) can make it easier to program telecommunications infrastructure.

The second option promises to enable telcos to achieve their long-standing goals of decreasing the time-to-market of new solutions, while further reducing their dependence on vendors.
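
As an illustration of what this programmability could look like in practice, the sketch below provisions a new service through a hypothetical REST-style API rather than a vendor change request. The endpoint, payload fields and credential are invented for the example; this is not a real operator API.

```python
# Hypothetical example: provisioning a network service through an open REST API.
# The endpoint, payload fields and token are illustrative placeholders.
import requests

API_BASE = "https://api.operator.example/v1"   # hypothetical northbound API
TOKEN = "example-token"                        # placeholder credential

service_request = {
    "service_type": "enterprise-vpn",
    "bandwidth_mbps": 200,
    "sites": ["site-london-01", "site-madrid-02"],
    "qos_profile": "low-latency",
}

response = requests.post(
    f"{API_BASE}/services",
    json=service_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("Service created:", response.json().get("service_id"))
```

The point is not the specific call, but that service creation becomes a software operation the operator can automate and iterate on, rather than a procurement cycle.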

Greater adoption of general IT-based tools and solutions also:

  • Allows flexibility in using the existing infrastructure
  • Optimises and reuses the existing resources
  • Enables integration between operations and the network
  • And offers the possibility to make greater use of the data that telcos have traditionally collected for the purpose of providing communications services.


In an increasingly squeezed commercial context, the licensing fees applied by traditional vendors for telecommunication solutions start to seem unrealistic, and the lack of flexibility poses serious issues for operators looking to push towards a more modern infrastructure. Moreover, the potential availability of competitive open source solutions provides an alternative that challenges the traditional model of making large investments in proprietary software, and dependence on a small number of vendors.

Established telecommunications vendors and/or new aggressive ones may also propose new business models (e.g., share of investments, partnership and the like), which could be attractive for some telcos.

In any case, operators should explore and evaluate the possibility of moving forward with a new approach based on the extensive usage of open source software.

This report builds on STL Partners’ 2015 report, The Open Source Telco: Taking Control of Destiny, which looked at how widespread use of open source software is an important enabler of agility and innovation in many of the world’s leading internet and IT players. Yet while many telcos then said they craved agility, only a minority were using open source to best effect.

In that 2015 report, we examined the barriers and drivers, and outlined six steps for telcos to safely embrace this key enabler of transformation and innovation:

  1. Increase usage of open source software: Overall, operators should look to increase their usage of open source software across their entire organisation due to its numerous strengths. It must, therefore, be consistently and fairly evaluated alongside proprietary alternatives. However, open source software also has disadvantages, dependencies, and hidden costs (such as internally-resourced maintenance and support), so it should not be considered an end in itself.
  2. Increase contributions to open source initiatives: Operators should also look to increase their level of contribution to open source initiatives so that they can both push key industry initiatives forward (e.g. OPNFV and NFV) and have more influence over the direction these take.
  3. Associate open source with wider transformation efforts: Successful open source adoption is both an enabler and symptom of operators’ broader transformation efforts, and should be recognised as such. It is more than simply a ‘technical fix’.
  4. Bring in new skills: To make effective use of open source software, operators need to acquire new software development skills and resources – likely from outside the telecoms industry.
  5. … but bring the whole organisation along too: Employees across numerous functional areas (not just IT) need to have experience with, or an understanding of, open source software – as well as senior management. This should ideally be managed by a dedicated team.
  6. New organisational processes: Specific changes also need to be made in certain functional areas, such as procurement, legal, marketing, compliance and risk management, so that their processes can effectively support increased open source software adoption.

This report goes beyond those recommendations to explore the changing models of IT delivery open to telcos and how they could go about adopting open source solutions. In particular, it outlines the different implementation phases required to build an open source telco, before considering two scenarios – the greenfield model and the brownfield model. The final section of the report draws conclusions and makes recommendations.

Why choose to build an open source telecom now?

Since STL Partners published its first report on open source software in telecoms in 2015, the case for embracing open source software has strengthened further. There are three broad trends that are creating a favourable market context for open source software.

Digitisation – the transition to providing products and services via digital channels and media. This may sometimes involve the delivery of the product, such as music, movies and books, in a digital form, rather than a physical form.

Virtualisation – executing software on virtualised platforms running on general-purpose hardware located in the cloud, rather than purpose-built hardware on premises. Virtualisation allows a better reuse of large servers by decoupling the relationship of one service to one server. Moreover, cloudification of these services means they can be made available to any connected device on a full-time basis.

Softwarisation – the redefinition of products and services through software. This is an extension of digitisation: for example, the digitisation of music has allowed the creation of new services and propositions (e.g. Spotify). The same goes for the movie industry (e.g. Netflix) and the transformation of the book industry (e.g. ebooks) and newspapers. This paradigm is based on:

  • The ability to digitise the information (transformation of the analogue into a digital signal).
  • Availability of large software platforms offering relevant processing, storage and communications capabilities.
  • The definition of open and reusable application programming interfaces (APIs) which allow processes formerly ‘trapped’ within proprietary systems to be managed or enhanced with other information and by other systems.

These three features have started a revolution that is transforming other industries, such as travel agencies (e.g. Booking.com), hotels and accommodation (e.g. Airbnb), and taxis (e.g. Uber). Softwarisation is also now impacting other traditional industries, such as manufacturing (e.g. Industry 4.0) and, inevitably, telecommunications.

Softwarisation in telecommunications amounts to the use of virtualisation, cloud computing, open APIs and programmable communication resources to transform the current network architecture. Software is playing a key role in enabling new services and functions, better customer experience, leaner and faster processes, faster introduction of innovation, and usually lower costs and prices. The softwarisation trend is very apparent in the widespread interest in two emerging technologies: network function virtualization (NFV) and software defined networking (SDN).

The likely impact of this technological transformation is huge: flexibility in service delivery, cost reduction, quicker time to market, higher personalisation of services and solutions, differentiation from competition and more. We have outlined some key telco NFV/SDN strategies in the report Telco NFV & SDN Deployment Strategies: Six Emerging Segments.

What is open source software?

A generally accepted open source definition is difficult to achieve because of different perspectives and some philosophical differences within the open source community.

One of the most high-profile definitions is that of the Open Source Initiative, which states the need to have access to the source code, the possibility to modify and redistribute it, and non-discriminatory clauses against persons, groups or ‘fields of endeavour’ (for instance, usage for commercial versus academic purposes) and others.

For the purpose of this report, STL defines open source software as follows:

▪ Open source software is a specific type of software for which the original source code is made freely available and may be redistributed and modified. This software is usually made available and maintained by specialised communities of developers that support new versions and ensure some form of backward compatibility.

Open source can help to enable softwarisation. As an example, it has greatly helped in moving from proprietary solutions in the web server sector to a common software platform (known as LAMP) based on the Linux operating system, the Apache HTTP server, the MySQL database and the PHP programming language. All of these components are made available as open source, which essentially means that people can freely acquire the source code, modify it and use it. Modifications and improvements are typically contributed back to the development community.

One of the earliest and most high-profile examples of open source software was the Linux operating system, a Unix-like operating system developed under the model of free and open source software development and distribution.

Open source for telecoms: Benefits and barriers

The benefits of using open source for telecoms

As discussed in our earlier report, The Open Source Telco: Taking Control of Destiny, the adoption and usage of open source solutions are being driven by business and technological needs. Ideally, the adoption and exploitation of open source will be part of a broader transformation programme designed to deliver the specific operator’s strategic goals.

Operators implementing open source solutions today tend to do so in conjunction with the deployment of network function virtualization (NFV) and software defined networking (SDN), which will play an important role for the definition and consolidation of the future 5G architectures.

However, as Figure 1 shows, transformation programmes can face formidable obstacles, particularly where a cultural change and new skills are required.

Benefits of transformation and related obstacles

The following strategic forces are driving interest in open source approaches among telecoms operators:

Reduce infrastructure costs. Telcos naturally want to minimise investment in new technologies and reduce infrastructure maintenance costs. Open source solutions seem to provide a way to do this by reducing license fees paid to solution vendors under the traditional software procurement model. As open source software usually runs on general-purpose hardware, it could also cut the capital and maintenance costs of the telco’s computing infrastructure. In addition, the current trend towards virtualisation and SDN should enable a shift to more programmable and flexible communications platforms. Today, open source solutions are primarily addressing the core network (e.g., virtualisation of evolved packet core), which accounts for a fraction of the investment made in the access infrastructure (fibre deployment, antenna installation, and so forth). However, in time open source solutions could also play a major role in the access network (e.g., open base stations and others): an agile and well-formed software architecture should make it possible to progressively introduce new software-based solutions into access infrastructure.

Mitigate vendor lock-in. Major vendors have been the traditional enablers of new services and new network deployments. Moreover, to minimise risks, telco managers tend to prefer to adopt consolidated solutions from a single vendor. This approach has several consequences:

  • Telcos don’t tend to introduce innovative new solutions developed in-house.
  • As a result, the network is not fully leveraged as a differentiator, and its evolution effectively becomes the sole responsibility of the vendor.
  • The internal innovation capabilities of a telco have effectively been displaced in favour of those of the vendor.

This has led to the “ossification” of much telecoms infrastructure and the inability to deliver differentiated offerings that can’t easily be replicated by competitors. Introducing open source solutions could be a means to lessen telcos’ dependence on specific vendors and increase internal innovation capabilities.

Enabling new services. The new services telcos introduce in their networks are essentially the same across many operators, because the developers of these services and features are a small set of consolidated vendors that offer the same portfolio to the whole industry. However, a programmable platform could enable a telco to govern and orchestrate its network resources and become the “master of the service”, i.e., the operator could quickly create, customise and personalise new functions and services in an independent way and offer them to its customers. This capability could help telcos enter adjacent markets, such as entertainment and financial services, as well as defend their core communications and connectivity markets. In essence, employing an open source platform could give a telco a competitive advantage.

Faster innovation cycles. Depending on a vendor makes the telco dependent on its roadmap and schedule, and on the obsolescence and substitution of existing technologies. The use of outdated technologies has a huge impact on a telco’s ability to offer new solutions in a timely fashion. An open source approach offers the possibility to upgrade and improve the existing platform (or to move to totally new technologies) without too many constraints posed by the “reference vendor”. This ability could be essential to acquiring and maintaining a technological advantage over competitors. Telcos need to clearly identify the benefits of this change, which represent the reasons – the “why” – for softwarisation.

Complete contents of the ‘How to build an open source telco’ report:

  • Executive Summary
  • Introduction: why open source?
  • Commercial pressures and technological opportunities
  • Open Source: Why Now?
  • What is open source software?
  • Open source: benefits and barriers
  • The benefits of using open source
  • Overcoming the barriers to using open source
  • Choosing the right path to open source
  • Selecting the right IT delivery model
  • Choosing the right model for the right scenario
  • Weighing the cost of open source
  • Which telcos are using open source today?
  • How can you build an open source telco?
  • Greenfield model
  • Brownfield model
  • Conclusions and recommendations
  • Controversial and challenging, yet often compelling
  • Recommendations for different kinds of telcos

Figures:

  • Figure 1: Illustrative open source costs versus a proprietary approach
  • Figure 2: Benefits of transformation and the related obstacles
  • Figure 3: The key barriers in the path of a shift to open source
  • Figure 4: Shaping an initial strategy for the adoption of open source solutions
  • Figure 5: A new open source component in an existing infrastructure
  • Figure 6: Different kinds of telcos need to select different delivery models
  • Figure 7: Illustrative estimate of Open Source costs versus a proprietary approach

Cloud 2.0: Network Functions Virtualisation (NFV) vs. Software Defined Networking (SDN)

Network Functions Virtualisation

What is Network Functions Virtualisation?

Network Functions Virtualisation (NFV) is an ominous-sounding term, but on examination it is relatively easy to understand what it is and why it is needed.

If you run a network, whether as an enterprise customer or as a service provider, you will end up with a stack of dedicated hardware appliances performing a variety of functions needed to make the network work or to optimise its performance: routers, application load balancers, session border controllers (SBCs), network address translation (NAT), deep packet inspection (DPI) and firewalls, to pick just a few. Each of these hardware appliances needs space, power, cooling, configuration, backup, capital investment, replacement as it becomes obsolete, and people who can deploy and manage it, leading to ongoing capex and opex. And with a few exceptions, each performs a single purpose: a firewall is always a firewall and an SBC is always an SBC, and neither can perform the function of the other.

Contrast this model with the virtualised server or cloud computing world where Virtual Machines run on standard PC/Server hardware, where you can add more compute power/storage on an elastic basis should you need it and where network cards are only required when you connect one physical device to another.

What problems does NFV solve?

NFV seeks to solve the problems of dedicated hardware by deploying the network functions on a virtualised PC/server environment. NFV started as a special interest group set up under the auspices of the European Telecommunications Standards Institute (ETSI) by seven of the world’s largest telecoms operators, and has since been joined by additional telecoms companies, equipment vendors and a variety of technology providers.

While NFV can replace many dedicated hardware devices with a virtualised software platform, it remains to be seen whether this approach can deliver the sustained performance and low latency currently provided by specialised hardware appliances for functions such as load balancing, real-time encryption or deep packet inspection.
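
To give a flavour of the model, the sketch below launches a software-based network function on general-purpose hardware using the Docker SDK for Python. This is a simplified illustration under our own assumptions: the container image is hypothetical, and production ETSI NFV deployments typically use virtual machines orchestrated by a management and orchestration (MANO) stack rather than a single script.

```python
# Illustrative sketch: starting a virtualised network function (VNF) as a
# container on a general-purpose server, instead of a dedicated appliance.
# Requires the Docker SDK for Python (pip install docker); the image name
# "example/virtual-firewall" is hypothetical.
import docker

client = docker.from_env()

vnf = client.containers.run(
    image="example/virtual-firewall",  # hypothetical VNF image
    name="vfirewall-01",
    detach=True,                       # run in the background
    network_mode="host",               # let the VNF see the host's interfaces
    cap_add=["NET_ADMIN"],             # allow it to manage routing and filtering
)

print("VNF started:", vnf.name)
```

The same server could later run a NAT, a DPI engine or an SBC simply by starting a different image, which is exactly the flexibility that dedicated appliances lack.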

Figure 8 shows ETSI’s vision of NFV.

Figure 8 – ETSI’s vision for Network Functions Virtualisation (Network Virtualisation Approach, June 2013)

Source: ETSI

Report Contents

  • Network Functions Virtualisation
  • What is Network Functions Virtualisation?
  • What problems does NFV solve?
  • How does NFV relate to Software Defined Networking (SDN)?
  • Relative benefits of NFV and SDN
  • STL Partners and the Telco 2.0™ Initiative

Report Figures

  • Figure 8 – ETSI’s vision for Network Functions Virtualisation
  • Figure 9 – Network Functions Virtualised and managed by SDN
  • Figure 10 – Network Functions Virtualisation relationship with SDN

Full Article: Device evolution: More power at the edge

The battle for the edge

This document examines the role of “edge” devices that sit at the periphery of a telco’s network – products like mobile phones or broadband gateways that live in the user’s hand or home. Formerly called “terminals”, with the inclusion of ever-better chips and software, such devices are now getting “smarter”. In particular, they are capable of absorbing many new functions and applications – and permit the user or operator to install additional software at a later point in time.

In fact, there is fairly incontrovertible evidence that “intelligence” always moves towards the edge of telecom networks, particularly when it can exploit the Internet and IP data connections. This has already been seen in PCs connected to fixed broadband, or in the shift from mainframes to client/server architectures in the enterprise. The trend is now becoming clearer in mobile, with the advent of the iPhone and other smartphones, as well as 3G-connected notebooks. Home networking boxes like set-tops, gaming consoles and gateways are further examples, which also get progressively more powerful.

This is all a consequence of Moore’s Law: as processors get faster and cheaper, there is a tendency for simple mass-market devices to gain more computing capability and take on new roles. Unsurprisingly, we therefore see a continued focus on the “edge” as a key battleground – who controls and harnesses that intelligence? Is it device vendors, operators, end users themselves, or 3rd-party application providers (“over-the-top players”, to use the derogatory slang term)? Is the control at a software, application or hardware level? Can operators deploy a device strategy that complements their network capabilities, to strengthen their position within the digital value chain and foster two-sided business models? Do developments like Android and femtocells help? Should the focus be on dedicated single-application devices, or continued attempts to control the design, OS or browser of multi-purpose products like PCs and smartphones?

Where’s the horsepower?

First, an illustration of the power of the edge.

If we go back five years, the average mobile phone had a single processor, probably an ARM7, clocking perhaps 30MHz. Much of this was used for the underlying radio and telephony functions, with a little “left over” for some basic applications and UI tools, like Java games.

Today, many of the higher-end devices have separate applications processors, and often graphics and other accelerators too. An iPhone has a 600MHz+ chip, and Toshiba recently announced one of the first devices with a 1GHz Qualcomm Snapdragon. Even midrange featurephones can have 200MHz+ to play with, most of which is actually usable for “cool stuff” rather than the radio. [Note: 1EHz (exahertz) = 1,000PHz (petahertz) = 1,000,000THz (terahertz) = 1,000,000,000GHz (gigahertz) = 1,000,000,000,000MHz (megahertz).]

Now project forward another five years. The average device (in developed markets at least) will have 500MHz, with top-end devices at 2GHz+, especially if they are not phones but 3G-connected PCs or MIDs. (These numbers are simplified – in the real world there’s lots of complexity because of different sorts of chips like digital signal processors, graphics accelerators or multicore processors.) Set-top boxes, PVRs, game consoles and other CPE devices are growing smarter in parallel.

Now multiply by (say) 8 billion endpoints – mobile handsets, connected PCs, broadband modems, smart consumer electronics and so forth. In developed markets, people may well have 2-4 such devices each. That’s 4 Exahertz (EHz, 10¹⁸ Hz) of application-capable computing power in people’s hands or home networks, without even considering ordinary PCs and “smart TVs” as well. And much – probably most – of that power will be uncontrolled by the operators, instead being the playground of user- or vendor-installed applications.

Even smart pipes are dumb in comparison

It’s tricky to calculate an equivalent figure for “the network”, but let’s take an approximation of 10 million network nodes (datapoint: there are 3 million cell sites worldwide), at a generous 5GHz each. That means there would be 50 Petahertz (PHz, 10¹⁵ Hz) in the carrier cloud. In other words, about an 80th of the collective compute power of the edge.
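
The back-of-the-envelope arithmetic above is easy to reproduce; a minimal sketch using the article’s own illustrative assumptions:

```python
# Reproducing the rough edge-vs-network computing comparison above.
# All inputs are the article's own illustrative assumptions.
edge_devices = 8e9           # endpoints: handsets, connected PCs, modems, CPE...
edge_clock_hz = 500e6        # ~500MHz of application-capable compute per device

network_nodes = 10e6         # approximate number of network nodes
network_clock_hz = 5e9       # a generous 5GHz per node

edge_total = edge_devices * edge_clock_hz          # 4e18 Hz = 4 EHz
network_total = network_nodes * network_clock_hz   # 5e16 Hz = 50 PHz

print(f"Edge:    {edge_total:.0e} Hz (~4 EHz)")
print(f"Network: {network_total:.0e} Hz (~50 PHz)")
print(f"The network has roughly 1/{edge_total / network_total:.0f} of the edge's compute")
```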


Now clearly, it’s not quite as bad as that makes it sound – the network can obviously leverage intelligence in a few big control points in the core, like DPI boxes, as traffic funnels through them. But at the other end of the pipe is the Internet, with Google’s, Amazon’s and countless other companies’ servers and “cloud computing” infrastructures. Trying to calculate the aggregate computing power of the web isn’t easy either, but it’s likely to be in the Exahertz range too. Google is thought to have 0.5-1.0 million servers on its own, for example.


So one thing is certain – the word “terminal” is obsolete. Whatever else happens, the pipe will inevitably become “dumber” (OK, less smart) than the edge, irrespective of smart Telco 2.0 platforms and 4G/NGN networks.

Now, add in all the cool new “web telco” companies (eComm 2009 was full of them) like BT/Ribbit, Voxeo, Jaduka, IfByPhone, Adhearsion and the Telco 2.0 wings of longtime infrastructure players like Broadsoft and Metaswitch (not to mention Skype and Google Voice), and the legacy carrier network platforms look even further disadvantaged.

Intelligent mobile devices tend to be especially hard to control, because they can typically connect to multiple networks – the operator cellular domain, public or private WiFi, Bluetooth, USB and so forth – which makes it easier for applications to “arbitrage” between them for access, content and services – and price.

Controlling device software vs. hardware

The answer is for telcos to try to take control of more of this enormous “edge intelligence”, and exploit it for their own benefit, in-house services or two-sided strategies. There are three main strategies for operators wanting to exert influence on edge devices:

  1. Provide dedicated and fully-controlled and customised hardware and software end-points which are “locked down” – such as cable set-top boxes, or operator-developed phones in Japan. This is essentially an evolution of the old approach of providing “terminals” that exist solely to act as access points for network-based services. This concept is being reinvented with new Telco-developed consumer electronic products like digital picture frames, but is a struggle for variants of multi-function devices like PCs and smartphones.
  2. Provide separate hardware products that sit “at the edge” between the user’s own smart device and the network, such as cable modems, femtocells, or 3G modems for PCs. These can act as hosts for certain new services, and may also exert policy and QoS control on the connection. Arguably the SIM card fits into this category as well.
  3. Develop control points, in hardware or software, that live inside otherwise notionally “open” devices. This includes Telco-customised UI and OS layers, “policy-capable” connection manager software for notebooks, application certification for smartphones, or secured APIs for handset browsers.

Controlling mobile is even harder than fixed

Fixed operators have long known what their mobile peers are now learning – as intelligence increases in the devices at the edge, it becomes far more difficult to control how they are used. And as control ebbs away, it becomes progressively easier for those devices to be used in conjunction with services or software provided by third parties, often competitive or substitutive to the operators’ own-brand offerings.

But there is a difference between fixed and mobile worlds – fixed broadband operators have been able to employ the second strategy outlined above – pushing out their own fully-controlled edge devices closer to the customer. Smart home gateways, set-top boxes and similar devices are able to sit “in front” of the TV and PC, and can therefore perform a number of valuable roles. IPTV, operator VoIP, online backups and various other “branded” services can exploit the home gateways, in parallel with Internet applications resident on the PC.

Conversely, mobile operators are still finding it extremely hard to control handset software at the OS level. Initiatives like SavaJe have failed, while more recently LiMO is struggling outside Japan. Endless complexities outside of Telcos’ main competence, such as software integration and device power management, are to blame. Meanwhile, other smartphone OS’s from firms like Nokia, Apple, RIM and Microsoft have continually evolved – albeit given huge investments. But most of the “smarts” are not controlled by the operators, most of the time. Further, low-end devices continue to be dominated by closed and embedded “RTOSs” (realtime operating systems), which tend to be incapable of supporting much carrier control either.

In fact, operators are continually facing a “one step forward, two steps back” battle for handset application and UI control. For every new Telco-controlled initiative like branded on-device portals, customised/locked smartphone OS’s, BONDI-type web security, or managed “policy” engines, there is another new source of “control leakage” – Apple’s device management, Nokia’s Ovi client, or even just open OS’s and usable appstores enabling easy download of competing (and often better/free) software apps.

The growing use of mobile broadband computing devices – mostly bought through non-operator channels – makes things worse. Even when sold by Telcos, most end users will not accept onerous operator control-points in their PCs’ application or operating systems, even where those computers are subsidised. There may be 300m+ mobile-connected computers by 2014.

Conclusions

Telcos need to face the inevitable – in most cases, they will not be able to control more than a fraction of the total computing and application power of the network edge, especially in mobile or for “contested” general-purpose devices. But that does not mean they should give up trying to exert influence wherever possible. Single-application “locked” mobile devices, perhaps optimised for gaming or navigation or similar functions have a lot of potential as true “terminals”, albeit used in parallel with users’ other smart devices.

It is far easier for the operator to exert its control at the edge with a wholly-owned and managed device, than via a software agent on a general computing device like a smartphone or notebook PC. Femtocells may turn out to be critical application control points for mobile operators in future. Telcos should look to exploit home networking gateways and other CPE with added-value software and services as soon as possible. Otherwise, consumer electronic devices like TVs and HiFi’s will adopt “smarts” themselves and start to work around the carrier core, perhaps accessing YouTube or Facebook directly from the remote control.

For handsets, controlling smartphone OS’s looks like a lost battle. But certain tactical or upper layers of the stack – browser, UI and connection-manager in particular – are perhaps still winnable. Even where the edge lies outside Telcos’ spheres of control, there are still many network-side capabilities that could be exploited and offered to those that do control the edge intelligence. Telco 2.0 platforms can manage security, QoS, billing, provide context data on location or roaming and so forth. However, carriers need to push hard and fast, before these are disintermediated as well. Google’s clever mapping and location capabilities should be seen as a warning sign that there will be work-arounds for “exposable” network capabilities, if Telcos’ offerings are too slow or too expensive.

Overall, the battle for control of the edge is multi-dimensional, and outcomes are highly uncertain, particularly given the economy and wide national variations in areas like device subsidy and brand preference. But Telcos need to focus on winnable battles – and exploit Moore’s Law rather than beat against it with futility.

We’ll be drilling into this area in much more depth during the Devices panel session at the upcoming Telco 2.0 Brainstorm in Nice in early May 2009.