Understanding the unconnected


Half the world’s population doesn’t have ready access to the internet. Why not? Surprisingly, it’s not simply a case of not being able to afford it.

In the report The Coordination Age: A Third Age of Telecoms (November 2018), STL set out the need to make better use of the world’s resources: making productive and rewarding use of people, time, health, money, and employment.

Communication is fundamental to coordination, and over any distance it depends on effective connectivity – which in the modern world means access to the Internet. Improving connectivity can give more people better access to:

  • Useful information and services, such as banking, insurance, health, education, and entertainment
  • Person-to-person communication, strengthening ties with family, friends, and colleagues
  • Resources such as goods and markets, facilitating business and trading, and in turn leading to greater prosperity

It can also help expand the role of women in society, for example by enabling them to work from home, or to access or provide goods and services remotely.

To achieve these aims, and to make the most efficient use of all types of resources, all of the world’s population should be able to get online, preferably each with their own dedicated connection.

This is the first in a series of reports that looks at the question of how to connect the large part of the world’s population that is not online. This report describes the current situation and the reasons why many remain unconnected, even where there is mobile 3G or 4G coverage.


Economic benefits of broadband

People who are unable to use the Internet are excluded from the range of useful services and activities it provides access to, including banking, health services, education, information, entertainment, jobs, markets and government services, as well as the benefit of maintaining contact with distant family and friends. Gaining access to these services by other means is usually more difficult and time-consuming, often restricts choice, and is in some cases impossible. Exclusion from such services can reduce people’s ability to earn a living, increase their income or improve their way of life. Although the nature of exclusion and its effects differ between developed and developing economies, the consequences can be remarkably similar.

Moreover, widespread Internet access can support the empowerment of women in many societies, not just the least developed and most conservative ones, by allowing them to take a more active part in the economy.

A range of studies has shown that greater access to mobile broadband has a positive impact on GDP as well as on people’s lives. Among the most recent, published in 2017, is a study from Imperial College Business School in London. It found that a 10 percentage point increase in mobile broadband penetration leads to an increase of between 0.6% and 2.8% in GDP.
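To make the scale of that effect concrete, here is a minimal sketch applying the study’s elasticity range to a hypothetical economy (the $500bn GDP figure and the linear scaling of the per-10pp elasticity are illustrative assumptions, not from the study):

```python
# Illustrative only: apply the Imperial College elasticity range
# (a 10pp rise in mobile broadband penetration -> +0.6% to +2.8% of GDP)
# to a hypothetical $500bn economy.

def gdp_uplift(gdp, penetration_gain_pp, elasticity_per_10pp):
    """GDP uplift for a given penetration gain, scaling the per-10pp elasticity linearly."""
    return gdp * elasticity_per_10pp * (penetration_gain_pp / 10)

gdp = 500e9                 # hypothetical economy
gain_pp = 10                # 10 percentage point rise in penetration
low, high = 0.006, 0.028    # 0.6% .. 2.8% of GDP per 10pp

print(gdp_uplift(gdp, gain_pp, low) / 1e9)    # ~3.0  (billion dollars)
print(gdp_uplift(gdp, gain_pp, high) / 1e9)   # ~14.0 (billion dollars)
```

Even at the bottom of the range, the implied uplift is material for any economy, which is why such studies feature so prominently in arguments for broadband investment.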

A World Bank report from 2009 estimated the relative importance of fixed, mobile, Internet access and broadband in stimulating economies (see Figure 4).

Figure 4: Relative impact on GDP of different communications technologies

Source: World Bank

The GSMA claims that, globally, the mobile telecoms industry accounts for 1.4% of GDP and adds a further 2.5% from productivity gains in the broader economy. In sub-Saharan Africa, mobile broadband contributes 7.1% of GDP, according to the GSMA.

The growth in access to the Internet

Use of the Internet has grown rapidly since the introduction of the World Wide Web, and over half the world’s population of 7.6 billion are now users. At the end of 2017, 53% of the world’s population (4 billion people) were Internet users, with 43% (3.3 billion) using mobile networks to gain access, according to the GSMA. There are also about one billion fixed broadband connections worldwide, but these mostly duplicate mobile data coverage in advanced economies, according to the ITU; note that fixed-line connections generally support multiple users in a household. In addition, approximately 700 million people get online through shared connections, including the facilities offered by public libraries and Internet cafes, according to the ITU.

The overall level of Internet use and the proportion of people connected by mobile in the major regions of the world are shown in Figure 5, which highlights how most Internet users gain access to the Internet through mobile networks.  As discussed, the remainder either have fixed broadband access to their household, make use of Internet cafes, libraries and other public points of access, or share with members of their family or friends.  Figure 5 highlights how the majority of people in the populous regions of sub-Saharan Africa and South Asia remain offline.

Figure 5: Total and mobile Internet users by world region

Source: ITU, UNESCO, GSMA, Hootsuite, STL Partners

In total, 47% of the world’s population (approximately 3.6 billion people) do not use the Internet. Lack of coverage is one reason: approximately 10% of the world’s people live beyond the reach of a mobile network, according to the GSMA. About 87% of the world’s people are covered by 3G networks and 72% by 4G, while the remaining 3% have access only to 2G data connections using GPRS or EDGE, with speeds inadequate for all but the most basic applications. The GSMA defines mobile broadband as over 256kbps; GPRS cannot reach these speeds and EDGE rarely does. In practice, about one billion people are not served by a data network that provides adequate throughput speeds.
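Those percentages line up with the “about one billion” figure; a quick sanity check using only the numbers quoted above:

```python
# Sanity-check the "about one billion" figure using the shares quoted
# in the text (GSMA/ITU figures; world population of 7.6 billion).
world_pop = 7.6e9

no_coverage = 0.10   # live beyond the reach of any mobile network
only_2g     = 0.03   # GPRS/EDGE only, below the GSMA's 256kbps threshold

underserved = world_pop * (no_coverage + only_2g)
print(round(underserved / 1e9, 2))  # ~0.99 billion without adequate mobile data
```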

According to the GSMA, at the end of 2017, there were five billion unique mobile subscribers worldwide, a penetration of 66%. Some 29% of all mobile connections (excluding IoT) were 4G, 31% were 3G and 40% 2G.

Figure 6 shows the proportion of the population in each major region that is connected; covered but not connected; and neither covered nor connected. Even in the advanced economies of North America and Europe, more than a quarter of the population remains unconnected.

The proportion of the global population covered by mobile broadband networks, but not online, remained largely unchanged for the three years between 2014 and 2017 at about 44% of the world’s population. However, as Figure 6 shows, this proportion varies considerably across regions, ranging from 26% in North America to 57% in South Asia (Bangladesh, India, and Pakistan).  The percentage in sub-Saharan Africa is lower at 38%, but this is partly due to the lower level of mobile broadband coverage, with 40% of the population having no coverage. By comparison, only 16% of the population in South Asia has no mobile broadband coverage. In other words, lack of coverage is not the biggest obstacle in most of the world.

Figure 6: Mobile connected and unconnected population by region

Source: GSMA, STL Partners


  • Executive Summary
  • Introduction
  • Economic benefits of broadband
  • The growth in Internet usage
  • Counting the addressable unconnected
  • The reasons why people aren’t online
  • Social and economic drivers of Internet access
  • Regional Internet usage patterns
  • Structural, social and economic differences
  • Reasons why people are not connected
  • Why do people not use mobile broadband?
  • Infrastructure; mobile broadband coverage and electricity
  • Cost of smartphones and mobile data
  • Local content and services
  • Social and cultural factors and education
  • Conclusions and areas for action
  • The connected and the unconnected
  • Infrastructure
  • Connecting the covered
  • Are governments prepared to act?
  • Annex: Regional and country data
  • Asia Pacific
  • Eastern Europe
  • Latin America
  • Middle East and North Africa (MENA)
  • North America
  • Sub-Saharan Africa
  • Western Europe


  • Figure 1: Less than half the world’s people have a dedicated Internet connection
  • Figure 2: About 2.8 billion people aged over 11 not using the Internet
  • Figure 3: The growth in the population covered by 3G/4G
  • Figure 4: Relative impact on GDP of different communications technologies
  • Figure 5: Total and mobile Internet users by world region
  • Figure 6: Mobile connected and unconnected population by region
  • Figure 7: Internet penetration of total population by world sub-region
  • Figure 8: HDI and its constituents
  • Figure 9: Global variations in human development
  • Figure 10: Average household sizes by country
  • Figure 11: Key factors influencing broadband take up
  • Figure 12: Global GSMA Mobile Connectivity Index
  • Figure 13: Mobile network coverage by major region
  • Figure 14: Fixed broadband penetration
  • Figure 15: Electricity – percentage access by region
  • Figure 16: Availability of electricity supply in Sub-Saharan Africa (% pop)
  • Figure 17: African countries with largest populations; those with no electricity
  • Figure 18: Smartphone and mobile penetration
  • Figure 19: Examples of cheapest smartphones
  • Figure 20: Affordability index for cheapest Internet-enabled handsets
  • Figure 21: Relative cost of mobile data plans (on a 100-point index) vs. penetration
  • Figure 22: Availability of local services on a 100-point index vs. Internet penetration
  • Figure 23: Literacy and Internet and social media use by sub-region
  • Figure 24: Female access to mobile phone and Internet
  • Figure 25: Proportion of women with access to the Internet in selected MENA states
  • Figure 26: Proportion of women with access to the Internet in Sub-Saharan Africa
  • Figure 27: Asia Pacific overview
  • Figure 28: Asia Pacific country data
  • Figure 29: Eastern Europe overview
  • Figure 30: Eastern Europe country data
  • Figure 31: Latin America overview
  • Figure 32: Latin America country data
  • Figure 33: MENA overview
  • Figure 34: MENA country data
  • Figure 35: North America overview
  • Figure 36: North America country data
  • Figure 37: Sub-Saharan Africa overview
  • Figure 38: Sub-Saharan Africa country data
  • Figure 39: Western Europe overview
  • Figure 40: Western Europe country data


Mobile Broadband 2.0: The Top Disruptive Innovations

Summary: Key trends, tactics, and technologies for mobile broadband networks and services that will influence mid-term revenue opportunities, cost structures and competitive threats. Includes consideration of LTE, network sharing, WiFi, next-gen IP (EPC), small cells, CDNs, policy control, business model enablers and more. (March 2012, Executive Briefing Service, Future of the Networks Stream).



Below is an extract from this 44 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings and more on Facebook at the Silicon Valley (27-28 March) and London (12-13 June) New Digital Economics Brainstorms.



Telco 2.0 has previously published a wide variety of documents and blog posts on mobile broadband topics – content delivery networks (CDNs), mobile CDNs, WiFi offloading, Public WiFi, network outsourcing (“‘Under-The-Floor’ (UTF) Players: threat or opportunity? ”) and so forth. Our conferences have featured speakers and panellists discussing operator data-plan pricing strategies, tablets, network policy and numerous other angles. We’ve also featured guest material such as Arete Research’s report LTE: Late, Tempting, and Elusive.

In our recent ‘Under the Floor (UTF) Players‘ briefing we looked at strategies to deal with some of the challenges facing operators as a result of market structure and outsourcing.


This Executive Briefing is intended to complement and extend those efforts, looking specifically at those technical and business trends which are truly “disruptive”, either immediately or in the medium-term future. In essence, the document can be thought of as a checklist for strategists – pointing out key technologies or trends around mobile broadband networks and services that will influence mid-term revenue opportunities and threats. Some of those checklist items are relatively well-known, others more obscure but nonetheless important. What this document doesn’t cover is more straightforward concepts around pricing, customer service, segmentation and so forth – all important to get right, but rarely disruptive in nature.

During 2012, Telco 2.0 will be rolling out a new MBB workshop concept, which will audit operators’ existing technology strategy and planning around mobile data services and infrastructure. This briefing document is a roundup of some of the critical issues we will be advising on, as well as our top-level thinking on the importance of each trend.

It starts by discussing some of the issues which determine the extent of any disruption:

  • Growth in mobile data usage – and whether the much-vaunted “tsunami” of traffic may be slowing down
  • The role of standardisation, and whether it is a facilitator or inhibitor of disruption
  • Whether the most important MBB disruptions are likely to be telco-driven, or will stem from other actors such as device suppliers, IT companies or Internet firms.

The report then drills into a few particular domains where technology is evolving, looking at some of the most interesting and far-reaching trends and innovations. These are split broadly between:

  • Network infrastructure evolution (radio and core)
  • Control and policy functions, and business-model enablers

It is not feasible for us to cover all these areas in huge depth in a briefing paper such as this. Some areas such as CDNs and LTE have already been subject to other Telco 2.0 analysis, and this will be linked to where appropriate. Instead, we have drilled down into certain aspects we feel are especially interesting, particularly where these are outside the mainstream of industry awareness and thinking – and tried to map technical evolution paths onto potential business model opportunities and threats.

This report cannot be truly exhaustive – it doesn’t look at the nitty-gritty of silicon components, or antenna design, for example. It also treads a fine line between technological accuracy and ease-of-understanding for the knowledgeable but business-focused reader. For more detail or clarification on any area, please get in touch with us – email mailto:contact@stlpartners.com or call +44 (0) 207 247 5003.

Telco-driven disruption vs. external trends

There are various potential sources of disruption for the mobile broadband marketplace:

  • New technologies and business models implemented by telcos, which increase revenues, decrease costs, improve performance or alter the competitive dynamics between service providers.
  • 3rd party developments that can either bolster or undermine the operators’ broadband strategies. This includes both direct MBB innovations (new uses of WiFi, for example), or bleed-over from adjacent related marketplaces such as device creation or content/application provision.
  • External, non-technology effects such as changing regulation, economic backdrop or consumer behaviour.

The majority of this report covers “official” telco-centric innovations – LTE networks, new forms of policy control and so on.

External disruptions to monitor

But the most dangerous form of innovation comes from third parties, which can undermine assumptions about the ways mobile broadband can be used, introduce new mechanisms for arbitrage, or otherwise subvert operators’ pricing plans or network controls.

In the voice communications world, there are often regulations in place to protect service providers – such as banning the use of “SIM boxes” to terminate calls and reduce interconnection payments. But in the data environment, it is far less obvious that many work-arounds can either be seen as illegal, or even outside the scope of fair-usage conditions. That said, we have already seen some attempts by telcos to manage these effects – such as charging extra for “tethering” on smartphones.

It is not really possible to predict all possible disruptions of this type – such is the nature of innovation. But by describing a few examples, market participants can gauge their level of awareness, as well as gain motivation for ongoing “scanning” of new developments.

Some of the areas being followed by Telco 2.0 include:

  • Connection-sharing. This is where users might link devices together locally, perhaps through WiFi or Bluetooth, and share multiple cellular data connections. This is essentially “multi-tethering” – for example, 3 smartphones discovering each other nearby, perhaps each with a different 3G/4G provider, and pooling their connections together for shared use. From the user’s point of view it could improve effective coverage and maximum/average throughput speed. But from the operators’ view it would break the link between user identity and subscription, and essentially offload traffic from poor-quality networks on to better ones.
  • SoftSIM or SIM-free wireless. Over the last five years, various attempts have been made to decouple mobile data connections from SIM-based authentication. In some ways this is not new – WiFi doesn’t need a SIM, while it’s optional for WiMAX, and CDMA devices have typically been “hard-coded” to register only on a specific operator network. But the GSM/UMTS/LTE world has always relied on subscriber identification through a physical card. At one level, this is very good – SIMs are distributed easily and have enabled a successful prepay ecosystem to evolve. They provide operator control points and the ability to host secure applications on the card itself. However, the need to obtain a physical card restricts business models, especially for transient/temporary use such as a “one day pass”. But the most dangerous potential change is a move to a “soft” SIM, embedded in the device software stack. Companies such as Apple have long dreamed of acting as a virtual network provider, brokering between the user and multiple networks. There is even a patent for encouraging bidding per call (or perhaps per data connection), with telcos competing head to head on price/quality grounds. Telco 2.0 views this type of least-cost routing as a major potential risk for operators, especially for mobile data – although it also possibly enables some new business models that have been difficult to achieve in the past.
  • Encryption. Various of the new business models and technology deployment intentions of operators, vendors and standards bodies are predicated on analysing data flows. Deep packet inspection (DPI) is expected to be used to identify applications or traffic types, enabling differential treatment in the network, or different charging models to be employed. Yet this is rendered largely useless (or at least severely limited) when various types of encryption are used. Various content and application types already secure data in this way – content DRM, BlackBerry traffic, corporate VPN connections and so on. But increasingly, we will see major Internet companies such as Apple, Google, Facebook and Microsoft using such techniques, both for their own users’ security and because encryption hides precise indicators of usage from the network operators. If a future Android phone sends all its mobile data back via a VPN tunnel and breaks it out in Mountain View, California, operators will be unable to discern YouTube video from search or VoIP traffic. This is one of the reasons why application-based charging models – one- or two-sided – are difficult to implement.
  • Application evolution speed. One of the largest challenges for operators is the pace of change of mobile applications. The growing penetration of smartphones, appstores and ease of “viral” adoption of new services causes a fundamental problem – applications emerge and evolve on a month-by-month or even week-by-week basis. This is faster than any realistic internal telco processes for developing new pricing plans, or changing network policies. Worse, the nature of “applications” is itself changing, with the advent of HTML5 web-apps, and the ability to “mash up” multiple functions in one app “wrapper”. Is a YouTube video shared and embedded in a Facebook page a “video service”, or “social networking”?

It is also really important to recognise that certain procedures and technologies used in policy and traffic management will likely have some unanticipated side-effects. Users, devices and applications are likely to respond to controls that limit their actions, while other developments may result in “emergent behaviours” spontaneously. For instance, there is a risk that too-strict data caps might change usage models for smartphones and make users just connect to the network when absolutely necessary. This is likely to be at the same times and places when other users also feel it necessary, with the unfortunate implication that peaks of usage get “spikier” rather than being ironed-out.

There is no easy answer to these types of external threats. Operator strategists and planners simply need to keep watch on emerging trends, and perhaps stress-test their assumptions and forecasts with market observers who keep tabs on such developments.

The mobile data explosion… or maybe not?

It is an undisputed fact that mobile data is growing exponentially around the world. Or is it?

A J-curve or an S-curve?

Telco 2.0 certainly thinks that growth in data usage is occurring, but is starting to see signs that the smooth curves that drive so many other decisions might not be so smooth – or so steep – after all. If this proves to be the case, it could be far more disruptive to operators and vendors than any of the individual technologies discussed later in the report. If operator strategists are not at least scenario-planning for lower data growth rates, they may find themselves in a very uncomfortable position in a year’s time.

In its most recent study of mobile operators’ traffic patterns, Ericsson concluded that Q2 2011 data growth was just 8% globally, quarter-on-quarter, a far cry from the 20%+ growth rates seen previously, and leaving a chart that looks distinctly like the beginning of an S-curve rather than a continued “hockey stick”. Given that the 8% includes a sizeable contribution from undoubted high-growth developing markets like China, it suggests that other markets are maturing quickly. (We are rather sceptical of Ericsson’s suggestion of seasonality in the data). Other data points come from O2 in the UK, which appears to have had essentially zero traffic growth for the past few quarters, or Vodafone, which now cites European data traffic to be growing more slowly (19% year-on-year) than its data revenues (21%). Our view is that current global growth is c.60-70%, c.40% in mature markets and 100%+ in developing markets.
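Quarterly and annual growth rates are easy to mis-compare; compounding the quarter-on-quarter figures puts Ericsson’s numbers on the same footing as the annual rates quoted (a back-of-envelope conversion, assuming a constant quarterly rate):

```python
# Annualise a quarter-on-quarter growth rate by compounding four quarters.
def annualise(qoq):
    return (1 + qoq) ** 4 - 1

# Ericsson's Q2 2011 figure of 8% QoQ is only ~36% per year --
# well below the 100%+ annual rates seen at the peak of the "hockey stick",
# whereas 20% QoQ compounds to more than doubling annually.
print(round(annualise(0.08) * 100, 1))  # 36.0
print(round(annualise(0.20) * 100, 1))  # 107.4
```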

Figure 1 – Trends in European data usage

 Trends in European Data Usage

Now it is possible that various one-off factors are at play here – the shift from unlimited to tiered pricing plans, the stronger enforcement of “fair-use” plans and the removal of particularly egregious heavy users. Certainly, other operators are still reporting strong growth in traffic levels. We may see resumption in growth, for example if cellular-connected tablets start to be used widely for streaming video. 

But we should also consider the potential market disruption, if the picture is less straightforward than the famous exponential charts. Even if the chart looks like a 2-stage S, or a “kinked” exponential, the gap may have implications, like a short recession in the economy. Many of the technical and business model innovations in recent years have been responses to the expected continual upward spiral of demand – either controlling users’ access to network resources, pricing it more highly and with greater granularity, or building out extra capacity at a lower price. Even leaving aside the fact that raw, aggregated “traffic” levels are a poor indicator of cost or congestion, any interruption or slow-down of the growth will invalidate a lot of assumptions and plans.

Our view is that the scary forecasts of “explosions” and “tsunamis” have led virtually all parts of the industry to create solutions to the problem. We can probably list more than 20 approaches, most of them standalone “silos”.

Figure 2 – A plethora of mobile data traffic management solutions

A Plethora of Mobile Data Traffic Management Solutions

What seems to have happened is that at least 10 of those approaches have worked – caps/tiers, video optimisation, WiFi offload, network densification and optimisation, collaboration with application firms to create “network-friendly” software and so forth. Taken collectively, there is actually a risk that they have worked “too well”, to the extent that some previous forecasts have turned into “self-denying prophecies”.

There is also another common forecasting problem occurring – the assumption that later adopters of a technology will behave like earlier users. In many markets we are now reaching 30-50% smartphone penetration. That means that all the most enthusiastic users are already connected, and we’re left with those that are (largely) ambivalent and probably quite light users of data. That will bring the averages down, even if each individual user is still increasing their consumption over time. But even that assumption may be flawed, as caps have made people concentrate much more on their usage, offloading to WiFi and restricting their data flows. There is also some evidence that the growing number of free WiFi points is reducing laptop use of mobile data, which accounts for 70-80% of the total in some markets, while the much-hyped shift to tablets isn’t driving much extra mobile data as most are WiFi-only.
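The averaging effect of late adopters is worth illustrating with numbers (all figures here are hypothetical):

```python
# Hypothetical illustration: the average usage figure can fall even though
# every existing user grows their own consumption, once lighter late adopters join.
early = [2.0] * 30                  # 30 early adopters at 2.0 GB/month each
avg_before = sum(early) / len(early)

grown = [u * 1.2 for u in early]    # each early adopter grows usage by 20%
late = [0.5] * 20                   # 20 late adopters join at 0.5 GB/month
combined = grown + late
avg_after = sum(combined) / len(combined)

print(avg_before)                   # 2.0 GB/month
print(round(avg_after, 2))          # 1.64 GB/month -- the average falls anyway
```

A forecast built on extrapolating the early-adopter average would overstate per-user demand here, which is exactly the trap the text describes.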

So has the industry over-reacted to the threat of a “capacity crunch”? What might be the implications?

The problem is that focusing on a single, narrow metric “GB of data across the network” ignores some important nuances and finer detail. From an economics standpoint, network costs tend to be driven by two main criteria:

  • Network coverage in terms of area or population
  • Network capacity at the busiest places/times

Coverage is (generally) therefore driven by factors other than data traffic volumes. Many cells have to be built and run anyway, irrespective of whether there’s actually much load – the operators all want to claim good footprints and may be subject to regulatory rollout requirements. Peak capacity in the most popular locations, however, is a different matter. That is where issues such as spectrum availability, cell site locations and the latest high-speed networks become much more important – and hence costs do indeed rise. However, it is far from obvious that the problems at those “busy hours” are always caused by “data hogs” rather than sheer numbers of people each using a small amount of data. (There is also another issue around signalling traffic, discussed later). 

Yes, there is a generally positive correlation between network-wide volume growth and costs, but it is far from perfect, and certainly not a direct causal relationship.

So let’s hypothesise briefly about what might occur if data traffic growth does tail off, at least in mature markets.

  • Delays to LTE rollout – if 3G networks are filling up less quickly than expected, the urgency of 4G deployment is reduced.
  • The focus of policy and pricing for mobile data may switch back to encouraging use rather than discouraging/controlling it. Capacity utilisation may become an important metric, given the high fixed costs and low marginal ones. Expect more loyalty-type schemes, plus various methods to drive more usage in quiet cells or off-peak times.
  • Regulators may start to take different views of traffic management or predicted spectrum requirements.
  • Prices for mobile data might start to fall again, after a period where we have seen them rise. Some operators might be tempted back to unlimited plans, for example if they offer “unlimited off-peak” or similar options.
  • Many of the more complex and commercially-risky approaches to tariffing mobile data might be deprioritised. For example, application-specific pricing involving packet-inspection and filtering might get pushed back down the agenda.
  • In some cases, we may even end up with overcapacity on cellular data networks – not to the degree we saw in fibre in 2001-2004, but there might still be an “overhang” in some places, especially if there are multiple 4G networks.
  • Steady growth of (say) 20-30% peak data per annum should be manageable with the current trends in price/performance improvement. It should be possible to deploy and run networks to meet that demand with reducing unit “production cost”, for example through use of small cells. That may reduce the pressure to fill the “revenue gap” on the infamous scissors-diagram chart.

Overall, it is still a little too early to declare shifting growth patterns for mobile data as a “disruption”. There is a lack of clarity on what is happening, especially in terms of responses to the new controls, pricing and management technologies put recently in place. But operators need to watch extremely closely what is going on – and plan for multiple scenarios.

Specific recommendations will depend on an individual operator’s circumstances – user base, market maturity, spectrum assets, competition and so on. But broadly, we see three scenarios and implications for operators:

  • “All hands on deck!”: Continued strong growth (perhaps with a small “blip”) which maintains the pressure on networks, threatens congestion, and drives the need for additional capacity, spectrum and capex.
    • Operators should continue with current multiple strategies for dealing with data traffic – acquiring new spectrum, upgrading backhaul, exploring massive capacity enhancement with small cells and examining a variety of offload and optimisation techniques. Where possible, they should explore two-sided models for charging and use advanced pricing, policy or segmentation techniques to rein in abusers and reward those customers and applications that are parsimonious with their data use. Vigorous lobbying activities will be needed, for gaining more spectrum, relaxing Net Neutrality rules and perhaps “taxing” content/Internet companies for traffic injected onto networks.
  • “Panic over”: Moderating and patchy growth, which settles to a manageable rate – comparable with the patterns seen in the fixed broadband marketplace
    • This will mean that operators can “relax” a little, with the respite in explosive growth meaning that the continued capex cycles should be more modest and predictable. Extension of today’s pricing and segmentation strategies should improve margins, with continued innovation in business models able to proceed without rush, and without risking confrontation with Internet/content companies over traffic management techniques. Focus can shift towards monetising customer insight, ensuring that LTE rollouts are strategic rather than tactical, and exploring new content and communications services that exploit the improving capabilities of the network.
  • “Hangover”: Growth flattens off rapidly, leaving operators with unused capacity and threatening brutal price competition between telcos.
    • This scenario could prove painful, reminiscent of early-2000s experience in the fixed-broadband marketplace. Wholesale business models could help generate incremental traffic and revenue, while the emphasis will be on fixed-cost minimisation. Some operators will scale back 4G rollouts until cost and maturity go past the tipping-point for outright replacement of 3G. Restrictive policies on bandwidth use will be lifted, as operators compete to give customers the fastest / most-open access to the Internet on mobile devices. Consolidation – and perhaps bankruptcies – may ensue, as declining data prices coincide with substitution of the core voice and messaging business.

To read the note in full, including the following analysis…

  • Introduction
  • Telco-driven disruption vs. external trends
  • External disruptions to monitor
  • The mobile data explosion… or maybe not?
  • A J-curve or an S-curve?
  • Evolving the mobile network
  • Overview
  • LTE
  • Network sharing, wholesale and outsourcing
  • WiFi
  • Next-gen IP core networks (EPC)
  • Femtocells / small cells / “cloud RANs”
  • HetNets
  • Advanced offload: LIPA, SIPTO & others
  • Peer-to-peer connectivity
  • Self optimising networks (SON)
  • M2M-specific broadband innovations
  • Policy, control & business model enablers
  • The internal politics of mobile broadband & policy
  • Two sided business-model enablement
  • Congestion exposure
  • Mobile video networking and CDNs
  • Controlling signalling traffic
  • Device intelligence
  • Analytics & QoE awareness
  • Conclusions & recommendations
  • Index

…and the following figures…

  • Figure 1 – Trends in European data usage
  • Figure 2 – A plethora of mobile data traffic management solutions
  • Figure 3 – Not all operator WiFi is “offload” – other use cases include “onload”
  • Figure 4 – Internal ‘power tensions’ over managing mobile broadband
  • Figure 5 – How a congestion API could work
  • Figure 6 – Relative Maturity of MBB Management Solutions
  • Figure 7 – Laptops generate traffic volume, smartphones create signalling load
  • Figure 8 – Measuring Quality of Experience
  • Figure 9 – Summary of disruptive network innovations

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 44 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or, for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, geographies, people and products referenced: 3GPP, Aero2, Alcatel Lucent, AllJoyn, ALU, Amazon, Amdocs, Android, Apple, AT&T, ATIS, BBC, BlackBerry, Bridgewater, CarrierIQ, China, China Mobile, China Unicom, Clearwire, Conex, DoCoMo, Ericsson, Europe, EverythingEverywhere, Facebook, Femto Forum, FlashLinq, Free, Germany, Google, GSMA, H3G, Huawei, IETF, IMEI, IMSI, InterDigital, iPhones, Kenya, Kindle, Light Radio, LightSquared, Los Angeles, MBNL, Microsoft, Mobily, Netflix, NGMN, Norway, NSN, O2, WiFi, Openet, Qualcomm, Radisys, Russia, Saudi Arabia, SoftBank, Sony, Stoke, Telefonica, Telenor, Time Warner Cable, T-Mobile, UK, US, Verizon, Vita, Vodafone, WhatsApp, Yota, YouTube, ZTE.

Technologies and industry terms referenced: 2G, 3G, 4.5G, 4G, Adaptive bitrate streaming, ANDSF (Access Network Discovery and Selection Function), API, backhaul, Bluetooth, BSS, capacity crunch, capex, caps/tiers, CDMA, CDN, CDNs, Cloud RAN, content delivery networks (CDNs), Continuous Computing, Deep packet inspection (DPI), DPI, DRM, Encryption, Enhanced video, EPC, ePDG (Evolved Packet Data Gateway), Evolved Packet System, Femtocells, GGSN, GPS, GSM, Heterogeneous Network (HetNet), Heterogeneous Networks (HetNets), HLRs, hotspots, HSPA, HSS (Home Subscriber Server), HTML5, HTTP Live Streaming, IFOM (IP Flow Mobility and Seamless Offload), IMS, IPR, IPv4, IPv6, LIPA (Local IP Access), LTE, M2M, M2M network enhancements, metro-cells, MiFi, MIMO (multiple input, multiple output), MME (Mobility Management Entity), mobile CDNs, mobile data, MOSAP, MSISDN, MVNAs (mobile virtual network aggregators), MVNO, Net Neutrality, network outsourcing, Network sharing, Next-generation core networks, NFC, NodeBs, offload, OSS, outsourcing, P2P, Peer-to-peer connectivity, PGW (PDN Gateway), picocells, policy, Policy and Charging Rules Function (PCRF), Pre-cached video, pricing, Proximity networks, Public WiFi, QoE, QoS, RAN optimisation, RCS, remote radio heads, RFID, self-optimising network technology (SON), Self-optimising networks (SON), SGW (Serving Gateway), SIM-free wireless, single RANs, SIPTO (Selective IP Traffic Offload), SMS, SoftSIM, spectrum, super-femtos, Telco 2.0 Happy Pipe, Transparent optimisation, UMTS, ‘Under-The-Floor’ (UTF) Players, video optimisation, VoIP, VoLTE, VPN, White space, WiFi, WiFi Direct, WiFi offloading, WiMAX, WLAN.

Mobile Broadband Economics: LTE ‘Not Enough’

Summary: Innovation appears to be flourishing in the delivery of mobile broadband. We saw applications that allow users to monitor and control their network usage and services, ‘dynamic pricing’, and other innovative pricing strategies at the EMEA Executive Brainstorm. Despite growing enthusiasm for LTE, delegates considered offloading traffic and network sharing at least as important commercial strategies for managing costs.

Members of the Telco 2.0 Subscription Service and Future Networks Stream can download a more comprehensive version of this report in PDF format here. Please email contact@telco2.net or call +44 (0) 207 247 5003 to contact Telco 2.0 or STL Partners for more details.


STL Partners’ New Digital Economics Executive Brainstorm & Developer Forum EMEA took place from 11-13 May in London. The event brought together 250 execs from across the telecoms, media and technology sectors to take part in 6 co-located interactive events: the Telco 2.0, Digital Entertainment 2.0, Mobile Apps 2.0, M2M 2.0 and Personal Data 2.0 Executive Brainstorms, and an evening AppCircus developer forum.

Building on output from the last Telco 2.0 events and new analysis from the Telco 2.0 Initiative – including the new strategy report ‘The Roadmap to New Telco 2.0 Business Models’ – the Telco 2.0 Executive Brainstorm explored latest thinking and practice in growing the value of telecoms in the evolving digital economy.

This document gives an overview of the output from the Mobile Broadband Economics session of the Telco 2.0 stream.

Putting users in control

A key theme of the presentations in this session was putting users in more control of their mobile broadband service: helping them to understand, interactively, what data they have used, and giving them the option to buy additional data capability on demand, when they need it and can use it.

Delegates’ perceptions – that the key obstacles to building revenue were internal industry issues, and that the key cost issues involve better collaboration rather than technology (specifically, LTE) – were both refreshing and surprising.

Ericsson presented a mobile broadband data ‘fuel gauge’ app to show how users could be better informed of their usage and interactively offered pricing and service offers.

Figure 1 – Ericsson’s Mobile Broadband ‘Fuel Gauge’

Source: Ericsson, 13th Telco 2.0 Executive Brainstorm, London, May 2011

Deutsche Telekom showed its new ‘self-care’ customer app, complete with WiFi finder, Facebook integration, and ad-funding options, and described how it is moving from a focus on complex tariffs to essentially Small/Medium/Large options, with tiers of speed, caps, WiFi access, and varying levels of bundled add-on services.

While we admired the apparent simplicity of the UI design of many of the elements of the services shown, we retain doubts on the proposed use of RCS and various other operator-only “enablers”, and will be further examining the pros and cons of RCS in future analysis.

New pricing approaches

In addition to Ericsson’s concept of dynamic pricing, making offers to customers at times of most need and suitability, Openwave showed numerous innovative new approaches to charging by application, time/day, user group and event (e.g. ‘Movie Pass’), segmentation of plans by user type, and how to use data plan sales to sell other services.

Figure 2 – Innovative Mobile Broadband Offers

Source: Openwave, 13th Telco 2.0 Executive Brainstorm, London, May 2011

No single ‘Killer’ obstacle to growth – but lots of challenges

Delegates voted on the obstacles to mobile broadband revenues and the impact of various measures on the control of costs.

Figure 3 – Obstacles to growing Mobile Broadband Revenues

Source: Delegate Vote, 13th Telco 2.0 Executive Brainstorm, London, May 2011

Our take on these results is that:

  • Overall, there appears to be no single ‘killer obstacle’ to growth;
  • Net Neutrality is increasingly seen as a lesser issue in EMEA, certainly than in the US;
  • Whilst upstream customers’ expectations secured the largest number of ’major issue’ votes, we are not certain that all delegates fully understand the views, needs and expectations of upstream customers; and although those expectations are seen as an issue, they do not appear notably more challenging than organisational or technical ones;
  • Manageable technical and organisational issues (e.g. integration, organisational complexity) appear a bigger obstacle than unmanageable ones (e.g. inability to control devices), although:
  • Implementation issues vary by operator, as can be seen by the relatively large proportions who either do not see integration as an issue at all or see it as a major issue.

Managing Costs: Network Sharing, Offloads as important as LTE 

Figure 4 – Impact of Mobile Broadband Cost Reduction Strategies

Source: Delegate Vote, 13th Telco 2.0 Executive Brainstorm, London, May 2011

Our take on these results is that the approaches fall into three groups:

  • Strategic, long-term solutions including network sharing, LTE and offloading;
  • Strategies with a potentially important but more moderate impact including pricing, network outsourcing, and video traffic optimisation;
  • And lower impact initiatives such as refreshing the 3G network.

It is interesting that network sharing deals were seen as a more strategic solution to long term cost issues than migration to LTE, although there is logic to this at the current stage of market development with the capital investments and longer time required to build out LTE networks. Similarly, data offload is currently an important cost management strategy.

We found it particularly interesting that network sharing (collaboration) deals are seen as significantly more effective than network outsourcing deals, and will be exploring this further in future analysis.

Next Steps

  • Further research and analysis in this area, including a report on the pros and cons of ‘Under the Floor’ (outsourced network) strategies.
  • More detailed Mobile Broadband sessions at upcoming Telco 2.0 Executive Brainstorms.


Full Report – Entertainment 2.0: New Sources of Revenue for Telcos?

Summary: Telco assets and capabilities could be used much more to help Film, TV and Gaming companies optimize their beleaguered business model. An extract from our new 38 page Executive Briefing report examining the opportunities for ‘Hollywood’ and telcos.


NB A PDF of this 38 page report can be downloaded here.

Executive Summary

Based on output from the Telco 2.0 Initiative’s 1st Hollywood-Telco International Executive Brainstorm, held in Los Angeles in May 2010, and on subsequent research and analysis, this Executive Briefing introduces new opportunities for strategic collaboration between content owners and telcos. These opportunities address some of the fundamental challenges to their mutual business models caused by the growth of online and digital entertainment content.

To help frame our analysis, we have identified four new business approaches that are being adopted by media services providers. These both undermine traditional value chains and stimulate the creation of new business models. We characterise them as:

  1. “Content anywhere” – extending DSAT/MSO subscription services onto multiple devices eg SkyPlayer, TV Anywhere, Netflix/LoveFilm
  2. “Content storefront” – integrating shops onto specific devices and the web. eg Apple iTunes, Amazon, Tesco
  3. “Recreating TV channels through online portals” – controlling consumption with new online portals eg BBC iPlayer, Hulu, YouTube
  4. “Content storage” – providing digital lockers for storing & playback of personal content collections eg Tivo, UltraViolet (formerly DECE)/KeyChest

To thrive in this environment, and counter the continuing threat of piracy, content owners need to create new functionality, experiences and commercial models which are flexible and relevant to a fast moving market.

Our study shows that Telco assets are, theoretically at least, ideally suited to enable these requirements and that strategic collaboration between telcos and content owners could open up new markets for both parties:

  • New distribution channels for content: Telcos building online storefront propositions more easily, with reduced risk and lower costs, based on digital locker propositions like Keychest and UltraViolet;
  • Improved TV experiences: developing services for mobile screens that complement those on the primary viewing screen;
  • Direct-to-consumer engagement for content owners: studios taking advantage of unique telco enabling capabilities for payments, customer care, and customer data for marketing and CRM to engage with consumers in new ways;
  • Operational cost reduction for Studios and Broadcasters: Telco cloud-based services to optimise activities such as content storage, distribution and archive digitisation.

To realise these opportunities both parties – telcos and content owners – need to re-appraise their understanding of the value that each can offer the other.

For telcos, rather than just creating bespoke ‘enterprise ICT solutions’ for the media industry – which tends to be the current approach – long term, strategic value will come from creating interoperable platforms that provide content owners with ‘plug and play’ telco capabilities and enabling services.

For content owners, telcos should be seen as much more than just alternative sales channels to cable.

There is a finite window of opportunity for content owners and telcos to establish places in the new content ecosystems that are developing fast before major Internet players – Apple, Google – and new players use their skills and market positions to dominate online markets. Speedy collaborative action between telcos and studios is required.

In this Executive Briefing, we concentrate on the US market as it is both the largest in the world and the one that most influences the development of professional video content, and the UK, as the largest in Europe.

The developments in both are indicative of the types of changes that are facing all markets, although the exact opportunities and challenges are influenced by the existing make up of the video entertainment market in each country and the specific regulatory environment.

This report is part of an ongoing, integrated programme of research and events by the Telco 2.0 Initiative to foster productive collaboration on new business models in the global digital entertainment marketplace.

Sizing the opportunity

The online entertainment opportunity is often talked down by both telcos and media companies. It is, after all, just a small percentage of current consumer and advertising spend. Examples are easily cited that diminish the value of the opportunity: global Mobile TV revenues (revenues, not profits) don’t reach $1bn; on-demand represents just 2% of total TV revenues; online film (rental and download) does slightly better but hasn’t yet reached 5% of filmed entertainment revenues.

Individually, these are not the sort of figures that will get telco execs bouncing with enthusiasm, but collectively (as illustrated below) the annual digital entertainment market reached revenues of $55.4bn in 2009, with a growth rate approaching 20% – a rate that is even higher if you discount digital magazine and newspaper ad revenue. So, even today, digital entertainment is a market that equates to 83% of Vodafone Group’s 2009/10 revenue, and it is experiencing the kind of growth for which the mobile industry was once famed. Realistically, the telco share will remain small for some time, but as a growth market it cannot be ignored.

Table 1: Global Value of Digital Entertainment by Content Type 2009

The table reports revenue (US$bn) and % increase year-on-year for each digital content type: Video on Demand; Pay-per-view TV; Mobile TV; Online and Mobile TV Ads; Digital Music Distribution; Online Film Rental; Digital Film Downloads; Online Games; Wireless Games; Online Game and in-game Ads; Consumer Magazine Digital Ads; Newspaper Digital Ads; Electronic Book Publishing.

Source: Telco 2.0 Initiative and PricewaterhouseCoopers Global Entertainment and Media Outlook: 2010-2014

An important factor to note here is the continued growth of digital music distribution revenue. Music can easily be discounted as a medium that has already moved online, but it still has a huge amount of growth space, as a year-on-year revenue increase approaching 30% indicates. This is the key point: digital entertainment is a growth opportunity for the next decade. And when you look at the size of the physical products that currently serve the markets targeted by these digital alternatives, this is high growth potential for a market that is already of considerable size, as illustrated in the table below:

The table reports 2009 revenue ($bn) by segment: Television subscriptions and licence fees; TV Advertising; Recorded Music; Filmed Entertainment; Trade publishing; Book publishing.

Source: Telco 2.0 Initiative and PricewaterhouseCoopers Global Entertainment and Media Outlook: 2010-2014

Once you take out the existing online spend and those elements, such as movie theatre revenues, that won’t move online, the current addressable market is in the region of $700bn a year.

Again, to put that in context: at our recent Best Practice Live! online conference and exposition, Anthony Hill of Nokia Siemens Networks valued the web services 2.0 market at $1 trillion.

What is more, there is evidence to suggest that as well as the substitution of digital online for physical and broadcast formats, the virtual world is also bringing additional viewers, and potentially additional revenue, with it. An extract from Nielsen’s A2/M2 Three Screen Report, presented at our 1st Hollywood-Telco Executive Brainstorm, suggests that while online video viewing in the US grew 12% year-on-year and mobile viewing grew 57%, this was not at the cost of TV viewing in the home, which also grew, if only marginally, by 0.5%.

It is not surprising therefore that content owners are looking to take advantage of the shifts in the market and move up the value chain to take a greater share of the revenues, and that telcos also want to play a part in a significant growth market. Indeed, the motivations pushing both groups towards digital and online entertainment are truly compelling.

In the next two sections we examine these in more detail.

Telcos: Let us entertain you

Telcos are keen to build a bigger role in online entertainment for four reasons:

  • Entertainment provides the kind of eye catching and compelling content that broadband networks, both fixed and mobile, were built for
  • Broadband networks will carry the traffic irrespective of the role of the telco, so it’s strategically important to play in a part of the business that accounts for a majority of their total data traffic
    (We estimate that online video makes up one-third of consumer internet traffic today and that this could grow more than ten times by 2013 to account for over 90% of consumer traffic overall. That makes it vitally important that telcos understand and maximise the opportunities associated with video and although not all video will be entertainment and not all entertainment is video, video entertainment is a major market driver. For more on broadband data trends, see our latest Broadband Strategy Report)
  • Entertainment is a growth market and the type of opportunity that can help build a Telco 2.0 business that, in conjunction with others, could re-ignite the interest of the financial markets in telecoms as a growth stock

  • It is a defensive play. Cable and DSAT providers are bundling communications services – broadband and telephony – into their service offering, eating into the customer bases of telcos. While telcos are still receiving revenue from these through wholesale, they are losing the direct link to customers and the associated customer information, both of which are integral to the ability of telcos to build effective two-sided business models

Downstream opportunity and challenges

Today, the vast majority of telcos are concentrating their activities in the entertainment arena in downstream activities – in IPTV and mobile TV, backed in some instances by a web TV offering as well. The primary success factors are, as might be expected, coverage/reach, quality of service and of course the appeal of the content. These are pre-requisites for success but there are no universally applicable targets by which we can judge success as so much depends on the competitive landscape within each market.

For example, mobile TV is bigger in China and India than in Western Europe and North America, despite the late arrival of 3G systems in China and the very recent 3G spectrum auction in India, which has kept connection speeds low. Furthermore, TV is far from ubiquitous, covering about 75% of the world’s population, while Internet penetration sits around 25%, and as low as 12% in developing markets, according to the ITU.

So why is mobile TV getting better take-up on 2.5G in India than on 3G and 3.5G in mature markets? The simple answer can be found in the penetration levels of the alternatives. Mobile simply has better reach than alternative transmission systems in emerging markets, and the same factors influence mature markets with different results.

In developed markets, telcos are becoming part of the entertainment value chain as competitors to cable and DSAT, but primarily with IPTV as opposed to mobile which, with a few exceptions, is coming more through content-specific apps than general services.

In the US, Comcast’s COO, Steve Burke, recently cited telcos, along with other cable providers and DSAT service providers, as the company’s competition. Telcos have many options for how to enter the market, but the default seems to be to think firstly, if not only, of full IPTV services or mobile TV, driven primarily by the desire to defend their communications markets.

Telcos are competing with TV cable and satellite companies for the home market on two fronts. Firstly to deliver high speed broadband connectivity and secondly to offer TV services. The problem for telcos is that IPTV, their TV service offering, has barely scratched the surface despite recent rapid growth.

According to the Broadband Forum, global IPTV subscribers grew 46% year on year in the first quarter of 2010. This equates to 11.4 million new IPTV subscribers, the most rapid growth in any 12 month period yet recorded, and the global IPTV market totalled 36.3 million subscribers as at 31 March 2010. To put that number in some sort of context, according to Nielsen, there are 286 million TV viewers (not subscriptions) in the US alone.

The US IPTV market broke the 6 million subscriber mark in the first quarter of 2010 and is growing fast, but Europe is taking to the technology faster. France tops the IPTV charts, with just over 9 million users. This is perhaps no surprise given the weakness of its cable and satellite TV markets. Conversely, the UK, which has decent broadband penetration, ranking 6th worldwide, doesn’t even register in the top ten for IPTV. Market entry is tough in the UK, with BSkyB and Virgin Media dominating the pay TV market and, in BSkyB’s case, tying up the premium content.

In many countries, telcos have also struggled to do the deals with studios and TV networks that will secure them the most compelling content and are therefore struggling to compete with cable and satellite services.

IPTV realities

In the US, AT&T’s U-verse and Verizon’s FiOS IPTV services have made some inroads. FiOS had 3 million subscribers at the end of Q1 2010, according to the company, which also claims the service is available to 12.6 million premises, or 28.8% of Verizon’s footprint. Its TV subscriber base had increased 46% by the end of 2009, while the major cable companies saw their shares drop by one or two percent, though in absolute numbers cable remains dominant. Conversely, cable companies have seen their shares of broadband connectivity rise at the cost of the telcos.

Yet for telcos, the investments required to increase speed and capacity through fibre are high. Fibre certainly represents the future for connectivity, but its deployment is a long process, and building complete end-to-end IPTV services will not make sense for every area in every country. Indeed, even within the US, Verizon is concentrating on core states and has sold its local wireline operations in 16 states to concentrate on building its fibre business where it is strongest. And fibre rollout is just the start.

Becoming a TV service provider is not straightforward and becoming a differentiated TV service provider is even more challenging. In addition to the technical connectivity, it requires deals to be made with networks to show their programming and, if real differentiation is to be made, deals also have to be brokered with studios, production companies and other owners of content, such as the governing bodies of sports, to secure broadcasting rights. Then it requires the development of an easy to use guide and the ability to at least keep up with the technical developments with which established TV providers are differentiating themselves – HD, 3D and integration with web features, such as social networking sites and delivery across multiple screens. This is a significant undertaking.

Many telcos are recognising that they cannot play every role in the video distribution value chain in every market. Indeed, even in France, which we’ve established has receptive market conditions for IPTV, Orange has pulled out of competing in the sports and film genres that so often dictate the success of paid-for TV services, and France is not alone.

At our 1st Hollywood-Telco Executive Brainstorm and at the 9th Telco 2.0 Executive Brainstorm, Telecom Italia’s representatives reiterated the company’s belief that IPTV was primarily a defensive play, designed to protect broadband revenues and that entertainment-related revenues would come instead from using telco assets and telco-powered capabilities to build new services around TV.

For Telecom Italia this is primarily about the upstream play based around QoE, CRM, billing and customer data, and it also believes a business can be built around context and targeted advertising for free content on three screens. Building such functionality – linking consumer information and data about the environment to supplement TV services themselves and offering similar functionality to third parties – is a core strategic decision being made by Telecom Italia, more about which can be seen in Antonio Pavolini’s presentation on our Best Practice Live! event site.

The upstream potential for telcos is something we shall return to later but we also believe that by working with, rather than in competition with studios and other content owners, telcos can become involved faster and more effectively in delivering entertainment services to their customers.

In addition, we believe an alternative and more accessible downstream opportunity exists based on digital rights lockers.

Digital Lockers: an alternative downstream option

Digital rights lockers are virtual content libraries hosted in the cloud that allow consumers to develop collections of content that are not tied to a physical format or device. There are a number of digital locker developments for online video, most notably Disney’s Keychest and UltraViolet, the new brand for the Digital Entertainment Content Ecosystem (DECE), a cross industry development.

These basically mean that a consumer can buy a piece of content once and then view it on any device at any time. For example, if a consumer buys Avatar on Blu-ray disc, they would be able to register the disc into an UltraViolet-powered rights locker at the time of purchase or at a later date. Once registered, the digital proof of purchase is held in the cloud, and media service providers – cable companies, telcos and so on – can access that information in the consumer’s rights locker to confirm that they can deliver the content to them.

There are two important points here. The first is that from a consumer point of view it gives them what they want in the form of a one-time purchase for multiple formats; the second is that it breaks the tie between the device and the content. Content, in the form of rights, is held in the cloud, and is delivered in the appropriate form for any supported and registered device.

UltraViolet works using a network-based authentication service and account management hub from Neustar that allows consumers to log in and access the digital entertainment they have rights to. The system authenticates rights to view content from multiple services on multiple devices, and manages the content and the registration of devices in consumer accounts.

It means that content doesn’t keep a user locked with a particular device manufacturer as, for example, iTunes content does to the iPhone or iPad. Therefore if a consumer switches to a different device, assuming it is also supported, content will be viewable.

Furthermore, the fact that UltraViolet has multiple content owners committed – namely Fox Entertainment Group, NBC Universal, Paramount, Sony (which chairs UltraViolet) and Warner Brothers – makes it a strong proposition: a rival offer that was multi-device but limited to the content of a single owner would be unlikely to succeed against it.

What this means for telcos is that an ecosystem is being built that they can simply tap into. A telco could use an UltraViolet-provided API to build access into its own customer offerings, whether that be IPTV, mobile TV or a web-enabled storefront. It can query the rights locker in the cloud to see what a consumer is entitled to view, and then deliver the video to the consumer in the right format for the device, complete with the necessary rights attached. An indication of how this flow works is illustrated in the diagram below.

Figure 1: Building the Digital Locker Proposition

Source: Telco 2.0 Initiative
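The flow just described – query the rights locker, confirm the consumer’s entitlement covers the requesting device’s format, then deliver – can be sketched in code. This is a minimal illustration under our own assumptions: the class and method names (`RightsLocker`, `register_purchase`, `deliver`) are hypothetical and do not represent the actual UltraViolet/Neustar API.

```python
# Hypothetical sketch of a rights-locker entitlement check.
# Names and fields are illustrative assumptions; the real
# UltraViolet/Neustar API is not reproduced here.
from dataclasses import dataclass


@dataclass
class Entitlement:
    title: str
    formats: tuple  # formats covered by the one-time purchase


class RightsLocker:
    """Stands in for the cloud-hosted rights locker."""

    def __init__(self):
        self._accounts = {}  # account_id -> list of Entitlements

    def register_purchase(self, account_id, title, formats):
        # E.g. registering a Blu-ray disc grants rights in several formats.
        self._accounts.setdefault(account_id, []).append(
            Entitlement(title, tuple(formats)))

    def entitlements(self, account_id):
        return self._accounts.get(account_id, [])


def deliver(locker, account_id, title, device_format):
    """Telco-side step: check the locker, then deliver the content
    in the format the requesting device needs."""
    for ent in locker.entitlements(account_id):
        if ent.title == title and device_format in ent.formats:
            return f"streaming '{title}' as {device_format}"
    return None  # no entitlement held: offer a purchase instead
```

In this sketch, registering a title once entitles the consumer to playback on any registered device format, which captures the buy-once, view-anywhere point made above.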

The key point here for telcos is that such an approach overcomes the issues associated with scale and potentially with international distribution deals as well.

Both of these were uppermost in the minds of telco execs at the 1st Hollywood-Telco Executive Brainstorm, who constantly reiterated their frustrations at not being able to get the deals they need to give their customers the content they demand because of high ‘minimers’ and a mismatch between the internal operations and practices of telcos and studios. (See Telco 2.0 in Hollywood: There’s Gold in Them Thar Hills).

Collective activities such as UltraViolet are by definition more difficult to develop than those from an individual company, as they require balancing individual priorities and goals against those of the group. However, as we will discuss in more detail later, the desire of studios to compete with other groups to act as the retail and distribution point makes working together more desirable.

It is also important to recognise that the strategic choices made by media companies have an impact on the options available to telcos. When media companies work together and create a new ecosystem, the value of upstream services from a single telco diminishes, while their ability to enter as a downstream player increases, because the open nature of an UltraViolet-like ecosystem allows them to grow incrementally. Should telcos want to play on the upstream side in such communities, then they too need to act collectively.

Telcos’ missed opportunity

Telcos have been notable by their absence from the development of UltraViolet, which means many of the upstream capabilities they could have offered – authentication, billing, formatting for different devices – are being developed, at least in the first instance, in other ways. However, there is potential for these relationships to strengthen over time, particularly around mobile device support. Formatting data for specific devices is a basic operator function, and this expertise could be highly valuable to the UltraViolet community. But to be offered as an upstream service, rather than a downstream differentiator, it needs to be a collective proposition from the operator community, not a piecemeal one.

However, if telcos are being challenged to develop new models to engage their customers, content owners are challenged even more so.

UltraViolet and other digital rights locker solutions open an alternative downstream opportunity for telcos and particularly for those that don’t have the market scale to compete with other Pay TV services. It is a middle ground that enables telcos to build an entertainment portfolio and overcome some of the challenges of building an entirely new business.

Content’s business model crunch

Once upon a time there were only two ways to get content to a large number of people: broadcast, or a delivery chain made up of distributors, aggregators and retailers. The Internet changed that, and high-speed broadband access changes it again, as even HD video can now be accessed by individuals over the Internet, opening up new competition to those traditional channels.

All of this brings new challenges to the existing models of traditional media companies, which are both being challenged and pursuing new opportunities, as illustrated in the table below. The colour coding of the challengers reflects the severity of the challenge in the short to medium term (1-3 years).

Film Studios
  Upstream business model: None
  Downstream business model: Revenue share from various sales windows – movie theatre, DVD sales, DVD rentals, pay TV, free-to-air TV – with the respective distributor/retailer
  Upstream challengers: None
  Downstream challengers: New online rental and sales channels eg Netflix, LoveFilm, iTunes, plus free and pirated content
  New business opportunities: Downstream – sell direct to consumer, keeping all revenue, and expand the service offering with merchandising upsell etc; Upstream – create an upstream advertising business

Free-to-Air TV Broadcasters
  Upstream business model: Selling advertising inventory
  Downstream business model: Public/government funding
  Upstream challengers: Fragmentation of peak audiences; Google TV and online player services, which undermine/destroy the value of advertising
  Downstream challengers: None
  New business opportunities: Upstream – greater distribution and lifespan delivering more ad value/opportunities; greater value to advertisers based on better measurement; greater targeting and personalisation; instant purchase opportunity

Pay TV Broadcasters
  Upstream business model: Selling advertising inventory
  Downstream business model: Pay TV services to consumers
  Upstream challengers: Google TV and online player services, which undermine/destroy the value of advertising
  Downstream challengers: New sellers of TV content – Netflix, iTunes; new distributors of TV content – Hulu; new connected TV propositions – particularly Google TV
  New business opportunities: Downstream – maximise the value of content with pre-broadcast promotion and post-broadcast access; Upstream – greater distribution and lifespan delivering more ad value/opportunities; greater value to advertisers based on better measurement; greater targeting and personalisation; instant purchase opportunity

Games Publishers
  Upstream business model: Licensing of IP to third parties eg films, TV, books, comics; ad-funded apps
  Downstream business model: Largest share of product sales revenue, shared with distributor and retailer, plus licence fee payable to platforms; in-play features
  Upstream challengers: Piracy
  Downstream challengers: Spiralling costs of production undermining profitability; online distribution’s potential to open the market and reduce the power and value of the publisher role; piracy
  New business opportunities: Online potential to break the links between the platform and the game and gain a larger share of ‘hit’ game revenues; growth and monetisation of casual and mobile games

Independent Games Developers
  Upstream business model: Commissions from publishers, platforms and studios; ad-funded apps
  Downstream business model: Revenue from product sales shared with distributor and retailer, plus licence fee payable to platforms
  Upstream challengers: Spiralling costs of production undermining profitability; piracy
  Downstream challengers: Spiralling costs of production undermining profitability
  New business opportunities: Online potential to break the links between the platform and the game and gain a larger share of ‘hit’ game revenues; growth and monetisation of casual and mobile games

Newspaper/Magazine Publishers
  Upstream business model: Selling advertising inventory
  Downstream business model: Revenue share with distributors and retailers
  Upstream challengers: Proliferation of online publications hitting subscription bases and diluting value to advertisers; free is the dominant model
  Downstream challengers: Online proliferation of publications; user-generated publications, blogs etc; readership dropping
  New business opportunities: Online provides instant access to news and views – faster turnover and more inventory; potential to leverage the back catalogue and increase the life/value of old articles

Book Publishers
  Upstream business model: None
  Downstream business model: Revenue share with distributors and retailers
  Upstream challengers: None
  Downstream challengers: Online cutting the price of the product and therefore the revenue to be shared
  New business opportunities: Potential to go direct to consumers and dramatically lower production costs

Music Labels
  Upstream business model: Licensing model to third parties
  Downstream business model: Revenue share with artists, distributors and retailers
  Upstream challengers: None
  Downstream challengers: Piracy; free and low-cost online models taking too much revenue out of the value chain to sustain it
  New business opportunities: Have to reinvent the business model, as the opportunity has already been missed

Source: Telco 2.0 Initiative

For some entertainment sectors, especially the music labels, a major battle, if not the entire war, has been lost, and the fear of following in their footsteps keeps the minds of TV and film studios, as well as publishers, focused on the possible threats to their own revenue streams.

For other content owners, such as game developers, the opportunities look to outweigh the threats, as their position in the value chain is currently limited by the strength of the platforms and publishers. Indeed, by examining the games market we can see some of the opportunities that are developing for content owners to usurp failing business models and engage more directly with their customers in many more ways.

Games search for better business model

The online games market is currently the most valuable of the online content businesses, but this value comes predominantly from massively multiplayer online games, such as World of Warcraft, and from casual games, not from the blockbuster platform games that dominate the market. There has long been a feeling amongst developers, and even some of the smaller publishers, that the business model is broken, stacking the odds against developers.

Developers, whether in-house with publishers or independent, face burgeoning costs caused by the fragmentation of platforms and the increasing reliance on mega-hit games. These cost more and more, due to competitive pressure driving the complexity and sophistication of the game itself, plus licensing fees paid for IP and to console manufacturers, which, bizarrely, are paid according to the number of games produced, not sold. Currently, average development costs range from $15 million for two SKUs (the unit attached to each hardware version) to $30 million for all SKUs, and the figures keep rising. For example, according to Blitz Games, it cost $40 million to develop Red Dead Redemption, the current number one best-selling game.

It is not surprising, therefore, that casual and social games, which typically have six-month development cycles and cost between $30,000 and $300,000, and mobile games, which cost even less – generally in the range of $5,000 to $20,000 per title – are attracting more of the time and energy of independent developers. What has been missing in the past is an effective route to market, but through social networking sites for casual games and app stores for mobile games, these titles are now reaching mass audiences with ease, and both bring new business models to the games industry.

The mobile model is a simple storefront revenue-share arrangement, which typically returns 70% of revenue to the developer. The casual gaming and social networking tie-up follows a freemium model, whereby the basic game is played for free and higher levels, additional features and tools can be purchased. Both of these represent potential channels for ‘hardcore’ games to follow, if – and it’s a big if – the experience of the console can be replicated online.

OnLive has become one of the first to examine this potential, launching the first version of its cloud-based games service on June 17, 2010. We will follow up on its development, and on the business model alternatives and opportunities, in an Analyst Note later in the year.
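The two models differ in how revenue accrues to the developer: a flat share of each paid download versus a small converting fraction of a large free audience. A toy comparison makes the contrast concrete; every figure below is purely illustrative, not market data.

```python
# Toy comparison of the storefront revenue-share model and the freemium
# model described above. All numbers are illustrative assumptions.

def storefront_revenue(unit_price, units_sold, developer_share=0.70):
    """App-store model: a flat revenue share on each paid download."""
    return unit_price * units_sold * developer_share

def freemium_revenue(players, conversion_rate, avg_spend):
    """Freemium model: free to play, a fraction of players buy extras."""
    return players * conversion_rate * avg_spend

# eg 100k paid downloads at $2.99 vs 1m free players, 3% converting at $5
print(round(storefront_revenue(2.99, 100_000)))       # 209300
print(round(freemium_revenue(1_000_000, 0.03, 5.0)))  # 150000
```

The point of the sketch is that a freemium title needs roughly an order of magnitude more players than a paid title has buyers before the two models pay the developer similarly.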

However, while the games market faces up to its own business model challenge, it, like many other content areas, should not be viewed in isolation. Games consoles provide more functionality than just playing games: they also offer Internet access, apps and voice, and are capable of playing DVDs. Meanwhile, games also have the potential to be accessed via other entertainment channels, and this is indicative of the blurring boundaries that have emerged across the entire entertainment arena.

Blurring boundaries

Today, we have global media companies, such as News Corp, which own film and TV studios, book and newspaper publishers, TV networks, and DSAT and cable service providers.

This convergence is a long-standing trend, which has also thrown up some very notable failures, such as AOL TimeWarner. However, the synergies across vertical entertainment sectors mean that companies continue to strive to build the ultimate media company. Indeed, the online environment only encourages this, as content type is no longer tied to delivery medium. Furthermore, other companies are moving up and down the value chain. In fact the landscape is more complicated, with three distinct but sometimes related trends emerging:

  • The value chain is breaking up with existing players given the opportunity to collapse it down to fewer elements and take a larger share for themselves
  • New players are disrupting the ecosystem with new business models that challenge the value of the existing chain just as it is being reconfigured
  • New functionality, such as integrated apps or extension to the mobile screen, is providing further ways to differentiate services

Each of these alone would be considered an industry challenge; combined they set up a bloody battlefield. What is more, the battle is not limited to a single content type. The distinctions between platforms are also blurring: you can get TV through your games console, films on TV from your pay TV service, and TV through the Internet, and together these forces are totally disrupting the value chain.

These forces have also prompted the emergence of four different business approaches as follows:

  • “Content anywhere” – extending DSAT/MSO subscription services onto multiple devices eg SkyPlayer, TV Anywhere, Netflix/LoveFilm
  • “Content storefront” – integrating shops onto specific devices and the web. eg Apple iTunes, Amazon, Tesco
  • “Recreating TV channels through online portals” – controlling consumption with new online portals eg BBC iPlayer, Hulu, YouTube
  • “Content storage” – providing digital lockers for storing & playback of personal content collections eg Tivo, UltraViolet/KeyChest

These are not mutually exclusive but are providing new approaches that media service providers are using alone or in combination to create new business models that are disrupting the traditional value chain and building new ones.

Disrupting and rebuilding the value chain

Using the example of video distribution, we can see more clearly how things are changing. In the traditional value chain for video distribution (illustrated below), a consumer purchases video content from a retailer, who in turn was supplied by a wholesale distributor who bought from the content creators, and watches it on a device bought specifically for the purpose – TV, video/DVD/Blu-ray player, games console, hi-fi, etc.

Figure 2: Video Content Value Chain

Each role is served by a distinct group of companies that are highly competitive amongst themselves, but as distribution and delivery move online, the two end points are moving towards the middle to gain greater control over the market and a larger slice of the pie.

The online digital value chain has the same functions to deliver, but which companies should provide them is anything but clear, as each player has a shot at cutting some of the others out of the chain completely. The obvious point of collapse is between the distribution and retail elements, and this is where the likes of Amazon’s on-demand service come in. However, it is not the only convergence point, and content owners should be defining the online value chain to position themselves at the centre of it, not taking what they are given.

In the physical world, selling content straight to the consumer was a logistical impossibility for content owners other than TV broadcasters, but with digital distribution a direct channel to the consumer becomes a distinct possibility. What we now have is the possibility of combining distribution and content creation: for content creators, this means serving consumers directly through on-demand services over the Internet.

All the major film studios have their own on-demand services and provide early access to on-demand content through them. For example, Universal launches films on its site the same day they are released on DVD. This suggests that although the sale of DVDs is the most lucrative of the second rights windows, a direct online sales channel is even better value for the studios.

However, while a direct sales channel is good for studios, the highly lucrative DVD/Blu-ray retail market is being threatened from all sides – by subscription rental, by VoD download and streaming, and by pirates – all targeting online the opportunities currently served by the physical windows. Studios are therefore attempting both to maximise the opportunity of the new channel and to protect their traditional revenue streams. It is a decidedly difficult path to tread, and the winners and losers will inevitably be defined by their ability to pull on the vital parts of the value chain.

Taking each link in the chain in turn, we can see how the four new business approaches are being created and how telcos could contribute to their development.

Content Creation

The major studios have professional content ownership in the bag, at least for the time being. There is little doubt that while user-generated content (UGC) creates an interesting new dynamic, especially for news services, and cheaper, more accessible production and online distribution theoretically open up greater opportunities for independent productions, neither is yet close to mounting a serious challenge.

There have been a few examples of this in music, where independent artists have used the Internet for distribution and social networks for marketing, cutting out the need for a music label. Given that the music industry’s experience with online distribution is the one all content owners cite as the example to avoid, this trend is worth watching.

Perhaps a stronger play is to draw on new interactive capabilities to supplement the content creation process, helping to define plot lines and characters. This is an area that has already seen some collaboration between telcos and studios, for example when Sprint worked with WPP’s Mindshare and NBC to create a new character for Heroes. This was in fact seen as a development in advertising rather than creative content, but it establishes a precedent for engaging with a show’s audience.

These are both developments that demand tracking but they’re not the most pressing issue at the moment. That comes from the convergence of the online distribution and retail elements of the video value chain.

Distribution and Retail

The line between retailers and distributors is fading in the digital world, as getting content from content owner to retailer is a simple electronic process. No distributor is required, and the content owner can go directly to the retailer, which, if independent of the studios, can also act as an aggregator. So the more likely structure is that content owners will become their own retailers, competing with independent aggregator-retailers that provide a one-stop shop for content from multiple media companies.

This might appear nothing but positive, but without the distributor or the need for a physical outlet, retailers have the ability to completely reset the pricing model according to their vastly reduced cost base. Content owners have little or no influence over the prices retailers choose to set once the retailer, or indeed the renter, has purchased the selling rights. That has already set a challenge to content owners, as we have seen with Apple iTunes and music, and increasingly with TV and film content. The $0.99 film may appear a good deal to the consumer, but is it enough to sustain investment in new film development, and once price expectations are set, can they be raised?

Studios are, not surprisingly, anxious to avoid having pricing dictated to them by a single dominant online retailer, but to compete effectively they need to differentiate their online stores – through content, of course, but also through the ease of use and relevance of their services.

Unlike telcos, or even cable cos and DSAT service providers, content owners have access to the content with which to differentiate their online service from competitors. They are of course limited to their own content, but with a limited number of top studios and content creators, consumers would be able to find their way to the content they liked most – just as they find their way to the content they want on different TV channels. But content alone does not a service make, and this is the steep learning curve that content owners are embarking on.

A full service requires effective content delivery; quick, easy and secure payment/billing facilities; a slick search and recommendation engine; easy links to additional services or more video content; and efficient customer care. None of these are core competencies for film and TV studios, or even games developers and publishers. However, they are core assets of telcos – the same telcos that are struggling to get the right level of premium content from the studios. There is a natural synergy here, even more so when you take into consideration telcos’ motivations for getting into the entertainment arena.

To this end we have identified core telco assets that could aid that differentiation. These are:

  • Interactive Marketing and CRM – telcos are capable and willing to share the information they hold on customers and their behaviours, leveraging both personal and contextual data to create valuable information services
  • Customer Care – a core telco competency that is completely lacking from the skill set of media companies
  • Three Screen Delivery – telcos have vast experience in identifying devices, the OS and software platforms, detecting software configurations and radio connectivity and transcoding to deliver content in the right format for the device
  • Direct Payments – through the telco bill, one click secure and easy payments are possible, overcoming one of the major obstacles identified to put off prospective customers. Includes the ability to detect account status, billing and payments capabilities
  • Identity-based Content Delivery – the telco ability to identify customers and deliver across platforms provides another way to build a content anywhere business

(For more, see How to Out-Apple Apple).
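The “three screen delivery” asset above amounts to mapping a requesting device to a delivery profile before transcoding. The sketch below illustrates that idea; the user-agent keywords and profile values are illustrative assumptions, not any operator’s actual rules.

```python
# Minimal sketch of device-aware delivery: map a device's user-agent
# string to a delivery profile. Keywords and profiles are hypothetical.

DELIVERY_PROFILES = [
    ("iphone",  {"container": "mp4", "codec": "h264", "max_width": 640}),
    ("android", {"container": "mp4", "codec": "h264", "max_width": 854}),
    ("smarttv", {"container": "ts",  "codec": "h264", "max_width": 1920}),
]

def pick_profile(user_agent):
    """Return the first profile whose keyword appears in the user-agent."""
    ua = user_agent.lower()
    for keyword, profile in DELIVERY_PROFILES:
        if keyword in ua:
            return profile
    # Fall back to a conservative web profile for unknown devices.
    return {"container": "mp4", "codec": "h264", "max_width": 1280}

print(pick_profile("Mozilla/5.0 (iPhone; CPU iPhone OS 4_0)"))
```

Real operator systems combine this kind of device detection with network-side knowledge (radio connectivity, account status) that over-the-top players cannot easily see, which is the basis of the upstream proposition described above.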

It will also be important for studios to act quickly, as hardware vendors have watched the Apple phenomenon and are looking to get a slice of the pie, meaning there is far more than one new entrant.

New roles – device vendors sell content

Entertainment device manufacturers rely on new formats and technologies linked to content to sell new hardware – eg high-resolution and 3D films, TV and games. That is how their business model works, but online distribution also gives them the potential to move up the value chain into the retail sector, grabbing a share of the revenue as Apple has done. The process has been exemplified in the mobile industry: with apps and app stores, device manufacturers can build communities and develop the potential for three revenue streams – hardware sales, revenue share on app sales, and even software licensing to app developers.

There are two converging trends involved here. On the one hand, entertainment and communications devices are becoming one and the same thing, while on the other, devices are moving up the value chain and vendors are looking to gain revenue from outside of the highly competitive consumer electronics markets.

For example, Sony has built an Internet community around the PS3, using it as the backbone for its connected TV network, looking to gain a recurring revenue stream alongside the one-off hardware revenues of the TVs themselves. TV replacement cycles are around 7-8 years, and while the pace of innovation is increasing – 3D is coming relatively quickly on the heels of HD, compared with the moves from colour to remote control to digital, for instance – it is still a more uncertain business, and one more influenced by macroeconomic trends than pay TV services.

Sony is particularly interesting because its previously separate divisions serving the consumer electronics and entertainment markets are combining their assets to mount a significant challenge to the value chain. For example, in 2008, the Sony Pictures film Hancock was offered to US owners of its Bravia TV sets just a few days after the cinema launch and before the DVD went on sale, completely usurping the release-window process that sees films follow a well-defined path from cinema to DVD/Blu-ray sale, then rental, followed by pay TV and finally free-to-air TV.

The Hancock experiment has not been repeated; instead Sony has done deals with other online distributors for film on its Bravia sets, such as LoveFilm in the UK. But it does show the potential if, like Sony, a company can control both ends of the value chain. However, the new competition does not end there.

New players – TV changing times

Continuing with the TV example, we can see that traditional TV broadcasters look at web distribution as an opportunity to:

  • Extend their reach beyond the limitations of the time-specific schedule
  • Diversify their offering by creating additional services and interactivity around broadcast services

On the flip side, though, the web is also a threat, as it takes away the broadcaster’s iron control over the distribution channel. YouTube has proved a key battleground: broadcast TV programmes that were once posted for free within minutes of airing now have a fee attached, and a raft of country-specific as well as general providers of managed solutions has emerged as quality becomes a greater differentiator.

Taking this a stage further are the Internet players, such as Apple and Google, which have both released TV-specific services that extend their Internet expertise and services to the TV screen. Most significantly, these also extend their respective business models to the TV screen. How they fit into and challenge the overall connected TV market is illustrated in the table below, which features a number of players from the US, the biggest market for video entertainment, and the UK, Europe’s largest market, that are trying out new business models in attempts to gain a greater share of the market.

Table 4: Who’s Doing What in Connected TV

Pay-to-view TV services

Amazon
  Product: Online retail community – streaming and download
  Business model: Retail model for sale and rental (new business approach: Content Storefront)
  Strengths: Strong online retail brand and community; cloud-based storage and access
  Weaknesses: Complex rights management; weak mobile/portable service

Apple
  Product: Apple TV box plugs into any HD TV to provide the Apple interface to content through the App Store and iTunes, with Mac-like navigation. Available now. Rumours persist about a $30 per month subscription service, but Apple has not yet been able to secure the content deals to make this happen.
  Business model: Hardware sales and a 30% revenue share of all film and TV downloads (new business approach: Content Storefront & Content Anywhere)
  Strengths: Apple brand; leverages existing content environment; iPhone and iPad for mobile reach and ads
  Weaknesses: On-demand service only, no scheduled broadcast; depends on the content it can get – this has held up the subscription-based monthly TV service

Google TV
  Product: Box from Logitech or embedded in new Sony TVs; leverages Google search capabilities and the Android developer community. To launch Autumn 2010.
  Business model: Revenue share from Android Market and extension of AdWords from the Internet to TV (new business approach: Content Channel & Content Anywhere)
  Strengths: Google brand; search engine; Android app developer community and mobile reach
  Weaknesses: Scale – hardware has to be bought; access to content, currently blocked by four major broadcasters in the US; dependence on hardware vendors and content owners

IPTV

Verizon
  Product: FiOS IPTV service with hundreds of channels, VoD and PVR
  Business model: Triple-play service bundled with broadband and telephony; defensive activity to protect comms revenues (new business approach: Content Anywhere)
  Strengths: Triple play – control of both the Internet and TV channels into the home; control over the delivery pipe
  Weaknesses: Scale; lack of exclusive access to premium content and of content differentiation

Sky Player (UK)
  Product: Web TV player behind a pay wall giving subscribers the same service as they get through the satellite TV service
  Business model: Watch-anywhere value-add; defensive play against web TV players (new business approach: Content Anywhere)
  Strengths: Leverages existing premium content deals and original content – especially sports; extensive back catalogue
  Weaknesses: Ability to control the quality of the experience

Netflix/LoveFilm
  Product: Postal and online rental service
  Business model: Subscription model for rental that breaks the pay-per-view rental model (new business approach: Content Anywhere & Content Channel)
  Strengths: Subscription model attractive to consumers wanting a set cost; the model naturally migrates online and gets stronger with lower distribution costs
  Weaknesses: At the mercy of content owners wanting to protect revenue from other windows; no control over the pipe

Cable

TV Everywhere (in beta testing on Comcast’s Fancast)
  Product: Led by Comcast and Time Warner, this provides online access to their cable content behind a pay wall
  Business model: Watch-anywhere value-add to prop up premium subscription packages; defensive play against web TV players such as Hulu (new business approach: Content Anywhere)
  Strengths: Leverages existing premium content deals and original content; extensive back catalogue; ability to control the quality of the experience
  Weaknesses: Late to the game; limited to those who already have a cable subscription – no online-only business model

Hybrid Pay and Free

Hulu Plus (launched June 29, 2010)
  Product: $10 a month subscription premium service for near real-time access to TV content on all screens, with pick-up-and-play handover across all of them
  Business model: Freemium model building value-add on top of the free Hulu service, adding subscription revenue to advertising (new business approach: Content Channel)
  Strengths: Access to premium new and catalogued content from its parent companies; two-sided business model; three-screen delivery
  Weaknesses: Access to content beyond that of its founders; no control over the QoS of the pipe – possible victim of throttling

YouTube
  Product: Free-to-view video upload for UGC; advertising-supported professional content; experimenting with paid-for content
  Business model: New business approach: Content Anywhere
  Strengths: Brand, scale and reach; Google technical and financial backing
  Weaknesses: High costs of storage; lack of advertising placement

Free-to-view web TV services

Free to Air – Project Canvas (UK)
  Product: Extension of Freeview (all free-to-air digital channels) to the web with full player capabilities
  Business model: Ad-funded (new business approach: Content Anywhere)
  Strengths: Reach – accessible by anyone with web access
  Weaknesses: Player functions, such as fast forward, undermine the value of advertising; late to the game; no control over the QoS of delivery – possible victim of blocking/throttling

Hulu
  Product: Free access to a range of Flash-based streamed TV and Internet video
  Business model: Ad-funded, although the intention to introduce some paid services has been announced (new business approach: Content Anywhere)
  Strengths: Free access limited only by Internet connection; access to premium new and catalogued content from its parent companies
  Weaknesses: No control over the quality of the connection – possible victim of throttling; limited access to premium content from other media companies; non-sustainable business model for the most valuable content

Source: Telco 2.0 Initiative

Apple goes it alone

Apple TV is predominantly a pay-as-you-go model based on iTunes; it is a virtual retailer and has pioneered the content storefront model. Interestingly, TV is the first Apple business not to be built around hardware, suggesting that the company believes its online store is strong enough to carry a business on its own. However, Steve Jobs has referred to Apple TV as a project that will remain a hobby for some time to come. Despite this, Apple still has a strong influence over the online video market: its pricing policies for video on iTunes have reset pricing levels, and its stand against supporting Flash is not just a technology decision based on the ‘buggy’ nature of Flash, as described by Jobs.

Flash fire

Certainly Flash is quite heavy and would slow the iPhone down, so Apple is protecting its user experience, but it is also protecting both its upstream and downstream business models for professionally created video content.

The vast majority of online video services, from Hulu to Netflix, LoveFilm and the TV network players, are Flash-based. This means they can be viewed via a browser on any device that supports Flash. By taking Flash out of the equation, video consumers on the iPhone are left with two choices: get a different device that supports Flash, or purchase video content through iTunes – and that means more revenue for Apple. Apple talks about HTML5 as the natural replacement for Flash, but this is still a new technology, and Apple will look to maximise its position for as long as possible.

Video service providers or content owners also have two choices: build a second site that works on HTML5, as the BBC has for the iPlayer, or create apps for the devices, as Netflix, LoveFilm and Hulu have. The latter approach again works well for Apple, as the apps strengthen the consumer's tie to Apple hardware: should the consumer want to move to, say, an Android device, they would lose the app, so it reinforces the company's hardware business model. Furthermore, where applicable, Apple also gets its 30% of the app revenue and, of course, SDK revenue.

The decision not to support Flash may have something to do with technology and protecting the user experience but it clearly also has everything to do with reinforcing the strength of Apple’s existing business models.

As with all other services, Apple and Google are taking fundamentally different approaches, with Apple expanding its closed iTunes environment while Google remains open. Apple is betting on taking a share of distribution/retail revenue; Google on turning TV into another, and potentially huge, extension of its advertising platform.

The big question, therefore, is what this means for the broadcasters' advertising model: if Google's version, which is performance-based and uses real rather than predicted data, is available on a TV screen, won't the value of TV advertising decrease?

In the Apple case, it establishes the direct relationship with the customer, while Google’s play is purely upstream, extending its reach and delivering a new audience to its advertisers.
Apple also has another and, for it, more significant objective: selling new hardware. While its famous App Store brought in $1 billion in its first complete year of activity, 30% of which went to Apple, that is put into perspective by the company's total revenues of $13.50 billion and net quarterly profit of $3.07 billion for the second quarter of 2010.

Connected TVs could represent another diversification for the company that has been so successful at moving into the mobile device market and, just as with the mobile industry, Apple is looking to leverage the tight integration of its content marketplace to differentiate its hardware. This has been the primary driver of Apple's business model, but the expectation is that the relatively low retail price of the box and the lack of any real design differentiation mean that, for TV, Apple will look to create value from the service on the big screen and combine this with tight integration on the portable devices – the iPod, iPhone, and iPad.

Google, on the other hand, is working in conjunction with hardware companies, most notably Sony for the Internet-enabled TV end product and Intel to get the Google functionality embedded in the chipsets. And Google's objectives are not small: a spokesman for the company has been quoted as saying that it aims to have as big an impact on the TV industry as smartphones have had on the mobile world.

Differentiation options

In essence, all the forces converging on TV are competing for the eyeballs of the consumer, and the way they are doing this is to offer the consumer more choice: more content on more devices to be watched at more times. The consumer utopia, it seems, is to be able to pick exactly what they want to watch, when they want to watch it, and on what screen; in other words, the complete antithesis of broadcast TV, where you get what you're given, when it's broadcast, and always on the same type of screen.

There are, however, problems with this utopia, and the companies that solve them most effectively will turn out to be the long-term winners. As we are starting to establish, there are four key points of competition:

  • The range of premium content
  • Integration of Internet apps to enhance the viewing experience
  • The user interface/guide for finding the content
  • Extension to the mobile screen for true anywhere, anytime viewing and enhancement of the big-screen viewing experience

This is not to say that these are the only ways to compete, merely that they are the primary ones at the moment. Of these, only the appeal of content is proven; consumer demand for, and willingness to commit dollars to, the other three is, as yet, not.

Companies are, however, beginning to place their bets, and while some are predictable, such as Google looking to leverage its search expertise for content discovery and its Android app development community to bring Internet-style services to TV programming, a lot more remain unclear. In particular, the ability of content owners that don't have an established distribution channel to draw upon to go directly to consumers is in question.

Interaction with Internet applications

For Google, the key is a Software Development Kit (SDK) that will help independent content providers develop widgets that access its platform and content, and participate in Google's advertising revenue-sharing programme, similar to AdSense for desktop apps.

The Google TV proposition is Android-based, drawing on the rapidly growing global developer community to create new and innovative ways for TV viewers to interact with their TVs. Facebook and Twitter widgets would provide easy-to-use chat facilities around key programming, for example. These are already widely used, and linking them in real time with the programme screen itself adds a further level of social networking. Beyond that, the possibilities are almost limitless, but platform owners must understand what they are doing and the possible impact on existing revenue streams, particularly advertising, as they could find their business model undermined.

For example, at a recent conference in London, an ITV representative speaking about Project Canvas stated that widgets and other apps would be uploaded onto the platform for free. Now imagine if an existing advertiser, say BetFair, which advertises alongside sports events, launched a widget that enabled live betting on the event being broadcast. Why would it pay for advertising in a prime spot when it can launch the widget for free?

So the big Internet companies are coming to the TV party. They bring with them their own business models and these will compete with and impact on those of established TV studios, networks and distributors.

Free to air broadcasting is a simple one-sided upstream business selling advertising timeslots, whereas pay TV has a two-sided business model which adds consumer subscriptions to the revenue pot. A third pay-per-view element also exists for some pay TV services but revenues from these have declined over recent years. Following these developments through, we can see some major changes on the horizon.

Impact on advertising

TV accounts for the largest single proportion of advertising spend. Globally, it secures 36.6% of total ad revenues, according to PwC, and although spending on TV advertising has fallen over the last two years, it remains a default choice for buyers. It's the IBM decision: no one gets fired for making the obvious choice. But if the TV world is changing, isn't advertising following suit?

Initially at least, research suggests that the answer is no. The $56 billion US TV advertising segment is expected to recover, growing by 9.8% during 2010, erasing last year's losses and returning the sector to 2006-2008 levels, according to the Magna Global Advertising Forecast, April 13, 2010.

At the 1st Telco-Hollywood Executive Brainstorm, executives were at pains to point out how TV still dominates viewing consumption patterns and delivers a better reaction to advertisements, and is therefore seen as more persuasive than radio, print or online. According to the TVB/Nielsen Media Research Custom Survey 2008, cited by one of the speakers, 70% of adults believe that TV adverts are more persuasive than adverts in other media. It would seem that the TV/advertising love affair is set fair. However, we believe that the building trend towards on-demand viewing is challenging this, particularly for free-to-air broadcast services.

If we take online advertising, we can see that it currently accounts for 12% of marketing budgets, versus the 34% of their media time that users spend online. However, tying strategy to a delivery medium rather than a business model is a fundamental mistake. It's not about online versus cable versus broadcast, but live versus on-demand. This is the change that undermines the very foundation of TV: broadcast and scheduling.

Scheduling, search and discovery

Of all the new functions hitting the market, it is on-demand that is making the greatest impact at this stage. At the 9th Executive Brainstorm, we asked the audience whether discovery would become the new search. The results were negative, but it was the wrong question for entertainment, and particularly for TV. The bigger question is whether discovery can become the new scheduling.

Broadband connectivity means that there is the potential for anyone to watch what they want, when they want it and where they want to. In theory, therefore, individuals can do the job of the schedulers of TV networks and channels and personalise it for themselves. This is especially true for the mass of archived material that is the mainstay of a host of cable and DSAT pay-TV channels. And the usage figures seem to back up the view that consumers' desire to control their own viewing is an irreversible trend and not just a short-term fad.

A clear trend has developed around time-shifting whether through VoD, on-demand, or delayed TV; watching when the viewer wants and not when the scheduler says, is proving ever more popular. According to ComScore research on US viewing behaviour, 55% of viewers now watch original programming at a time other than when it’s scheduled. Furthermore, although it is a more pronounced feature in the younger demographic groups, it is also permeating the viewing habits of the over 50s and even over 60s, with 43% of the over 65s watching programmes after they had been aired. In short: everyone is time shifting.

However, a completely unstructured service where consumers search for the content they want relies on them knowing what that is. TV is generally regarded as a 'lean back' experience. It doesn't require a huge amount of concentration, unlike, say, video games, and as such consumers don't want to, and won't, spend protracted periods of time searching for something, especially if they don't know what they are searching for. Therefore the default becomes to stick with what you know. Just as many of us do with music, our viewing tastes could get stuck in time, and we would miss out on new and different content.

We therefore have two contradictory user behaviours driving service requirements for on-demand video. On the one hand, consumers want to choose their viewing, not have it thrust upon them; on the other, they don't want to have to make a huge effort to find it.

The viewing guide is one of the most criticised parts of cable, DSAT and IPTV services. Often slow and clunky, the guides, which list the schedules for each channel, struggle to deliver the large amount of information they hold in a meaningful and useful way. If we then take out the timings, searching becomes even more difficult.
More sophisticated ways of discovering content are needed that encompass search, recommendation and some form of default scheduling, and models for this are appearing.

Recommendation engines such as those used by Amazon or Apple's Genius, which use observed tastes to make suggestions for future purchases, are now well-established practice in online retail. However, there is potential to take this a stage further.
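As a minimal sketch of this 'observed tastes' approach, an item recommender can score titles a viewer has not yet watched by how strongly they co-occur with the viewer's history. The titles and viewing histories below are invented purely for illustration; production systems use far richer signals.

```python
from collections import defaultdict

# Invented viewing histories: viewer -> set of titles watched
histories = {
    "ann": {"Mad Men", "The Wire", "Lost"},
    "bob": {"The Wire", "Lost", "Heroes"},
    "cat": {"Mad Men", "The Wire"},
}

def recommend(user, histories):
    """Score unseen titles by overlap with viewers who share the user's tastes."""
    seen = histories[user]
    scores = defaultdict(int)
    for other, titles in histories.items():
        if other == user:
            continue
        overlap = len(seen & titles)       # shared titles with the other viewer
        for title in titles - seen:        # titles the user has not yet watched
            scores[title] += overlap       # weight by taste similarity
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("cat", histories))  # -> ['Lost', 'Heroes']
```

Extending such a ranker with social signals (the 'Stephen Fry and his mate Dave' opinions discussed later) is a matter of adding further weighted score terms.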

For example, the online radio service Last.fm offers playlists based on popularity of tracks as a default, as well as search and recommendation. This is interesting primarily because it is having the effect of increasing the amount and variety of music listened to by consumers. This may seem to be a nice, fluffy feature, but it is important for the discovery and support of new music talent and therefore the continued life of the industry.

Last.fm has flaws. It runs two business models: a free, ad-funded one in the US and UK, and a subscription model in all other territories. This, the company says, is because it doesn't have the sales capabilities necessary to run an effective ad sales operation outside its core territories. Which model will dominate in the long term is still uncertain, but the value of an effective recommendation engine that acts as a scheduler is not.

As one speaker said at the 9th Telco 2.0 Executive Brainstorm in London, the opinions that influence his viewing habits are those of British actor/comedian, Stephen Fry and his mate Dave. No scheduler can do that but an effective mash up of recommendations perhaps through Facebook, Twitter and a purpose built recommendation engine such as Amazon uses, could.

Pay TV services increased choice dramatically over free-to-air alone and split the ad revenue, taking the number of channels from single figures to hundreds. Now imagine what happens when everyone has an individual channel.
The impact is already being felt on advertising revenue as these predictions for the US market from Magna Global reflect.

Table 5: US Ad Revenue According to Platform

PLATFORM         2009 ($m)     Est. 2010 ($m)    Est. 2011 ($m)    % Change (2010-11)
Digital Online   $22,843.7     $24,611.7         $26,792.0         +8.9%
Cable TV         $20,148.8     $21,491.7         $22,477.3         +4.6%
Broadcast TV     $27,789.3     $29,047.9         $27,384.0         -5.7%

Source: Magna Global data presented at 1st Hollywood-Telco Executive Brainstorm, May 2010
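The final column of the table can be reproduced from the two estimate columns; a few lines of Python confirm that the percentage figures are the year-on-year change from the 2010 to the 2011 estimates:

```python
# Magna Global estimates from the table above, in $ millions: (2010, 2011)
platforms = {
    "Digital Online": (24611.7, 26792.0),
    "Cable TV": (21491.7, 22477.3),
    "Broadcast TV": (29047.9, 27384.0),
}

for name, (est_2010, est_2011) in platforms.items():
    change = (est_2011 - est_2010) / est_2010 * 100
    print(f"{name}: {change:+.1f}%")
# Digital Online: +8.9%, Cable TV: +4.6%, Broadcast TV: -5.7%
```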

It does, however, mean that advertisers can target far more effectively and finally put to bed John Wanamaker's famous adage: "I know 50% of my advertising is wasted, I just don't know which half."
However, there are problems with this vision as well.

On the revenue side, it requires new ways to measure viewing of shows and ads, while in terms of the user experience, it is essential to find an effective way for users to find what they want and discover what they will like but don’t yet know about.

Basically, this is a data-crunching business. It requires personal data, usage information, content metadata, device preferences and more to be combined to create valuable information. Telcos, with their knowledge of the customer, device and environment, hold high-value data here. Even more valuable is the willingness of telcos to share this data with content owners, unlike many of the alternatives.
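At its simplest, the data crunching described above is a join across per-customer datasets to produce a single profile that a content owner could act on. The datasets, field names and records below are all invented for illustration:

```python
# Invented per-customer datasets a telco might hold (illustrative only)
usage = {"cust42": {"minutes_streamed": 310, "peak_hour": 21}}
devices = {"cust42": ["set-top box", "smartphone"]}
preferences = {"cust42": {"genres": ["drama", "sport"]}}

def build_profile(cust_id):
    """Combine usage, device and preference data into one targeting profile."""
    return {
        "customer": cust_id,
        "usage": usage.get(cust_id, {}),          # viewing behaviour
        "devices": devices.get(cust_id, []),      # screens available
        "preferences": preferences.get(cust_id, {}),  # declared tastes
    }

profile = build_profile("cust42")
print(profile["devices"])  # ['set-top box', 'smartphone']
```

The value lies less in any one dataset than in the combination, which is precisely what a telco is positioned to assemble.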

Extension to the mobile screen

Beyond time and place shifting, the other major area of competition and development is the ability to deliver content across multiple screens. In many ways it is a development of the same trend: consumers do not want to be restricted in where, when and how they view. The seamless shift from one screen to another would, for example, allow a consumer to move from TV to mobile and then to PC as they commute to work.

These pause-and-pick-up services already exist, of course. Netflix supports this, and it is a central part of the cable industry's TV Everywhere initiative. Meanwhile, the ability to buy once and view on any screen is the functionality at the heart of rights-locker propositions such as Disney's KeyChest or the UltraViolet initiative mentioned earlier.

However, we believe that this alone is only part of the story.

User behaviour suggests that consumers actually like to use more than one screen to interact with content. For example, it's all very well getting detailed stats to accompany a baseball game through a multi-screen view on IPTV, say, but this interferes with the primary viewing function. As ESPN has discovered, a more effective approach is to provide the stats to a mobile device through an app, giving the consumer the ability to choose when and how they look up the information. Add in a chat facility and watching the ball game at home becomes a social activity, just as seeing it live would be.

Getting the right platform for the right content is not a trivial matter. It is often assumed that the more intense the experience, the more it consumes the viewer, the more valuable it is. Furthermore, the value is assumed to increase as the connection gets faster, since this allows greater intensity. This is not necessarily the case.
Some 'lean forward' activities can benefit from greater speeds. For example, many online games improve with faster response times, and these are highly immersive activities. However, not all activities require, or always benefit from, greater immersion. 3D TV has provoked great debate along these lines, as the most intense experience is seen to require greater engagement and brain activity, making it a more complete viewing experience but also a less social one.

Sociability is both a sought-after and a valuable feature, and it is not necessarily driven by highly immersive, high-speed experiences. It's about getting the right device for the experience, related to the user's activity. And that requires information about what screens the user has, and how, where and when they use them. This is the kind of new user data that telcos can collect and use without drawing heavily on complicated legacy BSS and OSS systems.

So, in summary, we have a situation in which online video distribution is a reality and growing at a fearsome pace, but the business models and value chain are far from clear. Indeed, the stress points in existing models for both studios and telcos are more obvious than the revenue opportunities. Both are areas in which telcos are well placed to play an important part.

The next section therefore looks at how the assets and developments of the content and telco industries can be combined for the benefit of both.

Telco and Media Collaboration

The options for telcos are many, and the approach taken depends on the structure of the entertainment industry in their country and their own setup. Among the considerations they must assess are their willingness to collaborate, the assets and skills available to them, and the regulatory environment. As we've already established, for some telcos IPTV is a viable option; for others, mobile as a mainline channel is also worth pursuing, as it is the most ubiquitous option.

In addition, a third downstream option is emerging: for telcos to be access providers to digital rights lockers.
On the upstream side, the opportunities are many, but they are far less defined, and their development seems to be stuck in an endless chicken-and-egg situation in which each side is waiting for the other to define the services required. To help move this discussion on and define some near-term opportunities, we have narrowed down the list of possibilities by examining what media companies are looking for from telcos.

Where to Start

Surprisingly, combatting piracy, which gains so many headlines, is not uppermost in the list of priorities, according to our research. Instead, the issues that dominate the thoughts of media executives are primarily those we have outlined earlier, namely the ability to differentiate by delivering across any screen, enhancing the user experience and improving content discovery, as illustrated in the table below. Underlying that is the desire to redefine and build the value of the upstream side of the entertainment business, i.e. advertising.

Table 6: Importance of addressing issues facing media companies rated 1-5*

*Where 1 is of no importance and 5 is critical           Source: 1st Hollywood-Telco Executive Brainstorm, Santa Monica

However, along with piracy, the area in which telcos and entertainment distributors are most likely to interact is conflict over the quality of the pipe the distributors are receiving.

QoS, QoE and throttling back

Video is all about the viewing experience, so anything that influences that experience is of vital importance to content owners, and to the retailer/distributor where this is a third party such as Hulu, Netflix or LoveFilm. As these media service providers have no ability to control the pipe, they use adaptive-rate video technology to sense the bandwidth available and deliver the quality of video the connection can deal with effectively.

Adaptive-rate technology works, up to a point: it means providers can deliver the best possible video for the bandwidth available at any given time. However, the underlying transmission speed and quality are still beyond the control of the media service provider, so even though better connectivity may be possible, it is not being delivered. Furthermore, within the confines of the thorny net neutrality debate, the throttling of service types, and in some instances of specific services, is happening to the detriment of online video services. For example, in the UK, LoveFilm has examples of customers with 20Mbit/s connections unable to get a satisfactory service even though other video streaming services, including the BBC iPlayer, work perfectly well.
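The adaptive-rate behaviour described above can be sketched as a simple ladder selection: pick the highest-quality rendition whose bitrate fits within the measured throughput. The bitrate ladder and safety margin below are illustrative assumptions, not any provider's actual values:

```python
# Illustrative bitrate ladder in kbit/s, ordered high to low quality;
# real services define their own renditions
LADDER = [(3500, "720p"), (1800, "480p"), (800, "360p"), (300, "240p")]
SAFETY_MARGIN = 0.8  # assumed head-room factor so playback survives jitter

def pick_rendition(measured_kbps):
    """Return the best rendition the measured throughput can sustain."""
    budget = measured_kbps * SAFETY_MARGIN
    for bitrate, label in LADDER:
        if bitrate <= budget:
            return label
    return LADDER[-1][1]  # floor: always deliver something, however degraded

print(pick_rendition(5000))  # fast line -> 720p
print(pick_rendition(1200))  # throttled link -> 360p
```

This also shows why throttling hurts: the player silently steps down the ladder, so the viewer sees degraded video rather than an error the ISP could be blamed for.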

Telcos want a share of the video revenue being generated over their networks, and in throttling (deliberately reducing the speed of connections) they have a stick to beat media service providers with, should they wish, and be allowed, to use it. However, just like DRM, throttling is a negative activity and serves no positive purpose: consumers are just as likely to move ISP if their services don't work as they are to move media service provider. If they want certain content, they will find a way to get it, and don't be surprised if such throttling pushes more consumers to Pirate Bay and its like, where an additional wait to download rather than stream will be tolerated and the content comes free.

So is there a better relationship to be had?

Charging media service providers for QoS/QoE SLAs would be the first choice of telcos, and while our research suggests that telcos believe this to be more of a possibility now than a year ago, our view is that such a service requires consistent failures in the market to prove its value before it could become a capability media service providers will pay for en masse. Therefore, if telcos can't charge upstream players for a guaranteed pipe, at least in the short term, they need to look at what other telco assets can offer media companies.

In our analyst note, 'How to Out-Apple Apple', we identified a series of telco assets that could be valuable to media companies that sell, or intend to sell, their content directly to customers through online outlets. These assets, and how they stack up against competitors, are summarised in the table below.

Table 7: Telcos Offer Unrivalled Asset Combination

                     Payments                     Content Delivery            User Experience (3-screen)  Interactive Marketing/CRM  Customer Care
Apple                Yes                          Yes                         No                          No*                        No
Amazon               Yes                          Yes                         No                          No*                        No
Netflix              Yes                          Yes                         No                          No                         No
Cable Cos/Satellite  Yes                          Yes                         Partially                   Partially                  Yes
Other enablers       Banks, credit cards, PayPal  e.g. Akamai, L3, Limelight  –                           Marketing/ad agencies      Outsourcers
Telcos               Yes                          Yes                         Yes (converged telcos)      Yes                        Yes

*Apple and Amazon have interactive marketing and CRM functions but do not pass data on to content owners.
Source: Telco 2.0 Initiative

At our first Telco-Hollywood Executive Brainstorm, the value of these telco assets and other capabilities was discussed and rated. All were recognised as offering possible value to studios in the development of their own services. As the graph below shows, it was the functions nearest to the consumer that rated the highest.

Table 8: Telco Capabilities Rated (1-5*) According to Their Perceived Value to Content Owners

*Where 1 is not valuable at all and 5 the most valuable
Source: 1st Hollywood-Telco Executive Brainstorm, Santa Monica

Again, the value of owning the network infrastructure is recognised, as hosting content locally and ensuring its effective delivery are natural roles for telcos to take on. Usage and access distribution is the next highest-rated capability, and the expectation from studios and media companies is that telcos should enter the market as media service providers in some form or another; the downstream market should not be ignored. That said, those assets that are a stage further removed from consumers were also rated as valuable.

Identification and authentication, payments, decision support and data mining (listed under the heading of Interactive Marketing /CRM in table 7) and content protection are all rated as useful. However, there is a caveat here in that what media companies want is complete solutions not raw data or APIs that they then have to build services around.

Over the next 12 months, telcos need to develop complete solutions that can meet the needs of media companies. Simply saying that they have the assets and capabilities is not enough. Moreover, the speed with which the market is changing means that solutions need to be developed fast, as the window of opportunity for media companies to gain a more powerful position in the ecosystem will be relatively short. New and powerful players are entering the market and putting pressure on established price paradigms. Once changed, these are difficult, if not impossible, to change back until serious failures appear in the market. The next one to two years are therefore vital.

From a telco point of view, this makes playing upstream difficult if they have not already begun to develop solutions that could be packaged for media companies. Downstream opportunities are in some ways less time sensitive but the current market flux offers up opportunities to establish a position that will be harder to reach when it is more stable.

Strategic choices

Telcos therefore have a series of strategic decisions to make about how and where they play in the entertainment market. We have developed a structure of generic strategy choices based on the willingness and ability of telcos to move on and off their own network, and on whether they intend to offer an end-to-end solution to consumers or play a specific and limited role in the ecosystem. The overall strategies these choices create are illustrated in Figure 3 below.

Figure 3: Generic Two-Sided Business Model Strategies

Source: Telco 2.0 Initiative

Taking this a stage further, we can map this general theory onto the specific choices facing telcos in the entertainment market to create a framework with four different approaches to telco involvement in the entertainment ecosystem. As the market is developing, on-net activities are providing the greatest opportunities, as illustrated in figure 4 below.

Figure 4: Entertainment-specific Business Model Strategy Choices

Source: Telco 2.0 Initiative

Entertainment is an increasingly complicated market, with collapsing value chains, new entrants and new technologies that are allowing established players to compete in different ways. Having a clear idea of where telcos fit into this dynamic market structure is important, and there are roles for telcos both to support media companies in their attempts to go directly to consumers and to go directly to consumers themselves. A second phase, supporting multiple third-party platforms and media service providers, may emerge, but this is a stage further removed, as these currently compete with what telcos and media companies are trying to do themselves.

Taking a closer look at the bottom-left quadrant, enabling media companies to develop a direct sales channel, it is possible to identify a range of ways to do this. There is firstly a range of activities and assets that can be drawn upon, as we have already outlined above, and secondly a range of ways in which to utilise those assets.

Using Telco 2.0’s established gold analogy, we can see that telcos have the opportunity to be more or less involved: from offering what they have as raw data to media companies and letting them do as they wish with it, at one end of the scale, right through to offering a complete end-2-end service at the other.
Examples of the types of entertainment-orientated services that telcos can offer at each stage are illustrated below.

Figure 5: Possible Telco Roles in Entertainment Industry

Telco 2.0 research suggests media companies put the greatest value on telco assets that are packaged for them as services. As illustrated above, this means a managed one-click payment capability, not an API on top of which they have to build their own payment service. What is more, they want things that have been proven to work, either for other industries or for the telcos themselves. Eating your own dog food is not just a sound bite to be trotted out at conferences; it's an essential tactic if telcos are going to gain credibility as upstream suppliers to the entertainment industry.

Fortunately, this is something the telecoms industry is recognising, as a vote at the recent Telco 2.0 Executive Brainstorm in London demonstrated: delegates agreed that to take full advantage of the customer data they hold, telcos must first find ways to use it effectively themselves. During the session focused on the use and monetisation of customer data, the 180-strong group of executives was asked to rank, from 1 to 4, the importance of different strategies for beginning to use customer data in the short term (the next 12 months); using it for telcos' own purposes ranked the highest.

Telco’s two-sided business model for entertainment

There is little doubt that telcos have a strong strategic interest in the entertainment industry, both for its growth opportunity and for its impact on their existing businesses. However, they are entering a market that is itself in flux, and the opportunities are neither well defined nor static. The market is constantly changing, and so are the possible roles and activities open to telcos. Under such circumstances, a two-sided approach to the business makes even more sense, giving more choice and more opportunity to build revenue, as illustrated below.

Figure 6: Telco two-sided business model for entertainment


Source: Telco 2.0 Initiative

However, the extent to which a specific telco gets involved in the upstream and downstream opportunities will depend on specific market conditions, together with the skills and assets of the telco. These must also be mapped onto the needs of media companies, as telcos must be aware of what the market is looking for.

Over the coming months we will look at each of the downstream models in detail, examining the conditions that make each play viable, together with the tactics that make each downstream strategy effective.

In addition to these we will develop use cases and identify case studies that help define realistic opportunities for upstream services.


  1. The transmission of entertainment online represents a significant growth market for at least the next decade. The addressable market is in the region of $700bn a year and growing, although it will take more than a decade, if it happens at all, to turn all entertainment into digital/online forms
  2. Content owners are being challenged by the changing patterns of distribution, retail and viewing and are looking to secure their place in new value chains that at least protect, if not increase their existing revenue streams
  3. Although the risks to content owners vary across genres, a major set of opportunities centres on the ability of content owners to go directly to consumers, which requires a skill set outside the core competencies of media companies. Telcos have many, if not all, of these capabilities
  4. A second set of opportunities exists for media companies to work together to build ecosystems with their content at the heart that compete favourably with the offerings of new Internet players. UltraViolet, which could compete with Apple iTunes, is an example. These approaches create new and better downstream opportunities for telcos, as they lower the barriers for telcos to enter the entertainment delivery business and allow them to build the business incrementally
  5. Complete end-2-end services in the form of IPTV require significant scale and huge investment. Their success depends on achieving both, and they are heavily affected by the maturity of competing solutions such as cable and DSAT. Therefore, while prospects look good for telcos with large market shares operating in markets with weak pay TV competition, the majority of telcos will find differentiation through IPTV difficult, and its deployment is therefore best seen as a defensive tactic against telephony and broadband service provision by TV service providers
  6. Other downstream plays are possible for telcos, including:

    • Maximising the mobile opportunity by using it to enhance experiences on other screens and not just as another delivery channel. Telcos and content owners need to collaborate to develop value-added services that consumers will pay for
    • Operating as effective storefronts for entertainment content. Telcos have the customer scale, billing, customer information and CRM capabilities that make them ideal retailers but to be effective, telcos must offer content owners more than existing online retailers, such as Apple and Amazon. Of particular importance here is the willingness and ability of telcos to pass data about customer activity back to content owners
    • The storefront proposition is strengthened further by developments such as UltraViolet which would enable telcos to build propositions while taking out the risk for both content owners and telcos
  7. Telcos have a range of core assets that could enhance the ability of content owners to move up the value chain and go directly to consumers and therefore gain a greater share of the entertainment revenue pie
  8. No other player has the range of enabling capabilities telcos have. However, the Telco USP is in combining these enablers, and telcos must look at packaging them together as plug-and-play services for content owners. These need to apply across content categories (and possibly extend to other vertical markets) so that telcos can turn a niche vertical service into a broader and more valuable opportunity
  9. There is a finite window of opportunity for content owners and telcos to establish places in the new content ecosystems that are developing before major Internet players – Apple, Google – and new players use their skills and market positions to dominate those ecosystems. Speedy collaborative action between telcos and studios is required

Full Article: Devices 2.0: ‘Beyond Smartphones’ – Innovation Strategies for Operators

Summary: Managing the role of new device categories in new and existing fixed and mobile business models is a key strategic challenge for operators. This report includes analysis of the practicalities and challenges of creating customised devices, best / worst practice, inserting ‘control points’ in open products, the role of ‘ODMs’, and reviews leading alternative approaches.

NB A PDF Version of this 45 page report can be downloaded here.


As part of its recently-published report on Mobile and Fixed Broadband Business Models, Telco 2.0 highlighted four potential strategic scenarios, one of which was for operators to become “device specialists” as a deliberate element of strategy, in either the wireline or wireless domain. This theme was also covered at the April 2010 Telco 2.0 Brainstorm event in London.

Clearly, recent years have displayed accelerating innovation in numerous “end-point” domains – from smartphones, through to machine-to-machine systems and a broad array of new consumer electronics products. Yet there has been only limited effort made in mapping this diversity onto the broader implications for operators and their business prospects. 

Moving on from legacy views

An important aspect of device specialisation for telcos is one of attitude and philosophy. In the past, the power of the network has had primacy – large switching centres were at the heart of the business model, driving telephones – in some cases even supplying them with electrical power via the copper lines as well. Former government monopolies and powerful regulators have further enshrined the doctrines of central control in telecom executives’ minds.

Yet, as has been seen for many years in the computing industry, centralised systems give way to power at the edge of the network, increasingly assisted by a “cloud” of computing resource which is largely independent of the “wiring” needed to connect it. The IT industry has long grasped the significance of client/server technology and, more recently, the power of the web and distributed computing, linked to capable and flexible PCs.

But in the telecom industry, some network-side traditionalists still refer to “terminals” as if Moore’s Law has no relevance to their businesses’ success. But the more progressive (or scared) are realising that the concentration of power “at the telecom edge”, coupled with new device-centred ecosystems (think iPhone + iTunes + AppStore), is changing the dynamics of the industry to one ruled by a perspective starting from the user’s hand back inwards to the core.

With the arrival of many more classes of “connected device” – from e-readers, to smart meters or in-vehicle systems – the role of the device becomes ever more pivotal in determining both the structure of supporting business models and the role of telcos in the value chain. It also has many implications for vendors.

The simplest approach is for operators to source and give away attractive devices in order to differentiate and gain new, or retain existing customers – especially in commoditised access segments like ADSL. At the other end of the spectrum, telcos could pursue a much deeper level of integration with new services to drive new fixed or mobile revenue streams – or create completely unique end-to-end propositions to rival those of 3rd-party device players like Apple, Sony or TiVo.

This Executive Brief examines the device landscape from an operator’s or network vendor’s standpoint. It looks at whether service providers should immerse themselves in creating on-device software and unique user experiences – or even commission the manufacture of custom hardware products or silicon. Alternatively, it considers the potential to “outsource” device smarts to friendlier suppliers like RIM or Thomson/Technicolor, which generally have operators’ success at the centre of their strategies. The alternative may be to surrender yet more value to the likes of Apple, Sony or Sling Media, allowing independent Internet or local services to be monetised without an “angle” for telco services.

Structure of this report

The narrative of this document follows this structure:

  • Introduction
  • The four broadband scenarios for operators, and an introduction to the “device specialist”
  • Developing an initial mechanism for mapping the universe of devices onto operator business models, which generally fit with four modes of communication
  • Considering why devices are such a potential threat if not tackled head-on
  • Presenting case studies of previous telco experience of device focus, from a stance of best/worst practice
  • Examining enhancements to existing business models via device focus
  • Analysing examples of new business models enabled by devices
  • Considering the practicalities of device creation and customisation
  • Suggesting a mechanism for studying risk/reward in telcos’ device strategies
  • Recommendations and conclusions

A recap: 4 end-game scenarios

Broadband as the driver

Given the broad diversity of national markets in terms of economic development, regulation, competition and technology adoption, it is difficult to create simple categories for the network operators of the future. Clearly, there is a big distance between an open-access, city-owned local fibre deployment in Europe, a start-up WiMAX provider in Africa, and a cable provider in North America.

Nevertheless, it is worth attempting to set out a few ‘end-game’ scenarios, at least for broadband providers in developed markets for which the ‘end’ might at least be in sight. This is an important consideration, as it sets parameters for what different types of telco and network owner can reasonably expect to do in the realm of device innovation and control.

The four approaches we have explored are:

  1. Telco 2.0 Broadband Player. This is the ultimate manifestation of the centralised Telco model, able to gain synergies from vertical integration as well as able to monetise various partner relationships and ecosystems. It involves some combination of:
    • Enhanced retail model providing well-structured connectivity offerings (e.g. tiered, capped, and other forms of granular pricing), as well as an assortment of customer-facing, value-added services. This may well have a device dimension. We also sometimes call this “Telco 1.0+” – improving current ways of doing business, especially through better up-selling, bundling and price discrimination.
    • Improved variants of ‘bulk wholesale’, providing a rich set of options for virtual operators or other types of service provider (e.g. electricity smart grid)
    • New revenue opportunities from granular or ‘slice and dice’ wholesale, based on two-sided business models for access capacity. This could involve prioritised bandwidth for content providers or mobile network offload, various ‘third-party paid’ data propositions, capabilities to embed broadband ‘behind the scenes’ in new types of device and so on.
    • A diverse set of ‘network capability’ or ‘platform’ value-add services for wholesale and upstream customers, such as authentication and billing APIs, and aggregated customer intelligence for advertisers. Again, there may be a device “angle” here – for example the provision of device-management capabilities to third parties.
    • A provider of open Internet services, consumed on other operators’ networks or devices, via normal Internet connectivity, essentially making the telco a so-called ‘over the top’ Internet application provider itself. This requires a measure of device expertise, in terms of application development and user-experience design.
  2. The Happy Piper. The broadband industry often likes to beat itself up with the threat of becoming a ‘dumb pipe’, threatened by service-layer intelligence and value being abstracted by ‘over the top players’. Telco 2.0 believes that this over-simplifies a complex situation, polarising opinion by using unnecessarily emotive terms. There is nothing wrong with being a pipe provider, as many utility companies and satellite operators know to their considerable profit. There are likely to be various sub-types of Telco that believe they can thrive without hugely complex platforms and multiple retail and wholesale offers, either running “wholesale-only” networks, participating in some form of shared or consortium-based approach, or offering “smart pipe services”.
  3. Government Department. There is an increasing trend towards government intervention in broadband and telecoms. In particular, state-guided, fully-open wholesale broadband is becoming a major theme, especially in the case of fibre deployments. Stimulus funds also play a part, as does the public sector itself in driving demand for ‘pipes’. Some telcos are likely to undergo structural separation of network from service assets, or become sub-contract partners for major national infrastructure projects, such as electricity smart grids or tele-health.
  4. Device specialist, as covered in the rest of this report. This is where the operator puts its device skills at the core of its strategy – in particular, where the end-points become perhaps the most important functional component of the overall service platform. Most of the evolution of the telco’s service / proposition (and/or cost structure) would not work with generic “vanilla” devices – some form of customisation and control is essential. An analogy here is Apple – its iTunes and AppStore ecosystems and business models would not work with generic handsets. Conversely, Google is much less dependent on Android-powered handsets – it is able to benefit from advertising consumed on any type of device with a browser or its own software clients. 

There are also a few other categories of service provider that could be considered but which are outside the scope of this report. Most obvious is ‘Marginalised and unprofitable’, which clearly is not so much a business model as a route towards acquisition or withdrawal. The other obvious group is ‘Greenfield telco in emerging market’, which is likely to focus on basic retail connectivity offers, although perhaps with some innovative pricing and bundling approaches. (A full analysis of all these scenarios is available in Telco 2.0’s new strategy report on Fixed and Mobile Broadband Business Models).

It should be stressed that these options apply to operators’ broadband access in particular. Taking a wider view of their overall businesses, it is probable that different portfolio areas will reflect these (and other) approaches in various respects. In particular, many Telco 2.0 platform plays will often dovetail with specific device ecosystems – for example, where operators deploy their own mobile AppStores for widgets or Android applications.


Figure 1: Potential end-game scenarios for BSPs

Source: Telco 2.0 Initiative

Introducing the device specialist

In many ways, recent trends around telecoms services and especially mobile broadband have been driven as much by end-user device evolution as by network technology, tariffing or operation. Whilst it may be uncomfortable reading for telcos and their equipment vendors, value is moving beyond their immediate grasp. In future, operators will need to accept this – and if appropriate, develop strategies for regaining some measure of influence in that domain.

Smartphones have been around for years, but it has been Apple that has really kick-started the market as a distinct category for active use of broadband access, aided by certain operators which managed to strike exclusive deals to supply it. PCs have clearly driven the broadband market’s growth – but at the expense of a default assumption of “pipe” services. Huawei’s introduction of cheap and simple USB modems helped establish the market for consumer-grade mobile broadband, with well over 50 million “dongles” now shipped. Set-top boxes, ADSL gateways and now femtocells are further helping to redefine fixed broadband propositions, for those broadband providers willing to go beyond basic modems.

Going forward, new classes of device for mobile, nomadic and fixed use promise a mix of new revenue streams – and, potentially, more control over operator business models. In 2010, the advent of the Apple iPad has preceded a stream of “me-too” tablets, with an expectation of strong operator involvement in many of them.

However, not all telcos, either fixed or mobile, can be classified as device specialists. There is a definite art to using hardware or client software as a basis for new and profitable services, with differentiated propositions, new revenue streams and improved user loyalty. There are also complexities with running device management systems, pre-loading software, organising physical sales and supply chains, managing support issues and so on.

Operators can either define and source their own specific device requirements, or sometimes benefit from exclusivity or far-sightedness in recognising attractive products from independent vendors. Various operators’ iPhone exclusives are probably the easiest to highlight, but it is also important to recognise the skills of companies, such as NTT DOCOMO, which defines most of the software stack for its handsets, licensing it out to the device manufacturers.
In the fixed domain, some operators are able to leverage relationships with PC vendors, and in future it seems probable that new categories like smart meters and home entertainment solutions will provide additional opportunities for device-led partnerships.

Consequently, it is fair to say that device specialism can involve a number of different activities for operators:
  • A particularly strong focus on device selection, testing, promotion and support.
  • Development of own-brand devices, either produced bespoke in collaboration with ODMs (detailed later in this document), or through relatively superficial customisation of existing devices.
  • Negotiation of periods of device exclusivity in a given market (e.g. AT&T / iPhone).
  • Definition of the operator’s own in-house OS or device hardware platform, such as the strategies employed by NTT DoCoMo (with its Symbian / Linux variants) or KDDI (modified Qualcomm BREW) in Japan.
  • Provision of detailed specifications and requirements for other vendors’ devices, for example through Orange’s lengthy “Signature” device profiles.
  • Development of the operator’s own UI, applications and services – such as Vodafone’s 360 interface or its previous Live suite.
  • Deployment of device-aware network elements which can optimise end-to-end performance (or manage traffic) differentially by device type or brand.
  • The ability to embed and use “control points” in devices to enable particular business models or usage modes. Clearly, the SIM card is a controller, but it may also be desirable to have more fine-grained mechanisms for policy at an OS level as well. For example, some handset software platforms are designed to allow operators to licence and even “revoke” particular applications, while another emerging group are focused on handset apps used to track data usage and sell upgrades.
  • Development of end-to-end integrated services with devices as core element (similar to Apple or RIM). Much of the value around smartphones has been driven by the link of device-side intelligence to some form of “cloud” feature – RIM’s connection to Microsoft Exchange servers, or Apple iPhone + AppStore / iTunes, for example. Clearly, operators are hoping to emulate this type of distributed device/server symbiosis – perhaps through their own app stores.
  • Lastly, operators may be able to exercise influence on device availability through the enablement of a “device ecosystem” around its services & network. In this case, the telco provides certain platform capabilities, along with testing and certification resources. This enables it to benefit from exclusive devices created by partners, rather than in-house. Verizon’s attempt with its M2M-oriented “Open Device Initiative” is a good example.


Clearly, few operators will be in a position to pursue all of these options. However, in Telco 2.0’s view, there remains significant clear water between those which put device-related activities front and centre in their efforts – and those which are more driven by events and end-point evolution from afar.

New business models vs. old

Despite the broad set of options outlined in the previous section, it is important to recognise that operators’ device initiatives can be grouped into two broad categories:

  • Improving existing business models, for example through improving subscriber acquisition, reducing opex, or inducing an uplift in revenues on a like-for-like basis over older or more generic devices.
  • Enabling new business models, for example by selling devices linked to new end-to-end services, enabling the sale of incremental end-user subscriptions, or better facilitating certain new Telco 2.0-style two-sided opportunities (e.g. advertising).



Although much of the publicity and industry “noise” focuses on the strategic implications of the latter, it is arguably the former, more mundane aspects of device expertise that have the potential to make a bottom-line difference in the near term. While Telco 2.0 generally prefers to focus on the creation of new revenues and business model innovation, this is one area of the industry where it is also important to consider the inertia of existing services and propositions, and the opportunities to reduce opex by optimising the way that devices work with networks. A good example of this is the efficiency and network-friendliness of RIM’s BlackBerry in comparison with Apple’s iPhone, in both data compression technologies and use of signalling.

That said, the initial impetus for deploying the iPhone was mostly around customer acquisition and upselling higher-ARPU plans – but the unexpected success of apps quickly distracted some telcos away from the basics, and more towards their preferred and familiar territory of centralised control.

What are the risks without device focus?

Although many operators bemoan the risks of becoming a “dumb pipe”, few seem to have focused on exactly what is generating that risk. While the “power of the web” and the seeming acceptability of “best effort” communications get cited, it is rare that the finger of blame has pointed directly at the device space.

Over many decades, telecoms engineers and managers have grown up with the idea that devices are properly called “terminals”. Evocative of the 1960s or 1970s, when the most visible computers were “dumb” end-points attached to mainframes, this reflects the historic use of analogue, basic machines like fixed telephones, answering machines or primitive data devices.

Nevertheless, some people in the telecoms industry still stick with this anachronistic phrasing, despite the last twenty or thirty years of ever-smarter devices. The refusal to admit the importance of “the edge” is characteristic of those within telcos and their suppliers that don’t “get” devices, instead remaining convinced that it is possible to control an entire ecosystem from the core outwards.

This flat-earth philosophy is never better articulated than in the continuing mantra of fear about becoming “dumb pipes”. It is pretty clear that there are indeed many alternatives for creating “smart pipes”, but those that succeed tend to be aware that, often, the end-points in customers’ hands or living rooms will be smarter still.

In our view, one of the most important drivers of change – if not the most important – is the fast-improving power of devices to become more flexible, open and intelligent. They are increasingly able to “game” the network in a way that older, closed devices were not. Where necessary, they can work around networks rather than simply through them. And, unlike the “dumb” end-points of the past such as basic phones and fax machines, there is considerable value in many products when they are used “offline”.

The markets tend to agree as well – the capitalisation of Apple alone is now over $200bn, with other major device or component suppliers (Nokia, Qualcomm, Microsoft, Intel, RIM) also disproportionately large.

“Openness” is a double-edged sword. While having a basic platform enables operators to customise and tinker to meet their own requirements, that same level of openness is also available to anyone else who wishes to compete. Some operators have managed the delicate balancing act of retaining the benefits of openness for themselves, but closing it down for end-users to access directly – DoCoMo’s use of Symbian and Linux in the “guts” of its phones is probably the best example.

Openness is also being made even easier to exploit through the continued evolution of the web browser. At the moment, it takes considerable programming skill to harness the power of an iPhone or a Nokia Symbian device – or, especially, a less-accessible device like an Internet TV. As it becomes more and more possible to run services and applications inside the browser, the barriers to entry for competing service providers are lowered still further. Even Ericsson, typically one of the most traditional telephony vendors, has experimented with browser-based VoIP. That said, there are some approaches to the web, such as the OMTP BONDI project, which might yet provide telcos with control points over browser capabilities, for example in terms of permitting/denying their access to underlying device features, such as making phone calls or accessing the phonebook.

Compute power: the elephant in the room

There is clear evidence that “intelligence” moves towards the edge of networks, especially when it can be coordinated via the Internet or private IP data connections. This has already been widely seen in the wired domain, with PCs and servers connected through office LANs and home fixed broadband, and is now becoming evident in mobile. There are now several hundred million iPhones, BlackBerries and other smartphones in active data-centric use, as well as over 50m 3G-connected notebooks and netbooks. Home gateways and other devices such as femtocells, gaming consoles and Internet TVs are further examples, with billions more smart edge-points on the horizon with M2M and RFID initiatives.

This is a consequence of scale economies and also Moore’s Law, reflecting processors getting faster and cheaper. This applies not just to the normal “computing” chips used for applications, but also to the semiconductors used for the communications parts of devices. Newer telecom technologies like LTE, WiMAX and VDSL are themselves heavily dependent on advanced signal processing techniques, to squeeze more bits into the available network channels.
Ericsson’s talk of 50 billion connected devices by the end of the decade seems plausible, although Amdocs’ sound-bite of 7 trillion by 2017 seems to have acquired a couple of rogue zeroes. That said, even at the smaller figure, not all will be fully “smart”.

Unsurprisingly, we therefore see a continued focus on this “edge intelligence” as a key battleground – who controls and harnesses that power? Is it device suppliers, telcos, end users, or 3rd-party application providers (so-called “over-the-top players”)? Does it complement “services” in the network? Or drive the need for new ones? Could it, perhaps, make them obsolete entirely?

So what remains unclear is how operators might adopt a device strategy that complements their network capabilities, to strengthen their position within the digital value chain and foster two-sided business models. It is important for operators to be realistic about how much of the “edge” they can realistically control, and under what circumstances. Given that price points of devices are plummeting, few customers will voluntarily choose “locked” or operator-restricted devices if similarly-capable but more flexible alternatives cost much the same. Some devices will always be open – in particular PCs. Others will be more closed, but under the control of their manufacturers rather than the telcos – the iPhone being the prime example.

It is therefore hugely important for operators to look at devices as a way of packaging that intelligence into new, specific and valuable business models and propositions – ideally, ones which are hard to replicate through alternative methods. This might imply design and development of completely exclusive devices, or making existing categories more usable. At the margins, there is also the perennial option for subsidy or financing – although that clearly puts even more pressure on the ongoing business model to have a clear profit stream.

There are so many inter-dependent factors here that it is difficult to examine the whole problem space methodically. How do developments like Android and device management help? Should the focus be on dedicated devices, or continued attempts to control the design, OS or browser of multi-purpose products? What aspects of the process of device creation and supply should be outsourced?

Where’s the horsepower?

The telcos are already very familiar with the impact of traditional PCs on their business models – they are huge consumers of data download and upload, but almost impossible to monetise for extra services, as they are bought separately and are generally seen more as endpoints for standalone applications rather than services. The specific issue of the PC (connected via fixed or mobile broadband) is covered separately, but the bottom line is that it is a case study in the ultimate power of open computing and networks. PCs have also been embedded in other “vertical market” end-points such as retail EPOS machines, bank ATMs and various in-vehicle systems.

The problem is now shifting to a much broader consumer environment, as PC-like computing capability shifts to other device categories, most notably smartphones, but also a whole array of other products in the home or pocket.
It is worth considering an illustration of the shifting power of the “edge”, as it applies to mobile phones.

If we go back five or six years, the average mobile phone had a single main processor “core” in its chipset, probably an ARM7, clocking perhaps 30MHz. Much of this was used for the underlying radio (the “modem”) and telephony functions, with a little “left over” for some very basic applications and UI tools, like Java games.

Today, many of the higher-end handsets have separate applications processors as well as the modem chip. The apps processor is used for the high-level OS and related capabilities, and is the cornerstone of the change being observed. An iPhone has a 600MHz+ chip, and various suppliers of Android phones are using a 1GHz Qualcomm Snapdragon chip. Even midrange featurephones can have 200MHz+ to play with, most of which is actually usable for “cool stuff” rather than the radio.

This is where the danger lies for the telcos, as like PCs, it can shift the bias of the device away from consuming billable services and towards running software. (The situation is actually a bit more complex than just the apps processor, as phones can also have various other chips for signal processing, which can be usable in some circumstances for aspects of general computing. The net effect is the same though – massively more computational power, coupled with more sophisticated and open software).

Now, let’s project forward another five years. The average device (in developed markets at least) will have at least 500MHz, with top-end devices at 2GHz+, especially if they are not phones but tablets, netbooks or similar products. Set-top boxes, screenphones, game consoles and other CPE devices are growing smarter in parallel – especially when equipped with browsers, which can then act as general-purpose (distributed) computing environments. A new class of low-end devices is emerging as well. How and where operators might be able to control web applications is considered below, as it is somewhat different to the “native applications” seen on smartphones.

For the sake of argument, let’s take an average of 500MHz chips, and multiply by (say) 8 billion endpoints. That’s 4 Exahertz (EHz, 10^18 Hz) of application-capable computing power in people’s hands or home networks, without even considering ordinary PCs and “smart TVs”. And much – probably most – of that power will be uncontrolled by the operators, instead being the playground of user- or vendor-installed applications.

Even smart pipes are dumb in comparison

It is tricky to calculate an equivalent figure for “the network”, but consider an approximation of 10 million network nodes (datapoint: there are 3 million cell sites worldwide), at a generous 5GHz each. That gives 50 Petahertz (PHz, 10^15 Hz) of computing power in the carrier cloud – and that already includes the assumption that most operators will have thousands of servers in their back-office systems as well as the production network itself.

In other words, the telcos, collectively, have maybe an 80th of the collective compute power of the edge. It is quite possibly much lower than that, but the calculation is intended as an upper bound.
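The back-of-envelope comparison above can be checked in a few lines. A minimal sketch, using the report’s own rough assumptions (8 billion application-capable endpoints at an average 500MHz, 10 million network nodes at a generous 5GHz) – these are illustrative inputs, not measurements:

```python
# Back-of-envelope check of the edge vs. network compute comparison.
# All inputs are the report's rough assumptions, not measured data.

EDGE_DEVICES = 8e9        # assumed application-capable endpoints
EDGE_CLOCK_HZ = 500e6     # assumed average per-device clock (500 MHz)
NETWORK_NODES = 10e6      # assumed carrier network nodes
NETWORK_CLOCK_HZ = 5e9    # a generous 5 GHz per node

edge_total = EDGE_DEVICES * EDGE_CLOCK_HZ          # 4e18 Hz = 4 EHz
network_total = NETWORK_NODES * NETWORK_CLOCK_HZ   # 5e16 Hz = 50 PHz

print(f"Edge:    {edge_total:.0e} Hz")                 # 4e+18
print(f"Network: {network_total:.0e} Hz")              # 5e+16
print(f"Ratio:   {edge_total / network_total:.0f}:1")  # 80:1
```

The point of the exercise is robustness to the assumptions rather than precision: even halving the average edge clock still leaves a 40:1 gap, which is why the precise numbers matter less than the order of magnitude.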

Now clearly, this is not quite as bad a deficit as that makes it sound – the network can obviously leverage intelligence in a few big control points in the core such as GGSNs and DPI boxes, as traffic funnels through them. It can exert control and policy over data flows, as well as what is done at the endpoints.

But at the other end of the pipe is the Internet, with Google’s, Amazon’s and countless other companies’ servers and “cloud computing” infrastructures. Trying to calculate the aggregate computing power of the web isn’t easy either, but it is also likely to be in the Exahertz range. Google is thought to have around one million servers on its own, for example, while the overall server population of the planet (including both Internet and enterprise) is thought to be of the order of 50 million, many of which have multiple processor cores.



Whatever else happens, it seems the pipe will inevitably become relatively “dumber” than the devices at the edge, irrespective of smart Telco 2.0 platforms and 4G/NGN networks. The question is how much of that edge intelligence can be “owned” by the operators themselves.

Controlling device software vs. hardware

The answer is for telcos to attempt to take control of more of this enormous “edge intelligence”, and exploit it for their own benefit and in-house services or two-sided strategies.

There are three main strategies for operators wanting to exert influence on edge devices:

  • Provide dedicated, fully-controlled and customised hardware and software end-points which are “locked down” – such as cable set-top boxes, or operator-developed phones in Japan. This is essentially an evolution of the old approach of providing “terminals” that exist solely to act as access points for network-based services. The concept is being reinvented with new telco-developed consumer electronics such as digital picture frames, but it is a struggle to apply to multi-function devices like PCs and smartphones.
  • Provide separate hardware products that sit “at the edge” between the user’s own smart device and the network, such as cable modems, femtocells, or 3G modems for PCs. These can act as hosts for certain new services, and may also exert policy and QoS control on the connection. Arguably the SIM card fits into this category as well.
  • Develop control points, in hardware or software, that live inside otherwise notionally “open” devices. This includes SIM-locks, telco-customised UI and OS layers, “policy-capable” connection manager software for notebooks, application and widget certification for smartphones, or secured APIs for handset browsers. Normally, the operator will need to be the original device supplier/retailer for these capabilities to be enabled before sale – few users will be happy for their own device to be configured after purchase with extra controls from their service provider.

Case studies and best / worst practice

Going back 30 years, before telecoms deregulation, many telcos were originally device specialists. In many cases, the incumbent government monopolies were also the only source of supply of telephones and various other communications products (“CPE” – customer premises equipment) – often renting them to users rather than selling them. Since then of course, much has changed. Not only have customers been able to buy standards-compliant, certified terminals on the open market, but the rise of personal computing and mobile communications has vastly expanded the range and capability of end-points available.

But while few telcos could benefit today from owning physical manufacturing plants, there is an increasing argument for operators once again to take a stronger role in defining, sourcing and customising end-user hardware in both mobile and fixed domains. As discussed throughout this document, there is a variety of methods that can be adopted, at widely differing levels of depth, focus and investment. Clearly, owning factories is unlikely to be an attractive option – but at the other end of the scale, it is unclear whether merely issuing vague “specifications” or sticking logos on white-labelled goods from China achieves anything meaningful from a business model standpoint.

It is instructive to examine a few case studies of operator involvement in the device marketplace, to better understand where it can add value as a core plank of strategy, rather than simply as a tactical add-on.


Perhaps the best example of a device-centric operator is NTT DoCoMo in Japan. It would perhaps be more accurate to describe the firm as technology-centric, as it defines pretty much its complete end-to-end system in-house, usually as a front-runner for more general 3GPP systems like WCDMA and LTE, but with subtle local modifications. About 10 years ago, it recognised that handset development was going to be a pivotal factor delaying its then-new 3G FOMA services, and committed very significant funds to driving the whole device ecosystem forward in order to accelerate it.

In fact, DoCoMo has a very significant R&D budget in general, which means that it has been able to develop complete end-to-end platforms like i-Mode, spanning both handset software and back-end infrastructure and services. Although it is known for initiatives like these, as well as its participation in Symbian, Android and LiMo ecosystems, its device expertise goes far beyond handset software. For example, its own in-house research journal covers innovative areas of involvement, such as:

  • Improved video display on handsets
  • Development of its own in-vehicle 3G module for telematics applications
  • Measurement of handset antenna efficiency

In some ways, DoCoMo is in a unique position. It did not have to pay for original 3G spectrum and channelled funds into device and infrastructure development instead. It also operates in an affluent and gadget-centric market that has at times been willing to spend $500-600 on mass-market handsets. It has close ties with a number of Japanese vendors, with whom it spends large amounts on infrastructure and joint R&D. And its early pragmatism with web and software developers (in terms of revenue-share) has largely kept the ecosystem “on-side”, compared with other markets in which a mass of disgruntled application providers have eagerly jumped on off-portal and “open” OS platforms, to the detriment of operators.

In its financial year to March 2009, DoCoMo had a total R&D spend of 100 billion Yen (approximately $1bn). While this is split across both basic research and various initiatives around networks and services, it also has a dedicated “device development” centre. It compares to R&D spending by Vodafone Group in the same period of £280m, or about $450m, while mid-size global mobile group Telenor spent just NOK1.1bn ($180m) in calendar year 2008. For comparison, Apple’s current annualised R&D spend is around $1.6bn, and Google’s is $3.2bn – while Nokia’s was over $8bn in 2009, albeit spread across a much larger number of products, as well as its share in NSN. Even smaller device players such as Sony Ericsson spend >$1bn per year.

Although DoCoMo is best known for its handset software involvement – i-Mode, Symbian, LiMo, MOAP and so forth – it also conducts a significant amount of work on more hardcore technology platform development. Between 2005 and 2007, for example, it invested 12.5 billion Yen ($125m) in chipset design for its 3G phones.

It has huge leverage with Japanese handset manufacturers like NEC and Matsushita, as they have limited international reach. This means that DoCoMo is able to enforce adoption of its preferred technology components – such as single integrated chips that it helps design, rather than multiple more expensive processors.

While various operators are now present in handset-OS organisations such as the LiMo Foundation and the Open Handset Alliance (Android), DoCoMo’s profile in device software has been considerably greater in the past. It is a founder member of Symbian, and drove development of one of the original three Symbian user interfaces (the other two being Nokia’s S60 and the now-defunct UIQ). DoCoMo now makes royalty revenues, in some instances, from use of its handset software by manufacturers. It also owns a sizeable stake in browser vendor Access, and has invested in other handset software suppliers like Aplix.

Verizon Open Device Initiative

From the discussion about DoCoMo above, it is clear that for an operator to start creating its own device platform from the bottom up, it will need extremely deep pockets and very close relationships with willing OEMs to use its designs. For individual handsets or a small series of similar devices, it can clearly choose the ODM route, although this risks limiting differentiation to a thin layer of software and a few “off the peg” hardware choices.

Another option is to try to create a fully-fledged hardware ecosystem, putting in place the tools and business frameworks to help innovative manufacturers create a broad set of niche “long tail” devices that conform to a given operator’s specifications. If successful, this enables a given telco to benefit from a set of unique devices that may well come with new business models attached. Clearly, the operator needs to be of sufficient scale to make the volumes worthwhile – and there also needs to be a guarantee of network robustness, channels to market and back-office support.

Verizon’s “Open Device Initiative” is perhaps the highest-profile example of this type of approach, aiming to foster the creation of a wide range of embedded and M2M products. It assists in the certification of modules, and also links in with its partnership with Qualcomm and nPhase in creating an M2M-enabling platform. A critical aspect of its purpose is a huge reduction in certification and testing time for new devices against its network – something which had historically been a time-to-market disaster lasting up to 12 months, clearly unworkable for categories like connected consumer-oriented devices. It has been targeting a 4-week turnaround instead, working with a streamlined process involving multiple labs and testing facilities.

US rival operator AT&T is attempting a similar approach with its M2M partner Jasper Wireless, although Verizon ODI has been more conspicuous to date.

3 / INQ Mobile

Another interesting approach to device creation is that espoused by the Hutchison 3 group. Its parent company, Hutchison Whampoa, set up a separately-branded device subsidiary called INQ Mobile in October 2008. INQ specialises in producing Internet-centric featurephones with tight integration of web services like Skype, Facebook and Twitter on low-cost platforms. Before the launch of INQ, 3 had already produced an earlier product, the SkypePhone, but had not offered it to the wider marketplace.

At price points around $100, INQ’s devices are strongly aimed at prepaid-centric or low-subsidy markets where users want access to a subset of Internet properties, but without incurring the costs of a full-blown smartphone. INQ has worked closely with Qualcomm, especially using its BREW featurephone software stack to enable tight integration between web services and the UI. That said, the company is now switching at least part of its attention to Android in order to create touchscreen-enabled mid-market devices.

3/INQ highlights one of the paradoxes of operator involvement in device creation – while it is clearly desirable to have a differentiated, exclusive device, it is also important to have a target market of sufficient scale to justify the upfront investment in its creation. Setting up a vehicle to sell the resulting phones or other products in geographies outside the parent’s main market footprint is a way to grow the overall volumes, without losing the benefits of exclusivity.
In this sense, although the 3 Group clearly benefits from its association with INQ, the venture is not specifically part of the operator’s strategy but that of its ultimate holding company. The separate branding also makes good sense. It is also worth noting that 3 is not wholly beholden to INQ for supply of own-brand devices; the current S2x version of its SkypePhone is manufactured by ZTE.

BT Fusion

It is also worthwhile discussing one of the less successful device initiatives attempted by operators in recent years. Between 2003 and 2009, BT developed and sold a fixed-mobile converged service called Fusion, which flipped handsets between an ordinary outdoor cellular connection and a local wireless VoIP service when indoors and connected to a BT broadband line.

Intended to reduce the costs associated with then-expensive mobile calls when in range of “free” landline or VoIP connections, it relied on switching to Bluetooth or WiFi voice when within range of a suitable hotspot. The consumer and small-business version relied on a technology called UMA (Unlicensed Mobile Access), while a corporate version used SIP. The mobile portion of the service used Vodafone’s network on an MVNO basis.

Recognising that it needed widespread international adoption to gain traction and scale, BT did many things that were “right”. In particular, it supported the creation of the FMCA (Fixed-Mobile Convergence Alliance) and engaged directly with many handset vendors and network suppliers, notably Motorola for devices and Alcatel-Lucent for systems integration. It also ran extensive trials and testing, and participated in various standards-setting fora.
The service never gained significant uptake. This was blamed largely on falling prices for mobile calls, which undermined the core value proposition. The failure also reflected a very limited handset portfolio, especially as the technology only supported 2G mobile devices at launch – at just the point when many higher-value customers wanted to transition to 3G.

Conversely, lower-end users in the UK tend to use prepaid mobile, which did not fit well with BT’s contract-based pricing, oriented around Fusion’s position as an add-on to normal home broadband. In addition, there were significant issues around the user interface, and around the interaction of the UMA technology with other uses of the WiFi radio in which the user did not wish to involve the operator.

BT’s main failure here was its poor focus on what customers wanted from the devices themselves, as well as on certain other aspects of the service wrapper, such as numbering. It was so focused on the network- and service-centric aspects of Fusion (especially “seamless handover” of voice services) that it ignored many of the reasons that customers buy mobile phones – a range of device brands and models, the increasing appeal of 3G, battery life, the latest features like high-resolution cameras and so forth. Towards the end of Fusion’s life, it looked even weaker once the (unsupported) Apple iPhone raised the bar for mass-market adoption of smartphones. The service was withdrawn from sale in early 2009.

BT also over-estimated the addressable market for UMA-enabled phones; a more realistic view should have made it realise that support for the technology was always going to be an afterthought for the OEMs. It also over-relied upon Motorola for lower-end devices, and supported Windows Mobile for its smartphone variants more for reasons of pragmatism than customer demand.

Lastly, BT appears to have underestimated the length of time it would take to get devices from concept, through development and testing, to market. In particular, it takes many years (and a clear economic rationale) for an optional feature to become built into mobile device platforms as standard – and until that occurs, the subset of devices featuring that capability tends to be smaller, more expensive, and often late to market, as OEMs focus their best engineers and project resources on more scalable investments.

Perhaps the main takeaway here is that telcos’ involvement in complex, technology-led device creation is very risky where the main customer benefit is simply cheaper service, in markets where the incumbent providers have scope to reduce margins to compete. A corollary lesson is that encouraging device vendors to support new functions that only benefit the operators (and only a small proportion of customers) is tricky, unless the telcos are prepared to guarantee better purchase prices or large volumes. This may well help explain the failure, to date, of other phone-based enhancements such as NFC.

The role of the ODM in telco-centric devices

An important group of players in operators’ device strategies are the ODMs (original design manufacturers). Usually based in parts of Asia such as Taiwan and Korea, these firms specialise in developing customised “white label” hardware to given specifications, which is then re-branded by better-known vendors. ODMs sit rather higher up the value-add hierarchy than CMs (contract manufacturers), which are essentially factory-outsourcing companies with much less design input.

Historically, the ODMs’ main customers were the device “OEMs” (original equipment manufacturers) – including well-known firms like Motorola, Sony Ericsson and Palm. Even Nokia contracts out some device development and manufacturing, despite its huge supply chain effectiveness. Almost all laptops are actually manufactured by ODMs – this supply route is not solely about handsets.

Examples of ODMs include firms like Inventec, Wistron, Arima, Compal and Quanta. Others such as HTC, ZTE and Huawei also design and sell own-brand products (i.e. act as OEMs) as well as manufacturing additional lines for other firms as ODMs.

In a growing number of instances, operators themselves are now contracting directly with ODMs to produce own-brand products for both mobile and fixed marketplaces. This is not especially new in concept – HTC in particular has provided ODM-based Windows Mobile smartphones and PDAs to various operators for many years. The original O2 XDA, T-Mobile MDA and Orange SPV series of smart devices all came via this route.

More recently, the ODM focus has swung firmly behind Android as the preferred platform, although there are still Microsoft-based products in the background as well. There is also patchy use of ODMs to supply own-branded featurephones, usually for low-end prepaid segments of the market.

One conspicuous trend is that the ODMs favoured by operators have tended to differ from those favoured by the OEMs. MNOs have tended to work with the more experienced and technically deep ODMs (which often have sizeable own-brand sales as well), perhaps to compensate for their own limitations in areas such as radio and chipset expertise. They also want vendors that are capable of executing on sophisticated UI and applications requirements. HTC, ZTE and Sagem have made considerable headway in cutting deals with operators, with ZTE in particular able to leverage the growing global footprint associated with its infrastructure sales. Conversely, some of the more “traditional” ODMs from Taiwan, such as Compal and Arima, have struggled to engage with operators to the same degree that they win design and manufacturing outsourcing business from companies like Motorola and Sony Ericsson.

One of the most interesting recent trends is around new device form-factors, such as web tablets, ebook readers and netbooks/smartbooks. Operators are working with ODMs in the hope of deploying such devices as part of new business models and service propositions – either separate from conventional mobile phone service contracts, or as part of more complex integrated three / four screen converged offers. Again, Android is playing an important role here, especially for products that are Internet-centric such as tablets. Not all such devices are cellular-enabled: some, especially where they are intended for use just within the home, will be WiFi-only, connected via home broadband. Android is important here because of its malleability – it is much easier for operators (and their device partners) to create complete, customised user experiences, as the architecture does not have such a fixed “baseline” of user interface components or applications as Windows. It is also cheaper.

It is nevertheless important to note that ODM-based device strategies are often difficult to turn into new business models, and have various practical complexities in execution. Most ODMs base their products on off-the-shelf “reference designs” from chipset suppliers, alongside standard OSes (hence Android and Windows Mobile) and a fairly thin layer of in-house IPR and design skills. There is often limited differentiation over commodity designs for a given product, except in the case of the few ODMs that have built up strong software expertise over years (notably HTC).

In addition, the “distance” between operator and ODM, in terms of both value-chain position and geography, often makes such partnerships difficult to manage. Often, neither party has particularly strong skill sets in RF design, embedded software development, UI design and ecosystem management, which means a range of extra consultants and integrators also needs to be roped into the projects. While open OSes like Android provide an off-the-shelf ecosystem to add spice to the offerings, the overall propositions can suffer from a lack of centralised ownership.

It is worth considering that previous operator/ODM collaborations have mostly been successful in two contexts:

  • Early Windows Mobile and Pocket PC devices sold to businesses and later consumers, to compete primarily against Nokia/Symbian and provide support for email, web browsing and a limited set of applications. Since the growth of Apple and BlackBerry, these offerings have looked weak, although ODM Android-based smartphones are restoring the balance somewhat.
  • Low-end commodity handsets, primarily aimed at prepaid customers in markets where phones are sold through operator channels. Typically, these have been aimed at less brand-conscious consumers who might otherwise have bought low-tier Nokia, Samsung or LG handsets.

On the other hand, other operator / ODM tie-ups have been rather less successful. In 2009, a number of operators tried rolling out small handheld MIDs (mobile Internet devices), with lacklustre market impact.

One possibility is that ODMs will start to shift focus away from mobile handsets, and more towards other classes of device such as tablets, femtocells and in-car systems. These are all areas in which there is much less incumbency from the major OEM brands like Apple and Samsung, and where operators may be able to sell completely new classes of device, packaged with services.

Own-brand operator handsets remain a “minority sport”: IDC is reported to estimate that they accounted for only 1.4% of units shipped in Western Europe in 2008.

Enhancing existing business models

Returning to one of the points made in the introduction, there are two broad methods by which device expertise can enhance operators’ competitive position and foster the creation and growth of extra revenue:

  • Improving the success and profitability of existing services and business models
  • Underpinning the creation of wholly new offerings and propositions

This section of the document considers the former rationale – extending the breadth and depth of current services and brand. Although much of the recent media emphasis (and perceived “sexiness”) around devices is on the creation of new business models and revenue streams, arguably the main benefits of device specialisation for telcos are more prosaic. Deploying or designing the right hardware can reduce ongoing opex, help delay or minimise the need for incremental network capex, improve customer loyalty and directly generate revenue uplift in existing services.

Clearly, it is not new analysis to assert that mobile operators benefit from having an attractive portfolio of devices, in markets where they sell direct to end-users. Exclusive releases of the Apple iPhone clearly drove customer acquisition for operators such as AT&T and O2. Even four years ago, operators which merely gained preferential access to new colours of the iconic Motorola RAZR saw an uplift in subscriber numbers.

But the impact on ongoing business models goes much further than this, for those telcos that have the resources and skill to delve more deeply into the ramifications of device selection. Some examples include:

  • There is a significant difference between devices in terms of return rates from dissatisfied customers – either because of specific faults (crashing, for example) or poor user experience. This can cause significant losses in terms of the financial costs of device supply/subsidy, along with negative impact on customer loyalty.
  • Less serious than outright returns, it is also important to recognise the difference in performance of devices on an ongoing basis. In 2009, Andorra-based research lab Broadband Testing found huge variations between different smartphones in the basics of “being a phone” – some regularly dropped calls under certain circumstances such as 3G-to-2G transitions, for example. Often, users will wrongly associate dropped calls with flaws in the network rather than the handset – thereby generating a negative perception for the telco.
  • Another important aspect of opex relates to handling support calls, which can easily cost $20 per event – and sometimes much more for complex inquiries needing a technical specialist. This becomes much more of an issue for certain products, such as advanced data-capable products, where configuration of network settings, email accounts and VoIP services can be hugely problematic. A single extra technical call, per user per year, can wipe out the telco’s gross margin. Devices which have setup “wizards” or even just clearer menus can reduce the call-centre burden considerably. Even in the fixed world, home gateways or other products designed to work well “out of the box” are essential to avoid profit-sapping support calls (or worse, “truck rolls”). This can mean something as trivial as colour-coding cables and sockets – or as sophisticated as remote device management and diagnostics.
  • Selection of data devices with specific chipsets and radio components can have a measurable impact on network performance. Certain standards and techniques are only implemented in particular semiconductor suppliers’ products, which can use available capacity more efficiently. Used in sufficiently large numbers, the cumulative effect can result in reduced capex on network upgrades. While few carriers have the leverage to force new chip designs into major handset brands’ platforms, the situation could be very different for 3G dongles and internal modules used in PCs, which tend to be much more generic and less brand-driven. UK start-up Icera Semiconductor has been pursuing this type of engagement strategy with network operators such as Japan’s SoftBank.
  • Device accessories can add value to a service provider’s existing offerings, adding loyalty, encouraging contract renewal, and potentially justifying uplift to higher-tier bundles. For home broadband, the provision of capable gateways with good WiFi can differentiate versus alternative ISPs. For those providing VoIP or IPTV, the addition of cordless handsets or PVRs / media servers can add value. In mobile, the provision of car-kits can improve voice usage and revenues significantly.
  • Operators’ choice of devices can impact significantly on ARPU. There is historical evidence that a good SMS client on a mobile phone will drive greater usage and revenue, for example. In the fixed-broadband world, providing gateways with (good) WiFi instead of simple modems has driven multiple users per household – and thus a need for higher-tier services and greater overall perception of value.
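The gross-margin arithmetic in the support-call point above can be made concrete. The $20 per-call cost comes from the text; the service price and margin percentage below are hypothetical figures, chosen only to illustrate how thin the buffer can be on a low-priced add-on service:

```python
# Illustration: how a single support call can wipe out the annual gross
# margin on a low-priced add-on service. The $20 call cost is from the
# report; the fee and margin percentage are hypothetical assumptions.

monthly_fee = 5.00        # assumed add-on price, $/month (hypothetical)
gross_margin_pct = 0.30   # assumed gross margin on the service (hypothetical)
cost_per_call = 20.00     # report's estimate of a support call's cost

annual_gross_margin = monthly_fee * 12 * gross_margin_pct  # $18.00/year
net_after_one_call = annual_gross_margin - cost_per_call   # -$2.00

print(f"Annual gross margin per user: ${annual_gross_margin:.2f}")
print(f"After one support call:       ${net_after_one_call:.2f}")
```

Under these assumptions one call per user per year pushes the service into loss, which is why devices with setup wizards and clearer menus, which deflect such calls, can have a first-order effect on profitability.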

Figure 2: Operators need to consider the effects of basic device performance on customer satisfaction and the network

Source: Broadband Testing

There are also much simpler ways in which devices can bolster current services’ attractiveness: 2010 and 2011 are likely to see an increasing number of new devices being sold or given away by operators in order to retain existing customers using existing services.

In particular, a new class of WiFi-based web tablets is expected to become quite popular among fixed broadband companies looking to avoid churn or downward pricing pressure, as well as (perhaps) acting as a future platform for new services. Although there are numerous technical platforms for tablets, it seems likely that Android will enable a broad array of inexpensive Asian ODMs to produce competent products, especially as they will not need complex integration of voice telephony or similar features. The growing maturity of web browsers and widgets (for example with HTML5), as well as the flexibility of the Android Marketplace, should provide sufficient flexibility for use of the products with most leading-edge web services.

Expect to see plenty of “free” pseudo-iPads being given as inducements to retain customers, or perhaps to upsell them to a higher-tier package. The ability of fixed broadband providers to compete with their mobile peers by providing subsidised devices should not be underestimated. By the same token, mobile operators may choose to give away free or discounted femtocells.

It is also possible for operators’ direct involvement in the device marketplace to lead to lower costs for existing business models. Various groups of operators have collectively acted in partnership to reduce device prices through collective purchasing and negotiation, as well as enabling larger-scale logistics and supply chain operations. In Japan, NTT DoCoMo has conducted a considerable amount of research on chipset integration, with the result of enabling cheaper handset platforms (see case study below).

Operator home gateways

Probably the most visible and successful area for operator-controlled and branded devices has been the home gateway provided by many ADSL operators, as well as their peers’ offerings of cable modems and set-top boxes. While these are usually produced by companies such as Thomson / Technicolor and 2Wire, many operators undertake very substantial customisation of both hardware and software.

Up to a point, these products have acted as service “hubs”, enabling fixed broadband providers to offer a variety of value-added options such as IPTV, VoIP, remote storage and other service offerings. They normally have WiFi (and, sometimes, “community” connectivity such as the BT / FON tie-up) and various ports for PCs and other devices. Some incorporate wireless DECT or WiFi phones. Most are remotely manageable and can support software upgrades, as well as some form of interactivity via the customer’s PC. Given that most home broadband contracts last at least a year – and are rarely churned – the cost can be defrayed relatively easily into the ongoing service costs. 



That is the good side of home gateways. The downside is that they rarely generate additional incremental revenue streams after the initial installation. Users only infrequently visit operators’ portals, or even less often use the in-built management software for the device. They respond with indifference to most forms of marketing after the initial sign-up: anecdotally, telephone sales and direct mail have poor response rates.

Nevertheless, these products still form a centrepiece of many broadband providers’ strategies and competitive differentiation:

  • Most obviously, they are needed to support higher broadband speeds, which remains the key differentiator between telcos selling ADSL or cable connectivity. “Upgradeability” to faster speeds is one of the most likely options to drive aftermarket revenue uplift or induce loyalty via “free” improvements whilst maintaining price against a falling market. In some countries, the ability to support fibre as well as copper is an important form of future-proofing. Potentially, the inclusion of femtocell modules also confers extra upgrade potential.
  • If well-designed, they can prompt selection of a higher-end monthly tariff or bundle at the initial sale, especially where the operator has a range of alternative products. For example, Orange sells its low-end plans with a basic wireless router, while its higher-end offerings use its LiveBox to support value-adds like VoIP, UMA and so forth. BT offers a free DECT handset with its top-end bundle.
  • Gateways can also reduce operating costs, especially if they have good self-diagnostics and helpdesk software.
  • In some cases, the gateway can stimulate an ecosystem of accessories such as cordless handsets or other add-ons. Orange, once again, uses its LiveBox as a platform for additional “digital home” products such as a networked storage drive, Internet radio and even a smoke detector*. These can either generate additional revenue directly in hardware sales, or by incremental services – or even just greater utilisation of the base offers. In the future, it seems likely that this approach could evolve into a much broader set of services, such as smart-grid electricity monitoring.


(*The Orange France smoke detector service is interesting in that it comes with two subscription options: Orange’s own €2 per month alerting service, or a more comprehensive offering from a third-party “upstream” insurance and assistance firm, Mondial Assistance, at €9 per month.)

As such, the gateway is potentially a major long-term asset for operators wishing to pursue two-sided models. It can act as a control point for network QoS, helping differentiate certain end-user ‘consumption’ devices through physical ports or separate WiFi identities. It can store information or provide built-in applications (for example, web caching). This approach could enable a work-around for Net Neutrality, if two-sided upstream partners’ applications are prioritised not over the Internet connection, but instead by virtue of having some form of local ‘client’ and intelligence in the operator’s broadband box. While this might not work for live TV or real-time gaming, there could certainly be other options that might allow more ‘slice and dice’ revenue to be extracted.

It is also much more feasible (net neutrality laws permitting) to offer differentiated QoS or bandwidth guarantees on fixed broadband, when there is a separate hardware device acting as a “demarcation point”, and able to measure and report on real-world conditions and observed connectivity behaviour. This is critical, as it seems likely that “upstream” providers will demand proof that the network actually delivered on the QoS promises.

The bottom line is that operators intending to leverage in-home services need a fully-functional gateway. It is notable that some operators are now backing away from these towards less-functional and cheaper ADSL modems (for example, Telecom Italia’s Alice service), which may reflect a recognition that added-value sales are much more difficult than initially thought.

It is difficult to monetise PCs beyond “pipes”

Despite our general enthusiasm for innovation in gaining revenues from new “upstream” providers, Telco 2.0 believes that the most important two-sided opportunities will involve devices other than PCs. We also feel it is highly unlikely that operators will be able to sell many incremental “retail” services to PC users, beyond connectivity. That said, we can envisage some innovation in pricing models, especially for mobile broadband in which factors like prepaid, “occasional” nomadicity and offload may play a part. There may also be some bundling – for example of music services, online storage or hosted anti-virus / anti-spam functions. One other area of exception may be around cloud computing services for small businesses.

Although the popular image of broadband is people on Facebook, running Skype or BitTorrent or watching YouTube on a laptop, these services are not likely to support direct ‘slice and dice’ wholesale capacity revenues from the upstream providers. Telco 2.0 believes that in certain cases (eg fixed IPTV), Internet or media companies might be prepared to pay an operator extra for improved delivery of content or applications. But there is very little evidence that PC-oriented providers such as YouTube, for example, will be prepared to pay “cold hard cash” to broadband providers for supposed “quality of service”. PCs are ideal platforms for alternative approaches – rate adaptation, buffering, or other workarounds. PC users are comparatively tolerant, and are more prone to multi-tasking while downloads occur. However, these companies may still be able to generate advertising revenue-share, telco B2B value-added services (VAS) and API-based revenues in some circumstances – especially via mobile broadband.

That said, for mobile broadband, PCs are really more of a problem than an opportunity, generating upwards of 80% of downstream data traffic for many mobile operators – 99.9% of which goes straight to the Internet, through what is actually quite complex and expensive core network “machinery”. Offloading PC-based mobile traffic to the Internet via WiFi or femtocell is a highly attractive option – even if it means forgoing a small opportunity for uplift. The benefits of increasing capacity available for smartphones or niche devices without extra capex on upgrades far outweigh this downside in most cases.

In the fixed world, the data consumption of PCs may eventually look like a red herring, except for the most egregiously-demanding users. The real pain (and, perhaps, opportunity) in terms of network costs will increasingly come from other devices connected via broadband, especially those capable of showing long-form HD video like large-screen TVs and digital video recorders. Other non-PC devices connected via fixed broadband include game consoles, tablets, smartphones (via WiFi), femtocells, smart meters, healthcare products and so on.

As the following section describes, PC-based applications are generally too difficult to track or charge for on a granular basis, while other supplementary products and associated applications tend to be easier to monitor and bill – and often have value chains and consumer expectations that are more accepting of paid services.

The characteristics which distinguish PCs from other broadband-connected devices include:

  • High-volume traffic. With a few exceptions that can be dealt with via caps or throttling, most PC users struggle to use more than perhaps 30GB/month today on fixed broadband, and 5GB on mobile. This is likely to scale roughly in parallel with overall network capacity, rather than out-accelerate it. Conversely, long-form professional video content has the potential to use many GB straight away, with a clear roadmap to ever-higher traffic loads as pixel densities increase. Clearly, PCs are today often facilitators in video downloads, but relatively few users can be bothered to hook their computers up to a large screen. In the future, there are likely to be more directly Internet-connected TVs, as well as specialist boxes like the Roku;
  • Multiple / alternative accesses. PCs will increasingly be used with different access networks – perhaps ADSL and WiFi at home, 3G mobile broadband while travelling, and paid WiFi hotspots in specific locations. This makes it much more difficult to monetise any individual pipe, as the user (and content/app provider) has relatively simple methods for arbitrage and ‘least cost routing’;
  • Likelihood of obfuscation. PCs are much more likely to be able to work around network policies and restrictions, as they are ideal platforms for new software and are generally much less controlled by the operator or vendor. Conversely, the software in a TV or health monitoring terminal is likely to be static, and certainly less prone to user experimentation. This means that if the network can identify certain traffic flows to/from a TV today, they are unlikely to have changed significantly in a year’s time. Nobody will install a new open-source P2P application on their Panasonic TV, or a VPN client in their blood-pressure monitor. Conversely, PC applications will require a continued game of cat-and-mouse to stay on top of. With such closed devices there is also much less risk of Google, Microsoft or another supplier giving away free encryption / tunnelling / proxying software and hiding all the data from prying DPI eyes;
  • Cost of sale and support. Few telcos are going to want to continually make hundreds of new sales and marketing calls to the newest ‘flavour of the month’ Web 2.0 companies in the hope of gaining a small amount of wholesale revenue. Conversely, a few ‘big names’ in other areas offer much more scope for solid partnerships – Netflix, Blockbuster, BBC, Xbox Live, Philips healthcare, Ubiquisys femtocells and so on. A handful of consumer electronics manufacturers and other telcos represents a larger and simpler opportunity than a long tail of PC-oriented web players. Some of the latter’s complexity will be reduced by the emergence of intermediary companies, but even with these, operators will almost certainly focus on the big deals;
  • Reverse wholesale threats. The viral adoption and powerful network effects of many PC-based applications mean that operators may be playing with fire if they try to extract wholesale revenues for data capacity. It is very easy for users of a popular site or service (e.g. Facebook) to mobilise against the operator – or even for the service provider to threaten to boycott specific ISPs and suggest that users churn. This is much less likely for individual content-to-person models like TV, where it is easier to assert control from a broadband service provider (BSP) point of view;
  • Consumer behaviour and expectations. Consumers (and content providers) are used to paying more/differently for video viewed on a TV versus on a PC. Similarly, the value chains for other non-PC services are less mature and are probably easier for fixed BSPs to interpose themselves in, especially while developers and manufacturers are still dealing with ‘best efforts’ Internet access. PC-oriented developers are already good at managing variable connection reliability, so tend to have less incentive to pay for improvements. There are some exceptions here, such as applications which are ‘mission critical’ (e.g. hosted Cloud / SaaS software for businesses, or real time healthcare monitoring), but most PC-based applications and their users are remarkably tolerant of poor connectivity. Conversely, streaming HD video, femtocell traffic and smart metering have some fairly critical requirements in terms of network quality and security, which could be monetised by fixed BSPs;
  • Congestion-aware applications. PC applications (and to a degree those on smartphones) are becoming much better at watching network conditions and adapting to congestion. It is much more difficult for a BSP to charge a content or application provider for transport, if they can instead invest the money in more adaptive and intelligent software. This is much more likely to occur on higher-end open computing devices with easily-updateable software.

Taken as a whole, Telco 2.0 is doubtful that PCs represent a class of device that operators can exploit much beyond connectivity revenues. In the fixed world, we feel that telcos have other, better, opportunities and more important threats (around video, tablets and new ecosystems like smart grids). In the mobile world, we think operators need to consider the cost of servicing PC-based mobile broadband, rather than the mostly-mythical new revenue streams – and just focus on managing or offloading the traffic with the greatest ease and lowest cost feasible.

PCs are unlikely to disappear – but they should not command an important share of telcos’ limited bandwidth for services innovation.

Devices and new telco business models

The last part of the previous section has given a flavour of how network end-points might contribute to business model innovation, or at least permit the layering-on of incremental services such as the Orange smoke-detector service. It is notable that, in that case, the new proposition is actually a “two box” service, involving a generic telco-controlled unit (the LiveBox gateway), together with a separate device that actually enables and instantiates the new service (the detector itself).

When it comes to generating new device-based operating and revenue models, telcos have two main choices:

  • Developing services around existing multi-purpose devices (principally PCs or smartphones)
  • Developing services around new and mostly single-application devices (Internet TVs, smart meters, healthcare monitors, in-vehicle systems, sensors and so forth).

The home gateway, discussed above, is a bit of a special category, as it is potentially both a “service end-point” in its own right and the hub for extra gadgets hooked into it through WiFi.

The first option – using multi-function devices – has both advantages and disadvantages. The upside is a large existing user base, established manufacturers and scale economies, and well-understood distribution channels. The downside is the diversity of those marketplaces in terms of fragmented platforms and routes to market, huge competition from alternative developers and service providers, an urgent need to avoid disruption to existing revenue streams and experience – and the strategic presence of behemoths such as Apple, Google and Nokia.
Smartphones and PCs are separately analysed later in this document, as each group has very separate challenges that impinge to only a limited degree on the newer and more fragmented device types.

With new devices there are also a series of important considerations. In theory, many can be deployed in “closed” end-to-end systems with a much greater measure of operator control. Even where they rely on notionally “open” OS’s or other platforms, that openness might be exploited by the telco in terms of, say, user interface and internal programming – but not left fully-open to the user to add in additional applications. (This is perfectly normal in the M2M world – many devices, such as barcode scanners and bank ATMs, have Windows or Linux internals, but these are isolated from the user’s intervention).

However, despite the ability to create completely standalone revenue models, there are still other practical concerns. Certain device types may fit poorly with telcos’ back-office systems, especially old and inflexible billing systems. There will also be huge issues about developing dedicated retail and customer-support channels for niche devices, outside their usual mechanisms for selling mobile services or mass-market broadband and telephony. There may also be challenges dealing with the role of the incumbent brands and their existing partnerships.

Devices map onto four communications models

Clearly, the device universe driving telecom services is a broad one – dominated in volume terms by mobile phones and smartphones, and in data terms by PCs. There are also the numerically smaller, but highly important constituencies of fixed phones, servers and corporate PBXs. But increasingly, the landscape looks more fragmented, with ever more devices becoming network-connected and also open to applications and “smartness”. TVs, tablets, sensors, meters, advertising displays, gaming products and so forth – plus newcomers in diverse areas of machine-to-machine and consumer electronics.

Consequently, it is difficult to develop broad-brush strategies that span this diversity, especially given the parallel divergence of business models and demands on the network. To help clarify the space, we have developed a broad mechanism for classifying devices into different “communications models”. Although the correlation is not perfect, we feel that there is a good-enough mapping between the ways in which devices communicate, and the ways in which users or ecosystems might be expected to pay for services.

(Note: P2P here refers to devices that are primarily for person-to-person communications, not peer-to-peer in the context of BitTorrent etc. In essence, these devices are “phones” or variants thereof, although they may also have additional “smart” data capabilities).


It is worth pointing out that PCs represent a combination of all of these models. They are discussed separately, in another section – although Telco 2.0 feels that they are much more difficult to monetise beyond connectivity for operators.

Person-to-person communication

The majority of devices connected to telcos’ networks today are primarily intended for person-to-person (also sometimes called peer-to-peer) communications: they are phones, used for calling or texting other phones, both mobile and fixed. Because they have been associated with numbers – and specific people, locations or businesses – the business models have always revolved around subscriptions and continuity.

Telco 2.0 believes that there is limited scope for device innovation here beyond additional smartness – and to a degree, smartphones (like PCs) also could be considered special cases that transcend the categories described here. They are examined below. [Note: this refers to the types of communication application – there are likely to be yet more new ways in which voice and SMS can be used, controlled and monetised even on basic phones through back-end APIs in the network].

Yes, there could be niche products specifically intended as “social network devices”, and clearly there is also a heritage of products optimised for email and various forms of instant messaging. But these functions are generally integrated into handsets, either operator-controlled or through third-party platforms such as BlackBerry’s email and messaging.

A recurring theme among fixed operators for the past 20 years has been that of videophones. Despite numerous attempts to design, specify or sell them, we have yet to see any rapid uptake, despite widespread use of webcams on PCs. The most recent attempt has been the advent of “screenphones” optimised for web/widget display, with additional video capture and display capabilities that vendors hope may eventually become more widely used. These too have had limited appeal.

Although handsets clearly represent a huge potential opportunity for telcos’ two-sided aspirations through voice/SMS APIs and smartphone applications and advertising, it seems unlikely that device innovation will result in totally new classes of product here. As such, operators’ peer-to-peer device strategy will likely revolve around better control of smartphones’ experience and application suites, along with attempts to bring on new mass-market services for featurephones. This is likely to take the form of various new web/widget frameworks such as the Joint Innovation Labs’ platform (JIL), run by Vodafone, Verizon, SoftBank and China Mobile.

Other less-likely handset business models could evolve around new “core” communications modes – although we remain sceptical that the 3GPP- and GSMA-backed Rich Communications Suite will succeed in the fashion of SMS for a huge number of reasons. In particular, any new core P2P mode needs very high penetration levels to be attained before reaching critical mass for uptake – something hard to achieve given the diversity of device platforms, the routes to market, and the existing better-than-RCS capabilities already built into products such as the iPhone and BlackBerry. Adding in a lack of clear business case, poor fit with prepay models and weak links to consumer behaviour and psychology (eg “coolness”), we feel that “silo” optimised solutions developed by operators, device vendors or third parties are much more likely to succeed than lowest-common-denominator “official” standards.

Downloads and streaming

The most visible – and potentially problematic – category of new connected devices are those that are intended as media consumption products. This includes TVs, e-book readers, PVRs, Internet radios, advertising displays and so forth. Clearly, some of these have been connected to telco services in some way before (notably via IPTV), but the recent trend of embedding intelligence (and “raw” direct Internet access) is changing the game further. Although it is also quite flexible, we believe that the new Apple iPad is best represented within this category.

There are four main problems here:

  • The suppliers of these devices are often strong consumer electronic brands, with limited experience of engaging with operators at all, let alone permitting them to interfere in hardware or software specification or design. Furthermore, their products generally have significant “offline” usage modes such as terrestrial TV display, over which operators cannot hope to exert influence at all. As such, any telco involvement will likely need to be ring-fenced to new services supported. This also makes it difficult to conceive of many products which could be profitable if confined solely to sales within an individual operator’s customer base.
  • It is unlikely that many of the more expensive items of display and media consumption technology will be supplied directly by operators, or subsidised by them. This makes it very difficult for operators to get their software/UI load into the supply chain, unless there were generic open-Internet downloads available.
  • These devices – especially those which display high-definition video – can consume huge amounts of network resource. Living-room LCD TVs can pull down 5GB per hour, if connected to the Internet for streamed IPTV, which might not even be watched if the viewer leaves the room. In the mobile domain, dedicated TV technologies have gained limited traction, but streaming music and audio can instead soak up large volumes of 3G bandwidth. There is a risk that as display technology evolves (3D, HD etc), these products may become even more of a threat to economics than open PCs.
  • For in-home or in-office usage scenarios, the devices will normally be used “behind” the telco access gateway and thus be outside the usual domain of operator influence. This makes it less palatable to consumers to have “control points”, and also raises the issue of responsibility for poor in-home connectivity if they are operator-controlled.

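The scale of the third problem above can be illustrated with some back-of-envelope arithmetic, using only the figures quoted in this report (roughly 5GB per hour for living-room streamed HD, and roughly 30GB per month for a typical fixed-broadband PC user). The viewing-hours figure below is an illustrative assumption, not a number from the report:

```python
# Rough comparison of monthly data consumption: a connected living-room TV
# versus a typical PC user, using the figures cited in this report.
HD_STREAM_GB_PER_HOUR = 5      # living-room LCD TV pulling streamed IPTV
TYPICAL_PC_GB_PER_MONTH = 30   # typical fixed-broadband PC user today

def monthly_tv_consumption_gb(hours_per_day: float, days: int = 30) -> float:
    """Monthly data volume pulled by an Internet-connected TV."""
    return HD_STREAM_GB_PER_HOUR * hours_per_day * days

# Assume two hours of streamed HD viewing per day (illustrative only):
tv_gb = monthly_tv_consumption_gb(2)       # 5 * 2 * 30 = 300 GB/month
ratio = tv_gb / TYPICAL_PC_GB_PER_MONTH    # ten times a typical PC user

print(f"Connected TV: {tv_gb:.0f} GB/month ({ratio:.0f}x a typical PC user)")
```

Even at modest viewing levels, a single streamed-TV household dwarfs today’s typical PC user – which is why this category is singled out as a cost threat.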
All that said, there are still important reasons for telcos to become more skilled in this category of devices. Firstly, it is important for them to understand the types of traffic that may be generated – and, possibly, learn how to identify it in the network for prioritisation. There could well be options for two-sided models here – for example, prioritisation or optimisation of HD video for display on living-room TVs, for which there may well be revenue streams to share, as well as user expectations that would not embrace “buffering” of streamed data during congested periods.

Moreover, there is a subset of this class of “display” devices which are much more amenable to entirely new business models beyond connectivity. Mobile devices such as the Apple iPad (or operator-controlled equivalents) could be bundled with content and applications. Non-consumer products such as connected advertising displays could benefit from many telco value-adds: imagine a road-side advert that changed to reflect the real-time mix of drivers in the vicinity, calculated via the operator’s network intelligence.

There are also further positives to this group of products that may offset the problems listed above. Generally, they are much less “open” than PCs and smartphones, and tend to have fixed software and application environments. This predictability makes it much less likely that new usage modes will emerge suddenly, or new work-arounds for network controls be implemented. It also makes “illicit” usage far less probable – few people are going to download a new BitTorrent client to their TV, or run Skype on a digital-advertising display.

Cloud services & control

Probably the most interesting class of new devices are those that are expected to form the centrepiece of emerging “cloud services” business models, or which are centrally-controlled in some way. In both cases, while the bulk of data traffic is downstream, there is an important back-channel from the device back to the network. Possible examples here would be smart meters for next-generation electricity grids, personal healthcare terminals, or “locked” tablets used for delivering operator-managed (or at least, operator-mediated) services into the home.

These devices would typically be layered onto existing broadband service connections in the home (probably linked in via WiFi), or else could have a separate cellular module for wide-area connectivity. While they may have some form of user interface or screen, it is likely that this will not be “watched” in the same sense as a TV or media tablet, instead used for specific interactive tasks.

These types of application have some different network requirements to other devices – most typically, they will require comparatively small volumes of data, but often with extremely high levels of security and reliability, especially for use cases such as healthcare and energy management. Other devices may be less constrained by network quality – perhaps new appliances for the home, such as “family agenda and noticeboard” tablets.

There are numerous attractions here for operators – while these devices are likely to be used for a variety of tasks, their impact on the network in terms of capacity should generally be light. Conversely, the requirements for security should enable a premium to be charged – probably to the “ecosystem owner” such as a public-sector body or a utility. In some cases, there could well be additional associated revenue streams open to the telco alongside connectivity – both direct from end users, and perhaps also from managing delivery to upstream providers.

There is also a significant likelihood that cloud-based services will be based around long-term, subscription-type billing models, as the devices will likely be in regular and ongoing use, and also probably of minimal functionality when disconnected.


Upload-centric devices

A number of new device categories are emerging that are “upload-centric” – using the telco network as a basis for gathering data or content, rather than consuming it. Examples include CCTV cameras, networks of sensors (eg for environmental monitoring), or digital cameras that can upload photos directly.

These are highly interesting in terms of new business models for telcos:

  • Firstly, they are almost all incremental to existing connections rather than substitutional – and thus represent a source of entirely new revenue, even if the operators are just supplying connectivity.
  • Secondly, this class of device is likely to involve new, wider ecosystems, often involving parties that have limited experience and skill in managing networks or devices. This provides the opportunity for operators to add significant value in terms of overall management and control. Examples include camera manufacturers, public-sector authorities operating surveillance or measurement networks and so forth. This yields significant opportunity for two-sided revenues for telcos, or perhaps overall “managed service” provision.
  • Thirdly, it is probable that traditional “subscription” models, as seen in normal telephony services, will be unwieldy or a generally poor fit with this class of device. For example, a digital 3G-uploading camera is likely to be used irregularly and is thus unsuited to regular monthly fees. It may also make sense to price such devices on a customised “per photo” basis, rather than per-MB – and it would probably be desirable to bundle a certain allowance into the upfront device purchase price. Clearly, there is value to be gained by the telco or a specialist service provider like Jasper Wireless here, re-working the billing and charging mechanisms, handling separate roaming deals and so forth.

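The per-photo charging model suggested in the third bullet can be sketched very simply. All of the numbers and names below are illustrative assumptions, not details from the report; the point is only the shape of the tariff (a bundled allowance in the purchase price, then a flat fee per photo, charged in whole cents to avoid floating-point billing errors):

```python
# Hypothetical per-photo charging for a 3G-uploading camera:
# an allowance is bundled into the device price, then each extra
# photo incurs a flat fee. Figures are illustrative assumptions.
BUNDLED_PHOTOS = 200        # allowance included in the upfront price
PRICE_PER_PHOTO_CENTS = 5   # fee per photo beyond the bundle

def upload_charge_cents(photos_uploaded: int) -> int:
    """Charge (in cents) for one billing period under the per-photo model."""
    billable = max(0, photos_uploaded - BUNDLED_PHOTOS)
    return billable * PRICE_PER_PHOTO_CENTS

assert upload_charge_cents(150) == 0    # still within the bundled allowance
assert upload_charge_cents(260) == 300  # 60 extra photos at 5 cents each
```

Note that the tariff is expressed in usage units that are meaningful to the customer (photos), not network units (MB) – exactly the re-working of billing mechanisms that the text attributes to the telco or a specialist like Jasper Wireless.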
In addition, there is an opportunity to engineer these new business models from the ground up to reflect network usage and load. They are likely to generate fairly predictable traffic – most of it upstream. This may present certain challenges, as most assumptions are for download-centric networks, but the fact that application-specific devices should be “deterministic” should help assuage those problems from a planning point of view. For example, if an operator knows that it has to support a million CCTV cameras, each uploading an average of 3MB per hour from fixed locations, that is relatively straightforward to add into the capacity planning process – certainly much more so than an extra million smartphones using unknown applications at unknown times, while moving around.
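The CCTV example above is deterministic enough to reduce to a one-line calculation. Using the figures in the text (one million cameras, 3MB per hour each), the aggregate average upstream load works out at roughly 6.7Gbit/s:

```python
# Capacity-planning arithmetic for the deterministic CCTV fleet described
# in the text: one million cameras, each uploading 3MB per hour on average.
CAMERAS = 1_000_000
MB_PER_HOUR_PER_CAMERA = 3

def aggregate_upstream_mbit_s(cameras: int, mb_per_hour: float) -> float:
    """Average aggregate upstream load in Mbit/s (1 MB = 8 Mbit)."""
    mb_per_second = cameras * mb_per_hour / 3600
    return mb_per_second * 8

load = aggregate_upstream_mbit_s(CAMERAS, MB_PER_HOUR_PER_CAMERA)
print(f"Average aggregate upstream load: ~{load / 1000:.1f} Gbit/s")
```

An averaged figure like this is of course only the starting point for real planning (peaks, geographic distribution and headroom all matter), but it illustrates how much more tractable a predictable, fixed-location fleet is than an equivalent number of smartphones.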

All that said, it remains unclear that the total number of device sales and aggregate revenues make this category a truly critical area for telcos. In many cases it is likely to be “nice to have” rather than must-have – and it is certainly not obvious that the current nascent market will be large enough to accommodate every operator in a given market attempting to enter the space simultaneously. For a few operators this area may “move the needle” if a few choice deals are struck (e.g. for national environmental monitoring), but for others it will be many years, if ever.

One example of this category of product is the remote smoke-detector offered by Orange in France, which is provided as a value-add to its home broadband offer. This has a variety of service models, including one involving a subscription to another upstream provider of monitoring/alerting functions (Mondial Assistance), for which Orange presumably gains a revenue share.

Operators’ influence on smartphones and featurephones

Perhaps the key telco battleground at present is around smartphones. The growth of the iPhone, the entrenched position of BlackBerry, the emergence of Android and the theoretical numeric advantage of Symbian and Nokia are all important aspects of the landscape. They are encouraging data plan uptake by consumers, catalysing the applications ecosystem and – on the downside – fostering rampant bandwidth utilisation and providing ready platforms for Internet behemoths to drive services loyalty at the expense of the telcos.

In principle smartphones should be excellent platforms for operators launching new services and exploiting alternative business models – advertising, downloadable apps linked to identity or billing services, third-party payments for enhanced connectivity and so forth. Yet up until now, with a few exceptions (notably DoCoMo in Japan), there have been very limited new revenue streams on handsets beyond basic voice, messaging, ringtones and flat (or flattish) data plans. BlackBerry’s BES and BIS services are the only widely-adopted third-party data services sold outside of bundles by a significant number of operators, although operator billing for their own (or others’) appstores holds potential.

This is a general area that Telco 2.0 has covered in various recent research reports, examining the role of Apple, Google, RIM and others. Fixed operators have long known what their mobile peers are now learning – as intelligence increases in the devices at the edge, it becomes far more difficult to control how they are used. And as control ebbs away, it becomes progressively easier for those devices to be used in conjunction with services or software provided by third parties, often competitive or substitutive to the operators’ own-brand offerings.

A full discussion of the smartphone space merits its own strategy report, and thus coverage in this document on the broader device markets is necessarily summarised.

What is less visible is how and where operators can impose themselves in this space from a business model point of view. There is some precedent for operators developing customised versions of smartphone OS software, as well as unique devices (eg Vodafone / LiMo, DoCoMo / Symbian and Linux, or KDDI / Qualcomm BREW). Many have fairly “thin” layers of software to add some branding and favoured applications, over the manufacturer’s underlying OS and UI. Symbian and LiMo have been more accommodating in this regard, compared to Apple and RIM, with Microsoft and Palm somewhere in the middle.

However, in the majority of cases this has not led to sustainable revenue increases or competitive advantage for the operators concerned – not least because there appears to have been a negative correlation with overall usability, especially given links to back-end services like iTunes and the BlackBerry BIS email infrastructure. Where one company has complete control of the “stovepipe”, it is much easier to optimise for complexities such as battery life, manage end-to-end performance criteria such as latency and responsiveness, and be incentivised to ensure that fixing one problem does not lead to unintended consequences elsewhere. In contrast, where operators merely customise a smartphone OS or its applications, they often lack the ability to drill down into the lower levels of the platform where needed.

More recently, Android has seemed to represent a greater opportunity, as its fully open-source architecture enables operators to tinker with the lower layers of the OS if they so desire, although there are endless complexities in creating “good” smartphones outside of telcos’ main competence, such as software integration and device power management. Symbian’s move to openness could also produce a similar result. It is in this segment that operators have the greatest opportunity for business model innovation. We are already seeing moves to operator-controlled application ecosystems, as well as mobile advertising linked to the browser or other functions. That said, early attempts by operators to create own-label social networking services, or “cross-operator” applications, seem to have had limited success.

Further down the chain, it is important not to forget the huge market occupied by smartphones’ less-glamorous featurephone brethren. Especially in prepaid-centric markets where subsidy is rare, the majority of customers use lesser devices from the likes of Nokia’s Series 40 range, or the huge range from Samsung and LG. Worse still for operators, many of these devices are bought “vanilla” from separate retail channels over which they have little control.
While it is theoretically possible for service providers to “push” their UIs and applications down to non-customised handsets in the aftermarket, in reality that rarely happens as it has huge potential to cause customer dissatisfaction. More generally, some minimal customisation is provided via the SIM card applications – although over time this may become slightly more sophisticated.

Realistically, the only way that operators can easily control new business models linked to prepaid mobile phone subscribers is through own-brand phones (see ODM section below), or via very simple “per day” or “per month” fixed-fee services like web access or maybe video.

Overall, it could be viewed that operators are continually facing a “one step forward, two steps back” battle for handset application and UI control. For every new Telco-controlled initiative like in-house appstores, customised/locked smartphone OSs, BONDI-type web security, or managed “policy” engines, there is another new source of “control leakage” – Apple’s device management, Nokia’s Ovi client, or even just open OSs and third-party appstores enabling easy download of competing (and often better/free) software apps.

Multi-platform user experience

The rest of this document has talked about devices as standalone products, linked to particular services or business models. But it actually seems fair to assume that many users will be using a variety of platforms, in a variety of contexts, acquired through a myriad of channels.

This suggests that operators have some scope to define and own a new space – “multi-platform experience”. The idea is to compete for as great an aggregate share of attention and familiarity as possible, tied both to end-user service fees and, potentially, to two-sided offerings that benefit from this extra customer insight and access.

For example, users may wish to view their photos, or access their social networks, via digital cameras, mobile phone(s), PC, tablet, TV, in-car system and various other endpoints. They will want to have similar (but not identical) preferences and modes of behaviour. Yet there will likely be one which is the cornerstone of the overall experience, with the others expected to be reflections of it. This will drive ongoing purchasing behaviour of additional devices and services – Apple has understood this well.

Operators need either to start to drive these user experience expectations and preferred interaction patterns – or be prepared to accommodate others’. For example, there now appears to be significant value to many users in ensuring that new technology products are optimised for Facebook. While this may be a blow to the operators’ hopes of dominating a particular service domain, relinquishing it may be a small price to pay for overall importance in the user’s digital lifestyle. A telco providing a tablet with a Grade-A Facebook experience has a gateway for introducing the user to other in-house services.


For mobile operators

  • The key element of device strategy remains the selection, testing and sale of handsets – along with basic customisation and obtaining exclusivity where possible. Larger operators – especially those which are in post-paid centric markets – have more flexibility in creating or pushing new device classes and supporting new business models.
  • Mobile operators do not have a distinguished past in creating device UIs, with various failed experiments in on-device portals and application stacks. Consider focusing on control points (eg API security) underneath the apps and browser, rather than branding the direct interface to the user.
  • New classes of mobile device (tablets, in-car devices, M2M) are less risky than smartphones, but are unlikely to “move the dial” in terms of revenues for many years. They will also likely require more complex and customised back-end systems to support new business models. Nonetheless, they can prove fruitful for long-term initiatives and partnerships (eg in healthcare or smart metering).
  • Bridge the gap between RAN and device teams within your organisation, to understand the likely radio impacts of new products – especially if they are for data-hungry applications or ones with unusual traffic patterns such as upstream-heavy. Silicon and RF may be complex and “unsexy”, but they can make a huge difference to overall network opex and capex.
  • While Android appeals because of its ODM-friendliness and flexibility, it remains unproven as an engine for new business models and still has uncertain customer appeal. Do not turn your back on existing device partnerships (RIM, Apple, Nokia etc) until this becomes clearer.
  • Yoda in Star Wars had wise advice: “Do. Or do not. There is no ‘try’”. Creating devices is expensive, time-consuming and not for the faint-hearted. Uncommitted or under-resourced approaches may end up causing more harm than good. Be prepared to write some large cheques and do it right, first time.
  • If you are serious about investing in fully-customised handsets, consider following 3’s path with INQ and sell them to other non-competing operators around the world, to amortise the costs over greater volumes.
  • Examine the potential for raising revenue or customer satisfaction from device-side utilities rather than principal applications. For example, self-care or account-management apps on a smartphone can be very useful, while well thought-out connection management clients for mobile broadband PCs are a major determinant of customer loyalty.
  • Another promising domain of device specialism lies around creating enhanced experiences for existing successful applications – for example porting Facebook and Twitter, or particular media properties, to custom software loads on handsets. Done well, this also has the potential to form the basis of a two-sided business model. For example, if an operator pitched a “YouTube-optimised” phone, tied in with end-to-end network policy management and customer data exposure, there could be significant advertising revenue-share opportunities.
  • Mobile operators should generally consider enterprise-grade devices (eg tablets, meters, in-vehicle systems) only in conjunction with specialist partners.

  • De-prioritise initiatives around netbooks and laptops with embedded 3G connectivity. They represent huge loads on the network, are difficult to sell, and are extremely hard to monetise beyond “pipe” revenues.

For fixed & cable operators

  • The core recommendation is to continue focusing on (and enhancing) existing home gateway and set-top box products. These should be viewed as platforms for existing and future services – some of which will be directly monetisable (eg IPTV) while others are more about loyalty and reduction of opex (eg self-care and integrated femtocell modules).
  • Consider the use of relatively inexpensive custom devices (eg WiFi tablets) which are locked to usage via your gateway. Potentially, these could be given for free in exchange for a commitment to longer/renewed contracts or higher service tiers – and may also form the basis of future services provided via appstores or widgets.
  • Work collaboratively with innovative consumer electronics suppliers in areas such as Internet-connected TVs and games consoles. These vendors are potentially interested in end-to-end cloud services – including value-added capabilities from the network operators. They may also be amenable to suggestions on how to create “network-friendly” products, and co-market them with the operator.
  • Some operators may have the customer branding strength and physical distribution channels to sell adjunct products such as storage devices, Internet radios, IPTV remote controls and so forth. There may be additional revenue opportunities from services as well – for example, including a Spotify subscription with a set of external speakers. However, do not underestimate the challenges of overall system integration or customer support.
  • Take a leadership role in pursuing digital home opportunities. There is a narrow window of opportunity in which fixed operators have the upper hand here – over time, it is likely that mobile operators and their device vendors will start to gain more traction. For now, WiFi (and maybe powerline) connections are the in-home network of choice, with the WiFi router provided by a fixed/cable operator being at its centre.
  • A pivotal element of success is ensuring that an adequate customer support and device-management system is in place. Otherwise incremental opex costs may more than offset the benefits from incremental revenue streams.

  • Fixed telcos should look to exploit home networking gateways, femtocells and other CPE, before consumer electronic devices like TVs and hi-fi systems adopt too many “smarts” and start to work around the carrier core, perhaps accessing YouTube or Facebook directly from the remote control. At present, it is only open devices with a visible, capable and accessible user interface or browser (e.g. PCs and smartphones) that can exploit the wider Internet. Inclusion of improved Internet connectivity and user control in other classes of device will broaden their ability to circumvent operator-hosted services.


Telcos need to face the inevitable – in most cases, they will not be able to control more than a fraction of the total computing and application power of the device universe, especially in mobile or for “contested” general-purpose devices. Even broadband “device specialists” will need to accept that their role cannot diminish the need for some completely “vanilla” network end-points, such as most PCs.

But that does not mean they should give up trying to exert influence or design their own hardware and software where it makes sense – as well as developing services that compete on equal terms with the web, for those devices beyond their direct reach.

They should also ensure that at least as much consideration is given to optimising devices for their current business models as to the hope that they can form the basis of innovative offerings.

Some of the most promising new options include:

  • Single-application “locked” mobile devices, perhaps optimised for gaming or utility metering or navigation or similar functions, which have a lot of potential as true “terminals” and the cornerstone of specific business models, albeit used in parallel with users’ other smart devices.
  • Even notionally-open devices like smartphones and tablets can be controlled, especially through application-layer pinch points. Apple is the pre-eminent exponent of this art, controlling the appstore with an iron fist. This is not easy for operators to emulate, but is a very stark benchmark of the possible outcome. Android can help here, but only for those operators prepared to invest sufficient time and money on getting devices right. Another option is to work with firms like RIM, which tend to have more “controllable” OSs and which are operator-friendly.
  • It is far easier for the operator to exert its control at the edge with a standalone, wholly-owned and managed device, than via a software agent on a general computing device like a smartphone or notebook PC. However, it is more difficult and expensive to create and distribute a wholly-owned and branded device in the first place. Few people will buy a Vodafone television, or an AT&T camera – partnerships will be key here.
  • Devices which support web applications only (eg tablets) are somewhat different propositions to those which can also support “native” applications. Operators are more likely to find the “security model” for a browser cheaper and easier to manage than a full, deep OS, affording more fine-grained control over what the user can and cannot do. The downside is that browser-resident apps are generally not as flexible or powerful as native apps.
  • On devices with multiple network interfaces (3G, WiFi, Bluetooth, USB etc) a pivotal control layer is the “connection manager”, which directs traffic through different or multiple paths. In many cases, some of those paths will be outside operator control, allowing “leakage” of application data and thus revenue opportunity.
  • Even where aspects of the device itself lie outside Telcos’ spheres of control, there are still many “exposable” network-side capabilities that could be exploited and offered to application providers, if Telcos’ own integrated offerings are too slow or too expensive. Identity, billing, location and call-control can be provided via APIs to add value to third-party services, while potentially, customer data could be used to help personalise services, subject to privacy constraints. However, carriers need to push hard and fast, before these are disintermediated as well. Google’s clever mapping and location capabilities should be seen as a warning sign that there will be substitutes available that do not rely on the telcos.
  • We may also see ‘comes with data’ products offered by Telcos themselves, with their own product teams acting as a sort of internal upstream customer. If Dell or Apple or Sony can sell a product with connectivity bundled into the upfront price, but no ongoing contract, why not the operators themselves?
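The “connection manager” point above can be made concrete with a minimal sketch. This is purely illustrative pseudologic (the function and names are hypothetical, not any real OS API): it shows how a device-side preference policy selects among available interfaces, and why a typical default that prefers WiFi routes traffic off the operator-managed path, causing the “leakage” of application data described above.

```python
# Hypothetical sketch of connection-manager path selection.
# "Operator-managed" paths are those the operator can see and monetise.
OPERATOR_MANAGED = {"3g"}

def choose_path(available, app_policy):
    """Pick the first available interface in the app's preference order."""
    for iface in app_policy:
        if iface in available:
            return iface
    return None  # no usable path

# A typical default policy prefers WiFi for cost and speed reasons...
path = choose_path({"wifi", "3g"}, ["wifi", "3g"])
print(path)                       # wifi
print(path in OPERATOR_MANAGED)   # False: traffic "leaks" off-net
```

The point of the sketch is that whichever party sets the preference order in the connection manager effectively decides whether traffic, and the associated revenue opportunity, stays on an operator-controlled path.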

The other side to device specialists is the potential for them to become buyers rather than sellers of two-sided services. If Operator X has a particularly good UI or application capability, then (if commercial arrangements permit), it could exploit Operator Y’s willingness to offer managed QoS or other capabilities. This is most likely to happen where the two Telcos don’t compete in a given market – or if one is fixed and the other mobile. Our managed offload use case in the recent Broadband report envisages a situation in which a fixed ‘device specialist’ uses a WiFi or femto-enabled gateway to assist a mobile broadband provider in removing traffic from the macro network.

In addition to these, there are numerous device-related “hygiene factors” that can improve operators’ bottom line, through reducing capex/opex costs, or improving customer acquisition and ongoing revenue streams. Improved testing and specification to reduce customer support needs, minimise impact on networks and guarantee good performance are all examples. For example, RIM’s BlackBerry devices are often seen as being particularly network-friendly, as are some 3G modems featuring advanced radio receiver technology.

Overall, the battle for control of the edge is multi-dimensional, and outcomes are highly uncertain, particularly given the economy and wide national variations in areas like device subsidy and brand preference. But Telcos need to focus on winnable battles – and exploit Moore’s Law rather than futilely beat against it.

Figure 3: Both hardware and software/UI provide grounds for telco differentiation