Convergence, coexistence or competition: How will 5G and Wi-Fi 6 interact?

Introduction: Wi-Fi vs cellular

The debate around Wi-Fi and cellular convergence is not new. However, the introduction of the next generation of wireless technologies, Wi-Fi 6 and 5G, has reignited it. Further impetus has come from industry bodies, including the Wi-Fi Alliance, IEEE, Wireless Broadband Alliance (WBA), Next Generation Mobile Networks Alliance (NGMN) and 3GPP, which are developing standards to enable convergence between 5G and Wi-Fi.

5G, introduced in 3GPP's Release 15 in 2018 and deployed internationally by telecoms operators since 2019, is considered a significant upgrade to 4G/LTE. Its improved capabilities, such as increased speed, coverage, reliability and security, promise to enable a host of new use cases across a wide range of industries.

Simultaneously, Wi-Fi has evolved into its sixth generation, with Wi-Fi 6 technology emerging in 2019. This new generation can deliver speeds roughly 40% higher than its predecessor, as well as improved visibility and transparency for better network control and management. Some of its key enhancements are summarised in the figure below.

Figure 1: There are a number of key differences between next generation Wi-Fi and cellular connectivity


Source: STL Partners

The market context for convergence

Industry bodies have been promoting convergence

The Wireless Broadband Alliance (WBA) and the Next Generation Mobile Networks Alliance (NGMN) produced a joint report in 2021 promoting future convergence between Wi-Fi and 5G. The report highlights the merits of convergence, noting a number of use cases and verticals that stand to benefit from closer alignment between the two technologies. Further, 3GPP has increasingly included standards in each new release that enable convergence between Wi-Fi and cellular. Release 8 introduced the 'access network discovery and selection function' (ANDSF), which allowed user equipment to discover non-3GPP access networks, including Wi-Fi. Release 15 (2018) added optional access to native 5G services via these non-3GPP access networks. Most recently, Release 16 introduced 'access traffic steering, splitting and switching' (ATSSS), allowing traffic to be steered, split and switched across both 3GPP and non-3GPP access networks, which is a key enabler of the resilience model of convergence.

Similarly, the IEEE, sponsored by the Wi-Fi Alliance, has been discussing potential pathways to convergence for a number of years. However, these bodies are less vocal about future convergence possibilities, likely given Wi-Fi's current dominance in the provision of enterprise wireless connectivity.
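To make ATSSS more concrete, the sketch below models the four steering modes defined in Release 16 (3GPP TS 23.501) as a simple decision function choosing between a 3GPP leg (e.g. 5G NR) and a non-3GPP leg (e.g. Wi-Fi). This is illustrative only: real ATSSS rules are enforced in the device and the 5G core (via MPTCP or ATSSS-LL), not in application code, and the type names and thresholds here are our own assumptions.

```typescript
// Illustrative sketch of ATSSS-style steering, not a real 3GPP implementation.
type Access = "3gpp" | "non3gpp"; // e.g. 5G NR vs. Wi-Fi

interface AccessState {
  available: boolean; // is this access network currently usable?
  rttMs: number;      // measured round-trip time on this leg
  loadPct: number;    // rough utilisation of this leg, 0-100
}

type SteeringMode =
  | "active-standby"   // one active leg, the other held as backup
  | "smallest-delay"   // steer traffic to the lower-latency leg
  | "load-balancing"   // split traffic across legs (simplified here)
  | "priority-based";  // prefer one leg until it congests

function steer(mode: SteeringMode, nr: AccessState, wifi: AccessState): Access {
  switch (mode) {
    case "active-standby":
      // Wi-Fi as the active leg, 5G as standby: the resilience model
      return wifi.available ? "non3gpp" : "3gpp";
    case "smallest-delay":
      return nr.rttMs <= wifi.rttMs ? "3gpp" : "non3gpp";
    case "load-balancing":
      return nr.loadPct <= wifi.loadPct ? "3gpp" : "non3gpp"; // per-flow split, simplified
    case "priority-based":
      return wifi.available && wifi.loadPct < 80 ? "non3gpp" : "3gpp"; // assumed 80% threshold
  }
}

// Example: a Wi-Fi outage triggers a seamless fall-back to 5G.
const nr = { available: true, rttMs: 25, loadPct: 40 };
const wifi = { available: false, rttMs: 12, loadPct: 0 };
console.log(steer("active-standby", nr, wifi)); // "3gpp"
```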

Spectrum auctions

The case for convergence has been further supported in recent years by releases of spectrum in the 6GHz band for unlicensed use in the USA, UK, South Korea and other major markets. Spectrum in the same 6GHz range can also be used to support 5G, alongside Wi-Fi's existing 5GHz unlicensed band. The ability to share the same spectrum could, in theory, promote closer coupling of 5G and Wi-Fi. However, since both technologies have similar propagation characteristics in this band, it remains to be seen whether the increasing availability of spectrum will push convergence forward.
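The "similar propagation" point is easy to sanity-check with the standard free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44: moving from the 5GHz band to the 6GHz band adds only 20·log10(6/5) ≈ 1.6dB, all else being equal. The short sketch below shows the arithmetic; the 50-metre link distance is an illustrative assumption.

```typescript
// Free-space path loss in dB, for distance in km and frequency in MHz.
function fsplDb(distanceKm: number, freqMHz: number): number {
  return 20 * Math.log10(distanceKm) + 20 * Math.log10(freqMHz) + 32.44;
}

const d = 0.05; // 50 m, e.g. an indoor/campus link (assumption)
const at5GHz = fsplDb(d, 5000); // ≈ 80.4 dB
const at6GHz = fsplDb(d, 6000); // ≈ 82.0 dB

// The band change itself costs under 2 dB of extra path loss.
console.log((at6GHz - at5GHz).toFixed(2)); // "1.58"
```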

There is a disconnect between theory and practice

While standards define what is possible, the purpose of industry bodies is to be future-focused, paving the way for the rest of the ecosystem to follow. What is possible in theory must be supported in practice, and the supply-side ecosystem, including network operators, system integrators (SIs), network equipment providers (NEPs) and hardware manufacturers, has a role to play if convergence is to become more widespread.

Similarly, for devices to access converged networks, they must be equipped with both 5G and Wi-Fi chips. While mobile phones support both connectivity types, the vast majority of connected devices that enterprises deploy are Wi-Fi only. Until 5G chips or modules become more widely available, and are used in a greater number of devices, convergence will likely remain confined to specific use cases: for example, those that depend on the mobility afforded by seamlessly 'switching over' from Wi-Fi to mobile, or highly mission-critical use cases in verticals such as manufacturing that can justify the investment in (private) 5G as a back-up to Wi-Fi. We discuss both of these use cases in more detail in the report. The full ecosystem must ultimately work in concert for convergence to become a realistic possibility for a larger number of enterprises.

 

Table of Contents

  • Executive Summary
    • Convergence is still immature on both the demand and supply sides
    • What do we mean by co-existence, convergence and competition?
  • Preface
  • Introduction
  • The market context for convergence
    • Industry bodies have been promoting convergence
    • Spectrum auctions
    • There is a disconnect between theory and practice
    • There are two key use cases for convergence
  • A future trend towards convergence is still immature
    • Regional differences in the maturity of 5G
    • Inconsistent definitions
    • Who manages convergence?
  • It is still too early to see high levels of demand for convergence from enterprise customers
    • Wi-Fi is the incumbent, 5G must overcome a number of barriers before it can become a genuine partner or alternative
    • Decisions regarding convergence are driven by industry characteristics
    • Supply side players must educate enterprise customers about convergence (if they believe it is beneficial to the enterprise)
  • Conclusion

Related research

Indoor wireless: A new frontier for IoT and 5G

Introduction to Indoor Wireless

A very large part of the usage of mobile devices – and mobile and other wireless networks – is indoors. Estimates vary but perhaps 70-80% of all wireless data is used while fixed or “nomadic”, inside a building. However, the availability and quality of indoor wireless connections (of all types) varies hugely. This impacts users, network operators, businesses and, ultimately, governments and society.

Whether the use-case is watching a YouTube video on a tablet from a sofa, booking an Uber from a phone in a company’s reception, or controlling a moving robot in a factory, the telecoms industry needs to give much more thought to the user-requirements, technologies and obstacles involved. This is becoming ever more critical as sensitive IoT applications emerge, which are dependent on good connectivity – and which don’t have the flexibility of humans. A sensor or piece of machinery cannot move and stand by a window for a better signal – and may well be in parts of a building that are inaccessible to both humans and many radio transmissions.

While mobile operators and other wireless service providers have important roles to play here, they cannot do everything, everywhere. They do not have the resources, and may lack site access. Planning, deploying and maintaining indoor coverage can be costly.

Indeed, the growing importance and complexity is such that a lot of indoor wireless infrastructure is owned by the building or user themselves – which then brings in further considerations for policymakers about spectrum, competition and more. There is a huge upsurge of interest in both improved Wi-Fi, and deployments of private cellular networks indoors, as some organisations recognise connectivity as so strategically-important they wish to control it directly, rather than relying on service providers. Various new classes of SP are emerging too, focused on particular verticals or use-cases.

In the home, wireless networks are also becoming a battleground for “ecosystem leverage”. Fixed and cable networks want to improve their existing Wi-Fi footprint to give “whole home” coverage worthy of gigabit fibre or cable connections. Cellular providers are hoping to swing some residential customers to mobile-only subscriptions. And technology firms like Google see home Wi-Fi as a pivotal element to anchor other smart-home services.

Large enterprise and “campus” sites like hospitals, chemical plants, airports, hotels and shopping malls each have complex on-site wireless characteristics and requirements. No two are alike – but all are increasingly dependent on wireless connections for employees, visitors and machines. Again, traditional “outdoors” cellular service-providers are not always best-placed to deliver this – but often, neither is anyone else. New skills and deployment models are needed, ideally backed with more cost-effective (and future-proofed) technology and tools.

In essence, there is a conflict between “public network service” and “private property” when it comes to wireless connectivity. For the fixed network, there is a well-defined “demarcation point” where a cable enters the building, and ownership and responsibilities switch from telco to building owner or end-user. For wireless, that demarcation is much harder to institutionalise, as signals propagate through walls and windows, often in unpredictable and variable fashion. Some large buildings even have their own local cellular base stations, and dedicated systems to “pipe the signal through the building” (distributed antenna systems, DAS).

Where is indoor coverage required?

There are numerous sub-divisions of “indoors”, each of which brings its own challenges, opportunities and market dynamics:

• Residential properties: houses & apartment blocks
• Enterprise “carpeted offices”, either owned/occupied, or multi-tenant
• Public buildings, where visitors are more numerous than staff (e.g. shopping malls, sports stadia, schools), and which may also have companies as tenants or concessions.
• Inside vehicles (trains, buses, boats, etc.) and across transport networks like metro systems or inside tunnels
• Industrial sites such as factories or oil refineries, which may blend “indoors” with “onsite”

In addition to these broad categories are assorted other niches, plus overlaps between the sectors. There are also other dimensions around scale of building, single-occupant vs. shared tenancy, whether the majority of “users” are humans or IoT devices, and so on.

In a nutshell: indoor wireless is complex, heterogeneous, multi-stakeholder and often expensive to deal with. It is no wonder that most mobile operators – and most regulators – focus on outdoor, wide-area networks, both for investment and for license rules on coverage. It is unreasonable to force a telco to provide coverage that reaches a subterranean, concrete-and-steel bank vault, when its engineers wouldn’t even be allowed access to it.

How much of a problem is indoor coverage?

Anecdotally, many locations have problems with indoor coverage – cellular networks are patchy, Wi-Fi can be cumbersome to access and slow, and GPS satellite location signals don’t work without line-of-sight to several satellites. We have all complained about poor connectivity in our homes or offices, or about needing to stand next to a window. With growing dependency on mobile devices, plus the advent of IoT devices everywhere, for increasingly important applications, good wireless connectivity is becoming more essential.

Yet hard data about indoor wireless coverage is also very patchy. UK regulator Ofcom is one of the few that reports on the availability and usability of cellular signals, and few regulators (Japan’s is another) enforce indoor coverage as part of spectrum licenses. It is clearly hard to measure: operators cannot do systematic “drive tests” indoors, while on-device measurements usually cannot determine whether the device is inside or outside without being invasive of the user’s privacy. Most operators and regulators therefore estimate indoor coverage from a sample of measurements, combined with knowledge of outdoor signal strength and typical building construction practices. The accuracy of these estimates – and how current their assumptions are – is highly questionable.
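As a minimal sketch of how such estimates are produced, the snippet below subtracts an assumed building entry loss (BEL) from a measured outdoor signal level. The BEL values are illustrative assumptions only; real losses vary with construction material, frequency and angle of incidence, and modern energy-efficient glazing can exceed 30dB.

```typescript
// Estimate the indoor signal level from an outdoor measurement at the facade.
function indoorLevelDbm(outdoorDbm: number, buildingEntryLossDb: number): number {
  return outdoorDbm - buildingEntryLossDb;
}

const outdoorRsrp = -95; // dBm at the facade (illustrative assumption)

console.log(indoorLevelDbm(outdoorRsrp, 12)); // older brick/glass: -107 dBm, likely usable
console.log(indoorLevelDbm(outdoorRsrp, 30)); // insulated modern glazing: -125 dBm, likely unusable
```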

Figure: Indoor coverage data is hard to find

Contents:

  • Executive Summary
  • Likely outcomes
  • What telcos need to do
  • Introduction to Indoor Wireless
  • Overview
  • Where is indoor coverage required?
  • How much of a problem is indoor coverage?
  • The key science lesson of indoor coverage
  • The economics of indoor wireless
  • Not just cellular coverage indoors
  • Yet more complications are on the horizon…
  • The role of regulators and policymakers
  • Systems and stakeholders for indoor wireless
  • Technical approaches to indoor wireless
  • Stakeholders for indoor wireless
  • Home networking: is Mesh Wi-Fi the answer?
  • Is outside-in cellular good enough for the home on its own?
  • Home Wi-Fi has complexities and challenges
  • Wi-Fi innovations will perpetuate its dominance
  • Enterprise/public buildings and the rise of private cellular and neutral host models
  • Who pays?
  • Single-operator vs. multi-operator: enabling “neutral hosts”
  • Industrial sites and IoT
  • Conclusions
  • Can technology solve MNOs’ “indoor problem”?
  • Recommendations

Figures:

  • Indoor coverage data is hard to find
  • Insulation impacts indoor penetration significantly
  • 3.5GHz 5G might give acceptable indoor coverage
  • Indoor wireless costs and revenues
  • In-Building Wireless faces a dynamic backdrop
  • Key indoor wireless architectures
  • Different building types, different stakeholders
  • Whole-home meshes allow Wi-Fi to reach all corners of the building
  • Commercial premises now find good wireless essential
  • Neutral Hosts can offer multi-network coverage to smaller sites than DAS
  • Every industrial sector has unique requirements for wireless

Net Neutrality 2021: IoT, NFV and 5G ready?

Introduction

It’s been a while since STL Partners last tackled the thorny issue of Net Neutrality. In our 2010 report Net Neutrality 2.0: Don’t Block the Pipe, Lubricate the Market we made a number of recommendations, including that a clear distinction should be established between ‘Internet Access’ and ‘Specialised Services’, and that operators should be allowed to manage traffic within reasonable limits providing their policies and practices were transparent and reported.

Perhaps unsurprisingly, the decade-long legal and regulatory wrangling is still rumbling on, albeit with rather more detail and nuance than in the past. Some countries have now implemented laws with varying severity, while other regulators have been more advisory in their rules. The US, in particular, has been mired in debate about the process and authority of the FCC in regulating Internet matters, but the current administration and courts have leaned towards legislating for neutrality, against (most) telcos’ wishes. The political dimension is never far away from the argument, especially given the global rise of anti-establishment movements and parties.

Some topics have risen in importance (such as where zero-rating fits in), while others seem to have been mostly-agreed (outright blocking of legal content/apps is now widely dismissed by most). In contrast, discussion and exploration of “sender-pays” or “sponsored” data appears to have reduced, apart from niches and trials (such as AT&T’s sponsored data initiative), as it is both technically hard to implement and suffers from near-zero “willingness to pay” by suggested customers. Some more-authoritarian countries have implemented their own “national firewalls”, which block specific classes of applications, or particular companies’ services – but this is somewhat distinct from the commercial, telco-specific view of traffic management.

In general, the focus of the Net Neutrality debate is shifting to pricing issues, often in conjunction with the influence/openness of major web and app “platform players” such as Facebook or Google. Some telco advocates have opportunistically tried to link Net Neutrality to claimed concerns over “Platform Neutrality”, although that discussion is now largely separate and focused more on bundling and privacy concerns.

At the same time, there is still some interest in differential treatment of Internet traffic in terms of Quality of Service (QoS) – and also, a debate about what should be considered “the Internet” vs. “an internet”. The term “specialised services” crops up in various regulatory instruments, notably in the EU – although its precise definition remains fluid. In particular, the rise of mobile broadband for IoT use-cases, and especially the focus on low-latency and critical-communications uses in future 5G standards, almost mandate the requirement for non-neutrality, at some levels at least. It is much less-likely that “paid prioritisation” will ever extend to mainstream web-access or mobile app data. Large-scale video streaming services such as Netflix are perhaps still a grey area for some regulatory intervention, given the impact they have on overall network loads. At present, the only commercial arrangements are understood to be in CDNs, or paid-peering deals, which are (strictly speaking) nothing to do with Net Neutrality per most definitions. We may even see pressure for regulators to limit fees charged for Internet interconnect and peering.

This report first looks at the changing focus of the debate, then examines the underlying technical and industry drivers that are behind the scenes. It then covers developments in major countries and regions, before giving recommendations for various stakeholders.

STL Partners is also preparing a broader research piece on overall regulatory trends, to be published in the next few months as part of its Executive Briefing Service.

What has changed?

Where have we come from?

If we wind the clock back a few years, the Net Neutrality debate was quite different. Around 2012/13, the typical talking-points were subjects such as:

  • Whether mobile operators could block messaging apps like WhatsApp, VoIP services like Skype, or somehow charge those types of providers for network access / interconnection.
  • If fixed-line broadband providers could offer “fast lanes” for Netflix or YouTube traffic, often conflating arguments about access-network links with core-network peering capacity.
  • Rhetoric about the so-called “sender-pays” concept, with some lobbying to introduce settlements for data traffic reminiscent of telephony’s calling-party-pays model.
  • Using DPI (deep packet inspection) to discriminate between applications and charge for “a la carte” Internet access plans, at a granular level (e.g. per hour of view watched, or per social-network used).
  • The application of “two-sided business models”, with Internet companies paying for data capacity and/or quality on behalf of end-users.

Since then, many things have changed. Specific countries’ and regions’ laws will be discussed in the next section, but the last four years have seen major developments in the Netherlands, the US, Brazil, the EU and elsewhere.

At one level, the regulatory and political shifts can be attributed to the huge rise in the number of lobby groups on both Internet and telecom sides of the Neutrality debate. However, the most notable shift has been the emergence of consumer-centric pro-Neutrality groups, such as Access Now, EDRi and EFF, along with widely-viewed celebrity input from the likes of comedian John Oliver. This has undoubtedly led to the balance of political pressure shifting from large companies’ lawyers towards (sometimes slogan-led) campaigning from the general public.

But there have also been changes in the background trends of the Internet itself, telecom business models, and consumers’ and application developers’ behaviour. (The key technology changes are outlined in the section after this one.) Various experiments and trials have been run, with a mix of successes and failures.

Another important background trend has been the unstoppable momentum of particular apps and content services, on both fixed and mobile networks. Telcos are now aware that they are likely to be judged on how well Facebook or Spotify or WeChat or Netflix perform – so they are much less-inclined to indulge in regulatory grand-standing about having such companies “pay for the infrastructure” or be blocked. Essentially, there is tacit recognition that access to these applications is why customers are paying for broadband in the first place.

These considerations have shifted the debate in many important areas, making some of the earlier ideas unworkable, while other areas have come to the fore. Two themes stand out:

  • Zero-rating
  • Specialised services

Content:

  • Executive summary
  • Contents
  • Introduction
  • What has changed?
  • Where have we come from?
  • Zero-rating as a battleground
  • Specialised services & QoS
  • Technology evolution impacting Neutrality debate
  • Current status
  • US
  • EU
  • India
  • Brazil
  • Other countries
  • Conclusions
  • Recommendations

Connectivity for telco IoT / M2M: Are LPWAN & WiFi strategically important?

Introduction

5G, WiFi, GPRS, NB-IoT, LTE-M & LTE Categories 1 & 0, SigFox, Bluetooth, LoRa, Weightless-N & Weightless-P, ZigBee, EC-GSM, Ingenu, Z-Wave, Nwave, various satellite standards, optical/laser connections and more: the list of current or proposed wireless network technologies for the “Internet of Things” seems to grow longer by the day. Some are long-range, some short. Some are high power/bandwidth, some low. Some are standardised, some proprietary. And while most devices will have some form of wireless connection, certain categories will use fibre or other fixed-network interfaces.

There is no “one-size fits all”, although some hope that 5G will ultimately become an “umbrella” for many of them, in the 2020 time-frame and beyond. But telcos, especially mobile operators, need to consider which they will support in the shorter-term horizon, and for which M2M/IoT use-cases. That universe is itself expanding too, with new IoT products and systems being conceived daily, spanning everything from hobbyists’ drones to industrial robots. All require some sort of connectivity, but the range of costs, data capabilities and robustness varies hugely.

Two over-riding question themes emerge:

  • What are the business cases for deploying IoT-centric networks – and are they dependent on offering higher-level management or vertical solutions as well? Is offering connectivity – even at very low prices/margins – essential for telcos to ensure relevance and differentiate against IoT market participants?
  • What are the longer-term strategic issues around telcos supporting and deploying proprietary or non-3GPP networking technologies? Is the diversity a sensible way to address short-term IoT opportunities, or does it risk further undermining the future primacy of telco-centric standards and business models? Either way telcos need to decide how much energy they wish to expend, before they embrace the inevitability of alternative competing networks in this space.

This report specifically covers IoT-centric network connectivity. It fits into Telco 2.0’s Future of the Network research stream, and also intersects with our other ongoing work on IoT/M2M applications, including verticals such as the connected car, connected home and smart cities. It focuses primarily on new network types, rather than marketing/bundling approaches for existing services.

The Executive Briefing report IoT – Impact on M2M, Endgame and Implications from March 2015 outlined three strategic areas of M2M business model innovation for telcos:

  • Improve existing M2M operations: Dedicated M2M business units structured around priority verticals with dedicated resources. Such units allow telcos to tailor their business approach and avoid being constrained by traditional strategies that are better suited to mobile handset offerings.
  • Move into new areas of M2M: Expansion along the value chain through both acquisitions and partnerships, and the formation of M2M operator ‘alliances.’
  • Explore the Internet of Things: Many telcos have been active in the connected home e.g. AT&T Digital Life. However, outsiders are raising the connected home (and IoT) opportunity stakes: Google, for example, acquired Nest for $3.2 billion in 2014.
Figure 2: The M2M Value Chain

 

Source: STL Partners, More With Mobile

In the nine months since that report was published, a number of important trends have emerged in the M2M / IoT space:

  • A growing focus on the value of the “industrial Internet”, where sensors and actuators are embedded into offices, factories, agriculture, vehicles, cities and other locations. New use-cases and applications abound on both near- and far-term horizons.
  • A polarisation in discussion between ultra-fast/critical IoT (e.g. for vehicle-to-vehicle control) vs. low-power/cost IoT (e.g. distributed environmental sensors with 10-year battery life). 2015 discussion of IoT connectivity has been dominated by futuristic visions of 5G, or faster-than-expected deployment of LPWANs (low-power wide-area networks), especially based on new platforms such as SigFox or LoRa Alliance.
  • Comparatively slow emergence of dedicated individual connections for consumer IoT devices such as watches / wearables. With the exception of connected cars, most mainstream products connect via local “capillary” networks (e.g. Bluetooth and WiFi) to smartphones or home gateways acting as hubs, or a variety of corporate network platforms. The arrival of embedded SIMs might eventually lead to more individually-connected devices, but this has not materialised in volume yet.
  • Continued entry, investment and evolution of a broad range of major companies and start-ups, often with vastly different goals, incumbencies and competencies to telcos. Google, IBM, Cisco, GE, Intel, utility firms, vehicle suppliers and 1000s of others are trying to carve out roles in the value chain.
  • Growing impatience among some in the telecom industry with the pace of standardisation for some IoT-centric developments. A number of operators have looked outside the traditional cellular industry suppliers and technologies, eager to capitalise on short-term growth especially in LPWAN and in-building local connectivity. In response, vendors including Huawei, Ericsson and Qualcomm have stepped up their pace, although fully-standardised solutions are still some way off.

Connectivity in the wider M2M/IoT context

It is not always clear what the difference is between M2M and IoT, especially at a connectivity level. They now tend to be used synonymously, although the latter is definitely newer and “cooler”. Various vendors have their own spin on this – Cisco’s “Internet of Everything”, and Ericsson’s “Networked Society”, for example. It is also a little unclear where the IoT part ends, and the equally vague term “networked services” begins. It is also important to recognise that a sizeable part of the future IoT technology universe will not be based on “services” at all, although “user-owned” devices and systems are much harder for telcos to monetise.

An example might be a government encouraging adoption of electric vehicles. Cars and charging points are “things” which require data connections. At one level, an IoT application may simply guide drivers to their closest available power-source, but a higher-level “societal” application will collate data from both the IoT network and other sources. Thus data might also flow from bus and train networks, as well as traffic sensors, pollution monitors and even fitness trackers for walking and cycling, to see overall shifts in transport habits and help “nudge” commuters’ behaviour through pricing or other measures. In that context, the precise networks used to connect to the end-points become obscured in the other layers of software and service – although they remain essential building blocks.

Figure 3: Characterising the difference between M2M and IoT across six domains

Source: STL Partners, More With Mobile

(Note: the Future of Network research stream generally avoids using vague and loaded terms like “digital” and “OTT”. While concise, we believe they are often used in ways that guide readers’ thinking in wrong or unhelpful directions. Words and analogies are important: they can lead or mislead, often sub-consciously).

Often, it seems that the word “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Some may actually involve analogue technology as well as digital. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords, are becoming important.

Figure 4: ‘Digital’ and IoT convergence

Source: STL Partners, More With Mobile

We will also likely see many hybrid solutions emerging, for example where dedicated devices are combined with smartphones/PCs for particular functions. Thus a “digital home” service may link alarms, heating sensors, power meters and other connections via a central hub/console – but also send alerts and data to a smartphone app. It is already quite common for consumer/business drones to be controlled via a smartphone or tablet.

In terms of connectivity, it is also worth noting that “M2M” generally just refers to the use of conventional cellular modems and networks – especially 2G/3G. IoT expands this considerably – as well as future 5G networks and technologies being specifically designed with new use-cases in mind, we are also seeing the emergence of a huge range of dedicated 4G variants, plus new purpose-designed LPWAN platforms. IoT also intersects with the growing range of local/capillary network technologies – which are often overlooked in conventional discussions about M2M.

Figure 5: Selected Internet of Things service areas

Source: STL Partners

The larger the number…

…the less relevance and meaning it has. We often hear of an emerging world of 20bn, 50bn, even trillions of devices being “networked”. While making for good headlines and press-releases, such numbers can be distracting.

While we will definitely be living in a transformed world, with electronics around us all the time – sensors, displays, microphones and so on – that does not easily translate into opportunities for telecom operators. The correct role for such data and forecasts is in the context of a particular addressable opportunity – otherwise one risks counting toasters, alongside sensors in nuclear power stations. As such, this report does not attempt to compete in counting “things” with other analyst firms, although references are made to approximate volumes.

For example, consider a typical large, modern building. It’s common to have temperature sensors, CCTV cameras, alarms for fire and intrusion, access control, ventilation, elevators and so forth. There will be an internal phone system, probably LAN ports at desks and WiFi throughout. In future it may have environmental sensors, smart electricity systems, charging points for electric vehicles, digital advertising boards and more. Yet the main impact on the telecom industry is just a larger Internet connection, and perhaps some dedicated lines for safety-critical systems like the fire alarm. There may well be 1,000 or 10,000 connected “things”, and yet for a cellular operator the building is more likely to be a future driver of cost (e.g. for in-building radio coverage for occupants’ phones) rather than extra IoT revenue. Few of the building’s new “things” will have SIM cards and service-based radio connections in any case – most will link into the fixed infrastructure in some way.

One also has to doubt some of the predicted numbers – there is considerable vagueness and hand-waving inherent in the forecasts. If a car in 2020 has 10 smart sub-systems, and 100 sensors reporting data, does that count as 1, 10 or 100 “things” connected? Is the key criterion that smart appliances in a connected home are bought individually – and therefore might be equipped with individual wide-area network connections? When such data points are then multiplied-up to give traffic forecasts, there are multiple layers of possible mathematical error.
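The point about counting rules compounds quickly once headline figures are extrapolated. The sketch below (all numbers hypothetical) shows the same fleet of cars yielding totals two orders of magnitude apart, depending on whether a vehicle, a sub-system or a sensor counts as a “thing”.

```typescript
// Hypothetical illustration of the IoT counting problem described above.
const carsSold = 10_000_000; // assumed annual vehicle sales

const countingRules: Record<string, number> = {
  perVehicle: 1,     // the car is one "thing"
  perSubsystem: 10,  // each smart sub-system counts
  perSensor: 100,    // every sensor counts
};

for (const [rule, multiplier] of Object.entries(countingRules)) {
  console.log(`${rule}: ${(carsSold * multiplier).toLocaleString()} things`);
}
// perVehicle: 10,000,000 / perSubsystem: 100,000,000 / perSensor: 1,000,000,000
```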

This highlights the IoT quantification dilemma – everyone focuses on the big numbers, many of which are simple spreadsheet extrapolations, made without much consideration of the individual use-cases. And the larger the headline number, the less-likely the individual end-points will be directly addressed by telcos.

 

  • Executive Summary
  • Introduction
  • Connectivity in the wider M2M/IoT context
  • The larger the number…
  • The IoT network technology landscape
  • Overview – it’s not all cellular
  • The emergence of LPWANs & telcos’ involvement
  • The capillarity paradox: ARPU vs. addressability
  • Where does WiFi fit?
  • What will the impact of 5G be?
  • Other technology considerations
  • Strategic considerations
  • Can telcos compete in IoT without connectivity?
  • Investment vs. service offer
  • Regulatory considerations
  • Are 3GPP technologies being undermined?
  • Risks & threats
  • Conclusion

 

  • Figure 1: Telcos can only fully monetise “things” they can identify uniquely
  • Figure 2: The M2M Value Chain
  • Figure 3: Characterising the difference between M2M and IoT across six domains
  • Figure 4: ‘Digital’ and IoT convergence
  • Figure 5: Selected Internet of Things service areas
  • Figure 6: Cellular M2M is growing, but only a fraction of IoT overall
  • Figure 7: Wide-area IoT-related wireless technologies
  • Figure 8: Selected telco involvement with LPWAN
  • Figure 9: Telcos need to consider capillary networks pragmatically
  • Figure 10: Major telco types mapped to relevant IoT network strategies

Do network investments drive creation & sale of truly novel services?

Introduction

History: The network is the service

Before looking at how current network investments might drive future generations of telco-delivered services, it is worth considering some of the history, and examining how we got where we are today.

Most obviously, the original network build-outs were synonymous with the services they were designed to support. Both fixed and mobile operators started life as “phone networks”, with analogue or electro-mechanical switches. (Their earlier predecessors were designed to serve the telegraph and pagers, respectively.) Cable operators began as conduits for analogue TV signals. These evolved to support digital switches of various types, as well as using IP connections internally.

From the 1980s onwards, it was hoped that future generations of telecom services would be enabled by, and delivered from, the network itself – hence acronyms like ISDN (Integrated Services Digital Network) and IN (Intelligent Network).

But the earliest signs that “digital services” might come from outside the telecom network were evident even at that point. Large companies built up private networks to support their own phone systems (PBXs). Various 3rd-party “value-added network” (VAN) and “electronic data interchange” (EDI) services emerged in industries such as the automotive sector, finance and airlines. And from the early 1990s, consumers started to get access to bulletin boards and early online services like AOL and CompuServe, accessed using dial-up modems.

And then, around 1994, the first web browsers were introduced, and the model of Internet access and ISPs took off, initially with narrowband connections using modems, but then swiftly evolving to ADSL-based broadband. From then onwards, the bulk of new consumer “digital services” were web-based, or used other Internet protocols such as email and private messaging. At the same time, businesses evolved their own private data networks (using telco “pipes” such as leased-lines, frame-relay and the like), supporting their growing client/server computing and networked-application needs.

Figure 1: In recent years, most digital services have been “non-network” based

Source: STL Partners

For fixed broadband, Internet access and corporate data connections have mostly dominated ever since, with rare exceptions such as Centrex phone and web-hosting services for businesses, or alarm-monitoring for consumers. The first VoIP-based carrier telephony service only emerged in 2003, and uptake has been slow and patchy – there is still a dominance of old, circuit-based fixed phone connections in many countries.

More recently, a few more “fixed network-integrated” offers have evolved – cloud platforms for businesses’ voice, UC and SaaS applications, content delivery networks, and assorted consumer-oriented entertainment/IPTV platforms. And in the last couple of years, operators have started to use their broadband access for a wider array of offers such as home-automation, or “on-boarding” Internet content sources into set-top box platforms.

The mobile world started evolving later – mainstream cellular adoption only really took off around 1995. Most mobile services prior to 2005 were either integrated directly into the network (e.g. telephony, SMS, MMS) or provided by operators through dedicated service delivery platforms (e.g. DoCoMo’s i-mode, and Verizon’s BREW store). Some early digital services such as custom ringtones were available via 3rd-party channels, but even they were typically charged and delivered via SMS. The “mobile Internet” between 1999-2004 was delivered via specialised WAP gateways and servers, implemented in carrier networks. The huge 3G spectrum licence awards around 2000-2002 were made on the assumption that telcos would continue to act as creators or gatekeepers for the majority of mobile-delivered services.

It was only around 2005-6 that “full Internet access” started to become available for mobile users, both for those with early smartphones such as Nokia/Symbian devices, and via (quite expensive) external modems for laptops. In 2007 we saw two game-changers emerge – the first-generation Apple iPhone, and Huawei’s USB 3G modem. Both catalysed the wide adoption of the consumer “data plan”- hitherto almost unknown. By 2010, there were virtually no new network-based services, while the “app economy” and “vanilla” Internet access started to dominate mobile users’ behaviour and spending. Even non-Internet mobile services such as BlackBerry BES were offered via alternative non-telco infrastructure.

Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7

Source: Disruptive Analysis

By 2013, there had still been very few successful mobile digital-services offers actually anchored in cellular operators’ infrastructure. There were a few positive signs in the M2M sphere and in wholesaled SMS APIs, but other integrated propositions such as mobile network-based TV largely failed. Once again the transition to IP-based carrier telephony has been slow – VoLTE is gaining grudging acceptance more from necessity than desire, while “official” telco messaging services like RCS have been abject failures. Neither can be described as “digital innovation”, either – there is little new in them.

The last two years, however, have seen the emergence of some “green shoots” for mobile services. Some new partnering / charging models have borne fruit, with zero-rated content/apps becoming quite prevalent, and a handful of developer platforms finally starting to gain traction, offering network-based features such as location awareness. Various M2M sectors such as automotive connectivity and smart metering have evolved. But the bulk of mobile “digital services” have been geared around iOS and Android apps, anchored in the cloud rather than telcos’ networks.

So in 2015, we are in a situation where the majority of “cool” or “corporate” services in both mobile and fixed worlds owe little to “the network” beyond fast IP connectivity: the feared, mythical (and factually-incorrect) “dumb pipe”. Connected “general-purpose” devices like PCs and smartphones are optimised for service delivery via the web and mobile apps. Broadband-connected TVs are partly used for operator-provided IPTV, but also for so-called “OTT” services such as Netflix.

And future networks and novel services? As discussed below, there are some positive signs stemming from virtualisation and some new organisational trends at operators to encourage innovative services – but it is not yet clear that they will be enough to overcome the open Internet’s sustained momentum.

What are so-called “digital services”?

It is impossible to visit a telecoms conference, or read a vendor press-release, without being bombarded by the word “digital” in a telecom context. Digital services, digital platforms, digital partnerships, digital agencies, digital processes, digital transformation – and so on.

It seems that despite the first digital telephone exchanges being installed in the 1980s, and digital computing being de rigueur since the 1950s, the telecoms industry’s marketing people have decided that 2015 is when the transition really occurs. But when the chaff is stripped away, what does it really mean, especially in the context of service innovation and the network?

Often, it seems that “digital” is just a convenient cover, to avoid admitting that a lot of services are based on the Internet and provided over generic data connections. But there is more to it than that. Some “digital services” are distinctly non-Internet in nature (for example, if delivered “on-net” from set-top boxes). New IoT and M2M propositions may never involve any interaction with the web as we know it. Hybrids where apps use some telco network-delivered ingredients (via APIs), such as identity or one-time SMS passwords, are becoming important.

And in other instances the “digital” phrases relate to relatively normal services – but deployed and managed in a much more efficient and automated fashion. This is quite important, as a lot of older services still rely on “analogue” processes – manual configuration, physical “truck rolls” to install and commission, and high “touch” from sales or technical support people to sell and operate, rather than self-provisioning and self-care through a web portal. Here, the correct term is perhaps “digital transformation” (or even more prosaically simply “automation”), representing a mix of updated IP-based networks, and more modern and flexible OSS/BSS systems to drive and bill them.

STL identifies three separate mechanisms by which network investments can impact creation and delivery of services:

  • New networks directly enable the supply of wholly new services. For example, some IoT services or mobile gaming applications would be impossible without low-latency 4G/5G connections, more comprehensive coverage, or automated provisioning systems.
  • Network investment changes the economics of existing services, for example by removing costly manual processes, or radically reducing the cost of service delivery (e.g. fibre backhaul to cell sites).
  • Network investment occurs hand-in-hand with other changes, thus indirectly helping drive new service evolution – such as development of “partner on-boarding” capabilities or API platforms, which themselves require network “hooks”.

While the future will involve a broader set of content/application revenue streams for telcos, it will also need to support more, faster and differentiated types of data connections. Top of the “opportunity list” is the support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. Many of these will not involve connection via the “public Internet” and therefore there is a possibility for new forms of connectivity proposition or business model – faster- or lower-powered networks, or perhaps even the much-discussed but rarely-seen monetisation of “QoS” (Quality of Service). Even if not paid for directly, QoS could perhaps be integrated into compelling packages and data-service bundles.

There is also the potential for more “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network and “orchestrated” appropriately by the operator. (We covered this area in depth in the recent Telco 2.0 brief Mobile Edge Computing: How 5G is Disrupting Cloud and Network Strategy Today.)

In other words, virtualisation and the “software network” might allow truly new services, not just providing existing services more easily. That said, even if the answer is that the network could make a large-enough difference, there are still many extra questions about timelines, technology choices, business models, competitive and regulatory dynamics – and the practicalities and risks of making it happen.

Part of the complexity is that many of these putative new services will face additional sources of competition and/or substitution by other means. A designer of a new communications service or application has many choices about how to turn the concept into reality. Basing network investments on specific predictions of narrow services has a huge amount of risk, unless they are agreed clearly upfront.

But there is also another latent truth here: without ever-better (and more efficient) networks, the telecom industry is going to get further squeezed anyway. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Businesses and governments will look to manage their networking and communications costs – and may get access to dark fibre or spectrum to build their own networks, if commercial services don’t continue to improve in terms of price-performance. New connectivity options are springing up too, from WiFi to drones to device-to-device connections.

In other words: some network investment will be “table stakes” for telcos, irrespective of any new digital services. In many senses, the new propositions are “upside” rather than the fundamental basis justifying capex.

 

  • Executive Summary
  • Introduction
  • History: The network is the service
  • What are so-called “digital services”?
  • Service categories
  • Network domains
  • Enabler, pre-requisite or inhibitor?
  • Overview
  • Virtualisation
  • Agility & service enablement
  • More than just the network: lead actor & supporting cast
  • Case-studies, examples & counter-examples
  • Successful network-based novel services
  • Network-driven services: learning from past failures
  • The mobile network paradox
  • Conclusion: Services, agility & the network
  • How do so-called “digital” services link to the network?
  • Which network domains can make a difference?
  • STL Partners and Telco 2.0: Change the Game

 

  • Figure 1: In recent years, most digital services have been “non-network” based
  • Figure 2: Mobile data services only shifted to “open Internet” plans around 2006-7
  • Figure 3: Network spend both “enables” & “prevents inhibition” of new services
  • Figure 4: Virtualisation brings classic telco “Network” & “IT” functions together
  • Figure 5: Virtualisation-driven services: Cloud or Network anchored?
  • Figure 6: Service agility is multi-faceted. Network agility is a core element
  • Figure 7: Using Big Data Analytics to Predictively Cache Content
  • Figure 8: Major cablecos even outdo AT&T’s stellar performance in the enterprise
  • Figure 9: Mapping network investment areas to service opportunities

Key Questions for NextGen Broadband Part 1: The Business Case

Introduction

It’s almost a cliché to talk about “the future of the network” in telecoms. We all know that broadband and network infrastructure is a never-ending continuum that evolves over time – its “future” is continually being invented and reinvented. We also all know that no two networks are identical, and that despite standardisation there are always specific differences, because countries, regulations, user-bases and legacies all vary widely.

But at the same time, the network clearly matters still – perhaps more than it has for the last two decades of rapid growth in telephony and SMS services, which are now dissipating rapidly in value. While there are certainly large swathes of the telecom sector benefiting from content provision, commerce and other “application-layer” activities, it is also true that the bulk of users’ perceived value is in connectivity to the Internet, IPTV and enterprise networks.

The big question is whether CSPs can continue to convert that perceived value from users into actual value for the bottom-line, given the costs and complexities involved in building and running networks. That is the paradox.

While the future will continue to feature a broader set of content/application revenue streams for telcos, it will also need to support not just more and faster data connections but also a set of new challenges and opportunities. Top of the list is support for “Connected Everything” – the so-called Internet of Things, smart homes, connected cars, mobile healthcare and so on. There is a significant chance that many of these will not involve connection via the “public Internet”, and therefore there is a possibility for new forms of connectivity proposition to evolve – faster- or lower-powered networks, or perhaps even the semi-mythical “QoS”, which, if not paid for directly, could perhaps be integrated into compelling packages and data-service bundles. There is also the potential for “in-network” value to be added through SDN and NFV – for example, via distributed servers close to the edge of the network and “orchestrated” appropriately by the operator. But does this add more value than investing in more web/OTT-style applications and services, de-coupled from the network?

Again, this raises questions about technology, business models – and the practicalities of making it happen.

This plays directly into the concept of the revenue “hunger gap” we have analysed for the past two years – without ever-better (but more efficient) networks, the telecom industry is going to get further squeezed. While service innovation is utterly essential, it also seems to be slow-moving and patchy. The network part of telcos needs to run just to stand still. Consumers will adopt more and faster devices, better cameras and displays, and expect network performance to keep up with their 4K videos and real-time games, without paying more. Depending on the trajectory of regulatory change, we may also see more consolidation among parts of the service provider industry, more quad-play networks, more sharing and wholesale models.

We also see communications networks and applications permeating deeper into society and government. There is a sense among some policymakers that “telecoms is too important to leave up to the telcos”, with initiatives like Smart Cities and public-safety networks often becoming decoupled from the mainstream of service providers. There is an expectation that technology – and by extension, networks – will enable better economies, improved healthcare and education, safer and more efficient transport, mechanisms for combatting crime and climate change, and new industries and jobs, even as old ones become automated and robotised.

Figure 1: New services are both network-integrated & independent

 

Source: STL Partners

And all of this generates yet more uncertainty, with yet more questions – some about the innovations needed to support these new visions, but also whether they can be brought to market profitably, given the starting-point we find ourselves at, with fragmented (yet growing) competition, regulatory uncertainty, political interference – and often, internal cultural barriers within the CSPs themselves. Can these be overcome?

A common theme from the section above is “Questions”. This document – and a forthcoming “sequel” – is intended to group, lay out and introduce the most important ones. Most observers just tend to focus on a few areas of uncertainty, but in setting up the next year or so of detailed research, Telco 2.0 wants to fully list and articulate all of the hottest issues. Only once they are collated can we start to work out the priorities – and inter-dependencies.

Our belief is that all of the detailed questions on “Future Networks” can, in fact, be tied back to one of two broader, over-arching themes:

  • What are the business cases and operational needs for future network investment?
  • Which disruptions (technological or other) are expected in the future?

The business case theme is covered in this document. It combines future costs (spectrum, 4G/5G/fibre deployments, network-sharing, virtualisation, BSS/OSS transformation etc.) and revenues (data connectivity, content, network-integrated service offerings, new Telco 2.0-style services and so on). It also encompasses what is essential to make the evolution achievable, in terms of organisational and cultural transformation within telcos.

A separate Telco 2.0 document, to be published in coming weeks, will cover the various forthcoming disruptions. These are expected to include new network technologies that will ultimately coalesce to form 5G mobile and new low-power wireless, as well as FTTx and DOCSIS cable evolution. In addition, virtualisation in both NFV and SDN guises will be hugely transformative.

There is also a growing link between mobile and fixed domains, reflected in quad-play propositions, industry consolidation, and the growth of small-cells and WiFi with fixed-line backhaul. In addition, to support future service innovation, there need to be adequate platforms for both internal and external developers, as well as a meaningful strategy for voice/video which fits with both network and end-user trends. Beyond the technical, additional disruption will be delivered by regulatory change (for example on spectrum and neutrality), and also a reshaped vendor landscape.

The remainder of this report lays out the first five of the Top 10 most important questions for the Future Network. We can’t give definitive analyses, explanations or “answers” in a report of this length – and indeed, many of them are moving targets anyway. But by taking a holistic approach to laying out each question properly – where it comes from, and what the “moving parts” are – we help to define the landscape. The objective is to help management teams apply those same filters to their own organisations, understand how costs can be controlled and revenues garnered, see where consolidation and regulatory change might help or hinder, and deal with users’ and governments’ increasing expectations.

The 10 Questions also lay the ground for our new Future Network research stream, forthcoming publications and comment/opinion.

Overview: what is the business case for Future Networks?

As later sections of both this document and the second in the series cover, there are various upcoming technical innovations in the networking pipeline. Numerous advanced radio technologies underpin 4.5G and 5G, there is ongoing work to improve fibre and DSL/cable broadband, virtualisation promises much greater flexibility in carrier infrastructure and service enablement, and so on. But all those advances are predicated on either (ideally) more revenues, or at least reduced costs to deploy and operate. All require economic justification for investment to occur.

This is at the core of the Future Networks dilemma for operators – what is the business case for ongoing investment? How can the executives, boards of directors and investors be assured of returns? We all know about the ongoing shift of business & society online, the moves towards smarter cities and national infrastructure, changes in entertainment and communication preferences and, of course, the Internet of Things – but how much benefit and value might accrue to CSPs? And is that value driven by network investments, or should telecom companies re-focus their investments and recruitment on software, content and the cloud?

This is not a straightforward question. There are many in the industry that assert that “the network is the key differentiator & source of value”, while others counter that it is a commodity and that “the real value is in the services”.

What is clear is that better/faster networks will be needed in any case, to achieve some of the lofty goals that are being suggested for the future. However, it is far from clear how much of the overall value-chain profit can be captured from just owning the basic machinery – recent years have shown a rapid de-coupling of network and service, apart from a few areas.

In the past, networks largely defined the services offered – most notably broadband access, phone calls and SMS, as well as cable TV and IPTV. But with the ubiquitous rise of Internet access and service platforms/gateways, an ever-increasing amount of service “logic” is located on the web, or in the cloud – not enshrined in the network itself. This is an important distinction – some services are abstracted and designed to be accessed from any network, while others are intimately linked to the infrastructure.

Over the last decade, the prevailing shift has been for network-independent services. In many ways “the web has won”. Potentially this trend may reverse in future though, as servers and virtualised, distributed cloud capabilities get pushed down into localised network elements. That, however, brings its own new complexities, uncertainties and challenges – it is a brave (or foolhardy) telco CEO that would bet the company on new in-network service offers alone. We will also see API platforms expose network “capabilities” to the web/cloud – for example, W3C is working on standards to allow web developers to gain insights into network congestion, or users’ data-plans.
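
As a concrete example of this direction of travel, the experimental Network Information API (shipped in Chromium-based browsers) already exposes coarse network hints to web code. A minimal sketch, with property names as per the current draft (they may change):

```typescript
// Reading network hints exposed to web apps via the experimental
// W3C Network Information API (Chromium-based browsers only).
// The cast is needed because the API is not in standard TS DOM typings.
const conn = (navigator as any).connection;

if (conn) {
  console.log(`Effective type: ${conn.effectiveType}`); // e.g. "4g", "3g"
  console.log(`Downlink estimate: ${conn.downlink} Mbps`);
  console.log(`Round-trip estimate: ${conn.rtt} ms`);
  console.log(`Data-saver requested: ${conn.saveData}`);

  // React to changing conditions, e.g. switch to lighter assets
  conn.addEventListener('change', () => {
    if (conn.effectiveType !== '4g' || conn.saveData) {
      // application-specific fallback to low-bitrate content
    }
  });
}
```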

But currently, the trend is for broadband access and (most) services to be de-coupled. Nonetheless, some operators seem to have been able to make clever pricing, distribution and marketing decisions (supported by local market conditions and/or regulation) that make bundles desirable.

US operators, for example, have generally fared better than European CSPs, in what should have been comparably-mature markets. But was that due to a faster shift to 4G networks? Or other factors, such as European telecom fragmentation and sub-scale national markets, economic pressures, or perhaps a different legacy base? Did the broad European adoption of pre-paid (and often low-ARPU) mobile subscriptions make it harder to justify investments on the basis of future cashflows – or was it more about the early insistence that 2.6GHz was going to be the main “4G band”, with its limitations later coming back to bite people? It is hard to tease apart the technology issues from the commercial ones.

Similar differences apply in the fixed-broadband world. Why has adoption and typical speed varied so much? Why have some markets preferred cable to DSL? Why are fibre deployments patchy and very nation-specific? Is it about the technology involved – or the economy, topography, government policies, or the shape of the TV/broadcast sector?

Understanding these issues – and, once again, articulating the questions properly – is core to understanding the future for CSPs’ networks. We are in the middle of 4G rollout in most countries, with operators looking at the early requirements for 5G. SDN and NFV are looking important – but their exact purpose, value and timing still remain murky, despite the clear promises. Can fibre rollouts – FTTC or FTTH – still be justified in a world where TV/video spend is shifting away from linear programming and towards online services such as Netflix?

Given all these uncertainties, it may be that either network investments get slowed down – or else consolidation, government subsidy or other top-level initiatives are needed to stimulate them. On the other hand, it could be the case that reduced capex and opex – perhaps through outsourcing, sharing or software-based platforms, or even open-source technology – make the numbers work out well, even for raw connectivity. Certainly, the last few years have seen rising expenditure by end-users on mobile broadband, even if it has also contributed to the erosion of legacy services such as telephony and SMS, by enabling more modern/cheaper rivals. We have also seen a shift to lower-cost network equipment and software suppliers, and an emphasis on “off the shelf” components, or open interfaces, to reduce lock-in and encourage competition.

The following sub-sections each frame a top-level, critical question relating to the business case for Future Networks:

  • Will networks support genuinely new services & enablers/APIs, or just faster/more-granular Internet access?
  • Speed, coverage, performance/QoS… what actually generates network value? And does this derive from customer satisfaction, new use-cases, or other sources?
  • Does quad-play and fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation and culture: what needs to change to support future network investments?

 

  • Executive Summary
  • Introduction
  • Overview: what is the business case for Future Networks?
  • Supporting new services or just faster Internet?
  • Speed, coverage, quality…what is most valuable?
  • Does quad-play & fixed-mobile convergence win?
  • Consolidation, network-sharing & wholesale: what changes?
  • Telco organisation & culture: what changes?
  • Conclusions

 

  • Figure 1 – New services are both network-integrated & independent
  • Figure 2 – Mobile data device & business model evolution
  • Figure 3 – Some new services are directly enabled by network capabilities
  • Figure 4 – Network investments ultimately need to map onto customers’ goals
  • Figure 5 – Customers put a priority on improving indoor/fixed connectivity
  • Figure 6 – Notional “coverage” does not mean enough capacity for all apps
  • Figure 7 – Different operator teams have differing visions of the future
  • Figure 8 – “Software telcos” may emulate IT’s “DevOps” organisational dynamic

 

Mobile Marketing and Commerce: the technology battle between NFC, BLE, SIM, & Cloud

Introduction

In this briefing, we analyse the bewildering array of technologies being deployed in the ongoing mobile marketing and commerce land-grab. With different digital commerce brokers backing different technologies, confusion reigns among merchants and consumers, holding back uptake. Moreover, the technological fragmentation is limiting economies of scale, keeping costs too high.

This paper is designed to help telcos and other digital commerce players make the right technological bets. Will bricks and mortar merchants embrace NFC or Bluetooth Low Energy or cloud-based solutions? If NFC does take off, will SIM cards or trusted execution environments be used to secure services? Should digital commerce brokers use SMS, in-app notifications or IP-based messaging services to interact with consumers?

STL defines Digital Commerce 2.0 as the use of new digital and mobile technologies to bring buyers and sellers together more efficiently and effectively (see Digital Commerce 2.0: New $Bn Disruptive Opportunities for Telcos, Banks and Technology Players).  Fast growing adoption of mobile, social and local services is opening up opportunities to provide consumers with highly-relevant advertising and marketing services, underpinned by secure and easy-to-use payment services. By giving people easy access to information, vouchers, loyalty points and electronic payment services, smartphones can be used to make shopping in bricks and mortar stores as interactive as shopping through web sites and mobile apps.

This executive briefing weighs the pros and cons of the different technologies being used to enable mobile commerce and identifies the likely winners and losers.

A new dawn for digital commerce

This section explains the driving forces behind the mobile commerce land-grab and the associated technology battle.

Digital commerce is evolving fast, moving out of the home and the office and onto the street and into the store. The advent of mass-market smartphones with touchscreens, full Internet browsers and an array of feature-rich apps is turning out to be a game changer that profoundly impacts the way in which people and businesses buy and sell. As they move around, many consumers are now using smartphones to access social, local and mobile (SoLoMo) digital services and make smarter purchase decisions. As they shop, they can easily canvass opinion via Facebook, read product reviews on Amazon or compare prices across multiple stores. In developed markets, this phenomenon is now well established. Two-thirds of the 400 Americans surveyed in November 2013 reported that they used smartphones in stores to compare prices, look for offers or deals, consult friends and search for product reviews.

At the same time, the combination of Internet and mobile technologies, embodied in the smartphone, is enabling businesses to adopt new forms of digital marketing, retailing and payments that could dramatically improve their efficiency and effectiveness. Smartphones, and the data they generate, can be used to optimise and enable every part of the ‘wheel of commerce’ (see Figure 4).

Figure 4: The elements that make up the wheel of commerce


Source: STL Partners

The extensive data being generated by smartphones can give companies real-time information on where their customers are and what they are doing. That data can be used to improve merchants’ marketing, advertising, stock management, fulfilment and customer care. For example, a smartphone’s sensors can detect how fast the device is moving and in what direction, so a merchant could see if a potential customer is driving or walking past their store.
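
As a toy illustration of that sensor-based inference, the sketch below classifies a passer-by from two timestamped GPS fixes; the speed thresholds are our own illustrative choices, not from any production system:

```typescript
// Toy classifier: walking vs. driving, inferred from two GPS fixes.
interface Fix { lat: number; lng: number; t: number } // t = seconds since epoch

// Great-circle distance between two fixes, in metres (haversine formula)
function haversineMetres(a: Fix, b: Fix): number {
  const R = 6371000, rad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * rad;
  const dLng = (b.lng - a.lng) * rad;
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(a.lat * rad) * Math.cos(b.lat * rad) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function movementMode(prev: Fix, curr: Fix): 'stationary' | 'walking' | 'driving' {
  const speed = haversineMetres(prev, curr) / (curr.t - prev.t); // m/s
  if (speed < 0.5) return 'stationary';
  if (speed < 3) return 'walking'; // brisk walking is roughly 1.7 m/s
  return 'driving';                // anything faster is assumed vehicular
}
```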

Marketing that makes use of real-time smartphone data should also be more effective than other forms of digital marketing. In theory, at least, targeting marketing at consumers in the right geography at a specific time should be far more effective than simply displaying adverts to anyone who conducts an Internet search using a specific term.

Similarly, local businesses should find sending targeted vouchers, promotions and information, delivered via smartphones, to be much more effective than junk mail at engaging with customers and potential customers. Instead of paying someone to put paper-based vouchers through the letterbox of every house in the entire neighbourhood, an Indian restaurant could, for example, send digital vouchers to the handsets of anyone who has said they are interested in Indian food as they arrive at the local train station between 7pm and 9pm. As it can be precisely targeted and timed, mobile marketing should achieve a much higher return on investment (ROI) than a traditional analogue approach.
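
The targeting rule in that example is straightforward to express in code. A purely illustrative sketch – the coordinates, radius, interest tag and time window are all hypothetical:

```typescript
// Illustrative targeting rule: interested shoppers, near the station,
// between 19:00 and 21:00 local time. All names and values are hypothetical.
interface Shopper { id: string; interests: Set<string>; lat: number; lng: number }

const STATION = { lat: 51.5178, lng: -0.0817 }; // hypothetical coordinates
const RADIUS_METRES = 200;

// Equirectangular approximation - accurate enough over a few hundred metres
function metresApart(aLat: number, aLng: number, bLat: number, bLng: number): number {
  const rad = Math.PI / 180;
  const x = (bLng - aLng) * rad * Math.cos(((aLat + bLat) / 2) * rad);
  const y = (bLat - aLat) * rad;
  return Math.sqrt(x * x + y * y) * 6371000;
}

function shouldSendVoucher(s: Shopper, now: Date): boolean {
  const hour = now.getHours(); // assumes device-local time
  const inWindow = hour >= 19 && hour < 21;
  const nearStation =
    metresApart(s.lat, s.lng, STATION.lat, STATION.lng) <= RADIUS_METRES;
  return inWindow && s.interests.has('indian-food') && nearStation;
}
```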

In our recent Strategy Report, STL Partners argued that the disruption in the digital commerce market has opened up two major opportunities for telcos:

  1. Real-time commerce enablement: The use of mobile technologies and services to optimise all aspects of commerce. For example, mobile networks can deliver precisely targeted and timely marketing and advertising to consumer’s smartphones, tablets, computers and televisions.
  2. Personal cloud: Act as a trusted custodian for individuals’ data and an intermediary between individuals and organisations, providing authentication services, digital lockers and other services that reduce the risk and friction in everyday interactions. An early example of this kind of service is financial services web site Mint.com (profiled in the appendix of this report). As personal cloud services provide personalised recommendations based on individuals’ authorised data, they could potentially engage much more deeply with consumers than the generalised decision-support services, such as Google, TripAdvisor, moneysavingexpert.com and comparethemarket.com, in widespread use today.

These two opportunities are inter-related and could be combined in a single platform. In both cases, the telco is acting as a broker – matching buyers and sellers as efficiently as possible, competing with incumbent digital commerce brokers, such as Google, Amazon, eBay and Apple. The Strategy Report explains in detail how telcos could pursue these opportunities and potentially compete with the giant Internet players that dominate digital commerce today.

For most telcos, the best approach is to start with mobile commerce, where they have the strongest strategic position, and then use the resulting data, customer relationships and trusted brand to expand into personal cloud services, which will require high levels of investment. This is essentially NTT DOCOMO’s strategy.

However, in the mobile commerce market, telcos are having to compete with Internet players, banks, payment networks and other companies in land-grab mode – racing to sign up merchants and consumers for platforms that could enable them to secure a pivotal (and potentially lucrative) position in the fast growing mobile commerce market. Amazon, for example, is pursuing this market through its Amazon Local service, which emails offers from local merchants to consumers in specific geographic areas.

Moreover, a bewildering array of technologies is being used to pursue this land-grab, creating confusion for merchants and consumers, while fuelling fragmentation and limiting economies of scale.

In this paper, we weigh the pros and cons of the different technologies being used in each segment of the wheel of commerce, before identifying the most likely winners and losers. Note, the appendix of the Strategy Report profiles many of the key innovators in this space, such as Placecast, Shopkick and Square.

What’s at stake

This section considers the relative importance of the different segments of the wheel of commerce and explains why the key technological battles are taking place in the promote and transact segments.

Carving up the wheel of commerce

STL Partners’ recent Strategy Report models in detail the potential revenues telcos could earn from pursuing the real-time commerce and personal cloud opportunities. That is beyond the scope of this technology-focused paper, but suffice to say that the digital commerce market is large and is growing rapidly: Merchants and brands spend hundreds of billions of dollars across the various elements of the wheel of commerce. In the U.S., the direct marketing market alone is worth about $155 billion per annum, according to the Direct Marketing Association. In 2012, $62 billion of that total was spent on digital marketing, while about $93 billion was spent on traditional direct mail.

In the context of the STL Wheel of Commerce (see Figure 4), the promote segment (ads, direct marketing and coupons) is the most valuable of the six segments. Our analysis of middle-income markets for clients suggests that the promote segment accounts for approximately 40% of the value in the wheel of digital commerce today, while the transact segment (payments) accounts for 20% and planning (market research etc.) 16% (see Figure 5). These estimates draw on data released by WPP and American Express.

Note that payments itself is a low-margin business – American Express estimates that merchants in the U.S. spend four to five times as much on marketing activities, such as loyalty programmes and offers, as they do on payments.

Figure 5: The relative size of the segments of the wheel of commerce


Source: STL Partners

 

  • Introduction
  • Executive Summary
  • A new dawn for digital commerce
  • What’s at stake
  • Carving up the wheel of commerce
  • The importance of tracking transactions
  • It’s all about data
  • Different industries, different strategies
  • Tough technology choices
  • Planning
  • Promoting
  • Guiding
  • Transacting
  • Satisfying
  • Retaining
  • Conclusions
  • Key considerations
  • Likely winners and losers
  • The commercial implications
  • About STL Partners

 

  • Figure 1: App notifications are in pole position in the promotion segment
  • Figure 2: There isn’t a perfect point of sale solution
  • Figure 3: Different tech adoption scenarios and their commercial implications
  • Figure 4: The elements that make up the wheel of commerce
  • Figure 5: The relative size of the segments of the wheel of commerce
  • Figure 6: Examples of financial services-led digital wallets
  • Figure 7: Examples of Mobile-centric wallets in the U.S.
  • Figure 8: The mobile commerce strategy of leading Internet players
  • Figure 9: Telcos can combine data from different domains
  • Figure 10: How to reach consumers: The technology options
  • Figure 11: Balancing cost and consumer experience
  • Figure 12: An example of an easy-to-use tool for merchants
  • Figure 13: Drag and drop marketing collateral into Google Wallet
  • Figure 14: Contrasting a secure element with host-based card emulation
  • Figure 15: There isn’t a perfect point of sale solution
  • Figure 16: The proportion of mobile transactions to be enabled by NFC in 2017
  • Figure 17: Integrated platforms and point solutions both come with risks attached
  • Figure 18: Different tech adoption scenarios and their commercial implications

Mobile Broadband 2.0: The Top Disruptive Innovations

Summary: Key trends, tactics, and technologies for mobile broadband networks and services that will influence mid-term revenue opportunities, cost structures and competitive threats. Includes consideration of LTE, network sharing, WiFi, next-gen IP (EPC), small cells, CDNs, policy control, business model enablers and more. (March 2012, Executive Briefing Service, Future of the Networks Stream).



Below is an extract from this 44 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003. We’ll also be discussing our findings and more on Facebook at the Silicon Valley (27-28 March) and London (12-13 June) New Digital Economics Brainstorms.


Introduction

Telco 2.0 has previously published a wide variety of documents and blog posts on mobile broadband topics – content delivery networks (CDNs), mobile CDNs, WiFi offloading, Public WiFi, network outsourcing (“‘Under-The-Floor’ (UTF) Players: threat or opportunity? ”) and so forth. Our conferences have featured speakers and panellists discussing operator data-plan pricing strategies, tablets, network policy and numerous other angles. We’ve also featured guest material such as Arete Research’s report LTE: Late, Tempting, and Elusive.

In our recent ‘Under the Floor (UTF) Players’ Briefing we looked at strategies to deal with some of the challenges facing operators as a result of market structure and outsourcing.


This Executive Briefing is intended to complement and extend those efforts, looking specifically at those technical and business trends which are truly “disruptive”, either immediately or in the medium-term future. In essence, the document can be thought of as a checklist for strategists – pointing out key technologies or trends around mobile broadband networks and services that will influence mid-term revenue opportunities and threats. Some of those checklist items are relatively well-known, others more obscure but nonetheless important. What this document doesn’t cover is more straightforward concepts around pricing, customer service, segmentation and so forth – all important to get right, but rarely disruptive in nature.

During 2012, Telco 2.0 will be rolling out a new MBB workshop concept, which will audit operators’ existing technology strategy and planning around mobile data services and infrastructure. This briefing document is a roundup of some of the critical issues we will be advising on, as well as our top-level thinking on the importance of each trend.

It starts by discussing some of the issues which determine the extent of any disruption:

  • Growth in mobile data usage – and whether the much-vaunted “tsunami” of traffic may be slowing down
  • The role of standardisation, and whether it is a facilitator or inhibitor of disruption
  • Whether the most important MBB disruptions are likely to be telco-driven, or will stem from other actors such as device suppliers, IT companies or Internet firms.

The report then drills into a few particular domains where technology is evolving, looking at some of the most interesting and far-reaching trends and innovations. These are split broadly between:

  • Network infrastructure evolution (radio and core)
  • Control and policy functions, and business-model enablers

It is not feasible for us to cover all these areas in huge depth in a briefing paper such as this. Some areas such as CDNs and LTE have already been subject to other Telco 2.0 analysis, and this will be linked to where appropriate. Instead, we have drilled down into certain aspects we feel are especially interesting, particularly where these are outside the mainstream of industry awareness and thinking – and tried to map technical evolution paths onto potential business model opportunities and threats.

This report cannot be truly exhaustive – it doesn’t look at the nitty-gritty of silicon components, or antenna design, for example. It also treads a fine line between technological accuracy and ease-of-understanding for the knowledgeable but business-focused reader. For more detail or clarification on any area, please get in touch with us – email contact@stlpartners.com or call +44 (0) 207 247 5003.

Telco-driven disruption vs. external trends

There are various potential sources of disruption for the mobile broadband marketplace:

  • New technologies and business models implemented by telcos, which increase revenues, decrease costs, improve performance or alter the competitive dynamics between service providers.
  • 3rd party developments that can either bolster or undermine the operators’ broadband strategies. This includes both direct MBB innovations (new uses of WiFi, for example), and bleed-over from adjacent marketplaces such as device creation or content/application provision.
  • External, non-technology effects such as changing regulation, economic backdrop or consumer behaviour.

The majority of this report covers “official” telco-centric innovations – LTE networks, new forms of policy control and so on.

External disruptions to monitor

But the most dangerous form of innovation is that from third parties, which can undermine assumptions about the ways mobile broadband can be used, introduce new mechanisms for arbitrage, or somehow subvert operators’ pricing plans or network controls.

In the voice communications world, there are often regulations in place to protect service providers – such as banning the use of “SIM boxes” to terminate calls and reduce interconnection payments. But in the data environment, it is far less obvious that many work-arounds can either be seen as illegal, or even outside the scope of fair-usage conditions. That said, we have already seen some attempts by telcos to manage these effects – such as charging extra for “tethering” on smartphones.

It is not really possible to predict all possible disruptions of this type – such is the nature of innovation. But by describing a few examples, market participants can gauge their level of awareness, as well as gain motivation for ongoing “scanning” of new developments.

Some of the areas being followed by Telco 2.0 include:

  • Connection-sharing. This is where users might link devices together locally, perhaps through WiFi or Bluetooth, and share multiple cellular data connections. This is essentially “multi-tethering” – for example, 3 smartphones discovering each other nearby, perhaps each with a different 3G/4G provider, and pooling their connections together for shared use. From the user’s point of view it could improve effective coverage and maximum/average throughput speed. But from the operators’ view it would break the link between user identity and subscription, and essentially offload traffic from poor-quality networks on to better ones.
  • SoftSIM or SIM-free wireless. Over the last five years, various attempts have been made to decouple mobile data connections from SIM-based authentication. In some ways this is not new – WiFi doesn’t need a SIM, while it’s optional for WiMAX, and CDMA devices have typically been “hard-coded” to just register on a specific operator network. But the GSM/UMTS/LTE world has always relied on subscriber identification through a physical card. At one level, it is very good – SIMs are distributed easily and have enabled a successful prepay ecosystem to evolve. They provide operator control points and the ability to host secure applications on the card itself. However, the need to obtain a physical card restricts business models, especially for transient/temporary use such as a “one day pass”. But the most dangerous potential change is a move to a “soft” SIM, embedded in the device software stack. Companies such as Apple have long dreamed of acting as a virtual network provider, brokering between user and multiple networks. There is even a patent for encouraging bidding per-call (or perhaps per data-connection), with telcos competing head to head on price/quality grounds. Telco 2.0 views this type of least-cost routing as a major potential risk for operators, especially for mobile data – although it also possibly enables some new business models that have been difficult to achieve in the past.
  • Encryption. Various of the new business models and technology deployment intentions of operators, vendors and standards bodies are predicated on analysing data flows. Deep packet inspection (DPI) is expected to be used to identify applications or traffic types, enabling differential treatment in the network, or different charging models to be employed. Yet this is rendered largely useless (or at least severely limited) when various types of encryption are used. Various content and application types already secure data in this way – content DRM, BlackBerry traffic, corporate VPN connections and so on. But increasingly, we will see major Internet companies such as Apple, Google, Facebook and Microsoft using such techniques both for their own users’ security, and because it hides precise indicators of usage from the network operators. If a future Android phone sends all its mobile data back via a VPN tunnel and breaks it out in Mountain View, California, operators will be unable to discern YouTube video from search or VoIP traffic (see the sketch after this list). This is one of the reasons why application-based charging models – one- or two-sided – are difficult to implement.
  • Application evolution speed. One of the largest challenges for operators is the pace of change of mobile applications. The growing penetration of smartphones, appstores and ease of “viral” adoption of new services causes a fundamental problem – applications emerge and evolve on a month-by-month or even week-by-week basis. This is faster than any realistic internal telco processes for developing new pricing plans, or changing network policies. Worse, the nature of “applications” is itself changing, with the advent of HTML5 web-apps, and the ability to “mash up” multiple functions in one app “wrapper”. Is a YouTube video shared and embedded in a Facebook page a “video service”, or “social networking”?
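
To make the encryption point concrete, here is a toy flow classifier showing what each layer of protection hides from the network. The field names are hypothetical, not drawn from any real DPI product:

```typescript
// Toy traffic classifier, showing what each layer of encryption hides.
// Field names are hypothetical, not from any real DPI product.
interface FlowMeta {
  dstPort: number;
  httpHost?: string; // readable only for plaintext HTTP
  tlsSni?: string;   // hostname leaked by a classic TLS handshake
}

function classify(flow: FlowMeta): string {
  if (flow.httpHost) return `http:${flow.httpHost}`; // full visibility
  if (flow.tlsSni) return `tls:${flow.tlsSni}`;      // hostname only, payload opaque
  if (flow.dstPort === 1194) return 'vpn-tunnel';    // e.g. OpenVPN's default port
  return 'unknown'; // video? search? VoIP? the network cannot tell
}
```

Once everything arrives as an opaque tunnel, only the final branch remains – which is precisely why application-based charging becomes hard.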

It is also really important to recognise that certain procedures and technologies used in policy and traffic management will likely have some unanticipated side-effects. Users, devices and applications are likely to respond to controls that limit their actions, while other developments may spontaneously produce “emergent behaviours”. For instance, there is a risk that too-strict data caps might change usage models for smartphones and make users connect to the network only when absolutely necessary. This is likely to be at the same times and places when other users also feel it necessary, with the unfortunate implication that peaks of usage get “spikier” rather than being ironed out.

There is no easy answer to addressing these types of external threat. Operator strategists and planners simply need to keep watch on emerging trends, and perhaps stress-test their assumptions and forecasts with market observers who keep tabs on such developments.

The mobile data explosion… or maybe not?

It is an undisputed fact that mobile data is growing exponentially around the world. Or is it?

A J-curve or an S-curve?

Telco 2.0 certainly thinks that growth in data usage is occurring, but is starting to see signs that the smooth curves that drive so many other decisions might not be so smooth – or so steep – after all. If this proves to be the case, it could be far more disruptive to operators and vendors than any of the individual technologies discussed later in the report. If operator strategists are not at least scenario-planning for lower data growth rates, they may find themselves in a very uncomfortable position in a year’s time.

In its most recent study of mobile operators’ traffic patterns, Ericsson concluded that Q2 2011 data growth was just 8% globally, quarter-on-quarter, a far cry from the 20%+ growth rates seen previously, and leaving a chart that looks distinctly like the beginning of an S-curve rather than a continued “hockey stick”. Given that the 8% includes a sizeable contribution from undoubted high-growth developing markets like China, it suggests that other markets are maturing quickly. (We are rather sceptical of Ericsson’s suggestion of seasonality in the data). Other data points come from O2 in the UK, which appears to have had essentially zero traffic growth for the past few quarters, or Vodafone, which now cites European data traffic to be growing more slowly (19% year-on-year) than its data revenues (21%). Our view is that current global growth is c.60-70%, c.40% in mature markets and 100%+ in developing markets.

Figure 1 – Trends in European data usage


Now it is possible that various one-off factors are at play here – the shift from unlimited to tiered pricing plans, the stronger enforcement of “fair-use” plans and the removal of particularly egregious heavy users. Certainly, other operators are still reporting strong growth in traffic levels. We may see a resumption in growth, for example if cellular-connected tablets start to be used widely for streaming video.

But we should also consider the potential market disruption, if the picture is less straightforward than the famous exponential charts. Even if the chart looks like a 2-stage S, or a “kinked” exponential, the gap may have implications, like a short recession in the economy. Many of the technical and business model innovations in recent years have been responses to the expected continual upward spiral of demand – either controlling users’ access to network resources, pricing it more highly and with greater granularity, or building out extra capacity at a lower price. Even leaving aside the fact that raw, aggregated “traffic” levels are a poor indicator of cost or congestion, any interruption or slow-down of the growth will invalidate a lot of assumptions and plans.

Our view is that the scary forecasts of “explosions” and “tsunamis” have led virtually all parts of the industry to create solutions to the problem. We can probably list more than 20 approaches, most of them standalone “silos”.

Figure 2 – A plethora of mobile data traffic management solutions


What seems to have happened is that at least 10 of those approaches have worked – caps/tiers, video optimisation, WiFi offload, network densification and optimisation, collaboration with application firms to create “network-friendly” software and so forth. Taken collectively, there is actually a risk that they have worked “too well”, to the extent that some previous forecasts have turned into “self-denying prophecies”.

There is also another common forecasting problem occurring – the assumption that later adopters of a technology will behave similarly to earlier users. In many markets we are now reaching 30-50% smartphone penetration. That means that all the most enthusiastic users are already connected, and we’re left with those that are (largely) ambivalent and probably quite light users of data. That will bring the averages down, even if each individual user is still increasing their consumption over time. But even that assumption may be flawed, as caps have made people concentrate much more on their usage, offloading to WiFi and restricting their data flows. There is also some evidence that the growing number of free WiFi points is reducing laptop use of mobile data, which accounts for 70-80% of the total in some markets, while the much-hyped shift to tablets isn’t driving much extra mobile data as most are WiFi-only.

So has the industry over-reacted to the threat of a “capacity crunch”? What might be the implications?

The problem is that focusing on a single, narrow metric – “GB of data across the network” – ignores some important nuances and finer detail. From an economics standpoint, network costs tend to be driven by two main criteria:

  • Network coverage in terms of area or population
  • Network capacity at the busiest places/times

Coverage is (generally) therefore driven by factors other than data traffic volumes. Many cells have to be built and run anyway, irrespective of whether there’s actually much load – the operators all want to claim good footprints and may be subject to regulatory rollout requirements. Peak capacity in the most popular locations, however, is a different matter. That is where issues such as spectrum availability, cell site locations and the latest high-speed networks become much more important – and hence costs do indeed rise. However, it is far from obvious that the problems at those “busy hours” are always caused by “data hogs” rather than sheer numbers of people each using a small amount of data. (There is also another issue around signalling traffic, discussed later). 

Yes, there is a generally positive correlation between network-wide volume growth and costs, but it is far from perfect, and certainly not a direct causal relationship.
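As a toy illustration of that imperfect relationship, consider a cost model in which only footprint and busy-hour peak matter; the coefficients are entirely invented:

```typescript
// Toy cost model: cost is driven by footprint and busy-hour peak, not by
// total GB carried. All coefficients are invented for illustration.
function annualNetworkCost(
  coveredAreaKm2: number,   // footprint to build & run regardless of load
  busyHourPeakGbps: number, // capacity needed at the busiest place/time
): number {
  const COST_PER_KM2 = 2000;        // fixed coverage cost (illustrative)
  const COST_PER_PEAK_GBPS = 50000; // peak-capacity cost (illustrative)
  return coveredAreaKm2 * COST_PER_KM2 + busyHourPeakGbps * COST_PER_PEAK_GBPS;
}

// Doubling total monthly GB spread over quiet hours leaves this unchanged;
// only a higher busy-hour peak moves the second term.
```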

So let’s hypothesise briefly about what might occur if data traffic growth does tail off, at least in mature markets.

  • Delays to LTE rollout – if 3G networks are filling up less quickly than expected, the urgency of 4G deployment is reduced.
  • The focus of policy and pricing for mobile data may switch back to encouraging use rather than discouraging/controlling it. Capacity utilisation may become an important metric, given the high fixed costs and low marginal ones. Expect more loyalty-type schemes, plus various methods to drive more usage in quiet cells or off-peak times.
  • Regulators may start to take different views of traffic management or predicted spectrum requirements.
  • Prices for mobile data might start to fall again, after a period where we have seen them rise. Some operators might be tempted back to unlimited plans, for example if they offer “unlimited off-peak” or similar options.
  • Many of the more complex and commercially-risky approaches to tariffing mobile data might be deprioritised. For example, application-specific pricing involving packet-inspection and filtering might get pushed back down the agenda.
  • In some cases, we may even end up with overcapacity on cellular data networks – not to the degree we saw in fibre in 2001-2004, but there might still be an “overhang” in some places, especially if there are multiple 4G networks.
  • Steady growth of (say) 20-30% peak data per annum should be manageable with the current trends in price/performance improvement. It should be possible to deploy and run networks to meet that demand with reducing unit “production cost”, for example through use of small cells. That may reduce the pressure to fill the “revenue gap” on the infamous scissors-diagram chart.

Overall, it is still a little too early to declare shifting growth patterns for mobile data as a “disruption”. There is a lack of clarity on what is happening, especially in terms of responses to the new controls, pricing and management technologies put recently in place. But operators need to watch extremely closely what is going on – and plan for multiple scenarios.

Specific recommendations will depend on an individual operator’s circumstances – user base, market maturity, spectrum assets, competition and so on. But broadly, we see three scenarios and implications for operators:

  • “All hands on deck!”: Continued strong growth (perhaps with a small “blip”) which maintains the pressure on networks, threatens congestion, and drives the need for additional capacity, spectrum and capex.
    • Operators should continue with current multiple strategies for dealing with data traffic – acquiring new spectrum, upgrading backhaul, exploring massive capacity enhancement with small cells and examining a variety of offload and optimisation techniques. Where possible, they should explore two-sided models for charging and use advanced pricing, policy or segmentation techniques to rein in abusers and reward those customers and applications that are parsimonious with their data use. Vigorous lobbying activities will be needed, for gaining more spectrum, relaxing Net Neutrality rules and perhaps “taxing” content/Internet companies for traffic injected onto networks.
  • “Panic over”: Moderating and patchy growth, which settles to a manageable rate – comparable with the patterns seen in the fixed broadband marketplace
    • This will mean that operators can “relax” a little, with the respite in explosive growth meaning that the continued capex cycles should be more modest and predictable. Extension of today’s pricing and segmentation strategies should improve margins, with continued innovation in business models able to proceed without rush, and without risking confrontation with Internet/content companies over traffic management techniques. Focus can shift towards monetising customer insight, ensuring that LTE rollouts are strategic rather than tactical, and exploring new content and communications services that exploit the improving capabilities of the network.
  • “Hangover”: Growth flattens off rapidly, leaving operators with unused capacity and threatening brutal price competition between telcos.
    • This scenario could prove painful, reminiscent of early-2000s experience in the fixed-broadband marketplace. Wholesale business models could help generate incremental traffic and revenue, while the emphasis will be on fixed-cost minimisation. Some operators will scale back 4G rollouts until cost and maturity go past the tipping-point for outright replacement of 3G. Restrictive policies on bandwidth use will be lifted, as operators compete to give customers the fastest / most-open access to the Internet on mobile devices. Consolidation – and perhaps bankruptcies – may ensue, as declining data prices coincide with the substitution of core voice and messaging business.

To read the note in full, including the following analysis…

  • Introduction
  • Telco-driven disruption vs. external trends
  • External disruptions to monitor
  • The mobile data explosion… or maybe not?
  • A J-curve or an S-curve?
  • Evolving the mobile network
  • Overview
  • LTE
  • Network sharing, wholesale and outsourcing
  • WiFi
  • Next-gen IP core networks (EPC)
  • Femtocells / small cells / “cloud RANs”
  • HetNets
  • Advanced offload: LIPA, SIPTO & others
  • Peer-to-peer connectivity
  • Self optimising networks (SON)
  • M2M-specific broadband innovations
  • Policy, control & business model enablers
  • The internal politics of mobile broadband & policy
  • Two sided business-model enablement
  • Congestion exposure
  • Mobile video networking and CDNs
  • Controlling signalling traffic
  • Device intelligence
  • Analytics & QoE awareness
  • Conclusions & recommendations
  • Index

…and the following figures…

  • Figure 1 – Trends in European data usage
  • Figure 2 – A plethora of mobile data traffic management solutions
  • Figure 3 – Not all operator WiFi is “offload” – other use cases include “onload”
  • Figure 4 – Internal ‘power tensions’ over managing mobile broadband
  • Figure 5 – How a congestion API could work
  • Figure 6 – Relative Maturity of MBB Management Solutions
  • Figure 7 – Laptops generate traffic volume, smartphones create signalling load
  • Figure 8 – Measuring Quality of Experience
  • Figure 9 – Summary of disruptive network innovations

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 44 page report in PDF format here. Non-Members, please subscribe here, buy a Single User license for this report online here for £795 (+VAT for UK buyers), or for multi-user licenses or other enquiries, please email contact@telco2.net / call +44 (0) 207 247 5003.

Organisations, geographies, people and products referenced: 3GPP, Aero2, Alcatel Lucent, AllJoyn, ALU, Amazon, Amdocs, Android, Apple, AT&T, ATIS, BBC, BlackBerry, Bridgewater, CarrierIQ, China, China Mobile, China Unicom, Clearwire, Conex, DoCoMo, Ericsson, Europe, EverythingEverywhere, Facebook, Femto Forum, FlashLinq, Free, Germany, Google, GSMA, H3G, Huawei, IETF, IMEI, IMSI, InterDigital, iPhones, Kenya, Kindle, Light Radio, LightSquared, Los Angeles, MBNL, Microsoft, Mobily, Netflix, NGMN, Norway, NSN, O2, WiFi, Openet, Qualcomm, Radisys, Russia, Saudi Arabia, SoftBank, Sony, Stoke, Telefonica, Telenor, Time Warner Cable, T-Mobile, UK, US, Verizon, Vita, Vodafone, WhatsApp, Yota, YouTube, ZTE.

Technologies and industry terms referenced: 2G, 3G, 4.5G, 4G, Adaptive bitrate streaming, ANDSF (Access Network Discovery and Selection Function), API, backhaul, Bluetooth, BSS, capacity crunch, capex, caps/tiers, CDMA, CDN, CDNs, Cloud RAN, content delivery networks (CDNs), Continuous Computing, Deep packet inspection (DPI), DPI, DRM, Encryption, Enhanced video, EPC, ePDG (Evolved Packet Data Gateway), Evolved Packet System, Femtocells, GGSN, GPS, GSM, Heterogeneous Network (HetNet), Heterogeneous Networks (HetNets), HLRs, hotspots, HSPA, HSS (Home Subscriber Server), HTML5, HTTP Live Streaming, IFOM (IP Flow Mobility and Seamless Offload), IMS, IPR, IPv4, IPv6, LIPA (Local IP Access), LTE, M2M, M2M network enhancements, metro-cells, MiFi, MIMO (multiple in, multiple out), MME (Mobility Management Entity), mobile CDNs, mobile data, MOSAP, MSISDN, MVNAs (mobile virtual network aggregators), MVNO, Net Neutrality, network outsourcing, Network sharing, Next-generation core networks, NFC, NodeBs, offload, OSS, outsourcing, P2P, Peer-to-peer connectivity, PGW (PDN Gateway), picocells, policy, Policy and Charging Rules Function (PCRF), Pre-cached video, pricing, Proximity networks, Public WiFi, QoE, QoS, RAN optimisation, RCS, remote radio heads, RFID, self-optimising network technology (SON), Self-optimising networks (SON), SGW (Serving Gateway), SIM-free wireless, single RANs, SIPTO (Selective IP Traffic Offload), SMS, SoftSIM, spectrum, super-femtos, Telco 2.0 Happy Pipe, Transparent optimisation, UMTS, ‘Under-The-Floor’ (UTF) Players, video optimisation, VoIP, VoLTE, VPN, White space, WiFi, WiFi Direct, WiFi offloading, WiMAX, WLAN.

‘Under-The-Floor’ (UTF) Players: threat or opportunity?

Introduction

The ‘smart pipe’ imperative

In some quarters of the telecoms industry, the received wisdom is that the network itself is merely an undifferentiated “pipe”, providing commodity connectivity, especially for data services. The value, many assert, is in providing higher-tier services, content and applications, either to end-users, or as value-added B2B services to other parties. The Telco 2.0 view is subtly different. We maintain that:

  1. Increasingly valuable services will be provided by third parties, but operators can provide a few end-user services themselves. They will, for example, continue to offer voice and messaging services for the foreseeable future.
  2. Operators still have an opportunity to offer enabling services to ‘upstream’ service providers such as personalisation and targeting (of marketing and services) via use of their customer data, payments, identity and authentication and customer care.
  3. Even if operators fail (or choose not to pursue) options 1 and 2 above, the network must be ‘smart’ and all operators will pursue at least a ‘smart network’ or ‘Happy Pipe’ strategy. This will enable operators to achieve three things.
  • To ensure that data is transported efficiently so that capital and operating costs are minimised and the Internet and other networks remain cheap methods of distribution.
  • To improve user experience by matching the performance of the network to the nature of the application or service being used – or indeed vice versa, adapting the application to the actual constraints of the network (see the sketch after this list). ‘Best efforts’ is fine for asynchronous communication, such as email or text, but unacceptable for traditional voice telephony. A video call or streamed movie could exploit guaranteed bandwidth if available, or else self-optimise to conditions of network congestion or poor coverage, if these are well-understood. Other services have different criteria – for example, real-time gaming demands ultra-low latency, while corporate applications may demand the most secure and reliable path through the network.
  • To charge appropriately for access to and/or use of the network. It is becoming increasingly clear that the Telco 1.0 business model – that of charging the end-user per minute or per Megabyte – is under pressure as new business models for the distribution of content and transportation of data are being developed. Operators will need to be capable of charging different players – end-users, service providers, third-parties (such as advertisers) – on a real-time basis for provision of broadband and maybe various types or tiers of quality of service (QoS). They may also need to offer SLAs (service level agreements), monitor and report actual “as-experienced” quality metrics or expose information about network congestion and availability.
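
To make the “adapt the application to the network” idea concrete (the sketch promised above), here is a minimal client-side adaptive bitrate selector; the rendition ladder and safety margin are illustrative values:

```typescript
// Client-side adaptive bitrate selection: pick the highest rendition that
// fits within a safety margin of measured throughput. Values illustrative.
const RENDITIONS_KBPS = [235, 560, 1050, 2350, 4300];

function pickBitrate(measuredKbps: number, safetyMargin = 0.8): number {
  const budget = measuredKbps * safetyMargin;
  const fitting = RENDITIONS_KBPS.filter(r => r <= budget);
  return fitting.length > 0 ? fitting[fitting.length - 1] : RENDITIONS_KBPS[0];
}

// pickBitrate(3000) -> 2350 kbps; pickBitrate(400) -> 235 kbps (lowest rung)
```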

Under the floor players threaten control (and smartness)

Either through deliberate actions such as outsourcing, or through external agency (Government, greenfield competition etc), we see the network-part of the telco universe suffering from a creeping loss of control and ownership. There is a steady move towards outsourced networks, as they are shared, or built around the concept of open-access and wholesale. While this would be fine if the telcos themselves remained in control of this trend (we see significant opportunities in wholesale and infrastructure services), in many cases the opposite is occurring. Telcos are losing control, and in our view losing influence over their core asset – the network. They are worrying so much about competing with so-called OTT providers that they are missing the threat from below.

At the point at which many operators, at least in Europe and North America, are seeing the services opportunity ebb away, and ever-greater dependency on new models of data connectivity provision, they are potentially cutting off (or being cut off from) one of their real differentiators.

Given the uncertainties around both fixed and mobile broadband business models, it is sensible for operators to retain as many business model options as possible. Operators are battling with significant commercial and technical questions such as:

  • Can upstream monetisation really work?
  • Will regulators permit priority services under Net Neutrality regulations?
  • What forms of network policy and traffic management are practical, realistic and responsive?

Answers to these and other questions remain opaque. However, it is clear that many of the potential future business models will require networks to be physically or logically re-engineered, as well as flexible back-office functions, like billing and OSS, to be closely integrated with the network.

Outsourcing networks to third-party vendors, particularly when such a network is shared with other operators, is dangerous in these circumstances. Partners that today agree on the principles for network-sharing may have very different strategic views and goals in two years’ time, especially given the unknown use-cases for new technologies like LTE.

This report considers all these issues and gives guidance to operators who may not have considered all the various ways in which network control is being eroded, from Government-run networks through to outsourcing services from the larger equipment providers.

Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators

Source: STL Partners

Industry structure is being reshaped

Over the last year, Telco 2.0 has updated its overall map of the telecom industry, to reflect ongoing dynamics seen in both fixed and mobile arenas. In our strategic research reports on Broadband Business Models, and the Roadmap for Telco 2.0 Operators, we have explored the emergence of various new “buckets” of opportunity, such as verticalised service offerings, two-sided opportunities and enhanced variants of traditional retail propositions.

In parallel to this, we’ve also looked again at some changes in the traditional wholesale and infrastructure layers of the telecoms industry. Historically, this has largely comprised basic capacity resale and some “behind the scenes” use of carrier-to-carrier services (roaming hubs, satellite / sub-oceanic transit etc).

Figure 2 – Telco 1.0 Wholesale & Infrastructure structure


Source: STL Partners

Content

  • Revising & extending the industry map
  • ‘Network Infrastructure Services’ or UTF?
  • UTF market drivers
  • Implications of the growing trend in ‘under-the-floor’ network service providers
  • Networks must be smart and controlling them is smart too
  • No such thing as a dumb network
  • Controlling the network will remain a key competitive advantage
  • UTF enablers: LTE, WiFi & carrier ethernet
  • UTF players could reduce network flexibility and control for operators
  • The dangers of ceding control to third-parties
  • No single answer for all operators but ‘outsourcer beware’
  • Network outsourcing & the changing face of major vendors
  • Why become an under-the-floor player?
  • Categorising under-the-floor services
  • Pure under-the-floor: the outsourced network
  • Under-the-floor ‘lite’: bilateral or multilateral network-sharing
  • Selective under-the-floor: Commercial open-access/wholesale networks
  • Mandated under-the-floor: Government networks
  • Summary categorisation of under-the-floor services
  • Next steps for operators
  • Build scale and a more sophisticated partnership approach
  • Final thoughts
  • Index

 

  • Figure 1 – Competition in the services layer means defending network capabilities is increasingly important for operators
  • Figure 2 – Telco 1.0 Wholesale & Infrastructure structure
  • Figure 3 – The battle over infrastructure services is intensifying
  • Figure 4 – Examples of network-sharing arrangements
  • Figure 5 – Examples of Government-run/influenced networks
  • Figure 6 – Four under-the-floor service categories
  • Figure 7: The need for operator collaboration & co-opetition strategies

Broadband 2.0: Mobile CDNs and video distribution

Summary: Content Delivery Networks (CDNs) are becoming familiar in the fixed broadband world as a means to improve the experience and reduce the costs of delivering bulky data like online video to end-users. Is there now a compelling need for their mobile equivalents, and if so, should operators partner with existing players or build / buy their own? (August 2011, Executive Briefing Service, Future of the Networks Stream).

Below is an extract from this 25 page Telco 2.0 Report that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream here. Non-members can buy a Single User license for this report online here for £595 (+VAT) or subscribe here. For multiple user licenses, or to find out about interactive strategy workshops on this topic, please email contact@telco2.net or call +44 (0) 207 247 5003.


Introduction

As is widely documented, mobile networks are witnessing huge growth in the volumes of 3G/4G data traffic, primarily from laptops, smartphones and tablets. While Telco 2.0 is wary of some of the headline shock-statistics about forecast “exponential” growth, or “data tsunamis” driven by ravenous consumption of video applications, there is certainly a fast-growing appetite for use of mobile broadband.

That said, many of the actual problems of congestion today can be pinpointed either to a handful of busy cells at peak hour – or, often, the inability of the network to deal with the signalling load from chatty applications or “aggressive” devices, rather than the “tonnage” of traffic. Another large trend in mobile data is the use of transient, individual-centric flows from specific apps or communications tools such as social networking and messaging.

But “tonnage” is not completely irrelevant. Despite the diversity, there is still an inexorable rise in the use of mobile devices for “big chunks” of data, especially the special class of software commonly known as “content” – typically popular/curated standalone video clips or programmes, or streamed music. Images (especially those in web pages) and application files such as software updates fit into a similar group – sizeable lumps of data downloaded by many individuals across the operator’s network.

This one-to-many nature of most types of bulk content highlights inefficiencies in the way mobile networks operate. The same data chunks are downloaded time and again by users, typically going all the way from the public Internet, through the operator’s core network, eventually to the end user. Everyone loses in this scenario – the content publisher needs huge servers to dish up each download individually. The operator has to deal with transport and backhaul load from repeatedly sending the same content across its network (and IP transit from shipping it in from outside, especially over international links). Finally, the user has to deal with all the unpredictability and performance compromises involved in accessing the traffic across multiple intervening points – and ends up paying extra to support the operator’s heavier cost base.

In the fixed broadband world, many content companies have availed themselves of a group of specialist intermediaries called CDNs (content delivery networks). These firms on-board large volumes of the most important content served across the Internet, before dropping it “locally” as near to the end user as possible – if possible, served up from cached (pre-saved) copies. Often, the CDN operating companies have struck deals with the end-user facing ISPs, which have often been keen to host their servers in-house, as they have been able to reduce their IP interconnection costs and deliver better user experience to their customers.

In the mobile industry, the use of CDNs is much less mature. Until relatively recently, the overall volumes of data didn’t really move the needle from the point of view of content firms, while operators’ radio-centric cost bases were also relatively immune from those issues. Optimising the “middle mile” for mobile data transport efficiency seemed far less of a concern than getting networks built out and handsets and apps perfected, or setting up policy and charging systems to parcel up broadband into tiered plans. Arguably, better-flowing data paths and video streams would only load the radio more heavily, just at a time when operators were having to compress video to limit congestion.

This is now changing significantly. With the rise in smartphone usage – and the expectations around tablets – Internet-based CDNs are pushing much more heavily to have their servers placed inside mobile networks. This is leading to a certain amount of introspection among the operators – do they really want to have Internet companies’ infrastructure inside their own networks, or could this be seen more as a Trojan Horse of some sort, simply accelerating the shift of content sales and delivery towards OTT-style models? Might it not be easier for operators to build internal CDN-type functions instead?

Some of the earlier approaches to video traffic management – especially so-called “optimisation” without the content companies’ permission or involvement – are becoming trickier with new video formats and more scrutiny from a Net Neutrality standpoint. But CDNs by definition involve the publishers, so any necessary compression or other processing can potentially be applied collaboratively, rather than “transparently”, without their cooperation or consent.

At the same time, many of the operators’ usual vendors see this transition point as a chance to differentiate their new IP core network offerings, typically building CDN capability into their routing/switching platforms, often alongside the optimisation functions as well. In common with other recent innovations from network equipment suppliers, there is a dangled promise of Telco 2.0-style revenues that could be derived from “upstream” players. In this case the potential is a little easier to prove, since the revenues in question already exist: they are currently earned from content companies by the Internet CDN players such as Akamai and Limelight, and could be substituted directly. This also holds the possibility of setting up a two-sided, content-charging business model that sits reasonably comfortably with rules on Net Neutrality – there are few complaints about existing CDNs except from ultra-purist Neutralists.

On the other hand, telco-owned CDNs have existed in the fixed broadband world for some time, with largely indifferent levels of success and adoption. There needs to be a very good reason for content companies to choose to deal with multiple national telcos, rather than simply take the easy route and choose a single global CDN provider.

So, the big question for telcos around CDNs at the moment is “should I build my own, or should I just permit Akamai and others to continue deploying servers into my network?” Linked to that question is what type of CDN operation an operator might choose to run in-house.

There are four main reasons why a mobile operator might want to build its own CDN:

  • To lower costs of network operation or upgrade, especially in radio network and backhaul, but also through the core and in IP transit.
  • To improve the user experience of video, web or applications, either in terms of data throughput or latency.
  • To derive incremental revenue from content or application providers.
  • For wider strategic or philosophical reasons about “keeping control over the content/apps value chain”.

This Analyst Note explores these issues in more detail, first giving some relevant contextual information on how CDNs work, especially in mobile.

What is a CDN?

The traditional model for Internet-based content access is straightforward – the user’s browser requests a piece of data (image, video, file or whatever) from a server, which then sends it back across the network, via a series of “hops” between different network nodes. The content typically crosses the boundaries between multiple service providers’ domains, before finally arriving at the user’s access provider’s network, flowing down over the fixed or mobile “last mile” to their device. In a mobile network, that also typically involves transiting the operator’s core network first, which has a variety of infrastructure (network elements) to control and charge for it.

A Content Delivery Network (CDN) is a system for serving Internet content from servers which are located “closer” to the end user either physically, or in terms of the network topology (number of hops). This can result in faster response times, higher overall performance, and potentially lower costs to all concerned.
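In practice, which server is “closer” is usually decided by the CDN’s request-routing layer, for example via DNS redirection or anycast. As a purely illustrative sketch – the hostnames, and the use of TCP connection time as a proxy for topological distance, are our own assumptions rather than any real CDN’s selection logic – edge selection can be thought of like this:

```python
import socket
import time

# Hypothetical edge servers -- illustrative hostnames, not real infrastructure
EDGE_SERVERS = [
    "edge-london.example-cdn.net",
    "edge-frankfurt.example-cdn.net",
    "edge-newyork.example-cdn.net",
]

def rtt_to(host, port=80, timeout=2.0):
    """Time a TCP handshake as a rough proxy for how 'close' an edge is."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable edges are never chosen

def pick_nearest_edge(servers):
    """Return the candidate edge with the lowest measured round-trip time."""
    return min(servers, key=rtt_to)

if __name__ == "__main__":
    print("Selected edge:", pick_nearest_edge(EDGE_SERVERS))
```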

In most cases in the past, CDNs have been run by specialist third-party providers, such as Akamai and Limelight. This document also considers the role of telcos running their own “on-net” CDNs.

CDNs can be thought of as analogous to the distribution of bulky physical goods – it would be inefficient for a manufacturer to ship all products to customers individually from a single huge central warehouse. Instead, it will set up regional logistics centres that can be more responsive – and, if appropriate, tailor the products or packaging to the needs of specific local markets.

As an example, there might be a million requests for a particular video stream from the BBC. Without using a CDN, the BBC would have to provide sufficient server capacity and bandwidth to handle them all. The company’s immediate downstream ISPs would have to carry this traffic to the Internet backbone, the backbone itself would have to carry it, and finally the requesters’ ISPs’ access networks would have to deliver it to the end-points. From a media-industry viewpoint, the source network (in this case the BBC) is generally called the “content network” or “hosting network”; the destination is termed an “eyeball network”.

In a CDN scenario, all the data for the video stream has to be transferred across the Internet just once for each participating network, when it is deployed to the downstream CDN servers and stored. After this point, it is only carried over the user-facing eyeball networks, rather than repeatedly across the public Internet. This also means that the CDN servers may be located strategically within the eyeball networks, in order to use their resources more efficiently. For example, the eyeball network could place the CDN server on the downstream side of its most expensive link, so as to avoid carrying the video over it multiple times. In a mobile context, CDN servers could be used to avoid pushing large volumes of data through expensive core-network nodes repeatedly.
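A back-of-envelope calculation makes the scale of the saving clear. Using the million-request example above, with wholly assumed figures for clip size and the number of participating eyeball networks:

```python
# Illustrative only: all figures below are assumptions, not measured data.
requests = 1_000_000        # the million requests in the BBC example above
clip_size_gb = 0.2          # assume a ~200MB programme
eyeball_networks = 50       # assume 50 participating ISPs

without_cdn_gb = requests * clip_size_gb        # every request crosses the backbone
with_cdn_gb = eyeball_networks * clip_size_gb   # one transfer per participating network

print(f"Backbone traffic without CDN: {without_cdn_gb:,.0f} GB")
print(f"Backbone traffic with CDN:    {with_cdn_gb:,.0f} GB")
print(f"Reduction: {1 - with_cdn_gb / without_cdn_gb:.2%}")
```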

When the video or other content is loaded into the CDN, other optimisations such as compression or transcoding into other formats can be applied if desired. There may also be various treatments relating to new forms of delivery such as HTTP streaming, where the video is broken up into “chunks” encoded at several different sizes/resolutions. Collectively, these upfront processes are called “ingestion”.
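For a flavour of the chunking step, the toy sketch below splits a file into fixed-size segments and writes a simple manifest. It is a loose stand-in only: real ingestion pipelines (e.g. for HLS or similar HTTP streaming formats) segment on time boundaries and transcode each segment at several bitrates.

```python
import os

def ingest(source_path, out_dir, chunk_bytes=2_000_000):
    """Toy 'ingestion': split a media file into chunks plus a manifest file."""
    os.makedirs(out_dir, exist_ok=True)
    manifest = []
    index = 0
    with open(source_path, "rb") as src:
        while True:
            chunk = src.read(chunk_bytes)
            if not chunk:
                break
            name = f"chunk_{index:05d}.bin"
            with open(os.path.join(out_dir, name), "wb") as out:
                out.write(chunk)  # a real pipeline would also transcode here
            manifest.append(name)
            index += 1
    # The manifest tells the player which chunks to request, in order
    with open(os.path.join(out_dir, "manifest.txt"), "w") as m:
        m.write("\n".join(manifest))
    return manifest
```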

Figure 1 – Content delivery with and without a CDN


Source: STL Partners / Telco 2.0

Value-added CDN services

It is important to recognise that the fixed-centric CDN business has increased massively in richness and competition over time. Although some of the players have very clever architectures and IPR in the form of their algorithms and software techniques, the flexibility of modern IP networks has tended to erode some of the early advantages and margins. Shipping large volumes of content is now becoming secondary to the provision of associated value-added functions and capabilities around that data. Additional services include:

  • Analytics and reporting
  • Advert insertion
  • Content ingestion and management
  • Application acceleration
  • Website security management
  • Software delivery
  • Consulting and professional services

It is no coincidence that the market leader, Akamai, now refers to itself as a “provider of cloud optimisation services” in its financial statements, rather than a CDN, with its business being driven by “trends in cloud computing, Internet security, mobile connectivity, and the proliferation of online video”. In particular, it has started refocusing away from dealing with “video tonnage”, and towards application acceleration – for example, speeding up the load times of e-commerce sites, which has a measurable impact on the abandonment of purchasing visits. Akamai’s total revenues in 2010 were around $1bn, less than half of which came from “media and entertainment” – the traditional “content industries”. Its H1 2011 revenues were relatively disappointing, with growth coming from non-traditional markets such as enterprise and high-tech (e.g. software update delivery) rather than media.

This is a critically important consideration for operators that are looking to CDNs to provide them with sizeable uplifts in revenue from upstream customers. Telcos – especially in mobile – will need to invest in various additional capabilities as well as the “headline” video traffic management aspects of the system. They will need to optimise for network latency as well as throughput, for example – and latency optimisation will probably not deliver the cost savings expected from managing “data tonnage” more effectively.

Although in theory telcos’ other assets should help – for example, mapping download analytics to more generalised customer data – this is likely to involve extra complexity on the IT side of the business. There will also be additional sales and marketing effort required, going significantly beyond most mobile operators’ normal footprint in B2B business areas. There is also a risk that an analysis of bottlenecks in application delivery/acceleration ends up simply pointing the finger of blame at the network’s inadequacies in terms of coverage. Improving delivery speed, cost or latency is only valuable to an upstream customer if there is a reasonable likelihood of the end-user actually having connectivity in the first place.

Figure 2: Value-added CDN capabilities


Source: Alcatel-Lucent

Application acceleration

An increasingly important aspect of CDNs is their move beyond content/media distribution into a much wider area of “acceleration” and “cloud enablement”. As well as delivering large pieces of data efficiently (e.g. video), there is arguably more tangible value in delivering small pieces of data fast.

There are various manifestations of this, but a couple of good examples illustrate the general principles:

  • Many web transactions are abandoned because websites (or apps) seem “slow”. Few people would trust an airline’s e-commerce site, or a bank’s online interface, if they’ve had to wait impatiently for images and page elements to load, perhaps repeatedly hitting “refresh” on their browsers. Abandoned transactions can be directly linked to slow or unreliable response times – typically a function of congestion either at the server or various mid-way points in the connection. CDN-style hosting can accelerate the service measurably, leading to increased customer satisfaction and lower levels of abandonment.
  • Enterprise adoption of cloud computing is becoming exceptionally important, with both cost savings and performance enhancements promised by vendors. Sometimes, such platforms will involve hybrid clouds – a mixture of private (internal) and public (Internet) resources and connectivity. Where corporates are reliant on public Internet connectivity, they may well want to ensure as fast and reliable a service as possible, especially in terms of round-trip latency. Many IT applications are designed to be run on ultra-fast company private networks, with a lot of “hand-shaking” between the user’s PC and the server. This process is very latency-dependent, and as companies mobilise their applications, the additional round-trip time in cellular networks may cause significant problems.

Hosting applications at CDN-type cloud acceleration providers achieves much the same effect as for video – they can bring the application “closer”, with fewer hops between the origin server and the consumer. The CDN is also well-placed to offer further value-adds such as firewalling and protection against denial-of-service attacks.
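A simple worked example shows why latency, rather than bandwidth, dominates for “chatty” applications of this kind. The round-trip counts and latencies below are illustrative assumptions only, not measurements:

```python
# Assumed values for illustration -- not measurements.
round_trips = 40  # hand-shakes per transaction for a 'chatty' enterprise app

rtt_seconds = {
    "corporate LAN":         0.002,
    "3G to distant origin":  0.150,
    "3G to nearby CDN edge": 0.060,  # assumed shorter, better-managed path
}

for path, rtt in rtt_seconds.items():
    # Total transaction time is dominated by round trips, not payload size
    print(f"{path:>24}: {round_trips * rtt:.1f}s per transaction")
```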

To read the 25-page note in full, including the following additional content…

  • How do CDNs fit with mobile networks?
  • Internet CDNs vs. operator CDNs
  • Why use an operator CDN?
  • Should delivery mean delivery?
  • Lessons from fixed operator CDNs
  • Mobile video: CDNs, offload & optimisation
  • CDNs, optimisation, proxies and DPI
  • The role of OVPs
  • Implementation and planning issues
  • Conclusion & recommendations

… and the following additional charts…

  • Figure 3 – Potential locations for CDN caches and nodes
  • Figure 4 – Distributed on-net CDNs can offer significant data transport savings
  • Figure 5 – The role of OVPs for different types of CDN player
  • Figure 6 – Summary of Risk / Benefits of Centralised vs. Distributed and ‘Off Net’ vs. ‘On-Net’ CDN Strategies

Members of the Telco 2.0 Executive Briefing Subscription Service and Future Networks Stream can download the full 25 page report in PDF format here. Non-Members, please see here for how to subscribe, here to buy a single user license for £595 (+VAT), or for multi-user licenses and any other enquiries please email contact@telco2.net or call +44 (0) 207 247 5003.

Organisations and products referenced: 3GPP, Acision, Akamai, Alcatel-Lucent, Allot, Amazon Cloudfront, Apple’s Time Capsule, BBC, BrightCove, BT, Bytemobile, Cisco, Ericsson, Flash Networks, Huawei, iCloud, ISPs, iTunes, Juniper, Limelight, Netflix, Nokia Siemens Networks, Ooyala, OpenWave, Ortiva, Skype, smartphone, Stoke, tablets, TiVo, Vantrix, Velocix, Wholesale Content Connect, Yospace, YouTube.

Technologies and industry terms referenced: acceleration, advertising, APIs, backhaul, caching, CDN, cloud, distributed caches, DNS, Evolved Packet Core, eyeball network, femtocell, fixed broadband, GGSNs, HLS, HTTP streaming, ingestion, IP network, IPR, laptops, LIPA, LTE, macro-CDN, micro-CDN, middle mile, mobile, Net Neutrality, offload, optimisation, OTT, OVP, peering proxy, QoE, QoS, RNCs, SIPTO, video, video traffic management, WiFi, wireless.

Public WiFi: Destroying LTE/Mobile Value?

Summary: By building or acquiring Public WiFi networks for tens of $Ms, highly innovative fixed players in the UK are stealthily removing $Bns of value from 3G and 4G mobile spectrum as smartphones and other data devices become increasingly carrier-agnostic. What are the lessons globally?

Below is an extract from this 15 page Telco 2.0 Analyst Note that can be downloaded in full in PDF format by members of the Telco 2.0 Executive Briefing service and Future Networks Stream using the links below.


The mobile broadband landscape is a key session theme at our upcoming ‘New Digital Economics’ Brainstorm (London, 11-13 May). Please use the links or email contact@telco2.net or call +44 (0) 207 247 5003 to find out more.


Two recent announcements have reignited interest in the UK Public WiFi space: Sky buying The Cloud for a reputed figure just short of £50m and Virgin Media announcing their intention to invest in building a metro WiFi network based around their significant outdoor real estate in the major conurbations.

These can be seen narrowly as competitive reactions to the success of the BT Openzone public WiFi product, which is a clear differentiator for the BT home broadband offer in the eyes of the consumer. The recent resurgence of BT market share in the home broadband market hints that public WiFi is an ingredient valued by consumers, especially when the price is bundled into the home access charges and therefore perceived as “free” by the consumer.

This trend is being accelerated by the new generation of smartphones, which sense whether private WiFi, public WiFi or mobile operator network access offers the best connection for the end-user, and which make the authentication process much easier. Furthermore, the mobile operators’ case is not helped by laptops and, more importantly, tablets and other connected devices such as e-readers offering WiFi as the default means of access, while mobile operator 3G requires extra investment in both equipment and access, with a clumsy means of authentication.

In a wider context, the phenomenon should be extremely concerning for the UK mobile operators. There has been a two-decade trend of voice traffic inside the home moving from fixed to mobile networks, with a clear revenue gain for the mobile operators. In the data world, it appears that the bulk of the heavy lifting is being served within the home by private WiFi, and outside of the home in nomadic spots served by public WiFi.

With most of the public WiFi hotspots in the UK being offered by fixed operators, there is a potential value shift from mobile to fixed networks, reversing that two-decade trend. As the hotspots grow and, critically, once they become interconnected, there is an increasing risk to mobile operators in terms of the value of investment in expensive ‘4G’ / LTE spectrum.

Beyond this, a major problem for mobile operators is that the current trend for multi-mode networking (i.e. a combination of WiFi and 3G access) limits their ability to provide value-added services and/or capture two-sided business model revenues, since so much activity is off-network and outside of the operator’s control plane.

The history of WiFi presents hard lessons for Mobile Operators, namely:

  • With innovation, it is not always the innovators who gain the most;
  • Similarly, with standards-setting, it is not always the people who set the standards who gain the most; and
  • WiFi is a classic case of Apple driving mass adoption and reaping the benefits – to this day, Apple still seems to prefer WiFi over 3G.

This analyst note explains the flurry of recent announcements in the context of:

  • The unique UK market structure;
  • Technology Adoption Cycles;
  • How intelligence at the edge of the network will drive both private and public WiFi use;
  • How public WiFi in the UK might evolve;
  • The longer term value threat to the mobile operators;
  • How O2 and Vodafone are taking different strategies to fight back; and
  • Lessons for other markets.

Unique Nature of the UK Market Structure

In May 2002, BT Cellnet, the mobile arm of BT, soon to be renamed O2, demerged from BT, leaving the UK market as one of the few markets in the world where the incumbent PTT did not have a mobile arm. Ever since, BT has tried to get into the mobility game with varying degrees of success:

  • In the summer of 2002, it launched its public WiFi service called OpenZone;
  • In September 2003, it announced plans for WiFi in all public phone boxes;
  • In May 2004, it launched an MVNO with Vodafone, with plans for the doomed BT Fusion UMA (Bluetooth, then WiFi) phone;
  • In May 2006, it announced Metro WiFi plans in partnership with local authorities in 12 cities; and
  • In Oct 2007, it partnered with FON to put public WiFi in every BT home router.

After trying out different angles in the mobility business for five years, BT finally discovered a workable business model with public WiFi around the FON partnership. BT now effectively bundles free public WiFi for its broadband users in return for each establishing a public hotspot within their own home.

Huge Growth in UK Public WiFi Usage

Approximately 2.6m customers – 47% of BT’s total of 5.5m broadband connections – have taken this option. This creates the image of huge public WiFi coverage and clearly differentiates BT from other home broadband providers. And the public WiFi network is being used much more: 881 million minutes in the current quarter, up from 335 million minutes a year earlier.

The other significant element of the BT public WiFi network is the public hotspots it has built with hotels, restaurants and airports. The hotspots number around 5k, of which 1.2k are wholesale arrangements with other public WiFi hotspot providers. While not significant in number, these provide the real incremental value to the BT home broadband user, who can connect for “free” in these high-traffic locations.

BT was not alone in trying to build a public WiFi business. The Cloud was launched in the UK in 2003 and tried to build a more traditional public WiFi business, built upon a combination of direct end-user revenues and wholesale and interconnect arrangements. That Sky is paying “south of £50m” for The Cloud, compared to the “€50m invested” over the years by the VC backers, implies the traditional public WiFi business model just doesn’t work. A different strategy will be taken by Sky going forward.

Sky is the largest pay-TV provider in the UK, currently serving approximately 10m homes by satellite DTH. In 2005, Sky decided upon a change of strategy: in addition to offering its customers video services, it needed to offer broadband and phone services. Sky has subsequently invested approximately £1bn in buying an altnet, Easynet, for £211m, in building an LLU network on top of BT infrastructure, and in acquiring 3m broadband customers. If the past is anything to go by, Sky will be planning to invest considerable further sums in The Cloud to make it, at a minimum, a service comparable to BT Openzone for its customers.

Virgin Media is the only cable operator of any significance in the UK, with a footprint of around 50% of the UK, mainly in the dense conurbations. Virgin Media is the child of many years of cable consolidation, and historically suffered from disparate metro cable networks of varying quality and an overleveraged balance sheet. The present management has done a good job of tidying up the mess and upgrading the networks to DOCSIS 3.0 technology. In the last year, Virgin Media has started to expand its footprint again and to invest in new products, with plans for building a metro WiFi network based around its large footprint of street cabinets.

Virgin Media has a large base of 4.3m home broadband users to protect and an even larger base of potential homes to sell services into. In addition, Virgin Media is the largest MVNO in the UK, with around 3m mobile subscribers. In recent years, Virgin Media has focused upon selling mobile services into its current cable customers. Although Virgin Media’s public WiFi strategy is not in the public domain, it is clear that it plans on investing in 2011.

TalkTalk is the only other significant UK home broadband player, with 4.2m home broadband users, and currently has no declared public WiFi strategy.

The mobile operators which have invested in broadband, namely O2 and Orange, have failed to gain traction in the marketplace.

The key trend here is that the fixed broadband network providers are moving outside of the home and providing more value to their customers on the move.

Technology Adoption Cycles

Figure 1: Geoffrey Moore’s Technology Adoption Cycle

Geoffrey Moore documented technology adoption cycles, originally in “Crossing the Chasm” and subsequently in “Living on the Fault Line”. These books described the pain of products crossing over from early adopters to the mass market. Since publication, they have established themselves as the bible for a generation of technology marketers. Moore distinguishes six zones, which are adapted here to describe the situation of public WiFi in the UK.

  1. The early market: a time of great excitement when visionaries are looking to get on board. In the public WiFi market this period was clearly established in the mid-2000s, when public WiFi networks were promoted as real alternatives to private MNOs.
  2. The chasm: a time of great despair as initial interest wanes and the mainstream is not comfortable with adoption. The UK WiFi market has been stagnating for the previous few years as investment in public WiFi has declined and customer adoption has not accelerated beyond the techno-savvy.
  3. The bowling alley: a period of niche adoption ahead of the general marketplace. The UK market is currently in this period. The two key skittles to fall were the BT FON deal changing the public WiFi business model, and the launch of the iPhone with auto-sensing and easy authentication of public WiFi.
  4. The tornado: a period of mass-market adoption. The UK market is about to enter this phase, as reinvigorated public WiFi investment provides “bundled” access to most home broadband users.
  5. Main street: Base infrastructure has been deployed and the goal is to flesh out the potential. We are probably a few years away from this; the phase will focus on ease-of-use, interconnect of public WiFi networks, consolidation of smaller players and alternate revenue sources such as advertising.
  6. Total Assimilation: Everyone is using the technology and the market is ripe for another wave of disruption. For UK WiFi, this is probably at least a decade away, but who knows what the future holds?

Flashback: How Private WiFi crossed the Chasm

It is worthwhile at this point to revisit the history of WiFi as it provides some perspective and pointers for the future, especially who the winners and losers will be in the public WiFi space.

Back in 1985, when deregulation was still in fashion, the US FCC opened up some spectrum to provide an innovation spurt to US industry under a license-exempt and “free-to-use” regime. This was remarkable in itself, given that previously spectrum – whether for radio and television broadcasting or public and private communications – had been exclusively licensed. Any applications in the so-called ISM (Industrial, Scientific and Medical) bands would have to deal with contention from other applications using the spectrum, and therefore the primary use was seen as indoor and corporate applications.

Retail department stores, one of the main client groups of NCR (National Cash Register), tended to reconfigure their floor space on a regular basis, and the cost of continually rewiring point-of-sale equipment was a significant expense. NCR saw an opportunity to use the ISM bands to solve this problem and started an R&D project in the Netherlands to create wireless local area networks which required no cabling.

At this time, the IEEE was leading the standardization effort for local area networks, and the 802.3 Ethernet specification, initially approved in 1987, still forms the basis of most wired LAN implementations today. NCR decided that the standards road was the route to take and played a leading role in the eventual creation of the 802.11 wireless LAN standards in 1997. “Wireless LAN” was considered too much of a mouthful and was reinvented as WiFi in 1999 with the help of a branding agency.

Ahead of the standards approval, NCR launched products under the WaveLAN brand in 1990, but at US$1,400 the plug-in cards were very expensive compared to wired ethernet cards, which were priced at around US$400. Product take-up was slow outside of early adopters.

In 1991 an early form of Telco-IT convergence emerged as AT&T bought NCR. An early competitor for the ISM bandwidth also emerged, with AT&T developing a new generation of digital cordless phones using the 2.4GHz band. To this day, in the majority of UK and worldwide households, DECT handsets in the home compete with WiFi for spectrum. Product development of the cards continued and was made easier and more consumer-friendly with the adoption of PCMCIA card slots in PCs.

By 1997, WiFi technology was firmly stuck in the chasm. The major card vendors (Proxim, Aironet, Xircom and AT&T) all had non-standardized products, and the vendors were at best marginally profitable, struggling to grow the market.

AT&T had broken up and the WiFi business became part of Lucent Technologies. The eyes and brains of the big communications companies (Alcatel, Ericsson, Lucent, Motorola, Nokia, Nortel and Siemens) were focused on network solutions, with 3G holding the promise for the future.

All that was about to change in early 1998 with a meeting between Steve Jobs of Apple and Richard McGinn, CEO of Lucent:

  • Steve Jobs declared “Wireless LANs are the greatest thing on earth, Apple wants a radio card for US$50, which Apple will retail at US$99”;
  • Rich McGinn declared 1999 to be the year of DSL and asked if Apple would be ready; and
  • Steve Jobs’ retort remains revealing to this day: “Probably not next year, maybe the year after; depends upon whether there is one standard worldwide”.

Figure 2: The Apple Airport

In early 1998 the cost of the cards was still above US$100, and a new generation of chips was needed to bring the cost down to the Apple price point. Further, Apple wanted to use the newly developed 11Mbit/s standard rather than the then-current 2Mbit/s. However, despite the challenges, the product was launched in July 1999 as the Apple Airport, with the PCMCIA card at US$99 and the access point at US$299. Apple was the first skittle to fall as private WiFi crossed the chasm. The Windows-based OEMs rushed to follow.

By 2001, Lucent had spun out its chip-making arm as Agere Systems, which had a 50% share of a US$1bn WiFi market – a market which would have been nothing but a pinprick on either the AT&T or Lucent profit and loss had Agere remained part of them.

The final piece in the WiFi jigsaw fell into place when Intel acquired Xircom in 2001, developed the Xircom technology, and used its WiFi patents as protection against competitors. In 2003, Intel launched its Centrino chipset with built-in WiFi functionality for laptops, supported by a US$300m worldwide marketing campaign. Effectively, for the consumer, WiFi had become part of the laptop bundle.

Agere Systems, with all its WiFi heritage, was finished in this market; it discontinued its WiFi activities in 2004.

There are three clear pointers for the future:

  • The players who take a leading role in the early market will not necessarily be the ones to succeed in Main Street;
  • Apple took a leading role in the adoption of WiFi and still seems massively committed to WiFi technology to this day;
  • Technology adoption cycles tend to be longer than expected.

Intelligence at the edge of the Network

As early as 2003, Broadcom and Philips were launching specialized WiFi chips aimed at mobile phones. Several cellular handsets were launched combining WiFi with 2G/3G connectivity, but the connectivity software was clunky for the user.

The launch of the iPhone in 2007 began a new era in which the device automatically attempts to connect to a WiFi network whenever its signal strength is better than that of the 2G/3G network. The era of the home or work WiFi network being the preferred route for data traffic was ushered in.

Apple is trying to make authentication as simple as possible: enter the key for any WiFi network once and it will be remembered for the handset’s lifetime, with the device connecting automatically whenever the user returns in range. However, in dense urban areas with multiple WiFi access points, it is quite annoying to be prompted for key after key. The strength of the federated authentication system in cellular networks therefore remains a critical advantage.

The iPhone also enforces that some applications can only be used when WiFi connections are available. The classic example is Apple’s own Facetime (video calling) application. Mobile operators seem happy in the short run that bandwidth-intensive applications are kept off their networks. But there is a longer-term value message: users are continually being reminded that WiFi networks are superior to mobile operators’ networks.

Other mobile operating systems, such as Android and Windows Phone 7, have copied the Apple approach, and today there is no going back: multi-modal mobile phones are here to stay, and the devices themselves decide which network to use unless the user overrides this choice.

One of the underlying rules of the internet is that intelligence moves to the edge of the network. In the eyes of Apple and Google, the edges are probably the handsets and their own server farms. It is not beyond the realms of possibility that future smartphones will be supplied with automatic authentication for both WiFi and cellular networks, with least-cost routing software determining the best price for the user. As intelligence moves to the edge, so does value.
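To make the speculation concrete, a least-cost routing heuristic might look something like the sketch below. This is purely hypothetical – the signal threshold, the cost model and the ranking rule are our own assumptions, not any handset vendor’s actual logic:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Network:
    name: str
    signal: float        # normalised signal quality, 0.0-1.0
    cost_per_mb: float   # 0.0 for 'free' bundled WiFi

def choose_network(candidates, min_signal=0.3) -> Optional[Network]:
    """Hypothetical least-cost routing: among usable networks,
    prefer the cheapest, breaking ties on signal quality."""
    usable = [n for n in candidates if n.signal >= min_signal]
    if not usable:
        return None
    return min(usable, key=lambda n: (n.cost_per_mb, -n.signal))

if __name__ == "__main__":
    options = [
        Network("home WiFi",   signal=0.9, cost_per_mb=0.00),
        Network("public WiFi", signal=0.4, cost_per_mb=0.00),
        Network("3G cellular", signal=0.8, cost_per_mb=0.05),
    ]
    best = choose_network(options)
    print("Selected:", best.name if best else "no usable network")
```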

Public WiFi Hotspots – the Business Model challenges

The JiWire directory estimates that there were c. 414k public WiFi locations across the globe at the end of 2010, of which 26.5k are located in the UK. Across the globe, there is a shift from a paid-for model to a free model, with the USA topping the free chart: 54% of its public WiFi locations are free.

For a café chain, offering free WiFi access is a good model to follow. The theory is that people will make extra visits to buy a coffee just to check their email or make some other light use of the internet. Starbucks started the trend by offering free WiFi access, and all the rest felt compelled to follow. Nowadays, all the major chains – whether Costa Coffee, Caffè Nero or even McDonald’s – offer free WiFi access provided by either BT Openzone or Sky’s The Cloud. A partnership with a public WiFi provider is perfect, as the café chain doesn’t have to provide complicated networking support or regulatory compliance. The costs for the public WiFi provider are relatively small, especially if they are amortized across a large base of broadband users.

For hotels and resorts, the business case is more difficult, as most hotels are quite large and multiple access points are required to provide decent coverage to all rooms. Furthermore, hotels have traditionally made additional revenues from most services, and therefore billing systems add complexity. For most hotels and resorts, a revenue-share agreement is negotiated with the WiFi service provider.

For public places, such as airports and train stations, the business case is also complicated: the owners know these sites have high footfall and therefore demand a premium for any activity, whether retail or service-based. It is a similar problem to the one mobile operators face when trying to provide coverage in major locations: access to prime locations is expensive. In the UK, the entry of Sky into public WiFi, and its long association with sports, suggests an intriguing possible partnership with the UK’s major venues.

These three types of locations currently account for 75% of current public WiFi usage according to JiWire.

To read the rest of the article, including:

  • How will UK Public WiFi Evolve?
  • Challenge to Mobile Operators
  • O2 Tries an Alternative
  • Vodafone Goes with Femtos
  • Lessons for Other Markets

Members of the Telco 2.0™ Executive Briefing Subscription Service and Future Networks Stream can access and download a PDF of the full report here. Non-Members, please see here for how to subscribe. Alternatively, please email contact@telco2.net or call +44 (0) 207 247 5003 for further details. ‘Growing the Mobile Internet’ and ‘Lessons from Apple: Fostering vibrant content ecosystems’ are also featured at our AMERICAS and EMEA Executive Brainstorms and Best Practice Live! virtual events.